| added (string, date) | created (timestamp[us]) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, 0 to 1.61M chars) |
|---|---|---|---|---|---|
2025-04-01T06:40:00.312871
| 2015-08-28T15:19:00
|
103737863
|
{
"authors": [
"ciatog",
"jhdxr"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9612",
"repo": "phpbrew/phpbrew",
"url": "https://github.com/phpbrew/phpbrew/issues/572"
}
|
gharchive/issue
|
opcode error with composer and phpunit
I had PHP 5.5.28 installed on my Ubuntu machine with everything working as expected.
I decided to start using phpbrew to make upgrading to PHP 5.6 easier. To start with I just installed the same version of PHP that I have on my machine: phpbrew install 5.5.28 +default+dbs+debug+apxs2 (I also installed 5.6.12, with xdebug, and got the same errors)
After doing so everything worked great with 2 exceptions: phpunit and composer. Both commands run to completion but right at the end I get fatal opcode errors which cause issues with the CI server.
As an example I get the following at the end of running phpunit (4.6.10):
PHP Fatal error: Invalid opcode 65/16/8. in phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php on line 0
Fatal error: Invalid opcode 65/16/8. in phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php on line 0
Let me know if you need any further details. Any help would be greatly appreciated. Thanks
Did you install eAccelerator or something similar?
Hey jhdxr.
eAccelerator isn't installed. I did install uopz, though; I removed it temporarily and the fatal errors went away, so it looks to be an issue with that and not phpbrew.
Thanks for the response and I'll just go ahead and close the issue.
I think you may be referring to krakjoe/uopz#19, which has an explanation of this fatal error.
|
2025-04-01T06:40:00.364029
| 2021-04-09T15:21:52
|
854622876
|
{
"authors": [
"phseiff"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9613",
"repo": "phseiff/github-flavored-markdown-to-html",
"url": "https://github.com/phseiff/github-flavored-markdown-to-html/issues/28"
}
|
gharchive/issue
|
Digest relative and absolute relative links before converting to pdf
It might be a good idea to remove all relative links (/foo.smth, foo.smth or file://path/foo.smth) from the HTML file before rendering it to PDF, since these links won't be valid in the resulting PDF anyway. This would, of course, leave the text the links are displayed as intact, and only remove their linking ability.
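For illustration, here is a minimal sketch of the requested behaviour (my own sketch, not the library's actual implementation), using a naive regex pass; a robust version should use a real HTML parser:

```python
# Unwrap <a> tags whose href is relative or file://, keeping the link text.
# Assumption: double-quoted hrefs only; absolute http(s):// and
# protocol-relative // links are left untouched.
import re

RELATIVE_HREF = re.compile(
    r'<a\b[^>]*href="(?!(?:https?:)?//)[^"]*"[^>]*>(.*?)</a>',
    re.IGNORECASE | re.DOTALL,
)

def strip_relative_links(html: str) -> str:
    """Remove the linking ability of relative/file:// anchors, keep their text."""
    return RELATIVE_HREF.sub(r'\1', html)
```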
Upgrade to v1.9.1 to have this.
|
2025-04-01T06:40:00.396650
| 2017-09-20T08:11:26
|
259076959
|
{
"authors": [
"piascikj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9615",
"repo": "piascikj/vscode",
"url": "https://github.com/piascikj/vscode/issues/179"
}
|
gharchive/issue
|
THIS FILE WILL BE OVERWRITTEN DURING BUILD TIME, DO NOT EDIT
Issue created from code comment with imdone.io
NOTE: THIS FILE WILL BE OVERWRITTEN DURING BUILD TIME, DO NOT EDIT id:179
src/vs/workbench/workbench.main.nls.js:6
@imdone - Efficiently manage your project's technical debt. imdone.io
Issue closed by removing a comment.
NOTE: THIS FILE WILL BE OVERWRITTEN DURING BUILD TIME, DO NOT EDIT id:179 gh:179
src/vs/workbench/workbench.main.nls.js:6
@imdone - Efficiently manage your project's technical debt. imdone.io
|
2025-04-01T06:40:00.404264
| 2020-10-23T07:56:41
|
728000586
|
{
"authors": [
"badetitou",
"marbetschar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9616",
"repo": "pichillilorenzo/jackson-js",
"url": "https://github.com/pichillilorenzo/jackson-js/issues/18"
}
|
gharchive/issue
|
Slow for parsing large arrays containing objects
Environment
Browser. Timing tested in latest Firefox.
Description
During experimentation with Jackson-JS we noticed that its JSON deserialization is rather slow for large arrays. The JSON response we parse is an array containing roughly 1200 of the below objects - and parsing takes about 15 seconds.
I'm aware this use case is rather extreme, but unfortunately we are dealing with a legacy API within an enterprise here - so things won't improve anytime soon.
import {
  JsonIgnoreProperties,
  JsonProperty,
  JsonAlias
} from 'jackson-js';

@JsonIgnoreProperties({ value: [
  'abc',
  'def',
  // ... 67 more ignore properties
]})
export class MyModel {
  @JsonProperty()
  @JsonAlias({values: ['xyz']})
  myVarName: number;

  // ... 18 more properties with @JsonProperty and @JsonAlias
}
What you'd like to happen:
I'd love to hear your thoughts on strategies on how to mitigate the parsing impact and/or speed it up.
Alternatives you've considered:
Obviously, we can cache the result - but 15 secs still seems way too slow.
Although building paging into the backend API would solve this, that is very unlikely to happen within a reasonable time span.
Hi @marbetschar
Did you find a way to speed up the deserialization?
@badetitou back then we used @marcj/marshal instead of jackson-js. Don't know if it is still maintained though (I no longer work at the project in question).
|
2025-04-01T06:40:00.407974
| 2020-07-07T01:19:41
|
651917191
|
{
"authors": [
"cdunford",
"kdubb",
"niveo"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9617",
"repo": "pichillilorenzo/jackson-js",
"url": "https://github.com/pichillilorenzo/jackson-js/issues/7"
}
|
gharchive/issue
|
Fails when client package is minified
Environment
Any environment that minifies code. For example, Angular production builds which use WebPack with optimization enabled.
Description
Minification should not have any effect on usage including serialization and/or deserialization.
After minification serialization/deserialization fails with the error Invalid Keyword. This can be traced to an issue with getArgumentNames and meriyah's parseScript failing to parse the minified function signature.
Also, it appears that even if the parsing succeeded the argument names would then be incorrect but I am not sure if this affects usage.
Steps to reproduce
Create a class that uses JsonCreator or JsonProperty decorators on the constructor.
Minify the class
Attempt to serialize or deserialize using the minified class.
@kdubb - I see you have a PR open for this issue; any idea what's going on with it?
the same here
@niveo @cdunford We have a fork @outfoxx/jackson-js currently published on NPM that includes all of our open PRs.
It's looking like this great project may be abandoned.
|
2025-04-01T06:40:00.409608
| 2018-09-29T18:26:01
|
365155509
|
{
"authors": [
"hiliang-cmu",
"maverickwoo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9618",
"repo": "picoCTF/picoCTF",
"url": "https://github.com/picoCTF/picoCTF/issues/208"
}
|
gharchive/issue
|
Consider normalizing submitted flags before checking
The current code uses a substring check (Python: submitted_flag in desired_flag). This leads to surprises when there are, say, leading/trailing spaces in the submitted flag.
P.S. For the competition, most problems have flags in a special format, but there are some exceptions. For problems in the latter group (no format), this can create confusion if a user formats the flag, because then the flag contains extra characters.
After a brief discussion with the problem devs, what would work better is a case-insensitive flag on the problem, carried over from the problem.json config.
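A minimal sketch of the combined suggestion (normalize, then compare, with a per-problem case-insensitive option); this is an illustration that assumes exact comparison replaces the substring check, not picoCTF's actual code:

```python
def check_flag(submitted: str, desired: str, case_insensitive: bool = False) -> bool:
    # Normalize: strip leading/trailing whitespace before comparing.
    submitted = submitted.strip()
    desired = desired.strip()
    # Optional per-problem behaviour, e.g. carried over from problem.json.
    if case_insensitive:
        return submitted.lower() == desired.lower()
    return submitted == desired
```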
|
2025-04-01T06:40:00.418205
| 2023-01-14T04:03:28
|
1533126755
|
{
"authors": [
"PhrozenByte",
"mayamcdougall",
"notakoder"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9619",
"repo": "picocms/Pico",
"url": "https://github.com/picocms/Pico/issues/657"
}
|
gharchive/issue
|
Sorting pages using a custom header isn't honoured.
I am sorting pages (posts) by a meta header.
<ul>
{% if (current_page.id == "categories") %}
{% for page in pages|sort_by("page.meta.Catposition") %}
{% if (page.meta.Category == "Sections") and not page.hidden %}
<li><a href="{{ page.url }}">{{ page.title }}</a></li>
{% endif %}
{% endfor %}
{% endif %}
</ul>
---
Title: Title of the page
Template: template_name
Position: 1
Category: Sections
Catposition: 1
---
Earlier, the sorting was done by Position and it worked. I later changed it to Catposition, added this header to all the required pages, and numbered them correctly. But for some reason, sorting by Catposition just does not work. Posts are listed alphabetically. However, the moment I revert the sorting code to page.meta.Position, lists are sorted as per Position. It is as if something is syntactically wrong with the code containing Catposition, even though it is an exact copy with a change in the header. Any idea what could be wrong?
I'm not seeing anything obvious here... if it works with Position, and you've changed all occurrences to Catposition instead, it should function exactly the same. The fact that it's falling back to alphabetical sort would probably imply that it's not finding Catposition in the page.
I know it's not helpful, but maybe double check everything for typos? Copy and paste Catposition between your code and your metadata just to sanity-check that it matches, etc.
@PhrozenByte Do you have any thoughts on this?
Check for typos and upper/lowercase (especially when the meta header was registered using a plugin's onMetaHeaders or the theme's pico-theme.yml)
I figured out what's causing it. For next and previous buttons I had added a sorting configuration in the config.yml as per this discussion.
sort_directories:
- docs
pages_order_by: meta
pages_order_by_meta: position
pages_order: asc
This is overriding my {% for page in pages|sort_by("page.meta.Number")%} in the .twig template. If I change pages_order_by_meta: position above to pages_order_by_meta: Catposition, the order of post listings is correct. But I can't change that, since elsewhere in the website I need the pages to be sorted by position. Don't you think that the {% for page in pages|sort_by("page.meta.Catposition")%} in the template should have overridden the sorting order in the config file, since templates are the bare-metal layer for a listings page?
To be honest, what it sounds like is that your original code just didn't work. It only appeared to work because you were already sorting pages by position globally.
And, I'm realizing now that I'm looking at the docs, that your syntax for sort_by is wrong.
It should be:
{% for page in pages|sort_by(['meta', 'Number'])%}
Not:
{% for page in pages|sort_by("page.meta.Number")%}
So, why don't you give that a try and see if it behaves right?
It does. Crazy! I don't know how I got the idea of using page.meta.name. Perhaps it was an edit to the listing code that sorted pages as per default headers (title, time, etc).
Thanks for the help.
No worries. It happens.
It is an odd syntax. On the technical side, this is because you're giving a Pico function some strings as arguments rather than reading them inside of Twig.
Don't feel too bad though; @PhrozenByte and I didn't catch that one either.
|
2025-04-01T06:40:00.420584
| 2024-04-08T12:33:31
|
2231083310
|
{
"authors": [
"BishoyHanyRaafat",
"hal-8999-alpha"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9620",
"repo": "pieces-app/cli-agent",
"url": "https://github.com/pieces-app/cli-agent/pull/79"
}
|
gharchive/pull-request
|
Ask, conversation, conversations commands
https://drive.google.com/open?id=1PVJrobh4PflT3Vbg7HoEiDDH_jyxMNcN
https://drive.google.com/open?id=1EIywJT5rSXRE40TanCHWMAdGQpY6KBLu
@hal-8999-alpha anything wrong with this PR?
Only minor thing I had was to change it to GPT 3.5 instead of ChatGPT3
|
2025-04-01T06:40:00.439396
| 2024-11-01T20:09:45
|
2629793554
|
{
"authors": [
"WladyX",
"pier-oliviert"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9621",
"repo": "pier-oliviert/phonebook",
"url": "https://github.com/pier-oliviert/phonebook/issues/19"
}
|
gharchive/issue
|
gcore provider not working
apiVersion: se.quencer.io/v1alpha1
kind: DNSIntegration
metadata:
labels:
argocd.argoproj.io/instance: k8s-blue-cc-phonebook
name: gcore
spec:
provider:
name: gcore
secretRef:
keys:
- key: GCORE_PERMANENT_API_TOKEN
name: GCORE_API_TOKEN
name: phonebook-secrets
zones:
- myzone.com
Name: gcore
Namespace:
Labels: argocd.argoproj.io/instance=k8s-blue-cc-phonebook
Annotations: <none>
API Version: se.quencer.io/v1alpha1
Kind: DNSIntegration
Metadata:
Creation Timestamp: 2024-11-01T20:06:44Z
Finalizers:
phonebook.se.quencer.io/deployment
Generation: 1
Resource Version: 90979843
UID: b0c39020-951e-448c-8bbd-df03550567e0
Spec:
Provider:
Name: gcore
Secret Ref:
Keys:
Key: GCORE_PERMANENT_API_TOKEN
Name: GCORE_API_TOKEN
Name: phonebook-secrets
Zones:
wxs.ro
Status:
Conditions:
Last Transition Time: 2024-11-01T20:06:44Z
Reason: Deployment.apps "provider-gcore" is invalid: spec.template.spec.containers[0].image: Required value
Status: Error
Type: Deployment
Events: <none>
Sorry about that, the release process is very manual and error-prone. I have rebuilt and reshipped 0.3.7. Usually the helm chart version needs to be incremented, but I didn't do it.
Can you try again? You might need to remove phonebook from your helm repo or at the very least force an update.
No worries, but I tried everything that I could think of. I am deploying with Argo; I've tried a hard refresh, removing the app, etc. Nothing worked, same error, so I suspect this is still present.
Can you please reopen it and maybe bump the chart version so we can exclude the caching of the old version?
Thank you!
@WladyX Before I make changes to the helm chart can you test this DNSIntegration for me first?
apiVersion: se.quencer.io/v1alpha1
kind: DNSIntegration
metadata:
labels:
argocd.argoproj.io/instance: k8s-blue-cc-phonebook
name: gcore
spec:
provider:
name: gcore
image: "ghcr.io/pier-oliviert/providers-gcore:v0.3.7"
secretRef:
keys:
- key: GCORE_PERMANENT_API_TOKEN
name: GCORE_API_TOKEN
name: phonebook-secrets
zones:
- myzone.com
You can specify images directly, and I want to make sure the image loads OK for you. One thing I realized is that all images are built for the x64 platform, so I want to make sure the image is actually running in your cluster.
Still does not work, and I don't think it's related to x64.
β― kd dnsintegrations.se.quencer.io gcore
Name: gcore
Namespace:
Labels: argocd.argoproj.io/instance=k8s-blue-cc-phonebook
Annotations: <none>
API Version: se.quencer.io/v1alpha1
Kind: DNSIntegration
Metadata:
Creation Timestamp: 2024-11-02T12:09:00Z
Finalizers:
phonebook.se.quencer.io/deployment
Generation: 2
Resource Version: 91942169
UID: a91c64b1-7d34-4a1b-83bc-fb9a00380131
Spec:
Provider:
Image: ghcr.io/pier-oliviert/providers-gcore:v0.3.7
Name: gcore
Secret Ref:
Keys:
Key: GCORE_PERMANENT_API_TOKEN
Name: GCORE_API_TOKEN
Name: phonebook-secrets
Zones:
myzone.com
Status:
Conditions:
Last Transition Time: 2024-11-02T12:09:22Z
Reason: Deployment.apps "provider-gcore" is invalid: spec.template.spec.containers[0].image: Required value
Status: Error
Type: Deployment
Events: <none>
β― kg dnsintegrations.se.quencer.io gcore -oyaml|kneat
apiVersion: se.quencer.io/v1alpha1
kind: DNSIntegration
metadata:
labels:
argocd.argoproj.io/instance: k8s-blue-cc-phonebook
name: gcore
spec:
provider:
image: ghcr.io/pier-oliviert/providers-gcore:v0.3.7
name: gcore
secretRef:
keys:
- key: GCORE_PERMANENT_API_TOKEN
name: GCORE_API_TOKEN
name: phonebook-secrets
zones:
- myzone.com
Scratch that: I removed the gcore DNSIntegration and redeployed it, and the pod started. Will do some more tests and come back, thank you!
@WladyX I was going crazy over here, looking at the code I couldn't understand what was going on and was about to create a discord server to discuss with you.
Glad you got it working!
I just tried deleting the DNSIntegration and testing without the image spec; it still does not work, so please bump the chart so I can test it without the image in the DNSIntegration.
The record was created OK on gcore when I tried with the image specified, but I would have expected an error from desec, since it should try to create the record on both; but that's another story, and I can open another issue if you like.
Thank you!
|
2025-04-01T06:40:00.512714
| 2017-06-19T20:12:47
|
237010295
|
{
"authors": [
"jaffee",
"tgruben",
"travisturner"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9626",
"repo": "pilosa/pilosa",
"url": "https://github.com/pilosa/pilosa/pull/665"
}
|
gharchive/pull-request
|
convert from using uint32 to uint16 in array and run containers
Overview
See title and issue - wait on this until RLE is merged.
Fixes #664
Pull request checklist
[ ] I have read the contributing guide.
[ ] I have agreed to the Contributor License Agreement.
[ ] I have updated the documentation.
[ ] I have resolved any merge conflicts.
[ ] I have included tests that cover my changes.
[ ] All new and existing tests pass.
Code review checklist
This is the checklist that the reviewer will follow while reviewing your pull request. You do not need to do anything with this checklist, but be aware of what the reviewer will be looking for.
[ ] Ensure that any changes to external docs have been included in this pull request.
[ ] If the changes require that minor/major versions need to be updated, tag the PR appropriately.
[ ] Ensure the new code is properly commented and follows Idiomatic Go.
[ ] Check that tests have been written and that they cover the new functionality.
[ ] Run tests and ensure they pass.
[ ] Build and run the code, performing any applicable integration testing.
@travisturner On that line, we're trying to decide if it would be more space efficient to convert an array container to a run container. Each run takes up twice the space of an element of an array, so if the number of runs is less than half the array cardinality, then using RLE is more efficient.
TL;DR: I think 2 is correct there.
@jaffee but doesn't the change to 16-bit array values mean that each run takes up 4x the space of an array element?
we changed to interval16 for run containers as well
oh, got it. sorry.
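For reference, a quick back-of-envelope check of the trade-off discussed above (assuming 2-byte uint16 array elements and 4-byte interval16 runs, per the thread):

```python
def run_container_is_smaller(num_runs: int, cardinality: int) -> bool:
    array_bytes = 2 * cardinality   # one uint16 per element
    run_bytes = 4 * num_runs        # each run is an interval16: two uint16s
    return run_bytes < array_bytes  # equivalent to num_runs < cardinality / 2
```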
closed due to #758
|
2025-04-01T06:40:00.556458
| 2024-03-27T09:32:49
|
2210297384
|
{
"authors": [
"CarterAppleton",
"eruizgar91",
"gavinnewcomer",
"jxom",
"mwawrusch",
"plusminushalf"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9627",
"repo": "pimlicolabs/permissionless.js",
"url": "https://github.com/pimlicolabs/permissionless.js/issues/153"
}
|
gharchive/issue
|
getTypesForEIP712Domain error
Wagmi + Viem + AA upgrade === disaster.
Anyone seen this error:
../../node_modules/permissionless/actions/smartAccount/signTypedData.ts:137:49
const types = {
> 137 | EIP712Domain: getTypesForEIP712Domain({ domain }),
| ^
138 | ...(types_ as TTypedData)
139 | }
'((TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `bytes[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`]...' is not assignable to type 'TypedDataDomain | undefined'.
Type '(TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `bytes[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`]:...' is not assignable to type 'TypedDataDomain | undefined'.
Type 'unknown' is not assignable to type 'TypedDataDomain | undefined'.
"viem": "2.9.3",
"wagmi": "2.5.12",
"@alchemy/aa-accounts": "3.6.1",
"@alchemy/aa-alchemy": "3.7.0",
"@alchemy/aa-core": "3.6.1",
"@alchemy/aa-ethers": "3.6.1",
"@alchemy/aa-signers": "3.6.1",
Same error here. Can anyone help?
Hard to tell what's going on without a minimal reproduction. It's likely that skipLibCheck is falsy in your tsconfig.json.
Hey, what's your tsconfig's target? Or, as @jxom pointed out, skipLibCheck may be falsy in your tsconfig.json.
But can you provide a small repro repo? It will help us solve it!
We're also getting this bug when moving from typescript 5.2.2 -> 5.4.5
../../node_modules/permissionless/actions/smartAccount/signTypedData.ts:139:49
Type error: Type '((TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `uint64[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`...' is not assignable to type 'TypedDataDomain | undefined'.
Type '(TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `uint64[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`]...' is not assignable to type 'TypedDataDomain | undefined'.
Type 'unknown' is not assignable to type 'TypedDataDomain | undefined'.
137 |
138 | const types = {
> 139 | EIP712Domain: getTypesForEIP712Domain({ domain }),
| ^
140 | ...(types_ as TTypedData)
141 | }
142 |
trying to get a minimal repro/narrow down the typescript version
Sorry, I missed the ongoing thread. I can confirm that skipLibCheck is true on my end as well. TypeScript upgrades are a huge PITA and cost factor these days :(. It would probably be best to always test on the latest supported TypeScript version to catch this early.
Obviously this isn't a permanent fix for the issue, but you can use patch-package to patch in an ignore flag for this type issue.
Doing this with patch-package allows the patch to be applied post-install, so this will work in production builds and for other engineers working on the project.
example patch:
 const types = {
+  // @ts-ignore
   EIP712Domain: getTypesForEIP712Domain({ domain }),
   ...(types_ as TTypedData)
 }
|
2025-04-01T06:40:00.561622
| 2024-04-13T09:35:54
|
2241463129
|
{
"authors": [
"MrBisquit",
"sjefferson99"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9628",
"repo": "pimoroni/enviro",
"url": "https://github.com/pimoroni/enviro/issues/216"
}
|
gharchive/issue
|
Weather board fails to upload every other reading
My weather board is powered via a USB micro cable. When it uploads a reading and takes another one, it seems to start flashing red and fails, but then a few tries later it succeeds (probably a connection issue). I was wondering if there's any way of disabling its sleeping, because I think that's what the issue is.
@MrBisquit First step would be to apply the wifi improvements code in pr #199 as that can sort all sorts of issues including ones similar to what you have described. The files for this are available in a recent build: https://github.com/pimoroni/enviro/actions/runs/8784437254
Alright, I'll try that next time I bring it in, thanks :)
(I'll close the issue once I've tested it and if it works, which may be a while depending on when I next bring it in)
@MrBisquit This code is now in main, so upgrade to v0.2.0 and see how you go.
I feel like there could be an option, for when it's going to be plugged into a constant power source (instead of batteries), to never disconnect from Wi-Fi, which could possibly prevent this issue.
|
2025-04-01T06:40:00.567542
| 2023-09-26T16:05:51
|
1913847557
|
{
"authors": [
"Gadgetoid",
"grunkyb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9629",
"repo": "pimoroni/pimoroni-pico",
"url": "https://github.com/pimoroni/pimoroni-pico/issues/854"
}
|
gharchive/issue
|
request: add deflate/zlib/gzip compression support
The introduction of the deflate module is slated for Micropython 1.21, but that release is months overdue. Any chance that module can be added to a pirate-flavour release? Specifically, I want to use zlib.compress, but 1.20.x only includes the decompress function and DecompIO class.
I've not had much luck or experience generating anything but super trivial .mpy files.
It looks like deflate was merged in https://github.com/micropython/micropython/commit/3533924c36ae85ce6e8bf8598dd71cf16bbdb10b and we're already targeting a pre-release commit of MicroPython, so I just need to find the time to walk it forward to the latest upstream commit and fix all the breaking changes. (There have been some renames to MicroPython RP2 boards which will break our build.)
Okay apparently that was not as difficult as I'd thought. The "deflate" module should be in these builds: https://github.com/pimoroni/pimoroni-pico/actions/runs/6418396733?pr=858
Thanks @Gadgetoid
At some point I want to learn what got changed in that commit to allow it to be built.
Wow... https://github.com/micropython/micropython/releases/tag/v1.21.0 just released :laughing:
I think we were accidentally prescient here.
I think our build worked pretty much fine without changes by just bumping the commit hash to the latest MicroPython, but no doubt the various changes will have far-reaching repercussions beyond "does it build."
For anyone else finding this thread who wants to enable compression with deflate.DeflateIO, I recommend as a starting point this tutorial on Medium and this post from @Gadgetoid. In my case, I added the line #define MICROPY_PY_DEFLATE_COMPRESS (1) to mpconfigboard.h for the Pico W. The file is in micropython/board/RPI_PICO_W/. The actions take care of the rest of the firmware build.
|
2025-04-01T06:40:00.568796
| 2021-08-12T16:49:02
|
969178682
|
{
"authors": [
"MahsaShirazi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9630",
"repo": "pimsmath/m2pi.ca",
"url": "https://github.com/pimsmath/m2pi.ca/pull/145"
}
|
gharchive/pull-request
|
Mahsa shirazi profile branch
Hi!
Here are my _index and avatar.jpg files to add to the website,
Thanks!
Please add my files to the website
|
2025-04-01T06:40:00.579653
| 2020-03-31T19:58:09
|
591384575
|
{
"authors": [
"eshapard",
"pimterry",
"rmNULL"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9631",
"repo": "pimterry/notes",
"url": "https://github.com/pimterry/notes/pull/76"
}
|
gharchive/pull-request
|
Make ls ignore vim backup files
Fixes
Checklist
[x] I have made a change to this repository, be it functionality, testing, documentation, spelling, or grammar.
[x] I updated my branch with the master branch.
[ ] I have added the necessary testing to prove my fix is effective/my feature works (or I did not modify functionality).
[ ] I have added necessary documentation about the functionality in an appropriate .md file.
[ ] I have appropriately commented any code I have modified
Short description of what this PR does:
In case the user has vim set up to create backup files in the current directory, make the ls command ignore backup files (ending in '~' by default).
I find this helpful with my setup, but it obviously won't be of benefit to everyone. Probably won't cause any problems, though. I don't imagine anyone uses '~' at the end of a filename for any other purpose (and for some reason, also stores them in the notes directory).
Yeah, take it or leave it. :-)
Nice! I'm definitely on board with the goal, good suggestion.
Unfortunately it will break things though, because it looks like the default version of ls on a Mac doesn't support --ignore (or -I): https://stackoverflow.com/questions/11213849/how-to-ls-ignore-on-osx.
I think this would work everywhere if implemented with grep -v instead. Would that work for you? AFAIK that should do the same thing. Any downsides you're aware of?
Ah, yes, I was worried that -I might not be supported everywhere. I had originally planned on using grep -v before I discovered that ls had its own --ignore option.
As far as I know, grep -v is standard and should be supported on all *nix platforms, so that's probably the way to do it. :-)
@pimterry Since excluding files keeps popping up, what's your opinion on having a .notesignore file, like .gitignore, and maybe a flag called --ignore-file to supply an ignore file with another name?
Additionally, it would be good if notes respected the .gitignore file and ignored all files defined in there.
I've now updated this to use grep -v instead, added a quick test, and merged it. Thanks @eshapard! Nice improvement :+1:.
@rmNULL I'm open to some kind of ignore config, if you or others would find that useful. I'd be surprised if we needed a special command line argument for it, I expect it's very nearly always the same set of patterns, and for one-off ignoring you can just pipe to grep -v.
I think we should still ignore some standard things (like this) by default either way. We already have the config file, so I expect we'd either want to extend that somehow, or link out from that, maybe with an IGNORE_FILE param that points to a gitignore-formatted file, so users can point to .gitignore or .notesignore or wherever. I'm probably not going to jump on that any time soon, but feel free if it'd be useful to you.
|
2025-04-01T06:40:00.649209
| 2019-05-06T09:46:28
|
440620102
|
{
"authors": [
"kennytm",
"morgo",
"spongedu"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9632",
"repo": "pingcap/parser",
"url": "https://github.com/pingcap/parser/pull/316"
}
|
gharchive/pull-request
|
parser: support mysql-compatible explain format
What problem does this PR solve?
Support MySQL compatible explain format as is described in MySQL Manual. such as:
explain format=traditional select * from t;
explain format=json select * from t;
What is changed and how it works?
The TRADITIONAL type is translated as row to be compatible with TiDB's current implementation.
The JSON type is implemented just on the parser side and will be blocked during preprocessing in TiDB.
Check List
Tests
Unit test
Integration test
Manual test (add detailed scripts or steps below)
Code changes
N/A
Side effects
N/A
Related changes
N/A
@kennytm @tiancaiamao @morgo PTAL
@kennytm 'row' is peculiar to TiDB , and is semantic equivalent to TRADITIONAL in MySQL, So I think it's ok to conflate the two.
@spongedu yes but the list of columns of TiDB's "row" and MySQL's TRADITIONAL format are different.
@kennytm I think the format section just specifies how the results of EXPLAIN are displayed. For example, TRADITIONAL displays the results as rows, and JSON displays the results as a JSON string; so do 'row' and 'dot'.
As for the column count issue, I think it's a content issue, and is orthogonal to the format.
LGTM
|
2025-04-01T06:40:00.655930
| 2019-07-22T03:20:08
|
470873043
|
{
"authors": [
"leoppro"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9633",
"repo": "pingcap/parser",
"url": "https://github.com/pingcap/parser/pull/393"
}
|
gharchive/pull-request
|
parser: fix compatibility for OnDelete,OnUpdate clauses
What problem does this PR solve?
Fix compatibility for the OnDelete and OnUpdate clauses.
This PR supports the following syntax:
reference_definition:
REFERENCES tbl_name (key_part,...)
[MATCH FULL | MATCH PARTIAL | MATCH SIMPLE]
[ON DELETE reference_option]
[ON UPDATE reference_option]
reference_option:
RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT
What is changed and how it works?
Check List
Tests
[x] Unit test
[x] Integration test
[ ] Manual test (add detailed scripts or steps below)
[ ] No code
Code changes
Has exported function/method change
Has exported variable/fields change
Has interface methods change
Side effects
Possible performance regression
Increased code complexity
Breaking backward compatibility
Related changes
Need to cherry-pick to the release branch
Need to update the documentation
Need to be included in the release note
I realized a similar PR had already been merged when I tried to fix a merge conflict:
https://github.com/pingcap/parser/commit/191583a459a3574d57fa6211778da25e6ef61846
What do you think about these two kinds of implementations?
@kennytm @zz-jason PTAL
|
2025-04-01T06:40:00.797933
| 2022-08-18T12:05:59
|
1343003277
|
{
"authors": [
"bestwoody",
"gengliqi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9634",
"repo": "pingcap/tiflash",
"url": "https://github.com/pingcap/tiflash/issues/5653"
}
|
gharchive/issue
|
The configuration/implementation of async gRPC may be not optimal
Enhancement
Configuration Issues
https://github.com/pingcap/tiflash/blob/d7e4e2995cebf0cf25e13a2a91adebada9390b66/dbms/src/Interpreters/Settings.h#L363-L364
The async gRPC server uses these configurations for EstablishMPPConnection.
async_pollers_per_cq is the number of threads per completion queue. (default 200)
async_cqs is the number of completion queues. (default 1)
The number of gRPC threads is async_pollers_per_cq * async_cqs = 200.
In my opinion, these default configurations have two issues.
The default thread number is too large. Given that async gRPC uses a non-blocking socket, in theory there is no blocking point when doing RPC work. Therefore, the thread number should not exceed the number of CPU cores.
The default completion queue number is too small, which may introduce some unnecessary synchronization overheads.
For example, CqEventQueue uses an MPSC queue, and it also uses a spinlock to support multiple consumers. In addition, pollset_work is called when calling CompletionQueue::Next, and a mutex is acquired during this call. Although sometimes this mutex is released, this overhead cannot be ignored, especially when the thread number is 200.
Actually, the official guide of gRPC performance says
If having to use the async completion-queue API, the best scalability trade-off is having numcpu's threads. The ideal number of completion queues in relation to the number of threads can change over time (as gRPC C++ evolves), but as of gRPC 1.41 (Sept 2021), using 2 threads per completion queue seems to give the best performance.
At present, TiFlash uses gRPC v1.26. The perf_notes from gRPC v1.26 say:
Right now, the best performance trade-off is having numcpu's threads and one completion queue per thread.
I guess using multiple threads per completion queue is good for load balancing, but the number should not be too large. We can test carefully to find the best-performing values.
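As a rough illustration of the sizing guidance quoted above (an assumption-laden sketch, not TiFlash configuration code):

```python
import os

# Assumption from the gRPC 1.41 guidance: ~2 poller threads per completion
# queue, with total gRPC threads roughly equal to the CPU count.
cpus = os.cpu_count() or 8
pollers_per_cq = 2
num_cqs = max(1, cpus // pollers_per_cq)
total_grpc_threads = num_cqs * pollers_per_cq  # ~= cpus, not 200
```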
Implementation Issues
The first issue is about notify_cq.
https://github.com/pingcap/tiflash/blob/d7e4e2995cebf0cf25e13a2a91adebada9390b66/dbms/src/Server/FlashGrpcServerHolder.cpp#L141-L157
From the code above, we can see there is another gRPC thread pool for notify_cq. In fact, the default number of gRPC threads for EstablishMPPConnection is 400 (200 for cq, 200 for notify_cq), which is a scary number.
What is the difference between call_cq and notification_cq in gRPC?
Notification_cq gets the tag back indicating a call has started. All subsequent operations (reads, writes, etc) on that call report back to call_cq. For most async servers my recommendation is to use the same cq.
This allows fine-grained control over which threads handle which kinds of events (based on which queues they are polling). Like you may have a master thread polling the notification_cq and worker threads all polling their own call_cqs, or something like that.
This code tests when the notify_cq and call_cq is called.
I think we do not need to control which threads handle notification events, so notify_cq should be the same as call_cq; then the default 200 threads can be removed entirely.
By the way, grpc-rs also uses one completion queue both for call_cq and notify_cq. code here
The second issue is about combining different gRPC thread pools. Async gRPC client in TiFlash also has a gRPC thread pool.
https://github.com/pingcap/tiflash/blob/d7e4e2995cebf0cf25e13a2a91adebada9390b66/dbms/src/Server/Server.cpp#L1242-L1248 (it is good to see that its pool size is std::thread::hardware_concurrency).
Combining different gRPC thread pools can reduce the thread number and context switch. This also makes it easier to add new async RPC in the future.
It can be done with some class abstraction and refactor.
/cc @windtalker @bestwoody @yibin87
There is blocking behavior in EstablishMppTask, so the number of pollers should be big enough to avoid that. We need to modify findTaskWithTimeout before making the number of pollers small.
Got it. The gRPC thread pool for notify_cq cannot be removed now, and its size should be big enough due to the blocking function findTaskWithTimeout. It seems it could easily become a bottleneck; I look forward to changing this function to non-blocking behavior.
But at least the number of gRPC threads for call_cq can be decreased to the number of CPU cores, and the completion queue number can be increased to a more suitable value.
400 threads is not a big number for TiFlash. The performance is not as the official gRPC docs claim, since I tested those params; if you can provide some benchmark results to prove it, that would be better.
|
2025-04-01T06:40:00.803960
| 2022-11-11T05:05:29
|
1444937262
|
{
"authors": [
"gengliqi",
"windtalker",
"xzhangxian1008",
"ywqzzy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9635",
"repo": "pingcap/tiflash",
"url": "https://github.com/pingcap/tiflash/pull/6297"
}
|
gharchive/pull-request
|
Use alarm to retry client's connection
What problem does this PR solve?
Issue Number: ref #6225
Problem Summary:
What is changed and how it works?
Check List
Tests
[ ] Unit test
[ ] Integration test
[ ] Manual test (add detailed scripts or steps below)
[ ] No code
Side effects
[ ] Performance regression: Consumes more CPU
[ ] Performance regression: Consumes more Memory
[ ] Breaking backward compatibility
Documentation
[ ] Affects user behaviors
[ ] Contains syntax changes
[ ] Contains variable changes
[ ] Contains experimental features
[ ] Changes MySQL compatibility
Release note
None
/cc @gengliqi @windtalker
/merge
/merge
/run-unit-test
/run-all-tests
/merge
|
2025-04-01T06:40:00.812601
| 2022-06-28T07:59:02
|
1286947410
|
{
"authors": [
"codecov-commenter",
"sdojjy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9636",
"repo": "pingcap/tiflow",
"url": "https://github.com/pingcap/tiflow/pull/6103"
}
|
gharchive/pull-request
|
migrator(ticdc): Delete old changefeed
What problem does this PR solve?
Issue Number: close #xxx
What is changed and how it works?
Check List
Tests
Unit test
Integration test
Manual test (add detailed scripts or steps below)
No code
Questions
Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?
Release note
Please refer to [Release Notes Language Style Guide](https://pingcap.github.io/tidb-dev-guide/contribute-to-tidb/release-notes-style-guide.html) to write a quality release note.
If you don't think this PR needs a release note then fill it with `None`.
/run-all-tests
/run-all-tests
/run-all-tests
/run-all-tests
/run-integration-tests
Codecov Report
Merging #6103 (952e369) into cli-use-open-api (d1de53d) will decrease coverage by 0.6547%.
The diff coverage is 77.7777%.
| Flag | Coverage Δ | |
|---|---|---|
| cdc | 64.6197% <77.7777%> (+0.1419%) | :arrow_up: |
| dm | 51.9506% <ø> (-0.0254%) | :arrow_down: |
| engine | ? | |

Flags with carried forward coverage won't be shown.
@@ Coverage Diff @@
## cli-use-open-api #6103 +/- ##
========================================================
- Coverage 58.4562% 57.8014% -0.6548%
========================================================
Files 708 550 -158
Lines 83471 73204 -10267
========================================================
- Hits 48794 42313 -6481
+ Misses 30244 26996 -3248
+ Partials 4433 3895 -538
|
2025-04-01T06:40:00.816985
| 2023-03-09T04:24:46
|
1616390173
|
{
"authors": [
"sdojjy"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9637",
"repo": "pingcap/tiflow",
"url": "https://github.com/pingcap/tiflow/pull/8481"
}
|
gharchive/pull-request
|
apiv2(ticdc): add ut for api default value
What problem does this PR solve?
Issue Number: close #8480
What is changed and how it works?
add ut for api default value
Check List
Tests
Unit test
Integration test
Manual test (add detailed scripts or steps below)
No code
Questions
Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?
Release note
`None`.
/run-all-tests
/run-integration-tests
/run-verify-tests
/run-integration-tests
|
2025-04-01T06:40:00.820630
| 2016-12-26T06:46:06
|
197544942
|
{
"authors": [
"hhkbp2",
"siddontang"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9638",
"repo": "pingcap/tikv",
"url": "https://github.com/pingcap/tikv/pull/1444"
}
|
gharchive/pull-request
|
raft: port pre-vote feature
Hi,
This PR ports the pre-vote feature from etcd/raft.
It merges #1330, #1425, and https://github.com/coreos/etcd/pull/7060.
PTAL @siddontang @BusyJay @ngaut
PTAL @siddontang @BusyJay @ngaut
PTAL @BusyJay
The modification for raft in https://github.com/coreos/etcd/pull/6975 has been merged.
PTAL @siddontang @BusyJay
PTAL @siddontang @BusyJay
LGTM
PTAL @BusyJay @zhangjinpeng1987
PTAL @BusyJay
|
2025-04-01T06:40:00.886998
| 2016-01-31T19:08:47
|
130169524
|
{
"authors": [
"linearregression",
"scohen"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9639",
"repo": "pinterest/elixometer",
"url": "https://github.com/pinterest/elixometer/issues/31"
}
|
gharchive/issue
|
What is the intended use of elixometer.TestReporter?
There is a TestReporter in the file https://github.com/pinterest/elixometer/blob/master/lib/elixometer.ex
I notice that it is being used in tests:
https://github.com/pinterest/elixometer/blob/master/config/test.exs
Is it just for the test environment? For some reason, it is part of the main module.
If production won't ever use it, I would suggest it should not be part of the main module.
The idea is that you use that inside tests if you want to get the output of logging. Elixometer was written pretty early in our Elixir exploration, and this should be made top-level. Better yet, we could alter the mixfile to look in a different directory, remove it entirely from lib, and put it into test/support or something.
|
2025-04-01T06:40:00.888265
| 2023-06-05T18:48:19
|
1742366613
|
{
"authors": [
"dangerismycat",
"rlingineni"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9640",
"repo": "pinterest/gestalt",
"url": "https://github.com/pinterest/gestalt/pull/2988"
}
|
gharchive/pull-request
|
TileData: fix typo + copyediting
Noticed an extraneous "a" in TileData's header description. Took a quick pass of copyediting while looking at that file.
thanks!!
|
2025-04-01T06:40:00.952104
| 2021-07-19T01:24:22
|
947169740
|
{
"authors": [
"jianzhiyao",
"yourchanges"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9641",
"repo": "pion/rtp",
"url": "https://github.com/pion/rtp/issues/146"
}
|
gharchive/issue
|
Could add support to github.com/pion/rtp or maybe add a library that makes transcoding(aac->pcm) easier
As description from https://github.com/pion/webrtc/issues/1888, could you add support to github.com/pion/rtp or maybe add a library that makes transcoding(aac->pcm) easier?
Did you find a solution for transcoding audio from AAC to PCM?
|
2025-04-01T06:40:00.955581
| 2022-11-20T21:45:26
|
1457057883
|
{
"authors": [
"edaniels",
"enobufs",
"jerry-tao",
"stv0g"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9642",
"repo": "pion/sctp",
"url": "https://github.com/pion/sctp/issues/250"
}
|
gharchive/issue
|
Potential regression by a commit c0159aa in causing TestAssociation_Shutdown to fail
@enobufs Just wondering: could it be that this PR (commit c0159aa2d49c240362038edf88baa8a9e6cfcede) introduced a regression which makes the unit-test `TestAssociation_Shutdown` fail?
See https://github.com/pion/sctp/actions/runs/3495811764/jobs/5852996166
Originally posted by @stv0g in https://github.com/pion/sctp/issues/239#issuecomment-1321076117
Hi @jerry-tao, @stv0g, are you able to repro this in your environment? The error does not happen to me... :(
I did not reproduce it either, will dig deeper this week.
I just tested it again on the current master branch and the test succeeded.
But I see a possibly related PR #236
I tried to produce it again without success.
So maybe I've dreamt it...
I just saw it again but only on 1.18
|
2025-04-01T06:40:00.977590
| 2020-10-27T06:06:34
|
730141016
|
{
"authors": [
"cakecatz",
"nghialv"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9643",
"repo": "pipe-cd/pipe",
"url": "https://github.com/pipe-cd/pipe/pull/1024"
}
|
gharchive/pull-request
|
UI - fix a bug that shows error at unintended times on the application detail page
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #
Does this PR introduce a user-facing change?:
NONE
Thank you.
/approve
|
2025-04-01T06:40:01.059357
| 2020-09-07T18:17:41
|
695324407
|
{
"authors": [
"Ryuno-Ki",
"pitchmuc",
"xSAVIKx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9644",
"repo": "pitchmuc/adobe_analytics_api_2.0",
"url": "https://github.com/pitchmuc/adobe_analytics_api_2.0/pull/20"
}
|
gharchive/pull-request
|
Allow supplying absolute path to the private key
This PR fixes #19.
I've dropped the startsWith("/") checks and improved the docs a bit.
@pitchmuc PTAL.
Hello and thanks for the Pull request.
I am testing it now, and the error I was trying to avoid is popping up again:
In my config file, I am using this notation :
{
  "org_id": "ED48F97C5922D40C00@AdobeOrg",
  "api_key": "d2ef4b231cea4cf29d91b10052",
  "tech_id": "<EMAIL_ADDRESS>",
  "secret": "**********************",
  "pathToKey": "/config/private.key"
}
This kind of notation could be used and I wanted my function to work in that case.
In order to avoid that issue, I was doing my startswith() check, but it seems that it prevents you from using a full path.
Normally, when entering the full path, it should start with something like "C://" (so my startswith should avoid messing with it).
Is it different in your environment?
I need to think of a way to make both work.
That's some new path notation for me.
In *NIX systems, an absolute path starts with a / and a relative one starts with a ./.
In Windows, an absolute path starts with C:\\ as you've mentioned, while a relative remains the same .\.
I'm not sure why you're avoiding using C:\\-like paths, but for *NIX systems, the current startswith() workaround breaks the usage of absolute paths completely.
From my perspective, the library should support commonly-used standards and can support non-standard notations if they do not break compatibility.
I'd be glad to help you out here if there's something else I can do.
OK. I don't have a ton of experience with *NIX systems; this is interesting. Thanks for your explanation.
I am trying to be flexible so you can use both notations, independently of your current system.
I tried to upload a new version of the import logic to your branch.
The main idea is to check whether the file can be accessed before applying the startswith(). Checking whether the file exists may solve your issue without breaking my code.
Main logic here:
# assuming: from pathlib import Path as _Path
test_path = _Path(path).exists()
if not test_path:
    if path.startswith('/'):
        path = "." + path
with open(_Path(path), 'r') as file:
    ...
Let me know if this works for you.
@pitchmuc PTAL again.
I've added a reusable part that checks the presence of the file using the supplied path and uses the file if it is available, otherwise tries to convert the absolute path to a relative one and tries it out.
I've also added the fail-fast approach with exceptions if config/private key files are not available.
@pitchmuc I've back merged your latest changes. It'd be great if you can check it out.
Please let me know if there's anything else you'd like me to do/change before merging the PR.
Hello @xSAVIKx,
I checked the commit; thanks for the function provided. It makes things cleaner.
I will merge the proposed changes.
We are not in sync with master anymore, as I provided a new version with VirtualReport capability for @loldenburg. I will take care of merging the changes.
Your proposed suggestion will be part of the next release on PyPI.
Thanks a lot!
@pitchmuc Perhaps it's worth using https://docs.python.org/3/library/os.html#os.sep in the future.
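For what it's worth, a tiny sketch of that suggestion (illustrative only, not the library's code):

```python
import os
from pathlib import Path

# os.sep is '/' on *NIX and '\\' on Windows; pathlib abstracts this away.
config_dir = Path("config")
key_path = config_dir / "private.key"  # portable, no manual os.sep handling
print(os.sep, key_path)
```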
|
2025-04-01T06:40:01.072311
| 2016-06-15T20:31:33
|
160516717
|
{
"authors": [
"mrumpf",
"sclevine"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9645",
"repo": "pivotal-cf/pcfdev",
"url": "https://github.com/pivotal-cf/pcfdev/issues/77"
}
|
gharchive/issue
|
pcfdev setup fails under Windows 7
When I run the binary distribution from Pivotal network under Windows 7 I get the following error:
C:\Users\mrumpf\Downloads\pcfdev-v0.16.0-windows>pcfdev-v0.16.0-windows.exe
panic: runtime error: index out of range
goroutine 1 [running]:
panic(0x93a720, 0xc082002040)
/usr/local/go/src/runtime/panic.go:481 +0x3f4
github.com/pivotal-cf/pcfdev-cli/vendor/github.com/cloudfoundry/cli/plugin.Start(0x33a5098, 0xc08206b200)
    /ext-go/1/src/github.com/pivotal-cf/pcfdev-cli/vendor/github.com/cloudfoundry/cli/plugin/plugin_shim.go:16 +0x494
main.main()
/ext-go/1/src/github.com/pivotal-cf/pcfdev-cli/main.go:76 +0xd51
PCFDev 0.15 setup was working fine.
Hi @mrumpf,
PCF Dev is now a cf CLI plugin. You need to install it with cf install-plugin pcfdev-v0.16.0-windows.exe.
See the getting started tutorial: https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev
Or the docs: https://docs.pivotal.io/pcf-dev/
(We will eventually run cf install-plugin for you when you run the plugin directly, to reduce confusion.)
Hm, I was confused by the extension "exe"...
Here is what I get now:
{ pcfdev-v0.16.0-windows } Β» cf install-plugin ./pcfdev-v0.16.0-windows.exe
Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.
Do you want to install the plugin ./pcfdev-v0.16.0-windows.exe? (y or n)> y
Installing plugin pcfdev-v0.16.0-windows.exe...
OK
Plugin pcfdev v0.0.0 successfully installed.
{ pcfdev-v0.16.0-windows } Β» cf dev start
Please retrieve your Pivotal Network API from:
https://network.pivotal.io/users/dashboard/edit-profile
FAILED
Error: invalid Pivotal Network API token
I think that is a corporate proxy issue...
Did you copy and paste your API token from PivNet? The Invalid Pivotal Network API token error implies a successful connection to PivNet with an invalid token. We've seen occasional issues with the Windows DOS command line and cf CLI password-type prompts, so use PowerShell if you aren't already.
ok. It seems to work under PowerShell: I was using Cygwin bash before.
PS C:\Users\mrumpf\Downloads\pcfdev-v0.16.0-windows> cf install-plugin pcfdev-v0.16.0-windows.exe
Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.
Do you want to install the plugin pcfdev-v0.16.0-windows.exe? (y or n)> y
Installing plugin pcfdev-v0.16.0-windows.exe...
OK
Plugin pcfdev v0.0.0 successfully installed.
PS C:\Users\mrumpf\Downloads\pcfdev-v0.16.0-windows> cf dev start
Please retrieve your Pivotal Network API from:
https://network.pivotal.io/users/dashboard/edit-profile
API token>
BETA SOFTWARE END USER LICENSE AGREEMENT
...
Last Updated: April 14th, 2014
Accept (yes/no):> yes
Downloading VM...
Progress: |=> | 1%
You could reduce confusion if you mentioned that PowerShell should be used under Windows, and where to find the API token.
|
2025-04-01T06:40:01.080099
| 2021-01-08T20:13:40
|
782370008
|
{
"authors": [
"matthewmcnew",
"mgibson1121",
"tylerphelan"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9646",
"repo": "pivotal/kpack",
"url": "https://github.com/pivotal/kpack/issues/594"
}
|
gharchive/issue
|
The lifecycle image should be configurable
RFC: https://github.com/pivotal/kpack/pull/560 (soon to be merged)
Problem:
Users have no way of upgrading the lifecycle image without upgrading the kpack release
Criteria:
An update to ConfigMap lifecycle-image.data.image results in builders being recreated
The ConfigMap lifecycle-image.data.platformApiVersions should be validated that kpack supports all of the platform apis
Actions:
The ConfigMap lifecycle-image should no longer be mounted in the kpack-controller
kpack should watch for changes to ConfigMap lifecycle-image in the kpack namespace
Updates to the ConfigMap lifecycle-image.data.image value should result in builders being recreated
lifecycle-image.data.platformApiVersions is a new optional field containing space-separated values in the form of X.X
if lifecycle-image.data.platformApiVersions contains a version that is not supported by kpack, the ConfigMap should fail validation (a sketch of this check follows below)
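For illustration, a sketch of the proposed validation in Python (kpack itself is written in Go, and the supported-version set here is hypothetical):

```python
SUPPORTED_PLATFORM_APIS = {"0.3", "0.4", "0.5"}  # hypothetical example set

def validate_platform_api_versions(value: str) -> None:
    """Validate space-separated 'X.X' values against the supported set."""
    versions = value.split()
    unsupported = [v for v in versions if v not in SUPPORTED_PLATFORM_APIS]
    if unsupported:
        raise ValueError(f"unsupported platform API versions: {unsupported}")
```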
Perhaps lifecycle-image.data.platformApiVersions should be an annotation, to prevent implying that it is a functional configuration attribute?
Moving to accepted; we will accept the user-facing issue https://github.com/vmware-tanzu/kpack-cli/issues/142
|
2025-04-01T06:40:01.081641
| 2018-05-03T23:01:08
|
320108238
|
{
"authors": [
"4s3ti",
"mep85"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9647",
"repo": "pivpn/pivpn",
"url": "https://github.com/pivpn/pivpn/issues/529"
}
|
gharchive/issue
|
Add IPv6 Routes By Default
Many devices have IPv6 enabled by default, and some (e.g. iOS) do not allow it to be disabled in the .ovpn config, or pushed by the server.
It would be nice if PiVPN would also create IPv6 routes to push to the clients and redirect IPv6 traffic through the tunnel. It could then either be discarded at the server or forwarded along with the IPv4 traffic.
At a minimum this should serve as a workaround in the cases where it is impossible to block IPv6 traffic entirely, or the interface can not be disabled on the client.
closing as #259 is already up to add support for IPv6
|
2025-04-01T06:40:01.135730
| 2023-03-06T23:22:16
|
1612390025
|
{
"authors": [
"JamesMBartlett"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9648",
"repo": "pixie-io/pixie",
"url": "https://github.com/pixie-io/pixie/pull/975"
}
|
gharchive/pull-request
|
[perf_tool/cluster] Implement GKE cluster operations.
Summary: Implement operations for calling out to GKE to create, healthcheck, and delete a cluster. We use the backoff package to retry all the GKE operations, because they can often fail transiently.
Type of change: /kind test-infra
Test Plan: A follow-up PR adds the subcommand test_gke_cluster to perf_tool which allows creating and deleting a cluster with the GKE cluster provider. I used that command to verify these operations work.
@pixie-io-buildbot test this please
|
2025-04-01T06:40:01.147342
| 2015-09-29T16:24:56
|
108906283
|
{
"authors": [
"FlorianLudwig",
"GoodBoyDigital",
"englercj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9649",
"repo": "pixijs/pixi.js",
"url": "https://github.com/pixijs/pixi.js/issues/2131"
}
|
gharchive/issue
|
PIXI.Graphics is a bad parent (it inherits from Container but does not care about its children)
PIXI.Graphics.{clone,generateTexture} and possibly others don't take into account that Graphics can have children. I think they should - or at least this should be documented.
I wonder how people feel about us making it inherit from DisplayObject instead of Container...
Seems to be a viable option that would clear things up - but it would break compatibility. For example, we depend on it in our SVG renderer. So I guess it's no option for a minor release.
fixed in v4
|
2025-04-01T06:40:01.156186
| 2018-03-12T03:36:50
|
304231555
|
{
"authors": [
"Nocthan",
"macguffin",
"themoonrat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9650",
"repo": "pixijs/pixi.js",
"url": "https://github.com/pixijs/pixi.js/issues/4754"
}
|
gharchive/issue
|
Pixi FontFace in Chrome
PixiJS 4.6.2
Browser: nwjs-sdk-v0.28.3-win-x64 (chrome)
Pixi uses the default Chrome font when you point a custom font-face using the FontFace obj:
Sample code:
var fontFace = new FontFace('95845e56-1b06-457d-ac03-c765ad13ec9e.ttf','url(chrome-extension://fadmacekllolfgoklhinaljokeapepka/data/95845e56-1b06-457d-ac03-c765ad13ec9e.ttf)');
document.fonts.add(fontFace);
let pixapp = new PIXI.Application({width:810, height:41, transparent:true});
fnt_pix.appendChild(pixapp.view);
var style = new PIXI.TextStyle({ fontFamily:'95845e56-1b06-457d-ac03-c765ad13ec9e.ttf', fontSize:36 });
var richText = new PIXI.Text('ABCDEFGHIJKLMNOPQRSTUWVXYZ',style);
pixapp.stage.addChild(richText);
Pixi successfully downloaded the font files:
But I suspect Chrome is handing the default font to Pixi, part of their "Webfonts Intervention" feature.
And because this only happens at the first moment, when Pixi is downloading the fonts:
(Each line is an PIXI.Application)
On a second moment if i don't reload the page and create new PIXI.Application objects everything runs smooth:
Any hints?
I produced another code sample to better illustrate the problem; you can run it in regular Chrome.
I observed that the problem also occurs when font-faces are loaded directly from CSS.
Here: pxtx.zip
A quick fix might be to add https://github.com/typekit/webfontloader to your project.
You could then do something like this to only display the content once the fonts were loaded
let countLoadedFonts = 0;
WebFont.load({
custom: {
//font name set in css
families: ['css fontName1', 'css fontName2', 'css fontName3']
},
testStrings: {
'css fontName1': '\uE003\uE005',
'css fontName2': '\uE003\uE005',
'css fontName3': '\uE003\uE005'
},
loading: function () {
console.log('css font loading');
},
active: function () {
countLoadedFonts++;
console.log('css fontName active', countLoadedFonts);
//Display pixi content
}
});
macguffin, your suggestion was effective!
But i still suggest the devs to give a chance to this issue.
I banged my head for many hours until your comment. =D
@Nocthan it can be a browser bug that exists using regular html and css with custom fonts. Not only does the font have to be loaded, but it has to be used once before it 'kicks in' as it were. Therefore you have libraries like the one given, or the one I use, https://github.com/bramstein/fontfaceobserver - which uses the font in the background, and measures the size of the area it has been used, and when the size changes it knows the custom font is loaded, active, and can be used.
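For reference, a minimal sketch of the fontfaceobserver approach mentioned above; the family name 'MyFont' and the existing app variable are assumptions:
// FontFaceObserver resolves once the CSS-declared font is actually usable
const font = new FontFaceObserver('MyFont');
font.load().then(function () {
  // safe to measure text with the custom font now
  const style = new PIXI.TextStyle({ fontFamily: 'MyFont', fontSize: 36 });
  app.stage.addChild(new PIXI.Text('ABCDEFGHIJKLMNOPQRSTUVWXYZ', style));
});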
Closing as I think we're covered here :)
|
2025-04-01T06:40:01.168281
| 2024-05-30T21:03:19
|
2326539861
|
{
"authors": [
"HACKER21078"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9651",
"repo": "pizzaboxer/bloxstrap",
"url": "https://github.com/pizzaboxer/bloxstrap/issues/1886"
}
|
gharchive/issue
|
Bloxstrap Crash [BUG]
Acknowledgement of preliminary instructions
[X] I have read the preliminary instructions, and I am certain that my problem has not already been addressed.
What problem did you encounter?
When I start Bloxstrap, a new window opens saying that Roblox has crashed. Yes, it only happens when I use Bloxstrap!
I'm going to upload my configurations in the comments
ClientAppSettings.json
Settings.json
State.json
Solved after reinstalling Bloxstrap with winget
|
2025-04-01T06:40:01.175803
| 2023-09-07T23:25:20
|
1886704869
|
{
"authors": [
"MeGoddess",
"pizzaboxer"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9652",
"repo": "pizzaboxer/bloxstrap",
"url": "https://github.com/pizzaboxer/bloxstrap/issues/649"
}
|
gharchive/issue
|
Game Freeze.
After a while of playing, Roblox itself completely freezes and does not respond in any way. I have to end the task from the task manager. This happens almost always.
https://github.com/pizzaboxer/bloxstrap/assets/144396571/3eb4c80e-60b1-4400-9683-a0a1d2668bdd
Are you able to provide a more specific timeframe on how long it takes for this to happen? Also, can I see your FastFlag list? Open the menu, go to the FastFlags tab, open the editor, and show a picture of the whole list.
In about two to seven minutes.
I said to open the editor.
this?
Okay, that seems good. Have you also checked to see if this happens only with Bloxstrap? https://github.com/pizzaboxer/bloxstrap/wiki/Switching-between-Roblox-and-Bloxstrap
Yes, this only happens with Bloxstrap. With the regular roblox client everything is normal.
You could try setting your rendering mode to Automatic, and your framerate limit to 0. See if doing those two changes anything.
Okay, I'll give it a try. I will test it in several games.
It became even worse. in addition to freezing Roblox itself, it became problematic to go to the Task Manager to remove a task from Roblox and finish its work.
How about if you disable activity tracking?
I turned it off from the beginning.
Well, I can't really think of what else would be causing it since at this point Bloxstrap should be functioning exactly like how the official launcher does. You could also try forcing a Roblox reinstallation? Go to the Behaviour tab, scroll to the bottom, option should be there. Enable it, save, and try and join.
Honestly I reinstalled both Bloxstrap and Roblox using the Revo uninstaller utility. And the result is still the same.
Well, I'm not really sure what to say because at this point Bloxstrap should be working exactly like the official launcher, but I'll try and do some more troubleshooting here.
Can you go to where Bloxstrap is installed (open the menu, installation tab, open installation folder), go inside the "Versions" folder, go inside "version-xxxxxxxxxxxxxxxx", and launch RobloxPlayerBeta.exe directly? See if the problem happens with just that.
same thing
https://github.com/pizzaboxer/bloxstrap/assets/144396571/40bea83d-78d7-4257-b44e-779078d6dcdd
This time, delete the "ClientSettings" folder, then launch RobloxPlayerBeta.exe again.
Issue has been awaiting clarification for over a month, closing as stale. If you have anything further to add, just respond and I'll reopen.
|
2025-04-01T06:40:01.224415
| 2023-01-05T08:35:00
|
1520323038
|
{
"authors": [
"GrazingScientist",
"bozana"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9654",
"repo": "pkp/crossref-ojs",
"url": "https://github.com/pkp/crossref-ojs/issues/22"
}
|
gharchive/issue
|
Failing registration with missing metadata in multilanguage journal
Dear @bozana and @asmecher ,
we just had the situation that the plugin attempted to register an article in a multi-language journal (German & English). The editor entered the metadata for the article's author only in the German form, while leaving the English metadata blank. In the DOI registration process however, the plugin stopped working (and did not process the subsequent articles) when hitting the described article.
Can you reproduce the bug?
Thanks in advance.
Best Regards,
Adrian
Hi @GrazingScientist, thanks for reporting. We would need a bit more information in order to figure out what is happening:
Which OJS version are you using?
What is the primary locale of the article? -- Is it German? -- All metadata must be entered in the article's primary locale...
When you try to export only that article do you get any errors and what errors exactly?
If export works fine, when you try to register only that article what exactly happens?
It would be good to first use support forum, so that we are able to first check and test and to then create an issue once we figured out that it is a bug or something needs to be done...
Best wishes,
Bozana
|
2025-04-01T06:40:01.334987
| 2022-06-09T03:19:40
|
1265538997
|
{
"authors": [
"bjbuddyboy",
"git-eri"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9655",
"repo": "plamere/spotipy",
"url": "https://github.com/plamere/spotipy/issues/827"
}
|
gharchive/issue
|
Trying to get multiple account authorizations instead of just one
I'm trying to create a program with spotipy where anyone can log in. I can make it work where only I can get an authorization, but I'm struggling with getting it to ask for a new login for authorization each time the program runs.
Here is the code I have so far for the authorization. I'm not sure if it actually works. I just don't know how to log in to different Spotify accounts with the correct authorization; I only know how to hard-code my own account. Not sure where to go.
Take a look at the examples, especially app.py
That example helped me a lot in getting started making a multi-user app with Flask.
Note that FlaskSessionCacheHandler is currently not in the pip package; for that, refer to issue #838
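As a rough illustration of the multi-user pattern, a minimal sketch assuming spotipy's SpotifyOAuth; the client credentials, scope, and per-user cache path are placeholders:
import spotipy
from spotipy.oauth2 import SpotifyOAuth

def client_for(user_id):
    # A separate cache file per user keeps each account's tokens apart.
    auth = SpotifyOAuth(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        redirect_uri="http://localhost:8888/callback",
        scope="user-library-read",
        cache_path=".cache-" + str(user_id),  # hypothetical per-user path
        show_dialog=True,  # always show the login prompt so another account can sign in
    )
    return spotipy.Spotify(auth_manager=auth)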
|
2025-04-01T06:40:01.351132
| 2017-09-10T18:09:59
|
256528281
|
{
"authors": [
"pbvarga1"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9656",
"repo": "planetarypy/planetary_test_data",
"url": "https://github.com/planetarypy/planetary_test_data/issues/14"
}
|
gharchive/issue
|
Refactor Tests
We should be using temp directories rather than having multiple directories for each test
Fixed in #16
|
2025-04-01T06:40:01.356090
| 2023-10-01T19:57:19
|
1920943231
|
{
"authors": [
"codecov-commenter",
"plannigan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9657",
"repo": "plannigan/columbo",
"url": "https://github.com/plannigan/columbo/pull/537"
}
|
gharchive/pull-request
|
Use YAML merge list definition
Multiple merge keys were previously supported, but not anymore.
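For context, a small sketch of the two syntaxes; the anchor names are made up for illustration:
defaults: &defaults
  type: confirm
overrides: &overrides
  default: true

# Repeating the merge key per anchor is no longer supported:
#   <<: *defaults
#   <<: *overrides
# A single merge key taking a list is the portable form:
question:
  <<: [*defaults, *overrides]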
Codecov Report
Merging #537 (e56f176) into main (e3f1e2b) will decrease coverage by 0.01%.
The diff coverage is n/a.
Additional details and impacted files
@@ Coverage Diff @@
## main #537 +/- ##
==========================================
- Coverage 97.24% 97.23% -0.01%
==========================================
Files 6 6
Lines 363 362 -1
Branches 77 45 -32
==========================================
- Hits 353 352 -1
Misses 8 8
Partials 2 2
see 1 file with indirect coverage changes
|
2025-04-01T06:40:01.360865
| 2019-05-23T15:10:37
|
447720681
|
{
"authors": [
"BrapiCoordinatorSelby",
"peterrosario"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9658",
"repo": "plantbreeding/API",
"url": "https://github.com/plantbreeding/API/issues/377"
}
|
gharchive/issue
|
studiesStudyDbIdObservationvariablesGetAsync()
When I call the Java client's StudiesApi.studiesStudyDbIdObservationvariablesGetAsync() with studyDbId == 1001 I get:
java.lang.IllegalArgumentException: missing discriminator field: <>
at io.swagger.client.JSON.getDiscriminatorValue(JSON.java:83)
at io.swagger.client.JSON.access$000(JSON.java:41)
at io.swagger.client.JSON$2.getClassForElement(JSON.java:61)
at io.gsonfire.gson.TypeSelectorTypeAdapterFactory$TypeSelectorTypeAdapter.read(TypeSelectorTypeAdapterFactory.java:65)
at io.gsonfire.gson.NullableTypeAdapter.read(NullableTypeAdapter.java:36)
at io.gsonfire.gson.HooksTypeAdapter.deserialize(HooksTypeAdapter.java:86)
at io.gsonfire.gson.HooksTypeAdapter.read(HooksTypeAdapter.java:54)
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:41)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:82)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:61)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:131)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:222)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:131)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:222)
at com.google.gson.Gson.fromJson(Gson.java:927)
at com.google.gson.Gson.fromJson(Gson.java:892)
at com.google.gson.Gson.fromJson(Gson.java:841)
at io.swagger.client.JSON.deserialize(JSON.java:157)
at io.swagger.client.ApiClient.deserialize(ApiClient.java:710)
at io.swagger.client.ApiClient.handleResponse(ApiClient.java:913)
at io.swagger.client.ApiClient$1.onResponse(ApiClient.java:879)
at com.squareup.okhttp.Call$AsyncCall.execute(Call.java:177)
at com.squareup.okhttp.internal.NamedRunnable.run(NamedRunnable.java:33)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1113)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:588)
at java.lang.Thread.run(Thread.java:818)
I have found an acceptable solution to this problem in the Java client code generated from BrAPI V2.0.
The generated code contains a class JSON.java which handles the JSON serialization and de-serialization of the JSON objects. The problem is when it is de-serializing an object, it is expecting an extra "discriminator" field to handle Java type resolution. If this class had been responsible for the original serialization, then this would not be a problem, however since the JSON is coming from a remote server, the "discriminator" field is missing, just as the error message indicates.
The easiest solution is to modify JSON.java and tell it how to determine the correct class without the need for an independant discriminator field. For example, Study.java has an extra field studyDbId and StudyNewRequest.java does not have this field. Based on the presence or absence of studyDbId in the response JSON, I can determine which class to instantiate.
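As a hedged sketch of that kind of change (the generated client uses gson-fire type selectors, per the stack trace; the assumption here is that Study extends StudyNewRequest in the generated model):
import io.gsonfire.GsonFireBuilder;

class JsonTypeSelectorPatch {
    // Pick the concrete class from the payload itself instead of a discriminator field.
    static GsonFireBuilder patchedBuilder() {
        return new GsonFireBuilder()
            .registerTypeSelector(StudyNewRequest.class, readElement -> {
                // Responses from the server carry studyDbId; request bodies do not.
                if (readElement.getAsJsonObject().has("studyDbId")) {
                    return Study.class;
                }
                return StudyNewRequest.class;
            });
    }
}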
@peterrosario if and when you are ready to move to V2.0 for your java client, let me know and I can give you my code changes for JSON.java
|
2025-04-01T06:40:01.370374
| 2015-12-30T04:41:32
|
124303059
|
{
"authors": [
"vishyme"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9659",
"repo": "plataformatec/devise",
"url": "https://github.com/plataformatec/devise/issues/3878"
}
|
gharchive/issue
|
undefined method `utc' for "2015-12-30 04:14:49 UTC":String
#
# # allow_unconfirmed_access_for = nil
# confirmation_period_valid? # will always return true
#
def confirmation_period_valid?
self.class.allow_unconfirmed_access_for.nil? || (confirmation_sent_at && confirmation_sent_at.utc >= self.class.allow_unconfirmed_access_for.ago)
end
# Checks if the user confirmation happens before the token becomes invalid
# Examples:
#
pls see file https://github.com/vishyme/Estydemo.git
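The trace suggests confirmation_sent_at is coming back as a String rather than a Time, typically because the column is not a datetime in the schema. A hedged model-level workaround sketch, assuming the stored strings are parseable; the real fix is migrating the column to a datetime type:
def confirmation_sent_at
  value = super
  # Devise calls .utc on this value, so coerce legacy String values to Time.
  value.is_a?(String) ? Time.zone.parse(value) : value
end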
|
2025-04-01T06:40:01.381892
| 2019-10-03T10:16:31
|
501976143
|
{
"authors": [
"LeKristapino",
"colinross",
"tegon"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9660",
"repo": "plataformatec/devise",
"url": "https://github.com/plataformatec/devise/issues/5147"
}
|
gharchive/issue
|
Suggestion - devise module settings in initializer not model
Has anybody brought up why devise modules that affect more than just models are placed in models, not the initializer?
It is weird that a line in a model directly affects routes. Might it not be better to put modules in the devise initializer, so that instead of
class User < ApplicationRecord
devise :database_authenticatable, :validatable
end
it would be something like
Devise.setup do |config|
config.devise_model User, :database_authenticatable, :validatable
end
just food for thought
While I would admit the coupling is a bit awkward as you noted, the catalyst is the scoping when using multiple models (resource types). The alternative would be having domain requirements (Users are recoverable but Admins are not, etc.) defined away from the domain model.
To address the scoping issue, I propose just the model name as the first parameter (as in the example). We don't even need to change that much logic under the hood (as a first step), since it should be possible to inject functionality into models exactly the same way it is done now, just doing it behind the scenes.
Another added bonus, is that this way it is possible to configure devise per-resource-type, as the configuration logic can be scoped to serve a specific resource (for example different unlock strategies, for User and Admin, etc.)
My understanding was that it actually gets set up via the devise_for call in the routes file (the right place). In turn, yes, it looks at the model and detects which modules are loaded and then loads the appropriate (default) routes if no overrides are given.
That said, I think your premise of "a line in model directly affects routes" is incorrect. It is more specifically, a line in routes checks the model (which is configured in the model file) to create routes.
Simply put, the model definition does not create routes, but it informs the creation of routes depending on the modules loaded.
Simply put, the model definition does not create routes, but it informs the creation of routes depending on the modules loaded.
That is correct.
Another added bonus, is that this way it is possible to configure devise per-resource-type, as the configuration logic can be scoped to serve a specific resource (for example different unlock strategies, for User and Admin, etc.)
This would require a bigger change, right? I don't think that simply changing where the code gets injected in the model would also make this possible since some of the configurations are global.
This would require a bigger change, right? I don't think that simply changing where the code gets injected in the model would also make this possible since some of the configurations are global.
Yes, but my suggestion to move the configuration from model to initializer just opens this possibility for future improvements. I do not suggest doing all of this in one go.
Yes, but my suggestion to move the configuration from model to initializer just opens this possibility for future improvements. I do not suggest doing all of this in one go.
I do think the configuration per-resource could be done with the current design too. I understand what you saying about code organization but I'm struggling to see why your proposal would be more flexible than the way it is today.
the method delete_all_data does not actually delete data, it just calls another method.
IMO, that's true. The method delete_all_data has a dependency on the class name SomeModel and its method delete_all_data. By dependency, I mean that this method would have to change if SomeModel changed its name, the parameters, or the name of its own delete_all_data method. But if SomeModel changes its implementation of delete_all_data while keeping the public API the same, then the parent method does not have to change at all.
The 'best practice' in rails-centric libraries is convention over configuration.
The convention (90% use case) is that if someone has a certain module enabled for their devise resource, Recoverable as an example, they also want to have the routes needed to make that work.
If you really need to de-couple these concerns, you may do so at the model and routes level, via overrides to devise_for, respectively.
The initializer is for global-level configuration (in this case global denotes applying to all the devise resources in use). If you want it to apply only to one resource-type, use options in the model or routes definition calls. You can disable creating routes altogether and create them manually if you wish as well.
I would also say as an aside that the concerns you have with de/coupling may have to do with an approach that doesn't see the library as a container [black box if you will] itself. The fact that devise informs itself through inference (via the model) rather than hard-coded configuration in the routes is just a choice for convenience of testing/implementation and keeping the library DRY.
Do you have a specific use case / failing test case that you can't accomplish with the current form?
No, it was just something that I thought would be a better way of configuring devise while I was looking for the reason why some devise routes weren't being generated. As I was continuously searching for something in the model itself, I realized that it seems somewhat odd to look for routes config in the model.
This was just to present a potential alternate approach in the configuration. There are no problems or cases that I've come across so far where this approach would fail to deliver. :)
As I was continuously searching for something in the model itself, I realized that it seems somewhat odd to look for routes config in the model.
@LeKristapino That's a fair point, it would be easier to find out how things work. But this kind of change requires work both from maintainers and users of the gem. We want to make it easier for the users to upgrade to new versions, that's why we'll leave it as it is.
Thanks again for bringing this up and for the discussion.
|
2025-04-01T06:40:01.390893
| 2023-07-20T07:03:49
|
1813311771
|
{
"authors": [
"karmeye",
"pjkaufman"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9661",
"repo": "platers/obsidian-linter",
"url": "https://github.com/platers/obsidian-linter/issues/810"
}
|
gharchive/issue
|
FR: Possibility to Sync the File System's Modification Date with the Yaml Date
Is Your Feature Request Related to a Problem? Please Describe.
Regarding YAML and file system modification dates that become unsynced with each other due to moving/downloading the Vault to a new location.
Running "Lint all files" / "Lint folder" will update all files because file sys mod date is later than Yaml mod date, but the file's content hasn't actually been modified.
Describe the Solution You'd Like
Is it in theory possible to programmatically set the file system's mod date from within an Obsidian plugin?
In that case it would be nice to have a function that one runs only when the file system mod dates get out of sync.
The function would go through all files in the vault and make sure that the file mod date is set to the current value in whatever YAML key represents the modification date.
Describe Alternatives You've Considered
Is there any stand-alone app? But it would have to run on all platforms.
Additional Context
Having run the "sync mod dates" function would allow "lint all files" to just update the Yaml mod dates on the files that were actually modified.
This relates to #386 . There is nothing that can be done by the Linter in regards to this. However you may be able to use Custom Commands to handle this.
But to be clear, the Linter cannot do this because Obsidian does not allow for this in their API.
There probably is a way to do so, but those functions are not added to an API or anything. It would take some work and testing on your part to get it working.
Would it be possible to have a "Lint all Files Modified Today" feature in Linter?
It would be possible so long as the date modified is considered the source of truth. It would check if the date modified was today and, if so, lint those files.
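A rough sketch of what that check could look like with Obsidian's API; the per-file lint entry point named here is hypothetical:
// Inside a plugin method: collect markdown files whose mtime falls on today.
const startOfDay = window.moment().startOf('day').valueOf();
const modifiedToday = this.app.vault.getMarkdownFiles()
  .filter((file) => file.stat.mtime >= startOfDay);
for (const file of modifiedToday) {
  await this.lintFile(file); // hypothetical per-file lint entry point
}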
Does linting on file change (#799 ) and autosave (#392 ) not address this issue?
I am assuming the latter handles your scenario.
Looks like I linked to the wrong issue. I was referring to #183 for autosave. It would lint the file after a change is made to it (thus no need for running a lint on the current file manually).
As for linting a file when the active file changes, that makes sense. I was pretty sure it would not meet your needs, but thought I would mention it as well.
I was referring to #183 for autosave. It would lint the file after a change is made to it.
How do you detect changes made to a file?
I listen to the current active file's editor content (source mode displayed content). If the editor says something changed, then I assume something changed.
I listen to the current active file's editor content (source mode displayed content). If the editor says something changed, then I assume something changed.
Ok.
Autosave as in #183 doesn't work for me since I don't want the file to suddenly change.
What would work is if you manage a list (as the Recent Files plugin manages a list of recent files in a json) where you add each file as soon as a change occurs. This would be a list of files pending linting. Then make a command available that lints all files in the list and then clears it.
Then there's no need to iterate through all files as suggested above.
But this list of files pending linting might cause problems when multiple clones of the same vault exist, where files haven't been synced.
I can say that I would not add that data to a json file if I were to add it. So there would be no issue with syncing.
It sounds like you are looking for a feature that currently has not been requested, so I can put it in the backlog and see what interest current users have in this feature. I am open to PRs in the meantime, but at this time I don't see myself working on this anytime soon.
Sounds good. I am glad that you found a solution.
|
2025-04-01T06:40:01.403575
| 2023-10-31T16:12:37
|
1970817396
|
{
"authors": [
"chadwcarlson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9662",
"repo": "platformsh/demo-project",
"url": "https://github.com/platformsh/demo-project/issues/61"
}
|
gharchive/issue
|
Temper when full demo runthrough is executed on PRs
Currently: each push to a PR going into main
Suggestions:
when a label is added (see the workflow sketch below)
only on this repo id (no forks)
only delete project if previous steps succeed?
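A minimal sketch of gating the run this way, assuming GitHub Actions; the label name run-full-demo and the job body are placeholders:
# hypothetical .github/workflows/full-demo.yml
on:
  pull_request:
    types: [labeled, synchronize]

jobs:
  full-demo:
    # only this repo (no forks) and only when the gating label is present
    if: >
      github.repository == 'platformsh/demo-project' &&
      contains(github.event.pull_request.labels.*.name, 'run-full-demo')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4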
Add a comment on which label needs to be added to a PR to run tests
https://github.com/marketplace/actions/comment-pull-request
|
2025-04-01T06:40:01.406613
| 2024-02-01T15:57:40
|
2112856183
|
{
"authors": [
"akalipetis",
"gilzow"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9663",
"repo": "platformsh/platformify",
"url": "https://github.com/platformsh/platformify/pull/202"
}
|
gharchive/pull-request
|
Fix Find() when / is used as input
This fixes issues with Platformifiers that were looking for relative files, like Laravel and Django
Fix #199
The only other challenge I see is that the message about adding the composer dependency is scrolled out of view quickly due to the size of our congrats message:
Normal size terminal window:
Expanded to be able to see it:
Good point, let's open a separate issue to discuss this.
message size moved to #204
|
2025-04-01T06:40:01.419969
| 2017-03-14T07:31:58
|
213989068
|
{
"authors": [
"gslowikowski"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9664",
"repo": "play2-maven-plugin/play2-maven-plugin",
"url": "https://github.com/play2-maven-plugin/play2-maven-plugin/issues/123"
}
|
gharchive/issue
|
Upgrade Play! version from 2.4.10 to 2.4.11
No changes other than version numbers from Maven plugin or test projects point of view, but update anyway.
New 1.0.0-beta7-SNAPSHOT snapshot deployed, documentation updated.
|
2025-04-01T06:40:01.423179
| 2018-10-12T07:38:02
|
369434089
|
{
"authors": [
"scarletsky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9665",
"repo": "playcanvas/engine",
"url": "https://github.com/playcanvas/engine/issues/1401"
}
|
gharchive/issue
|
skyboxLayer enabled unexpectedly when changing skyboxIntensity
For bug reports, include:
Description
With the new layer system, we can hide skybox easily by the following code:
pc.app.scene.layers.getLayerById(2).enabled = false;
But when changing skyboxIntensity, the skyboxLayer will be enabled again. This is annoying.
The skybox layer should not be visible when changing skyboxIntensity.
Steps to Reproduce
pc.app.scene.layers.getLayerById(2).enabled = false
pc.app.scene.skyboxIntensity = 2
You guys fixed this issue one year later
|
2025-04-01T06:40:01.435528
| 2020-11-06T22:47:31
|
738086189
|
{
"authors": [
"FutureFireplace",
"Maksims",
"dexterdeluxe88",
"mvaligursky",
"yaustar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9666",
"repo": "playcanvas/engine",
"url": "https://github.com/playcanvas/engine/issues/2538"
}
|
gharchive/issue
|
Feature request: black or at least dark light within light cookies
Can you develop the option of black or at least dark light within light cookies? ... I want to make a tool for Tattoo artists, where edges need to be really black.
Not possible right now.
cf my project: https://playcanvas.com/editor/scene/968488
Sounds like you want to project a texture onto a model.
These two demos may help in that regard
https://github.com/playcanvas/playcanvas.github.io/blob/master/graphics/painter.html
https://developer.playcanvas.com/en/tutorials/character-damage-demo
Can you develop the option of black or at least dark light within light cookies? ... I want to make a tool for Tattoo artist, where edges need to be really black.
Not possible right now.
This would also be very useful for projecting blob shadows.
@yaustar: in case you are referring to https://playcanvas.github.io/#graphics/painter.html, it will not help. I have a model character which receives the texture in a bad way (what is projected on the back is also projected on the front - a bit like a mirror effect, except that the mapping is off {ps: UV unwrapping is not always a walk in the park - many of the Blender unwrap methods I tried do not, for example, survive the FBX export to PC})
the painter example paints on each of the 6 sides as well :-/
I am more interested in the damage example, where the UV mapping already seems perfect (there is no mirrored effect, as all seams and islands on the black/white rendertexture to the left are mapped correctly). Here I want to project a ready-made texture onto the model. Kind of the reverse of the present 'damage' infliction. Here is my project with a naïve approach to the problem (blue tattoo used as decal texture): https://playcanvas.com/editor/scene/1028837
BTW @yaustar: I have found a compromise solution for the mapping problem ... you don't have to pursue this (just in case ... but thx again)
Having summed up the different methods to project an image onto a character (four methods in total), I hereby return to this issue, as each of them seems to have its own challenge. In case the developers think it is possible to use the two most obvious ones (texture and rendertexture), they both have inherent flaws in the shape of 'seams' and 'distorted stretches' that prevent me from making fluent dynamic movement of an image across the character without a lot of stretching: https://playcanv.as/b/9tQ1i4Cq/ (forked from a Leonidas tutorial) + https://playcanv.as/b/FiSGfBFX/ (note the stretch on the back of the upper left arm). These dynamic stretches are (close to) never a problem when making UV mappings for games, as the textures are static in such cases. But here the texture is dynamic, and the inherent original mapping structure (from when it was developed in Blender etc.) reveals itself ... conclusion: although not being a natural category of physics, 'black light' as a light cookie option would be very useful, as it can bypass the mapping issues altogether.
Although not being a natural category of physics, 'black light' as a light cookie option would be very useful, as it can bypass the mapping issues altogether.
Even if this is possible, you are going to have the same problem as you are not projecting onto a flat surface.
I am already ahead of that problem, as I am using a 'rolling effect' that changes the UV-tiling of the material as a function of the camera-to-bodypart position.
Decals?
A) As an option I can go back to the https://developer.playcanvas.com/en/tutorials/character-damage-demo (that includes decals) option, but so far I have seen this example as being tailor-made for special game situations (and thus very relevant for most PlayCanvas developers).
B) As a parallel I made this post in the forum: https://forum.playcanvas.com/t/shooting-an-image-on-to-a-surface/15755 yesterday. From there I pursued https://playcanvas.com/project/704805/overview/paint-3d-test (also decals)
From both A) and especially B) I seem to be stuck at this line [from B)]:
this.material.setParameter("paintColor",new pc.Vec3(1.0,1.0,1.0).data);
is there a material.setParameter method/approach for 'painting' with an image?
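For what it's worth, a hedged sketch of passing a texture instead of a color: setParameter accepts a pc.Texture for sampler uniforms, provided the material's shader declares one. The uniform and asset names here are made up, and the asset is assumed to be loaded:
// Assumes a custom shader with: uniform sampler2D uPaintMap;
var asset = this.app.assets.find('tattoo.png'); // hypothetical asset name
this.material.setParameter('uPaintMap', asset.resource);
this.material.update();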
(please note that this issue has now made me post this parallel https://github.com/playcanvas/engine/issues/2556 - it might help the overall goal of a wider option pool and/or a better rendering pipeline ... all in all, so that at least one of the four approaches mentioned above [texture, rendertexture, decals and light cookie] improves :-) )
I've converted this discussion into a request to implement a decal system:
https://github.com/playcanvas/engine/issues/4053
|
2025-04-01T06:40:01.438246
| 2023-10-12T07:57:22
|
1939426213
|
{
"authors": [
"kungfooman",
"mvaligursky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9667",
"repo": "playcanvas/engine",
"url": "https://github.com/playcanvas/engine/pull/5745"
}
|
gharchive/pull-request
|
Allow the AppBase to be cleanly destroyed even when not initialized
avoiding undefined access
Closing as no go. I was hoping to avoid construction / destruction without a device, but it's more complicated than expected and not worth it.
Can't we just fix this error?
Basically only what you added already:
const canvasId = this.graphicsDevice?.canvas?.id;
if (canvasId !== undefined) {
AppBase._applications[canvasId] = null;
}
fair enough, I'll do that.
new PR.
|
2025-04-01T06:40:01.440279
| 2023-11-01T23:07:51
|
1973236649
|
{
"authors": [
"brocollie08",
"sugarmanz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9668",
"repo": "player-ui/player",
"url": "https://github.com/player-ui/player/pull/214"
}
|
gharchive/pull-request
|
Singular workflow for CI
Change Type (required)
Indicate the type of change your pull request is:
[ ] patch
[ ] minor
[ ] major
/canary
|
2025-04-01T06:40:01.460827
| 2020-05-28T08:30:01
|
626321145
|
{
"authors": [
"chbatey",
"ennru",
"raboof"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9669",
"repo": "playframework/twirl",
"url": "https://github.com/playframework/twirl/issues/342"
}
|
gharchive/issue
|
Uploading snapshots to bintray fails
/home/play/logs/nightly-deploy-master-1590634801.log
[error] java.lang.RuntimeException: error uploading to https://api.bintray.com/maven/playframework/snapshots/snapshots/com/typesafe/play/twirl-compiler_2.12/1.5.0-2020-05-27-785c3ce-SNAPSHOT/twirl-compiler_2.12-1.5.0-2020-05-27-785c3ce-SNAPSHOT.pom: {"message":"Snapshot files cannot be uploaded to OSS repositories.
Looks like we're not fully using sbt-dynver here yet?
This happens every night
Fixed in the private Play build server repository.
|
2025-04-01T06:40:01.508212
| 2017-11-12T14:57:21
|
273236563
|
{
"authors": [
"NoNameProvided",
"sh3d2",
"tonyxiao",
"vekexasia"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9671",
"repo": "pleerock/routing-controllers",
"url": "https://github.com/pleerock/routing-controllers/issues/327"
}
|
gharchive/issue
|
Can middleware access controller instance?
I'd like to create a middleware that has access to the controller instance. Is that possible?
What is your use case for this and why it cannot be achieved by modifying the request object?
Let's say I've got a Controller similar to:
@JsonController()
class XXX {
instanceOfSomething = null;
@Get('/whatever')
public async getWhatever() {
return this.instanceOfSomething.getWhatever();
}
}
Let's say that I want to add a middleware that checks if instanceOfSomething was instantiated and, if not, returns an error.
Of course this is useful because I have a lot of methods that use instanceOfSomething and depend on the "loading" of it.
Also, being able to use a controller method as a middleware could be helpful.
What you are trying to do is a strange approach. Use dependency injection instead, and if instanceOfSomething needs some time to fully start up after being created, then just defer all calls on it (e.g. return a response that is resolved when instanceOfSomething has finished setting up and completed your call).
Yes for that scenario as I advised you should keep track of the internal state of the service inside the service not in a middleware. You can do something like:
class MyService {
private ready: Promise<void>;
constructor() {
this.ready = this.setupStuff();
}
public async anyMethod(): Promise<any> {
await this.ready;
// do your stuff here
}
}
I already keep track of the readyness in the dependency.
I just want to hook up a middleware, within that middleware do
if (!this.dependency.isReady()) {
return { success: false, message: 'App is not ready' }
}
Isn't this the whole point of having a middleware? To perform "guards" over real code?
If middleware cannot currently be coupled to controller class instances, that's unfortunately a showstopper to me.
Isn't this the whole point of having a middleware? To perform "guards" over real code?
Yes it is, but why do you want to return an error response when your client can wait until your app is ready (I assume your app doesn't need multi-minute setup time)? You just simply resolve the returned promise when the app has been set up and the request can be processed.
Btw you can inject the service into your middleware if you really want to and check for its readiness in the middleware, however, as I already said, this seems like a wrong design decision. The service itself should keep track of its state, and if you decide to send an error to requests which come before, then the service should create the error, not the middleware.
(I assume your app doesn't need multi-minute setup time.)
Can take hours. Don't ask :)
To me it's cleaner to have the middleware check dependency readiness and keep the controller methods clean of checking dependency readiness for each route.
If I had to do that I'd need to duplicate the readiness (and response handling) code for every route. This looks like an antipattern to me.
BTW I don't want to inject the dependency into the middleware. I think I might create a decorator and decorate each method to perform such validation. Still duplicated code, but a decorator looks better than code duplication to me.
Feel free to share your thoughts.
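For illustration, a sketch of that vanilla decorator idea; dependency and its isReady() method are assumed members of the controller, not routing-controllers API:
function GuardReady() {
  return function (target: any, key: string, descriptor: PropertyDescriptor) {
    const original = descriptor.value;
    descriptor.value = function (this: any, ...args: any[]) {
      // `this` is the controller instance, so its members are reachable here.
      if (!this.dependency.isReady()) {
        return { success: false, message: 'App is not ready' };
      }
      return original.apply(this, args);
    };
    return descriptor;
  };
}
Each route method would then be decorated with @GuardReady().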
Can take hours. Don't ask :)
Wow, that is a long time! So then why don't you solve this at the level of load balancing? Just don't redirect any traffic to the app until it's ready.
To me it's cleaner to have the middleware check dependency readiness and keep the controller methods clean of checking dependency readiness for each route.
Your controllers shouldn't check for readiness, the services themselves should check it.
BTW I don't want to inject the dependency into the middleware. I think I might create a decorator and decorate each method to perform such validation. Still duplicated code, but a decorator looks better than code duplication to me.
Yes, you can do that too, but if you do it via a vanilla decorator then you won't have access to routing-controller itself.
Feel free to share your thoughts.
By trying to check the service readiness outside of the service you break the encapsulation of logic. Your middleware doesn't need to throw, your service needs to throw when it's not ready.
By trying to check the service readiness outside of the service you break the encapsulation of logic. Your middleware doesn't need to throw, your service needs to throw when it's not ready.
Oh now I understand what you mean. Yes you're right. But right now the service code, which was not written by me, works that way so I need to adapt until I can fix that.
BTW I would really consider exposing the controller instance to middlewares as this would open up many other options.
Yes, you can do that too, but if you do it via a vanilla decorator then you won't have access to routing-controller itself.
What do you mean exactly? If I apply my own decorator after the ones from routing-controller then routing-controller will call my middleware before then actual route implementation (which will need to be called by my middleware)
Isn't this the whole point of having a middleware? To perform "guards" over real code?
A given middleware shouldn't know anything about other middlewares in the middleware chain; they should all be "self-contained". A controller (the whole routing-controllers, actually) acts as one big middleware. So accessing the controller instance (part of one middleware) in another middleware would break the middleware separation principle. The only thing (in the express example) that travels between middlewares is the request (and response) object itself.
Can you not just share an instance of your service between the controller and a middleware via a DI container (not a best practice, but something to start with)?
say, something like (pseudo-code, not tested in any way):
export class Middleware implements ExpressMiddlewareInterface {
@Inject('my-service')
protected service;
use(request: any, response: any, next?: (err?: any) => any): any {
if (!this.service.isInitialized()) {
throw 'service not ready';
}
next();
}
}
class Controller {
@Inject('my-service')
protected service;
@Get('/')
@UseBefore(Middleware)
anAction(){
// we want request to reach action only when service is "ready"
}
}
@sh3d2 one issue is that the service being injected can change for each request (i.e. it depends on currentUser). How would you handle that in routing-controllers?
|
2025-04-01T06:40:01.600484
| 2021-05-18T02:48:31
|
893893537
|
{
"authors": [
"scala-steward"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9672",
"repo": "plokhotnyuk/fast-string-interpolator",
"url": "https://github.com/plokhotnyuk/fast-string-interpolator/pull/164"
}
|
gharchive/pull-request
|
Update perfolation to 1.2.8
Updates com.outr:perfolation from 1.1.7 to 1.2.8.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.outr", artifactId = "perfolation" } ]
labels: library-update, semver-minor
Superseded by #197.
|
2025-04-01T06:40:01.654807
| 2023-02-26T05:33:22
|
1599937532
|
{
"authors": [
"davisagli",
"sneridagh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9673",
"repo": "plone/volto",
"url": "https://github.com/plone/volto/pull/4434"
}
|
gharchive/pull-request
|
Improvements to dev proxy
Prefer RAZZLE_INTERNAL_API_PATH over RAZZLE_API_PATH for devProxyToApiPath. This makes more sense (the proxy target is accessed from the Volto server side) and it makes the dev proxy work without additional configuration when running Volto in a Docker container where RAZZLE_INTERNAL_API_PATH is already set correctly.
Keep the URL protocol from the API path in the virtual hosting path passed to the backend, instead of hardcoding http. This lets the backend generate correct URLs when the backend is served over https.
Use a new environment variable, RAZZLE_DEV_PROXY_INSECURE, to control whether the proxy checks certificates on the backend.
Always log where API paths are proxied on startup, not only in development mode.
In the APIResourceWithAuth helper, always use devProxyToApiPath, not only in development mode. It's a potentially confusing inconsistency.
With these changes, it's possible to run Volto with a remote backend served over https. For example:
RAZZLE_INTERNAL_API_PATH=https://demo.plone.org RAZZLE_PROXY_REWRITE_TARGET=/++api++ RAZZLE_DEV_PROXY_INSECURE=1 yarn start
Note: I still need to check if there are any implications for the docs.
@davisagli seamless traefik docker config:
routers:
frontend:
rule: "Host(`localhost`)"
service: frontend
backend:
rule: "Host(`localhost`) && PathPrefix(`/++api++`)"
service: backend
middlewares:
- backend
middlewares:
backend:
replacePathRegex:
regex: "^/\\+\\+api\\+\\+($|/.*)"
replacement: "/VirtualHostBase/http/localhost/plone/++api++/VirtualHostRoot$1"
services:
frontend:
loadBalancer:
servers:
- url: "http://host.docker.internal:3000"
backend:
loadBalancer:
servers:
- url: "http://host.docker.internal:55001"
it has localhost in it, could that be the problem?
works like a charm... We should document it properly.
|
2025-04-01T06:40:01.682153
| 2017-11-23T19:13:29
|
276458043
|
{
"authors": [
"chriddyp",
"n-riesco"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9674",
"repo": "plotly/falcon-sql-client",
"url": "https://github.com/plotly/falcon-sql-client/pull/277"
}
|
gharchive/pull-request
|
Use electron-builder and webpack@3
@chriddyp Could you review this PR?
I haven't gone as far as I originally planned with this work, but I'm going to open a PR in the main repo, so that people can test the installers generated with electron-builder.
A brief overview of what this PR does:
now falcon uses a patched version of ibm_db from my repo with @tarzzz 's fix (this is necessary for electron-builder to work)
I've upgraded dependencies if no code change was required (e.g. now we are using webpack@3 and mysql2)
I've removed unused dependencies, files and folders.
when possible I've moved modules from dependencies to devDependencies
Now the procedure to generate installers is:
$ yarn install
$ yarn build
$ yarn run pack
(Beware that yarn pack and yarn run pack do different things)
The procedure to build the app locally and test is still the same:
$ yarn install
$ yarn run rebuild:modules:electron
$ yarn build
$ yarn start
this looks great!
|
2025-04-01T06:40:01.705524
| 2018-11-02T16:43:37
|
376886429
|
{
"authors": [
"mag009",
"scjody"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9675",
"repo": "plotly/orca",
"url": "https://github.com/plotly/orca/pull/144"
}
|
gharchive/pull-request
|
K8s resources adjustment
part of plotly/streambed#11766
As per below, I had misunderstood the CPU metric used by the autoscaler.
It is based on the CPU requests. In our case that is very low at 100m, and the percentage is set to 90%, which means we start scaling at 90m, and that starts spinning up new nodes. As per below:
kubectl describe hpa
Normal SuccessfulRescale 38m (x591 over 36d) horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 14m (x499 over 36d) horizontal-pod-autoscaler New size: 5; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 10m (x98 over 36d) horizontal-pod-autoscaler New size: 10; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 6m22s (x29 over 36d) horizontal-pod-autoscaler New size: 14; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 20s (x1548 over 36d) horizontal-pod-autoscaler New size: 3; reason: All metrics below target
So in our case we barely use 1400m; by increasing the requests to 400m with a CPU target of 150% = 600m per pod, times 3 replicas, we get a total of 1800m, which gives us more than what we need. Worst case, the autoscaler will kick in.
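For reference, a sketch of the manifests these numbers map to (autoscaling/v1 field names; the exact file layout and maxReplicas value here are assumptions):
# deployment (excerpt): raise the per-pod request so utilization percentages are meaningful
resources:
  requests:
    cpu: 400m

# hpa (excerpt): 150% of the 400m request means scaling up when pods average above 600m
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
spec:
  minReplicas: 3
  maxReplicas: 6
  targetCPUUtilizationPercentage: 150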
@scjody ready for review
@mag009 I don't understand your explanation. Can you provide more details on what's happening now and how it needs to change?
It looks like the old target CPU utilization was 30% - where does 90% (in your explanation) come from? I also don't understand why we need to increase the CPU requests from 100m to 400m. Doesn't that number just affect scheduling, in other words what other pods can coexist on the same node as an imageserver pod?
Finally have you done any load testing of the new values to make sure the cluster still scales up when needed?
@mag009 I don't understand your explanation. Can you provide more details on what's happening now and how it needs to change?
From the metrics above, we can see how often we scaled. In our case the autoscaler adds/removes nodes due to the node affinity that prevents two imageserver pods from running on the same node.
For example: we scaled to 14 replicas 29 times during the last 36 days, which represents 90m / 940m per node (a waste of resources).
The goal is to maximize the utilization of our resources and save money.
It looks like the old target CPU utilization was 30% - where does 90% (in your explanation) come from? I also don't understand why we need to increase the CPU requests from 100m to 400m. Doesn't that number just affect scheduling, in other words what other pods can coexist on the same node as an imageserver pod?
I manually changed the value from 30% -> 90%; it was constantly taking nodes up/down and I saw the bill was increasing (should have been in a PR).
Correct, the CPU request is for scheduling only. We could leave it at 100m and set the CPU target to 600% = 60% utilization per node.
Finally have you done any load testing of the new values to make sure the cluster still scales up when needed?
I've tested on stage by generating load using ab -n 10000 -c 20 -p 0.json http://<IP_ADDRESS>:9091/ with a single file in parallel and with randomly sized images to simulate regular traffic.
As we can see, the results are much more efficient:
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 457% (457m) / 600%
Min replicas: 3
Max replicas: 6
Deployment pods: 3 current / 3 desired
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
imageserver-588b9fcd55-fx4zw 471m 2216Mi
imageserver-588b9fcd55-rqgj2 420m 1186Mi
imageserver-588b9fcd55-sv56m 480m 1813Mi
kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-stage-default-pool-943c54e3-p8gl 496m 52% 2843Mi 107%
gke-stage-default-pool-e4669036-6gxh 677m 72% 3064Mi 115%
gke-stage-default-pool-e4669036-lpdr 393m 41% 2530Mi 95%
This is how prod currently looks under load:
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 34% (34m) / 90%
Min replicas: 3
Max replicas: 18
Deployment pods: 10 current / 10 desired
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
imageserver-7f65b96d76-229b4 40m 452Mi
imageserver-7f65b96d76-486tc 26m 594Mi
imageserver-7f65b96d76-96hsv 27m 430Mi
imageserver-7f65b96d76-d86ks 102m 1182Mi
imageserver-7f65b96d76-h7h9x 110m 1807Mi
imageserver-7f65b96d76-pq5bp 34m 402Mi
imageserver-7f65b96d76-snbk9 66m 638Mi
imageserver-7f65b96d76-vc648 92m 1730Mi
kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-prod-default-pool-2e5abd50-2b7t 105m 11% 1381Mi 52%
gke-prod-default-pool-2e5abd50-2s37 80m 8% 1296Mi 49%
gke-prod-default-pool-2e5abd50-br3w 97m 10% 763Mi 28%
gke-prod-default-pool-2e5abd50-j9vd 79m 8% 952Mi 35%
gke-prod-default-pool-e20874d7-6lhh 58m 6% 878Mi 33%
gke-prod-default-pool-e20874d7-7f4x 65m 6% 875Mi 33%
gke-prod-default-pool-e20874d7-kgwd 130m 13% 2413Mi 91%
gke-prod-default-pool-e20874d7-t0w8 67m 7% 773Mi 29%
gke-prod-default-pool-ed1d55e6-0f5q 184m 19% 2618Mi 99%
gke-prod-default-pool-ed1d55e6-gs16 158m 16% 1936Mi 73%
I manually changed the value from 30% -> 90%; it was constantly taking nodes up/down and I saw the bill was increasing (should have been in a PR).
Yes, this should have been a PR, or at least a discussion with the team. When did you make this change?
Correct, the CPU request is for scheduling only. We could leave it at 100m and set the CPU target to 600% = 60% utilization per node.
Where does 600% come from? How is that equivalent to 90% in your original explanation? Can you explain exactly what these numbers mean and how you calculated them?
Thanks for the details on your testing. My concern is: with the new settings, are we autoscaling enough to keep up with the load? Do you have any data on this?
I manually changed the value from 30% -> 90%; it was constantly taking nodes up/down and I saw the bill was increasing (should have been in a PR).
Yes, this should have been a PR, or at least a discussion with the team. When did you make this change?
I made the change on Sept 29, 2-3 days after releasing it. I saw it was using 18 nodes so I just reacted and increased the %.
Correct, the CPU request is for scheduling only. We could leave it at 100m and set the CPU target to 600% = 60% utilization per node.
Where does 600% come from? How is that equivalent to 90% in your original explanation? Can you explain exactly what these numbers mean and how you calculated them?
I never said it was equivalent to 90%, although I forgot to mention that we need to increase it in order to maximize the utilization of our nodes.
This is the math I used for that:
replicas 18 * 90m = 1620m
replicas 3 * 600m ( 100m with a CPUtarget of 600% = 600m ) = 1800m
Thanks for the details on your testing. My concern is: with the new settings, are we autoscaling enough to keep up with the load? Do you have any data on this?
This graph represent our highest peak during the last 30 days :
on all 3 pools (central-a, central-b and central-c) we never reached more than 20% CPU utilization.
We can set the CPUtarget to a less aggressive value, 200%, and re-evaluate in a month?
I made the change on Sept 29, 2-3 days after releasing it.
OK, so it's been at 90% for a while, so we need to increase from that number. Got it.
This is the math I used for that:
replicas 18 * 90m = 1620m
replicas 3 * 600m ( 100m with a CPUtarget of 600% = 600m ) = 1800m
I still don't understand. The calculations alone don't really help me understand the reasoning - why are these the numbers that we want to change to? (Specifically your PR sets CPU requests to 400m and target CPU utilization to 150%. I'm OK discussing either these numbers or a different proposal, but I really want to understand where they come from before we make any more changes.)
I'm also still concerned about the safety of these changes. We don't want to end up in a situation where the cluster is at capacity but autoscaling doesn't happen. I thought from the work in #9865 we'd be able to say something like "When the CPU usage of our imageserver pods is over XX%, we need to scale up", and from there we could implement a solution. Would you be able to let us know these numbers? (If you'd still need to put in a lot of effort to figure out these numbers we could consider increasing the autoscaling parameters gradually and keeping an eye out for problems on prod, but my understanding is you already did the needed tests in #9865 and I'd prefer to take a safer approach based on measurements.)
I made the change on Sept 29, 2-3 days after releasing it.
OK, so it's been at 90% for a while, so we need to increase from that number. Got it.
This is the math I used for that:
replicas 18 * 90m = 1620m
replicas 3 * 600m ( 100m with a CPUtarget of 600% = 600m ) = 1800m
I still don't understand. The calculations alone don't really help me understand the reasoning - why are these the numbers that we want to change to? (Specifically your PR sets CPU requests to 400m and target CPU utilization to 150%. I'm OK discussing either these numbers or a different proposal, but I really want to understand where they come from before we make any more changes.)
We have a total of 940mCPU per node and a minimum of 3 nodes (1 per zone), running 24/7.
A total of 2820mCPU
1391mCPU is allocated to kube-system pods
Which leaves us 1429mCPU for the imageserver.
This is based on the highest CPU request we had over the last 30 days (~1500mCPU), as per the graph: https://github.com/plotly/orca/pull/144#issuecomment-436082074
This is where I got 600m * 3 replicas = 1800mCPU. I agree it's a bit too aggressive :smile: but my goal was to avoid scaling and to better utilize the nodes, since we know this is the only thing running.
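To make the relationship concrete, this is roughly what those numbers look like as Kubernetes objects (a minimal sketch with illustrative names, not our exact manifests):

# Sketch only - names are illustrative, not our exact manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imageserver
spec:
  template:
    spec:
      containers:
        - name: imageserver
          resources:
            requests:
              cpu: 100m            # used by the scheduler only; pods may burst above this
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: imageserver
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: imageserver
  minReplicas: 3
  maxReplicas: 6
  targetCPUUtilizationPercentage: 600   # 600% of the 100m request = 600m per pod before scaling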
I'm also still concerned about the safety of these changes. We don't want to end up in a situation where the cluster is at capacity but autoscaling doesn't happen. I thought from the work in #9865 we'd be able to say something like "When the CPU usage of our imageserver pods is over XX%, we need to scale up", and from there we could implement a solution. Would you be able to let us know these numbers? (If you'd still need to put in a lot of effort to figure out these numbers we could consider increasing the autoscaling parameters gradually and keeping an eye out for problems on prod, but my understanding is you already did the needed tests in #9865 and I'd prefer to take a safer approach based on measurements.)
I do have these numbers, as I pointed out above: https://github.com/plotly/orca/pull/144#issue-227995995; they represent the number of times it scaled up/down. It's hard to measure from this... It tells me that we scale too often.
CPU requests for a week:
We can see that it peaks often, which explains the scale-ups/downs.
As for #9865, I didn't have any metrics, as the auto-scaling wasn't set up yet.
@scjody what would you say if I change the CPU requests to 300mCPU and a target of 90%?
It should cover most of our small peaks and avoid spinning up new nodes.
Does that sound acceptable to you?
@mag009 I agree that we're scaling up too often. Obviously though we need to be concerned with the availability and responsiveness of the service, so we need to make sure we don't go too far in the other direction (in other words not scaling up often enough).
It doesn't sound like we have any measurements or data to support your new proposal though. How much time would it take for you to test autoscaling by doing load testing using stage? Like I said above if it's going to take a significant amount of time then experimenting using prod may be the way to go, but I'd like to consider the tradeoffs first.
@mag009 I agree that we're scaling up too often. Obviously though we need to be concerned with the availability and responsiveness of the service, so we need to make sure we don't go too far in the other direction (in other words not scaling up often enough).
It doesn't sound like we have any measurements or data to support your new proposal though. How much time would it take for you to test autoscaling by doing load testing using stage? Like I said above if it's going to take a significant amount of time then experimenting using prod may be the way to go, but I'd like to consider the tradeoffs first.
Should take me roughly 2-3h to test.
|
2025-04-01T06:40:01.717484
| 2017-03-03T21:54:49
|
211812250
|
{
"authors": [
"ArturoAguileraV",
"alexandresobolevski",
"n-riesco"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9676",
"repo": "plotly/plotly-database-connector",
"url": "https://github.com/plotly/plotly-database-connector/issues/138"
}
|
gharchive/issue
|
MsSQL Express Compatibility
I'd like to know if the Plotly Database Connector (when making an MsSQL connection) supports an MSSQL EXPRESS EDITION database.
I tried to connect using the "sa" username and its password, but nothing happened. It gives me the error:
failed to connect to [hostname]:[PortNumber] - connect ECONNREFUSED [my IP address]:[PortNumber]
This are my inputs:
Username: sa
Password: mypassword
Host: myHost
Port: (I left this port blank)
Database: myDB
So, is it compatible?
Hi! After some searching it seems the connector currently does not support MSSQL EXPRESS (relevant sequelize issue), but it should be an easy fix.
Relevant code in database-connector https://github.com/sequelize/sequelize/issues/3097#issuecomment-73798671
A new option 'instanceName' should be provided.
We welcome contributions from the community and will be glad to review them and work on them to include them in the connector.
Forgive my ignorance but... I searched the whole project and just can't figure out which path contains the sequelize connector. Also, I downloaded the project and just can't make it start (mostly because I don't even know how to do it).
Thanks for your attention! :)
Hey! Thank you for getting your hands dirty :P
What exactly did you try in order to run it, and how did it fail?
The following commands should get you up and running.
npm install
npm run build
npm run start
Hey Alexandre! I'm happy to tell you that I got myself a big cup of coffee and finally found the file you kindly referenced, so I made the changes in my application and it worked!
In my connection-manager.js, which is located at plotly-database-connector/resources/app/node_modules/sequelize/lib/dialects/mssql/connection-manager.js, I just added instanceName: 'SQLEXPRESS' and everything worked fine!
Thank you again for taking the time to answer my questions. I also asked a question on StackOverflow (http://stackoverflow.com/questions/42589886/plotly-and-sql-server-express-compatibility) and pasted the answer there.
If you wish, you could create a branch in this repository and make a PR that adds that instanceName parameter as one of the inputs the user provides when selecting Microsoft SQL as the database to connect to. This way others will only have to enter it in the user interface, without having to change the source code. I can help with reviewing the PR. It should be quite simple.
This is the place where it could be added in the front end https://github.com/plotly/plotly-database-connector/blob/master/app/constants/constants.js#L16
And we can read that input in the back end here https://github.com/plotly/plotly-database-connector/blob/master/backend/persistent/datastores/Sql.js#L14
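Roughly, the back-end change could end up looking something like this (a sketch only; the exact option nesting depends on the sequelize version, and the parameter wiring is illustrative):

function connect(connection) {
    const {username, password, database, port, dialect, storage, host, instanceName} = connection;

    return new Sequelize(database, username, password, {
        dialect,
        host,
        storage,
        // port should typically be omitted when instanceName is set,
        // since tedious treats them as mutually exclusive
        port: instanceName ? undefined : port,
        dialectOptions: {
            // tedious (the MSSQL driver) picks this up,
            // e.g. 'SQLEXPRESS' for a default SQL Server Express install
            instanceName
        }
    });
}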
I just did it; I hope that's the correct way to do it.
I'd appreciate it if you told me whether I did it right or wrong :)
@ArturoAguileraV The project has been quiet since your last comment (sorry about that). But a new version is about to be released that includes a fix for this issue. You can download a prerelease. Please let us know if you have any issues:
https://github.com/plotly/falcon-sql-client/releases/tag/v2.3-pre
Fixed in https://github.com/plotly/falcon-sql-client/releases/tag/v2.3.2
|
2025-04-01T06:40:01.832508
| 2017-11-11T20:01:14
|
273171910
|
{
"authors": [
"ericeslinger"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9678",
"repo": "plumpstack/plump-store-postgres",
"url": "https://github.com/plumpstack/plump-store-postgres/pull/26"
}
|
gharchive/pull-request
|
no real changes
package.json keeps getting prettified back and forth, plus package-lock keeps faffing around.
heh, that's actually not true - the changes are pretty significant; it's just that this is an old PR title, apparently. These are the breaking 0.25 changes from all the other repos.
|
2025-04-01T06:40:01.887052
| 2017-05-08T23:40:44
|
227206181
|
{
"authors": [
"pahammad",
"patrickhulce",
"paulirish"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9679",
"repo": "pmdartus/speedline",
"url": "https://github.com/pmdartus/speedline/pull/49"
}
|
gharchive/pull-request
|
feat: add --fast mode
This PR changes the progress calculation to optionally skip frames that are between ones that have similar progress toward the target. When enabled, it speeds up processing by about 40% and has minimal impact when not enabled via the shortcut. This also aligns perceptual progress and visual progress by sharing the same computation code and fixes #48 by virtue of the fact that we can't wait for the global min when we only parse some of the JPEGs.
Timings
( for i in $(seq 10); do; speedline cnn.json --pretty; done; ) 55.15s user 4.63s system 98% cpu 1:00.44 total
( for i in $(seq 10); do; speedline cnn.json --pretty --fast; done; ) 34.28s user 3.32s system 99% cpu 37.737 total
1 - (34/55) = ~38%
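Conceptually, the shortcut boils down to something like the following (a simplified sketch of the idea, not the actual implementation; names and the 0.05 threshold are illustrative):

// Compute progress for two anchor frames; if they already agree to within the
// threshold, every frame between them is assumed to share that progress and the
// expensive per-frame comparison is skipped. Otherwise, bisect and recurse.
function fastProgress(frames, progressOf, threshold = 0.05) {
  const n = frames.length;
  const result = new Array(n).fill(null);
  result[0] = progressOf(frames[0]);
  result[n - 1] = progressOf(frames[n - 1]);

  function fill(lo, hi) {
    if (hi - lo <= 1) return;
    if (Math.abs(result[hi] - result[lo]) <= threshold) {
      // skipped frames inherit the anchor's progress
      for (let i = lo + 1; i < hi; i++) result[i] = result[lo];
      return;
    }
    const mid = (lo + hi) >> 1;
    result[mid] = progressOf(frames[mid]);
    fill(lo, mid);
    fill(mid, hi);
  }

  fill(0, n - 1);
  return result;
}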
Open Questions
Expose the direct threshold value as the option or keep as binary? If so, name? similarProgressThreshold?
Skipping frames can artificially make the visual jitter (jankiness) disappear from the calculations. Please make sure that the layout instability effects that perceptual speed index captures aren't sacrificed due to this frame-skipping (technically, this is temporal downsampling).
@pahammad by its nature, the impact on the indexes cannot be avoided, as you point out. However, in my experience there are a number of issues with using these metrics as a signal of layout stability in the first place, and frequently I've seen jitter artificially be rewarded by speed index rather than punished. Would a warning and tunable threshold address your concerns, or are you against including the option at all with these drawbacks?
I am OK with including the frame-skipping as an option if it comes with an appropriate warning when the option is exercised. I am very curious about your sentence: "frequently I've seen jitter artificially be rewarded by speed index rather than punished". Are you referring to the classical (histogram-based) speed index, or are you referring to SSIM-based perceptual speed index (PSI)? If you are referring to PSI in that sentence (where it rewards jitter instead of punishing it), I would love to see an example or two - this is opposite to what I've seen.
@pahammad disclaimer text added and I've filed #50 to discuss layout stability tracking
changes lgtm but 2 test failures we need to work out.
changes lgtm but 2 test failures we need to work out.
done
<EMAIL_ADDRESS>published with this.
|
2025-04-01T06:40:02.008203
| 2023-12-27T02:52:53
|
2056852230
|
{
"authors": [
"pmmmwh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9683",
"repo": "pmmmwh/upptime",
"url": "https://github.com/pmmmwh/upptime/issues/562"
}
|
gharchive/issue
|
⚠️ GitHub has degraded performance
In cdc2258, GitHub (https://www.githubstatus.com/api/v2/status.json) experienced degraded performance:
HTTP code: 200
Response time: 186 ms
Resolved: GitHub performance has improved in 0807d24 after 8 minutes.
|
2025-04-01T06:40:02.010602
| 2022-01-29T22:50:09
|
1118346029
|
{
"authors": [
"pmmmwh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9684",
"repo": "pmmmwh/upptime",
"url": "https://github.com/pmmmwh/upptime/issues/71"
}
|
gharchive/issue
|
⚠️ Slack has degraded performance
In 4881fc4, Slack (https://status.slack.com/api/v2.0.0/current) experienced degraded performance:
HTTP code: 200
Response time: 85 ms
Resolved: Slack performance has improved in 50353d9.
|
2025-04-01T06:40:02.029777
| 2021-09-07T15:30:28
|
990107360
|
{
"authors": [
"drcmda",
"teenkwn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9685",
"repo": "pmndrs/gltfjsx",
"url": "https://github.com/pmndrs/gltfjsx/issues/105"
}
|
gharchive/issue
|
webview setState in each component is not responding
My intention is to customize the character's items in a webview and send the item names from React Native.
The data is sent correctly, but I need to set the state inside the window "message" event listener.
There is no problem when setting only the color state, but it crashes when setting both color and geometry.
function Boy(props) {
const group = useRef()
const { nodes, materials } = useGLTF('/man6.gltf')
const state = proxy ({
current: null,
color: {
skin: "#fff6dd",
hair: 'Material.010', //RN2_Green.002
shirt: 'NolmalShirt02.001',
watch: "Material.007",
shoe: 'Shoes_Normal.001', //'Material.009'
}
})
const stateGeo = proxy ({
current: null,
geo: {
shoe: nodes.Shoes2.geometry,
shirt: nodes.man_Shirt.geometry,
}
})
const snap = useSnapshot(state)
const collectionShoesGeometry = {
"shoe3": nodes.Shoes2.geometry,
"shoe4": nodes.man_Shoes.geometry,
}
const collectionShirtsGeometry = {
"shirt3": nodes.Shirt2.geometry,
"shirt4": nodes.man_Shirt.geometry,
}
window.addEventListener("message", function(event) {
var requestTrim = event.data + ""
var value = requestTrim.split("|")
if (value.length > 0) {
if (value[0] === "closet") {
stateGeo.geo['shirt'] = collectionShirtsGeometry['shirt3']
state.color['shirt'] = "NolmalShirt02.001"
} else if(value[0] === "watch") {
state.color["watch"] = "Material.008"
}
}
});
useEffect(() => {
}, [])
return (
<group ref={group} {...props} dispose={null} scale={4.7}>
<mesh
geometry={stateGeo.geo['shirt']}
material-color={snap.color['shirt']}
material={materials[snap.color['shirt']]}
position={[0, -0.01, 0.02]}
rotation={[-1.6, 0, 0]}
/>
<mesh
geometry={nodes.watch1.geometry}
material={materials['Material.001']}
position={[0, -0.01, 0.02]}
rotation={[-1.6, 0, 0]}
scale={1.01}
/>
</group>
)
}
useGLTF.preload('/man6.gltf')
I tried putting the state outside the function, and that works (rough sketch below).
Note: useSnapshot can't be used twice in the same function, and I'm not sure why.
Does anyone have any suggestions about this? Thank you
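A minimal sketch of the working variant, assuming the proxies are hoisted to module scope (field values are trimmed from the snippet above):

import { proxy, useSnapshot } from 'valtio'

// Created once at module scope, so re-renders don't recreate the proxy.
const state = proxy({
  current: null,
  color: {
    shirt: 'NolmalShirt02.001',
  },
})

function Boy(props) {
  // Read from the snapshot when rendering...
  const snap = useSnapshot(state)

  // ...and mutate the proxy (not the snapshot) in event handlers:
  // state.color.shirt = 'SomeOtherMaterial'

  return null // rendering omitted in this sketch
}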
this doesn't seem correct, color = "NolmalShirt02.001" is not valid
|
2025-04-01T06:40:02.046121
| 2022-11-16T22:32:25
|
1452371426
|
{
"authors": [
"hybridherbst",
"optimus007",
"vanruesc"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9686",
"repo": "pmndrs/postprocessing",
"url": "https://github.com/pmndrs/postprocessing/issues/426"
}
|
gharchive/issue
|
Depth buffer issue when using DepthOfFieldEffect
Description of the bug
Follow-up to #420.
By using dofEffect.circleOfConfusionMaterial.adoptCameraSettings and worldFocusDistance, I was able to get a dynamic target to change the focus. I got it to match up / work in @vanruesc's sandbox.
However, I still don't get it to work elsewhere...
For me it looks like the depth buffer is in a wrong format, and that the calculation to go from linearized near/far values to world distances and vice versa doesn't match some setting here. I checked that we're not using logarithmic depth.
Maybe these images of expected and not expected cases help:
In the below images,
the textured plane is cut off by near and far clip planes of 1 and 10
the white cube is the focus target - placed at 1, 10, ~5.5 and ~2.5
✔️ Target at near clip plane - as expected: near clip plane is in focus
✔️ Target at far clip plane - as expected: far clip plane is in focus
❌ Target at center between near and far - not expected: focus is too close
❌ Target at 1/4 between near and far - not expected: focus is too close
So there seems to be some nonlinearity going on, but I have no idea why.
To Reproduce
I'm unfortunately unsure how to reproduce / what's wrong in this setup so far. Happy to answer any questions to hopefully figure out what I'm doing wrong.
Expected behavior
Ability to set the worldFocusDistance and get that distance in focus.
Screenshots
see above
Library versions used
Three: 0.145.4
Post Processing: 6.29.1
Desktop
OS: Windows 10 and 11
Browser Chrome
Graphics hardware: RTX 2070 Max-Q and RTX 3070
Thanks for the screenshots :+1:
How are you setting the worldFocusDistance? You'd want to set this to the distance from the camera to the cube. Alternatively, try setting dofEffect.target to cube.position;
(my mistake conflating R3F and three here, of course it's vanilla...)
Yes, this distance is the world distance from the camera to the cube. Also logged these values and made sure they match up with what I'd expect (e.g. in screenshots 2 and 3 the worldFocusDistance values logged are 5.5 and 2.5).
I get the same behaviour when using dofEffect.target = cube.position.
(side note: for that to work, the cube must be at scene root / must not be transformed by its parents, if I'm not mistaken, which I made sure of).
Strange.. I'll take a closer look this weekend.
side note, for that to work the cube must be a at sceen root / must not be transformed by it's parents if I'm not mistaken which I made sure of
Thanks for pointing that out. The implementation currently doesn't use getWorldPosition which is a bug.
For low near clip values (e.g. 0.01) I only get any focus when the target is at far clip distance, and basically all other target distances result in focus planes very close to the near clip plane.
Well, the shader does linearize depth, so this sounds like the camera settings aren't being set correctly for some reason.
Thanks!
Right now I'm literally calling this
setInterval(() => {
dof.circleOfConfusionMaterial.adoptCameraSettings(camera);
dof.target = targetObject.position;
}, 50);
to make sure, for now, that I'm not messing up any timing here and that the right camera values are used. (Of course I tried without that as well.)
What's weird is that it does seem to work for "target distance == near clip" and "target distance == far clip" but nothing in between.
Hi again! Were you by any chance able to look into the issue @vanruesc? Thanks!
Sorry, I've been too busy to get back to this. However, I was able to confirm that viewZToOrthographicDepth returns correct values.
I got it to match up / work in @vanruesc's sandbox.
However, I still don't get it to work elsewhere...
Any chance you could provide a failing example with code for me to look at?
@hybridherbst does this sandbox or video demonstrate the same issue you are talking about?
https://user-images.githubusercontent.com/6885294/204822593-cee0b9d4-25a2-4f03-b472-fb0fc02db443.mp4
@optimus007 Thanks for the sandbox! The unexpected behaviour that can be observed when the target is off-center has to do with perspective projection. The object would probably remain in focus at all times with an orthographic camera. The effect translates the world distance to a linear, orthographic depth value which basically counters the perspective projection. I wonder if we could use perspective depth instead :thinking:
Right now, I'm busy with work and the v7 redesign. I'll eventually get to the DepthOfFieldEffect.
@hybridherbst Does your project use logarithmic depth? That would explain why you're only able to focus objects close to the near and far planes.
The expected behaviour described in this ticket would be achieved by calculating the distance from individual fragments to the camera instead of using the scene depth. This would result in a spherical focus field around the camera instead of a box-like field between the near and far plane.
I don't plan on changing the current CoC implementation in postprocessing v6, but the new implementation in v7 will use the distance-based approach.
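For reference, the difference between the two approaches boils down to something like this (a simplified GLSL sketch, not the actual CoC shader; the view-position reconstruction is an assumed helper):

// Planar focus field (current): compares linearized scene depth, i.e. the
// distance to the camera plane, against the focus depth. Both values are
// orthographic depths in [0, 1].
float planarSignedDistance(const in float linearDepth, const in float focusDepth) {
	return linearDepth - focusDepth;
}

// Spherical focus field (planned): compares the true distance from the fragment
// to the camera against the focus distance in view units. The view position
// would be reconstructed from depth and the inverse projection matrix.
float radialSignedDistance(const in vec3 viewPosition, const in float focusDistance) {
	return length(viewPosition) - focusDistance;
}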
Closing this in favor of #569.
|
2025-04-01T06:40:02.086695
| 2022-11-25T09:16:20
|
1464244772
|
{
"authors": [
"Adam-it",
"Jwaegebaert",
"martinlingstuyl",
"nicodecleyre",
"waldekmastykarz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9687",
"repo": "pnp/cli-microsoft365",
"url": "https://github.com/pnp/cli-microsoft365/issues/4159"
}
|
gharchive/issue
|
New command: Apply a retentionlabel to a file using spo file retentionlabel ensure
Usage
m365 spo file retentionlabel ensure [options]
Description
Apply a retention label to a file
Options
Option | Description
------ | -----------
-u, --webUrl <webUrl> | The URL of the web
--fileUrl [fileUrl] | The server-relative URL of the file that should be labelled. Specify either fileUrl or fileId but not both.
-i, --fileId [fileId] | The UniqueId (GUID) of the file that should be labelled. Specify either fileUrl or fileId but not both.
--name <name> | Name of the retention label to apply to the file.
Examples
Apply the retention label "Some retention label" to a file
m365 spo file retentionlabel ensure --webUrl 'https://contoso.sharepoint.com/sites/sales' --fileUrl '/sites/sales/somelibrary/somefile.pdf' --name 'Some retention label'
More information
This command can be implemented with shared code created for #4158.
Putting this on hold till #4158 is implemented.
looks solid
Hey @martinlingstuyl, why are we waiting with implementation until #4158 is done?
Hey @martinlingstuyl, why are we waiting with implementation until #4158 is done?
Hi @waldekmastykarz, because we might be able to use shared code. In principle files and folders are listItems as well, so we could possibly reuse the listItem command. That was my initial thought.
If you have a better idea though.., do let me know :)
Hey @martinlingstuyl, why are we waiting with implementation until #4158 is done?
Hi @waldekmastykarz, because we might be able to use shared code. In principle files and folders are listItems as well, so we could possibly reuse the listItem command. That was my initial thought.
Makes sense. Next time, let's include this reasoning upfront just so that we make it clear to everyone why we're waiting.
This command can be implemented with shared code created for #4158.
I'd written something, but I agree it could have been more clear @waldekmastykarz
Can I work on this one, please?
Awesome, all yours!
|
2025-04-01T06:40:02.088434
| 2023-04-10T15:52:41
|
1660977869
|
{
"authors": [
"milanholemans",
"nanddeepn"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9688",
"repo": "pnp/cli-microsoft365",
"url": "https://github.com/pnp/cli-microsoft365/pull/4751"
}
|
gharchive/pull-request
|
Adds "Create custom views to differentiate SharePoint news page types in Site Pages library" sample script. Closes #1782
Adds "Create custom views to differentiate SharePoint news page types in Site Pages library" sample script. Closes #1782
Thank you @nanddeepn! We'll try to review it ASAP!
Thanks @nicodecleyre
Added a reference to the script from the docs/mkdocs.yml file.
|
2025-04-01T06:40:02.156419
| 2021-07-05T07:19:43
|
936776207
|
{
"authors": [
"gautamdsheth",
"jestinegoh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9689",
"repo": "pnp/powershell",
"url": "https://github.com/pnp/powershell/issues/890"
}
|
gharchive/issue
|
SharePoint Online Search Issue using PnP
Notice
Many bugs reported are actually related to the PnP Framework which is used behind the scenes. Consider carefully where to report an issue:
Are you using Invoke-PnPSiteTemplate or Get-PnPSiteTemplate? The issue is most likely related to the Provisioning Engine. The Provisioning engine is not located in the PowerShell repo. Please report the issue here: https://github.com/pnp/pnpframework/issues.
Is the issue related to the cmdlet itself, its parameters, the syntax, or do you suspect it is the code of the cmdlet that is causing the issue? Then please continue reporting the issue in this repo.
If you think that the functionality might be related to the underlying libraries that the cmdlet is calling (We realize that might be difficult to determine), please first double check the code of the cmdlet, which can be found here: https://github.com/pnp/powershell/tree/master/src/Commands. If related to the cmdlet, continue reporting the issue here, otherwise report the issue at https://github.com/pnp/pnpframework/issues
Reporting an Issue or Missing Feature
Please confirm what it is that your reporting
Expected behavior
Please describe what output you expect to see from the PnP PowerShell Cmdlets
Actual behavior
Please describe what you see instead. Please provide samples of output or screenshots.
Steps to reproduce behavior
Please include complete script or code samples in-line or linked from gists
What is the version of the Cmdlet module you are running?
(you can retrieve this by executing Get-Module -Name "PnP.PowerShell" -ListAvailable)
Which operating system/environment are you running PnP PowerShell on?
[ ] Windows
[ ] Linux
[ ] MacOS
[ ] Azure Cloud Shell
[ ] Azure Functions
[ ] Other : please specify
Closing this as it's not clear what the issue is with PnP.
Request you to please answer the issue template questions and reopen the issue.
Hi Elhadj,
My issue has been closed due to the following reason.
Regards
Jestine GOH
Hello Jestine,
Good day!
Thank you for your reply. Could you kindly provide more detail about the issue you are facing?
Best regards,
Elhadj
Hi all,
We are using the SharePoint web part Search function to search documents. Recently we noticed the search results are not working well. Some time ago, it was working.
I tried to view the source of the page and found the code below for the Search input. We understand from Microsoft that PnP search is being used, and that we need to log an issue on GitHub.
Please see the code below for your reference; we would appreciate your advice.
It looks like your browser does not have JavaScript enabled. Please turn on JavaScript and try again.
Thank you
Regards
Jestine GOH
|
2025-04-01T06:40:02.159369
| 2022-07-11T18:09:44
|
1301049766
|
{
"authors": [
"JimmyHang",
"KoenZomers"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9690",
"repo": "pnp/powershell",
"url": "https://github.com/pnp/powershell/pull/2132"
}
|
gharchive/pull-request
|
Update Add-PnPPageWebPart.md
Correcting Title typo from "Add-PnPWebPart" to "Add-PnPPageWebPart"
Before creating a pull request, make sure that you have read the contribution file located at
https://github.com/pnp/powerShell/blob/dev/CONTRIBUTING.md
Type
[x] Typo Fix
Thanks @JimmyHang, well noticed!
|
2025-04-01T06:40:02.164104
| 2023-04-08T18:43:24
|
1659610091
|
{
"authors": [
"gautamdsheth",
"kunj-sangani"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9691",
"repo": "pnp/powershell",
"url": "https://github.com/pnp/powershell/pull/2989"
}
|
gharchive/pull-request
|
Adding Move-PnPTerm and Move-PnPTermSet commands
Type
[x] New Feature
Related Issues?
#2978
What is in this Pull Request ?
Adding two new commands
Move-PnPTerm
Move-PnPTermSet
@kunj-sangani - code looks great!!
Some minor changes before I merge it:
In both these cmdlets, can you:
Replace -DestinationTermSet with -TargetTermSet
Replace -SourceTermSet with -TermSet
Replace -SourceTermGroup with -TermGroup
Replace -DestinationTermGroup with -TargetTermGroup
Replace -DestinationTerm with -TargetTerm
Don't forget to update the docs as well.
Hi @gautamdsheth
Updated the names of the parameters.
Thanks for the help :)
Thanks @kunj-sangani, merged it, much appreciated!
|
2025-04-01T06:40:02.230738
| 2023-03-04T11:33:34
|
1609734961
|
{
"authors": [
"ganigeorgiev",
"mjadobson"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9692",
"repo": "pocketbase/pocketbase",
"url": "https://github.com/pocketbase/pocketbase/pull/1966"
}
|
gharchive/pull-request
|
Resize table columns
As per https://github.com/pocketbase/pocketbase/discussions/1542
It uses a fairly generic svelte action, so it doesn't interfere much with the existing code. Took a bit of time to smooth out any bugs, but seems stable now.
Thanks again for your work on the project; the latest release was great.
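For context, the action boils down to something like this (an illustrative sketch, not the exact code in the PR):

// Generic column-resize action: appends a drag handle to a header cell and
// updates the cell's width while the pointer is down.
export function colResize(node, { minWidth = 50 } = {}) {
  const handle = document.createElement('div');
  handle.className = 'col-resize-handle';
  node.appendChild(handle);

  let startX = 0;
  let startWidth = 0;

  function onPointerMove(e) {
    node.style.width = Math.max(minWidth, startWidth + (e.clientX - startX)) + 'px';
  }

  function onPointerUp() {
    window.removeEventListener('pointermove', onPointerMove);
    window.removeEventListener('pointerup', onPointerUp);
  }

  handle.addEventListener('pointerdown', (e) => {
    e.preventDefault();
    startX = e.clientX;
    startWidth = node.offsetWidth;
    window.addEventListener('pointermove', onPointerMove);
    window.addEventListener('pointerup', onPointerUp);
  });

  return {
    destroy() {
      handle.remove();
    }
  };
}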
I'm not sure about this feature.
I haven't tested it locally, nor reviewed the code changes, but just from the screenshot it looks kind of strange since the table is no longer 100% width (maybe because of the fixed layout?).
I'm not sure about this feature.
I haven't tested it locally, nor reviewed the code changes, but just from the screenshot it looks kind of strange since the table is no longer 100% width (maybe because of the fixed layout?).
I did think this after submitting the pull request with the gif.
It makes the implementation slightly less elegant, but I can keep a minimum width for the table. I will revise and see what you think.
I don't think table-layout: fixed is necessary, and it could cause some issues with, for example, adding a new column after resizing.
Additionally, there should be something that will prevent resizing below some min-width threshold, because from the screenshot the id seems to be cropped and I'm not sure if this is a good idea.
I've updated the code to keep the minimum table width:
This implementation does require a fixed table layout. Trying to keep columns a specific width and the resize-handle tracking the cursor was problematic with a fluid layout.
The columns will clip as you resize. I personally prefer this to min widths.
Let me know what you think, feel free to close if it doesn't make sense for the project.
Sorry, I appreciate the work you've put into this but I don't want to rush it and merge something just because there is a PR for it.
I like the idea of resizable columns, but the current implementation feels a little brittle and I'm not confident that we are handling all edge cases (e.g. on window small->big resize I guess we also need to bind to the resize event to recalculate the initial table width? I'm also not sure how it will behave if we make external layout changes, e.g. via plain CSS in a future responsive version, or just toggling sibling DOM elements, etc.).
I've added it in my local todo to search for other options and eventually something similar could be implemented in the future, but I don't want to invest time into it right now and for now remains out of the scope.
|
2025-04-01T06:40:02.242902
| 2022-02-14T10:39:09
|
1137106077
|
{
"authors": [
"Polina1985",
"kachar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9693",
"repo": "podkrepi-bg/frontend",
"url": "https://github.com/podkrepi-bg/frontend/issues/489"
}
|
gharchive/issue
|
Campaign page/ UI-UX elements are missing according to UI mockup (Design)
To Reproduce
Steps to reproduce the behavior:
1. Go to dev.podkrepi.bg
2. Navigate to the main menu "Дарителство" ("Giving")
3. Scroll down to "Кампании" ("Campaigns")
4. Choose a random campaign
5. Select the "Вижте повече" ("See more") button
6. You should be successfully redirected to the new Campaign page
Expected behavior
All UX elements should match the UI mockups from Design.
Actual result:
Campaign page/ UI-UX elements are missing according to UI mockup (Design):
Missing subheader
Slider for sums (too large compared to the UI mockups from Design)
List of donors and donation sums missing (under the "Сподели" ("Share") button)
Missing image carousel (under the description of the campaign)
Missing section "Последни новини" ("Latest news")
Missing section "Коментари" ("Comments")
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: Windows
Browser chrome
Version: 98.0.4758.82 (Official Build) (64-bit)
@ani-kalpachka
@kachar
If any of the @podkrepi-bg/softuni-bootcamp team wanna try solving this issue it would be great
|
2025-04-01T06:40:02.244417
| 2021-08-29T09:39:48
|
982049532
|
{
"authors": [
"dimitur2204",
"imilchev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9694",
"repo": "podkrepi-bg/infrastructure",
"url": "https://github.com/podkrepi-bg/infrastructure/issues/17"
}
|
gharchive/issue
|
[Feature] Use external database for Keycloak
Currently we deploy Keycloak with a Helm chart that also deploys PostgreSQL next to it. Since our own modules will also be using PostgreSQL it's best if we have 1 instance and point all software to it. This will save resources and will make it easier to maintain
I see most of the images and infrastructure are managed in the api repo. Is that infrastructure repo then mandatory?
|
2025-04-01T06:40:02.265009
| 2020-06-26T15:15:22
|
646332784
|
{
"authors": [
"stephencelis",
"vibrazy"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9695",
"repo": "pointfreeco/swift-composable-architecture",
"url": "https://github.com/pointfreeco/swift-composable-architecture/issues/200"
}
|
gharchive/issue
|
Crash occurring on Xcode 12 when attempting to do enum reflection on child action.
Describe the bug
Attempting to send an action via the store causes EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0) when combining reducers.
Given a child action without any cases
enum ChildAction: Equatable {}
And a parent action
enum ParentAction: Equatable {
case childAction(ChildAction)
case parentAction
}
And a main reducer
let parentReducer = Reducer<ParentState, ParentAction, ParentEnvironment>.combine(
childReducer.pullback(
state: \.childState,
action: /ParentAction.childAction,
environment: { _ in ChildEnvironment() }
),
Reducer<ParentState, ParentAction, ParentEnvironment> { state, action, _ in
switch action {
case .parentAction:
state.childState += 1
return .none
case .childAction:
return .none
}
}
)
As soon as you send viewStore.send(.parentAction) a crash occurs.
The crash happens on EnumReflection line 75 from the CasePaths package
This crash does not happen on Xcode 11.4 with the same setup.
A workaround is to add a case to the child enum.
enum ChildAction {
case banana
}
To Reproduce
Sending any action, e.g. viewStore.send(.parentAction), with the above setup is sufficient to cause a crash.
Expected behavior
Give a clear and concise description of what you expected to happen.
Screenshots
Environment
Xcode 12 Beta 1
Swift 5.3
OS (if applicable): iOS 14
Additional context
ComposableCasePathCrash.zip
Hey @vibrazy, thanks for the detailed report. We've actually logged this issue on swift-case-paths here: https://github.com/pointfreeco/swift-case-paths/issues/11
We probably won't get around to fixing it for a bit (if you wanna take a pass at it, please do!), but in the meantime, another workaround is to use the .never case path:
childReducer.pullback(
state: \.childState,
- action: /ParentAction.childAction,
+ action: .never,
environment: { _ in ChildEnvironment() }
),
This has been fixed upstream in Case Paths 0.1.2. Be sure to update your package dependencies!
|
2025-04-01T06:40:02.288373
| 2023-08-08T19:09:58
|
1841886866
|
{
"authors": [
"ALonleyBanana",
"HavocsCall",
"Vrontis",
"fuer-lo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9696",
"repo": "pokedextracker/pokedextracker.com",
"url": "https://github.com/pokedextracker/pokedextracker.com/issues/508"
}
|
gharchive/issue
|
Support for Non-Regional Forms and Gender Differences
Was making my Living Dex and noticed the site does not support forms besides regional forms. Unown, Vivillon, Alcremie and gender differences are the big form changes I would be looking for. Perhaps have it as an option to display and/or have the option to display forms/gender differences as seperate boxes.
Hello, I'm already looking for it but it doesn't seem to be about it. please
In contrast to the previous post, I think all "other" forms should be in their own box. This is entirely because what happens when they add another hat for pikachu? You will need to move 1300+ pokemon in HOME to keep the order straight. If they are in a separate box, the worst case is you need to move the whole box one over to make room for another.
This appears to be another duplicate of #256?
(The checkboxes are probably kinda implied, due to being available for Gmax forms)
|
2025-04-01T06:40:02.296717
| 2022-10-03T20:54:12
|
1395320582
|
{
"authors": [
"Olshansk",
"deblasis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9697",
"repo": "pokt-network/pocket",
"url": "https://github.com/pokt-network/pocket/issues/273"
}
|
gharchive/issue
|
[TECHDEBT] [P2P] Raintree scalability improvements
Objective
Tend for the TODOs:
// INVESTIGATE(olshansky/team): Does not scale to 1,000,000,000 nodes
Related to #222 and #246
Origin Document
"make it work β© make it fast β© make it pretty"
#222 improves a lot in terms of time complexity, see what we can do to improve even further.
Some research might be required
Goals
[ ] Ensure that the network can scale
[ ] Verify empirically
Deliverable
[ ] Optimizations
[ ] Tests / Benchmarks
Non-goals / Non-deliverables
_REPLACE_ME: List of things that are out of scope
...
General issue deliverables
[ ] Update the appropriate CHANGELOG
[ ] Update any relevant READMEs (local and/or global)
[ ] Update any relevant global documentation & references
[ ] If applicable, update the source code tree explanation
[ ] If applicable, add or update a state, sequence or flowchart diagram using mermaid
[Optional] Testing Methodology
_REPLACE_ME: Make sure to update the testing methodology appropriately_
Task specific tests: make ...
All tests: make test_all
LocalNet: verify a LocalNet is still functioning correctly by following the instructions at docs/development/README.md
Creator: @deblasis
Co-Owners: @Olshansk
Moving this to M4
Closing this out as the scope is too large.
|
2025-04-01T06:40:02.307092
| 2024-02-07T18:46:01
|
2123656169
|
{
"authors": [
"polatengin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9698",
"repo": "polatengin/indiana",
"url": "https://github.com/polatengin/indiana/issues/237"
}
|
gharchive/issue
|
Select instrumentation tooling (assumes OpenTelemetry exporter for Azure Monitor)
As a project lead, I want to make a decision on what tooling to use to instrument telemetry data, so that telemetry data can be collected and sent to the centralized monitoring solution
Mentioned in #228
|
2025-04-01T06:40:02.316580
| 2019-02-25T19:04:31
|
414258043
|
{
"authors": [
"oesteban",
"pvelasco"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9699",
"repo": "poldracklab/fmriprep",
"url": "https://github.com/poldracklab/fmriprep/issues/1517"
}
|
gharchive/issue
|
user-home-test branch
As mentioned in neurostars, the last few versions of the fmriprep docker images fail to run when specifying a user (docker run -u myuser ...).
@oesteban created a new docker image (poldracklab/fmriprep:user-home-test).
This new image fixes the permissions problem with $TEMPLATEFLOW_HOME. However, now I get an error similar to the one in a different Neurostats post when processing the (GRE) fieldmap:
Node: fmriprep_wf.single_subject_Pilot005_wf.func_preproc_ses_day1_task_TASK_acq_normal_run_01_echo_1_wf.sdc_wf.phdiff_wf.meta
Working directory: /tmp/work/fmriprep_wf/single_subject_Pilot005_wf/func_preproc_ses_day1_task_TASK_acq_normal_run_01_echo_1_wf/sdc_wf/phdiff_wf/meta
Node inputs:
bids_dir = None
bids_validate = False
fields = <undefined>
in_file = /data/phelpslab/Linda/BIDSdata/sub-Pilot005/ses-day1/fmap/sub-Pilot005_ses-day1_acq-GRE_run-01_phasediff.nii.gz
undef_fields = False
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
result = self._interface.run(cwd=outdir)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 371, in run
outputs = self.aggregate_outputs(runtime)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 472, in aggregate_outputs
raise error
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 465, in aggregate_outputs
setattr(outputs, key, val)
File "/usr/local/miniconda/lib/python3.7/site-packages/traits/trait_handlers.py", line 172, in error
value )
traits.trait_errors.TraitError: The 'run' trait of a ReadSidecarJSONOutputSpec instance must be a unicode string, but a value of 1 <class 'int'> was specified.
This error was supposed to be fixed after v.1.3.0.post2, so I'm not sure from which version the user-home-test branch was created...
Thanks.
Hi @pvelasco, we've just released 1.3.0.post3 that should take care of both issues.
I'm going to close this one in favor of the neurostars thread (https://neurostars.org/t/singularity-fmriprep-permissionerror-errno-13-permission-denied-cache/3693). Please feel free to reopen if this is still an issue.
Hi @oesteban,
I tested 1.3.0.post3 (specifying a user) and I got a different error (also related to permissions inside the docker image):
Process Process-2:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/cli/run.py", line 755, in build_workflow
err_on_aroma_warn=opts.error_on_aroma_warnings,
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 218, in init_fmriprep_wf
err_on_aroma_warn=err_on_aroma_warn,
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 516, in init_single_subject_wf
num_bold=len(subject_data['bold']))
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/base.py", line 399, in init_func_preproc_wf
bold_reference_wf = init_bold_reference_wf(omp_nthreads=omp_nthreads)
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/util.py", line 121, in init_bold_reference_wf
omp_nthreads=omp_nthreads, pre_mask=pre_mask)
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/util.py", line 302, in init_enhance_and_skullstrip_bold_wf
'epi_atlasbased_brainmask.json')),
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/ants/registration.py", line 935, in __init__
super(Registration, self).__init__(**inputs)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/ants/base.py", line 76, in __init__
super(ANTSCommand, self).__init__(**inputs)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 645, in __init__
super(CommandLine, self).__init__(**inputs)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 182, in __init__
self.load_inputs_from_json(from_file, overwrite=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 495, in load_inputs_from_json
with open(json_file) as fhandle:
PermissionError: [Errno 13] Permission denied: '/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/data/epi_atlasbased_brainmask.json'
The problem is that the package_data in fmriprep gets installed with the same permissions as in /src/fmriprep, which are -rw-rw---- (only root and its group have rw access).
I have a fix for it, and will be submitting a PR shortly.
(Note: I can only submit PRs to branches, not tags, so which branch do you want me to submit the PR to? To master, since the problem is still there?)
This is surprising; why would the tests even work then? Yes, send the PR to master, please.
Sorry, I got it wrong: in the master branch, the permissions are correct.
I think the problem is the --no-cache-dir in the pip install .[all]
I tried building the Docker image for 1.3.0.post3 with
pip install .[all]
(omitting --no-cache-dir) and it runs for a regular user.
Bottom line: tag 1.3.0.post3 is fine except for the --no-cache-dir.
So I'm closing the PR. Thanks a lot for your help!
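As an aside, here is a minimal sketch (not from the thread) for checking whether installed package data is readable by non-root users; iterating over the whole data directory is an assumption, as only epi_atlasbased_brainmask.json is named in the traceback above:
# Hypothetical check of fmriprep's installed package-data permissions.
import os
import stat
import fmriprep

data_dir = os.path.join(os.path.dirname(fmriprep.__file__), "data")
for name in sorted(os.listdir(data_dir)):
    path = os.path.join(data_dir, name)
    mode = os.stat(path).st_mode
    if not mode & stat.S_IROTH:  # missing world-read bit breaks non-root runs
        print(f"not world-readable: {path} ({stat.filemode(mode)})")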
Hi, the latest release 1.3.1 is out. Please let us know if that version resolves this problem!
Hi @oesteban,
Yes, it does. It works fine.
Thanks a lot!
|
2025-04-01T06:40:02.324302
| 2018-05-06T02:08:36
|
320556257
|
{
"authors": [
"Juanlu001"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9700",
"repo": "poliastro/poliastro",
"url": "https://github.com/poliastro/poliastro/issues/367"
}
|
gharchive/issue
|
Cowell method fails to converge with small perturbation accelerations
Problem
This code converges normally:
from astropy import units as u
from poliastro.twobody import Orbit
from poliastro.bodies import Earth
from poliastro.twobody.propagation import cowell
r0 = [-2384.46, 5729.01, 3050.46] * u.km
v0 = [-7.36138, -2.98997, 1.64354] * u.km / u.s
initial = Orbit.from_vectors(Earth, r0, v0)
def accel(t0, state, k):
v_vec = state[3:]
norm_v = (v_vec * v_vec).sum() ** .5
return 1e-5 * v_vec / norm_v
print(initial.propagate(3 * u.day, method=cowell, ad=accel))
But changing to 1e-6 * v_vec / norm_v fails to converge:
$ python ex0.py
/home/juanlu/.miniconda36/envs/poliastro36/lib/python3.6/site-packages/scipy/integrate/_ode.py:1095: UserWarning: dop853: larger nmax is needed
self.messages.get(istate, unexpected_istate_msg)))
Traceback (most recent call last):
File "ex0.py", line 20, in <module>
print(initial.propagate(3 * u.day, method=cowell, ad=accel))
File "/home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/orbit.py", line 271, in propagate
return propagate(self, time_of_flight, method=method, rtol=rtol, **kwargs)
File "/home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/propagation.py", line 209, in propagate
r, v = method(orbit, time_of_flight.to(u.s).value, rtol=rtol, **kwargs)
File "/home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/propagation.py", line 103, in cowell
raise RuntimeError("Integration failed")
RuntimeError: Integration failed
Please paste the output of the following commands
pip freeze | grep astropy
astropy==3.0.2
pip freeze | grep poliastro
-e git+git@github.com:Juanlu001/poliastro.git@34a9e2c83cd77e918feb0182d2fa162ba06cbd07#egg=poliastro
Goal
I would expect a zero perturbation acceleration to be equivalent to a keplerian orbit.
Possible solutions
Steps to solve the problem
Comment below about what you've started working on.
Add, commit, push your changes
Submit a pull request and add this in comments - Addresses #<put issue number here>
Ask for a review in comments section of pull request
Celebrate your contribution to this project π
The current API fails with this case:
def accel(t0, state, k):
v_vec = state[3:]
norm_v = (v_vec * v_vec).sum() ** .5
return 0.0 * v_vec / norm_v
Can you add a corresponding test to #368 to see if the new solvers pass?
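For reference, a hedged sketch of what such a test might look like; the tolerance and the comparison against the default Keplerian propagator are assumptions, not poliastro's actual test code:
from astropy import units as u
from poliastro.bodies import Earth
from poliastro.twobody import Orbit
from poliastro.twobody.propagation import cowell

def test_cowell_with_zero_acceleration_matches_kepler():
    r0 = [-2384.46, 5729.01, 3050.46] * u.km
    v0 = [-7.36138, -2.98997, 1.64354] * u.km / u.s
    initial = Orbit.from_vectors(Earth, r0, v0)

    def zero_accel(t0, state, k):
        # Same shape as the failing case above, but identically zero.
        v_vec = state[3:]
        norm_v = (v_vec * v_vec).sum() ** 0.5
        return 0.0 * v_vec / norm_v

    expected = initial.propagate(3 * u.day)  # default Keplerian propagation
    result = initial.propagate(3 * u.day, method=cowell, ad=zero_accel)
    # Positions should agree closely if zero perturbation == Keplerian.
    assert (abs(result.r - expected.r) < 1 * u.km).all()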
|
2025-04-01T06:40:02.358030
| 2019-02-07T20:46:17
|
407889796
|
{
"authors": [
"vfdev-5"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9701",
"repo": "polyaxon/polyaxon-chart",
"url": "https://github.com/polyaxon/polyaxon-chart/pull/30"
}
|
gharchive/pull-request
|
Added option to configure tensorboard docker image
Should be something like that, @mouradmourafiq what do you think?
Thank you @mouradmourafiq !
|
2025-04-01T06:40:02.363398
| 2019-06-13T20:15:41
|
455932711
|
{
"authors": [
"CLAassistant",
"mouradmourafiq",
"rcarmstrong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9702",
"repo": "polyaxon/polyaxon-client",
"url": "https://github.com/polyaxon/polyaxon-client/pull/24"
}
|
gharchive/pull-request
|
Handle unset experiment in log_artifact(s) call
Not entirely sure why self.experiment is set to None in the Experiment class init; regardless, I've added a simple if statement that will allow the use of the log_artifact(s) helper methods and shouldn't impact existing functionality.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. Ryan Armstrong seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
I think the implementation was just wrong; there are also 2 other methods, log_output and log_outputs, on the base tracker. I am going to clean the client before the v0.5 release, especially these artifact methods, since the platform will be providing annotations for images, dataframes, and models, among others, to specify the type of the artifacts.
Is the v0.5 release happening soon? Should I close this?
@rcarmstrong yes we are testing and pushing hard to have a RC soon.
Just an update: I fixed the docs to reference the log_output(s) methods and fixed log_artifact(s) for the next release, since the other methods are deprecated.
|
2025-04-01T06:40:02.368807
| 2021-07-28T03:28:17
|
954437316
|
{
"authors": [
"CodEsteban",
"patrick96"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9703",
"repo": "polybar/polybar",
"url": "https://github.com/polybar/polybar/issues/2474"
}
|
gharchive/issue
|
Polybar crashes when using i3-msg for moving containers quickly
/usr/include/c++/11.1.0/bits/stl_vector.h:1045: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = std::__cxx11::basic_string<char>; _Alloc = std::allocator<std::__cxx11::basic_string<char> >; std::vector<_Tp, _Alloc>::reference = std::__cxx11::basic_string<char>&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__n < this->size()' failed.
this is the log it gives me when it executes this piece of code
i3-msg "workspace 5; append_layout ~/.config/scripts/dummy_window.json"&
termite -e "nvim -u $HOME/.config/nvim/notes.vim -c "startinsert" $HOME/.notes/notes"&
pid="$!"
echo ${currentWsName}
i3-msg "workspace ${currentWsName}"
while : ; do
winid="`wmctrl -lp | awk -vpid=$pid '$3==pid {print $1; exit}'`"
[[ -z "${winid}" ]] || break
done
i3-msg '[id="'$winid'"] floating enable'
wmctrl -i -r $winid -e 0,$x,50,1000,1000
what it does is give me a container window id, move the container to the 5th workspace, take me back to workspace 1, and then bring the container window back to where I am; it happens so quickly that it completely freezes polybar and then kills it
I'm unable to reproduce this. Could you share the following:
Your polybar config
The output of polybar -vvv
The entire polybar output (if possible using trace logging -l trace)
Closing due to inactivity
|
2025-04-01T06:40:02.371277
| 2021-02-06T19:09:35
|
802768296
|
{
"authors": [
"Marekkon5",
"polyfloyd"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9704",
"repo": "polyfloyd/rust-id3",
"url": "https://github.com/polyfloyd/rust-id3/pull/58"
}
|
gharchive/pull-request
|
AIFF invalid FORM chunk size fix
Hello, this is a fix to a bug with AIFF I just noticed - I forgot to update the FORM chunk header size when the ID3 size changes.
I've also updated the test so it uses a temporary file, rather than writing to the testdata directory.
Sorry for the issues, thanks for the work.
I could add an ffprobe test, since that's how I found this issue; however, the bug is present only in files with actual audio data, and the sample AIFF I provided is just a minimalistic hand-made one without any data other than headers.
FFprobe is fine, there already is another test that uses it anyway ;) Would it be possible to transcode and commit quiet.mp3 so we can use that?
Hello, I've added the quiet.aiff file and updated the test to include ffprobe. Also, I've changed the API: read_from_aiff now wants io::Read + io::Seek so it can be used for reading from memory, for example. Now read_from_aiff_file does the same as the previous API did.
|
2025-04-01T06:40:02.373032
| 2022-12-05T19:45:38
|
1477302676
|
{
"authors": [
"johnstonmatt"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9705",
"repo": "polyseam/cndi",
"url": "https://github.com/polyseam/cndi/issues/99"
}
|
gharchive/issue
|
bringing the terraform state hack inside the cndi binary
Currently the way terraform state is maintained is within GitHub Actions. This works fine, but the code needs to be reimplemented within each supported CI systems (GitLab, etc.). What if the calls to git checkout _state, gpg --symmetric ... git add terraform.tfstate.gpg etc. were made inside the binary?
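For illustration, the flow being proposed could look roughly like this (sketched in Python purely for readability; cndi itself is not Python, and the passphrase variable name is an assumption):
import os
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

def push_encrypted_state(passphrase_env="TF_STATE_PASSPHRASE"):
    # Same steps the CI currently performs, moved inside the binary.
    run("git", "checkout", "_state")
    run("gpg", "--batch", "--symmetric",
        "--passphrase", os.environ[passphrase_env],
        "--output", "terraform.tfstate.gpg", "terraform.tfstate")
    run("git", "add", "terraform.tfstate.gpg")
    run("git", "commit", "-m", "update terraform state")
    run("git", "push")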
PR: https://github.com/polyseam/cndi/pull/105
|
2025-04-01T06:40:02.373888
| 2023-09-01T08:41:46
|
1877006807
|
{
"authors": [
"rihp"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9706",
"repo": "polywrap/evo.ninja",
"url": "https://github.com/polywrap/evo.ninja/pull/110"
}
|
gharchive/pull-request
|
Update googleAnalytics.tsx to properly track goals
I think the messages are not being logged because they had to be set as the label, instead of the value, which should be a float.
As per https://developers.google.com/tag-platform/devguides/events
|
2025-04-01T06:40:02.385720
| 2020-05-24T21:08:46
|
623946616
|
{
"authors": [
"ValWood",
"kimrutherford",
"mah11"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9707",
"repo": "pombase/website",
"url": "https://github.com/pombase/website/issues/1553"
}
|
gharchive/issue
|
should these 2 links take you to the same page?
both "view 255 genes...." links go to the same page. Is this intentional?
They don't go to the same page. The first is a table of genes. The second link is a table of single allele genotypes and has an Allele column.
Of course. I was not seeing the "allele". I don't use this link much.
I wonder if it would be better to present the allele column first in this view? (it should not affect anything because the download options are not available from here).
I wonder if it would be better to present the allele column first in this view?
Sounds sensible. Shall I go ahead and do it?
Just wait in case @mah11 thinks there is a good reason not to.
I think it would be useful for this page to be a bit more obvious that it's genotypes (and a bit different from other pages), so I'm for it
no objection; no strong preference
I wonder if it would be better to present the allele column first in this view?
Should we continue to order by the product as we do for gene tables?
OK...
All done.
|
2025-04-01T06:40:02.386998
| 2019-03-03T15:04:41
|
416525477
|
{
"authors": [
"areebbeigh",
"pomber"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9708",
"repo": "pomber/git-history",
"url": "https://github.com/pomber/git-history/issues/114"
}
|
gharchive/issue
|
Unique URL for every commit
I think being able to share a git-history URL pointing to a specific commit in the file history would be pretty useful. At least it's definitely useful to me, personally.
That's something I want to add too. Check #42
|
2025-04-01T06:40:02.393278
| 2024-06-23T15:38:50
|
2368642169
|
{
"authors": [
"mathieu-lemay",
"pommee"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9709",
"repo": "pommee/Pocker",
"url": "https://github.com/pommee/Pocker/pull/2"
}
|
gharchive/pull-request
|
Ensure config file directory exists before creating config file
If the config file directory doesn't already exist, the application will crash on startup while trying to create the default configuration. Fix this by creating the directory.
Drive-by: Replace os.path functions by pathlib's functions
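A minimal sketch of the fix, assuming a pathlib-based config module (the exact path and default content are illustrative, not Pocker's actual code):
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "pocker" / "settings.yaml"  # assumed path

def ensure_default_config(default_text: str = "") -> None:
    # Create the parent directory first; otherwise writing the default
    # config crashes on a fresh system where the directory doesn't exist.
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    if not CONFIG_PATH.exists():
        CONFIG_PATH.write_text(default_text)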
LGTM. However, please use semantic commit messages.
In this case it would be "fix: ensure config file dir exists".
Correct the commit message and I will merge.
No problem, I've just updated it. Feel free to update the PR title to match if you prefer.
|
2025-04-01T06:40:02.397815
| 2023-04-11T21:53:22
|
1663269858
|
{
"authors": [
"noloerino"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9710",
"repo": "ponder-org/modin-public",
"url": "https://github.com/ponder-org/modin-public/pull/35"
}
|
gharchive/pull-request
|
POND-983: Add support for torch_func [upstream]
What do these changes do?
[x] first commit message and PR title follow format outlined here
NOTE: If you edit the PR title to match this format, you need to add another commit (even if it's empty) or amend your last commit for the CI job that checks the PR title to pick up the new PR title.
[ ] passes flake8 modin/ asv_bench/benchmarks scripts/doc_checker.py
[ ] passes black --check modin/ asv_bench/benchmarks scripts/doc_checker.py
[ ] signed commit with git commit -s
[ ] Resolves #?
[ ] tests added and passing
[ ] module layout described at docs/development/architecture.rst is up-to-date
There are some incidental formatting changes from running black on numpy/arr.py. I tested that a few functions work locally on pushdown (torch.mul, torch.ge, etc.), but I haven't added test cases since it would require adding an extra dependency.
|
2025-04-01T06:40:02.463031
| 2017-07-14T00:38:45
|
242867282
|
{
"authors": [
"Praetonus",
"SeanTAllen",
"jemc",
"kulibali"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9711",
"repo": "ponylang/ponyc",
"url": "https://github.com/ponylang/ponyc/pull/2039"
}
|
gharchive/pull-request
|
Feature: Explicit partial calls.
This PR implements #1771.
As promised in #1771, I've prepared a script that can be used to automatically migrate your codebase to use explicit partial calls. You can find it here:
https://gist.github.com/jemc/95969e3e2b58ddb0dede138c737907f5
This is nearly done, but I'm running into issues with SEGVs in the JIT-using compiler tests. @Praetonus, I was hoping you could take a look, since you know the most about those JIT tests.
I don't understand how the SEGVs could be caused by this kind of change. My first thought was that f43d671 might be related since it changes a bit how token objects are freed in the parser, but reverting that commit and running the tests with all the other changes included still produced the same errors.
@jemc I'll take a look.
I'm not seeing any segfaults locally, do you have a minimal case for that?
Also, it looks like you didn't update the grammar file, the CI is failing because of that.
@Praetonus figured out why I was seeing SEGVs, and filed #2047 to fix it :+1:
As soon as this passes, it's ready to merge. However, I've left on the DO NOT MERGE label because I want to clear this with @SeanTAllen first and make sure we have whatever release notes or other text we need written up to make this a release.
@jemc what platforms has the migration script been tested on?
@SeanTAllen - only my own (Fedora 22 Linux).
I suspect it should work on any Posix-compliant system with bash. I'm not sure what we should do about the Windows crowd - I hear that the latest windows has bash support, so that might work okay.
I can test on OSX sometime in the next few days (maybe this weekend).
I think it would be good to get someone to test on Windows in some fashion or come up with a Windows solution if we can.
@kulibali Would you have a moment to look at this?
Sure, I can take a look this evening.
I have created an equivalent Windows PowerShell script at https://gist.github.com/kulibali/cd5caf3a32d510bb86412f3fd4d52d0f
I am running into an issue where I have a class with an add method that is partial, which I can use the + sugar to call. Compiling with this update gives the usual error message:
C:\Users\Gordon\Dev\Pony\kiuatan\src\kiuatan\_test.pony:192:24: call is not partial but the method is - a question mark is required after this call
let next = start + str.size()
^
The script changes the code to
let next = start +? str.size()
But this doesn't compile. Is there a way to call a partial add method using the + operator? Or do I have to use add explicitly?
Discussed this in the sync call.
I will update the PR to make the +? syntax work for @kulibali.
I will rebase to fix the merge conflicts.
I will get a thumbs-up from @kulibali and @SeanTAllen on the migration script working before merging.
We'll ignore the pony-stable compilation failures in the CI, then follow up with a PR to fix pony-stable.
After merging we'll want to initiate the 0.16.0 release fairly soon, so users of the "latest release" of pony will be able to have codebases that compile, which also compile for users of the "latest master revision" of pony.
Alright, @kulibali and @SeanTAllen - this is ready for your final testing on Windows and MacOS before we merge.
Works for me
@SeanTAllen gave me permission offline to merge this and fix any possible MacOS issues with the migration script later.
Here we go...
|
2025-04-01T06:40:02.472855
| 2017-07-22T06:47:24
|
244828794
|
{
"authors": [
"nilslice",
"webeau"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9712",
"repo": "ponzu-cms/ponzu",
"url": "https://github.com/ponzu-cms/ponzu/issues/177"
}
|
gharchive/issue
|
Feature Request: Bind to IP
I was wondering if there is any way to support binding to a particular IP address or if you might be able to address that in a future release? I have multiple domains on a single dedicated server with 16 individual addresses. It appears that Ponzu will listen to all addresses on the HTTPS port. Thanks!
We definitely should support this - and I can try to get it added this weekend.
Would another CLI flag paired with the run command suit your needs?
Are you kidding? YES. Right now that's my biggest limitation to implementing Ponzu. Whenever you are able to get to it, please know it will be very much appreciated!
Sure thing! I'll ping you here once I have it complete. Out of curiosity, would you be able to share a bit about:
how you found Ponzu
what you're building
the deployment / hosting environment you're running
Feedback like this is super helpful to make it a better product & dev experience.
Thanks,
Steve
I have been searching for a headless CMS for some time. I have not been satisfied with the shoehorning of established CMSs into this area. I also believe that website development should be moving towards application-based designs. So I basically found Ponzu through Google.
My interest is two-fold: I need an API interface for a small chain of movie theatres where imdb/tmdb solutions for information are overkill and limit the owners' ability to customize the results.
Second, I would like to build a frontend framework that could take information from Ponzu and generate complete web applications using Angular.
Currently I use NGHTTPX as a reverse proxy to Nginx. Why? Because of its HTTP/2 Push ability that Nginx still lacks (among other reasons). My server is an OVH dedicated server with 64GB, a 2TB RAID array and a modern processor (I forget the nomenclature right now).
I was also curious about the References add-on for Ponzu. It reminds me of JSON-LD on the surface (I haven't had a chance to dive deeply into it).
Thanks, again, for your prompt response. I was stunned by the speed with which you responded!
Hey @webeau -
This is now available in the master branch. You can get it by running $ go get -u github.com/ponzu-cms/ponzu/... and then from inside your projects, you can run $ ponzu upgrade to make sure each project has the latest core code.
Let me know if this works for you. The new --bind option for the CLI run command is documented here: https://docs.ponzu-cms.org/CLI/General-Usage/#run
Thank you for the feedback -- that sounds like a great set up and I think Ponzu would be a perfectly suitable option for your CMS / API needs. If you need any other help or have other thoughts about Ponzu, feel free to file another issue or chat with the community on slack on the #ponzu channel at https://gophers.slack.com/messages/C3TBV356D/
I had not seen JSON-LD before, but you are right - the references concept in Ponzu is very similar! I think the added bonus of Ponzu's architecture is that since the JSON responses reference same-origin data URIs, you can easily push them down with HTTP/2 Server Push. You probably already saw Ponzu's Server Push integration, but if not, it's as easy as adding a Push() method to your Content types. Here are the docs for that: https://docs.ponzu-cms.org/Interfaces/Item/#itempushable
Second, I would like to build a frontend framework that could take information from Ponzu and generate complete web applications using Angular.
If you get around to starting this, please let me know -- I'd love to see if I could help or at least follow along :) You might find @natdm's Typewriter project interesting since it helps sync your Go content types (data models) with your front-end code. There is an example using Ponzu.
|
2025-04-01T06:40:02.474501
| 2015-12-09T19:47:10
|
121323880
|
{
"authors": [
"Chibaheit",
"vuryleo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9713",
"repo": "poooi/plugin-prophet",
"url": "https://github.com/poooi/plugin-prophet/issues/58"
}
|
gharchive/issue
|
Lack of removeEventListener in unmount event
As the title says, the event listener would never be removed once added.
Added.
df238ef
|
2025-04-01T06:40:02.479264
| 2024-06-21T09:04:33
|
2366079376
|
{
"authors": [
"abckhush",
"masterboy376",
"pooranjoyb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9714",
"repo": "pooranjoyb/popShop",
"url": "https://github.com/pooranjoyb/popShop/issues/259"
}
|
gharchive/issue
|
[BUG]: No checks applicable
Description
There are two bugs I would like to fix.
When the product is added to the cart, there is no requirement for size. Without the size being selected, it is being added to the cart. I want to add a validation check for that.
After checkout, the cart should be empty. Even after checkout, the cart shows all the items which were just purchased. I would like to add a validation check for that as well.
Kindly assign me this issue under GSSoC'24 with an appropriate level.
Screenshots
Bug1
Bug2
Any additional information?
No response
What browser are you seeing the problem on?
Chrome
Congratulations, @abckhush! Thank you for creating your issue. Your contribution is greatly appreciated and we look forward to working with you to resolve the issue. Keep up the great work! We will promptly review your changes and offer feedback. Keep up the excellent work! Kindly remember to check our contributing guidelines
Hello @pooranjoyb I would like to work on this issue under gssoc'24
Assigned to @abckhush on a FCFS basis. @masterboy376 is next in line :)
|
2025-04-01T06:40:02.526677
| 2024-11-25T04:13:46
|
2689021207
|
{
"authors": [
"MattWellie",
"codecov-commenter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9715",
"repo": "populationgenomics/production-pipelines",
"url": "https://github.com/populationgenomics/production-pipelines/pull/1011"
}
|
gharchive/pull-request
|
SingleSample VCF from VDS correction
Another day, another gVCF -> VDS -> MT -> VCF -> Validation hiccup. Here the issue is that MTs that originate from VDSs don't have the FILTERS field; it simply doesn't exist. The 'single sample VCF from MT' script expected a Filters column, so we run into a failure when running the mt.filters.length() == 0 test. When we write the MT to VCF it does generate an empty FILTERS field on all variant rows, so there are no other compatibility issues with downstream tools (VEP, Hap.py, VQSR).
Couple of changes:
Drop gvcf_info completely. We can put it back in later if we need it, but for now we can't export VCFs with this field present, so strip it out early on
Add a --clean flag to the VCF-from-MT script. If used this will remove all non-variant rows, AND filter on mt.filters (but only if filters exists in the MT)
Drops mt.variant_qc before writing the VCF. I think this would have been dropped anyway, but just to make sure
Replace repartition with naive_coalesce - there are genuinely empty partitions (at least in the validation dataset), so reducing the number of partitions can be done cheaply by removing empty ones completely. Computationally this is much cheaper than doing a full repartition. I'm not sure if we'll use this route anyway, as it would be better for us to feed exact partitions into the combiner up front.
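A hedged sketch of the relevant Hail calls; the paths, partition count, and dropped field names here are illustrative rather than the pipeline's actual code:
import hail as hl

mt = hl.read_matrix_table("gs://bucket/dataset.mt")  # placeholder path

# Strip fields that can't round-trip to VCF / aren't needed downstream.
mt = mt.drop("gvcf_info", "variant_qc")

# naive_coalesce only merges adjacent partitions, so removing genuinely
# empty ones is cheap; mt.repartition(200) would trigger a full shuffle.
mt = mt.naive_coalesce(200)

hl.export_vcf(mt, "gs://bucket/dataset.vcf.bgz")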
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 26.62%. Comparing base (69e951c) to head (e54b7dc).
Additional details and impacted files
@@ Coverage Diff @@
## main #1011 +/- ##
=======================================
Coverage 26.62% 26.62%
=======================================
Files 9 9
Lines 1705 1705
=======================================
Hits 454 454
Misses 1251 1251
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
π¨ Try these New Features:
Flaky Tests Detection - Detect and resolve failed and flaky tests
|
2025-04-01T06:40:02.541136
| 2024-12-19T23:26:18
|
2751605168
|
{
"authors": [
"markmichon"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9716",
"repo": "portabletext/editor",
"url": "https://github.com/portabletext/editor/pull/631"
}
|
gharchive/pull-request
|
wip: annotated typedocs progress
Quite a few misc complications still with the typedocs. Not entirely sure if the problem is how we do things, how they assume people do things, or something in between.
I did discover that pkg-utils is blocking our ability to wholesale use all the typedocs tags. Should be able to manually add them via package.config.ts until we hit an inline tag. At that point the library will need an update, since the type only supports block and modifier right now.
If the groups or categories will actually work, I think we'll be in good shape, but this implementation is still just "okay".
The createMarkdownBehaviors page is a good example of the bizarre way it handles some properties.
|
2025-04-01T06:40:02.556430
| 2017-09-07T16:48:51
|
256002720
|
{
"authors": [
"asasmoyo",
"deviantony",
"kwerle",
"ncresswell",
"zwx168238"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9717",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/1182"
}
|
gharchive/issue
|
Using https://my.host.name/v2/ for private repo chokes
I tried to configure a private registry using the host url I docker log'ed in to - which was like
https://my.host.name/v2/
It turns out that portainer didn't like that, but gave no immediate feedback. Instead it said:
"Image from container: invalid reference format"
when I tried to add a new container.
I recommend that when you add a registry portainer tries to validate it. If that's not possible then it'd be nice for the failure message to be way more explicit. Maybe including some technical error information.
You should probably try with https://my.host.name instead of https://my.host.name/v2/
I agree, we will try to test the connectivity before creating the registry.
Yes, that is the fix. My point was that it wasn't clear what the issue was. A connectivity test would have nailed it at registry creation time - if that's possible... Thanks!
I got the same issue.
My settings were like below:
Registry URL https://njdocker1.nj.thundersoft.com
name harbor
Then I wanted to pull an image:
name njdocker1.nj.thundersoft.com/public/rsyslog:1.0
Registry harbor
I set the auth info, and if I use docker pull njdocker1.nj.thundersoft.com/public/rsyslog:1.0 I can pull the image from my own registry.
You need to add the port number to the registry URL... port 5000 is the Docker default.
Rgds,
Neil Cresswell
@ncresswell thanks, but we use Harbor; its default port is 80. But even after adding the port it still doesn't work and throws "Failure: invalid reference format".
@zwx168238 are you using njdocker1.nj.thundersoft.com as the registry URL and public/rsyslog:1.0 as the image you pull?
@deviantony I use njdocker1.nj.thundersoft.com/public/rsyslog:1.0 as the image; it is the full path.
@zwx168238 you need to create a registry first using njdocker1.nj.thundersoft.com and then select that registry when creating a container / pulling an image and use public/rsyslog:1.0 as the image name.
@deviantony actually I did it like this
@zwx168238 and this is not working? Feel free to ping me on Slack to discuss this.
@deviantony thanks for your hard work, now it works
Hello! I would like to start contributing here. If no one is working on this, I would like to take it :)
@asasmoyo Nobody is working on this yet, feel free to open a PR :-)
Great :)
I am thinking of doing an HTTP request to URL/v2 and then checking whether it returns 200. Do you think it is enough to just do this?
@asasmoyo yes, that should be enough for a check.
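For illustration, the check could look like this (sketched in Python, though Portainer itself is Go/JS; treating 401 as reachable is an extra assumption, since registries that require auth answer /v2/ with 401):
import requests

def registry_reachable(url: str, timeout: float = 5.0) -> bool:
    try:
        resp = requests.get(url.rstrip("/") + "/v2/", timeout=timeout)
    except requests.RequestException:
        return False
    # 200: open registry; 401: registry is up but wants credentials.
    return resp.status_code in (200, 401)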
@deviantony please take a look at my PR
|
2025-04-01T06:40:02.566228
| 2017-12-08T19:29:07
|
280592693
|
{
"authors": [
"AlexJakeGreen",
"deviantony",
"ncresswell",
"xoxys",
"zeenlym"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9718",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/1483"
}
|
gharchive/issue
|
LDAP Auto create Users
Hi, LDAP authentication is great, but is it possible to have a switch to enable auto-creating users?
Same opinion! Also add support to get other attributes from LDAP and attribute mapping like Name, Surname, Email and Team Membership in relation to LDAP attributes.
We can, but you would still need to define which users can access which endpoints... unless we switch "teams" to be based on an LDAP group...
Rgds,
Neil Cresswell
@zeenlym See my comment, i think this feature is also needed. Don't use only LDAP Groups. I would suggest to make the relation to a team flexible with LDAP Attribute Mapping.
@xoxys i'm agree with you
@xoxys please open another issue for LDAP attribute mapping. We'll track the user auto-creation feature in this one.
@AlexJakeGreen feel free to open a PR :-)
@deviantony Here is the PR #1839
Thanks!
PS: Maybe it is better to not store LDAP users in the db at all, since Portainer already uses signed cookies and user data (including roles) can be kept there. But I don't know how this approach fits further development plans, and it looks like it should be a different story.
I just had a look at the PR @AlexJakeGreen
How do you address the fact that even if you automatically create the user from LDAP, they're still unable to access any endpoint? Thus, new users are still blocked at the authentication screen.
A new user receives a JWT token, but yes, he is not added into any group and thus has to be managed additionally via the web UI, so this PR solves only the user auto-creation part.
So, the user's permissions have to be granted in some way, and I can see several strategies:
1. Add the new user into some preexisting 'default' team. This way we skip LDAP groups and still need to manually grant permissions via the web UI. Possible, but not flexible for me because all my users live in LDAP.
2. Add a third hardcoded ReadOnlyRole (the first two are Admin and StandardUser) and assign it to the user. The user will be able to see resources from the start, but an admin still needs to assign him to a proper team later. This starts to be OK for me, but the community may have a different opinion, so confirmation is needed. There was an issue for read-only access, but it is now closed and it seems it will be implemented in a different way.
3. Implement a mapping of LDAP group -> team. This is the best option, but needs much more work in both the Go and JS code, and we'd still need something for read-only access...
IMO the best solution is to implement the LDAP group <> Portainer team mapping.
What are your thoughts, @ncresswell?
Regarding read-only access, this is going to be tackled in https://github.com/portainer/portainer/issues/1259
I agree with LDAP - team mapping. When a user logs in, if they are a member of an LDAP group that has a corresponding Portainer team associated, then auto-create their account in that team.
Hi everyone,
I think LDAP mapping is the best solution; I can wait for it to be available.
Thanks,
|
2025-04-01T06:40:02.572717
| 2018-01-23T16:56:01
|
290913836
|
{
"authors": [
"WTFKr0",
"deviantony",
"maocorte",
"unlucio"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9719",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/1597"
}
|
gharchive/issue
|
Swarm visualizer - color by service
Could we implement a color by service model, like in https://github.com/dockersamples/docker-swarm-visualizer
I know colors are for task status at the moment, but when we show only running tasks, it could be cool to identify that my services run on all nodes!
For information, here is the same cluster in the Portainer swarm visualizer:
Thanks for reading
Hi, I have tried a solution like the one in the image posted below.
The idea is to check service status using the task background color and status label, and use the border color to identify the service.
The function used to get the color for a service is the same as the one used in docker-swarm-visualizer, which uses the service id.
Could this example be a good solution for this feature?
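For reference, the deterministic id-to-color idea sketched in Python (docker-swarm-visualizer's actual implementation is JavaScript; the hash and slicing here are illustrative):
import hashlib

def service_color(service_id: str) -> str:
    # Hash the ID so the same service always gets the same border color.
    digest = hashlib.md5(service_id.encode()).hexdigest()
    return "#" + digest[:6]  # first 3 bytes reused as an RGB hex color

# The mapping is stable across renders:
assert service_color("abc123") == service_color("abc123")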
@maocorte looks good, I'll have a look at the PR.
I'm sorry for the necro-posting, but this is the only thing I found about those frame colors that are literally driving me nuts.
From the PR it seems like they mean pretty much nothing (thanks for making me go nuts on this for weeks, btw)
Checking the current code, the function generating the random colour from the container ID is gone, and in its place I found this
visualizerTaskBorderColor
https://github.com/portainer/portainer/blob/develop/app/docker/views/swarm/visualizer/swarmvisualizer.html#L110
and I honestly can't work out what it should mean.
So I'm pretty much back to square one.
What do the colored frames mean?
Kindly, my brain just goes spinning at 1000% and gets frustrated every time I drop into that view; so many visual clues and not an explanation is just torture :(
|
2025-04-01T06:40:02.578768
| 2019-12-23T14:34:33
|
541780120
|
{
"authors": [
"itsconquest",
"qmager"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9720",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/3479"
}
|
gharchive/issue
|
Container not being listed in Container tab
Bug description
When I search for a container after I restart it, it disappears from the container tab for a few minutes. docker ps seems to still find it.
Expected behavior
Portainer should always show every container that docker ps can show, given I have enough privilege for that container.
Steps to reproduce the issue:
Go to a container
Click on restart
Search for it in the container tab
Can't find it
Technical details:
Portainer version: 1.23.0
Docker version (managed by Portainer): 18.09.1
Platform (windows/linux): Linux
Command used to start Portainer (docker run -p 9000:9000 portainer/portainer):
docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Browser: Firefox 71.0
Additional context
This problem appeared after the update to 1.23.0 and activating the Role Access Extension.
Update: It seems like Portainer has issues displaying containers with healthcheck in a starting state.
It immediately showed up after I stopped the container with the CLI.
Closing as I believe this is a duplicate of #3146
However, I think you could mention "This problem appeared after the update to 1.23.0 and activating the Role Access Extension" as this is an interesting observation that may help to fix this bug.
|
2025-04-01T06:40:02.680034
| 2023-12-11T19:58:37
|
2036427119
|
{
"authors": [
"clementnuss",
"onetwopunch"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9721",
"repo": "postfinance/kubelet-csr-approver",
"url": "https://github.com/postfinance/kubelet-csr-approver/issues/211"
}
|
gharchive/issue
|
Unable to retrieve the complete list of server APIs: certificates.k8s.io/v1 with default deployment
I'm using the default options in deploy/k8s and only overriding the KUBERNETES_SERVICE_{HOST,PORT} environment variables but getting the following error about a minute or so after the pod starts:
E1211 19:49:34.608041 1 leaderelection.go:332]
{
"level":"ERROR",
"ts":"2023-12-11T19:44:49.789Z",
"logger":"controller-runtime.source.EventHandler",
"caller":"source/kind.go:68",
"msg":"failed to get informer from cache",
"error":"failed to get API group resources:
unable to retrieve the complete list of server APIs:
certificates.k8s.io/v1:
Get \"https://API_SERVER:6443/apis/certificates.k8s.io/v1\":
dial tcp: lookup API_SERVER: i/o timeout"
}
I know that pods are able to talk to the api server because I have a running deployment of kube-state-metrics that also overrides the same env vars with the same values.
I'm running kubernetes 1.28.3 if that's helpful.
then I would assume you misconfigured your environment variables
can you show me how you did that?
also, it's typically not needed to customize the KUBERNETES_SERVICE_{HOST,PORT} envs, because K8s sets those automatically. Can you try to run it again without modifying these envs?
closing without further info
|
2025-04-01T06:40:02.706100
| 2016-12-07T00:57:28
|
193929390
|
{
"authors": [
"mutewinter",
"shrunyan"
],
"license": "cc0-1.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:9722",
"repo": "postlight/awesome-cms",
"url": "https://github.com/postlight/awesome-cms/pull/35"
}
|
gharchive/pull-request
|
Add Zesty.io
[x] verified that the CMS I'm adding is still maintained.
[x] read CONTRIBUTING.md.
[x] did not generate README.md.
Thank you, @shrunyan!
|