| added (string, date 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:09.669142
| 2022-03-07T10:28:42
|
1161209331
|
{
"authors": [
"dubstard"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9831",
"repo": "polkadot-js/phishing",
"url": "https://github.com/polkadot-js/phishing/pull/1229"
}
|
gharchive/pull-request
|
Update all.json
"7polkadot.js.org",
"connectwallet-kucoin.gq",
"connectwallet-kucoin.ml",
"cryptoverse-token.sale",
"cryptoxtoken.xyz",
"elongatetoken.me",
"elonkingtoken.com",
"enpolkadot.js.org",
"kusamaproject.com",
"kusamaprojects.com",
"metamask-recovery.net",
"metamask-walletconnect.ga",
"palcapolkadot.js.org",
"polkadot-wallet.net",
"polkadot-wallet.xyz",
"polkadottii3.co.uk",
"polkadotweb.js.org",
"restore-metamask.info",
"yourwalletconnect.online",
kusamaproject.com and kusamaprojects.com attempt to interact with the extension.
I also stumbled upon
1polkadot.com.br - seems safe
There is also
polkadot.energy where the email is tied to holdpolkadot.com
What irked me is that they are very similar (1 letter difference, same registrar and hosting (AliDNS)).
I am afraid to let it interact with the extension though.
I will remove them for now.
|
2025-04-01T04:35:09.673700
| 2024-09-21T17:43:52
|
2540435406
|
{
"authors": [
"JoshQuake",
"jack-yao91",
"vaeryn-uk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9832",
"repo": "poly-hammer/BlenderTools",
"url": "https://github.com/poly-hammer/BlenderTools/pull/84"
}
|
gharchive/pull-request
|
Custom root bone name setting
Useful when you have multiple armatures in a Blender project that you want to export when the skeleton in Unreal has a root bone. I could not use the "object name as root bone" as I cannot give two armatures the same name.
This is a quick change that I'm sharing to gauge interest. Let me know if you're interested and if there's anything I should add to the PR (tests?)
Ooo thanks for the contribution! Looks like it should integrate nicely without breaking anything so it should be good, but if you can, creating some tests would be great!
@jack-yao91, whenever you are able: we seem to have an issue with workflows on PRs that aren't from us.
I've had a go at the tests, but found the structure unfamiliar so please shout if I'm missing some conventions.
I've confirmed the new test passed locally, but had timeout issues running the whole suite. I suspect this is a local machine issue rather than this being a breaking change; I'm hoping CI will be able to confirm that for us?
Thanks!
Hi @vaeryn-uk, sorry, just now getting around to looking at this. Looks good! Since there are some credentials injected into the workflow, we don't automatically let PRs from forks run the workflow.
To test we will have to PR from a branch within this repo. I created this new branch for you: send2ue-custom-root-bone-name. Can you close this PR and PR to that? I'll merge it and then we can open PRs from there to main. Thanks
Thanks @jack-yao91. I have created #87.
|
2025-04-01T04:35:09.721801
| 2018-07-02T18:45:56
|
337624018
|
{
"authors": [
"fdinsdale",
"polygraphene",
"saintkamus"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9833",
"repo": "polygraphene/ALVR",
"url": "https://github.com/polygraphene/ALVR/issues/118"
}
|
gharchive/issue
|
FPS changing to 60
Is anyone else seeing this happen?
When I'm connected and streaming to my OGo with Elite Dangerous, for example, I can see in the server tab and also the debug tab that the connected FPS is 72. It displays that frame rate as long as I'm connected. If I remove the HMD to do something and it goes to sleep, when I reconnect to the server the FPS falls to 60. It will stay at that rate until I quit everything and relaunch.
I'm running v2.1.5 server and client and for the most part this app works really well.
Thanks!
I think this is a bug in connection recovery when the app resumes.
And I think it is fixed in the alpha version.
Please try when the next stable is released.
If so, it is just a display bug; the real FPS is 72 even when the bug occurs.
"When I'm connected and streaming to my OGo with Elite Dangerous for example. I can see in the server tab and also the debug tab that the connected FPS is 72. "
You mean the refresh rate, right? Because the FPS is not even close to 60 or 72 on the headset.
On the server tab it's called FPS and on the debug tab it's called refresh rate. Either way I think you're right; what's getting to the headset display seems to be less than 60. I notice the chop more in Xplane 11 than in ED.
My hardware setup for wifi is 802.11ac and an Ethernet connection from my PC to the n600 router.
I imagine we will see improvements with forthcoming versions. For me I'm happy to have the use of my OGo for steam vr apps.
@saintkamus Yes. The value means refresh rate.
@fdinsdale Does the issue appear in all games?
I wrote how to check FPS on https://github.com/polygraphene/ALVR/issues/116#issuecomment-402032841
If you can use adb, please try.
It did that running Xplane 11 also. Even just running steam vr without an app/game it happens.
|
2025-04-01T04:35:09.753037
| 2016-06-21T15:23:31
|
161466547
|
{
"authors": [
"mah11"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9834",
"repo": "pombase/fypo",
"url": "https://github.com/pombase/fypo/issues/2698"
}
|
gharchive/issue
|
PMID:17724078 heterozygosity loss
no xps; these are weird
1 decreased
2 increased
break-induced loss of heterozygosity
in which one of two different alleles of a gene is lost more|less frequently than normal after a double-strand break forms nearby
3 abolished break-induced loss of heterozygosity via chromosomal translocation
chromosomal translocation that would result in the loss of one of two different alleles of a gene does not occur
comment: Loss of heterozygosity associated with chromosomal translocation may result from allelic crossover or break-induced replication. In cells with this phenotype, loss of heterozygosity may or may not occur via a different mechanism, such as chromosomal truncation.
4 telomere assembly at double-strand break site
telomere assembly occurs at one or more chromosome ends generated by a double-strand break
synonyms: telomere formation, de novo telomere addition
comment: This phenotype can lead to loss of heterozygosity.
decreased break-induced loss of heterozygosity FYPO:0005451
increased break-induced loss of heterozygosity FYPO:0005451
abolished break-induced loss of heterozygosity via chromosomal translocation FYPO:0005453
abnormal telomere assembly FYPO:0005454
telomere assembly at double-strand break site FYPO:0005455
edit file: a928e345fb59b32c28d133e2ea37ed1779a4d8d5
release: 96ebac8c2152a2fe15b9bdc0f73147c9b64d4f36
note - FYPO:0005453 is not is_a FYPO:0005451 because the total LOH level may be higher or lower than normal in FYPO:0005453
|
2025-04-01T04:35:09.754132
| 2023-02-20T08:07:10
|
1591352680
|
{
"authors": [
"ValWood"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9835",
"repo": "pombase/fypo",
"url": "https://github.com/pombase/fypo/issues/4276"
}
|
gharchive/issue
|
PMID:20089861 hcs1
decreased homocitrate synthase inhibition by L-lysine (feedback inhibition)
child of decreased catalytic activity
A molecular function phenotype in which the observed rate of homocitrate synthase activity is increased, due to the absence of lysine inhibition.
|
2025-04-01T04:35:09.756820
| 2021-08-17T12:59:30
|
972663081
|
{
"authors": [
"kimrutherford"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9836",
"repo": "pombase/website",
"url": "https://github.com/pombase/website/issues/1741"
}
|
gharchive/issue
|
Fix new Pandoc warnings about duplicate identifiers
For example:
[WARNING] Duplicate identifier 'pombetalks-may-th' at line 314 column 1
See https://github.com/japonicusdb/japonicus-config/issues/3#issuecomment-900275681 for more.
I fixed a problem with my Perl code which got rid of some of the warnings (it was unnecessarily removing digits from titles before making ids for linking).
There were other warnings because there were news items with identical titles, which generated identical IDs. For example there were several items with the title "PomBase data update". Most were quite old. To fix those problems I added the date of each news item to the title to prevent duplication:
PomBase data update 2013-06-20
It's not very elegant but since these are old news items I figured it would be OK.
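For illustration (the actual fix was in Perl; the titles here are hypothetical), a minimal Python sketch of the date-suffix workaround for duplicate Pandoc-style identifiers:
import re
from collections import Counter

def slug(title):
    # Rough approximation of Pandoc's auto-identifier rules
    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

titles = [('PomBase data update', '2013-06-20'),
          ('PomBase data update', '2014-01-15')]
counts = Counter(slug(t) for t, _ in titles)
# Append the news item's date whenever the bare title would collide
deduped = [f'{t} {d}' if counts[slug(t)] > 1 else t for t, d in titles]
print([slug(t) for t in deduped])
# ['pombase-data-update-2013-06-20', 'pombase-data-update-2014-01-15']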
There was one news item which I fixed in a different way. In src/docs/news.PomBase/2020-09-11-hermes-transposon-data-long.md I changed Hermes transposon insertions in PomBase to New dataset: Hermes transposon insertions.
So now there are no warnings from Pandoc.
|
2025-04-01T04:35:09.873929
| 2024-01-10T04:14:25
|
2073552188
|
{
"authors": [
"hopedisastro"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9837",
"repo": "populationgenomics/sv-workflows",
"url": "https://github.com/populationgenomics/sv-workflows/pull/110"
}
|
gharchive/pull-request
|
Create joint_merge_str_runner.py
This mergeSTR script is able to merge STR VCFs across 2 file directories, or dataset buckets.
Relative to existing merge_str_runner.py, this script requires:
two input file directories
two sample ID lists (containing internal sample IDs separated by \n)
(will wait until mergeSTR batch job on a single directory successfully completes before requesting review)
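A rough sketch (file names hypothetical, not the runner's actual code) of how such newline-separated sample-ID lists might be read:
from pathlib import Path

def read_sample_ids(path):
    # One internal sample ID per line; skip blank lines
    return [s.strip() for s in Path(path).read_text().splitlines() if s.strip()]

samples_a = read_sample_ids('dataset_a_samples.txt')
samples_b = read_sample_ids('dataset_b_samples.txt')
print(len(samples_a), len(samples_b))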
|
2025-04-01T04:35:09.927246
| 2019-08-12T07:20:00
|
479499747
|
{
"authors": [
"ztj1993"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9838",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/3082"
}
|
gharchive/issue
|
portainer stacks use existing networks
I see in Portainer: "Only Compose file format version 2 is supported at the moment."
I have a network named custom; I want to use this network in my compose file and customize the IP address. I have tried a variety of solutions; how do I configure it?
version: "2"
services:
  file:
    image: alpine
    command: sleep 100000
    container_name: tmp
    volumes:
      - /opt/tmp:/opt/tmp
      - /etc/localtime:/etc/localtime:ro
    networks:
      custom:
        ipv4_address: <IP_ADDRESS>
networks:
  custom:
    external: true
see: https://github.com/portainer/portainer/issues/2041
version: "2"
services:
  file:
    image: alpine
    command: sleep 100000
    container_name: tmp
    volumes:
      - /opt/tmp:/opt/tmp
      - /etc/localtime:/etc/localtime:ro
    networks:
      custom:
        ipv4_address: <IP_ADDRESS>
networks:
  custom:
    external:
      name: custom
|
2025-04-01T04:35:09.943542
| 2020-07-30T18:38:53
|
669071165
|
{
"authors": [
"ncresswell",
"tiagosamaha"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9839",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/4133"
}
|
gharchive/issue
|
Stack don't create network
Bug description
Using docker-compose file version 2.4, when I add a network I get an error.
Deployment error
Error response from daemon:
page not found
Expected behavior
I'm just trying to create a network between the services.
Portainer Logs
2020/07/30 18:19:40 http error: Error response from daemon: page not found (err=Error response from daemon: page not found) (code=500)
Steps to reproduce the issue:
Create a new stack
Add compose script
Define restriction to team
Deploy the stack
Technical details:
Portainer version: 1.24.1
Docker version (managed by Portainer): Docker version 19.03.12, build 48a66213fe
Platform (windows/linux): Linux Ubuntu 19.10
Command used to start Portainer (docker run -p 9000:9000 portainer/portainer):
docker run -d -p <IP_ADDRESS>:9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer:1.24.1-alpine
Browser: Chrome Version 84.0.4147.105 (Official Build) (64-bit)
Additional context
version: "2.4"
services:
rabbitmq:
image: rabbitmq:3.8.5-management
networks:
- test_default
ports:
- 15672:15672
networks:
test_default:
driver: bridge
Only "links" works on it. But links are legacy. Anyone have any ideia what's going on?
This issue seems to have existed for quite some time; I see the same issue right back to Portainer 1.20.0, which would imply that this is a limitation with libcompose, which is what we use for non-swarm stack (compose) deployments.
Have you tried this against a swarm environment?
I have just one node. But I have enabled swarm to test, but I had the same results.
When I enabled the swarm, services box was show on dashboard.
Using the same compose file?
The example compose you shared would not even deploy on Swarm, so you would need to modify for it. Please share the one you tried against swarm.
Sorry. I have used this compose file.
This one works for me just fine on swarm:
version: "3.2"
services:
rabbitmq:
image: rabbitmq:3.8.5-management
networks:
- rabbit
ports:
- 15672:15672
networks:
rabbit:
driver: overlay
I'll test it. But must I use swarm?
Thanks in advance!
Yes, must be swarm.
Libcompose, which is what we use for non-swarm, is just too buggy and feature-limited, and we have a plan in place to replace it moving forward, so we are not able to resolve any issues related to non-swarm stacks in the current versions.
@ncresswell It works! Thank you so much!
I haven't found this tip in the docs. It would be good to highlight it.
Does the stack support compose files up to version 3.7?
Only up to 3.6 for now.
Rgds,
Neil Cresswell
|
2025-04-01T04:35:09.946771
| 2020-10-15T01:53:31
|
721908404
|
{
"authors": [
"deviantony",
"unlucio"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9840",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/4392"
}
|
gharchive/issue
|
Add the ability to connect Portainer to an external database
Requirements
Add the ability to connect Portainer to an external database to provide an HA deployment (the database technology will need to support HA deployments).
This needs to be provided as an alternative way to deploy Portainer as we want to keep the simplicity as the default option.
The database configuration must be supported via CLI (flags or configuration file to be considered).
Considerations
As we've discussed internally, the key/value model is now limiting us and we might decide to switch to another database technology; as such, we might need to find an alternative to the current technology powering a simple deployment (boltdb).
Interesting tools to consider/investigate:
https://dqlite.io/
https://github.com/rqlite/rqlite
This would also help if you keep Portainer's data dir on a NAS share, since sqlite doesn't like staying on a network fs.
Any news, btw? I've found this issue because of that ⬆
|
2025-04-01T04:35:09.953959
| 2021-07-13T16:42:29
|
943623965
|
{
"authors": [
"SvenDowideit",
"john8329",
"srebala"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9841",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/5305"
}
|
gharchive/issue
|
Log viewer seems to show logs in scrambled order
Bug description
The log viewer window seems to display log lines in a chaotic order when the service is crashing at every startup, making inspections more difficult.
Expected behavior
The logs should not mix lines but display them in the same order as they were emitted by the container(s)
Steps to reproduce the issue:
Create a service that crashes on startup
Run it inside a global service with a restart policy set to on-failure
Check the log viewer
Technical details:
Portainer version: 2.6.1
Docker version (managed by Portainer): 20
Platform (windows/linux): linux
Use Case (delete as appropriate): Small startup, evaluating the tool for bigger deployments
Have you reviewed our technical documentation and knowledge base? Yes
You can see in the screenshot that the golang trace is mixed with other lines (some details are obscured for security).
More details:
The "server.yaml" message shouldn't happen in that moment in my app, it surely was from before the issue was fixed.
This doesn't happen if I inspect the single container, only when I check the service's logs (which merges its containers' logs)
Swarm tries to recreate the container and leaves the failed ones (rightly so), which are the ones that originate the crash logs
Hi
please share the service yaml used to reproduce this issue
version: '3.7'
services:
  max:
    image: '###############'
    ports:
      - '#####:#####'
    volumes:
      - max_data:/max/data
    environment:
      DB_HOST: 'database'
      DB_PORT: '#####'
      DB_USER: '#####'
      DB_PASS: '#####'
      DB_NAME: '#####'
      SERVER_NAME: ${CLIENT_NAME} Server
      PRINT_SERVICE_WEB_HOST: print
    networks:
      - backend
    depends_on:
      - print
      - database
    labels:
      #####.client-code: ${CLIENT_CODE}
      #####.client-name: ${CLIENT_NAME}
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
...
After doing some more tests today, I can confirm it's quite reproducible. New logs aren't shown, and old logs get mixed up. The timestamps should be the evidence. Happens only when seeing them from the service, not the container.
I'm going to suggest that this is really a Docker issue - https://github.com/moby/moby/issues/33673 (and many others) -
and I think you can improve it by using one of the more complicated log drivers. But there is buffering in the system, which prefers not losing info over preserving order.
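As an aside, one client-side way to restore a stable order when merging several already-sorted per-container streams is a timestamp merge. A minimal Python sketch (log lines hypothetical, assuming each line is prefixed with an ISO timestamp):
import heapq
from datetime import datetime

container_a = ['2021-07-13T16:40:01 starting', '2021-07-13T16:40:03 crashed']
container_b = ['2021-07-13T16:40:02 starting', '2021-07-13T16:40:04 crashed']

def keyed(lines):
    for line in lines:
        ts, _, msg = line.partition(' ')
        yield datetime.fromisoformat(ts), msg

# heapq.merge interleaves the already-sorted streams by timestamp
for ts, msg in heapq.merge(keyed(container_a), keyed(container_b)):
    print(ts, msg)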
|
2025-04-01T04:35:09.956282
| 2022-06-20T20:24:32
|
1277356305
|
{
"authors": [
"Shady0xfee1dead",
"samdulam"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9842",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/7099"
}
|
gharchive/issue
|
Portainer on Mixed Architecture Docker Swarm - Agent not deploying
Use Case: I set up a Docker Swarm at home; to meet quorum requirements I have 1 Raspberry Pi acting as a manager only (set to Drain) and 2 x86-based 64-bit 1U servers (32 cores / 64 GB RAM) as Manager/Worker nodes. Using the instructions on the Deploy Portainer to a Docker Swarm page I executed the deployment. Portainer itself works fine on any node; however, the agent will not deploy to the Raspberry Pi no matter what.
Have you reviewed our technical documentation and knowledge base? No
Question:
How can I deploy Portainer Agent successfully on the Raspberry Pi Node?
You may want to take manager out of drain mode, deploy portainer stack and then use pause mode so it won't accept any new tasks.
I tried that as well, with no luck. I can get Portainer installed on the Pi and all is well, but if I take the Pi offline and bring it back after a different node has been elected leader of the swarm, the Pi doesn't show up with any stats, as the Portainer agent fails to install.
|
2025-04-01T04:35:09.959953
| 2020-10-25T16:21:06
|
729056342
|
{
"authors": [
"deviantony",
"knittl",
"mcpacino",
"sbusso"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9843",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/pull/4413"
}
|
gharchive/pull-request
|
#4374 feat(images): Add link to Docker Hub on container creation page
Add a button next to the image field when creating a new container, which
takes the user to the Docker Hub search page for this image. Version
identifiers are trimmed from the image name to ensure that matching images
will be found.
Closes #4374
Thanks for the contribution @knittl , I'll review it shortly.
/azp run
@pull-dog down
@deviantony done. Can you have a second look? :)
/azp run
@pull-dog up
@pull-dog down
hi @knittl thanks for the code and sorry for the late request, this PR is going through QA, could you push the branch rebased on latest develop?
@sbusso sure. I have just rebased and pushed.
@pull-dog down
|
2025-04-01T04:35:09.962295
| 2021-09-23T09:22:31
|
1005202293
|
{
"authors": [
"WaysonWei",
"huib-portainer"
],
"license": "Zlib",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9844",
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/pull/5733"
}
|
gharchive/pull-request
|
feat(k8s): add filter for k8s application type EE-1627
Closes EE-1627.
Changelog
This PR introduces a filter for Kubernetes application types:
Deployment
Daemonset
Statefulset
Pod
Helm
Closes https://github.com/portainer/portainer/issues/5726
|
2025-04-01T04:35:09.970491
| 2024-09-30T18:16:12
|
2557353815
|
{
"authors": [
"dotNomad",
"kgartland-rstudio"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9845",
"repo": "posit-dev/publisher",
"url": "https://github.com/posit-dev/publisher/pull/2332"
}
|
gharchive/pull-request
|
inotify doc update
Adds inotify instructions to docs for linux users.
[ ] Bug Fix
[ ] New Feature
[ ] Breaking Change
[x] Documentation
[ ] Refactor
[ ] Tooling
Resolves #2329
|
2025-04-01T04:35:09.978001
| 2023-10-25T18:18:24
|
1961979147
|
{
"authors": [
"catphish",
"washcroft",
"willpower232"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9847",
"repo": "postalserver/postal",
"url": "https://github.com/postalserver/postal/issues/2675"
}
|
gharchive/issue
|
Message Charset / Content-Transfer-Encoding Issue
When a message contains no parts (i.e. a message with a simple HTML body), that message may have been sent with a specific Charset and/or Content-Transfer-Encoding specified in the headers governing the whole message (e.g. Content-Transfer-Encoding: quoted-printable).
If link/open tracking is enabled, the message is put through the parser and the body is altered accordingly, but at the same time Postal is overwriting these two headers on the Mail object here:
https://github.com/postalserver/postal/blob/2f62baa238fc1102706ee4acf079b7a876b05283/lib/postal/message_parser.rb#L36-L43
...this means @mail.to_s is returning a differently encoded body. Normally this would be fine, but the new message headers are not updated in Postal's raw tables - meaning when Postal comes to deliver the message, it does so using the headers of the message as it was received, but with a body that has since been encoded differently.
This means email clients are having a hard time rendering the message, often failing miserably.
To fix this, I have commented out the four lines which are overwriting these two headers - but I suspect the proper solution needs to allow updated headers being returned from the parser?
Ah good spot.
Unfortunately the message with the converted links is not preserved and I feel that is on purpose however I have seen other software only record the version with the changed links so perhaps it would be more correct to save the tracked links to the database.
Presumably an alternative would be to try and identify the current charset for the message? Not sure how feasible that is.
Not storing the original body is OK, as only the parsed/injected body is needed for delivery past this stage of processing.
However, the headers need to be updated as well to avoid the situation where the headers say one thing but the body is something different.
Or perhaps my solution is fine; is there a reason why Postal tries to force the Charset / Content-Transfer-Encoding?
Oh I see what you mean, that is a bit outside my knowledge, I can see that the raw_message already contains the headers so hopefully there is some method available to change the headers without reconstructing the message even further.
I don't know why it forces it as it does, I could only imagine that something in the inserted links or images would require it but I don't know if there is a difference between ANSI HTML or whatever and UTF-8 HTML.
The UTF-8 charset is probably less problematic, but when the content transfer encoding is changed that breaks all kinds of things - particularly when a message is sent as quoted-printable and then comes out of the parser as not quoted-printable. HTML can no longer be rendered properly, because the equals signs in HTML tags aren't escaped.
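As a standalone illustration (Postal itself is Ruby): quoted-printable escapes every '=' as '=3D', so a quoted-printable body delivered under headers that no longer declare that encoding renders with '=3D' inside every HTML attribute.
import quopri

html = b'<a href="https://example.com">link</a>'
encoded = quopri.encodestring(html)
print(encoded.decode())
# prints something like: <a href=3D"https://example.com">link</a>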
Thank you for the detailed bug report. This is fixed by 2834e2c37971db9b0b0498e38b382cf1f8ee26eb
|
2025-04-01T04:35:09.983560
| 2017-05-16T17:34:48
|
229112085
|
{
"authors": [
"evilebottnawi",
"rfgamaral"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9848",
"repo": "postcss/postcss-loader",
"url": "https://github.com/postcss/postcss-loader/issues/235"
}
|
gharchive/issue
|
What's the correct way to configure plugins within Webpack 2?
I've always had this loader configuration in my Webpack 2 configuration:
loader: 'postcss-loader',
options: {
  sourceMap: 'inline',
  plugins: () => {
    return [
      require('postcss-import'),
      require('postcss-cssnext')
    ];
  }
}
But I look at the README and I see this:
loader: 'postcss-loader',
options: {
  plugins: (loader) => [
    require('postcss-import')({ root: loader.resourcePath }),
    require('cssnext')(),
    require('autoprefixer')(),
    require('cssnano')()
  ]
}
And I'm confused about the correct way to require plugins. I've tested both these approaches:
require('postcss-cssnext')
require('postcss-cssnext')()
Both work, but what is the correct way and what are the differences?
Also, I never defined the root property for postcss-import and it is working. In what circumstances would I need to define it?
@rfgamaral both are valid. If you want to pass options to a plugin, use require('postcss-cssnext')({...options}) (see pluginFunction in the docs); if not, just use require('postcss-cssnext') (see Plugin in the docs). It is related to postcss (http://api.postcss.org/Processor.html#use)
More simply, with examples:
Using postcss.plugin("postcss-plugin", (options) => {}); you get a Plugin (http://api.postcss.org/postcss.html#.plugin)
Using postcss.plugin("postcss-plugin", (options) => {})(); you get a pluginFunction (http://api.postcss.org/global.html#pluginFunction)
@rfgamaral ping me if need more help
No need. I get it now. Thanks very much.
|
2025-04-01T04:35:09.998013
| 2017-06-30T15:15:14
|
239809580
|
{
"authors": [
"Janaka-Steph",
"kishorevarma",
"michael-ciniawsky"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9849",
"repo": "postcss/postcss-loader",
"url": "https://github.com/postcss/postcss-loader/issues/271"
}
|
gharchive/issue
|
Warning "Previous source map found, but options.sourceMap isn't set"
Hello,
Despite all my efforts, I can't get rid of the warning "Previous source map found, but options.sourceMap isn't set"
Some packages I have:
babel-plugin-react-css-modules: 3.0.0
bootstrap: 4.0.0-alpha.6
bootstrap-loader: 2.1.0
node-sass: 4.5.3
postcss-loader: 2.0.6
postcss-scss: 1.0.1
precss: 1.4.0
sass-loader: 6.0.6
webpack: 2.6.1
webpack-dev-server: 2.5.0
My webpack.config.js : https://pastebin.com/QgLGNUbH
My webpack.bootsrap.config.js : https://pastebin.com/X9H0CTv6
My postcss.config.js : https://pastebin.com/a5grb2qx
My .bootstraprc : https://pastebin.com/NVq8Ne4J
The github repo : https://github.com/asseth/dao1901
I mention disableSassSourceMap: true in the .bootstraprc but it has no effect on the warnings.
Thank you for your help
postcss-scss: 1.0.1
precss: 1.4.0
You won't need them when using SASS (sass-loader) itself; the former is for stylelint (syntax) and the latter mimics SASS for folks who want to use PostCSS standalone. Either remove sass-loader or remove those two.
babel-plugin-react-css-modules: 3.0.0
I personally never used it myself, but if you have
{
  loader: 'css-loader',
  options: { modules: true }
}
you won't need it either. It's another way to use CSS Modules; both in the same config won't make much sense, and I recommend using css-loader
As stated on gitter, I also would simply use a CDN link for vendor CSS libs instead of such madness loaders like bootstrap-loader; webpack doesn't shine at this (global CSS) in general
style.css
.title {
  font-size: 3rem;
}
component.js
import React from 'react'
import $ from './style.css'
export default (props) => (
  <div className='container'> {/* Bootstrap CSS (Global) */}
    <h1 className={$.title}>{props.title}</h1> {/* Custom CSS (Local) */}
  </div>
)
I'll take a deeper look at bootstrap-loader when I have time. Please try removing it, and let us know when you get a working build without it :)
If I remove all things related to bootstrap, but not the rest, and import bootstrap from CDN, then I don't have the warnings anymore, but now it doesn't load my custom styles. :-(
Hi,
I have a similar issue, even though I set the soureMap true in config.
module.exports = {
sourceMap: true,
plugins: [require('autoprefixer')]
};
I have tried to debug issue. I think we are not checking sourceMap option from config file.
@kishorevarma sourceMap isn't supported by postcss.config.js; you need to add it to webpack.config.js instead. The separate config only supports PostCSS-related Options && Plugins, to work across the different PostCSS runners (CLI, gulp, ...) seamlessly. sourceMap is webpack specific and therefore => webpack.config.js
postcss.config.js
module.exports = {
-  sourceMap: true,
  plugins: [require('autoprefixer')]
};
webpack.config.js
{
  loader: 'postcss-loader',
+  options: { sourceMap: true }
}
@Janaka-Steph I can only recommend skipping frameworks like bootstrap when using webpack; the component-based architecture isn't well suited for CSS frameworks in general.
One attempt (I wasn't too 'lucky' with either) is to require()/import bootstrap in the app entry App.js
boostrap.scss
/* Individualize (S)CSS here */
@import bootstrap/../...scss;
...
...
App.js
import React from 'react'
// Only imported once in the entry
import 'boostrap.scss'
import $ from 'style.scss'
// => Component [index.js, style.scss, ...assets]
import Component from './components/component'
...
class App extends React.Component {
  render () {
    return (
      <div className="container">
        <Component />
      </div>
    )
  }
}
⚠️ You will need the extract-text-webpack-plugin in production then, since the whole bootstrap blob is in the JS otherwise :)
In any case, just compare <link href="path/to/bootstrap.min.css"> to hundreds of lines for requiring a CSS file like bootstrap-loader does atm. Not to bad-mouth bootstrap-loader in particular here; there is simply no straightforward way to do it, but the approach is off in general imho :). I also couldn't find the section where postcss-loader is required by bootstrap-loader tbh, so please open an issue there if you intend to use the loader nevertheless 😛
I agree that requiring Bootstrap from the CDN is much simpler, but then we lose the SCSS bootstrap variables...
thanks @michael-ciniawsky
|
2025-04-01T04:35:10.138387
| 2023-02-12T12:33:05
|
1581241898
|
{
"authors": [
"Llois41",
"unshame"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9850",
"repo": "posva/unplugin-vue-router",
"url": "https://github.com/posva/unplugin-vue-router/issues/128"
}
|
gharchive/issue
|
Provide typings for ESM (TS error when using moduleResolution node16 or nodenext)
Hey there,
first of all, I'm excited to use this plugin after seeing the presentation at VueJS Amsterdam this week. :)
I think this Plugin has a similar issue as mentioned in https://github.com/vitejs/vite/issues/10481 because I get the same error TS2349: This expression is not callable. Type 'typeof import("<...>/node_modules/unplugin-vue-router/dist/vite")' has no call signatures.
A current workaround is declaring the correct typings myself
declare module 'unplugin-vue-router/vite' {
  import { type Plugin } from 'vite';
  import { type Options } from 'unplugin-vue-router/options';
  const plugin: (options?: Options) => Plugin;
  export default plugin;
}
My Project files:
// tsconfig.json
{
  "compilerOptions": {
    "target": "esnext",
    "module": "nodenext",
    "moduleResolution": "nodenext",
    "outDir": "build/js",
    "noUnusedLocals": true
  }
}
// package.json
{
  "name": "my-project",
  "version": "0.0.1",
  "type": "module",
  "packageManager"<EMAIL_ADDRESS>
  "scripts": {...},
  "dependencies": {
    "vue": "3.2.47",
    "vue-router": "4.1.6"
  },
  "devDependencies": {
    "@vitejs/plugin-vue": "4.0.0",
    "@vue/test-utils": "2.2.10",
    "jsdom": "21.1.0",
    "typescript": "4.9.5",
    "unplugin-vue-router": "0.3.2",
    "vite": "4.1.1",
    "vitest": "0.28.4"
  }
}
I'm also getting multiple errors when I try to compile my project after generating the typed routes.
I really can't get my head wrapped around the TypeScript-Node-ESM-CJS stuff so maybe it's just the state of all those frameworks/libraries (not) working together.
Try add "type": "module" to your package.json.
Try adding "type": "module" to your package.json.
It's already in there ;)
I will try to make use of TS 5.0's new "moduleResolution": "bundler". Also, compiling the dependencies is not the best idea; I will try out skipLibCheck.
|
2025-04-01T04:35:10.174479
| 2017-03-22T13:58:08
|
216072132
|
{
"authors": [
"PatMyron",
"delijati",
"hughchristensen"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9851",
"repo": "powdahound/ec2instances.info",
"url": "https://github.com/powdahound/ec2instances.info/issues/241"
}
|
gharchive/issue
|
Add a column to EC2 Instance Types: "Supports EMR"
Add a column to http://www.ec2instances.info/ called "supports emr" which has the value 0 or 1.
If the EC2 instance can be used in an EMR cluster, add 1. If it cannot be, add 0.
https://aws.amazon.com/emr/pricing/
Done with #349
@powdahound can be closed
https://github.com/powdahound/ec2instances.info/blob/5281faf2a9dfec2bff233ae375b6872f90e8f756/in/index.html.mako#L152
|
2025-04-01T04:35:10.190414
| 2024-10-01T07:56:01
|
2558463602
|
{
"authors": [
"Chriztiaan",
"DominicGBauer",
"fooware",
"guillempuche"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9852",
"repo": "powersync-ja/powersync-js",
"url": "https://github.com/powersync-ja/powersync-js/pull/325"
}
|
gharchive/pull-request
|
[Fix] packages/react issue with useQuery not supporting dynamic query parameters
Original issue reported at https://github.com/powersync-ja/powersync-js/issues/323.
Dynamic query dependencies would warn with the following if the query parameter array changed in size:
Warning: The final argument passed to useMemo changed size between renders. The order and size of this array must remain constant. Previous: [] Incoming: [xxxxxxxxxxxxx]
Tested with the following to verify that it works now:
const [params, setParams] = React.useState<string[]>([]);
const { data } = useQuery(`select * from lists where id=?`, params);
onClick: setParams(["some-uuid"])
The change from React.useMemo(() => {}, [...array]); to React.useMemo(() => {},[JSON.stringify(array)]) should not introduce any issues, as valid query parameters are expected to be serialisable.
We may want to look at something like this in the future https://github.com/sandiiarov/use-deep-compare#readme.
Very cool Christian!
I just got hit with this today in our process of porting over our codebase from Electric. Spent quite some time trying to figure out which useMemo it was, until I found this. Thanks for fixing it @Chriztiaan!
Any idea when this could be merged and released? Is there any canary / nightly one could test after it is merged?
@fooware, I'll release this soonest and ping you.
|
2025-04-01T04:35:10.253896
| 2020-05-18T20:56:08
|
620500784
|
{
"authors": [
"frenzibyte",
"swoolcock"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9854",
"repo": "ppy/osu-framework",
"url": "https://github.com/ppy/osu-framework/pull/3560"
}
|
gharchive/pull-request
|
Fix project build warnings & unresolved xmldoc references
Can we do these changes in the Colour4 replacement PR instead?
Sure, I'll close this PR then.
|
2025-04-01T04:35:10.318311
| 2020-06-21T14:35:35
|
642575111
|
{
"authors": [
"FFY00",
"pradyunsg"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9855",
"repo": "pradyunsg/installer",
"url": "https://github.com/pradyunsg/installer/pull/18"
}
|
gharchive/pull-request
|
records: add Hash.validate()
Signed-off-by: Filipe Laíns<EMAIL_ADDRESS>
Thank you @FFY00 for the PR and @uranusjr for the review! ^.^
|
2025-04-01T04:35:10.325065
| 2022-09-18T20:31:52
|
1377170835
|
{
"authors": [
"capt-nemo429",
"pragmaxim"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9856",
"repo": "pragmaxim/ergo-uexplorer",
"url": "https://github.com/pragmaxim/ergo-uexplorer/issues/1"
}
|
gharchive/issue
|
Problematic queries on current explorer schema
Unspent boxes by address or ErgoTree
Unspent boxes by tokenId
Unspent boxes by register
Transactions by address or ErgoTree
Going well so far, step-by-step https://github.com/pragmaxim/ergo-uexplorer#examples
|
2025-04-01T04:35:10.337946
| 2020-11-27T21:46:26
|
752481874
|
{
"authors": [
"Mate20x",
"prakashsatyani"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9857",
"repo": "prakashsatyani/cordova-plugin-ble-zbtprinter",
"url": "https://github.com/prakashsatyani/cordova-plugin-ble-zbtprinter/issues/2"
}
|
gharchive/issue
|
Thank you for providing such a great plugin
@prakashsatyani
Thank you for providing such a great plugin
I want to ask about this plugin
Is it possible to add socket TCP/IP printing?
TCP/IP is also mainstream for printing.
I hope the next version can add this feature.
Thank you
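For context, a hedged Python sketch independent of this plugin: raw TCP/IP printing usually means opening a socket to the printer's raw port (commonly 9100) and sending printer-ready bytes.
import socket

PRINTER = ('192.168.1.50', 9100)  # hypothetical printer address; 9100 is the common raw port
payload = b'Hello, printer!\n' + b'\x1d\x56\x00'  # text plus an ESC/POS full-cut command

with socket.create_connection(PRINTER, timeout=5) as sock:
    sock.sendall(payload)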
@Mate20x Thank you for your kind words!!
Currently I am not focusing on this repository. However, in future I will try to upgrade the code base and add TCP/IP support as well :)
|
2025-04-01T04:35:10.340793
| 2017-04-15T17:28:00
|
221961522
|
{
"authors": [
"prakhar1989",
"teldosas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9858",
"repo": "prakhar1989/react-tags",
"url": "https://github.com/prakhar1989/react-tags/pull/153"
}
|
gharchive/pull-request
|
Make drag n drop smoother
Following this example (source) I made the drag n drop smoother
Check it out
Hey @teldosas: This is awesome! I remember wanting to implement this, but at that time ReactDND didn't have this example. Can you please resolve the conflicts so I can review this? I'd suggest not adding any file from the dist-* and dist folders in this PR.
@prakhar1989 It should be ready now :)
I wish the ReactDND examples included some example for testing these interactions. The current test suite lacks any unit tests for this behavior 😥
It should be ready :)
Great! Thanks for addressing the comments!
|
2025-04-01T04:35:10.342872
| 2022-07-31T13:46:56
|
1323466860
|
{
"authors": [
"Asttttha",
"GundaSudarrshan",
"jaypansuriya104",
"ruchagosavi"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9859",
"repo": "pranjay-poddar/Dev-Geeks",
"url": "https://github.com/pranjay-poddar/Dev-Geeks/issues/356"
}
|
gharchive/issue
|
Notes app
Notes app using html, css, js.
Expected behavior
You can write your notes and store in here with date and time.
Please assign.
Notes app using HTML, CSS, JS
Special feature: adds notes to your local storage
I am interested in contributing; please assign me!!
@pranjay-poddar I am interested in contributing to it; please assign me if it's available
@pranjay-poddar I have a solution for this issue; please assign me!!
|
2025-04-01T04:35:10.345091
| 2023-05-26T15:35:03
|
1727844296
|
{
"authors": [
"Deepanshi177",
"Harikaraja",
"MohitGupta121",
"pranjay-poddar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9860",
"repo": "pranjay-poddar/Dev-Geeks",
"url": "https://github.com/pranjay-poddar/Dev-Geeks/pull/1463"
}
|
gharchive/pull-request
|
Globe-Quest ( A front-end project) : A fully responsive Travel Website with Dark and Light Mode.
Globe-Quest: A fully responsive Travel Website with Dark and Light Mode.
Description: Created a fully responsive tour and travel website that focuses on travel reviews and trip fares. Used a toggle button to switch between Dark and Light Mode to enhance readability. Adding images of the website for reference.
please assign me the issue
Mention the issue number and strictly follow and fillup the PR template.
Inactive
|
2025-04-01T04:35:10.348310
| 2023-08-16T10:05:12
|
1852917514
|
{
"authors": [
"bcohen44"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9861",
"repo": "prant0-ssp/autodoist",
"url": "https://github.com/prant0-ssp/autodoist/issues/1"
}
|
gharchive/issue
|
getting http 410 errors on connection
Running python3.10, Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Getting 410 errors + logging tracebacks,
bcohen@laptop-bcohen02:~/scripts/autodoist$ python3 autodoist.py -a <xyz> -l next_action --debug
2023-08-16 06:04:26 DEBUG Starting new HTTPS connection (1): api.github.com:443
2023-08-16 06:04:26 DEBUG https://api.github.com:443 "GET /repos/Hoffelhas/autodoist/releases HTTP/1.1" 200 None
2023-08-16 06:04:26 WARNING
Your version is not up-to-date!
Your version: v1.5. Latest version: v2.0
Find the latest version at: https://github.com/Hoffelhas/autodoist/releases/tag/v2.0
2023-08-16 06:04:26 INFO You are running with the following functionalities:
Next action labelling mode: Enabled
Regenerate sub-tasks mode: Disabled
Shifted end-of-day mode: Disabled
2023-08-16 06:04:26 DEBUG Connecting to the Todoist API
2023-08-16 06:04:26 DEBUG Syncing the current state from the API
2023-08-16 06:04:26 DEBUG Starting new HTTPS connection (1): api.todoist.com:443
2023-08-16 06:04:27 DEBUG https://api.todoist.com:443 "POST /sync/v8/sync HTTP/1.1" 410 None
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.10/logging/__init__.py", line 1100, in emit
msg = self.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 943, in format
return fmt.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 678, in format
record.message = record.getMessage()
File "/usr/lib/python3.10/logging/__init__.py", line 368, in getMessage
msg = msg % self.args
TypeError: %d format: a real number is required, not str
Call stack:
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 1005, in <module>
main()
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 959, in main
api, label_id, regen_labels_id = initialise(args)
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 166, in initialise
label_id = verify_label_existance(args, api, args.label, 1)
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 83, in verify_label_existance
logging.debug('Label \'%s\' found as label id %d',
Message: "Label '%s' found as label id %d"
Arguments: ('next_action', 'bf16b708-3b6c-11ee-b7ae-cd769a7403e9')
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.10/logging/__init__.py", line 1100, in emit
msg = self.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 943, in format
return fmt.format(record)
File "/usr/lib/python3.10/logging/__init__.py", line 678, in format
record.message = record.getMessage()
File "/usr/lib/python3.10/logging/__init__.py", line 368, in getMessage
msg = msg % self.args
TypeError: %d format: a real number is required, not str
Call stack:
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 1005, in <module>
main()
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 959, in main
api, label_id, regen_labels_id = initialise(args)
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 166, in initialise
label_id = verify_label_existance(args, api, args.label, 1)
File "/home/BOSDYN/bcohen/scripts/autodoist/autodoist.py", line 83, in verify_label_existance
logging.debug('Label \'%s\' found as label id %d',
Message: "Label '%s' found as label id %d"
Arguments: ('next_action', 'bf16b708-3b6c-11ee-b7ae-cd769a7403e9')
2023-08-16 06:04:27 INFO Autodoist has connected and is running fine!
2023-08-16 06:04:27 DEBUG Syncing the current state from the API
2023-08-16 06:04:27 DEBUG https://api.todoist.com:443 "POST /sync/v8/sync HTTP/1.1" 410 None
2023-08-16 06:04:27 INFO No changes in queue, skipping sync.
2023-08-16 06:04:27 DEBUG Sleeping for 4 seconds
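As an aside, the '--- Logging error ---' blocks above are reproducible in isolation: the label id is a UUID string, %d demands a number, and logging catches the TypeError while the program keeps running. A minimal sketch:
import logging

logging.basicConfig(level=logging.DEBUG)
label_id = 'bf16b708-3b6c-11ee-b7ae-cd769a7403e9'  # a UUID string, not an int

# Reproduces the error above: formatting fails, logging prints the
# '--- Logging error ---' block, and execution continues
logging.debug("Label '%s' found as label id %d", 'next_action', label_id)

# Using %s for the id logs cleanly instead
logging.debug("Label '%s' found as label id %s", 'next_action', label_id)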
Nevermind, this was running 1.5 instead of 2.0
|
2025-04-01T04:35:10.354756
| 2020-12-11T06:04:43
|
761939107
|
{
"authors": [
"prashantsengar",
"skcy2001"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9862",
"repo": "prashantsengar/CleanPy",
"url": "https://github.com/prashantsengar/CleanPy/issues/20"
}
|
gharchive/issue
|
script takes input of folder location to clean
User can specify which folder to clean by mentioning its path in config.ini
Eg.
Config.ini:
PLocation: "C:/xyz/folderlocation"
Cool idea @skcy2001 🆒
I wanted to add this feature in it.
While you are at it, can you also:
add the extension options in the config file too? It is currently a dictionary in the script. Instead of this, we can have the script read this from the config file so that the users can change those.
Take the input folder as a command line argument.
Eg. python arrange.py /PATH/TO/CLEAN
If no path is specified, clean the current directory.
If you can do this, please let me know :) 💯
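A rough Python sketch of the suggested precedence (the config section and key layout are assumptions, not CleanPy's actual code): a command-line path wins, then PLocation from config.ini, then the current directory.
import argparse
import configparser
import os

parser = argparse.ArgumentParser()
parser.add_argument('path', nargs='?', help='folder to clean')
args = parser.parse_args()

config = configparser.ConfigParser()
config.read('config.ini')
folder = args.path or config.get('paths', 'PLocation', fallback=os.getcwd())
print('Cleaning:', folder)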
Yep, I will do it.
Check #10 . Did all the changes you mentioned.
|
2025-04-01T04:35:11.940169
| 2017-11-13T08:08:46
|
273341725
|
{
"authors": [
"mattiagalati",
"mellogarrett",
"prateekbh"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9863",
"repo": "prateekbh/preact-material-components",
"url": "https://github.com/prateekbh/preact-material-components/issues/370"
}
|
gharchive/issue
|
Add support for Textfield with Leading/Trailing icons
Cannot figure out how to achieve the feature "Textfield - Leading/Trailing icons" as presented here: https://material-components-web.appspot.com/textfield.html
Whoops, this is currently unsupported; happy to take this as a feature request.
@mellogarrett will you be able to take this up?
yea, I can work on this. @mattiagalati the link is broken. Mind updating?
https://material-components-web.appspot.com/text-field.html
|
2025-04-01T04:35:11.958608
| 2020-10-12T12:09:00
|
719328423
|
{
"authors": [
"dhairya-parikh",
"pratik-choudhari"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9866",
"repo": "pratik-choudhari/AlgoCode",
"url": "https://github.com/pratik-choudhari/AlgoCode/pull/306"
}
|
gharchive/pull-request
|
Implemented Merge Two Sorted Lists in Java
Description
Question: Given 2 sorted linked lists, merge the lists to a single sorted linked list.
Example:
Input:
List1: 2 -> 4 -> 5 -> 6 -> 8 -> 9
List2: 1 -> 3 -> 7
Output:
1 -> 2 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9
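For illustration only (the submission itself is in Java), a minimal Python sketch of the classic two-pointer merge:
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge(a, b):
    # Walk both sorted lists, always appending the smaller head
    dummy = tail = Node(0)
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b  # append whatever remains
    return dummy.next

def build(vals):
    head = None
    for v in reversed(vals):
        head = Node(v, head)
    return head

m = merge(build([2, 4, 5, 6, 8, 9]), build([1, 3, 7]))
out = []
while m:
    out.append(m.val)
    m = m.next
print(out)  # [1, 2, 3, 4, 5, 6, 7, 8, 9]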
Reference Issue number #270
Type of change
Choosing one or more options from the following as per the nature of your Pull request.
NOTE: Check boxes using [x]
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Documentation Update
Checklist:
[x] I have named my files and folder, according to this project's guidelines.
[x] My code follows the style guidelines of this project.
[x] My Pull Request has a descriptive title. (not a vague title like Update index.md)
[x] I have commented on my code, particularly in hard-to-understand areas.
[x] I have created a helpful and easy to understand README.md, with problem description and my name.
[ ] I have included a requirements.txt file (if external libraries are required.)
[x] My changes do not produce any warnings.
[ ] I have starred this repository.
[ ] I have added a working sample/screenshot of the script.
[x] I have checked for trailing spaces in file names and none have them.
@dhairya-parikh Follow the directory structure. You have made changes in the main repository folder.
|
2025-04-01T04:35:11.983068
| 2021-03-01T13:08:52
|
818850749
|
{
"authors": [
"akalex",
"ashanbrown",
"gabrielf-eb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9867",
"repo": "pre-commit/identify",
"url": "https://github.com/pre-commit/identify/issues/172"
}
|
gharchive/issue
|
Version >1.6 dropped support of Python2.7
Version 1.6.x no longer supports Python 2.7: the module identify/identify.py, at line 31, uses syntax from PEP 448 - Additional Unpacking Generalizations - that requires Python 3.5+.
Is this an expected change, or should it be considered a regression?
Proof:
$ python -V
Python 2.7.16
$ pre-commit run -a
Traceback (most recent call last):
File "/usr/local/bin/pre-commit", line 6, in <module>
from pre_commit.main import main
File "/usr/local/lib/python2.7/site-packages/pre_commit/main.py", line 11, in <module>
from pre_commit import git
File "/usr/local/lib/python2.7/site-packages/pre_commit/git.py", line 7, in <module>
from pre_commit.util import cmd_output
File "/usr/local/lib/python2.7/site-packages/pre_commit/util.py", line 15, in <module>
from pre_commit import parse_shebang
File "/usr/local/lib/python2.7/site-packages/pre_commit/parse_shebang.py", line 6, in <module>
from identify.identify import parse_shebang_from_file
File "/usr/local/lib/python2.7/site-packages/identify/identify.py", line 31
ALL_TAGS = {*TYPE_TAGS, *MODE_TAGS, *ENCODING_TAGS}
^
SyntaxError: invalid syntax
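For reference, the offending construct and a Python 2.7-compatible spelling (tag values here are illustrative, not identify's actual data):
TYPE_TAGS = {'text', 'binary'}
MODE_TAGS = {'executable'}
ENCODING_TAGS = {'utf-8'}

# PEP 448 set unpacking -- a SyntaxError on Python 2.7
ALL_TAGS = {*TYPE_TAGS, *MODE_TAGS, *ENCODING_TAGS}

# A 2.7-compatible spelling of the same union
ALL_TAGS = TYPE_TAGS | MODE_TAGS | ENCODING_TAGS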
Fix
https://github.com/pre-commit/identify/pull/171
At minimum, the major version of identify might need to be bumped. I see this error even when I install pre-commit==1.21.0. pre-commit has this dependency (which would still be problematic with a major version bump): https://github.com/pre-commit/pre-commit/blob/0047fa35dd463aabe85fbd55bbd97ee03479f34c/setup.cfg#L27
|
2025-04-01T04:35:11.984309
| 2022-12-20T05:36:15
|
1504013057
|
{
"authors": [
"asottile",
"ferlatte"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9868",
"repo": "pre-commit/identify",
"url": "https://github.com/pre-commit/identify/pull/355"
}
|
gharchive/pull-request
|
Add support for osascript.
Unlike most other interpreters, osascript supports multiple languages via the -l flag. Identify osascript as either applescript or javascript depending on that flag.
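A hypothetical sketch (not identify's actual API) of the behaviour this PR proposed:
def osascript_tag(cmd):
    # cmd is a parsed shebang, e.g. ['osascript', '-l', 'JavaScript']
    if not cmd or cmd[0] != 'osascript':
        return None
    if '-l' in cmd[:-1]:  # -l needs a following argument
        lang = cmd[cmd.index('-l') + 1].lower()
        return 'javascript' if lang == 'javascript' else 'applescript'
    return 'applescript'  # AppleScript is osascript's default language

print(osascript_tag(['osascript', '-l', 'JavaScript']))  # javascript
print(osascript_tag(['osascript']))                      # applescript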
Is there any feedback on this PR? Happy to make changes to make it easier to merge.
it won't be merged sorry. please in the future discuss feature requests first
|
2025-04-01T04:35:12.033880
| 2022-05-04T10:54:48
|
1225211475
|
{
"authors": [
"Net-burst",
"SerhiiNahornyi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9869",
"repo": "prebid/prebid-cache-java",
"url": "https://github.com/prebid/prebid-cache-java/pull/62"
}
|
gharchive/pull-request
|
Add apache V2 license
Resolves https://github.com/prebid/prebid-cache-java/issues/44
Oh, how about changing the file extension to .md?
|
2025-04-01T04:35:12.047601
| 2021-03-04T06:43:04
|
821814162
|
{
"authors": [
"AbhijitBhosale72",
"bretg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9870",
"repo": "prebid/prebid.github.io",
"url": "https://github.com/prebid/prebid.github.io/pull/2736"
}
|
gharchive/pull-request
|
YuktaMedia Analytics Adapter Documentation Updated
Added new file 'yuktamedia.md' into 'dev-docs/analytics/'.
Easiest to resolve the conflict here in a separate PR -- done in #2753
Note that 'enable_download: false' means it won't show up on https://docs.prebid.org/download.html
|
2025-04-01T04:35:12.048584
| 2018-01-26T02:39:41
|
291783226
|
{
"authors": [
"banakemi",
"rmloveland"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9871",
"repo": "prebid/prebid.github.io",
"url": "https://github.com/prebid/prebid.github.io/pull/575"
}
|
gharchive/pull-request
|
Add adgeneration docs
Ad Generation Adapter dev docs
Should be showing up on the Downloads page soon, thanks @banakemi
|
2025-04-01T04:35:12.049952
| 2019-11-03T22:12:01
|
516897137
|
{
"authors": [
"lovell",
"vweevers"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9872",
"repo": "prebuild/prebuild-install",
"url": "https://github.com/prebuild/prebuild-install/issues/113"
}
|
gharchive/issue
|
Tests are broken on node 12, add to travis
Something to do with nock?
It looks like a lack of faked prebuilds for Node ABI v72 in https://github.com/ralphtheninja/a-native-module/releases is causing this.
PR to fix the tests at #120
|
2025-04-01T04:35:12.090020
| 2024-08-15T14:08:47
|
2468125098
|
{
"authors": [
"ruben-arts"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9873",
"repo": "prefix-dev/pixi",
"url": "https://github.com/prefix-dev/pixi/pull/1820"
}
|
gharchive/pull-request
|
fix: make proper use of NamedChannelOrUrl
Partly fixes #1764
We were no longer reading the channel name in the mapping correctly. This fixes that.
Added a test to verify
@nichmor The tests are failing but to me it seems like I actually fixed a bug.
https://github.com/prefix-dev/pixi/blob/1432a0b80bd64276a2186ee6b7de20bd3ea2c018/tests/solve_group_tests.rs#L444-L448
Here it says that it should take boltons from the ProjectDefinedMapping, but in my new version it tells me it comes from the compressed mapping. To me that seems correct as the custom mapping doesn't have boltons and we don't provide a hash. So it can only come from the compressed_mapping.
Could you tell me what's up?
@baszalmstra This reintroduces the channel-config, as we're comparing the repodata with the channel and thus require the configuration. Could you re-review?
|
2025-04-01T04:35:12.095030
| 2017-09-12T14:51:57
|
257072448
|
{
"authors": [
"adamrigg",
"preichenberger"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9874",
"repo": "preichenberger/go-coinbase-exchange",
"url": "https://github.com/preichenberger/go-coinbase-exchange/issues/24"
}
|
gharchive/issue
|
Accessing struct members of type Time
How are struct members of type Time accessed in go-coinbase-exchange? Is there some way to treat these as standard golang time.Time structs in the existing implementation? Would that require a unique interface or new set of functions within time.go? I'm a bit confused when it comes to the declaration of type Time in this project's time.go.
For example, when modifying the List Account Ledger example from Readme.md (or the main Github page of the project), I wanted to include some useful information about transaction times in a human-readable format using the following line of code:
fmt.Printf(e.CreatedAt.Format("2006-01-02 15:04:05.999999+00"))
This throws the following compiler error:
e.CreatedAt.Format undefined (type coinbase.Time has no field or method Format)
I can see the struct members wall, ext and loc by printing e.CreatedAt (where e is a ledger range) by replacing the print statement above with the following:
fmt.Printf("%#v", e.CreatedAt)
But I'm not quite sure how to parse the resulting string so that the coinbase.Time variables can be treated as standard golang or UNIX times.
This project could use some development with respect to documenting the functions whose combined prototypes and raw code are kind of "self-documenting" rather than verbose. I would be happy to contribute something in that regard should someone point me in the right direction with the underlying fundamentals of go at play here.
Given the limited number of contributors here, I've also posted the question on StackOverflow in a more abstract way (with respect to computer programming terminology) which addresses the more general case of type wrapping/extension:
https://stackoverflow.com/questions/46181419/type-wrapping-of-non-atomic-types-in-golang
One option for this particular case would be to include the following line of code (or something else which works) in the Ledger example:
fmt.Printf((time.Time(e.CreatedAt)).Format("2006-01-02 15:04:05.999999+00"))
(from my accepted StackOverflow answer in the question I posted there and linked to above)
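To make the pattern concrete, here is a self-contained Go sketch of the wrapper-type idea discussed above (the local Time type stands in for coinbase.Time; this is an illustration, not the library's actual code):
package main

import (
	"fmt"
	"time"
)

// Time mirrors the wrapper pattern: a named type over time.Time.
type Time time.Time

func main() {
	createdAt := Time(time.Now())
	// Convert back to time.Time to reach its methods, such as Format.
	fmt.Println(time.Time(createdAt).Format("2006-01-02 15:04:05.999999+00"))
}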
Added this: https://github.com/preichenberger/go-coinbase-exchange/commit/ecf01c862e8ca972dc307284f9584f8d08faf7c0
|
2025-04-01T04:35:12.103148
| 2024-05-13T11:27:56
|
2292545743
|
{
"authors": [
"nsosio",
"vittoriop17"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9875",
"repo": "premAI-io/prem-utils",
"url": "https://github.com/premAI-io/prem-utils/pull/124"
}
|
gharchive/pull-request
|
Updated model pricing
closes #122
A few problems and observations:
some text2vector models do not have the following keys: context_window, output_dimension. I will open another PR and work on that later;
Follow up to the previous point: some text2vector models have both the info mentioned above. Nevertheless, the key output_dimension doesn't seem to be used (the command load_models in prem-saas does not load that info);
I did not find the model pricing for Anyscale models. I believe they use a different pricing system, which is not based on input/output tokens, but rather on usage time;
OpenRouter pricing is dynamic. They route each request internally, to any of the available providers. Thus, the price may differ from time to time. Possible solution: consider the worst-case scenario, thus I could include the input and output price given the most expensive provider;
MistralAI: I got confused while looking at this page because of misnamed models. Plus, some of the values we have in our models.json don't seem to be aligned with what I found in the docs (in terms of context_window)
Cohere: I've struggled to find the costs of command and command-light models. But while searching, I found a disclaimer (see picture). Thus my following question: should we deprecate command and command-light? What's the current usage of these two models? Are users using them?
OK, no problem; the embeddings are lower priority
@giowe are these info used in any way for the RAG?
what are the differences between best and worst scenario? How many openrouter models do we have?
let's correct the non updated value for context_window. Regarding names what do you mean?
we can deprecate command and command-light
the difference can vary based on the model we're talking about. E.g.: llama-3-70B: worst case 0.95$, best case 0.64$. In total, we offer 13 models by OpenRouter;
we have 3 text2text models offered by mistralai: tiny, small, and medium. Instead, if you check the first picture, mistralai offers 6 models. In particular the following optimized versions: small, medium, and large. But if you check the second picture, the open-weights models have naming "conflicts". In particular, open-mixtral-8x7b used to be called mistral-small. Then, in our case, which is the model behind our mistral-small model? And what about mistral-tiny? Is it the open-mistral-7b model? Or am I missing something?
okay
let's add the average value
Now I see. From Mistral AI we should use -latest since now we are using the open-source versions
|
2025-04-01T04:35:12.139333
| 2017-08-10T06:33:00
|
249246149
|
{
"authors": [
"icanfly",
"petroav"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9878",
"repo": "prestodb/ambari-presto-service",
"url": "https://github.com/prestodb/ambari-presto-service/pull/29"
}
|
gharchive/pull-request
|
Fix 'restart presto failed in Ambari 2.5' and Upgrade version of presto component
fix the issue: https://github.com/prestodb/ambari-presto-service/issues/28
@teamsoo thanks for the notes, I have noticed these details. I'll submit another pull request soon, so this request should be closed
@teamsoo it looks like @icanfly reverted the change in a follow up commit. @icanfly if you'd like you can add a second commit to this PR that updates the Presto version to 0.182 in metainfo.xml as @teamsoo suggested as well as updating the links in download.ini.
I'm new to submitting on GitHub. I will take a good look at this article and then hope to do better
@icanfly I took the liberty to incorporate my comments, squashed your commits and merged them to master. Thanks for the pull request!
|
2025-04-01T04:35:12.166344
| 2019-05-31T06:37:58
|
450638998
|
{
"authors": [
"SimonWan1029",
"findepi",
"vincentpoon"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9879",
"repo": "prestosql/presto",
"url": "https://github.com/prestosql/presto/issues/858"
}
|
gharchive/issue
|
Phoenix Connector: lowercase table/view cannot be found
Once a Phoenix view is created whose schema or table name is lowercase, Presto cannot find the table.
Upon cursory reading of the source, I would assume this line might be responsible:
https://github.com/prestosql/presto/blob/b763176ff1c33bce413911908bda67b63ee5e2f4/presto-base-jdbc/src/main/java/io/prestosql/plugin/jdbc/BaseJdbcClient.java#L649-L652
If uppercase is the default case for Phoenix, it may be that this works as intended. JDBC-based connectors support case-insensitive name matching
https://github.com/prestosql/presto/blob/03bb0a426732f6ea947844dcc652a3ae2c76af1b/presto-base-jdbc/src/main/java/io/prestosql/plugin/jdbc/BaseJdbcConfig.java#L107-L112
It's not available for Phoenix though, as the connector does not bind/configure the BaseJdbcConfig object.
@vincentpoon would you like to take a look?
For DatabaseMetaData#storesUpperCaseIdentifiers, Phoenix returns true, so by default unquoted identifiers are uppercased.
I think the issue here is with quoted identifiers. My general recommendation would be to not use quoted mixed/lowercase identifiers. Phoenix does support them, but certain areas can be confusing and/or not implemented well. For example, both PhoenixDatabaseMetaData#storesUpperCaseQuotedIdentifiers and PhoenixDatabaseMetaData#storesMixedCaseQuotedIdentifiers return true. But I'll see if we can support mixed case quoted here in the connector.
@vincentpoon proper support for case sensitive identifiers requires engine changes (https://github.com/prestosql/presto/issues/17).
Part of that will be to add case insensitive table name resolution when user does not quote the identifier in the query.
So when https://github.com/prestosql/presto/issues/17 is done, then:
SELECT .. FROM "some_table" in Presto ⇒ table needs to be exactly lower case (some_table)
SELECT .. FROM "SOME_TABLE" in Presto ⇒ table needs to be exactly upper case (SOME_TABLE)
SELECT .. FROM some_table in Presto ⇒ table needs to be some_table case insensitively (some_table, SomE_TabLe, SOME_TABLE, etc.)
To support 3. the connector will need to be able to do case insensitive name resolution.
In JDBC connectors this is currently supported as an opt-in feature (the case-insensitive-name-matching configuration toggle).
Would it be possible to expose this functionality in Phoenix connector as well?
I guess you didn't bind BaseJdbcConfig in guice because not all its configuration toggles make sense.
Maybe we introduce a separate config class for the opt-in case insensitive name matching?
cc @electrum
@SimonWan1029 @findepi Check out #872 , which adds support for case-insensitive-name-matching
You'll need to add the following line into your phoenix catalog properties file:
case-insensitive-name-matching=true. Then you should be able to query your case sensitive tables.
|
2025-04-01T04:35:12.172118
| 2019-10-31T09:17:22
|
515277508
|
{
"authors": [
"buffcode",
"d4rth-v4d3r",
"ebyhr",
"findepi",
"martint",
"tchunwei",
"zifer123"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9880",
"repo": "prestosql/presto",
"url": "https://github.com/prestosql/presto/pull/1915"
}
|
gharchive/pull-request
|
Support MongoDB case insensitive collection name
Fixes https://github.com/prestosql/presto/issues/1102
Currently, any MongoDB collection with an uppercase name like "Hello" cannot be queried.
This PR makes it possible by supporting case insensitivity for collection name, providing solution for case 2 mentioned here https://github.com/prestosql/presto/issues/1102#issuecomment-536331360, it doesn't help case 1 though.
@cla-bot check
@tchunwei could you please rebase on current master and squash commits? Ideally, there should be just one commit after that.
@findepi thanks for guiding, has rebased, made it into single commit and force pushed. Kindly let me know if I did it incorrectly, thanks again.
@tchunwei Why was this closed?
@buffcode that was done unintentionally, was rebasing my code then accidentally wiped my commit, will re-open later
Thank you @ebyhr for your effort guiding me and reviewing the code. Have updated the code based on the feedback for your further review.
any updated about this?
I am sorry, I do not have much time for this and it is considered low priority for me, so I will put this aside for now. Also, this PR will be a temporary solution until https://github.com/prestosql/presto/pull/2350 is ready. So I think it would be better to wait for https://github.com/prestosql/presto/pull/2350 since it is a better approach?
ok, thanks
Any updates of this?
Covered by #3453
|
2025-04-01T04:35:12.177764
| 2020-08-07T10:43:45
|
674930311
|
{
"authors": [
"dain",
"electrum",
"sopel39"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9881",
"repo": "prestosql/presto",
"url": "https://github.com/prestosql/presto/pull/4728"
}
|
gharchive/pull-request
|
Fix ArbitraryOutputBuffer#isOverutilized
In order for a buffer to be overutilized it has to
have a certain utilization AND it must be in a state
where it can still accept pages. A buffer cannot be
overutilized (and blocked) in the FLUSHING, FINISHED
or FAILED states, as no more pages can be added.
FYI: @rohangarg @dain
This is an incomplete fix. ScaledWriterScheduler uses this flag to determine if at least 50% of the tasks are "overutilized". If we treat non-accepting (finished) buffers as not-overutilized, then it's easy to be under the 50% threshold yet still have producers blocked on writers.
ScaledWriterScheduler already filters out "done" tasks. If we also filter out flushing tasks, then this change should be safe to make. I'm trying to remember if there's any reason we treated finished buffers as overutilized, but this may have been leftover during development, since the design and logic changed several times.
@electrum it seems overutilized might mean something else depending on context.
For scaled writers it seems that overutilized buffers should be:
1. buffers that accept pages AND are filled more than 0.5
2. buffers that are flushing AND are filled more than 0.5.
The rationale for 1) is that we want to preemptively scale writers up while the buffer is filling up.
The rationale for 2) is that we want to scale writers up if there are some buffers that won't produce new pages, but are filled up, so more readers would help to empty them.
Therefore, the condition could be:
memoryManager.getUtilization() >= 0.5
Does that make sense?
I'm trying to remember if there's any reason we treated finished buffers as overutilized, but this may have been leftover during development, since the design and logic changed several times.
I guess maybe the idea was to help write data from flushing buffers. However, it's odd that we treat flushing buffers differently than running buffers. Is a flushing buffer with just one page really overutilized?
I thought we decided we were going to expose the utilization and state so schedulers can have their own algorithms.
I thought we decided we were going to expose the utilization and state so schedulers can have their own algorithms.
This PR was about potentially improving actual condition for scaling writers.
|
2025-04-01T04:35:12.179842
| 2020-09-20T08:49:21
|
705071731
|
{
"authors": [
"ebyhr",
"findepi"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9882",
"repo": "prestosql/presto",
"url": "https://github.com/prestosql/presto/pull/5231"
}
|
gharchive/pull-request
|
Map Cassandra UUID to Presto UUID type
Related to #851
It seems the uuid type on a mutable table created by tempto becomes varchar(36). Filed https://github.com/prestosql/tempto/issues/58
Also do you know the reason why UUID was not in the SPI in the first place?
I think it was intentional, but @electrum would know better.
The general answer was always "use TypeManager to get instance of the type".
However, I am aware it does not work well with static enum-based code used in Cassandra type mappings.
|
2025-04-01T04:35:12.309866
| 2024-05-02T17:31:27
|
2276105090
|
{
"authors": [
"amogh-daryapurkar"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9883",
"repo": "pricefx/pricefx-eds",
"url": "https://github.com/pricefx/pricefx-eds/pull/124"
}
|
gharchive/pull-request
|
Feature: PAI-122 Enable support for scene7 video
Enabled support for dynamic media/scene7 video in embed component
Fix #PAI-122
Test URLs:
Before: https://main--pricefx-eds--pricefx.hlx.live/style-guide/components/embed
After: https://feature-pai-122-embed-video-scene7--pricefx-eds--pricefx.hlx.live/style-guide/components/embed/embed-dynamic-media
@SwathiPrasadRaju Can you approve this and get it merged?
|
2025-04-01T04:35:12.319094
| 2020-03-15T09:41:58
|
581600441
|
{
"authors": [
"jepsar",
"melloware"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9884",
"repo": "primefaces-extensions/core",
"url": "https://github.com/primefaces-extensions/core/pull/98"
}
|
gharchive/pull-request
|
Fix 765 input phone dependency
Fix https://github.com/primefaces-extensions/primefaces-extensions.github.com/issues/765
Then we need to update the documentation as well. As people might already be using this input, that would be a breaking change.
Yep, I will update the wiki.
Docs updated: https://github.com/primefaces-extensions/primefaces-extensions.github.com/wiki/Getting-Started
|
2025-04-01T04:35:12.376464
| 2024-02-15T23:08:22
|
2137595094
|
{
"authors": [
"Lehoczky",
"atakantepe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9885",
"repo": "primefaces/primevue-tailwind",
"url": "https://github.com/primefaces/primevue-tailwind/pull/148"
}
|
gharchive/pull-request
|
refactor: use px utilities from the default tailwind config
Since px is a valid spacing value in the default tailwind config, it is not necessary to use value interpolation to get 1px for padding, margin, etc.
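For illustration, a hypothetical before/after (not from this PR's diff) showing why the arbitrary-value syntax is redundant when px is in the default spacing scale:
<!-- before: value interpolation -->
<div class="p-[1px] m-[1px]"></div>
<!-- after: built-in px utilities -->
<div class="p-px m-px"></div>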
Hi, thank you for your contribution. I merged PR 🌟
|
2025-04-01T04:35:12.393660
| 2023-04-05T18:02:10
|
1656062133
|
{
"authors": [
"GlebIrovich",
"mperrotti"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9886",
"repo": "primer/react",
"url": "https://github.com/primer/react/pull/3121"
}
|
gharchive/pull-request
|
Changes alignment of form control validation message icon
Changes the alignment of the validation message icon to be center-aligned with the first line of text.
Screenshots
Before:
After:
Merge checklist
[ ] Added/updated tests
[ ] Added/updated documentation
[ ] Tested in Chrome
[ ] Tested in Firefox
[ ] Tested in Safari
[ ] Tested in Edge
Take a look at the What we look for in reviews section of the contributing guidelines for more information on how we review PRs.
I'm a little confused by the failing snapshots. I'm going to pull from main and try updating them again...
Hey! Any updates on this PR?
@GlebIrovich - I'm going to try and incorporate @langermank 's feedback today. I'm also not sure if it's actually better, but I'm not opposed to it.
I'll try and nudge for more reviews once I'm done.
|
2025-04-01T04:35:12.395496
| 2023-01-25T04:28:38
|
1556038106
|
{
"authors": [
"Grawl",
"jonrohan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9887",
"repo": "primer/stylelint-config",
"url": "https://github.com/primer/stylelint-config/issues/318"
}
|
gharchive/issue
|
stylelint --fix is not working when primer rules are enabled
If I enable primer/no-unused-vars, I cannot use the --fix feature of Stylelint.
If I just disable it, like "primer/no-unused-vars": false or remove this line, --fix is working again.
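For reference, a minimal .stylelintrc sketch of the workaround described above, mirroring the report's own toggle (stylelint also accepts null to disable a rule):
{
  "extends": "@primer/stylelint-config",
  "rules": {
    "primer/no-unused-vars": false
  }
}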
Thanks for the report! I'll see if I can track down the cause
TypeError: Cannot read properties of undefined (reading 'startsWith')
Should be fixed in 13.1.0
|
2025-04-01T04:35:12.397213
| 2021-01-26T03:01:25
|
793886703
|
{
"authors": [
"joelhawksley",
"srt32"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9888",
"repo": "primer/view_components",
"url": "https://github.com/primer/view_components/pull/165"
}
|
gharchive/pull-request
|
Add log statement on docs:build when component needs docs
We'd like to make sure all components have docs.
This change adds a log statement to the end of docs:build that will
list out the components missing docs.
The logs look like this (and show us our TODO list ;)):
35.44% documented
Converting YARD documentation to Markdown files.
Markdown compiled.
The following components needs docs. Care to contribute them? Primer::UnderlineNavComponent, Primer::HeadingComponent, Primer::FlexItemComponent, Primer::FlexComponent, Primer::DropdownMenuComponent, Primer::DetailsComponent, Primer::BaseComponent
Fantastic!
|
2025-04-01T04:35:12.399567
| 2024-06-10T18:28:00
|
2344569925
|
{
"authors": [
"kushagra-singhh",
"princekhunt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9889",
"repo": "princekhunt/privateping",
"url": "https://github.com/princekhunt/privateping/issues/10"
}
|
gharchive/issue
|
Add about us page
Is your feature request related to a problem? Please describe.
There is no about us page in the website.
Describe the solution you'd like
Creating about us page for the website.
I do not think an About Us page is really necessary.
We're already recognising contributors in the humans.txt file.
Check out humans.txt
Check the source file: source humans.txt
okay.
|
2025-04-01T04:35:12.401085
| 2016-12-27T17:58:04
|
197737324
|
{
"authors": [
"SystemDisc",
"princemaple"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9890",
"repo": "princemaple/angular2-html-syntax",
"url": "https://github.com/princemaple/angular2-html-syntax/issues/1"
}
|
gharchive/issue
|
Cannot use ApplySyntax plugin
Your example shows Angular2HTML/Angular2HTML as the value for syntax for the ApplySyntax plugin but that file does not exist in packages after installing Angular2 HTML Syntax.
The Angular2 HTML Syntax plugin is working and I can manually switch to the syntax, but I would like it to switch automatically.
Sorry about that, I've updated the README to reflect the real path.
My personal dev package is obviously not using the same name as the package that other users would have when installed through package control. My apologies.
|
2025-04-01T04:35:12.407766
| 2024-07-22T11:23:46
|
2422626290
|
{
"authors": [
"MemorySlices",
"skief"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9891",
"repo": "princeton-vl/SEA-RAFT",
"url": "https://github.com/princeton-vl/SEA-RAFT/issues/10"
}
|
gharchive/issue
|
Spring Val Split
Hi,
currently I'm trying to reproduce your results and for the evaluation of the Spring dataset I was wondering which data you used for the validation split since Spring only provides a train-test split. For me the python evaluate.py --cfg config/eval/spring-M.json --model models/Tartan-C-T-TSKH-spring540x960-M.pth from your eval.sh script fails with ValueError: Spring val directory does not exist: datasets/spring/val.
We separate sequences 0045 and 0047 from the original training set (see sec 4.3) as the subval set.
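A hypothetical sketch of carving that subval split out of the training set (the directory layout is assumed from the error message above):
# move sequences 0045 and 0047 out of the training set into the val split
mkdir -p datasets/spring/val
mv datasets/spring/train/0045 datasets/spring/train/0047 datasets/spring/val/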
|
2025-04-01T04:35:12.421315
| 2020-09-15T14:31:55
|
701983004
|
{
"authors": [
"carmenberndt",
"janpio",
"martineboh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9892",
"repo": "prisma/language-tools",
"url": "https://github.com/prisma/language-tools/issues/474"
}
|
gharchive/issue
|
Extension issue
Issue Type: Bug
Extension Name: prisma
Extension Version: 2.6.2
OS Version: Windows_NT x64 10.0.19041
VSCode version: 1.49.0
{
"messages": [],
"activationTimes": {
"codeLoadingTime": 6967,
"activateCallTime": 136,
"activateResolvedTime": 2808,
"activationReason": {
"startup": false,
"extensionId": {
"value": "Prisma.prisma",
"_lower": "prisma.prisma"
},
"activationEvent": "onLanguage:prisma"
}
},
"runtimeErrors": []
}
Hi @martineboh! Can you please describe your issue, i.e. what the problem is?
Hi @carmenberndt I just noticed my schema.prisma file won't auto-format after updating the extension, Prisma CLI, and Client all to 2.6.2. VSCode's Running Extensions page shows Unresponsive (Performance Issue) with activation times as shown above. I restarted VSCode and it automatically updated to 2.7.0, with the same issues. Any help resolving this is appreciated.
Oops! 2.7.0 is out already! I will update and report back ASAP.
Can you please share the log from your output when selecting Prisma Language Server?
Where does this JSON appear? Was this in the Extension Logs Folder or in Startup Performance?
From VSCode: CTRL+SHIFT+P -> Developer: Show Running Extension... Prisma 2.6.2 Unresponsive: Performance Issue. Right-Click to see Report Issue, which I did - this re-directed me here with the generated log copied automatically to my clipboard for pasting.
Ah ok interesting. Are you having the same issue with 2.7.0?
The issue has been resolved. I was actually using a VPN which was causing the issue. I switched to another connection to fix it. Thanks
Oh hi @martineboh - we met on Slack re the binary downloading problem (https://prisma.slack.com/archives/CA491RJH0/p1600254914446100)! Was this related to the same cause maybe? That could point to a problem in our downloading logic in the CLI when the download is very slow or even hangs.
@janpio This is related and it’s fixed - thanks to your suggestion.
@carmenberndt Might be worth trying to reproduce what happens if the binary download does not work or is super slow - that seems to have been the problem for Martín.
|
2025-04-01T04:35:12.431803
| 2020-03-02T02:41:30
|
573680617
|
{
"authors": [
"abisuq",
"janpio",
"pantharshit00",
"snake575",
"tomhoule"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9893",
"repo": "prisma/prisma-client-js",
"url": "https://github.com/prisma/prisma-client-js/issues/537"
}
|
gharchive/issue
|
Mutating a Float with a small value results on a very different value written to Postgres
Hi 👋 I encountered an error when creating or updating a small Float on Postgres.
Repo: snake575/prisma-float-error
Prisma: 2.0.0-preview022
Runtime: query-engine-debian-openssl-1.1.x
PostgreSQL 12.2
Prisma schema:
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model Transaction {
id Int @id @default(autoincrement())
amount Float
currency String
}
Mutation:
const transaction = await prisma.transaction.create({
data: {
amount: 0.00006927,
currency: 'BTC',
},
})
console.log({ transaction })
Wrong amount:
{ transaction: { id: 1, amount: 6.927e-17, currency: 'BTC' } }
Postgres log:
2020-03-01 23:02:16.045 -03 [278] LOG: execute s0: INSERT INTO "public"."Transaction" ("amount","currency") VALUES ($1,$2) RETURNING "public"."Transaction"."id"
2020-03-01 23:02:16.045 -03 [278] DETAIL: parameters: $1 = '0.00000000000000006927', $2 = 'BTC'
Thanks for the detailed write-up and the reproduction repository. I can confirm this issue :)
I managed to reproduce and fix the issue, it's on this branch: https://github.com/prisma/prisma-engines/pull/538 - it contains a lot more work on mapping postgres native types to prisma types, and will be merged soon.
Thanks, that was a quick fix! 😃
I found another problem that may be related
const transaction = await prisma.transaction.create({
data: {
amount: 0.00071832,
currency: 'BTC',
},
})
console.log({ transaction })
Returns correct amount
{ transaction: { id: 2, amount: 0.00071832, currency: 'BTC' } }
But a slightly different value is written to Postgres
2020-03-06 17:00:27.304 -03 [33] LOG: execute s0: INSERT INTO "public"."Transaction" ("amount","currency") VALUES ($1,$2) RETURNING "public"."Transaction"."id"
2020-03-06 17:00:27.304 -03 [33] DETAIL: parameters: $1 = '0.0007183200000000001', $2 = 'BTC
@snake575 Best create a new issue an link to it here instead - this will make it much easier for us (and @tomhoule) to track the problem and the fix. Thanks for reporting!
The second issue should be fixed as well now (latest alpha), but indeed, let's create another issue if it's still happening.
(@tomhoule Already happened: https://github.com/prisma/prisma-client-js/issues/555 Might want to close this one as well then later if really fixed)
I can confirm this is fixed with the engine changes I mentioned above. Confirmed with manual testing on alpha.927.
The fixes should be in the next preview release :)
@janpio I confirm this is fixed on alpha 927, it's also fixed on preview-24
Thanks @snake575 <3
@janpio It was right when writing to the database, but it was wrong when reading. (version 2.0.1)
I can confirm this: the original problem was solved, but since Prisma 2.1.0 the value is created correctly in the database yet a different value is returned to the client (2.0.1 on both @prisma/client and @prisma/cli works for me, though).
I updated the repro: snake575/prisma-float-error with prisma 2.1.3
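A minimal round-trip check for this class of bug, as a sketch against the Transaction model above (findUnique per the current client API; older previews used findOne):
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  const created = await prisma.transaction.create({
    data: { amount: 0.00006927, currency: 'BTC' },
  })
  const read = await prisma.transaction.findUnique({ where: { id: created.id } })
  // Both should print 0.00006927 once the engine fix is in place
  console.log(created.amount, read?.amount)
}

main().finally(() => prisma.$disconnect())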
|
2025-04-01T04:35:12.441985
| 2020-06-15T19:10:33
|
639075421
|
{
"authors": [
"janpio",
"pantharshit00",
"thesunny",
"timsuchanek"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9894",
"repo": "prisma/prisma-client-js",
"url": "https://github.com/prisma/prisma-client-js/issues/728"
}
|
gharchive/issue
|
PANIC: Application logic invariant error: received null value for field last_sent_user which may not be null
Hi Prisma Team! My Prisma Client just crashed. This is the report:
Error is:
PANIC: Application logic invariant error: received null value for field last_sent_user which may not be null
The code works fine. Then it some circumstances, it stops working but it is always on last_sent_user.
Two situations where it failed:
Failed on Mac OSX. After trying several things, I removed and reinstalled @prisma/client and it started working again.
Then I deployed to Heroku which was working fine. After the deploy, it stopped working with the same error on last_sent_user
It is hard to replicate because I don't know what causes it to fail; something is happening somewhere, but it's not the code. Also, it is not clear what introduces it because the Heroku deploy is a fresh deploy. But once it starts failing, it always fails (i.e. it's not intermittent).
I do wonder if it's related to the upgrade to 2.0.0 proper (i.e. not beta). Perhaps Heroku is caching something somewhere? But it feels unusual since Heroku does a rebuild.
model book_share_email_invitations {
auth_token String @unique
book_id Int
email String
id Int @default(autoincrement()) @id
last_sent_by Int
permission Int
// normalized (singularized)
book books @relation(fields: [book_id], references: [id])
// normalized (singularized)
last_sent_user users @relation(fields: [last_sent_by], references: [id])
@@index([book_id], name: "book_share_email_invitations_book_id_index")
@@unique([book_id, email], name: "book_share_email_invitations_book_id_email_unique")
}
In my local environment, I once fixed it by removing @prisma/client and then adding it. But it broke again in production
Versions
Name
Version
Node
v12.16.2
OS
debian-openssl-1.1.x
Prisma
2.0.0
Logs
2020-06-15T17:31:20.730Z prisma-client Client Version 2.0.0
2020-06-15T17:31:20.730Z prisma-client Engine Version de2bc1cbdb5561ad73d2f08463fa2eec48993f56
2020-06-15T17:31:20.846Z prisma-client {
engineConfig: {
cwd: '/app/prisma',
debug: false,
datamodelPath: '/app/node_modules/.prisma/client/schema.prisma',
prismaPath: undefined,
generator: {
name: 'client',
provider: 'prisma-client-js',
output: '/tmp/build_212e74070adf806ae6e65fcbf9efea27/node_modules/@prisma/client',
binaryTargets: [Array],
config: {}
},
showColors: false,
logLevel: undefined,
logQueries: true,
flags: [],
clientVersion: '2.0.0'
}
}
2020-06-15T17:31:34.148Z prisma-client Prisma Client call:
2020-06-15T17:31:34.159Z prisma-client prisma.sessions.findOne({
select: {
id: true,
user_id: true
},
where: {
token: 'nRj7xaYa2szeC0lQKXrVs'
}
})
2020-06-15T17:31:34.159Z prisma-client Generated request:
2020-06-15T17:31:34.159Z prisma-client query {
findOnesessions(where: {
token: "faked-information-here"
}) {
id
user_id
}
}
2020-06-15T17:31:34.196Z plusX Have to call plusX on /app/node_modules/.prisma/client/query-engine-debian-openssl-1.1.x
2020-06-15T17:31:35.081Z prisma-client Prisma Client call:
2020-06-15T17:31:35.081Z prisma-client prisma.book_share_email_invitations.findOne({
select: {
id: true,
book_id: true,
book: {
select: {
id: true,
name: true,
shelf: {
select: {
id: true,
name: true
}
}
}
},
email: true,
permission: true,
last_sent_user: {
select: {
id: true,
name: true,
first_name: true,
last_name: true
}
}
},
where: {
auth_token: 'faked-information-here'
}
})
2020-06-15T17:31:35.081Z prisma-client Generated request:
2020-06-15T17:31:35.081Z prisma-client query {
findOnebook_share_email_invitations(where: {
auth_token: "faked-information-here"
}) {
id
book_id
book {
id
name
shelf {
id
name
}
}
email
permission
last_sent_user {
id
name
first_name
last_name
}
}
}
2020-06-15T17:31:35.085Z prisma-client Prisma Client call:
2020-06-15T17:31:35.085Z prisma-client prisma.users.findOne({
select: {
id: true,
name: true,
email: true,
first_name: true,
last_name: true
},
where: {
id: 14
}
})
2020-06-15T17:31:35.085Z prisma-client Generated request:
2020-06-15T17:31:35.085Z prisma-client query {
findOneusers(where: {
id: 14
}) {
id
name
email
first_name
last_name
}
}
2020-06-15T17:31:35.087Z prisma-client Prisma Client call:
2020-06-15T17:31:35.088Z prisma-client prisma.users.findOne({
select: {
id: true
},
where: {
id: 14
}
})
2020-06-15T17:31:35.088Z prisma-client Generated request:
2020-06-15T17:31:35.088Z prisma-client query {
findOneusers(where: {
id: 14
}) {
id
}
}
More similar or related issues by this user: https://github.com/prisma/prisma/issues/2754 + https://github.com/prisma/prisma/issues/2753
@janpio, I can close this issue if you prefer.
I posted it because the crash message suggested clicking the link which provides data automatically.
This issue is additional detail on https://github.com/prisma/prisma/issues/2754.
https://github.com/prisma/prisma/issues/2753 is a different issue though as it's more about the Panic error ending up on the wrong query. The panic just happens to be a pre-requirement for the panic-on-the-wrong-query bug.
Hey @thesunny
Can you please also try this one again on the latest version? I was unable to reproduce the other issue reported by you on the latest version and this one seems related to me. Thanks :pray:
I will try on the new version of Prisma and see if this issue still occurs.
One issue is that it happened on my Mac, then stopped happening. On Heroku, it started happening, so I just worked around it in the code because it was in production (I split the query into two separate queries).
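For context, a sketch of that workaround (hypothetical function and variable names; findOne and the field names are taken from the queries logged above):
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function getInvitation(authToken: string) {
  // Instead of selecting last_sent_user through the relation in one query...
  const invitation = await prisma.book_share_email_invitations.findOne({
    where: { auth_token: authToken },
  })
  if (!invitation) return null
  // ...fetch the related user separately, sidestepping the panic
  const lastSentUser = await prisma.users.findOne({
    where: { id: invitation.last_sent_by },
  })
  return { invitation, lastSentUser }
}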
@thesunny can you try to reproduce with the latest Prisma version please? Thanks!
Hi Tim, if I have time, I'll try and check it out. I'm not working on that particular app at the moment so I'm not sure when I'll get around back to it.
Thank you for following up though.
|
2025-04-01T04:35:12.451051
| 2020-12-26T03:30:21
|
774815397
|
{
"authors": [
"kimyh03",
"pantharshit00"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9895",
"repo": "prisma/prisma-client-js",
"url": "https://github.com/prisma/prisma-client-js/issues/954"
}
|
gharchive/issue
|
PANIC: index out of bounds: the len is 1 but the index is 1
Hi Prisma Team! My Prisma Client just crashed. This is the report:
Versions
Name
Version
Node
v14.15.3
OS
windows
Prisma Client
2.13.1
Logs
prisma-client { clientVersion: '2.13.1' }
Everything is working perfectly in WSL2 (Ubuntu 18.04) with the same code,
but on Windows, after running "npx prisma migrate dev --preview-feature" and sending any query or mutation via the GraphQL playground,
I get this error: PANIC: index out of bounds: the len is 1 but the index is 1
(I followed the NestJS documentation and the prisma-nest example repo.)
Please check my GitHub repo (https://github.com/kimyh03/prisma-nestjs-graphql-windows-error)
src/app.module.ts
import { Module } from '@nestjs/common';
import { GraphQLModule } from '@nestjs/graphql';
import { join } from 'path';
import { PrismaService } from './prisma.service';
import { UserResolver } from './resolvers.user';
@Module({
imports: [
GraphQLModule.forRoot({
autoSchemaFile: join(process.cwd(), 'src/schema.gql'),
}),
],
controllers: [],
providers: [PrismaService, UserResolver],
})
export class AppModule {}
src\prisma.service.ts
import { Injectable, OnModuleInit, OnModuleDestroy } from '@nestjs/common';
import { PrismaClient } from '@prisma/client';
@Injectable()
export class PrismaService
extends PrismaClient
implements OnModuleInit, OnModuleDestroy {
async onModuleInit() {
await this.$connect();
}
async onModuleDestroy() {
await this.$disconnect();
}
}
prisma\schema.prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model User {
id Int @id @default(autoincrement())
email String @unique
name String?
}
src\resolvers.user.ts
import 'reflect-metadata';
import {
Resolver,
Query,
Mutation,
Args,
InputType,
Field,
} from '@nestjs/graphql';
import { Inject } from '@nestjs/common';
import { User } from './user';
import { PrismaService } from './prisma.service';
@InputType()
class SignupUserInput {
@Field({ nullable: true })
name: string;
@Field()
email: string;
}
@Resolver(User)
export class UserResolver {
constructor(@Inject(PrismaService) private prismaService: PrismaService) {}
@Mutation(() => User)
async signupUser(@Args('data') data: SignupUserInput): Promise<User> {
return this.prismaService.user.create({
data: {
email: data.email,
name: data.name,
},
});
}
@Query(() => User, { nullable: true })
async user(@Args('id') id: number) {
return this.prismaService.user.findUnique({
where: { id: id },
});
}
}
src/user.ts
import 'reflect-metadata';
import { ObjectType, Field, ID } from '@nestjs/graphql';
@ObjectType()
export class User {
@Field(() => ID)
id: number;
@Field()
email: string;
@Field(() => String, { nullable: true })
name?: string | null;
}
Hello @kimyh03
Sorry for the late reply here. I haven't had a windows machine in hand so this got delayed a bit.
I am unable to reproduce this in my windows machine as you can see below:
Can you please try with a fresh database? If you can still reproduce this, please share dump of the database so that I can retry this reproduction.
|
2025-04-01T04:35:12.459838
| 2023-02-24T14:14:59
|
1598715443
|
{
"authors": [
"janpio",
"jvcmanke"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9896",
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/18083"
}
|
gharchive/issue
|
Some CLI commands don't respect .env file
Bug description
When running some of the setup CLI commands, the environment variables from the .env file are not loaded.
I have tested init and generate while adding the PRISMA_MIGRATION_ENGINE_BINARY and PRISMA_QUERY_ENGINE_LIBRARY environment variables to an .env file in the project root.
I ran into this while trying to start a new project within my company's proxy/firewall configuration, where there is a rule that blocks the download of the sha256 checksum, so I manually downloaded the engines and put them in a folder to point to with the environment variables.
I thought I was doing something wrong until I realised the commands worked if I exported the environment variables directly in the terminal, with set -a; source .env; set +a; for example, which was not the behaviour I expected.
Possibly it is just certain environment variables that have this problem.
How to reproduce
I managed to test this only on Windows; I'm also using Git Bash.
Set up a basic project following the prisma getting started documentation:
npm init -y
npm install prisma typescript ts-node @types/node --save-dev
npx tsc --init
Download the prisma engines into a ./engines folder;
- engines
- migration-engine.exe
- query_engine.dll.node
Create a .env file with the following contents:
PRISMA_MIGRATION_ENGINE_BINARY=${PWD}/engines/migration-engine.exe
PRISMA_QUERY_ENGINE_LIBRARY=${PWD}/engines/query_engine.dll.node
If you don't have connection problems with the binaries mirror, you can turn off your internet access.
Run npx prisma init
You should see an error similar to:
Error: request to https://binaries.prisma.sh/...
even though we are declaring in the .env file that we should use local files.
On the other hand, if you export these variables in the terminal (say by running set -a; source .env; set +a;), the command works as expected.
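For comparison, a minimal sketch of loading and expanding such a file in Node, assuming the dotenv and dotenv-expand packages (this is not necessarily what the Prisma CLI does internally):
import * as dotenv from 'dotenv'
import * as dotenvExpand from 'dotenv-expand'

// Load .env and expand ${PWD}-style references before anything reads process.env
const env = dotenv.config()
dotenvExpand.expand(env)

console.log(process.env.PRISMA_QUERY_ENGINE_LIBRARY)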
Expected behavior
All CLI commands should consume from the .env file.
This section of the documentation made me expect that: Docs
Prisma information
All of this happens before even having a Prisma setup.
Environment & setup
OS: Windows
Database: Irrelevant I think, but PostgreSQL is what I was trying to set up
Node.js version: v18.12.1
Prisma Version
Again, when running prisma -v without exporting variables directly in the terminal, it fails with the same request error.
With them exported:
Environment variables loaded from .env
prisma : 4.10.1
@prisma/client : 4.10.1
Current platform : windows
Query Engine (Node-API) : libquery-engine aead147aa326ccb985dcfed5b065b4fdabd44b19 (at engines\query_engine.dll.node, resolved by PRISMA_QUERY_ENGINE_LIBRARY)
Migration Engine : migration-engine-cli aead147aa326ccb985dcfed5b065b4fdabd44b19 (at engines\migration-engine.exe, resolved by PRISMA_MIGRATION_ENGINE_BINARY)
Format Wasm : @prisma/prisma-fmt-wasm 4.10.1-1.80b351cc7c06d352abe81be19b8a89e9c6b7c110
Default Engines Hash : aead147aa326ccb985dcfed5b065b4fdabd44b19
Studio : 0.481.0
Hello!
Any news on this?
Do you need any more information about the issue that I can provide?
Can you please show the full error message and not cut the actual file name off? Thanks.
Does it work if you define the engine file paths absolutely instead of using ${PWD}?
|
2025-04-01T04:35:12.467644
| 2018-09-21T09:39:36
|
362534325
|
{
"authors": [
"divyenduz",
"do4gr",
"luhagel",
"marktani"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9897",
"repo": "prisma/prisma",
"url": "https://github.com/prisma/prisma/issues/3168"
}
|
gharchive/issue
|
Prisma Export: Invalid JSON
Describe the bug
As of right now, the prisma export command produces invalid data when it comes to json(list) fields, resulting in an error when trying to import that data somewhere else.
To Reproduce
Steps to reproduce the behavior:
Spin up a prisma stage with some form of json list inside
Export the data with prisma export
Try to import that data into another stage with prisma import
prisma import --data export-2018-09-21T09:00:40.144Z.zip
Unzipping 31ms
Uncaught exception, cleaning up: Error: Value "{\"prop\":\"whatever\",\"label\":\"Label\"}" for field personalDownloads is not a valid Json
Validating data ✖
Expected behavior
The import should work without any hiccups
Versions (please complete the following information):
OS: Tested on OS X Mojave& Linux 4.18.8
prisma CLI: prisma/1.16.4 x64 node-v10.11.0
Prisma Server: 1.16.0
Additional context
Extracting the zip, removing the list contents from the lists folder and compressing everything again fixes the problem.
Quick update: I just did some further testing, and it looks like the problem lies in the fact that the export creates escaped JSON strings instead of objects. Removing all the backslashes and superfluous quotation marks from the list before zipping everything up again works as expected, with the data being where it belongs.
Current:
{
"valueType": "lists",
"values": [
{
"_typeName": "Semester",
"id": "myid",
"personalDownloads": [
"{\"prop\":\"test_1\",\"label\":\"Test 1\"}",
"{\"prop\":\"test_2\",\"label\":\"Test 2\"}"
]
}
]
}
Working:
{
"valueType": "lists",
"values": [
{
"_typeName": "TName",
"id": "myid",
"field": [
{
"prop": "test_1",
"label": "Test 1"
},
{
"prop": "test_2",
"label": "Test 2"
}
]
}
]
}
Thanks for the follow up. It is my current understanding that this is a mismatch between the output format for JSON values by the backend, and the JSON validation done by the CLI.
Both components should agree on the exact parsing behaviour. I am not sure which is per the spec, and which is not.
The quotes are generated from the backend for legacy reasons.
Bringing import/export in sync would be a breaking change, but it is important and NDF-compliant, as NDF does not mention escape quoting for anything.
Thanks for reporting this @luhagel , we just released 1.17.2 which fixes the issue by not stringifying the output anymore.
|
2025-04-01T04:35:12.482113
| 2020-04-17T14:26:48
|
604186157
|
{
"authors": [
"janpio",
"nikolasburk"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9898",
"repo": "prisma/vscode",
"url": "https://github.com/prisma/vscode/issues/107"
}
|
gharchive/issue
|
Autocompletion in Prisma schema
Problem
Right now, the Prisma VS Code extension only shows red squiggly lines on errors but doesn't actually help you write your schema in the first place.
Solution
We could implement autocomplete for the Prisma VS Code extension (and other editor plugins) so that people get suggestions for available type, model and field names.
Additional context
Ben Awad mentioned this idea at the beginning of his Prisma 2 Beta review.
@schickling wrote some notes in https://github.com/prisma/vscode/issues/96#issue-598258644:
The VSC extension should enable context-aware auto-completion. This includes:
Suggest semantically correct attributes / types (i.e. don't display attributes that would be invalid in this context)
The auto-completion should also show inline docs (similar to JSDoc for Prisma Client JS)
Update: https://imgur.com/cmSkBms.gif 🔥 🔥 🔥
Nice work so far @carmenberndt 👏
|
2025-04-01T04:35:12.488431
| 2022-12-03T23:56:17
|
1474282845
|
{
"authors": [
"angeloashmore",
"lihbr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9899",
"repo": "prismicio-community/vite-plugin-sdk",
"url": "https://github.com/prismicio-community/vite-plugin-sdk/pull/4"
}
|
gharchive/pull-request
|
feat: support custom src and out dir
Types of changes
[ ] Chore (a non-breaking change which is related to package maintenance)
[ ] Bug fix (a non-breaking change which fixes an issue)
[x] New feature (a non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Description
The config is now extended following the user's config (honoring the build.outDir option)
A new srcDir option is introduced to let users specify their src directory root (we could infer it from the entries, but that'd be fragile/edge-case-prone); a usage sketch follows below
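A hypothetical usage sketch (the import path and plugin export name are assumptions; srcDir and the honored build.outDir are the options described above):
// vite.config.ts
import { defineConfig } from 'vite'
import { sdk } from 'vite-plugin-sdk' // hypothetical import path

export default defineConfig({
  build: { outDir: 'lib' }, // now honored instead of being overwritten
  plugins: [sdk({ srcDir: './src' })],
})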
Unrelated: Should we make this repository public now?
Checklist:
[ ] My change requires an update to the official documentation.
[x] All TSDoc comments are up-to-date and new ones have been added where necessary.
[x] All new and existing tests are passing.
Replied in #5 here: https://github.com/prismicio-community/vite-plugin-sdk/pull/5#issuecomment-1352406154
Basically, we should go with this PR (#4) and close #5. Everything here looks good to merge! 🙂
Vite 4 is out now, so maybe we can upgrade our support and bump to v0.1 (or v1.0?).
I came across a bug in #6 where type declarations would build in dist/dist rather than just dist/. This only happens after updating to the latest dependencies.
The bug was fixed in #6, but it will need to be updated in this PR as well. See this commit.
OK, merging both and releasing 0.1.0
|
2025-04-01T04:35:12.531868
| 2023-06-29T22:38:51
|
1781599934
|
{
"authors": [
"priyankarpal",
"tayyab-ilyas"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9900",
"repo": "priyankarpal/ProjectsHut",
"url": "https://github.com/priyankarpal/ProjectsHut/issues/1465"
}
|
gharchive/issue
|
chore: project addition by tayyab-ilyas
Add a new project to the list
I would like to add my project PomTasker
Record
[X] I agree to follow this project's Code of Conduct
[X] I'm a GSSoC'23 contributor
[X] I want to work on this issue
[X] My project has contribution guidelines
[X] My project has a Code of conduct
[X] My project has README
[X] My project has a License
Hi @priyankarpal, this project got merged in another repository; I don't think it would be good practice to create a PR for it here as well. Really sorry for wasting your time.
No problem at all! Thank you for letting me know. There's no need to apologize for anything.
|
2025-04-01T04:35:12.547055
| 2016-05-14T05:35:03
|
154835740
|
{
"authors": [
"alxempirical",
"curlette"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9901",
"repo": "probcomp/bayeslite",
"url": "https://github.com/probcomp/bayeslite/issues/415"
}
|
gharchive/issue
|
Potential sign of bug in confidence calculations
During my analysis of the College Scorecard data I came across the following:
The plot below is of 500 simulated scores for a school whose tuition was inferred with 94% confidence (Everest Univ. Jacksonville).
It is noticeably somewhat bimodal, which we would expect to lower the confidence.
@fsaad @vkmvkmvkmvkm @raxraxraxraxrax @gregory-marton
Assuming you are using a crosscat metamodel, not gpmcc, I happened on a possible explanation for this behavior earlier this week.
TL;DR:
# TODO: multistate impute doesn't exist yet
# e,confidence = su.impute_and_confidence_multistate(M_c, X_L, X_D, Y, Q, n,
# self.get_next_seed)
INFER draws 100 approximate posterior samples for an observed row by sampling from the category distributions ("cluster_model" in the crosscat source code nomenclature) for the latent category assigned to the row in the last training step of each model. This category distribution is a gaussian, so univariate.
Then the confidence estimate makes an entirely new crosscat state from the posterior sample, trains it for 100 iterations, and returns the mean frequency of the maximum-likelihood category over those training iterations. Since that state is being trained on a sample from a gaussian, it's not surprising that the ML category has very high frequency. Essentially, the confidence-estimate code never gets to see the other mode of the posterior sample.
SIMULATE draws samples given observed-row conditions using the same code as INFER, but it draws them from all models in the generator unless you specify otherwise. So SIMULATEd samples can have multiple modes, they just come from different models. The confidence estimate in INFER (and the inference itself) is based only on the first model in the generator.
Makes sense, thanks!
If you don't mind, I think it would be good to keep this issue open. It looks like you have brought a serious bug to light.
@curlette, can you send the bdb file to<EMAIL_ADDRESS>please?
SIMULATE draws samples given observed-row conditions using the same code as INFER, but it draws them from all models in the generator unless you specify otherwise. So SIMULATEd samples can have multiple modes, they just come from different models. The confidence estimate in INFER (and the inference itself) is based only on the first model in the generator.
I misread the code in the link. The first model is used only for the confidence calculation. The imputation sampling is done over all models, so you would expect the two modes to appear in the samples generated by impute, which are then passed to continuous_imputation_confidence.
|
2025-04-01T04:35:12.554353
| 2018-10-31T18:22:19
|
376100430
|
{
"authors": [
"wolfy1339"
],
"license": "ISC",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9902",
"repo": "probot/probot",
"url": "https://github.com/probot/probot/pull/755"
}
|
gharchive/pull-request
|
Fix webhook types
This PR fixes all of the problems in the @octokit/webhooks typings, except for one which I am unsure how to fix other than by using ESModule's export default syntax. So, I left it out. Note: the tests seem to pass fine without it, but on my local machine, in VSCode, TSLint seems to not like it
Hopefully this is a step forward in fixing #675
I split it in different commits to show exactly what was done and provide more context instead of making one giant commit.
Is there a reason for the Webhooks namespace? It breaks when you go to import the WebhookEvent interface in context.ts
This is rebased now on top of the master branch. It should be ready to go
@hiimbex @tcbyrd Would it be useful to add all the other methods as well? I have a commit pending for those changes
Some of these changes might become moot by #793
|
2025-04-01T04:35:12.556630
| 2017-09-27T13:41:44
|
260978822
|
{
"authors": [
"Ben3eeE",
"bkeepers"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9903",
"repo": "probot/stale",
"url": "https://github.com/probot/stale/pull/63"
}
|
gharchive/pull-request
|
Limit to 30 issues per run
Repositories with a lot of open issues tend to trigger abuse mechanisms on GitHub. This limits it to 30 issues in an attempt to reduce errors. Repositories with more than 30 stale issues will have to wait for multiple runs (every hour) for all open issues to be marked.
Should fix #26
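A minimal sketch of the idea (hypothetical names, not the actual diff):
// Cap how many stale issues are processed per run; the hourly schedule
// picks up the remainder on subsequent runs.
const MAX_ISSUES_PER_RUN = 30

async function markStaleIssues (issues, markStale) {
  for (const issue of issues.slice(0, MAX_ISSUES_PER_RUN)) {
    await markStale(issue)
  }
}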
:heart:
Are we sure that 30 is low enough to avoid triggering abuse mechanisms? I remember before we had discussed way lower numbers, like 5 or 10. (I'd rather do more if possible obviously)
No, it's just a guess. We could always lower it if the errors keep happening.
We're kinda flying blind here until we can figure out how to add things like https://github.com/probot/metrics at deploy time. I'm tempted to just start merging these things into core for now and worry about extracting them back out later.
|
2025-04-01T04:35:12.560292
| 2020-11-18T11:43:51
|
745588231
|
{
"authors": [
"tbouffard"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9904",
"repo": "process-analytics/bpmn-visualization-examples",
"url": "https://github.com/process-analytics/bpmn-visualization-examples/pull/98"
}
|
gharchive/pull-request
|
Review example preview images
Remove text from the example preview images.
This makes integration clearer on the example home page: the text of the page doesn't clash with the text of the images (no duplication, no mixed information; the preview focuses on the diagram rendering, the text on the example summary).
Previously, there were also inconsistencies between previews: some had text, others didn't.
This change should also make it possible to put the title on top of the example cards. See https://github.com/process-analytics/bpmn-visualization-examples/pull/80#issuecomment-726014270
Live environment of examples
https://cdn.statically.io/gh/process-analytics/bpmn-visualization-examples/feat/review_example_preview_images/examples/index.html
Implementation notes
To generate the previews, some examples were temporarily modified (extra margin added, some texts removed, ...).
I have made these changes available in a dedicated commit, which is then reverted, in case we would like to redo the screenshots in the future or want to know how they were done.
Old Home
New Home
|
2025-04-01T04:35:12.645813
| 2016-09-29T13:23:16
|
180047877
|
{
"authors": [
"jasminSPC",
"lbobka"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9905",
"repo": "profitbricks/docker-machine-driver-profitbricks",
"url": "https://github.com/profitbricks/docker-machine-driver-profitbricks/issues/21"
}
|
gharchive/issue
|
error while creating a new docker host with current version v1.1.4
Running pre-create checks...
Creating machine...
(check1) Datacenter Created
Error creating machine: Error in driver during machine creation: Error while creating a LAN {
"httpStatus" : 404,
"messages" : [ {
"errorCode" : "309",
"message" : "Resource does not exist"
} ]
}
Rolling back...
I created the machine-driver as always:
go get -u github.com/profitbricks/docker-machine-driver-profitbricks
cd $GOPATH/src/github.com/profitbricks/docker-machine-driver-profitbricks
make install
BR
Lars
Can you provide me with a command you used?
I've found the issue and fixed it. Please go ahead and give it a try.
thanks for the quick fix
|
2025-04-01T04:35:12.691863
| 2021-02-28T18:40:40
|
818276501
|
{
"authors": [
"hiyurun",
"profzei"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9906",
"repo": "profzei/Matebook-X-Pro-2018",
"url": "https://github.com/profzei/Matebook-X-Pro-2018/issues/152"
}
|
gharchive/issue
|
Fake ethernet not visible in Hackintool
I can't find it.
@hiyurun Why did you care about it when you have a working WiFi card? Please, do not answer that you think it's related to your iMessage issue...
Sorry,
I'll try elsewhere. I thought this was the right place because I have a Matebook X Pro installed with your EFI.
Have a good day.
Sorry, I did not want to be so rude, but people are always opening issues that are not related to the real content of this repo, i.e. supporting macOS installs on a device not designed/built for it. For example, the Reddit hackintosh subforum is full of such help requests (almost one every week) and I think you'll find there more than one successful path for resolving your issue...
|
2025-04-01T04:35:12.723798
| 2021-12-14T01:41:55
|
1079212812
|
{
"authors": [
"woody-apple",
"yunhanw-google"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9907",
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/issues/12973"
}
|
gharchive/issue
|
Take the aStatus pointer for OnAttributeData, similar to OnEventData
OnAttributeData should take either apData or apStatus, since per the encoding spec the attribute response carries either data or a status, similar to OnEventData
SDK Spec Review: We believe this is resolved, marking as closed.
|
2025-04-01T04:35:12.734196
| 2023-05-09T22:47:11
|
1702849747
|
{
"authors": [
"mikaelhm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9908",
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/issues/26461"
}
|
gharchive/issue
|
[chiptool.py] Failed to run test case [TC-ACE-1.5]. Test Step 4 : Waiting after opening commissioning window
Failure
I tried to run TC-ACE-1.5 using the develop branch of Matter TH.
However, this test step crashed chiptool.py:
Test Step 4 : Waiting after opening commissioning window
Error:
File "/root/scripts/py_matter_yamltests/matter_yamltests/pseudo_clusters/clusters/delay_commands.py", line 61, in WaitForMs
time.sleep(duration_in_ms / 1000)
TypeError: unsupported operand type(s) for /: 'str' and 'int'
Reproduction steps
commission device using chiptool.py
run chiptool.py command with arguments:
./scripts/tests/yaml/chiptool.py tests Test_TC_ACE_1_5 --nodeId 0xf8f838e26badee7d --delayInMs 250 --timeout 900 --continueOnFailure 1 --trace_file "/logs/trace_log_2023-05-09_22.33.36_0xf8f838e26badee7d_TEST_Test_TC_ACE_1_5.log" --trace_decode 1 --endpoint 0 --payload MT:-24J0AFN00KA0648G00 --discriminator 3840 --waitAfterCommissioning 5000 --PakeVerifier hex:b96170aae803346884724fe9a3b287c30330c2a660375d17bb205a8cf1aecb350457f8ab79ee253ab6a8e46bb09e543ae422736de501e3db37d441fe344920d09548e4c18240630c4ff4913c53513839b7c07fcc0627a1b8573a149fcd1fa466cf
Expect this log output:
WARNING:root:TAG configurator::cluster::description was not handled/recognized at src/app/zap-templates/zcl/data-model/chip/diagnostic-logs-cluster.xml:41:146
WARNING:root:TAG configurator::cluster::command::description was not handled/recognized at src/app/zap-templates/zcl/data-model/chip/diagnostic-logs-cluster.xml:47:63
WARNING:root:TAG configurator::clusterExtension::command::description was not handled/recognized at src/app/zap-templates/zcl/data-model/chip/color-control-cluster.xml:373:6
Parsing 1 files.
Parsing: src/app/tests/suites/certification/Test_TC_ACE_1_5.yaml
✓ 7.0ms
Connecting: ws://localhost:9002
⚠ 12.0ms
Retrying in 1 seconds.
Connecting: ws://localhost:9002
✓ 1.0ms
Running: "42.1.5. [TC-ACE-1.5] Multi-fabric" with 17 steps.
***** Test Start : Test_TC_ACE_1_5
***** Test Step 1 : Wait for the commissioned device to be retrieved for TH1
1. Running Wait for the commissioned device to be retrieved for TH1
✓ 216.42ms
[PostProcessCheckType.IM_STATUS check] The test expects no error and no error occurred.
**** Test Setup: Device Connected
***** Test Step 2 : TH1 reads the fabric index
2. Running TH1 reads the fabric index
✓ 2.22ms
[PostProcessCheckType.IM_STATUS check] The test expects no error and no error occurred.
[PostProcessCheckType.SAVE_AS_VARIABLE check] The test save the value "1" as th1FabricIndex.
***** Test Step 3 : Open Commissioning Window from alpha
3. Running Open Commissioning Window from alpha
✓ 12.7ms
[PostProcessCheckType.IM_STATUS check] The test expects no error and no error occurred.
***** Test Step 4 : Waiting after opening commissioning window
4. Running Waiting after opening commissioning window
Traceback (most recent call last):
File "/root/./scripts/tests/yaml/chiptool.py", line 141, in <module>
chiptool_py()
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/root/./scripts/tests/yaml/chiptool.py", line 130, in chiptool_py
success = send_yaml_command(commands[1], server_path, server_arguments, pics, commands[2:])
File "/usr/local/lib/python3.10/dist-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/root/./scripts/tests/yaml/chiptool.py", line 44, in send_yaml_command
return ctx.forward(chiptool)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 781, in forward
return __self.invoke(__cmd, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/root/scripts/tests/yaml/runner.py", line 323, in chiptool
return ctx.forward(websocket)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 781, in forward
return __self.invoke(__cmd, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/root/scripts/tests/yaml/runner.py", line 299, in websocket
return runner.run(parser_group.builder_config, runner_config)
File "/root/scripts/py_matter_yamltests/matter_yamltests/runner.py", line 150, in run
raise (result)
File "/root/scripts/py_matter_yamltests/matter_yamltests/runner.py", line 182, in _run
responses, logs = await config.pseudo_clusters.execute(request)
File "/root/scripts/py_matter_yamltests/matter_yamltests/pseudo_clusters/pseudo_clusters.py", line 35, in execute
status = await command(request)
File "/root/scripts/py_matter_yamltests/matter_yamltests/pseudo_clusters/clusters/delay_commands.py", line 61, in WaitForMs
time.sleep(duration_in_ms / 1000)
TypeError: unsupported operand type(s) for /: 'str' and 'int'
Test_TC_ACE_1_5.yaml
# Copyright (c) 2021 Project CHIP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Auto-generated scripts for harness use only, please review before automation. The endpoints and cluster names are currently set to default
name: 42.1.5. [TC-ACE-1.5] Multi-fabric
PICS:
- MCORE.ROLE.COMMISSIONEE
- APPDEVICE.S
config:
nodeId: 0x12344321
cluster: "Access Control"
endpoint: 0
payload:
type: char_string
defaultValue: "MT:-24J0AFN00KA0648G00"
discriminator:
type: int16u
defaultValue: 3840
waitAfterCommissioning:
type: int16u
defaultValue: 5000
PakeVerifier:
type: octet_string
defaultValue: "hex:b96170aae803346884724fe9a3b287c30330c2a660375d17bb205a8cf1aecb350457f8ab79ee253ab6a8e46bb09e543ae422736de501e3db37d441fe344920d09548e4c18240630c4ff4913c53513839b7c07fcc0627a1b8573a149fcd1fa466cf"
tests:
- label: "Wait for the commissioned device to be retrieved for TH1"
cluster: "DelayCommands"
command: "WaitForCommissionee"
arguments:
values:
- name: "nodeId"
value: nodeId
- label: "TH1 reads the fabric index"
cluster: "Operational Credentials"
command: "readAttribute"
attribute: "CurrentFabricIndex"
response:
saveAs: th1FabricIndex
- label: "Open Commissioning Window from alpha"
cluster: "Administrator Commissioning"
command: "OpenCommissioningWindow"
timedInteractionTimeoutMs: 10000
arguments:
values:
- name: "CommissioningTimeout"
value: 180
- name: "PAKEPasscodeVerifier"
value: PakeVerifier
- name: "Discriminator"
value: discriminator
- name: "Iterations"
value: 1000
- name: "Salt"
value: "SPAKE2P Key Salt"
- label: "Waiting after opening commissioning window"
cluster: "DelayCommands"
command: "WaitForMs"
arguments:
values:
- name: "ms"
value: waitAfterCommissioning
- label: "Commission from TH2"
identity: "beta"
cluster: "CommissionerCommands"
command: "PairWithCode"
arguments:
values:
- name: "nodeId"
value: nodeId
- name: "payload"
value: payload
- label: "Wait for the commissioned device to be retrieved for TH2"
identity: beta
cluster: "DelayCommands"
command: "WaitForCommissionee"
arguments:
values:
- name: "nodeId"
value: nodeId
- label: "TH2 reads the fabric index"
identity: "beta"
cluster: "Operational Credentials"
command: "readAttribute"
attribute: "CurrentFabricIndex"
response:
saveAs: th2FabricIndex
- label: "Read the commissioner node ID from the alpha fabric"
identity: "alpha"
cluster: "CommissionerCommands"
command: "GetCommissionerNodeId"
response:
values:
- name: "nodeId"
saveAs: commissionerNodeIdAlpha
- label: "TH1 writes ACL giving view privilege for descriptor cluster"
command: "writeAttribute"
attribute: "ACL"
arguments:
value: [
{
FabricIndex: th1FabricIndex,
Privilege: 5, # administer
AuthMode: 2, # case
Subjects: [commissionerNodeIdAlpha],
Targets:
[{ Cluster: 0x001f, Endpoint: 0, DeviceType: null }],
},
{
FabricIndex: th1FabricIndex,
Privilege: 1, # view
AuthMode: 2, # case
Subjects: null,
Targets:
[{ Cluster: 0x001d, Endpoint: 0, DeviceType: null }],
},
]
- label: "Read the commissioner node ID from the beta fabric"
identity: "beta"
cluster: "CommissionerCommands"
command: "GetCommissionerNodeId"
response:
values:
- name: "nodeId"
saveAs: commissionerNodeIdBeta
- label: "TH2 writes ACL giving view privilge for basic cluster"
identity: beta
command: "writeAttribute"
attribute: "ACL"
arguments:
value: [
{
FabricIndex: th2FabricIndex,
Privilege: 5, # administer
AuthMode: 2, # case
Subjects: [commissionerNodeIdBeta],
Targets:
[{ Cluster: 0x001f, Endpoint: 0, DeviceType: null }],
},
{
FabricIndex: th2FabricIndex,
Privilege: 1, # view
AuthMode: 2, # case
Subjects: null,
Targets:
[{ Cluster: 0x0028, Endpoint: 0, DeviceType: null }],
},
]
- label: "TH1 reads descriptor cluster - expect SUCCESS"
command: "readAttribute"
cluster: "Descriptor"
attribute: "DeviceTypeList"
- label: "TH1 reads basic cluster - expect UNSUPPORTED_ACCESS"
command: "readAttribute"
cluster: "Basic Information"
attribute: "VendorID"
response:
error: UNSUPPORTED_ACCESS
- label: "TH2 reads descriptor cluster - expect UNSUPPORTED_ACCESS"
identity: "beta"
command: "readAttribute"
cluster: "Descriptor"
attribute: "DeviceTypeList"
response:
error: UNSUPPORTED_ACCESS
- label: "TH2 reads basic cluster - expect SUCCESS"
identity: "beta"
command: "readAttribute"
cluster: "Basic Information"
attribute: "VendorID"
- label: "TH1 resets ACL to default"
command: "writeAttribute"
attribute: "ACL"
arguments:
value: [
{
FabricIndex: 1,
Privilege: 5, # administer
AuthMode: 2, # case
Subjects: [commissionerNodeIdAlpha],
Targets: null,
},
]
- label: "TH1 sends RemoveFabric command for TH2"
cluster: "Operational Credentials"
command: "RemoveFabric"
arguments:
values:
- name: "FabricIndex"
value: th2FabricIndex
Platform
other
Platform Version(s)
No response
Type
YAML tested, Manually tested with SDK
(Optional) If manually tested please explain why this is only manually tested
No response
Anything else?
No response
I suspect the problem here is that chiptool.py is not handling the --waitAfterCommissioning 5000 parameter as an integer, and passing it as a string to the lower layer.
CC: @vivien-apple
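If that suspicion is right, here is a standalone sketch of the coercion that would avoid the TypeError (wait_for_ms is illustrative, not the SDK's actual method):
import time

def wait_for_ms(duration_in_ms):
    """Sleep for a duration given in milliseconds, tolerating string input
    such as "5000" coming from CLI flags or YAML config values."""
    time.sleep(int(duration_in_ms) / 1000)

wait_for_ms("250")   # previously: TypeError: unsupported operand type(s) for /
wait_for_ms(250)     # numeric input keeps working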
|
2025-04-01T04:35:12.743349
| 2023-10-12T16:22:01
|
1940346923
|
{
"authors": [
"VaishaliAvhale",
"bzbarsky-apple",
"cecille"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9909",
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/issues/29729"
}
|
gharchive/issue
|
[BUG]Facing Issue While Executing YAML TestCases
Reproduction steps
I have followed the below README file to execute the YAML test cases. However, the 'tests' command has been removed from 'connectedhomeip' and it is no longer supported.
https://github.com/project-chip/connectedhomeip/blob/master/src/app/tests/suites/README.md
I've tried out the other ways to execute the YAML test cases using the following command, but I have encountered an issue while running the test cases.
$ ./scripts/tests/run_test_suite.py --runner chip_tool_python --target 'Test_TC_OO_1_1' --log-level debug run --iterations 1 --test-timeout-seconds 120
Can someone please assist me with this?
Bug prevalence
Everytime
GitHub hash of the SDK that was being used
ecc0d63cf7eb91f4017bf8c264b53cf690420eb5
Platform
core
Platform Version(s)
v1.2
Anything else?
No response
but I have encountered an issue while running the test cases.
What issue? This is not actionable as filed... Please attach logs from the failure.
The problem here is the tests command has been removed, but the readme has not been updated and the runner documentation is either non-existent or at least hard to find. I have not yet been able to find it.
To run tests locally now, you need to fire up an interactive chip session, then use runner.py or one of the other wrappers to run the yaml.
@vivien-apple would you be able to provide instructions on which wrapper to use and how to pass in the required flags?
Is the issue "how to run the test against some device that is not all-clusters-app"?
OK, if I hack up the harness scripts a bit and run them with --dry-run, I get this "what did I run" output, paths modified to not be absolute, and the chip-tool paths are pointing to my chip-tool, and of course the --storage-directory is whatever temp folder the harness created.
python3 scripts/tests/yaml/chiptool.py pairing code 0x12344321 MT:-24J042C00KA0648G00 --server_path ./out/debug/chip-tool-tests/chip-tool --server_arguments 'interactive server' --storage-directory /var/folders/ty/gbwtrbq52rsg6r6jn35dcp9c0000gn/T/tmppgjhfl1v
python3 scripts/tests/yaml/chiptool.py tests Test_TC_OO_1_1 --PICS src/app/tests/suites/certification/ci-pics-values --server_path ./out/debug/chip-tool-tests/chip-tool --server_arguments 'interactive server' --storage-directory /var/folders/ty/gbwtrbq52rsg6r6jn35dcp9c0000gn/T/tmppgjhfl1v
Running that second command on its own after manually commissioning things seems to work right.
And yes, we desperately need to update the documentation....
https://github.com/project-chip/connectedhomeip/pull/29752 to improve the dry-run logging so it gives the info I pasted above.
Assigning this over to Vivien as he has the best handle on how to run this and can provide documentation.
FYI - most of the command flags are NOT documented properly in the help menu, but flow through to the lower layers, so you can use more flags than are exposed by the help menus.
|
2025-04-01T04:35:12.745048
| 2021-12-20T11:05:17
|
1084653405
|
{
"authors": [
"bzbarsky-apple",
"wqx6"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9910",
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/13155"
}
|
gharchive/pull-request
|
app:Add missing on-off commands for all-clusters-app
Problem
Some missing on-off commands need to be added for all-clusters-app. It should be tested in Test Events.
Change overview
Add missing on-off commands in all-clusters-app's zap file
Testing
Tested with ESP32-C3; the commands are supported now.
/rebase
|
2025-04-01T04:35:12.747622
| 2022-01-03T19:38:49
|
1092766745
|
{
"authors": [
"bzbarsky-apple",
"electrocucaracha",
"jepenven-silabs"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9911",
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/13302"
}
|
gharchive/pull-request
|
Fix Master build failure
Problem
https://github.com/project-chip/connectedhomeip/pull/13071 seems to have broken a couple of CI jobs
Change overview
Fix the build failure
Testing
EFR32 platform now compiles
This case is interesting: the CI didn't report a failure in its PR, but the error was raised on the master branch
That's because it was a merge conflict, no? @electrocucaracha
This case is interesting: the CI didn't report a failure in its PR, but the error was raised on the master branch
That's because it was a merge conflict, no? @electrocucaracha
It could be or at least it's a possibility
|
2025-04-01T04:35:12.749334
| 2022-02-10T14:00:37
|
1130129314
|
{
"authors": [
"lazarkov",
"mgarb1"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9912",
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/15034"
}
|
gharchive/pull-request
|
add featuremap to pressure sensor
Problem
What is being fixed? Examples:
FeatureMap is not included in XML for Pressure Sensor
Change overview
Added FeatureMap for pressure sensor
Testing
Compared with Thermostat implementation
you need to run this script and commit the output:
scripts/tools/zap_regen_all.py
|
2025-04-01T04:35:12.750405
| 2024-04-11T03:16:01
|
2236829712
|
{
"authors": [
"bzbarsky-apple"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9913",
"repo": "project-chip/connectedhomeip",
"url": "https://github.com/project-chip/connectedhomeip/pull/32936"
}
|
gharchive/pull-request
|
Improve MTRDevice work item logging.
Stop including the node ID in the description passed to enqueueWorkItem, because the work queue includes that already.
Use hex for node IDs, cluster IDs, command/attribute IDs (but keep using decimal for endpoint IDs), to be consistent with other logging.
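A sketch of the stated convention (illustrative Python; the actual change is in the Darwin framework code): hex for node, cluster, command, and attribute IDs, decimal for endpoint IDs.
def describe(node_id, endpoint_id, cluster_id, attribute_id):
    # Endpoint stays decimal; everything else is rendered as hex.
    return (f"node 0x{node_id:016X} endpoint {endpoint_id} "
            f"cluster 0x{cluster_id:X} attribute 0x{attribute_id:X}")

print(describe(0x12344321, 0, 0x001D, 0x0000))
# node 0x0000000012344321 endpoint 0 cluster 0x1D attribute 0x0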
Fast-tracking platform-specific change with platform owner review.
|
2025-04-01T04:35:12.753374
| 2024-10-29T01:14:39
|
2619886341
|
{
"authors": [
"CLAassistant",
"cecille"
],
"license": "CC-BY-4.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9914",
"repo": "project-chip/matter-handbook",
"url": "https://github.com/project-chip/matter-handbook/pull/27"
}
|
gharchive/pull-request
|
Add a glossary
I have no idea what I'm doing or how to build this thing, so please just consider this as a starting point rather than an actual pull request.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 2 out of 3 committers have signed the CLA. :white_check_mark: sammachin :white_check_mark: cecille :x: amolivo. You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:35:12.755574
| 2022-06-02T17:45:24
|
1258497428
|
{
"authors": [
"W95Psp",
"msprotz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9915",
"repo": "project-everest/hacl-star",
"url": "https://github.com/project-everest/hacl-star/pull/565"
}
|
gharchive/pull-request
|
Support for returning uint64 across the WASM/JS boundary
This is the companion PR to FStarLang/karamel#267 -- it enables returning uint64s as a multi-value, and the "nice" API layer on top of it is aware of that calling convention and uses a JS bignum to represent a 64-bit integer adequately.
There are some extra fixes to be reviewed by @denismerigoux ... namely, the int32 return value was renamed to uint32, because as far as I know, all of the functions that were bound by the API return unsigned values... and the api.js code was changed accordingly to interpret those as unsigned via a logical-shift (>>> 0), otherwise, a HACL function that returns 0xffffffff would be returning -1 in JS -- can you confirm that, as far as you remember, there was no usage of signed integers in the code for which you wrote bindings?
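For reference, the >>> 0 trick reinterprets a signed 32-bit JS result as unsigned; the equivalent reinterpretation in Python is a mask (a small illustration, not part of the bindings themselves):
def to_uint32(n):
    # Keep the low 32 bits, read as unsigned.
    return n & 0xFFFFFFFF

print(to_uint32(-1))   # 4294967295 == 0xffffffff, not -1
print(to_uint32(7))    # small non-negative values are unchanged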
Sure, sorry, I completely forgot about this PR! It looks good to me!
|
2025-04-01T04:35:12.765018
| 2019-06-07T18:28:42
|
453640442
|
{
"authors": [
"apandurangi",
"fcastill",
"lixingwang"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9916",
"repo": "project-flogo/flogo-web",
"url": "https://github.com/project-flogo/flogo-web/issues/1109"
}
|
gharchive/issue
|
Exported app misses alias when REST trigger + REST invoke are in the app
I'm submitting a ... (check one with "x")
[ *] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request
Current behavior
Create an app and add a REST trigger + REST invoke activity
Export the app
The REST trigger and REST invoke have the same ref "#rest"; one of them should be tagged with an alias
"imports": [
"github.com/project-flogo/contrib/activity/rest",
"github.com/project-flogo/contrib/trigger/rest",
"github.com/project-flogo/flow"
],
Expected behavior
The REST trigger or the REST invoke should add an alias
Minimal reproduction of the problem with instructions
What is the motivation / use case for changing the behavior?
Please tell us about your environment:
Flogo version: 0.X.X
Browser: [all | Chrome XX | Firefox XX | IE XX | Safari XX ]
Additional information you deem important (e.g. issue happens only occasionally):
@lixingwang that is expected as the flogo engine considers aliases for triggers and activities separately.
Let us know if it is creating any issues in building the application or runtime.
This is the expected behavior; the aliases are scoped within types of contributions, and this is enforced by the core library. If there are any issues with this behavior they should be opened in the core library
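An illustrative model of that scoping (Python here; Flogo itself is Go): the alias key includes the contribution type, so "#rest" is unambiguous.
imports = {
    ("trigger", "rest"):  "github.com/project-flogo/contrib/trigger/rest",
    ("activity", "rest"): "github.com/project-flogo/contrib/activity/rest",
}

def resolve(contrib_type, alias):
    # Triggers and activities are separate namespaces, so the same
    # alias can map to different refs without a clash.
    return imports[(contrib_type, alias)]

print(resolve("trigger", "rest"))
print(resolve("activity", "rest"))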
@apandurangi @fcastill thanks a lot. Yes, it is expected behavior.
|
2025-04-01T04:35:12.781296
| 2020-06-30T06:09:42
|
647892574
|
{
"authors": [
"boaz0",
"codecov-commenter",
"nlcwong"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9917",
"repo": "project-koku/koku-ui",
"url": "https://github.com/project-koku/koku-ui/pull/1589"
}
|
gharchive/pull-request
|
fix(costmodels): show close dialog on wizard close if >20% is filled
Display the exit confirmation dialog in the following cases (sketched below):
Cost model type is Amazon or AWS and the user is in one of these steps: sources, review
Cost model type is Openshift and the user is in one of these steps: markup, sources, review
Cost model type is Openshift and the user is in the price list step and added at least one rate
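A sketch of that decision rule as a predicate (illustrative Python; the app itself is TypeScript, and the step/type names mirror the list above rather than the component's real identifiers):
def should_confirm_exit(model_type, step, rates_added=0):
    if model_type in ("Amazon", "AWS") and step in ("sources", "review"):
        return True
    if model_type == "Openshift" and step in ("markup", "sources", "review"):
        return True
    # Price list only counts once the user has actually added a rate.
    if model_type == "Openshift" and step == "price-list" and rates_added >= 1:
        return True
    return False

assert should_confirm_exit("Openshift", "markup")
assert not should_confirm_exit("Openshift", "price-list", rates_added=0)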
Codecov Report
Merging #1589 into master will increase coverage by 0.38%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1589 +/- ##
==========================================
+ Coverage 78.33% 78.71% +0.38%
==========================================
Files 235 237 +2
Lines 3886 3881 -5
Branches 716 712 -4
==========================================
+ Hits 3044 3055 +11
+ Misses 743 727 -16
Partials 99 99
Impacted Files | Coverage Δ
src/store/djangoUtils/query.ts | 100.00% <0.00%> (ø)
src/store/djangoUtils/pagination.ts | 89.47% <0.00%> (ø)
src/store/sourceSettings/selectors.ts | 100.00% <0.00%> (+8.33%) :arrow_up:
src/store/costModels/selectors.ts | 82.35% <0.00%> (+17.64%) :arrow_up:
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0e18cfb...ca2bb4b. Read the comment docs.
LGTM
Thank you @nlcwong and @ddonahue007 :bow:
|
2025-04-01T04:35:12.805955
| 2021-12-24T12:34:23
|
1088359935
|
{
"authors": [
"PradyumnaNagendra",
"maheshkumargangula"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9918",
"repo": "project-sunbird/sunbird-course-service",
"url": "https://github.com/project-sunbird/sunbird-course-service/pull/419"
}
|
gharchive/pull-request
|
Issue #000 fix: Restrict batch creation or update without leafNodes
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Type of change
Please choose appropriate options.
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes in the below checkboxes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
[ ] Ran Test A
[ ] Ran Test B
Test Configuration:
Software versions: Java-11, play2-2.7.2, scala-2.11, redis-5.0.3
Hardware versions: 2 CPU / 4GB RAM
Checklist:
[ ] My code follows the style guidelines of this project
[ ] I have performed a self-review of my own code
[ ] I have commented my code, particularly in hard-to-understand areas
[ ] I have made corresponding changes to the documentation
[ ] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] New and existing unit tests pass locally with my changes
[ ] Any dependent changes have been merged and published in downstream modules
@PradyumnaNagendra - there is no test case for the invalid scenario.
|
2025-04-01T04:35:12.962837
| 2016-06-07T07:15:15
|
158854958
|
{
"authors": [
"mrunalp",
"rhatdan",
"runcom",
"stevekuznetsov"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9919",
"repo": "projectatomic/docker",
"url": "https://github.com/projectatomic/docker/pull/161"
}
|
gharchive/pull-request
|
container: memory_store: fix deadlock [rhel7-1.10.3]
Should fix https://bugzilla.redhat.com/show_bug.cgi?id=1341906
@rhatdan @mrunalp PTAL
Signed-off-by: Antonio Murdaca<EMAIL_ADDRESS>
LGTM
Did we hear back from testing on openshift?
@mrunalp not yet, testing
LGTM. Not sure if we want to wait for testing results. Irrespective, the change makes sense as we don't need full locks for just read access.
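A sketch of that idea (illustrative Python; the actual fix is Go, using sync.RWMutex): reads take a shared lock so concurrent readers can proceed, and only writes take the exclusive lock.
import threading

class MemoryStore:
    def __init__(self):
        self._lock = threading.Lock()          # exclusive, held by writers
        self._readers = 0
        self._readers_lock = threading.Lock()  # guards the reader count
        self._items = {}

    def _acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._lock.acquire()           # first reader blocks writers

    def _release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                self._lock.release()           # last reader lets writers in

    def get(self, key):
        self._acquire_read()
        try:
            return self._items.get(key)
        finally:
            self._release_read()

    def add(self, key, value):
        with self._lock:                       # writers are exclusive
            self._items[key] = value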
@mrunalp
I believe first batch of tests (or half of them) ran just fine (fine as in no deadlock) with the previous patch /cc @stevekuznetsov
Lokesh is going to build a new docker with this patch applied to be tested one more time
4/100 extended test runs for Origin failed in the conformance suite, none of which exhibited the Docker daemon deadlock. Running another batch, as ~40 test runs flaked in the setup steps
@rh-atomic-bot r+
On the second set of tests @runcom's first set of patches did not help, found one instance with the Docker daemon hang.
let's try with this patch - as I'm sure there's another deadlock in image deletion and this patch should address it
|
2025-04-01T04:35:12.968821
| 2018-04-17T14:50:40
|
315096718
|
{
"authors": [
"edsantiago",
"mheon",
"rhatdan",
"umohnani8"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9920",
"repo": "projectatomic/libpod",
"url": "https://github.com/projectatomic/libpod/issues/631"
}
|
gharchive/issue
|
podman run --user=NONEXISTENT-UID : fails to start
podman seems to be performing UID validation when run with --user=UID and UID is not present in the image's /etc/passwd:
# podman run --rm --user=123456 fedora id
user: unknown user error looking up user with UID 123456
docker happily runs the container:
# docker run --rm --user=123456 fedora id
uid=123456 gid=0(root) groups=0(root)
I lean toward thinking that podman's behavior is correct. Filing this for visibility and for greater discussion.
We should allow it to run as long as it is a valid UID, i.e. a number; then we run as that number. If it is not a number, we should look it up in /etc/passwd. It seems like we are always looking up the username in /etc/passwd.
There was a bug in Docker where a container image could create a user called "1234" and map it to UID=0; we should not read /etc/passwd if the user is a UID rather than a username.
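A sketch of that rule (illustrative Python; podman itself is Go): treat an all-numeric --user value as a raw UID and skip /etc/passwd entirely.
import pwd

def resolve_user(user):
    """All-numeric values are raw UIDs and never touch /etc/passwd, so an
    image cannot alias a name like "1234" to UID 0; names must exist."""
    if user.isdigit():
        return int(user)
    return pwd.getpwnam(user).pw_uid   # raises KeyError for unknown names

print(resolve_user("123456"))  # 123456, even with no passwd entry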
@umohnani8 Could you fix this.
Yup!
Closed by #652
Fixed in #652.
|
2025-04-01T04:35:12.971759
| 2018-07-27T12:38:20
|
345210305
|
{
"authors": [
"TomSweeneyRedHat",
"mheon",
"rhatdan",
"vrothberg"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9921",
"repo": "projectatomic/libpod",
"url": "https://github.com/projectatomic/libpod/pull/1170"
}
|
gharchive/pull-request
|
Fix up docker compatibility messages
Signed-off-by: Daniel J Walsh<EMAIL_ADDRESS>
LGTM
I assume the libpod.conf manpage is removed intentionally? A general question: as the files are renamed, wouldn't it make sense to also s/podman/docker/ in the manpages to have valid references across the manpages?
@vrothberg No, the idea is to make sure people know they are using podman; we don't want them to think they are using docker. That would be considered dishonest. We want them to know they are using podman, but if they cut and paste a command from google, run it by accident, or have scripts hard-coded to run the Docker CLI, we want those to work without having to sed 's/docker/podman/' in their scripts.
There is no equivalent for libpod.conf in docker.
Code LGTM
@rh-atomic-bot r=mheon
LGTM, thx for the links 411
|
2025-04-01T04:35:12.992056
| 2024-07-16T09:07:10
|
2410621921
|
{
"authors": [
"MichalFupso",
"SarthakPALC",
"kavana-14"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9922",
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/issues/9013"
}
|
gharchive/issue
|
Is there any solution for one address family redistributing routes for both IPv4 and IPv6 in calico-bird?
$ bird --version
BIRD version v0.3.3+birdv1.6.8
$ k3s --version
k3s version v1.29.6+k3s1 (83ae095a)
go version go1.21.11
OS: ubuntu 22.04
With calico-bird we are also observing the following:
bird without enable-ipv6 is not distributing IPv6 routes through BGP, and
bird with enable-ipv6 is not distributing IPv4 routes through BGP.
Can one address family redistribute routes for both IPv4 and IPv6?
I'm working on Kubernetes networking, with the Calico CNI plugin. I want to configure the BIRD that is available in the ProjectCalico repo. But IPv4 and IPv6 are not supported together: if I enable IPv6 it won't parse IPv4 addresses, and vice versa.
https://github.com/projectcalico/bird.git //where the repo was cloned
installation step:
./configure
make
make install
bird.conf:
protocol static {
route <IP_ADDRESS>/24 via <IP_ADDRESS>;
route <IP_ADDRESS>/16 blackhole;
route <IP_ADDRESS>/20 unreachable;
route <IP_ADDRESS>/28 prohibit;
route <IP_ADDRESS>/24 via 2001:1::2;
}
protocol static {
ipv6 { export all; };
route 2001:db8:1::/48 via 5555::6666;
route 2001:db8:2::/48 blackhole;
route 2001:db8:3::/48 prohibit;
route 2001:db8:4::/48 unreachable;
}
$ ./bird -d -c ./bird.conf
bird: ./bird.conf:27:11 This is an IPv4 router, therefore IPv6 addresses are not supported
I'm expecting it should work fine for both ipv4 and v6 addresses.
Hi @kavana-14, we expect to run one instance of bird per IPV family. Could you share more of what you are trying to achieve and your setup?
@MichalFupso I'm trying to use feature RFC 5549
Setup:
ubuntu 22.04
K3s clusters
Calico and calicoctl installed
BIRD
Hi @MichalFupso
Wanted to give you more information regarding what exactly we are trying to accomplish. We are trying to use Calico in a setup with a mix of IPv4 and IPv6 addresses. We are trying to set up BGP peering for IPv4 routes with an IPv6 next-hop. The original BIRD implementation has that feature; its technical specification is RFC 5549 and it was implemented in the following commits-
https://github.com/CZ-NIC/bird/commit/ef57b70fa51687865e5823c0af2df2c6de338215
https://github.com/CZ-NIC/bird/commit/d8022d26fc64121c3416abfdb4c38fcbaf81c12e
Calico's fork of BIRD doesn't seem to have that, though we could find a feature request from a year ago without any further information here.
Could you help us by providing information if there is any technical reason why it wasn't incorporated into calico? Even some background information would be helpful.
Thanks in advance.
|
2025-04-01T04:35:12.995816
| 2018-09-20T20:41:12
|
362356889
|
{
"authors": [
"tomdee"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9923",
"repo": "projectcalico/calico",
"url": "https://github.com/projectcalico/calico/pull/2201"
}
|
gharchive/pull-request
|
Fix CODEOWNERS file
[ ] Tests
[ ] Documentation
[ ] Release note
None required
Only *.md in the root directory should be reviewed by core-maintainers
Deploy preview for calico ready!
Built with commit 2a90cdf33e10438f98de563ca8713f49385e0cbb
https://deploy-preview-2201--calico.netlify.com
|
2025-04-01T04:35:12.998999
| 2017-12-14T20:32:27
|
282230195
|
{
"authors": [
"tmjd"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9924",
"repo": "projectcalico/libcalico-go",
"url": "https://github.com/projectcalico/libcalico-go/pull/740"
}
|
gharchive/pull-request
|
Add ShouldMigrate to migrate Interface
Fixing my dumb mistake of forgetting to add my new function to the interface. :doh:
Release Note
None required
Being rebased against master
|
2025-04-01T04:35:13.012372
| 2023-02-03T16:40:56
|
1570121015
|
{
"authors": [
"MaKyOtOx",
"Mzack9999"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9925",
"repo": "projectdiscovery/cdncheck",
"url": "https://github.com/projectdiscovery/cdncheck/pull/60"
}
|
gharchive/pull-request
|
Add Google, GCP, Zscaler, AmazonAWS and Office365 IPs
Add Google, GCP, Zscaler, AmazonAWS and Office365 IP addresses
@MaKyOtOx Thanks for this PR - The new implementation in dev abstracted the scrape logic, so operators are now tracked into a simple yaml file, I've ported the new endpoints at https://github.com/projectdiscovery/cdncheck/pull/61
Thanks !
|
2025-04-01T04:35:13.073786
| 2017-01-11T23:39:37
|
200239857
|
{
"authors": [
"carrickr",
"jcoyne",
"mjgiarlo"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9926",
"repo": "projecthydra-labs/hyrax",
"url": "https://github.com/projecthydra-labs/hyrax/issues/253"
}
|
gharchive/issue
|
From edit-roles page link to admin sets they have access to
https://github.com/projecthydra-labs/hyku/issues/444
The issue referenced here is closed, should this be closed?
@carrickr that closed ticket was "design this feature"; this ticket is for "implement this feature", so no, it should not.
Blocked by work to create the edit user/roles page.
|
2025-04-01T04:35:13.077471
| 2015-11-04T15:23:13
|
115077104
|
{
"authors": [
"AllenBW",
"stackus"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9927",
"repo": "projectjellyfish/api",
"url": "https://github.com/projectjellyfish/api/issues/1027"
}
|
gharchive/issue
|
Discussion: CRUD
CRUD pages are everywhere. They list everything, and the number of these pages makes the application feel unintelligent. We're taking unfiltered data and dumping it out to the user in most places. CRUD pages do not add "value" to the data for users.
Current Implementation
List pages
Create and Edit pages
Delete buttons
Suggestions
Remove the Orders and Services from the navigation
A page showing every Order made in the system does not seem appropriate for every user.
A page showing every Service made in the system does not seem appropriate for every user.
Change pages to add that "value". Value could be limiting the data, offering search and filter options.
Provide context specific actions.
Use the has_scope feature to provide quick filtering for data
Filter the data per user.
Reserving order and service list states for privileged groups/users rather than everyone would be another way to distinctly differentiate the admin experience from the common JF experience.
Do we want to add filtering to all list states or specific filtering to certain states? (This would be in addition to filtering data per the user's role or group.)
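A small sketch of the "add value" suggestions (illustrative Python, not Jellyfish's actual API): scope list results to the requesting user and apply a filter and a limit instead of dumping everything.
def list_orders(orders, user, status=None, limit=25):
    # Non-admins only see their own orders; admins see everything.
    visible = [o for o in orders
               if user.get("admin") or o["owner"] == user["name"]]
    if status is not None:
        visible = [o for o in visible if o["status"] == status]
    return visible[:limit]

orders = [{"owner": "ann", "status": "active"},
          {"owner": "bob", "status": "retired"}]
print(list_orders(orders, {"name": "ann"}))               # only ann's orders
print(list_orders(orders, {"name": "x", "admin": True}))  # admins see all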
|
2025-04-01T04:35:13.090388
| 2020-10-26T15:47:49
|
729681996
|
{
"authors": [
"ccremer",
"srueg"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9928",
"repo": "projectsyn/component-espejo",
"url": "https://github.com/projectsyn/component-espejo/pull/7"
}
|
gharchive/pull-request
|
Update Espejo to v0.3.1
Updates Espejo. Since the rewrite with Operator-SDK 1.1, Espejo supports new flags and env vars.
A rollout on a cluster is not tested yet.
Checklist
[x] Keep pull requests small so they can be easily reviewed.
[x] Update the ./CHANGELOG.md.
@srueg The linting fails without any readable errors whatsoever. Do you have an idea? Compilation succeeds
@srueg The linting fails without any readable errors whatsoever. Do you have an idea? Compilation succeeds
Try running make format, this should apply the auto formatting. (Looks like inconsistent quoting in component/main.jsonnet).
|
2025-04-01T04:35:13.095051
| 2023-05-19T01:45:37
|
1716488699
|
{
"authors": [
"DebakelOrakel",
"vshn-renovate"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9929",
"repo": "projectsyn/component-topolvm",
"url": "https://github.com/projectsyn/component-topolvm/pull/62"
}
|
gharchive/pull-request
|
Update Helm release topolvm to v11.2.1
This PR contains the following updates:
Package
Update
Change
topolvm
patch
11.2.0 -> 11.2.1
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
Diff LGTM, I have no idea whether there's anything we need to be aware of for the TopoLVM 0.18.2 to 0.19.0 upgrade.
No caveats mentioned in the changelog.
|
2025-04-01T04:35:13.107490
| 2018-04-20T16:04:34
|
316325281
|
{
"authors": [
"hossenlopp",
"jbradl11"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9930",
"repo": "projecttacoma/bonnie",
"url": "https://github.com/projecttacoma/bonnie/pull/981"
}
|
gharchive/pull-request
|
Add change event to data criteria inputs to check for validation
From external BONNIE-450. Add change event as something that kicks off validation of data criteria attributes. We were only looking at the keyup event. The browser sometimes provides a dropdown or increment/decrement buttons that can fill the field.
Pull requests into Bonnie require the following. Submitter and reviewer should :white_check_mark: when done. For items that are not-applicable, note it's not-applicable ("N/A") and :white_check_mark:.
Submitter:
[x] This pull request describes why these changes were made.
[x] This PR is into the correct branch.
[x] JIRA ticket for this PR: https://jira.mitre.org/browse/BONNIE-1405
[x] JIRA ticket links to this PR
[x] Code diff has been done and been reviewed (it does not contain: additional white space, not applicable code changes, debug statements, etc.)
[x] If UI changes have been made, google WAVE plug-in has been executed to ensure no 508 issues were introduced.
[x] Tests are included and test edge cases (N/A: it would be overly burdensome to add tests)
[x] Tests have been run locally and pass (remember to update Gemfile when applicable)
[x] Test fixtures updated and documented as necessary ( see internal wiki )
[x] Code coverage has not gone down and all code touched or added is covered.
In rare situations, this may not be possible or applicable to a PR. In those situations:
Note why this could not be done or is not applicable here:
Add TODOs in the code noting that it requires a test
Add a JIRA task to add the test and link it here:
Branch
Back End Coverage
Front End Coverage
master
N/A
[New Branch]
N/A
[x] Automated regression test(s) pass (N/A: does not affect calculation)
If JIRA tests were used to supplement or replace automated tests: Would be overly burdensome to create tests.
[x] JIRA test links: N/A
[x] Justification for using JIRA tests: N/A
[x] JIRA tests have been added to sprint N/A
Reviewer 1:
Name:
[ ] Code is maintainable and reusable, reuses existing code and infrastructure where appropriate, and accomplishes the task’s purpose
[ ] The tests appropriately test the new code, including edge cases
[ ] You have tried to break the code
If JIRA tests were used to supplement or replace automated tests:
[ ] JIRA tests have been run and pass
[ ] You agree with the justification for use of JIRA tests or have provided input on why you disagree
Reviewer 2:
Name:
[ ] Code is maintainable and reusable, reuses existing code and infrastructure where appropriate, and accomplishes the task’s purpose
[ ] The tests appropriately test the new code, including edge cases
[ ] You have tried to break the code
If JIRA tests were used to supplement or replace automated tests:
[ ] JIRA tests have been run and pass
[ ] You agree with the justification for use of JIRA tests or have provided input on why you disagree
Can you add a simple jira test for this? Also, I think the internal jira ticket is linked to the incorrect external jira ticket.
Closing to be replaced with #982 on proper branch.
|
2025-04-01T04:35:13.140355
| 2016-10-05T15:23:22
|
181186442
|
{
"authors": [
"SanVakil"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9931",
"repo": "prolificinteractive/material-calendarview",
"url": "https://github.com/prolificinteractive/material-calendarview/issues/421"
}
|
gharchive/issue
|
how to change the selection text color?
How can I change the selection text color? I can change the selection background.
Hi,
I reviewed your documentation and identified the Custom Selectors. Thanks for your support.
|
2025-04-01T04:35:13.179912
| 2017-01-18T13:54:36
|
201580103
|
{
"authors": [
"brian-brazil",
"waqark3389"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9932",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/2350"
}
|
gharchive/issue
|
Submission to EPEL plan
Prometheus is growing massively and used very widely. Are there any plans to try and submit packages to official Redhat repos? Makes automation and keeping prometheus up-to-date easier.
We've no plans to offer packages ourselves; however, someone from the community is free to do so. This is what happens with Debian currently.
|
2025-04-01T04:35:13.181443
| 2017-07-24T01:47:06
|
244953144
|
{
"authors": [
"DrewDennison",
"brian-brazil"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9933",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/2981"
}
|
gharchive/issue
|
Feature Request: Allow registering multiple times (with flag)
Hello,
I'm using Prometheus with Scala and the Play Framework 2.5. Play will hot-reload every time there is a code change, which means .register() is called multiple times. This causes errors saying there is already a collector with that name registered. I would like to disable the exception at registration if there is something with the same name. Perhaps it could print a warning?
This is a feature request for the Java client; you should file it there. This behaviour is intended, though, as it is a mistake to register multiple collectors with the same name.
|
2025-04-01T04:35:13.190053
| 2023-12-20T22:30:06
|
2051386346
|
{
"authors": [
"beorn7",
"ptodev",
"roidelapluie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9934",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/13316"
}
|
gharchive/pull-request
|
scraping/alerting: Fix issues with duplicate metric registration
This PR will make it so that each service discovery mechanism reports its own debug metrics series. Currently, on the latest stable Prometheus release, if a Prometheus process runs more than one instance of the same SD mechanism, then it's not possible to distinguish between metrics for different scrape instances.
Fixes #13312
Unfortunately, #13023 introduced a bug whereby if two scrape jobs use the same service discovery, Prometheus would crash.
When I was working on #13023 I was working under the assumption that Prometheus can't instantiate more than one instance of the same SD. Therefore I unfortunately missed the most obvious test case - the one where there is more than one scrape job which uses a particular SD. I also didn't test AlertManager SDs. This led to the bug described in #13312.
I tested this bugfix locally using the config file below.
Prometheus config
global:
scrape_interval: 16s
scrape_configs:
- job_name: 'testing'
scrape_interval: 6s
file_sd_configs:
- files:
- "/Users/paulintodev/Desktop/targets.json"
refresh_interval: 4m
- job_name: 'testing2'
scrape_interval: 6s
file_sd_configs:
- files:
- "/Users/paulintodev/Desktop/targets2.json"
refresh_interval: 4m
- job_name: 'config-0'
scrape_interval: 6s
file_sd_configs:
- files:
- "/Users/paulintodev/Desktop/targets3.json"
refresh_interval: 4m
alerting:
alertmanagers:
- file_sd_configs:
- files:
- "/Users/paulintodev/Desktop/targets.json"
refresh_interval: 4m
- file_sd_configs:
- files:
- "/Users/paulintodev/Desktop/targets2.json"
refresh_interval: 4m
I tested both by using --enable-feature=new-service-discovery-manager and without.
I also forced several config reloads using the pkill -SIGHUP -i prometheus command, to make sure Prometheus doesn't crash and that no errors are logged.
After the change I see metrics such as these on http://localhost:9090/metrics:
prometheus_sd_file_read_errors_total{job_name="config-0",sd_type="notify"} 0
prometheus_sd_file_read_errors_total{job_name="config-0",sd_type="scrape"} 0
prometheus_sd_file_read_errors_total{job_name="config-1",sd_type="notify"} 0
prometheus_sd_file_read_errors_total{job_name="testing",sd_type="scrape"} 0
prometheus_sd_file_read_errors_total{job_name="testing2",sd_type="scrape"} 0
cc @bwplotka @beorn7
Thanks for catching this. The most appropriate label name would probably be just job (as we use it for the default target labels).
However, I'm not sure if it is desired to partition all the SD metrics by job. Maybe it's even an improvement, but maybe it's not useful at all. I would like to hear more opinions about this. @roidelapluie @juliusv (and anyone that feels qualified) WDYT?
This is not an acceptable solution as it will create a lot of cardinality.
The goal of #13023 was to allow users who import Prometheus SD as a library, to pass their own registry. If Prometheus is to do this, while still only having one metric series for every SD instance, then I am not sure how to do this cleanly. Do you want to revert #13023 while we discuss a solution? Or would you rather us find a solution straight away?
One solution could be to have an any object containing SD-specific metrics inside a DiscovererOptions. We will need to remove Registerer from DiscovererOptions, then make new metrics structures such as FileSDMetrics which would have to be created and managed by the Discovery Manager. All of this sounds very convoluted. Maybe there is a better way?
@roidelapluie thanks for your response.
@ptodev I guess that clarifies things: If cardinality increase is a problem, that's even on top of the problem of changing the labels of long established metrics. I would say we should try to not have separate metrics per scrape job, or in other words: Whatever changes we do in the code behind the scenes, the metrics shouldn't change.
So far, we acted under the assumption that there will be one SD instance per kind of SD, not per job, but that's wrong, see above.
So the conclusion is we truly have to share metrics between SD instances. And for sharing, we either have to manage them outside of the SD instance (as you have described in your last comment), or we go back and reconsider using the AlreadyRegisteredError pattern, see example here: https://pkg.go.dev/github.com/prometheus/client_golang/prometheus#example-AlreadyRegisteredError
A problem with the latter approach is that you cannot really deregister metrics anymore (but de-registration is probably a problem with shared metrics anyway, and maybe we should just not de-register at all).
The former approach might be a bit cleaner in general, but then we are back to managing the whole lifecycle of the metrics outside of the SD instances, which is particularly annoying if you use the SD code as a library and have to remember to always initialize the correct set of metrics for each SD type.
WRT reverting #13023: It didn't make it into v2.49, so I would say we don't need to emergency-rollback #13023, but a fix should still happen soon.
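A rough Python analog of that AlreadyRegisteredError-style sharing (the discussion concerns client_golang; these names are illustrative): the first SD instance creates and registers the metric, and later instances reuse the same collector instead of failing.
from prometheus_client import CollectorRegistry, Counter

def shared_counter(registry, cache, name, documentation):
    # First caller creates and registers; later SD instances reuse it.
    if name not in cache:
        cache[name] = Counter(name, documentation, registry=registry)
    return cache[name]

registry, cache = CollectorRegistry(), {}
a = shared_counter(registry, cache, "sd_file_read_errors_total", "doc")
b = shared_counter(registry, cache, "sd_file_read_errors_total", "doc")
assert a is b   # one time series, no matter how many SD instances exist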
|
2025-04-01T04:35:13.192310
| 2024-08-07T04:43:04
|
2452370546
|
{
"authors": [
"beorn7",
"charleskorn",
"krajorama"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9935",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/14609"
}
|
gharchive/pull-request
|
promql: fix "cannot reduce resolution to custom buckets schema" panic in rate over native histograms with mix of custom and exponential buckets
If rate is used over a range of native histograms with both custom and exponential buckets, a panic can occur.
I've also modified the behaviour of the test scripting language to check for query errors before warnings: if the query failed to evaluate, it's more useful to see the failure rather than the fact that warnings were or weren't present.
Thank you. Related to #13494
Also somewhat related: #14168
|
2025-04-01T04:35:13.204333
| 2024-10-25T13:02:50
|
2614052039
|
{
"authors": [
"fionaliao",
"krajorama"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9936",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/15221"
}
|
gharchive/pull-request
|
fix(tsdb): interleaved OOO and In-order chunks with NH counter resets
Found when testing native histograms out of order #11220 #14850
Test: Add test for triggering the bug where counter reset header is not cleared when we switch between chunks coming from in-order and out-of-order head.
Fix: Clear the native histogram counter reset hint when switching between the
in-order and out-of-order head.
Given the samples written in this order:
t=1, v=40
t=4, v=30 (reset in in-order, creates new chunk)
t=2, v=40
t=3, v=10 (reset in out-of-order, creates new chunk)
When we read them back, this is the order:
t=1, v=40
t=2, v=40
t=3, v=10 (reset in readback)
t=4, v=30
Without this fix we also see a reset for t=4, v=30 because the chunk is a
non-overlapping in-order chunk and it is used as is.
The fix is to detect when we switch between in-order and out-of-order
chunks and wrap the next chunk in an iterator that clears the
counter reset as if it overlapped with another chunk.
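A toy model of that wrapping step (the real code is Go inside the TSDB; this only illustrates the hint-clearing rule): when the merged read switches between in-order and out-of-order chunks, the first sample of the next chunk gets its counter-reset hint downgraded to unknown, so a stale header from write time cannot leak through.
UNKNOWN = "unknown"

def merged_with_cleared_hints(chunks):
    """chunks: list of (source, samples); source is "in-order" or "ooo",
    samples are (t, v, reset_hint) tuples already ordered for reading."""
    prev_source = None
    for source, samples in chunks:
        for i, (t, v, hint) in enumerate(samples):
            if i == 0 and prev_source is not None and source != prev_source:
                hint = UNKNOWN   # chunk boundary crosses head types
            yield t, v, hint
        prev_source = source

chunks = [("in-order", [(1, 40, None)]),
          ("ooo",      [(2, 40, None), (3, 10, "reset")]),
          ("in-order", [(4, 30, "reset")])]   # stale header from write time
print(list(merged_with_cleared_hints(chunks)))  # t=4 hint becomes "unknown"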
The fix assumes we only need to recalculate counter reset hints if we are switching between in-order and OOO chunks. But we could have a similar problem with just OOO chunks when you include mmapped OOO chunks - those chunks could be interleaved with each other, or the OOO chunks from the head. All the chunks created in a single mmapCurrentOOOHeadChunk() call will have the correct counter reset headers wrt to each other, but if you start merging chunks from different mmapCurrentOOOHeadChunk() calls+the current OOO head chunk, there's no such guarantee.
See this test https://github.com/prometheus/prometheus/compare/ooo-nh-fix...fionaliao:prometheus:fl/ooo-nh-mmap-test
(I accidentally pushed this commit to your branch and then deleted it, that's why you see the force-push from me 😅 )
I think we might have to mark all OOO chunks as UnknownCounterReset unfortunately, since we don't know which OOO chunks were mmapped from the head together.
I see, no worries, plz feel free to push to the branch, it's Prometheus' :)
I think we might have to mark all OOO chunks as UnknownCounterReset unfortunately, since we don't know which OOO chunks were mmapped from the head together.
Sounds good as MVP. I wonder if there's a better algorithm that can detect if we're going out of order. For example, the Ref number also encodes a monotonically increasing counter, so that might help determine whether the data is genuinely out of order after sorting by time.
When we mmap OOO chunks, the first chunk will have unknown reset hint as there's no continuity between OOO mmaps (we start from scratch every time). This could be a missed opportunity, maybe we can fix it later.
For those reset headers to be valid, I think both of the following need to hold:
- no overlap with other chunks (any in-order + OOO)
- the ref ID needs to follow right after the previous one, so we know it's continuous (not sure how we can know; are these increased 1-by-1?)

Seems like it gets pretty complicated, so we need to go with the MVP and just use unknown reset.
The one thing that bothers me is that, with the current code, the chunks don't just lose their reset header, they are also recoded into new chunks. An optimization could be to not recode OOO chunks that aren't overlapping with anything, and instead just wrap them with a chunk iterator that removes the counter reset from the first sample and leaves the NotCounterReset hints alone.
@krajorama Noticed a problem with requireEqualSeries that meant we don't always check counter reset headers when we should. Fixing that has resulted in an OOO NH compaction test being updated after the changes made here to always set OOO chunks as UnknownCounterReset: https://github.com/prometheus/prometheus/pull/15252. Could you merge that into this PR?
Closing in favor of either #15343 or #15343.
Long-term solution proposed in #15346.
|