| added (string, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | created (timestamp[us], 2001-10-09 16:19:16 to 2025-01-01 03:51:31) | id (string, length 4 to 10) | metadata (dict) | source (string, 2 classes) | text (string, length 0 to 1.61M) |
|---|---|---|---|---|---|
2025-04-01T04:35:13.208515
| 2018-11-15T22:10:05
|
381360905
|
{
"authors": [
"nbirnel",
"simonpasquier"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9937",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/4870"
}
|
gharchive/pull-request
|
add a wmi_exporter node-overview console template
@brian-brazil
This is so the next person who wants a console for Windows nodes doesn't have to write it themselves.
Despite #3099, I am unable to find anything on the mailing list suggesting that consoles are deprecated, so I believe this is worthwhile. https://groups.google.com/forum/?fromgroups#!searchin/prometheus-developers/console|sort:date/prometheus-developers/VzmcWJKRQxY/mUVBSzVCGQAJ is an older conversation, but I see nothing more recent on the subject.
If the prometheus team does not wish to maintain console templates, but will not be removing them, perhaps the docs could instead point to a separate repo to avoid duplication of effort for those who want to use them.
This should go in the wmi-exporter repository as the decision has already been made that this repository can't contain templates for all possible exporters.
Thanks @simonpasquier, I am closing this and opening a PR over there.
|
2025-04-01T04:35:13.212004
| 2020-01-27T10:03:07
|
555468782
|
{
"authors": [
"Harkishen-Singh",
"boyskila",
"gouthamve",
"juliusv",
"roidelapluie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9938",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/6702"
}
|
gharchive/pull-request
|
React UI: Some /rules page follow-ups
Just a few follow-ups for now, as I'm a bit short on time today:
Single table for all rule groups, so that all columns are aligned on the page (like in the old UI).
Remove redundant direct cell styling / switch to using "rule-cell" also for recording rules.
Remove extra left spacing of graph expression links.
Remove unused CSS.
Signed-off-by: Julius Volz<EMAIL_ADDRESS>
@juliusv @boyskila I think we can merge this.
@juliusv do you think we can merge this PR?
Hi @juliusv, there are a lot of conflicts here, could you rebase if you're still interested? If not, could we close this?
We have looked at this pull request during our bug scrub.
Considering that this has drifted a lot, @juliusv would you consider reopening a new pull request if that's needed?
Thank you for your contribution.
|
2025-04-01T04:35:13.213865
| 2020-08-02T06:12:08
|
671560497
|
{
"authors": [
"codesome",
"johncming",
"roidelapluie"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9939",
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/7720"
}
|
gharchive/pull-request
|
tsdb: add check for mint and maxt.
Add a check for mint and maxt. The case where mint is bigger leads to no samples being created.
After checking it again, I would like to take back my approval.
The case where mint is bigger leads to no samples being created.
This is fine (and it can be achieved by passing 0,0); it is also required in some cases where we don't want samples. With the proposed change, it would create 0 series too, which I don't think would be expected.
We have looked at this during the bug scrub and decided to close this as per above comments. Thanks!
|
2025-04-01T04:35:13.248804
| 2024-02-12T08:19:37
|
2129621556
|
{
"authors": [
"typpo",
"wuodar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9940",
"repo": "promptfoo/promptfoo",
"url": "https://github.com/promptfoo/promptfoo/issues/466"
}
|
gharchive/issue
|
Invalid option: a
I'm getting "Invalid option: a" error when running eval with model graded "factuality" assertion (btw docs are unambiguous about factuality, should I use factuality or model-graded-factuality?). I'm using default test provider bedrock:completion:anthropic.claude-v2.
Thanks for the catch - #468 should resolve this issue. It was happening because Claude is grading with a lowercase letter, but the code expects an uppercase letter.
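For illustration only, here is a minimal sketch of the kind of case-insensitive normalization that avoids this class of error; the option list and function name are made up and this is not the actual change from #468:

```typescript
// Hypothetical sketch: accept the grader's verdict regardless of letter case.
const KNOWN_OPTIONS = ["A", "B", "C", "D", "E"];

function parseVerdict(raw: string): string {
  const option = raw.trim().charAt(0).toUpperCase();
  if (!KNOWN_OPTIONS.includes(option)) {
    throw new Error(`Invalid option: ${raw}`);
  }
  return option;
}

// "a" and "A" now map to the same option.
console.log(parseVerdict("a")); // "A"
```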
|
2025-04-01T04:35:13.252299
| 2022-07-17T21:54:57
|
1307218417
|
{
"authors": [
"0cry",
"proofit404"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9941",
"repo": "proofit404/generics",
"url": "https://github.com/proofit404/generics/pull/235"
}
|
gharchive/pull-request
|
Deny keyword arguments in constructor.
Resolves #213
Fails
:no_entry_sign:
Issue marked as feature should have docs commit
Generated by :no_entry_sign: dangerJS against 8e86c4ecb677283faf4bd114c528d2f8c53be351
|
2025-04-01T04:35:13.258541
| 2022-03-28T08:31:19
|
1183082327
|
{
"authors": [
"codecov-commenter",
"dereuromark"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9942",
"repo": "propelorm/Propel2",
"url": "https://github.com/propelorm/Propel2/pull/1850"
}
|
gharchive/pull-request
|
Fix up phpstan silencing.
Since we are not solving it, this at least fixes the silencing issue and documents the truth about the abstract one
https://github.com/propelorm/Propel2/issues/1622
Codecov Report
Merging #1850 (053ec50) into master (94069d2) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1850 +/- ##
=========================================
Coverage 87.75% 87.75%
Complexity 7761 7761
=========================================
Files 282 282
Lines 21291 21291
=========================================
Hits 18684 18684
Misses 2607 2607
Flag       Coverage Δ
5-max      87.75% <ø> (ø)
7.4        87.75% <ø> (ø)
agnostic   66.92% <ø> (ø)
mysql      69.01% <ø> (ø)
pgsql      69.03% <ø> (ø)
sqlite     66.87% <ø> (ø)
Flags with carried forward coverage won't be shown.
Impacted Files                                       Coverage Δ
src/Propel/Runtime/Formatter/AbstractFormatter.php   76.19% <ø> (ø)
Continue to review full report at Codecov.
Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 94069d2...053ec50. Read the comment docs.
|
2025-04-01T04:35:13.267801
| 2022-10-24T03:10:02
|
1420137174
|
{
"authors": [
"Liminglud",
"ORainn",
"asdjia",
"chenxinshuang",
"cocotorrow",
"hezw2016",
"proteus1991",
"xupinggl",
"zyqss"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9943",
"repo": "proteus1991/PSCC-Net",
"url": "https://github.com/proteus1991/PSCC-Net/issues/4"
}
|
gharchive/issue
|
search for help
Thank you for open-sourcing your work; it is excellent. I downloaded the code and parameter weights you provided. Because I didn't see your evaluation code, I used my own evaluation metric code, and my test results are somewhat different from yours. The following is my evaluation method and test results. Could you share your evaluation code or point out my mistakes?
Hello, are you using the weights provided in the checkpoint file? Why did I only reach 70% on CASIA v1?
I also tried to retrain the net using my own dataset with only splicing images, and the training goes badly. I wonder whether you were able to train with a reasonable process.
I retrained the model and then tested on these datasets, but the result is worse than testing with the pretrained weights the author provides.
Thanks. I'm having trouble with training; can I contact you on QQ?
OK, my QQ is <PHONE_NUMBER>
Hi, may I ask how you downloaded the NIST16 dataset with 564 images? I downloaded it, but the number of images is not the same (about 1,000 images in total).
We have provided the name list for the NIST16 dataset in dataset/test/NIST16. Hope this helps.
I used the sklearn package for measurement. More details can be found in #2 . Also, it is worth noting that the forged region in some images might be greater than 50% of the whole image. In those cases, the PSCC-Net might treat the smaller region as forged (e.g., the spliced region). Since localizing the greater or smaller region as the forged region is both reasonable in practical applications, we use 1-AUC as the final score if the AUC score is lower than 50%.
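As a rough illustration of the scoring rule just described (the sklearn-based measurement itself is not reproduced here), the flip could be expressed as:

```typescript
// Sketch of the rule described above: when the forged region covers more than
// half of the image, the predicted map may be inverted and the raw AUC can
// fall below 0.5, so the final score flips it.
function finalAucScore(rawAuc: number): number {
  return rawAuc < 0.5 ? 1 - rawAuc : rawAuc;
}

console.log(finalAucScore(0.3)); // 0.7
console.log(finalAucScore(0.9)); // 0.9
```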
test.py reports an error that splice_metrics_new cannot be found. Could the author please make it public? Thank you very much.
Hello, I have the same question. Also, could you please share the code you used for testing? My code encountered an Out of Memory issue on high-resolution images, such as IMD20 and NIST16.
Hello, could you please tell me if you have done a similar calculation for F1? I retrained the model and fine-tuned on the CASIA v2 dataset. The AUC score I calculated is consistent with what you said (87.08), but the F1 score is only 44.04.
Have you solved this problem? I ran into the same issue.
Hey, I got a similar pixel-level F1 score on CASIA v1 with the .pth weights provided by the authors, which is about 46%.
|
2025-04-01T04:35:13.277951
| 2022-03-25T15:37:39
|
1180944826
|
{
"authors": [
"mrhoseah",
"muhammedfayaz"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9944",
"repo": "protonemedia/inertiajs-tables-laravel-query-builder",
"url": "https://github.com/protonemedia/inertiajs-tables-laravel-query-builder/issues/61"
}
|
gharchive/issue
|
Missing required prop: "queryBuilderProps"
I followed the guide and did everything as indicated but I get this error in the console
Missing required prop: "queryBuilderProps"
Got a solution; it was my own mistake.
I got the same error. What was the solution?
|
2025-04-01T04:35:13.284329
| 2024-01-04T15:00:33
|
2065812235
|
{
"authors": [
"Haarolean",
"maximus13th"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9945",
"repo": "provectus/kafka-ui-charts",
"url": "https://github.com/provectus/kafka-ui-charts/pull/25"
}
|
gharchive/pull-request
|
Add custom labels for the Ingress
Hello!
This PR allows the addition of custom labels to the Kafka-UI ingress.
For example, I need to add some special labels for selecting the Kafka-UI ingress in our projects and some automation. For now, we do not use the native ingress from this chart; instead, we have to create a custom ingress, which is not convenient.
Perhaps it will be useful for somebody else.
Best regards,
Maksim
@maximus13th hi, this repo is not maintained (https://github.com/provectus/kafka-ui/discussions/4255), see https://github.com/kafbat/kafka-ui instead
|
2025-04-01T04:35:13.288005
| 2022-01-16T11:51:02
|
1105030801
|
{
"authors": [
"Dugong42",
"Haarolean",
"tilmann-bartsch"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9946",
"repo": "provectus/kafka-ui",
"url": "https://github.com/provectus/kafka-ui/issues/1393"
}
|
gharchive/issue
|
open-id/oauth2 authentication for user interface
Is your proposal related to a problem?
I would like to authenticate kafka-ui using a keycloak server.
Describe the solution you'd like
Simple authentication can be obtained by the open-id/oauth2 implicit flow as described in
OpenID Connect Implicit Client Implementer's Guide which is supported by keycloak.
To achieve this I'd like to use the following environment variables in docker-compose.yml:
AUTH_TYPE="OAUTH_IMPLICIT_FLOW"
OAUTH_CLIENT_ID="kafka-ui"
OAUTH_AUTHORIZATION_URL="http://<IP_ADDRESS>:8080/auth/realms/master/protocol/openid-connect/auth"
where the URL is the default authorization endpoint of a Keycloak-Server in realm master.
Kafka-ui should then follow the implicit flow by redirecting:
GET "$OAUTH_AUTHORIZATION_URL"?
response_type=id_token%20token
&client_id="$OAUTH_CLIENT_ID"
&redirect_uri=<kafka-ui-url>
&scope=openid%20profile
&state=af0ifjsldkj
&nonce=n-0S6_WzA2Mj
the keycloak server then redirects to <kafka-ui-url> and adds access_token, token_type, and id_token, which are used by kafka-ui to deny or grant access.
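Purely as an illustration of that last step (this is not kafka-ui's implementation, and the variable names are made up), a browser client could read those fragment parameters like this:

```typescript
// Illustrative only: read the tokens that the implicit flow returns in the
// URL fragment after the redirect back to <kafka-ui-url>.
const fragment = new URLSearchParams(window.location.hash.slice(1));

const accessToken = fragment.get("access_token");
const tokenType = fragment.get("token_type");
const idToken = fragment.get("id_token");

if (!accessToken || !idToken) {
  // No tokens present: access is denied / the flow has to be restarted.
  throw new Error("OIDC implicit flow did not return the expected tokens");
}
```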
Describe alternatives you've considered
A simpler but less general solution would be to use the keycloak-js library as described in this medium-article.
Hey, thanks for raising the issue. As I mentioned on discord, unfortunately it's not solvable by configuration alone, we'll have to implement some keycloak support.
Hey, actually I thought you need keycloak with roles.
If you need just the oauth, that's probably gonna work out of the box.
You could refer to SSO guide and try the settings described there. Let me know how it goes.
FYI, I posted an example of Keycloak configuration as OIDC provider, see #3298
@alexted the documentation is open-source as well, located in docs branch. PRs are welcome!
|
2025-04-01T04:35:13.298218
| 2023-01-05T11:40:48
|
1520602041
|
{
"authors": [
"SrikanthREEF",
"n4ch04",
"toniblyx"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9947",
"repo": "prowler-cloud/prowler",
"url": "https://github.com/prowler-cloud/prowler/issues/1659"
}
|
gharchive/issue
|
[Bug]: Connection reset when running Prowler using remote session in Ubuntu
I realized that I haven't shared what I see as an outcome in my earlier comment. So, the attachment is from an earlier attempt that confirms my creds and config are correct. I attempted this on 2 Ubuntu boxes, and every time the scan hangs at 48% and messes up connectivity to my box. I would have to restart the EC2 to retry again. I am assuming this may be conflicting with or changing the firewall config of the machine, preventing host resolution or port 22 connectivity.
Originally posted by @SrikanthREEF in https://github.com/prowler-cloud/prowler/issues/1654#issuecomment-1371355197
From @n4ch04:
Hi @SrikanthREEF, since the scan gets hung and you lose connectivity every time, it seems that you have a low ulimit set on your Ubuntu EC2 instances. In Unix the ulimit sets the maximum number of file descriptors open at the same time, and the system treats almost everything as a file descriptor (from sockets to proper files). Prowler uses Python paginators, which create intermediate files to process large API responses.
Give the ulimit change a try: first check the current value with ulimit -a and then change it with ulimit -n
Hi @toniblyx
I tried increasing ulimit from 1k to 40k, but the scan still ended up hanging at 48%.
Ok, the scan is still hanging, but did you lose connectivity to the remote host again?
Looking at your screenshot it seems that something went wrong. Can you relaunch the scan with the flag --log-level ERROR appended to the end of the command and share the logs?
Hi @SrikanthREEF did this workaround work for you?
Hi @n4ch04 ,
Sorry, I didn't get a chance to respond yesterday. Last Friday, I upgraded my EC2 instance from t2.micro to t2.medium. Since then the scan progresses to 95% and completes, but with a failure in creating an output file. I also ran it with logs enabled; please take a look at the attached text file.
It is modifying something in the none.html file, but that file doesn't contain the latest scan info, as shown in its header section.
prowler error logs.txt
Hi @SrikanthREEF no worries, yep we have dealt with that issue and it was solved in version 3.0.2; from your logs it seems that you are not using the latest version (you can check it with prowler -v).
Please update it and launch it again; it should be solved.
Regarding the system requirements, Prowler v3 parallelises and stores information in memory to improve performance, so that memory requirement may be true in your case.
Hello @n4ch04,
the new version did generate the output files. I would say this issue can be closed now.
My suggestion would be to call out recommended memory requirements in the Prowler documentation, so other users can avoid this issue.
|
2025-04-01T04:35:13.300537
| 2016-08-04T22:01:06
|
169483740
|
{
"authors": [
"koithara",
"proxb",
"sheldonhull"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9948",
"repo": "proxb/PoshRSJob",
"url": "https://github.com/proxb/PoshRSJob/issues/80"
}
|
gharchive/issue
|
Issue while having more than 9 parameters to trigger a function
The below code takes 10 parameters and fails; however, when I run it with one less, it passes...
$MyJob = Start-RSJob -Name $ServerName -ScriptBlock {
param ($IPAddress,$Profile,$ProfileJSON,$VMCred,$BusinessUnit,$Region,$BaseEnv,$Client,$scriptsDIR,$CloudProvider)
$files = gci "${scriptsDIR}\utility" -filter "*.psm1"| Select -exp Name
foreach ($file in $files) { Import-Module -Force "${scriptsDIR}\utility\${file}" -WarningAction silentlyContinue }
Write-Output "$IPAddress,$Profile,$ProfileJSON,$VMCred,$BusinessUnit,$BaseEnv,$Client,$CloudProvider"
Write-Log "INFO" "chef-client completed for $ServerName"
} -ArgumentList $IPAddress,$Profile,$ProfileJSON,$VMCred,$BusinessUnit,$Region,$BaseEnv,$Client,$scriptsDIR,$CloudProvider
Can you provide any errors that you are seeing when using 10+ parameters?
So far I have been unable to reproduce this issue.
@koithara Just following up on this, can you verify with the latest release of the module?
@koithara Any update on this? If I don't hear anything back, I'll close this out as being fixed as I cannot reproduce this issue.
I reproduced the issue. I spent countless hours troubleshooting and finally ran across this post. Once I reduced the argument count the arguments were correctly identified.
I noticed in the pester tests there is not a single scenario for covering argumentlists either. This is definitely something worth doing. If you are swamped I can do my best to contribute some tests for this when I can. Please reopen this issue though.
So I got a workaround for this: I added my arguments to a psobject and am now passing the psobject.
By the way, good job on this module. I wrote a few other modules to support error handling from all of the jobs, etc.
Re-opened per @sheldonhull comments. If you have time, feel free to add some Pester tests to this.
|
2025-04-01T04:35:13.304133
| 2023-09-13T15:38:53
|
1894794608
|
{
"authors": [
"proxfly"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9949",
"repo": "proxoar/talk-web",
"url": "https://github.com/proxoar/talk-web/issues/17"
}
|
gharchive/issue
|
Max Token error from LLM server
ChatGPT returned an error.
got error from LLM sever: error, status code: 400,
message: This model's maximum context length is 4097 tokens.
However, you requested 4315 tokens (315 in the messages, 4000 in the completion).
Please reduce the length of the messages or completion.
By subtracting the number of tokens in the request from MaxToken, the user's mental load can be reduced.
How to calculate the number of tokens before sending the message?
What about languages other than English?
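A minimal sketch of the clamping idea, assuming a plain character-count heuristic for token estimation; the constants, names, and heuristic are illustrative rather than talk-web's actual code:

```typescript
// Illustrative only: keep prompt tokens + completion tokens within the
// model's context window. The chars/4 estimate is a rough heuristic that
// undercounts for many non-English languages, hence the safety margin.
const CONTEXT_LIMIT = 4097; // e.g. the limit reported in the error above
const SAFETY_MARGIN = 64;

function estimatePromptTokens(messages: string[]): number {
  const chars = messages.reduce((sum, m) => sum + m.length, 0);
  return Math.ceil(chars / 4);
}

function maxCompletionTokens(messages: string[], requestedMax: number): number {
  const budget = CONTEXT_LIMIT - estimatePromptTokens(messages) - SAFETY_MARGIN;
  return Math.max(1, Math.min(requestedMax, budget));
}
```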
It's been a while since this issue last surfaced. Perhaps OpenAI has updated its API to circumvent this problem. This hasn't been confirmed, though. Please reopen the issue if it re-emerges.
|
2025-04-01T04:35:13.488479
| 2017-09-21T10:12:32
|
259444902
|
{
"authors": [
"FDMX2",
"psantosl"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9950",
"repo": "psantosl/PseudoTcpSharp",
"url": "https://github.com/psantosl/PseudoTcpSharp/issues/1"
}
|
gharchive/issue
|
Missing license
Hi,
nice port but without a license it can't be reused...
Can you please add one ?
Ouch, sure, I can!
Should be fixed now :)
|
2025-04-01T04:35:13.574480
| 2016-03-30T17:06:40
|
144654904
|
{
"authors": [
"cam156",
"lmballinger",
"ntallman"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9951",
"repo": "psu-libraries/cho-req",
"url": "https://github.com/psu-libraries/cho-req/issues/58"
}
|
gharchive/issue
|
ability to re-define collections
As a metadata specialist I would like to be able to re-define collections, so I can break them apart or combine them, in whole or in part, at any time.
@lmballinger What does this mean? Taking one collection and splitting it into two or more / merging one or more collections into a single collection?
Take a larger collection and break it into smaller collections.
|
2025-04-01T04:35:13.631190
| 2024-10-01T17:50:52
|
2559850512
|
{
"authors": [
"mountaindude"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9952",
"repo": "ptarmiganlabs/butler",
"url": "https://github.com/ptarmiganlabs/butler/issues/1245"
}
|
gharchive/issue
|
Test case ""scheduler" fails
Running npm test scheduler fails the last test case:
H8: GET /v4/schedules/status
✓ It should respond with 200 when getting status (7 ms)
✕ Response should be a string (7 ms)
The Butler log does not show any warnings or errors.
This API endpoint returns text/plain rather than the application/json that most other endpoints return.
This is correct (for now); the test case, however, incorrectly assumes the response to be an object.
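A minimal sketch of how the assertion could be adjusted, assuming a Jest + supertest style test; the URL and setup are hypothetical and Butler's actual test harness may differ:

```typescript
import request from "supertest";

// Hypothetical base URL for a locally running Butler instance.
const api = request("http://localhost:8080");

test("GET /v4/schedules/status responds with 200 and a plain-text body", async () => {
  const res = await api.get("/v4/schedules/status");

  expect(res.status).toBe(200);
  // The endpoint returns text/plain, so assert on the string body
  // rather than expecting a parsed JSON object.
  expect(res.headers["content-type"]).toMatch(/text\/plain/);
  expect(typeof res.text).toBe("string");
});
```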
|
2025-04-01T04:35:13.813604
| 2020-11-17T19:05:38
|
745002894
|
{
"authors": [
"lacabra"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9953",
"repo": "publicgoods/products",
"url": "https://github.com/publicgoods/products/pull/77"
}
|
gharchive/pull-request
|
Add new product(s): products/wonder-tree.json
Add new product(s) from unicef/publicgoods-candidates
Good catch, done in 536f837
|
2025-04-01T04:35:13.937244
| 2024-05-06T14:22:41
|
2281005788
|
{
"authors": [
"SureshSoren",
"Taherabharmal",
"Vanshikajain02"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9954",
"repo": "pucardotorg/dristi",
"url": "https://github.com/pucardotorg/dristi/issues/196"
}
|
gharchive/issue
|
UAT Bug | Issue with Map
Module: User Registration
User: Litigant/Advocate
Screen: Registration Details
Issue:
On moving the PIN location in the map, the corresponding address in 1) the field above the map and 2) the address fields does not change.
On editing the Pincode field, all other details remain as they are.
Expected behaviour:
When the PIN is moved in the map, the corresponding address in 1) the field above the map and 2) the address fields should change.
On editing the pincode field: 1) all other details in the address fields should become blank and 2) the map should re-point to the new pincode.
@Taherabharmal
I need more clarity, since it is working for me.
The Google Maps API that we are using doesn't have the functionality to return the address based on searching by pincode.
cc: @manimaarans @krishnaprasadsannidhi
Working fine in dev environment now
resolved
|
2025-04-01T04:35:13.944657
| 2024-10-10T12:55:31
|
2578766895
|
{
"authors": [
"Susmitabe",
"kashish384",
"rajeshcherukumalli",
"vaibhavct"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9955",
"repo": "pucardotorg/dristi",
"url": "https://github.com/pucardotorg/dristi/issues/1984"
}
|
gharchive/issue
|
Scrutiny - Scrutiny checklist update
Scope : 10. Scrutiny Checklist Update.pdf
on hold
@Taherabharmal Can you please provide us the new pdf for the scrutiny checklist?
We will replace the existing pdf with it.
c.c @Ramu-kandimalla @anirudh-0 @Susmitabe
@rajeshcherukumalli
Deployed recent changes and bug fixes in dpg-dev env, kindly check once
Marking all dates with error once an error is marked on a single error
Showing all prev errors and counts properly
Format of the date
And other changes
c.c. @Ramu-kandimalla @anirudh-0
Test cases:
https://docs.google.com/spreadsheets/d/1GxZ4u04vhZ8wX14F84GjlD6edfQo_JaalT6dsGVpmCw/edit?gid=1530287521#gid=1530287521
Hi @vaibhavct , please find the issues identified
https://github.com/pucardotorg/dristi/issues/2370
https://github.com/pucardotorg/dristi/issues/2371
https://github.com/pucardotorg/dristi/issues/2372
https://github.com/pucardotorg/dristi/issues/2375
Marking all errors once a single date field is marked
Date format of all the dates is changed
FSO able to mark error on vakalatnama and complainant's ID card.
Complainant's ID card can be reuploaded again
These are all the changes. Please let me know if I missed anything @nitish-beehyv , @rajeshcherukumalli
c.c. @Ramu-kandimalla
I am closing this ticket; below I have provided test cases...
Test cases: https://docs.google.com/spreadsheets/d/1GxZ4u04vhZ8wX14F84GjlD6edfQo_JaalT6dsGVpmCw/edit?gid=1530287521#gid=1530287521
It's working fine in the UAT env
https://jam.dev/c/3d36ae14-59b3-4c8f-8ac1-5ec4e1cb1e4c
|
2025-04-01T04:35:13.949558
| 2011-10-20T15:35:50
|
2006008
|
{
"authors": [
"alexrothenberg",
"pyromaniac"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9956",
"repo": "puffer/puffer",
"url": "https://github.com/puffer/puffer/issues/16"
}
|
gharchive/issue
|
added specs for the generators puffer:component and puffer:controller
I noticed that you didn't have any specs for your generators so I added some.
Really great. Thanks. I'll check this and merge soon.
Merged, thanks.
|
2025-04-01T04:35:13.979288
| 2024-03-27T15:24:02
|
2211097927
|
{
"authors": [
"bess",
"carolyncole"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9957",
"repo": "pulibrary/tiger-data-app",
"url": "https://github.com/pulibrary/tiger-data-app/issues/614"
}
|
gharchive/issue
|
Setup and teardown for QA server training
We'll be using the QA environment for training, and in order to do that we'll need to set up the QA environment as expected.
Acceptance criteria
[ ] Erase all users and projects in QA, both from rails and mediaflux (a rake task should exist for this already)
[ ] Create all users from the QA users list from #613
[ ] Create fixture projects both in rails and mediaflux
[ ] Each user has an approved project where they are a data sponsor
[ ] Each user has an approved project where they are a data manager
[ ] There is one approved project where all users have been added as a data user
[ ] There is one pending project that only a system administrator can see
[ ] Metadata for these should be auto-generated from fixtures based on real data
[ ] Each title should be created by Faker so it's clear each one is its own project
[ ] DOI should be random
[ ] The rest of the metadata and files should be based on the fixture
Hey team! Please add your planning poker estimate with Zenhub @bess @JaymeeH @jrgriffiniii @leefaisonr
|
2025-04-01T04:35:13.981134
| 2018-09-12T14:46:21
|
359515839
|
{
"authors": [
"lordfuoco",
"mix3d",
"tanrax"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9958",
"repo": "pulilab/vue-people",
"url": "https://github.com/pulilab/vue-people/issues/114"
}
|
gharchive/issue
|
How to add meetups?
Maybe I missed something, but I couldn't find a way through the UI nor the repo readme about listing meetups.
Hi! Meetups are synchronised once an hour from the Meetup API; if your meetup is missing, please double-check that it has the vuejs topic associated and that it has valid coordinates set!
Let us know if this solves your problem.
Our Meetup does not appear on the map and has the location configured.
https://www.meetup.com/es-ES/VueJS-Valencia/
Thx!
|
2025-04-01T04:35:13.983508
| 2021-08-02T19:01:32
|
958423453
|
{
"authors": [
"niwis",
"zarubaf"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9959",
"repo": "pulp-platform/snitch",
"url": "https://github.com/pulp-platform/snitch/pull/184"
}
|
gharchive/pull-request
|
occamy: Add isolation signals to S1 quadrants
Since the clusters will start issuing AXI transactions when they boot, we want to safeguard AXI transactions since - in case a given cluster isn't performing correctly - they could lock up the entire system.
Isolation is controlled via the SoC control register in the top level.
Looks good to me, only one question: why are 12 isolate registers defined? I can only see 8 being used in occamy_top
That's right, my laziness in over-provisioning them, just in case we end up with more quadrants than the 8 we have at the moment. I think in a perfect world we would generate the hjson file from the Occamy description. Do you think we can merge with that hack?
|
2025-04-01T04:35:13.996931
| 2024-04-29T00:42:32
|
2267913960
|
{
"authors": [
"confused-Techie",
"paolobenve"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9960",
"repo": "pulsar-edit/ppm",
"url": "https://github.com/pulsar-edit/ppm/issues/130"
}
|
gharchive/issue
|
Cannot install extensions in Cuba
Thanks in advance for your bug report!
[X] Have you reproduced issue in safe mode?
[X] Have you used the debugging guide to try to resolve the issue?
[X] Have you checked our FAQs to make sure your question isn't answered there?
[X] Have you checked to make sure your issue does not already exist?
[X] Have you checked you are on the latest release of Pulsar?
What happened?
Trying to install an extension, I got this result:
0 info it worked if it ends with ok
1 verbose cli [
1 verbose cli '/opt/Pulsar/resources/app/ppm/bin/node',
1 verbose cli '/opt/Pulsar/resources/app/ppm/node_modules/npm/bin/npm-cli.js',
1 verbose cli '--globalconfig',
1 verbose cli '/home/paolo/.pulsar/.apm/.apmrc',
1 verbose cli '--userconfig',
1 verbose cli '/home/paolo/.pulsar/.apmrc',
1 verbose cli 'install',
1 verbose cli 'https://api.pulsar-edit.dev/api/packages/symbols-tree-view/versions/0.14.0/tarball',
1 verbose cli '--target=12.2.3',
1 verbose cli '--disturl=https://artifacts.electronjs.org/headers/dist',
1 verbose cli '--arch=x64',
1 verbose cli '--force-process-config',
1 verbose cli '--global-style'
1 verbose cli ]
2 info using<EMAIL_ADDRESS>
3 info using<EMAIL_ADDRESS>
4 verbose npm-session e07fbfd776ab1f49
5 silly install loadCurrentTree
6 silly install readLocalPackageData
7 http fetch GET 304 https://codeload.github.com/xndcn/symbols-tree-view/legacy.tar.gz/refs/tags/v0.14.0 2232ms (from cache)
8 silly pacote remote manifest for undefined@https://api.pulsar-edit.dev/api/packages/symbols-tree-view/versions/0.14.0/tarball fetched in 2243ms
9 timing stage:loadCurrentTree Completed in 2258ms
10 silly install loadIdealTree
11 silly install cloneCurrentTreeToIdealTree
12 timing stage:loadIdealTree:cloneCurrentTree Completed in 0ms
13 silly install loadShrinkwrap
14 timing stage:loadIdealTree:loadShrinkwrap Completed in 1ms
15 silly install loadAllDepsIntoIdealTree
16 silly resolveWithNewModule<EMAIL_ADDRESS>checking installable status
17 silly fetchPackageMetaData error for event-kit@latest request to https://registry.npmjs.org/event-kit failed, reason: read ECONNRESET
18 silly fetchPackageMetaData error for<EMAIL_ADDRESS>request to https://registry.npmjs.org/atom-space-pen-views failed, reason: read ECONNRESET
19 silly fetchPackageMetaData error for<EMAIL_ADDRESS>request to https://registry.npmjs.org/q failed, reason: read ECONNRESET
20 timing stage:rollbackFailedOptional Completed in 1ms
21 timing stage:runTopLevelLifecycles Completed in 73784ms
22 silly saveTree apm-install-dir-2024326-164637-1naek43.ge59
22 silly saveTree └──<EMAIL_ADDRESS>
23 verbose type system
24 verbose stack FetchError: request to https://registry.npmjs.org/event-kit failed, reason: read ECONNRESET
24 verbose stack at ClientRequest.<anonymous> (/opt/Pulsar/resources/app/ppm/node_modules/npm/node_modules/node-fetch-npm/src/index.js:68:14)
24 verbose stack at ClientRequest.emit (node:events:365:28)
24 verbose stack at TLSSocket.socketErrorListener (node:_http_client:447:9)
24 verbose stack at TLSSocket.emit (node:events:365:28)
24 verbose stack at emitErrorNT (node:internal/streams/destroy:193:8)
24 verbose stack at emitErrorCloseNT (node:internal/streams/destroy:158:3)
24 verbose stack at processTicksAndRejections (node:internal/process/task_queues:83:21)
25 verbose cwd /tmp/apm-install-dir-2024326-164637-1naek43.ge59
26 verbose Linux 6.8.0-31-generic
27 verbose argv "/opt/Pulsar/resources/app/ppm/bin/node" "/opt/Pulsar/resources/app/ppm/node_modules/npm/bin/npm-cli.js" "--globalconfig" "/home/paolo/.pulsar/.apm/.apmrc" "--userconfig" "/home/paolo/.pulsar/.apmrc" "install" "https://api.pulsar-edit.dev/api/packages/symbols-tree-view/versions/0.14.0/tarball" "--target=12.2.3" "--disturl=https://artifacts.electronjs.org/headers/dist" "--arch=x64" "--force-process-config" "--global-style"
28 verbose node v16.0.0
29 verbose npm v6.14.19-pulsar1-1
30 error code ECONNRESET
31 error errno ECONNRESET
32 error network request to https://registry.npmjs.org/event-kit failed, reason: read ECONNRESET
33 error network This is a problem related to network connectivity.
33 error network In most cases you are behind a proxy or have bad network settings.
33 error network
33 error network If you are behind a proxy, please make sure that the
33 error network 'proxy' config is set properly. See: 'npm help config'
34 verbose exit [ 1, true ]
Pulsar version
1.111.0
Which OS does this happen on?
🐧 Debian based (Linux Mint, Ubuntu, etc.)
OS details
xubuntu 24.04; same behaviour in 22.04
Which CPU architecture are you running this on?
x86_64/AMD64
What steps are needed to reproduce this?
Edit - preferences -> install
type any hint, and push the install button of any uninstalled package
Install doesn't happen, and after about a minute I get the error notification
Additional Information:
I suspect that it's because I'm in Cuba, and many web sites don't permit connections from this country.
Unfortunately I'm not able to run Pulsar behind a VPN.
I've already addressed this issue and the solution over on your original issue.
|
2025-04-01T04:35:14.002117
| 2022-02-12T10:12:30
|
1133810003
|
{
"authors": [
"itisyb",
"mmorainville"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9961",
"repo": "pulsardev/vue-tour",
"url": "https://github.com/pulsardev/vue-tour/issues/221"
}
|
gharchive/issue
|
Not working with pageloading
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
put the code between
Expected behavior
Vue tour will not show
Desktop (please complete the following information):
This issue doesn't follow the guidelines.
Closing it. Feel free to reopen it if needed with more information.
|
2025-04-01T04:35:14.016737
| 2011-12-03T13:22:14
|
2437290
|
{
"authors": [
"Burgov",
"pulse00"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9962",
"repo": "pulse00/Twig-Eclipse-Plugin",
"url": "https://github.com/pulse00/Twig-Eclipse-Plugin/issues/38"
}
|
gharchive/issue
|
Some parsing mistakes
Hi,
Consider the template at http://pastie.org/2954801. There's a couple of things going wrong here:
the outline isn't correct, because the parser cannot distinguish between single tags, and ranged tags, for example:
{% set test = 5 %} is parsed as an opening tag, even though it has no closing tag.
{% set test %}5{% set %} is parsed correctly
The same goes for blocks:
{% block pageTitle "test" %}
{% block pageTitle %}test{% endblock %}
I'm getting some obscure warning on line 27, {{ (labour.financialStatus.id is sameas(30))|tick }}:
"mismatched input '(' expecting PRINT_CLOSE"
Lines 30 and 31 also give a strange warning: {% set projectHours = projectHours + labour.hours %}
"no viable alternative at character '+'"
Regards
The outline problem is actually a bit tricky. The way it's implemented is that I basically use the PHP source element requestor, which detects the start/end positions of methods/classes etc. and reports the structure of the source module to the Dynamic Language Toolkit, which then renders the outline.
So each tag you see in the outline is actually a fake PHP method reported to the DLTK engine. Now this model does not know "optional" end tags - which is why I probably need to rewrite this functionality.
The validation errors are related to the ANTLR parser, which I am also thinking about re-writing from scratch...
So it might take some time until this issue is closed.
If you have some background in ANTLR, or even better JFlex, you can check out the parsers and speed this up a bit ;)
@Burgov: btw, you can switch to the "html" view in the outline window as a workaround.
The validation feature has been completely removed in 1.0.95. It was too buggy, and such a feature only makes sense if it works 100% correctly. As I don't think it adds much value to the plugin, I removed it for now, as maintaining it is very time-intensive. It may be added back in a future version.
As for the outline and folding problems, this has been fixed in 1.0.95: see http://blog.dubture.com/2012/01/twig-eclipse-plugin-parser-rewrite.html for details.
|
2025-04-01T04:35:14.020245
| 2024-06-11T10:03:05
|
2345992194
|
{
"authors": [
"VenelinMartinov",
"iwahbe"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9963",
"repo": "pulumi/pulumi-auth0",
"url": "https://github.com/pulumi/pulumi-auth0/pull/566"
}
|
gharchive/pull-request
|
Enrol auth0 connection into PRC to fix panic
Fixes https://github.com/pulumi/pulumi-terraform-bridge/issues/1964 for the connection resource by enrolling it into PRC: https://github.com/pulumi/pulumi-auth0/actions/runs/9451768673/job/26033989869?pr=564
Merging as part of https://github.com/pulumi/pulumi-auth0/pull/588.
|
2025-04-01T04:35:14.023672
| 2020-07-09T16:56:07
|
654210309
|
{
"authors": [
"jclangst",
"leezen",
"lukehoban"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9964",
"repo": "pulumi/pulumi-aws",
"url": "https://github.com/pulumi/pulumi-aws/issues/1040"
}
|
gharchive/issue
|
Changing subnets of SubnetGroup fails if DB Instance is a dependent
@pulumi/aws: v2.12
In the below configuration, changing the subnet ids in the SubnetGroup fails because the Instance is using the subnets. In this failure scenario, it would be ideal if the SubnetGroup were replaced rather than updated.
Example configuration:
const subnetGroup = new aws.rds.SubnetGroup(
'auth-service-rds',
{
subnetIds: config.requireObject('postgresSubnetIds'),
},
{ provider: awsProvider },
);
const db = new aws.rds.Instance(
'auth-service-rds',
{
name: config.require('postgresInitialDBName'),
engine: 'postgres',
engineVersion: config.require('postgresVersion'),
applyImmediately: true,
autoMinorVersionUpgrade: true,
deletionProtection: config.requireBoolean('postgresDeletionProtection'),
finalSnapshotIdentifier: finalSnapshotName.hex,
instanceClass: config.require('postgresInstanceClass'),
allocatedStorage: config.requireNumber('postgresAllocatedStorage'),
maxAllocatedStorage: config.requireNumber('postgresMaxAllocatedStorage'),
iops: config.getNumber('postgresIOPS'),
storageEncrypted: true,
multiAz: config.requireBoolean('postgresMultiAZ'),
backupRetentionPeriod: config.requireNumber('postgresBackupRetentionPeriod'),
performanceInsightsEnabled: config.require('postgresVersion').startsWith('1'),
enabledCloudwatchLogsExports: ['postgresql', 'upgrade'],
monitoringInterval: 15,
monitoringRoleArn: postgresMonitoringRole.arn,
password: dbPassword.result,
username: 'authservice',
port: config.requireNumber('postgresPort'),
dbSubnetGroupName: subnetGroup.name,
vpcSecurityGroupIds: [securityGroup.id],
caCertIdentifier: config.require('postgresCert'),
},
{ provider: awsProvider },
);
As a workaround, you can specify --target-replace <subnetGroup urn> to force replacement of that resource.
@leezen That's a good callout!
In this failure scenario, it would be ideal if the SubnetGroup were replaced rather than updated.
Core Pulumi doesn't currently have any way to express this sort of thing - so there's nothing the AWS provider could do currently to make that happen.
I believe you can use replaceOnChanges: ['subnetIds'] on the SubnetGroup to cause the SubnetGroup to be recreated if there are changes, which will create a new one, update all dependencies to use it, then delete the old one. This should result in the right transition (and ideally with no downtime for your database!).
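A short sketch of that suggestion, adapted from the configuration above (illustrative only; the explicit provider option and the Instance arguments from the original program are omitted):

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const config = new pulumi.Config();

// replaceOnChanges tells the engine to replace the SubnetGroup whenever its
// subnetIds change (create the new group, repoint dependents, then delete the
// old one) instead of attempting the in-place update that fails while the DB
// Instance still uses the subnets.
const subnetGroup = new aws.rds.SubnetGroup(
  'auth-service-rds',
  {
    subnetIds: config.requireObject<string[]>('postgresSubnetIds'),
  },
  { replaceOnChanges: ['subnetIds'] },
);
```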
|
2025-04-01T04:35:14.026553
| 2024-06-18T09:09:32
|
2359354231
|
{
"authors": [
"flostadler"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9965",
"repo": "pulumi/pulumi-aws",
"url": "https://github.com/pulumi/pulumi-aws/pull/4087"
}
|
gharchive/pull-request
|
upstream v5.54.1
Moving ./upstream to v5.54.1
Update patches
./scripts/tidy-all.sh
./script/patch_computed_only.sh
Add mod mappings for new resources
Regenerate SDK
Regenerate schema
closes #4084
/release minor
|
2025-04-01T04:35:14.055078
| 2019-06-04T13:01:35
|
451983150
|
{
"authors": [
"ChristianEder",
"CyrusNajmabadi",
"mikhailshilkov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9966",
"repo": "pulumi/pulumi-azure",
"url": "https://github.com/pulumi/pulumi-azure/pull/267"
}
|
gharchive/pull-request
|
Event Hub Trigger for IoT Hub
This PR adds the same functionality (triggering serverless Azure Functions) that is already available for Event Hubs to usages of the Azure IoT Hub as well.
Intended usage:
const iotHub = new IoTHub();
iotHub.onEvent("test", async (context, arg) => {
    console.log("message received");
});
As this is my first PR to pulumi, I'm not quite sure about the contribution requirements - how testing & documentation are done. So please, give feedback :-)
I see that the build is failing - somehow, the files I added seem to be the reason, but I cannot figure out how exactly - locally, npm run build shows no errors. Can someone help / give a hint as to what the build tries to accomplish when it fails (detached head error from git...)?
Hey @ChristianEder, welcome and thank you so much creating a PR, we are happy to have you onboard!
The build is likely failing because there is a change needed in resources.go file in the root directory. It's not an obvious thing, but you need to list your new ts file in there similar to this block. This way, mixins are combined with the auto-generated code.
Hi @mikhailshilkov - thanks for the information! I already fixed it; now the build is green. I would have liked to also add an example, but the existing samples seem to rely on the npm package being published. Also, I am not sure how and where to add tests.
@ChristianEder The flow assumes you use yarn manager. If so, you can use yarn link to link your pulumi-azure local copy and then do yarn link @pulumi/azure in your sample application to use that link. This should enable local testing.
Adding an example is a great idea. It will also serve as a basic test too: just add your new example to examples_test.go. The test will only provision the resources though: it won't test whether the subscription actually works. Please test this manually by sending the messages.
Hi @mikhailshilkov - no, I'm using npm - what's the procedure for linking / local execution there?
@ChristianEder I'm not sure... NPM link seems to be a thing too? https://docs.npmjs.com/cli/link
As this is my first PR to pulumi
@ChristianEder Thanks so much for the PR! We'll do what we can to make this a smooth process for you. Apologies in advance if you run into any hiccoughs or speedbumps!
This looks great. My only request would be a small example in the examples directory. You can use https://github.com/pulumi/pulumi-azure/tree/master/examples/eventhub as something that can be copied for your example. You'd then add an entry to: https://github.com/pulumi/pulumi-azure/blob/master/examples/examples_test.go
This helps ensure that we can at least build this code and shows a simple way to use the new API.
Note: as I'm not familiar with this space, I'll defer to @mikhailshilkov for a lot of thoughts on the best way to expose this Azure service here.
Added an example in 686b549a96cb6e7f99b9a45463172e91f237edb0 and 9942709aac96985eec44ae11a900d6d62b90f29c , also got the npm link to work, but currently I'm facing issues during pulumi up:
error: no resource plugin 'azure-v2.3.5' found in the workspace or on your $PATH, install the plugin using pulumi plugin install resource azure v2.3.5
And when I execute pulumi plugin install resource azure v2.3.5, I get
error: [resource plugin azure-2.3.5] downloading from https://api.pulumi.com: failed to download plugin: [404] 404 page not found
Is this a known issue or am I doing it wrong?
@ChristianEder Thanks for adding the example! pulumi up works for me just fine and all the resources get created. However, the function app still complains about the connection string: Error indexing method 'Functions.test'. Microsoft.Azure.EventHubs: Value for the connection string parameter name 'sb://iothub-ns-testd76543-2123456-36654cc8de.servicebus.windows.net/' was not found. Parameter name: connectionString.
no resource plugin 'azure-v2.3.5' found in the workspace
This version is clearly wrong... The latest published plugin version is 0.18.5. The package.json file in SDK uses env variable "version": "${VERSION}". What is the value that you see there? Maybe try overriding or setting it manually to 0.18.5 for a test?
So I was banging my head about why the example still didn't work until I realized we need a route to forward the messages to the Event Hub. This works for me:
const iotHub = new iot.IoTHub("test", {
resourceGroupName: resourceGroup.name,
sku: {
capacity: 1,
name: "S1",
tier: "Standard",
},
routes: [{
name: "export",
enabled: true,
condition: "true",
endpointNames: ["events"],
source: "DeviceMessages",
}]
});
A simpler but less explict way is to enable the fallback route:
const iotHub = new iot.IoTHub("test", {
resourceGroupName: resourceGroup.name,
sku: {
capacity: 1,
name: "S1",
tier: "Standard",
},
fallbackRoute: {
enabled: true,
},
});
In addition to extending the example, maybe we could even add a check into the subscription which would warn if IoT Hub has no proper routes/fallback route enabled?
Thanks for that find - it's a bit weird that this is required, because when I create an IoT Hub from an ARM template (json file) or via the Azure portal, I don't provide a value to enable the fallback route - it's enabled by default. Nevertheless, I'll enable it in the sample and add a check in the mixin.
I'd throw (pulumi.ResourceError if you can) in case we know that the callback won't hit, i.e. if no routes AND no fallback route.
But I would throw it inside a .apply() block as I described?
@CyrusNajmabadi Is that a good pattern? (see four previous comments, sorry it's not a thread)
It's a bit weird - even when I set
fallbackRoute: { source: "DeviceMessages", enabled: true, endpointNames: ["events"], condition: "true" }
in the example, when I get to the following block in my mixin, the fallback route is marked as disabled - all other properties come over as I set them:
pulumi.all([iotHub.fallbackRoute, iotHub.routes]).apply(([fallbackRoute, routes]) => {
    if (fallbackRoute.enabled && fallbackRoute.endpointNames.some(e => e === "events")) {
        return;
    }
    if (routes && routes.some(r => r.enabled && r.endpointNames.some(e => e === "events"))) {
        return;
    }
    throw new pulumi.ResourceError("IoT Hub must have a route or fallback route enabled.", opts.parent);
});
It's enough to just set fallbackRoute: { enabled: true }, no need to set other properties explicitly. Both of my examples above are enough to trigger the function.
@CyrusNajmabadi Is that a good pattern? (see four previous comments, sorry it's not a thread)
i would have to see the final code (in context) to know for certain. but from reading the back and forth it sounds like it would be fine IMO.
I wouldn't make the check so strict: we might miss some cases. If they set some routes at all - assume they know what they are doing.
@mikhailshilkov : when I try to pulumi up an IoTHub with just fallbackRoute: { enabled: true }, I get the error Plan apply failed: Error creating/updating IotHub "testb333f1b8" (Resource Group "test06bff30b"): devices.IotHubResourceClient#CreateOrUpdate: Invalid input: autorest/validation: validation failed: parameter=iotHubDescription.Properties.Routing.FallbackRoute.EndpointNames constraint=MinItems value=[]string{} details: minimum item limit is 1; got: 0
So I'll add the other properties as well to the example.
Added the fallback route details in 42db2c14cb9cb5ed9906e9dd9ca7a69aacc54af1.
Anyway, I think that pulumi should enable the fallback route by default - Azure Resource Manager does it, so people expect it. But this is out of scope for this PR, I think.
@mikhailshilkov , @CyrusNajmabadi : I think I now worked in all your remarks, right?
By the way thank you guys, I really appreciate how fast, responsive, helpful and detail-minded you are!
@mikhailshilkov : resolved the merge conflict in ad9132de75a873f799996eb0a228420a3ef74d59
@mikhailshilkov , @CyrusNajmabadi is there anything left I can do / optimize? If not -> thanks for helping, I'm looking forward to having this feature released
Looking not @ChristianEder . BTW, this looks awesome. I'm curious what you needed this for. Were you just scratching an itch, or did you intend to use this in some personal or professional project of yours?
Everything looks good on my end. Once @mikhailshilkov is good with things, we can merge in. Thanks!
@CyrusNajmabadi both.
Partly the itch, because I found the existing „onEvent...“ implementations awesome and thought the one for IoT Hub couldn’t be that difficult to do.
Partly I hope to use it (semi-) productively / at work. First, it’ll hopefully provide useful for coding demos in order to spin up things fast. But with some more additions (like the ability to add additional bindings to the function, I’ll probably create another PR for that) it might even be useful at least for simple production apps.
Looks great, thank you @ChristianEder!
|
2025-04-01T04:35:14.088691
| 2022-05-24T21:38:26
|
1247152493
|
{
"authors": [
"Chri-s",
"RickStrahl",
"punker76"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9967",
"repo": "punker76/gong-wpf-dragdrop",
"url": "https://github.com/punker76/gong-wpf-dragdrop/issues/443"
}
|
gharchive/issue
|
Is there an example of using IDragHandler? (need to intercept start of drag operation)
It looks like this control supports IDragHandler but there are no examples that show how this works. I've implemented the interface, set IsDragSource=true, and assigned the data source, and I have the events firing, but items don't actually start dragging.
My use case is that I want to handle drag operations out of the control into a standard DataObject so that I can drop on another control that already handles drop targets separately (in JavaScript in a WebView control in my case) and that does not have a dd:DropSource.
IOW:
Initiate a drag operation with Gong
Complete the Drop operation using only standard D&D behavior
In essence I want to take advantage of the nice drag initiation behavior of Gong, but once we're dragging utilize standard D&D behavior on the target.
Is this possible? It seems like it should be since Gong works with standard D&D objects, but I'm not sure.
Reason for not being able to use Gong on the target is that the drag operation is actually picked up in a JavaScript client inside of a WebView - this works fine with standard D&D objects/events.
Perhaps a bit late, but I created an example repository at Chri-s/DragHandlerSample.
Have a look at CustomTextDragSource.cs and CustomFileDragSource.cs.
@RickStrahl Did you look at the example from @Chri-s? I think this is maybe what you need.
|
2025-04-01T04:35:14.091912
| 2016-08-01T09:22:18
|
168602612
|
{
"authors": [
"willpatera"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9968",
"repo": "pupil-labs/pupil-labs-website",
"url": "https://github.com/pupil-labs/pupil-labs-website/issues/33"
}
|
gharchive/issue
|
icons for home page
@nathakits - please make sketches for icons for the Pupil is Open Source section on the home page.
This is done, closing.
|
2025-04-01T04:35:14.130768
| 2024-04-25T13:28:57
|
2263594230
|
{
"authors": [
"Avsy",
"Disliketalking",
"OrKoN",
"Yzedank",
"cmcode003",
"dylanClimaTech",
"hydah",
"inspectxyz"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9969",
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/12335"
}
|
gharchive/issue
|
[Bug]: using manifest V3, when the file (manifest.json) carries "content_scripts", puppeteer.launch times out
Minimal, reproducible example
const args = [
`--whitelisted-extension-id=${extensionId}`,
`--disable-extensions-except=${pathToExtension}`,
`--load-extension=${pathToExtension}`,
'--no-sandbox',
];
this.browser = await puppeteer.launch({
headless: false,
defaultViewport: null,
args: args,
executablePath: '/opt/google/chrome/chrome',
// executablePath: '/usr/local/google/chrome/chrome',
});
// manifest.json
{
"name": "Record",
"description": "Record Extension",
"version": "1.0",
"icons": {
"128": "icon.png"
},
"manifest_version": 3,
"content_scripts": [
{
"matches": ["https://*/*"],
"js": "capture.js"
}
],
"background": {
"service_worker": "background.js"
},
"permissions": [
"activeTab",
"tabs",
"tabCapture",
"storage",
"downloads"
]
}
Error string
Timed out after 30000 ms while waiting for the WS endpoint URL to appear in stdout!
Bug behavior
[x] Flaky
[ ] PDF
Background
chrome version is
Google Chrome 124.0.6367.78
Expectation
.
Reality
puppeteer.launch timeout
Puppeteer configuration file (if used)
No response
Puppeteer version
22.7.0
Node version
20.12.2
Package manager
npm
Package manager version
10.5.0
Operating system
Linux
I am not able to reproduce (not enough information), please provide a minimal example that includes all files that are required to run the script (feel free to publish a repository with the reproduction).
Seeing the same thing
The directory structure is as follows:
puppeteer.js
extension
|----- background.js (just console.log)
|----- capture.js (just console.log)
|----- icon.png
|----- manifest.json
The specific content is as follows:
// puppeteer.js
const puppeteer = require('puppeteer');
const path = require('path');
const Xvfb = require('xvfb');
const pathToExtension = path.join(__dirname, 'extension');
async function launch() {
const displayNum = Math.floor(Math.random() * 1000);
let width = 1280;
let height = 720;
const whd = width + 'x' + height + 'x24';
const xvfb = new Xvfb({
silent: false,
displayNum: displayNum,
reuse: false,
xvfb_args: ['-screen', '0', whd],
});
xvfb.start();
const browser = await puppeteer.launch({
headless: false,
args: [
`--disable-extensions-except=${pathToExtension}`,
`--load-extension=${pathToExtension}`,
'--no-sandbox',
],
executablePath: '/opt/google/chrome/chrome',
});
console.log("hello, world");
await browser.close();
xvfb.stopSync();
}
launch();
console.log("puppeteer.js");
// manifest.json
{
"name": "Record",
"description": "Record Extension",
"version": "1.0",
"icons": {
"128": "icon.png"
},
"manifest_version": 3,
//"content_scripts": [
// {
// "matches": ["https://*/*"],
// "js": "./capture.js"
// }
// ],
"background": {
"service_worker": "background.js"
},
"permissions": [
"activeTab",
"tabs",
"tabCapture",
"storage",
"downloads"
]
}
Uncomment “content_scripts”, and you will see
TimeoutError: Timed out after 30000 ms while waiting for the WS endpoint URL to appear in stdout!
@Lightning00Blade could you please try to repro with the steps above?
Actively following this thread.. Borked our production rip.
+1 seeing this as well
We at inspect.xyz believe it is a result from the latest chrome version. We were able to get it working by using chromium rather than chrome.
+1 I am seeing this as well brother.
Greetings,
We at inspect.xyz believe it is caused by chrome as one of our tech leads was able to resolve this issue by switching to chromium.
Thank you kindly,
Inspect.xyz
The Layer 2 Built for X
Wow, very insightful, will be trying this immediately! Appreciate your hard work and dedication
Thank you.
Inspect.xyz
The Layer 2 Built for X
Greetings,
Our inspect.xyz tech lead was unable to resolve this issue in production; it only worked locally by pointing to their local version.
const exePath = "C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe";
const puppeteerOption = {...options, executablePath: process.env.CHROMIUM_PATH}; // TECH LEAD VERSION @ INSPECT.XYZ
you have led me astray - but will use this to help guide me to a resolution, thank you
Dylan (they/them)
Climatech innovations
+1 seeing this as well
Willing to paypal $40 for anyone that can fix this or help us fix this
@Lightning00Blade or @OrKoN I am also willing to send $100 usd to have this issue fixed
Not able to reproduce:
content scripts has to be an array
background permission is missing
The following works:
{
"name": "Record",
"description": "Record Extension",
"version": "1.0",
"manifest_version": 3,
"icons": {
"128": "icon.png"
},
"content_scripts": [
{
"matches": ["https://*/*"],
"js": ["capture.js"]
}
],
"background": {
"service_worker": "background.js"
},
"permissions": [
"background",
"tabs",
"tabCapture",
"storage",
"downloads"
]
}
import puppeteer from "puppeteer";
const args = [
`--disable-extensions-except=/Users/alexrudenko/src/pptr-test/extensions-repro`,
`--load-extension=/Users/alexrudenko/src/pptr-test/extensions-repro`,
];
const browser = await puppeteer.launch({
headless: false,
defaultViewport: null,
args: args,
});
Gentlemen, the problem is that __dirname is undefined. import.meta.dirname added in: Node v21.2.0, v20.11.0
Apologies if that is already known; I'm just getting into this topic at the moment.
|
2025-04-01T04:35:14.133603
| 2017-08-28T16:20:36
|
253381478
|
{
"authors": [
"MuYunyun",
"ebidel",
"vlad-zhukov"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9970",
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/578"
}
|
gharchive/issue
|
Is there a way to open local pages?
I would like to open a page that includes CSS and JS files. Is it possible?
Yes, this is possible. Generally, if you can open the page in Chrome, you can open the page using puppeteer :)
await page.goto('file:///Users/.../page.html');
Just keep in mind that file:// has some security restrictions. Depending on what the page does, not everything will work.
This works:
const url = `file://${process.cwd()}/...`
|
2025-04-01T04:35:14.135932
| 2020-05-07T14:03:22
|
614083722
|
{
"authors": [
"jackfranklin"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9971",
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/5830"
}
|
gharchive/issue
|
Migrate tests to TypeScript
We should migrate all the tests to TypeScript. This will let us test our TypeScript definitions as we use them to write Puppeteer tests. It will also unify the codebase and let us write all new tests in TypeScript. We'll get nicer type-checking and so on which will help when authoring tests.
I think we can do this in a few steps:
Rename each file to *.ts
For any files with few errors, fix them.
For any files with large errors, put a @ts-nocheck comment at the top of the file to disable the type checking in that file.
Ship this change, having done the work to have Mocha compile the TS before running the tests.
Go through each file that was @ts-nocheck'd and fix the errors.
Done!
|
2025-04-01T04:35:14.145013
| 2021-08-03T09:18:53
|
958910394
|
{
"authors": [
"OrKoN",
"akornatskyy",
"bradisbell",
"josepharhar",
"jschfflr",
"kblok",
"mathiasbynens",
"sadym-chromium"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9972",
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/7458"
}
|
gharchive/issue
|
Roll Chromium 94 after 2021-08-26
Chrome 94 is in beta, so it's time to roll Puppeteer update.
https://chromiumdash.appspot.com/schedule
Jan, could you PTAL?
Jan, what's the status here?
FYI, the next one is coming up in three days: https://github.com/puppeteer/puppeteer/issues/7459
Unfortunately, the roll is blocked on some breaking changes that happened in Chromium.
We are currently figuring out a solution.
Quick update - Unfortunately, the roll is still blocked because of the following situation:
A while ago, a change in the network domain caused some headers and status codes to move from requestWillBeSent/responseReceived to the corresponding requestWillBeSentExtraInfo/responseReceivedExtraInfo events. Right now, Puppeteer does not use these events at all, because before this change the information provided in requestWillBeSent/responseReceived was enough to fulfill the needs of the provided API surface. But now that some of the information moved there, we have to implement support for them.
Unfortunately though, there is no guarantee that these events will be emitted for all requests and if they are, there is no guarantee on the order. That's why we are not able to just listen to them too. Instead we have to find a way within Chromium to tell Puppeteer if there will be ...ExtraInfo events that it should be aware of.
Hey everyone!
I am working on a new CDP feature to enable puppeteer to leverage responseReceivedExtraInfo and therefore get the raw headers back: https://chromium-review.googlesource.com/c/chromium/src/+/2898747
Here is a doc for those interested in more details: https://docs.google.com/document/d/1NM30Wg_aM3-RFZaD_lQuWQj8my8XoR9YpMi60QH1lHU/edit
I am also working on a puppeteer patch which uses the new feature, I'll open a PR for it soon.
Significant performance degradation is observed when switching from 93.0.4577.82 to 94.0.4606.81: up to 2x the time for the same requests. The binary is also about 40 MB larger.
@OrKoN @mathiasbynens why is this a breaking change?
I’ve made more fixes in chromium and puppeteer, so hopefully there will be no breaking changes
@kblok there were some breaking Chrome DevTools Protocol changes that required changes in Puppeteer. That makes the version 12.0.0 only compatible with Chromium 97.0.4692.0 or later and not compatible with the previous Chromium versions.
How about the other way around @OrKoN?
Will I need pptr 12 to use chrome 97?
Now I'm running puppeteer core 7.1 with Chrome 96
@kblok yes, I believe so. Chrome 97 has breaking CDP changes that pptr prior v12 won't handle properly. Some use cases might work but the entire test suite we have would fail.
If you use old puppeteer with new chromium, then you won't necessarily get the correct headers and response codes. It shouldn't totally blow up though. In some cases the response code may be 200 instead of 304. You will also get fewer headers, especially privacy sensitive ones like cookies - although page.cookies should still have everything.
Thank you @josepharhar!
|
2025-04-01T04:35:14.153792
| 2016-06-24T20:33:42
|
162225756
|
{
"authors": [
"johnduarte",
"kevpl",
"puppetlabs-jenkins",
"tvpartytonight"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9973",
"repo": "puppetlabs/beaker",
"url": "https://github.com/puppetlabs/beaker/pull/1161"
}
|
gharchive/pull-request
|
(BKR-856) Add el/sles support to remove_puppet_on
This commit expands platform support for the remove_puppet_on install
helper method to include sles and all el derivatives.
This allows hosts declared with a hypervisor of none on these
platforms to have puppet uninstalled as part of a pre-suite.
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/2648/
Refer to this link for build results (access rights to CI server needed):
http://jenkins-beaker.delivery.puppetlabs.net//job/qe_beaker_btc-intn/2656/
@tvpartytonight have all your concerns been addressed? looks good to me. 👍
yup, looks good to me. 👍
|
2025-04-01T04:35:14.154756
| 2018-06-27T00:37:37
|
336038041
|
{
"authors": [
"MikaelSmith",
"puppetcla"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9974",
"repo": "puppetlabs/bolt",
"url": "https://github.com/puppetlabs/bolt/pull/494"
}
|
gharchive/pull-request
|
(maint) Add --trace option
Adds a --trace option that prints error backtraces.
CLA signed by all contributors.
|
2025-04-01T04:35:14.157500
| 2021-12-14T09:15:43
|
1079501419
|
{
"authors": [
"CLAassistant",
"puppetlabs-jenkins",
"valia0906"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9975",
"repo": "puppetlabs/facter",
"url": "https://github.com/puppetlabs/facter/pull/2467"
}
|
gharchive/pull-request
|
(FACT-3100) Fix disks fact
https://tickets.puppetlabs.com/browse/FACT-3100
Fix the disks fact, which could not retrieve the serial number because it used an absolute path for lsblk.
Can one of the admins verify this patch?
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it.
|
2025-04-01T04:35:14.169801
| 2015-06-13T18:54:10
|
88048056
|
{
"authors": [
"HAIL9000",
"branan",
"hkenney",
"joshcooper",
"kylog",
"petems",
"puppetcla"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9976",
"repo": "puppetlabs/puppet",
"url": "https://github.com/puppetlabs/puppet/pull/4032"
}
|
gharchive/pull-request
|
(PUP-3263) Adds architecture parameter
Can specify arch wanted when installing a package.
This is extremely rough, needs some help getting it across the line :smile:
CLA signed by all contributors.
@petems Thanks for submitting this, it would be useful for a number of package providers. rpm and yum already parse the currently installed arch, so it's part of the way there. Also it'd be nice to be able to install platform specific gems. I've rekicked the failing gems as I think we had some CI fixed after you submitted this.
@joshcooper This might be ready for merge now that tests are green? Could you take a final look at it?
@petems the current implementation makes it possible to specify the architecture during installation, but what about uninstall, e.g. ensure => absent? Also, what happens if you have a noarch package currently installed, and you specify architecture => 'x86_64'. I would expect puppet to install the more specific architecture package, but I don't think the PR supports this.
@joshcooper Good point, I feel like the arch parameter might need to get bubbled up to the RPM type, and have a method for collecting the current arch value. This would also help with the usecase of uninstall that @MikaelSmith mentioned...
I might try to break this down into a smaller PR just to add the arch parameter as a new property in the RPM T/P, and get that merged first, as the other bits are going to be a bit more complicated.
@petems Do you want to continue working off this pull request? Or did you want to close it and open a smaller pull request as you mentioned.
@petems have you had a chance to look into this one any more?
@petems If we don't hear back from you in another week or so we're gonna go ahead and close this for inactivity. Feel free to re-open if you have time to look at it again, or bother us through the normal engineering process to address the ticket.
@branan Hi, sorry was on holiday, but to be honest I think I'll need engineering help with this. I'll close for now, pending some help.
|
2025-04-01T04:35:14.174964
| 2019-05-04T03:33:40
|
440280125
|
{
"authors": [
"Iristyle",
"puppetcla",
"puppetlabs-jenkins"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9977",
"repo": "puppetlabs/puppetdb",
"url": "https://github.com/puppetlabs/puppetdb/pull/2941"
}
|
gharchive/pull-request
|
(maint) Only query host A records in Docker waiter
The host command may return non-zero exit codes when the given host
is missing certain record types. For instance:
/ # host www.google.com
www.google.com has address <IP_ADDRESS>
www.google.com has IPv6 address 2607:f8b0:400a:803::2004
Host www.google.com not found: 3(NXDOMAIN)
/ # echo $?
1
If the record type is limited to A records for IPv4, the same query
returns the expected 0 exit status
/ # host -t A www.google.com
www.google.com has address <IP_ADDRESS>
/ # echo $?
0
In TravisCI, failures are cropping up when puppetdb looks up the
postgres container via the network alias postgres.internal and
this should help to alleviate that problem
CLA signed by all contributors.
Test PASSed
Test FAILed
Test PASSed
|
2025-04-01T04:35:14.179118
| 2019-09-17T21:42:04
|
494866183
|
{
"authors": [
"Sharpie",
"puppetcla"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9978",
"repo": "puppetlabs/puppetdb",
"url": "https://github.com/puppetlabs/puppetdb/pull/3074"
}
|
gharchive/pull-request
|
(PDB-4504) Use batched inserts for add-resource-events-pk
This commit updates the add-resource-events-pk migration to perform
insertions in batches of 1000 when re-writing the resource_events
to add a primary key. The reWriteBatchedInserts option must be set
to true in the JDBC connection string in order for batching to
occur.
This commit also parallelizes the computation of resource event
hashes within each batch.
One outstanding question is why PG-JDBC requires reWriteBatchedInserts to be enabled in order to get a batched insert. We may want to enable this by default, but only for the PDBMigrationsPool.
CLA signed by all contributors.
Updated to:
De-duplicate within batches using group-by before removing duplicates seen in prior batches via filter.
Guard insert-multi! against empty batches that occur when all rows in the iteration end up filtered as duplicates.
Made the batch size a parameter of the migration with a default value of 1000. The test is updated to run with a batch size of 2 to ensure two iterations occur.
Patch updated to drop the use of pmap. Further benchmarking showed the change produces nearly a 3x speedup for the 5.7 million event test dataset even without reWriteBatchedInserts.
Using pmap to parallelize the hash computation and adding reWriteBatchedInserts to the connection options each add about a 10% speedup relative to the original migration.
|
2025-04-01T04:35:14.184804
| 2015-09-16T03:07:33
|
106689071
|
{
"authors": [
"acowan"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9979",
"repo": "puppetlabs/puppetlabs-rabbitmq",
"url": "https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/389"
}
|
gharchive/pull-request
|
Add support for rabbitmq_mqtt plugin.
This adds general support for the Message Queue Telemetry Transport (MQTT) Plugin.
Has a dependency on the following pull request:
Add support to uninstall rabbitmq plugins.
#388 opened 2 days ago by acowan
This won't pass without some of my other changes. I will wait for them to come through and reissue a pull request.
|
2025-04-01T04:35:14.205999
| 2017-02-21T20:57:18
|
209270251
|
{
"authors": [
"garyb",
"justinwoo"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9980",
"repo": "purescript-contrib/purescript-dom",
"url": "https://github.com/purescript-contrib/purescript-dom/issues/77"
}
|
gharchive/issue
|
Missing HTMLAudioElement definitions and types
Right now I have a project where I could use purescript-dom-classy to convert my HTMLElement types to the proper ones, but I'm using currentTime and setCurrentTime from HTMLMediaElement on an audio element. This causes my unsafeCoerced code to work but of course, fails when using fromHTMLElement since it tries to parse to the media element and fails.
Should we add the audio element types and code to work with them?
Yes please, it's not missing intentionally!
Ah, sorry, I think I've just misunderstood. I should be using htmlAudioElementToHTMLMediaElement instead? Looks like the MDN docs are saying that this just inherits from HTMLMediaElement anyhow.
Ah right, yeah that looks like the case for this element then, since it has an empty interface :)
|
2025-04-01T04:35:14.207324
| 2019-09-11T12:46:25
|
492215747
|
{
"authors": [
"elliotdavies",
"vladciobanu"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9981",
"repo": "purescript-contrib/purescript-vim",
"url": "https://github.com/purescript-contrib/purescript-vim/issues/66"
}
|
gharchive/issue
|
Option to disable indentation
It would be nice to have a setting that disables any auto-indentation at all, leaving the plugin as just a syntax highlighter.
If this sounds good, I'm happy to do the work and submit a PR!
I would also like this myself! Would you still be up to PR it, or can I?
@vladciobanu Please, go for it! 😁
|
2025-04-01T04:35:14.218807
| 2021-03-25T14:17:21
|
840976857
|
{
"authors": [
"dnix101",
"sdodsley"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9982",
"repo": "purestorage/pso-csi",
"url": "https://github.com/purestorage/pso-csi/issues/153"
}
|
gharchive/issue
|
SmartAgent permission issue in OpenShift 4.7
Executing the smart-agent script fails the ping command with permission denied.
If I log into one of the csi-node pods smart-config container and try the command manually this is the error I get:
/ # ping pso-db-1-0.pso-db.pso-csi -w 2
PING pso-db-1-0.pso-db.pso-csi (<IP_ADDRESS>): 56 data bytes
ping: permission denied (are you root?)
Unknown if this error occurs on earlier versions of OpenShift.
This is probably related to the container running as a non-root user. In Kubernetes a non-root user does not have access to ping, because it requires a port binding that is not allowed.
Good catch @dnix101. That is the case. I'll raise a PR to fix this.
Addressed in #154
|
2025-04-01T04:35:14.224802
| 2017-09-25T15:33:50
|
260320465
|
{
"authors": [
"lukabratos",
"pusher-ci"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9983",
"repo": "pusher/pusher-platform-swift",
"url": "https://github.com/pusher/pusher-platform-swift/pull/28"
}
|
gharchive/pull-request
|
[WIP] Setup Danger.systems
What?
Setup Danger.
CC @pusher/sigsdk
1 Warning
:warning:
PR is classed as Work in Progress
Generated by :no_entry_sign: Danger
🚢
|
2025-04-01T04:35:14.227264
| 2016-12-05T19:06:32
|
193582816
|
{
"authors": [
"sotojuan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9984",
"repo": "pushtell/react-ab-test",
"url": "https://github.com/pushtell/react-ab-test/issues/14"
}
|
gharchive/issue
|
Can you use this with Optimizely?
https://www.optimizely.com/
Don't need this information anymore.
|
2025-04-01T04:35:14.229317
| 2017-07-27T08:41:31
|
245967030
|
{
"authors": [
"Rainer-Lang",
"artem-zinnatullin",
"nikitin-da"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9985",
"repo": "pushtorefresh/storio",
"url": "https://github.com/pushtorefresh/storio/issues/809"
}
|
gharchive/issue
|
Release v2.0.0 preparations.
See https://github.com/pushtorefresh/storio/milestones/v2.0.0
@geralt-encore @nikitin-da I've cleaned up issues assigned to this milestone, can you please take care of remaining issues?
It'll probably be better if I actually perform the release once all issues will be closed, because it might not work on Travis right away and I may have to apply appropriate fixes or in worst case release manually.
Any news?
@Rainer-Lang sorry for delay( All tasks are resolved. So we may prepare release tomorrow.
Thanks. :+1:
|
2025-04-01T04:35:14.275268
| 2018-10-12T16:56:36
|
369631633
|
{
"authors": [
"cwhanse",
"mikofski",
"wholmgren"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9987",
"repo": "pvlib/pvlib-python",
"url": "https://github.com/pvlib/pvlib-python/pull/603"
}
|
gharchive/pull-request
|
Change name to solarposition.rise_set_transit_spa
pvlib python pull request guidelines
Thank you for your contribution to pvlib python! You may delete all of these instructions except for the list below.
You may submit a pull request with your code at any stage of completion.
The following items must be addressed before the code can be merged. Please don't hesitate to ask for help if you're unsure of how to accomplish any of the items below:
[x] Closes issue #316
[x] I am familiar with the contributing guidelines.
[x] Fully tested. Added and/or modified tests to ensure correct behavior for all reasonable inputs. Tests (usually) must pass on the TravisCI and Appveyor testing services.
[ ] Updates entries to docs/sphinx/source/api.rst for API changes.
[ ] Adds description and name entries in the appropriate docs/sphinx/source/whatsnew file for all changes.
[ ] Code quality and style is sufficient. Passes LGTM and SticklerCI checks.
[ ] New code is fully documented. Includes sphinx/numpydoc compliant docstrings and comments in the code where necessary.
[ ] Pull request is nearly complete and ready for detailed review.
Brief description of the problem and proposed solution (if not already fully described in the issue linked to above):
Change to function name to conform with rise_set_transit_ephem and rise_set_transit_analytical. Edits to docstrings to clarify expected timezone and location. Changed local variable time to times to avoid possible conflict with time module.
rise_set_transit_spa to parallel rise_set_transit_ephem, or sunrise_sunset_transit_spa to parallel sunrise_sunset_transit_geometric?
Or sun_rise_set_transit_spa? I don't have a preference, but we should make them all consistent before the next release. I think it only require updating the whatsnew and api files for the new functions.
We currently have rise_set_transit_ephem and 'sunrise_sunset_transit_geometric`. I prefer the 2nd for clarity but it's a pretty long name.
Right, but you're changing get_sun_rise_set_transit to rise_set_transit_spa, dropping both get_ (which is good) and sun_ (which we might consider retaining).
I'd like to see this PR:
deprecate the existing function name.
change one or both of the recently added _ephem and _geometric functions to be consistent with the new name. no deprecation necessary because added them since the last release.
So by the end of this PR we should have one of the following sets of functions:
option A:
sunrise_sunset_transit_geometric
sunrise_sunset_transit_ephem
sunrise_sunset_transit_spa
option B:
sun_rise_set_transit_geometric
sun_rise_set_transit_ephem
sun_rise_set_transit_spa
option C:
rise_set_transit_geometric
rise_set_transit_ephem
rise_set_transit_spa
We agree. Deprecation is in the PR.
Folks, please vote for A, B or C.
B
|
2025-04-01T04:35:14.289703
| 2018-10-13T23:46:54
|
369852394
|
{
"authors": [
"KMamedoff",
"pwn20wndstuff"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9988",
"repo": "pwn20wndstuff/Undecimus",
"url": "https://github.com/pwn20wndstuff/Undecimus/issues/5"
}
|
gharchive/issue
|
Is it normal to have electra folder in root?
I wiped my device completely with electra and used this JB on stock iOS.
The folder is being created to preserve the compatibility with the packages which were designed for Electra.
|
2025-04-01T04:35:14.290905
| 2017-03-09T06:08:43
|
212944263
|
{
"authors": [
"pwnbus"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9989",
"repo": "pwnbus/scoring_engine",
"url": "https://github.com/pwnbus/scoring_engine/issues/242"
}
|
gharchive/issue
|
Allow ability to customize timezone in config file and webui consume
Instead of showing GMT time, I think we can have a value in the configuration file for the timezone; the web UI would then pull this value and adjust the timestamps to be "local".
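A rough sketch of the idea (the config key, helper name, and use of Python 3.9+ zoneinfo are all assumptions, not part of the scoring engine):

from datetime import timezone
from zoneinfo import ZoneInfo  # assumes Python 3.9+

def localize(ts_utc, tz_name):
    # Convert a naive UTC timestamp to the timezone configured in the settings file.
    return ts_utc.replace(tzinfo=timezone.utc).astimezone(ZoneInfo(tz_name))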
Added in https://github.com/pwnbus/scoring_engine/pull/244
|
2025-04-01T04:35:14.383226
| 2024-11-09T18:54:03
|
2646424306
|
{
"authors": [
"s3alfisc",
"vincentarelbundock"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9990",
"repo": "py-econometrics/pyfixest",
"url": "https://github.com/py-econometrics/pyfixest/pull/700"
}
|
gharchive/pull-request
|
_narwhals_to_pandas
This is a proof of concept to partially solve #533. It is a minimal change that replaces the existing _polars_to_pandas() with a _narwhals_to_pandas().
Let me know if you want me to continue with this approach, and what you would expect in a merge-ready PR.
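Conceptually, the new helper might look roughly like this (a hypothetical sketch; the narwhals arguments and conversion call are assumptions, not the merged implementation):

import narwhals as nw

def _narwhals_to_pandas(data):
    # Let narwhals recognise the native frame (pandas, polars, duckdb, ...),
    # then hand back a plain pandas DataFrame for the internal estimation code.
    return nw.from_native(data, eager_or_interchange_only=True).to_pandas()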
Here’s an example where pyfixest ingests a DuckDB table for both fitting and prediction:
import pyfixest as pf
import duckdb
data = pf.get_data()
data = duckdb.query("SELECT * FROM data")
type(data)
<class 'duckdb.duckdb.DuckDBPyRelation'>
fit = pf.feols("Y ~ X1 | f1 + f2", data=data)
fit.predict(newdata = data)[:10]
[ 2.20341554 nan nan 3.23931955 -1.44371374 -1.29643938
-1.86119629 -1.34576434 1.38234328 -2.24958648]
fit.summary()
Estimation: OLS
Dep. var.: Y, Fixed effects: f1+f2
Inference: CRV1
Observations: 997
| Coefficient | Estimate | Std. Error | t value | Pr(>|t|) | 2.5% | 97.5% |
|:--------------|-----------:|-------------:|----------:|-----------:|-------:|--------:|
| X1 | -0.919 | 0.065 | -14.057 | 0.000 | -1.053 | -0.786 |
---
RMSE: 1.441 R2: 0.609 R2 Within: 0.2
None
pre-commit.ci autofix
This looks really cool! I love that narwhals handles not only polars but also duckdb. Thank you =)
I actually think this is 95% ready to be merged. Maybe we could add a small API test here that verifies that pf.feols() returns identical results, irrespective of the input data frame type. Additionally, would we have to change the type description for the input data frame type for the pf.feols and pf.fepois functions? It currently uses a custom DataFrameType and asks the linter not to check - I would think that narwhals might support a type hint natively?
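A small parity test along those lines might look like this (a hypothetical sketch, not the test that was actually added):

import duckdb
import polars as pl
import pyfixest as pf

def test_feols_frame_types_agree():
    data = pf.get_data()
    fit_pd = pf.feols("Y ~ X1 | f1 + f2", data=data)
    fit_pl = pf.feols("Y ~ X1 | f1 + f2", data=pl.from_pandas(data))
    fit_db = pf.feols("Y ~ X1 | f1 + f2", data=duckdb.query("SELECT * FROM data"))
    # Coefficients should be identical regardless of the input frame type.
    assert fit_pd.coef().equals(fit_pl.coef())
    assert fit_pd.coef().equals(fit_db.coef())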
Took a quick look at the narwhals docs and the relevant type hint seems to be narwhals.typing.IntoDataFrame:
from __future__ import annotations
import narwhals as nw
from narwhals.typing import IntoDataFrame
def func(df_native: IntoDataFrame) -> tuple[int, int]:
    df = nw.from_native(df_native, eager_only=True)
    return df.shape
All the test errors are struggles or rpy2, which happens sometimes. I'll take a look at these @vincentarelbundock =)
Excellent! I changed the type hint and added a small test.
Don't know what your policy is for changelog and version number while in dev. happy to make a change if you indicate or let you do it yourself in my branch before/after merge.
Don't know what your policy is for changelog and version number while in dev. happy to make a change if you indicate or let you do it yourself in my branch before/after merge.
Usually we don't bump versions when merging from dev to main, which I suppose is bad practice? Changelog are automatically handled by a release bot, which automatically updates the github release notes.
Do you have a recommendation on how to potentially improve these processes?
pre-commit.ci autofix
@all-contributors please add @vincentarelbundock for code
@all-contributors please add @MarcoGorelli for review
@all-contributors please add @MarcoGorelli for review
|
2025-04-01T04:35:14.401267
| 2024-06-25T16:38:15
|
2373137061
|
{
"authors": [
"Saransh-cpp",
"agriyakhetarpal",
"kratman"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9991",
"repo": "pybamm-team/PyBaMM",
"url": "https://github.com/pybamm-team/PyBaMM/pull/4215"
}
|
gharchive/pull-request
|
V24.5
Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
Type of change
Please add a line in the relevant section of CHANGELOG.md to document the change (include PR #) - note reverse order of PR #s. If necessary, also add to the list of breaking changes.
[ ] New feature (non-breaking change which adds functionality)
[ ] Optimization (back-end change that speeds up the code)
[ ] Bug fix (non-breaking change which fixes an issue)
Key checklist:
[ ] No style issues: $ pre-commit run (or $ nox -s pre-commit) (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
[ ] All tests pass: $ python run-tests.py --all (or $ nox -s tests)
[ ] The documentation builds: $ python run-tests.py --doctest (or $ nox -s doctests)
You can run integration tests, unit tests, and doctests together at once, using $ python run-tests.py --quick (or $ nox -s quick).
Further checks:
[ ] Code is commented, particularly in hard-to-understand areas
[ ] Tests added that prove fix is effective or that feature works
Triggered another deployment after the last commit: https://github.com/pybamm-team/PyBaMM/actions/runs/9668643828, cancelling the tests here for now
Oops, I updated the tag but I missed pushing the tag 😅 I don't think the workflow ran at all because it was stuck in the queue, I cancelled it just now. I think I will let you do it because I am about to log off
Oops, I updated the tag but I missed pushing the tag 😅 I don't think the workflow ran at all because it was stuck in the queue, I cancelled it just now. I think I will let you do it because I am about to log off
Yeah I already triggered the correct one. Hopefully you did not cancel mine
I think I didn't, it's still running: https://github.com/pybamm-team/PyBaMM/actions/runs/9668726233
Tests and deployment passed except for lychee
Merging this
|
2025-04-01T04:35:14.407086
| 2018-06-25T11:42:55
|
335370859
|
{
"authors": [
"freakboy3742",
"hamzzy",
"stantonxu"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9992",
"repo": "pybee/toga",
"url": "https://github.com/pybee/toga/issues/555"
}
|
gharchive/issue
|
'toga_gtk.factory' has no attribute 'DetailedList'
Expected Behavior
Current Behavior
Whenever I import DetailedList and use it, I get:
'toga_gtk.factory' has no attribute 'DetailedList'
Your Environment
[ ] Python Version (list the specific version number)
Operating System and Version (select from the following and list the specific version number; if your OS is not listed, list that as well)
[ ] macOS - version:
[x] Linux - distro: Ubuntu - version: 18.04
[ ] Windows - version:
[ ] Other - name: - version:
Toga Target (the type of app you are trying to generate)
[ ] android
[ ] cocoa
[ ] django
[x] gtk
[ ] iOS
[ ] tvOS
[ ] watchOS
[ ] winforms
[ ] win32
[ ] Other (please specify)
Which version of toga do you use?
Detailed list isn't implemented for the GTK backend yet; hence the error.
@hamzzy , as @freakboy3742 indicated, if you look into the source code, DetailedList is not implemented yet for GTK.
|
2025-04-01T04:35:14.427455
| 2024-08-22T06:20:42
|
2479935928
|
{
"authors": [
"MtkN1"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9993",
"repo": "pybotters/pybotters",
"url": "https://github.com/pybotters/pybotters/pull/306"
}
|
gharchive/pull-request
|
Support bitbank ACCESS-TIME-WINDOW authentication method
Refer: https://bitbank.cc/blog/articles/406396361
This pull request adds support for the ACCESS-TIME-WINDOW authentication method in the bitbank API. It includes changes to the bitbank function to handle the ACCESS-TIME-WINDOW header in both GET and POST requests. The bitbank_get_with_window test case
has been added to ensure the correct behavior of the new functionality.
The header can also be set to ACCESS-TIME-WINDOW. pybotters will honor it:
async def get_wallet_balance(client: pybotters.Client):
    result = await client.fetch(
        "GET", "/v1/user/assets", headers={"ACCESS-TIME-WINDOW": "3000"}
    )
    print(result.text[:1000])
|
2025-04-01T04:35:14.431154
| 2017-03-28T08:42:49
|
217487472
|
{
"authors": [
"mriehl",
"potasiak207589"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9994",
"repo": "pybuilder/pybuilder",
"url": "https://github.com/pybuilder/pybuilder/issues/470"
}
|
gharchive/issue
|
pip_utils does not work well with virtualenv
The pip_utils module uses the system's python binary (from sys.executable) when installing batch dependencies, instead of the local binary of the active virtualenv. This leads to the error:
BUILD FAILED - Unable to install batch dependencies.
To make it work we have to install the dependencies into the system's python (using sudo), which leads to another error when we want to run PyBuilder in a virtualenv: the build directory is now owned by the root user:
BUILD FAILED - [Errno 13] Permission denied: '/path/to/project/build/reports/unittest
So we have to remove the build directory:
$ sudo rm -r build/
And then run PyBuilder again to make it work properly.
The fix requires replacing all usages of system's python with actual python binary used in the command-line. It can be achieved by simply using python command, finding the binary path with command:
$ which python
or adding command-line parameter to specify Python binary path.
@potasiak207589 why did you close this? Did you manage to resolve this on your own?
Your assumption that sys.executable is the system python is wrong, it's the currently running python executable.
If you get permission issues probably it means that you installed pybuilder system-wide (something like sudo pip install pybuilder) instead of in a virtualenv.
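For reference, a quick way to see which interpreter is actually running (this is just standard Python, not a PyBuilder API):

import sys

# sys.executable is the interpreter that is currently running; inside an
# activated virtualenv this points at the venv's python, not the system one.
print(sys.executable)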
|
2025-04-01T04:35:14.481012
| 2024-02-12T14:49:29
|
2130296766
|
{
"authors": [
"sergue1",
"sydney-runkle",
"tim-habitat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9995",
"repo": "pydantic/FastUI",
"url": "https://github.com/pydantic/FastUI/pull/184"
}
|
gharchive/pull-request
|
Fix description for array-like fields
Fixes #178
I think this also applies to other fields - in particular I had an error with json_schema here
@sydney-runkle , I am a bit confused that line 313 is uncovered: revert it and the newly added test will fail as expected. Maybe it is an issue with codecov?
Huh, odd! Yeah, maybe an issue. I'll see if I can try to skip that check for this PR.
Ah ok I think perhaps this is because we changed a line that was not covered before?
|
2025-04-01T04:35:14.485509
| 2023-07-03T15:27:14
|
1786404113
|
{
"authors": [
"adriangb",
"davidhewitt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9996",
"repo": "pydantic/pydantic-core",
"url": "https://github.com/pydantic/pydantic-core/pull/731"
}
|
gharchive/pull-request
|
update to PyO3 0.19.1
Change Summary
Update PyO3 to 0.19.1 to pick up the fix for PyO3's set memory leak and support for PyPy 3.10.
Related issue number
N/A
Checklist
[ ] Unit tests for the changes exist
[ ] Documentation reflects the changes where applicable
[ ] Pydantic tests pass with this pydantic-core (except for expected changes)
[ ] My PR is ready to review, please add a comment including the phrase "please review" to assign reviewers
Can we also incorporate https://github.com/PyO3/pyo3/pull/3156 to replace https://github.com/pydantic/pydantic-core/blob/d76812ae6f6f7ca4897eb7a17cc2b00286737c1b/src/input/return_enums.rs#L200-L213?
Can we also incorporate PyO3/pyo3#3156 to replace
https://github.com/pydantic/pydantic-core/blob/d76812ae6f6f7ca4897eb7a17cc2b00286737c1b/src/input/return_enums.rs#L200-L213
?
I took a look at this, but PyFrozenSetBuilder doesn't expose len(). I'll take a look at adding to PyO3 upstream, maybe when we get to PyO3 0.20 :)
|
2025-04-01T04:35:14.492550
| 2024-07-17T15:39:15
|
2413947093
|
{
"authors": [
"Viicos",
"sir-sigurd",
"sydney-runkle"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9997",
"repo": "pydantic/pydantic",
"url": "https://github.com/pydantic/pydantic/issues/9910"
}
|
gharchive/issue
|
get install the eval_type_backport error, though don't use new types syntax
Initial Checks
[X] I confirm that I'm using Pydantic V2
Description
I get this error:
TypeError: You have a type annotation 'bool | None' which makes use of newer typing features than are supported in your version of Python. To handle this error, you should either remove the use of new syntax or install the eval_type_backport package.
with the code example.
I can't "remove the use of new syntax" as suggested because I don't use new syntax.
It looks like this happens because pydantic itself uses new syntax in StringConstraints:
https://github.com/pydantic/pydantic/blob/b3ce47f38a906d0b957e0870a651da394b8461bc/pydantic/types.py#L696-L702
Probable fixes are:
use typing.Union instead of new syntax
add dependency on eval_type_backport package on old Python versions
Example Code
import pydantic
class M(pydantic.BaseModel):
    f: pydantic.StringConstraints
Python, Pydantic & OS Version
pydantic version: 2.8.2
pydantic-core version: 2.20.1
pydantic-core build: profile=release pgo=false
install path: /Users/sergey/dev/work/quiltdata/quilt/lambdas/s3hash/venv/lib/python3.8/site-packages/pydantic
python version: 3.8.18 (default, Jan 18 2024, 12:23:57) [Clang 15.0.0 (clang-15<IP_ADDRESS>.5)]
platform: macOS-14.5-arm64-arm-64bit
related packages: typing_extensions-4.9.0
commit: unknown
@sir-sigurd,
What happens if you do:
from __future__ import annotations
import pydantic
class M(pydantic.BaseModel):
    f: pydantic.StringConstraints
StringConstraints is not meant to be used as a type annotation, but only within Annotated (see docs). However, the error you're getting seems weird, I'll try to check why Pydantic doesn't error correctly.
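For reference, the documented pattern puts the constraints inside Annotated; the specific constraint arguments below are just illustrative:

from typing import Annotated

import pydantic

class M(pydantic.BaseModel):
    # StringConstraints is metadata attached to str, not a type of its own.
    f: Annotated[str, pydantic.StringConstraints(strip_whitespace=True, max_length=10)]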
@sir-sigurd,
What happens if you do:
from __future__ import annotations
import pydantic
class M(pydantic.BaseModel):
    f: pydantic.StringConstraints
I get the same error.
StringConstraints is not meant to be used as a type annotation, but only within Annotated (see docs).
You're right, I didn't read migration guide carefully 😅.
So I'm closing the issue because it seems there is nothing TBD.
|
2025-04-01T04:35:14.495533
| 2023-07-12T13:20:32
|
1800972835
|
{
"authors": [
"lig"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9998",
"repo": "pydantic/pydantic",
"url": "https://github.com/pydantic/pydantic/pull/6622"
}
|
gharchive/pull-request
|
🐛 Support defining json_schema_extra on RootModel using Field
Change Summary
Add support for json_schema_extra using Field on Root.root
Related issue number
Fix #6579
Checklist
[x] The pull request title is a good summary of the changes - it will be used in the changelog
[x] Unit tests for the changes exist
[x] Tests pass on CI and coverage remains at 100%
[x] Documentation reflects the changes where applicable
[x] My PR is ready to review, please add a comment including the phrase "please review" to assign reviewers
please review
|
2025-04-01T04:35:14.498253
| 2022-11-01T21:22:12
|
1432117782
|
{
"authors": [
"EwoutH",
"drammock"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:9999",
"repo": "pydata/pydata-sphinx-theme",
"url": "https://github.com/pydata/pydata-sphinx-theme/pull/1046"
}
|
gharchive/pull-request
|
Add Dependabot configuration for GitHub Actions updates
Add a Dependabot configuration that checks once a week if the GitHub Actions are still using the latest version. If not, it opens a PR to update them.
It will actually open very few PRs, since we only have major versions specified (like v3), so only on a major v4 release it will update and open a PR.
See Keeping your actions up to date with Dependabot.
It will actually open very few PRs, since we only have major versions specified (like v3), so only on a major v4 release it will update and open a PR.
seems this was incorrect @EwoutH --- see #1047 😅
Still only one PR, which is not bad and put out an interesting flaw of the current CI setup!
Specifically for that action, I would suggest using their stable release branch, like they recommend:
uses: pypa/gh-action-pypi-publish@release/v1
|
2025-04-01T04:35:14.509410
| 2023-10-04T08:28:58
|
1925677779
|
{
"authors": [
"BSchilperoort"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10000",
"repo": "pydata/xarray",
"url": "https://github.com/pydata/xarray/pull/8270"
}
|
gharchive/pull-request
|
Add xarray-regrid to ecosystem.rst
I was asked to open a PR to add our xarray-regrid extension to the ecosystem list (#8260), so here it is.
Feel free to add an entry in whats-new for an extra shout-out in the next release notes.
Thank you! I have done so now.
|
2025-04-01T04:35:14.514475
| 2019-06-05T07:16:44
|
452348076
|
{
"authors": [
"coveralls",
"moltob",
"schettino72",
"slaperche-scality"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10001",
"repo": "pydoit/doit",
"url": "https://github.com/pydoit/doit/pull/306"
}
|
gharchive/pull-request
|
doc: add MetalK8s success story
Here is the success story about how we replaced the existing make-based build system by doit on MetalK8s.
Thanks for the great tool!
Coverage remained the same at 99.736% when pulling 96e3a54fdacd26762e3915b0b27e6f43e1394a97 on slaperche-scality:metalk8s_success_story into 8bf9869364c830103bf4cf45c71ae07439f4e9a7 on pydoit:master.
Great, thanks :smile:
`Invoke` was pretty good at executing commands
Is it better than doit in this aspect? Why? How could doit improve...
I am thinking about adding the logo of project/companies in doit homepage, do you think it would ok to add it? Need any special permission?
@schettino72 Regarding the BMW success story, showing the logo should be fine. I suggest to make the bitmap a link to the story for which I was given permission to release.
@schettino72
Is it better then doit in this aspect? Why? How doit could improve...
Not really, at least for our use doit is as convenient as Invoke :slightly_smiling_face:
I am thinking about adding the logo of project/companies in doit homepage, do you think it would ok to add it? Need any special permission?
I asked around, and no problem on our side: you can add the MetalK8s logo on doit homepage.
|
2025-04-01T04:35:14.551909
| 2018-09-24T13:34:45
|
363148932
|
{
"authors": [
"eric-wieser",
"hugohadfield"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10002",
"repo": "pygae/clifford",
"url": "https://github.com/pygae/clifford/issues/54"
}
|
gharchive/issue
|
Finish algebra generation optimisation
There is still quite a bit of low-hanging fruit performance-wise around the algebra generation setup. Currently we swap between tuples of grades, bitmaps, and indices into the mv value array pretty liberally. Eliminating this and jitting the remaining functions would let us speed up generation considerably, lean on numba's automatic parallelisation capabilities, and potentially even add a CUDA generation option.
Additionally @enkimute has suggested a nice loop free bitcount method that could add additional performance:
https://github.com/enkimute/ganja.js/issues/17#issuecomment-423962512
Currently we swap between tuples of grades, bitmaps and indices into the mv value array pretty liberally, eliminating this and ...
This is mostly done in #273
Instead of using @enkimute's optimization, we now just use __builtin_popcnt which might even turn into a single assembly instruction.
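For illustration, the underlying idea is that the grade of a basis blade stored as a bitmap is simply its population count (this snippet is only conceptual, not clifford's implementation):

def grade(bitmap: int) -> int:
    # e.g. e1^e3 -> 0b101 -> grade 2
    return bin(bitmap).count("1")

assert grade(0b101) == 2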
|
2025-04-01T04:35:14.555480
| 2022-02-02T14:52:29
|
1122023764
|
{
"authors": [
"Korijn",
"almarklein"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10003",
"repo": "pygfx/wgpu-py",
"url": "https://github.com/pygfx/wgpu-py/issues/240"
}
|
gharchive/issue
|
Using the wheel with Shift swaps horizontal/vertical
On MacOS: for glfw, qt, jupyter.
On Windows, todo
On Linux: todo
Not sure where to deal with this. Here or in pygx event handlers. I guess it depends on how this differs per platform.
As far as I know this works the same on all platforms.
I'm seeing something different :)
After some thought, I don't think we can do much about it: if you see Shift, a zero dy and a nonzero dx, it might as well be a true horizontal scroll (e.g. via the trackpad). In fact, the reported behavior only occurs when using the mouse, not when using the touchpad, and we cannot distinguish between them.
My inclination is to handle this downstream using something like d = dx or dy where you don't necessarily need to differentiate between vertical/horizontal.
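A rough sketch of that downstream pattern (the dx/dy field names follow the wheel event dict discussed above; apply_zoom is a hypothetical consumer):

def on_wheel(event):
    d = event["dx"] or event["dy"]  # whichever axis actually moved
    apply_zoom(1.0 - d / 500)       # hypothetical consumer of the scroll delta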
My inclination is to handle this downstream using something like d = dx or dy
I've done this now in the gizmo pr.
I also added notes in the jupyter_rfb event spec: https://github.com/vispy/jupyter_rfb/pull/55
With that, I think we can close this issue.
|
2025-04-01T04:35:14.556988
| 2020-12-29T09:51:25
|
775802605
|
{
"authors": [
"Anteru",
"LuminousXLB"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10004",
"repo": "pygments/pygments",
"url": "https://github.com/pygments/pygments/issues/1653"
}
|
gharchive/issue
|
Lineno support for Terminal256Formatter
Why Terminal256Formatter doesn't support lineno output? Is it for some specific reason?
Can I submit a PR to add that support?
I don't think there was a particular reason for that other than "nobody asked for it" before. A PR would be certainly welcome, I assume you want this as an additional option?
|
2025-04-01T04:35:14.557948
| 2023-08-03T14:05:45
|
1835137341
|
{
"authors": [
"Anteru",
"MichaelHuth"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10005",
"repo": "pygments/pygments",
"url": "https://github.com/pygments/pygments/pull/2482"
}
|
gharchive/pull-request
|
Update Igor Pro lexer for Igor Pro 9
The functions and operations list in the Igor Pro lexer is updated for the recent Igor Pro 9 version.
Thanks for the contribution!
|
2025-04-01T04:35:14.612715
| 2021-09-13T07:40:15
|
994560000
|
{
"authors": [
"liamhuber",
"niklassiemer",
"raynol-dsouza"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10006",
"repo": "pyiron/pyiron_continuum",
"url": "https://github.com/pyiron/pyiron_continuum/pull/56"
}
|
gharchive/pull-request
|
Schroedinger
A class for solving the time-independent Schroedinger equation, including helper classes for meshing and background potentials. @raynol-dsouza uses a similar home-brewed class for quantum solutions to optimal volume atm.
Known TODOs:
Unit (integration?) tests for the main class
~ImportAlarm wrapping of some of the 3d plotting stuff~
~More and correct physics for thermal occupation of states~
Units
First, just actually using them correctly, not the reduced units for the electron solution currently implied
Second, ideally, leveraging @sudarsan-surendralal's work on generic pyiron unit handling, but that can possibly wait for a future PR
~Polish and upload example notebook~
?Wait for the Toolkit stuff to be available on pyiron_base and expose meshes and potentials there.
Hi @liamhuber ,
Issue with RectMesh: Suppose I want an output from RectMesh.mesh that is 1d, essentially a simple linear interpolation between 2 extrema, let's say 1 and 10, I provide
foo = RectMesh(bounds=[1, 10], divisions=10, simplify_1d=True)
what I expect the output of foo.mesh to look like (same as using numpy.linspace):
array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
but the output looks like this:
array([[[0.00000000e+00, 9.99000999e-04, 1.99800200e-03, ...,
9.97002997e-01, 9.98001998e-01, 9.99000999e-01],
[0.00000000e+00, 9.99000999e-04, 1.99800200e-03, ...,
9.97002997e-01, 9.98001998e-01, 9.99000999e-01],
[0.00000000e+00, 9.99000999e-04, 1.99800200e-03, ...,
9.97002997e-01, 9.98001998e-01, 9.99000999e-01],
...,
[0.00000000e+00, 9.99000999e-04, 1.99800200e-03, ...,
9.97002997e-01, 9.98001998e-01, 9.99000999e-01],
[0.00000000e+00, 9.99000999e-04, 1.99800200e-03, ...,
9.97002997e-01, 9.98001998e-01, 9.99000999e-01],
[0.00000000e+00, 9.99000999e-04, 1.99800200e-03, ...,
9.97002997e-01, 9.98001998e-01, 9.99000999e-01]],
[[0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[9.99000999e-03, 9.99000999e-03, 9.99000999e-03, ...,
9.99000999e-03, 9.99000999e-03, 9.99000999e-03],
[1.99800200e-02, 1.99800200e-02, 1.99800200e-02, ...,
1.99800200e-02, 1.99800200e-02, 1.99800200e-02],
...,
[9.97002997e+00, 9.97002997e+00, 9.97002997e+00, ...,
9.97002997e+00, 9.97002997e+00, 9.97002997e+00],
[9.98001998e+00, 9.98001998e+00, 9.98001998e+00, ...,
9.98001998e+00, 9.98001998e+00, 9.98001998e+00],
[9.99000999e+00, 9.99000999e+00, 9.99000999e+00, ...,
9.99000999e+00, 9.99000999e+00, 9.99000999e+00]]])
I would like to think I am doing something wrong and that this is a feature of the class. The documentation is not super clear to me. I understand that 'mesh' should indeed give me a 'mesh' (equivalent to numpy.meshgrid's output), but to use the TISE class, I need to provide it with a RectMesh object since it does not accept a numpy.ndarray or a recursive list of lists with values.
That brings me to my next comment about the TISE class: As an input, I would also like to be able to provide input.potential and input.mesh as simple numpy.ndarray objects (the dimensions of the two would obviously need to match). I believe it would be more convenient to add this also as a feature than the TISE class solely relying on objects of the RectMesh and Potential classes as respective inputs.
Issue with RectMesh: Suppose I want an output from RectMesh.mesh that is 1d, essentially a simple linear interpolation between 2 extrema, let's say 1 and 10, I provide
foo = RectMesh(bounds=[1, 10], divisions=10, simplify_1d=True)
what I expect the output of foo.mesh to look like (same as using numpy.linspace)
Yes, this is easily possible. You just need bounds=[[1, 10]] instead.
The problem is that we can't simultaneously allow [1, 10] and a shortcut to a 2d domain on x=(0..1) and y=(0..10) and also allow [1, 10] to be a 1d domain with different starting points. Any 1d array (equivalent) is interpreted as only providing the end points.
As an input, I would also like to be able to provide input.potential ~and input.mesh~ as simple numpy.ndarray objects
Done.
and input.mesh
This is possible, but I don't like it. We could support both, but this would create a terrible mess of if/else or try/except clauses throughout the code and I'm not willing to support that. We could support only numpy arrays of the right dimension, but then we'd need to implicitly back out the step size in each dimension and implement Laplacian right in this class.
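For context, a uniform-grid finite-difference Laplacian in 1-D is roughly the following (illustrative only, not the RectMesh implementation):

import numpy as np

def laplacian_1d(psi, dx):
    # second-order central difference with periodic wrap-around
    return (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2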
My hope and expectation is that putting operators right on the RectMesh object will be generically useful for the continuum module and that this class will get lifted up out of the schroedinger module at a later date. I have a strong preference to diffuse some of the responsibility among classes. Potential is just a convenience thing, but RectMesh has real responsibilities and I don't want to dump these on TISE.
(the dimensions of the two would obviously need to match).
I no longer guarantee this now that potential accepts regular arrays. It's possible to add these (read: we probably should add this check), but since they can be set in either order it is a pain in the butt so for now any user using array potentials will just have to live dangerously.
Yes, this is easily possible. You just need bounds=[[1, 10]] instead
Aha! I feel dumb for not trying this out. But I would anyway suggest adding this in the documentation for other lost souls like mine out there who'd want to use the class.
This is possible, but I don't like it. We could support both, but this would create a terrible mess of if/else or try/except clauses throughout the code and I'm not willing to support that. We could support only numpy arrays of the right dimension, but then we'd need to implicitly back out the step size in each dimension and implement Laplacian right in this class.
My hope and expectation is that putting operators right on the RectMesh object will be generically useful for the continuum module and that this class will get lifted up out of the schroedinger module at a later date. I have a strong preference to diffuse some of the responsibility among classes. Potential is just a convenience thing, but RectMesh has real responsibilities and I don't want to dump these on TISE
Fair enough. I can live with this. However, I think it would be useful to give numpy.ndarrays directly into the bounds. For ex. for the 2d case:
foo = numpy.array([0, 1, 2, 3, 4])
bar = numpy.array([5, 6, 7, 8, 9])
mesh = RectMesh(bounds=[[foo], [bar]], divisions=None)
I ask this since if I have raw data, in my case, E(V) data from a Murnaghan type job, I would not want to fit the data to a function. But I see that brings an additional problem of having a variable step size for the Laplacian. Could we come up with a workaround for this*?
I no longer guarantee this now that potential accepts regular arrays. It's possible to add these (read: we probably should add this check), but since they can be set in either order it is a pain in the butt so for now any user using array potentials will just have to live dangerously.
Hahaa! I would then suggest that the TISE class accept only objects of the Potential class in the long run. However, I would ask (or take up) the additional job of making the Potential class more flexible, in a way that I would give it my raw potential data (as a numpy.ndarray) with a corresponding mesh (again a numpy.ndarray), and the class would then return a child of itself (I see that the child would need to have a __call__ method) AND a RectMesh wrapped around this raw data. The latter, PROVIDED my * statement is satisfied.
I believe I found the missing factor to use the Schrodinger class exclusively in pyiron units. Quoting from the documentation,
H = -(hbar^2 / 2 m)(del^2 / del x^2) + V
Pyiron units:
- m = atomic mass units
- x = Angstroms
- V = eV
Thus, to get the first term to eV we need hbar in units of sqrt(eV Angstroms^2 u).
I found by dimensional analysis that we can get the entire correction term in eV by keeping HBAR in eV s itself, and using the correction factor which looks like this. I tested it out and it works in my case with a real potential.
I would set HBAR = scipy.constants.physical_constants['reduced Planck constant in eV s'][0] in the code, keep the mass in pyiron units (For C, that would be 12.011), and line 174 would then return:
-(HBAR**2 / (2 * self.input.mass) * 9.64853322e27) * self.mesh.laplacian(psi) + self._potential_psi(psi)
the magic number in there being the conversion factor.
I found by dimensional analysis that we can get the entire correction term in eV by keeping HBAR in eV s itself, and using the correction factor which looks like this. I tested it out and it works in my case with a real potential.
I would set HBAR = scipy.constants.physical_constants['reduced Planck constant in eV s'][0] in the code, keep the mass in pyiron units (For C, that would be 12.011), and line 174 would then return:
-(HBAR**2 / (2 * self.input.mass) * 9.64853322e27) * self.mesh.laplacian(psi) + self._potential_psi(psi)
the magic number in there being the conversion factor.
Super, that sounds good to me. I would only request that the correction factor be labeled file-level all-caps global at the top of the file, alongside HBAR and KB -- i.e. no magic numbers. Definitely include the Wolfram link next to it, or even better construct it from Scipy constants values (if such a construction is clear/even possible).
Super, that sounds good to me. I would only request that the correction factor be labeled file-level all-caps global at the top of the file, alongside HBAR and KB -- i.e. no magic numbers. Definitely include the Wolfram link next to it, or even better construct it from Scipy constants values (if such a construction is clear/even possible).
I concur! Does this also mean that you are (very politely) asking me to make these changes? :P
Super, that sounds good to me. I would only request that the correction factor be labeled file-level all-caps global at the top of the file, alongside HBAR and KB -- i.e. no magic numbers. Definitely include the Wolfram link next to it, or even better construct it from Scipy constants values (if such a construction is clear/even possible).
I concur! Does this also mean that you are (very politely) asking me to make these changes? :P
Haha, rather 'instructing' you to. You'll also want to re-execute the demo and adjust potential magnitudes and masses there to make sure the graphs still all look "pretty".
I'm not worried about the timescale you do this on though -- it's for your project anyhow, and I'd also like to hold off on merging until pyiron_continuum relies on a version of base that includes the Toolkit stuff.
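As a hedged sketch of what that file-level constant could look like if built from scipy rather than a hard-coded magic number (the constant names here are placeholders, not necessarily what ends up in the module):
from scipy.constants import physical_constants, elementary_charge, atomic_mass, angstrom
HBAR = physical_constants['reduced Planck constant in eV s'][0]
# eV*s^2 / (u * Angstrom^2) is dimensionless when expressed in SI numbers and evaluates
# to ~9.64853e27, i.e. the conversion factor quoted above
KE_FACTOR = elementary_charge / (atomic_mass * angstrom**2)
# kinetic term in eV for mass in u and mesh spacing in Angstrom (illustrative only):
# -(HBAR**2 / (2 * mass) * KE_FACTOR) * mesh.laplacian(psi)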
I think it would be useful to give numpy.ndarrays directly into the bounds. For ex. for the 2d case:
foo = numpy.array([0, 1, 2, 3, 4])
bar = numpy.array([5, 6, 7, 8, 9])
mesh = RectMesh(bounds=[[foo], [bar]], divisions=None)
I ask this since if I have raw data, in my case, E(V) data from a Murnaghan type job, I would not want to fit the data to a function. But I see that brings an additional problem of having a variable step size for the Laplacian. Could we come up with a workaround for this*?
I see how this is useful, but it's also super dangerous. What's to stop a user from entering an array with non-uniform displacements in a given axis? We could support that in the future too, but it would involve more changes (e.g. to the math in laplacian) and I don't have time to make those rigourously right now.
I don't remember what the input to Murnaghan jobs is -- could you not generate a RectMesh before doing the atomistics jobs and use its mesh as input to both the E(V) calculation and the QM solution?
Hahaa! I would then suggest that the TISE class accept only objects of the Potential class in the long run. However, I would ask (or take up) the additional job of making the Potential class more flexible, in a way that I would give it my raw potential data (as a numpy.ndarray) with a corresponding mesh (again a numpy.ndarray), and the class would then return a child of itself (I see that the child would need to have a __call__ method) AND a RectMesh wrapped around this raw data. The latter, PROVIDED my * statement is satisfied.
Actually I do rather like the ability to provide the potential as a straight numpy array. User-defined Potential children are certainly easy enough to implement, but I worry that handling the storage/reinstantiation gets tricky if we insist on everyone making their own child class.
Much easier if they have an array (or function returning an array) in their notebook and slap that on as the potential -- then it gets serialized trivially.
A really determined user could extend the available Potentials or make their own personal pyiron submodule that includes these, e.g. if they have the same class of potential that they want to parameterize and use repeatedly. But let's not make that a necessity.
Thanks @samwaseda, excellent feedback! All implemented.
@liamhuber I was adding the conversion factor to the TISE class to solve the Schroedinger equation in pyiron units, which has mass in AMU. The examples you have in your demo notebooks give not so nice results for mass in AMU. For mass in electron mass though, they work the same as in your notebook.
Shall I keep the default mass in the class as AMU, and express the mass in the demo notebook, which I think works best for an electron, from electron mass to AMU? Or should we address this issue another way?
@raynol-dsouza yes, the demonstrations will certainly need some numeric adjustment with the new (correct) units in place. My gut is to use pyiron units overall, but keep the default mass as electron mass for Schroedinger. I haven't thought it through super deeply, but it should be possible to jimmy the demos to come out nicely by adjusting the size and depth of the potentials. My frame of reference was to think about hydrogen ionization energy (-13 eV for the ground state, -3 eV for first excited) and hydrogen 1s orbital size to give a rough length scale (this I would need to look up).
I'll take a look at your PR soon, but you could use the hydrogen attack to try to adjust the demos already if you want.
The alternative is to make them for carbon or aluminium mass or something, which should then give pretty solutions for energies and lengths that we select with just our gut intuition, and lines up well with pyiron atomistics, but feels off since quantum stuff is usually for electrons by default.
@liamhuber The notebook works fine for hydrogen, provided the bounds for RectMesh are between 0 and 1 (for the 1d case). However, I was unable to recover the ground state and 1st excited state energies.
On the 'mods_to_shcrodinger' (yeah, I mistyped) branch the following code runs well, but I get different eigenenergies.
job1d_square = pr.create.job.TISE('tise1d_square', delete_existing_job=True)
job1d_square.input.potential = SquareWell(depth=10)
job1d_square.input.mesh = RectMesh(bounds=[[0, 1]], divisions=100)
job1d_square.input.n_states = 10
job1d_square.input.mass = 1.00784
job1d_square.run()
This is exactly what I did initially! It looked a little unconvincing to me, so I followed the method I posted. If it looks like it should, then wonderful! If the default mass in electron mass, do we then scale internally to AMU? Or have a flag that specifies atom mass or something else?
This is exactly what I did initially! It looked a little unconvincing to me, so I followed the method I posted.
For me it looks like this
which is totally fine. I did find it pretty sensitive to the well width though.
The notebook should be adjusted so it all looks ok with M_e.
If it looks like it should, then wonderful! If the default mass in electron mass, do we then scale internally to AMU? Or have a flag that specifies atom mass or something else?
We should absolutely keep everything in AMU throughout. We can simply set the default mass to electron mass in AMU: physical_constants['electron mass in u'][0]. Easy peasy.
@samwaseda spurred on by your opinion that this might actually be useful, I'd like to show it at ADIS on Tuesday. I think you, Raynol, and I are all pretty happy with the overall setup, but would you mind taking a look through the documentation and user-interface (i.e. demo notebook) to double check that I'm not missing any important QoL issues?
ahhdammit, pyiron_base still needs another incremental bump before the tests will pass >.<
@liamhuber Sure! Will go through the changes and docs.
Aha, the notebook fails at storage but this is [known and already patched].
@niklassiemer I'd like to present this work at the ADIS meeting on Tuesday -- any chance we can get another base bump by then? IIRC Tuesday is the regular day, so even doing it Tuesday AM would be totally sufficient.
Sure, I will trigger a new release right away 👍 Although Tuesday is now 'regular' we are not bound to it 😄
|
2025-04-01T04:35:14.615190
| 2019-10-08T15:03:43
|
504101423
|
{
"authors": [
"Hertin",
"bmilde"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10007",
"repo": "pykaldi/pykaldi",
"url": "https://github.com/pykaldi/pykaldi/issues/170"
}
|
gharchive/issue
|
spectrogram has negative numbers rather than complex numbers
I compute a spectrogram but found negative numbers in it. Is this spectrogram giving only the real part? Is there a way to get the complex spectrogram or the power spectrogram?
If you want to generate a spectrogram with real and complex parts, it's probably easier to use scipy/FFTs directly
|
2025-04-01T04:35:14.637459
| 2016-09-29T09:24:01
|
179996060
|
{
"authors": [
"david082321"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10008",
"repo": "pylerSM/XInstaller",
"url": "https://github.com/pylerSM/XInstaller/pull/51"
}
|
gharchive/pull-request
|
Update strings.xml
compare with " https://github.com/pylerSM/XInstaller/blob/master/res/values/strings.xml "
|
2025-04-01T04:35:14.672598
| 2023-02-28T08:04:35
|
1602551425
|
{
"authors": [
"juanitorduz",
"ricardoV94"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10009",
"repo": "pymc-labs/pymc-marketing",
"url": "https://github.com/pymc-labs/pymc-marketing/pull/179"
}
|
gharchive/pull-request
|
Relax matplotlib dependency
Shall we relax matplotlib's dependency? I do not remember why we have a specific one.
Could it be because of https://github.com/pymc-labs/pymc-marketing/issues/120 ?
Could it be because of #120 ?
Seems so! Seaborn has cut a release and 0.12.2 is available now
In https://github.com/pymc-labs/pymc-marketing/pull/179/commits/43626f419c9cf2bbfe07fa3639a3f85c3c14965c I tried adding consistent restrictions as in https://github.com/mwaskom/seaborn/blob/master/pyproject.toml#L25-#L29. Still, let me know if there is a safer way of proceeding.
Seaborn itself seems to take care of the invalid numpy and matplotlib dependencies?
https://github.com/mwaskom/seaborn/blob/55a328ba4301f429aa454f5824861323caf91cd0/pyproject.toml#L26-L28
Would it be enough to mark seaborn >= 0.12.2 (and remove the specific != matplotlib and numpy ones)?
Changed in https://github.com/pymc-labs/pymc-marketing/pull/179/commits/fa1e047a72bf196d2a4a7441d171da66a6f9eab6
|
2025-04-01T04:35:14.714985
| 2024-07-12T21:56:57
|
2406398904
|
{
"authors": [
"fdrgsp",
"tlambert03"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10010",
"repo": "pymmcore-plus/useq-schema",
"url": "https://github.com/pymmcore-plus/useq-schema/issues/183"
}
|
gharchive/issue
|
WellPlatePlan selected_wells field_validator
@tlambert03 I think the reason for which when we do:
useq.WellPlatePlan(plate=6, a1_center_xy=(0,0), selected_wells=())
or
useq.WellPlatePlan(plate=6, a1_center_xy=(0,0), selected_wells=None)
we get all the well selected is because in the WellPlatePlan selected_wells field_validator we run plate.indices(value) which in absence of value returns the entire well indices.
Not sure what's the best way to fix that, if in the validator itself, in plate.indices or maybe in _expression_repr methods...
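For illustration, a minimal sketch of the distinction being discussed, written as a standalone helper rather than the actual useq validator (the plate.indices call is taken from this issue; everything else is hypothetical):
def resolve_selected_wells(plate, value):
    if value is None:
        return plate.indices(None)  # current behaviour: everything selected
    if len(value) == 0:
        return ()                   # an explicit empty tuple would mean "no wells"
    return plate.indices(value)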
The intention was to select all wells by default, do you want to change it to select no wells by default?
|
2025-04-01T04:35:14.725823
| 2020-03-02T10:24:38
|
573871016
|
{
"authors": [
"cthoyt"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10011",
"repo": "pyobo/pyobo",
"url": "https://github.com/pyobo/pyobo/issues/6"
}
|
gharchive/issue
|
Get information from other registries
These are sources besides Identifiers.org, OBO Foundry, and the OLS that have names of namespaces and URLs at which they resolve.
Name
URL
Prefix Commons
https://github.com/prefixcommons/prefixes/blob/master/registry.yaml
uGene
https://github.com/ugeneunipro/ugene/blob/master/data/DBXRefRegistry.txt
NCBI
http://www.ncbi.nlm.nih.gov/genbank/collab/db_xref
[ ] What about wikidata? Make a query that grabs all databases and their format URL
Moved to https://github.com/cthoyt/bioregistry/labels/External Registry
|
2025-04-01T04:35:14.812909
| 2020-01-13T13:32:35
|
548932483
|
{
"authors": [
"hbielenia",
"matteius"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10012",
"repo": "pypa/pipenv",
"url": "https://github.com/pypa/pipenv/issues/4095"
}
|
gharchive/issue
|
How to determine which packages give dependency version conflict?
When I try to lock my dependencies, pipenv gives a long error traceback that boils down to pipenv.exceptions.ResolutionFailure: ERROR: ERROR: Could not find a version that matches botocore<1.15.0,<2.0.0,==1.13.50,>=1.12.36,>=1.14.0. pipenv lock --clear gives the same result. I do as instructed and try pipenv install --skip-lock, which succeeds, then pipenv graph. The output from the latter is also very long, so I grep the interesting part out of it with pipenv graph | grep botocore:
- botocore [required: ==1.13.50, installed: 1.13.50]
- botocore [required: >=1.12.36,<2.0.0, installed: 1.13.50]
- botocore [required: >=1.13.49,<1.14.0, installed: 1.13.50]
- botocore [required: >=1.12.36,<2.0.0, installed: 1.13.50]
Correct me if I'm wrong, but this output shows different requirements than those on which pipenv errors out. How do I find which packages actually provide the conflicting requirements, so I can fix them?
Not pasting pipenv --support output because it contains private information and is too long to reasonably redact it.
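One low-tech way to answer the question, since pipenv graph indents each requirement under the package that declares it: print a few lines of leading context around the matches so the parent packages show up too (a sketch, using plain grep):
pipenv graph | grep -B 5 botocore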
@hbielenia Can you recheck with pipenv==2022.8.19?
|
2025-04-01T04:35:14.821498
| 2018-12-28T14:27:33
|
394648487
|
{
"authors": [
"jaraco",
"pganssle"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10013",
"repo": "pypa/setuptools",
"url": "https://github.com/pypa/setuptools/issues/1612"
}
|
gharchive/issue
|
Test failure in develop on all platforms
The CI for this trivial fix is failing on all interpreters for Appveyor and Travis. Fails locally as well with this error:
_____ [doctest] setuptools.command.develop.VersionlessRequirement __
200
201 Adapt a pkg_resources.Distribution to simply return the project
202 name as the 'requirement' so that scripts will work across
203 multiple versions.
204
205 >>> dist = Distribution(project_name='foo', version='1.0')
UNEXPECTED EXCEPTION: NameError("name 'Distribution' is not defined")
Traceback (most recent call last):
File "~/.pyenv/versions/3.7.1/lib/python3.7/doctest.py", line 1329, in __run
compileflags, 1), test.globs)
File "<doctest setuptools.command.develop.VersionlessRequirement[0]>", line 1,
in <module>
NameError: name 'Distribution' is not defined
Seems like it's a doctest failing.
Looks like this commit is the problem: 0902f02d9d68f18e906e727cbafa4a05fe5c9c91, because Distribution is no longer in the namespace of the doctest.
Gah. Sorry about that. Thanks for chasing this down. I probably missed that usage because I was relying on the linter for finding the names.
@jaraco No harm no foul, though I think it would be best to go through the motions of a PR next time, even if you just do a self-merge. It's helpful for catching stuff like this.
Closed by #1613
I think it would be best to go through the motions of a PR next time.
The problem I have with this is it does increase the burden of making a contribution often to the point of making it infeasible in the time allotted. I usually have an implicit, manual check, where I'll notice if tests start failing after pushing a falsely-presumed error-free commit.
I agree with the concept though.
If I could create a command that could automate this operation, I'd be all in favor of it.
I've started looking into using hub. It looks like it can create a pull request but not set it for merge automatically. It's also not yet clear to me if it can correctly automatically detect the correct fork/branch for the source and target. But if it can, that almost streamlines the process.
I successfully created #1615 from the command-line with a simple hub pull-request invocation after pushing the branch. Hmm. Looks like GitHub doesn't support automatic merges, but there is a service called mergify, which I've enabled on this repo.
It's going to take some work to understand what the implications are of this - can mergify be readily installed on repos? Can it discern which pull requests should be automatically merged (and how)?
I've created a new rule that will automatically merge any PR with the auto-merge tag present. I still see two issues with this workflow. First, it would allow any user to use this feature (I think I can restrict it to a team). Second, it leaves the branch in the remote and local repos, and I don't see a way to mechanize the removal of either... so you'll start to get lots of stale, merged branches.
Huh. For some reason, it seems the setuptools-developers team isn't a team login. Only a team login is suitable for designating the allowed authors. So I've just set it to jaraco instead. ...until that issue can be sorted out. I've spent way too much time on this already.
The problem I have with this is it does increase the burden of making a contribution often to the point of making it infeasible in the time allotted. I usually have an implicit, manual check, where I'll notice if tests start failing after pushing a falsely-presumed error-free commit.
Generally I find that my contributions are not time-sensitive, so "make a PR" and "push to master" are roughly equivalent - both put the code on github and since it's very rare for me to not hit the "merge" button before a new release is cut, the effect is about the same. I personally prefer to let my PRs act as "drafts" that I can look at with fresh eyes before the merge if I have time and also that anyone can review.
That said, occasionally my changes are blocking stuff on master (like PR #1613, which was blocking the CI for everyone), so in that situation I can see how the PR workflow would just get in the way (though usually I still need a PR because the stuff blocking on master is almost always something about the CI itself that I need to fix).
In any case, everyone has their own way of working, and I don't think there's a single "right way". I think your solution of adding an auto-merge bot that just merges PRs that pass the CI is a great solution. It gives us the quality assurance of CI, the notification mechanism of a PR (I don't think people watching the repo get notifications of pushes to master - I certainly don't), and facilitates your contributions to the project, so it's a win all around! 🎉
|
2025-04-01T04:35:14.824497
| 2020-06-17T13:22:33
|
640438784
|
{
"authors": [
"BigRoy",
"antirotor"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10014",
"repo": "pypeclub/OpenPype",
"url": "https://github.com/pypeclub/OpenPype/issues/282"
}
|
gharchive/issue
|
unset audio file will cause crash during render publishing
Bug
If audioFile is not set in the context (should such a thing be in the context?), the test for its existence will fail, as os.path.isfile() expects a str and will get NoneType.
https://github.com/pypeclub/pype/blob/e75167445f3b1dff439bec3bba40e228d6c301d8/pype/plugins/global/publish/submit_publish_job.py#L842-L843
[cuID:PYPE-1124]
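A minimal sketch of the guard being suggested, not the actual OpenPype patch ("audioFile" is the context key named in this issue; the function name is made up):
import os

def get_valid_audio_file(context):
    audio_file = context.data.get("audioFile")      # None if the key was never set
    if audio_file and os.path.isfile(audio_file):   # isfile only ever sees a str now
        return audio_file
    return None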
Is this fixed with #284 ?
Or do we still feel this should move to instance.data["audioFile"] so that separate instances actually can have its own audio files?
This was fixed with the check for None but audio file is still in the context. I am closing it as it is no longer valid. But the question of audioFile in context vs instance is still present.
|
2025-04-01T04:35:14.840062
| 2022-10-31T11:54:10
|
1429722088
|
{
"authors": [
"LiborBatek",
"moonyuet"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10015",
"repo": "pypeclub/OpenPype",
"url": "https://github.com/pypeclub/OpenPype/pull/4047"
}
|
gharchive/pull-request
|
Alembic Loader as Arnold Standin
Brief description
Adding Alembic Loader as Arnold Standin function as the Openpype Plugin
Description
The user can load the published workfiles with abc format as an Arnold Standin if the published workfiles belong to the model or pointcache family.
Additional info
Doesn't have any for the current stage
Testing notes:
Start maya in Openpype
Click "Load" from Openpype in maya
Right-click the workfiles which belongs to either model or pointcache family
Click Import Alembic as Standin
It will load the abc file as an arnold standin
I've tested it in maya2023 with arnold <IP_ADDRESS>
Model imports as an ASS file normally. Small note though: Use Sequence comes in as "ON"; since a model is static in general, it should be "OFF" by nature.
Speaking of pointcache, at first everything seems fine and working, however I've got weird anim problems... I double checked it and manually created an ass file from the same workfile, and my version had no issues. Seems like some ASS creation params are set wrong. Needs revision and testing. Same frame, different animation:
One more thing to consider: why not allow the user to "Import Alembic as ASS file" even for animation caches, not just pointcaches in general (published animation alembics)? It could be pretty handy, and there also shouldn't be any reason not to allow it... so the user could import these families as ASS:
model (already present) pointcache (already present) animation (not present now)
One of the possible reasons for the weird anim problem could be related to the fps of the animation. I find out the standin import the abc animation with 24 fps while the standard project setting is 25fps. The abc fps is set up to 25 in the updated PR.
The updated PR also tells the systems not to load frame if the abc file doesn't contain any animation. And also I added the animation into families.
Now it works normally, did tests in maya2022 and maya2023.
I think the problem with animation mismatch was caused by difference in framerates (25fps vs 29.97) even present in published abc using OP tools. So it was not on your side!
Question is if special care should be taken. I mean taking note of fps from origin of the abc file (source animation maya scene). As I understand you read this from database and set it according to it right?
Also static models still have is sequence turned on in ASS properties. Should be addressed already right?
Thanks!
Now it works normally, did tests in maya2022 and maya2023.
I think the problem with animation mismatch was caused by difference in framerates (25fps vs 29.97) even present in published abc using OP tools. So it was not on your side!
Question is if special care should be taken. I mean taking note of fps from origin of the abc file (source animation maya scene). As I understand you read this from database and set it according to it right?
Also static models still have is sequence turned on in ASS properties. Should be addressed already right?
Thanks!
I think most of the issues I have discussed with you in discord. The abc file published from the model main wont have the sequence on as it doesn't contain any animation. So yes. the static models should be addressed already.
Made another go with testing...the static model like my table prop still gets the sequence option on. Also would be great if the loaded/imported ass files keep color coding in outliner (see img with outliner colors applied when assets loaded normally via asset load command using OP loader)
it means family animation being green colored in outliner and model dark orange color. Each family has its own.
Made another go with testing...the static model like my table prop still gets the sequence option on. Also would be great if the loaded/imported ass files keep color coding in outliner (see img with outliner colors applied when assets loaded normally via asset load command using OP loader) it means family animation being green colored in outliner and model dark orange color. Each family has its own.
Hi I have made the outlinecolor for the group_name and I have set up the condition for not loading the sequence in model main.
Have made another testing and all the features are already working fine!
There is one thing which prevents the approval...
I have tried to fiddle with switching versions of loaded asset...first I must say that its great that user have this option!!!
There is really minor issue that is Use File Sequence being unchecked afterwards even it was turned on before switching version of the asset.
Otherwise all works perfectly!
Have made another testing and all the features are already working fine!
There is one thing which prevents the approval... I have tried to fiddle with switching versions of loaded asset...first I must say that its great that user have this option!!!
There is really minor issue that is Use File Sequence being unchecked afterwards even it was turned on before switching version of the asset.
Otherwise all works perfectly!
Hi Libor. I think you can try with the latest code. it will update use file sequence according to which family subset it belongs to.
|
2025-04-01T04:35:14.858483
| 2016-02-05T22:21:01
|
131773623
|
{
"authors": [
"pyr"
],
"license": "isc",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10016",
"repo": "pyr/bundes",
"url": "https://github.com/pyr/bundes/pull/2"
}
|
gharchive/pull-request
|
main: switch to components for project structure.
This PR builds a tree of components instead of the previous
map based system. This enforces a clean setup and teardown
of the system. A reporter component is introduced to handle
error notifications and metrics. The top level component
becomes DB, a component which holds transient state for
units.
With this, the stored state of the world changes a little bit. Both units and tasks are now stored.
The code takes advantage of mesomatic's ability to use vectors of task IDs when several instances need to be started.
By knowing task IDs in advance, it is now possible to add back references to units from tasks. The rough shape of the state atom's map is now:
{:units {:job1 {:id :job1 :instances 2 :tasks [{:value :t1} {:value :t2}]}
         :job2 {:id :job2 :instances 1 :tasks [{:value :t3}]}}
 :tasks {{:value :t1} {:state :task-running :unit :job1}
         {:value :t2} {:state :task-finished :unit :job1}
         {:value :t3} {:state :task-running :unit :job2}}}
Additional changes worth mentioning:
Switch API routing and handling to bidi and net.
Rely on https://github.com/pyr/reporter for error reporting.
Remove the cluster framework for now, until base functionality works.
|
2025-04-01T04:35:14.859381
| 2019-04-19T11:33:47
|
435147440
|
{
"authors": [
"The-Q",
"pyr"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10017",
"repo": "pyr/kinsky",
"url": "https://github.com/pyr/kinsky/issues/46"
}
|
gharchive/issue
|
Seek operation in async client
Is there any reason why seek operation is not supported in async facade?
The async facade will be removed. It might be revived in a supporting library later on
|
2025-04-01T04:35:15.157241
| 2016-10-22T01:54:42
|
184601226
|
{
"authors": [
"coveralls",
"fgregg"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10018",
"repo": "pysal/pysal",
"url": "https://github.com/pysal/pysal/pull/874"
}
|
gharchive/pull-request
|
Use standard python facilites for warning
Hello! Please make sure to check all these boxes before submitting a Pull Request
(PR). Once you have checked the boxes, feel free to remove all text except the
justification in point 5.
[ ] You have run tests on this submission, either by using Travis Continuous Integration testing testing or running nosetests on your changes?
[ ] This pull request is directed to the pysal/dev branch.
[ ] This pull introduces new functionality covered by
docstrings and
unittests? Do you want tests for warnings?
[ ] You have assigned a
reviewer and added relevant labels I don't have permissions for this
[ ] The justification for this PR is: The Python standard library provides a rich set of facilities for nonfatal warnings, which allow downstream users to decide how they want to handle warnings as a distinct stream of messages.
Coverage increased (+0.0006%) to 83.073% when pulling d161b32337d81e77c495beed69c7cc2861e2e6a2 on fgregg:patch-2 into 535774692ae5e0fd6fbbd4f1c950d483a369033a on pysal:dev.
thanks @sjsrey
|
2025-04-01T04:35:15.164163
| 2019-04-30T03:34:58
|
438591136
|
{
"authors": [
"ajz34",
"chrinide",
"sunqm"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10019",
"repo": "pyscf/pyscf",
"url": "https://github.com/pyscf/pyscf/issues/317"
}
|
gharchive/issue
|
Pyscf (PyPI) 'hangs up' when doing density_fit x2c scf calculation
Hi there,
I use Pyscf to do density_fit x2c scf calculation with gold dimer, like below settings:
**This Pyscf version is 1.6.1_post1 in PyPI site Py3.6: **
https://pypi.org/project/pyscf/
mol.build(
verbose = 5,
atom = '''
79
79 1 2.5''',
basis = 'anorcc',
)
mf = scf.RHF(mol).x2c()
mf.density_fit().kernel()
And pyscf returns the output below and 'hangs up' without any further information; I find in the system that the python kernel 'hangs up' and is then almost a dead kernel:
******** <class 'pyscf.df.df_jk.density_fit.<locals>.DFHF'> ********
method = DFHF-SFX2C1E_SCF-RHF
initial guess = minao
damping factor = 0
level shift factor = 0
DIIS = <class 'pyscf.scf.diis.CDIIS'>
DIIS start cycle = 1
DIIS space = 8
SCF tol = 1e-09
SCF gradient tol = None
max. SCF cycles = 50
direct_scf = False
chkfile to save SCF result = /root/tmpxk465u56
max_memory 4000 MB (current use 98 MB)
******** <class 'pyscf.x2c.sfx2c1e.SpinFreeX2C'> ********
exp_drop = 0.2
approx = 1e
xuncontract = 1
Then I transfer to another pyscf build in Anaconda site: https://anaconda.org/pyscf/pyscf
Use the same code in Jupyter, thank god! I got the right answer very fast by this pyscf:
******** <class 'pyscf.df.df_jk.density_fit.<locals>.DFHF'> ********
method = DFHF-SFX2C1E_SCF-RHF
initial guess = minao
damping factor = 0
level shift factor = 0
DIIS = <class 'pyscf.scf.diis.CDIIS'>
DIIS start cycle = 1
DIIS space = 8
SCF tol = 1e-09
SCF gradient tol = None
max. SCF cycles = 50
direct_scf = False
chkfile to save SCF result = /root/tmpzldtrshi
max_memory 4000 MB (current use 98 MB)
******** <class 'pyscf.x2c.sfx2c1e.SpinFreeX2C'> ********
exp_drop = 0.2
approx = 1e
xuncontract = 1
Set gradient conv threshold to 3.16228e-05
cond(S) = 18320.3370746045
******** <class 'pyscf.df.df.DF'> ********
auxbasis = None
max_memory = 4000
_cderi_to_save = /root/tmpghvcyi_3
Even tempered Gaussians are generated as DF auxbasis for Au
ETB auxbasis for Au [[0, [70782134.779904, 1]], [0, [35391067.389952, 1]] .......
..................................................
..................................................
CPU time for vj and vk 27.27 sec, wall time 0.75 sec
E1 = -54789.579571940005 E_coul = 15444.611913493081
Extra cycle E= -38023.9296691062 delta_E= -7.28e-12 |g|= 2.04e-06 |ddm|= 5.34e-06
CPU time for scf_cycle 575.97 sec, wall time 36.96 sec
CPU time for SCF 642.16 sec, wall time 44.76 sec
converged SCF energy = -38023.9296691062
So, could there be some problem in the PyPI build?
The problem is caused by openblas' multi-threading in the pyscf-pypi release. Multiple threads were created in a nested loop, which leads to huge overhead of threading context switch. As a temporary solution for the current pypi code, you can set export OPENBLAS_NUM_THREADS=1 in bash to prevent openblas creating threads. I will handle this issue in next release.
Hi there,
I have tested the newest version of Pyscf 1.6.2 (PyPI), but the python kernel still 'hangs up' and is then almost dead.
Even when I use "export OPENBLAS_NUM_THREADS=1", the issue is still there.
Do you still see the issue? What's the numpy version in your system?
Hi!
Just a little complaint :joy:
I know that using lib.num_threads, or pre-defining MKL_NUM_THREADS (numpy via conda channel) or OPENBLAS_NUM_THREADS (numpy via pypi channel) before importing pyscf could explicitly set threading number.
But I used to be confused to run code like this:
import numpy as np
import pyscf
x = np.random.randn(5000, 5000)
res = x.dot(x)
If pyscf is not imported, it will run multi-threaded, as expected (~2 sec). But after importing pyscf, it may run serially (~5 sec), depending on bash environment variables.
If I want to reuse some intermediates generated by pyscf, or do some post-pyscf calculation which is efficiency-critical (while using numpy or other libraries), this could be a little confusing to me :joy:
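For reference, a small sketch of the two knobs mentioned in this thread; whether numpy's BLAS actually honours them depends on how numpy was built, so treat the numbers as illustrative:
import os
os.environ["OPENBLAS_NUM_THREADS"] = "4"   # must be set before numpy/pyscf are imported

import numpy as np
from pyscf import lib

lib.num_threads(4)                          # threads used by pyscf's own OpenMP kernels
x = np.random.randn(5000, 5000)
_ = x.dot(x)                                # timing this shows whether BLAS is threaded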
|
2025-04-01T04:35:15.169238
| 2020-10-29T12:59:24
|
732292684
|
{
"authors": [
"jamesETsmith",
"portega-usal"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10020",
"repo": "pyscf/pyscf",
"url": "https://github.com/pyscf/pyscf/issues/747"
}
|
gharchive/issue
|
Active-active orbital rotations
Greetings,
I read this paper about CASSCF with SHCI in PySCF (https://doi.org/10.1021/acs.jctc.7b00900) and I would like to know how the fixed active space (named with aHCISCF notation) is obtained. I need to keep a fixed active space along a reaction path, so I need to disable inactive-active rotations and keep only active-active rotations. I would perform full-CI SHCI after that.
Thanks in advance,
Pablo
Hi Pablo,
If we define mc as mc = shci.SHCISCF, you can set the mc.frozen attribute using a list of orbital indices that you want to freeze (i.e. not change). Then you can set mc.internal_rotation=True to perform the active-active rotations.
Be aware that optimizing active-active rotations tends to be numerically unstable and you should contain the number of MCSCF iterations (w/ mc.max_cycle_macro) to something small like 10-20 iterations.
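Putting those attributes together, a hedged sketch (assuming the SHCI extension is installed; the molecule, active-space size, and frozen orbital indices are placeholders, not a recommendation):
from pyscf import gto, scf
from pyscf.shciscf import shci

mol = gto.M(atom="N 0 0 0; N 0 0 1.1", basis="ccpvdz")   # placeholder system
mf = scf.RHF(mol).run()

mc = shci.SHCISCF(mf, 6, 6)        # placeholder (6e, 6o) active space
mc.frozen = list(range(3))         # hypothetical indices of orbitals to keep fixed
mc.internal_rotation = True        # allow active-active rotations
mc.max_cycle_macro = 15            # keep the macro-iteration count small
mc.kernel()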
Thanks for your answer, it seems to be working by now. Would it have any meaning if I create a macrocycle that optimizes first non-CAS orbitals, then CAS orbitals, and then check consistency with the previous step? I am using a (18e,23o) CAS in a molecule with ~60 electrons and ~150 orbitals.
Up to this date, I had been using the mc.max_stepsize attribute. The other convergence parameters from both CASSCF and FCISolver, I don't really understand them, so I prefer to leave them as default.
PS: I tried to reach you by email, maybe you're not in Boulder anymore (it's the colorado.edu address). There's some additional info there about the molecule system and a pseudocode file with my previous methodology, which I used before your answer.
Best regards,
Pablo
Apologies for that @portega-usal, I've changed jobs and although I still have that email, I check it less regularly than I used to. I'll send you an email and we can continue this discussion that way. If you're satisfied that your original question is answered, you can close the issue.
If you're still hoping for further clarification, feel free to reopen.
|
2025-04-01T04:35:15.212565
| 2020-08-05T03:17:01
|
673220838
|
{
"authors": [
"854350999",
"Bobspadger",
"GriceTurrble"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10021",
"repo": "python-amazon-mws/python-amazon-mws",
"url": "https://github.com/python-amazon-mws/python-amazon-mws/issues/199"
}
|
gharchive/issue
|
About Inboud_shipment
I called the list_inbound_shipments method in inbound_shipment.py. Of the two parameters, shipment_statuses and shipment_ids, the request only needs one of them, but this method requires both parameters to exist at the same time.
Should be added
shipment_statuses = shipment_statuses or []
shipment_ids = shipment_ids or []
Are you meaning that the arguments should be a list? as per the request parameters here =>
http://docs.developer.amazonservices.com/en_UK/fba_inbound/FBAInbound_ListInboundShipments.html
This is simple to fix, but can you confirm the call fails if shipment_statuses is None as the docs @ mws show that you only need either shipment_statuses or shipment_ids
In theory defaulting to an empty list if they are None should not cause any issues, but as I've not got any inbound shipments to amazon on our account to test with some extra info would be great.
I mean that if I only fill in one parameter when I call, a NoneType error will appear. If I fill in two parameters, it will return normally, and only one of the two parameters is required for the request. When I add these two lines it achieves the effect I expected.
Error from the code before I modified it:
fulfilment_api = mws.InboundShipments(account_id='xxxxx', access_key='Axxxxx',
secret_key='xxxxx', region='UK')
res = fulfilment_api.list_inbound_shipments(shipment_statuses='SHIPPED', last_updated_after='2020-08-04T00:00',
last_updated_before='2020-08-05T00:00')
Yes, I guess it should be a Python list instead of Python None
Yes, I guess it should be a list instead of None
This method uses enumerate_params to handle both arguments at the same time. Seems like a bug in the case where one param is provided and the other is None.
Will investigate when I have some time, later today or tomorrow. Certainly shouldn't be breaking down like that.
Finally got around to investigating (power outages from Isaias), and I'm not seeing any errors from that method from current develop branch code. Some updates have been made to enumerate_param on that version, so that might be a fix for your issue (the same fix may not have been applied to master branch and the version currently available on PyPI, however).
As mentioned before, enumerate_params is used to handle those arguments as a shorthand method. I've tested a few different scenarios and found the results are appropriate:
from mws.utils import enumerate_params
enumerate_params({
"A": None,
"B": ["Foo", "Bar"]
})
# result: {'B.1': 'Foo', 'B.2': 'Bar'}
"A" is excluded entirely, whether its value is None (the method's default) or an empty list.
enumerate_params({
"A": ["Foo", "Bar"],
"B": "OneEntry",
})
# result: {'A.1': 'Foo', 'A.2': 'Bar', 'B.1': 'OneEntry'}
The single entry for "B" is wrapped in a list before being processed, so it outputs as a "B.1" enumerated entry, as expected.
If you can, please check latest version from develop to see if that alleviates your issue. Thank you.
I tried the latest call, but it still had the same problem. I was wondering whether its type must be a list. I saw this method
shipment_statuses = shipment_statuses or []
shipment_ids = shipment_ids or []
was also used in your api_order call, so I took that method and modified it
https://github.com/python-amazon-mws/python-amazon-mws/blob/develop/mws/apis/orders.py#L49-L55
It reported an error
fulfilment_api = mws.InboundShipments(account_id='A20J3ITGSOMSJG', access_key='AKIAIGFXIDYSFVGBYMIQ',
secret_key='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')
res = fulfilment_api.list_inbound_shipments(shipment_statuses='SHIPPED', shipment_ids=None, last_updated_after='2020-08-05T00:00',
last_updated_before='2020-08-06T00:00')
TypeError: 'NoneType' object is not iterable
It executed successfully
fulfilment_api = mws.InboundShipments(account_id='xxxxxx', access_key='xxxxxx',
secret_key='xxxxx')
res = fulfilment_api.list_inbound_shipments(shipment_statuses='SHIPPED', shipment_ids=[], last_updated_after='2020-08-05T00:00',last_updated_before='2020-08-06T00:00')
It reported an error
fulfilment_api = mws.InboundShipments(account_id='xxxxxxx', access_key='xxxxxxx',
secret_key='xxxxxxx')
res = fulfilment_api.list_inbound_shipments(shipment_statuses='SHIPPED', shipment_ids=None, last_updated_after='2020-08-05T00:00',last_updated_before='2020-08-06T00:00')
TypeError: 'NoneType' object is not iterable
It executed successfully
fulfilment_api = mws.InboundShipments(account_id='xxxxxxx', access_key='xxxxxxx',
secret_key='xxxxxxx')
res = fulfilment_api.list_inbound_shipments(shipment_statuses='SHIPPED', shipment_ids=None, last_updated_after='2020-08-05T00:00',last_updated_before='2020-08-06T00:00')
Can you show the traceback for the error you received? I wrote a test locally for your inputs:
def test_list_inbound_shipments_mixed_ids_and_status(self):
response = self.api.list_inbound_shipments(
shipment_statuses="SHIPPED",
shipment_ids=None,
last_updated_after="2020-08-05T00:00",
last_updated_before="2020-08-06T00:00",
)
And I don't see an exception raised from this. If the error occurs deeper into the request logic, I personally cannot run that (don't have a seller account), but I would be able to debug from a full traceback (not just the "TypeError" line, please).
Alright, so I had some confusion myself in what I was testing.
On latest develop, this bug does exist, and persisted through at least one PR that was meant to correct it. enumerate_param is checking if not any(values) too early, when the code that wraps it in a list (if not a list, tuple, or set) should be running first.
I have a sizable update coming down the pipe soon which should include a fix for this issue.
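For readers following along, the ordering fix described above amounts to wrapping scalars (and dropping None) before the emptiness check; this is only an illustrative sketch, not the library's actual code:
def enumerate_param(param, values):
    if values is None:
        values = []
    if not isinstance(values, (list, tuple, set)):
        values = [values]              # wrap a single scalar first...
    if not values:
        return {}                      # ...then bail out on a genuinely empty collection
    return {"{}.{}".format(param, idx): val for idx, val in enumerate(values, start=1)}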
Thank you!
I am also very grateful for your code. By the way, when I request a report with report type _GET_DATE_RANGE_FINANCIAL_TRANSACTION_DATA_, it always returns the error Request for report type 1202 is not allowed at this time
That particular report type does not appear to be in the MWS report type
enumeration:
http://docs.developer.amazonservices.com/en_US/reports/Reports_ReportType.html
The reason appears to be that that report cannot be scheduled by the API:
you will need to use the SellerCentral interface to do so:
https://stackoverflow.com/a/27745884
On Mon, Aug 17, 2020, 3:36 AM 854350999<EMAIL_ADDRESS>wrote:
I am also very grateful for your code. By the way, when I request a report
with report type GET_DATE_RANGE_FINANCIAL_TRANSACTION_DATA, he always
returns the error Request for report type 1202 is not allowed at this time
—
You are receiving this because you were assigned.
Reply to this email directly, view it on GitHub
https://github.com/python-amazon-mws/python-amazon-mws/issues/199#issuecomment-674715822,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/ACMAYPLGHT3HCAZ4WZ2QYGLSBDMXVANCNFSM4PVA2I4A
.
Received, thanks for the answer, I am looking forward to the update of the enumerate_param method, please reply me after the update
#207 open, includes the fix for this issue.
Thanks for your reply. I have been so busy building a material data model recently (so I didn't see this news), and I'm still sorting out the goods in and out of the warehouse, as well as warehouse damage, etc. I have not seen that Amazon has any obvious integration for this; if you know of one, please let us know.
Finally found time to test the new development version. What is good is that this bug seems to have been fixed successfully. I like this new version.
|
2025-04-01T04:35:15.215718
| 2019-07-05T15:56:02
|
464698624
|
{
"authors": [
"coveralls",
"murrayrm"
],
"license": "bsd-3-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10022",
"repo": "python-control/python-control",
"url": "https://github.com/python-control/python-control/pull/324"
}
|
gharchive/pull-request
|
initial implementation of differential flatness module
This PR introduces a new module, control.flatsys, that can be used to create trajectories for differentially flat systems. This initial release is fairly minimal, supporting point-to-point trajectories for nonlinear systems as well as SISO linear systems. It builds on the control.iosys module, allowing differentially flat systems to be simulated, linearized, or composed with other input/output systems.
This code is in alpha form, but supports the operations required to implement the trajectory generation examples in FBS Ch 9 (output feedback). An example is included based on the kinematic car model used in FBS.
Documentation and (fairly minimal) unit tests are included.
Coverage increased (+0.3%) to 82.589% when pulling 5eac4580dde94fc55135d166ed44584440c1424c on murrayrm:flatsys_initial into b5aaf4a152f050c8293cd786569eb056d1b2d05e on python-control:master.
|
2025-04-01T04:35:15.221346
| 2020-10-09T12:11:53
|
718095696
|
{
"authors": [
"fieryash",
"pawangeek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10023",
"repo": "python-geeks/Automation-scripts",
"url": "https://github.com/python-geeks/Automation-scripts/issues/117"
}
|
gharchive/issue
|
Removing watermark from scanned documents
Script Title - Script to remove watermark from scanned documents
what will change - adding code
Instructions
Create a new folder for your script and file/folder name should be appropriate.
Create a README.md in your folder for program Instructions
add requirements.txt if needed
Please add/delete options that are not relevant.
[x] Adding New Code
Programming Language
[x] Python
Happy Coding,
Can you assign me this? Does it count towards hacktoberfest?
Yes, it will, and it's a good script too but can you brief about what kind of watermark is it?
Basically we use cv2 to remove the watermark from any sort of scanned documents
something like this.. we can remove the watermark which can be useful when an OCR or something is used on the document to extract text
basically removing watermark from such documents, useful if in further steps an OCR is used to read the text
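A minimal sketch of that idea (filenames and the threshold value are placeholders; light grey watermarks can often be pushed to white with a global threshold before running OCR):
import cv2

img = cv2.imread("scanned_page.jpg", cv2.IMREAD_GRAYSCALE)
_, cleaned = cv2.threshold(img, 150, 255, cv2.THRESH_BINARY)   # pixels brighter than 150 become white
cv2.imwrite("cleaned_page.jpg", cleaned)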
Okay, that's fine. so it's pdf/doc or any other format?
jpg format.. but i can extrapolate to pdf if needed
okay do it for jpg and extend it further using pdf2image for pdfs
It is done
|
2025-04-01T04:35:15.431719
| 2021-05-12T19:14:13
|
890406697
|
{
"authors": [
"Mariatta",
"ewdurbin",
"hugovk",
"pradyunsg",
"willingc"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10024",
"repo": "python/docs-community",
"url": "https://github.com/python/docs-community/issues/7"
}
|
gharchive/issue
|
Mobile friendly theme for wiki.python.org
Wiki.python.org is not mobile friendly.
It would be great to have a mobile theme for wiki.python.org. Though I personally don't have CSS / styling / art skills, so all I can do is provide moral support and perhaps merge the PR 😅
I wasn't able to locate the source code for the wiki, and I wasn't sure who "owns" or "hosts" the wiki, so I asked in The PSF's Discourse: https://discuss.python.org/t/wiki-python-org-is-not-mobil-friendly/8700
Some info:
Hosting of the wiki is configured in psf-salt: https://github.com/python/psf-salt/blob/8c29e89e500bd1ad48c112e0144d574e89d13e01/pillar/base/moin.sls#L2-L7
The wiki is powered by moinmoin
General documentation on how to develop moinmoin theme: https://moinmo.in/MoinDev/ThemeDevelopment
wiki.python.org is using europython moinmoin theme: http://moinmo.in/ThemeMarket/Europython.
Source code of the europython theme is currently on Mercurial: http://hg.sheep.art.pl/moin-europython/ (what's the license)
Some thoughts and questions:
We can fork the europython theme and improve it?
Or perhaps we should develop a theme for us and make it look closer to python-docs-theme, or python.org?
Or perhaps we should just use a pre-existing more modern moinmoin theme? Looks like the official theme itself is quite mobile friendly?
The configuration noted is an attempt to migrate moin to salt that never came to fruition, BUT... it is running under management of salt as of August.
As far as the current theme goes, we are indeed running off of the linked mercurial repository... with a couple of (inconsequential, primarily) local changes applied. Due to the history of maintenance for the service, this is currently part of the "data" for the wiki that is backed up, but could be brought into configuration management. But regardless, changes applied to the upstream could easily be applied to wiki.python.org.
I don't have any opinion on the best course of action, but am happy to help get whatever is chosen rolled out.
The most-modern theme I could find is https://moinmo.in/ThemeMarket/memodump, which is on GitHub as well: https://github.com/dossist/moinmoin-memodump
Last updated in Oct 2014, at the time of writing.
See https://github.com/malemburg/moin-europython-theme/pull/1 for a PR to the EuroPython theme to make it minimally mobile-friendly.
@Mariatta I'm going to go ahead and close this now that I see the wiki is able to be shrunk to a mobile width. If we want to explore further, let's discuss and take up with the PSF :sunny:
|
2025-04-01T04:35:15.440206
| 2022-11-10T15:50:37
|
1444144858
|
{
"authors": [
"AA-Turner",
"CAM-Gerlach",
"encukou",
"hugovk"
],
"license": "CC0-1.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10025",
"repo": "python/docs-community",
"url": "https://github.com/python/docs-community/pull/68"
}
|
gharchive/pull-request
|
Add notes from the 2022-11 meeting
I missed most of the meeting, so I'm mostly just copying the document here.
As for the build warning due to Sphinx using the deprecated imghdr module in Python 3.11, @AA-Turner I thought I remembered you folks already fixing this in 5.x, or am I confused?
Sphinx is failing on the CI because we're turning warnings into errors (-W error) and Sphinx is using imghdr, and triggering a deprecation warning:
Run python -bb -X dev -W error -m sphinx --color -n -E -a -W --keep-going docs build
Running Sphinx v5.3.0
Exception occurred:
File "/opt/hostedtoolcache/Python/3.11.0/x64/lib/python3.11/warnings.py", line 514, in _deprecated
warn(msg, DeprecationWarning, stacklevel=3)
DeprecationWarning: 'imghdr' is deprecated and slated for removal in Python 3.13
The full traceback has been saved in /tmp/sphinx-err-tfj_837y.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
With pytest, you can disable arbitrary warnings, but looks like Sphinx can only suppress certain ones - is that correct @AA-Turner?
Options to fix:
Wait for Sphinx to fix https://github.com/sphinx-doc/sphinx/issues/10440 and release
Downgrade from Python 3.11 to 3.10 to avoid the deprecation warning (until it's fixed)
Don't turn warnings into errors (until it's fixed)
This is unfortunate.
Would adding -W ignore::DeprecationWarning:imghdr after the current -W error call work?
A
Nope: https://github.com/hugovk/docs-community/commit/0e078fb6deb2704b34f71c6348298f647531a034
https://github.com/hugovk/docs-community/actions/runs/3445670511/jobs/5749653832
Yeah, as mentioned I thought that was already fixed upstream since IIRC we saw it on the PEPs once we started testing on 3.11 (but it didn't block, since we didn't run with python -W error or sphinx -W yet), but it seems not.
Nope: https://github.com/hugovk/docs-community/commit/0e078fb6deb2704b34f71c6348298f647531a034
Did you try just -W ignore::DeprecationWarning? It looks like the stacklevel might be wrong on the warning so it could be coming from warning rather than imghdr, and I've had issues with modules not matching when they should from the CLI.
If that doesn't work,
Downgrade from Python 3.11 to 3.10 to avoid the deprecation warning (until it's fixed)
is the least bad option, IMO, since it avoids silencing any valid warnings and there isn't a strong pressing need to ensure this runs on 3.11, since AFAIK unlike e.g. the PEPs it doesn't have its own custom theme and Sphinx extensions.
Did you try just -W ignore::DeprecationWarning? It looks like the stacklevel might be wrong on the warning so it could be coming from warning rather than imghdr, and I've had issues with modules not matching when they should from the CLI.
Bingo! Please see PR https://github.com/python/docs-community/pull/69.
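The actual change lives in PR #69 (not shown here); based on the exchange above, the working invocation plausibly takes this shape, with the later -W option taking precedence for DeprecationWarnings (an assumption, not a quote of the PR):
python -bb -X dev -W error -W ignore::DeprecationWarning -m sphinx --color -n -E -a -W --keep-going docs build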
|
2025-04-01T04:35:15.522413
| 2024-05-01T17:13:51
|
2273860064
|
{
"authors": [
"drisspg",
"jerryzh168"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10027",
"repo": "pytorch/ao",
"url": "https://github.com/pytorch/ao/pull/196"
}
|
gharchive/pull-request
|
Perform quantization in Chunks
Summary
Previously we were seeing a huge memory spike when attempting to NF4'ify the very large tensors. This was due to quantize-to-nearest creating very large intermediates and realizing them in gmem. This PR instead does this in chunks of default size 1024**2. This was seen to be a pretty reasonable tradeoff, keeping speed while drastically reducing memory usage.
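As a rough illustration of the approach (this is a sketch, not the PR's actual code: quantize_fn is a placeholder and is assumed to map each input element to one output element):
import torch

def quantize_in_chunks(tensor, quantize_fn, chunk_size=1024**2):
    # process the flattened tensor one slice at a time so the quantization
    # intermediates never materialise for the whole tensor at once
    flat = tensor.reshape(-1)
    out_chunks = [quantize_fn(chunk) for chunk in flat.split(chunk_size)]
    return torch.cat(out_chunks).reshape(tensor.shape)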
do we want to incorporate this into our quantization primitive in the future? https://github.com/pytorch/ao/issues/160
@jerryzh168 like include the ability to apply quantization in chunks? I am not sure if this problem is manifest in all quantization techniques. I think this may have been a quirk of the broadcasting stuff I was doing. But if it does come again I think it would be good to standardize on. This change is a "free lunch" for qlora since we only quantize once. For dynamic quant I think that the speed up might not be worth it
@jerryzh168 like include the ability to apply quantization in chunks? I am not sure if this problem is manifest in all quantization techniques. I think this may have been a quirk of the broadcasting stuff I was doing. But if it does come again I think it would be good to standardize on. This change is a "free lunch" for qlora since we only quantize once. For dynamic quant I think that the speed up might not be worth it
our general op is quantizing things with certain block_size, so I'm wondering if this is what you are doing here as well, if so I think it might make sense to merge
|
2025-04-01T04:35:15.524472
| 2021-03-05T04:05:08
|
822723136
|
{
"authors": [
"ankitdobhal",
"mthrok"
],
"license": "BSD-2-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10028",
"repo": "pytorch/audio",
"url": "https://github.com/pytorch/audio/issues/1358"
}
|
gharchive/issue
|
Potential Bug Risks and Anti-Patterns
Description
Hi 👋 I ran the DeepSource static analyzer on the forked copy of this repo and found some interesting code quality issues which are available here.
The static code analysis tool found potential bugs and anti-patterns in the code that can become detrimental to the project later on. DeepSource helps you automatically find and fix issues in your code during code reviews. This tool looks for anti-patterns, bug risks, and performance problems, and raises issues.
This PR #1351 contains fixes for some of the notable issues:
Fixed Object Inheritance
Consider merging these comparisons with 'in'
Remove unnecessary use of comprehension
There are plenty of other issues in relation to Bug Discovery and Anti-Patterns which you would be interested to take a look at.
If you would like to integrate DeepSource to autofix some of the common occurring issues, I can help you set that up :)
@ankitdobhal Please open it in pytorch/pytorch. not in pytorch/audio.
|
2025-04-01T04:35:15.531588
| 2022-11-01T14:14:39
|
1431503885
|
{
"authors": [
"ejguan",
"srmsoumya"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10029",
"repo": "pytorch/data",
"url": "https://github.com/pytorch/data/issues/869"
}
|
gharchive/issue
|
datapipes.to_graph() throws AttributeError: 'tuple' object has no attribute 'items'
🐛 Describe the bug
I upgraded to torch v1.13.0 & torchdata v0.5.0 over the weekend and am getting AttributeError for datapipes.to_graph() call after that - it was working completely fine before the upgrade.
Sample code to replicate the issue
from torchdata.datapipes.iter import IterableWrapper
from torchdata.datapipes.utils import to_graph
items = IterableWrapper(range(10))
items.map(lambda x: x * 2)
print(list(items))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] prints the right result
to_graph(items)
# AttributeError: 'tuple' object has no attribute 'items'
Complete stack trace:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In [24], line 9
5 items.map(lambda x: x * 2)
7 print(list(items))
----> 9 to_graph(items)
File ~/.local/share/hatch/env/virtual/ml-pipeline-HWwNZ1N1/nb/lib/python3.9/site-packages/torchdata/datapipes/utils/_visualization.py:175, in to_graph(dp, debug)
164 node_attr = dict(
165 style="filled",
166 shape="box",
(...)
171 fontname="monospace",
172 )
173 graph = graphviz.Digraph(node_attr=node_attr, graph_attr=dict(size="12,12"))
--> 175 for node in to_nodes(dp, debug=debug):
176 fillcolor: Optional[str]
177 if not node.parents:
File ~/.local/share/hatch/env/virtual/ml-pipeline-HWwNZ1N1/nb/lib/python3.9/site-packages/torchdata/datapipes/utils/_visualization.py:120, in to_nodes(dp, debug)
116 actual_child.add_parent(fixed_parent_node)
118 return nodes
--> 120 return aggregate(recurse(traverse_dps(dp)))
File ~/.local/share/hatch/env/virtual/ml-pipeline-HWwNZ1N1/nb/lib/python3.9/site-packages/torchdata/datapipes/utils/_visualization.py:69, in to_nodes.<locals>.aggregate(nodes)
67 def aggregate(nodes):
68 groups = defaultdict(list)
---> 69 for node in nodes:
70 groups[node].append(node)
72 nodes = set()
File ~/.local/share/hatch/env/virtual/ml-pipeline-HWwNZ1N1/nb/lib/python3.9/site-packages/torchdata/datapipes/utils/_visualization.py:65, in to_nodes.<locals>.recurse(dp_graph, child)
63 node.add_child(child)
64 yield node
---> 65 yield from recurse(dp_parents, child=node)
File ~/.local/share/hatch/env/virtual/ml-pipeline-HWwNZ1N1/nb/lib/python3.9/site-packages/torchdata/datapipes/utils/_visualization.py:60, in to_nodes.<locals>.recurse(dp_graph, child)
59 def recurse(dp_graph, child=None):
---> 60 for dp_node, dp_parents in dp_graph.items():
61 node = Node(dp_node)
62 if child is not None:
AttributeError: 'tuple' object has no attribute 'items'
Versions
Collecting environment information...
PyTorch version: 1.13.0+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.22.4
Libc version: glibc-2.27
Python version: 3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-1075-aws-x86_64-with-glibc2.27
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.13.0
[pip3] torchdata==0.5.0
[pip3] torchmetrics==0.10.1
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.21.5 py39he7a7128_1
[conda] numpy-base 1.21.5 py39hf524024_1
[conda] numpydoc 1.2 pyhd3eb1b0_0
Thanks for reporting this issue. I think the problem is that traverse has changed its output type, which leads to the error at
https://github.com/pytorch/data/blob/db5ec7ace3bbe5d29b65321203bafc83dc4dd2df/torchdata/datapipes/utils/_visualization.py#L60
And, could we add tests to validate visualization functions as well?
cc: @NivekT
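For context, a hedged sketch of how recurse in _visualization.py might be adapted to the new traverse_dps output (assuming, per the traceback, the graph now maps ids to (datapipe, parents) tuples rather than datapipes to their parents):
def recurse(dp_graph, child=None):
    # dp_graph: {id(datapipe): (datapipe, parents_graph)} in torchdata 0.5.0 (assumed)
    for _dp_id, (dp_node, dp_parents) in dp_graph.items():
        node = Node(dp_node)  # Node as defined in _visualization.py
        if child is not None:
            node.add_child(child)
        yield node
        yield from recurse(dp_parents, child=node)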
|
2025-04-01T04:35:15.533899
| 2024-11-22T02:00:13
|
2681520599
|
{
"authors": [
"andrewkho",
"divyanshk",
"ramanishsingh"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10030",
"repo": "pytorch/data",
"url": "https://github.com/pytorch/data/pull/1371"
}
|
gharchive/pull-request
|
Add examples for HF datasets
Adding examples for HF datasets with nodes.
Fixes #1352
@ramanishsingh Can you add pointers to what all these three notebooks aim to cover in the PR top comment?
Not sure how to comment on notebooks so will add comments here:
do you need a 4-tuple here, or can it just be a 2-tuple?
nit: can you rename multiple_datasets to multidatasets?
|
2025-04-01T04:35:15.534742
| 2022-07-22T00:20:38
|
1313969223
|
{
"authors": [
"ejguan",
"vancexu"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10031",
"repo": "pytorch/data",
"url": "https://github.com/pytorch/data/pull/671"
}
|
gharchive/pull-request
|
Enable formatting with BLACK
Differential Revision: D38062352
Closing this PR as the corresponding Diff has been abandoned. A workaround has been accomplished internally
|
2025-04-01T04:35:15.550203
| 2020-08-18T10:14:08
|
680879954
|
{
"authors": [
"Skylixia",
"myleott"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10032",
"repo": "pytorch/fairseq",
"url": "https://github.com/pytorch/fairseq/issues/2494"
}
|
gharchive/issue
|
mask-whole-words training tokenize wordpiece
Hello,
I trained a RoBERTa model with --mask-whole-words. However, the tokenizer seems to do word-piece tokenization and not whole words.
I loaded the tokenizer and model with the transformers library to fill masks, but I want to do this with whole-word tokenization.
Any idea why I'm getting word pieces?
Thank you
Correct, when training with --mask-whole-words it still uses word piece tokenization, but will mask out multiple word pieces when necessary to mask a whole word.
For example, suppose your sentence is "Testing masking whole words"
which gets split into word pieces ['Testing', ' mask', 'ing', ' whole', ' words'].
When you train with --mask-whole-words it will only replace "mask" and "ing" together,
so you'll get ['Testing', '<mask>', '<mask>', ' whole', ' words']
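A hedged, illustrative sketch of that grouping idea (not fairseq's actual implementation; the 15% masking probability is an assumption):
import random
pieces = ['Testing', ' mask', 'ing', ' whole', ' words']
# Group word pieces into whole words: a piece starting with a space begins a new word.
words, current = [], []
for piece in pieces:
    if piece.startswith(' ') and current:
        words.append(current)
        current = []
    current.append(piece)
if current:
    words.append(current)
# Mask whole words: every piece of a chosen word is replaced together.
masked = []
for word in words:
    if random.random() < 0.15:
        masked.extend(['<mask>'] * len(word))
    else:
        masked.extend(word)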
|
2025-04-01T04:35:15.551460
| 2021-04-29T10:09:11
|
870845231
|
{
"authors": [
"Shyrm",
"alexeib"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10033",
"repo": "pytorch/fairseq",
"url": "https://github.com/pytorch/fairseq/issues/3520"
}
|
gharchive/issue
|
Wav2Vec 2.0 checkpoints
What is the difference between the current Wav2Vec 2.0 checkpoint, "wav2vec_vox_new", and the previous one, "wav2vec_vox"? Thanks in advance for any clarifications.
The new one is trained for 1M updates and uses a different normalization order in the transformer to avoid instability (and this allows using a higher LR).
|
2025-04-01T04:35:15.552652
| 2019-07-30T12:30:26
|
474556696
|
{
"authors": [
"myleott",
"rush86999"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10034",
"repo": "pytorch/fairseq",
"url": "https://github.com/pytorch/fairseq/issues/930"
}
|
gharchive/issue
|
inverse byte pair encoding?
I'm trying to use an attention-only (transformer) encoder-decoder model with the same vocabulary, but I can't find the inverse of the byte-pair encoding for RoBERTa. Is there a function for this? If not, can I implement my own function?
Added here: https://github.com/pytorch/fairseq/pull/931
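For reference, a hedged sketch of round-tripping the RoBERTa BPE via the fairseq hub interface (assuming the decode() helper from the linked PR is available):
import torch
roberta = torch.hub.load('pytorch/fairseq', 'roberta.base')
roberta.eval()
tokens = roberta.encode('Hello world!')  # BPE-encode to a tensor of token ids
text = roberta.decode(tokens)            # inverse of the byte-pair encoding
assert text == 'Hello world!'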
|
2025-04-01T04:35:15.841147
| 2023-08-26T19:10:19
|
1868242418
|
{
"authors": [
"EkaterinaAbramova",
"vmoens"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10035",
"repo": "pytorch/rl",
"url": "https://github.com/pytorch/rl/pull/1472"
}
|
gharchive/pull-request
|
[Feature] Device transform
Description
Adds a DeviceCastTransform transform to move environment data from one device to another.
As part of this PR, transforms now can transform the device of the parent env through transform.transform_env_device.
does this make sense to use with a parent environment? when is this transform preferred to env.to(device)?
This is to address https://github.com/pytorch/rl/issues/1198 where the issue is that if the env naturally sits on MPS we can't use float64. So first you must transform the data into float32 and then cast to device. Doing env.to(device) will not work but this transform will.
does this make sense to use with a parent environment? when is this transform preferred to env.to(device)?
This is to address #1198 where the issue is that if the env naturally sits on MPS we can't use float64. So first you must transform the data into float32 and then cast to device. Doing env.to(device) will not work but this transform will.
Could you please explain exactly how to do this? I am confused. I am following this tutorial https://pytorch.org/tutorials/intermediate/reinforcement_ppo.html and get that error about MPS float64 on the line: base_env = GymEnv("InvertedDoublePendulum-v4", device=device, frame_skip=frame_skip) What code exactly shall I write to correct this error please?
Can I ask what the value of device is in your case?
device="mps".
I believe I solved it with this (after quite a few hours of trying different things!!!):
base_env = GymEnv("InvertedDoublePendulum-v4", device="cpu", frame_skip=frame_skip)
env = TransformedEnv(
base_env,
Compose(
ObservationNorm(in_keys=["observation"]), # normalise observations (make it about Standard Normal)
DoubleToFloat(),
StepCounter(), # count the number of steps before the environment is terminated
DeviceCastTransform(device=device, orig_device="cpu"),
),
)
print(env.device) # gives mps now
Could you please kindly confirm whether what I've done is correct? I am on an Apple M2 Max trying to use MPS.
Everything was progressing smoothly through the tutorial: https://pytorch.org/tutorials/intermediate/reinforcement_ppo.html however at this code I get an error about MPS again. Could you please advise on the syntax to solve this issue? I double-checked and everything seems to be on mps, so I don't understand where the error is coming from.
collector = SyncDataCollector(
env,
policy_module,
frames_per_batch=frames_per_batch,
total_frames=total_frames,
split_trajs=False,
device=device,
)
File ~/anaconda3/envs/gpu-torch-rl/lib/python3.10/site-packages/torch/nn/modules/module.py:1143 in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
env.device
Out[23]: device(type='mps')
@EkaterinaAbramova sorry you had this terrible experience, we should document things better for MPS.
It's something we're actively looking at and any feedback is very welcome.
The way you implemented your env looks great to me!
Regarding the error I will have a look into it, shouldn't be too difficult to solve. Have you tried moving the ObservationNorm and StepCounter after the device casting transform?
like:
env = TransformedEnv(
base_env,
Compose(
DoubleToFloat(),
DeviceCastTransform(device=device, orig_device="cpu"),
ObservationNorm(in_keys=["observation"]), # normalise observations (make it about Standard Normal)
StepCounter(), # count the number of steps before the environment is terminated
),
)
Like this the buffers in the ObservationNorm transform will sit on mps but with float32 and not float64.
@vmoens thank you for swiftly helping me, this issue is quite urgent, so very glad that you had the suggestion. It makes sense and I tried it, however this way around I get this error AttributeError: 'DoubleToFloat' object has no attribute 'init_stats' (it seems I need to pass some arguments, what shall I pass to be able to follow the tutorial please https://pytorch.org/tutorials/intermediate/reinforcement_ppo.html):
You need to call init_stats on the obs norm transform
env.transform[-2].init_stats(...)
Because the transform has changed place
OK I get it. So now that I've indexed the correct location, the MPS issue is back.
The code you suggested doesn't work for me:
env = TransformedEnv(
base_env,
Compose(
DoubleToFloat(),
DeviceCastTransform(device=device, orig_device="cpu"),
ObservationNorm(in_keys=["observation"]), # normalise observations (make it about Standard Normal)
StepCounter(), # count the number of steps before the environment is terminated
),
)
print(env.device)
env.transform[-2].init_stats(num_iter=1000, reduce_dim=0, cat_dim=0)
I get the error: Cannot convert a MPS Tensor
TO RECAP: The original code was:
env = TransformedEnv(
base_env,
Compose(
ObservationNorm(in_keys=["observation"]), # normalise observations (make it about Standard Normal)
DoubleToFloat(),
StepCounter(), # count the number of steps before the environment is terminated
DeviceCastTransform(device=device, orig_device="cpu"),
),
)
env.transform[0].init_stats(num_iter=1000, reduce_dim=0, cat_dim=0)
This code runs, however I have issues further below in the tutorial where I can't run:
collector = SyncDataCollector(
env,
policy_module,
frames_per_batch=frames_per_batch,
total_frames=total_frames,
split_trajs=False,
device=device,
)
Cannot convert a MPS Tensor
Could you please provide a solution? I am really in a rush now; I've been trying to solve this issue for days. The tutorial I am working on is online, so if it helps maybe you could try the suggestions to make sure they resolve the issue? Thank you very much for your help, I really need to move past this ASAP please.
Ok so that's an interesting bug, which basically boils down to some internal machinery within rollout, resets and transforms.
To quickly unblock you: can you compute the stats manually with your env?
simple_env = TransformedEnv(
base_env,
Compose(
DoubleToFloat(),
DeviceCastTransform(device=device, orig_device="cpu"),
)
)
td0 = simple_env.rollout(100)
loc = td0["observation"].mean(dim=0)
scale = td0["observation"].std(dim=0)
env = TransformedEnv(
base_env,
Compose(
DoubleToFloat(),
DeviceCastTransform(device=device, orig_device="cpu"),
ObservationNorm(in_keys=["observation"], loc=loc, scale=scale),
StepCounter(),
),
)
Hopefully that should help!
I should be able to put my hands on an apple silicon computer tomorrow morning if you're still stuck!
The suggestion didn't work, I'm afraid. Is there anything else you could propose at this stage, or only tomorrow?
So why, in the version I provided above, is everything fine until I get down to the SyncDataCollector? Why is it failing there? What has not yet been converted to float32?
#1589 will solve your problem!
@EkaterinaAbramova you should be good to go now!
I can reproduce your issue, let me push a fix!
|
2025-04-01T04:35:15.892593
| 2020-07-17T04:57:15
|
658860928
|
{
"authors": [
"Huangxt57",
"fmassa"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10036",
"repo": "pytorch/vision",
"url": "https://github.com/pytorch/vision/issues/2481"
}
|
gharchive/issue
|
How to use torchvision roi_align?
I'm confused about the input parameter boxes and output of torchvision.ops.roi_align. Now I have an input image and one bbox coordinate [x1, y1, x2, y2]. Does roi_align directly return the region determined by the coordinate?
For example, here is my test code:
import torch
from torchvision.ops import roi_align
a = torch.Tensor([[i * 6 + j for j in range(6)] for i in range(6)])
print(a)
a = a.unsqueeze(dim=0)
boxes = [torch.Tensor([[0, 2, 2, 4]])]
a = a.unsqueeze(dim=0)
aligned_rois = roi_align(input=a, boxes=boxes, output_size=2)
print(aligned_rois.shape)
print("aligned_rois:", aligned_rois)
And the result is:
tensor([[ 0., 1., 2., 3., 4., 5.],
[ 6., 7., 8., 9., 10., 11.],
[12., 13., 14., 15., 16., 17.],
[18., 19., 20., 21., 22., 23.],
[24., 25., 26., 27., 28., 29.],
[30., 31., 32., 33., 34., 35.]])
torch.Size([1, 1, 2, 2])
aligned_rois: tensor([[[[15.5000, 16.5000],
[21.5000, 22.5000]]]])
What I want to know is why the returned region is [15, 16; 21, 22]?
Thanks for answering!
Could you give an example to explain?
Hi,
The x2, y2 coordinates are inclusive, not exclusive, so the box size that you are considering is of size 3x3, and not 2x2.
So for the first element (which returns 15.5) you are considering elements (12 + 13 + 18 + 19) / 4 (which is equal to 15.5).
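For illustration, a quick hedged check of that arithmetic using the example tensor above:
# The inclusive box [0, 2, 2, 4] covers a 3x3 region of the 6x6 input;
# the top-left output bin averages the sampled values 12, 13, 18 and 19.
top_left = (12 + 13 + 18 + 19) / 4
assert top_left == 15.5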
I believe I've answered your question, and as such I'm closing this issue, but let us know if you have further questions
Thanks for answering, I have understood after your explanation. Best wishes.
|
2025-04-01T04:35:15.913034
| 2014-11-18T15:25:18
|
49244944
|
{
"authors": [
"JervenBolleman",
"cbuil"
],
"license": "bsd-2-clause",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10037",
"repo": "pyvandenbussche/sparqles",
"url": "https://github.com/pyvandenbussche/sparqles/issues/28"
}
|
gharchive/issue
|
sel[avg] could be friendlier to endpoints
SELECT (AVG(?o) AS ?avg)
WHERE { ?s rdf:type ?o }
LIMIT 100
Using submitted data instead of data in the endpoint guarantees the queries are equivalent for all endpoints. e.g.
SELECT (AVG(?no) as ?avg)
WHERE
{
VALUES (?no) {(1) (2) (2) (4) (4) (4) (4)}
}
Same for sel[avg] * order by
SELECT (AVG(?no) as ?avg)
WHERE
{
VALUES (?no) {(1) (2) (2) (4) (4) (4) (4)}
} GROUP BY ?no
Unfortunately, the empty WHERE clause returns errors in DBpedia, since Virtuoso complains about a variable in the SELECT which is not in the WHERE clause. Agree with Aidan. We create simpler queries with subjects that do not exist in the endpoints.
|
2025-04-01T04:35:15.918224
| 2019-05-29T11:49:01
|
449760494
|
{
"authors": [
"akaszynski",
"supersubscript"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0004.json.gz:10038",
"repo": "pyvista/pyacvd",
"url": "https://github.com/pyvista/pyacvd/issues/5"
}
|
gharchive/issue
|
Clustering fails due to vtkSubdivisionFilter
I have a mesh that I'm trying to resample with the ACVD algorithm. While the mesh is manifold, the subdivision filter employed by the ACVD algorithm appears to return a non-manifold mesh, which makes the algorithm crash mid-way.
In particular:
import pyvista as pv
from PyACVD import Clustering
mesh = pv.PolyData('shoot_manifold_acvd_crash.ply')
non_manifold_edges = mesh.extract_edges(feature_edges=False, boundary_edges=False,
manifold_edges=False, non_manifold_edges=True)
print(non_manifold_edges.n_cells, non_manifold_edges.n_points)
0 0
The step that fails is the generation of the clusters:
cobj = Clustering.Cluster(mesh)
cobj.GenClusters(mesh.n_points)
Subdividing mesh with 3 subdivision(s)
ERROR:root:Dataset is non-manifold and cannot be subdivided. Edge shared by 4 cells
ERROR:root:Subdivision failed.
ERROR:root:vtkInformation (0x55df794fb890)
Traceback (most recent call last):
File "<ipython-input-2-be1bf5dcec33>", line 2, in <module>
cobj.GenClusters(mesh.n_points)
File "/home/henrik/.local/anaconda2/envs/surface/lib/python3.7/site-packages/PyACVD/Clustering.py", line 184, in GenClusters
self.PrepareMesh(nclus, subratio, verbose)
File "/home/henrik/.local/anaconda2/envs/surface/lib/python3.7/site-packages/PyACVD/Clustering.py", line 139, in PrepareMesh
v = VN.vtk_to_numpy(self.mesh.GetPoints().GetData()).astype(np.float)
AttributeError: 'NoneType' object has no attribute 'GetData'
Digging deeper shows that this is due to the subdivision that is happening inside the mesh preparation step:
mesh.subdivide(3, 'loop')
ERROR:root:Dataset is non-manifold and cannot be subdivided. Edge shared by 4 cells
ERROR:root:Subdivision failed.
ERROR:root:vtkInformation (0x55df796458b0)
PolyData (0x7ff0b08cab28)
N Cells: 0
N Points: 0
X Bounds: 1.000e+00, -1.000e+00
Y Bounds: 1.000e+00, -1.000e+00
Z Bounds: 1.000e+00, -1.000e+00
N Scalars: 0
mesh.subdivide(2, 'loop')
mesh.subdivide(2, 'loop')
ERROR:root:Dataset is non-manifold and cannot be subdivided. Edge shared by 4 cells
ERROR:root:Subdivision failed.
ERROR:root:vtkInformation (0x55df79436670)
PolyData (0x7ff0b08caa68)
N Cells: 0
N Points: 0
X Bounds: 1.000e+00, -1.000e+00
Y Bounds: 1.000e+00, -1.000e+00
Z Bounds: 1.000e+00, -1.000e+00
N Scalars: 0
Notably, the issue doesn't arise when only a single subdivision is performed, but the output of that is non-manifold.
mesh = mesh.subdivide(1, 'loop')
non_manifold_edges = mesh.extract_edges(feature_edges=False, boundary_edges=False,
manifold_edges=False, non_manifold_edges=True)
print(non_manifold_edges.n_cells, non_manifold_edges.n_points)
18 18
Thanks for pointing this out. I'm in the process of refactoring this module and I'll have an update soon.
There's some sort of bug with the subdivision filter, and I've found that I've had better luck just creating my own method. The following code snippet should work for you with the latest version of pyacvd
import pyvista as pv
from pyacvd import Clustering
pv.set_plot_theme('document')
mesh = pv.read('shoot_manifold_acvd_crash.ply')
clus = Clustering(mesh)
clus.subdivide(3) # 2 also works
clus.cluster(10000)
remesh = clus.create_mesh()
remesh.plot(color='w', show_edges=True, smooth_shading=True)
|