Columns:
  added    — string (date), 2025-04-01 04:05:38 to 2025-04-01 07:14:06
  created  — timestamp[us] (date), 2001-10-09 16:19:16 to 2025-01-01 03:51:31
  id       — string, lengths 4 to 10
  metadata — dict
  source   — string, 2 classes
  text     — string, lengths 0 to 1.61M
2025-04-01T04:54:45.531887
2022-09-22T10:28:15
1382215290
{ "authors": [ "amchandn", "jianyexi", "mentat9" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13244", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/20823" }
gharchive/pull-request
[Hub Generated] Review request for Microsoft.DataProtection to add version preview/2022-09-01-preview This is a PR generated at OpenAPI Hub. You can view your work branch via this link. ARM API Information (Control Plane) Azure 1st Party Service can try out the Shift Left experience to initiate API design review from ADO code repo. If you are interested, you may request engineering support by filling in the form https://aka.ms/ShiftLeftSupportForm. Changelog Add a changelog entry for this PR by answering the following questions: What's the purpose of the update? [ ] new service onboarding [x] new API version [ ] update existing version for new feature [ ] update existing version to fix swagger quality issue in s360 [ ] Other, please clarify When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month. When do you expect to publish the swagger? Please provide the date or, if the date is not yet available, the month. By default, Azure SDKs of all languages (.NET/Python/Java/JavaScript for both management-plane SDK and data-plane SDK, Go for management-plane SDK only) MUST be refreshed with/after the swagger of the new version is published. If you prefer NOT to refresh any specific SDK language upon swagger updates in the current PR, please leave details with justification here. Contribution checklist (MS Employees Only): [x] I commit to follow the Breaking Change Policy of "no breaking changes" [x] I have reviewed the documentation for the workflow. [x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix? If you have any further questions about AME onboarding or validation tools, please view the FAQ. ARM API Review Checklist Applicability: :warning: If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review. 
Change to data plane APIs Adding new properties All removals Otherwise your PR may be subject to ARM review requirements. Complete the following: [x] Check this box if any of the following apply to the PR so that the labels "ARMReview" and "WaitForARMFeedback" will be added by the bot to kick off ARM API Review. Failing to check this box in the following scenarios may result in delays to the ARM manifest review and deployment. Adding a new service Adding new API(s) Adding a new API version [ ] To review changes efficiently, ensure you copy the existing version into the new directory structure for the first commit and then push new changes, including version updates, in separate commits. You can use OpenAPIHub to initialize the PR for adding a new version. For more details refer to the wiki. [x] Ensure you've reviewed the following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from the ARM API Review board. [x] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Breaking Change Review Checklist If you have any breaking changes as defined in the Breaking Change Policy, request approval from the Breaking Change Review Board. Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Additional details on the process and office hours are on the Breaking Change Wiki. NOTE: To update API(s) in public preview for over 1 year (refer to Retirement of Previews) Please follow the link to find more details on the PR review process. 
Most of the changes in this API have already been reviewed and approved by ARM team in a PR raised in swagger's private repo - https://github.com/Azure/azure-rest-api-specs-pr/pull/7480 /azp run unifiedPipeline "state": { What's the difference between On and AlwaysOn? If they are different, please add descriptions for the enum values to clarify (x-ms-enum allows per-value descriptions). Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:6661 in 3d96f0c. [](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False) "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/backupVaults/{vaultName}/deletedBackupInstances/{backupInstanceName}/undelete": { ARM soft-delete pattern uses restore for this action. Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:2560 in 3d96f0c. [](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False) "state": { What's the difference between On and AlwaysOn? If they are different, please add descriptions for the enum values to clarify (x-ms-enum allows per-value descriptions). Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:6661 in 3d96f0c. [](commit_id = 3d96f0c, deletion_comment = False) Added description for enum values. Thanks for the suggestion /azp run unifiedPipeline /azp run unifiedPipeline "state": { Nicely done. In reply to:<PHONE_NUMBER> Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:6661 in 3d96f0c. 
[](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False) "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataProtection/backupVaults/{vaultName}/deletedBackupInstances/{backupInstanceName}/undelete": { Makes sense. In reply to:<PHONE_NUMBER> Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:2560 in 3d96f0c. [](commit_id = 3d96f0cc1dd10bc8960dec57b23c1e7ff75063bf, deletion_comment = False) "state": { Recommend adding per-value descriptions for this property as well (not blocking ARM signoff). Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:5505 in b503351. [](commit_id = b503351dcd921a9006e589736180b86a71603623, deletion_comment = False) "resourceGuardOperationRequests": { Description would be helpful. Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:7157 in b503351. [](commit_id = b503351dcd921a9006e589736180b86a71603623, deletion_comment = False) @amchandn - Signed off for ARM with comments. "state": { Recommend adding per-value descriptions for this property as well (not blocking ARM signoff). Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:5505 in b503351. [](commit_id = b503351, deletion_comment = False) thanks for the suggestion. We'll plan and take up this exercise to add enum descriptions for all enum values in our swagger in upcoming versions. "resourceGuardOperationRequests": { Description would be helpful. Refers to: specification/dataprotection/resource-manager/Microsoft.DataProtection/preview/2022-09-01-preview/dataprotection.json:7157 in b503351. [](commit_id = b503351, deletion_comment = False) thanks for the suggestion. 
We'll plan and take up this exercise to add all missing descriptions in our swagger in upcoming versions. /azp run
2025-04-01T04:54:45.580344
2024-08-30T08:19:31
2496559112
{ "authors": [ "DominikMe", "jiriburant" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13247", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/30381" }
gharchive/pull-request
New API for communication call settings [Private preview] Data Plane API Specification Update Pull Request [!TIP] Overwhelmed by all this guidance? See the Getting help section at the bottom of this PR description. Introducing new provisioning API for ACS calling, that will allow customers to store call related settings on ACS resource level and ACS participant level. TypeSpec link PR review workflow diagram Please understand this diagram before proceeding. It explains how to get your PR approved & merged. API Info: The Basics Most of the information about your service should be captured in the issue that serves as your API Spec engagement record. Link to API Spec engagement record issue: Is this review for (select one): [x] a private preview [ ] a public preview [ ] GA release Change Scope This section will help us focus on the specific parts of your API that are new or have been modified. Please share a link to the design document for the new APIs, a link to the previous API Spec document (if applicable), and the root paths that have been updated. Design Document: Previous API Spec Doc: N/A Updated paths: N/A Viewing API changes For convenient view of the API changes made by this PR, refer to the URLs provided in the table in the Generated ApiView comment added to this PR. You can use ApiView to show API versions diff. Suppressing failures If one or multiple validation error/warning suppression(s) is detected in your PR, please follow the Swagger-Suppression-Process to get approval. ❔Got questions? Need additional info?? We are here to help! Contact us! The Azure API Review Board is dedicated to helping you create amazing APIs. You can read about our mission and learn more about our process on our wiki. 💬 Teams Channel 💌 email Click here for links to tools, specs, guidelines & other good stuff Tooling Open API validation tools were run on this PR. 
Go here to see how to fix errors Spectral Linting Guidelines & Specifications Azure REST API Guidelines OpenAPI Style Guidelines Azure Breaking Change Policy Helpful Links Schedule a data plane REST API spec review Getting help First, please carefully read through this PR description, from top to bottom. If you don't have permissions to remove or add labels to the PR, request write access per aka.ms/azsdk/access#request-access-to-rest-api-or-sdk-repositories To understand what you must do next to merge this PR, see the Next Steps to Merge comment. It will appear within a few minutes of submitting this PR and will continue to be up-to-date with the current PR state. For guidance on fixing this PR's CI check failures, see the hyperlinks provided in the given failure and https://aka.ms/ci-fix. If the PR CI checks appear to be stuck in a queued state, please add a comment with contents /azp run. This should result in a new comment denoting a PR validation pipeline has started and the checks should be updated after a few minutes. If the help provided by the previous points is not enough, post to https://aka.ms/azsdk/support/specreview-channel and link to this PR. fix https://github.com/Azure/azure-rest-api-specs/issues/30399 From our prior conversations, this is only for private preview. If we're going with the new Swagger route, then we should include the TypeSpec sources.
2025-04-01T04:54:45.586180
2024-10-25T11:31:57
2613845819
{ "authors": [ "razvanbadea-msft", "shublnu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13248", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/31233" }
gharchive/pull-request
nsp async association api spec changes Choose a PR Template Switch to "Preview" on this description then select one of the choices below. Click here to open a PR for a Data Plane API. Click here to open a PR for a Control Plane (ARM) API. @shublnu please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information. @microsoft-github-policy-service agree [company="{your company}"] Options: (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer. @microsoft-github-policy-service agree (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer. @microsoft-github-policy-service agree company="Microsoft" Contributor License Agreement @microsoft-github-policy-service agree company="Microsoft" Choose a PR Template description and add a description to this PR to have the Purpose of PR and due diligence sections added The first commit needs to be an exact copy of the previous API version. All new changes should only be added in the subsequent commits. This allows the reviewer to get a clear understanding of the actual changes being introduced. With the way the PR is raised now, it is not possible for the reviewer to tell what the changes are. Please either abandon the PR and raise another one with the recommendation or create a new set of commits on this PR following the recommendation. If you are doing the latter option, please indicate which commit is the exact copy of the previous version.
2025-04-01T04:54:45.593676
2020-02-04T04:29:13
559480911
{ "authors": [ "AutorestCI", "azuresdkci", "chandrasekarendran" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13249", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/8292" }
gharchive/pull-request
New Put API for updating vault security config Latest improvements: MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow. Contribution checklist: [x] I have reviewed the documentation for the workflow. [x] Validation tools were run on swagger spec(s) and have all been fixed in this PR. [ ] The OpenAPI Hub was used for checking validation status and next steps. ARM API Review Checklist [ ] Service team MUST add the "WaitForARMFeedback" label if the management plane API changes fall into one of the below categories. adding/removing APIs. adding/removing properties. adding/removing API-version. adding a new service in Azure. Failure to comply may result in delays for manifest application. Note this does not apply to data plane APIs. [ ] If you are blocked on ARM review and want to get the PR merged urgently, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Please follow the link to find more details on API review process. You don't have permission to trigger SDK Automation. Please add yourself to Azure group from opensource portal if you are MSFT employee, or please ask reviewer to add comment *** /openapibot sdkautomation ***. Please ask<EMAIL_ADDRESS>(or NullMDR in github) for additional help. /azp run automation - sdk /azp run automation - sdk /azp run automation - sdk Can one of the admins verify this patch?
2025-04-01T04:54:45.600217
2020-04-30T21:11:42
610388877
{ "authors": [ "AutorestCI", "audunn", "azuresdkci" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13250", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/9294" }
gharchive/pull-request
Update mountTarget type definition Latest improvements: MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow. Contribution checklist: [x] I have reviewed the documentation for the workflow. [x] Validation tools were run on swagger spec(s) and all errors have been fixed in this PR. [ ] The OpenAPI Hub was used for checking validation status and next steps. ARM API Review Checklist [ ] Service team MUST add the "WaitForARMFeedback" label if the management plane API changes fall into one of the below categories. adding/removing APIs. adding/removing properties. adding/removing API-version. adding a new service in Azure. Failure to comply may result in delays for manifest application. Note this does not apply to data plane APIs. [ ] If you are blocked on ARM review and want to get the PR merged urgently, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Please follow the link to find more details on the API review process. /azp run automation - sdk URGENT: this is a bug fix that was already applied in other API versions; see PR 9078. Can one of the admins verify this patch?
2025-04-01T04:54:45.605166
2023-07-18T13:20:34
1809926144
{ "authors": [ "MIKHANYA" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13251", "repo": "Azure/azure-sdk-for-c", "url": "https://github.com/Azure/azure-sdk-for-c/issues/2605" }
gharchive/issue
azure-sdk-for-c-arduino, can't see telemetry data from devices in Azure (ESP32) Describe the bug Can't see telemetry data from devices in Azure (ESP32). To Reproduce example from library https://github.com/Azure/azure-sdk-for-c-arduino/blob/main/examples/Azure_IoT_Hub_ESP32/readme.md Expected behavior It looks like the device is not sending payloads Additional context see my discussion details on Stack Overflow https://stackoverflow.com/questions/76687945/cant-see-telemetry-data-from-devices-in-azure-esp32 Ultimately, I'm interested in data transmission via cellular communication: an ESP32 and SIM7000 module connected to Azure IoT Hub Setup: OS: [Windows10] IDE: ARDUINO Version of the Library used: Last available [ ] Bug Description Added [ ] Repro Steps Added [ ] Setup information Added Problem solved in https://github.com/Azure/azure-sdk-for-c/issues/2611
2025-04-01T04:54:45.608102
2022-09-20T16:21:25
1379680093
{ "authors": [ "RickWinter", "ahsonkhan", "gearama" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13252", "repo": "Azure/azure-sdk-for-cpp", "url": "https://github.com/Azure/azure-sdk-for-cpp/issues/3951" }
gharchive/issue
Split test capture step out from build step Currently the builds run gtest_discover_tests as a final step of the tests. This step should be split out into its own step to speed up the builds and help isolate points of failure. Related issue tracking resolving the failures: https://github.com/Azure/azure-sdk-for-cpp/issues/1607 Having isolated the point of failure to gtest_discover_tests, splitting the step out is pointless and would actually lengthen the build: because it is part of the CMake files, the only way to have it executed (or not) is to run the CMake configure/build twice, which overall lengthens the build process. We cannot run only the discovery step, but we can increase the discovery timeout from the current 5 s (which is too short) to something more appropriate. To quote the CMake GTest documentation for the discovery timeout: "Most test executables will enumerate their tests very quickly, but under some exceptional circumstances, a test may require a longer timeout. The default is 5." The exceptional circumstances are not elaborated further. The current implementation of the timeout does not work because it is in a macro nobody calls, guarded by an environment variable that nobody sets, so it never actually executes.
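The fix the thread converges on — raising the enumeration timeout rather than splitting the step — can be sketched in CMake. This is an illustrative config fragment, not the repo's actual build files: the target name is hypothetical, and it assumes CMake ≥ 3.10.3, where gtest_discover_tests accepts a DISCOVERY_TIMEOUT argument.

```cmake
# Sketch: raise the per-binary test enumeration timeout from the 5 s default.
# "azure-core-test" is an illustrative target name, not one from the repo.
include(GoogleTest)
gtest_discover_tests(azure-core-test
  DISCOVERY_TIMEOUT 30  # seconds; CMake's default is 5
)
```

Because DISCOVERY_TIMEOUT is passed per call site, this avoids the dead-macro/unset-environment-variable problem described above: the value takes effect unconditionally at configure time.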
2025-04-01T04:54:45.618236
2022-06-11T21:07:02
1268372925
{ "authors": [ "Grayer123", "mblaschke", "navba-MSFT", "tadelesh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13253", "repo": "Azure/azure-sdk-for-go", "url": "https://github.com/Azure/azure-sdk-for-go/issues/18395" }
gharchive/issue
armsubscription.Subscription: missing Tags Bug Report import path: /sdk/resourcemanager/subscription/armsubscription SDK version: latest go version: go version go1.18.3 darwin/arm64 Subscription struct doesn't include Tags: type Subscription struct { // The authorization source of the request. Valid values are one or more combinations of Legacy, RoleBased, Bypassed, Direct // and Management. For example, 'Legacy, RoleBased'. AuthorizationSource *string `json:"authorizationSource,omitempty"` // The subscription policies. SubscriptionPolicies *Policies `json:"subscriptionPolicies,omitempty"` // READ-ONLY; The subscription display name. DisplayName *string `json:"displayName,omitempty" azure:"ro"` // READ-ONLY; The fully qualified ID for the subscription. For example, /subscriptions/00000000-0000-0000-0000-000000000000. ID *string `json:"id,omitempty" azure:"ro"` // READ-ONLY; The subscription state. Possible values are Enabled, Warned, PastDue, Disabled, and Deleted. State *SubscriptionState `json:"state,omitempty" azure:"ro"` // READ-ONLY; The subscription ID. SubscriptionID *string `json:"subscriptionId,omitempty" azure:"ro"` } https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/subscription/armsubscription/zz_generated_models.go @mblaschke Thanks for your feedback. The response body of the subscription Get operation does not contain tags for now. As this is a feature request, I'll involve the service team to have a look. Adding the service team to look into this feature request. According to https://docs.microsoft.com/en-us/rest/api/resources/subscriptions/get it returns the tags. This is not really a feature request, as the old SDK also provides subscription tags. The Swagger default tag is still using the old 2016 api-version. @anyone from service team, please help to do the upgrade. @tadelesh I will check with the Service team offline and update this GitHub thread. 
You might want to use all the models under “resourcemanager/resources” instead of “resourcemanager” (which might have the old version; not sure why it hasn’t been removed). Please give it a try with https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/resources/armsubscriptions/zz_generated_models.go (I do see “tags” in the subscription struct) instead of https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/subscription/armsubscription/zz_generated_models.go (without tags). @Grayer123 Is the package subscription/armsubscription totally replaced by resources/armsubscriptions? If so, we could deprecate the former one and let the customer use the latter one. @tadelesh Got a confirmation from @Grayer123 that it should be replaced. @navba-MSFT Thanks. @mblaschke You could use package github.com/Azure/azure-sdk-for-go/resources/armsubscriptions.
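To make the difference between the two models concrete, here is a minimal, self-contained sketch — NOT the real SDK code — whose Subscription type mirrors a trimmed-down subset of the resources/armsubscriptions model shape, with the Tags field the older subscription/armsubscription model lacks:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// Subscription is a hypothetical, trimmed-down mirror of the
// resources/armsubscriptions model (illustration only). The point of the
// issue: the newer model declares Tags, the older one does not, so only the
// newer one surfaces tags from the Get response body.
type Subscription struct {
	ID             *string            `json:"id,omitempty"`
	DisplayName    *string            `json:"displayName,omitempty"`
	SubscriptionID *string            `json:"subscriptionId,omitempty"`
	Tags           map[string]*string `json:"tags,omitempty"`
}

// hasTags reports whether the serialized form of s carries a tags object,
// i.e. whether a caller deserializing into this model would see their tags.
func hasTags(s Subscription) bool {
	out, err := json.Marshal(s)
	if err != nil {
		return false
	}
	return strings.Contains(string(out), `"tags"`)
}

func main() {
	env := "prod"
	id := "/subscriptions/00000000-0000-0000-0000-000000000000"
	s := Subscription{ID: &id, Tags: map[string]*string{"environment": &env}}
	fmt.Println(hasTags(s)) // the model declares Tags, so they round-trip
}
```

A model without the Tags field silently drops the `"tags"` object during unmarshaling, which is exactly the symptom reported above.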
2025-04-01T04:54:45.621750
2024-08-24T06:45:34
2484295038
{ "authors": [ "ifeify", "richardpark-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13254", "repo": "Azure/azure-sdk-for-go", "url": "https://github.com/Azure/azure-sdk-for-go/issues/23370" }
gharchive/issue
Enhancement: combine event hub in-memory and blob checkpointing feature Feature Request This is regarding the checkpointing feature in the Event Hub Go client. I'm wondering if we can combine the benefits of in-memory checkpoints and persistent checkpointing to blob storage. Persisting to blob storage on every checkpoint has resulted in slower performance for my team's application (which requires low-latency processing of telemetry). In-memory persistence, on the other hand, means data will be lost when the application dies or restarts. I propose adding an option to only persist to blob storage after a time interval or after a certain number of checkpoints. I propose adding an option to only persist to blob storage after a time interval or after a certain number of checkpoints. We've generally left this pattern up to the application writer. Generally your app wants full visibility when you bypass certain areas of safety. Now, your application is in full control of when UpdateCheckpoint is called, so if you want to do a "only write after 'x' time" style pattern it's a simple update to wrap the update with your own logic: if time.Since(lastCheckpointTime) > arbitraryDuration { if err := partitionClient.UpdateCheckpoint(context.TODO(), events[len(events)-1], nil); err != nil { return err } lastCheckpointTime = time.Now() } The compromise, as you mention, is whether you're okay with possibly reprocessing events if you lose a processor instance and have to start again. Closing as we're not going to add this to the SDK at this time.
2025-04-01T04:54:45.623393
2022-10-30T01:05:40
1428534246
{ "authors": [ "azure-sdk", "tadelesh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13255", "repo": "Azure/azure-sdk-for-go", "url": "https://github.com/Azure/azure-sdk-for-go/pull/19450" }
gharchive/pull-request
[Release] sdk/resourcemanager/trafficmanager/armtrafficmanager/2.0.0-beta.1 https://github.com/Azure/sdk-release-request/issues/3321 @Alancere Please help to change to minor version.
2025-04-01T04:54:45.627393
2017-12-20T22:04:47
283709264
{ "authors": [ "vladbarosan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13256", "repo": "Azure/azure-sdk-for-go", "url": "https://github.com/Azure/azure-sdk-for-go/pull/921" }
gharchive/pull-request
Update Changelog, ACS reorg and delete extra services for v12 Thank you for your contribution to the Azure-SDK-for-Go! We will triage and review it as quickly as we can. As part of your submission, please make sure that you can make the following assertions: [ ] I'm not making changes to Auto-Generated files which will just get erased next time there's a release. If that's what you want to do, consider making a contribution here: https://github.com/Azure/autorest.go [ ] I've tested my changes, adding unit tests where applicable. [ ] I've added Apache 2.0 Headers to the top of any new source files. [ ] I'm submitting this PR to the dev branch, or I'm fixing a bug that warrants its own release and I'm targeting the master branch. [ ] If I'm targeting the master branch, I've also added a note to CHANGELOG.md. [ ] I've mentioned any relevant open issues in this PR, making clear the context for the contribution. #861
2025-04-01T04:54:45.629244
2021-02-11T17:48:36
806609609
{ "authors": [ "azure-sdk", "weshaggard" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13257", "repo": "Azure/azure-sdk-for-ios", "url": "https://github.com/Azure/azure-sdk-for-ios/pull/704" }
gharchive/pull-request
Sync eng/common directory with azure-sdk-tools for PR 1386 Sync eng/common directory with azure-sdk-tools for PR https://github.com/Azure/azure-sdk-tools/pull/1386 See eng/common workflow /check-enforcer reset
2025-04-01T04:54:45.636924
2021-03-29T21:52:10
843844128
{ "authors": [ "mitchdenny", "tjprescott" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13258", "repo": "Azure/azure-sdk-for-ios", "url": "https://github.com/Azure/azure-sdk-for-ios/pull/789" }
gharchive/pull-request
Mechanism for Mono to Clone Swift PM Repos The goal here is for us to have fine-grained, independent version control for SwiftPM modules the way we do in other SDKs and with CocoaPods while maintaining the benefits of a mono-repo. To facilitate this, we will have a per-module Package.swift and CHANGELOG.md. Each module will be represented in manifest.yml with its path in the mono-repo and its mirror repo name. Whenever the mono-repo is updated, we can update the mirror repo with: python3 eng/scripts/sync_repo.py <MODULE> This will copy that branch to the root of the mirror, which can then be targeted by SwiftPM. In the current model, users target the "AzureSDK" package, and then specify individual modules, whereas with this change, the target will be the module. The current example of this is AzureCore, which will mirror to azure-sdk-for-ios-core. This PR seems to have two strategies in play. One strategy that has been discussed is having a copy of all the modules in a separate repository (synchronized from the mono-repo). But there also appears to be a reference to an azure-sdk-for-ios-core repository as well (which makes me think that this is just breaking out core separately, and will be managed as a separate repo (incl the engineering system etc). This PR seems to have two strategies in play. One strategy that has been discussed is having a copy of all the modules in a separate repository (synchronized from the mono-repo). But there also appears to be a reference to an azure-sdk-for-ios-core repository as well (which makes me think that this is just breaking out core separately, and will be managed as a separate repo (incl the engineering system etc). @mitchdenny thanks for the catch. I updated the reference to SwiftPM-AzureCore. Also, @weshaggard I updated the sync_repo.py script so it no longer requires any metadata file. @weshaggard @mitchdenny this PR should be ready for review. 
Another thing that needs to be worked out is the tagging strategy when a release goes out. Is the plan just to replace the contents of the repo and then commit and tag? So we could end up with something like this: commit1: 1.0.0-beta.11 commit2: 1.0.0 commit3: 1.1.0-beta.1 commit4: 1.0.1 There would be corresponding tags on the monorepo like: commit1: AzureCore_1.0.0-beta.11 commit2: AzureCore_1.0.0 (albeit not anytime soon) commit3: AzureCore_1.1.0-beta.1 commit4: AzureCore_1.0.1 However, if that were the sequence in both repos, I don't see how either would be any less confusing. I think there will also need to be some content in the README that explains how/why these repos are done this way. Imagine a scenario where someone inherits an iOS code-base and they go and look at their dependencies and they see one of our swiftpm-* links. They may not have previously worked with the Azure SDK so wouldn't know why we have these satellite repos. I think, most likely, they wouldn't think twice about the SwiftPM repo but would be surprised to learn of the existence of the mono-repo. There would be no reference to azure-sdk-for-ios in their files, but if they wanted to file a bug or contribute, the README would direct them to the mono-repo. But you are right, we should call this out in some fashion somewhere.
2025-04-01T04:54:45.643824
2023-04-14T18:00:39
1668723736
{ "authors": [ "MT-Jacobs", "anuchandy", "trefsahl" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13259", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/34464" }
gharchive/issue
Deprecation of Service Bus Processor Stop API Today Service Bus Processor exposes two APIs that complete the message pumping - stop and close. The mainline use case is - the application wants the Processor to pump the message forever. When/If the application shuts down, it calls close API to dispose of the Processor. As the engineering team recently reevaluated the past design choices, we identified that the stop API involves quite a complexity; supporting it correctly involves a reasonable (allocation, coordination) overhead and contributes to engineering costs. Additionally, the above mainline use case (that does not use stop) pays the price of this overhead. The engineering team came to the conclusion that the mainline use case will greatly benefit from deprecating and removing stop, and this saves engineering and maintenance costs. Going forward, starting a Processor that was stopped before is not recommended and this feature may be deprecated in future. The recommendation is to close the Processor instance and create a new one to restart processing. The April 2023 version (v7.13.4) of Service Bus SDK includes a log message in the warning level to communicate this upcoming change; this is the pr. We run an async "short-lived" client for receiving messages and have tested recreating clients with the use of close(). But this does not behave in a graceful way today. This is a list of my top values from the logs when we call close() after a few days in production with version 7.13.4 of the java lib (I guess some of the messages could be ignored, but still...) The receiver didn't receive the disposition acknowledgment due to receive link closure. java.lang.InterruptedException Cannot subscribe. Processor is already terminated Cannot perform operations on a disposed set. Delivery not on receive link. Cannot perform operation 'renewMessageLock' on a disposed receiver Maybe a combination of first calling stop(), and then close() a short while later, could be a good choice? 
We definitely need a way for in-process messages to complete just before close, otherwise it's very easy to run into edge cases where the work around processing a message completes, but the complete call then fails because the processor was stopped/closed.
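The close-and-recreate pattern the SDK team recommends can be sketched generically. This is a hypothetical, SDK-free illustration — the `Processor` interface, `newProcessor` factory, and `restart` helper below are stand-ins invented for this sketch, not Azure SDK types: a closed pump is never restarted; instead the old instance is disposed and a fresh one is built.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RecreatePattern {
    /** Stand-in for a ServiceBusProcessorClient-like pump (hypothetical, not an Azure SDK type). */
    public interface Processor extends AutoCloseable {
        void start();
        @Override
        void close();
    }

    /** Counts how many pump instances have been built (for demonstration only). */
    public static final AtomicInteger instances = new AtomicInteger();

    public static Processor newProcessor() {
        instances.incrementAndGet();
        return new Processor() {
            private boolean closed;

            @Override
            public void start() {
                if (closed) {
                    // mirrors "Cannot subscribe. Processor is already terminated"
                    throw new IllegalStateException("processor is closed; create a new one");
                }
            }

            @Override
            public void close() {
                closed = true;
            }
        };
    }

    /** Restart = dispose the old instance and build a fresh one; never start() a closed pump. */
    public static Processor restart(Processor old) {
        old.close();
        Processor fresh = newProcessor();
        fresh.start();
        return fresh;
    }

    public static void main(String[] args) {
        Processor p = newProcessor();
        p.start();
        p = restart(p); // instead of p.stop(); p.start();
        p.close();
        System.out.println("instances created: " + instances.get());
    }
}
```

The design choice this illustrates: by making "restart" a new allocation rather than a state transition, the library avoids the coordination overhead the issue describes, at the cost of the application re-establishing its links.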
2025-04-01T04:54:45.662078
2023-09-11T03:38:22
1889588308
{ "authors": [ "JonathanGiles", "alzimmermsft", "ibrandes", "kensinzl" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13260", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/36689" }
gharchive/issue
[QUERY] RequestRetryOptions seems to not work with Spring Boot Query/Question My app is Spring Boot 3.1.0 and the package is using Spring Cloud Azure. <dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-starter-storage-blob</artifactId> <version>5.5.0</version> </dependency> This Java app uses multiple threads to upload images (jpeg, png) into Azure Blob Storage. Each thread is uploading one image, but I intermittently face upload timeouts. To detect when there is a timeout, I set the timeout as 30 seconds because I think it should not take over 30 seconds to upload a tiny image. Please see my following attached code. In order to cushion this timeout issue, I want to use the re-try logic to fail over if it takes over 30 seconds Why is this not a Bug or a feature Request? Now I have already introduced re-try logic as in the following code, but I CAN NOT see the re-try log record when I intentionally make the image upload timeout a quite small value, e.g. 10 ms, and the re-try count is 3, but I can not see three re-try executions in the log! @Bean public BlobServiceClient getBlobServiceClient() { RequestRetryOptions requestRetryOptions = new RequestRetryOptions(RetryPolicyType.EXPONENTIAL, 3, 1, 10000L, 20000L, null); return new BlobServiceClientBuilder(). retryOptions(requestRetryOptions). 
connectionString("******").buildClient(); } @Async("taskExecutor") public CompletableFuture<String> migrateImageToAzureStorage(Image image, BlobContainerClient forecastOrObsContainer, byte[] imageData) { String imageAzureBlobName = image.getImageId().toString() + guessFileExtension(imageData); try { BlobClient blobClient = forecastOrObsContainer.getBlobClient(imageAzureBlobName); try (ByteArrayInputStream imageStream = new ByteArrayInputStream(imageData)) { blobClient.uploadWithResponse(new BlobParallelUploadOptions(imageStream).setTier(AccessTier.HOT).setRequestConditions(new BlobRequestConditions()), Duration.ofMillis(10), Context.NONE); LogUtil.info("Uploaded Image "+image.getImageId()+" to Azure Blob Storage HOT Tier as " + blobClient.getBlobUrl()); } image.setObjectStoreUrl(blobClient.getBlobName()); return CompletableFuture.completedFuture(blobClient.getBlobName()); } catch (IOException e) { LogUtil.error("Failed for closing stream when UPLOAD image "+image.getImageId()+" for Azure as "+imageAzureBlobName, e); } catch (IllegalStateException e) { LogUtil.error("Timeout for UPLOADING image "+image.getImageId()+" for Azure ", e); } catch (Exception e) { LogUtil.error("Failed for uploading image "+image.getImageId()+" for Azure ", e); } // return a failed future CompletableFuture<String> failedFuture = new CompletableFuture<>(); failedFuture.completeExceptionally(new MigrationException("Failed for uploading image "+image.getImageId()+" for Azure ")); return failedFuture; } Setup (please complete the following information if applicable): OS: [e.g. iOS] IDE: [e.g. IntelliJ] Information Checklist Kindly make sure that you have added all the following information above and checkoff the required fields otherwise we will treat the issuer as an incomplete report [x] Query Added [x] Setup information Added @alzimmermsft Thanks for your detailed explanation. I removed the timeout setting from BlobClient and assigned it to the RequestRetryOptions. 
Now it works and the following is the pseudocode; please have a look. But may I check with you about several things: Could you please share some official documents to describe the timeout difference between BlobClient and RequestRetryOptions? I may be missing something because I found it hard to catch the Azure upload timeout exception after running out of the re-try count, e.g. what is the particular exception type, so I have to use a crude way to match the exception string, which you can see in my code Seems I have to add the HttpLogOptions to let the Azure-related logs be surfaced, e.g. the following logs. It seems I have to fetch out these logs with "az.sdk.message", but I will still miss some part, e.g. the second log in this instance; is there any way I can use to fetch all these logs in one go? {"@timestamp":"2023-09-12T02:24:29.712Z","ecs.version":"1.2.0","log.level":"INFO","message":"{\"az.sdk.message\":\"HTTP request\",\"method\":\"PUT\",\"url\":\"https://host.blob.core.windows.net/forecastimage/1565385274.jpeg\",\"tryCount\":\"3\",\"contentLength\":236608141}","process.thread.name":"parallel-4","log.logger":"com.azure.storage.blob.implementation.BlockBlobsImpl$BlockBlobsService.upload"} {"@timestamp":"2023-09-12T02:24:33.717Z","ecs.version":"1.2.0","log.level":"WARN","message":"[id: ***********, L:/********:***** - R:host.blob.core.windows.net/********:*****] Last write attempt timed out; force-closing the connection.","process.thread.name":"reactor-http-kqueue-3","log.logger":"io.netty.handler.ssl.SslHandler"} @Bean public BlobServiceClient getBlobServiceClient() { RequestRetryOptions requestRetryOptions = new RequestRetryOptions(RetryPolicyType.FIXED, retryMaxCount, tryTimeoutInMs/1000, retryDelayInMs, maxRetryDelayInMs, null); HttpLogOptions httpLogOptions = new HttpLogOptions().setLogLevel(HttpLogDetailLevel.BASIC); BlobServiceClient blobServiceClient = new 
BlobServiceClientBuilder().httpLogOptions(httpLogOptions).retryOptions(requestRetryOptions).connectionString(connectStr).buildClient(); return blobServiceClient; } @Async("taskExecutor") public CompletableFuture<String> migrateImageToAzureStorage(Image image, BlobContainerClient forecastOrObsContainer, byte[] imageData) { String imageAzureBlobName = image.getImageId().toString() + guessFileExtension(imageData); // record the size of each image and represent at the log double imageSizeInMB = imageData.length / 1024.0 / 1024.0; try { BlobClient blobClient = forecastOrObsContainer.getBlobClient(imageAzureBlobName); try (ByteArrayInputStream imageStream = new ByteArrayInputStream(imageData)) { blobClient.uploadWithResponse(new BlobParallelUploadOptions(imageStream).setTier(AccessTier.HOT).setRequestConditions(new BlobRequestConditions()), null, Context.NONE); LogUtil.info("Uploaded Image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" to Azure Blob Storage HOT Tier as " + blobClient.getBlobUrl()); } image.setObjectStoreUrl(blobClient.getBlobName()); return CompletableFuture.completedFuture(blobClient.getBlobName()); } catch (IOException e) { LogUtil.error("Failed for closing stream when UPLOAD image "+image.getImageId()+", Size(MB): " +imageSizeInMB+ " for Azure as "+imageAzureBlobName, e); } catch (Exception e) { // crude way to check the azure blob upload timeout exception if(e.getMessage().contains("Did not observe any item or terminal signal within ")) { LogUtil.error("Timeout for UPLOADING image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" for Azure ", e); } else { LogUtil.error("Exception happened during uploading image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" for Azure ", e); } } // return a failed future CompletableFuture<String> failedFuture = new CompletableFuture<>(); failedFuture.completeExceptionally(new MigrationException("Failed for uploading image "+image.getImageId()+", Size(MB): " +imageSizeInMB+" for Azure ")); return 
failedFuture; } @alzimmermsft could you please have a look for the above message when you feel available, thank you so much. Could you please share some official documents to describe the timeout difference between among BlobClient and RequestRetryOptions? This is something that wasn't well documented and I'm working on adding this in a few places. So far, I have a PR opened adding this documentation to azure-core's README: https://github.com/Azure/azure-sdk-for-java/pull/36710/files#diff-b8dc45bc6fad5f70e59b49cda551e56ae6668c49fa0fd026c09b74537b9abed1R161 @ibrahimrabab could you look at porting this documentation to the Storage READMEs as well? @JonathanGiles where is the best place to put documentation like this in https://learn.microsoft.com/en-us/azure/developer/java/sdk/? I may miss something because I found it hard to catch the azure upload timeout exception after running out of re-try count, eg: what is the particular exception type, so I have to use a crude way to match the exception string which you can see it among my code Thanks for making note of this, looking at this it's a bit tricky and something the SDKs need to clean up. Depending on the timeout that triggered different exceptions will be thrown. If the timeout happened at the apiCall(Duration timeout) level it will throw an IllegalStateException. If the timeout happened at the HttpPipeline or HttpClient layer it will throw a TimeoutException. For the purposes of your application as you removed the usage of the apiCall(Duration timeout) I'd check for TimeoutException. @JonathanGiles @srnagar @lmolkova we should do a review on the exceptions thrown in timeout scenarios to make sure they are standardized. 
Seems I have to add the HttpLogOptions to let the azure related logs be represented, eg: the following logs seems I have to fetch out these logs with "az.sdk.message", but I still will miss some part, eg: the second log, in this instance, is there any way I can use to fetch all these logs in one time? I don't fully understand the question about fetching all the logs at one time. If you're asking how to fetch all Azure SDK logs I would recommend doing that based on the log.logger included in the log and look for com.azure.* loggers. This is something that wasn't well documented and I'm working on adding this in a few places. So far, I have a PR opened adding this documentation to azure-core's README: https://github.com/Azure/azure-sdk-for-java/pull/36710/files#diff-b8dc45bc6fad5f70e59b49cda551e56ae6668c49fa0fd026c09b74537b9abed1R161 @ibrahimrabab could you look at porting this documentation to the Storage READMEs as well? @JonathanGiles where is the best place to put documentation like this in https://learn.microsoft.com/en-us/azure/developer/java/sdk/? I'll ping you this week to discuss Alan - but generally speaking this seems like good content for the troubleshooting push I'm doing over at learn.microsoft.com @JonathanGiles @srnagar @lmolkova we should do a review on the exceptions thrown in timeout scenarios to make sure they are standardized. @alzimmermsft Please file an issue and start the ball rolling on this! Loop us in ASAP. @JonathanGiles @alzimmermsft Thank you so much for the detailed information and your valuable time. Closed, question was answered, and documentation was updated to be clearer.
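The answer above notes that the exception type depends on where the timeout fired — an IllegalStateException at the apiCall(Duration) level, a TimeoutException at the pipeline/client layer — which is why matching on message strings is fragile. A more robust alternative is to walk the exception's cause chain for the type of interest. This is a small self-contained sketch; the wrapped exceptions here are fabricated for illustration, not real SDK output:

```java
import java.util.concurrent.TimeoutException;

public class TimeoutDetect {
    /** Returns true if t, or any exception in its cause chain, is a TimeoutException. */
    public static boolean isTimeout(Throwable t) {
        int depth = 0; // guard against pathological cause cycles
        for (Throwable cur = t; cur != null && depth < 32; cur = cur.getCause(), depth++) {
            if (cur instanceof TimeoutException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A timeout buried under a wrapping exception is still detected.
        Throwable wrapped = new RuntimeException("upload failed",
                new TimeoutException("did not complete within 30s"));
        System.out.println(isTimeout(wrapped));                     // prints true
        System.out.println(isTimeout(new IllegalStateException())); // prints false
    }
}
```

Catching `Exception` and passing it through a check like this avoids the string match (`e.getMessage().contains("Did not observe any item...")`) shown in the thread.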
2025-04-01T04:54:45.672689
2024-02-06T06:57:49
2120094243
{ "authors": [ "Netyyyy", "OleksandrShkurat", "joshfree", "nagyesta", "saragluna", "vcolin7" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13261", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/38661" }
gharchive/issue
[BUG] Spring Cloud Azure 5.9.0 update requires JDK 21 Describe the bug Updating to Spring Cloud Azure 5.9.0 caused build failure complaining about the class version. I wasn't able to find any information about the JDK 21 requirement in the release so I assume this is not intentional. Please kindly let me know in case I missed something! Thank you! Exception or Stack Trace Caused by: java.lang.UnsupportedClassVersionError: com/azure/spring/cloud/autoconfigure/implementation/context/AzureGlobalConfigurationEnvironmentPostProcessor has been compiled by a more recent version of the Java Runtime (class file version 65.0), this version of the Java Runtime only recognizes class file versions up to 61.0 at java.base/java.lang.ClassLoader.defineClass1(Native Method) at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017) at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:150) To Reproduce Steps to reproduce the behavior: Check out https://github.com/nagyesta/lowkey-vault-example/commit/f14417508a6cb433636455f9b115c21fa89e59d7 Build using JDK 17 Observe errors Code Snippet N/A Expected behavior The project is working with JDK 17 Screenshots N/A Setup (please complete the following information): OS: any IDE: any Library/Libraries: "com.azure.spring:spring-cloud-azure-starter-keyvault-secrets:5.9.0" "com.azure.spring:spring-cloud-azure-starter-keyvault:5.9.0" Java version: 17 (below 21) App Server/Environment: any Frameworks: Spring Boot Additional context Affects this PR: https://github.com/nagyesta/lowkey-vault-example/pull/274 Information Checklist Kindly make sure that you have added all the following information above and checkoff the required fields otherwise we will treat the issuer as an incomplete report [x] Bug Description Added [x] Repro Steps Added [x] Setup information Added Meanwhile, Microsoft Azure Functions do not support Java 21 :) @saragluna @vcolin7 could you please investigate this apparent regression (requiring 
JDK 21 to run)? Yes, I'm looking into this. Sorry for the inconvenience, we will release a hotfix ASAP. Spring Cloud Azure 5.9.1 is now released Looks like 5.9.1 works well. Thank you
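The UnsupportedClassVersionError above is driven by the class-file major version: 61 corresponds to Java 17 and 65 to Java 21 (for modern JDKs, major = feature release + 44). As a quick way to check what a jar was compiled for, the major version can be read straight out of a class file's header. A minimal sketch — the byte arrays below are hand-built header examples, not real class files:

```java
public class ClassFileVersion {
    /** Reads the class-file major version: bytes 6-7 of the header, big-endian. */
    public static int majorVersion(byte[] header) {
        if (header.length < 8
                || header[0] != (byte) 0xCA || header[1] != (byte) 0xFE
                || header[2] != (byte) 0xBA || header[3] != (byte) 0xBE) {
            throw new IllegalArgumentException("not a class file (bad magic)");
        }
        return ((header[6] & 0xFF) << 8) | (header[7] & 0xFF);
    }

    /** Maps a class-file major version to the Java feature release (major = release + 44). */
    public static int javaRelease(int majorVersion) {
        return majorVersion - 44;
    }

    public static void main(String[] args) {
        byte[] java17 = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 61};
        byte[] java21 = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE, 0, 0, 0, 65};
        System.out.println(javaRelease(majorVersion(java17))); // prints 17
        System.out.println(javaRelease(majorVersion(java21))); // prints 21
    }
}
```

Running this check against the first 8 bytes of a class inside a dependency jar would have shown 65 (Java 21) for the 5.9.0 artifact described in the bug.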
2025-04-01T04:54:45.674300
2019-11-22T23:53:02
527469154
{ "authors": [ "mbhaskar" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13262", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/6524" }
gharchive/issue
Add support for groupby queries The Java SDKs currently do not support group by queries. Support for the same should be added. Added support for groupby in v4.1.0 https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/cosmos/azure-cosmos/CHANGELOG.md#410-2020-06-25
2025-04-01T04:54:45.676695
2016-05-11T14:54:22
154264001
{ "authors": [ "MSSedusch", "jianghaolu", "selvasingh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13263", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/698" }
gharchive/issue
vm createOrUpdate not working When I try to e.g. add a data disk to a vm I get an error that the createoption for the os disk is required and cannot be null. When I try to set the createoption of the os disk to DiskCreateOptionTypes.EMPTY I get "error":{"code":"PropertyChangeNotAllowed","target":"osDisk.createOption","message":"Changing property 'osDisk.createOption' is not allowed."} The issue seems to be that the DataDisk.createOption and OSDisk.createOption do not get filled on retrieval of a virtual machine. I changed the field to a string and adapted the setter and getter method which seems to work https://github.com/Azure/azure-rest-api-specs/issues/275 Not sure if that issue is related to mine. In my case, the createOption is not getting filled when I get() a virtual machine. When I try to update the VM, I get " is required and cannot be null " because the field was not set on get() This is fixed in beta2. Please verify.
2025-04-01T04:54:45.679106
2020-09-08T05:49:40
695566268
{ "authors": [ "saragluna" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13264", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/14892" }
gharchive/pull-request
Azure spring data cosmos version schema The former design of the Spring version schema would require two separate artifacts to cope with Spring Data 2.2.x and 2.3.x, but if we let the Spring BOM manage our versions we could use one artifact to support both versions. The dependency management section in a pom file could also affect the transitive versions: https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#bill-of-materials-bom-poms. /azp run java - cosmos - tests /azp run java - cosmos - tests
2025-04-01T04:54:45.681249
2021-01-21T00:05:23
790475657
{ "authors": [ "azure-sdk", "benbp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13265", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/18713" }
gharchive/pull-request
Sync eng/common directory with azure-sdk-tools for PR 1327 Sync eng/common directory with azure-sdk-tools for PR https://github.com/Azure/azure-sdk-tools/pull/1327 See eng/common workflow /check-enforcer evaluate /check-enforcer evaluate
2025-04-01T04:54:45.682167
2021-02-11T01:23:28
806008464
{ "authors": [ "conniey" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13266", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/19166" }
gharchive/pull-request
Fixing checkstyle breaks from upgrade to 8.40 Fixing indentation in IdentityClient /azp run java - anomalydetector - ci
2025-04-01T04:54:45.694057
2022-04-11T18:31:36
1200376084
{ "authors": [ "azure-sdk", "kasobol-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13267", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/28187" }
gharchive/pull-request
Sync stack Incorporating feedback: rename synchronous -> sync rename getContent -> getBodyAsBinaryData rename setContent -> setBody(BinaryData) API change check for com.azure:azure-core API changes have been detected in com.azure:azure-core. You can review API changes here API change check for com.azure:azure-core-http-jdk-httpclient API changes have been detected in com.azure:azure-core-http-jdk-httpclient. You can review API changes here API changes - public JdkHttpClientProvider() + public JdkHttpClientProvider() API change check for com.azure:azure-core-http-netty API changes have been detected in com.azure:azure-core-http-netty. You can review API changes here API changes - public NettyAsyncHttpClientProvider() + public NettyAsyncHttpClientProvider() API change check for com.azure:azure-core-http-okhttp API changes have been detected in com.azure:azure-core-http-okhttp. You can review API changes here API changes - public OkHttpAsyncClientProvider() + public OkHttpAsyncClientProvider() - public OkHttpAsyncHttpClientBuilder followRedirects(boolean followRedirects) API change check for com.azure:azure-core-test API changes have been detected in com.azure:azure-core-test. You can review API changes here API change check for com.azure:azure-core-tracing-opentelemetry API changes have been detected in com.azure:azure-core-tracing-opentelemetry. You can review API changes here API changes + @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next) API change check for com.azure:azure-storage-common API changes have been detected in com.azure:azure-storage-common. 
You can review API changes here API changes + @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next) + @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next) + @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next) + @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next) + @Override public HttpResponse processSync(HttpPipelineCallContext context, HttpPipelineNextPolicy next) API change check for com.azure:azure-storage-blob API changes are not detected in this pull request for com.azure:azure-storage-blob API change check for com.azure:azure-storage-blob-batch API changes are not detected in this pull request for com.azure:azure-storage-blob-batch API change check for com.azure:azure-storage-blob-nio API changes are not detected in this pull request for com.azure:azure-storage-blob-nio
2025-04-01T04:54:45.695285
2022-06-21T22:17:07
1279174943
{ "authors": [ "azure-sdk", "ibrahimrabab" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13268", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/29588" }
gharchive/pull-request
Bug Fix: Passing clientOptions to HttpPipelineBuilder in buildPipeline resolves #28783 API change check API changes are not detected in this pull request.
2025-04-01T04:54:45.697138
2019-02-28T20:20:40
415806076
{ "authors": [ "AutorestCI" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13269", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/2987" }
gharchive/pull-request
[AutoPR mariadb/resource-manager] Update MariaDB api version Created to sync https://github.com/Azure/azure-rest-api-specs/pull/5280 This PR has been merged into https://github.com/Azure/azure-sdk-for-java/pull/2387
2025-04-01T04:54:45.698226
2024-04-11T08:51:14
2237229238
{ "authors": [ "azure-sdk", "weidongxu-microsoft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13270", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/39664" }
gharchive/pull-request
[Automation] Generate SDK based on TypeSpec 0.15.8 [Automation] Generate SDK based on TypeSpec 0.15.8 /check-enforcer override
2025-04-01T04:54:45.699360
2024-12-26T03:09:26
2759216207
{ "authors": [ "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13271", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/43617" }
gharchive/pull-request
[Automation] Generate Fluent Lite from Swagger security#package-composite-v3 [Automation] Generate Fluent Lite from Swagger security#package-composite-v3 API change check API changes are not detected in this pull request.
2025-04-01T04:54:45.742652
2020-11-16T19:55:08
744134206
{ "authors": [ "diberry", "leolumicrosoft", "ramya-rao-a", "sadasant", "seanknox", "southpolesteve" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13272", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/12565" }
gharchive/issue
Confused about @azure/ms-node-auth vs @azure/identity vs MSAL.js Package Name: @azure/identity Package Version: All Is the bug related to documentation in [x] README.md [x] source code documentation [x] SDK API docs on https://docs.microsoft.com Describe the bug It is unclear what the difference is between @azure/ms-node-auth, @azure/identity, and MSAL.js. I do not know which lib to use when. If there is a doc that explains it, I cannot find one. Expected behavior A doc explaining each library and when they should be used with examples. Thanks for reporting @southpolesteve @jonathandturner, @daviwil, @sadasant, I was thinking that at the very least, we should do a couple of things here Update the readme for @azure/identity with notes on how it integrates with MSAL the fact that it uses v2 of AAD apis that not all credential classes can be used in the browser any scenario where one should use MSAL.js directly instead of @azure/identity Resolve #11359 See what improvements we can do to Authenticate with the Azure management modules for JavaScript @southpolesteve Can you list the top app types or identity issues, from your perspective, a customer is trying to figure out when asking this question? @diberry I think you covered it well. Maybe your first case captures this, but I think it can be broken in two: I am writing an app that needs many social logins (FB, GitHub, etc) only for identity purposes I am writing an app that specifically needs AAD login to access Azure. Like an internal app that lets my employees configure things that might be Azure resources under the hood. I also think there is a "Just make it work any way you can" case for developers. I won't put an web app in production with vscode based login, but if it helps me get the app working or improves development, I want that to be possible. @southpolesteve Hello Steve! I'll be going through your feedback and making an update to our documentation as soon as possible. 
I'll get in contact with you in case I'm missing something. Thank you for submitting this issue! @sadasant @ramya-rao-a Do you want to be included on Dev Center changes or localize this issue to your own SDK content? @diberry I'd like to be included in the Dev Center changes! If anything, for exposure. Hi again! Just to mention that I'll provide more information next week. @southpolesteve : I believe that the questions @diberry is working on will be helpful for you! Let me answer some of the other things you mention here. It is unclear what the difference is between @azure/ms-node-auth, @azure/identity, and MSAL.js @azure/ms-node-auth is the older authentication library! We continue to maintain it, but we're not adding new features. We're planning some better integration with it to facilitate users to move towards our newer library. @azure/identity is our newer library! This is the one that should be used if you want to authenticate any of our clients with the Azure services. MSAL.js is a library that connects to the system-specific keychains to provide a same authentication experience across environments. @azure/identity uses MSAL.js under the hood! When should each library be used? Even though all of them can be used, @azure/identity is the library that should be used with the Azure SDK for JS clients. @ramya-rao-a : Update the readme for @azure/identity I've made an issue! https://github.com/Azure/azure-sdk-for-js/issues/12669 . I'll follow up this week. See what improvements we can do to Authenticate with the Azure management modules for JavaScript. I'll take a look and I'll make some notes! While I move ahead with the readme update etc, how else can I be useful here? Please let me know if I'm missing something! On MSAL: MSAL does offer several features that are not yet available in our SDK, but we will be adding support as soon as possible. These include more control on the caching and the storing of the credentials. 
However, we're working as closely as possible with the MSAL team, so we should be able to level up with them in a couple of months, as far as I'm understanding. It's in our interest to request people to use the @azure/identity library as much as possible, instead of the possible alternatives, since direct customer feedback will help us make this experience better for everyone. @sadasant thanks for the explainer, I've been wondering the differences for a while! What is the recommendation for providing browser-based authentication (e.g. for webapps needing a credential)? @azure/identity has a browser method that works well, but the package is only supported by a small number of Azure service libraries. I'm using @azure/arm-resources and @azure/arm-compute which require the older msRest.ServiceClientCredentials type of credential object. @seanknox You can use the @azure/ms-rest-browserauth package when working with @azure/arm-resources and @azure/arm-compute packages for authentication needs in the browser. The readmes on these packages should have a code snippet that shows this: Code snippet in readme for compute Code snippet in readme for resources @ramya-rao-a @azure/ms-rest-browserauth requires creating an AD app to authenticate users. Is that the only option for browser authentication, or is there another way users can authenticate directly to Microsoft auth, like @azure/identity's InteractiveBrowserCredential method? @seanknox Hello hello! I wonder if a credential like @azure/identity's DeviceCodeCredential can work for you. Would that be useful? In ms-rest-nodeauth we have interactiveLoginWithAuthResponse, which is similar. I understand that this wouldn't be on the browser though. Would it be possible to move to @azure/identity instead? @seanknox All credential classes in @azure/identity make use of the client id and therefore require you to create an app registration. The ones that don't default to the one corresponding to Azure CLI. So yes, the recommended way is to create an app registration and pass the clientId when creating the credential. @sadasant The packages @azure/arm-resources and @azure/arm-compute do not support @azure/identity. So, @seanknox won't be able to use it. Why don't they support @azure/identity? I'm interested in making it work. If it makes sense, is it because of Continuous Access Evaluation (CAE) challenge based authentication? I believe this is important for ARM resources. We're adding support for CAE this month. This has nothing to do with CAE. All the management plane packages (the ones dealing with resource management) at the moment are auto generated. The generated code works with the credentials from @azure/ms-rest-nodeauth and @azure/ms-rest-browserauth. They are of a different shape than the TokenCredential interface which is implemented by all the credentials in the @azure/identity package. We do have a feature request to update the code generator to generate code that will work with the credentials from @azure/identity as well. But it will take a while to update the code generator and re-generate over 100 management plane packages. @seanknox Please log an issue in the repo for @azure/ms-rest-browserauth for more on that package. We have https://github.com/Azure/azure-sdk-for-js/issues/12669 tracking improvements to the @azure/identity package which we will tackle this month. We are independently tracking other efforts to improve documentation around auth. So, closing this issue. Thanks for your patience, everyone. Then how about the library @azure/msal-browser? This lib is using PublicClientApplication to achieve browser login and use Graph or other web APIs. What distinguishes it from @azure/identity? @leolumicrosoft, @azure/identity contains multiple credential classes, all following the TokenCredential interface. You would need to use these credentials when using our newer set of libraries. When in browser, the only credential that applies from @azure/identity is the InteractiveBrowserCredential, which at the moment uses the msal package. We are in the process of moving to use @azure/msal-browser instead. See #13155 and #13263 You are free to use @azure/msal-browser directly as long as you create your own credential class that follows the interface expected by the client constructor in the Azure package that you are using. The client constructors in the new JS packages require a credential that follows the TokenCredential interface. The client constructors in the rest of the JS packages in this repo require a credential that follows the ServiceClientCredential interface. An example can be found at Authenticating with an existing token Thank you, @ramya-rao-a.
Thank you for the detailed reply; I am starting to understand more of the AD-related JavaScript SDKs. I'll highlight a few points I learned through experimentation, as they might be helpful to people who have just started exploring the AD authentication topic. Please correct anything that is inaccurate.

1. DefaultAzureCredential in @azure/identity is to be used in backend service application code, or in a locally running application, to get a credential. This library is not used in frontend code such as JavaScript in HTML.
2. Browser JavaScript code can utilize @azure/msal-browser or the earlier @azure/ms-node-auth. @azure/msal-browser uses the auth code flow, which allows stricter control of protected resource access, while @azure/ms-node-auth uses the implicit flow.
3. Azure services expose RESTful endpoints which need a token. There are two types of token. One is for the resource management or control plane REST API; this token can be obtained through credential.getToken("https://management.azure.com/.default"), for example for listing all the storage accounts in your Azure account.
4. The other type of RESTful endpoint needs a token from each specific service, for example Azure Key Vault: credential.getToken("https://vault.azure.net/.default") and Azure Digital Twins: credential.getToken("https://digitaltwins.azure.net/.default").

Both 3 and 4 can be tested using Postman after you get the token by using DefaultAzureCredential in a simple locally run script with "az login". These points may be very basic, but it still took me three days to get a clearer insight into them.
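The advice above — to use @azure/msal-browser directly, create your own credential class that follows the TokenCredential interface expected by the newer client constructors — can be sketched roughly as follows. This is a minimal sketch, not a definitive implementation: the `TokenCredential`/`AccessToken` shapes are written inline to keep it self-contained (they mirror the @azure/core-auth declarations), and `acquireMsalToken` is a hypothetical stand-in for a real @azure/msal-browser call such as `PublicClientApplication.acquireTokenSilent`.

```typescript
// Minimal inline mirror of @azure/core-auth's token shapes (assumption for
// self-containment; use the real package in production code).
interface AccessToken {
  token: string;
  expiresOnTimestamp: number;
}

interface TokenCredential {
  getToken(scopes: string | string[]): Promise<AccessToken | null>;
}

// Hypothetical MSAL adapter -- replace with a real msal-browser call such as
// PublicClientApplication.acquireTokenSilent({ scopes }).
async function acquireMsalToken(
  scopes: string[]
): Promise<{ accessToken: string; expiresOn: Date }> {
  return {
    accessToken: "fake-token-for-" + scopes.join(","),
    expiresOn: new Date(Date.now() + 3600_000),
  };
}

// Custom credential class adapting the MSAL result to the TokenCredential
// shape, so it can be passed to a client constructor that expects one.
class MsalBrowserCredential implements TokenCredential {
  async getToken(scopes: string | string[]): Promise<AccessToken | null> {
    const scopeList = Array.isArray(scopes) ? scopes : [scopes];
    const result = await acquireMsalToken(scopeList);
    return {
      token: result.accessToken,
      expiresOnTimestamp: result.expiresOn.getTime(),
    };
  }
}
```

An instance of such a class could then be passed wherever a newer-generation client constructor accepts a TokenCredential.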
2025-04-01T04:54:45.745976
2020-11-19T00:12:57
746139786
{ "authors": [ "ramya-rao-a", "xirzec" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13273", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/12612" }
gharchive/issue
Remove core-tracing dependency in core-auth The @azure/core-auth package was meant to be a lightweight one to hold the types and interfaces to be used by anyone trying to implement the credentials used in our latest packages. In #11359, we are discussing using it in our older code generator so that the older packages can make use of @azure/identity as well. Since this package has a dependency on @azure/core-tracing only for types, we now end up pulling in an unnecessary tracing dependency as well. This issue is to consider removing the dependency on core-tracing from core-auth and instead duplicating the two types we pull in, i.e. SpanOptions and SpanContext. cc @xirzec, @joheredi I'm good with duplicating.
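An illustrative sketch of what duplicating the two tracing types could look like, so that @azure/core-auth no longer needs to import @azure/core-tracing. The shapes below follow the OpenTelemetry-style definitions; the exact field names are assumptions for illustration, not the package's actual declarations. The small helper shows a duplicated type in use by rendering a W3C `traceparent` header from a `SpanContext`.

```typescript
// Locally duplicated tracing types (assumed OpenTelemetry-style shapes),
// replacing an `import { SpanContext, SpanOptions } from "@azure/core-tracing"`.
interface SpanContext {
  traceId: string; // 32-char lowercase hex
  spanId: string; // 16-char lowercase hex
  traceFlags: number;
}

interface SpanOptions {
  parent?: SpanContext;
  attributes?: { [key: string]: unknown };
}

// Example consumer of the duplicated type: format a W3C Trace Context
// `traceparent` header (version "00" - traceId - spanId - 2-hex flags).
function toTraceparent(ctx: SpanContext): string {
  const flags = ctx.traceFlags.toString(16).padStart(2, "0");
  return `00-${ctx.traceId}-${ctx.spanId}-${flags}`;
}

// SpanOptions in use, purely to show the duplicated declaration compiles.
const exampleOptions: SpanOptions = { attributes: { "az.namespace": "Microsoft.Example" } };
void exampleOptions;
```

Since interfaces are erased at compile time, duplicating them adds no runtime weight to the package.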
2025-04-01T04:54:45.750281
2022-02-22T21:26:59
1147384101
{ "authors": [ "appleoddity", "lmazuel", "sadasant" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13274", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/20498" }
gharchive/issue
[Identity] Should we add support for tenantId on the ManagedIdentityCredential? On getToken, the ManagedIdentityCredential can receive a tenant Id since the GetTokenOptions type supports it. However, the ManagedIdentityCredential does not send the tenant Id on the outgoing requests. Should we support this? @sadasant I believe you cannot support tenant id on managed identity; since this calls a localhost endpoint, this is not how the flow is designed to work. The update we did in the Python doc is to say that when you implement the "get_token" protocol, you may silently ignore tenant_id if you can't do anything with it, as this parameter should be seen as a hint of how to get a valid token (it's designed with the challenge of KV in mind), not as a requirement that it has to be this tenant_id. This means if the hint doesn't apply to this implementation, it's safe to ignore it. @lmazuel oh ok! Gotcha. So, my approach to solve this issue will be just to add tests. Thank you! I've been beating my head against the wall on this. From what I'm reading, if I want to use ManagedIdentityCredential I can't specify the tenant ID, which I thought I could do. However, when I do, I always pull the managed identity from the default tenant and not the tenant that I specify. Which makes sense now. My understanding is that when I have a multi-tenant app (Azure Function) that uses a system managed identity, a Managed Identity will be created in other tenants when an admin consents to the access in those tenants. How then do I consume the managed identity in the other tenant, so that my multi-tenant function can access the Graph API of the other tenant? Everything I try just returns an access token for the tenant that is hosting the app.
My code looks like this:

```csharp
var credential = new DefaultAzureCredential();
var token = credential.GetToken(
    new Azure.Core.TokenRequestContext(
        new[] { "https://graph.microsoft.com/.default" },
        null,
        null,
        "<Tenant B ID>"));
var accessToken = token.Token;
```
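The "tenantId is a hint" contract described in the discussion above can be sketched as follows: a managed-identity-style credential accepts the option but silently ignores it, because its localhost token endpoint has no use for a tenant hint. All names and shapes here are simplified assumptions for illustration, not the real @azure/identity implementation.

```typescript
// Simplified mirrors of the @azure/identity option/token shapes (assumed).
interface GetTokenOptions {
  tenantId?: string;
}

interface AccessToken {
  token: string;
  expiresOnTimestamp: number;
}

// Toy managed-identity-like credential: a real one would call the local
// IMDS endpoint. The tenant hint is accepted but deliberately not forwarded,
// since the endpoint itself decides which tenant the token is for.
class FakeManagedIdentityCredential {
  async getToken(
    scopes: string | string[],
    options?: GetTokenOptions
  ): Promise<AccessToken> {
    void options?.tenantId; // hint accepted, silently ignored
    const scope = Array.isArray(scopes) ? scopes[0] : scopes;
    return {
      token: `imds-token-for-${scope}`,
      expiresOnTimestamp: Date.now() + 3600_000,
    };
  }
}
```

This also matches the multi-tenant behavior reported above: passing a tenant ID to such a credential does not change which tenant's identity is used.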
2025-04-01T04:54:45.753179
2022-06-10T22:01:27
1268057931
{ "authors": [ "azure-sdk", "v-alje", "xirzec" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13275", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/22208" }
gharchive/issue
Missing H1 for Quantum Area The following page on Docs is using H2 header elements in place of an H1 title header. Docs throws a warning when there is no H1 element at the top of the page. https://github.com/MicrosoftDocs/azure-docs-sdk-node/blob/main/docs-ref-services/preview/quantum-jobs-readme.md This appears to have been imported from the following readme file: sdk/quantum/quantum-jobs/README.md The readme file should be modified to use H1 title headers and re-imported to Docs, or the import process needs to be modified to change the headers. Label prediction was below confidence level 0.6 for Model:ServiceLabels: 'Storage:0.23352472,Azure.Core:0.13711227,Docs:0.04791413' Tracking in #22206
2025-04-01T04:54:45.755885
2023-05-31T20:59:31
1735013004
{ "authors": [ "deyaaeldeen", "diberry", "johanste" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13276", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/26064" }
gharchive/issue
OpenAI: GetChatCompletionsOptions.n - can n have a more obvious name The property n - "The number of chat completions choices that should be generated for a chat completions" - needs a better name, such as totalAllowedChatCompletions. Please update the source. As unfortunate as it is, this name is used in the API: https://platform.openai.com/docs/api-reference/chat/create#chat/create-n and the SDK is using the same names. /cc @bterlson @johanste @deyaaeldeen How did that get past the API review board? This API is owned by OpenAI. Ok.
2025-04-01T04:54:45.757280
2020-10-16T14:28:28
723275478
{ "authors": [ "HarshaNalluru", "mohsin-mehmood", "ramya-rao-a" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13277", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/pull/11887" }
gharchive/pull-request
[Service Bus] "message" word added to "createBatch", "CreateBatchOptions" and "tryAdd" PR for #11878 /azp run js - servicebus - tests
2025-04-01T04:54:45.758099
2021-06-22T22:16:26
927675996
{ "authors": [ "maorleger" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13278", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/pull/15902" }
gharchive/pull-request
[core] - added changelog entries for recent changes I forgot the changelogs. I always forget the changelogs 🤷 /check-enforcer override
2025-04-01T04:54:45.759136
2021-09-28T09:46:13
1009526170
{ "authors": [ "azure-sdk", "qiaozha", "ramya-rao-a" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13279", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/pull/17906" }
gharchive/pull-request
Post release automated changes for eventgrid releases Post release automated changes for azure-arm-eventgrid @qiaozha Please take a look at the merge conflicts in this PR. Close this one as eventgrid has already been GAed.
2025-04-01T04:54:45.760718
2023-03-24T09:23:47
1639023108
{ "authors": [ "azure-sdk", "kazrael2119" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13280", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/pull/25361" }
gharchive/pull-request
imagebuilder release https://github.com/Azure/sdk-release-request/issues/3930 API change check APIView has identified API level changes in this PR and created following API reviews. azure-arm-imagebuilder
2025-04-01T04:54:45.763657
2023-09-07T20:40:04
1886536919
{ "authors": [ "azure-sdk", "benbp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13281", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/pull/27049" }
gharchive/pull-request
Move identity variable setting into test-resources-pre.ps1 Updating the identity live tests to remove logic and env setting from yaml in favor of a dedicated keyvault config and powershell script. This will make it possible to improve local and sovereign cloud testing, and make cross-language config updates more easily. Related: https://github.com/Azure/azure-sdk-for-net/pull/38473 API change check API changes are not detected in this pull request. Working through some testing issues Live tests: https://dev.azure.com/azure-sdk/internal/_build/results?buildId=3080889&view=results
2025-04-01T04:54:45.765236
2018-11-13T05:40:10
380068032
{ "authors": [ "AutorestCI" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13282", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/pull/475" }
gharchive/pull-request
[AutoPR hdinsight/resource-manager] [HDInsight] - Support KV URL Created to sync https://github.com/Azure/azure-rest-api-specs/pull/4449 This PR has been merged into https://github.com/Azure/azure-sdk-for-js/pull/515
2025-04-01T04:54:45.766940
2020-03-08T20:07:42
577557562
{ "authors": [ "annelo-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13283", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/10408" }
gharchive/issue
[Add methods:] Add method to retrieve results of completed train/analyze LROs Per .NET guidelines: https://azure.github.io/azure-sdk/dotnet_introduction.html#dotnet-longrunning Outstanding design issues here: https://github.com/Azure/azure-sdk-for-python/pull/9963#discussion_r388289970 Constructors have been added to Operation classes to resume LRO.
2025-04-01T04:54:45.785661
2020-05-14T13:05:53
618215288
{ "authors": [ "Arash-Sabet", "jsquire", "rokulka" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13284", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/12075" }
gharchive/issue
Azure QnA maker fails to create a knowledge base by raising ExtractionFailure error code Describe the bug The Azure QnA Maker SDK fails to create a knowledgebase from a URL powered by an Azure HTTP-triggered function, while the function is publicly available and only accepts GET requests. The response of the function is a pure string returned by "return new OkObjectResult(responseString)". Expected behavior The knowledgebase should be created from the given URL. Actual behavior (include Exception or Stack Trace) The following error message is produced: Unsupported / Invalid url(s). Failed to extract Q&A from the source To Reproduce Run the following code snippet in a console application:

```csharp
var createKbDto = new CreateKbDTO
{
    Name = request.Name,
    QnaList = new List<QnADTO>(),
    Urls = new List<string>
    {
        "https://caccea77.ngrok.io/api/jobdescription/facade/2034/43D672205B3106BE3273C60FE423C632"
    }
};
var createKb = await client.Knowledgebase.CreateAsync(createKbDto);
var createdOp = await MonitorOperationAsync(client, createKb);
return GetKbId(createdOp);
```

Environment: Microsoft.Azure.CognitiveServices.Knowledge.QnAMaker Version 1.1.0 IDE and version: Visual Studio 16.5.4 Environment: .NET Core SDK (reflecting any global.json): Version: 3.1.201 Commit: b1768b4ae7 Runtime Environment: OS Name: Windows OS Version: 10.0.18363 OS Platform: Windows RID: win10-x64 Base Path: C:\Program Files\dotnet\sdk\3.1.201\ Host (useful for support): Version: 3.1.3 Commit: 4a9f85e9f8 .NET Core SDKs installed: 2.1.802 [C:\Program Files\dotnet\sdk] 2.2.207 [C:\Program Files\dotnet\sdk] 2.2.402 [C:\Program Files\dotnet\sdk] 3.0.100-rc1-014190 [C:\Program Files\dotnet\sdk] 3.1.100-preview3-014645 [C:\Program Files\dotnet\sdk] 3.1.201 [C:\Program Files\dotnet\sdk] .NET Core runtimes installed: Microsoft.AspNetCore.All 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.7
[C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.0.0-rc1.19457.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.0-preview3.19555.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.NETCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.0.0-rc1-19456-20 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.0-preview3.19553.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.WindowsDesktop.App 3.0.0-rc1-19456-20 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.0-preview3.19553.2 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] @dfulcer Hi Dean, To be able to reproduce the problem please feed the following URL to QnAMaker's SDK: https://caccea77.ngrok.io/api/jobdescription/facade/2034/43D672205B3106BE3273C60FE423C632 Feeding this URL to QnAMaker portal results in the same error message 
too: Unsupported / Invalid url(s). Failed to extract Q&A from the source Does the web content produced by the URL above lack anything that QnAMaker expects? What causes QnAMaker to reject the content? /cc @milad-simcoeai @miladghafoori @dfulcer @jsquire Any updates on this ticket? Application Insights is quite silent in terms of logging what goes wrong. /cc @milad-simcoeai I am just the source of initial triage in this case. Unfortunately, that means that I don't have any insight to offer. @jsquire just wondering, who's the lead on QnA Maker, to have them engaged as quickly as possible? It's a showstopper and I'm sure there are or will be other folks experiencing the same problem. With regret, I do not know. This is not a library which the Azure SDK team owns at this point. The QnA Maker service team will need to assist, but beyond that I don't have insight. Each Azure service team has their own triage process once an issue has been identified and tagged. In this case, it would appear that they've identified @dfulcer as the point of contact. If this is a show stopping issue, I'd recommend opening an Azure support ticket. That will be a more formal and expedient route for support with a proper escalation path. That would ensure that someone is actively working to engage the proper folks for attention. My apologies that I don't have a better answer for you. This thread has come to the team now :( This is related to the QnAMaker extraction logic, not specific to the SDK. As per the error, the provided URL content didn't meet our extraction standard to generate any QnAs. However, I see that the provided URL doesn't exist anymore. Please close the thread if it's too late, or share a valid URL so that we can investigate. Thanks!
2025-04-01T04:54:45.792807
2021-04-20T18:36:53
863115900
{ "authors": [ "Arkatufus", "amnguye" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13285", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/20537" }
gharchive/issue
[BUG] BlobContainerClient.CreateIfNotExistsAsync returns a null Response object when container already exists. Describe the bug The bug happened here: https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/storage/Azure.Storage.Blobs/src/BlobContainerClient.cs#L1080 BlobContainerClient.CreateIfNotExistsAsync() returns default or null if said container already exists. Expected behavior It should return a Response object with the proper error code Actual behavior (include Exception or Stack Trace) It returns null Environment: Azure.Storage.Blobs 12.8.1 Hosting platform or OS and .NET runtime version: .NET SDK (reflecting any global.json): Version: 5.0.201 Commit: a09bd5c86c Runtime Environment: OS Name: Windows OS Version: 10.0.19041 OS Platform: Windows RID: win10-x64 Base Path: C:\Program Files\dotnet\sdk\5.0.201\ Host (useful for support): Version: 5.0.4 Commit: f27d337295 .NET SDKs installed: 1.0.4 [C:\Program Files\dotnet\sdk] 2.1.500 [C:\Program Files\dotnet\sdk] 2.1.812 [C:\Program Files\dotnet\sdk] 2.2.207 [C:\Program Files\dotnet\sdk] 3.1.202 [C:\Program Files\dotnet\sdk] 5.0.201 [C:\Program Files\dotnet\sdk] .NET runtimes installed: Microsoft.AspNetCore.All 2.1.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.1.24 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.1.26 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.All 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.1.24 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.1.26 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.13 
[C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 5.0.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App] Microsoft.NETCore.App 1.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 1.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.1.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.1.24 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.1.26 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 2.2.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.NETCore.App 5.0.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App] Microsoft.WindowsDesktop.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 3.1.13 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] Microsoft.WindowsDesktop.App 5.0.4 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App] IDE and version : Microsoft Visual Studio Community 2019 Version 16.9.2 Hi, CreateIfNotExists returning null or default is an expected result if the container already exists. A response of Response<BlobContainerInfo> cannot be returned in that case, because that would imply the container was successfully created. Users expect that if they receive a null or default response from this API, the container already exists (and that we don't throw an exception). If the container does not exist and was created, a Response<BlobContainerInfo> will be returned. (If we were to stop returning default or null in the case of the container already existing, this would be a breaking change.)
If you're looking for this method to throw an exception upon seeing a BlobErrorCode of ContainerAlreadyExists, please use the regular Create method.
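The contract described above — CreateIfNotExists returns a response only when it actually created the container, returns null/default when the container already exists, and the plain Create throws — can be sketched with a toy in-memory stand-in. This is a simplified model for illustration, not the Azure.Storage.Blobs API.

```typescript
// Toy model of the create / createIfNotExists contract discussed above.
interface ContainerInfo {
  name: string;
}

class InMemoryContainerService {
  private containers = new Set<string>();

  // Returns creation info only when the container was actually created;
  // returns undefined (the "default" response) when it already exists.
  createIfNotExists(name: string): ContainerInfo | undefined {
    if (this.containers.has(name)) {
      return undefined; // already exists: no response, no error
    }
    this.containers.add(name);
    return { name };
  }

  // Throws when the container already exists, mirroring the behavior of the
  // regular Create method (which surfaces ContainerAlreadyExists).
  create(name: string): ContainerInfo {
    if (this.containers.has(name)) {
      throw new Error("ContainerAlreadyExists");
    }
    this.containers.add(name);
    return { name };
  }
}
```

Callers who need an error on "already exists" use `create`; callers who treat an existing container as success check the `createIfNotExists` result for undefined.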
2025-04-01T04:54:45.794671
2022-08-23T20:54:18
1348532951
{ "authors": [ "nisha-bhatia" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13286", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/30691" }
gharchive/issue
Explore multiple calls to GZip WriteTo and TryComputeLength methods To be completed before GA on Sept 29:
- Can we WriteTo and then continue to add Json?
- Add a test for writing to the stream, getting the length, and then writing some more.
- Are gzip streams appendable?

https://gist.github.com/KrzysztofCwalina/f94e76a50c78968fe9c7b3df99a73eed
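On the "are gzip streams appendable?" question above: at the format level, complete gzip members may be concatenated back to back, and a conforming decompressor emits the concatenation of their contents. The sketch below demonstrates that property using Node's built-in zlib (whose gunzip handles multi-member streams); it illustrates the format-level question, not the .NET Azure.Core implementation under discussion.

```typescript
import * as zlib from "zlib";

// Compress each part as its own complete gzip member and append the members.
function gzipAppend(parts: string[]): Buffer {
  return Buffer.concat(parts.map((p) => zlib.gzipSync(Buffer.from(p))));
}

// Decompress; Node's gunzip understands concatenated gzip members and
// returns the concatenation of their decompressed contents.
function gunzipAll(data: Buffer): string {
  return zlib.gunzipSync(data).toString("utf8");
}
```

So appending after a write is possible by starting a new member, at the cost of some per-member header/trailer overhead, which is also why a precomputed length becomes stale once more content is appended.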
2025-04-01T04:54:45.805521
2023-09-05T19:33:45
1882605895
{ "authors": [ "annelo-msft", "jsquire", "kalbert312" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13287", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/38505" }
gharchive/issue
[BUG] ObjectDisposedException when mocking HttpResponseMessage content via a System.Net.Http.DelegatingHandler on the underlying HTTP client. Library name and version Azure.ResourceManager.Resources 1.6.0, Azure.ResourceManager 1.7.0, Azure.Core 1.34.0 Describe the bug I'm encountering an ObjectDisposedException in our unit test environment that uses a DelegatingHandler to mock out response content. See the repro steps for relevant code snippets. After debugging, I discovered the following: In Azure.Core.Pipeline.HttpClientTransport::ProcessAsync(HttpMessage message, bool async), the StringContent gets read as a MemoryStream and is passed to the SDK's response abstraction PipelineResponse which is an IDisposable that when disposed, will dispose the underlying HttpResponseMessage and its content. In Azure.Core.Pipeline.ResponseBodyPolicy, if the content stream is a non-seekable stream and message.BufferResponse is true, a setter message.Response.ContentStream = bufferedStream is called. This invokes the overridden setter in Azure.Core.Pipeline.PipelineResponse which nulls the Content on HttpResponseMessage This step does not occur in the unit test setup because responseContentStream.CanSeek is true for a MemoryStream In Azure.ResourceManager.Resources.ArmDeploymentResource::UpdateAsync(...) the HttpMessage is disposed after CreateOrUpdateAtScopeAsync is done, which will dispose the StringContent on the response since it was not nulled by the previous point. The SDK then wraps the response with a ResourcesArmOperation which leads to the disposed exception. Expected behavior Be able to mock a long running operation's intermediate response at the HTTP layer via DelegatingHandler. Actual behavior System.ObjectDisposedException: Cannot access a closed Stream. 
at System.IO.__Error.StreamIsClosed()
at System.IO.MemoryStream.get_Length()
at Azure.Core.NextLinkOperationImplementation.IsFinalState(Response response, HeaderSource headerSource, Nullable`1& failureState, String& resourceLocation)
at Azure.Core.NextLinkOperationImplementation.Create(HttpPipeline pipeline, RequestMethod requestMethod, Uri startRequestUri, Response response, OperationFinalStateVia finalStateVia, Boolean skipApiVersionOverride, String apiVersionOverrideValue)
at Azure.Core.NextLinkOperationImplementation.Create[T](IOperationSource`1 operationSource, HttpPipeline pipeline, RequestMethod requestMethod, Uri startRequestUri, Response response, OperationFinalStateVia finalStateVia, Boolean skipApiVersionOverride, String apiVersionOverrideValue)
at Azure.ResourceManager.Resources.ResourcesArmOperation`1..ctor(IOperationSource`1 source, ClientDiagnostics clientDiagnostics, HttpPipeline pipeline, Request request, Response response, OperationFinalStateVia finalStateVia, Boolean skipApiVersionOverride, String apiVersionOverrideValue)
at Azure.ResourceManager.Resources.ArmDeploymentResource.<UpdateAsync>d__20.MoveNext()

Reproduction Steps The mock delegating handler:

```csharp
public class MockRequestHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // ...
        var response = new HttpResponseMessage();
        response.Content = new StringContent(/* mock json */);
        // it does NOT call base.SendAsync()
        // ...
        return Task.FromResult(response);
    }
}
```

The http client:

```csharp
var webRequestHandler = new WebRequestHandler { AllowAutoRedirect = false };
var httpClient = new HttpClient(
    handler: HttpClientFactory.CreatePipeline(innerHandler: webRequestHandler, handlers: delegatingHandlers), // mock handler goes here
    disposeHandler: true);
```

The ArmClient is initialized as follows:

```csharp
var armClientOptions = new ArmClientOptions
{
    // ...
    Transport = new HttpClientTransport(httpClient) // http client with the delegating handler
};
var armClient = new ArmClient(tokenCredential, default, armClientOptions);
```

The ArmClient call:

```csharp
var armDeploymentSdkResource = armClient.GetArmDeploymentResource(/* ResourceIdentifier */);
var deploymentOperation = await armDeploymentSdkResource
    .UpdateAsync(WaitUntil.Started, deploymentRequestInput, this.CancellationToken);
```

Environment Windows 11 System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription = ".NET Framework 4.8.9167.0" JetBrains Rider 2023.2.1 Thank you for your feedback. Tagging and routing to the team member best able to assist. //cc: @m-nash, @annelo-msft I think this may turn out to be the same root cause as #38219, with what we currently suspect is the root cause discussed here. Thanks, @jsquire! I was planning to spend some time looking at https://github.com/Azure/azure-sdk-for-net/issues/38219 today, so I'll look at this one as well while I'm doing that. @kalbert312, thanks for a really nice investigation and repro case! I have confirmed that this is the same issue as we're looking at in https://github.com/Azure/azure-sdk-for-net/issues/38219. I'm going to close it as a duplicate, but I'm also tagging the other one as Azure.Core, and will try to turn around a fix soon. Thanks for reporting this! Reopening this as no longer a duplicate of the first one.
2025-04-01T04:54:45.839314
2019-12-30T22:42:12
544032828
{ "authors": [ "JimSuplizio", "JoshLove-msft", "seanmcc-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13288", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/9274" }
gharchive/issue
Either update or remove use of LoggingExtensions

Throughout the Storage API, we call logging extensions such as LogMethodEnter, LogMethodExit, LogException, etc. Currently these extensions don't do anything, as they are behind a Conditional compilation attribute that isn't enabled, e.g.:

```csharp
[Conditional("EnableLoggingHelpers")]
public static void LogMethodExit(
    this HttpPipeline pipeline,
    string className,
    [CallerMemberName] string member = default,
    string message = "")
    => LogTrace(pipeline, $"EXIT METHOD {className} {member}\n{message}");
```

If we want this logging, we should define EnableLoggingHelpers and do any other updates that are needed. If we don't need this, we can delete this file and remove all calls to these methods.

I vote we remove the LoggingExtensions.

Hi @JoshLove-msft, we deeply appreciate your input into this project. Regrettably, this issue has remained inactive for over 2 years, leading us to the decision to close it. We've implemented this policy to maintain the relevance of our issue queue and facilitate easier navigation for new contributors. If you still believe this topic requires attention, please feel free to create a new issue, referencing this one. Thank you for your understanding and ongoing support.
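The reason the helpers above "don't do anything" is that C#'s `[Conditional]` attribute removes every call site at compile time when the named symbol is undefined. A rough Python analogue of the same pattern, using a decorator that swaps the helper for a no-op unless a flag is set, may make that behavior concrete; the flag and function names here are illustrative only and not part of any Azure SDK.

```python
ENABLE_LOGGING_HELPERS = False  # analogous to the undefined "EnableLoggingHelpers" symbol


def conditional(enabled):
    """Return the function unchanged when enabled, else a no-op stand-in."""
    def wrap(func):
        if enabled:
            return func

        def noop(*args, **kwargs):  # call sites remain in place, but do nothing
            return None
        return noop
    return wrap


calls = []


@conditional(ENABLE_LOGGING_HELPERS)
def log_method_exit(class_name, member):
    calls.append(f"EXIT METHOD {class_name} {member}")


log_method_exit("BlobClient", "Download")
print(len(calls))  # 0: the helper is a no-op, just like the C# extensions today
```

Flipping the flag to `True` restores real logging, which is exactly the "define EnableLoggingHelpers" option discussed in the issue.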
2025-04-01T04:54:45.841981
2020-04-28T01:32:34
607958463
{ "authors": [ "azuresdkci", "jsquire", "tstepanski" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13289", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/11630" }
gharchive/pull-request
Performed Code Cleanup Removed redundant code Standardized naming and style Fixed comments Fixed warnings Can one of the admins verify this patch? As the owners of this version of the client library, @nemakam, @shankarsama and team would be the authoritative voice for feedback here. @axisc and @shankarsama - Would you please be so kind as to provide feedback to @tstepanski and advise if these changes are something that you'd like to consider or if we should look to close out the PR? Hi @tstepanski. Thank you for your contribution, and I'm sorry that you haven't received any feedback. Unfortunately, it does not look as if the Service Bus team would like to consider these changes at this point in time. I'm going to close this out, since there hasn't been any recent activity or engagement. Please feel free to reopen if you'd like to continue working on these changes.
2025-04-01T04:54:45.846590
2021-09-17T21:09:38
999722907
{ "authors": [ "Candelit", "pakrym" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13290", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/24097" }
gharchive/pull-request
Swallow NotImplementedException in EventSource name deduplication logic

Fixes: https://github.com/Azure/azure-sdk-for-net/issues/24055

Hi. I am part of the now closed report #24055 submitted by Muhammet Sahin and we find ourselves blocked from publishing our xamarin based mobile app to Appstore due to this. All worked/works fine for some reason while testing the same build of the app from AppCenter, but as we now move to the next phase and add it to Appstore and Testflight, this issue emerged... In what timeframe can we get access to a fix for testing in our app?

Best regards
Thomas Odell Balkeståhl

In addition, there is a nightly feed: pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json. You can get the latest set of packages and test the fix.

Hi. We have tried to downgrade, but it got even worse; now the app won't even start at all. We get 2-3 crashes at start and then it is dead. Do you know if the deployment via Testflight/Appstore affects this in any way? Anything that 'manipulates' the versions of the packages? Most likely, it is as you say, that the bug was introduced in aug, but we had a working app in AppCenter(Microsoft) and the exact same build published on Testflight triggered the bug.

Update: We have now managed to get our app running. It was due to the bug in azure.storage/azure.core, but also in relation to the experimental flags in Xamarin. https://docs.microsoft.com/en-us/xamarin/xamarin-forms/internals/experimental-flags (With 'we' I'm referring to our big hero @muhammetsahin who managed to solve it with no blame to himself)
2025-04-01T04:54:45.851354
2022-05-05T22:33:12
1227238427
{ "authors": [ "azure-sdk", "heaths" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13291", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/28566" }
gharchive/pull-request
Add EntityId to secrets, keys, and certificates Fixes #28564 API change check for Azure.Security.KeyVault.Certificates API changes have been detected in Azure.Security.KeyVault.Certificates. You can review API changes here API changes + public string EntityId { get; } API change check for Azure.Security.KeyVault.Keys API changes have been detected in Azure.Security.KeyVault.Keys. You can review API changes here API changes + public string EntityId { get; } API change check for Azure.Security.KeyVault.Secrets API changes have been detected in Azure.Security.KeyVault.Secrets. You can review API changes here API changes + public string EntityId { get; } Waiting for 7.4-preview.1 to deploy so I can record tests and write assertions.
2025-04-01T04:54:45.857444
2023-02-15T23:18:12
1586729774
{ "authors": [ "AlexanderSher", "KrzysztofCwalina", "azure-sdk", "seanmcc-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13292", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/34195" }
gharchive/pull-request
Use ArrayBackedPropertyBag for PipelineRequest

Benchmarks:

| Categories | count | Method | Mean | Error | StdDev | Ratio | Gen 0 | Allocated |
|------------------------------ |------ |------------------ |------------:|---------:|---------:|------:|-------:|----------:|
| CreateHttpRequestMessage | 2 | HttpRequestHeader | 656.1 ns | 2.51 ns | 2.35 ns | 1.00 | 0.0048 | 496 B |
| CreateHttpRequestMessage | 2 | ArrayBackedHeader | 470.6 ns | 0.74 ns | 0.58 ns | 0.72 | 0.0048 | 496 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 3 | HttpRequestHeader | 727.0 ns | 2.69 ns | 2.52 ns | 1.00 | 0.0048 | 496 B |
| CreateHttpRequestMessage | 3 | ArrayBackedHeader | 608.6 ns | 2.65 ns | 2.35 ns | 0.84 | 0.0076 | 776 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 8 | HttpRequestHeader | 1,287.3 ns | 4.97 ns | 4.40 ns | 1.00 | 0.0114 | 1,104 B |
| CreateHttpRequestMessage | 8 | ArrayBackedHeader | 1,180.9 ns | 14.22 ns | 13.30 ns | 0.92 | 0.0134 | 1,384 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 16 | HttpRequestHeader | 2,490.9 ns | 6.50 ns | 5.43 ns | 1.00 | 0.0267 | 2,736 B |
| CreateHttpRequestMessage | 16 | ArrayBackedHeader | 2,504.4 ns | 6.96 ns | 5.81 ns | 1.01 | 0.0305 | 3,016 B |
| | | | | | | | | |
| CreateHttpRequestMessage | 32 | HttpRequestHeader | 4,374.9 ns | 12.06 ns | 10.07 ns | 1.00 | 0.0381 | 4,120 B |
| CreateHttpRequestMessage | 32 | ArrayBackedHeader | 5,410.5 ns | 21.57 ns | 19.12 ns | 1.24 | 0.0458 | 4,656 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 2 | HttpRequestHeader | 1,324.0 ns | 5.69 ns | 5.04 ns | 1.00 | 0.0114 | 1,256 B |
| CreateHttpRequestMessageTwice | 2 | ArrayBackedHeader | 950.8 ns | 4.91 ns | 4.35 ns | 0.72 | 0.0095 | 1,032 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 3 | HttpRequestHeader | 1,506.9 ns | 5.77 ns | 5.40 ns | 1.00 | 0.0134 | 1,336 B |
| CreateHttpRequestMessageTwice | 3 | ArrayBackedHeader | 1,137.6 ns | 0.91 ns | 0.80 ns | 0.76 | 0.0134 | 1,312 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 8 | HttpRequestHeader | 2,855.4 ns | 15.61 ns | 14.60 ns | 1.00 | 0.0305 | 2,888 B |
| CreateHttpRequestMessageTwice | 8 | ArrayBackedHeader | 2,187.8 ns | 2.24 ns | 1.75 ns | 0.77 | 0.0267 | 2,528 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 16 | HttpRequestHeader | 5,452.2 ns | 15.40 ns | 12.86 ns | 1.00 | 0.0687 | 6,792 B |
| CreateHttpRequestMessageTwice | 16 | ArrayBackedHeader | 4,419.1 ns | 29.19 ns | 27.31 ns | 0.81 | 0.0610 | 5,792 B |
| | | | | | | | | |
| CreateHttpRequestMessageTwice | 32 | HttpRequestHeader | 9,906.1 ns | 11.67 ns | 9.11 ns | 1.00 | 0.1068 | 10,840 B |
| CreateHttpRequestMessageTwice | 32 | ArrayBackedHeader | 8,834.6 ns | 20.36 ns | 15.90 ns | 0.89 | 0.0916 | 8,816 B |
| | | | | | | | | |
| MultipleReads | 8 | HttpRequestHeader | 2,976.5 ns | 11.14 ns | 9.30 ns | 1.00 | 0.0229 | 2,472 B |
| MultipleReads | 8 | ArrayBackedHeader | 1,447.9 ns | 2.72 ns | 2.55 ns | 0.49 | 0.0153 | 1,472 B |
| | | | | | | | | |
| MultipleReads | 16 | HttpRequestHeader | 5,971.1 ns | 19.34 ns | 17.14 ns | 1.00 | 0.0534 | 5,448 B |
| MultipleReads | 16 | ArrayBackedHeader | 3,195.2 ns | 24.38 ns | 22.80 ns | 0.54 | 0.0305 | 3,168 B |
| | | | | | | | | |
| MultipleReads | 32 | HttpRequestHeader | 11,225.7 ns | 40.46 ns | 35.87 ns | 1.00 | 0.0916 | 9,520 B |
| MultipleReads | 32 | ArrayBackedHeader | 7,953.4 ns | 37.65 ns | 35.22 ns | 0.71 | 0.0458 | 4,936 B |

First scenario - CreateHttpRequestMessage - is the base one, when we create a request and send it directly to the socket. In case of less than 16 headers, ArrayBackedPropertyBag is faster.

Second scenario - CreateHttpRequestMessageTwice - simulates the retry case. Here, even with 32 headers the benefit is about 10%.

Third scenario - MultipleReads - simulates the Azure.Storage case when headers are used to create a signature.

API change check

API changes are not detected in this pull request.
It would be good if this PR description started with a note explaining about what this PR does, as opposed to starting with a benchmark table :-) But I am not surprised this improves perf. The headers collection is pretty inefficient. We should send this benchmark and the scenarios we have (changing header values) that make the BCL headers collection suboptimal. Maybe this can be fixed in the BCL so that we don't have to write code like in this PR. This PR appears to have caused test flakiness in the .NET Live tests pipeline - Beginning on 2/17, we have seen intermittent test failures in our live batch tests - https://dev.azure.com/azure-sdk/internal/_build/results?buildId=2203247&view=ms.vss-test-web.build-test-results-tab&runId=39465973&resultId=100295&paneView=debug Based on the timing, it appears it was caused by this commit. @AlexanderSher @amnguye
2025-04-01T04:54:45.858798
2023-06-01T10:31:41
1735996754
{ "authors": [ "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13293", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/36776" }
gharchive/pull-request
Increment version for signalr releases Increment package version after release of Azure.ResourceManager.SignalR API change check API changes are not detected in this pull request.
2025-04-01T04:54:45.865072
2017-10-13T17:21:02
265360827
{ "authors": [ "schaabs" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13294", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/3782" }
gharchive/pull-request
[KeyVault] [Do Not Merge] Adding ECC key support Description These changes introduce ECC key support to the Key Vault SDK. They correspond to the swagger update https://github.com/Azure/azure-rest-api-specs/pull/1724. This checklist is used to make sure that common guidelines for a pull request are followed. [x] I have read the contribution guidelines. [x] The pull request does not introduce breaking changes. General Guidelines [ ] Title of the pull request is clear and informative. [ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [ ] Pull request includes test coverage for the included changes. SDK Generation Guidelines [ ] If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above. [ ] The generate.cmd file for the SDK has been updated with the version of AutoRest, as well as the commitid of your swagger spec or link to the swagger spec, used to generate the code. [ ] The *.csproj and AssemblyInfo.cs files have been updated with the new version of the SDK. closing since this will be merged into the KvDev branch instead. https://github.com/Azure/azure-sdk-for-net/pull/3815
2025-04-01T04:54:45.873043
2023-09-01T00:14:33
1876465694
{ "authors": [ "TimothyMothra", "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13295", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/38459" }
gharchive/pull-request
[AzureMonitorExporter] resolve AOT warnings

Related: #37734. This PR mitigates AOT warnings in the Azure.Monitor.OpenTelemetry.Exporter library. Azure.Monitor.OpenTelemetry.Exporter has 14 total warnings.

AzureMonitorExporterEventSource.cs

IL2026:RequiresUnreferencedCodeAttribute: Using member 'System.Diagnostics.Tracing.EventSource.WriteEvent(Int32,Object[])' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. EventSource will serialize the whole object graph. Trimmer will not safely handle this case because properties may be trimmed. This can be suppressed if the object is a primitive type.

The fix is to decorate these methods with:

```csharp
[UnconditionalSuppressMessage("ReflectionAnalysis", "IL2026:RequiresUnreferencedCode", Justification = "Parameters to this method are primitive and are trimmer safe.")]
```

AzureMonitorStatsbeat.GetVmMetadataResponse & IngestionResponseHelper.GetErrorsFromResponse

IL2026:RequiresUnreferencedCodeAttribute: Using member 'System.Text.Json.JsonSerializer.Deserialize(String,JsonSerializerOptions)' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. JSON serialization and deserialization might require types that cannot be statically analyzed. Use the overload that takes a JsonTypeInfo or JsonSerializerContext, or make sure all of the required types are preserved.

IL3050:RequiresDynamicCodeAttribute: Using member 'System.Text.Json.JsonSerializer.Deserialize(String,JsonSerializerOptions)' which has 'RequiresDynamicCodeAttribute' can break functionality when AOT compiling. JSON serialization and deserialization might require types that cannot be statically analyzed and might need runtime code generation. Use System.Text.Json source generation for native AOT applications.

The fix is to use source generation.

Models\StackFrame

IL2026:RequiresUnreferencedCodeAttribute: Using member 'System.Diagnostics.StackFrame.GetMethod()' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. Metadata for the method might be incomplete or removed.

The fix is to decorate with UnconditionalSuppressMessage. GetMethod() may return null. In this case we will fall back to ToString().

LogsHelper.GetProblemId

IL2026:RequiresUnreferencedCodeAttribute: Using member 'System.Diagnostics.StackFrame.GetMethod()' which has 'RequiresUnreferencedCodeAttribute' can break functionality when trimming application code. Metadata for the method might be incomplete or removed.

The fix is to decorate with UnconditionalSuppressMessage. GetMethod() may return null. In this case we will fall back to ToString().

API change check

API changes are not detected in this pull request.

@vitek-karas, @Yun-Ting, @m-redding Please help with this review :)
2025-04-01T04:54:45.876552
2023-09-11T22:20:39
1891359488
{ "authors": [ "ArthurMa1978", "archerzz", "azure-sdk", "rohantagaru" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13296", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/38614" }
gharchive/pull-request
Azure Deployment Manager is being decommissioned. Remove its .NET SDK through this PR

Contributing to the Azure SDK

Please see our CONTRIBUTING.md if you are not familiar with contributing to this repository or have questions. For specific information about pull request etiquette and best practices, see this section.

@ArthurMa1978 I remember there is a decommission process to follow?

@rohantagaru can you provide official announcement of this deprecation?

API change check

API changes are not detected in this pull request.

From @rohantagaru, this RP has been removed from the CLI last year: https://github.com/Azure/azure-cli-extensions/pull/4653
2025-04-01T04:54:45.880192
2019-05-01T22:47:20
439357390
{ "authors": [ "dsgouda", "solankisamir", "weshaggard" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13297", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/6038" }
gharchive/pull-request
Add support for schema create or update for Swagger, WSDL and OpenApi

Fix based on REST Spec https://github.com/Azure/azure-rest-api-specs/pull/5824

@dsgouda the build failure is related to an EventHub test.

@weshaggard Please take a look at the failures.

@dsgouda I wanted to add an extension to one of the models for better usability. Give me an hour.

@dsgouda I re-queued the failing test leg but I don't believe it is related to the changes in this PR so if it fails again merge anyway.

@jsquire looks like another EventHubs test reliability failure:

```
Failed Microsoft.Azure.EventHubs.Tests.ServiceFabricProcessor.OptionsTests.RuntimeInformationTest
Error Message:
  Assert.True() Failure
  Expected: True
  Actual:   False
Stack Trace:
  at Microsoft.Azure.EventHubs.Tests.ServiceFabricProcessor.OptionsTests.RuntimeInformationTest() in D:\a\1\s\sdk\eventhub\Microsoft.Azure.EventHubs\tests\ServiceFabricProcessor\OptionsTests.cs:line 191
```

I added a note about this test in issue https://github.com/Azure/azure-sdk-for-net/issues/5995, and if we see it keep failing then I'll disable it.

@dsgouda I have pushed my changes. Feel free to merge it as soon as CI passes or fails with EventHub test failure.

@dsgouda can you merge this.
2025-04-01T04:54:45.894245
2020-08-24T23:02:43
685039671
{ "authors": [ "yunhaoling" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13298", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/13301" }
gharchive/issue
[Schema Registry] Cross language design review

Ongoing discussion:

- Design: API: what should the method return?
  - Schema: content, SchemaProperties
  - SchemaProperties: id, version, ...
- Design: API: SRAvroSerializer.serialize(..., schema): what's the expected type of the parameter schema? If string/bytes, then it's the SDK's duty to normalize (remove space from the input: \n\t, etc.) or the service would handle it. Laurent: good with bytes and string for p1.
- Design: Naming
  - response type (object) SchemaId: in JS it's called SchemaIdResponse; probably we should consider a better/different name? What's the naming convention here?
  - Schema: same as the question above: content/string/schema. Option: all return the same object type, e.g. SchemaProperties.
  - parameter name
- Design: SchemaProperties: dict mixin support?
- Impl: Encoding: big/small endian problem for the id/format identifier. Shall we use struct.pack/unpack to construct the payload? Ask the service team/other languages how they implement this.
- Impl: Dependency: which packages are required for the user (dependency)? Laurent: postpone aio implementation for later, only do the sync avro serializer now.
- Others: generate schema from class/type? The input being an object, is it pythonic? Future discussion, not now, but protobuf will probably need to support this.
- Eng: Doc: doc auto generation (need to ask Scott); samples and sample readme; release to official ms website; API reference.

Finished discussion:

- Impl: Parsing in sr and avsr: should we remove all the space (regular expression "\s") in the schema string the user passes into our sdk? It's the service's duty.
- Design: Typing: serialization type: string vs enum vs both (class a(str, Enum)); auto register schema for SR Serializer?

Data collected and moved into a OneNote page; will spawn separate issues for each task.
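The struct.pack/unpack question in the notes above is about framing a payload with a fixed-width, big-endian prefix. The exact Schema Registry wire format is precisely what was under discussion, so the layout below (a 4-byte big-endian format indicator followed by a 32-byte schema ID and the Avro body) is a hypothetical sketch of the mechanics only:

```python
import struct

# Hypothetical layout: 4-byte big-endian format indicator, 32-byte schema ID,
# then the Avro-encoded body. ">I" = big-endian unsigned 32-bit int.
FORMAT_INDICATOR = 0


def frame(schema_id: bytes, body: bytes) -> bytes:
    """Prefix the body with the format indicator and schema ID."""
    assert len(schema_id) == 32
    return struct.pack(">I", FORMAT_INDICATOR) + schema_id + body


def unframe(payload: bytes):
    """Split a framed payload back into (indicator, schema_id, body)."""
    (indicator,) = struct.unpack_from(">I", payload, 0)
    return indicator, payload[4:36], payload[36:]


payload = frame(b"0" * 32, b"\x02avro-bytes")
indicator, schema_id, body = unframe(payload)
print(indicator, body)  # 0 b'\x02avro-bytes'
```

Using ">" in the format string fixes the byte order regardless of the host CPU, which is the cross-language concern the notes raise.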
2025-04-01T04:54:45.897537
2021-04-26T21:05:41
868202955
{ "authors": [ "jbeauregardb", "juancamilor" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13299", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/18314" }
gharchive/issue
Chat Samples are not able to run in the pipeline Package Name: azure-communication-chat Describe the bug We want to make the Python ACS samples run in the pipeline to have an extra security layer when we do our releases. We refactored our TNM, Identity and SMS samples to make them able to run in the pipeline, however, chat samples are not being able to work because of some special environment variables they need in order to run successfully. We are seeing references to an AZURE_COMMUNICATION_SERVICE_ENDPOINT env variable in the samples. This env variable doesn't exist in the pipeline so it should be removed fron the samples or added to the key vault from the resource we use to test to avoid any inconsistencies with the env variables the Chat Client needs to initiate. @juancamilor This is the PR I opened to add this feature https://github.com/Azure/azure-sdk-for-python/pull/18234 If you go to the pipeline logs you can see exactly where are the errors that need to be addressed. @LuChen-Microsoft FYI
2025-04-01T04:54:45.900175
2022-03-21T19:17:29
1175833167
{ "authors": [ "mccoyp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13300", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/23592" }
gharchive/issue
[Test Proxy] Remove custom default matcher setup in proxy_startup Per https://github.com/Azure/azure-sdk-for-python/pull/23148, we now call set_custom_default_matcher within proxy_startup.py in order to preserve backwards compatibility and ignore headers that we now omit from recordings. Eventually, once recordings are free from these headers, we should remove this call and use the default matcher upon startup. Note: at this point, we should also revert any set_custom_default_matcher calls that set bodiless matching to set_bodiless_matcher. The linked PR has details about this change as well. This is tracked by https://github.com/Azure/azure-sdk-for-python/issues/34897.
2025-04-01T04:54:45.906592
2022-05-11T02:29:03
1231918217
{ "authors": [ "msyyc", "rguptar", "scbedd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13301", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/24387" }
gharchive/issue
Broken links in Azure Resources libraries for Python The links to the packages are broken. For example, clicking azure.mgmt.resources.features takes you to https://docs.microsoft.com/en-us/python/api/azure.mgmt.resource.features instead of https://docs.microsoft.com/en-us/python/api/azure-mgmt-resource/azure.mgmt.resource.features Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: e19ddc50-3cc0-eded-49ed-e1f2552a421c Version Independent ID: 391c4753-337c-8a9f-fa58-2c6fc95cb6df Content: Azure Resources libraries for Python Content Source: docs-ref-services/latest/azure.mgmt.resource.md Service: resources Product: azure Technology: azure GitHub Login: @lisawong19 Microsoft Alias: ramyar Hi @scbedd could you help merge the fix PR or address proper person to merge it? Thanks! @scbedd can you review this when you get a chance? Github auto-closed the issue when I merged the PR. Re-opening until the change is actually visible on docs.ms. The issue was fixed already.
2025-04-01T04:54:45.925921
2018-10-16T14:56:55
370655181
{ "authors": [ "irisava", "lmazuel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13302", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/3623" }
gharchive/issue
No module named 'azure.storage'

Hello, I have tried the instruction to install, reinstall the azure package. However, all the time I am still getting the ModuleNotFoundError: No module named 'azure.storage'. Following are the results of pip freeze:

```
azure==4.0.0
azure-applicationinsights==0.1.0
azure-batch==4.1.3
azure-common==1.1.16
azure-cosmosdb-nspkg==2.0.2
azure-cosmosdb-table==1.0.5
azure-datalake-store==0.0.34
azure-eventgrid==1.2.0
azure-graphrbac==0.40.0
azure-keyvault==1.1.0
azure-loganalytics==0.1.0
azure-mgmt==4.0.0
azure-mgmt-advisor==1.0.1
azure-mgmt-applicationinsights==0.1.1
azure-mgmt-authorization==0.50.0
azure-mgmt-batch==5.0.1
azure-mgmt-batchai==2.0.0
azure-mgmt-billing==0.2.0
azure-mgmt-cdn==3.0.0
azure-mgmt-cognitiveservices==3.0.0
azure-mgmt-commerce==1.0.1
azure-mgmt-compute==4.3.1
azure-mgmt-consumption==2.0.0
azure-mgmt-containerinstance==1.2.0
azure-mgmt-containerregistry==2.2.0
azure-mgmt-containerservice==4.2.2
azure-mgmt-cosmosdb==0.4.1
azure-mgmt-datafactory==0.6.0
azure-mgmt-datalake-analytics==0.6.0
azure-mgmt-datalake-nspkg==2.0.0
azure-mgmt-datalake-store==0.5.0
azure-mgmt-datamigration==1.0.0
azure-mgmt-devspaces==0.1.0
azure-mgmt-devtestlabs==2.2.0
azure-mgmt-dns==2.1.0
azure-mgmt-eventgrid==1.0.0
azure-mgmt-eventhub==2.1.0
azure-mgmt-hanaonazure==0.1.1
azure-mgmt-iotcentral==0.1.0
azure-mgmt-iothub==0.5.0
azure-mgmt-iothubprovisioningservices==0.2.0
azure-mgmt-keyvault==1.1.0
azure-mgmt-loganalytics==0.2.0
azure-mgmt-logic==3.0.0
azure-mgmt-machinelearningcompute==0.4.1
azure-mgmt-managementgroups==0.1.0
azure-mgmt-managementpartner==0.1.0
azure-mgmt-maps==0.1.0
azure-mgmt-marketplaceordering==0.1.0
azure-mgmt-media==1.0.0
azure-mgmt-monitor==0.5.2
azure-mgmt-msi==0.2.0
azure-mgmt-network==2.2.1
azure-mgmt-notificationhubs==2.0.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-policyinsights==0.1.0
azure-mgmt-powerbiembedded==2.0.0
azure-mgmt-rdbms==1.4.0
azure-mgmt-recoveryservices==0.3.0
azure-mgmt-recoveryservicesbackup==0.3.0
azure-mgmt-redis==5.0.0
azure-mgmt-relay==0.1.0
azure-mgmt-reservations==0.2.1
azure-mgmt-resource==2.0.0
azure-mgmt-scheduler==2.0.0
azure-mgmt-search==2.0.0
azure-mgmt-servicebus==0.5.2
azure-mgmt-servicefabric==0.2.0
azure-mgmt-signalr==0.1.1
azure-mgmt-sql==0.9.1
azure-mgmt-storage==2.0.0
azure-mgmt-subscription==0.2.0
azure-mgmt-trafficmanager==0.50.0
azure-mgmt-web==0.35.0
azure-nspkg==3.0.2
azure-servicebus==0.21.1
azure-servicefabric==<IP_ADDRESS>
azure-servicemanagement-legacy==0.20.6
azure-storage==0.33.0
azure-storage-blob==1.3.1
azure-storage-common==1.3.0
azure-storage-file==1.3.1
azure-storage-nspkg==3.0.0
azure-storage-queue==1.3.0
```

Could anyone help on this? Thanks a lot!

Hi @irisava Could you confirm version of Python, version of pip, platform (Windows, Ubuntu, etc.) and exact command used to install. Thank you

Hello @lmazuel Thank you for the reply! Following is the info of my working environment:

- Python 3.6.5
- pip 18.1 from ...\appdata\local\programs\python\python36-32\lib\site-packages\pip (python 3.6)
- Windows 10
- Last command used in cmd: pip install azure-storage

azure-storage and azure-storage-blob/file/queue are incompatible and cannot work together. azure-storage is actually the deprecated old version of the three packages azure-storage-blob/file/queue. Please just use azure-storage-blob/file/queue or just use azure-storage, but not both. azure-storage-blob/file/queue is recommended if you don't have an existing code base.

Thank you, Closing for inactivity, since I believe I addressed the initial question. If this is still a problem, feel free to open a new issue in the storage repo: https://github.com/Azure/azure-storage-python

Thanks,
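The resolution above boils down to a simple rule: the legacy azure-storage package and the split azure-storage-blob/file/queue packages both install into the azure.storage namespace and must not be co-installed. A small, hypothetical diagnostic (not part of any Azure SDK) can express that rule over a pip freeze-style listing:

```python
LEGACY = "azure-storage"
SPLIT = {"azure-storage-blob", "azure-storage-file", "azure-storage-queue"}


def storage_conflict(installed):
    """Given pip-freeze lines, return advice if the legacy/split conflict exists."""
    names = {line.split("==")[0] for line in installed}
    if LEGACY in names and names & SPLIT:
        return ("azure-storage cannot coexist with azure-storage-blob/file/queue; "
                "run: pip uninstall azure-storage")
    return None


# The reporter's environment has both, which is exactly the broken state:
freeze = ["azure-storage==0.33.0", "azure-storage-blob==1.3.1", "azure-common==1.1.16"]
print(storage_conflict(freeze) is not None)  # True
```

Applied to the full freeze output above, this flags the same fix the maintainer gave: keep either the legacy package or the split packages, never both.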
2025-04-01T04:54:45.929675
2024-08-09T12:42:07
2457836376
{ "authors": [ "TaisukeIto", "rohit-ganguly" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13303", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/36833" }
gharchive/issue
which program will work? (Program which can upload a text file to vector index of Azure AI search)

I'm looking for a basic python program that will upload a text file to the vector index of Azure AI search, but everything gives me an error, and I can't find the version number of the azure search documents package or a working python program anywhere. Everything gives me an error. I guess it's probably because the development is so hard that they haven't been able to check the operation of the developed program, but which program will work?

GitHub Azure/azure-search-vector-samples Azure/azure-sdk-for-python

Hi @TaisukeIto! Sorry to hear about your experience with the AI Search library - it's a rapidly growing service so sometimes the API changes on newer versions and samples become outdated fast. One of our most popular samples uses the AI Search library and should have accurate behavior - here's what I found for a quick search of the SearchClient.upload_files() method: link. It looks like this sample is on 11.6.0b1. The API reference is also here if you're curious about the details. If you're still running into errors, please post the specific error you're running into.
2025-04-01T04:54:45.931167
2024-12-15T03:33:52
2740255948
{ "authors": [ "carlos2martinize" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13304", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/38885" }
gharchive/issue
Hey! I've been using Cash App to send money and spend using the Cash App Card. Try it using my code and we’ll each get $5. GQ4N8C8 https://cash.app/app/GQ4N8C8 Hello
2025-04-01T04:54:45.937870
2017-07-27T19:50:50
246149965
{ "authors": [ "codecov-io", "lmazuel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13305", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/1330" }
gharchive/pull-request
Initial Container Instance

FYI @yolo3301 @derekbekoe Didn't look at the diff in details yet at the time of the PR, but naming and packaging should be ok.

Codecov Report

Merging #1330 into master will increase coverage by <.01%. The diff coverage is 100%.

```diff
@@            Coverage Diff             @@
##           master    #1330      +/-   ##
==========================================
+ Coverage   56.04%   56.04%   +<.01%
==========================================
  Files        2691     2692       +1
  Lines       71246    71247       +1
==========================================
+ Hits        39932    39933       +1
  Misses      31314    31314
```

| Impacted Files | Coverage Δ |
|---|---|
| ...zure-mgmt-containerinstance/azure/mgmt/__init__.py | 100% <100%> (ø) |

Continue to review full report at Codecov.

Legend - Click here to learn more: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update f477734...3bf348f. Read the comment docs.
2025-04-01T04:54:45.939221
2021-02-04T01:14:49
800839653
{ "authors": [ "lsundaralingam" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13306", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/16511" }
gharchive/pull-request
Revert "Communication identity api redesign (#16420)" This reverts commit 30b917b2b377e7fadbd66c478209b3ab7427ca78. /azp run python - communication - tests
2025-04-01T04:54:45.944122
2022-09-08T07:14:59
1365665318
{ "authors": [ "kristapratico", "syso-jxx" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13307", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/26087" }
gharchive/pull-request
[Videoanalyzer] Fixed cspell typos in videoanalyzer Description Fix https://github.com/Azure/azure-sdk-for-python/issues/22681 All SDK Contribution checklist: [X] The pull request does not introduce [breaking changes] [X] CHANGELOG is updated for new features, bug fixes or other significant changes. [X] I have read the contribution guidelines. General Guidelines and Best Practices [X] Title of the pull request is clear and informative. [X] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [X] Pull request includes test coverage for the included changes. CI check here: https://dev.azure.com/azure-sdk/public/_build/results?buildId=1847868&view=results I'll merge as soon as it's green. Thanks again! 😸 /check-enforcer override
2025-04-01T04:54:45.950681
2023-01-23T01:07:56
1552371136
{ "authors": [ "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13309", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/28445" }
gharchive/pull-request
[AutoRelease] t2-cognitiveservices-2023-01-23-44734(can only be merged by SDK owner) https://github.com/Azure/sdk-release-request/issues/3679 Live test success https://dev.azure.com/azure-sdk/internal/_build?definitionId=976 BuildTargetingString azure-mgmt-cognitiveservices Skip.CreateApiReview true issue link:https://github.com/Azure/sdk-release-request/issues/3679
2025-04-01T04:54:45.951879
2023-03-17T17:27:29
1629680779
{ "authors": [ "lmazuel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13310", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/29436" }
gharchive/pull-request
Black 22.3.0 Azure-Core Nothing fancy, just use the latest black, so the Typing PRs don't fail on black, while not making those PRs full of uninteresting lines. Ok, so we don't need this PR. It turned out I didn't see that we have a black config file, which is why I had so much diff.
2025-04-01T04:54:45.953302
2018-07-26T18:47:29
344962272
{ "authors": [ "AutorestCI" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13311", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/3014" }
gharchive/pull-request
[AutoPR keyvault/resource-manager] KV multiapi Readme Created to sync https://github.com/Azure/azure-rest-api-specs/pull/3416 This PR has been merged into https://github.com/Azure/azure-sdk-for-python/pull/2927
2025-04-01T04:54:45.964271
2023-08-04T15:40:50
1836973254
{ "authors": [ "kristapratico", "scbedd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13313", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/31474" }
gharchive/pull-request
vnext issue creator script Resolves https://github.com/Azure/azure-sdk-for-python/issues/29344 We pin the versions of type checkers/linters in this repo so that we don't see any surprises in CI when new versions are released. To keep up with the latest, we periodically bump the pinned version of these checkers and then go through the process of getting all libraries clean for that version. We would like to improve this process by 1) giving early notice of when a version bump will happen and what errors need to be fixed in a given library and 2) standardize when we do version bumps (e.g., quarterly, the Monday after release week). The idea behind this PR is to give library owners an early heads up of what checks are failing with the next version of the type checkers/linters and provide a deadline / merge date for when that version will be merged. It adds a script which will create GH issues if a client library is failing a vnext check for pylint, mypy, or pyright and will run as part of the test-weekly pipeline. If a library fails a vnext check, the script will either create an issue (if one doesn't exist) or update the issue with the latest dates/links to builds. Example issue: https://github.com/Azure/azure-sdk-for-python/issues/31463 @kristapratico No complaints about the code of this PR at all! From a strategery point of view, I'm trying to get our common code under azure-sdk-tools instead of adding new scripts/code to tox folder. Reason being: Makes it super easy to re-use the code here that submits a new issue Has a place to run tests out of if you add them Yes you could put a test file right alongside this under tox/, but that would get awkward pretty quick 😂 That being said, I'm not going to block on that. @scbedd Ah thanks for pointing that out, I meant to mention that I wasn't sure if this was a great place for the script to live. Were you thinking under tools/azure-sdk-tools/ci_tools? I'm happy to move it in this PR. 
@kristapratico absolutely have some suggestions! tools/azure-sdk-tools/ci_tools/gh <-- access through ci_tools.gh or tools/azure-sdk-tools/gh_tools/ <-- would need to create a new top level, so it would probably just be gh_tools or whatever you come up with. Both work. Arguably creating an issue isn't tightly bound to CI, so there are arguments for making them their own namespace.
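The create-or-update behavior the script needs is simple enough to sketch. Everything below (function name, title format, fields) is invented for illustration, with a plain dict standing in for the repo's open issues rather than the GitHub API:

```python
def upsert_vnext_issue(open_issues, library, check, merge_date, build_url):
    """Create an issue record for a failing vnext check, or refresh
    the dates/links on one that already exists for (library, check)."""
    title = f"{library} is failing {check} checks on the vnext version"
    body = f"Scheduled merge date: {merge_date}\nLatest failing build: {build_url}"
    if title in open_issues:
        open_issues[title]["body"] = body    # issue exists: update dates/links
    else:
        open_issues[title] = {"body": body}  # no issue yet: create one
    return open_issues[title]

# Running the weekly pipeline twice should update, not duplicate.
issues = {}
upsert_vnext_issue(issues, "azure-core", "mypy", "2023-10-02", "builds/1")
upsert_vnext_issue(issues, "azure-core", "mypy", "2023-10-02", "builds/2")
print(len(issues))  # prints 1
```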
2025-04-01T04:54:45.970335
2024-05-09T23:24:45
2288619741
{ "authors": [ "azure-sdk", "mccoyp" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13314", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/35569" }
gharchive/pull-request
[Key Vault] Add support for pre-backup and pre-restore operations Description Resolves https://github.com/Azure/azure-sdk-for-python/issues/35252. This adds client-facing support for pre-backup and pre-restore methods, for checking whether a full backup or full restore operation can be performed. As a draft, this PR doesn't include tests because of default feature unavailability in the service. Once the feature can be easily enabled, tests and samples will be added. All SDK Contribution checklist: [x] The pull request does not introduce [breaking changes] [x] CHANGELOG is updated for new features, bug fixes or other significant changes. [x] I have read the contribution guidelines. General Guidelines and Best Practices [x] Title of the pull request is clear and informative. [x] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [ ] Pull request includes test coverage for the included changes. API change check APIView has identified API level changes in this PR and created following API reviews. azure-keyvault-administration
2025-04-01T04:54:45.975243
2024-10-15T21:34:24
2589980334
{ "authors": [ "howieleung", "jhakulin" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13315", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/37913" }
gharchive/pull-request
fixed warning for aio and get call function tools for stream within t… …he SDK Description Get rid of warning in aio AgentOperation. To do that, I copied the AgentsOperation from sync to async/aio and modified it accordingly. Call functions within the SDK for streaming instead of asking developers to call them in their code. I will do this for non-streaming. Please add an informative description that covers the changes made by the pull request and link all relevant issues. If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above. All SDK Contribution checklist: [ ] The pull request does not introduce [breaking changes] [ ] CHANGELOG is updated for new features, bug fixes or other significant changes. [ ] I have read the contribution guidelines. General Guidelines and Best Practices [ ] Title of the pull request is clear and informative. [ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [ ] Pull request includes test coverage for the included changes. Give some time to review :)
2025-04-01T04:54:45.982797
2023-06-28T14:52:03
1779134327
{ "authors": [ "ladonnaq", "maririos" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13316", "repo": "Azure/azure-sdk-tools", "url": "https://github.com/Azure/azure-sdk-tools/issues/6431" }
gharchive/issue
Management namespace approval improvements for new UI Today the management namespace approval is a manual process. Teams have to create a GitHub issue and follow the guidance on this wiki page to initiate the process. https://dev.azure.com/azure-sdk/internal/_wiki/wikis/internal.wiki/821/Naming-for-new-initial-management-or-client-libraries-(new-SDKs) Questions: Should a dedicated team of architects be responsible for approving the management plane namespaces? Should the archboard use APIView to approve the namespaces? This is how it is done for data plane. Are teams blocked from releasing new initial SDKs if they do not have approval for namespace for management plane? This is implemented for data plane. If we are going to continue with the current process, then we should create the GitHub issue for the user in the Release Planner. The user can enter the suggested names of the namespaces and we have all of the other information needed to create the GitHub issue for them using the template - https://github.com/Azure/azure-sdk-pr/issues/new?assignees=kyle-patterson%2C+ronniegeraghty&labels=architecture%2C+board-review%2C+mgmt-namespace-review&projects=&template=adp_mgmt_namespace_review.md&title=Board+Review%3A+Management+Plane+Namespace+Review+<client+library+name> All of the questions have been covered either in docs or already implemented in the SDK release app. The only remaining one is
The user can enter the suggested names of the namespaces and we have all of the other information needed to create the GitHub issue for them using the template - https://github.com/Azure/azure-sdk-pr/issues/new?assignees=kyle-patterson%2C+ronniegeraghty&labels=architecture%2C+board-review%2C+mgmt-namespace-review&projects=&template=adp_mgmt_namespace_review.md&title=Board+Review%3A+Management+Plane+Namespace+Review+<client+library+name> which will be covered by https://github.com/Azure/azure-sdk-tools/issues/4601
2025-04-01T04:54:45.995185
2022-12-20T19:10:23
1505118097
{ "authors": [ "JoshLove-msft", "aakash049", "ronniegeraghty", "tg-msft", "tomkerkhove" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13317", "repo": "Azure/azure-sdk", "url": "https://github.com/Azure/azure-sdk/issues/5282" }
gharchive/issue
Board Review: Event Grid System Events Thank you for submitting this review request. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure. The Architecture Board reviews Track 2 libraries only. If your library does not meet this requirement, please reach out to Architecture Board before creating the issue. Please reference our review process guidelines to understand what is being asked for in the issue template. To ensure consistency, all Tier-1 languages (C#, TypeScript, Java, Python) will generally be reviewed together. In expansive libraries, we will pair dynamic languages (Python, TypeScript) together, and strongly typed languages (C#, Java) together in separate meetings. For Tier-2 languages (C, C++, Go, Android, iOS), the review will be on an as-needed basis. Before submitting, ensure you adjust the title of the issue appropriately. Note that the required material must be included before a meeting can be scheduled. Contacts and Timeline Responsible service team: API Management, DataBox Main contacts: @JoshLove-msft Expected code complete date: 1/6/23 Expected release date: 1/13/23 About the Service Link to the service REST APIs: https://github.com/Azure/azure-rest-api-specs/pull/21771 https://github.com/Azure/azure-rest-api-specs/pull/21945 .NET APIView Link: To be added Java APIView Link: To be added Python APIView Link: To be added TypeScript APIView Link: To be added Scheduled for Jan 12th, from 2:05PM - 4PM PST. As per email, I cannot attend this meeting since I'm based in Belgium. Can we move this to 10 PM CET / 1AM PT please? I'm OK to stay up for the meeting then. Even I can't attend this meeting since I'm based in India. Can we move it to 12 AM IST / 10:30 AM PST ? @tomkerkhove & @aakash049 the review session time can be moved up, just let me get an agreed upon time. Would 10:05AM - 12PM PST work for you both? 
I can check but allocating 2h in the evening is a bit much as it feels like the discussion will not take that much given it's 2 different topics. Can we split them or have some indication? I can potentially do 10:30-11:30 but I think this is still too late for @aakash049 who is in India. From the context in the issue description it looks like the two topics of this meeting will be 11 new events for API Management and an Event Grid system topic for DataBox. @aakash049 & @tomkerkhove, can you let me know which part you're interested in, and I'll add info to the review session stating which topic should go first and second. Since @aakash049 is in India and it will be latest for them, we can arrange it, so their topic is covered from 10-11AM PST and @tomkerkhove's topic is covered from 11AM-12PM PST. Could that work for you both? It's not ideal because that is my 8 PM but I'll make it work :) I'm joining for the 11 new events for API Management I'll be discussing about Event Grid System topic for Databox, 10-11 AM PST works for me. Okay, thanks for being flexible. I'll speak with the architects now to confirm the time. There is a chance they could do 9AM-11AM, but the normal morning time slot is 10-12. I'll keep you posted. Scheduled for 1/12 from 9:05AM - 11AM PST Thanks! I'll join at 10 AM PST to represent APIM @ronniegeraghty is this scheduled for 10-12 or 9-11. It sounds like both service reps are available from 10 onward? Adding a note here to reflect the update in the meeting invite. The review session is scheduled to take place between 9:05AM - 11AM PST. @aakash049 will be going first for the DataBox related topic from 9:05AM - 9:35AM PST. Then, @tomkerkhove will be going for the API Management related topic from 9:35 - 10:05AM PST Recording (MS INTERNAL ONLY)
2025-04-01T04:54:45.998265
2019-11-01T19:13:11
516298485
{ "authors": [ "kurtzeborn", "scbedd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13318", "repo": "Azure/azure-sdk", "url": "https://github.com/Azure/azure-sdk/issues/761" }
gharchive/issue
Documentation - Indexes Should Account for Renamed Packages Right now, we generate github.io landing pages purely based off of what is present within the repo. We know that packages will be renamed (most recent example being azure-storage-fileshare), so we may need to account for this. Depending on the level of investment in github.io docs, this may be important. Most straightforward way I can think of is to always include locations in the index for packages that we've published to blob storage before. CC @kaerm This is super rare and we don't lose any history/data related to docs without this. Cutting this feature since we'll never prioritize it enough to do it. The amount of work needed doesn't match the benefit given how rare this will occur.
2025-04-01T04:54:46.000621
2020-04-14T19:10:22
599796638
{ "authors": [ "adrianhall", "bterlson" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13319", "repo": "Azure/azure-sdk", "url": "https://github.com/Azure/azure-sdk/pull/1226" }
gharchive/pull-request
Add blog for JS abort controller Let me know any comments. Edits allowed from maintainers, so feel free to fix any minor issues you like 😀 All feedback addressed, let me know if there are more suggestions! Was the second file "how q" added by accident? Might want to remove it from the PR. that extra file is very strange! I'll slice it out of the history. Well clearly I messed up my filter-branch. Recreating PR. @jongio Thanks so much for the feedback, I addressed most of this feedback over in #1240. I'm open to changing the intro paragraph to a bullet list and also interested in how to improve the point about separation of concerns between signal and controller.
2025-04-01T04:54:46.005384
2023-09-28T14:54:02
1917725361
{ "authors": [ "adreed-msft", "dinu99" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13320", "repo": "Azure/azure-storage-azcopy", "url": "https://github.com/Azure/azure-storage-azcopy/issues/2389" }
gharchive/issue
Need -whatif flag for Azcopy Sync tool What command did you run? Note: Please remove the SAS to avoid exposing your credentials. If you cannot remember the exact command, please retrieve it from the beginning of the log file. NA - feature request What problem was encountered? Please add a -whatif flag for the AzCopy Sync tool to estimate the exact outcome of the command. Especially useful when we are dealing with a large number of files and using the --delete-destination and --recursive flags. How can we reproduce the problem in the simplest way? NA Have you found a mitigation/solution? No This sounds like our --dry-run flag, which doesn't match this exact functionality at this point.
2025-04-01T04:54:46.015983
2017-07-27T00:12:06
245890530
{ "authors": [ "MikeStall", "dallancarr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13321", "repo": "Azure/azure-webjobs-sdk-script", "url": "https://github.com/Azure/azure-webjobs-sdk-script/pull/1717" }
gharchive/pull-request
Allow direct Load webjobs dll Resolves https://github.com/Azure/azure-webjobs-sdk-script/issues/1508 Allow Functions to directly load and consume a WebJobs DLL that may come from precompiled tooling. The function.json has a new "configurationSource" : "attributes" flag in it. This builds on several previous fixes: This skips the InvokerBase and ILGeneration path. This builds on some previous changes to move non-invocation responsibility (logging, metrics, return values, etc) out of the invoker path. Recent fix to billing: https://github.com/Azure/azure-webjobs-sdk-script/issues/578 It builds on [FunctionName] and Return value support from the SDK. Can we add something to the docs on this change? Seems pretty significant.
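For illustration, a minimal function.json using the new flag might look like the sketch below. The scriptFile and entryPoint values are invented placeholders; with "configurationSource": "attributes", the bindings come from the [FunctionName] and WebJobs attributes in the precompiled DLL rather than from this file:

```json
{
  "configurationSource": "attributes",
  "scriptFile": "..\\bin\\PrecompiledFunctions.dll",
  "entryPoint": "PrecompiledFunctions.MyFunction.Run"
}
```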
2025-04-01T04:54:46.029954
2022-09-06T06:36:43
2354762447
{ "authors": [ "AlexanderSehr", "clintgrove", "tyconsulting" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13322", "repo": "Azure/bicep-registry-modules", "url": "https://github.com/Azure/bicep-registry-modules/issues/2414" }
gharchive/issue
[Feature Request]: add module for ADF Linked Services Description Currently there is no module for the ADF Linked Services resource https://docs.microsoft.com/en-us/azure/templates/microsoft.datafactory/factories/linkedservices?pivots=deployment-language-bicep Hey @clintgrove, I just migrated this issue over from CARML. Please take a look and triage if still relevant :) I am working on this, due to raise a PR today or tomorrow, 25th June 2024
2025-04-01T04:54:46.033520
2024-10-29T15:47:45
2621667024
{ "authors": [ "AlexanderSehr", "jtracey93" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13323", "repo": "Azure/bicep-registry-modules", "url": "https://github.com/Azure/bicep-registry-modules/issues/3661" }
gharchive/issue
[AVM Module Issue]: Virtual Network Gateway - WAF and APRL alignment Check for previous/existing GitHub issues [x] I have checked for previous/existing GitHub issues Issue Type? Feature Request Module Name avm/res/network/virtual-network-gateway (Optional) Module Version No response Description We've been asked to ensure module defaults alignment for WAF and APRL for several modules. For the Virtual Network Gateway module can we please update the following default. For Public IP's used by the gateway, set zone configuration to all zones [1,2,3] as the default value Superseding #3247 (Optional) Correlation Id No response Hey @fabmas, Please triage this issue when you get the chance 🙂
2025-04-01T04:54:46.040313
2022-04-04T21:55:09
1192354947
{ "authors": [ "VELCpro", "alex-frankel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13324", "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/6407" }
gharchive/issue
AzureML: Datastore created with no subscription_id and resource_group Bicep version Bicep CLI version 0.4.1318 (ee0d808f35) Describe the bug Deploying a datastore through Bicep results in a datastore without subscription_id and resource group set in the Azure ML workspace. The workspace works correctly but doesn't have the direct link to the blob storage. To Reproduce simply create a datastore resource with resource datastore 'Microsoft.MachineLearningServices/workspaces/datastores@2021-03-01-preview' Can you share the full code sample that you deployed? @stan-sz - do you happen to know anything about this one? The code that I deployed is something like the following code: resource datastore 'Microsoft.MachineLearningServices/workspaces/datastores@2021-03-01-preview' = { name: '${workspace_name}/dstr_preproc' properties: { contents: { contentsType: 'AzureBlob' accountName: ext_storage_reference.name containerName: 'bscont-preproc' credentials: { credentialsType: 'AccountKey' secrets: { key: listKeys(ext_storage_reference.id, '2019-06-01').keys[0].value secretsType: 'AccountKey' } } endpoint: environment().suffixes.storage // https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-functions-deployment protocol: 'https' } } } The issue is also still there with the stable version 2022-05-01: resource pipeDatastore 'Microsoft.MachineLearningServices/workspaces/datastores@2022-05-01' = { name: '${workspace_name}/dstr_preproc' properties: { datastoreType: 'AzureBlob' accountName: ext_storage_reference_tmp.name containerName: 'bscont--preproc' credentials: { credentialsType: 'AccountKey' secrets: { key: listKeys(ext_storage_reference_tmp.id, '2019-06-01').keys[0].value secretsType: 'AccountKey' } } endpoint: environment().suffixes.storage // https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-functions-deployment protocol: 'https' } } Using the latest API 2022-06-01-preview everything works, due to the fact that now we can
specify subscriptionId and resourceGroup resource tmpPipeDatastore 'Microsoft.MachineLearningServices/workspaces/datastores@2022-06-01-preview' = { name: '${workspace_name}/dstr_preproc' properties: { datastoreType: 'AzureBlob' accountName: ext_storage_reference_tmp.name containerName: 'bscont-preproc' subscriptionId: env.subcription_id resourceGroup: rg_name credentials: { credentialsType: 'AccountKey' secrets: { key: listKeys(ext_storage_reference_tmp.id, '2019-06-01').keys[0].value secretsType: 'AccountKey' } } endpoint: environment().suffixes.storage // https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-functions-deployment protocol: 'https' } } The links are created correctly but they end up in a strange "not found" error. Here, for example, I clicked on the container link This is an issue with the Azure ML Resource Provider. Can you open an Azure support case, so this can be routed to the Azure ML team?
2025-04-01T04:54:46.048857
2023-01-11T09:11:38
1528680893
{ "authors": [ "allxiao", "ezYakaEagle442", "stephaniezyen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13325", "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/9515" }
gharchive/issue
Azure Spring Apps / App Deployment : API missing to get relativePath Bicep version az bicep version Bicep CLI version 0.11.1 (030248df55) Describe the bug I have a Bicep snippet to create an Azure Spring Apps / App Deployment : // https://learn.microsoft.com/en-us/azure/templates/microsoft.appplatform/2022-11-01-preview/spring/apps/deployments?pivots=deployment-language-bicep#usersourceinfo-objects resource adminserverappdeployment 'Microsoft.AppPlatform/Spring/apps/deployments@2022-11-01-preview' = { name: 'default' parent: adminserverapp sku: { name: azureSpringAppsSkuName } …. source: { version: deploymentVersion type: 'Jar' // Jar, Container or Source https://learn.microsoft.com/en-us/azure/templates/microsoft.appplatform/2022-11-01-preview/spring/apps/deployments?pivots=deployment-language-bicep#usersourceinfo jvmOptions: '-Xms512m -Xmx1024m -Dspring.profiles.active=mysql,key-vault,cloud' // https://learn.microsoft.com/en-us/rest/api/azurespringapps/apps/get-resource-upload-url?tabs=HTTP#code-try-0 // should be a link to a BLOB storage relativePath: 'https://stasapetcliasa.blob.core.windows.net/petcliasa-blob/asa-spring-petclinic-admin-server-2.6.6.jar' runtimeVersion: 'Java_11' } } } There is this API whose result provides the relativePath field, but how can I get that result in Bicep?
Without this value, it looks like there is no way to create a Deployment with Bicep, which is what my customer is asking for. To Reproduce See snippet above Additional context This is a show stopper for my customer who wants to use Bicep only WITHOUT any extra steps in a script Ask: this get-resource-upload-url API should be callable through Bicep @alex-frankel The AppPlatform/Spring RP team is looking into this Hi @ezYakaEagle442 If you need to create a new deployment, you can fill the relativePath with a placeholder <default>. @description('The instance name of the Azure Spring Cloud resource') param springCloudInstanceName string param location string = resourceGroup().location resource springCloudInstance 'Microsoft.AppPlatform/Spring@2022-11-01-preview' = { name: springCloudInstanceName location: location sku: { name: 'S0' tier: 'Standard' } properties: { } } resource apiGatewayApp 'Microsoft.AppPlatform/Spring/apps@2022-11-01-preview' = { name: 'api-gateway' parent: springCloudInstance } resource apiGatewayDeploymentApp 'Microsoft.AppPlatform/Spring/apps/deployments@2022-11-01-preview' = { name: 'default' parent: apiGatewayApp sku: { name: 'S0' } properties: { active: true source: { relativePath: '<default>' type: 'Jar' } deploymentSettings: { resourceRequests: { cpu: '1' memory: '2Gi' } } } } If you need a real storage location that can be used to upload artifacts and pass to the deployment, you need a POST call to the app's getResourceUploadUrl action. You need to leverage the Deployment Script support in bicep to do this. create a user assigned identity assign Contributor role for the identity to the target resource group that contains the Azure Spring Apps instance Add the following snippet to get the URL. resource getUploadUrl 'Microsoft.Resources/deploymentScripts@2020-10-01' = { name: 'get-upload-url' location: location kind: 'AzureCLI' identity: { type: 'UserAssigned' userAssignedIdentities: { // replace the ??xxx??
placeholder below with your identity properties '${resourceId('??your-identity-group??', 'Microsoft.ManagedIdentity/userAssignedIdentities', '??your identity name??')}': {} } } properties: { forceUpdateTag: utcValue azCliVersion: '2.40.0' timeout: 'PT30M' scriptContent: 'az rest --method post --url ${apiGatewayApp.id}/getResourceUploadUrl?api-version=2022-11-01-preview' retentionInterval: 'P1D' } } // you can get the url and path using the following assignment var relativePath = getUploadUrl.properties.outputs.relativePath var uploadUrl = getUploadUrl.properties.outputs.uploadUrl However, if you want to do real deployment using bicep (upload JAR and then patch deployment), it's not a good idea IMO as bicep is more ARM Template oriented. You will need to write further deployment scripts to call curl to upload your JAR. Reference: https://blog.soft-cor.com/uploading-large-files-to-an-azure-file-share-using-a-shell-script-and-standard-linux-commands/
2025-04-01T04:54:46.057738
2024-07-18T23:34:48
2417566715
{ "authors": [ "MoChilia", "vn0siris" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13326", "repo": "Azure/cli", "url": "https://github.com/Azure/cli/issues/155" }
gharchive/issue
Warning: Unable to fetch all az cli versions Warning: Unable to fetch all az cli versions, please report it as an issue on https://github.com/Azure/CLI/issues. Output: *** "name": "azure-cli", "tags": [ "0.10.0", "0.10.1", "0.10.10", "0.10.11", "0.10.12", "0.10.13", "0.10.14", "0.10.2", "0.10.3", "0.10.4", "0.10.5", "0.10.6", "0.10.7", "0.10.8", "0.9.10", "0.9.13", "0.9.14", "0.9.15", "0.9.16", "0.9.17", "0.9.18", "0.9.19", "0.9.2", "0.9.20", "0.9.4", "0.9.5", "0.9.6", "0.9.7", "0.9.8", "0.9.9", "2.0.24", "2.0.26", "2.0.27", "2.0.28", "2.0.29", "2.0.31", "2.0.32", "2.0.34", "2.0.37", "2.0.38", "2.0.41", "2.0.42", "2.0.43", "2.0.44", "2.0.45", "2.0.46", "2.0.47", "2.0.49", "2.0.50", "2.0.51", "2.0.52", "2.0.53", "2.0.54", "2.0.55", "2.0.56", "2.0.57", "2.0.58", "2.0.59", "2.0.60", "2.0.61", "2.0.62", "2.0 Hi @vn0siris, this is a duplicate issue of #153, which has been fixed in #154. You can point to master branch as a temporary workaround. I will inform you once the new version is released.
2025-04-01T04:54:46.067915
2022-05-19T16:57:32
1242057622
{ "authors": [ "chlowell", "jhendrixMSFT" ], "license": "apache-2.0", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13327", "repo": "Azure/go-autorest", "url": "https://github.com/Azure/go-autorest/pull/698" }
gharchive/pull-request
Handle expires_on in int format Unmarshal the value into an interface{} and perform the proper conversion depending on the underlying type. Thank you for your contribution to Go-AutoRest! We will triage and review it as soon as we can. As part of submitting, please make sure you can make the following assertions: [ ] I've tested my changes, adding unit tests if applicable. [ ] I've added Apache 2.0 Headers to the top of any new source files. Fixes https://github.com/Azure/go-autorest/issues/696 Looking at the related issue, it looks to me that expires_on, at least in that example, isn't the number of seconds from now but probably from the Unix epoch. I need to take a closer look. Everywhere except App Service, expires_on is epoch seconds, either as a number or a string. And App Service returns it as a time-stamp, correct? OK, I did a little digging. Token.Expires() already treats ExpiresOn as Unix time. It does mean, though, that our handling of expires_on in date-time format is incorrect at present.
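To make the three observed shapes of expires_on concrete, here is a small Python sketch (illustrative only, not the go-autorest implementation) that normalizes a JSON number, a string of epoch seconds, or an App-Service-style date-time down to Unix time:

```python
from datetime import datetime

def normalize_expires_on(value):
    """Return expires_on as epoch seconds, whatever shape it arrives in."""
    if isinstance(value, (int, float)):      # JSON number: 1653000000
        return int(value)
    try:
        return int(value)                    # string of epoch seconds: "1653000000"
    except ValueError:
        pass
    # App Service MSI date-time shape, e.g. "6/20/2022 10:15:00 PM +00:00"
    # (the exact format string is an assumption based on the linked issue)
    return int(datetime.strptime(value, "%m/%d/%Y %I:%M:%S %p %z").timestamp())
```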
2025-04-01T04:54:46.120646
2024-12-05T14:17:39
2720541981
{ "authors": [ "kurian-dm", "weinong" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13328", "repo": "Azure/kubelogin", "url": "https://github.com/Azure/kubelogin/issues/566" }
gharchive/issue
Signed version of kubelogin.exe Can we get a signed version of kubelogin.exe? We are only allowed to use signed binaries and executables in our production systems. Ack. I think the upcoming publishing update should be able to address it.
2025-04-01T04:54:46.123532
2023-08-17T09:59:21
1854677886
{ "authors": [ "mitsha-microsoft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13329", "repo": "Azure/load-testing", "url": "https://github.com/Azure/load-testing/pull/65" }
gharchive/pull-request
Disabling PR Workflow for Forked PRs Using the pull_request_target trigger allows workflows from forked repos to get access to the secrets and GitHub tokens of this repository, as specified here. Until testing is fixed, we are removing the capability to run tests on forked PRs to resolve a security issue. After testing is enabled for the GitHub Action, we will enable the workflow accordingly. Note: Support for running this workflow on forked branches will be added after proper investigation. There were two approaches to fixing this: either support running only on non-forked branches (e.g. the Functions GitHub Action), or make the workflow work explicitly (e.g. the SQL Deploy GitHub Action). We have picked the first approach, and will investigate the second approach further.
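For context, the difference between the two triggers can be sketched as follows (an illustrative workflow excerpt, not this repository's actual file):

```yaml
# pull_request runs the workflow in the fork's context: no repository
# secrets and a read-only GITHUB_TOKEN - safe for untrusted forked PRs.
on:
  pull_request:
    branches: [main]

# pull_request_target runs in the base repository's context instead,
# which is what exposed secrets and tokens to forked PRs here.
# on:
#   pull_request_target:
#     branches: [main]
```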
2025-04-01T04:54:46.141622
2018-10-04T17:37:41
366896506
{ "authors": [ "antoniocachuan", "imatiach-msft", "kunguang", "vinglogn" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13330", "repo": "Azure/mmlspark", "url": "https://github.com/Azure/mmlspark/issues/386" }
gharchive/issue
Try running MMLSPARK in YARN mode Hi, We are trying to use mmlspark in a Cloudera environment using pyspark through terminal [1] and Cloudera Data Science Workbench (CDSW) [2]. All our efforts have failed and we wonder if this option is possible. The only way we've got it working is to use pyspark without yarn and even if it runs we got another error [3]. *We also tried to run in a Google Cloud Dataproc cluster with the same error [4] [1] Terminal pyspark2 --master local --deploy-mode yarn --packages Azure:mmlspark:0.14,com.microsoft.ml.lightgbm:lightgbmlib:2.1.250,com.jcraft:jsch:0.1.54,com.microsoft.cntk:cntk:2.4,io.spray:spray-json_2.11:1.3.2,org.openpnp:opencv:3.4.2-0 [2] CDSW from pyspark.sql import SparkSession warehouseLocation = "/prod/bcp/edv/mesapymesh/datain" jarsLocation = "/home/cdsw/" spark = SparkSession\ .builder.appName("SparkML")\ .config("spark.sql.warehouse.dir", warehouseLocation)\ .config("spark.jars.ivy", jarsLocation)\ .config("spark.jars.packages", "Azure:mmlspark:0.14,com.microsoft.ml.lightgbm:lightgbmlib:2.1.250,com.jcraft:jsch:0.1.54,com.microsoft.cntk:cntk:2.4,io.spray:spray-json_2.11:1.3.2,org.openpnp:opencv:3.4.2-0")\ .enableHiveSupport()\ .getOrCreate() [3] Error running in local mode [4] Google Cloud Dataproc error Code pyspark --packages Azure:mmlspark:0.14 Log `Setting default log level to "WARN". To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel). 18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/Azure_mmlspark-0.14.jar added multiple times to distributed cache. 18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/io.spray_spray-json_2.11-1.3.2.jar added multiple times to distributed cache.
18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/com.microsoft.cntk_cntk-2.4.jar added multiple times to distributed cache. 18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/org.openpnp_opencv-3.2.0-1.jar added multiple times to distributed cache. 18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/com.jcraft_jsch-0.1.54.jar added multiple times to distributed cache. 18/10/04 17:31:53 WARN org.apache.spark.deploy.yarn.Client: Same path resource file:/home/cticbigdata/.ivy2/jars/com.microsoft.ml.lightgbm_lightgbmlib-2.1.250.jar added multiple times to distributed cache. ivysettings.xml file not found in HIVE_HOME or HIVE_CONF_DIR,/etc/hive/conf.dist/ivysettings.xml will be ... Using Python version 2.7.9 (default, Sep 25 2018 20:42:16) SparkSession available as 'spark'. import mmlspark Traceback (most recent call last): File "", line 1, in ImportError: No module named mmlspark ` [5] Notes CDH 5.12 Spark 2.2.0 CDH / Spark 2.2.1 GCD Python 2.7.13 / 3.6 Thanks in advance for your help. If you need more information or something else I will be checking for news. Hi, You should only need the option "--packages Azure:mmlspark:0.14" The other options shouldn't matter. MMLSpark should work anywhere where spark is deployed, it shouldn't matter what cluster you are using. Having said that, I've only tested it on Azure Databricks and HDInsight. If you want to meet over skype I could try and debug it with you, but I don't have access to a cloudera workbench unfortunately :(. Thank you, Ilya @antoniocachuan Can you also try on spark 2.3? We only support spark 2.3 now (older versions support 2.2). You can also send me an email at<EMAIL_ADDRESS>if you want to diagnose your issue. @imatiach-msft Thanks for your answer, In this moment is not possible to test it in Spark 2.3. 
Also I tried with "--packages Azure:mmlspark:0.14" in a Google Cloud Dataproc cluster with the same results. PD: I really appreciate your help, just emailed you. Regards, Antonio C. @antoniocachuan the strange thing is, I don't see any errors anywhere. It looks like you retrieved the jar so you would think that it would just work. I'm not quite sure what the problem might be. We could try and add the python files manually from the zip to see if anything fails. Otherwise, when using --packages it should just pick up the python files and import them. My guess is something in the import step is failing, but that might not be the case because I don't see an error anywhere. @imatiach-msft I tried in CDH running --packages using the Scala API and it works; now I am getting an error related to issue #335. I could also test adding the python files manually. spark2-shell --master yarn --packages Azure:mmlspark:0.14,com.microsoft.ml.lightgbm:lightgbmlib:2.1.250 Error #335 Caused by: java.lang.UnsatisfiedLinkError: /data/06/yarn/nm/usercache/s16746/appcache/application_1538195866523_1970/container_e100_1538195866523_1970_01_000002/tmp/mml-natives2115945537512894448/lib_lightgbm.so: /lib64/libm.so.6: version GLIBC_2.23 not found (required by /data/06/yarn/nm/usercache/s16746/appcache/application_1538195866523_1970/container_e100_1538195866523_1970_01_000002/tmp/mml-natives2115945537512894448/lib_lightgbm.so) @antoniocachuan I also encountered a similar 'No module named mmlspark' problem, but after I compiled the source code of mmlspark-0.15 and installed the 'mmlspark-0.15-py2.py3-none-any.whl' package into my Ubuntu 16.04 environment, the problem was gone! @antoniocachuan Hello, have you solved this issue? Can you give me some advice? Thanks.
2025-04-01T04:54:46.153356
2021-12-23T11:18:15
1087622473
{ "authors": [ "K2CanDo", "rido-min" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13331", "repo": "Azure/opendigitaltwins-dtdl", "url": "https://github.com/Azure/opendigitaltwins-dtdl/issues/124" }
gharchive/issue
Support arrays in property definitions The Plug and Play documentation states that the definition of arrays in properties is not supported yet (https://docs.microsoft.com/en-us/azure/iot-develop/concepts-modeling-guide). Since it is now finally possible to have arrays in the device twin though (https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-device-twins#tags-and-properties-format), it should also be possible to define arrays in component properties. This is something we are targeting for DTDL v3. Stay tuned for upcoming updates. DTDL v3 has been published as preview, with support for arrays in properties. Can you close this issue?
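For reference, a property using an array schema in DTDL v3 looks roughly like this (the property name and element type are illustrative):

```json
{
  "@type": "Property",
  "name": "tags",
  "schema": {
    "@type": "Array",
    "elementSchema": "string"
  },
  "writable": true
}
```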
2025-04-01T04:54:46.165015
2019-07-22T16:29:42
471177688
{ "authors": [ "mikekinsman", "ppgovekar" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13332", "repo": "Azure/portaldocs", "url": "https://github.com/Azure/portaldocs/pull/244" }
gharchive/pull-request
Should be portalfx instead of portalf in the breaking changes link Should be portalfx instead of portalf in the breaking changes link Docs Build status updates of commit e65372f: :white_check_mark: Validation status: passed File Status Preview URL Details portal-sdk/generated/downloads.md :bulb:Suggestion View Details portal-sdk/generated/downloads.md [Suggestion] Missing attribute: author. Add the current author's GitHub ID. [Suggestion] Missing attribute: title. Add a title string to show in search engine results. For more details, please refer to the build report. Note: If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report. @nickharris please take a look My bad, I only changed the display link :S
2025-04-01T04:54:46.165975
2022-10-14T14:35:47
1409451985
{ "authors": [ "BALAGA-GAYATRI", "jbenaventem" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13333", "repo": "Azure/powershell", "url": "https://github.com/Azure/powershell/pull/61" }
gharchive/pull-request
feat(#60): include CodeQL@v2 workflow Include a CodeQL workflow for security scanning. Changes were added to resolve the conflicts, so closing this here.
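A typical CodeQL@v2 workflow of the kind this PR proposed looks roughly like this (language and branch names are illustrative, not taken from this repository):

```yaml
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
        with:
          languages: javascript
      - uses: github/codeql-action/analyze@v2
```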
2025-04-01T04:54:46.170542
2021-02-25T16:58:13
816619812
{ "authors": [ "ilya-git", "kpkool", "ltouro", "ms1111" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13334", "repo": "Azure/secrets-store-csi-driver-provider-azure", "url": "https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/412" }
gharchive/issue
Possibility to inject key-vault values as environment variables with or without k8s secrets Describe the solution you'd like Many applications have a native ability to read options/parameters from environment variables, e.g., .NET Core. Right now to get that behavior we need to: Mount the files that won't be used later Add secrets as k8s secrets that won't be used later Map k8s secrets to env variables For many applications there is not much need for all of these and it would be a nice feature to just allow injection of variables without the need to configure either mounts or secrets. Anything else you would like to add: It would be a cool additional feature if injecting as env variables could update k8s secrets (like what is happening now during the mount if configured) with auto-rotation enabled, restarting pods to inject new environment variables with renewed secrets if the secret has changed. Or perhaps it's possible to evaluate the injected variables instead of creating a k8s secret to determine if the pod should be restarted. That would allow for completely pain-free secret updates, e.g., a database password update. Are we able to use Deployment env.valueFrom.secretKeyRef along with SecretProviderClass secretObjects already? Mount the files that won't be used later Even with the ability to use env.valueFrom.secretKeyRef, there is still a requirement to mount the files, otherwise the synced secrets aren't created. That's a bit unfortunate; it makes it more complicated to build a chart that can use regular secrets OR the CSI driver in different environments. Each deployment needs to be modified to mount the volume from the CSI driver. Is there any way to expose vault secrets directly as pod env variables? My client does not allow us to use k8s secrets because base64 encoding is still clear text. E.g., some DB and sensitive data store credentials really need to be hidden from K8s admins/developers.
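The workaround discussed above — syncing a Key Vault secret to a Kubernetes Secret, then mapping it to an environment variable — looks roughly like this (names are illustrative, and as the thread notes, the pod must still mount the CSI volume or the synced Secret is never created):

```yaml
# SecretProviderClass excerpt: mirror a Key Vault object into a k8s Secret
secretObjects:
  - secretName: app-secrets
    type: Opaque
    data:
      - objectName: db-password   # must match an object in the provider parameters
        key: DB_PASSWORD
---
# Deployment container excerpt: surface the synced Secret as an env var
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: DB_PASSWORD
```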
2025-04-01T04:54:46.224834
2024-06-13T08:33:50
2350534622
{ "authors": [ "OsoThevenin", "alasdairmackenzie" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13335", "repo": "Azure/static-web-apps", "url": "https://github.com/Azure/static-web-apps/issues/1492" }
gharchive/issue
Nextjs + Authjs v5 Callback Error with Duende Identity Server provider in an Azure Static Web App Describe the bug After the user is successfully logged in on my duende identity server v6, it tries to redirect the user to the specified callback url (following the Auth.js documentation here). Here's an example redirect url: https://mydomain/api/auth/callback/duende-identity-service?code=3D96EBB5721191CD18CDBBFE5FFD818017E4C59F09214552AE74636913DB21B6-1&scope=openid profile email&session_state=G2eqKlEXsiOGOl0W5zNmDk7MloXu18w1M3YapSqv7qI.E986B1AED0AC76CFE51406215F9A08F6&iss=myidentityserver Instead of retrieving the session and redirecting the user back to the "dashboard", my SWA returns a 302 status code with this weird url as location: https://1dd4069c374e:8080/api/auth/error?error=Configuration On localhost the redirect works as expected. To Reproduce Can't give many details of the code as it's private. But will try to create a mock application to replicate this behaviour. auth.config.ts export default { providers: [ DuendeIDS6Provider({ id: 'duende-identity-service', // default id duende-identityserver6!!
name: 'Duende Identity Service', clientId: process.env.AUTH_DUENDE_IDENTITY_SERVER6_ID!, clientSecret: process.env.AUTH_DUENDE_IDENTITY_SERVER6_SECRET!, issuer: process.env.AUTH_DUENDE_IDENTITY_SERVER6_ISSUER, }), ], } satisfies NextAuthConfig; middleware.ts const intlMiddleware = createMiddleware({ defaultLocale, localePrefix, locales, pathnames, }); const authMiddleware = auth( (req: NextRequest & { auth: Session | null }): Response | void => { const session = req.auth; // Handle session return intlMiddleware(req); }, ); const middleware = (req: NextRequest) => { // some validations if (isAuthPage) { return (authMiddleware as any)(req); } if (isPublicPage) { return intlMiddleware(req); } return (authMiddleware as any)(req); }; export const config = { matcher: ['/((?!api|_next/static|_next/image|favicon.ico|.*.swa).*)/'], }; export default middleware; staticwebapp.config.json { "forwardingGateway": { "allowedForwardedHosts": [ "mydomain" ] } } Expected behavior Location should be https://mydomain/dashboard Actual response: Device info (if applicable): OS: Windows Browsers: Brave, Firefox, Chrome, Edge Version: Latest @OsoThevenin This was happening to me too. I spent quite a while working through the code and realised the way AuthJS was setting the hostname was a bit odd. The weird url is actually the HOST of the server. I can't recall exactly helped me work around the issue but it was either setting the AUTH_URL or the AUTH_REDIRECT_PROXY_URL to the actual domain i.e. "https:///api/auth" See this issue https://github.com/nextauthjs/next-auth/issues/10928#issuecomment-2121092912 @OsoThevenin This was happening to me too. I spent quite a while working through the code and realised the way AuthJS was setting the hostname was a bit odd. The weird url is actually the HOST of the server. I can't recall exactly how I worked around the issue but it was either setting the AUTH_URL or the AUTH_REDIRECT_PROXY_URL to the actual domain i.e. 
"https:///api/auth" See this issue nextauthjs/next-auth#10928 (comment) Definetly this helped fix the issue. Thanks a lot ❤️
2025-04-01T04:54:46.233968
2021-07-01T06:17:14
934445651
{ "authors": [ "kevinprescottwong-Dev", "miwebst" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13336", "repo": "Azure/static-web-apps", "url": "https://github.com/Azure/static-web-apps/issues/501" }
gharchive/issue
AzureStaticWebApp step fails immediately For the life of me, I can't get this AzureStaticWebApp pipeline step to succeed. I have a react app that I created through the standard create-react-app npx command. I tried following a bunch of official and unofficial tutorials, but nothing I've tried has worked. I've seen many tutorials get you to use the GitHub Actions flow, but I want to get an Azure DevOps CI/CD pipeline set up for this react project. The one thing that I suspect might be causing some issues is that I am using a self-hosted Agent (vsts-agent-win-x64-2.188.3) to run the pipelines, since I don't have access to the hosted parallelism. To Reproduce Steps to reproduce the behavior: I run my pipeline AzureStaticWebApp step fails immediately GitHub Actions or Azure Pipelines workflow YAML file trigger: - main pool: name: Default steps: - checkout: self submodules: true - task: Npm@1 displayName: 'npm install' inputs: verbose: false - task: Npm@1 displayName: 'npm run build' inputs: command: custom verbose: false customCommand: 'run build' - task: PublishBuildArtifacts@1 displayName: 'Publish Artifact: drop' inputs: PathtoPublish: build - task: AzureStaticWebApp@0 inputs: app_location: '/build' azure_static_web_apps_api_token: '$(deployment_token)' Output of AzureStaticWebApp step 2021-07-01T04:36:39.8899695Z ##[section]Starting: AzureStaticWebApp 2021-07-01T04:36:39.9036032Z ============================================================================== 2021-07-01T04:36:39.9036303Z Task : Deploy Azure Static Web App 2021-07-01T04:36:39.9036657Z Description : [PREVIEW] Build and deploy an Azure Static Web App 2021-07-01T04:36:39.9036859Z Version : 0.187.1 2021-07-01T04:36:39.9037027Z Author : Microsoft Corporation 2021-07-01T04:36:39.9037214Z Help : https://aka.ms/swadocs 2021-07-01T04:36:39.9037436Z ============================================================================== 2021-07-01T04:36:40.1961223Z
##[section]Finishing: AzureStaticWebApp staticwebapp.config.json file { "navigationFallback": { "rewrite": "/index.html" } } Expected behavior The AzureStaticWebApps step completes and deploys my react project to my Azure Static Web App Screenshots Here is what my entire pipeline looks like Any help would be greatly appreciated! Thanks Is the VM running the pipeline a Windows machine? We've seen this in the past if the VM is not capable of running the task startup script. Hi miwebst, I am running the Agent on my own local machine, which is Windows 10. Did the issue that you are referring to get resolved? If so, how did they resolve it? Thanks I FINALLY FIXED MY ISSUE!!!! When you mentioned that it might be a Windows problem, I used my Ubuntu VM: Distributor ID: Ubuntu Description: Ubuntu 20.04.1 LTS Release: 20.04 Codename: focal Then I had to install npm and docker.io: sudo apt install npm sudo apt-get install docker.io Then I had to setup docker for my user on the machine: https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket After that I was able to run my pipeline with the following YAML: trigger: - main pool: name: Default steps: - checkout: self submodules: true - task: AzureStaticWebApp@0 inputs: app_location: '/' output_location: 'build' azure_static_web_apps_api_token: '$(deployment_token)' I hope this helps some people out :) I would have to double check, but I think installing docker would fix my Windows issue. I will reply with my results
2025-04-01T04:54:46.242808
2024-06-06T09:56:32
2337850213
{ "authors": [ "eamreyes", "jchancellor-ms", "wplj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13338", "repo": "Azure/terraform-azurerm-avm-res-compute-virtualmachine", "url": "https://github.com/Azure/terraform-azurerm-avm-res-compute-virtualmachine/issues/85" }
gharchive/issue
[AVM Module Issue]: "Missing required argument" that is not declared as 'required'. Check for previous/existing GitHub issues [X] I have checked for previous/existing GitHub issues Issue Type? I'm not sure (Optional) Module Version No response (Optional) Correlation Id No response Description This simple code uses only the parameters that are declared as "Required" in the documentation, but Terraform (plan operation) still shows an error that it's not enough. I expected the module to generate admin password for me and provide it as output. I have no intention in using key vaults, it's a simple test scenario. module "avm-res-compute-virtualmachine" { source = "Azure/avm-res-compute-virtualmachine/azurerm" name = module.naming.windows_virtual_machine.name_unique resource_group_name = azurerm_resource_group.this.name location = azurerm_resource_group.this.location virtualmachine_sku_size = "Standard_A2_v2" zone = 1 } The error (and follow-up errors): Error: Missing required argument with module.avm-res-compute-virtualmachine.azurerm_key_vault_secret.admin_password[0], on .terraform\modules\avm-res-compute-virtualmachine\main.authentication.tf line 35, in resource "azurerm_key_vault_secret" "admin_password": 35: key_vault_id = var.admin_credential_key_vault_resource_id The argument "key_vault_id" is required, but no definition was found. Error: Attempt to get attribute from null value on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 135, in resource "azurerm_windows_virtual_machine" "this": 135: offer = local.source_image_reference.offer This value is null, so it does not have any attributes. Error: Attempt to get attribute from null value on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 136, in resource "azurerm_windows_virtual_machine" "this": 136: publisher = local.source_image_reference.publisher local.source_image_reference is null This value is null, so it does not have any attributes. 
Error: Attempt to get attribute from null value on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 137, in resource "azurerm_windows_virtual_machine" "this": 137: sku = local.source_image_reference.sku local.source_image_reference is null This value is null, so it does not have any attributes. Error: Attempt to get attribute from null value. on .terraform\modules\avm-res-compute-virtualmachine\main.windows_vm.tf line 138, in resource "azurerm_windows_virtual_machine" "this": 138: version = local.source_image_reference.version local.source_image_reference is null This value is null, so it does not have any attributes. I encountered this issue when providing UN + PW for the admin. Referencing a KV seems tightly coupled to the parameter generate_admin_password_or_ssh_key which is defaulted to true. Since I'm providing UN+PW I disable the value, and the KV requirement is avoided. module "avm-onprem-mgmt-vm" { source = "Azure/avm-res-compute-virtualmachine/azurerm" name = module.on_prem_naming.virtual_machine.name location = azurerm_resource_group.onprem.location resource_group_name = azurerm_resource_group.onprem.name admin_username = var.username admin_password = var.password generate_admin_password_or_ssh_key = false virtualmachine_sku_size = var.vmsize zone = null virtualmachine_os_type = "Windows" source_image_reference = { publisher = "MicrosoftWindowsServer" offer = "WindowsServer" sku = "2022-datacenter-azure-edition-hotpatch" version = "latest" } network_interfaces = { mgmt_nic = { name = module.on_prem_naming.network_interface.name location = azurerm_resource_group.onprem.location resource_group_name = azurerm_resource_group.onprem.name ip_configurations = { mgmt_ipconfig = { name = "mgmt-ipconfig" subnet_id = module.avm-onprem-mgmt-subnet.resource_id private_ip_address_allocation = "Dynamic" public_ip_address_id = null primary = true } } } } } 
https://github.com/Azure/terraform-azurerm-avm-res-compute-virtualmachine/blob/20917bef881cc8e346864c8b159d4546d27dccb2/main.authentication.tf#L32 @wplj and @eamreyes - Release 0.15.0 removes the requirement for the key vault id, and moves the id to a single interface for all of the generated secret (password or ssh key) configuration items. (Also deprecates the old inputs for removal in a future release). It also cleans up the inputs so that it can be deployed with only required inputs. This is now also tested in the minimal example. Finally, please be aware there are breaking changes in the release so please review the release notes when you move to 0.15.
2025-04-01T04:54:46.253758
2024-09-04T09:23:22
2504778789
{ "authors": [ "jtracey93", "sissonsrob" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13340", "repo": "Azure/terraform-azurerm-caf-enterprise-scale", "url": "https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/issues/1126" }
gharchive/issue
Broken Wiki links Community Note Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request If you are interested in working on this issue or have submitted a pull request, please leave a comment Versions terraform: 1.7.0 azure provider: 3.65.0 module: 6.1.0 Description Describe the bug Wiki links for connectivity with custom settings are still broken on some sub pages. Further to the issue I previously raised (#1094) which was closed due to this PR the following pages contain links to the same which need fixing: Examples.md [Examples]-Deploy-Connectivity-Resources.md [Examples]-Deploy-Multi-Region-Networking-With-Custom-Settings.md [Examples]-Deploy-Virtual-WAN-Multi-Region-With-Custom-Settings.md [Examples]-Deploy-Virtual-WAN-Resources.md [Examples]-Deploy-using-multiple-module-declarations-with-orchestration.md [Examples]-Deploy-using-multiple-module-declarations-with-remote-state.md Broken links are for both the hub and spoke and VWAN custom settings pages. Steps to Reproduce Navigate to the pages above click links for connectivity with custom settings get redirected to 'Home' Screenshots Additional context Would really like to be able to share a PR with the fixes for this but still unable to contribute - is it possible to be accepted as a contributor to raise PRs? Hey @sissonsrob, You can indeed submit a PR by forking this repo and then making your changes and submitting via a pull request. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/getting-started/about-collaborative-development-models#fork-and-pull-model Thanks @jtracey93 - unsure why I couldn't raise the PR last time but I have now raised this PR which fixes the other pages affected by this. Note - the fork was from my other account Hope it helps
2025-04-01T04:54:46.258119
2024-05-01T13:06:02
2273473780
{ "authors": [ "Keetika-Yogendra", "matt-FFFFFF" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13341", "repo": "Azure/terraform-azurerm-caf-enterprise-scale", "url": "https://github.com/Azure/terraform-azurerm-caf-enterprise-scale/issues/939" }
gharchive/issue
Bug Report : Continuous Destroy and then Create of azapi_resource diag_settings Community Note Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request If you are interested in working on this issue or have submitted a pull request, please leave a comment Continuous Destroy and then Create of azapi_resource diag_settings The module successfully deploys the Cloud Adoption Framework. However, when doing a plan with no changes to the parameters in the module, we are getting 10 add and 10 destroy of the resource "azapi_resource" "diag_settings". module.enterprise_scale.azapi_resource.diag_settings["/providers/Microsoft.Management/managementGroups/testtenant-sandboxes"] must be replaced -/+ resource "azapi_resource" "diag_settings" { ~ id = "/providers/Microsoft.Management/managementGroups/testtenant-sandboxes/providers/Microsoft.Insights/diagnosticSettings/toLA" -> (known after apply) - location = "global" -> null # forces replacement name = "toLA" ~ output = jsonencode({}) -> (known after apply) # (7 unchanged attributes hidden) } Plan: 10 to add, 0 to change, 10 to destroy. Help regarding resolution of this issue will be much appreciated. fixed by #968
2025-04-01T04:54:46.269990
2022-07-18T19:56:57
1308490517
{ "authors": [ "iamgusain", "shahzaibj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13342", "repo": "AzureAD/microsoft-authentication-library-common-for-android", "url": "https://github.com/AzureAD/microsoft-authentication-library-common-for-android/pull/1797" }
gharchive/pull-request
Move common http test utilities from testutils to common4j test fixtures Description: Move common http test utilities from testutils to common4j test fixtures This is to break https://github.com/AzureAD/microsoft-authentication-library-common-for-android/pull/1770 into multiple smaller PRs. Would be nice to update the description with what is being changed here Would be nice to update the description with what is being changed here I've added the title to description as well. That's basically all there is to this PR. I'm not sure what more I'd put for a simple PR like this that's just relocated some files. Would be nice to update the description with what is being changed here I've added the title to description as well. That's basically all there is to this PR. I'm not sure what more I'd put for a simple PR like this that's just relocated some files. Something like why we are moving the classes will add some context, when we look back at the PR later. Would be nice to update the description with what is being changed here I've added the title to description as well. That's basically all there is to this PR. I'm not sure what more I'd put for a simple PR like this that's just relocated some files. Something like why we are moving the classes will add some context, when we look back at the PR later. Added this: Why we are moving these classes? We are moving these classes to test fixtures because test fixtures is where they truly belong. The primary purpose of test fixtures is to be able to share test code across modules. Test Fixtures is a concept that @p3dr0rv had introduced to the team some time ago, and I think I also covered it again in one of my recent brown-bags as well.
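For context, Gradle's `java-test-fixtures` plugin is what makes this kind of cross-module test-code sharing possible; a minimal sketch (module names are illustrative):

```groovy
// In common4j/build.gradle: expose a testFixtures source set
plugins {
    id 'java-test-fixtures'
}

// In a consuming module's build.gradle: depend on those fixtures
dependencies {
    testImplementation(testFixtures(project(':common4j')))
}
```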
2025-04-01T04:54:46.273615
2021-02-17T07:23:50
809931330
{ "authors": [ "jasoncoolmax", "jbzdarkid" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13343", "repo": "AzureAD/microsoft-authentication-library-common-for-objc", "url": "https://github.com/AzureAD/microsoft-authentication-library-common-for-objc/pull/948" }
gharchive/pull-request
Merge Release/1.6.2 back to Master Type of change [ ] Feature work [ ] Bug fix [ ] Documentation [x] Engineering change [ ] Test [ ] Logging/Telemetry Risk [ ] High – Errors could cause MAJOR regression of many scenarios. (Example: new large features or high level infrastructure changes) [ ] Medium – Errors could cause regression of 1 or more scenarios. (Example: somewhat complex bug fixes, small new features) [x] Small – No issues are expected. (Example: Very small bug fixes, string changes, or configuration settings changes) Additional information @jasoncoolmax Any ETA on this release? @jasoncoolmax Any ETA on this release? I am doing the release now :)
2025-04-01T04:54:46.288795
2020-05-28T14:37:15
626565213
{ "authors": [ "et1975", "jmckennon", "ranjanmicrosoft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13344", "repo": "AzureAD/microsoft-authentication-library-for-js", "url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/1722" }
gharchive/issue
audience/resource for token acquisition Library [ ]<EMAIL_ADDRESS>or<EMAIL_ADDRESS>[x]<EMAIL_ADDRESS>[ ]<EMAIL_ADDRESS>[ ]<EMAIL_ADDRESS>[ ]<EMAIL_ADDRESS> Description Trying to figure out how to acquire a token for the AppConfiguration API and coming up short. The API documentation talks about requesting a resource, and even though AuthenticationParameters has a field for it I get the error: AADSTS901002: The 'resource' request parameter is not supported. I tried using scopes for it, but it's not clear to me how to correctly set up *.azconfig.io as a scope - AppConfiguration is not listed as an API I can request permissions for. scope will be something like https://{myconfig}.azconfig.io/.default in your resource map you'll do [ 'https://{myconfig}.azconfig.io/', [ 'https://{myconfig}.azconfig.io/.default' ]], @ranjanmicrosoft that's what I thought, but then my other problem: ServerError: invalid_client: AADSTS650057: Invalid resource. The client has requested access to a resource which is not listed in the requested permissions in the client's application registration. Client app ID: 7e327720-2c2b-4516-a52b-d255e3834907(avs-capman-dev). Resource value from request: https://*.azconfig.io. Resource app ID: 35ffadb3-7fc1-497e-b61b-381d28e744cc. List of valid resources from app registration: 00000003-0000-0000-c000-000000000000. How do I add the *.azconfig.io URI to my app definition? It has to be registered somewhere because the manifest takes a GUID, not the URI: "requiredResourceAccess": [ { "resourceAppId": "00000003-0000-0000-c000-000000000000", "resourceAccess": [ { "id": "e1fe6dd8-ba31-4d61-89e7-88639da4683d", "type": "Scope" } ] } ], "samlMetad Closing this as it looks like it's being handled in https://github.com/Azure/AppConfiguration/issues/338.
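The scope construction discussed in the thread above can be sketched in plain JavaScript. The store name `myconfig` is a hypothetical placeholder, and the `Map` only mirrors the resource-map shape shown in the reply (resource URL mapped to its list of scopes); no MSAL library is imported here.

```javascript
// Build the App Configuration scope from a store name, as in the reply above.
// "myconfig" is a hypothetical placeholder for a real App Configuration store.
const store = "myconfig";
const resource = `https://${store}.azconfig.io/`;
const scope = `${resource}.default`; // -> https://myconfig.azconfig.io/.default

// Resource-map shape from the thread: resource URL -> list of required scopes.
const resourceMap = new Map([[resource, [scope]]]);
```

Note this uses the concrete store URL, not a wildcard: requesting `https://*.azconfig.io` is exactly what triggered the AADSTS650057 error above.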
2025-04-01T04:54:46.290932
2018-06-28T03:43:13
336462296
{ "authors": [ "aszalacinski", "nartc", "nehaagrawal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13345", "repo": "AzureAD/microsoft-authentication-library-for-js", "url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/337" }
gharchive/issue
Wiki documentation is non-existent I get we are all busy, but when all MS documentation points to MSAL as the way to go for SPA apps, then documentation needs to be a first-class citizen. @aszalacinski - I apologize that you are not able to find what you are looking for. We do have the documentation and we are currently working on improving it. Could you please explain why you say that it's non-existent? @nehaagrawal the wiki on this GitHub repository does not show any useful information. Every item links back to Home. The only link that works is Register your app with AAD, which links you to the Microsoft website. @aszalacinski I have fixed the wiki. Please check.
2025-04-01T04:54:46.298679
2022-07-15T17:35:48
1306312635
{ "authors": [ "tnorling", "zico209" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13346", "repo": "AzureAD/microsoft-authentication-library-for-js", "url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/5011" }
gharchive/issue
After successful login and redirection, the execution of the routing guard is interrupted. Core Library MSAL.js v2 (@azure/msal-browser) Core Library Version 2.26.0 Wrapper Library Not Applicable Wrapper Library Version None Description Hi, I have a sign-in page. When you visit the website without authentication, you will be redirected to this page. When I click sign in to enter the login process and successfully authenticate and jump back to my website, there seems to be a problem with the routing guard, which makes me return to the sign-in page. I need to enter the home page after successful authentication. I don't know how to implement it. But in fact, the login is successful. When I visit the website again, I can get the logged-in user information. It seems that it is not waiting for the execution of handleRedirectPromise at the routing guard. The source code is here vue3-sample-app.zip . MSAL Configuration No response Relevant Code Snippets No response Identity Provider No response Source External (Customer) @zico209 Can you please provide your configuration and your routing guard implementation so I can better assist you? Have you seen our Vue3 sample, which implements a routing guard here? @zico209 Can you please provide your configuration and your routing guard implementation so I can better assist you? Have you seen our Vue3 sample, which implements a routing guard here? https://github.com/AzureAD/microsoft-authentication-library-for-js/files/9122801/vue3-sample-app.zip This is my demo project, modified from the Vue3 sample. All details are here. @zico209 Thanks! A couple of things I noticed: By default the library will redirect the user back to the page which started the login flow (in your case /signin) after hitting the specified redirectUri. Sounds like you don't want this behavior, so to disable it you can set the navigateToLoginRequestUrl flag to false in your auth config (authConfig.ts -> msalConfig -> auth). 
Alternatively, you can set the redirectStartPage parameter on the login request (also located in authConfig.ts) to tell MSAL to redirect to any page you want after login is complete. You are using the home route as your redirectUri and also configuring that route to use the guard. This is not advisable. We recommend setting your redirectUri to a page which does not require the user to be authenticated, and then, if needed, have that page redirect the user to where they need to be using the methods mentioned in point 1. @tnorling Thank you! This is really helpful.
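The two options described in this thread can be sketched with plain objects. `navigateToLoginRequestUrl` (an auth config option) and `redirectStartPage` (a redirect-request option) are real @azure/msal-browser names used above; the clientId and routes here are placeholders, and no library is imported — this only shows the shape of each setting.

```javascript
// Option 1 (sketch): disable the default behavior of navigating back to the
// page that started login (/signin), so MSAL stays on the redirectUri.
// clientId and URLs are placeholders, not values from the thread.
const msalConfig = {
  auth: {
    clientId: "00000000-0000-0000-0000-000000000000",
    redirectUri: "http://localhost:3000/", // a route without the guard
    navigateToLoginRequestUrl: false,
  },
};

// Option 2 (sketch): a per-request redirectStartPage on the login request,
// telling MSAL where to navigate once login completes.
const loginRequest = {
  scopes: ["openid", "profile"],
  redirectStartPage: "/home", // hypothetical post-login route
};
```

In a real app, `msalConfig` would be passed to the `PublicClientApplication` constructor and `loginRequest` to `loginRedirect`.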
2025-04-01T04:54:46.314950
2022-09-20T14:51:05
1379554933
{ "authors": [ "grgicpetar", "jasonnutter", "tangyinhao123" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13347", "repo": "AzureAD/microsoft-authentication-library-for-js", "url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/issues/5230" }
gharchive/issue
Is it possible to have multiple application logins? Core Library MSAL.js v2 (@azure/msal-browser) Core Library Version 2.28.3 Wrapper Library MSAL React (@azure/msal-react) Wrapper Library Version 1.4.7 Public or Confidential Client? Public Description Application usage scenario: Landing page -> Azure AD B2C login -> some functionalities -> Azure AD B2B login -> all functionalities After authenticating with Azure AD B2C, the user gets access to the application. After the initial login only some functionalities are available. To be granted access to all the functionalities, the user must authenticate with Azure AD B2B this time. Is this scenario possible with a single instance of PCA? MSAL Configuration No response Relevant Code Snippets No response Identity Provider Azure B2C Custom Policy Source External (Customer) @grgicpetar After authenticating with Azure AD B2C, the user gets access to the application. After the initial login only some functionalities are available. To be granted access to all the functionalities, the user must authenticate with Azure AD B2B this time. To be clear, do you want users to authenticate directly against AAD (i.e. MSAL -> AAD, as opposed to via B2C, i.e. MSAL -> B2C -> AAD)? You can technically change authorities on a per-request basis, so assuming the answer to the above question is yes, then you may be able to achieve what you describe, although it may be easier if you maintain two PCA instances. Unfortunately, we do not have a sample that demonstrates this scenario, as far as I know. @jasonnutter I am aware that I can change authorities on a per-request basis, but what I need to do is change the ClientId on a second authorization request. I believe this should explain my use case more clearly: Landing page -> Azure AD B2C login (Client ID 1) -> some functionalities -> Azure AD B2B login (Client ID 2) -> all functionalities. As far as I know, I haven't seen any example where the ClientId can be changed using a single instance of MSAL. 
@grgicpetar Hi grgicpetar, I want to know how you solved it in the end because I also encountered the same scenario. Hi @tangyinhao123, multiple instances did indeed work. It just feels weird to use since you can use only one instance through useMsal(). Hi @tangyinhao123, multiple instances did indeed work. It just feels weird to use since you can use only one instance through useMsal(). Thanks @grgicpetar. Is there an example for me to refer to, because I put it in index.tsx and re-instantiate it every time it is called? My code:

```tsx
// Initialize client side tracing
initializeAppInsights();
// Initialize Icons
initializeIcons();

const RootComponent = () => {
  const [instances, setInstance] = useState<PublicClientApplication | null>(null);

  // Inject some global styles
  mergeStyles({
    ":global(body,html,#root)": {
      margin: 0,
      padding: 0,
      height: "100vh",
    },
  });

  React.useEffect(() => {
    const session = sessionStorage.getItem("clientId");
    if (session) {
      if (session == msalConfig.auth.clientId) {
        setInstance(new PublicClientApplication(msalConfig));
      } else {
        setInstance(new PublicClientApplication(pmeConfig));
      }
    }
  }, []);

  document.title = "OfferStore Portal";

  const handAccountType = (atype: string) => () => {
    if (atype == "pme") {
      setInstance(new PublicClientApplication(pmeConfig));
      sessionStorage.setItem("clientId", pmeConfig.auth.clientId);
    } else if (atype == "ms") {
      setInstance(new PublicClientApplication(msalConfig));
      sessionStorage.setItem("clientId", msalConfig.auth.clientId);
    } else {
      return;
    }
  };

  return instances != null ? (
  ) : (
    <Stack style={{ marginTop: '50px' }} tokens={stackTokens}>
      <PrimaryButton onClick={handAccountType("ms")} text="Microsoft Account" />
      <PrimaryButton onClick={handAccountType("pme")} text="PME Account" />
    </Stack>
  );
};

ReactDOM.render(, document.getElementById("root"));
```

@tangyinhao123

1.) Define two PCA first in some config file:

```ts
import { Configuration, PublicClientApplication } from "@azure/msal-browser";

export const msalConfig1: Configuration = {
  ...
};

export const msalConfig2: Configuration = {
  ...
};

export const pca1 = new PublicClientApplication(msalConfig1);
export const pca2 = new PublicClientApplication(msalConfig2);
```

2.) Provide the first one through Provider; this one you can use through the useMsal() hook. ... The second one you can use by explicitly importing pca2 in files.
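A minimal, framework-free sketch of the instance-selection idea from this thread. Plain objects stand in for the two @azure/msal-browser configurations; the client IDs are placeholders, not real values, and the function only mirrors the sessionStorage-based check described above.

```javascript
// Plain-object stand-ins for the two app registrations discussed above.
// Client IDs here are placeholders, not values from the thread.
const msalConfig = { auth: { clientId: "client-id-1" } };
const pmeConfig = { auth: { clientId: "client-id-2" } };

// Pick whichever configuration matches the clientId remembered from the
// last sign-in, mirroring the sessionStorage check in the thread.
function pickConfig(storedClientId) {
  if (storedClientId === msalConfig.auth.clientId) {
    return msalConfig;
  }
  return pmeConfig;
}
```

With a real PublicClientApplication, the chosen config would be passed to the constructor once and the resulting instance kept for the session, rather than re-instantiated on every render.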
2025-04-01T04:54:46.317586
2020-07-10T23:49:31
655089237
{ "authors": [ "coveralls", "jasonnutter" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13348", "repo": "AzureAD/microsoft-authentication-library-for-js", "url": "https://github.com/AzureAD/microsoft-authentication-library-for-js/pull/1930" }
gharchive/pull-request
Add script to automate publishing msal-core files to the CDN Script to automate uploading msal-core generated files to the CDN. Requires a .env file in the msal-core folder including environment variables with SAS keys for the CDN. Coverage remained the same at 80.998% when pulling 4f844f4a5a2fabdb4a0b8e1edcc0e8bb3e9f27b9 on automate-cdn-core into 0f352a074dff709304d87e3543c89472a1bcf875 on dev.
2025-04-01T04:54:46.318732
2016-10-06T06:32:26
181334883
{ "authors": [ "lovemaths", "polita" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0000.json.gz:13349", "repo": "AzureADQuickStarts/WebApp-OpenIDConnect-NodeJS", "url": "https://github.com/AzureADQuickStarts/WebApp-OpenIDConnect-NodeJS/pull/4" }
gharchive/pull-request
update sample to use passport-azure-ad 3.0.0 Will merge after the release of version 3.0.0. For the comments I made on the WebApp-OpenIDConnect-NodeJS PR that apply to all samples, please ensure we're making those changes across samples. Afterwards, :shipit: