Dataset schema:
- added: string (date), min 2025-04-01 04:05:38, max 2025-04-01 07:14:06
- created: timestamp[us] (date), min 2001-10-09 16:19:16, max 2025-01-01 03:51:31
- id: string, lengths 4 to 10
- metadata: dict
- source: string, 2 classes
- text: string, lengths 0 to 1.61M
2025-04-01T04:10:11.630564
2018-12-10T21:46:33
389500380
{ "authors": [ "tjprescott", "yareyes" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13474", "repo": "Azure/azure-cli", "url": "https://github.com/Azure/azure-cli/pull/8029" }
gharchive/pull-request
sqlvm command module implementation This checklist is used to make sure that common guidelines for a pull request are followed. [x] The PR has modified HISTORY.rst describing any customer-facing, functional changes. Note that this does not include changes only to help content. (see Modifying change log). [x] I adhere to the Command Guidelines. Closing in favor of https://github.com/Azure/azure-cli-extensions/pull/445
2025-04-01T04:10:11.631477
2013-10-15T18:44:16
21036001
{ "authors": [ "Selcin", "deneha" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13475", "repo": "Azure/azure-content", "url": "https://github.com/Azure/azure-content/issues/1636" }
gharchive/issue
Capitalized datawarehousing in SQL Provisioning article Thanks for the help. @Selcin Thank you for raising this issue. We're closing old issues. If you feel this issue should remain open, please comment to let us know that the issue should be reopened.
2025-04-01T04:10:11.632857
2015-08-25T16:32:05
103066616
{ "authors": [ "johnfmacintyre", "mimig1", "rmca14" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13476", "repo": "Azure/azure-content", "url": "https://github.com/Azure/azure-content/pull/4294" }
gharchive/pull-request
Update documentdb-partition-data.md Removed extra word in Adding and removing partitions section @mimig1 Please review and approve, thanks. #sign-off
2025-04-01T04:10:11.636184
2016-03-30T20:58:56
144718642
{ "authors": [ "NeilGo", "azurecla", "dsk-2015", "tysonn" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13477", "repo": "Azure/azure-content", "url": "https://github.com/Azure/azure-content/pull/6169" }
gharchive/pull-request
Update virtual-machines-windows-troubleshoot-rdp-connection.md Add instructions for RDP login to an Azure Windows VM using a "Microsoft Account", such as with the Azure MSDN Windows 10 VM gallery images. (Note: Given the complexity, a SWAY walkthrough might be better) Hi @NeilGo, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution! This seems like a small (but important) contribution, so no contribution license agreement is required at this point. Real humans will now evaluate your PR. TTYL, AZPRBOT; @dsk-2015 - Dhanashri, could you please review this contribution from @NeilGo? Thanks @NeilGo for your contribution. I'll take a look at this PR in the next couple of days. @NeilGo I have been trying to work through your steps, but find that it wasn't very easy for me to get it all right. Would like to talk with you to get a more in-depth idea of these steps. What is the best way to contact you? I'm going to close this for now. @NeilGo - If you are able to reply to @dsk-2015 and work through her questions, feel free to re-open the PR. Thanks
2025-04-01T04:10:11.640928
2023-07-19T15:42:56
1812226284
{ "authors": [ "ellismg", "savannahostrowski" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13478", "repo": "Azure/azure-dev", "url": "https://github.com/Azure/azure-dev/issues/2552" }
gharchive/issue
Support --verbose with new --what-if flag on azd provision Tracking this so we don't forget about it. Not required in #2550 but could follow with user feedback Something to consider here is the interaction with terraform when it is the IaC provider of choice. I think that by default, tf plan is going to show something that feels like --detailed has been passed. Our general strategy for TF in the past has been to just show the raw output from the tool (with the native terraform styling) instead of trying to have it match the UX we have for ARM based deployments. From looking at the tf plan documentation, it does seem like we could get the output of the plan in a structured format (i.e., JSON) and then render it ourselves, but that would be different from what we do today for the normal provision case where we just flow the output of both plan and apply back to the user. This is a good callout - I think it'd be nice to have a similarly high level view of the plan for Terraform but we can tackle that investigation as demand arises I think.
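The structured-output idea discussed above can be sketched: `terraform show -json tfplan` emits a JSON document containing a `resource_changes` array, from which a tool could render its own what-if style summary. A minimal sketch follows; the sample plan JSON is fabricated for illustration and only models the `resource_changes` portion of the real format.

```python
import json

# Miniature stand-in for the output of `terraform show -json tfplan`.
# Only the `resource_changes` portion is modeled here; resource addresses
# are made up for illustration.
plan_json = """
{
  "resource_changes": [
    {"address": "azurerm_resource_group.rg", "change": {"actions": ["create"]}},
    {"address": "azurerm_storage_account.sa", "change": {"actions": ["update"]}},
    {"address": "azurerm_app_service.web", "change": {"actions": ["no-op"]}}
  ]
}
"""

def summarize_plan(plan_text: str) -> dict:
    """Group planned resource changes by action, what-if style."""
    plan = json.loads(plan_text)
    summary: dict = {}
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            summary.setdefault(action, []).append(rc["address"])
    return summary

summary = summarize_plan(plan_json)
for action, addresses in summary.items():
    print(f"{action}: {', '.join(addresses)}")
```

This is roughly the shape of work a custom renderer would do instead of flowing the raw `tf plan` output back to the user.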
2025-04-01T04:10:11.643042
2023-01-30T23:50:23
1563386620
{ "authors": [ "madansr7", "msewaweru" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13479", "repo": "Azure/azure-docs-powershell-azuread", "url": "https://github.com/Azure/azure-docs-powershell-azuread/issues/883" }
gharchive/issue
Get password documentation is incorrect The documentation https://github.com/Azure/azure-docs-powershell-azuread/blob/live/azureadps-2.0/AzureAD/Get-AzureADApplicationPasswordCredential.md shows the password credential value being returned. AAD does not return the value of password secrets, and this is causing confusion among customers. Looking at the following GH issue for reference. https://github.com/Azure/azure-docs-powershell-azuread/issues/865 Hi @madansr7, thanks for highlighting this issue. I have a PR to correct this. Proceeding to close this.
2025-04-01T04:10:11.650851
2020-07-22T00:21:39
663387078
{ "authors": [ "NoahMillerEXO", "billmath", "qinezh" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13480", "repo": "Azure/azure-docs-powershell-azuread", "url": "https://github.com/Azure/azure-docs-powershell-azuread/pull/456" }
gharchive/pull-request
Update Set-AzureADUserThumbnailPhoto.md Typo correction, "witht eh PObjectId" should be "with the ObjectID" Docs Build status updates of commit 26385ce: :white_check_mark: Validation status: passed File Status Preview URL Details azureadps-2.0/AzureAD/Set-AzureADUserThumbnailPhoto.md :white_check_mark:Succeeded For more details, please refer to the build report. Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report. For any questions, please:Try searching in the Docs contributor and Admin GuideSee the frequently asked questionsPost your question in the Docs support channel Hi @NoahMillerEXO Per the read me file: Please be aware that if you have any updates to automatically generated reference content, you should create an issue and not a PR. The issue will then be created into a bug and triaged accordingly. We do this because the reference content must mirror the *dll-help.xml file and subsequent updates to the reference content may overwrite any changes made through the repo. Capturing it in the source code ensures that this does not occur. Any PR with changes to automatically generated reference content will be closed. Automatically generated reference content is any of the content under this Reference node, such as Get-AzureADUser This only applies to reference content and not conceptual content. Pull requests for changes to conceptual content are accepted and encouraged. An example of the conceptual content is the Overview or a scenario such as importing data. Thank you! Bill
2025-04-01T04:10:11.664147
2021-01-04T16:59:46
778229817
{ "authors": [ "ArthurEzenwanne", "billmath" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13481", "repo": "Azure/azure-docs-powershell-azuread", "url": "https://github.com/Azure/azure-docs-powershell-azuread/pull/553" }
gharchive/pull-request
Update install-adv2.md Added installation for Windows 10 Docs Build status updates of commit 2021b7b: :white_check_mark: Validation status: passed File Status Preview URL Details docs-conceptual/azureadps-2.0/install-adv2.md :bulb:Suggestion Details docs-conceptual/azureadps-2.0/install-adv2.md Line 2, Column 1: [Suggestion-description-missing] Missing required attribute: 'description'. For more details, please refer to the build report. Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report. For any questions, please: Try searching in the Docs contributor and Admin Guide, see the frequently asked questions, or post your question in the Docs support channel. @ArthurEzenwanne Thanks for the submission!
2025-04-01T04:10:11.670268
2019-05-17T00:21:35
445211821
{ "authors": [ "SnehaGunda", "markjbrown" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13482", "repo": "Azure/azure-docs-powershell-samples", "url": "https://github.com/Azure/azure-docs-powershell-samples/pull/202" }
gharchive/pull-request
New PS scripts for other APIs DESCRIPTION New PS scripts for all APIs. Full create examples, List-Get in a single script, update RU for db-level and container level. Created Common folder, copied scripts which are shared. After merge I will be removing a number of scripts in the SQL folder and submitting a follow-up PR to reduce the total number of scripts. All create examples will provision account and database. Also adding @SnehaGunda for additional review. CHECKLIST [ ] This pull request was tested on: [ ] PowerShell 5.1 on Windows [x] PowerShell 6.x on Windows [ ] PowerShell 6.x on Linux [x] Resources not created by the scripts in the pull request are explicitly mentioned in a comment at the head of the file, and have user-supplied variables or function arguments which correspond to the required resource identifiers. [x] All user-supplied values are set in variables at the head of the file, after any comments. [ ] All passwords are user-supplied values, or generated by a secure random string generator. [x] All Azure identifiers required to be universally unique are guaranteed to be so. [x] All scripts only use commands available in the latest release of the Azure PowerShell Az module. (Command reference) [ ] I have an exception! (optional - state for which service) [ ] I'm requesting an exception! (optional - you must include your MS alias for further discussion before PR review) [ ] All scripts contain only ASCII characters (no 'smart quotes' or other wide characters) @SnehaGunda I have made all the recommended changes. Please review/merge. Thanks. @markjbrown looks good to me. @sptramer can you please merge #sign-off
2025-04-01T04:10:11.676531
2020-10-01T04:43:27
712473907
{ "authors": [ "VSC-Service-Account", "azure-sdk" ], "license": "CC-BY-4.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13483", "repo": "Azure/azure-docs-sdk-dotnet", "url": "https://github.com/Azure/azure-docs-sdk-dotnet/pull/1483" }
gharchive/pull-request
Docs.MS Release Updates for Azure.Storage.Common Update docs metadata and targeting for release of Azure.Storage.Common Docs Build status updates of commit 28f5f79: :white_check_mark: Validation status: passed File Status Preview URL Details api/overview/azure/storage.common-readme-pre.md :white_check_mark:Succeeded View For more details, please refer to the build report. Note: Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the broken link report. For any questions, please:Try searching in the Docs contributor and Admin GuideSee the frequently asked questionsPost your question in the Docs support channel
2025-04-01T04:10:11.681779
2017-10-25T03:34:56
268256027
{ "authors": [ "chrisparkeronline", "kirankumarkolli", "rnagpal" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13484", "repo": "Azure/azure-documentdb-dotnet", "url": "https://github.com/Azure/azure-documentdb-dotnet/issues/372" }
gharchive/issue
"Entity with the specified id does not exist in the system." Azure Function Hi, I receive an "Entity with the specified id does not exist in the system." error when using the code below, but only within an Azure Function in production. When I changed the code to connect to my REST API, the same code within the API works. _script = await _client.ReadDocumentAsync<Screenplay>( UriFactory.CreateDocumentUri( DatabaseId, CollectionId, myQueueItem.ScreenplayId ), new RequestOptions { PartitionKey = new PartitionKey( null ) } ); Additional information. "parentId" is my partition key. This document has a null "parentId". I have the firewall set to allow my local dev box IP address as well as the Azure Function Virtual IP address. null is handled differently in the SDK. @chrisparkeronline please try 'Undefined.Value' as PartitionKey. How is your REST API handling nulls? Hi, sorry for the delay. Not sure what the issue was, but recreating the partition with "/parentId" set up the partition using the path I wanted. Additionally, using the following code worked for me. For documents with a null parentId, meaning no partition key (top level documents in my case), I used this code: var retVal = client .CreateDocumentQuery<MyCustomObject>( Uri, query, new FeedOptions { EnableCrossPartitionQuery = true, PartitionKey = new PartitionKey( null ) } ).AsEnumerable().FirstOrDefault(); For documents that have a parentId, meaning sub documents of a top level document, I used this code: var retVal = client .CreateDocumentQuery<MyCustomObject>( Uri, query, new FeedOptions { EnableCrossPartitionQuery = true, PartitionKey = new PartitionKey( $"{parentIdValue}" ) } ).AsEnumerable().FirstOrDefault(); Thank you for your help @kirankumarkolli. @chrisparkeronline Just to clarify, when you pass "null" as the value of the partition key, it can be a valid value of your partition key. 
These are different semantics from a "missing" partition key in your document, in which case you need to pass Undefined.Value as Kiran mentioned above. The exception that you are getting from the ReadDocumentAsync API is expected if there is no such document. If you try running the same using the query approach you showed, you will get a "null" (and not an exception) for the retVal, which means the same thing. It's a more graceful way to check for the existence of a document without throwing an exception and checking for a 404 status code. Please let me know if you have any further comments.
2025-04-01T04:10:11.683604
2017-04-17T14:01:22
222138123
{ "authors": [ "ahmelsayed", "dnfclas", "joescars" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13485", "repo": "Azure/azure-functions-cli", "url": "https://github.com/Azure/azure-functions-cli/pull/120" }
gharchive/pull-request
Resolves Null Ref Error on settings. Closes 113 Added in description for settings -ShowValue to resolve null reference exception. Closes 113 This seems like a small (but important) contribution, so no Contribution License Agreement is required at this point. We will now review your pull request. Thanks, .NET Foundation Pull Request Bot Thanks @joescars!
2025-04-01T04:10:11.698351
2024-03-05T10:35:37
2168858961
{ "authors": [ "dougq-PureGym", "jamesmcroft", "lnhzd", "satvu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13486", "repo": "Azure/azure-functions-dotnet-worker", "url": "https://github.com/Azure/azure-functions-dotnet-worker/issues/2325" }
gharchive/issue
Middleware is not consistently writing the response when using ASP.NET Core Integration Description I have written some authentication middleware for an Azure Function App using the Isolated Process model with ASP.NET Core integration. On failure to authenticate I am returning a 401 with the following code var req = await context.GetHttpRequestDataAsync(); var res = req.CreateResponse(HttpStatusCode.Unauthorized); var invocationResult = context.GetInvocationResult(); invocationResult.Value = res; This works fine locally, but when deployed to Azure, it is only working intermittently. Sometimes the function will return a 401 response, but sometimes it will return a 200. At no point is the function itself being invoked. Some things I have noticed Switching off the ASP.NET Core integration (using ConfigureFunctionsWorkerDefaults and removing the reference to Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore) fixes the problem I can set the 401 response either by assigning the HttpResponseData to invocationResult.Value, or by setting it in the HttpContext returned by context.GetHttpContext(), but both methods exhibit the same problem When the function is first deployed, it consistently returns a 401, but after being called repeatedly we start seeing 200s. The likelihood of a 200 seems to increase over time. Logging has confirmed that the Middleware is being called successfully and the code above is being run to completion. Here is the minimum reproduction repository https://github.com/dougq-PureGym/AzFuncHttpRewrite I am using the latest version of app packages and .NET 8. Steps to reproduce Enable ASP.NET Core integration in an Azure Function App Create an implementation of IFunctionsWorkerMiddleware which sets the HTTP Response by assigning to invocationResult.Value Add the Middleware to ConfigureFunctionsWebApplication Hi @dougq-PureGym can you also share your function app privately using these instructions? 
We would like to observe the reported behavior in production to aid investigation. Thanks for the response, the Execution Time is 2024-03-06T10:24:29Z, ID is f9f61248-1c85-4da2-938a-ebfedb5f0bac, region is UK South. Please note, this is not a production app, the error was found in QA so not deployed to production. This is just the minimal reproduction repo I supplied earlier. One thing we have noticed during testing is that the problem can vary a lot over time. Sometimes 200s are relatively rare, maybe 5% of responses, other times they are over 50%. Thanks On further investigation it appears that this is not Middleware related. We are seeing intermittent 200 responses no matter how we construct the Response, whether in Middleware or in the Function itself. This appears to be a duplicate of this issue - https://github.com/Azure/azure-functions-dotnet-worker/issues/2215 although we are seeing the problem a lot more often than is mentioned in that report Thanks for the follow up @dougq-PureGym - we will continue investigating. @satvu @dougq-PureGym I've also been having a look into this to try and diagnose the issue. It appears that something is happening in the processing of the response as part of the invocation result based on the Function declaration. In my initial experimenting, if you throw an exception instead, e.g. an UnauthorizedAccessException, you will consistently see a failure in the request as a 500 response. However, this isn't ideal if you want to control the response code. On a fork of the provided sample, I have also experimented with alternative approaches and appear to no longer see the 200 status code. Firstly, I attempted to write a JSON response into the middleware when setting the value of GetInvocationResult which reduced the frequency of the 200. 
I then moved on to swapping from IActionResult to Task<HttpResponseData> in the response type for the Function, as well as swapping the request body parameter from HttpRequest to HttpRequestData and this seemed to remove the issue entirely in my testing. I cannot confirm what is necessarily causing this at this stage and why the change appears to resolve the issue but I will also continue to investigate. We are hitting the same issue without using invocationResult: var httpContext = context.GetHttpContext(); await httpContext.ForbidAsync(); return; Expecting 403 but roughly got 1 or 2 (empty) 200 responses out of 20 requests sent. @satvu @dougq-PureGym I've also been having a look into this to try and diagnose the issue. It appears that something is happening in the processing of the response as part of the invocation result based on the Function declaration. In my initial experimenting, if you throw an exception instead, e.g. an UnauthorizedAccessException, you will consistently see a failure in the request as a 500 response. However this isn't ideal if you want to control the response code. On a fork of the provided sample, I have also experimented with alternative approaches and appear to no longer see the 200 status code. Firstly, I attempted to write a JSON response into the middleware when setting the value of GetInvocationResult which reduced the frequency of the 200. I then moved on to swapping from IActionResult to Task<HttpResponseData> in the response type for the Function, as well as swapping the request body parameter from HttpRequest to HttpRequestData and this seemed to remove the issue entirely in my testing. I cannot confirm what is necessarily causing this at this stage and why the change appears to resolve the issue but I will also continue to investigate. Tried this approach, unfortunately still seeing intermittent 200 from my side. 
//var req = await context.GetHttpRequestDataAsync(); //var res = req.CreateResponse(HttpStatusCode.Unauthorized); //await res.WriteAsJsonAsync(new { Status = "Unauthorized", Message = "Unauthorized access." }, res.StatusCode); //context.GetInvocationResult().Value = res; On further investigation it appears that this is not Middleware related. We are seeing intermittent 200 responses no matter how we construct the Response, whether in Middleware or in the Function itself. This appears to be a duplicate of this issue - #2215 although we are seeing the problem a lot more often than is mentioned in that report Closing this in favor of #2215 as we believe it's the same root issue - @lnhzd please follow the other issue for updates.
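The workaround described in the thread above — swapping the function signature from the ASP.NET Core types to the worker-model types — can be sketched roughly as below. This is an illustrative sketch only, not a confirmed fix; the function name and route are made up, and only the signature swap reflects what the commenters reported.

```csharp
// Sketch of the reported workaround: use HttpRequestData/HttpResponseData
// (worker model) instead of the ASP.NET Core HttpRequest/IActionResult pair.
// Function name "Ping" and its behavior are illustrative assumptions.
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class PingFunction
{
    [Function("Ping")]
    public async Task<HttpResponseData> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        // Build the response through the worker model rather than ASP.NET Core;
        // in the commenter's testing this avoided the intermittent 200s.
        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync("pong");
        return response;
    }
}
```

Whether this resolves the issue appears to vary by reporter, so treat it as a diagnostic experiment rather than a fix.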
2025-04-01T04:10:11.923675
2023-02-15T11:44:11
1585711852
{ "authors": [ "rafaeldnferreira", "sathiyan-sivathas", "ziyeqf" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13487", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/issues/22634" }
gharchive/issue
Mobile Network - identity type mismatch For resources that do not support both SystemAssigned and UserAssigned identity, please use UserAssignedIdentity instead of ManagedServiceIdentity, e.g.: https://github.com/Azure/azure-rest-api-specs/blob/main/specification/mobilenetwork/resource-manager/Microsoft.MobileNetwork/stable/2022-11-01/simGroup.json#L325 @sathiyan-sivathas could you track a fix for the next API? I'm assuming we are keeping our position to only support UserAssigned. @ziyeqf are you sure that will work? I think there are at least a couple of problems: UserAssignedIdentity is a read-only object so there's no way for a user to specify a user-assigned MI using that type. At the very least we would need UserAssignedIdentities where the key in the dictionary is the user-assigned MI ID. Then how will ARM treat this change? My understanding was that ARM enriches the user request with additional headers based on the managed identity type set by the user. If we remove the type field, how does ARM know how to treat the request? I've found several documents that suggest that the identity property must follow a set format. I think I made a wrong link, sorry for that. Just checked other services which only support UserAssigned identity; it seems they define identity in their own Swagger. Here are some examples: https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cosmos-db/resource-manager/Microsoft.DocumentDB/stable/2022-05-15/managedCassandra.json#L771 https://github.com/Azure/azure-rest-api-specs/blob/main/specification/network/resource-manager/Microsoft.Network/stable/2022-07-01/firewallPolicy.json#L955 I don't think it should be removed, but the current type allows users to set a SystemAssigned identity, which will not be used by the service, I think. @ziyeqf thanks. I agree if we made API changes similar to those ones you've linked to, then we would support user-assigned only.
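The pattern in the linked specs — declaring a user-assigned-only identity object directly in the service's swagger rather than reusing the shared ManagedServiceIdentity — looks roughly like the fragment below. This is an illustrative sketch, not the actual Mobile Network or Cosmos DB definition; the definition name and `$ref` target are assumptions following common ARM swagger conventions.

```json
{
  "ManagedServiceIdentityWithUserAssignedOnly": {
    "description": "Illustrative identity object restricted to user-assigned identities.",
    "type": "object",
    "properties": {
      "type": {
        "type": "string",
        "description": "Only None and UserAssigned are allowed.",
        "enum": [ "None", "UserAssigned" ],
        "x-ms-enum": { "name": "ManagedServiceIdentityType", "modelAsString": true }
      },
      "userAssignedIdentities": {
        "type": "object",
        "description": "Dictionary keyed by the ARM resource IDs of the user-assigned identities.",
        "additionalProperties": {
          "$ref": "#/definitions/UserAssignedIdentity"
        }
      }
    }
  }
}
```

The key points from the discussion are reflected here: the dictionary keys carry the user-assigned MI IDs (so UserAssignedIdentity itself can stay read-only), and the `type` enum omits SystemAssigned so users cannot request an identity the service will never use.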
2025-04-01T04:10:11.929884
2024-06-27T18:46:50
2379964086
{ "authors": [ "Remuwon", "TravisCragg-MSFT" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13488", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/issues/29633" }
gharchive/issue
Long Response Times When Listing SKUs Package Name: azure-mgmt-compute Package Version: 31.0.0 Operating System: Azure App Service Linux Python Version: 3.10 Describe the bug I am experiencing significant delays when attempting to list SKUs using the azure.mgmt.compute Python SDK. The request takes an unusually long time to return (20-30 seconds), impacting the performance of my application. To Reproduce Steps to reproduce the behavior: Initialize the ComputeManagementClient with valid Azure credentials. Call the resource_skus.list() method. Observe the time taken for the method to return. Sample Code: from azure.identity import DefaultAzureCredential from azure.mgmt.compute import ComputeManagementClient # Initialize credentials and client credential = DefaultAzureCredential() subscription_id = 'your-subscription-id' compute_client = ComputeManagementClient(credential, subscription_id) # List SKUs try: skus = compute_client.resource_skus.list() skus_list = list(skus) print(f"Total SKUs retrieved: {len(skus_list)}") except Exception as e: print(f"An error occurred: {e}") Expected behavior The SKUs should be listed promptly without significant delays. Additional context For some background, I need to show customers regions where specific VM family types exist so that they can launch their workloads in the correct region. As far as I've searched, there is no specific API to do this, leaving me with the compute SKU list API. I fetch the SKUs and create a reverse hash map mapping VM family to region. I have tested this in only the eastus region, and the delay was consistent. Other API calls using the same client are responsive and do not exhibit this delay. Any guidance on optimising or any potential workarounds would be appreciated. If this is a known issue, information on any ongoing efforts to resolve it would be helpful. @Remuwon The response size of this query is ~2-4Mb of JSON, and can take some time to parse depending upon the application. 
At this time, there is no way to reduce this further. We are in the process of revamping this API to allow for targeted queries with KQL, where you can write a query that returns only the regions in which a SKU is supported. Keep an eye on upcoming announcements over the next year for more information.
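The reverse-map approach the reporter describes (VM family → regions) can be sketched against stubbed SKU objects. The attribute names `resource_type`, `family`, and `locations` follow the azure-mgmt-compute ResourceSku model; the sample data itself is fabricated, and real code would iterate over `compute_client.resource_skus.list()` instead.

```python
from collections import defaultdict
from types import SimpleNamespace

# Fabricated stand-ins for azure.mgmt.compute ResourceSku objects.
skus = [
    SimpleNamespace(resource_type="virtualMachines", family="standardDSv5Family",
                    locations=["eastus", "westus2"]),
    SimpleNamespace(resource_type="virtualMachines", family="standardEv4Family",
                    locations=["eastus"]),
    SimpleNamespace(resource_type="disks", family="standardSSD",
                    locations=["eastus"]),
]

def family_to_regions(sku_iter):
    """Build a reverse map: VM family -> set of regions where it is offered."""
    mapping = defaultdict(set)
    for sku in sku_iter:
        if sku.resource_type != "virtualMachines":
            continue  # only VM SKUs matter for placement decisions
        for location in sku.locations:
            mapping[sku.family].add(location)
    return mapping

regions_by_family = family_to_regions(skus)
```

Since the underlying list call is the slow part, the mapping is best built once and cached rather than recomputed per request.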
2025-04-01T04:10:11.932090
2020-06-18T00:23:11
640820136
{ "authors": [ "brjohnstmsft", "heaths", "mattmsft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13489", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/issues/9876" }
gharchive/issue
Document includeSanitizedConnectionString query parameter for data sources There is an includeSanitizedConnectionString query parameter that is not documented for data source APIs like this one that will send back a sanitized connection string, which is normally omitted for security reasons. Customers may want to see what non-PII information they can, so it might be helpful to document this for any APIs that support it; however, it's important to warn users that writing the sanitized string back in an update will be destructive. FYI @bleroy -- Please route to the right owner @mattmsft, please confirm that this is still on your radar. Confirmed. I'll take a look at this soon.
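For illustration only, a request using the parameter would take roughly the shape below. The host, data source name, and api-version are placeholder assumptions (the parameter is undocumented, which is the point of the issue), so none of these specifics should be treated as authoritative.

```http
GET https://[service].search.windows.net/datasources/[datasource-name]?api-version=[api-version]&includeSanitizedConnectionString=true
api-key: [admin-key]
```

Per the warning above, the sanitized `credentials.connectionString` returned by such a call must not be sent back in a subsequent update, or it will overwrite the stored secret.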
2025-04-01T04:10:11.939753
2017-05-08T16:58:10
227109123
{ "authors": [ "AutorestCI", "msftclas", "ssankar1984" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13490", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/1200" }
gharchive/pull-request
Swagger Header Constant Change This checklist is used to make sure that common issues in a pull request are addressed. This will expedite the process of getting your pull request merged and avoid extra work on your part to fix issues discovered during the review process. PR information [Y ] The title of the PR is clear and informative. [ Y] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For information on cleaning up the commits in your pull request, see this page. [Y ] Except for special cases involving multiple contributors, the PR is started from a fork of the main repository, not a branch. [Y ] If applicable, the PR references the bug/issue that it fixes. [ Y] Swagger files are correctly named (e.g. the api-version in the path should match the api-version in the spec). Quality of Swagger [ Y] I have read the contribution guidelines. [ Y] My spec meets the review criteria: [ Y] The spec conforms to the Swagger 2.0 specification. [ Y] The spec follows the guidelines described in the Swagger checklist document. [ Y] Validation tools were run on swagger spec(s) and have all been fixed in this PR. @ssankar1984, Thanks for your contribution as a Microsoft full-time employee or intern. You do not need to sign a CLA. Thanks, Microsoft Pull Request Bot No modification for Ruby No modification for Python https://github.com/Azure/azure-sdk-for-node/pull/2160
2025-04-01T04:10:11.954732
2021-06-15T00:44:57
920882410
{ "authors": [ "lirenhe", "marjanmr" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13491", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/14804" }
gharchive/pull-request
Adding stable swaggers MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow. Changelog Please ensure to add changelog with this PR by answering the following questions. What's the purpose of the update? [ ] new service onboarding [x ] new API version [ ] update existing version for new feature [ ] update existing version to fix swagger quality issue in s360 [ ] Other, please clarify When you are targeting to deploy new service/feature to public regions? Please provide date, or month to public if date is not available yet. When you expect to publish swagger? Please provide date, or month to public if date is not available yet. If it's an update to existing version, please select SDKs of specific language and CLIs that require refresh after swagger is published. [ x] SDK of .NET (need service team to ensure code readiness) [ ] SDK of Python [ x] SDK of Java [ ] SDK of Js [ ] SDK of Go [ x] PowerShell [ x] CLI [ x] Terraform [ ] No, no need to refresh for updates in this PR Contribution checklist: [ x] I commit to follow the Breaking Change Policy of "no breaking changes" [ x] I have reviewed the documentation for the workflow. [ x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix? If any further question about AME onboarding or validation tools, please view the FAQ. ARM API Review Checklist [ x] Ensure to check this box if one of the following scenarios meet updates in the PR, so that label “WaitForARMFeedback” will be added automatically to involve ARM API Review. Failure to comply may result in delays for manifest application. Note this does not apply to data plane APIs, all “removals” and “adding a new property” no more require ARM API review. 
Adding new API(s) Adding a new API version [x ] Ensure to copy the existing version into new directory structure for first commit (including refactoring) and then push new changes including version updates in separate commits. This is required to review the changes efficiently. Adding a new service [ x] Please ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board. [ x] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Breaking Change Review Checklist If there are following updates in the PR, ensure to request an approval from Breaking Change Review Board as defined in the Breaking Change Policy. [ ] Removing API(s) in stable version [ ] Removing properties in stable version [ ] Removing API version(s) in stable version [ ] Updating API in stable or public preview version with Breaking Change Validation errors [ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews) Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Addition details on the process and office hours are on the Breaking change Wiki. Please follow the link to find more details on PR review process. @marjanmr, was this PR already reviewed in RpaasMaster branch or is this a new PR? @marjanmr, was this PR already reviewed in RpaasMaster branch or is this a new PR? It is reviewed and merged in RPSaaSMaster of Azure-rest-api-spec-pr. Swagger already approved in private repo, https://github.com/Azure/azure-rest-api-specs-pr/pull/4002. Approve and merge the PR.
2025-04-01T04:10:11.957440
2021-09-08T02:03:04
990578890
{ "authors": [ "BigCat20196", "JackTn" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13492", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/15947" }
gharchive/pull-request
Update readme.python.md https://github.com/Azure/sdk-release-request/issues/1929 Hi, @BigCat20196. The PR has been closed for a long time and its related branch still exists. Please tell me if you still need this branch or I will delete it in 14 days.
2025-04-01T04:10:11.973816
2021-11-15T07:41:32
1053314446
{ "authors": [ "KeYu-AnkhSpirit", "raych1", "tadelesh" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13493", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/16762" }
gharchive/pull-request
Fix XMS_EXAMPLE_NOTFOUND_ERROR for compute swaggers under default tag MSFT employees can try out our new experience at OpenAPI Hub - one location for using our validation tools and finding your workflow. Changelog Add a changelog entry for this PR by answering the following questions: What's the purpose of the update? [ ] new service onboarding [ ] new API version [ ] update existing version for new feature [ ] update existing version to fix swagger quality issue in s360 [x] Other, please clarify When are you targeting to deploy the new service/feature to public regions? Please provide the date or, if the date is not yet available, the month. When do you expect to publish the swagger? Please provide date or, the the date is not yet available, the month. If updating an existing version, please select the specific langauge SDKs and CLIs that must be refreshed after the swagger is published. [ ] SDK of .NET (need service team to ensure code readiness) [ ] SDK of Python [ ] SDK of Java [ ] SDK of Js [ ] SDK of Go [ ] PowerShell [ ] CLI [ ] Terraform [x] No refresh required for updates in this PR Contribution checklist: [ ] I commit to follow the Breaking Change Policy of "no breaking changes" [ ] I have reviewed the documentation for the workflow. [x] Validation tools were run on swagger spec(s) and errors have all been fixed in this PR. How to fix? If any further question about AME onboarding or validation tools, please view the FAQ. ARM API Review Checklist Applicability: :warning: If your changes encompass only the following scenarios, you should SKIP this section, as these scenarios do not require ARM review. Change to data plane APIs Adding new properties All removals Otherwise your PR may be subject to ARM review requirements. Complete the following: [ ] Check this box if any of the following apply to the PR so that label “WaitForARMFeedback” will be added automatically to begin ARM API Review. Failure to comply may result in delays to the manifest. 
Adding a new service Adding new API(s) Adding a new API version -[ ] To review changes efficiently, ensure you copy the existing version into the new directory structure for first commit and then push new changes, including version updates, in separate commits. [ ] Ensure you've reviewed following guidelines including ARM resource provider contract and REST guidelines. Estimated time (4 hours). This is required before you can request review from ARM API Review board. [ ] If you are blocked on ARM review and want to get the PR merged with urgency, please get the ARM oncall for reviews (RP Manifest Approvers team under Azure Resource Manager service) from IcM and reach out to them. Breaking Change Review Checklist If any of the following scenarios apply to the PR, request approval from the Breaking Change Review Board as defined in the Breaking Change Policy. [ ] Removing API(s) in a stable version [ ] Removing properties in a stable version [ ] Removing API version(s) in a stable version [ ] Updating API in a stable or public preview version with Breaking Change Validation errors [ ] Updating API(s) in public preview over 1 year (refer to Retirement of Previews) Action: to initiate an evaluation of the breaking change, create a new intake using the template for breaking changes. Addition details on the process and office hours are on the Breaking change Wiki. Please follow the link to find more details on PR review process. @changlong-liu @tadelesh , do you have any concern on these examples changes? Will auto-generated examples with minimumSet rule work for mock test? @changlong-liu @tadelesh , do you have any concern on these examples changes? Will auto-generated examples with minimumSet rule work for mock test? For request part it's OK. If any required param not set, a default value will be used for assignment in GO test generation. But if we add response check in the future, I think response should maintained more carefully. @changlong-liu What do you think? 
@changlong-liu @tadelesh , do you have any concern on these examples changes? Will auto-generated examples with minimumSet rule work for mock test? For request part it's OK. If any required param not set, a default value will be used for assignment in GO test generation. But if we add response check in the future, I think response should be maintained more carefully. @changlong-liu What do you think? Agree, we shall refine response when it's not accurate.
2025-04-01T04:10:12.008755
2016-10-11T21:18:22
182377048
{ "authors": [ "AutorestCI", "azurecla", "yugangw-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13496", "repo": "Azure/azure-rest-api-specs", "url": "https://github.com/Azure/azure-rest-api-specs/pull/596" }
gharchive/pull-request
webapp: expose entry point for linux web app Cherry pick from #595 so to unblock azure cli command work for linux webapp support //cc: @LukaszStem Hi @yugangw-msft, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution! It looks like you're working at Microsoft (yugangw). If you're full-time, we DON'T require a contribution license agreement. If you are a vendor, DO please sign the electronic contribution license agreement. It will take 2 minutes and there's no faxing! https://cla.azure.com. TTYL, AZPRBOT; https://github.com/Azure/azure-sdk-for-python/pull/817 https://github.com/Azure/azure-sdk-for-ruby/pull/489 https://github.com/Azure/azure-sdk-for-node/pull/1925
2025-04-01T04:10:12.010198
2021-05-14T22:48:57
892290327
{ "authors": [ "antkmsft", "vhvb1989" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13497", "repo": "Azure/azure-sdk-for-cpp", "url": "https://github.com/Azure/azure-sdk-for-cpp/issues/2271" }
gharchive/issue
Doxygen drops periods in the last sentences See descriptions in the class list: https://azuresdkdocs.blob.core.windows.net/$web/cpp/azure-core/1.0.0-beta.8/annotated.html Our doxygen comments have periods, but they are not shown in the generated docs. By design, Doxygen won't add periods on table briefs.
2025-04-01T04:10:12.015751
2021-05-18T08:04:28
894098435
{ "authors": [ "Jinming-Hu", "vhvb1989" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13498", "repo": "Azure/azure-sdk-for-cpp", "url": "https://github.com/Azure/azure-sdk-for-cpp/pull/2291" }
gharchive/pull-request
copy constructor for RequestFailedException closes https://github.com/Azure/azure-sdk-for-cpp/issues/2281 Pull Request Checklist Please leverage this checklist as a reminder to address commonly occurring feedback when submitting a pull request to make sure your PR can be reviewed quickly: See the detailed list in the contributing guide. [x] C++ Guidelines [x] Doxygen docs [x] Unit tests [x] No unwanted commits/changes [x] Descriptive title/description [x] PR is single purpose [x] Related issue listed [x] Comments in source [x] No typos [x] Update changelog [x] Not work-in-progress [x] External references or docs updated [x] Self review of PR done [x] Any breaking changes? /azp run cpp - storage /azp run cpp - storage /azp run storage - cpp /azp run core - cpp /azp run cpp - storage /azp run cpp - core
2025-04-01T04:10:12.017941
2023-01-10T00:19:58
1526544125
{ "authors": [ "azure-sdk", "konrad-jamrozik" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13499", "repo": "Azure/azure-sdk-for-cpp", "url": "https://github.com/Azure/azure-sdk-for-cpp/pull/4231" }
gharchive/pull-request
Sync eng/common directory with azure-sdk-tools for PR 5095 Sync eng/common directory with azure-sdk-tools for PR https://github.com/Azure/azure-sdk-tools/pull/5095 See eng/common workflow @weshaggard @danieljurek overriding for the same reason as here: https://github.com/Azure/azure-sdk-for-cpp/pull/4227#issuecomment-1374465829 /check-enforcer override
2025-04-01T04:10:12.019404
2019-05-10T21:57:33
442908767
{ "authors": [ "AutorestCI" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13500", "repo": "Azure/azure-sdk-for-go", "url": "https://github.com/Azure/azure-sdk-for-go/pull/4774" }
gharchive/pull-request
[AutoPR managementgroups/resource-manager] Add managementGroups/{id}/descendants route to allow pagination of descendants Created to sync https://github.com/Azure/azure-rest-api-specs/pull/5917 This PR has been merged into https://github.com/Azure/azure-sdk-for-go/pull/5026
2025-04-01T04:10:12.026237
2020-05-04T19:33:31
612115640
{ "authors": [ "erzads", "joshfree", "weidongxu-microsoft", "yungezz" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13501", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/10693" }
gharchive/issue
[QUERY] increasing memory usage - am I doing something wrong?

Query/Question
What is the correct way to use the SDK? Currently, I instantiate com.microsoft.azure.management.Azure once and keep using it. Should I instantiate it every time I need to make a request to Azure? I am experiencing a very weird issue: my app keeps increasing its memory usage as the system gets used. I have no cache. All I do is go to Azure to fetch, create, shut down, and alter VMs.

Edit: I am profiling the application and it seems that there is large memory usage in deserializing the response from the resource SKUs API.

Setup (please complete the following information if applicable):
OS: [windows]
IDE: [IntelliJ]
Version of the Library used: 1.33.0

Information Checklist
Kindly make sure that you have added all the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report:
[x] Query Added
[x] Setup information Added

@yungezz could you please follow up? hi @weidongxu-microsoft could you pls have a look? thanks

@erzads Yes, creating one Azure object and reusing it is the usual approach. For memory, usually String takes a large part of memory for a common app, and the JVM itself may have great influence as well (GC, etc.). If you do expect a certain limit on memory usage, would you try to set max heap memory (-Xmx) and see if OutOfMemoryError happens.

My application was fine with 2GB RAM (usually never went past 1GB). After using the SKUs API, it increased to more than 10GB. I am not sure why, but something with the resource SKUs API might make it keep references that can't be GCed.

10GB appears much too large.
My test:

    long mem = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
    System.out.println("memory: " + mem);
    PagedList<ComputeSku> skus = azure.computeSkus().list();
    System.out.println("size: " + skus.size());
    skus = null;
    for (int i = 0; i < Integer.MAX_VALUE; ++i) {
        mem = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
        System.out.println("memory: " + mem);
        Thread.sleep(10000);
        System.gc();
    }

Output:

    memory: 37404400
    [pool-1-thread-1] INFO com.microsoft.aad.adal4j.AuthenticationAuthority - [Correlation ID: 42746dfd-83f2-477f-a029-2b194efbf5f8] Instance discovery was successful
    [com.microsoft.azure.azuresdktest.Main.main()] INFO com.microsoft.azure.management.compute.ResourceSkus list - --> GET https://management.azure.com/subscriptions/ec0aa5f7-9e78-40c9-85cd-535c6305b380/providers/Microsoft.Compute/skus?api-version=2017-09-01
    [com.microsoft.azure.azuresdktest.Main.main()] INFO com.microsoft.azure.management.compute.ResourceSkus list - <-- 200 https://management.azure.com/subscriptions/ec0aa5f7-9e78-40c9-85cd-535c6305b380/providers/Microsoft.Compute/skus?api-version=2017-09-01 (1135 ms, unknown-length body)
    size: 9782
    memory: 90319968
    memory: 43813920
    memory: 43768192
    memory: 43768080
    memory: 43768080
    memory: 43768080
    memory: 43767584

From what I saw, it peaked at 90319968 bytes for 9782 items, then went back to 43767584 after GC, which is not much different from the 37404400 before the task.
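The measure / release / System.gc() / re-measure pattern used in the test above can be reproduced without the Azure SDK. This is a minimal, self-contained sketch: the byte[] list stands in for the SDK's PagedList&lt;ComputeSku&gt;, and the ~100 MB allocation size is an arbitrary illustration, not a figure from the report.

```java
// Self-contained sketch of the measure -> release -> System.gc() ->
// re-measure pattern from the test above. The allocation size is
// arbitrary; no Azure types are involved.
import java.util.ArrayList;
import java.util.List;

public class MemCheck {
    // Approximate used heap, same expression as in the test above.
    static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) throws InterruptedException {
        long before = usedMemory();
        List<byte[]> chunks = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            chunks.add(new byte[1024 * 1024]); // hold ~100 MB live
        }
        long peak = usedMemory();
        chunks = null;   // drop the only reference, like `skus = null` above
        System.gc();     // request a collection
        Thread.sleep(200);
        long after = usedMemory();
        System.out.println("before=" + before + " peak=" + peak + " after=" + after);
    }
}
```

If `after` returns close to `before`, the released objects were collectable; a value that stays near `peak` would point at retained references.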
2025-04-01T04:10:12.029648
2023-02-23T21:03:36
1597525136
{ "authors": [ "anuchandy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13502", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/33701" }
gharchive/issue
Rework ReceiveLinkHandler to enable delivery buffer draining upon arrival on Qpid thread Rework ReceiveLinkHandler to enable delivery buffer draining upon arrival on the Qpid thread. Expose Flux of Message instead of Flux of Delivery. As a consequence of the drain on delivery, the decodeDelivery in ReactorReceiver and ServiceBusReactorReceiver will be removed. Benefits: reduces thread hopping in Event Hubs and Service Bus and fixes an edge case of the SB receiver hanging. Impl time notes: We may need a new internal ReceiveLinkHandler2 to co-exist with the current ReceiveLinkHandler for the short term while removing the legacy ReceiveLinkHandler dependency over multiple iterations. This is implemented in the PR https://github.com/Azure/azure-sdk-for-java/pull/34854 Work implemented in https://central.sonatype.com/artifact/com.azure/azure-core-amqp/2.9.0
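Generically, "draining the delivery buffer upon arrival" means decoding everything already queued in one pass on the delivering thread, rather than emitting raw deliveries for a downstream thread to decode. The sketch below illustrates only that shape; all names are made up, and this is not the azure-core-amqp implementation.

```java
// Generic drain-on-arrival sketch: decode every queued delivery in one
// pass on the arriving thread. Names are illustrative placeholders.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class DrainSketch {
    // Drains the buffer completely and returns the decoded messages.
    static List<String> drainAndDecode(Queue<String> deliveryBuffer) {
        List<String> messages = new ArrayList<>();
        String delivery;
        while ((delivery = deliveryBuffer.poll()) != null) {
            messages.add("msg(" + delivery + ")"); // stand-in for decodeDelivery
        }
        return messages;
    }

    public static void main(String[] args) {
        Queue<String> buffer = new ArrayDeque<>(List.of("d1", "d2", "d3"));
        System.out.println(drainAndDecode(buffer)); // buffer is empty afterwards
    }
}
```

Because decoding happens where the data arrives, no per-delivery hand-off between threads is needed, which is the "reduce thread hopping" benefit described above.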
2025-04-01T04:10:12.043480
2019-05-14T20:58:41
444122215
{ "authors": [ "joshfree", "kurtzeborn", "samvaity", "v-jaswel", "wiazur" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13503", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/3641" }
gharchive/issue
[BUG] Calling FaceAPIManager.authenticate(String, String) to authenticate does not set Azure region

Describe the bug
Calling the FaceAPIManager.authenticate(String, String) overload to create a Face API client fails to set a private member variable, which later results in thrown exceptions when you call the methods of the Faces interface.

Exception or Stack Trace

    Parameter this.client.azureRegion() is required and cannot be null.
    java.lang.IllegalArgumentException: Parameter this.client.azureRegion() is required and cannot be null.
        at com.microsoft.azure.cognitiveservices.vision.faceapi.implementation.FacesImpl.detectWithUrlWithServiceResponseAsync(FacesImpl.java:701)
        at com.microsoft.azure.cognitiveservices.vision.faceapi.implementation.FacesImpl.detectWithUrl(FacesImpl.java:658)
        at FindSimilar.main(FindSimilar.java:64)

To Reproduce
Steps to reproduce the behavior:
Create a Face API client with FaceAPIManager.authenticate(String, String). Note: to use FaceAPIManager, I am referencing azure-cognitiveservices-faceapi-1.0.2-beta.jar, which I compiled from https://github.com/Azure/azure-sdk-for-java. For more information see: https://github.com/Azure/azure-sdk-for-java/issues/3640.
Call a method like Faces.detectWithUrl.
An exception is thrown.

Code Snippet

    String subscriptionKey = "<insert key here>";
    String faceEndpoint = "https://westus.api.cognitive.microsoft.com";
    String imageUrl = "https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg";
    FaceAPI client = FaceAPIManager.authenticate(faceEndpoint, subscriptionKey);
    List<DetectedFace> detectedFaces = client.faces().detectWithUrl(imageUrl, new DetectWithUrlOptionalParameter().withReturnFaceId(true));

Expected behavior
Calling the FaceAPIManager.authenticate(String, String) overload to create a Face API client lets me call the methods of the Faces interface without errors.
Screenshots
N/A

Setup (please complete the following information):
OS: Windows 10 Pro 1804
IDE: None
Version of the Library used: 1.0.2-beta (compiled from downloaded Github repo https://github.com/Azure/azure-sdk-for-java)

Additional context
FaceAPIManager.authenticate(String, String) does not set the Azure region, unlike other overloads such as FaceAPIManager.authenticate(AzureRegions, String):
https://github.com/Azure/azure-sdk-for-java/blob/master/cognitiveservices/data-plane/vision/faceapi/src/main/java/com/microsoft/azure/cognitiveservices/vision/faceapi/FaceAPIManager.java#L24

    public static FaceAPI authenticate(AzureRegions region, String subscriptionKey) {
        return authenticate("https://{AzureRegion}.api.cognitive.microsoft.com/face/v1.0/", subscriptionKey)
            .withAzureRegion(region);
    }

FaceAPIImpl.withAzureRegion simply sets the azureRegion private member variable that is accessed by FaceAPIImpl.azureRegion.
https://github.com/Azure/azure-sdk-for-java/blob/master/cognitiveservices/data-plane/vision/faceapi/src/main/java/com/microsoft/azure/cognitiveservices/vision/faceapi/implementation/FaceAPIImpl.java#L49

    public FaceAPIImpl withAzureRegion(AzureRegions azureRegion) {
        this.azureRegion = azureRegion;
        return this;
    }

However, FaceAPIManager.authenticate(String, String) calls FaceAPIManager.authenticate(String, ServiceClientCredentials), neither of which calls FaceAPIImpl.withAzureRegion. This is probably because AzureRegions is an enum type and baseURL is a string, so they would need to parse the URL or just scan it for known AzureRegions values.
https://github.com/Azure/azure-sdk-for-java/blob/master/cognitiveservices/data-plane/vision/faceapi/src/main/java/com/microsoft/azure/cognitiveservices/vision/faceapi/FaceAPIManager.java#L77

    public static FaceAPI authenticate(String baseUrl, ServiceClientCredentials credentials) {
        return new FaceAPIImpl(baseUrl, credentials);
    }

FaceAPIImpl.initialize creates an instance of FacesImpl and passes itself to the constructor.
https://github.com/Azure/azure-sdk-for-java/blob/master/cognitiveservices/data-plane/vision/faceapi/src/main/java/com/microsoft/azure/cognitiveservices/vision/faceapi/implementation/FaceAPIImpl.java#L211

    protected void initialize() {
        // ...
        this.faces = new FacesImpl(restClient().retrofit(), this);

The FacesImpl saves the FaceAPIImpl instance.
https://github.com/Azure/azure-sdk-for-java/blob/master/cognitiveservices/data-plane/vision/faceapi/src/main/java/com/microsoft/azure/cognitiveservices/vision/faceapi/implementation/FacesImpl.java#L63

    public FacesImpl(Retrofit retrofit, FaceAPIImpl client) {
        this.service = retrofit.create(FacesService.class);
        this.client = client;
    }

FacesImpl's methods, such as detectWithUrlWithServiceResponseAsync, perform the following check, which throws because FaceAPIImpl.azureRegion was not set:

    if (this.client.azureRegion() == null) {
        throw new IllegalArgumentException("Parameter this.client.azureRegion() is required and cannot be null.");
    }

My suggestion re: the fix would be to change FaceAPIManager.authenticate(String, String) and/or FaceAPIManager.authenticate(String, ServiceClientCredentials) to call FaceAPIImpl.withAzureRegion. They'll likely need to parse the base URL or maybe just scan it for known AzureRegions values.

Information Checklist
Kindly make sure that you have added all the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report:
[x] Bug Description Added
[x] Repro Steps Added
[x] Setup information Added

Thank you for opening this issue! We are routing it to the appropriate team for follow up.

Hi @kurtzeborn, do you happen to know the status for this issue? Thanks much. @milismsft @ryogok

Hello @v-jaswel, Can you try using the withAzureRegion API along with the authenticate(String, String) method?

    FaceAPI client = FaceAPIManager.authenticate(faceEndpoint, subscriptionKey).withAzureRegion(AzureRegions.WESTUS2)

Let us know if this is still an issue.
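The "parse the base URL" idea suggested above could look roughly like the sketch below. This is not the SDK's actual code: a plain String stands in for the AzureRegions enum, and the helper name is made up.

```java
// Hypothetical helper: pull the region token out of a Cognitive
// Services endpoint such as https://westus.api.cognitive.microsoft.com,
// where the first host label is the region.
import java.net.URI;

public class RegionFromUrl {
    static String regionFromBaseUrl(String baseUrl) {
        String host = URI.create(baseUrl).getHost(); // e.g. westus.api.cognitive.microsoft.com
        return host.substring(0, host.indexOf('.')); // first label is the region
    }

    public static void main(String[] args) {
        System.out.println(regionFromBaseUrl("https://westus.api.cognitive.microsoft.com")); // westus
    }
}
```

An authenticate(String, String) overload could then map the extracted token onto the matching AzureRegions value before calling withAzureRegion, which would make the String-based overload behave like the enum-based one.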
2025-04-01T04:10:12.044782
2016-06-07T00:58:26
158807728
{ "authors": [ "martinsawicki" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13504", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/issues/773" }
gharchive/issue
more code reuse/base classes in the *Managers would help NetworkManager, StorageManager, etc share an awful lot of the same code that's asking to be genericized, perhaps... done enough for now
2025-04-01T04:10:12.046530
2020-06-12T16:58:05
637888332
{ "authors": [ "JimSuplizio" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13505", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/12172" }
gharchive/pull-request
Empty tests.yml to setup spring live test pipeline This is just an empty tests.yml to setup the spring live tests pipeline /check-enforcer override
2025-04-01T04:10:12.048243
2023-07-20T19:43:44
1814690570
{ "authors": [ "azure-sdk", "mssfang" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13506", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/36005" }
gharchive/pull-request
[Purview-Workflow] TestProxy Migration Steps: Followed steps in https://github.com/Azure/azure-sdk-for-java/wiki/Test-Proxy-Migration#2-migrate-updated-recordings-to-assets-repo API change check API changes are not detected in this pull request.
2025-04-01T04:10:12.052743
2024-01-03T18:27:41
2064470612
{ "authors": [ "alzimmermsft", "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13507", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/38199" }
gharchive/pull-request
Re-enable Spotbugs in azure-core with new design Description Re-enables Spotbugs in azure-core with a new design where azure-core has its own exclusion file instead of using a globally shared exclusion file. All SDK Contribution checklist: [x] The pull request does not introduce [breaking changes] [x] CHANGELOG is updated for new features, bug fixes or other significant changes. [x] I have read the contribution guidelines. General Guidelines and Best Practices [x] Title of the pull request is clear and informative. [x] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [x] Pull request includes test coverage for the included changes. API change check API changes are not detected in this pull request.
2025-04-01T04:10:12.055267
2024-11-26T01:47:33
2692778535
{ "authors": [ "azure-sdk", "tvaron3" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13508", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/43092" }
gharchive/pull-request
UDFs for Mapping Feed Ranges to Buckets Problem Customers are trying to perform joins with Databricks tables. These tables have their own partition keys. Description Two UDFs are added. One returns the number of feed ranges a user asks for. The other UDF returns the bucket a partition key falls in. Work in progress. Still adding more tests. Any suggestions for naming the UDFs? API change check API changes are not detected in this pull request. /azp run java - cosmos - tests /azp run java - cosmos - tests /azp run java - cosmos - tests /azp run java - cosmos - spark
2025-04-01T04:10:12.056129
2020-02-28T01:20:57
572460733
{ "authors": [ "JimSuplizio", "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13509", "repo": "Azure/azure-sdk-for-java", "url": "https://github.com/Azure/azure-sdk-for-java/pull/8556" }
gharchive/pull-request
Sync eng/common directory with azure-sdk-tools repository Sync eng/common directory with azure-sdk-tools repository The two failing runs are flaky datalake tests which are being investigated. I'm going to squash and merge this.
2025-04-01T04:10:12.058399
2022-02-23T20:26:53
1148541878
{ "authors": [ "jeremymeng", "timovv" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13510", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/20512" }
gharchive/issue
[Container Registry] OCI blob push/pull support Add support for OCI blob and manifest upload/download in order to achieve feature parity with .NET. Cc: @jeremymeng for reference: .NET apiview for ContainerRegistryBlobClient https://apiview.dev/Assemblies/Review/ba57bf904e524ce3bf647dad124ddc7c#Azure.Containers.ContainerRegistry.Specialized.ContainerRegistryBlobClient
2025-04-01T04:10:12.060634
2022-08-23T22:09:43
1348600373
{ "authors": [ "JoshLove-msft", "jeremymeng" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13511", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/22987" }
gharchive/issue
[Service Bus] Create Troubleshooting Guide Here is the template that each language can follow to ensure all topics are covered in a consistent way. As part of this work, we should update relevant exceptions to link out to aka.ms links that point to the corresponding section of the Troubleshooting guide, similar to what we did for Identity. PR #23322
2025-04-01T04:10:12.064257
2023-07-18T17:32:22
1810404132
{ "authors": [ "alzimmermsft", "dgetu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13512", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/26531" }
gharchive/issue
Cognitive Search August 2023 Preview Work This issue contains work items for the Cognitive Search August 2023 preview. [ ] Regenerate using the latest searchindex.json Swagger: https://raw.githubusercontent.com/Azure/azure-rest-api-specs/9383e81389c2b1c64da07cc70c66f8c54b9ad4f5/specification/search/data-plane/Azure.Search/preview/2023-07-01-Preview/searchindex.json [ ] Replace all usage of Vector with Vectors. [ ] Provide single Vector convenience in a way that doesn't break backwards compatibility. [ ] Create a sample showing how to use multiple Vectors in a search. [ ] (If applicable) Update FieldBuilder functionality to support Vector. [ ] Add convenience for setting the fields in a Vector query. [ ] The wire type is a String but the actual form is a comma-delimited list of fields. This should be exposed as a variable argument String which the SDK internally converts to a comma-delimited String. Added in 2023 August release.
2025-04-01T04:10:12.067939
2024-03-22T16:18:23
2202854820
{ "authors": [ "TimothyMothra", "hectorhdzg", "xirzec" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13513", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/issues/29033" }
gharchive/issue
[monitor-opentelemetry-exporter] Adopt OpenTelemetry HTTP semconv 1.23.1 https://opentelemetry.io/blog/2023/http-conventions-declared-stable/ @hectorhdzg, can you investigate why the .NET crew is getting assigned to JS issues? :) @TimothyMothra sounds like a github-actions misconfiguration, :) I was going to ask you guys the same thing. Let me see if I find some reference. This looks sus: https://github.com/Azure/azure-sdk-for-js/blob/37e3efd7bf2c29c6e6247a1d35fea6988989f38c/.github/CODEOWNERS#L1211-L1212 @TimothyMothra that text is apparently is commented out, so not sure why this is triggering, @xirzec any idea about this?, I can update owners there if this is by design. @hectorhdzg I agree with @TimothyMothra that it is "sus". The double @ is odd and the owners don't seem correct. I'd also think we should tie this to the actual directory path, rather than being a <NotInRepo> comment: /sdk/monitor/monitor-opentelemetry-exporter @hectorhdzg @JacksonWeber Feel free to PR an update and I will approve!
2025-04-01T04:10:12.071477
2022-07-14T09:03:54
1304485514
{ "authors": [ "dw511214992" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13514", "repo": "Azure/azure-sdk-for-js", "url": "https://github.com/Azure/azure-sdk-for-js/pull/22585" }
gharchive/pull-request
only show error log in sdk automation pipeline Packages impacted by this PR Issues associated with this PR Describe the problem that is addressed by this PR What are the possible designs available to address the problem? If there are more than one possible design, why was the one in this PR chosen? Are there test cases added in this PR? (If not, why?) Provide a list of related PRs (if any) Command used to generate this PR:**(Applicable only to SDK release request PRs) Checklists [ ] Added impacted package name to the issue description [ ] Does this PR needs any fixes in the SDK Generator?** (If so, create an Issue in the Autorest/typescript repository and link it here) [ ] Added a changelog (if necessary) /check-enforcer override /check-enforcer override
2025-04-01T04:10:12.073309
2020-06-24T15:14:54
644698246
{ "authors": [ "kasobol-msft", "pakrym" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13515", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/12975" }
gharchive/issue
Improve timeout exception message/docs

@kasobol-msft The other piece of feedback I was getting is that our error messaging around timeouts/retries is confusing: some customers don't realize which settings they should be tweaking when they see those errors, so they end up reaching out to us. So maybe we should create some page/readme explaining the knobs and their relation, and then include a link to it in the error messages coming from the networking stack. We should also account for cases where a customer is observing high latencies for some percentage of requests (presumably caused by request queuing on a small connection pool). This is usually the case on .NET FX with default settings.
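To make the knobs concrete, here is a hedged sketch of the settings such a page would describe, using Azure.Core's `RetryOptions` on a storage client. Treat the exact availability of `NetworkTimeout` as an assumption that depends on the Azure.Core version in use; the other property names come from `RetryOptions`.

```csharp
using System;
using Azure.Core;
using Azure.Storage.Blobs;

// Sketch only: shows where the retry/timeout knobs live on an
// Azure.Core-based client. NetworkTimeout on RetryOptions is assumed
// to exist in newer Azure.Core versions; older versions expose fewer knobs.
var options = new BlobClientOptions();
options.Retry.MaxRetries = 5;                      // how many times a failed request is retried
options.Retry.Delay = TimeSpan.FromSeconds(1);     // base delay between retries
options.Retry.MaxDelay = TimeSpan.FromSeconds(30); // cap on the exponential backoff
options.Retry.Mode = RetryMode.Exponential;
options.Retry.NetworkTimeout = TimeSpan.FromSeconds(100); // per-attempt network timeout (assumption)

var client = new BlobServiceClient(
    new Uri("https://account.blob.core.windows.net"), // placeholder endpoint
    options);
```

A docs page could then explain which of these a customer should raise when they see a timeout (the per-attempt network timeout) versus a retry exhaustion error (retry count/backoff), which is exactly the distinction customers were missing.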
2025-04-01T04:10:12.083918
2020-07-05T02:30:43
650987836
{ "authors": [ "amishra-dev", "blueww", "davidobrien1985", "jsquire", "tg-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13516", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/13227" }
gharchive/issue
[BUG] QueueServiceClient().SetProperties fails with Version error

Describe the bug

```csharp
var client = new QueueServiceClient(serviceUri, credentials);
var currentVersion = (await client.GetPropertiesAsync()).Value.Logging.Version;
var defaultRetentionPolicy = new QueueRetentionPolicy { Days = 90, Enabled = true };
var defaultQueueMetric = new QueueMetrics
{
    Version = currentVersion,
    Enabled = true,
    IncludeApis = true,
    RetentionPolicy = defaultRetentionPolicy
};
var queueServiceProps = new QueueServiceProperties
{
    Logging = new QueueAnalyticsLogging
    {
        Version = currentVersion,
        Delete = true,
        Read = true,
        Write = true,
        RetentionPolicy = defaultRetentionPolicy
    },
    HourMetrics = defaultQueueMetric,
    MinuteMetrics = defaultQueueMetric
};
await client.SetPropertiesAsync(queueServiceProps);
```

This fails with "unexpected value for Version" if the Version is 2.0. We are able to send 1.0 here as a hard-coded value, but that will change the existing setting that we have from 2.0 to 1.0.

Expected behavior
The Version property should accept a value of 2.0.

Actual behavior (include Exception or Stack Trace)

```
System.Private.CoreLib: Exception while executing function: queue. Azure.Storage.Queues: XML specified is not syntactically valid.
RequestId:c2cf62e9-d003-004d-4873-523b8f000000
Time:2020-07-05T02:27:24.4055884Z
Status: 400 (XML specified is not syntactically valid.)
ErrorCode: InvalidXmlDocument
Additional Information:
LineNumber: 1
LinePosition: 234
Reason: Unexpected value for Version.
Headers:
Server: Windows-Azure-Queue/1.0,Microsoft-HTTPAPI/2.0
x-ms-request-id: c2cf62e9-d003-004d-4873-523b8f000000
x-ms-version: 2018-11-09
x-ms-error-code: InvalidXmlDocument
Date: Sun, 05 Jul 2020 02:27:23 GMT
Content-Length: 332
Content-Type: application/xml
```

To Reproduce

Environment:
- Name and version of the Library package used: `<PackageReference Include="Azure.Storage.Queues" Version="12.3.2"/>`

@AlexGhiondea Would you please also add the "Client" tag for this issue?
As we will monitor the ARP storage issues with the "Storage" tag but not the "Client" tag, and as this issue is for the data plane, would you mind adding the "Client" tag so our monitor won't report it?

@blueww: Apologies; you're correct in that the Client tag was missed here. Rest assured that we do our best to sort between management and data plane clients when triaging.

When I run the repro on an account with logging values that match your Portal tab, I get traffic like:

```
GET https://testqueue32874.queue.core.windows.net/?restype=service&comp=properties HTTP/1.1
Host: testqueue32874.queue.core.windows.net
x-ms-version: 2019-12-12
x-ms-client-request-id: 767bb5d1-4588-431d-bf41-cefa29e20d89
x-ms-return-client-request-id: true
User-Agent: azsdk-net-Storage.Queues/12.3.2 (.NET Core 3.1.4; Microsoft Windows 10.0.18363)
x-ms-date: Mon, 06 Jul 2020 15:33:55 GMT

HTTP/1.1 200 OK
Cache-Control: no-cache
Transfer-Encoding: chunked
Content-Type: application/xml
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 6268c9cd-0003-0060-59aa-530d42000000
x-ms-client-request-id: 767bb5d1-4588-431d-bf41-cefa29e20d89
x-ms-version: 2019-12-12
Date: Mon, 06 Jul 2020 15:33:56 GMT

265
<?xml version="1.0" encoding="utf-8"?><StorageServiceProperties><Logging><Version>2.0</Version><Read>true</Read><Write>true</Write><Delete>true</Delete><RetentionPolicy><Enabled>false</Enabled></RetentionPolicy></Logging><HourMetrics><Version>1.0</Version><Enabled>true</Enabled><IncludeAPIs>true</IncludeAPIs><RetentionPolicy><Enabled>true</Enabled><Days>7</Days></RetentionPolicy></HourMetrics><MinuteMetrics><Version>1.0</Version><Enabled>true</Enabled><IncludeAPIs>true</IncludeAPIs><RetentionPolicy><Enabled>true</Enabled><Days>7</Days></RetentionPolicy></MinuteMetrics><Cors /></StorageServiceProperties>
0
```

```
PUT https://testqueue32874.queue.core.windows.net/?restype=service&comp=properties HTTP/1.1
Host: testqueue32874.queue.core.windows.net
x-ms-version: 2019-12-12
x-ms-client-request-id: c348cbe0-fc32-4f9f-82e0-106bad6913fd
x-ms-return-client-request-id: true
User-Agent: azsdk-net-Storage.Queues/12.3.2 (.NET Core 3.1.4; Microsoft Windows 10.0.18363)
x-ms-date: Mon, 06 Jul 2020 15:33:56 GMT
Content-Type: application/xml
Content-Length: 580

<StorageServiceProperties><Logging><Version>2.0</Version><Delete>true</Delete><Read>true</Read><Write>true</Write><RetentionPolicy><Enabled>true</Enabled><Days>90</Days></RetentionPolicy></Logging><HourMetrics><Version>2.0</Version><Enabled>true</Enabled><RetentionPolicy><Enabled>true</Enabled><Days>90</Days></RetentionPolicy><IncludeAPIs>true</IncludeAPIs></HourMetrics><MinuteMetrics><Version>2.0</Version><Enabled>true</Enabled><RetentionPolicy><Enabled>true</Enabled><Days>90</Days></RetentionPolicy><IncludeAPIs>true</IncludeAPIs></MinuteMetrics></StorageServiceProperties>

HTTP/1.1 400 XML specified is not syntactically valid.
Content-Length: 332
Content-Type: application/xml
Server: Windows-Azure-Queue/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 6268ca0d-0003-0060-7aaa-530d42000000
x-ms-client-request-id: c348cbe0-fc32-4f9f-82e0-106bad6913fd
x-ms-version: 2019-12-12
x-ms-error-code: InvalidXmlDocument
Date: Mon, 06 Jul 2020 15:33:56 GMT

<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidXmlDocument</Code><Message>XML specified is not syntactically valid.
RequestId:6268ca0d-0003-0060-7aaa-530d42000000
Time:2020-07-06T15:33:56.7477326Z</Message><LineNumber>1</LineNumber><LinePosition>234</LinePosition><Reason>Unexpected value for Version.</Reason></Error>
```

I think the important details in the initial GET response are:

```
<Logging><Version>2.0</Version>
<HourMetrics><Version>1.0</Version>
```

You're using the Logging value for currentVersion and then setting that as the metrics' version. The REST documentation describes the version parameters the same way: "Required if Logging, Metrics, HourMetrics, or MinuteMetrics settings are specified. The version of Storage Analytics to configure."
Both 1.0 and 2.0 are valid Storage Analytics values for Queues. Based on the error, it looks like only 1.0 is supported for metrics. Changing your repro code to distinguish between metrics and logging works for me:

```csharp
var client = new QueueServiceClient(ConnectionString);
var currentProperties = (await client.GetPropertiesAsync()).Value;
var defaultRetentionPolicy = new QueueRetentionPolicy { Days = 90, Enabled = true };
var defaultQueueMetric = new QueueMetrics
{
    Version = currentProperties.HourMetrics.Version, // Assume this is the same for Minute
    Enabled = true,
    IncludeApis = true,
    RetentionPolicy = defaultRetentionPolicy
};
var queueServiceProps = new QueueServiceProperties
{
    Logging = new QueueAnalyticsLogging
    {
        Version = currentProperties.Logging.Version,
        Delete = true,
        Read = true,
        Write = true,
        RetentionPolicy = defaultRetentionPolicy
    },
    HourMetrics = defaultQueueMetric,
    MinuteMetrics = defaultQueueMetric
};
await client.SetPropertiesAsync(queueServiceProps);
```

This is expected. Logging supports V1.0 and V2.0. Metrics, HourMetrics and MinuteMetrics only support 1.0. We will update the REST documentation to make this clearer.
2025-04-01T04:10:12.087618
2020-08-30T01:30:36
688633773
{ "authors": [ "JimSuplizio", "JoshLove-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13517", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/14694" }
gharchive/issue
Add EnableExpress to admin client See JS issue for background on this feature - https://github.com/Azure/azure-sdk-for-js/issues/10893 We would want to include this new property in CreateQueueOptions/QueueProperties/CreateTopicOptions/TopicProperties. We should check whether or not this property can be set on an existing entity by testing it out (this will guide whether the property is read-only on the Properties types listed above). Hi @JoshLove-msft, we deeply appreciate your input into this project. Regrettably, this issue has remained inactive for over 2 years, leading us to the decision to close it. We've implemented this policy to maintain the relevance of our issue queue and facilitate easier navigation for new contributors. If you still believe this topic requires attention, please feel free to create a new issue, referencing this one. Thank you for your understanding and ongoing support.
2025-04-01T04:10:12.089994
2020-12-01T14:53:48
754464902
{ "authors": [ "christothes", "jsquire", "patrickjlee" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13518", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/17256" }
gharchive/issue
Azure Tables client library: why is there no example of authenticating using Managed Identity? I thought a major benefit of the new libraries using Azure.Core was that there would be more standardisation in the ways of accessing Azure resources securely. Other libraries (e.g. for Key Vault or Azure Storage Blobs or Queues) allow one to use Managed Identity e.g. using DefaultAzureCredential(), but apparently not in the Azure Tables library. If not, why not and will this be introduced soon? A major issue for us with Managed Identity has been the lack of a consistent approach across various Azure resources, so it would be good if Azure Tables also supported the same approach as Key Vault, and Azure Storage Blobs/Queues please. Thank you for your feedback. Tagging and routing to the team member best able to assist. @patrickjlee Currently the service doesn't support Azure AD authentication. If you'd like to provide this feedback to the team that would need to implement this, please file an issue here https://feedback.azure.com/forums/263030-azure-cosmos-db .
2025-04-01T04:10:12.099949
2021-01-13T01:36:44
784716573
{ "authors": [ "SaurabhSharma-MSFT", "coppercosmo", "jsquire" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13519", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/17927" }
gharchive/issue
[BUG] Microsoft.Web/sites/functions returns ResourceNotFound

Describe the bug
Trying to create a new App Service function using the Azure.ResourceManager library is not working.

Expected behavior
I expect to be able to create a new App Service function by using the Microsoft.Web/sites/functions resource provider namespace. This Microsoft page says that it should be available: https://docs.microsoft.com/en-us/azure/templates/microsoft.web/sites/functions

Here is a code snippet of what I am trying:

```csharp
var functionApp = await _resourcesManagementClient.Resources.StartCreateOrUpdateAsync(
    resourceGroup.Name,
    "Microsoft.Web",
    "sites",
    "functions",
    resource.Name,
    resource.ApiVersion,
    parameters
);
```

Actual behavior (include Exception or Stack Trace)

```
Service request failed.
Status: 404 (Not Found)

Content:
{"error":{"code":"ResourceNotFound","message":"The Resource 'Microsoft.Web/sites/functions' under resource group '<RESOURCE_GROUP>' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix"}}

Headers:
Cache-Control: no-cache
Pragma: no-cache
x-ms-failure-cause: REDACTED
x-ms-request-id: REDACTED
x-ms-correlation-request-id: REDACTED
x-ms-routing-request-id: REDACTED
Strict-Transport-Security: REDACTED
X-Content-Type-Options: REDACTED
Date: Wed, 13 Jan 2021 01:26:22 GMT
Content-Type: application/json; charset=utf-8
Expires: -1
Content-Length: 219
```

To Reproduce
Steps to reproduce the behavior (include a code snippet, screenshot, or any additional information that might help us reproduce the issue): try to create an App Service function using the code snippet above.

Environment:
- Name and version of the Library package used: [e.g. Azure.ResourceManager.Resources 1.0.0-preview.2]
- Hosting platform or OS and .NET runtime version (dotnet --info output for .NET Core projects): [Windows 10, .NET Core 3.1]
- IDE and version: [e.g. Visual Studio 16.8.4]

Thank you for your feedback.
Tagging and routing to the team member best able to assist.

Thank you for your feedback. Tagging and routing to the team member best able to assist.

Any update on this?

Any update on this?

Was there any more information on this?

Was there any more information on this? @bquantump @allenjzhang?

At this point I have had to resort to using the azure-libraries-for-net fluent libraries to create my function apps. Since this is losing support and probably becoming deprecated, I would really love some help using the azure-sdk-for-net management libraries (or client libraries) to create function apps.

@coppercosmo Apologies for the delayed response. I see that this issue was opened a long time ago and no further activity has taken place, so I wanted to check whether you are still looking for assistance on this query. Please let us know.
2025-04-01T04:10:12.117500
2021-02-08T02:32:23
803135885
{ "authors": [ "Luyunmt", "jongio", "maririos", "nisha-bhatia", "schaabs", "tg-msft", "v-xuto", "zedy-wj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13520", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/18520" }
gharchive/issue
[BUG][Text Analytics] Text Analytics client fails to authenticate with token credential in US Gov and China

We are running live tests against other clouds like US Gov and Azure China Cloud. The goal is to check whether the new Azure SDK packages work with other clouds or not. In https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/textanalytics/Azure.AI.TextAnalytics/src/TextAnalyticsClient.cs#L68, the scope is hard-coded as 'https://cognitiveservices.azure.com/.default', so it only works in the public cloud. The value of the scope in the different clouds is as follows:

- Azure public cloud: https://cognitiveservices.azure.com/.default
- US Gov cloud: https://cognitiveservices.azure.us/.default
- China cloud: https://cognitiveservices.azure.cn/.default

AZURE_AUTHORITY_HOST setting:
- US Gov: https://login.microsoftonline.us/
- China: https://login.microsoftonline.cn/

@jongio @danieljurek @benbp We have the same problem in FR: https://github.com/Azure/azure-sdk-for-net/issues/17192

@tg-msft do you know if there is guidance for .NET on how to do this, or how to surface this to our customers? For SDK tests, I see how some services have the default scope as an env variable that gets configured with the CI, but I wonder how the user will know that in case they are in one of those clouds. FYI @suhas92

@Luyunmt - Can you please give examples of how this is achieved in the SDK with other cloud endpoints?

Storage hardcodes that today, so this is a better question for @schaabs. If we're ready to add a common story for this, I think we should maybe stick this on the base ClientOptions, or at least have a common TokenAuthenticationOptions type we can expose from any service's *ClientOptions that allows TokenCredential.

@jongio The example at this link https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/textanalytics/Azure.AI.TextAnalytics/tests/DetectLanguageTests.cs#L46 can repro this issue.

The client generally should handle which scopes are required to properly authorize the call.
Some services (such as Storage) use the same scope in all clouds, which is why they are able to hard-code the scope they use. Other services need to use different scopes in different clouds. In these cases, the client should, if possible, handle this. There are a couple of strategies a client can choose to determine the required scope.

First, if the service returns the required scopes or resource string via an authentication challenge (WWW-Authenticate header), the client should utilize this to determine the scope. Key Vault is an example of this and currently has its own authentication policy which implements this discovery.

If no such data is available through an authentication challenge, a service client may be able to determine the targeted cloud and required scope based off the resource endpoint the user specifies. However, this can break down if the service supports custom domain links, or if end users use private links.

We've avoided adding any client configuration for this up to this point as it introduces quite a bit of complexity for the user. The TokenAuthenticationOptions class @tg-msft suggests is very interesting. But we'd need to work out details such as what this means for services which don't support AAD (a shrinking minority). Also, while most Azure services only use a single scope to authenticate all calls, it's possible that different methods might require different scopes, so how would these options apply in that case?

@maririos - Can you see if Cognitive returns the scope/resource string in an auth challenge? If so, does this work for default domains and resources with custom subdomains?

To use AAD auth with Cognitive Services, the resource needs a custom subdomain, see: https://docs.microsoft.com/en-us/azure/cognitive-services/authentication?tabs=powershell#authenticate-with-azure-active-directory. For other services, a key is required and we therefore shouldn't have to worry about scope in other clouds.
@tg-msft - Are you comfortable with the following while @johanste fleshes out the cloud environment design?

1. Ask the user to set the scope in TokenCredentialContext.
2. Update the TA and FR clients to honor the scope if set.

@johanste - Where are we with a design for cloud environment?

@schaabs - I'd think we'd only put TokenAuthenticationOptions on derived classes like SearchClientOptions if: they supported AAD, varied scopes across clouds, and didn't provide an API to discover the right scope. It'd unblock things today, and ideally we'd [EditorBrowsable(Never)] them over time as they added automagic discovery. We could try guessing defaults based on endpoints as well, with this being an override for situations where we get it wrong.

@jongio - I'm in favor of enabling something for forward progress here. I think @schaabs will have the best idea how to do that with the least long-term debt and I'll get behind his plan.

@maririos - How do you recommend we proceed with this?

@jongio - Working on this.

OK, I will do a PR for this issue.

We are holding on fixing this until the ACR design is complete. #21603 (comment)

Hi @benbp, @maririos! We have updated the code; this issue has been fixed with the sovereign cloud test PR. But there is a test named RecognizeHealthcareEntitiesBatchWithCancellation that was not stable. The pipeline run result is at here. Could you help to look at this problem? Any thoughts on it?

There is a known issue with that test => https://github.com/Azure/azure-sdk-for-net/issues/24052. We haven't had the time to look into it in order to fix it though.

@nisha-bhatia, @maririos - Do you have any progress or plans to fix this issue?

This bug fix will go out in the next release.
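To make the per-cloud scope discussion above concrete, here is a minimal sketch of mapping an authority host to the Cognitive Services scope for that cloud. The scope and authority values come from the table in the issue description; the class and method names are illustrative only and are not part of the SDK.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical helper: pick the Cognitive Services token scope based on the
// cloud's authority host (e.g. the AZURE_AUTHORITY_HOST value). The mapping
// values are the ones listed in the issue; the type itself is an assumption.
public static class CognitiveScopes
{
    private static readonly Dictionary<string, string> s_scopeByAuthority =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            ["login.microsoftonline.com"] = "https://cognitiveservices.azure.com/.default", // public
            ["login.microsoftonline.us"]  = "https://cognitiveservices.azure.us/.default",  // US Gov
            ["login.microsoftonline.cn"]  = "https://cognitiveservices.azure.cn/.default",  // China
        };

    public static string ForAuthorityHost(Uri authorityHost) =>
        s_scopeByAuthority.TryGetValue(authorityHost.Host, out var scope)
            ? scope
            : "https://cognitiveservices.azure.com/.default"; // fall back to public cloud
}
```

This is the shape a TokenAuthenticationOptions-style default could take; the thread's caveat still applies, i.e. custom domains and private links can break endpoint-based guessing, which is why an explicit override was proposed.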
2025-04-01T04:10:12.136453
2016-07-18T23:32:45
166216953
{ "authors": [ "adrianhall", "devigned", "rshirani" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13521", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/2190" }
gharchive/issue
Unable to deploy using ARM template by passing the content of the json file.

Hi folks, I am trying to do a deployment, but rather than passing the TemplateLink and ParametersLink, I'd rather pass the content. When I do so, however, I get an error as follows. What I do right now is simply load the content of my template file:

```csharp
public static Object GetTemplate()
{
    return File.ReadAllText(@"E:\OfficialTemp\tmplt.json");
}

public static Object GetParameters()
{
    return File.ReadAllText(@"E:\OfficialTemp\params.json");
}
```

and then in my main:

```csharp
deployment.Properties = new DeploymentProperties
{
    Mode = DeploymentMode.Incremental,
    Template = GetTemplate(),
    Parameters = GetParameters(),
};
var resourceManagementClient = new ResourceManagementClient(credential) { SubscriptionId = subscriptionId };
return await resourceManagementClient.Deployments.CreateOrUpdateAsync(groupName, deploymentName, deployment);
```

I'd like to mention that my tmplt.json and params.json are exactly the files that I used in my deployment using the URI, so I do not think that is an issue. (I get the error even when I use the values stated for the template and parameters files in https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-csharp-template/)

Error: System.AggregateException: One or more errors occurred. ---> Microsoft.Rest.Azure.CloudException: The request content was invalid and could not be deserialized: 'Error converting value " }" to type 'Microsoft.WindowsAzure.ResourceStack.Frontdoor.Templates.Schema.Template'. Path 'properties.template', line 3, position 23171.'.
at Microsoft.Azure.Management.ResourceManager.DeploymentsOperations.d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Management.ResourceManager.DeploymentsOperations.d__8.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.Azure.Management.ResourceManager.DeploymentsOperationsExtensions.d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at ConsoleApplication1.Program.d__3.MoveNext() in

Check out this sample: https://azure.microsoft.com/en-us/documentation/samples/resource-manager-dotnet-template-deployment/. I believe you need something like below:

```csharp
var templateParams = JObject.Parse(File.ReadAllText(@"E:\OfficialTemp\params.json"));
var deployParams = new Deployment
{
    Properties = new DeploymentProperties
    {
        Template = JObject.Parse(File.ReadAllText(templateFileLocation)),
        Parameters = templateParams,
        Mode = DeploymentMode.Incremental
    }
};
```

Thanks @devigned, it seems a bit tricky as we can pass a string for the template but not for the parameters. I modified it to this and it now gets past the JSON issue. Thanks for the link. That said, I get a new exception (Long running operation failed with status 'Failed'). Should I just retry, or do you have any other recommendations?
```csharp
public static async Task<DeploymentExtended> CreateTemplateDeploymentAsync(
    TokenCredentials credential,
    string groupName,
    string deploymentName,
    string subscriptionId)
{
    Console.WriteLine("Creating the template deployment...");
    var templateParams = new Dictionary<string, Dictionary<string, object>>
    {
        { "adminPassword", new Dictionary<string, object> { { "value", "Azure1234567" } } },
        { "adminUserName", new Dictionary<string, object> { { "value", "davidmu" } } }
    };
    var deployment = new Deployment();
    deployment.Properties = new DeploymentProperties
    {
        Mode = DeploymentMode.Incremental,
        Template = JObject.Parse(File.ReadAllText("Templates\\CreateVMTemplate2.json")),
        Parameters = templateParams
    };
    var resourceManagementClient = new ResourceManagementClient(credential) { SubscriptionId = subscriptionId };
    return await resourceManagementClient.Deployments.CreateOrUpdateAsync(groupName, deploymentName, deployment);
}
```

Here is the full exception:

Creating the resource group... Succeeded
Creating the template deployment...
Error: System.AggregateException: One or more errors occurred. ---> Microsoft.Rest.Azure.CloudException: Long running operation failed with status 'Failed'.
at Microsoft.Rest.Azure.AzureClientExtensions.d__12.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Rest.Azure.AzureClientExtensions.<GetPutOrPatchOperationResultAsync>d__01.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.ResourceManager.DeploymentsOperations.d__8.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.ResourceManager.DeploymentsOperationsExtensions.d__7.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter1.GetResult() at ConsoleApplication1.Program.<CreateTemplateDeploymentAsync>d__3.MoveNext() in C:\Users\rostams\documents\visual studio 2015\Projects\ConsoleApplication1\ConsoleApplication1\Program.cs:line 151 --- End of inner exception stack trace --- at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions) at System.Threading.Tasks.Task1.GetResultCore(Boolean waitCompletionNotification) at System.Threading.Tasks.Task1.get_Result() at ConsoleApplication1.Program.Main(String[] args) in C:\Users\rostams\documents\visual studio 
2015\Projects\ConsoleApplication1\ConsoleApplication1\Program.cs:line 49 ---> (Inner Exception #0) Microsoft.Rest.Azure.CloudException: Long running operation failed with status 'Failed'. at Microsoft.Rest.Azure.AzureClientExtensions.<GetPutOrPatchOperationResultAsync>d__12.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Rest.Azure.AzureClientExtensions.d__01.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.ResourceManager.DeploymentsOperations.<CreateOrUpdateWithHttpMessagesAsync>d__8.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at Microsoft.Azure.Management.ResourceManager.DeploymentsOperationsExtensions.<CreateOrUpdateAsync>d__7.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter1.GetResult() at ConsoleApplication1.Program.d__3.MoveNext() in C:\Users\rostams\documents\visual studio 2015\Projects\ConsoleApplication1\ConsoleApplication1\Program.cs:line 151<--- Created a new issue to track this: https://github.com/Azure/azure-sdk-for-net/issues/2194. 
I think this code will work better with the http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json format. // NOTE: If you would like to read the template parameters from a parameters.json file // (like: https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-linux/azuredeploy.parameters.json), // then you can use the following code to read the JSON, extract the parameters and then convert them to the required structure. var templateString = File.ReadAllText(Path.GetFullPath("template_params.json")); var templateParamsObj = JObject.Parse(templateString)["parameters"]; templateParams = templateParamsObj.ToObject<Dictionary<string, Dictionary<string, object>>>(); Write("{0}", JsonConvert.SerializeObject(templateParams)); I agree that it's a bit tricky. We need to have better typing on this. @markcowl, @sphibbs, @kirthik, @vivsriaus and @tbombach could we improve: https://github.com/Azure/azure-rest-api-specs/blob/master/arm-resources/resources/2016-02-01/swagger/resources.json#L1778-L1789 ? It would be great to have a stronger typed representation of Template and Parameter on DeploymentPropertiesExtended to ensure we are push toward the path of success. Perhaps this would call for an externally defined type which provides some intelligence to construction, validation, serialization and deserialization. This is an old issue. Is it still relevant?
2025-04-01T04:10:12.148569
2021-07-06T10:47:03
937764047
{ "authors": [ "amishra-dev", "hnuguse", "kasobol-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13522", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/22462" }
gharchive/issue
[BUG] BlockBlob OpenWriteAsync method takes twice as much time with the new storage package

Describe the bug
Writing a block blob with the OpenWriteAsync method takes twice as much time with the new Azure.Storage.Blobs package compared to the deprecated WindowsAzure.Storage package.

Expected behavior
Performance should have improved, or at least stayed the same, with the new Azure.Storage.Blobs package.

Actual behavior (include Exception or Stack Trace)
Using the Azure.Storage.Blobs package (version 12.9.1), the BlockBlobClient.OpenWriteAsync() method takes twice as long compared to CloudBlockBlob.OpenWriteAsync() from the now-deprecated WindowsAzure.Storage package (version 9.3.3).

To Reproduce
We noticed performance degradation for a functionality where we create a zip file in a storage container from a number of images. We have a container with 100 images, and we have another container in the same storage account (General Purpose v1) where we store the resulting zip file. First we open a stream to write the block blob zip file to a container with OpenWriteAsync(), and then we create zip entries from multiple block blobs coming from a container in the same storage account.
With new Azure.Storage.Blob package (on both net5.0 and netcoreapp3.1) // Create zip var zip = zipContainer.GetBlockBlobClient("media.zip"); using (var zipArchive = new ZipArchive( stream: await zip.OpenWriteAsync(overwrite:true).ConfigureAwait(false), mode: ZipArchiveMode.Create, leaveOpen: false)) { var swOpen = new Stopwatch(); var swCopy = new Stopwatch(); for(int i = 1; i <= 100; i++) { sw.Start(); var blob = blobList[i]; var fileName = string.Format(CultureInfo.InvariantCulture, "{0:D8}_{1}", i, "image.jpg"); var zipEntry = zipArchive.CreateEntry(fileName, CompressionLevel.NoCompression); using var zipStream = zipEntry.Open(); swOpen.Start(); using var blobStream = await blob.OpenReadAsync(); swOpen.Stop(); swCopy.Start(); await blobStream.CopyToAsync(zipStream); swCopy.Stop(); sw.Stop(); Console.WriteLine($"\tBlob {i} transfered in {sw.ElapsedMilliseconds} ms"); Console.WriteLine($"\t\tOpened in {swOpen.ElapsedMilliseconds} ms"); Console.WriteLine($"\t\tCopied in {swCopy.ElapsedMilliseconds} ms"); sw.Reset(); swOpen.Reset(); swCopy.Reset(); } } Result: With the deprecated WindowsAzure.Storage package (targeting netcoreapp3.1) // Create zip var zip = zipContainer.GetBlockBlobReference("media.zip"); using (var zipArchive = new ZipArchive( stream: await zip.OpenWriteAsync(), mode: ZipArchiveMode.Create, leaveOpen: false)) { var swOpen = new Stopwatch(); var swCopy = new Stopwatch(); for(int i = 1; i <= 100 ; i++) { sw.Start(); var blob = (CloudBlockBlob)blobList[i]; var fileName = string.Format(CultureInfo.InvariantCulture, "{0:D8}_{1}", i, "image.jpg"); var zipEntry = zipArchive.CreateEntry(fileName, CompressionLevel.NoCompression); using var zipStream = zipEntry.Open(); swOpen.Start(); using var blobStream = await blob.OpenReadAsync(); swOpen.Stop(); swCopy.Start(); await blobStream.CopyToAsync(zipStream); swCopy.Stop(); sw.Stop(); Console.WriteLine($"\tBlob {i} transferred in {sw.ElapsedMilliseconds} ms"); Console.WriteLine($"\t\tOpened in 
{swOpen.ElapsedMilliseconds} ms"); Console.WriteLine($"\t\tCopied in {swCopy.ElapsedMilliseconds} ms"); sw.Reset(); swOpen.Reset(); swCopy.Reset(); } } Result: Environment: Tested the deprecated WindowsAzure.Storage package with netcoreapp3.1 target framework, and the new Azure.Storage.Blob package with both netcoreapp3.1 and net5.0 target frameworks running in a Standard E8s v3 Azure VM, Windows 10 Enterprise IDE and version : VS code / dotnet cli @kasobol-msft can you please look at this one? @hnuguse what's the average size of the image ? The size is 2kb per image and we have 1000 of these in the container. Transferring all of these (both reading blob and writing to the zip blob) takes a couple minutes, while the total size is only around 1mb. I attached the data we used for testing here media.zip @hnuguse I was able to reproduce the issue. The new version of OpenWrite attempts to follow Stream contract more closely than what was there in earlier versions. I.e. the Flush/FlushAsync is fully operational by default. Which means that whenever the ZipArchive decides to flush it actually means the snapshot of data is materialized in the target blob - that means more requests made to storage and larger latency. See the trace for reference: There's somewhat related issue opened here https://github.com/Azure/azure-sdk-for-net/issues/20652 where we discuss whether a flag disabling intermediate flushes should be added to the OpenWrite API Meanwhile you can consider wrapping the Stream returned by OpenWrite to disable flushes. See sample for reference here https://gist.github.com/kasobol-msft/dd88c6a86f06dc981e0de96ef1169c56 . After applying workaround the time looks better: Thanks for the investigation and provided workaround @kasobol-msft.
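The workaround linked above is essentially a pass-through Stream wrapper that swallows intermediate flushes. A minimal sketch of that idea follows; the class name is ours and the actual gist may differ in detail, but the core trick is making Flush/FlushAsync no-ops while still committing the data when the stream is disposed:

```csharp
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

// Sketch: wrap the stream returned by OpenWriteAsync so that intermediate
// Flush/FlushAsync calls (e.g. from ZipArchive) become no-ops. The wrapped
// stream is still flushed/committed once, when it is disposed.
class NonFlushingStream : Stream
{
    private readonly Stream _inner;
    public NonFlushingStream(Stream inner) => _inner = inner;

    public override void Flush() { /* intentionally ignored */ }
    public override Task FlushAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    public override bool CanRead => _inner.CanRead;
    public override bool CanSeek => _inner.CanSeek;
    public override bool CanWrite => _inner.CanWrite;
    public override long Length => _inner.Length;
    public override long Position { get => _inner.Position; set => _inner.Position = value; }
    public override int Read(byte[] buffer, int offset, int count) => _inner.Read(buffer, offset, count);
    public override long Seek(long offset, SeekOrigin origin) => _inner.Seek(offset, origin);
    public override void SetLength(long value) => _inner.SetLength(value);
    public override void Write(byte[] buffer, int offset, int count) => _inner.Write(buffer, offset, count);

    protected override void Dispose(bool disposing)
    {
        // Disposing the underlying blob write stream is what commits the upload.
        if (disposing) _inner.Dispose();
        base.Dispose(disposing);
    }
}
```

Usage in the repro above would then be `new ZipArchive(new NonFlushingStream(await zip.OpenWriteAsync(overwrite: true)), ZipArchiveMode.Create)`, so that `ZipArchive`'s periodic flushes never turn into service round-trips.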
2025-04-01T04:10:12.153554
2021-10-04T08:41:48
1014896274
{ "authors": [ "ArthurMa1978", "jsquire", "kasunsjc" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13523", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/24383" }
gharchive/issue
Typos - Commands [Enter feedback here] There are typos in the commands Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: a46dbde4-c75c-96d1-fbdf-b0b6f2418153 Version Independent ID: d1bfd135-1404-341a-1095-18d32c58cd1a Content: Database.Sku Property (Microsoft.Azure.Management.Sql.Models) - Azure for .NET Developers Content Source: xml/Microsoft.Azure.Management.Sql.Models/Database.xml Service: sql-database GitHub Login: @rloutlaw Microsoft Alias: routlaw Thank you for your feedback. Tagging and routing to the team member best able to assist. Reference doc, will contact docs team
2025-04-01T04:10:12.156040
2022-06-16T01:39:49
1272940871
{ "authors": [ "christothes", "jrmcdona", "jsquire" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13524", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/29330" }
gharchive/issue
Secrets in environment variables Library name and version Azure.Identity latest Query/Question Is it fairly safe to keep the secret in the system environment variables? I am doing this for local development. Our Azure subscription is behind protected Torus tenant so I cannot use Visual Studio account to authenticate. The entire team will need to have a secret in their system env variables. Thanks Environment Windows 11 Visual Studio 2022 v17.2 Thank you for your feedback. Tagging and routing to the team member best able to assist. Hi @jrmcdona - Are you able to use AzureCliCredential or InteractiveBrowserCredential instead? Those would avoid having to store and distribute secrets.
2025-04-01T04:10:12.160635
2024-05-13T11:11:40
2292510463
{ "authors": [ "dylan-asos", "jsquire" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13525", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/44004" }
gharchive/issue
[QUERY] Azure Service Bus "Batch Receive Timeout"

**Library name and version**
Azure.Messaging.ServiceBus

**Query/Question**
In the AmqpReceiver, when receiving messages in batch it uses a hard coded value of 20ms as a parameter to the "batch receive timeout": https://github.com/Azure/azure-sdk-for-net/blob/7c5d4781ca84a82021fb75d5d0f2cd54a2fe17c0/sdk/servicebus/Azure.Messaging.ServiceBus/src/Amqp/AmqpReceiver.cs#L364

E.g. the behaviour here is: if I have set a batch size of 500 but only 100 have been retrieved by the timeout value, then it will return the smaller batch size to the client to allow them to start processing those messages.

Is there a reason why this is hard-coded and not configurable? In some scenarios, I'd happily wait for longer if it meant I could receive a larger batch in one go.

**Environment**
No response

Hi @dylan-asos. This is an intentional design decision that was made, and not something that we intend to make configurable at present. The rationale behind this decision is to:

- Ensure that messages received by the application have consistent lock durations of the expected interval. We want to avoid locked messages being held in the client and not visible to the application while their timer is ticking down while waiting for other messages.
- Provide a single means for client-side caching/queueing of messages to avoid the confusion of having multiple ways to fetch/cache messages on the client-side. In this case, prefetch is that mechanism with the difference being that it is an eager fetch rather than an on-demand fetch.
- Prioritize throughput by returning messages to the application as quickly as possible so that processing can be performed. User studies have shown us that prefetching was the more effective approach for maximizing throughput in the majority of application scenarios where message batching was advantageous.
For scenarios in which reaching your maxBatchSize is important, the general guidance is to consider tuning the prefetch count to eagerly pull messages and have them available. Many thanks @jsquire - really helps understand the rationale + will make a few tweaks accordingly.
2025-04-01T04:10:12.166483
2019-07-11T23:47:43
467155478
{ "authors": [ "pakrym", "tg-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13526", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/6906" }
gharchive/issue
Testing Proposal: Simplify test recording serialization It would be great to either: Add customizable ExcludedRequestHeaders/ExcludedResponseHeaders that we skip on comparison/serialization. Check whether an existing recording matches the current recording before we serialize and skip it if they're the same. Either option allows us to just xcopy the recordings back into our src directory to see what's been updated. Option 1 has the benefit of cleaner diffs at the cost of a little more service specific work. Thoughts? What's the goal here? I have about one thousand Storage tests and it's kind of a pain to update recordings when I make a change. I generally: implement the feature, see which tests fail during playback, turn on recording, tell nUnit to run just the failed tests from the previous run, then copy only the test recordings with recently updated timestamps (across n different directories) The nUnit test runner in VS is very flaky and this process can be error prone. I'd much rather only write out test recordings when there's a meaningful change so I can xcopy the recordings back into my source directory. I see, so the goal is to avoid re-recodings. Last time I thought about it response content was a hard thing to handle as it often contains things like creation timestamps. I'd be perfectly happy to opt into this by authoring a custom RecordingComparer or similar that handled Storage specific content parsing. + @maririos @schaabs @AlexGhiondea for their thoughts here too (Anyone else spend a lot of time recording tests?) I'd be perfectly happy to opt into this by authoring a custom RecordingComparer or similar that handled Storage specific content parsing. Sounds good, I'll prototype something.
2025-04-01T04:10:12.170746
2019-05-16T10:02:12
477620197
{ "authors": [ "JamesBirdsall", "jsquire", "kasun04", "pwlodek", "samuelkoppes" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13527", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/issues/7161" }
gharchive/issue
IEventProcessor in library Microsoft.Azure.EventHubs.ServiceFabricProcessor should be an interface

**Actual Behavior**
IEventProcessor is an abstract class, which makes it difficult if you want to have your own base class for processors.

**Expected Behavior**
IEventProcessor is an interface.

**Versions**
- OS platform and version: Does not apply
- .NET Version: .NET Standard 2.0
- NuGet package version or commit ID: Microsoft.Azure.EventHubs.ServiceFabricProcessor 0.5.2

We could make it an interface except for the default implementation of GetLoadMetric, which is there because we believe it's an advanced scenario and most customers will be OK with the default. That said, the point about base classes is a good one, and we could be blocking a lot of customer scenarios by not being an interface.

How about this as a compromise: we make IEventProcessor an interface, so customers who can't use the base class aren't blocked, but they have to implement GetLoadMetric. We also provide an abstract base class, BaseEventProcessor, which implements GetLoadMetric -- customers who don't care about the base class can use that, and it provides a sample GetLoadMetric for customers who need to use IEventProcessor.

Moving this to the backlog milestone, as the associated sprint has passed.

Thank you for reporting this opportunity to improve the Azure experience. We have taken this ask into our internal backlog for evaluation for prioritization in the first semester of 2021.

The Microsoft.Azure.EventHubs.ServiceFabricProcessor package is in preview and there are no plans for a stable release. This package will soon be deprecated and therefore we won't be taking any changes.
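The compromise above can be sketched in plain C#. Everything here is illustrative: the method signatures and the PartitionContext/EventData stubs are stand-ins so the sketch compiles on its own, not the shipped ServiceFabricProcessor API:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Placeholder stand-ins for the SDK's context/event types (assumed, not real).
class PartitionContext { }
class EventData { }

// The proposed interface: consumers with their own base class implement this
// directly, at the cost of having to supply GetLoadMetric themselves.
interface IEventProcessor
{
    Task OpenAsync(CancellationToken cancellationToken, PartitionContext context);
    Task CloseAsync(PartitionContext context, string reason);
    Task ProcessEventsAsync(CancellationToken cancellationToken, PartitionContext context, IEnumerable<EventData> events);
    Task ProcessErrorAsync(PartitionContext context, Exception error);
    int GetLoadMetric(CancellationToken cancellationToken, PartitionContext context);
}

// The proposed abstract base class: supplies the sample GetLoadMetric, so most
// consumers never have to think about load metrics at all.
abstract class BaseEventProcessor : IEventProcessor
{
    public abstract Task OpenAsync(CancellationToken cancellationToken, PartitionContext context);
    public abstract Task CloseAsync(PartitionContext context, string reason);
    public abstract Task ProcessEventsAsync(CancellationToken cancellationToken, PartitionContext context, IEnumerable<EventData> events);
    public abstract Task ProcessErrorAsync(PartitionContext context, Exception error);

    // Default: report one unit of load per partition, matching the assumption
    // that load metrics are an advanced scenario.
    public virtual int GetLoadMetric(CancellationToken cancellationToken, PartitionContext context) => 1;
}

// Illustrative consumer that inherits the default GetLoadMetric.
class NoopProcessor : BaseEventProcessor
{
    public override Task OpenAsync(CancellationToken ct, PartitionContext context) => Task.CompletedTask;
    public override Task CloseAsync(PartitionContext context, string reason) => Task.CompletedTask;
    public override Task ProcessEventsAsync(CancellationToken ct, PartitionContext context, IEnumerable<EventData> events) => Task.CompletedTask;
    public override Task ProcessErrorAsync(PartitionContext context, Exception error) => Task.CompletedTask;
}
```

The design choice mirrors a common .NET pattern (e.g. Stream-style template classes): the interface preserves flexibility for consumers with their own inheritance hierarchy, while the abstract class keeps the easy path easy.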
2025-04-01T04:10:12.172858
2020-09-24T18:28:53
708385904
{ "authors": [ "ellismg" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13528", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/15417" }
gharchive/pull-request
[EventGrid] Light copy editing of README.md Fixes #15304 Pointing at the src folder instead of the root for the source code seemed to be what some other packages did, but I can understand the argument about what we currently have. I wavered a little about the "a HTTP" vs "an HTTP", from some light reading, it seemed like it depending on how someone says "HTTP" (since a vs an is about a relation to a vowel sound). Your call here, @JoshLove-msft, I figured I'd just save you some time and open a PR to fix the issues from #15304
2025-04-01T04:10:12.174082
2021-02-03T00:13:47
799833800
{ "authors": [ "Sandido", "markcowl" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13529", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/18372" }
gharchive/pull-request
CloudServices 43-preview release update preview.1 update to ensure the AssemblyInfo file is updated as expected. /azp run /azp run net - mgmt -ci /azp run net - compute - ci
2025-04-01T04:10:12.174917
2022-02-11T16:28:43
1132824634
{ "authors": [ "jsquire", "kinelski" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13530", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/26966" }
gharchive/pull-request
[FormRecognizer] Re-enable tests (24552) Service issues related to these tests have been fixed. Fixes #24552. /check-enforcer evaluate
2025-04-01T04:10:12.176684
2022-05-12T00:55:41
1233316038
{ "authors": [ "azure-sdk", "kinelski" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13531", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/28710" }
gharchive/pull-request
[FormRecognizer] Re-enable StartAnalyzeDocumentPopulatesExtractedReceiptJpg Fixes https://github.com/azure/azure-sdk-for-net/issues/27083. API change check for Azure.AI.FormRecognizer API changes are not detected in this pull request for Azure.AI.FormRecognizer
2025-04-01T04:10:12.178573
2022-08-26T22:07:18
1352776070
{ "authors": [ "azure-sdk", "cochi2" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13532", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/30757" }
gharchive/pull-request
Adding live tests for MediaStreaming/Recognize operations. Contributing to the Azure SDK Please see our CONTRIBUTING.md if you are not familiar with contributing to this repository or have questions. For specific information about pull request etiquette and best practices, see this section. API change check API changes are not detected in this pull request.
2025-04-01T04:10:12.180115
2022-09-02T22:52:35
1360669402
{ "authors": [ "JoshLove-msft", "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13533", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/30918" }
gharchive/pull-request
Avoid disposing already disposed cancellation token source Fixes https://github.com/Azure/azure-sdk-for-net/issues/30553 /azp run net - servicebus - tests API change check API changes are not detected in this pull request.
2025-04-01T04:10:12.181956
2023-03-01T06:26:02
1604331934
{ "authors": [ "azure-sdk", "rajkumar-rangaraj" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13534", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/34622" }
gharchive/pull-request
[AzureMonitorOpenTelemetryDisto] Read options from IConfiguration for Services.AddAzureMonitorOpenTelemetry() Sample telemetry which has read connection string from appSettings.json {"name":"Request","time":"2023-03-01T06:16:03.5424326Z","sampleRate":90,"iKey":"00000000-0000-0000-0000-000000000123","tags":{"ai.operation.id":"8739551e7827c7231a539bd2c43fc6a1","ai.user.userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/<IP_ADDRESS> Safari/537.36","ai.operation.name":"GET /favicon.ico","ai.location.ip":null,"ai.cloud.role":"unknown_service:Azure.Monitor.OpenTelemetry.Demo","ai.cloud.roleInstance":"rajrang-ai","ai.internal.sdkVersion":"dotnet7.0.3:otel1.4.0:ext1.0.0-alpha.20230228.1"},"data":{"baseType":"RequestData","baseData":{"id":"3d27f5119d5afbfb","name":"GET /favicon.ico","duration":"00:00:00.0146805","success":false,"responseCode":"404","url":"http://localhost:12256/favicon.ico","properties":{"http.flavor":"1.1","_MS.ProcessedByMetricExtractors":"(Name: X,Ver:\u00271.1\u0027)"},"ver":2}}} API change check API changes are not detected in this pull request.
2025-04-01T04:10:12.189015
2018-05-31T22:07:02
328314349
{ "authors": [ "dsgouda", "lewu-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13535", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/4401" }
gharchive/pull-request
[ADLS] add vnet and tests Description Added virtual network CRUD operations for account management. (https://github.com/Azure/azure-rest-api-specs/commit/b66d57be1bf2ee01797fffc34584d87b00d6629a) This checklist is used to make sure that common guidelines for a pull request are followed. [x] Please add REST spec PR link to the SDK PR [x] I have read the contribution guidelines. [x] The pull request does not introduce breaking changes. General Guidelines [x] Title of the pull request is clear and informative. [x] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [x] Pull request includes test coverage for the included changes. SDK Generation Guidelines [x] If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above. [x] The generate.cmd file for the SDK has been updated with the version of AutoRest, as well as the commitid of your swagger spec or link to the swagger spec, used to generate the code. [x] The *.csproj and AssemblyInfo.cs files have been updated with the new version of the SDK. @lewu-msft Please fix failing tests @lewu-msft gentle ping. @lewu-msft tests are still failing working on it. tests passed locally but fail after pushing to repo
2025-04-01T04:10:12.195782
2018-06-29T18:48:41
337100854
{ "authors": [ "Tiano2017", "dsgouda" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13536", "repo": "Azure/azure-sdk-for-net", "url": "https://github.com/Azure/azure-sdk-for-net/pull/4511" }
gharchive/pull-request
bump resource manager version to 1.9.0-preview. Description This checklist is used to make sure that common guidelines for a pull request are followed. [ ] Please add REST spec PR link to the SDK PR [ ] I have read the contribution guidelines. [ ] The pull request does not introduce breaking changes. General Guidelines [ ] Title of the pull request is clear and informative. [ ] There are a small number of commits, each of which have an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page. Testing Guidelines [ ] Pull request includes test coverage for the included changes. SDK Generation Guidelines [ ] If an SDK is being regenerated based on a new swagger spec, a link to the pull request containing these swagger spec changes has been included above. [ ] The generate.cmd file for the SDK has been updated with the version of AutoRest, as well as the commitid of your swagger spec or link to the swagger spec, used to generate the code. [ ] The *.csproj and AssemblyInfo.cs files have been updated with the new version of the SDK. @Tiano2017 Was this not updated in the last changes? Any reason for these updates. @dsgouda I had a sync up with Abhijeet. it seems the version 1.8.0-preview is already taken and on nuget.org the version is hidden. Will merge on CIs passing.
2025-04-01T04:10:12.200951
2020-04-23T18:15:40
605749545
{ "authors": [ "brjohnstmsft", "heaths", "rakshith91" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13537", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/11025" }
gharchive/issue
Paging for search list APIs should return paged items - especially index and datasources. Indexes yes, but why data sources? CC @heaths Apart from understanding why DataSources should be paged (keep in mind the service doesn't actually support paging currently, so the SDK would only return a single page), this would otherwise be a dup of #11008. For the new APIs we are making list_indexes return a pageable because there can be up to 3,000 for S3 plans and index models can be pretty large, but limits on other resources are pretty low and they tend to be smaller models. It would be helpful to understand why you'd want these in pages as opposed to all at once. Wasn't aware service doesn't support paging currently. Closing this issue as a duplicate. Thanks for the clarification.
2025-04-01T04:10:12.205880
2020-09-11T07:13:58
698941048
{ "authors": [ "3ttp", "kaerm" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13538", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/13735" }
gharchive/issue
[BUG Bash]

**Describe the bug**
A clear and concise description of what the bug is.

**Exception or Stack Trace**
Add the exception log and stack trace if available.

**To Reproduce**
Steps to reproduce the behavior:

**Code Snippet**
Add the code snippet that causes the issue.

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Setup (please complete the following information):**
- Python Version: [e.g. Python 3.8]
- SDK Version: [e.g. azure-mgmt-resource-15.0.0b1]

**Additional context**
Add any other context about the problem here.

**Information Checklist**
Kindly make sure that you have added all the following information above and checked off the required fields; otherwise we will treat the issue as an incomplete report.
- [ ] Bug Description Added
- [ ] Repro Steps Added
- [ ] Setup information Added

It seems that you didn't fill the issue template with an actual issue description, so we're closing this, but feel free to open any issues if you have any comments about the SDK.
2025-04-01T04:10:12.207673
2017-10-12T14:01:12
264953500
{ "authors": [ "smoser" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13539", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/1527" }
gharchive/issue
required options should be arguments

The syntax of many commands requires a group but makes it an option rather than an argument.

```
$ az group delete -h | tail -n 5
Examples
    Delete a resource group.
        az group delete -n MyResourceGroup
```

Why isn't it this instead?

```
$ az group delete MyResourceGroup
```

re-filed on azure-cli https://github.com/Azure/azure-cli/issues/4656
2025-04-01T04:10:12.214609
2022-03-11T02:37:28
1165932377
{ "authors": [ "msyyc" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13540", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/issues/23481" }
gharchive/issue
[testproxy] conceal sensitive info in recording files

add sandi

- https://github.com/Azure/azure-sdk-for-python/pull/23425
- https://github.com/Azure/azure-sdk-for-python/pull/23426
- https://github.com/Azure/azure-sdk-for-python/pull/23427
- https://github.com/Azure/azure-sdk-for-python/pull/23428
- https://github.com/Azure/azure-sdk-for-python/pull/23429
- https://github.com/Azure/azure-sdk-for-python/pull/23430
- https://github.com/Azure/azure-sdk-for-python/pull/23456
- https://github.com/Azure/azure-sdk-for-python/pull/23458
- https://github.com/Azure/azure-sdk-for-python/pull/23459
- https://github.com/Azure/azure-sdk-for-python/pull/23461
- https://github.com/Azure/azure-sdk-for-python/pull/23462
- https://github.com/Azure/azure-sdk-for-python/pull/23463
- https://github.com/Azure/azure-sdk-for-python/pull/23335
- https://github.com/Azure/azure-sdk-for-python/pull/23336
- https://github.com/Azure/azure-sdk-for-python/pull/23421
- https://github.com/Azure/azure-sdk-for-python/pull/23337
- https://github.com/Azure/azure-sdk-for-python/pull/23338
- https://github.com/Azure/azure-sdk-for-python/pull/23339
- https://github.com/Azure/azure-sdk-for-python/pull/23340
- https://github.com/Azure/azure-sdk-for-python/pull/23341
- https://github.com/Azure/azure-sdk-for-python/pull/23401
2025-04-01T04:10:12.216204
2022-04-20T15:37:13
1209861896
{ "authors": [ "azure-sdk", "l0lawrence" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13541", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/24105" }
gharchive/pull-request
[MetricsAdvisor] Pylint Fixes Pylint Enum fixes for Metrics. Disabling keyword api version for now API change check for azure-ai-metricsadvisor API changes are not detected in this pull request for azure-ai-metricsadvisor
2025-04-01T04:10:12.218006
2022-11-30T08:09:22
1469217373
{ "authors": [ "azure-sdk" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13542", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/27744" }
gharchive/pull-request
[AutoRelease] t2-workloads-2022-11-30-01468(can only be merged by SDK owner) https://github.com/Azure/sdk-release-request/issues/3484 Live test success https://dev.azure.com/azure-sdk/internal/_build?definitionId=4807 issue link:https://github.com/Azure/sdk-release-request/issues/3484
2025-04-01T04:10:12.224460
2023-05-04T20:23:38
1696649936
{ "authors": [ "azure-sdk", "vincenttran-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13544", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/30243" }
gharchive/pull-request
[Storage] Fix type-hint in Blob and Datalake package This PR fixes the typehints to use AsyncTokenCredential in our async clients rather than the sync TokenCredential. API change check APIView has identified API level changes in this PR and created following API reviews. azure-storage-blob
2025-04-01T04:10:12.226135
2019-04-02T17:17:30
428347473
{ "authors": [ "AutorestCI" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13545", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/4757" }
gharchive/pull-request
[AutoPR blueprint/resource-manager] Fix model type regression Created to sync https://github.com/Azure/azure-rest-api-specs/pull/5535 (message created by the CI based on PR content) This PR has been merged into https://github.com/Azure/azure-sdk-for-python/pull/4158
2025-04-01T04:10:12.227490
2019-07-03T20:47:16
463945923
{ "authors": [ "adxsdk6", "scbedd" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13546", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/6229" }
gharchive/pull-request
Enabling samples in sphinx docs for the azure-storage-X FYI @annatisch I was only pulling in stuff from examples folders. Need to include tests where sample is in the name of the test file. @Azure/azure-sdk-eng Can one of the admins verify this patch?
2025-04-01T04:10:12.228771
2019-10-03T11:14:05
502002208
{ "authors": [ "adxsdk6", "mitchdenny" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13547", "repo": "Azure/azure-sdk-for-python", "url": "https://github.com/Azure/azure-sdk-for-python/pull/7582" }
gharchive/pull-request
Setup app configuration with unified pipelines. This PR will setup app configuration for release via unified pipelines. Can one of the admins verify this patch? /azp run python - appconfiguration - ci /azp run python - appconfiguration - ci
2025-04-01T04:10:12.233048
2022-08-18T08:05:31
1342712324
{ "authors": [ "JonathanGiles", "cmcd22", "praveenkuttappan" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13548", "repo": "Azure/azure-sdk-tools", "url": "https://github.com/Azure/azure-sdk-tools/pull/3978" }
gharchive/pull-request
Support language-specific CSS files #3464

Language-specific CSS files have been created for supported languages. Upon loading a review page, the review language is checked and the appropriate CSS file is linked to.

Fixes #3464

Java example: custom CSS to increase the font size and change some font colour to red.
Java with default CSS:
C# example: custom CSS to decrease the font size, change the background colour to black and some text colour to white.
C# with default CSS:

@praveenkuttappan @chidozieononiwu Could you please take a look at this PR to support language-specific CSS functionality?

You will have to resolve the conflict with the latest merge from Dozie. Please let me know before you merge after the conflict is resolved. I will deploy it to staging and verify before we merge it.

@praveenkuttappan It would be good to deploy this PR to staging along with #3463, as that introduces the custom Java icons.

@praveenkuttappan conflict resolved

@praveenkuttappan I'm not sure why the build is failing - it seems like it can't find some files, but I'm not sure why?
2025-04-01T04:10:12.234384
2019-05-06T12:57:49
440687179
{ "authors": [ "amarzavery", "mortezasoft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13549", "repo": "Azure/azure-service-bus-node", "url": "https://github.com/Azure/azure-service-bus-node/issues/241" }
gharchive/issue
broken link "azure-sdk-for-js" link is broken here: https://github.com/Azure/azure-service-bus-node/blob/master/examples/README.md @mortezasoft - Thanks for letting us know. The link has been fixed.
2025-04-01T04:10:12.258576
2019-06-04T12:56:58
451980893
{ "authors": [ "PalashBorhan", "albertxavier100", "fjmejias", "realwanpengli", "tbuha", "vicancy", "zackliu" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13550", "repo": "Azure/azure-signalr", "url": "https://github.com/Azure/azure-signalr/issues/559" }
gharchive/issue
Serverless Mode. Connected/Disconnected users Hi there, I'm going to use Azure Functions and Azure SignalR in Serverless Mode. How can I determine that a user is connected to Azure SignalR? Is there any API to get the connected users (with UserId) of Azure SignalR? Thanks, Taras Hi @tbuha. All REST APIs can be found here. There is no query API for connected users. @realwanpengli thanks for the reply. Does it mean that I can't do a completely serverless approach? And to get Connected/Disconnected Azure SignalR clients I still need to implement a Hub in the App Service with public override Task OnConnectedAsync(); public override async Task OnDisconnectedAsync(Exception exception); What will be the best approach? @tbuha, welcome. That's right. Closing the issue since it is already resolved and over 1 week old. Hi @tbuha, we are now working to add an EventGrid hook to serverless scenarios so that you can get notified of Connect and Disconnect events. It will be available in all regions around early July. cc @realwanpengli @zackliu Hello @vicancy, It's good to have this. I have this reported here in github as well. Any thoughts on whether this can be done using an Azure Functions binding? Is this EventGridTrigger for serverless scenarios working in all regions? @fjmejias It works in all the regions and here's the sample of Event Grid integration.
2025-04-01T04:10:12.263610
2022-09-23T12:32:52
1383736070
{ "authors": [ "M-M-M-M", "nakulkar-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13551", "repo": "Azure/azure-storage-azcopy", "url": "https://github.com/Azure/azure-storage-azcopy/issues/1906" }
gharchive/issue
Unable to copy text files with a .vhd extension Which version of the AzCopy was used? Note: The version is visible when running AzCopy without any argument AzcopyVersion 10.16.0 Which platform are you using? (ex: Windows, Mac, Linux) OS-Environment windows OS-Architecture amd64 What command did you run? Note: Please remove the SAS to avoid exposing your credentials. If you cannot remember the exact command, please retrieve it from the beginning of the log file. azcopy.exe copy "P:\STAGING\App 2020-12-05\vhd\test.vhd" 'https://appvaulttest.blob.core.windows.net/app?[...]' --recursive What problem was encountered? Get an UPLOADFAILED error: 2022/09/23 12:26:58 INFO: [P#0-T#0] Starting transfer: Source "\\\\?\\P:\\STAGING\\App 2020-12-05\\vhd\\test.vhd" Destination "https://appvaulttest.blob.core.windows.net/advitium/test.vhd?[...]". Specified chunk size 4194304 2022/09/23 12:26:59 ==> REQUEST/RESPONSE (Try=1/72.5306ms, OpTime=395.1439ms) -- RESPONSE STATUS CODE ERROR PUT https://appvaulttest.blob.core.windows.net/app/test.vhd?se=2022-09-30T04%3A40%3A38Z&sig=-REDACTED-&sp=racwdlmeop&spr=https&sr=c&st=2022-09-21T20%3A40%3A38Z&sv=2021-06-08&timeout=901 Content-Length: [0] User-Agent: [AzCopy/10.16.0 Azure-Storage/0.15 (go1.17.9; Windows_NT)] X-Ms-Blob-Cache-Control: [] X-Ms-Blob-Content-Disposition: [] X-Ms-Blob-Content-Encoding: [] X-Ms-Blob-Content-Language: [] X-Ms-Blob-Content-Length: [6] X-Ms-Blob-Content-Type: [text/plain] X-Ms-Blob-Sequence-Number: [0] X-Ms-Blob-Type: [PageBlob] X-Ms-Client-Request-Id: [...] X-Ms-Version: [2020-10-02] -------------------------------------------------------------------------------- RESPONSE Status: 400 The value for one of the HTTP headers is not in the correct format. Content-Length: [331] Content-Type: [application/xml] Date: [Fri, 23 Sep 2022 12:26:59 GMT] Server: [Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0] X-Ms-Client-Request-Id: [...] X-Ms-Error-Code: [InvalidHeaderValue] X-Ms-Request-Id: [...] 
X-Ms-Version: [2020-10-02] Response Details: <Code>InvalidHeaderValue</Code><Message>The value for one of the HTTP headers is not in the correct format. </Message><HeaderName>x-ms-blob-content-length</HeaderName><HeaderValue>6</HeaderValue> 2022/09/23 12:26:59 ERR: [P#0-T#0] UPLOADFAILED: \\?\P:\STAGING\App 2020-12-05\vhd\test.vhd : 400 : 400 The value for one of the HTTP headers is not in the correct format.. When Creating blob. X-Ms-Request-Id: [...] How can we reproduce the problem in the simplest way? Create a text file named test.vhd with a single text line "Test". Try to upload it with azcopy. Have you found a mitigation/solution? Uploading from the web UI at portal.azure.com is working for these files. .vhd files are transferred as PageBlobs by default. Use '--blob-type=BlockBlob' in the command to override this.
2025-04-01T04:10:12.276943
2016-06-23T20:13:07
162012616
{ "authors": [ "JoeBrockhaus", "SeanFeldman", "brettsam", "cgillum", "davidebbo", "estruyf" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13552", "repo": "Azure/azure-webjobs-sdk-script", "url": "https://github.com/Azure/azure-webjobs-sdk-script/issues/456" }
gharchive/issue
Timer function doesn't get triggered every hour Hi, I have an Azure function which gets triggered every hour (normally it should only be four times a day). The cron job is configured as follows: "0 0 */1 * * *". If you check the attached screenshot, you'll see that the job doesn't run every hour. At 12:00 everything is ok, because then I configured the function. At 13:21 the job started a bit later because another job got triggered (http trigger). That seems to have woken up the timer function and so it successfully ran. The function didn't get executed at 15:00. At 16:00 I logged on to the Azure portal; again this has woken up the timer function. At 19:20 again the same HTTP trigger function got called. Same story all over. Is this issue coming from the fact that by default the always-on functionality is turned off? Is your function app Dynamic or in Classic mode? If Classic, what sku is it using? e.g. Free/Shared/Basic/Standard Configured in dynamic mode. Hmmm, then it doesn't make sense. Normally, there is no need for Always On in dynamic (in fact, it's not even available). Your Function App should be kept running automatically to take care of timers. Could you share the Function App name? You can create a test one with the same issue if you prefer not to share your main one. Function app name: functionsf9516c96 - TimerTriggerNodeJS2 is the function in which I configured the test. You mentioned that there is also an HTTP trigger in this function app. We recently discovered a bug that can cause similar problems when a function app contains timer triggers as well as other trigger types. Could you try creating a separate function app which contains only your timer trigger function(s) and see if that makes it more reliable? @cgillum / @estruyf -- Do you still see this happening? Still an issue. Thanks for the report @SeanFeldman. The fix for this hasn't been deployed yet but will be soon.
We'll update this issue when it's deployed and see if anyone is still seeing the problem. The fix is now deployed so I'll close this issue. If anyone continues to see this, please reactivate so we can investigate. @brettsam I'm seeing similar odd behavior with this trigger - rather than not triggering, it's triggering more than 1 time every hour - usually 2 or 3 times within a minute or so. My FuncApp only has 2 functions, both configured to trigger every hour, using TimerTrigger("0 0 * * * *") It's on a Consumption Plan. I've previously had RunOnStartup = true, which seems like it might explain why the most recent execution in the screenshot fired a 4th, seemingly random time .. this would be when I navigated to the Function App in the Portal. (Seems odd this would 'wake-up' the app, but 🤷‍♂️ ) @JoeBrockhaus -- thanks for the report. If you continue seeing this with your newest deployment, can you open up a new issue (and feel free to mention me so I get pinged right away). If you can give a time range and your function app name (either directly or indirectly), that'd help me look through our logs to see if I can figure out what's happening. thanks @brettsam these latest updates (1.0.0-alpha6) and disabling RunOnStartup look to have brought it back to the single execution.
2025-04-01T04:10:12.279645
2016-07-20T16:12:16
166621211
{ "authors": [ "brettsam", "fabiocav" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13553", "repo": "Azure/azure-webjobs-sdk-script", "url": "https://github.com/Azure/azure-webjobs-sdk-script/pull/508" }
gharchive/pull-request
adding 'Version' to HostStatus Addresses #403. We'll also need to update AppVeyor to patch the build number into AssemblyFileVersion. Added a couple of small comments Looks good! One small comment, which is a bit of a nit: I think it would be a better approach to make the version field private and expose a property instead, but it's good to :shipit: ! I tried some other things: a public static property initialized in a static ctor, but it wouldn't serialize; a public instance property exposing a private static field, but FxCop didn't like that, though I could always suppress it. The field felt the simplest. If there's a better pattern, I can come back and change it.
2025-04-01T04:10:12.284531
2017-06-22T04:19:30
237728148
{ "authors": [ "MikeStall", "mathewc", "soninaren" ], "license": "mit", "license_source": "bigquery", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13554", "repo": "Azure/azure-webjobs-sdk-templates", "url": "https://github.com/Azure/azure-webjobs-sdk-templates/issues/479" }
gharchive/issue
Fix validation regexes PR https://github.com/Azure/azure-webjobs-sdk/pull/1188 adds declarative regex validation for Table names, though the regexes no longer match. We need to get them into alignment for Table, and any of the others we move to the new declarative model. @soninaren created these template regexes I believe, so he may have more insights into their current form and whether the new one in the above referenced PR is correct. (Nit, #1188 doesn't add a new regex - this is the same regex we've had since v1.0; it just changes how it's invoked). Here's the problem: In SDK: "^[A-Za-z][A-Za-z0-9]{2,62}$" In Bindings.json: "^[A-Za-z][A-Za-z0-9]{2,62}$|^[{][a-zA-Z0-9]{1,126}[}]$|^[%][a-zA-Z0-9]{1,126}[%]$", The difference is just that bindings.json is appending additional patterns to support { } and %%. The bindings.json regex is applied to the raw user input, which still has { } and %% values. But the SDK regex expression is applied after the { } and %% substitution occurs, so it can be simpler. That said, the { } %% substitution is more complex than described by the bindings.json regex. For example, "table{x}{y}" is a legal expression but would be failed by bindings.json. This is related to https://github.com/Azure/azure-webjobs-sdk-script/issues/1416. We should have a single regex (now in SDK) and portal uses that. Moving out of sprint 6 since it needs a deeper discussion and possible update to portal and/or runtime
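The mismatch described in the issue can be checked directly; below is a minimal Python sketch that runs sample inputs against the two quoted regexes (the patterns are copied verbatim from the issue; the sample names are illustrative only):

```python
import re

# The two patterns quoted in the issue, copied verbatim.
SDK_PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9]{2,62}$")
BINDINGS_PATTERN = re.compile(
    r"^[A-Za-z][A-Za-z0-9]{2,62}$"
    r"|^[{][a-zA-Z0-9]{1,126}[}]$"
    r"|^[%][a-zA-Z0-9]{1,126}[%]$"
)

# A plain table name passes both patterns, and a single binding
# expression passes the bindings.json pattern...
print(bool(SDK_PATTERN.match("mytable")))           # True
print(bool(BINDINGS_PATTERN.match("mytable")))      # True
print(bool(BINDINGS_PATTERN.match("{x}")))          # True
# ...but the mixed expression the issue calls legal is rejected.
print(bool(BINDINGS_PATTERN.match("table{x}{y}")))  # False
```

This reproduces the claim that "table{x}{y}" fails the bindings.json pattern even though the runtime treats it as a legal expression.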
2025-04-01T04:10:12.316832
2023-07-11T05:45:20
1798149731
{ "authors": [ "anthony-c-martin", "maciekgrzela", "stephaniezyen" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13555", "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/11203" }
gharchive/issue
Duplicates highlighting for parameters inside one service Hi Guys, A few times we've been struggling with failing infrastructure pipelines. In most cases, it turns out we have the same parameter listed twice inside one service block declaration. How about implementing functionality that highlights the code when this kind of situation occurs? Regards, Maciek :) @maciekgrzela please could you share a code sample demonstrating this? Closing due to lack of response, please reopen with a code sample.
2025-04-01T04:10:12.323346
2020-09-04T09:41:55
692972127
{ "authors": [ "ChristopherGLewis", "alex-frankel", "anthony-c-martin", "aytimothy", "slavizh", "thesushil" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13556", "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/448" }
gharchive/issue
Switch support Is your feature request related to a problem? Please describe. Provide an option for switch support. In cases where we have more than two choices we need switch support. I do not think the ternary operator can do that. In ARM templates you can solve that in some ways, like having variable objects like variables: { "mySwitch": { "first": 1, "second": 2, "third": 3 } } and of course when you need to use it you do something like: "someProperty": "[parameters('InputParameter')[string(variables('mySwitch'))]]" You could basically have a switch function and simplify that approach in Bicep. The downside is that I do not see a way to set a default value or more advanced ways of working like in PowerShell. Note that in Bicep the equivalent should work: param switchVal string var myVar = { first: 'name1' second: 'name2' third: 'name3' } var chosenName = myVar[switchVal] We should also soon be able to go even further and use the type system to validate that e.g. we've got a mismatch between param declaration and usage: param switchVal string { allowed: [ 'first' 'second' 'third' 'fourth' // show a warning or error because the "switch" statement doesn't support 'fourth' ] } @anthony-c-martin Yes, it will work, but the point was to have an easier-to-use syntax for novice people who may come from different language backgrounds. This is where I have issues with the "json-like" syntax. 
When you look at the var myVar, in bicep you have the "no comma" object param switchVal string var myVar = { first: 'name1' second: 'name2' third: 'name3' } var chosenName = myVar[switchVal] output return string = chosenName However, if we switch this to a parameter, the parameter becomes an object and passing this requires real JSON: Bicep param switchVal string param myParam object var chosenName = myParam[switchVal] output return string = chosenName PowerShell $json = '{"third":"name3","first":"name1","second":"name2"}' $hash = $json | ConvertFrom-Json $param = @{ myParam = $Hash } New-AzResourceGroupDeployment -Name testDeploy -ResourceGroupName BicepTesting ` -TemplateFile .\switchTest2.json -TemplateParameterObject $param CLI az deployment group create --resource-group bicepTesting \ --template-file switchTest2.json \ --parameters myParam='{"third":"name3","first":"name1","second":"name2"}' I realize this is a parameter vs. template issue, but I feel it would be a much nicer development experience overall if objects and arrays supported pure JSON syntax. What is the bicep equivalent of the default case of a real switch? There is no switch expression, so there is no equivalent of a default on a switch. The only conditional logic on properties is the ternary operator: var trueThing = true output isItTrue string = trueThing ? 'It is true' : 'If it is not, I will be emitted by default' In some sense, the third argument of the ternary operator is the closest that exists to a default. You can also chain ternary expressions together, but that would be very finicky code to maintain. I think that's the point of having an if-elseif-else (#1171) or switch-like operator.
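The object-lookup workaround discussed in this thread is essentially a dictionary lookup with a fallback; a minimal Python sketch of the same pattern (the key/value names mirror the myVar example above, and the 'default-name' fallback is purely illustrative):

```python
# "Switch" emulated as a lookup table, mirroring the myVar example above.
NAME_BY_SWITCH = {
    "first": "name1",
    "second": "name2",
    "third": "name3",
}

def chosen_name(switch_val: str) -> str:
    # dict.get supplies the "default" branch that a chain of ternary
    # expressions would otherwise have to spell out.
    return NAME_BY_SWITCH.get(switch_val, "default-name")

print(chosen_name("second"))  # name2
print(chosen_name("fourth"))  # default-name
```

The second call shows the default case the thread says Bicep's object-lookup workaround lacks: an unknown key falls through to the fallback value instead of failing.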
2025-04-01T04:10:12.331638
2022-09-14T23:21:40
1373692510
{ "authors": [ "Marc013", "alex-frankel", "bjompen", "csaba-almasi", "jeskew", "markjbrown" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13557", "repo": "Azure/bicep", "url": "https://github.com/Azure/bicep/issues/8409" }
gharchive/issue
Support regex in parameter validation I think this is tricky to implement in the ARM runtime as there are concerns over non-performant regex, but don't know the details well enough. Also seems like it may be related to #4158. cc @jeskew as FYI Some context below from #2922: Coming from the perspective of a longtime AWS CloudFormation and Terraform author now starting to use bicep - this type of password complexity validation can be achieved with a regular expression, as I've seen it done in these other tools. So, what I think is really missing - like a SERIOUS design omission - is to have a regular expression based validation for parameters. This should have full support for all regex features, including backwards and forward references. It should support the PCRE2 standard as can be validated here: https://regex101.com I noticed another another open issue to validate IP addresses and CIDRs, while looking to see if regex validation was an existing open issue. This is also something that can be done easily using regular expressions. Given that this feature has been in CloudFormation templates, first JSON, then YAML, for at least 5 years that I'm personally aware of, it's sort of incredible - a glaring omission which surprised me - that this basic feature is not yet in bicep and arm. Please consider adding it, as it's incredibly useful in both preventing use of incorrect values, but also in precisely describing what values are allowed in a way beyond what's often possible to easily describe in words. For example, I want to have a startDate parameter with the value entered as 'YYYY-MM-DD' - the fact I can't validate this simple pattern with a message to the user if they don't enter it correctly, is really surprising for a mature IaC template tool. 
Originally posted by @michael-crawford in https://github.com/Azure/bicep/issues/2922#issuecomment-1245765330 There are some safety mechanisms we could use when executing user-supplied regular expressions, such as setting a strict timeout. We may also be able to use a non-backtracking engine, though I believe that would not support the full PCRE2 standard. +1 for adding regex, as it would make parameter validation so much better. As for language though, wouldn't it make more sense to have .net regex instead of PCRE2? The rest of bicep is .net, and I believe it is also the flavour used in e.g. PowerShell, so it would be consistent in usage. This would be extremely useful in a lot of our templates in the following scenarios, with the 'fail early shift left' mentality: AKV naming ACR naming Network address spacing +1 to this. I can use this for Microsoft.DocumentDB/mongoClusters which has a password policy for the cluster resource of 8-256 characters and 3 of the following: lower case, upper case, numeric and symbol. It would be fantastic to have a parameter object where I could apply rules like these (aside from min/max length which we already have) and have it validate the user input before it gets sent to the RP. @alex-frankel, Could you provide some information on the likelihood of this feature being implemented in the near future? Unlikely in the near future. We are blocked on the Deployments runtime being stuck on .NET Standard. Once we are able to migrate to .NET core, we will be able to pick this one up. The migration to .NET core is unfortunately not going to happen quickly due to a variety of dependencies.
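As a concrete illustration of the kind of parameter check being requested, here is a minimal Python sketch validating the 'YYYY-MM-DD' startDate shape mentioned in the quoted feature request (the anchored pattern is illustrative, not any proposed Bicep syntax, and it checks shape only, not calendar validity):

```python
import re

# Anchored pattern for the 'YYYY-MM-DD' shape from the feature request.
# Shape only: '2022-13-40' would still pass; real calendar validation
# would need a date parser on top of this.
DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def is_valid_start_date(value: str) -> bool:
    return DATE_PATTERN.match(value) is not None

print(is_valid_start_date("2022-09-14"))  # True
print(is_valid_start_date("14/09/2022"))  # False
```

The same anchored-pattern idea generalizes to the other scenarios listed above (Key Vault naming, ACR naming, address spacing), which is why the request centers on exposing a regex constraint at parameter-declaration time.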
2025-04-01T04:10:12.341561
2018-05-23T16:59:12
325793746
{ "authors": [ "f0", "radu-matei" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13558", "repo": "Azure/brigade", "url": "https://github.com/Azure/brigade/issues/472" }
gharchive/issue
[brig] Windows binary has no .exe extension The Windows binary from the download section has no .exe suffix; without the .exe suffix Windows doesn't know how to handle the binary. Please provide the precompiled binary with a .exe suffix. @f0 - Thanks for letting us know! In the meantime, as a quick fix, you could manually add the .exe extension and it should work (just tested on my Windows machine). @radu-matei yes, manually adding .exe does work
2025-04-01T04:10:12.345852
2024-11-14T07:57:43
2657951508
{ "authors": [ "JamesBurnside", "nainasyed03" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13559", "repo": "Azure/communication-ui-library", "url": "https://github.com/Azure/communication-ui-library/issues/5416" }
gharchive/issue
Facing an issue while trying to cancel a screen share I am using the CallWithChatComposite UI for my meeting application with Azure Communication Services. When I click on present screen and then click cancel in the pop-up, it throws an error. Once this is done, if I then try switching on the camera, that also throws an error once immediately after the previous issue; from then onwards the camera works fine. Before the previous issue occurs, the camera also works fine. Steps to reproduce the issue Click on present after entering a call. From the popup that appears, click cancel, which throws an error. Then, on turning on the camera, even though the camera is turned on correctly, it throws an error only immediately after the previous issue and not before or after. This is what I am facing. OS & Device: Windows Browser: Google Chrome and Brave Thanks @nainasyed03 for bringing this to our attention, we'll try to reproduce this error as well and get back to you. In the meantime, can you let us know the @azure/communication-react and @azure/communication-calling versions you are using? @nainasyed03 thanks, we can repro the issue on our side as well. The first (cancel screenshare) error is unfortunately "by design"; the error returned isn't unique enough for us to swallow and ignore. The second however should not be happening and we'll look into this - thanks for finding it! Side note: when building your app in production you won't see these errors covering the screen, you'll only see them silently in the console, so end users shouldn't be impacted by these errors (but we're still looking into how we can fix them to reduce engineering noise). @JamesBurnside Thank you for confirming the issue on your end! It's good to know that these errors won't affect the user experience in production. Looking forward to hearing more about the fix for the second issue. Thanks again for your prompt response! The issue here is coming from the underlying @azure/communication-calling package. 
I've filed a bug internally to route this to the appropriate team.
2025-04-01T04:10:12.348210
2022-11-04T23:05:16
1436717151
{ "authors": [ "edwardlee-msft" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13560", "repo": "Azure/communication-ui-library", "url": "https://github.com/Azure/communication-ui-library/pull/2492" }
gharchive/pull-request
[bug][a11y] Focus on participant list when opening people pane What Redirect focus to the first participant item in the list in the people pane. Why Accessibility issue where focus should move to the expected element when interacting with the people button https://skype.visualstudio.com/SPOOL/_workitems/edit/3007322 How Tested Tested locally on both call composite and callwithchat composite Process & policy checklist [ ] I have updated the project documentation to reflect my changes if necessary. [ ] I have read the CONTRIBUTING documentation. Is this a breaking change? [ ] This change causes current functionality to break. Note: shouldFocusOnMount={true} will focus on the first element in the FocusZone that has a tabIndex available.
2025-04-01T04:10:12.349986
2021-09-29T14:13:48
1011018947
{ "authors": [ "JamesBurnside" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13561", "repo": "Azure/communication-ui-library", "url": "https://github.com/Azure/communication-ui-library/pull/854" }
gharchive/pull-request
[do not review yet - draft] Try updated playwright npm package What Why How Tested Process & policy checklist [ ] I have updated the project documentation to reflect my changes if necessary. [ ] I have read the CONTRIBUTING documentation. Is this a breaking change? [ ] This change causes current functionality to break. Closing for now to reduce randomization with many PRs up currently
2025-04-01T04:10:12.352649
2017-06-02T01:37:34
233053923
{ "authors": [ "bacongobbler", "msftclas", "rodcloutier", "ultimateboy" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13562", "repo": "Azure/draft", "url": "https://github.com/Azure/draft/pull/93" }
gharchive/pull-request
fix(windows): Updated to docker version v17.05.0-ce with the needed dependencies This PR intends to fix the Windows compilation. To do so, I updated the docker version to v17.05.0-ce. We also had to get the proper dependencies for docker, which does not use tagged commits. If need be, I could try to fix it without upgrading docker. Fixes #61 @rodcloutier, Thanks for your contribution. To ensure that the project team has proper rights to use your work, please complete the Contribution License Agreement at https://cla.microsoft.com. It will cover your contributions to all Microsoft-managed open source projects. Thanks, Microsoft Pull Request Bot @rodcloutier, thanks for signing the contribution license agreement. We will now validate the agreement and then the pull request. Thanks, Microsoft Pull Request Bot Note that this just bumps the docker client library version we use to interact with the daemon. Kubernetes is still on v1.12 or v1.11 IIRC so multi-stage builds won't work. I want it too :( @bacongobbler ya, that makes sense. I was holding off on merging this until you at least glanced at it. I've tested it all the way through and am happy with things. Merge when ready :)
2025-04-01T04:10:12.362633
2022-12-02T04:07:42
1472281843
{ "authors": [ "amrashwan", "sozercan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13563", "repo": "Azure/eraser", "url": "https://github.com/Azure/eraser/issues/494" }
gharchive/issue
securityContext for collector pod What steps did you take and what happened: I'm trying to install the eraser Helm chart and the controller is not allowed to create the 3 containers (collector/scanner/eraser) due to an Azure policy that prevents the containers from running if securityContext/runAsNonRoot is not true or readOnlyRootFilesystem is not true for the 3 containers (collector/scanner/eraser). What are the securityContext requirements for runAsNonRoot/readOnlyRootFilesystem? {"level":"error","ts":1669953279.2390826,"msg":"Reconciler error","controller":"imagejob-controller","object":{"name":"imagejob-hpz62"},"namespace":"","name":"imagejob-hpz62","reconcileID":"2c88118e-989d-4a4e-9ba4-075cfcd57486","error":"reconcile new: admission webhook "validation.gatekeeper.sh" denied the request: [azurepolicy-k8sazurev3allowedusersgroups-973561909fcec555702f] Container collector is attempting to run without a required securityContext/runAsNonRoot or securityContext/runAsUser != 0\n[azurepolicy-k8sazurev3allowedusersgroups-973561909fcec555702f] Container eraser is attempting to run without a required securityContext/runAsNonRoot or securityContext/runAsUser != 0\n[azurepolicy-k8sazurev3allowedusersgroups-973561909fcec555702f] Container trivy-scanner is attempting to run without a required securityContext/runAsNonRoot or securityContext/runAsUser != 0\n[azurepolicy-k8sazurev3hostfilesystem-bdd7861e251772982eff] HostPath volume {"hostPath": {"path": "/run/containerd/containerd.sock", "type": ""}, "name": "containerd-sock-volume"} is not allowed, pod: collector-aks-linux1-27983059-vmss000000-d7ls6. 
Allowed path: [{"pathPrefix": "/var/lib/kubelet/device-plugins", "readOnly": false}, {"pathPrefix": "/usr/local/nvidia", "readOnly": false}, {"pathPrefix": "/var/log", "readOnly": false}, {"pathPrefix": "/var/tail-db/", "readOnly": false}, {"pathPrefix": "/var/lib/docker/containers", "readOnly": true}, {"pathPrefix": "/etc/machine-id", "readOnly": true}, {"pathPrefix": "/ProgramData/docker/containers", "readOnly": false}, {"pathPrefix": "/k", "readOnly": false}, {"pathPrefix": "/proc", "readOnly": true}, {"pathPrefix": "/sys", "readOnly": true}, {"pathPrefix": "/", "readOnly": true}, {"pathPrefix": "/mnt", "readOnly": false}]\n[azurepolicy-k8sazurev3readonlyrootfilesyst-76e49796d574f9db498c] Readonly root filesystem is required for container. pod:'collector-aks-linux1-27983059-vmss000000-d7ls6', container:'collector'\n[azurepolicy-k8sazurev3readonlyrootfilesyst-76e49796d574f9db498c] Readonly root filesystem is required for container. pod:'collector-aks-linux1-27983059-vmss000000-d7ls6', container:'eraser'\n[azurepolicy-k8sazurev3readonlyrootfilesyst-76e49796d574f9db498c] Readonly root filesystem is required for container. pod:'collector-aks-linux1-27983059-vmss000000-d7ls6', container:'trivy-scanner'","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:234"} What did you expect to happen: Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.] 
Environment: Eraser version: v0.5.0 Kubernetes version: (use kubectl version): v1.24.6 @amrashwan it might be okay to set readOnlyRootFilesystem, however, eraser cannot run as non-root as it talks to CRI. If you have this policy as a requirement, eraser must be exempt from this.
2025-04-01T04:10:12.363361
2021-07-23T17:49:15
951785123
{ "authors": [ "ashnamehrotra", "sozercan" ], "license": "Apache-2.0", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13564", "repo": "Azure/eraser", "url": "https://github.com/Azure/eraser/pull/27" }
gharchive/pull-request
Eraser.go changes for test Created a client interface that has listImages, listContainers, and removeImage. Can we focus this on changes to eraser.go only?
2025-04-01T04:10:12.364091
2024-09-13T17:06:22
2525289843
{ "authors": [ "Gsantomaggio", "jhendrixMSFT" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13565", "repo": "Azure/go-amqp", "url": "https://github.com/Azure/go-amqp/pull/336" }
gharchive/pull-request
Add Null type for sending an AMQP null Fixes https://github.com/Azure/go-amqp/issues/332 It works right for me. Thanks a lot.
2025-04-01T04:10:12.365160
2023-11-07T21:01:28
1982220723
{ "authors": [ "minhng22" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13566", "repo": "Azure/go-shuttle", "url": "https://github.com/Azure/go-shuttle/pull/178" }
gharchive/pull-request
Allow custom extract function in NewTracingHandler Allow customers to pass in a custom extract function when creating a new tracing handler with NewTracingHandler(..) This is necessary since we want more manipulation of the attributes of the span that is created in the Extract function. Addendum to description of change: Refactor tracing handler logic Simplify otel.Extract(..)
2025-04-01T04:10:12.472816
2024-01-19T21:03:24
2091416385
{ "authors": [ "codecov-commenter", "wbreza" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13567", "repo": "Azure/kubelogin", "url": "https://github.com/Azure/kubelogin/pull/398" }
gharchive/pull-request
Adds Azure Developer CLI (azd) as a new login method

Adds support for Azure Developer CLI (azd) to be used as a login method. When users deploy an AKS-based application with azd, the kubeconfig will automatically be converted to use azd authentication when working with clusters that have RBAC enabled and local user accounts disabled. Also updates the azidentity package to v1.5.1 to leverage the new AzureDeveloperCLICredential.

What is Azure Developer CLI? https://aka.ms/azd
Azure Developer CLI (azd) is an open-source tool that accelerates the time it takes for you to get your application from local development environment to Azure. azd provides best practice, developer-friendly commands that map to key stages in your workflow, whether you're working in the terminal, your editor or integrated development environment (IDE), or CI/CD (continuous integration/continuous deployment).

Codecov Report
Attention: 29 lines in your changes are missing coverage. Please review.
Comparison is base (2b43d04) 65.46% compared to head (bf5c67e) 64.95%.

Files with missing patch coverage:
- pkg/internal/token/azuredevelopercli.go — 52.17% (21 missing, 1 partial) :warning:
- pkg/internal/converter/convert.go — 0.00% (4 missing) :warning:
- pkg/internal/token/provider.go — 0.00% (2 missing) :warning:
- pkg/internal/token/execCredentialPlugin.go — 0.00% (0 missing, 1 partial) :warning:

Additional details and impacted files

@@            Coverage Diff             @@
##             main     #398      +/-   ##
==========================================
- Coverage   65.46%   64.95%   -0.52%
==========================================
  Files          27       28       +1
  Lines        1894     1946      +52
==========================================
+ Hits         1240     1264      +24
- Misses        579      606      +27
- Partials       75       76       +1

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
2025-04-01T04:10:12.482975
2020-04-24T10:28:46
606211264
{ "authors": [ "bhardwahnitish19", "chintanr97" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13568", "repo": "Azure/kubernetes-keyvault-flexvol", "url": "https://github.com/Azure/kubernetes-keyvault-flexvol/issues/189" }
gharchive/issue
Mount Volume Fails for EC certs while Key is selected to be exported.

Describe the bug
Mount volume always fails if I choose keys in flex volume for EC certificates.

Logs:
Events:
Type     Reason       Age                   From                               Message
----     ------       ----                  ----                               -------
Normal   Scheduled    5m15s                 default-scheduler                  Successfully assigned default/mysql-keyvault-sp to aks-agentpool-50967501-3
Warning  FailedMount  56s (x2 over 3m12s)   kubelet, aks-agentpool-50967501-3  Unable to mount volumes for pod "mysql-keyvault-sp_default(b5d4efb8-8604-11ea-bc3e-624e96f1e750)": timeout expired waiting for volumes to attach or mount for pod "default"/"mysql-keyvault-sp". list of unmounted volumes=[private-key-volume]. list of unattached volumes=[mysql-persistent-storage private-public-key-volume private-key-volume public-key-volume cert-volume default-token-xh4x6]
Warning  FailedMount  55s (x10 over 5m13s)  kubelet, aks-agentpool-50967501-3  MountVolume.SetUp failed for volume "private-key-volume" : mount command failed, status: Failure, reason: /etc/kubernetes/volumeplugins/azure~kv/azurekeyvault-flexvolume failed, /Users/anishramasekar/go/src/github.com/Azure/kubernetes-keyvault-flexvol/azurekeyvault-flexvolume/main.go:80 +0x129

Steps to generate cert:
1. Created a CSR (where keyProperties.exportable is true, keyType: "EC")
2. Got this CSR signed by a third-party CA
3. Merged the generated public key/cert in Key Vault.

Authentication used: SP
NOTE: The same flexVolume settings and generation steps work perfectly for RSA certificates. Able to fetch the key for RSA certs, but for EC certs it fails.

Steps To Reproduce
1. Create an EC cert, mark keys as exportable.
2. Use SP to authenticate and try to fetch the key with Flex Volume.

Expected behavior
Should be able to fetch keys for EC certificates in plain text.

Access mode: service principal
Kubernetes version: 1.15.x

Hi @bhardwahnitish19, a quick question. When you say "I choose keys in flex volume for EC certificates" do you mean you just need the private key in the pod?
What I understand is the following:
1. You are creating an EC "certificate" object in Key Vault first. You get the CSR signed and enable the certificate by uploading the signed CSR. Right?
2. Now the certificate object in Key Vault is a combination of both the public and the private part. You need this EC key alone in your application-specific pod (preferably in PEM format). Right? Also please correct me if you need it in "some other format"!

If this is so, then @ritazh would this require a different enhancement to the csi-driver than the one mentioned here?

Hi @chianw Please find my comments inline:

Do you mean you just need the private key in the pod?
I need both the public & private key in the pod, but in different locations, like /var/privatekey & /var/publickey. These must be in PEM format so that the application can use them easily without any type conversions. To achieve this, I am trying to export the key and mount it at /var/privatekey, and trying to export the cert to /var/publickey (using 2 flex volumes, respectively).

1. You are creating an EC "certificate" object in Key Vault first. You get the CSR signed and enable the certificate by uploading the signed CSR. Right?
Correct

2. Now the certificate object in Key Vault is a combination of both the public and the private part. You need this EC key alone in your application-specific pod (preferably in PEM format). Right? Also please correct me if you need it in "some other format"!
PEM format would be perfect for now.

Great! I understood! Hope the updated comments here help the project owners to create the required solutions.
2025-04-01T04:10:12.484394
2020-02-26T12:38:51
571322002
{ "authors": [ "Sudharma", "sakthi-vetrivel" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13569", "repo": "Azure/open-service-broker-azure", "url": "https://github.com/Azure/open-service-broker-azure/issues/736" }
gharchive/issue
Enable Service Binding secrets to be stored in Azure Key Vault
The Service Binding creates the secret in the Kubernetes cluster. It would be a great idea if the secrets could be stored in Azure Key Vault. We are using one of the options for secret creation from here --> https://github.com/SparebankenVest/azure-key-vault-to-kubernetes .
@Sudharma I'd recommend checking out https://github.com/Azure/azure-service-operator which has an option to store these connection strings in Key Vault.
2025-04-01T04:10:12.486214
2017-10-03T20:33:24
262575804
{ "authors": [ "spryor", "timlaverty" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13570", "repo": "Azure/pcs-remote-monitoring-webui", "url": "https://github.com/Azure/pcs-remote-monitoring-webui/issues/599" }
gharchive/issue
Deep link to actual method details page for alarm, not the top-level maintenance page
Type of issue
[x] Bug
[ ] New feature
[ ] Improvement
Description
For an alarm, we need to deep link to the actual alarm details page, not the top-level maintenance page.
Related to the routing issue from https://github.com/Azure/pcs-remote-monitoring-webui/issues/438
@timlaverty Is this the same issue as https://github.com/Azure/pcs-remote-monitoring-webui/issues/438? The alarms list in the dashboard?
Yes, looks like dups. Closing as a duplicate.
2025-04-01T04:10:12.487936
2017-10-30T18:39:35
269709929
{ "authors": [ "spryor" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13571", "repo": "Azure/pcs-remote-monitoring-webui", "url": "https://github.com/Azure/pcs-remote-monitoring-webui/issues/758" }
gharchive/issue
Missing rule name in the device details flyout Type of issue [x] Bug [ ] New feature [ ] Enhancement Description In the device detail flyout, the alarm name is missing in the alarms chart https://github.com/Azure/pcs-remote-monitoring-webui/pull/759 Completed
2025-04-01T04:10:12.522139
2021-11-19T15:31:50
1058655195
{ "authors": [ "manekinekko", "mellson", "tonakai" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13572", "repo": "Azure/static-web-apps-cli", "url": "https://github.com/Azure/static-web-apps-cli/issues/346" }
gharchive/issue
SWA with React + Vite development server returns HTTP 404 for requests with parameters
Before filing this issue, please ensure you're using the latest CLI by running swa --version and comparing to the latest version on npm.
> swa --version
0.8.1
Are you accessing the CLI from the default port :4280?
[ ] No, I am using a different port number (--port) and accessing the CLI from that port
[X] Yes, I am accessing the CLI from port :4280
Make sure you are accessing the URL printed in the console when running!
Describe the bug
I started the application with swa start http://localhost:3000 --run "npm run dev". I have a React TypeScript Vite application that fails to load some requests, especially the ones with an HTTP query parameter, for example:
Request URL: http://localhost:4280/node_modules/.vite/react.js?v=4fc4987f
Request URL: http://localhost:4280/node_modules/.vite/react-dom.js?v=4fc4987f
Request URL: http://localhost:4280/node_modules/.vite/react_jsx-dev-runtime.js?v=4fc4987f
They all returned HTTP 404 when I visited localhost:4280. When I simply go to the development server, it seems to be working without any problem, albeit their content is a bit different, so that parameter is useful somehow to the Vite development server. When I remove the ?v=xxx part from those URLs, it passes correctly.
To Reproduce
npm init vite@latest swa-vite-react-ts-test --template react-ts
cd swa-vite-react-ts-test
npm install
npm install -g @azure/static-web-apps-cli
swa start http://localhost:3000 --run "npm run dev"
Go to localhost:4280 and take a look at the network/console tabs.
Expected behavior
I'd expect it to simply run and load the app.
Desktop (please complete the following information):
OS: Windows 11
Browser: Edge/Chrome
Version: both 95
Thank you for opening this issue. It looks like this is related to #339
I got this working by downgrading to 0.8.0 as suggested here. Look forward to a fix in the coming versions.
Yay, the new 0.8.2 fixed this for me.
Thanks 🙏🏻
2025-04-01T04:10:12.528212
2024-06-14T19:25:57
2353956762
{ "authors": [ "JFolberth", "kevball2" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13573", "repo": "Azure/terraform-azurerm-avm-res-resources-resourcegroup", "url": "https://github.com/Azure/terraform-azurerm-avm-res-resources-resourcegroup/pull/32" }
gharchive/pull-request
Fixing Idempotent tests
Description
Type of Change
[ ] Non-module change (e.g. CI/CD, documentation, etc.)
[x] Azure Verified Module updates:
[x] Bugfix containing backwards compatible bug fixes, and I have NOT bumped the MAJOR or MINOR version in locals.version.tf.json:
[ ] Someone has opened a bug report issue, and I have included "Closes #{bug_report_issue_number}" in the PR description.
[x] The bug was found by the module author, and no one has opened an issue to report it yet.
[ ] Feature update backwards compatible feature updates, and I have bumped the MINOR version in locals.version.tf.json.
[ ] Breaking changes and I have bumped the MAJOR version in locals.version.tf.json.
[ ] Update to documentation
Checklist
[x] I'm sure there are no other open Pull Requests for the same update/change
[x] My corresponding pipelines / checks run clean and green without any errors or warnings
[x] I did run all pre-commit checks
@JFolberth @matebarabas could you please start the workflows at your convenience!
@kevball2 looks like the latest linting checks failed on the README update.
@JFolberth, I re-ran the pre-commit step and the readme files have been updated.
@JFolberth would you be able to restart the e2e tests? I don't think they actually ran, or maybe I don't have rights to view the run log.
2025-04-01T04:10:12.532315
2024-10-09T05:31:54
2574802265
{ "authors": [ "chianw", "mbilalamjad" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13574", "repo": "Azure/terraform-azurerm-avm-res-sql-managedinstance", "url": "https://github.com/Azure/terraform-azurerm-avm-res-sql-managedinstance/issues/17" }
gharchive/issue
[AVM Module Issue]: Transparent Data Encryption for SQL MI AVM module
Check for previous/existing GitHub issues
[X] I have checked for previous/existing GitHub issues
Issue Type? Feature Request
(Optional) Module Version: No response
(Optional) Correlation Id: No response
Description
A customer is requesting transparent data encryption on SQL MI. This is available via the azurerm resource "azurerm_mssql_managed_instance_transparent_data_encryption", supporting both service-managed and customer-managed keys - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_managed_instance_transparent_data_encryption
Does it make sense to include this feature as part of the SQL MI AVM module, or should it be handled outside of the module?
Hey @chianw please see below — you should be able to enable TDE with service- and customer-managed keys. Let me know if this helps: https://github.com/Azure/terraform-azurerm-avm-res-sql-managedinstance/blob/071b33822b07061b322c4bb701848278b26146b8/main.tf#L85
Hi @mbilalamjad yes, this looks like what the customer needs. Will you be creating a new release for this feature?
2025-04-01T04:10:12.544772
2024-07-02T23:22:32
2387323725
{ "authors": [ "allenjzhang", "markcowl" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13575", "repo": "Azure/typespec-azure", "url": "https://github.com/Azure/typespec-azure/issues/1118" }
gharchive/issue
Refactor the TypeSpec migration doc
1. Moving migration page to first level
2. Moving troubleshooting section to restapi-spec wiki
3. Add a cross-link page to azsdkdoc on PR checklist
Opportunity fixes:
4. Is SDK local generation validation needed? If so, document it or cross-link it.
5. Running each CI step locally — cross-link it.
est: 5
2025-04-01T04:10:12.621870
2020-08-22T00:04:47
683883385
{ "authors": [ "NerevarineRule" ], "license": "MIT", "license_source": "github-api", "license_type": "permissive", "provenance": "gharchive-dolma-0004.json.gz:13576", "repo": "AzureAD/microsoft-authentication-library-common-for-objc", "url": "https://github.com/AzureAD/microsoft-authentication-library-common-for-objc/pull/821" }
gharchive/pull-request
Peter/customer settings Proposed changes The purpose of this PR is to enable XCODE 11.4 recommended settings by default as per the customer request (https://github.com/AzureAD/microsoft-authentication-library-for-objc/issues/961) Specifically, this PR is aimed at enabling the following three settings by default. CLANG_ANALYZER_LOCALIZABILITY_NONLOCALIZED = YES; CLANG_WARN_DEPRECATED_OBJC_IMPLEMENTATIONS = YES; CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF = YES; Enabling above settings also require code changes, most of which stem from enabling "CLANG_WARN_OBJC_IMPLICIT_RETAIN_SELF". Primarily, above setting requires adding self. prefix to all properties being used within block statements. This could alter XCODE behavior and result in invoking the wrong instance variable. If a private property declared within an implementation file shares identical name with another private instance method, and if that private property is being used within a dispatch block, and the private instance method with identical name also uses another dispatch block, this results in nested dispatch calls, resulting in EXC_BAD_INSTRUCTION. In order to address this issue, some private properties within MSIDLastRequestTelemetry.m have been renamed. Type of change [ ] Feature work [x] Bug fix [ ] Documentation [ ] Engineering change [ ] Test [ ] Logging/Telemetry Risk [x] High – Errors could cause MAJOR regression of many scenarios. (Example: new large features or high level infrastructure changes) [ ] Medium – Errors could cause regression of 1 or more scenarios. (Example: somewhat complex bug fixes, small new features) [ ] Small – No issues are expected. (Example: Very small bug fixes, string changes, or configuration settings changes) Additional information @oldalton, we agreed during this morning's scrum the following conventions: self. 
for properties; self->_ for ivars; but for outlier cases like MSIDLastRequestTelemetry, it's OK and acceptable to use self->_ in front of a property to avoid the wrong getter being triggered.