{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Error \"The value for one of the HTTP headers is not in the correct format\" when using the storage emulator","SOLUTION":"This scenario typically occurs if you install and use the latest version of the Storage Client \nLibrary without updating the storage emulator. You should either install the latest version of the storage emulator or use cloud storage instead of the emulator for development and testing."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"I am experiencing unexpected delays in message delivery on a\n queue","SOLUTION":"1. Verify that the application is successfully adding the messages to the queue. Check that the application is not retrying the AddMessage method several times before succeeding.\n\n2. Verify there is no clock skew between the worker role that adds the message to the queue and the worker role that reads the message from the queue. A clock skew makes it appear as if there is a delay in processing.\n\n3. Check if the worker role that reads the messages from the queue is failing. If a queue client calls the GetMessage method but fails to respond with an acknowledgment, the message will remain invisible on the queue until the invisibilityTimeout period expires. At this point, the message becomes available for processing again.\n\n4. Check if the queue length is growing over time. This can occur if you don't have sufficient workers available to process all of the messages that other workers are placing in the queue. Also, check the metrics to see if delete requests are failing and the dequeue count on messages, which might indicate repeated failed attempts to delete the message.\n\n5. Examine the Storage logs for any queue operations that have higher than expected E2ELatency and ServerLatency values over a longer period of time than usual."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Root not redirecting to the index document","SOLUTION":"When you enable static website hosting on Azure Storage, you need to specify the name of the index document that will be served when a user requests the root URL of your website. For example, if you set the index document name to \"index.html\", then your website will display the content of that file when someone visits https:\/\/yourwebsite.zxx.web.core.windows.net\/.\n\nHowever, sometimes you may find that the root URL does not redirect to the index document, and instead shows a blank page or an error message. This could happen for several reasons:\n\nEnsure the name and extension as set in the file name on the portal are the exact same of the file in the $web container, including case sensitivity. File names along with extensions are case sensitive. Even though this is served over HTTP, index.html != Index.html for Static Websites.\nEnsure that the index document exists in the $web container and has a valid content type. You can check this by using Azure Portal, Azure CLI, or Azure Storage Explorer.\nEnsure that there are no other files or folders in the $web container that have the same name as the index document. For example, if you have a folder named \"index.html\" in the $web container, it will conflict with the index document and prevent it from being served."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Unable to acquire token, tenant is filtered out","SOLUTION":"Sometimes you may see an error message that says a token can't be acquired because a tenant\n is filtered out. This means you're trying to access a resource that's in a tenant you filtered out. To include the tenant, go to the Account Panel. Make sure the checkbox for the tenant specified in the error is selected. For more information on filtering tenants in Storage Explorer, see Managing accounts."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Slow performance when unzipping files in SMB file shares","SOLUTION":"Depending on the exact compression method and unzip operation used, decompression operations may perform more slowly on an Azure file share than on your local disk. This is often because unzipping tools perform a number of metadata operations in the process of performing \nthe decompression of a compressed archive. For the best performance, we recommend copying the compressed archive from the Azure file share to your local disk, unzipping there, and then using a copy tool such as Robocopy (or AzCopy) to copy back to the Azure file share. Using a copy tool like Robocopy can compensate for the decreased performance of metadata operations in Azure Files relative to your local disk by using multiple threads to copy data in parallel."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"How to change the Lease state of Azure Blob to Available","SOLUTION":"A lease can only be cleanly released by using the lease id that was returned during the original lease operation.\nYou can change the lease state to available manually by leasing and releasing the blob using Azure CLI, or any other SDK."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"I am trying to upload a binary file (a blob for an excel file, actually) to \nmy storage account but the client fails to authenticate under the error message: 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.)","SOLUTION":"This message you'll get if your SAS Token expired. If this is the case just create a new \nversion of the secret using a SAS token with a longer duration. "}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Unable to provision a network drive as Server Endpoint in Azure File Sync. Shows Server endpoint creation fails, with this error: \"MgmtServerJobFailed\" (Error code: -2147024894 or 0x80070002)","SOLUTION":"This error occurs if the server endpoint path specified is not valid. Verify the \nserver endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"copy file from local machine to Azure Blob not successful. Error INFO: Any \nempty folders will not be processed, because source and\/or destination doesn't have full folder support","SOLUTION":"As indicated in the error INFO: Any empty folders will not be processed by azcopy, Just create a file inside the source directory and try the azcopy command again."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Unable to trigger azure function though service bus queue","SOLUTION":"That's an indication that your Function is not activated. And if it's not activated while a message is found on the queue, the potential issue would be Function configuration.\nYou need to specify connection string and queue name. If there is no connectivity exception, that tells me the connection string is working, Just validate is the right namespace connection. Then, check if the queue the Function is configured with is the right queue. "}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"What are the possible ways available to Scale Out\/In VMs based on number of outstanding requests of Azure storage queue?","SOLUTION":"You can use Metric alerts (assuming that number of requests is the same as Queue Message Count or some other metric) to create such alerts that are attached to action group that has link to automation service like Azure Automation runbook or Azure Function. The logic of those services will be code that scales out\/in.\n"}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"I am not able to create a folder under the blob container","SOLUTION":"From the azure portal you can go inside your container \u2014> click on Upload \u2014> in \nthe Advanced section go to Upload to Folder and provide a folder name \u2014> browse the file to upload \u2014> click on Upload button You should see a folder getting created."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Lifecycle policy moving from cool to hot not working","SOLUTION":"The policy you have defined is moving the blob from \"Hot\" to \"Cool\" after 2 days of modification. If you want to move the blob from \"Cool\" to \"Hot\" after it gets modified, you need to change the action in the first rule to \"tierToHot\" instead of \"tierToCool\".\n\nAlso, you have defined the second rule to enable auto-tiering to \"Hot\" from \"Cool\" based on last access time. However, this rule will only take effect if the blob is currently in \"Cool\" and then accessed. It will not move the blob from \"Cool\" to \"Hot\" immediately after it gets modified.\n\nYou can try adding a new rule that moves the blob from \"Cool\" to \"Hot\" based on last modified time."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Missing error details for some failures in Insights for storage account","SOLUTION":"You need to create a diagnostic setting to collect resource logs for blobs. Once \nthe diagnostic setting is created you can investigate the logs. If you are using Log Analytics this can be done directly from Logs (preview) under monitoring."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Unable to create Storage account; error loading the Creation page of \nStorage account","SOLUTION":"1. Disable if there is adblock and clear all your cookies restart the browser and relogin into azure portal\n2. If you are using chrome or firefox try opening azure portal from edge browser and create resource\n3. Open InPrivate session from your browser and login into the portal"}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"how to set access permissions for azure blob storage container at folder \n(prefix) level","SOLUTION":"1. If you use ADLS (HNS) I believe you can set an ACL on a folder . For existing storage account blob container, you would need to copy into an HNS enabled storage account (current situation)\n2. You could produce a SAS for a blob container or for individual blobs(SAS token can be used to restrict access to either an entire blob container or an individual blob. This is because a folder in blob storage is virtual and not a real folder.)."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Error trying to delete container in storage account","SOLUTION":"If you are getting the \"Failed to delete 1 out of 1 container(s) The request uri is \ninvalid \" Please first try to hard refresh the Screen\/Browser page. There may be some interface issue."}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"Is there a way to enable Soft delete on Storage Account through custom \npolicy","SOLUTION":"Yes, you can turn on soft deletion for storage accounts through a policy. You can do this through the portal\/powershell\/azure cli\/template options.\nYou can \"[s]pecify a retention period between 1 and 365 days.\" PowerShell (7 days)"}
{"TECHNOLOGY":"Azure Web Storage","QUESTION":"How to stream blobs to Azure Blob Storage with Node.js","SOLUTION":"Navigate to your storage account in the Azure Portal and copy the account name \nand key (under Settings > Access keys) into the .env.example file. Save the file and then rename it from .env.example to .env."}
{"TECHNOLOGY":"Azure Web Storage - Web App","QUESTION":"How do I automate App Service web apps by using PowerShell?","SOLUTION":"You can use PowerShell cmdlets to manage and maintain App Service web apps. In our blog post Automate web apps hosted in Azure App Service by using PowerShell, we describe how to use Azure Resource Manager-based PowerShell cmdlets to automate common tasks. The blog post also has sample code for various web apps management tasks."}
{"TECHNOLOGY":"Azure Web Storage - Web App","QUESTION":"How do I view my web app's event logs?","SOLUTION":"To view your web app's event logs:\n\n1. Sign in to your Kudu website (https:\/\/*yourwebsitename*.scm.azurewebsites.net).\n2. In the menu, select Debug Console > CMD.\n3. Select the LogFiles folder.\n4. To view event logs, select the pencil icon next to eventlog.xml.\n5. To download the logs, run the PowerShell cmdlet Save-AzureWebSiteLog -Name webappname."}
{"TECHNOLOGY":"Azure Web Storage - Web App","QUESTION":"How do I capture a user-mode memory dump of my web app?","SOLUTION":"To capture a user-mode memory dump of your web app:\n\n1. Sign in to your Kudu website (https:\/\/*yourwebsitename*.scm.azurewebsites.net).\n2. Select the Process Explorer menu.\n3. Right-click the w3wp.exe process or your WebJob process.\n4. Select Download Memory Dump > Full Dump."}
{"TECHNOLOGY":"Azure Web Storage - Web App","QUESTION":"I cannot create or delete a web app due to a permission error. What the\n permissions do I need to create or delete a web app?","SOLUTION":"You would need minimum Contributor access on the Resource Group to deploy App Services. If you have Contributor access only on App Service Plan and web app, it won't allow you to create the app service in the Resource Group."}
{"TECHNOLOGY":"Azure Web Storage - Web App","QUESTION":"How do I restore a deleted web app or a deleted App Service Plan?","SOLUTION":"If the web app was deleted within the last 30 days, you can restore it using Restore-AzDeletedWebApp."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Which SQL cloud database deployment options are \navailable?","SOLUTION":"Azure SQL Database is available as a single database with \nits own set of resources managed via a logical server,and\n as a pooled database in an elastic pool, with a shared set of resources managed through a logical server. In general, elastic pools are designed for a typical software-as-a-service (SaaS) application pattern, with one database per custtomer or tenant. With pools, you manage the collective performance, and the databases scale up or down automatically."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error message: Conversion failed when converting from a \ncharacter string to uniqueidentifier","SOLUTION":"In the copy activity sink, under PolyBase settings, set the use type \ndefault option to false."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Cannot open database \"master\" requested by the login. The login \nfailed","SOLUTION":"1. On the login screen of SSMS, select Options, and then select Connection Properties.\n2. In the Connect to database field, enter the user's default database name as the default login database, and then select Connect."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error 40552: The session has been terminated because of \nexcessive transaction log space usage","SOLUTION":"The issue can occur in any DML operation such as insert, update, or \ndelete. Review the transaction to avoid unnecessary writes. Try to reduce the number of rows that are operated on immediately by implementing batching or splitting into multiple smaller transactions."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error 5: Cannot connect to < servername >","SOLUTION":"To resolve this issue, make sure that port 1433 is open for outbound \nconnections on all firewalls between the client and the internet."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error 40551: The session has been terminated because of \nexcessive tempdb usage","SOLUTION":"1. Change the queries to reduce temporary table space usage.\n2. Drop temporary objects after they're no longer needed.\n3. Truncate tables or remove unused tables."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Elastic pool not found for server: '%ls', elastic pool name: '%ls'. \nSpecified elastic pool does not exist in the specified server.","SOLUTION":"Provide a valid elastic pool name."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Getting error as Elastic pool does not support service tier '%ls'. Specified service tier is not supported for elastic pool provisioning.","SOLUTION":"Provide the correct edition or leave service tier blank to use the default \nservice tier."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error Code:40860 \nElastic pool '%ls' and service objective '%ls' combination is invalid.","SOLUTION":"Specify correct combination of elastic pool and service tier."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error Code:40877\nI cannot able to delete elastic pool","SOLUTION":"Remove databases from the elastic pool in order to delete it."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error Code:40857\nElastic pool not found for server: '%ls', elastic pool name: '%ls'.","SOLUTION":"Provide a valid elastic pool name."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error code: 2056 - SqlInfoValidationFailed","SOLUTION":"Make sure to change the target Azure SQL Database collation to the same\n as the source SQL Server database. Azure SQL Database uses SQL_Latin1_General_CP1_CI_AS collation by default, in case your source SQL Server database uses a different collation you might need to re-create or select a different target database whose collation matches."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Not able to decrease the storage limit of the elastic pool","SOLUTION":"Consider reducing the storage usage of individual databases in the \nelastic pool or remove databases from the pool in order to reduce its DTUs or storage limit."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"c# error when connect to mysql \"Object cannot be cast from \nDBNull to other types\" (mariadb 10.3)","SOLUTION":" when a column value is null, the object DBNull is returned rather than a \ntyped value. You must first test that the column value is not null via the api before accessing as the desired type."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error code: AzureTableDuplicateColumnsFromSource","SOLUTION":"Double-check and fix the source columns, as necessary."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error code: MongoDbUnsupportedUuidType","SOLUTION":"In the MongoDB connection string, add the uuidRepresentation=standard option. "}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error message: Request rate is large in Azure CosmosDB","SOLUTION":"Try either of the following two solutions:\n1. Increase the container RUs number to a greater value in Azure Cosmos DB. This solution will improve the copy activity performance, but it will incur more cost in Azure Cosmos DB.\n2. Decrease writeBatchSize to a lesser value, such as 1000, and decrease parallelCopies to a lesser value, such as 1. This solution will reduce copy run performance, but it won't incur more cost in Azure Cosmos DB."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error code: SqlOpenConnectionTimeout","SOLUTION":" Retry the operation to update the linked service connection string with \na larger connection timeout value."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error code: SqlAutoCreateTableTypeMapFailed","SOLUTION":"Update the column type in mappings, or manually create the sink table \nin the target server."}
{"TECHNOLOGY":"Azure SQL","QUESTION":"Error code: SqlParallelFailedToDetectPartitionColumn","SOLUTION":"Check the table to make sure that a primary key or a unique index is \ncreated."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Client can't reach an Azure Kubernetes Service (AKS) cluster's API ","SOLUTION":"Ensure that your client's IP address is within the ranges authorized by the cluster's API server:\n\n1. Find your local IP address. For information on how to find it on Windows and Linux, see How to find my IP.\n\n2. Update the range that's authorized by the API server by using the az aks update command in Azure CLI. Authorize your client's IP address."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"when I try to upgrade an Azure Kubernetes Service (AKS) cluster getting \nerror as \"PodDrainFailure\"","SOLUTION":"1. Adjust the PDB to enable pod draining. Generally, The allowed disruption is controlled by the Min Available \/ Max unavailable or Running pods \/ Replicas parameter. You can modify the Min Available \/ Max unavailable parameter at the PDB level or increase the number of Running pods \/ Replicas to push the Allowed Disruption value to 1 or greater.\n2.Try again to upgrade the AKS cluster to the same version that you tried to upgrade to previously. This process will trigger a reconciliation."}
{"TECHNOLOGY":"Azure AKS","QUESTION":" AKS cluster upgrade fails, and getting \"PublicIPCountLimitReached\" as \nerror message","SOLUTION":"To raise the limit or quota for your subscription, go to the Azure portal, file a \nService and subscription limits (quotas) support ticket, and set the quota type to Networking.\n\nAfter the quota change takes effect, try to upgrade the cluster to the same version that you previously tried to upgrade to. This process will trigger a reconciliation."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"\"SubnetIsFull\" error code during an AKS cluster upgrade","SOLUTION":"Reduce the cluster nodes to reserve IP addresses for the upgrade.\n\nIf scaling down isn't an option, and your virtual network CIDR has enough IP addresses, try to add a node pool that has a unique subnet:\n\n1. Add a new user node pool in the virtual network on a larger subnet.\n2. Switch the original node pool to a system node pool type.\n3. Scale up the user node pool.\n4. Scale down the original node pool."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Failed to upgrade or scale Azure Kubernetes Service cluster due to missing \nLog Analytics workspace","SOLUTION":"If it has been more than 14 days since the workspace was deleted, disable monitoring on the AKS cluster and then run the upgrade or scale operation again.\n\nTo disable monitoring on the AKS cluster, run the following command:\n\naz aks disable-addons -a monitoring -g <clusterRG> -n <clusterName>\nIf the same error occurs while disabling the monitoring add-on, recreate the missing Log Analytics workspace and then run the upgrade or scale operation again."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Upgrades to Kubernetes 1.16 fail when node labels have a kubernetes.io \nprefix","SOLUTION":"To mitigate this issue:\n\nUpgrade your cluster control plane to 1.16 or later.\nAdd a new node pool on 1.16 or higher without the unsupported kubernetes.io labels.\nDelete the older node pool."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"CannotDeleteLoadBalancerWithPrivateLinkService or \nPrivateLinkServiceWithPrivateEndpointConnectionsCannotBeDeleted error code","SOLUTION":"Make sure that the private link service isn't associated with any private endpoint\n connections. Delete all private endpoint connections before you delete the private link service."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"PublicIPAddressCannotBeDeleted, InUseSubnetCannotBeDeleted, or \nInUseNetworkSecurityGroupCannotBeDeleted error code","SOLUTION":"1. Remove all public IP addresses that are associated with Azure Load Balancer and the resource that's used by the subnet. For more information, see View, modify settings for, or delete a public IP address.\n\n2. In the load balancer, remove the rules for Load Balance rules, Health probes, and Backend pools.\n\n3. For the NSG and subnet, remove all associated rules."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"when I try to delete a Microsoft Azure Kubernetes Service (AKS) cluster \ngetting error as InUseRouteTableCannotBeDeleted error code","SOLUTION":"Remove the associated subnet in the route table."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"When I tried to delete an AKS cluster while the virtual machine scale set was still using the associated public IP address or network security group (NSG) getting LoadBalancerInUseByVirtualMachineScaleSet or \nNetworkSecurityGroupInUseByVirtualMachineScaleSet error code","SOLUTION":"Remove all public IP addresses that are associated with the subnet, and remove \nthe NSG that's used by the subnet."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"when I try to delete a Microsoft Azure Kubernetes Service (AKS) getting \nclusterRequestDisallowedByPolicy error(for cluster deletions)","SOLUTION":"Verify that you have permission to make any changes to policy services. If you \ndon't have permission, find someone who has access so that they can make the necessary changes. Also, check the policy name that's causing the problem, and then temporarily deny that rule so that you (or someone who has permission) can do the delete operation."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Getting error asTooManyRequestsReceived or \nSubscriptionRequestsThrottled when I try to delete a Microsoft Azure Kubernetes Service (AKS) cluster","SOLUTION":"The HTTP response includes a Retry-After value. This specifies the number of \nseconds that your application should wait (or sleep) before it sends the next request. If you send a request before the retry value has elapsed, your request isn't processed, and a new retry value is returned"}
{"TECHNOLOGY":"Azure AKS","QUESTION":"I get an \"insufficientSubnetSize\" error when I deploy an AKS cluster that \nuses advanced networking","SOLUTION":"Because you can't update an existing subnet's CIDR range, you must have permission to create a new subnet to resolve this issue. Follow these steps:\n\n1. Rebuild a new subnet that has a larger CIDR range that's sufficient for operation goals.\n\n2.Create a new subnet that has a new non-overlapping range.\n\n3.Create a new node pool on the new subnet.\n\n4. Drain pods from the old node pool that resides in the old subnet that will be replaced.\n\n5.Delete the old subnet and old node pool."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Cluster autoscaler fails to scale with \"failed to fix node group sizes\" error","SOLUTION":"To get out of this state, disable and re-enable the cluster autoscaler."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Node Not Ready failures that are followed by recoveries error","SOLUTION":"To prevent this issue from occurring in the future, take one or more of the following actions:\n\n1. Make sure that your service tier is fully paid for.\n2. Reduce the number of watch and get requests to the API server.\n3. Replace the node pool with a healthy node pool."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Can't view resources in Kubernetes resource viewer in Azure portal","SOLUTION":"Make sure that when you run the az aks create or az aks update command in \nAzure CLI, the --api-server-authorized-ip-ranges parameter includes access for the local client computer to the IP addresses or IP address ranges from which the portal is being browsed."}
{"TECHNOLOGY":"Azure AKS","QUESTION":" Getting an error when I try to upgrade or scale a Microsoft Azure Kubernetes Service (AKS) cluster","SOLUTION":"To resolve these scenarios, follow these steps:\n\n1. Scale your cluster back to a stable goal state within the quota.\n\n2. Request an increase in your resource quota.\n\n3. Try to scale up again beyond the initial quota limits.\n\n4. Retry the original operation. This second operation should bring your cluster to a successful state."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Insufficient subnet size error while deploying an AKS cluster with advanced networking","SOLUTION":"Create new subnets. Because you can't update an existing subnet's CIDR range, you'll need to be granted the permission to create a new subnet.\n\nRebuild a new subnet with a larger CIDR range that's sufficient for operation goals by following these steps:\n\n1. Create a new subnet with a larger, non-overlapping range.\n\n2. Create a new node pool on the new subnet.\n\n3. Drain the pods from the old node pool that resides in the old subnet.\n\n4. Delete the old subnet and old node pool."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Missing or invalid service principal when creating an AKS cluster","SOLUTION":"Make sure that there's a valid, findable service principal. To do this, use one of the following methods:\n\nDuring cluster creation, use an existing service principal that has already propagated across regions to pass into AKS.\n\nIf you use automation scripts, add time delays between service principal creation and AKS cluster creation.\n\nIf you use the Azure portal, return to the cluster settings after you try to create the cluster, and then retry the validation page after a few minutes."}
{"TECHNOLOGY":"Azure AKS","QUESTION":"when I am creating an AKS cluster getting errors after restricting egress \ntraffic in AKS","SOLUTION":"Verify that your configuration doesn't conflict with any of the required or optionally recommended settings for the following items:\n\n1. Outbound ports\n2. Network rules\n3. Fully qualified domain names (FQDNs)\n4. Application rules"}
{"TECHNOLOGY":"Azure AKS","QUESTION":"Error: TCP time-outs when kubectl or other third-party tools connect to the\n API server","SOLUTION":"Make sure the nodes that host this pod aren't overly utilized or under stress. \nConsider moving the nodes to their own system node pool."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"How can I identify how and when key vaults are accessed?","SOLUTION":"After you create one or more key vaults, you'll likely want to monitor how and \nwhen your key vaults are accessed, and by whom. You can do monitoring by enabling logging for Azure Key Vault"}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"How can I monitor vault availability, service latency periods or other \nperformance metrics for key vault?","SOLUTION":"As you start to scale your service, the number of requests sent to your key vault \nwill rise. Such demand has a potential to increase the latency of your requests and in extreme cases, cause your requests to be throttled which will degrade the performance of your service. You can monitor key vault performance metrics and get alerted for specific thresholds"}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"I'm not able to modify access policy, how can it be enabled?","SOLUTION":"The user needs to have sufficient Azure AD permissions to modify access policy. \nIn this case, the user would need to have higher contributor role."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"How can I give the AD group access to the key vault?","SOLUTION":"Give the AD group permissions to your key vault using the Azure CLI az keyvault set-policy command, or the Azure PowerShell Set-AzKeyVaultAccessPolicy cmdlet.\n\nThe application also needs at least one Identity and Access Management (IAM) role assigned to the key vault. Otherwise it will not be able to log in and will fail with insufficient rights to access the subscription. Azure AD Groups with Managed Identities may require up to eight hours to refresh tokens and become effective."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"Unable to assign a role using a service principal with Azure CLI","SOLUTION":"There are two ways to potentially resolve this error. The first way is to assign the Directory Readers role to the service principal so that it can read data in the directory.\n\nThe second way to resolve this error is to create the role assignment by using the --assignee-object-id parameter instead of --assignee. By using --assignee-object-id, Azure CLI will skip the Azure AD lookup. You'll need to get the object ID of the user, group, or application that you want to assign the role to."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"ClientCertificateCredential authentication issueClient assertion contains \nan invalid signature.","SOLUTION":"Ensure the specified certificate has been uploaded to the AAD application registration."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"ManagedIdentityCredential authentication unavailable, no managed \nidentity endpoint found","SOLUTION":"Ensure the managed identity has been properly configured on the App Service. \nVerify the App Service environment is properly configured and the managed identity endpoint is available. "}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"Deleted or rejected private end point still shows Aprroved in ADF","SOLUTION":"You should delete the managed private end point in ADF once existing private\n endpoints are rejected\/deleted from source\/sink datasets."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"Connection error in public endpoint","SOLUTION":"1. Having private endpoint enabled on the source and also the sink side when using the Managed VNet IR.\n2. If you still want to use the public endpoint, you can switch to public IR only instead of using the Managed VNet IR for the source and the sink. Even if you switch back to public IR, the service may still use the Managed VNet IR if the Managed VNet IR is still there."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"Not able to use self-hosted IR to bridge two on-premises datastores","SOLUTION":"Install drivers for both the source and destination datastores on the destination IR, and make sure that it can access the source datastore.\n\nIf the traffic can't pass through the network between two datastores (for example, they're configured in two virtual networks), you might not finish copying in one activity even with the IR installed. If you can't finish copying in a single activity, you can create two copy activities with two IRs, each in a VENT:\n\n1.Copy one IR from datastore 1 to Azure Blob Storage\n2. Copy another IR from Azure Blob Storage to datastore 2.\nThis solution could simulate the requirement to use the IR to create a bridge that connects two disconnected datastores."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"Unable to register the self-hosted IR ","SOLUTION":"Use localhost IP address 127.0.0.1 to host the file and resolve the issue."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"I can sign in to Azure portal, but I see the error, No subscriptions found","SOLUTION":"To fix this issue:\n\n1. Verify that the correct Azure directory is selected by selecting your account at the top-right corner.\n2. If the correct Azure directory is selected, but you still receive the error message, have your account added as an Owner."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"How do I check my current consumption level?","SOLUTION":"Azure customers can view their current usage levels in Cost Management"}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"Unable to remove a credit card from a saved billing payment method","SOLUTION":"By design, you can't remove a credit card from the active subscription.\n\nIf an existing card has to be deleted, one of the following actions is required:\n\n1. A new card must be added to the subscription so that the old payment instrument can be successfully deleted.\n2. You can cancel the subscription to delete the subscription permanently and then remove the card."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"VisualStudioCredential authentication issue: Failed To Read Credentials","SOLUTION":"1. In Visual Studio select the Tools > Options menu to launch the Options dialog.\n2. Navigate to the Azure Service Authentication options to sign in with your Azure Active Directory account.\n3. If you already had logged in to your account, try logging out and logging in again as that will repopulate the cache and potentially mitigate the error you're getting."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"AzureCliCredential authentication issue:Azure CLI not installed","SOLUTION":"1. Ensure the Azure CLI is properly installed. \n2. Validate the installation location has been added to the PATH environment variable."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"RequestFailedException raised from the client with a status code of 401 or \n403","SOLUTION":"1. Enable logging to determine which credential in the chain returned the authenticating token.\n2. In the case a credential other than the expected is returning a token, bypass this by either signing out of the corresponding development tool, or excluding the credential with the ExcludeXXXCredential property in the DefaultAzureCredentialOptions\n3. Ensure that the correct role is assigned to the account being used. For example, a service specific role rather than the subscription Owner role."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"UsernamePasswordCredential authentication Error Code: AADSTS50126\n","SOLUTION":"Ensure the username and password provided when constructing the credential are valid."}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"CredentialUnavailableException: The requested identity hasn't been \nassigned to this resource. ","SOLUTION":"If using a user assigned identity, ensure the specified clientId is correct.\nIf using a system assigned identity, make sure it has been enabled properly. "}
{"TECHNOLOGY":"Azure Security IAM","QUESTION":"CredentialUnavailableException: ManagedIdentityCredential \nauthentication unavailable.","SOLUTION":"Ensure the managed identity has been properly configured on the App Service. \n\nVerify the App Service environment is properly configured and the managed identity endpoint is available"}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Error code: 4110\nMessage: AzureMLExecutePipeline activity missing LinkedService definition in JSON.\n","SOLUTION":"Check that the input AzureMLExecutePipeline activity JSON \ndefinition has correctly linked service details."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Error code: 4111\nMessage: AzureMLExecutePipeline activity has wrong LinkedService type in JSON. Expected LinkedService type: '%expectedLinkedServiceType;', current LinkedService type: Expected LinkedService type: '%currentLinkedServiceType;'.","SOLUTION":"Check that the input AzureMLExecutePipeline activity JSON definition has correctly linked service details."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Error code: 4112\nMessage: AzureMLService linked service has invalid value for property '%propertyName;'.","SOLUTION":"Check if the linked service has the property\u00a0%propertyName;\u00a0defined with correct data."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Error code: 4121\nMessage: Request sent to Azure Machine Learning for operation '%operation;' failed with http status code '%statusCode;'. Error message from Azure Machine Learning: '%externalMessage;'.","SOLUTION":"It might be caused due to the Credential used to access Azure Machine Learning has expired.So I recommend you to verify that the credential is valid and retry."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Error code: 4122\nMessage: Request sent to Azure Machine Learning for operation '%operation;' failed with http status code '%statusCode;'. Error message from Azure Machine Learning: '%externalMessage;'.","SOLUTION":"Verify that the credential in Linked Service is valid, and has permission to access Azure Machine Learning."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Request sent to Azure Machine Learning for operation '%operation;' failed with http status code '%statusCode;'. Error message from Azure Machine Learning: '%externalMessage;'.\n","SOLUTION":"Check that the value of activity properties matches the expected payload of the published Azure ML pipeline specified in Linked Service"}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Azure ML pipeline run failed with status: '%amlPipelineRunStatus;'. Azure ML pipeline run Id: '%amlPipelineRunId;'. Please check in Azure Machine Learning for more error logs.\n","SOLUTION":"Check Azure Machine Learning for more error logs, then fix the ML pipeline."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Unable to pass data to PipelineData directory","SOLUTION":"Ensure you have created a directory in the script that corresponds to where \nyour pipeline expects the step output data. In most cases, an input argument will define the output directory, and then you create the directory explicitly. Use os.makedirs(args.output_dir, exist_ok=True) to create the output directory."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Pipeline is rerunning unnecessarily","SOLUTION":"To ensure that steps only rerun when their underlying data or scripts change, \ndecouple your source-code directories for each step. If you use the same source directory for multiple steps, you may experience unnecessary reruns. Use the source_directory parameter on a pipeline step object to point to your isolated directory for that step, and ensure you aren't using the same source_directory path for multiple steps."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Pipeline not reusing steps","SOLUTION":"Step reuse is enabled by default, but ensure you haven't disabled it in a \npipeline step. If reuse is disabled, the allow_reuse parameter in the step will be set to False."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"I am getting the following error: ModuleNotFoundError: No module \nnamed 'azureml.train' Whenever I try to import the HyperDriveConfig module: from azureml.train.hyperdrive import HyperDriveConfig.","SOLUTION":"The azureml-train package has been deprecated already and might not \nreceive future updates and removed from the distribution altogether. Please use azureml-train-core instead."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"ModuleNotFoundError: No module named 'azureml' even after \ninstallation","SOLUTION":"To resolve the issue, Please try installing on a notebook by adding % at the\n beginning of pip install command."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"I am using Azure ML for real-time machine learning. I have installed \nthe Kafka server, but I am having a connection issue when trying to create a topic. I received the following warning: WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost\/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient).","SOLUTION":"1. Verify that the Kafka broker is running: You can check if the Kafka broker is running by using the following command in a new terminal window: .\/kafka_2.13-3.3.2\/bin\/kafka-server-start.sh .\/kafka_2.13-3.3.2\/config\/server.properties\n2. Verify that the address and port are correct: Make sure that the address and port specified in the bootstrap-server parameter are correct and that there are no firewall or network configuration issues preventing you from connecting to the broker.\n3. Check the Kafka logs for errors: Check the Kafka logs to see if there are any error messages that could help identify the issue. You can find the Kafka logs in the logs directory of your Kafka installation.\n4.Try using a different topic name: It's possible that the topic name you're using is already in use or is invalid. Try using a different topic name to see if that resolves the issue"}
{"TECHNOLOGY":"Azure - AML","QUESTION":"In Azure ML studio deploy option is not there","SOLUTION":"In Azure Machine Learning Studio, the ability to deploy a model is only available in the paid tiers of the service. If you are using a trial account, you may not have access to the deploy functionality.\n\nTo deploy a model in Azure Machine Learning Studio, you will need to upgrade to a paid subscription. The deploy functionality is available in the Standard and Enterprise tiers of the service.\n\nOnce you have upgraded your subscription, you can follow these steps to deploy your trained model:\n\nOpen the Azure Machine Learning Studio and navigate to your workspace.\n\nNavigate to the \"Models\" tab and select the trained model you want to deploy.\n\nClick on the \"Deploy\" button and select the deployment target, such as Azure Kubernetes Service (AKS) or Azure Container Instances (ACI).\n\nConfigure the deployment settings, such as the number of nodes and the CPU and memory settings.\n\nClick on the \"Deploy\" button to start the deployment process.\n\nOnce the deployment is complete, you can test the deployed model by sending requests to the endpoint."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"can I use prebuilt component in custom pipeline mode?","SOLUTION":"Classic prebuilt components provides prebuilt components majorly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.\n\nCustom components allow you to provide your own code as a component. It supports sharing across workspaces and seamless authoring across Studio, CLI, and SDK interfaces."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Azure Machine Learning - running code in curated environement \ngives ModuleNotFoundError: No module named 'azure.ai'","SOLUTION":"You can try to upgrade pip and then install the azure package using these commands:\npip install --upgrade pip\npip install azure-ai-ml"}
{"TECHNOLOGY":"Azure - AML","QUESTION":"How to solve an error in model profiling where it is not recognizing \nthe profile attribute provided by model library?","SOLUTION":"Here are a few steps to resolve this error:\n\n1. Check the library documentation: Make sure that the library you are using to profile the model has a profile attribute and that you are using it correctly.\n2. Verify that you have imported the correct library: Check if you have imported the correct library and that the Model class you are using is the one from the library you intended to use.\n3. Rename your custom class: If you have a custom Model class with the same name as the one from the library, consider renaming your custom class to avoid any name collisions."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"How to access the data used during the azure automl pipeline \ntraining?","SOLUTION":"You can access the data that was used during the training of an Azure AutoML\n model by using the TrainingData property of the Model object in the Azure Machine Learning SDK."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"Can i run multiple jobs\/experiments on a single node using \nCompute Cluster ?","SOLUTION":"Use Azure Batch as the compute target in AzureML. With Azure Batch, you can\ncreate a pool of compute nodes and run multiple jobs\/experiments concurrently on those nodes. Azure Batch automatically manages the allocation of resources to each job\/experiment, so you don't need to worry about dividing your tasks into mini batches."}
{"TECHNOLOGY":"Azure - AML","QUESTION":"How To Connect To Managed Instance from Machine Learning Studio","SOLUTION":"To connect to an Azure SQL Database from Azure Machine Learning studio, you need to follow these steps:\n\n1. Create an Azure SQL Database and make sure that it is accessible from your Azure Machine Learning workspace.\n2. In Azure Machine Learning studio, go to the Data tab and click on the +New button.\n3. Select the SQL Database option and provide the necessary details, such as the server name, database name, and authentication method.\n4. Click on the Connect button to establish a connection to the Azure SQL Database.\n5. Once the connection is established, you can use the SQL Database as a data source for your machine learning models in Azure Machine Learning studio."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"I tried to create a bucket but received the following error:\n\n409 Conflict. Sorry, that name is not available. Please try a different one.","SOLUTION":"The bucket name you tried to use (e.g. gs:\/\/cats or gs:\/\/dogs) is \nalready taken. Cloud Storage has a global namespace so you may not name a bucket with the same name as an existing bucket. Choose a name that is not being used."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"How can I serve my content over HTTPS without using a load balancer","SOLUTION":" You can serve static content through HTTPS using direct URIs such as https:\/\/storage.googleapis.com\/my-bucket\/my-object. For other options to serve your content through a custom domain over SSL, you can:\n\n1. Use a third-party Content Delivery Network with Cloud Storage.\n2. Serve your static website content from Firebase Hosting instead of Cloud Storage."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"I get an Access denied error message for a web page served by my \nwebsite","SOLUTION":"Check that the object is shared publicly.\nIf you previously uploaded and shared an object, but then upload a new version of it, then you must reshare the object publicly. This is because the public permission is replaced with the new upload."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":" I get an error when I attempt to make my data public","SOLUTION":"Make sure that you have the setIamPolicy permission for your object or bucket. This permission is granted, for example, in the Storage Admin role. If you have the setIamPolicy permission and you still get an error, your bucket might be subject to public access prevention, which does not allow access to allUsers or allAuthenticatedUsers. Public access prevention might be set on the bucket directly, or it might be enforced through an organization policy that is set at a higher level.\n"}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":" I am prompted to download my page's content, instead of being able to \nview it in my browser.","SOLUTION":"If you specify a MainPageSuffix as an object that does not have a web\n content type, then instead of serving the page, site visitors are prompted to download the content. To resolve this issue, update the content-type metadata entry to a suitable value, such as text\/html."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"I'm seeing increased latency when uploading or downloading","SOLUTION":"Use the gsutil perfdiag command to run performance diagnostics from the affected environment. Consider the following common causes of upload and download latency:\n\nCPU or memory constraints: The affected environment's operating system should have tooling to measure local resource consumption such as CPU usage and memory usage.\n\nDisk IO constraints: As part of the gsutil perfdiag command, use the rthru_file and wthru_file tests to gauge the performance impact caused by local disk IO.\n\nGeographical distance: Performance can be impacted by the physical separation of your Cloud Storage bucket and affected environment, particularly in cross-continental cases. Testing with a bucket located in the same region as your affected environment can identify the extent to which geographic separation is contributing to your latency.\n\nIf applicable, the affected environment's DNS resolver should use the EDNS(0) protocol so that requests from the environment are routed through an appropriate Google Front End."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"I'm seeing increased latency when accessing Cloud Storage with gcloud \nstorage, gsutil, or one of the client libraries.","SOLUTION":"The CLIs and the client libraries automatically retry requests when it's\n useful to do so, and this behavior can effectively increase latency as seen from the end user. Use the Cloud Monitoring metric storage.googleapis.com\/api\/request_count to see if Cloud Storage is consistenty serving a retryable response code, such as 429 or 5xx."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"Do I need to enable billing if I was granted access to someone else's \nbucket?","SOLUTION":"No, in this case another individual has already set up a Google Cloud project and either granted you access to the entire project or to one of their buckets and the objects it contains. Once you authenticate, typically with your Google account, you can read or write data according to the access that you were granted.\n"}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"While performing a resumable upload, I received error and the \nmessage Failed to parse Content-Range header.","SOLUTION":"he value you used in your Content-Range header is invalid. For example, Content-Range: *\/* is invalid and instead should be specified as Content-Range: bytes *\/*. If you receive this error, your current resumable upload is no longer active, and you must start a new resumable upload.\n"}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"Requests to a public bucket directly, or via Cloud CDN, are failing with a \nHTTP 401: Unauthorized and an Authentication Required response.","SOLUTION":"Check that your client, or any intermediate proxy, is not adding an\nAuthorization header to requests to Cloud Storage. Any request with an Authorization header, even if empty, is validated as if it were an authentication attempt."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"How to get data that is older than 6 weeks from GCP metrics explorer \nAPI","SOLUTION":"By Default monitoring API stores data only up to 6 weeks only. If you \nneed data for more than 6 weeks or long term data then as per data retention policy you can extend up to 24 months. There is no additional cost for this extended retention policy."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"How can I maximize the availability of my data?","SOLUTION":"Consider storing your data in a multi-region or dual-region bucket location if high availability is a top requirement. All data is stored geo-redundantly in these locations, which means your data is stored in at least two geographically separated regions. In the unlikely event of a region-wide outage, such as one caused by a natural disaster, buckets in geo-redundant locations remain available, with no need to change storage paths. Also,\n because object listing in a bucket is always strongly consistent, regardless of bucket location, there is a zero recovery time objective (RTO) in most circumstances for dual- and multi-regions. Note that to achieve uninterrupted service, other products, such as Compute Engine instances, must be set up to be geo-redundant as well."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"How can I get a summary of space usage for a Cloud Storage bucket?","SOLUTION":"You can use Cloud Monitoring for daily monitoring of your bucket's byte\n count, or you can use the gsutil du command to get the total bytes in your bucket at a given moment. For more information, see Getting a bucket's size."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"I created a bucket, but don't remember which project I created it in. How can I find it?","SOLUTION":"For most common Cloud Storage operations, you only need to specify the relevant bucket's name, not the project associated with the bucket. In general, you only need to specify a project identifier when creating a bucket or listing buckets in a project. For more information, see When to specify a project.\n\nTo find which project contains a specific bucket:\n\nIf you are searching over a moderate number of projects and buckets, use the Google Cloud console, select each project, and view the buckets it contains.\nOtherwise, go to the storage.bucket.get page in the API Explorer and enter the bucket's name in the bucket field. When you click Authorize and Execute, the associated project number appears as part of the response. To get the project name, use the project number in the following terminal command:\n\ngcloud projects list | grep PROJECT_NUMBER"}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"How do I prevent race conditions for my Cloud Storage resources?","SOLUTION":"The easiest way to avoid race conditions is to use a naming scheme that\n avoids more than one mutation of the same object name. Often such a design is not feasible, in which case you can use preconditions in your request. Preconditions allow the request to proceed only if the actual state of the resource matches the criteria specified in the preconditions."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"How do I Reset Google Cloud?","SOLUTION":"f you need to reset your Google Cloud for any reason, you can reset Google Cloud by following the steps below.\n\n1. First of all you need to go to Google Cloud Console (https:\/\/console.cloud.google.com\/) and then you need to sign in with your Google Account.\n\n2. And then from the console dashboard, you need to select the project you want to reset.\n\n3. And then you need to click on the gear icon in the top-right corner to access the project settings.\n\n4. And now you have to scroll down to the \"Shut Down\" section and then you have to click on the \"Shut Down\" button.\n\n5. And now you have to confirm that you want to close the project by typing the Project ID in the text field provided.\n\n6. Finally you have to click on the \"Shut Down\" button again to confirm the action."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"Unable to view or edit a shared Google Drive access.","SOLUTION":"If that is the case then ask the owner to give you the access and then the issue should be resolved."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"Unable to access the latest version of Google Cloud.","SOLUTION":"If that is the case then all you need to do is update your Google Cloud \nto latest version so that the same is resolved."}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"Google Cloud is not being able to perform print operations","SOLUTION":"In such cases simply check for updates in your printer and update\n immediately to fix the same"}
{"TECHNOLOGY":"GCP Cloud Storage","QUESTION":"I should have permission to access a certain bucket or object, but when I attempt to do so, I get a 403 - Forbidden error with a message that is similar to: example@email.com does not have storage.objects.get access to the Google Cloud Storage object.","SOLUTION":"You are missing a IAM permission for the bucket or object that is required to complete the request. If you expect to be able to make the request but cannot, perform the following checks:\n\n1. Is the grantee referenced in the error message the one you expected? If the error message refers to an unexpected email address or to \"Anonymous caller\", then your request is not using the credentials you intended. This could be because the tool you are using to make the request was set up with the credentials from another alias or entity, or it could be because the request is being made on your behalf by a service account.\n\n2. Is the permission referenced in the error message one thought you needed? If the permission is unexpected, it's likely because the tool you're using requires additional access in order to complete your request. For example, in order to bulk delete objects in a bucket, gcloud must first construct a list of objects in the bucket to delete. This portion of the bulk delete action requires the storage.objects.list permission, which might be surprising, given that the goal is object deletion, which normally requires only the storage.objects.delete permission. If this is the cause of your error message, make sure you're granted IAM roles that have the additional necessary permissions.\n\n3. Are you granted the IAM role on the intended resource or parent resource? For example, if you're granted the Storage Object Viewer role for a project and you're trying to download an object, make sure the object is in a bucket that's in the project; you might inadvertently have the Storage Object Viewer permission for a different project."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"Lost connection to MySQL server during query when dumping table","SOLUTION":"The source may have become unavailable, or the dump contained packets too large.\nMake sure the external primary is available to connect, or use mysqldump with the max_allowed_packet option."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"The initial data migration was successful, but no data is being replicated.","SOLUTION":"One possible root cause could be your source database has defined replication flags which result in some or all database changes not being replicated over.\nMake sure the replication flags such as binlog-do-db, binlog-ignore-db, replicate-do-db or replicate-ignore-db are not set in a conflicting way.\n\nRun the command show master status on the primary instance to see the current settings."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"The initial data migration was successful but data replication stops \nworking after a while.","SOLUTION":"Things to try:\n1. Check the replication metrics for your replica instance in the Cloud Monitoring section of the Google Cloud console.\n2. The errors from the MySQL IO thread or SQL thread can be found in Cloud Logging in the mysql.err log files.\n3. The error can also be found when connecting to the replica instance. Run the command SHOW SLAVE STATUS, and check for the following fields in the output:\n Slave_IO_Running\n Slave_SQL_Running\n Last_IO_Error\n Last_SQL_Error"}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"I am getting an error as mysqld check failed: data disk is full.","SOLUTION":"The data disk of the replica instance is full.\nIncrease the disk size of the replica instance. You can either manually increase the disk size or enable auto storage increase."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"Error message: The slave is connecting ... master has purged binary logs \ncontaining GTIDs that the slave requires.","SOLUTION":"The primary Cloud SQL instance has automatic backups and binary logs and point-in-time recovery is enabled, so it should have enough logs for the replica to be able to catch up. However, in this case although the binary logs exist, the replica doesn't know which row to start reading from.\nCreate a new dump file using the correct flag settings, and configure the external replica using that file\n\n1. Connect to your mysql client through a Compute Engine instance.\n2. Run mysqldump and use the --master-data=1 and --flush-privileges flags.\nImportant: Do not include the --set-gtid-purged=OFF flag.\n\nLearn more.\n\n3. Ensure that the dump file just created contains the SET @@GLOBAL.GTID_PURGED='...' line.\n4. Upload the dump file to a Cloud Storage bucket and configure the replica using the dump file."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"After enabling a flag the instance loops between panicking and crashing.","SOLUTION":"Contact customer support to request flag removal followed by a hard \ndrain. This forces the instance t restart on a different host with a fresh configuration without the undesired flag or setting."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"Getting the error message Bad syntax for dict arg when trying to set a \nflag.","SOLUTION":"Complex parameter values, such as comma-separated lists, require special treatment when used with gcloud commands."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"HTTP Error 409: Operation failed because another operation was already \nin progress.","SOLUTION":"There is already a pending operation for your instance. Only one \noperation is allowed at a time. Try your request after the current operation is complete."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"The import operation is taking too long.","SOLUTION":"Too many active connections can interfere with import operations.\nClose unused operations. Check the CPU and memory usage of your Cloud SQL instance to make sure there are plenty of resources available. The best way to ensure maximum resources for the import is to restart the instance before beginning the operation.\n\nA restart:\n\nCloses all connections.\nEnds any tasks that may be consuming resources."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"An import operation failing with an error that a table doesn't exist.","SOLUTION":"Tables can have foreign key dependencies on other tables, and depending on the order of operations, one or more of those tables might not yet exist during the import operation.\nThings to try:\n\nAdd the following line at the start of the dump file:\nSET FOREIGN_KEY_CHECKS=0;\n \nAdditionally, add this line at the end of the dump file:\nSET FOREIGN_KEY_CHECKS=1;\n \nThese settings deactivate data integrity checks while the import operation is in progress, and reactivate them after the data is loaded. This doesn't affect the integrity of the data on the database, because the data was already validated during the creation of the dump file."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"getting Operations information is not found in logs as an error","SOLUTION":"You want to find more information about an operation.\nFor example, a user was deleted but you can't find out who did it. The logs show the operation started but don't provide any more information. You must enable audit logging for detailed and personal identifying information (PII) like this to be logged."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"Slow performance after restarting MySQL.","SOLUTION":"Cloud SQL allows caching of data in the InnoDB buffer pool. However, \nafter a restart, this cache is always empty, and all reads require a round trip to the backend to get data. As a result, queries can be slower than expected until the cache is filled."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"I am unable to manually delete binary logs.","SOLUTION":"Binary logs cannot be manually deleted. Binary logs are automatically \ndeleted with their associated automatic backup, which generally happens after about seven days."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"How do I find information about temporary files.","SOLUTION":"A file named ibtmp1 is used for storing temporary data. This file is reset upon database restart. To find information about temporary file usage, connect to the database and execute the following query:\nSELECT * FROM INFORMATION_SCHEMA.FILES WHERE TABLESPACE_NAME='innodb_temporary'\\G"}
{"TECHNOLOGY":"GCP Cloud SQL ","QUESTION":"How do I find out about table sizes.","SOLUTION":"This information is available in the database.\nConnect to the database and execute the following query:\n\nSELECT TABLE_SCHEMA, TABLE_NAME, sum(DATA_LENGTH+INDEX_LENGTH)\/pow(1024,2) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA NOT IN ('PERFORMANCE_SCHEMA','INFORMATION_SCHEMA','SYS','MYSQL') GROUP BY TABLE_SCHEMA, TABLE_NAME;"}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"My data is being automatically deleted.","SOLUTION":"Most likely a script is running somewhere in your environment.\nLook in the logs around the time of the deletion and see if there's a rogue script running from a dashboard or another automated process."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"When I am trying to delete a user getting error message as user cannot be \ndeleted.","SOLUTION":"The user probably has objects in the database that depend on it. You need to drop those objects or reassign them to another user.\nFind out which objects are dependent on the user, then drop or reassign those objects to a different user."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"Unable to create read replica - unknown error.","SOLUTION":"There's probably a more specific error in the log files. Inspect the logs in Cloud Logging to find the actual error.\nIf the error is: set Service Networking service account as servicenetworking.serviceAgent role on consumer project, then disable and re-enable the Service Networking API. This action creates the service account necessary to continue with the process."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"While changing parallel replication flags resulting an error.","SOLUTION":"An incorrect value is set for one of or more of these flags.\nOn the primary instance that's displaying the error message, set the parallel replication flags:\n\n1. Modify the binlog_transaction_dependency_tracking and transaction_write_set_extractionflags:\nbinlog_transaction_dependency_tracking=COMMIT_ORDER\ntransaction_write_set_extraction=OFF\n\n2. Add the slave_pending_jobs_size_max flag:\nslave_pending_jobs_size_max=33554432\n\n3. Modify the transaction_write_set_extraction flag:\ntransaction_write_set_extraction=XXHASH64\n\n4. Modify the binlog_transaction_dependency_tracking flag:\nbinlog_transaction_dependency_tracking=WRITESET"}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"getting error when deleting an instance.","SOLUTION":"If deletion protection is enabled for an instance, confirm your plans to \ndelete the instance. Then disable deletion protection before deleting the instance."}
{"TECHNOLOGY":"GCP Cloud SQL","QUESTION":"I am not able to see the current operation's status.","SOLUTION":"The Google Cloud console reports only success or failure when the operation is done. It isn't designed to show warnings or other updates\nRun the gcloud sql operations list command to list all operations for the given Cloud SQL instance."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Deployment failure: Insufficient permissions to (re)configure a trigger\n(permission denied for bucket <BUCKET_ID>). Please, give owner permissions to the editor role of the bucket and try again.","SOLUTION":"Reset this service account to the default role.\nor\nGrant the runtime service account the cloudfunctions.serviceAgent role.\nor\nGrant the runtime service account the storage.buckets.{get, update} and the resourcemanager.projects.get permissions."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Function deployment fails while executing function's global scope","SOLUTION":"For a more detailed error message, look into your function's build logs, \nas well as your function's runtime logs. If it is unclear why your function failed to execute its global scope, consider temporarily moving the code into the request invocation, using lazy initialization of the global variables. This allows you to add extra log statements around your client libraries, which could be timing out on their instantiation (especially if they are calling other services), or crashing\/throwing exceptions altogether. Additionally, you can try increasing the function timeout."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"When a function is attempted to be deployed, its global scope is used.","SOLUTION":"1. Disable Lifecycle Management on the buckets required by Container Registry.\n2. Delete all the images of affected functions. You can access build logs to find the image paths. Reference script to bulk delete the images. Note that this does not affect the functions that are currently deployed.\n3. Redeploy the functions."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Serving permission error due to \"allow internal traffic only\" configuration","SOLUTION":"You can:\n1. Ensure that the request is coming from your Google Cloud project or VPC Service Controls service perimeter.\nor\n2. Change the ingress settings to allow all traffic for the function."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Getting error as your client does not have permission to the requested URL","SOLUTION":"Make sure that your requests include an Authorization: \nBearer ID_TOKEN header, and that the token is an ID token, not an access or refresh token. If you are generating this token manually with a service account's private key, you must exchange the self-signed JWT token for a Google-signed Identity token, following this guide."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Attempt to invoke function using curl redirects to Google login page","SOLUTION":"Make sure you specify the name of your function correctly. You can \nalways check using gcloud functions call which returns the correct 404 error for a missing function."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"error message\nIn Cloud Logging logs: \"Infrastructure cannot communicate with function. \nThere was likely a crash or deadlock in the user-provided code.\"","SOLUTION":"Different runtimes can crash under different scenarios. To find the root cause, output detailed debug level logs, check your application logic, and test for edge cases.\n\nThe Cloud Functions Python37 runtime currently has a known limitation \non the rate that it can handle logging. If log statements from a Python37 runtime instance are written at a sufficiently high rate, it can produce this error. Python runtime versions >= 3.8 do not have this limitation. We encourage users to migrate to a higher version of the Python runtime to avoid this issue."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Function stopping in mid-execution, or continues running after my code \nfinishes","SOLUTION":"If your function terminates early, you should make sure all your function's asynchronous tasks have been completed before doing any of the following:\n\n1. returning a value\n2. resolving or rejecting a returned Promise object (Node.js functions only)\n3. throwing uncaught exceptions and\/or errors\nsending an HTTP response\ncalling a callback function\nIf your function fails to terminate once all asynchronous tasks have completed, you should verify that your function is correctly signaling Cloud Functions once it has completed. In particular, make sure that you perform one of the operations listed above as soon as your function has finished its asynchronous tasks."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"getting error as User with Project Viewer or Cloud Function role cannot \ndeploy a function","SOLUTION":"Assign the user an additional role, the Service Account User IAM role \n(roles\/iam.serviceAccountUser), scoped to the Cloud Functions runtime service account."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Deployment service account missing the Service Agent role when \ndeploying functions","SOLUTION":"Reset this service account to the default role."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Deployment service account missing Pub\/Sub permissions when \ndeploying an event-driven function","SOLUTION":"You can:\n\nReset this service account to the default role.\nor\nGrant the pubsub.subscriptions.* and pubsub.topics.* permissions to your service account manually."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Getting default runtime service account does not exist as error message","SOLUTION":"1. Specify a user managed runtime service account when deploying your 1st gen functions.\nor\n2. Recreate the default service account @appspot.gserviceaccount.com for your project."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"User with Project Editor role cannot make a function public","SOLUTION":"1. Assign the deployer either the Project Owner or the Cloud Functions Admin role, both of which contain the cloudfunctions.functions.setIamPolicy permission.\nor\n2.Grant the permission manually by creating a custom role."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Is there a way to keep track of dates on Firestore using cloud functions","SOLUTION":"One approach would be to create a scheduled function that scans your database for documents to update every minute or every five minutes. This is a good approach for popular applications with a consistent usage rate.\n\nTo improve efficiency, you can use a Firestore onCreate trigger to defer a Cloud Task Function to update the document. As each purchase is made, a Cloud Task can be scheduled to execute in 72 hours from the purchase date where it sets promo to true. This has the benefit of not running jobs that don't have any documents to update."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Is it possible to route Google Cloud Functions egress traffic through \nmultiple rotating IPs?","SOLUTION":"1. Create a Serverless VPC Connector\n2. Create a Cloud NAT Gateway and have it include the subnet that you assigned to the Serverless VPC Connector\n3. Configure your Cloud Function to use the Serverless VPC Connector for all its egress\nNow that specific Cloud Function using that specific VPC Connector will route its outbound traffic through that specific Cloud NAT Gateway.\n\nYou can repeat this process as many times as necessary. To make this work with your Cloud Function you will have to deploy them as multiple Cloud Functions rather than a single Cloud Function."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"How do I set entry point in cloud function?","SOLUTION":"In the Entry point field, enter the entry point to your function in your \nsource code. This is the code that will be executed when your function runs. The value of this flag must be a function name or fully-qualified class name that exists in your source code"}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Serverless VPC Access connector is not ready or does not exist","SOLUTION":"List your subnets to check whether your connector uses a \/28 subnet mask.\n\nIf it does not, recreate or create a new connector to use a \/28 subnet. Note the following considerations:\n\n1. If you recreate the connector, you do not need to redeploy other functions. You might experience a network interruption as the connector is recreated.\n\n2. If you create a new alternate connector, redeploy your functions to use the new connector and then delete the original connector. This method avoids network interruption."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Cloud Functions logs are not appearing in Log Explorer","SOLUTION":"Use the client library interface to flush buffered log entries before \nexiting the function or use the library to write log entries synchronously. You can also synchronously write logs directly to stdout or stderr."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Cloud Functions logs are not appearing via Log Router Sink","SOLUTION":"Make sure no exclusion filter is set for \nresource.type=\"cloud_functions\""}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Python GCP Cloud function connecting to Cloud SQL Error: \n\"ModuleNotFoundError: No module named 'google.cloud.sql'\"","SOLUTION":"The error \u201cModuleNotFoundError: No module named 'google.cloud.sql\u201d occurs as the google.cloud.sql module is not installed in the requirement.txt file. You can install it by using the command pip install \u201cgoogle.cloud.sql\u201d\n\nAlso I would like to suggest you to check whether you have assigned the \u201cCloud SQL Client\u201d role to the service account.\n\nAlso I would like to suggest you to check whether you have enabled the \"Cloud SQL Admin API\" within your Google cloud project.\n\nAs you already stated VPC connector and Cloud SQL instance are in the same VPC network, also make sure that they are in the same region.\n\nAlso check whether the installed packages in the requirements.txt are compatible with your python version you are using."}
{"TECHNOLOGY":"GCP Functions","QUESTION":"Unable to give Cloud Functions Admin role to my account on Firebase's \nproject setting","SOLUTION":"The origin of this issue is unknow. You can go to Manage roles, find \nCloud Functions Admin and create a custom role out of it. Then you can add this role instead.\n"}
{"TECHNOLOGY":"Azure Functions","QUESTION":"When adding two timed function to the same function app, only one of \nthem is triggered","SOLUTION":"Could probably be caused by a lot of issues like wrong configuration etc.\n In my case, I had the configuration just right, but found a \"feature\" in Azure Functions. If adding two timed functions with the same class name and the same schedule, Azure executes one of the two functions twice. Changing the class name in one of the functions fixes the issue."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"Azure Function not triggering when deployed, but works correctly in local \ndebugging","SOLUTION":"there are a few things you can check and try to resolve the problem: \nCheck the connection string: Verify that the connection string for the Event Hub trigger in the local.settings.json file and the connection string in the Azure Function App settings are identical (except for the \"Endpoint\" part). Make sure that the connection string in the Azure Function App settings is using the correct Event Hub namespace and Event Hub name. Check the function.json file: Ensure that your function.json file has the correct configuration for the Event Hub trigger binding. Verify the type, name, direction, eventHubName, and connection properties."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How do I access a virtual machine through point-to-site VPN from a \nFunction?","SOLUTION":"You can secure communications between a web app and a virtual \nmachine using Azure Point-To-Site VPN the solution, is to select App Service Plan in Hosting Plan. Running the Function on the App Service Plan (rather than on the Consumption Plan), opens up for Networking settings in the Function app settings view."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How do I set a static IP in Functions?","SOLUTION":"Deploying a function in an App Service Environment is the primary way to have static inbound and outbound IP addresses for your functions.\n\nYou can also use a virtual network NAT gateway to route outbound traffic through a public IP address that you control"}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How do I restrict internet access to my function?","SOLUTION":"You can restrict internet access in a couple of ways:\n\n1. Private endpoints: Restrict inbound traffic to your function app by private link over your virtual network, effectively blocking inbound traffic from the public internet.\nIP restrictions: Restrict inbound traffic to your function app by IP range.\nUnder IP restrictions, you are also able to configure Service Endpoints, which restrict your Function to only accept inbound traffic from a particular virtual network.\n2. Removal of all HTTP triggers. For some applications, it's enough to simply avoid HTTP triggers and use any other event source to trigger your function.\n3. Keep in mind that the Azure portal editor requires direct access to your running function. Any code changes through the Azure portal will require the device you're using to browse the portal to have its IP added to the approved list. But you can still use anything under the platform features tab with network restrictions in place."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How do I restrict my function app to a virtual network?","SOLUTION":"You are able to restrict inbound traffic for a function app to a virtual network using Service Endpoints. This configuration still allows the function app to make outbound calls to the internet.\n\nTo completely restrict a function such that all traffic flows through a virtual network, you can use a private endpoints with outbound virtual network integration or an App Service Environment."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How can I access resources in a virtual network from a function app?","SOLUTION":"You can access resources in a virtual network from a running function by\n using virtual network integration."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How can I trigger a function from a resource in a virtual network?","SOLUTION":"You are able to allow HTTP triggers to be called from a virtual network using Service Endpoints or Private Endpoint connections.\n\nYou can also trigger a function from all other resources in a virtual network by deploying your function app to a Premium plan, App Service plan, or App Service Environment."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How can I deploy my function app in a virtual network?","SOLUTION":"Deploying to an App Service Environment is the only way to create a \nfunction app that's wholly inside a virtual network. "}
{"TECHNOLOGY":"Azure Functions","QUESTION":"In the Azure portal, it says 'Azure Functions runtime is unreachable'","SOLUTION":"Besides the normal network restrictions that could prevent your \nfunction app from accessing the storage account. Here it mentions an issue where the App_Offline.htm was in the file system, thereby instructing the platform your app is unreachable. It's certainly plausible, so check the kudu system (or az rest) to see if that file exists, remove it, and retry the operation."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"Orchestration is stuck in the Pending state","SOLUTION":"Use the following steps to troubleshoot orchestration instances that remain stuck indefinitely in the \"Pending\" state.\n\n1. Check the Durable Task Framework traces for warnings or errors for the impacted orchestration instance ID. A sample query can be found in the Trace Errors\/Warnings section.\n\n2. Check the Azure Storage control queues assigned to the stuck orchestrator to see if its \"start message\" is still there For more information on control queues, see the Azure Storage provider control queue documentation.\n\n3. Change your app's platform configuration version to \u201c64 Bit\u201d. Sometimes orchestrations don't start because the app is running out of memory. Switching to 64-bit process allows the app to allocate more total memory. This only applies to App Service Basic, Standard, Premium, and Elastic Premium plans. Free or Consumption plans do not support 64-bit processes."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"\"ERROR: Exception calling \"Fill\" with \"1\" argument(s): \"Timeout expired. \nThe timeout period elapsed prior to completion of the operation or the server is not responding.\" \"","SOLUTION":"Here are the few suggestions:\n\n1. Have you tried with a simple query from Azure Function and worked (different query that executes within few seconds)? If so, then try setting CommandTimeout as 0.\n2. Make sure there is a network connectivity between Azure Functions and SQL Server and Function App can access SQL server. Here is doc Typical causes and resolutions for the error with common causes\/resolutions. Any VNET integration, Firewall in between services? Review https:\/\/learn.microsoft.com\/en-us\/azure\/azure-functions\/functions-networking-options?tabs=azure-cli networking set up of Azure Functions and Use tcpping tool to test the connectivity (Tools)."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"while creating the function app from the portal the storage section is \nmissing","SOLUTION":"Retry the same operation by logging-in to portal from different browser\n or signing out and signing-in in same browser or by clearing the browser cache."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How do I add or access an app.config file in Azure functions to add a \ndatabase connection string?","SOLUTION":"The best way to do this is to add a Connection String from the Azure portal:\n\n1. From your Function App UI, click Function App Settings\n2. Settings \/ Application Settings\n3. Add connection strings\nThey will then be available using the same logic as if they were in a web.config, e.g.\n\nvar conn = System.Configuration.ConfigurationManager\n .ConnectionStrings[\"MyConn\"].ConnectionString;"}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How to rename an Azure Function?\n","SOLUTION":"The UI does not directly support renaming a Function, but you can work around this using the following manual steps:\n\n1. Stop your Function App. To do this, go under Function app settings \/ Go To App Service Settings, and click on the Stop button.\n2. Go to Kudu Console: Function app settings \/ Go to Kudu (article about that)\n3. In Kudu Console, go to D:\\home\\site\\wwwroot and rename the Function folder to the new name\n4. Now go to D:\\home\\data\\Functions\\secrets and rename [oldname].json to [newname].json\n5. Then go to D:\\home\\data\\Functions\\sampledata and rename [oldname].dat to [newname].dat\n6. Start your function app, in the same place where you stopped it above In the Functions UI, click the refresh button in the top left corner, and your renamed function should appear"}
{"TECHNOLOGY":"Azure Functions","QUESTION":"Azure function apps logs not showing","SOLUTION":"The log window is a bit fragile and doesn't always show the logs. However, logs are also written to the log files.\n\nYou can access these logs from the Kudu console: https:\/\/[your-function-app].scm.azurewebsites.net\/\n\nFrom the menu, select Debug console > CMD\n\nOn the list of files, go into LogFiles > Application > Functions > Function > [Name of your function]\n\nThere you will see a list of log files."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How can I use PostgreSQL with Azure Functions without maxing out \nconnections?","SOLUTION":"This is the classic problem of using shared resources. You have 50 of \nthese resources in this case. The most effective way to support more consumers would be to reduce the time each consumer uses the resource. Reducing the Connection Idle Lifetime substantially is probably the most effective way. Increasing Timeout does help reduce errors (and is a good choice), but it doesn't increase the throughput. It just smooths out the load. Reducing Maximum Pool size is also good."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"I have a queue based function app, however even after publishing \nmessages to queue - function does not get triggered?","SOLUTION":"Azure function expects queue messages to be base64 encoded to trigger it. \n\nSo if message pushed to queue is not base64 encoded then the function trigger ignores it."}
{"TECHNOLOGY":"Azure Functions","QUESTION":"Azure Functions Cannot Authenticate to Storage Account","SOLUTION":" Must add the Storage Account user.impersonation permission to the \nService Principal!"}
{"TECHNOLOGY":"Azure Functions","QUESTION":"How can I assign Graph Sites.ReadWrite.All permissions in Tenant B to my \nTenant A app?","SOLUTION":"There are two ways to achieve this:\nUsing App Registration or Federated Managed Identity\n\nApp Registration\n\nIn order to assign Graph Sites.ReadWrite.All permissions in Tenant B to your Tenant A app, you will need to create an app registration for your Azure Function in Tenant\n\nHere are the steps you can follow:\n\n1)Register your Azure Function in Tenant B: a. Sign in to the Azure portal (https:\/\/portal.azure.com\/) using an account with admin privileges in Tenant B. b. Navigate to \"Azure Active Directory\" > \"App registrations\" > \"New registration\". c. Provide a name for your app registration (e.g., \"AzFunction-TenantB\"), and then click \"Register\".\n2)Grant Graph Sites.ReadWrite.All permissions to the app registration in Tenant B: a. In the app registration page for \"AzFunction-TenantB\", go to \"API permissions\" > \"Add a permission\". b. Select \"Microsoft Graph\" and choose the \"Application permissions\" tab. c. Expand the \"Sites\" group and check the \"Sites.ReadWrite.All\" permission. d. Click \"Add permissions\" to save your changes.\n3)Grant admin consent for the permissions: a. Still in the \"API permissions\" tab, click on the \"Grant admin consent for [Tenant B]\" button. You'll need to be an admin in Tenant B to perform this action.\n4)(Share the client ID and tenant ID with Tenant A: a. In the \"Overview\" tab of the \"AzFunction-TenantB\" app registration, make a note of the \"Application (client) ID\" and \"Directory (tenant) ID\" values.\n5)Configure your Azure Function in Tenant A to use the new app registration in Tenant B: a. Sign in to the Azure portal (https:\/\/portal.azure.com\/) using an account with privileges to manage your Azure Function in Tenant A. b. Go to the Azure Function App, navigate to the \"Configuration\" tab, and update the following values:\nTENANT_B_CLIENT_ID: Set this to the \"Application (client) ID\" from step 4.\nTENANT_B_TENANT_ID: Set this to the \"Directory (tenant) ID\" from step 4.\n6)Update your Azure Function code to use the new app registration when calling Microsoft Graph: a. Use the new TENANT_B_CLIENT_ID and TENANT_B_TENANT_ID values when acquiring a token for Microsoft Graph. This will ensure that your Azure Function uses the app registration from Tenant B when calling the API.\n\nFederated Managed Identity\n\nhttps:\/\/svrooij.io\/2022\/12\/16\/poc-multi-tenant-managed-identity\/#post\nhttps:\/\/blog.identitydigest.com\/azuread-federate-mi\/\n\nNote: You may also need to configure the necessary network and firewall settings to allow access to Tenant B from Tenant A.\n\nYou may also want to consider granting the necessary permissions to users in Tenant A to access the data in Tenant B. This can be done using Azure AD B2B collaboration."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Queries using Azure AD authentication fails after 1 hour","SOLUTION":"Following steps can be followed to work around the problem.\n\n1. It's recommended switching to Service Principal, Managed Identity or Shared Access Signature instead of using user identity for long running queries.\n2. Restarting client (SSMS\/ADS) acquires new token to establish the connection."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Query failures from serverless SQL pool to Azure Cosmos DB analytical \nstore","SOLUTION":"following actions can be taken as quick mitigation:\n\n1. Retry the failed query. It will automatically refresh the expired token.\n2. Disable the private endpoint. Before applying this change, confirm with your security team that it meets your company security policies."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Azure Cosmos DB analytical store view propagates wrong attributes in the \ncolumn","SOLUTION":"following actions can be taken as quick mitigation:\n\n1. Recreate the view by renaming the columns.\n2. Avoid using views if possible."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Failed to delete Synapse workspace & Unable to delete virtual network","SOLUTION":"The problem can be mitigated by retrying the delete operation. "}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"synapse notebook connection has closed unexpectedly","SOLUTION":"try to switch your network environment, such as inside\/outside corpnet, or access Synapse Notebook on another workstation.\n\nIf you can run notebook on the same workstation but in a different network environment, please work with your network administrator to find out whether the WebSocket connection has been blocked.\n\nIf you can run notebook on a different workstation but in the same network environment, please ensure you didn\u2019t install any browser plugin that may block the WebSocket request."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Websocket connection was closed unexpectedly.","SOLUTION":"To resolve this issue, rerun your query.\n1. Try Azure Data Studio or SQL Server Management Studio for the same queries instead of Synapse Studio for further investigation.\n2. If this message occurs often in your environment, get help from your network administrator. You can also check firewall settings, and check the Troubleshooting guide.\n3. If the issue continues, create a support ticket through the Azure portal."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Serverless databases aren't shown in Synapse Studio","SOLUTION":"If you don't see the databases that are created in serverless SQL pool, \ncheck to see if your serverless SQL pool started. If serverless SQL pool is deactivated, the databases won't show. Execute any query, for example, SELECT 1, on serverless SQL pool to activate it and make the databases appear."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Synapse Serverless SQL pool shows as unavailable","SOLUTION":"Incorrect network configuration is often the cause of this behavior. Make \nsure the ports are properly configured. If you use a firewall or private endpoints, check these settings too.\n\nFinally, make sure the appropriate roles are granted and have not been revoked."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Can't read, list, or access files in Azure Data Lake Storage","SOLUTION":"If you use an Azure AD login without explicit credentials, make sure that your Azure AD identity can access the files in storage. To access the files, your Azure AD identity must have the Blob Data Reader permission, or permissions to List and Read access control lists (ACL) in ADLS. For more information, see Query fails because file cannot be opened.\n\nIf you access storage by using credentials, make sure that your managed identity or SPN has the Data Reader or Contributor role or specific ACL permissions. If you used a shared access signature token, make sure that it has rl permission and that it hasn't expired.\n\nIf you use a SQL login and the OPENROWSET function without a data source, make sure that you have a server-level credential that matches the storage URI and has permission to access the storage."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"query fails with the error File cannot be opened because it does not exist or it is used by another process","SOLUTION":"If your query fails with the error File cannot be opened because it does not exist or it is used by another process and you're sure that both files exist and aren't used by another process, serverless SQL pool can't access the file. This problem usually happens because your Azure AD identity doesn't have rights to access the file or because a firewall is blocking access to the file.\n\nBy default, serverless SQL pool tries to access the file by using your Azure AD identity. To resolve this issue, you must have proper rights to access the file. The easiest way is to grant yourself a Storage Blob Data Contributor role on the storage account you're trying to query."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Query fails because it can't be executed due to current resource constraints","SOLUTION":"This message means serverless SQL pool can't execute at this moment. Here are some troubleshooting options:\n\nMake sure data types of reasonable sizes are used.\nIf your query targets Parquet files, consider defining explicit types for string columns because they'll be VARCHAR(8000) by default. Check inferred data types.\nIf your query targets CSV files, consider creating statistics.\nTo optimize your query, see Performance best practices for serverless SQL pool."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Query fails with the error message Bulk load data conversion error (type \nmismatches or invalid character for the specified code page) for row n, column m [columnname] in the data file [filepath].","SOLUTION":"To resolve this problem, inspect the file and the data types you chose. Also\n check if your row delimiter and field terminator settings are correct. The following example shows how inspecting can be done by using VARCHAR as the column type."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Query fails with the error message Column [column-name] of type \n[type-name] is not compatible with external data type [\u2026], it's likely that a PARQUET data type was mapped to an incorrect SQL data type.","SOLUTION":"To resolve this issue, inspect the file and the data types you chose. This \nmapping table helps to choose a correct SQL data type. As a best practice, specify mapping only for columns that would otherwise resolve into the VARCHAR data type. Avoiding VARCHAR when possible leads to better performance in queries."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"The query references an object that is not supported in distributed \nprocessing mode","SOLUTION":"Some objects, like system views, and functions can't be used while you \nquery data stored in Azure Data Lake or Azure Cosmos DB analytical storage. Avoid using the queries that join external data with system views, load external data in a temp table, or use some security or metadata functions to filter external data."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Query returning NULL values instead of partitioning columns or can't find \nthe partition columns","SOLUTION":"troubleshooting steps:\n\nIf you use tables to query a partitioned dataset, be aware that tables don't support partitioning. Replace the table with the partitioned views.\nIf you use the partitioned views with the OPENROWSET that queries partitioned files by using the FILEPATH() function, make sure you correctly specified the wildcard pattern in the location and used the proper index for referencing the wildcard.\nIf you're querying the files directly in the partitioned folder, be aware that the partitioning columns aren't the parts of the file columns. The partitioning values are placed in the folder paths and not the files. For this reason, the files don't contain the partitioning values."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Missing column when using automatic schema inference","SOLUTION":"You can easily query files without knowing or specifying schema, by \nomitting WITH clause. In that case column names and data types will be inferred from the files. Have in mind that if you are reading number of files at once, the schema will be inferred from the first file service gets from the storage. This can mean that some of the columns expected are omitted, all because the file used by the service to define the schema did not contain these columns. To explicitly specify the schema, please use OPENROWSET WITH clause. If you specify schema (by using external table or OPENROWSET WITH clause) default lax path mode will be used. That means that the columns that don\u2019t exist in some files will be returned as NULLs (for rows from those files). To understand how path mode is used, please check the following documentation and sample."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Failed to execute query. Error: CREATE EXTERNAL \nTABLE\/DATA SOURCE\/DATABASE SCOPED CREDENTIAL\/FILE FORMAT is not supported in master database.","SOLUTION":"1. Create a user database:\nCREATE DATABASE <DATABASE_NAME>\n\n2. Execute a CREATE statement in the context of <DATABASE_NAME>, which failed earlier for the master database.\n\nHere's an example of the creation of an external file format:\nUSE <DATABASE_NAME>\nCREATE EXTERNAL FILE FORMAT [SynapseParquetFormat] \nWITH ( FORMAT_TYPE = PARQUET)"}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Getting an error while trying to create a new Azure AD login or user \nin a database","SOLUTION":"check the login you used to connect to your database. The login that's trying to create a new Azure AD user must have permission to access the Azure AD domain and check if the user exists. Be aware that:\n\nSQL logins don't have this permission, so you'll always get this error if you use SQL authentication.\nIf you use an Azure AD login to create new logins, check to see if you have permission to access the Azure AD domain."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Resolving Azure Cosmos DB path has failed with error 'This request is not \nauthorized to perform this operation'.","SOLUTION":"check to see if you used private endpoints in Azure Cosmos DB. To allow\n serverless SQL pool to access an analytical store with private endpoints, you must configure private endpoints for the Azure Cosmos DB analytical store."}
{"TECHNOLOGY":"Azure Synapse","QUESTION":"Delta table created in Spark is not shown in serverless pool","SOLUTION":"If you created a Delta table in Spark, and it is not shown in the serverless SQL pool, check the following:\n\n1. Wait some time (usually 30 seconds) because the Spark tables are synchronized with delay.\n2. If the table didn't appear in the serverless SQL pool after some time, check the schema of the Spark Delta table. Spark tables with complex types or the types that are not supported in serverless are not available. Try to create a Spark Parquet table with the same schema in a lake database and check would that table appears in the serverless SQL pool.\n3. Check the workspace Managed Identity access Delta Lake folder that is referenced by the table. Serverless SQL pool uses workspace Managed Identity to get the table column information from the storage to create the table."}
{"TECHNOLOGY":"GCP Cloud Storage - Web App","QUESTION":"Failed to fetch metadata from the registry, with \nreason: generic::permission_denied","SOLUTION":"To resolve this issue, grant the Storage Admin role to the service account:\n\nTo see which account you used, run the gcloud auth list command.\nTo learn why assigning only the App Engine Deployer (roles\/appengine.deployer) role might not be sufficient in some cases, see App Engine roles."}
{"TECHNOLOGY":"GCP Cloud Storage - Web App","QUESTION":"Error: The App Engine appspot and App Engine flexible environment \nservice accounts must have permissions on the image IMAGE_NAME","SOLUTION":"This error occurs for one of the following reasons:\n\n1. The default App Engine service account does not have the Storage Object Viewer (roles\/storage.objectViewer) role.\n\n To resolve this issue, grant the Storage Object Viewer role to the service account.\n2. Your project has a VPC Service Perimeter which limits access to the Cloud Storage API using access levels.\n\n To resolve this issue, add the service account you use to deploy your app to the corresponding VPC Service Perimeter accessPolicies."}
{"TECHNOLOGY":"GCP Cloud Storage - Web App","QUESTION":"Failed to create cloud build: Permission denied","SOLUTION":"This error occurs if you use the gcloud app deploycommand from an account that does not have the Cloud Build Editor (roles\/cloudbuild.builds.editor) role.\n\nTo resolve this issue, grant the Cloud Build Editor role to the service account that you are using to deploy your app.\n\nTo see which account you used, run the gcloud auth list command."}
{"TECHNOLOGY":"GCP Cloud Storage - Web App","QUESTION":"Timed out waiting for the app infrastructure to become healthy","SOLUTION":"To resolve this issue, rule out the following potential causes:\n\n1. Verify that you have granted the Editor (roles\/editor) role to your default App Engine service account.\n2. Verify that you have granted the following roles to the service account that you use to run your application (usually the default service account, app-id@appspot.gserviceaccount.com):\n\n Storage Object Viewer (roles\/storage.objectViewer)\n Logs Writer (roles\/logging.logWriter)\n3. Grant the roles if the service account does not have them.\n\n4. If you are deploying in Shared VPC setup and passing instance_tag in app.yaml, refer to this section to fix the issue."}
{"TECHNOLOGY":"GCP Cloud Storage - Web App","QUESTION":"Invalid value error when deploying in a Shared VPC setup","SOLUTION":"To resolve the issue, remove the instance_tag field from app.yaml and \nredeploy."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"Container failed to start. Failed to start and then listen on the port \ndefined by the PORT environment variable.","SOLUTION":"To resolve this issue, rule out the following potential causes:\n\nVerify that you can run your container image locally. If your container image cannot run locally, you need to diagnose and fix the issue locally first.\n\nCheck if your container is listening for requests on the expected port as documented in the container runtime contract. Your container must listen for incoming requests on the port that is defined by Cloud Run and provided in the PORT environment variable. See Configuring containers for instructions on how to specify the port.\n\nCheck if your container is listening on all network interfaces, commonly denoted as 0.0.0.0.\n\nVerify that your container image is compiled for 64-bit Linux as required by the container runtime contract.\n\nNote: If you build your container image on a ARM based machine, then it might not work as expected when used with Cloud Run. To solve this issue, build your image using Cloud Build.\nUse Cloud Logging to look for application errors in stdout or stderr logs. You can also look for crashes captured in Error Reporting.\n\nYou might need to update your code or your revision settings to fix errors or crashes. You can also troubleshoot your service locally."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"The server has encountered an internal error. Please try again later. \nResource readiness deadline exceeded.","SOLUTION":"This issue might occur when the Cloud Run service agent does not exist, or when it does not have the Cloud Run Service Agent (roles\/run.serviceAgent) role.\n\nTo verify that the Cloud Run service agent exists in your Google Cloud project and has the necessary role, perform the following steps:\n\nOpen the Google Cloud console:\n\nGo to the Permissions page\n\nIn the upper-right corner of the Permissions page, select the Include Google-provided role grants checkbox.\n\nIn the Principals list, locate the ID of the Cloud Run service agent, which uses the ID\nservice-PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com.\n\nVerify that the service agent has the Cloud Run Service Agent role. If the service agent does not have the role, grant it."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"Can I run Cloud Run applications on a private IP?","SOLUTION":"\"Currently no. Cloud Run applications always have a *.run.app public hostname and they cannot be placed inside a VPC (Virtual Private Cloud) network.\n\nIf any other private service (e.g. GCE VMs, GKE) needs to call your Cloud Run application, they need to use this public hostname.\n\nWith ingress settings on Cloud Run, you can allow your app to be accesible only from the VPC (e.g. VMs or clusters) or VPC+Cloud Load Balancer \u2013but it still does not give you a private IP. You can still combine this with IAM to restrict the outside world but still authenticate and authorize other apps running the VPC network.\""}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"The service has encountered an error during container import. Please try again later. Resource readiness deadline exceeded.","SOLUTION":"To resolve this issue, rule out the following potential causes:\n\n1. Ensure container's file system does not contain non-utf8 characters.\n\n2. Some Windows based Docker images make use of foreign layers. Although Container Registry doesn't throw an error when foreign layers are present, Cloud Run's control plane does not support them. To resolve, you may try setting the --allow-nondistributable-artifacts flag in the Docker daemon."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"The request was not authorized to invoke this service","SOLUTION":"To resolve this issue:\n\n1. If invoked by a service account, the audience claim (aud) of the Google-signed ID token must be set to the following:\n\n i. The Cloud Run URL of the receiving service, using the form https:\/\/service-xyz.run.app.\n The Cloud Run service must require authentication.\n The Cloud Run service can be invoked by the Cloud Run URL or through a load balancer URL.\n ii.The Client ID of an OAuth 2.0 Client ID with type Web application, using the form nnn-xyz.apps.googleusercontent.com.\n The Cloud Run service can be invoked through an HTTPS load balancer secured by IAP.\n This is great for a GCLB backed by multiple Cloud Run services in different regions.\n iii. A configured custom audience using the exact values provided. For example, if custom audience is service.example.com, the audience claim (aud) value must also be service.example.com. If custom audience is https:\/\/service.example.com, the audience claim value must also be https:\/\/service.example.com.\n\n2. The jwt.io tool is good for checking claims on a JWT.\n\n3. If the auth token is of an invalid format a 401 error occurs. If the token is of a valid format and the IAM member used to generate the token is missing the run.routes.invoke permission, a 403 error occurs."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"The request was not authenticated. Either allow unauthenticated \ninvocations or set the proper Authorization header","SOLUTION":"To resolve this issue:\n\n1. If the service is meant to be invocable by anyone, update its IAM settings to make the service public.\n2. If the service is meant to be invocable only by certain identities, make sure that you invoke it with the proper authorization token.\n i. If invoked by a developer or invoked by an end user: Ensure that the developer or user has the run.routes.invoke permission, which you can provide through the Cloud Run Admin (roles\/run.admin) and Cloud Run Invoker (roles\/run.invoker) roles.\n ii. If invoked by a service account: Ensure that the service account is a member of the Cloud Run service and that it has the Cloud Run Invoker (roles\/run.invoker) role.\n iii.Calls missing an auth token or with an auth token that is of valid format, but the IAM member used to generate the token is missing the run.routes.invoke permission, result in this 403 error."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"HTTP 429\nThe request was aborted because there was no available instance.\nThe Cloud Run service might have reached its maximum container instance\nlimit or the service was otherwise not able to scale to incoming requests.\nThis might be caused by a sudden increase in traffic, a long container startup time or a long request processing time.","SOLUTION":"To resolve this issue, check the \"Container instance count\" metric for \nyour service and consider increasing this limit if your usage is nearing the maximum. See \"max instance\" settings, and if you need more instances, request a quota increase."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"This might be caused by a sudden increase in traffic, a drawn-out container setup process, or a drawn-out request processing process.","SOLUTION":"To resolve this issue, address the previously listed issues.\n\nIn addition to fixing these issues, as a workaround you can implement exponential backoff and retries for requests that the client must not drop.\n\nNote that a short and sudden increase in traffic or request processing time might only be visible in Cloud Monitoring if you zoom in to 10 second resolution.\n\nWhen the root cause of the issue is a period of heightened transient errors attributable solely to Cloud Run, you can contact Support"}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"HTTP 500 \/ HTTP 503: Container instances are exceeding memory limits","SOLUTION":"To resolve this issue:\n\n1. Determine if your container instances are exceeding the available memory. Look for related errors in the varlog\/system logs.\n2. If the instances are exceeding the available memory, consider increasing the memory limit.\nNote that in Cloud Run, files written to the local filesystem count towards the available memory. This also includes any log files that are written to locations other than \/var\/log\/* and \/dev\/log."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"HTTP 503: Unable to process some requests due to high concurrency setting","SOLUTION":"To resolve this issue, try one or more of the following:\n\n1. Increase the maximum number of container instances for your service.\n\n2. Lower the service's concurrency. Refer to setting concurrency for more detailed instructions."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"HTTP 504\nThe request has been terminated because it has reached the maximum request timeout.","SOLUTION":"To troubleshoot this issue, try one or more of the following:\n\n1. Instrument logging and tracing to understand where your app is spending time before exceeding your configured request timeout.\n\n2. Outbound connections are reset occasionally, due to infrastructure updates. If your application reuses long-lived connections, then we recommend that you configure your application to re-establish connections to avoid the reuse of a dead connection.\n\n i. Depending on your app's logic or error handling, a 504 error might be a signal that your application is trying to reuse a dead connection and the request blocks until your configured request timeout.\n ii. You can use a liveness probe to help terminate an instance that returns persistent errors.\n3. Out of memory errors that happen inside the app's code, for example, java.lang.OutOfMemoryError, do not necessarily terminate a container instance. If memory usage does not exceed the container memory limit, then the instance will not be terminated. Depending on how the app handles app-level out of memory errors, requests might hang until they exceed your configured request timeout.\n\n i. If you want the container instance to terminate in the above scenario, then configure your app-level memory limit to be greater than your container memory limit.\n ii. You can use a liveness probe to help terminate an instance that returns persistent errors."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"asyncpg.exceptions.ConnectionDoesNotExistError: connection was \nclosed in the middle of operation","SOLUTION":"To resolve this issue:\n\n1. If you are trying to perform background work with CPU throttling, try using the \"CPU is always allocated\" CPU allocation setting.\n\n2. Ensure that you are within the outbound requests timeouts. If your application maintains any connection in an idle state beyond this thresholds, the gateway needs to reap the connection.\n\n3. By default, the TCP socket option keepalive is disabled for Cloud Run. There is no direct way to configure the keepalive option in Cloud Run at the service level, but you can enable the keepalive option for each socket connection by providing the correct socket options when opening a new TCP socket connection, depending on the client library that you are using for this connection in your application.\n\n4. Occasionally outbound connections will be reset due to infrastructure updates. If your application reuses long-lived connections, then we recommend that you configure your application to re-establish connections to avoid the reuse of a dead connection."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"assertion failed: Expected hostname or IPv6 IP enclosed in [] but got \n<IPv6 ADDRESS>","SOLUTION":"To resolve this issue:\n\nTo change the environment variable value and resolve the issue, set ENV SPARK_LOCAL_IP=\"127.0.0.1\" in your Dockerfile. In Cloud Run, if the variable SPARK_LOCAL_IP is not set, it will default to its IPv6 counterpart instead of localhost. Note that setting RUN export SPARK_LOCAL_IP=\"127.0.0.1\" will not be available on runtime and Spark will act as if it was not set."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"mount.nfs: access denied by server while mounting \nIP_ADDRESS:\/FILESHARE","SOLUTION":"If access was denied by the server, check to make sure the file share \nname is correct."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"mount.nfs: Connection timed out","SOLUTION":"If the connection times out, make sure you are providing the correct \nIP address of the filestore instance."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"How can I specify Google credentials in Cloud Run applications?","SOLUTION":"For applications running on Cloud Run, you don't need to deliver JSON keys for IAM Service Accounts, or set GOOGLE_APPLICATION_CREDENTIALS environment variable.\n\nJust specify the service account (--service-account) you want your application to use automatically while deploying the app. See configuring service identity."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"How to do canary or blue\/green deployments on Cloud Run?","SOLUTION":"If you updated your Cloud Run service, you probably realized it creates a new revision for every new configuration of your service.\n\nCloud Run allows you to split traffic between multiple revisions, so you can do gradual rollouts such as canary deployments or blue\/green deployments."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"How to configure secrets for Cloud Run applications?","SOLUTION":"You can use Secret Manager with Cloud Run. Read how to write code and set permissions to access the secrets from your Cloud Run app in the documentation.\n\nAlternatively, if you'd like to store secrets in Cloud Storage (GCS) using Cloud KMS envelope encryption, check out the Berglas tool and library (Berglas also has support for Secret Manager)."}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"How to connect IPs in a VPC network from Cloud Run?","SOLUTION":"Cloud Run now has support for \"Serverless VPC Access\". This feature allows Cloud Run applications to be able to connect private IPs in the VPC (but not the other way).\n\nThis way your Cloud Run applications can connect to private VPC IP addresses running:\n\nGCE VMs\nCloud SQL instances\nCloud Memorystore instances\nKubernetes Pods\/Services (on GKE public or private clusters)\nInternal Load Balancers\n"}
{"TECHNOLOGY":"GCP Cloud Run","QUESTION":"How can I serve responses larger than 32MB with Cloud Run?","SOLUTION":"Cloud Run can stream responses that are larger than 32MB using HTTP chunked encoding. Add the HTTP header Transfer-Encoding: chunked to your \nresponse if you know it will be larger than 32MB."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How can I use Multi Factor Authentication (MFA) with IAM?","SOLUTION":"When individual users use MFA, the methods they authenticate with \nwill be honored. This means that your own identity system needs to support MFA. For Google Workspace accounts, this needs to be enabled by the user themselves. For Google Workspace-managed credentials, MFA can be enabled with Google Workspace tools."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How do I control who can create a service account in my project?","SOLUTION":"Owner and editor roles have permissions to create service accounts in\n a project. If you wish to grant a user the permission to create a service account, grant them the owner or the editor role."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How do I grant permissions to resources in my project to someone who\n is not part of my organization?","SOLUTION":"Using Google groups, you can add a user outside of your organization to a group and bind that group to the role. Note that Google groups don't have login credentials, and you cannot use Google groups to establish identity to make a request to access a resource.\n\nYou can also directly add the user to the allow policy even if they are not a part of your organization. However, check with your administrator if this is compliant with your company's requirements."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How can I manage who can access my instances?","SOLUTION":"To manage who has access to your instances, use Google groups to \ngrant roles to principals. Granting a role creates a role binding in an allow policy; you can grant the role on the project where the instances will be launched, or on individual instances. If a user (identified by their Google Account, for example, my-user@example.com) is not a member of the group that is bound to a role, they will not have access to the resource where the allow policy is applied."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How do I list the roles associated with a gcp service account?","SOLUTION":"To see roles per service account in the console:\n\n1. Copy the email of your service account (from IAM & Admin -> Service Accounts - Details);\n2. Go to: IAM & Admin -> Policy Analyzer -> Custom Query;\n3. Set Parameter 1 to Principal. Paste the email into Principal field;\n4. Click Continue, then click Run Query.\nYou'll get the list of roles of the given service account."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"GCP Cloud Build fails with permissions error even though correct role is\n granted","SOLUTION":"you need to add the cloudfunctions.developer and iam.serviceAccountUser roles to the [PROJECT_NUMBER]@cloudbuild.gserviceaccount.com account, and (I believe) that the aforementioned cloudbuild service account also needs to be added as a member of the service account that has permissions to deploy your Cloud Function (again shown in the linked SO answer)."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How to set Google Cloud application credentials for a Service Account","SOLUTION":"gcloud auth application-default login uses the active|specified user account to create a local JSON file that behaves like a service account.\n\nThe alternative is to use gcloud auth activate-service-account but, as you know, you will need to have the service account's credentials as these will be used instead of the credentials created by application-default login."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"Is there a way to list all permissions from a user in GCP? ","SOLUTION":"In Google Cloud Platform there is no single command that can do this. Permissions via roles are assigned to resources. Organizations, Folders, Projects, Databases, Storage Objects, KMS keys, etc can have IAM permissions assigned to them. You must scan (check IAM permissions for) every resource to determine the total set of permissions that an IAM member account has."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"Can't delete a Google Cloud Project","SOLUTION":"1. see your project retentions: gcloud alpha resource-manager liens list\n2. if you have any retention delete: gcloud alpha resource-manager liens delete \"name\"\n3. delete your project gcloud projects delete \"project\""}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How to read from a Storage bucket from a GCE VM with no External IP?","SOLUTION":"You simply have to:\n\n1. Go to Console -> VPC network\n2. Choose the subnet of your VM instance (for example default -> us-central1)\n3. Edit and select Private Google access -> On. Then save.\nAlso make sure that your VM has access to the Cloud Storage API."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":" I'm getting the error \"cannot use role (type string) as type\n \"cloud.google.com\/go\/iam\".RoleName in argument to policy.HasRole.","SOLUTION":"You can use type conversion as the following:\n\nreturn policy.HasRole(serviceAccount, iam.RoleName(role))\nOr simpler by declaring role as iam.RoleName\n\nfunc checkRole(key, serviceAccount, role iam.RoleName) bool {\n...\n return policy.HasRole(serviceAccount, role)\n}"}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"Can I get a list of all resources for which a user has been added to a \nrole?","SOLUTION":"Roles are not assigned directly to users. This is why there is no single command that you can use.\n\nIAM members (users, service accounts, groups, etc.) are added to resources with roles attached. A user can have permissions to a project and also have permissions at an individual resource (Compute Engine Instance A, Storage Bucket A\/Object B). A user can also have no permissions to a project but have permissions at individual resources in the project.\n\nYou will need to run a command against resources (Org, Folder, Project and items like Compute, Storage, KMS, etc).\n\nTo further complicate this, there are granted roles and also inherited roles."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"Is there a way to prevent deletion of a google spanner database even though developers have been granted broad (i.e. owner) access to the project?","SOLUTION":"A few approaches.\n\n1. If you're worrying about a Spanner Database getting dropped, you can use the --enable-drop-protection flag when creating the DB, to ensure it cannot be accidentally deleted.\n\n2. You can make negative permissions through IAM Deny Policies in Google Cloud, to expressedly prevent someone, like a developer group or Service Account, from taking a specific action."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How to grant access to all service account in organization?","SOLUTION":"You can use Google groups which uses a collection of user and\/or \nservice accounts. Once this is done, add the service accounts to the Google group and then assign the necessary IAM roles to the Google group."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How to restrict BigQuery's dataset access for everyone having (Project \nlevel Viewer) role","SOLUTION":"The solution here is to have Terraform (or something else) manage the resources for you.\n\nYou can develop a module that creates the appropriate things for a user e.g. a dataset, a bucket, some perms, a service account etc.\n\nThat way all you need to do is add another user to your list and re-deploy. The other additional benefit here is that you can use the repo where the TF is stored as a source of truth."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"Hoe do I Custom Role for Inserting to Specific BigQuery Dataset","SOLUTION":"You can drop the bigquery.datasets.get permission from the custom \nIAM role so that they can\u2019t list all the datasets, and then in the dataset's permissions give the READER role instead of WRITER to the user for that specific dataset."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"Service account does not have permission to access Firestore","SOLUTION":"Creating a service account by itself grants no permissions. The Permissions tab in IAM & Admin > Service Accounts shows a list of \"Principals with access to this account\" - this is not the inheritance of permissions, it's simply which accounts, aka principals, can make use of the permissions granted to this service account. The \"Grant Access\" button on this page is about granting other principals access to this service account, not granting access to resources for this service account.\n\nFor Firestore access specifically - go to IAM & Admin > IAM, and you'll be on the permissions tab. Click \"Add\" at the top of the page. Type in your newly created service account under \"New Principals\", and for roles, select \"Cloud Datastore Owner\"."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How to connect to Cloud SQL from Azure Data Studio using an IAM user","SOLUTION":"We can connect using IAM database authentication using the Cloud SQL Auth proxy. The only step after to be done from the GUI DB tool (mine is Azure Data Studio) would be, to connect to the IP (127.0.0.1 in my case)the Cloud SQL Auth proxy listens on(127.0.0.1 is the default) after starting the Cloud SQL Auth proxy using:\n\n.\/cloud_sql_proxy -instances=<GCPproject:Region:DBname>=tcp:127.0.0.1:5432"}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"What is the correct GCP user role that I should assign to my external website developer?","SOLUTION":"you should grant the minimum role level to execute the work. If your developer only need access to the Translation API, you can grant his account with this role: Cloud Translation API Editor.\n\nIf you want him to have full access to the Cloud Translation resources, you can gran him the Cloud Translation API Admin.\n\nIn case you have more than one developer and they all need the same permissions, you can create an IAM group, add the developer's mails to the group and assign the necessary roles to it."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"How to restrict access to triggering HTTP CLoud Function via trigger URL?","SOLUTION":"The problem is your access method. You are using your own user account (who has the Cloud FUnction invoker role) but with your browser. Your request with your browser is without any authentication header.\n\nIf you want to call your cloud function now, you have to add an authorization header, and an identity token as bearer value. That command works\n\ncurl -H \"Authorization: bearer $(gcloud auth print-identity-token)\" <cloud function URL>\nNote that you need an identity token, not an authorization token."}
{"TECHNOLOGY":"GCP Security IAM","QUESTION":"What roles do my Cloud Build service account need to deploy an http \ntriggered unauthenticated Cloud Function?","SOLUTION":"The solution is replace Cloud Functions Developer role with Cloud Functions Admin role.\n\nUse of the --allow-unauthenticated flag modifies IAM permissions. To ensure that unauthorized developers cannot modify function permissions, the user or service that is deploying the function must have the cloudfunctions.functions.setIamPolicy permission. This permission is included in both the Owner and Cloud Functions Admin roles."}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"Getting error as billingNotEnabled","SOLUTION":"Enable billing for the project in the Google Cloud console."}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How to create temporary table in Google BigQuery","SOLUTION":"To create a temporary table, use the TEMP or TEMPORARY keyword when you use the CREATE TABLE statement and use of CREATE TEMPORARY TABLE requires a script , so its better to start with begin statement.\n\nBegin\nCREATE TEMP TABLE <table_name> as select * from <table_name> where <condition>;\nEnd ;"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How to download all data in a Google BigQuery dataset?","SOLUTION":"Detailed step-by-step to download large query output\n\n1. enable billing\n You have to give your credit card number to Google to export the output, and you might have to pay.\n But the free quota (1TB of processed data) should suffice for many hobby projects.\n2. create a project\n3. associate billing to a project\n4. do your query\n5. create a new dataset\n6. click \"Show options\" and enable \"Allow Large Results\" if the output is very large\n7. export the query result to a table in the dataset\n8. create a bucket on Cloud Storage.\n9. export the table to the created bucked on Cloud Storage.\n make sure to click GZIP compression\n use a name like <bucket>\/prefix.gz.\n If the output is very large, the file name must have an asterisk * and the output will be split into multiple files.\n\n10. download the table from cloud storage to your computer.\nDoes not seem possible to download multiple files from the web interface if the large file got split up, but you could install gsutil and run:\ngsutil -m cp -r 'gs:\/\/<bucket>\/prefix_*' .\nSee also: Download files and folders from Google Storage bucket to a local folder\nThere is a gsutil in Ubuntu 16.04 but it is an unrelated package.\nYou must install and setup as documented at: https:\/\/cloud.google.com\/storage\/docs\/gsutil\n11. unzip locally:\nfor f in *.gz; do gunzip \"$f\"; done"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How to generate date series to occupy absent dates in google \nBiqQuery?","SOLUTION":"Generting a list of dates and then joining whatever table you need on top seems the easiest. I used the generate_date_array + unnest and it looks quite clean.\n\nTo generate a list of days (one day per row):\n\n SELECT\n *\n FROM \n UNNEST(GENERATE_DATE_ARRAY('2018-10-01', '2020-09-30', INTERVAL 1 DAY)) AS example"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How many Google Analytics views can I export to BigQuery?","SOLUTION":"You can only export one view per Google Analytics property.\n\nWhen selecting which view to export, it is important to consider which views have been customized with various changes to the View Settings (traffic \nfilters, content groupings, channel settings, etc.), or which views have the most historical data.\n\nThe view that you choose to push to BigQuery will depend on use cases for your data. We recommend selecting the view with the most data, universal customization, and essential filters that have cleaned your data (such as bot filters)."}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How to choose the latest partition in BigQuery table?","SOLUTION":"You can use with statement, select last few partitions and filter out the result. This is better approach because:\n\nYou are not limited by fixed partition date (like today - 1 day). It will always take the latest partition from given range.\nIt will only scan last few partitions and not whole table.\nExample with last 3 partitions scan:\n\nWITH last_three_partitions as (select *, _PARTITIONTIME as PARTITIONTIME \n FROM dataset.partitioned_table \n WHERE _PARTITIONTIME > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 3 DAY))\nSELECT col1, PARTITIONTIME from last_three_partitions \nWHERE PARTITIONTIME = (SELECT max(PARTITIONTIME) from last_three_partitions)"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How can I change the project in BigQuery","SOLUTION":"You have two ways to do it:\n\n1. Specify --project_id global flag in bq. Example: bq ls -j --project_id <PROJECT>\n2. Change default project by issuing gcloud config set project <PROJECT>"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How to catch a failed CAST statement in BigQuery SQL?","SOLUTION":"You can use the SAFE_CAST function, which returns NULL if the input \nis not a valid value when interpreted as the desired type. In your case, you would just use SAFE_CAST(UPDT_DT_TM AS DATETIME). It is in the Functions & Operators documentation."}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"JSON formatting Error when loading into Google Big Query","SOLUTION":"Yes, BigQuery only accepts new-line delimited JSON, which means \none complete JSON object per line. Before you merge the object to one line, BigQuery reads \"{\", which is start of an object, and expects to read a key, but the line ended, so you see the error message \"expected key\".\n\nFor multiple JSON objects, just put them one in each line. Don't enclose them inside an array. BigQuery expects each line to start with an object, \"{\". If you put \"[\" as the first character, you will see the second error message which means BigQuery reads an array but not inside an object."}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"I am trying to run the query \"select * from tablename \". But it throws \nerror like \"Error: Response too large to return\".","SOLUTION":"Set allowLargeResults to true in your job configuration. You must also specify a destination table with the allowLargeResults flag.\n\nIf querying via API,\n\n\"configuration\": \n {\n \"query\": \n {\n \"allowLargeResults\": true,\n \"query\": \"select uid from [project:dataset.table]\"\n \"destinationTable\": [project:dataset.table]\n\n }\n }\nIf using the bq command line tool,\n\n$ bq query --allow_large_results --destination_table \"dataset.table\" \"select uid from [project:dataset.table]\"\n\nIf using the browser tool,\n\nClick 'Enable Options'\nSelect 'Allow Large Results'"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How can I refresh datasets\/resources in the new Google BigQuery Web\n UI?","SOLUTION":"f you click the search box in the project\/dataset \"Explorer\" sidebar, \nthen press enter, it will refresh the list."}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"Failed to save view. Bad table reference \"myDataset.myTable\"; table \nreferences in standard SQL views require explicit project IDs","SOLUTION":"Your view has reference to myDataset.myTable - which is ok when you just run it as a query (for example in Web UI).\n\nBut to save it as a view you must fully qualify that reference as below\n\nmyProject.myDataset.myTable \nSo, just add project to that reference"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"Bigquery Error: UPDATE\/MERGE must match at most one source row for \neach target row","SOLUTION":"It occurs because the target table of the BigQuery contains duplicated row(w.r.t you are joining). If a row in the table to be updated joins with more than one row from the FROM clause, then BigQuery returns this error:\n\nSolution\n\n1. Remove the duplicated rows from the target table and perform the UPDATE\/MERGE operation\n2. Define Primary key in BigQuery target table to avoid data redundancy"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"Create a BigQuery table from pandas dataframe, WITHOUT specifying \nschema explicitly","SOLUTION":"Here's a code snippet to load a DataFrame to BQ:\n\nimport pandas as pd\nfrom google.cloud import bigquery\n\n# Example data\ndf = pd.DataFrame({'a': [1,2,4], 'b': ['123', '456', '000']})\n\n# Load client\nclient = bigquery.Client(project='your-project-id')\n\n# Define table name, in format dataset.table_name\ntable = 'your-dataset.your-table'\n\n# Load data to BQ\njob = client.load_table_from_dataframe(df, table)\nIf you want to specify only a subset of the schema and still import all the columns, you can switch the last row with\n\n# Define a job config object, with a subset of the schema\njob_config = bigquery.LoadJobConfig(schema=[bigquery.SchemaField('b', 'STRING')])\n\n# Load data to BQ\njob = client.load_table_from_dataframe(df, table, job_config=job_config)"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"Table name missing dataset while no default dataset is set in the \nrequest","SOLUTION":"Depending on which API you are using, you can specify the defaultDataset parameter when running your BigQuery job. More information for the jobs.query api can be found here https:\/\/cloud.google.com\/bigquery\/docs\/reference\/rest\/v2\/jobs\/query.\n\nFor example, using the NodeJS API for createQueryJob https:\/\/googleapis.dev\/nodejs\/bigquery\/latest\/BigQuery.html#createQueryJob, you can do something similar to this:\n\nconst options = {\n keyFilename: process.env.GOOGLE_APPLICATION_CREDENTIALS,\n projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID,\n defaultDataset: {\n datasetId: process.env.BIGQUERY_DATASET_ID,\n projectId: process.env.GOOGLE_APPLICATION_PROJECT_ID\n },\n query: `select * from my_table;`\n}\n\nconst [job] = await bigquery.createQueryJob(options);\nlet [rows] = await job.getQueryResults();"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"Is there an easy way to convert rows in BigQuery to JSON?","SOLUTION":"If you want to glue together all of the rows quickly into a JSON block, you can do something like:\n\nSELECT CONCAT(\"[\", STRING_AGG(TO_JSON_STRING(t), \",\"), \"]\")\nFROM `project.dataset.table` t\nThis will produce a table with 1 row that contains a complete JSON blob summarizing the entire table."}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How do I list tables in Google BigQuery that match a certain name?","SOLUTION":"You can do something like below in BigQuery Legacy SQL\n\nSELECT * \nFROM publicdata:samples.__TABLES__\nWHERE table_id CONTAINS 'github'\nOr with BigQuery Standard SQL\n\nSELECT * \nFROM publicdata.samples.__TABLES__\nWHERE starts_with(table_id, 'github') "}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"BigQuery fails to save view that uses functions","SOLUTION":"BigQuery now supports permanents registration of UDFs. In order to use your UDF in a view, you'll need to first create it.\n\nCREATE OR REPLACE FUNCTION `ACCOUNT-NAME11111.test.STR_TO_TIMESTAMP`\n (str STRING) \n RETURNS TIMESTAMP AS (PARSE_TIMESTAMP('%Y-%m-%dT%H:%M:%E*SZ', str));\n i. Note that you must use a backtick for the function's name.\n ii. There's no TEMPORARY in the statement, as the function will be globally registered and persisted.\n iii. Due to the way BigQuery handles namespaces, you must include both the project name and the dataset name (test) in the function's name.\nOnce it's created and working successfully, you can use it a view.\n\ncreate view test.test_view as\nselect `ACCOUNT-NAME11111.test.STR_TO_TIMESTAMP`('2015-02-10T13:00:00Z') as ts\nYou can then query you view directly without explicitly specifying the UDF anywhere.\n\nselect * from test.test_view"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"Query Failed Error: Resources exceeded during query execution: The \nquery could not be executed in the allotted memory","SOLUTION":"The only way for this query to work is by removing the ordering applied in the end:\n\nSELECT \n fullVisitorId,\n CONCAT(CAST(fullVisitorId AS string),CAST(visitId AS string)) AS session,\n date,\n visitStartTime,\n hits.time,\n hits.page.pagepath\nFROM\n `XXXXXXXXXX.ga_sessions_*`,\n UNNEST(hits) AS hits\nWHERE\n _TABLE_SUFFIX BETWEEN \"20160801\"\n AND \"20170331\"\nORDER BY operation is quite expensive and cannot be processed in parallel so try to avoid it (or try applying it in a limited result set)"}
{"TECHNOLOGY":"GCP Big Query","QUESTION":"How to convert results returned from bigquery to Json format using \nPython?","SOLUTION":"There is no current method for automatic conversion, but there is a pretty simple manual method to convert to json:\n\nrecords = [dict(row) for row in query_job]\njson_obj = json.dumps(str(records))\nAnother option is to convert using pandas:\n\ndf = query_job.to_dataframe()\njson_obj = df.to_json(orient='records')"}
{"TECHNOLOGY":"GCP VM","QUESTION":"Getting an error when connecting to VM using the SSH-in-browser from the Google Cloud console","SOLUTION":"To resolve this issue, have a Google Workspace admin do the following:\n\n1. Confirm that Google Cloud is enabled for the organization.\n\nIf Google Cloud is disabled, enable it and retry the connection.\n\n2. Confirm that services that aren't controlled individually are enabled.\n\nIf these services are disabled, enable them and retry the connection.\n\nIf the problem persists after enabling Google Cloud settings in Google Workspace, do the following:\n\n1. Capture the network traffic in an HTTP Archive Format (HAR) file starting from when you start the SSH-in-Browser SSH connection.\n\n2. Create a Cloud Customer Care case and attach the HAR file."}
{"TECHNOLOGY":"GCP VM","QUESTION":"The following error is occuring when I start an SSH session:\nCould not connect, retrying \u2026","SOLUTION":"To resolve this issue, do the following:\n\n1. After the VM has finished booting, retry the connection. If the connection is not successful, verify that the VM did not boot in emergency mode by running the following command:\ngcloud compute instances get-serial-port-output VM_NAME \\\n| grep \"emergency mode\"\nIf the VM boots in emergency mode, troubleshoot the VM startup process to identify where the boot process is failing.\n\n2. Verify that thegoogle-guest-agent.service service is running, by running the following command in the serial console.\n\nsystemctl status google-guest-agent.service\nIf the service is disabled, enable and start the service, by running the following commands:\n\nsystemctl enable google-guest-agent.service\nsystemctl start google-guest-agent.service\n3. Verify that the Linux Google Agent scripts are installed and running. For more information, see Determining Google Agent Status. If the Linux Google Agent is not installed, re-install it.\n\n4. Verify that you have the required roles to connect to the VM. If your VM uses OS Login, see Assign OS Login IAM role. If the VM doesn't use OS Login, you need the compute instance admin role or the service account user role (if the VM is set up to run as a service account). The roles are needed to update the instance or project SSH keys-metadata.\n\n5. Verify that there is a firewall rule that allows SSH access by running the following command:\ngcloud compute firewall-rules list | grep \"tcp:22\"\n\n6. Verify that there is a default route to the Internet (or to the bastion host). For more information, see Checking routes.\n\n7. Make sure that the root volume is not out of disk space. For more information, see Troubleshooting full disks and disk resizing.\n\n8. Make sure the VM has not run out of memory, by running the following command:\n\ngcloud compute instances get-serial-port-output instance-name \\\n| grep \"Out of memory: Kill process\" - e \"Kill process\" -e \"Memory cgroup out of memory\" -e \"oom\"\nIf the VM is out of memory, connect to serial console to troubleshoot."}
{"TECHNOLOGY":"GCP VM","QUESTION":"The SSH connection failed after upgrading the VM's kernel.","SOLUTION":"To resolve this issue, do the following:\n\n1. Mount the disk to another VM.\n2. Update the grub.cfg file to use the previous version of the kernel.\n3. Attach the disk to the unresponsive VM.\n4. Verify that the status of the VM is RUNNING by using the gcloud 5. compute instances describe command.\n5. Reinstall the kernel.\n6. Restart the VM.\nAlternatively, if you created a snapshot of the boot disk before upgrading the VM, use the snapshot to create a VM."}
{"TECHNOLOGY":"GCP VM","QUESTION":"Connection via Cloud Identity-Aware Proxy Failed","SOLUTION":"To resolve this issue Create a firewall rule on port 22 that allows ingress\n traffic from Identity-Aware Proxy."}
{"TECHNOLOGY":"GCP VM","QUESTION":"ERROR: (gcloud.compute.ssh) Could not SSH into the instance.\nIt is possible that your SSH key has not propagated to the instance yet.\nTry running this command again. If you still cannot connect, verify that the firewall and instance are set to accept ssh traffic.","SOLUTION":"This error can occur for several reasons. The following are some of the most common causes of the errors:\n\n1. You tried to connect to a Windows VM that doesn't have SSH installed.\n\nTo resolve this issue, follow the instructions to Enable SSH for Windows on a running VM.\n\n2. The OpenSSH Server (sshd) isn't running or isn't configured properly. The sshd provides secure remote access to the system via SSH protocol. If it's misconfigured or not running, you can't connect to your VM via SSH.\n\nTo resolve this issue, review OpenSSH Server configuration for Windows Server and Windows to ensure that sshd is set up correctly."}
{"TECHNOLOGY":"GCP VM","QUESTION":"ERROR: (gcloud.compute.ssh) FAILED_PRECONDITION: The specified \nusername or UID is not unique within given system ID.","SOLUTION":"This error occurs when OS Login attempts to generate a username that already exists within an organization. This is common when a user account is deleted and a new user with the same email address is created shortly after. After a user account is deleted, it takes up to 48 hours to remove the user's POSIX information.\n\nTo resolve this issue, do one of the following:\n\n1. Restore the deleted account.\n2. Remove the account's POSIX information before deleting the account."}
{"TECHNOLOGY":"GCP VM","QUESTION":"Error message:\n\"code\": \"RESOURCE_OPERATION_RATE_EXCEEDED\",\n\"message\": \"Operation rate exceeded for resource 'projects\/project-id\/zones\/zone-id\/disks\/disk-name'. Too frequent operations from the source resource.\"","SOLUTION":"Resolution:\n\nTo create multiple disks from a snapshot, use the snapshot to create an image then create your disks from the image:\n\nCreate an image from the snapshot.\nCreate persistent disks from the image. In the Google Cloud console, select Image as the disk Source type. With the gcloud CLI, use the image flag. In the API, use the sourceImage parameter."}
{"TECHNOLOGY":"GCP VM","QUESTION":"Error message:\nThe resource 'projects\/PROJECT_NAME\/zones\/ZONE\/RESOURCE_TYPE\/RESOURCE_NAME' already exists\"","SOLUTION":"Resolution: Retry your creation request with a unique resource name."}
{"TECHNOLOGY":"GCP VM","QUESTION":"Error message:\nCould not fetch resource:\n- The selected machine type (MACHINE_TYPE) has a required CPU platform of REQUIRED_CPU_PLATFORM.\nThe minimum CPU platform must match this, but was SPECIFIED_CPU_PLATFORM.","SOLUTION":"Resolution:\n\n1. To learn about which CPU platform your machine type supports, review CPU platforms.\n2. Retry your request with a supported CPU platform."}
{"TECHNOLOGY":"GCP VM","QUESTION":"Error Message:\nInvalid value for field 'resource.sourceMachineImage': Updating 'sourceMachineImage' is not supported","SOLUTION":"Resolution:\n\n1. Make sure that your VM supports the processor of the new machine type. For more information about the processors supported by different machine types, see Machine family comparison.\n\n2. Try to change the machine type by using the Google Cloud CLI."}
{"TECHNOLOGY":"GCP VM","QUESTION":"ERROR: Registration failed: Registering system to registration proxy https:\/\/smt-gce.susecloud.net\ncommand '\/usr\/bin\/zypper --non-interactive refs Python_3_Module_x86_64' failed\nError: zypper returned 4 with 'Problem retrieving the repository index file for service 'Python_3_Module_x86_64':\nTimeout exceeded when accessing 'https:\/\/smt-gce.susecloud.net\/services\/2045\/repo\/repoindex.xml?credentials=Python_3_Module_x86_64'.","SOLUTION":"To resolve this issue, review the Cloud NAT configuration to verify \nthat the minimum ports per VM instance parameter is set to at least 160."}
{"TECHNOLOGY":"GCP VM","QUESTION":"ERROR: (gcloud.compute.instances.set-machine-type) Could not fetch \nresource:\nInvalid resource usage: 'Requested boot disk architecture (X86_64) is not compatible with machine type architecture (ARM64).'","SOLUTION":"Resolution:\n\nMake sure that your VM supports the processor of the new machine type. For more information about the processors supported by different machine types, see Machine family comparison.\n\nTry to change the machine type by using the Google Cloud CLI.\n\nIf you switch from an x86 machine type to an Arm T2A machine type, you might receive a `INVALID_RESOURCE_USAGE' error indicating that your disk type is not compatible with an Arm machine type. Create a new T2A Arm instance using a compatible Arm OS and disk."}
{"TECHNOLOGY":"GCP VM","QUESTION":"using an unapproved resource \"Machine type architecture (ARM64) is not compatible with requested boot disc architecture (X86_64),\" the notification states.","SOLUTION":"To resolve this issue, try one of the following:\n\n1. If you are using a zonal MIG, use a regional MIG instead.\n2. Create multiple MIGs and split your workload across them\u2014for example by adjusting your load balancing configuration.\n3. If you still need a bigger group, contact support to make a request."}
{"TECHNOLOGY":"GCP VM","QUESTION":"Can't move a VM to a sole-tenant node.","SOLUTION":"Solution:\n\n1. A VM instance with a specified minimum CPU platform can't be moved to a sole-tenant node by updating VM tenancy. To move a VM to a sole-tenant node, remove the minimum CPU platform specification by setting it to automatic.\n\n2. Because each sole-tenant node uses a specific CPU platform, all VMs running on the node cannot specify a minimum CPU platform. Before you can move a VM to a sole-tenant node by updating its tenancy, you must set the VM's --min-cpu-platform flag to AUTOMATIC."}
{"TECHNOLOGY":"GCP VM","QUESTION":"Error Message:No feasible nodes found for the instance given its node affinities and other constraints.","SOLUTION":"Specify values for the minimum number of CPUs for each VM so that \nthe total for all VMs does not exceed the number of CPUs specified by the sole-tenant node type."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"ABORTED ERROR:\nToo much contention on these datastore entities. Please try again.","SOLUTION":"To resolve this issue:\n\n1. For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease.\n2. Hot-spots limit the ability of Firestore to scale up, review designing for scale to identify hot-spots.\n3. Review data contention in transactions and your usage of transactions.\n4. Reduce the write rate to individual documents."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"RESOURCE_EXHAUSTED Error:\nSome resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space.","SOLUTION":"To resolve this issue:\n\nWait for the daily reset of your free tier quota or enable billing for your project."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"INVALID_ARGUMENT: The value of property field_name is longer than \n1048487 bytes","SOLUTION":"To resolve this issue:\n\n1. For indexed field values, split the field into multiple fields. If possible, create an un-indexed field and move data that doesn't need to be indexed into the un-indexed field.\n2. For un-indexed field values, split the field into multiple fields or implement compression for the field value."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"Firestore : \u201cError: 9 FAILED_PRECONDITION: The Cloud Firestore API is \nnot available for Cloud Datastore projects\u201d [duplicate]","SOLUTION":"Three solutions:\n\n1. Firestore is not set as your Datastore\nGo to https:\/\/console.cloud.google.com\/firestore\/. You'll notice a popup saying you need to initialize Firestore as the Native Datastore. Once done you should see this\n\n2. You are logged into the wrong account in GCloud SDK.\nyou're on localhost - In your terminal you need to switch accounts or create a new configuration that points to the correct account and project.\n\nRun gcloud init in a terminal on the machine you are using the service account on.\n\n3. Firestore Database has not yet been created.\nOpen https:\/\/console.firebase.google.com\/. Add\/Create your GCP Project, choose billing plan, and create the database."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"I am trying to create a Vue Composable that uploads a file to Firebase Storage.\nTo do this I am using the modular Firebase 9 version.\nBut my current code does not upload anything, and instead returns this error: FirebaseError: Firebase Storage: An unknown error occurred, please check the error payload for server response. (storage\/unknown)","SOLUTION":"To fix that take these steps:\n\n1. Go to https:\/\/console.cloud.google.com\n2. Select your project in the top blue bar (you will probably need to switch to the \"all\" tab to see your Firebase projects)\n3. Scroll down the left menu and select \"Cloud Storage\"\n4. Select all your buckets then click \"Show INFO panel\" in the top right hand corner\n5. click \"ADD PRINCIPAL\"\n6. Add \"firebase-storage@system.gserviceaccount.com\" to the New Principle box and give it the role of \"Storage Admin\" and save it"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"How can I fix Firebase\/firestore error in React native?","SOLUTION":"Issue was fixed by downgrading Firebase to version 6.0.2. Cleaning project's cache was the solution.\n\nCleaning instructons:\n\nIn \/android folder run .\/graglew clean.\n\nAlso use https:\/\/www.npmjs.com\/package\/react-native-clean-project package."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"Firestore error : Stream closed with status : PERMISSION_DENIED","SOLUTION":"Replace your rules with this and try:\n\nrules_version = '2';\nservice cloud.firestore {\n match \/databases\/{database}\/documents {\n match \/{multiSegment=**} {\n allow read, write;\n }\n }\n}"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"How can I fix my firestore database setup error?","SOLUTION":"Most likely snapshot.docChanges() is an empty array, so \nsnapshot.docChanges()[0].doc.data() then fails. You'll want to check for an empty result set before accessing a member by its index like that."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"how do I fix my flutter app not building with cloud firestore?","SOLUTION":"I had the same issue and noticed, that my firebase_core dependency in pubspec.yaml was not updated.\n\nNow use firebase_core: ^1.20.0 and it works \n\nDo not forget to run flutter clean."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"How do I fix \"Could not reach Cloud Firestore Backend\" error?","SOLUTION":"If you are using Android Studio, Go to\n\nAVD Manager\nYour virtual devices\nDrop down by the right-hand side of the device\nWipe Data\nCold Boot\nThis should fix your issue"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"How to solve FirebaseError: Expected first argument to collection() to \nbe a CollectionReference, a DocumentReference or FirebaseFirestore problem?","SOLUTION":"You need to use in your imports either:\n\n'firebase\/firestore'\nOR\n\n'firebase\/firestore\/lite'\nNot both in the same project.\n\nIn your case, the firebase.ts file is using:\n\nimport { getFirestore } from 'firebase\/firestore\/lite'\nAnd in your hook:\n\nimport { doc, onSnapshot, Unsubscribe } from 'firebase\/firestore'\nSo you're initialising the lite but using the full version afterwards.\n\nKeep in mind that both has it's benefits, but I would suggest in your case to pick one and just use it. Then the error will be gone."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"I am getting error while uploading date data to firestore in flutter","SOLUTION":"Firebase uses ISO8061 format to save dates. Let us say your b'day is 08-11-2004 so your code would be so\n\nfinal date = DateTime(2004, 11, 8).toIso8601String();\nNow you can upload the date variable into firebase as Date format."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"How can I resolve the '_CastError' error when reading a timestamp from \nFirestore in Flutter?","SOLUTION":"Dart casts treat one object as a different type of object. They do not perform any conversions.\n\nTo convert a String to a cloud_firestore Timestamp, you will need to parse it:\n\n return AppUser(\n birthDate: Timestamp.fromDate(DateTime.parse(json['birth_date'])),\n ...\n );"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"Error 400: unable to create collection using Firestore rest API","SOLUTION":"With Firestore we don't create a Collection as such. A new Collection is created when the first Document of this Collection is created.\n\nSo for creating a doc in a new abcd collection, according to the REST API documentation, you need to call the following URL (see abcd at the end of the URL)\n\nhttps:\/\/firestore.googleapis.com\/v1\/projects\/mountain-bear-****\/databases\/(default)\/documents\/abcd\nwith a POST request and the request body shall contain an instance of a Document."}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"Firestore CANNOT create document. Server flooding network with HTTP 200 non stop","SOLUTION":"1. you can try setting logLevel for Firestore and try to figure out what is happening with\nfirebase.firestore.setLogLevel('debug');\n2. Recheck your firebase\/firestore configuration\n\n3. Try to change firebase libs versions, it does matters sometimes, had a bunch of broken libs and a lot of headache with them"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"How to fix flutter firestore stream builder error?","SOLUTION":"Check if it's null while loading the data from firestore\n\nStreamBuilder(\n stream:\n FirebaseFirestore.instance.collection('my_contact').snapshots(),\n builder: (context, AsyncSnapshot<QuerySnapshot> streamSnapshot) {\n if (!streamSnapshot.hasData) return Center();\n if (streamSnapshot.data.docs.length!=0) {\n return ListView.builder(\n itemCount: streamSnapshot.data.docs.length,\n itemBuilder: (ctx, index) => SettingRowWidget(\n \"Call\",\n vPadding: 0,\n showDivider: false,\n onPressed: () {\n Utility.launchURL((streamSnapshot.data.docs[index]['phone']));\n },\n ),\n );\n }else{\n return Center(child:Text('No data found'));\n }\n \n },\n ));"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"When I run a transaction inside a try {} catch(error){} block in Firestore, \nI noticed that when I try to store the error in logs, it appears as empty object. However, when I print it into console in the emulator, I get a proper error message.","SOLUTION":"Potential solutions are as follows:\n\nfunctions.logger.error(`Unexpected error occurred:`, error) \/\/ Here error is a \"simple object\"\nfunctions.logger.error(`Unexpected error occurred:`, { error: error.message }) "}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"Error: Failed to get Firebase project project-name. Please make sure the\n project exists and your account has permission to access it","SOLUTION":"Try logging out of firebase CLI and then log back in with the account that has the project that you are trying to run.\n\nSteps:\n\n`firebase logout`\n`firebase login`"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"Error in getting server timestamp in Firestore","SOLUTION":"The problem in your code is the fact that the type of your timestamp field inside your UserLight class dosn't match the type of your timestamp property in the database. See, in your UserLight class the timestamp field is of type long, which is basically a number while in the database is of type Date or Timestamp. Please note that both must match.\n\nBecause the correct way of holding dates in Cloud Firestore is to use the Date or Timestamp class, to solve this, simply change type of your timestamp field in your model class to be Date"}
{"TECHNOLOGY":"GCP Fire Store","QUESTION":"Reference error firestore is not defined in firebase cloud function when\n using firebase admin sdk","SOLUTION":"Removing the unnecessary import const { firestore } =\n require('firebase-admin') and then changing firestore.FieldValue.increment(1) to admin.firestore.FieldValue.increment(1) fixed the error."}
{"TECHNOLOGY":"GCP Cloud Build","QUESTION":"Error: \"No source files found in the repository\":","SOLUTION":"Verify that your build configuration includes the correct source file or directory. Double-check the path and ensure that the source files exist in the repository. If using a specific branch or tag, confirm that the branch or tag exists."}
{"TECHNOLOGY":"GCP Cloud Build","QUESTION":"Error: \"Permission denied\" or \"Insufficient permissions\" during build execution","SOLUTION":"Ensure that the user or service account running the build has the necessary permissions to access the required resources. Grant the appropriate IAM roles, such as roles\/cloudbuild.builds.editor or roles\/cloudbuild.builds.viewer, to the user or service account."}
{"TECHNOLOGY":"GCP Cloud Build","QUESTION":"Error: \"Build timed out because no logs were emitted\"","SOLUTION":"Check the build configuration to ensure that your build steps emit logs. Ensure that the logging configuration is set correctly. Verify that the build step commands or scripts are properly configured to produce logs."}
{"TECHNOLOGY":"GCP Cloud Build","QUESTION":"Error: \"Failed to access external resources during build\"","SOLUTION":"Check the firewall rules and network configuration to ensure that the Cloud Build service has access to the required external resources. Verify that any necessary APIs are enabled. If accessing private resources, configure the necessary VPC networking and connectivity."}
{"TECHNOLOGY":"GCP Cloud Build","QUESTION":"Error: \"Build failed due to build step dependencies not found\"","SOLUTION":"Seems that Cloud Build is starting with a specific service account, and that account does not have permissions to store build logs in Logging.\n\nGrant the Logging Admin (roles\/logging.admin) role to the service account you specified in the YAML file."}
{"TECHNOLOGY":"Cloud Deployment","QUESTION":"Error: \"Dependency not found\" or \"Incompatible version\" when deploying an application or service.","SOLUTION":"Review the application's dependencies and ensure that all required dependencies are available and compatible with the deployed environment. Update or adjust dependency versions as needed."}
{"TECHNOLOGY":"Cloud Deployment","QUESTION":"Error: \"Permission denied\" or \"Insufficient permissions\" during \ndeployment.","SOLUTION":"Ensure that the user or service account performing the \ndeployment has the necessary roles and permissions. Grant the appropriate IAM roles, such as roles\/editor or roles\/clouddeploy.admin, to the user or service account."}
{"TECHNOLOGY":"Cloud Deployment","QUESTION":"Error: \"Quota exceeded\" or \"Resource limit reached\" when deploying \nresources.","SOLUTION":"Check the quota limits for the specific resource you are \ntrying to deploy. If the quota is insufficient, request a quota increase by following the appropriate process in the GCP Console or contacting GCP Support."}
{"TECHNOLOGY":"Cloud Deployment","QUESTION":"Error: \"Failed to create network\" or \"Failed to configure firewall rules\" \nduring deployment.","SOLUTION":"Ensure that the specified network configuration and firewall\n rules are valid. Verify that the specified subnets, IP ranges, and firewall rules do not conflict with existing resources or rules."}
{"TECHNOLOGY":"Cloud Deployment","QUESTION":"Error: \"Invalid YAML syntax\" or \"Configuration file contains errors\" \nduring deployment.","SOLUTION":"Validate your YAML configuration files using a YAML linter or\n online validator. Ensure that the YAML syntax is correct and the configuration follows the expected structure and format for the deployment tool or service you are using."}
{"TECHNOLOGY":"Cloud Repository","QUESTION":"Error: \"Permission denied\" or \"Insufficient permissions\" when accessing\n or performing operations in Cloud Repository.","SOLUTION":"Ensure that the user or service account has the necessary roles and\npermissions. Grant the appropriate IAM roles, such as roles\/source.reader or roles\/source.writer, to the user or service account."}
{"TECHNOLOGY":"Cloud Repository","QUESTION":"Error: \"Repository not found\" or \"No repository exists with the given \nname.\"","SOLUTION":"Double-check the repository name and ensure that it exists in your \nproject and is spelled correctly. Use the correct project ID or name along with the repository name when referencing it."}
{"TECHNOLOGY":"Cloud Repository","QUESTION":"\"Authentication failed\" or \"Invalid credentials\" when attempting to \nauthenticate with Cloud Repository.","SOLUTION":"Verify that you are using valid credentials for accessing Cloud \nRepository. Ensure that the authentication method, such as using SSH keys or gcloud command-line tool with the correct configuration, is set up correctly."}
{"TECHNOLOGY":"Cloud Repository","QUESTION":"\"Failed to push to repository\" or \"Failed to clone repository\" when \nperforming Git operations.","SOLUTION":"Solution:\n1. Check your network connectivity and ensure you have a stable internet connection.\n2. Verify that the repository URL is correct and properly formatted.\n3. Make sure you have the necessary permissions to push or clone the repository."}
{"TECHNOLOGY":"Cloud Repository","QUESTION":"Error: \"Branch not found\" or \"Tag not found\" when attempting to access \na specific branch or tag.","SOLUTION":"Ensure that the branch or tag exists in the repository. Double-check \nthe spelling and case sensitivity when referencing the branch or tag name."}
{"TECHNOLOGY":"Cloud Scheduler","QUESTION":"Error: \"Permission denied\" or \"Insufficient permissions\" when \nattempting to create or manage Cloud Scheduler jobs.","SOLUTION":"Ensure that the user or service account has the necessary roles and \npermissions. Grant the appropriate IAM roles, such as roles\/cloudscheduler.admin or roles\/cloudscheduler.editor, to the user or service account."}
{"TECHNOLOGY":"Cloud Scheduler","QUESTION":"Error: \"Invalid job configuration\" or \"Failed to create job\" due to \nincorrect or missing configuration settings.","SOLUTION":"Double-check the job configuration, including the target HTTP\/HTTPS \nendpoint, cron schedule, time zone, and payload (if applicable). Verify that all required fields are provided and correctly formatted."}
{"TECHNOLOGY":"Cloud Scheduler","QUESTION":"Error: \"Authentication failed\" or \"Invalid credentials\" when \nauthenticating requests triggered by Cloud Scheduler.","SOLUTION":"Ensure that the target endpoint or service being invoked by the Cloud \nScheduler job is configured to accept and validate the authentication credentials. Verify that the authentication method and credentials used are correct and valid."}
{"TECHNOLOGY":"Cloud Scheduler","QUESTION":"Error: \"Failed to reach the target endpoint\" or \"Target endpoint returned\n an error\" when invoking the specified HTTP\/HTTPS endpoint.","SOLUTION":"Solution:\n1. Check the target endpoint's availability and connectivity. Verify that the endpoint is accessible from the internet and is not blocked by firewalls or other network restrictions.\n2. Ensure that the endpoint URL is correct and properly formatted.\n3. Inspect the logs or error messages returned by the target endpoint to identify and address the specific issue."}
{"TECHNOLOGY":"Cloud Scheduler","QUESTION":"Error: \"Invalid cron schedule\" or \"Failed to parse cron expression\" due to\n an incorrect cron schedule format.","SOLUTION":"Review the cron schedule syntax and ensure that it adheres to the \ncorrect format. Use tools or online cron expression validators to verify the syntax and correctness of the cron schedule."}
|