KB3030
NCC Health Check: check_kerberos_setup
The NCC health check check_kerberos_setup validates if Kerberos is set up correctly.
The NCC health check check_kerberos_setup validates if Kerberos is set up correctly.
Running the NCC Check: You can run this check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all Or you can run this check individually: nutanix@cvm$ ncc health_checks hypervisor_checks check_kerberos_setup You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every day, by default. This check will generate an alert after 1 failure.
Sample output:
For status PASS: Running /health_checks/hypervisor_checks/check_kerberos_setup on all nodes [ PASS ]
For status WARN: Running /health_checks/hypervisor_checks/check_kerberos_setup on all nodes [ WARN ] Running : health_checks hypervisor_checks check_kerberos_setup
For status ERROR: Running /health_checks/hypervisor_checks/check_kerberos_setup on all nodes [ ERR ]
Output messaging:
Description: Check if Kerberos is set up correctly
Causes of failure: Kerberos is not set up correctly.
Resolutions: Check Kerberos setup. Review KB 3030 for additional details.
Impact: Kerberos authentication is impacted.
Alert Message: Kerberos is not set up correctly.
Alert Title: Incorrect Kerberos Setup
After installing Microsoft Windows updates released on November 8, 2022 or later on domain controllers running Windows Server (2016, 2019, and 2022), this check returns the warning: AOS cluster computer object is not configured correctly in Active Directory. This is a known issue, and Microsoft has released out-of-band fixes that can be downloaded from the Windows Update catalog and installed on the domain controllers manually: Windows Server 2022: KB5021656 https://support.microsoft.com/help/5021656. Windows Server 2012R2: KB5021653 https://support.microsoft.com/en-au/topic/kb5021653-out-of-band-update-for-windows-server-2012-r2-november-17-2022-8e6ec2e9-6373-46d7-95bc-852f992fd1ff. Windows Server 2019: KB5021655 https://support.microsoft.com/help/5021655. Windows Server 2016: KB5021654 https://support.microsoft.com/help/5021654. This check can also fail due to Active Directory or other issues: This check will fail if there is a time drift of 300+ seconds between the CVMs (Controller VMs) and the domain controllers. Check and correct if needed using KB 4519 https://portal.nutanix.com/kb/4519. If the output states that the "Domain Controller(s) could not be found", this failure is likely due to Active Directory issues. Verify the configuration of the domain controller and Active Directory on the affected host. If the output contains keywords such as "keytab" or "keyrepository" rather than "domain controller", this failure is likely due to other issues. Contact Nutanix Support https://portal.nutanix.com for further assistance. There is a known issue with this check prior to NCC 2.2.2, where the check might report an ERR: Failed to process the keytab. If you see this error on your clusters, upgrade NCC to 2.2.2 or later. If the check still fails after upgrading NCC, open a support case with Nutanix Support. If the check fails with a message like "AOS cluster computer object is not configured correctly in Active Directory" randomly on different nodes, this could be a known issue due to a check timeout: in NCC 2.2 and NCC 2.2.2, the check allows only 5 seconds to validate credentials. In situations such as one of the domain controllers being down, the check can time out and produce this false alert. If the check always fails on one particular node, open a support case with Nutanix Support. If the check fails with a message like "AOS cluster computer object is not configured correctly in Active Directory" consistently on all nodes, this may be caused by an AD object having a duplicate UserPrincipalName with the Nutanix storage cluster computer object (KB 3633 https://portal.nutanix.com/kb/3633). Try the solution in that KB or open a support case with Nutanix Support. If you use third-party software to manage your DNS (like Infoblox), ensure the required DNS A records are present (for the Nutanix storage object and the domain controllers). Otherwise, NCC checks will fail. If the check fails with the message below: Unable to get Kerberos Key Distribution Center IP's using DNS resolution. Check Name Server configuration on this cluster the issue is likely caused by orphan domain controller SRV records that have no A record in DNS. The NCC check queries DNS for SRV records and verifies that the entries have corresponding A records. Any entry with an SRV record but no corresponding A record will cause this error. Use nslookup to identify the orphan DC SRV records and remove them.
An orphan Domain Controller SRV record can appear if a DC hostname changes: the corresponding A record is updated to the new DNS name, but the old SRV record remains. Refer to KB 7668 http://portal.nutanix.com/kb/7668 for more details. Note: Enabling Kerberos is not mandatory for Hyper-V 2012 R2. Thus, this check does not return a warning for Hyper-V 2012 R2.
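For example, a hedged illustration of hunting for orphan SRV records with nslookup (the domain and hostnames below are placeholders, not values from this KB):
nutanix@cvm$ nslookup -type=SRV _kerberos._tcp.dc._msdcs.example.com
Then, for each domain controller hostname returned, confirm an A record exists:
nutanix@cvm$ nslookup dc1.example.com
Any SRV entry whose hostname does not resolve to an A record is an orphan candidate to be removed from DNS.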
KB14608
Exporting an AHV VM using UEFI to OVA and importing it into an ESXi cluster results in an unbootable VM.
Exporting an AHV VM using UEFI to OVA and importing it into an ESXi cluster results in an unbootable VM.
If a Windows VM using UEFI on an AHV cluster is exported as an OVA and then imported into a vSphere cluster, the VM will not boot and becomes stuck at the boot screen. Windows VMs using Legacy BIOS do not have the same issue.
This is a known issue. When the UEFI VM is exported as an OVA, the NVRAM file is not exported with it, and ESXi requires the NVRAM file to boot a UEFI VM. Workaround: Use a V2V solution such as VMware Converter to convert a UEFI AHV VM to a vSphere VM.
KB7585
Two-node cluster recommended AOS version
This KB recommends a minimum version of AOS 5.10.5 to take advantage of numerous improvements for two-node clusters.
A traditional Nutanix cluster requires a minimum of three nodes, but Nutanix also offers the option of a one-node or two-node cluster for ROBO implementations. Both require the CVM to be allocated 6 vCPUs and 20 GB of memory. Specifically for two-node clusters, there have been several improvements in 5.10 to ensure the optimal health of the cluster. The table below summarizes improvements that significantly increase the stability and reliability of two-node clusters:
ENG number | Issue Description | KB number | AOS Fix version
ENG-228261 | LCM upgrades may fail on two-node cluster | 7540 | 5.10.5
ENG-210700 | A two-node cluster might not have automatically recovered from an outage. | 7479 | 5.10.4
ENG-210039 | Upgrades might stall on two-node cluster for a longer time than expected. | ---- | 5.10.4
ENG-214498 | Network flakiness may cause the nodes to be separated and delay communication with the witness. In some scenarios, the cluster may remain in an unstable state even after the network is restored. | ---- | 5.10.4
ENG-205884 | Two-node cluster may go down during AHV upgrade, leading to a hypervisor upgrade problem. | ----- | 5.10.3
ENG-188785 | AOS upgrade may get stuck on two-node cluster | ---- | 5.10
Customers should upgrade to a minimum version of AOS 5.10.7 to avoid encountering any of the potential service-impacting issues present in earlier releases of AOS. If you have a valid support contract, you are entitled to upgrade to the latest AOS release. Note: Two-node clusters running AOS 5.10.6 MUST use the upgrade procedure in KB 8134 https://portal.nutanix.com/kb/8134. If you have any questions about the release you need to upgrade to, or about interoperability, refer to the following links: https://portal.nutanix.com/#/page/upgradePaths or https://portal.nutanix.com/#/page/softwareinteroperability
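As a quick pre-upgrade sanity check (an illustrative sketch, not part of the original procedure), the running AOS version can be confirmed from any CVM:
nutanix@cvm$ ncli cluster info | grep -i version
or:
nutanix@cvm$ cat /etc/nutanix/release_version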
KB4469
Enabling Nutanix Guest Tools (NGT) fails in Prism with the error: "Error: Unable to generate security certificates for this VM."
Enabling Nutanix Guest Tools (NGT) fails in Prism with the error: "Error: Unable to generate security certificates for this VM."
Enabling Nutanix Guest Tools by using the Prism web console fails with the following error message: Unable to generate security certificates for this VM. The following error message is reported in /home/nutanix/data/logs/nutanix_guest_tools.out: E0524 09:13:01.960119 29283 secure_connection.cc:214] SSL library error: error:00000001:lib(0):func(0):reason(1)
Perform the following procedure to resolve the issue. Confirm that the ID of the VM on which NGT is installed matches the ID in the error message in the NGT logs: nutanix@cvm$ ncli vm list | grep -A11 -B3 Test_VM Identify where the NGT master service is running: nutanix@cvm$ nutanix_guest_tools_cli get_master_location Open an SSH session to the master service node: ssh x.x.x.3 Check the permissions of the NGT ca.tar file and ca subfolders. Note: All the files and folders must be owned by ngt, but in this scenario some are owned by root. nutanix@cvm$ sudo ls -tlp /home/ngt/ nutanix@cvm$ sudo ls -tlp /home/ngt/ca/intermediate/certs/ Change the owner of any files not owned by ngt to ngt: nutanix@cvm$ sudo chown -R ngt:ngt /home/ngt/ca* Confirm that the current owner has been set to ngt: nutanix@cvm$ sudo ls -tlp /home/ngt/ nutanix@cvm$ sudo ls -tlp /home/ngt/ca/intermediate/certs/ Confirm that the ca.tar and ca owner on the rest of the cluster nodes is ngt, and amend any wrong owner. Note: The master service could move to any node where the owner is set incorrectly, reproducing the same error. nutanix@cvm$ allssh "sudo ls -tlp /home/ngt/" Complete the NGT installation and confirm that the certificate has been generated correctly so that NGT gets successfully enabled and the link is active: nutanix@cvm$ ncli ngt list | grep -C10 Test
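As an optional verification (a sketch that uses only the path from the steps above), files under /home/ngt not owned by ngt can be listed on every CVM before and after the chown:
nutanix@cvm$ allssh 'sudo find /home/ngt -not -user ngt'
Empty output on all nodes indicates the ownership is correct.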
KB6810
Phoenix-4.3.1-x86_64.iso boot failures when configuring Hypervisors
Phoenix 4.3.1 fails to install any hypervisor.
When booting Hyper-V from Phoenix version phoenix-4.3.1-x86_64.iso, you might encounter the following error: StandardError: Failed command: [unzip -o /mnt/local/images/hyperv_binaries.zip -d /mnt/stage/sources] with reason [unzip: cannot find or open /mnt/local/images/hyperv_binaries.zip, /mnt/local/images/hyperv_binaries.zip.zip or /mnt/local/images/hyperv_binaries.zip.ZIP.] [screen is terminating]
Use an older Phoenix version than 4.3 (for example, Phoenix 4.1). Older versions can always be downloaded from AWS; consult KB 2430: [AWS/S3] How To Download Older Phoenix, AHV, PC, Foundation and AOS Images https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TT1HCAW Note: The following error might be shown if Phoenix fails and there is data on the D: partition: FATAL Hypervisor has already been customized. Maybe you want to run firstboot? You will have to delete all the data from the 1 GB partition (usually mounted under the letter D:, but that depends on the customer system). The partition can be identified by its content (markers folder, firstboot.bat, etc.). Delete everything from that partition and boot into Phoenix 4.1 to retry the hypervisor configuration.
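For illustration only (device names vary per system and are assumptions, not values from this KB), the partition can be inspected and cleared from a Linux rescue shell along these lines:
[root@phoenix]# lsblk
[root@phoenix]# mkdir -p /mnt/tmp && mount /dev/sdX1 /mnt/tmp
[root@phoenix]# ls /mnt/tmp   # confirm the markers folder and firstboot.bat are present
[root@phoenix]# rm -rf /mnt/tmp/*
Then reboot into Phoenix 4.1 to retry the hypervisor configuration.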
KB16934
Nutanix Kubernetes Engine: Failed to upgrade host image of K8s cluster - Failed to Get VM
When upgrading NKE components, whether the host OS image or the Kubernetes (K8s) version, the prechecks can fail due to a failure to retrieve VM information, which can be caused by a spec mismatch between the Prism Central cluster and the Prism Element cluster.
1. Identify the Karbon leader if the Prism Central environment is scale-out. (You can also refer to KB-15824 https://portal.nutanix.com/kb/15824.) nutanix@PCVM:~$ panacea_cli show_leaders | grep karbon
2. The karbon_core logs (~/data/logs/karbon_core.out) show the following error entries: 2024-02-08T18:01:02.568Z helper.go:637: [WARN] Could not fix error state of VM in %!s(int=5) retries
3. The API calls are successful in the Prism Proxy logs (/home/apache/ikat_access_logs/prism_proxy_access_log.out): nutanix@PCVM:~$ sudo zless /home/apache/ikat_access_logs/prism_proxy_access_log.out.1.gz | grep -i 'f83c0dd5-188e-4142-9a82-f0f0fe054fd4' | tail -3
4. In nuclei, the VM in question shows in a PENDING or ERROR state on the Prism Central side. nutanix@PCVM:~$ nuclei vm.list count=1000 2>/dev/null | grep -i 'f83c0dd5-188e-4142-9a82-f0f0fe054fd4'
5. The VM shows a spec version of 22 on the Prism Central side (it can be any other number, not necessarily 22). nutanix@PCVM:~$ nuclei vm.get f83c0dd5-188e-4142-9a82-f0f0fe054fd4 2>/dev/null | grep 'spec_version'
6. In nuclei, the VM shows as COMPLETED on the Prism Element side. nutanix@CVM:~$ nuclei vm.list count=1000 2>/dev/null | tail -n +6 | grep -i 'f83c0dd5-188e-4142-9a82-f0f0fe054fd4'
7. The VM shows a spec version of 6 on the Prism Element side (it can be any other number, not necessarily 6). nutanix@CVM:~$ nuclei vm.get f83c0dd5-188e-4142-9a82-f0f0fe054fd4 2>/dev/null | grep 'spec_version'
1. If the symptoms above match, follow KB-7853 https://portal.nutanix.com/kb/7853 to solve the spec mismatch between Prism Central and Prism Element.
2. If nuclei still reports the VM in a PENDING or ERROR state after removing the specs for the VM, check the aplos logs (~/data/logs/aplos.out), and if you see "api_lock_interface" entries, follow KB-13670 https://portal.nutanix.com/kb/13670. 2024-03-15 21:23:21,884Z WARNING api_lock_interface.py:175 Lock with uuid: e50277ca-85c3-5d1a-83db-31c056433862 already present.
3. In nuclei, the VM should now be reported in the COMPLETE state. nutanix@PCVM:~$ nuclei vm.list count=200 2>/dev/null | grep -i 'f83c0dd5-188e-4142-9a82-f0f0fe054fd4'
4. Restart the NKE components upgrade via the GUI.
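A minimal way to scan for the lock entries mentioned in step 2 (a sketch; it uses only the log path given above):
nutanix@PCVM:~$ grep 'api_lock_interface' ~/data/logs/aplos.out
Any "Lock with uuid ... already present" hits suggest following KB-13670.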
KB14732
Nutanix Files - File Server upgrade is stuck at 80%
File Server upgrade is stuck at 72% with the message "Upgrading File Servers", while the subtask is hung at 80% with the message "File Server Upgrade Task: Upgrading File Server vms: Completed".
Summary: File Server upgrade is stuck at 72% with the message "Upgrading File Servers", while the subtask is hung at 80% with the message "File Server Upgrade Task: Upgrading File Server vms: Completed". Impact: The Nutanix Files server and FSVMs show as upgraded, but the task is stuck. Nutanix Files server checks show the cluster is up and healthy, and the shares are up and accessible. Cause: kResponseTooLong errors in IDF (Insights Server) cause the SSR migration during the Nutanix Files cluster upgrade to get stuck. Commands to verify the upgrade and cluster state:
nutanix@cvm:~$ afs infra.fs_upgrade_info
nutanix@cvm:~$ afs infra.resume_fs_upgrade
nutanix@FSVM:~$ afs ha.minerva_check_ha_state
nutanix@FSVM:~$ afs smb.health_check
nutanix@FSVM:~$ afs version
This will show the version is still the old Nutanix Files version and the SSR migration status: SsrMigrationInProgress.
nutanix@FSVM:~$ afs fs.info
The minerva log shows: 2023-03-04 13:22:02,830Z INFO 94976432 cpdb.py:124 Failed to send RPC request. Retrying. 2023-03-04 13:22:21,554Z INFO 94976432 cpdb.py:124 Failed to send RPC request. Retrying.
FSVM errors in insights_server.out: insights_server.out.20230304-052033Z.gz:E20230304 06:16:55.249749Z 12493 coordinator_watch_client.cc:648] HandleError: client_id = CWC$go-cache-cecfbc0f-fdd3-4595-b46c-baea38752965 session_id = e0f3ac66-9b27-4982-4f74-8ac5bff0526c ip = X.X.X.193 port = 2027 Client encountered error. Error: kResponseTooLong. Sub error type: 0. Error details: . Setting error state.
insights_server.INFO or insights_server.ERROR: E20230304 21:04:04.268256Z 116438 tcp_connection.cc:407] Message too long on socket 72 Max allowed size 16777216 bytes Read packet size 17210995 bytes
WARNING: Support, SEs, and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines (https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit). Upgrade Nutanix Files to version 4.3. If the upgrade is not possible, use the following workaround. This requires engaging an STL or DevEx via an ONCALL to confirm that the workaround is applicable and to provide the recommended value for the gflags. To provide relief to customers and complete the Nutanix Files upgrade, update the following insights_server gflags on all FSVMs, then restart the insights_server service in a rolling fashion. Check KB-1071 https://nutanix.my.salesforce.com/kA0600000008SsH?srPos=0&srKp=ka0&lang=en_US for guidance on updating the gflags. --http_server_v2_large_message_threshold_bytes --rpc_client_v2_large_message_threshold_bytes
KB15835
How to change your current password for my.nutanix.com or portal.nutanix.com
Instructions on changing your existing Nutanix support password.
This article reviews the steps for performing a password reset for either MyNutanix https://my.nutanix.com or the Support Portal https://portal.nutanix.com.
To change any user's password for MyNutanix https://my.nutanix.com or the Support Portal https://portal.nutanix.com, follow the steps below: Log on to MyNutanix or the Support Portal and click on your name in the top right corner. Select "Settings" or "Profile Settings" and then select "Change Password". This page will ask you for your old password, your new password, and to re-enter your new password. The password criteria are below. Password must contain: Between 8 and 30 characters. At least one uppercase letter. At least one lowercase letter. At least one digit. A special character (!@#$%&*). Once you click Save, the password will be changed, and you will be signed out. You can now log in with the new password. Contact the Nutanix Portal team at portal-accounts@nutanix.com should you run into any issues.
KB9564
Cost Governance Memory Metrics
Configuring Cost Governance memory metrics in Nutanix Beam
Note: Cost Governance is formerly known as Beam. If you cannot get Underutilized EC2 recommendations in Nutanix Beam, this article explains the prerequisites and reference links for the configuration.
Prerequisites: Memory metrics are not available by default like CPU metrics in AWS; they have to be enabled on the EC2 instances. Make sure you have installed the memory metrics on the EC2 instances. Refer to the Amazon User Guide http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html to configure memory metrics on EC2 instances. Memory metrics configuration in Beam: Refer to the Nutanix Beam User Guide http://portal.nutanix.com/page/documents/details/?targetId=Nutanix-Beam-User-Guide%3Abea-memory-metrics-configure-aws-cg-t.html to configure memory metrics in Beam. Verify before configuring underutilized memory metrics: Underutilized EC2 recommendations depend on more factors than just memory metrics. The memory metric names configured in EC2 and Beam should be the same. Recommendations are derived based on CPU, network I/O, memory, number of days, and percentile of the time. Note: The default system policy provided by Beam has the following settings for Underutilized EC2 recommendations: CPU usage is less than (%) => 20. Network usage is less than (MBs) => 5. Memory usage (in percentage) is less than => 10 (if configured properly). Number of days => 14. Time percentile to consider => 90. These policy details are available under Configure -> Cost Policy (Nutanix Beam User Guide https://portal.nutanix.com/page/documents/details/?targetId=Nutanix-Beam-User-Guide%3Abea-cost-policy-configure-aws-cg-t.html) and can be modified as per your requirements. Beam will not recommend an instance as underutilized if its CPU, memory, and network I/O usage exceed the thresholds configured in the Cost Policy. FAQ of Cost Governance http://portal.nutanix.com/page/documents/kbs/details/?targetId=kA00e000000CqTFCA0.
KB14235
AHV virtual switch creation fails validation due to existing bridge configuration
Failed virtual switch validation: Uplink ports for virtual switch [vsX] failed validation because of attachment to other existing bridge(s)
When creating a non-default (new) virtual switch on AHV, for example vs[X], when there is already a previously manually created bridge with the chosen uplink interfaces attached, you may see the error: Failed virtual switch validation: Uplink ports for virtual switch [vsX] failed validation because of attachment to other existing bridge(s) - <host ID>uplinks:[ethx, ethx] conflict:brX-> ethx ethx This occurs because the chosen interfaces are already in use. Migrating them to the new virtual switch and new bridge, where no VM networks are configured yet, may lead to loss of connectivity for existing UVMs with vNICs currently using the existing bridge. This type of configuration may have been manually set up in an earlier version of AOS, prior to the distributed virtual switch feature being available (pre-AOS 5.19).
If you see the above message, rather than manually removing the old bridge configuration and uplinks, simply migrate the existing brX with the intended uplinks to the intended new virtual switch via CLI from one of the CVMs in the affected cluster. Log into one of the CVMs. Check and confirm that brX already exists on all the hosts with the same configuration: nutanix@CVM:~$ allssh manage_ovs show_uplinks Run the command to migrate the existing bridge with uplinks to the new virtual switch: nutanix@CVM:~$ acli net.migrate_br_to_virtual_switch brX vs_name=vsX Lastly, confirm that the new virtual switch is created: nutanix@CVM:~$ acli net.list_virtual_switch
""ISB-100-2019-05-30"": ""Title""
null
null
null
null
""ISB-100-2019-05-30"": ""Description""
null
null
null
null
""ISB-100-2019-05-30"": ""Title""
null
null
null
null
KB7028
G6 Node fails to boot - VDimmP2DEF Voltage Lower Non-Recoverable going low - Assertion
Component failure in the VRM-related circuitry on Multi-Node platforms can lead to the inability to power on the node, while the BMC remains accessible.
Internal Only. Platforms Affected: All NX G6 Multi-Node Platforms. Component failure in the VRM-related circuitry on Multi-Node platforms can lead to the inability to power on the node, while the BMC remains accessible. There are scenarios where the node is powered on but with partial impact, such as a few DIMMs not being recognized. If the IPMI event log has the signature below and the node is rebooted, replace the node only. Replace the DIMMs only after ensuring that the DIMMs are faulty even after replacing the node. This issue can be confirmed from the IPMI web interface. To check and collect the event logs, do the following: Log in to the IPMI page. Go to the Server Health tab. Select the Event Log or Health Event Log option from the left. After reviewing the output on the screen, click the Save button to export the output to CSV. Ensure the file is uploaded and attached to the case. The IPMI event log will indicate an Assertion Error for Lower Voltage (Example: VDimmP2DEF Voltage Lower Non-Recoverable going low - Assertion). If you see any indications of a UECC around the time of the voltage error, and it is in the same channel as the voltage error, do not replace the DIMM, as it is not needed and that UECC is a subset of the voltage error. For example: 113,Warning,1/13/2021 2:46,BIOS OEM(Memory Error),Failing DIMM: DIMM location (Uncorrectable memory component found). @DIMMD1(CPU2) - Assertion You might also see Memory Training Failure error messages in the SEL logs along with the above error. The sensor reading from the oob_log_script bundle shows that the affected DIMMs are not being detected: OK | (1009) P1-DIMMA1 Temp | 42C/108F | 5C/41F | 85C/185F |
Please dispatch a new node for this failure. This issue has been fixed with the help of Supermicro for all nodes shipped after March 2019. If you see this issue on nodes shipped after March 2019, mark the node for Failure Analysis (FA).
KB13095
Nutanix Files - The symbolic link cannot be followed because its type is disabled
On Files 3.8.1 or later, the symbolic link paths on multiprotocol shares become inaccessible. Link configuration needs to be set on the Windows client to enable symbolic link access.
Symbolic links are supported on Nutanix Files. However, after upgrading to 3.8.1 or later, the symbolic link paths on multiprotocol shares become inaccessible. Additionally, WinSCP does not support following symbolic links: copy and download operations using WinSCP and symbolic links on multiprotocol shares will fail. The error message will be similar to the following: The symbolic link cannot be followed because its type is disabled Note: When using the "none" authentication type for NFS exports, the permissions on the symbolic link will fall back to the default user. If the symbolic link is created on the NFS client while logged in as root or a different local user, the permissions will be those of the default user configured for the export. To properly create symbolic links on multiprotocol shares, the user has to be logged into their domain account on either an NFS or SMB client.
Due to the changes in the code specifically for multiprotocol support, the "remote to remote" symbolic link evaluation must be enabled on the Windows clients for symbolic links to be supported. Command to get the existing symbolic link configuration: C:\>fsutil behavior query SymlinkEvaluation Command to enable remote-to-remote symbolic links: C:\>fsutil behavior set SymlinkEvaluation R2R:1 Run the following command if remote-to-local symbolic links are also required: C:\>fsutil behavior set SymlinkEvaluation R2L:1 The workaround for WinSCP is to use FileZilla, as it supports following symbolic links.
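To verify the fsutil change took effect, the query can be run again; on a typical Windows client the output lists all four evaluation types (the output below is illustrative, not captured from this KB):
C:\>fsutil behavior query SymlinkEvaluation
Local to local symbolic links are enabled.
Local to remote symbolic links are enabled.
Remote to local symbolic links are disabled.
Remote to remote symbolic links are enabled.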
KB14468
Rolling reboot of Prism Central VMs triggers false positive OVNDuplicateRedirectChassisAlert
Rolling reboot of Prism Central VMs triggers false positive OVNDuplicateRedirectChassisAlert
1) When PC undergoes a rolling reboot during upgrades, the following alerts may be raised on PC for different AHV hypervisor IPs: nutanix@PCVM:~$ ncli alert ls | grep "has more than one unique Hypervisor name" -A4 | grep -E "Message|Created" 2) Alert details seen from the Prism Central UI: Node at x.x.x.4 has more than one unique Hypervisor name 3) Verify from the timeline of the alert whether it corresponds to the timeline of a reboot of the Prism Central VM. NOTE: The command below shows times relative to the PCVM timezone. nutanix@PCVM:~$ allssh "who -b"
The issue is identified as a software defect tracked by Jira NET-12599. The alert is auto-resolved, and no action needs to be taken for this false positive alert.
KB7844
Alert - A130200 - VssSnapshotNotSupportedOnPD
Investigating VssSnapshotNotSupportedOnPD issues on a Nutanix cluster.
This Nutanix article provides the information required for troubleshooting the alert VssSnapshotNotSupportedOnPD for your Nutanix cluster. Alert overview: The alert VssSnapshotNotSupportedOnPD is generated when VSS Snapshot is not supported for some VMs protected by a Protection Domain. Sample alert: Block Serial Number: 16SMXXXXXXXX
Output messaging:
Check ID: A130200
Description: VSS snapshot is not supported for some VMs.
Causes of failure: Some VMs in the Protected Domain have unsupported configurations.
Resolutions: For more details, please review the alerts generated on the Data Protection page and fix the unsupported configuration.
Impact: Hypervisor-based application consistent snapshot is taken on ESX. Crash consistent snapshot is taken for other hypervisors.
Alert Title: VSS Snapshot is not supported for some VMs.
Alert Message: VSS snapshot is not supported for the following VMs protected by Protection Domain {protection_domain_name}. VMs list: {vm_names}.
Troubleshooting and resolving the issue: To create an application-consistent snapshot, the following conditions should be satisfied: If the VM is part of a protection domain, the "Use Application Consistent Snapshots" option should be checked. Nutanix Guest Tools (NGT) should be installed on the VM. If Nutanix Guest Tools is already installed on the VM, run the command below, check the output, and verify that "Communication Link Active" is true: nutanix@cvm$ ncli ngt list vm-names=VM_Name If an application-consistent snapshot is attempted on a PD, or if a backup application attempts to take an application-consistent snapshot and NGT is not installed on the VM, the alert "VSS snapshot is not supported for the VM" will be raised. Note: The VssSnapshotNotSupportedOnPD Health Check score may not be updated on the health page for all affected VMs. To identify all the affected VMs, go to the Alerts Dashboard -> Alert summary view. Refer to the Alert and Event Monitoring https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide:wc-alerts-management-wc-c.html section of the Prism Web Console Guide. To resolve this alert, install NGT on the VM following the steps listed in the Prism Web Console Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide:man-nutanix-guest-tool-c.html or using Prism Central https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-ngt-pc-installation-t.html. Collecting additional information: Before collecting additional information, upgrade NCC. For information on upgrading NCC, refer to KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC health check bundle via CLI using the following command: nutanix@cvm$ ncc health_checks run_all Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, refer to KB 2871 https://portal.nutanix.com/kb/2871. Attaching files to the case: To attach files to the case, follow KB-1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. Requesting assistance: If you need assistance from Nutanix Support, add a comment to the case on the support portal asking for Nutanix Support to contact you. You can also contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. Closing the case: If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case.
KB9503
Genesis service will be in a crashloop in an ESXi cluster CVM when parsing pyVmomi response
In an ESXi cluster, the genesis service on a CVM can be in a crash loop when parsing a pyVmomi response from the local ESXi host. If the issue is experienced after the CVM boots up, all other CVM services will remain down.
This KB details a situation where a CVM in an ESXi cluster does not fully start its services and the Genesis service is in a crash loop. The key indicator of the issue is Genesis crash-looping with the following traceback. Log location: /home/nutanix/data/logs/genesis.out File "/home/jenkins.svc/workspace/postcommit-jobs/nos/euphrates-5.15-stable/x86_64-aos-release-euphrates-5.15-stable/builds/build-euphrates-5.15-stable-release/python-tree/bdist.linux-x86_64/egg/cluster/genesis/ndp_server.py", line 122, in configure The last line in the traceback indicates pyVmomi/SoapAdapter.py is raising an exception when parsing a response from the local ESXi host. If the CVM is impacted by this issue after bootup, then no other services will come up.
Contact Nutanix Support to investigate further. Please gather details about any recent changes made to the impacted ESXi host.
KB1758
Hyper-V: VLAN tag assigned to InternalSwitch VM Network Adapters
When assigning a VLAN ID to VM Network Adapters via PowerShell, if the "-VMNetworkAdapterName" and "-VMName" options are not specified, the VLAN ID is assigned to both the External and Internal VM Network Adapter, causing Genesis to crash and cluster instability.
On Hyper-V Nutanix clusters, the InternalSwitch is used for intra-host communication between the CVM and the host. On this switch, VLANs are neither expected nor supported. This includes VLAN 0. The issue described here can arise when attempting to assign a VLAN to VM Network Adapters via PowerShell. If the name of the VM Network Adapter and the intended virtual machine are not specified, the VLAN ID will be assigned to all matching adapters. This can include the Internal VM Network Adapter of the CVM and/or host. As a result, the genesis process will fail with the following error in the ~/data/logs/genesis.out file on the CVM: "Failed to reach a node where Genesis is up" To verify whether a VLAN ID has been assigned to the Internal VM Network Adapter of the CVM, run the following PowerShell command on the Hyper-V host: PS C:\> Get-VM -Name NTNX*CVM | Get-VMNetworkAdapterVlan To verify whether a VLAN ID has been assigned to the Internal VM Network Adapter of the Hyper-V host, run the following PowerShell command on the Hyper-V host: PS C:\> Get-VMNetworkAdapterVlan -ManagementOS
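By contrast, a correctly scoped command targets only the intended adapter on the intended VM. The sketch below is illustrative; the VM name, adapter name, and VLAN ID are placeholders, not values from this KB:
PS C:\> Set-VMNetworkAdapterVlan -VMName "App01" -VMNetworkAdapterName "External" -Access -VlanId 100
Specifying both -VMName and -VMNetworkAdapterName prevents the tag from being applied to the Internal VM Network Adapters of the CVM or host.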
Remove VLAN on CVM Internal VM Network Adapter: If the VLAN ID has been assigned to the Internal VM Network Adapter of the CVM, remove the VLAN ID: PS C:\> Get-VM -Name NTNX*CVM | Set-VMNetworkAdapterVLAN -VMNetworkAdapterName Internal -Untagged Verify the change is applied using the command "Get-VMNetworkAdapterVlan". Remove VLAN on Hyper-V host InternalSwitch VM Network Adapter: If the VLAN ID has been assigned to the InternalSwitch VM Network Adapter of the Hyper-V host, remove the VLAN ID: PS C:\> Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName InternalSwitch -Untagged Verify the change is applied using the command "Get-VMNetworkAdapterVlan -ManagementOS". Verify CVM Services: Check the status of services on the CVM and verify services are up using the "genesis status" command: nutanix@cvm$ genesis status The output will show all services on that CVM and their process IDs (PIDs) if they are running. All services (with the exception of "foundation") should have PIDs next to their name. Special Case: VLAN 0: This can also occur if the VLAN tag is set to "0". Hyper-V is not like AHV/ESXi in this respect; VLAN 0 is not treated the same as untagged in Hyper-V. NCC also checks for this particular condition. This is an example where the InternalSwitch on the host was assigned 'VLAN 0' instead of 'Untagged': PS C:\> Get-VMNetworkAdapterVlan -ManagementOS
KB8118
[Objects] How to disable Nutanix Objects service
This KB outlines the steps to disable the Nutanix Objects service on a Nutanix cluster if it has been enabled.
Nutanix Objects is an object store service. The steps in this KB are for disabling only the Objects management service (management plane) on Prism Central. Objects clusters (data plane) consist of Objects microservices running on top of an S-MSP cluster. To understand the different types of MSP clusters (S-MSP, CMSP, etc.), refer here https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=163790935. Any customer looking to disable Objects should follow the instructions below. However, please note that this process: Has the potential to erroneously delete critical customer data. May cause disruption in Prism Central operation. Please be extra cautious and make sure you are deleting the intended Objects clusters. Notes: To remove/disable CMSP, refer to KB-10413 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000btnjCAA. If you are not one of the Objects SMEs, involve one from your region or ask for assistance on the objects-sre Slack channel before following this procedure.
All steps are to be executed on the Prism Central CLI unless stated otherwise. Confirm that the Objects and MSP services are enabled in Prism Central (use allssh for scale-out PC): nutanix@PCVM:~$ genesis status | egrep "aoss|msp" Check if the Docker containers are running for these services in Prism Central (use allssh for scale-out PC): nutanix@PCVM:~$ docker ps | egrep "aoss|msp-controller" Note: There may be other Docker containers running on the PC. We are only interested in the MSP Controller and AOSS containers. Check if there is an Objects cluster deployed. Confirm you do not see any Objects cluster in the Prism Central Objects page (PC Home -> Services -> Objects). If you do see any cluster, work with the customer to ensure the Objects cluster is deleted. SSH into the PCVM and use mspctl to list the MSP clusters. Ensure you do not see any MSP cluster apart from possibly CMSP. To understand the different types of MSP clusters, refer here https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=163790935. If the type is "Controller_MSP", that cluster is a CMSP cluster; you can ignore it for the purpose of this operation. If you see any other cluster type (service_msp or primary_msp) and there are no Objects clusters shown in the UI, then there is possibly a stale Objects deployment, as Objects is currently the only product using an MSP cluster for its data plane (as of Dec 2021). The example below shows a firststore service_msp cluster that is most likely an Objects cluster. Refer to KB 7802 http://portal.nutanix.com/kb/7802 on deletion of Objects clusters. nutanix@PCVM:~$ mspctl cluster list Once you have confirmed that there are no Objects clusters (apart from possibly a CMSP cluster), you can run the script to disable Objects: Download the script from here https://download.nutanix.com/kbattachments/8118/disable_objects_v1.py and copy it to /home/nutanix/cluster/bin on the PCVM. If Prism Central is running a version < 2020.8, uncomment the line "import cluster.consts as consts". If this is a scale-out PCVM, copy the script across all PCVMs. Execute the script (across all PCVMs if scale-out): nutanix@PCVM:~$ python /home/nutanix/cluster/bin/disable_objects_v1.py Check genesis.out in Prism Central to see if the disable workflow has been completed successfully: 2019-06-20 01:27:41 INFO cluster_manager.py:591 Request to disable services: {"service_list": ["AossServiceManagerService"]} Run the following commands to delete the 2 files, which will remove the AOSS services from the genesis status output (use allssh for scale-out PC): nutanix@PCVM:~$ /bin/rm /home/nutanix/data/locks/aoss_service_manager You'll also see some firewall change messages in genesis.out, which are required since the ports used by these services are no longer needed. Salt will also execute to update the salt state. Once the script has been executed successfully, run the cluster status command to ensure the aoss_service_manager service is no longer listed. Delete the disable_objects_v1.py script file. List the Docker images on the PCVM (use allssh for scale-out PC).
Please note the tag (in the case below, it is 3.2.0): nutanix@PCVM:~$ docker image ls | grep aoss Remove only the aoss_service_manager image, making sure to include the tag (use allssh for scale-out PC): nutanix@PCVM:~$ docker image rm aoss_service_manager:3.2.0 Note: the tag here (3.2.0) is taken from the previous step (docker image ls). If the client has pc.2023.x or later installed, refer to KB-14245 https://portal.nutanix.com/kb/14245 for the detailed steps to remove Objects from My Apps under Apps and Marketplace.
KB12020
NCC Health Check: pc_backup_limit_check
Troubleshooting and resolving alert A200332 - pc_backup_limit_check.
This Nutanix article provides the information required for troubleshooting the alert A200332 - pc_backup_limit_check for your Nutanix cluster. This alert verifies whether the Prism Central backup has exceeded the backup limit. The alert was introduced as an enhancement to the existing PC-DR feature originally introduced in pc.2021.7. This NCC check requires a minimum of NCC 4.5.0 and Prism Central pc.2022.4. Sample alert: Warning : Prism Central backup limit reached on these Prism Elements: {pe_list} Prism Central disaster recovery backs up configuration data from Prism Central onto Prism Element clusters. There are scenarios in which this backup is paused. Scenario 1: The capacity of the Prism Element clusters to back up data is limited. To safeguard the infrastructure running on the Prism Element, such as user VMs, the backup is paused if the data on Prism Central exceeds the capacity of the data that the Prism Element can hold. Although this is a rare occurrence, when it happens, users will see the pause icon on the Prism Central Management page. Scenario 2: When the Prism Central version is being upgraded, the backup is paused. Once the upgrade completes, the backup resumes automatically.
Output messaging:
Description: Check if the Prism Central backup limit is reached.
Causes of failure: Prism Central backup limit reached.
Resolutions: Refer to KB-12020 for further details.
Impact: Prism Central backup to Prism Elements is paused.
Alert Title: Prism Central backup limit reached.
Alert Message: Prism Central backup limit reached on these Prism Elements: {pe_list}
To resolve the alert in the case of Scenario 1 above, manual intervention is required. The following actions may be tried: Upgrade the version of AOS on the target PE clusters where the backup is stored. Nutanix has been increasing the amount of data that can be backed up; for example, the capacity of AOS 6.1 is almost three times that of AOS 6.0. The capacity for the data stored on the Prism Element is proportional to the number of nodes in the Prism Element. If possible, add more nodes to the backup targets. Alternatively, you may remove an existing backup target and add another one that has a larger number of nodes. If neither of the actions above resolves the issue, engage Nutanix Support http://portal.nutanix.com to assist.
""Model of the entity"": ""Intel X710 4P 10G EX710DA4G1P5 Firmware on AHV el7\t\t\t\tIntel X710 4P 10G EX710DA4G1P5 Firmware on ESXi 6.7\t\t\t\tIntel X710 4P 10G EX710DA4G1P5 Firmware on ESXi 8.0\t\t\t\tIntel X710 4P 10G EX710DA4G1P5 Firmware on AHV el8\t\t\t\tIntel X710 4P 10G EX710DA4G1P5 Firmware on ESXi 7.0\t\t\t\tIntel X710 4P 10G EX710DA4G1P5 Firmware on HyperV 2022\t\t\t\tIntel X710 4P 10G EX710DA4G1P5 Firmware on HyperV 2019""
null
null
null
null
KB8669
Nutanix Files - long filenames are not supported yet.
When the byte length of a filename represented as UTF-8 is longer than 255, the file cannot be copied into an SMB share on Nutanix Files, even though other SMB file server products, including the File Server role of Windows Server, may allow such a filename.
A Nutanix Files cluster may report the error "The file name you specified is not valid or too long" when migrating from NetApp or a Windows File Share to a Nutanix Files cluster if the source contains file names that are too long. This impacts all operations with the Nutanix file server, including the Files TLD MMC: attempting to delete files with names longer than 255 bytes using the Files TLD MMC will make it unresponsive. Due to limitations in the Minerva file system (ZFS), file and folder paths, including the FQDN path to the share, cannot exceed 255 bytes, even if the underlying client OS supports a larger character limit. Note: Characters such as Chinese or Japanese characters in UTF-8 encoding use between 1 and 4 bytes per character. If multi-byte characters are used, the file names need to be less than 54 characters for 4-byte languages. When copying a file that has a filename in UTF-8 encoding with a length longer than 255 bytes to an SMB share on Nutanix Files, you may encounter alerts such as: "<<Destination Path>> too long" or "the file name is too long for the <<destination folder>>" (the destination folder being the Nutanix Files share). Example of an invalid long filename: 84 kanji characters plus a three-letter file extension (84 x 3 + 4 (".txt") = 256 bytes): 鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖鯖.txt
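To check whether a given name exceeds the limit before migrating, the UTF-8 byte length can be computed, for instance with a Python one-liner (a sketch; the sample filename is illustrative):
$ python3 -c 'import sys; print(len(sys.argv[1].encode("utf-8")))' "鯖鯖鯖.txt"
This prints 13 (three 3-byte kanji plus the 4-byte ".txt" extension); any result above 255 will fail on the share.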
The issue is fixed in Nutanix Files 4.1 and above. Upgrade to the fixed version of Nutanix Files to resolve this issue.
KB10379
Nutanix Kubernetes Engine - kubectl commands return "Forbidden" error for domain users assigned the "Cluster Admin" or "Viewer" role
Kubeconfig files downloaded with domain accounts assigned the "Cluster Admin" or "Viewer" role do not have access to execute kubectl commands against the Nutanix Kubernetes Engine Kubernetes cluster. For these users, Kubernetes RBAC needs to be configured.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.Prism Central domain user accounts configured with the Cluster Admin or Viewer role have the ability to download a kubeconfig file for accessing Karbon Kubernetes clusters; however, when configured to use this kubeconfig file, kubectl commands may fail with a Forbidden error message. For example, attempting to list pods may fail with the following error: $ kubectl get pods A kubeconfig file downloaded by a user with a User Admin role does not encounter this behavior and is able to execute kubectl commands successfully.
This is expected behavior, as domain users without a User Admin role assigned do not have permission to access the NKE Kubernetes cluster by default. Domain users assigned a User Admin role will, by default, have full permissions on the NKE Kubernetes cluster. Domain users assigned the Cluster Admin or Viewer roles must be explicitly granted access to the cluster via Kubernetes RBAC. For an example of configuring RBAC, see the following Nutanix blog post: https://next.nutanix.com/community-blog-154/providing-rbac-for-your-karbon-kubernetes-clusters-33132.
KB6041
Using bpanalyzer for VDI Guest Optimization
Bpanalyzer is the result of a community effort to create a utility that helps assess the performance optimization level of a given VDI guest. If CPU contention is experienced, or if there are questions regarding the ability of a given cluster to handle a specific compute workload, the results of bpanalyzer should be examined and applied to ensure that the VDI guests are running as efficiently as possible with respect to CPU consumption. Bpanalyzer has two components, where # is the version/revision number: The setup file BPAProfile_v# (initiates the installer for the binary). The XML ruleset BPASetupv# (loads a series of values to check). The files can be accessed at www.bpanalyzer.com http://www.bpanalyzer.com/index.php/downloads/
It is best to run the utility against a VDI guest that is representative of the majority of VDI guests. In environments where there are tiers of users in terms of compute, you may want to run the utility on VMs that represent each tier. Installing the utility requires administrative privileges on the guest, so you will need to have an account with those permissions already logged in to the guest, or be able to use administrative credentials when prompted during the install procedure. Once bpanalyzer is installed, launch the application from the Start menu. From the File menu, select "Load BPA RuleSet" and browse to the directory containing the extracted XML ruleset (aka profile). After loading the ruleset, the checks will be run against the system (but will not be automatically applied). Any items with a red 'X' should be evaluated and applied to the system in order to fully optimize the VM. Individual items can be selected, and a description of the item will be provided in the right pane. NOTE: At this point in time, there isn't any information regarding the benefit of applying each option individually, so we can't say which items will provide more or less benefit in a given scenario. You can select all failed items using the "Select" menu or CTRL+Shift+A. To apply the suggested changes, select the check boxes and then click "Apply Items". Since most changes are registry changes, they will require a reboot to take full effect. NOTE: Be aware of any group policy objects that may have an effect on registry settings. In some environments, GPOs can revert registry settings on reboot, and this will revert many of the configuration changes made by bpanalyzer.
KB13976
Windows VMs with vTPM enabled are unable to be snapshotted and give the error "Failed to capture the Recovery Point for VM 'VM_NAME'"
When trying to take a backup or Protection Domain snapshot of a Windows VM with vTPM enabled, an error is returned: "Failed to capture the Recovery Point for VM 'VM_NAME'".
Windows VMs with vTPM enabled cannot be snapshotted for DR (Protection Domain snapshot) or third-party backup. The following alert is generated: Alert ID: A130157 To confirm this issue is related to vTPM, log in to the CVM as the "nutanix" user and search for the text "VM has vTPM enabled" in the /home/nutanix/data/logs/cerebro.INFO logs: nutanix@cvm$ allssh 'grep "VM has vTPM enabled" /home/nutanix/data/logs/cerebro.INFO' The VM name can be confirmed with: nutanix@cvm$ acli vm.get xxxxxxxx-xxxx-xxxx-xxxx-xxxx193f VM_NAME will appear in the alert for this issue. The following one-liner can also be used to list VMs with vTPM enabled: nutanix@cvm$ for i in `acli vm.list | awk '{print $NF}'`; do acli vm.get $i | egrep -i 'vtpm| name'; done Currently, third-party backups are not supported. When third-party backups are used, the behaviour described below is seen. Three tasks are created: the "create_vm_snapshot_intentful" task fails, the "EntitySnapshot" task is aborted, and the "delete_vm_snapshot_intentful" task succeeds. nutanix@cvm:~$ ecli task.list | grep -C5 cda0d45d Examining the tasks more closely shows that the "create_vm_snapshot_intentful" task returns the error message "INTERNAL_ERROR: Internal Server Error." with the error code "500". The "EntitySnapshot" task will error with the message "Failed to snapshot entities" and the error code "11": nutanix@cvm:~$ ecli task.get 83e4b60f-1afe-4903-b9e8-3f547ab8332a nutanix@cvm:~$ ecli task.get cda0d45d-3659-58ec-ab0d-a4c861964dc3 Using egrep, we can look for the following errors corresponding to the VM, snapshot, and task UUIDs: nutanix@cvm:~$ egrep -i 'CBR incapable VM|no CG snapshots available|Failing the PD snapshot|Failed to snapshot entities' /home/nutanix/data/logs/cerebro.*INFO*
As of AOS 6.5.1, taking a DR snapshot of a VM with vTPM enabled is not possible. See the Security Guide: Considerations for Enabling vTPM in AHV VMs https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_5:mul-vtpm-considerations-acli-ahv-r.html. You can still take an AHV snapshot of the VM (via Prism > VM, select the VM, and then "Take Snapshot").
KB12992
AOS or AHV upgrade fails due to ahv-host-agent crash loop
An AOS or AHV upgrade can fail at the pre-check stage when ahv-host-agent is in a crash loop. A dormant bug in AHV el6.nutanix.20170830.434 puts ahv-host-agent in a crash loop if Flow Network Security is enabled. This article looks into the solution for such cases.
A dormant bug in AHV el6.nutanix.20170830.434 causes ahv-host-agent to crash continuously when Flow Network Security is enabled. Service status for ahv-host-agent in AHV 20170830.434 when Flow Network Security is enabled: root@AHV ~# /etc/init.d/ahv-host-agent status The ahv-host-agent logs (/var/log/ahv-host-agent.log) will show the following entry for the crash: 2021-12-09 11:46:22,228 manager.py:110 INFO EventManager thread is listening on 127.0.0.1:2031 The solution for this issue is to upgrade AOS and AHV immediately. However, the upgrade precheck "host_disk_usage_check" relies on ahv-host-agent. If ahv-host-agent is dead (as in this case), the precheck will fail.
The solution here is to temporarily stabilize ahv-host-agent to accomplish the AOS/AHV upgrade. The ahv-host-agent can be stabilized by stopping the conntrack_stats_collector service on the AHV hosts. Flow Network Security visualization will not work on PC while conntrack_stats_collector is stopped; this has low impact, and the service stoppage is only temporary. Resolution steps: Initiate the AOS upgrade. Stop conntrack_stats_collector on all hosts just before the upgrade precheck starts: nutanix@NTNX-CVM:~$ hostssh '/etc/init.d/conntrack_stats_collector stop' Start ahv-host-agent on all hosts: nutanix@NTNX-CVM:~$ hostssh '/etc/init.d/ahv-host-agent start' Now the precheck will pass. Once the pre-upgrade is completed, Genesis does an auto restart. This restart turns conntrack_stats_collector back on, and at this stage the ahv-host-agent service will crash again. However, as the precheck is already completed, this will not have any impact. Repeat the same steps for the AHV upgrade. Once AHV is upgraded to any version higher than el6.nutanix.20170830.434, ahv-host-agent will be stable.
KB10989
AHV br0 bond configuration may be lost and node go down when updating the uplinks on the bridge
In an AHV multi-bridge environment, the bond configuration might be lost when updating the uplinks if adding interfaces that belong to another bridge. If br0 is impacted, the node may go down, as all the uplinks are lost in the default management bridge for both AHV and the CVM.
In an AHV/AOS multi-bridge environment configured via CLI, the bond configuration might be lost when updating the uplink configuration to add uplink interfaces that belong to another bridge. If it is the br0 uplink, the node may go down, as all the uplinks are lost in the default management bridge for both AHV and the CVM. For example, take the sample multi-bridge configuration below with br0 and br1, each with 2 uplinks: ---- br1-up ---- When trying to add "All NICs" (or "All [10G|1G] NICs") to the bond br0-up via the Prism "Uplink Configuration" option, if any of the included NICs already belong to another bridge, for example br1-up, then the br0-up configuration may be left with no active uplinks on the bridge, which leads to a node-down situation. Verification: nutanix@CVM$ manage_ovs show_uplinks Example output from the AHV host log file /var/log/acropolis_ovs.log: --SNIP--
To recover the node immediately, gain access to the CVM via IPMI and issue the command below with only the required uplinks (update the NICs according to the specific environment/required configuration; "eth0,eth2" are used for the purpose of this example): nutanix@CVM$ manage_ovs --bridge_name br0 --interfaces eth0,eth2 --bond_mode active-backup update_uplinks To avoid the above issue, or to complete the br0 uplink configuration successfully after recovery when adding uplinks that already belong to another bridge (i.e. br1-up), first remove the intended uplinks from br1. Refer to KB-9383 https://portal.nutanix.com/kb/9383 for any issues with the command above. Consider engaging Nutanix Support at https://portal.nutanix.com/ for further help with bond uplink configuration and to review the health of the cluster.
KB14147
Active Directory, SNMP, or SMTP configuration may be lost following BIOS firmware upgrade with LCM when using G6/G7 BMC 7.11
When upgrading the BIOS firmware on an NX-platform node of the G6 or G7 generation running BMC firmware version 7.11, some users have reported that the configuration for extra features like Active Directory role mappings, SNMP, and SMTP, previously made in the IPMI Web UI, was not retained. Basic network and user configurations were not affected by this issue.
On NX-platform nodes of the G6 and G7 generations, some secondary features configured in the IPMI Web UI do not preserve their configuration after LCM has been used to upgrade the BIOS firmware. Nutanix Engineering is actively investigating this issue. The issue occurs while the BMC firmware is at version 7.11 and the BIOS is being upgraded to PB60.001 from an earlier version. The BMC is only reset to factory defaults during upgrades of the BIOS and BMC firmware, so other firmware or software upgrades can be done in LCM without fear of hitting this issue. This issue does not produce any failure in the LCM upgrade task. The basic networking and user configurations, such as the user-defined IP addresses and non-default passwords for the ADMIN user, are still preserved. IPMI features which may lose configuration following the upgrade: Active Directory (including Role Mappings), SMTP, and SNMP.
This issue is resolved in LCM-2.5.0.4. Upgrade to LCM-2.5.0.4 or a higher version - Release Notes | Life Cycle Manager Version 2.5.0.4 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-LCM:top-Release-Notes-LCM-v2_5_0_4.html If you are using LCM for the upgrade at a dark site or a location without Internet access, upgrade to the latest LCM build (LCM-2.5.0.4 or higher) using the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide-v2_5:Life-Cycle-Manager-Guide-v2_5 For LCM versions < 2.5.0.4, follow one of the instructions below: Pre-upgrade considerations: Users currently running BMC firmware version 7.11 who need to upgrade the BIOS on their G6/G7 nodes should back up the configuration by downloading it from the IPMI Web UI before the upgrade. If the nodes lose the configuration for these features following the upgrade, the save_config.bin file generated here can be uploaded to the IPMI Web UI to restore the configuration. To save a copy of the IPMI configuration, perform the following steps: Log in to the IPMI Web UI of your G6/G7 nodes and click on the "Maintenance" tab. Select "IPMI Configuration". Click the "Save" button next to "Save IPMI Configuration". This downloads a file "save_config.bin" to your workstation. It is recommended to rename the file to something meaningful like "nyc_cluster_host_15_ipmi_config.bin". Once you have saved the IPMI configuration for all the G6 and G7 nodes in the cluster, you may safely proceed with the LCM firmware upgrade. After the firmware upgrade, log in to the IPMI Web UI on these nodes and check if the configuration of any features is lost. If you encounter any nodes which have lost their configuration, upload the save_config.bin file for that specific node into the same page in the IPMI Web UI. Under "Maintenance" and "IPMI Configuration," you will see an option to "Reload IPMI Configuration" from the save_config.bin file that you downloaded previously. Once you upload this file, log back into the IPMI Web UI and verify that the original configuration is restored. Workaround for users who have already upgraded without backing up the configuration: If the administrator logs into the IPMI Web UI with their Active Directory account, log in with the ADMIN user account instead. Manually reconfigure any features whose configuration was lost. You can find the forms for Active Directory, SNMP, and SMTP under the "Configuration" tab. If you have other G6/G7 nodes running BMC 7.11 which still need to be upgraded, save the IPMI configuration first by following the steps in the above section, "Pre-upgrade considerations."
KB8023
Security scan causing Stargate crashes
A network security scanner probing the CVM's ports can lead to Stargate crashes.
In some environments where customers use network security scanners in the same subnet as the CVMs, Stargate crashes can be seen throughout the cluster. An example is seen in the stargate.FATAL log: Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg Stargate is crashing because it receives an RPC it does not know how to handle. The Stargate logs are located at /home/nutanix/data/logs/stargate.[INFO|WARNING|ERROR|FATAL].
Option 1Use modify_firewall to block the specific IP scanning the environment. Refer to Security Guide - Enabling IP Set Based Firewall https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_7:sec-ip-set-based-firewall-enable-t.html Option 2Configure backplane network segmentation to isolate the CDP traffic. Refer to Security Guide - Securing Traffic Through Network Segmentation https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:wc-network-segmentation-intro-wc-c.html.
KB11754
[Move] VMs Migrated From AWS Using Move Do Not Have Correct CPU and RAM Configuration
There is currently an issue with Move where certain EC2 types are not mapped correctly for inventory. This leads to Move migrating them over with CPU and RAM different from what is expected.
As of Move 3.7.0+, Move does not correctly migrate all AWS EC2 instance types because some types are not correctly mapped for CPU and RAM. In these cases, Move migrates using the default values of 1 CPU and 512 MB of RAM. The following instance types are correctly mapped and should be expected to migrate with the correct CPU and RAM configured: T1Micro, T2Micro, T2Nano, T2Small, T2Medium, T2Large, T2Xlarge, T22xlarge, M4Large, M4Xlarge, M42xlarge, M44xlarge, M410xlarge, M416xlarge, R4Large, R4Xlarge, R42xlarge,
Any AWS VMs with EC2 instance types outside of those listed in the description will need to have their CPU and RAM reconfigured after migration. ENG-411515 https://jira.nutanix.com/browse/ENG-411515 is open for this issue.
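If the VMs land on an AHV cluster, the CPU and RAM can be corrected from any CVM; a minimal sketch, assuming a hypothetical VM named "app-vm1" and a target sizing of 4 vCPUs and 16 GB of RAM (power the VM off first if the running configuration cannot be changed hot):
nutanix@cvm$ acli vm.update app-vm1 num_vcpus=4 memory=16G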
KB10934
On AOS 5.19, NGT installation may get stuck in a "RUNNING" state.
On AOS 5.19, NGT installation may get stuck in a "RUNNING" state.
Description: On AOS 5.19, NGT installation may get stuck in a "RUNNING" state, and the user may not be able to perform VM operations until the NGT installation task is stopped. Diagnostics: Installation of NGT is stuck in a running state. Select a guest VM and click on Actions. The user will notice that all the options are disabled and cannot execute any operation on that guest VM while the task is in a running state. Get the VM UUID using the nuclei vm.get <vm name> command. Check ecli task.list entity_list=<vm uuid> 35aa3553-02f3-453e-661c-576ddc18ae33 anduril 159948 Put kRunning From anduril.out logs: 2020-11-16 14:51:26,980Z INFO vm_base_model.py:313 Re-initializing ngt client Check ecli task.get <task uuid of anduril task>. Check if the task has been running for more than 30 minutes, or 1 hour in case of a bulk install (say, NGT install triggered on 100 VMs). If so, the task is stuck in a "RUNNING" state.
Workaround: SSH to Prism Element. Execute a rolling restart of the anduril service on all the nodes: nutanix@CVM$ allssh genesis stop anduril && sleep 10 && cluster start && sleep 30 Now the task will get aborted. Log in to the on-prem Prism Central, go to the VM page, select the desired VM, and click Actions > Install NGT. A workaround fix has been added in AOS 5.19.1, where a timeout is added so any stuck NGT install fails after the timeout. Solution: For the actual fix, refer to ENG-353909 https://jira.nutanix.com/browse/ENG-353909
KB7127
Login to Prism Central with AD account fails with error message "Server is not reachable"
Users have been able to log in to Prism Central with AD until recently, and now either some or all accounts are unable to log in.
Scenario 1 Login to Prism Central with AD users fails for one or all users with the error: "Server is not reachable". For a single failed account only, in ~/data/logs/aplos.out, you may see the following ERROR: 2019-03-19 11:41:49 ERROR mixins.py:203 Duplicate entities {'username': '<USERNAME@DOMAIN>'}: For all accounts unable to log in, in aplos.out, you may see the below ERROR: 2019-03-20 14:09:25 ERROR directory_service.py:780 Service account not found for the directory service with domain= <DOMAIN>. Please update service account to proceed further. If you don't see errors in aplos.out and none of the AD credentials or admin accounts work, then "Server is not reachable" in the PC UI could be a symptom of another issue. This behavior also occurs when the PC Cassandra service is in a crash loop because the metadata filesystem /dev/sdc on the PC is full. Check prism_gateway.log in the Prism Central logs for more information on Cassandra: E0204 10:32:25.514113 20964 cassandra_token_util.cc:214] Partition range for node ip: 10.100.41.53 svm id: 2 is unavailable! Scenario 2 Another case where the same symptoms occur is when authentication is moved from vanilla LDAP to AD or another LDAP directory service where the same user exists. In this case, the user is associated with a non-existent Directory Service UUID. For example, the user lynton cannot log in: <nuclei> user.list Getting the details of the user: <nuclei> user.get 7f120267-5ee5-4023-a6a2-f3280dea26e4 The Directory Services configured: <nuclei> directory_service.list When trying to delete from aplos, the following is logged in aplos.out on the leader: found item in cache for key 4a8e0fa4-3467-5170-8982-9e8d7cac8c0c 4e4043d1-2983-52e4-b8dc-0665548f0b43 uuid: "4e4043d1-2983-52e4-b8dc-0665548f0b43" Scenario 3 This can also be seen if RBAC authentication fails when users are defined via AD groups and "directory_service_reference" for the given groups has been nulled out at some point, likely during the last upgrade. In the debug output it can be seen that the user is successfully authenticated from AD: DEBUG 2022-01-26 15:14:49,858Z Thread-1 authentication_connectors.basic_authenticators.BasicAuthenticationManager.updateUserDataInRequest:186 user profile generated : UserProfile(username=adminxyz@ntnx.com, userType=ldap, userUuid=null, tenantUuid=null, emailId=null, firstName=null, locale=en_US, region=en_US, domain=null, lastName=null, middleInitial=null, userGroupUuids=null, idpUuid=null) In the aplos log, the login attempt throws errors for 'user group does not belong to the domain' and 'TypeError: 'NoneType' object is not iterable': 2022-01-26 15:56:15,176Z ERROR user_group.py:189 name = cn=admin_group,ou=nutanix,ou=security,ou=groups,ou=ntnx global objects,dc=ntnx,dc=com does not belong to the domain = ntnx.com. When that user_group was checked in nuclei, it was seen that the directory service reference had been nulled out for these user groups, which explains the "TypeError: 'NoneType'" errors. nutanix@PCVM:~$ nuclei user_group.list format=json Proceed with the solution for scenario 3 if the above condition is met.
Scenario 4 Like the issue with duplicate users, Prism Central can have duplicate groups. List the groups this way: nutanix@PCVM:~$ nuclei user_group.list A quick way to identify whether there are duplicate groups is with a long command like this: nutanix@PCVM:~$ for uuid in `nuclei user_group.list 2>/dev/null| grep -oE '[a-f0-9\-]{36}'`; do nuclei user_group.get $uuid 2>/dev/null | grep "distinguished_name:"; done In the above output, there are two groups called "server team". An easier thing to check might be the display name: nutanix@PCVM:~$ for uuid in `nuclei user_group.list 2>/dev/null| grep -oE '[a-f0-9\-]{36}'`; do nuclei user_group.get $uuid 2>/dev/null | grep "display_name:"; done Once you see duplicate groups, list them out and get the UUIDs of each. Scenario 5 Login attempts for certain AD users who are part of Calm Projects will fail with the error "Server is not reachable". The aplos log signature will contain the following backtraces. Ensure a complete signature match: nutanix@PCVM:~$ less data/logs/aplos.out Scenario 6 Deleting the user from the user list may fail with the signature "message: User cannot be deleted as there are resources associated with this user. Change the ownership on the associated entities and try again": <nuclei> user.delete e28bcab2-db45-5ff4-b6a1-2e4329fe2a50 Scenario 7 Login attempts for certain AD users fail with the error "Server is not reachable"; in ~/data/logs/aplos.out, you may see the following ERROR: 2023-09-04 00:23:27,540Z INFO athena_auth.py:135 Basic user authentication for user tfmncs-tantoh.adm@gateway.gov.sg Scenario 8 You may also see the "Server is not reachable" error reported if the user mapping for the user isn't configured. In this case you may see a line similar to this in the prism_gateway.log: ERROR 2023-10-17 17:31:36,022Z http-nio-127.0.0.1-9081-exec-4 [] auth.commands.LDAPAuthenticationProvider.assignUserRoles:419 No role mapping is defined in prism for user <USERNAME>@<DOMAIN>. Authorization failed
Scenario 1 and Scenario 2 If you see the signatures for a single or a few accounts unable to log in with AD, while others have no issue, look for a duplicate entry in authorized accounts with the below command: nutanix@pcvm$ nuclei user.list NOTE: The command lists only 20 users at a time. To search the entire list, use the count flag. For example: nutanix@pcvm$ nuclei user.list count=400 If you see a duplicate user, get each user's details until you find the entry that does not show access_control_policy_list. Look for the users that do not contain any value for "access_control_policy_reference_list". The below user contains access_control_policy_list: nutanix@PCVM:~$ nuclei user.get b34c277e-fb08-5cf6-8296-7cf11b5dbf48 Look into the next user with the same name but a different UUID and confirm the access_control_policy_list is empty: nutanix@PCVM:~$ nuclei user.get ed0fee04-551d-5cb4-b93b-27e8a6f23a81 Delete the user that does not contain information in the access_control_policy_list: nutanix@PCVM:~$ nuclei user.delete ed0fee04-551d-5cb4-b93b-27e8a6f23a81 If the deletion of the user returns the following message: - message: User cannot be deleted as there are resources associated with this user. verify whether the user is a member of a project or role configured in Prism Central, as in the below example, where username6 is a member of the SSP default project and project-NTNX: <nuclei> project.get 1a51aff6-59bd-45dd-a6b5-0ab7679b7766 From the above example, username6 belongs to both projects and cannot be removed because of it. If the issue persists even after deleting the user from the Project and Roles, verify Scenario 6. If you see the signatures for all users unable to log in with AD accounts, log in with the admin user and re-enter the AD authentication password, or reconfigure AD altogether if the previous step did not help. If you instead found the PC Cassandra service crashing upon inspection of the prism_gateway.log, confirm that metadata disk usage is high on Prism Central by checking its filesystems with df -h: nutanix@pcvm$ df -h In case you find the PC is undersized, it is advisable to convert the small Prism Central to large [KB-2579 https://portal.nutanix.com/kb/2579] or perform a PC scale-out. Scenario 3 We will remove the role mapping, delete the group from nuclei, and then re-add the role mapping. Note that during this time none of the users in the group will be able to log in. Take a note of the role mapping for that directory: nutanix@pcvm$ ncli authconfig ls-role-mappings name=<Directory name> Unmap the group from all 'group role mappings' in Prism. Get the group info from nuclei: nutanix@pcvm$ nuclei user_group.list format=json 2>&1 | grep -A35 -B6 <GroupName> Adjust -A until you get the "spec": {}, data, which contains the UUID of the group that needs to be deleted: nutanix@PCVM:~$ nuclei user_group.list format=json 2>&1 | grep -A35 -B6 admin_group Delete the group via nuclei "nuclei user_group.delete <UserGroupUUID>". When running the delete, you may need to do it a couple of times until it returns like below: nutanix@PCVM:~$ nuclei user_group.delete b0761216-e30d-4248-a38e-cae2743d2317 Confirm in nuclei that the group info is deleted. Remap the group to any given role(s) in Prism. Test the login of the user which was failing earlier. When the group is re-mapped to the role, it should be re-created with the directory_service_reference populated.
nutanix@PCVM:~$ nuclei user_group.list format=json 2>&1 | grep -A35 -B6 "admin_group" Scenario 4 Get each user group until you find the duplicate groups (the list does not show the group names). Look for the groups that do not contain any value for "access_control_policy_reference_list". The below group contains no reference list: nutanix@PCVM:~$ nuclei user_group.get xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx If you keep looking, you will likely find a group that does. Do NOT delete this group: nutanix@PCVM:~$ nuclei user_group.get xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx It is best to evaluate which group can be removed to resolve the duplicate. The group with no access control list is the one that can typically be removed. The command to remove a user group: nutanix@pcvm$ nuclei user_group.delete xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Scenario 5 The issue mentioned in Scenario 5 can occur if there is a Calm project with an access control policy that has a malformed filter entity. Get the filter uuid from the Scenario 5 log signature; the KeyError value in the signature is the uuid of the malformed filter entity: 2022-05-26 04:12:53,875Z ERROR users_info.py:167 Traceback (most recent call last): Identify the project entity that is referenced in the access_control_policy where the malformed filter is referenced, and modify the FILTER value in the command below accordingly: nutanix@PCVM:~$ cd /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts Identify the name for the project using the project uuid from the previous step: nutanix@PCVM:~$ nuclei project.get 03d5a868-3c42-47ba-9a57-5c6e327c0700 | grep "project_reference" -A2 Via the Prism Central Calm UI, do a dummy update of the identified project to regenerate the ACP filters: Projects -> Select project name identified in previous step -> Infrastructure -> NTNX_LOCAL_AZ -> Configure resources -> Ensure account, clusters and subnets selected -> Confirm -> Save Scenario 6 Confirm the user is not a part of any Project/categories. Refer to the script in internal comments. Scenario 7 Check with your Active Directory Administrator for the following items about your service account. Refer to KB-9314 https://portal.nutanix.com/kb/9314 for details: Has your service account been locked from too many failed attempts? Has your service account been disabled or expired? Does your service account have "User Must Change Password On Next Login" checked? Test that the service account can log in successfully and has sufficient permissions. Scenario 8 To fix this, configure an appropriate role mapping for the user which is failing to log in. You can do this from the Role Mapping section of the settings page in Prism. You can either configure the user for a role directly or configure a role for an Active Directory group the user is a part of. ENG-486459 is open to track a better error message for this scenario, since "Server is not reachable" is not accurate.
KB14760
Stuck stretch after removing a VM from a SyncRep Protection Policy
Stuck stretch after removing a VM from a SyncRep Protection Policy
After removing a VM from a SyncRep protection policy, the VM is successfully unprotected; however, the VM stretch is not removed from either the source or the destination PCVM. There is also a "ChangeStretch" task stuck on the source cluster. On the source side (where the VM resides), the PCVM reports the VM as Unprotected: nutanix@PCVM$ nuclei mh_vm.get c45065d7-6abe-4a40-a9a9-8b8957d9d053 However, the source and destination PCVMs report the stretch for the VM: Source PCVM: nutanix@source-PCVM:~$ mcli dr_coordinator.list Destination PCVM: nutanix@destination-PCVM:~$ mcli dr_coordinator.list On the source PCVM (the tasks are also visible from PE), there are stuck Anduril and Magneto running tasks to change the stretch, similar to the below: nutanix@PCVM:~$ ecli task.list include_completed=0 The change stretch task is not running on the destination PCVM/PE. On the source PE, the anduril log shows failed RPC requests to the remote CVMs: nutanix@CVM:~/data/logs$ grep 10.112.3.6 anduril.out|grep "Creating stub for Ergon"|tail -n5 2023-04-24 10:28:07,278Z INFO utils.py:780 remote_cluster_id: 1481130952123024958, remote_cluster_uuid: 0005d653-4ba3-9952-148e-08c0eb21063e, remote_pc_uuid: 9387ff3f-7d49-463d-afa7-e5a9655e4629, remote_ip_list: [u'10.112.3.6', u'10.112.3.2', u'10.112.3.8', u'10.112.3.7', u'10.112.3.4', u'10.112.3.1', u'10.112.3.5', u'10.112.3.3'] Checking the network connection between the clusters, it can be observed that the required ports are not open for one or more CVMs in the cluster. In this particular example, port 2090 is not open for some CVMs: nutanix@NTNX-EWAA011333-A-CVM:10.112.3.6:~$ allssh "sudo iptables -L -n |grep 2090"
This issue occurs due to a lack of network communication between the cluster services. In the above example, it was port 2090 (Ergon) that highlighted the issue. As detailed in the SyncRep requirements https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-requirements-synchronous-protectionpolicy-pc-r.html, if the source and destination PE clusters are on different subnets, the CVM firewall must be open from both ends for ports 2030 (Hyperint / Acropolis), 2036 (Anduril), 2073 (NGT) and 2090 (Ergon). In the case of TH-11110 https://jira.nutanix.com/browse/TH-11110, it was observed that the customer had performed a cluster expansion of 4 nodes but had not enabled the firewall flow for the new CVM IPs. To fix the issue, open the firewall for all CVMs in both directions for ports 2030, 2036, 2073 and 2090 as needed. Note: Use the eth0 interface only. eth0 is the default CVM interface that shows up when you install AOS. For a network-segmentation-enabled Nutanix cluster, use the ntnx0 interface. To open the ports for communication to the recovery cluster, run the following command on all CVMs of the primary cluster: nutanix@cvm$ allssh 'modify_firewall -f -r remote_cvm_ip,remote_virtual_ip -p 2030,2036,2073,2090 -i eth0' Replace remote_cvm_ip with the IP address of the recovery cluster CVM. If there are multiple CVMs, replace remote_cvm_ip with the IP addresses of the CVMs separated by commas. Replace remote_virtual_ip with the virtual IP address of the recovery cluster. To open the ports for communication to the primary cluster, run the following command on all CVMs of the recovery cluster: nutanix@cvm$ allssh 'modify_firewall -f -r source_cvm_ip,source_virtual_ip -p 2030,2036,2073,2090 -i eth0' Replace source_cvm_ip with the IP address of the primary cluster CVM. If there are multiple CVMs, replace source_cvm_ip with the IP addresses of the CVMs separated by commas. Replace source_virtual_ip with the virtual IP address of the primary cluster. In TH-11110 https://jira.nutanix.com/browse/TH-11110, as the issue was long-standing, anduril needed to be restarted on the cluster where the tasks were stuck in kQueued status, after which they progressed: nutanix@cvm$ genesis stop anduril; cluster start
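To verify the required ports are reachable before or after applying the firewall changes, a simple bash TCP probe can be run from a CVM on one site toward a CVM on the other site (a sketch using only standard shell built-ins; replace <remote_cvm_ip> with an actual CVM IP):
nutanix@cvm$ for port in 2030 2036 2073 2090; do timeout 3 bash -c "echo > /dev/tcp/<remote_cvm_ip>/$port" 2>/dev/null && echo "port $port open" || echo "port $port closed/filtered"; done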
KB11345
Flow Network Security unexpected policy behaviour when AHV version is higher than AOS compatible version
Flow Network Security policy may not work as expected if a cluster uses a later version of AHV that is incompatible with the AOS version.
If a cluster uses a higher version of AHV that is incompatible with the AOS version, the Flow Network Security policy may not work as expected. Symptom 1: HCI features work normally, and NCC checks do not report compatibility alerts. Symptom 2: A monitoring-mode policy blocks all incoming and outgoing traffic of the target UVM. Symptom 3: The AHV hitlog shows the target traffic as "ACTION=ALLOW". Symptom 4: The PCVM command cadmus_cli -p <security policy rule name> shows the target traffic as either "0x2 -> allowed" or "0xf -> flow has ended and cleaned up without issue". Following KB-10174 http://portal.nutanix.com/kb/10174 to clear visualization does not help.
Upgrade AOS to a compatible version. See the AHV-OVS-AOS compatibility matrix below: If the AOS version is <= 5.18.x but the AHV version is 2019xx or 2020xx, either upgrade AOS to a compatible version or re-image AHV to a lower compatible version. If you upgrade AOS to 5.19.x or higher, you must also upgrade AHV to a compatible version if your AHV version is <= 20170830.453, because AHV 20190916.96 and later ship OVS 2.8.x, which is compatible with 5.19.x and higher. Note: Always check the official Compatibility Matrix https://portal.nutanix.com/page/documents/compatibility-matrix before deploying a cluster.[ { "AHV Version": "20160925.30 - 20170830.453", "OVS Version": "2.5.x", "Compatible AOS (from Flow Perspective)": "<= 5.18.x" }, { "AHV Version": "20190916.96 - 20190916.478", "OVS Version": "2.8.x", "Compatible AOS (from Flow Perspective)": "<= 5.19.x, 5.20.x, 6.0.x and higher" }, { "AHV Version": "20201105.12 - 20201105.1161", "OVS Version": "2.11.x", "Compatible AOS (from Flow Perspective)": "<= 5.19.x, 5.20.x, 6.0.x and higher" }, { "AHV Version": "20201105.2030", "OVS Version": "2.14.x", "Compatible AOS (from Flow Perspective)": "5.20.x, 6.0.x and higher" } ]
KB13639
Nutanix Self-Service - App Launch may fail if any of the Scale-out PCVM uses a local volume
App launches may fail if any of the scale-out PCVMs uses a local volume to mount instead of a Calm volume group.
App launches may fail if any of the scale-out PCVMs uses a local volume to mount instead of a Calm volume group.
pc.2021.7 and above always use the Calm volume. However, if Calm was enabled in a prior PC version, it may use the local volume, and upgrading to pc.2021.7 or above does not automatically migrate to the Calm volume. The workaround below can be followed to migrate from the local volume to the Calm volume group. 1. Stop elastic_search on the problematic PCVM: docker exec -it epsilon bash 2. Wait until the cluster is green with the remaining two PCVMs after stopping the service in Step 1: docker exec -it epsilon bash IMPORTANT: Do not restart the epsilon container (to make sure there is no data loss). 3. Stop elastic_search on the other two PCVMs: docker exec -it epsilon bash 4. The elastic_search datadir is already in Prism Central - no changes are necessary. 5. Restart epsilon on the first PCVM: genesis stop epsilon; cluster start 6. Check for any existing data under the elasticsearch directory /home/epsilon/elasticsearch/data 7. Delete or move the data if anything is present. 8. Stop elastic_search on the first PCVM. Note: Blueprints cannot be launched while elastic_search is down. Ensure there are no ongoing blueprint launches during this procedure. 9. Start elastic_search on the other two PCVMs, and wait for shard allocation to complete and the cluster to go green (see the health-check sketch below). 10. Start elastic_search on the first PCVM. 11. Now all the PCVMs are using the volume group. 12. Verify Elasticsearch is green and the data is complete. nutanix@NTNX-PCVM:~$ docker exec -it epsilon bash
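To check whether the Elasticsearch cluster has gone green during the steps above, the standard cluster health endpoint can be queried from inside the epsilon container (a sketch; it assumes curl is available in the container and elastic_search listens on its default port 9200 on localhost):
nutanix@NTNX-PCVM:~$ docker exec -it epsilon bash
curl -s http://localhost:9200/_cluster/health?pretty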
KB4301
NCC Health Check: cassandra_similar_token_check
NCC health check cassandra_similar_token_check examines Cassandra to locate similar token ranges.
The NCC health check cassandra_similar_token_check verifies that no similar tokens exist in the metadata store (Cassandra). This will also raise an alert in Prism. Each node (CVM) is responsible for a defined set of metadata (token range). This check ensures there are no nodes sharing a similar token, as they might try to become the leader of the wrong range (for example, when a node is unreachable or down for maintenance), which might lead to cluster unavailability. Tokens are considered to be equal if they match in the first eight characters. As an example, the following token ranges would be considered similar: nbCOYWzBDHekpxvh7As4bbfnSbmT nbCO549p7AZcPL1HhsZ6zBNOMbUSYjTH Running the NCC Health Check It can be run as part of the complete NCC check by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks cassandra_checks cassandra_similar_token_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every day, by default. This check will generate a Critical alert A21015 after 1 failure. Sample output For Status: PASS Running : health_checks cassandra_checks cassandra_similar_token_check For Status: FAIL /health_checks/cassandra_checks/cassandra_similar_token_check [ FAIL ] Output messaging [ { "Check ID": "Check for similar tokens in Cassandra" }, { "Check ID": "Multiple Cassandra nodes have similar tokens" }, { "Check ID": "Refer to KB 4301." }, { "Check ID": "Cluster may not tolerate a Cassandra failure." }, { "Check ID": "Multiple Cassandra nodes have similar tokens" }, { "Check ID": "Multiple Cassandra nodes have similar tokens" } ]
If NCC cassandra_similar_token_check reports a FAIL status, or if you see the corresponding alert raised on Prism, engage Nutanix Support https://portal.nutanix.com.
KB13397
NDB - Update max_connection parameter in PostgreSQL deployments
In a PostgreSQL deployment under heavy traffic from an application, the connection pool can sometimes be exhausted, producing error messages such as "FATAL: sorry, too many clients already". This KB assists in updating the max_connections parameter in PostgreSQL deployments.
In a PostgreSQL deployment under heavy traffic from an application, the connection pool can sometimes be exhausted, producing error messages such as "FATAL: sorry, too many clients already".
Update the max_connections parameter in PostgreSQL deployments. Check the current settings using the below commands: $ sudo su - postgres Single Instance Deployments: Update the max_connections parameter value: postgres=# alter system set max_connections=250; As this parameter needs a database restart to take effect, restart the instance using the below command, replacing <DATA_DIRECTORY_LOCATION> with your database directory path: $ pg_ctl -D <DATA_DIRECTORY_LOCATION> restart Confirm the updated value using the below commands: $ psql -h localhost PG High Availability Deployments: In PG HA deployments, the PostgreSQL cluster is maintained by the Patroni service, so the max_connections setting has to be updated in the Patroni configuration. Edit the Patroni configuration using the below commands: $ sudo su - postgres As this parameter update requires a restart of the cluster, confirm whether the Pending restart option appears in the output: $ patronictl -c /etc/patroni/patroni.yml list Confirm the updated value using the below commands: $ psql -h localhost
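To gauge how close the instance is to the limit before raising it, the current session count can be compared against max_connections using stock PostgreSQL views (a sketch from a psql session):
postgres=# SHOW max_connections;
postgres=# SELECT count(*) AS current_connections FROM pg_stat_activity;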
KB16390
NDB - Secondary AG Clone Refresh May Fail
It's possible to see an MSSQL Server secondary AG clone fail to refresh from a snapshot due to the secondary DB VM being in the quarantined state in SSMS.
In NDB 2.5.x, it is possible to see clone refresh operations fail with an error in the GUI stating that the secondary VM in the AG is unreachable. The following traces can also be observed in the logs: [TIMESTAMP] [139891571074880] [ERROR ] [PROCESS ID],Traceback (most recent call last): You will also see the following traceback in the NDB logs: [TIMESTAMP] [139787504224064] [INFO ] [0000-NOPID],https://ERA-AGENT-IP-ADDRESS:443/era/v0.9/clones/CLONE-ID/properties/CLONE_STATS_SNAPSHOTS The next step is to confirm in the Postgres metadata DB that the secondary DB VM is in the "UP" status with the following query: select name,status,ip_addresses from era_dbservers where '{SECONDARY VM IP ADDRESS}'=ANY(ip_addresses) and status <> 'DELETED'; If the status column of the AG's secondary DB VM reads "UP", continue to the solution.
The health of the AG's secondary DB VM must be confirmed in SQL Server Management Studio (SSMS). It is possible the DB VM has entered the "Quarantined" state. The customer will need to produce logs of the events leading up to the DB VM entering the quarantined state. After the events have been collected, the customer must engage Microsoft support to troubleshoot the quarantined DB VM. Once the secondary VM is out of quarantine, the refresh can be reattempted.
KB8710
NCC Check cluster_active_upgrade_check Reports Active Upgrade When There Is Not an Active Upgrade
Certain NCC plugins may not run if NCC falsely detects an active upgrade due to a known issue with LCM versions lower than 2.2.3.
When running NCC checks against a cluster, you may run into a false positive on clusters with LCM versions prior to 2.2.3, indicating that there is an upgrade in progress when there is not. For example, 'ncc health_checks hardware_checks disk_checks metadata_mounted_check' may show: Detailed information for metadata_mounted_check: Or when running 'ncc health_checks system_checks cluster_active_upgrade_check' you may see: /health_checks/system_checks/cluster_active_upgrade_check [ INFO ] Check the ncc.log file found in /home/nutanix/data/logs for the following (i.e., grep -C2 -i "Result of is cluster upgrading" /home/nutanix/data/logs/ncc.log): 2019-11-26 13:13:49 INFO upgrade_utils.py:119 LCM autoupdate is in progress From the above, you can see "LCM autoupdate is in progress" and "Result of cluster upgrading: True". All of the above (including running a version of LCM lower than 2.2.3) indicate that you may be hitting this issue.
Validate that there are in fact no active upgrades via the following commands at any CVM CLI: 'upgrade_status', 'host_upgrade_status', and 'firmware_upgrade_status' (see the combined sketch below). Confirm that the version of LCM is lower than 2.2.3 (note: you are more likely to see this issue when using the dark site bundle, as LCM does not auto-upgrade with the dark site bundle). Assuming you do not see any active upgrades and LCM is less than v2.2.3, upgrade LCM to the latest version and see if the issue persists by re-running 'ncc health_checks system_checks cluster_active_upgrade_check'. The output should look similar to: Running : health_checks system_checks cluster_active_upgrade_check
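For convenience, the three status commands named above can be run in one pass from any CVM (a sketch combining the commands the KB already references):
nutanix@cvm$ upgrade_status; host_upgrade_status; firmware_upgrade_status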
KB13905
LCM - 'Not Active' Nutanix Files servers available for upgrade
In environments using Nutanix Files replication, when there is a failover and a failback of a Protection Domain to a remote site, the remote cluster may detect the non-active Nutanix Files server as available for LCM upgrades. If the user initiates an AFS upgrade on this non-active FS, the upgrade will hang.
LCM can mark some File Servers as candidates for upgrades even if they appear as Not Active, because they are the targets of a File Protection Domain replica. File Server status: CVM:~$ ncli fs ls | egrep " Name|Version|File server status" If this is the scenario and the user initiates an LCM upgrade for this File Server, the following tasks will become hung: CVM:~$ ecli task.list include_completed=0 From the Prism perspective, the stuck upgrade looks similar to the following sample image:
The definitive fix is under testing as per ENG-509187 https://jira.nutanix.com/browse/ENG-509187; the false detection of an active File Server is due to a code routine that only checks for a valid name and version. So, in this example, the following File Server is detected as valid because both the version and the name can be read: CVM:~$ ncli fs ls | egrep " Name|Version|File server status|File server PD" An improvement will also be introduced to detect the following two scenarios so LCM does not show the File Servers as upgrade candidates: Deactivated File Servers whose Protection Domains are not activated. File Servers whose PD is activated, but the FS is not yet activated. In the meantime, if LCM is already stuck because a Non-Active File Server upgrade was initiated, proceed with the following clean-up process. WARNING: Support, SEs, and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before making any changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit. Prerequisites This workaround only applies to Non-Active File Servers being upgraded, and the following two conditions need to be present: The FileServerUpgradeAll task for the non-active File Server will be at 0%, sample below: CVM:~$ ecli task.list include_completed=no | grep minerva_cvm No other LCM upgrade task should be running, as LCM tasks will be cancelled: CVM:~$ afs infra.fs_upgrade_info Clean-up process If the above conditions are met, proceed with the following clean-up process: Proceed with the LCM task cancellation process as per KB-4872 http://portal.nutanix.com/kb/4872. Only the task related to minerva_cvm should be present. Mark the minerva_cvm task of type FileServerUpgradeAll as failed using ergon_update: Warning: This method is approved by engineering to be used for specific workflows only. Using this workflow for non-approved tasks can cause issues like a genesis crash loop. CVM:~$ ~/bin/ergon_update_task --task_uuid=fbe59fd4-799f-4540-a0a1-dec1d8f72f27 --task_status=failed Clear the Zookeeper entry files_lcm_task. Double-check the command to avoid any syntax errors during the execution. CVM:~$ zkls /appliance/logical/upgrade_info/afs Run an LCM Inventory to validate that it runs without error, and confirm there are no more hung tasks after cancellation. CVM:~$ ecli task.list include_completed=0 limit=10000
KB12960
Nutanix Files: Exceeding the max number of concurrent sessions 1000
Since Nutanix Files 3.7.0, Nutanix Files tracks how many SMB2 sessions exist over a TCP session. If the number exceeds 1,000, any new SMB2 SESSION_SETUP request is denied with NT_STATUS_REQUEST_NOT_ACCEPTED.
Since Nutanix Files 3.7.0, Nutanix Files tracks how many SMB2 sessions exist over a TCP session. If the number exceeds 1,000, any new SMB2 SESSION_SETUP request is denied with NT_STATUS_REQUEST_NOT_ACCEPTED. You may see these log lines in the samba client logs in your FSVM. The samba client log file name is "/home/log/samba/clients_<n>.log". [2022/02/07 09:18:06.199943, 2, pid=35849] ../source3/smbd/smb2_sesssetup.c:68(smbd_smb2_request_process_sesssetup) This limitation was introduced in Nutanix Files 3.7.
When you observe this symptom, confirm whether your SMB client has been making a large number of SMB2 sessions by running the command "smbstatus -n -p" in the FSVM, or contact Nutanix Support if you need any further troubleshooting assistance.
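As a rough way to gauge the session count on an FSVM, the smbstatus output can be piped through wc (a sketch; header lines inflate the total slightly, so treat the number as approximate):
nutanix@FSVM$ sudo smbstatus -n -p | wc -l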
KB15476
NDB - DB provisioning on ESXi fails on the cluster with 500 VMs or more
Database provisioning via NDB on ESXi might fail if the Nutanix cluster has more than 500 VMs in total (not only counting NDB-managed DB server VMs).
This issue affects NDB version 2.5.3. Database provisioning on ESXi might fail with the error message 'Error in Registering Database' if the Nutanix cluster has more than 500 VMs, due to a known issue in the 'get all VMs' Prism API call.
Workaround: Get the total number of VMs on the Nutanix cluster where provisioning fails. This can be fetched by going to the Prism Element UI > VM page > Overview > VM Summary section. If the total number of VMs does not exceed 500, contact Nutanix Support http://portal.nutanix.com for assistance. Open the NDB config page. To do this, append '#/eraconfig' to the NDB UI URL. For example, if your NDB UI URL is 10.11.12.13, then to open the NDB config, open the link https://10.11.12.13/#/eraconfig https://10.11.12.13/#/eraconfig. You will see a page like the image below on opening the above link. Search for the property 'prism_vm_list_pagination_limit' in the search box; this property's default value of 500 is displayed. Set the value of this property to a value greater than the number of VMs on the Nutanix cluster found in Step 1. In this case, pick a number above 512, for example 513, and click Update. You will be prompted for confirmation regarding the update of the NDB config. Click 'Yes'. A confirmation message stating that the value of the NDB config has been changed successfully is displayed. Once the NDB config has been updated, provisioning of DBs and DB VMs should work.
KB13128
Nutanix AHV hypervisor on NX-G8 and NX-G7 hardware platforms using Intel XXV710 NICs randomly crash
Nutanix AHV hypervisor on NX-G8 and NX-G7 hardware platforms using Intel XXV710 NICs crashes randomly with crash signature "i40e_detect_recover_hung" in the stack trace. Thus far, only a couple of customers have hit this issue.
Internal only KB - The issue documented in this KB is NOT fully root-caused; ongoing updates will be provided. The Nutanix AHV hypervisor on NX-G8 hardware platforms using Intel XXV710 NICs crashes randomly. The crashes are more prominent during imaging of the nodes using Foundation and during hypervisor boot-up. Other than these 2 workflows, crashes have occurred randomly after the host and local CVM have been up for quite some time. On very rare occasions, the crash can also happen after running the "discover_nodes" command on the local CVM. The impacted AHV node can stay in a crash loop during boot-up. The complete crash stack is: [ 753.167307] BUG: kernel NULL pointer dereference, address: 000000000000000a i40e_detect_recover_hung is the key identifying crash signature. The crash has been observed on the NX-8035-G8 and NX-8155-G8 hardware platforms thus far, using XXV710 Intel 25G Add-On-Card NICs (Intel XXV710-DA2 and Intel XXV710-DA2T product names in Salesforce). Other facts observed from the impacted customer setups thus far are: The AOS version is 5.20.x with the bundled AHV hypervisor version. The NIC driver is i40e version 2.14.13. The firmware version is 8.50. To view the current NIC driver and firmware version, run: nutanix@cvm:~$ for i in `hostips` ; do echo ==================== host ip $i ==================== ; ssh root@$i 'for e in eth0 eth1 eth2 eth3 eth4 eth5 eth6 ; do echo ===== $e ===== ; ethtool -i $e ; done' ; done Adjust the ethX count as needed. The NICs were operating at 10Gbps speed. The impacted nodes have Intel X710 10G LAN-On-Motherboard (LOM) NICs: Ethernet Controller X710 for 10GBASE-T Sample output: nutanix@CVM:~$ hostssh 'lshw -C Network | grep -i product' The NICs are connected to a Cisco switch. In the impacted customer environment, the Cisco switch in use was: Cisco NX-OS(tm) n6000, Software (n6000-uk9), Version 7.3(5)N1(1) LLDP is enabled on the switch ports where the XXV710 is connected. The lldpctl command will not work if the node is not yet a Nutanix cluster member (KB-12632 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LYFkSAO). Downgrading the NIC firmware to the 8.20 version does not help. The crashes appear to cease when using firmware version 8.10. Note: Downgrading the XXV710 NIC firmware below 8.50 is NOT recommended and must be done only for testing purposes. The downgrade of the XXV710 NIC firmware is NOT a valid workaround; firmware versions below 8.50 expose the defect documented in KB-13085 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LZV4SAO. The NCC health check ahv_crash_file_check will WARN (KB-4866 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000XeL3CAK).
The Intel i40e NIC driver was updated to 2.20.12 in the following AHV versions: AOS 6.5.X family (LTS): AHV 20220304.385, which is bundled with AOS 6.5.2.6. AOS 6.6.X family (STS): AHV 20220304.10019, which is bundled with AOS 6.6.0.5. If you observe a host crash on versions with the updated driver and crash dump file analysis confirms this issue is hit, it is critical to collect the crash dump and the following data and share them in ENG-465213. Collect the following data: Case# If node crashes are reproducible, try collecting additional data: To enable debugging, create the /etc/modprobe.d/i40e_debug.conf file on the affected AHV host with the following contents: options i40e debug=1 dyndbg=+pfm Recreate the initramfs: root@AHV# dracut -f Reboot the host. Once the issue reproduces, collect the log bundle and crash dump. Revert the changes. Share the collected data in ENG-465213 https://jira.nutanix.com/browse/ENG-465213. Workaround Disable LLDP on the switch ports of the XXV710 NICs. Even if these NICs are in "Down" status, disable LLDP if the NICs are wired to the switch ports. Disable LLDP on the switch ports of the XXV710 NICs and also disable the LLDP agent on the XXV710 NICs. These changes are persistent. nutanix@cvm:~$ for i in $(lspci | grep XXV710 | awk '{print $1}'); do j=$(grep PCI_SLOT_NAME.*$i /sys/class/net/*/device/uevent | awk -F"/" '{print $5}'); ethtool --set-priv-flags $j disable-fw-lldp on; done
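To verify the LLDP agent state on a given NIC after applying the workaround, the driver's private flags can be inspected on the AHV host (a sketch; eth4 is an example interface name - substitute the XXV710 interface in the environment):
root@AHV# ethtool --show-priv-flags eth4 | grep -i lldp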
KB11559
Expand cluster pre-check - test3_hyperv_checks
Expand cluster pre-check - test3_hyperv_checks
Expand cluster pre-check test3_hyperv_checks checks the following: 1. Checks if the hostname is provided for Hyper-V nodes. 2. Checks if the cluster is joined to a domain. 3. Checks if the domain credentials are valid. 4. Checks if the given failover cluster name is valid. In case of failure, you can see the below error: Hypervisor hostnames required for the HyperV nodes: <node>
Verify that all the above settings are taken care of and retry the expand cluster operation.
KB14696
Move | Unable to view system metrics due to Grafana container being in crash loop
Grafana container keeps crashing if Move appliance doesn't have internet connectivity
As of Move 4.5.2, it is possible to view system metrics of the Move appliance via dashboards provided by grafana. If the Move appliance doesn't have internet connectivity, the grafana container keeps crashing since it is not able to download some plugins. See the example output below: root@move on~ $ svcchk Launching the metrics page from the Move UI (Settings -> View Metrics) will result in "502 Bad Gateway".
Provide internet connectivity to the Move appliance. If this is not possible, contact Nutanix Support http://portal.nutanix.com/ for further assistance.
KB9520
Prism Central widget on Prism Element shows disconnected but PC connection is successful
Upon upgrading to AOS 5.10.10.1 with PC 5.11.x, the Prism Central widget shown in PE can appear as disconnected although the connection to Prism Central is successful.
The Prism Central (PC) widget shown in Prism Element (PE) can appear as disconnected although the connection to Prism Central is successful. SCENARIO#1: To verify: SSH to the PCVM is possible from PE. The PCVM IP responds to ping from PE. If an HTTP proxy is used, check that the PE VIP is whitelisted on the PC. Port 9440 to the PCVM IP is open: nutanix@CVM:~$ nc pcvm.ip.xx.xx 9440 The multi-cluster state shows connected, both from PC and from the affected PE cluster: nutanix@CVM:~$ ncli multicluster get-cluster-state Resetting the PE-to-PC connection by running the below command will not resolve the widget problem: nutanix@NTNX-CVM:~$ nuclei remote_connection.reset_pe_pc_remoteconnection From nuclei, the PE-PC remote connection health check results in "OK" from both PC and PE: nutanix@NTNX-CVM:~$ nuclei remote_connection.list_all Check for unsuccessful REST calls on the PCVM to CVM/cluster VIP addresses, such as 401 (unauthorized) in authentication: root@NTNX-PCVM:/home/log/httpd24# grep xxx.xxx.xxx.xxx ssl_ac*|grep -v 200 A "null" field is also visible in the output of a curl to the PCVM from the PE cluster: nutanix@NTNX-CVM:~$ curl -k -u username:password https://pcvm.ip.xx.xx:9440/PrismGateway/services/rest/v1/multicluster/cluster_external_state When running curl against the loopback IP on the PE registered cluster, "configDetails" shows up as null instead of blank "{}": nutanix@NTNX-CVM:~$ curl -k -u username:password https://127.0.0.1:9440/PrismGateway/services/rest/v1/multicluster/cluster_external_state Verify that "configDetails":null appears in the output, like so: [{"clusterUuid":"00000000-0000-0000-0000-00000000000","clusterDetails":{"clusterName":"Unnamed","ipAddresses":["192.XXX.X.229"],"multicluster":true,"username":"actual-uuid","password":"actual-password","prcCluster":false,"reachable":false,"port":null},"configDetails":null,"filters":[],"clusterTimestampUsecs":0,"nosVersion":null,"nosFullVersion":null,"markedForRemoval":false,"remoteConnectionExists":true}] If all the above symptoms match, then the issue is due to the zk node missing the configuration details on the affected PE cluster; i.e., "configDetails":null is causing the problem, and the expected output is "configDetails":{}. Nutanix is aware of the above issue. Follow the steps in the solution section for a workaround. SCENARIO#2: Another scenario is when the /home partition on the PCVM is almost full, as shown in the code sample below: nutanix@NTNX-A-PCVM:~$ df -kh When you try to restart the cluster services, it fails with the error shown in the below snippet: nutanix@NTNX-A-PCVM:~$ cluster start See the /home full section in the solution.
SCENARIO#1: This issue has been fixed in AOS 5.15.3 and above. Upgrade to the latest AOS version available on the Nutanix Portal. If you are unable to upgrade, engage Nutanix Support. SCENARIO#2: Clear the /home partition following KB-5228 http://portal.nutanix.com/kb/5228 and restart the cluster services using the cluster start command.
KB16321
How to use NGT Troubleshooter
This article describes how to use NGT Troubleshooter.
The NGT Troubleshooter checks the requirements for NGT on a Windows-based VM and helps resolve problems with NGT installation and communication with CVM.The NGT Troubleshooter tool is included in the NGT installer in AOS versions 5.20.4 or greater. The NGT Troubleshooter can be accessed by mounting the NGT Installer ISO and the Windows Troubleshooter script is located in the disc drive at <CD drive>:\installer\windows\NGTTroubleshooter. Alternatively, after installation, the files will be present in C:\Program files\Nutanix\NGTTroubleshooter.Note: NGTTroubleshooter requires that the .NET 4.0 framework be installed.
The NGT Troubleshooter can be run via the Windows GUI or from the command prompt. GUI Usage: Navigate to the directory where the NGT Troubleshooter executable is located and double-click to run it. After execution of the tool, a log file will be located in the same directory as the NGT Troubleshooter or in the C:\Temp directory. CLI Usage: CLI usage is similar to the GUI execution of this tool; however, there are additional flags that can be used when executing the NGT Troubleshooter. For a standard run, execute the tool by running the following command from the NGT Troubleshooter directory: .\NGTTroubleshooter.exe More verbose output can be logged with the use of the /v flag: .\NGTTroubleshooter.exe /v
KB11052
File Analytics - Hot Scale Up
This KB provides details on how to update the vCPU, memory, and VG size for the File Analytics VM.
Currently, there is an option to hot-add memory to the File Analytics Virtual Machine (FAVM). A monitoring script running in the FAVM takes care of increasing memory for the containers running in the FAVM when it is updated on the VM via Prism or vCenter. However, updating the VG currently requires some manual steps to be run from the FAVM to rescan the resized VG. Refer to the Solution section for the steps to follow.
Sizing information can be found in the release notes (Nutanix Portal => Download => File Analytics => Release Notes). A. Memory resource: To update the FAVM memory, follow these steps: Navigate to Prism Element => VM => Select FAVM => click the Update button. The "Update VM" dialog box will appear in the web console. In the "Memory" field, you can increase the memory allocation on your VMs while the VM is powered on. B. Storage Resource / To Update VG Size of FAVM: NOTE: You can increase the size of the volume group; however, reducing the size of a volume group is not supported. To update the FA volume group size, follow these steps: In the Prism Web Console, select Storage from the pull-down main menu (upper left of screen), and then select the Table and Volume Group tabs. To update a volume group, select the volume group called File_Analytics_VG and then click the Update option. Under "Storage", click the pencil icon to edit the disk.0 size and increase the value as per the sizing guidelines. NOTE: You can also update the FA VG size via the acli command: vg.disk_update File_Analytics_VG 0 new_size=<new_size> If you change the size of an existing disk, you might notice that the operating system cannot see the new disk size until you rescan the SCSI device on the Linux operating system. The rescan needs to be performed through FAVM SSH access: SSH to any CVM and then SSH onto the File Analytics VM. Switch to the root user from the FAVM prompt: [nutanix@NTNX-FAVM ~]$sudo su - Run the following rescan command as the root user, and it fetches the new size: -bash-4.2# echo 1 > /sys/class/block/sdb/device/rescan In the FAVM, the entire config and data are stored on the "/dev/sdb" device, which is mounted on the system under /mnt. After rescanning the block device, the existing filesystem needs to be expanded so it can use the newly added space: -bash-4.2# sudo resize2fs /dev/sdb Sample Outputs: Before Resize: [nutanix@NTNX-FAVM ~]$ df -kh The following command is used to perform the rescan: -bash-4.2# echo 1 > /sys/class/block/sdb/device/rescan Resize Command: -bash-4.2# sudo resize2fs /dev/sdb After Resize: [nutanix@NTNX-FAVM ~]$ df -kh
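To confirm the block device actually reflects the new size after the rescan and before running resize2fs, a standard Linux listing utility can be used (a sketch run as root on the FAVM):
-bash-4.2# lsblk /dev/sdb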
KB14983
Prism shows the memory usage as '-' for FreeBSD VM.
Prism shows the memory usage as '-' for FreeBSD VM.
Prism presents the memory usage of FreeBSD VMs as '-'.
Due to a bug in FreeBSD's VirtIO implementation, Prism does not display the memory usage for FreeBSD virtual machines (VMs). This issue has been documented in the following link: https://www.freebsd.org/support/bugreports/ https://www.freebsd.org/support/bugreports/. It is worth noting that Nutanix does not currently have any plans to offer VirtIO separately for FreeBSD and relies entirely on upstream support.
KB16985
A high number of Flow-related entities in IDF causes the Insights server to exceed its memory limit on PC, causing issues for other services.
A high number of Flow-related entities in IDF causes the Insights server to exceed its memory limit on PC, causing issues for other services.
Issue: A high number of Flow-related entities in IDF causes the Insights server to exceed its memory limit on PC, causing issues for other services. The symptoms may vary, as an Insights crash can cause multiple services to FATAL. This KB discusses the insights_server service crash caused by a large number of Flow entities in IDF. Symptoms: dmesg logs state OOM and insights has reached its RSS limit: [Thu Aug 25 10:49:07 2022] Control_13 invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=100 Multiple services crash due to the insights failure: nutanix@PCVM:~/data/logs$ ls -l *FATAL* insights_server.INFO continuously logs RPC received traces for the entity network_flow_info: I20220825 02:20:13.168406Z 69476 insights_rpc_svc.cc:3481] Received RPC GetEntitiesWithMetrics from 127.0.0.1:37986. Request id: query_1661394013168374_713548_127.0.0.1. Argument: query_name: cadmus_netw A high network_flow_info entity count is seen in the IDF unevictable cache: nutanix@PCVM:~$ links -dump http://127.0.0.1:2027/detailed_unevictable_cache_stats | grep network_flow Cause: For customers using FNS 3.0.0 or lower versions, it was seen that the count query to IDF (to get the number of flows) was timing out. This led to more flows being ingested into IDF. It was due to an IDF Golang query library issue (ENG-581873 https://jira.nutanix.com/browse/ENG-581873), which was setting an empty `group_by` that in turn increased the query processing time. The query was fixed from the Flow (client) side in FNS 4.0.1, in which the query is optimized to remove the empty `group_by`. On the IDF server side, it is also fixed in ENG-581867 https://jira.nutanix.com/browse/ENG-581867 (change present in 2024.1/6.8). Hence, with versions greater than FNS 4.0.1 or pc.2024.1, an extremely high number of flows should not be seen in the IDF network_flow_info table.
A permanent fix for this issue is included in FNS 4.0.1 / pc.2024.1. In case the customer cannot perform an upgrade or needs an immediate workaround, engage the DevEx Team through a new ONCALL ticket, including references to ONCALL-16727 https://jira.nutanix.com/browse/ONCALL-16727 and ONCALL-17963 https://jira.nutanix.com/browse/ONCALL-17963. The DevEx team can use the reference from ONCALL-16727 https://jira.nutanix.com/browse/ONCALL-16727 / ONCALL-17963 https://jira.nutanix.com/browse/ONCALL-17963 to set up a script as a workaround to delete Flow entities from IDF.
KB12471
Cluster full storage utilization during maintenance operations leads to VM downtime
LCM or cluster maintenance operations (AOS/Hypervisor/Firmware) can lead to a cluster running out of disk space and user VMs becoming unavailable.
Data rebuild process during node outage: Whenever a CVM or node is unavailable on the cluster due to planned maintenance (like an AOS, hypervisor, or firmware upgrade, or hardware replacement), it is expected that cluster storage usage will increase, as the Curator service immediately (after 60 seconds by default) starts rebuilding the data from the missing node/CVM. This happens because the cluster does not know when the node will be back online and needs to restore resiliency against a second failure as soon as possible. Therefore, it is expected that cluster storage utilization temporarily increases during these operations due to the Curator rebuild tasks being generated to recreate additional replicas of data to maintain the desired data resiliency status (RF2 or RF3). Once the missing node and CVM become available again, the pending rebuild tasks get canceled (if not yet completed) and the extra replicas that were created (over-replication) are cleaned up. This happens in a subsequent Curator scan and results in the storage utilization going back to the original levels. The cluster data resiliency calculation assumes that the cluster always has enough free space to complete a data rebuild for the biggest (from a storage capacity point of view) node in the cluster (see KB-1557 https://portal.nutanix.com/kb/1557 for more details on this calculation). Possible issue with the data rebuild process consuming all free space available on the cluster: There are some scenarios where data rebuild during node unavailability can consume all available free space on the cluster, leading to the UVM(s) being unable to write and crashing (Windows VMs) or becoming read-only (Linux-based VMs). Note: Stargate does not accept any write operation once the storage usage reaches 95% of the total available space on the storage pool. At the time the maintenance activity started, the cluster data resilience did not support node failure due to a lack of free space for the rebuild process. In this case, the rebuild process even for a single node cannot be completed, and if the node does not come back from maintenance, the rebuild process can consume all available free space. Alternatively, before the maintenance starts, node resiliency is possible; however, during the maintenance, for each node/CVM going down, a rebuild process for its data is triggered. In some cases, as these are completed sequentially for different nodes without over-replication cleanup in between by Curator, the total amount of over-replication can exceed the available free space, leading to the cluster becoming full. This issue is typically seen in all-flash environments, as the rebuild process is much faster. Symptoms of the issue: When the cluster reaches 95% of space usage, it is considered full; Stargate will fail any write ops from user VMs, and rebuild-related tasks will be continuously failing. At this time, the stargate.INFO logs show messages with kDiskSpaceUnavailable errors, example: E0825 14:02:11.379101 30672 vdisk_micro_vblock_writer_op.cc:606] vdisk_id=64760951 operation_id=1212491692 Assign extent group for vdisk block 262924 failed with error kDiskSpaceUnavailable Note: in some cases you can see Stargate failing write ops with kDiskSpaceUnavailable even before storage pool usage reaches 95%; that may happen because there is not enough free space on disks from several nodes to place new replicas.
Prism UI will show very high latency for user VMs (thousands of milliseconds). The guest OS on user VMs may crash (BSOD on Windows) or re-mount the filesystem as read-only (Linux).
Issue confirmation steps: It is important to identify what type of maintenance activity is in progress before proceeding with any actions from the solution section of this KB. To check if there are any rolling upgrades, check the Prism UI to see the list of running tasks and the current state. In case the Prism UI is unavailable, this can be checked via the command line: Check the list of running tasks with: ecli task.list include_completed=false Run upgrade_status to list any One-Click initiated AOS upgrades. Run host_upgrade_status to list any hypervisor upgrades (Hyper-V/ESXi). Run lcm_upgrade_status to list any LCM-related upgrade tasks. In order to correctly identify that Curator is indeed responsible for the cluster utilization increase, refer to the detailed Curator log analysis as per KB-11405 https://portal.nutanix.com/kb/11405: Verify the storage pool usage in the Prism UI, or from the CLI (panacea_cli show_storage_pool_usage). A spike in storage usage should correlate with the time of the maintenance activities and Curator rebuild scans. Get detailed statistics on the storage usage on the cluster to verify the increase in storage usage since the maintenance activity was started, through the arithmos_cli command (KB-2633 https://portal.nutanix.com/kb/2633): nutanix@cvm$ arithmos_cli master_get_time_range_stats entity_type=storage_pool entity_id=$(zeus_config_printer | awk '/storage_pool_list/ {sp = 1} sp && /storage_pool_id/ {print $2; exit;}') field_name=storage.free_bytes start_time_usecs=$(date +%s -d '2 days ago')000000 end_time_usecs=$(date +%s)000000 sampling_interval_secs=1800 | awk '/start_time_usecs:/ {start_secs= $2/1000000} /sampling_interval_secs:/ {sampling_secs = $2} /value_list:/ {print strftime("%F %T", start_secs), "\t", $2/(1024*1024*1024*1024); start_secs += sampling_secs;}' You can also identify any ongoing rebuild operations through the Curator page: Identify the current Curator leader: nutanix@cvm$ service=/appliance/logical/leaders/curator; echo $(zkcat $service/`zkls $service| head -1`)| awk '{print "Curator master ==>",$2}' SSH to the identified Curator leader and use the links http://0:2010/master/rebuildinfo command to view the current rebuild status.
A drain of oplogs is seen on the cluster, as per the below example in curator.INFO - allssh "grep oplog_map_task ~/data/logs/curator.INFO | grep VDisk" nutanix@cvm$ allssh "grep 'oplog_map_task' ~/data/logs/curator.INFO | grep VDisk" Egroup replication tasks are also noted in curator.INFO - allssh "grep ReplicateExtentGroupTasks ~/data/logs/curator.INFO" I20210603 15:39:36.613036Z 24529 curator_background_task_scheduler.cc:1598] Starting an op to enqueue background tasks; chunk_job_type=ReplicateExtentGroupTasks:NonEc.node:9, num_tasks=3641, enqueue_chunk_op_wdog_id_=11 Once the CVM is back online after the firmware or hypervisor upgrade, it can be seen that the pending Curator rebuild tasks get canceled - allssh "grep 'Cleaned up and removed background tasks queue' ~/data/logs/curator.INFO" I20210603 15:51:03.609261Z 24529 task_chunk_queue.cc:1220] Cleaned up and removed background tasks queue; job_type=ReplicateExtentGroupTasks:NonEc.node:9, job_execution_id=48100, file_pathname=bg_tasks/ReplicateExtentGroupTasks:NonEc.node:9.48100, meta_file_pathname=bg_tasks/ReplicateExtentGroupTasks:NonEc.node:9.48100.meta, num_tasks_removed=0, all_chunks_enqueued_=true Identify that Curator has completed any scans via the below command, and verify when the last full and partial scans were executed: curator_cli get_last_successful_scans Using curator master: 10.66.38.45:2010 In some cases, you will also not see any completed full scans, as there will be no Curator garbage report available - curator_cli display_garbage_report nutanix-CVM$ curator_cli display_garbage_report
Identifying the cluster's current status: If there are any hypervisor (ESXi/Hyper-V) upgrades running on the cluster, verify the status of the upgrade. Below are two ways of identifying ongoing or in-progress ESXi/Hyper-V upgrades: nutanix-CVM$ host_upgrade_status Note: IPs listed in the below Zookeeper entry indicate that there is currently an ongoing hypervisor upgrade (ESXi/Hyper-V): nutanix-CVM$ zkls /appliance/logical/genesis/hypervisor Verify if there are any LCM upgrade operations running on the cluster (AOS, AHV or firmware). Also verify if there are any 1-Click initiated AOS upgrades: nutanix-CVM$ upgrade_status
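Before resuming or re-attempting maintenance, it can also be useful to confirm whether the cluster can currently tolerate a node failure from a free-space perspective. A minimal check from any CVM (a sketch; the exact component names in the output may vary by AOS version):
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node
A current fault tolerance of 0 for the free-space related component indicates that a rebuild of the largest node cannot complete without filling the cluster, matching the first failure scenario described above.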
Stabilizing the current state of the cluster: There are different options to consider when you need to quickly bring the cluster back to a working state. Here is a short list of possible options, with more details provided later in this KB: If a one-click upgrade or LCM workflow is running for AOS, hypervisor or firmware, it can be paused. Then wait for Curator scans to clear the space used by over-replication; once completed, the upgrade process can be resumed and monitored carefully. Follow the detailed steps below. Check if the cluster has some space reservation configured at the container level (on some clusters you may have a completely empty container created with a reservation set on it, see KB-7107 https://portal.nutanix.com/kb/7107). Consider removing the reservation by setting it to zero in the container settings - this should allow Stargate to use this space to complete write operations (command sketches for reviewing usage and reservations are provided at the end of this article). For ESXi clusters, check if there are any space reservations for thick-provisioned virtual disks, and consider removal of the reservation as per KB-4180 https://portal.nutanix.com/kb/4180. Check the Recycle Bin usage and clean it (AOS 5.18 and up) - KB-9964 https://portal.nutanix.com/kb/9964
Detailed explanations for each available option: Pausing the ongoing maintenance operation: Stop any currently running operation(s): If a firmware or AHV upgrade has been initiated from LCM, follow the detailed steps in KB-4872 https://portal.nutanix.com/kb/4872 to stop the current firmware or AHV upgrade. For AOS upgrades initiated through the 1-Click interface or through LCM, refer to KB-3319 https://portal.nutanix.com/kb/3319 to disable auto install, pausing the current upgrade if required. nutanix-CVM$ cluster disable_auto_install; cluster restart_genesis Pause the current hypervisor (ESXi/Hyper-V) upgrade as per KB-4801 https://portal.nutanix.com/kb/4801: nutanix@CVM$ cluster --host_upgrade disable_auto_install Note: Running these commands will not immediately stop the upgrade process. Upgrades will continue to run and complete for the current node, and then will not automatically proceed with the next node. Take the AHV/ESXi/Hyper-V host and its associated CVM out of maintenance mode if required, as per KB-4639 https://portal.nutanix.com/kb/4639 Wait for Curator to clean up the space used by over-replication. Verify that all the required services are started on the CVM that was in maintenance mode. Check and verify that the CVM has re-joined the Cassandra metadata ring through the nodetool -h 0 ring command. Start either a full or partial Curator scan to correct the capacity usage on the cluster - KB-1883 https://portal.nutanix.com/kb/1883 Once the Curator task has completed, you will note that the cluster's capacity usage will be reduced to the previous levels. OPTIONAL (tentative): Change the Stargate gflag stargate_disk_full_pct_threshold to 97% as per KB-2018 https://portal.nutanix.com/kb/2018
Resuming the firmware or hypervisor upgrade task: As per ENG-282432 https://jira.nutanix.com/browse/ENG-282432 and FEAT-7195 https://jira.nutanix.com/browse/FEAT-7195, an option to delay rebuilds during an upgrade window or planned outage window will be implemented from AOS 6.6. Before that is available, the following options should be considered on a case-by-case basis: Option 1: Clusters with AOS 5.20 and 6.0 and later can make use of a gflag to delay replication during maintenance. Review further guidance and steps to configure it in KB-11405 https://portal.nutanix.com/kb/11405. After the gflag is set, allow for the maintenance to complete.
Option 2: Follow this option if Option 1 is not possible: Re-attempt the maintenance for the pending CVMs/nodes and monitor the cluster's capacity usage carefully. If cluster capacity usage issues are seen again, pause the upgrade again and use Option 3 before completing the upgrades. Option 3: This option will avoid over-replication but will also prevent other Curator tasks. It should be used under STL supervision and with close monitoring. Place Curator in safe mode as per KB-1289 https://portal.nutanix.com/kb/1289 and verify the status (after recovery completion, DO NOT forget to put Curator back to NORMAL mode - KB-1289 https://portal.nutanix.com/kb/1289): nutanix@cvm:~$ allssh "genesis stop curator chronos" && curator_cli update_curator_scan_mode curator_scan_mode=kSafe && cluster start nutanix@cvm:~$ links --dump http://0:2010 |head -n20 Verify if any Curator scans have run after the deletion timestamp: nutanix@cvm:~$ curator_cli get_last_successful_scans Reattempt the AHV or firmware upgrade on the cluster. Once completed, set Curator back to a normal state and verify that it is set to kNormal: nutanix@cvm:~$ allssh "genesis stop curator chronos" && curator_cli update_curator_scan_mode curator_scan_mode=kNormal && cluster start nutanix@cvm:~$ links --dump http://0:2010 |head -n20
Preventative action: Until the proposed delayed rebuild feature is implemented as per ENG-282432 https://jira.nutanix.com/browse/ENG-282432 and FEAT-7195 https://jira.nutanix.com/browse/FEAT-7195, it is recommended to contact Nutanix Support to implement the flag-based workaround as per KB-11405 https://portal.nutanix.com/kb/11405 on clusters susceptible to this issue, prior to any upgrade maintenance on the cluster.
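As referenced in the stabilization options above, current storage pool usage and container-level reservations can both be reviewed quickly from any CVM with ncli. A minimal sketch (exact field names may vary slightly between AOS versions):
nutanix@cvm$ ncli storagepool ls
nutanix@cvm$ ncli container ls
Compare the physical used capacity against the physical total capacity in the storage pool output (Stargate stops accepting writes at 95% usage), and review the reservation-related fields for each container; a non-zero reservation on an otherwise empty container can be set back to zero via the container settings in Prism, as described in KB-7107.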
KB11183
VG disk missing in Oracle DB VM post reboot
This article covers a scenario where a VG disk goes missing after rebooting an Oracle DB VM
In some scenarios, a volume group (VG) can go missing after rebooting an Oracle DB VM. This issue has so far been seen on Oracle VMs only. Scenario 1: Oracle VMs are registered with Nutanix Database Service (NDB), and the VG disks have no index ID 0. To verify the issue: The VM had a VG with the name BACKUP_EXADATA. [root@oel-7 ~]# df -hT | grep -v tmpfs The entry was present in the fstab: # /etc/fstab The entry was present in the pvs output: [root@oel-7 ~]# pvs But after the reboot of the VM, the VG does not show up in the pvs and vgs output. The corresponding entries in /etc/fstab and df -h output from the VM are missing: [root@oel-7 ~]# vgs Scenario 2: The Oracle DB VM has all the VG disks missing after rebooting. It is not necessarily NDB-related. To verify the issue: No VG disks are present on the Oracle VM: [root@oracle ~]# df -h The entry was not present in the /etc/fstab: [root@oracle ~]# cat /etc/fstab pvs and vgs also have no output: [root@oracle ~]# pvs oracleasm scandisks and listdisks gave no entries, and the ASM disk directory "/dev/oracleasm/disks" was empty: [root@oracle ~]# oracleasm scandisks However, all VG targets are discoverable from the UVM, and sessions are established for the targets: [root@oracle ~]# iscsiadm -m discovery -t sendtargets -p <DSIP> Example output: [root@oracle ~]# iscsiadm -m discovery -t sendtargets -p xx.xx.xx.195 From fdisk and lsblk, we can also see the disks and disk paths: fdisk -l [root@oracle ~]# fdisk -l lsblk --scsi [root@oracle ~]# lsblk --scsi
Scenario 1: Check the disk index ID of all disks that are part of the VG in question. If there is no disk with index ID 0, you may see this issue. How to check the index ID of a disk: Login to Prism Element. In the dropdown menu, go to Storage and select the Volume Group tab. Under the Volume Group tab, select the VG assigned to the problematic VM. Click on Update. Under the Storage section of the popup window, check the index ID of all disks of the VG. If there is no disk with index ID 0 present in the list, perform the steps from the Workaround below. Workaround: Add a new dummy disk of 1 GB to the same VG by clicking on "Add New Disk". Verify the index ID of the new dummy disk; it should be added as index 0. Rescan the disks inside the Oracle VM (see the rescan sketch at the end of this article), and the missing VG should now be visible. Scenario 2: Suggest the customer engage Oracle Support. All VG targets are discoverable from the UVM, sessions are established for the targets, and the disks are visible in the UVM in fdisk and lsblk. However, those disks are not visible further down in the "/dev/oracleasm/disks" path, and hence the disks were not mounted.
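For the rescan step in the workaround above, a minimal sequence of standard Linux iSCSI/LVM commands can be used from inside the guest (a sketch using generic tooling, not an NDB-specific procedure):
[root@oel-7 ~]# iscsiadm -m session --rescan    # rescan existing iSCSI sessions for newly exposed LUNs
[root@oel-7 ~]# pvscan && vgscan                # rediscover physical volumes and volume groups
[root@oel-7 ~]# vgchange -ay                    # activate any inactive volume groups
[root@oel-7 ~]# mount -a                        # remount filesystems listed in /etc/fstab
If the VG still does not appear, verify the underlying device is visible with lsblk before retrying.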
KB6584
Prism Central | Filesystem corruption leads to Prism Central unavailability
Filesystem corruption leads to Prism Central unavailability; fsck via Phoenix is required to repair it
Scenario 1: Prism Central boots into emergency mode and requires the user to enter the password or type Control-D to continue. The following message is displayed: Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or ^B to try again to boot into default mode. The dmesg log shows that filesystems have been corrupted: nutanix@pcvm> dmesg -T | grep EXT4-fs
Scenario 2: The Prism Central VM is accessible via SSH and the nutanix user, but the Prism Central GUI is unreachable from the browser (single PCVM). A scale-out PCVM GUI may still be accessible if 2/3 of the PCVMs are still up. prism_gateway.log on the Prism leader of one of the registered clusters shows that Prism Central is not reachable (single PCVM): nutanix@cvm> less data/logs/prism_gateway.log On Prism Central, basic commands such as "genesis status" or "cluster status" fail with an error that the filesystem is read-only, or with an input-output error: nutanix@PCVM> genesis status Traceback (most recent call last): Identify which device is used for the "/home" filesystem; in this example it is /dev/sdb3: nutanix@pcvm> df -h /home The "mount" command on the Prism Central VM shows the filesystem of "/home" is mounted read-only: nutanix@pcvm> mount Attempts to remount as read-write fail: nutanix@pcvm> sudo mount -o remount,rw /dev/sdb3 /home The dmesg log shows the same EXT4-fs errors as above, or "invalid magic": nutanix@pcvm> dmesg -T | grep EXT4-fs [Thu Aug 18 11:20:13 2022] EXT4-fs error: 7 callbacks suppressed
Scenario 3: Prism Central may crash at some point due to I/O errors on one of the /dev/sdX drives, but Prism Central services are normal in most cases. I: Local disk filesystem inconsistent. The following can be noticed in dmesg - task dockerd is blocked for more than 120 seconds just before the panic, followed by an EXT4-fs error on dockerd: [ 19.245635] EXT4-fs error (device sdb3): ext4_mb_generate_buddy:758: group 8, block bitmap and bg descriptor inconsistent: 14611 vs 14607 free clusters You can also run "dmesg -T | grep EXT4-fs" to specifically grep for EXT4 errors: nutanix@PCVM:~$ dmesg -T | grep EXT4-fs Search the filesystem for the inodes reported by the kernel, as there may be some leftover corruption in the FS: nutanix@PCVM:~$ sudo find / -inum 2491631 /home/nutanix/data/email/attachments Leftover corruption may occur, and some files may not be accessible when trying to ls or remove the corrupted inodes: nutanix@PCVM:~$ ls -li /home/nutanix/data/email/attachments II: Docker storage volume filesystem corruption. The NCC fs inconsistency check reports EXT4-fs errors found in device sdh on the filesystem. Review the dmesg output for EXT4-fs error details. The dmesg error: [Wed Nov 16 14:59:11 2022] EXT4-fs (sdh): warning: mounting fs with errors, running e2fsck is recommended The mounted filesystem in the Prism Central VM: nutanix@pcvm$ df -Th III: Filesystem corruption on the pods of the CMSP or MSP cluster. Detailed information for pc_fs_inconsistency_check: These are the volumes for which we are getting EXT errors, which are owned by pods: sdn 8:208 0 30G 0 disk /home/docker/msp_controller/logs/var/lib/kubelet/pods/cd41e7a6-4918-11ed-af2f-506b8df9c541/volumes/kubernetes.io~csi/pvc-9d623bb4-f36a-11ec-9445-506b8df9c541/mount
Scenario 4: The Prism Central VM crashes due to I/O errors in an ESXi cluster. The following error message is displayed on the console screen: blk_update_request: I/O error, dev sdb, sector 11439752 The root cause for this error still remains unknown, thus consult with a Sr.
/Staff SRE or the EE team and open an ONCALL with the log bundle and the following data: Figure out the vdisk backing /dev/sdb of the PCVM (from past cases, it is always sdb that was detected corrupt); Run the below script https://jira.nutanix.com/secure/attachment/286116/286116_collect_episode_meta.py with the vdisk id from the previous step to get all the episode metafiles for future analysis: python collect_episode_meta.py --vdisk_id <vdisk-id> --output_dir ~/tmp/vdisk-<vdisk_id> Power cycle the VM and see if the problem goes away. What we know so far (as per ONCALL-6658): We believe the issue is triggered by the Prism Central IO pattern when vdisks were re-hosted on a different CVM; however, we are not certain why the kernel aborts the journal, as the oplog recovery was finished within 5 seconds. From the above error message, sector 11439752 would map to vdisk offset 5857153024. We do see write errors in stargate logs: W0611 16:35:10.292872 14232 nfs_write_op.cc:1585] Opid: 13258865386 retrying failed write 4096 bytes at offset 5857153024 to vdisk NFS:1:0:669154 for inode 1:0:669154 with error kRetry, num_retries: 1 Based on the log, the only reason the write op hit the kRetry error is because the oplog expired the waiters: W0611 16:35:10.291563 14232 vdisk_distributed_oplog.cc:4182] vdisk_id=72276560 inherited_episode_sequence=-1 ep_seq_base=1014236 Expired 128 external IO waiters While oplog recovery is in progress, Stargate will expire the op immediately, and then the write will retry. That explains the behavior we saw here. However, in order to get an IO error in the PC VM due to a timeout, the IO has to wait for more than /sys/block/sdb/device/timeout. The default timeout value is set to 30. The write above did not show any error after the oplog recovery finished, indicating it went through eventually after 5 seconds: I0611 16:35:15.851958 14231 vdisk_distributed_oplog_recovery_op.cc:82] vdisk_id=72276560 operation_id=13258873270 Finished distributed oplog recovery
Scenario 3 I: Prism Central local disk filesystem inconsistent. The solution is to boot the PCVM from Phoenix and check the filesystems on the disk backing /home for errors and fix them. Note: Do not unmount the PC ISO on the existing CDROM.
1) Download a Phoenix ISO or a Linux Live CD ISO.
2) For AHV clusters, upload the Phoenix or Linux Live CD ISO to the cluster via the "Image Configuration" under the Settings in Prism. For ESXi clusters, upload the Phoenix or Linux Live CD ISO to the NTNX-local-xxxx datastore on the ESXi host which is hosting the PCVM via vCenter: Click Storage in the VMware Host Client inventory and click Datastores. Click Datastore browser. Select the datastore that you want to store the file on (NTNX-local-xxxx). Locate the item that you want to upload from your local computer and click Open. The file uploads to the datastore that you selected. (Optional) Refresh the datastore file browser to see the uploaded file on the list. Click Close to exit the file browser.
3) Power off the PC VM.
4) Create a new CD-ROM, mount the uploaded Phoenix or Linux live CD ISO, and change the boot order. AHV: a. In Prism, update the VM and create a new CDROM. b. Select "Clone from Image Service". c. Select the Phoenix or Linux live CD ISO. d. Click Add. e. Change the boot order from "Default Boot Order" to the new CDROM (i.e. ide.1). ESXi: a. From vCenter, right-click the PC VM and edit settings. b. Click "ADD NEW DEVICE". c. Choose CD/DVD Drive, click "Browse" and select the Phoenix image which was uploaded earlier. d. Mark "Connect at Power On" on the new CD-ROM (Drive 2). e. Unmark "Connect at Power On" on the original CD-ROM (Drive 1). f. Go to the "VM Options" tab. g. Expand "Boot Options" and mark "During the next boot, force entry into the BIOS setup screen".
5) Power on the PC VM to boot into the Phoenix or Linux ISO. AHV: No actions needed, as we have already changed the boot order. ESXi: a. Use the arrow key to go to the Boot tab. b. Use the + key to move the CD-ROM to the top. c. Use the arrow key to go to the Exit tab. d. Choose Exit and Save Changes.
6) As the logs in dmesg could have rolled over, it is recommended to run fsck -n against all partitions of the disk that has the /home partition, to avoid rebooting back into the PCVM only to find another partition is corrupted and needs repair. For example, if /home is /dev/sdb3, then run: $ fsck -n /dev/sdb1 The -n option will only report if errors are found but will not repair them.
7) Based on the reported corrupted partitions from above, run fsck -y against the corrupted partitions. For example: $ fsck -y /dev/sdb2 Note: the fsck steps are common to both the ESXi and AHV hosts.
8) Power off the PC VM.
9) Unmount the Phoenix or Linux ISO from the new CD-ROM that was created and boot back into PC. AHV: a. From a CVM, run the below command to empty the new CD-ROM that was created. Note: Do not empty the original CDROM (i.e. ide.0). acli vm.disk_update <PCVM NAME> disk_addr=<disk address of new CD-ROM> empty=true Example: acli vm.disk_update <PCVM NAME> disk_addr=ide.1 empty=true b. From Prism, edit the PC VM and change the boot order back to "Default Boot Order". ESXi: a. From vCenter, right-click the PC VM and edit settings. b. Unmark "Connect at Power On" on the new CD-ROM (Drive 2). c. Mark "Connect at Power On" on the original CD-ROM (Drive 1). d. Go to the "VM Options" tab. e. Expand "Boot Options" and mark "During the next boot, force entry into the BIOS setup screen".
10) Power the PC VM back on and boot back into PC. AHV: No actions are needed, as we have already changed the boot order. ESXi: a.
Use the arrow key to go to the Boot tab. b. Use the + key to move the Hard Drive to the top. c. Use the arrow key to go to the Exit tab. d. Choose Exit and Save Changes.
Scenario 3 II: Docker storage volume filesystem corruption. Prism Central is accessible in this case. Refer to KB-13931 https://portal.nutanix.com/kb/13931 and check the instructions under the "Issue: NCC complains docker container partition has EXT4 fs errors" section.
Scenario 3 III: To fix the pod filesystem issues, refer to KB-14476 https://portal.nutanix.com/kb/14476. If the issue still persists, collect the logs from the PC and the underlying PE and proceed with opening a TH or ONCALL accordingly.
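After repairing the filesystem and booting back into the PCVM, one way to confirm the repaired partition is now clean is to inspect the superblock state (a sketch; assumes the ext4 partition repaired above was /dev/sdb3):
nutanix@pcvm$ sudo tune2fs -l /dev/sdb3 | grep -i state
A "Filesystem state: clean" result indicates the fsck repair completed; "clean with errors" would warrant another fsck pass from Phoenix.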
KB16798
Unable to save the empty alert email recipients list from Prism GUI
Attempting to clear the existing email recipients list from Prism GUI and save the changes results in the "Error saving Alerts Configuration: Email Recipients field empty" error message.
A UI validation that verifies the email recipient field length prevents the user from clearing and saving the existing list of email recipients within the Alert Email Configuration through the Prism GUI. Attempts to clear the recipient field in the 'Alert Email Configuration' tab result in the "Error saving Alerts Configuration: Email Recipients field empty" error displayed in the Prism GUI. Users seeking to opt out of receiving alert emails cannot unsubscribe from the mailing list using the GUI if their email address is the only one configured in the email recipient list.
The Email Recipient field can be cleared using the below nCLI command from any CVM: nutanix@cvm$ ncli alert edit-alert-config email-contacts=- Note: The '-' value is passed for the email-contacts argument in the above command to clear and save the Email Recipient field.
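To confirm the recipient list was cleared, the current alert configuration can be read back with ncli (a sketch; the getter action name and output formatting may vary by AOS version):
nutanix@cvm$ ncli alert get-alert-config
The email contacts field should now be empty.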
KB6486
Epoch | Cannot start dd-agent due to an invalid hostname (localhost) on the machine.
You may encounter a situation where the Collector Health page does not show collector info. One possible reason is that "dd-agent" is unable to determine a valid hostname on your machine.
You may run into a case where, after adding the collector, the collector info does not populate any information (Settings --> Collector Health). This can happen if the hostname was set to the default "localhost". We can check the error message by running the following command: [root@localhost ~]# /etc/init.d/epoch-collectors info The CRITICAL errors clearly state that it cannot determine the hostname.
In order to fix the issue, edit the datadog.conf file and change the hostname as desired. Open /etc/nutanix/epoch-dd-agent/datadog.conf using your preferred text editor. Change the hostname from localhost.localhost to your desired name (ateiv-test in this example). After making the change, restart the collector using the following command: /etc/init.d/epoch-collectors restart This should resolve the issue, and you should see collector info under the Collector Health page.
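As an alternative to editing the file by hand, the hostname line can be updated non-interactively (a sketch; assumes datadog.conf uses the standard "hostname:" key and that "ateiv-test" is the desired name):
sudo sed -i 's/^hostname:.*/hostname: ateiv-test/' /etc/nutanix/epoch-dd-agent/datadog.conf
/etc/init.d/epoch-collectors restart
Setting the OS hostname itself (for example, with hostnamectl set-hostname) also avoids the issue for any agent that derives its name from the system.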
KB11749
Nutanix Files: Files Manager shows "undefined" on Prism Central
This article describes an issue where there is no option to enable Files Manager prior to version 2.0 in PC UI.
When trying to enable Files Manager on Prism Central (PC), there is no option to enable it via the UI for versions prior to 2.0. Prism Central will show the below under Services > Files. The below message is shown when Files Manager is not enabled and you try to determine the version. nutanix@PCVM:~$ files_manager_cli get_version
Follow the steps below to enable Files Manager: Run the below command on the PC VM: files_manager_cli enable_service Example: nutanix@PCVM:~$ files_manager_cli enable_service Once it is enabled, verify the version by running the below command: files_manager_cli get_version Example: nutanix@PCVM:~$ files_manager_cli get_version Run an LCM (Life Cycle Manager) inventory and then perform an LCM update to upgrade Files Manager to the latest version.
KB5035
SSP in PC: Upgrade to 5.5 without wanting to migrate SSP to PC
SSP in PC: Upgrade to 5.5 without wanting to migrate SSP to PC
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit)"When attempting to upgrade a PE cluster to 5.5 you may see a pre-upgrade check failure - “Cluster with ssp configured must register to a Prism Central with 5.5 version or beyond before upgrade to avoid ssp data loss”As SSP functionality is moving to PC in version 5.5, in case SSP was enabled on a pre-5.5 PE cluster this pre-check will inform users to register their PE cluster to a 5.5 (or above) PC before upgrading the PE clusters. In certain rare scenarios users may want to proceed with upgrading their PE cluster(SSP enabled) from pre-5.5 version to 5.5 (or above) versiona. Without wanting to register their PE cluster to a 5.5(or above) PC, orb. Users does not want to preserve the SSP data or migrate it to PC, etc.This KB should be able to assist you in such scenarios.Note: Please follow this KB only in the case where customer is absolutely not ready to adopt PC yet. If customer intends to adopt PC and does not care for the SSP config, they can register 5.5 (or above) PC and follow scenario 2 solution from below. As PC has additional features (many of which are exclusively in PC 5.5 onwards) customers should be encouraged to adopt PC instead of following this KB, unless absolutely necessary.
Scenario 1: Customer has a pre-5.5 PE cluster (SSP enabled). PE is not registered to any PC currently, and the customer does not want to deploy one. The customer just wants to upgrade the PE cluster to 5.5 (or above) immediately. SSP config can be left behind and does not need to be migrated at all. Solution - Either of the 2 below workarounds should work for this scenario: Remove the directory service from the PE cluster to be upgraded. Proceed with the upgrade of the PE cluster to 5.5 (or above). OR SSH to any CVM on the PE cluster. Edit the zeus config: edit-zeus --editor=vim Modify the flag enabled_in_ssp: false Save the file and exit. Proceed with the upgrade of the PE cluster to 5.5 (or above). Scenario 2: Customer has a pre-5.5 PE cluster (SSP enabled). The PE cluster is registered to a PC (both are on pre-5.5 versions). PC has been upgraded to 5.5 (or above). The customer wants to upgrade the PE cluster to 5.5 (or above). SSP config can be left behind and does not need to be migrated at all. Solution - The PE cluster can be upgraded to 5.5 (or above). After the upgrade, SSH to any CVM on the PE cluster. Edit the zeus config: edit-zeus --editor=vim Modify the flag ssp_config_owner: false (if the flag does not exist, add it). Save the file and exit. Scenario 3: Customer has a pre-5.5 PE cluster (SSP enabled). PE is not registered to any PC currently. The customer wants to upgrade the PE cluster to 5.5 (or above) immediately. The customer plans to perform PC registration with the PE cluster later, and SSP data can be migrated later. Solution (Please do not perform the solution for this scenario unless absolutely necessary, as this involves skipping crucial pre-upgrade checks. Only perform this when all risks have been understood. Please check with the customer whether they can deploy/register a 5.5 (or above) PC instead, as that will be the best solution): Skip the pre-upgrade check while performing the upgrade to 5.5 (or above) on the PE cluster (please refer to KB 4230, and also refer to the release notes of the respective version before performing this). After the PE cluster completes upgrading to 5.5 (or above), perform the SSP migration following the steps in the Prism Central Guide on the portal.
KB13341
[NDB] Oracle DBServer patching failed after GRID Patching
[NDB] Oracle DBServer patching failed after GRID Patching
This issue can be seen when the customer is trying to patch an Oracle RAC database from 19.9 to 19.11. The issue was observed in NDB 2.3.0.1. The patching was completed for GRID but failed before starting database patching. There were 2 issues observed: NDB was unable to stop DB services using srvctl, hence the patching operation failed before executing the data patch. This issue will be tracked in a separate document. There was a mismatch in db_name and db_unique_name. The fix is already merged into NDB 2.4. Since the patching operation failed, Grid was patched to 19.11 but was left in a "Rolling Patch" state. The Grid was patched to 19.11 but the database was still 19.9. Both the Database and GRID services were running fine on both nodes. The Grid patching was marked as successful, as the pre-patch and post-patch commands returned a success code. Logs to collect: The logs will be created on the respective database server. The main patching operation log is created on the RAC node which is marked as "active" in NDB. Below is the log location: /opt/era_base/logs/drivers/patch_dbserver/<operation-id>.log In case of failure, we can also collect the logs from the NDB diagnostic bundle.
Manual fix for GRID state: Verify the GRID at the cluster level. Ideally the state should be "Normal", but in this case we observed that the GRID state was "Rolling Patch". Connect to the ASMCMD utility as the grid user: asmcmd>showclusterstate Verify the GRID state at the cluster level as well. On both nodes, connect as the root user and execute the below commands. This confirms that the cluster state was "Rolling Patch"; the expected value was "Normal": $crsctl query crs activeversion -f The value is different on both nodes: $crsctl query crs softwarepatch The list is the same on both nodes: $GRID_HOME/OPatch/opatch lspatches Lists the same patches on both nodes: $GRID_HOME/bin/kfod op=patches Lists the same patches on both nodes: $GRID_HOME/bin/kfod op=PATCHLVL As per the Oracle documentation, execute the below pre/post-patch commands on both nodes: $crsctl stop crs -f Now the GRID state should be "Normal". In case the above steps do not fix the GRID state to "Normal", we can execute the below commands: $GRID_HOME/bin/clscfg -patch With reference to ERA-16486 https://jira.nutanix.com/browse/ERA-16486, NDB will now re-verify and execute the pre/post-patch to fix the "Rolling Patch" state. The changes are targeted to merge into 2.4.1.1. Patch only the database (not GRID) using NDB: Once GRID is in the "Normal" state, we proceed further to apply only the database patch using NDB. Currently, the NDB UI does not have an option to patch just the database. We can leverage the CLI to apply the patch to just the database and not GRID (if supported by Oracle). 1. Generate the upgrade json file: era > dbserver cluster upgrade generate_input_file type=oracle_database output_file=~/upgrade_db.json 2. Edit the above json file and update the value for "upgrade_type" from "cluster_with_database" to "database". Old/existing: { New: { 3. List the dbservers. Keep a note of the dbserver ID: era>dbserver cluster list 4. Get the software profile id: era>profile software list engine=oracle_database 5. Get the software profile version id: era>profile software version list profile_id=<s/w profile id from step-4> 6. Run the below command: era>dbserver cluster upgrade id=<dbserver-id step3> software_profile_id=<s/w profile ID step4> software_profile_version_id=<s/w profile version ID step5> input_file=<json file step1>
KB16465
Ungraceful AHV reboot on Lenovo hardware may fail with Kernel panic error on AOS 6.7.x versions
This KB describes an issue where an ungraceful AHV reboot might fail with a Kernel panic error on Lenovo hardware on a cluster running AOS 6.7.x.
It has been noticed that, in rare scenarios, if an AHV host is rebooted ungracefully on a cluster running an AOS 6.7.x version on Lenovo hardware, the reboot may fail with I/O errors and a kernel panic. In order to identify if you are hitting this issue, check the "vmcore-dmesg.txt" file under the path "/var/crash" on the host. The following signatures can be found in the logs: root@host# cat ~/var/crash/vmcore-dmesg.txt [25867.657222] NMI watchdog: Watchdog detected hard LOCKUP on cpu 32 No error logs will be seen in the Lenovo SEL. This issue is not noticed on any other AOS versions.
As per initial analysis, this issue could be occurring due to old Lenovo UEFI code. The issue is not seen on new Lenovo UEFI code that includes the new AMD AGESA (GenoaPI-SP5_1.0.0.B) code. If you are hitting this issue, please attach your support case to the below JIRA ticket and ask the customer to reach out to Lenovo Support to manually upgrade the UEFI firmware to the latest qualified version.
KB15522
Prism Central - Prism Central Recovery shows old snapshots during PCDR after failed attempt to Recover.
This article describes a situation with orphaned PC backup entities left in the PE IDF after a failed attempt to use PCDR for PC migration.
Scenario 1: Customer has Prism Central running on Cluster 1. Prism Central DR is enabled and configured to replicate the PC to Cluster 2. The customer attempted to use PC DR to migrate the PC from Cluster 1 to Cluster 2: 1. PC powered off on Cluster 1. 2. Recover PC started on Cluster 2. 3. PC recovery partially completed: the PC recovery task failed, but most of the services on the recovered PC started, excluding Epsilon. 4. Partially recovered PC stopped and removed on Cluster 2. 5. PC started back on Cluster 1. 6. PC DR disabled and re-enabled in the PC UI; PCDR shows up-to-date synchronization. 7. Attempt to repeat the procedure from steps 1 to 2 - Prism Element on Cluster 2 shows recovery options with old PC snapshots. Scenario 2: The customer just wants to test a restore and decides to revert to the old PC. Identification: The PE will have an IDF entry of type pc_backup_metadata with an old last-update date and incarnation id 1; on the running PC side, the IDF pc_backup_metadata entry has the correct last-update date value. Example: PE: nutanix@CVM:~$ idf_cli.py get-entities --guid pc_backup_metadata PC: nutanix@PCVM:~$ idf_cli.py get-entities --guid pc_backup_metadata
Workaround: NOTE: AOS versions 6.5 and onwards might have the scripts cleanup_backup_entities_on_pe.py and cleanup_synced_data_of_unregistered_clusters.py already installed by default; however, for this issue make sure to use the version of the scripts downloaded in step 2. 1. Stop IDT on all nodes on the PE: nutanix@CVM:~/bin$ allssh genesis stop insights_data_transfer 2. Copy cleanup_backup_entities_on_pe_copy.py https://download.nutanix.com/kbattachments/15522/cleanup_backup_entities_on_pe_copy.py, cleanup_synced_entities_unregistered_clusters_copy.py https://download.nutanix.com/kbattachments/15522/cleanup_synced_data_of_unregistered_clusters_copy.py and cassandra_helper_client.py https://download.nutanix.com/kbattachments/15522/cassandra_helper_client.py to the /home/nutanix/bin folder on any node on the PE. 3. Run the cleanup backup entities script on the PE with the PC cluster uuid: nutanix@CVM:~/bin$ python cleanup_backup_entities_on_pe_copy.py --unregistered_cluster_uuid=ca4efbab-2665-487c-ae88-f2c447173a30 4. Run the cleanup synced entities script on the PE with the PC cluster uuid: nutanix@CVM:~/bin$ python cleanup_synced_data_of_unregistered_clusters_copy.py --unregistered_cluster_uuid=ca4efbab-2665-487c-ae88-f2c447173a30 5. Restart IDF: nutanix@CVM-:~/bin$ allssh links --dump http:0:2027/h/exit 6. Start IDT on all nodes on the PE: nutanix@CVM:~/bin$ cluster start 7. Confirm the pc_backup_metadata entity is empty: nutanix@CVM:~/bin$ idf_cli.py get-entities --guid pc_backup_metadata 8. Re-enable PC DR on the PC side. 9. On the PE, confirm the pc_backup_metadata entity is synced from the PC. Example: nutanix@CVM:~$ idf_cli.py get-entities --guid pc_backup_metadata
KB12203
DMAR errors on HPE nodes
DMAR errors on HPE nodes
With the upgrade of AHV on HPE nodes to 20201105.X versions, the host might become unresponsive, showing DMAR errors on the console. This is a known issue when the Linux "intel_iommu=on" kernel boot parameter is used on HPE servers. HPE has an advisory for this; refer to the below for more information: HPE Advisory https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04565693 HPE Customer Notice https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04446026 Identification: (1) Host is affected: Hosts must be running on HPE HW. The target AHV version during the upgrade is 20201105.X. The upgrade completes on the host, but the host becomes inaccessible over the network, showing DMAR errors on the host console: 2021-06-27T19:31:10.602555+00:00 XXX-Node-1 kernel: [ 4373.322998] DMAR: [DMA Read] Request device [03:00.0] PASID ffffffff fault addr 791dc000 [fault reason 06] PTE Read access is not set Querying the SEL, either using "hostssh ipmitool sel list" or the iLO interface, shows the following errors: 175 | 11/14/2021 | 14:35:33 | Drive Slot / Bay #0x4d | In Failed Array | Asserted A reboot of the host might restore access for a short period, but the host eventually becomes inaccessible again. (2) Host is not affected: Following SPP upgrades to > 2021.10.0, after any CVM reboot the following DMAR-related errors may be logged in dmesg on the host: [Wed Apr 27 15:33:31 2022] DMAR: DRHD: handling fault status reg 402 These errors are benign, have no effect on the node, and can be ignored; if these types of errors are encountered after a CVM reboot, the following workaround does not apply.
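To check all hosts in a cluster at once for these signatures, the standard hostssh wrapper can be used from any CVM (a sketch; it simply greps each host's kernel ring buffer):
nutanix@cvm$ hostssh 'dmesg | grep -i DMAR'
Hosts hitting the affected scenario will show the "PTE Read access is not set" faults above, while the benign post-SPP case shows only the "DRHD: handling fault status" lines.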
Workaround: Disable HPE Shared Memory on the HBA. Follow the steps to implement the workaround: Navigate to the HPE iLO console and perform a cold reboot. During system boot, press F9 to enter the RBSU. Go to System Utilities >> System Configuration. Click Embedded RAID1: smart HBA H240r controller. Select "HPE Shared memory features" - [enable]. Press Enter and select "Disable". Press F10 to save the changes and enter "Y". A reboot is required to apply the changes to the server.
KB6434
Pre-Upgrade Check: test_centos7_compatibility
Pre-Upgrade Check: test_centos7_compatibility
This is a pre-upgrade check that verifies that the current version is compatible with CentOS 7.From version 5.5 onwards, the cluster needs to be compatible with CentOS-7.3+ (el7). If the cluster is already on CentOS 7.x then the check is skipped. If the cluster is not already on CentOS 7.x then a CentOS-7.3 sandbox is installed and verified.Pre-Upgrade check failure - Unable to install CentOS 7 sandbox on [IPs of the nodes on which the installation failed]
Please collect an NCC log bundle and reach out to Nutanix Support to investigate this issue.
KB13556
Object | Baseline Replicator fails since it is unable to log Unicode characters in the log file
baseline_replicator fails if the bucket or file name includes Unicode characters.
If the bucket or file name includes Unicode characters, replicating the baseline image fails with a blank INFO log. /tmp/baseline_replicator --source_endpoint_url=http://xxx.xxx.xxx.130 --source_bucket_name=<bucket_name> --source_access_key=<key> --source_secret_key=<Key> --max_concurrency=200 --log_level=DEBUG 2022-08-23 23:04:19,061 (MainThread) DEBUG [patched_baseline_replicator.py:133] : Adding a baseline tag for -1/20220602/企业微信截图_16534434413367.jpg:null
Nutanix Engineering is aware of the issue and is working on a fix in a future release. The Baseline Replicator is a tool with which you can replicate existing objects in your bucket to a bucket at a remote site. You can use this tool only after a replication relationship is created with the destination bucket. You need to first identify the destination bucket and then run this tool against the source bucket. baseline_replicator tool reference: https://portal.nutanix.com/page/documents/details?targetId=Objects-v3_5:top-baseline-replication-c.html
KB5114
CVM Won't Boot after Upgrade to VMware Virtual Hardware Version 13
It has been observed that upgrading a Controller VM's Virtual Hardware Version to 13 in ESXi 6.5 will cause the CVM to be unable to boot. This is due to a missing configuration parameter in the .vmx file designating the presence of a PCI Passthrough device.
If the ESXi host is running version 6.5 and the CVM's Virtual Hardware is upgraded to Version 13, the CVM may no longer boot and will instead display the following message in the virtual machine console: FATAL Module scsi_wait_scan not found Issue identification: 1. The ESXi host is running version 6.5 and the CVM shows Virtual Hardware Version 13. 2. Log in to a working CVM and run the following command to see if PCI passthrough is enabled in the .vmx file for the affected CVM. NOTE: This will not be present in the vSphere GUI if it has this value. You must verify via the CLI. nutanix@CVM:~$ for i in `hostips` ; do echo "=============== $i ==============="; ssh -l root $i 'grep "pciPassthru[0-9].present" /vmfs/volumes/NTNX-local*/ServiceVM_Centos/ServiceVM_Centos.vmx'; done The following VMware KB describes the situation noted above: kb.vmware.com/s/article/70668 http://kb.vmware.com/s/article/70668
In order for the CVM to boot, the pciPassthru0.present configuration parameter should be set to TRUE. Note that a host might have more than one pciPassthru entry if the node has more than one disk controller. This enables the CVM to access the disks owned by the LSI PCI disk controller associated with it. Steps to resolve: 1. Power down the affected CVM in vSphere. 2. SSH to the ESXi host where the affected CVM resides. 3. Change directory to the local datastore folder where the CVM's .vmx file resides: [root@ESX:~] cd /vmfs/volumes/NTNX-local*/ServiceVM_Centos/ 4. Take a backup of the original vmx file: [root@ESX:/vmfs/volumes/NTNX-local/ServiceVM_Centos] cp ServiceVM_Centos.vmx ServiceVM_Centos.vmx.bak 5. Edit the original vmx file with vi or your text editor of choice and change the pciPassthru0.present parameter from FALSE to TRUE: [root@ESX:/vmfs/volumes/NTNX-local/ServiceVM_Centos] vi ServiceVM_Centos.vmx Example of the config file showing the incorrect value: pciBridge7.present = "TRUE" 6. Once you have saved the changes, power on the CVM and it should boot normally. Nutanix recommends that all CVMs be on the same VMware Virtual Hardware version. Furthermore, VMware does not allow downgrading of this software. As such, you must continue to upgrade the virtual hardware versions of all other CVMs in this Nutanix cluster and update the vmx configuration to enable PCI passthrough using the steps described above. In general, Nutanix does not recommend that customers change the Virtual Hardware versions of the CVMs in their cluster. Note that Nutanix upgrades the virtual hardware version of the CVMs automatically during an AOS upgrade and that this is the preferred method to do so. Please make sure that the Controller VMs in the vSphere cluster are not configured to upgrade Virtual Hardware automatically. You can find instructions on these settings in VMware's documentation, found here https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-FD4ACEFC-1AE8-4DCB-9EAF-A21A4040DD33.html. To check if all CVMs in the cluster are running the same Virtual Hardware version, run the following NCC check from a working CVM: ncc health_checks system_checks cvm_virtual_hardware_version_check
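As an alternative to editing the file interactively in step 5, the parameter can be flipped with a one-line sed from the ESXi shell (a sketch; assumes the parameter already exists in the file set to FALSE, and that the backup from step 4 was taken first):
[root@ESX:/vmfs/volumes/NTNX-local/ServiceVM_Centos] sed -i 's/pciPassthru0.present = "FALSE"/pciPassthru0.present = "TRUE"/' ServiceVM_Centos.vmx
Re-run the grep loop from the identification section afterwards to confirm the value now reads TRUE.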
KB2895
Addressing IPMI v2.0 Password Hash Disclosure CVE-2013-4786
IPMI v2 has a security vulnerability which affects Nutanix Platform Models. The following article highlights the steps to mitigate the risks from vulnerability scans and attacks.
IPMI version 2.0 is susceptible to exploitation that allows an attacker to obtain password hash information. A Nessus vulnerability scan by Tenable Network Security reports the following: CVE-2013-4786 Description
Currently, there is no patch for this vulnerability. Disabling all BMC remote functions (iKVM) and operations through (out-of-band) network queries will mitigate the security risks. Applying the following steps will pass the vulnerability scans. The solution applies to NX hardware only. Side effect: All BMC remote functions (iKVM) and operations through (out-of-band) network queries will not work after disabling port 623. The port can be re-enabled from the IPMI web interface when remote control functions are needed. Note that the web GUI will continue working, so it will be possible to, for example, view the console and mount an ISO via HTTP HTML5 (if supported by the installed BMC firmware). Disabling the port in IPMI on NX platforms: For X9 (NX-1/3/6x00), X10 (NX-G4 & G5) and X11 (NX-G6 & G7) platforms: Log on to the IPMI web UI. Navigate to Configuration and click on Port. Uncheck Virtual Media IPMI command port (623). Click Save to apply. For X12 (NX-G8) platforms: Log on to the IPMI web UI. Navigate to Configuration -> Network and click on Port. Set the Virtual Media Port setting to OFF. Click Save to apply.
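To verify from a management workstation that the RMCP+ port is no longer exposed after the change, a simple UDP port probe can be used (a sketch using nmap; <ipmi_ip> is a placeholder for the BMC address):
$ nmap -sU -p 623 <ipmi_ip>
The port should report closed or filtered once the Virtual Media IPMI command port 623 has been disabled.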
KB12228
LCM upgrade failed "Cannot complete login due to an incorrect user name or password."
LCM upgrade (Involving host reboot) in ESXi environment fails with error "Could not put host out of maintenance mode".
The LCM upgrade operation fails to bring the node out of maintenance mode, as the vCenter credentials could not be validated. Log location: /home/nutanix/data/logs/genesis.out reports the following traceback with the error message: Traceback (most recent call last):
The above-described failure can occur due to one of the following reasons: Connectivity disruption with vCenter while the upgrade is in progress. It is possible that the connection between the Nutanix cluster and vCenter was disrupted. Please refer to KB 3815 http://portal.nutanix.com/kb/3815 Changing the credentials while the upgrades are still running. There are instances where an environment changes Active Directory credentials frequently, which can lead to such failures. Please make sure the vCenter credentials are not updated during the upgrade process. If your node is still in maintenance mode, please remove the ESXi host out of maintenance mode manually (see the sketch below). If you require further assistance, please contact Nutanix Support https://portal.nutanix.com/
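If the vSphere client is unavailable, the host can also be taken out of maintenance mode from its own shell (a sketch using the standard ESXi vim-cmd tooling; run via SSH on the affected host):
[root@ESX:~] vim-cmd hostsvc/maintenance_mode_exit
The equivalent action in the vSphere client is right-clicking the host and selecting "Exit Maintenance Mode".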
KB17093
Nutanix Files - Enabling Network Segmentation for Volumes when File Server Storage Network is already on the target subnet (no reIP)
This article details the manual processes to be performed on the File Server VMs (FSVMs) if the Files storage network is already on the target subnet before Network Segmentation for Volumes is enabled on Prism Element (PE).
This article only applies when the File Server storage network is already on the desired subnet for which you plan to configure Network Segmentation for Volumes. For the following, please see the Files - Network Segmentation https://portal.nutanix.com/page/documents/details?targetId=Files:fil-files-network-segmentation-c.html documentation on Nutanix Portal: Network Segmentation for Volumes is enabled, and no File Server has been deployed Deploy the File Server with the storage network on the Network Segmented subnetNo further action is required Network Segmentation for Volumes is disabled, a File Server is already deployed, and the storage network needs to be reIP'd Enable Network Segmentation for Volumes in Prism Element (PE)Perform ReIP of the Nutanix Files storage network to be on the same subnet as the Network Segmented subnet Files - Updating the Network Configuration https://portal.nutanix.com/page/documents/details?targetId=Files-v5_0:fil-fs-network-update-t.html > This process updates the File Server firewall for communication with the CVMs new network and File Server configuration to use the segmented External Data Services IP.No further action is required. When the File Server is already deployed on the same subnet for which Network Segmentation will be configured, manual steps must be taken to allow communication between the File Server and CVMs.
As the File Server's storage network was already configured to leverage the Network Segmented subnet before enabling Network Segmentation for Volumes, the user would not perform a reIP of the Files storage network. As this scenario bypasses the built-in processes used to configure the File Server for proper communication to the CVMs and External Data Services IP, we must update the File Server firewall and internal configuration to allow communication. Note: SSH from the CVM to any FSVM will not function until after the Nutanix Files iptables configuration has been updated to allow traffic on the Segmented subnet. The following steps will be divided between File Servers v4.3.0.1 and earlier and File Servers v4.4.x and 5.x due to an issue with the firewall command in the latter versions of Files. For Nutanix Files 4.3.0.1 and earlier: Open a console session to one FSVM and log in to the terminal with the 'nutanix' user credentials.From the FSVM command line, collect the current File Server configuration nutanix@FSVM:~$ afs fs.info File server name : afs01 File server uuid : 0112af76-4b32-4f52-9f8f-2cf5dd249b9a File server version : 4.3.0.1 --- External data services IP address : xx.xx.141.xx CVM IP addresses : xx.xx.141.xx, xx.xx.141.yy, xx.xx.141.zz PC IP address : --- Collect existing iptables configuration for the AFS_CVM_LIST chain nutanix@FSVM:~$ allssh 'sudo iptables -nL AFS_CVM_LIST -w 5 -W 200000' ================== xx.xx.xx.2 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.141.0/24 0.0.0.0/0 ================== xx.xx.xx.3 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.141.0/24 0.0.0.0/0 ================== xx.xx.xx.4 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.141.0/24 0.0.0.0/0 Collect current iscsi session information (note output will vary in quantity depending on the number of shares/exports on the File Server) nutanix@FSVM:~$ allssh 'sudo iscsiadm -m session P0' ================== xx.xx.xx.2 ================= tcp: [2] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:94fdecf0-dcd5-4c1b-a19d-7203e804c841-tgt0 (non-flash) sdd ================== xx.xx.xx.3 ================= tcp: [1] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt2 (non-flash) sdd tcp: [2] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt3 (non-flash) sde tcp: [3] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt6 (non-flash) sdf tcp: [4] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt0 (non-flash) sdk tcp: [5] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt5 (non-flash) sdg tcp: [6] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:d70edfee-a84c-4822-8125-0ed88ea9ea74-tgt0 (non-flash) sdh tcp: [7] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt1 (non-flash) sdi tcp: [8] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt4 (non-flash) sdj ================== xx.xx.xx.4 ================= tcp: [2] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:86af4e90-697f-4d5b-8cea-9efcf934a170-tgt0 (non-flash) sdd Update the File Server iptables to add the Network Segmented subnet and remove the old subnet from above from the AFS_CVM_LIST nutanix@FSVM:~$ afs net.update_afs_cvm_list local_only=false 
subnets_to_add=xx.xx.133.0/24 subnets_to_delete=xx.xx.141.0/24 Note: In the above, we are adding the new subnet xx.xx.133.0/24 and removing the old xx.xx.141.0/24 from the iptables config. At this point, SSH will be allowed from the CVM to the FSVMs within that File Server. Verify the proper subnet in the AFS_CVM_LIST nutanix@FSVM:~$ allssh 'sudo iptables -nL AFS_CVM_LIST -w 5 -W 200000' ================== xx.xx.xx.2 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.133.0/24 0.0.0.0/0 ================== xx.xx.xx.3 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.133.0/24 0.0.0.0/0 ================== xx.xx.xx.4 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.133.0/24 0.0.0.0/0 Repeat the above steps for all File Servers that are hosted on the same PE cluster if more than one (1) File Server is deployed.Return to the CVM command line and run the following command to update the DSIP configuration on the File Server nutanix@CVM:~$ afs infra.config_change_notify Example: nutanix@CVM:~$ afs infra.config_change_notify Building the WAL to be used as arg by the master task. Created task 5ba77a56-8157-406a-6e91-7ad3ecb0fafa to notify CVM change to all fileservers. Note: This command applies to ALL File Servers on the PE Cluster at the same time. Verify successful task completion (the number of SubTasks will vary based on the number of File Servers on this PE) nutanix@CVM:~$ ecli task.list component_list=minerva_cvm Task UUID Parent Task UUID Component Sequence-id Type Status ece0068a-6c1f-4463-7687-17252a2b85a8 5ba77a56-8157-406a-6e91-7ad3ecb0fafa minerva_cvm 58 CvmConfigChangeNotifySubTask kSucceeded 5ba77a56-8157-406a-6e91-7ad3ecb0fafa minerva_cvm 57 CvmConfigChangeNotifyMasterTask kSucceeded SSH back to any FSVM on the File Server and verify the CVM and DSIP have been updated nutanix@FSVM:~$ afs fs.info File server name : afs01 File server uuid : 0112af76-4b32-4f52-9f8f-2cf5dd249b9a File server version : 4.3.0.1 --- External data services IP address : xx.xx.133.xx <--- DSIP CVM IP addresses : xx.xx.133.xx xx.xx.133.yy xx.xx.133.zz <--- CVM IPs PC IP address : --- Verify that the iscsi sessions reflected the segmented DSIP nutanix@FSVM:~$ allssh 'sudo iscsiadm -m session P0' ================== xx.xx.xx.2 ================= tcp: [2] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:94fdecf0-dcd5-4c1b-a19d-7203e804c841-tgt0 (non-flash) sdd ================== xx.xx.xx.3 ================= tcp: [1] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt2 (non-flash) sdd tcp: [2] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt3 (non-flash) sde tcp: [3] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt6 (non-flash) sdf tcp: [4] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt0 (non-flash) sdk tcp: [5] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt5 (non-flash) sdg tcp: [6] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:d70edfee-a84c-4822-8125-0ed88ea9ea74-tgt0 (non-flash) sdh tcp: [7] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt1 (non-flash) sdi tcp: [8] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt4 (non-flash) sdj ================== xx.xx.xx.4 ================= tcp: [2] xx.xx.133.xx:3260,1 
iqn.2010-06.com.nutanix:86af4e90-697f-4d5b-8cea-9efcf934a170-tgt0 (non-flash) sdd For Nutanix Files 4.4.x and 5.x: Open a console session to one FSVM and log in to the terminal with the 'nutanix' user credentials.From the FSVM command line, collect the current File Server configuration nutanix@FSVM:~$ afs fs.info File server name : afs01 File server uuid : 0112af76-4b32-4f52-9f8f-2cf5dd249b9a File server version : 4.4.0.1 --- External data services IP address : xx.xx.141.xx CVM IP addresses : xx.xx.141.xx, xx.xx.141.yy, xx.xx.141.zz PC IP address : --- Collect existing iptables configuration for the AFS_CVM_LIST chain nutanix@FSVM:~$ allssh 'sudo iptables -nL AFS_CVM_LIST -w 5 -W 200000' ================== xx.xx.xx.2 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.141.0/24 0.0.0.0/0 ================== xx.xx.xx.3 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.141.0/24 0.0.0.0/0 ================== xx.xx.xx.4 ================= Chain AFS_CVM_LIST (1 references) target prot opt source destination ACCEPT all -- xx.xx.141.0/24 0.0.0.0/0 Collect current iscsi session information (note output will vary in quantity depending on the number of shares/exports on the File Server) nutanix@FSVM:~$ allssh 'sudo iscsiadm -m session P0' ================== xx.xx.xx.2 ================= tcp: [2] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:94fdecf0-dcd5-4c1b-a19d-7203e804c841-tgt0 (non-flash) sdd ================== xx.xx.xx.3 ================= tcp: [1] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt2 (non-flash) sdd tcp: [2] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt3 (non-flash) sde tcp: [3] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt6 (non-flash) sdf tcp: [4] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt0 (non-flash) sdk tcp: [5] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt5 (non-flash) sdg tcp: [6] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:d70edfee-a84c-4822-8125-0ed88ea9ea74-tgt0 (non-flash) sdh tcp: [7] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt1 (non-flash) sdi tcp: [8] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt4 (non-flash) sdj ================== xx.xx.xx.4 ================= tcp: [2] xx.xx.141.xx:3260,1 iqn.2010-06.com.nutanix:86af4e90-697f-4d5b-8cea-9efcf934a170-tgt0 (non-flash) sdd Backup iptables to a tmp location (command will not generate output) nutanix@FSVM:~$ allssh 'sudo iptables-save > ~/config/salt_templates/afs_iptables.backup' Add the network segmented subnet to the AFS_CVM_LIST (command will not generate output) nutanix@FSVM:~$ allssh 'sudo iptables -A AFS_CVM_LIST -s xx.xx.133.0/24 -j ACCEPT' Remove the old CVM subnet from AFS_CVM_LIST (command will not generate output) nutanix@FSVM:~$ allssh 'sudo iptables -D AFS_CVM_LIST -s xx.xx.141.0/24 -j ACCEPT' Save iptables (command will not generate output) nutanix@FSVM:~$ allssh 'sudo iptables-save > ~/config/salt_templates/afs_iptables.backup' Note: In the above, we are adding the new subnet xx.xx.133.0/24 and removing the old xx.xx.141.0/24 from the iptables config. At this point, SSH will be allowed from the CVM to the FSVMs within that File Server. 
Verify the proper subnet in the AFS_CVM_LIST nutanix@FSVM:~$ allssh 'sudo iptables -nL AFS_CVM_LIST -w 5 -W 200000' Repeat the above steps for all File Servers that are hosted on the same PE cluster if more than one (1) File Server is deployed. Return to the CVM command line and run the following command to update the DSIP configuration on the File Server nutanix@CVM:~$ afs infra.config_change_notify Example: nutanix@CVM:~$ afs infra.config_change_notify Building the WAL to be used as arg by the master task. Created task 5ba77a56-8157-406a-6e91-7ad3ecb0fafa to notify CVM change to all fileservers. Note: This command applies to ALL File Servers on the PE Cluster at the same time. Verify successful task completion (the number of SubTasks will vary based on the number of File Servers on this PE) nutanix@CVM:~$ ecli task.list component_list=minerva_cvm Task UUID Parent Task UUID Component Sequence-id Type Status ece0068a-6c1f-4463-7687-17252a2b85a8 5ba77a56-8157-406a-6e91-7ad3ecb0fafa minerva_cvm 58 CvmConfigChangeNotifySubTask kSucceeded 5ba77a56-8157-406a-6e91-7ad3ecb0fafa minerva_cvm 57 CvmConfigChangeNotifyMasterTask kSucceeded SSH back to any FSVM on the File Server and verify the CVM and DSIP have been updated nutanix@FSVM:~$ afs fs.info File server name : afs01 File server uuid : 0112af76-4b32-4f52-9f8f-2cf5dd249b9a File server version : 4.4.0.1 --- External data services IP address : xx.xx.133.xx <--- DSIP CVM IP addresses : xx.xx.133.xx xx.xx.133.yy xx.xx.133.zz <--- CVM IPs PC IP address : --- Verify that the iscsi sessions reflect the segmented DSIP nutanix@FSVM:~$ allssh 'sudo iscsiadm -m session P0' ================== xx.xx.xx.2 ================= tcp: [2] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:94fdecf0-dcd5-4c1b-a19d-7203e804c841-tgt0 (non-flash) sdd ================== xx.xx.xx.3 ================= tcp: [1] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt2 (non-flash) sdd tcp: [2] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt3 (non-flash) sde tcp: [3] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt6 (non-flash) sdf tcp: [4] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt0 (non-flash) sdk tcp: [5] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt5 (non-flash) sdg tcp: [6] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:d70edfee-a84c-4822-8125-0ed88ea9ea74-tgt0 (non-flash) sdh tcp: [7] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt1 (non-flash) sdi tcp: [8] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:21bef29e-7a3c-4487-bcc0-2e0f4f21a33a-tgt4 (non-flash) sdj ================== xx.xx.xx.4 ================= tcp: [2] xx.xx.133.xx:3260,1 iqn.2010-06.com.nutanix:86af4e90-697f-4d5b-8cea-9efcf934a170-tgt0 (non-flash) sdd
KB5484
SMTP Test email failing with error: "Exception reading response" or "Client was not authenticated" or "Can't send command to SMTP host"
The SMTP test email might sometimes fail both in Prism and on the command line. Verify whether the same failure occurs with the security mode set to "STARTTLS" and the correct credentials.
Sometimes, after entering the correct SMTP details with the security mode set to "None", the test email fails with one of the following errors: Error while sending email: Failed messages: OR Error while sending email: Failed messages:com.sun.mail.smtp.SMTPSendFailedException: 530 5.7.1 Client was not authenticated OR Error: Error while sending email: Can't send command to SMTP host The SMTP configuration itself is fine in this case. The same failure may be seen when testing from the command line: nutanix@NTNX-XYZ-A-CVM:1X.X.X.X:~$ ncli cluster send-test-email recipient=abc@xyz.com subject="Test email"
Provided there are no issues with the SMTP server itself and netcat, ping, and telnet to the SMTP server have been tested successfully, do the following: Instead of security mode "NONE", select "STARTTLS" (or whichever mode the SMTP server is configured for). Enter the correct credentials (username and password) for the SMTP server, the hostname, and the port number (usually 25), and save the configuration. Then test the SMTP configuration; the test email should now go through. After this, the security mode can be set back to "NONE" and sending should continue to work.
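The same configuration can be reviewed and re-tested from the CVM command line (a minimal sketch; the ncli sub-commands below are standard, but confirm the exact parameter names on your AOS version with the built-in ncli help):

nutanix@cvm$ ncli cluster get-smtp-server
nutanix@cvm$ ncli cluster send-test-email recipient=abc@xyz.com subject="Test email"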
KB15110
LCM Pre-check: test_cisco_validate_credentials_and_setup
This pre-check is specific to the Cisco UCS platform and validates the Cisco UCSM credentials.
The pre-check "test_cisco_validate_credentials_and_setup" is introduced in LCM 2.6.2. It checks the overall requirements for performing operations (Inventory and upgrade) with UCSM.Known Issue: On LCM-2.6.2 for UCS M4 and M5 Standalone servers we are also asking for UCSM Credentials. Please refer KB- http://portal.nutanix.com/kb/15807 15807 http://portal.nutanix.com/kb/15807 for a solution for UCS M4 and M5 nodes.You can also enable LCM Auto-inventory on Cisco UCSM by providing the UCSM credentials:Please note: However, if the credentials expire or are invalid - then the LCM auto-inventory will reset.You will need to re-enable LCM Auto Inventory from the Settings section and provide the correct credentials. Sample failure message: Lcm prechecks detected 1 issue that would cause upgrade failures. Check 'test_cisco_validate_credentials_and_setup' failed: Failed to fetch the Credentials. Please refer to KB 15110 Please refer the below table for LCM support on different Cisco UCS models : [ { "Family": "M4/M5", "User Selected Mode": "Standalone", "Supported feature": "Standalone (Cisco IMC) mode is supported by NutanixStandalone (Intersight) mode is not supported by NutanixStandalone Credential is not requested.LCM Pre-check \"test_cisco_validate_credentials_and_setup\" will not fail.\n\t\t\t\tPlease note : we have an issue with LCM-2.6.2 where LCM requests for credentials. Please refer KB-15807 for solution.\n\t\t\t\tFirmware inventory is not supported.Software inventory and upgrade is enabled." }, { "Family": "M4/M5", "User Selected Mode": "Managed", "Supported feature": "UCSM Credential is requested.Firmware inventory is not supported.Software inventory and upgrade is enabled." }, { "Family": "M6/M7", "User Selected Mode": "Standalone", "Supported feature": "Credential is not requested.Software/Firmware inventory is not supported.LCM Inventory pre-check \"test_cisco_validate_credentials_and_setup\" will fail." }, { "Family": "M6/M7", "User Selected Mode": "Managed", "Supported feature": "UCSM Credential is requested.Software/Firmware Inventory is supported in LCM-2.6.2 onwards.Firmware upgrade is enabled LCM 2.7 onwards" }, { "Family": "M6/M7", "User Selected Mode": "ISM (Intersight) managed", "Supported feature": "Intersight Credential is requested.Software/Firmware Inventory is supported LCM-3.0.1 onwards.Firmware upgrade is enabled LCM 3.0.1 onwards" } ]
Note: Support for LCM inventory was introduced in LCM 2.6.2 (UCSM-managed clusters). When coming from an older LCM version to LCM 2.6.2, the LCM framework auto-upgrades itself and then tries to fetch the Cisco UCSM credentials, which it does not yet have during that first inventory. As a result, the pre-check fails the first time. The recommendation is to ignore the first failure and perform a re-inventory, which will prompt for the Cisco UCSM credentials in the LCM UI. Scenarios in which the pre-check can fail: [ { "Precheck failure": "Unable to get cluster info" }, { "Precheck failure": "Unable to check whether it is PCVM or not.", "Solution": "If LCM detects the system is a Prism Central VM, the precheck is skipped. But if LCM is unable to determine the environment, it may fail this precheck." }, { "Precheck failure": "Current privilege level does not satisfy minimum requirements", "Solution": "Based on the Cisco documentation on user roles, the user credentials should have either of the following privileges:\n\t\t\t\tadmin, ls-config-policy" }, { "Precheck failure": "Node with serial: <> is not associated to any service policy", "Solution": "Each of the nodes controlled by UCSM in a cluster should be “associated” and have a corresponding Service Profile. Verify this in UCS Manager." }, { "Precheck failure": "validate_ucsm_c_series_compatibility", "Solution": "The major UCSM version should be >= the major firmware version.\n\t\t\t\tExample: Major version 4.2(3g)A should be >= major C-Series version 4.2(3g)C." }, { "Precheck failure": "validate_ucsm_health", "Solution": "Check if UCSM is in a “healthy” state: the operState of the computeRackUnit object should be “ok”. Check the Overall Status for each server under the General tab of the server in UCS Manager." }, { "Precheck failure": "validate_space_for_bundle", "Solution": "Check if UCSM has enough space to accommodate a firmware bundle upload. The “used” space of UCSM should be <= 60% of the total for the precheck to pass." } ]
KB13848
Stargate service FATALs and multiple disks marked offline with AOS 6.5.x on SPDK enabled clusters
This article covers an issue with SPDK on certain drive models, seen when SPDK-enabled clusters run AOS 6.5.x.
Nutanix has identified an issue with certain NVMe drives where clusters with the SPDK feature enabled that are upgraded to AOS 6.5.x may start reporting Stargate FATALs and multiple disks marked offline. The issue is specific to clusters running the following combination of software, hardware, and feature set: AOS version 6.5 and 6.5.1.x. SPDK enabled (i.e. CVM memory size >= 48 GB). Specific disks with MDTS >= 8 can potentially hit this issue: MZXLR7T6HALA-000H3, MZXL57T6HALA-000H3, MZ7L33T8HBLTAD3, SAMSUNG MZ7L31T9, SAMSUNG MZ7L31T9HBLT-00A03, SAMSUNG MZ7L31T9HBLT-00A07, SAMSUNG MZ7L31T9HBLT-00B7C, SAMSUNG MZ7L33T8, SAMSUNG MZ7L33T8HBLT-00A03, SAMSUNG MZ7L33T8HBLT-00A07, SAMSUNG MZ7L33T8HBLT-00B7C, SAMSUNG MZQLB1T9HAJR-00007, SAMSUNG MZQLB3T8HALS-00007, SAMSUNG MZQLB7T6HMLA-00007, VK001920GZXQV, VK003840GZXRH To check the mdts value of the disk being used, use the command below: nutanix@cvm$ sudo nvme id-ctrl /dev/nvme0 | grep "mdts" Affected clusters could see Prism alerts for the Stargate service being down and NVMe SSD disks reporting high await times: nutanix@CVM:x.y.z.23:~$ ncli alert history duration=7 |egrep -E "Mes|Cre" Stargate.FATAL logs will report Stargate crashing with the below signatures: Log file created at: 2022/10/07 05:45:14 Log file created at: 2022/10/07 07:00:58 Stargate.INFO/ERROR logs will report "Contiguous read failed" errors: E20221007 07:00:00.070842Z 13543 spdk_executor.cc:1163] 0000:00:0c.0: Contiguous read failed for opcode 1 offset_=3458266570752, lba_offset=6754426896, aligned_offset_=3458266570752, blocks=2032, count_=1040384, aligned_count_=1040384, block_size=512, num_reads=1, ii=0, offset_arr[ii]=3458266570752, count_arr[ii]=1040384, rc=-14: Resource temporarily unavailable [11] /home/log/messages logs will report an INVALID FIELD error returned by SPDK: 2022-10-07T20:55:50.625526-07:00 NTNX-xxxx-A-CVM stargate[20852]: nvme_qpair.c: 307:spdk_nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:2 cid:7592 cdw0:0 sqhd:0000 p:0 m:0 dnr:1 /home/log/messages for spdk_epoll0 will report a segfault: 2022-10-07T08:02:47.091591-07:00 NTNX-xxxx-A-CVM kernel: [31041.941246] spdk_epoll0[4705]: segfault at 7fdf0a57bac0 ip 00007fdf22577a3f sp 00007fdf0a57bac0 error 6 in ld-2.17.so[7fdf22561000+22000]
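To check every NVMe controller on every CVM at once, a small loop can be used (a sketch assuming the controller device nodes follow the usual /dev/nvme0, /dev/nvme1, ... naming; the existence guard skips nodes that are not present):

nutanix@cvm$ allssh 'for dev in /dev/nvme[0-9]; do [ -e "$dev" ] || continue; echo "$dev"; sudo nvme id-ctrl "$dev" | grep mdts; done'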
This issue has been fixed in AOS 6.5.2 and 6.6. If you are running the affected releases and are using SPDK, consider upgrading to AOS 6.5.2 to prevent this issue. If the cluster AOS is older than 6.5, avoid upgrading to AOS 6.5 or 6.5.1.x. If the cluster is already running AOS 6.5.x and you are actively impacted by this issue, contact Nutanix Support https://portal.nutanix.com/page/home for assistance with disabling SPDK on the cluster.
KB4474
Lenovo - Booting up Hyper-V stuck at "Initializing legacy USB Devices"
While booting Hyper-V on Lenovo nodes, the boot gets stuck at "Initializing legacy USB Devices"
On Lenovo systems with Hyper-V installed, the Hyper-V boot may get stuck at "Initializing legacy USB Devices". Cause: Lenovo has found that a legacy Lenovo keyboard attached to the node causes this issue.
Unplug the keyboard and any other USB peripheral devices from the node and reset it.
KB4522
Increased IO Latency during Curator full scans
Clusters with huge amounts of data may see increased latency during Curator full scans.
Several ONCALLs have been opened reporting cluster-wide increased latency where similar symptoms were seen, such as: Spikes in the latency charts in Prism, VMs disconnecting from their volume groups, Some cases reported VMs becoming unusable. When these are recurring events, verify whether they coincide with Curator full scans by running this command from any CVM: curator_cli get_last_successful_scans Check the last full scan and compare it with the time the issue was reported. Using curator master: x.y.z.133:2010 Or you can get the same from the logs by running this command on the Curator leader CVM: grep "Full Scan" ~/data/logs/curator.*INFO* Compare the latest scans' timestamps: curator.CVM.nutanix.log.INFO.20170615-111600.17493:I0615 11:31:29.561938 17521 curator_task_scheduler.cc:995] Curator job id 1 with execution id 9558 (Full Scan) started for reasons [ Periodic ] NOTE: The full scan was still running when this command was executed, so there is no end time logged in the snippet above. Curator will generate a lot more work when it is configured to run in maintenance window mode, so check for this as well by running this command from any CVM: curator_cli curator_maintenance_window list=true It will show if any maintenance configuration is present Using curator master: x.y.z.133:2010 Verify the load on the HDDs: When the problem is currently present and a Curator scan is running, you can look at this command's output from the CVMs: iostat -xm 2 Verify the await, w_await, and %util fields for the HDDs. Values to look for are %util near 100 and w_await consistently going over 150. Or, from log bundles, run Panacea and check IOTOP and IOSTAT from the graphs. Verify the Stargate logs and check how long fsyncs take: x.y.z.171-logs/cvm_logs/stargate.INFO:I0405 09:55:59.030367 7667 local_egroup_manager.cc:773] fsync on disk 37 took approx 2003 msecs while persisting egroup 516572370 The above INFO messages will only be logged when the fsync takes longer than 2 seconds (2000 msecs). Seeing these logs indicates your drives are busy. Another symptom is ops being dropped from the QoS queue: allssh 'grep "ops from disk" ~/data/logs/stargate*INFO*' If you find the above symptoms in the logs and they correlate with the customer impact times, proceed to the next steps.
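To quickly tally the two Stargate symptoms above across the cluster, the log signatures can be counted per CVM (a simple sketch; the counts only cover the current, non-rotated stargate.INFO file):

nutanix@cvm$ allssh 'grep -c "fsync on disk" ~/data/logs/stargate.INFO'
nutanix@cvm$ allssh 'grep -c "ops from disk" ~/data/logs/stargate.INFO'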
WARNING: Support, SEs, and Partners should never make Zeus, Zookeeper, or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://nutanix.my.salesforce.com/kA0600000008SsH, and KB 1071 https://portal.nutanix.com/kb/1071.We can throttle the amount of work that Curator will generate for the HDDs by setting the following Curator gflags: --curator_mapreduce_disk_token_bucket_min_size=25165824 If the customer has Capacity tier dedup enabled, the fingerprint reduce tasks will generate a lot of extra IO. These tasks can be stopped by disabling capacity tier dedup on the container and setting the below Curator gflags: --curator_force_low_yield_fingerprint_deletion=true Note: In AOS versions below 5.9 the fingerprint deletion gflag was named curator_delete_low_yield_fingerprints. The customer may have had capacity tier dedup enabled at one point and then disabled it. This may still leave a lot of fingerprints behind in the metadata.You can check for this by grepping for the following on Curator master: allssh ' grep ContainerFingerprintedBytes ~/data/logs/*curator* ' It will show something similar to: I0429 22:00:22.330912 12239 curator_execute_job_op.cc:3790] ContainerFingerprintedBytes[19793] = 3127672471552 Alternatively, use the command: curator_cli get_counter_info counter_names=ContainerFingerprintedBytesIf this is the case, you still need to set the above 2 gflags.
KB11037
PE Tasks Showing as 99% in a Running State in PC UI Even Though They Have Completed
We have observed that the PC UI will show a number of user VM tasks running at 99% even though they have already completed on PE.
Scenario 1: We have observed that the Prism Central (PC) UI will show a number of user VM tasks running at 99% even though they have already completed on PE. This issue has been reproduced against VMs that were created using LEAP for a planned failover. If a planned failover has been done, this is likely the scenario you are hitting. When looking in PC, you will see the following: Typically these tasks will be days, weeks, or months old. If they are more recent, you will want to double-check that the PE cluster listed under the 'Cluster' column does not show any running tasks. If you see the same tasks running in PE, this is not your issue and should be investigated outside of this KB. Scenario 2: A similar issue is being investigated in ENG-423624 https://jira.nutanix.com/browse/ENG-423624 where Catalog tasks fail to sync from PE back to PC. Catalog service deletion tasks in Prism Element were completed, but due to a logical timestamp mismatch, they remain running at 99% on Prism Central, blocking further LCM operations. These tasks were complete in Ergon but were forced by Prism to remain at 99% due to the out-of-sync state. On PE: nutanix@CVM:~$ ecli task.get 402e4511-0099-4e49-a968-e1e363a3e004 On PC: nutanix@PCVM:~$ ecli task.list include_completed=0 limit=10000
Scenario 1A: To work around the issue, we need to use the VM UUID to update the logical timestamp of the VMs in question. Maintain the list of UUIDs in a file called vms.txt in /home/nutanix/tmp on the PE cluster in question. To create a list of impacted VM UUIDs, leverage the following steps to identify the VMs that have a percentage complete of 99% in order to update their logical timestamp: From the PCVM, run (note that the customer will need to enter the admin password for the Prism Central GUI): nutanix@PCVM:~$ curl -X GET --header "Accept: application/json" --insecure "https://127.0.0.1:9440/PrismGateway/services/rest/v1/progress_monitors?hasSubTaskDetail=false&count=500&page=1&filterCriteria=internal_task==false&oldProgressOnly==false" -u admin | python -m json.tool | grep -B17 '"percentageCompleted": 99' This will grab all tasks at a completion percentage of 99. Note the "entityId" and the "clusterUuid" that appear before the "percentageCompleted" lines in order to ensure that you SSH to the right PE cluster. You can check the output as follows: ------------------------ Once you have the "entityId", add it to a file in /home/nutanix/tmp called vms.txt on the correct PE cluster. Repeat the above steps until you have the complete list of impacted VMs in the file vms.txt on the PE cluster (this could potentially impact more than one PE cluster). The sample file will look like this: nutanix@CVM:~$ cat vms.txt From the PE cluster, with vms.txt in place, run the following non-disruptive command to update the logical timestamp: nutanix@CVM:~$ for vm in `cat vms.txt`; do annotation=`acli vm.get $vm |grep annotation | cut -d'"' -f2`; acli vm.update $vm annotation="temp"; acli vm.update $vm annotation="$annotation"; done This command will temporarily change the annotation for each VM and then set it back to what it was previously. Once the above command completes, check Prism Central to ensure the tasks that were previously stuck at 99% are now gone. If you still see some lingering tasks at 99%, repeat the above process to ensure no VMs were missed. In some cases, if the VMs are deleted or the recovery process is completed, the VMs will not be present in PE. We can restart aplos, aplos_engine, and insights_server on PE first, then on PC, to resolve the issue. nutanix@CVM:~$ allssh "genesis stop aplos aplos_engine insights_server && cluster start" Note: In some cases it was noticed that even after the restart of the services, the tasks still appeared to be running at 99% in the PC UI. However, the tasks disappeared from the PC UI after a while. Ensure you monitor the tasks for 24 hours before proceeding with further troubleshooting. Scenario 1B: In some scenarios, the VMs in question will no longer be present in Prism Element, making Scenario 1A's workaround impossible. If this is the case, you must delete the offending tasks directly from IDF using the following script: delete_tasks_from_idf_v1.py https://download.nutanix.com/kbattachments/11037/delete_tasks_from_idf_v1.py Place the script in /home/nutanix/bin on a CVM of the affected PE cluster and create a file 'tasks.txt.'
This file should contain the list of task UUIDs, one per line, that need to be deleted (these can be captured from the json_file.txt collected above). Execute the script from /home/nutanix/bin: nutanix@CVM:~$ python delete_tasks_from_idf_v1.py Scenario 2: Collect logs and attach your case to ENG-423624 https://jira.nutanix.com/browse/ENG-423624. You will likely need to open a TH to work around the issue after collecting the logs, similar to TH-6922 https://jira.nutanix.com/browse/TH-6922. Note: Based on the information available in ENG-376225 https://jira.nutanix.com/browse/ENG-376225, this issue is resolved when using pc.2020.9.0.1 or newer and AOS 6.1 or newer. If you see the issue on those or newer versions, the scenario requires further investigation.
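If many tasks are stuck, tasks.txt can be assembled from the ecli output instead of copying UUIDs by hand. A sketch, assuming (as in the sample output above) that the first column of "ecli task.list" is the task UUID; <task_type> is a placeholder for the task type seen stuck in your case:

nutanix@PCVM:~$ ecli task.list include_completed=0 limit=10000 | grep <task_type> | awk '{print $1}' > ~/tmp/tasks.txt
nutanix@PCVM:~$ cat ~/tmp/tasks.txt

Copy the resulting file alongside the script on the affected PE CVM before running it.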
KB12332
VMs which were part of FNS Security Policies may encounter network connectivity issues if Flow Network Security is later disabled
If Flow Network Security is disabled in Prism Central, depending on the timing of events, some VMs may encounter network connectivity issues due to leftover stale rules.
If Flow Network Security is disabled via Prism Central (PC) during reconciliation (rule synchronization between the control plane (PC) and hosts (PE)), some network flow state rules may be left behind without being cleaned up in one or more AHV host Open vSwitches (OVS). This may cause network connectivity issues for the corresponding VMs. This behavior may occur when Flow Network Security was disabled shortly after a reconciliation task was triggered to run in the background due to certain events, such as an FNS security policy update being pushed, or a service restart due to node network instability or a maintenance event.
The issue has been resolved in LTS 5.20.4, STS 6.1.1 and later versions.If you believe you are affected by this issue, please engage Nutanix Support https://portal.nutanix.com.
KB13175
BIOS upgrade to PW60.001 causes 'No network device' on NX-1175S-G6 with AHV hypervisor
After upgrading the BIOS to PW60.001, the AHV hypervisor recognizes the OnBoard LAN Device, which should be disabled. AHV tries to name the network interfaces eth0 and eth1, but those names can conflict with the existing ones during boot. This results in loss of uplink redundancy and leaves AHV unschedulable and unable to run UVMs. The issue only affects the AHV hypervisor on NX-1175S-G6 with PW60.001.
After upgrading the BIOS to PW60.001, the AHV hypervisor recognizes the OnBoard LAN Device, which should be disabled. AHV tries to name the network interfaces eth0 and eth1, but those names can conflict with the existing ones during boot. This results in loss of uplink redundancy and leaves AHV unschedulable and unable to run UVMs. When two or more hosts are unschedulable, the LCM task may fail. The issue only affects the AHV hypervisor on NX-1175S-G6 with PW60.001. Symptoms: You can check for the unexpected network interfaces using manage_ovs. nutanix@cvm:~$ manage_ovs show_interfaces Example of dmesg on PW50.002, which shows AHV recognizing the NIC in the PCI slot, as expected: [ 0.000000] DMI: Nutanix NX-WDT-1NL3-G6/X11SPW-TF-NI22, BIOS PW50.002 11/30/2021 Example of dmesg on PW60.001, which shows AHV unexpectedly recognizing the OnBoard LAN Device as well: [ 0.000000] DMI: Nutanix NX-WDT-1NL3-G6/X11SPW-TF-NI22, BIOS PW60.001 02/23/2022 NCC output shows "No network device", uplink redundancy loss, and that AHV is not schedulable. Detailed information for ovs_bond_config: acli host.list nutanix@cvm:~$ acli host.list acropolis.out on the Acropolis leader shows: Exception: Command failed: /sbin/ip link show eth1: Device "eth1" does not exist.
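To confirm which BIOS version each host is actually running, the DMI line can be pulled from dmesg on all hosts at once (a sketch run from any CVM with hostssh; the output format matches the examples above):

nutanix@cvm$ hostssh 'dmesg | grep "DMI:"'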
Nutanix plans to change the default value of the OnBoard LAN Device to Disabled in the next BIOS release, PW70.001. LCM has temporarily disabled the upgrade to BIOS PW60.001 for the NX-1175S-G6 with the NX-RIM-2.12 release, which has been in place since LCM 2.3.0.1. You may follow KB-7812 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CqBYCA0 to upgrade manually if BIOS level PW60.001 is needed; ensure you follow the workaround below to update the BIOS settings. Workaround: If you have already upgraded the BIOS and encountered the issue, change the BIOS settings as follows. 1. Reboot the host and press F11 to invoke the BIOS menu. 2. Select PCIe/PCI/PnP Configuration. 3. Navigate to the OnBoard LAN Device section and change it from Enabled to Disabled. After the change: 4. Confirm that AHV now recognizes the network interfaces for the NIC in the PCI slot. nutanix@cvm:~$ manage_ovs show_interfaces
KB15035
NDB - Provisioning a Windows DB server VM may fail with the error 'Failed to register database server: Error in attaching staging disk'.
The 'Failed to register database server: Error in attaching staging disk' error may be seen on Windows DB server VMs during a provisioning operation.
The following log snippet may be seen in ./logs/era_server/server.log: server.log:2023-06-01 21:05:18,004 [Worker-17] INFO [ERAAlertEngine] CREATEALERT with arguments: id:675dd7d2-13c6-45e8-ad2c-5e7780301583 msg:Failed to Provision Database Server VM. Reason: 'Failed to register database server: Error in attaching staging disk' date:2023-06-01 21:05:18 entityType:ERA_DBSERVER entityId:a8d31a95-c19c-4547-b6bb-ec12f27186d4 entityName:TestServer policyid:3da7ee79-6f52-4964-88ec-6da0c13fd56f opId:540d2884-46dd-4eae-af17-7c51d1a5ed34 ownerId:9e4b8416-11dd-4a52-82bd-da7c822ccba6 The same failure message may also be seen in the /home/era/era_base/logs/common/eraconnection.log file. The operation log file located under /home/era/era_base/logs/drivers/sqlserver_database/create_dbserver/<Operation-ID>.log may show the following traceback: "ModuleNotFoundError: No module named 'nutanix_era.era_drivers.orchestrators" ...
This error message can be seen due to an unsuccessful pip installation of the NDB modules on the Windows DB server VM. The pip installation fails because of the long Temp path formed while installing the modules. The command used to install the modules is: ['C:\\Users\\ndb\\AppData\\Local\\Temp\\ntnx-era-register-host-windows\\setup\\stack\\windows\\python\\scripts\\pip3.6.exe install --no-index drivers\\ntnx-era-drivers-2.5.2.tar.gz'] This command essentially installs the driver files by extracting them to the Temp directory and copying them to the site-packages directory under the Temp location. Note that Windows has a hard limit of 260 characters on path length unless long paths are enabled in the registry. The above command exceeds this hard limit, causing the operation to fail. The long path option can be enabled by following the steps below to resolve the failure: On the source DB server VM (the DB server VM that owns either the Software Profile or the Time Machine being used to provision the new DB server VM), open the Registry Editor by pressing Windows Key + R, typing "regedit", and clicking OK. Navigate to the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem Verify that the value "LongPathsEnabled" exists under this key. If it does not, create a new DWORD value with that name. Set the value of "LongPathsEnabled" to 1. After the long path option is enabled, create a new profile and retry the provisioning process. If a Time Machine is used instead, create a new snapshot for the Time Machine and retry the provisioning process. Additionally, verify that antivirus is disabled on the source DB server VM, since a running antivirus can cause provisioning failures with identical signatures.
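If you prefer to make the registry change from the command line, an equivalent can be run from an elevated PowerShell prompt on the source DB server VM (a sketch; confirm it complies with your security policies before applying):

PS C:\> New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name "LongPathsEnabled" -Value 1 -PropertyType DWord -Force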
KB5979
CVM services unable to come up due to disks missing from df -h
CVM services are unable to come up because disks are missing from df -h, as Hades will not mount the disks.
CVM services are unable to come up due to Hades failing to mount disks. This can be verified with the "df -h" command. nutanix@cvm:~$ df -h Note: The disks do not show up in "df -h", but they DO show up in lsscsi and list_disks. Furthermore, 'edit-hades -p' shows them as "disk_present: true". You will also see an error similar to the following in ~/data/logs/hades.out 2018-08-11 17:16:35 INFO disk_manager.py:399 Mounting disk: /dev/sde This can also be verified by going to ~/data/stargate-storage/disks and running the command "ls -al */". nutanix@cvm:~$ cd ~/data/stargate-storage/disks
In this case, Hades is unable to mount the disks because it sees a leftover "curator" folder (or "metadata" folder in newer AOS) in the mount path. To resolve this, the files/folders causing this need to be moved out of the mount path; moving (rather than deleting) them is preferable in case they are needed later (see the sketch below). nutanix@cvm:~$ cd ~/data/stargate-storage/disks/<disk-in-question> Restart Hades and then restart Genesis. nutanix@cvm$ sudo /usr/local/nutanix/bootstrap/bin/hades restart Through ENG-228661 http://jira.nutanix.com/browse/ENG-228661, logic was added to remove empty directories when trying to mount drives. If Hades sees empty directories, it deletes them from the mount path and attempts to mount the drives. If the directories in the mount path are not empty, mounting will continue to fail, as expected.
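For reference, moving the leftover directory out of the mount path could look like the following (a sketch assuming the leftover directory is named "curator"; substitute "metadata" or whatever directory is actually present, and keep the moved copy until the disk mounts cleanly):

nutanix@cvm$ mkdir -p ~/tmp/disk_leftovers
nutanix@cvm$ sudo mv ~/data/stargate-storage/disks/<disk-in-question>/curator ~/tmp/disk_leftovers/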
KB16466
LCM update or rolling reboot might fail in a mixed cluster running Icelake and Sapphire Rapids nodes on AHV
This KB article describes an issue where an LCM update or a rolling reboot might fail in a mixed cluster running Icelake and Sapphire Rapids nodes on the AHV hypervisor, with the error message "VMs need to be power cycled as these VMs are running on higher feature CPUs thus limiting their migration to lower feature CPU nodes in cluster"
This KB article describes an issue where an LCM update or a rolling reboot might fail in a mixed cluster running Icelake and Sapphire Rapids (SPR) nodes on the AHV hypervisor with the error message "VMs need to be power cycled as these VMs are running on higher feature CPUs thus limiting their migration to lower feature CPU nodes in cluster". Note: This issue may happen with any hardware vendor running mixed SPR and Icelake nodes on AHV. When this issue occurs, you will see the following message in the LCM UI: Lcm prechecks detected 1 issue that would cause upgrade failures. The same pre-check will also fail when you perform a rolling reboot or any other operation that requires migrating a VM. To identify whether you are hitting this issue, check the CPU difference between the two node generations: Log in to a host running an Icelake node, find a VM using virsh list, and dump the XML for the VM into a file. root@host# virsh dumpxml 3 | grep feature > vm3onicelake Log in to another host running a Sapphire Rapids node, find a VM using virsh list, and dump the XML for the VM into a file. root@host# virsh dumpxml 5 | grep feature > vm5onspr Compare the feature differences between the two XML files. root@host# diff vm3onicelake vm5onspr The above output shows the CPU feature flags that differ between the two node generations/models.
This issue occurs because AHV does not expose the virsh capabilities for Sapphire Rapids nodes. Thus, when a VM tries to migrate from an Icelake node to a Sapphire Rapids node, it sees CPU feature flag differences, and the migration is not allowed. This issue is resolved in: AOS 6.5.X family (LTS): AOS 6.5.6. AOS 6.8.X family (eSTS): AOS 6.8.0.5. Upgrade AOS to the versions specified above or newer. Procedure: Power cycle the VMs (a power-cycle sketch follows below). This lowers the CPU feature baseline of the VMs and avoids the failure of the vms_to_reboot pre-upgrade check. Upgrade the cluster to 6.5.6 or 6.8.0.5. (Optional) Power cycle the VMs after the upgrade is complete. This brings the VMs back to the baseline of higher CPU features (as existed before the issue).
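The power cycle must be a full power off/on from the host side; a reboot initiated inside the guest does not rebuild the VM's CPU feature baseline. A minimal sketch using acli from any CVM, where <vm_name> is a placeholder for each affected VM:

nutanix@cvm$ acli vm.off <vm_name>
nutanix@cvm$ acli vm.on <vm_name>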
KB3196
NCC Health Check: sata_dom_uvm_check
The NCC health check sata_dom_uvm_check checks if there are any VMs running on the SATA DOM.
Note: This health check has been retired from NCC 4.2.0. The NCC health check sata_dom_uvm_check verifies whether any VMs are running on the SATA DOM and reports a FAIL status if any are detected. No VMs should be running on the SATA DOM, as this accelerates SATA DOM degradation. This check is available only on ESXi from AOS 5.9.x. Running the NCC check Run the NCC check as part of the complete NCC Health Checks. nutanix@cvm$ ncc health_checks run_all Or run the sata_dom_uvm_check check individually. nutanix@cvm$ ncc health_checks hardware_checks disk_checks sata_dom_uvm_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every day, by default. This check will generate an alert after 1 failure. Sample output For Status: PASS Running /health_checks/hardware_checks/disk_checks/sata_dom_uvm_check on the node [ PASS ] For Status: WARN 1 Running /health_checks/hardware_checks/disk_checks/sata_dom_uvm_check on the node [ WARN ] For Status: WARN 2 Detailed information for sata_dom_uvm_check: Output messaging [ { "Check ID": "Checks that no guest VM is installed on SATA DOM." }, { "Check ID": "A guest VM is installed on SATA DOM.\t\t\tA VM is incorrectly configured." }, { "Check ID": "Remove guest VMs from SATA DOM.\t\t\tReconfigure guest VMs on the host machine." }, { "Check ID": "Degradation of the SATA DOM will be accelerated, leading to unavailability of the node." }, { "Check ID": "A1184" }, { "Check ID": "SATA DOM on ip_address has Guest VM." }, { "Check ID": "SATA DOM has Guest VM." }, { "Check ID": "SATA DOM on host ip_address contains a guest VM," } ]
For Status: WARN 1 Manually Finding the Location of the VMs Note: This check can fail if any VM names have spaces in them. This false positive is resolved in NCC 2.2.4 and later. To find the location of the VM, do the following. Hyper-V cluster Log on to a Controller VM by using SSH and run the following command. nutanix@cvm$ winsh From the hypervisor command line, run the following command. 192.168.5.1> (get-vm | get-vmharddiskdrive).Path An output similar to the following is displayed. C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\LocalVM.vhdx <--VM on local hard drive ESXi Log on to an ESXi host by using SSH and run the following command. root@host# vim-cmd vmsvc/getallvms This command retrieves a list and the location of all the VMs running on that server. An output similar to the following is displayed. Vmid Name File Guest OS Version Annotation Remove the VM from the SATA DOM and migrate it to another datastore. Note: If there are no VMs on the SATA DOM, the WARN status can be due to an ESXi VM annotation (notes) containing one or more line breaks, which creates the false positive (WARN example in the description section). To resolve this, identify the VMs that have annotations with a line break in them from the output in Step 2, update the annotation, and re-run the sata_dom_uvm_check. Note: The warning "VMs have no storage name" can also be seen if the VM name contains 3 or more spaces. Consider renaming the VM to have fewer than 3 consecutive spaces. Note: Starting with ESXi 7.0, a vSphere Cluster Services (vCLS) VM is deployed that will not be listed under VMs for a host in vSphere. It was noticed in one instance that after a SATADOM replacement, this VM came up on the SATADOM. The vCLS VM datastore location is chosen by default datastore selection logic. To override the default vCLS VM datastore placement for a cluster, you can specify a set of allowed datastores by browsing to the cluster and clicking ADD under Configure > vSphere Cluster Service > Datastores. This will move the VM off of the SATADOM. https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-6C11D7F9-4E92-4EA8-AA63-AABAD4B299E7.html For Status: WARN 2 This issue occurs when CVM files are placed in a directory in the local datastore other than the ServiceVM_Centos directory. Engage Nutanix Support https://portal.nutanix.com to get this issue resolved. If you get an ERR status, it indicates that the NCC health check could not properly poll the host for the status of the SATA DOM and its usage. There may be an issue elsewhere in the system. Always ensure that NCC is upgraded to the latest available version and re-run the health check to ensure you have the latest version of this check, including any fixes and improvements for reliability and resiliency. Note: In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/. Additionally, gather the output of "ncc health_checks run_all" and attach it to the support case.
KB16690
Prism Central - UI login errors with 403 access denied if DNS servers are not reachable
Both admin and AD users cannot log in to Prism Central due to DNS reachability issues.
Logging in to the Prism Central (PC) web interface, whether launched from Prism Element or using the individual PC IP address, results in a 403 access denied error. Additionally, when using the "go back to login" button, you are redirected to the login page but encounter a server timeout error when attempting to log in again. You can also see the below error when trying to log in to Prism Central: Upstream connect error or disconnect/reset before headers. Reset reason: connection failure Before proceeding, ensure the issue you are experiencing does not match KB 15566 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V0000010xP9SAI. Additionally, clear the browser cache or try logging in from a different browser. Check for the presence of DNS issues: NCC warnings about reachability to DNS: Detailed information for dns_server_check: Aplos.out will have DNS errors and warnings similar to the below: nutanix@pcvm:~$ less ~/data/logs/aplos.out
Verify that the configured DNS servers in the cluster respond to ping and resolve DNS queries. Remove servers that are unreachable or unable to resolve. nutanix@pcvm~$ ncli cluster get-name-servers Check the entries in /etc/resolv.conf (the DNS servers) to ensure they are all pingable and able to resolve DNS queries using nslookup nutanix@pcvm~$ allssh "cat /etc/resolv.conf" Validate via the following NCC check (this should PASS): nutanix@pcvm~$ ncc health_checks system_checks dns_server_check
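If one of the configured name servers is found to be dead, it can be removed (and a reachable one added) with ncli. A sketch, where the IP placeholders refer to the servers identified in the checks above:

nutanix@pcvm~$ ncli cluster remove-from-name-servers servers=<unreachable_dns_ip>
nutanix@pcvm~$ ncli cluster add-to-name-servers servers=<working_dns_ip>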
KB1270
Hyper-V: Accessing Nutanix Storage
In a Hyper-V Nutanix cluster, some additional configuration may be required on both the client and the Nutanix CVM to access SMB storage directly. This article discusses the required steps.
The Nutanix Controller VM presents SMB 3.0 shares to the Hyper-V hypervisor. These shares can be directly accessed or browsed. The shares are provided at the container level, with the format "\\<cluster name>\<container name>" Replace <cluster name> with the cluster name. Replace <container name> with the container name. Notes: When you run the "setup_hyperv.py setup_scvmm" script during the setup of a Hyper-V Nutanix cluster, the SCVMM server is granted access to the storage. This article does not apply to the SCVMM unless a new SCVMM server is introduced to the environment. The shares are presented as SMB 3.0 and are accessed by clients supporting the SMB 3.0 protocol. Windows 8 client machines and Windows 2012 servers and later can browse these shares. Windows 10 client machines and Windows 2016 servers with SMB 3.1 can access the SMB 3.0 shares only if Kerberos is enabled on the cluster.
Following are the steps to enable access to the storage: 1. Determine the share name. Following is the format of the shares: \\<cluster name>\<container name> The cluster name is at the top left on the Prism WebUI. You can also find Cluster Name and Cluster Domain in the output of the command nutanix@CVM:~$ ncli cluster info The container name can be found from Prism UI > Storage > Containers. You can also find the container details in the output of the command nutanix@CVM:~$ ncli container ls 2. Add the client to the filesystem whitelist. In the Prism WebUI, go to "Settings" and select "Filesystem Whitelists". Add individual client IPs to the whitelist (subnet mask 255.255.255.255) or entire subnets, as shown in the following image (subnet mask 255.255.255.0). Hosts and clients in the whitelist have complete access to the container share, so the whitelist must not include the general end-user pool. Alternatively, configure the whitelist via NCLI: nutanix@CVM:~$ ncli cluster get-nfs-whitelist Note: The nfs command controls NFS-share access on ESXi and SMB-share access on Hyper-V as of NOS 3.5.2. 3. Connect with an SMB 3.0 client. Use a Windows 8 or later client machine, or a Windows 2012 or later server. Navigate to the share you determined in Step 1. Permanently map it or perform a directory listing from the command prompt: dir \\<cluster name>\<container name> Note: Windows 10 client machines and Windows 2016 servers with SMB 3.1 can access the SMB 3.0 shares only if Kerberos is enabled on the cluster. You can reference KB 2126 https://portal.nutanix.com/kb/2126 for troubleshooting SMB access and KB 2263 https://portal.nutanix.com/kb/2263/ for troubleshooting host access to storage.
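As an example of step 3, a whitelisted Windows client can permanently map the share from a command prompt (a sketch; the drive letter Z: is arbitrary):

C:\> net use Z: \\<cluster name>\<container name> /persistent:yes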
KB10604
Alert - A801111 - L2StretchIpConflict
Investigating L2StretchIpConflict issues on Prism Central.
This Nutanix article provides the information required for troubleshooting the alert L2StretchIpConflict on Prism Central where Advanced Networking (Flow) is enabled. Alert overview The L2StretchIpConflict alert is raised when any common IP addresses are detected between local and remote availability zones. Sample alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Check ID": "Some IP address(es) are common across the subnets involved in the Layer-2 subnet extension" }, { "Check ID": "Some UVMs are allocated IP addresses that are in use in the peer AZ involved in Layer-2 subnet extension." }, { "Check ID": "Resolve the IP conflict by ensuring the IP addresses allocated to UVMs are unique across the AZs involved in the Layer-2 subnet extension." }, { "Check ID": "Some VMs in the subnets involved in Layer-2 subnet extension will be unable to communicate with other VMs in peer AZ." }, { "Check ID": "A801111" }, { "Check ID": "Some IP address(es) are common across the subnets involved in the Layer-2 subnet extension." }, { "Check ID": "Some IP address(es) are common across subnets involved in Layer-2 subnet extension" }, { "Check ID": "Some IP address(es) are common across subnets involved in Layer-2 subnet extension" } ]
Resolving the issue Delete the vNICs referenced by the alert in one of the availability zones or modify their IP addresses. If you need further assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB-2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB-6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching files to the case To attach files to the case, follow KB-1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
KB2667
[Performance] Detect Network bandwidth contention issues between CVMs
Information on how to use the ss command's output to troubleshoot IO performance issues caused by reduced network bandwidth.
In some cases, the direct network connection to the Nutanix nodes performs fine, but an issue in the upstream switching infrastructure can reduce the available bandwidth between the various nodes in a cluster. This reduction in available bandwidth can result in IO performance issues, which usually manifest as high IO latency in the hypervisor or Prism graphs. Ping statistics from the CVMs might not show any issue, as ping packets are small and do not test the bandwidth between the nodes. Example: Node1 -------- Switch1 -------- Switch2 -------- Node 2 If the link between switch1 and switch2 is oversubscribed or has packet forwarding issues, then the network connectivity between Node1 and Node2 will be affected. The counters on the Node1 and Node2 NICs will not show any issues.
One way to troubleshoot such issues is to monitor the state of the various TCP connections. The TCP protocol adjusts the transmission rate when it senses bandwidth contention between the sending and receiving entities. It does so by reducing the congestion window (cwnd) of that TCP connection. By monitoring the cwnd parameter of a TCP connection, one can detect bandwidth contention between the sending and receiving entities. CVMs come with the ss command, which dumps the Linux kernel's TCP state table in text format. By capturing the output of the ss command regularly, we can check whether there have been network issues between the CVMs. Starting with AOS release 4.5, the output of the "ss -tin" command is captured every 60 seconds to tcp_socket.INFO in the /home/nutanix/data/logs/sysstats folder. To manually collect this information, the following steps can be used to set up the data collection: Create an ssinfo folder in /home/nutanix and cd into that folder. Create an ssinfo.sh script in the ssinfo folder with the following contents: #!/bin/bash Run the ssinfo.sh script and make it a background process. nutanix@cvm$ nohup bash ssinfo.sh & The script will execute "ss -tin" every 30 seconds and write the output to the compressed gzip file - sstin.gz. The ssinfo.sh background process can be stopped by doing a "kill -9 <PID>". <PID> can be found by using "ps aux | grep ssinfo.sh" Here is a simple example of how to use the information to check on network status: Run a script that cleans up the data in the file in a space-delimited fashion so that it is easier to filter and sort. Here is an example script: nutanix@cvm$ sed '/^ESTAB/N;s/\n//' tcp_socket.INFO | grep ESTAB | tr "\t" " " | tr -s " " > cleaned_tcp_socket Now, find the TCP connections that are of interest. In this example, we are going to look at the TCP sessions used by Cassandra. These TCP connections use port 7000. Network timeouts in Cassandra are usually the first indicator of bad network connectivity between the CVMs. The below script filters for TCP connections using port 7000 and then sorts the output by TCP connection. nutanix@cvm$ grep ":7000" cleaned_tcp_socket | sort -k 4,5 The filtered and sorted output given above for one Cassandra connection clearly shows the following: The connection starts with a cwnd of 22 and a send rate of 137 Mbps. The connection then starts to increase the cwnd, which in turn increases the send rate. The highest send rate of 242.9 Mbps is hit at a cwnd of 39. Right after hitting the highest send rate, the cwnd drops to an extremely small value of 3, which brings the send rate down to 18.7 Mbps. So, clearly there is bandwidth contention between the 192.160.161.57 and 192.160.161.37 CVMs, which is causing the network to drop packets when the send rate starts going above 242 Mbps. A further look into the 192.160.161.57 CVM's Cassandra log files reveals timeouts between these CVMs. The command below finds all the timeout messages in the Cassandra logs between 05:10 and 05:20 AM on 30th July 2019, cuts out the peer IP address from the log messages, and then prints the number of occurrences of peer IP addresses in the log messages. This gives us an indication of which Cassandra peers are experiencing the most connection timeouts. nutanix@cvm$ grep "Caught Timeout exception" system.log | grep "2019-07-30 05:1.:" | cut -d " " -f 19 | sort | uniq -c The example output given above indicates that the current Cassandra node experienced the most connection timeouts to the 192.160.161.37 CVM.
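As a quick first pass over the cleaned capture, the smallest congestion windows observed can be listed directly; consistently tiny values point to the connections suffering packet loss (a simple sketch against the cleaned_tcp_socket file produced above):

nutanix@cvm$ grep -o "cwnd:[0-9]*" cleaned_tcp_socket | sort -t: -k2 -n | head -5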
The next step is to check the statistics on all the network devices between the CVMs to understand which one is causing the slowdown of network traffic.
KB14296
Performance | TCP Delayed-ACK feature in ESXi Causes Reduced IO Throughput
Workloads with a large amount of read IO running on ESXi clusters may experience elevated latency as throughput requirements increase.
Issue When a CVM is transmitting data to the ESXi host at high bandwidth, such as when one or more VMs are performing large sequential-read operations, the VMs may experience high latency and lower throughput/IOPS than expected. Symptoms/Signs High latency is reported by the hypervisor, while latency reported by Prism remains low. In this example, throughput from the CVM to the host increases: As it does, the latency reported by the hypervisor increases slowly at first, then extremely rapidly: The number of IOPS reported by Prism will drop as throughput goes up: Comparing Prism latency with what is reported by vCenter will show that IO latency at the CVM is significantly lower. In this case, <5 ms as compared to a peak of over 200 ms at the ESXi host/datastore level. Looking at the CPU usage on the ESXi host at peak throughput will likely show that a single physical CPU is running at/near 100%. Examining the CPU usage for vmk1 and vmk0 will show very high utilization for the internal vSwitch interface on the host: Cause This issue is due to VMware's implementation of the TCP Delayed-ACK feature first introduced in ESXi 6.7. With this feature enabled, very small rates of packet loss can result in significant increases in hypervisor latency due to the retransmission of NFS packets between the CVM and the hypervisor. VMware has outlined the basics of this issue in KB 59548 https://kb.vmware.com/s/article/59548 and provides a more detailed analysis in a whitepaper entitled ESXi NFS Read Performance: TCP Interaction between Slow Start and Delayed Acknowledgement https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/esxi7-nfs-read-perf.pdf
To resolve the issue and improve IO performance, the TCP Delayed-ACK feature needs to be disabled. The ability to disable it was introduced in ESXi 7.0. To disable the feature cluster-wide, execute the following command: nutanix@cvm:~$ hostssh 'esxcli system settings advanced set -o "/SunRPC/SetNoDelayedAck" -i 1' Note that in order for this to take effect, datastores must be unmounted and re-mounted on each ESXi host, or each host needs to be placed in maintenance-mode and restarted.To confirm that the feature has been disabled, execute the following command nutanix@cvm:~$ hostssh 'esxcli system settings advanced list | grep -A10 /SunRPC/SetNoDelayedAck' For comparison, after disabling the Delayed-ACK feature, latency reported at the datastore level for the same IO profile is significantly lower:With this change in place, max latency is ~60ms in this test, which was due to epoll thread exhaustion on the CVM.
KB16501
File Analytics - Kafka Container is in Unhealthy State
File Analytics - Kafka Container is in Unhealthy State
After scaling up the FAVM /dev/sdb partition, Kafka may be in an unhealthy state if Zookeeper tries to read a zero-byte file. Steps to determine whether the Kafka container is in an unhealthy state: Verify the container status on the FAVM [nutanix@NTNX-XX-XX-XX-XX-FAVM ~]$ docker ps Verify that none of the FAVM partitions are running out of space [nutanix@NTNX-XX-XX-XX-XX-FAVM ~]$ df -h Check for the error signature in /mnt/logs/containers/kafka/kafka.out [2024-04-02 18:47:38,641] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
Steps to determine whether a log file needs to be deleted: With Kafka unhealthy, restarting the Kafka service should print this error in /mnt/logs/containers/kafka/kafka.out [2024-04-03 05:46:30,273] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:22181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) This error means Zookeeper is not in good shape, as Kafka is not able to connect to it. Check for the error signatures in /mnt/logs/containers/kafka/zookeeper.out [2024-04-03 05:47:46,876] ERROR Last transaction was partial. (org.apache.zookeeper.server.persistence.Util) The above error confirms that there is a zero-byte log file present in the Zookeeper data dir, which needs to be removed. In the data directory /mnt/logs/containers/kafka/version-2/, search for a file of 0 bytes; the name will be log.<random char/number> (see the sketch after these steps). Delete the file, then restart the Zookeeper service first, followed by the Kafka service: docker exec -it Analytics_Kafka1 sh -c 'supervisorctl restart zookeeper' docker stop Analytics_Kafka1 && sleep 10 && docker start Analytics_Kafka1 Wait 10-20 seconds; the services should come up healthy.
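To locate the zero-byte file mentioned above without inspecting the directory by hand, find can be used on the FAVM (a sketch using the data directory path from this article):

[nutanix@NTNX-XX-XX-XX-XX-FAVM ~]$ sudo find /mnt/logs/containers/kafka/version-2/ -type f -name 'log.*' -size 0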
KB15484
Nutanix Files - Varonis not logging data for certain Files Shares
Nutanix Files - Varonis not logging data for certain Files shares
The customer has a Varonis Partner Server configured with Nutanix Files; however, Varonis is not logging data for certain Files shares. Issue 1 (connectivity-related issues): The customer has changed the Varonis Collector Server, and its IP address has also changed. Note: A customer may have a Varonis Collector Server and several other backend servers linked to it. Fetch the Partner Server information: nutanix@NTNX-XX-XX-XX-XX-A-FSVM:~$ nuclei partner_server.get Checking the minerva_vscand.log file will show these errors: E20230811 22:16:33.455695Z 27498 vscand_server.cc:418] Failed to get file server uuid from insights error 3 Check whether the FSVMs are still pointing to the old Varonis Server IP; an ESTABLISHED connection is expected on all the FSVMs: nutanix@NTNX-XX-XX-XX-XX-A-FSVM:~$ allssh "netstat -an |grep -w 5671" Issue 2 (partial logging): Varonis is logging data for some shares but not for all of them. Identify the shares that are not working and check the SMB client logs to see whether user events are being seen: nutanix@NTNX-XX-XX-XX-XX-A-FSVM:~$ allssh "sudo cat /home/log/samba/clients_* |grep 'sharename' |tail -n2" Check the minerva_vscand.log files for any logging for the non-working share. No logs are seen in minerva_vscand for the non-working share. Change the sharename accordingly: nutanix@NTNX-X-Y-Z-166-A-FSVM:~$ allssh "sudo zgrep <sharename> /home/log/vscand/minerva* |tail -n 5" Identify the shares that are not logging and note their share UUID and share path: <afs> share.list sharename=Act_Corp Compare the share UUID with the Varonis notification policy (trimmed output): <nuclei> notification_policy.get varonis mount_target_reference_list: Here we can see that a mount target UUID exists for the parent share Act_Corp_ata$, but no mount target exists for share Act_Corp$, which is where all the user activity is happening, so its audit events cannot be seen in Varonis.
Issue 1: Restart the vscand service on all the FSVMs one by one using this command: sudo killall minerva_vscand After the restart, it should show like this: nutanix@NTNX-XX-XX-XX-XX-A-FSVM:/home/log/vscand$ allssh "netstat -an |grep -w 5671" Issue 2: We can see audit events for the non-working shares in the SMB logs, but they are not logged in the minerva_vscand logs, since no notification policy is created for that particular nested share. Configuring parent shares in Varonis does not send events for the nested shares; nested shares need to be configured in Varonis separately.
KB11828
SMI-S Nutanix storage provider registration fails using the "setup_hyperv.py" script after SCVMM reinstallation
Trying to register the SMI-S storage provider with the "setup_hyperv.py setup_scvmm" script may fail with the following error message: CRITICAL setup_hyperv.py:139 Registering SMI-S provider with SCVMM failed with ret=-1, stderr=The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Registration of storage provider https://hyper-vcluster.domain.local:5989 for user admin failed from hyper-vcluster.domain.local with error code WsManMIInvokeFailed. (Error ID: 26263)
After an SCVMM reinstallation, the SMI-S provider may fail to register using the "setup_hyperv.py" script. The script fails on the "Registering the SMI-S provider with SCVMM" step. nutanix@NTNX-CVM:~$ setup_hyperv.py setup_scvmm Under /home/nutanix/data/logs/setup_hyperv.log, you will find the following error message: CRITICAL setup_hyperv.py:139 Registering SMI-S provider with SCVMM failed with ret=-1, stderr=The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Registration of storage provider https://hyper-vcluster.domain.local:5989 for user admin failed from hyper-vcluster.domain.local with error code WsManMIInvokeFailed. (Error ID: 26263) Environment details: AOS: 5.19.2; OS: Microsoft Windows Server 2019 Datacenter; SCVMM version: 10.19.2445.0, 4569533 Update Rollup 2 for System Center 2019 Virtual Machine Manager
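To confirm the failure signature quickly, the setup log can be searched for the CRITICAL entry (an illustrative grep; the log path is the one referenced above):
nutanix@NTNX-CVM:~$ grep CRITICAL /home/nutanix/data/logs/setup_hyperv.log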
Validate that: 1. The RunAs account is working. 2. The Prism user account is working. 3. Port 5989 is in the listening state within the cluster ("sudo netstat -tulpn | grep LISTEN | grep 5989"). 4. The FQDN of the cluster resolves. Once all of the above steps are validated, reset the Prism Element "admin" user password to the original password in use when the registration happened, and reboot all the nodes. After this workaround, re-run the setup script; the storage provider should register, and the storage containers will be available.
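A minimal validation sketch for steps 3 and 4, assuming standard Linux tooling on the CVM and DNS client tools on the SCVMM side (the FQDN below is the example value from this KB):
# Verify the SMI-S port is in the listening state on the cluster
nutanix@NTNX-CVM:~$ sudo netstat -tulpn | grep LISTEN | grep 5989
# Verify the cluster FQDN resolves (run from the SCVMM server or any client)
nslookup hyper-vcluster.domain.local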
KB6337
Nutanix Kubernetes Engine - Edge Onboarding using cloud-init
Nutanix Kubernetes Engine - Edge Onboarding using cloud-init
Nutanix Kubernetes Engine is formerly known as Karbon or Karbon Platform Services. Customers might want to onboard multiple Edges at the same time using a Calm blueprint. This requires the Edge to support onboarding using cloud-init scripts. With the October 2018 release https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-NTNX-IoT:Release-Notes-NTNX-IoT, we started supporting cloud-init on Edge.
Below is an example of how to onboard an Edge device using a cloud-init script manually in Prism. The same script can be modified and enhanced as needed in Calm blueprints. To assign an IP address to the Edge statically (when no DHCP is present in the environment) and configure the DNS server: #cloud-config To lock down the admin user to access only through a particular machine via SSH keys: #cloud-config To set up NTP on the Edge at boot time: #cloud-config To set up proxy settings on the Edge: #cloud-config
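As an illustration, a user-data file with standard cloud-init directives can be prepared from a shell before pasting it into Prism or a Calm blueprint. The sketch below uses the generic cloud-init ntp module with placeholder NTP servers; the exact directives honored by a given Edge release may differ, so treat it as a template only:
# Write a minimal cloud-config user-data file (NTP servers are placeholders)
cat > user-data.yaml <<'EOF'
#cloud-config
ntp:
  servers:
    - 0.pool.ntp.org
    - 1.pool.ntp.org
EOF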
""Verify all the services in CVM (Controller VM)
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""List all of the port groups currently on the system.""
null
null
null
KB16218
Windows 11 VMs may fail to start on clusters with Intel Sapphire Rapids CPU
Windows 11 VMs may fail to start on clusters with Intel Sapphire Rapids CPU
Windows 11 VMs may hang during boot if all of the following conditions are met: The cluster runs AOS 6.5.5 with AHV 20220304.478 or newer in the 6.5.x family, or AOS 6.7 with AHV 20230302.207 or newer in the 6.7.x family. An Intel Sapphire Rapids CPU is installed. Windows Credential Guard or WSL2 features are enabled and configured inside the guest OS. When a VM hangs, the VM console shows an image similar to the one below:
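To confirm whether the CPU condition applies, the processor model can be checked directly on the AHV host (a generic Linux check, not specific to this KB; 4th Gen Intel Xeon Scalable part numbers indicate Sapphire Rapids):
root@AHV# grep -m1 'model name' /proc/cpuinfo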
This issue is resolved in: AOS 6.5.X family (LTS): AOS 6.5.6; AOS 6.8.X family (STS): AOS 6.8. Upgrade AOS to the versions specified above or newer.
KB9179
Nutanix Files - Enable SSH access on External Interface
Enable SSH access on FSVM External Interface.
SSH access is disabled by default on the client network (FSVM external interfaces). If there is a requirement to enable SSH access to FSVMs from outside the CVM or from client networks, this can be achieved via the afs command "afs misc.ssh_on_client_network".
Note: By default, SSH is disabled on FSVM external networks. Nutanix recommends that SSH on external network interfaces not be enabled except for short-term use. Steps to manage SSH access on the external (eth0:2 or eth1) interfaces in iptables: To check the SSH accessibility status on each FSVM, execute: FSVM$ allssh "afs misc.ssh_on_client_network status" To enable SSH, execute "afs misc.ssh_on_client_network enable" on each FSVM that needs SSH access, or enable SSH on all FSVMs at once: FSVM$ allssh "afs misc.ssh_on_client_network enable" To disable SSH, execute "afs misc.ssh_on_client_network disable" on each FSVM where SSH access should be disabled, or disable SSH on all FSVMs at once: FSVM:~$ allssh "afs misc.ssh_on_client_network disable" Points to remember: SSH access enabled by the above-mentioned afs command is persistent across reboots of FSVMs. A newly added FSVM (as part of a Scale-Out / File Server expansion) will have SSH access to eth1 or eth0:2 blocked by default; it can be enabled using the command mentioned above. After enabling SSH using the mentioned afs command, a File Server upgrade to any version will disable (reset) SSH access on all FSVMs; SSH can be re-enabled using the above-mentioned command.
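After enabling, access can be verified with a plain SSH attempt from a client machine to an FSVM external interface (illustrative; replace the placeholder with an actual external IP):
ssh nutanix@<fsvm-external-ip>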
KB13553
LCM update plan fails with a stream timeout error
LCM update plan fails with a stream timeout error
After selecting any upgrade, LCM does not start the upgrade and fails with "stream timeout" during the upgrade plan generation phase.
1. Check if the customer has a proxy enabled. nutanix@CVM:~$ ncli http-proxy ls 2. Check if 'download.nutanix.com' can be downloaded using ports 8080 and 80 nutanix@CVM:~$ wget download.nutanix.com -e use_proxy=yes -e http_proxy=<proxy_name>:<check_with_port_80_and_8080> Example: nutanix@CVM:~$ wget download.nutanix.com -e use_proxy=yes -e http_proxy=proxy.ftw.X.net:80 If this fails, it may be an issue with the proxy. To test, remove the proxy and try to perform the upgrade again. If the upgrade is still unsuccessful, please contact Nutanix Support https://portal.nutanix.com/. To aid the resolution process, please collect logs using KB-7288 https://nutanix.my.salesforce.com/kA00e0000009CUZ?srPos=0&srKp=ka0&lang=en_US and collect HAR logs using developer tools→Network→Download HAR. See KB-5761 https://portal.nutanix.com/kb/5761 for more detailed steps on generating a HAR file.
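If curl is available, an equivalent connectivity test through the proxy can be run as a cross-check (a generic sketch; the proxy hostname and port are placeholders):
nutanix@CVM:~$ curl -x http://<proxy_name>:<port> -I http://download.nutanix.com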
KB8271
Nutanix Kubernetes Engine - Clusters not showing after Karbon upgrade and PC scale out
Existing clusters may not appear after upgrading Nutanix Kubernetes Engine and enabling PC scale-out
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. Existing Kubernetes clusters may not appear after upgrading Karbon and enabling PC scale-out. This issue occurs when there are existing Kubernetes clusters, Nutanix Kubernetes Engine is upgraded, and a Prism Central scale-out is initiated; this activity causes a mismatch in Nutanix Kubernetes Engine versions. To confirm, log in to Prism Central via SSH, run the command below, and check whether all Karbon versions are the same: nutanix@PCVM$ allssh 'docker ps | grep -i karbon' Sample output: nutanix@PCVM$ allssh 'docker ps | grep -i karbon'
Workaround: After the Prism Central scale-out is completed, upgrade the LCM framework to the latest version, then perform an inventory. You will need to trigger one more round of LCM upgrade of Nutanix Kubernetes Engine to make all versions the same. Please refer to the "Nutanix Karbon Guide" and "Life Cycle Manager Guide" documents, which are available on the Support Portal. After the NKE upgrade using LCM, run the command again to check; all nodes should show the same Nutanix Kubernetes Engine version. Sample output: nutanix@PCVM$ allssh 'docker ps | grep -i karbon' To check the Docker image version for NKE: nutanix@PCVM$ allssh 'docker image list |grep -i karbon' Sample output: nutanix@PCVM$ allssh 'docker image list |grep -i karbon'
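A compact way to compare only the image tags across the PC VMs is docker's --format option (an illustrative variation of the commands above):
nutanix@PCVM$ allssh "docker ps --format '{{.Image}}' | grep -i karbon"
All PC VMs should report identical Karbon image versions once the upgrade has completed.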
KB13493
AHV host is not schedulable due to squeezerd service being down
An AHV host is marked as non-schedulable while not being in maintenance mode
An AHV host may be marked as non-schedulable and unable to run VMs, showing as not connected, even though it is up and running and not in maintenance mode: nutanix@cvm:~$ acli host.list The AHV host agent daemon is up and running, but errors communicating with the squeezerd daemon are visible: root@AHV# service ahv-host-agent status And the squeezerd status is down (killed): root@AHV# service squeezerd status
This issue may happen due to a rare race condition when squeezerd is starting up while logrotate is running (during the first minute of each hour). This issue is resolved in: AOS 6.5.X family (LTS): AHV 20220304.242, which is compatible with AOS 6.5.1. Please upgrade both AOS and AHV to the versions specified above or newer. Workaround: To restore connectivity to the AHV host agent, it is sufficient to restart the squeezerd service: root@AHV# service squeezerd restart After the service is started, the host should be marked as "Schedulable".
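To confirm recovery after the restart, the host state can be re-checked from any CVM using the same command used to identify the problem:
nutanix@cvm:~$ acli host.list
The affected host should now be reported as schedulable and connected.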