KB15941
Performance regression when enabling Credential Guard / Hardware Virtualization-based Code Integrity (HVCI)
Enabling HVCI causes a severe performance degradation on Windows VMs, and may not be required for STIG compliance.
Credential Guard is an additional virtualization-based security layer for the Local Security Authority Subsystem Service (LSASS) process in Windows, which stores credentials for New Technology LAN Manager (NTLM) and Kerberos. Credential Guard isolates the LSASS process in a virtual container that cannot be accessed by users and creates a proxy process to communicate with it, protecting the system from further lateral incursion. Nutanix AHV supports Credential Guard through its vTPM. Nutanix AHV supports HVCI starting from AHV 7.0 (20201105.12) (Nutanix AHV Virtualization - Credential Guard https://portal.nutanix.com/page/documents/solutions/details?targetId=TN-2038-AHV:credential-guard.html).
With AOS 5.19, AHV introduced the Credential Guard (CG) feature for User Virtual Machines (UVMs). Credential Guard was introduced with Microsoft's Windows 10 operating system. It is a virtualization-based isolation technology for LSASS that prevents attackers from stealing credentials that could be used in pass-the-hash attacks. Credential Guard requirements: support for Virtualization-Based Security (VBS), i.e., CPU virtualization extensions and the Windows hypervisor; Secure Boot; TPM 1.2 or 2.0 (preferred, as it provides binding to hardware). AHV already supports UVM Secure Boot, so vTPM is not required (and is optional for Credential Guard). Enabling HVCI has a severe performance impact on the VM. Based on testing, HVCI adds at least a 50% performance hit on Skylake / AHV8 / Server 2019 deployments. For a Windows VM, you can check the Credential Guard status with the below PowerShell command: PS C:\WINDOWS\system32> Get-ComputerInfo | Select-Object "DeviceGuardSecurityServices*" The above output indicates that Credential Guard is enabled but without the HVCI component (which is the STIG requirement). However, if HVCI is also enabled, the output shows: DeviceGuardSecurityServicesRunning : {CredentialGuard, HypervisorEnforcedCodeIntegrity} HVCI is not a STIG compliance requirement for Windows Server 2019 or 2022, or for non-persistent Windows 10 and Windows 11 VDIs, and it causes a high performance impact, so the recommendation is to disable it for these operating systems. However, STIG compliance does require this setting for persistent Windows 10 and Windows 11 VDIs.
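If the `Get-ComputerInfo` output above has been saved to a file or pasted into a shell session, the HVCI component can be flagged with a simple grep. This is a minimal sketch; the `sample_output` line below is illustrative and would normally come from the actual PowerShell output.

```shell
# Illustrative check: does the DeviceGuardSecurityServicesRunning line
# include the HVCI component? The sample line below is a stand-in for
# real Get-ComputerInfo output.
sample_output='DeviceGuardSecurityServicesRunning : {CredentialGuard, HypervisorEnforcedCodeIntegrity}'

if echo "$sample_output" | grep -q "HypervisorEnforcedCodeIntegrity"; then
    echo "HVCI is running (expect a performance impact)"
else
    echo "HVCI is not running"
fi
```

If only `{CredentialGuard}` appears in the list, HVCI is not enabled and no performance impact from it is expected.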
KB12699
How to map a block to Rack after chassis or node replacement or after relocation
This article explains how to map a block to a rack after node or chassis replacement based on the actual placement of the blocks in the datacenter.
This article explains how to map a block to a rack after node or chassis replacement based on the actual placement of the blocks in the data center. The article is only applicable if the following conditions are true. Rack Fault Tolerance (Rack Awareness) is configured in Prism Web console > Settings > Setup > Rack Configuration > Block Assignment. Refer to Configuring Rack Fault Tolerance https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_1:arc-configure-fault-tolerance-rack-prism-c.html for details. After node or chassis replacement or relocation, the rackable unit serial number was updated in /etc/nutanix/factory_config.json. Refer to Changing the Rackable Unit Serial Number https://portal.nutanix.com/page/documents/details?targetId=NX1065G8-Chassis-Replacement-AHV:bre-rackable-unit-sn-change-t.html for details.
Warning: Do not proceed with the following steps if any of the above conditions are not true. Check if racks are configured in your cluster. Option 1: Prism Web console > Settings > Setup > Rack Configuration > Block Assignment. Option 2: Run the following command from a CVM in the cluster. nutanix@cvm$ zeus_config_printer | grep rack_list -A4 Sample output: nutanix@cvm$ zeus_config_printer | grep rack_list -A4 If racks are configured, identify the rack to which the current block should be added. If the block is placed on a new rack, specify the new rack name when you run the command in the next step. Run the following command with the rack name determined from the previous step. nutanix@cvm$ ~/cluster/bin/add_block_to_rack --rack_name="rack_name" Sample result 1: when the block is successfully added to the rack. nutanix@cvm$ ~/cluster/bin/add_block_to_rack --rack_name="RackA" Sample result 2: when the block has already been added to the rack. nutanix@cvm$ ~/cluster/bin/add_block_to_rack --rack_name="RackA" Restart the genesis service on the node and wait a couple of minutes until the genesis restart is completed. nutanix@cvm$ genesis restart Repeat the steps on the nodes on which the rackable unit serial number was updated in /etc/nutanix/factory_config.json and verify that the blocks are mapped to the respective racks.
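The configured rack names can be pulled out of the `zeus_config_printer` output with a short filter. This is a minimal sketch, assuming the `rack_list { ... rack_name: "..." }` section layout; the sample text below is illustrative, not real cluster output.

```shell
# Illustrative only: extract rack names from a sample rack_list section.
# The field layout is an assumption patterned on zeus_config_printer output.
sample='rack_list {
  rack_id: 10
  rack_name: "RackA"
}
rack_list {
  rack_id: 11
  rack_name: "RackB"
}'

# Print each quoted rack_name value, one per line.
echo "$sample" | awk -F'"' '/rack_name/ {print $2}'
```

Each printed name is a valid value for the `--rack_name` argument of `add_block_to_rack`.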
KB7466
MOVE: Nutanix Move Basic Troubleshooting Guide
This article describes basic troubleshooting for Nutanix Move (version 3.x & 4.x).
With the release of Nutanix Move 3.0, the Move services are dockerized and all the Move agents now run as Docker containers. This is a major milestone that allows adding and updating features without much service disruption, and provides the flexibility to run Move anywhere. If you are running an older version of Move, upgrade to Move 4.x or later. To upgrade, find the latest version and procedure below: Latest Move bundle: here https://portal.nutanix.com/#/page/NutanixMove Upgrade procedure: here https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v4_7:top-upgrade-management-c.html Note: If you face issues booting the VM after migration to Nutanix AHV, collect logs on the AHV cluster using Logbay https://portal.nutanix.com/kb/6691.
Overview: Move file locations | Which user do we use to SSH into the Move VM | Log file locations for troubleshooting | Basic commands | How to generate a Move support bundle from the CLI | How to configure a static IP on the Move VM | Firewall port requirements for Move | Testing connectivity between Move and ESXi/vCenter/Hyper-V | Common user permissions required on the Windows OS if using the domain administrator or local administrator accounts | Move is stuck at 0% while "Seeding Data" | Move VM migration failure due to missing snapshot | What happens if a Move upgrade is stuck | Are there any logs created on the VM by Move | If there is a backup schedule, will it affect Move | VM migration failed due to Error(s) Writer failed with 'connection is shutdown'

Move file locations: Move uses the host directories listed below. All of them are created under /opt/xtract-vm/.

Which user do we use to SSH into the Move VM: When logging in to the Move VM using SSH, use: Username: admin Note: The admin user on Move does not have full privileges, so the best approach is to switch to the root user. To do that, on the Move VM, run: admin@move$ rs Enter the password for admin. You will then be switched to the root user and have full control over the Move VM.

Log file locations for troubleshooting: Below are the log files that are important for troubleshooting.

Basic commands: To check the status of the services, run the svcchk command as below. Note that the last column gives you the name of the service (for example, bin_srcagent_1, bin_tgtagent_1, bin_diskreader_1). root@move$ svcchk Note: If you restart any service/Docker container while a Migration Plan is active, the Migration Plan will move to a failed state and the target vDisk can become corrupted.
In this case, a manual cleanup on AHV and a new Migration Plan are needed. To restart any single service, run "docker restart <service name>" as shown below. You can get the individual service names from the svcchk command. root@move$ docker restart bin_srcagent_1 To restart all the services, run the svcrestart command. It will ask you to confirm. Enter "y" (yes) to continue: root@move$ svcrestart You can also use the svcstop and svcstart commands to stop and start all the container services, respectively.

How to generate a Move support bundle from the CLI: To generate the Move support bundle from the CLI, run "/opt/xtract-vm/bin/support-bundle [--dump-path <directory>]" as shown below. This example dumps the support bundle under /opt/xtract-vm/supportdumps. You can select your own directory. If you leave it blank, the bundle is generated in the /root directory. root@move on ~ $ /opt/xtract-vm/bin/support-bundle --dump-path /opt/xtract-vm/supportdumps/

How to configure a static IP on the Move VM: In case you deleted the Move NIC or want to assign a static IP to the Move VM again, follow the procedure in Assigning a Static IP Address to Nutanix Move https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v4_7:top-assign-ip-addresses-t.html.

Firewall port requirements for Move: Move must be able to communicate with vCenter Server on port 443, ESXi hosts on ports 902 and 443, Hyper-V on ports 5985, 5986, and 8087, and Prism Element on port 9440. Verify that the ports between the Move VM and ESXi/Hyper-V and the target AHV cluster are open. If the source and destination environments reside on different subnets, or to view a full list of required ports, refer to Port Requirements of Move https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Move.
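The port requirements above can be spot-checked from a shell with a small helper. This is a minimal sketch using bash's /dev/tcp redirection rather than `nc`; the host and port below are placeholders, and the 3-second timeout is an arbitrary choice.

```shell
# Sketch of a port reachability helper (assumes bash and coreutils timeout).
# Replace the host/port with the vCenter/ESXi/Prism endpoints being verified.
check_port() {
    local host=$1 port=$2
    if timeout 3 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} reachable"
    else
        echo "${host}:${port} unreachable"
    fi
}

check_port 127.0.0.1 9440
```

Running `check_port` against each required endpoint (443, 902, 5985/5986/8087, 9440) quickly narrows down which firewall rule is missing.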
Testing connectivity between Move and source/destination clusters: Starting with Move 3.0, iptables handles the routing for the Docker containers residing in the Move appliance. To test the network connectivity between a Docker container and a cluster, follow the steps below: List all running Docker containers: admin@move on ~ $ sudo docker ps Example: admin@move on ~ $ sudo docker ps Test connectivity from within the Docker container named srcagent using the following command: sudo docker exec -it <container_id_of_srcagent> nc -vz <ESXi IP Address> 902 Example: admin@move on ~ $ sudo docker exec -it c6d9e8af4062 nc -vz x.x.x.x 902

Common user permissions required on the Windows OS if using the domain administrator or local administrator accounts: Any user account on a Windows VM must possess three properties to be used as a login account by Move: The user must have "Logon as Batch Job" enabled. UAC must be disabled (for the local administrator, it is usually disabled). The user must have "Restore files and directories" rights.

Move is stuck at 0% while "Seeding Data": Possible causes: The customer uses third-party solutions that rely on snapshots. Third-party snapshots may cause stale snapshots to reside in the VM folder on the VMware datastore. Old snapshots may have been forgotten. Possible solutions: Leverage the Consolidate option in VMware. This can be done through the Web or vSphere client by right-clicking the VM and selecting "Consolidate". This may fail to work, and the snapshot files may still appear in the VM folder on the datastore. The Consolidate option may be greyed out and not available. For Hyper-V, delete checkpoints. Create a new snapshot, and then delete it. Storage vMotion the VM from one datastore to another: the VM must move from one VMware datastore to a different VMware datastore to ensure consolidation.
Verify the absence of stale snapshots in the new datastore location. If the Migrate option is greyed out: Try powering off the VM to see if the option becomes available. There are many reasons a Storage vMotion may not be possible while the VM is running. Check for registration issues. Follow VMware KB 2044369 https://kb.vmware.com/kb/2044369. Remove the VM from the inventory and re-add it. Ensure you know which datastore the .vmx file lives in before removing it. In some cases, you may need to power the VM off to remove it from inventory. If none of these options works and you need to leverage the VMware KB, it should only be done with the assistance of VMware, as it involves modifying the database.

Move VM migration failure due to missing snapshot: Move migration may fail due to missing snapshots. Consolidate snapshots from the VMware Snapshot Manager to resolve this. A common signature can be found in the Move mgmtserver.log: 2018-03-10T05:50:03.304278+00:00 I vm-migcutovertask.go:107] [VMMigPlan:835ea66a-04bc-4274-a62c-f4110e928df3-Vm:XXXXXX4] Step:VM cutover: cutover on source Check the source VM's Snapshot Manager, and you will be unable to find the snapshot the logs refer to. Consolidate all snapshots of the source VM from the VMware Snapshot Manager. Once finished, recreate a new migration job in Move.

What happens if a Move upgrade is stuck? If the Move upgrade is stuck, check the updater.log on the Move VM. If you cannot determine the exact reason, collect the logs and share them with Nutanix Support. Proceed with verifying the status of all the services (the expected state is UP and not crashing) by running the svcchk command. To roll back the Move appliance to a previous version, run the script: admin@move$ /opt/xtract-vm/scripts/rollback-xtract

Are there any logs created on the VM by Move?
Yes, logs are created on the VMs when Move triggers the VirtIO installation and IP address retention scripts. On Windows, the logs can be found in: C:\Nutanix\Temp On Linux, they can be found in: root@linux# /tmp/xtract-guest.log

If there is a backup schedule, will it affect Move? Yes, it is difficult to account for the number of changes on the disk while a backup job is in progress. It is best to perform the migration outside of backup hours, or to disable the backup until the migration is completed.

VM migration failed due to Error(s) Writer failed with 'connection is shutdown': In this scenario, the Move appliance is very slow and you are unable to log in to the Move UI. From the CLI, commands like 'svcchk' and 'docker ps' return no output. CPU usage is at 100% and memory usage is at 98%. The Move appliance has the minimum resources assigned: 8 GB memory, 2 vCPUs with 2 cores per vCPU (4 cores). After increasing the resources (memory to 12 GB, CPU to 8 cores), you can log in to the Move UI and the commands respond again. Re-initiate the failed migrations; the servers should then reach the "ready for cutover" state.

Move file locations:
/opt/xtract-vm/key - Path to the SSH keys generated and used by Move
/opt/xtract-vm/kvstore - KVStore directories used by srcagent and diskwriter
/opt/xtract-vm/logs - Path to Move service logs
/opt/xtract-vm/logs/diskreader.log - Uses the NFC mechanism to take a copy of the disk and ship it to the AHV side using a pipe.
/opt/xtract-vm/logs/diskwriter.log - The receiver of the copied disk through the pipe; writes it to the container mounted on the Move VM.
/opt/xtract-vm/logs/mgmtserver.log - Orchestrator service; exposes REST APIs to the source and target sides. If this service is working fine, the UI will load correctly.
/opt/xtract-vm/logs/srcagent.log - Deals with the source side; prepares the migration by enabling CBT (Changed Block Tracking), shutting down the VM, and shipping the last snapshot before the VM finally boots up on the AHV side.
/opt/xtract-vm/logs/tgtagent.log - Deals with the AHV side; collects cluster information, mounts the needed container to Move during migration, and powers on the VM on the AHV side.
KB6102
Nutanix ISO Security compliance standards
Information is an asset to all individuals and businesses. Information security refers to the protection of these assets in order to achieve compliance, confidentiality, integrity, and availability. An Information Security Management System (ISMS) helps companies manage overall business risks and information in a more secure and systematic manner. It specifies requirements for the implementation of security controls customised to the needs of the whole or part of individual organizations. Why ISO? International standards mean that consumers can have confidence that their products are safe and reliable. The ISO/IEC 27000 family of standards helps organisations keep information assets secure. Using this family of standards will help your organisation manage the security of assets such as financial information, intellectual property, employee details, or information entrusted to you by third parties. What is an ISMS? An ISMS is a system of processes, documents, technology, and people that helps to manage, monitor, audit, and improve your organisation's information security. It helps you manage all your security practices in one place, consistently and cost-effectively. At the heart of an ISO 27001-compliant ISMS are business-driven risk assessments, which means you will be able to identify and treat security threats according to your organisation's risk appetite and tolerance.
Nutanix is compliant with ISO security standards such as ISO 27001, ISO 27017, and ISO 27018. Details can be found on the Nutanix Trust Portal https://www.nutanix.com/trust/. Certificates can be viewed from the above URL. More details on the standards: ISO/IEC 27001 is the best-known standard in the family, providing requirements for an information security management system (ISMS). ISO 27017 covers information security controls for cloud computing. ISO 27018 covers PII (Personally Identifiable Information) in public clouds.
KB10868
SNMP traps might not be sent if the EngineID that the CVM is using is different to the EngineID that the SNMP client is seeing
Prism Element fails to send SNMP traps to the monitoring tool after maintenance mode-type activity occurs (rolling updates, node taken down for DIMM failure) on the clusters.
SNMP traps might not be sent if the EngineID that the CVM is using is different from the EngineID that the SNMP client is seeing. SNMP v3 uses the concept of an EngineID that can either be auto-generated or manually set in the Prism UI. SNMP works on the premise that the SNMP manager sends a trap request to the Alert Manager, and the Alert Manager CVM sends the traps over the wire to the SNMP client. This is how a "test alert" looks when one is generated: Sending traps by cmd : snmptrap -v 3 -u ADMIN -e 0x8000A12F045d13ec9526744f11a4bc3282869fbd8d -a SHA -A 'PASS' -x AES -X 'PASS' -l authPriv udp:x.x.x.x.:162 "" NUTANIX-MIB::ntxTrapTestAlertTitle NUTANIX-MIB::ntxAlertCreationTime C 1604918406 NUTANIX-MIB::ntxAlertUuid s 763e63da-875c-4a20-8a0a-e15a59bb2690 NUTANIX-MIB::ntxAlertDisplayMsg s "AlertUuid:763e63da-875c-4a20-8a0a-e15a59bb2690: Test Alert is generated on Controller VM x.x.x.x." NUTANIX-MIB::ntxAlertTitle s "Test Alert Title" NUTANIX-MIB::ntxAlertSeverity i 1 NUTANIX-MIB::ntxAlertClusterName s "" Here, "0x8000A12F045d13ec9526744f11a4bc3282869fbd8d" is the EngineID. Points to note: Each CVM has a unique EngineID composed of the node UUID, among other values. If an EngineID is manually set in the Prism SNMP configuration, the EngineID that the SNMP client should be seeing is the manually set one. The SNMP Manager and Alert Manager leaders can be found using the commands below. Alert Manager: service=/appliance/logical/leaders/alert_manager; echo $(zkcat $service/`zkls $service| head -1`)| awk '{print "Alert manager ==>", $2} ' SNMP Manager: service=/appliance/logical/pyleaders/snmp_manager; echo "SNMP manager ==>" $(zkcat $service/`zkls $service| head -1`) Alerts/SNMP will fail if the EngineID that the Alert Manager is sending the alert from differs from the EngineID that the client is seeing. On the client, the SNMP packets can be captured using Wireshark or tcpdump.
Capture the EngineID of the snmptrap that the Alert Manager is sending out. Do a packet capture of the SNMP traffic on the SNMP client (tcpdump or Wireshark). See if there is a discrepancy in the EngineID. If there is a discrepancy, restart the Alert Manager or the SNMP Manager service so that the Alert Manager and the SNMP Manager leaders are on the same CVM. Then try a test SNMP alert and see if this helps. To restart the services, use the following commands: Alert Manager: allssh "genesis stop alert_manager; sleep 15; cluster start; sleep 30" SNMP Manager: allssh 'genesis stop snmp_manager ; cluster start ; sleep 2s' Should the issue be resolved after moving the SNMP Manager leader and Alert Manager leader onto the same CVM, open an ONCALL/TH/ENG and get engineering involved if the issue is reproducible at will.
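Comparing the sender's EngineID against the one seen on the client can be scripted. This is a minimal sketch, assuming the `-e <EngineID>` argument format shown in the snmptrap command above; both values below are illustrative stand-ins (the client-side value would normally come from a Wireshark/tcpdump capture).

```shell
# Illustrative: extract the EngineID (-e argument) from a captured snmptrap
# command line and compare it with the EngineID observed on the client.
trap_cmd="snmptrap -v 3 -u ADMIN -e 0x8000A12F045d13ec9526744f11a4bc3282869fbd8d -a SHA -l authPriv udp:x.x.x.x:162"

# Pull the token that follows "-e ".
sender_engine_id=$(echo "$trap_cmd" | sed -n 's/.*-e \([^ ]*\).*/\1/p')
# Hypothetical value taken from a packet capture on the SNMP client.
client_engine_id="0x8000A12F045d13ec9526744f11a4bc3282869fbd8d"

if [ "$sender_engine_id" = "$client_engine_id" ]; then
    echo "EngineIDs match"
else
    echo "EngineID mismatch: sender=$sender_engine_id client=$client_engine_id"
fi
```

A mismatch here is exactly the discrepancy this article describes, and is the cue to restart the Alert Manager or SNMP Manager service.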
KB10530
NCC-4.0.0: Health Server logs might fail to rotate and fill up /home partition
In NCC 4.0.0, health_server logs might not rotate as expected, filling up the /home partition on the cluster.
Nutanix is aware of an issue in NCC 4.0.0 where the health_server logs might not rotate as expected and instead fill up the /home partition. To confirm you are hitting this issue, verify the following: Ensure you are receiving a /home partition high usage alert. Log in to the CVM from which the alert is coming. Check space usage on the home partition of the CVM. nutanix@cvm$ df -h /home If the space usage is high on the /home partition, check the NCC version on your cluster. nutanix@cvm$ ncc --version If you are running NCC 4.0.0 and satisfy the above conditions, you are likely hitting this issue. If you are running a different NCC version, check KB-1540 https://portal.nutanix.com/kb/1540.
This issue is resolved in NCC version 4.0.0.1. Upgrade to the latest NCC version to permanently resolve the issue. However, if an upgrade is not possible at the moment, refer to the steps below for a temporary resolution: Restart the cluster health service in the cluster. nutanix@cvm$ allssh "genesis stop cluster_health;cluster start" After a few minutes, check that space usage on the home partition of the CVMs is below the alert threshold of 75%: nutanix@cvm$ allssh 'df -h /home' The above procedure clears space on the /home partition temporarily, and the issue should be resolved for a few days. If you face the issue again, repeat the procedure. For assistance, contact Nutanix Support https://portal.nutanix.com/.
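The "below 75%" check after the restart can be expressed as a small script rather than eyeballing `df -h` output. This is a minimal sketch assuming GNU coreutils `df`; the 75% threshold matches the alert threshold mentioned above, and the fallback to `/` is only so the sketch runs on hosts without a /home path.

```shell
# Sketch: compare /home usage against the 75% alert threshold.
threshold=75
dir=/home
[ -e "$dir" ] || dir=/   # fallback so the sketch runs anywhere

# --output=pcent prints the "Use%" column; strip everything but digits.
usage=$(df --output=pcent "$dir" | tail -n 1 | tr -dc '0-9')

if [ "$usage" -ge "$threshold" ]; then
    echo "${dir} usage ${usage}% is at or above ${threshold}%"
else
    echo "${dir} usage ${usage}% is below ${threshold}%"
fi
```

Wrapped in `allssh`, a check like this confirms every CVM is back under the threshold after the cluster health restart.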
KB15855
NC2 on Azure - Redirect-chassis for the transit-VPC was not set causing issues with UVM traffic
Chassis Redirect for the transit VPC was not set, causing issues with UVM North-South traffic.
NC2 on Azure clusters running ANC versions 2.1.x or lower can hit an issue where UVM North-South traffic stops working abruptly. UVMs won't be able to communicate with the Internet or the on-prem network. This is a known issue, NET-7537 https://jira.nutanix.com/browse/NET-7537. To confirm you are hitting this bug, log in to the PCVM and run the command below to find the OVN pod and make sure it is running. nutanix@NTNX-10-x-x-x-A-PCVM:~/tmp$ sudo kubectl get pods -A |grep ovn Now log in to the OVN container using the command below. nutanix@NTNX-10-x-x-x-A-PCVM:~/tmp$ sudo kubectl exec -it anc-ovn-0 /bin/sh Run the command below to find out whether the Chassis Redirect port is claimed by the transit VPC. In the case below, there is no entry of the form "cr-lrp-ext_gw_port_bbb-xxxxxx", which confirms we are hitting NET-7537 https://jira.nutanix.com/browse/NET-7537. root@anc-ovn-0:/# ovn-sbctl show | grep -v "Port_Binding port_"
To temporarily fix the issue, restart the ovn-controller service in the Flow Gateway VM. The customer should have the private key for the Flow Gateway VM that they downloaded while deploying the NC2 cluster. Log in to the Flow Gateway VM: nutanix@NTNX-10-x-x-x-A-PCVM:~/tmp$ ssh -i FlowgatewayVMkey.pem <<FGW VM IP>> Restart the ovn-controller service. [nutanix@localhost ovn]$ sudo systemctl restart ovn-controller Monitor the log file "/var/log/ovn/ovn-controller.log" to confirm the Chassis Redirect port is claimed by the transit VPC. Look for a line that says "Claiming lport ext_gw_port_c3575a37-40ab-4c3f-91fd-ef9e89af4b0f for this chassis". [root@atalon-2 ovn]# grep -isr "port_c3575a37-40ab-4c3f-91fd-ef9e89af4b0f" /var/log/ovn/ After the ovn-controller restart, the "ovn-sbctl show" command will show the Chassis Redirect port listed. root@anc-ovn-0:/# ovn-sbctl show | grep -v "Port_Binding port_" At this point, UVM North-South traffic will work as expected. This is only a temporary workaround; to fix the issue permanently, upgrade ANC to version 2.2.0 or later.
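The "Claiming lport" check described above can be reduced to a grep. This is a minimal sketch; the log line below is an illustrative sample patterned on the article's example (the timestamp, module name, and port UUID are stand-ins, and the real log lives at /var/log/ovn/ovn-controller.log on the Flow Gateway VM).

```shell
# Illustrative: confirm the chassis-redirect port was claimed by scanning
# ovn-controller log output for the "Claiming lport ext_gw_port_" line.
sample_log='2023-11-01T10:00:00Z|00042|binding|INFO|Claiming lport ext_gw_port_c3575a37-40ab-4c3f-91fd-ef9e89af4b0f for this chassis.'

if echo "$sample_log" | grep -q "Claiming lport ext_gw_port_"; then
    echo "chassis redirect claimed"
else
    echo "chassis redirect NOT claimed yet"
fi
```

If the line never appears after the restart, the workaround has not taken effect and the ANC upgrade path is the remaining fix.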
KB15399
RBAC - Not able to see and perform VM tasks from duplicated system role on VMs in ESXi clusters.
When duplicating a system role from the Prism Central UI and assigning it to an AD user, not all tasks can be performed. This task is performed from the 'Admin Center' in the Prism Central UI.
When system-defined roles are duplicated from the Prism Central 'Admin Center', in certain circumstances, internal permissions that are part of the role will not be available. Specifically, VMs that are on ESXi PE clusters will not be listed when reviewing the VM list, even when the cluster is added as an Entity in Manage Assignment. This issue does not impact VMs on AHV clusters. Key items: 1. The duplicated role is missing the 'View_ESX_Virtual_Machine' permission, so VMs on an ESXi PE cluster will not be listed. 2. The PC UI shows only AHV VMs in the VM list view.
Consider using a system role, or create a new role that is not duplicated from a current system role. See also: Controlling User Access (RBAC) https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_7:ssp-ssp-role-based-access-control-pc-c.html Note: Granular RBAC is supported on AHV only.
KB15312
Nutanix Files related alerts may be unable to generate per corresponding absence of the FSVM entity in Arithmos
In some scenarios, the FSVM entity may be missing from the Arithmos data, which prevents NCC from generating alerts for customer awareness.
In some scenarios, the FSVM entity may be absent from the Arithmos database, and NCC is unable to generate the corresponding alerts for customer awareness. The following is an example where file_server_status_check (KB-3691 https://portal.nutanix.com/kb/3691) cannot generate the FSVM down alert (A160078). The issue can be identified using the following log snippets. This log snippet is a sample of a single round of the check. The check plugin detects the failure and attempts to process the alerts. It raises the alert when the failure count reaches 4, so it doesn't raise the alert at this point. "health_server.log.*" from the health_server leader ("health_scheduler_master" in "panacea_cli show_leaders") : : : Follow the steps below to identify the issue. 1. Check if the failure count increases. In this sample, the failure count doesn't increase and stays at "actual 1". [user@diamond]$ grep "Number of failures expected" health_server.log.* | grep file_server_status_check If the failure count increases as expected, this article does not match your case. 2. Check if the check plugin failed to get the score from Arithmos around the previous logs. The following ERROR log is found in every round of the check in this sample. 2023-06-09 05:50:46,209Z ERROR alert_helper.py:937 [file_server_status_check] Failed in Get Time Range Stats for (entity_id 57ff3043-a07c-43ec-a869-593ffdf692c4, arithmos_entity_type 22, check_id check.160078.score). Err = 7. The following grep should be helpful for the case with "file_server_status_check". [user@diamond]$ grep "ERROR.*file_server_status_check.*arithmos" health_server.log.* This error means the plugin could not get the stats from Arithmos, which are supposed to hold the stored failure count. Finding no stats means it is treated as the first attempt for this check plugin, so the check assumes the count is 1. 3.
Run the following command on the CVM of the affected cluster (not the FSVM) and check if the File server entity is found ("57ff3043-a07c-43ec-a869-593ffdf692c4" in the above sample). nutanix@CVM$ arithmos_cli master_get_entities entity_type=file_server | grep 57ff3043-a07c-43ec-a869-593ffdf692c4 This KB does not match your case if something like the sample below exists. nutanix@CVM$ arithmos_cli master_get_entities entity_type=file_server | grep 8a7b4b0e-a958-4fda-9156-c1019a3c3df5 If the File server entity in question is missing, follow the instructions in the Solution section. Note: A missing entity means the health check has no place to store the score, and the failure count cannot increase. The same cause may affect other checks with a similar symptom. The log signature may differ, but the missing File server entity is an anomaly to be corrected. Follow the instructions in that case, too.
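Step 1 above (confirming the failure count stays at "actual 1") can be scripted against the health_server logs. This is a minimal sketch; the sample log lines below are illustrative stand-ins patterned on the article's output, and the exact line format in real health_server.log files may differ.

```shell
# Illustrative: count occurrences of the stuck "actual 1" failure counter
# for a given check across sample health_server log lines.
sample_log='2023-06-09 05:50:46Z INFO [file_server_status_check] Number of failures expected 4, actual 1
2023-06-09 06:50:46Z INFO [file_server_status_check] Number of failures expected 4, actual 1'

# A count that only ever shows "actual 1" across rounds matches this KB;
# a count that climbs toward the expected 4 does not.
echo "$sample_log" | grep -c "Number of failures expected.*actual 1"
```

Against real logs, the same grep would be run over `health_server.log.*` on the health_server leader, as shown in the article.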
If the File server entity in question is missing: Collect the logs and open a TH or ONCALL for the solution. The log duration should cover the time when this issue began, for the RCA. If that is unavailable, 4 hours is a good starting point. It may be challenging to capture the trigger of the issue, since a missing alert is the first appearance of the issue and is typically detected far later than the event. Note: The exact cause of this issue has not been fully identified, and the logs are needed for further investigation (as of Aug 2023). Assistance from DevEx is needed to assess the logs and restore the condition.
KB10301
Cluster wide outage when removing metadata drive and running LCM related upgrades in parallel
Initiating metadata disk removal during LCM upgrades may result in a cluster-wide outage.
Removing a metadata drive (an SSD used by Cassandra for metadata) during an LCM upgrade may cause a cluster-wide outage. If a user initiates a disk removal via ncli or Prism, the prechecks will detect whether there is a condition that could potentially generate an outage and fail the operation. Also, removing a disk while resiliency is not green is disallowed by the GUI. However, if an LCM upgrade has started and passed the pre-check phase, a user can still initiate a disk removal action from ncli or Prism that might start due to a window where data resiliency is in the OK state (right after the prechecks complete and right before a node goes down for the upgrade). When a metadata disk (SSD or NVMe) is removed, the node from which the disk is being removed is put immediately into kForwarding so that the dynamic ring changer can reshuffle the metadata in the Cassandra ring. In an FT1/RF2 cluster, when a neighbor of the node where the disk is being removed goes down for upgrade, this will cause an outage, as there will not be enough Cassandra nodes to achieve quorum for the given token range. Consider the example below: nutanix@CVM$ ncli disk remove-start id=<Disk_ID> In this example, CVM X.X.X.97 is the one with the metadata disk being removed, while X.X.X.54 went down for upgrade. As X.X.X.97 is a neighbor of X.X.X.54, there will be 2 nodes down for this token range and quorum will not be possible, causing the outage while LCM performs the upgrade and until the disk remove operation completes. Services such as Pithos, Stargate, and Acropolis will be restarting. Pithos FATALs show hung operations. nutanix@CVM$ allssh cat data/logs/pithos.FATAL The node whose metadata drive is being removed will have Cassandra in Forwarding state until the drive removal finishes (by design). The node picked by LCM to be upgraded will have Cassandra down.
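The Forwarding/Down states described above can be spotted quickly from ring-style output. This is a minimal sketch that assumes a whitespace-separated layout with the node address in the first column and the Cassandra state in the third; the sample lines are illustrative stand-ins, not actual ring output.

```shell
# Illustrative: flag nodes whose Cassandra state column reads "Forwarding".
# Column layout (address, status, state) is an assumption for this sketch.
sample_ring='x.x.x.54  Down  Normal      100 GB
x.x.x.97  Up    Forwarding  120 GB'

echo "$sample_ring" | awk '$3 == "Forwarding" {print $1, "is in Forwarding state"}'
```

Seeing one node in Forwarding (drive removal in progress) while its ring neighbor is Down (being upgraded by LCM) is exactly the quorum-loss pattern this article warns about.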
This KB is meant to assist in a post-mortem/RCA situation, but if this situation is observed in the field and the customer is suffering downtime during the LCM upgrade, engage a Sr. SRE or STL (Support Tech-Lead) immediately on how to proceed to abort the LCM task. Once the drive removal finishes and Cassandra is Up/Normal again, LCM upgrades can be started again. ENG-353336 https://jira.nutanix.com/browse/ENG-353336 has been logged to prevent ncli/Prism disk removals during LCM upgrades. ENG-119095 https://jira.nutanix.com/browse/ENG-119095 (resolved in AOS 5.15.3 onwards) should also help by not allowing the shutdown token to be passed if resiliency is critical, causing the LCM upgrade to stall.
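Before starting any disk removal alongside an LCM upgrade, it helps to confirm that every node in the Cassandra ring is Up/Normal. A minimal sketch of that check, using an embedded hypothetical sample of ring output (on a real CVM the equivalent columns would come from `nodetool -h 0 ring`; the column layout here is an assumption):

```shell
# Hypothetical sample ring output: <IP> <status> <state>
ring_output='X.X.X.54 Up Normal
X.X.X.97 Up Forwarding
X.X.X.98 Up Normal'

# Any node whose state is not "Normal" makes a concurrent upgrade unsafe.
not_normal=$(echo "$ring_output" | awk '$3 != "Normal" {print $1}')

if [ -n "$not_normal" ]; then
  echo "Do not proceed; non-Normal nodes: $not_normal"
else
  echo "All nodes Up/Normal"
fi
```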
KB16285
Nutanix Files - SSR Snapshot not being taken due to a queued Smart DR job
Files SSR snapshots may fail due to a queued Smart DR replication policy job.
Customers may encounter this scenario while decommissioning an old Prism Central and registering a new one. Before registering a new Prism Central, ensure that no replications are in progress or queued. If an old Smart DR replication job is still in progress, it can cause subsequent SSR snapshots on the File Servers to fail once the PE cluster is registered to the new PC. Identification: The customer will notice that SSR snapshots are not being taken per the schedule. After confirming that the schedule exists, check the iris logs on the Iris leader for a failure trace. The Iris leader can be found using the following command on the FSVM. Similarly, check for traces of the queued job in the replicator log on the Replicator leader. nutanix@FSVM:~$ afs fs.info SSH to the Iris leader and check the iris.out logs located in /home/nutanix/data/logs/iris.out. In the above example, the Iris leader is XX.XX.XX.65. In iris.out you will see a signature stating that SSR is failing due to an incomplete job for a Smart DR policy, so further snapshots for that share are suspended. The trace below shows an incomplete job for policy "5c0629d7-9ad9-4c3c-5b0f-YYYYYYYYYYYYY", due to which further snapshot jobs for share "60ac8cab-d828-4530-8a2b-ZZZZZZZZZZZZ" are skipped. You can find the share UUID of the share via the afs share.list command. I0221 16:50:00.013577Z 50623 add_dr_job.go:248] Job 83732819-a55a-4398-73b1-XXXXXXXXXXXXX has already been missed an earlier schedule. Further, you will see another create-snapshot task queued in replicator.log on the Replicator leader. In the above example, the Replicator leader is XX.XX.XX.67. In the example below, there is another task running, "a2ab1c91-ce3b-4f2c-6d5e-BBBBBBBBBBBB", for the same share "60ac8cab-d828-4530-8a2b-ZZZZZZZZZZZZ".
I0221 06:48:35.805089Z 30755 cache_utils.go:143] Skipping the user callback for entity:210ce73f-b2dd-4c14-9c8d-JJJJJJJJJJJJ op:NoOp Check the task details via ergon nutanix@FSVM:~$ ecli task.get a2ab1c91-ce3b-4f2c-6d5e-BBBBBBBBBBBB
We need to abort this task from Ergon. First, check the task status with the following command from the FSVM. In the example below, the task is stuck in a Queued state. <ergon> task.list include_completed=false limit=10000 Abort the task using Ergon from the FSVM: nutanix@FSVM:~$ ergon_update_task --task_status=aborted --task_uuid=a2ab1c91-ce3b-4f2c-6d5e-BBBBBBBBBBBB Confirm that the task was cancelled: nutanix@FSVM:~$ ecli task.get a2ab1c91-ce3b-4f2c-6d5e-BBBBBBBBBBBB Check iris.out on the Iris leader again and you will find there are no incomplete jobs for the share: I0222 20:50:00.010943Z 87769 add_dr_job.go:173] No incomplete jobs found for policy (5c0629d7-9ad9-4c3c-5b0f-YYYYYYYYYYYYY) and share (60ac8cab-d828-4530-8a2b-ZZZZZZZZZZZZ)
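The queued-task discovery step can be scripted; a sketch that filters a hypothetical `ecli task.list`-style listing (real output columns may differ) down to the UUIDs that would be passed to ergon_update_task:

```shell
# Hypothetical sample listing: <UUID> <operation> <status>
task_list='a2ab1c91-ce3b-4f2c-6d5e-bbbbbbbbbbbb FileServerShareSnapshot kQueued
210ce73f-b2dd-4c14-9c8d-jjjjjjjjjjjj FileServerShareSnapshot kRunning'

# Collect UUIDs of tasks stuck in kQueued; each would then be aborted with:
#   ergon_update_task --task_status=aborted --task_uuid=<UUID>
queued=$(echo "$task_list" | awk '$3 == "kQueued" {print $1}')
echo "$queued"
```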
KB10066
LCM inventory on Prism Central stuck indefinitely during the pre-checks phase
Customers may run into an issue where an LCM inventory task is stuck at 0% during the pre-checks phase
Customers may run into an issue where an LCM inventory task is stuck at 0% indefinitely during the pre-checks phase. 1. One symptom is that the LCM Inventory on PC generates the Ergon task but is stuck at 0% completed. Find the LCM task UUID on the PCVM via the Ergon CLI (ecli task.list) and check the task details: nutanix@PCVM:~$ ecli task.get 15f6ff6b-8a15-4019-990e-f5dbfa8b7774 2. You may also notice LCM tasks stuck for a long time in the Prism Central UI. 3. ~/data/logs/genesis.out on the LCM leader shows the start of prechecks but then just repeats “task_uuid can not be None.” 2020-08-12 15:50:41 INFO operations.py:329 Successfully set inventory start time as 1597261841733730 4. Checking all non-completed tasks, we see that there are old Catalog service tasks stuck in kRunning status without making progress for a long time. nutanix@PCVM:~$ ecli task.list include_completed=false component_list=lcm,Catalog 5. Notice the "start_time_usecs" and "last_updated_time_usecs" fields for the UUID identified in step 4 above. nutanix@PCVM:~$ ecli task.get 7625aced-f05a-4e21-a902-c41c4f322858
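The start_time_usecs / last_updated_time_usecs fields are microseconds since the Unix epoch, so the age of a stuck task can be read directly; a quick conversion sketch using the inventory start time from step 3:

```shell
# Microseconds-since-epoch value, as logged by the inventory start step above.
start_time_usecs=1597261841733730

# Drop the microsecond part and let date(1) render it (GNU date).
start_time_secs=$((start_time_usecs / 1000000))
date -u -d "@${start_time_secs}" +'%Y-%m-%d %H:%M:%S UTC'
```

This renders as 2020-08-12 19:50:41 UTC, which lines up with the 15:50:41 local-time log line above (UTC-4).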
The old stuck Catalog tasks caused the LCM master manifest delete+create task, which is part of the prechecks, to be queued. You may encounter this issue when Prism Central tasks are not properly synced from Prism Element. The inventory precheck is stuck at the "Running LCM precheck test_catalog_workflows" stage. Follow KB 8503 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CsoWCAS to check whether catalog create and delete tasks are stuck because Prism Central tasks are not properly synced from Prism Element (in other words, the catalog create and delete tasks are "kRunning" on the PC side while the same tasks are "kSucceeded" on the PE side). Once you confirm this, run the workaround. The inventory precheck will pass "test_catalog_workflows" after the PC is properly synchronized with the PE. In some situations, you may need to run the workaround on both the PE side and the PC side. Please collect the logs covering the time window when the original kRunning catalog tasks were created. Check if all the registered PE clusters are running LCM 2.3.2.2 or later; if not, run an Inventory on them so that they are upgraded to the latest version. Confirm that there are no download issues. Check lcm_wget.log on all CVMs using the command: allssh "tail ~/data/logs/lcm_wget.log" Follow KB-4872 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000XeM1CAK and run the python script to clear all running/pending LCM tasks and restart genesis. Make sure you follow all steps and clear the inventory intent from zookeeper as well. Try to run a fresh inventory on the PC. Only follow the steps below after you have collected the log bundle and identified the root cause for the stuck tasks. We need to manually update the stuck catalog tasks' status from Running to Aborted. This breaks the deadlock and allows the backlog of tasks to be cleared. Subsequent LCM inventories should work without issue. Tasks can be safely marked as Aborted using ergon_update_task.
nutanix@PCVM:~$ ~/bin/ergon_update_task --task_uuid=<UUID> --task_status=aborted ecli task.list should not show the tasks anymore. nutanix@PCVM:~$ ecli task.list include_completed=false
KB12143
Nutanix Move - Unable to access file since it is locked
'Unable to access file since it is locked' indicates that the VM file in vSphere is locked. To resolve this, unlock it and retry the plan.
When migrating a VM from ESXi, the following error may appear in the Migration Status. vCenter Server failed during operation '' for VM <VM Name> with error 'Unable to access file since it is locked'
The error indicates the VM file in vSphere is locked. To check whether the file is locked, refer to the following articles: https://kb.vmware.com/s/article/10051 https://kb.vmware.com/s/article/10051 https://kb.vmware.com/s/article/2107795 https://kb.vmware.com/s/article/2107795 Note: Ensure that no backup jobs are running on the VM while the VM migration is in progress. Retry migration for the failed VM to see if it works. Generally, taking a VMware snapshot of the VM will lock it. If the problem persists, contact VMware support. Once the lock is removed, retry the Plan. If manual intervention following VM migration actions like Cancel or Discard proves unsuccessful, and the UI shows the migration status as Cleanup Failed, please follow the "Manual Cleanup for VM Migrations" steps outlined in the Move User Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_1:top-manual-cleanup-r.html.
KB9866
NCC Health Check: ergon_checks
NCC 4.0.1. The NCC health check ergon_checks verifies the number of pending tasks and fails if a high number of tasks is detected in the cluster.
The NCC health check ergon_checks verifies the number of pending tasks and fails if a high number of tasks is detected in the cluster. Running the NCC Check: You can run this check as part of the complete NCC health checks: nutanix@cvm:~$ ncc health_checks run_all Or you can run this check individually: nutanix@cvm:~$ ncc health_checks system_checks ergon_checks Sample Output Check Status: PASS Running : health_checks system_checks ergon_checks Check Status: FAIL Running : health_checks system_checks ergon_checks The check fails if more than 50000 tasks are found on the Prism Element cluster or if more than 400000 tasks are found on the Prism Central cluster. This check runs on all hypervisors. This check is not scheduled. This check runs on Prism Element and Prism Central. This check does not generate an alert. Output messaging [ { "111082": "High number of tasks in the cluster.", "Check ID": "Description" }, { "111082": "High number of tasks in the cluster.", "Check ID": "Causes of failure" }, { "111082": "Contact Nutanix support for help.", "Check ID": "Resolutions" }, { "111082": "Operation on entities in the cluster might not progress.", "Check ID": "Impact" } ]
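The pass/fail thresholds described above can be expressed as a tiny decision function; a sketch (the counts here are illustrative; on a cluster the count would come from `ecli task.list include_completed=false`):

```shell
# Return PASS/FAIL for a given pending-task count and cluster type,
# mirroring the documented limits: 50000 (PE) and 400000 (PC).
check_task_count() {
  count=$1
  case "$2" in
    pe) limit=50000 ;;
    pc) limit=400000 ;;
  esac
  if [ "$count" -gt "$limit" ]; then echo FAIL; else echo PASS; fi
}

check_task_count 61234 pe   # over the PE limit
check_task_count 61234 pc   # under the PC limit
```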
Please collect the log bundle using Logbay and engage Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/. For more information on how to use Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
KB15052
Nutanix Files - NFS crash causing slowness in accessing Home Shares
NFS crashes seen on FSVM contributing to the slowness in accessing Home Shares.
NFS crashes on one or more FSVMs, due to a stack corruption issue, cause latency while browsing shares and loading hosted profiles. Customers may observe slow browsing of shares on their Linux clients, as well as slow desktop logins if profiles are hosted on an NFS home share. Identification: Frequent NFS restarts would be seen in ganesha.log on the FSVM. nutanix@FSVM:~$ sudo less /home/log/ganesha/ganesha.log Check for cores on all FSVMs. You will see cores on the FSVM where NFS is crashing. nutanix@FSVM:~$ allssh ls -l /home/log/ganesha/cores/ Copy the core file to /home/nutanix/tmp, then run gdb on the core file as follows. This should give you the share root path and the inode number of the file in question, along with a backtrace, as below. In the example below, the path of the file is "/zroot/shares/5252bdd4-5c8f-4313-8838-1ec8041fc6fb/:17b89c20-b134-4696-a39b-aac0b4f150fa/d203d19e-4160-41ba-9738-ec7a4c18ae87" and the inode number is "20955633". nutanix@FSVM:~/tmp$ sudo gdb -q -batch ganesha.nfsd <core file> -ex bt -ex "f 4" -ex "p obj_hdl->fs->path" -ex "p obj_hdl->fileid" You can find the file using the following command on the FSVM: nutanix@FSVM:~$ sudo find <share root path> -inum <inode number>
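The inode-to-path step at the end can be tried safely on any Linux box; a self-contained sketch of the same `stat`/`find -inum` round trip (file and directory names are made up):

```shell
# Create a scratch file, read its inode, then locate it by inode number --
# the same operation run on the FSVM against the share root path from gdb.
dir=$(mktemp -d)
touch "$dir/suspect_file"
inode=$(stat -c %i "$dir/suspect_file")
found=$(find "$dir" -inum "$inode")
echo "$found"
rm -rf "$dir"
```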
Once you find the file, ask the customer to delete it. If the file is repeatedly being generated by a particular client, have the customer investigate that client machine and check whether any scripts running on it are generating the files. This issue is resolved in Files 4.4. Please upgrade the File Server to 4.4.
KB9958
Extra vdisks added automatically to PCVM
Extra vdisks are automatically attached to a newly deployed PC VM 5.17.x (or above) or an upgrade of PC VM to 5.17.x (or above). 
You will see three extra vdisks attached to the PC VM automatically on a newly deployed PC VM 5.17.x (or above) or an upgrade of PC VM to 5.17.x (or above). The df -h output will show extra vdisks as shown below: nutanix@NTNX-PCVM:~$ df -h Cause:This is because of the implementation of "Multiple vdisk" support on PC VM introduced in PC version 5.17.This is explained in Release notes of 5.17 https://portal.nutanix.com/#/page/docs/details?targetId=Release-Notes-Acr-v5_17:Release-Notes-Acr-v5_17.
Below is the currently expected behavior: Multiple vDisk support is disabled by default for small PCVMs. Multiple vDisk support is enabled by default for large PCVMs. Please refer to the table below to check small vs. large PCVM deployments. Multiple vDisk support works as follows (only for large PCVMs): Scenario 1: A fresh deployment of PC VM 5.17.x or above: Once the PC VM is created, there will be 1 vDisk of 2.5 TiB. When the hosting Prism Element is registered to this PCVM, the multiple vDisk feature kicks in and creates 3 new vDisks of 2.5 TiB each. These are thin-provisioned disks created to distribute the metadata load. Once the hosting PE is registered, the multiple vDisk feature cannot be turned off; you need to re-deploy the PC VM if you do not want this feature (steps mentioned in the section below). Scenario 2: PC VM is upgraded to version 5.17.x or above: The multiple vDisk feature is enabled while the PC VM is upgrading, and the extra vDisks are added automatically just after the upgrade workflow. The feature cannot be turned off if the PC has the hosting PE registered during the upgrade. So, in both cases, the multiple vDisk feature kicks in when the hosting PE is registered to the PC. Starting with Prism Central 5.17.x, when a PC is scaled up from "small PC to large PC" or "small PC to X-large PC" by increasing the memory and vCPU, 3 additional Stargate disks of 2.5T are added, but the original disk (sdc1 in most cases, which can be verified using the 'df -h' command) remains 500G in size. The size of this disk needs to be manually expanded to 2.5T. Please reach out to Nutanix Support to get assistance with the disk expansion. Note: Please reach out to Nutanix Support for assistance with any of the following scenarios: the multiple vdisk feature is not required on a large PC deployment for any reason; the multiple vdisk feature is desired but the deployment fails; the multiple vdisk feature is required on a small PC deployment.
KB13055
"Unable to retrieve OVA information" error shown in Prism Central due to Envelope tag
"Unable to retrieve OVA information" error shown in Prism Central due to <Envelope> tag
When trying to deploy a VM from an OVA image in Prism Central (via Prism Central UI / Virtual Infrastructure / OVAs), the "Unable to retrieve OVA information" error message may be shown.
This issue is resolved in pc.2022.6. Please upgrade Prism Central to the specified version or newer. Workaround: Extract the contents of the ".ova" file: user@uservm$ tar -xvf ova_name.ova Edit the ".ovf" file, remove any namespaces (like ovf) from the <Envelope> tag, and save the file. If there is no default namespace defined (xmlns="something") in <Envelope> like <Envelope xmlns="something" xmlns:ovf="something_else">, then we need to replace xmlns:ovf="something" with xmlns="something" to avoid deployment errors. Example: convert <ovf:Envelope> to <Envelope> and </ovf:Envelope> to </Envelope>. The above step invalidates the checksum of the ".ovf" file, which will no longer match the value stored in the manifest (.mf) file. Calculate the checksum of the OVF file and put it in the manifest file. The manifest (.mf) file looks something like this: SHA256(ec9ba9ad-8d04-4d67-9e1f-40721384b749.ovf)=bafb32aca9d4fd8346d5eb0f2b0ee7121e2e2c8b85ef04016a36a3fbe2dbabd9 Replace the value after '=' with the new checksum and save the file. When the changes are done and the files are ready for OVA creation, compress the files using tar on Linux. Note: the ".ovf" file must be the first file when executing the tar command. Example: contents of the ova folder - "ovf_file.ovf", "disk1.vmdk", "disk2.vmdk", "manifest.mf". Executing the below command from inside the folder containing the mentioned files will create an OVA file supported by the platform: user@uservm$ tar -cvf my_ova.ova ovf_file.ovf manifest.mf disk1.vmdk disk2.vmdk General command: user@uservm$ tar -cvf <ova name> <ovf_file_name> <manifest file name> <disk1> <disk2> Note: It is recommended to create the OVA file using the tar command on a Linux platform. An OVA file created on other platforms may not be compatible with the Linux platform.
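The checksum step can be automated; a minimal sketch that recomputes the SHA-256 of an (illustrative) edited .ovf and rewrites the matching manifest line in place with sed:

```shell
# Stand-in files; in practice these are the extracted .ovf and .mf.
printf '<Envelope></Envelope>\n' > example.ovf
printf 'SHA256(example.ovf)=oldchecksum\n' > example.mf

# Recompute the digest and replace the value after '=' in the manifest.
new_sum=$(sha256sum example.ovf | awk '{print $1}')
sed -i "s|^SHA256(example.ovf)=.*|SHA256(example.ovf)=${new_sum}|" example.mf

mf_line=$(cat example.mf)
echo "$mf_line"
rm -f example.ovf example.mf
```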
KB16933
When upgrading Nutanix Move, the Prism Central user is automatically logged out
If Move upgrade is initiated when a Prism Central user is logged in to Move, the user is logged out during the upgrade.
A Prism Central user logged in to Move is automatically logged out when: A Prism Central user is logged in to Move. A Nutanix Move upgrade is initiated. The user is automatically logged out during the upgrade. The login screen does not display the option to log in with Prism Central.
SSH into the Move VM using the admin user. See Logging in to Move with SSH https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v5_2:top-access-ssh-t.html Change to the root user: admin@move on ~ $ rs Run the command to check the mgmtserver service status: root@move on ~ $ svcchk | grep mgmtserver CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 02b8a5ea5b4c nutanix/xtract_mgmtserver "/bin/tini -g -- /op…" 15 minutes ago Up 15 minutes 80/tcp, 443/tcp, 127.0.0.1:5001->5001/tcp bin_mgmtserver_1 Wait and check that STATUS remains Up for more than 3 minutes. If the STATUS is "Exited" unexpectedly, or is Up but the time does not increase, contact Nutanix Support. Refresh the browser tab manually and you will see the login screen with the option to log in with Prism Central.
KB9649
After Scale-Out, pcvm_same_disk_size_check reports "Disk sizes of Cassandra partitions are not the same on Prism Central VM"
After scaling out a PC, NCC shows alerts that metadata partitions across PCVMs are not consistent due to a mismatch in mounted sizes.
In certain lineages of Prism Central (PC), it is possible to encounter an issue where the Cassandra metadata partitions across PCVMs have a slightly different size after a scale-out operation. The pcvm_same_disk_size_check in NCC uses "df -h" to make sure that all disk partitions have matching sizes across PCVMs and, in these cases, will produce a warning like the one seen below. Detailed information for pcvm_same_disk_size_check: This issue has been observed in instances where: The user deploys a single-node Prism Central. The user upgrades Prism Central or scales up the single PC instance. The user scales out the Prism Central to a 3-PCVM configuration. Steps to identify: NCC Health Check: pcvm_same_disk_size_check is failing and there is a mismatch in the size of /dev/sdc1. "fdisk" shows no difference in vDisk "Size" or in the partition offsets (Start and End) for /dev/sdc1: nutanix@NTNX-x-x-x-1-A-PCVM:~$ allssh sudo fdisk -l /dev/sdc "lsblk" shows no difference in the size of /dev/sdc or /dev/sdc1 across PCVMs: nutanix@NTNX-x-x-x-1-A-PCVM:~$ allssh lsblk -b /dev/sdc /dev/sdc1 should be off by roughly 1 GB: nutanix@NTNX-x-x-x-1-A-PCVM:~$ allssh df -h The dumpe2fs command highlights a number of other differences. Desired/Required extra isize is 28 on PCVM x.x.x.1 but 32 on the other two PCVMs. The journal size is 128M on PCVM x.x.x.1 and 1024M on the other nodes. Journal length and other related attributes also vary. (For example, here, x.x.x.1 is the original PCVM; x.x.x.2 and x.x.x.3 are new PCVMs.) nutanix@NTNX-x-x-x-2-A-PCVM:~$ allssh "sudo dumpe2fs -h /dev/sdc1 | grep 'Required extra isize' -A10" Looking at the mount parameters for the partition across the three PCVMs, only x.x.x.1 is using the stripe=256 attribute. Apart from that, there are no differences. nutanix@NTNX-x-x-x-2-A-PCVM:~$ allssh "mount -l | grep sdc1" "df" shows a difference in the max size for the mounted partition /dev/sdc1: nutanix@NTNX-x-x-x-2-A-PCVM:~$ allssh df /dev/sdc1
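A quick way to confirm the symptom is to compare the reported size of the same partition across PCVMs; a sketch against hypothetical `df` figures (real values would come from `allssh df /dev/sdc1`):

```shell
# Hypothetical per-PCVM sizes (1K blocks) for /dev/sdc1.
sizes='x.x.x.1 2520637440
x.x.x.2 2621440000
x.x.x.3 2621440000'

# More than one distinct size means the partitions do not match.
distinct=$(echo "$sizes" | awk '{print $2}' | sort -u | wc -l)
if [ "$distinct" -gt 1 ]; then
  echo "Partition sizes differ across PCVMs"
fi
```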
This issue is caused by differences in the default journal attributes across versions of Prism Central. There is no impact on Prism Central performance or functionality from this slight difference in journal attributes. Nutanix Engineering is aware of the issue and is working on a fix in a future release.
KB8129
How To Set CPU Power Management States (C-State) in AHV
This article describes how to disable processor C-States for latency sensitive applications on AHV
Modern CPUs utilize a technology called "C-States" to manage the amount of power that individual processor cores are using. When a core is idle, the server's BIOS will reduce its clock rate, power draw, or both in an effort to make the system more energy efficient. In most cases, this is the desired behavior, as it can significantly reduce power consumption. The unused power may be used by other CPU cores to increase their frequency (GHz), allowing instructions executing on active CPU cores to complete faster. The number of C-States available depends on the processor make and model and the capabilities of the server BIOS. C-States can be broken down into two general buckets: Active/Idle C-States - Normal CPU activity, no power savings: C0 - The CPU is actively executing instructions. C1 - The CPU is idle, but fully online (voltage + clock-speed). C1E - The CPU is idle and its clock-speed has been reduced, but it is at full voltage. Sleep States - The CPU's clock is stopped and voltage reduced (most common states listed): C2 - CPU clock is stopped. C3 - CPU clock is stopped and voltage reduced. C6 - CPU internal/external clocks are stopped, voltage reduced or powered off. However, some applications' CPU usage patterns are bursty - requiring CPUs to transition from a low speed/voltage to full speed/voltage. If the application requires highly performant, low-latency CPU operation, it may be sensitive to CPU cores transitioning from a low-power sleep state to full power/clock-speed. For applications where low latency is more important than power savings, the hypervisor can be instructed to disable processor C-States, preventing the system from powering down CPUs. These settings are recommended or may be required for applications including Epic Operational Database (ODB), Epic Hyperspace, InterSystems Caché and MEDITECH File Server, Oracle and MS SQL databases, as well as SAP HANA, RDMA (Remote Direct Memory Access) enabled systems, and other environments where C-States could add latency.
Before modifying the CPU power states from the operating system, the BIOS has to allow it. Consult the BIOS documentation for the specific platform running the workload and follow the hardware manufacturer's recommendations for specific applications. Ensure that the BIOS allows managing CPU power states from the OS (likely only an issue on old BIOS levels). AHV can override UEFI/BIOS C-State settings, so you need to use this process in addition to the BIOS settings. Note that directly editing C-States in the BIOS is not supported for the Nutanix hardware platform, as outlined in Release Notes | BMC and BIOS https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-BMC-BIOS:bmc-Release-Notes-BMC-BIOS-overview-c.html. Enable the AHV C-State service: There is a service available on AHV that will disable the use of sleep C-States if the BIOS allows it. To use this service, run the following commands on the CVM to enable it on each AHV host and start it. On AHV 20190916.x (bundled with AOS 5.15.4 and part of AOS 5.16 and higher): nutanix@cvm$ hostssh systemctl enable cstate The cstate service disables C-States C3 and C6. Testing has shown that disabling these two states has the greatest impact on performance. C-States C0, C1, C1E and C2 are not affected. To verify that the cstate service is running, execute the following command from a CVM: nutanix@cvm$ hostssh systemctl status cstate Note: The setting will also need to be applied when additional hosts are added to the cluster or if a host is re-imaged.
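Verification across many hosts is easier when the per-host status output is parsed; a sketch that scans a hypothetical `hostssh systemctl status cstate` transcript (the banner/field layout is an assumption) for hosts where the service is not active:

```shell
# Hypothetical hostssh-style transcript: a banner line per host, followed by
# that host's systemctl output.
status_output='============ 10.0.0.11 ============
   Active: active (exited) since Mon 2023-01-09 10:00:01 UTC
============ 10.0.0.12 ============
   Active: inactive (dead)'

# Remember the host from each banner; flag it if its Active field is not "active".
inactive_hosts=$(echo "$status_output" | awk '
  /^=+ / {host=$2}
  /Active:/ && $2 != "active" {print host}')
echo "Hosts without cstate active: $inactive_hosts"
```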
KB15910
Prism Central: Data Table Report shows one less result
The Data Table in Reports displays one less result than expected. If 10 records are expected in the Data Table, only the top 9 are shown.
When a report with metrics data (i.e., CPU usage) is generated from Prism Central running one of the versions listed below: pc.2022.6.0.6, pc.2022.6.0.7, pc.2022.6.0.8, pc.2022.6.0.9, pc.2022.6.0.10, pc.2023.3, the data table in reports may show one less result compared to the expected result. For example: If the report generates a data table with performance metrics for entities (i.e., VMs or clusters), one less entity than expected is returned in the generated report. If the report uses the OR operator to match two name patterns for an entity type, the report always generates results for one entity alone.
This issue is resolved in Prism Central version 2023.4 and above. Upgrade Prism Central to the latest supported version.
KB11905
Nutanix Kubernetes Engine - How to collect CSI logs
Instructions on how to collect CSI logs
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.There might be situations where one has to debug CSI-related issues in NKE. This article provides instructions on how to collect the logs for the purpose of debugging.
Download a kubeconfig https://portal.nutanix.com/page/documents/details?targetId=Karbon:kar-karbon-download-kubeconfig-t.html file of the k8s cluster. Create a folder on your Desktop, for example, nutanix_csi_logs. Change into the newly created folder. For example: cd nutanix_csi_logs Capture the following logs relevant to troubleshooting CSI issues: kubectl get pods -n ntnx-system csi-provisioner-ntnx-plugin-0 -o yaml > csi_ntnx_plugin_pod_spec.yaml ## CSI plugin pod spec If this is for a support case, compress (ZIP) the folder and attach it to the case.
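The final collect-and-compress step can be scripted; a sketch that assumes the kubectl outputs were already written into nutanix_csi_logs/ (a placeholder file stands in for the real captures):

```shell
# Stand-in for the directory of captured CSI logs/specs.
mkdir -p nutanix_csi_logs
echo 'sample' > nutanix_csi_logs/csi_ntnx_plugin_pod_spec.yaml

# Compress the folder and verify its contents before attaching to a case.
tar -czf nutanix_csi_logs.tar.gz nutanix_csi_logs
listing=$(tar -tzf nutanix_csi_logs.tar.gz)
echo "$listing"
rm -rf nutanix_csi_logs nutanix_csi_logs.tar.gz
```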
KB7408
Expand cluster failed with "failed to get vlan of node"
Expanding the cluster with a newly added node from the Prism console failed with "Failed to get vlan tag of node: [ip]. Check if Expand Cluster workflow supports the node AOS version with VLAN setup." Log in to the Prism web console > gear icon > Expand Cluster > follow the wizard in the right work pane, then launch the discovery and expand cluster process. Genesis master node logs and analysis snippets: Find the Genesis master node: nutanix@cvm$ convert_cluster_status | grep master Identify the failed pre-check and check for errors in genesis.out: nutanix@cvm$ grep pre_expand ~/data/logs/genesis.out Log snippets from genesis.out: data/logs/genesis.out:2019-05-13 16:12:10 WARNING genesis_utils.py:1188 Failed to reach a node where Genesis is up. Retrying... (Hit Ctrl-C to abort) Check the connectivity of the IP addresses of the CVM on the new node (eth0, and eth2 as well if a backplane network/multihoming is enabled). The hypervisor on the new node should also be reachable.
Log in to the vCenter web console > select the Nutanix cluster > ESXi host > Configuration > Firewall > Edit > check "vSphere Web Client" and add the CVM IP addresses to the whitelist. Then re-launch discovery and expand the cluster with the new node again.
KB12610
NCC - storage_container_mount_check might fail if ESXi HW UUIDs are not unique
NCC storage_container_mount_check might fail if the ESXi HW UUIDs for the ESXi nodes are not unique.
NCC storage_container_mount_check might fail on ESXi hosts even though the shares/datastores/containers are actually mounted across all ESXi hosts. Detailed information for storage_container_mount_check: This occurs even though the NFS shares/containers are actually mounted correctly, as can be confirmed using the hostssh "esxcli storage nfs list" command.
This issue comes up if two or more ESXi hosts have the same HW UUID. This can be confirmed by putting Uhura in debug mode and checking the logs for the datastore list "datastore_config_list". Sample snippets are below: 2021-12-03 10:52:32,243Z DEBUG host.py:5535 Returned Datastore list is datastore_config_list { This issue happens because Uhura does not currently check for duplicate node_uuids, and the check fails if duplicate UUIDs are encountered. The reason for the duplicate UUIDs is that ESXi uses the same algorithm to generate VM UUIDs as node UUIDs. The node UUID is based on a hash that uses the BIOS hash, serial number, etc. So if one or more nodes are using incorrect BIOS system identifiers, the HW UUID for those nodes will come up the same. This is explained in the VMware KB https://kb.vmware.com/s/article/1006250 https://kb.vmware.com/s/article/1006250. Duplicate UUIDs can be identified using esxcfg-info | head -20 and checking the BIOS and Serial Number fields. $ hostssh "esxcfg-info | head -20" The current workaround is to identify the nodes that have incorrect serial numbers and correct them. If this is an OEM platform, the customer needs to reach out to the respective vendor to have this corrected.
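Spotting the duplicates across many hosts is a one-liner once the serial numbers are extracted; a sketch over hypothetical host/serial pairs (on a real cluster these would be parsed from the `hostssh "esxcfg-info | head -20"` output):

```shell
# Hypothetical <host> <serial-number> pairs.
serials='host1 0123456789
host2 0123456789
host3 9876543210'

# uniq -d prints only values that occur more than once.
dups=$(echo "$serials" | awk '{print $2}' | sort | uniq -d)
echo "Duplicate serial numbers: $dups"
```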
KB15396
NGT | NGT upgrade task from NGT 3.0 (bundled with AOS 6.6) might fail for linux VMs to later versions
This article describes a workaround to ensure NGT 3.0 is upgraded to a later version on a Linux VM if the upgrade task fails.
While upgrading NGT from version 3.0 (bundled with AOS 6.6) to a later version, the NGT agent on the UVM is upgraded successfully; however, the ngt_guest_agent service does not come up on the VM, resulting in upgrade task failure. You can confirm that the NGT service is not running and that the NGT version on the UVM is the updated version. For CentOS VM: [root@localhost tmp]# systemctl status ngt_guest_agent.service For Debian VM: nutanix@debian:~$ systemctl status ngt_guest_agent.service The NGT upgrade task fails for both CentOS and Debian user VMs. The upgrade is successful, but communication does not come up after the upgrade, causing the upgrade task to time out.
We need to manually restart the Nutanix Guest Agent service on the Linux VM post-upgrade. nutanix@uvm:~$ systemctl restart ngt_guest_agent.service If the systemctl command is not supported nutanix@uvm:~$ service ngt_guest_agent restart This issue is only with upgrades from NGT version 3.0 to a later version.
KB15708
CVM not booting due to error "kauditd hold queue overflow'
CVM not booting due to error "kauditd hold queue overflow' during AHV upgrade
The CVM console shows the error "kauditd hold queue overflow" during an AHV upgrade. The CVM is up, but it is not possible to log in to it or SSH to it from other CVMs. ^[[32m OK ^[[0m] Stopped Getty on tty1.
The error is similar to ENG-576704 https://jira.nutanix.com/browse/ENG-576704. To resolve the issue, we need to perform a CVM Rescue https://confluence.eng.nutanix.com:8443/display/STK/SVM+Rescue%3A+Create+and+Mount. Please reach out to an STL before applying the workaround. Note: DO NOT perform svmrescue if in the middle of an AOS upgrade. From the Genesis leader CVM, run the below command to tail the genesis.out log to monitor the CVM's status: tail -F /home/nutanix/data/logs/genesis.out | grep ssd
KB16263
Prism Central - JS files in "/home/apache/www/console/app-extension/scripts/" created on every upgrade are slowly emerging as one of the potential contributors to the "/home" partition reaching full capacity.
This document outlines the process for manually cleaning up old UI files to free up space on the /home partition.
The UI files associated with the PC are stored in the /home/apache/www directory. It is observed that when a PC upgrades, the older .js and .css files are still present along with the new bits. This could potentially cause the /home partition to become full, or at least take some extra space (~500 MB) with each upgrade. Identification: 1. After every upgrade of Prism Central, old JS files are not deleted from /home/apache/www/console/app-extension/scripts/. These additional files might result in the /home partition having less available storage: nutanix@NTNX-PCVM:~$ allssh "du -sh /home/apache/www/console/app-extension/scripts/" Check when the PC was last upgraded: nutanix@NTNX-PCVM:~$ cat ~/config/upgrade.history 2. Check whether there are .js files older than this date in the folders below. If there are older .js and .css files, go to step 3; otherwise, there are no old UI files to clean up. For example, in the /home/apache/www/console/ directory shown below, we can see 3 sets of UI files; only the latest ones are used, so we would need to delete the older files. nutanix@NTNX-PCVM:/home/apache/www/console/app-extension/scripts$ ls -lrt 3. Since the upgrade was done at the above date and time, we do not need UI files that are older than this time.
Note: Before performing the steps below, check whether a set of .js files is newer than the time of the last upgrade. There are scenarios where a time jump into the future occurred and was corrected later; the upgrade history would then show a later time, which could lead to all .js files getting deleted. To clean up all UI files older than the last upgrade time, run the following commands: 1. Create a file named delete_old_js_files.sh under the /home/nutanix folder (or under /home/nutanix/tmp) nutanix@NTNX-PCVM:~$ vi delete_old_js_files.sh 2. Copy the script into the file #!/bin/bash 3. Save the file using ESC :wq 4. Run the script as in the following example. Usage: nutanix@NTNX-PCVM:~$ sh delete_old_js_files.sh <date-time> Example: nutanix@NTNX-PCVM:~$ sh delete_old_js_files.sh "2024-12-16 06:05:54" This deletes all .js files in the above folders that are older than the specified time.
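The script contents are not reproduced in this article. As an illustration only, a minimal sketch of the underlying cleanup logic (assuming GNU find; the function name and scratch directory are hypothetical, NOT the official script) could look like this:

```shell
#!/bin/bash
# Hypothetical sketch only -- NOT the official delete_old_js_files.sh script.
# Lists .js/.css files in a directory last modified before a cutoff timestamp.
list_old_ui_files() {
  # $1 = directory, $2 = cutoff (e.g. the last-upgrade time from upgrade.history)
  find "$1" -maxdepth 1 -type f \( -name '*.js' -o -name '*.css' \) \
       ! -newermt "$2" -print
}

# Demo on a scratch directory; the real target would be directories such as
# /home/apache/www/console/app-extension/scripts/.
d=$(mktemp -d)
touch -d '2020-01-01' "$d/old.js"
touch "$d/current.js"
list_old_ui_files "$d" '2024-12-16 06:05:54'   # prints only old.js
```

Review the printed list first; only then append `-delete` to the find command to actually remove the files.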
KB8878
Uploading tar.gz file fails with Server Error when File name has spaces
Uploading tar.gz file fails with Server Error when File name has spaces.
While uploading a Foundation tar.gz bundle or any other bundle, you may see "Server Error" if the file name has spaces. In the above example, the file name was "Foundation Upgrade foundation-4.5.1.tar.gz". The Prism leader's /home/nutanix/data/logs/prism_gateway.log has the following: ERROR 2020-01-27 08:12:50,445 http-nio-0.0.0.0-9081-exec-1 [] web.providers.PathParamRequestFilter.mungeUri:156 Unable to munge url. Exception Illegal character in path at index 39: upgrade/foundation/softwares/Foundation Upgrade foundation-4.5.1.tar.gz/upload The above log snippet shows that the upload path cannot be accessed, as the space character is considered illegal.
Rename the tar bundle and remove any spaces in the file name. The upload should go through successfully.
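For example, a quick shell rename that strips the spaces before uploading (the demo below creates a placeholder file in a scratch directory; the file name is taken from the example above):

```shell
# Demo: rename a bundle so the file name contains no spaces before uploading.
cd "$(mktemp -d)"
touch 'Foundation Upgrade foundation-4.5.1.tar.gz'   # placeholder for the real bundle
for f in *' '*; do
  # Replace each space with a dash (POSIX-safe via tr).
  mv -- "$f" "$(printf '%s' "$f" | tr ' ' '-')"
done
ls   # -> Foundation-Upgrade-foundation-4.5.1.tar.gz
```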
KB9083
Nutanix Kubernetes Engine - Enabling Karbon AirGap failed with error" Failed to configure with SSH: Failed to run command on host error : Process exited with status 8"
Enabling Nutanix Kubernetes Engine AirGap fails when Windows IIS is used as the web server.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. Enabling Karbon AirGap failed with error: 2020/02/12 06:31:40.289513 utils.go:278: [WARN] failed to execute operations on remote node 10.xx.yy.153: Failed to configure with SSH: Failed to run command: on host: "10.xx.yy.153:22" error: "Process exited with status 8 From Karbon_core.out, observed below error on every file from ntnx-k8s-releases : 2020/02/12 06:31:40.284584 utils.go:278: [WARN] failed to execute operations on remote node 10.129.89.153: Failed to configure with SSH: Failed to run command:
The Windows IIS web server does not support ":" in the path name, and Karbon package names include ":" (for example, prometheus-config-reloader:v0.24.0.tar). Using IIS to host the AirGap bundle will not work. Use an Apache web server on a Linux machine to host the AirGap bundle instead. Nutanix will fix this issue in a future release.
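Since the failure is caused by ":" in the package names, you can confirm which mirrored files are affected before picking a web server. A small demo (the scratch directory and file names are illustrative):

```shell
# Demo: list mirrored AirGap files whose names contain ':' -- paths that IIS
# cannot serve but that Apache on Linux handles fine.
d=$(mktemp -d)
touch "$d/prometheus-config-reloader:v0.24.0.tar" "$d/README"
find "$d" -type f -name '*:*'   # prints only the colon-named file
```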
KB1655
Erasing HDDs and SSDs Securely Using Blancco Software
This KB article describes how to securely erase hard disk (HDD) and solid state disk (SSD) drives on a non-production Nutanix node by using Blancco software.
The following KB article describes how to securely erase hard disk drives (HDD) and solid state disks (SSD) on a non-production Nutanix node using Blancco software. For complete information, refer to the Blancco software documentation. Note: Any Nutanix licences must be reclaimed before destroying a cluster. Refer to License Manager Guide https://portal.nutanix.com/page/documents/details?targetId=License-Manager:lmg-licmgr-unlicense-ul-t.html for more details on Reclaiming Licences.This procedure requires that you have stopped using all the cluster nodes for any production work, and you are ready to decommission the nodes and destroy the cluster. Destroying a Cluster: Refer to Destroying a Cluster https://portal.nutanix.com/page/documents/details?targetId=Advanced-Admin-AOS:app-cluster-destroy-t.html.You must restart each node where you are erasing drives.PC running Windows 7, 32-bit or 64-bit version, with downloaded Blancco software and ISO images.Keyboard and monitor connected to a node in the Nutanix block.Recommended: Blancco 5 Toolkit (includes bootable USB drive and HASP key for license management) available on http://www.blancco.com http://www.blancco.com.If you did not acquire the Blancco 5 Toolkit you will need to obtain: One 4GB or larger USB drive, Blancco 5 software with HASP key, and Blancco USB Creator software Note: You can erase multiple HDDs simultaneously.You can erase only one SSD at a time (you cannot erase two or more SSDs at the same time).
This procedure requires that you have removed the node from production. You will also have to restart each node where you are erasing drives. Perform this step if you did not acquire the Blancco 5 toolkit. On a Windows 7 PC, create a bootable USB drive by using the Blancco USB Creator software and add the Blancco 5 ISO image. Click Add to select the Blancco ISO.Select the USB drive from the Select media drop-down and check Format.Click Create to create the bootable drive.When the creation status is Done, click Quit and remove the USB drive. Insert the bootable USB drive and HASP key USB dongle into the USB slots at the rear of the Nutanix node.Restart the node and set the Blancco USB drive as the first bootable drive. On the keyboard connected to the node, press the Del key as the node boots to display the BIOS screen.In the BIOS, use the arrow keys to navigate over to Save and Exit and then down to select the USB drive as the boot override device. Under Boot Override, select the bootable_usb_drive_name as the node boot device.Select Save and Exit to boot the node. The Blancco software starts, then detects and displays the drives attached to the node. Click Advanced and select the HDD drives to erase. The software displays a Process page showing the status of the disk erasure. Select one or more HDDs to erase.Select the Erasure Standard HMG infosec Standard 5, Lower Standard and Erase mapped sectors.Click Erase and Yes in the confirmation dialog. Click Advanced and select one SSD to erase. Note: You can erase only one SSD at a time (you cannot erase two or more SSDs at the same time). If two SSDs are attached to a node, erase one drive at a time. Select the Erasure Standard Blancco SSD Erasure - ATA and Erase mapped sectors.Click Erase and Yes in the confirmation dialog.After successfully erasing the SSD, you can select an additional SSD to erase. 
After successfully erasing all HDDs and SSDs attached to your node, it can be returned to Nutanix or imaged with the desired Nutanix OS (NOS) version.
KB9879
LCM Pre-check: test_nx_ipmi_health_check
This article contains information about LCM pre-check for IPMI health
The pre-check "test_nx_ipmi_health_check" was introduced in LCM 2.3.4. It checks the state of IPMI for NX platforms before a Phoenix-based upgrade operation on Prism Element.Example failure message: Operation failed. Reason: Lcm prechecks detected 1 issue that would cause upgrade failures.
Ensure the IPMI for the outlined host is reachable. Ping the IPMI IP from the CVM and the host to verify this. You may run the command below from any CVM to fetch the IPMI addresses for the hosts: nutanix@CVM:~$ ncli host ls | egrep -i "address|name" Run the commands below to check the IPMI status manually on the host outlined in the error message. Make sure the output has the "BMC version" or "Firmware Revision" populated: AHV [root@AHV_Host ~]# /usr/bin/ipmitool mc info ESXi [root@ESXi_Host:~] /ipmitool mc info Hyper-V nutanix@CVM:~$ winsh If you receive an error output for the above commands, or if the pre-check still fails (even though IPMI is responsive), contact Nutanix Support https://portal.nutanix.com/.
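If you want to script the reachability check across many hosts, the IPMI addresses can be parsed out of saved `ncli host ls` output, for example (the sample output and IPs below are illustrative, not from a real cluster):

```shell
# Demo: extract IPMI addresses from captured `ncli host ls` output (sample text),
# so each one can then be pinged in a loop.
cat > /tmp/ncli_hosts.txt <<'EOF'
    Name                      : NTNX-A
    IPMI Address              : 10.0.0.11
    Name                      : NTNX-B
    IPMI Address              : 10.0.0.12
EOF
awk -F': *' '/IPMI Address/{print $2}' /tmp/ncli_hosts.txt
# then, e.g.: for ip in $(awk ...); do ping -c1 -W2 "$ip"; done
```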
KB8755
CVM degraded on Dell XC 14G hardware with BIOS older than 2.2.11 due to undetected faulty DIMM
This KB describes an issue on Dell XC 14G platforms running BIOS older than version 2.2.11 that led to a CVM being degraded multiple times. The issue was root caused to an undetected faulty DIMM module by the BIOS in the host.
An issue has been observed on Dell XC 14G platforms running BIOS older than version 2.2.11 that led to a CVM being degraded multiple times. The issue was root-caused to a faulty DIMM memory module. However, due to the old BIOS version running on the host, the SEL log from iDRAC did not report any hardware-related issues. The following symptoms were observed: High CPU load on the CVM (above 50)Cassandra timeouts leading to degraded node https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_20:man-node-degraded-wc-c.html conditionServices unstable, crashing with non-responsive PID: E1018 13:39:40.177721 24466 init_nutanix.cc:943] Timed out waiting to acquire DumpStackTracesOfAllThreads mutex (10 secs) - exiting process Zookeeper connection lost with greenlets in genesis.out logs: 2019-10-18 13:40:07 WARNING zookeeper_session.py:261 Got connection loss back from Zookeeper, retrying... Hung tasks pointing to page faults in kernel logs: 2019-10-18T13:50:07.820692+02:00 NTNX-A-CVM kernel: [4500297.857539] INFO: task python2.7:576 blocked for more than 120 seconds. cpu_unblock logs in the kernel report continuously: 2019-10-18T13:38:33.030992+02:00 NTNX-A-CVM cpu_unblock[11910]: child is hung, ENG-72597 was likely hit. Unexpected high CPU load on the hypervisor itself or host hang All of these symptoms point to hardware failure, but no error is reported in the iDRAC Event logs.
Undetected uncorrectable ECC DIMM errors, missed by the Dell BMC/BIOS, are causing these issues. It is recommended to upgrade the BIOS to at least 2.2.11 on the Dell XC 14G servers. The recommended BIOS version for each Dell XC platform can be found in the " Dell EMC XC Series Appliance and XC Core System LCM Reference Guide https://www.dell.com/support/home/en-in/products". Once the BIOS is upgraded on the host, the SEL will start reporting the failing DIMMs that are the cause of the issue: 10 | 10/24/2019 | 15:41:49 | Critical Interrupt #0x19 | Fatal NMI ( PCI bus:17 device:00 function:0) | Asserted Dell Support should be engaged at this point for the hardware replacement.
KB1602
Replication RPC to the remote cluster completed with error kInvalidValue
How to troubleshoot the error kInvalidValue for Remote Site replication.
When setting up a cluster to back up to a Remote Site, you may see the following error: Protection domain [PD] replication to remote site [remote-site] failed. Replicate RPC to the remote cluster [remote-site] completed with error kInvalidValue. Remote detail: Replicate path request for invalid container [remote-site-container].
This error indicates that the vStore mapping is incorrectly set up from the source to the remote cluster. Note: You must also have a Remote Site set up on the remote cluster pointing back to the source cluster. Update the Remote Site via the Prism UI > Data Protection menu: Once you click Update, choose Advanced Settings: From here, set the vStore Name Mappings to map a container on the source site one-to-one with a container on the remote site. Note: You must map the container that contains the VMs you will be taking snapshots of. Once this is done, reattempt the replication. The kInvalidValue error should no longer occur. If you are receiving kStaleCluster, see KB-1437 https://portal.nutanix.com/kb/1437, which recommends setting up the Remote Site in both directions.
KB4067
Configuring ESXi host using vCenter Host Profiles (New installation, SATADOM replacement, cluster expansion)
When to use host profiles: Nutanix cluster expansion (VMware ESXi hosts only); re-imaging an ESXi host in the Nutanix cluster (especially during the SATADOM replacement procedure).
Prerequisites: Host Profiles require a VMware Enterprise Plus license. For more information, see VMware vSphere 5 Licensing, Pricing and Packaging: http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vmware-vsphere_pricing-white-paper.pdf http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vmware-vsphere_pricing-white-paper.pdf About host profiles: VMware vCenter host profiles can be used to facilitate ESXi host configuration and to reduce the time this process requires in large-scale environments and/or deployments. When re-imaging an ESXi host, it is not always easy to bring back all of the configuration as it was before re-imaging, especially network, security, and other host-level parameters. That is exactly what vCenter host profiles accomplish, avoiding repetitive work. Host profiles encapsulate the host configuration in a single location and store it in the vCenter database to save the time of re-configuring the host. A host profile can be taken from the same host before re-imaging, or created from another similar host in the same vCenter cluster. Note: It is recommended to create the host profile from the same host before re-imaging it. This is mandatory to save all configuration before deletion. However, it is possible to create the host profile from another host in the same vCenter cluster.
Before you begin, make sure that vCenter has the required license. To check if vSphere has the required license: Log in to vCenter using administrator credentials In the Home page, click Licensing under the Administration sectionEnsure that the vSphere license is "Enterprise Plus" Only three steps to configure the host: Create host profileAttach Profiles Apply a Profile [1] To create host profile: ===================== Log in to vCenter using administrator credentials In the Host and Clusters view, select the host that you are going to re-image. Note: If it is a new host installation, cluster expansion, or the host died and you cannot get the configuration from the old ESXi, select the ESXi you want to designate as the reference host for the new host profile. The host must be a valid host to use as a reference host. Right-click the host and select Host Profile > Create Profile from Host... The Create Profile from Host wizard opens. Type the name and enter a description for the new profile and click Next.Review the summary information for the new profile and click Finish to complete creating the profile. Note: In some cases, you may need to disable a Profile Configuration to select which policy configurations are active when a host profile is applied. For example, if the ESXi host had a special configuration that needs to be automatically configured by a third party software, in this case, you may deselect this configuration from the host profile to not apply it. As mentioned in KB 2221 http://portal.nutanix.com/kb/2221: "We, at Nutanix, do support host profiles, but while configuring, please make sure NOT to select "SSH authorized key for root user" as using this could bring the cluster down when the host profile baseline is implemented. This is because the baseline Host profile goes and overwrites the existing shared keys and this breaks the CVM communication with the ESXi." 
[Optional] To disable a Profile Configuration: ===================== In the Host Profiles main view, select the profile with the configuration to enable or disable.Right-click the profile and select Enable/Disable Profile Configuration. Check or deselect the profile policy configurations to enable or disable.A disabled configuration is not applied when applying the host profile. Hosts are not checked for compliance with a disabled configuration.Click OK. [2] Attach Profiles from the Host: ===================== Before you can apply the profile to a host, you need to attach the host to the profile or the profile to the host. This step should be done after installing the ESXi host and joining the vCenter. In the Host and Clusters view, select the host to which you want to attach a profile.Right-click the host and select Host Profile > Manage Profile. In the Attach Profile dialog, select the profile created in previous section to attach to the host and click OK. [3] Apply a Profile from the Host: ===================== The host must be in maintenance mode before a profile is applied to it. In the Host and Clusters view, select the host to which you want to apply a profile.Right-click the host and select Host Profile > Apply Profile. In the Profile Editor, enter the parameters and click Next.Continue until all the required parameters are entered.Click Finish.
KB3432
Frequent time drifts on UVM after migration at ntp level
On UVM running applications errors about time drift are reported at log level.
After migrating applications to a Nutanix environment, the hosting UVMs start reporting kernel time synchronization status changes at the ntpd level. 20 Jun 08:22:03 ntpd[4018]: kernel time sync status change 6001 20 Jun 08:56:11 ntpd[4018]: kernel time sync status change 2001 20 Jun 09:48:06 ntpd[4018]: synchronized to LOCAL(0), stratum 10 20 Jun 09:48:06 ntpd[4018]: kernel time sync status change 6001 20 Jun 09:48:49 ntpd[4018]: kernel time sync status change 2001 20 Jun 10:05:12 ntpd[4018]: synchronized to 172.16.146.231, stratum 3 20 Jun 10:21:34 ntpd[4018]: time reset -41.497757 s 20 Jun 10:21:54 ntpd[4018]: synchronized to 172.16.146.231, stratum 3 20 Jun 18:49:17 ntpd[4018]: kernel time sync status change 6001 20 Jun 19:08:04 ntpd[4018]: synchronized to LOCAL(0), stratum 10 20 Jun 19:08:04 ntpd[4018]: kernel time sync status change 2001 20 Jun 19:25:15 ntpd[4018]: synchronized to 172.16.146.231, stratum 3 20 Jun 19:40:39 ntpd[4018]: time reset -41.838818 s 20 Jun 19:41:31 ntpd[4018]: synchronized to 172.16.146.231, stratum 3 This is strongly visible with SAP applications, which can tolerate a maximum drift of 3 seconds. As an example, the SAP application front end reports the following alerts: Jun 20 19:23:35 sde-spr1a01 SAPPR1_20[11419]: A24 Monitoring: System time: Cannot go back (< 3 sec.). Wait 1 second(s) Jun 20 19:23:35 sde-spr1a01 SAPPR1_20[53598]: A24 Monitoring: System time: Cannot go back (< 3 sec.). Wait 1 second(s) Jun 20 19:23:35 sde-spr1a01 SAPPR1_20[53608]: A24 Monitoring: System time: Cannot go back (< 3 sec.). Wait 1 second(s) Jun 20 19:23:36 sde-spr1a01 SAPPR1_20[11419]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:36 sde-spr1a01 SAPPR1_20[42753]: A24 Monitoring: System time: Cannot go back (< 3 sec.). Wait 1 second(s) Jun 20 19:23:36 sde-spr1a01 SAPPR1_20[53598]: A29 Monitoring: System time: Cannot go back (> 2 sec.). 
Use last "valid" time Jun 20 19:23:36 sde-spr1a01 SAPPR1_20[53608]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:37 sde-spr1a01 SAPPR1_20[42753]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:37 sde-spr1a01 SAPPR1_20[53596]: A24 Monitoring: System time: Cannot go back (< 3 sec.). Wait 1 second(s) Jun 20 19:23:38 sde-spr1a01 SAPPR1_20[53596]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:39 sde-spr1a01 SAPPR1_20[11419]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:39 sde-spr1a01 SAPPR1_20[53608]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:39 sde-spr1a01 SAPPR1_20[53598]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:46 sde-spr1a01 SAPPR1_20[53596]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:46 sde-spr1a01 SAPPR1_20[53608]: A29 Monitoring: System time: Cannot go back (> 2 sec.). Use last "valid" time Jun 20 19:23:57 sde-spr1a01 SAPPR1_20[53608]: A24 Monitoring: System time: Cannot go back (< 3 sec.). Wait 2 second(s) Jun 20 19:38:49 sde-spr1a01 SAPPR1_20[53589]: A17 Basis System: > Ill.Srvtime: ActTim 1466444329 LastTim 1466444350 zdate 1812 Jun 20 19:38:49 sde-spr1a01 SAPPR1_20[42753]: A17 Basis System: > Ill.Srvtime: ActTim 1466444329 LastTim 1466444350 zdate 1812 Jun 20 19:38:49 sde-spr1a01 SAPPR1_20[42753]: AB0 Basis System: Runtime error "ZDATE_ILLEGAL_LOCTIME" occurred. Jun 20 19:38:49 sde-spr1a01 SAPPR1_20[53589]: AB0 Basis System: Runtime error "ZDATE_ILLEGAL_LOCTIME" occurred. Jun 20 19:38:49 sde-spr1a01 SAPPR1_20[31813]: A17 Basis System: > Ill.Srvtime: ActTim 1466444329 LastTim 1466444349 zdate 1812 Jun 20 19:38:49 sde-spr1a01 SAPPR1_20[31813]: AB0 Basis System: Runtime error "ZDATE_ILLEGAL_LOCTIME" occurred. 
Jun 20 19:38:50 sde-spr1a01 SAPPR1_20[7808]: A17 Basis System: > Ill.Srvtime: ActTim 1466444330 LastTim 1466444351 zdate 1812 Jun 20 19:38:50 sde-spr1a01 SAPPR1_20[7808]: AB0 Basis System: Runtime error "ZDATE_ILLEGAL_LOCTIME" occurred. Jun 20 19:39:02 sde-spr1a01 SAPPR1_20[7808]: AB0 Basis System: Runtime error "ZDATE_ILLEGAL_LOCTIME" occurred. Jun 20 20:08:30 sde-spr1a01 ntpd[36626]: kernel time sync status 2040 Jun 20 20:08:40 sde-spr1a01 ntpd[36626]: kernel time sync status change 2001
As per the official ntpd documentation, available at http://doc.ntp.org/4.1.0/ntpd.htm http://doc.ntp.org/4.1.0/ntpd.htm, the ntpd behavior at startup depends on whether the frequency file, usually ntp.drift, exists. The ntp.drift file contains the latest estimate of the clock frequency error. When ntpd is started and the file does not exist, ntpd enters a special mode designed to quickly adapt to the particular system clock oscillator time and frequency error. This takes approximately 15 minutes, after which the time and frequency are set to nominal values and ntpd enters normal mode, where the time and frequency are continuously tracked relative to the server. After one hour, the frequency file is created and the current frequency offset is written to it. When ntpd is started and the file does exist, the ntpd frequency is initialized from the file and ntpd enters normal mode immediately. After that, the current frequency offset is written to the file at hourly intervals. Deleting the stale drift file and restarting the ntpd daemon resolves the issue.
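As a sketch of the fix, the drift-file path should be read from the `driftfile` directive in ntp.conf rather than assumed. The demo below parses a sample config (the file contents are illustrative); the actual delete/restart commands are shown as comments because service names vary by distro:

```shell
# Demo: read the drift-file path declared in ntp.conf (sample config); this is
# the file to delete before restarting ntpd on the affected UVM.
conf=$(mktemp)
printf 'driftfile /var/lib/ntp/ntp.drift\nserver 172.16.146.231\n' > "$conf"
driftfile=$(awk '$1=="driftfile"{print $2}' "$conf")
echo "$driftfile"   # -> /var/lib/ntp/ntp.drift
# On the affected VM (commands/service names vary by distro):
#   service ntpd stop && rm -f "$driftfile" && service ntpd start
```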
KB6225
Witness VM admin password reset
Customer forgot the witness VM admin password and cannot login to nuclei
Login to nuclei on the Witness VM fails with the admin password: nutanix@NTNX-A-CVM:~$ nuclei
Reset the password forcefully from the root account (remember that any other action from the root account is not recommended). Note: Ensure the Witness admin password only contains Printable ASCII characters https://en.wikipedia.org/wiki/ASCII#Printable_characters. Characters outside the required range will cause Cerebro crashes. See KB-15720 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V0000010ymQSAQ and ONCALL-16289 https://jira.nutanix.com/browse/ONCALL-16289 for details. SSH to the Witness VM with the nutanix account Example: nutanix@NTNX-A-CVM:~$ sudo su Note: If you want to reset the password to the default "Nutanix/4u" and you hit the following error: Password has been already used. Choose another. then you can delete or rename the file below as root to clear the password history. mv /etc/security/opasswd /etc/security/opasswd.old If the password is changed after the Witness VM is registered with the Metro Availability and/or Two-Node clusters, additional steps are required as in KB 4376 http://portal.nutanix.com/KB/4376
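Before setting a new password, you can sanity-check that it contains only printable ASCII characters, per the note above (the sample passwords are illustrative):

```shell
# Demo: verify a candidate password is printable ASCII only (0x20-0x7e),
# the range required by the Witness admin password note above.
pw='N3wAdminP@ss!'
if printf '%s' "$pw" | LC_ALL=C grep -Eq '^[ -~]+$'; then
  echo "password is printable ASCII"
else
  echo "password contains characters outside printable ASCII"
fi
```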
KB13088
DELL LCM- upgrade failure "EXCEPT:{"err_msg": "Update candidate list is not empty, num_candidates 1", "name": "Hardware Entities (Redfish)", "stage": 1"
This KB documents an LCM defect in which the Dell Redfish component is inventoried but is not seen in the Stage 1 upgrade candidate list. In Stage 2, the candidate is observed even though it is not expected in that stage.
Symptoms: During LCM-based firmware upgrades of the Redfish Component (14G Firmware Payload) from RIM bundle DELL-LCM-2.1, the LCM operation fails with the following message: 2022-04-20 15:18:46,607Z INFO helper.py:117 (<HOSTIP>, update, cc6426ce-2c7f-4f17-b97a-d77c4a5db16f, upgrade stage [2/2]) [2022-04-20 15:18:46.625123] ============================== After the LCM operation fails, a review of the firmware version within iDRAC shows that the firmware was updated to the intended version. However, LCM reports the component on a down-rev version. Attempting to re-inventory the components within LCM does not update the Prism UI with the correct version.
This issue is caused by a rare race condition in which LCM polls the Dell Redfish API before the remote Redfish API services have completely come online. As a result, the firmware version currently installed on the component is not reported correctly. To address this, Dell Engineering has provided a new API call for LCM to poll the status of the remote services and validate that they are online. The fix will be integrated in LCM 2.5, and customers using Dell hardware are urged to upgrade to that version when available to mitigate the risk of hitting this issue. Workaround: If a cluster experiences this LCM failure during an upgrade, re-run the LCM upgrade for the component that is inaccurately reporting the firmware version. This allows LCM to synchronize with the firmware revision currently installed on the hardware component and corrects the issue.
After upgrading Foundation via 1-Click, Prism shows the upgrade task stuck at 100%
KB3290
Force node removal of node during AOS upgrade
There are instances where a NOS upgrade can uncover hardware issues in one node (such as faulty hardware or a damaged SATADOM). This KB aims to help with removing the node even if there is a pending upgrade.
WARNING: Nutanix Engineering strongly discourages node removals during active upgrades (AOS especially). Node removals are resource-intensive operations that place additional stress on the cluster infrastructure and may complicate or disrupt upgrade workflows, leading to data unavailability or even data loss. Do not apply any of the steps in this article unless you have the approval of a Support Tech Lead (Staff SRE/EE) or Devex. This KB should be used with extreme caution. If your situation is not exactly the same as described in the Symptoms section, or you are not sure, engage a Senior SRE or ONCALL assistance. Symptoms: This KB describes a scenario where a rolling AOS upgrade uncovers hardware issues (for example, a faulty LSI controller). Normally, Engineering recommends that the cluster be left in its current state (one node down) until the replacement hardware is sent on-site and can be used to bring the failed node back online. The risk of leaving a cluster in this state for several days is minimal. Once the failed node is brought back online, the AOS upgrade automatically resumes from where it left off. This procedure is intended to cover rare cases where circumstances dictate that the best approach is to remove the faulty node from the cluster and continue the upgrade on the remaining nodes. This KB applies only when ALL of the conditions below are met: 1. The cluster has more than 3 nodes. 2. The faulty node is NOT a Zookeeper node. This can be checked with the following command; the faulty node should not appear in its output. If the faulty node is a Zookeeper node, do not proceed with this KB. Engage ONCALL assistance to migrate Zookeeper. nutanix@CVM:~$ allssh "cat /etc/hosts |grep zk" 3. The cluster has enough space to accommodate the current usage if the faulty node is removed. Use KB 1557 https://portal.nutanix.com/kb/1557 to do the calculation. 
For any other scenario, or if you do not feel comfortable with the operations described below, check with a Senior SRE or consider opening an ONCALL to request engineering involvement.
Pause the AOS upgrade. $ cluster disable_auto_install; cluster restart_genesis Verify the AOS upgrade is paused. In the output, each working CVM should show "Disabled Auto-upgrade". nutanix@CVM:~$ cs | grep -v UP Obtain the node id of the faulty node: nutanix@CVM:~$ svmips -d On a working CVM, run the command below (replace the node id). nutanix@CVM:~$ ncli host rm-start id=9 force=true skip-space-check=true bypass-upgrade-check=true Once the node is removed successfully, verify whether the shutdown token is held by the removed node. If it is, engage a Support Tech Lead (Staff SRE/EE) or Devex. If not, go to the next step. nutanix@CVM:~$ zkcat /appliance/logical/genesis/node_shutdown_token Resume the AOS upgrade and restart genesis across the cluster. The AOS upgrade should proceed on the rest of the nodes. $ cluster enable_auto_install; cluster restart_genesis Check the AOS upgrade status. $ upgrade_status Once upgrade_status shows all CVMs are up-to-date, verify that there are no pending progress_monitor tasks. If the following command shows pending AOS upgrade tasks, remove them following the Delete stuck progress monitor events /articles/Knowledge_Base/Delete-stuck-progress-monitor-events KB. $ progress_monitor_cli -fetchall
KB14124
In Prism Central, VM page showing error 'Cannot read properties of null (reading 'fnClearTable)',when traversed through the tabs
This KB helps in resolving the issue where VM page shows errors when traversing through tabs.
When switching between tabs on the VM page of Prism Central multiple times, the error 'Cannot read properties of null (reading 'fnClearTable')' is observed intermittently. This issue is observed only when switching between tabs.
KB10549
Nearsync snapshot failure with error "kAlready protected Failing the PD snapshot"
Nearsync snapshot failure with error "kAlready protected Failing the PD snapshot" due to shared disks/multi writer disks configured for the VMs
Nearsync snapshots fail with the error "kAlreadyProtected" when ESXi multi-writer disks are used for the protected VMs. Symptoms: A Nearsync PD snapshot failure alert is reported in Prism. The protection domain constantly switches between Nearsync and Async and vice versa. Identification: The protection domain reports a nearsync snapshot failure followed by transition-out events because of the snapshot failure. ID : 835b5512-6f16-4cdd-a315-519c30b2592b Cerebro.INFO logs from the Cerebro leader report the nearsync snapshot failure with the error "kAlreadyProtected" for a specific consistency group. However, the asynchronous/full snapshot is created successfully. E1222 12:49:42.164587 14621 create_mesos_snapshot_op.cc:1115] op_id=11798935 cg_id [originating_cluster_id: 4471762457705869272 originating_cluster_incarnation_id: 1593288870990721 id: 64862832] session_id [4471762457705869272:1593288870990721:243587096] lcs_uuid=2af3757f-7f6c-46cd-8503-b45a735168d5 Failed to finalize LWS for LCS 2af3757f-7f6c-46cd-8503-b45a735168d5 error kAlreadyProtected During the same time, the stargate.INFO log reports that the snapshot_group_op failed for the problematic vdisk since it is already protected in a different CG. 
I1222 12:49:10.196769 13875 snapshot_group_op.cc:4303] op_id=4294028720 vdisk for inode 5:0:55520 is already a part of some cg: vdisk_id: 244347436 vdisk_name: "NFS:5:0:55520" parent_vdisk_id: 244338487 vdisk_size: 4398046511104 container_id: 1696362 params { total_reserved_capacity: 42949672960 } creation_time_usecs: 1604423297282801 closest_named_ancestor: "NFS:4611686018671735492" vdisk_creator_loc: 7 vdisk_creator_loc: 110825884 vdisk_creator_loc: 4293913596 nfs_file_name: "iamaaec1prd_2-flat.vmdk" chain_id: "(\313x\2619\354G\234\253k +\3425\332>" vdisk_uuid: "\205+^\305\205\331N\331\216q\276\224\003\340d\262" never_hosted: false cg_id { originating_cluster_id: 4471762457705869272 originating_cluster_incarnation_id: 1593288870990721 id: 240799054 } parent_draining: false near_sync_session_root_vdisk_id: 240798571 next_snapshot_time_usecs_hint: 1608624960000000 vdisk_creation_time_usecs: 1608621361685556 oplog_type: kVDiskOplog By looking at the problematic vdisk configuration, the cg_id parameters for the vdisk differ from the cg_id of the protected VM. nutanix@CVM$ vdisk_config_printer --id=244523387 Explanation: Upon protecting VMs inside a nearsync protection domain, every vdisk that is part of a consistency group is stamped with a unique cg_id, which Stargate uses to track the LWS snapshot operation. A vdisk can therefore be part of only a single consistency group at a time. By default, each VM is protected with an individual consistency group with the same name as the VM. ESXi supports sharing the same hard disk (vmdk) between multiple VMs with the multi-writer configuration. Since it is a single flat-vmdk file, it is a single vdisk from the Nutanix side. However, the VMs are protected independently in the PD. The snapshot operation for one of the VMs then fails with the "already protected" error, since the same vdisk is now part of two CGs.
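To compare the CG stamped on the shared vdisk against the protected VM's CG, the trailing id inside the cg_id block can be extracted from saved vdisk_config_printer output. A sketch (the sample line reuses the values above):

```shell
# Demo: extract the cg_id "id" field from captured vdisk_config_printer output,
# so the value can be compared across the two vdisks/CGs.
cat > /tmp/vdisk_cfg.txt <<'EOF'
cg_id { originating_cluster_id: 4471762457705869272 originating_cluster_incarnation_id: 1593288870990721 id: 240799054 }
EOF
sed -n 's/.*cg_id {.* id: \([0-9]*\).*/\1/p' /tmp/vdisk_cfg.txt   # -> 240799054
```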
Explicitly protect all the VMs (that use the multi-writer disk) within the same consistency group
KB4401
Cluster conversion from ESXi to AHV and rollback may hang or fail if LACP/Etherchannel is used
If the ESXi host is using LACP or Etherchannel, cluster conversion and rollback may hang or fail.
If the ESXi host is using LACP or Etherchannel, cluster conversion and rollback may hang or fail. Symptoms: Cluster conversion from ESXi to AHV fails at 54% with the error shown in the screenshot below: Rollback hangs: If you connect to the node via IPMI and open the console, you may notice that it has booted into Phoenix and is unable to reach the Foundation node. Check the status of the conversion. The node is in the process of 'Imaging node': nutanix@cvm:~$ convert_cluster_status Error in genesis.out log: 2017-04-11 15:43:28 INFO cluster_manager.py:3207 convert_cluster_foundation: Starting foundation service In the convert_cluster.out log, you may see a connection failure to the node that is currently being imaged: 2017-04-11 15:43:35 INFO foundation_rest_client.py:133 Error making request: http://X.X.X.59:8000/foundation/progress to foundation. Retrying. Ret code None. When checking the VMware port Load Balancing policy, you may notice that it is set to 'Route based on IP hash' and Etherchannel is configured on the switch. Refer to the VMware documentation for the installed ESXi version to find the exact steps on how to check it.
Please review the Requirements and limitations for in-place hypervisor conversion section of the Prism Web Console Guide (example: AOS 5.10 https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v510:man-cluster-conversion-requirements-limitations-r.html#nref_amd_hlp_k5) for information about limitations. Perform the following steps to recover the host: Mount bootbanks and restore the boot configuration: mkdir /sda5 Modify the port Load Balancing mode to 'Route based on the originating virtual port ID' and place the adapter into Active/Standby mode. Refer to the VMware documentation for the installed ESXi version to find the exact steps on how to change it. Remove the Etherchannel configuration on the switch. Restart the cluster conversion.
KB10503
LCM Root task % goes to 100% during the download phase
LCM Root task % goes to 100% during the download phase and then comes down to a lower % just after that
The LCM upgrade root task jumps to 100% immediately after the download operation completes successfully, but after a while it falls back to the correct upgrade completion %. For example:
Explanation: This is a cosmetic issue that does not have any impact on the upgrade process. Every time an LCM inventory runs, it checks for an available framework version and creates a task to update to the newer version if there is one. As soon as the LCM framework finishes updating, the task finishes at 100%, and then a subtask is created to start the inventory operation, so the % goes down to show the current progress of the inventory subtask. The sequence of these events can be seen in the genesis logs from the LCM leader shared below: 2022-11-26 09:03:19,421Z INFO 72127216 zeus_utils.py:614 check upload intent thread: I (xx.xx.xx.xx) am the LCM leader Please wait for the actual LCM upgrade root task to complete. Nutanix Engineering is aware of the issue and is working on a fix in a future release.
KB14834
Error "Unable to obtain list of VLANs: Learning flows are not added properly or uplink eth3 is not part of bridge br0." seen on AHV Cluster After AOS Upgrade from 5.20.x to 6.5.x
After AOS upgrade, the error "Unable to obtain list of VLANs: Learning flows are not added properly or uplink eth3 is not part of bridge br0." is displayed.
The following error appears after upgrading AOS from 5.20.x to 6.5.x. Detailed information for bond_uplink_usability_check: Verification steps: Confirm that the uplink configuration is the same for all CVMs with the following command output: nutanix@cvm$ allssh manage_ovs show_interfaces Verify KB-8185 http://portal.nutanix.com/kb/8185 for details on bond_uplink_usability_check.Run NCC check to see if it returns WARN status for ahv_version_check: Detailed information for ahv_version_check:
Check the Interoperability matrix https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix/hardware for compatibility and interoperability information. This issue may occur due to an incompatible AHV version. If AOS is upgraded several times without bringing the AHV version up to a compatible version between each AOS upgrade, the running AHV version may fall behind the installed AOS version's compatibility and lead to several issues, including but not limited to the issue described in this article. Note: Older AOS/AHV versions did not include the "bond_uplink_usability_check" check, which is why it may begin to be seen from AOS 6.5.x onwards. Perform an LCM inventory and confirm whether an AHV version is available for an upgrade. If there is, proceed with the upgrade after the usual pre-checks. Once AHV is upgraded, this check will no longer throw an ERR and will operate as expected per KB-8185. If there is not a clear path to upgrade AHV via LCM, contact Nutanix Support for further assistance.
KB14054
VMs running on AHV clusters and using a GPU type of passthrough may fail to detect GPU after an AHV upgrade
After an AHV upgrade to version 20220304.242 or 20220304.336, VMs that are configured with a GPU type of passthrough may fail to detect GPU.
After an AHV upgrade to version 20220304.242 or 20220304.336, VMs that are configured with a GPU type of passthrough may fail to detect the GPU: VMs running Linux will fail to detect the GPU. VMs running Windows may restart unexpectedly with the following error: Stop code: VIDEO TDR FAILURE Note: VMs configured with a GPU type of vGPU are not impacted by this Field Advisory. Use any of the following approaches to identify if the cluster is affected. Prism Element: To determine the AHV version of the cluster, log into Prism Element and check the "Hypervisor Summary" widget. To determine if VMs with a GPU type of passthrough are present, log into Prism Element, go to the VM page and select the VM. If the selected VM is configured with a GPU type of passthrough, the following information can be seen in the "VM DETAILS" section: CLI: Connect to any CVM in the cluster and run the following command to check the AHV version: nutanix@cvm:~$ hostssh "uname -r" Run the following command to identify VMs with a GPU type of passthrough: nutanix@cvm:~$ acli vm.get \* | gawk '$0 !~ /^[[:space:]{}]/ {printf "\n%-40s ", $1} $0 ~ / gpu:/ {printf "%-40s, ", $0} $0 ~ / vgpu_uuid:/ {printf "%-40s, ", $0}' | grep gpu | grep -v vgpu | awk '{print $1}'
If VMs have been identified with a GPU type of passthrough, avoid upgrading to the affected AHV.This issue is resolved in: AOS 6.5.X family (LTS): AHV 20220304.342, which is bundled with AOS 6.5.2AOS 6.6.X family (STS): AHV 20220304.10055, which is bundled with AOS 6.6.2 Please upgrade both AOS and AHV to versions specified above or newer.If AHV has been upgraded and the issue is observed, then the following workaround is available for both Linux and Windows guest OSes: Power off affected VMs.Connect to any CVM in the cluster and run the following command for every affected VM: nutanix@cvm:~$ acli vm.update <vm-name> extra_flags="enable_hyperv_stimer=off;enable_hyperv_vapic=off;enable_hyperv_relaxed=off;enable_hyperv_spinlocks=off;enable_hyperv_synic=off" Reboot the AHV host where the affected VM will be running. Refer to the Rebooting an AHV or ESXI Node in a Nutanix Cluster https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-request-reboot-wc-t.html chapter of Prism Web Console Guide for more details.Power on affected VMs. Warning: Applying this workaround may impact Windows guest OS performance.Perform the following steps to confirm if the workaround was successfully applied: Connect to any CVM in the cluster and run the following command to get VM configuration: nutanix@cvm:~$ acli vm.get <vm-name> Sample output: nutanix@cvm:~$ acli vm.get vm Refer to KB 4901 http://portal.nutanix.com/kb/4901 to learn about extra_flags limitations. Perform the following steps to revert the workaround: Power off affected VMs.Connect to any CVM in the cluster and run the following command: nutanix@cvm:~$ acli vm.update <vm-name> extra_flags="enable_hyperv_stimer=;enable_hyperv_vapic=;enable_hyperv_relaxed=;enable_hyperv_spinlocks=;enable_hyperv_synic=" Run the following command to get VM configuration: nutanix@cvm:~$ acli vm.get <vm-name> In the resulting output, no "extra_flags" sections should be present.Power on affected VMs.
KB6064
Dell nodes with NVMe drives fail to Foundation, expand cluster fails to discover NVMe drives, NVMe drives not added to cluster
Dell nodes which contain NVMe devices may fail to Foundation a new cluster or when expanding an existing cluster, the NVMe drives on the newly added node are not discovered properly.
When attempting to Foundation a new cluster of Dell systems with NVMe drives, Foundation may fail to create the cluster. If a cluster already exists, or was created outside of using Foundation and an expand cluster operation is attempted to add a new node into the cluster, the operation will complete, but you might notice that the NVMe drives are not mounted on the CVM, do not exist in "ncli disk ls", and are not listed in zeus_config_printer. The command "sudo nvme list" will show the proper nvme drives, however using the command list_disks will fail to display just the nvme devices and display the following error with its output: WARNING:root:Timeout executing ssh -q -o CheckHostIp=no -o ConnectTimeout=15 -o StrictHostKeyChecking=no -o TCPKeepAlive=yes -o UserKnownHostsFile=/dev/null -o PreferredAuthentications=publickey root@192.168.5.1 /etc/init.d/DellPTAgent restart: 30 secs elapsed Restarting the CVM or restarting Hades should produce similar output in the hades.out log, where we see Hades attempt to turn off the LED and then fail to query PTAgent: 2018-08-27 20:14:54 INFO disk_manager.py:4221 Led off request for disks ['/dev/nvme0n1'] You may also notice that the NVMe devices are not showing as disks in the hardware diagram view.If the drives are mounted on the CVM but not visible in Prism and not being used by the cluster, refer to scenario 2.
Scenario 1: Check the version of the PTAgent running on the host. Per Dell support, there is an issue with agent versions prior to 1.7-4, where the agent fails to be queried successfully. Nutanix leverages this agent when running a Foundation-based cluster build, as well as an expand cluster operation. To check the version: ESX: esxcli software vib list | grep -i dell Example output: dellptagent 1.7-4.r39fb0c9 Dell PartnerSupported 2018-08-27 AHV: rpm -qa | grep -i dell If the version is less than 1.7-4, PTAgent needs to be updated. The easiest way to update is through LCM. Dell can assist with updating manually if necessary. Scenario 2: In some cases, PTAgent is not installed or is missing on the host, which will result in the following output in hades.out: ERROR nvme_disk.py:537 Error timed out received for request https://192.168.5.1:8086/api/PT/v1/host/drives The solution is to run an LCM update, which will install the Dell PTAgent, as it is a requirement to add NVMe drives to the cluster. To verify that the agent can be queried successfully from a CVM, use the command below: curl -s -k https://192.168.5.1:8086/api/PT/v1/host/agentinfo Afterwards, a restart of the CVM is required (ensure data resiliency is OK before restarting): cvm_shutdown -r now To ensure data resiliency is OK: ncli cluster get-domain-fault-tolerance-status type=node
KB7546
'Failed to upgrade packages' error during AHV hypervisor upgrade
AHV upgrade may fail with 'Failed to upgrade packages' error due to multiple reasons described in this KB.
AHV upgrade may fail with a 'Failed to upgrade packages' error, as seen in the output of the host_upgrade_status command. nutanix@CVM:~$ host_upgrade_status This situation may happen due to one of the following reasons: Issue 1: /home/nutanix/data/logs/host_upgrade.out log contains the following errors: rpmdb: /var/lib/rpm/Requirename: unexpected file type or format This indicates that /var/lib/rpm/Requirename may be corrupted. Issue 2: /home/nutanix/data/logs/host_upgrade.out log contains the following errors: rpmdb: /var/lib/rpm/Requirename: unexpected file type or format This indicates that /var/lib/yum/history/history-<date>.sqlite may be corrupted. Issue 3: /home/nutanix/data/logs/host_upgrade.out log contains the following errors: rpmdb: PANIC: fatal region error detected; run recovery This indicates that /var/lib/rpm/_db.XXX may be corrupted. Issue 4: The output of the host_upgrade_status command shows a failure on the host after entering maintenance mode before reboot (observed during an upgrade of AHV from 20170830.301 to 20170830.337): nutanix@NTNX-18FMXXXXXXX2-B-CVM:144.xx.xx.x04:~$ host_upgrade_status /home/nutanix/data/logs/host_upgrade.out on the local CVM of the affected host: 2020-07-11 14:06:20 INFO host_upgrade_common.py:179 Updating host upgrade progress monitor index 5 to 85 /var/log/upgrade_config.log on the affected host: 12 Oct 13:20:57 Running puppet /var/log/upgrade_config-salt.log on the affected host: 2020-07-11 20:38:01.282 [INFO ] Running state [/bin/rpm --setugids libcgroup; /bin/rpm --setperms libcgroup] at time 20:38:01.281844
AHV 20220304.242 and newer contain fixes for ENG-487803 and ENG-465199, which fix most of the known issues that cause RPM DB corruption. If an issue happens on the latest AHV release, collect the log bundle and perform a full RCA.
KB15102
'unauthorized' error while launching the Prism Central UI
It has been observed that during certain operations, Prism Central msp PODs may become unresponsive, causing the UI to be unreachable and unable to login.
It has been observed that during certain operations, Prism Central MSP pods may become unresponsive, causing the UI to be unreachable and preventing login. Log in to the PC VM and run the following commands to determine whether you are experiencing this behavior. Pods crashing: PCVM:~$ allssh "sudo kubectl -s 0.0.0.0:8070 -n ntnx-base get po -o wide" PCVM:~$ sudo kubectl -n ntnx-base logs -f iam-user-authn-648fb8fc45-748zh Logs to check to determine if the FS used by the crash-looping pods is in a read-only state. Pods were failing with the below error: PCVM:~$ sudo kubectl get events -n ntnx-base | grep cape-yxll-7bbb78b66b-xwz67 We also see fsck errors in the dmesg output: PCVM:~$ dmesg -T | grep -i EXT4-fs CSI logs from /var/log show the below errors: {"log":"2023-03-31T13:17:18.103Z iscsi_util.go:389: [INFO] iscsi: target iqn.2010-06.com.nutanix:ntnx-k8s-f9651565-819b-4552-95b5-9c90eff63bf1-tgt0 devicePath /dev/dm-1\n","stream":"stderr","time":"2023-03-31T13:17:18.104175868Z"}
1) Check the device corruption error count per node: nutanix@NTNX-X.X.X.X-A-PCVM:~$ allssh 'for i in `ls /sys/fs/ext4/*/errors_count`; do echo $i; cat $i; done' 2) To run fsck for a registry running on a multipath device, get the volume name: nutanix@NTNX-X.X.X.X-A-PCVM:~$ docker volume ls|grep registr 3) Check the mount point entry: nutanix@NTNX-X.X.X.X-A-PCVM:~$ mount|grep registry-8d71bb3f-2535-4cd8-6baa-1983f3bd8356 4) Get the mounted device name (sudo ls -l /dev/mapper is a shortcut). For this PC it is /dev/mapper/mpatha: nutanix@NTNX-X.X.X.X-A-PCVM:~$ ls -l /dev/mapper/mpatha 5) So /dev/sdj and /dev/sdl are the two devices. For these devices, we need to get the wwid: nutanix@NTNX-X.X.X.X-A-PCVM:~$ sudo /lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdj 6) The above wwid needs to be added to the exclude list in /etc/multipath.conf. This is required because otherwise multipathd will be actively using the underlying device and there is no way to run fsck. This is the key step. nutanix@NTNX-X.X.X.X-A-PCVM:~$ sudo iscsiadm -m session -P 3 > /tmp/iscsiadm_tmp.txt 7) Now grep for the target name and the Current Portal IP corresponding to the devices (for this customer, sdj and sdl). Sdj: We need all of the above details before unmounting. 8) Stop msp_controller: nutanix@NTNX-X.X.X.X-A-PCVM:~$ allssh "genesis stop msp_controller" 9) Make sure MSP is not running on any PC: nutanix@NTNX-X.X.X.X-A-PCVM:~$ allssh "genesis status|grep msp_controller" 10) Stop the registry service. Log in to the PC where the registry is running: nutanix@NTNX-X.X.X.X-A-PCVM:~$ allssh "systemctl status registry" 11) Edit /etc/multipath.conf: nutanix@NTNX-X.X.X.X-A-PCVM:~$ sudo cat /etc/multipath.conf 12) Restart multipathd: nutanix@NTNX-X.X.X.X-A-PCVM:~$ sudo systemctl restart multipathd 13) From the PE side: Step 1: Get the VG info: nutanix@NTNX-X.X.X.X-A-CVM:~$ acli vg.get registry-8d71bb3f-2535-4cd8-6baa-1983f3bd8356 Step 2: Expose the registry VG with attach external: nutanix@NTNX-X.X.X.X-A-CVM:~$ acli vg.attach_external registry-8d71bb3f-2535-4cd8-6baa-1983f3bd8356 initiator_name=iqn.1994-05.com.redhat:444351c3a3f3 14) From the PC side: discover the db, iSCSI login, lsscsi -t. Step 1: Discover: nutanix@NTNX-X.X.X.X-A-PCVM:~$ sudo iscsiadm -m discoverydb -t st -p 10.0.128.78:3260 --discover Step 2: iSCSI login: nutanix@NTNX-X.X.X.X-A-PCVM:~$ sudo iscsiadm --mode node --targetname iqn.2010-06.com.nutanix:63c938fb83097e7f3a06d207b2f6bd96070f5cb2cc2f360fff343426362254fd:nutanix-docker-volume-plugin -p 10.0.128.78:3260 --login Step 3: lsscsi -t: nutanix@NTNX-X.X.X.X-A-PCVM:~$ lsscsi -t 15) Now sdj is a regular device that multipathd no longer monitors, so fsck can be run: nutanix@NTNX-X.X.X.X-A-PCVM:~$ sudo fsck -y /dev/sdj 16) Once fsck has run, make sure the device entry is removed from the exclude list, then restart multipathd: nutanix@NTNX-X.X.X.X-A-PCVM:~$ systemctl restart multipathd
KB16990
Flow Virtual Networking (FVN): Stuck vnet deletion in hermes generating alert for Advanced Networking Controller internal failure rate is excessive
Prism Central may repeatedly raise the alert "Advanced Networking Controller internal failure rate is excessive" because of stuck deletion of a vNET intent which fails because it still contains a routing policy.
For deployments of FVN Network Controller 4.x and earlier, Prism Central may raise an alert every 12 hours for "Advanced Networking Controller internal failure rate is excessive" because of a stuck virtual network deletion. Alerts for this reason are typically auto-resolved but repeat every 12 hours until the underlying cause is addressed by Support. If this is the only type of operation failure in the hermes logs, the issue can be considered cosmetic. Hermes logs show the deletion attempt failing. Logs are found in /var/log/hermes/hermes.log on the hermes container, or on the hosting Prism Central VM at /var/log/ctrlog/default/hermes/anc-hermes_Deployment/hermes.log 2024-05-16 13:11:13.184 0x7fa3302ab180 ERROR hermes.intent.base_intent_op:277 Failed <NeutronRouterOp(00000000-0000-0000-0000-000000000000, deleting)>: Router deletion for virtual network failed as routing policies exist in the virtual network The UUID displayed in the ERROR message is the UUID of the virtual network. Ensure this virtual network does not exist in the Atlas config using atlas_cli: nutanix@PCVM:~$ atlas_cli network.list Use the UUID from the hermes ERROR message in this curl command to query the virtual_network details from the anc-network-service. The error "Router deletion for virtual network failed as routing policies exist in the virtual network" is shown, and the status of the virtual_network is "DELETING". Replace "00000000-0000-0000-0000-000000000000" with your virtual network UUID in the command below. nutanix@PCVM:~$ curl -k -X GET --cacert /home/certs/ca.pem --cert /home/certs/AtlasService/AtlasService.crt --key /home/certs/AtlasService/AtlasService.key 'https://anc-hermes-service.default.prism-central.cluster.local:4801/v1/virtual_network/00000000-0000-0000-0000-000000000000'
In Flow Virtual Networking 4.0 and earlier, it is possible to encounter this scenario where a vNET was previously deleted, but the underlying routing policy is not getting deleted and this blocks successful deletion of the virtual network in hermes. Connect to the anc-mysql-0 container to verify the rule_list for this routing policy is an empty set. To accomplish this: Connect to the anc-mysql-0 container using kubectl: nutanix@PCVM:~$ sudo kubectl exec -it anc-mysql-0 bash Obtain the mysql password for root root@anc-mysql-0:/# cat /etc/secrets/mysql_root_secret && echo Copy the returned password to your clipboard for the next step. Authenticate into mysql as root. When prompted for password, paste the password obtained in the previous step and hit [ENTER] root@anc-mysql-0:/# mysql -u root -p Change context to hermes and verify the routing_policy rule_list (indicated by the 'spec' field) is empty using the same virtual network ID returning the error. Use "exit" to exit mysql and the container. MariaDB [(none)]> use hermes; WARNING: Support, SEs and Partners should never make alterations to the Flow Virtual Networking/hermes database without guidance from Engineering or an STL. Consult with a Support Tech Lead (STL) and review the checks above before proceeding with next steps. The workaround involves sending an API command to delete the routing policy from the hermes database. If all the above conditions were checked and showed the results described in this KB, we can say that the alerting is caused by a previously deleted virtual network repeatedly retrying the deletion step, but failing because the attached default routing policy failed to delete first. Our next step is to remove that routing policy so the pending deletion can finish. Assuming there are no other failing operations recurring in the network controller, this should stop the alerting. With your customer's approval, proceed with the clean-up. 
To clear the condition, send the DELETE command via curl to remove the routing policy. This will allow the pending vnet deletion to complete. Insert the UUID captured from the hermes log ERROR message as the <vpc_uuid> in the curl command below. nutanix@PCVM:~$ curl -k -X DELETE --cert /home/certs/AtlasService/AtlasService.crt --key /home/certs/AtlasService/AtlasService.key 'https://anc-hermes-service.default.prism-central.cluster.local:4801/v1/virtual_network/<vpc_uuid>/routing_policy/3276' When successful, the command returns no output. Verify success by reviewing the hermes log again. You should find the deletion logged as running and then finished with no errors or failures. 2024-06-06 07:26:46.743 0x7f2bb6394040 DEBUG hermes.intent.base_intent_op:253 Running <NeutronRouterOp(00000000-0000-0000-0000-000000000000, deleting)> ... 2024-06-06 07:26:46.748 0x7f2bb6394040 DEBUG hermes.intent.base_intent_op:279 Finished <NeutronRouterOp(00000000-0000-0000-0000-000000000000, deleting)> When applying this workaround, please add a short note to NET-18602 https://jira.nutanix.com/browse/NET-18602 indicating the scenario and that the workaround was applied.
KB13742
VM on an AHV host may crash and become unmanageable during VG attach/detach workflows.
VM on an AHV host may crash and become unmanageable during VG attach/detach workflows.
VM on an AHV host may crash and become unmanageable during VG attach/detach workflows. This is most likely to happen on NDB (formerly Era) database VMs that sequentially attach and detach multiple VGs for backup purposes. The following AHV versions are affected: 20190916.xxx releases: 20190916.551 and later; 20201105.2xxx releases: 20201105.2030 and later; 20201105.30xxx releases: 20201105.30007. The affected VM shows as powered on in the Prism UI but is unresponsive. Migrating the VM or putting the host into maintenance mode gets stuck. Searching the Acropolis leader log for the VM UUID shows multiple VG attach tasks, the last of which is stuck: nutanix@CVM:~$ grep 001dd1d4-7434-4367-bb54-f29afa90f7c3 data/logs/acropolis.out.* | less VG attach task traceback: nutanix@CVM:~$ less data/logs/acropolis.out The VM QEMU log shows the VM has crashed: [root@ahv]# less /var/log/libvirt/qemu/001dd1d4-7434-4367-bb54-f29afa90f7c3.log But Acropolis is unaware, so it does not restart the VM. The iSCSI log might show: [root@AHV ~]# less /var/log/iscsi_redirector
This issue is resolved in: AOS 6.0.X family (STS): AHV 20201105.30100, which is bundled with AOS 6.0.2.3AOS 6.5.X family (LTS): AHV 20201105.30398, which is bundled with AOS 6.5AOS 6.5.X family (LTS): AHV 20220304.242, which is compatible with AOS 6.5.1 Please upgrade both AOS and AHV to versions specified above or newer.Workaround Verify cluster health to make sure it can tolerate one node being down. Follow the steps described in Verifying the cluster health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html chapter of the AHV Administration Guide.Evacuate running VMs from the affected host manually or put a host into maintenance mode. Refer to the Putting a node into maintenance mode https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-node-maintenance-mode-put-ahv-t.html chapter in AHV Administration Guide for more details.Reboot the affected host.Exit the host from the maintenance mode. Refer to the Exiting a node from the maintenance mode https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-node-maintenance-mode-exit-ahv-t.html chapter in AHV Administration Guide for more details.
KB13542
Nutanix Database Service | Provision from backup is failing with Windows error "Operating system error 64(The specified network name is no longer available.)" on the secondary node of an AG database
This article describes an issue where provision from backup is failing with Windows error "Operating system error 64(The specified network name is no longer available.)" on the secondary node of an AG database.
Note: Nutanix Database Service (NDB) was formerly known as Era. Provision from backup is failing with the following Windows error on the secondary node of an AG database because SMBv2 is enabled on the VM. Operating system error 64(The specified network name is no longer available.) This is because NDB creates the database on the primary and then takes a backup, which is used to create the database on secondary nodes. This backup is placed in an SMB share, which is shared with secondary nodes. The failure is because SMBv2 does not allow guest access to SMB shares.
Disable SMBv2 by setting the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\SMB2 to 0. You can further enable SMBv1 by setting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\SMB1 to 1.
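As a convenience, the two registry values named above can be set from an elevated PowerShell session on the database VM. This is an illustrative sketch of the registry edit described in this KB, not NDB tooling; a restart of the Server service (or a reboot) is needed for the change to take effect.

```powershell
# Illustrative sketch: apply the registry values described in this KB.
# SMB2 = 0 disables SMBv2; SMB1 = 1 enables SMBv1.
$params = "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"
Set-ItemProperty -Path $params -Name SMB2 -Value 0 -Type DWord
Set-ItemProperty -Path $params -Name SMB1 -Value 1 -Type DWord
# Restart the Server service so the new values take effect.
Restart-Service -Name LanmanServer -Force
```

Note that disabling SMBv2 and enabling SMBv1 reduces the security posture of the VM; revert the change once the provision-from-backup operation has completed if your security policy requires it.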
KB5581
NCC Health Check: cpu_unblock_check
The NCC health check cpu_unblock_check identifies and advises about stale/hung cpu_unblock processes.
The NCC health check cpu_unblock_check checks and notifies if there are any stale cpu_unblock processes running, which can impact cluster performance and, if left unchecked, can affect storage availability. This check will generate a WARN if there are more than 2 cpu_unblock processes. This check will generate a FAIL if there are more than 10 cpu_unblock processes. Running the NCC check The check can be run as part of the complete NCC health checks by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks system_checks cpu_unblock_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. Sample output For status: PASS Running : health_checks system_checks cpu_unblock_check For status: WARN Node x.x.x.x: For status: FAIL Detailed information for cpu_unblock_check: Output messaging: Description: Check that there are no stale cpu_unblock processes running. Causes of failure: Zookeeper restarting frequently on cluster node. Resolutions: Kill all cpu_unblock processes and restart cluster services on node. Impact: Cluster performance may be significantly degraded. Alert Title: Multiple cpu_unblock processes running. Alert Message: Multiple cpu_unblock processes are running on svm_ip. Schedule: This check is scheduled to run every hour, by default. Number of failures to alert: This check will generate an alert after 1 failure.
Troubleshooting: If this check produces a WARN or a FAIL result, then due to the nature of the underlying issue, contact Nutanix Support to troubleshoot the problem further. You can engage Nutanix Support at https://portal.nutanix.com
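For quick manual triage before engaging Support, the condition this check alerts on can be approximated on a CVM. The snippet below is an illustrative sketch using the WARN (more than 2) and FAIL (more than 10) thresholds stated above; it is not the NCC implementation itself.

```shell
# Illustrative only: count running cpu_unblock processes and map the count
# onto the NCC thresholds (WARN if more than 2, FAIL if more than 10).
count=$(ps -eo comm= | grep -c '^cpu_unblock' || true)
if [ "$count" -gt 10 ]; then
  echo "FAIL: ${count} cpu_unblock processes"
elif [ "$count" -gt 2 ]; then
  echo "WARN: ${count} cpu_unblock processes"
else
  echo "PASS: ${count} cpu_unblock processes"
fi
```

Regardless of the count observed, do not kill the processes yourself; use the output only to confirm the symptom before contacting Nutanix Support.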
KB10776
NGT installation fails on Linux VM with Permission denied error, PermissionError Operation not permitted
NGT installation fails on Linux VM with Permission denied error.
An attempt to install NGT on a Linux VM may fail with permission errors. Scenario 1: The installation attempt fails with the "Permission denied" error: [root@linux_vm ~]# /mnt/installer/linux/install_ngt.py Scenario 2: The installation attempt fails with the "PermissionError: [Errno 1] Operation not permitted" error. Starting with RHEL 8, Red Hat introduced fapolicyd, a daemon used to allow or deny applications based on rules. On a system with fapolicyd installed and running, NGT installation will fail with the below error: [root@rhel8]# ./install_ngt.py In case of this error, check if fapolicyd is running: [root@rhel8]# systemctl status fapolicyd -l
Scenario 1: NGT uses the /tmp file system for the installation and runs Python from it. If the /tmp file system is mounted with the "noexec" option, any attempt to execute anything in /tmp will result in "Permission denied". By default, the /tmp file system does not have the "noexec" option, but sometimes custom installation scripts apply it. It can also be applied by configuration tools such as Ansible, Puppet, Salt, etc. To verify if the noexec option is applied, run the following command on the Linux VM: [root@linux_vm ~]# mount | grep /tmp To install NGT, the /tmp file system will need to be remounted with the "exec" option: [root@linux_vm ~]# mount -o remount,exec /tmp Then, NGT can be installed in the standard way https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_19:man-nutanix-guest-tool-configuration-linux-t.html. If necessary, after NGT is successfully installed, /tmp can be remounted again with the "noexec" option: [root@linux_vm ~]# mount -o remount,noexec /tmp Note that remounting a file system with the mount command is not persistent; if the VM is rebooted, it will revert to the settings from /etc/fstab or the systemd mount unit. To make the change persistent, adjust the /etc/fstab or systemd mount options. For NGT installation, it is sufficient to remount the /tmp FS in a non-persistent way and install NGT afterwards. Scenario 2: Stop fapolicyd: [root@rhel8]# systemctl stop fapolicyd Perform the installation: [root@rhel8]# ./install_ngt.py After installation is completed, restart fapolicyd: [root@rhel8 ~]# systemctl start fapolicyd Instead of stopping the service, you can add /tmp to the list of trusted folders. More details can be found in the Red Hat documentation on fapolicyd: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/assembly_blocking-and-allowing-applications-using-fapolicyd_security-hardening#doc-wrapper
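As noted above, mount options only persist across reboots when they come from /etc/fstab (or the equivalent systemd mount unit). The fragment below is purely illustrative (the UUID is a placeholder, not from this KB); the presence or absence of "noexec" in the options column is what controls the behavior after a reboot:

```
# /etc/fstab (illustrative placeholder entry for /tmp):
# with noexec enforced:
UUID=<tmp-fs-uuid>  /tmp  ext4  defaults,nosuid,nodev,noexec  0  0
# without noexec (execution allowed, as needed during NGT install):
UUID=<tmp-fs-uuid>  /tmp  ext4  defaults,nosuid,nodev  0  0
```

After editing /etc/fstab, the change takes effect on the next mount or reboot; for the NGT install itself, the temporary "mount -o remount,exec /tmp" shown above is sufficient.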
KB14464
Foundation on ESXi cluster with BIOS (UEFI) setting causes node to fail imaging on Dell hardware
This article describes an error which occurs on new builds of ESXi multi-node clusters. This error occurs due to an invalid BIOS setting. The BIOS setting causes the node to fail imaging procedure when building the cluster. Change the BIOS setting to BIOS instead of UEFI, then restart the image of that node in the build.
The Foundation setup screen shows the following for the node that fails imaging: "fatal: Waiting for installer to boot up". Within the Foundation logs, you will see the following signature: Traceback (most recent call last): Note: Dell supports UEFI boot mode on Dell 15G and later platform models only.
Log in to iDRAC on the node that is failing to image and check whether the boot mode is set to "UEFI"; if so, change it to "BIOS". After changing the setting, reboot the node to make the setting persistent. Once the BIOS configuration is changed, retry imaging. If you encounter any issues with changing the BIOS settings, contact Dell to understand how to change the BIOS setting on the host. Reach out to Nutanix Support http://portal.nutanix.com in case you have any questions.
KB12393
Installing the Network Gateway in a Dark Site for Flow Virtual Networking External Connectivity
The Network Controller in Flow Virtual Networking (FVN) leverages a Network Gateway appliance for extending VPC/overlay, VLAN and Cloud Network subnets outside of the PC-PE network environment via VPN, VTEP or BGP-capable router. When opting to use one of these technologies for external connectivity in a 'dark site' the automated deployment of the Network Gateway may not happen as expected and requires manual intervention.
The Flow Virtual Networking (FVN) https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide:ear-flow-nw-overview-pc.html Network Gateway appliance is deployed for three types of connections: Virtual Private Networks as VPN GatewaysVirtual Tunnel End Points as VTEP GatewaysBorder Gateway Protocol sessions as BGP Gateways The Network Controller in FVN leverages a Network Gateway appliance for extending VPC/overlay, VLAN or Cloud Network subnets outside of the Prism Central (PC)-Prism Element (PE) network environment via VPN, VTEP or BGP-capable router. When opting to use one of these technologies for external connectivity in a 'dark site,' the automated deployment of the Network Gateway may not happen as expected and requires manual intervention.In PC 'Infrastructure' application, under menu items 'Network & Security' / 'Connectivity', when 'Create Gateway' is selected and the wizard completed, PC attempts to reach the Nutanix Portal download URL as part of the automated Network Gateway deployment. In a 'dark site,' this external connectivity request is typically blocked/dropped by network security infrastructure (i.e. edge firewalls and/or routers outside of the Nutanix SDN) and fails. 
As such, the Network Gateway deployment task fails. A deployment attempt of the FVN Network Gateway at a 'dark site' will fail with the error message "Image task failed: Request failed on all registered AHV clusters", and the following API task signature: { The following may also be seen in the atlas.out log on the PCVM (/home/nutanix/data/logs/atlas.out): 2023-11-1 23:32:15,087Z WARNING gateway_create_task.py:608 Starting rollback for the VPN gateway create task Notice that the source URI from the signatures above shows the download URL pointing to the FQDN download.nutanix.com, which may not be reachable by PC in a 'dark site' setup: source_uri': u'http://download.nutanix.com/<component>/<version>/<filename>.qcow2' For more information on: Flow Virtual Networking (FVN) and the Network Controller, refer to: Flow Virtual Networking Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide:Nutanix-Flow-Virtual-Networking-GuideNetwork Gateway deployment, refer to: Installing or Upgrading the Network Gateway in a Dark Site https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide:ear-flow-net-darksite-upgrade-network-gateway-pc-t.htmlLife Cycle Manager (LCM) Dark Sites, refer to: Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide:Life-Cycle-Manager-Dark-Site-Guide
Nutanix Engineering is aware of the issue of dark sites being unable to automatically deploy the FVN Network Gateway appliance when external connectivity is configured in PC, and is working on a fix in a future release. If the dark site's external firewall/proxy/routing policies and change management allow it, traffic from the PCVM IPs can be permitted to directly access the external URI 'download.nutanix.com' for destination HTTP and HTTPS traffic, even temporarily for the purpose of the Network Gateway deployment; this may avoid the need for the workaround below, and the deployment can then be retried per Step 5. For a Network Gateway upgrade when it is already deployed, refer to: FVN Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide:Nutanix-Flow-Virtual-Networking-Guide / Identifying the Gateway Version https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide:ear-flow-nw-vpn-gateway-version-pc-r.html, which leverages the LCM Dark Site workflow. This workflow still requires some steps similar to below, except that an LCM Dark Site server with specific requirements and manually deployed packages is used, which allows LCM Inventory on the PCVM to detect the existing Network Gateway and communicate with an LCM Dark Site webserver to pull the image from there and upgrade the Network Gateway as part of an LCM upgrade plan. Refer to: Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide:Life-Cycle-Manager-Dark-Site-Guide As a workaround to the gateway deployment issue discussed in this article, in summary: the appropriate version of the Network Gateway bundle can be downloaded manually from the Nutanix Portal Downloads page and presented on an internal webserver within the dark site that is accessible by PC. 
With assistance from Nutanix Support, PC can be pointed towards this internally-hosted package and the Network Gateway appliance deployment retried. Details The Network Gateway bundle version to be downloaded will be dependent on which PC and FVN Network Controller versions are currently running. The version compatibility matrix found in the FVN Release Notes here https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-Flow-Virtual-Networking:top-bundled-software-flownet-r.html (select your PC version in the top-right drop-down menu) will provide the required mapping of PC / NC / NG versions to use. Review the Release Notes https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-Flow-Virtual-Networking:top-bundled-software-flownet-r.html/ Table 1. Network Controller and Network Gateway Versions and make note of the Deploys Network Gateway version value. This value informs the file to download in the next step.The Network Gateway bundles are available for manual download on the Nutanix Portal / Downloads / Flow Virtual Networking / (drop down box at top) Network Gateway, or click here https://portal.nutanix.com/page/downloads?product=flowVirtualNetworking&bit=Network%20Gateway (login required). Find the "Network Gateway ZIP ( Version: <DeploysNetworkGatewayversion from Step 1 above>), take note of the SHA256 value, and click the blue 'Download' button.Extract the ZIP file from Step 2, navigate into the extracted directory, and upload the vyos_<version>.qcow2 image file and vyos_<version>.metadata.json metadata file to a local web server that is accessible from the on-prem Prism Central in the dark site over HTTP. For this workaround, this web server does not need to explicitly be an LCM Dark Site configured web server, though, it can be. 
The above qcow2 file just needs to be accessible via an HTTP(S) URI from the PCVM(s) IP.* Engage Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/ (login required), as an SRE is required to manually edit certain sensitive PCVM configuration items to leverage the custom URI for the webserver and qcow2 file from step 3 *Create/Recreate any desired gateway(s) using the usual creation workflow. In PC 'Infrastructure' application, under menu 'Network & Security' / 'Connectivity', select 'Create Gateway' and complete the wizard. Refer to FVN Guide / Connectivity https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide-vpc_2023_3:ear-flow-nw-vpn-view-pc-r.html. Note: Also ensure that, despite being a dark site, any required Ports and Protocols are open as per Nutanix Portal / Downloads / Ports and Protocols / Flow Virtual Networking https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Flow%20Virtual%20Networking (see Ports Detail and Ports Diagram tabs) to ensure correct operation after a successful Network Gateway deployment.
KB16161
SMTP send test email fails with "SendAsDenied; xxx@abc.com not allowed to send as yyy@xyz.com" error
This KB is related to SMTP send test email failure where 2 different email addresses are used under SMTP configuration with Office 365.
When SMTP is configured with Office 365 and STARTTLS, the SMTP send test email can fail with the "SendAsDenied; xxx@abc.com not allowed to send as yyy@xyz.com" error. This occurs when the SMTP configuration uses two different email addresses, similar to the following example:
This issue occurs because 2 different email addresses are used in the SMTP configuration. As the error indicates, the email address in the "User" field of the SMTP configuration is not allowed to send emails as the email address in the "From Email Address" field. To resolve this issue, use the same email address in both the "User" and "From Email Address" fields of the SMTP configuration. If you would like to use a different email address in the "From Email Address" field, update the Office 365 settings to allow sending email on behalf of another user. Refer to the following Microsoft documentation: https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/give-mailbox-permissions-to-another-user?view=o365-worldwide#send-email-on-behalf-of-another-user https://learn.microsoft.com/en-us/microsoft-365/admin/add-users/give-mailbox-permissions-to-another-user?view=o365-worldwide#send-email-on-behalf-of-another-user
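To illustrate why the error occurs: in an SMTP submission, the authenticated identity (the "User" field) and the From header (the "From Email Address" field) are two independent values, and Office 365 rejects the message with SendAsDenied when they differ and the authenticated mailbox lacks SendAs rights. A minimal Python sketch with hypothetical addresses (the actual smtplib submission is shown commented out, as it requires a live Office 365 account and credentials):

```python
from email.message import EmailMessage

auth_user = "xxx@abc.com"   # hypothetical "User" field (SMTP AUTH identity)
from_addr = "yyy@xyz.com"   # hypothetical "From Email Address" field

msg = EmailMessage()
msg["From"] = from_addr     # header Office 365 checks against SendAs rights
msg["To"] = "recipient@xyz.com"
msg["Subject"] = "Test email from Nutanix cluster"
msg.set_content("SMTP configuration test")

# The actual submission would look like this (not executed here):
# import smtplib
# with smtplib.SMTP("smtp.office365.com", 587) as server:
#     server.starttls()
#     server.login(auth_user, password)  # authenticates as auth_user
#     server.send_message(msg)           # rejected with SendAsDenied if
#                                        # auth_user lacks SendAs on from_addr

# This mismatch between the two values is what triggers SendAsDenied:
print(msg["From"] != auth_user)
```

Setting both fields to the same address removes the mismatch, which is why keeping "User" and "From Email Address" identical resolves the error.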
KB8089
Inability to manage VMs on cluster due to malfunctioning AHV host
VM management operations like power on, off, clone, migrate may be failing from Prism or Citrix console when one of the AHV host does not behave as expected.
VM management operations like power on, off, clone, and migrate may fail from Prism or the Citrix console when one of the AHV hosts starts behaving in an erratic way. User VMs or CVMs may be missing from the VM list in Prism. AHV host malfunction can have several causes; some of them are described below as separate issues. Note: An HA event may not be triggered automatically in any of these cases. Issue 1 - File system corruption on host boot device A "Structure needs cleaning" error may be seen in the ~/data/logs/acropolis.out log on the Acropolis master: 2019-08-18 05:03:58 INFO base_task.py:464 Running task 5363e2d1-a8b8-43a8-94ae-13ad6dd603c7(VmSetPowerState 5f228eed-ccdf-46e6-8eaf-e59cb5b73262 kPowerOn) KB 2305 http://portal.nutanix.com/kb/2305 describes how to find the Acropolis master. Errors similar to the below can be found on the affected AHV host in /var/log/libvirt/libvirtd.log: 2019-08-18 13:00:15.503+0000: 19548: error : virNetSocketReadWire:1623 : End of file while reading data: Input/output error An "EXT4-fs error" message can be seen in /var/log/messages: 2019-11-22T06:40:02.826445+00:00 kernel: [357220.508895] EXT4-fs error (device sda1): ext4_iget:4221: inode #3671357: comm python: bad extra_isize (64528 != 256) Issue 2 - Host boot device in read-only state A "Read-only file system" error may be seen in the ~/data/logs/acropolis.out log on the Acropolis master: Traceback (most recent call last): KB 2305 http://portal.nutanix.com/kb/2305 describes how to find the Acropolis master. Errors similar to the below can be found in dmesg output on the host (this is only an example; actual errors may depend on hardware): ... The following error can be seen on the AHV host console: Failed to rotate /var/log/journal/<xxx>/system/journal : Read-only file system There may be multiple hung 'VmChangePowerState' and 'kVmSetPowerState' tasks as seen in ecli. Do not abort hung power-state-related tasks. 
Engage DevEx/Engineering if the cleanup of hung tasks is needed.Issue 3 - Acropolis master cannot connect to host via SSHHost may become unreachable via SSH.Few examples: Host can run out of memory. The following error may be seen in ~/data/logs/acropolis.out log on Acropolis master: 2019-01-23 12:06:56 ERROR ovs_br_manager.py:173 OVS error (10.0.98.45 list_bridge_info): [Errno 12] Cannot allocate memory KB 2305 http://portal.nutanix.com/kb/2305 describes steps how to find Acropolis master. Host may start responding slowly due to host boot device or other hardware failures.Powering on a VM from Prism fails with the error "ssh timed out during execution of create_local_port" Errors similar to mentioned below may be seen in ~/data/logs/genesis.out on CVM running on affected host: 2019-10-21 09:38:13 INFO hypervisor_ssh.py:32 Trying to access hypervisor with provided key... 2019-01-23 19:07:24 ERROR ipv4config.py:1625 Unable to get the KVM device configuration, ret 1, stdout , stderr /bin/bash: Cannot allocate memory Note: Host may still respond to pings, but SSH connections will fail.
Starting from AOS 6.1, if ext4 errors are detected or the host filesystem enters a read-only state, an HA event is triggered to prevent this issue from affecting the schedulability of VMs. Refer to the VM High Availability in Acropolis https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_1:wc-high-availability-acropolis-c.html chapter of the AHV Administration Guide for more details. Acli should allow performing VM management tasks while the cluster is in such a state. If HA was not automatically triggered and there are still VMs running on the affected host, arrange a maintenance window with the customer so those VMs can be powered on on a different host. Once the affected host is vacated of user VMs (or the customer has agreed to the expected downtime) and the cluster is fault tolerant, you can proceed with the workaround below. Follow the Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html chapter to make sure that the cluster can tolerate a node being down. Do not proceed if the cluster cannot tolerate the failure of at least 1 node. Check known issues that can lead to corruption: For nodes with SATADOM (G4/G5), review ISB 117 https://confluence.eng.nutanix.com:8443/display/STK/ISB-117-2020%3A++Boot-disk+file+system+integrity+issues+on+AHV+nodes+with+SATADOM+3IE3+S670330N+firmware, as this is one of the common reasons for SATADOM corruption. For G7 nodes with M.2, review KB 10229 http://portal.nutanix.com/kb/10229. If your case does not match any of the known issues, check the device health and perform a full RCA, as the reason for the corruption must be found. Collect logs and engage an STL or open a TH/ONCALL if help is needed. Do not reimage AHV unless the cause of the file system corruption is known. 
Perform the following steps to check SATADOM health: Put affected AHV host into the maintenance mode: nutanix@cvm$ acli host.enter_maintenance_mode <host IP> Put the CVM into the maintenance mode: nutanix@cvm$ ncli host edit id=<CVM ID> enable-maintenance-mode=true You can determine the ID of the CVM by running the following command: nutanix@cvm$ ncli host list Shutdown CVM and then AHV host: nutanix@cvm$ cvm_shutdown -P now root@ahv# shutdown -h now Download Phoenix https://portal.nutanix.com/#/page/Phoenixiso and follow steps from KB 4167 http://portal.nutanix.com/kb/4167 to check host boot device health. If check fails replace host boot disk.Run file system check: [root@phoenix ~]# fsck -y /dev/<partition name> To find partition name please run the following command: [root@phoenix ~]# ls /dev/sd* If fsck completes without errors disconnect ISO and reboot host. Exit CVM from maintenance mode and make sure that all services are up and running: From any other CVM in the cluster, run the following command to exit the CVM from the maintenance mode: nutanix@cvm$ ncli host edit id=<host-ID> enable-maintenance-mode=false Verify if all processes on all the CVMs are in the UP state: nutanix@cvm$ cluster status | grep -v UP If all services are up run the following command to exit the AHV host from the maintenance mode.: nutanix@cvm$ acli host.exit_maintenance_mode <host IP>
KB14126
test_ptagent_status (In Draft)
This KB will be used in NCC precheck - DELL-2457
Please note: LCM Precheck test_ptagent_status was not introduced in DELL-2427 https://jira.nutanix.com/browse/DELL-2427. Instead, the engineering team is working on writing an NCC check for the same via DELL-2457 https://jira.nutanix.com/browse/DELL-2457. This KB number is used with that NCC check. Hence, this KB is being kept internal and will be revamped with DELL-2457 https://jira.nutanix.com/browse/DELL-2457 as an NCC pre-check KB. Please do not use this KB further. test_ptagent_status tests whether PTAgent is in a running state before an upgrade on Dell 13G, 14G, and 15G systems. The pre-check will fail if the PTAgent service is not running. Failure message: PTAgent Service is not running on hosts [u'x.x.x.x']. Please check KB 14126
Please note: For AOS 6.5.3 or higher, the above pre-check is not required. Check the PTAgent status: el6 AHV: [root@ahv~]# /etc/init.d/DellPTAgent status el7 AHV: [root@ahv~]# service DellPTAgent status ESXi: [root@ESXi:~] /etc/init.d/DellPTAgent status Restart the PTAgent process from the host. AHV: [root@ahv~]# systemctl stop DellPTAgent ESXi: [root@ESXi :~] /etc/init.d/DellPTAgent stop
KB4980
AOS Upgrade fails on node with "Input/output error" in finish.out
AOS Upgrade fails on node with "Input/output error" in finish.out.
AOS Upgrade fails on one node. finish.out shows entries similar to below: 2017-11-27 01:40:02 INFO finish:589 SVM is running on KVM, need to update the SVM xml config file as part of the upgrade SSH to the host does not seem to work and shows "Input/output error" as well: nutanix@cvm$ ssh root@192.168.5.1
Migrate all the VMs off the host, and reboot the host: If the host comes up successfully, restart genesis service on the node and the upgrade should resume automatically.If the host fails to come up, verify the SATADOM status. It might have to be replaced.Example of messages seen on host when the SATADOM goes bad:
KB14699
Supported Conversion Options for Hybrid to AF nodes (NVMe AF or NVMe+SSD AF)
Supported Conversion Options for Hybrid to AF nodes (NVMe AF or NVMe+SSD AF)
This article discusses the upgrade options, node Foundation requirements, configuration maximums, and compatibility details related to FEAT-13434 https://jira.nutanix.com/browse/FEAT-13434, which became GA in AOS 6.5.3 and 6.7. With this feature, AOS can support all-flash non-volatile memory express (NVMe) drives and HDD drives within a node. Nodes with a hybrid disk configuration (NVMe + HDD) may be converted to NVMe AF or NVMe + SSD, with Foundation being required in order to use a newly converted node.
Supported Upgrade Path
Note: The following are the supported upgrade paths for nodes with NVMe + HDD configuration:

1. NVMe + HDD → All Flash NVMe (supported; node Foundation required)
2. NVMe + HDD → NVMe + SSD (supported; node Foundation required)

Unsupported Upgrade Path
Note: Addition of HDD drives to an AF-NVMe node is considered a "downgrade" to Hybrid and is not supported:

1. All Flash NVMe → NVMe + HDD (not supported)
2. NVMe + SSD → NVMe + HDD (not supported)

Configuration and Compatibility Considerations: For the NX-G9 platforms, based on the platform design, the maximum NVMe+HDD configurations allowed will vary. All configurations permitted will follow the necessary requirements and be reflected in Frontline and the Sizer: NVMe:Total Capacity Ratio required at the maximum configuration allowed per platform per SW product; Total Capacity allowed per SW product per RPO requirements; Total Capacity allowed per node per SW product (NCI/NUS) – (Node capacity limits are located on the support portal here https://portal.nutanix.com/page/documents/configuration-maximum/list?software=NCI%20Node%20Capacity&version=6.6). Sample NVMe+HDD configurations for NX-G9 platforms:

- NX-8155-G9: 2x NVMe + 10x HDD, 4x NVMe + 8x HDD
- NX-3155-G9: 2x NVMe + 4x HDD
- NX-1175S-G9: 2x NVMe + 2x HDD

Blockstore Support and SPDK https://portal.nutanix.com/page/documents/details?targetId=Advanced-Admin-AOS:app-cluster-blockstore-about-aos.html is not supported on nodes with a hybrid disk configuration. Autonomous Extent Store (AES-Hybrid) will be disabled on SSD + HDD or NVMe + HDD hybrid clusters. (AES-Hybrid is enabled on NX-1175S as an exception for the 2x SSD + 2x HDD configuration in previous NX generations.) A partial NVMe + HDD population on the node is allowed in NX-8155-G9 as long as the maximum configuration of the node meets all per-node requirements: Flash:Total Capacity ratio, RPO requirements, and maximum capacities per SW product chosen. (See NCI node capacity limits https://portal.nutanix.com/page/documents/configuration-maximum/list?software=NCI%20Node%20Capacity&version=6.6 for more details.) If an NVMe + HDD node is being added to an existing cluster, the addition rules must adhere to the same guidelines as for SSD + HDD hybrid nodes, and the nodes should be added in pairs. Converting from a hybrid to an all-flash configuration (i.e. NVMe + HDD ⇒ NVMe + SSD) requires an intermediate step of adding SSD drives to the node (i.e. NVMe + SSD + HDD) and then removing the HDD drives. After the HDDs are completely removed, the node will need a re-Foundation. The resulting configuration on the node will be NVMe + SSD. For rules on mixing different node types in a cluster, see Product Mixing Restrictions https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Guide:har-product-mixing-restrictions-r.html.
KB11316
LCM: SATADOM firmware upgrade via LCM failing
SATADOM Firmware upgrades failing with: "Expected firmware version S740305N differs from the installed firmware version S67115N, logs have been collected and are available to download"
SATADOM firmware upgrades fail with the following error in the Prism web console: Update failed with error: [Expected firmware version S740305N differs from the installed firmware version S671115N] OR Update failed with error: [[Errno 2] No such file or directory: u'S740305N.tar'] The following traces are seen in lcm_ops.out: 2021-05-03 15:21:31 INFO helper.py:109 (172.16.100.185, kLcmUpdateOperation, d68d9836-094e-4c0f-91a1-a5f5e6621b5f, upgrade stage [2/2]) DEBUG: [2021-05-03 09:51:31.243617] Device /dev/sda identified. smartctl out: ({'LU WWN Device Id': '5 24693f 2ca221959', 'User Capacity': '64,023,257,088 bytes [64.0 GB]', 'Local Time is': 'Mon May 3 09:51:31 2021 UTC', 'ATA Version is': 'ATA8-ACS (minor revision not indicated)', 'Device is': 'In smartctl database [for details use: -P show]', 'SATA Version is': 'SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)', 'Serial Number': 'BCA11803010281395', 'Device Model': 'SATADOM-SL 3IE3 V2', 'Sector Size': '512 bytes logical/physical', 'Firmware Version': 'S671115N', 'Model Family': 'Innodisk 3IE3/3ME3 SSDs', 'SMART support is': 'Enabled', 'Form Factor': '2.5 inches', 'Rotation Rate': 'Solid State Device'}) Another instance of failure is seen as follows in lcm_ops.out: DEBUG: [2021-03-19 21:28:15.298611] First stage of upgrade for entity 3IE3 The following failure may be seen on Dell XC hardware in lcm_ops.out: 2021-09-09 16:02:46 ERROR lcm_ops_by_phoenix:1266 (192.168.50.93, kLcmUpdateOperation, 7345a981-bf05-423f-90f7-af687ed7c6d1, upgrade stage [1/2]) Encountered exception Update failed with error: [[Errno 2] No such file or directory: u'S740305N.tar']. Traceback: Traceback (most recent call last):
The above issue is fixed in LCM-2.4.4 ( ENG-420179 https://jira.nutanix.com/browse/ENG-420179). Upgrade the LCM framework to the latest version by performing an inventory at a connected site. If you are using LCM for the upgrade at a dark site or a location without Internet access, upgrade to the latest LCM build (LCM-2.4.4 or higher) using the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_5:Life-Cycle-Manager-Dark-Site-Guide-v2_5 Workaround: Follow the steps below if running an LCM version older than 2.4.4: Shut down and power on the host while it is in Phoenix after the upgrade fails [Power Cycle Server, Option 5 if NX HW]. Try a manual upgrade of the SATADOM firmware. Replace the SATADOM if steps 1 and 2 do not work.
KB5428
Hyper-V: Genesis in a crash loop
Genesis will enter a crash loop on Hyper-V if there are specific issues in the Hyper-V configuration.
CVM services will fail to start, and genesis will be stuck in a crash loop with one of the below messages logged to /home/nutanix/data/logs/genesis.out.Scenario 1: genesis will enter a crash loop on Hyper-V if the execution policy is not set to RemoteSigned. A Restricted setting might cause issues when you reboot the CVM resulting in the genesis service being in a crash loop. 2018-04-02 09:16:45 INFO hyperv.py:338 Copying the management powershell module to the host Scenario 2: genesis will enter a crash loop on Hyper-V if the path variable is not set correctly. A corrupted or inaccurate path setting might cause issues when you reboot the CVM resulting in the genesis service being in a crash loop. 2020-07-05 02:56:20 ERROR hyperv.py:305 Failed to install NutanixHostAgent on the host: The term 'netsh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.The term 'netsh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
Scenario 1
Per Nutanix documentation, the PowerShell execution policy on the Hyper-V host must be set to "RemoteSigned". From a PowerShell command line on the host, run the following to check the current ExecutionPolicy. You can also use "winsh" from a CVM to navigate to a PS command prompt for the host: PS C:\Windows\system32> Get-ExecutionPolicy If "RemoteSigned" is not returned, update the ExecutionPolicy accordingly: PS C:\Windows\system32> Set-ExecutionPolicy RemoteSigned General Host Requirements https://portal.nutanix.com/page/documents/details?targetId=HyperV-Admin-AOS-v6_5:HyperV-Admin-AOS-v6_5:Hyper-V hosts must have the remote script execution policy set at least to RemoteSigned. A Restricted setting might cause issues when you reboot the CVM.
Scenario 2
From a PowerShell command line on the host, run the following to check the current PATH variable. You can also use "winsh" from a CVM to navigate to a PS command prompt for the host: 192.168.5.1> $env:PATH Correct the above PATH variable by adding the appropriate semi-colons: 192.168.5.1> $env:PATH='C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\cygwin\bin;C:\Program Files (x86)\sourceforge\ipmiutil\;C:\Program Files (x86)\sourceforge\ipmiutil\;C:\Windows\system32\config\systemprofile\AppData\Local\Microsoft\WindowsApps' Or change the variable in the System Properties GUI > System > Advanced System Settings > Advanced Tab > Environment Variables > Path > Edit (make necessary adjustments to the path).
KB11743
Insights server shard unavailable causes insights fatal which may lead to cluster services crashing
When CVM shards become unavailable, insights_server is affected cluster-wide, which leads to the dependent control plane services crashing frequently. This issue impacts 5.18.x clusters.
This issue is observed in clusters that have network connectivity issues between CVMs or disk failures. You may see cluster-wide services crashing frequently (acropolis, ergon, aplos_engine, etc.) due to insights_server FATALs, which may lead to production impact. nutanix@CVM$ allssh "ls -lrth ~/data/logs/*.FATAL | tail -10" Scenario 1 (Network connectivity issue): The insights_server service FATALs cluster-wide with the signature below. NTNX-Log-2021-05-22-1621742110-65903-PE-xx.xx.xx.36/cvm_logs/insights_server.ntnx-19sm3f430086-a-cvm.nutanix.log.INFO.20210522-190013Z.16582:F20210522 20:08:16.661291Z 16584 insights_server.cc:736] Failed to unload shards in 300 sec insights_server.out shows kShardNotReady for the problematic CVM .41, which is experiencing unstable network connectivity. E20210522 09:25:30.896435Z 10838 insights_entity_tree_walkup.cc:232] NodeGetParentEntityGuids failed. Arg = 0x5649375e7320 Control plane services that depend on Insights, like ergon and acropolis, enter a crash loop because the insights shards are unavailable on CVM .41. ergon.out log signature: 2021-05-22 20:07:49,239Z INFO cpdb.py:124 Shards 78 39 76 64 20 91 27 104 are not all owned by xx.xx.xx.41:2027, please retry. (kRetry) (kShardNotReady). Retrying. You may see many such error messages in ergon.out on different CVMs: nutanix@CVM$ grep 'Db retry error' ~/data/logs/ergon.out* | wc -l A FATAL log signature like the one below (this example is from aplos_engine.FATAL): 21-05-22 21:55:05,851Z ERROR 25036 /home/jenkins.svc/workspace/postcommit-jobs/nos/euphrates-5.18.1-stable/x86_64-aos-release-euphrates-5.18.1-stable/bigtop/infra/infra_server/cluster/service_monitor/service_monitor.c:106 StartServiceMonitor: Child 25038 exited with status: 1 Scenario 2 (Disk failure issue): acropolis.out on all nodes indicates IDF shards not being ready on a particular CVM. 
Looking for CRITICAL in acropolis logs will confirm this as well 2021-08-31 18:39:33,127Z INFO cpdb.py:124 Shards 35 52 117 7 120 16 32 84 73 28 93 are not all owned by 10.132.9.68:2027, please retry. (kRetry) (kShardNotReady). Retrying. insights_server logs indicate not being able to load all shards and range scans failing with kUnavailable indicating that cassandra isn't available E20210831 22:06:51.117308Z 8503 insights_rpc_base.cc:234] Shards 52 35 120 28 16 93 73 7 84 117 are not all owned by 10.132.9.68:2027, please retry. A successful IDF server initialization on the contrary would look like this I0415 16:43:02.436728 23054 {{insights_server.cc:1698]}} 0 shard move are pending. Looking at cassandra logs (~/data/logs/cassandra_monitor.INFO and ~/data/logs/cassandra/system.log.INFO) indicates errors on a drive, that can surface in multiple forms. Sharing a few samples here F20210831 22:19:51.903843Z 2743 cassandra_disk_util.cc:676] Check failed: FileUtil::IsMountPoint(data_dir_mount_path) Path /home/nutanix/data/stargate-storage/disks/PHYF005200MG1P9DGN is not a mountpoint
1. Verify whether there are hung or queued tasks due to ergon crashing frequently. If yes, then based on the type of task, you will find a corresponding KB on how to abort the task: nutanix@CVM$ ecli task.list include_completed=false limit=1000 2. Check network connectivity in the ping stats logs: nutanix@CVM$ for i in `svmips` ; do echo $i ; ssh $i grep -C 20 unreachable ~/data/logs/sysstats/ping*.INFO /dev/null | cat -n | ( tail -50 ) | grep -E 'TIMESTAMP|unreachable' ; done 3. Review the logs before the FATAL timestamp and check whether the shard unload was stuck because a read lock on the shards was acquired by the node RPC NodeGetClusterReplicationState. You may see similar symptoms in multiple insights_server logs on different CVMs. insights_server.ntnx-xxxxxxx-a-cvm.nutanix.log.INFO.20210522-190013Z.16582- This issue is resolved in AOS 5.19 and later. Workaround (Scenario 1): 1. From the Insights leader, restart the insights_server: nutanix@cvm:~$ service=/appliance/logical/leaders/insights; echo $(zkcat $service/`zkls $service| head -1`)| awk '{print $2}' 2. The network issue triggers insights shard unavailability, and customers can hit this issue repeatedly if the network issue is not resolved. 3. The fix for this bug is already checked in to 5.19. Upgrading to 5.19 or a later version will resolve this issue. Workaround (Scenario 2): Based on what the error on the drive is (read-only, unmounted, marked as bad), follow the generic SSD troubleshooting ladder. In the interim, stopping insights_server on the CVM where IDF shards have not been able to load successfully will force re-sharding to occur and stabilize the cluster: nutanix@cvm:~$ genesis stop insights_server Note: In some cases all nodes may be impacted, and hence the workaround of restarting insights_server should be applied on all CVMs using allssh. 
nutanix@CVM:~$ allssh genesis stop insights_server; cluster start

The permanent fix is implemented in 6.6 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+6.6, 6.6.1 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+6.6.1, pc.2023.3 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+pc.2023.3, pc.2023.1.0.1 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+pc.2023.1.0.1, 6.5.2 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+6.5.2
KB12075
Performance benchmarking with Fio on Nutanix
Specifics about using Fio on Nutanix.
Flexible I/O Tester (Fio) is a benchmarking and workload simulation tool for Linux/Unix created by Jens Axboe, who also maintains the block layer of the Linux kernel. Fio is highly tunable and widely used for storage performance benchmarking. It is also used by X-Ray https://portal.nutanix.com/page/downloads?product=xray, the official performance benchmarking tool created by Nutanix. X-Ray automates the deployment of VMs on which it runs Fio and then presents the results in a user-friendly way with graphs and reports.

There are also ways to run Fio on Windows, but tools better suited to Windows, such as IOmeter or CrystalDiskMark, are generally recommended instead.
The solution section goes through the main parameters that can be used to run Fio on a Linux-based system.

Fio main arguments

Data randomness

Data randomness is important because the Nutanix storage engine will compress the file internally if compression is enabled on the storage container level. Fio lays out a file at the beginning of the test and uses that file to write I/O to or read from. By default, Fio always uses random data. If for any reason a test on a file filled with zeroes is needed, the argument --zero_buffers can be used. However, this is not recommended, because such a file will be compressed on the storage level and will completely fit into the read cache. The writing of zeroes will also be optimized by the storage system, so such a test will not be fair and will not reflect the actual storage performance.

In the below example, the file test1 was created with default settings and the file test2 was created with --zero_buffers. Both files are 16GB in size. To demonstrate the difference, both files were compressed into tar.gz. Even though the compression algorithm is different on the Nutanix storage level, it still shows the difference.

[root@localhost ~]# du -hs /test1/*

The file test2, consisting only of zeroes, compresses down to a 16Mb file, whereas the file test1 consists of random data and stays the same size when compressed.

Software caching

To avoid OS-level caching, --direct=1 should be used. That way the page cache is bypassed, and memory is no longer used. The purpose of storage benchmarking is to test the underlying storage, not the memory or the OS caching capabilities.

Testing on a file system vs testing on a physical disk

Fio can run tests both against a raw physical disk and against a file system. Both options can be used, and they should be considered based on the use case.
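The data-randomness point above can be demonstrated without Fio at all. A minimal sketch (file paths and the 1MB size are illustrative; the article's demonstration used 16GB files and tar.gz):

```shell
# Create a file of random data (like fio's default buffers) and a file of
# zeroes (like --zero_buffers), then compare how well each compresses.
head -c 1048576 /dev/urandom > /tmp/fio_demo_rand
head -c 1048576 /dev/zero    > /tmp/fio_demo_zero
gzip -kf /tmp/fio_demo_rand /tmp/fio_demo_zero
rand_sz=$(wc -c < /tmp/fio_demo_rand.gz)
zero_sz=$(wc -c < /tmp/fio_demo_zero.gz)
# Random data barely shrinks; zeroes collapse to almost nothing.
echo "random compressed: ${rand_sz} bytes, zeroes compressed: ${zero_sz} bytes"
```

The same asymmetry applies to any inline-compressing storage layer, which is why a zero-filled test file produces unrealistically good results.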
If the production applications are going to use Linux file systems, such as ext4 or xfs, it is better to test by creating a test file on top of the file system. If the production application is going to use raw disk devices, for example Oracle's ASM, then a test against a raw disk without a file system will be more realistic. To bypass the file system, --filename can be pointed directly at the disk, for example --filename=/dev/sdc. Make sure to verify that the disk name is correct: after running such a test, all data on the device will be lost, so specifying the wrong disk can be destructive.

Fio use options - single line and jobfiles

A Fio test can be run either as a single line specifying all the needed parameters, or from a file that contains all the parameters. If there is a need to run multiple different tests against many devices or with different settings, it might be helpful to create several jobfiles and then trigger the tests by specifying those files.

Running a test in a single line. Example:

[root@localhost ~]# fio --name=fiotest --filename=/home/test1 --size=16Gb --rw=randread --bs=8K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=32 --group_reporting --runtime=60 --startdelay=60

Running a test from a file. If a Fio job should run in parallel on multiple devices, a global section should be defined. Example:

[root@localhost ~]# cat jobfile.fio

NOTE: Both examples above run exactly the same test, as the options are the same.

Suggested examples for testing Nutanix and other enterprise storage

Always create a new vDisk for the VM before running the tests. The main reason is that an old disk might hold data from previous tests or other workloads, and that data could have been down-migrated to the HDD tier (in the case of a hybrid cluster).
The new disk guarantees the correct data path for the I/O.

Before running a test, first collect data about the application that is going to run in production. Useful information to request from the application administrator before testing:

- the block size that the application is going to use
- whether the I/O will be random or sequential
- how much the application is going to read and write (for example, 70% reads and 30% writes)
- the expected working set size, or how much data the application will actively use
- whether the application will use a Linux file system or raw disk devices

Suggested test examples:

NOTE: How much I/O a system can handle depends heavily on the platform, and the more I/O is pushed (numjobs and iodepth), the higher the potential latency. There are three main categories:

- A hybrid system: limited by the amount of SSD "performance". It is possible to run into limitations around oplog, specifically when running random workloads. The number of jobs, in this case, should be reduced.
- Any system on Metro: with Metro, all I/O goes to oplog, which can become a limitation, specifically as writes are cut into 64K chunks due to how oplog works. For the 1M workload, it is advised to lower the numjobs and iodepth parameters for these tests.
- All flash with or without NVMe: the numbers below were tested on a system with SSD + NVMe, so these values (except for the Metro case) can be used.

It is recommended to set at least a 16Gb file size. That way the file is big enough to bypass the read/write caches of the Nutanix cluster and test storage performance rather than the cache. It is also recommended to run read and write tests separately, so the results are not limited if either reads or writes are slower than their counterpart.
For example, if a test of 100% reads can show 80k IOPS and a test of 100% writes can do 50k IOPS, a 50/50 read/write test will limit the reads to no more than 50% of the workload, and the result will be 50k writes and 50k reads. While the sum of the results is a higher number in total, the reads are not showing their true capability in this example, as they can do much better. However, if it is required to simulate a specific workload, or to test a sum of reads and writes in a single simultaneous test, it is completely fine to do both; this is controlled with --rw/--randrw and --rwmixread, specifying the percentage of reads.

As --numjobs represents the number of threads, it is recommended to set it to 8, or to the number of vCPUs on the VM if that is higher than 8. The vast majority of modern applications are multi-threaded, so using a single thread for testing would not be realistic in most cases.

The --iodepth recommendation depends on the block size. For block sizes <=32k, --iodepth=32 is recommended to create enough load on the Nutanix storage and keep it busy. For block sizes 32k<bs<=256k, --iodepth=16 is recommended. For block sizes >=512k, --iodepth=8 is recommended. It is important to note that high iodepth values may result in increased latency, so if the latency in the test results is considered too high, the iodepth should be reduced. Results will vary greatly depending on the underlying hardware models.

Interpreting the results

The below example of the test results is based on the following setup: 8k block size, random I/O, 50/50 read/write ratio, 16Gb working set size, running on top of the xfs file system /test1 created on a single vDisk /dev/sdb.
[root@localhost ~]# fio --name=fiotest --filename=/test1/test1 --size=16Gb --rw=randrw --bs=8K --direct=1 --rwmixread=50 --numjobs=8 --ioengine=libaio --iodepth=32 --group_reporting --runtime=60 --startdelay=60

While the test is running, Fio shows a short summary displaying the test's progress in percentage, the current I/O bandwidth for reads and writes, and the number of IOPS for reads and writes:

fio-3.19

After the test is completed (or cancelled with Ctrl+C), Fio generates a detailed report with more details. If the --group_reporting attribute was used, it shows the summary for all threads together; if it wasn't used, the details are shown for each thread separately, which may be confusing.

...

slat - submission latency. The time it took to submit the I/O. This value can be in nanoseconds, microseconds or milliseconds; Fio chooses the most appropriate base and prints that (in the example above, microseconds was the best scale). All the latency statistics use the same approach.
clat - completion latency. The time from submission to completion of the I/O pieces.
lat - total latency. The time from when Fio created the I/O unit to the completion of the I/O operation.
clat percentiles - latency results during the test run, divided into buckets by percentage. Indicates the percentage of test time during which the latency was lower than the indicated amount. For example, 99.99th=[68682] means that for 99.99% of the test time the latency was lower than 68682 usec (68 ms); only 0.01% of the time was it higher.
bw - bandwidth statistics based on samples.
iops - IOPS statistics based on samples.
lat (nsec/usec/msec) - the distribution of I/O completion latencies. The time from when I/O leaves Fio to when it is completed.
Unlike the separate read/write/trim sections above, the data here and in the remaining sections apply to all I/Os for the reporting group. For example, 10=22.76% in lat (msec) means that 22.76% of all the I/O took less than 10ms to complete.
cpu - CPU usage. User and system time, along with the number of context switches this thread went through, and finally the number of major and minor page faults. The CPU utilization numbers are averages for the jobs in the reporting group, while the context switch and fault counters are summed.
IO depths - the distribution of I/O depths over the job lifetime.
IO submit - how many pieces of I/O were submitted in a single submit call.
IO complete - like the submit number above, but for completions instead.
IO issued rwt - the number of read/write/trim requests issued, and how many of them were short or dropped.
IO latency - these values are for --latency_target and related options. When these options are engaged, this section describes the I/O depth required to meet the specified latency target.

Using a single vDisk vs using multiple vDisks

NOTE: Nutanix scales performance per node and with the number of vDisks. The above testing example with Fio is based on a single vDisk on a single VM running on a single node. While testing a single vDisk is important for understanding that specific use case, Fio does not replace a distributed performance test like Nutanix X-Ray for understanding the full capability of a Nutanix cluster.

Most applications can utilise multiple vDisks, because applications usually use Linux file systems rather than raw disks. That greatly improves performance and can easily be achieved by using a striped logical volume created with LVM. Nutanix recommends always using multiple vDisks with OS-level striping for any applications that require high-performance I/O.
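As an illustration of the OS-level striping recommendation above, the following sketch prints (rather than runs, since the commands are destructive) a typical LVM sequence for striping an XFS volume across 5 vDisks. The device names /dev/sd[b-f], the volume names and the 64K stripe size are hypothetical examples, not Nutanix-mandated values:

```shell
# Dry-run sketch: echoes the commands instead of executing them.
cmds='pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate test_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
lvcreate --stripes 5 --stripesize 64K --extents 100%FREE --name test_lv test_vg
mkfs.xfs /dev/test_vg/test_lv
mount /dev/test_vg/test_lv /test2'
echo "$cmds"
```

The stripe count matches the number of vDisks, so sequential and random I/O is spread across all five devices.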
Information about how to configure striping on multiple vDisks, along with other useful Linux settings, can be found in the Best Practices Guide for running Linux on AHV https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2105-Linux-on-AHV:BP-2105-Linux-on-AHV.

Below is an example of an XFS file system using LVM, striped across 5 vDisks. There are 5 vDisks of 20Gb in the LVM striped logical volume test_lv:

[root@localhost ~]# lvs -a -o+lv_layout,lv_role,stripes,devices

Running exactly the same test as in the "Interpreting the results" section, the results are more than 3 times better than on a single vDisk:

[root@localhost ~]# fio --name=fiotest --filename=/test2/test2 --size=16Gb --rw=randrw --bs=8K --direct=1 --rwmixread=50 --numjobs=8 --ioengine=libaio --iodepth=32 --group_reporting --runtime=60 --startdelay=60

In some situations, it may be beneficial to use Volume Groups instead of VM disks, because Volume Groups can load-balance the workload between the CVMs in the cluster, whereas VM disks are always hosted on the same CVM where the VM is running. A load-balanced VG uses disks from different nodes, as well as CPU and memory resources from different CVMs of the cluster. iSCSI Volume Groups are load-balanced by default; VGs with direct VM attachments require load-balancing to be enabled in acli. More information about Volume Groups can be found in the Nutanix Volumes Guide https://portal.nutanix.com/page/documents/details?targetId=Volumes-Guide:Volumes-Guide.

Volume Groups are the main way of providing storage to clustered VMs or physical machines. VGs can provide performance improvements if the resources of a single CVM are a bottleneck, but there are also some downsides, such as partial loss of data locality and added configuration complexity. Test results may vary greatly depending on the underlying hardware configuration and software versions.
The above tests were run on the following setup: VM details:

Fio main arguments reference:

--name=str : Fio will create a file with the specified name to run the test on. If the full path is entered, the file will be created at that path; if only a short name is provided, the file will be created in the current working directory.

--ioengine=str : Defines how the job issues I/O to the test file. There is a large number of ioengines supported by Fio, and the whole list can be found in the Fio documentation. The engines worth mentioning are:
- libaio - Linux native asynchronous block level I/O. Nutanix recommends using the libaio engine for testing on any Linux distribution.
- solarisaio - Solaris native asynchronous I/O. Suitable for testing on Solaris.
- posixaio - POSIX asynchronous I/O. For other Unix-based operating systems.
- windowsaio - Windows native asynchronous I/O, in case testing is done on Windows OS.
- nfs - I/O engine supporting asynchronous read and write operations to NFS from userspace via libnfs. This is useful for achieving higher concurrency, and thus throughput, than is possible via kernel NFS.

--size=int : The size of the file on which Fio will run the benchmarking test.

--rw=str : Specifies the type of I/O pattern. The most common ones are:
- read: sequential reads
- write: sequential writes
- randread: random reads
- randwrite: random writes
- rw: sequential mix of reads and writes
- randrw: random mix of reads and writes
Fio defaults to 50/50 if a mixed workload is specified (rw=randrw). If a more specific read/write distribution is needed, it can be configured with --rwmixread=. For example, --rwmixread=30 would mean that 30% of the I/O will be reads and 70% will be writes.

--bs=int : Defines the block size that the test will use for generating the I/O. The default value is 4k, which is used if the option is not specified. It is recommended to always specify the block size, because the default 4k is not commonly used by applications.

--direct=bool : true=1 or false=0. Setting the value to 1 (non-buffered I/O) is fairer for testing, as the benchmark sends the I/O directly to the storage subsystem, bypassing the OS filesystem cache. The recommended value is always 1.

--numjobs=int : The number of threads spawned by the test. By default, each thread is reported separately. To see the results for all threads as a whole, use --group_reporting.

--iodepth=int : Number of I/O units to keep in flight against the file. That is the amount of outstanding I/O for each thread.

--runtime=int : The amount of time the test will run, in seconds.

--time_based : If given, run for the specified runtime duration even if the files are completely read or written. The same workload will be repeated as many times as the runtime allows.

--startdelay : Adds a delay in seconds between the initial test file creation and the actual test. Nutanix recommends always using a 60-second delay to allow the write cache (oplog) to drain after the test file is created and before the actual test starts, to avoid reading the data from the oplog and to allow the oplog to be empty for a fair test.

Suggested test examples:

Sequential writes with 1Mb block size. Imitates write backup activity or large file copies:
fio --name=fiotest --filename=/test/test1 --size=16Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 --startdelay=60

Sequential reads with 1Mb block size. Imitates read backup activity or large file copies:
fio --name=fiotest --filename=/test/test1 --size=16Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 --startdelay=60

Random writes with 64Kb block size. Medium block size workload for writes:
fio --name=fiotest --filename=/test/test1 --size=16Gb --rw=randwrite --bs=64k --direct=1 --numjobs=8 --ioengine=libaio --iodepth=16 --group_reporting --runtime=60 --startdelay=60

Random reads with 64Kb block size. Medium block size workload for reads:
fio --name=fiotest --filename=/test/test1 --size=16Gb --rw=randread --bs=64k --direct=1 --numjobs=8 --ioengine=libaio --iodepth=16 --group_reporting --runtime=60 --startdelay=60

Random writes with 8Kb block size. Common database workload simulation for writes:
fio --name=fiotest --filename=/test/test1 --size=16Gb --rw=randwrite --bs=8k --direct=1 --numjobs=8 --ioengine=libaio --iodepth=32 --group_reporting --runtime=60 --startdelay=60

Random reads with 8Kb block size. Common database workload simulation for reads:
fio --name=fiotest --filename=/test/test1 --size=16Gb --rw=randread --bs=8k --direct=1 --numjobs=8 --ioengine=libaio --iodepth=32 --group_reporting --runtime=60 --startdelay=60
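The block-size to iodepth guidance in this KB can be condensed into a small helper. A sketch only; the thresholds follow the recommendations above, and mapping the unstated 256k-512k gap to 8 is an assumption:

```shell
# Return the iodepth this article suggests for a block size given in KB.
recommended_iodepth() {
  local bs_kb=$1
  if [ "$bs_kb" -le 32 ]; then
    echo 32          # bs <= 32k
  elif [ "$bs_kb" -le 256 ]; then
    echo 16          # 32k < bs <= 256k
  else
    echo 8           # bs >= 512k (gap 256k-512k mapped here by assumption)
  fi
}
recommended_iodepth 8     # -> 32
recommended_iodepth 64    # -> 16
recommended_iodepth 1024  # -> 8
```

If the measured latency is too high at these depths, reduce the value, as noted above.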
KB9483
Move 3.5.1: Adding PE to Move fails with "Oops - Server error"
This article describes a customer issue seen when adding an LDAPS-authentication-enabled PE (Prism Element) cluster to Move with an AD user.
After upgrading Move from 3.3.x to 3.5.1, when creating a plan, the target is not listed. When updating the PE (Prism Element) cluster on Move with AD credentials, the following error is seen in the Move UI. The same error is seen when removing and re-adding the PE cluster with an AD user. The following is seen in the Move log /opt/xtract-vm/logs/srcagent.log:

E0608 07:29:46.619897 7 v3_ahv.go:96] [HostIPAddrOrFQDN="x.x.x.x", Location="/hermes/go/src/common/restclient/restclient.go:191", Response="Oops - Server error"

Logging in to PE with the AD user is successful, and testing AD user authentication when updating the LDAPS configuration on PE is also successful. However, the issue persists on Move, with the following logged to aplos.out on the PE Aplos leader:

2020-06-08 01:32:59 ERROR directory_service.py:846 Service account not found for the directory service with domain= adc.cloud. Please update service account to proceed further.
Workaround:

1. Restart aplos and aplos_engine cluster-wide and try to add the PE cluster again. It should succeed if no additional issues exist on the PE cluster.
2. In case option 1 is not possible, add the PE cluster using a local user such as admin, or any newly created user with roles identical to the admin user.
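A typical way to perform the cluster-wide restart from any CVM, shown here as a printout rather than executed since it requires a live cluster (standard AOS service tooling; verify the service names on your AOS version before running):

```shell
# Dry-run sketch: prints the cluster-wide aplos restart commands.
cmds="allssh 'genesis stop aplos aplos_engine'
cluster start"
echo "$cmds"
```

cluster start only starts services that are stopped; it does not disrupt services that are already running.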
KB8773
ESXi does not display CDP information on Intel NICs
ESXi 10 GBE NICs do not display CDP information in VMware.
After rebooting an ESXi host running the ixgben driver on ESXi 6.5 or above, CDP information will not show on the ESXi host. This affects all current versions of the Intel ixgben driver, from 1.4.1 to 1.7.20. A packet capture on the ESXi host will show that no CDP information is being received by the host, while a SPAN on the switch side will show that CDP information is being sent.

Example of a VMware packet capture with a working interface:

root@ESXi# pktcap-uw --uplink vmnic2 --ethtype 0x2000 -o - | tcpdump-uw -r - -nn

Example of a packet capture with a broken interface:

root@ESXi# pktcap-uw --uplink vmnic0 --ethtype 0x2000 -o - | tcpdump-uw -r - -nn

This may only affect certain types of switches. For example, a host connected to a Nexus 9k will not show CDP information, but connect the same server to a Nexus 5000 and it will.
Workaround: If CDP information is missing on a host, remove the vmnic from the vSwitch and add it back. This restores CDP information until the next reboot of the ESXi host. The issue is currently being investigated.
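The remove/re-add workaround can also be performed from the ESXi shell with esxcli, shown here as a printout since removing an uplink interrupts its traffic. vmnic0 and vSwitch0 are example names; substitute the affected NIC and vSwitch, and make sure the host has another active uplink first:

```shell
# Dry-run sketch: prints the esxcli commands to remove and re-add an uplink.
cmds="esxcli network vswitch standard uplink remove --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0"
echo "$cmds"
```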
KB7069
NCC Health Check: disk_firmware_check
The NCC health check disk_firmware_check checks if any disk that passes through to the Controller VM (CVM) requires a firmware upgrade or not.
The NCC health check disk_firmware_check checks whether any disk passed through to the Controller VM (CVM), or the host boot disk, requires a firmware upgrade.

Running the NCC Check

Run this check as part of the complete NCC health checks:

nutanix@cvm$ ncc health_checks run_all

Or run this check separately:

nutanix@cvm$ ncc health_checks hardware_checks disk_checks disk_firmware_check

You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.

This check is scheduled to run every 3 days, by default. This check will generate an alert after 1 failure.

Sample Output

For Status: WARNING

WARN: Firmware version SN04 for /dev/sde of model ST8000NM0055-1RM112 with serial XXXXXXXX at Location 5 is out of date. Perform inventory from Life Cycle Manager to check and update to the latest firmware version. Check KB 6937 for alternative update option.

Output messaging

Check ID: 106059
Description: Checks if any of the disks passed through to the CVM requires a firmware upgrade
Causes of failure: Current disk firmware is not recommended by Nutanix
Resolutions: Upgrade Disk Firmware version. Perform inventory from Life Cycle Manager to check and update to the latest firmware version. Check KB 6937 for alternative update option.
Impact: Disk may not function properly, resulting in data unavailability.
Alert ID: A106059
Alert Smart Title: disk_count disks of node host_ip, needs firmware upgrade
Alert Title: Disk firmware needs upgrade
Alert Message: Firmware version version for disk of model model with serial serial_id at Location location is out of date. Perform inventory from Life Cycle Manager to check and update to the latest firmware version. Check KB 6937 for alternative update option.

Check ID: 106064
Description: Checks if any of the host boot disks requires a firmware upgrade
Causes of failure: Current host boot disk firmware is not recommended by Nutanix
Resolutions: Upgrade host boot disk firmware version. Perform inventory from Life Cycle Manager to check and update to the latest firmware version. Check KB 6937 for alternative update option.
Impact: Host boot disk may not function properly, resulting in data unavailability.
Alert ID: A106064
Alert Smart Title: Firmware of vendor host boot disk(serial ID : serial_id), needs upgrade
Alert Title: Disk firmware needs upgrade
Alert Message: Firmware version version of vendor host boot disk disk of model model with serial id serial_id is out of date.

This hardware-related check executes on the following hardware:
- Disk Firmware Check (106059): Nutanix NX, Dell XC, Lenovo HX
- Host Boot Disk Firmware Check (106064): Nutanix NX
It is recommended to upgrade disk firmware using LCM (Life Cycle Manager). For more details, see the Life Cycle Manager Guide https://portal.nutanix.com/#/page/docs/details?targetId=Life-Cycle-Manager-Guide-v20:Life-Cycle-Manager-Guide-v20. For a quick introduction to the feature, watch this 5-minute video https://youtu.be/CftB7LhStnQ. If upgrading firmware using LCM is not possible, see KB 6937 https://portal.nutanix.com/kb/6937 and Manually Updating Data Drive Firmware https://portal.nutanix.com/#/page/docs/details?targetId=Hardware-Admin-Ref-AOS-v511:bre-data-drive-update-manual-t.html for alternate options for Nutanix hardware. (Note: You may need to log in to the Support Portal https://portal.nutanix.com/ to view KB 6937 https://portal.nutanix.com/kb/6937.)

For Seagate drives, Nutanix recommends upgrading 6 and 8 TB capacity drives that are running firmware older than SN05, and upgrading 2 and 4 TB drives that are running firmware older than TN05.
For Samsung drives, Nutanix recommends upgrading drives that are running firmware versions GXM5004Q, GXM5104Q or GXM5204Q.
For Innodisk model SATADOM-SL 3IE3, Nutanix recommends upgrading the SATADOM firmware to the latest version if it is running firmware S560301N or S160130N.

Additional notes:
- For Xen hypervisor/1-node clusters, the disk firmware needs to be upgraded manually, as LCM is not supported. Check Manually Updating Data Drive Firmware https://portal.nutanix.com/#/page/docs/details?targetId=Hardware-Admin-Ref-AOS-v511:bre-data-drive-update-manual-t.html for the manual procedure.
- If you receive an ERR as shown below, upgrade NCC to the latest version and execute the check again:

ERR : node (service_vm_id: xxxxx ) : Could not fetch CVM disk information.

If the issue persists after upgrading NCC, consider engaging Nutanix Support https://portal.nutanix.com. Gather the NCC Health Check output and attach it to the support case.
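For the Seagate recommendation above, whether a given firmware string is older than the minimum can be checked with a plain lexicographic comparison, since the versions share a fixed prefix (SN04 < SN05, TN03 < TN05). A hypothetical helper, not a Nutanix tool:

```shell
# Compare a drive's firmware string (e.g. from smartctl) against a minimum.
# Valid only for same-prefix, same-length versions like SNxx or TNxx.
fw_status() {  # usage: fw_status <current_fw> <minimum_fw>
  if [ "$1" = "$2" ] || [ "$(printf '%s\n' "$1" "$2" | sort | head -1)" = "$2" ]; then
    echo "ok"        # current is equal to or sorts after the minimum
  else
    echo "upgrade"   # current sorts before the minimum
  fi
}
fw_status SN04 SN05   # -> upgrade
fw_status SN05 SN05   # -> ok
```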
KB17179
LCM - HPE : Upgrade of NVME Drives connected to non-VMD platforms will be skipped to firmware version HPDK5(B)
This Knowledge base article describes an issue where upgrade of NVME Drive FW on non-VMD platforms will be skipped to HPDK5(B)
This Knowledge Base article describes an issue where the upgrade of NVMe drive firmware to HPDK5(B) is skipped on non-VMD platforms. To identify whether you are hitting this issue, confirm the following:

1. The platform model of the node is one of the following:

Gen10:
- DX560 Gen10 24SFF
- DX2600 DX170r Gen10 24SFF

Gen10 Plus:
- DX380 Gen10 Plus 24SFF

Gen10 Plus v2:
- DX385 Gen10 Plus v2 24SFF
- DX325 Gen10 Plus v2 8SFF

Gen11 Intel:
- DX365 Gen11 10NVMe
- DX385 Gen11 8SFF

2. Check that the disk is an NVMe drive:

[root@phoenix firmware-hdd-a27c95663d-HPK5-2.1]# lsscsi

3. Check the version of the drive firmware installed on the drive:

nutanix@cvm$ sudo smartctl -x /dev/sde -T permissive

The above disk model is an example; the actual disk and firmware in your case may differ.

4. You will see that firmware-hdd-a27c95663d-HPK5(B)-2.1.x86_64.rpm was added to the install set during the inventory phase:

~$lcm_logs__xx.xx.xx.xx_/yy.yy.yy.yy/lcm_update_logs/sum/localhost/node.log

5. Self-discovery of the disk firmware is skipped:

~$lcm_logs__xx.xx.xx.xx_/yy.yy.yy.yy/lcm_update_logs/sum/localhost/node.log
Based on feedback from HPE, drive firmware version HPK5 supports upgrades of NVMe drives on non-VMD platforms, but does not support firmware upgrades for drives attached to VMD-enabled nodes. Conversely, drive firmware HPDK5(B) supports firmware upgrades of NVMe drives attached to VMD-enabled platforms only; non-VMD platforms are not supported for NVMe drive firmware upgrades with it.

Nutanix has bundled the HPDK5(B) firmware along with HPE-RIM-2.1. This allows Nutanix LCM to upgrade NVMe drive firmware on VMD-enabled servers to HPDK5(B). Non-VMD systems, however, cannot upgrade NVMe drive firmware using LCM.

To upgrade NVMe drive firmware on non-VMD systems, open a support case with HPE support and manually upgrade the drive firmware on such disks to HPDK5 or later. Reach out to Nutanix Support http://portal.nutanix.com/ in case of any queries.
KB14545
NDB | Database provision operation fails with Ansible file permission error
Database provision operation fails with Ansible file permission error due to ACL utility missing
Database provisioning to a DB Server VM created from a Software Profile template VM may fail with the below error signature in the Ansible scripts. In the NDB agent 'drivers/{DB-TYPE}_database/provision/{OPERATION-ID}.log', the below error can be seen: TASK [perform_db_operation : Run shell script "create_postgres_db.sh" as non-root] ***
Install the 'acl' package using apt, yum, or another package manager in the template VM before creating the Software Profile from it.
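As a sketch of the fix, the install command differs only by package manager; the helper below (hypothetical name, not part of NDB) maps a detected manager to the command that installs the 'acl' package, which provides the setfacl/getfacl utilities Ansible needs:

```shell
# Hypothetical helper: map a detected package manager to the command that
# installs the 'acl' package (provides setfacl/getfacl used by Ansible).
acl_install_cmd() {  # $1 = package manager: apt, yum, or dnf
  case "$1" in
    apt) echo "sudo apt-get install -y acl" ;;
    yum) echo "sudo yum install -y acl" ;;
    dnf) echo "sudo dnf install -y acl" ;;
    *)   echo "unsupported package manager: $1" >&2; return 1 ;;
  esac
}

acl_install_cmd yum
```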
KB13455
Nutanix Database Service - How to restore a database on Storage Spaces using CLI
This article describes how to restore a database on Storage Spaces using CLI when the Virtual Disks are shared with multiple databases.
This article describes the NDB CLI commands to restore a database on Storage Spaces when the Virtual Disks are shared with multiple databases. Restore of databases sharing the same Virtual Disks/Storage Pools is not supported via NDB UI. When trying to perform the restore, the user will get the following error: Restore of databases on storage spaces with databases sharing the same disk is supported only via CLI. Provide restore proxy VM for restoring database(s) on storage spaces Before running the CLI command to restore the databases, ensure that there is at least one restore proxy VM registered in NDB.
CLI commands to restore the database on storage space: To restore a database in a database group: era > database_group restore engine=sqlserver_database database_group_id=<dbgroup_id> database_id=<database_id> latest_snapshot same_location=true cluster_to_restore_dbserver_mapping= <cluster_name:dbserver_name> To restore a standalone TM database: era > database restore engine=sqlserver_database id=<database_id> latest_snapshot same_location=true cluster_to_restore_dbserver_mapping=<cluster_name:dbserver_name>
KB11337
iSM installation via LCM may fail on ESXi servers on Dell hardware with "iSM is active (not running)" error
On Dell hardware, iSM may fail to get installed with the log message output "iSM is active (not running)"
iSM may fail to get installed on an ESXi server via LCM with an error similar to the below in ~/data/logs/lcm_ops.out on the LCM leader: 2021-04-08 16:21:50 INFO helper.py:106 (xxx.xxx.xxx.xxx, update, 26c24c8e-98d9-4b75-b7dd-9c16c57f5ed5) DEBUG: [2021-04-08 14:21:50.414039] The Command: ['/etc/init.d/dcism-netmon-watchdog', 'status'] failed at Attempt: 1 with Output: iSM is active (not running) The issue may be caused by having iSCSI targets (either external or internal to the Nutanix cluster) connected to an ESXi host. In such cases, the iSM service takes considerably longer to get to the expected state of "active (running)" and thus prevents successful LCM upgrade workflows. Identification steps: Confirm whether the software iSCSI adapter is enabled on ESXi: [root@esxi:~] esxcli iscsi adapter list Confirm whether either of the target discovery methods has entries: - for dynamic target discovery: [root@esxi:~] esxcli iscsi adapter discovery sendtarget list - for static target discovery: [root@esxi:~] esxcli iscsi adapter discovery statictarget list
As normal cluster operation does not require any iSCSI targets to be discovered on the ESXi hosts in the cluster, the software iSCSI initiator needs to be disabled: [root@esxi:~] esxcli iscsi software set -e false Reboot the ESXi host and re-attempt iSM installation.
KB12812
Failed to create an application-consistent snapshot on ESXi with VssSyncStart operation failed: iDispatch error #8723 (0x80042413)
For a VM hosted on ESXi, it may fail to take an application consistent snapshot.
When backing up a Windows VM hosted on ESXi, or creating a Recovery Point for it via Leap, taking an application-consistent snapshot may fail and fall back to a crash-consistent backup instead. The alert description says: Warning : Failed to create an application-consistent snapshot for one or more VMs in snapshot xxxxxx of protection domain pd_xxxxxxxxxxxxxxx_xxx. A crash-consistent Inside the VM, the application event log shows a VSS error: Source: VSS Note the time here (DD/MM/YYYY format) and review the cerebro service logging on the node holding the cerebro leader role. Find the cerebro leader by executing the following at the command prompt of any CVM while logged in as user "nutanix": nutanix@cvm~$ cerebro_cli get_master_location SSH to the node shown and review the /home/nutanix/data/logs/cerebro.INFO log, searching for the Protection Domain name from the cluster alert above (pd_xxxxxxxxxxxxxxx_xxx): nutanix@cvm~$ ssh a.b.c.d Look for log entries matching the time of the Windows application event log message and the Protection Domain from the alert. The error refers to a hypervisor snapshot, which is how snapshots are taken for a VM hosted on ESXi; details on the failure can be found in the vmware.log for the particular VM.
The vmware.log is located at /vmfs/volumes/<datastore-name>/<vm-name> on the ESXi host, where <datastore-name> refers to the datastore where the VM files are located and <vm-name> refers to the VM name: root@ESXi# grep -C3 VssSyncStart /vmfs/volumes/<datastore-name>/<vm-name>/vmware.log The hypervisor was unable to quiesce the guest operating system due to: 'VssSyncStart' operation failed: IDispatch error #8723 (0x80042413) Note: To isolate this issue as occurring at the hypervisor level, it is also possible to take a snapshot of the VM via the vCenter console and select "Application Consistent" during the process. The Windows event log message indicating "generic_floppy_drive" generally implies an issue performing a function on a drive while mounting and unmounting snapshot disks.
This particular error is discussed in VMWare KB article Windows quiesce snapshot creation fails with error 0x80042413 (67869) https://kb.vmware.com/s/article/67869.Upgrade VMWare Tools in the VM to version 11 to resolve the issue.
KB17006
Prism unsuccessful login attempt not logged in prism_gateway.log and doesn't generate syslog event
Syslog not receiving unsuccessful login events after upgrade to AOS 6.7.1.x and 6.8
This KB describes an issue where unsuccessful login events are not received on the syslog server when logging into Prism Element after upgrading to AOS version 6.7.1.x or 6.8. Issue identification: On AOS 6.7.1.x and 6.8 clusters, when a user attempts a failed login, the following logging is seen in prism_gateway.log on the Prism leader, even after enabling debug logs: DEBUG 2024-05-03 09:14:00,876Z http-nio-127.0.0.1-9081-exec-8 [] auth.commands.LDAPAuthenticationProvider.authenticate:188 Matched domain xxx.xx.xx.com for username xxxbab-a@xxx.xx.xx.com On an older AOS cluster (for example, 6.5.2), prism_gateway.log on the leader node previously displayed the following log entries: INFO 2024-05-03 09:11:12,448Z http-nio-127.0.0.1-9081-exec-13 [] commands.config.GetClientAuthKey.prepareToExecute:55 Invoking Get Client Auth Key command.
Issue cause: The issue was introduced by the fix for another issue, ENG-373789, which changed the log level for such events from WARN to DEBUG. That change addressed a false positive where an unsuccessful-login-attempt event was generated when a user merely reloaded the Prism UI page without entering any wrong username or password. Permanent fixes: Upgrade AOS to a fixed version once released. ENG-373789 https://jira.nutanix.com/browse/ENG-373789 - Fixed in 6.8.1, changing to SyslogUtils.generateWarnLevelSyslog(msg). ENG-657788 https://jira.nutanix.com/browse/ENG-657788 - Adds back the syslogAppender bean in log4j2.xml; fixed in AOS 6.8.1 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+6.8.1 and above. Workaround steps: The workaround requires changes to the log4j2.xml files on all CVMs in the customer's cluster; only apply it if the customer cannot wait for the fixed versions. The one-liner commands below were crafted as part of TH-14048 and tested on AOS 6.7.x and 6.8 for a key customer account; refer to the Jira for more details. Set the correct expectation with the customer that, after the workaround, the syslog server will receive some additional debug-level logs until the permanent fix is in place.
Step 1 - Change the rsyslog config module to DEBUG for the PRISM module using the ncli command: <ncli> rsyslog-config add-module level=DEBUG module-name=PRISM server-name=Test include-monitor-logs=true Step 2 - Take a backup of the existing /home/nutanix/config/prism/log4j2.xml and /srv/salt/security/CVM/tomcat/log4j2.xml files: nutanix@NTNX-CVM:xx.xx.xx124:~$ allssh "cp -p /home/nutanix/config/prism/log4j2.xml /home/nutanix/config/prism/log4j2.xml-backup" Step 3 - Add the Syslog and AsyncSyslogAppender appenders before the RollingFile appender in /home/nutanix/config/prism/log4j2.xml: nutanix@NTNX--CVM:~$ allssh "sed -i '/<RollingFile name=\"file\"/i\\ Step 4 - Replace the specific logger block with the desired AppenderRefs in /home/nutanix/config/prism/log4j2.xml: nutanix@NTNX--CVM:~$ allssh "sed -i '/<Logger name=\"com.nutanix.prism.syslog\" additivity=\"false\" level=\"info\">/,/<\/Logger>/c\\ Step 5 - Add the Syslog and AsyncSyslogAppender appenders before the RollingFile appender in /srv/salt/security/CVM/tomcat/log4j2.xml: nutanix@NTNX--CVM:~$ allssh "sudo sed -i '/<RollingFile name=\"file\"/i\\ Step 6 - Replace the specific logger block in /srv/salt/security/CVM/tomcat/log4j2.xml: nutanix@NTNX--CVM:~$ allssh "sudo sed -i '/<Logger name=\"com.nutanix.prism.syslog\" additivity=\"false\" level=\"info\">/,/<\/Logger>/c\\ Step 7 - Check and make sure that the checksum matches on all the nodes.
nutanix@NTNX--CVM:~$ allssh sudo md5sum /home/nutanix/config/prism/log4j2.xml Step 8 - Restart the Prism service cluster-wide: nutanix@NTNX--CVM:~$ allssh 'genesis stop prism;cluster start' Workaround verification: Log in to Prism with incorrect credentials and check that entries for the unsuccessful login attempts appear in both prism_gateway.log on the Prism leader and on the syslog server. prism_gateway.log: DEBUG 2024-05-24 06:36:04,232Z http-nio-127.0.0.1-9081-exec-14 [] prism.syslog.SyslogUtils.generateDebugLevelSyslog:17 An unsuccessful login attempt was made with username: admin from IP: 10.138.239.21 and browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Syslog server side: 2024-05-24T06:36:09.149915+00:00 NTNX-xxxx-A-CVM prism_gateway: DEBUG 2024-05-24 06:36:04,232Z http-nio-127.0.0.1-9081-exec-14 [] prism.syslog.SyslogUtils.generateDebugLevelSyslog:17 An unsuccessful login attempt was made with username: admin from IP: 10.138.239.21 and browser: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36
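Step 7's checksum comparison (normally done with allssh across CVMs) reduces to verifying that every copy of log4j2.xml hashes to the same value; here is a local sketch using stand-in temp files and a hypothetical helper function:

```shell
# Sketch of step 7: verify a set of log4j2.xml copies all share one md5 hash.
# The temp files below stand in for the per-CVM copies.
check_md5_consistent() {
  unique=$(md5sum "$@" | awk '{print $1}' | sort -u | wc -l)
  if [ "$unique" -eq 1 ]; then echo "consistent"; else echo "MISMATCH"; return 1; fi
}

tmpdir=$(mktemp -d)
printf '<Configuration/>\n' > "$tmpdir/cvm1.xml"
printf '<Configuration/>\n' > "$tmpdir/cvm2.xml"
check_md5_consistent "$tmpdir"/cvm*.xml
rm -rf "$tmpdir"
```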
KB13393
Nutanix Files - FSVM Panic due to blocked tasks during cluster maintenance activities like AOS/AHV/LCM upgrade
Nutanix Files - FSVM Panic due to blocked tasks during cluster maintenance activities like AOS/AHV/LCM upgrade
FSVM panic due to blocked tasks during cluster maintenance activities like AOS/AHV/LCM upgrades. The vmcore-dmesg.txt file on the FSVM in question will have the signature below. Path to the vmcore-dmesg.txt file: /home/nutanix/data/cores/127.0.0.1.<time_stamp_of_core>/vmcore-dmesg.txt INFO: task metaslab_group_:12867 blocked for more than 240 seconds. When a task hangs for 240 seconds, the FSVM panics due to blocked tasks. During the ENG-482142 https://jira.nutanix.com/browse/ENG-482142 / ONCALL-13500 http://jira.nutanix.com/browse/ONCALL-13500 analysis, multiple potential scenarios were seen that can cause high IO latency / IO wait on the CVMs, leading to task timeouts on the FSVMs. Possible scenarios: 1) Hybrid clusters (mix of all-flash and hybrid nodes) with AES: When AES is enabled on an HDD, the rocksdb for the disk is also on the same HDD, which can cause performance issues. Refer to KB 11366 https://portal.nutanix.com/kb/11366 for more details, solutions, and supported AOS versions. 2) Hybrid clusters (mix of all-flash and hybrid nodes) with EC: EC stripes are rebuilt as part of node-rebuild scans during cluster maintenance activities like AOS/AHV/LCM upgrades. A lot of IO load is generated on these disks due to the EC stripe rebuild; if any member of a stripe is missing, all members of the stripe need to be read. With hybrid clusters, some EC stripes for an EC group might be on HDD while others are on SSD, which can lead to high latency if data is being accessed from HDD. 3) Node-failure scans: CVM down events trigger urgent oplog flush operations, which can trigger performance degradation. Refer to KB 11677 https://portal.nutanix.com/kb/11677 Symptoms: 1) High %iowait and low CPU %idle observed on the FSVM in question (mpstat logs in the sysstats folder on the FSVM): 08:31:25 PM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle The same can be seen in the Panacea report.
2) Stargate logs on the CVMs. Example: for the HDD tier disks, there are more timeouts compared to the SSD tier: Read on extent group 5676723556 from disk 3685040404 completed with error kTimeout EC stripe rebuild issues: E20220615 03:37:21.390076Z 25006 erasure_code_base_op.cc:2182] EC op 3331997 failed to read chunk starting at slice index 0 for egroup 5678281556 error kTimeout JukeBox errors: W20220615 03:36:48.671563Z 25359 nfs_read_op.cc:390] Opid: 7975706 Sending NFS3ERR_JUKEBOX for read of 262144 bytes at offset 4564066304 from inode 4:0:18918251, num_retries: 5 trace: no trace Disk read/write delays: AIO disk 3688290701 write took 1456 msecs AIO read took 10002 msecs 3) Node-failure scans in Curator logs during the upgrade. Selective scan - curator.INFO logs on the Curator master: I20220615 03:06:59.442322Z 12473 curator_task_scheduler.cc:1242] Curator job id 349 with execution id 67699 (Selective Scan) started for reasons [NodeFailure] Chronos pushing tasks to Stargate - curator.INFO logs on the Curator master: I20220615 03:32:57.632215Z 12387 chronos_master_node_peer.cc:264] service_vm_id_=3854831188; Stargate reports it can handle 56 outstanding tasks Dropped ops in the Stargate logs (could be on any of the CVMs): I20220615 03:32:59.770572Z 25005 disk_manager.cc:1792] Dropped 12 ops from disk 3685040403 4) After the FSVM panics due to the blocked tasks and reboots, errors may be noted in the zpool. The internal zpool scrub process runs automatically to rectify errors on the vdisks in the zpool; this can generate additional IO load on the PE cluster from the FSVMs. The degraded zpool and the scrub scan can be observed in afs_pool_recover.log on each FSVM: —— 2022-06-14 20:57:15.407914 zpool status: pool: zpool-NTNX-CSOVPICTXFS01-aa90a024-ef12-4cc6-a787-0793057b0726-eba4d2c8-551e-4ff5-8d52-2447ea1e838e — scan: scrub in progress since Tue Jun 14 20:57:21 2022
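The high %iowait symptom in item 1 can be spotted programmatically when reviewing collected sysstats; the sketch below (the mpstat line and the 30% threshold are illustrative, not an official limit) flags CPU rows whose %iowait exceeds a threshold:

```shell
# Flag mpstat rows whose %iowait (7th column in this layout) exceeds a threshold.
# The sample lines below are illustrative; feed real sysstats/mpstat output instead.
threshold=30
awk -v t="$threshold" '$3 != "CPU" && $7+0 > t {print "CPU " $3 ": iowait " $7 "%"}' <<'EOF'
08:31:25 PM  CPU   %usr  %nice  %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
08:31:30 PM  all   2.10   0.00  3.40    45.10  0.00   0.50    0.00    0.00    0.00  48.90
EOF
```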
If the issue resolved automatically after the maintenance activities, collect complete logs from the FSVMs and the CVMs and refer to the log signatures above for RCA. If the issue is ongoing, check for the above symptoms in the logs and engage Senior SREs or an STL for further troubleshooting.
KB14432
Data-at-Rest-Encryption: Generating key backup fails with "We were unable to generate a key backup. Please check your network connectivity and try again."
While taking a backup of keys from Prism, the operation fails with the error "We were unable to generate a key backup. Please check your network connectivity and try again."
While taking a backup of the data-at-rest encryption keys from Prism, it fails with the error below: The alert from Prism shows: Data at rest Encryption key backup warning. No previous backup taken. Before proceeding with the deeper investigation (shared later in this article), validate the following: check for any network issues between the KMS server(s) and the CVMs; check for any changes made to the current environment (container, KMS servers, etc.); confirm there are no other errors or warnings from NCC checks; and confirm you have tried taking the keys using the command line from a CVM: ncli data-at-rest-encryption backup-software-encryption-keys file-path=<path> password=<password> Steps to validate: 1) In prism_gateway.log we observe the following error: ERROR 2022-11-30 06:28:10,593Z XXXXXXX-nio-LL.LL.LL.1-9081-exec-168 [] XXXX.aop.RequestInterceptor.processRequest:247 Throwing exception from XXXXXXXXXXXistration.downloadXXXXXXXX 2) In the Mantle.INFO log, we observe that there is a stale KMS server with UUID "61eb8092-1ef3-4d27-abd1-8be5be3737d7", and the errors "no longer configured" and "error:kNetworkError" are reported: I20221104 02:02:54.135083Z 13540 mantle_server_fetch_op.cc:127] Fetching from remote KMS Note: The above KMS UUID may not be in use currently and could be stale. List the existing KMS entries to find out the valid KMS server; in this case c3aabd43-638e-4ea6-8e53-05b38d455aaa is the valid KMS in use: ncli key-management-server ls 3) Validate data-at-rest encryption by running the command below; Test Status should be "success": nutanix@NTNX-xxxxxxxxAHV201-C-CVM:xx.xxx.xxx.132:~$ ncli data-at-rest-encryption test-configuration 4) The mantle_ops print output shows entries for the valid keys as well as the stale KMS keys. In this case 61eb8092-1ef3-4d27-abd1-8be5be3737d7 is the stale entry.
nutanix@NTNX-xxxxxxxxAHV201-C-CVM:xx.xxx.xxx.132:~/data/logs$ mantle_ops print > mantle_ops.out 5) List all the keys present in Mantle ZK node, and compare if an entry exists for the stale KMS server. nutanix@NTNX-xxxxxxxxAHV201-C-CVM:xx.xxx.xxx.132:~/data/logs$ zkls /appliance/logical/mantle The entry for stale UUID 61eb8092-1ef3-4d27-abd1-8be5be3737d7 is not present in zeus config zeus_config_printer | grep -i 61eb8092-1ef3-4d27-abd1-8be5be3737d7 6) zeus_config output shows the connection status of the KMS server currently in use: digital_certificate_zkpath_list: "/appliance/logical/certs/06c606c4-b4d9-48f7-aef4-6b123de78611/svm_certs/c3aabd43-638e-4ea6-8e53-05b38d455aaa/cert0000000000"
Before proceeding with the solution, gather information on when the stale KMS server was removed and collect a log bundle from that timestamp, if available, to find why the stale entries were left behind. Engage an STL through a TH for assistance if required. Follow the workaround mentioned in Issue 2 of KB-7649. Note: Deleting a Mantle key that is currently in use can cause data loss/data unavailability. Engage a Senior SRE before applying this solution.
KB7595
Move - Migration plan "UUID" not found
Popup alert "Migration plan "<UUID>" not found" being displayed several times when accessing the Nutanix Move web GUI.
When the customer logs in to the Nutanix Move web interface, a popup alert is displayed with the following message: Migration Plan "5b656c57-c0b8-48d1-ac26-d9e6eeb41750" not found Also, in /opt/xtract-vm/logs/mgmtserver.log, the following ERROR message is displayed several times: E0613 20:23:21.913403 6 migrationplan.go:259] [Location="/hermes/go/src/mgmtserver/db/postgresql-migr This behavior can occur if the customer previously had a Move VM running an older version and uses the same browser to access the new version. It has also been observed when the customer has two (or more) browser tabs opened simultaneously on the Move web interface.
To fix, either: Close any additional browser tabs, leaving just one with Move web GUI opened.Clear browser cache, log out from Move and log in again.
KB7440
LCM Pre-check failure "test_esx_foundation_compatibility"
Pre-check added for foundation version in cluster using ESXi 6.7.
The pre-check "test_esx_foundation_compatibility" was introduced in LCM 2.2.3. It prevents an LCM update operation if the Foundation version is lower than 4.3.2 and the ESXi version is 6.7. Firmware upgrades via LCM on an ESXi 6.7 host will fail, and the host will fail to boot, if the Foundation version is lower than 4.3.2. This pre-check makes sure that the Foundation version is 4.3.2 or later before performing any upgrades. Sample failure message: Operation failed. Reason: Lcm prechecks detected 1 issue that would cause upgrade failures.
To fix the issue, upgrade Foundation to the latest version (4.4.3 at the time of writing this KB). It is always recommended to use the latest Foundation for any LCM upgrade. Once Foundation is upgraded, the pre-check will pass and you will be able to perform the LCM inventory.
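The version gate behind this pre-check amounts to a dotted-version comparison against the 4.3.2 minimum; a sketch using GNU sort's version ordering (the helper name is hypothetical, not part of LCM):

```shell
# Hypothetical sketch of the pre-check's version gate: true when the installed
# Foundation version is >= the 4.3.2 minimum, using sort -V (version ordering).
foundation_meets_min() {  # $1 = installed Foundation version, e.g. 4.4.3
  min="4.3.2"
  [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

foundation_meets_min 4.4.3 && echo "pre-check passes"
foundation_meets_min 4.3.1 || echo "pre-check fails: upgrade Foundation"
```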
KB13550
Space not reclaimed even after removing Protection Policies in Sync Replication
This Article describes an issue where the reference vdisks for stretch is not deleted even after removing the protection policy.
Space usage is not reclaimed after deleting all the Sync Replication Protection Policies and Recovery points.Stretch is disabled on the Primary site, but the Secondary site still shows the stretch parameters. nutanix@primary-site-CVM:~$ stretch_params_printer |wc -l The mcli command in the PCVM will also show the same. nutanix@PCVM:~$ mcli dr_coordinator.list Stretch vdisks will be still present on the Secondary cluster. nutanix@secondary-site-CVM:~$ vdisk_config_printer -skip_to_remove_vdisks |grep stretch_vdisk_name: -c
Deleting recovery points deletes the snapshot vdisks. Deleting the Protection Policy should disable stretch on both sites (clusters), and the reference vdisks for the sync-rep snapshots are deleted upon stretch disable. In this case, stretch was not disabled for some reason, so the reference vdisks were not deleted. Solution: Check ~/data/logs/magneto.out to find why stretch was not disabled upon deleting the Protection Policies. Execute the following command from the PCVM to trigger stretch disable on the secondary cluster: nutanix@PCVM:~$ mcli dr_coordinator.disable_all_stretch cluster_uuid=<cluster-uuid of secondary site> The reference vdisks will be deleted once stretch is disabled, which will reclaim the storage space: nutanix@secondary-site-CVM:~$ stretch_params_printer |wc -l Finding the root cause (step 1) is important here. File a TH if STL assistance is required.
KB2445
Identifying CVE and CESA patches in Nutanix Products (Internal KB)
Customers and the Support Engineering team (SE/SRE) should review the release notes. To provide additional customer guidance on CVEs and related product versions, the SE/SRE team will often need to research our CVE release roadmap in NXVD (Nutanix Vulnerability Database) or email the Nutanix Security Engineering and Response Team (nSERT) at nsert@nutanix.com.
Handling of security questions - both CVE/CESA and the more sensitive CWE - is documented on Confluence: https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=28574893#SREBug&OnCallProcess-HandlingSecurityIssues NB: SREs must refrain from discussing security matters in public forums (including the NEXT Community portal https://next.nutanix.com). Further clarity on this statement is found in the above Confluence link.
Refer to Confluence for current best practice: https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=28574893#SREBug&OnCallProcess-HandlingSecurityIssues
KB16105
Worldwide Cross Border Policy
null
The cross border policy defines rules for opportunities that cross geographic borders. It applies to all non-OEM sales opportunities that are multi-country in nature or cross geographic borders.
For more information, click here https://ntnx-intranet--simpplr.vf.force.com/apex/simpplr__app?u=/site/a0xf4000004zbskAAA/page/a0v5d00000JbPoHAAV. Point of contact: crossborder@nutanix.com
KB8826
Veritas backup fails with Errno = 6617: Unknown error 6617
null
Backup of VMs using the Veritas Nutanix plugin https://www.veritas.com/support/en_US/doc/127664414-138646136-0/index fails with Errno = 6617: Unknown error 6617. Sample error output from the Veritas Backup UI:
A high-level overview of the communication between the Veritas backup host and Nutanix is shown below. During the Nutanix cluster registration on the Veritas side, the backup admin uses Nutanix Prism user account credentials to discover the VMs. These credentials are stored permanently on the Veritas backup host. When Veritas initiates the backup of a VM, a snapshot request is sent from the Veritas Nutanix plugin to Prism via TCP port 9440. Once the snapshot is created, the snapshot UUID is sent back to Veritas. Using this UUID, the Veritas backup host initiates an NFS mount request for the container path from one of the CVMs. The CVM IP is selected based on the host on which the VM is running. Backups will fail if the communication between the Veritas backup host and the Nutanix cluster is blocked by a firewall, or if the Veritas backup host is not present in the Nutanix whitelist. To verify the communication between the components, follow the procedure below. Make sure that the Veritas backup host can reach Nutanix via ports 9440, 2049, and 111. 1) Verify reachability of the Prism port from the Veritas backup host. The "nc" command executed from the Veritas backup host should show a "Connected to" status if the port is open and reachable: backuphost@veritas:# nc -vz <prism_virtual_ip> 9440 If the port is not reachable, make sure TCP port 9440 is open in the firewall (if one is present in the environment) to allow communication between the Nutanix cluster and the Veritas backup host. 2) Verify that all CVM IPs are reachable from the Veritas backup host via ports 2049 and 111 for NFS access.
backuphost@veritas:# nc -vz <CVM_ip> 2049 backuphost@veritas:# nc -vz <CVM_ip> 111 If the ports are not reachable: Make sure that the Veritas backup host is whitelisted in Prism (refer to the documentation "Prism Web Console Guide > System Management > Configuring a Filesystem Whitelist"). Make sure TCP ports 2049 and 111 are open in the firewall (if one is present in the environment) to allow communication between the Nutanix CVMs and the Veritas backup host. 3) Verify that the iptables rules are populated on all CVMs after configuring the whitelist (the Veritas backup host IP is indicated as y.y.y.y in the example below): nutanix@cvm:~$ allssh "sudo iptables -nL |grep y.y.y.y" If the iptables rules are not populated on any of the CVM(s), refer to the internal comments to validate the scenario and apply the workaround if applicable.
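Where nc is unavailable on the backup host, an equivalent probe can be sketched with bash's /dev/tcp redirection (the function name is illustrative, and this assumes bash plus the coreutils `timeout` utility):

```shell
# Minimal TCP reachability probe, equivalent in spirit to `nc -vz <host> <port>`.
# Assumes bash's /dev/tcp support and coreutils `timeout`; name is illustrative.
check_port() {  # $1 = host, $2 = port
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 unreachable" >&2
    return 1
  fi
}
```

For example, `check_port <prism_virtual_ip> 9440` mirrors the first nc check above.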
KB11826
NDB - Oracle DB provisioning fails if the FRA is below 12780Mb
Oracle DB provisioning fails if the Fast Recovery Area (FRA) location is less than 12780Mb in size.
Nutanix Database Service (NDB) is formerly known as Era. Attempting to provision an Oracle DB with a Fast Recovery Area (FRA) space of less than 12780 MB will fail with an error message: Error in Creating Database The GUI will not show the details about the failure, but more information is available in the operation logs on the Era server. Select the failed operation in the Era GUI to get the detailed logs and click Show Logs. You can download the detailed logs by clicking the Download button. In the downloaded bundle, the detailed operation logs will be located in home/era/era_base/logs/drivers/oracle_database/provision/tmp/<operation_id>_SCRIPTS.log. Here is an example: [FATAL] [DBT-06604] The location specified for Fast Recovery Area Location has insufficient free space. The Fast Recovery Area (FRA) is Oracle-managed and can be a directory, file system, or Oracle Automatic Storage Management (ASM) disk group that provides a centralized storage location for backup and recovery files. Oracle creates archive logs and flashback logs in the Fast Recovery Area. Oracle Recovery Manager (RMAN) can store its backup sets and image copies in the Fast Recovery Area, and it uses it when restoring files during media recovery. The size of the FRA is configured during database provisioning in Era. During the database provisioning operation, Era runs a set of scripts on the database server and the script create_database.sh will create the FRA of the size that had been set in the provisioning wizard. The script will always create the FRA, and it is a fully automated procedure. An FRA will be created for each DB instance. The FRA will be located on the Oracle DB Server, for example: /u02/app/oracle/oradata/fra_ORCL13/ORCL13/.
During Oracle DB provisioning, specify an FRA (Fast Recovery Area) size of at least 12780 MB. For FRA sizing recommendations, refer to the Oracle documentation.
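The constraint can be expressed as a trivial pre-provisioning check; the sketch below is hypothetical (the helper is not part of NDB), with the 12780 MB floor taken from the DBT-06604 failure described above:

```shell
# Hypothetical pre-provisioning guard: reject FRA sizes below the 12780 MB
# minimum described in this article (per the DBT-06604 failure above).
check_fra_size_mb() {  # $1 = requested FRA size in MB
  min_mb=12780
  if [ "$1" -lt "$min_mb" ]; then
    echo "FRA size ${1} MB is below the ${min_mb} MB minimum"
    return 1
  fi
  echo "FRA size ${1} MB is sufficient"
}

check_fra_size_mb 16384
```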
KB15204
NCC Health Check: dense_node_minimum_cvm_configuration_checks
The NCC health check dense_node_minimum_cvm_configuration_checks checks supported CVM configuration and capacity for 1hr/6hr/24hr RPO protection domains.
The NCC health check plugin dense_node_minimum_cvm_configuration_checks checks supported CVM configuration and node capacity for 1hr, 6hr, and 24hr RPO protection domains or protection policies. The plugin dense_node_minimum_cvm_configuration_checks contains the following individual checks that cover specific scenarios: Check to verify if the dense node cluster configuration can support RPO under 6 hours for hybrid node(s). Check to verify if the Hybrid dense node(s) CVM configuration can support 6 or 24 hour RPO. Check to verify if the All Flash dense node(s) CVM configuration can support 6 or 24 hour RPO. Check to verify if the All Flash dense node(s) cluster configuration can support RPO under 6 hours. Running the NCC Check It can be run as part of the complete NCC check by running the following command from a Controller VM (CVM) as the user nutanix: ncc health_checks run_all Or individually as: ncc health_checks system_checks dense_node_minimum_cvm_configuration_checks You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 24 hours by default. This check will generate a severity Warning alert A111083 after 1 check failure. Starting with NCC version 5.0.0, the other three checks in the plugin will generate alerts as well. Sample output For status: PASS nutanix@cvm:~$ ncc health_checks system_checks dense_node_minimum_cvm_configuration_checks For status: WARN /health_checks/system_checks/dense_node_minimum_cvm_configuration_checks [ WARN ] Output messaging Note: Prior to NCC version 4.6.6, check 111083 was part of another plugin, dense_node_configuration_checks. For more details, see KB 7196 https://portal.nutanix.com/kb/7196.

Check ID: 111083
Description: Check whether the dense node cluster configuration can support RPO under 6 hours for hybrid node(s).
Causes of failure: 1) HDD/Total Capacity of the Node is greater than the allowed threshold for RPO less than 6 hours. 2) The CVM configuration of the Node cannot support the current RPO of under 6 hours.
Resolutions: 1) If the node's current capacity cannot support the RPO, either the RPO must be increased to 6 hours or the hardware configuration needs to be updated. 2) If the CVM configuration of the node cannot support the RPO, either the RPO must be increased to 6 hours or the hardware configuration needs to be updated.
Impact: Unsupported configuration. Cluster performance may be significantly degraded.
Alert ID: A111083
Alert Title: The current configuration of the cluster cannot support RPO under 6 hours
Alert Smart Title: The node(s) node_list do not support RPO under 6 hours. details
Alert Message: The node(s) node_list do not support RPO under 6 hours. details

Check ID: 111101
Description: Check to verify if the hybrid dense node(s) CVM configuration can support 6 or 24 hour RPO.
Causes of failure: HDD/Total Capacity of the Node is greater than the allowed threshold for 6 hour RPO. The CVM configuration of the Node cannot support the current RPO of 6 or 24 hour.
Resolutions: If the current capacity of the node cannot support the RPO, then the RPO must be increased to 24 hour or hardware configuration needs to be updated. If the CVM configuration of the node cannot support the RPO, then CVM configuration needs to be updated.
Impact: Unsupported configuration. Cluster performance may be significantly degraded.
Alert ID: A111101
Alert Title: CVM configuration of the hybrid node(s) cannot support 6 or 24 hour RPO
Alert Smart Title: The HDD/Total capacity of node(s) node_list do not support 6 hour RPO or their CVM configuration
Alert Message: The HDD/Total capacity of node(s) node_list do not support 6 hour RPO or their CVM configuration do not support 6 or 24 hour RPO. message

Check ID: 111102
Description: Check to verify if the All Flash dense node(s) CVM configuration can support 6 or 24 hour RPO.
Causes of failure: All Flash Total Capacity of the Node is greater than the allowed threshold for 6 hour RPO. The CVM configuration of the Node cannot support the current RPO of 6 or 24 hour.
Resolutions: If the current capacity of the node cannot support the RPO, then the RPO must be increased to 24 hour or hardware configuration needs to be updated. If the CVM configuration of the node cannot support the RPO, then CVM configuration needs to be updated.
Impact: Unsupported configuration. Cluster performance may be significantly degraded.
Alert ID: A111102
Alert Title: CVM configuration of the All Flash node(s) cannot support 6 or 24 hour RPO
Alert Smart Title: All Flash capacity of node(s) node_list do not support 6 hour RPO or their CVM configuration
Alert Message: All Flash capacity of node(s) node_list do not support 6 hour RPO or their CVM configuration do not support 6 or 24 hour RPO. message

Check ID: 111103
Description: Check to verify if the All Flash dense node(s) cluster configuration can support RPO under 6 hours.
Causes of failure: All Flash Total Capacity of the Node is greater than the allowed threshold for RPO less than 6 hours. The CVM configuration of the Node cannot support the current RPO of under 6 hours.
Resolutions: If the current capacity of the node cannot support the RPO, either the RPO must be increased to 6 hour or hardware configuration needs to be updated. If the CVM configuration of the node cannot support the RPO, either the RPO must be increased to 6 hour or hardware configuration needs to be updated.
Impact: Unsupported configuration. Cluster performance may be significantly degraded.
Alert ID: A111103
Alert Title: Current configuration of the cluster cannot support RPO under 6 hours
Alert Smart Title: All Flash node(s) node_list do not support RPO under 6 hours. message
Alert Message: All Flash node(s) node_list do not support RPO under 6 hours. message
The NCC dense_node_minimum_cvm_configuration_checks plugin validates whether asynchronous replication (NearSync, 1-hour, 6-hour, or 24-hour) RPO requirements can be met for high storage capacity "dense" nodes and the configured Protection Domains or Protection Policies in a Nutanix cluster. Refer to the DR documentation https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:wc-dr-nearsync-resource-requirements-r.html for the resource requirements, on both the CVM configuration and the hardware side, needed to support the desired RPO target (snapshot frequency and retention). The check's Resolutions field provides options for remediation. Generally, the options are:

Reduce the RPO target if it cannot be supported for the specific dense node hardware configuration.
Increase the CVM resources ( Memory https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-controller-vm-memory-increase-wc-t.html or vCPU) to support the workload.

CVM memory and vCPU allocation by Foundation at node imaging time is explained in the Controller VM (CVM) Field Specifications https://portal.nutanix.com/page/documents/details?targetId=Advanced-Admin-AOS:app-nutanix-cloud-infra-cvm-field-specifications-c.html documentation. Make sure there are sufficient physical cores per socket available from the hardware perspective to meet the RPO-related vCPU requirements for the CVM. If this requirement is not met at the hardware level, it is recommended to set the RPO to a higher value, i.e., take less frequent snapshots. To add vCPUs to CVMs on AHV, or for other assistance, engage Nutanix Support http://portal.nutanix.com. Otherwise, if the CVM requirements are not met, change the Protection Domain schedules https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:wc-protection-domain-wc-t.html to a higher RPO. See the recommendations in KB 6200 to ensure that the cluster can operate normally with the configured RPO targets.
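The capacity-versus-RPO logic behind this check can be sketched as follows. This is an illustration only: the 40 TiB threshold and the 64 TiB node capacity below are assumed values for demonstration; the real thresholds are AOS-version- and platform-specific (see the DR resource requirements documentation linked above).

```shell
# Illustrative sketch of the dense-node capacity check.
# node_hdd_tib and threshold_tib are assumed values, not real product thresholds.
node_hdd_tib=64        # assumed HDD/total capacity of the node, in TiB
threshold_tib=40       # assumed maximum capacity for a sub-6-hour RPO

if [ "$node_hdd_tib" -gt "$threshold_tib" ]; then
  # Mirrors the check's resolution guidance: raise the RPO or update the
  # CVM/hardware configuration.
  echo "RPO under 6 hours NOT supported: raise the RPO or update the CVM/hardware configuration"
else
  echo "RPO under 6 hours supported"
fi
```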
KB10113
Phoenix may not pick up that AHV is not configured
Phoenix exits as if there is nothing to do, without showing the Configure Hypervisor option
When running a universal Phoenix ISO after installing AHV from an ISO, Phoenix may not show the Configure Hypervisor option and exits, reporting that there is nothing to do.
Create Phoenix with AHV by following KB 3523 http://portal.nutanix.com/kb/3523.
KB4192
Installing CVM using Phoenix ISO fails with "IndexError: list index out of range"
Installing CVM using Phoenix ISO fails with "IndexError: list index out of range"
It has been observed that, as part of a single-node bare-metal installation without Foundation, after booting with the Phoenix ISO and choosing both "Configure Hypervisor" and "Clean CVM", the installation fails with the following error: IndexError: list index out of range Sample screenshot: Cause Starting from Foundation 3.5, the Phoenix ISO downloaded from the Portal no longer includes a bundled AOS release. This is why the installation fails when both the "Configure Hypervisor" and "Clean CVM" options are selected.
Use Foundation 3.5 or later (instead of Phoenix) to image/install the CVM. If Foundation fails, you can create a custom Phoenix ISO that includes an AOS build using KB 3523 https://portal.nutanix.com/kb/3523. Once a Phoenix ISO with the proper AOS is created, repeat the CVM imaging/installation procedure. Alternatively, refer to KB 3291 https://portal.nutanix.com/kb/3291, where Phoenix fails with a similar message; in that case, the cause is that "Node Position" is not set (or is left blank).
KB16043
NDB | Provisioning PostgreSQL HA Instance may fail with error message "Failed to configure keepalived"
PG-HA provisioning may fail because the keepalived dependency package iptables-services is not installed.
Provisioning a PostgreSQL High Availability (PG-HA) instance may fail with the error message: Failed to configure keepalived Follow the steps below to verify the issue: Check Software Required for PostgreSQL HA Provisioning https://portal.nutanix.com/page/documents/details?targetId=Nutanix-NDB-PostgreSQL-Database-Management-Guide-v2_5:top-postgresql-database-provision-c.html and PostgreSQL Software Compatibility and Feature Support https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-Nutanix-NDB-v2_5_3_1:v25-ndb-compatibility-postgresql-2_5_3-r.html, and ensure the installed software packages meet the requirements. You can check the installed package versions from the software profile used for this PG-HA provisioning: NDB UI > Dashboard dropdown list > Profiles > Software > select the software profile used for the PG-HA provisioning; under Packages Found, search for the package name. Example screenshot: Check if iptables-services is installed: Example screenshot: iptables-services is a dependency package used by keepalived and must be installed.
If iptables-services is not installed, install it using the procedure below. Install iptables-services on the template PG-HA DBVM: [root@localhost ~]$ yum install iptables-services Create a new software profile from the template PG-HA DBVM. Use the new software profile to retry the PG-HA provisioning. If the issue persists, contact Nutanix Support https://portal.nutanix.com/.
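The dependency relationship above can be sketched as follows. This is an illustration only: the installed-package list is simulated in a variable; on a real template DBVM you would query the actual package list (for example with `rpm -qa`) before creating the software profile.

```shell
# Illustrative sketch: keepalived requires the iptables-services package.
# The installed_pkgs list below is simulated, not read from a real system.
installed_pkgs="postgresql keepalived etcd"
required_pkg="iptables-services"

case " $installed_pkgs " in
  *" $required_pkg "*) status="installed" ;;
  *)                   status="missing"   ;;
esac

# When the package is missing, PG-HA provisioning fails at the
# "Failed to configure keepalived" step described in this article.
echo "$required_pkg is $status"
```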
KB5826
Prism Element inaccessible from Prism Central
Prism Element cannot be accessed from Prism Central because the PrismUI tar file was copied incompletely during an upgrade.
After or during an upgrade, PE generates an archive PrismUI.tar.gz and places it in a folder named after the current PE release version. The PrismUI.tar.gz is then fetched by Prism Central for each version of the registered clusters and placed in a folder named after the current PE version. PE stores its PrismUI.tar.gz in: AOS below 5.20/6.0: /home/nutanix/prism/webapps/console/$(< /etc/nutanix/release_version)/PrismUI.tar.gz AOS 5.20/6.0 or higher: /home/apache/www/console/$(< /etc/nutanix/release_version)/PrismUI.tar.gz Prism Central stores PrismUI.tar.gz in: /home/apache/www/console/$(< /etc/nutanix/release_version)/ $(< /etc/nutanix/release_version) is a variable that is replaced with the current version, for example el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff The content of the folders is the same. Such folders exist on each CVM and PCVM. Note: folders for old releases are not removed automatically. nutanix@pcvm$ ls -la /home/apache/www/console/ On the PCVM, the progress of the prefetch operation thread, which downloads the PrismUI.tar.gz file from PE and places it in /home/nutanix/prism/webapps/console, can be found in ~/data/logs/prism_gateway.log INFO 2021-05-27 13:16:39,249Z pool-20-thread-1 [] prism.init.PrefetchTask.downloadFromRegisteredPEs:309 Tarball URL https://x.x.x.200:9440/console/el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff/PrismUI.tar.gz If there is no folder on Prism Central whose name matches the registered Prism Element version, or the folder does not contain PrismUI.tar.gz, then Prism Element cannot be accessed from Prism Central. It times out with the error "Unable to access Prism Element Instance". When launching Prism Element from Prism Central, open Developer Tools (Chrome) and go to the Network tab. You should see a 404 error for index.html. Prism Element of the cluster must work when using the PE IP directly; otherwise, this KB does not apply.
The following scenarios have been seen in the field: Scenario 1. The md5sum of PrismUI.tar.gz on PC does not match the md5sum of PrismUI.tar.gz on PE. This usually happens if the data transfer was interrupted. Scenario 2. PrismUI.tar.gz file creation on Prism Element failed. As a consequence, no folder exists on Prism Central for the corresponding version. Scenario 3. A PrismUI.tar.gz file in /home/nutanix on the CVM grows in size, reaches ~4 GB, and is auto-deleted, but comes back again; accordingly, /home space usage increases to ~77%, preventing an AOS upgrade. Scenario 4. In rare conditions, the PCVM does not automatically fetch the PrismUI.tar.gz file from the Prism leader, or downloads the file but does not extract it, even after restarting the Prism service on the PCVM. Scenario 5. Deep packet inspection or the customer's network blocks the SSL certificate from being presented.
An indication of a successfully applied workaround is the PrismUI.tar.gz file appearing in the corresponding folder. For example: Prior to 5.20.x/6.x: nutanix@pcvm$ /home/nutanix/prism/webapps/console/el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff/PrismUI.tar.gz For PE 5.20.x/6.0 and above: /home/apache/www/console/el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff/PrismUI.tar.gz Prism Central should automatically retrieve the file from PE after the workaround is applied. If this is not happening, make sure that the remote connection between PC and PE is up and that the proxy settings are properly set on PC and PE. If the prefetch is not happening automatically, you can also try restarting the prism service on the PCVM to forcibly initiate the prefetch task: nutanix@pcvm$ genesis stop prism; cluster start Search for PrismUI in ~/data/logs/prism_gateway.log for additional details on why the workaround is not working. Scenario 1. md5sum of PrismUI.tar.gz on PC does not match the md5sum of PrismUI.tar.gz on PE You will see the following events in ~/data/logs/prism_gateway.log on the Prism leader CVM. Usually this means that /home does not have enough space to store the new PrismUI.tar.gz. INFO 2021-05-25 20:45:57,568Z pool-17-thread-1 [] prism.util.Utils.executeCommand:545 Executing command tar -C /home/apache/www -czf /home/nutanix/PrismUI.tar.gz --exclude=login1* console Find the Prism leader on both PE and PC: nutanix@cvm$ curl -s localhost:2019/prism/leader|awk '{print "Prism ", $1}';echo Find the current release version: nutanix@cvm$ cat /etc/nutanix/release_version Log in to the Prism leader on both PE and PC and compare the output of the md5sum commands.
Replace the folder name with the output of release_version. For PE versions below 5.20.x/6.x: nutanix@cvm$ md5sum /home/nutanix/prism/webapps/console/el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff/PrismUI.tar.gz For PE 5.20.x/6.0 and above: nutanix@cvm$ md5sum /home/apache/www/console/el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff/PrismUI.tar.gz If the size or md5sum of PrismUI.tar.gz differs between the PCVM and the prism service leader CVM on the cluster, remove the directory with the release version name from /home/nutanix/prism/webapps/console on the PCVM: nutanix@pcvm$ /bin/rm -rf /home/nutanix/prism/webapps/console/el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff Scenario 2. PrismUI.tar.gz file creation on Prism Element failed Scenario 2 as described here works only for versions below 5.20/6.0 due to ENG-388971; the fix is released in AOS 6.0/6.1/5.20.1.1. If you are seeing this issue in AOS 6.0/6.1/5.20.1.1 or above, refer to KB-11663 https://portal.nutanix.com/kb/11663. Ensure /home on PE has enough space to store the new PrismUI.tar.gz ( KB-1540 https://portal.nutanix.com/kb/1540). Find the Prism leader on both PE and PC: nutanix@cvm$ curl -s localhost:2019/prism/leader|awk '{print "Prism ", $1}';echo Run the following commands on the Prism leader CVM on the Prism Element cluster to create the PrismUI.tar.gz. For PE versions below 5.20.x/6.x: nutanix@cvm$ mkdir ~/tmp/prism Scenario 3. A PrismUI.tar.gz file in /home/nutanix on the CVM grows in size, reaches ~4 GB, and is auto-deleted, but comes back again; accordingly, /home space usage increases to ~77% Scenario 3 is fixed in ( ENG-415584 https://jira.nutanix.com/browse/ENG-415585: PC 2021.9.0.5+ and PC 2022.1+), which prevents this issue from manifesting after the upgrade on the PC side.
It is also fixed in ( ENG-415585 https://jira.nutanix.com/browse/ENG-415584: AOS 6.1+ and 5.20.4+), which prevents this from manifesting on PE after the upgrade. The fix removes the older stored binaries from prior upgrades that have built up and cause the creation of the tar file to take an excessive amount of space and time, causing this condition to occur. These binaries should never have been left behind in the first place. Find the Prism leader on both PE and PC: nutanix@cvm$ curl -s localhost:2019/prism/leader|awk '{print "Prism ", $1}';echo Run the following commands on the Prism leader CVM on the Prism Element cluster to create the PrismUI.tar.gz. For PE versions below 5.20.x/6.x: nutanix@cvm$ mkdir ~/tmp/prism For PE 5.20.x/6.0 and above: nutanix@cvm$ mkdir ~/tmp/prism Scenario 4. In rare conditions, the PCVM does not automatically fetch the PrismUI.tar.gz file from the Prism leader, or downloads the file but does not extract it, even after restarting the Prism service on the PCVM Log in to the PCVM Prism leader and create the folder(s) that correspond to the PE version in question. For instance, if the PE version is el7.3-release-euphrates-5.18-stable-08c54f4e5736c73c784340506a0f67ce8e7fa5ff and that folder is missing on the PCVM under /home/apache/www/console, create it there. Example: nutanix@pcvm$ cd /home/apache/www/console/ Note that the following command works only if the folder structure corresponding to the PE version exists on the PCVM (Prism leader) within the /home/apache/www/console folder.
nutanix@cvm$ scp -p /home/apache/www/console/$(cat /etc/nutanix/release_version)/PrismUI.tar.gz nutanix@<pcvm address>:/home/apache/www/console/$(cat /etc/nutanix/release_version)/PrismUI.tar.gz Log in to the PCVM and manually extract the PrismUI.tar.gz: nutanix@pcvm$ cd /home/apache/www/console/<folder corresponding to the PE version created in the previous step> NOTE: If PrismUI.tar.gz is automatically removed after using scp to copy it to Prism Central, ensure that the steps have been followed on all connected PEs of the same AOS version. Scenario 5. Deep packet inspection or the customer's network blocks the SSL certificate from being presented to the PC Check if the SSL certificate is being presented to the PC using the command below: nutanix@pcvm$ sudo openssl s_client -connect <VIP IP of cluster>:9440 If PE is unable to present the certificate, output similar to the below will be seen: nutanix@pcvm$ sudo openssl s_client -connect <VIP IP of cluster>:9440 Check the certificate locally from a CVM of the concerned PE; this should validate that the certificate is present: nutanix@pcvm$ sudo openssl s_client -connect <VIP IP of cluster>:9440 In this scenario, the customer's network blocks the SSL certificate from being presented, possibly due to deep packet inspection being performed on the customer's network. Customers will need to investigate this with their network teams to allow SSL certificates over their network.
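The Scenario 1 comparison can be demonstrated locally. This is a self-contained sketch using stand-in files under a temporary directory; on a real cluster, the files live under the release-version folders shown earlier, and the two md5sum commands are run on the PCVM and the PE prism leader CVM respectively.

```shell
# Local sketch of the Scenario 1 check: the PrismUI.tar.gz on PC must have
# the same md5sum as the copy on PE. Here a truncated copy simulates an
# interrupted transfer; the directory names are stand-ins only.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/pe_console" "$tmpdir/pc_console"
printf 'ui-bundle-complete' > "$tmpdir/pe_console/PrismUI.tar.gz"
printf 'ui-bundle'          > "$tmpdir/pc_console/PrismUI.tar.gz"   # truncated copy

pe_md5=$(md5sum "$tmpdir/pe_console/PrismUI.tar.gz" | awk '{print $1}')
pc_md5=$(md5sum "$tmpdir/pc_console/PrismUI.tar.gz" | awk '{print $1}')

if [ "$pe_md5" != "$pc_md5" ]; then
  echo "MISMATCH: remove the stale version folder on the PCVM and let prefetch re-run"
else
  echo "MATCH: transfer is intact"
fi
```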
KB3220
Connect-NutanixCluster PowerShell cmdlet fails to connect to Nutanix cluster
Connect-NutanixCluster PowerShell cmdlet fails to connect to Nutanix cluster. Connection fails with error: Cannot convert the "nutanix/4u" value of type "System.String" to type "System.Security.SecureString".
You want to connect to a Nutanix cluster from PowerShell, but the connection fails with the error: PS > Connect-NutanixCluster -Server "10.0.0.1" -UserName "admin" `
To connect to a Nutanix cluster using Connect-NutanixCluster, you need to explicitly convert a plain text password into a secure string. You can connect in one of the following ways: Supply login credentials interactively: PS> $getcred = Get-Credential Or convert the password string into a secure string and pass it to the cmdlet: PS> $securepassw = ConvertTo-SecureString "passw0rd" -AsPlainText -Force; PS> Connect-NutanixCluster -Server "10.0.0.1" -UserName "admin" -Password $securepassw
KB4656
Failed to snapshot NFS files for consistency group in protection domain, error: kTimeout
This article describes a situation where an NFS snapshot for a consistency group (CG) fails.
Customers may receive an intermittent snapshot failure Critical alert as shown below: Protection domain Ehealth_PRD snapshot (52965, 1472616834117690, 65605225) failed because Failed to snapshot NFS files for consistency group Ehealth_PRD_1495808494489621 in protection domain Ehealth_PRD, error: kTimeout. In this scenario, the alert is intermittent, but it can also be generated on every snapshot schedule. The RPC timeout is 15 seconds for the SnapshotVdisk RPC. In this case, the SnapshotVdisk RPC was forwarded to the CVM that was hosting the vdisk. Check if the alert time matches the string "Aborting snapshot since RPC" or "Aborting snapshot" in the Stargate logs; if so, this bug may be the cause. Also verify that the vdisk for which the snapshot failed was part of the Protection Domain for which the alert was generated. In the Stargate log snippet below, we see "E0712 18:41:34.968689 13005 admctl_snapshot_vdisk_op.cc:1316] Aborting snapshot since RPC has timed out for vdisk 418390397". In this case, the alert time matched the snapshot timeout in the Stargate error log mentioned above, and vdisk 418390397 was part of the protection domain for which the alert was generated. W0712 18:41:18.856546 13008 vdisk_chunkify_external_rewrite_op.cc:85] vdisk_id=418390397 operation_id=2676277460 Denying deduplication as there are 10 outstanding ops Note: In this case, we also observed deduplication ops being denied.
Root cause: The SnapshotVdisk RPC was waiting for background ops to complete, and since the background ops were holding reader locks on the vdisk_id, the SnapshotVdisk op was not able to get the write lock on the vdisk_id; hence, the snapshot failed. This issue is identified as a bug, and the fix tracked in ENG-100817 https://jira.nutanix.com/browse/ENG-100817 would separate the vdisk hosting lock from the read/write lock. The op-cancellation changes made in version 6.7 resolve the issue for unhost operations but were not extended to snapshots due to their destructive nature.
KB14213
Nutanix DR - Handling Planned failover failure for Synchronously replicated Volume Group
The KB is to be followed only if there are persistent issues in planned failover or if the original site went down during Planned Failover
This KB is to be followed only if there are persistent issues in planned failover or if the original site went down during a planned failover. Identify the stage in which the planned failover failed. Stages in Planned Failover:
1. Mark stretch params as migrating
2. Issue VG service migration
3. Issue Disable to clean old stretch state
4. Issue Enable to create a new reversed stretch state
If the migration failed in step 1 or 2 and the VG exists on the original Primary: Nothing needs to be done by the user; the PFO can be tried again. If the migration failed in step 3 or 4: The corresponding operation will be retried internally by the Data Protection Service on Prism Central (Magneto). Due to the error and fencing logic in PFO, the disks may go offline from the client end after the retries. In this case, the user has to bring the disks back online / reconnect them and confirm that they are working fine. Note: To run UPFO manually, the clear_migrated_updates_disabled_vg.py script is required to be run for VGs in sync replication. If, in the middle of migration, the original Primary site goes down, first try to recover it. If there is no way to recover it, refer to the following steps: 1. If the VG does not exist on the Secondary site, UPFO (Unplanned Failover) can be performed. If UPFO fails because the migration type is still set, the migration type will have to be cleared using the mcli command first, and then UPFO can be tried again. Get the cluster uuid of the secondary cluster by running the command 'ncli cluster info' from the PE SSH terminal. From the PC SSH terminal, verify that the entity is in the Synced state using the mcli command. If the entity is not in the Synced state, recovery will have to be done with the last available async snapshot. mcli dr_coordinator.get entity_uuid=<> cluster_uuid=<> To remove the migration_type flag, use the mcli command: mcli dr_coordinator.changestretchstate entity_uuid=<> cluster_uuid=<> stretch_state=kSynced migration_type=kNone 2. If the VG exists on the Secondary site, check if the cluster reference of the volume group on the PCVM is the same as that of the secondary site. a. If it is the same (the likely case): Issue disable using the mcli command from the PC SSH terminal. mcli dr_coordinator.changestretchstate entity_uuid=<> cluster_uuid=<> to_remove=true Disks will be fenced and config changes will be blocked until stretch is enabled to some other site.
If that cannot be done, i.e., no other site is available: Run the following command to unfence the disks: mcli dr_coordinator.update_fence_state_vg_disks entity_uuid=<> cluster_uuid=<> fence=false Clear the migrated_updates_disabled flag from the VG PE IDF to enable config changes. The script to clear the flag can be downloaded here https://download.nutanix.com/kbattachments/14213/clear_migrated_updates_disabled_vg.py. md5sum: 2993071177d73736d29ac5ac5bf5ee6d nutanix@cvm:~/script$ python clear_migrated_updates_disabled_vg.py <vg_uuid> b. If the cluster reference is that of the old Primary site: Ensure that the stretch params are still secondary on the PE using the stretch_params_printer command, and check that 'forward_remote_name' is set. Delete the stale VG IDF entry from the secondary PE. Issue Unplanned Failover.
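The failure-stage handling described above can be summarized as a simple decision. This is an illustrative sketch only, not a support tool: the stage number and VG location are hardcoded stand-ins, and the stage numbers refer to the four planned failover stages listed in this article.

```shell
# Illustrative-only sketch of the PFO failure-handling decision.
# stage and vg_exists_on_primary are assumed inputs for demonstration.
stage=3                      # stage at which the planned failover failed (1-4)
vg_exists_on_primary=true    # whether the VG still exists on the original Primary

if [ "$stage" -le 2 ] && [ "$vg_exists_on_primary" = "true" ]; then
  # Stages 1-2 with VG on original Primary: nothing to do, retry PFO.
  action="retry planned failover"
else
  # Stages 3-4: Magneto retries internally; verify client disks reconnect.
  action="Magneto retries internally; verify client disks reconnect"
fi
echo "$action"
```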
KB6888
Expand cluster fails with error "Node cannot be added to cluster because they are meant for single node backup solution"
Expand cluster can fail while adding NX-5155-G6 or NX-8155-G6 nodes, which are meant for the single-node backup solution.
When expanding a cluster with NX-5155-G6 or NX-8155-G6 nodes, it fails with the error below: Failure in pre expand-cluster tests. Errors: Nodes [{'ipv6_address': 'fe80::526b:8dff:fecd:106d%eth0', Cause: These nodes are considered "backup target nodes", with "is_backup_target_node" set to True.
Upgrade the cluster to AOS 5.11, 5.10.6, or later. Expand the cluster from the Prism UI. Refer to the Expanding cluster https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_0:wc-cluster-expand-wc-t.html user guide for details. Note: Refer to Expanding Cluster with vShield and NSX https://portal.nutanix.com/kb/000005353 for an added caveat.
KB9858
Alert - A160110, A160111 - FileServerTargetNodesMoreThanSource, FileServerSourceNodesMoreThanTarget
Investigating the “FileServerTargetNodesMoreThanSource” or “FileServerSourceNodesMoreThanTarget” alert on a Nutanix Files cluster. This Nutanix article provides the information required for troubleshooting the alert A160110-FileServerTargetNodesMoreThanSource or A160111-FileServerSourceNodesMoreThanTarget for your Nutanix Files cluster instance. For an overview of alerts, including who is contacted when an alert case is raised, see KB 1959.
Alert Overview Block Serial Number: 16SMXXXXXXXX The “FileServerTargetNodesMoreThanSource” alert is generated when the file server on the target site has more nodes than the source file server. If this alert is triggered, follow the steps in this article to verify whether the target site file server has a node count that does not match the source site file server; a mismatched node count causes the data inside shares to fall out of sync, because there are more volume groups on the target than on the source. Block Serial Number: 16SMXXXXXXXX The “FileServerSourceNodesMoreThanTarget” alert is generated when the file server on the source site has more nodes than the target file server. If this alert is triggered, follow the steps in this article to verify whether the source site file server has a node count that does not match the target site file server; a mismatched node count causes the data inside shares to fall out of sync, because there are more volume groups on the source than on the target. Output messaging [ { "Check ID": "File server Disaster Recovery - number of nodes on source and target file servers must be the same." }, { "Check ID": "Number of file server nodes on source and target has changed due to scale-out operation." }, { "Check ID": "Expand or reduce the node count on source and target file servers to make the node count identical on both file servers." }, { "Check ID": "Data inside shares may not remain in sync." }, { "Check ID": "A160110" }, { "Check ID": "File Server Disaster Recovery - source file server has more nodes than target file server" }, { "Check ID": "File server file_server_name Disaster Recovery - Target site expansion required. message" }, { "Check ID": "160111" }, { "Check ID": "File server Disaster Recovery - number of nodes on source and target file servers must be the same." }, { "Check ID": "Number of file server nodes on source and target has changed due to scale-out operation."
}, { "Check ID": "Expand or reduce the node count on source and target file servers to make the node count identical on both file servers." }, { "Check ID": "Data inside shares may not remain in sync." }, { "Check ID": "A160111" }, { "Check ID": "File Server Disaster Recovery - target file server has more nodes than source file server" }, { "Check ID": "File server file_server_name Disaster Recovery - Source site expansion required. message" } ]
The alert has been raised because the number of file server nodes on the source and the target changed due to a scale-out or scale-in operation on only one site. Expand or reduce the node count on the source and target file servers to make the node count identical on both file servers. GUI Based From: Prism Central The instructions below help confirm the number of nodes on the source and destination file servers for which the alert was triggered. Example: We see 4 FSVMs on the FS2 file server and 3 FSVMs on the FS1 file server, which shows that there is a mismatch between the source and destination file servers. Click on Manage → Update → Scale in/out FSVMs → increase the number of VMs to match the destination/source file server VM count. From: Prism Element The instructions below help confirm the number of nodes on the file server for which the alert was triggered. Example: Here, we can see that the file server VM count is 3. Home → File Server → Select File Server → Update → Scale in/out FSVMs → increase the number of VMs to match the destination/source file server VM count. If you need further assistance, or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. Gather the following information and attach it to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the Logbay bundle from the minerva leader using the following command: For more information on Logbay, see KB 6691 http://portal.nutanix.com/kb/6691. Note: Execute the "<afs> info.get_leader" command from one of the CVMs (Controller VMs) to get the minerva leader IP.
Using File Server VM Name: logbay collect -t file_server_logs -o file_server_name_list=<FSVM name> Using File Server VM IP: logbay collect -t file_server_logs -o file_server_vm_list=<FSVM IP> Attaching Files to the Case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.