Dataset columns (string lengths as reported by the dataset viewer):
id — string, 1 to 25.2k characters
title — string, 5 to 916 characters
summary — string, 4 to 1.51k characters
description — string, 2 to 32.8k characters
solution — string, 2 to 32.8k characters
KB16052
AAG DB provision might fail if Active Directory Sites and Services configuration is inaccurate
AAG DB provisioning might fail if the Active Directory Sites and Services configuration is inaccurate, especially if the customer's AD infrastructure has multiple domain controller servers across multiple sites.
The Always On Availability Groups (AAG) feature is a high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. In the Always On Availability Groups provisioning workflow, NDB prestages the VCO object for the AAG listener. Enabling and bringing up the VCO object...
An intra-site lookup of the computer objects, where replication is meant to happen immediately, can be configured instead of an inter-site lookup. The intra-site lookup can be configured by adding subnets against a specific site inside the AD Sites and Services configuration. These subnets refer to the subnets defined in the vLA...
KB13251
NC2 - hibernate/resume stuck due to DRC actions on CVM without S3 access
This article describes an issue where, if hibernation is started and the CVM loses access to the S3 bucket, the hibernation task gets stuck without any progress.
Starting with AOS 6.0.1, NC2 on AWS offers the capability to hibernate/resume the cluster to/from an AWS S3 bucket. An issue has been identified where, if hibernation is started and access to the S3 bucket is lost or blocked, the task will get stuck without showing any progress. Before proceeding with the sol...
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLq...
KB6150
Intel Network Adapter X520 SFP not detected after Foundation
Intel Network Adapter X520 SFP not detected after Foundation
After Foundation completes, nodes will detect the Intel Network Adapter X520: HOST# lspci |grep -i ethernet However, not all NIC ports are visible to the host. (It is common for the module to unload and the ports to disappear if an incompatible SFP is used in all ports.) CVM$ manage_ovs show_interfaces The following messages wi...
As per Intel's documentation (https://www.intel.com.au/content/www/au/en/support/articles/000005528/network-and-i-o/ethernet-products.html), "Other brands of SFP+ optical modules will not work with the Intel® Ethernet Server Adapter X520 Series". Cisco is very particular about supported hardware. For the case of the...
KB13551
DR Cloud Connect - Error while setting up AWS as Cloud Connect Target: "Failed to fetch time on AWS server. Please very DNS/nameserver setting."
AWS remote site connection SSL errors due to firewall/IDS/IPS.
The Cloud Connect feature helps you back up and restore copies of virtual machines and files to and from an on-premise cluster and a Nutanix Controller VM located on an Amazon Web Service (AWS) cloud. The Nutanix Controller VM is created on an AWS cloud in a geographical region of your choice.While trying to set up AWS...
The customer needs to work with their security team to ensure communication between the cluster IPs and AWS is not being affected by external IDS/IPS/firewall devices.
KB4502
NX Hardware [Memory] – Alert - A1052 - RAMFault
Investigating RAMFault issues on a Nutanix cluster.
This article provides the information required for troubleshooting the alert RAMFault for your Nutanix cluster. Alert Overview The RAMFault alert occurs when the amount of physical memory detected in a node is less than the amount of memory installed in a node. This situation arises if: A DIMM has failed in the n...
If this is a known issue where memory has been deliberately removed from the node or if a DIMM has failed, then you can run the following command to update the configuration with the increased or decreased memory. If memory has been increased: nutanix@cvm:$ ncc health_checks hardware_checks ipmi_checks ipmi_sensor...
KB11000
NCC Health Check: async_and_paused_vms_in_recovery_plan_check/async_and_paused_entities_in_recovery_plan_check
The NCC health check async_and_paused_vms_in_recovery_plan_check introduced in NCC 4.2.0 is used when a witness-configured Recovery Plan has async and/or break VMs. The check is expected to fail if a Recovery Plan has async or break entities.
The NCC health check async_and_paused_vms_in_recovery_plan_check introduced in NCC 4.2.0 is used when a witness-configured Recovery Plan has async and/or break VMs. The check is expected to fail if a Recovery Plan has async or break entities. Running the NCC check: This check can be run as part of a complete NCC health check:...
If the check fails, follow the troubleshooting steps below: Remove VMs not protected by SyncRep from the Recovery Plan. Create a new Recovery Plan for those VMs, or make sure to synchronously protect the identified VMs in a Protection Policy. In case the above-mentioned steps do not resolve the issue, consider e...
KB2994
HW: LSI 3008 Firmware Manual Upgrade Guide
LSI 3008 controller firmware has concerns which can cause system hangs - Applies to: NX-8150-G3
It was determined that there are significant firmware deficiencies with the LSI 3008 firmware releases prior to PH09. They have been determined to be one cause of system hangs seen on the NX-8150-G3. The current recommended FW version for the NX-8150-G3 is PH14. LSI disk controller firmware upgrade: for some drive instabili...
Steps to manually update the LSI FW: 1) Verify the version of the LSI before the upgrade. a) Hyper-V: From CVM of the node: winsh "cd \Program Files; cd Nutanix\Utils ; .\lsiutil.exe 0" b) ESXi or AHV: From CVM of the node: sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil 0 Example output: nutanix...
KB16291
Nutanix Files - File Server Clone operation fails as external interfaces are updated before checking NVM RPC server is UP
File Server clone operations fail as external interfaces are updated before checking NVM RPC server is UP.
The log signature observed in the minerva_nvm.log (/home/nutanix/data/logs/minerva_nvm.log) on the minerva_cvm leader for the failed restore task. Find the minerva_cvm leader first: nutanix@CVM:~$ afs info.get_leader Search the log file: nutanix@CVM:~$ less minerva_cvm.log | grep -B6 -i "File-server restore task failed...
This issue is resolved in File Server version 5.0. If this scenario is encountered in an earlier File Server version, contact Nutanix Support http://portal.nutanix.com/ for assistance.
KB4481
Upload of whitelist fails with error "Upload whitelist operation failed with error: Invalid whitelist. Missing field 'last_modified"
In Prism, uploading the whitelist fails with error "Upload whitelist operation failed with error: Invalid whitelist. Missing field 'last_modified"
In Prism, uploading the whitelist fails with the following error. Upload whitelist operation failed with error: Invalid whitelist. Missing field 'last_modified
The ISO whitelist is different from the JSON file of AOS metadata. Download the ISO whitelist from https://portal.nutanix.com/#/page/Foundation
KB15637
Nutanix Kubernetes Engine - How to configure etcdctl in an NKE Kubernetes cluster
The etcdctl command may be used to list etcd members, check member health, and list member status, among other operations; however, etcdctl requires endpoint and certificate variables be passed via CLI or environment variable, or the command will fail. This article explains how to specify these variables.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon. In an etcd VM deployed as part of an NKE Kubernetes cluster, the etcdctl command provides a way to interact with the etcd datastore. etcdctl may be used for operations such as viewing etcd cluster members and checking member health; however, etcdctl commands ...
The variables may either be exported as environment variables, or set on the CLI at runtime.To include the variables on the CLI, specify --endpoints, --key, --cert, and --cacert as shown in the following: [nutanix@etcd ~]$ sudo etcdctl member list --endpoints=https://<etcd-0 IP>:2379,https://<etcd-1 IP>:2379,https://<e...
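The export-once approach described above can be sketched as follows. ETCDCTL_API, ETCDCTL_ENDPOINTS, ETCDCTL_CACERT, ETCDCTL_CERT and ETCDCTL_KEY are standard etcdctl v3 environment variables; the IP addresses and certificate paths below are placeholders, not the actual paths used in an NKE etcd VM.

```shell
# Export the connection settings once so that later etcdctl invocations
# do not need the flags repeated. All values below are placeholders.
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"
export ETCDCTL_CACERT="/etc/etcd/ssl/ca.pem"        # placeholder path
export ETCDCTL_CERT="/etc/etcd/ssl/member.pem"      # placeholder path
export ETCDCTL_KEY="/etc/etcd/ssl/member-key.pem"   # placeholder path

# With the variables exported, the CLI flags can be dropped, e.g.:
#   sudo -E etcdctl member list
echo "etcdctl endpoints: ${ETCDCTL_ENDPOINTS}"
```

Note that `sudo -E` is needed so the exported variables survive the switch to root when running etcdctl with sudo.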
KB14395
Prism Central GUI page does not load as expected.
Users are not able to log in to the Prism Central GUI; instead of the normal GUI login page, lines with directory names are listed
Users are not able to log in to the Prism Central GUI; instead of the normal GUI login page, lines with directory names are listed. Example of a blank page or directories listed instead of the expected GUI login page
This issue is observed when the contents of the directory /home/apache/www/console have been removed or modified. Follow these steps to identify if the files have been modified: Search in the history events of the relevant PC VM for events related to this directory nutanix@CVM$ panacea_cli show_bash_history | egrep...
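As a sketch of the history-search step above, the following greps a simulated `panacea_cli show_bash_history` output for destructive commands touching /home/apache/www/console. The history lines are fabricated sample data, not real panacea output.

```shell
# Fabricated sample of bash-history output (illustration only)
cat > /tmp/bash_history_sample.txt <<'EOF'
2023-01-10 10:01:02 ls -l /home/nutanix
2023-01-10 10:05:44 rm -rf /home/apache/www/console
2023-01-10 10:06:01 cluster status
EOF

# Flag rm/mv/cp commands that touched the console directory
grep -E '(rm|mv|cp).*/home/apache/www/console' /tmp/bash_history_sample.txt
```

A match here would point to the moment the directory contents were removed or modified.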
KB13487
Prism Central: SAML authenticated users getting 403 Access Denied error
When there are changes to the SAML provider (e.g., Okta), some users' UUIDs may still be tied to the old SAML provider.
When there are changes to the SAML provider (e.g., Okta), some users' UUIDs may still be tied to the old SAML provider. This will cause these users to get a 403 Access Denied error when trying to log into Prism Central. The fix below is only for MSP-enabled clusters.
1) Run the below command in a PC VM to identify the user's UUID. nutanix@PCVM:~$ nuclei user.list count=1000 | grep -i <user name> Example: nutanix@NTNX-172-23-22-254-A-PCVM:~$ nuclei user.list count=1000 | grep -i test.user@nutanix.com 2) Use the UUID above and run the below command in a PC VM to verify that the user...
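The UUID-lookup step can be illustrated with a small shell sketch. The user list below is fabricated sample data, and the exact column layout of real `nuclei user.list` output may differ.

```shell
# Fabricated sample of `nuclei user.list` output (columns: name, UUID, state)
cat > /tmp/nuclei_users.txt <<'EOF'
test.user@nutanix.com   c0ffee00-1111-2222-3333-444455556666   ACTIVE
other.user@nutanix.com  deadbeef-aaaa-bbbb-cccc-ddddeeeeffff   ACTIVE
EOF

# Case-insensitive match on the user name, then pull the UUID column
uuid=$(grep -i 'test.user' /tmp/nuclei_users.txt | awk '{print $2}')
echo "user UUID: ${uuid}"
```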
KB13468
Nutanix Move | Disk access error: Cannot connect to the host
When migrating from ESXi, a plan may fail with the error "Disk access error: Cannot connect to the host". srcagent.log and diskreader_vddk601.log should be checked to investigate the cause. An expired SSL certificate may be the cause of the error.
A migration plan may fail at the start with the following error. Disk access error: Cannot connect to the host It indicates that Move cannot connect to the ESXi for some reason.
First, Move must be able to communicate with vCenter Server on port 443, ESXi hosts on ports 902 and 443, and AHV on port 9440. If there is no problem there, check /opt/xtract-vm/logs/srcagent.log for the cause. The log may show the following error. server: PrepareForMigration for taskid 5707cdd6-230c-4bad-9874-59c5927c2fdc complet...
KB12399
Nutanix Files: Troubleshooting CFT third-party backups
How to troubleshoot incremental third-party backups in Nutanix Files
There are cases when we need to work in parallel with third-party backup vendors (Commvault in this example) on Nutanix Files to troubleshoot the backup process. In this scenario, Commvault during its incremental backup is not able to find any changed files and is not backing up anything. From the Commvault log we will se...
1. Check Ergon tasks for the diff marker. This task is initiated to understand whether any change was made to the files since the last snapshot. In the example below, all the tasks have succeeded. ecli task.list operation_type_list=SnapshotDiffMasterTask 2. If one of the tasks is failing, check in the minerva_nvm.log and we ...
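To make the diff-marker check concrete, here is a sketch that scans a simulated `ecli task.list` output for SnapshotDiffMasterTask entries that did not succeed. The task rows are fabricated; real ecli output has more columns.

```shell
# Fabricated sample of `ecli task.list` output (columns: UUID, operation, status)
cat > /tmp/ecli_tasks.txt <<'EOF'
aaa-111 SnapshotDiffMasterTask kSucceeded
bbb-222 SnapshotDiffMasterTask kSucceeded
ccc-333 SnapshotDiffMasterTask kFailed
EOF

# Count diff-marker tasks that did not succeed
failed=$(awk '$2 == "SnapshotDiffMasterTask" && $3 != "kSucceeded"' /tmp/ecli_tasks.txt | wc -l)
echo "failed diff-marker tasks: ${failed}"
```

A non-zero count would be the cue to move on to step 2 and inspect minerva_nvm.log.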
KB15299
3rd party backups may fail when Container vdisk migration is in progress
3rd party backups may fail when Container vdisk migration is in progress
3rd party VM backups may fail when Container vdisk migration is in progress.For example, Cohesity VM backup runs may fail with error: Unknown snapshot state string On the Nutanix cluster, a new task to take a snapshot is created around the same time as the backup task and may fail with error: error_code": 148, "error_...
Backups should be either re-run manually or should be scheduled to run after the container vdisk migration tasks are completed successfully. The migration tasks can be verified to be running by either checking the Prism > Tasks page or from command prompt: ecli task.list include_completed=false limit=4000 For more ...
KB13851
Alert - A160159 & A160160 - File Server Volume Group Configuration Checks
Two alerts to verify that a File Server's Volume Group configuration is present and correctly configured.
This Nutanix article provides the information required for troubleshooting the alert file_server_vg_check for your Nutanix Files cluster.Alert overview The file_server_vg_check is generated when a Nutanix Files Server volume group configuration is missing or inconsistent. Sample alert Block Serial Number: 23SMXXXXXXX...
Troubleshooting This alert is triggered when there is a change in the required Volume Group (VG) for Files. Since this VG is set up with Files and should not be altered, this alert is often an indicator of another issue. Resolving the Issue: Check Prism for any other alerts, correcting them where possible. If no other ...
KB14037
Nutanix Files - 3rd Party incremental backup fails for Nutanix File Share
3rd-party incremental backups fail on home shares hosted on Nutanix Files
When using 3rd Party backup software, you may observe that the incremental snapshots fail for certain file shares. However, you will notice that the full snapshots are complete. For example on Hycu, you will see the following signature of task failures: Name: Home share On the cluster, you will see many SnapshotDiffIn...
This is a known issue due to a potential ZFS leak, and it is fixed in Files 4.2.1. Have the customer upgrade to Files 4.2.1.
KB13366
LCM upgrades fail with "Stream Timeout" when using Dark Site local web server
If the local web server is not available during LCM operations then upgrades and inventory will fail with the message "Stream Timeout"
LCM Inventory and upgrades using a local web server for Dark Site upgrades fail with the red banner message "Stream Timeout". This message appears if there is a problem accessing the web server. To confirm, check the genesis.out log on the LCM leader. To find the LCM leader, log on to the CVM as the "nutanix" user: nutan...
Ensure the Dark Site web server at aa.bb.cc.dd is active and reachable from the CVM IP addresses.Upload the required update bundles per the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=LCM
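The reachability check can be sketched with a plain TCP probe from the CVM. `check_url` is a hypothetical helper (it uses bash's /dev/tcp pseudo-device), and the address below is a demo value chosen so the probe fails quickly; substitute the actual dark site web server IP and port.

```shell
# Hypothetical helper: returns 0 if a TCP connection to host:port
# succeeds within 5 seconds (bash /dev/tcp pseudo-device).
check_url() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Demo values (127.0.0.1 port 1) chosen so the probe fails fast;
# replace with the dark site web server address, e.g. aa.bb.cc.dd 80
if check_url 127.0.0.1 1; then
  echo "web server reachable"
else
  echo "web server unreachable"
fi
```

If the probe fails from any CVM, fix network access to the web server before retrying the LCM inventory.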
KB14264
Curator scans failing intermittently with Medusa error kResponseTooLong
Curator Full, Partial and Selective scans can fail intermittently with Medusa error kResponseTooLong on AOS versions 6.5.x
Symptoms: In AOS 6.5.x releases, Curator Full, Partial and Selective Scans can fail intermittently with Medusa error kResponseTooLong and get marked as kCanceled in the 2010 page. Note: Regular I/O workflow on the cluster is not impacted by this issue. Verification: Check the last successful scans to see the timestamps of...
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.go...
KB8291
Unmount NGT stuck tasks - Safely Deleting Hung or Queued Tasks
Unmounting NGT may fail, resulting in a queued NGT task, Safely Deleting Hung or Queued Tasks
It is possible that ejecting the NGT ISO from a guest VM will fail, resulting in a queued task that never completes. nutanix@cvm$ ecli task.list include_completed=false
Attempt to identify the root cause of the hung unmount NGT tasks PRIOR TO deleting them. Collect a full log bundle from the task create time ("create_time_usecs", which can be found in "ecli task.get <task-uuid>"). RCA will not be possible if the logs have rolled over or are unavailable from the initial time that the u...
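The "create_time_usecs" value is a Unix timestamp in microseconds; converting it to a readable date frames the log-collection window. The sketch below assumes GNU date, and the timestamp is a sample value, not one taken from a real task.

```shell
# Sample create_time_usecs value (microseconds since the Unix epoch)
create_time_usecs=1600000000000000

# Convert to seconds, then to a readable UTC date (GNU date syntax)
secs=$((create_time_usecs / 1000000))
task_created=$(date -u -d "@${secs}" '+%Y-%m-%d %H:%M:%S UTC')
echo "task created at: ${task_created}"   # -> 2020-09-13 12:26:40 UTC
```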
KB11688
Re-enabling bridge_chain on AHV when Flow Network Security or IPFIX is in use may require additional actions
When re-enabling bridge_chain on AHV after it has previously been disabled whilst either Flow Network Security or IPFIX features were in use, a service restart may be required to refresh commit rules in dmx.
Re-enabling bridge_chain on an AHV cluster may display the following message: nutanix@cvm$ manage_ovs enable_bridge_chain This is because when re-enabling bridge_chain on AHV after it has previously been disabled whilst either Flow Network Security (FNS) or IPFIX features were in use, a service restart may be require...
Review the cluster's health and ensure resiliency is good. Follow the AHV Administration Guide / Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html before restarting any services. If there are any issues with the cluster health, firs...
KB11729
Powered-off VMs on AHV 20170830.x may disappear from the VM list in Prism
Powered-off VMs on AHV may disappear from the VM list in Prism
A VM may disappear from the VM list in the Prism GUI after shutting it down. The VM won't be listed in ncli or acli either: nutanix@cvm:~$ ncli vm ls name=<VM_name> The problem is seen on clusters running AHV 20170830.x and OVS versions older than 2.8.x. To confirm that you are hitting this issue, run the following c...
If the VM is present, run the following command to power on the VM: nutanix@cvm:~$ acli vm.on <VM_name or VM_UUID> Perform an LCM https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide-v2_4:Life-Cycle-Manager-Guide-v2_4 inventory and upgrade to the latest LTS/STS AOS and AHV release to pr...
KB12786
Prism Central AD authentication fails due to unexpected security policy change on the AD server
It is possible that secure sites using ID Based Security have modified LDAP to work over port 389 and to use the simple bind authentication method to fix FEAT-13069. This requires an exception in any STIG that might be applied, and might not be allowed in default Active Directory configurations. It is likely any IT adm...
Because ID Based Security requires LDAP to use port 389 (see FEAT-13069), and because we only use simple bind for LDAP authentication, users must modify the security policies in Active Directory to allow ID Based Security to work correctly. AD authentication will suddenly stop working if a new STIG was applied, and this...
Log examples are not available. One way to investigate is to run Wireshark on the AD controller (configure LDAP in Prism to point only to the IP of the AD controller), then capture all traffic to Prism Central's VIP. In the Wireshark filter, you can type "ip.addr==x.x.x.x" where "x.x.x.x" is the VIP for the Prism Centra...
KB12046
Cluster admin AD users are not allowed to pause/resume synchronous replications
This article discusses an issue wherein AD users with the Cluster Admin role are not allowed to pause/resume synchronous replications
Cluster Admin AD users are not allowed to pause/resume synchronous replications. The options "Pause Synchronous Replication" and "Resume Synchronous Replication" are not shown for such users. This option (VM Summary > More > Pause Synchronous Replication) is available for User Admin.
Ideally, the Pause/Resume Synchronous Replication option should also be available for the Cluster Admin role. This is tracked under ENG-380946 and will be fixed in pc.2021.9 (QA verified).
KB9227
Stuck Aplos tasks "create_vm_snapshot_intentful" & "delete_vm_snapshot_intentful"
Stuck Aplos tasks "create_vm_snapshot_intentful" & "delete_vm_snapshot_intentful" become orphaned due to intent_specs being deleted
NOTE: For both scenarios, before taking any action, ensure you run diag.get_specs and check for matching specs for every create/delete vm_snapshot_intentful task. Scenario 1: Stuck Aplos tasks "create_vm_snapshot_intentful" & "delete_vm_snapshot_intentful" become orphaned due to intent_specs being deleted. This issue ha...
NOTE: These scripts cannot be used for any other type of stuck task that is missing its intent_spec. DO NOT abort a task that has intent_specs unless Engineering approves it. You can attempt an RCA of the stuck tasks PRIOR TO deleting them by: Collecting a full log bundle from the task create time ("create_time_usecs") w...
KB1941
HW: Disk Debugging Guide
Internal KB - This article gives guidance on how to debug disk-related issues
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.go...
"Could not select metadata disk" (seen during a cluster start) What does this error message mean? If Genesis is starting services and detects that no metadata disk has been chosen on the local node, Genesis will try to pick a metadata disk from the disks that are mounted in /home/nutanix/data/stargate-storage/disks ....
KB16401
Flow Virtual Networking (FVN) VPN/Network Gateway Troubleshooting
This article provides basic troubleshooting steps to diagnose a Nutanix Flow Network Gateway
As per the Flow Networking Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Flow%20Virtual%20Networking, this is the function of a network gateway: "A network gateway connects two networks together, and can be used in both VLAN and VPC networks on AHV. In other words, you...
Verify the communication on the necessary ports for Flow Virtual Networking (FVN) from Prism Central (PC) to AHV nodes and vice versa: See the required port connectivity in the Portal documentation https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protoc...
KB13646
How to link accounts in the Nutanix Support Portal
This article describes the process for linking accounts in the Nutanix support portal.
The linked accounts feature provides Nutanix Partners/ASPs access to their customers' portal accounts to open and view cases or manage assets and licenses on their behalf. This feature can also be used on customer accounts, specifically in scenarios where there are multiple subsidiary accounts of the same or...
Creating a Linked Account: Upon receiving a request to link accounts, obtain written approval from the account owners of both accounts and save it in the related case. Describe the complete parent-to-child relationship in the ticket as well. Parent – Child relationship: Parent Account = Partner/ASP account...
KB10754
Alert - A130355 - VolumeGroupRecoveryPointReplicationFailed
This Nutanix article provides the information required for troubleshooting the alert VolumeGroupRecoveryPointReplicationFailed for your Nutanix cluster.
Alert Overview The VolumeGroupRecoveryPointReplicationFailed alert is generated when the cluster detects any issues that prevent the replication of the Volume Group Recovery Point. Sample Alert Block Serial Number: 16SMXXXXXXXX From NCC 4.6.3 onwards [ { "130355": "Volume Group Recovery Point replication ...
Troubleshooting and Resolving the Issue: 1. Check the network connection between the Primary and the Recovery Availability Zone. Log in to the Primary or Destination Prism Central. Go to Administration -> Availability Zones -> Make sure that the Availability Zone is reachable. Alternatively, log in to the Prism Central console an...
KB8675
Cannot plug out the Phoenix (or other) ISO from the IPMI
Sometimes, when a user mounts an ISO on the IPMI of a host, it is kept mounted and cannot be unplugged from a different workstation.
Sometimes, when a user mounts an ISO on the IPMI of a host, it is kept mounted and cannot be unplugged from a different workstation. It can happen that, for example, the person who mounted the ISO forgot to unmount it and left the office. Their colleagues may then be in a situation where the host will keep re...
To release the ISO device mount, reboot the IPMI unit. To do that, log in to the IPMI interface and go to Maintenance - Unit Reset. The Unit Reset will simply reboot the IPMI interface. It will not reboot the host, and it is an absolutely safe thing to do.
KB16595
Updating vCenter Server TLS Certificate Thumbprint in DKP
Updating vCenter Server TLS Certificate Thumbprint in DKP
When using DKP to deploy Kubernetes clusters in a vSphere environment with self-signed certificates, the TLS thumbprint must be trusted https://docs.d2iq.com/dkp/2.4/vsphere-quick-start#id-(2.4)vSphereQuickStart-CreatetheDKPclusterdeploymentYAML, otherwise the cluster-api vSphere provider won’t be able to communicate ...
KB11930
Steps to analyze and troubleshoot sporadic increases in SSD utilization
This KB contains the steps to analyze and troubleshoot sporadic increases in SSD utilization caused by VM I/O (heavy writes), especially if those alerts happened in the past and have since been resolved
This internal KB explains the steps to analyze and troubleshoot sporadic increases in SSD utilization caused by VM I/O (heavy writes). Alert description: The NCC check with Alert ID A1005 checks the following conditions (check interval 2700 seconds = 45 minutes). The space usage over 90% must be true for at least 10...
There are multiple options to mitigate the issue. Discuss with the customer the load of these VMs and explain that the load on these SSDs is a result of the heavy read or write pattern. Decrease the NCC health check frequency from every hour to every 2 hours. This will help avoid the alerts, especially if space usage goes...
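The 90% alert condition can be illustrated with a tiny awk filter over fabricated per-SSD usage numbers (the device names and percentages below are sample data, not output of any Nutanix tool):

```shell
# Fabricated per-SSD usage: device name, percent used
cat > /tmp/ssd_usage.txt <<'EOF'
sda 72
sdb 93
sdc 91
EOF

# Flag devices above the 90% threshold that the A1005 check alerts on
awk '$2 > 90 { print $1 " at " $2 "% exceeds the 90% threshold" }' /tmp/ssd_usage.txt
```

Only devices above the threshold are printed, mirroring how the check considers an SSD in violation.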
KB15780
Nutanix Files - Files crashing due to DSIP unavailable
A Nutanix Files server may experience a crash if the data services IP (DSIP) is not available for a long period of time. This KB shows an example of investigating such a situation.
Nutanix Files relies heavily on communication with the Nutanix cluster storage over the data services IP (DSIP). In case such communication is not successful within the current limit of 240 seconds, FSVMs may experience crashes. Below is an example of crash dumps created after FSVMs crash: nutanix@NTNX-A-FSVM:~/data...
The following analysis shows why DSIP 10.0.254.49 was not available. CVM .14, which was the NFS master and hosting DSIP 10.0.254.49, was experiencing a low-memory condition, per alerts in the cluster: nutanix@NTNX-B-CVM:10.0.254.17:~$ ncli alert history duration=90 |grep 'Main memory usage in' -B1 -A4 In meminfo systats we can...
KB15814
MSP Controller upgrade failing on Scale-Out PCVMs with VLAN enabled CMSP
After upgrading to pc.2023.3 version, MSP Controller upgrade can fail on Scale-Out PCVMs if CMSP was deployed with VLANs instead of default VXLANs
Problem description: After upgrading to pc.2023.3, the MSP Controller upgrade may fail on Scale-Out PCVMs if CMSP was deployed with VLANs instead of the default VXLANs, due to a race condition between the eth2 NIC removal operation and IAMv2 infra availability. This is a race condition between IAMv2 infra being ready to pro...
Workaround: If you find that the problem was caused by the race condition described above, restarting msp_controller to switch the leader will retrigger CTRLUPGRADE, which should then finish successfully. Find the msp_controller leader: nutanix@pcvm:~$ panacea_cli show_leaders | grep -i msp SSH to the msp_controller leader PCVM IP and ...
KB10544
LCM inventory failing because the httpd service fails to start
In some corner scenarios, the httpd service may be in an error state, causing LCM inventory to get stuck and Prism not to load.
LCM auto inventory fails continuously, and the following is seen in the lcm_op.trace file: 2020-11-12 02:53:37,359 {"leader_ip": "10.162.17.20", "event": "Inventory operation enqueued", "root_uuid": "80bdad30-aa64-49aa-8461-1337831ed92d"} lcm_ops.out on CVMs that have the issue will contain the following entry: 2020-11-12 04:56:28 ...
To resolve this issue, we need to regenerate the shared memory segments for httpd. For more information about SHM files, see http://publib.boulder.ibm.com/httpserv/manual24/mod/mod_slotmem_shm.html NOTE: This workaround will need to be performed on EA...
KB7554
Critical: Cluster Service: Aplos is down on the Controller VM
Upgrading the LCM framework leads to a restart of the Aplos service. If this alert is raised after the LCM framework upgrade, you can ignore it after confirming Aplos stability.
Upgrading the Life Cycle Manager (LCM) framework through an LCM inventory involves a restart of the Aplos service. This planned service restart is done by LCM for refreshing backend table schema and is expected. You may see one or more alerts within a few minutes of each other after the LCM framework update. It is lik...
The above-described symptoms of the Aplos down alert with an LCM update are fixed in NCC 4.2.0. Upgrade NCC to the latest version to avoid these alerts. In case you have further questions, consider engaging Nutanix Support at https://portal.nutanix.com/. However, if your cluster is running an NCC...
KB2922
No 10GigE network devices found error when running Phoenix
While running Phoenix (or Foundation), there are some corner cases where Phoenix/Foundation fails with the error "No 10GigE network devices found".
While running Phoenix (or Foundation), there are some corner cases where Phoenix/Foundation fails with the error "No 10GigE network devices found" when the platform (>NX-1020) does have 10Gb interfaces. The lspci -nn command will report the 10Gb interface driver properly loaded:
These errors have been spotted in the field in some cases where customers have connected GBIC transceivers to our 10Gb interfaces. This can cause ethtool to report 10/100/1000 speeds inaccurately, causing Phoenix/Foundation to fail with the error shown in the description. However, there are officially qualified GBIC t...
KB14919
Alert - A130168 - VMNotReachable
Investigating VMNotReachable alerts on a Nutanix cluster
This Nutanix article provides the information required for troubleshooting the alert A130168 - VMNotReachable for your Nutanix cluster. Alert Overview The alert A130168 - VMNotReachable is generated when the NGT service on the VM is either not reachable or if it is unstable. Sample Alert Block Serial Number: 16SMXX...
Troubleshooting the Issue: Nutanix Guest Tools (NGT) is a software bundle that is installed inside the User Virtual Machines (UVM) to enable advanced VM management functionality via the Nutanix platform. This alert is generated when NGT service running on the user VM is unreachable, paused, or unstable. Checking the N...
KB16231
How to manually verify an account in RAFT.
This article describes how to verify an account manually in RAFT
This article describes how to verify a customer account manually. This can occur when the customer did not receive a verification email or when the one they received has expired.
Log on to RAFT (http://raft.nutanix.com) and, under the "Manage" drop-down, select the "Manage users" option. Search using the email ID of the user facing the issue, and in the "Actions" section you will see an option to verify the user account. Click "Verify" and confirm, and the account will be verified. The customer ...
KB6306
NCC Health Check: protection_rule_max_entities_per_category_check / protection_rule_max_vms_per_category_check
Raise an alert when the number of entities in a category that is associated with a protection policy is greater than the limit for the paired configuration.
NOTE: From NCC 4.3.0 onwards, protection_rule_max_vms_per_category_check has been renamed to protection_rule_max_entities_per_category_check. The NCC check protection_rule_max_entities_per_category_check / protection_rule_max_vms_per_category_check checks if the number of VMs associated with a category linked to a Prote...
The Below Limits apply to the category in a paired configuration: Resolution: Note the Protection Policy identified in the alert.From Prism Central, Load the Protection Policies page If executing this on the Leap Tenant - Go to Explore -> Protection PoliciesIf executing this on On-Prem cluster - Go to Dashboard -> ...
KB15486
NCC Health Check: invalid_node_population_check
The NCC health check invalid_node_population_check detects presence of Node D in the chassis of NX-1065-G9 (invalid population of Node D).
The NCC health check invalid_node_population_check detects presence of Node D in the chassis of NX-1065-G9 (invalid population of Node D). When run manually, it provides additional concise summary information to assist you with resolution and identification. This check alerts when a failure condition is detected to not...
Remove Node D from the chassis as soon as possible. Introduction of the 4th node per chassis and exceeding the power budget will result in node failures and cluster outage. The power supply may be sufficient and no issue may be detected initially. However, if one of the PSUs fails and the power draw exceeds 2200W, then...
KB14937
Self-Service UI shows incorrect cost and cost/hr for Applications whose VM configuration was updated directly via vCenter
This KB describes a behaviour where incorrect cost per hour is shown for an application.
Background: Self-Service (formerly known as Calm) uses Beam Showback to keep track of the cost and cost/hr for applications based on the amount of memory, storage and vCPU assigned to the VMs. To know more, refer to the Showback https://portal.nutanix.com/page/documents/details?targetId=Self-Service-Admin-Operations-G...
It is recommended not to update the VM configuration directly via vCenter ideally for Self-Service applications. An improvement is raised (tracked under CALM-35643 https://jira.nutanix.com/browse/CALM-35643) to push the config to Showback as part of the platform sync operation, whenever the VM config is updated from v...
KB15766
Move: Hyper-V - This VM is configured with Standard checkpoints.
When migrating the VM from a Hyper-V source provider, the following warning may appear if the VM is configured with standard Hyper-V checkpoints: This VM is configured with Standard checkpoints. For Hyper-V host OS version Windows Server 2016 and later, it is recommended to configure VM with Production Checkpoints for ...
When migrating the VM from a Hyper-V source provider, the following warning may appear if the VM is configured with standard Hyper-V checkpoints: This VM is configured with Standard checkpoints. For Hyper-V host OS version Windows Server 2016 and later, it is recommended to configure VM with Production Checkpoints for ...
The warning message was added in Nutanix Move to 5.1.1 or higher version. If the Microsoft Hyper-V Server version is 2016 or higher, Nutanix recommends configuring the checkpoints as production checkpoints for better migration performance. For information about the procedure to change checkpoints to production checkp...
KB11010
Adding Nutanix Objects as Primary Storage with Veritas Enterprise Vault
This article describes the steps to add Nutanix Objects (S3) as Primary Storage with Veritas Enterprise Vault.
This article describes the steps to add Nutanix Objects (S3) as Primary Storage with Veritas Enterprise Vault. Versions Affected: Nutanix Objects 3.1 and above Prerequisite: If using a self-signed certificate, add the Nutanix Objects self-signed CA certificate on the Enterprise Vault Servers. Refer to KB-10953 http:...
Adding Nutanix Objects as Primary Storage Enterprise Vault 14.1 and later supports Nutanix Objects (S3) as Primary Storage for data archiving. The below steps describe adding Nutanix Objects (S3) to a Vault Store Partition on Veritas Enterprise Vault. Launch the Enterprise Vault Administration Console. Navigate t...
KB14661
Prism Central - After upgrade to 2023.1.0.1 users unable to login, including local admin
This article describes the following situation: after a PC upgrade from 2022.6.X or 2022.9.X to pc.2023.1.0.1, users receive a 403 message after a successful login (with correct credentials).
The following scenario is possible: 1. The customer has pc 2022.6.X with IAM 3.6 enabled. 2. The customer upgrades to pc.2023.1.0.1. Users may see a 403 error after Prism Central is upgraded to pc.2023.1.0.1, including logins with the local admin account. Identification: After the upgrade to pc.2023.1.0.1, services started - at this point prism_gateway ex...
Check if IAM upgrade is running or failed in ~/data/logs/msp_controller.out:if still running - give it more time 2023-04-17T09:34:09.754Z base_services.go:596: [INFO] [msp_cluster=prism-central] svc IAMv2 current version 3.6.0.1658839862 < spec version 3.11.0.1675810195, upgrade required alternate method, check for U...
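The log check above can be sketched as a small script. This is a minimal, hypothetical example: the log path and content are reproduced from the excerpt in this KB into a temporary file for illustration, and GNU grep is assumed.

```shell
#!/usr/bin/env bash
# Illustrative only: on a real PCVM the log is ~/data/logs/msp_controller.out.
log=/tmp/msp_controller.out
cat > "$log" <<'EOF'
2023-04-17T09:34:09.754Z base_services.go:596: [INFO] [msp_cluster=prism-central] svc IAMv2 current version 3.6.0.1658839862 < spec version 3.11.0.1675810195, upgrade required
EOF

# If the "upgrade required" message is still being emitted, the IAMv2
# upgrade has not finished yet - give it more time before intervening.
if grep -q 'upgrade required' "$log"; then
  echo "IAMv2 upgrade still pending - give it more time"
fi
```

The same `grep` can be run directly against the live log file instead of the temporary copy used here.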
KB11815
Prism Central services down after upgrade
Prism Central upgrade failure
The Prism Central VM did not reboot after the upgrade. The customer force-rebooted the PC. After the reboot, services are down. + Upgrade history nutanix@NTNX-179-114-61-188-A-PCVM:~/data/logs$ cat ~/config/upgrade.history + Genesis status nutanix@NTNX-179-114-61-188-A-PCVM:~$ genesis status + Genesis restart nutanix@NTNX-179-114-61-188-...
The log signature resembles ENG-203789 https://jira.nutanix.com/browse/ENG-203789, KB 7906, where the script was failing in config_home_dir. That issue was resolved in 5.11, but the customer was running 5.19. We could not RCA the issue. Do not reboot the PCVM if you see a similar issue. Engage engineering via TH or ONCALL.
KB8671
How to determine which M.2 device failed on the node
This KB describes how to determine which M.2 drive has failed when a S.M.A.R.T error occurs on an M.2 device during boot.
Sometimes a S.M.A.R.T error for an M.2 device is reported when the node is booting up. In this situation, it is hard to determine which device (Port-0 or Port-1) has failed, causing the hypervisor/CVM to fail to boot. In most cases we have seen Port-1 fail, but rarely Port-0 also fails even if it is blank without any boot ...
Review the previous message of the S.M.A.R.T error.The number starting with I-SATA indicates which device has failed. I-SATA0 : INTEL SSDCSKJB240G7 0 indicates that Port-0 drive is broken. I-SATA1 : INTEL SSDCSKJB240G7 1 indicates that Port-1 drive has failed. Contact Nutanix Support http://portal.nutanix.com to re...
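The port mapping above can be illustrated with a tiny sketch. The sample message string is hypothetical (taken from the example in this KB), and GNU grep is assumed; the trailing digit in the I-SATA label identifies the failed port.

```shell
#!/usr/bin/env bash
# Hypothetical S.M.A.R.T boot message, as shown in this KB.
msg='I-SATA1 : INTEL SSDCSKJB240G7'

# Extract the digit after "I-SATA": 0 -> Port-0, 1 -> Port-1.
port=$(echo "$msg" | grep -oP 'I-SATA\K[0-9]')
echo "Failed M.2 device is on Port-$port"
```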
KB16500
Prism Central scheduled reports are not generated for non-UTC timezones
Prism Central scheduled reports are not being generated when a non-UTC timezone is specified in the report config.
Scenario 1Upon reviewing the generated reports within the Prism Central tab, it was discovered that no reports had been generated for the scheduled times, especially in non-UTC time zones. This has been identified as a known issue with PC version 2023.4.The following errors can be observed in the vulcan log. I0321 11:...
Nutanix Engineering is aware of the issue and a fix has been integrated into pc.2024.1. This problem is tracked in ENG-645986 https://jira.nutanix.com/browse/ENG-645986.
KB10687
Unable to add Hyper-V hosts to move appliance
Customers may see different errors when trying to add Hyper-V hosts due to an authentication issue.
When we try to install the Move agent on Hyper-V hosts using Move automatically, the following error appears on the web page. Error: Move HyperV agent automatic installation failed: Powershell command .\'move-agent-installer.exe' --o='install' --ip='10.240.157.202' --servicemd5='cdf7d8b792da9bea077818a9bad770ec' --certmd5='3ff37b...
Use the steps in KB-7932 http://portal.nutanix.com/kb/7932 to remove failed installation before trying the solutions given below.ISSUE 1:When installing the move agent using the username@domain.xyz format or just username it may not authenticate in some domains. Solution:From move appliance use the domain account in d...
KB13280
Nutanix DRaaS - Cannot recover VM from this Recovery Point - The VM has delta disks
Cannot recover VM from this Recovery Point - The VM has delta disks
Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap.VMs being protected to Xi from a Nutanix-ESXi cluster cannot be recovered in Xi with the following error: Cannot recover VM from this Recovery Point Checking the cerebro logs on the on-prem cluster, it shows app-consistent snapshots are attemp...
The issue is happening because the customer has app-consistent snapshots enabled in the Protection Policy, but the VSS snapshots are not working at the Nutanix level, so Cerebro reverts to creating VMware snapshots (delta disks). These snapshots cannot be recovered in Xi, as Xi clusters are running AHV.To solve this is...
KB16255
Node Removal Stuck - Possible Scenarios
This KB lists various possible problems during Node removal and how to resolve them.
Overview This KB lists various possible problems during Node removal and how to resolve them. Scenarios are many and with the aid of this Generic Troubleshooting KB you should be able to identify which one you've hit in a case and then proceed to the specific break-fix KB describing the solution. NOTE: There is a sep...
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.go...
KB2060
Failure to upgrade the CVM memory through vCenter
The following article explains the procedure to upgrade the CVM memory through vCenter after failure.
After increasing the memory size of a CVM, it fails to start with the following error: Failed to start the virtual machine. In the virtual machine properties, the correct memory size is displayed.
Perform the following steps to resolve the issue. Connect to the ESXi host with the CVM via SSH.Under the Local-datastore, open the ServiceVM_Centos.vmx CVM configuration file and look for the following lines. sched.mem.min = "16384" Check the following field (at the top of the vmx file). memsize = "16384" If sched...
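The comparison described above can be sketched as a small script. This is a hedged, illustrative example: the .vmx content is written to a temporary file with made-up values (on a real host the file lives under the local datastore), and GNU grep is assumed.

```shell
#!/usr/bin/env bash
# Illustrative .vmx fragment; real path is e.g.
# /vmfs/volumes/<datastore>/ServiceVM_Centos/ServiceVM_Centos.vmx
vmx=/tmp/ServiceVM_Centos.vmx
cat > "$vmx" <<'EOF'
memsize = "32768"
sched.mem.min = "16384"
EOF

# Pull the numeric values of memsize and sched.mem.min from the vmx file.
memsize=$(grep -oP '^memsize = "\K[0-9]+' "$vmx")
minres=$(grep -oP '^sched\.mem\.min = "\K[0-9]+' "$vmx")

# If the memory reservation does not match the configured memory size,
# the CVM can fail to power on after a memory increase.
if [ "$memsize" != "$minres" ]; then
  echo "MISMATCH: memsize=$memsize sched.mem.min=$minres"
else
  echo "OK: reservation matches configured memory"
fi
```

In the mismatch case, the fix per this KB is to update `sched.mem.min` in the .vmx file so it matches `memsize`.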
KB2263
NCC Health Check: check_storage_access
The NCC health check check_storage_access verifies if the storage is accessible from the host and whether essential configurations are in place on the Nutanix cluster.
The NCC health check check_storage_access verifies if the storage is accessible from the host and whether a few essential configurations are in place on the Nutanix cluster. This check was designed specifically for Hyper-V clusters but starting from NCC 3.6, this check runs on ESXi clusters as well. Hyper-VThis check...
A disabled Metro Availability protection domain can trigger the following result: FAIL: Failed to get ESXi storage information from hosts:... The Metro container is in read-only status, causing ESXi commands to hang and the check to timeout. A more detailed explanation is provided in KB-8283 http://portal.nutan...