## 1 SAP HANA Troubleshooting and Performance Analysis Guide
With SAP HANA, you can analyze data at incredible speeds, for example, with scans of 1 billion rows per second per core and join performance of 10 million rows per second. However, such results are only possible if the system is monitored and performance issues are kept to a minimum.
This guide describes the measures you can take to identify and resolve specific issues and shows you how to enhance the performance of your SAP HANA database in the following areas:
- Host resources (CPU, memory, disk)
- Size and growth of data structures
- Transactional problems
- SQL statement performance
- Security, authorization, and licensing
- Configuration
A separate guide aimed at developers is available, focusing on performance issues and steps you can take to enhance the performance of your SAP HANA database: SAP HANA Performance Guide for Developers.
## Prerequisites
- This guide assumes knowledge of the relevant functionality of the SAP HANA database (knowledge which can be gained from HANA training courses such as HA100, HA200).
- Access to the administration tool SAP HANA cockpit (or alternatively SAP HANA studio) is required.
## Overview
## Analyzing Generic Symptoms
This section of the troubleshooting guide helps you to find out about causes of generic problems such as:
- slow system-wide performance
- slow individual SQL statements
- frequent out-of-memory (OOM) situations
Furthermore, you are directed to sections of this guide that contain more specific root causes.
SAP Note references lead you to possible solutions.
Analyzing Generic Symptoms [page 8]
## Root Causes & Solutions
In this section of the troubleshooting guide, you will find concrete root causes for problems together with possible solutions. Some of the areas covered are:
- Memory Problems
- CPU Related Root Causes and Solutions
- License Issues
- Statement Performance Analysis
The referenced SAP Notes can be very useful for solving your issue, as they contain detailed explanations and step-by-step instructions.
Root Causes and Solutions [page 65]
## Tools and Tracing
This section of the troubleshooting guide presents monitoring tools which can be used for analyzing and tracing certain issues.
Tools and Tracing [page 237]
Apart from those more specific monitoring tools discussed here, there are important tools for administrators and developers in general:
- SAP HANA cockpit
Tip
For the documentation of the latest SAP HANA cockpit support package (SP), see https://help.sap.com/viewer/p/SAP_HANA_COCKPIT
- SAP HANA database explorer (see SAP HANA Database Explorer)
- SQL analyzer (see Analyzing Statement Performance)
- Support Log Assistant
The Support Log Assistant is a tool that automatically scans and analyzes text files such as logs, configuration files, or traces. It suggests solutions to known issues found in the files and highlights important details. The tool is integrated into the incident logging procedure and is also available as a standalone tool; the following link leads to a Support Portal getting-started page, which also gives direct access to the Support Log Assistant.
Support Log Assistant
## Alerts
Alert checkers run in the background and notify you of potentially critical situations arising in your system. In the SAP HANA cockpit, you can easily see in which areas you might need to take action.
Alerts Reference [page 302]
## SAP Notes
SAP Notes are used to give detailed supplementary customer support information in addition to the formal set of published documents. This troubleshooting guide includes many references to relevant SAP Notes. The
Alerts reference section, for example, gives links to corresponding notes for each system alert. Some other FAQ-format SAP Notes which you may find useful are listed here:
- SAP Note 2000003 - FAQ: SAP HANA
- SAP Note 1999997 - FAQ: SAP HANA Memory
- SAP Note 2186744 - FAQ: SAP HANA Parameters
## Guided Answers
Guided Answers is an interactive online support tool to help users diagnose and solve problems using decision trees. It covers many SAP products including SAP HANA and offers a set of step-by-step problem-solving online documents, each one designed to address a specific topic. Guided Answers is available in the SAP Support Portal at the following address:
https://ga.support.sap.com/dtp/viewer/
This troubleshooting guide includes links to specific trees where relevant. The following tree is a general high-level troubleshooting tree for SAP HANA:
https://gad5158842f.us2.hana.ondemand.com/dtp/viewer/#/tree/1623/actions/21021
## Further Resources
The following SAP HANA documents are important resources for working with SAP HANA and are often referred to in this guide:
- SAP HANA Administration Guide
- SAP HANA SQL and System Views Reference
In particular, the SAP HANA Administration Guide gives general details on using the administration tools SAP HANA cockpit and SAP HANA studio.
There is a central online portal for a variety of support resources for SAP products, available from the SAP ONE Support Launchpad Software Downloads: https://launchpad.support.sap.com/#/softwarecenter
In the SAP Community Network (SCN) you can find many support resources online, including wikis, blogs, reference materials, and so on. This SCN wiki page, for example, provides links to many specialist topics: SAP HANA In-Memory Troubleshooting Guide.
Both SAP HANA Academy and SAP Support offer YouTube channels with a wide range of support materials in video format:
- https://www.youtube.com/user/saphanaacademy
- http://www.youtube.com/user/SAPSupportInfo
## 2 Analyzing Generic Symptoms
The purpose of this section of the document is to help you to find the probable root cause of some generic problems and refer you to more detailed sections of the SAP HANA Troubleshooting and Performance Analysis Guide to proceed with your analysis.
Performance issues may be difficult to diagnose; problems may be rooted in a number of seemingly unrelated components. Checking for system alerts is a good starting point if you experience any trouble with your SAP HANA system. If the system issues an alert, refer to the Reference: Alerts section to find the part of this guide, an SAP Note or Knowledge Base Article which addresses the problem.
However, alerts are configurable (see Memory Problems for information on configuring alerts) and do not cover all aspects of the system, so problems can still occur without triggering an alert. This section therefore describes some generic symptoms which you may observe and helps you to analyze the underlying problem.
## Related Information
Memory Problems [page 65]
Alerts Reference [page 302]
## 2.1 Using the SQL Statement Collection for Analysis and Health Checks
A collection of predefined customizable statements is available for checking the database.
An extensive SQL statement collection is available for analyzing the SAP HANA database; it can be used to examine specific problems or for routine monitoring. The script collection is available as a zip file attached to SAP Note 1969700 (SQL Statement Collection for SAP HANA). Several scripts are referred to in this troubleshooting guide, but a general starting point for system analysis is to check the overall health of the SAP HANA database by running the script HANA_Configuration_MiniChecks, as illustrated below.
## Identify the Correct Version of a Script
Many of the statements are version specific; you must therefore choose the correct version of the script for the version and database revision of SAP HANA you are running. For example, if you are on SAP HANA 2.00.023 and script versions 2.00.000+, 2.00.010+, and 2.00.030+ are available, you should use the 2.00.010+ version. Script versions for releases earlier than your database release are expected to work, but you should avoid using script versions for SAP HANA releases later than the release you are using, as they may reference monitoring views or columns not yet available in your SAP HANA version.
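The selection rule above can be sketched as follows: pick the highest available script version that does not exceed the database version. This is a hypothetical helper for illustration only, not part of the SAP script collection.

```python
def pick_script_version(db_version, script_versions):
    """Pick the highest script version that does not exceed db_version.

    Versions are strings like '2.00.010+'; the trailing '+' means
    'this revision and later'. Hypothetical helper for illustration.
    """
    def key(v):
        # '2.00.010+' -> (2, 0, 10) for numeric comparison
        return tuple(int(p) for p in v.rstrip("+").split("."))

    candidates = [v for v in script_versions if key(v) <= key(db_version)]
    if not candidates:
        raise ValueError("no script version available for this database version")
    return max(candidates, key=key)

# The example from the text: SAP HANA 2.00.023 with three script versions
print(pick_script_version("2.00.023", ["2.00.000+", "2.00.010+", "2.00.030+"]))
# → 2.00.010+
```

For SAP HANA 2.00.023 this selects 2.00.010+, skipping 2.00.030+ because it targets a later release.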
## Modify a Script
Where relevant, the SQL support scripts contain sections which can be customized to enter your own parameter values; these are marked with the comment ' /* Modification section */ '. For example:
```
( SELECT /* Modification section */ '%' HOST, '%' PORT, '%' OBJECT_TYPE, ... )
```
If you want to analyze a specific time range, you can modify the time parameters using the codes described in the comment block of the script, in the section INPUT_PARAMETERS. You can use 'C-H12' BEGIN_TIME, or you can enter a specific date and time.
To do this, in place of 'C-H12' BEGIN_TIME and 'C' END_TIME, enter the date and time you want to analyze, in the format 'YYYY/MM/DD HH24:MI:SS'.
Example: To check for transaction lock waits that happened between 19/08/2022 at 2pm and 20/08/2022 at 2pm, the modification part of the script would look like this:
```
SELECT                        /* Modification section */
  '2022/08/19 14:00:00' BEGIN_TIME,
  '2022/08/20 14:00:00' END_TIME,
  'SERVER' TIMEZONE,          /* SERVER, UTC */
  ' ' SITE_ID,
  '%' HOST,
  '%' SERVICE_NAME,
  ...
```
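The BEGIN_TIME / END_TIME literals follow the format mask 'YYYY/MM/DD HH24:MI:SS'. If you generate the modification section from a script, a strftime pattern like the following produces a matching string (illustrative Python, not part of the script collection):

```python
from datetime import datetime

# 19/08/2022 at 2pm, formatted as required for BEGIN_TIME in the
# modification section ('YYYY/MM/DD HH24:MI:SS')
begin_time = datetime(2022, 8, 19, 14, 0, 0).strftime("%Y/%m/%d %H:%M:%S")
print(begin_time)  # → 2022/08/19 14:00:00
```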
## Script HANA_Configuration_MiniChecks
The output of the HANA_Configuration_MiniChecks script is illustrated here; the output report gives an overview of the overall health of the SAP HANA database. The report covers various aspects of the system, including memory, CPU, I/O, locks and other issues that may cause a performance problem. If a potentially critical issue is found, the column 'C' is marked with an 'X'. The last column of the report references relevant SAP Notes or Knowledge Base Articles that will help you to resolve the issues.
Example Output of the Configuration_MiniChecks Script
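If you export the report for post-processing, the potentially critical findings can be filtered on the 'C' column. A minimal sketch with hypothetical rows (the real report contains more columns and many more checks):

```python
# Each tuple is a simplified mini-check row:
# (check description, value, C flag, SAP Note reference)
rows = [
    ("Everything started", "yes", "", ""),
    ("Time since last data backup (days)", "12.5", "X", "1642148"),
    ("Log segments free", "95", "", ""),
]

# Keep only the rows flagged as potentially critical ('X' in column C)
critical = [r for r in rows if r[2] == "X"]
for desc, value, _, note in critical:
    print(f"{desc}: {value} (see SAP Note {note})")
```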
## Related Information
SAP Note 1969700
## 2.2 Handling SQL Errors
Errors generated when executing SQL statements are identified by a code number, which may help you find an explanation and solution to the problem.
This section gives some recommended solutions to help solve problems related to SQL errors. The topic which follows, 'SQL Error Reference', is a table of error codes which includes links to other sources of information for many of the most frequently occurring errors.
Error messages returned when executing SQL statements are identified by a number, type and description, for example:
ERROR [SQL-608] exceed maximum number of prepared statements
There are currently over 5700 SQL errors; all error codes and types are listed in the monitoring view M_ERROR_CODES. To get a complete list of errors and their descriptions, you can execute the following query:
```
SELECT * FROM M_ERROR_CODES ORDER BY CODE ASC;
```
All SQL error messages are also listed in the help topic 'SQL Error Codes' in the SAP HANA SQL Reference Guide for SAP HANA Platform (link in Related Information below).
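When handling such errors in client code, the numeric code can be extracted from the message text. A minimal sketch, assuming the bracketed '[SQL-<code>]' format shown in the example above:

```python
import re

def sql_error_code(message):
    """Extract the numeric SQL error code from a client error message.

    Assumes the '[SQL-<code>]' message format shown above;
    returns None if no code is found.
    """
    match = re.search(r"\[SQL-(\d+)\]", message)
    return int(match.group(1)) if match else None

print(sql_error_code("ERROR [SQL-608] exceed maximum number of prepared statements"))
# → 608
```

The extracted code can then be looked up in M_ERROR_CODES or in the SQL Error Reference below.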
## Knowledge Base Search
You can search for existing support documentation on a specific error in the SAP HANA Support Knowledge Base and the SAP Support community (see links in Related Information below). For example, the search results for the description of the SQL-608 error given above include references to many HAN-DB (SAP HANA Database) Knowledge Base Articles, such as:
- 2464140 - Longer running realtime replication may provide errors like "exceed maximum number of prepared statements"
- 2154870 - How-To: Understanding and defining SAP HANA Limitations
SQL errors may also be related to third party SAP tools, middleware or application clients that use HANA, and information may therefore be available under other component IDs (not only 'HAN-DB*'). The following error, for example, may occur in the context of 'SAP Access Control 12.0' between SPRO and HANA:
```
ERROR_CODE : 4.229 - ERR_PROVIDER_INVALID_PROVIDER_NAME - 'Invalid provider name'
```
The following KBA (under the component Access Request GRC-SAC-ARQ ) may be relevant but may not be immediately visible in the search results:
- 3002042 - Invalid provider name: 'SAP_PI_GRC'.'Create_User': line xx col xx (at pos xx): line x col xx (at pos xx)
If no solution can be found you may need to open a support ticket under the HAN-DB* component with the example system trace files, logs and screenshots of the error(s) for further analysis.
## Search Trace Files
You can also search the HANA service trace and service alert files (see also 'Analyzing SQL Traces'), for example: 'indexserver_saphana01.30003.024.trc', 'indexserver_alert_saphana01.trc'.
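A search like this can be done directly on the server with grep. The snippet below creates a sample trace file in a temporary directory so it is self-contained; on a real system you would point grep at the instance trace directory instead (the path and file names here are illustrative).

```shell
# Self-contained demo: create a sample trace file, then search it for an error text
tmpdir=$(mktemp -d)
echo 'ERROR [SQL-608] exceed maximum number of prepared statements' \
  > "$tmpdir/indexserver_saphana01.30003.024.trc"

# -l lists the files that contain the text; use -n instead to show line numbers
grep -l "exceed maximum number of prepared statements" "$tmpdir"/indexserver_*.trc

rm -rf "$tmpdir"
```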
For errors in BW on HANA, the following transaction codes may also be helpful:
- The ABAP Dumps in transaction code ST22
- The work process trace entry from transaction code ST11
The following articles may also be helpful when analyzing trace files:
- 2194685 - How to find work process trace for SM21 System Log or ST22 dump
- 2380176 - FAQ: SAP HANA Database Trace
- 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments
## Related Information
SQL Error Codes (SAP HANA SQL Reference Guide)
SQL Error Reference [page 12]
Analyzing SQL Traces [page 238]
SAP HANA Support Knowledge Base
## Links to Knowledge Base Articles:
SAP Note 2194685
SAP Note 2380176
SAP Note 2399990
SAP Note 2464140
SAP Note 2154870
SAP Note 3002042
## 2.2.1 SQL Error Reference
The following reference table provides links to sources of troubleshooting information for many frequently occurring SQL errors.
SQL Error Codes
| Code | Type | Description | More information |
|--------|---------------------------|-------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 4 | FATAL_OUT_OF_MEMORY | Cannot allocate enough memory | This error indicates an out-of-memory (OOM) situation. - KBA 1999997 - FAQ:SAPHANA Memory |
| 7 | ERR_FEATURE_NOT_SUPPORTED | Feature not supported | Commonpreamble to a more detailed error. Please review the error detail following the pre- amble "feature not supported: ...." and search the SAP HANAKnowledge base for existing Knowledge based articles and SAP Notes. Some examples: 2970997 - feature not supported: Invalid op- tions 2804287 - feature not supported: Create da- tabase command on SYSTEMDB only 2870701 - Error "feature not supported: global temporary table must be empty to drop" 2661916 - feature not supported: Snapshot backup not supported on a tenant database 2710915 - feature not supported: move with- out partition not allowed on partitioned table |
| 8 | ERR_INV_ARGUMENT | Invalid argument | 3104969 - DBSQL_SEMREQ_ERROR. 22 In- valid argument. (Internal KBA) |
| 10 | ERR_AUTHENTICATION_FAILED | Authentication failed | This error indicates a failure during user authen- tication - (See KBA 2399990 ) See section '11. How can tracing be activated for security topics like authorization, authentication and login?' - 2159014 - FAQ: SAP HANASe- curity |
| 11 | ERR_INV_STATE | Invalid state | 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments. |
| 12 | ERR_FILE_OPEN_FAILED | Cannot open file | Issue with a backup file in the backup catlog or a data / redo log file in the persistence directory on the HANAdata and log volumes. Check the underlying HANAservices that use persistence and their trace files for more information on the error. |
| 15 | ERR_FILE_NOT_FOUND | Cannot find file | Typically an issue with a third party backint tool: 2801380 - ERROR: [110063] The backup destination header is corrupt 2541007 - Not able to restore database from Symantec NetBackup if the backup file name contains square brackets |
| 18 | ERR_SERVICE_SHUTDOWN | Service shutting down | Indicates that a HANAservice has been sent a shutdown command either manually from <sid>adm, or a parent process like the sap- start service, daemon or nameserver. Check the HANAservice trace files for the events leading up to the error. |
| 19 | ERR_INV_LICENSE | Invalid license | 19: invalid license' is described in more detail in SAP KBA 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 20 | ERR_CON_OUTSIDE_VALID- ITY_PERIOD | Connect attempt out- side user's validity pe- riod | Connect attempt outside user's validity peri- od' is described in more detail in SAP KBA 2380176 - FAQ: SAP HANADatabase Trace |
| 21 | ERR_PERSISTENCE | Persistence error | Persistence error' is described in more detail in SAP KBA 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 128 | ERR_TX | Transaction error | SQL error 128: transaction error: exceed maxi- mumnumberof transactions' is described in further detail in KBA 2154870 - How-To: Un- derstanding and defining SAP HANALimitations |
| 129 | ERR_TX_ROLLBACK | Transaction rolled back by an internal error | This is a quite generic message that something went wrong and the transaction had to be rolled back. Check the HANAservice trace files for related detailed error messages. |
| 131 | ERR_TX_ROLLBACK_LOCK_TIME- OUT | Transaction rolled back by lock wait timeout | SAP KBAs: 3066406 - SQL code: 131" occurred while accessing table 1999998 - FAQ: SAP HANALock Analysis 2380176 - FAQ: SAP HANADatabase Trace |
| 132 | ERR_TX_ROLLBACK_RESOURCE | Transaction rolled back due to unavailable re- source | SAP KBAs: 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace 2154870 - How-To: Understanding and defin- ing SAP HANALimitations |
| 133 | ERR_TX_ROLLBACK_DEADLOCK | Transaction rolled back by detected deadlock | This type of short dump indicates a deadlock, so a cross-wise transactional lock. In most cases deadlocks are caused by inadequate application coding that results in risk of cross-wise locking of different transactions. See 1999998 - FAQ: SAP HANALock Analy- sis |
| 138 | ERR_TX_SERIALIZATION | Transaction serializa- tion failure | This error indicates that a retry for a database request was triggered, but the request is not retriable. This trace error can be linked to "138: trans- action serialization failure: Serialization failure" errors observed on client side (SAP KBA 2399990 ) |
| 139 | ERR_TX_ROLLBACK_QUERY_CAN- CEL | Current operation can- celed by request and transaction rolled back | SAP Note 2931813 - Memory Related Error or Warning Trace Messages when Cancelling a Query |
| 142 | ERR_TX_EXCEED_MAX_TX_NUM | Exceedmaxnum of concurrent transac- tions | This error is generated if the limit of configured external connections is reached. See SAP KBAs: 2154870 - How-To: Understanding and defin- ing SAP HANALimitations 1910159 - How to handle HANAAlert 25: 'Check number of connections' |
| 144 | ERR_TX_ROLLBACK_UNIQUE_VIO- LATED | Transaction rollback unique constraint vio- lated | See '19. Which error messages exist in the con- text of indexes?' - 2160391 - FAQ:SAPHANA Indexes |
| 145 | ERR_TX_DIST_FAILURE | Transaction distribu- tion work failure | Multiple , generic causes. See KBAs: 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace |
| 146 | ERR_TX_LOCK_ACQUISITION_FAIL | Resource busy and NOWAIT specified | This error indicates that an operation is per- formed with a NOWAIT option and so it termi- nates immediately in case the transactional ob- ject or record lock can't be acquired. See KBAs for more information: 1999998 - FAQ: SAP HANALock Analysis 2380176 - FAQ: SAP HANADatabase Trace |
| 149 | ERR_TX_DIST_2PC_FAILURE | Distributed transaction commit failure | This error indicates major problems on server side, so the SAP HANAdatabase should be in- vestigated. |
| 154 | ERR_TX_INDEX_HANDLE_AC- QUISITION_FAIL | failure in acquiring in- dex handle | This error indicates an index handle contention issue (SAP Note 1999998 ) accessing a mon- itoring view like M_CS_NSE_ADVISOR. SAP Note - 3125519 - Querying M_CS_NSE_ADVISOR Might Fail and Not Pro- duce Any Recommendation |
| 256 | ERR_SQL | sql processing error | [256]: sql processing error:' is a generic error and needs further investigation via theHANA service traces. 2073630 - DBIF_RSQL_SQL_ERROR SQL er- ror 256 when access table 2380176 - FAQ: SAP HANADatabase Trace |
| 257 | ERR_SQL_PARSE | sql syntax error | This error indicates a wrong syntax and you need to check the related SQL statement for correctness. In rare cases the issue can also be a conse- quence of problems on lower layers (infrastruc- ture / hardware) erroneously modifying SQL statement strings. SAP KBA 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 258 | ERR_SQL_INSUFF_PRIV | insufficient privilege | Authorization / permission problems. This ter- mination indicates missing privileges. Make sure that proper roles and privileges are assigned. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace 2159014 - FAQ: SAP HANASecurity |
| 259 | ERR_SQL_INV_TABLE | invalid table name | This error indicates that the table or view <ob- ject> isn't found. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 260 | ERR_SQL_INV_COLUMN | invalid column name | This error indicates that column <column> doesn't exist in table <table> and so the da- tabase operation <op> (e.g. 'INS' for INSERT) fails. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 261 | ERR_SQL_INV_INDEX | invalid index name | 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2142945 - FAQ: SAP HANAHints 2800008 - FAQ: SAP HANAFulltext Indexes |
| 262 | ERR_SQL_INV_QUERY | invalid query name | ODBC_SQL* queries are built-in database quer- ies (installed when the SAP HANAdatabase is created) that are used for certain ODBCdriver related actions, e.g. ODBC_SQL_COLUMNS for retrieving column metadata. The error indicates that the query isn't available. Check why it is missing or not properly accessible. |
| 264 | ERR_SQL_INV_DATATYPE | invalid datatype | 21. Are there restrictions in processing LOB col- umns?' - 2220627 - FAQ:SAPHANALOBs 2222219 - SAP HANAErrors and different Results due to Design Changes 2765864 - Update Statement Fails With Er- ror "invalid datatype: $rowid$ of invalid data types" |
| 266 | ERR_SQL_INCNST_DATATYPE | inconsistent datatype | 2486755 - "inconsistent datatype: lob type comparison" when selecting from a column with typeTEXT-HANADB 2222219 - SAP HANAErrors and different Results due to Design Changes |
| 268 | ERR_SQL_AMBG_COLUMN | column ambiguously defined | 2695943 - SQL failed with error "column am- biguously defined" 2552712 - The Error "column ambiguously defined" May Occur When a Statement is Exe- cuted |
| 269 | ERR_SQL_MANY_VALUES | too many values | This error indicates that a database operation is executed with a wrong number of arguments, e.g.: SAP KBA 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 270 | ERR_SQL_FEW_VALUES | not enough values | This error indicates that a database operation is executed with a wrong number of arguments, e.g.: SAP KBA 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 272 | ERR_SQL_DPLC_COLUMN | duplicate column name | Column name already exists in the table |
| 274 | ERR_SQL_INS_LARGE_VALUE | inserted value too large for column | This error indicates that a value was inserted in a column that is larger than the maximum column value size, e.g.: 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace 2222219 - SAP HANAErrors and different Results due to Design Changes 3018823 - Inserted Value Too Large For Col- umn 2725050 - SqlScript Execution Fails With "[274]: inserted value too large for column" |
| 286 | ERR_SQL_LONG_IDENTIFIER | identifier is too long | Identifier are names for SAP HANAobjects like table, column or user. 2154870 - How-To: Understanding and defin- ing SAP HANALimitations 2170441 - Query on SDA object fails with Er- ror "ORA-00972: identifier is too long" |
| 287 | ERR_SQL_NOT_NULL | cannot insert NULL or update to NULL | This error happens when a NULL value is in- serted into a column that doesn't allow NULL values, e.g. because of a NOTNULLor primary key constraint. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace |
| 288 | ERR_SQL_EXST_TABLE | cannot use duplicate table name | 2800007 - FAQ: SAP HANATemporary Ta- bles 2562930 - SAP HANALocal Temporary Ta- ble existence boundaries |
| 292 | ERR_SQL_FEW_ARGUMENT | wrong number of argu- ments | This error indicates that a database operation is executed with a wrong number of arguments, e.g.: 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 293 | ERR_SQL_INV_ARGUMENT | argument type mis- match | Can occur during catalog consistency checks and during HANservice removal from the topol- ogy: 2116157 - FAQ: SAP HANAConsistency Checks and Corruptions 2342394 - Unable to remove additional index server 2504698 - Removing additional indexserver services fails with error on a SAP HANAsystem |
| 301 | ERR_SQL_UNIQUE_VIOLATED | unique constraint vio- lated | This error indicates that a modification isn't possible because the key already exists. Check the existing unique and primary keys and make sure that duplicates aren't processed. 2380176 - FAQ: SAP HANADatabase Trace 2572224 - How-To: Repairing SAP HANATa- bles 2160391 - FAQ: SAP HANAIndexes |
| 302 | ERR_SQL_INV_CHAR_VAL | invalid CHARor VAR- CHARvalue | This error indicates a wrong character encoding and is typically caused by a data corruption. See SAP Note 2116157 in order to check the consistency of the database (with a particular focus on the table returning the error) and take appropriate actions to repair inconsistencies. |
| 303 | ERR_SQL_INV_DATETIME_VAL | "invalid DATE TIME or TIMESTAMP value" | This error indicates that a value <value> can't be parsed as date, time or timestamp. This problem can be caused by wrong data in under- lying tables or inadequate application design. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace 2361364 - [303]: invalid DATE, TIME or TIMESTAMP value: search table error: [6931] at- tribute value is not a date or wrong syntax |
| 304 | ERR_SQL_DIV_BY_ZERO | division by zero unde- fined | 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2945199 - search table error: [6859] Attribu- teEngine: divide by zero 2548692 - Alert in HANAStudio '[3] 304 division by zero undefined: search table error: [6859] AttributeEngine: divide by zero' |
| 305 | ERR_SQL_SINGLE_ROW | single-row query re- turns more than one row | This termination indicates that more than 1 re- cord is returned although only a single row can be used. It is typically caused by a coding error on application side. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace |
| 306 | ERR_SQL_INV_CURSOR | invalid cursor | These termination indicates a lack of memory on SAP HANAserver side. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 307 | ERR_SQL_NUM_OUT_OF_RANGE | numeric value out of range | This termination indicates an overflow while processing decimal numbers. Adjust the appli- cation, data or column precision in order to make sure that overflows no longer happen. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace |
| 308 | ERR_SQL_EXST_COLUMN | column name already exists | A CREATE or ALTER statement processing an existing table column may encounter this during a DDL operation. |
| 310 | ERR_SQL_IN_PROC | sql error in procedure | 2647960 - Checks after phase MAIN_NEW- BAS/JOB_RS_DMO_HDB_CONTENT_ACTIVATE were negative for Delivery Unit A2EESNHI_DE- LIVERY_UNIT |
| 311 | ERR_SQL_DROP_ALL_COLUMNS | cannot drop all col- umns in a table | Restriction for issuing a DROPcommandfor all columns in a database table |
| 313 | ERR_SQL_INV_SEQ | invalid sequence | 6. Which problems exist in context of sequen- ces?' - 2600095 - FAQ: SAP HANASequen- ces |
| 314 | ERR_SQL_OVERFLOW_NUMERIC | numeric overflow | This error is returned if you use a higher number precision than defined for the column. Either reduce the precision on application side or in- crease the precision in the database. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace |
| 320 | ERR_SQL_LOB_INDEX | cannot create index on expression with data- type LOB | 21. Are there restrictions in processing LOB col- umns?' - 2220627 - FAQ:SAPHANALOBs |
| 324 | ERR_SQL_EXST_SEQ | cannot use duplicate sequence name | 2134323 - Nesting error in SQLRUN_ISU_TRIGGERS: maximum depth 8 |
| 325 | ERR_SQL_ESC_SEQ | invalid escape se- quence | 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments |
| 328 | ERR_SQL_INV_FUNC_PROC | invalid name of func- tion or procedure | This error indicates that an object (e.g. table) accessed in a database request doesn't exist. Make sure that the application only accesses objects that exist on SAP HANAlevel. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANAEnvironments 2380176 - FAQ: SAP HANADatabase Trace |
| 332 | ERR_SQL_INV_USER | invalid user name | 2433879 - Restricted User getting "Invalid User Name/Password" 2354866 - "Could Not Save the Connection Data; Invalid User Name or Password" 2506777 - Unable to login after resetting the SYSTEM user's password for HANA 2538907 - Error "Logon to SAP System Host failed" when using updated password to log on HANA system 2764976 - 10: authentication failed: ERR_AUTHENTICATION_FAILED SQLSTATE: 28000 |
| 338 | ERR_SQL_ZERO_LEN_NOT_ALLOWED | zero-length columns are not allowed | HANA does not support zero-length columns. 2674281 - Error "SAP DBTech JDBC: [338]: zero-length columns are not allowed:" |
| 339 | ERR_SQL_INV_NUMBER | invalid number | This termination indicates that a number was expected but something else was found. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2972659 - Reason for error "339: invalid number" |
| 340 | ERR_SQL_VAR_NOT_BOUND | not all variables bound | This termination indicates an issue with bind variable processing. See SAP Note 2380176 -> "unbound parameter: <cnt> of <total>, <cnt> out of <total_batch> batches" for more details. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2380176 - FAQ: SAP HANA Database Trace |
| 348 | ERR_SQL_INV_DATETIME_FORMAT | invalid datetime format | 2271717 - SAP HANA SDA: Error executing query [Oracle][ODBC] Invalid datetime format. 2933606 - SDA Oracle Remote Query Fails With [Oracle][ODBC] Invalid datetime format |
| 349 | ERR_SQL_CREATE_UNIQUE_INDEX | cannot CREATE UNIQUE INDEX; duplicate key found | 2838779 - Error "duplicate key found" when trying to recreate a missing primary index |
| 359 | ERR_SQL_STR_LENGTH_TOO_LARGE | string is too long | 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2380176 - FAQ: SAP HANA Database Trace 2154870 - How-To: Understanding and defining SAP HANA Limitations 2222219 - SAP HANA Errors and different Results due to Design Changes |
| 361 | ERR_SQL_VIEW_UPDATE_VIOLATION | data manipulation operation not legal on this view | 2738610 - How to truncate or delete rows from sys.table_placement - SAP HANA 2638028 - Behavior Correction of Invalid Updatable View Creation |
| 362 | ERR_SQL_INV_SCHEMA | invalid schema name | This error indicates that the specified schema name is invalid. Make sure that you specify an existing schema name and that you are authorized to access the schema. 2535951 - FAQ: SAP HANA Users and Schemas 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2380176 - FAQ: SAP HANA Database Trace |
| 378 | ERR_SQL_INV_COLUMN_VIEW | invalid column view | This error is known in the context of BW tables: 2134076 - Report RS_BW_POST_MIGRATION failing after migrating to the SAP HANA database 2315758 - Error 2048 creation of column view failed when executing report RSDDB_LOGINDEX_CREATE 2114556 - Error 2999 has occurred in the BWA/SAP HANA server |
| 382 | ERR_SQL_COLLECT_ALL_VERSIONS | fail to collect all version garbage | Indicates blocked garbage collection issues. See KBA 2169283 - FAQ: SAP HANA Garbage Collection |
| 383 | ERR_SQL_INV_IDENTIFIER | invalid identifier | This error indicates that a wrong identifier was used. Check the related SQL statement for correctness. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2572224 - How-To: Repairing SAP HANA Tables 2160391 - FAQ: SAP HANA Indexes 2606638 - How to deal with error "invalid identifier: _sys_ prefix not allowed" 2728304 - "ALTER SEQUENCE" command fails with error: "invalid identifier: _sys_ prefix not allowed" |
| 384 | ERR_SQL_TOO_LONG_CONSTANT | string is too long | 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2380176 - FAQ: SAP HANA Database Trace 2154870 - How-To: Understanding and defining SAP HANA Limitations 2222219 - SAP HANA Errors and different Results due to Design Changes |
| 391 | ERR_SQL_INV_USABLE_VIEW | invalidated view | This error indicates that the view <view> is invalid. This can happen, for example, if the definition of an underlying object changed or an object was dropped. You have to make sure that the view is consistent and valid, or avoid accesses / drop the view. 2985592 - Database returned the SQL code 391. Error text: invalidated view |
| 396 | ERR_SQL_EXST_USER_DEFINED_TYPE | cannot use duplicate user-defined type name | 2223237 - Troubleshooting HANA Embedded Statistics Server Migration - decision tree 2122188 - ESS disabled due to duplicate creation of TT_MAIL_COLLECTOR |
| 397 | ERR_SQL_INV_OBJ_NAME | invalid object name | This error indicates that an object isn't found with the specified name (and schema). 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2380176 - FAQ: SAP HANA Database Trace |
| 398 | ERR_SQL_MANY_ORDER_BY | cannot have more than one order by | Only one ORDER BY clause is allowed in a statement. |
| 402 | ERR_SQL_DROPPED_USER | the user was already dropped before query execution | 2819401 - Errors Related to the _SYS_XB User and Schema When Running the SAP HANA System Migration Tool to Migrate From SAP HANA 1.0 to SAP HANA 2.0 on IBM Power |
| 403 | ERR_SQL_INTERNAL_ERROR | internal error | Generic preamble. Check the HANA service traces and ABAP dump traces for further information on the error. |
| 406 | ERR_SQL_CANT_UPDATE_GEN_COL | "INSERT UPDATE and UPSERT are disallowed on the generated field" | 3070791 - INSERT, UPDATE and UPSERT are disallowed on the generated field |
| 410 | ERR_SQL_INV_PRIVILEGE_NAMESPACE | invalid privilege namespace | 2428863 - Unable to view any package inside Content Folder of HANA Studio |
| 411 | ERR_SQL_INV_TABLE_TYPE | invalid table type | 2280915 - CREATE procedure gives error "258 - insufficient privilege: Not authorized error". |
| 412 | ERR_SQL_INV_PASSWORD_LAYOUT | invalid password layout | 2895515 - Mismatch password_layout error occurred in HANA. |
| 414 | ERR_SQL_ALTER_PASSWORD_NEEDED | user is forced to change password | Provide a new password for <user> or disable the password lifetime if it isn't required (e.g. because it is a purely technical user): 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2137351 - Error: 414 user is forced to change password for SAPDBCTRL (or SAPHANADBSHD) 2386570 - SAP DBTech JDBC: [591]: internal error: internal error: user is forced to change password 2136604 - Error Installing PO on a SAP HANA Multi-tenant Database |
| 415 | ERR_SQL_USER_DEACTIVATED | user is deactivated | 2574644 - SAP HANA database user is deactivated 2452650 - User password expired with maximum_unused_initial_password_lifetime set at short time 2681997 - You have reached maximum number of invalid password entry attempts. User is locked until ... 3066561 - Parameter detailed_error_on_connect controls what messages are returned during error connect attempt |
| 416 | ERR_SQL_USER_LOCKED | user is locked; try again later | This error is typically caused because the maximum number of invalid connect attempts has been reached. It is controlled by the following SAP HANA parameter: indexserver.ini -> [password policy] -> maximum_invalid_connect_attempts (default: 6) 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 420 | ERR_SQL_ALTER_PASSWORD_NOT_ALLOWED | password change currently not allowed | 2765239 - Password change currently not allowed: minimal password lifetime is x days. |
| 423 | ERR_SQL_LVC | AFL error | Multiple causes, review KBA 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 424 | ERR_SQL_INV_PACKAGE | invalid name of package | 1891017 - Grant Repository Access to Root Package in SAP HANA Studio |
| 430 | ERR_SQL_INV_USABLE_PROC | invalidated procedure | 2935599 - SQL error 430 (invalidated procedure) 2196359 - Limitations for cross-database access in an SAP HANA MDC environment |
| 431 | WRN_SQL_NEARLY_EXPIRED_PASSWORD | user's password will expire within few days | 1752349 - DB connection fails with SQL error 431 - user's password will expire within few days 1702224 - Disable password lifetime for technical users |
| 435 | ERR_SQL_INV_EXP | invalid expression | 2222219 - SAP HANA Errors and different Results due to Design Changes 2044468 - FAQ: SAP HANA Partitioning |
| 436 | ERR_SQL_SET_SYSTEM_LICENSE | could not set system license | 2263579 - "could not set system license: 5503: nameserver failed to process the request" error during HANA license key installation 2311999 - Permanent license import failed via HDBSQL 2772760 - Unable to apply the permanent license to SYSTEMDB |
| 437 | ERR_SQL_ONLY_LICENSE_HANDLING | only commands for license handling are allowed in current state | 2728984 - System Has No Valid License. Execute failed (437) - Instance is registered with permanent license 2781578 - "License Expired" error when enabling HDI container in MDC HANA environments 2670267 - Error: Get server node failed on connection 0: rc=437 |
| 440 | ERR_SQL_TABLE_TYPE_CONVERSION_ERROR | table type conversion error | 2397189 - SAP DBTech JDBC: [440]: Table Type Conversion Error During SUM Upgrade in The Phase MAIN_SHDIMP/PARMVNT_SHD |
| 442 | ERR_SQL_MAX_NUM_COLUMN | number of columns exceeds its maximum | 2154870 - How-To: Understanding and defining SAP HANA Limitations |
| 444 | ERR_SQL_PACKMAN | package manager error | 2936298 - SAP HANA External Machine Learning Library 2.0 requires libssl 1.0.x |
| 446 | ERR_SQL_EXST_TRIGGER | cannot use duplicate trigger name | 2911708 - ERROR [SQL-301] unique constraint violated: Table(STATISTICS_PROPERTIES) |
| 447 | ERR_SQL_BACKUP_FAILED | backup could not be completed | 'backup could not be completed:' is a preamble to the actual root cause for the error. Please check the HANA service trace files for more information on the error. Example scenarios: 2472144 - HANA Backup Using Backint fails with the error: [447] backup could not be completed, [110203] Not all data could be written 2500531 - Error "ERROR BACKUP SAVE DATA terminated with error: [447] backup could not be completed, [2000004] Can not open file" when doing a backup 2399786 - Backup could not be completed: Error calling io_setup 2623425 - Backup could not be completed 2310262 - Error [110026] - Backup could not be completed on HANA 2723171 - [447] backup could not be completed, [3000116] Could not acquire savepoint lock |
| 448 | ERR_SQL_RECOVERY_FAILED | recovery could not be completed | 'recovery could not be completed:' is a preamble to the actual root cause for the error. Please check the HANA service trace files for more information on the error. Example scenarios: 2728068 - HANA Point in time recovery fails - [448] recovery could not be completed 2688972 - Recovery Error: [448] recovery could not be completed, [110021] Object 0 with the value 1024 does not exist - SAP HANA |
| 449 | ERR_SQL_RECOVERY_STRATEGY | recovery strategy could not be determined | 'recovery strategy could not be determined:' is a preamble to the actual root cause for the error. Please check the HANA service trace files for more information on the error. Example scenarios: 2123153 - [449] recovery strategy could not be determined [110512] Backint during HANA Recovery 2610577 - Backup destination header is corrupt [110063] 2736164 - recovery strategy could not be determined - BKI1214E: TSM Error for session to |
| 451 | ERR_SQL_NOT_ALLOWED_SUBJ_TAB_ACCESS_TRIGGER | modification of subject table in trigger not allowed | 2800020 - FAQ: SAP HANA Triggers 2222219 - SAP HANA Errors and different Results due to Design Changes 2170927 - Create trigger fails with error JDBC: [451]: modification of subject table in trigger not allowed: maximum nesting depth |
| 452 | ERR_SQL_INV_BACKUPID | invalid backup id | 2962261 - Backup catalog housekeeping: invalid backup id 'xxxxxxxxxxx' |
| 454 | WRN_SQL_WRONG_HINT_SYNTAX | wrong hint syntax | 2939423 - Getting Error while creating View with HINT - incorrect syntax near "HINT" |
| 455 | ERR_SQL_READ_ONLY_SESSION_VARIABLE | the predefined session variable cannot be set via SET command | 2470084 - XSUAA metadata for XS_APPLICATIONUSER trust creation |
| 459 | ERR_SQL_INV_USABLE_FUNC | invalidated function | This termination indicates that a SAP HANA function was accessed that is invalidated. You can use SQL: "HANA_Objects_Functions" (ONLY_INVALID_FUNCTIONS = 'X') available via SAP Note 1969700 to list invalid functions in the system. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 461 | ERR_SQL_FK_NOT_FOUND | foreign key constraint violation | This error indicates that a change failed due to a foreign key constraint violation. 2160391 - FAQ: SAP HANA Indexes 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 462 | ERR_SQL_FK_ON_UPDATE_DELETE_FAILED | failed on update or delete by foreign key constraint violation | This error indicates that a change failed due to a foreign key constraint violation. 2160391 - FAQ: SAP HANA Indexes 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 463 | ERR_SQL_MAX_NUM_TABLE | number of tables exceeds its maximum | 2154870 - How-To: Understanding and defining SAP HANA Limitations 2968736 - HANA SQL Error 463 - number of tables exceeds its maximum |
| 464 | ERR_SQL_MAX_PARSE_TREE_DEPTH | SQL internal parse tree depth exceeds its maximum | 2385634 - TREX_EXT_SEARCH_CELL_TABLE Leads to SQL Error "parse tree depth exceeds its maximum:255" 2332054 - TREX_EXT_LIST_INDEXES fails with "SQL internal parse tree depth exceeds its maximum: parse tree depth exceeds its maximum:255" |
| 465 | ERR_SQL_INV_USABLE_TRIGGER | "Cannot execute trigger was invalidated by object change" | This error is caused by an invalidated trigger (SAP Note 2800020) on the underlying table. 2800020 - FAQ: SAP HANA Triggers 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 468 | ERR_SQL_HINT | hint error | Generic preamble to a more detailed error. 2142945 - FAQ: SAP HANA Hints 2400006 - FAQ: SAP HANA Statement Hints 2938007 - Query on SAP HANA fails with error 468: hint error 2916954 - Normalization of Whitespaces and Comments of SQL statements in Statement Hints 2570351 - Error 468/ "hint error: invalid hint" From TREX_EXT_SEARCH_CELL_TABLE or TREX_EXT_AGGREGATE |
| 472 | ERR_SQL_EXST_DATA_SOURCE | cannot use duplicate data source name | 2408262 - SAP DBTech JDBC: [472]: cannot use duplicate data source name: __esstore$ |
| 474 | ERR_SQL_INV_ADAPTER | invalid adapter name | 2552938 - Error when altering HANA Remote Source connection adapter type from SDA to SDI: invalid adapter name - SAP HANA 1.0 smart data integration 2440935 - SAP DBTech JDBC: [474] (at 21): invalid adapter name: mii: line 1 col 22 (at pos 21) |
| 476 | ERR_SQL_INV_REMOTE_OBJECT | invalid remote object name | Preamble to a more detailed error: 2262782 - Unable to export or import SAP HANA virtual tables 2033611 - Limitation of export / import for virtual tables 2775477 - SAP DBTech JDBC: [476]: invalid remote object name: IMPORT_TABLE can't be used for functions. 2607873 - Error when creating file adapter virtual table using SQL - SAP HANA Smart Data Integration 2554385 - ODBC Error during Import of a HANA Schema / Table(s) |
| 478 | ERR_SQL_UDF_RUNTIME | user defined function runtime error | 2945199 - search table error: [6859] AttributeEngine: divide by zero 2401963 - Invalid table name exception when calling UDF 2792149 - currency/unit conversion error: conversion type '<conversion type>' has unsupported 'BKUZU' or 'GKUZU' type set |
| 485 | ERR_SQL_INV_STRUCTURED_PRIVILEGE_DEFINITION | invalid definition of structured privilege | 2250455 - SAP DBTech JDBC 485 - Invalid definition of structured privilege: Invalid filter condition 2762943 - ERROR [SQL-485] invalid definition of structured privilege: Condition 1 = 1 cannot be used for <viewname> 2608634 - Invalid definition of structured privilege when use HANA external view for modified composite provider 2563281 - Error message "invalid definition of structured privilege: Complex condition [...] cannot be used for view [...]" |
| 488 | ERR_SQL_INV_DATABASE | invalid database name | HANA Tenant or SYSTEMDB System ID (SID) not recognised |
| 494 | ERR_SQL_INV_EPMQUERYSOURCE_DEF | invalid EPM Query Source definition | 2481228 - dberror(PreparedStatement.executeUpdate): 494 - invalid EPM Query Source definition: no valid EPM license available error in SAP Analytics Cloud (BOC) |
| 500 | ERR_SQL_REQUIRE_PREDICATE | predicates are required in a where clause | 2748386 - Error "SAP DBTech JDBC: [500]: predicates are required in a where clause" When Running Select * From User Authorization Views 2795522 - Querying EFFECTIVE_ROLE_GRANTEES fails with error unknown role_name 2088971 - How-To: Controlling the Amount of Records in SAP HANA Monitoring Views 2564718 - Authorization replication terminates with SQL error message "predicates are required in a where clause: unknown user_name" 2541080 - Predicates are required in a where clause in SAP Analytics Cloud (BOC) |
| 503 | ERR_SQL_EXST_TASK | cannot use duplicate name of task | 2695441 - Unable to activate flow graph within the import package - SAP HANA Smart Data Integration |
| 512 | ERR_REP | replication error | Generic preamble to a more detailed error: 2852687 - HANA REPLICATION ERROR 2754755 - HANA system replication error occurred 2712064 - SAP HANA System Replication Error port 4#### already in use 2680982 - Error "site to unregister is still online, cannot proceed" When Unregistering the Secondary System in System Replication Environment 2813089 - Replication mode/status is UNKNOWN in MDC scenario 2695510 - With Table Replication, Error Message Flooding in Worker Node Indexserver After Coordinator Node Failed-over 2796783 - Copying/Moving a Tenant Database From a System Replication Primary System to Another HANA System Fails When Secure Network Communication is Disabled 1999880 - FAQ: SAP HANA System Replication |
| 513 | ERR_SQL_REP_ALREADY_ACTIVE | cannot execute DDL statement on replication table while replicating | DDL statements such as CREATE/ALTER/DROP are not possible during a replication operation. |
| 544 | ERR_RS_PARTITION | partition error | 2930379 - How to increase the total number of partitions for CDPOS table |
| 576 | ERR_API | api error | 2618451 - SSL API error happened when copy or move tenant database between HANA systems 2650994 - SAP HANA System Replication does not work with DB isolation set to HIGH 2771017 - Replication does not complete due to nameserver error 2561693 - HANA Database fails to start due to SSL error 2862268 - BI Commentary Guidelines for HANA SSL Connections |
| 578 | ERR_API_INV_STATEMENT | invalid statement | The issue can be a consequence of problems on lower layers (infrastructure / hardware) resulting in a malformed statement ID. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 589 | ERR_API_TOO_MANY_PARAMETERS | too many parameters are set | 3040627 - Error 589 - too many parameters are set: number of parameters xxxxx exceeds maximum number of parameters, 32767 2154870 - How-To: Understanding and defining SAP HANA Limitations 2380176 - FAQ: SAP HANA Database Trace |
| 592 | ERR_API_NOT_SUPPORTED_TYPECONV | not supported type conversion | 2962294 - HANA Alert 27 is disabled and will not be called. Reason: timeout SNAPSHOT_ID: <timestamp> 2380176 - FAQ: SAP HANA Database Trace |
| 597 | ERR_API_SESSION_CONTEXT_ERROR | session context error | Preamble to more detailed error described further in: 2380176 - FAQ: SAP HANA Database Trace 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 598 | ERR_API_EXTERNAL_EXECUTION_FAILURE | Failed to execute the external statement | This is a common preamble to a more detailed error. It can have multiple causes. Please review the error detail from the indexserver traces and search the SAP HANA knowledge base for existing Knowledge Base Articles and SAP Notes. 2776953 - Query Fails With Error "failed to execute the external statement: no such data type:.." in HANA Scale-out System |
| 600 | ERR_API_CALL_ROUTING_FAILURE | failed routed execution | Preamble to more detailed error described further in: 2380176 - FAQ: SAP HANA Database Trace 2200772 - FAQ: SAP HANA Statement Routing and Client Distribution Mode 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2090424 - Error -10108 (Session has been reconnected) after failover of coordinator/worker node 2591281 - Blocked Transactions from HANA JDBC Client Call - Session Will be Closed due to the Error: [600] failed routed execution 2691896 - Distributed Query Fails With "StatementRouting Connection.cc(xxxxx): failed to join XA session due to active user transaction" |
| 603 | ERR_API_INV_LOB | invalid LOB | 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions 2690991 - Error 5400 Invalid type 64 in column of type 32 at udiv during consistency check 2408032 - Reading From LOB Binary Stream After Closing the ResultSet via JDBC Fails With Error "invalid lob locator id (piecewise lob reading)" 3017449 - Indexserver Crash in TRexCommonObjects::writeRow When Executing an UPDATE Statement on a Partitioned Table |
| 606 | ERR_API_EXCEED_MAX_LOB_SIZE | exceed maximum LOB size | See 2154870 - How-To: Understanding and defining SAP HANA Limitations |
| 608 | ERR_API_EXCEED_MAX_PREPARED_STATEMENT | exceed maximum number of prepared statements | System limitation. Described further in: 3031816 - "ERROR [SQL-608] exceed maximum number of prepared statements" and "the number of prepared statements per connection cannot exceed <number>" 2154870 - How-To: Understanding and defining SAP HANA Limitations 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2380176 - FAQ: SAP HANA Database Trace |
| 613 | ERR_API_TIMEOUT | execution aborted by timeout | This error indicates that a database request hit the timeout (in seconds). 2000003 - FAQ: SAP HANA 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 3116306 - [Error] execution aborted by timeout while partitioning |
| 640 | ERR_SQL_2 | sql processing error | Starting with SAP HANA Rev. 122.03 the THROW_ERROR hint is available (SAP Note 2436893) that terminates the execution of the underlying database request. 2380176 - FAQ: SAP HANA Database Trace 2073630 - DBIF_RSQL_SQL_ERROR SQL error 256 when access table 2142945 - FAQ: SAP HANA Hints |
| 648 | ERR_SQL_TRIGGER_AND_PROC_NESTING_DEPTH_EXCEEDED | nesting depth of trigger and procedure is exceeded | System limitation: 2134323 - Nesting error in SQL RUN_ISU_TRIGGERS: maximum depth 8 2977828 - Trigger Creation Fails With 'nesting depth of trigger and procedure is exceeded' 2130431 - 'Nesting depth of trigger and procedure is exceeded' error is raised when SQL trigger with procedure call is created |
| 651 | ERR_SQL_EXST_OBJECT | cannot use duplicate object name | 2572224 - How-To: Repairing SAP HANA Tables |
| 663 | ERR_SQL_CONNECT_NOT_ALLOWED | user not allowed to connect from client | 3171342 - ERROR [SQL-663] user not allowed to connect from client |
| 684 | ERR_SQL_PLANSTABILIZER_STORED_HINT_RECORD_ALREADY_EXISTS | plan stabilizer stored hint error - statement hint record already exists | 2700051 - Delivery of Statement Hints (SAP HANA >= 1.00.122.03) 2400006 - FAQ: SAP HANA Statement Hints |
| 685 | ERR_SQL_PLANSTABILIZER_STORED_HINT_RECORD_DOES_NOT_EXIST | plan stabilizer stored hint error - statement hint record does not exist | 2700051 - Delivery of Statement Hints (SAP HANA >= 1.00.122.03) 2400006 - FAQ: SAP HANA Statement Hints |
| 1024 | ERR_SES | session error | Preamble to more detailed error: 2303494 - Importing Delivery Unit fails with "Session error: Repository: Activation failed for at least one object" 2280915 - CREATE procedure gives error "258 - insufficient privilege: Not authorized error". 2775970 - The re-import of a delivery unit fails after an upgrade |
| 1025 | ERR_COM | communication error | Network or communication related problem 2222200 - FAQ: SAP HANA Network 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2000003 - FAQ: SAP HANA |
| 1026 | ERR_COM_LISTEN | cannot bind a communication port | 2261671 - "cannot bind to the local port" when starting HANA services |
| 1033 | ERR_SES_INV_PROTOCOL | error while parsing protocol | These terminations indicate a severe problem during parsing of the communication protocol. Check the database trace (SAP Note 2380176) for more details. Typically you will find an error "failed to encode cursor: error while parsing protocol" with further accompanying details. 2380176 - FAQ: SAP HANA Database Trace 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 1037 | ERR_COM_UNKNOWN_HOST | unknown hostname | 2941074 - SAP HANA server rebooted 2936450 - SAP HANA is down without crash or shutdown |
| 1038 | ERR_SES_SERVER_BUSY | rejected as server is temporarily overloaded | 2908549 - SAP HANA Database requests fail with error code 1038 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2222250 - FAQ: SAP HANA Workload Management 3061965 - Unexpected Error "rejected as server is temporarily overloaded" |
| 1335 | ERR_SQLSCRIPT_RUNTIME_CYCLIC_DEPENDENCY | Cyclic dependency found in a runtime procedure | 3032114 - Cyclic Dependency Error When Revoking Role From User Through GRC in SPS05 3018242 - Removing Role From User via HANA or GRC Can Cause Cyclic Dependency Error |
| 1793 | ERR_SHM_CREATE_INVALID | invalid key or invalid size | This issue can arise due to wrong configuration of shared memory settings on OS level. 2827664 - NameServer persistence initialization failed |
| 2048 | ERR_CS | Column store error | Generic preamble to a more detailed error description: 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 1999997 - FAQ: SAP HANA Memory 2222200 - FAQ: SAP HANA Network 2044468 - FAQ: SAP HANA Partitioning 2380176 - FAQ: SAP HANA Database Trace |
| 2054 | ERR_CS_DELTA_LOG_REPLAY_FAILED | Redo log replay failed | Check the HANA service traces for more detail on the error cause. |
| 2055 | ERR_CS_MAXIMUM_ROW | Maximum number of rows per table or partition reached | System limitation: 2212518 - HANA error maximum number of rows per partition reached 2154870 - How-To: Understanding and defining SAP HANA Limitations 2964715 - Maximum number of rows per table or partition reached: '_SYS_AUDIT:CS_AUDIT_LOG_' 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 2560 | ERR_METADATA | Metadata error | Preamble to more detailed error: 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2380176 - FAQ: SAP HANA Database Trace 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions |
| 2561 | ERR_DIST_METADATA | Distributed metadata error | 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions 2245772 - distributed metadata error: Failed to get table location during table lock - Error during BW upgrade or migration |
| 2566 | ERR_DIST_METADATA_TNSCLIENT_FAILURE | Distributed environment error | 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions 2240241 - Failed to update dependency: distributed environment error 2447887 - Removing Multiple Services from HANA Scale Out Environment Can Lead to Data Loss 2062308 - Deadlock Between Worker and Coordinator Node During Worker Startup 2433918 - Execution of a Stored Procedure Fails With the Message "distributed environment error: table already exists; newIndex(..): catalog creation failed" |
| 2567 | ERR_DIST_METADATA_NETWORK_FAILURE | Network error | Generic preamble to a more detailed description: 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 1999880 - FAQ: SAP HANA System Replication 2057046 - FAQ: SAP HANA Delta Merges 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions |
| 3584 | ERR_DIST_SQL | Distributed SQL error | Generic preamble to a more detailed description: 1999997 - FAQ: SAP HANA Memory 1999998 - FAQ: SAP HANA Lock Analysis 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions 2380176 - FAQ: SAP HANA Database Trace 2901387 - Unable to run HANA_Configuration_MiniChecks_2.00.040+ 3105683 - A Query Fails With an Error "distributed SQL error: [18] A received argument has an invalid value" 3121246 - SQL error "SAP DBTech JDBC: [3584]: distributed SQL error: [18] A received argument has an invalid value" 1969700 - SQL Statement Collection for SAP HANA |
| 3587 | ERR_DIST_SQL_INVALID_PROTOCOL | Invalid protocol or service shutdown during distributed query execution | The following error messages can indicate OOM situations. Be aware that some of the errors can also be issued in other scenarios. To make sure that they are really memory related, you have to check the related trace file. 1999997 - FAQ: SAP HANA Memory |
| 3589 | ERR_DIST_SQL_REMOTE_EXECUTION_FAILURE | Remote query execution failure | 2439128 - Error 3589 - "remote query execution failure" When Opening XS Applications |
| 3841 | ERR_AUDITING_NO_PRIV_NAME | Invalid privilege | 2428863 - Unable to view any package inside Content Folder of HANA Studio |
| 3843 | ERR_AUDITING_POLICY_ALREADY_EXISTS | Audit policy with current name already exists | Policy name already created as per "CREATE AUDIT POLICY ..." |
| 3844 | ERR_AUDITING_INV_POLICY_TYPE | Invalid combination of audit actions | Check auditing configuration and setup: Auditing Activity in SAP HANA; Audit Policies; Audit Trails; Auditing Configuration and Audit Policy Management; Best Practices and Recommendations for Creating Audit Policies; Configuring Database Auditing |
| 3846 | ERR_AUDITING_INV_LEVEL | Invalid auditing level | Check auditing configuration and setup: Auditing Activity in SAP HANA; Audit Policies; Audit Trails; Auditing Configuration and Audit Policy Management; Best Practices and Recommendations for Creating Audit Policies; Configuring Database Auditing |
| 3847 | ERR_AUDITING_INV_POLICY_NAME | Invalid policy name | Check auditing configuration and setup: Auditing Activity in SAP HANA; Audit Policies; Audit Trails; Auditing Configuration and Audit Policy Management; Best Practices and Recommendations for Creating Audit Policies; Configuring Database Auditing |
| 3848 | ERR_AUDITING_INV_ACTION_OBJECT_TYPE | Invalid combination of audit action and object type | Check auditing configuration and setup: Auditing Activity in SAP HANA; Audit Policies; Audit Trails; Auditing Configuration and Audit Policy Management; Best Practices and Recommendations for Creating Audit Policies; Configuring Database Auditing |
| 3849 | ERR_AUDITING_INV_OBJECT_TYPE | Audit policy for this object type not supported | Check auditing configuration and setup: Auditing Activity in SAP HANA; Audit Policies; Audit Trails; Auditing Configuration and Audit Policy Management; Best Practices and Recommendations for Creating Audit Policies; Configuring Database Auditing |
| 4096 | ERR_PLANVIZ_GENERAL | [PlanViz] general error | 2275905 - How to disallow the Usage of PlanViz in a SAP HANA System 2825090 - PlanViz for a DML Statement on Row Store Table Fails in Scale-Out System 2337716 - SAP DBTech JDBC: [4096]: [PlanViz] general error: Not supported yet; Please update your server instance |
| 4098 | ERR_PLANVIZ_INVALID_PLAN_GENERAL | [PlanViz] invalid plan | 2781336 - PlanViz Execution Fails With "invalid plan -- try preparing statement again" |
| 4104 | ERR_PLANVIZ_PLAN_NOT_FOUND | [PlanViz] plan not found | 2337716 - SAP DBTech JDBC: [4096]: [PlanViz] general error: Not supported yet; Please update your server instance |
| 4105 | ERR_PLANVIZ_UNSUPPORTED_STMT_TYPE | [PlanViz] unsupported statement type | 2822978 - Statement Execution Fails In SAP HANA Studio after PlanViz Generation Failed |
| 4109 | ERR_PLANVIZ_TRACE_ONLY_GENERAL | [PlanViz] error in trace-only mode | Due to a SAP HANA bug this error can happen when plan trace is activated with a statement hash filter on SAP HANA <= 2.00.046. Make sure that the plan trace is only activated when really required and optimally in non-production systems. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments 2781336 - PlanViz Execution Fails With "invalid plan -- try preparing statement again" |
| 4249 | ERR_USER_PARAM_DUPLICATE_EMAIL_ADDRESS | Same email address cannot be used for different users | 2845742 - SAP HANA Cockpit user deletion after failed logon attempts with wrong password |
| 4251 | ERR_USER_PARAM_INVALID_STATEMENT_MEMORY_LIMIT | Invalid statement memory limit | Setting User Parameters for Workload 2222250 - FAQ: SAP HANA Workload Management |
| 4252 | ERR_USER_PARAM_INVALID_STATEMENT_THREAD_LIMIT | Invalid statement thread limit | Setting User Parameters for Workload 2222250 - FAQ: SAP HANA Workload Management |
| 4274 | ERR_KERBEROS_DUPLICATE_PROVIDER | Duplicate specification of identity for KERBEROS | 2921965 - Duplicate specification of identity for KERBEROS |
| 4280 | ERR_TICKET | General ticket error | 2872841 - Cannot Load Tile Error on Fiori Launchpad |
| 4321 | ERR_SSL_ENFORCE | Only secure connections are allowed | Preamble to more detailed error: 2999904 - hdbsql failed with error "4321: only secure connections are allowed SQLSTATE: HY000". 2475246 - How to configure HANA DB connections using SSL from ABAP instance 2572975 - Solman monitoring of only specific HANA DBs using SSL 3168368 - After setting sslenforce as true in HANA database, the R3trans, transport in ABAP are failing 2818623 - Enabled SSL DBCO connection SolMan to HANA result in sql error 4321 3028338 - "Only secure connections are allowed" in HDB_CHK_PRIVILEGES phase with SUM 1.0 2420711 - FAQ: SAP HANA XimoStudio 2795610 - Enabling SSO in HANA Cockpit Manager of SAP HANA Cockpit 2 Fails When sslEnforce=true on the Remote SAP HANA System 3083365 - User Provided Service connections with SSL functionality is not available in SAP HANA Web IDE for HANA (On-Premise) |
| 5136 | ERR_TEXT_COMMON_ALLOCATION_FAILED | Allocation failed. | Memory allocation failure. Preamble to more detailed error: See '1. Which indications exist for SAP HANA memory problems?' - 1999997 - FAQ: SAP HANA Memory |
| 5170 | ERR_TEXT_COMMON_FILE_NOT_FOUND | File not found. | This is a generic error where an expected system, library (dll) file or third-party client driver file cannot be found at the expected or configured directory location. Some example scenarios: 2349144 - HANA instance does not stop - kill.sap file not found 2981347 - Failed to create an SDA connection between HANA and DB2 - Can't open lib '/opt/IBM/db2/dsdriver/lib/libdb2o.so.1': file not found 2928047 - libsqora.so.11.1': file not found |
| 5172 | ERR_TEXT_COMMON_INVALID_LENGTH | Invalid length. | Table column inconsistency. 2905396 - Error 5995 reported by check_table_consistency 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions |
| 5533 | ERR_TEXT_PREPROCESSOR_FAILED | Preprocessor: failed | This error indicates a problem when communicating with the preprocessor service, e.g. in the context of text searches. You can check the preprocessor database trace (SAP Note 2380176) for further details. 2399990 - How-To: Analyzing ABAP Short Dumps in SAP HANA Environments |
| 5634 | ERR_CERTADM_INVALID_CERT_DEFINITION | Certificate definition inconsistent | 2975450 - Error: "5634: Certificate definition inconsistent" when import Trusted Certificate via SQL |
| 5637 | ERR_CERTADM_CERTIFICATE_IN_USE | Certificate could not be dropped because it is still in use by at least one PSE | 3169349 - Certificate could not be dropped because it is still in use by at least one PSE |
| 5736 | ERR_MASKING_INVALID_COLUMN_DATATYPE | Masking: not supported data type | 2669149 - Masking: not supported data type: exception 4075007: column column_name does not have a character string data type and cannot be masked |
## 2.3 Performance and High Resource Utilization | ## 2.3 Performance and High Resource Utilization
By observing the general symptoms shown by the system, such as poor performance, high memory usage, paging, or column store unloads, we can start to narrow down the possible causes as a first step in analyzing the issue.
## High Memory Consumption
You observe that the amount of memory allocated by the SAP HANA database is higher than expected. The following alerts indicate issues with high memory usage:
- Memory usage of name server (Alert 12)
- Total memory usage of Column Store tables (Alert 40)
- Memory usage of services (Alert 43)
- Memory usage of main storage of Column Store tables (Alert 45)
- Runtime dump files (Alert 46)
See the section Memory Problems for information on analyzing the root cause.
## Out-of-Memory Situations
You observe trace files or error messages indicating an Out-of-Memory (OOM) situation.
See the section Memory Problems for information on analyzing the root cause.
## Paging on Operating System Level
You observe that paging is reported on operating system level.
See the section Memory Problems for information on analyzing the root cause.
## Column Store Unloads
You observe unloads in the column store. The following alert indicates this issue:
- Column store unloads (Alert 55)
See the section Memory Problems for information on analyzing the root cause.
## Permanently Slow System
Issues with overall system performance can be caused by a number of very different root causes. Typical reasons for a slow system are resource shortages of CPU, memory, disk I/O and, for distributed systems, network performance.
Check Overview Monitoring and Administration Performance Monitor for either Memory , CPU or Disk Usage . If you see a constant high usage of memory or CPU, proceed with the linked sections Memory Problems or CPU Related Root Causes and Solutions respectively. I/O Related Root Causes and Solutions provides ways to check for disk I/O related problems. In case of network performance issues, have a look at the Monitor Network page accessible from the Monitoring group in the SAP HANA cockpit.
Note that operating system tools can also provide valuable information on disk I/O load. Basic network I/O data is included in the Load graph and in the M_SERVICE_NETWORK_IO system view, but standard network analysis tools can also be helpful to determine whether the network is the main bottleneck. If performance issues only appear sporadically, the problem may be related to other tasks running on the database at the same time.
These include not only maintenance related tasks such as savepoints (disk I/O, see I/O Related Root Causes and Solutions ) or remote replication (network I/O), but also SQL statements dispatched by other users, which can block a lot of resources. In the case of memory, this can lead to unloads of tables, which affects future SQL statements, when a table has to be reloaded into memory. In this case, see Memory Problems as well. Another reason for poor performance, which in many cases cannot be detected by the SAP HANA instance itself, are other processes running on the same host that are not related to SAP HANA. You can use the operating system tools to check for such processes. Note that SAP only supports production systems running on validated hardware.
## Slow Individual SQL Statements or Statements with Increasingly Long Runtimes
Issues with the performance of a particular statement can be caused by a number of very different root causes. In principle, a statement can trigger all the resource problems that also lead to an overall slowdown of the system, so most of the previous information also applies to statement performance. In addition, statement performance can suffer from transactional problems, that is, blocked transactions. Blocked transactions can be checked in the Threads tile or on the Blocked Transactions page accessible from the Monitoring group. For troubleshooting, proceed with Transaction Problems .
If the runtime of a statement increases steadily over time, there could be an issue with the delta merge operation. Alerts should be issued for most problems occurring with the delta merge, but since they depend on configurable thresholds, this is not always the case. For troubleshooting, proceed with Delta Merge . If you have none of the above problems, but the statement is still too slow, a detailed Statement Performance Analysis might reveal ways to optimize the statement. However, some queries are inherently complex and require a lot of computational resources and time.
## Related Information
Memory Problems [page 65]
CPU Related Root Causes and Solutions [page 82]
Disk Related Root Causes and Solutions [page 106]
I/O Related Root Causes and Solutions [page 114]
M_SERVICE_NETWORK_IO
Transactional Problems [page 167]
Delta Merge [page 129]
Statement Performance Analysis [page 180]
## 2.4 Common Symptoms and Troubleshooting | ## 2.4 Common Symptoms and Troubleshooting
Typical symptoms and the related troubleshooting information are described in this section.
System-wide slow performance, slow individual statement performance, and OOM problems are issues that you might experience while using the SAP HANA database. For each section, this document covers the known symptoms and the corresponding troubleshooting steps to follow depending on the causes.
## 2.4.1 Slow System-wide Performance | ## 2.4.1 Slow System-wide Performance
Slow system-wide performance issues are problems that could be caused by excessive use of CPU, database resource locks or incorrect configuration of OS parameters.
Generally, when you encounter a performance issue, you may see these symptoms in SAP HANA cockpit:
- Continually high CPU usage according to OS commands or the load graph (visible from the CPU Usage tile or the Performance Monitor)
- Many pending or waiting threads in the thread view (details visible from the Threads tile).
To look for the cause at the operating system level refer to the topic System Appears to Hang with High System CPU Usage .
If the performance issue persists, or if it recurs sporadically, you may need to contact Support to analyze the root cause. In this case, please generate at least two runtime dumps at 3-minute intervals while the system performance is slow for further investigation. For details refer to SAP Note 1813020 - How to generate a runtime dump on SAP HANA or the Guided Answer How to generate a runtime dump.
## Related Information
System Appears to Hang with High System CPU Usage [page 54]
SAP Note 1813020
How to generate a runtime dump (Guided Answer) |
## 2.4.1.1 HANA System-wide Performance Analysis | ## 2.4.1.1 HANA System-wide Performance Analysis
This topic describes solutions for both current and former system performance problems.
## Analysis of Current Performance Problems
Before taking any other action while the problem is present, it is of utmost importance to capture a few HANA runtime environment (RTE) dump files and a Kernel Profiler Trace to collect information about the database's internal processes. Proceed as follows:
## Generating a Runtime Dump
Capture a number (3-5) of RTE dumps. There are several options for recording runtime dumps as described in the KBA 1813020 - How to generate a runtime dump on SAP HANA .
If the problem occurs randomly and you cannot predict when it will happen you can automate the collection of the runtime dumps with the SAP HANASitter tool which can be used, for example, to trigger methods such as the creation of traces and dumps when specific conditions are met. Refer to KBA 2399979 - How-To: Configuring automatic SAP HANA Data Collection with SAP HANASitter .
## Kernel Profiler Trace
Create a Kernel Profiler Trace, either:
- via the SQL console (see ALTER SYSTEM START KERNEL PROFILER statement)
- in hdbcons as described in KBA 1804811 SAP HANA Database: Kernel Profiler Trace .
## Analysis
Once the RTE Dumps are ready use an automated analysis tool such as the HANA Dump Analyzer to spot obvious problems. Refer to SAP Note 2498739 - How-To: Analyzing Runtime Dumps with SAP HANA Dump Analyzer . Most known problem situations are recognizable by the tool and you should then be able to take the first mitigation steps. Using the 'Auto Analyzer' option in the HANA Dump Analyzer the following summaries might be shown in the result:
- High Workload
- High CPU
- Waitgraph detected
Each of the identified issues requires specific steps to perform an in-depth analysis and gain further insight into the potential culprits.
## High workload / High CPU
This indicates that most of the active / running threads are owned by one specific application workload; if the HANA instance is configured according to official SAP guidance, this should not happen under normal circumstances. It is important to understand which application user is responsible for the workload. If the situation is disruptive, ask the user to refrain from any further actions until the exact root cause is identified, to avoid worsening the situation.
As a second step, check whether HANA Workload Management is configured according to SAP recommendations. Refer to KBA 2222250 - FAQ: HANA Workload Management . Pay particular attention to the appropriate configuration of the following workload related parameters:
- default_statement_concurrency_limit
- max_concurrency*
- max_concurrency_hint*
- num_cores*
(* not necessarily required as of HANA SPS3). Refer to KBA 2600030 - Parameter Recommendations in SAP HANA Environments.
## Waitgraph detected
This situation indicates that threads are not able to efficiently acquire locks on objects as they are already held by other threads (see Thread investigation below). The waitgraph is a directed graph depicting which threads are waiting for which locks and in turn which threads are holding the locks to be acquired. It is a good practice here to investigate what exactly the lock-holding threads (not the lock acquiring threads) are busy with. If no insights can be gained during this analysis and the problem requires an immediate mitigation, consider canceling the session responsible for the lock-holding threads by:
1. Identify the corresponding connection ID of the problematic thread by examining the runtime dump; search the file for the thread ID identified, for example:

```
1348967027[thr=<thread_id>]: SqlExecutor, TID: 34, UTID: 364438059, CID: <conn_id>,….
```

2. Cancel the session with:

```
ALTER SYSTEM CANCEL SESSION '<conn_id>';
```
## Threads
The next step is to check which threads are currently running on the database while the performance issue exists. You can check this in SAP HANA cockpit from the Threads card of the System Overview, or from the Performance tab in SAP HANA Studio, which is illustrated here:
Threads Overview
Refer to KBA 2114710 - FAQ: SAP HANA Threads and Thread Samples for tips on how to identify and solve problems with running and blocked threads. The information given for the following frequently-asked questions, for example, will be helpful:
- What kind of information is available for the SAP HANA threads?
- How can I interpret the thread state?
- What are the main thread types, methods and details?
## HANA Database Parameters
Check that the database parameters are configured as recommended. You can do this by running the script HANA_Configuration_Parameters_1.00.90+, or HANA_Configuration_Parameters_Values_1.00.90+_MDC, attached to the KBA 1969700 - SQL Statement Collection for SAP HANA. The script can be executed in the system database as well as at the tenant database level. Recommended parameter values are discussed in more detail in the KBA 2600030 - Parameter Recommendations in SAP HANA Environments.
Example Output of the HANA_Configuration_Parameters_1.00.90+ Script
## Analysis of Former Performance Problems
You can also analyze system-wide performance problems that occurred in the past using the methods described here.
## Thread Sample Analysis
When the issue is no longer present, a HANA Thread Sample analysis can be conducted to identify potential culprits. Proceed as follows using the scripts in the SQL Statement Collection for SAP HANA (attached to SAP Note 1969700 - see link in Further Information below):
1. Define a problematic time-period: be as accurate as possible here to avoid distortion of the statistical data by workloads not directly related. Use scripts TimeFrameReport (HANA_Global_TimeFrameReport_2.00.043+.txt) and LoadHistory (HANA_LoadHistory_Services_2.00.030+.txt) to do the following:
- Evaluate CPU / Memory / Disk
- Check Running vs. Waiting Threads
- Check on MVCC Versions
- Check Blocked Transactions / Locking Issues
2. Use the above information to identify the most active application workload for a defined time-period using the script HANA_Threads_ThreadSamples_FilterAndAggregation_*.
If you had high CPU utilization in the past, use the script HANA_LoadHistory_Services_2.00.*.txt (or HANA_LoadHistory_Services_1.00.120+.txt for HANA 1) to find a time-period where the CPU was high. If a problematic time-period is identified, run the HANA_Threads_ThreadSamples_FilterAndAggregation_* script for this specific period. For example, if you have identified the period 10:00-11:00 a.m. on 7 April 2021, go to the modification section of the script and enter the following values:
```
'2021/04/07 10:00:00' BEGIN_TIME, '2021/04/07 11:00:00' END_TIME,
```
If the issue lies further in the past (older than a few hours), you have to switch the DATA_SOURCE to 'HISTORY':
```
'HISTORY' DATA_SOURCE,
```
You can use the AGGREGATE_BY filter, for example, to see which threads have been consuming most of the CPU time:
```
'HASH, THREAD_STATE, THREAD_DETAIL, THREAD_METHOD' AGGREGATE_BY
```
Use the AGGREGATE_BY filter with the following values if you want to see which application user and component was actively consuming most of the CPU time:
```
'HASH, THREAD_STATE, APP_SOURCE, APP_USER' AGGREGATE_BY
```
Additionally, if you are only looking for the threads actively consuming CPU, you can set a filter in the modification section for THREAD_STATE:
```
'RUNNING' THREAD_STATE,
```
For further information refer to KBA 2114710 - FAQ: SAP HANA Threads and Thread Samples .
## Log Files for OS and Network
If the steps already described do not help to identify the performance problem, then it may be that the issue is not related to the SAP HANA database but could be an operating system, hardware, or network-related problem. In this case you can check the messages file for the time of the performance issue in the /var/log directory on the HANA nodes. Check the messages for any indication of OS, hardware or network-related problems.
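A sketch of this check, using hypothetical log lines in place of the real /var/log/messages content:

```shell
# Illustration only: a sample messages file standing in for /var/log/messages.
printf 'Apr  7 10:12:01 hanahost kernel: eth0: link down\nApr  7 10:15:33 hanahost sshd[4711]: session opened\n' > messages_sample

# Search for common OS/hardware/network-related keywords around the problem window.
grep -Ei 'link down|i/o error|segfault|oom-killer' messages_sample
```

Any hits should be correlated with the timestamps of the performance issue before drawing conclusions.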
If necessary you can collect a runtime dump when the performance issue happens again and is live on the system as described above.
## Known Performance Issues and How to Resolve Them
The following table identifies some specific known problems with links to KBAs which give technical details about how to resolve them.
| Symptom | Solution |
|--------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| After migration to HANA DB for ECC or Suite on HANA, system reports and transactions are slow. | Check that you are using HANA-optimized transactions and applications; some optimizations may need to be explicitly switched on via the switch framework (transaction SFW5). Refer to SAP Note 1761546 - SAP ERP powered by SAP HANA - Optimizations and the attached document (Suite on HANA Optimizations) which lists optimized objects (reports and transactions). |
| After migration to HANA DB, custom transactions or ABAP programs are slow. | You may need to optimize custom programs and transactions by using the code inspector tool and SQL monitor. Refer to the note 1912445 - ABAP custom code migration for SAP HANA - recommendations and Code Inspector variants for SAP HANA migration. See also the blog post 'ABAP on HANA - from analysis to optimization' which includes, for example, a section on The Golden Rules: Five performance guidelines for ABAP database operations. |
| The system is slow with a lot of statements running. Statements of the type SELECT TOP X cause a performance problem on SAP HANA. | Refer to the SAP Note 2238679 - High CPU Consumption Caused by UnifiedTable::MVCCObject::generateOLAPBitmapMVCC. |
| After a HANA upgrade the performance of calculation views is poor. | Refer to the following SAP Notes: 2441054 - High query compilation times and absence of plan cache entries for queries against calculation views; 2291812 - SAP HANA DB: Disable/Enable CalculationEngine Feature - CalcView Unfolding. |
| In an ABAP environment you are using the fast data access (FDA) protocol and the performance of queries is poor. | Refer to the KBA 2399993 - FAQ: SAP HANA Fast Data Access. Question 6 identifies a list of possible problems which may occur with FDA. |
| Using fast data access the HANA system has poor performance. In the HANA trace files you see entries like: Destroying allocator 'Connection/xxxxx/Pool/RowEngine/Session' with x blocks and x bytes still allocated. | This known issue is related to memory management of FDA queries and is described in SAP Note 2580435 - Memory Leak in Pool/RowEngine/Session. It may be solved by upgrading to a higher version of SAP HANA: HANA 1 SPS 12 Revision 122.15 or higher, or HANA 2 SPS 01 Revision 012.04 or higher. |
## Further Information
Links to KBAs and SAP Notes which may be helpful:
- 1969700 - SQL Statement Collection for SAP HANA
- 1813020 - How to generate a runtime dump on SAP HANA
- 2399979 - How-To: Configuring automatic SAP HANA Data Collection with SAP HANASitter
- 2498739 - How-To: Analyzing Runtime Dumps with SAP HANA Dump Analyzer.
- 2000000 - FAQ: SAP HANA Performance Optimization
- 2222217 - How-To: Troubleshooting SAP HANA Startup Times
- 2222250 - FAQ: HANA Workload Management
- 2600030 - Parameter Recommendations in SAP HANA Environments
- 2114710 - FAQ: SAP HANA Threads and Thread Samples
You can also use Guided Answers to analyze performance problems. The following Guided Answer trees are focused on the SAP NetWeaver product:
- Master Guided Answer: Performance issues
- How to perform a Network analysis (Using Operating System Monitor - ST06)
- Analysis of Hardware Bottlenecks for Performance (Using Operating System Monitor - ST06)
- Performance Analysis Procedure for an ABAP Program (Using SAP Workload Monitor - ST03)
## Related Information
ALTER SYSTEM {START | STOP | SAVE | CLEAR} KERNEL PROFILER Statement (System Management) |
## 2.4.1.2 System Appears to Hang with no new Connections and no Response from the SAP HANA Database | ## 2.4.1.2 System Appears to Hang with no new Connections and no Response from the SAP HANA Database
In cases where logs cannot be written, all DML statements fall into a wait status. This can lead to a failure to open new connections, because the system internally executes DML statements during connection setup. Typically, a full log volume is the cause of this.
## Basic System Checks
In cases where the SAP HANA database is no longer responding to any requests it is advisable to follow a bottom up approach to quickly identify the root cause of the problem. The following system checks will eliminate common operating system related problems which often lead to system hanging situations:
Firstly, check the system resources at the operating system level by using the top command.
Check the values for CPU and MEM to make sure that these resources are not exhausted.
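As a rough, scriptable version of this check, the %Cpu(s) summary line from `top -b -n 1` can be parsed for the idle percentage (a value near zero means the CPU is exhausted). The sample line below is an illustrative assumption, not output from a real system:

```shell
# Illustration only: a summary line in the format printed by `top -b -n 1`.
printf '%%Cpu(s): 92.0 us,  5.0 sy,  0.0 ni,  3.0 id,  0.0 wa\n' > top_sample

# Print the idle percentage (the comma-separated field tagged "id").
awk -F',' '{ for (i = 1; i <= NF; i++) if ($i ~ / id/) { split($i, a, " "); print a[1] } }' top_sample   # → 3.0
```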
Secondly, check the disk utilization by running the disk free command df -h and verify that no volume has reached 100% utilization (see the example below).
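The df check can be narrowed to just the full volumes; a minimal sketch, assuming the standard df column layout (Use% is field 5, the mount point field 6):

```shell
# List only file systems whose Use% column reads 100%.
df -h | awk 'NR > 1 && $5 == "100%" { print $6, $5 }'
```

On a healthy system this prints nothing; a line such as `/hana/log 100%` points directly at the full volume.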
Check whether you are able to connect to the indexserver process locally by using hdbsql to run a test-query, for example:
```
SELECT CURRENT_TIMESTAMP FROM DUMMY;
```
If this is successful then check the network stability between the clients facing the issue and the SAP HANA database host.
## Possible Root Causes
Some common situations which cause a system to freeze are given here with remedial actions to take:
Root cause: Log volume full either caused by disk full or quota setting
## Required action:
1. Check for the DISKFULL event in the indexserver trace, or
2. Run the disk free command df -h in the OS shell (the -h option displays sizes in human-readable units based on powers of 1024). A result such as the following, with a value of 100%, immediately identifies the culprit as a full log area:

```
/dev/mapper/vg_hana-log 102G 102G 0G 100% /hana/log
```

3. Check the quota setting in the file system.
4. If you have verified that the log volume is full, follow the procedure described in SAP Note 1679938 - Log Volume is full.
You can find additional information in SAP Note 2083715 - Analyzing log volume full situations .
Root cause: Log volume full caused by failure of log backup
## Required action:
1. Check backup.log (located at /usr/sap/<SID>/HDB<Instance#>/<Host>/trace) to see whether it includes ERROR entries for log backup. Also check the monitoring views M_BACKUP_CATALOG and M_LOG_SEGMENTS.
2. If the log backup uses Backint, check backint.log (located in the same trace directory) to see whether it includes ERROR information, and contact the Backint vendor's support.
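Step 1 can be sketched as follows; the log lines are hypothetical, only the trace-directory convention comes from the text above:

```shell
# Illustration only: a sample backup.log; the real file lives under
# /usr/sap/<SID>/HDB<Instance#>/<Host>/trace.
printf 'BACKUP LOG_BACKUP successful\nERROR in log backup: backint exited with code 2\n' > backup.log

# A non-zero count points at a log-backup failure; print the matching lines.
grep -c 'ERROR' backup.log   # → 1
grep 'ERROR' backup.log
```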
## Admission Control
Note that if statements are rejected with SQL Error 1038, this is related to the Admission Control feature, which is designed to prevent the system from being overloaded by defining thresholds and either queuing or rejecting statements when necessary. You can check whether admission control is preventing new sessions from being established due to low system resources by querying the monitoring view M_ADMISSION_CONTROL_EVENTS:
```
SELECT * FROM M_ADMISSION_CONTROL_EVENTS;
```
If many events are raised during the problematic time-period, you should investigate which SAP HANA resources were exhausted, making it necessary for admission control to intervene. For more information about Admission Control see the KBA 2222250 - FAQ: SAP HANA Workload Management.
Before SAP HANA 2 SPS 05 Revision 056 there is a known coding issue where the Admission Control feature could generate SQL Error 1038 even when no resources are overloaded. For more information about this issue see the KBA 3061965 - Unexpected Error "rejected as server is temporarily overloaded".
## Configuration Parameters for Admission Control
You should apply the recommended settings for all admission control configuration parameters located under indexserver.ini [admission_control]. You can check the parameter settings using the script 'HANA_Configuration_Parameters_2.00.040+' from the SAP Note 1969700 - SQL Statement Collection for SAP HANA. The example output below from this script shows a discrepancy for the setting queue_memory_threshold, which is set to a value of 90 when on an SAP HANA database with Revision <= 2.00.053 it should be zero. The reason for this recommendation is to avoid queuing in the case of high memory consumption that is often harmless (e.g. caused by a large page cache). The latest information about recommended configuration parameter settings is maintained in SAP Note 2600030 - Parameter Recommendations in SAP HANA Environments.
Where Admission Control is functioning correctly, the root cause for the Admission Control event(s) is often high CPU. This can be confirmed for current Admission Control events using the script 'HANA_Workload_AdmissionControlEvents', or for historic Admission Control events using the script 'HANA_LoadHistory_Services', from the SAP Note 1969700 - SQL Statement Collection for SAP HANA.
For troubleshooting steps for high CPU analysis please see Analysis of CPU Related Issues .
## 2.4.1.3 System Appears to Hang with High System CPU Usage | ## 2.4.1.3 System Appears to Hang with High System CPU Usage
The SAP HANA database is an in-memory database and by its nature it consumes large amounts of memory. Some performance issues of SAP HANA can be caused by the operating system's memory management features.
For this case, you can see high usage of SYSTEM CPU that can be monitored in the Performance Monitor , available from SAP HANA cockpit Monitoring and Administration CPU Usage .
You can begin an analysis of CPU related problems by running a Kernel Profile trace either:
- via the SQL console (see ALTER SYSTEM START KERNEL PROFILER statement)
- in hdbcons as described in KBA 1804811 SAP HANA Database: Kernel Profiler Trace .
The kernel profiler trace is required for analysis by SAP Support. Refer to the topic HANA System-wide Performance Analysis for how to analyze the kernel profiler trace. Refer to KBA 2222250 - FAQ: HANA Workload Management which gives guidance on CPU-related configuration settings.
Root cause: Problem caused by the configuration of transparent huge page
## Required action:
1. Check whether Transparent Huge Pages is set to [never] by running the command cat /sys/kernel/mm/transparent_hugepage/enabled.
2. Apply KBA 2031375 - SAP HANA: Transparent HugePages (THP) setting on Linux .
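The check in step 1 can be scripted; this sketch uses a sample file standing in for the real sysfs path, since the active THP setting is the bracketed value:

```shell
# Illustration only: sample content of
# /sys/kernel/mm/transparent_hugepage/enabled.
printf 'always madvise [never]\n' > thp_enabled_sample

# The active setting is the value in square brackets.
grep -o '\[.*\]' thp_enabled_sample   # → [never]
```

If the result is [always] or [madvise], apply the KBA referenced in step 2.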
Root cause: Problem caused by the configuration of OS page cache
## Required action:
1. Check the sar files (/var/log/sa) to see whether kbcached usage (sar -r) is higher than 10% of physical memory and whether a high page in/out rate occurred.
2. Check and apply SAP Note 1557506 - Linux paging improvements .
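The 10% threshold from step 1 can be computed directly; the kbcached and MemTotal figures below are hypothetical (both in kB, as reported by `sar -r` and /proc/meminfo):

```shell
# Hypothetical figures in kB.
kbcached=4194304     # cached memory reported by `sar -r`, ~4 GB
memtotal=33554432    # physical memory (MemTotal), ~32 GB

# Cached pages as a percentage of physical memory; above 10% warrants action.
awk -v c="$kbcached" -v t="$memtotal" 'BEGIN { printf "%.1f%%\n", 100 * c / t }'   # → 12.5%
```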
Root cause: Problem caused by translation lookaside buffer (TLB) shootdown
## Required action:
1. Check plan trace.
Root cause: High context switches (high SYS CPU) caused by many SqlExecutor threads
## Required action:
1. Check the Performance Monitor and the parameters indexserver.ini -> [sql] -> sql_executors / max_sql_executors, and refer to Controlling Parallelism of SQL Statement Execution.
## Related Information
Transparent Huge Pages on Linux [page 204]
System Hanging Situations [page 204]
HANA System-wide Performance Analysis [page 46]
Controlling Parallel Execution of SQL Statements [page 87]
SAP Note 2031375
SAP Note 1557506
SAP Note 2222250
SAP Note 1804811
## 2.4.1.4 Slower Response with High User CPU Usage | ## 2.4.1.4 Slower Response with High User CPU Usage
The performance may degrade with increased usage of the CPU and applications.
Root cause: Not many executor threads, but high CPU on a few nodes and only a few tables accessed
Required Action: In an SAP Business Warehouse system, check for non-even partitioning of huge column tables. For more information, see SAP Note 1819123 - BW on SAP HANA SP5: landscape redistribution and Indicator for Large Delta Storage of Column Store Tables .
Root Cause: Performance degradation caused by a huge number of MVCC versions
Required Action: Check the KPI Active Versions in the Performance Monitor to find the MVCC garbage collection blocker, then cancel it via the Threads tile on the Overview page. For more information, see Troubleshooting Blocked Transactions.
Refer also to the section 'Analysis of Former Performance Problems' in the topic HANA System-wide Performance Analysis.
## Related Information
Indicator for Large Delta Storage of Column Store Tables [page 131]
Blocked Transactions [page 167]
HANA System-wide Performance Analysis [page 46]
SAP Note 1819123 |
## 2.4.2 Slow Individual SQL Statements | ## 2.4.2 Slow Individual SQL Statements
This section looks at the causes of slow individual SQL statements, where there is no significant performance issue at system level, and the associated troubleshooting steps.
If the following doesn't help to resolve the problem, see Getting Support . In this case, to analyze further, collect the explain plan, plan visualizer file, performance trace and catalog export. Also refer to SQL Statement Analysis and Query Plan Analysis for more information. SAP KBA 2000002 - FAQ: SAP HANA SQL Optimization explains general information about SQL optimization.
## Related Information
SQL Statement Analysis [page 238]
Query Plan Analysis [page 248]
SAP Note 2000002
Getting Support
## 2.4.2.1 A Statement is Sometimes Slow and Sometimes Fast | ## 2.4.2.1 A Statement is Sometimes Slow and Sometimes Fast
There are a number of things to check when you experience inconsistent query execution time.
Check the following possibilities:
Root Cause: Same query shows different execution plans according to the HANA SQL Plan Cache.
Required Action: Analyze the difference in the execution plan by using the Explain Plan tool as outlined in SAP Note 2410208 Collect Explain Plan of a Prepared Statement .
Once you have found the corresponding statement hash for a problematic query, you can check how many different plans are stored in the Plan Cache by running the following query:
```
SELECT PLAN_ID, STATEMENT_HASH FROM M_SQL_PLAN_CACHE WHERE STATEMENT_HASH = '<statement_hash>';
```
Typically, the result will show multiple rows for a single statement hash value each with a different PLAN_ID value. Using the Plan IDs listed in the result you can execute the statements given in SAP Note 2410208 to display the execution plan and identify the differences there.
Additionally, you can check why a new execution plan was generated by selecting the column LAST_INVALIDATION_REASON. The reasons shown there are discussed in SAP KBA 2124112 - FAQ: SAP HANA Parsing ('How can I determine the reason for a statement recompilation?').
If you want to prevent this from happening consider using the Plan Stability feature which captures the execution plans of the SQL statements and applies them later when the statements are compiled again. In this way Plan Stability can provide consistent performance across different revisions. You can use Plan Stability in SAP HANA Cockpit or from the command line as described in the SQL Plan Stability section of this guide. The following SAP Note gives additional information: 2639193 SAP HANA SQL Plan Stability .
Root Cause: If a related table was unloaded, it takes time to reload it before the query can run.
Required Action: Check the unload trace and execute the query again after the table is fully loaded. You can refer to the LOADED column of M_CS_TABLES.
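The load status can be checked, for example, with a query like the following (the table name is a placeholder):

```
-- LOADED indicates whether the column table (partition) is in memory
SELECT TABLE_NAME, PART_ID, LOADED
FROM M_CS_TABLES
WHERE TABLE_NAME = '<table_name>'
```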
Root Cause: Query compilation time is long.
Required Action: Check the execution time after adding WITH HINT (IGNORE_PLAN_CACHE) at the end of the query. This hint causes the query to be compiled on every execution. If a long-running compiled query plan has been evicted frequently from the plan cache, increase the plan cache size. For more information, see SQL Plan Cache Analysis .
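As a sketch, with a hypothetical query (schema and table names are placeholders, not taken from this guide):

```
-- Forces compilation on every execution; compare elapsed time with the cached run
SELECT COUNT(*) FROM "<schema>"."<table>" WITH HINT (IGNORE_PLAN_CACHE)
```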
Root Cause: Merge status of column table can affect query plan
Required Action: Check the MEMORY_SIZE_IN_DELTA, RAW_RECORD_COUNT_IN_DELTA, and LAST_MERGE_TIME columns of M_CS_TABLES to see whether there is a large amount of data in the delta storage. Check M_DELTA_MERGE_STATISTICS to see when the last merge occurred. For more information, see Delta Merge .
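The delta sizes can be checked with a query along these lines (a sketch; the table name is a placeholder):

```
-- Large delta storage or an old LAST_MERGE_TIME points to a missing delta merge
SELECT TABLE_NAME, MEMORY_SIZE_IN_DELTA, RAW_RECORD_COUNT_IN_DELTA, LAST_MERGE_TIME
FROM M_CS_TABLES
WHERE TABLE_NAME = '<table_name>'
ORDER BY MEMORY_SIZE_IN_DELTA DESC
```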
## Related Information
SQL Plan Cache Analysis [page 185]
Delta Merge [page 129]
SQL Plan Stability [page 284]
SQL Plan Stability (SAP HANA Cockpit)
SAP Note 2410208
SAP Note 2124112
SAP Note 2639193
## 2.4.2.2 A Statement is Slower After an Upgrade | ## 2.4.2.2 A Statement is Slower After an Upgrade
After an upgrade, the query execution time can be different because of changes in the query execution plan.
Root cause: After an upgrade, a query can have a different plan, which leads to a different execution time.
Required Action: If you have an instance running on an older revision, compare the plan and collect the plan visualizer file. See Getting Support for further help.
Refer also to the SQL Plan Stability section of this guide; this feature offers the option to preserve a query's execution plan by capturing an abstraction of the plan and reusing it after the upgrade to regenerate the original plan and retain the original performance.
For more information, see Expensive SQL Statements .
## Related Information
Expensive SQL Statements [page 77]
Analyzing SQL Execution with the Plan Visualizer [page 253]
SQL Plan Stability [page 284]
Getting Support
## 2.4.2.3 A Query on Multiple Nodes Can Be Slower | ## 2.4.2.3 A Query on Multiple Nodes Can Be Slower
In distributed systems, query execution can be routed to other nodes for better performance. However, there is a chance of having slow performance in the case where the network used for transferring intermediate results generated during query execution is slow or where there is an inefficient network configuration.
Root cause: A misconfigured network can lead to slow queries.
Required Action: Check your network configuration and its bandwidth/latency among SAP HANA servers. For more information see Network View and SAP Note 2183363 - Configuration of SAP HANA internal network .
Root cause: Statement routing and large data transfers among distributed nodes can cause query execution times to differ, depending on the anchor node.
Required Action: Check how much data is transferred among distributed nodes and the network performance. You can use the PlanViz Tool to check which nodes in the network are involved during the execution of the query and how much data is transferred (see Overview Page, Plan Viz). The KPI values for the distribution show:
- Number of nodes
- Number of network transfers
- Network traffic
Consider the locations of joined tables to reduce transferred intermediate result size.
## Related Information
Network View [page 261]
Performance Trace [page 289]
Overview Page [page 256]
SAP Note 2183363 |
## 2.4.2.4 Slow Select for all Entries (FAE) Query | ## 2.4.2.4 Slow Select for all Entries (FAE) Query
There are a number of points to check if you have performance issues with SELECT FOR ALL ENTRIES (FAE) from an ABAP query.
For FAE on SAP HANA, please generally refer to SAP Note 1662726 - Optimization of select with FOR ALL ENTRIES on SAP HANA database .
Root cause: Indexes are missing.
Required Action: Check the WHERE clause and verify that concat indexes exist for all fields used in the WHERE clause.
Root cause: Due to DBSL behavior, a slower operator can be chosen.
Required Actions: Apply parameters for ABAP optimization. For more information, see SAP Note 1987132 SAP HANA: Parameter setting for SELECT FOR ALL ENTRIES .
Root cause: When using less than or greater than ('<' or '>') filters in an FAE query, it can be slower than having no filter.
Required Actions: Do not use less than or greater than ('<' or '>') filters in an FAE query.
## Related Information
SAP Note 1662726
SAP Note 1987132
## 2.4.2.5 All Statements Accessing a Specific Table are Slow | ## 2.4.2.5 All Statements Accessing a Specific Table are Slow
If queries run on specific tables are slow, check if there are too many row versions.
Root cause: If there are too many versions of single records, accessing the table can be slow, even though the number of system-wide MVCC versions is in the acceptable range. To verify whether there are too many versions for a specific table, check the result of this query:
```
SELECT * FROM M_RS_TABLE_VERSION_STATISTICS where table_name='mytable';
```
This can be caused by a cursor unnecessarily being held on a query result or a long-lived transaction without a commit/rollback.
Required Action: Applications should commit as early as possible or close any cursors that are not needed. Check the application logic to see whether it really needs to update single records frequently. |
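To find candidate long-lived transactions, a query such as the following can help (a sketch; the column selection is an assumption based on the standard M_TRANSACTIONS monitoring view, not taken from this guide):

```
-- Long-running active transactions are candidates for holding old record versions
SELECT HOST, CONNECTION_ID, TRANSACTION_ID, TRANSACTION_STATUS, START_TIME
FROM M_TRANSACTIONS
WHERE TRANSACTION_STATUS = 'ACTIVE'
ORDER BY START_TIME
```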
## 2.4.3 Frequent Out of Memory (OOM) | ## 2.4.3 Frequent Out of Memory (OOM)
If Out Of Memory situations happen frequently, this can also lead to a performance drop due to table unloads or memory shrink jobs.
First check Memory Problems and SAP KBA 1999997 - FAQ: SAP HANA Memory as they provide information on SAP HANA memory and its problems.
This section introduces common problems and their troubleshooting steps.
## Related Information
Memory Problems [page 65]
SAP Note 1999997
## 2.4.3.1 Out of Memory Caused by Sudden Increased Memory Usage | ## 2.4.3.1 Out of Memory Caused by Sudden Increased Memory Usage
Check the memory consumption of statements in the event of OOMs caused by suddenly increased memory usage.
Root cause : Huge intermediate results during query processing.
Required Actions: Enable memory tracking by setting the following parameters to on in the resource_tracking section of the global.ini file:
- enable_tracking = on
- memory_tracking = on
Enable the Expensive Statement Trace by setting the status to Active .
Then, check the memory consumption of statements using M_EXPENSIVE_STATEMENTS.MEMORY_SIZE. After your analysis you can optimize any problematic queries that were found.
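The steps above can be sketched as follows (the parameter names are as given above; the final query simply orders statements by tracked memory size):

```
-- Enable resource and memory tracking (global.ini, resource_tracking section)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('resource_tracking', 'enable_tracking') = 'on',
      ('resource_tracking', 'memory_tracking') = 'on' WITH RECONFIGURE;

-- After the expensive statement trace has been active, list top memory consumers
SELECT STATEMENT_STRING, MEMORY_SIZE
FROM M_EXPENSIVE_STATEMENTS
ORDER BY MEMORY_SIZE DESC;
```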
## 2.4.3.2 Out of Memory Caused by Continuously Increased Memory Usage | ## 2.4.3.2 Out of Memory Caused by Continuously Increased Memory Usage
Check if many statements trigger an Out Of Memory in a system where used memory continuously increases.
Root cause: Commit/rollback within a stored procedure can lead to memory leaks. Do not use exec('commit') or exec('rollback') within a stored procedure. If this syntax is used, the system cannot reclaim the memory used for query execution because the related transactions are left hanging.
Required Actions: Remove exec('commit') or exec('rollback'). If you would like to use commit/rollback within a stored procedure, see the SAP HANA SQLScript Reference for more information.
Root cause: Due to memory leakage caused by a programming error
Required Action: Check the Performance Monitor in SAP HANA cockpit ( Overview > Memory Usage ) to determine whether used memory continuously increases without a significant increase in data size. If you find a suspicious component that keeps allocating memory, create a support ticket attaching a full system dump, an mm trace, and the output of _SYS_STATISTICS.HOST_HEAP_ALLOCATORS_BASE.
## Related Information
SAP HANA SQLScript Reference |
## 2.4.3.3 Out of Memory While High Usage Of Column Store Components' Allocators | ## 2.4.3.3 Out of Memory While High Usage Of Column Store Components' Allocators
If a system is undersized, Out Of Memory (OOM) can happen frequently.
Root cause: Undersized memory
Required Action:
1. Check the top allocators in the [MEMORY_OOM] section of the OOM dump to see whether they belong to column store components, as described in section 13 of KBA 1999997 - FAQ: SAP HANA Memory .
2. Check the unload trace to see whether frequent unloading of tables took place.
3. Reduce the amount of data in column store or increase physical memory.
## Related Information
SAP Note 1999997 |
## 2.4.3.4 Out of Memory Caused by Large Memory Usage of Statistics Server | ## 2.4.3.4 Out of Memory Caused by Large Memory Usage of Statistics Server
This case can happen if the majority of memory used by the statisticsserver is due to many alerts or undersized memory.
Root cause: Due to big STATISTICS_ALERTS_BASE table size, an OOM can occur.
Required Action: Check table size of _SYS_STATISTICS.STATISTICS_ALERTS_BASE and truncate STATISTICS_ALERTS_BASE from hdbsql based on solution from SAP Note 2170779 - SAP HANA DB: Big statistics server table leads to performance impact on the system .
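To check the current size of the alerts table before applying the solution from SAP Note 2170779, a query such as the following can be used (a sketch; the column names follow the standard M_TABLES monitoring view):

```
-- Size and record count of the statistics server alerts table
SELECT SCHEMA_NAME, TABLE_NAME, RECORD_COUNT, TABLE_SIZE
FROM M_TABLES
WHERE SCHEMA_NAME = '_SYS_STATISTICS'
  AND TABLE_NAME = 'STATISTICS_ALERTS_BASE'
```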
Root cause: Big Pool/Statistics allocator size
Required Action: Check SAP Note 2147247 - FAQ: SAP HANA Statistics Server first.
## Related Information
SAP Note 2170779
SAP Note 2147247 |
## 2.4.3.5 Out of Memory Occurs due to High Usage of Shared Memory | ## 2.4.3.5 Out of Memory Occurs due to High Usage of Shared Memory
Shared memory is space where system information and row store tables are stored.
Check the following if the used amount of shared memory is high.
Root cause: Severely fragmented row store tables
## Required action:
1. Check whether SHARED_MEMORY in the [MEMORY_OOM] -> IPMM short info section of the OOM dump is unusually high.
2. Apply SAP Note 1813245 - SAP HANA DB: Row store reorganization .
Root cause: Memory shortage is caused by high usage of memory of row store tables in an SAP Business Warehouse (BW) system
Required action: If, in an SAP Business Warehouse system, the used amount of shared memory is high and SAP Note 1813245 doesn't recommend row store reorganization, first apply SAP Note 706478 - Preventing Basis tables from increasing considerably , then apply SAP Note 1813245 - SAP HANA DB: Row store reorganization again.
Root cause: Memory shortage is caused by high usage of memory of row store tables in non-SAP Business Warehouse (BW) systems
Required Action: Check if you can convert some row store tables into column store tables or archive old data to reduce the memory size, or else increase the system memory.
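As a sketch, a row store table can be converted to column store with an ALTER TABLE statement (the table name is a placeholder; test such conversions in a non-production system first):

```
-- Convert a row store table to column store to reduce shared memory usage
ALTER TABLE "<schema>"."<table>" COLUMN
```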
## Related Information
SAP Note 1813245
SAP Note 706478 |
## 2.4.4 Table Consistency Check | ## 2.4.4 Table Consistency Check
This section gives basic information for solving problems related to the Table Consistency Check.
The table consistency check is a procedure available in the SAP HANA database that performs a range of consistency check actions on database tables. It can be run from the command line or scheduled within the statistics service. Alert #83 Table Consistency is generated if the consistency check detects a given number of errors.
## Monitoring Views
Results of all consistency checks, whether executed manually or scheduled in the statistics service, are available in two monitoring views:
- M_CONSISTENCY_CHECK_HISTORY
- M_CONSISTENCY_CHECK_HISTORY_ERRORS
The first view gives high-level information about the parameters used for the check, the runtime duration, and the total number of tables with errors; each run is identified by a CHECK_EXECUTION_ID value. The errors view lists and gives details of all errors found within each check run.
Additionally, the monitoring view M_CS_TABLES holds the values LAST_CONSISTENCY_CHECK_TIME and LAST_CONSISTENCY_CHECK_ERROR_COUNT.
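These columns can be queried directly, for example to list tables whose last check reported errors (a sketch based on the columns named above):

```
-- Column store tables whose last consistency check found errors
SELECT SCHEMA_NAME, TABLE_NAME, LAST_CONSISTENCY_CHECK_TIME, LAST_CONSISTENCY_CHECK_ERROR_COUNT
FROM M_CS_TABLES
WHERE LAST_CONSISTENCY_CHECK_ERROR_COUNT > 0
```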
## Common Errors
If you encounter errors when running the table consistency check, you should first try to rerun the check, as some errors may only be due to timing issues related to the current workload and would not occur again if the check is repeated.
If the same error recurs, check the following trace files:
- Database trace file available in SAP HANA Studio or Database Explorer: <service>_<host>.<port>.<counter>.trc
- Dedicated trace file: <service>_<host>.<port>.table_consistency_check.<timestamp>.trc. For this trace file the parameter enable_table_consistency_check_trace must be enabled.
If necessary, increase the trace level to debug to get sufficient diagnostic information:
```
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('trace', 'table_consistency_check') = 'debug'
```
Search the trace file for any errors and make a note of the error code. This information will be helpful if it is finally necessary to contact SAP Support for assistance.
## More Information
For background information see Table Consistency Check in the SAP HANA Administration Guide . Refer also to the following SAP Notes:
- 2116157 FAQ: SAP HANA Consistency Checks and Corruptions
- 1977584 Technical Consistency Checks for SAP HANA Databases
## Related Information
SAP Note 2116157
SAP Note 1977584 |
## 3 Root Causes and Solutions | ## 3 Root Causes and Solutions
This section provides detailed information on the root causes of problems and their solutions.
System alerts are a good indicator for the underlying problem. The SAP Notes you will be directed to are a source of background information, explanations, alternative options, FAQs or useful SQL statements, for example. Detailed monitoring and administration of the SAP HANA databases is possible via the SAP HANA cockpit and the SAP HANA database explorer. |
## 3.1 Memory Problems | ## 3.1 Memory Problems
This section discusses the analysis steps that are required to identify and resolve memory related issues in the SAP HANA database.
For general information on SAP HANA memory management see the SAP HANA Academy video SAP HANA Memory Management which discusses the memory concept and comments on the difference between used, allocated, and resident memory.
Alerts related to memory problems and corresponding SAP Notes are documented in the Alerts reference table (see category 'Memory'). SAP Note 1840954 - Alerts related to HANA memory consumption includes information on how to analyze out-of-memory (OOM) dump files.
The SAP HANA Administration Guide gives details on using the tool SAP HANA cockpit to analyze memory problems.
In order to understand the current and historic SAP HANA memory consumption you can use the following tools and approaches:
- Memory information in SAP HANA cockpit
- Memory information from logs and traces
- Memory information from SQL commands
- Memory information from other tools (see the tools overview in the SAP HANA Administration Guide ).
## Related Information
SAP Note 1840954
Alerts Reference [page 302]
SAP HANA Administration Guide
Video HANA Academy: SAP HANA Memory Management |
## 3.1.1 Memory Information in SAP HANA Cockpit | ## 3.1.1 Memory Information in SAP HANA Cockpit
There are a number of sources of information in SAP HANA cockpit that can assist you in understanding memory utilization.
- Open the Manage Services page from the Overall Database Status tile for high-level information about physical memory, allocation limit, and used memory for each service.
- To get a graphical overview about physical memory, allocation limit, used memory, and resident memory open the Performance Monitor from the Memory Usage tile.
- Open the Memory Analysis app from the Memory Usage tile for details about memory utilization as well as history information. For example, click the Components tab in the Memory Analysis app to view the used memory grouped by different components like "Statement Execution & Intermediate Results" or "Column Store Tables". When you choose a component, the corresponding historic information of memory usage is displayed by a graph. |
## 3.1.2 Memory Information from Logs and Traces | ## 3.1.2 Memory Information from Logs and Traces
In case of critical memory issues you can often find more detailed information in logs and trace files.
- Try to identify memory-related errors in the alert trace files in the SAP HANA database explorer (accessible via the View trace and diagnostic files link). Search for the strings 'memory' , 'allocat' , or 'OOM' (case-insensitive).
- Check if an out-of-memory (OOM) trace file was created (users of SAP HANA Studio may need to view some trace files at the operating system level - see SAP Note 2573851 OOM rtedump file missing in HANA Studio ).
- Investigate error messages seen on the application side that occurred at times of high memory usage. If the application is an SAP NetWeaver system, good starting points for analysis are System Log (SM21), ABAP Runtime Error (ST22), and Job Selection (SM37).
If help from SAP Customer Support is needed to perform an in-depth analysis, the following information is valuable and should be added to the ticket:
- Diagnosis information (full system info dump). To collect this information, see section Diagnosis Information .
- Performance trace provides detailed information on the system behavior, including statement execution details. The trace output is written to a trace file perftrace.tpt, which must be sent to SAP Customer Support. Please note that the trace must be captured while the issue is present or while reproducing the issue. To enable this trace, see section Performance Trace under Additional Analysis Tools for Support.
If specific SAP HANA system components need deeper investigation, SAP Customer Support can ask you to raise the corresponding trace levels to INFO or DEBUG, rerun the query and then send the indexserver trace files to SAP Customer Support.
To do so choose Trace Configuration in the SAP HANA database explorer and launch the Database Trace wizard. For example, to enable the join engine trace (join_eval), you can enter a search string, select the component in the indexserver.ini file, and change the System Trace Level to the appropriate value. Some trace components (such as debug trace details for the join engine) can create many megabytes of trace information and may also require an increase of the values maxfiles and maxfilesize in the [trace] section of the global.ini file (see also SAP Note 2629103 SQL Traces Stop Writing When it Exceeds the Maximum Filesize ).
Internal details about SQL statement execution can be collected by enabling the Executor Trace; this provides the highest level of detail and should only be activated for the short time of query execution. After capturing the trace details you can upload the trace file (extrace.py) to SAP Customer Support. This trace is enabled by activating a configuration parameter as follows: open the Configuration of System Properties view, edit the parameter trace in the [pythontrace] section of the executor.ini file, and change its value to on . The trace parameter is off by default. When you change the default value, you have the option to set values at each specific layer: click Change Layer to change the configuration value for a selected database and a selected layer.
## Related Information
Diagnosis Information [page 293]
Performance Trace [page 289]
SAP Note 2629103
SAP Note 2573851 |
## 3.1.3 Memory Information from SQL Commands | ## 3.1.3 Memory Information from SQL Commands
There are a number of ways to analyze memory usage based on pre-defined and modifiable SQL queries.
The Statement Library of SAP HANA database explorer provides a set of tabular views to display the memory consumption of loaded tables based on pre-defined SQL queries:
- The view Component Memory Usage shows the aggregated memory consumption in bytes for different heap memory service components, services, and ports. Order by the used memory size column and find the largest consumers. The following heap memory service components are available: 1) Column Store Tables, 2) Row Store Tables, 3) Statement Execution & Intermediate Results, 4) Monitoring & Statistical Data, 5) Caches, 6) System, 7) Code Size, 8) Stack Size.
- The view Schema Size of Loaded Tables displays the aggregated memory consumption of loaded tables in MB for different database schemas. The aggregation comprises both column store and row store tables. Order by the schema size column and find the largest consumers.
- The view Used Memory by Tables shows two values: the total memory consumption of all column store tables in MB and the total memory consumption of all row store tables in MB.
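The pre-defined views are based on the standard monitoring views. For example, the aggregated heap component usage per service can be approximated with a query like the following (a sketch assuming the standard M_SERVICE_COMPONENT_MEMORY monitoring view; not the exact statement behind the cockpit view):

```
-- Used memory per heap memory service component, largest consumers first
SELECT HOST, PORT, COMPONENT,
       ROUND(SUM(USED_MEMORY_SIZE) / 1024 / 1024 / 1024, 2) AS USED_GB
FROM M_SERVICE_COMPONENT_MEMORY
GROUP BY HOST, PORT, COMPONENT
ORDER BY USED_GB DESC
```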
## Note
You can find this and other information in the Memory Analysis app of the SAP HANA cockpit. For more information, see Analyze Memory Statistics in the SAP HANA Administration Guide .
## SQL Statement Collection
SAP Note 1969700 - SQL Statement Collection for SAP HANA contains several commands that are useful to analyze memory-related issues. Based on your needs you can configure restrictions and parameters in the sections marked with /* Modification section */ .
Some of the memory-related statements which you may find useful are described here:
- HANA_Memory_Overview
- HANA_Memory_TopConsumers* - available in two versions depending on the release. The output of the statement is the same in both cases:
- HANA_Memory_TopConsumers_History (available in older versions of the statement collection)
- HANA_Memory_TopConsumers_TimeSlices (supersedes 'Consumers History' in versions SPS 04 and above)
- HANA_Tables_LargestTables
- HANA_Memory_SharedMemory
## HANA_Memory_Overview
This query gives an overview of current memory information showing key values in the following columns:
- NAME: Description
- TOTAL_GB: Total size (GB)
- DETAIL_GB: Size on detail level (GB) - shown at two levels, with a further breakdown in the DETAIL2_GB column - see the following table:
| NAME | TOTAL_GB | DETAIL_GB | DETAIL2_GB |
|--------------------------------------|------------|-----------------------------|---------------------------------|
| User-defined global allocation limit | not set | | |
| License memory limit | 256 | | |
| License usage | 143 | 124 (2014/03/01-2014/03/31) | |
| | | 143 (2014/04/01-2014/04/30) | |
| | | 113 (2014/05/01-2014/05/09) | |
| Physical memory | 256 | 256 (vhbshk0sdb) | |
| HANA instance memory (allocated) | 113 | 113 (vhbshk0sdb) | |
| HANA instance memory (used) | 85 | 85 (vhbshk0sdb) | |
| HANA shared memory | 23 | 23 (vhbshk0sdb) | |
| HANA heap memory (used) | 52 | 52 (vhbshk0sdb) | 10 (Pool/NameIdMapping/RoDict) |
## HANA_Memory_TopConsumers_History / HANA_Memory_TopConsumers_TimeSlices
The output shows the areas in history with the highest memory requirements (column store and row store tables, heap, code, stack). The following columns contain the key values:
- SAMPLE_TIME: Timestamp
- KEY_FIGURE: Memory key figure
- VALUE_TOTAL: Total value of memory key figure
- DETAIL_<n>: Top <n> detail area related to memory key figure
- VALUE_<n>: Memory key figure value of top <n> detail area
- PCT_<n>: Fraction of overall value occupied by top <n> detail area (%)
| SAMPLE_TIME | KEY_FIGURE | VALUE_TOTAL | DETAIL_1 | VALUE_1 | PCT_1 |
|-----------------|-----------------|-------------|-------------------------------------------------------------|-----------|---------|
| 2014/04/22(TUE) | Space used (GB) | 408.55 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 85.94 | 21 |
| 2014/04/21(MON) | Space used (GB) | 382.82 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 90.44 | 24 |
| 2014/04/20(SUN) | Space used (GB) | 437.76 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 126.2 | 29 |
| 2014/04/19(SAT) | Space used (GB) | 513.7 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 133.29 | 26 |
| 2014/04/18(FRI) | Space used (GB) | 445.53 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 112.35 | 25 |
| 2014/04/17(THU) | Space used (GB) | 363.3 | Pool/RowEngine/QueryExecution | 72.54 | 20 |
| 2014/04/16(WED) | Space used (GB) | 487.59 | Pool/RowEngine/QueryExecution | 104.39 | 21 |
| 2014/04/15(TUE) | Space used (GB) | 588.56 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 168.21 | 29 |
| 2014/04/14(MON) | Space used (GB) | 3484.97 | Pool/JoinEvaluator/ JERequestedAttributes/Results | 2795.2 | 80 |
| 2014/04/13(SUN) | Space used (GB) | 572.9 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 175.6 | 31 |
| 2014/04/12(SAT) | Space used (GB) | 477.98 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 111.76 | 23 |
| 2014/04/11(FRI) | Space used (GB) | 491.72 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 119.74 | 24 |
| 2014/04/10(THU) | Space used (GB) | 463.47 | Pool/PersistenceManager/ PersistentSpace(0)/DefaultLPA/Page | 100.82 | 22 |
## HANA_Tables_LargestTables
This query provides an overview of current memory allocation by tables. The following list explains the columns displayed in the table:
- OWNER: Name of the table owner
- TABLE_NAME: Name of the table
- S: Table store ('R' for row store, 'C' for column store)
- HOST: Host name ('various' in case of partitions on multiple hosts)
- U: 'X' if at least one unique index exists for the table
- POS: Position of table in top list
- COLS: Number of table columns
- RECORDS: Number of table records
- SUM_DISK_GB: Total size on disk (in GB, table + indexes + LOB segments)
- SUM_MEM_GB: Total potential maximum size in memory (in GB, table + indexes + LOB segments)
| OWNER | TABLE_NAME | S | HOST | U | POS | COLS | RECORDS | SUM_DISK_GB | SUM_MEM_GB |
|---------|------------------|-----|----------|-----|-------|--------|------------|---------------|--------------|
| SAPSR3 | /BIC/AZOCEUO0500 | C | various | X | 1 | 16 | 877829360 | 63.9 | 76.15 |
| SAPSR3 | /BIC/AZOCZZO0400 | C | various | X | 2 | 33 | 965035392 | 63.45 | 70.1 |
| SAPSR3 | RSMONMESS | R | erslha33 | X | 3 | 19 | 170801504 | 27.92 | 54.21 |
| SAPSR3 | /BIC/AZFIGLO1300 | C | various | X | 4 | 60 | 652633189 | 47.2 | 53.23 |
| SAPSR3 | /BIC/AZSCXXO4400 | C | various | X | 5 | 26 | 1251448665 | 47.78 | 53.04 |
| SAPSR3 | /BIC/AZOCEUO0800 | C | various | X | 6 | 17 | 911830438 | 37.86 | 52.42 |
| SAPSR3 | /BIC/AZOCZZO2000 | C | various | X | 7 | 34 | 1200422292 | 46.5 | 50.08 |
| SAPSR3 | RSWR_DATA | R | erslha33 | X | 8 | 10 | 20471 | 36.88 | 36.88 |
| SAPSR3 | RSRWBSTORE | C | erslha33 | X | 9 | 5 | 14483956 | 36.18 | 36.2 |
| SAPSR3 | /BIC/AZMIEUO0200 | C | various | X | 10 | 52 | 403915330 | 28.58 | 33.05 |
| SAPSR3 | /BIC/AZSCXXO2900 | C | various | X | 11 | 275 | 183029330 | 30.82 | 29.26 |
| SAPSR3 | /BIC/AZSCXXO4700 | C | various | X | 12 | 42 | 648103462 | 27.38 | 28.83 |
| SAPSR3 | /BIC/FZRREUC16B | C | erslha35 | | 13 | 122 | 258261262 | 26.43 | 24.99 |
| SAPSR3 | /BIC/AZOCEUO9000 | C | various | X | 14 | 16 | 251896248 | 20.53 | 23.71 |
| SAPSR3 | RSBMNODES | R | erslha33 | X | 15 | 12 | 130344869 | 13.67 | 20.25 |
| SAPSR3 | /BIC/AZSCXXO1400 | C | various | X | 16 | 279 | 164509638 | 18.49 | 19.82 |
| SAPSR3 | /BIC/AZOCEUO0300 | C | various | X | 17 | 27 | 577787981 | 17.95 | 19.6 |
| SAPSR3 | EDI40 | R | erslha33 | X | 18 | 7 | 5733625 | 18.26 | 18.4 |
| SAPSR3 | /BIC/FZOCZZC20 | C | various | | 19 | 34 | 1427403108 | 18.97 | 17.8 |
| SAPSR3 | /BIC/AZSCXXO2600 | C | various | X | 20 | 306 | 95251083 | 16.65 | 16.97 |
| SAPSR3 | /BIC/AZSCXXO0800 | C | various | X | 21 | 266 | 120598787 | 18.76 | 15.62 |
| SAPSR3 | /BIC/AZSPXXO0200 | C | various | X | 22 | 48 | 270975902 | 12.63 | 15.3 |
| SAPSR3 | /BIC/AZOCZZO5000 | C | various | X | 23 | 33 | 215732874 | 14.16 | 14.92 |
| SAPSR3 | /BIC/AZSPXXO0300 | C | various | X | 24 | 56 | 275036362 | 13.09 | 14.77 |
| SAPSR3 | /BIC/AZOCEUO0600 | C | various | X | 25 | 16 | 663581081 | 12.29 | 14.34 |
| SAPSR3 | /BIC/AZOCZZO0700 | C | various | X | 26 | 41 | 350819182 | 15 | 14.32 |
| SAPSR3 | /BIC/FZRREUC16D | C | erslha40 | | 27 | 122 | 146620284 | 15.39 | 14.06 |
| SAPSR3 | /BIC/AZMDEUO0800 | C | various | X | 28 | 246 | 151145647 | 15.39 | 14 |
| SAPSR3 | /BIC/AZMIEUO1300 | C | various | X | 29 | 16 | 406548712 | 11.52 | 13.35 |
| SAPSR3 | /BIC/AZRREUO0100 | C | various | X | 30 | 167 | 202422848 | 15.36 | 13.24 |
## HANA_Memory_SharedMemory
This query shows currently used and allocated shared memory per host and service. The following list explains the columns displayed in the table:
- HOST: Host name
- PORT: Port name
- SERVICE_NAME: Service name
- SHM_ALLOC_GB: Allocated shared memory (GB)
- SHM_USED_GB: Used shared memory (GB)
- SHM_USED_PCT: Percentage of allocated memory used
| HOST | PORT | SERVICE_NAME | SHM_ALLOC_GB | SHM_USED_GB | SHM_USED_PCT |
|----------|--------|------------------|----------------|---------------|----------------|
| MyHost01 | 31001 | nameserver | 0.13 | 0.02 | 18.64 |
| MyHost01 | 31002 | preprocessor | 0 | 0 | 0 |
| MyHost01 | 31003 | indexserver | 24.5 | 11.21 | 45.76 |
| MyHost01 | 31005 | statisticsserver | 0.14 | 0.03 | 22.98 |
| MyHost01 | 31006 | webdispatcher | 0 | 0 | 0 |
| MyHost01 | 31007 | xsengine | 0.14 | 0.03 | 22.96 |
| MyHost01 | 31010 | compileserver | 0 | 0 | 0 |
## Related Information
SAP Note 1969700
Memory Analysis Using the SQL Statement Collection for Analysis and Health Checks [page 8] |
## 3.1.4 Memory Information from Other Tools | ## 3.1.4 Memory Information from Other Tools
Other tools are also available to analyze high memory consumption and out-of-memory situations.
A number of SAP Notes and 'How-to' documents are available to provide help with some of the most common questions and difficulties related to memory. The tool hdbcons provides expert functionality to analyze memory issues. You can use this tool (typically with guidance from the SAP Product Support team) to create runtime dump files and analyze the details of memory consumption. If necessary, dump files can be sent to Product Support for further analysis.
The following SAP Notes may be helpful in solving some problems and when analyzing memory issues with hdbcons .
- 1999997 FAQ: SAP HANA Memory
- 2400007 FAQ: SAP HANA Runtime Dumps
- 2222218 FAQ: SAP HANA Database Server Management Console (hdbcons)
- 1786918 Required information to investigate high memory consumption (hdbcons)
- 1813020 How to generate a runtime dump on SAP HANA
- 1984422 How-To: Analyzing SAP HANA Out-of-memory (OOM) Dumps
- How to troubleshoot HANA High Memory Consumption (Guided Answer)
## Tip
Guided Answers is a support tool for troubleshooting problems using decision trees. A guided answer is available for How to troubleshoot HANA High Memory Consumption .
## Related Information
SAP Note 2400007
SAP Note 2222218
SAP Note 1786918
SAP Note 1813020
SAP Note 1984422
How to troubleshoot HANA High Memory Consumption (Guided Answer)
## 3.1.5 Root Causes of Memory Problems | ## 3.1.5 Root Causes of Memory Problems
Once you have completed your initial analysis, you have the information required to start the next phase.
Based on the results from the analysis approaches you are now able to answer the following questions:
- Is it a permanent or a sporadic problem?
- Is the memory consumption steadily growing over time?
- Are there areas with critical memory consumption in heap, row store or column store?
- Is there a big difference between used memory and allocated memory?
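The last two questions can be approached with a query on the monitoring view M_SERVICE_MEMORY. The following is a sketch; exact column availability may vary between revisions:

```sql
-- Used vs. allocated memory per service, in GB
SELECT HOST, PORT, SERVICE_NAME,
       ROUND(TOTAL_MEMORY_USED_SIZE     / 1024 / 1024 / 1024, 2) AS USED_GB,
       ROUND(HEAP_MEMORY_ALLOCATED_SIZE / 1024 / 1024 / 1024, 2) AS HEAP_ALLOC_GB,
       ROUND(HEAP_MEMORY_USED_SIZE      / 1024 / 1024 / 1024, 2) AS HEAP_USED_GB
  FROM M_SERVICE_MEMORY
 ORDER BY TOTAL_MEMORY_USED_SIZE DESC;
```

A large gap between allocated and used heap memory points to the scenario described in Used Space Much Smaller than Allocated Space.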
In the following you can find typical root causes and possible solutions for the different scenarios.
A valuable tool for analyzing memory problems is the Expensive Statements Trace which is useful in cases where individual statements seem to exhaust the system's memory. This is described separately in the Tools and Tracing section: Analyzing Expensive Statements Traces.
## Related Information
Analyzing Expensive Statements Traces [page 243] |
## 3.1.5.1 Significant External Memory Consumption | ## 3.1.5.1 Significant External Memory Consumption
If the database resident memory of all SAP HANA databases on the same host is significantly smaller than the total resident memory, you have to check which processes outside of the SAP HANA databases are responsible for the additional memory requirements.
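As a rough check, you can compare the memory used by the SAP HANA instance with the memory used on the host overall. The following query is a sketch based on the monitoring view M_HOST_RESOURCE_UTILIZATION; the instance used size is not identical to resident memory, so the external share is only an approximation:

```sql
-- Approximate memory consumed outside this SAP HANA instance, in GB
SELECT HOST,
       ROUND(USED_PHYSICAL_MEMORY            / 1024 / 1024 / 1024, 2) AS HOST_USED_GB,
       ROUND(INSTANCE_TOTAL_MEMORY_USED_SIZE / 1024 / 1024 / 1024, 2) AS HANA_USED_GB,
       ROUND((USED_PHYSICAL_MEMORY - INSTANCE_TOTAL_MEMORY_USED_SIZE)
             / 1024 / 1024 / 1024, 2)                                 AS EXTERNAL_GB
  FROM M_HOST_RESOURCE_UTILIZATION;
```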
Typical memory consumers are:
- Operating system (for example, caches, mapping structures)
- Third party tools (for example, backup, virus scanner)
How to identify top memory consumers from non-SAP HANA processes is out of scope of this guide. However, when you are able to identify the reason for the increased memory consumption of the external program, you can check if it is possible to optimize its configuration. |
## 3.1.5.2 Space Consumed by Large Tables | ## 3.1.5.2 Space Consumed by Large Tables
If particularly large tables consume significant amounts of space in the row store or column store, you should check if the amount of data can be reduced.
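To identify candidates, you can list the largest tables per store. A sketch using the monitoring views M_CS_TABLES and M_RS_TABLES (column names may differ slightly between revisions):

```sql
-- Top 10 column store tables by total memory size
SELECT TOP 10 SCHEMA_NAME, TABLE_NAME,
       ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024, 0) AS MEMORY_MB, RECORD_COUNT
  FROM M_CS_TABLES
 ORDER BY MEMORY_SIZE_IN_TOTAL DESC;

-- Top 10 row store tables by allocated size
SELECT TOP 10 SCHEMA_NAME, TABLE_NAME,
       ROUND((ALLOCATED_FIXED_PART_SIZE + ALLOCATED_VARIABLE_PART_SIZE)
             / 1024 / 1024, 0) AS ALLOCATED_MB
  FROM M_RS_TABLES
 ORDER BY ALLOCATED_FIXED_PART_SIZE + ALLOCATED_VARIABLE_PART_SIZE DESC;
```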
The following references will be helpful:
- SAP Note 2388483 - How To: Data Management for Technical Tables describes archiving and deletion strategies for typical SAP tables with a technical background; for example, tables required for communication, logging or administration.
- General recommendations for managing data can be found on the Information Lifecycle Management page.
For more information on memory management for resident table data, see Managing Tables in the SAP HANA Administration Guide and the following SAP Notes:
- SAP Note 2222277 - FAQ: SAP HANA Column Store and Row Store
- SAP Note 2220627 - FAQ: SAP HANA LOBs
- SAP Note 2388483 - How-To: Data Management for Technical Tables
## Related Information
```
https://www.sap.com/products/information-lifecycle-management.html
Managing Tables
SAP Note 2222277
SAP Note 2220627
SAP Note 2388483
``` |
## 3.1.5.3 Internal Columns in the Column Store | ## 3.1.5.3 Internal Columns in the Column Store
For several reasons, SAP HANA creates internal columns in the column store. Some column store tables are automatically loaded into memory.
## Internal Columns
You may be able to optimize or remove internal columns in order to reduce memory usage. In some situations a cleanup is possible, for example, in the case of CONCAT attribute columns that were created in order to support joins. For more information see SAP Note 1986747: How-To: Analyzing Internal Columns in SAP HANA Column Store .
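As a first overview, the memory share of internal columns can be summarized from the monitoring view M_CS_ALL_COLUMNS. This is a sketch; SAP Note 1986747 contains the supported analysis queries:

```sql
-- Memory consumed by system-generated internal columns, grouped by type
SELECT INTERNAL_ATTRIBUTE_TYPE,
       COUNT(*)                                          AS NUM_COLUMNS,
       ROUND(SUM(MEMORY_SIZE_IN_TOTAL) / 1024 / 1024, 0) AS MEMORY_MB
  FROM M_CS_ALL_COLUMNS
 WHERE INTERNAL_ATTRIBUTE_TYPE IS NOT NULL
 GROUP BY INTERNAL_ATTRIBUTE_TYPE
 ORDER BY MEMORY_MB DESC;
```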
## Column Store Table Loads and Unloads
The SAP HANA system dynamically loads column store tables into memory during system restart and when required by the application. You may be able to optimize the number of pre-loaded tables. For more information, see SAP Note 2127458: FAQ: SAP HANA Loads and Unloads .
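Frequent unloads caused by memory shortage are a warning sign. A sketch of a check on the monitoring view M_CS_UNLOADS:

```sql
-- Column unloads of the last 24 hours by reason; a high number of
-- 'LOW MEMORY' unloads indicates memory pressure
SELECT REASON, COUNT(*) AS UNLOADS
  FROM M_CS_UNLOADS
 WHERE UNLOAD_TIME > ADD_DAYS(CURRENT_TIMESTAMP, -1)
 GROUP BY REASON
 ORDER BY UNLOADS DESC;
```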
## Related Information
```
SAP Note 2127458
SAP Note 1986747
``` |
## 3.1.5.4 Memory Leaks | ## 3.1.5.4 Memory Leaks
A memory leak is a memory area (typically a heap allocator) that grows over time without any apparent reason.
If you have identified a suspicious area, proceed as follows:
- Check for SAP Notes that describe the memory leak and provide a solution.
- Check if the problem is reproducible with a recent SAP HANA revision.
- If you can't resolve the problem yourself, open an SAP customer message using the component HAN-DB.
Before you open a support incident, refer to SAP Note 1786918 - Required information to investigate high memory consumption.
Information about a specific memory area can be obtained from the output of the following hdbcons command, passing the name of the heap allocator as a parameter. This information is also particularly helpful to the Support team when analyzing memory problems.
```
hdbcons "mm -bl <allocator_name>"
```
## Related Information
```
SAP Note 1786918
Problem Analysis Using hdbcons
``` |
## 3.1.5.5 Large Heap Areas | ## 3.1.5.5 Large Heap Areas
Some heap areas can be larger than necessary without being a memory leak.
SAP Note 1840954 - Alerts Related to HANA Memory Consumption contains an overview of heap allocators with a potentially large memory consumption and possible resolutions.
Refer also to SAP Note 1999997 - FAQ: SAP HANA Memory .
## Related Information
SAP Note 1840954 |
## 3.1.5.6 Expensive SQL Statements | ## 3.1.5.6 Expensive SQL Statements
SQL statements processing a high amount of data or using inefficient processing strategies can be responsible for increased memory requirements.
See SQL Statement Analysis for information on how to analyze expensive SQL statements during times of peak memory requirements.
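If memory tracking is enabled, the statements that consumed the most memory can be listed from the expensive statements trace. A sketch based on the monitoring view M_EXPENSIVE_STATEMENTS:

```sql
-- Traced statements ranked by peak memory consumption
SELECT TOP 10 START_TIME, DB_USER, APP_USER,
       ROUND(MEMORY_SIZE / 1024 / 1024, 0) AS MEMORY_MB,
       SUBSTRING(STATEMENT_STRING, 1, 100) AS STATEMENT_START
  FROM M_EXPENSIVE_STATEMENTS
 ORDER BY MEMORY_SIZE DESC;
```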
## Related Information
SQL Statement Analysis [page 238]
Setting a Memory Limit for SQL Statements [page 79]
Analyzing Expensive Statements Traces [page 243] |
## 3.1.5.7 Memory Consumption Related to Transactional Problems | ## 3.1.5.7 Memory Consumption Related to Transactional Problems
High memory consumption can be caused by problems with transactions.
In some cases, high memory consumption is caused by wait situations, which can have different reasons:
- Long-running or unclosed cursors
- Blocked transactions
- Hanging threads
One negative impact is that used memory is no longer released. In particular, the number of table versions can grow; more than 8,000,000 table versions is considered the point at which action is required.
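A first check for such wait situations is to look for blocked transactions. A sketch using the monitoring view M_BLOCKED_TRANSACTIONS (column names may vary slightly between revisions):

```sql
-- Transactions currently waiting for locks, and who holds them
SELECT BLOCKED_TRANSACTION_ID, LOCK_OWNER_TRANSACTION_ID,
       BLOCKED_TIME, LOCK_TYPE, WAITING_SCHEMA_NAME, WAITING_TABLE_NAME
  FROM M_BLOCKED_TRANSACTIONS
 ORDER BY BLOCKED_TIME;
```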
For more information, see the separate section Transactional Problems .
## Related Information
Transactional Problems [page 167] |
## 3.1.5.8 Used Space Much Smaller than Allocated Space | ## 3.1.5.8 Used Space Much Smaller than Allocated Space
In order to optimize performance by minimizing memory management overhead, or due to fragmentation, SAP HANA may allocate additional memory rather than reusing free space within the already allocated memory.
This can lead to the undesired effect that the SAP HANA memory footprint increases without apparent need.
The SAP HANA license checks against allocated space, so from a licensing perspective it is important to keep the allocated space below the license limit.
In order to limit the amount of allocated space, you can set the parameter global_allocation_limit to a value not larger than the maximum memory that should be allocated.
See Set the global_allocation_limit Parameter in the SAP HANA Administration Guide .
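For example, the limit can be set online with an SQL statement; the value 102400 MB (100 GB) is purely illustrative:

```sql
-- Cap the total memory SAP HANA may allocate (value is in MB)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memorymanager', 'global_allocation_limit') = '102400'
  WITH RECONFIGURE;
```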
## Related Information
Change the Global Memory Allocation Limit |
## 3.1.5.9 Fragmentation | ## 3.1.5.9 Fragmentation
Fragmentation effects are responsible for inefficiently used memory. They can occur in different areas.
In order to minimize fragmentation of row store tables, proceed as follows:
- If the fragmentation of row store tables in the shared memory segments of the indexserver process reaches 30% and the allocated memory size is greater than 10 GB, a row store reorganization is recommended.
SAP Note 1813245 - SAP HANA DB: Row Store Reorganization describes how to determine the degree of fragmentation and perform the reorganization.
## Related Information
SAP Note 1813245 |
## 3.1.5.10 Large Memory LOBs | ## 3.1.5.10 Large Memory LOBs
LOB (Large Object) columns can be responsible for significant memory allocation in the row store and column store if they are defined as memory LOBs.
To check for memory LOBs and switch to hybrid LOBs see SAP Note 1994962 - Activation of Hybrid LOBs in SAP HANA .
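For illustration, a LOB column is switched to a hybrid LOB with an ALTER TABLE statement such as the following; the schema, table, and column names are placeholders, and the threshold of 1000 bytes is an example value. Follow SAP Note 1994962 for the procedure supported on your revision:

```sql
-- Convert MYCOL to a hybrid LOB: values up to 1000 bytes stay in memory,
-- larger values are kept on disk (MYSCHEMA/MYTAB/MYCOL are placeholders)
ALTER TABLE "MYSCHEMA"."MYTAB" ALTER ("MYCOL" BLOB MEMORY THRESHOLD 1000);
```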
## Related Information
```
SAP Note 1994962
``` |
## 3.1.5.11 Large Delta Store | ## 3.1.5.11 Large Delta Store
The delta store can allocate a significant portion of the column store memory.
You can identify the current size of the delta store by running the script HANA_Tables_ColumnStore_Overview from the SQL Statement Collection for SAP HANA (SAP Note 1969700). If the delta store size is larger than expected, proceed as described in the section Delta Merge.
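Alternatively, a quick overview of delta sizes can be sketched directly from the monitoring view M_CS_TABLES, and a merge of a single affected table can be triggered manually (table names are placeholders):

```sql
-- Top 10 tables by delta store size
SELECT TOP 10 SCHEMA_NAME, TABLE_NAME,
       ROUND(MEMORY_SIZE_IN_DELTA / 1024 / 1024, 0) AS DELTA_MB,
       RAW_RECORD_COUNT_IN_DELTA
  FROM M_CS_TABLES
 ORDER BY MEMORY_SIZE_IN_DELTA DESC;

-- Trigger a delta merge for one table
MERGE DELTA OF "MYSCHEMA"."MYTAB";
```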
## Related Information
```
SAP Note 1969700
Delta Merge [page 129]
``` |
## 3.1.5.12 Undersized SAP HANA Memory | ## 3.1.5.12 Undersized SAP HANA Memory
If a detailed analysis of the SAP HANA memory consumption didn't reveal any root cause of increased memory requirements, it is possible that the available memory is not sufficient for the current utilization of the SAP HANA database.
In this case you should perform a sizing verification and make sure that sufficient memory is installed on the SAP HANA hosts. |
## 3.1.5.13 Setting a Memory Limit for SQL Statements | ## 3.1.5.13 Setting a Memory Limit for SQL Statements
You can set a statement memory limit to prevent single statements from consuming too much memory.
## Prerequisites
To apply these settings you must have the system privilege INIFILE ADMIN.
For these options, the parameters enable_tracking and memory_tracking must first be enabled in the resource_tracking section of the global.ini file. Resource tracking must also be enabled there if you wish to apply different settings for individual users (see Procedure below).
## Context
You can protect an SAP HANA system from uncontrolled queries consuming excessive memory by limiting the amount of memory used by single statement executions per host. By default, there is no limit set on statement memory usage, but if a limit is applied, statement executions that require more memory will be aborted when they reach the limit. To avoid canceling statements unnecessarily you can also apply a percentage threshold value which considers the current statement allocation as a proportion of the global memory currently available. Using this parameter, statements which have exceeded the hard-coded limit may still be executed if the memory allocated for the statement is within the percentage threshold. The percentage threshold setting is also effective for workload classes where a statement memory limit can also be defined.
You can also create exceptions to these limits for individual users (for example, to ensure an administrator is not prevented from doing a backup) by setting a different statement memory limit for each individual.
These limits only apply to single SQL statements, not the system as a whole. Tables which require much more memory than the limit applied here may be loaded into memory. The parameter global_allocation_limit limits the maximum memory allocation limit for the system as a whole.
You can view the (peak) memory consumption of a statement in M_EXPENSIVE_STATEMENTS.MEMORY_SIZE.
## Procedure
1. Enable statement memory tracking.
In the global.ini file, expand the resource_tracking section and set the following parameters to on :
- enable_tracking = on
- memory_tracking = on
2. statement_memory_limit - defines the maximum memory allocation per statement in GB. The parameter is not set by default.
- In the global.ini file, expand the memorymanager section and locate the parameter. Set an integer value in GB between 0 (no limit) and the value of the global allocation limit. Values that are too small can block the system from performing critical tasks.
- An option is also available to apply a limit in relation to the system's effective allocation limit. To do this, set the parameter to a value of '-1'. In this case a memory limit is applied calculated as 25% of the smaller of the global_allocation_limit and the process_allocation_limit.
- A value for this parameter may also be set during installation or upgrade by hdblcm.
- The value defined for this parameter can be overridden by the corresponding workload class property STATEMENT_MEMORY_LIMIT.
- When the statement memory limit is reached, a dump file is created with 'compositelimit_oom' in the name. The statement is aborted, but otherwise the system is not affected. By default only one dump file is written every 24 hours. If a second limit hits in that interval, no dump file is written. The interval can be configured in the memorymanager section of the global.ini file using the oom_dump_time_delta parameter, which sets the minimum time difference (in seconds) between two dumps of the same kind (and the same process).
After setting this parameter, statements that exceed the limit you have set on a host are aborted with an out-of-memory error.
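Steps 1 and 2 can also be performed from the SQL command line instead of editing the file in the cockpit. A sketch; the 50 GB limit is purely illustrative:

```sql
-- Step 1: enable statement memory tracking
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('resource_tracking', 'enable_tracking') = 'on',
      ('resource_tracking', 'memory_tracking') = 'on'
  WITH RECONFIGURE;

-- Step 2: abort single statement executions that allocate more than 50 GB
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memorymanager', 'statement_memory_limit') = '50'
  WITH RECONFIGURE;
```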
3. statement_memory_limit_threshold - defines a percentage of the global allocation limit. Parameter statement_memory_limit is respected only if total used memory exceeds this threshold. The default value is 0% (of the global_allocation_limit) so statement_memory_limit is always respected.
- In the global.ini file, expand the memorymanager section and set the parameter as a percentage of the global allocation limit.
- This parameter provides a means of controlling when the statement_memory_limit is applied. If this parameter is set, when a statement is issued the system will determine if the amount of memory it consumes exceeds the defined percentage value of the overall global_allocation_limit parameter setting. The statement memory limit is only applied if the current SAP HANA memory consumption exceeds this statement memory limit threshold as a percentage of the global allocation limit.
- This is a way of determining if a particular statement consumes an inordinate amount of memory compared to the overall system memory available. If so, to preserve memory for other tasks, the statement memory limit is applied and the statement fails with an exception.
- Note that the value defined for this parameter also applies to the workload class property STATEMENT_MEMORY_LIMIT.
4. total_statement_memory_limit - a value in gigabytes to define the maximum memory available to all statements running on the system. The default value is 0 (no limit).
- This limit does not apply to users with the administrator role SESSION ADMIN or WORKLOAD ADMIN, who need unrestricted access to the system. However, the check of the user's privileges that allows an administrator to bypass the limit is only made for the first request after a connection is made; the privileged user would have to reconnect to be able to bypass the statement memory limit again (see also Admission Control).
- The value defined for this parameter cannot be overridden by the corresponding workload class property TOTAL_STATEMENT_MEMORY_LIMIT.
- There is a corresponding parameter for use with system replication on an Active/Active (read enabled) secondary server. This is required to ensure that enough memory is always available for essential log shipping activity. See also sr_total_statement_memory_limit in section Memory Management .
5. User parameters can limit memory for statements. For further information, refer to Setting User Parameters for Workload .
## Results
The following example and scenarios show the effect of applying these settings:
Example showing statement memory parameters

| Parameter | Value |
|---|---|
| Physical memory | 128 GB |
| global_allocation_limit | The unit used by this parameter is MB. The default value is: 90% of the first 64 GB of available physical memory on the host plus 97% of each further GB; or, in the case of small physical memory, physical memory minus 1 GB. |
| statement_memory_limit | 1 GB (the unit used by this parameter is GB) |
| statement_memory_limit_threshold | 60% |
## Scenario 1
A statement allocates 2 GB of memory and the current used memory size in SAP HANA is 50 GB.
- 0.9 * 128 GB = 115.2 GB (global allocation limit)
- 0.6 * 115.2 GB = 69.12 GB (threshold)
- 50 GB < 69.12 GB (threshold not reached)
The statement is executed, even though it exceeds the 1 GB statement_memory_limit.
## Scenario 2
A statement allocates 2 GB and the current used memory size in SAP HANA is 70 GB.
- 70 GB >= 69.12 GB (threshold is exceeded)
As the threshold is exceeded, the statement_memory_limit is applied and the statement is cancelled.
## Related Information
Change the Global Memory Allocation Limit
Memory Management
Setting User Parameters for Workload
Managing Peak Load (Admission Control) |
## 3.2 CPU Related Root Causes and Solutions | ## 3.2 CPU Related Root Causes and Solutions
This section covers the troubleshooting of high CPU consumption on the system.
A persistently high CPU consumption leads to a considerably slower system, as no further requests can be processed. From an end user perspective, the application behaves slowly, is unresponsive, or can even seem to hang.
Note that high CPU utilization is actually desired behavior for SAP HANA and is nothing to worry about unless the CPU becomes the bottleneck. SAP HANA is optimized to consume all available memory and CPU; in particular, it parallelizes queries as much as possible in order to provide optimal performance. CPU usage near 100% during a query execution therefore does not automatically indicate a performance issue. |
## 3.2.1 Indicators of CPU Related Issues | ## 3.2.1 Indicators of CPU Related Issues
CPU related issues are indicated by alerts issued or in views in the SAP HANA cockpit.
The following alerts may indicate CPU resource problems:
- Host CPU Usage (Alert 5)
- Most recent savepoint operation (Alert 28)
- Savepoint duration (Alert 54)
You notice very high CPU consumption on your SAP HANA database from one of the following:
- Alert 5 (Host CPU Usage) is raised for current or past CPU usage
- The CPU usage displayed in the CPU Usage tile on the Overview screen
- The Performance Monitor shows high current or past CPU consumption. Furthermore, the CPU usage of the host as well as the individual servers is displayed. |
## 3.2.2 Analysis of CPU Related Issues | ## 3.2.2 Analysis of CPU Related Issues
This section describes how to analyze high CPU consumption using tools in the SAP HANA cockpit and the command line.
When analyzing high CPU consumption, you need to distinguish between the CPU resources consumed by HANA itself and by other, non-SAP HANA processes on the host. While the CPU consumption of SAP HANA will be addressed here in detail, the CPU consumption of other processes running on the same host is not covered. Such situations are often caused by additional programs running concurrently on the SAP HANA appliance such as anti-virus and backup software. For more information see SAP Note 1730928 - Using external software in a HANA appliance .
## Analyzing CPU Consumption in Cockpit
A good starting point for the analysis is the Overview page in the SAP HANA cockpit. It contains a tile that displays CPU usage. If you click on that tile, the Performance Monitor opens and you can view the SAP HANA CPU usage versus total CPU usage. If SAP HANA CPU usage is low while total CPU usage is high, the issue is most likely related to a non-SAP HANA process.
To find out what is happening in more detail, open the Threads tile (see Thread Monitoring ). In order to prepare it for CPU time analysis, perform the following steps:
- To switch on resource tracking, open the Configuration of System Properties monitor and, in the resource_tracking section of the global.ini file, set the enable_tracking parameter to on. See Memory Information from Logs and Traces.
- Display the CPU Time column in the Thread Monitor.
The Thread Monitor shows the CPU time of each thread running in SAP HANA in microseconds. A high CPU time of related threads is an indicator that an operation is causing the increased CPU consumption.
In order to identify expensive statements causing high resource consumption, turn on the Expensive Statement Trace and specify a reasonable runtime (see Expensive Statements Trace ). If possible, add further restrictive
criteria such as database user or application user to narrow down the amount of information traced. Note that the CPU time for each statement is shown in the column CPU_TIME if resource_tracking is activated.
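With resource tracking and the expensive statements trace active, CPU consumers can be ranked with queries such as the following sketches on M_SERVICE_THREADS and M_EXPENSIVE_STATEMENTS (column availability depends on the revision and the tracking settings):

```sql
-- Currently running threads ranked by CPU time (microseconds)
SELECT TOP 10 HOST, PORT, THREAD_TYPE, THREAD_METHOD, THREAD_STATE, CPU_TIME_SELF
  FROM M_SERVICE_THREADS
 ORDER BY CPU_TIME_SELF DESC;

-- Traced statements ranked by CPU time
SELECT TOP 10 START_TIME, DB_USER, APP_USER, CPU_TIME, DURATION_MICROSEC,
       SUBSTRING(STATEMENT_STRING, 1, 100) AS STATEMENT_START
  FROM M_EXPENSIVE_STATEMENTS
 ORDER BY CPU_TIME DESC;
```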
## General Analysis Resources
The topic 'HANA System-wide Performance Analysis' describes a general approach which can also be applied for CPU related issues. This includes the following actions:
- Generating a runtime dump
- Creating a kernel profiler trace
- Using the HANA Dump Analyzer tool
You can enable and configure the kernel profiler in the SAP HANA Database Explorer (Trace Configuration) or you can manage the kernel profiler from the SQL command line using the ALTER SYSTEM command. In both cases RESOURCE ADMIN or TRACE ADMIN privileges are required. Note that setting a maximum duration or memory limit for profiling is good practice and should be used if appropriate values can be estimated. More information about this tool can be found in Kernel Profiler .
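From the SQL command line, a profiling session follows the pattern sketched below. The exact statement options (sampling interval, memory limit, target service) vary by revision, so treat this as an assumed outline and consult the Kernel Profiler documentation:

```sql
-- Start sampling call stacks of all threads
ALTER SYSTEM START KERNEL PROFILER;
-- ... reproduce or wait for the high CPU load ...
-- Write the collected profile to trace files, then stop sampling
ALTER SYSTEM SAVE KERNEL PROFILER;
ALTER SYSTEM STOP KERNEL PROFILER;
```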
In addition to the runtime dump it may be possible to capture the current state of the system in a full system information dump for later analysis. However, this in itself requires resources and may therefore worsen the situation. To get a Full System Info Dump using SAP HANA cockpit, open Diagnosis Files via the link Manage full system information dumps under Alerting & Diagnostics and choose either a zip file from the list or create a new one via Collect Diagnostics . See also the topic 'Diagnosis Information' in the Tools and Tracing section.
## Tip
Guided Answers is a support tool for troubleshooting problems using decision trees. A guided answer is available for How to troubleshoot HANA High CPU Utilization .
## Related Information
SAP Note 1730928
Thread Monitoring
Memory Information from Logs and Traces [page 66]
Expensive Statements Trace [page 243]
HANA System-wide Performance Analysis [page 46]
Kernel Profiler [page 291]
Collect and Download Diagnosis Information
Diagnosis Information [page 293]
How to troubleshoot HANA High CPU Utilization (Guided Answer) |
## 3.2.3 Resolving CPU Related Issues | ## 3.2.3 Resolving CPU Related Issues
The first priority in resolving CPU related issues is to return the system to a normal operating state, which may complicate identifying the root cause.
## Immediate Remedial Action
Issue resolution should aim to bring the system back to a functional state by stopping the operation that causes the high CPU consumption. However, after the situation has been resolved it might no longer be possible to find the actual root cause; you should therefore consider recording an SAP HANA runtime dump before taking any countermeasures. See SAP Note 1813020 - How to generate a runtime dump on SAP HANA and Analysis of CPU Related Issues.
You can identify and stop the specific operation causing high CPU consumption using the Threads Monitor in SAP HANA Cockpit (see Thread Monitoring). Use the values in the columns Client Host, Client IP, Client PID and Application User to identify the user that triggered the operation. Contact the user to confirm what actions are currently being performed and agree on how to resolve the situation; two options are available:
- On the client side, end the process calling the affected threads
- Cancel the operation by selecting the thread in the Threads Monitor and choose Cancel Operation.
For further analysis of the root cause, you may need to open a ticket to SAP HANA Development Support and attach the Full System Info Dump.
## Analysis of the System Configuration
Under normal circumstances when the SAP HANA Database instance is correctly configured, exhaustion of CPU resources should not occur. Firstly, therefore, verify that the configuration conforms to SAP recommendations using the script HANA_Configuration_Parameters_2.00.040+ in the SQL Statement Collection for SAP HANA (attached to SAP Note 1969700). The output of this script will list all parameters deviating from recommended settings. In this situation pay particular attention to the workload management related parameters such as default_statement_concurrency_limit as outlined in section 'Controlling Parallel Execution of SQL Statements'.
## Scenario Illustrating Unusual Load Peaks
Even if the workload management related parameters are set as recommended, CPU resources may still be exhausted by unusual peaks in the workload. Consider the following scenario:
- A system is configured with 144 CPU cores
- The default_statement_concurrency_limit is set to 48 - therefore an application workload may consume 33.3% of the available CPU resources
- An application submits an OLAP workload request to the HANA database (with a higher degree of parallelism) which, within the foreseen workload management boundaries, requires 33% of the available resources.
If, however, the application submits three such requests via three different sessions simultaneously, this workload may easily exhaust the CPU resources. This illustrates the importance of finding the right workload management balance for your individual system; there is no "one-configuration-fits-all" approach, and you may therefore need to change the workload management related configuration parameters to fit your own specific application requirements. In the given scenario, for example, the solution may be to further decrease the concurrency degree of individual database requests by adjusting default_statement_concurrency_limit to even lower levels.
## Related Information
Analysis of CPU Related Issues [page 83]
Thread Monitoring
Controlling Parallel Execution of SQL Statements [page 87]
SAP Note 1969700 - SQL Statement Collection for SAP HANA
SAP Note 1813020 - How to generate a runtime dump on SAP HANA |
## 3.2.4 Retrospective Analysis of CPU Related Issues | ## 3.2.4 Retrospective Analysis of CPU Related Issues
There are a number of options available to analyze what the root cause of an issue was after it has been resolved.
A retrospective analysis of high CPU consumption should start by checking the Performance Monitor and the Alerts tile in SAP HANA cockpit. Using the alert time or the graph in the Performance Monitor , determine the time frame of the high CPU consumption.
If you are not able to determine the time frame because the issue happened too long ago, check the following scripts in the SQL Statement Collection for SAP HANA (attached to SAP Note 1969700):
- HANA_LoadHistory_Services_*
- HANA_Resources_CPUAndMemory_History*
Once a problematic time-period is identified you can proceed with thread sample analysis as described in the section 'Analysis of Former Performance Problems' in HANA System-wide Performance Analysis .
## Monitoring Views and Trace Files
If you can narrow down the time period when the event happened, you can search through the trace files of the process causing the high consumption.
If you are not able to determine the time frame because the issue happened too long ago, check the statistics server table HOST_RESOURCE_UTILIZATION_STATISTICS (_SYS_STATISTICS schema), which includes historical host resource information for up to 30 days. A longer history can be found in the statistics server table HOST_DELTA_MERGE_STATISTICS.
With this information, search through the trace files of the responsible process. Be careful to choose the correct host when SAP HANA runs in a scale-out landscape. The information contained in the trace files gives indications of the threads or queries that were running during the affected time frame.
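A sketch of such a retrospective check on the statistics server table HOST_RESOURCE_UTILIZATION_STATISTICS; the CPU columns hold cumulative counters, so the load in an interval is the delta between consecutive snapshots, and the time range shown is a placeholder:

```sql
-- CPU counters around the suspected time frame (cumulative values)
SELECT SERVER_TIMESTAMP, HOST,
       TOTAL_CPU_USER_TIME, TOTAL_CPU_SYSTEM_TIME, TOTAL_CPU_IDLE_TIME
  FROM _SYS_STATISTICS.HOST_RESOURCE_UTILIZATION_STATISTICS
 WHERE SERVER_TIMESTAMP BETWEEN '2024-05-01 08:00:00' AND '2024-05-01 12:00:00'
 ORDER BY HOST, SERVER_TIMESTAMP;
```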
If the phenomenon recurs due to a scheduled batch job or data loading process, turn on the Expensive Statements Trace during that time to record all involved statements (see Expensive Statements Trace). Furthermore, check for concurrently running background jobs such as backups and delta merges that may cause a resource shortage when run in parallel. Historical information about such background jobs can be obtained from the system views:
- M_BACKUP_CATALOG
- M_DELTA_MERGE_STATISTICS
## Related Information
HANA System-wide Performance Analysis [page 46]
The Performance Monitor (SAP HANA Cockpit)
Expensive Statement Trace (SAP HANA Cockpit) [page 243]
Alerts (SAP HANA Cockpit)
SAP Note 1969700
M_BACKUP_CATALOG
M_DELTA_MERGE_STATISTICS
HOST_DELTA_MERGE_STATISTICS |
## 3.2.5 Controlling Parallel Execution of SQL Statements | ## 3.2.5 Controlling Parallel Execution of SQL Statements
You can apply ini file settings to control the two thread pools SqlExecutor and JobExecutor that control the parallelism of statement execution.
## Caution
The settings described here should only be modified when other tuning techniques like remodeling, repartitioning, and query tuning have been applied. Modifying the parallelism settings requires a thorough understanding of the actual workload since they have impact on the overall system behavior. Modify the settings iteratively by testing each adjustment. For more information, see Understand your Workload .
On systems with highly concurrent workload, too much parallelism of single statements may lead to suboptimal performance. Note also that partitioning tables influences the degree of parallelism for statement execution; in general, adding partitions tends to increase parallelism. You can use the parameters described in this section to adjust the CPU utilization in the system.
Two thread pools control the parallelism of the statement execution. Generally, target thread numbers applied to these pools are soft limits, meaning that additional available threads can be used if necessary and deleted when no longer required:
- SqlExecutor
This thread pool handles incoming client requests and executes simple statements. For each statement execution, an SqlExecutor thread from the thread pool processes the statement. For simple OLTP-like statements against the column store, as well as for most statements against the row store, this is the only type of thread involved. By OLTP we mean short-running statements that consume relatively few resources; note, however, that even OLTP systems like SAP Business Suite may generate complex statements.
- JobExecutor
The JobExecutor is a job dispatching subsystem. Almost all remaining parallel tasks are dispatched to the JobExecutor and its associated JobWorker threads.
In addition to OLAP workload the JobExecutor also executes operations like table updates, backups, memory garbage collection, and savepoint writes.
You can set a limit for both SqlExecutor and JobExecutor to define the maximum number of threads. You can use this for example on a system where OLAP workload would normally consume too many CPU resources to apply a maximum value to the JobExecutor to reserve resources for OLTP workload.
## Caution
Lowering the value of these parameters can have a drastic effect on the parallel processing of the servers and reduce the performance of the overall system. Adapt with caution by iteratively making modifications and testing. For more information, see Understand your Workload and SAP Note 2222250 - FAQ SAP HANA Workload Management which contains more details of the workload configuration parameters.
A further option to manage statement execution is to apply a limit to an individual user profile for all statements in the current connection using 'THREADLIMIT' parameter. This option is described in Setting User Parameters .
## Parameters for SqlExecutor
The following SqlExecutor parameters are in the sql section of the indexserver.ini file.
sql_executors - sets a soft limit on the target number of logical cores for the SqlExecutor pool.
- This parameter sets the target number of threads that are immediately available to accept incoming requests. Additional threads will be created if needed and deleted if not needed any more.
- The parameter is initially not set (0); the default value is the number of logical cores in the system. As each thread allocates a certain amount of main memory for its stack, reducing the value of this parameter can help to reduce the memory footprint.
max_sql_executors - sets a hard limit on the maximum number of logical cores that can be used.
- In normal operation new threads are created to handle incoming requests. If a limit is applied here, SAP HANA will reject new incoming requests with an error message if the limit is exceeded.
- The parameter is initially not set (0) so no limit is applied.
## Caution
SAP HANA will not accept new incoming requests if the limit is exceeded. Use this parameter with extreme care.
## Parameters for JobExecutor
The following JobExecutor parameters are in the execution section of the global.ini or indexserver.ini .
max_concurrency - sets the target number of logical cores for the JobExecutor pool.
- This parameter sets the size of the thread pool used by the JobExecutor to parallelize execution of database operations. Additional threads are created if needed and deleted when no longer required. You can use this to limit the resources available for JobExecutor threads, thereby saving capacity for SqlExecutors.
- The parameter is initially not set (0); the default value is the number of logical cores in a system. Especially on systems with at least 8 sockets consider setting this parameter to a reasonable value between the number of logical cores per CPU up to the overall number of logical cores in the system. In a system that supports tenant databases, a reasonable value is the number of cores divided by the number of tenant databases.
max_concurrency_hint - limits the number of logical cores for job workers even if more active job workers would be available.
- This parameter defines the number of jobs to create for an individual parallelized operation. The JobExecutor proposes the number of jobs to create for parallel processing based on the recent load on the system. Multiple parallelization steps may result in far more jobs being created for a statement (and hence higher concurrency) than the value of this parameter.
- The default is 0 (no limit is applied, but the hint value is never greater than the value of max_concurrency ). On large systems (that is, more than 4 sockets) setting this parameter to the number of logical cores of one socket may result in better performance, but testing is necessary to confirm this.
default_statement_concurrency_limit - restricts the actual degree of parallel execution per connection within a statement.
- This parameter controls the maximum overall parallelism for a single database request. Set this to a reasonable value (a number of logical cores) between 1 and max_concurrency , but greater than or equal to the value set for max_concurrency_hint .
- The default setting is 0; no limit is applied. Note, however, that a new default value for this parameter may also be set during installation using the SAP HANA database lifecycle manager (HDBLCM) tool. In this case, a limit may already be in force. See also SAP Note 3011356 - Default Value of global.ini [execution] default_statement_concurrency_limit Changed.
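To illustrate, the parameters described above can be set with ALTER SYSTEM ALTER CONFIGURATION statements. The values shown here are purely illustrative and must be chosen to suit your own hardware and workload, observing the caution above:

```
-- Illustrative value only: target 64 SqlExecutor threads
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sql', 'sql_executors') = '64' WITH RECONFIGURE;

-- Illustrative values only: cap JobExecutor concurrency and
-- per-statement parallelism
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('execution', 'max_concurrency') = '64',
      ('execution', 'default_statement_concurrency_limit') = '16'
  WITH RECONFIGURE;
```

The WITH RECONFIGURE clause applies the changes to the running system where possible; without it the new values are only written to the configuration files.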
## Related Information
Understand your Workload
Example Workload Management Scenarios
Setting User Parameters for Workload
SAP Note 2222250
SAP Note 3011356 |
## 3.2.6 Apply NUMA Node Affinity Settings | ## 3.2.6 Apply NUMA Node Affinity Settings
If the physical hardware on a host is shared between several processes, you can use CPU affinity settings to assign a set of logical cores to a specific SAP HANA process. These settings are coarse-grained and apply on the OS and process-level.
## Prerequisites
You can use the affinity configuration parameter to restrict CPU usage of SAP HANA server processes to certain CPUs or ranges of CPUs.
Start by analyzing how the system CPUs are configured using the Linux lscpu command and then, based on the information returned, apply affinity settings in daemon.ini to bind specific processes to logical CPU cores. Processes must be restarted before the changes become effective. This approach applies primarily to the use cases of SAP HANA tenant databases and multiple SAP HANA instances on one server; you can use this approach, for example, to partition the CPU resources of the system by tenant database.
To make the changes described here, you require access to the operating system of the SAP HANA instance to run the lscpu command and you require the privilege INIFILE ADMIN.
## Context
For Xen and VMware, the users in the VM guest system see what is configured in the VM host. So, the quality of the reported information depends on the configuration of the VM guest. Therefore, SAP can't give any performance guarantees in this case.
## Procedure
1. Firstly, to confirm the physical and logical details of your CPU architecture, analyze the system using the lscpu command. This command returns a listing of details of the system architecture. The following table gives a commentary on the most useful values based on an example system with 2 physical chips (sockets), each containing 8 physical cores. These cores are hyperthreaded to give a total of 32 logical cores.
| # | Feature | Example Value |
|-----|--------------|-----------------|
| 1 | Architecture | x86_64 |
| 2 | CPU op-modes | 32-bit, 64-bit |
| 3 | Byte Order | LittleEndian |
| 4 | CPUs | 32 |
| 5 | On-line CPUs list | 0-31 |
| 6 | Threads per core | 2 |
| 7 | Cores per socket | 8 |
| 8 | Sockets | 2 |
| 9 | NUMA nodes | 2 |
| 21 | NUMA node0 CPUs | 0-7,16-23 |
| 22 | NUMA node1 CPUs | 8-15,24-31 |
- 4-5: This example server has 32 logical cores numbered 0-31
- 6-8: Logical cores ("threads") are assigned to physical cores. Hyperthreading is where multiple threads are assigned to a single physical core. In this example, there are 2 sockets, each containing 8 physical cores (16 in total). Two logical cores (#6) are assigned to each physical core; thus, each core exposes two execution contexts for the independent and concurrent execution of two threads.
- 9: In this example there are 2 NUMA nodes, one for each socket. Other systems may have multiple NUMA nodes per socket.
- 21-22: The 32 logical cores are numbered and assigned to one of the two NUMA nodes.
## Note
Even on a system with 32 logical cores and two sockets the assignment of logical cores to physical CPUs and sockets can be different. It's important to collect the assignment in advance before making changes. A more detailed analysis is possible using the system commands described in the next step. These provide detailed information for each core including how CPU cores are grouped as siblings.
2. In addition to the lscpu command, you can use the set of system commands in the /sys/devices/system/cpu/ directory tree. For each logical core, there is a numbered subdirectory beneath this node ( /cpu12/ in the following examples). The examples show how to retrieve this information, and the table gives details of some of the most useful commands available:
## Example
```
cat /sys/devices/system/cpu/present
cat /sys/devices/system/cpu/cpu12/topology/thread_siblings_list
```
| Command | Example Output | Commentary |
|-----------|------------------|-------------------------------------------------------|
| present | 0-15 | The number of logical cores available for scheduling. |
| Command | Example Output | Commentary |
|-------------------------------------|------------------|------------------------------------------------------------------------|
| cpu12/topology/core_siblings_list | 4-7, 12-15 | The cores on the same socket. |
| cpu12/topology/thread_siblings_list | 4, 12 | The logical cores assigned to the same physical core (hyperthreading). |
| cpu12/topology/physical_package_id | 1 | The socket of the current core - in this case cpu12. |
Other Linux commands that are relevant here are sched_setaffinity and numactl . sched_setaffinity limits the set of CPU cores available (by applying a CPU affinity mask) for execution of a specific process (this can be used, for example, to isolate tenants) and numactl controls NUMA policy for processes or shared memory.
3. Based on the results returned you can use the affinity setting to restrict CPU usage of SAP HANA server processes to certain CPUs or ranges of CPUs. You can set restrictions for the following servers: nameserver, indexserver, compileserver, preprocessor, and xsengine (each server has a section in the daemon.ini file). The affinity setting is applied by the TrexDaemon when it starts the other HANA processes, using the system call sched_setaffinity. Changes to the affinity settings take effect only after restarting the HANA process. The following examples and commentary show the syntax for the ALTER SYSTEM ALTER CONFIGURATION commands required.
## Example
To restrict the nameserver to two logical cores of the first CPU of socket 0 (see line 21 in the previous example), use the following affinity setting:
```
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM') SET ('nameserver', 'affinity') = '0,16'
```
## Example
To restrict the preprocessor and the compileserver to all remaining cores (that is, all except 0 and 16) on socket 0 (see line 21 in the previous example), use the following affinity settings:
```
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM') SET ('preprocessor', 'affinity') = '1-7,17-23'
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM') SET ('compileserver', 'affinity') = '1-7,17-23'
```
## Example
To restrict the indexserver to all cores on socket 1 (see line 22 in the previous example), use the following affinity settings:
```
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini', 'SYSTEM') SET ('indexserver', 'affinity') = '8-15,24-31'
```
4. You can assign affinities to different tenants of a multi-tenant database on the same host as shown here. Run these SQL statements on the SYSTEMDB.
## Example

In this scenario tenant NM1 already exists; here we add another tenant NM2:

```
CREATE DATABASE NM2 ADD AT LOCATION 'host:30040' SYSTEM USER PASSWORD Manager1;
```

Set the configuration parameter to bind CPUs to specific NUMA nodes on each tenant. You can use the following notation with a dot to identify the specific tenant:

```
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini','SYSTEM') SET ('indexserver.NM1', 'affinity') = '0-7,16-23';
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini','SYSTEM') SET ('indexserver.NM2', 'affinity') = '8-15,24-31';
```
5. To assign affinities to multiple indexservers of the same tenant on the same host, execute the following SQL statements on the SYSTEMDB to apply the instance_affinity[port] configuration parameter:
## Example
In this scenario, an indexserver is already running on tenant NM1 on port 30003. Here we add another indexserver on a different port:
```
ALTER DATABASE NM1 ADD 'indexserver' AT LOCATION 'host:30040';
```
Set the different instances of the instance_affinity[port] configuration parameter to bind CPUs to specific NUMA nodes on each indexserver. The configuration parameter has a 1-2 digit suffix identifying the final digits of the port number, in this example 30003 and 30040:
```
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini','SYSTEM') SET ('indexserver.NM1', 'instance_affinity[3]')='0-7,16-23';
ALTER SYSTEM ALTER CONFIGURATION ('daemon.ini','SYSTEM') SET ('indexserver.NM1', 'instance_affinity[40]')='8-15,24-31';
```
Restart the indexserver processes to make the affinity settings effective.
6. You can test the settings either in SQL or using hdbcons as shown here:
Run this query on the tenant or SystemDB:
```
select * from M_NUMA_NODES;
```
Using hdbcons the process ID of the indexserver process is required as a parameter:
```
hdbcons -p <PID> "jexec info"
```
## Related Information
Memory and CPU Usage for Tenant Databases
SAP HANA Monitoring Views for CPU Topology Details
SQL Statements to Apply NUMA Location Preferences
SAP Note 2470289 |
## 3.2.7 Basic NUMA Analysis | ## 3.2.7 Basic NUMA Analysis
In this section we present a basic workflow for analyzing performance issues, especially on large scale-up systems running SAP HANA.
## Introduction
Server machines with many CPU cores packed into multiple sockets today follow a Non-Uniform Memory Architecture (NUMA). A NUMA node comprises the CPU cores that access a set of DRAM DIMMs via a memory controller. In most systems a NUMA node and a socket refer to the same packaging of these hardware components; see SAP Notes 2470289 and 2632231. Hence, an access to some physical memory address can go to locally attached DRAM DIMMs or to DRAM DIMMs attached to another NUMA node. While the hardware hides the complexities of accessing the memory attached to different NUMA nodes, it may still result in different (that is, non-uniform) bandwidth and latency when accessing some memory address.
When SAP HANA is deployed on systems with many NUMA nodes an analysis of the NUMA effects for a given workload may be needed. This topic explains the basic steps for collecting information required for identifying NUMA-related problems. This will be the typical information requested by the SAP HANA quality team when NUMA-related problems are suspected. This topic focuses on the SAP HANA Cockpit as the administration interface because it should be available to every SAP HANA customer. A more detailed analysis may be possible when access to the operating system is available (see also the topic 'Memory Information from Other Tools').
This analysis has the following three sections:
- Collect System Information
- Examine the Workload
- Tuning for Better NUMA Behavior
The analysis should be complemented by a general analysis of performance problems; see related topics in the SAP HANA Performance Guide for Developers .
## Related Information
Indicators of CPU Related Issues [page 83]
Statement Performance Analysis [page 180]
Memory Information from Other Tools [page 73]
SAP HANA Performance Guide for Developers
SAP Note 2470289
SAP Note 2632231 |
## 3.2.7.1 Collect System Information | ## 3.2.7.1 Collect System Information
The initial step is to collect information about the system and its topology.
Basic system information may be needed to identify available features in SAP HANA. In the SAP HANA Cockpit, the tile Database Information provides the initial information. This information is also accessible via the view (or public synonym) m_HOST_INFORMATION.
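For illustration, the same information can be retrieved with a minimal query against this view, using key names that appear in the example below:

```
SELECT KEY, VALUE
  FROM M_HOST_INFORMATION
 WHERE KEY IN ('cpu_cores', 'cpu_threads', 'cpu_model',
               'mem_phys', 'os_name', 'build_version');
```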
The example system shown here runs SAP HANA 2.0, Revision 56 on SUSE Linux Enterprise Server 12 SP4 (Linux kernel 4.12.14-95.68-default).
The next step is to collect system topology information using the SAP HANA Database Explorer. The view m_HOST_INFORMATION delivers relevant information on the system topology. In this example, the system is equipped as below:
| Key | Value | Explanation |
|-------------|--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| cpu_cores | 88 | The number of physical CPU cores in the system. |
| cpu_threads | 176 | The number of logical CPU cores; consequently, there are 2 hyperthreads per physical core on this system. Note that on the IBM Power platform there can be up to 8 hyperthreads per core. |
| cpu_model | Intel Xeon Gold 6152 CPU | Refer to hardware vendors' internet sites for specific details, for example: Intel Xeon Gold 6152 CPU |
| mem_phys | 405394542592 | This corresponds to approximately 400 GB of DRAM physically available on the system. |
| os_name | SUSE Linux Enterprise 12 SP04 | The operating system used on this host. Refer to the internet sites of operating system vendors for more specific details and to SAP Note 2235581 - SAP HANA: Supported Operating Systems. |
| build_version | 2.00.056.00.1617904818 | This is SAP HANA 2, Revision 56. |
(Screenshot: raw output of M_HOST_INFORMATION in the SQL console, showing key-value rows such as cpu_cores = 88, cpu_threads = 176, mem_phys = 405394542592, and build_version = 2.00.056.00.1617904818.)
To understand the NUMA topology in more detail, the monitoring views M_NUMA_NODES and M_NUMA_RESOURCES are available. SAP Note 2470289 mentions further options to analyze the NUMA topology and memory usage.
## Monitoring Views M_NUMA_NODES and M_NUMA_RESOURCES
From these views one can learn that the system has four NUMA nodes with NUMA_NODE_IDs 0, 1, 2, and 3. Every NUMA node has 44 logical cores, and as every physical core has two hyperthreads, every NUMA node has 22 physical cores. The system memory is evenly distributed across these four NUMA nodes, with 100 GB of DRAM attached to every NUMA node. From M_NUMA_NODES.NUMA_NODE_DISTANCES or M_NUMA_NODES.NEIGHBOUR_NUMA_NODE_IDS one can reconstruct the NUMA node topology, that is, how the cores of every NUMA node can communicate with other NUMA nodes to fetch data from another NUMA node or for work stealing.
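A minimal sketch of querying these views is shown below; the exact column lists may vary by revision, but NEIGHBOUR_NUMA_NODE_IDS and NUMA_NODE_DISTANCES are the columns referred to above:

```
-- Overall NUMA summary
SELECT * FROM M_NUMA_RESOURCES;

-- Per-node topology information used to reconstruct neighbor relationships
SELECT NUMA_NODE_ID, NEIGHBOUR_NUMA_NODE_IDS, NUMA_NODE_DISTANCES
  FROM M_NUMA_NODES;
```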
As illustrated below, in this system every NUMA node has all the other NUMA nodes as neighbors, that is fully meshed via the blue connections between the NUMA nodes.
The system used in this analysis likely does not have strong NUMA effects because all NUMA nodes are directly connected with each other. When the memory of one NUMA node can only be reached via multiple hops in the NUMA topology, we can expect a stronger NUMA effect. This is the case when certain NUMA nodes are not listed as neighbors of some NUMA node in view M_NUMA_NODES. Furthermore, systems with many NUMA nodes may require a dedicated "NUMA node controller" to connect islands of NUMA nodes which are connected with fast UPI links (as in the example above). This information is provided by the hardware partner or from the system specification.
It is important to analyze performance issues related to the system setup before proceeding with the analysis of the workload. Refer to the sections in this SAP HANA Troubleshooting and Performance Analysis Guide and SAP Notes for a starting point. The SAP HANA hardware partners also offer support in this step.
## Related Information
Examine the Workload [page 99]
SAP HANA Administration Guide: SAP HANA Cockpit
Getting Started With the SAP HANA Database Explorer
SAP Note 2470289
SAP Note 2235581
M_HOST_INFORMATION System View
M_NUMA_NODES System View
M_NUMA_RESOURCES System View
Certified and Supported SAP HANA Hardware Partners
## 3.2.7.2 Examine the Workload | ## 3.2.7.2 Examine the Workload
We can now examine the workload in detail including: CPU utilization, table distribution and the plan cache.
The system information you have collected helps to understand the system behavior given a workload of interest. For the illustration given here we run the TPC-H workload including updates using scale factor 100 and 40 concurrent streams. You can use this example to help to understand similar NUMA related problems when examining any customer workload.
For more information on the latest TPC benchmarks (versions two and three) refer to the Transaction Processing Performance Council web site: http://tpc.org/tpch.
The first step is to examine the CPU utilization and thread activity. The SAP HANA Cockpit provides the tile Threads. In the example shown below there are 230 active threads, that is, threads scheduled for running on a CPU core, and no blocked threads. As there are only 176 logical cores on the system, the system's CPUs are highly contended.
More information on the thread activity can be found when navigating to the details of the tile as shown below or by examining the view m_SERVICE_THREADS. In this example it is evident that the thread with id 167942 calls many other threads, that is, the operations are heavily parallelized.
Further general information on the system's resource utilization can be gathered from the SAP HANA Cockpit Performance Monitor . It confirms the high CPU utilization close to 99%. The number of active threads and active SQL Executors confirm the large number of concurrently active threads.
In the diagram above, the test first loads the input data with relatively low CPU resource consumption (up to the middle of the graph); the query and update workload is then executed with high CPU resource consumption.
The view M_JOBEXECUTORS, when monitored during the runtime of the workload, can provide a system-wide indication of CPU-related contention. The column FREE_WORKER_COUNT indicates whether newly arriving work can be handled by free available worker threads. In this workload the FREE_WORKER_COUNT was mostly at zero, which is consistent with the high CPU utilization from the performance monitor. High values of SYS_WAITING_JOB_COUNT and JOB_WAITING_JOB_COUNT can be caused by jobs waiting for other jobs to finish, for example due to high CPU utilization, or can be related to general lock contention. High values in the QUEUED_WAITING_JOB_COUNT can be related to high CPU load resulting in jobs being queued for execution until job workers become available to process these jobs.
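As a sketch, these counters can be monitored during the workload run with the following query; the column names are those referenced above, with HOST and PORT identifying the service:

```
SELECT HOST, PORT, FREE_WORKER_COUNT,
       SYS_WAITING_JOB_COUNT, JOB_WAITING_JOB_COUNT, QUEUED_WAITING_JOB_COUNT
  FROM M_JOBEXECUTORS;
```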
To drill down further into this analysis, the following statement can be used to detect NUMA nodes with high CPU utilization by counting the number of active threads. When, at the same time, other NUMA nodes have far fewer active threads, this can be an indication of a skewed distribution of load across the NUMA nodes that should be analyzed in more detail.
```
select numa_node_index, count(0) num_active_jobs
from sys.m_service_threads
where statement_hash <> '' and is_active = 'TRUE'
group by numa_node_index
order by numa_node_index;
```
With 44 logical cores per NUMA node in this system the results below indicate that NUMA node 3 has significantly more active threads than the other NUMA nodes. Still, all NUMA nodes seem to be highly utilized with more than 44 active threads per NUMA node. It should be noted that the fully meshed topology of NUMA nodes of this system allows for flexible stealing of jobs from highly utilized NUMA nodes. A highly skewed work distribution can be an indication of NUMA node contention.
The SAP HANA kernel profiler can be used to collect information about which code causes high CPU consumption. It requires expert knowledge of the SAP HANA code. Additionally, the files generated by the kernel profiler contain system-level statistics on the memory allocation of the system.
Complementing the analysis of CPU utilization and thread activity, we also analyze the table distribution across NUMA nodes . In general SAP HANA will try to evenly distribute tables or table partitions across the NUMA nodes in the system. However, multiple hot tables may be allocated to the same NUMA node, and this may result in high contention accessing this data only on this NUMA node. At the same time other NUMA nodes can have low or no load, which might lead to a low overall system usage.
For this analysis we execute the following query in the SQL Console of the SAP HANA Database Explorer to get an overview of the tables and table partitions and how their columns are assigned to NUMA nodes.
```
select table_name, column_name, part_id, string_agg(numa_node_index, ',') as numa_nodes
from m_cs_columns
where schema_name like '%TPC%' and numa_node_index >= 0
group by table_name, column_name, part_id
order by table_name, column_name, part_id;
```
In our example we see that the columns of the largest table, LINEITEM, are all allocated on NUMA node 3.
Furthermore, the following query reports how much memory the columns of all column tables in the system consume per NUMA node. Adding the value of persistent_memory_size_in_total also includes tmpfs and non-volatile memory:
```
select numa_node_index,
       sum(memory_size_in_total + persistent_memory_size_in_total) as memory_consumption_per_numa_node_in_bytes
from m_cs_columns
group by numa_node_index;
```
The result of this query is shown below, where the first column is the NUMA node index and the second column the number of bytes consumed by column store tables on that NUMA node. On NUMA node 3 almost 17 GB of memory are allocated for column tables, while on NUMA nodes 0 and 2 only roughly 8.5 GB are allocated, and on NUMA node 1 only 967 MB, that is, much less than for the other NUMA nodes. A negative value for NUMA_NODE_INDEX (for example -1) has the technical interpretation that the target column is not loaded into memory.
The following query includes the table name and gives more insight:
```
select numa_node_index, table_name,
       sum(memory_size_in_total + persistent_memory_size_in_total) as memory_consumption_per_numa_node_in_bytes
from m_cs_columns
group by numa_node_index, table_name
order by memory_consumption_per_numa_node_in_bytes desc;
```
This result further explains why NUMA node 1 has much less memory allocated than the other NUMA nodes: among the six largest tables only table PART is allocated on that NUMA node. In addition to table data, memory allocation and de-allocation for transient objects can also be a source of contention or skewed memory allocation across NUMA nodes which can be analyzed with advanced profiling tools, for example Intel PCM, Intel VTune, or the SAP HANA kernel profiler.
A final important step of the initial analysis is to check the SQL Plan Cache. In the SAP HANA Cockpit one can navigate to the monitoring of Statements and from there to the analysis of the SQL Plan Cache, which shows the content of view M_SQL_PLAN_CACHE. When sorting the content of the SQL plan cache by the total execution time in descending order, one can get the statements with the highest contribution to CPU consumption in the system. These are probably the most performance-critical statements, either because they are executed with very high frequency or because every single execution is very expensive. The accessed tables are also available from the SQL plan cache. In this example it is evident that table LINEITEM is used in all of the expensive statements:
It needs to be emphasized that optimizing the SQL statements is a critical step before further optimizing the NUMA behavior. It may be the case that optimizing the SQL statements resolves all major performance issues. The section Statement Performance Analysis in this guide and the SAP Notes mentioned there provide further information.
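The plan cache check described above can be sketched with the following query, sorting by total execution time in descending order:

```
SELECT STATEMENT_HASH, EXECUTION_COUNT, TOTAL_EXECUTION_TIME, STATEMENT_STRING
  FROM M_SQL_PLAN_CACHE
 ORDER BY TOTAL_EXECUTION_TIME DESC
 LIMIT 10;
```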
## Related Information
Tuning for Better NUMA Behavior [page 103]
SAP HANA Cockpit - Threads
SAP HANA Cockpit - The Performance Monitor
SAP HANA Cockpit - Statements
SAP HANA Cockpit - Monitor and Analyze Statements with SQL Plan Cache
M_SERVICE_THREADS System View
M_JOBEXECUTORS System View
M_SQL_PLAN_CACHE System View
Kernel Profiler
## 3.2.7.3 Tuning for Better NUMA Behavior | ## 3.2.7.3 Tuning for Better NUMA Behavior
This section shows the performance benefits which can be achieved by partitioning tables.
One option to distribute the contention for memory accesses, and potentially also CPU consumption, across more NUMA nodes is to partition tables. By default, SAP HANA calculates the NUMA node of the first partition of a table with a hashing scheme and then distributes subsequent partitions on a round-robin basis across the NUMA nodes. In this example we consider a (hash-)range partitioning of the two largest tables, as shown below, because it is a commonly used partitioning option for TPC-H. As discussed in the FAQ SAP Note 2044468, table partitioning needs to be done with care, considering the application characteristics and related workload.
```
ALTER TABLE "TPCH_100GB"."LINEITEM" PARTITION BY HASH (L_ORDERKEY, L_LINENUMBER) PARTITIONS 1,
RANGE (L_SHIPDATE)
(PARTITION '1992-01-02' <= VALUES < '1993-01-01',
 PARTITION '1993-01-02' <= VALUES < '1994-01-01',
 …
 PARTITION '1997-01-02' <= VALUES < '1998-01-01',
 PARTITION OTHERS);
```
```
ALTER TABLE "TPCH_100GB"."ORDERS" PARTITION BY HASH (O_ORDERKEY) PARTITIONS 1,
RANGE (O_ORDERDATE)
(PARTITION '1992-01-02' <= VALUES < '1993-01-01',
 PARTITION '1993-01-02' <= VALUES < '1994-01-01',
 …
 PARTITION '1997-01-02' <= VALUES < '1998-01-01',
 PARTITION OTHERS);
```
From the figure below one can see that all columns of the largest table, LINEITEM, are distributed across all four NUMA nodes.
Looking at the distribution of memory consumption for the tables across the NUMA nodes it is evident that NUMA node 3 still has a larger share of data assigned. But overall, one can see a more even distribution of memory allocations.
This analysis is confirmed when looking at the largest table partitions and on which NUMA nodes they are allocated. The report below shows how the tables LINEITEM and ORDERS are now distributed across all NUMA nodes.
In these exemplary measurements the throughput increased by 3%. At the same time, an increase in CPU time (+2.3%) and peak memory (+20%) could be observed. These effects are due to a higher degree of parallelism when working on the large partitioned tables; this is also confirmed by the performance monitor, which shows higher values for the average and maximum number of active threads as well as higher average CPU consumption.
Looking at the SQL plan cache we see that the most expensive query - Q9 - has a reduced total execution time. However, for Q10 a higher total execution time is reported indicating some trade-offs when partitioning the tables.
## Related Information |
## 3.3 Disk Related Root Causes and Solutions | ## 3.3 Disk Related Root Causes and Solutions
This section discusses issues related to hard disks and lack of free space.
## Low Disk Space
This problem is usually reported by alert 2 which is triggered whenever one of the disk volumes used for data, log, backup or trace files reaches a critical size.
Use the following tools in the SAP HANA cockpit to examine the situation and try to free some disk space:
- Via the Alerts tile
- Via the Disk Usage tile
- Via the Disk Volume monitor
For high log volume utilizations, refer to SAP KBA 2083715 - Analyzing log volume full situations. For data volume exhaustion that cannot be explained by the size of the catalog objects (tables and so on), there are a few more mechanisms which utilize the persistence layer (data volume), such as:
- Disk LOBs
- Table Sizes on Disk
- MVCC Mechanism
- Database Snapshots
The following sections will assist you in analyzing these possible problem areas step by step.
## Disk LOBs
Large binary objects usually do not compress well in SAP HANA, so tables with large LOB columns can also have a large memory footprint. SAP HANA therefore offers the concept of hybrid LOBs (as of SAP HANA SPS 07): LOB values larger than a specific threshold are automatically kept on disk in the physical persistence instead of in memory. This can lead to a situation where a table is small in memory but large on disk. The SQL statement HANA_Tables_DiskSize_1.00.120+, which you can find in SAP Note 1969700 - SQL Statement Collection for SAP HANA, may provide further insights into the disk and memory footprint of the largest tables with LOB columns. In the following example, you see tables which occupy large amounts of disk space but effectively no memory due to the nature of the data itself (LOBs):
| TABLE_NAME | VIRTUAL_FILE_N | COUNT | SIZE_GB | SIZE_NOLOB_GB |
|----------------|------------------|---------|-----------|-----------------|
| SACONTOIDI | any | 13 | 42.55 | 42.55 |
| SACONTO1 | any | 10 | 29.63 | 29.63 |
| | any | 36 | 18.7 | 0.35 |
| OBJECT_HISTORY | any | 26 | 287 | 0.01 |
For further information on handling LOBs, refer to:
- SAP Note 2220627 - FAQ: SAP HANA LOBs
- SAP Note 1994962 - How-To: Activation of Hybrid LOBs in SAP HANA
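To get a rough disk-versus-memory comparison without the SAP Note script, a sketch like the following can be used. It joins M_TABLE_PERSISTENCE_STATISTICS (disk size) with M_CS_TABLES (in-memory size, summed over hosts and partitions); it is a simplified approximation, not a replacement for HANA_Tables_DiskSize_*:

```sql
-- Column store tables with the largest disk footprint, with their
-- current in-memory size for comparison (a small memory_gb next to a
-- large disk_gb hints at LOB-dominated tables).
select p.schema_name, p.table_name,
       round(p.disk_size / 1024 / 1024 / 1024, 2) as disk_gb,
       round(sum(c.memory_size_in_total) / 1024 / 1024 / 1024, 2) as memory_gb
  from m_table_persistence_statistics p
  join m_cs_tables c
    on p.schema_name = c.schema_name
   and p.table_name  = c.table_name
 group by p.schema_name, p.table_name, p.disk_size
 order by disk_gb desc
 limit 20;
```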
If you cannot narrow down the issue to LOBs, check the virtual file containers on persistency level in more detail.
## Table Sizes on Disk
A first insight on whether the virtual file containers of the column store tables are responsible for the data volume utilization can be gained by running the following statement:
```
SELECT host, port, SUM(physical_size) FROM M_TABLE_VIRTUAL_FILES GROUP BY host, port;
```
This returns the sum of the physical sizes of all virtual file containers on disk, representing the total size of the catalog objects on disk level. If this deviates drastically from the data volume utilization observed on OS level, objects other than database tables and indexes are utilizing the data volume, and further checks are necessary.
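To drill down from the total to individual objects, the same view can be grouped by table. This is a sketch; depending on your revision, the available columns of M_TABLE_VIRTUAL_FILES may differ:

```sql
-- Largest tables by physical size of their virtual file containers.
select schema_name, table_name,
       count(*) as file_count,
       round(sum(physical_size) / 1024 / 1024 / 1024, 2) as size_gb
  from m_table_virtual_files
 group by schema_name, table_name
 order by size_gb desc
 limit 20;
```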
## MVCC Mechanism
A blocked garbage collection may also over-utilize the SAP HANA data volume of the indexserver. Proceed as outlined in:
- Multiversion Concurrency Control (MVCC) Issues
- SAP KBA 2169283 - FAQ: SAP HANA Garbage Collection
Run the following query to make sure there are no excessive amounts of undo cleanup files:
```
SELECT SUM(page_count) FROM M_UNDO_CLEANUP_FILES
```
Undo files contain information needed for transaction rollback and these files are removed when the transaction completes.
Cleanup files contain deleted information which is kept because of MVCC isolation requirements. When the transaction completes garbage collection uses the cleanup files to finally remove data.
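To see how the pages split between the two categories, the aggregate above can be broken down by file type, for example (a sketch; the TYPE column is assumed to distinguish undo from cleanup files in your revision):

```sql
-- Undo versus cleanup file pages per service; a large and growing
-- cleanup share points at a blocked garbage collection.
select host, port, type,
       count(*)        as files,
       sum(page_count) as pages
  from m_undo_cleanup_files
 group by host, port, type
 order by pages desc;
```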
For more information, refer to M_UNDO_CLEANUP_FILES System View in the SAP HANA SQL and System Views Reference guide.
## Database Snapshots
In specific cases, for example, if data backups have failed in the past, database snapshots for these backups are not cleaned up. This can be evaluated by the following SQL statements:
```
SELECT page_sizeclass AS "Page Size-class", page_size*used_block_count/ (1024*1024*1024) AS "Disk Utilization in GB" FROM M_DATA_VOLUME_PAGE_STATISTICS WHERE volume_id = <volume_id>
```
The result of this statement may, for example, show that the main contributor to the disk utilization is 16 MB pages, which are mainly used for the main fragment of the column store. To confirm this, check whether snapshots exist which have been created for backups:
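For example (a sketch; SELECT * is used here because the exact columns of M_SNAPSHOTS differ between revisions):

```sql
-- Lists all currently existing database snapshots; old entries that
-- belong to failed backups are candidates for cleanup.
select * from m_snapshots;
```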
In this case, "dangling" database snapshots which are no longer required should be dropped. This happens automatically after a database restart.
## Related Information
I/O Related Root Causes and Solutions [page 114]
- SAP Note 1900643
- SAP Note 2083715
- SAP Note 1969700
- SAP Note 2220627
- SAP Note 1994962
- Multiversion Concurrency Control (MVCC) Issues [page 175]
- SAP Note 2169283
- M_UNDO_CLEANUP_FILES System View
## 3.3.1 Reclaiming Disk Space
Defragmentation of data volumes must be carried out as a routine housekeeping operation to recover disk space or in response to a disk full event.
## Automatic Housekeeping in SAP HANA 2.0 SPS 06
A housekeeping job was introduced in SAP HANA 2.0 SPS 06 in the automatic data volume management process to recover unused space in a data volume caused by fragmentation. The job runs permanently in the background; it becomes active when the measured fragmentation overhead on the data volume reaches 150% and stops when the overhead is reduced to 120%. See also SAP Note 3100336 - What is fragmentation in context of SAP HANA persistence?.
In previous releases the reclaim process could not handle snapshots which must remain untouched. The housekeeping process now bypasses snapshots and continues to work through the data volume to the end. If free space is found data pages are taken from the end of the volume to fill up the free space thus reducing the total volume.
The feature is enabled by default but in SPS 06 revisions <= 062.00 the full functionality must be enabled by setting a configuration parameter as described in SAP Note 3198269 - Enable Automatic Data Volume Reclaim in SAP HANA 2 SPS06. The feature can be disabled by setting the parameter continuous_reclaim_enabled in service.ini (persistence section) to False.
In a replication scenario automatic reclaim can also be run on connected secondary and tertiary sites by setting the configuration parameter on those systems. It is no longer necessary to run additional defragmentation processes on those systems.
The following monitoring view, showing statistics aggregated over all partitions, gives insight into the data volume reclaim activity: M_DATA_VOLUME_RECLAIM_STATISTICS.
## General Information
Reclaiming disk space is necessary because the size allocated to a data file is automatically increased as more space is required, but it is not automatically decreased when less space is required. This may create a difference between allocated size and used size.
For more information on data and log volumes refer to:
- Persistent Data Storage in the SAP HANA Database in the SAP HANA Administration Guide .
- SAP Note 1870858 - HANA Alerts related to file system utilization.
For routine monitoring of disk usage the following scripts from SAP Note 1969700 - SQL Statement Collection for SAP HANA may be helpful:
- HANA_Disks_Overview
- HANA_Disks_SuperblockStatistics
- HANA_Tables_ColumnStore_TableSize
- HANA_Tables_TopGrowingTables_Size_History
## Reclaiming Disk Space Command
The RECLAIM DATAVOLUME statement is described in detail in the SAP HANA SQL and System Views Reference . The following example illustrates the usage:
```
alter system reclaim datavolume 'myhost:30003' 120 defragment
```
The example shows a reasonable payload percentage of 120, that is, an overhead of 20% fragmentation is acceptable. Smaller payload percentage values can significantly increase the defragmentation runtime.
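To judge whether a reclaim is worthwhile, you can first compare used and total size of the data volume files, for example with the following sketch based on M_VOLUME_FILES:

```sql
-- Free (fragmented) share of each data volume; a high free_pct
-- suggests that RECLAIM DATAVOLUME would release disk space.
select host, port,
       round(used_size  / 1024 / 1024 / 1024, 2) as used_gb,
       round(total_size / 1024 / 1024 / 1024, 2) as total_gb,
       round(100 * (total_size - used_size) / total_size, 1) as free_pct
  from m_volume_files
 where file_type = 'DATA';
```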
A comparison of example lines from the SuperblockStatistics report before and after defragmentation shows that the values for used and allocated disk size move much closer together, the total block count is reduced, and the fragmentation percentage is much lower.
## Monitoring the Reclaim Process
No monitoring tool is available to check the progress of the RECLAIM command. The following options may be helpful:
1. Check for savepoint activity to verify that the RECLAIM process is running. This will confirm that savepoints are consistently being written:
```
select * from m_savepoints
```
2. Enable the indexserver trace to get information about the operation and estimate the job progress: set trace "pageaccess = info"
## Snapshots: Reasons Why Reclaim May Fail
In some situations where snapshots are being created, RECLAIM may fail:
1. In a high availability scenario RECLAIM may not work because data snapshots which are part of the replication process may conflict with the RECLAIM DATAVOLUME command. SAP Note 2332284 - Data volume reclaim failed because of snapshot pages gives details of how to temporarily stop the creation of snapshots to avoid this problem.
2. Snapshots related to backups may also prevent RECLAIM from working. SAP Note 2592369 - HANA DataVolume Full describes steps to investigate the age of snapshots by querying the M_SNAPSHOTS view and, if necessary, to manually delete snapshots using the hdbcons command. It may then be possible to rerun the RECLAIM process.
## Related Information
Persistent Data Storage in the SAP HANA Database
## 3.3.2 Analyze and Resolve Internal Disk-Full Event (Alert 30)
When it is no longer possible to write to one of the disk volumes used for data, log, backup or trace files, the database is suspended, an internal event is triggered, and alert 30 is generated. A disk-full event must be resolved before the database can resume.
## Context
If the disks on which the database data and log volumes are located run full, space on the volumes must be freed or additional space added before the database can resume. In the SAP HANA cockpit, potential disk-full events are displayed in the alerts app.
However, running out of disk space is not the only reason that SAP HANA may be prevented from writing to disk. Other possible causes, all of which lead to a disk-full event and alert 30, include:
- File system quota is exceeded
- File system runs out of inodes
- File system errors (bugs)
## Note
A number of SAP Notes are available to give troubleshooting advice in specific scenarios. For a log volume full scenario, refer first to SAP Note 1679938 - DiskFullEvent on Log Volume to resolve the issue and bring the SAP HANA database back online. Then refer to SAP Note 2083715 - Analyzing log volume full situations for root cause analysis to prevent the scenario from happening again.
For log volume full in a replication context refer to LogReplay: Managing the Size of the Log File in this document.
- SAP Note 1679938 - DiskFullEvent on Log Volume.
- SAP Note 2083715 - Analyzing log volume full situations.
- SAP Note 1898460 - How to Handle Alert 30 'Internal disk-full event' .
- SAP Note 1870858 - HANA Alerts related to file system utilization.
## Procedure
1. Analyze disk space usage using the standard administration tools in SAP HANA cockpit: Alerts, Disk Usage, Disk Volume Monitor.
2. Optional: Perform the following steps if helpful:
## Note
You must execute the commands from the command line on the SAP HANA server.
- a. Determine the file system type:
df -T
- b. Check for disk space using file system specific commands:

| Option | Description |
|----------|----------------|
| XFS/NFS | df |
| GPFS | mmfscheckquota |

- c. Check if the system is running out of inodes (NFS):

df -i
- d. Check quota:
| Option | Description |
|----------|----------------|
| XFS/NFS | quota -v |
| GPFS | mmfscheckquota |
## Next Steps
The most serious system events (including the disk-full event) are logged as internal database events in the table M_EVENTS (see also Alert 21, which is also triggered when an internal event occurs). They are initially logged with a status of 'NEW' and, once the issue has been resolved (in this case, by freeing up disk space), the event must be set to 'HANDLED' to be able to continue. You can do this by executing the following SQL statements:
- ALTER SYSTEM SET EVENT ACKNOWLEDGED '<host>:<port>' <id>;
- ALTER SYSTEM SET EVENT HANDLED '<host>:<port>' <id>;
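To find the <host>, <port> and <id> values to use in these statements, you can first list the open events, for example (a sketch; SELECT * is used because the available columns differ between revisions):

```sql
-- Events that are still in state 'NEW' need to be acknowledged
-- and set to 'HANDLED' once resolved.
select * from m_events;
```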
If you cannot track down the root cause of the alert, contact SAP Support.
## Related Information
LogReplay: Managing the Size of the Log File [page 220]
## 3.4 I/O Related Root Causes and Solutions
This section covers troubleshooting of I/O performance problems. Although SAP HANA is an in-memory database, I/O still plays a critical role for the performance of the system.
From an end user perspective, an application or the system as a whole runs slowly, is unresponsive or can even seem to hang if there are issues with I/O performance. In the Disk Volume Monitor available in the Disk Usage tile in SAP HANA cockpit you can see the attached volumes and which services use which volumes. For details of the attached volumes, such as files and I/O statistics, select a row.
In certain scenarios data is read from or written to disk, for example during the transaction commit. Most of the time this is done asynchronously but at certain points in time synchronous I/O is done. Even during asynchronous I/O it may be that important data structures are locked.
Examples are included in the following table.
| Scenario | Description |
|---|---|
| Savepoint | A savepoint ensures that all persistent data changed since the last savepoint gets written to disk. The SAP HANA database triggers savepoints in 5-minute intervals by default. Data is automatically saved from memory to the data volume located on disk. Depending on the type of data, the block sizes vary between 4 KB and 16 MB. Savepoints run asynchronously to SAP HANA update operations. Database update transactions only wait during the critical phase of the savepoint, which usually takes only microseconds. |
| Snapshot | SAP HANA database snapshots are used by certain operations like backup and system copy. They are created by triggering a system-wide consistent savepoint. The system keeps the blocks belonging to the snapshot at least until the snapshot is dropped. Detailed information about snapshots can be found in the SAP HANA Administration Guide. |
| Delta Merge | The delta merge itself takes place in memory. Updates on column store tables are stored in the delta storage. During the delta merge these changes are applied to the main storage, where they are stored read-optimized and compressed. Right after the delta merge, the new main storage is persisted in the data volume, that is, written to disk. The delta merge does not block parallel read and update transactions. |
| Write Transactions | All changes to persistent data are captured in the redo log. SAP HANA asynchronously writes the redo log with I/O orders of 4 KB to 1 MB size into log segments. Transactions writing a commit into the redo log wait until the buffer containing the commit has been written to the log volume. |
| Database Restart | At database startup the services load their persistence, including catalog and row store tables, into memory, that is, the persistence is read from storage. Additionally, the redo log entries written after the last savepoint have to be read from the log volume and replayed in the data area in memory. When this is finished, the database is accessible. The bigger the row store is, the longer it takes until the system is available for operations again. |
| Failover (Host Auto-Failover) | On the standby host the services are running in idle mode. Upon failover, the data and log volumes of the failed host are automatically assigned to the standby host, which then has read and write access to the files of the failed active host. Row store as well as column store tables (the latter on demand) must be loaded into memory. The log entries have to be replayed. |
| Takeover (System Replication) | The secondary system is already running, that is, the services are active but cannot accept SQL and thus are not usable by the application. Just like in the database restart (see above), the row store tables need to be loaded into memory from persistent storage. If table preload is used, most of the column store tables are already in memory. During takeover the replicated redo logs that were shipped since the last data transport from primary to secondary have to be replayed. |
| Data Backup | For a data backup the current payload of the data volumes is read and copied to the backup storage. For writing a data backup it is essential that there are no collisions on the I/O connection with other transactional operations running against the database. |
| Log Backup | Log backups store the content of a closed log segment. They are automatically and asynchronously created by reading the payload from the log segments and writing it to the backup area. |
| Database Recovery | The restore of a data backup reads the backup content from the backup device and writes it to the SAP HANA data volumes. The I/O write orders of the data recovery have a size of 64 MB. The redo log can also be replayed during a database recovery, that is, the log backups are read from the backup device and the log entries get replayed. |
In the following table the I/O operations are listed which are executed by the above-mentioned scenarios, including the block sizes that are read or written:
| I/O pattern | Data Volume | Log Volume (redo log) | Backup Medium |
|---|---|---|---|
| Savepoint, Snapshot, Delta merge | WRITE: 4 KB - 16 MB asynchronous bulk writes, up to 64 MB (clustered Row Store super blocks) | | |
| Write transactions | | WRITE: OLTP - mostly 4 KB, log write I/O performance is relevant; OLAP - writes with larger I/O order sizes | |
| Table load: DB Restart, Failover, Takeover | READ: 4 KB - 16 MB blocks, up to 64 MB (clustered Row Store super blocks) | READ | |
| Data Backup | READ: 4 KB - 16 MB blocks, up to 64 MB (clustered Row Store super blocks), asynchronously copied to '[data] backup buffer' of 512 MB | | WRITE: in up to 64 MB blocks from '[data] backup buffer' |
| Log Backup | | READ: asynchronously copied to '[data] backup buffer' of 128 MB | WRITE: in up to 64 MB blocks from '[data] backup buffer' |
| Database Recovery | WRITE | READ | READ |
## 3.4.1 Analyzing I/O Throughput and Latency
When analyzing I/O, the focus is on throughput and latency (time taken). A set of system views (with names beginning M_VOLUME_IO_*) is available to help you analyze throughput, and examples are given here to illustrate how they can be used.
You can use the following example query to read I/O statistics data which will help you to analyze the throughput of the system (in this example the index server). The result of this query presents a set of columns including throughput in MB and trigger ratios (the relationship between trigger time and I/O time) for both read and write operations:
```
select v.host, v.port, v.service_name, s.type,
       round(s.total_read_size / 1024 / 1024, 3) as "Reads in MB",
       round(s.total_read_size /
             case s.total_read_time when 0 then -1 else s.total_read_time end, 3)
             as "Read Throughput in MB",
       round(s.total_read_time / 1000 / 1000, 3) as "Read Time in Sec",
       trigger_read_ratio as "Read Ratio",
       round(s.total_write_size / 1024 / 1024, 3) as "Writes in MB",
       round(s.total_write_size /
             case s.total_write_time when 0 then -1 else s.total_write_time end, 3)
             as "Write Throughput in MB",
       round(s.total_write_time / 1000 / 1000, 3) as "Write Time in Sec",
       trigger_write_ratio as "Write Ratio"
  from "PUBLIC"."M_VOLUME_IO_TOTAL_STATISTICS_RESET" s,
       "PUBLIC"."M_VOLUMES" v
 where s.volume_id = v.volume_id
   and s.type not in ('TRACE')
   and v.volume_id in (select volume_id from m_volumes
                        where service_name = 'indexserver')
 order by s.type, v.service_name, s.volume_id;
```
Note that some of the system views for I/O can be used with a resettable counter so that you can gather data for just the most recent period since the counter was set. This example is based on the M_VOLUME_IO_TOTAL_STATISTICS system view but uses the 'reset' version of the view.
You can reset the statistics counter to analyze the I/O throughput for a certain time frame by running the following reset command:
```
alter system reset monitoring view M_VOLUME_IO_TOTAL_STATISTICS_RESET;
```
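For example, to sample I/O activity over a defined window, you could reset the counters, wait for the period of interest, and then read only the headline figures (a minimal sketch using the size columns shown above):

```sql
-- 1. Start a new measurement interval:
alter system reset monitoring view M_VOLUME_IO_TOTAL_STATISTICS_RESET;

-- 2. Wait (for example 5 minutes), then read the accumulated deltas:
select host, port, type,
       round(total_read_size  / 1024 / 1024, 1) as read_mb,
       round(total_write_size / 1024 / 1024, 1) as write_mb
  from m_volume_io_total_statistics_reset
 where type not in ('TRACE');
```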
## Multitier and Replication Scenarios
In a system using replication between primary and secondary sites it is possible to analyze throughput of the secondary remotely by running these queries on the primary site. This method uses the proxy schema of the secondary system on the primary and can be used in a 2-tier system replication setup as well as for multitier landscapes.
The proxy schema follows the naming convention _SYS_SR_SITE_<siteName>, where <siteName> is the name of the secondary site (case-sensitive). In the FROM clause of the example query given above the schema PUBLIC is used; in a system replication landscape replace this with the proxy schema, for example _SYS_SR_SITE_SiteB:
```
from "_SYS_SR_SITE_SiteB"."M_VOLUME_IO_TOTAL_STATISTICS_RESET" s, "_SYS_SR_SITE_SiteB"."M_VOLUMES" v
```
Note that this specific example will only work in an Active/Active (Read Only) scenario because the view M_VOLUMES is only available in the proxy schema in operation mode logreplay_readaccess, not in other operation modes.
## Trigger Ratios
I/O calls are executed asynchronously, that is, the thread does not wait for the order to return. The trigger-ratio of asynchronous reads and writes measures the trigger time divided by the I/O time. A ratio close to 0 shows good performance; it indicates that the thread does not wait at all. A ratio close to 1 means that the thread waits until the I/O request is completed.
Refer to SAP Note 1930979 and SAP Notes for Alerts 60 and 61 for more information about the significance of the trigger ratio values.
## Latency
The latency values are important for LOG devices. To analyze the latency, use the following example query which returns the log write wait time (for data of type LOG) with various buffer sizes written by the index server. The time values returned are the number of microseconds between enqueueing and finishing a request.
```
select host, port, type,
       round(max_io_buffer_size / 1024, 3) as "Maximum buffer size in KB",
       trigger_async_write_count,
       avg_trigger_async_write_time as "Avg Trigger Async Write Time in Microsecond",
       max_trigger_async_write_time as "Max Trigger Async Write Time in Microsecond",
       write_count,
       avg_write_time as "Avg Write Time in Microsecond",
       max_write_time as "Max Write Time in Microsecond"
  from "PUBLIC"."M_VOLUME_IO_DETAILED_STATISTICS_RESET"
 where type = 'LOG'
   and volume_id in (select volume_id from m_volumes
                      where service_name = 'indexserver')
   and (write_count <> 0 or avg_trigger_async_write_time <> 0);
```
## Related Information
SAP Note 1930979
M_VOLUME_IO_TOTAL_STATISTICS_RESET System View
Alerts Reference [page 302]
## 3.4.2 Savepoint Performance
To perform a savepoint write operation SAP HANA needs to take a global database lock. Two phases of this operation, the 'critical phase' and the 'wait for lock' phase can be monitored and optimized if necessary.
## Global Database Lock: Critical Phase
Savepoints are used to implement backup and disaster recovery in SAP HANA. If the state of SAP HANA has to be recovered, the database log from the last savepoint will be replayed.
While SAP HANA is designed to keep the time period of the lock as short as possible, poor I/O performance can extend it to a length that causes a considerable performance impact. You can analyze the savepoint performance with this SQL statement which shows how long the last and the current savepoint writes took/are taking:
```
select start_time, volume_id,
       round(duration / 1000000) as "Duration in Seconds",
       round(critical_phase_duration / 1000000) as "Critical Phase Duration in Seconds",
       round(total_size / 1024 / 1024) as "Size in MB",
       round(total_size / duration) as "Approx. MB/sec",
       round(flushed_rowstore_size / 1024 / 1024) as "Row Store Part MB"
  from m_savepoints
 where volume_id in (select volume_id from m_volumes
                      where service_name = 'indexserver');
```
An example of the output is shown here. The critical phase duration, in which savepoints need to take a global database lock, must be observed carefully. The duration should not be longer than a second. In this example the times are significantly higher due to I/O problems.
(Example output, "Savepoints": the listed savepoints took between 16 and 167 seconds in total, for write sizes between roughly 1.2 GB and 20 GB, at throughputs of only 59 to 201 MB/sec.)
The following SQL generates a histogram on the critical phase duration; the output is shown below:
```
select to_char(server_timestamp, 'yyyy.mm.dd') as "time",
       sum(case when (critical_phase_duration <= 1000000) then 1 else 0 end) as "<= 1 s",
       sum(case when (critical_phase_duration > 1000000 and critical_phase_duration <= 2000000) then 1 else 0 end) as "<= 2 s",
       sum(case when (critical_phase_duration > 2000000 and critical_phase_duration <= 3000000) then 1 else 0 end) as "<= 3 s",
       sum(case when (critical_phase_duration > 3000000 and critical_phase_duration <= 4000000) then 1 else 0 end) as "<= 4 s",
       sum(case when (critical_phase_duration > 4000000 and critical_phase_duration <= 5000000) then 1 else 0 end) as "<= 5 s",
       sum(case when (critical_phase_duration > 5000000 and critical_phase_duration <= 10000000) then 1 else 0 end) as "<= 10 s",
       sum(case when (critical_phase_duration > 10000000 and critical_phase_duration <= 20000000) then 1 else 0 end) as "<= 20 s",
       sum(case when (critical_phase_duration > 20000000 and critical_phase_duration <= 40000000) then 1 else 0 end) as "<= 40 s",
       sum(case when (critical_phase_duration > 40000000 and critical_phase_duration <= 60000000) then 1 else 0 end) as "<= 60 s",
       sum(case when (critical_phase_duration > 60000000) then 1 else 0 end) as "> 60 s",
       count(critical_phase_duration) as "ALL"
  from "_SYS_STATISTICS"."HOST_SAVEPOINTS"
 where volume_id in (select volume_id from m_volumes where service_name = 'indexserver')
   and weekday(server_timestamp) not in (5, 6)
 group by to_char(server_timestamp, 'yyyy.mm.dd')
 order by to_char(server_timestamp, 'yyyy.mm.dd') desc;
```
Savepoint Histogram
The performance of the backup can be analyzed with this statement:
```
select mbc.backup_id,
       seconds_between(mbc.sys_start_time, mbc.sys_end_time) as seconds,
       round(sum(mbcf.backup_size) / 1024 / 1024 / 1024, 2) as size_gb,
       round(sum(mbcf.backup_size) /
             seconds_between(mbc.sys_start_time, mbc.sys_end_time) / 1024 / 1024, 2)
             as speed_mbs
  from m_backup_catalog_files mbcf, m_backup_catalog mbc
 where mbc.entry_type_name = 'complete data backup'
   and mbc.state_name = 'successful'
   and mbcf.backup_id = mbc.backup_id
 group by mbc.backup_id, mbc.sys_start_time, mbc.sys_end_time;
```
## Wait for Lock Phase
Apart from a long-running 'critical phase', the savepoint can also have a long-running 'wait for lock' phase which can impact the performance of the system. In the waitForLock phase the savepoint must wait for exclusive access to the ConsistentChangeLock; if another database operation holds this lock for a longer time, delays are possible.
You can find further information on the waitForLock phase in KBA 2100009 - FAQ: SAP HANA Savepoints; refer to question #6 'How can typical savepoint issues be analyzed and resolved?'.
To check if there is a potential issue with the runtime of the waitForLock phase, you can use the script HANA_Configuration_MiniChecks_* (depending on your HANA database release and revision there are different versions of the script) from SAP Note 1969700 - SQL Statement Collection for SAP HANA. Where the script finds a potentially critical issue, column C in the output is marked with an 'X'.
| CHID | DESCRIPTION | VALUE | EXPECTED VALUE | C | SAP NOTE |
|--------|----------------------------------------------|---------|------------------|-----|------------|
| **** | SAP HANA MINI CHECKS | | | | 1999993 |
| M0346 | Long waitForLock savepoint phases (mid-term) | 16 | 0 | X | 2100009 |
In many cases long waitForLock phases are caused by HANA-internal lock contention. We can use the script HANA_Configuration_MiniChecks_* to troubleshoot the issue further. In this example output from the 'LOCKS' section of the script, we get an indication that there is an I/O issue on the system and an issue with FileIDMapping contention:
| CHID | DESCRIPTION | VALUE | EXPECTED VALUE | C | SAP NOTE |
|--------|-----------------------------------------------|-----------------------------------------------|------------------|-----|------------|
| M1052 | Significant internal lock waits (short-term) | PageIO::SyncCallbackSemaphore (1.59 threads) | none | X | 1999998 |
| M1055 | Locks with repeated wait spikes (long-term) | FileIDMap (48 x 27) | none | X | 1999998 |
| | | PageIO::SyncCallbackSemaphore (428 x 28) | none | X | 1999998 |
## IO Bottlenecks
More information about locks is available in KBA 1999998 - FAQ: SAP HANA Lock Analysis; there, the meaning of the lock PageIO::SyncCallbackSemaphore is described as follows:
Starting with SAP HANA 2.0 synchronous I/O requests are implicitly executed as asynchronous I/O requests. The PageIO::SyncCallbackSemaphore indicates that the thread is currently waiting for the asynchronous I/O request to finish properly. See SAP Note 1999930 for tuning SAP HANA I/O performance.
This indicates that there is an IO bottleneck on the system and the IO throughput is too slow at times.
You can confirm this by executing the script HANA_IO_Savepoints_* (*depending on your HANA Database release and revision there are different versions of the script) from the SQL Statement Collection for SAP HANA. In this case you can make a change in the modification section to only collect information where MIN_WAIT_FOR_LOCK_PHASE_DURATION_S runs for 10 seconds or greater:
```
( SELECT /* Modification section */
    '1000/10/18 07:58:00' BEGIN_TIME,
    '9999/10/18 08:05:00' END_TIME,
    'SERVER' TIMEZONE,
    '%' HOST,
    '%' PORT,
    -1 VERSION,
    -1 MIN_BLOCKING_PHASE_DURATION_S,
    10 MIN_WAIT_FOR_LOCK_PHASE_DURATION_S,
    -1 MIN_CRITICAL_PHASE_DURATION_S,
```
The script has the following output parameters; the columns 'LOCK_S' and 'MB_PER_S' are particularly significant here:
```
[OUTPUT PARAMETERS]
- START_TIME: Start time of considered time interval
- BLK_PHASE_START_TIME: Start time of blocking phase
- CRIT_PHASE_START_TIME: Start time of critical phase
- END_TIME: End time of savepoint
- HOST: Host
- PORT: Port
- VERSION: Savepoint version
- COUNT: Number of savepoints
- P: Savepoint purpose ...
- I: Savepoint initiation (E -> EXECUTED_EXPLICITLY, T -> TRIGGERED_TIMEBASED)
- AGG: Aggregation type used for the subsequent output columns (MAX -> maximum, AVG -> average, SUM -> total)
- RETRIES: Number of flush retries in the non-critical phase
- TOTAL_S: Savepoint duration (s)
- BLK_S: Blocking phase (s), i.e. time when DML operations can be blocked
- LOCK_S: waitForLock phase duration (s)
- CRIT_S: Critical phase duration (s)
- SIZE_MB: Savepoint write size (MB)
- MB_PER_S: Write throughput (MB / s)
- RS_SIZE_PCT: Flushed data related to rowstore compared to overall flushed data (%)
```
If the output shows that the throughput (MB_PER_S) is very low at the times of the long-running waitForLock phases (LOCK_S), the I/O bottleneck is confirmed.
In this case your storage/I/O team needs to do a deeper analysis of the I/O layers and optimize the I/O to meet the workload requirements of the system. KBA 1999930 - FAQ: SAP HANA I/O Analysis can be used as a starting point for the I/O analysis and optimization.
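As a quick cross-check without the full script, recent savepoint statistics can also be read directly from the monitoring view M_SAVEPOINTS (a minimal sketch; the available columns vary by HANA revision):

```
-- Sketch: inspect recent savepoints; long phase durations or low write
-- throughput at the same times point to an I/O bottleneck
SELECT * FROM SYS.M_SAVEPOINTS ORDER BY START_TIME DESC;
```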
## Lock Contention
The ConsistentChangeLock can be held by threads that are themselves waiting for FileIDMapping locks, so ConsistentChangeLock waits can be a secondary symptom of FileIDMapping contention. In that case, analyze and resolve the FileIDMapping contention first; the ConsistentChangeLock contention will then improve as well.
For more information and potential root causes of this issue see question #15 'What are common reasons for ConsistentChangeLock contention?' in the KBA 1999998 - FAQ: SAP HANA Lock Analysis .
## Run-time Dump
If the above does not allow you to resolve the issue, you can generate a runtime dump to collect more detailed information on the root cause of the long runtime of the waitForLock phase (see the section 'Long waitForLock phase' under 'Symptom' in question 6 'How can typical savepoint issues be analyzed and resolved?' of KBA 2100009 - FAQ: SAP HANA Savepoints). KBA 3014528 - How-To: Analyzing blocked savepoint runtime dumps gives specific information about analyzing runtime dumps in relation to locks and savepoints.
Automated analysis of blocked savepoint runtime dumps is possible using the SAP HANA Dump Analyzer supportability tool; see the section 'Savepoint Analyzer' in KBA 2498739 - How-To: Analyzing Runtime Dumps with SAP HANA Dump Analyzer for further information.
## Related Information
Using the SQL Statement Collection for Analysis and Health Checks [page 8]
SAP Note 1969700
SAP Note 2100009
SAP Note 1999930
SAP Note 1999998
SAP Note 3014528
SAP Note 2498739 |
## 3.5 Configuration Parameter Issues | ## 3.5 Configuration Parameter Issues
The SAP HANA database creates alerts if it detects an incorrect setting for any of the most critical configuration parameters.
The following table lists the monitored parameters and related alerts.
Alerts related to configuration
| Alert ID | Alert Name | Parameter | Further Information |
|------------|--------------------------------------|---------------------------------------------------|----------------------------------------------------------------|
| 10 | Delta merge (mergedog) configuration | indexserver.ini - mergedog - active | Delta Merge |
| 16 | Lock wait timeout configuration | indexserver.ini - transaction - lock_wait_timeout | Transactional Problems |
| 32 | Log mode legacy | global.ini - persistence - log_mode | Issues with Configuration Parameter log_mode (Alert 32 and 33) |
| 33 | Log mode overwrite | global.ini - persistence - log_mode | Issues with Configuration Parameter log_mode (Alert 32 and 33) |
To check for parameters that are not set according to the default values you can use the following SQL statement:
```
SELECT a.file_name, b.layer_name, b.tenant_name, b.host, b.section, b.key,
       a.value AS defaultvalue, b.currentvalue
  FROM sys.m_inifile_contents a
  JOIN ( SELECT file_name, layer_name, tenant_name, host, section, key,
                value AS currentvalue
           FROM sys.m_inifile_contents
          WHERE layer_name <> 'DEFAULT' ) b
    ON a.file_name = b.file_name
   AND a.section   = b.section
   AND a.key       = b.key
   AND a.value    <> b.currentvalue
```
Alternatively, use the script 'HANA_Configuration_Parameters_2.00.040+' in the SQL Statement Collection for SAP HANA (see SAP Note 1969700). The output of this script will list all parameters deviating from recommended settings.
## Note
Default values of parameters may change when updating the SAP HANA database with a new revision. Custom values on the system level and on the host level will not be affected by such updates.
## Correcting Parameter Settings
You can change configuration parameters using the SAP HANA cockpit or the ALTER SYSTEM ALTER CONFIGURATION statement.
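For example, a custom value can be set on the SYSTEM layer, or an existing custom value removed so that the shipped default applies again. The following sketch uses the lock_wait_timeout parameter purely for illustration:

```
-- Set a custom value on the SYSTEM layer and activate it immediately
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('transaction', 'lock_wait_timeout') = '1800000' WITH RECONFIGURE;

-- Remove the custom value so the shipped default applies again
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  UNSET ('transaction', 'lock_wait_timeout') WITH RECONFIGURE;
```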
All configuration parameters are defined in table CONFIGURATION_PARAMETER_PROPERTIES. You can look up parameter details either in the system using for example SAP HANA cockpit or by referring to the Configuration Parameter Reference Guide in the SAP Help Portal. In addition to basic properties such as data type, unit and a default value the definition includes a flag to indicate if a system restart is required before a changed value becomes effective.
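These properties can also be looked up with SQL instead of the cockpit, for example (a sketch; the filter values are examples, and the exact column set is documented in the SAP HANA SQL Reference):

```
-- Show the definition of a parameter, including whether a restart is required
SELECT * FROM SYS.CONFIGURATION_PARAMETER_PROPERTIES
 WHERE SECTION = 'persistence' AND KEY = 'log_mode';
```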
The SAP Note 2600030 - Parameter Recommendations in SAP HANA Environments is updated regularly with version-specific information covering cases where configuration parameter settings other than the default values are recommended.
Usually alerts on incorrect parameter settings include information about correct setting of the parameter. So, unless you have received a specific recommendation from SAP to change the parameter to another value, you can fix the issue by changing the parameter from the Configuration of System Properties monitor of SAP HANA cockpit. You can search for a specific parameter by filtering on the parameter name. In most cases the suggested correct value will be the default value. Most of the parameters can be changed online and do not require any further action; exceptions for common parameters are noted in the reference document referred to above. For more information about how to change parameters, see Memory Information from Logs and Traces .
## Note
Make sure that you change the parameter in the correct ini-file and section, since the parameter name itself may not be unique.
## Related Information
Delta Merge [page 129]
Transactional Problems [page 167]
Issues with Configuration Parameter log_mode (Alert 33) [page 124]
Memory Information from Logs and Traces [page 66]
SAP Note 2600030
SAP Note 1969700
SAP HANA Configuration Parameter Reference |
## 3.5.1 Issues with Configuration Parameter log_mode (Alert 33) | ## 3.5.1 Issues with Configuration Parameter log_mode (Alert 33)
Alert 33 is raised whenever the write mode to the database log is not set correctly for use in production.
## Context
To ensure point-in-time recovery of the database the log_mode parameter must be set to 'normal' and a data backup is required.
The following steps are recommended when facing this alert:
## Procedure
1. Change the value of the parameter log_mode to normal. You can do this in SAP HANA cockpit or from the command line.
2. Schedule an initial data backup.
3. Test successful completion of the backup.
4. Back up the database configuration. For information on how to perform a backup of database configuration files, see SAP Note 1651055 - Scheduling SAP HANA Database Backups in Linux.
5. Schedule a regular data backup.
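Step 1 can be performed on the command line with a statement like the following sketch (note that a change of log_mode only becomes effective after a restart of the database):

```
-- Set the log mode for production use; effective after a database restart
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;
```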
## Related Information
Alerts Reference [page 302]
SAP Note 1651055
SAP Note 1900267
## 3.6 Backup And Recovery | ## 3.6 Backup And Recovery
This section discusses issues related to backup and recovery and how to troubleshoot them.
## Backup and Recovery: Resources
For detailed information on how to perform a backup or recovery of an SAP HANA database please refer to section SAP HANA Database Backup and Recovery of the SAP HANA Administration Guide .
## Note
Following the configuration information, prerequisites and requirements for your HANA release and revision as documented in the SAP HANA Administration Guide can avoid the most common issues faced by customers when performing a backup or recovery.
Backup and Recovery can be done in SAP HANA studio or in SAP HANA cockpit; refer to the documentation for these administration tools for details.
The following links to the SAP HANA Support Content provide a step-by-step description with detailed screenshots of how Backup and Recovery can be executed from SAP HANA studio:
- How to Perform a Backup
- Recovery
In addition to the SAP Notes and Knowledge Base Articles referred to in this section, the following may also be useful:
- SAP Note 1642148 - FAQ: SAP HANA Database Backup & Recovery
- SAP Note 2116157 - FAQ: SAP HANA Consistency Checks and Corruptions
- SAP KBA 2101244 - FAQ: SAP HANA Multitenant Database Containers (MDC)
- SAP Note 2096000 - SAP HANA multitenant database containers - Additional Information
- SAP KBA 2486224 - Tenant DB recovery to another system Tenant DB
- SAP Note 2093572 - SAP HANA Migration from Multi-Node to Single-Node
- SAP Note 1730932 - Using backup tools with Backint for HANA
- SAP Note 1651055 - Scheduling SAP HANA Database Backups in Linux
- SAP Note 2044438 - HANA: Backup fails for Out Of Memory error
- SAP Note 2123153 - HANA Recovery Failing with 'recovery strategy could not be determined'
- SAP Note 2063454 - Long running log backups
- SAP KBA 2495074 - Recovery failed with data backup file of higher HANA revision
- SAP KBA 1990971 - HANA backup failed with 'Wrong Checksum' error
- SAP Note 2605215 - Replay of Logs Hanging During Point in Time Recovery or Operations on System Replication Secondary Site
## Logs and Trace Files
In most cases for both backup and recovery issues a relevant error message will be written to the following SAP HANA database log or trace files:
- backup.log
- nameserver.trc
- backint.log (if using backint and 3rd party tool)
These log and trace files (diagnostic files) can be found using SAP HANA studio, DBA Cockpit, and SAP HANA Database Explorer. The default location for the files is:
/usr/sap/<SID>/HDB<Instance#>/<host>/trace
You can search for either backup or recovery tasks which generate an error in the backup.log file by referring to the task type and the date and time that the issue occurred:
```
2018-09-26T06:58:11+00:00  16614abbef0  INFO   BACKUP SAVE DATA started
2018-08-29T09:45:52+00:00  1658510b5bb  ERROR  BACKUP SAVE DATA finished with error: [447] backup could not be completed ... rc=28: No space left
2018-08-29T09:45:52+00:00  1658510b5bb  INFO   BACKUP state of service ...
```
With the error code and error message you can search for SAP Notes and Knowledge Base Articles in the SAP Knowledge Base using the SAP ONE Support Launchpad.
## Increasing the trace level
In most cases it should not be necessary to use the debug trace level to find the relevant error message. However, if the standard trace level for the backup.log file does not return a meaningful error message, you can temporarily set the level to 'debug' using the following commands:
Enable the trace:
```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('trace', 'backup') = 'debug' with reconfigure;
```
Disable the trace:
```
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') UNSET ('trace', 'backup') with reconfigure;
```
## Common Issues Related to Backup and Recovery
Some of the most common issues are described here.
## Log Volume Full
The symptoms of a log volume full are:
- Alert #2 is generated, for example: "Alert disk usage" for "storage type: LOG".
- The database cannot be started or does not accept new requests.
- The database trace file of a service contains rc=24 no space left on device errors for the basepath_logvolumes or basepath_logbackup .
## Note
Note that when a cluster file system like HP IBRIX or IBM GPFS is used, SAP HANA studio or OS commands like df -h might not detect a problem. In this case use file-system-specific commands, for example mmdf/mmrepquota for IBM GPFS.
In this situation reasons for data and log backups failing may include:
- Backup location for data and log backup is the same location as data and log volumes and space runs out
- Automatic log backup parameter is not enabled (enable_auto_log_backup). If automatic log backup is disabled, the log area grows until the file system is full. If the file system is full, the database will remain frozen until the situation is resolved.
See the following SAP Notes for troubleshooting details in this case:
- 2083715 - Analyzing log volume full situations
- 1679938 - Disk Full Event on Log Volume
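As a first check in such a situation, you can verify with SQL whether automatic log backup is enabled (a sketch; the default value of enable_auto_log_backup is 'yes'):

```
-- VALUE = 'yes' (the default) means automatic log backup is enabled
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
  FROM SYS.M_INIFILE_CONTENTS
 WHERE KEY = 'enable_auto_log_backup';
```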
## Large Size of Log Backup Files
The backup catalog stores information about the data and log backups that were executed for a database, and the catalog itself is backed up as part of the log backups. If you do not ensure that old backups are deleted in accordance with your backup and recovery strategy, the log backups can become increasingly large over time.
For information on the clean-up of the backup catalog see the following:
- Housekeeping for backup.log and backint.log in the SAP HANA Administration Guide
- KBA 2096851 - Backup Catalog Housekeeping within HANA DB
- KBA 2505218 - Large Log Backups due to large backup catalog
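Such housekeeping can be sketched with the BACKUP CATALOG DELETE statement (the backup ID below is purely illustrative; verify your retention requirements first, since WITH FILE also removes the backup files from disk):

```
-- Delete catalog entries older than the given data backup;
-- WITH FILE also deletes the corresponding backup files from disk
BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID 1531280000000 WITH FILE;
```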
## Regular manual consistency check of backups required for SAP HANA Database Recovery
Although HANA has automatic checks that take place while data and log backups are performed you should manually re-check the integrity of any backup if it is copied or moved to another location or if you need to use the backup files for HANA database recovery. This manual check can be done using hdbbackupcheck .
For information on the usage of the hdbbackupcheck command see KBA 1869119 - Checking backups with 'hdbbackupcheck'.
## Problems With Backint and 3rd Party Backup Tools
SAP HANA comes with native functionality for backup and recovery but using 3rd party backup tools is an alternative to using the native functionality. SAP HANA database provides a backup interface called Backint which enables 3rd-party backup tool vendors to connect their product to the backup and recovery capabilities of the SAP HANA database. Backint is fully integrated into the SAP HANA database, that is, data and log backups can be individually configured to be created and recovered using the 3rd-party backup tool. Backups are transferred via named pipe from the SAP HANA database to the 3rd party backup agent, which runs on the SAP HANA database server and then sends the backups to the 3rd party backup server.
For more information on SAP HANA backup and recovery and the integration of 3rd party backup tools with SAP HANA, please refer to:
- The SAP HANA Administration Guide
- The installation documentation for 3rd party backup tools provided by the tool vendor
- SAP Note 2031547 - Overview of SAP-certified 3rd party backup tools and associated support process
If you face an issue with a 3rd party backup tool check SAP Note 2031547 to confirm that the backup tool is supported. If the tool is supported the next step is to check for error messages in the backint.log file as described above. Follow the appropriate support process for the tool as described in this SAP note.
As a workaround for a problem with a 3rd party backup tool, and if you have the required disk space, you could use the SAP HANA native functionality for backup.
## Backint Pipe Size
A cause of slow performance related to backint and SUSE Linux Enterprise Server 15 SP4 may be related to the default pipe size. Due to a change in the default configuration of the maximum size of a pipe in the operating system, the data transfer rate of backint may be restricted. In HANA 2.0 SPS 07 (revision 72.00) a configuration parameter was introduced which makes it possible to set the pipe capacity. The parameter backint_pipe_capacity (in the backup section of the global.ini file) is not active by default but can be set to a number of bytes to override the default capacity of the operating system. Further details and examples of how this can be applied are given in SAP Note 3417100 - HANA Database Backup or Recovery using Backint for SAP HANA is very slow.
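A sketch of setting this parameter follows; the value is an example only (see SAP Note 3417100 for guidance on suitable sizes):

```
-- Override the operating system's default pipe capacity for backint (bytes)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('backup', 'backint_pipe_capacity') = '1048576' WITH RECONFIGURE;
```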
## HANA BACKUP and Recovery Performance
In many cases, problems with backup and recovery performance are related to non-optimal system I/O or the backup catalog size.
To check the HANA I/O performance refer to the Knowledge Base article 1999930 - FAQ: SAP HANA I/O Analysis. Regarding the backup catalog, refer to the subsection Large Size of Log Backup Files above.
To analyze performance problems related to backup and recovery you can also refer to the guided answer Troubleshooting Performance Issues in SAP HANA. If the guided answer does not provide a solution, SAP Support requires the information described in SAP Note 1835075 - Analyze backup and recovery performance issues for further analysis.
## Related Information
SAP HANA Database Backup and Recovery (SAP HANA Administration Guide) |
## 3.7 Delta Merge | ## 3.7 Delta Merge
This section covers troubleshooting of delta merge problems.
The column store uses efficient compression algorithms to keep relevant application data in memory. Write operations on the compressed data are costly as they require reorganizing the storage structure and recalculating the compression. Therefore write operations in column store do not directly modify the compressed data structure in the so called main storage. Instead, all changes are at first written into a separate data structure called the delta storage and at a later point in time synchronized with the main storage. This synchronization operation is called delta merge.
From an end user perspective, performance issues may occur if the amount of data in the delta storage is large, because read times from delta storage are considerably slower than reads from main storage.
In addition, the merge operation on a large data volume may cause bottleneck situations, since the data to be merged is held twice in memory during the merge operation.
The following alerts indicate an issue with delta merges:
- Delta merge (mergedog) configuration (Alert 10)
- Size of delta storage of column store tables (Alert 29)
## Related Information
SAP Note 1909641
SAP Note 1977314 |
## 3.7.1 Inactive Delta Merge | ## 3.7.1 Inactive Delta Merge
In case the delta merge is set to inactive, Alert 10 Delta merge (mergedog) configuration is raised. In a production system this alert needs to be handled with very high priority in order to avoid performance issues.
## Context
Whenever issues with delta merge are suspected, this alert should be checked first. You can do that from the Alerts tile or the Alert Checker Configuration app. An inactive delta merge has a severe performance impact on database operations.
## Procedure
1. Check the current parameter value in the Configuration of System Properties page of SAP HANA cockpit and filter for mergedog .
Check the value of active in the mergedog section of the indexserver.ini file.
2. To correct the value, click Change Layer and choose Restore Default .
This will delete all custom values on system and host level and restore the default value system-wide.
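If you prefer SQL, the same check and correction can be sketched as follows (assuming the alert was caused by a custom value on the SYSTEM layer):

```
-- Check the effective setting of mergedog
SELECT * FROM SYS.M_INIFILE_CONTENTS
 WHERE FILE_NAME = 'indexserver.ini' AND SECTION = 'mergedog' AND KEY = 'active';

-- Remove the custom value so the default (active = yes) applies again
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  UNSET ('mergedog', 'active') WITH RECONFIGURE;
```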
## Note
Depending on the check frequency (default frequency: 15 minutes) the alert will stay in the Alert inbox until the new value is recognized the next time the check is run.
## Related Information
SAP Note 1909641
Memory Information from Logs and Traces [page 66] |
## 3.7.1.1 Retrospective Analysis of Inactive Delta Merge | ## 3.7.1.1 Retrospective Analysis of Inactive Delta Merge
Retrospective analysis of the root cause of the parameter change that led to the configuration alert requires the activation of an audit policy in SAP HANA that tracks configuration changes.
Other sources of information are external tools (for example, SAP Solution Manager) that create a snapshot of configuration settings at regular intervals.
For details about configuring security auditing and for analyzing audit logs, refer to the SAP HANA Security Guide .
## Related Information
## SAP HANA Security Guide |
## 3.7.2 Indicator for Large Delta Storage of Column Store Tables | ## 3.7.2 Indicator for Large Delta Storage of Column Store Tables
If the delta storage of a table gets too large, read operations on the table will slow down. This usually results in degraded performance of queries reading from the affected table.
When the delta storage of a table gets too large, the Alert Size of delta storage of column-store tables (Alert 29) can be raised.
Alert 29 is raised when the amount of memory consumed by the delta storage exceeds the configured thresholds. The thresholds can be customized in the SAP HANA cockpit to take into account the configured size of the delta storage. Note that if the alerts are not configured properly, the symptoms can occur without raising an alert, or there may be no symptoms, even though an alert is raised. For each affected table a separate alert is created.
Usually this problem occurs because of mass write operations (insert, update, delete) on a column table. If the total count of records (record count * column count) in the delta storage exceeds the threshold of this alert before the next delta merge, this alert will be triggered.
Corrective action needs to be taken in one of the following areas:
- Change of an application
- Changed partitioning of the table
- Configuration of delta merge
## Related Information
SAP Note 1977314 |
## 3.7.2.1 Analyze Large Delta Storage of Column Store Tables | ## 3.7.2.1 Analyze Large Delta Storage of Column Store Tables
Analyze and interpret issues related to delta storage with help from alerts in SAP HANA cockpit.
## Procedure
1. If an alert was raised, go to the Alerts tile in the SAP HANA cockpit, click Show all and filter for "delta storage".
Check if the alert is raised for a small number of tables or for many tables. Focus on tables where the alert has high priority. Alerts raised with low or medium priority usually don't need immediate action, but should be taken as one indicator for checking the sizing. Also these alerts should be taken into account when specific performance issues with end-user operations on these tables are reported, since read-access on delta storage may be one reason for slow performance.
2. Click on an alert and check the alert details about its previous occurrences visible in the graph.
- a. If the alert occurred several times, check since when this started. To monitor a longer period of time, use the drop-down menu on the right.
- b. Check whether it occurs regularly at a certain time.
This may indicate a specific usage pattern from application side that might have room for optimization. For example, when many inserts and deletes are performed during a load process, it might be possible to replace these operations with a suitable filter in the source system. You can employ the data in the Expensive Statement Trace and the Performance Monitor to determine the usage of the table by applications.
3. If the timestamp of the alert is current, start by checking the current attributes of this table. Information regarding the delta merge operation on specific tables can be obtained from the system view m_CS_TABLES:

```
SELECT * FROM SYS.m_CS_TABLES WHERE table_name = 'mytable' AND schema_name = 'myschema';
```

If no alert was raised, you can check for the tables with the most records in the delta storage:

```
SELECT * FROM SYS.m_CS_TABLES WHERE record_count >= 0 ORDER BY raw_record_count_in_delta DESC;
```
4. Check the following attributes:
- LAST_MERGE_TIME
- MERGE_COUNT
- READ_COUNT, WRITE_COUNT
- RECORD_COUNT
- RAW_RECORD_COUNT_IN_MAIN
- RAW_RECORD_COUNT_IN_DELTA
- MEMORY_SIZE_IN_MAIN
- MEMORY_SIZE_IN_DELTA
- a. If MERGE_COUNT is high then this is an indicator that the delta merge works properly, while a low MERGE_COUNT suggests a need for corrective action.
A large difference between RAW_RECORD_COUNT_IN_MAIN and RECORD_COUNT suggests that the table has not been compressed properly. Note that compression is not triggered when a merge is triggered from an SQLScript, but only in case of Auto, Smart or Critical Merge.
A high WRITE_COUNT suggests that many insert, update and delete operations occur. If the occurrence of the delta merge problem is rare, then it usually will be sufficient to trigger the merge for this table manually. See Perform a Manual Delta Merge Operation in the SAP HANA Administration Guide .
- b. If there are many deleted records, you can trigger a compression of the table with the following command:

```
UPDATE mytable WITH PARAMETERS('OPTIMIZE_COMPRESSION'='YES');
```

- c. Confirm that the delta merge operation has succeeded by checking the following attributes:
- LAST_MERGE_TIME
- MERGE_COUNT
- RAW_RECORD_COUNT_IN_DELTA
- LAST_COMPRESSED_RECORD_COUNT
5. If problems with the delta storage re-occur frequently for a specific table, check the merge statistics for this table. You can do this in the SAP HANA database explorer by executing the Merge Statistics statement in the Statement Library. You can filter by table name and schema name.
Alternatively you can run the following SQL statement and perform the following checks:

```
SELECT * FROM SYS.m_DELTA_MERGE_STATISTICS WHERE table_name = 'mytable' AND schema_name = 'myschema';
```
- a. Check column SUCCESS for records with value other than TRUE.
- b. Check the column LAST_ERROR for records with value other than 0. A typical error is 2048 and ERROR_DESCRIPTION shows error 2484 which indicates that there was not enough memory to compress the table after the merge.
For other error codes please refer to the SAP HANA Administration Guide .
- c. Check the columns START_TIME, EXECUTION_TIME, MOTIVATION and MERGED_DELTA_RECORDS.
- d. For cases where MERGED_DELTA_RECORDS becomes excessively large, the trigger function for the MOTIVATION type should be reviewed and the load should be analyzed for that time frame (Performance Monitor). A value of MERGED_DELTA_RECORDS = -1 suggests that no records were merged but that a compression optimization was performed.
6. If you need to analyze the delta merge statistics for a longer period, use the equivalent select on table HOST_DELTA_MERGE_STATISTICS of the statistics server:
```
SELECT * FROM _SYS_STATISTICS.HOST_DELTA_MERGE_STATISTICS where table_name='mytable' and schema_name='myschema';
```
You can check the delta merge configuration in SAP HANA cockpit by opening Configuration of System Properties indexserver.ini mergedog .
Since the default value for the frequency of delta merges is already 1 minute (check_interval = 60,000 ms), optimization with regard to memory consumption can only be done by adjusting the decision function of the corresponding merge type and the corresponding priority function. However, changes should be done very carefully and always with involvement of experts from SAP. Parameters of the functions are documented in the SAP HANA Administration Guide.
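If the analysis shows that a one-off merge is sufficient (see step 4 above), it can also be triggered manually with a statement like the following sketch (schema and table name are illustrative):

```
-- Manually merge the delta storage of a single table into its main storage
MERGE DELTA OF "myschema"."mytable";
```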
## Related Information
m_CS_TABLES System View (SAP HANA SQL Reference)
## Alerts
Inactive Delta Merge [page 130]
Use the Statement Library to Administer Your Database
Memory Information from Logs and Traces [page 66] |
## 3.7.3 Failed Delta Merge | ## 3.7.3 Failed Delta Merge
If many cases are identified where auto merge has failed, the error codes need to be analyzed in more detail. Note that the merge only failed if SUCCESS is not TRUE. In any other case the error code describes a non-critical condition during a successful merge.
To analyze the error codes, you should increase the trace level to INFO for the components mergedog and mergemonitor in the INDEXSERVER section of the Database Trace .
To change the trace configuration choose Trace Configuration from the database context menu in the SAP HANA database explorer and change the configuration of the Database Trace .
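The trace levels can also be raised with SQL, for example as in the following sketch (remember to reset the levels after the analysis):

```
-- Raise the database trace level for the merge components to INFO
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('trace', 'mergedog') = 'info',
      ('trace', 'mergemonitor') = 'info' WITH RECONFIGURE;

-- Revert to the defaults after the analysis
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  UNSET ('trace', 'mergedog'), ('trace', 'mergemonitor') WITH RECONFIGURE;
```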
The following table lists error codes and typical corrective actions.
## Error Codes
| Error | Description | Recommended Action |
|---------|------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1999 | General error (no further information available) | Check the indexserver trace for more errors regarding the exception |
| 2450 | Error during merge of delta index occurred | Check in diagnostic files for an Out-Of-Memory dump that occurred during the delta merge operation |
| 2458 | Table delta merge aborted as cancel was manually requested by a kill session call. | No action required. |
| 2480 | The table in question is already being merged. | No action required. |
| 2481 | There are already other smart merge requests for this table in the queue. | No action required. |
| 2482 | The delta storage is empty or the evaluation of the smart merge cost function indicated that a merge is not necessary. | No further action required if this occurs occasionally. If it happens frequently: Check M_DELTA_MERGE_STATISTICS and review smart merge cost function with SAP experts. (parameter smart_merge_decision_func ) |
| 2483 | Smart merge is not active (parameter smart_merge_enabled=no ) | Change the parameter to smart_merge_enabled=yes |
| 2484 | Memory required to optimize table exceeds heap limit (for failed compression optimization operations, TYPE=SPARSE, SUCCESS=FALSE ). | No further action required if this occurs occasionally. If it happens frequently: A) Analyze change operations on the table and consider table partitioning to minimize the size of the delta storage. If no knowledge about the application is available, hash partitioning with a size of 500.000.000 records is a good initial choice. B) Analyze change operations on the table and consider adjusting the parameter auto_merge_decision_func C) Increase delta storage D) Review sizing |
| 6900 | Attribute engine failed | Internal error. Check the indexserver trace for more errors regarding the exception. |
| 29020 | ltt::exception caught while operating on $STORA- GEOBJECT$ | Internal error. Check the indexserver trace for more errors regarding the exception. |
## Related Information
Memory Information from Logs and Traces [page 66] |
## 3.7.4 Delta Storage Optimization | ## 3.7.4 Delta Storage Optimization
Table partitioning allows you to optimize the size of tables in memory and their memory consumption as each partition has its own delta storage.
The memory consumption of a table in memory during a merge operation depends on the number of records, the number and memory size of columns, and the memory size of the table. While the number of records can be kept low by triggering a smart merge from the application, optimization with regard to the size of the table can be achieved by table partitioning. This is due to the fact that each partition holds a separate delta storage. When a merge is performed, the data from the main storage has to be loaded into memory, which is considerably less data when only a single partition is handled rather than the full table.
When considering partitioning it is recommended to analyze the typical usage of this table. Partitions should be created in a way that avoids as much as possible that single statements need to access multiple partitions. If no application knowledge is available, then hash partitioning with a partition size of about 500.000.000 records is a good initial choice.
See Table Partitioning in the SAP HANA Database in the SAP HANA Administration Guide.
## Related Information
Table Partitioning |
## 3.8 Post-Installation Problems with Web IDE | ## 3.8 Post-Installation Problems with Web IDE
This section identifies some common post-installation issues which you may encounter when working with WebIDE.
## Logging in to Space Enablement UI
| Issue: | Logging into space enablement UI gives a 'Forbidden' error. |
|-------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Cause: | Missing authorizations. |
| Resolution: | Ensure that the user logging into the di-space-enablement-ui app has the proper authorizations as follows: • The role collection XS_CONTROLLER_USER • A manually-created role collection containing WebIDE_Administrator and/or WebIDE_Developer role. • SpaceDeveloper role and/or SpaceManager role. |
| More Information: | For more information see the following documents (depending on your SAP HANA version): • For SAP HANA versions below HANA 2 SPS 3, see 'Post-Installation Administration Tasks' in SAP Web IDE for SAP HANA - Installation and Upgrade Guide • For HANA 2 SPS 3 and above, see 'Roles and Permissions for Administration and Development' in SAP HANA Developer Guide for SAP HANA XS Advanced Model. |
## Logging into Web IDE
| Issue: | Logging into WebIDE gives an internal server error or shows a blank page. |
|-------------------|--------------------------------------------------------------------------------------------------------------------------------------------------|
| Cause: | Missing authorizations |
| Resolution: | Ensure the Web IDE user has the correct role collections and is part of the space as SpaceDeveloper (role). |
| More Information: | For more information see 'Roles and Permissions for Administration and Development' in the SAP HANA Developer Guide for SAP HANA XS Advanced Model. |
## Unable to access di-local-npm-registry
| Issue: | When trying to access the URL for di-local-npm-registry to check if a given node module is present and check if the versions are available, the following error message appears: "Error: web interface is disabled in the config file" |
|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Cause: | Access to the web interface of di-local-npm-registry is disabled for security reasons. |
| Resolution: | 1. Run the following command: |
| | xs env di-local-npm-registry |
| | 2. Copy the path shown in the 'storage-path'. It should look similar to this example: |
| | /usr/sap/hana/shared/XSA/xs/controller_data/fss/ a44ae1be-8fa5-4470-b08a-db0begfs8f9f |
| | 3. Run your command-line session as root user and change directory to the storage path you retrieved from the registry. |
| | 4. Search the SAP storage directory by first changing directory again and then using the find command: |
| | cd storage/@sap |
| | find |
## Related Information
SAP Web IDE for SAP HANA - Installation and Upgrade Guide SAP HANA Developer Guide for SAP HANA XS Advanced Model |
## 3.9 Troubleshooting BW on HANA | ## 3.9 Troubleshooting BW on HANA
This section identifies some of the common problem areas for users of Business Warehouse on HANA and points to existing information sources which will help to resolve the issues.
## Documentation
All public documentation related to BW on HANA can be found in the SAP Help Portal on the SAP Business Warehouse product page. In the SAP Community Network (SCN) you can find a complete end-to-end
collection of detailed BW on HANA related information including presentations, blogs, discussions, how-to guides, release information, videos, roadmap and so on. An internal WIKI page is also available with additional information on BW on HANA troubleshooting:
- SAP Help Portal: SAP Business Warehouse, powered by SAP HANA.
- SAP Support Community: SAP Business Warehouse on SAP HANA and HANA SQL-based Data Warehousing.
- WIKI: BW on HANA Documentation and Troubleshooting
This chapter refers to the transaction codes which are used to access each BW function, such as RSRT for the Report Monitor and RSRV for Analysis and Repair of BW Objects. RSA1 is the Data Warehousing Workbench which is the central cockpit used for the administration of almost the entire BW system. A list of the most commonly-used transaction codes is given in a table at the end of this chapter: Frequently-used Transaction Codes.
## Table Redistribution
BW on HANA Table Redistribution ensures that the tables of a BW system are partitioned correctly according to the SAP recommendations and, in the case of a scale-out system, are defined on the correct database nodes. This is important for the optimal performance of the system.
Refer to the following for details of how to configure table redistribution:
- SAP Note 1908075 - BW on SAP HANA: Table placement and landscape redistribution.
- KBA 2517621 - After landscape redistribution data does not appear evenly distributed between the slave nodes in terms of memory usage per HANA node.
- Guided answer How to carry out preparations for a table redistribution for BW with a step-by-step description of how to configure the table redistribution.
If the table placement and table redistribution are done correctly there should be no requirement to make manual changes to the partitioning for BW tables. The expected partitioning for BW objects and associated tables is described here:
- KBA 2044468 -FAQ: SAP HANA Partitioning (question 17: How should tables be partitioned in BW environments?)
- SAP Note 2019973 -Handling Very Large Data Volumes in SAP BW on SAP HANA
## Two Billion Record Limit
## Tip
It is not possible to partition BW master data SID tables. Irrespective of the underlying database, SID tables of master data InfoObjects already have the upper limit value of 2 billion records imposed by the datatype of the SIDs (INT4). A detailed explanation of this topic and what can be done to avoid this limitation is available in SAP Note 1331403 - SIDs, Number Ranges and BW InfoObjects.
It is possible to proactively monitor the 2 billion table or partition limit on HANA using the SAP EarlyWatch Alert Workspace in the SAP ONE Support Launchpad. Using this application, you get an overview of which tables are approaching the limit including a prediction of when the limit will be reached, broken down into a mean,
best-case and worst-case scenario. By default, the forecast is based on all previous measurements. You can find details in the following blog post:
- The New 2 Billion Record Limit in the SAP EarlyWatch Alert Workspace .
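The mean-scenario forecast described above can be pictured as a simple linear extrapolation over previous measurements. The following is an illustrative sketch only (the actual EarlyWatch Alert application also derives best- and worst-case scenarios; the measurement values below are invented):

```python
# Illustrative sketch: linearly extrapolate past record-count
# measurements to estimate when the 2 billion (INT4) limit is hit.
LIMIT = 2_000_000_000

def days_until_limit(measurements):
    """measurements: list of (day_number, record_count), oldest first.
    Returns days from the last measurement until the limit is reached,
    or None if the table is not growing."""
    (d0, r0), (d1, r1) = measurements[0], measurements[-1]
    growth_per_day = (r1 - r0) / (d1 - d0)
    if growth_per_day <= 0:
        return None
    return (LIMIT - r1) / growth_per_day

# Example: 1.4 billion records 90 days ago, 1.7 billion today
# -> at the same growth rate, roughly 90 days remain.
print(round(days_until_limit([(0, 1_400_000_000), (90, 1_700_000_000)])))
```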
## Checking Consistency with RSDU_TABLE_CONSISTENCY
For BW on HANA systems you can use the ABAP report RSDU_TABLE_CONSISTENCY to check that the table placement is correct and that the BW objects are located on the correct HANA nodes. A user document for the report is attached to the SAP Note 1937062 - Usage of RSDU_TABLE_CONSISTENCY.
A practical example of where this report can be useful is around DSO activation performance issues. If the DSO activation is in a distributed HANA environment and it takes longer than expected, you should check if the partitioning of DSO tables (Active Data, Activation Queue and Change Log tables) is consistent, that is, same partitioning and same hosts for the related partitions.
The RSDU_TABLE_CONSISTENCY report is only available in BW on HANA systems and not in the next-generation data warehouse product BW/4HANA, where most of the checks made by the report are obsolete. Some checks made by the report that are still required will move to transaction RSRV; see the following subsection and also Analysis and Repair Environment in the NetWeaver section of the SAP Help Portal.
## Consistency Checks with RSRV Transaction
RSRV consistency checks (Analysis and Repair Environment) are part of the BW application. In the analysis and repair environment you can perform consistency checks on the data and metadata stored in a BW system. The main purpose of this is to test the foreign key relationships between the individual tables of the enhanced star schema of the BW system. These checks can be useful for solving problems in BW on HANA environments if you are getting unexpected data results. BW application inconsistencies can also cause errors when converting standard InfoCubes to SAP HANA-optimized InfoCubes; the RSRV checks can be used to find and resolve these inconsistencies. See the SAP NetWeaver modeling documentation: Converting Standard InfoCubes to SAP HANA-Optimized InfoCubes.
## Delta Merge
The implementation and operation of a SAP BW on HANA system requires a solid understanding of the delta merge process in the database. Understanding this will allow administrators to optimize data loading and can in some cases help improve the performance of BW on HANA queries. Delta merge processing can consume a significant amount of system resources especially for large tables and therefore needs to be understood to manage any system powered by SAP HANA. For more information please see:
- For details of delta merge configuration for BW: Triggering a Delta Merge (SAP BW/4HANA).
- For information on troubleshooting delta merge: KBA 2057046 - FAQ: SAP HANA Delta Merges
## Query Execution Modes (RSRT)
For query execution, BW always checks whether optimized operations can be pushed down to the HANA database. The four available execution modes can be seen in RSRT in the query properties.
In some cases, especially where there is no support for pushdown of BW functionality to the HANA database, or where you get an error or a performance problem with the HANA operation modes (TREXOPS=6, 7 or 8), it can make sense to use operation mode 0 (no optimized operations in SAP HANA/no BWA), at least as a workaround, to execute the query.
Not all BW queries will benefit from the HANA operation modes that push down BW functionality to the HANA Database. The benefit depends on the exact query definition and the data in the providers. Therefore, where the runtime of the query is critical it is necessary to test the impact of this feature on the performance of every single query and then choose the proper query operation mode.
For more information please see:
- For more details on the operation modes: NetWeaver Generic Tools and Services documentation: Operations in SAP HANA / BWA
- For more detail on the process logic, the SAP Community Wiki: Query Property: Operations in HANA BWA (TREXOPS) .
- For a list of the functionality that is currently pushed down to HANA, refer to the SAP Note 2063449 -Push down of BW OLAP functionalities to SAP HANA.
- For information on testing the benefits of pushdown: 2365030 - Bex Query slower with HANA pushdown.
## Performance Issues
There is often the expectation after a BW on HANA migration from another database that all BW queries should be much faster. However, if a query was already slow before the database migration to SAP HANA because of calculations happening on the BW application server, and these calculations cannot be pushed down to the HANA database, then HANA cannot help in these cases; a redesign of the BW query will be necessary.
The query runtime component that we can expect to see significantly improved with SAP HANA is the database time; this is Event ID 9000 (Data Manager Time) in the BW query runtime statistics. What we can possibly improve with HANA, apart from the database time, is the time spent in the Analytic Manager. The Analytic Manager in BW (previously known as the OLAP engine) is the brain of BW when it comes to BW query processing: it is responsible for navigation, filtering and aggregation, among other things, during query execution. For additional information, refer to the following KBA:
- 2122535 -How to determine if a BW on HANA query performance problem is HANA database related .
Before starting performance troubleshooting on HANA we first need to identify if there is a general performance problem for the BW on HANA system or if the problem is just for one BW query or report. For information on the steps to verify this, troubleshoot the issue and collect the required logs and traces refer to the HANA WIKI:
- Troubleshooting HANA Performance issues .
## Plan Viz Trace Generated in RSRT
RSRT can be used to check Business Explorer (BEx) queries. The benefit of using RSRT to check a BW query is that it is independent of the client frontend, so you can eliminate the client frontend tool (BEx, BI frontend tools, and so on) from the problem analysis. If the issue can be reproduced in RSRT, then it is not related to the client frontend.
Transaction RSRT provides several debug features that are helpful when analyzing a BW query. With BW on HANA, queries are usually executed on the HANA database using stored procedure TrexViaDbsl or TrexViaDbslWithParameter. The call of procedure TrexViaDbsl(withParameter) can be displayed in RSRT if the Debug flag Python Trace or Plan/Performance Trace is set on BW740 systems or Generate Plan Viz File on BW750 systems. You can copy this call and use it in SAP HANA Studio to execute Plan Viz to check the execution plan.
Prerequisites: at least BW 740 SP13 and HANA 1.0 SPS 11
Refer to the Wiki as follows:
- For basic information on transaction RSRT: Query Monitor .
- For the detailed steps on how to do this: BWonHana: PlanViz Trace generated in RSRT .
It is often the case that performance issues for individual BW queries can be resolved with hints. For details of hints refer to the KBA 2142945 - FAQ: SAP HANA Hints:
- Question 3: How can hints be specified? (describes how hints can be set at the BW application level)
- Question 4: What important hints are related to SAP HANA?
## Composite Provider
One of the most used InfoProviders for BW on HANA reporting is the composite provider which can be created using the BW Modeling tools. These tools provide a new development environment for editing BW metadata objects. The development environment is based on the SAP HANA Studio. For more information refer to the following blog:
- SAP BW Modeling tools: Three things to know when using the BW modeling tools in Eclipse
A Composite Provider is an InfoProvider in which you can combine data from BW InfoProviders such as InfoObjects, DataStore Objects, SPOs and InfoCubes, or SAP HANA views such as Analytical or Calculation Views using join or union operations to make the data available for reporting.
To avoid performance issues with BW queries created on the composite provider, especially related to the use of joins or navigation attributes, it is important that the composite provider is modeled based on the information in the following SAP Notes:
- 2271658 - Design Considerations for Composite Provider
- 2103032 - Long runtimes in query on Composite Provider
## Advanced DSO
The end goal for the advanced DSO (ADSO) is to be the central persistency object in BW on HANA, replacing the InfoCube, classic DSO, HybridProvider and PSA. While there are still some gaps in covering the complete functionality, we recommend considering the advanced DSO as the central (and only) persistency object for all new projects.
Additional information on the ADSO can be found in the following:
- SCN blog post The 'advanced' DataStore Object - renovating BW's persistency layer
- SAP Note 2070577 - (advanced) DataStore Object - availability in BW 7.4 SP08, SP09 and SP10
To avoid data loading or reporting performance problems related to ADSO please implement the recommendations and corrections from the following SAP Notes:
- 2185212 - ADSO: Recommendations and restrictions regarding reporting
- 2684950 - 750SP14: Performance problems during access of DTP with source as ADSO
- 2374652 - Handling very large data volumes with advanced DataStore objects in SAP BW on SAP HANA and BW/4HANA
## Column Views
After you migrate a BW system to a BW on SAP HANA database, all InfoProviders and InfoObjects must have column views, which are needed for reporting. When a new BW InfoProvider or InfoObject is created, the associated column view is generated during activation on the SAP HANA database. If column views do not exist or are inconsistent, the associated BW queries based on the view may fail.
Problems with column views on SAP HANA can often be resolved by recreating the column view for the associated InfoProvider; this can be done using the report RSDDB_LOGINDEX_CREATE. For further information refer to the following SAP Notes:
- KBA 2106851 - Column Store Error 2048 executing RSDDB_LOGINDEX_CREATE on Multiprovider - on symptoms related to these issues and how to use the report to resolve them.
- SAP Note 1695112 - Activities in BW after migrating to the SAP HANA database for possible root causes for HANA view inconsistencies especially after the database migration to HANA.
You can use the report RSDDB_INDEX_CREATE_MASS to generate views for more than one InfoProvider. Further information can be found in the Wiki:
- BWonHANA: InfoProvider column views .
## Authorization issues related to BW on HANA Generated views
To ensure that you have the required permissions to access views and data related to BW on HANA please follow the configuration steps in the following:
- Help Portal: Authorizations for Generating SAP HANA Views.
- SAP BW/4HANA: Generating SAP HANA Views from the SAP BW/4HANA System
If, after doing the above configuration, you still get an authorization-related error when accessing the BW generated view on HANA, you can find the missing privilege(s) using the information in SAP Note 1809199 - SAP HANA DB: Debugging user authorization errors.
## Cross-Database Access for BW Calculation views
Read-only queries between tenant databases in the same SAP HANA system are also possible for BW calculation views. This means that database objects such as tables and views can be local to one database but be read by users from other databases in the same system.
The configuration that is required on the HANA Database side is described in the SAP HANA Administration Guide under the section Cross-Database Access. For this to work with BW on HANA generated views an additional configuration step is required. This BW side configuration for the BW External HANA View is described in the following SAP note (see question 8 'External HANA View'):
- 2312583 -FAQ: BW and HANA MDC: Cross-Database-Access in HANA Multitenant Database Containers .
For the External HANA View, a dedicated setting is available to support read access from a remote tenant database via HANA Cross-Database Access (CDA) based on MDC. If the flag is set, then the request handling is deactivated for the external SAP HANA views. This is required because the request handling is implemented in procedures, and HANA CDA does not support embedded procedures in remote views.
Further information can be found in the context help of BW transaction RS2HANA_ADMIN under General Settings > Multi-tenant enabled. Additional limitations for cross-database access are described in the following SAP Note:
- 2196359 -Limitations for cross-database access in an SAP HANA MDC environment .
## Privilege Errors
A privilege error may occur for a user trying to do a data preview or access a view in the remote database:
```
[258]: insufficient privilege: Detailed info for this error can be found with guid '<some GUID>'
```
See Troubleshooting Authorization Problems [page 156] in this guide for how to use the GUID value to find out which privileges are missing.
Or an authorization error may occur:
User <USER NAME> is not authorized to use VIEW _SYS_BIC.<package>/<view_name> because of missing grantable privileges on underlying objects
The most likely root cause of these errors is that the user does not have SELECT privilege on the bottom column view WITH GRANT OPTION. For a particular user to be able to select from a view (independent of whether this is an activated column view modeled through the SAP HANA Modeler or a SQL view that was created manually), the view owner needs the SELECT privilege on all underlying objects WITH GRANT OPTION. In a cross-database setup, the remote user that is mapped to _SYS_REPO on the tenant database that created the top calculation view will usually not have this privilege. See also point 6 of the solution in the following SAP note:
- 2196359 -Limitations for cross-database access in an SAP HANA MDC environment .
Further background information on this privilege requirement can be found in the following:
- SAP HANA Security Guide: Cross-Database Authorization in Tenant Databases (in the section CrossDatabase Authorization in Tenant Databases)
- The 'Cause' section of KBA 2787578 Running a HANA view fails with error "User _SYS_REPO is not allowed to grant privilege SELECT for TABLE"
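To make the required grant concrete, the statement the view owner needs on each underlying object looks like the following. The sketch below is purely illustrative (not an SAP tool), and the schema, object and user names are hypothetical:

```python
# Illustrative sketch: build GRANT statements giving the view owner
# SELECT WITH GRANT OPTION on each underlying object, so that views
# built on top of them can be selected by other users.
def grant_with_grant_option(objects, grantee):
    """objects: list of (schema, object_name) tuples; grantee: user name."""
    return [
        f'GRANT SELECT ON "{schema}"."{name}" TO {grantee} WITH GRANT OPTION'
        for schema, name in objects
    ]

# Hypothetical example: grant on one base column view.
for stmt in grant_with_grant_option([("MYSCHEMA", "BASE_COLUMN_VIEW")],
                                    "REMOTE_MODELER"):
    print(stmt)
```

The generated statements would then be executed in the database that owns the underlying objects.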
## Consumption of External BW HANA Views for BW InfoProviders in XSA WebIDE and Required Privileges
If you want to use BW external HANA views in your modeling in WebIDE then you need to assign certain privileges for this to work. If you don't do this then the creation of the native views and/or querying the native views in WebIDE will fail with an "insufficient privilege" error such as:
```
Error: (dberror)[258]:insufficient privilege: search table error: [2950] exception 306003: Authorization failed in Calculation Engine. User is not authorized to 'SELECT'.
```
The required privileges are:
- The HDI container owner (technical user with suffix "#OO") needs SELECT WITH GRANT OPTION on the relevant BW views.
- The application user needs analysis authorization (Analytic Privileges) on the relevant BW views.
See also SAP note 2907570 -Consumption of external HANA views for BW InfoProviders in XSA WebIDE database modules . Attached to this SAP note is a template ABAP report ZRS2HANA_CREATE_XSA_ROLES that you can use to assign the required privileges and a detailed document that has step-by-step information and examples on how to use the report.
## Data Loading, Sizing and Memory Usage
After a BW on HANA migration, if you experience high memory usage or OOM dumps on HANA during data loading using a Data Transfer Process or InfoPackage, check whether the SAP Notes below are applicable in your case and, if relevant, applied in your system:
- 2230080 - Consulting: DTP: Out of memory situation during 'SAP HANA Execution' and the 'Request by Request' Extraction
- 2402503 - Increased memory requirement of BW objects in SAP HANA
- 2602477 - Considerations for packagewise extraction from HANA DataSource based on Calculation Views
If you experience high loads and unloads of column store tables as described in the KBA 2127458 - FAQ: SAP HANA Loads and Unloads the sizing of the system should be checked.
You can use the program /SDF/HANA_BW_SIZING to size the system either before the database is migrated to HANA to get an estimation of the required hardware and memory requirements, or after the migration when the database is running on HANA to check if the sizing of the BW on HANA system is correct. Further information on the usage of this program is available in the following SAP note:
- 2296290 - New Sizing Report for SAP BW/4HANA.
To get an overview of the current memory usage in the HANA system you can use the script HANA_Memory_Overview_1.00.90+.txt from the KBA 1969700 - SQL Statement Collection for SAP HANA; the script shows the size of the row and column store tables and also the size of the HANA memory heap allocators.
Background information, for example, to understand the purpose of the HEAP allocator and to understand what can be done to reduce the memory consumption for the HEAP allocator, can be found in the following SAP note:
- 1999997 -FAQ: SAP HANA Memory , (question 13: What can I do if a certain heap allocator is unusually large?)
For information on how to do housekeeping for large BW on HANA row store tables refer to the information in the following KBA:
- 2388483 -How-To: Data Management for Technical Tables .
## Wrong Data
As a first step to analyze wrong data or unexpected results, the issue should be checked from the BW application side using the steps given in the BW application KBAs:
- 2271335 - Wrong Data is Read from a HANA InfoProvider,
- 2399752 - Different Results with Operations Mode in BWA/HANA (TREXOPS) in a BW Query
- 1508237 - First Steps to Check Wrong Number in BW Query (Transaction RSFC).
Additionally, all known reasons why the HANA database could return wrong or unexpected results are explained in the following SAP HANA KBA:
- 2222121 - SAP HANA Wrong Result Sets
## Frequently-used Transaction Codes
The following table lists (in alphabetical order) the most often used transaction codes when working in a BW on HANA system:
| Tcode | Description |
|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| AL11 | SAP Directories; you can find the transport log under DIR_TRANS, work process logs under DIR_HOME, and so on. Useful when you do not have authorization to run ST11. |
| DB01 | Display blocked transactions; you can find the same information under DBACOCKPIT > Diagnostics > Locks > Blocked Transactions. |
| DB02 | Display consistency check results; helps you find missing objects in the database, unknown objects in ABAP, index checks, and so on. Displays the same interface as DBACOCKPIT/ST04 > Diagnostics > Missing Tables and Indexes. |
| DBCO | Maintain secondary Database connections |
| LISTCUBE | List the contents of an InfoCube |
| OS01 | LAN check by ping; can check the connectivity between the database server and the application server |
| RS2HANA_CHECK | Check all prerequisites for successful replication of BW authorizations to SAP HANA; can be used to check whether BW users have authorization to run the HANA views generated from the BW model |
| RS2HANA_ADMIN | Can be used as a general entry point where all BW objects with "External SAP HANA View" enabled are visible |
| RSA1 | Data Warehousing Workbench (>= BI 7.0) / Administrator Workbench (< BI 7.0). The workbench is the central cockpit used for the administration of almost the entire BW system. The RSA1 main screen can be divided into three general areas; the area on the far left allows you to choose BW modeling components such as InfoProviders, InfoObjects, InfoSources and DataSources. |
| RSD1 | Characteristic maintenance. For issues such as InfoObjects that cannot be activated, you can use this tcode to activate the InfoObjects again to reproduce the issue. |
| RSDDB | SAP HANA/BWA index maintenance |
| RSLIMOBW | Create/modify Composite Provider |
| RSPC | Process Chain Maintenance, can be used to find the process chain job logs |
| RSRQ | Data Load Monitor for a Request |
| RSRT | Start of the report monitor; can be used to run BEx queries |
| RSRV | Analysis and Repair of BW Objects; can be used to check BW object consistency |
| RSTT | RS Trace Tool |
| RZ11 | List ABAP profile parameters |
| SAINT | SAP Add-on Installation Tool |
| SPAM | SAP Patch Manager |
| SE01 | Transport Organizer tool; for issues related to BW transports, can be used to find the related transports and the transport log |
| SE11 | ABAP Dictionary; displays the ABAP table/view definition. Can also be used to find the primary index definition |
| SE14 | ABAP Dictionary database utility; can be used to check whether an ABAP table exists at database level, recreate tables, and so on |
| SE16 | Display content of an ABAP table/view |
| SE38 | ABAP Editor; can be used to display/run ABAP programs. If you only need to run an ABAP program, you can use SA38 instead. Useful ABAP programs: • RSDU_TABLE_CONSISTENCY • RSDDB_LOGINDEX_CREATE • RSDDB_MASS_INDEX_CREATE • RS_BW_POST_MIGRATION • RS2HANA_AUTH_RUN • RSPFPAR - display profile parameters • ADBC_TEST_CONNECTION - test whether the DB connection works • RSDU_EXEC_SQL - if the SQL editor is not available through DBACOCKPIT • RSBDCOS0 - run an OS command • RSTRC000 - lock an SAP work process for yourself and increase the work process trace level |
| SLG1 | Analyze the application log; useful to get the logs and timestamps for errors on the application side |
| SM04 | Display all the current ABAP user sessions |
| SM12 | Display and Delete Locks on ABAP level |
| SM21 | Online System Log Analysis |
| SM37 | Overview of BW scheduled jobs. For a failed job, you can view the job log, and from the job details you can find which work process was used to run the job. |
| SM50 | Work Process Overview |
| SM51 | List of SAP systems; useful to switch to different application servers |
| SM59 | RFC Destinations (Display/Maintain) |
| SM66 | Global work process monitor; lists all the active work processes on the system |
| SNOTE | Check whether an SAP Note is implemented on the system |
| ST05 | Enable SQL trace, which helps you determine which query causes the long runtime |
| ST06 | Operating System Monitor |
| ST11 | Check work process log |
| ST03 | Work load monitor |
| ST04 | DBACOCKPIT > DB overview |
| ST12 | Single transaction analysis; only available if the SAP Service Tools are installed (component ST-A/PI, see SAP Note 69455). A combination of the standard ABAP trace (SE30) and SQL trace (ST05); see SAP Note 755977 for step-by-step instructions |
| ST22 | ABAP Runtime Error |
| SU01 | User maintenance; display/modify BW users |
## 3.10 Troubleshooting Multi-Dimensional Services Queries | ## 3.10 Troubleshooting Multi-Dimensional Services Queries
This section describes how to execute, capture and analyze InA queries for SAP Analytics Cloud (SAC) HANA Multi-Dimensional Service.
## Introduction
SAP Analytics Cloud's query language is called InA (Information Access). InA queries are executed using HANA Multi-Dimensional Service and the result set is returned to SAC front-end for rendering.
InA queries are expressed in JSON, which is not designed for human consumption but is at least a human-readable text format. To analyze an InA query for troubleshooting purposes you can record the query execution using the browser's development tools and save the query as a file. You can then execute the query in isolation using the EXECUTE_MDS procedure call and analyze the execution plan using visualization tools.
A similar approach can be used for other applications such as Analysis Office and Lumira Designer, but in these cases the third-party tool Fiddler is required.
This section describes how to execute, capture and analyze InA queries in this way and includes an overview of using Fiddler for Analysis Office:
- How to capture InA queries
- How to execute InA queries and visualize the execution plan
- How to execute multiple InA queries in parallel
- Debugging Analysis Office InA queries
## More Information
For a general introduction to MDS and frequently-asked questions which may also help with troubleshooting, refer to the SAP Note 2670064 FAQ: SAP HANA Multi-Dimensional Services (MDS) . For details of diagnosis data required by SAP Support refer to SAP Note 2691501 SAP HANA MDS: Information needed by Development Support .
Other SAP Notes related to MDS include:
- 2550833 - HANA built-in procedure: EXECUTE_MDS
- 2773755 - Guideline for Controlling Server Load with InA/MDS Analytic Requests
## How to Capture InA Queries
You can capture InA queries issued by SAC using the browser's development tools; the following steps and examples are illustrated using Google Chrome.
1. Start the Chrome debugging tool by one of the following methods:
- From the Customize and Control Google Chrome menu, choose More tools > Developer tools (shortcut: Ctrl+Shift+I).
- From the menu bar, select View > Developer > Developer tools (shortcut: F12).
2. From the panel and console which is opened select the Network tab.
3. Navigate to the problematic page in SAC and, when the debug screen has loaded, filter for GetResponse.
To save the network capture:
1. Look for the Record button on the toolbar of the Network tab. It should be red. If it is gray, click it once to start recording.
2. Check the Preserve log checkbox on the toolbar.
3. Click the Clear button to clear out any existing logs from the Network tab.
4. Now try to reproduce the issue that you want to analyze while the network requests are being recorded.
5. Once you have reproduced the issue, right click on a grid line with gray background on the grid of network requests and select Save as HAR with Content , and save the file to your computer. This HAR file can be reopened to analyze and replay the InA queries within.
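Since a HAR file is plain JSON, the captured InA request bodies can also be extracted programmatically. The sketch below is an illustration, not an SAP tool; it follows the standard HAR 1.2 field layout, and the exact URL and payload location may differ depending on your SAC version:

```python
import json

# Illustrative sketch: pull the request bodies of GetResponse calls
# (the InA query payloads) out of a HAR capture saved by Chrome.
def extract_ina_requests(har_text):
    har = json.loads(har_text)
    bodies = []
    for entry in har["log"]["entries"]:
        request = entry["request"]
        if "GetResponse" in request["url"] and "postData" in request:
            bodies.append(request["postData"].get("text", ""))
    return bodies

# Minimal synthetic HAR for demonstration (hypothetical host):
har_text = json.dumps({"log": {"entries": [
    {"request": {"url": "https://host/sap/bc/ina/service/v2/GetResponse",
                 "postData": {"text": '{"Analytics": {}}'}}},
    {"request": {"url": "https://host/other"}},
]}})
print(extract_ina_requests(har_text))
```

Each extracted body can then be replayed individually using EXECUTE_MDS as described in the next section.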
## How to Execute and Visualize InA Queries
InA queries can be individually executed as described here. To execute an InA query the SAC user must have the role INA_ROLE. If the customer is using a technical user to execute InA queries, then the role must also be assigned to the user who wants to troubleshoot the issue.
1. Open the SQL Console in Web IDE (or any other console such as SAP HANA studio).
2. Call built-in stored procedure SYS.EXECUTE_MDS with the InA query that was previously captured as the REQUEST parameter. Both REQUEST and RESPONSE are in JSON format (data type NCLOB):
```
CALL SYS.EXECUTE_MDS('Analytics', '', '', '', '', REQUEST=>'{"Analytics"…', RESPONSE=>?);
```
It is possible to render the RESPONSE into a tabular format, but this is not needed for performance analysis.
3. You can now visualize the execution plan of an InA query by invoking the corresponding menu actions.
The result of the visualization depends on the query, but it resembles a SQL query plan visualization.
Each execution step of the plan is identified in this graph. If Execute Model is expanded, it eventually leads to the column store operations that are used to fetch the data from the source tables. This method can be used to evaluate the performance of the model with respect to the InA queries issued by SAC.
If most of the time is spent in Process Cube and/or Prepare Query, the bottleneck may be in the MDS engine (see related topics in the SAP HANA Performance Guide for Developers). If most of the execution time is spent in Execute Model, the bottleneck is most probably in the way the model is designed.
The use of workload classes to apply workload settings for your application may be one way of improving performance. If, for example, a single execution of a query in isolation does not take a lot of resources, but there is a degradation in performance in a production environment where many queries are executed simultaneously, a workload class and mapping could be used to manage the allocation of resources. This could, for example, apply a higher priority to the query or apply resource limitations on other queries.
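As a sketch of this approach (the class name, mapping name, application name, and limit values below are illustrative assumptions, not values prescribed by this guide), a workload class and mapping could be created as follows:

```
-- Illustrative example: limit memory and threads per statement for
-- requests arriving from a given application, via a workload class.
CREATE WORKLOAD CLASS "MDS_LIMIT"
  SET 'PRIORITY' = '3',
      'STATEMENT MEMORY LIMIT' = '50',   -- GB per statement
      'STATEMENT THREAD LIMIT' = '20';

-- Map sessions to the class by session property, here the application name.
CREATE WORKLOAD MAPPING "MDS_MAPPING" WORKLOAD CLASS "MDS_LIMIT"
  SET 'APPLICATION NAME' = 'sap_sac';
```

Check the CREATE WORKLOAD CLASS and CREATE WORKLOAD MAPPING statements in the SAP HANA SQL Reference Guide for the properties supported by your revision.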
## How to Execute Multiple InA Queries in Parallel
SAC stories contain visualizations, and each visualization requires at least one InA query. The InA queries in a story are executed in parallel, which causes more load on the customer's HANA machine and may lead to performance degradation.
Customers can test their HANA machine and the model's performance by opening multiple connections and executing multiple InA queries at once using the EXECUTE_MDS procedure explained above.
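The parallel pattern can be sketched in Python. The worker below is a stub: in a real test it would open its own database connection (for example with the hdbcli driver, which is an assumption here, not something this guide prescribes) and call SYS.EXECUTE_MDS with one captured InA query per thread.

```python
from concurrent.futures import ThreadPoolExecutor

def execute_ina_query(request_json):
    # Stub standing in for a real database call. Real code might look like
    # (hypothetical, connection details are assumptions):
    #   conn = hdbcli.dbapi.connect(address=host, port=port, user=u, password=p)
    #   cur = conn.cursor()
    #   cur.callproc('SYS.EXECUTE_MDS',
    #                ('Analytics', '', '', '', '', request_json, None))
    return f"response for {request_json}"

def run_in_parallel(requests, max_workers=8):
    # Each captured InA query runs on its own thread (and, in real code,
    # its own connection), mimicking the concurrent load of an SAC story.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(execute_ina_query, requests))

if __name__ == "__main__":
    print(run_in_parallel(["query1", "query2", "query3"]))
```

Because `pool.map` preserves input order, responses come back in the same order as the submitted queries, which makes it easy to correlate timings per visualization.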
## Analysis Office InA Queries
A similar approach to that described above for SAC can be used to collect InA queries for Analysis Office and Lumira Designer using Fiddler, a freely available third-party troubleshooting tool that can trace web traffic activity. Once Fiddler is installed, you can open the application where the issue occurs and reproduce the problem. The query and response are then visible and, if necessary, you can save these and send them to SAP Support for analysis. Refer to the following SAP Notes for more details:
- How to download, install and run Fiddler is described in SAP Note 1766704 How to use Fiddler to collect HTTP and HTTPS traces .
- Details of the procedure to extract the query can be found in SAP Note 2839059 How To Extract Ina Json Query and Response from Fiddler Traces .
## Related Information
SAP Note 2691501
SAP Note 2550833
SAP Note 1766704
SAP Note 2839059
SAP Note 2773755
Query Execution Engine Overview
Using "Execute in SQL Engine" in Calculation Views |
## 3.11 License Issues | ## 3.11 License Issues
This section covers license-related issues.
## Related Information
System Locked Due to Missing, Expired, or Invalid License [page 154]
License Problem Identification and Analysis [page 154]
Resolution of License Issues [page 155]
## 3.11.1 System Locked Due to Missing, Expired, or Invalid License | ## 3.11.1 System Locked Due to Missing, Expired, or Invalid License
New installations of SAP HANA are equipped with a temporary license that expires after 90 days. To keep the system functional after this period, you have to install a permanent license.
Improper licensing may lead to a lockdown of your SAP HANA system. In this case, the only allowed action is to install a valid license.
The system goes into lockdown in the following situations:
- The permanent license key has expired and either:
- You did not renew the subsequently installed temporary license key within 28 days, or
- You did renew the subsequently installed temporary license key but the hardware key has changed
- The installed license key is an enforced license key and the current memory consumption exceeds the licensed amount plus the tolerance.
- You deleted all license keys installed in your database.
For more information, see Managing SAP HANA Licenses in the SAP HANA Administration Guide .
## Related Information
Managing SAP HANA Licenses |
## 3.11.2 License Problem Identification and Analysis | ## 3.11.2 License Problem Identification and Analysis
The first signs of license-related problems are Alert 31 or Alert 44 being issued.
You can check your current license using the SAP HANA cockpit: Choose the Manage system licenses link on the Overview page.
Alternatively, you can retrieve the same information using SQL:
```
SELECT * FROM M_LICENSE;
```
## Note
To be able to query license information, you must have the system privilege LICENSE ADMIN.
The M_LICENSE system view includes the following information:
- License data:
- SID
- Hardware key
- Installation number
- System number
- Product limit (licensed amount of memory)
- Validity start date
- Expiration date - See SAP Note 1899480 - How to handle HANA Alert 31: 'License expiry'
- Last successful check date
- License status (permanent, valid, enforced, local) See SAP Note 1899511 - How to handle HANA Alert 44: 'Licensed Memory Usage'
Note that in case of system lockdown, only the SID and hardware key are displayed; information on previously installed licenses is still available.
SAP HANA licenses can be installed for the system database (global) or for a single tenant database (local). A global license covers the system database and all tenants, but a license installed in a tenant governs only that tenant. If a license is installed incorrectly at the local level, you can remove the tenant-specific license key to revert license validity to the global license key installed in the system database.
## Related Information
M_LICENSE System View
SAP Note 1899480
SAP Note 1899511
## 3.11.3 Resolution of License Issues | ## 3.11.3 Resolution of License Issues
If your license becomes invalid, you need to install a new license.
You can install a new license either in the SAP HANA cockpit or using SQL.
## Note
To install a license key, you need the LICENSE ADMIN system privilege.
You install a license key with the following SQL statement:
SET SYSTEM LICENSE '<license file content goes here, line breaks matter>';
## Note
Line breaks are essential for interpretation of the license key text, so they must not be removed. If you use the command line tool SAP HANA HDBSQL to install the license, make sure to enable multi-line statement support (command line option -m, or \mu ON from within SAP HANA HDBSQL).
The command will fail if the license key has a different installation number or system number than the current ones in the database instance. If you have successfully installed a license but your system is still locked down, check the following:
- The current system time is within the validity period of the license.
- Your installed license key is correct; in particular, the M_LICENSE view displays only one row with a valid license for the product SAP HANA.
- The SAP Notes in the Related Information section.
For more detailed information about how to install a license key, see the SAP HANA Administration Guide .
## Related Information
SAP Note 1704499
SAP Note 1634687
SAP Note 1699111
Managing SAP HANA Licenses
Run Long Commands in Multiple-Line Mode |
## 3.12 Security-Related Issues | ## 3.12 Security-Related Issues
This section looks at issues related to authorization and authentication.
## Related Information
Troubleshooting Authorization Problems [page 156]
Troubleshooting Problems with User Name/Password Authentication [page 162]
Troubleshooting Problems with User Authentication and SSO [page 164] |
## 3.12.1 Troubleshooting Authorization Problems | ## 3.12.1 Troubleshooting Authorization Problems
SAP HANA implements its authorization concept based on the entities user, privilege, and role.
## General Analysis
The system view EFFECTIVE_PRIVILEGES is useful for checking the privileges of a specific user. It includes information about all privileges granted to a specific user (both directly and indirectly through roles), as well as how the privileges were obtained (GRANTOR and GRANTOR_TYPE column).
Output of Effective Privileges
Examples of querying this view can be found in the topics which follow:
- Troubleshoot the Display of Unrestricted or Incorrect Results for a View Secured with Analytic Privileges
- Troubleshoot the Error "Insufficient privilege: Not authorized" Although User Has Analytic Privileges
- Troubleshoot the Error "Invalidated View" During SELECT Statement Execution
Full details of this view are given in the SAP HANA SQL Reference Guide .
For more information about other system views related to authorization, see System Views for Verifying Users' Authorization in the Security Administration and User Management section of the SAP HANA Administration Guide .
For more information about the authorization concept in SAP HANA, see the SAP HANA Security Guide .
## Insufficient Privilege Errors
To avoid the need to search through the indexserver trace files to analyze insufficient privilege errors, a procedure is available which you can use to quickly find out details of missing privileges: GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS().
The insufficient privilege error message includes a GUID value, for example:
```
insufficient privilege: Detailed info for this error can be found with guid '3DFFF7D0CA291F4CA69B327067947BEE'
```
Use this value as a parameter for the stored procedure:
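Using the GUID from the example message above, the call might look like this (the second parameter is the output; check the exact signature of GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS in your revision's documentation):

```
CALL SYS.GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS ('3DFFF7D0CA291F4CA69B327067947BEE', ?);
```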
The procedure returns details of the missing privilege with the user name and the session user name. Object details may also be returned if available, and if the missing privilege is contained in one or more roles then the roles are displayed.
The procedure does not return information about analytical privileges.
For more information about this procedure, see 'Resolve Insufficient Privilege Errors' in the SAP HANA Administration Guide (Security Administration and User Management) and the corresponding topic in the SAP HANA Cockpit Guide , 'Display Information about an Insufficient Privilege Error'.
## Authorization Dependency Viewer
SAP HANA Studio and SAP HANA cockpit include a graphical tool, the Authorization Dependency Viewer, to analyze authorization issues. This may be helpful to troubleshoot authorization errors and invalid object errors for stored procedures and calculation views with complex dependency structures. See the links under 'Related Information'.
## Related Information
System Views for Verifying Users' Authorization
EFFECTIVE_PRIVILEGES System View
SAP HANA Security Guide
Display Information about an "Insufficient Privilege" Error
Resolve Insufficient Privilege Errors
Resolve Errors Using the Authorization Dependency Viewer (SAP HANA Studio)
Resolve Object Authorization Errors (SAP HANA cockpit)
## 3.12.1.1 Troubleshoot the Display of Unrestricted or Incorrect Results for a View Secured with Analytic Privileges | ## 3.12.1.1 Troubleshoot the Display of Unrestricted or Incorrect Results for a View Secured with Analytic Privileges
If a user has unrestricted access to a view or sees results that he should not, even though he has been granted an analytic privilege, you need to determine which privileges have been granted to the user and whether or not they are correct.
## Prerequisites
To troubleshoot this issue, you require the following system privileges:
- CATALOG READ
- TRACE ADMIN
## Procedure
- Check which analytic privileges have been granted to the user using the system view EFFECTIVE_PRIVILEGES.
Execute the following SQL statement:
```
SELECT * FROM EFFECTIVE_PRIVILEGES WHERE USER_NAME = 'myuser' AND OBJECT_TYPE = 'ANALYTICALPRIVILEGE';
```
In particular, verify that the user does not have the analytic privilege _SYS_BI_CP_ALL. This analytic privilege potentially allows a user to access all the data in all activated views, regardless of any other analytic privileges that apply. Usually, the user will have this analytic privilege through a role, for example, MODELING.
## Caution
The MODELING role is very privileged and should not be granted to users, particularly in production systems. The MODELING role should only be used as a template.
- Identify wrong filters specified in the analytic privileges granted to the user. Information about filter conditions generated from the relevant analytic privileges can be traced in the indexserver trace file.
- In the Trace Configuration monitor of the SAP HANA database explorer, set the database trace level for the component analyticprivilegehandler of the indexserver service to DEBUG.
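Equivalently to the Trace Configuration UI, the trace level can be set with SQL, sketched here (remember to reset it after troubleshooting):

```
-- Enable DEBUG tracing for the analytic privilege handler:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('trace', 'analyticprivilegehandler') = 'debug' WITH RECONFIGURE;

-- Reset to the default afterwards:
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  UNSET ('trace', 'analyticprivilegehandler') WITH RECONFIGURE;
```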
## Related Information
EFFECTIVE_PRIVILEGES System View
System Views for Verifying Users' Authorization
## 3.12.1.2 Troubleshoot the Error "Insufficient privilege: Not authorized" Although User Has Analytic Privileges | ## 3.12.1.2 Troubleshoot the Error "Insufficient privilege: Not authorized" Although User Has Analytic Privileges
Even if a user has the correct analytic privileges for a view, he still may receive the error Insufficient privilege: Not authorized if there is an issue with privileges at another level.
## Prerequisites
To troubleshoot this issue, you require the following system privileges:
- CATALOG READ
- TRACE ADMIN
## Procedure
- Verify that the _SYS_REPO user has all required privileges (for example, SELECT) with GRANT OPTION on the base tables of the view.
You can do this by selecting from the EFFECTIVE_PRIVILEGES system view:
```
SELECT * FROM EFFECTIVE_PRIVILEGES WHERE USER_NAME = '_SYS_REPO';
```
- Verify that the analytic privileges required for any underlying views have been granted to the user.
- If the view is a top-level view (calculation view) with underlying views, the granted analytic privilege grants access only to this top-level view. Analytic privileges are required for all underlying views. Note that an analytic privilege has to contain at least one view attribute, with or without a filter condition, in order to grant access to the view.
You can verify a user's privileges by selecting from the EFFECTIVE_PRIVILEGES system view:
```
SELECT * FROM EFFECTIVE_PRIVILEGES WHERE USER_NAME = '<user>' AND OBJECT_TYPE = 'ANALYTICALPRIVILEGE';
```
- If the analytic privilege uses a database procedure to define dynamic value filters at runtime, check for errors in the execution of the underlying procedure.
To find out the actual error during procedure execution for analytical privileges, check the indexserver_alert_<host>.trc trace file (accessible in the SAP HANA database explorer via the View trace and diagnostic files link in the SAP HANA cockpit).
## 3.12.1.3 Troubleshoot the Error "Invalidated View" During SELECT Statement Execution | ## 3.12.1.3 Troubleshoot the Error "Invalidated View" During SELECT Statement Execution
A user may receive the error Invalidated view when executing a SELECT statement against a view that was activated from the repository. Additionally, the data preview for an activated view may not show any data.
## Prerequisites
To troubleshoot this issue, you require the CATALOG READ system privilege.
## Context
In addition to the privileges checks described here, other possible causes of invalidated views are described in knowledge base articles. The cause may be related, for example, to the view not being active or the view may
have been changed and is now unavailable. The KBAs listed below describe troubleshooting procedures in the context of SAP HANA Studio, but the causes and solutions may still be relevant in any context.
## Procedure
- Verify that the _SYS_REPO user has all required privileges (for example, SELECT, SELECT WITH GRANT OPTION) on all base objects of the view (for example, tables) and schemas used.
You can do this by selecting from the EFFECTIVE_PRIVILEGES system view:
```
SELECT * FROM EFFECTIVE_PRIVILEGES WHERE USER_NAME = '_SYS_REPO';
```
- To grant this privilege you can execute the following statement:
```
GRANT SELECT ON SCHEMA <YOUR_SCHEMA> TO _SYS_REPO WITH GRANT OPTION;
```
This example shows GRANT SELECT ON SCHEMA; you can also grant the privilege on tables or views.
- Verify that the user activating the object has at least SELECT privilege over the schemas used within the scenario.
To grant this privilege you can execute the following statement:
```
GRANT SELECT ON SCHEMA <YOUR_SCHEMA> TO <USER_THAT_WILL_ACTIVATE_ANALYTIC_OBJECT>;
```
- Make sure all the dependencies of the analytic object to be activated are contained in the analytic privilege assigned to the user that will actually activate the main analytic object.
## Results
For more information on the GRANT Statement refer also to the details in the SAP HANA SQL Reference Guide for SAP HANA Platform .
## Related Information
GRANT Statement (Access Control)
## Links to Knowledge Base Articles:
SAP Note 2318731 - Error "SAP DBTech JDBC: [391]: invalidated view: sap.hba.ecc" during data preview
SAP Note 3048185 - Error "SAP DBTech JDBC: [391]: invalidated view:" error occurs when add object privilege
SAP Note 2985592 - Database returned the SQL code 391. Error text: invalidated view
SAP Note 1761917 - Error when executing an Analytic Object
## 3.12.2 Troubleshooting Problems with User Name/Password Authentication | ## 3.12.2 Troubleshooting Problems with User Name/Password Authentication
Common problems with regards to authentication are related to incorrect or expired passwords.
User administrators can change users' passwords on the Edit User monitor in the Cockpit Manager of the SAP HANA cockpit.
For more information about managing users, see Security Administration and User Management and Reset the SYSTEM User Password of a Tenant Database in the SAP HANA Administration Guide .
## Related Information
Security Administration and User Management Reset the SYSTEM User Password in a Tenant Database (SAP HANA Cockpit) |
## 3.12.2.1 Resetting the Password for the SYSTEM User | ## 3.12.2.1 Resetting the Password for the SYSTEM User
If the password of the SYSTEM user is irretrievably lost, it can be reset.
Follow the procedure as described in the SAP HANA Administration Guide .
## Note
If you can log on as SYSTEM and you want to change the password, do not use the emergency reset procedure. Simply change the password directly using the SAP HANA cockpit or the ALTER USER SQL statement: ALTER USER SYSTEM PASSWORD <new_password>.
## Related Information
Resetting the SYSTEM User Password (SAP HANA Cockpit) |
## 3.12.2.2 Troubleshoot the Error "User is locked" | ## 3.12.2.2 Troubleshoot the Error "User is locked"
A user receives the error User is locked after too many failed log on attempts.
## Prerequisites
You have system privilege USER ADMIN.
## Context
An example of this error might look like:
Error "user is locked; try again later: lock time is 1440 minutes; user is locked until 2014-05-28 21:42:24.12214212" (the time is given in UTC).
Most likely, the user logged on too many times with the wrong password. The default maximum number of failed logon attempts is 6. This is defined by the password policy parameter maximum_invalid_connect_attempts .
For more information about this and other password policy parameters, see Password Policy Configuration Options in the SAP HANA Security Guide .
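To check the effective value of this parameter, the password policy can be queried directly; a minimal sketch (assuming the M_PASSWORD_POLICY view with PROPERTY/VALUE columns, as documented for recent revisions):

```
SELECT * FROM M_PASSWORD_POLICY
 WHERE PROPERTY = 'maximum_invalid_connect_attempts';
```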
## Procedure
Reset the invalid connect attempts with the following SQL statement:
```
ALTER USER <user> RESET CONNECT ATTEMPTS;
```
The user can now log on again.
## Related Information
Password Policy Configuration Options |
## 3.12.3 Troubleshooting Problems with User Authentication and SSO | ## 3.12.3 Troubleshooting Problems with User Authentication and SSO
Authentication problems manifest themselves as failed user logon. In many cases, the reason for the failure will not be clear to the user. You need to analyze the database trace to determine the cause of the problem.
For security reasons, no information about error conditions is provided to a user directly after a failed logon attempt, since this could be abused by attackers. In case of authentication problems, the affected user must contact the system administrator, who will then analyze the database trace on the server side.
## Tracing for SSO Issues
Logon information is available in the database trace, but by default, it does not log much. The first step in troubleshooting any SSO logon issue therefore is to increase the trace level of the authentication-related components of the database trace. You can do this in the SAP HANA cockpit / Database Explorer or in SAP HANA Studio on the Database Trace section of the Trace Configuration tab.
For problems with JDBC/ODBC-based logon, increase the trace level of the authentication component for the indexserver to DEBUG.
For problems with HTTP-based logon via SAP HANA XS classic, increase the trace level of the authentication, xssession, and xsauthentication components for the xsengine to DEBUG.
Once you have increased tracing, reproduce the problematic logon. The traces will now contain more descriptive error messages.
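The same trace-level changes can be made with SQL instead of the UI, sketched here for both cases:

```
-- JDBC/ODBC logon problems: indexserver authentication tracing
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('trace', 'authentication') = 'debug' WITH RECONFIGURE;

-- HTTP logon via XS classic: xsengine components
ALTER SYSTEM ALTER CONFIGURATION ('xsengine.ini', 'SYSTEM')
  SET ('trace', 'authentication') = 'debug',
      ('trace', 'xssession') = 'debug',
      ('trace', 'xsauthentication') = 'debug' WITH RECONFIGURE;
```

Use the corresponding UNSET form of ALTER SYSTEM ALTER CONFIGURATION to restore the defaults when you are done.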
## Remember
After completing troubleshooting, reduce the authentication trace level back to the default.
In some cases, especially for Kerberos and SPNEGO, it is necessary to use other sources of tracing, such as:
- JDBC, ODBC or SQLDB trace
- Windows event log
- Debugger of browser
- SAP Web dispatcher trace
- Network packet sniffer, for example, Wireshark
For more information about tracing in SAP HANA see the section on traces in the SAP HANA Administration Guide and SAP Note 2119087.
## Tip
Guided Answers is a support tool for troubleshooting problems using decision trees. A guided answer is available for tracing SSO issues.
## Related Information
Traces
Single Sign-On Integration
Kerberos-Related Authentication Issues [page 165]
SAML Authentication [page 166]
Traces for SSO Troubleshooting (Guided Answer)
SAP Note 2119087
## 3.12.3.1 Kerberos-Related Authentication Issues | ## 3.12.3.1 Kerberos-Related Authentication Issues
## GSS API Errors
Kerberos authentication is implemented in the SAP HANA database using the Generic Security Services Application Program Interface (GSS API). Since GSS API is an internet standard (RFC 4121), all Kerberos-related errors are traced under the authentication trace component in the following generic way:
<SAP HANA DB error text> (<GSS major code>.<GSS minor code> - <GSS major text> <GSS minor text>)
GSS API error texts are sometimes difficult to relate to the concrete problem. The following table contains hints for selected trace messages; refer also to the How-to document attached to SAP Note 1837331 for full details of these and other problems.
| GSS API Error Code | Error Text | Hint | Solution |
|---|---|---|---|
| 851968.2529639142 | Minor error text: Key version number for principal in key table is incorrect | The service key table (keytab) in use on the SAP HANA database host does not match the one created on the authentication server. | Re-export the keytab file from the authentication server and re-import it into the host's Kerberos installation. |
| 851968.39756033 | SAP HANA database error text: Cannot get keytab entry for host: <FQDN> Minor error text: No principal in keytab matches desired name | The location of the Kerberos keytab file might be incorrect. The file is located in directory $HOME/etc (by default, that is /usr/sap/<SID>/home/etc/krb5_hdb.keytab). | Check environment variable KRB5_KTNAME. |
| 851968.2529639136 | SAP HANA database error text: Cannot get keytab entry for host: <FQDN> Minor error text: Configuration file does not specify default realm | The location of the Kerberos configuration file might be incorrect. The file is located in directory $HOME/etc (by default, that is /usr/sap/<SID>/home/etc/krb5_hdb.conf). | Check environment variable KRB5_CONFIG. |
## Configuration
There are many potential problems setting up a Kerberos infrastructure that are not related to the SAP HANA system in particular, but relevant for any Kerberos-based authentication. For further information, refer to the documentation provided with MIT Kerberos or Microsoft Server/Active Directory.
## Useful SAP Notes
| SAP Note | Description |
|---|---|
| 1837331 | HOW-TO HANA DB SSO Kerberos/Active Directory |
| 2354473 | SSO troubleshooting for HANA and Analysis Office (SPNEGO) |
| 1813724 | HANA SSO/Kerberos: create keytab and validate conf |
| 2354556 | Common errors when executing hdbkrbconf.py |
## Related Information
Single Sign-On Using Kerberos |
## 3.12.3.2 SAML Authentication | ## 3.12.3.2 SAML Authentication
## User cannot connect with SAML assertion
If a user cannot connect to SAP HANA with a SAML assertion, a likely cause is that the issuer and subject distinguished names (DNs) in the SAML assertion do not match those configured in the identity provider. Investigate which issuer and subject DNs were used in the SAML assertion. You will find them in the trace file indexserver_alert_<hostname>.trc. Compare these with those configured in the service provider.
## Useful SAP Notes
| SAP Note | Description |
|------------|-----------------------------------------------------|
| 1766704 | How to use Fiddler to collect HTTP and HTTPS traces |
| 2284620 | HOW-TO HANA DB SSO SAML and BI Platform 4.1 / AO 2.2 |
## Tip
Guided Answers is a support tool for troubleshooting problems using decision trees. A guided answer is available for SAML authentication with SAP HANA.
## Related Information
Single Sign-On Using SAML 2.0
SAML Authentication for Single Sign-On (Guided Answer)
## 3.13 Transactional Problems | ## 3.13 Transactional Problems
This section covers troubleshooting of transaction problems. Symptoms seen from an end user perspective may include an application running sluggishly, or not responding or even seeming to hang. These problems may be caused by issues with uncommitted transactions, long-lived cursors blocking garbage collection, a high number of active versions or blocked transactions.
## Maximum Transaction Limit
The default system limit for the maximum number of transactions is 64000. If this limit is exceeded, an error message may be returned when submitting queries: "exceed maximum number of transactions". This may be caused by application programs that do not correctly manage connections/transactions, or by a bug within the database engine. This scenario and possible workarounds are described in detail in SAP Note 2368981 SAP HANA DB: Queries fail with error "exceed maximum number of transaction".
## Related Information
## SAP Note 2368981 |
## 3.13.1 Blocked Transactions | ## 3.13.1 Blocked Transactions
Blocked transactions are write transactions that are unable to be further processed because they need to acquire transactional locks (record or table locks) which are currently held by another write transaction.
This section focuses on blocking caused by transaction locks though transactions can also be blocked waiting for physical resources like network or disk. A basic general analysis step is to determine if few transactions are blocked or many and this section deals with these two aspects separately.
Administration tools for analyzing the cause of blocked transactions are available in SAP HANA Cockpit and SAP HANA Studio. The analysis steps you take depend firstly on whether the blocking is current or whether it happened in the past. Monitoring views (named M_*) provide information about the current state of the database, and the corresponding set of HOST_* views includes data from persisted history information, for example:
M_BLOCKED_TRANSACTIONS (current data)
HOST_BLOCKED_TRANSACTIONS (historical data)
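These views can be queried directly; a minimal sketch (the historical view lives in the _SYS_STATISTICS schema, and the exact column set varies by revision, so SELECT * is used here):

```
-- Current lock-wait situations:
SELECT * FROM M_BLOCKED_TRANSACTIONS;

-- Historical lock-wait situations, most recent first:
SELECT * FROM _SYS_STATISTICS.HOST_BLOCKED_TRANSACTIONS
 ORDER BY SERVER_TIMESTAMP DESC;
```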
SAP Note 1858357 (HANA DB: Analysis of Wait Situations) describes two locking scenarios covering both current and historical lock situations.
The SQL Statement Collection for SAP HANA available with SAP Note 1969700 includes scripts which can be executed to provide a basic analysis. The following scripts can be customized to search current or historical data:
- HANA_Locks_Transactional_LockWaits*
- HANA_Locks_Transactional_LockWaits_Hierarchy* - shows the relationship between dependent transactions
The output of these scripts includes essential information about the type of lock, its duration, transaction ID and information about the lock holder. See also the script HANA_Locks_Transactional_LockWaits_PerObject.
For more information on lock analysis refer to the SAP Note 1999998 - FAQ: SAP HANA Lock Analysis
## Lock Timeout
Transactional lock waits automatically terminate after a predetermined time period - by default this is 30 minutes (1800000 ms). The timeout period is defined by the parameter lock_wait_timeout in the [transaction] section of the indexserver.ini file. If a blocked transaction times out, it is rolled back with the error: SQL error 131: transaction rolled back by lock wait timeout. The value of this parameter is monitored, and alert 16 is generated if the value is outside the range 100,000 ms to 7,200,000 ms.
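The current value can be read and changed with SQL, sketched here (the value shown is the 30-minute default in milliseconds):

```
-- Read the current setting:
SELECT * FROM M_INIFILE_CONTENTS
 WHERE FILE_NAME = 'indexserver.ini'
   AND SECTION = 'transaction'
   AND KEY = 'lock_wait_timeout';

-- Change it (value in milliseconds; keep it within the monitored range):
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('transaction', 'lock_wait_timeout') = '1800000' WITH RECONFIGURE;
```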
## Related Information
Identify and Assess Blocked Transaction Issues [page 171]
Single or Few Transactions are Blocked [page 173]
Many Transactions are Blocked [page 174]
Troubleshooting Blocked Transaction Issues that Occurred in the Past [page 175]
SAP Note 1969700
SAP Note 1999998
SAP Note 1858357
## 3.13.1.1 Using Scripts to Analyze Blocked Transactions | ## 3.13.1.1 Using Scripts to Analyze Blocked Transactions
You can use scripts to analyze the issues underlying blocked transactions. Modify the time range analyzed by the script, then use it to identify the lock owner, get information on the lock wait hierarchy, or analyze thread activity on the database at the time of the block.
## Identify the Lock Owner: Transactional Lock Waits Hierarchy Script
The HANA_Locks_Transactional_LockWaits_Hierarchy_* script can help you to gather information on current and historic lock wait situations, including the lock owner and the lock wait hierarchy. The asterisk (*) in the script title is a placeholder for the SAP HANA release and revision.
## Modify a Script
Where relevant, the SQL support scripts contain sections that can be customized to enter your own parameter values; these are marked with the comment '/* Modification section */'. For example:
```
( SELECT /* Modification section */ '%' HOST, '%' PORT, '%' OBJECT_TYPE, ... )
```
If you want to analyze a specific time range, you can modify the time parameters using the codes described in the comment block of the script, in the section INPUT_PARAMETERS. You can use 'C-H12' BEGIN_TIME, or you can enter a specific date and time.
To do this, in place of 'C-H12' BEGIN_TIME and 'C' END_TIME, enter the date and time you want to analyze, in the format 'YYYY/MM/DD HH24:MI:SS'.
Example: To check for transaction lock waits that happened between 19/08/2022 at 2pm and 20/08/2022 at 2pm, the modification part of the script would look like this:
```
( SELECT        /* Modification section */
    '2022/08/19 14:00:00' BEGIN_TIME,
    '2022/08/20 14:00:00' END_TIME,
    ...
)
```
You can then execute the script and check for transaction lock waits in your system for the relevant date and time.
The output parameters are explained in the comment block of the script. Key columns include:
| Output Columns | Explanation |
|------------------|----------------------------------------------------------------------------------|
| WAIT_S | Seconds since the lock wait started |
| LT | Lock type ('R'=RECORD, 'T'=TABLE, 'M'=METADATA) |
| LM | Lock mode ('E'=EXCLUSIVE, 'IE'=INTENTIONAL EXCLUSIVE, 'S'=SHARED) |
| A | Lock holder ('X' = lock holder on SAP HANA side, ' ' = lock holder outside SAP HANA) |
| OBJECT_NAME | Object name related to the lock wait |
| STATEMENT_HASH | Statement hash related to the lock wait |
| CONN_ID | Connection ID (Indent indicates hierarchy) |
As a starting point for your analysis, refer to the value in column 'A' to identify the lock owner. The steps you take to troubleshoot the block depend on whether the issue lies within the SAP HANA database or is caused by factors external to the database.
## If the issue is outside of the SAP HANA Database
If the value for Column 'A' in the script output is blank (' '), the lock owner is outside the SAP HANA Database. Consider checking the application and the network.
- Check the application: The application associated with the lock is named in the column 'APPLICATION SOURCE'. You can search for known application lock issues in KBA 2000002, FAQ: SAP HANA SQL Optimization, in particular Question 13, "Are there standard recommendations for specific SQL statements available?". Search the section using the application name, the OBJECT_NAME (this is usually a table name), and the STATEMENT_HASH related to the lock.
- If you don't find any issues on the application side, the next step is to check the network between the application client and the SAP HANA database. See KBA 2081065, Troubleshooting SAP HANA Network, especially Section 1, "Analyze Round Trip Time (RTT) Between Server and Client". You can also use SAP's NIPING program to analyze the network connection between any two machines running SAP software: see SAP Note 500235, Network Diagnosis with NIPING.
## If the lock owner is active on the SAP HANA Database side
If the value in Column 'A' in the script output is 'X', the lock owner is active on the SAP HANA Database side and further analysis is required on the database side. You can check the HANA thread samples to see what the HANA threads are doing at the time of the block.
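For a quick first look at the current lock wait situation, the monitoring view SYS.M_BLOCKED_TRANSACTIONS can also be queried directly; the following is a minimal sketch (the column selection is illustrative):

```
-- Current blocked transactions, oldest wait first
SELECT BLOCKED_TRANSACTION_ID,
       LOCK_OWNER_TRANSACTION_ID,
       WAITING_SCHEMA_NAME,
       WAITING_TABLE_NAME,
       WAITING_RECORD_ID,
       LOCK_TYPE,
       LOCK_MODE,
       BLOCKED_TIME
FROM   M_BLOCKED_TRANSACTIONS
ORDER  BY BLOCKED_TIME;
```

Unlike the support scripts, this view only shows the situation at the moment of the query; for historic data use _SYS_STATISTICS.HOST_BLOCKED_TRANSACTIONS.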
## Analyze Thread Activity: Threads Filter and Aggregation Script
You can use the script HANA_Threads_ThreadSamples_FilterAndAggregation_* to see what the HANA threads are doing on the database during a block.
First you need to modify the time range of your analysis, as described above.
Next you can define the level of detail you want in the script output. Use the parameter AGGREGATE_BY in the 'Modification section' of the script. For example, the 'AGGREGATE_BY' value 'THREAD_METHOD, THREAD_DETAIL' specifies that the script will be aggregated by thread method and thread detail. If you want to see more detailed information in the script output you can remove this aggregation detail by replacing 'THREAD_METHOD, THREAD_DETAIL' with 'NONE'.
In the script output, you get important information from the values in the columns 'THREAD_STATE' and 'LOCK_NAME' for the time that you have the lock wait issue.
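If you prefer to query the thread samples directly, an aggregation in the spirit of the script can be sketched against the SYS.M_SERVICE_THREAD_SAMPLES monitoring view; the time range below is illustrative and should be replaced with the period of your lock wait issue:

```
-- Which thread states and lock names dominated during the blockage?
SELECT THREAD_STATE,
       LOCK_WAIT_NAME,
       COUNT(*) AS SAMPLES
FROM   M_SERVICE_THREAD_SAMPLES
WHERE  TIMESTAMP BETWEEN '2022-08-19 14:00:00' AND '2022-08-20 14:00:00'
GROUP  BY THREAD_STATE, LOCK_WAIT_NAME
ORDER  BY SAMPLES DESC;
```

A high sample count for a waiting state combined with a specific lock name points to the resource the threads were contending for.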
## Further Information
- The collection of support scripts is provided as an attachment to SAP Note 1969700, SQL Statement Collection for SAP HANA. See the "Solution" section for further information on how support scripts can be modified and for known issues.
- See also KBA 1999998, FAQ: SAP HANA Lock Analysis, in particular Question 5, "How can transactional lock waits be analyzed and optimized?"
- For more information on HANA thread samples, and to assess whether thread samples are involved in the lock situation, follow the steps outlined in KBA 2114710, FAQ: SAP HANA Threads and Thread Samples.
- If you notice a high number of transactional (record and object) lock waits at the same time as many semaphore and mutex waits, it's likely that the semaphore and mutex waits are causing the transactional locks. See KBA 1999998, FAQ: SAP HANA Lock Analysis, in particular Question 6, "How can internal lock waits be analyzed?"
- For known application lock contention issues, see KBA 2000002, FAQ: SAP HANA SQL Optimization, especially Question 13, "Are there standard recommendations for specific SQL statements available?"
## 3.13.1.2 Identify and Assess Blocked Transaction Issues
The first signs of blocked transactions may be poor application response times or the raising of alerts 49 or 59.
The initial indicators of blocked transactions are often given by:
- Users reporting bad application responsiveness
- Alert 49 - Long-running blocking situations
- Alert 59 - Percentage of transactions blocked
To confirm that poor database performance is caused by blocked transactions, you should check the following performance monitoring tools in SAP HANA cockpit:
## Performance Monitor
On the Performance Monitor you can enable the KPI Blocked Transactions to plot the occurrence of any blocked transactions in the current time frame.
## Workload Analysis Monitor
Also on the Performance Monitor, look at the Background Jobs tab, which shows currently running SAP HANA background processes such as the Delta Merge. Since the Delta Merge needs to lock tables to proceed, it is a common cause of blocked transactions. Another background job displayed by this monitor is the savepoint write job, which requires a global database lock in its critical phase. The Savepoint Performance topic gives more details about analyzing and troubleshooting the locking phases of this job.
The following views may help to analyze delta merge:
- SYS.M_DELTA_MERGE_STATISTICS
- _SYS_STATISTICS.HOST_DELTA_MERGE_STATISTICS - for historical information.
The following views may help to analyze and check if a savepoint was written during the time of the blockage:
- SYS.M_SAVEPOINTS
- _SYS_STATISTICS.HOST_SAVEPOINTS - for historical information.
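For example, to check whether a savepoint and in particular its critical phase overlapped the blockage, a query along the following lines can be used against the SYS.M_SAVEPOINTS monitoring view; the time range is illustrative:

```
-- Savepoints written during the suspected blockage window
SELECT START_TIME,
       DURATION,
       CRITICAL_PHASE_DURATION
FROM   M_SAVEPOINTS
WHERE  START_TIME BETWEEN '2022-08-19 14:00:00' AND '2022-08-20 14:00:00'
ORDER  BY START_TIME;
```

A long critical phase during the blockage window points toward the savepoint rather than an application transaction as the cause.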
## Sessions Monitor
The Sessions tile and monitoring tool lists all currently open SQL sessions (that is, user connections). In the context of blocked transaction troubleshooting, the columns Blocked By Connection ID and Blocks No. Of Transactions are of special interest. The former tells you whether the session is blocked by another session and identifies the ID of the blocking one. The latter tells you whether a session blocks other sessions, and how many transactions are affected.
## Blocked Transactions Monitor
The Open blocked transactions link on the Monitoring tile is the next drill-down step. The display lists only those transactions that are currently blocked. They are ordered by their blocking/blocked relation: transactions that are blockers are highlighted, and the blocked transactions are displayed directly beneath them. In the following illustration you can see transaction 46 blocking multiple other transactions:
## Blocked Transactions Monitor
See also Blocked Transactions in the SAP HANA cockpit documentation.
## Threads Monitor
The Threads monitor (available from the Threads tile) gives the most fine-grained view of the current situation by listing all threads currently active in the system, although it is usually not necessary to drill into that level of detail. See also the Threads topic in the SAP HANA cockpit documentation.
## Related Information
Savepoint Performance [page 118]

Links to the SAP HANA Cockpit Guide:

- The Performance Monitor
- Blocked Transactions
- Session Monitoring
- Threads
SAP Note 2079396
SAP Note 2081856
## 3.13.1.3 Single or Few Transactions are Blocked
If you have identified only a single or a few blocking transactions, there is likely an issue on the application side.
## Check the Application
A common pattern is a flaw in the application code that fails to commit a write transaction. Such a transaction blocks any other transaction that needs to access the same database object. To resolve the situation, you have to close the blocking transaction.
There are several possibilities to achieve this:
- Contact the Application User
You can identify the user of the application via the Sessions tile. This information is visible in the 'Database User' column or, in case the application has its own user management (for example, SAP BW), in the 'Application User' column. Contact the user and ask them to close the application.
- Contact the Application Developer
As a follow-up, the author of the application should be contacted to discuss how such situations can be avoided in the future by changing the application code.
## Cancel the Session
If you are not able to contact the user to cancel the session, you can also cancel the session from the list of sessions accessed via the Sessions tile. The session cancellation may take some time to complete, but after canceling the session the current transaction will be rolled back.
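Cancellation is also possible with SQL, using the connection ID shown on the Sessions tile. A sketch, where '403721' is a hypothetical connection ID:

```
-- Request cancellation of the session's current statement/transaction
ALTER SYSTEM CANCEL SESSION '403721';

-- If the cancellation does not complete, forcibly close the connection
ALTER SYSTEM DISCONNECT SESSION '403721';
```

In both cases the open transaction of the session is rolled back.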
## Kill the Client Application
In case the session cancellation takes too long or does not complete at all, you can kill the client process that opened the session. This will terminate the blocking transaction as well. Note that killing the client application is safe from a database consistency standpoint; the current transaction will be rolled back gracefully.
To do this you have to have access to the client machine. The information needed for this task can be retrieved from the Sessions tile. See the columns Client IP and Client Process ID to determine the host and the process to be killed.
## 3.13.1.4 Many Transactions are Blocked
In the case that a large number of transactions are blocked, the troubleshooting should take a slightly different approach.
First you need to determine whether a single transaction or a few transactions are blocking a large number of other transactions. To do this, open the Blocked Transactions monitor and check the number of blocking transactions. If you determine that there are only a few blocking transactions, use the techniques described in Single or Few Transactions are Blocked to resolve the situation.
If there are many transactions in a blocking state, you need to find out whether a specific access pattern is causing the situation. When multiple transactions try to write to the same database objects, they block each other. To check whether this is the case, open the Blocked Transactions monitor and analyze the 'Waiting Schema Name', 'Waiting Object Name' and 'Waiting Record Id' columns. If you find a sizeable number of blocking transactions that block many other transactions, investigate whether any of the following is possible:
- Change the client applications to avoid the access pattern
- If a background job is running that issues many write transactions (for example, a data load job) then reschedule the job to a period with a low user load
- Partition tables that are accessed frequently to avoid clashes. See the SAP HANA Administration Guide for more details on partitioning.
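As a sketch of the last option, a frequently contended table could be split into hash partitions so that concurrent writes to different records are spread across partitions. The table and column names below are hypothetical:

```
-- Hypothetical example: spread writes on SALES_ORDERS across 4 hash partitions
ALTER TABLE SALES_ORDERS PARTITION BY HASH (ORDER_ID) PARTITIONS 4;
```

The partitioning column should appear in the write access pattern; see the SAP HANA Administration Guide for the available partitioning schemes and their trade-offs.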
If you cannot identify specific transactions or specific database objects that lead to transactions being blocked, you have to assume a problem with the database itself or its configuration. One example is an issue with long savepoint durations - see Savepoint Performance for troubleshooting such issues.
## Related Information
Single or Few Transactions are Blocked [page 173]
Savepoint Performance [page 118]
Table Partitioning
## 3.13.1.5 Troubleshooting Blocked Transaction Issues that Occurred in the Past
Finding the root cause of blocked transaction situations that occurred in the past and have since been resolved is more difficult than troubleshooting issues that are currently happening. Tools such as the Performance Monitor, system views, and the SQL Plan Cache are available to help you.
Using SAP HANA cockpit you can firstly use the Performance Monitor to set the time frame covering the period where the issue happened. Using the information returned, investigate what happened at this specific point in time.
You should check the monitoring and Statistics Server view _SYS_STATISTICS.HOST_BLOCKED_TRANSACTIONS. Analyze the columns 'WAITING_SCHEMA_NAME', 'WAITING_TABLE_NAME' and 'WAITING_RECORD_ID' to identify the database objects that lead to blocked transactions.
In addition, the SAP HANA cockpit SQL Plan Cache monitor (available on a tab page from the SQL Statements tile) may be able to provide information about the statements that were involved in the situation. You may need to change the selection of visible columns to see the relevant information, and then use the sort feature to search for and identify the relevant entries. For entries that have 'TOTAL_LOCK_WAIT_COUNT' > 0, compare the column 'MAX_CURSOR_DURATION' against 'AVG_CURSOR_DURATION'. An entry where there is a significant difference (where the transaction took much longer than average) may indicate that it was involved in the situation.
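The same check can be performed with a query on the M_SQL_PLAN_CACHE monitoring view, using the columns named above; a minimal sketch:

```
-- Cached statements that waited on locks, largest max-vs-average gap first
SELECT STATEMENT_HASH,
       TOTAL_LOCK_WAIT_COUNT,
       AVG_CURSOR_DURATION,
       MAX_CURSOR_DURATION
FROM   M_SQL_PLAN_CACHE
WHERE  TOTAL_LOCK_WAIT_COUNT > 0
ORDER  BY MAX_CURSOR_DURATION - AVG_CURSOR_DURATION DESC;
```

Statements at the top of this list are the most likely candidates for involvement in the past blocking situation.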
The SQL Statement Collection for SAP HANA available with SAP Note 1969700 includes scripts which can be customized to search historical data (see the topic Blocked Transactions).
## Related Information
The Performance Monitor
Monitor and Analyze Statements with SQL Plan Cache
Blocked Transactions [page 167]
## 3.13.2 Multiversion Concurrency Control (MVCC) Issues
In this section you will learn how to troubleshoot issues arising from MVCC.
Multiversion Concurrency Control (MVCC) is a concept that ensures transactional data consistency by isolating transactions that are accessing the same data at the same time.
To do so, multiple versions of a record are kept in parallel. Issues with MVCC are usually caused by a high number of active versions. Old versions of data records are no longer needed if they are no longer part of a snapshot that can be seen by any running transaction. These versions are obsolete and need to be removed from time to time to free up memory.
This process is called Garbage Collection (GC) or Version Consolidation. It can happen that a transaction blocks the garbage collection. The consequence is a high number of active versions, which can lead to system slowdown or out-of-memory issues.
## 3.13.2.1 Row Store Tables
Garbage collection is triggered after a transaction is committed and also periodically (every hour by default).
A transaction that is currently committing can be identified in the Threads tile (see System Performance Analysis ). The Thread Type will be 'SqlExecutor' and the Thread Method 'commit' .
The periodic garbage collection can be identified by Thread Type 'MVCCGarbageCollector' .
Note that the periodic garbage collection interval can be configured with the parameter gc_interval in the [transaction] section of the indexserver.ini file.
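The currently effective value of this parameter can be checked via M_INIFILE_CONTENTS; if the parameter has never been changed, the query may return no row and the default applies:

```
-- Show the configured garbage collection interval, per configuration layer
SELECT LAYER_NAME, VALUE
FROM   M_INIFILE_CONTENTS
WHERE  FILE_NAME = 'indexserver.ini'
  AND  SECTION   = 'transaction'
  AND  KEY       = 'gc_interval';
```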
## Related Information
System Performance Analysis [page 237]
## 3.13.2.2 MVCC Problem Identification
You can check for a number of indicators of MVCC problems.
Problems with a high number of active versions can be identified by:
- users reporting an increase of response times
- the indexserver trace containing "There are too many un-collected versions. The transaction blocks the garbage collection of HANA database."
- checking Active Versions in the Performance Monitor
Transactions blocking garbage collection can originate from:
- Long-running or unclosed cursors
- Long-running transactions with isolation mode 'serializable' or 'repeatable read'
- Hanging threads
In order to validate that there is a problem with MVCC, check the number of active versions in the row store MVCC manager monitoring view (M_MVCC_TABLES). Note that in a multihost environment, you have to check the master host.
```
SELECT * FROM M_MVCC_TABLES
WHERE  HOST = 'MyHost' AND PORT = '30003'
  AND  NAME IN ('NUM_VERSIONS', 'MAX_VERSIONS_PER_RECORD', 'TABLE_ID_OF_MAX_NUM_VERSIONS');
```
## MVCC Information on a Healthy System
| | PORT | NAME | VALUE |
|----|-------|------------------------------|--------|
| 1 | 30003 | NUM_VERSIONS | 7535 |
| 2 | 30003 | MAX_VERSIONS_PER_RECORD | 2428 |
| 3 | 30003 | TABLE_ID_OF_MAX_NUM_VERSIONS | 131188 |
If the number of active versions (NUM_VERSIONS) is greater than eight million, it is considered a problem and an overall slowdown of the system can be experienced. Similarly, if the maximum number of versions per record (MAX_VERSIONS_PER_RECORD) exceeds 8,000,000, this should be treated as a problem and a slowdown of accesses to a specific table is expected. Use TABLE_ID_OF_MAX_NUM_VERSIONS and join it against the SYS.TABLES system view to determine the table that has the problem.
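The join against SYS.TABLES can be sketched as follows. Note that M_MVCC_TABLES exposes VALUE as a string, so the comparison against the numeric TABLE_OID relies on implicit type conversion:

```
-- Resolve the table ID with the most record versions to a schema/table name
SELECT T.SCHEMA_NAME, T.TABLE_NAME
FROM   M_MVCC_TABLES M
JOIN   TABLES T
  ON   T.TABLE_OID = M.VALUE
WHERE  M.NAME = 'TABLE_ID_OF_MAX_NUM_VERSIONS';
```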
## Related Information
Performance Trace [page 289]