what causes hadoop to throw a classnotfoundexception during sqoop integration?
The most common cause is that a supporting library (such as a connector JAR) was not placed in Sqoop's library path, so we need to add it to that specific path.
how to view all the databases and tables in an rdbms from sqoop?
Using the commands below (both also need a --connect string and credentials): sqoop list-databases and sqoop list-tables.
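A minimal sketch of both commands, assuming a local MySQL server and a database named testdb; the connection string and credentials are placeholders:
# list all databases visible to the given user
sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username dbuser --password dbpass
# list all tables in the testdb database
sqoop list-tables --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass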
how to view the column details of a table in an rdbms from sqoop?
null
data has already been imported to a hive table, so how do we resolve it?
You can specify the --hive-overwrite option to indicate that the existing table in Hive must be replaced. After your data is imported into HDFS (or this step is omitted), Sqoop generates a Hive script containing a CREATE TABLE operation and a LOAD DATA INPATH statement to move the data files into Hive's warehouse directory.
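A hedged sketch of such an import, assuming a MySQL source table EMPLOYEES and placeholder credentials; --hive-import loads the data into Hive and --hive-overwrite replaces whatever the Hive table already holds:
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --hive-import --hive-overwrite --hive-table employees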
what is the default file format to import data using apache sqoop?
null
how do i resolve a communications link failure when connecting to mysql?
null
how do i resolve an illegalargumentexception when connecting to oracle?
This could be caused by a non-owner trying to connect to the table, so prefix the table name with the schema, for example SchemaName.OracleTableName.
what's causing this exception in thread main: java.lang.IncompatibleClassChangeError when running non-cdh hadoop with sqoop?
null
how do you import all tables from a database except a few of them, without having to import the tables one by one?
This can be accomplished using the import-all-tables command in Sqoop and by specifying the --exclude-tables option with it as follows - sqoop import-all-tables --connect <jdbc-uri> --username <user> --password <pass> --exclude-tables Table298,Table123,Table299
does apache sqoop have a default database?
Yes, MySQL is the default database.
how can i import large objects (blob and clob objects) in apache sqoop?
null
manner?
null
what is the difference between sqoop and the distcp command in hadoop?
Both DistCp (distributed copy in Hadoop) and Sqoop transfer data in parallel, but the difference is that the DistCp command can transfer any kind of data from one Hadoop cluster to another, whereas Sqoop transfers data between an RDBMS and other components of the Hadoop ecosystem like HBase, Hive, and HDFS.
what is the sqoop metastore?
The Sqoop metastore is a shared metadata repository that lets remote users define and execute saved jobs (created using sqoop job) defined in the metastore. The sqoop-site.xml should be configured to connect to the metastore.
what is the significance of using the --split-by clause for running parallel import tasks in apache sqoop?
null
the sqoop import runs only 4 parallel map reduce tasks even though more were requested, why?
The Hadoop MapReduce cluster is configured to run a maximum of 4 parallel MapReduce tasks, so the sqoop import can be configured with a number of parallel tasks less than or equal to 4, but not more than 4.
you successfully imported a table using apache sqoop to hbase, but when you query the table it is found that the number of rows is less than expected, what could be the likely reason?
null
how do you export a row that has a null value in hdfs into an rdbms table in which the columns are defined as not null?
Using the --input-null-string parameter, a default value can be specified so that the row gets inserted with the default value for the column that has a NULL value in HDFS.
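A sketch of how the parameter is typically supplied on an export, with placeholder connection details; the value passed is the string that represents null in the HDFS files (--input-null-non-string covers the non-string columns):
sqoop export --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --export-dir /data/employees \
  --input-null-string '\\N' --input-null-non-string '\\N'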
how will you synchronize the data in hdfs that is imported by sqoop?
null
what are the relational databases supported in sqoop?
null
what are the destination types allowed in the sqoop import command?
Currently Sqoop supports importing data into the following services: HDFS, Hive, HBase, HCatalog, and Accumulo.
is sqoop similar to distcp in hadoop?
null
what are the majorly used commands in sqoop?
null
if you need to copy tables from the database into hdfs with the maximum possible speed, what can you do?
null
what might be the root cause and fix for this error scenario?
null
what is the importance of eval tool?
null
what is the process to perform an incremental data load in sqoop?
null
what is the significance of using the compression codec parameter?
To get the output file of a sqoop import in formats other than .gz, such as .bz2, we use the --compression-codec parameter.
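A sketch with placeholder connection details; --compress turns compression on and --compression-codec selects BZip2 instead of the default gzip:
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --target-dir /data/employees_bz2 \
  --compress --compression-codec org.apache.hadoop.io.compress.BZip2Codec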
can freeform sql queries be used with the sqoop import command, and if yes then how can they be used?
Sqoop allows us to use free form SQL queries with the import command. The import command should be used with the -e or --query option to execute free form SQL queries. When using the -e or --query option with the import command, the --target-dir value must be specified.
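A sketch of a free form query import, with placeholder connection details and table names; note the mandatory $CONDITIONS token in the WHERE clause and the required --target-dir:
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --query 'SELECT e.id, e.name, d.dept_name FROM EMPLOYEES e JOIN DEPTS d ON e.dept_id = d.id WHERE $CONDITIONS' \
  --split-by e.id --target-dir /data/employee_dept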
what is the purpose of sqoop merge?
The merge tool combines two datasets where entries in one dataset should overwrite entries of an older dataset preserving only the newest version of the records between both the data sets.
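A hedged sketch of the merge tool, assuming the two directories were produced by earlier imports of the same table and that the record class JAR generated by sqoop codegen is available; every path and name here is a placeholder:
sqoop merge --new-data /data/employees_new --onto /data/employees_old \
  --target-dir /data/employees_merged \
  --jar-file /tmp/sqoop-gen/EMPLOYEES.jar --class-name EMPLOYEES \
  --merge-key id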
how do you clear the data in a staging table before loading it by sqoop?
By specifying the --clear-staging-table option we can clear the staging table before it is loaded. This can be done again and again till we get proper data in the staging table.
how will you update the rows that are already exported?
null
what is the role of the jdbc driver in a sqoop setup?
To connect to different relational databases, sqoop needs a connector. Almost every DB vendor makes this connector available as a JDBC driver which is specific to that DB, so Sqoop needs the JDBC driver of each of the databases it needs to interact with.
when to use --target-dir and --warehouse-dir while importing data?
null
how do you keep the updated source data in sync with the data in hdfs imported by sqoop?
Sqoop offers two approaches. Use the --incremental parameter with the append option, where the value of some column is checked and only rows with modified values are imported as new rows. Or use the --incremental parameter with the lastmodified option, where a date column in the source is checked for records which have been updated after the last import.
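Two hedged sketches, one per approach, with placeholder connection details, check columns, and last values:
# append mode: import rows whose id is greater than the last imported value
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --incremental append --check-column id --last-value 1000
# lastmodified mode: import rows updated after the given timestamp
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --incremental lastmodified --check-column last_updated --last-value "2018-08-01 00:00:00"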
is it possible to add a parameter while running a saved job?
Yes, we can add an argument to a saved job at runtime by using the --exec option: sqoop job --exec jobname -- --newparameter
sqoop takes a long time to retrieve the minimum and maximum values of the column mentioned in the --split-by parameter, how can we make it efficient?
We can use the --boundary-query parameter, in which we specify the min and max values for the column based on which the split can happen into multiple mapreduce tasks. This makes it faster, as the query inside the --boundary-query parameter is executed first and the job is ready with the information on how many mapreduce tasks to create before executing the main query.
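A sketch with placeholder details; here the min and max are supplied as constants already known, so Sqoop skips its own SELECT MIN/MAX query on the --split-by column:
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --split-by id \
  --boundary-query "SELECT 1, 100000"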
how will you implement an all or nothing load using sqoop?
Using the --staging-table option, we first load the data into a staging table and then load it to the final target table only if the staging load is successful.
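A hedged sketch of such an export, assuming a staging table EMPLOYEES_STG with the same schema as the target; connection details are placeholders:
sqoop export --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --staging-table EMPLOYEES_STG --clear-staging-table \
  --export-dir /data/employees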
how do you refresh the exported data when rows have been deleted from the source?
Truncate the target table and load it again.
how can we load a value into a column in a relational table which is not null, but the incoming value from hdfs has a null value?
By using the --input-null-string parameter we can specify a default value and that will allow the row to be inserted into the target table.
how can you schedule a sqoop job using oozie?
Oozie has in-built sqoop actions inside which we can mention the sqoop commands to be executed.
sqoop imported a table successfully to hbase, but it is found that the number of rows is fewer than expected, what can be the cause?
Some of the imported records might have null values in all the columns. As HBase does not allow all null values in a row, those rows get dropped.
how can you force sqoop to execute a free form sql query only once and import the rows serially?
By using the -m 1 option a single mapper is used, so the free form query is executed only once and the rows are imported serially.
more parallel tasks were requested but sqoop runs only 4, what can be the reason?
The MapReduce cluster is configured to run 4 parallel tasks, so the sqoop command must have a number of parallel tasks less than or equal to that of the MapReduce cluster.
what happens when a table is imported into an hdfs directory which already exists, using the append parameter?
Using the --append argument, Sqoop will import data to a temporary directory and then rename the files into the normal target directory in a manner that does not conflict with existing filenames in that directory.
how to import only the updated rows from a table into hdfs using sqoop, assuming the source has last update timestamp details for each row?
By using the lastmodified mode. Rows where the check column holds a timestamp more recent than the timestamp specified with --last-value are imported.
give a sqoop command to import all the records from the employee table divided into groups of records by the values in the column department_id.
$ sqoop import --connect jdbc:mysql://DineshDB --table EMPLOYEES --split-by dept_id -m 2
what does the following query do?
null
what is the importance of the $CONDITIONS clause in sqoop?
null
can sqoop run without a hadoop cluster?
To run Sqoop commands, Hadoop is a mandatory prerequisite. You cannot run sqoop commands without the Hadoop libraries.
is it possible to import a file in fixed column length from the database using sqoop import?
null
how to use sqoop validation?
You can use the --validate parameter to validate the counts between what is imported/exported between the RDBMS and HDFS.
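A sketch with placeholder details; with --validate, Sqoop compares the source row count against the number of rows copied and fails the job on a mismatch:
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --target-dir /data/employees --validate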
how to pass sqoop command arguments as a file in sqoop?
To specify an options file, simply create an options file in a convenient location and pass it to the command line via the --options-file argument, e.g. sqoop --options-file /users/homer/work/import.txt --table TEST
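A sketch of what such an options file might contain, each option or value on its own line; the path and contents are only illustrative:
# /users/homer/work/import.txt
import
--connect
jdbc:mysql://localhost:3306/testdb
--username
dbuser
The file is then referenced on the command line as shown above, and any remaining options (such as --table TEST) are appended after it.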
is it possible to import data apart from hdfs and hive?
Sqoop supports additional import targets beyond HDFS and Hive. Sqoop can also import records into a table in HBase and Accumulo.
is it possible to use the sqoop --direct command with hbase?
This function is incompatible with direct import, but Sqoop can do bulk loading as opposed to direct writes. To use bulk loading, enable it using --hbase-bulkload.
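A hedged sketch of an HBase import with bulk loading; the table, column family, and row key names are placeholders:
sqoop import --connect jdbc:mysql://localhost:3306/testdb --username dbuser --password dbpass \
  --table EMPLOYEES --hbase-table employees --column-family info \
  --hbase-row-key id --hbase-bulkload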
can i configure two sqoop commands so that they are dependent on each other, like if the first sqoop job is successful the second gets triggered, and if the first fails the second should not run?
No, using sqoop commands alone it is not possible, but you can use Oozie for this. Create an Oozie workflow and execute the second action only if the first action succeeds.
what is uber mode and where is the setting to enable it in hadoop?
null
what is hive?
null
why do we need hive?
null
what is a metastore in hive?
null
is hive suitable to be used for oltp systems why?
No, Hive does not provide insert and update at row level. So it is not suitable for OLTP system.
can you explain about acid transactions in hive?
null
what are the types of tables in hive?
null
what kind of data warehouse application is suitable for hive?
Hive is not considered a full database. The design rules and restrictions of Hadoop and HDFS limit what Hive can do. Hive is most suitable for data warehouse applications, where the data is relatively static, fast response times are not required, and there are no rapid changes in the data. Hive does not provide the fundamental features required for OLTP (Online Transaction Processing). Hive is suitable for data warehouse applications on large data sets.
explain what is a hive variable what do we use it for?
null
how to change the warehouse dir location for older tables?
null
what are the types of metastore available in hive?
There are three types of metastore available in Hive: Embedded Metastore (Derby), Local Metastore, and Remote Metastore.
is it possible to use the same metastore by multiple users in case of an embedded hive metastore?
null
if you run hive server, what are the available mechanisms for connecting to it from applications?
null
what is a serde in apache hive?
null
which classes are used by the hive to read and write hdfs files?
null
give examples of the serde classes which hive uses to serialize and deserialize data?
Hive currently uses these SerDe classes to serialize and deserialize data: MetadataTypedColumnsetSerDe: this SerDe is used to read/write delimited records like CSV, tab-separated, or Control-A-separated records (quoting is not supported yet). ThriftSerDe: this SerDe is used to read or write Thrift serialized objects; the class file for the Thrift object must be loaded first. DynamicSerDe: this SerDe also reads or writes Thrift serialized objects, but it understands Thrift DDL so the schema of the object can be provided at runtime. It also supports a lot of different protocols, including TBinaryProtocol, TJSONProtocol, and TCTLSeparatedProtocol (which writes data in delimited records).
how do you write your own custom ser de and what is the need for that?
null
what is object inspector functionality?
null
what is the functionality of query processor in apache hive?
This component implements the processing framework for converting SQL to a graph of map or reduce jobs, and the execution time framework to run those jobs in the order of their dependencies, with the help of metastore details.
what is the limitation of derby database for hive meta store?
With the Derby database, you cannot have multiple connections or multiple sessions instantiated at the same time. The Derby database runs in local mode and locks its database files, so multiple users cannot access Hive simultaneously.
what are managed and external tables?
null
what are the complex data types in hive?
null
how does partitioning help in the faster execution of queries?
With the help of partitioning, a sub-directory will be created with the name of the partitioned column, and when you perform a query using the WHERE clause, only the particular sub-directory will be scanned instead of scanning the whole table. This gives you faster execution of queries.
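A hedged HiveQL sketch with made-up table and column names, showing a partitioned table and a query that only scans the matching sub-directory:
CREATE TABLE sales (id INT, amount DOUBLE) PARTITIONED BY (sale_date STRING);
-- only the sub-directory sale_date=2018-08-01 is scanned
SELECT SUM(amount) FROM sales WHERE sale_date = '2018-08-01';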
how to enable dynamic partitioning in hive?
By setting the properties hive.exec.dynamic.partition=true and hive.exec.dynamic.partition.mode=nonstrict before running the insert.
what is bucketing?
null
how does bucketing help in the faster execution of queries?
null
how to enable bucketing in hive?
By default bucketing is disabled in Hive; you can enforce it by setting the property: set hive.enforce.bucketing = true;
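A hedged sketch of a bucketed table with made-up names; the INSERT relies on the enforcement property above and assumes an existing source table users:
CREATE TABLE users_bucketed (id INT, name STRING) CLUSTERED BY (id) INTO 8 BUCKETS;
INSERT OVERWRITE TABLE users_bucketed SELECT id, name FROM users;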
what are the different file formats in hive?
Every file format has its own characteristics, and Hive allows you to easily choose the file format which you want to use. The different file formats supported by Hive are: 1. Text File format 2. Sequence File format 3. Parquet 4. Avro 5. RC file format 6. ORC
how is ser de different from file format in hive?
null
what is the regex serde?
null
how is orc file format optimised for data storage and analysis?
ORC stores collections of rows in one file, and within the collection the row data is stored in a columnar format. With columnar format it is very easy to compress, thus reducing a lot of storage cost. While querying, it also reads only the particular columns instead of the whole row, as the records are stored in columnar format. ORC has indexing on every block based on the statistics min, max, sum, count on columns, so when you query, it will skip blocks based on the indexing.
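A hedged HiveQL sketch with made-up names, creating an ORC table and loading it from an existing text-format table:
CREATE TABLE logs_orc (id INT, msg STRING) STORED AS ORC;
INSERT OVERWRITE TABLE logs_orc SELECT id, msg FROM logs_text;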
how to access hbase tables from hive?
Using the Hive-HBase storage handler, you can access HBase tables from Hive, and once you are connected you can query HBase using SQL queries from Hive. You can also join multiple tables in HBase from Hive and retrieve the result.
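A hedged sketch with made-up names; the column mapping ties the first Hive column to the HBase row key and the second to a column in the info column family:
CREATE EXTERNAL TABLE hbase_users (key STRING, name STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:name")
TBLPROPERTIES ("hbase.table.name" = "users");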
when running a join query i see outofmemoryerrors?
null
communications exception communications link failure?
null
does hive support unicode?
null
are hive sql identifiers eg table names columns etc case sensitive?
No, Hive is case insensitive.
what is the best way to load xml data into hive?
The easiest way is to use the Hive XML SerDe (com.ibm.spss.hive.serde2.xml.XmlSerDe), which will allow you to directly import and work with XML data.
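A hedged sketch of a table using that SerDe, with made-up element names; the xmlinput properties mark the start and end tags of each record and the xpath properties map elements to columns:
CREATE TABLE books (title STRING, author STRING)
ROW FORMAT SERDE 'com.ibm.spss.hive.serde2.xml.XmlSerDe'
WITH SERDEPROPERTIES (
  "column.xpath.title" = "/book/title/text()",
  "column.xpath.author" = "/book/author/text()"
)
STORED AS INPUTFORMAT 'com.ibm.spss.hive.serde2.xml.XmlInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
TBLPROPERTIES ("xmlinput.start" = "<book>", "xmlinput.end" = "</book>");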
when hive is not suitable?
null
mention what are the different modes of hive?
Depending on the size of the data and the nodes in Hadoop, Hive can operate in two modes: local mode and MapReduce mode.
mention what is hs2 (hiveserver2)?
null
mention what hive query processor does?
The Hive query processor converts SQL into a graph of MapReduce jobs, together with the execution time framework, so that the jobs can be executed in the order of their dependencies.
mention what are the steps of hive in query processor?
The components of a Hive query processor include: 1. Logical Plan Generation 2. Physical Plan Generation 3. Execution Engine 4. Operators 5. UDFs and UDAFs 6. Optimizer 7. Parser 8. Semantic Analyzer 9. Type Checking