explain how you can change a column datatype in hive?
|
You can change a column's data type in Hive by using the command: ALTER TABLE table_name CHANGE column_name column_name new_datatype;
|
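As a sketch, suppose a hypothetical table `employees` has a `salary` column declared as INT that should become BIGINT:

```sql
-- Change only the datatype: the column name is repeated unchanged
ALTER TABLE employees CHANGE salary salary BIGINT;

-- CHANGE can also rename the column at the same time
ALTER TABLE employees CHANGE salary monthly_salary BIGINT;
```

Note that this changes only the table metadata; the underlying data files are not rewritten.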
mention what is the difference between order by and sort by in hive?
|
SORT BY sorts the data within each reducer, and you can use any number of reducers for a SORT BY operation. ORDER BY sorts all of the data together, which has to pass through one reducer. Thus, ORDER BY in Hive uses a single reducer.
|
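The difference can be sketched with a hypothetical `employees` table:

```sql
-- ORDER BY: one reducer produces a single, globally sorted result
SELECT name, salary FROM employees ORDER BY salary DESC;

-- SORT BY: each reducer's output is sorted, but there is no global order
SELECT name, salary FROM employees SORT BY salary DESC;
```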
explain when to use explode in hive?
|
Follow Me: https://www.youtube.com/c/SauravAgarwal Hadoop developers sometimes take an array as input and convert it into separate table rows. To convert complex data types into the desired table format, we can use the explode function.
|
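A minimal sketch, assuming a hypothetical table `orders` with an array column `items ARRAY<STRING>`:

```sql
-- explode() turns each element of the array into its own row;
-- LATERAL VIEW joins those rows back to the original columns
SELECT id, item
FROM orders
LATERAL VIEW explode(items) exploded AS item;
```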
mention how can you stop a partition from being queried?
| null |
can we rename a hive table?
|
Yes, using the command below: ALTER TABLE table_name RENAME TO new_name;
|
what is the default location where hive stores table data?
|
hdfs://namenode_server/user/hive/warehouse
|
is there a date datatype in hive?
| null |
can we run unix shell commands from hive? give an example.
|
Yes, by placing the ! mark just before the command. For example, !pwd at the hive prompt will print the current working directory.
|
can hive queries be executed from a script file? show how.
|
Using the source command. Example: hive> source /path/to/file/file_with_query.hql
|
what is the importance of hive rcfile?
| null |
what are the default record and field delimiter used for hive text files?
|
The default record delimiter is \n, and the default field delimiters are \001, \002 and \003.
|
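These defaults correspond to the delimiters you would otherwise spell out explicitly in a CREATE TABLE statement; the sketch below (table and column names are made up) shows where each one applies:

```sql
CREATE TABLE demo (
  id    INT,
  tags  ARRAY<STRING>,
  attrs MAP<STRING, STRING>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\001'            -- ^A, default field delimiter
  COLLECTION ITEMS TERMINATED BY '\002'  -- ^B, separates array/map entries
  MAP KEYS TERMINATED BY '\003'          -- ^C, separates map keys from values
  LINES TERMINATED BY '\n';              -- default record delimiter
```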
what do you mean by schema on read?
|
The schema is validated against the data when the data is read, and is not enforced when the data is written.
|
how do you list all databases whose name starts with p?
| null |
what does the use command in hive do?
|
With the USE command you fix the database against which all subsequent Hive queries will run.
|
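A short sketch with a hypothetical database `sales_db`:

```sql
USE sales_db;
-- unqualified table names now resolve against sales_db
SELECT COUNT(*) FROM orders;   -- i.e. sales_db.orders
```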
how can you delete the db property in hive?
|
There is no way to delete a DBPROPERTY once it is set; it can only be overwritten with a new value.
|
what is the significance of the line?
| null |
how do you check if a particular partition exists?
| null |
which java class handles the input and output records encoding into files in hive tables?
| null |
what is the significance of if exists clause while dropping a table?
| null |
when you point a partition of a hive table to a new directory what happens to the data?
| null |
does archiving hive tables save any space in hdfs?
| null |
a hdfs file and not a local file?
| null |
are new and files which already exist?
| null |
what does the following query do?
| null |
what is a table generating function on hive?
|
A table-generating function is a function which takes a single column as an argument and expands it into multiple columns or rows. Example: explode().
|
how can hive avoid map reduce?
| null |
what is the difference between the like and rlike operators in hive?
| null |
is it possible to create cartesian join between 2 tables using hive?
|
Yes. Hive supports a Cartesian product between two tables via CROSS JOIN (or a JOIN with no ON condition), although it is very expensive because every row of one table is combined with every row of the other.
|
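For reference, Hive provides a CROSS JOIN syntax (equivalent to a join with no ON condition) that produces a Cartesian product, though it is very expensive; a sketch with hypothetical tables t1 and t2:

```sql
SELECT a.id, b.id
FROM t1 a
CROSS JOIN t2 b;   -- every row of t1 paired with every row of t2
```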
what should be the order of table size in a join query?
| null |
what is the usefulness of the distribute by clause in hive?
| null |
how will you convert the string '51.2' to a float value in the price column?
|
Select cast(price as FLOAT)
|
what will be the result when you do cast('abc' as int)?
|
Hive will return NULL
|
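A couple of examples illustrating the behaviour:

```sql
SELECT CAST('abc' AS INT);   -- NULL: 'abc' is not a valid number
SELECT CAST('42'  AS INT);   -- numeric strings convert normally
```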
can we load data into a view?
| null |
what types of costs are associated in creating index on hive tables?
| null |
what does the streamtable(tablename) hint do?
| null |
can a partition be archived what are the advantages and disadvantages?
|
Yes, a partition can be archived. The advantage is that it decreases the number of files stored in the NameNode, and an archived file can still be queried through Hive. The disadvantage is that queries become less efficient, and archiving does not offer any space savings.
|
what is a generic udf in hive?
| null |
the following statement failed to execute what can be the cause?
| null |
how do you specify the table creator name when creating a table in hive?
| null |
which method has to be overridden when we use custom udf in hive?
| null |
how can multiple clients access the hive metastore at the same time?
|
The default metastore configuration allows only one Hive session to be opened at a time for accessing the metastore. Therefore, if multiple clients try to access the metastore at the same time, they will get an error. One has to use a standalone metastore, i.e. a local or remote metastore configuration in Apache Hive, to allow access to multiple clients concurrently. Following are the steps to configure a MySQL database as the local metastore in Apache Hive. One should make the following changes in hive-site.xml: 1. javax.jdo.option.ConnectionURL should be set to jdbc:mysql://host/dbname?createDatabaseIfNotExist=true. 2. javax.jdo.option.ConnectionDriverName should be set to com.mysql.jdbc.Driver. One should also set the username and password: 3. javax.jdo.option.ConnectionUserName is set to the desired username. 4. javax.jdo.option.ConnectionPassword is set to the desired password. The JDBC driver JAR file for MySQL must be on the Hive classpath, i.e. the jar file should be copied into the Hive lib directory. Now, after restarting the Hive shell, it will automatically connect to the MySQL database running as a standalone metastore.
|
is it possible to change the default location of a managed table?
| null |
when should we use sort by instead of order by?
|
We should use SORT BY instead of ORDER BY when we have to sort huge datasets, because the SORT BY clause sorts the data using multiple reducers whereas ORDER BY sorts all of the data together using a single reducer. Therefore, using ORDER BY against a large number of inputs will take a lot of time to execute.
|
what is dynamic partitioning and when is it used?
| null |
order to do so?
| null |
how can you add a new partition for the month december in the above partitioned table?
| null |
what is the default maximum number of dynamic partitions that can be created by a mapper/reducer?
| null |
how can you change it?
| null |
how will you remove the error "dynamic partition strict mode requires at least one static partition column"?
| null |
how will you consume this csv file into the hive warehouse using a built-in serde?
| null |
files without degrading the performance of the system?
| null |
can we change settings within a hive session? if yes, how?
| null |
is it possible to add 100 nodes when we already have 100 nodes in hive? how?
| null |
explain the concatenation function in hive with an example?
| null |
explain trim and reverse function in hive with examples?
| null |
explain process to access subdirectories recursively in hive queries?
|
We can access subdirectories recursively in Hive by using the commands below: hive> SET mapred.input.dir.recursive=true; hive> SET hive.mapred.supports.subdirectories=true; Hive tables can then be pointed to the higher-level directory, which is suitable for a directory structure like /data/country/state/city/
|
how to skip header rows from a table in hive?
| null |
what is the maximum size of the string datatype supported by hive? mention the binary formats hive supports.
| null |
what is the precedence order of hive configuration?
| null |
if you run a select query in hive why does it not run map reduce?
| null |
how can hive improve performance with orc format tables?
| null |
explain about the different types of join in hive?
| null |
how can you configure remote metastore mode in hive?
| null |
what happens on executing the below query?
| null |
after executing the below query, if you modify the column, how will the changes be tracked?
| null |
how to load data from a txt file to table stored as orc in hive?
| null |
hive?
| null |
how to improve hive query performance with hadoop?
| null |
how do i query from a horizontal output to vertical output?
| null |
how to extract negative (-ve) numbers from a string in hive?
|
We can use regexp_extract instead: regexp_extract('abcd-9090', '.*(-[0-9]+)', 1)
|
what is the maximum character limit for a hive table name?
| null |
how to convert a date string to yyyy-MM-dd format in hive?
|
Use from_unixtime in conjunction with unix_timestamp: select from_unixtime(unix_timestamp(`date`, 'MMM dd, yyyy'), 'yyyy-MM-dd')
|
how to drop the hive database whether it contains some tables?
|
Use the CASCADE keyword when dropping the database. Example: hive> drop database sampleDB cascade;
|
i dropped and recreated a hive external table but no data is shown, so what should i do?
| null |
difference between rdd, dataframe and dataset?
| null |
when to use rdds?
|
Consider these scenarios or common use cases for using RDDs: 1. you want low-level transformations, actions and control over your dataset, or your data is unstructured, such as media streams or streams of text; 2. you want to manipulate your data with functional programming constructs rather than domain-specific expressions; 3. you do not care about imposing a schema, such as a columnar format, while processing or accessing data attributes by name or column; and 4. you can forgo some of the optimization and performance benefits available with DataFrames and Datasets for structured and semi-structured data.
|
what are the various modes in which spark runs on yarn (client vs cluster mode)?
|
YARN client mode: the driver runs on the machine from which the client is connected. YARN cluster mode: the driver runs inside the cluster.
|
what is dag directed acyclic graph?
| null |
what is an rdd and how does it work internally?
| null |
what do we mean by partitions or slices?
| null |
what is the difference between map and flatmap?
| null |
how can you minimize data transfers when working with spark?
|
The various ways in which data transfers can be minimized when working with Apache Spark are: 1. Broadcast variables: broadcast variables enhance the efficiency of joins between small and large RDDs. 2. Accumulators: accumulators help update the values of variables in parallel while executing. 3. The most common way is to avoid *ByKey operations, repartition, or any other operations which trigger shuffles.
|
why is there a need for broadcast variables when working with apache spark?
|
These are read-only variables, present in an in-memory cache on every machine. When working with Spark, usage of broadcast variables eliminates the necessity to ship copies of a variable for every task, so data can be processed faster. Broadcast variables help in storing a lookup table inside the memory, which enhances retrieval efficiency compared to an RDD lookup().
|
how can you trigger automatic cleanups in spark to handle accumulated metadata?
|
You can trigger the clean-ups by setting the parameter "spark.cleaner.ttl", or by dividing long-running jobs into different batches and writing the intermediary results to disk.
|
why is blinkdb used?
|
BlinkDB is a query engine for executing interactive SQL queries on huge volumes of data, and it renders query results marked with meaningful error bars. BlinkDB helps users balance query accuracy with response time.
|
what is sliding window operation?
|
Sliding Window controls transmission of data packets between various computer networks. Spark Streaming library provides windowed computations where the transformations on RDDs are applied over a sliding window of data. Whenever the window slides, the RDDs that fall within the particular window are combined and operated upon to produce new RDDs of the windowed DStream.
|
what is catalyst optimiser?
| null |
what do you understand by pair rdd?
| null |
what is the difference between persist and cache?
|
persist() allows the user to specify the storage level, whereas cache() uses the default storage level (MEMORY_ONLY).
|
what are the various levels of persistence in apache spark?
| null |
does apache spark provide checkpointing?
| null |
what do you understand by lazy evaluation?
|
Spark is intelligent in the manner in which it operates on data. When you tell Spark to operate on a given dataset, it heeds the instructions and makes a note of them, but it does nothing unless asked for the final result. When a transformation like map() is called on an RDD, the operation is not performed immediately. Transformations in Spark are not evaluated until you perform an action. This helps optimize the overall data processing workflow.
|
what do you understand by schema rdd?
|
An RDD that consists of row objects (wrappers around basic string or integer arrays) with schema information about the type of data in each column. A DataFrame is an example of a SchemaRDD.
|
what are the disadvantages of using apache spark over hadoop map reduce?
|
Apache Spark does not scale well for compute-intensive jobs and consumes a large number of system resources. Apache Spark's in-memory capability at times becomes a major roadblock for cost-efficient processing of big data. Also, Spark does not have its own file management system and hence needs to be integrated with other cloud-based data platforms or Apache Hadoop.
|
what is lineage graph in spark?
| null |
what do you understand by executor memory in a spark application?
|
Every Spark application has the same fixed heap size and fixed number of cores for a Spark executor. The heap size is what is referred to as the Spark executor memory, which is controlled with the spark.executor.memory property or the --executor-memory flag. Every Spark application will have one executor on each worker node. The executor memory is basically a measure of how much of the worker node's memory the application will utilize.
|
what is an accumulator?
|
Accumulators are Spark's offline debuggers. Similar to Hadoop counters, accumulators provide the number of events in a program. Accumulators are variables that can be added to through associative operations. Spark natively supports accumulators of numeric value types and standard mutable collections. aggregateByKey() and combineByKey() use accumulators.
|
what is spark context?
| null |