Questions,Answers what is the central limit theorem and why is it important?,"Suppose that we are interested in estimating the average height among all people. Collecting data for every person in the world is impossible. While we can't obtain a height measurement from everyone in the population, we can still sample some people. The question now becomes: what can we say about the average height of the entire population given a single sample? The Central Limit Theorem addresses this question exactly." what is sampling? how many sampling methods do you know?,"Data sampling is a statistical analysis technique used to select, manipulate and analyze a representative subset of data points to identify patterns and trends in the larger data set being examined." what is the difference between Type I vs. Type II error?, what is linear regression? what do the terms p-value, coefficient, and r-squared value mean? what is the significance of each of these components?, what are the assumptions required for linear regression?,"There are four major assumptions: 1. There is a linear relationship between the dependent variables and the regressors, meaning the model you are creating actually fits the data. 2. The errors or residuals of the data are normally distributed and independent from each other. 3. There is minimal multicollinearity between explanatory variables. 4. Homoscedasticity: the variance around the regression line is the same for all values of the predictor variable." what is a statistical interaction?,"Basically, an interaction is when the effect of one factor (input variable) on the dependent variable (output variable) differs among levels of another factor."
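The Central Limit Theorem answer above can be illustrated with a short simulation. This is a minimal sketch (an illustrative example, not part of the original answer): sample means drawn from a skewed exponential population cluster around the population mean in an approximately normal shape.

```python
import numpy as np

# Minimal Central Limit Theorem sketch: means of samples drawn from a
# skewed (exponential) population become approximately normal, centred
# on the population mean.
rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)  # population mean = 2.0

# Draw 2,000 samples of size 50 and record each sample's mean.
sample_means = np.array(
    [rng.choice(population, size=50).mean() for _ in range(2_000)]
)

print(sample_means.mean())  # close to the population mean of 2.0
print(sample_means.std())   # close to sigma / sqrt(50), roughly 0.28
```

The spread of the sample means shrinks as the sample size grows, which is exactly what lets a single sample say something about the whole population.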
what is selection bias?, what is an example of a dataset with a non-gaussian distribution?,"The Gaussian distribution is part of the Exponential family of distributions, but there are a lot more of them, with the same sort of ease of use in many cases, and if the person doing the machine learning has a solid grounding in statistics, they can be utilized where appropriate." what is the binomial probability formula?,"The binomial distribution consists of the probabilities of each of the possible numbers of successes on N trials for independent events that each have a probability of π (the Greek letter pi) of occurring." Data Science: what is data science? list the differences between supervised and unsupervised learning.,"Data Science is a blend of various tools, algorithms, and machine learning principles with the goal to discover hidden patterns from the raw data." how is this different from what statisticians have been doing for years?, what is bias-variance tradeoff?, what is a confusion matrix?, what is the difference between long and wide format data?, what do you understand by the term normal distribution?,"Data is usually distributed in different ways, with a bias to the left or to the right, or it can all be jumbled up. However, there are chances that data is distributed around a central value without any bias to the left or right, reaching normal distribution in the form of a bell-shaped curve. Figure: Normal distribution in a bell curve. The random variables are distributed in the form of a symmetrical, bell-shaped curve. Properties of Normal Distribution are as follows: 1. Unimodal - one mode 2. Symmetrical - left and right halves are mirror images 3. Bell-shaped - maximum height (mode) at the mean 4. Mean, Mode, and Median are all located in the center 5. Asymptotic" what is correlation and covariance in statistics?, what is the difference between point estimates and confidence interval?,"Point Estimation gives us a particular value as an estimate of a population parameter. Method of Moments and Maximum Likelihood estimator methods are used to derive Point Estimators for population parameters. A confidence interval gives us a range of values which is likely to contain the population parameter. The confidence interval is generally preferred, as it tells us how likely this interval is to contain the population parameter. This likeliness or probability is called Confidence Level or Confidence Coefficient and is represented by 1 - alpha, where alpha is the level of significance." what is the goal of ab testing?, what is p-value?, what is the probability that you see at least one shooting star in the period of an hour?,"Probability of not seeing any shooting star in 15 minutes = 1 - P(seeing one shooting star) = 1 - 0.2 = 0.8. Probability of not seeing any shooting star in the period of one hour = (0.8)^4 = 0.4096. Probability of seeing at least one shooting star in the one hour = 1 - P(not seeing any star) = 1 - 0.4096 = 0.5904." how can you generate a random number between 1 and 7 with only a die?, probability that they have two girls?, is also ahead?,There are two ways of choosing the coin. One is to pick a fair coin and the other is to pick the one with two heads.
Probability of selecting fair coin = 999/1000 = 0.999. Probability of selecting unfair coin = 1/1000 = 0.001. Selecting 10 heads in a row = Selecting fair coin * Getting 10 heads + Selecting an unfair coin. P(A) = 0.999 * (1/2)^10 = 0.999 * (1/1024) = 0.000976. P(B) = 0.001 * 1 = 0.001. P(A / (A + B)) = 0.000976 / (0.000976 + 0.001) = 0.4939. P(B / (A + B)) = 0.001 / 0.001976 = 0.5061. Probability of selecting another head = P(A/(A+B)) * 0.5 + P(B/(A+B)) * 1 = 0.4939 * 0.5 + 0.5061 = 0.7531. what do you understand by statistical power of sensitivity and how do you calculate it?,"Sensitivity is commonly used to validate the accuracy of a classifier (Logistic, SVM, Random Forest etc.). Sensitivity is nothing but Predicted True events / Total events. True events here are the events which were true and the model also predicted them as true. Calculation of sensitivity is pretty straightforward: Sensitivity = (True Positives) / (Positives in Actual Dependent Variable)" why is resampling done?,"Resampling is done in any of these cases: Estimating the accuracy of sample statistics by using subsets of accessible data or drawing randomly with replacement from a set of data points; Substituting labels on data points when performing significance tests; Validating models by using random subsets (bootstrapping, cross-validation)" what are the differences between overfitting and underfitting?, how to combat overfitting and underfitting?,"To combat overfitting and underfitting, you can resample the data to estimate the model accuracy (k-fold cross-validation) and have a validation dataset to evaluate the model." what is regularisation? why is it useful?,"Regularisation is the process of adding a tuning parameter to a model to induce smoothness in order to prevent overfitting. This is most often done by adding a constant multiple to an existing weight vector. This constant is often the L1 (Lasso) or L2 (Ridge) norm. The model predictions should then minimize the loss function calculated on the regularized training set." what is the law of large numbers?, what are confounding variables?, what are the types of biases that can occur during sampling?,"Selection bias, Undercoverage bias, Survivorship bias" what is survivorship bias?, what is selection bias?,"Selection bias occurs when the sample obtained is not representative of the population intended to be analysed." explain how a roc curve works?, what is tf-idf vectorization?, why do we generally use the softmax nonlinearity function as the last operation in a network?, python or R: which one would you prefer for text analytics?,"We will prefer Python because of the following reasons: Python would be the best option because it has the Pandas library that provides easy-to-use data structures and high-performance data analysis tools. R is more suitable for machine learning than just text analysis. Python performs faster for all types of text analytics." how does data cleaning play a vital role in the analysis?, what is cluster sampling?, what is systematic sampling?, what are eigenvectors and eigenvalues?, can you cite some examples where a false positive is more important than a false negative?,"…actually purchased anything but are marked as having made 10,000 worth of purchases." can you cite some examples where a false negative is more important than a false positive?,"Example 1: Assume there is an airport A which has received high-security threats and, based on certain characteristics, they identify whether a particular passenger can be a threat or not. Due to a shortage of staff, they decide to scan only passengers being predicted as risk positives by their predictive model.
What will happen if a true threat customer is being flagged as non-threat by the airport model?" Example 2: what if a jury or judge decides to make a criminal go free?, Example 3: What if you rejected to marry a very good person based on your predictive model and you happen to meet him/her after a few years and realize that you had a false negative?, Q37. Can you cite some examples where both false positives and false negatives are equally important?, can you explain the difference between a validation set and a test set?, what is machine learning?, what is supervised learning?, what is unsupervised learning?, what are the various classification algorithms?,"The diagram lists the most important classification algorithms." what is naive in a Naive Bayes?, what are the support vectors in svm?, what are the different kernels in svm?, what are entropy and information gain in the decision tree algorithm?, what is pruning in a decision tree?,"Pruning is a technique in machine learning and search algorithms that reduces the size of decision trees by removing sections of the tree that provide little power to classify instances. So, when we remove sub-nodes of a decision node, this process is called pruning; it is the opposite of splitting." what is logistic regression? state an example when you have used logistic regression., what is linear regression?, what are the drawbacks of the linear model?, what is the difference between regression and classification ml techniques?, what are recommender systems?, what is collaborative filtering?, how can outlier values be treated?, what are the various steps involved in an analytics project?,"The following are the various steps involved in an analytics project: 1. Understand the business problem. 2. Explore the data and become familiar with it. 3. Prepare the data for modelling by detecting outliers, treating missing values, transforming variables, etc. 4. After data preparation, start running the model, analyze the result and tweak the approach. This is an iterative step until the best possible outcome is achieved. 5. Validate the model using a new data set. 6. Start implementing the model and track the results to analyze the performance of the model over a period of time." during analysis how do you treat missing values?, how will you define the number of clusters in a clustering algorithm?, what is ensemble learning?, describe in brief any type of ensemble learning?, what is a random forest? how does it work?, how do you work towards a random forest?,"The underlying principle of this technique is that several weak learners combine to provide a strong learner. The steps involved are: Build several decision trees on bootstrapped training samples of data. On each tree, each time a split is considered, a random sample of m predictors is chosen as split candidates out of all p predictors. Rule of thumb: at each split, m ≈ √p. Predictions: by majority rule." what cross-validation technique would you use on a time series data set?, what is a box-cox transformation?, how regularly must an algorithm be updated?, gb dataset: how would you go about this problem? have you ever faced this kind of problem in your machine learning / data science experience so far?, what do you mean by deep learning?,"Deep Learning is nothing but a paradigm of machine learning which has shown incredible promise in recent years. This is because of the fact that Deep Learning shows a great analogy with the functioning of the human brain." what is the difference between machine learning and deep learning?,"Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorised into the following three categories: 1. Supervised machine learning, 2. Unsupervised machine learning, 3.
Reinforcement learning. Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Q71. What, in your opinion, is the reason for the popularity of Deep Learning in recent times?, what is reinforcement learning?,"Reinforcement Learning is learning what to do and how to map situations to actions. The end result is to maximise the numerical reward signal. The learner is not told which action to take but instead must discover which action will yield the maximum reward. Reinforcement learning is inspired by the learning of human beings; it is based on a reward/penalty mechanism." what are artificial neural networks?,"Artificial Neural Networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural Networks can adapt to changing input so the network generates the best possible result without needing to redesign the output criteria." describe the structure of artificial neural networks?, how are weights initialized in a network?, what is the cost function?, what are hyperparameters?, what will happen if the learning rate is set inaccurately (too low or too high)?, what is the difference between epoch, batch and iteration in deep learning?, what are the different layers in a cnn?, what is pooling in a cnn and how does it work?, what are recurrent neural networks (rnns)?, how does an lstm network work?, what is a multilayer perceptron (mlp)?, what are exploding gradients?, what are vanishing gradients?, what are the variants of backpropagation?, what are the different deep learning frameworks?, what is the role of the activation function?,"The activation function is used to introduce non-linearity into the neural network, helping it to learn more complex functions. Without it, the neural network would only be able to learn a linear function, which is a linear combination of its input data. An activation function is a function in an artificial neuron that delivers an output based on inputs." Q93. Name a few Machine Learning libraries for various purposes.,"Scientific Computation: NumPy; Tabular Data: Pandas; Data Modelling & Preprocessing: Scikit-Learn; Time-Series Analysis: Statsmodels; Text Processing: Regular Expressions, NLTK; Deep Learning: TensorFlow, PyTorch" what is an autoencoder?,"Autoencoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error. This means that we want the output to be as close to the input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The autoencoder receives unlabelled input which is then encoded to reconstruct the input." what is a boltzmann machine?,"Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The Boltzmann machine is basically used to optimise the weights and the quantity for the given problem. The learning algorithm is very slow in networks with many layers of feature detectors. The Restricted Boltzmann Machines algorithm has a single layer of feature detectors, which makes it faster than the rest.
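The activation-function answer above can be made concrete with a tiny sketch (illustrative only, not from the original text) of two common non-linear activations:

```python
import numpy as np

# Two common activation functions. Both are non-linear, which is what
# lets stacked layers model more than a single linear map of the input.
def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))       # negative inputs are clamped to 0
print(sigmoid(0.0))  # exactly 0.5 at the origin
```

Without a non-linearity like these between layers, any stack of linear layers collapses to one linear function, which is the point the answer makes.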
what is dropout and batch normalization?, descent?, why is tensorflow the most preferred library in deep learning?, what do you mean by tensor in tensorflow?, what is the computational graph?, what are the differences between supervised and unsupervised learning?,"Supervised Learning: uses known and labeled data as input; has a feedback mechanism; the most commonly used supervised learning algorithms are decision trees, logistic regression, and support vector machines. Unsupervised Learning: uses unlabeled data as input; has no feedback mechanism; the most commonly used unsupervised learning algorithms are k-means clustering, hierarchical clustering, and the apriori algorithm." how is logistic regression done?, how do you build a random forest model?, how can you avoid overfitting your model?, what are the feature selection methods used to select the right variables?, values: how will you deal with them?, for the given points, how will you calculate the euclidean distance in python?,"plot1 = [1,3]; plot2 = [2,5]. After import math, the Euclidean distance can be calculated as follows: euclidean_distance = math.sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)" what are dimensionality reduction and its benefits?, how will you calculate eigenvalues and eigenvectors of the following 3x3 matrix?, how should you maintain a deployed model?, how do you find rmse and mse in a linear regression model?,"RMSE and MSE are two of the most common measures of accuracy for a linear regression model. RMSE indicates the Root Mean Square Error. MSE indicates the Mean Square Error." how can you select k for k-means?, what is the significance of p-value?,"p-value ≤ 0.05: This indicates strong evidence against the null hypothesis, so you reject the null hypothesis. p-value > 0.05: This indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis. p-value at cutoff 0.05: This is considered to be marginal, meaning it could go either way." how can a time series data be declared as stationary?, how can you calculate accuracy using a confusion matrix?, result of which algorithm?, what is a generative adversarial network?, performance: what can you do about it?, missing values of both categorical and continuous variables?,"K-means clustering, Linear regression, K-NN (k-nearest neighbor), Decision trees. The K-nearest neighbor algorithm can be used because it can compute the nearest neighbor, and if it doesn't have a value, it just computes the nearest neighbor based on all the other features. When you're dealing with K-means clustering or linear regression, you need to handle missing values in your pre-processing; otherwise, they'll crash. Decision trees also have the same problem, although there is some variance. 126. Below are the eight actual values of the target variable in the train file. What is the entropy of the target variable?,"[0, 0, 0, 1, 1, 1, 1, 1] Choose the correct answer. 1. -(5/8 log(5/8) + 3/8 log(3/8)) 2. 5/8 log(5/8) + 3/8 log(3/8) 3. 3/8 log(5/8) + 5/8 log(3/8) 4. 5/8 log(3/8) - 3/8 log(5/8). The target variable, in this case, is 1. The formula for calculating the entropy, with p positive values out of n total, is: Entropy = -(p/n) log(p/n) - ((n-p)/n) log((n-p)/n). Putting p=5 and n=8, we get Entropy = -(5/8 log(5/8) + 3/8 log(3/8)), i.e. answer 1. 127. We want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. What is the most appropriate algorithm for this case?, study?,"Choose the correct option: 1. K-means clustering 2. Linear regression 3. Association rules 4. Decision trees. As we are looking to group people together specifically by four different similarities, it indicates the value of k. Therefore, K-means clustering (option 1) is the most appropriate algorithm for this study. 129.
You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true?, their purchase decisions: which analysis method should you use?, what are the feature vectors?, what are the steps in making a decision tree?,"1. Take the entire data set as input. 2. Look for a split that maximizes the separation of the classes. A split is any test that divides the data into two sets. 3. Apply the split to the input data (divide step). 4. Re-apply steps one and two to the divided data. 5. Stop when you meet any stopping criteria. 6. This step is called pruning: clean up the tree if you went too far doing splits." what is root cause analysis?, what is logistic regression?, what is collaborative filtering?,"Most recommender systems use this filtering process to find patterns and information by collaborating perspectives, numerous data sources, and several agents." do gradient descent methods always converge to similar points?,"They do not, because in some cases they reach a local minimum or a local optimum point. You would not reach the global optimum point. This is governed by the data and the starting conditions." what is the goal of ab testing?,"This is statistical hypothesis testing for randomized experiments with two variables, A and B. The objective of A/B testing is to detect any changes to a web page to maximize or increase the outcome of a strategy." what are the confounding variables?,"These are extraneous variables in a statistical model that correlate directly or inversely with both the dependent and the independent variable. The estimate fails to account for the confounding factor."
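The entropy computation in question 126 above can be checked numerically. This is a quick sketch (using the usual base-2 convention, an assumption since the original answer does not state the log base):

```python
import math

# Shannon entropy (base 2) of a list of class labels, as used in
# decision-tree splitting; checks the entropy answer for question 126.
def entropy(labels):
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

target = [0, 0, 0, 1, 1, 1, 1, 1]
# -(5/8 * log2(5/8) + 3/8 * log2(3/8)), roughly 0.954 bits
print(entropy(target))
```

The same function could score candidate splits in the decision-tree steps listed above: the split that most reduces the weighted entropy of the two resulting sets has the highest information gain.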
what is star schema?, how regularly must an algorithm be updated?,"You will want to update an algorithm when: You want the model to evolve as data streams through infrastructure; The underlying data source is changing; There is a case of non-stationarity" what are eigenvalue and eigenvector?, why is resampling done?,"Resampling is done in any of these cases: Estimating the accuracy of sample statistics by using subsets of accessible data, or drawing randomly with replacement from a set of data points; Substituting labels on data points when performing significance tests; Validating models by using random subsets (bootstrapping, cross-validation)" what is selection bias?,"Selection bias, in general, is a problematic situation in which error is introduced due to a non-random population sample." what are the types of biases that can occur during sampling?,"1. Selection bias 2. Undercoverage bias 3. Survivorship bias" what is survivorship bias?,"Survivorship bias is the logical error of concentrating on the things that survived some process while overlooking those that did not because of their lack of prominence. This can lead to wrong conclusions in numerous ways." how do you work towards a random forest?,"The underlying principle of this technique is that several weak learners combine to provide a strong learner. The steps involved are: 1. Build several decision trees on bootstrapped training samples of data. 2. On each tree, each time a split is considered, a random sample of m predictors is chosen as split candidates out of all p predictors. 3. Rule of thumb: at each split, m ≈ √p. 4. Predictions: by majority rule." what are the important skills to have in python with regard to data analysis?,"The following are some of the important skills to possess which will come in handy when performing data analysis using Python: Good understanding of the built-in data types, especially lists, dictionaries, tuples, and sets. Mastery of N-dimensional NumPy arrays. Mastery of Pandas dataframes. Ability to perform element-wise vector and matrix operations on NumPy arrays. Knowing that you should use the Anaconda distribution and the conda package manager. Familiarity with Scikit-learn. Ability to write efficient list comprehensions instead of traditional for loops. Ability to write small, clean functions (important for any developer), preferably pure functions that don't alter objects. Knowing how to profile the performance of a Python script and how to optimize bottlenecks. Credit: KDnuggets, Simplilearn, Edureka, Guru99, Hackernoon, Datacamp, Nitin Panwar, Michael Rundell" How do you subset or filter data in SQL?,"To subset or filter data in SQL, we use WHERE and HAVING clauses." What is the difference between a WHERE clause and a HAVING clause in SQL?, "How are Union, Intersect, and Except used in SQL?", What is a Subquery in SQL?,"A Subquery in SQL is a query within another query. It is also known as a nested query or an inner query. Subqueries are used to enhance the data to be queried by the main query. It is of two types: Correlated and Non-Correlated." How is joining different from blending in Tableau?, What do you understand by LOD in Tableau?,"LOD in Tableau stands for Level of Detail. It is an expression that is used to execute complex queries involving many dimensions at the data-sourcing level. Using LOD expressions, you can find duplicate values, synchronize chart axes and create bins on aggregated data." Can you discuss the process of feature selection and its importance in data analysis?,"Feature selection is the process of selecting a subset of relevant features from a larger set of variables or predictors in a dataset. It aims to improve model performance, reduce overfitting, enhance interpretability, and optimize computational efficiency." What are the different connection types in Tableau Software?,"There are mainly 2 types of connections available in Tableau.
Extract: An extract is an image (snapshot) of the data that will be extracted from the data source and placed into the Tableau repository. This snapshot can be refreshed periodically, fully, or incrementally. Live: The live connection makes a direct connection to the data source. The data will be fetched straight from tables, so data is always up to date and consistent." What are the different joins that Tableau provides?,"Joins in Tableau work similarly to the SQL join statement. Below are the types of joins that Tableau supports: Left Outer Join, Right Outer Join, Full Outer Join, Inner Join" What is a Gantt Chart in Tableau?,"A Gantt chart in Tableau depicts the progress of a value over a period, i.e., it shows the duration of events. It consists of bars along the time axis. The Gantt chart is mostly used as a project management tool where each bar is a measure of a task in the project." What is the difference between Treemaps and Heatmaps in Tableau?, What is the correct syntax for the reshape() function in NumPy?, What are the different ways to create a data frame in Pandas?,"There are two ways to create a Pandas data frame: By initializing a list; By initializing a dictionary" Write the Python code to create an employee's data frame from the "emp.csv" file and display the head and summary., How will you select the Department and Age columns from an Employee data frame?, "Suppose there is an array that has values [0,1,2,3,4,5,6,7,8,9]. How will you display the following values from the array - [1,3,5,7,9]?","Since we only want the odd numbers from 0 to 9, you can perform the modulus operation and check if the remainder is equal to 1." How can you add a column to a Pandas Data Frame?, How will you print four random integers between 1 and 15 using NumPy?,"To generate random numbers using NumPy, we use the numpy.random.randint() function." What do data analysts do?,"Outline the main tasks of a data analyst: identify, collect, clean, analyze, and interpret. Talk about how these tasks can lead to better business decisions, and be ready to explain the value of data-driven decision-making." What was your most successful/most challenging data analysis project?,"Getting asked about a project you're proud of is your chance to highlight your skills and strengths. Do this by discussing your role in the project and what made it so successful. As you prepare your answer, take a look at the original job description. See if you can incorporate some of the skills and requirements listed." What is your process for cleaning data?,"Walk through the steps you typically take to clean a data set. Consider mentioning how you handle: Missing data, Duplicate data, Data from different sources, Structural errors, Outliers" How do you explain technical concepts to a non-technical audience?,"While drawing insights from data is a critical skill for a data analyst, communicating those insights to stakeholders, management, and non-technical co-workers is just as important. Your answer should include the types of audiences you've presented to in the past (size, background, context). If you don't have a lot of experience presenting, you can still talk about how you'd present data findings differently depending on the audience." Tell me about a time when you got unexpected results.,"Describe the situation that surprised you and what you learned from it. Take this as an opportunity to demonstrate your natural curiosity and excitement to learn new things from data." What data analytics software are you familiar with?,"Mention software solutions you've used for various stages of the data analysis process." What scripting languages are you trained in?,"As a data analyst, you'll likely have to use SQL and a statistical programming language like R or Python. If you're already familiar with the language of choice at the company you're applying to, great. If not, you can take this time to show enthusiasm for learning. Point out that your experience with one (or more) languages has set you up for success in learning new ones. Talk about how you're currently growing your skills." What statistical methods have you used in data analysis?,"Mean, Standard deviation, Variance, Regression, Sample size, Descriptive and inferential statistics" How have you used Excel for data analysis in the past?, "What is a VLOOKUP, and what are its limitations?", "What is a pivot table, and how do you make one?", How do you find and remove duplicate data?, "What are INDEX and MATCH functions, and how do they work together?", What's the difference between a function and a formula?, What is the difference between a 1-sample T-test and a 2-sample T-test?, "What exactly does the term ""Data Science"" mean?","Data Science is an interdisciplinary discipline that encompasses a variety of scientific procedures, algorithms, tools, and machine learning algorithms that work together to uncover common patterns and gain useful insights from raw input data using statistical and mathematical analysis. Gathering business needs and related data is the first step; data cleansing, data staging, data warehousing, and data architecture are all procedures in the data acquisition process. Exploring, mining, and analyzing data are all tasks that data processing does, and the results may then be utilized to provide a summary of the data's insights." Distinguish between data in long and wide formats.,"Data in a long format: Each row of the data reflects a subject's one-time information, so each subject's data would be organized in different/multiple rows. The data may be identified by viewing rows as groupings. This data format is commonly used in R analysis and for writing to log files at the end of each experiment. Data in a wide format: The repeated responses of a subject are divided into various columns. The data may be identified by viewing columns as groups.
This data format is most widely used in stats programs for repeated measures ANOVAs and is seldom utilized in R analysis." List down the criteria for Overfitting and Underfitting.,"Overfitting: The model works well only on the sample training data; any new data supplied as input to the model fails to generate accurate results. These situations emerge owing to low bias and large variance in the model. Decision trees are usually prone to overfitting. Underfitting: Here, the model is so simple that it cannot find the proper connections in the data, and consequently it does not perform well on the test data. This might arise owing to excessive bias and low variance. Underfitting is more common in linear regression." "What exactly does the term ""Data Science"" mean?","Data Science is an interdisciplinary discipline that encompasses a variety of scientific procedures, algorithms, tools, and machine learning algorithms that work together to uncover common patterns and gain useful insights from raw input data using statistical and mathematical analysis. Gathering business needs and related data is the first step; data cleansing, data staging, data warehousing, and data architecture are all procedures in the data acquisition process. Exploring, mining, and analyzing data are all tasks that data processing does, and the results may then be utilized to provide a summary of the data's insights. Following the exploratory phases, the cleansed data is exposed to many algorithms, such as predictive analysis, regression, text mining, pattern recognition, and so on, depending on the needs. In the final stage, the outcomes are conveyed to the business in an aesthetically appealing way. This is where the ability to visualize data, report on it, and use other business intelligence tools comes into play."
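The overfitting/underfitting criteria above can be demonstrated with a small sketch (illustrative only, not from the original answer): a straight line underfits noisy sine data, while a very high-degree polynomial fits the training points almost perfectly yet would generalize poorly.

```python
import numpy as np

# Underfitting vs. overfitting with polynomial fits to noisy data.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, size=x.size)

def train_mse(degree):
    """Mean squared error on the training points for a polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((y - np.polyval(coeffs, x)) ** 2))

# A degree-1 line underfits (high training error, high bias); a degree-15
# polynomial chases the noise (low training error, high variance).
print(train_mse(1) > train_mse(15))
```

This mirrors the bias/variance description in the answer: the line has excessive bias and low variance, the high-degree polynomial has low bias and large variance.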
What is the difference between data science and data analytics?,"Data science is altering data using a variety of technical analysis approaches to derive useful insights that data analysts may apply to their business scenarios. Data analytics is concerned with verifying current hypotheses and facts and answering questions for a more efficient and successful business decision-making process. Data science fosters innovation by providing answers to questions that help people make connections and solve challenges in the future. Data analytics is concerned with extracting current meaning from past context, whereas data science is concerned with predictive modelling. Data science is a wide topic that employs a variety of mathematical and scientific tools and methods to solve complicated issues. In contrast, data analytics is a more focused area that employs fewer statistical and visualization techniques to solve particular problems."
What are some of the strategies utilized for sampling? What is the major advantage of sampling?,"Data analysis cannot be done on an entire volume of data at a time, especially when it concerns larger datasets. It becomes important to obtain data samples that can represent the full population and then analyse them. While doing this, it is vital to carefully choose sample data out of the enormous data that represents the complete dataset. There are two types of sampling procedures, depending on the involvement of statistics: Non-probability sampling techniques: convenience sampling, quota sampling, snowball sampling, etc. Probability sampling techniques: simple random sampling, clustered sampling, stratified sampling."
What is the difference between Eigenvectors and Eigenvalues?,Eigenvectors are column vectors or unit vectors with a length/magnitude of 1; they are also known as right vectors. Eigenvalues are coefficients applied to eigenvectors that give these vectors varying length or magnitude values.
Eigendecomposition is the process of breaking a matrix down into eigenvectors and eigenvalues. These are then utilized in machine learning approaches such as PCA (Principal Component Analysis) to extract useful information from the matrix.
What does it mean to have high and low p-values?,"A p-value measures the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true. It indicates the likelihood that the observed discrepancy happened by chance. A low p-value, i.e., a value below 0.05, means the null hypothesis may be rejected, as the data are unlikely under it. A high p-value, i.e., a value above 0.05, indicates support for the null hypothesis. With a p-value of exactly 0.05, the hypothesis can go either way."
When to do re-sampling?,Re-sampling is a data sampling procedure that improves accuracy and quantifies the uncertainty of population characteristics. We check that the model is efficient by training it on different patterns in a dataset to guarantee that variations are handled. It is also done when models need to be validated using random subsets or when tests are run with labels substituted on data points.
"What does it mean to have ""imbalanced data""?",A dataset is highly imbalanced when the data is unevenly distributed across categories. Such datasets cause performance problems and inaccuracies in the model.
"Do the predicted value and the mean value differ in any way?","Although there aren't many differences between the two, it's worth noting that they're employed in different situations. In general, the mean value refers to the probability distribution, whereas the expected value is used when dealing with random variables."
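As a toy illustration of the p-value idea above, a pure-Python sketch (the numbers are illustrative) computing the exact one-sided p-value for observing 60 or more heads in 100 tosses of a coin assumed fair:

```python
from math import comb

def binom_p_value(n, k, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_val = binom_p_value(100, 60)  # roughly 0.028, below the usual 0.05 cutoff
```

A value this small would lead us to reject the null hypothesis that the coin is fair at the 0.05 level.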
What does Survivorship bias mean to you?,"This bias refers to the logical fallacy of focusing on parts that survived a procedure while missing others that did not, owing to their lack of prominence. It can lead to incorrect conclusions being drawn."
"Define key performance indicators (KPIs), lift, model fitting, robustness, and design of experiment (DOE).",KPI is a metric that assesses how successfully a company meets its goals. Lift measures the target model's performance compared to a random-choice model; it represents how well the model predicts compared to having no model at all. Model fitting measures how well the model under consideration matches the data. Robustness refers to the system's capacity to handle variations and variances successfully. DOE refers to the task of describing and explaining information variance under postulated settings to reflect factors.
Identify confounding variables.,"Another name for confounding variables is confounders. They are extraneous variables that impact both independent and dependent variables, generating erroneous associations and mathematical correlations."
What distinguishes time-series issues from other regression problems?,"Time series data could be considered an extension of linear regression, using terminology such as autocorrelation and moving averages to summarize past data of y-axis variables to forecast a better future. The major purpose of time series problems is forecasting and predicting, where exact forecasts can be produced but the determining factors are not always known. The presence of time in a problem does not by itself make it a time series problem; there must be a relationship between target and time. Observations that are closer in time are anticipated to be more similar than those far apart, which accounts for seasonality.
Today's weather, for example, would be comparable to tomorrow's weather but not to the weather four months from now. As a result, forecasting the weather based on historical data becomes a time series challenge."
What if a dataset contains variables with more than 30% missing values? How would you deal with such a dataset?,"We use one of the following methods, depending on the size of the dataset: If the dataset is small, the missing values are replaced with the average or mean of the remaining data. In pandas this may be done using mean = df.mean(), where df is the pandas DataFrame containing the dataset and mean() computes the mean of the data. We may then use df.fillna(mean) to fill in the missing values with the computed mean. For bigger datasets, the rows with missing values may be deleted, and the remaining data can be utilized for data prediction."
"What is Cross-Validation, and how does it work?","Cross-validation is a statistical approach for assessing and enhancing the performance of a model. The model is trained and evaluated in rotation on different samples of the training dataset to ensure that it performs adequately on unknown data. The training data is divided into groups, and the model is tested and verified against each group in turn. The most regularly used techniques are: the leave-p-out method, the k-fold method, the holdout method, and the leave-one-out method."
How do you go about tackling a data analytics project?,"In general, we follow the steps below: The first stage is to understand the company's problem or need. Then examine and evaluate the data you've been given; if any data is missing, contact the company to clarify the requirements. The next stage is to clean and prepare the data, which will then be utilized for modelling; variables are transformed and missing values are handled here. To acquire useful insights, run your model on the data, create meaningful visualizations, and evaluate the findings.
Release the model implementation, and evaluate its usefulness by tracking the outcomes and performance over a set period. Validate the model using cross-validation."
What is the purpose of selection bias?,Selection bias occurs when no randomization is achieved while selecting a sample subset. This bias indicates that the sample used in the analysis does not reflect the whole population being studied.
Why is data cleansing so important? What method do you use to clean the data?,"It is critical to have correct and clean data that contains only essential information to get good insights when running an algorithm on any data. Poor or erroneous insights and projections are frequently the product of contaminated data, resulting in disastrous consequences. For example, when starting a large marketing campaign for a product, if our data analysis instructs us to target a product that in reality has little demand, the campaign will almost certainly fail, and the company's revenue is reduced. This is when the value of having accurate and clean data becomes apparent. Cleaning data from many sources aids data transformation and produces data that data scientists can work on. Clean data improves the model's performance and results in highly accurate predictions. When a dataset is sufficiently huge, running data on it becomes difficult; the cleansing stage takes a long time (about 80% of the project time), so it cannot be folded into the model's execution. As a result, cleansing data before running the model improves the model's speed and efficiency. Data cleaning aids in the detection and correction of structural flaws in a dataset, and it also aids in the removal of duplicates and the maintenance of data consistency.
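A minimal pandas sketch of the cleaning steps discussed above (duplicate removal plus mean imputation); the toy DataFrame and column names are made up for illustration:

```python
import pandas as pd

# Toy dataset with one exact duplicate row and one missing value
df = pd.DataFrame({
    "age":  [25.0, 25.0, None, 40.0],
    "city": ["NY", "NY", "LA", "SF"],
})

cleaned = df.drop_duplicates()                            # remove exact duplicates
cleaned = cleaned.fillna({"age": cleaned["age"].mean()})  # mean-impute the age column
```

After these two steps the duplicate row is gone and the missing age holds the mean of the remaining ages.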
"
What feature selection strategies are available for picking the appropriate variables for creating effective prediction models?,"When utilizing a dataset in data science or machine learning techniques, it's possible that not all of the variables are required or relevant for the model to be built. To eliminate redundancy and boost the efficiency of our model, we need smarter feature selection approaches. The three primary strategies for feature selection are as follows: Filter approaches: these methods take up only intrinsic attributes of features assessed using univariate statistics, not cross-validated performance. They are simple and typically quicker than wrapper approaches and need fewer processing resources. Available filter methods include the Chi-square test, Fisher's score, the correlation coefficient, the variance threshold, the mean absolute difference (MAD) method, dispersion ratios, and more. Wrapper approaches: these methods greedily search all potential feature subsets, assess their quality, and evaluate a classifier using the features. The selection method uses a machine learning algorithm that must suit the provided dataset. Wrapper approaches are divided into three categories: Forward selection: one feature is checked, and more features are added until a good match is found. Backward selection: all of the features are evaluated, and the ones that don't fit are removed one by one to determine which set works best. Recursive feature elimination: the features are examined and assessed recursively to see how well they perform. These approaches are often computationally expensive, necessitating high-end computing resources for analysis. However, they frequently result in more accurate prediction models than filter methods.
Embedded methods: by including feature interactions while retaining reasonable computing costs, embedded techniques combine the benefits of both filter and wrapper methods. These approaches are iterative because they meticulously extract the features contributing most to the training in each model iteration. LASSO regularization (L1) and random forest importance are two examples of embedded approaches."
Will reclassifying categorical variables as continuous variables improve the predictive model?,Yes! A categorical variable has no particular category ordering and can be allocated to two or more categories. Ordinal variables are comparable to categorical variables but have a defined and consistent ordering. If the variable is ordinal, treating the categorical value as a continuous variable should result in stronger prediction models.
How will you handle missing values in your data analysis?,"After determining which variables contain missing values, the impact of the missing values may be assessed. If the data analyst can detect a pattern in these missing values, there is a potential to uncover useful information. If no patterns are detected, the missing values can be disregarded or replaced with default parameters such as the minimum, mean, maximum, or median. If the missing values are for categorical variables, default values are assigned, and missing values are assigned mean values if the data has a normal distribution. If 80 percent of the values are missing, the analyst must decide whether to use default values or remove the variable."
"What is the ROC Curve, and how do you make one?","The ROC (Receiver Operating Characteristic) curve depicts the trade-off between false-positive and true-positive rates at various thresholds. The curve is used as a surrogate for the sensitivity-specificity trade-off. Plotting values of true-positive rates (TPR, or sensitivity) against false-positive rates (FPR, or 1-specificity) yields the ROC curve.
TPR is the fraction of positive observations correctly predicted as positive out of all positive observations, and FPR is the fraction of negative observations mistakenly predicted as positive out of all negative observations. Take medical testing as an example: the TPR shows the rate at which patients are correctly tested positive for an illness."
What are the differences between the test and validation sets?,The test set is used to evaluate or test the trained model's performance; it assesses the model's predictive ability. The validation set is a subset of the training set used to choose parameters and avoid overfitting the model.
What exactly does the kernel trick mean?,Kernel functions are generalized dot product functions used to compute the dot product of vectors x and y in a high-dimensional feature space. The kernel trick lets a linear classifier solve a non-linear problem by transforming linearly inseparable data into separable data in higher dimensions.
Recognize the differences between a box plot and a histogram.,"Box plots and histograms are visualizations for displaying data distributions and communicating information effectively. Histograms are bar charts that depict the frequency of numerical variable values and may be used to estimate probability distributions, variations, and outliers. Boxplots communicate various features of the data distribution when the shape of the distribution cannot be observed, but insights may still be gained. Compared to histograms, they are handy for comparing numerous charts simultaneously because they take up less space."
How will you balance/correct data that is unbalanced?,"Unbalanced data can be corrected or balanced using a variety of approaches. It is possible to expand the sample size for minority groups, and the number of samples can be reduced for classes with many data points.
The following are some of the methods used to balance data: Utilize the proper assessment metrics: it's critical to use evaluation metrics that give useful information when dealing with unbalanced data. Specificity/precision indicates how many of the selected instances are relevant, and sensitivity indicates how many relevant instances were selected. The F1 score represents the harmonic mean of precision and sensitivity, and the MCC (Matthews correlation coefficient) represents the correlation coefficient between observed and predicted binary classifications. The AUC (Area Under the Curve) measures the relationship between true-positive and false-positive rates. Resampling: working on obtaining multiple datasets may also be used to balance data, which can be accomplished by resampling. Under-sampling: when the amount of data is adequate, this balances the data by lowering the size of the abundant class; a new balanced dataset may be obtained and used for further modelling. Over-sampling: this method is utilized when the amount of data available is insufficient. It attempts to balance the dataset by increasing the sample size; instead of getting rid of excessive samples, repetition, bootstrapping, and other approaches are used to produce and introduce fresh samples. Do k-fold cross-validation correctly: when employing over-sampling, cross-validation must be done correctly. Cross-validation should be performed before over-sampling, since doing it afterward would be equivalent to overfitting the model to obtain a certain outcome. Data is resampled many times with varied ratios to circumvent this."
Random forest or many decision trees: which is better?,"Because random forests are an ensemble approach that guarantees numerous weak decision trees learn strongly, they are far more robust, accurate, and less prone to overfitting than multiple decision trees."
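The random over-sampling strategy described above can be sketched in a few lines of plain Python (the labels are a toy example, illustrative only):

```python
import random

random.seed(0)  # reproducible draws

# Imbalanced toy dataset: 90 samples of class 0, 10 of class 1
labels = [0] * 90 + [1] * 10

minority = [y for y in labels if y == 1]
majority = [y for y in labels if y == 0]

# Draw minority samples with replacement until the classes are balanced
balanced = majority + random.choices(minority, k=len(majority))
```

In practice the rows (features plus label) are resampled, not the labels alone, but the balancing logic is the same.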
What criteria do we use to determine the statistical importance of an insight?,"The statistical significance of an insight is determined by hypothesis testing. The null and alternative hypotheses are stated, and the p-value is computed under the assumption that the null hypothesis is true. The alpha value, which indicates significance, is adjusted to fine-tune the outcome. The null hypothesis is rejected if the p-value is smaller than alpha, and the given result is then statistically significant."
What are the applications of long-tail distributions?,"A long-tailed distribution is one whose tail progressively diminishes as the curve moves towards the end. The Pareto principle and the distribution of product sales exemplify the use of long-tailed distributions, and they are also prominent in classification and regression difficulties."
"What is the definition of the central limit theorem, and what is its application?","The central limit theorem asserts that, as the sample size grows, the distribution of the sample mean approaches a normal distribution regardless of the shape of the population distribution. The central limit theorem is crucial since it is commonly utilized in hypothesis testing and in precisely calculating confidence intervals."
"In statistics, what do we understand by observational and experimental data?","Observational data comes from observational studies, in which variables are examined to see whether there is a link. Experimental data comes from investigations in which specific factors are kept constant to examine any disparity in the results."
What does mean imputation for missing data mean? What are its disadvantages?,"Mean imputation is a seldom-recommended technique that involves replacing null values in a dataset with the data's mean. It is a poor approach since it removes any accountability for feature correlation.
This also means that the data will have lower variance and higher bias, reducing the model's accuracy and narrowing confidence intervals."
"What is the definition of an outlier, and how do we recognize one in a dataset?","Data points that differ significantly from the rest of the dataset are called outliers. Depending on the learning process, an outlier can significantly reduce a model's accuracy and efficiency. Two strategies are used to identify outliers: the interquartile range (IQR) and the standard deviation/z-score."
"In statistics, how are missing data treated?","In statistics, there are several options for dealing with missing data: predicting the missing values; assigning an individual (one-of-a-kind) value; deleting rows with missing data; imputing a mean or median value; and using random forests to fill in the blanks."
"What is exploratory data analysis, and how does it differ from other types of data analysis?","Investigating data to comprehend it better is known as exploratory data analysis. Initial investigations are carried out to identify patterns, detect anomalies, test hypotheses, and confirm assumptions."
"What is selection bias, and what does it imply?","Selection bias refers to the non-random selection of individual or grouped data for analysis. If proper randomization is not performed, the sample will not correctly represent the population, limiting what can be said about model functionality."
What are the many kinds of statistical selection bias?,"There are different kinds of selection bias, as indicated below: protopathic bias, observer selection, attrition, sampling bias, and time intervals."
What is the definition of an inlier?,"An inlier is a data point on the same level as the rest of the dataset. As opposed to an outlier, finding an inlier in a dataset is more challenging because it requires external data. Like outliers, inliers diminish model accuracy.
As a result, they are also eliminated if found in the data, primarily to ensure that the model stays accurate."
Describe a situation in which the median is superior to the mean.,"When some outliers might skew the data either positively or negatively, the median is preferable since it offers an appropriate assessment in this instance."
Could you provide an example of a root cause analysis?,"As the name implies, root cause analysis is a problem-solving technique that identifies a problem's fundamental cause. For instance, if a city's higher crime rate is directly linked to higher sales of red-colored shirts, the two variables are positively related; however, this does not imply that one causes the other. A/B testing or hypothesis testing may always be used to assess causality."
"What does the term ""six sigma"" mean?",Six sigma is a quality assurance approach frequently used in statistics to enhance procedures and functionality while working with data. A process is called six sigma when 99.99966 percent of the model's outputs are defect-free.
What is the definition of DOE?,"In statistics, DOE stands for ""Design of Experiments."" The design of a task specifies the data and how it varies when the independent input factors change."
Which of the following data types does not have a log-normal or Gaussian distribution?,"Exponential distributions are neither log-normal nor Gaussian, and these distributions do not exist for categorical data of any kind. Typical examples are the duration of a phone call, the time until the next earthquake, and so on."
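The IQR rule mentioned earlier for spotting outliers can be sketched with the standard library (toy numbers, illustrative only):

```python
from statistics import quantiles

def iqr_outliers(xs):
    """Flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = quantiles(xs, n=4)   # the three quartile cut points
    spread = q3 - q1
    lo, hi = q1 - 1.5 * spread, q3 + 1.5 * spread
    return [x for x in xs if x < lo or x > hi]

iqr_outliers([10, 12, 11, 13, 12, 11, 95])  # flags 95
```

The 1.5 multiplier is the conventional choice; a larger value flags only more extreme points.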
What does the five-number summary mean in statistics?,"The five-number summary is a measure of five entities that encompass the complete range of data, as seen below: the low extreme (min), the first quartile (Q1), the median, the upper quartile (Q3), and the high extreme (max)."
What is the definition of the Pareto principle?,"The Pareto principle, commonly known as the 80/20 rule, states that 80% of the results come from 20% of the causes in a given experiment. A basic example of the Pareto principle is the observation that 80 percent of peas come from 20 percent of the pea plants on a farm."
Probability
Data scientists and machine learning engineers rely on probability theory to undertake statistical analysis of their data. Testing for probability skills is a suitable proxy metric for organizations to assess analytical thinking and intellect, since probability is also strikingly unintuitive. Probability theory is used in different situations, including coin flips, choosing random numbers, and determining the likelihood that patients will test positive for a disease. If you're a data scientist, understanding probability might mean the difference between gaining your ideal job and having to go back to square one.
Interview Questions on Probability Concepts
These probability questions are meant to test your understanding of probability theory on a conceptual level. You might be tested on the different forms of distributions, the Central Limit Theorem, or the application of Bayes' Theorem. Such questions require a proper understanding of probability theory and the ability to explain it to a layperson.
How do you distinguish between the Bernoulli and binomial distributions?,"The Bernoulli distribution simulates one trial of an experiment with just two possible outcomes, whereas the binomial distribution simulates n trials."
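A small sketch of the relationship just described: the Bernoulli distribution is simply the binomial with n = 1 (pure Python, illustrative parameters):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): n independent trials, success prob p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

bernoulli_success = binom_pmf(1, 1, 0.7)  # single trial: simply p = 0.7
three_of_five = binom_pmf(3, 5, 0.5)      # C(5,3) / 2**5 = 0.3125
```

Setting n = 1 collapses the formula to p for success and 1 - p for failure, which is exactly the Bernoulli pmf.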
Describe how a probability distribution might be non-normal and provide an example.,"The probability distribution is not normal if most observations do not cluster around the mean, creating the bell curve. A uniform probability distribution is an example of a non-normal probability distribution, in which all values are equally likely to occur within a particular range."
How can you tell the difference between correlation and covariance? Give a specific example.,"Covariance can be any numeric value, but correlation can only be between -1 (strong negative correlation) and 1 (strong positive correlation). As a result, a link between two variables may appear to have a high covariance but a low correlation value."
How are the Central Limit Theorem and the Law of Large Numbers different?,"The Law of Large Numbers states that a ""sample mean"" is an unbiased estimator of the population mean and that the error of that mean decreases as the sample size grows. In contrast, the Central Limit Theorem states that as a sample size n grows large, the distribution of the sample mean can be approximated by a normal distribution."
What is the definition of an unbiased estimator? Give a layperson an example.,"An unbiased estimator is an accurate statistic used to estimate a population parameter. An example is using a sample of 1000 voters in a political poll to assess the overall voting population. That said, there is no such thing as a perfectly unbiased estimator."
Assume that the chance of finding a particular object X at location A is 0.6 and that finding it at location B is 0.8. What is the likelihood of finding item X at places A or B?,"Let us begin by defining our probabilities: P(item at location A) = P(A) = 0.6; P(item at location B) = P(B) = 0.8. We want to know the likelihood that item X is found at location A or location B, which may be calculated from the question.
Since our occurrences are not mutually exclusive, we can describe this probability in equation form: P(A or B) = P(A U B) = P(A) + P(B) - P(A and B). Assuming the two locations are independent, P(A U B) = 0.6 + 0.8 - (0.6)(0.8) = 0.92."
"Assume you have a deck of 500 cards with numbers ranging from 1 to 500. If all the cards are mixed randomly and you are asked to choose three cards one at a time, what is the likelihood of each following card being larger than the previously drawn card?","Consider this a sample-space problem, with all other specifics ignored. If someone selects three distinct numbered unique cards at random without replacement, there will be a low, a medium, and a high card. Let's pretend we drew the numbers 1, 2, and 3 to make things easier. In our case, the winning scenario would be pulling (1,2,3) in that precise order. But what is the complete spectrum of possible outcomes?"
"Assume you have one function, which returns a random number between a minimum and a maximum value, N and M. Then take the output of that function and use it as the maximum of another random number generator with the same minimum value N. How would the resulting sample be distributed? What would the second function's expected value be?","Let X be the first run's outcome, and Y be the second run's result. Because the integer output is ""random"" and no other information is given, we may infer that all integers between N and M have an equal chance of being chosen. As a result, X and Y are discrete uniform random variables with limits N & M and N & X, respectively."
Three zebras are seated on the corners of an equilateral triangle. Each zebra chooses a direction at random and sprints along the triangle's outline to one of the opposing corners. What is the chance that none of the zebras collide?,"Assume that all of the zebras are arranged in an equilateral triangle. If they're sprinting along the outline to either corner, each has two directions to choose from.
Let's compute the chance that they won't collide, given that each choice is random. In reality, there are only two non-colliding outcomes: the zebras all run in a clockwise motion, or all run counter-clockwise. Let's see what the probability is for each one. The likelihood that every zebra chooses to go clockwise is the product of each zebra's individual chance of travelling clockwise. Given two options (clockwise or counter-clockwise), that is 1/2 * 1/2 * 1/2 = 1/8. The zebras have the same 1/8 chance of all travelling counter-clockwise. As a result, adding the two probabilities together gives the correct probability of 1/4, or 25%."
"You contact three random friends in Seattle and independently ask each one if it's raining. Each friend has a two-thirds probability of telling you the truth and a one-third chance of deceiving you by lying. All three friends answer ""Yes, it is raining."" What are the chances that it's actually raining in Seattle right now?","According to the Frequentist approach, if you repeat the trial with your friends, there is one occurrence out of those 27 trials in which all three of your friends lied. However, because your friends all provided the same response, you're not interested in all 27 trials, which would include occurrences where your friends' replies differed."
"You are handed a fair coin, and you toss it until it lands on either Heads-Heads-Tails (HHT) or Heads-Tails-Tails (HTT). Is one more likely to appear first? If so, which one, and why?","This question needs a little memory. Given that we have to predict the number of heads out of some trials, we may deduce at first glance that it is a binomial distribution problem. As a result, for each test, we'll employ a binomial distribution with n trials and a probability of success of p.
The expected number of heads for a binomial distribution is the probability of success (a fair coin has a 0.5 chance of landing heads or tails) multiplied by the total number of trials (576). As a result, our coin flips are expected to turn up heads 288 times."
,"Given the two circumstances, we may conclude that both sequences need H to come first. Once H occurs, the chance of HHT is equal to 1/2. What is the reason behind this? Because all you need for HHT in this circumstance is one more H. Since we are flipping the coin in series until we observe a string of HHT or HTT in a row, the coin does not reset. The fact that the initial letter is H enhances the likelihood of HHT rather than HTT."
Under what circumstances does the inverse of a diagonal matrix exist?,"The inverse of a square diagonal matrix exists if all diagonal elements are non-zero. If this is the case, the inverse is obtained by replacing each diagonal element with its reciprocal."
What does Ax = b stand for? When does Ax = b have a unique solution?,"Ax = b is a set of linear equations written in matrix form, in which A is the coefficient matrix of order m x n, x is the vector of unknown variables of order n x 1, and the constants form the vector b of order m x 1. The system Ax = b has a unique solution if and only if rank[A] = rank[A|b] = n, where A|b is the matrix A with b attached as an additional column."
What is the process for diagonalizing a matrix?,"To obtain the diagonal matrix D of an n x n matrix A, we must do the following: determine A's characteristic polynomial; find the roots of the characteristic polynomial to get the eigenvalues of A; and find the corresponding eigenvectors for each of A's eigenvalues.
The matrix is not diagonalizable if the total number of eigenvectors m found in step 3 does not equal n (the number of rows and columns of A); but if m = n, the diagonal matrix D is given by D = P^(-1) A P, where P is the matrix whose columns are the eigenvectors of A." "Give the definitions of positive definite, negative definite, positive semi-definite, and negative semi-definite matrices?","A positive definite matrix is a symmetric matrix M for which the number z^T M z is positive for every non-zero column vector z. A symmetric matrix M is a positive semi-definite matrix if the number z^T M z is positive or zero for every non-zero column vector z. Negative semi-definite and negative definite matrices are defined in the same way. Because each matrix can be associated with the quadratic form z^T M z, these matrices help in solving optimization problems: a positive definite matrix M, for example, implies a convex function, ensuring the existence of the global minimum. This allows us to solve the optimization problem using the Hessian matrix, and negative definite matrices are subject to the same considerations." How does Linear Algebra relate to broadcasting?,"Broadcasting is a technique for easing element-by-element operations based on dimension constraints. We say two arrays are compatible for broadcasting if, for each pair of corresponding dimensions (rows versus rows, columns versus columns), either the dimensions are equal or one of them is of size one. Broadcasting works by (conceptually) duplicating the smaller array so that it has the same size and dimensions as the bigger array. This approach was initially created for NumPy, but it has since been adopted by other numerical computing libraries, including Theano, TensorFlow, and Octave."
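The broadcasting rules described in the answer above can be seen in a short NumPy sketch (the array values here are arbitrary examples):

```python
import numpy as np

# A 3x3 matrix and a length-3 row vector: the vector's single row is
# conceptually replicated down the rows so the shapes line up.
matrix = np.arange(9).reshape(3, 3)   # [[0,1,2],[3,4,5],[6,7,8]]
row = np.array([10, 20, 30])
print(matrix + row)        # row is added to every row of the matrix

# A 3x1 column vector broadcasts across the columns instead.
col = np.array([[1], [2], [3]])
print(matrix * col)        # row i of the matrix is scaled by col[i]
```

Shapes that violate the rule (for example, adding a length-4 vector to a 3x3 matrix) raise a ValueError instead of broadcasting.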
What is an Orthogonal Matrix?,"An orthogonal matrix is a square matrix whose columns and rows are orthonormal unit vectors, i.e., mutually perpendicular and of unit length or magnitude. It is formally defined as follows: Q^T Q = Q Q^T = I, where Q stands for the orthogonal matrix, Q^T for the transpose of Q, and I for the identity matrix. From the definition above, we can observe that Q^(-1) = Q^T. As a result, the orthogonal matrix is favored, since its inverse is computed as merely its transpose, which is computationally inexpensive and numerically stable." What exactly is Python?,"Python is a general-purpose, high-level, interpreted programming language. With the correct tools/libraries, practically any application can be constructed, because it is a general-purpose language. Python also has features like objects, modules, threads, exception handling, and automatic memory management, which aid in modelling real-world problems and developing programs to solve them." What are the advantages of Python?,"Python is a general-purpose programming language with a simple, easy-to-learn syntax that prioritizes readability and lowers program maintenance costs. Furthermore, the language is scriptable, open-source, and supports third-party packages, promoting modularity and code reuse. Its high-level data structures, along with dynamic typing and dynamic binding, have attracted a large developer community for Rapid Application Development and deployment." What is the definition of a dynamically typed language?,"We must first learn about typing before comprehending a dynamically typed language. In computer languages, typing refers to type-checking. Because a strongly-typed language like Python doesn't allow ""type-coercion"" (implicit conversion of data types), ""1"" + 2 will result in a type error.
On the other hand, a weakly-typed language, such as JavaScript, will simply return ""12"" as a result. There are two kinds of type-checking: Static - data types are checked before execution. Dynamic - data types are checked during execution. Python is an interpreted language that executes each statement line by line, so type-checking happens in real time while the program is running; as a result, Python is a dynamically typed language." What is the definition of an Interpreted Language?,"In an interpreted language, statements are executed line by line. Interpreted languages include Python, JavaScript, R, PHP, and Ruby, to name just a few. A program in an interpreted language executes straight from the source code without a separate compilation phase." "What is the meaning of PEP 8, and how significant is it?",PEP stands for Python Enhancement Proposal. A PEP is an official design document that provides information to the Python community or describes a new feature or process for Python. PEP 8 is particularly important since it documents the style guidelines for Python code. Contributing to the Python open-source community requires serious and strict adherence to these style guidelines. What is the definition of scope in Python?,"In Python, each object has its own scope. A scope is a block of code in which an object remains relevant. Namespaces uniquely identify all the objects in a program; however, these namespaces also have a scope set for them, allowing you to use their objects without any prefix. A few instances of scope created during Python code execution are: A local scope refers to the local objects available in the current function. A global scope refers to the objects that have been available since the beginning of the code execution. A module-level scope refers to the global objects of the current module accessible in the program.
An outermost scope refers to all of the program's built-in names. The objects in this scope are searched last to resolve a name reference." What is the meaning of pass in Python?,"In Python, the pass keyword denotes a null operation. It is commonly used to fill in blank blocks of code that may execute during runtime but have not yet been written. Without the pass statement, we may encounter errors when a block of code is left empty." How does Python handle memory?,"The Python Memory Manager is in charge of memory management in Python. The memory allotted by the manager is in the form of a private heap space dedicated to Python. This heap holds all Python objects, and because it is private, it is inaccessible to the programmer. Python does, however, have several basic API functions for working with the private heap space. Python also features a built-in garbage collector that recycles unused memory for the private heap space." What are namespaces in Python? What is their purpose?,"In Python, a namespace ensures that object names are unique and used without conflict. Python implements these namespaces as dictionaries with a 'name as key' and a corresponding 'object as value.' Due to this, multiple namespaces can use the same name and map it to different objects. Here are a few instances of namespaces: The Local Namespace stores local names within a function; a temporary namespace is created when a function is called and removed when the function returns. The Global Namespace stores the names of the various imported packages/modules used in the current project; this namespace is created when the package is imported into the script and persists until the script finishes executing. The Built-in Namespace contains the essential built-in functions of Python and built-in names for the various types of exceptions. The lifespan of a namespace is determined by the scope of the objects to which it is assigned.
The lifespan of a namespace comes to an end when the scope of an object expires. As a result, accessing inner namespace objects from an outer namespace is not feasible." What is Python's Scope Resolution?,"Objects with the same name but distinct functionality can exist within the same scope, and in such instances Python's scope resolution kicks in automatically. Here are a few examples of this behavior: Many functions in the Python modules 'math' and 'cmath' are shared by both - log10(), acos(), exp(), and so on. To resolve the ambiguity, it is important to prefix them with their corresponding module, such as math.exp() and cmath.exp(). Consider code where an object temp is set to 10 globally and subsequently to 20 when a function is called; the function call, however, does not affect the global temp value. Python draws a clear distinction between global and local variables, treating their namespaces as distinct identities." Explain the definition of decorators in Python?,"Decorators in Python are simply functions that add functionality to an existing Python function without changing the function's structure. They are denoted by the name @decorator_name and are invoked bottom-up. The elegance of decorators lies in the fact that, in addition to adding functionality to the output of the method, they can also accept arguments for functions and modify them before passing them to the function. The inner nested function, i.e., the 'wrapper' function, is crucial here: it enforces encapsulation and thereby keeps itself out of the global scope." What are the definitions of dict and list comprehensions?,"Python comprehensions, like decorators, are syntactic sugar constructs that help build altered and filtered lists, dictionaries, and sets from a given list, dictionary, or set.
Using comprehensions saves a lot of effort and allows you to write less verbose code (containing fewer lines of code). Consider the following scenarios in which comprehensions can be highly beneficial: Performing math operations on an entire list. Filtering the entire list with conditions. Combining multiple lists into one: comprehensions allow for multiple iterators and hence can be used to combine several lists into one. Flattening a multi-dimensional list: a similar strategy of nested iterators (as seen before) can be used to flatten a multi-dimensional list or operate on its inner elements." What is the definition of lambda in Python? What is the purpose of it?,"In Python, a lambda function is an anonymous function that can take any number of parameters but contains only a single expression. It is typically used when an anonymous function is required for a brief time. Lambda functions can be applied in two different ways: Assigning a lambda function to a variable: mul = lambda a, b: a * b; print(mul(2, 5)) # output => 10. Wrapping a lambda function inside another function: def myWrapper(n): return lambda a: a * n; mulFive = myWrapper(5); print(mulFive(2)) # output => 10" "In Python, how do you make a copy of an object?","The assignment statement (= operator) in Python doesn't duplicate objects; instead, it creates a binding between the existing object and the target variable name. To make copies of an object in Python, we must use the copy module. Furthermore, the copy module provides two options for producing copies of a given object: A shallow copy is a bit-wise copy of an object. The values of the cloned object are an identical replica of the original object's values. If one of the values references another object, only the reference to that object is copied.
A deep copy recursively replicates all values from the source object to the destination object, including the objects referenced by the source object." What are the definitions of pickling and unpickling?,"The Python library offers serialization out of the box. Serializing an object means converting it into a format that can be saved, so that it can be de-serialized later to return to its original state. The pickle module is used in this case. Pickling: In Python, the serialization process is known as pickling. Any Python object can be serialized as a byte stream and saved as a memory file. Pickling is a compact process, but pickled objects can be compacted further. Pickle also keeps track of the objects it has serialized, and the serialization is portable across versions. The function used for this operation is pickle.dump(). Unpickling: Unpickling is the exact opposite of pickling. It deserializes the byte stream and loads the object into memory to reconstruct the objects saved in the file. The function used for this operation is pickle.load()." What is PYTHONPATH?,PYTHONPATH is an environment variable that allows you to specify extra directories in which Python will look for modules and packages. This is especially important if you want to keep Python libraries that are not installed in the global default location. What are the functions help() and dir() used for?,"Python's help() function displays documentation for modules, classes, functions, keywords, and other objects. If the help() function is used without an argument, an interactive help utility is opened on the console. The dir() function attempts to return a valid list of the object's attributes and methods. It reacts differently to different objects, because it seeks to produce the most relevant data rather than all of the information: For module/library objects, it returns a list of all attributes contained in that module. For class objects, it returns a list of all valid attributes and base attributes.
If no arguments are supplied, it returns a list of attributes in the current scope." How can you tell the difference between .py and .pyc files?,"The source code of a program is stored in .py files, while the bytecode of your program is stored in .pyc files. We obtain bytecode after the .py file (source code) is compiled. The .pyc files are not produced for every file you run; they are created only for the files you import. Before executing a Python program, the interpreter checks for compiled files: if such a file is present, the virtual machine runs it; if not, it looks for the .py file, compiles it into a .pyc file, and the Python Virtual Machine then executes it. Having a .pyc file saves compilation time." What does the computer interpret in Python?,"Python is neither a purely interpreted nor a purely compiled language; being interpreted or compiled is a property of the implementation. Python bytecode (a collection of interpreter-readable instructions) can be interpreted in different ways. The source code is saved with the extension .py. From the source code, Python generates a set of instructions for a virtual machine; the Python interpreter is an implementation of that virtual machine. This intermediate format is called ""bytecode."" The .py source code is first compiled into bytecode (.pyc). This bytecode can then be interpreted by the standard CPython interpreter or by PyPy's JIT (Just-in-Time) compiler." "In Python, how are arguments delivered, by value or by reference?","Pass by value: a copy of the actual object is passed, so changing the value of the copy does not affect the value of the original object. Pass by reference: the actual object is passed as a reference, so the value seen by the caller changes if the object is mutated. In Python, arguments are passed by reference, meaning that a reference to the actual object is passed."
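The argument-passing behavior described in the answer above - often called "pass by object reference" - can be seen in a small sketch (the function and variable names here are illustrative):

```python
def append_item(items):
    # `items` refers to the caller's own list, so this mutation is visible outside.
    items.append(4)

def rebind(items):
    # Rebinding the local name only changes what the parameter points to;
    # the caller's variable is untouched.
    items = [99]

data = [1, 2, 3]
append_item(data)
print(data)  # [1, 2, 3, 4] - mutated through the shared reference

rebind(data)
print(data)  # [1, 2, 3, 4] - rebinding inside the function had no effect
```

Immutable objects (ints, strings, tuples) cannot be mutated in place, which is why they appear to be passed by value.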
What exactly is Pandas/Python Pandas?,"Pandas is an open-source Python toolkit that allows for high-performance data manipulation. Pandas gets its name from ""panel data,"" which refers to econometrics based on multidimensional data. It was created by Wes McKinney in 2008 and can be used for data analysis in Python. It can conduct the five major steps necessary for data processing and analysis, regardless of the data's origin, namely: load, manipulate, prepare, model, and analyze." What are the different sorts of Pandas Data Structures?,"Pandas provides two data structures, Series and DataFrame, which are supported by the pandas library. Both of these data structures are built on top of NumPy. A Series is a one-dimensional data structure in pandas, whereas a DataFrame is two-dimensional." How do you define a Series in Pandas?,"A Series is a one-dimensional array capable of holding many data types. The index refers to the row labels of a Series. We can quickly turn a list, tuple, or dictionary into a Series by using the 'Series' function. A Series cannot have multiple columns." How can the standard deviation of a Series be calculated?,"The Pandas std() method is used to calculate the standard deviation of a set of values, a DataFrame, a column, or a row: Series.std(skipna=None, axis=None, ddof=1, level=None, numeric_only=None, **kwargs)" How do you define a DataFrame in Pandas?,"A DataFrame is a pandas data structure that uses a two-dimensional array with labeled axes (rows and columns). A DataFrame is a typical way to store data with two indices, namely a row index and a column index. It has the following characteristics: columns of heterogeneous types, such as int and bool, can be used, and it can be thought of as a dictionary of Series structures with indexed rows and columns.
The row labels are referred to as the ""index,"" and the column labels as ""columns.""" What distinguishes the Pandas Library from other libraries?,"The essential features of the pandas library are: data alignment, memory efficiency, time series handling, reshaping, and join and merge operations." What is the purpose of reindexing in Pandas?,"Reindexing conforms a DataFrame to a new index, with optional filling logic. It inserts NA/NaN in locations where values are missing from the previous index. Unless the new index is identical to the current one and copy=False, it returns a new object. It is used to modify the DataFrame's row and column index." Can you explain how to use categorical data in Pandas?,"Categorical data is a Pandas data type that corresponds to a categorical statistical variable. A categorical variable has a restricted, usually fixed, number of possible values. Gender, place of origin, blood type, socioeconomic status, observation time, and Likert-scale ratings are just a few examples. Categorical data values are either in the defined categories or np.nan." "In Pandas, how can we make a replica of a Series?","The following syntax can be used to make a replica of a Series: pandas.Series.copy(deep=True). The statement above creates a deep copy, which contains a copy of the data and the indices. If we set deep to False, neither the indices nor the data will be copied." How can I rename a Pandas DataFrame's index or columns?,"You may use the .rename method to change a DataFrame's column or index values." What is the correct way to iterate over a Pandas DataFrame?,"By combining a loop with the iterrows() method of the DataFrame, you can iterate over its rows." "How do I remove Indices, Rows, and Columns from a Pandas DataFrame?","If you wish to delete the index from the DataFrame, you must do the following: Reset the DataFrame's index. Run del df.index.name to delete the index name.
To remove duplicate index values, reset the index and drop the duplicate values from the index column. You can also remove an index together with a row. Getting rid of a column in your DataFrame: the drop() method may remove a column from a DataFrame. The axis argument given to the drop() method is either 0 to indicate rows or 1 to indicate the columns to be dropped. Pass the argument inplace=True to remove the column without reassigning the DataFrame. Duplicate values can also be removed from a column with the drop_duplicates() method. Getting rid of a row in your DataFrame: we may delete duplicate rows from the DataFrame by calling df.drop_duplicates(), and the drop() method may indicate, via its index argument, the rows to be removed from the DataFrame." What is a NumPy array in Pandas?,"Numerical Python (NumPy) is a Python module that allows you to perform various numerical computations and handle multidimensional and single-dimensional array items. NumPy arrays are faster than regular Python lists for computations." What is the best way to transform a DataFrame into a NumPy array?,"We can convert a Pandas DataFrame to a NumPy array in order to perform various high-level mathematical procedures. The DataFrame.to_numpy() method is used, which returns a NumPy ndarray: DataFrame.to_numpy(dtype=None, copy=False)." What is the best way to convert a DataFrame into an Excel file?,"Using the to_excel() method, we can export the DataFrame to an Excel file. We must mention the destination filename to write a single object to an Excel file. If we wish to write to multiple sheets, we must build an ExcelWriter object with the destination filename and specify the sheet in the file that we want to write to." What is the meaning of Time Series in pandas?,"Time series data is regarded as an important source of information for developing strategies that many organizations can use.
From the traditional banking business to the education industry, it contains a lot of information about time. Time series forecasting is a machine learning approach that deals with time series data to predict future values." What is the meaning of Time Offset?,"An offset defines a range of dates that meet the DateOffset's requirements. We can use DateOffsets to advance dates forward so as to make them valid." How do you define Time Periods?,"Time Periods represent a length of time, such as a day, year, quarter, or month. The Period class lets us convert frequencies to periods." What exactly is NumPy?,"NumPy is a Python-based array-processing package. It includes a high-performance multidimensional array object and utilities for manipulating these arrays. It is the most important Python module for scientific computing, providing a powerful N-dimensional array object and sophisticated broadcasting functions." What is the purpose of NumPy in Python?,"NumPy is a Python module that is used for scientific computing. The NumPy package is used to carry out many tasks. A multidimensional array called ndarray (NumPy Array) holds values of the same data type. These arrays are indexed just like sequences, starting at zero." What does Python's NumPy stand for?,"NumPy (pronounced NUM-py or NUM-pee) is a Python library that adds support for large, multi-dimensional arrays and matrices, along with a vast number of high-level mathematical functions to operate on these arrays." Where does NumPy come into play?,"NumPy is a free, open-source Python library for numerical computations. NumPy includes multi-dimensional array and matrix data structures, and it can execute many operations on arrays, including trigonometric, statistical, and algebraic routines. NumPy is an extension of Numeric and Numarray." Installation of NumPy on Windows?,"Step 1: Install Python on your Windows 10/8/7 computer.
To begin, go to the official Python download website and download the Python executable installer for your Windows machine. Step 2: Install Python using the Python executable installer. Step 3: Download and install pip for Windows 10/8/7. Step 4: Install NumPy in Python on Windows 10/8/7 using pip. The NumPy installation process: Step 1: Open the terminal. Step 2: Type pip install numpy" What is the best way to import NumPy into Python?,"import numpy as np" How can I make a one-dimensional (1D) array?,"num = [1, 2, 3] num = np.array(num) print(""1d array: "", num)" How can I make a two-dimensional (2D) array?,"num2 = [[1, 2, 3], [4, 5, 6]] num2 = np.array(num2) print(""2d array: "", num2)" How do I make a 3D or ND array?,"num3 = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]]] num3 = np.array(num3) print(""3d array: "", num3)" What is the best way to use shape in a 1D array?,"With num defined as above: print(""shape of 1d: "", num.shape)" What is the best way to use shape in a 2D array?,"With num2 defined as above: print(""shape of 2d: "", num2.shape)" What is the best way to use shape in a 3D or ND array?,"With num3 defined as above: print(""shape of 3d: "", num3.shape)" What is the best way to identify the data type of a NumPy array?,"print(""data type num 1: "", num.dtype) print(""data type num 2: "", num2.dtype) print(""data type num 3: "", num3.dtype)" Can you print 5 zeros?,"arr = np.zeros(5) print(""single array: "", arr)" "Print zeros in a two-row, three-column format?","arr2 = np.zeros((2, 3)) print(""2 rows and 3 cols: "", arr2)" Is it possible to utilize eye() for diagonal values?,"arr3 = np.eye(4) print(""diagonal values: "", arr3)" Is it possible to utilize diag() to create a square matrix?,"arr3 = np.diag([1, 2, 3, 4]) print(""square matrix: "", arr3)" Show 4 random integers between 1 and 15.,"rand_arr = np.random.randint(1, 15, 4) print(""random numbers from 1 to 15: "", rand_arr)" Print a range of 1 to 100 and show 20 integers at random.,"rand_arr3 =
np.random.randint(1, 100, 20) print(""random numbers from 1 to 100: "", rand_arr3)" "Print a 2-row, 3-column array of random integers.","rand_arr2 = np.random.randint(1, 15, (2, 3)) print(""random numbers in 2 rows and 3 cols: "", rand_arr2)" What is an example of the seed() function? What is the best way to utilize it? What is the purpose of seed()?,"np.random.seed(123) rand_arr4 = np.random.randint(1, 100, 20) print(""seed() makes the same numbers reproducible: "", rand_arr4)" What is one-dimensional indexing?,"num = np.array([5, 15, 25, 35]) print(""my array: "", num)" Print the first and third positions.,"With num defined as above: print(""first position: "", num[0]) # 5 print(""third position: "", num[2]) # 25" How do you find the final integer in a NumPy array?,"With num defined as above: print(""fourth position: "", num[3]) # 35" How can we get the last element if we don't know its position?,"With num defined as above: print(""last element, indexed by -1: "", num[-1]) # 35" Define Supervised Learning?,"Supervised learning is a machine learning technique that infers a function from labeled training data. The training data consists of a set of training examples. Example: knowing a person's height and weight might help determine their gender. The most common supervised learning algorithms are: Support Vector Machines, k-Nearest Neighbors, Neural Networks, Naive Bayes, Regression, and Decision Trees." Explain Unsupervised Learning?,"Unsupervised learning is a machine learning method that searches for patterns in a data set. There is no dependent variable or label to predict in this case.
Algorithms for unsupervised learning include: clustering, latent variable models, neural networks, and anomaly detection. Example: clustering T-shirts into groups such as ""collar style and V-neck style,"" ""crew-neck style,"" and ""sleeve types.""" What should you do if you're Overfitting or Underfitting?,"Overfitting occurs when a model fits the training data too well; in this scenario, we must resample the data and evaluate model accuracy using approaches such as k-fold cross-validation. In underfitting, the model cannot interpret or capture patterns from the data, so we must either adjust the algorithm or feed more data points to the model." Define Neural Network?,"It is a simplified representation of the human brain. Like the brain, it has neurons that activate when they encounter something similar to past input. The neurons are linked by connections that allow information to travel from one neuron to the next." What is the meaning of Loss Function and Cost Function? What is the main distinction between them?,"When computing loss, we consider only a single data point; this is referred to as the loss function. The cost function determines the total error across numerous data points, and there isn't much difference beyond that: a loss function captures the difference between the actual and predicted values for a single record, whereas a cost function aggregates the difference across the training dataset. Mean squared error and hinge loss are the most widely used loss functions. Mean Squared Error (MSE) measures how well our model's predicted values match the actual values: MSE = (1/n) * sum((predicted value - actual value)^2). Hinge loss is used to train classifiers: L(y) = max(0, 1 - y * y_hat), where y = -1 or 1 denotes the true class and y_hat denotes the classifier's output. By analogy with the equation y = mx + b, the most common cost function depicts the total cost as the sum of the fixed and variable costs."
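The two loss functions from the answer above can be written directly in plain Python (the function names and sample values here are illustrative):

```python
def mse(y_true, y_pred):
    # Mean squared error: average of the squared differences.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def hinge(y_true, score):
    # Hinge loss for labels in {-1, +1} and a raw classifier score.
    return max(0.0, 1.0 - y_true * score)

print(mse([3.0, 5.0], [2.0, 7.0]))  # (1 + 4) / 2 = 2.5
print(hinge(1, 0.3))    # 0.7 - inside the margin, penalized
print(hinge(-1, -2.0))  # 0.0 - correct with margin, no loss
```

Averaging the hinge loss over a whole training set turns this per-record loss function into a cost function, mirroring the distinction drawn in the answer.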
Define Ensemble Learning?,"Ensemble learning is a strategy for creating more powerful machine learning models by combining numerous models. There are several possible sources of a model's uniqueness, including: different populations, different hypotheses, and different modelling approaches. When working with the model's training and testing data, we will encounter an error, and this error can be decomposed into bias, variance, and irreducible error. The model should strike a balance between bias and variance, which we call the bias-variance trade-off, and this trade-off can be managed by ensemble learning. A variety of ensemble approaches are available, but there are two main strategies for aggregating several models: Bagging, a natural approach that generates new training sets from an existing one; and Boosting, a more elegant strategy that optimizes the weighting scheme over a training set." How do you know which Machine Learning Algorithm you should use?,"It depends entirely on the data we have: SVM is used when the data is discrete, and we use linear regression if the data set is continuous. There is no one-size-fits-all method for determining which machine learning algorithm to use; it all relies on exploratory data analysis (EDA). EDA is similar to ""interviewing"" a dataset. As part of that interview, we do the following: Sort our variables into categories such as continuous, categorical, and so on. Summarize our variables using descriptive statistics. Visualize our variables using charts. Based on the observations above, choose the one best-fitting method for the dataset." How should Outlier Values be Handled?,"An outlier is an observation in a dataset that is significantly different from the rest of the data. Tools used to find outliers include the Z-score, box plot, and scatter plot. To deal with outliers, we usually apply one of three simple strategies: We can get rid of them.
They can be labeled as outliers and added to the feature set. Alternatively, we may transform the feature to lessen the impact of the outlier." Define Random Forest? What is the mechanism behind it?,"Random forest is a machine learning approach that may be used for regression and classification. A random forest operates by merging many different tree models, and it creates each tree using a random sample of the data columns. The procedure for creating trees in a random forest is as follows: Using the training data, calculate the sample size. Begin by creating a single node. From the start node, run the following algorithm: Stop if the number of observations is fewer than the node size. Choose variables at random. Determine which variable does the ""best"" job of separating the observations. Divide the observations into two nodes. Run step 'a' on each of these nodes." What are SVM's different Kernels?,"There are six different types of kernels in SVM; below are four of them: Linear kernel - used when the data is linearly separable. Polynomial kernel - used when you have discrete data with no natural notion of smoothness. Radial basis kernel - creates a decision boundary that can separate two classes considerably better than a linear kernel. Sigmoid kernel - the sigmoid kernel acts like a neural network activation function." What is Machine Learning Bias?,"Data bias indicates that there is an inconsistency in the data. The inconsistency can develop for different reasons, none of which are mutually exclusive. For example, to speed up the recruiting process, a digital giant like Amazon built a screening engine that would take 100 resumes and spit out the best five candidates to employ. Once the business noticed the program wasn't producing gender-neutral results, it was adjusted to remove the bias." What is the difference between regression and classification?,"Classification is used to produce discrete outcomes and to categorize data into specific categories.
An example is classifying emails into spam and non-spam groups. Regression, on the other hand, works with continuous data; an example is predicting stock prices at a specific point in time. In short, ""classification"" refers to categorizing the output into a set of classes (for example, will it be cold or hot tomorrow?), while regression is used to forecast a continuous quantity from the data (for example, what will the temperature be tomorrow?)." "What is Clustering, and how does it work?","Clustering is the process of dividing a collection of objects into several groups. Objects in the same cluster should be similar to one another but not to those in different clusters. The following are some examples of clustering methods: K-means clustering, hierarchical clustering, fuzzy clustering, density-based clustering, etc." What is the best way to choose K for K-means Clustering?,"Direct methods and statistical testing methods are the two types of approaches available. Direct methods: the elbow and silhouette methods. Statistical testing methods: the gap statistic. The silhouette is the most commonly utilized when selecting the ideal value of k." Define Recommender Systems,"A recommendation engine is a program that predicts a user's preferences and suggests items that are likely to be of interest to them. Data for recommender systems comes from explicit user ratings given after watching a movie or listening to music, implicit data such as search engine queries and purchase histories, and other information about the users/items themselves." How do you determine if a dataset is normal?,"Plots can be used as a visual aid.
The following are a few examples of normality checks: the Shapiro-Wilk test, Anderson-Darling test, Martinez-Iglewicz test, Kolmogorov-Smirnov test, and D'Agostino skewness test." Is it possible to utilize logistic regression for more than two classes?,"By default, logistic regression is a binary classifier, which means it can't be used for more than two classes directly. It can, however, be extended to solve multi-class classification problems (multinomial logistic regression)." Explain covariance and correlation?,"Correlation is a statistical technique for determining and quantifying the relationship between two variables; it measures the strength of the relationship. Income and spending, or demand and supply, are examples. Covariance is a straightforward method of determining the degree to which two variables vary together. The issue with covariance is that covariances are difficult to compare without normalization." What is the meaning of P-value?,"P-values are used to make the decision in a hypothesis test. The P-value is the smallest significance level at which the null hypothesis can be rejected: the lower the p-value, the stronger the evidence for rejecting the null hypothesis." Define Parametric and Non-Parametric Models,"Parametric models contain a fixed, small number of parameters, so all you need to know to forecast new data is the model's parameters. Non-parametric models have no fixed limit on the number of parameters they may take, giving them additional flexibility when forecasting new data; you must be aware of the current state of the data as well as the model parameters." Define Reinforcement Learning,"Reinforcement learning differs from other forms of learning, such as supervised and unsupervised learning: we are not provided data or labels upfront; instead, an agent learns by interacting with an environment and receiving reward signals." What is the difference between the Sigmoid and Softmax functions?,"The sigmoid function is used for binary classification; it outputs a single probability, and the probabilities of the two classes sum to 1.
On the other hand, the Softmax function is used for multi-class classification, and the probabilities across all classes sum to 1." What is the Dimensionality Curse?,"All of the issues that arise when working with data in many dimensions are called the curse of dimensionality. As the number of features grows, so does the number of samples needed, making the model increasingly complicated. Overfitting becomes increasingly likely as the number of features increases: a machine learning model trained on a large number of features becomes overly reliant on the data it was trained on, resulting in poor performance on real data and defeating the objective. Our model will make fewer assumptions and be simpler if our training data contains fewer features." Why do we need to reduce dimensionality? What are the disadvantages?,"In machine learning, the number of features is referred to as the dimension, and the process of lowering the dimension of your feature set is known as dimensionality reduction. Dimensionality reduction benefits: With less misleading data, model accuracy improves. Less computation is required when there are fewer dimensions. Because there is less data, algorithms can be trained more quickly. Fewer data necessitates less storage space. It removes redundant features and background noise. Dimensionality reduction aids in the visualization of data on 2D and 3D graphs. Dimensionality reduction drawbacks: Some information is lost, which might negatively impact the effectiveness of subsequent training algorithms. It has the potential to be computationally demanding. Transformed features are often difficult to interpret. It makes the independent variables more difficult to comprehend." Can PCA be used to reduce the dimensionality of a nonlinear dataset with many variables?,"PCA may be used to dramatically reduce the dimensionality of most datasets, even if they are extremely nonlinear, by removing unnecessary dimensions.
However, decreasing dimensionality with PCA will lose too much information if there are no unnecessary dimensions." "Is it required to rotate in PCA? If so, why do you think that is? What will happen if the components aren't rotated?","Yes, rotation (orthogonal) is required to account for the training set's maximum variance. If we don't rotate the components, PCA's effect will weaken, and we'll have to choose a larger number of components to explain the variance in the training set." Is standardization necessary before using PCA?,"PCA uses the covariance matrix of the original variables to uncover new directions, and the covariance matrix is sensitive to the scale of the variables. Standardization gives equal weight to all variables; if we combine features from different scales without standardizing, we obtain misleading directions. However, if all variables are already on the same scale, it is unnecessary to standardize them." Should strongly linked variables be removed before doing PCA?,"No: PCA loads all strongly correlated variables onto the same principal component (eigenvector), not onto distinct ones." What happens if the eigenvalues are almost equal?,"If all eigenvalues are roughly equal, PCA cannot prioritize among the principal components, because all principal components explain similar amounts of variance." How can you assess a Dimensionality Reduction Algorithm's performance on your dataset?,"A dimensionality reduction technique performs well if it removes many dimensions from a dataset without sacrificing too much information. If you use dimensionality reduction as a preprocessing step before another machine learning algorithm (e.g., a Random Forest classifier), you can simply measure the performance of that second algorithm: if dimensionality reduction did not lose too much information, the algorithm should perform about as well as it would with the original dataset." What is the Fourier Transform?,"The Fourier Transform is a useful image processing method for breaking down an image into sine and cosine components.
The picture in the Fourier or frequency domain is represented by the output of the transformation, while the input image represents the spatial domain equivalent." "What do you mean when you say ""FFT,"" and why is it necessary?","FFT is an acronym for fast Fourier transform, an algorithm for computing the DFT. It takes advantage of the symmetry and periodicity of the twiddle factors to drastically reduce the time needed to compute the DFT. As a result, the FFT technique reduces the number of expensive computations, which is why it is popular." Describe some of the strategies for dimensionality reduction.,"The following are some approaches for reducing the dimensionality of a dataset: Feature Selection - we keep or delete existing attributes based on their value. Feature Extraction - from the current features, we generate a smaller collection of features that summarizes most of the information in our dataset." What are the disadvantages of reducing dimensionality?,"Dimensionality reduction has some drawbacks; they include: The reduction may take a long time to complete. The transformed independent variables might be difficult to comprehend. As the number of features is reduced, some information is lost, and the algorithms' performance suffers. Support Vector Machine (SVM) The ""Support Vector Machine"" (SVM) is a supervised machine learning method used to solve classification and regression problems. SVMs are especially well-suited to classifying complex but small or medium-sized datasets. Let's go through several SVM-related interview questions." Could you explain SVM to me?,"Support vector machines (SVMs) are supervised machine learning techniques that may be used to solve classification and regression problems. SVM seeks to classify data by locating the hyperplane that maximizes the margin between the training data classes; as a result, SVM is a large margin classifier.
Support vector machines are based on the following principle: find the best hyperplane for linearly separable patterns, and extend this to patterns that are not linearly separable by mapping the original data into a new space using transformations of the original data (i.e., the kernel trick)." "In light of SVMs, how would you explain Convex Hull?","We construct a convex hull for each of classes A and B and draw a perpendicular bisector of the shortest line between their nearest points." Should you train a model on a training set with millions of instances and hundreds of features using the primal or dual form of the SVM problem?,"Because kernelized SVMs may only employ the dual form, this question applies only to linear SVMs. The primal form of the SVM problem has a computational complexity proportional to the number of training examples m, while the dual form has a computational complexity proportional to something between m^2 and m^3. If there are millions of instances, you should use the primal form instead of the dual form, since the dual form would be far slower." Describe when you want to employ an SVM over a Random Forest Machine Learning method.,"The fundamental rationale for choosing an SVM is that the problem may not be linearly separable, in which case we will have to employ an SVM with a non-linear kernel. SVMs are also a good choice if you're working in a higher-dimensional space; for example, SVMs have been shown to perform better in text classification." "Is it possible to use the kernel technique in logistic regression? So, why isn't it implemented in practice?","Logistic regression is more expensive to compute than SVM — O(N^3) versus O(N^2 k), where k is the number of support vectors. The classifier in SVM is defined solely in terms of the support vectors, whereas the classifier in logistic regression is defined over all points, not just the support vectors.
This gives SVMs certain inherent speedups (in terms of efficient code-writing) that logistic regression struggles to attain." What are the differences between SVM without a kernel and logistic regression?,"The main difference is in how they are implemented and optimized: a linear SVM is substantially more efficient and comes with excellent optimization tools." Is it possible to utilize any similarity function with SVM?,"No, the function must comply with Mercer's theorem." Is there any probabilistic output from SVM?,"SVMs do not offer probability estimates directly; instead, they are derived through a time-consuming five-fold cross-validation procedure." What are the many instances in which machine learning models might overfit?,"Overfitting of machine learning models can occur in a variety of situations, including the following: When a machine learning algorithm uses a considerably bigger training dataset than the testing set and learns patterns in the large input space, so that accuracy on the small test set tells us little. When a machine learning algorithm models the training data with too many parameters. When the learning algorithm searches a large hypothesis space. Let's figure out what hypothesis space and searching a hypothesis space mean: if the learning algorithm used to fit the model has a large number of possible hyperparameters and can be trained using multiple datasets (called training datasets) taken from the same dataset, a large number of models (hypotheses h(X)) can be fitted to the same data set. Remember that a hypothesis is an estimator of the target function; as a result, many models may fit the same dataset. This is known as a broader hypothesis space, and when the learning algorithm has access to a broader hypothesis space, the model has a greater chance of overfitting the training dataset."
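The too-many-parameters scenario above can be made concrete with a small sketch: fitting a degree-9 polynomial (ten parameters) to ten noisy points drives the training error to essentially zero while a lower-degree fit cannot memorize the noise. The data, seed, and degrees here are hypothetical choices for illustration only.

```python
import numpy as np

# Hypothetical toy data: a noisy sine wave, 10 training and 10 test points.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.1, 10)

def errors(degree):
    """Mean squared error on the train and test sets for a polynomial fit."""
    poly = np.poly1d(np.polyfit(x_train, y_train, degree))
    return (np.mean((poly(x_train) - y_train) ** 2),
            np.mean((poly(x_test) - y_test) ** 2))

train_simple, test_simple = errors(3)    # modest capacity
train_complex, test_complex = errors(9)  # enough parameters to interpolate all 10 points

# The complex model memorizes the training set (near-zero train error),
# the classic overfitting signature described above.
print(train_complex < train_simple)  # → True
```

Comparing `test_complex` against `test_simple` on real data is the usual way to detect that the memorized fit does not generalize.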
What are the many instances in which machine learning models cause underfitting?,"Underfitting of machine learning models can occur in a variety of situations, including the following: Underfitting, or high-bias, machine learning models can occur when the training set contains fewer observations than variables; because the machine learning algorithm is not complex enough to represent the data in these circumstances, it cannot identify any link between the input data and the output variable. It can also occur when a machine learning system can't detect a pattern between training and testing set variables, which might happen when dealing with many input variables or a high-dimensional dataset; this might be due to a lack of model complexity, a scarcity of training observations for pattern learning, or a lack of computational power that restricts the algorithm's capacity to search for patterns in high-dimensional space, among other factors." "What is a Neural Network, and how does it work?","Neural networks are a simplified model of how people learn, inspired by the way neurons in our brains work. The most typical neural networks are made up of three kinds of layers: an input layer; one or more hidden layers (the most important layers, where feature extraction takes place and adjustments are made so the network trains faster and functions better); and an output layer." What are the Activation Functions in a Neural Network?,"At its most basic level, an activation function determines whether or not a neuron should fire. An activation function takes the weighted sum of the inputs plus a bias as its input. Activation functions include the step function, sigmoid, ReLU, tanh, and softmax." What is the MLP (Multilayer Perceptron)?,"MLPs have an input layer, hidden layers, and an output layer, just like other neural networks. An MLP has the same basic structure as a single-layer perceptron but with more hidden layers.
MLP can identify nonlinear classes, whereas a single-layer perceptron can only classify linearly separable classes with binary output (0, 1). Each node in the layers other than the input layer uses a nonlinear activation function; all nodes and weights are joined together to produce the output from the input layer's data flowing in through the activation functions. MLP uses a supervised learning method called backpropagation: the neural network estimates the error with the aid of the cost function and propagates the error backward from the point of origin, adjusting the weights to train the model more accurately." What is a Cost Function?,"The cost function, sometimes known as ""loss"" or ""error,"" is a metric used to assess how well your model performs. During backpropagation, it is used to calculate the error at the output layer; we feed that error backward through the neural network and use it to train the various weights." What is the difference between a Recurrent Neural Network and a Feedforward Neural Network?,"The interviewer wants you to respond thoroughly to this deep learning interview question. In a feedforward neural network, signals travel in one direction from input to output; the network has no feedback loops and simply evaluates the current input, so it is unable to remember prior inputs (e.g., a CNN). The signals of a recurrent neural network travel in both directions, resulting in a looped network; it generates a layer's output by combining the present input with previously received inputs, and it can recall prior data thanks to its internal memory." What can a Recurrent Neural Network (RNN) be used for?,"Sentiment analysis, text mining, and image captioning can benefit from RNNs. Recurrent neural networks may also be used to solve problems involving time-series data, such as forecasting stock prices over a month or quarter."
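The internal-memory distinction between recurrent and feedforward networks can be shown with a minimal, hypothetical recurrent cell: the same input value produces different outputs at different time steps because the hidden state carries information about earlier inputs, which a feedforward pass cannot do. The weights here are arbitrary illustrative values, not a trained model.

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.9):
    """One step of a minimal recurrent cell: the output depends on the
    current input x AND the carried hidden state h (illustrative weights)."""
    return math.tanh(w_x * x + w_h * h)

hidden = 0.0
outputs = []
for x in [1.0, 0.0, 0.0]:          # a short input sequence
    hidden = rnn_step(x, hidden)   # the hidden state is fed back in
    outputs.append(hidden)

# The identical inputs (0.0 at steps 2 and 3) yield different outputs,
# because the hidden state still "remembers" the 1.0 seen at step 1.
print(outputs[1] != outputs[2])    # → True
```

A feedforward layer applied to the same sequence would map both 0.0 inputs to the same output, which is exactly the limitation the answer above describes.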
Mention the differences between Data Mining and Data Profiling?, Define the term 'Data Wrangling' in Data Analytics.,"Data wrangling is the process wherein raw data is cleaned, structured, and enriched into a desired, usable format for better decision making. It involves discovering, structuring, cleaning, enriching, validating, and analyzing data. This process can turn and map large amounts of data extracted from various sources into a more useful format. Techniques such as merging, grouping, concatenating, joining, and sorting are used to analyze the data; thereafter it is ready to be used with another dataset." What are the various steps involved in any analytics project?, What are the common problems that data analysts encounter during analysis?, Which are the technical tools that you have used for analysis and presentation purposes?, What are the best methods for data cleaning?, What is the significance of Exploratory Data Analysis (EDA)?, "Explain descriptive, predictive, and prescriptive analytics.", What are the different types of sampling techniques used by data analysts?, "Describe univariate, bivariate, and multivariate analysis.", What are your strengths and weaknesses as a data analyst?, What are the ethical considerations of data analysis?, How can you handle missing values in a dataset?, What are some common data visualization tools you have used?, Explain the term Normal Distribution., What is Time Series analysis?, How is Overfitting different from Underfitting?, How do you treat outliers in a dataset?, What are the different types of Hypothesis testing?, Explain the Type I and Type II errors in Statistics?, How would you handle missing data in a dataset?, Explain the concept of outlier detection and how you would identify outliers in a dataset., "In Microsoft Excel, a numeric value can be treated as a text value if it precedes with what?", "What is the difference between COUNT, COUNTA, COUNTBLANK, and COUNTIF in Excel?", How do you make a
dropdown list in MS Excel?, Can you provide a dynamic range in “Data Source” for a Pivot table?, What is the function to find the day of the week for a particular date value?, How does the AND() function work in Excel?, Explain how VLOOKUP works in Excel?, What function would you use to get the current date and time in Excel?, "Using the below sales table, calculate the total quantity sold by sales representatives whose name starts with A, and the cost of each item they have sold is greater than 10.", "How do you handle missing data in a dataset, and what methods do you use for imputation?","Handling missing data is vital. Common methods include mean imputation, median imputation, forward or backward filling, or using machine learning models like K-Nearest Neighbors (KNN) to impute missing values based on similar data points." "What is A/B testing, and how can it be used to improve a product or website?","A/B testing involves comparing two versions (A and B) of a web page or product to determine which performs better. It helps in optimizing elements like layout, content, or features by collecting user data and making data-driven decisions for improvements." Describe data normalization and why it's important in databases.,"Data normalization is the process of organizing data in a database to reduce redundancy and improve data integrity. It involves breaking data into smaller, related tables and linking them using keys. Normalization prevents data anomalies and ensures efficient storage and retrieval." Explain the differences between a data warehouse and a traditional database.,"A data warehouse is designed for storing and analyzing large volumes of historical data. It's optimized for reporting and analytics. In contrast, a traditional database is used for transactional operations and real-time data processing." 
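Two of the imputation methods mentioned above, mean imputation and forward filling, are easy to sketch without any libraries. The small column of values here is hypothetical sample data.

```python
import statistics

# Hypothetical column with missing entries represented as None.
values = [10.0, None, 30.0, None, 50.0]

# Mean imputation: replace each missing entry with the mean of observed values.
observed = [v for v in values if v is not None]
fill = statistics.mean(observed)                      # (10 + 30 + 50) / 3 = 30
mean_imputed = [v if v is not None else fill for v in values]
print(mean_imputed)    # → [10.0, 30.0, 30.0, 30.0, 50.0]

# Forward fill: carry the last observed value forward into each gap.
forward_filled, last = [], None
for v in values:
    if v is None:
        v = last                  # reuse the previous observation
    forward_filled.append(v)
    last = v
print(forward_filled)  # → [10.0, 10.0, 30.0, 30.0, 50.0]
```

In practice the same logic is usually delegated to pandas (e.g., `fillna`) or a scikit-learn imputer, but the underlying rules are these.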
What are the key steps in exploratory data analysis (EDA)?,"EDA includes steps like data cleaning, univariate analysis, bivariate analysis, feature engineering, data visualization, and hypothesis testing. It aims to understand data patterns and relationships before in-depth analysis." How do you determine the appropriate data visualization for a given dataset?,"The choice of data visualization depends on the data's nature and the insights sought. For example, bar charts are suitable for categorical data, while scatter plots are used for showing relationships between two numerical variables." "What is regression analysis, and when is it useful in data analysis?","Regression analysis is a statistical method used to model the relationship between a dependent variable and one or more independent variables. It's useful when predicting outcomes, understanding correlations, or identifying trends in data." "Can you define the term ""correlation"" and provide an example of how it's used in data analysis?","Correlation measures the statistical relationship between two variables. For instance, in sales analysis, we might correlate advertising spend with revenue to assess their relationship and impact on sales." "What is the purpose of a SQL JOIN statement, and how does it work?","A SQL JOIN statement combines data from two or more tables based on a related column. It's used to retrieve information from multiple tables in a single query, enabling complex data retrieval and analysis." How do you assess the quality and reliability of a dataset?,"Data quality is assessed by checking for accuracy, completeness, consistency, and timeliness. Techniques include data profiling, data cleansing, and comparing data against predefined quality criteria." What is the difference between supervised and unsupervised machine learning?,"Supervised learning uses labeled data to train a model for making predictions or classifications. 
Unsupervised learning, on the other hand, deals with unlabeled data and focuses on discovering patterns or structures within the data." How do you ensure data security and privacy in your data analysis work?,"Data security involves using encryption, access controls, and secure data storage. Privacy is ensured by anonymizing sensitive information and complying with data protection regulations like GDPR." Describe the process of feature engineering in machine learning.,"Feature engineering involves selecting, creating, or transforming input variables (features) to improve the performance of machine learning models. It helps models capture relevant patterns in the data." How can data analysis help a business make informed decisions and gain a competitive advantage?,"Data analysis provides insights into customer behavior, market trends, and operational efficiency. Informed decisions based on data can optimize processes, target the right audience, and drive innovation, giving a competitive edge." What programming languages and tools are you proficient in for data analysis?,"I'm proficient in programming languages like Python and R, and I use tools like pandas, NumPy, Matplotlib, and Jupyter for data analysis and visualization." Explain the concept of time series analysis and its applications.,"Time series analysis deals with data collected over time, such as stock prices or temperature records. It's used for forecasting future values, identifying trends, and detecting seasonal patterns." How do you approach data storytelling to communicate your findings effectively?,"Data storytelling involves presenting data insights in a compelling and understandable way. I use clear visuals, narratives, and context to convey the significance of findings to both technical and non-technical audiences." Can you discuss the challenges and potential biases in data analysis?,"Challenges include data quality issues, selection bias, and ethical concerns.
Biases can arise from unrepresentative samples or flawed data collection methods. It's crucial to address and mitigate these biases." What are the best practices for documenting your data analysis process?,"Best practices include maintaining clear documentation of data sources, preprocessing steps, analysis methods, and assumptions. This documentation ensures reproducibility and transparency in the analysis." Describe the process of data cleansing and its importance.,Data cleansing involves identifying and correcting errors or inconsistencies in datasets. It's essential to remove noise and ensure that the data used for analysis is accurate and reliable. How do you handle outliers in a dataset?,Outliers can be treated by either removing them if they are due to errors or transforming them using methods like Winsorization to reduce their impact on statistical analysis. "What is cross-validation in machine learning, and why is it important?",Cross-validation is a technique to assess a model's performance by splitting the data into training and testing sets multiple times. It helps prevent overfitting and provides a more reliable evaluation of model accuracy. How do you stay updated with the latest trends and techniques in data analysis?,"I regularly read industry blogs, research papers, and participate in online courses and conferences. Additionally, I engage with a professional network to exchange knowledge and insights." Can you provide an example of a complex data analysis project you've worked on?,"Certainly, one of the complex projects I've worked on involved analyzing customer behavior for an e-commerce platform, where I used advanced segmentation techniques and machine learning models to optimize product recommendations and increase conversion rates." 1. What exactly is R?,R is a free and open-source programming language and environment for statistical computing and data analysis, widely used in data science. 2. What are the various data structures available in R?
Explain them in a few words.,"These are the data structures available in R: Vector - a vector is a collection of data objects of the same fundamental type, and its members are called components. Lists - lists are R objects that can include items of various types, such as integers, strings, vectors, or another list. Matrix - a matrix is a two-dimensional data structure built by binding together vectors of the same length; all elements of a matrix must be of the same type (numeric, logical, character). DataFrame - a data frame, unlike a matrix, is more general in that individual columns may contain different data types (numeric, character, logical, etc.); it is a rectangular list that combines the properties of matrices and lists." 3. What are some of the advantages of R?,"It is open-source: for different reasons this counts as both a benefit and a drawback, but being open source means it is publicly available, free to use, and extensible. Its ecosystem of packages: as a data scientist, you don't have to spend a lot of time reinventing the wheel, thanks to the built-in functions provided by R packages. Its statistical and graphical abilities: R's graphing capabilities, according to many people, are unrivaled." 4. What are the disadvantages of using R?,"You should be aware of the drawbacks of R, just as you should know its benefits. Memory and performance: R is often compared to Python as the less powerful language in terms of memory and performance; this is debatable, and many believe it is no longer relevant now that 64-bit systems have taken over the market. It's free and open source: open-source software offers both pros and cons; there is no governing organization in charge of R, so there is no single point of contact for assistance or quality assurance, which also implies that R packages aren't always of the best quality. Security.
Because R was not designed with security in mind, it must rely on third-party resources to fill in the holes." 5. How do you import a CSV file?,"It's simple to load a .csv file into R: call the read.csv() function and provide it with the file's location, e.g., house <- read.csv(""C:/Users/John/Desktop/house.csv"")" 6. What are the various components of graphic grammar?,"In general, the components of the grammar of graphics are: the data layer, aesthetics layer, geometry layer, facet layer, co-ordinate layer, and themes layer." "7. What is Rmarkdown, and how does it work? What's the point of it?","RMarkdown is a reporting tool provided by R. R Markdown allows you to produce high-quality reports from your R code and can produce the following output formats: HTML, PDF, and Word." 8. What is the procedure for installing a package in R?,"To install a package in R, run the following command: install.packages(""package_name"")" 9. Name a few R packages that can be used for data imputation?,"These are some R packages that may be used to impute data: MICE, Amelia, missForest, Hmisc, mi, imputeR." 10. Can you explain what a confusion matrix is in R?,"A confusion matrix can be used to evaluate a model's accuracy: a cross-tabulation of observed and predicted classes is calculated. The confusionMatrix() function from the ""caret"" package can be used for this." "11. List some of the functions in the ""dplyr"" package","The dplyr package includes the following functions: filter, select, mutate, arrange, and count." 12. What would you do if you had to make a new R6 Class?,"To begin, we need to develop an object template that contains the class's ""data members"" and ""class functions."" An R6 object template comprises: the name of the class, private data members, and public member functions." 13. What do you know about the R package rattle?,"Rattle is a popular R-based GUI for data mining.
It provides statistical and visual summaries of data, converts data so it can be easily modeled, creates both unsupervised and supervised machine learning models from the data, visually displays model performance, and scores new datasets for production deployment. One of its most valuable features is that your interactions with the graphical user interface are saved as an R script that can be run in R without using the Rattle interface." 14. What are some R functions which can be used to debug?,"The following functions can be used for debugging in R: traceback(), debug(), browser(), trace(), and recover()." "15. What exactly is a factor variable, and why would you use one?","A factor variable is a categorical variable that accepts numeric or character-string values as input. The most important reason to employ a factor variable is that it can be used with great precision in statistical modeling; another advantage is that factors use less memory. To make a factor variable, use the factor() function." "16. In R, what are the three different sorting algorithms?","R's sort() function is used to sort a vector or factor using one of the methods mentioned and discussed below. Radix: this non-comparative sorting method avoids overhead and is usually the most effective; it is a stable algorithm used for integer vectors and factors. Quick Sort: according to the R documentation, this method ""uses Singleton (1969)'s implementation of Hoare's Quicksort technique and is only available when x is numeric (double or integer) and partial is NULL""; it is not regarded as a stable sort. Shell: according to the R documentation, this approach ""uses Shellsort (an O(n^(4/3)) variant from Sedgewick (1986))""." 17. How can R help in data science?,"R reduces time-consuming and graphically intense tasks to minutes and keystrokes. In reality, you're unlikely to come across R outside the world of data science or a related discipline.
It's useful for linear and nonlinear modeling, time-series analysis, graphing, grouping, and many other tasks. Simply put, R was created to manipulate and visualize data, so it's only logical that it is used in data science." 18. What is the purpose of the with() function in R?,"We use the with() function to write simpler code by applying an expression to a data set. Its syntax is: with(data, expression) R Programming Syntax Basics: R is the most widely used language for statistical computing and data analysis, with over 10,000 free packages available in the CRAN repository. Like any other programming language, R has a unique syntax that you must learn to use all of its robust features. An R program has three components: Variables, Comments, and Keywords. Variables are used to store data, Comments are used to make code more readable, and Keywords are reserved words that the interpreter understands. CSV files in R Programming: CSV files are text files in which each row's values are separated by a delimiter, such as a comma or a tab." 2. What is the definition of accuracy?,"It's the most basic performance metric: simply the ratio of correctly predicted observations to total observations. Accuracy is a valuable statistic, but only when you have symmetric datasets with almost identical numbers of false positives and false negatives." 3. What is the definition of precision?,"It's also referred to as the positive predictive value. In your predictive model, precision is the number of correct positives relative to the total number of positives it forecasts: Precision = True-Positives / (True-Positives + False-Positives), i.e. True-Positives / Total Predicted Positives. It's the number of correctly predicted positive items divided by the total number of predicted positive elements.
Precision may be defined as a measure of exactness, quality, or correctness. High precision indicates that most, if not all, of the positives you predicted are correct." 4. What is the definition of recall?,"Recall is also referred to as sensitivity or the true-positive rate: the number of positives the model predicts relative to the actual number of positives in the data. Recall = True-Positives / (True-Positives + False-Negatives), i.e. True-Positives / Total Actual Positives. Recall measures completeness: a high recall implies the model categorized most or all positive elements as positive." 1. What is your definition of Random Forest?,"Random Forest is an ensemble learning approach for classification, regression, and other tasks. A Random Forest works by training a large number of decision trees simultaneously and averaging the predictions of trees built from various portions of the same training set." 2. What are the outputs of Random Forests for Classification and Regression problems?,"Classification: The Random Forest's output is the class chosen by the most trees. Regression: The Random Forest's output is the mean or average forecast of the individual trees." 3. What do Ensemble Methods entail?,"Ensemble techniques are a machine learning methodology that integrates numerous base models to create a single best-fit prediction model. Random Forest is a form of ensemble method. However, there is a law of diminishing returns in ensemble formation: the number of component classifiers in an ensemble significantly influences the accuracy of the prediction." 4. What are some Random Forest hyperparameters?,"Hyperparameters in Random Forest include: The total number of decision trees in the forest. The number of features each tree considers when splitting a node. The maximum depth of the individual trees. The minimum number of samples required to split an internal node. The maximum number of leaf nodes.
The total number of random features to consider. The size of the bootstrapped dataset." 5. How would you determine the Bootstrapped Dataset's ideal size?,"Whatever the size of the bootstrapped dataset, the datasets will differ, since the observations are sampled with replacement. As a result, the training data may be used in its entirety. Most of the time, the best thing to do is to ignore this hyperparameter." 6. Is it necessary to prune Random Forest? Why do you think that is?,"Pruning is a data compression method used in machine learning and search algorithms to reduce the size of decision trees by deleting non-critical and redundant parts of the tree. Random Forest typically does not require pruning because it does not over-fit the way a single decision tree does. This is because the trees are bootstrapped and the many random trees use random features, so the individual trees are strong but not correlated with one another." 7. Is it required to use Random Forest with Cross-Validation?,"A random forest's OOB (out-of-bag) error is comparable to Cross-Validation, and as a result, cross-validation is not required. By default, random forest uses about 2/3 of the data for training and the remainder for testing in regression, and about 70% for training and the rest for testing in classification. Because the variable selection is randomized during each tree split, it is not prone to overfitting like other models." 8. What is the relationship between a Random Forest and Decision Trees?,"Random forest is an ensemble learning approach that uses many decision trees. A random forest may be used for classification and regression; it outperforms a single decision tree and does not have the same tendency to overfit the data. Overfitting occurs when a decision tree trained on a given dataset becomes too deep.
Decision trees may be trained on multiple subsets of the training data to generate a random forest, and the different decision trees can then be averaged to reduce variance." 9. Is Random Forest an Ensemble Algorithm?,"Yes, Random Forest is a tree-based ensemble technique that relies on a set of random variables for each tree. Bagging is used as the ensemble approach, with a decision tree as the individual model. Random forests can be used for classification, regression, and other tasks in which a large number of decision trees are built at the same time. For classification tasks, the random forest's output is the class chosen by the most trees. For regression tasks, the mean or average forecast of the individual trees is returned. Decision trees tend to overfit their training set, which random forests correct." 1. What are some examples of k-Means Clustering applications?,"The following are some examples of k-means clustering applications: Document classification: Based on tags, subjects, and the document's content, k-means may group documents into numerous clusters. Insurance fraud detection: Using previous data on fraudulent claims, it is feasible to identify new claims based on their closeness to clusters that signal fraudulent tendencies. Cyber-profiling criminals: This is the practice of gathering data from people and groups to find significant correlations. Cyber profiling is based on criminal profiles, which offer information to the investigation division to categorize the sorts of criminals present at the crime scene." 2. How can you tell the difference between KNN and K-means clustering?,"The K-nearest neighbor algorithm (KNN) is a supervised classification method. Categorizing an unlabeled data point therefore requires labeled data; KNN tries to classify a data point based on its closeness to the K nearest data points in the feature space. K-means Clustering is an unsupervised classification method.
It merely needs a set of unlabeled points and a number K, and it collects and groups the data into K clusters." 3. What is k-Means Clustering?,"K-means Clustering is a vector quantization approach that divides a set of n observations into k clusters, with each observation belonging to the cluster with the closest mean. K-means clustering minimizes within-cluster variances. Within-cluster variance is an easy-to-understand compactness metric: essentially, the goal is to split the data set into k partitions in the most compact way possible." 4. What is the Uniform Effect produced by k-Means Clustering?,"The Uniform Effect refers to the tendency of k-means clustering to create clusters of uniform size. Even if the data behaves differently, uniform sizes mean that the clusters have about the same number of observations." 5. What are some k-Means Clustering Stopping Criteria?,"The following are some of the most common reasons for stopping: Convergence: there are no more changes; the points remain in the same clusters. The maximum number of iterations has been reached: the method is terminated after the maximum number of iterations, to keep the algorithm's execution time to a minimum. Variance hasn't improved by at least x%: the change in variance did not exceed x% of the starting variance. MiniBatch k-means will not converge on its own, so one of the other criteria is required; the number of iterations is the most common." 6. Why does the Euclidean Distance metric dominate in k-Means Clustering?,"The construction of k-means is not reliant on arbitrary distances: k-means minimizes within-cluster variance. When you examine the definition of variance, you'll notice that it's the sum of squared Euclidean distances from the center. The goal of k-means is to reduce squared errors; there is no ""distance"" as such in this formulation. Pairwise distances between data points are not explicitly used in the k-means process.
It entails repeatedly assigning points to the nearest centroid, based on the Euclidean distance between each data point and a centroid. ""Centroid"" comes from Euclidean geometry: it is a multivariate mean in Euclidean space, and Euclidean distances are the subject of Euclidean space. In most cases, non-Euclidean distances will not work in Euclidean space, which is why k-means is only used with Euclidean distances. Using arbitrary distances is incorrect because k-means may stop converging with other distance functions." 1. What exactly is SQL?,"SQL is an acronym for Structured Query Language. It is a standard language for accessing and manipulating data in databases. In 1986, the American National Standards Institute (ANSI) approved SQL as a standard." 2. What Can SQL do for you?,"SQL is capable of running queries against a database. •SQL may be used to retrieve information from a database. •SQL may be used to create new records in a database. •SQL may be used to update data in a database. •SQL can delete records from a database. •SQL can build new databases. •SQL can create new tables in a database. •SQL can build stored procedures in a database. •SQL may be used to create views in a database. •Permissions can be set on tables, procedures, and views in SQL." 1. How do you distinguish between SQL and MySQL?,"SQL is a standard language based on English. MySQL is a relational database management system (RDBMS). SQL is the foundation of a relational database, and it is used to retrieve and manage data, while MySQL is an RDBMS, similar to SQL Server and Informix." 2. What are the various SQL subsets?,"Data Definition Language (DDL) lets you do things like CREATE, ALTER, and DROP objects in the database. Data Manipulation Language (DML) allows you to alter and access data. It aids in inserting, updating, deleting, and retrieving data from a database.
Data Control Language (DCL) allows you to manage database access, granting and revoking access permissions." 3. What do you mean by database management system (DBMS)? What are the many sorts of it?,"A Database Management System (DBMS) is a software program that captures and analyzes data by interacting with the user, applications, and the database itself. A database is an organized collection of data, and a database management system (DBMS) allows users to interface with it. The database's data may be edited, retrieved, and deleted, and it can be of any type, including strings, integers, and pictures. There are two types of database management systems (DBMS): •Relational Database Management System (RDBMS): Information is organized into relations (tables). MySQL is a good example. •Non-Relational Database Management System: This system has no relations, tuples, or attributes. A good example is MongoDB." "4. In SQL, how do you define a table and a field?","A table is a logically organized collection of data in rows and columns. A column in a table is referred to as a field. Consider the following scenario: Fields: Student ID, Student Name, and Student Marks" 5. How do we define joins in SQL?,"A join clause combines rows from two or more tables based on a common column. It's used to join two tables together or derive data from them. As seen below, there are four different types of joins: •Inner join: The most frequent join in SQL. It's used to get all the rows from multiple tables that satisfy the joining condition. •Full Join: A full join returns all the records when there is a match in either table. As a result, all rows from the left-hand table and all rows from the right-hand table are returned. •Right Join: In SQL, a ""right join"" returns all rows from the right table, but only the matching rows from the left table when the join condition is met.
•Left Join: In SQL, a left join returns all of the rows from the left table, but only the matching rows from the right table when the join condition is met." 6. What is the difference between the SQL data types CHAR and VARCHAR2?,"Both Char and Varchar2 are used for character strings. However, Varchar2 is used for variable-length strings, and Char is used for fixed-length strings. For instance, char(10) can only hold 10 characters and cannot store a string of any other length, but varchar2(10) may store a string of any length up to 10, e.g. 6, 8, or 2 characters." 7. What are constraints?,"In SQL, constraints are used to establish limits on the table's data. They may be specified when the table is created or altered. The following are some examples of constraints: UNIQUE, NOT NULL, FOREIGN KEY, DEFAULT, CHECK, PRIMARY KEY." 8. What is a foreign key?,"A foreign key ensures referential integrity by connecting the data in two tables. The foreign key defined in the child table references the primary key in the parent table. The foreign key constraint prevents actions that would destroy the links between the child and parent tables." "9. What is ""data integrity""?","Data integrity refers to the consistency and correctness of data kept in a database. It also encompasses integrity constraints, which are used to impose business rules on data when it is input into an application or database." 10. What is the difference between a clustered and a non-clustered index?,"The following are the distinctions between a clustered and non-clustered index in SQL: •Clustered indexes are utilized for quicker data retrieval from databases, whereas reading from non-clustered indexes takes longer. •A clustered index changes the way records are stored in a database by sorting rows by the clustered index column. A non-clustered index does not change the way records are stored but instead creates a separate object within a table that points back to the original table rows after searching.
There can only be one clustered index per table, although there can be numerous non-clustered indexes." 11. How would you write a SQL query to show the current date?,"A built-in function in SQL called GetDate() returns the current timestamp/date." "12. What exactly do you mean when you say ""query optimization""?","Query optimization is the step in which the plan for evaluating a query that has the lowest projected cost is identified. The following are some of the benefits of query optimization: •The result is delivered more quickly. •A higher number of queries may be run in less time. •It reduces time and space complexity." "13. What is ""denormalization""?","Denormalization is a technique for moving data from higher to lower normal forms of a database. It aids database administrators in improving the overall performance of the infrastructure by introducing redundancy into a table. It incorporates database queries that merge data from many tables into a single table, adding redundant data to that table." 14. What are the differences between entities and relationships?,"Entities are real-world people, places, and things whose data may be kept in a database. Tables are used to contain information about a single type of object. A customer table, for example, is used to hold customer information in a bank database. Each client's information is stored in the customer table as a collection of attributes (columns inside the table). Relationships are associations between entities that have something in common. The customer name, for example, is linked to the customer account number and contact information, which may be stored in the same table. There may also be relationships between different tables (for example, customer to accounts)." 15. What is an index?,"An index is a performance optimization technique for retrieving records from a table quickly. Because an index makes an entry for each value, retrieving data is faster." 16.
Describe the various types of indexes in SQL.,"In SQL, there are three types of indexes: •Unique Index: If a column is uniquely indexed, this index prevents duplicate values in the field. A unique index is applied automatically when a primary key is defined. •Clustered Index: This index reorders the table's physical order and searches based on key values. There can only be one clustered index per table. •Non-Clustered Index: Non-clustered indexes do not change the physical order of the table and keep the data in a logical order. There may be many non-clustered indexes on a table." "17. What is normalization, and what are its benefits?","The practice of structuring data in SQL to prevent duplication and redundancy is known as normalization. The following are some of the benefits: •Improved database management •Tables with smaller rows •Efficient data access •Greater flexibility for queries •Finding information quickly •Easier security implementation •Allows for easy customization •Reduced data duplication and redundancy •More compact database •Data remains consistent after it has been modified" 18. Describe the various forms of normalization.,"There are several levels of normalization to choose from. These are referred to as normal forms. Each subsequent normal form depends on the one before it. In most cases, the first three normal forms are sufficient. First Normal Form (1NF) – There are no repeating groups between rows. Second Normal Form (2NF) – Every non-key (supporting) column value relies on the whole primary key. Third Normal Form (3NF) – Every non-key (supporting) column value depends solely on the primary key and on no other non-key (supporting) column value." "19. In a database, what is the ACID property?","Atomicity, Consistency, Isolation, and Durability (ACID) are used to verify that data transactions in a database system are processed reliably. Atomicity: Atomicity relates to transactions that either complete or fail entirely.
A transaction refers to a single logical data operation. Atomicity means that if one portion of a transaction fails, the full transaction fails as well, leaving the database state unaltered. Consistency: Consistency guarantees that the data adheres to all validation rules. In basic terms, your transaction never leaves the database in a half-completed state. Isolation: The main purpose of isolation is concurrency control. Durability: Durability refers to the fact that once a transaction has been committed, it will persist regardless of what happens in the meantime, such as a power outage, a crash, or any other type of mistake." "20. What is ""Trigger"" in SQL?","Triggers are stored procedures in SQL that are configured to execute automatically before or after data changes. When an insert, update, or other query is run against a specified table, they allow you to run a batch of code." 21. What are the different types of SQL operators?,"Logical Operators, Arithmetic Operators, Comparison Operators" 22. Do NULL values have the same meaning as zero or a blank space?,"A null value should not be confused with a value of zero or a blank space. A null value denotes an unavailable, unknown, unassigned, or not applicable value, whereas a zero denotes a number and a blank space denotes a character." 23. What is the difference between a natural join and a cross join?,"A natural join depends on columns with the same name and data type in both tables, whereas a cross join creates the cross product or Cartesian product of the two tables." 24. What is a subquery in SQL?,"A subquery is a query defined inside another query to get data or information from the database. The outer query of a subquery is referred to as the main query, while the inner query is referred to as the subquery. Subqueries are always processed first, and the subquery's result is then passed on to the main query. It may be nested within any statement, including SELECT, UPDATE, and others.
Any comparison operator, such as >, <, or =, can be used in a subquery." 25. What are the various forms of subqueries?,"Correlated and Non-Correlated subqueries are the two forms of subquery. Correlated subqueries: These queries select data from a table that the outer query refers to. A correlated subquery is not considered an independent query because it refers to another table's column. Non-Correlated subquery: This query is a stand-alone query whose output is substituted into the main query's results." "1. What is a database management system (DBMS), and what is its purpose? Use examples to explain RDBMS.","The database management system, or DBMS, is a collection of applications or programs that allow users to construct and maintain databases. A database management system (DBMS) offers a tool or interface for executing different database activities such as adding, removing, and updating data. It is software that allows data to be stored more compactly and securely than in a file-based system. A database management system (DBMS) helps a user overcome issues such as data inconsistency, data redundancy, and other issues in a database, making it more comfortable and organized to use. Examples of prominent DBMS systems are file systems, XML, the Windows Registry, and others. RDBMS stands for Relational Database Management System, and it was first introduced in the 1970s to make it easier to access and store data than with a DBMS. In contrast to a DBMS, which stores data as files, an RDBMS stores data as tables. Storing data in rows and columns makes it easier to locate specific values in the database and is more efficient than in a DBMS. MySQL and Oracle DB are good examples of RDBMS systems." 2. What is a database?,"A database is a collection of well-organized, consistent, and logical data that can be readily updated, accessed, and controlled.
Most databases are made up of tables or objects (everything created with the create command is a database object) that include records and fields. A tuple or row represents a single entry in a table. Attributes and columns are the main components of data storage; they carry information about a specific element of the database. A database management system (DBMS) pulls data from a database using queries submitted by the user." 3. What drawbacks of traditional file-based systems make a database management system (DBMS) a superior option?,"The lack of indexing in a typical file-based system leaves us little choice but to scan the whole page, making content access time-consuming and sluggish. The other issue is redundancy and inconsistency, as files often include duplicate and redundant data, and updating one causes all of them to become inconsistent. Traditional file-based systems also make it more difficult to access data, since it is disorganized. Another drawback is the absence of concurrency control, which causes one operation to lock the entire page, unlike a DBMS, which allows several operations to operate on the same file simultaneously. Integrity checking, data isolation, atomicity, security, and other difficulties with traditional file-based systems have all been addressed by DBMSs." 4. Describe some of the benefits of a database management system (DBMS).,"The following are some of the benefits of employing a database management system (DBMS): Data Sharing: Data from a single database may be shared by several users simultaneously. Such sharing also lets end-users respond quickly to changes in the database environment. Integrity constraints: The presence of such constraints allows for the ordered and refined storage of data. Controlling database redundancy: Integrating all data in a single database eliminates redundancy in the database.
Data Independence: This allows you to change the data structure without affecting the composition of any of the application programs that are currently running. Provides backup and recovery facilities: A DBMS may be configured to automatically generate backups of the data and restore the data in a database when needed. Data Security: A database management system (DBMS) provides the capabilities needed to make data storage and transmission more dependable and secure. Some common technologies used to safeguard data in a DBMS include authentication (the act of granting restricted access to a user) and encryption (encrypting sensitive data such as OTPs, credit card information, and so on)." 5. Describe the different DBMS languages.,"The following are some of the DBMS languages: DDL (Data Definition Language) includes commands for defining databases, such as CREATE, ALTER, DROP, TRUNCATE, and RENAME. DML (Data Manipulation Language) is a set of instructions that may alter data in a database, such as SELECT, UPDATE, INSERT, and DELETE. DCL (Data Control Language) offers instructions for dealing with the database system's user permissions and controls, for example GRANT and REVOKE. TCL (Transaction Control Language) offers instructions for dealing with database transactions; COMMIT, ROLLBACK, and SAVEPOINT are a few examples." 6. What does it mean to have ACID qualities in a database management system (DBMS)?,"In a database management system, ACID stands for Atomicity, Consistency, Isolation, and Durability. These properties enable a safe and secure exchange of data among different users. Atomicity: This property supports the notion of either running the whole transaction or not running it at all; if a database update occurs, it should either be reflected across the entire database or not at all. Consistency: This property guarantees that data is consistent before and after a transaction in a database.
Isolation: This property ensures that each transaction is separate from the others; the status of one ongoing transaction has no bearing on any other. Durability: This property guarantees that data is not destroyed in the event of a system failure or restart and that it is available in the same condition as before the failure or restart." 7. Are NULL values in a database the same as blank space or zero?,"No, a null value is different from zero and blank space. It denotes a value that is unassigned, unknown, unavailable, or not applicable, as opposed to blank space, which denotes a character, and zero, which denotes a number. For instance, a null value in the ""number of courses"" taken by a student indicates that the value is unknown, whereas a value of 0 indicates that the student has not taken any courses." 8. What does Data Warehousing mean?,"Data warehousing is the process of gathering, extracting, processing, and importing data from numerous sources and storing it in a single database. A data warehouse may be considered a central repository for data analytics that receives data from transactional systems and other relational databases. A data warehouse is a collection of historical data from an organization that aids in decision-making." 9. Describe the various data abstraction layers in a database management system (DBMS).,"Data abstraction is the process of concealing irrelevant details from users. There are three levels of data abstraction: Physical Level: This is the lowest level, and it is maintained by the database management system. The contents of this level are typically hidden from system administrators, developers, and users; it comprises descriptions of how data is stored. Conceptual or logical level: Developers and system administrators operate at the conceptual or logical level, which specifies what data is kept in the database and how the data points are related.
External or View level: This level depicts only a portion of the database and keeps the table structure and physical storage details hidden from users. The result of a query is an example of data abstraction at the View level. A view is a virtual table formed by choosing fields from one or more database tables." "10. What does an entity-relationship (E-R) model mean? Define an entity, entity type, and entity set in a database management system.","An entity-relationship model is a diagrammatic approach to database design in which real-world objects are represented as entities and the relationships between them are indicated. Entity: An entity is a real-world object with attributes that represent the object's characteristics. A student, an employee, or a teacher, for example, represents an entity. Entity Type: This is a collection of entities with the same attributes. An entity type is represented by one or more linked tables in a database. Attributes may be thought of as the traits that distinguish one entity type from another. A student, for example, is an entity type with attributes such as student id, student name, and so on. Entity Set: An entity set is the collection of all the entities of a given entity type in a database. For example, the set of all students, the set of all employees, and the set of all teachers are entity sets." 11. What is the difference between intension and extension in a database?,"The main distinction between intension and extension in a database is as follows: Intension: Intension, also known as the database schema, is the description of the database. It is specified during the database's construction and typically remains unmodified. Extension, on the other hand, is a measurement of the number of tuples in a database at any particular moment in time. A snapshot of a database is also known as the extension of the database.
The value of the extension changes when tuples are created, modified, or deleted in the database." 12. Describe the differences between the DELETE and TRUNCATE commands in a database management system.,"DELETE command: this command is used to delete rows from a table based on the condition in the WHERE clause. It deletes only the rows the WHERE clause specifies. It can be rolled back if necessary. Because it keeps a log and locks each table row before removing it, it is slow. TRUNCATE command: this command is used to delete all data from a table in a database, making it similar to a DELETE command without a WHERE clause. Truncation cannot reliably be rolled back: depending on the database version it may be possible, but it is hard and can result in data loss. It doesn't keep a row-by-row log and deletes the entire table's data at once, so it's quick." 13. Define lock. Explain the significant differences between a shared lock and an exclusive lock in a database transaction.,"A database lock is a mechanism that prevents two or more database users from updating the same piece of data at the same time. When a single database user or session obtains a lock, no other database user or session may edit the data until the lock is released. Shared lock: A shared lock is required for reading a data item, and many transactions can hold a shared lock on the same data item; a shared lock thus allows many transactions to read the same data item. Exclusive lock: An exclusive lock is held by a transaction that will perform a write operation. This form of lock avoids inconsistency in the database by allowing only one transaction at a time to write the data item." 14. What do normalization and denormalization mean?,"Normalization is breaking up data into multiple tables to reduce duplication. Normalization allows for more efficient use of storage space and makes maintaining database integrity easier.
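The DELETE-versus-TRUNCATE distinction above can be sketched with Python's built-in sqlite3 module. This is a minimal illustration, not a definitive reference: the `users` table and its rows are hypothetical, and since SQLite has no TRUNCATE statement, an unqualified DELETE stands in for it here.

```python
import sqlite3

# In-memory database with a small, made-up sample table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("Alice",), ("Bob",), ("Carol",)])
conn.commit()

def row_count(c):
    return c.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# DELETE with a WHERE clause removes only the matching rows...
conn.execute("DELETE FROM users WHERE name = 'Bob'")
after_delete = row_count(conn)      # 2 rows remain

# ...and, because it runs inside a transaction, it can be rolled back.
conn.rollback()
after_rollback = row_count(conn)    # all 3 rows are back

# An unqualified DELETE empties the whole table (TRUNCATE-style).
conn.execute("DELETE FROM users")
conn.commit()
after_truncate = row_count(conn)    # 0 rows remain
print(after_delete, after_rollback, after_truncate)
```

Note that real TRUNCATE implementations deallocate pages without row-by-row logging, which is why they are fast and hard to roll back; the unqualified DELETE above is only a behavioral stand-in.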
Denormalization is the reversal of normalization, in which tables that have been normalized are combined into a single table to speed up data retrieval. The JOIN operation allows us to produce a denormalized representation of the data." 1. What are the various characteristics of a relational database management system (RDBMS)?,Name: Each relation should have a name distinct from all other relations in a relational database. Attributes: An attribute is the name given to each column in a relation. Tuples: Each row in a relation is referred to as a tuple. A tuple is a container for a set of attribute values. "2. What is the E-R Model, and how does it work?",The E-R model stands for Entity-Relationship. The E-R model is based on a real-world environment that consists of entities and related objects. Entities are represented in a database by a set of characteristics. 3. What does an object-oriented model entail?,The object-oriented paradigm is built on the concept of collections of objects. Values are saved in instance variables within an object. Classes are made up of objects that share the same values and use the same methods. 4. What are the three different degrees of data abstraction?,"Physical level: This is the most fundamental level of abstraction, describing how data is stored. Logical level: The logical level of abstraction describes the types of data recorded in a database and their relationships. View level: This is the highest level of abstraction, and it describes only a part of the entire database." 5. What are Codd's 12 rules for relational databases?,"Edgar F. Codd presented a set of thirteen rules (numbered zero to twelve) that became known as Codd's 12 rules. Codd's rules are as follows: Rule 0: The system must qualify as relational, as a database, and as a management system.
Rule 1: The information rule: Every piece of data in the database must be represented uniquely, most notably as values in column positions within a distinct table row. Rule 2: The guaranteed access rule: all data must be accessible; every scalar value in the database must be logically addressable. Rule 3: Systematic treatment of null values: the DBMS must support null values consistently, independent of data type. Rule 4: An active online catalog (database structure) based on the relational model: the system must provide an online, relational catalog that is accessible to authorized users through their regular queries. Rule 5: The comprehensive data sublanguage rule: The system must support at least one relational language that meets the following criteria: 1. It has a linear syntax. 2. It can be used interactively as well as within application programs. 3. It supports data definition (DDL), data manipulation (DML), security and integrity constraints, and transaction management operations (begin, commit, and roll back). Rule 6: The view update rule: All views that are theoretically updatable must also be updatable by the system. Rule 7: High-level insert, update, and delete: The system must support insert, update, and delete operators at a high level. Rule 8: Physical data independence: Changing the physical level (how data is stored, for example, using arrays or linked lists) should not require changing the application. Rule 9: Logical data independence: Changing the logical level (tables, columns, rows, and so on) should not require changing the application. Rule 10: Integrity independence: Integrity constraints must be recognized and stored separately in the catalog, not in application programs. Rule 11: Distribution independence: Users should not see how pieces of a database are distributed to multiple sites.
Rule 12: The nonsubversion rule: If a low-level (i.e., record-at-a-time) interface is provided, that interface cannot be used to subvert the system." "6. What is the definition of normalization? What, therefore, explains the various normalizing forms?","Database normalization is a method of structuring data to reduce data redundancy. As a result, data consistency is ensured. Data redundancy has drawbacks, including wasted disk space, data inconsistency, and slow DML (Data Manipulation Language) queries. Normalization forms include 1NF, 2NF, 3NF, BCNF, 4NF, 5NF, 6NF, and DKNF. 1. 1NF: Each column's data should be atomic; a column should not hold multiple values separated by commas. There are no repeating column groups in the table, and a primary key is used to identify each record individually. 2. 2NF: The table should satisfy all of 1NF's requirements, and redundant data should be moved to a separate table; foreign keys are then used to link these tables, so that no non-key attribute depends on only part of the primary key. 3. 3NF: A 3NF table must meet all of the 1NF and 2NF requirements, and no non-key attribute may be transitively dependent on the primary key." "7. What are a primary key, a foreign key, a candidate key, and a super key?","Primary key: the key that prevents duplicate and null values from being stored. A primary key can be specified at the column or table level, and only one primary key is permitted per table. Foreign key: a foreign key only admits values from the linked column, and it accepts null or duplicate values. It can be declared at either the column or table level, and it points to a column in a unique/primary key table. Candidate key: A candidate key is a minimal super key; no proper subset of a candidate key's attributes may itself be a super key. Super key: a set of attributes of the schema whose values uniquely identify each row. The values of the super key attributes cannot be identical in any two rows." 8.
What are the various types of indexes?,"The following are examples of indexes: Clustered index: This is the order in which data is physically stored on the hard drive. As a result, a database table can only have one clustered index. Non-clustered index: This index type does not define the physical order of the data but defines a logical ordering. B-Tree or B+ trees are commonly used for this purpose." 9. What are the benefits of a relational database management system (RDBMS)?,•Redundancy can be controlled. •Integrity can be enforced. •Inconsistency can be prevented. •Data can be shared. •Standards can be enforced. 10. What are some RDBMS subsystems?,"RDBMS subsystems are language processing, input-output, security, storage management, distribution control, logging and recovery, transaction control, and memory management." "11. What is the Buffer Manager, and how does it work?","The Buffer Manager collects data from disk storage and chooses what data should be stored in cache memory for speedier processing. MYSQL MySQL is a relational database management system (RDBMS) that is free and open-source. It works both on the web and on the server. MySQL is a fast, dependable, and simple database, and it's a free and open-source program. MySQL is a database management system that runs on many systems and employs standard SQL. It's a SQL database management system that's multithreaded and multi-user. Tables are used to store information in a MySQL database. A table is a set of columns and rows that hold linked information. MySQL includes standalone clients that allow users to communicate directly with a MySQL database using SQL. Still, MySQL is more commonly used in conjunction with other programs to create applications that require relational database functionality. Over 11 million people use MySQL." 1. What exactly is MySQL?,"MySQL is a scalable web server database management system, and it can expand with the website.
MySQL is by far the most widely used open-source SQL database management system, developed by Oracle Corporation." 2. What are a few of the benefits of MySQL?,"•MySQL is a flexible database that operates on any operating system. •MySQL is focused on performance. •Enterprise-level SQL: MySQL lacked sophisticated functionality such as subqueries, views, and stored procedures for quite some time, but these have since been added. •Full-text indexing and searching of documents. •Query caching: this significantly improves MySQL's performance. •Replication: a MySQL server may be replicated on another, with many benefits. •Security and configuration." "3. What exactly do you mean when you say ""databases""?",A database is a structured collection of data saved in a computer system and organized to be found quickly. Information may be quickly located via databases. 4. What does SQL stand for in MySQL?,"SQL stands for Structured Query Language in MySQL. Other databases, such as Oracle and Microsoft SQL Server, also employ this language. It's worth noting that SQL is not case-sensitive. However, writing SQL keywords in CAPS and other names and variables in lowercase is a good practice." 5. What is a MySQL database made out of?,"A MySQL database comprises one or more tables, each with its own set of records or rows. Within these rows, the data is contained in numerous columns or fields." 6. What are your options for interacting with MySQL?,"You may communicate with MySQL in three different ways: via a web interface, using the command line, or through a programming language." "7. What are MySQL Database Queries, and how do I use them?","A query is a request or a precise question. A database may be queried for specific information, and a record returned." "8. In MySQL, what is a BLOB?","The abbreviation BLOB denotes a binary large object, and its purpose is to store a variable amount of binary data.
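Storing and reading back binary data can be sketched with Python's sqlite3 (SQLite has a single BLOB type rather than MySQL's tiered BLOB types, so this is only illustrative; the table and file names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT, content BLOB)")

payload = bytes(range(256)) * 4  # 1 KiB of arbitrary binary data
conn.execute("INSERT INTO files VALUES (?, ?)", ("photo.bin", payload))

# The bytes come back exactly as stored.
stored = conn.execute("SELECT content FROM files WHERE name = 'photo.bin'").fetchone()[0]
print(len(stored), stored == payload)  # 1024 True
```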
There are four different kinds of BLOBs: TINYBLOB, BLOB, MEDIUMBLOB, and LONGBLOB. A BLOB may store a lot of information; documents, photos, and even films are examples. If necessary, you may save a whole manuscript as a BLOB file." 9. What is the procedure for adding users to MySQL?,"You may create a user by executing the CREATE USER command and giving the required credentials. Consider the following example: CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'sample_password';" "10. What exactly are MySQL's ""Views""?","A view in MySQL is a collection of rows that are returned when a certain query is run. A 'virtual table' is another name for this. Views make it simple to make a query available via an alias. Views provide the following advantages: security, simplicity, and maintainability." 11. Define MySQL Triggers?,"A trigger is a task that runs in reaction to a predefined database event, such as adding a new record to a table. This event entails inserting, altering, or removing table data, and the action might take place before or immediately after any such event. Triggers serve a variety of functions, including: •Validation •Audit trails •Referential integrity enforcement" "12. In MySQL, how many triggers are possible?","There are six triggers that may be used in a MySQL table: Before Insert, After Insert, Before Update, After Update, Before Delete, and After Delete." 13. What exactly is a MySQL server?,"The server, mysqld, is the heart of a MySQL installation; it handles all database and table management." 14. What are the different types of MySQL relationships?,"In MySQL, there are three types of relationships: •One-to-One: When two things have a one-to-one relationship, they are usually included as columns in the same table. •One-to-Many: When one row in one table is linked to many rows in another table, this is known as a one-to-many (or many-to-one) relationship.
•Many-to-Many: Many rows in one table are connected to many rows in another table in a many-to-many relationship. To establish this link, add a third junction table with key columns referencing the other two tables." 15. What is MySQL Scaling?,"In MySQL, scaling capacity refers to the system's ability to handle demand, and it's helpful to consider load from a variety of perspectives, including: quantity of data, number of users, size of related datasets, and user activity." 16. What is SQL Sharding?,"Sharding divides huge tables into smaller portions (called shards) distributed across different servers. The benefit of sharding is that searches, maintenance, and other operations are quicker because each sharded database is typically much smaller than the original." 1. What are constraints?,A constraint is an attribute of a table column that performs data validation. Constraints help to ensure data integrity by prohibiting the entry of incorrect data. "2. What do you mean when you say ""data integrity""?",Data integrity is the consistency and correctness of data kept in a database. 3. Is it possible to add constraints to a table that already contains data?,"Yes, but it also depends on the data. For example, if a column contains null values and you add a not-null constraint, you must first replace all null values with some value." 4. Can a table have more than one primary key?,No. A table can have only one primary key. 5. What is the definition of a foreign key?,"A foreign key (FK) in one table refers to a primary key (PK) in another. It prohibits any operations that might break the links between tables and the data values they represent. FKs are used to ensure that referential integrity is maintained." 6. What is the difference between primary and unique key constraints?,"A unique constraint allows a null value: if a field is nullable, a unique constraint in SQL Server will allow exactly one null value (MySQL permits multiple nulls in a unique column).
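The duplicate-rejection side of these constraints can be demonstrated with SQLite (a sketch with a made-up account table; note that NULL handling differs by engine, and SQLite, like MySQL, allows multiple NULLs in a unique column):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE account (
    account_id INTEGER PRIMARY KEY,   -- no duplicates, no NULL key values
    email TEXT UNIQUE                 -- no duplicates, but NULL is allowed
)""")
conn.execute("INSERT INTO account VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO account VALUES (2, NULL)")  # UNIQUE column accepts NULL

try:
    conn.execute("INSERT INTO account VALUES (3, 'a@example.com')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # duplicate value in the UNIQUE column

try:
    conn.execute("INSERT INTO account VALUES (1, 'b@example.com')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # duplicate PRIMARY KEY value
```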
A table may have several unique constraints, but only a single primary key." 7. Is it possible to use unique key constraints across multiple columns?,Yes! Unique key constraints can be imposed on a composite of many fields to assure record uniqueness. Example: City + State in the StateList table. "8. When you add a unique key constraint, which index does the database construct by default?",A non-clustered index is constructed when you add a unique key constraint. "9. What does it mean when you say ""default constraints""?","When no value is supplied in the Insert or Update statement, a default constraint inserts a value in the column." 10. What kinds of data integrity are there?,"There are three types of integrity in relational databases: Entity Integrity (unique constraints, primary key), Referential Integrity (foreign keys), and Domain Integrity (check constraints, data type)." 1. What exactly is an index?,"An index is a database object that the SQL server uses to improve query performance by speeding up query access to rows in the data table. We can save time and increase the speed of database queries and applications by employing indexes. When an index is constructed on a column, SQL Server creates a second index table. When a user tries to obtain data from the table using the indexed column, SQL Server goes straight to the index and quickly retrieves the data. Up to 250 indexes may be used in a table. The index type describes how SQL Server stores the index internally." 2. Why are indexes required in SQL Server?,"Queries employ indexes to discover data from tables quickly. Tables and views both have indexes. The index on a table or view is quite similar to the index in a book. If a book doesn't contain an index and we're asked to find a certain chapter, we'll have to browse through the whole book, beginning with the first page.
If we have the index, on the other hand, we look up the chapter's page number in the index and then proceed to that page number to find the chapter. Table and view indexes can help a query discover data quickly in the same way. In fact, the presence of the appropriate indexes may significantly enhance query performance. If there is no index to aid the query, the query engine will go over each row in the table from beginning to end. This is referred to as a Table Scan, and the performance of a table scan is poor." 3. What are the different types of indexes in SQL Server?,Clustered Index Non-Clustered Index 4. What is a Clustered Index?,"In the case of a clustered index, the data in the index will be arranged the same way as the data in the actual table, much as a phone book stores its entries in the order of its index. A table that has a clustered index is referred to as a ""clustered table."" The data rows in a table without a clustered index are kept unordered. A table can only have one clustered index, and it is constructed automatically when the table's primary key constraint is created. A clustered index determines the physical order of data in a table; as a result, a table can only have one clustered index." 5. What is a non-clustered index?,"In a non-clustered index, the data in the index is organized differently than the data in the actual table. A non-clustered index is similar to a textbook index: the data is kept in one location, while the index is kept in another, and the index contains references to the data's storage location. A table can contain more than one non-clustered index, since the non-clustered index is kept separately from the actual data, similar to how a book can have an index by chapters at the beginning and another index of common phrases at the end. The data is stored in the index in ascending or descending order of the index key, which has no bearing on data storage in the table.
We can define a maximum of 249 non-clustered indexes per table (in SQL Server 2005; later versions allow 999)." "6. In SQL Server, what is the difference between a clustered and a non-clustered index?","This is one of the most common SQL Server index interview questions. Let's look at the differences. There can only be one clustered index per table, although there can be several non-clustered indexes. The Clustered Index is slightly quicker than the Non-Clustered Index: when a Non-Clustered Index is used, an extra lookup from the Non-Clustered Index to the table is required to retrieve the actual data. A clustered index defines the row storage order in the table and does not require additional disk space; a non-clustered index is kept separately from the table and thus requires additional storage space. A clustered index is a sort of index that reorders the actual storage of records in a table; as a result, a table can only have one clustered index. A non-clustered index is one in which the logical order of the index differs from the physical order in which the rows are stored." 7. What is a SQL Server Unique Index?,"If the ""UNIQUE"" option is used to build the index, the column on which the index is formed will not allow duplicate values, acting as a unique constraint. Unique clustered and unique non-clustered indexes are both possible. If clustered or non-clustered is not specified when building an index, it will be non-clustered by default. A unique index is used to ensure that key values in the index are unique." 8. When does SQL Server make use of indexes?,"SQL Server utilizes a table's indexes if a select, update, or delete statement includes a ""WHERE"" condition and the WHERE condition's field is an indexed column. If an ""ORDER BY"" clause is included in the select statement, indexes will be used as well.
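The same behavior can be observed in SQLite through EXPLAIN QUERY PLAN (a sketch with a made-up orders table; SQL Server exposes this via its execution plans instead): without an index the WHERE query is a full scan, and after CREATE INDEX the same query is answered through the index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"cust{i % 100}") for i in range(1000)])

def plan(sql):
    # The fourth column of each EXPLAIN QUERY PLAN row is the plan description.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer = 'cust7'"
print(plan(query))  # a full table scan: no usable index yet

conn.execute("CREATE INDEX idx_customer ON orders (customer)")
print(plan(query))  # now a search using idx_customer
```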
Note: When SQL Server searches the database for information, it first determines the optimum execution plan for retrieving the data and then employs that plan, which may be a full table scan or an index scan." 9. When should a table's indexes be created?,"If a table column is regularly used in a condition or ORDER BY clause, we should establish an index on it. It is not recommended that an index be created for each column, since too many indexes might reduce database performance: any change to the data must be reflected in all index tables." 10. What is the maximum number of clustered and non-clustered indexes per table?,"Clustered Index: Each table has only one Clustered Index. A clustered index stores all of the data for a single table, ordered by the index key. A phone book exemplifies the Clustered Index. Non-Clustered Index: Each table can include many Non-Clustered Indexes. A Non-Clustered Index is like an index found at the back of a book. 1 Clustered Index + 249 Non-Clustered Indexes = 250 indexes in SQL Server 2005; 1 Clustered Index + 999 Non-Clustered Indexes = 1000 indexes in SQL Server 2008." "11. Clustered or non-clustered index, which is faster?","The Clustered Index is slightly quicker than the Non-Clustered Index: when a Non-Clustered Index is used, an extra lookup from the Non-Clustered Index to the table is required to retrieve the actual data." "12. In SQL Server, what is a Composite Index? What are the benefits of utilizing a SQL Server Composite Index? What exactly is a Covering Query?","A composite index is an index on two or more columns, and composite indexes can be either clustered or non-clustered. A covering query is one in which all of the required information can be obtained from an index. A clustered index, if chosen by the query optimizer, always covers a query, because it contains all the data in a table." 13. What are the various index settings available for a table?,"One of the following index configurations can be applied to a table: There are no indexes.
A clustered index only; a clustered index plus many non-clustered indexes; a non-clustered index only; or many non-clustered indexes." 14. What is the name of a table with neither a clustered nor a non-clustered index? What is the purpose of it?,Heap, or unindexed table. Heap is the name given to it by Microsoft Press books and Books Online (BOL). A heap is a table that does not have a clustered index and does not have pointers connecting its pages; the only structures that connect the pages of such a table are the IAM pages. Unindexed tables are ideal for storing data quickly. It is often preferable to remove all indexes from a table before performing a large number of inserts and then to recreate those indexes afterward. 1. What is data integrity?,"Data integrity is the overall correctness, completeness, and consistency of data. Data integrity also refers to the data's safety and security with respect to regulatory compliance, such as GDPR compliance. It is maintained by a set of processes, rules, and standards put in place during the design phase. If data integrity is protected, the information in a database will stay complete, accurate, and dependable no matter how long it is held or how often it is accessed. The importance of data integrity in defending against data loss or a data leak cannot be overstated: to keep your data secure from harmful outside influences, you must first guarantee that internal users are handling data appropriately. By implementing suitable data validation and error checking, you can ensure that sensitive data is never miscategorized or stored incorrectly, thereby reducing your exposure to risk." 2. What are the types of data integrity?,"To maintain data integrity, there must be a proper understanding of its two forms: physical and logical. Both hierarchical and relational databases rely on collections of procedures and methods that maintain data integrity." 3.
What is Physical Integrity?,"Physical integrity refers to safeguarding data's completeness and correctness during storage and retrieval. Physical integrity is jeopardized when natural calamities strike, electricity goes out, or hackers disrupt database functionality. Human mistakes, storage degradation, and many other difficulties may also make it impossible for data processing managers, system programmers, applications programmers, and internal auditors to access correct data." 4. What is Logical Integrity?,"In a relational database, logical integrity ensures that data remains intact as it is used in various ways. Logical integrity, like physical integrity, protects data from human mistakes and hackers, but in a different way. Logical integrity may be divided into four categories." 5. Explain the integrity of entities.,"Entity integrity relies on the generation of primary keys to guarantee that data isn't duplicated and that no key field in a table is null. These unique values identify pieces of data. It's a characteristic of relational systems, which store data in tables that may be connected and used in many ways." 6. What is Referential Integrity?,"The term ""referential integrity"" refers to a set of procedures that ensure that data is saved and used consistently. Rules embedded in the database's structure concerning how foreign keys are used ensure that only appropriate modifications, additions, or deletions of data are made. Rules may contain limits that prevent redundant data entry, ensure accurate data entry, and prohibit entering data that does not apply." 7. What is Domain Integrity?,"Domain integrity is a set of operations that ensures that each piece of data in a domain is accurate. In this context, a domain is the set of permitted values that a column can hold. Constraints and other measures that limit the format, type, and amount of data entered may be included." 8.
User-defined integrity,"User-defined integrity refers to the rules and constraints that users create to meet their own requirements. When it comes to data security, entity, referential, and domain integrity aren't always adequate, and business rules must frequently be considered and included in data integrity safeguards." 9. What are the risks to data integrity?,"The integrity of data recorded in a database can be affected for many reasons. The following are a few examples: Human error: Data integrity is jeopardized when people enter information erroneously, duplicate or delete data, fail to follow proper protocols, or make mistakes when implementing procedures designed to protect data. Transfer errors: A transfer error occurs when data cannot be correctly transferred from one location in a database to another. In a relational database, a transfer error has occurred when data is present in the destination table but not in the source table. Viruses and bugs: Spyware, malware, and viruses are programs that can infiltrate a computer and change, erase, or steal data. Compromised hardware: Sudden computer or server breakdowns, as well as issues with how a computer or other device performs, are instances of serious failures that might indicate that your hardware has been compromised. Compromised hardware might render data inaccurately or incompletely, limit or reduce data access, or make information difficult to use. The following steps can be taken to reduce or remove data integrity risks: limiting data access and modifying permissions to prevent unauthorized parties from making changes to data; validating data, both when it's collected and when it's used, to ensure that it's accurate; using logs to track when data is added, edited, or removed; backing up data; carrying out internal audits regularly; and using software to spot errors." 1. What is a cursor in SQL Server?,A cursor is a database object that represents a result set and handles data one row at a time. 2.
How to utilize the Transact-SQL Cursor?,"Declare the cursor, open the cursor, fetch the data row by row, close the cursor, and deallocate the cursor." 3. Define the different sorts of cursor locks,"There are three different types of cursor locks. READ ONLY: this prevents the table from being updated through the cursor. SCROLL LOCKS: rows are locked as they are read into the cursor, so positioned updates are guaranteed to succeed. OPTIMISTIC: no locks are taken; a positioned update succeeds only if the row has not been changed since it was read." 4. Tips for cursor optimization,"Close the cursor when it is not in use." 5. The cursor's disadvantages and limitations,"A cursor consumes network resources by requiring a round-trip each time it fetches a record." 1. What constitutes good data visualization?,"Use of color theory, data positioning, bars over circles and squares, and reducing chart junk by avoiding 3D charts and eliminating the use of pie charts to show proportions." 2. How can you see more than three dimensions in a single chart?,"Typically, charts show data using the height, width, and depth of images; to visualize more than three dimensions, we employ visual cues such as color, size, shape, and animation to portray changes over time." 3. What processes are involved in the 3D transformation of data visualization?,"3D transformation is useful because it provides a more comprehensive picture of the data and the ability to see it in more detail. The overall pipeline is as follows: ●Modeling Transformation ●Viewing Transformation ●Projection Transformation ●Workstation Transformation" 4. What is the definition of Row-Level Security?,"Row-level security limits the data a user can see and access based on access filters. Depending on the visualization tool being used, users can define row-level security. Several prominent visualization tools, including Qlik, Tableau, and Power BI, support it." 5. What Is Visualization “Depth Cueing”?,"Depth cueing is a fundamental challenge in visualization approaches. Without depth information, some 3D objects lack visible line and surface identification. To distinguish them, visible lines can be drawn solid while hidden lines are drawn dashed or deleted." 6.
Explain Surface Rendering in Visualization?,"Surface rendering takes into account: the lighting conditions in the scene, the degree of transparency, assigned surface characteristics, exploded and cutaway views, how rough or smooth the surfaces are to be, and three-dimensional and stereoscopic views." 7. What is Informational Visualization?,"Information visualization focuses on computer-assisted tools to explore huge amounts of abstract data. The User Interface Research Group at Xerox PARC, which includes Dr. Jock Mackinlay, was the first to coin the phrase ""information visualization."" Selecting, manipulating, and displaying abstract data in a way that allows human engagement for exploration and comprehension is a practical use of information visualization in computer applications. The dynamics of visual representation and interaction are important features of information visualization. Strong approaches allow the user to make real-time changes to the display, allowing for unequaled observation of patterns and structural relationships in the abstract data." 8. What are the benefits of using Electrostatic Plotters?,"They outperform pen plotters and high-end printers in terms of speed and quality. A scan-conversion feature is now available on several electrostatic plotters. There are color electrostatic plotters on the market, and they make numerous passes over the page to plot color images." 9. What is Pixel Phasing?,"Pixel phasing is an antialiasing method that smooths out stair steps by shifting the electron beam closer to the positions defined by the object's shape." 10. Define Perspective Projection,"This is accomplished by projecting points onto the display plane along lines that converge at a center of projection. As a result, objects farther away from the viewing point appear smaller than those nearby." 11. Explain winding numbers in visualization,"The winding number approach determines whether a particular point is inside or outside a polygon. This approach gives each edge that crosses the scan line a direction number.
If the edge begins below the line and finishes above the scan line, the direction should be -1; otherwise, it should be 1. When the value of the winding number is nonzero, the point is considered to be inside the polygon or two-dimensional object." 12. What is Parallel Projection?,"Parallel projection is the process of creating a 2D representation of a 3D scene by projecting points from the object's surface along parallel lines onto the display plane. Different 2D views of objects may be created by projecting the visible points." 13. What is a blobby object?,Some objects may not retain a constant shape but instead vary their surface features in response to particular motions or close contact with other objects. Molecular structures and water droplets are two examples of blobby objects. 14. What is Non-Emissive?,"Non-emissive displays are optical effects that convert light from some source, such as sunlight, into graphic patterns. A good example is the liquid crystal display." 15. What is Emissive?,An emissive display converts electrical energy into light energy. Examples include plasma screens and thin-film electroluminescent displays. 16. What is Scan Code?,"When a key is pressed on the keyboard, the keyboard controller stores a code corresponding to the pressed key in the keyboard buffer, which is a section of memory. This code is called the scan code." 17. What is the difference between a window port and a viewport?,A window port refers to the section of an image that a window will display. The viewport is the display area in which the selected portion of the image is shown. 1. What is the distinction between deep learning and machine learning?, 2. Give a detailed explanation of the Decision Tree algorithm, 3. What exactly is sampling? How many different sampling techniques are you familiar with?, 4. What is the distinction between a type I and a type II error?, "5. What is the definition of linear regression?
What are the definitions of the words p-value, coefficient, and r-squared value? What are the functions of each of these elements?", 6. What is statistical interaction?, 7. What is selection bias?, 8. What does a data set with a non-Gaussian distribution look like?, "9. What is the Binomial Probability Formula, and how does it work?", 10. What distinguishes k-NN clustering from k-means clustering?, 11. What steps would you take to build a logistic regression model?, 12. Explain the 80/20 rule and its significance in model validation., 13. Explain the concepts of precision and recall. What is their relationship to the ROC curve?, 14. Distinguish between the L1 and L2 regularization approaches., "15. What is root cause analysis, and how does it work?", 16. What are hash table collisions?, "17. Before implementing machine learning algorithms, what are some procedures for data wrangling and cleaning?", 18. What is the difference between a histogram and a box plot?, "19. What is cross-validation, and how does it work?", "20. Define the terms ""false-positive"" and ""false-negative."" Is it preferable to have a large number of false positives or a large number of false negatives?", "21. In your opinion, which is essential, model performance or accuracy, when constructing a machine learning model?", 22. What are some examples of scenarios in which a general linear model fails?, 23. Do you believe that 50 little decision trees are preferable to a single huge one? Why?, 1. What are the most important data scientist tools and technical skills?,"Because data science is such a sophisticated profession, you'll want to demonstrate to the hiring manager that you're familiar with all of the most up-to-date industry-standard tools, software, and programming languages. Data scientists typically use R and Python among the different statistical programming languages used in data research.
Both may be used for statistical tasks, including building a nonlinear or linear model, regression analysis, statistical testing, data mining, and so on. RStudio Server is another essential data science application, whereas Jupyter Notebook is frequently used for statistical modelling, data visualizations, and machine learning functions, among other things. Tableau, PowerBI, Bokeh, Plotly, and Infogram are just a few of the dedicated data visualization tools that Data Scientists use frequently. Data scientists must also have strong SQL and Excel skills. “Any specific equipment or technical skills required for the position you're interviewing for should also be included in your response. Examine the job description, and if there are any tools or applications you haven't used before, it's a good idea to familiarize yourself with them before the interview.”" 2. How should outlier values be treated?,"Outliers can be eliminated in some cases. You can remove garbage values or values that you know aren't true. Outliers with extreme values that differ significantly from the rest of the data points in a collection can also be deleted. Suppose you can't get rid of outliers. In that case, you may reconsider whether you chose the proper model, employ methods (such as random forests) that aren't as affected by outlier values, or attempt normalizing your data." 3. Tell me about a unique algorithm you came up with., 4. What are some of the advantages and disadvantages of your preferred statistics software?, 5. Describe a data science project where you had to work with a lot of code. What did you take away from the experience?, 6. How would you use five dimensions to portray data properly?, 7. Assume you are using multiple regression to create a predictive model. Describe how you plan to test this model., 9. How do you know that your modifications are better than doing nothing while updating an algorithm?, "10.
What would you do if you had an unbalanced data set for prediction (i.e., many more negative classes than positive classes)?", 11. How would you validate a model you constructed using multiple regression to produce a predictive model of a quantitative outcome variable?, "12. I have two models of equivalent accuracy and processing power. Which one should I use for production, and why?", "13. The data set has missing values. What are your plans for dealing with them?", 1. What qualities do you believe a competent Data Scientist should possess?,"Your response to this question will reveal a lot to a hiring manager about how you view your position and the value you offer to a company. In your response, you might discuss how data science necessitates a unique set of competencies and skills. A skilled Data Scientist must be able to combine technical skills like parsing data and creating models with business sense like understanding the challenges they're dealing with and recognizing actionable insights in their data. You might also mention a Data Scientist you admire in your response, whether it's a colleague you know or an influential industry figure." 3. What are some of your strengths and weaknesses?, 4. Which data scientist do you aspire to be the most like?, 5. What attracted you to data science in the first place?, 6. What unique skills do you believe you can provide to the team?, 7. What made you leave your last job?, 8. What sort of compensation/pay do you expect?, 9. Give a few instances of data science best practices., 10. What data science project at our organization would you want to work on?, 11. Do you like to work alone or in a group of Data Scientists?, "12. In five years, where do you see yourself?", 13. How do you deal with tense situations?, 14. What inspires and motivates you?, 15. What criteria do you use to determine success?, 16. What kind of work atmosphere do you want to be in?, 17.
What do you enjoy doing outside of data science?, 1. Tell me about a time when you were a member of a multi-disciplinary team.,"A Data Scientist works with a diverse group of people in technical and non-technical capacities. Working with developers, designers, product experts, data analysts, sales and marketing teams, and top-level executives, not to mention clients, is not unusual for a Data Scientist. So, in your response to this question, show that you're a team player who enjoys the opportunity to meet and interact with people from other departments. Choose a scenario in which you reported to the company's highest-ranking officials to demonstrate not just that you can communicate with anybody but also how important your data-driven insights have been in the past." 2. Could you tell me about a moment when you used your leadership skills on the job?, 3. What steps do you use to resolve a conflict?, 4. What method do you like to use to establish rapport with others?, 5. Discuss a successful presentation you delivered and why you believe it went well., 6. How would you communicate a complex technical issue to a colleague or client who is less technical?, 7. Describe a situation in which you had to be cautious when discussing sensitive information. How did you pull it off?, "8. On a scale of 1 to 10, how good are your communication skills? Give instances of situations that prove the rating is correct.",Extras 1. Tell me about a moment when you were tasked with cleaning and organizing a large data collection,"According to studies, Data Scientists spend most of their time on data preparation rather than data mining or modelling. As a result, if you've worked as a Data Scientist before, you've almost certainly cleaned and organized a large data collection. It's also true that this is a job that few individuals enjoy. However, data cleaning is one of the most crucial processes.
As a result, you should walk the hiring manager through your data preparation process, including deleting duplicate observations, correcting structural problems, filtering outliers, dealing with missing data, and validating data." 2. Tell me about a data project you worked on and encountered a difficulty. How did you react?, "3. Have you gone above and beyond your normal responsibilities? If so, how did you go about doing it?", 4. Tell me about a time when you were unsuccessful and what you learned from it., 5. How have you used data to improve a customer's or stakeholder's experience?, 6. Give me an example of a goal you've attained and how you got there., 7. Give an example of a goal you didn't achieve and how you dealt with it., 8. What strategies did you use to meet a tight deadline?, 9. Tell me about an instance when you successfully settled a disagreement, 1. What's the difference between support vector machines and logistic regression? What is an example of when you would choose to use one over the other?, 2. What is the integral representation of a ROC area under the curve?, "3. A disc is spinning on a spindle, and you don't know which direction the disc is spinning. A set of pins is given to you. How will you utilize the pins to show which way the disc is spinning?", 3. What would you do if you discovered that eliminating missing values from a dataset resulted in bias?, "4. What metrics would you consider when addressing queries about a product's health, growth, or engagement?", "5. When attempting to address business difficulties with our product, what metrics would you consider?", 6. How would you know whether a product is performing well or not?, 7. What is the best way to tell if a new observation is an outlier? What is the bias-variance trade-off?, 8. Discuss how to randomly choose a sample of a product's users., "9. Before using machine learning algorithms, explain the data wrangling and cleaning methods.", 10.
How would you deal with a binary classification that isn't balanced?, 11. What makes a good data visualization different from a bad one?, 12. What's the best way to find percentiles? Write the code for it., 13. Make a function that determines whether a word is a palindrome., why do you want to work in this company?, do you consider yourself successful?, are you willing to travel?, what are your salary expectations?, what would you consider your greatest strengths?, what would you consider your greatest weakness?, what motivates you?, why did you leave your last job?, what experience do you have in this field?, what do coworkers say about you?, why should we hire you?, are you a team player?, what is your philosophy towards work?, what have you learned from mistakes on the job?, how would you know you were successful on this job?,"Being successful means the goals that were set are met. Being successful also means standards are not only reached but even exceeded wherever possible." are you willing to work overtime, nights, or weekends?, what will you do if you don't get this position?, what have you done to improve your knowledge in the last year?, how would you be an asset to this company?, how long would you expect to work for us in case you are hired?, why do you think you would do well at this job?, what irritates you about coworkers?, do your skills match this job or another job more closely?, what has disappointed you about a job?, if you were hiring a person for this job, what would you look for?, what role do you tend to play in a team?, what was the most difficult decision you have made?, are you willing to make sacrifices for this company?, what qualities do you look for in a boss?, are you applying to other companies as well?, do you know anyone who works in our company?, how do you propose to compensate for your lack of experience?, have you ever worked in a job that you hated?, what would your previous supervisor say your strongest point is?,"Some of my
strongest points at work are being hardworking, patient, and a quick learner." what is the most difficult thing about working with you?, what suggestions have you made in your previous employment that were implemented?, would you rather be liked or feared?, how do you cope with stress?, would you rather work for money or job satisfaction?, what was your biggest challenge with your previous boss?, do you enjoy working as part of a team?, why should we hire you?, has anything ever irritated you about people you've worked with?, do you have any questions for me?, , what is the sql server query execution sequence?, what is normalization?, what are the three degrees of normalization and how is normalization done in each degree?, what are the different database objects?, what is collation?, what is a constraint and what are the seven constraints?, what is a surrogate key?, what is a derived column, and how can its performance be improved?,"The Derived Column is a new column generated on the fly by applying expressions to transformation input columns. Ex: FirstName + ' ' + LastName AS FullName. A derived column affects database performance due to the creation of a temporary new column. The execution plan can save the new column to give better performance next time."
what is a transaction?, what are the differences between oltp and olap?,"OLTP stands for Online Transactional Processing; OLAP stands for Online Analytical Processing. OLTP - Normalization level: highly normalized. Data usage: current data (database). Processing: fast for delta operations (DML). Operations: delta operations (update, insert, delete), aka DML. Terms used: table, columns, and relationships. OLAP - Normalization level: highly denormalized. Data usage: historical data (data warehouse). Processing: fast for read operations. Operations: read operations (select). Terms used: dimension table, fact table." how do you copy just the structure of a table?, what are the different types of joins?, what are the different types of restricted joins?, what is a subquery?, what are the set operators?, what is a derived table?, what is a view?, Follow me: https://www.youtube.com/c/SauravAgarwal what are the types of views?, what is an indexed view?, what does with check do?, what is a ranking function and what are the four ranking functions?, what is partition by?, what is a temporary table and what are the two types of it?,"They are tables just like regular tables." explain variables?,"A variable is a memory space (placeholder) that contains a scalar value, EXCEPT table variables, which hold 2D data. Variables in SQL Server are created using the DECLARE statement. Variables are BATCH-BOUND. Variables that start with @ are user-defined variables." explain dynamic sql?, what is a sql injection attack?, what is self join?, what is a correlated subquery?, what is the difference between a regular subquery and a correlated subquery?,"Based on the above explanation, an inner subquery is independent from its outer subquery in a Regular Subquery. On the other hand, an inner subquery depends on its outer subquery in a Correlated Subquery."
what are the differences between delete and truncate?,"Delete: a DML statement that deletes rows from a table and can specify rows using a WHERE clause. Logs every deleted row in the log file. Slower, since DELETE records every row that is deleted. DELETE continues using the earlier max value of the identity column. Can have triggers on DELETE. Truncate: a DDL statement that wipes out the entire table; you cannot delete specific rows. Does minimal logging (it does not log every row). TRUNCATE removes the pointers that point to the table's pages, which are deallocated. Faster, since TRUNCATE does not record every row in the log file. TRUNCATE resets the identity column. Cannot have triggers on TRUNCATE." what are the three different types of control flow statements?, what is a table variable? explain its advantages and disadvantages., what are the differences between a temporary table and a table variable?, what are stored procedures?, what are the four types of sp?,"System Stored Procedures (SP_****): built-in stored procedures that were created by Microsoft. User Defined Stored Procedures: stored procedures that are created by users; the common naming convention is usp_****. CLR (Common Language Runtime): stored procedures that are implemented as public static methods on a class in a Microsoft .NET Framework assembly. Extended Stored Procedures (XP_****): stored procedures that can be used in other platforms such as Java or C++." explain the types of sp.,"SP with a single input parameter; SP with multiple parameters; SP with output parameters: extracting data from a stored procedure based on an input parameter and outputting it using output variables; SP with a RETURN statement (the return value is always a single integer value)." what are the characteristics of sp?, what are the advantages of sp?,"Precompiled code, hence faster.
They allow modular programming, which means you can break down a big chunk of code into smaller pieces of code. This way the code will be more readable and easier to manage. Reusability. Can enhance the security of your application: users can be granted permission to execute an SP without having direct permissions on the objects referenced in the procedure. Can reduce network traffic: an operation of hundreds of lines of code can be performed through a single statement that executes the code in the procedure, rather than by sending hundreds of lines of code over the network. SPs are precompiled, which means an Execution Plan is created once and reused on subsequent executions, which can save up to 70% of execution time. Without that, SPs would be just like any regular T-SQL statements." what are user defined functions (udf)?, what is the difference between a stored procedure and a udf?, what are the types of udf?, what is the difference between a nested udf and a recursive udf?,"Nested UDF: calling a UDF within another UDF. Recursive UDF: calling a UDF within itself." what is a trigger?, what are the types of triggers?, what are inserted and deleted tables (aka magic tables)?, what are some string functions to remember?,"LEN(string) returns the length of string." what are the three different types of error handling?, explain about cursors?,"Cursors are a temporary database object which is used to loop through a table on a row-by-row basis. There are five types of cursors: 1. Static: shows a static view of the data with only the changes done by the session which opened the cursor. 2. Dynamic: shows data in its current state as the cursor moves from record to record. 3. Forward Only: moves only record by record. 4. Scrolling: moves anywhere. 5. Read Only: prevents data manipulation to the cursor data set." what is the difference between a table scan and a seek?,"Scan: going through the table from the first page to the last page, offset by offset or row by row.
Seek: going to the specific node and fetching the information needed. Seek is the fastest way to find and fetch the data. So if you look at your Execution Plan and all the operations are seeks, that means it is optimized." why are dml operations slower on indexes?, what is a heap (a table on a heap)?, what is the architecture of a hard disk in terms of extents and pages?,"A hard disk is divided into extents. Every extent has eight pages. Every page is 8 KB (8,060 bytes of usable row space)." what are the nine different types of indexes?, what is a clustering key?, explain about a clustered index?, what happens when a clustered index is created?, what are the four different types of searching information in a table?, what is fragmentation?, what are the two types of fragmentation?, what are statistics?,"Statistics allow the Query Optimizer to choose the optimal path in getting the data from the underlying table. Statistics are histograms of at most 200 sampled values from columns separated by intervals. Every statistic holds the following info: 1. The number of rows and pages occupied by a table's data. 2. The time that the statistics were last updated. 3. The average length of keys in a column. 4. A histogram showing the distribution of data in the column." what are some optimization techniques in sql?, how do you present the following tree in the form of a table?, how do you reverse a string without using reverse string?, what is deadlock?,"Deadlock is a situation where, say there are two transactions, the two transactions are waiting for each other to release their locks. SQL Server automatically picks the transaction that should be killed, which becomes the deadlock victim, rolls back its changes, and throws an error message for it." what is a fact table?,"The primary table in a dimensional model where the numerical performance measurements (or facts) of the business are stored so they can be summarized to provide information about the history of the operation of an organization.
We use the term fact to represent a business measure. The level of granularity defines the grain of the fact table." what is a dimension table?,"Dimension tables are highly denormalized tables that contain the textual descriptions of the business and the facts in their fact table. Since it is not uncommon for a dimension table to have 50 to 100 attributes, and dimension tables tend to be relatively shallow in terms of the number of rows, they are also called wide tables. A dimension table has to have a surrogate key as its primary key and has to have a business/alternate key to link between the OLTP and OLAP." what are the types of measures?,"Additive: measures that can be added across all dimensions (cost, sales). Semi-Additive: measures that can be added across some dimensions and not others. Non-Additive: measures that cannot be added across any dimensions (stock rates)." what is a star schema?, what is a snowflake schema?, what is granularity?,"The lowest level of information that is stored in the fact table, usually determined by the time dimension table. The best granularity level would be per transaction, but it would require a lot of memory." what is a surrogate key?, what are some advantages of using a surrogate key in a data warehouse?, what is the data type difference between fact and dimension tables?,"1. Fact Tables: they hold numeric data; they contain measures; they are deep. 2. Dimension Tables: they hold textual data; they contain attributes of their fact tables; they are wide." what are the types of dimension tables?, what is your strategy for the incremental load?, what is cdc?, what is the difference between a connection and a session?, what are all the different types of collation sensitivity?,"Following are the different types of collation sensitivity: Case Sensitivity - A and a, B and b. Accent Sensitivity. Kana Sensitivity - Japanese Kana characters. Width Sensitivity - single-byte character and double-byte character."
what is a clause?, what are the union, minus, and intersect commands?, how to fetch common records from two tables?, how to fetch alternate records from a table?, how to select unique records from a table?, how to remove duplicate rows from a table?, what are rowid and rownum in sql?, how to find the count of duplicate rows?,"Select rollno, count(rollno) from Student Group by rollno Having count(rollno) > 1 Order by count(rollno) desc;" how to find the third highest salary in the employee table using self join?,"Select * from Employee a Where 3 = (Select count(distinct b.Salary) from Employee b Where a.Salary <= b.Salary);" how to display following using query?, how to display a date in dd mon yyyy format?, how to find the count of comma separated values?,"Given Student(Student_Name, Marks) with rows Dinesh: 30,130,20,4; Kumar: 100,20,30; Sonali: 140,10 - Select Student_name, regexp_count(Marks, ',') + 1 As Marks_Count from Student;" what is the query to fetch the last day of the previous month in oracle?,"Select LAST_DAY(ADD_MONTHS(SYSDATE, -1)) from dual;" how to display a string vertically in oracle?, how to display department wise and month wise maximum salary?,"Select Department_no, TO_CHAR(Hire_date, 'Mon') as Month, MAX(Salary) from Employee group by Department_no, TO_CHAR(Hire_date, 'Mon');" how to calculate the number of rows in a table without using the count function?, explain execution plan?, what is scala?,"Scala is a general-purpose programming language providing support for both functional and object-oriented programming." what is tail recursion in scala?,"There are several situations where programmers have to write functions that are recursive in nature. The main problem with recursive functions is that they may eat up all the allocated stack space. To overcome this situation, the Scala compiler provides a mechanism, tail recursion, to optimize these recursive functions so that they do not create new stack space and instead use the current function's stack space.
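A minimal sketch of this optimization (the object, function name, and values are illustrative, not from the original text):

```scala
object TailRecDemo {
  // The recursive call is the LAST action, so the compiler can reuse
  // the current stack frame instead of growing the stack.
  @annotation.tailrec
  def factorial(n: Int, acc: BigInt = 1): BigInt =
    if (n <= 1) acc
    else factorial(n - 1, acc * n) // tail position: nothing runs after the call

  def main(args: Array[String]): Unit = {
    println(factorial(5)) // 120
    // A deep call that would overflow the stack without the optimization:
    println(factorial(50000) > 0) // true
  }
}
```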
To qualify for this, the annotation @annotation.tailrec has to be used before defining the function, and the recursive call has to be the last statement; only then will the function compile, otherwise it will give an error." what are traits in scala?,"Traits are used to define object types specified by the signature of the supported methods. Scala allows traits to be partially implemented, but traits may not have constructor parameters. A trait consists of method and field definitions; by mixing them into classes, they can be reused." who is the father of the scala programming language?,"Martin Odersky, a German computer scientist, is the father of the Scala programming language." what are case classes in scala?,"Case classes are standard classes declared with the special modifier case. Case classes export their constructor parameters and provide a recursive decomposition mechanism through pattern matching. The constructor parameters of case classes are treated as public values and can be accessed directly. For a case class, a companion object and its associated methods also get generated automatically. All the methods in the class, as well as the methods in the companion object, are generated based on the parameter list." what is the superclass of all classes in scala?, what is a scala set? what are the methods through which set operations are expressed?,"A Scala set is a collection of pairwise distinct elements of the same type. A Scala set does not contain any duplicate elements. There are two kinds of sets, mutable and immutable." what is a scala map?,"A Scala Map is a collection of key-value pairs wherein the value in a map can be retrieved using the key. Values in a Scala Map are not unique, but the keys are unique. Scala supports two kinds of maps: mutable and immutable.
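A small illustrative sketch of the two kinds of maps (the names and values are hypothetical):

```scala
object MapDemo {
  // Immutable map (the default): "updating" returns a new map.
  val ages = Map("ann" -> 30, "bob" -> 25)
  val withCarol = ages + ("carol" -> 41) // ages itself is unchanged

  // Mutable map: must be referenced explicitly and is updated in place.
  val cache = scala.collection.mutable.Map[String, Int]()
  cache("hits") = 1
  cache("hits") += 1 // in-place update

  def main(args: Array[String]): Unit = {
    println(ages.size)          // 2: the immutable map was not modified
    println(withCarol("carol")) // 41
    println(cache("hits"))      // 2
  }
}
```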
By default, Scala supports the immutable map; to make use of the mutable map, programmers must import the scala.collection.mutable.Map class explicitly. When programmers want to use mutable and immutable maps together in the same program, the mutable map can be accessed as mutable.Map and the immutable map can just be accessed with the name Map." 9. Name two significant differences between a trait and an abstract class.,"Abstract classes have constructors with zero or more parameters while traits do not; a class can extend any number of traits but only one abstract class." what is the use of tuples in scala?,"Scala tuples combine a fixed number of items together so that they can be passed around as a whole. A tuple is immutable and can hold objects with different types, unlike an array or list." what do you understand by a closure in scala?,"A closure is also known as an anonymous function whose return value depends upon the value of the variables declared outside the function." what do you understand by an implicit parameter?,"Wherever we require that a function could be invoked without passing all the parameters, we use implicit parameters. We provide the default values for all the parameters or the parameters which we want to be used as implicit. When the function is invoked without passing the implicit parameters, the local value of that parameter is used. We need to use the implicit keyword to make a value, function parameter, or variable implicit." what is the companion object in scala?, what are the advantages of scala language?, what are the major drawbacks of scala language?,"Drawbacks of the Scala language: less readable code; a bit tough to understand the code for beginners; complex syntax to learn; less backward compatibility." what are akka, play, and slick in scala?, what is unit in scala?,"The 'Unit' is a type like void in Java.
You can say it is a Scala equivalent of the void in Java, while still providing the language with an abstraction over the Java platform. The empty tuple '()' is a term representing a Unit value in Scala." what is the difference between a normal class and a case class in scala?,"Following are some key differences between a case class and a normal class in Scala: a case class allows pattern matching on it; you can create instances of a case class without using the new keyword; equals(), hashCode(), and toString() methods are automatically generated for case classes in Scala; Scala automatically generates accessor methods for all constructor arguments." what are higher-order functions in scala?,"Higher-order functions are functions that can receive or return other functions. Common examples in Scala are the filter, map, and flatMap functions, which receive other functions as arguments." which scala library is used for functional programming?, what is the best scala style checker tool available for play and scala based applications?, what is the difference between concurrency and parallelism?,"When several computations execute sequentially during overlapping time periods it is referred to as concurrency, whereas when processes are executed simultaneously it is known as parallelism. Parallel collections, Futures, and the Async library are examples of achieving parallelism in Scala." what is the difference between a java method and a scala function?, what is the difference between function and method in scala?, what is an extractor in scala?, is scala a pure oop language?,"Yes, Scala is a pure object-oriented programming language because in Scala, everything is an object and everything is a value. Functions are values and values are objects. Scala does not have primitive data types and does not have static members."
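A brief sketch of "everything is an object" (the names are hypothetical): even an arithmetic operator is a method call, and a function can be stored in a value.

```scala
object PureOopDemo {
  // Operators are ordinary methods: 1 + 2 is sugar for (1).+(2).
  val sum = (1).+(2)

  // Functions are values and can be stored and passed like any other object.
  val double: Int => Int = _ * 2
  val doubled = List(1, 2, 3).map(double)

  def main(args: Array[String]): Unit = {
    println(sum)     // 3
    println(doubled) // List(2, 4, 6)
  }
}
```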
is java a pure oop language?,"Java is not a pure object-oriented programming (OOP) language because it supports the following two non-OOP concepts: Java supports primitive data types, which are not objects; Java supports static members, which are not related to objects." does scala support operator overloading?,"Scala supports operator overloading." does java support operator overloading?,"Java does not support operator overloading." what are the default imports in scala language?, what is an expression?, what is a statement? what is the difference between expression and statement?, what is the difference between java's if-else and scala's if-else?, how to compile and run a scala program?,"You can use the Scala compiler scalac to compile a Scala program (like javac) and the scala command to run it (like java)." how to tell scala to look into a class file for some java class?,"We can use the -classpath argument to include a JAR in Scala's classpath, as shown below: $ scala -classpath jar Alternatively, you can also use the CLASSPATH environment variable." what is the difference between a call-by-value and a call-by-name parameter?,"The main difference between a call-by-value and a call-by-name parameter is that the former is computed before calling the function, and the latter is evaluated when accessed." what exactly is wrong with a recursive function that is not tail-recursive?,"You run the risk of running out of stack space and thus throwing an exception." what is the difference between var and val?, what is a scala anonymous function?, what is function currying in scala?, what do you understand by unit in scala?,"Unit is a subtype of scala.AnyVal and is nothing but the Scala equivalent of Java's void that provides Scala with an abstraction of the Java platform. The empty tuple, i.e. (), in Scala is a term that represents a unit value."
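The call-by-value versus call-by-name distinction above can be sketched by counting evaluations (the object and helper names are hypothetical):

```scala
object ByNameDemo {
  private var evaluations = 0
  private def expensive(): Int = { evaluations += 1; 42 }

  // Call-by-value: the argument is computed once, before the body runs.
  def byValue(x: Int): Int = x + x

  // Call-by-name (the => marker): the argument is re-evaluated on each access.
  def byName(x: => Int): Int = x + x

  def callsForByValue: Int = { evaluations = 0; byValue(expensive()); evaluations }
  def callsForByName: Int = { evaluations = 0; byName(expensive()); evaluations }

  def main(args: Array[String]): Unit = {
    println(callsForByValue) // 1: evaluated once, at the call site
    println(callsForByName)  // 2: evaluated each time x is used in the body
  }
}
```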
what's the difference between nil null none and nothing in scala?, what is lazy evaluation?,"Lazy evaluation means evaluating a program at run-time, on demand: only when clients access the program is it evaluated. The difference between val and lazy val is that val defines variables which are evaluated eagerly, while lazy val also defines variables but they are evaluated lazily." what is call by name?, do scala and java support call by name?,"Scala supports both call-by-value and call-by-name function parameters. However, Java supports only call-by-value, not call-by-name." what is the difference between call by value and call by name function parameters?, what do you understand by apply and unapply methods in scala?, what is an anonymous function in scala?, what are the advantages of anonymous function function literal in scala?,"The advantages of an anonymous function/function literal in Scala: we can assign a function literal to a variable, we can pass a function literal to another function/method, and we can return a function literal as another function/method's result/return value." what is the difference between unapply and apply when would you use them?, what is the difference between a trait and an abstract class in scala?, what special access do companion objects have in scala?,"According to the private access specifier, private members can be accessed only within that class, but Scala's companion object and class provide special access to private members. A companion object can access all the private members of a companion class. Similarly, a companion class can access all the private members of companion objects."
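The val vs lazy val point above can be shown in a few lines (the log list is an illustrative device, not from the original text):

```scala
// val initializers run eagerly at definition; a lazy val initializer runs
// only on first access, and the result is then cached.
object LazyDemo {
  def main(args: Array[String]): Unit = {
    var log = List.empty[String]
    lazy val lazily = { log :+= "lazy evaluated"; 42 }
    val eagerly = { log :+= "eager evaluated"; 7 }

    println(log)     // only the eager initializer has run so far
    println(lazily)  // first access triggers evaluation
    println(lazily)  // cached: the initializer does not run again
    println(log)     // "lazy evaluated" appears exactly once
  }
}
```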
what are scala variables?, mention the difference between an object and a class?, what is the difference between val and var in scala?, what is the difference between array and list in scala?, what is type inference in scala?, what is eager evaluation?,"Eager evaluation means evaluating a program at compile-time or program deployment-time, irrespective of whether clients are using that program or not." what is guard in scala's for comprehension construct?, why does scala prefer immutability?,"Scala prefers immutability in design and in many cases uses it as the default." what are the considerations you need to have when using scala streams?,"Streams in Scala are a type of lazy collection, created using a starting element and then recursively generated from those elements. Streams are like a List, except that elements are added only when they are accessed, hence lazy. Since streams are lazy in terms of adding elements, they can also be unbounded, and once the elements are added, they are cached. Since streams can be unbounded and all the values are computed at the time of access, programmers need to be careful when using methods which are not transformers, such as stream.max, stream.size and stream.sum, as they may result in java.lang.OutOfMemoryError." differentiate between array and list in scala?,"List is an immutable recursive data structure whilst Array is a sequential mutable data structure. Lists are covariant whilst arrays are invariant. The size of a list automatically increases or decreases based on the operations performed on it, i.e. a list in Scala is a variable-sized data structure whilst an array is a fixed-size data structure." which keyword is used to define a function in scala?,"A function is defined in Scala using the def keyword. This may sound familiar to Python developers, as Python also uses def to define a function."
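The stream caveats above can be sketched briefly (the `from` generator is an illustrative helper; `Stream` is named `LazyList` in Scala 2.13+):

```scala
// Streams evaluate elements only on access, so they can be unbounded.
// Transformers like map/take stay lazy; forcing methods like size or sum
// would try to realize the whole (infinite) stream.
object StreamDemo {
  def main(args: Array[String]): Unit = {
    def from(n: Int): Stream[Int] = n #:: from(n + 1) // unbounded, lazy
    val naturals = from(1)
    // Safe: take is a transformer, toList forces only 5 elements
    println(naturals.map(_ * 2).take(5).toList) // List(2, 4, 6, 8, 10)
    // naturals.sum or naturals.size would never terminate here
  }
}
```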
what is monad in scala?, is scala statically typed language?,"Yes, Scala is a statically-typed language." what is statically typed language and what is dynamically typed language?,"A statically-typed language means that type checking is done at compile-time by the compiler, not at run-time. A dynamically-typed language means that type checking is done at run-time, not at compile-time by the compiler." what is the difference between unapply and apply when would you use them?, what is unit in scala?, what is the difference between java's void and scala's unit?,"Unit is something like Java's void." what is app in scala?, what is the use of scala's app?,"The main advantage of using App is that we don't need to write a main method. The main drawback of using App is that we should use the same name, args, to refer to command line arguments, because scala.App's main() method uses this name." what are option some and none in scala?,"Option is a Scala generic type that can either be some generic value or none. It is often used to represent values that may be null." what is scala future?, how does it differ from java's future class?,"The main and foremost difference between Scala's Future and Java's Future class is that the latter does not provide promises/callbacks operations. The only way to retrieve the result is Future.get() in Java." what do you understand by diamond problem and how does scala resolve this?,"The multiple inheritance problem is referred to as the deadly diamond problem or diamond problem. The inability to decide which implementation of a method to choose is referred to as the diamond problem in Scala. Suppose classes B and C both inherit from class A, while class D inherits from both class B and C. While implementing multiple inheritance, if B and C override some method from class A, there is always confusion and a dilemma about which implementation D should inherit.
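The Option/Some/None answer above can be illustrated with a map lookup (the `capitals` data is illustrative only):

```scala
// Option replaces null-prone lookups: a value is Some(v) or None,
// and getOrElse supplies a safe default instead of risking an NPE.
object OptionDemo {
  def main(args: Array[String]): Unit = {
    val capitals = Map("France" -> "Paris", "Japan" -> "Tokyo")
    println(capitals.get("Japan"))                   // Some(Tokyo)
    println(capitals.get("Mars"))                    // None, not null
    println(capitals.get("Mars").getOrElse("unknown")) // unknown
  }
}
```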
This is what is referred to as the diamond problem. Scala resolves the diamond problem through the concept of traits and class linearization rules." what is the difference between in java and scala?, what is repl in scala what is the use of scala's repl?, what are the similarities between scala's Int and java's java.lang.Integer?, what are the differences between scala's Int and java's java.lang.Integer?, what is the relationship between Int and RichInt in scala?, applications?, what is the use of auxiliary constructors in scala?,"An auxiliary constructor is a secondary constructor in Scala, declared using the keywords this and def. The main purpose of using auxiliary constructors is to overload constructors. Just like in Java, we can provide implementations for different kinds of constructors so that the right one is invoked based on the requirements. Every auxiliary constructor in Scala should differ in the number of parameters or in data types." how does yield work in scala?,"If the yield keyword is specified before an expression, the value returned from every iteration of the expression will be returned as a collection. The yield keyword is very useful when you want to use the return value of an expression. The returned collection can be used like a normal collection and iterated over in another loop." what are the different types of scala identifiers?,"There are four types of Scala identifiers: alphanumeric identifiers, operator identifiers, mixed identifiers and literal identifiers." what are the different types of scala literals?, what is sbt what is the best build tool to develop play and scala applications?, what is the difference between :: and ::: in scala?,":: and ::: are methods available in the List class. The :: method is used to prepend an element to the beginning of the list, and the ::: method is used to concatenate the elements of a given list in front of this list. The :: method works as a cons operator for the List class.
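The auxiliary-constructor and yield answers above can be sketched together (the `Rectangle` class is an illustrative example, not from the original text):

```scala
// An auxiliary constructor (def this) overloads the primary constructor;
// for/yield collects the value of each loop iteration into a collection.
class Rectangle(val width: Int, val height: Int) {
  def this(side: Int) = this(side, side) // auxiliary: builds a square
}

object CtorYieldDemo {
  def main(args: Array[String]): Unit = {
    val square = new Rectangle(3)
    println(square.width == square.height) // true

    // yield returns the per-iteration values as a collection
    val squares = for (n <- 1 to 4) yield n * n
    println(squares) // Vector(1, 4, 9, 16)
  }
}
```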
Here cons stands for construct. The ::: method works as a concatenation operator for the List class." what is the difference between #:: and #::: in scala?,"#:: and #::: are methods available in the Stream class. The #:: method works as a cons operator for the Stream class; here cons stands for construct. The #:: method is used to prepend a given element at the beginning of the stream, and the #::: method is used to concatenate a given stream at the beginning of the stream." what is the use of ??? in scala based applications?,"??? is not an operator but a method in Scala. It is used to mark a method that is yet to be implemented." what is the best scala style checker tool available for play and scala based applications?, how does scala support both highly scalable and highly performant applications?,"As Scala supports multi-paradigm programming (both OOP and FP) and uses the actor concurrency model, we can develop highly scalable and high-performance applications very easily." what are the available build tools to develop play and scala based applications?,"The following three are the most popular build tools to develop Play and Scala applications: SBT, Maven and Gradle." what is either in scala?, what are left and right in scala explain either left right design pattern in scala?, how many public class files are possible to define in scala source file?, what is nothing in scala?, how do you create singleton classes in scala?, what is option and how is it used in scala?, what is the difference between a call by value and call by name parameter?,"The main difference between a call-by-value and a call-by-name parameter is that the former is computed before calling the function, and the latter is evaluated when accessed."
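The :: vs ::: answer above takes two lines to demonstrate (the sample lists are illustrative):

```scala
// :: prepends a single element; ::: concatenates a whole list in front.
object ConsDemo {
  def main(args: Array[String]): Unit = {
    val tail = List(2, 3)
    println(1 :: tail)           // List(1, 2, 3)
    println(List(0, 1) ::: tail) // List(0, 1, 2, 3)
  }
}
```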
what is default access modifier in scala does scala have public keyword?, is scala an expression based language or statement based language?, is java an expression based language or statement based language?, mention some keywords which are used by java and not required in scala?,"Java uses the following keywords extensively: the public keyword - to define classes, interfaces, variables etc., and the static keyword - to define static members." why does scala not require them?, what is sqoop?, why is the default number of mappers 4 in sqoop?,"To my knowledge, the default of 4 mappers follows the minimum number of concurrent tasks for one machine. Setting a higher number of concurrent tasks can result in faster job completion." is it possible to set speculative execution in sqoop?, what causes hadoop to throw a classnotfoundexception during sqoop integration?,"The most common cause is that a supporting library (like a connector) was not updated in sqoop's library path, so we need to update it on that specific path." how to view all the databases and tables in rdbms from sqoop?,"Using the below commands we can: sqoop list-databases, sqoop list-tables" how to view table columns details in rdbms from sqoop?, to a hive table so how do we resolve it?,"You can specify the --hive-overwrite option to indicate that the existing table in hive must be replaced after your data is imported into HDFS, or this step is omitted." what is the default file format to import data using apache sqoop?, how do i resolve a communications link failure when connecting to mysql?, how do i resolve an illegalargumentexception when connecting to oracle?,"This could be caused by a non-owner trying to connect to the table, so prefix the table name with the schema, for example SchemaName.OracleTableName."
what's causing "Exception in thread main: java.lang.IncompatibleClassChangeError" when running non-cdh hadoop with sqoop?, to import the tables one by one?,"This can be accomplished using the import-all-tables command in Sqoop and by specifying the --exclude-tables option with it as follows: sqoop import-all-tables --connect --username --password --exclude-tables Table298, Table123, Table299" does apache sqoop have a default database?,"Yes, MySQL is the default database." how can i import large objects blob and clob objects in apache sqoop?, what is the difference between sqoop and distcp command in hadoop?,"Both distCP (Distributed Copy in Hadoop) and Sqoop transfer data in parallel, but the only difference is that the distCP command can transfer any kind of data from one Hadoop cluster to another, whereas Sqoop transfers data between an RDBMS and other components in the Hadoop ecosystem like HBase, Hive, HDFS, etc." what is sqoop metastore?,"The Sqoop metastore is a shared metadata repository that lets remote users define and execute saved jobs created using sqoop job. The sqoop-site.xml should be configured to connect to the metastore." what is the significance of using the --split-by clause for running parallel import tasks in apache sqoop?, parallel map reduce tasks why?,"The Hadoop MapReduce cluster is configured to run a maximum of 4 parallel MapReduce tasks, and the sqoop import can be configured with a number of parallel tasks less than or equal to 4, but not more than 4."
you successfully imported a table using apache sqoop to hbase but when you query the table it is found that the number of rows is less than expected what could be the likely reason?, row into rdbms in which the columns are defined as not null?,"Using the --input-null-string parameter, a default value can be specified so that the row gets inserted with the default value for any column that has a NULL value in HDFS." how will you synchronize the data in hdfs that is imported by sqoop?, what are the relational databases supported in sqoop?, what are the destination types allowed in sqoop import command?,"Currently Sqoop supports data imported into the following services: HDFS, Hive, HBase, HCatalog and Accumulo." is sqoop similar to distcp in hadoop?, what are the majorly used commands in sqoop?, possible speed what can you do?, what might be the root cause and fix for this error scenario?, what is the importance of eval tool?, what is the process to perform an incremental data load in sqoop?, what is the significance of using the compression codec parameter?,"To get the output file of a sqoop import in formats other than .gz, like .bz2, we use the --compression-codec parameter." can freeform sql queries be used with sqoop import command if yes then how can they be used?,"Sqoop allows us to use free form SQL queries with the import command. The import command should be used with the -e and --query options to execute free form SQL queries. When using the -e and --query options with the import command, the --target-dir value must be specified." what is the purpose of sqoop merge?,"The merge tool combines two datasets where entries in one dataset should overwrite entries of an older dataset, preserving only the newest version of the records between both the datasets."
how do you clear the data in a staging table before loading it by sqoop?,"By specifying the --clear-staging-table option we can clear the staging table before it is loaded. This can be done again and again till we get proper data in staging." how will you update the rows that are already exported?, what is the role of jdbc driver in a sqoop setup?,"To connect to different relational databases, sqoop needs a connector. Almost every DB vendor makes this connector available as a JDBC driver which is specific to that DB. So Sqoop needs the JDBC driver of each of the databases it needs to interact with." when to use --target-dir and --warehouse-dir while importing data?, how do you keep data in sync with the data in hdfs imported by sqoop?,"Sqoop can take two approaches: use the --incremental parameter with the append option, where the values of some columns are checked and only in case of modified values is the row imported as a new row; or use the --incremental parameter with the lastmodified option, where a date column in the source is checked for records which have been updated after the last import." is it possible to add a parameter while running a saved job?,"Yes, we can add an argument to a saved job at runtime by using the --exec option: sqoop job --exec jobname -- --newparameter" sqoop takes a long time to retrieve the minimum and maximum values of the columns mentioned in the split-by parameter how can we make it efficient?,"We can use the --boundary-query parameter, in which we specify the min and max values for the column based on which the split can happen into multiple mapreduce tasks. This makes it faster, as the query inside the --boundary-query parameter is executed first and the job is ready with the information on how many mapreduce tasks to create before executing the main query."
how will you implement all or nothing load using sqoop?,"Using the --staging-table option, we first load the data into a staging table and then load it to the final target table only if the staging load is successful." how will you update the rows that are already exported?, how will you handle rows that are deleted?,"Truncate the target table and load it again." how can we load to a column in a relational table which is not null but the incoming value from hdfs has a null value?,"By using the --input-null-string parameter we can specify a default value that will allow the row to be inserted into the target table." how can you schedule a sqoop job using oozie?,"Oozie has in-built sqoop actions inside which we can mention the sqoop commands to be executed." sqoop imported a table successfully to hbase but it is found that the number of rows is fewer than expected what can be the cause?,"Some of the imported records might have null values in all the columns. As HBase does not allow all null values in a row, those rows get dropped." how can you force sqoop to execute a free form sql query only once and import the rows serially?, sqoop runs only 4 parallel tasks what can be the reason?,"The MapReduce cluster is configured to run 4 parallel tasks. So the sqoop command must have a number of parallel tasks less than or equal to that of the MapReduce cluster." what happens when a table is imported into a hdfs directory which already exists using the append parameter?,"Using the --append argument, Sqoop will import data to a temporary directory and then rename the files into the normal target directory in a manner that does not conflict with existing filenames in that directory." how to import only the updated rows from a table into hdfs using sqoop assuming the source has last update timestamp details for each row?,"By using the lastmodified mode."
Rows where the check column holds a timestamp more recent than the timestamp specified with --last-value are imported. give a sqoop command to import all the records from employee table divided into groups of records by the values in the column department_id?,"$ sqoop import --connect jdbc:mysql://DineshDB --table EMPLOYEES --split-by dept_id -m 2" what does the following query do?, what is the importance of $conditions in sqoop?, can sqoop run without a hadoop cluster?,"To run Sqoop commands, Hadoop is a mandatory prerequisite. You cannot run sqoop commands without the Hadoop libraries." is it possible to import a file in fixed column length from the database using sqoop import?, how to use sqoop validation?,"You can use this parameter (--validate) to validate the counts between what's imported/exported between RDBMS and HDFS." how to pass sqoop command as file arguments in sqoop?,"Specify an options file: simply create an options file in a convenient location and pass it to the command line via the --options-file argument, e.g.: sqoop --options-file /users/homer/work/import.txt --table TEST" is it possible to import data apart from hdfs and hive?,"Sqoop supports additional import targets beyond HDFS and Hive. Sqoop can also import records into a table in HBase and Accumulo." is it possible to use sqoop direct command in hbase?,"This function is incompatible with direct import, but Sqoop can do bulk loading as opposed to direct writes. To use bulk loading, enable it using --hbase-bulkload." can i configure two sqoop commands so that they are dependent on each other like if the first sqoop job is successful the second gets triggered and if the first fails the second should not run?,"No, using sqoop commands it is not possible, but you can use oozie for this. Create an oozie workflow and execute the second action only if the first action succeeds."
what is uber mode and where is the setting to enable it in hadoop?, what is hive?, why do we need hive?, what is a metastore in hive?, is hive suitable to be used for oltp systems why?,"No, Hive does not provide insert and update at row level, so it is not suitable for OLTP systems." can you explain about acid transactions in hive?, what are the types of tables in hive?, what kind of data warehouse application is suitable for hive?,"Hive is not considered a full database. The design rules and regulations of Hadoop and HDFS put restrictions on what Hive can do. Hive is most suitable for data warehouse applications, where the data being analyzed is relatively static, response time is less critical, and there are no rapid changes in data. Hive does not provide fundamental features required for OLTP (Online Transaction Processing). Hive is suitable for data warehouse applications on large data sets." explain what is a hive variable what do we use it for?, how to change the warehouse dir location for older tables?, what are the types of metastore available in hive?,"There are three types of metastores available in Hive: embedded metastore (Derby), local metastore and remote metastore." is it possible to use the same metastore by multiple users in case of embedded hive?, if you run hive server what are the available mechanisms for connecting it from an application?, what is serde in apache hive?, which classes are used by hive to read and write hdfs files?, give examples of the serde classes which hive uses to serialize and deserialize data?,"Hive currently uses these SerDe classes to serialize and deserialize data: MetadataTypedColumnsetSerDe: this SerDe is used to read/write delimited records like CSV, tab-separated and control-A separated records (quoting is not supported yet). ThriftSerDe: this SerDe is used to read or write thrift serialized objects; the class file for the Thrift object must be loaded first.
DynamicSerDe: this SerDe also reads or writes thrift serialized objects, but it understands thrift DDL, so the schema of the object can be provided at runtime. It also supports a lot of different protocols, including TBinaryProtocol, TJSONProtocol and TCTLSeparatedProtocol (which writes data in delimited records)." how do you write your own custom serde and what is the need for that?, what is object inspector functionality?, what is the functionality of query processor in apache hive?,"This component implements the processing framework for converting SQL to a graph of map or reduce jobs, and the execution-time framework to run those jobs in the order of dependencies with the help of metastore details." what is the limitation of derby database for hive metastore?,"With the Derby database, you cannot have multiple connections or multiple sessions instantiated at the same time. The Derby database runs in local mode and it creates a lock file, so that multiple users cannot access Hive simultaneously." what are managed and external tables?, what are the complex data types in hive?, how does partitioning help in the faster execution of queries?,"With the help of partitioning, a subdirectory will be created with the name of the partitioned column, and when you perform a query using the WHERE clause, only the particular subdirectory will be scanned instead of scanning the whole table. This gives you faster execution of queries." how to enable dynamic partitioning in hive?, what is bucketing?, how does bucketing help in the faster execution of queries?, how to enable bucketing in hive?,"By default bucketing is disabled in Hive; you can enforce it by setting the property: set hive.enforce.bucketing = true;" what are the different file formats in hive?,"Every file format has its own characteristics and Hive allows you to easily choose the file format which you want to use.
There are different file formats supported by Hive: 1. Text file format 2. Sequence file format 3. Parquet 4. Avro 5. RC file format 6. ORC" how is serde different from file format in hive?, what is regex serde?, how is orc file format optimised for data storage and analysis?,"ORC stores collections of rows in one file, and within the collection the row data is stored in a columnar format. With columnar format it is very easy to compress, thus reducing a lot of storage cost. While querying, it also reads the particular column instead of the whole row, as the records are stored in columnar format. ORC has indexing on every block based on the statistics min, max, sum and count on columns, so when you query, it will skip blocks based on the indexing." how to access hbase tables from hive?,"Using the Hive-HBase storage handler, you can access HBase tables from Hive, and once you are connected, you can query HBase using SQL queries from Hive. You can also join multiple tables in HBase from Hive and retrieve the result." when running a join query i see outofmemoryerrors?, communications exception communications link failure?, does hive support unicode?, are hive sql identifiers eg table names columns etc case sensitive?,"No, Hive is case insensitive." what is the best way to load xml data into hive?,"The easiest way is to use the Hive XML SerDe (com.ibm.spss.hive.serde2.xml.XmlSerDe), which will allow you to directly import and work with XML data." when hive is not suitable?, mention what are the different modes of hive?,"Depending on the size of data nodes in Hadoop, Hive can operate in two modes: local mode and map reduce mode." mention what is hs2 hiveserver2?, mention what hive query processor does?,"The Hive query processor converts SQL into a graph of MapReduce jobs with the execution-time framework.
So that the jobs can be executed in the order of dependencies." mention what are the steps of hive in query processor?,"The components of a Hive query processor include: 1. Logical plan generation 2. Physical plan generation 3. Execution engine 4. Operators 5. UDFs and UDAFs 6. Optimizer 7. Parser 8. Semantic analyzer 9. Type checking" explain how can you change a column datatype in hive?,"You can change a column data type in Hive by using the command: ALTER TABLE table_name CHANGE column_name column_name new_datatype;" mention what is the difference between order by and sort by in hive?,"SORT BY will sort the data within each reducer; you can use any number of reducers for a SORT BY operation. ORDER BY will sort all of the data together, which has to pass through one reducer. Thus, ORDER BY in hive uses a single reducer." explain when to use explode in hive?,"Hadoop developers sometimes take an array as input and convert it into separate table rows. To convert complex data types into desired table formats, we can use the explode function." mention how can you stop a partition from being queried?, can we rename a hive table?,"Yes, using the below command: ALTER TABLE table_name RENAME TO new_name" what is the default location where hive stores table data?,"hdfs://namenode_server/user/hive/warehouse" is there a date datatype in hive?, can we run unix shell commands from hive give example?,"Yes, using the ! mark just before the command. For example, !pwd at the hive prompt will list the current directory." can hive queries be executed from script files how?,"Using the source command.
Example: hive> source /path/to/file/file_with_query.hql" what is the importance of hive rcfile?, what are the default record and field delimiters used for hive text files?,"The default record delimiter is \n and the field delimiters are \001, \002 and \003." what do you mean by schema on read?,"The schema is validated against the data when reading the data, and not enforced when writing data." how do you list all databases whose name starts with p?, what does the use command in hive do?,"With the use command you fix the database on which all the subsequent hive queries will run." how can you delete the db property in hive?,"There is no way you can delete the DBPROPERTY." what is the significance of the line?, how do you check if a particular partition exists?, which java class handles the input and output records encoding into files in hive tables?, what is the significance of if exists clause while dropping a table?, when you point a partition of a hive table to a new directory what happens to the data?, does the archiving of hive tables save any space in hdfs?, a hdfs file and not a local file?, are new and files which already exist?, what does the following query do?, what is a table generating function on hive?,"A table generating function is a function which takes a single column as argument and expands it to multiple columns or rows, for example explode()." how can hive avoid mapreduce?, what is the difference between like and rlike operators in hive?, is it possible to create cartesian join between 2 tables using hive?,"No.
As this kind of join cannot be implemented in MapReduce." what should be the order of table size in a join query?, what is the usefulness of the distribute by clause in hive?, how will you convert the string '512' to a float value in the price column?,"Select cast(price as FLOAT)" what will be the result when you do cast('abc' as int)?,"Hive will return NULL." can we load data into a view?, what types of costs are associated in creating index on hive tables?, what does streamtable(tablename) do?, can a partition be archived what are the advantages and disadvantages?,"Yes, a partition can be archived. The advantage is that it decreases the number of files stored in the namenode, and the archived file can be queried using hive. The disadvantage is that it will cause less efficient queries and does not offer any space savings." what is a generic udf in hive?, the following statement failed to execute what can be the cause?, how do you specify the table creator name when creating a table in hive?, which method has to be overridden when we use custom udf in hive?, how can multiple clients access the metastore at the same time?,"The default metastore configuration allows only one Hive session to be opened at a time for accessing the metastore. Therefore, if multiple clients try to access the metastore at the same time, they will get an error. One has to use a standalone metastore, i.e. a local or remote metastore configuration in Apache Hive, for allowing access to multiple clients concurrently. Following are the steps to configure a MySQL database as the local metastore in Apache Hive. One should make the following changes in hive-site.xml: 1. javax.jdo.option.ConnectionURL property should be set to jdbc:mysql://host/dbname?createDatabaseIfNotExist=true 2. javax.jdo.option.ConnectionDriverName property should be set to com.mysql.jdbc.Driver. One should also set the username and password as: 3.
javax.jdo.option.ConnectionUserName is set to the desired username. 4. javax.jdo.option.ConnectionPassword is set to the desired password. The JDBC driver JAR file for MySQL must be on the Hive classpath, i.e. the jar file should be copied into the Hive lib directory. Now, after restarting the Hive shell, it will automatically connect to the MySQL database which is running as a standalone metastore." is it possible to change the default location of a managed table?, when should we use sort by instead of order by?,"We should use SORT BY instead of ORDER BY when we have to sort huge datasets, because the SORT BY clause sorts the data using multiple reducers, whereas ORDER BY sorts all of the data together using a single reducer. Therefore, using ORDER BY against a large number of inputs will take a lot of time to execute." what is dynamic partitioning and when is it used?, order to do so?, how can you add a new partition for the month december in the above partitioned table?, what is the default maximum dynamic partition that can be created by a mapper reducer?, how can you change it?, requires at least one static partition column how will you remove this error?, how will you consume this csv file into the hive warehouse using the built-in serde?, files without degrading the performance of the system?, can we change settings within hive session if yes how?, is it possible to add 100 nodes when we have 100 nodes already in hive how?, explain the concatenation function in hive with an example?, explain trim and reverse function in hive with examples?, explain process to access subdirectories recursively in hive queries?,"By using the below commands we can access subdirectories recursively in Hive: hive> set mapred.input.dir.recursive=true; hive> set hive.mapred.supports.subdirectories=true; Hive tables can be pointed to the higher level directory, and this is suitable for a directory structure like /data/country/state/city/" how to skip header rows from a table in hive?, what is the
maximum size of string datatype supported by hive mention the hive support, binary formats?, what is the precedence order of hive configuration?, if you run a select query in hive why does it not run map reduce?, how hive can improve performance with orc format tables?, explain about the different types of join in hive?, how can you configure remote meta store mode in hive?, what happens on executing the below query after executing the below query if you modify the, column how will the changes be tracked?, how to load data from a txt file to table stored as orc in hive?, hive?, how to improve hive query performance with hadoop?, how do i query from a horizontal output to vertical output?, and -ve numbers?,"we can try to use regexp_extract instead: regexp_extract('abcd-9090','.*(-[0-9]+)',1)" Follow Me : https://www.youtube.com/c/SauravAgarwal what is hive tablename maximum character limit?, hive?,"use from_unixtime in conjunction with unix_timestamp. select from_unixtime(unix_timestamp(`date`,'MMM dd, yyyy'),'yyyy-MM-dd')" how to drop the hive database whether it contains some tables?,"Use the cascade command while dropping the database. Example: hive> drop database sampleDB cascade;" i dropped and recreated hive external table but no data shown so what should i do?, difference between rdd dataframe dataset?, when to use rdds?,"Consider these scenarios or common use cases for using RDDs: 1. you want low-level transformations and actions and control over your dataset; your data is unstructured, such as media streams or streams of text; 2. you want to manipulate your data with functional programming constructs rather than domain specific expressions; 3. you don't care about imposing a schema, such as columnar format, while processing or accessing data attributes by name or column; and 4. you can forgo some optimization and performance benefits available with DataFrames and Datasets for structured and semi-structured data."
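The regexp_extract pattern shown above can be sanity-checked outside Hive. This is a minimal plain-Python sketch of the same regex logic (the function name `extract_negative` is invented for illustration, it is not a Hive or Spark API):

```python
import re

def extract_negative(s):
    # Same pattern as Hive's regexp_extract('abcd-9090', '.*(-[0-9]+)', 1):
    # the greedy .* pushes the capture group to the trailing minus sign and digits.
    m = re.match(r'.*(-[0-9]+)', s)
    return m.group(1) if m else None

print(extract_negative('abcd-9090'))  # -9090
```

The capture group index 1 in regexp_extract corresponds to `m.group(1)` here.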
what are the various modes in which spark runs on yarn client vs cluster mode,"YARN client mode: the driver runs on the machine from which the client is connected. YARN cluster mode: the driver runs inside the cluster." what is dag directed acyclic graph?, what is a rdd and how it works internally?, what do we mean by partitions or slices?, what is the difference between map and flat map?, how can you minimize data transfers when working with spark?,"The various ways in which data transfers can be minimized when working with Apache Spark are: 1. Broadcast Variables - broadcast variables enhance the efficiency of joins between small and large RDDs. 2. Accumulators - accumulators help update the values of variables in parallel while executing. 3. The most common way is to avoid ByKey operations, repartition, or any other operations which trigger shuffles." why is there a need for broadcast variables when working with apache spark?,"These are read only variables, present in the in-memory cache on every machine. When working with Spark, usage of broadcast variables eliminates the necessity to ship copies of a variable for every task, so data can be processed faster. Broadcast variables help in storing a lookup table inside the memory, which enhances the retrieval efficiency when compared to an RDD lookup()." how can you trigger automatic cleanups in spark to handle accumulated metadata?,"You can trigger the clean-ups by setting the parameter "spark.cleaner.ttl" or by dividing the long running jobs into different batches and writing the intermediary results to the disk." why is blinkdb used?,"BlinkDB is a query engine for executing interactive SQL queries on huge volumes of data and renders query results marked with meaningful error bars. BlinkDB helps users balance query accuracy with response time." what is sliding window operation?,"Sliding Window controls transmission of data packets between various computer networks.
Spark Streaming library provides windowed computations where the transformations on RDDs are applied over a sliding window of data. Whenever the window slides, the RDDs that fall within the particular window are combined and operated upon to produce new RDDs of the windowed DStream." what is catalyst optimiser?, what do you understand by pair rdd?, what is the difference between persist and cache?,"persist() allows the user to specify the storage level whereas cache() uses the default storage level (MEMORY_ONLY)." what are the various levels of persistence in apache spark?, does apache spark provide checkpointing?, what do you understand by lazy evaluation?,"Spark is intellectual in the manner in which it operates on data. When you tell Spark to operate on a given dataset, it heeds the instructions and makes a note of it, so that it does not forget - but it does nothing unless asked for the final result. When a transformation like map() is called on an RDD, the operation is not performed immediately. Transformations in Spark are not evaluated till you perform an action. This helps optimize the overall data processing workflow." what do you understand by schema rdd?,"An RDD that consists of row objects (wrappers around basic string or integer arrays) with schema information about the type of data in each column. DataFrame is an example of a SchemaRDD." what are the disadvantages of using apache spark over hadoop map reduce?,"Apache Spark does not scale well for compute intensive jobs and consumes a large number of system resources. Apache Spark's in-memory capability at times becomes a major roadblock for cost efficient processing of big data. Also, Spark does not have its own file management system and hence needs to be integrated with other cloud based data platforms or apache hadoop."
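The lazy-evaluation behaviour described above can be mimicked in plain Python with a generator. This is a sketch only, with no Spark involved: `mapped` stands in for a transformation and `list()` for an action:

```python
log = []

def mapped(xs):
    # Stand-in for a Spark transformation: a generator yields lazily,
    # so nothing runs until a consumer (the "action") pulls elements through.
    for x in xs:
        log.append(x)        # records when each element is actually processed
        yield x * 2

pipeline = mapped(range(3))  # "transformation" declared: log is still empty
result = list(pipeline)      # "action": forces the whole pipeline to run
print(result)                # [0, 2, 4]
```

Just as with Spark, building `pipeline` does no work; only consuming it does.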
what is lineage graph in spark?, what do you understand by executor memory in a spark application?,"Every spark application has the same fixed heap size and fixed number of cores for a spark executor. The heap size is what is referred to as the Spark executor memory, which is controlled with the spark.executor.memory property of the --executor-memory flag. Every spark application will have one executor on each worker node. The executor memory is basically a measure of how much memory of the worker node the application will utilize." what is an accumulator?,"Accumulators are Spark's offline debuggers. Similar to Hadoop Counters, accumulators provide the number of events in a program. Accumulators are variables that can be added through associative operations. Spark natively supports accumulators of numeric value types and standard mutable collections. aggregateByKey() and combineByKey() use accumulators." what is spark context?, what is spark session?, why rdd is an immutable?, what is partitioner?, what are the benefits of data frames?, what is dataset?, what are the benefits of data sets?, what is shared variable in apache spark?,"Shared variables are nothing but the variables that can be used in parallel operations. Spark supports two types of shared variables: broadcast variables, which can be used to cache a value in memory on all nodes, and accumulators, which are variables that are only added to, such as counters and sums." how to handle accumulated metadata in apache spark?, what is the difference between dsm and rdd?, what is speculative execution in spark and how to enable it?,"One more point is, speculative execution will not stop the slow running task but it launches a new task in parallel. Tabular form: Spark Property >> Default Value >> Description spark.speculation >> false >> enables (true) or disables (false) speculative execution of tasks.
spark.speculation.interval >> 100ms >> The time interval to use before checking for speculative tasks. spark.speculation.multiplier >> 1.5 >> How many times slower a task is than the median to be considered for speculation. spark.speculation.quantile >> 0.75 >> The percentage of tasks that has not finished yet at which to start speculation." how is fault tolerance achieved in apache spark?, combine by key?, explain the map partitions and map partitions with index?, explain fold operation in spark?, difference between text file vs whole text file?, what is co group operation?, explain pipe operation?, explain coalesce operation?, explain the repartition operation?, explain the top and take ordered operation?, explain the lookup operation?, how to kill spark running application?, how to stop info messages displaying on spark console?, where the logs are available in spark on yarn?, how to find out the different values in between two spark data frames?, what are security options in apache spark?, what is scala?, what are the types of variable expressions available in scala?,"val (aka Values): you can name results of expressions with the val keyword. Once you refer to a value, it does not re-compute it. Example: val x = 1 + 1 x = 3 // This does not compile. var (aka Variables): variables are like values, except you can re-assign them. You can define a variable with the var keyword. Example: var x = 1 + 1 x = 3 // This can compile." what is the difference between method and functions in scala?, what is case classes in scala?, what is traits in scala?,"Traits are used to share interfaces and fields between classes. They are similar to Java 8's interfaces. Classes and objects can extend traits but traits cannot be instantiated and therefore have no parameters. Traits are types containing certain fields and methods. Multiple traits can be combined.
A minimal trait is simply the keyword trait and an identifier: Example: trait Greeter { def greet(name: String): Unit = println("Hello, " + name + "!") }" what is singleton object in scala?,"An object is a class that has exactly one instance; this is called a singleton object. Here's an example of a singleton object with a method: object Logger { def info(message: String): Unit = println("Hi i am Dineshkumar") }" what is companion objects in scala?,"An object with the same name as a class is called a companion object. Conversely, the class is the object's companion class. A companion class or object can access the private members of its companion. Use a companion object for methods and values which are not specific to instances of the companion class. Example: import scala.math._ case class Circle(radius: Double) { import Circle._ def area: Double = calculateArea(radius) } object Circle { private def calculateArea(radius: Double): Double = Pi * pow(radius, 2.0) } val circle1 = new Circle(5.0) circle1.area" what are the special datatype available in scala?, what is higher order functions in scala?, what is currying function or multiple parameter lists in scala?,"Methods may define multiple parameter lists. When a method is called with a fewer number of parameter lists, this will yield a function taking the missing parameter lists as its arguments. This is formally known as currying. Example: def foldLeft[B](z: B)(op: (B, A) => B): B val numbers = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10) val res = numbers.foldLeft(0)((m, n) => m + n) print(res) // 55" what is pattern matching in scala?, what are the basic properties avail in spark?, what are the configuration properties in spark?,"spark.executor.memory :- the maximum possible is managed by the YARN cluster, which cannot exceed the actual RAM available.
spark.executor.cores :- number of cores assigned per executor, which cannot be higher than the cores available in each worker. spark.executor.instances :- number of executors to start. This property is acknowledged by the cluster if spark.dynamicAllocation.enabled is set to false. spark.memory.fraction :- the default is set to 60% of the requested memory per executor. spark.dynamicAllocation.enabled :- overrides the mechanism that Spark provides to dynamically adjust resources. Disabling it provides more control over the number of executors that can be started, which in turn impacts the amount of storage available for the session. For more information, please see the Dynamic Resource Allocation page on the official Spark website." what is sealed classes?,"Traits and classes can be marked sealed, which means all subtypes must be declared in the same file. This is useful for pattern matching because we don't need a catch-all case. This assures that all subtypes are known. Example: sealed abstract class Furniture case class Couch() extends Furniture case class Chair() extends Furniture def findPlaceToSit(piece: Furniture): String = piece match { case a: Couch => "Lie on the couch" case b: Chair => "Sit on the chair" }" what is type inference?,"The Scala compiler can often infer the type of an expression so you don't have to declare it explicitly. Example: val Name = "Dineshkumar S" // it is considered as String val id = 1234 // considered as Int" when not to rely on default type inference?,"The type inferred for obj was Null. Since the only value of that type is null, it is impossible to assign a different value by default." how can we debug spark application locally?, map collection has key and value then key should be mutable or immutable?,"Behavior of a Map is not specified if the value of an object is changed in a manner that affects equals comparison while the object is used as a key. So the key should be immutable.
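The point about immutable map keys can be demonstrated in plain Python with a deliberately bad `Key` class (invented for illustration): once a key's mutable state changes, its hash no longer matches the bucket it was stored under.

```python
class Key:
    """A mutable object used (unwisely) as a dictionary key."""
    def __init__(self, v):
        self.v = v
    def __eq__(self, other):
        return isinstance(other, Key) and self.v == other.v
    def __hash__(self):
        return hash(self.v)  # hash depends on mutable state - the bug

k = Key(1)
m = {k: "value"}
k.v = 2               # mutate the key after insertion
print(k in m)         # False: the entry is stored under the old hash
print(Key(1) in m)    # False too: right hash bucket, but equality now fails
```

The entry becomes unreachable by any key, which is exactly why maps specify unspecified behavior for mutated keys.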
what is off heap persistence in spark?,"One of the most important capabilities in Spark is persisting (or caching) datasets in memory across operations. Each persisted RDD can be stored using a different storage level. One of the possibilities is to store RDDs in serialized format off-heap. Compared to storing data in the Spark JVM, off-heap storage reduces garbage collection overhead and allows executors to be smaller and to share a pool of memory. This makes it attractive in environments with large heaps or multiple concurrent applications." what is the difference between apache spark and apache flink?, how do we measure the impact of garbage collection?,"GC pressure arises from using too much memory on the driver or some executors, or garbage collection becomes extremely costly and slow as large numbers of objects are created in the JVM. You can validate this by adding '-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps' to Spark's JVM options using the `spark.executor.extraJavaOptions` configuration parameter." apache spark vs apache storm?, how to overwrite the output directory in spark?,"Refer to the below command using DataFrames: df.write.mode(SaveMode.Overwrite).parquet(path)" how to read multiple text files into a single rdd?,"You can specify whole directories, use wildcards and even a CSV of directories and wildcards like below. Eg.: val rdd = sc.textFile("file:///D:/Dinesh.txt, file:///D:/Dineshnew.txt")" can we run spark without base of hdfs?, define about generic classes in scala?, how to enable tungsten sort shuffle in spark 2.x?, how to prevent spark executors from getting lost when using yarn client mode?,"The solution if you're using yarn is to set --conf spark.yarn.executor.memoryOverhead=600; alternatively if your cluster uses mesos you can try --conf spark.mesos.executor.memoryOverhead=600 instead."
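The multi-file read above can be illustrated without Spark. This plain-Python sketch (helper name `read_many` is invented) shows the idea behind `sc.textFile("a.txt, b.txt")`: several input files become one logical collection of lines:

```python
import os
import tempfile

def read_many(paths):
    # Concatenate the lines of several files, in the order given -
    # roughly what textFile does across its input partitions.
    lines = []
    for p in paths:
        with open(p) as f:
            lines.extend(f.read().splitlines())
    return lines

# Build two small input files in a temp directory for the demo.
d = tempfile.mkdtemp()
for name, text in [("a.txt", "one\ntwo"), ("b.txt", "three")]:
    with open(os.path.join(d, name), "w") as f:
        f.write(text)

paths = [os.path.join(d, "a.txt"), os.path.join(d, "b.txt")]
print(read_many(paths))  # ['one', 'two', 'three']
```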
what is the relationship between the yarn containers and the spark executors?,"The first important thing is the fact that the number of containers will always be the same as the executors created by a Spark application, e.g. via the --num-executors parameter in spark-submit. As set by yarn.scheduler.minimum-allocation-mb, every container always allocates at least this amount of memory. This means if the parameter --executor-memory is set to e.g. only 1g but yarn.scheduler.minimum-allocation-mb is e.g. 6g, the container is much bigger than needed by the Spark application. The other way round, if the parameter --executor-memory is set to something higher than the yarn.scheduler.minimum-allocation-mb value, e.g. 12g, the container will allocate more memory dynamically, but only if the requested amount of memory is smaller than or equal to the yarn.scheduler.maximum-allocation-mb value. The value of yarn.nodemanager.resource.memory-mb determines how much memory can be allocated in sum by all containers of one host! So setting yarn.scheduler.minimum-allocation-mb allows you to run smaller containers, e.g. for smaller executors (else it would be a waste of memory). Setting yarn.scheduler.maximum-allocation-mb to the maximum value (e.g. equal to yarn.nodemanager.resource.memory-mb) allows you to define bigger executors (more memory is allocated if needed, e.g. by the --executor-memory parameter)."
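The interplay between --executor-memory and the YARN minimum/maximum allocation settings described above boils down to simple clamping arithmetic. This sketch uses an invented function name and deliberately ignores rounding increments and executor memory overhead:

```python
def container_size_mb(executor_memory_mb, min_alloc_mb, max_alloc_mb):
    # A container is never smaller than yarn.scheduler.minimum-allocation-mb
    # and never larger than yarn.scheduler.maximum-allocation-mb.
    return min(max(executor_memory_mb, min_alloc_mb), max_alloc_mb)

# --executor-memory 1g with a 6g minimum: the container is padded to 6g.
print(container_size_mb(1024, 6144, 16384))   # 6144
# --executor-memory 12g fits between the bounds, so it is granted as-is.
print(container_size_mb(12288, 6144, 16384))  # 12288
```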
how to allocate the memory sizes for the spark jobs in cluster?, how autocompletion tab can enable in pyspark?,"Please import the below libraries in the pyspark shell: import rlcompleter, readline readline.parse_and_bind("tab: complete")" can we execute two transformations on the same rdd in parallel in apache spark?, which cluster type should i choose for spark?, what is dstreams in spark streaming?,"Spark streaming uses a micro batch architecture where the incoming data is grouped into micro batches called Discretized Streams (DStreams), which also serve as the basic programming abstraction. The DStreams internally have Resilient Distributed Datasets (RDDs), and as a result of this standard RDD transformations and actions can be done." what is stateless transformation?, what is stateful transformation?, what is aws?,"Answer: AWS stands for Amazon Web Services. AWS is a platform that provides on-demand resources for hosting web services, storage, networking, databases and other resources over the internet with a pay-as-you-go pricing." what are the components of aws?,"Answer: EC2 Elastic Compute Cloud, S3 Simple Storage Service, Route53, EBS Elastic Block Store, Cloudwatch, Key-Pairs are a few of the components of AWS." what are key pairs?,"Answer: Key-pairs are secure login information for your instances/virtual machines. To connect to the instances we use key-pairs that contain a public-key and private-key." what is s3?, what are the pricing models for ec2 instances?,"Answer: The different pricing models for EC2 instances are as below: On-demand Reserved Spot Scheduled Dedicated" what are the types of volumes for ec2 instances?, what are ebs volumes?,"Answer: EBS stands for Elastic Block Store. They are persistent volumes that you can attach to the instances.
With EBS volumes, your data will be preserved even when you stop your instances, unlike instance store volumes where the data is deleted when you stop the instances." what are the types of volumes in ebs?, what are the different types of instances?,"Answer: Following are the types of instances: General purpose Compute Optimized Storage Optimized Memory Optimized Accelerated Computing" what is an auto scaling and what are the components?,"Answer: Auto scaling allows you to automatically scale-up and scale-down the number of instances depending on the CPU utilization or memory utilization. There are 2 components in Auto scaling: Auto-scaling groups and Launch Configuration." what are reserved instances?, what is an ami?, what is an eip?, what is cloud watch?,"Answer: Cloudwatch is a monitoring tool that you can use to monitor your various AWS resources, like health check, network, application, etc." what are the types in cloud watch?,"Answer: There are 2 types in cloudwatch: basic monitoring and detailed monitoring. Basic monitoring is free and detailed monitoring is chargeable." what are the cloud watch metrics that are available for ec2 instances?, what are the different storage classes in s3?,"Answer: Following are the types of storage classes in S3: Standard frequently accessed Standard infrequently accessed One-zone infrequently accessed Glacier RRS reduced redundancy storage" what is the default storage class in s3?,"Answer: The default storage class in S3 is Standard frequently accessed." what is glacier?,"Answer: Glacier is the back up or archival tool that you use to back up your data in S3."
how can you secure the access to your s3 bucket?,"Answer: There are two ways that you can control the access to your S3 buckets: ACL Access Control List Bucket policies" how can you encrypt data in s3?,"Answer: You can encrypt the data by using the below methods: Server Side Encryption S3 (AES 256 encryption) Server Side Encryption KMS (Key Management Service) Server Side Encryption C (Client Side)" what are the parameters for s3 pricing?,"Answer: The pricing model for S3 is as below: Storage used Number of requests you make Storage management Data transfer Transfer acceleration" what is the prerequisite to work with cross region replication in s3?,"Answer: You need to enable versioning on both the source bucket and the destination bucket to work with cross region replication. Also, the source and destination buckets should be in different regions." what are roles?,"Answer: Roles are used to provide permissions to entities that you trust within your AWS account. Roles are similar to users, but with roles you do not need to create any username and password to work with the resources." what are policies and what are the types of policies?, what is cloud front?,"Answer: Cloudfront is an AWS web service that provides businesses and application developers an easy and efficient way to distribute their content with low latency and high data transfer speeds. Cloudfront is the content delivery network of AWS." what are edge locations?, what is the maximum individual archive that you can store in glacier?,"Answer: You can store a maximum individual archive of up to 40 TB." what is vpc?, what is vpc peering connection?, what are nat gateways?,"Answer: NAT stands for Network Address Translation. NAT gateways enable instances in a private subnet to connect to the internet but prevent the internet from initiating a connection with those instances."
how can you control the security to your vpc?,"Answer: You can use security groups and NACL (Network Access Control List) to control the security to your VPC." what are the different types of storage gateway?,"Answer: Following are the types of storage gateway: File gateway Volume gateway Tape gateway" what is a snowball?,"Answer: Snowball is a data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. Using snowball, you can move huge amounts of data from one place to another, which reduces your network costs and long transfer times, and also provides better security." what are the database types in rds?,"Answer: Following are the types of databases in RDS: Aurora Oracle MYSQL server Postgresql MariaDB SQL server" what is a redshift?, what is sns?,"Answer: SNS stands for Simple Notification Service. SNS is a web service that makes it easy to send notifications from the cloud. You can set up SNS to receive email notification or message notification." what are the types of routing policies in route53?,"Answer: Following are the types of routing policies in route53: Simple routing Latency routing Failover routing Geolocation routing Weighted routing Multivalue answer" what is the maximum size of messages in sqs?,"Answer: The maximum size of messages in SQS is 256 KB." what are the types of queues in sqs?, what is multi az rds?,"Answer: Multi-AZ (Availability Zone) RDS allows you to have a replica of your production database in another availability zone. Multi-AZ (Availability Zone) database is used for disaster recovery. You will have an exact copy of your database. So when your primary database goes down, your application will automatically failover to the standby database." what are the types of backups in rds database?,"Answer: There are 2 types of backups in RDS database: Automated backups Manual backups, which are known as snapshots."
what is the difference between security groups and network access control list?,"Answer: Security Groups: control access at the instance level; can add rules for allow only; evaluate all rules before allowing the traffic; stateful filtering; you can assign an unlimited number of security groups. Network Access Control List: controls access at the subnet level; can add rules for both allow and deny; rules are processed in order number when allowing traffic; stateless filtering; you can assign up to 5 security groups." what are the types of load balancers in ec2?,"Answer: There are 3 types of load balancers: Application load balancer Network load balancer Classic load balancer" what is an elb?, what are the two types of access that you can provide when you are creating users?,"Answer: Following are the two types of access that you can create: Programmatic access Console access" what are the benefits of auto scaling?,"Answer: Following are the benefits of auto scaling: Better fault tolerance Better availability Better cost management" what are security groups?,"Answer: Security groups act as a firewall that contains the traffic for one or more instances. You can associate one or more security groups to your instances when you launch them. You can add rules to each security group that allow traffic to and from its associated instances.
You can modify the rules of a security group at any time; the new rules are automatically and immediately applied to all the instances that are associated with the security group." what are shared amis?, what is the difference between the classic loadbalancer and application loadbalancer?,"Answer: Dynamic port mapping with multiple ports and multiple listeners is used in Application Load Balancer; one port, one listener is achieved via Classic Load Balancer." by default how many ip addresses does aws reserve in a subnet?,"Answer: 5" what is meant by subnet?, how can you convert a public subnet to private subnet?, is it possible to reduce a ebs volume?,"Answer: No, it's not possible; we can increase it but not reduce it." what is the use of elastic ip are they charged by aws?,"Answer: These are ipv4 addresses which are used to connect the instance from the internet; they are charged if the instances are not attached to it." one of my s3 buckets is deleted but i need to restore is there any possible way?, issue?, is it possible to stop a rds instance how can i do that?, what is meant by parameter groups in rds and what is the use of it?,"Answer: Since RDS is a managed service, AWS offers a wide set of parameters in RDS as a parameter group which is modified as per requirement." what is the use of tags and how they are useful?, how can i rectify it?, i dont want my aws account id to be exposed to users how can i avoid it?, by default how many elastic ip addresses does aws offer?,"Answer: 5 elastic ip per region" you have enabled sticky session with elb what does it do with your instance?,"Answer: Binds the user session with a specific instance" Q67) Which type of load balancer makes routing decisions at either the transport layer or the Application layer and supports either EC2 or VPC?
Answer: Classic Load Balancer. which is a virtual network interface that you can attach to an instance in a vpc?, have selected ssh http https protocol why do we need to select ssh?, security group how will these changes be effective?,"Answer: Changes are automatically applied to windows instances" loadbalancer and dns service comes under which type of cloud service?, this?,"Answer: Create a snapshot of the unencrypted volume (applying encryption parameters), copy the snapshot and create a volume from the copied snapshot." Q73) Where does the user specify the maximum number of instances with the auto scaling commands?,"Answer: Auto scaling Launch Config" which are the types of ami provided by aws?, a single instance what setting can you use?,"Answer: Sticky session" when do i prefer provisioned iops over the standard rds storage?, db instance for read or write operations along with the primary db instance?,"Answer: When the primary db instance is not working." Q78) Which AWS service will you use to collect and process e-commerce data for near real time analysis?,"Answer: Amazon DynamoDB." Q79) A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company's requirements?,"Answer: A web application built on an Amazon DynamoDB solution." which statements of use cases are suitable for amazon dynamodb?,"Answer: Storing metadata for Amazon S3 objects & running relational joins and complex updates.
Q81) Your application has to retrieve data from your users' mobiles every 5 minutes and the data is stored in DynamoDB; later every day at a particular time the data is extracted into S3 on a per-user basis and then your application is used to visualize the data to the user. You are asked to optimize the architecture of the backend system to lower cost what would you recommend to do?, mysql which is the best approach to meet these requirements?, setup of the following would you prefer?,"Answer: Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage." Q84) Suppose you have an application where you have to render images and also do some general computing; which service will best fit your need?,"Answer: Use an Application Load Balancer." Q85) How will you change the instance type for the instances which are running in your applications tier and using auto scaling; where will you change it?,"Answer: Change it in the Auto Scaling launch configuration." Q86) You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?,"Answer: Create a load balancer, and register the Amazon EC2 instance with it." what does connection draining do?,"Answer: It re-routes traffic away from the instances which are to be updated or have failed a health check." Q88) When an instance is unhealthy, it is terminated and replaced with a new one; which of the services does that?,"Answer: The service provides fault tolerance." what are the lifecycle hooks used for in auto scaling?,"Answer: They are used to add an additional wait time to a scale in or scale out event." Q90) A user has setup an Auto Scaling group.
Due to some issue the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?,"Answer: Auto Scaling will suspend the scaling process." Q91) You have an EC2 Security Group with several running EC2 instances. You changed the Security Group rules to allow inbound traffic on a new port and protocol, and then launched several new instances in the same Security Group. Will the new rules apply?, region?,"Answer: Maybe the selected Route 53 Record Sets." Q93) A customer wants to capture all client connection information from his load balancers at an interval of 5 minutes only; which option should he choose for his application?,"Answer: Enable AWS CloudTrail for the loadbalancers." which of the services would you not use to deploy an app?,"Answer: Lambda is not used to deploy an app." how does elastic beanstalk apply updates?, could be reason solution?, accomplish this?,"Answer: Monitoring on Amazon CloudWatch." Q98) An organization that is currently using consolidated billing has recently acquired another company that already has a number of AWS accounts. How could an Administrator ensure that all the AWS accounts, from both the existing company and the acquired company, are billed to a single account?, this scenario?, use the aws credentials to access s3 bucket securely?, you accomplish the using the aws services?, setup?, availability?, disruptions during the ramp up traffic?, instances which architectural choices should you make?,"Answer: Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use an Amazon Elastic Load Balancer." Q106) You are designing an application that contains protected health information.
Security and compliance requirements for your application mandate that all protected health information in the application use encryption at rest and in transit. The application uses a three-tier architecture, where data flows through the load balancers and is stored on Amazon EBS volumes for processing, and the results are stored in Amazon S3 using the AWS SDK. Which of the options satisfy the security requirements?
Q107) ...are evenly distributed across the four web servers?
Q108) ...capacity most effectively?
Q109) ...support these requirements? Answer: Configure the web application to authenticate end-users against the centralized access management system. Have the web application provision trusted users with STS tokens entitling them to download the approved data directly from Amazon S3.
Q110) An enterprise customer is starting their migration to the cloud; their main reason for migrating is agility, and they want to make their internal Microsoft Active Directory available to the many applications running on AWS, so that internal users only have to remember one set of credentials and there is a central point of user control for leavers and joiners. How could they make their Active Directory secure and highly available, with minimal on-premises infrastructure changes, in the most cost- and time-efficient way?
Q111) What is cloud computing? Answer: Cloud computing means providing services to access programs, applications, storage, networks, and servers over the internet, through a browser or a client-side application on your PC, laptop, or mobile, without the end user installing, updating, or maintaining them.
Q112) Why do we go for cloud computing?
Q113) What are the deployment models used in the cloud? Answer: 1. Private cloud 2. Public cloud 3. Hybrid cloud 4. Community cloud
Q114) Explain the cloud service models.
Q115) What are the advantages of cloud computing?
Q116) What is AWS? Answer: Amazon Web Services is a secure cloud services platform offering compute power, database, storage, content delivery, and other functionality to help businesses scale and grow. AWS is fully on-demand; it offers flexibility, availability, and scalability; and it is elastic: you scale up and scale down as needed.
Q117) What is meant by region, availability zone, and edge location?
Q118) How do you access the AWS platform?
Q119) What is EC2? What are the benefits of EC2? Answer: Amazon Elastic Compute Cloud is a web service that provides resizable compute capacity in the cloud. AWS EC2 provides scalable computing capacity in the AWS Cloud. These are virtual servers, also called instances. We can use the instances on a pay-per-use basis. Benefits: easier and faster; elastic and scalable; high availability; cost-effective.
Q120) What are the pricing models available in AWS EC2?
Q121) What are the instance types used in AWS EC2? Answer: General Purpose, Compute Optimized, Memory Optimized, Storage Optimized, Accelerated Computing (GPU based).
Q122) What is an AMI? What are the types of AMI?
Q123) How are AWS EC2 instances addressed?
Q124) What is a security group? Answer: AWS allows you to control traffic in and out of your instance through a virtual firewall called a security group. Security groups allow you to control traffic based on port, protocol, and source/destination.
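Q124 describes a security group as an allow-list keyed on protocol, port, and source. That evaluation logic can be sketched in a few lines of plain Python; the rule set and function names below are hypothetical illustrations, not any AWS API:

```python
import ipaddress

# One hypothetical inbound rule per tuple: protocol, port range, allowed source CIDR.
RULES = [
    ("tcp", 22, 22, "203.0.113.0/24"),   # SSH only from the office range
    ("tcp", 80, 80, "0.0.0.0/0"),        # HTTP from anywhere
]

def allowed(protocol: str, port: int, source_ip: str) -> bool:
    """Security groups are allow-lists: traffic passes only if some rule matches."""
    ip = ipaddress.ip_address(source_ip)
    for proto, lo, hi, cidr in RULES:
        if proto == protocol and lo <= port <= hi and ip in ipaddress.ip_network(cidr):
            return True
    return False  # no matching rule: inbound traffic is denied by default

print(allowed("tcp", 80, "198.51.100.7"))   # True: matches the open HTTP rule
print(allowed("tcp", 22, "198.51.100.7"))   # False: SSH restricted to 203.0.113.0/24
```

Note the default-deny behavior at the end: a real security group likewise denies any inbound traffic that no rule explicitly allows.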
Q125) When does your instance show the "retired" state? What is the reason for that, and what is the solution?
Q126) What is Elastic Beanstalk? Answer: AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their code, and the service automatically handles all the details, such as resource provisioning, load balancing, auto scaling, and monitoring.
Q127) What is Amazon Lightsail? Answer: Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. Lightsail plans include everything you need to jumpstart your project: a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP.
Q128) What is EBS? Answer: Amazon EBS provides persistent block-level storage volumes for use with Amazon EC2 instances. An Amazon EBS volume is automatically replicated within its availability zone to protect against component failure, offering high availability and durability. Amazon EBS volumes are available in a variety of types that differ in performance characteristics and price.
Q129) How do the EBS volume types compare?
Q130) What are Cold HDD and Throughput Optimized HDD?
Q131) What are Amazon EBS-optimized instances?
Q132) What is an EBS snapshot?
Q133) How do you connect an EBS volume to multiple instances? Answer: We cannot connect a single EBS volume to multiple instances, but we can connect multiple EBS volumes to a single instance.
Q134) What are the virtualization types available in AWS?
Q135) Differentiate block storage and file storage. Answer: Block storage operates at a lower level, the raw storage device level, and manages data as a set of numbered, fixed-size blocks. File storage operates at a higher level, the operating system level, and manages data as a named hierarchy of files and folders.
Q136) What are the advantages and disadvantages of EFS? Answer: Advantages: fully managed service; the file system grows and shrinks automatically to petabytes; can support thousands of concurrent connections; multi-AZ replication; throughput scales automatically to ensure consistent low latency. Disadvantages: not available in all regions; cross-region capability not available; more complicated to provision compared to S3 and EBS.
Q138) What are the things we need to remember while creating an S3 bucket? Answer: Amazon S3 bucket names must be unique across all of AWS. Bucket names can contain up to 63 lowercase letters, numbers, and hyphens. You can create and use multiple buckets. You can have up to 100 buckets per account by default.
Q139) What are the storage classes available in Amazon S3?
Q140) Explain Amazon S3 lifecycle rules.
Q141) What is the relation between Amazon S3 and AWS KMS? Answer: To encrypt Amazon S3 data at rest, you can use several variations of server-side encryption (SSE). Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. All SSE performed by Amazon S3 and AWS Key Management Service (AWS KMS) uses the 256-bit Advanced Encryption Standard (AES).
Q142) What is the function of cross-region replication in Amazon S3? Answer: Cross-region replication is a feature that allows you to asynchronously replicate all new objects in a source bucket in one AWS region to a target bucket in another region. To enable cross-region replication, versioning must be turned on for both source and destination buckets.
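The bucket-naming rules above (up to 63 lowercase letters, digits, and hyphens) can be captured as a small validator. This is a simplified sketch: it also enforces the 3-character minimum and the requirement that names start and end with a letter or digit, and it ignores some edge cases such as dots in names:

```python
import re

# 3-63 chars; lowercase letters, digits, hyphens; must start and end
# with a letter or digit (simplified subset of the S3 naming rules).
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_RE.match(name))

print(valid_bucket_name("my-data-2018"))   # True
print(valid_bucket_name("My_Bucket"))      # False: uppercase and underscore
print(valid_bucket_name("ab"))             # False: shorter than 3 characters
```

Validating names client-side like this avoids a round trip that would fail anyway, since bucket creation rejects invalid names.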
Cross-region replication is commonly used to reduce the latency required to access objects in Amazon S3.
Q143) How do you create an encrypted EBS volume?
Q144) What are a NAT instance and a NAT gateway?
Q145) What is VPC peering? Answer: An Amazon VPC peering connection is a networking connection between two Amazon VPCs that enables instances in either VPC to communicate with each other as if they were within the same network. You can create an Amazon VPC peering connection between your own Amazon VPCs, or with an Amazon VPC in another AWS account, within a single region.
Q146) What is MFA in AWS? Answer: Multi-factor authentication can add an extra layer of security to your infrastructure by adding a second method of authentication beyond just a password or access key.
Q147) What are the authentication methods in AWS? Answer: 1. User name/password 2. Access key 3. Access key/session token
Q148) What is a data warehouse in AWS? Answer: A data warehouse is a central repository for data that can come from one or more sources. Organizations typically use a data warehouse to compile reports and search the database using highly complex queries. A data warehouse is also typically updated on a batch schedule multiple times per day or per hour, compared to an OLTP (Online Transaction Processing) relational database, which can be updated thousands of times per second.
Q149) What is meant by Multi-AZ in RDS? Answer: Multi-AZ allows you to place a secondary copy of your database in another availability zone for disaster recovery purposes. Multi-AZ deployments are available for all types of Amazon RDS database engines. When you create a Multi-AZ DB instance, a primary instance is created in one availability zone and a secondary instance is created in another availability zone.
Q150) What is Amazon DynamoDB? Answer: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data.
Q151) What is CloudFormation?
Q152) How do you plan auto scaling? Answer: 1. Manual scaling 2. Scheduled scaling 3. Dynamic scaling
Q153) What is an Auto Scaling group? Answer: An Auto Scaling group is a collection of Amazon EC2 instances managed by the Auto Scaling service. Each Auto Scaling group contains configuration options that control when Auto Scaling should launch new instances or terminate existing instances.
Q154) Differentiate basic and detailed monitoring in CloudWatch. Answer: Basic monitoring sends data points to Amazon CloudWatch every five minutes for a limited number of preselected metrics, at no charge. Detailed monitoring sends data points to Amazon CloudWatch every minute and allows data aggregation, for an additional charge.
Q155) What is the relationship between Route 53 and CloudFront?
Q156) What are the routing policies available in Amazon Route 53? Answer: Simple, Weighted, Latency-based, Failover, Geolocation.
Q157) What is Amazon ElastiCache? Answer: Amazon ElastiCache is a web service that simplifies the setup and management of a distributed in-memory caching environment. It is a cost-effective, high-performance, scalable caching environment using the Memcached or Redis cache engine.
Q159) What are SQS and SNS?
Q160) How do you use Amazon SQS? What is AWS? Answer: Amazon Web Services is a secure cloud services platform, offering compute power, database storage, content delivery, and other functionality to help industries scale and grow.
Q161) What is the importance of a buffer in AWS?
Q162) What is the way to secure data residing in the cloud? Answer: Avoid storing sensitive material in the cloud. Read the user agreement to find out how your cloud service storage works. Be serious about passwords. Encrypt. Use an encrypted cloud service.
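The dynamic-scaling option from Q152 boils down to a policy: compare a metric against thresholds and adjust the group size within its min/max bounds. A toy pure-Python sketch follows; the 70%/30% CPU thresholds and the step size of one instance are hypothetical choices, not AWS defaults:

```python
def scaling_decision(cpu_percent: float, current: int,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Toy dynamic-scaling policy: scale out above 70% CPU, scale in
    below 30%, always staying within the group's min/max size."""
    if cpu_percent > 70 and current < maximum:
        return current + 1
    if cpu_percent < 30 and current > minimum:
        return current - 1
    return current

print(scaling_decision(85.0, 4))  # 5: scale out
print(scaling_decision(20.0, 4))  # 3: scale in
print(scaling_decision(50.0, 4))  # 4: inside the band, no change
```

The gap between the two thresholds matters: without it, a group hovering around a single threshold would flap, repeatedly launching and terminating instances.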
Q163) Name the several layers of cloud computing.
Q164) What is Lambda@Edge in AWS? Answer: Lambda@Edge lets you run Lambda functions to modify content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers.
Q165) Distinguish between scalability and flexibility.
Q166) What is IaaS?
Q167) What is PaaS? Answer: PaaS provides cloud platforms and runtime environments to develop, test, and manage software. Users: software developers.
Q168) What is SaaS?
Q169) Which automation tools can help with spinning up services?
Q170) What is an AMI? How do I build one?
Q171) What are the main features of Amazon CloudFront? Answer: Amazon CloudFront is a web service that speeds up delivery of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations.
Q172) What are the features of the Amazon EC2 service?
Q173) Explain storage for an Amazon EC2 instance. Answer: An instance store is a temporary storage type located on disks that are physically attached to the host machine. Compare it to AWS Elastic Block Storage (AWS EBS), which is persistent; data stored on instance stores can be backed up to AWS EBS. Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and it can be used to decouple sending and receiving components.
Q174) When attached to an Amazon VPC, which two components provide connectivity with external networks?
Q175) Which of the following are characteristics of Amazon VPC subnets? Answer: Each subnet maps to a single availability zone. By default, all subnets can route between each other, whether they are private or public.
Q176) How can you send a request to Amazon S3? Answer: Every interaction with Amazon S3 is either authenticated or anonymous. Authentication is the process of validating the identity of the requester trying to access an Amazon Web Services (AWS) product. Authenticated requests must include a signature value that authenticates the request sender. The signature value is, in part, created from the requester's AWS access keys (access key ID and secret access key).
Q177) What is the best approach to secure information in transit in the cloud?
Q178) What is AWS Certificate Manager?
Q179) What is Amazon EMR? Answer: Amazon Elastic MapReduce (EMR) is a service that provides a fully managed, hosted Hadoop framework on top of Amazon Elastic Compute Cloud (EC2).
Q180) What is Amazon Kinesis Firehose?
Q181) What is Amazon CloudSearch, and what are its features?
Q182) What is a private cloud?
Q183) How can one connect a VPC to a corporate data center? Answer: AWS Direct Connect enables you to securely connect your AWS environment to your on-premises data center or office location over a standard 1-gigabit or 10-gigabit Ethernet fiber-optic connection. AWS Direct Connect offers a dedicated, high-speed, low-latency connection which bypasses internet service providers in your network path. An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with, as well as access to other US regions. AWS Direct Connect allows you to logically partition the fiber-optic connections into multiple logical connections called Virtual Local Area Networks (VLANs). You can take advantage of these logical connections to improve security, separate traffic, and meet compliance requirements.
Q184) Is it possible to use S3 with EC2 instances?
Q185) What is the distinction between Amazon S3 and EBS?
Q188) What do you understand by AWS? Answer: This is one of the most commonly asked AWS developer interview questions.
This question checks your basic AWS knowledge, so the answer should be clear. Amazon Web Services (AWS) is a cloud service platform which offers computing power, analytics, content delivery, database storage, deployment, and other services to help you in your business growth. These services are highly scalable, reliable, secure, and inexpensive cloud computing services which are designed to work together, and the applications thus built are more advanced and scalable.
Q189) Explain the principal components of AWS.
Q190) What do you mean by AMI? What does it include?
Q191) Is vertical scaling possible on an Amazon instance?
Q192) What is the relationship between an AMI and an instance?
Q193) What is the difference between Amazon S3 and EC2?
Q194) How many storage options are there for an EC2 instance?
Q195) What are the security best practices for Amazon EC2 instances?
Q196) What is the procedure to send a request to Amazon S3?
Q197) What is the default number of buckets created in AWS? Answer: This is a very simple question, but it ranks high among AWS developer interview questions. Answer it directly: the default number of buckets allowed in each AWS account is 100.
Q198) What is the purpose of T2 instances? Answer: T2 instances are designed to provide a moderate baseline performance, with the ability to burst to higher performance as required by the workload.
Q199) What is the use of a buffer in AWS? Answer: This is among the frequently asked AWS developer interview questions. Give the answer in simple terms: the buffer is primarily used to manage load by synchronizing different components, i.e., to make the system fault tolerant. Without a buffer, components don't use any reasonable method to receive and process requests. With a buffer, components work in a balanced way and at the same speed, thus resulting in faster service.
Q200) What happens when an Amazon EC2 instance is stopped or terminated? Answer: When stopping an Amazon EC2 instance, a normal shutdown is performed, and then the transition to the stopped state happens. During this, all of the Amazon EBS volumes remain attached to the instance, and the instance can be started again at any time. Instance hours are not counted while the instance is in the stopped state. When terminating an Amazon EC2 instance, a normal shutdown is performed, and then all of the Amazon EBS volumes are deleted; to avoid this, the value of the deleteOnTermination attribute is set to false. On termination, the instance itself is also deleted, so the instance cannot be started again.
Q201) What are the mainstream DevOps tools?
Q202) What are the default services we get when we create a custom AWS VPC? Answer: Route table, Network ACL, Security group.
Q203) What is the difference between a public subnet and a private subnet?
Q204) How do you access an EC2 instance which has a private IP and is in a private subnet?
Q205) What are the differences between Route 53 and ELB?
Q210) What are the DB engines which can be used in AWS RDS? Answer: MariaDB, MySQL, MS SQL Server, PostgreSQL, Oracle.
What are status checks in AWS EC2?
To establish a peering connection between two VPCs, what condition must be met?
How can EBS be accessed?
What is the maximum key length in S3? Answer: UTF-8, 1024 bytes.
Which activity cannot be done using auto scaling? Answer: Maintaining a fixed number of running EC2 instances.
How will you secure data at rest in EBS? Answer: Use encrypted EBS volumes.
What is the maximum size of an S3 object? Answer: 5 TB.
Can objects in Amazon S3 be delivered through Amazon CloudFront? Answer: Yes.
Q239) Which service is used to distribute content to end users using a global network of edge locations? Answer: Amazon CloudFront.
What is ephemeral storage? Answer: Temporary storage.
What are shards in the Kinesis AWS service? Answer: Shards are used to store data in Kinesis.
Where can you find the ephemeral storage?
A logically isolated section provisioned on the public cloud: what is this architecture called? Answer: Virtual private cloud.
Route 53 can be used to route users to infrastructure outside of AWS: true or false? Answer: True.
Is Simple Workflow Service one of the valid Simple Notification Service subscribers? Answer: No.
Q246) Which cloud model do developers and organizations all around the world leverage extensively?
Can CloudFront serve content from a non-AWS origin server? Answer: Yes.
Is EFS a centralized storage service in AWS? Answer: Yes.
Q249) Which AWS service will you use to collect and process e-commerce data for near-real-time analysis?
What would you recommend?
Hadoop and Spark scenario questions:
1) How much data are you processing every day?
2) What types of data are you working on?
3) How many tables do you have in your RDBMS, and what is the size of each table on average?
4) How do you use Sqoop incremental load, and how do you stop a Sqoop incremental job?
5) How many rows do you get after running select * from tablename?
6) How much time does it take to process the data in Hive and Spark?
7) How much data is appended every day?
8) At what frequency (how many times a day or a week) do you run your Sqoop job?
9) How are you reusing RDDs (RDD transformations)? Give scenarios.
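On the Kinesis shards question above: Kinesis routes each record by taking the MD5 hash of its partition key and placing it in the shard whose hash-key range contains that value. The sketch below models that routing with equal-sized ranges over the 128-bit space; the shard count and keys are hypothetical, and real streams can have unequal ranges after resharding:

```python
import hashlib

NUM_SHARDS = 4
SPACE = 2 ** 128  # Kinesis maps MD5(partition key) onto a 128-bit hash-key space

def shard_for(partition_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Each shard owns an equal slice of the hash-key space; a record lands
    in the shard whose range contains MD5(partition_key) (simplified)."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * num_shards // SPACE

# The same key always maps to the same shard, which preserves per-key ordering.
print([shard_for(k) for k in ("user-1", "user-2", "user-3")])
```

This determinism is why the partition key choice matters: a skewed key (say, one hot user) concentrates traffic on one shard and caps throughput there.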
10) What types of processing are you doing through Hive and Spark?
11) When to use RDD and DataFrame? https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
12) Why is an RDD immutable? https://www.quora.com/Why-is-RDD-immutable-in-Spark
13) What is the difference between SparkContext and SparkSession? http://data-flair.training/forums/topic/sparksession-vs-sparkcontext-in-apache-spark
14) What is the difference between partitioning, bucketing, repartitioning, and coalesce? https://data-flair.training/blogs/hive-partitioning-vs-bucketing/ https://stackoverflow.com/questions/31610971/spark-repartition-vs-coalesce
15) How do you debug your Spark code, and how do you build the jar?
16) How are you scheduling your jobs using Oozie?
17) How do you select data from a Hive external partitioned table? http://blog.zhengdong.me/2012/02/22/hive-external-table-with-partitions/
18) When to use Spark client and cluster mode?
19) What is your cluster configuration, and what are the versions of each component?
20) Real-time use cases of companion objects and traits?
21) How do you fetch data from a Scala list in a parallel manner? Not exactly the answer, but helpful: https://alvinalexander.com/scala/how-to-use-parallel-collections-in-scala-performance
22) How do you increase Spark executor memory and Hive utilization memory? https://stackoverflow.com/questions/26562033/how-to-set-apache-spark-executor-memory
23) Different types of NoSQL databases. What is the difference between HBase and Cassandra?
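For item 14, the core distinction can be shown concretely: partitioning puts each distinct key value in its own directory, while bucketing spreads rows across a fixed number of files by hashing the bucket column. The sketch below is a simplified pure-Python model with hypothetical table and column names; Hive's actual hash function varies by column type (for integers it is effectively the value itself):

```python
# Bucketing: hash(column) % NUM_BUCKETS picks the file a row lands in.
NUM_BUCKETS = 4

def bucket_for(user_id: int) -> int:
    """Simplified Hive-style bucket assignment for an integer column."""
    return user_id % NUM_BUCKETS

def partition_path(country: str) -> str:
    """Partitioning: one directory per distinct partition-column value."""
    return f"/warehouse/orders/country={country}/"

print(partition_path("IN"))                    # one directory per partition value
print([bucket_for(u) for u in (1, 2, 5, 9)])   # [1, 2, 1, 1]
```

The practical consequence: partition by low-cardinality columns you filter on (directories are pruned at query time), and bucket by high-cardinality join keys (matching buckets enable bucket map joins and efficient sampling).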
https://www.3pillarglobal.com/insights/exploring-the-different-types-of-nosql-databases https://data-flair.training/blogs/hbase-vs-cassandra/
24) Questions regarding the different file formats of Hadoop, and when to use them. Useful blog: https://community.hitachivantara.com/community/products-and-solutions/pentaho/blog/2017/11/07/hadoop-file-formats-its-not-just-csv-anymore
25) What is the difference between Hive map join and Hive bucket join? https://data-flair.training/blogs/map-join-in-hive/ https://data-flair.training/blogs/bucket-map-join/
26) Performance optimization techniques in Sqoop, Hive, and Spark. Hive: https://hortonworks.com/blog/5-ways-make-hive-queries-run-faster/
27) End-to-end project flow and the usage of all the Hadoop ecosystem components.
28) Why does Apache Spark exist, and how does PySpark fit into the picture?
29) What is the file size you are using in your development and production environments?
30) Use cases of accumulators and broadcast variables?
31) Explain the difference between internal and external tables. When to use each? Answer: 1. Use an internal table if its data won't be used by other big data ecosystems. 2. Use an external table if its data will be used by other big data ecosystems, as a table drop operation then has no impact on the data.
32) How did you run Hive load scripts in production? Answer: All the Hive commands were kept in .sql files (for example, load ordersdata.sql), and these files were invoked in a Unix shell script through the command: hive -f ordersdata.sql. These Unix scripts had a few other HDFS commands as well, for example to load data into HDFS, make a backup on the local file system, send an email once the load was done, etc. These Unix scripts were called through an enterprise scheduler (Control-M, Autosys, or Zookeeper).
33) Why doesn't Hive store metadata in HDFS? Answer: Storing metadata in HDFS would result in high latency, given that HDFS reads and writes are sequential. So metadata is stored in the metastore instead, to achieve low latency through random access in the metastore database (MySQL).
34) Which file format works best with Hive tables, and why?
35) How do you append files in various DFS?
36) How will you solve this problem? List the steps you will take in order to do so.
37) Why will MapReduce not run if you run select * from table in Hive?
38) How do you import the first 10 records from an RDBMS table into HDFS using Sqoop? How do you import all the records except the first 20 rows, and also the last 50 records, using Sqoop import?
39) What is the difference between Kafka and Flume?
40) How do you change the replication factor, and how do you change the number of mappers and reducers?
41) How do the number of partitions and stages get decided in Spark?
42) What is the default number of mappers and reducers in a MapReduce job?
43) How do you change the block size while importing data into HDFS?
44) What settings need to be done while doing dynamic partitioning and bucketing?
45) How do you run a MapReduce job and a Spark job?
46) What are Datasets in Spark, and how do you create and use them?
47) What are the differences between Hive and HBase, Hive and RDBMS, NoSQL and RDBMS?
48) What are the differences between Hadoop and RDBMS?
49) What are the differences between Hadoop and Spark?
50) What are the differences between Scala and Java?
51) What are the advantages and disadvantages of functional programming?
52) What are the advantages of Hadoop over distributed file systems?
53) Core concepts of MapReduce: internal architecture and job flow.
54) Architecture of Hadoop, YARN, and Spark.
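The MapReduce job flow in items 42 and 53 (map, shuffle/sort, reduce) can be simulated end-to-end in a few lines of plain Python. This is a teaching sketch of the word-count pattern, not Hadoop's API:

```python
from collections import defaultdict
from itertools import chain

def map_phase(line: str):
    """Mapper: emit a (word, 1) pair for every word in the input split."""
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle/sort: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the grouped counts per word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big cluster", "data lake"]
result = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
print(result)  # {'big': 2, 'data': 2, 'cluster': 1, 'lake': 1}
```

Question 37 follows from this structure: a plain `select * from table` needs no grouping or aggregation, so Hive can answer it with a direct fetch and skips launching a MapReduce job entirely.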
55) What are the advantages of using YARN as a cluster manager over Mesos and the Spark standalone cluster manager?
Company Specific Questions
Company: Fidelity Date: 07-Aug-2018
1) What security authentication are you using, and how are you managing it?
2) About Sentry security authentication.
3) How do you schedule the jobs in the Fair Scheduler?
4) Prioritizing jobs: how are you doing it?
5) How are you doing access control (ACLs) for HDFS?
6) Disaster recovery activities.
7) What issues have you faced so far?
8) Do you know about Puppet?
9) Hadoop development activities.
Company: Accenture Date: 06-July-2018
1) What are your daily activities, and what are your roles and responsibilities in your current project?
2) What are the services that are implemented in your current project?
3) What have you done for performance tuning?
4) What is the block size in your project? Explain your current project process.
5) Have you used Storm, Kafka, or Solr services in your project?
6) Have you used the Puppet tool?
7) Have you used security in your project? Why do you use security in your cluster?
8) Explain how Kerberos authentication happens.
9) What is your cluster size, and what are the services you are using?
10) Do you have good hands-on experience in Linux?
11) Have you used Flume or Storm in your project?
Company: ZNA Date: 04-July-2018
1) Roles and responsibilities in your current project.
2) What do you monitor in the cluster, i.e., what do you monitor to ensure that the cluster is in a healthy state?
3) What is JVM?
4) What is rack awareness?
5) What is high availability? How do you implement high availability on a pre-existing cluster with a single node? What are the requirements to implement HA?
6) What is Hive? How do you install and configure it from the CLI?
11) What are disk space and disk quota?
12) How do you add datanodes to your cluster without using Cloudera Manager?
13) How do you add disk space to a datanode which is already added to the cluster? And how do you format the disk before adding it to the cluster?
14) How good are you at shell scripting? Have you used shell scripting to automate any of your activities? What are the activities that are automated using shell scripting in your current project?
15) What are the benefits of YARN compared to Hadoop-1?
16) Difference between MR1 and MR2?
18) Most challenging situations that you went through in your project.
19) Activities performed on Cloudera Manager.
20) How will you know about the threshold? Do you check manually every time? Do you know about Puppet, etc.?
21) How many clusters and nodes are present in your project?
22) You got a call when you were out of office saying there is not enough space, i.e., the HDFS threshold has been reached. What is your approach to resolve this issue?
23) Heartbeat messages: are they sequential processing or parallel processing?
24) What is the volume of data you receive to your cluster every day?
1) What is HDFS?
2) How do you upgrade a node?
3) How do you copy config files to other nodes?
4) What security system do you follow, and what is the difference without Kerberos?
5) What are JN (JournalNode) and HA?
6) What is the usage of SNN (Secondary NameNode)?
7) Usage of automatic failover: how do you do it, and what are the other methods?
8) How do you load data from Teradata to Hadoop? Are you using Impala?
9) Could you describe your day-to-day activities?
10) What is the process to integrate the metastore for Hive? Could you explain the process?
3) Do you have any idea about dfs.name.dir?
4) What will happen when a datanode is down?
5) How will you test whether a datanode is working or not?
6) Do you have an idea about zombie processes?
7) How will the namenode know a datanode is down? Nagios alert, admin-report (command), Cloudera Manager.
8) Heartbeat: is it sequential processing or parallel processing?
9) What is the volume of data you receive to the cluster? 40 to 50 GB.
10) How do you receive data to your cluster?
11) What is your cluster size?
12) What is the port number of the namenode?
13) What is the port number of the job tracker?
14) How do you install Hive, Pig, and HBase?
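Capacity questions like the ones above (HDFS threshold reached, adding datanodes, daily 40-50 GB ingest) come down to simple block and replication arithmetic. A back-of-the-envelope sketch, assuming the common 128 MB block size and replication factor 3:

```python
import math

def blocks_needed(file_size_mb: float, block_size_mb: int = 128) -> int:
    """A file is split into fixed-size blocks; the last block may be partial."""
    return math.ceil(file_size_mb / block_size_mb)

def raw_storage_needed_gb(data_gb: float, replication: int = 3) -> float:
    """Every block is stored `replication` times across the datanodes."""
    return data_gb * replication

print(blocks_needed(300))          # 3 blocks of 128 MB for a 300 MB file
print(raw_storage_needed_gb(50))   # 150.0 GB of raw capacity for 50 GB of data
```

So a cluster ingesting 50 GB/day at replication 3 consumes roughly 150 GB of raw capacity per day, which is the number to compare against the free-space threshold before deciding how many datanodes to add.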
15) What is JVM?
16) How do you do rebalancing?
Company: Verizon Date: 02-Oct-2017
1) How do you do passwordless SSH in Hadoop?
2) Upgrades (have you done any)?
3) Cloudera Manager port number?
4) What is your cluster size?
5) Versions?
6) MapReduce version?
7) Daily activities?
8) What operations do you normally use in Cloudera Manager?
9) Is the internet connected to your nodes?
10) Do you have different Cloudera Managers for dev and production?
11) What are the installation steps?
Company: HCL Date: 22-Sep-2017
1) Daily activities?
2) Versions?
3) What is decommissioning?
4) What is the procedure to decommission a datanode?
5) Difference between MR1 and MR2?
6) Difference between Hadoop 1 and Hadoop 2?
7) Difference between RDBMS and NoSQL?
8) What is the use of Nagios?
Company: Collabera Date: 14-Mar-2018
1) Provide your roles and responsibilities.
2) What do you do for cluster management?
3) At midnight, you got a call saying there is not enough space, i.e., the HDFS threshold has been reached. What is your approach to resolve this issue?
4) How many clusters and nodes are present in your project?
5) How will you know about the threshold? Do you check manually every time? Do you know about Puppet, etc.?
6) Code was tested successfully in Dev and Test. When deployed to Production, it is failing.
As an admin, how do you track the issue?
What is decommissioning?
What is the file size you've used?
How long does it take to run your script in the production cluster?
What is the file size for the production environment?
Are you planning anything to improve the performance?
What size of file do you use for development?
What did you do to increase the performance of Hive and Pig?
What is your cluster size?
What are the challenges you have faced in your project? Give 2 examples.
How do you debug a production issue? (logs, scripts, counters, JVM)
How do you select the ecosystem tools for your project?
How many nodes are you using currently?
What is the job scheduler you use in the production cluster?
More questions:
1) What are your day-to-day activities?
2) How do you add a datanode to the cluster?
YouTube videos of these interview questions with explanations: https://www.youtube.com/c/SauravAgarwal