what is dropout and batch normalization?
| null |
what is gradient descent?
| null |
why is TensorFlow the most preferred library in deep learning?
| null |
what do you mean by a tensor in TensorFlow?
| null |
what is the computational graph?
| null |
what are the differences between supervised and unsupervised learning?
|
Supervised learning: uses known and labeled data as input; has a feedback mechanism; the most commonly used algorithms are decision trees, logistic regression, and support vector machines.
Unsupervised learning: uses unlabeled data as input; has no feedback mechanism; the most commonly used algorithms are k-means clustering, hierarchical clustering, and the Apriori algorithm.
|
how is logistic regression done?
| null |
how do you build a random forest model?
| null |
how can you avoid overfitting your model?
| null |
what are the feature selection methods used to select the right variables?
| null |
if a dataset has missing values, how will you deal with them?
| null |
for the given points, how will you calculate the euclidean distance in python?
|
Given plot1 = [1,3] and plot2 = [2,5], the Euclidean distance can be calculated as follows:

```python
from math import sqrt

plot1 = [1, 3]
plot2 = [2, 5]
euclidean_distance = sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)
print(euclidean_distance)  # sqrt(1 + 4) = 2.236...
```
|
what is dimensionality reduction, and what are its benefits?
| null |
how will you calculate eigenvalues and eigenvectors of the following 3x3 matrix?
| null |
how should you maintain a deployed model?
| null |
how do you find rmse and mse in a linear regression model?
|
RMSE and MSE are two of the most common measures of accuracy for a linear regression model. RMSE indicates the Root Mean Square Error; MSE indicates the Mean Square Error.
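A minimal sketch of computing both metrics (assuming scikit-learn and NumPy; the values below are made up for illustration):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([3.0, 5.0, 7.5, 10.0])   # actual values (illustrative)
y_pred = np.array([2.8, 5.4, 7.0, 10.3])   # model predictions (illustrative)

mse = mean_squared_error(y_true, y_pred)   # Mean Square Error
rmse = np.sqrt(mse)                        # Root Mean Square Error = sqrt(MSE)
print(mse, rmse)
```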
|
how can you select k for k-means?
| null |
what is the significance of the p-value?
|
p-value ≤ 0.05: indicates strong evidence against the null hypothesis, so you reject the null hypothesis.
p-value > 0.05: indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
p-value at the 0.05 cutoff: considered to be marginal, meaning it could go either way.
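As a hedged illustration of where a p-value comes from in practice (assuming SciPy; the sample values and hypothesized mean are made up):

```python
import numpy as np
from scipy import stats

sample = np.array([5.1, 4.9, 6.2, 5.8, 5.5, 6.0, 5.3, 5.9])
# One-sample t-test against the null hypothesis that the population mean is 5.0
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(p_value)  # reject the null at the 0.05 level if p_value <= 0.05
```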
|
how can time series data be declared stationary?
| null |
how can you calculate accuracy using a confusion matrix?
| null |
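One way to frame the answer, as a sketch: for a binary confusion matrix with true positives TP, true negatives TN, false positives FP, and false negatives FN, accuracy is the share of correct predictions: Accuracy = (TP + TN) / (TP + TN + FP + FN).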
"people who bought this also bought..." recommendations seen on amazon are a result of which algorithm?
| null |
what is a generative adversarial network?
| null |
you have built a classification model and achieved high accuracy, but the model shows poor real-world performance. what can you do about it?
| null |
which of the following machine learning algorithms can be used for imputing missing values of both categorical and continuous variables?
|
K-means clustering, Linear regression, K-NN (k-nearest neighbor), Decision trees. The K-nearest neighbor algorithm can be used because it can compute the nearest neighbor, and if it doesn't have a value, it just computes the nearest neighbor based on all the other features. When you're dealing with K-means clustering or linear regression, you need to handle missing values in your pre-processing; otherwise, they'll crash. Decision trees have the same problem, although there is some variance.
|
below are the eight actual values of the target variable in the train file: [0, 0, 0, 1, 1, 1, 1, 1]. what is the entropy of the target variable?
|
Choose the correct answer: 1. -(5/8 log(5/8) + 3/8 log(3/8)) 2. 5/8 log(5/8) + 3/8 log(3/8) 3. 3/8 log(5/8) + 5/8 log(3/8) 4. 5/8 log(3/8) - 3/8 log(5/8). The target variable, in this case, is 1. The formula for calculating the entropy is Entropy = -(p/N) log(p/N) - (n/N) log(n/N), where p is the number of positive values, n the number of negative values, and N = p + n. Putting p = 5, n = 3, and N = 8, we get Entropy = -(5/8 log(5/8) + 3/8 log(3/8)), so option 1 is correct.
|
we want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. what is the most appropriate algorithm for this case?
| null |
after studying the behavior of a population, you have identified four specific individual types that are valuable to your study. you would like to find all users who are most similar to each individual type. which algorithm is most appropriate for this study?
|
Choose the correct option: 1. K-means clustering 2. Linear regression 3. Association rules 4. Decision trees. As we are looking for grouping people together specifically by four different similarities, it indicates the value of k. Therefore, K-means clustering (option 1) is the most appropriate algorithm for this study.
|
you have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. what else must be true?
| null |
you want to determine whether offering a coupon to website visitors has any impact on their purchase decisions. which analysis method should you use?
| null |
what are the feature vectors?
| null |
what are the steps in making a decision tree?
|
1. Take the entire data set as input.
2. Look for a split that maximizes the separation of the classes. A split is any test that divides the data into two sets.
3. Apply the split to the input data (divide step).
4. Re-apply steps one and two to the divided data.
5. Stop when you meet any stopping criteria.
6. Clean up the tree if you went too far doing splits (this step is called pruning).
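A minimal sketch of these steps in practice (assuming scikit-learn, which performs the splitting internally; the Iris data and max_depth value are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# max_depth acts as a stopping criterion, limiting how far splits can go
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # accuracy on the training data
```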
|
what is root cause analysis?
| null |
what is logistic regression?
| null |
what is collaborative filtering?
|
Most recommender systems use this filtering process to find patterns and information by collaborating perspectives, numerous data sources, and several agents.
|
do gradient descent methods always converge to similar points?
|
They do not, because in some cases they reach a local minimum or a local optimum point. You would not reach the global optimum point. This is governed by the data and the starting conditions.
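A toy illustration of this point (all values assumed): f(x) = x**4 - 3*x**2 + x has two local minima, so gradient descent converges to different points depending on where it starts:

```python
def grad(x):
    # derivative of f(x) = x**4 - 3*x**2 + x
    return 4 * x**3 - 6 * x + 1

for x0 in (-2.0, 2.0):        # two different starting conditions
    x, lr = x0, 0.01          # lr is the learning rate
    for _ in range(1000):
        x -= lr * grad(x)     # step against the gradient
    print(f"start={x0}, converged to x={x:.4f}")
```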
|
what is the goal of a/b testing?
|
This is statistical hypothesis testing for randomized experiments with two variables, A and B. The objective of A/B testing is to detect any changes to a web page to maximize or increase the outcome of a strategy.
|
what are confounding variables?
|
These are extraneous variables in a statistical model that correlate directly or inversely with both the dependent and the independent variable. The estimate fails to account for the confounding factor.
|
what is star schema?
| null |
how regularly must an algorithm be updated?
|
You will want to update an algorithm when:
You want the model to evolve as data streams through the infrastructure
The underlying data source is changing
There is a case of non-stationarity
|
what are eigenvalue and eigenvector?
| null |
why is resampling done?
|
Resampling is done in any of these cases:
Estimating the accuracy of sample statistics by using subsets of accessible data, or drawing randomly with replacement from a set of data points
Substituting labels on data points when performing significance tests
Validating models by using random subsets (bootstrapping, cross-validation)
|
what is selection bias?
|
Selection bias, in general, is a problematic situation in which error is introduced due to a non-random population sample.
|
what are the types of biases that can occur during sampling?
|
1. Selection bias 2. Undercoverage bias 3. Survivorship bias
|
what is survivorship bias?
|
Survivorship bias is the logical error of concentrating on the aspects that survived some process and overlooking those that did not because of their lack of prominence. This can lead to wrong conclusions in numerous ways.
|
how do you work towards a random forest?
|
The underlying principle of this technique is that several weak learners combine to provide a strong learner. The steps involved are:
1. Build several decision trees on bootstrapped training samples of data
2. On each tree, each time a split is considered, a random sample of m predictors is chosen as split candidates out of all p predictors
3. Rule of thumb: at each split, m = √p
4. Predictions: by the majority rule
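A minimal sketch with scikit-learn (the dataset and hyperparameters are illustrative): max_features="sqrt" corresponds to the m = √p rule of thumb, and predictions are combined by majority vote:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(
    n_estimators=100,      # number of bootstrapped decision trees
    max_features="sqrt",   # sample m = sqrt(p) predictors at each split
    bootstrap=True,        # build each tree on a bootstrapped sample
    random_state=0,
)
rf.fit(X, y)
print(rf.score(X, y))
```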
|
what are the important skills to have in python with regard to data analysis?
|
The following are some of the important skills to possess which will come in handy when performing data analysis using Python:
Good understanding of the built-in data types, especially lists, dictionaries, tuples, and sets
Mastery of N-dimensional NumPy arrays
Mastery of Pandas dataframes
Ability to perform element-wise vector and matrix operations on NumPy arrays
Knowing that you should use the Anaconda distribution and the conda package manager
Familiarity with Scikit-learn
Ability to write efficient list comprehensions instead of traditional for loops
Ability to write small, clean functions (important for any developer), preferably pure functions that don't alter objects
Knowing how to profile the performance of a Python script and how to optimize bottlenecks
Credit: KDnuggets, Simplilearn, Edureka, Guru99, Hackernoon, Datacamp, Nitin Panwar, Michael Rundell
|
How do you subset or filter data in SQL?
|
To subset or filter data in SQL, we use WHERE and HAVING clauses.
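A hedged sketch of the difference, using Python's built-in sqlite3 module (the table and values are made up): WHERE filters individual rows before grouping, while HAVING filters groups after aggregation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 100), ("east", 250), ("west", 80), ("west", 40)])

# WHERE: row-level filter applied before any grouping
print(con.execute("SELECT * FROM sales WHERE amount > 90").fetchall())

# HAVING: group-level filter applied after aggregation
print(con.execute(
    "SELECT region, SUM(amount) FROM sales "
    "GROUP BY region HAVING SUM(amount) > 200"
).fetchall())
```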
|
What is the difference between a WHERE clause and a HAVING clause in SQL?
| null |
How are Union, Intersect, and Except used in SQL?
| null |
What is a Subquery in SQL?
|
A Subquery in SQL is a query within another query. It is also known as a nested query or an inner query. Subqueries are used to enhance the data to be queried by the main query.
It is of two types - Correlated and Non-Correlated Query.
|
How is joining different from blending in Tableau?
| null |
What do you understand by LOD in Tableau?
|
LOD in Tableau stands for Level of Detail. It is an expression that is used to execute complex queries involving many dimensions at the data sourcing level. Using LOD expression, you can find duplicate values, synchronize chart axes and create bins on aggregated data.
|
Can you discuss the process of feature selection and its importance in data analysis?
|
Feature selection is the process of selecting a subset of relevant features from a larger set of variables or predictors in a dataset. It aims to improve model performance, reduce overfitting, enhance interpretability, and optimize computational efficiency.
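One possible sketch of this process (assuming scikit-learn; univariate selection with ANOVA F-scores is just one of several strategies):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
selector = SelectKBest(score_func=f_classif, k=2)  # keep the 2 highest-scoring features
X_new = selector.fit_transform(X, y)
print(X.shape, "->", X_new.shape)  # (150, 4) -> (150, 2)
```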
|
What are the different connection types in Tableau Software?
|
There are mainly 2 types of connections available in Tableau.
Extract: An extract is an image of the data that will be extracted from the data source and placed into the Tableau repository. This image (snapshot) can be refreshed periodically, fully, or incrementally.
Live: The live connection makes a direct connection to the data source. The data will be fetched straight from tables. So, data is always up to date and consistent.
|
What are the different joins that Tableau provides?
|
Joins in Tableau work similarly to the SQL join statement. Below are the types of joins that Tableau supports:
Left Outer Join
Right Outer Join
Full Outer Join
Inner Join
|
What is a Gantt Chart in Tableau?
|
A Gantt chart in Tableau depicts the progress of a value over a period, i.e., it shows the duration of events. It consists of bars along a time axis. The Gantt chart is mostly used as a project management tool, where each bar is a measure of a task in the project.
|
What is the difference between Treemaps and Heatmaps in Tableau?
| null |
What is the correct syntax for reshape() function in NumPy?
| null |
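Since the answer is missing, here is a minimal sketch of the syntax (array values are illustrative):

```python
import numpy as np

arr = np.arange(6)         # [0 1 2 3 4 5]
print(arr.reshape(2, 3))   # ndarray.reshape(rows, cols); np.reshape(arr, (2, 3)) also works
```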
What are the different ways to create a data frame in Pandas?
|
There are two ways to create a Pandas data frame.
By initializing a list
By initializing a dictionary
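A minimal sketch of both approaches (the column names and values are made up):

```python
import pandas as pd

# 1. By initializing a list (here, a list of rows)
df_from_list = pd.DataFrame([["Alice", 25], ["Bob", 30]], columns=["Name", "Age"])

# 2. By initializing a dictionary of column -> values
df_from_dict = pd.DataFrame({"Name": ["Alice", "Bob"], "Age": [25, 30]})

print(df_from_list)
print(df_from_dict)
```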
|
Write the Python code to create an employee’s data frame from the “emp.csv” file and display the head and summary.
| null |
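Since no answer is given, here is a minimal sketch (assuming pandas and that "emp.csv" is present in the working directory):

```python
import pandas as pd

emp = pd.read_csv("emp.csv")  # create the employee data frame from the CSV file
print(emp.head())             # display the first five rows
print(emp.describe())         # display summary statistics for numeric columns
```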
How will you select the Department and Age columns from an Employee data frame?
| null |
Suppose there is an array that has values [0,1,2,3,4,5,6,7,8,9]. How will you display the following values from the array - [1,3,5,7,9]?
|
Since we only want the odd numbers from 0 to 9, you can perform the modulus operation and check if the remainder is equal to 1.
|
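A sketch of the modulus approach described above (assuming NumPy):

```python
import numpy as np

arr = np.arange(10)         # [0 1 2 3 4 5 6 7 8 9]
print(arr[arr % 2 == 1])    # keep values whose remainder mod 2 is 1 -> [1 3 5 7 9]
```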
How can you add a column to a Pandas Data Frame?
| null |
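Since no answer is given, one common way, as a sketch (the column name and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Alice", "Bob"]})
df["Department"] = ["HR", "IT"]   # assigning a list to a new label adds a column
print(df)
```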
How will you print four random integers between 1 and 15 using NumPy?
|
To generate random integers using NumPy, we use the numpy.random.randint() function.
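A minimal sketch (note that randint's upper bound is exclusive, so 16 is used to allow 15 to appear):

```python
import numpy as np

print(np.random.randint(1, 16, size=4))  # four random integers between 1 and 15
```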
|
What do data analysts do?
|
Outline the main tasks of a data analyst: identify, collect, clean, analyze, and interpret. Talk about how these tasks can lead to better business decisions, and be ready to explain the value of data-driven decision-making.
|
What was your most successful/most challenging data analysis project?
|
Getting asked about a project you’re proud of is your chance to highlight your skills and strengths. Do this by discussing your role in the project and what made it so successful. As you prepare your answer, take a look at the original job description. See if you can incorporate some of the skills and requirements listed.
|
What is your process for cleaning data?
|
Walk through the steps you typically take to clean a data set. Consider mentioning how you handle:
Missing data, Duplicate data, Data from different sources, Structural errors, Outliers
|
How do you explain technical concepts to a non-technical audience?
|
While drawing insights from data is a critical skill for a data analyst, communicating those insights to stakeholders, management, and non-technical co-workers is just as important. Your answer should include the types of audiences you've presented to in the past (size, background, context). If you don't have a lot of experience presenting, you can still talk about how you'd present data findings differently depending on the audience.
|
Tell me about a time when you got unexpected results.
|
Describe the situation that surprised you and what you learned from it. Take this as an opportunity to demonstrate your natural curiosity and excitement to learn new things from data.
|
What data analytics software are you familiar with?
|
Mention software solutions you’ve used for various stages of the data analysis process.
|
What scripting languages are you trained in?
|
As a data analyst, you'll likely have to use SQL and a statistical programming language like R or Python. If you're already familiar with the language of choice at the company you're applying to, great. If not, you can take this time to show enthusiasm for learning. Point out that your experience with one (or more) languages has set you up for success in learning new ones. Talk about how you're currently growing your skills.
|
What statistical methods have you used in data analysis?
|
Mean, Standard deviation, Variance, Regression, Sample size, Descriptive and inferential statistics
|
How have you used Excel for data analysis in the past?
| null |
What is a VLOOKUP, and what are its limitations?
| null |
What is a pivot table, and how do you make one?
| null |
How do you find and remove duplicate data?
| null |
What are INDEX and MATCH functions, and how do they work together?
| null |
What’s the difference between a function and a formula?
| null |
What is the difference between a 1-sample T-test and a 2-sample T-test in SQL?
| null |
Distinguish between data in long and wide formats.
|
Data in a long format: each row of the data reflects a subject's one-time information, and each subject's data is organized in multiple rows. The data may be identified by viewing rows as groups. This format is commonly used in R analysis and for writing to log files at the end of each experiment.
Data in a wide format: the repeated responses of a subject are split into separate columns. The data may be identified by viewing columns as groups. This format is most widely used in stats packages for repeated measures ANOVAs and is seldom used in R analysis.
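An illustrative pandas sketch of converting between the two formats (the column names and values are made up):

```python
import pandas as pd

wide = pd.DataFrame({"subject": [1, 2], "test1": [80, 75], "test2": [85, 70]})

# wide -> long: one row per subject/test combination
long = wide.melt(id_vars="subject", var_name="test", value_name="score")
print(long)

# long -> wide: pivot the test names back into columns
print(long.pivot(index="subject", columns="test", values="score"))
```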
|
List down the criteria for overfitting and underfitting.
|
Overfitting: The model performs well only on the sample training data. Any new data supplied as input fails to generate correct results. These situations emerge owing to low bias and large variance in the model. Decision trees are usually prone to overfitting. Underfitting: Here, the model is so simple that it cannot find the proper connections in the data, and consequently it does not perform well on the test data. This might arise owing to excessive bias and low variance. Underfitting is more common in linear regression.
|
What exactly does the term "Data Science" mean?
|
Data Science is an interdisciplinary discipline that encompasses a variety of scientific procedures, algorithms, tools, and machine learning techniques that work together to uncover common patterns and gain useful insights from raw input data using statistical and mathematical analysis. Gathering business needs and related data is the first step; data cleansing, data staging, data warehousing, and data architecture are all procedures in the data acquisition process. Exploring, mining, and analyzing data are all tasks that data processing does, and the results may then be utilized to provide a summary of the data's insights. Following the exploratory phases, the cleansed data is exposed to various algorithms, such as predictive analysis, regression, text mining, and pattern recognition, depending on the needs. In the final stage, the outcomes are conveyed to the business in an aesthetically appealing way. This is where the ability to visualize data, report on it, and use other business intelligence tools comes into play.
|
What is the difference between data science and data analytics?
|
Data science involves transforming data using various technical analysis approaches to derive useful insights that data analysts may apply to their business scenarios. Data analytics is concerned with verifying current hypotheses and facts and answering questions for a more efficient and successful business decision-making process. Data science fosters innovation by providing answers to questions that help people make connections and solve challenges in the future. Data analytics is concerned with extracting current meaning from past context, whereas data science is concerned with predictive modelling. Data science is a wide topic that employs a variety of mathematical and scientific tools and methods to solve complicated issues; in contrast, data analytics is a more focused area that employs fewer statistical and visualization techniques to solve particular problems.
|
What are some of the strategies utilized for sampling? What is the major advantage of sampling?
|
Data analysis cannot be done on an entire volume of data at a time, especially when it concerns bigger datasets. It becomes important to obtain data samples that can represent the full population and then analyze them. While doing this, it is vital to properly choose sample data out of the enormous data that represents the complete dataset. There are two types of sampling procedures, depending on the involvement of statistics:
Non-probability sampling techniques: convenience sampling, quota sampling, snowball sampling, etc.
Probability sampling techniques: simple random sampling, clustered sampling, stratified sampling.
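A hedged sketch of two probability-sampling techniques (assuming pandas and scikit-learn; the DataFrame is fabricated for illustration):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"value": range(100), "stratum": [0, 1] * 50})

simple_random = df.sample(n=10, random_state=0)   # simple random sampling
stratified, _ = train_test_split(                 # stratified sampling
    df, train_size=10, stratify=df["stratum"], random_state=0
)
print(simple_random.shape, stratified.shape)
```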
|
What is the difference between eigenvectors and eigenvalues?
|
Eigenvectors are column vectors or unit vectors with a length/magnitude of 1; they are also known as right vectors. Eigenvalues are coefficients applied to eigenvectors that give the vectors varying length or magnitude values. Eigendecomposition is the process of breaking down a matrix into eigenvectors and eigenvalues. These are then utilized in machine learning approaches such as PCA (Principal Component Analysis) to extract useful information from a matrix.
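A minimal NumPy sketch (the matrix values are arbitrary):

```python
import numpy as np

A = np.array([[2, 0, 0],
              [0, 3, 4],
              [0, 4, 9]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # the scaling coefficients
print(eigenvectors)   # unit-length eigenvectors as columns
```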
|
What does it mean to have high and low p-values?
|
A p-value measures the probability of getting outcomes equal to or more extreme than those observed under a certain hypothesis, provided the null hypothesis is true. It indicates the likelihood that the observed discrepancy happened by chance. When the p-value is less than 0.05, we say we have a low p-value; the null hypothesis can be rejected, and the data is unlikely under a true null. A p-value greater than 0.05 is a high p-value, indicating strength in support of the null hypothesis, i.e., the data is consistent with a true null. With a p-value of exactly 0.05, the hypothesis can go either way.
|
When to do re-sampling?
|
Re-sampling is a data sampling procedure that improves accuracy and quantifies the uncertainty of population characteristics. It ensures that the model is efficient by training it on different patterns in a dataset so that variations are taken care of. It is also done when models need to be validated using random subsets, or when performing tests with labels substituted on data points.
|
What does it mean to have "imbalanced data"?
|
Data is said to be highly imbalanced when it is unevenly distributed across several categories. These datasets cause a performance problem in the model and lead to inaccuracies.
|
Do the expected value and the mean value differ in any way?
|
Although there aren't many differences between these two, it's worth noting that they're employed in different contexts. In general, the mean value refers to the probability distribution, whereas the expected value is used in contexts involving random variables.
|
What does Survivorship bias mean to you?
|
This bias refers to the logical fallacy of focusing on parts that survived a procedure while missing others that did not due to their lack of prominence. This bias can lead to incorrect conclusions being drawn.
|
Define key performance indicators (KPIs), lift, model fitting, robustness, and design of experiment (DOE).
|
KPI: a metric that assesses how successfully a company meets its goals.
Lift: measures the target model's performance compared to a random choice model; the lift represents how well the model predicts compared to having no model at all.
Model fitting: measures how well the model under consideration matches the data.
Robustness: refers to the system's capacity to handle variations and variances successfully.
DOE (design of experiment): refers to the work of describing and explaining information variance under postulated settings to reflect factors.
|
Identify confounding variables
|
Another name for confounding variables is confounders. They are extraneous variables that impact both independent and dependent variables, generating erroneous associations and mathematical correlations.
|
What distinguishes time-series issues from other regression problems?
|
Time series data could be considered an extension of linear regression, using terminology such as autocorrelation and moving average to summarize previous data of the y-axis variable to forecast a better future. The major purpose of time series problems is forecasting and predicting, where exact forecasts can be produced even though the determinant factors are not always known. The presence of time in a problem does not necessarily make it a time series problem; for that, there must be a relationship between the target and time. Observations that are closer in time are anticipated to be more similar than those far apart, providing accountability for seasonality. Today's weather, for example, would be comparable to tomorrow's weather but not to the weather four months from now. As a result, forecasting the weather based on historical data becomes a time series challenge.
|
What if a dataset contains variables with more than 30% missing values? How would you deal with such a dataset?
|
We use one of the following methods, depending on the size of the dataset: If the dataset is small, the missing values are replaced with the average or mean of the remaining data. This may be done in pandas by using mean = df.mean(), where df is the pandas data frame containing the dataset and mean() determines the mean of the data. We may then use df.fillna(mean) to fill in the missing values with the computed mean. For bigger datasets, the rows with missing values may be deleted, and the remaining data can be utilized for data prediction.
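A sketch of both strategies described above (the column and values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 30, np.nan, 28]})

filled = df.fillna(df.mean())   # smaller dataset: replace with the column mean
dropped = df.dropna()           # bigger dataset: delete rows with missing values
print(filled, dropped, sep="\n")
```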
|
What is Cross-Validation, and how does it work?
|
Cross-validation is a statistical approach for enhancing the performance of a model. The model is trained and evaluated in rotation using different samples of the training dataset to ensure that it performs adequately on unknown data. The training data is divided into groups, and the model is tested and validated against each group in turn. The most regularly used techniques are:
Leave p-out method
K-Fold method
Holdout method
Leave-one-out method
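A minimal K-Fold sketch with scikit-learn (the model and dataset are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)  # 5-fold CV
print(scores.mean())
```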
|
How do you go about tackling a data analytics project?
|
In general, we follow the steps below:
The first stage is to understand the company's problem or need.
Then, carefully examine and evaluate the data you've been given. If any data is missing, contact the company to clarify the needs.
The next stage is to clean and prepare the data, which will then be utilized for modelling. The variables are transformed, and the missing values are handled here.
Run your model on the data, create meaningful visualizations, and evaluate the findings to acquire useful insights.
Release the model implementation and evaluate its usefulness by tracking the outcomes and performance over a set period.
Validate the model using cross-validation.
|
What is meant by selection bias?
|
Selection bias occurs when no randomization is obtained while selecting a sample subset. This bias indicates that the sample used in the analysis does not reflect the whole population being studied.
|
Why is data cleansing so important? What method do you use to clean the data?
|
It is critical to have correct and clean data that contains only essential information to get good insights when running an algorithm on it. Poor or erroneous insights and projections are frequently the product of contaminated data, resulting in disastrous consequences. For example, when launching a large marketing campaign for a product, if our data analysis instructs us to target a product that in reality has little demand, the campaign will almost certainly fail, and the company's revenue will be reduced. This is when the value of having accurate and clean data becomes apparent. Cleaning data from many sources aids data transformation and produces data that data scientists may work on. Clean data improves the model's performance and results in highly accurate predictions. When a dataset is sufficiently huge, running data on it becomes difficult. The data cleansing stage for large data takes a long time (about 80% of the total time), so performing it before running the model improves the model's speed and efficiency. Data cleaning aids in the detection and correction of structural flaws in a dataset, and it also aids in the removal of duplicates and the maintenance of data consistency.
|
what feature selection strategies are available for picking the right variables?
| null |