25 OCT 2024
Award Ceremony for the 2024 Barbara and Jaroslav Zemánek Prize
We cordially invite everyone to the award ceremony for the Barbara and Jaroslav Zemánek Prize in the field of functional analysis, which will take place as part of the IMPAN Colloquium at the
Institute of Mathematics of the Polish Academy of Sciences (IMPAN) on October 30th, 2024, at 2:15 PM in room 321 (third floor).
Programme of the ceremony:
14:15-15:15 Introductory lecture by Stuart White (University of Oxford)
15:15-15:45 Award of the Prize and a coffee break
15:45–16:45 Lecture of the laureate Christopher Schafhauser (University of Nebraska–Lincoln)
Stuart White
Title: Introduction to the classification of simple nuclear C*-algebras
Abstract: In this talk, I'll give an introduction to simple nuclear C*-algebras and their classification. This will be illustrated by examples coming from group actions. My aim will be to describe at
a high level the paradigm shift from using internal structure to classify, to obtaining internal structure from classification that Chris Schafhauser's work made possible. No prior knowledge of
operator algebras will be assumed.
Christopher Schafhauser
Title: Lifting problems in C*-algebras and applications
Abstract: A classical problem of Halmos asks which essentially normal operators (those commuting with their adjoint modulo a compact operator) on Hilbert space are compact perturbations of normal
operators (those commuting with their adjoint). A complete solution was obtained by Brown, Douglas, and Fillmore in the early 70s, and their solution led to the introduction of algebraic topological
methods in operator algebras. In particular, for a compact metric space X, they considered all embeddings of C(X) into the quotient B(H)/K(H) of bounded operators on a Hilbert space modulo the
compact operators and showed that homotopy classes of such embeddings form an abelian group K_1(X), which is the degree one term for a generalized homology theory dual to topological K-theory.
Building on this, Kasparov developed a much more general extension theory, studying lifting problems along more general quotient maps up to a stabilized notion of homotopy. I will discuss some recent
progress in 'non-stable' extension theory with applications to embedding problems and classification problems for simple nuclear C*-algebras.
Stochastic Gradient Descent: Math and Python Code
1.1: What is Gradient Descent
Image by DALL-E-2
In machine learning, Gradient Descent is a star player. It's an optimization algorithm used to minimize a function by iteratively moving towards the steepest descent as defined by the negative of
the gradient. Like in the picture, imagine you're at the top of a mountain, and your goal is to reach the lowest point. Gradient Descent helps you find the best path down the hill.
The beauty of Gradient Descent is its simplicity and elegance. Here's how it works: you start with a random point on the function you're trying to minimize, for example a random starting point on the
mountain. Then, you calculate the gradient (slope) of the function at that point. In the mountain analogy, this is like looking around you to find the steepest slope. Once you know the direction, you
take a step downhill in that direction, and then you calculate the gradient again. Repeat this process until you reach the bottom.
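To make the loop concrete, here is a minimal sketch in plain Python. The function, starting point, and learning rate are arbitrary choices for illustration: we minimize f(x) = (x − 3)², whose gradient is 2(x − 3).

```python
def gradient_descent(grad, start, learning_rate=0.1, steps=100):
    """Repeatedly step against the gradient from a starting point."""
    x = start
    for _ in range(steps):
        x -= learning_rate * grad(x)  # move downhill
    return x

# Minimize f(x) = (x - 3)**2; its gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), start=10.0)
print(round(minimum, 4))  # converges to 3.0
```

On this quadratic, each pass shrinks the distance to the minimum by a constant factor (here 1 − 2·learning_rate), which is why a larger learning rate converges faster right up until it overshoots.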
The size of each step is determined by the learning rate. However, if the learning rate is too small, it might take a long time to reach the bottom. If it’s too large, you might overshoot the lowest
point. Finding the right balance is key to the success of the algorithm.
One of the most appealing aspects of Gradient Descent is its generality. It can be applied to almost any function, especially those where an analytical solution is not feasible. This makes it
incredibly versatile in solving various types of problems in machine learning, from simple linear regression to complex neural networks.
1.2: The ‘Stochastic’ in Stochastic Gradient Descent
Stochastic Gradient Descent (SGD) adds a twist to the traditional gradient descent approach. The term 'stochastic' refers to a system or process that is linked with a random probability. In SGD,
this randomness is introduced in the way the gradient is calculated, which significantly alters its behavior and efficiency compared to standard gradient descent.
In traditional batch gradient descent, you calculate the gradient of the loss function with respect to the parameters for the entire training set. As you can imagine, for large datasets, this can be
quite computationally intensive and time-consuming. This is where SGD comes into play. Instead of using the entire dataset to calculate the gradient, SGD randomly selects just one data point (or a
few data points) to compute the gradient in each iteration.
Think of this process as if you were again descending a mountain, but this time in thick fog with limited visibility. Rather than viewing the entire landscape to decide your next step, you make your
decision based on where your foot lands next. This step is small and random, but it’s repeated many times, each time adjusting your path slightly in response to the immediate terrain under your feet.
This stochastic nature of the algorithm provides several benefits:
• Speed: By using only a small subset of data at a time, SGD can make rapid progress in reducing the loss, especially for large datasets.
• Escape from Local Minima: The randomness helps SGD to potentially escape local minima, a common problem in complex optimization problems.
• Online Learning: SGD is well-suited for online learning, where the model needs to be updated as new data comes in, due to its ability to update the model incrementally.
However, the stochastic nature also introduces variability in the path to convergence. The algorithm doesn’t smoothly descend towards the minimum; rather, it takes a more zigzag path, which can
sometimes make the convergence process appear erratic.
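To see the difference in code, here is a small sketch (plain Python, with made-up data) of how the gradient is computed in the batch setting versus the stochastic one, for a one-parameter model y = w·x with squared loss:

```python
import random

random.seed(0)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # data generated by y = 2x

def batch_gradient(w):
    # derivative of the mean squared error, averaged over the WHOLE dataset
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def stochastic_gradient(w):
    # the same derivative, evaluated at ONE randomly chosen point
    x, y = random.choice(list(zip(xs, ys)))
    return 2 * (w * x - y) * x

print(batch_gradient(0.0))       # exactly -30.0: the full-data gradient
print(stochastic_gradient(0.0))  # a noisy estimate of the same quantity
```

The stochastic estimate is unbiased — averaged over many random picks it equals the batch gradient — but any single call can point in a somewhat different direction, which is exactly the zigzag behavior described above.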
2.1: The Algorithm Explained
Stochastic Gradient Descent (SGD) might sound complex, but its algorithm is quite straightforward when broken down. Here’s a step-by-step guide to understanding how SGD works:
Initialization (Step 1)
First, you initialize the parameters (weights) of your model. This can be done randomly or by some other initialization technique. The starting point for SGD is crucial as it influences the path the
algorithm will take.
Random Selection (Step 2)
In each iteration of the training process, SGD randomly selects a single data point (or a small batch of data points) from the entire dataset. This randomness is what makes it ‘stochastic’.
Compute the Gradient (Step 3)
Calculate the gradient of the loss function, but only for the randomly selected data point(s). The gradient is a vector that points in the direction of the steepest increase of the loss function. In
the context of SGD, it tells you how to tweak the parameters to make the model more accurate for that particular data point.
Gradient Formula

∇θJ(θ) = (∂J(θ)/∂θ1, ∂J(θ)/∂θ2, ..., ∂J(θ)/∂θn)

Here, ∇θJ(θ) represents the gradient of the loss function J(θ) with respect to the parameters θ. This gradient is a vector of partial derivatives, where each component of the vector is the partial
derivative of the loss function with respect to the corresponding parameter in θ.
Update the Parameters (Step 4)
Adjust the model parameters in the opposite direction of the gradient. Here's where the learning rate η plays a crucial role. The formula for updating each parameter is:

θnew = θold − η ∇θJ(θ)

where:
• θnew represents the updated parameters.
• θold represents the current parameters before the update.
• η is the learning rate, a positive scalar determining the size of the step in the direction of the negative gradient.
• ∇θJ(θ) is the gradient of the loss function J(θ) with respect to the parameters θ.
The learning rate determines the size of the steps you take towards the minimum. If it’s too small, the algorithm will be slow; if it’s too large, you might overshoot the minimum.
Repeat until convergence (Step 5)
Repeat steps 2 to 4 for a set number of iterations or until the model performance stops improving. Each iteration provides a slightly updated model.
Ideally, after many iterations, SGD converges to a set of parameters that minimize the loss function, although due to its stochastic nature, the path to convergence is not as smooth and may oscillate
around the minimum.
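The five steps above can be sketched as a single loop. This toy example (synthetic data generated from y = 2x + 1; all constants are arbitrary) fits a one-feature linear model one random point at a time:

```python
import random

random.seed(42)
data = [(x, 2.0 * x + 1.0) for x in range(1, 6)]  # synthetic points from y = 2x + 1

w, b = 0.0, 0.0   # Step 1: initialize the parameters
eta = 0.02        # learning rate

for _ in range(2000):              # Step 5: repeat until convergence
    x, y = random.choice(data)     # Step 2: randomly select a single data point
    error = (w * x + b) - y        # Step 3: gradient of the squared error (y - (w*x + b))**2
    grad_w = 2 * error * x
    grad_b = 2 * error
    w -= eta * grad_w              # Step 4: step against the gradient, scaled by eta
    b -= eta * grad_b

print(round(w, 2), round(b, 2))  # approaches w = 2, b = 1
```

Because every data point here lies exactly on the line, the per-sample updates all agree near the solution and the loop settles on the true slope and intercept.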
2.2: Understanding Learning Rate
One of the most crucial hyperparameters in the Stochastic Gradient Descent (SGD) algorithm is the learning rate. This parameter can significantly impact the performance and convergence of the model.
Understanding and choosing the right learning rate is a vital step in effectively employing SGD.
What is Learning Rate?
At this point you should have an idea of what learning rate is, but let’s better define it for clarity. The learning rate in SGD determines the size of the steps the algorithm takes towards the
minimum of the loss function. It’s a scalar that scales the gradient, dictating how much the weights in the model should be adjusted during each update. If you visualize the loss function as a
valley, the learning rate decides how big a step you take with each iteration as you walk down the valley.
Too High Learning Rate
If the learning rate is too high, the steps taken might be too large. This can lead to overshooting the minimum, causing the algorithm to diverge or oscillate wildly without finding a stable point.
Think of it as taking leaps in the valley and possibly jumping over the lowest point back and forth.
Too Low Learning Rate
On the other hand, a very low learning rate leads to extremely small steps. While this might sound safe, it significantly slows down the convergence process.
In a worst-case scenario, the algorithm might get stuck in a local minimum or even stop improving before reaching the minimum.
Imagine moving so slowly down the valley that you either get stuck or it takes an impractically long time to reach the bottom.
Finding the Right Balance
The ideal learning rate is neither too high nor too low but strikes a balance, allowing the algorithm to converge efficiently to the global minimum.
Typically, the learning rate is chosen through experimentation and is often set to decrease over time. This approach is called learning rate annealing or scheduling.
Learning Rate Scheduling
Learning rate scheduling involves adjusting the learning rate over time. Common strategies include:
• Time-Based Decay: The learning rate decreases over each update.
• Step Decay: Reduce the learning rate by some factor after a certain number of epochs.
• Exponential Decay: Decrease the learning rate exponentially.
• Adaptive Learning Rate: Methods like AdaGrad, RMSProp, and Adam adjust the learning rate automatically during training.
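The first three strategies can be written down directly; the decay constants below are arbitrary placeholders, not recommended values:

```python
import math

def time_based(lr0, t, decay=0.01):
    # rate shrinks as 1 / (1 + decay * t)
    return lr0 / (1 + decay * t)

def step_decay(lr0, epoch, drop=0.5, every=10):
    # cut the rate by `drop` after every `every` epochs
    return lr0 * drop ** (epoch // every)

def exponential_decay(lr0, t, k=0.05):
    # smooth exponential shrinkage
    return lr0 * math.exp(-k * t)

for epoch in (0, 10, 50):
    print(epoch, time_based(0.1, epoch), step_decay(0.1, epoch), exponential_decay(0.1, epoch))
```

All three start at the same base rate and differ only in how aggressively they shrink it; adaptive methods like AdaGrad and Adam instead derive the effective rate from the gradients themselves.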
3.1: Implementing SGD in Machine Learning Models
Link to the full code (Jupyter Notebook): https://github.com/cristianleoo/models-from-scratch-python/blob/main/sgd.ipynb
Implementing Stochastic Gradient Descent (SGD) in machine learning models is a practical step that brings the theoretical aspects of the algorithm into real-world application. This section will guide
you through the basic implementation of SGD and provide tips for integrating it into machine learning workflows.
Now let’s consider a simple case of SGD applied to Linear Regression:
import numpy as np

class SGDRegressor:
    def __init__(self, learning_rate=0.01, epochs=100, batch_size=1, reg=None, reg_param=0.0):
        """
        Constructor for the SGDRegressor.

        learning_rate (float): The step size used in each update.
        epochs (int): Number of passes over the training dataset.
        batch_size (int): Number of samples to be used in each batch.
        reg (str): Type of regularization ('l1' or 'l2'); None if no regularization.
        reg_param (float): Regularization parameter.

        The weights and bias are initialized as None and will be set during the fit method.
        """
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.batch_size = batch_size
        self.reg = reg
        self.reg_param = reg_param
        self.weights = None
        self.bias = None

    def fit(self, X, y):
        """
        Fits the SGDRegressor to the training data.

        X (numpy.ndarray): Training data, shape (m_samples, n_features).
        y (numpy.ndarray): Target values, shape (m_samples,).

        This method initializes the weights and bias, and then updates them over a number of epochs.
        """
        m, n = X.shape  # m is number of samples, n is number of features
        self.weights = np.zeros(n)
        self.bias = 0
        for _ in range(self.epochs):
            indices = np.random.permutation(m)
            X_shuffled = X[indices]
            y_shuffled = y[indices]
            for i in range(0, m, self.batch_size):
                X_batch = X_shuffled[i:i + self.batch_size]
                y_batch = y_shuffled[i:i + self.batch_size]
                gradient_w = -2 * np.dot(X_batch.T, (y_batch - np.dot(X_batch, self.weights) - self.bias)) / self.batch_size
                gradient_b = -2 * np.sum(y_batch - np.dot(X_batch, self.weights) - self.bias) / self.batch_size
                if self.reg == 'l1':
                    gradient_w += self.reg_param * np.sign(self.weights)
                elif self.reg == 'l2':
                    gradient_w += self.reg_param * self.weights
                self.weights -= self.learning_rate * gradient_w
                self.bias -= self.learning_rate * gradient_b

    def predict(self, X):
        """
        Predicts the target values using the linear model.

        X (numpy.ndarray): Data for which to predict target values.

        Returns:
            numpy.ndarray: Predicted target values.
        """
        return np.dot(X, self.weights) + self.bias

    def compute_loss(self, X, y):
        """
        Computes the loss of the model.

        X (numpy.ndarray): The input data.
        y (numpy.ndarray): The true target values.

        Returns:
            float: The computed loss value.
        """
        return (np.mean((y - self.predict(X)) ** 2) + self._get_regularization_loss()) ** 0.5

    def _get_regularization_loss(self):
        """
        Computes the regularization loss based on the regularization type.

        Returns:
            float: The regularization loss.
        """
        if self.reg == 'l1':
            return self.reg_param * np.sum(np.abs(self.weights))
        elif self.reg == 'l2':
            return self.reg_param * np.sum(self.weights ** 2)
        return 0

    def get_weights(self):
        """
        Returns the weights of the model.

        Returns:
            numpy.ndarray: The weights of the linear model.
        """
        return self.weights
Let’s break it down into smaller steps:
Initialization (Step 1)
def __init__(self, learning_rate=0.01, epochs=100, batch_size=1, reg=None, reg_param=0.0):
    self.learning_rate = learning_rate
    self.epochs = epochs
    self.batch_size = batch_size
    self.reg = reg
    self.reg_param = reg_param
    self.weights = None
    self.bias = None
The constructor (__init__ method) initializes the SGDRegressor with several parameters:
• learning_rate: The step size used in updating the model.
• epochs: The number of passes over the entire dataset.
• batch_size: The number of samples used in each batch for SGD.
• reg: The type of regularization (either ‘l1’ or ‘l2’; None if no regularization is used).
• reg_param: The regularization parameter.
• weights and bias are set to None initially and will be initialized in the fit method.
Fit the Model (Step 2)
def fit(self, X, y):
    m, n = X.shape  # m is number of samples, n is number of features
    self.weights = np.zeros(n)
    self.bias = 0
    for _ in range(self.epochs):
        indices = np.random.permutation(m)
        X_shuffled = X[indices]
        y_shuffled = y[indices]
        for i in range(0, m, self.batch_size):
            X_batch = X_shuffled[i:i + self.batch_size]
            y_batch = y_shuffled[i:i + self.batch_size]
            gradient_w = -2 * np.dot(X_batch.T, (y_batch - np.dot(X_batch, self.weights) - self.bias)) / self.batch_size
            gradient_b = -2 * np.sum(y_batch - np.dot(X_batch, self.weights) - self.bias) / self.batch_size
            if self.reg == 'l1':
                gradient_w += self.reg_param * np.sign(self.weights)
            elif self.reg == 'l2':
                gradient_w += self.reg_param * self.weights
            self.weights -= self.learning_rate * gradient_w
            self.bias -= self.learning_rate * gradient_b
This method fits the model to the training data. It starts by initializing weights as a zero vector of length n (number of features) and bias to zero. The model’s parameters are updated over a number
of epochs through SGD.
Random Selection and Batches (Step 3)
for _ in range(self.epochs):
    indices = np.random.permutation(m)
    X_shuffled = X[indices]
    y_shuffled = y[indices]
In each epoch, the data is shuffled, and batches are created to update the model parameters using SGD.
Compute the Gradient and Update the parameters (Step 4)
gradient_w = -2 * np.dot(X_batch.T, (y_batch - np.dot(X_batch, self.weights) - self.bias)) / self.batch_size
gradient_b = -2 * np.sum(y_batch - np.dot(X_batch, self.weights) - self.bias) / self.batch_size
Gradients for weights and bias are computed in each batch. These are then used to update the model’s weights and bias. If regularization is used, it’s also included in the gradient calculation.
Repeat and converge (Step 5)
def predict(self, X):
    return np.dot(X, self.weights) + self.bias
The predict method calculates the predicted target values using the learned linear model.
Compute Loss (Step 6)
def compute_loss(self, X, y):
    return (np.mean((y - self.predict(X)) ** 2) + self._get_regularization_loss()) ** 0.5

It calculates the mean squared error between the predicted values and the actual target values y, adds the regularization loss if regularization is specified, and returns the square root of the sum.
Regularization Loss Calculation (Step 7)
def _get_regularization_loss(self):
    if self.reg == 'l1':
        return self.reg_param * np.sum(np.abs(self.weights))
    elif self.reg == 'l2':
        return self.reg_param * np.sum(self.weights ** 2)
    return 0
This private method computes the regularization loss based on the type of regularization (l1 or l2) and the regularization parameter. This loss is added to the main loss function to penalize large
weights, thereby avoiding overfitting.
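To see the whole pipeline in action without any dependencies, here is a compact, plain-Python run of the same shuffle/batch/update loop on synthetic data generated from y = 3x + 2 (the learning rate, batch size, and epoch count are illustrative choices):

```python
import random

random.seed(1)
data = [(i / 50.0, 3.0 * (i / 50.0) + 2.0) for i in range(50)]  # synthetic y = 3x + 2

w, b = 0.0, 0.0
lr, batch_size = 0.1, 4

for _ in range(300):  # epochs
    random.shuffle(data)
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # average gradient of the squared error over the mini-batch
        gw = sum(-2 * (y - (w * x + b)) * x for x, y in batch) / len(batch)
        gb = sum(-2 * (y - (w * x + b)) for x, y in batch) / len(batch)
        w -= lr * gw
        b -= lr * gb

print(round(w, 2), round(b, 2))  # recovers roughly w = 3, b = 2
```

This is the same logic the class above implements with NumPy arrays: shuffle each epoch, slice into batches, average the per-batch gradients, update.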
3.2: SGD in scikit-learn and TensorFlow
Now, while the code above is very useful for educational purposes, data scientists definitely don't use it on a daily basis. Indeed, we can call SGD directly with a few lines of code from popular
libraries such as scikit-learn (machine learning) or TensorFlow (deep learning).
SGD for linear regression in scikit-learn
from sklearn.linear_model import SGDRegressor
# Create and fit the model
model = SGDRegressor(max_iter=1000)
model.fit(X, y)
# Making predictions
predictions = model.predict(X)
The SGD regressor is called directly from the sklearn library and follows the same structure as other algorithms in the same library.
The parameter max_iter is the number of epochs (rounds). By setting max_iter to 1000, we make the algorithm update the linear regression weights and bias up to 1000 times.
Neural Network with SGD optimization in Tensorflow
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
# Create a simple neural network model
model = Sequential([
    Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    Dense(num_classes, activation='softmax')  # output layer; num_classes depends on your task
])
# Define the SGD optimizer
sgd = SGD(learning_rate=0.01)
# Compile the model with SGD optimizer
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(X, y, epochs=10)
In this code we define a neural network with one hidden Dense layer of 64 nodes. However, besides the specifics of the neural network, here we are again calling SGD with just two lines of code:
from tensorflow.keras.optimizers import SGD
sgd = SGD(learning_rate=0.01)
4.1: Why Choose SGD?
Efficiency with Large Datasets:
Scalability: One of the primary advantages of SGD is its efficiency in handling large-scale data. Since it updates parameters using only a single data point (or a small batch) at a time, it is much
less memory-intensive than algorithms requiring the entire dataset for each update.
Speed: By frequently updating the model parameters, SGD can converge more quickly to a good solution, especially in cases where the dataset is enormous.
Flexibility and Adaptability:
Online Learning: SGD’s ability to update the model incrementally makes it well-suited for online learning, where the model needs to adapt continuously as new data arrives.
Handling Non-Static Datasets: For datasets that change over time, SGD’s incremental update approach can adjust to these changes more effectively than batch methods.
Overcoming Challenges of Local Minima:
The stochastic nature of SGD helps it to potentially escape local minima, a significant challenge in many optimization problems. The random fluctuations allow the algorithm to explore a broader range
of the solution space.
General Applicability:
SGD can be applied to a wide range of problems and is not limited to specific types of models. This general applicability makes it a versatile tool in the machine learning toolbox.
Simplicity and Ease of Implementation:
Despite its effectiveness, SGD remains relatively simple to understand and implement. This ease of use is particularly appealing for those new to machine learning.
Improved Generalization:
By updating the model frequently with a high degree of variance, SGD can often lead to models that generalize better on unseen data. This is because the algorithm is less likely to overfit to the
noise in the training data.
Compatibility with Advanced Techniques:
SGD is compatible with a variety of enhancements and extensions, such as momentum, learning rate scheduling, and adaptive learning rate methods like Adam, which further improve its performance and
convergence.
4.2: Overcoming Challenges in SGD
While Stochastic Gradient Descent (SGD) is a powerful and versatile optimization algorithm, it comes with its own set of challenges. Understanding these hurdles and knowing how to overcome them can
greatly enhance the performance and reliability of SGD in practical applications.
Choosing the Right Learning Rate
Selecting an appropriate learning rate is crucial for SGD. If it’s too high, the algorithm may diverge; if it’s too low, it might take too long to converge or get stuck in local minima.
Use a learning rate schedule or adaptive learning rate methods. Techniques like learning rate annealing, where the learning rate decreases over time, can help strike the right balance.
Dealing with Noisy Updates
The stochastic nature of SGD leads to noisy updates, which can cause the algorithm to be less stable and take longer to converge.
Implement mini-batch SGD, where the gradient is computed on a small subset of the data rather than a single data point. This approach can reduce the variance in the updates.
Risk of Local Minima and Saddle Points
In complex models, SGD can get stuck in local minima or saddle points, especially in high-dimensional spaces.
Use techniques like momentum or Nesterov accelerated gradients to help the algorithm navigate through flat regions and escape local minima.
Sensitivity to Feature Scaling
SGD is sensitive to the scale of the features, and having features on different scales can make the optimization process inefficient.
Normalize or standardize the input features so that they are on a similar scale. This practice can significantly improve the performance of SGD.
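Standardization itself is a one-liner per feature. In this sketch (made-up numbers), two features on very different scales end up directly comparable after z-scoring:

```python
import statistics

def standardize(values):
    # rescale a feature to zero mean and unit variance (z-scores)
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

incomes = [30000.0, 45000.0, 60000.0, 75000.0]  # large-scale feature
ages = [25.0, 35.0, 45.0, 55.0]                 # small-scale feature

print([round(z, 3) for z in standardize(incomes)])
print([round(z, 3) for z in standardize(ages)])  # identical z-scores here
```

With both features on a unit scale, a single learning rate works reasonably well for every weight, instead of being far too large for one feature and far too small for another.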
Hyperparameter Tuning
SGD requires careful tuning of hyperparameters, not just the learning rate but also parameters like momentum and the size of the mini-batch.
Utilize grid search, random search, or more advanced methods like Bayesian optimization to find the optimal set of hyperparameters.
Overfitting
Like any machine learning algorithm, there's a risk of overfitting, where the model performs well on training data but poorly on unseen data.
Use regularization techniques such as L1 or L2 regularization, and validate the model using a hold-out set or cross-validation.
5.1: Variants of SGD
Stochastic Gradient Descent (SGD) has several variants, each designed to address specific challenges or to improve upon the basic SGD algorithm in certain aspects. These variants enhance SGD’s
efficiency, stability, and convergence rate. Here’s a look at some of the key variants:
Mini-Batch Gradient Descent
This is a blend of batch gradient descent and stochastic gradient descent. Instead of using the entire dataset (as in batch GD) or a single sample (as in SGD), it uses a mini-batch of samples.
It reduces the variance of the parameter updates, which can lead to more stable convergence. It can also take advantage of optimized matrix operations, which makes it more computationally efficient.
Momentum SGD
Momentum is an approach that helps accelerate SGD in the relevant direction and dampens oscillations. It does this by adding a fraction of the previous update vector to the current update.
It helps in faster convergence and reduces oscillations. It is particularly useful for navigating the ravines of the cost function, where the surface curves much more steeply in one dimension than in another.
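A minimal sketch of the momentum update (the quadratic objective and hyperparameters are arbitrary choices for illustration):

```python
def sgd_momentum(grad, x, learning_rate=0.1, beta=0.9, steps=200):
    velocity = 0.0
    for _ in range(steps):
        # keep a fraction of the previous update and add the new gradient step
        velocity = beta * velocity - learning_rate * grad(x)
        x += velocity
    return x

# Minimize f(x) = x**2 (gradient 2x), starting from x = 5.
result = sgd_momentum(lambda x: 2 * x, x=5.0)
print(round(result, 4))  # settles near the minimum at 0
```

The velocity term accumulates past gradients, so consistent directions speed up while oscillating ones partially cancel out.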
Nesterov Accelerated Gradient (NAG)
A variant of momentum SGD, Nesterov momentum is a technique that makes a more informed update by calculating the gradient of the future approximate position of the parameters.
It can speed up convergence and improve the performance of the algorithm, particularly in the context of convex functions.
Adaptive Gradient (Adagrad)
Adagrad adapts the learning rate to each parameter, giving parameters that are updated more frequently a lower learning rate.
It’s particularly useful for dealing with sparse data and is well-suited for problems where data is scarce or features have very different frequencies.
RMSprop (Root Mean Square Propagation)
RMSprop modifies Adagrad to address its radically diminishing learning rates. It uses a moving average of squared gradients to normalize the gradient.
It works well in online and non-stationary settings and has been found to be an effective and practical optimization algorithm for neural networks.
Adam (Adaptive Moment Estimation)
Adam combines ideas from both Momentum and RMSprop. It computes adaptive learning rates for each parameter.
Adam is often considered as a default optimizer due to its effectiveness in a wide range of applications. It’s particularly good at solving problems with noisy or sparse gradients.
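For a single parameter, the Adam update can be sketched as follows. The beta and epsilon values are the commonly cited defaults; the learning rate is larger than the usual 0.001 default, and the quadratic objective is just for illustration:

```python
import math

def adam_step(grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    # update biased first- and second-moment estimates of the gradient
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias-correct them (t starts at 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # per-parameter adaptive step
    return -lr * m_hat / (math.sqrt(v_hat) + eps), m, v

# Minimize f(x) = x**2 (gradient 2x) with Adam.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 5001):
    delta, m, v = adam_step(2 * x, m, v, t)
    x += delta
print(round(x, 2))  # oscillates close to the minimum at 0
```

The first moment m plays the role of momentum, while the second moment v rescales each parameter's step, combining the two ideas this section describes.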
Each of these variants has its own strengths and is suited for specific types of problems. Their development reflects the ongoing effort in the machine learning community to refine and enhance
optimization algorithms to achieve better and faster results. Understanding these variants and their appropriate applications is crucial for anyone looking to delve deeper into machine learning
optimization techniques.
5.2: Future of SGD
As we delve into the future of Stochastic Gradient Descent (SGD), it’s clear that this algorithm continues to evolve, reflecting the dynamic and innovative nature of the field of machine learning.
The ongoing research and development in SGD focus on enhancing its efficiency, accuracy, and applicability to a broader range of problems. Here are some key areas where we can expect to see
significant advancements:
Automated Hyperparameter Tuning
There’s increasing interest in automating the process of selecting optimal hyperparameters, including the learning rate, batch size, and other SGD-specific parameters.
This automation could significantly reduce the time and expertise required to effectively deploy SGD, making it more accessible and efficient.
Integration with Advanced Models
As machine learning models become more complex, especially with the growth of deep learning, there’s a need to adapt and optimize SGD for these advanced architectures.
Enhanced versions of SGD that are tailored for complex models can lead to faster training times and improved model performance.
Adapting to Non-Convex Problems
Research is focusing on making SGD more effective for non-convex optimization problems, which are prevalent in real-world applications.
Improved strategies for dealing with non-convex landscapes could lead to more robust and reliable models in areas like natural language processing and computer vision.
Decentralized and Distributed SGD
With the increase in distributed computing and the need for privacy-preserving methods, there’s a push towards decentralized SGD algorithms that can operate over networks.
This approach can lead to more scalable and privacy-conscious machine learning solutions, particularly important for big data applications.
Quantum SGD
The advent of quantum computing presents an opportunity to explore quantum versions of SGD, leveraging quantum algorithms for optimization.
Quantum SGD has the potential to dramatically speed up the training process for certain types of models, though this is still largely in the research phase.
SGD in Reinforcement Learning and Beyond
Adapting and applying SGD in areas like reinforcement learning, where the optimization landscapes are different from traditional supervised learning tasks.
This could open new avenues in developing more efficient and powerful reinforcement learning algorithms.
Ethical and Responsible AI
There’s a growing awareness of the ethical implications of AI models, including those trained using SGD.
Research into SGD might also focus on ensuring that models are fair, transparent, and responsible, aligning with broader societal values.
As we wrap up our exploration of Stochastic Gradient Descent (SGD), it’s clear that this algorithm is much more than just a method for optimizing machine learning models. It stands as a testament to
the ingenuity and continuous evolution in the field of artificial intelligence. From its basic form to its more advanced variants, SGD remains a critical tool in the machine learning toolkit,
adaptable to a wide array of challenges and applications.
If you liked the article please leave a clap, and let me know in the comments what you think about it!
IF Formula for Date Ranges
I am looking to create a formula that would automatically assign a quarter (Q1, Q2, Q3 or Q4) based on the start date entered in another column. I would need the formula to evaluate the date to
see if it falls within a certain range (for example, Feb 1 - Apr 30 would populate Q1, etc.).
How would I write out the date range portion of this formula? Thank you for your help!
Best Answers
• @ClaireWallace Your syntax is off in both. This is the syntax for IF combined with a logical function such as AND or OR:
=IF(AND(logical expression1, logical expression2, logical expression3...), value_if_true, value_if_false)
The logical expressions must be contained within the AND (or OR), as I show above. For a range check, both conditions must be true at once, so AND is the function to use:
=IF(AND([Month Formula]@row >= 2, [Month Formula]@row < 5), "Q1")
Now, when nesting IFs as the "value_if_false" of the IF before them, at the very end of the formula you need to be sure you are closing off each IF; so when you have an IF with three additional
nested IFs after it, you need 4 parentheses at the very end:
=IF(AND([Month Formula]@row >= 2, [Month Formula]@row < 5), "Q1", IF(AND([Month Formula]@row >= 5, [Month Formula]@row < 8), "Q2", IF(AND([Month Formula]@row >= 8, [Month Formula]@row < 11), "Q3",
IF(OR([Month Formula]@row >= 11, [Month Formula]@row = 1), "Q4"))))
Note that the Q4 clause keeps OR because that quarter wraps around the year end: November, December, or January.
Now, if you want to avoid using the [Month Formula] column, we can do that too:
=IF(AND(MONTH([Start Date]@row) >= 2, MONTH([Start Date]@row) < 5), "Q1", IF(AND(MONTH([Start Date]@row) >= 5, MONTH([Start Date]@row) < 8), "Q2", IF(AND(MONTH([Start Date]@row) >= 8, MONTH([Start
Date]@row) < 11), "Q3", IF(OR(MONTH([Start Date]@row) >= 11, MONTH([Start Date]@row) = 1), "Q4"))))
Lastly, always make sure the system shows your first and last parentheses as the same color.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• You have a syntax problem. Each MONTH function within the OR needs to be closed off, and then each OR statement needs to be closed off, before you get to the value if true portion. This is
because you're telling the system to determine the numeric MONTH for [Final Deadline]@row and evaluate if that value is equal to 9, then do the same for 10 and 11 - and then if any of those is
true, set the cell value to "Initiation Phase".
For instance, in the first clause, you need a green close parentheses before the = 9, an orange one before the = 10, and a light blue one before the = 11. Once you add these, the close
parentheses in pink after the =11 will turn pink because it is closing off the OR function.
=IF(OR(MONTH([Final Deadline]@row) = 9, MONTH([Final Deadline]@row) = 10, MONTH([Final Deadline]@row) = 11), "Initiation Phase", …
Do the same for the remaining clauses. At the very end, I think you should have only 3 close parentheses, with the last one matching the blue color of the very first open parentheses.
You could also shorten this formula up a good bit if you want to by using "greater than or equal to" and "lesser than or equal to" operators:
=IF(MONTH([Final Deadline]@row) >= 9, "Initiation Phase", IF(MONTH([Final Deadline]@row) <= 2, "Planning Phase", IF(AND(MONTH([Final Deadline]@row) >= 3, MONTH([Final Deadline]@row) <= 8),
"Execution Phase")))
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• Hey @ClaireWallace
Check out some of these other threads:
If none of these have helped, it would be useful to know the name of your start date column and exactly when your quarters start and end.
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• Hi Genevieve,
Thanks for your response. I am getting closer but I'm encountering an Unparseable response. Here is what my formula is:
=IF(OR([Month Formula]@row = "2"), [Month Formula]@row >"2", [Month Formula]@row <"5"), "Q1", IF(OR([Month Formula]@row = "5", [Month Formula]@row <"5", [Month Formula]@row <"7"), "Q2" , IF(OR
([Month Formula]@row="7",[Month Formula]@row > "7",[Month Formula]@row < "10"),"Q3", IF(OR([Month Formula]@row = "10", [Month Formula]@row >"10", [Month Formula]@row ="1"), "Q4")
Q1: Feb - April 30
Q2: May - July 30
Q3: Aug-Oct 31
Q4: Nov- Jan 31
Thank you!
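For reference, the fiscal calendar defined above (Q1 starting in February) maps month numbers to quarters as follows. This short Python sketch is used purely to sanity-check the boundary logic, since Smartsheet formulas cannot be run outside a sheet:

```python
def quarter(month):
    """Map a month number (1-12) to the fiscal quarters defined above:
    Q1 = Feb-Apr, Q2 = May-Jul, Q3 = Aug-Oct, Q4 = Nov-Jan."""
    if 2 <= month <= 4:
        return "Q1"
    if 5 <= month <= 7:
        return "Q2"
    if 8 <= month <= 10:
        return "Q3"
    return "Q4"  # months 11, 12 and 1

# Tabulate the mapping for all twelve months as a quick check.
for m in range(1, 13):
    print(m, quarter(m))
```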
• @ClaireWallace - Remove the extraneous end parenthesis in the red circle. Add three end parentheses to the end of the formula to close off all the nested IFs. And you probably want to remove the
quotes from around your number criteria ( [Month Formula]@row < 2 instead of < "2" )
You can also combine some of your criteria: [Month Formula]@row = 2, [Month Formula]@row > 2 can be replaced by [Month Formula]@row >= 2
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• Hi Jeff,
Thanks for the advice. I have made these changes & I am attempting to just enter the conditions for Q1 into the formula for the time now.
I have been referencing a cell called "Start Date" where there is a date entered, and I have alternatively referenced a cell where the Month Formula is already in use and the cell
contains numbers 1-12.
This is returning an Incorrect Argument error.
-Version where I am referencing the cell already containing the Month Formula
=IF(OR([Month Formula]@row >= 2, [Month Formula]@row < 5, "Q1"))
The other version where I am directly referencing the Month Column is returning an unparseable response, regardless of the little tweaks I am making to the syntax. This is currently what I have.
=IF(OR(MONTH([Start Date]@row>=2, (MONTH([Start Date]@row <5), "Q1"))))
Thanks for your help!
• @ClaireWallace Your syntax is off in both. This is the syntax for IF combined with a logical function such as AND or OR:
=IF(AND(logical expression1, logical expression2, logical expression3...), value_if_true, value_if_false)
The logical expressions must be contained within the AND (or OR). Because each of your quarters is a range of months, you want AND here, so that both the lower and the upper bound must hold:
=IF(AND([Month Formula]@row >= 2, [Month Formula]@row <= 4), "Q1")
Now, when nesting IFs as the "value_if_false" of the IF before them, at the very end of the formula you need to be sure you are closing off each IF; so when you have an IF with three additional
nested IFs after it, you need 4 parentheses at the very end:
=IF(AND([Month Formula]@row >= 2, [Month Formula]@row <= 4), "Q1", IF(AND([Month Formula]@row >= 5, [Month Formula]@row <= 7), "Q2", IF(AND([Month Formula]@row >= 8, [Month Formula]@row <= 10), "Q3", IF(OR([Month Formula]@row >= 11, [Month Formula]@row = 1), "Q4"))))
Now, if you want to avoid using the [Month Formula] column, we can do that too:
=IF(AND(MONTH([Start Date]@row) >= 2, MONTH([Start Date]@row) <= 4), "Q1", IF(AND(MONTH([Start Date]@row) >= 5, MONTH([Start Date]@row) <= 7), "Q2", IF(AND(MONTH([Start Date]@row) >= 8, MONTH([Start Date]@row) <= 10), "Q3", IF(OR(MONTH([Start Date]@row) >= 11, MONTH([Start Date]@row) = 1), "Q4"))))
Lastly, always make sure the system shows your first and last parentheses as the same color.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• Thank you so much, Jeff! This helped immensely.
• Hi @Jeff Reisman I have tried following your suggestions to build out an IF formula that populates the project phase that each task falls within, based on the deadline that it is due. There may
be an easier way to do this but here is what I have so far and I am getting an Incorrect Argument error.
• You have a syntax problem. Each MONTH function within the OR needs to be closed off, and then each OR statement needs to be closed off, before you get to the value if true portion. This is
because you're telling the system to determine the numeric MONTH for [Final Deadline]@row and evaluate if that value is equal to 9, then do the same for 10 and 11 - and then if any of those is
true, set the cell value to "Initiation Phase".
For instance, in the first clause, you need a green close parenthesis before the = 9, an orange one before the = 10, and a light blue one before the = 11. Once you add these, the close
parenthesis after the = 11 will turn pink, because it is closing off the OR function.
=IF(OR(MONTH([Final Deadline]@row) = 9, MONTH([Final Deadline]@row) = 10, MONTH([Final Deadline]@row) = 11), "Initiation Phase", …
Do the same for the remaining clauses. At the very end, I think you should have only 3 close parentheses, with the last one matching the blue color of the very first open parentheses.
You could also shorten this formula up a good bit if you want to by using "greater than or equal to" and "less than or equal to" operators:
=IF(MONTH([Final Deadline]@row) >= 9, "Initiation Phase", IF(MONTH([Final Deadline]@row) <= 2, "Planning Phase", IF(AND(MONTH([Final Deadline]@row) >= 3, MONTH([Final Deadline]@row) <= 8),
"Execution Phase")))
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• The shorter formula you gave worked perfectly, thank you very much @Jeff Reisman !
• @Jeff Reisman last question - is there a function that allows the formula to take the year into account? During the month of September I usually close out 1 project, but also track tasks to
initiate the next project. I have September 2024 tasks and September 2025 tasks in my Smartsheet. Is there a formula that has the capability to distinguish between the month AND year, or only
the month?
• @DoyleMegan You can add more criteria into the formulas using the YEAR function. It works just like MONTH. See the links in my signature for the Formula and Error Message help pages.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
Two-sample Nonparametric Tests • Genstat v21
Select menu: Stats | Statistical Tests | Two-sample nonparametric tests
Performs nonparametric statistical tests on data comprising two samples.
1. After you have imported your data, from the menu select
Stats | Statistical Tests | Two-sample nonparametric tests.
2. Fill in the fields as required then click Run.
You can set additional Options, then after running you can save the results by clicking Save.
Specifies the test to be carried out:
Wilcoxon Matched-Pairs test: A nonparametric test of location for two related samples (for example, a before-and-after study). The null hypothesis is that the samples arise from exactly the same distribution, and this is tested against the alternative that the underlying distributions differ in their locations.
Kolmogorov-Smirnov test: A test of the hypothesis that two samples of data have arisen from the same distribution, against the alternative that the underlying distributions are different. The test statistic is the maximum absolute difference between their cumulative distribution functions.
Spearman's rank: Spearman's rank correlation coefficient is a measure of the association between the rankings of two samples and can be used to test for independence of the samples.
Mann-Whitney U: A test for differences in location between two samples of data.
Sign test: The two-sample sign test is a nonparametric test for difference in location between two related samples.
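Purely as a cross-reference for what these tests compute, the same five tests are available in Python's scipy.stats (the data below is invented for illustration; Genstat itself is driven through the dialog or the procedures listed under See also):

```python
import numpy as np
from scipy import stats

# Two small artificial samples; b is shifted so the location tests have
# something to detect. Seeded so the run is reproducible.
rng = np.random.default_rng(42)
a = rng.normal(loc=0.0, scale=1.0, size=30)
b = rng.normal(loc=0.7, scale=1.0, size=30)

print(stats.wilcoxon(a, b))      # Wilcoxon matched-pairs (related samples)
print(stats.ks_2samp(a, b))      # Kolmogorov-Smirnov two-sample test
print(stats.spearmanr(a, b))     # Spearman's rank correlation
print(stats.mannwhitneyu(a, b))  # Mann-Whitney U (independent samples)

# Two-sample sign test: a binomial test on the signs of the paired differences.
d = a - b
print(stats.binomtest(int((d > 0).sum()), n=int((d != 0).sum()), p=0.5))
```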
Available data
List variates and factors that can be used to supply the data sets and groups. The contents will change as you move from one input field to another, so that appropriate types of data structure are
listed. Double-click a name to copy it to the current input field or type the name.
Data arrangement
You can supply the data either as a pair of variates or as a single variate with a factor defining the groups. Note: for the Wilcoxon test you can only supply data as a pair of variates of equal length.
Two variates: You must supply the samples as two separate variates. Enter the names as Data variate 1 and Data variate 2.
One variate with group factor: You must supply the data in one variate, specified as the Data variate. Membership of the two samples is then indicated by a factor, by entering the name in Group factor.
Action Icons
Pin: Controls whether to keep the dialog open when you click Run. When the pin is down, the dialog remains open after running.
Restore: Restore names into edit fields and default settings.
Clear: Clear all fields and list boxes.
Help: Open the Help topic for this dialog.
See also
These statistical tests are performed by the following procedures which may also be used in command mode with additional parameters in some cases: WILCOXON, MANNWHITNEY, KOLMOGOROV, SIGNTEST,
The descriptions of these commands contain full details of the methods used to calculate the test statistics, along with suitable references.
This module contains procedures and generic interfaces for finding the minimum and maximum of two input scalar values through lexical comparison.
See pm_arrayMinMax for the equivalent operation on a sequence of values.
Final Remarks
If you believe this algorithm or its documentation can be improved, we appreciate your contribution and help to edit this page's documentation and source file on GitHub.
For details on the naming abbreviations, see this page.
For details on the naming conventions, see this page.
This software is distributed under the MIT license with additional terms outlined below.
1. If you use any parts or concepts from this library to any extent, please acknowledge the usage by citing the relevant publications of the ParaMonte library.
2. If you regenerate any parts/ideas from this library in a programming environment other than those currently supported by this ParaMonte library (i.e., other than C, C++, Fortran, MATLAB, Python,
R), please also ask the end users to cite this original ParaMonte library.
This software is available to the public under a highly permissive license.
Help us justify its continued development and maintenance by acknowledging its benefit to society, distributing it, and contributing to it.
Amir Shahmoradi, Thursday 1:45 AM, August 22, 2019, Dallas, TX
My English lesson
Horizontal asymptotes
A horizontal asymptote is a line parallel to the x-axis which the graph of the function progressively approaches: we can say that the straight-line is asymptotically tangent to the graph of the function.
DEFINITION. The straight-line y = l is a horizontal asymptote for a function y = f(x) if: lim_{x→∞} f(x) = l.
EXAMPLE. Verify, using the appropriate limits, that the function y = (x + 4)/(x − 2) has the straight-line x = 2 as its vertical asymptote and the straight-line y = 1 as its horizontal asymptote.
We know that the graph of the function is an equilateral hyperbola whose centre is in C(2; 1) and that the straight-line y = 1 is a horizontal asymptote. Anyhow, let us verify by using the definition that lim_{x→∞} f(x) = 1.
BE CAREFUL! Remember that a function having equation y = (ax + b)/(cx + d) with c ≠ 0 and ad ≠ bc has a vertical asymptote of equation x = −d/c and y = a/c as its horizontal asymptote (see unit 1, §3).
We shall verify that for each neighborhood I_{1;ε} centered in 1 there exists a neighborhood of infinity I such that: x ∈ I ⟹ f(x) ∈ I_{1;ε}. In other words, if a real positive number ε is chosen arbitrarily, there exists a real positive number M such that if |x| > M then f(x) belongs to the neighborhood of 1 of radius ε:
|x| > M ⟹ |f(x) − 1| < ε.
Now |f(x) − 1| = |(x + 4)/(x − 2) − 1| = |6/(x − 2)| = 6/|x − 2|, so the condition |f(x) − 1| < ε becomes |x − 2| > 6/ε.
From this, x > 2 + 6/ε or x < 2 − 6/ε; since ε > 0 is the radius of the neighborhood of 1, the number 2 − 6/ε may be negative. By giving the value M to the bigger one amongst |2 + 6/ε| and |2 − 6/ε|, we have determined the neighborhood of infinity I we were looking for that verifies the limit. The straight-line y = 1 is a horizontal asymptote for the function.
NOW IT'S YOUR TURN. Given the function y = 4/x², verify that the straight-line y = 0 is a horizontal asymptote for the function.
2009 ISIT Plenary Lecture
Facets of Entropy
Professor Raymond W. Yeung
Chinese University of Hong Kong
Constraints on the entropy function are sometimes referred to as the laws of information theory. For a long time, the submodular inequalities, or equivalently the nonnegativity of the Shannon
information measures, are the only known constraints. Inequalities that are implied by the submodular inequality are categorically referred to as Shannon-type inequalities. If the number of random
variables is fixed, a Shannon-type inequality can in principle be verified by a linear program known as ITIP.
A non-Shannon-type inequality is a constraint on the entropy function which is not implied by the submodular inequality. In the late 1990’s, the discovery of a few such inequalities revealed that
Shannon-type inequalities alone do not constitute a complete set of constraints on the entropy function.
In the past decade, connections between the entropy function and a number of fields in information science, mathematics, and physics have been established. These fields include probability theory,
network coding, combinatorics, group theory, Kolmogorov complexity, matrix theory, and quantum mechanics. This talk is an attempt to present a picture for the many facets of the entropy function.
Raymond W. Yeung received the BS, MEng and PhD degrees in electrical engineering from Cornell University in 1984, 1985, and 1988, respectively. He joined AT&T Bell Laboratories in 1988. He came
to CUHK in 1991 and has been with the Department since then, where he is currently a chair professor. He is the author of the book entitled A First Course in Information Theory (Kluwer Academic/
Plenum Publishers, 2002). His research interest is in information theory and network coding. He was a consultant in a project of Jet Propulsion Laboratory for salvaging the malfunctioning Galileo
Spacecraft. He was a member of the Board of Governors of the IEEE Information Theory Society from 1999 to 2001. He has served on the committees of a number of information theory symposiums and
workshops. He was the General Chair of the First Workshop on Network, Coding, and Applications (NetCod 2005), a Technical Co-Chair of the 2006 IEEE International Symposium on Information
Theory, and a Technical Co-Chair of the 2006 IEEE Information Theory Workshop, Chengdu. He also has served on the editorial board of a number of academic journals. He was an Associate Editor for
Shannon Theory of the IEEE Transactions on Information Theory from 2002 to 2005. He currently serves as an Editor-at-Large of Communications in Information and Systems, an Editor of Foundation and
Trends in Communications and Information Theory and an Editor of Foundation and Trends in Networking. He was a recipient of the Croucher Senior Research Fellowship for 2000/01, the Best Paper Award
(Communication Theory) of the 2004 International Conference on Communications, Circuits and System, the 2005 IEEE Information Theory Society Paper Award, and the Friedrich Wilhelm Bessel Research
Award from the Alexander von Humboldt Foundation in 2007. He is a Fellow of the IEEE and the Hong Kong Institution of Engineers.
New River Community College
Statistics II - MTH 246 at New River Community College
Effective: 2017-08-01
Course Description
Continues the study of estimation and hypothesis testing with emphasis on advanced regression topics, experimental design, analysis of variance, chi-square tests and non-parametric methods.
Lecture 3 hours. Total 3 hours per week.
3 credits
The course outline below was developed as part of a statewide standardization process.
General Course Purpose
To serve as a second course in statistics that focuses on multivariate and nonparametric techniques useful to business, science, and social science majors.
Course Prerequisites/Corequisites
Prerequisite: Completion of MTH 245 or equivalent with a grade of C or better.
Course Objectives
• Review of Hypothesis Testing
□ Conduct hypothesis tests for population means and proportions.
□ Conduct a hypothesis test for the equality of two population means where:
☆ The samples are independent and the population variances are assumed unequal.
☆ The data consists of matched pairs.
□ Conduct a hypothesis test for the presence of correlation.
• Experimental Design
□ Define and apply the basic principles of design, including randomization, replication, and treatment/control groups.
□ Explain single and double blinding.
□ Describe the placebo and experimenter effects and describe how they can be countered using blinding.
□ Design experiments using the following methods:
☆ Completely randomized.
☆ Randomized block.
☆ Matched pairs.
□ Explain the concept of confounding.
• Correlation and Regression
□ Construct and interpret the residual plot related to a simple least-squares regression model.
□ Conduct hypothesis tests related to the coefficients of a simple least-squares regression model.
□ Construct and apply a logistic regression model.
□ Calculate the coefficient of determination, the adjusted coefficient of determination, and overall P-value for a multiple regression model and use them to construct a best-fit multiple
regression equation.
• Categorical Data Analysis
□ Conduct chi-squared tests for:
☆ Goodness of fit.
☆ Independence between rows and columns of a two-way contingency table.
☆ Homogeneity of population proportions.
• Analysis of Variance (ANOVA)
□ Conduct one-way ANOVA to test the equality of two or more population means for both equal and unequal sample sizes and recognize its relationship to the pooled two sample t-test.
□ Conduct a multiple comparison test, such as Tukey's HSD, to determine which of the three or more population means differs from the others.
□ Conduct two-way ANOVA on sample data categorized with two fixed factors.
• Nonparametric Methods
□ Determine the rank of each element of a sorted data set.
□ Identify the relationship between a nonparametric test and its corresponding parametric technique.
□ Conduct a Wilcoxon signed-ranks test for a single sample.
□ Conduct a Wilcoxon signed-ranks test for matched pairs.
• Technology Application
□ Construct statistical tables, charts, and graphs using appropriate technology.
□ Perform statistical calculations using an appropriate statistical software package.
□ Complete statistical project. Students are required to complete some form of semester project in their course that is worth a significant portion of the student's grade. This could be either
an individual or group effort, and could be completed in stages through the semester or as a single, stand-alone exercise. As a minimum, the project should require students to manipulate and
draw statistical inferences from a large, realistic data set using a statistical software package.
Major Topics to be Included
• Hypothesis Testing
• Experimental Design
• Correlation and Regression
• Categorical Data Analysis
• Analysis of Variance
• Nonparametric Methods
OA Fakinlede, University of Lagos
Weighted Integral Formulation
Lecture notes (PDF): http://oafak.com/wp-content/uploads/2015/04/02-Weighted-Integral-Formulation.pdf
6 comments on “Weighted Integral Formulation”
1. I have noticed that most finite element textbooks tend to ignore the first derivative. They always consider the undifferentiated/natural term, the second derivative, and possibly the fourth. In my
search, I found out that in Reddy's solution to Problem 3.2, he multiplied the first derivative by the residual and he did not reduce it. He noted that "the term involving b (the first
derivative) is not integrated by parts because it does not reduce the differentiability required of the approximation functions," whereas 'Finite Element Methods for Partial Differential Equations
by J.J.W. van der Vegt and O. Bokhove, Faculty of Mathematical Sciences, University of Twente' weakened the first derivative on page 6, chapter two, by integrating by parts. I would love to know the
correct procedure for problems having a first derivative.
□ Both cases give the weak form since the continuity requirements on the primary variable has been weakened. There is no “correct” solution. What we can look at is which gives the more accurate
solution. There are people who have refused to reduce the continuity conditions through integration by parts. Look at the discussion here: http://www.researchgate.net/post/
If you are inclined, I think the comparison between the accuracy obtained by retaining the strong form can be looked at in specific cases. I don’t know how much work has been done on this.
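For concreteness, the two treatments being compared can be written out for the model equation a y'' + b y' + c y = 0 with weight function w (a and b taken constant here; the boundary terms depend on the boundary conditions of the particular problem):

```latex
% Weighted-residual statement: \int_\Omega w\,(a y'' + b y' + c y)\,dx = 0.
% Treatment 1 (Reddy): integrate only the a y'' term by parts, leave b y' alone.
\int_\Omega \bigl(-a\,w'\,y' + b\,w\,y' + c\,w\,y\bigr)\,dx
  + \bigl[a\,w\,y'\bigr]_{\partial\Omega} = 0
% Treatment 2 (van der Vegt & Bokhove): integrate the b y' term by parts as well.
\int_\Omega \bigl(-a\,w'\,y' - b\,w'\,y + c\,w\,y\bigr)\,dx
  + \bigl[a\,w\,y' + b\,w\,y\bigr]_{\partial\Omega} = 0
```

Both statements involve only first derivatives of y, which is Reddy's point: integrating the b-term by parts does not weaken the continuity requirement any further; it only moves a derivative onto the weight function and adds a boundary term.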
2. In an equation such as ay'' + by' + cy = 0, I have noticed that most finite element textbooks ignore or do not involve the first derivative (y'). They always consider the undifferentiated/natural term
(y), the second derivative (y''), and possibly the fourth (y^(iv)). In my search, I found out that in Reddy's solution to Problem 3.2, he multiplied the first derivative (y') by the residual (w) and he
did not reduce it. He noted that "the term involving b (the first derivative) is not integrated by parts because it does not reduce the differentiability required of the approximation functions,"
whereas 'Finite Element Methods for Partial Differential Equations by J.J.W. van der Vegt and O. Bokhove, Faculty of Mathematical Sciences, University of Twente' weakened the first derivative on
page 6, chapter two, by integrating by parts. I would love to know the correct procedure for problems having a first derivative.
□ Further to my comment above, I want to draw your attention to the fact that there are FEA methods that do not create a weak form at all. Two recent papers on this
issue are available to you in the link, http://1drv.ms/1Egv2ss . I think it is a good thing to take a closer look at this and perhaps even do a study on the accuracy and other issues that may
be associated with the use of strong Finite Element Methods (SFEM) as opposed to weak FEM.
3. Many thanks for uploading your lecture notes on the open web. I am not a student of the course but I use your lecture notes as a supplement to my own. In modal analysis, which gives the
better solution: the weak FE or the strong FE?
□ I am not an expert on this. However, I think you are really looking at the Eigenvalue Problem. I have not seen anything done on it from the Strong FEA people. Virtually all the literature I
am aware of is in the Weak Form FEA. There are many examples in textbooks. The text for the current course is recommended.
I like to go back and re-solve Project Euler problems in different languages. Lately, I’ve been solving them in Javascript for fun. When I do this, I don’t look at previous solutions and try to do it
from scratch. When I was finished, I was surprised by the performance of my solution to 43 compared to my previous attempts in other languages.
Problem 43 is as follows:
The number, 1406357289, is a 0 to 9 pandigital number because it is made up of each of the digits 0 to 9 in some order, but it also has a rather interesting sub-string divisibility property.
Let d[1] be the 1^st digit, d[2] be the 2^nd digit, and so on. In this way, we note the following:
□ d[2]d[3]d[4]=406 is divisible by 2
□ d[3]d[4]d[5]=063 is divisible by 3
□ d[4]d[5]d[6]=635 is divisible by 5
□ d[5]d[6]d[7]=357 is divisible by 7
□ d[6]d[7]d[8]=572 is divisible by 11
□ d[7]d[8]d[9]=728 is divisible by 13
□ d[8]d[9]d[10]=289 is divisible by 17
Find the sum of all 0 to 9 pandigital numbers with this property.
When I first solved this problem, I solved it in C. This was in 2014, and I was still fairly green. My solution at the time was to iterate through every 10-digit number and see if it was pandigital
and then if it was, check if it met the sub-divisibility requirement.
This solution is what you would call “brute-force”. It’s inelegant, and slow. However, it does work. It took 33.948 seconds to compute.
A few years later I was doing more with Rust and Python. Both of those solutions used the same method, probably because I wrote them close together. In any case,
this time I thought myself more clever: I took a pandigital number, 1234567890, generated every permutation, and then checked the sub-string divisibility requirement of each.
This is better than brute force, but still time consuming. Python can accomplish this in 18.724 seconds and Rust in 4.621. Better, but still not great.
The general rule of thumb with Project Euler is that if a solution takes more than a second, you haven’t found the intended method of solving it.
Looking at it this time around, it seemed like a very straightforward problem with an obvious path for a solution. Instead of finding pandigital numbers and checking if they meet the sub-string
divisibility requirement, this time I would build up the pandigital numbers using the sub-strings.
First I created arrays for the multiples of the first 7 primes with 2 and 3 digits. I then used a recursive function to build up a number using valid combinations of these sub-strings (since each one
overlaps the next with 2 digits). This creates a much smaller group of numbers to check.
Once I have all my potential pandigital numbers, I check to make sure they are in fact pandigital. (Note that at this stage, they should be missing the first digit). When checking for pandigitality,
I’m actually looking for 9 different digits, and if so, I prepend the missing 10th digit and voila, it’s a valid pandigital number!
This solution is much, much faster at .237 seconds.
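A Python sketch of this build-up method (the post's actual solution was written in Javascript; this reconstruction only illustrates the idea described above):

```python
def solve():
    # Zero-padded 3-digit multiples of each of the seven primes; chains are
    # built left to right starting from d2..d4 (the multiple-of-2 window).
    windows = [{f"{m:03d}" for m in range(0, 1000, p)}
               for p in (2, 3, 5, 7, 11, 13, 17)]
    chains = list(windows[0])
    for valid in windows[1:]:
        # Extend each chain by one digit, keeping it only if the new trailing
        # 3-digit window is a multiple of the next prime.
        chains = [c + d for c in chains for d in "0123456789"
                  if c[-2:] + d in valid]
    total = 0
    for c in chains:  # c holds digits d2..d10
        if len(set(c)) == 9:  # nine distinct digits: a candidate pandigital tail
            missing = (set("0123456789") - set(c)).pop()  # prepend as d1
            total += int(missing + c)
    return total

print(solve())
```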
I’m very pleased with that result, but a little shocked I didn’t see this method when I have solved it previously. It’s nice to know that since I first started solving these problems years ago, I can
see measurable improvement in my ability to find and create solutions to these fun little puzzles.
Project Euler 117
I enjoy solving Project Euler problems in my downtime. Problem 117 is one that I’ve looked at multiple times, but never came up with a good solution. The premise is easy enough, but, as with most
Euler problems, the scale of the problem presents issues.
Using a combination of grey square tiles and oblong tiles chosen from: red tiles (measuring two units), green tiles (measuring three units), and blue tiles (measuring four units), it is possible
to tile a row measuring five units in length in exactly fifteen different ways.
How many ways can a row measuring fifty units in length be tiled?
For the context of my solution, I am representing the tiles as an array or list of integers that each represent the length of a tile. For example, (1, 3, 1) would be a black, green, and black tile
taking up a total of 5 spaces.
Brute force sometimes works on these problems, but more often than not, it takes far too long to complete. I don’t feel like I’ve “solved” one of these problems if the solution takes more than a
second or two. My first naive attempt at this problem was to attack it recursively and build all the combinations of tiles. After letting it run for a few minutes, I gave up and assumed there
must be a better way.
While building each tile combination was prohibitively time-consuming, I found that I could still use recursion to get the base combinations for the tiles. There are only about 1000 of those, so the
trick would be permuting them to get the full count. Well, as it turns out, this led to a few more problems.
Using the permutations tool provided in itertools proved to be too slow as well. First of all, it was finding too many solutions. For example:
>>> for i in list(itertools.permutations([1, 1, 2, 2])): print i
(1, 1, 2, 2)
(1, 1, 2, 2)
(1, 2, 1, 2)
(1, 2, 2, 1)
(1, 2, 1, 2)
(1, 2, 2, 1)
(1, 1, 2, 2)
(1, 1, 2, 2)
(1, 2, 1, 2)
(1, 2, 2, 1)
(1, 2, 1, 2)
(1, 2, 2, 1)
(2, 1, 1, 2)
(2, 1, 2, 1)
(2, 1, 1, 2)
(2, 1, 2, 1)
(2, 2, 1, 1)
(2, 2, 1, 1)
(2, 1, 1, 2)
(2, 1, 2, 1)
(2, 1, 1, 2)
(2, 1, 2, 1)
(2, 2, 1, 1)
(2, 2, 1, 1)
In reality, many of those are equivalent in terms of the problem conditions. What we want is a set that looks more like this:
>>> for i in set(itertools.permutations([1, 1, 2, 2])): print i
(1, 1, 2, 2)
(2, 1, 2, 1)
(2, 1, 1, 2)
(1, 2, 2, 1)
(1, 2, 1, 2)
(2, 2, 1, 1)
That works, but the program still has to calculate all the permutations that we are throwing out, which is time-consuming; at most, a set will have 50 tiles, which means 50! permutations. That’s
30414093201713378043612608166064768844377641568960512000000000000 permutations, which is really just an insane amount! So that won’t do.
Well, we don’t need to know the permutations themselves, just the number of permutations! Let’s see if we can find a formula for that. The general formula I found while researching permutations is
P(n, r) = n! / (n − r)!, where n is the number of terms, and r is the number of terms you are choosing from your source pool. In this case, we are always going to use all the tiles in the base
combination that we want to permute, so the number of permutations is just the factorial of the number of tiles. Hmm. That doesn’t really help, since it doesn’t filter out any duplicates.
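As a quick sanity check on that formula (the helper name here is mine, just for illustration):

```python
from math import factorial

def npr(n, r):
    # P(n, r) = n! / (n - r)!  -- ordered selections of r items from n
    return factorial(n) // factorial(n - r)

# When r == n (we use every tile), the denominator is 0! = 1,
# so P(n, n) collapses to plain n!:
print(npr(4, 4), factorial(4))  # both are 24
print(npr(4, 2))                # 12 ordered pairs from 4 items
```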
Perhaps if I had researched a bit more, I could have found the proper formula for this situation. Rather than read more about permutations, I decided to take one of my base combinations and do some
experiments to see if I could derive a formula myself. I took the last combination from my list and computed the number of unique permutations in order to see what my goal should be.
(This operation took over 10 minutes.)
>>> print len(set(itertools.permutations([3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4])))
78
I played around with factorials and found that for my 13 terms there would be 13!, or 6227020800, possible permutations. Since I knew I wanted the formula to result in 78, I divided the number of
permutations by 78 and got 79833600. In theory, this number would be the denominator of my not-yet-existent formula. I tried grouping my 3s and 4s together; there are 2 and 11 respectively. I checked
what 11! is and it turned out to be 39916800 which is exactly half of my target value for the denominator. And 2! (for the number of 3s) is 2. That seems pretty promising!
My hypothesis was that multiplying the factorials of the number of similar terms together and then dividing them into the number of possible permutations would yield my desired result. I tested it
out on another combination and it worked out correctly, so I modified my program to perform this operation on each combination. I ran it and checked the answer, and it was correct!
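The whole approach above can be sketched in a few lines: enumerate each base combination exactly once, then count its distinct orderings as n! divided by the factorial of each repeat count. This is a minimal sketch under the same tile representation; the names are mine, not the code from the post.

```python
from math import factorial
from collections import Counter

ROW, TILES = 50, (1, 2, 3, 4)  # grey, red, green, blue tile lengths

def multisets(remaining, smallest=1):
    # Yield each sorted tuple of tile lengths summing to `remaining`
    # exactly once (non-decreasing order prevents duplicate combos).
    if remaining == 0:
        yield ()
        return
    for t in TILES:
        if smallest <= t <= remaining:
            for rest in multisets(remaining - t, t):
                yield (t,) + rest

def arrangements(combo):
    # Distinct orderings of a multiset: n! / (k1! * k2! * ...),
    # one k per group of identical tiles.
    n = factorial(len(combo))
    for k in Counter(combo).values():
        n //= factorial(k)
    return n

total = sum(arrangements(c) for c in multisets(ROW))
print(total)
```

On the worked example from the experiment above, `arrangements((3, 3) + (4,) * 11)` gives 13!/(2!·11!) = 78, and summing over the row of length 5 reproduces the fifteen tilings from the problem statement.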
And it only took 0.083 seconds! I’m pretty happy with the results. I’m sure a mathematician might have a more clever, direct way of solving this problem; however, since this solution is accurate
and quick, I can be proud of it. You can see the complete code on my GitHub.
Making USB Bootable Windows 10 Installer
I needed to update some servers to Windows 10/2016 at work this week, but didn’t have any DVDs large enough to accommodate the 5.8 GB size of the Windows ISO. Normal DVD+Rs are 4.7 GB, so to make
use of the ISO I downloaded from MSDN, one needs a DVD-R DL which can hold 8.5 GB of data.
I didn’t feel like going to a store or waiting for delivery, so I used Microsoft’s “Windows 7 USB/DVD Download Tool” (seemingly misnamed, since it supports Windows 7 and newer).
Once downloaded, the tool is fairly straightforward. You point it to an ISO, and then to a USB drive, and voila, it copies the data.
The annoyance is that I got the error “We were unable to copy your files. Please check your USB device and the selected ISO file and try again.” Trying again, of course, did not resolve the issue.
Fortunately, someone knowledgeable was able to explain the root cause of the error: the USB drive’s MBR needs to be cleared, and this doesn’t happen automatically with the Windows Download Tool.
To clear the MBR and format the drive, follow these steps using the diskpart tool:
diskpart> list disk
diskpart> select disk #
diskpart> clean
diskpart> create partition primary
diskpart> select partition 1
diskpart> active
diskpart> format fs=fat32 quick
diskpart> assign
diskpart> exit
This resolved my issue and I was able to go along my merry way.
I don’t use Windows a lot in my every-day life, but it is something I use quite a bit at work. Documenting little things like this helps me to remember tricks I come across, and hopefully can help
other people searching for solutions to similar problems.
Soft Magnetic Iron Triangular Bars - Leading Importer Stockholder And Distributors
Soft Magnetic Iron Triangular Bars
Manufacture of Soft Magnetic Iron Triangular Bars, Pure Iron Triangle-Bar, Electro Magnetic Iron Triangular Bars, High Purity Very Low Carbon Iron Triangle-Bar, Fe 99.5% Iron Triangular Bars, Fe
99.8% Iron Triangle-Bar, Fe 99.9% Iron Triangular Bars
Stockholders of GOST-11036-75 Grade 20860 Triangle-Bar
Stockholder of GOST 11036-75 Grade 11860 Triangular Bars
Stockholders of GOST-11036-75 Grade 11860 Triangle-Bar
Stockholder of GOST 11036-75 Grade 11880 Triangular Bars
Stockholders of GOST-11036-75 Grade 11880 Triangle-Bar
Stockholder of GOST 11036-75 Grade 21860 Triangular Bars
Stockholders of GOST-11036-75 Grade 21860 Triangle-Bar
Stockholder of GOST 11036-75 Grade 10850 Triangular Bars
Stockholders of GOST-11036-75 Grade 10850 Triangle-Bar
Stockholder of GOST 11036-75 Grade 20850 Triangular Bars
Stockholders of GOST-11036-75 Grade 20850 Triangle-Bar
Stockholder of GOST 11036-75 Grade 11850 Triangular Bars
Stockholders of GOST-11036-75 Grade 11850 Triangle-Bar
Stockholder of GOST 11036-75 Grade 21850 Triangular Bars
Stockholders of GOST-11036-75 Grade 21850 Triangle-Bar
Stockholder of GOST 11036-75 Grade 10880 Triangular Bars
Stockholders of GOST-11036-75 Grade 10880 Triangle-Bar
Stockholder of GOST 11036-75 Grade 10864 Triangular Bars
Stockholders of GOST-11036-75 Grade 10864 Triangle-Bar
Stockholder of GOST 11036-75 Pure Iron Triangular Bars
Stockholders of GOST 11036-75 Pure-Iron Triangle-Bar
Stockholder of Pure Iron Triangular Bars in China
Stockholder of Soft Magnetic Iron Triangular Bars in China
Stockholder of Electro Magnetic Iron Triangular Bars in China
Stockholder of Low Carbon Pure Iron Triangular Bars in China
Stockholder of High Purity Iron Triangular Bars in China
Stockholder of Pure Iron Triangular Bars in Japan
Stockholder of Soft Magnetic Iron Triangular Bars in Japan
Stockholder of Electro Magnetic Iron Triangular Bars in Japan
Stockholder of Low Carbon Pure Iron Triangular Bars in Japan
Stockholder of High Purity Iron Triangular Bars in Japan
Stockholder of Pure Iron Triangular Bars in Korea
Stockholder of Soft Magnetic Iron Triangular Bars in Korea
Stockholder of Electro Magnetic Iron Triangular Bars in Korea
Stockholder of Low Carbon Pure Iron Triangular Bars in Korea
Stockholder of High Purity Iron Triangular Bars in Korea
Stockholder of Pure Iron Triangular Bars in Europe
Stockholder of Soft Magnetic Iron Triangular Bars in Europe
Stockholder of Electro Magnetic Iron Triangular Bars in Europe
Stockholder of Low Carbon Pure Iron Triangular Bars in Europe
Stockholder of High Purity Iron Triangular Bars in Europe
Stockholder of Pure Iron Triangular Bars in Germany
Stockholder of Soft Magnetic Iron Triangular Bars in Germany
Stockholder of Electro Magnetic Iron Triangular Bars in Germany
Stockholder of Low Carbon Pure Iron Triangular Bars in Germany
Stockholder of High Purity Iron Triangular Bars in Germany
Stockholder of Pure Iron Triangular Bars in France
Stockholder of Soft Magnetic Iron Triangular Bars in France
Stockholder of Electro Magnetic Iron Triangular Bars in France
Stockholder of Low Carbon Pure Iron Triangular Bars in France
Stockholder of High Purity Iron Triangular Bars in France
Stockholder of Pure Iron Triangular Bars in Italy
Stockholder of Soft Magnetic Iron Triangular Bars in Italy
Stockholder of Electro Magnetic Iron Triangular Bars in Italy
Stockholder of Low Carbon Pure Iron Triangular Bars in Italy
Stockholder of High Purity Iron Triangular Bars in Italy
Stockholder of Pure Iron Triangular Bars in Spain
Stockholder of Soft Magnetic Iron Triangular Bars in Spain
Stockholder of Electro Magnetic Iron Triangular Bars in Spain
Stockholder of Low Carbon Pure Iron Triangular Bars in Spain
Stockholder of High Purity Iron Triangular Bars in Spain
Stockholder of Pure Iron Triangular Bars in UK
Stockholder of Soft Magnetic Iron Triangular Bars in UK
Stockholder of Electro Magnetic Iron Triangular Bars in UK
Stockholder of Low Carbon Pure Iron Triangular Bars in UK
Stockholder of High Purity Iron Triangular Bars in UK
Stockholder of Pure Iron Triangular Bars in Brazil
Stockholder of Soft Magnetic Iron Triangular Bars in Brazil
Stockholder of Electro Magnetic Iron Triangular Bars in Brazil
Stockholder of Low Carbon Pure Iron Triangular Bars in Brazil
Stockholder of High Purity Iron Triangular Bars in Brazil
Stockholder of Pure Iron Triangular Bars in London
Stockholder of Soft Magnetic Iron Triangular Bars in London
Stockholder of Electro Magnetic Iron Triangular Bars in London
Stockholder of Low Carbon Pure Iron Triangular Bars in London
Stockholder of High Purity Iron Triangular Bars in London
Stockholder of Pure Iron Triangular Bars in Switzerland
Stockholder of Soft Magnetic Iron Triangular Bars in Switzerland
Stockholder of Electro Magnetic Iron Triangular Bars in Switzerland
Stockholder of Low Carbon Pure Iron Triangular Bars in Switzerland
Stockholder of High Purity Iron Triangular Bars in Switzerland
Stockholder of Pure Iron Triangular Bars in Sweden
Stockholder of Soft Magnetic Iron Triangular Bars in Sweden
Stockholder of Electro Magnetic Iron Triangular Bars in Sweden
Stockholder of Low Carbon Pure Iron Triangular Bars in Sweden
Stockholder of High Purity Iron Triangular Bars in Sweden
Stockholder of Pure Iron Triangular Bars in USA
Stockholder of Soft Magnetic Iron Triangular Bars in USA
Stockholder of Electro Magnetic Iron Triangular Bars in USA
Stockholder of Low Carbon Pure Iron Triangular Bars in USA
Stockholder of High Purity Iron Triangular Bars in USA
Stockholder of Pure Iron Triangular Bars in America
Stockholder of Soft Magnetic Iron Triangular Bars in America
Stockholder of Electro Magnetic Iron Triangular Bars in America
Stockholder of Low Carbon Pure Iron Triangular Bars in America
Stockholder of High Purity Iron Triangular Bars in America
Stockholder of Pure Iron Triangular Bars in India
Stockholder of Soft Magnetic Iron Triangular Bars in India
Stockholder of Electro Magnetic Iron Triangular Bars in India
Stockholder of Low Carbon Pure Iron Triangular Bars in India
Stockholder of High Purity Iron Triangular Bars in India
Stockholder of Soft Magnetic Iron Triangular Bars in Russia
Stockholder of Electro Magnetic Iron Triangular Bars in Russia
Stockholder of Low Carbon Pure Iron Triangular Bars in Russia
Stockholder of High Purity Iron Triangular Bars in Russia
Distributor of Soft Magnetic Iron, Pure Iron, Electro Magnetic Iron, High Purity Very Low Carbon Iron, Fe99.5%, Fe99.8% and Fe99.9% Iron Triangular Bars
Distributor of Pure Iron Triangular Bars
Distributor of High Purity Fe-99.5% Iron Triangular Bars
Distributor of High Purity Fe-99.8% Iron Triangular Bars
Distributor of High Purity Fe-99.9% Iron Triangular Bars
Distributor of Soft Magnetic Iron Triangular Bars
Distributor of Soft Magnetic Iron Annealed Triangular Bars
Distributor of Soft Magnetic Iron Hydrogen Annealed Triangular Bars
Distributor of Soft Magnetic Iron SMI Triangular Bars
Distributor of Soft Magnetic Iron Silicon Triangular Bars
Distributor of Soft Steel with Ultra-Low Carbon and Low Impurities Triangular Bars
Distributor of Low Carbon Pure Iron Triangular Bars
Distributor of Electric Magnetic Iron Triangular Bars
Distributor of Electromagnetic Triangular Bars, Electromagnet Triangular Bars
Distributor of Electro-Magnetic Iron Triangular Bars, Electro-Magnet Iron Triangular Bars
Distributor of High Purity Ultra Low Carbon Pure Iron Triangular Bars
Distributor of DIN 17405 Grade RFe20 Triangular Bars
Distributor of DIN 17405 Grade RFe60 Triangular Bars
Distributor of DIN 17405 Grade RFe80 Triangular Bars
Distributor of DIN 17405 Grade RFe100 Triangular Bars
Distributor of DIN 17405 Grade RFe120 Triangular Bars
Distributor of DIN 17405 Pure Iron Triangular Bars
Distributor of IS-11946 Soft Magnetic Iron Triangular Bars
Distributor of IS-11946 Pure Iron Triangular Bars
Distributor of IS-11947 Soft Magnetic Iron Triangular Bars
Distributor of IS-11947 Pure Iron Triangular Bars
Distributor of JIS C2504 Grade SUY0 Triangular Bars
Distributor of JIS C2504 Grade SUY1 Triangular Bars
Distributor of JIS C2504 Grade SUY2 Triangular Bars
Distributor of JIS C2504 Grade SUY3 Triangular Bars
Distributor of JIS C2504 Grade A12 Triangular Bars
Distributor of JIS C2504 Grade A20 Triangular Bars
Distributor of JIS C2504 Grade A60 Triangular Bars
Distributor of JIS C2504 Grade A80 Triangular Bars
Distributor of JIS C2504 Grade A120 Triangular Bars
Distributor of JIS C2504 Grade A240 Triangular Bars
Distributor of JIS C2504 SUYP0 Triangular Bars
Distributor of JIS C2504 SUYP1 Triangular Bars
Distributor of JIS C2504 SUYP2 Triangular Bars
Distributor of JIS C2504 SUYP3 Triangular Bars
Distributor of JIS C2504 SUYB0 Triangular Bars
Distributor of JIS C2504 SUYB1 Triangular Bars
Distributor of JIS C2504 SUYB2 Triangular Bars
Distributor of JIS C2504 SUYB3 Triangular Bars
Distributor of JIS C2504 Pure Iron Triangular Bars
Distributor of ASTM A848-01 Low Carbon Magnetic Iron Triangular Bars
Distributor of ASTM A848 Alloy Type-1 Soft Magnetic Iron Triangular Bars
Distributor of ASTM A848 Alloy Type-2 Soft Magnetic Iron Triangular Bars
Distributor of ASTM A848 Pure Iron Triangular Bars
Distributor of ASTM A848-1 Pure Iron Triangular Bars
Distributor of ASTM A811 Low Carbon Magnetic Iron Triangular Bars
Distributor of ASTM A811 Grade 1 Soft Magnetic Iron Triangular Bars
Distributor of ASTM A811 Grade 2 Soft Magnetic Iron Triangular Bars
Distributor of ASTM A811 Pure Iron Triangular Bars
Distributor of Electrical Pure Iron Grade DT4 Triangular Bars
Distributor of Electrical Pure Iron Grade DT4A Triangular Bars
Distributor of Electrical Pure Iron Grade DT4E Triangular Bars
Distributor of Electrical Pure Iron Grade DT4C Triangular Bars
Distributor of Electrical Pure Iron Grade DT8 Triangular Bars
Distributor of Electrical Pure Iron Grade DT8A Triangular Bars
Distributor of Electrical Pure Iron Grade DT8E Triangular Bars
Distributor of Electrical Pure Iron Grade DT8C Triangular Bars
Distributor of Electrical Pure Iron Grade DT9 Triangular Bars
Distributor of ELCH2 Pure Iron Triangular Bars
Distributor of Ame8 Pure Iron Triangular Bars
Distributor of ELCH2 Soft Magnetic Iron Triangular Bars
Distributor of Ame8 Soft Magnetic Iron Triangular Bars
Distributor of GOST 11036-75 Grade 10895 Triangular Bars
Distributor of GOST 11036-75 Grade 20895 Triangular Bars
Distributor of GOST 11036-75 Grade 11895 Triangular Bars
Distributor of GOST 11036-75 Grade 21895 Triangular Bars
Distributor of GOST 11036-75 Grade 20880 Triangular Bars
Distributor of GOST 11036-75 Grade 21880 Triangular Bars
Distributor of GOST 11036-75 Grade 10860 Triangular Bars
Distributor of GOST 11036-75 Grade 20860 Triangular Bars
Distributor of GOST 11036-75 Grade 11860 Triangular Bars
Distributor of GOST 11036-75 Grade 11880 Triangular Bars
Distributor of GOST 11036-75 Grade 21860 Triangular Bars
Distributor of GOST 11036-75 Grade 10850 Triangular Bars
Distributor of GOST 11036-75 Grade 20850 Triangular Bars
Distributor of GOST 11036-75 Grade 11850 Triangular Bars
Distributor of GOST 11036-75 Grade 21850 Triangular Bars
Distributor of GOST 11036-75 Grade 10880 Triangular Bars
Distributor of GOST 11036-75 Grade 10864 Triangular Bars
Distributor of GOST 11036-75 Pure Iron Triangular Bars
Distributor of Pure Iron Triangular Bars in China
Distributor of Soft Magnetic Iron Triangular Bars in China
Distributor of Electro Magnetic Iron Triangular Bars in China
Distributor of Low Carbon Pure Iron Triangular Bars in China
Distributor of High Purity Iron Triangular Bars in China
Distributor of Pure Iron Triangular Bars in Japan
Distributor of Soft Magnetic Iron Triangular Bars in Japan
Distributor of Electro Magnetic Iron Triangular Bars in Japan
Distributor of Low Carbon Pure Iron Triangular Bars in Japan
Distributor of High Purity Iron Triangular Bars in Japan
Distributor of Pure Iron Triangular Bars in Korea
Distributor of Soft Magnetic Iron Triangular Bars in Korea
Distributor of Electro Magnetic Iron Triangular Bars in Korea
Distributor of Low Carbon Pure Iron Triangular Bars in Korea
Distributor of High Purity Iron Triangular Bars in Korea
Distributor of Pure Iron Triangular Bars in Europe
Distributor of Soft Magnetic Iron Triangular Bars in Europe
Distributor of Electro Magnetic Iron Triangular Bars in Europe
Distributor of Low Carbon Pure Iron Triangular Bars in Europe
Distributor of High Purity Iron Triangular Bars in Europe
Distributor of Pure Iron Triangular Bars in Germany
Distributor of Soft Magnetic Iron Triangular Bars in Germany
Distributor of Electro Magnetic Iron Triangular Bars in Germany
Distributor of Low Carbon Pure Iron Triangular Bars in Germany
Distributor of High Purity Iron Triangular Bars in Germany
Distributor of Pure Iron Triangular Bars in France
Distributor of Soft Magnetic Iron Triangular Bars in France
Distributor of Electro Magnetic Iron Triangular Bars in France
Distributor of Low Carbon Pure Iron Triangular Bars in France
Distributor of High Purity Iron Triangular Bars in France
Distributor of Pure Iron Triangular Bars in Italy
Distributor of Soft Magnetic Iron Triangular Bars in Italy
Distributor of Electro Magnetic Iron Triangular Bars in Italy
Distributor of Low Carbon Pure Iron Triangular Bars in Italy
Distributor of High Purity Iron Triangular Bars in Italy
Distributor of Pure Iron Triangular Bars in Spain
Distributor of Soft Magnetic Iron Triangular Bars in Spain
Distributor of Electro Magnetic Iron Triangular Bars in Spain
Distributor of Low Carbon Pure Iron Triangular Bars in Spain
Distributor of High Purity Iron Triangular Bars in Spain
Distributor of Pure Iron Triangular Bars in UK
Distributor of Soft Magnetic Iron Triangular Bars in UK
Distributor of Electro Magnetic Iron Triangular Bars in UK
Distributor of Low Carbon Pure Iron Triangular Bars in UK
Distributor of High Purity Iron Triangular Bars in UK
Distributor of Pure Iron Triangular Bars in Brazil
Distributor of Soft Magnetic Iron Triangular Bars in Brazil
Distributor of Electro Magnetic Iron Triangular Bars in Brazil
Distributor of Low Carbon Pure Iron Triangular Bars in Brazil
Distributor of High Purity Iron Triangular Bars in Brazil
Distributor of Pure Iron Triangular Bars in London
Distributor of Soft Magnetic Iron Triangular Bars in London
Distributor of Electro Magnetic Iron Triangular Bars in London
Distributor of Low Carbon Pure Iron Triangular Bars in London
Distributor of High Purity Iron Triangular Bars in London
Distributor of Pure Iron Triangular Bars in Switzerland
Distributor of Soft Magnetic Iron Triangular Bars in Switzerland
Distributor of Electro Magnetic Iron Triangular Bars in Switzerland
Distributor of Low Carbon Pure Iron Triangular Bars in Switzerland
Distributor of High Purity Iron Triangular Bars in Switzerland
Distributor of Pure Iron Triangular Bars in Sweden
Distributor of Soft Magnetic Iron Triangular Bars in Sweden
Distributor of Electro Magnetic Iron Triangular Bars in Sweden
Distributor of Low Carbon Pure Iron Triangular Bars in Sweden
Distributor of High Purity Iron Triangular Bars in Sweden
Distributor of Pure Iron Triangular Bars in USA
Distributor of Soft Magnetic Iron Triangular Bars in USA
Distributor of Electro Magnetic Iron Triangular Bars in USA
Distributor of Low Carbon Pure Iron Triangular Bars in USA
Distributor of High Purity Iron Triangular Bars in USA
Distributor of Pure Iron Triangular Bars in America
Distributor of Soft Magnetic Iron Triangular Bars in America
Distributor of Electro Magnetic Iron Triangular Bars in America
Distributor of Low Carbon Pure Iron Triangular Bars in America
Distributor of High Purity Iron Triangular Bars in America
Distributor of Pure Iron Triangular Bars in India
Distributor of Soft Magnetic Iron Triangular Bars in India
Distributor of Electro Magnetic Iron Triangular Bars in India
Distributor of Low Carbon Pure Iron Triangular Bars in India
Distributor of High Purity Iron Triangular Bars in India
Distributor of Soft Magnetic Iron Triangular Bars in Russia
Distributor of Electro Magnetic Iron Triangular Bars in Russia
Distributor of Low Carbon Pure Iron Triangular Bars in Russia
Distributor of High Purity Iron Triangular Bars in Russia
Importers of Soft-Magnetic Iron Triangle-Bar, Pure Iron Triangular Bars, Electro-Magnetic Iron Triangle-Bar, High-Purity Very Low-Carbon Iron Triangular Bars, Fe99.5% Iron Triangle-Bar, Fe99.8% Iron
Triangular Bars, Fe99.9% Iron Triangle-Bar
Importer of Pure Iron Triangular Bars
Importers of Pure-Iron Triangle-Bar
Importer of High Purity Fe-99.5% Iron Triangular Bars
Importers of High-Purity Fe99.5% Iron Triangle-Bar
Importer of High Purity Fe-99.8% Iron Triangular Bars
Importers of High-Purity Fe99.8% Iron Triangle-Bar
Importer of High Purity Fe-99.9% Iron Triangular Bars
Importers of High-Purity Fe99.9% Iron Triangle-Bar
Importer of Soft Magnetic Iron Triangular Bars
Importers of Soft-Magnetic Iron Triangle-Bar
Importer of Soft Magnetic Iron Annealed Triangular Bars
Importers of Soft-Magnetic Iron Annealed Triangle-Bar
Importer of Soft Magnetic Iron Hydrogen Annealed Triangular Bars
Importers of Soft-Magnetic Iron Hydrogen-Annealed Triangle-Bar
Importer of Soft Magnetic Iron SMI Triangular Bars
Importers of Soft-Magnetic Iron SMI Triangle-Bar
Importer of Soft Magnetic Iron Silicon Triangular Bars
Importers of Soft-Magnetic Iron Silicon Triangle-Bar
Importer of Soft Steel with Ultra-Low Carbon and Low Impurities Triangular Bars
Importers of Soft Steel with Ultra-Low Carbon and Low Impurities Triangle-Bar
Importer of Low Carbon Pure Iron Triangular Bars
Importers of Low-Carbon Pure-Iron Triangle-Bar
Importer of Electric Magnetic Iron Triangular Bars
Importers of Electric-Magnetic Iron Triangle-Bar
Importer of Electromagnetic Triangular Bars, Electromagnet Triangular Bars
Importers of Electromagnetic-Triangle-Bar, Electromagnet-Triangle-Bar
Importer of Electro-Magnetic Iron Triangular Bars, Electro-Magnet Iron Triangular Bars
Importers of Electro-Magnetic Iron Triangle-Bar, Electro-Magnet Iron Triangle-Bar
Importer of High Purity Ultra Low Carbon Pure Iron Triangular Bars
Importers of High-Purity Ultra Low-Carbon Pure-Iron Triangle-Bar
Importer of DIN 17405 Grade RFe20 Triangular Bars
Importers of DIN-17405 Grade RFe-20 Triangle-Bar
Importer of DIN 17405 Grade RFe60 Triangular Bars
Importers of DIN-17405 Grade RFe-60 Triangle-Bar
Importer of DIN 17405 Grade RFe80 Triangular Bars
Importers of DIN-17405 Grade RFe-80 Triangle-Bar
Importer of DIN 17405 Grade RFe100 Triangular Bars
Importers of DIN-17405 Grade RFe-100 Triangle-Bar
Importer of DIN 17405 Grade RFe120 Triangular Bars
Importers of DIN-17405 Grade RFe-120 Triangle-Bar
Importer of DIN 17405 Pure Iron Triangular Bars
Importers of DIN-17405 Pure-Iron Triangle-Bar
Importer of IS-11946 Soft Magnetic Iron Triangular Bars
Importers of IS-11946 Soft-Magnetic Iron Triangle-Bar
Importer of IS-11946 Pure Iron Triangular Bars
Importers of IS-11946 Pure-Iron Triangle-Bar
Importer of IS-11947 Soft Magnetic Iron Triangular Bars
Importers of IS-11947 Soft-Magnetic Iron Triangle-Bar
Importer of IS-11947 Pure Iron Triangular Bars
Importers of IS-11947 Pure-Iron Triangle-Bar
Importer of JIS C2504 Grade SUY0 Triangular Bars
Importers of JIS-C2504 Grade SUY-0 Triangle-Bar
Importer of JIS C2504 Grade SUY1 Triangular Bars
Importers of JIS-C2504 Grade SUY-1 Triangle-Bar
Importer of JIS C2504 Grade SUY2 Triangular Bars
Importers of JIS-C2504 Grade SUY-2 Triangle-Bar
Importer of JIS C2504 Grade SUY3 Triangular Bars
Importers of JIS-C2504 Grade SUY-3 Triangle-Bar
Importer of JIS C2504 Grade A12 Triangular Bars
Importers of JIS-C2504 Grade A-12 Triangle-Bar
Importer of JIS C2504 Grade A20 Triangular Bars
Importers of JIS-C2504 Grade A-20 Triangle-Bar
Importer of JIS C2504 Grade A60 Triangular Bars
Importers of JIS-C2504 Grade A-60 Triangle-Bar
Importer of JIS C2504 Grade A80 Triangular Bars
Importers of JIS-C2504 Grade A-80 Triangle-Bar
Importer of JIS C2504 Grade A120 Triangular Bars
Importers of JIS-C2504 Grade A-120 Triangle-Bar
Importer of JIS C2504 Grade A240 Triangular Bars
Importers of JIS-C2504 Grade A-240 Triangle-Bar
Importer of JIS C2504 SUYP0 Triangular Bars
Importers of JIS-C2504 SUYP-0 Triangle-Bar
Importer of JIS C2504 SUYP1 Triangular Bars
Importers of JIS-C2504 SUYP-1 Triangle-Bar
Importer of JIS C2504 SUYP2 Triangular Bars
Importers of JIS-C2504 SUYP-2 Triangle-Bar
Importer of JIS C2504 SUYP3 Triangular Bars
Importers of JIS-C2504 SUYP-3 Triangle-Bar
Importer of JIS C2504 SUYB0 Triangular Bars
Importers of JIS-C2504 SUYB-0 Triangle-Bar
Importer of JIS C2504 SUYB1 Triangular Bars
Importers of JIS-C2504 SUYB-1 Triangle-Bar
Importer of JIS C2504 SUYB2 Triangular Bars
Importers of JIS-C2504 SUYB-2 Triangle-Bar
Importer of JIS C2504 SUYB3 Triangular Bars
Importers of JIS-C2504 SUYB-3 Triangle-Bar
Importer of JIS C2504 Pure Iron Triangular Bars
Importers of JIS-C2504 Pure-Iron Triangle-Bar
Importer of ASTM A848-01 Low Carbon Magnetic Iron Triangular Bars
Importers of ASTM-A848-01 Low Carbon Magnetic-Iron Triangle-Bar
Importer of ASTM A848 Alloy Type-1 Soft Magnetic Iron Triangular Bars
Importers of ASTM-A848 Alloy Type-1 Soft Magnetic-Iron Triangle-Bar
Importer of ASTM A848 Alloy Type-2 Soft Magnetic Iron Triangular Bars
Importers of ASTM-A848 Alloy Type-2 Soft Magnetic-Iron Triangle-Bar
Importer of ASTM A848 Pure Iron Triangular Bars
Importers of ASTM-A848 Pure-Iron Triangle-Bar
Importer of ASTM A848-1 Pure Iron Triangular Bars
Importers of ASTM-A848-1 Pure-Iron Triangle-Bar
Importer of ASTM A811 Low Carbon Magnetic Iron Triangular Bars
Importers of ASTM-A811 Low Carbon Magnetic-Iron Triangle-Bar
Importer of ASTM A811 Grade 1 Soft Magnetic Iron Triangular Bars
Importers of ASTM-A811 Grade-1 Soft Magnetic-Iron Triangle-Bar
Importer of ASTM A811 Grade 2 Soft Magnetic Iron Triangular Bars
Importers of ASTM-A811 Grade-2 Soft Magnetic-Iron Triangle-Bar
Importer of ASTM A811 Pure Iron Triangular Bars
Importers of ASTM-A811 Pure-Iron Triangle-Bar
Importer of Electrical Pure Iron Grade DT4 Triangular Bars
Importers of Electrical-Pure Iron Grade DT4 Triangle-Bar
Importer of Electrical Pure Iron Grade DT4A Triangular Bars
Importers of Electrical-Pure Iron Grade DT4A Triangle-Bar
Importer of Electrical Pure Iron Grade DT4E Triangular Bars
Importers of Electrical-Pure Iron Grade DT4E Triangle-Bar
Importer of Electrical Pure Iron Grade DT4C Triangular Bars
Importers of Electrical-Pure Iron Grade DT4C Triangle-Bar
Importer of Electrical Pure Iron Grade DT8 Triangular Bars
Importers of Electrical-Pure Iron Grade DT8 Triangle-Bar
Importer of Electrical Pure Iron Grade DT8A Triangular Bars
Importers of Electrical-Pure Iron Grade DT8A Triangle-Bar
Importer of Electrical Pure Iron Grade DT8E Triangular Bars
Importers of Electrical-Pure Iron Grade DT8E Triangle-Bar
Importer of Electrical Pure Iron Grade DT8C Triangular Bars
Importers of Electrical-Pure Iron Grade DT8C Triangle-Bar
Importer of Electrical Pure Iron Grade DT9 Triangular Bars
Importers of Electrical-Pure Iron Grade DT9 Triangle-Bar
Importer of ELCH2 Pure Iron Triangular Bars
Importers of ELCH-2 Pure-Iron Triangle-Bar
Importer of Ame8 Pure Iron Triangular Bars
Importers of Ame-8 Pure-Iron Triangle-Bar
Importer of ELCH2 Soft Magnetic Iron Triangular Bars
Importers of ELCH-2 Soft Magnetic-Iron Triangle-Bar
Importer of Ame8 Soft Magnetic Iron Triangular Bars
Importers of Ame-8 Soft Magnetic-Iron Triangle-Bar
Importer of GOST 11036-75 Grade 10895 Triangular Bars
Importers of GOST-11036-75 Grade 10895 Triangle-Bar
Importer of GOST 11036-75 Grade 20895 Triangular Bars
Importers of GOST-11036-75 Grade 20895 Triangle-Bar
Importer of GOST 11036-75 Grade 11895 Triangular Bars
Importers of GOST-11036-75 Grade 11895 Triangle-Bar
Importer of GOST 11036-75 Grade 21895 Triangular Bars
Importers of GOST-11036-75 Grade 21895 Triangle-Bar
Importer of GOST 11036-75 Grade 20880 Triangular Bars
Importers of GOST-11036-75 Grade 20880 Triangle-Bar
Importer of GOST 11036-75 Grade 21880 Triangular Bars
Importers of GOST-11036-75 Grade 21880 Triangle-Bar
Importer of GOST 11036-75 Grade 10860 Triangular Bars
Importers of GOST-11036-75 Grade 10860 Triangle-Bar
Importer of GOST 11036-75 Grade 20860 Triangular Bars
Importers of GOST-11036-75 Grade 20860 Triangle-Bar
Importer of GOST 11036-75 Grade 11860 Triangular Bars
Importers of GOST-11036-75 Grade 11860 Triangle-Bar
Importer of GOST 11036-75 Grade 11880 Triangular Bars
Importers of GOST-11036-75 Grade 11880 Triangle-Bar
Importer of GOST 11036-75 Grade 21860 Triangular Bars
Importers of GOST-11036-75 Grade 21860 Triangle-Bar
Importer of GOST 11036-75 Grade 10850 Triangular Bars
Importers of GOST-11036-75 Grade 10850 Triangle-Bar
Importer of GOST 11036-75 Grade 20850 Triangular Bars
Importers of GOST-11036-75 Grade 20850 Triangle-Bar
Importer of GOST 11036-75 Grade 11850 Triangular Bars
Importers of GOST-11036-75 Grade 11850 Triangle-Bar
Importer of GOST 11036-75 Grade 21850 Triangular Bars
Importers of GOST-11036-75 Grade 21850 Triangle-Bar
Importer of GOST 11036-75 Grade 10880 Triangular Bars
Importers of GOST-11036-75 Grade 10880 Triangle-Bar
Importer of GOST 11036-75 Grade 10864 Triangular Bars
Importers of GOST-11036-75 Grade 10864 Triangle-Bar
Importer of GOST 11036-75 Pure Iron Triangular Bars
Importers of GOST-11036-75 Pure-Iron Triangle-Bar
Importer of Pure Iron Triangular Bars in China
Importers of Pure-Iron Triangle-Bar in China
Importer of Soft Magnetic Iron Triangular Bars in China
Importers of Soft-Magnetic Iron Triangle-Bar in China
Importer of Electro Magnetic Iron Triangular Bars in China
Importers of Electro-Magnetic Iron Triangle-Bar in China
Importer of Low Carbon Pure Iron Triangular Bars in China
Importers of Low-Carbon Pure-Iron Triangle-Bar in China
Importer of High Purity Iron Triangular Bars in China
Importers of High-Purity Iron Triangle-Bar in China
Importer of Pure Iron Triangular Bars in Japan
Importers of Pure-Iron Triangle-Bar in Japan
Importer of Soft Magnetic Iron Triangular Bars in Japan
Importers of Soft-Magnetic Iron Triangle-Bar in Japan
Importer of Electro Magnetic Iron Triangular Bars in Japan
Importers of Electro-Magnetic Iron Triangle-Bar in Japan
Importer of Low Carbon Pure Iron Triangular Bars in Japan
Importers of Low-Carbon Pure-Iron Triangle-Bar in Japan
Importer of High Purity Iron Triangular Bars in Japan
Importers of High-Purity Iron Triangle-Bar in Japan
Importer of Pure Iron Triangular Bars in Korea
Importers of Pure-Iron Triangle-Bar in Korea
Importer of Soft Magnetic Iron Triangular Bars in Korea
Importers of Soft-Magnetic Iron Triangle-Bar in Korea
Importer of Electro Magnetic Iron Triangular Bars in Korea
Importers of Electro-Magnetic Iron Triangle-Bar in Korea
Importer of Low Carbon Pure Iron Triangular Bars in Korea
Importers of Low-Carbon Pure-Iron Triangle-Bar in Korea
Importer of High Purity Iron Triangular Bars in Korea
Importers of High-Purity Iron Triangle-Bar in Korea
Importer of Pure Iron Triangular Bars in Europe
Importers of Pure-Iron Triangle-Bar in Europe
Importer of Soft Magnetic Iron Triangular Bars in Europe
Importers of Soft-Magnetic Iron Triangle-Bar in Europe
Importer of Electro Magnetic Iron Triangular Bars in Europe
Importers of Electro-Magnetic Iron Triangle-Bar in Europe
Importer of Low Carbon Pure Iron Triangular Bars in Europe
Importers of Low-Carbon Pure-Iron Triangle-Bar in Europe
Importer of High Purity Iron Triangular Bars in Europe
Importers of High-Purity Iron Triangle-Bar in Europe
Importer of Pure Iron Triangular Bars in Germany
Importers of Pure-Iron Triangle-Bar in Germany
Importer of Soft Magnetic Iron Triangular Bars in Germany
Importers of Soft-Magnetic Iron Triangle-Bar in Germany
Importer of Electro Magnetic Iron Triangular Bars in Germany
Importers of Electro-Magnetic Iron Triangle-Bar in Germany
Importer of Low Carbon Pure Iron Triangular Bars in Germany
Importers of Low-Carbon Pure-Iron Triangle-Bar in Germany
Importer of High Purity Iron Triangular Bars in Germany
Importers of High-Purity Iron Triangle-Bar in Germany
Importer of Pure Iron Triangular Bars in France
Importers of Pure-Iron Triangle-Bar in France
Importer of Soft Magnetic Iron Triangular Bars in France
Importers of Soft-Magnetic Iron Triangle-Bar in France
Importer of Electro Magnetic Iron Triangular Bars in France
Importers of Electro-Magnetic Iron Triangle-Bar in France
Importer of Low Carbon Pure Iron Triangular Bars in France
Importers of Low-Carbon Pure-Iron Triangle-Bar in France
Importer of High Purity Iron Triangular Bars in France
Importers of High-Purity Iron Triangle-Bar in France
Importer of Pure Iron Triangular Bars in Italy
Importers of Pure-Iron Triangle-Bar in Italy
Importer of Soft Magnetic Iron Triangular Bars in Italy
Importers of Soft-Magnetic Iron Triangle-Bar in Italy
Importer of Electro Magnetic Iron Triangular Bars in Italy
Importers of Electro-Magnetic Iron Triangle-Bar in Italy
Importer of Low Carbon Pure Iron Triangular Bars in Italy
Importers of Low-Carbon Pure-Iron Triangle-Bar in Italy
Importer of High Purity Iron Triangular Bars in Italy
Importers of High-Purity Iron Triangle-Bar in Italy
Importer of Pure Iron Triangular Bars in Spain
Importers of Pure-Iron Triangle-Bar in Spain
Importer of Soft Magnetic Iron Triangular Bars in Spain
Importers of Soft-Magnetic Iron Triangle-Bar in Spain
Importer of Electro Magnetic Iron Triangular Bars in Spain
Importers of Electro-Magnetic Iron Triangle-Bar in Spain
Importer of Low Carbon Pure Iron Triangular Bars in Spain
Importers of Low-Carbon Pure-Iron Triangle-Bar in Spain
Importer of High Purity Iron Triangular Bars in Spain
Importers of High-Purity Iron Triangle-Bar in Spain
Importer of Pure Iron Triangular Bars in UK
Importers of Pure-Iron Triangle-Bar in U.K
Importer of Soft Magnetic Iron Triangular Bars in UK
Importers of Soft-Magnetic Iron Triangle-Bar in U.K
Importer of Electro Magnetic Iron Triangular Bars in UK
Importers of Electro-Magnetic Iron Triangle-Bar in U.K
Importer of Low Carbon Pure Iron Triangular Bars in UK
Importers of Low-Carbon Pure-Iron Triangle-Bar in U.K
Importer of High Purity Iron Triangular Bars in UK
Importers of High-Purity Iron Triangle-Bar in U.K
Importer of Pure Iron Triangular Bars in Brazil
Importers of Pure-Iron Triangle-Bar in Brazil
Importer of Soft Magnetic Iron Triangular Bars in Brazil
Importers of Soft-Magnetic Iron Triangle-Bar in Brazil
Importer of Electro Magnetic Iron Triangular Bars in Brazil
Importers of Electro-Magnetic Iron Triangle-Bar in Brazil
Importer of Low Carbon Pure Iron Triangular Bars in Brazil
Importers of Low-Carbon Pure-Iron Triangle-Bar in Brazil
Importer of High Purity Iron Triangular Bars in Brazil
Importers of High-Purity Iron Triangle-Bar in Brazil
Importer of Pure Iron Triangular Bars in London
Importers of Pure-Iron Triangle-Bar in London
Importer of Soft Magnetic Iron Triangular Bars in London
Importers of Soft-Magnetic Iron Triangle-Bar in London
Importer of Electro Magnetic Iron Triangular Bars in London
Importers of Electro-Magnetic Iron Triangle-Bar in London
Importer of Low Carbon Pure Iron Triangular Bars in London
Importers of Low-Carbon Pure-Iron Triangle-Bar in London
Importer of High Purity Iron Triangular Bars in London
Importers of High-Purity Iron Triangle-Bar in London
Importer of Pure Iron Triangular Bars in Switzerland
Importers of Pure-Iron Triangle-Bar in Switzerland
Importer of Soft Magnetic Iron Triangular Bars in Switzerland
Importers of Soft-Magnetic Iron Triangle-Bar in Switzerland
Importer of Electro Magnetic Iron Triangular Bars in Switzerland
Importers of Electro-Magnetic Iron Triangle-Bar in Switzerland
Importer of Low Carbon Pure Iron Triangular Bars in Switzerland
Importers of Low-Carbon Pure-Iron Triangle-Bar in Switzerland
Importer of High Purity Iron Triangular Bars in Switzerland
Importers of High-Purity Iron Triangle-Bar in Switzerland
Importer of Pure Iron Triangular Bars in Sweden
Importers of Pure-Iron Triangle-Bar in Sweden
Importer of Soft Magnetic Iron Triangular Bars in Sweden
Importers of Soft-Magnetic Iron Triangle-Bar in Sweden
Importer of Electro Magnetic Iron Triangular Bars in Sweden
Importers of Electro-Magnetic Iron Triangle-Bar in Sweden
Importer of Low Carbon Pure Iron Triangular Bars in Sweden
Importers of Low-Carbon Pure-Iron Triangle-Bar in Sweden
Importer of High Purity Iron Triangular Bars in Sweden
Importers of High-Purity Iron Triangle-Bar in Sweden
Importer of Pure Iron Triangular Bars in USA
Importers of Pure-Iron Triangle-Bar in U.S.A
Importer of Soft Magnetic Iron Triangular Bars in USA
Importers of Soft-Magnetic Iron Triangle-Bar in U.S.A
Importer of Electro Magnetic Iron Triangular Bars in USA
Importers of Electro-Magnetic Iron Triangle-Bar in U.S.A
Importer of Low Carbon Pure Iron Triangular Bars in USA
Importers of Low-Carbon Pure-Iron Triangle-Bar in U.S.A
Importer of High Purity Iron Triangular Bars in USA
Importers of High-Purity Iron Triangle-Bar in U.S.A
Importer of Pure Iron Triangular Bars in America
Importers of Pure-Iron Triangle-Bar in America
Importer of Soft Magnetic Iron Triangular Bars in America
Importers of Soft-Magnetic Iron Triangle-Bar in America
Importer of Electro Magnetic Iron Triangular Bars in America
Importers of Electro-Magnetic Iron Triangle-Bar in America
Importer of Low Carbon Pure Iron Triangular Bars in America
Importers of Low-Carbon Pure-Iron Triangle-Bar in America
Importer of High Purity Iron Triangular Bars in America
Importers of High-Purity Iron Triangle-Bar in America
Importer of Pure Iron Triangular Bars in India
Importers of Pure-Iron Triangle-Bar in India
Importer of Soft Magnetic Iron Triangular Bars in India
Importers of Soft-Magnetic Iron Triangle-Bar in India
Importer of Electro Magnetic Iron Triangular Bars in India
Importers of Electro-Magnetic Iron Triangle-Bar in India
Importer of Low Carbon Pure Iron Triangular Bars in India
Importers of Low-Carbon Pure-Iron Triangle-Bar in India
Importer of High Purity Iron Triangular Bars in India
Importers of High-Purity Iron Triangle-Bar in India
Importer of Soft Magnetic Iron Triangular Bars in Russia
Importers of Soft-Magnetic Iron Triangle-Bar in Russia
Importer of Electro Magnetic Iron Triangular Bars in Russia
Importers of Electro-Magnetic Iron Triangle-Bar in Russia
Importer of Low Carbon Pure Iron Triangular Bars in Russia
Importers of Low-Carbon Pure-Iron Triangle-Bar in Russia
Importer of High Purity Iron Triangular Bars in Russia
Importers of High-Purity Iron Triangle-Bar in Russia
Exporters of Soft-Magnetic Iron Triangle-Bar, Pure Iron Triangular Bars, Electro-Magnetic Iron Triangle-Bar, High-Purity Very Low-Carbon Iron Triangular Bars, Fe99.5% Iron Triangle-Bar, Fe99.8% Iron
Triangular Bars, Fe99.9% Iron Triangle-Bar
Exporter of Pure Iron Triangular Bars
Exporters of Pure-Iron Triangle-Bar
Exporter of High Purity Fe-99.5% Iron Triangular Bars
Exporters of High-Purity Fe99.5% Iron Triangle-Bar
Exporter of High Purity Fe-99.8% Iron Triangular Bars
Exporters of High-Purity Fe99.8% Iron Triangle-Bar
Exporter of High Purity Fe-99.9% Iron Triangular Bars
Exporters of High-Purity Fe99.9% Iron Triangle-Bar
Exporter of Soft Magnetic Iron Triangular Bars
Exporters of Soft-Magnetic Iron Triangle-Bar
Exporter of Soft Magnetic Iron Annealed Triangular Bars
Exporters of Soft-Magnetic Iron Annealed Triangle-Bar
Exporter of Soft Magnetic Iron Hydrogen Annealed Triangular Bars
Exporters of Soft-Magnetic Iron Hydrogen-Annealed Triangle-Bar
Exporter of Soft Magnetic Iron SMI Triangular Bars
Exporters of Soft-Magnetic Iron SMI Triangle-Bar
Exporter of Soft Magnetic Iron Silicon Triangular Bars
Exporters of Soft-Magnetic Iron Silicon Triangle-Bar
Exporter of Soft Steel with Ultra-Low Carbon and Low Impurities Triangular Bars
Exporters of Soft Steel with Ultra-Low Carbon and Low Impurities Triangle-Bar
Exporter of Low Carbon Pure Iron Triangular Bars
Exporters of Low-Carbon Pure-Iron Triangle-Bar
Exporter of Electric Magnetic Iron Triangular Bars
Exporters of Electric-Magnetic Iron Triangle-Bar
Exporter of Electromagnetic Triangular Bars, Electromagnet Triangular Bars
Exporters of Electromagnetic-Triangle-Bar, Electromagnet-Triangle-Bar
Exporter of Electro-Magnetic Iron Triangular Bars, Electro-Magnet Iron Triangular Bars
Exporters of Electro-Magnetic Iron Triangle-Bar, Electro-Magnet Iron Triangle-Bar
Exporter of High Purity Ultra Low Carbon Pure Iron Triangular Bars
Exporters of High-Purity Ultra Low-Carbon Pure-Iron Triangle-Bar
Exporter of DIN 17405 Grade RFe20 Triangular Bars
Exporters of DIN-17405 Grade RFe-20 Triangle-Bar
Exporter of DIN 17405 Grade RFe60 Triangular Bars
Exporters of DIN-17405 Grade RFe-60 Triangle-Bar
Exporter of DIN 17405 Grade RFe80 Triangular Bars
Exporters of DIN-17405 Grade RFe-80 Triangle-Bar
Exporter of DIN 17405 Grade RFe100 Triangular Bars
Exporters of DIN-17405 Grade RFe-100 Triangle-Bar
Exporter of DIN 17405 Grade RFe120 Triangular Bars
Exporters of DIN-17405 Grade RFe-120 Triangle-Bar
Exporter of DIN 17405 Pure Iron Triangular Bars
Exporters of DIN-17405 Pure-Iron Triangle-Bar
Exporter of IS-11946 Soft Magnetic Iron Triangular Bars
Exporters of IS-11946 Soft-Magnetic Iron Triangle-Bar
Exporter of IS-11946 Pure Iron Triangular Bars
Exporters of IS-11946 Pure-Iron Triangle-Bar
Exporter of IS-11947 Soft Magnetic Iron Triangular Bars
Exporters of IS-11947 Soft-Magnetic Iron Triangle-Bar
Exporter of IS-11947 Pure Iron Triangular Bars
Exporters of IS-11947 Pure-Iron Triangle-Bar
Exporter of JIS C2504 Grade SUY0 Triangular Bars
Exporters of JIS-C2504 Grade SUY-0 Triangle-Bar
Exporter of JIS C2504 Grade SUY1 Triangular Bars
Exporters of JIS-C2504 Grade SUY-1 Triangle-Bar
Exporter of JIS C2504 Grade SUY2 Triangular Bars
Exporters of JIS-C2504 Grade SUY-2 Triangle-Bar
Exporter of JIS C2504 Grade SUY3 Triangular Bars
Exporters of JIS-C2504 Grade SUY-3 Triangle-Bar
Exporter of JIS C2504 Grade A12 Triangular Bars
Exporters of JIS-C2504 Grade A-12 Triangle-Bar
Exporter of JIS C2504 Grade A20 Triangular Bars
Exporters of JIS-C2504 Grade A-20 Triangle-Bar
Exporter of JIS C2504 Grade A60 Triangular Bars
Exporters of JIS-C2504 Grade A-60 Triangle-Bar
Exporter of JIS C2504 Grade A80 Triangular Bars
Exporters of JIS-C2504 Grade A-80 Triangle-Bar
Exporter of JIS C2504 Grade A120 Triangular Bars
Exporters of JIS-C2504 Grade A-120 Triangle-Bar
Exporter of JIS C2504 Grade A240 Triangular Bars
Exporters of JIS-C2504 Grade A-240 Triangle-Bar
Exporter of JIS C2504 SUYP0 Triangular Bars
Exporters of JIS-C2504 SUYP-0 Triangle-Bar
Exporter of JIS C2504 SUYP1 Triangular Bars
Exporters of JIS-C2504 SUYP-1 Triangle-Bar
Exporter of JIS C2504 SUYP2 Triangular Bars
Exporters of JIS-C2504 SUYP-2 Triangle-Bar
Exporter of JIS C2504 SUYP3 Triangular Bars
Exporters of JIS-C2504 SUYP-3 Triangle-Bar
Exporter of JIS C2504 SUYB0 Triangular Bars
Exporters of JIS-C2504 SUYB-0 Triangle-Bar
Exporter of JIS C2504 SUYB1 Triangular Bars
Exporters of JIS-C2504 SUYB-1 Triangle-Bar
Exporter of JIS C2504 SUYB2 Triangular Bars
Exporters of JIS-C2504 SUYB-2 Triangle-Bar
Exporter of JIS C2504 SUYB3 Triangular Bars
Exporters of JIS-C2504 SUYB-3 Triangle-Bar
Exporter of JIS C2504 Pure Iron Triangular Bars
Exporters of JIS-C2504 Pure-Iron Triangle-Bar
Exporter of ASTM A848-01 Low Carbon Magnetic Iron Triangular Bars
Exporters of ASTM-A848-01 Low Carbon Magnetic-Iron Triangle-Bar
Exporter of ASTM A848 Alloy Type-1 Soft Magnetic Iron Triangular Bars
Exporters of ASTM-A848 Alloy Type-1 Soft Magnetic-Iron Triangle-Bar
Exporter of ASTM A848 Alloy Type-2 Soft Magnetic Iron Triangular Bars
Exporters of ASTM-A848 Alloy Type-2 Soft Magnetic-Iron Triangle-Bar
Exporter of ASTM A848 Pure Iron Triangular Bars
Exporters of ASTM-A848 Pure-Iron Triangle-Bar
Exporter of ASTM A848-1 Pure Iron Triangular Bars
Exporters of ASTM-A848-1 Pure-Iron Triangle-Bar
Exporter of ASTM A811 Low Carbon Magnetic Iron Triangular Bars
Exporters of ASTM-A811 Low Carbon Magnetic-Iron Triangle-Bar
Exporter of ASTM A811 Grade 1 Soft Magnetic Iron Triangular Bars
Exporters of ASTM-A811 Grade-1 Soft Magnetic-Iron Triangle-Bar
Exporter of ASTM A811 Grade 2 Soft Magnetic Iron Triangular Bars
Exporters of ASTM-A811 Grade-2 Soft Magnetic-Iron Triangle-Bar
Exporter of ASTM A811 Pure Iron Triangular Bars
Exporters of ASTM-A811 Pure-Iron Triangle-Bar
Exporter of Electrical Pure Iron Grade DT4 Triangular Bars
Exporters of Electrical-Pure Iron Grade DT4 Triangle-Bar
Exporter of Electrical Pure Iron Grade DT4A Triangular Bars
Exporters of Electrical-Pure Iron Grade DT4A Triangle-Bar
Exporter of Electrical Pure Iron Grade DT4E Triangular Bars
Exporters of Electrical-Pure Iron Grade DT4E Triangle-Bar
Exporter of Electrical Pure Iron Grade DT4C Triangular Bars
Exporters of Electrical-Pure Iron Grade DT4C Triangle-Bar
Exporter of Electrical Pure Iron Grade DT8 Triangular Bars
Exporters of Electrical-Pure Iron Grade DT8 Triangle-Bar
Exporter of Electrical Pure Iron Grade DT8A Triangular Bars
Exporters of Electrical-Pure Iron Grade DT8A Triangle-Bar
Exporter of Electrical Pure Iron Grade DT8E Triangular Bars
Exporters of Electrical-Pure Iron Grade DT8E Triangle-Bar
Exporter of Electrical Pure Iron Grade DT8C Triangular Bars
Exporters of Electrical-Pure Iron Grade DT8C Triangle-Bar
Exporter of Electrical Pure Iron Grade DT9 Triangular Bars
Exporters of Electrical-Pure Iron Grade DT9 Triangle-Bar
Exporter of ELCH2 Pure Iron Triangular Bars
Exporters of ELCH-2 Pure-Iron Triangle-Bar
Exporter of Ame8 Pure Iron Triangular Bars
Exporters of Ame-8 Pure-Iron Triangle-Bar
Exporter of ELCH2 Soft Magnetic Iron Triangular Bars
Exporters of ELCH-2 Soft Magnetic-Iron Triangle-Bar
Exporter of Ame8 Soft Magnetic Iron Triangular Bars
Exporters of Ame-8 Soft Magnetic-Iron Triangle-Bar
Exporter of GOST 11036-75 Grade 10895 Triangular Bars
Exporters of GOST-11036-75 Grade 10895 Triangle-Bar
Exporter of GOST 11036-75 Grade 20895 Triangular Bars
Exporters of GOST-11036-75 Grade 20895 Triangle-Bar
Exporter of GOST 11036-75 Grade 11895 Triangular Bars
Exporters of GOST-11036-75 Grade 11895 Triangle-Bar
Exporter of GOST 11036-75 Grade 21895 Triangular Bars
Exporters of GOST-11036-75 Grade 21895 Triangle-Bar
Exporter of GOST 11036-75 Grade 20880 Triangular Bars
Exporters of GOST-11036-75 Grade 20880 Triangle-Bar
Exporter of GOST 11036-75 Grade 21880 Triangular Bars
Exporters of GOST-11036-75 Grade 21880 Triangle-Bar
Exporter of GOST 11036-75 Grade 10860 Triangular Bars
Exporters of GOST-11036-75 Grade 10860 Triangle-Bar
Exporter of GOST 11036-75 Grade 20860 Triangular Bars
Exporters of GOST-11036-75 Grade 20860 Triangle-Bar
Exporter of GOST 11036-75 Grade 11860 Triangular Bars
Exporters of GOST-11036-75 Grade 11860 Triangle-Bar
Exporter of GOST 11036-75 Grade 11880 Triangular Bars
Exporters of GOST-11036-75 Grade 11880 Triangle-Bar
Exporter of GOST 11036-75 Grade 21860 Triangular Bars
Exporters of GOST-11036-75 Grade 21860 Triangle-Bar
Exporter of GOST 11036-75 Grade 10850 Triangular Bars
Exporters of GOST-11036-75 Grade 10850 Triangle-Bar
Exporter of GOST 11036-75 Grade 20850 Triangular Bars
Exporters of GOST-11036-75 Grade 20850 Triangle-Bar
Exporter of GOST 11036-75 Grade 11850 Triangular Bars
Exporters of GOST-11036-75 Grade 11850 Triangle-Bar
Exporter of GOST 11036-75 Grade 21850 Triangular Bars
Exporters of GOST-11036-75 Grade 21850 Triangle-Bar
Exporter of GOST 11036-75 Grade 10880 Triangular Bars
Exporters of GOST-11036-75 Grade 10880 Triangle-Bar
Exporter of GOST 11036-75 Grade 10864 Triangular Bars
Exporters of GOST-11036-75 Grade 10864 Triangle-Bar
Exporter of GOST 11036-75 Pure Iron Triangular Bars
Exporters of GOST-11036-75 Pure-Iron Triangle-Bar
Exporter of Pure Iron Triangular Bars in China
Exporters of Pure-Iron Triangle-Bar in China
Exporter of Soft Magnetic Iron Triangular Bars in China
Exporters of Soft-Magnetic Iron Triangle-Bar in China
Exporter of Electro Magnetic Iron Triangular Bars in China
Exporters of Electro-Magnetic Iron Triangle-Bar in China
Exporter of Low Carbon Pure Iron Triangular Bars in China
Exporters of Low-Carbon Pure-Iron Triangle-Bar in China
Exporter of High Purity Iron Triangular Bars in China
Exporters of High-Purity Iron Triangle-Bar in China
Exporter of Pure Iron Triangular Bars in Japan
Exporters of Pure-Iron Triangle-Bar in Japan
Exporter of Soft Magnetic Iron Triangular Bars in Japan
Exporters of Soft-Magnetic Iron Triangle-Bar in Japan
Exporter of Electro Magnetic Iron Triangular Bars in Japan
Exporters of Electro-Magnetic Iron Triangle-Bar in Japan
Exporter of Low Carbon Pure Iron Triangular Bars in Japan
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Japan
Exporter of High Purity Iron Triangular Bars in Japan
Exporters of High-Purity Iron Triangle-Bar in Japan
Exporter of Pure Iron Triangular Bars in Korea
Exporters of Pure-Iron Triangle-Bar in Korea
Exporter of Soft Magnetic Iron Triangular Bars in Korea
Exporters of Soft-Magnetic Iron Triangle-Bar in Korea
Exporter of Electro Magnetic Iron Triangular Bars in Korea
Exporters of Electro-Magnetic Iron Triangle-Bar in Korea
Exporter of Low Carbon Pure Iron Triangular Bars in Korea
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Korea
Exporter of High Purity Iron Triangular Bars in Korea
Exporters of High-Purity Iron Triangle-Bar in Korea
Exporter of Pure Iron Triangular Bars in Europe
Exporters of Pure-Iron Triangle-Bar in Europe
Exporter of Soft Magnetic Iron Triangular Bars in Europe
Exporters of Soft-Magnetic Iron Triangle-Bar in Europe
Exporter of Electro Magnetic Iron Triangular Bars in Europe
Exporters of Electro-Magnetic Iron Triangle-Bar in Europe
Exporter of Low Carbon Pure Iron Triangular Bars in Europe
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Europe
Exporter of High Purity Iron Triangular Bars in Europe
Exporters of High-Purity Iron Triangle-Bar in Europe
Exporter of Pure Iron Triangular Bars in Germany
Exporters of Pure-Iron Triangle-Bar in Germany
Exporter of Soft Magnetic Iron Triangular Bars in Germany
Exporters of Soft-Magnetic Iron Triangle-Bar in Germany
Exporter of Electro Magnetic Iron Triangular Bars in Germany
Exporters of Electro-Magnetic Iron Triangle-Bar in Germany
Exporter of Low Carbon Pure Iron Triangular Bars in Germany
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Germany
Exporter of High Purity Iron Triangular Bars in Germany
Exporters of High-Purity Iron Triangle-Bar in Germany
Exporter of Pure Iron Triangular Bars in France
Exporters of Pure-Iron Triangle-Bar in France
Exporter of Soft Magnetic Iron Triangular Bars in France
Exporters of Soft-Magnetic Iron Triangle-Bar in France
Exporter of Electro Magnetic Iron Triangular Bars in France
Exporters of Electro-Magnetic Iron Triangle-Bar in France
Exporter of Low Carbon Pure Iron Triangular Bars in France
Exporters of Low-Carbon Pure-Iron Triangle-Bar in France
Exporter of High Purity Iron Triangular Bars in France
Exporters of High-Purity Iron Triangle-Bar in France
Exporter of Pure Iron Triangular Bars in Italy
Exporters of Pure-Iron Triangle-Bar in Italy
Exporter of Soft Magnetic Iron Triangular Bars in Italy
Exporters of Soft-Magnetic Iron Triangle-Bar in Italy
Exporter of Electro Magnetic Iron Triangular Bars in Italy
Exporters of Electro-Magnetic Iron Triangle-Bar in Italy
Exporter of Low Carbon Pure Iron Triangular Bars in Italy
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Italy
Exporter of High Purity Iron Triangular Bars in Italy
Exporters of High-Purity Iron Triangle-Bar in Italy
Exporter of Pure Iron Triangular Bars in Spain
Exporters of Pure-Iron Triangle-Bar in Spain
Exporter of Soft Magnetic Iron Triangular Bars in Spain
Exporters of Soft-Magnetic Iron Triangle-Bar in Spain
Exporter of Electro Magnetic Iron Triangular Bars in Spain
Exporters of Electro-Magnetic Iron Triangle-Bar in Spain
Exporter of Low Carbon Pure Iron Triangular Bars in Spain
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Spain
Exporter of High Purity Iron Triangular Bars in Spain
Exporters of High-Purity Iron Triangle-Bar in Spain
Exporter of Pure Iron Triangular Bars in UK
Exporters of Pure-Iron Triangle-Bar in U.K
Exporter of Soft Magnetic Iron Triangular Bars in UK
Exporters of Soft-Magnetic Iron Triangle-Bar in U.K
Exporter of Electro Magnetic Iron Triangular Bars in UK
Exporters of Electro-Magnetic Iron Triangle-Bar in U.K
Exporter of Low Carbon Pure Iron Triangular Bars in UK
Exporters of Low-Carbon Pure-Iron Triangle-Bar in U.K
Exporter of High Purity Iron Triangular Bars in UK
Exporters of High-Purity Iron Triangle-Bar in U.K
Exporter of Pure Iron Triangular Bars in Brazil
Exporters of Pure-Iron Triangle-Bar in Brazil
Exporter of Soft Magnetic Iron Triangular Bars in Brazil
Exporters of Soft-Magnetic Iron Triangle-Bar in Brazil
Exporter of Electro Magnetic Iron Triangular Bars in Brazil
Exporters of Electro-Magnetic Iron Triangle-Bar in Brazil
Exporter of Low Carbon Pure Iron Triangular Bars in Brazil
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Brazil
Exporter of High Purity Iron Triangular Bars in Brazil
Exporters of High-Purity Iron Triangle-Bar in Brazil
Exporter of Pure Iron Triangular Bars in London
Exporters of Pure-Iron Triangle-Bar in London
Exporter of Soft Magnetic Iron Triangular Bars in London
Exporters of Soft-Magnetic Iron Triangle-Bar in London
Exporter of Electro Magnetic Iron Triangular Bars in London
Exporters of Electro-Magnetic Iron Triangle-Bar in London
Exporter of Low Carbon Pure Iron Triangular Bars in London
Exporters of Low-Carbon Pure-Iron Triangle-Bar in London
Exporter of High Purity Iron Triangular Bars in London
Exporters of High-Purity Iron Triangle-Bar in London
Exporter of Pure Iron Triangular Bars in Switzerland
Exporters of Pure-Iron Triangle-Bar in Switzerland
Exporter of Soft Magnetic Iron Triangular Bars in Switzerland
Exporters of Soft-Magnetic Iron Triangle-Bar in Switzerland
Exporter of Electro Magnetic Iron Triangular Bars in Switzerland
Exporters of Electro-Magnetic Iron Triangle-Bar in Switzerland
Exporter of Low Carbon Pure Iron Triangular Bars in Switzerland
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Switzerland
Exporter of High Purity Iron Triangular Bars in Switzerland
Exporters of High-Purity Iron Triangle-Bar in Switzerland
Exporter of Pure Iron Triangular Bars in Sweden
Exporters of Pure-Iron Triangle-Bar in Sweden
Exporter of Soft Magnetic Iron Triangular Bars in Sweden
Exporters of Soft-Magnetic Iron Triangle-Bar in Sweden
Exporter of Electro Magnetic Iron Triangular Bars in Sweden
Exporters of Electro-Magnetic Iron Triangle-Bar in Sweden
Exporter of Low Carbon Pure Iron Triangular Bars in Sweden
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Sweden
Exporter of High Purity Iron Triangular Bars in Sweden
Exporters of High-Purity Iron Triangle-Bar in Sweden
Exporter of Pure Iron Triangular Bars in USA
Exporters of Pure-Iron Triangle-Bar in U.S.A
Exporter of Soft Magnetic Iron Triangular Bars in USA
Exporters of Soft-Magnetic Iron Triangle-Bar in U.S.A
Exporter of Electro Magnetic Iron Triangular Bars in USA
Exporters of Electro-Magnetic Iron Triangle-Bar in U.S.A
Exporter of Low Carbon Pure Iron Triangular Bars in USA
Exporters of Low-Carbon Pure-Iron Triangle-Bar in U.S.A
Exporter of High Purity Iron Triangular Bars in USA
Exporters of High-Purity Iron Triangle-Bar in U.S.A
Exporter of Pure Iron Triangular Bars in America
Exporters of Pure-Iron Triangle-Bar in America
Exporter of Soft Magnetic Iron Triangular Bars in America
Exporters of Soft-Magnetic Iron Triangle-Bar in America
Exporter of Electro Magnetic Iron Triangular Bars in America
Exporters of Electro-Magnetic Iron Triangle-Bar in America
Exporter of Low Carbon Pure Iron Triangular Bars in America
Exporters of Low-Carbon Pure-Iron Triangle-Bar in America
Exporter of High Purity Iron Triangular Bars in America
Exporters of High-Purity Iron Triangle-Bar in America
Exporter of Pure Iron Triangular Bars in India
Exporters of Pure-Iron Triangle-Bar in India
Exporter of Soft Magnetic Iron Triangular Bars in India
Exporters of Soft-Magnetic Iron Triangle-Bar in India
Exporter of Electro Magnetic Iron Triangular Bars in India
Exporters of Electro-Magnetic Iron Triangle-Bar in India
Exporter of Low Carbon Pure Iron Triangular Bars in India
Exporters of Low-Carbon Pure-Iron Triangle-Bar in India
Exporter of High Purity Iron Triangular Bars in India
Exporters of High-Purity Iron Triangle-Bar in India
Exporter of Soft Magnetic Iron Triangular Bars in Russia
Exporters of Soft-Magnetic Iron Triangle-Bar in Russia
Exporter of Electro Magnetic Iron Triangular Bars in Russia
Exporters of Electro-Magnetic Iron Triangle-Bar in Russia
Exporter of Low Carbon Pure Iron Triangular Bars in Russia
Exporters of Low-Carbon Pure-Iron Triangle-Bar in Russia
Exporter of High Purity Iron Triangular Bars in Russia
Exporters of High-Purity Iron Triangle-Bar in Russia
Inspection & Approval Certificates : C/W Certificate (Calibration Works Certificate) EN 10204 3.1 / DIN 50049 3.1 / ISO 10474 3.1 Mill Test Certificate,
ISI Mark, BIS Certified, NACE HIC TM-0284 / NACE MR-0103 / NACE MR-0175 / ISO 15166, CE Marked, European Pressure Equipment Directive
PED-2014/68/EU, AD-2000-WO, ASME Boiler & Pressure Vessel Code Section-II Casting A Edition 2019, API 6A (American Petroleum Institute),
with EN 10204 3.2 Certificate duly Certified & Approved by IBR (Indian Boiler Regulations), LR Class (Lloyd’s Register), GL (Germanischer Lloyd),
BV (Bureau Veritas), DNV (Det Norske Veritas), ABS Class (American Bureau of Shipping), SGS, TUV, RINA, IR Class (Indian Register of Shipping),
NORSOK Approved Standard M-630, M-650 Rev.3
If you have any requirement of above items, please feel free to contact us
Mobile No. 0091 – 9820292499
Email – marketing@rolexmetals.com
57-A Khatargalli
Mumbai – 400 002 India
CHAIRMAN – chairman@rolexmetals.com
MANAGING DIRECTOR – managingdirector@rolexmetals.com
TECHNICAL DIRECTOR – technicaldirector@rolexmetals.com
SALES DIRECTOR – salesdirector@rolexmetals.com
COMMERCIAL DIRECTOR – commercialdirector@rolexmetals.com
COMMERCIAL MANAGER – commercial@rolexmetals.com
GENERAL MANAGER – generalmanager@rolexmetals.com
SALES MANAGER – salesmanager@rolexmetals.com
PURCHASE MANAGER – purchasemanager@rolexmetals.com
TECHNICAL MANAGER – technical@rolexmetals.com
WORKS MANAGER – worksmanager@rolexmetals.com
STORES MANAGER – stores@rolexmetals.com
WAREHOUSE MANAGER – warehouse@rolexmetals.com
SALES DOMESTIC – salesdomestic@rolexmetals.com
SALES INTERNATIONAL – salesinternational@rolexmetals.com
SALES GENERAL – sales@rolexmetals.com
PURCHASE GENERAL – purchase@rolexmetals.com
FINANCE MANAGER – finance@rolexmetals.com
ACCOUNTS MANAGER – accounts@rolexmetals.com
GENERAL INFORMATION – info@rolexmetals.com
EXPORT MANAGER – export@rolexmetals.com
IMPORT MANAGER – import@rolexmetals.com
AIR EXPORT – airexport@rolexmetals.com
SEA EXPORT – seaexport@rolexmetals.com
CUSTOMS – customs@rolexmetals.com
AIR FREIGHT – airfreight@rolexmetals.com
SEA FREIGHT – seafreight@rolexmetals.com
DESPATCH – despatch@rolexmetals.com
INSPECTION – inspection@rolexmetals.com
LOGISTICS – logistics@rolexmetals.com
TRANSPORT – transport@rolexmetals.com
KALAMBOLI WAREHOUSE – kalamboli@rolexmetals.com
TALOJA WAREHOUSE – taloja@rolexmetals.com
KHOPOLI WAREHOUSE – khopoli@rolexmetals.com
NHAVA SHEVA WAREHOUSE – nhavasheva@rolexmetals.com
KANDLA WAREHOUSE – kandla@rolexmetals.com
MUMBAI WAREHOUSE – mumbai@rolexmetals.com
STOCKYARD – stockyard@rolexmetals.com
SERVICE – service@rolexmetals.com
SUPPORT – support@rolexmetals.com
RECRUITMENT – career@rolexmetals.com
WEBMASTER – webmaster@rolexmetals.com
CUSTOMER CARE – customercare@rolexmetals.com | {"url":"http://rolexconcast.com/index.php/soft-magnetic-iron-triangular-bars/","timestamp":"2024-11-12T16:41:16Z","content_type":"text/html","content_length":"183894","record_id":"<urn:uuid:210de29f-09b2-478b-9282-ab427962bac6>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00109.warc.gz"} |
Beyond Bayesians and Frequentists
(Note: this is cross-posted from my blog and also available in pdf here.)
If you are a newly initiated student into the field of machine learning, it won't be long before you start hearing the words "Bayesian" and "frequentist" thrown around. Many people around you
probably have strong opinions on which is the "right" way to do statistics, and within a year you've probably developed your own strong opinions (which are suspiciously similar to those of the people
around you, despite there being a much greater variance of opinion between different labs). In fact, now that the year is 2012 the majority of new graduate students are being raised as Bayesians (at
least in the U.S.) with frequentists thought of as stodgy emeritus professors stuck in their ways.
If you are like me, the preceding set of facts will make you very uneasy. They will make you uneasy because simple pattern-matching -- the strength of people's opinions, the reliability with which
these opinions split along age boundaries and lab boundaries, and the ridicule that each side levels at the other camp – makes the "Bayesians vs. frequentists" debate look far more like politics than
like scholarly discourse. Of course, that alone does not necessarily prove anything; these disconcerting similarities could just be coincidences that I happened to cherry-pick.
My next point, then, is that we are right to be uneasy, because such debate makes us less likely to evaluate the strengths and weaknesses of both approaches in good faith. This essay is a push
against that --- I summarize the justifications for Bayesian methods and where they fall short, show how frequentist approaches can fill in some of their shortcomings, and then present my personal
(though probably woefully under-informed) guidelines for choosing which type of approach to use.
Before doing any of this, though, a bit of background is in order...
1. Background on Bayesians and Frequentists
1.1. Three Levels of Argument
As Andrew Critch [6] insightfully points out, the Bayesians vs. frequentists debate is really three debates at once, centering around one or more of the following arguments:
1. Whether to interpret subjective beliefs as probabilities
2. Whether to interpret probabilities as subjective beliefs (as opposed to asymptotic frequencies)
3. Whether a Bayesian or frequentist algorithm is better suited to solving a particular problem.
Given my own research interests, I will add a fourth argument:
4. Whether Bayesian or frequentist techniques are better suited to engineering an artificial intelligence.
Andrew Gelman [9] has his own well-written essay on the subject, where he expands on these distinctions and presents his own more nuanced view.
Why are these arguments so commonly conflated? I'm not entirely sure; I would guess it is for historical reasons but I have so far been unable to find said historical reasons. Whatever the reasons,
what this boils down to in the present day is that people often form opinions on 1. and 2., which then influence their answers to 3. and 4. This is not good, since 1. and 2. are philosophical in
nature and difficult to resolve correctly, whereas 3. and 4. are often much easier to resolve and extremely important to resolve correctly in practice. Let me re-iterate: the Bayes vs. frequentist
discussion should center on the practical employment of the two methods, or, if epistemology must be discussed, it should be clearly separated from the day-to-day practical decisions. Aside from the
difficulties with correctly deciding epistemology, the relationship between generic epistemology and specific practices in cutting-edge statistical research is only via a long causal chain, and it
should be completely unsurprising if Bayesian epistemology leads to the employment of frequentist tools or vice versa.
For this reason and for reasons of space, I will spend the remainder of the essay focusing on statistical algorithms rather than on interpretations of probability. For those who really want to
discuss interpretations of probability, I will address that in a later essay.
1.2. Recap of Bayesian Decision Theory
(What follows will be review for many.) In Bayesian decision theory, we assume that there is some underlying world state θ and a likelihood function p(X1,...,Xn | θ) over possible observations. (A
likelihood function is just a conditional probability distribution where the parameter conditioned on can vary.) We also have a space A of possible actions and a utility function U(θ; a) that gives
the utility of performing action a if the underlying world state is θ. We can incorporate notions like planning and value of information by defining U(θ; a) recursively in terms of an identical agent
to ourselves who has seen one additional observation (or, if we are planning against an adversary, in terms of the adversary). For a more detailed overview of this material, see the tutorial by North [11].
What distinguishes the Bayesian approach in particular is one additional assumption, a prior distribution p(θ) over possible world states. To make a decision with respect to a given prior, we compute
the posterior distribution p[posterior](θ | X1,...,Xn) using Bayes' theorem, then take the action a that maximizes $\mathbb{E}_{p_{\mathrm{posterior}}}[U(\theta; a)]$.
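In the discrete case this computation can be written out explicitly. The sketch below (all numbers are made-up illustrations, not from the text) computes a posterior over two world states and picks the action maximizing posterior expected utility:

```python
import numpy as np

# Two possible world states and a prior over them (made-up numbers).
prior = np.array([0.5, 0.5])                # p(theta) for theta in {0, 1}
# Likelihood p(x | theta) of a single binary observation x.
likelihood = np.array([[0.9, 0.1],          # row: theta = 0
                       [0.2, 0.8]])         # row: theta = 1
# Utility U(theta; a) for actions a in {0, 1}.
U = np.array([[ 1.0, -1.0],                 # row: theta = 0
              [-1.0,  1.0]])                # row: theta = 1

def posterior(x):
    """Bayes' theorem for one observation x in {0, 1}."""
    unnorm = prior * likelihood[:, x]
    return unnorm / unnorm.sum()

def bayes_action(x):
    """Take the action maximizing posterior expected utility."""
    expected = posterior(x) @ U             # E_posterior[U(theta; a)] per action
    return int(np.argmax(expected))
```

Here observing x = 0 favors θ = 0, so `bayes_action(0)` returns action 0; observing x = 1 flips the decision.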
In practice, p[posterior](θ | X1,...,Xn) can be quite difficult to compute, and so we often attempt to approximate it. Such attempts are known as approximate inference algorithms.
1.3. Steel-manning Frequentists
There are many different ideas that fall under the broad umbrella of frequentist techniques. While it would be impossible to adequately summarize all of them even if I attempted to, there are three
in particular that I would like to describe, and which I will call frequentist decision theory, frequentist guarantees, and frequentist analysis tools.
Frequentist decision theory has a very similar setup to Bayesian decision theory, with a few key differences. These are discussed in detail and contrasted with Bayesian decision theory in [10],
although we summarize the differences here. There is still a likelihood function p(X1,...,Xn | θ) and a utility function U(θ; a). However, we do not assume the existence of a prior on θ, and instead
choose the decision rule a(X1,...,Xn) that maximizes
$\displaystyle \min\limits_{\theta} \mathbb{E}[U(\theta; a(X_1,\ldots,X_n)) \mid \theta]. \ \ \ \ \ (1)$
In other words, we ask for a worst case guarantee rather than an average case guarantee. As an example of how these would differ, imagine a scenario where we have no data to observe, an unknown θ in
{1,...,N}, and we choose an action a in {0,...,N}. Furthermore, U(θ; 0) = 0 for all θ, U(θ; a) = -1 if a = θ, and U(θ; a) = 1 if a ≠ 0 and a ≠ θ. Then a frequentist will always choose a = 0, because any other action gets -1 utility in the worst case; a Bayesian with a uniform prior, on the other hand, will happily choose any non-zero value of a, since such an action gains (N-2)/N utility in expectation. (I am purposely ignoring more complex ideas like mixed strategies for the purpose of illustration.)
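The toy example can be checked directly; the snippet below (with an arbitrary N = 10 and a uniform prior for the Bayesian) computes both criteria:

```python
# Toy example from the text: theta in {1,...,N}, actions a in {0,...,N};
# U(theta, 0) = 0, and for a != 0, U(theta, a) = -1 if a == theta, else +1.
N = 10

def U(theta, a):
    if a == 0:
        return 0.0
    return -1.0 if a == theta else 1.0

thetas = range(1, N + 1)
actions = range(0, N + 1)

# Frequentist: maximize worst-case utility over theta.
worst_case = {a: min(U(t, a) for t in thetas) for a in actions}
freq_action = max(actions, key=lambda a: worst_case[a])

# Bayesian with a uniform prior: maximize average-case utility.
avg_case = {a: sum(U(t, a) for t in thetas) / N for a in actions}
bayes_action = max(actions, key=lambda a: avg_case[a])
```

The frequentist criterion selects a = 0 (worst case 0 rather than -1), while the average-case criterion selects some nonzero a with expected utility (N-2)/N.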
Note that the frequentist optimization problem is more complicated than in the Bayesian case, since the value of (1) depends on the joint behavior of a(X1,...,Xn), whereas with Bayes we can optimize
a(X1,...,Xn) for each set of observations separately.
As a result of this more complex optimization problem, it is often not actually possible to maximize (1), so many frequentist techniques instead develop tools to lower-bound (1) for a given decision
procedure, and then try to construct a decision procedure that is reasonably close to the optimum. Support vector machines [2], which try to pick separating hyperplanes that minimize generalization
error, are one example of this where the algorithm is explicitly trying to maximize worst-case utility. Another example of a frequentist decision procedure is L1-regularized least squares for sparse
recovery [3], where the procedure itself does not look like it is explicitly maximizing any utility function, but a separate analysis shows that it is close to the optimal procedure anyways.
The second sort of frequentist approach to statistics is what I call a frequentist guarantee. A frequentist guarantee on an algorithm is a guarantee that, with high probability with respect to how
the data was generated, the output of the algorithm will satisfy a given property. The most familiar example of this is any algorithm that generates a frequentist confidence interval: to generate a
95% frequentist confidence interval for a parameter θ is to run an algorithm that outputs an interval, such that with probability at least 95% θ lies within the interval. An important fact about most
such algorithms is that the size of the interval only grows logarithmically with the amount of confidence we require, so getting a 99.9999% confidence interval is only slightly harder than getting a
95% confidence interval (and we should probably be asking for the former whenever possible).
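The mild dependence on the confidence level is easy to see for the simplest such interval, the one Hoeffding's inequality gives for the mean of bounded i.i.d. samples (the sample size below is an arbitrary illustration):

```python
import math

def hoeffding_halfwidth(n, delta):
    """Half-width of a (1 - delta) confidence interval for the mean of n
    i.i.d. samples bounded in [0, 1], via Hoeffding's inequality."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

n = 10_000
w95 = hoeffding_halfwidth(n, 0.05)        # 95% confidence interval
w999999 = hoeffding_halfwidth(n, 1e-6)    # 99.9999% confidence interval
ratio = w999999 / w95                     # only about 2x wider
```

Boosting the confidence from 95% to 99.9999% roughly doubles the interval width here, while the failure probability drops by four orders of magnitude.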
If we use such algorithms to test hypotheses or to test discrete properties of θ, then we can obtain algorithms that take in probabilistically generated data and produce an output that with high
probability depends only on how the data was generated, not on the specific random samples that were given. For instance, we can create an algorithm that takes in samples from two distributions, and
is guaranteed to output 1 whenever they are the same, 0 whenever they differ by at least ε in total variational distance, and could have arbitrary output if they are different but the total
variational distance is less than ε. This is an amazing property --- it takes in random input and produces an essentially deterministic answer.
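As a crude sketch of such a tester (an illustration, not an optimal algorithm), one can compare empirical distributions over a finite support and threshold their total variational distance at ε/2; with enough samples the output is, with high probability, determined by the underlying distributions alone:

```python
import numpy as np

def same_or_far(xs, ys, support, eps):
    """Output 1 if the two samples look identically distributed, 0 if their
    empirical total variational distance exceeds eps / 2. With enough samples
    this is correct w.h.p. when the true distributions are equal or at least
    eps apart (arbitrary output is allowed in between)."""
    px = np.array([np.mean(np.asarray(xs) == s) for s in support])
    py = np.array([np.mean(np.asarray(ys) == s) for s in support])
    tv = 0.5 * np.abs(px - py).sum()
    return 1 if tv <= eps / 2 else 0

rng = np.random.default_rng(0)
support = [0, 1]
same_a = rng.integers(0, 2, size=20000)           # fair coin
same_b = rng.integers(0, 2, size=20000)           # another fair coin
far = (rng.random(20000) < 0.9).astype(int)       # heavily biased coin
```

With 20,000 samples, the empirical TV distance between the two fair coins concentrates near 0, while the fair-vs-biased pair sits near 0.4, so the outputs are essentially deterministic.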
Finally, a third type of frequentist approach seeks to construct analysis tools for understanding the behavior of random variables. Metric entropy, the Chernoff and Azuma-Hoeffding bounds [12], and
Doob's optional stopping theorem are representative examples of this sort of approach. Arguably, everyone with the time to spare should master these techniques, since being able to analyze random
variables is important no matter what approach to statistics you take. Indeed, frequentist analysis tools have no conflict at all with Bayesian methods --- they simply provide techniques for
understanding the behavior of the Bayesian model.
2. Bayes vs. Other Methods
2.1. Justification for Bayes
We presented Bayesian decision theory above, but are there any reasons why we should actually use it? One commonly-given reason is that Bayesian statistics is merely the application of Bayes'
Theorem, which, being a theorem, describes the only correct way to update beliefs in response to new evidence; anything else can only be justified to the extent that it provides a good approximation
to Bayesian updating. This may be true, but Bayes' Theorem only applies if we already have a prior, and if we accept probability as the correct framework for expressing uncertain beliefs. We might
want to avoid one or both of these assumptions. Bayes' theorem also doesn't explain why we care about expected utility as opposed to some other statistic of the distribution over utilities (although
note that frequentist decision theory also tries to maximize expected utility).
One compelling answer to this is dutch-booking, which shows that any agent must implicitly be using a probability model to make decisions, or else there is a series of bets that they would be willing
to make that causes them to lose money with certainty. Another answer is the complete class theorem, which shows that any non-Bayesian decision procedure is strictly dominated by a Bayesian decision
procedure --- meaning that the Bayesian procedure performs at least as well as the non-Bayesian procedure in all cases with certainty. In other words, if you are doing anything non-Bayesian, then
either it is secretly a Bayesian procedure or there is another procedure that does strictly better than it. Finally, the VNM Utility Theorem states that any agent with consistent preferences over
distributions of outcomes must be implicitly maximizing the expected value of some scalar-valued function, which we can then use as our choice of utility function U. These theorems, however, ignore
the issue of computation --- while the best decision procedure may be Bayesian, the best computationally-efficient decision procedure could easily be non-Bayesian.
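The dutch-book argument itself is just arithmetic. The sketch below uses made-up incoherent prices: an agent whose prices for A and ¬A sum to more than 1, and who is willing to buy each ticket at its stated price, loses money in every state of the world:

```python
# Made-up incoherent betting prices: price(A) + price(not-A) = 1.2 > 1.
price_A, price_notA = 0.6, 0.6

def agent_profit(a_occurs: bool) -> float:
    """Agent buys both tickets; each pays 1 if its event occurs.
    Exactly one of A and not-A occurs, so the payout is always exactly 1."""
    cost = price_A + price_notA
    return 1.0 - cost            # -0.2 regardless of whether A occurs
```

Whatever happens, the agent pays 1.2 for a guaranteed payout of 1: a sure loss of 0.2, which is the dutch book.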
Another justification for Bayes is that, in contrast to ad hoc frequentist techniques, it actually provides a general theory for constructing statistical algorithms, as well as for incorporating side
information such as expert knowledge. Indeed, when trying to model complex and highly structured situations it is difficult to obtain any sort of frequentist guarantees (although analysis tools can
still often be applied to gain intuition about parts of the model). A prior lets us write down the sorts of models that would allow us to capture structured situations (for instance, when trying to
do language modeling or transfer learning). Non-Bayesian methods exist for these situations, but they are often ad hoc and in many cases end up looking like an approximation to Bayes. One example of
this is Kneser-Ney smoothing for n-gram models, an ad hoc algorithm that ended up being very similar to an approximate inference algorithm for the hierarchical Pitman-Yor process [15, 14, 17, 8].
This raises another important point against Bayes, which is that the proper Bayesian interpretation may be very mathematically complex. Pitman-Yor processes are on the cutting-edge of Bayesian
nonparametric statistics, which is itself one of the more technical subfields of statistical machine learning, so it was probably much easier to come up with Kneser-Ney smoothing than to find the
interpretation in terms of Pitman-Yor processes.
2.2. When the Justifications Fail
The first and most common objection to Bayes is that a Bayesian method is only as good as its prior. While for simple models the performance of Bayes is relatively independent of the prior, such
models can only capture data where frequentist techniques would also perform very well. For more complex (especially nonparametric) Bayesian models, the performance can depend strongly on the prior,
and designing good priors is still an open problem. As one example I point to my own research on hierarchical nonparametric models, where the most straightforward attempts to build a hierarchical
model lead to severe pathologies [13].
Even if a Bayesian model does have a good prior, it may be computationally intractable to perform posterior inference. For instance, structure learning in Bayesian networks is NP-hard [4], as is
topic inference in the popular latent Dirichlet allocation model (and this continues to hold even if we only want to perform approximate inference). Similar stories probably hold for other common
models, although a theoretical survey has yet to be made; suffice to say that in practice approximate inference remains a difficult and unsolved problem, with many models not even considered because
of the apparent hopelessness of performing inference in them.
Because frequentist methods often come with an analysis of the specific algorithm being employed, they can sometimes overcome these computational issues. One example of this mentioned already is L1
regularized least squares [3]. The problem setup is that we have a linear regression task Ax = b+v where A and b are known, v is a noise vector, and x is believed to be sparse (typically x has many
more rows than b, so without the sparsity assumption x would be underdetermined). Let us suppose that x has n rows and k non-zero rows --- then the number of possible sparsity patterns is $\binom{n}{k}$ --- large enough that a brute-force consideration of all possible sparsity patterns is intractable. However, we can show that solving a certain semidefinite program will with high probability
yield the appropriate sparsity pattern, after which recovering x reduces to a simple least squares problem. (A semidefinite program is a certain type of optimization problem that can be solved
efficiently [16].)
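A runnable sketch of the same idea, using ISTA (a simple iterative soft-thresholding solver for the L1-regularized objective) in place of the semidefinite-programming formulation; the problem sizes and signal values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                     # x has n rows, b has m rows, k nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, b, lam=0.01, iters=2000):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterative
    soft-thresholding (proximal gradient descent)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L    # gradient step on the squared loss
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # L1 prox
    return x

x_hat = ista(A, b)
recovered = set(np.flatnonzero(np.abs(x_hat) > 0.3))
```

Despite m < n, the L1 penalty recovers the correct sparsity pattern here, after which the nonzero entries can be refit by ordinary least squares.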
Finally, Bayes has no good way of dealing with adversaries or with cases where the data was generated in a complicated way that could make it highly biased (for instance, as the output of an
optimization procedure). A toy example of an adversary would be playing rock-paper-scissors --- how should a Bayesian play such a game? The straightforward answer is to build up a model of the
opponent based on their plays so far, and then to make the play that maximizes the expected score (probability of winning minus probability of losing). However, such a strategy fares poorly against
any opponent with access to the model being used, as they can then just run the model themselves to predict the Bayesian's plays in advance, thereby winning every single time. In contrast, there is a
frequentist strategy called the multiplicative weights update method that fares well against an arbitrary opponent (even one with superior computational resources and access to our agent's source
code). The multiplicative weights method does far more than winning at rock-paper-scissors --- it is also a key component of the fastest algorithm for solving many important optimization problems
(including the network flow algorithm), and it forms the theoretical basis for the widely used AdaBoost algorithm [1, 5, 7].
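A minimal sketch of multiplicative weights at rock-paper-scissors, against an adversary who sees our current mixed strategy and best-responds (the learning rate η and horizon T are made-up choices):

```python
import numpy as np

# payoff[i, j]: our payoff playing i against the opponent's j
# (0 = rock, 1 = paper, 2 = scissors).
payoff = np.array([[ 0., -1.,  1.],
                   [ 1.,  0., -1.],
                   [-1.,  1.,  0.]])

def play(T=5000, eta=0.05):
    """Multiplicative weights vs. a best-responding adversary."""
    w = np.ones(3)
    total = 0.0
    for _ in range(T):
        p = w / w.sum()                   # our current mixed strategy
        j = int(np.argmin(p @ payoff))    # adversary best-responds to p
        total += float(p @ payoff[:, j])  # our expected payoff this round
        w *= np.exp(eta * payoff[:, j])   # reward actions that did well
    return total / T

avg = play()
```

Even though the adversary exploits our strategy every round, the regret bound keeps the average payoff within roughly η + (ln 3)/(ηT) of the game value of 0; a fixed deterministic strategy would instead lose every round.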
2.3. When To Use Each Method
The essential difference between Bayesian and frequentist decision theory is that Bayes makes the additional assumption of a prior over θ, and optimizes for average-case performance rather than
worst-case performance. It follows, then, that Bayes is the superior method whenever we can obtain a good prior and when good average-case performance is sufficient. However, if we have no way of
obtaining a good prior, or when we need guaranteed performance, frequentist methods are the way to go. For instance, if we are trying to build a software package that should be widely deployable, we
might want to use a frequentist method because users can be sure that the software will work as long as some number of easily-checkable assumptions are met.
A nice middle-ground between purely Bayesian and purely frequentist methods is to use a Bayesian model coupled with frequentist model-checking techniques; this gives us the freedom in modeling
afforded by a prior but also gives us some degree of confidence that our model is correct. This approach is suggested by both Gelman [9] and Jordan [10].
3. Conclusion
When the assumptions of Bayes' Theorem hold, and when Bayesian updating can be performed computationally efficiently, then it is indeed tautological that Bayes is the optimal approach. Even when some
of these assumptions fail, Bayes can still be a fruitful approach. However, by working under weaker (sometimes even adversarial) assumptions, frequentist approaches can perform well in very
complicated domains even with fairly simple models; this is because, with fewer assumptions being made at the outset, less work has to be done to ensure that those assumptions are met.
From a research perspective, we should be far from satisfied with either approach --- Bayesian methods make stronger assumptions than may be warranted, and frequentist methods provide little in the
way of a coherent framework for constructing models, and ask for worst-case guarantees, which probably cannot be obtained in general. We should seek to develop a statistical modeling framework that,
unlike Bayes, can deal with unknown priors, adversaries, and limited computational resources.
4. Acknowledgements
Thanks to Emma Pierson, Vladimir Slepnev, and Wei Dai for reading preliminary versions of this work and providing many helpful comments.
5. References
[1] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta algorithm and applications. Working Paper, 2005.
[2] Christopher J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121--167, 1998.
[3] Emmanuel J. Candes. Compressive sampling. In Proceedings of the International Congress of Mathematicians. European Mathematical Society, 2006.
[4] D.M. Chickering. Learning Bayesian networks is NP-complete. In Lecture Notes in Statistics, New York: Springer-Verlag, pages 121--130, 1996.
[5] Paul Christiano, Jonathan A. Kelner, Aleksander Madry, Daniel Spielman, and Shang-Hua Teng. Electrical flows, laplacian systems, and faster approximation of maximum flow in undirected graphs. In
Proceedings of the 43rd ACM Symposium on Theory of Computing, 2011.
[6] Andrew Critch. Frequentist vs. bayesian breakdown: Interpretation vs. inference. http://lesswrong.com/lw/7ck/frequentist_vs_bayesian_breakdown_interpretation/.
[7] Yoav Freund and Robert E. Schapire. A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence, 14(5):771--780, Sep. 1999.
[8] J. Gasthaus and Y.W. Teh. Improvements to the sequence memoizer. In Advances in Neural Information Processing Systems, 2011.
[9] Andrew Gelman. Induction and deduction in Bayesian data analysis. RMM, 2:67--78, 2011.
[10] Michael I. Jordan. Are you a Bayesian or a frequentist? Machine Learning Summer School 2009 (video lecture at http://videolectures.net/mlss09uk_jordan_bfway/).
[11] D. Warner North. A tutorial introduction to decision theory. IEEE Transactions on Systems Science and Cybernetics, SSC-4(3):200--210, Sep. 1968.
[12] Igal Sason. On refined versions of the Azuma-Hoeffding inequality with applications in information theory. CoRR, abs/1111.1977, 2011.
[13] Jacob Steinhardt and Zoubin Ghahramani. Pathological properties of deep Bayesian hierarchies. In NIPS Workshop on Bayesian Nonparametrics, 2011. Extended abstract.
[14] Y.W. Teh. A Bayesian interpretation of interpolated Kneser-Ney. Technical Report TRA2/06, School of Computing, NUS, 2006.
[15] Y.W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Coling/ACL, 2006.
[16] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38(1):49--95, Mar. 1996.
[17] F. Wood, C. Archambeau, J. Gasthaus, L. James, and Y.W. Teh. A stochastic memoizer for sequence data. In Proceedings of the 26th International Conference on Machine Learning, pages 1129--1136, 2009.
Then you might think you could have inconsistent betting prices that would harm the person you bet with, but not you, which sounds fine.
Rather: "If your betting prices don't obey the laws of probability theory, then you will either accept combinations of bets that are sure losses, or pass up combinations of bets that are sure gains."
Well, Cox's Theorem still assumes that you're representing belief-strengths with real numbers in the first place. Really you should go back to Savage's Theorem... :)
I've tried to do something similar with odds once, but the assumption about (AB|C) = F[(A|C), (B|AC)] made me give up.
Indeed, one can calculate O(AB|C) given O(A|C) and O(B|AC) but the formula isn't pretty. I've tried to derive that function but failed. It was not until I appealed to the fact that O(A)=P(A)/(1-P(A))
that I managed to infer this unnatural equation about O(AB|C), O(A|C) and O(B|AC).
And this use of classical probabilities, of course, completely defeats the point of getting classical probabilities from the odds via Cox's Theorem!
Did I miss something?
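For reference, routing through probabilities as the comment describes gives the closed form O(AB|C) = O(A|C) · O(B|AC) / (1 + O(A|C) + O(B|AC)); the identity can be checked numerically:

```python
def p_to_o(p):
    """Convert a probability to odds: o = p / (1 - p)."""
    return p / (1 - p)

def odds_product(o_a, o_b):
    """O(AB|C) from O(A|C) and O(B|AC), via p = o / (1 + o) and
    P(AB|C) = P(A|C) * P(B|AC)."""
    return o_a * o_b / (1 + o_a + o_b)

# Check against the probability route on a small grid of values.
for p_a in (0.1, 0.5, 0.9):
    for p_b in (0.2, 0.7):
        direct = odds_product(p_to_o(p_a), p_to_o(p_b))
        via_prob = p_to_o(p_a * p_b)
        assert abs(direct - via_prob) < 1e-9
```

Note the denominator 1 + O(A|C) + O(B|AC) is exactly the "unnatural" part: unlike probabilities, odds do not simply multiply under conjunction.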
By the way, are there some other interesting natural rules of inference besides odds and log odds which are isomorphic to the rules of probability theory? (Judea Pearl mentioned something about MYCIN
certainty factor, but I was unable to find any details)
EDIT: You can view the CF combination rules here, but I find it very difficult to digest. Also, what about initial assignment of certainty?
EDIT2: Nevermind, I found an adequate summary ( http://www.idi.ntnu.no/~ksys/NOTES/CF-model.html ) of the model and pdf ( http://uai.sis.pitt.edu/papers/85/p9-heckerman.pdf ) about probabilistic
interpretations of CF. It seems to be an interesting example of not-obviously-Bayesian system of inference, but it's not exactly an example you would give to illustrate the point of Cox's theorem.
I predict that you'll probably answer my question in the later essay since my position hinges, crucially, on whether Bayesian epistemology is correct, but do you see anything that you disagree
with here?
Nope, everything you said looks good! I actually like the interpretation you gave:
However, I might frame it differently: both Bayesian statistics and frequentist statistics are useful only insofar as they approximate the true Bayesian epistemology.
I don't actually intend to take a position on whether Bayesian epistemology is correct; I merely plan to talk about implications and relationships between different interpretations of probability and
let people decide for themselves which to prefer, if any. Although if I had to take a position, it would be something like, "Bayes is more correct than frequentist but frequentist ideas can provide
insight into patching some of the holes in Bayesian epistemology". For instance, I think UDT is a very frequentist thing to do.
A nice middle-ground between purely Bayesian and purely frequentist methods is to use a Bayesian model coupled with frequentist model-checking techniques; this gives us the freedom in modeling
afforded by a prior but also gives us some degree of confidence that our model is correct. This approach is suggested by both Gelman [9] and Jordan [10].
Just to pile on a little bit here: A Bayesian might argue that uncertainty about which model you're using is just uncertainty, so put a prior on the space of possible models and do the Bayesian
update. This can be an effective method, but it doesn't entirely get rid of the problem - now you're modeling the structure of your uncertainty about models in a particular way, and that higher level
model could be wrong. You're also probably excluding some plausible possible models, but I'll sidestep that issue for now. The Bayesian might argue that this case is analogous to the previous - just
model your (model of the structure of uncertainty about models) - put a prior on that space too. But eventually this must stop with a finite number of levels of uncertainty, and there's no guarantee
that 1) your model is anywhere near the true model (i.e. the actual structure of your uncertainty) or 2) you'll be able to get answers out of the mess you've created.
On the other hand, frequentist model checking techniques can give you a pretty solid idea of how well the model is capturing the data. If one model doesn't seem to be working, try another instead!
Now a Bayesian might complain that this is "using the data twice" which isn't justified by probability theory, and they would be right. However, you don't get points for acting like a Bayesian, you
get points for giving the same answer as a Bayesian. What the Bayesian in this example should be worried about is whether the model chosen at the end ultimately gives answers that are close to what a
true Bayesian would give. Intuitively, I think this is the case - if a model doesn't seem to fit the data by some frequentist model checking method, e.g. a goodness of fit test, then it's likely that
if you could actually write down the posterior probability that the particular model you chose is true (i.e. it's the true structure of your uncertainty), that probability would be small, modulo a
high degree of prior certainty that the model was true. But I'm willing to be proven wrong on this.
Trivia: "fairs well against" should be fares.
As Andrew Critch [6] insightfully points out, the Bayesians vs. frequentists debate is really three debates at once, centering around one or more of the following arguments:...
I think there's a much more important and fundamental debate you're missing in your taxonomy, and one of the wellsprings of LW criticism: the sub-category of frequentist techniques called null
hypothesis testing. There are legitimate & powerful frequentist criticisms of NHST, and these are accepted and echoed as major arguments by many who are otherwise on an opposite side for one of those
other debates.
For my part, I'm sure that NHST is badly misleading and wrong, but I'm not so sure that I can tar all the other frequentist techniques with the same brush.
Yup, exactly. It seems possible to me that you can get around this within the frequentist framework, but most likely it's the case that you need to at least use Bayesian ideas somewhere to get an AI
to work at all.
I plan to write up a sketch of a possible FAI architecture based on some of the ideas paulfchristiano has been developing; hopefully that will clarify some of these points.
No worries -- good luck with your submission! :-)
Did this get written?
These theorems, however, ignore the issue of computation --- while the best decision procedure may be Bayesian, the best computationally-efficient decision procedure could easily be non-Bayesian.
This raises another important point against Bayes, which is that the proper Bayesian interpretation may be very mathematically complex.
if we are trying to build a software package that should be widely deployable, we might want to use a frequentist method because users can be sure that the software will work as long as some
number of easily-checkable assumptions are met.
I think these are the strongest reasons you've raised that we might want to deviate from pure Bayesianism in practice. We usually think of these (computation and understandability-by-humans) as
irritating side issues, to be glossed over and mostly considered after we've made our decision about which algorithm to use. But in practice they often dominate all other considerations, so it would
be nice to find a way to rigorously integrate these two desiderata with the others that underpin Bayesianism.
Keep religion out of math, please.
I agree with this. That was supposed to be the point of the post.
Randomization is conjectured not to help in the sense that people think P = BPP.
Even if P = BPP, randomization still probably helps; P = BPP just means that randomization doesn't help so much that it separates polynomial from non-polynomial.
Every biology paper released based on a 5% P-value threshold without regard to the underlying plausibility of the connection. There are many effects where I wouldn't take a 0.1% P-value to mean
anything (see: kerfuffle over superluminal neutrinos), and some where I'd take a 10% P-value as a weak but notable degree of confirmation.
"Area of app" depends on granularity: "analysis of running time" (e.g. "how long will this take, I haven't got all day") is an area of app, but if we are willing to drill in we can talk about
distributions on input vs worst case as separate areas of app. I don't really see a qualitative difference here: sometimes F is more appropriate, sometimes not. It really depends on how much we know
about the problem and how paranoid we are being. Just as with algorithms -- sometimes input distributions are reasonable, sometimes not.
Or if we are being theoretical statisticians, our intended target for techniques we are developing. I am not sympathetic to "but the unwashed masses don't really understand, therefore" kind of
arguments. Math techniques don't care, it's best to use what's appropriate.
edit: in fact, let the utility function u(.) be the running time of an algorithm A, and the prior over theta the input distribution for algorithm A inputs. Now consider what the expectation for F vs
the expectation for B is computing. This is a degenerate statistical problem, of course, but this isn't even an analogy, it's an isomorphism.
The section on B stat is fairly funny.
No doubt about it, Larry Wasserman* is a smart guy. Unfortunately, that section isn't his finest work. The normal prior example compares apples and oranges as discussed here, and the normalizing
constant paradox analysis is just wrong, as LW himself discusses here.
* I'm just a teeny bit jealous that his initials are "LW". How awesome would that be?
Ah, sorry for misunderstanding and going off on a tangent.
I see.
Another answer is the complete class theorem, which shows that any non-Bayesian decision procedure is strictly dominated by a Bayesian decision procedure --- meaning that the Bayesian procedure
performs at least as well as the non-Bayesian procedure in all cases with certainty.
I don't understand the connection to the earlier claims about minimizing worst-case performance. To strictly dominate, doesn't this imply that the Bayesian algorithm does as well or better on the
worst-case input? In which case, how does frequentism ever differ? Surely the complete class theorem doesn't show that all frequentist approaches are just a Bayesian approach in disguise?
I don't understand why Bayesians are presented as going for expected value while Frequentists are going for worst-case. These seem kind of orthogonal issues.
Support vector machines [2], which try to pick separating hyperplanes that minimize generalization error, are one example of this where the algorithm is explicitly trying to maximize worst-case
Could you expand on this a little? I've always thought of SVMs as minimizing an expected loss (the sum over hinge losses) rather than any best-worst-case approach. Are you referring to the "max min"
in the dual QP? I'm interested in other interpretations...
Finally, Bayes has no good way of dealing with […] with cases where the data was generated in a complicated way that could make it highly biased […].
Wait, what? If we don't know about the possibility of bias, we're doomed anyway, are we not? If we do know about it, then we just have to adjust our prior, right? Or is this again about the
intractability of true Bayesian computation in complicated cases?
I'm referring to using an approximation in order to guarantee performance. E.g. replacing the sum of a bunch of independent, well-behaved random variables with a gaussian, and using monte-carlo
methods to get approximate properties of the individual random variables with known resources if necessary.
"Guaranteed performance" typically cashes out as "replace the value of an action with the probability that its outcome is better than L, then pick the best" whereas "optimizing for the worst case"
typically cashes out as "replace the value of an action with the value of its worst outcome, then pick the best."
The latter is often referred to as "robustness" and the former as "partial robustness," and which one is applicable depends on the situation. Generally, the latter is used in problems with severe
probabilistic uncertainty, whereas the former needs some probabilistic certainty.
Suppose that there are two possible policies A and B, and in the worst case A gives utility 1 and B gives utility 2, but for the specific problem we care about we require a utility of 3. Then an
algorithm that optimizes for the worst case will choose B. On the other hand, there is no algorithm (that only chooses between policies A and B) that can guarantee a utility of 3. If you absolutely
need a utility of 3 then you'd better come up with a new policy C, or find an additional valid assumption that you can make. The subtlety here is that "optimizing for the worst case" implicitly means
"with respect to the current set of assumptions I have encoded into my algorithm, which is probably a subset of the full set of assumptions that I as a human make about the world".
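The choice described here is easy to make concrete (the non-worst-case utilities below are hypothetical filler, since only the worst cases are given):

```python
# Maximin ("optimize for the worst case") over the example's two policies:
# worst case 1 for A, worst case 2 for B; the other entries are invented.
policies = {"A": [1, 4], "B": [2, 3]}

def maximin_choice(policies):
    """Pick the policy whose worst outcome is best."""
    return max(policies, key=lambda p: min(policies[p]))

choice = maximin_choice(policies)
print(choice)                      # 'B'
print(min(policies[choice]) >= 3)  # False: neither policy can guarantee utility 3
```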
The notion of guaranteed performance is important because it tells you when you need to do more work and design a better algorithm (for instance, by finding additional regularities of the environment
that you can exploit).
I haven't read this in detail but one very quick comment: Cox's Theorem is a representation theorem showing that coherent belief states yield classical probabilities, it's not the same as the
dutch-book theorem at all. E.g. if you want to represent probabilities using log odds, they can certainly relate to each other coherently (since they're just transforms of classical probabilities), but
Cox's Theorem will give you the classical probabilities right back out again. Jaynes cites a special case of Cox in PT:TLOS which is constructive at the price of assuming probabilities are twice
differentiable, and I actually tried it with log odds and got the classical probabilities right back out - I remember being pretty impressed with that, and had this enlightenment experience wherein I
went to seeing probability theory as a kind of relational structure in uncertainty.
I also quickly note that the worst-case scenario often amounts to making unfair assumptions about "randomization" wherein adversaries can always read the code of deterministic agents but
non-deterministic agents have access to hidden sources of random numbers. E.g. http://lesswrong.com/lw/vq/the_weighted_majority_algorithm/
Good catch on Cox's theorem; that is now fixed. Do you know if the dutch book argument corresponds to a named theorem?
I'm not sure exactly how your comment about deterministic vs. non-deterministic agents is meant to apply to the arguments I've advanced here (although I suppose you will clarify after you're done
Separately, I disagree that the assumptions are unfair; I think of it as a particularly crisp abstraction of the actual situation you care about. As long as pseudo-random generators exist and you can
hide your source of randomness, you can guarantee that no adversary can predict your random bits; if you could usefully make the same guarantee about other aspects of your actions without recourse to
a PRG then I would happily incorporate that into the set of assumptions, but in practice it is easiest to just work in terms of a private source of randomness. Besides, I think that the use of this
formalism has been amply validated by its intellectual fruits (see the cited network flow application as one example, or the Arora, Hazan, and Kale reference).
Good catch on Cox's theorem; that is now fixed. Do you know if the dutch book argument corresponds to a named theorem?
There is a whole class of dutch book arguments, so I'm not sure which one you mean by the dutch book argument.
In any case, Susan Vineberg's formulation of the Dutch Book Theorem goes like this:
Given a set of betting quotients that fails to satisfy the probability axioms, there is a set of bets with those quotients that guarantees a net loss to one side.
Yes, that is the one I had in mind. Thanks!
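A toy numerical instance of that theorem (the quotients are invented for illustration): with betting quotients q(A) = q(not-A) = 0.6, which sum to more than 1, a bookmaker selling both unit-stake bets leaves the buyer with a sure loss.

```python
# Incoherent quotients: the prices of a bet on A and a bet on not-A sum to 1.2.
quotients = {"A": 0.6, "not_A": 0.6}
stakes = {"A": 1.0, "not_A": 1.0}  # the buyer takes a unit-stake bet on each event

def net_for_buyer(outcome):
    """Pay quotient * stake up front per bet; the bet on the true outcome pays its stake."""
    cost = sum(quotients[e] * stakes[e] for e in quotients)
    return stakes[outcome] - cost

for outcome in ("A", "not_A"):
    print(outcome, net_for_buyer(outcome))  # roughly -0.2 either way: a Dutch book
```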
When the assumptions of Bayes' Theorem hold, and when Bayesian updating can be performed computationally efficiently, then it is indeed tautological that Bayes is the optimal approach. Even when
some of these assumptions fail, Bayes can still be a fruitful approach. However, by working under weaker (sometimes even adversarial) assumptions, frequentist approaches can perform well in very
complicated domains even with fairly simple models; this is because, with fewer assumptions being made at the outset, less work has to be done to ensure that those assumptions are met.
I've only skimmed this for now (will read soon), but I wanted to point out that I completely agree with this conclusion (without reading the arguments in detail). However, I might frame it
differently: both Bayesian statistics and frequentist statistics are useful only insofar as they approximate the true Bayesian epistemology. In other words, if you know the prior and know the
likelihood, then performing the Bayesian update will give P(A|data) where A is any question you're interested in. However, since we usually don't know our prior or likelihood, there's no guarantee
that Bayesian statistics -- which amounts to doing the Bayesian update on the wrong model of the actual structure of our uncertainty (i.e. our actual prior + likelihood) -- will closely approximate
Bayesian epistemology. So, of course we should consider other methods that, while superficially don't look like the true Bayesian update, may do a better job of approximating the answer we want.
Computational difficulty is a separate reason why we might have to approximate Bayesian epistemology even if we can write down the prior + likelihood and that, once again, might entail using methods
that don't look "Bayesian" in any way.
If you recall, I briefly made this argument to you at the July minicamp, but you didn't seem to find it persuasive. I'll note now that I'm simply not talking about decision theory. So, e.g., when you
It follows, then, that Bayes is the superior method whenever we can obtain a good prior and when good average-case performance is sufficient.
I'm not taking a position on whether we need to consider whether we need average-case performance to be sufficient in order for using Bayesian statistics to be the best or a good option (I have
intuitions going both directions, but nothing fleshed out).
I predict that you'll probably answer my question in the later essay since my position hinges, crucially, on whether Bayesian epistemology is correct, but do you see anything that you disagree with
Bayesian methods make stronger assumptions than may be warranted, and frequentist methods provide little in the way of a coherent framework for constructing models, and ask for worst-case
guarantees, which probably cannot be obtained in general.
As a non-expert in the area, I find that this implies that "Bayesian methods" are unsuitable for FAI research, as preventing UFAI requires worst-case guarantees coupled with the assumption that AI
can read your source code. This must be wrong, otherwise EY would not trumpet Bayesianism so much. What did I miss?
If I apply Frequentist Decision Theory As Described By Jsteinhardt (FDTADBJ) to a real-world decision problem, where θ ranges over all possible worlds (as opposed to a standard science paper where θ
ranges over only a few parameters of some restricted model space), then the worst case isn't "we need to avoid UFAI", it's "UFAI wins and there's nothing we can do about it". Since there is at least
one possible world where all actions have an expected utility of "rocks fall, everyone dies", that's the only possible world that affects worst case utility. So FDTADBJ says there's no point in even
trying to optimize for the case where it's possible to survive.
For those who really want to discuss interpretations of probability, I will address that in a later essay.
I am still waiting with bated breath. :-)
Sorry about that! I'm really behind on writing right now, and probably will be for at least the next month as I work on a submission to NIPS (http://nips.cc/). I have a few other writing things to
catch up on before this, but still hope to get to it eventually.
In fact, now that the year is 2012 the majority of new graduate students are being raised as Bayesians (at least in the U.S.) with frequentists thought of as stodgy emeritus professors stuck in
their ways.
Is this actually true? Where would one get numbers on such a thing?
No, it's not true. This whole F vs B thing is such a false choice too. Does it make sense in computational complexity to have a holy war between average case and worst case analysis of algorithm
running time? Maybe for people who go on holy wars as a hobby, but not as a serious thing.
Does it make sense in computational complexity to have a holy war between average case and worst case analysis of algorithm running time?
Er, yes?
I don't understand why this was linked as a response at all. Randomization is conjectured not to help in the sense that people think P = BPP. But there are cases where randomization does strictly
help (wikipedia has a partial list: http://en.wikipedia.org/wiki/Randomized_algorithm).
My point was about sociology. Complexity theorists are not bashing each other's heads in over whether worst case or average case analysis is "better," they are proving theorems relating the
approaches, with the understanding that in some algorithm analysis applications, it makes sense to take the "adversary view," for example in real time systems that need strict guarantees. In other
applications, typical running time is a more useful quantity. Nobody calls worst case analysis an apostate technique. Maybe that's a good example to follow. Keep religion out of math, please.
Your analogy is imprecise. Average case and worst case analyses are both useful in their own right, and deal with different phenomena; F and B claim to deal with the same phenomena, but F is usually
more vague about what assumptions its techniques follow from.
A more apt analogy, in my opinion, would be between interpretations of QM. All of them claim to deal with the same phenomena, but some interpretations are more vague about the precise mechanism than others.
Why do you think F is more vague than B? I don't think that's true. LW folks (up to and including EY) are generally a lot more vague and imprecise when talking about statistics than professional
statisticians using F for whatever reason. But still seem to have strong opinions about B over F. It's kinda culty, to be honest.
Here's a book by a smart F:
The section on B stat is fairly funny.
F techniques tend to make assumptions that are equivalent to establishing prior distributions, but because it's easy to forget about these assumptions, many people use F techniques without
considering what the assumptions mean. If you are explicit about establishing priors, however, this mostly evaporates.
Notice that the point about your analogy was regarding area of application, not relative vagueness.
I don't have a strong personal opinion about F/B. This is just based on informal observations about F techniques versus B techniques.
many people use F techniques without considering what the assumptions mean
Can you name three examples of this happening?
I could, but I doubt anything would come of it. Forget about the off-hand vagueness remark; the analogy still fails.
Data point: One of our Montreal LW meetup members showed us a picture and description pulled from his Bayes stats/analysis class, and the picture shows kiosks with the hippy bayes person and the
straight-suited old-and-set-in-his-ways corporate clone, along with the general idea that frequentist thinking is good for long-term verification and reliability tests, but that people who promote
frequentism over bayes when both are just as good are Doing Something Wrong (AKA sneer at the other tribe).
I don't think anyone needs anecdotes that Bayesian approaches are more popular than ever before or are a bona fide approach; I'm interested in the precise claim that now a majority of grad students
identify as Bayesians. That is the interest.
I don't have precise numbers but this is my experience after having worked with ML groups at Cambridge, MIT, and Stanford. The next most common thing after Bayesians would be neural nets people if I
had to guess (I don't know what you want to label those as). Note that as a Bayesian-leaning person I may have a biased sample.
I suspect Berkeley might be more frequentist but am unsure.
Some random thoughts:
In order to fix "Bayesian Decision Theory" so that it works in multiplayer games, have it search through strategies for the one that leads to maximum utility, rather than just going action by action.
I guess this may be a non-mainstream thing?
Bayes is the superior method whenever we can obtain a good prior and when good average-case performance is sufficient. However, if we have no way of obtaining a good prior, or when we need
guaranteed performance, frequentist methods are the way to go.
If you "need guaranteed performance," just include that information in the utility function.
the Bayesians vs. frequentists debate is really three debates at once, centering around one or more of the following arguments:
Whether to interpret subjective beliefs as probabilities
Whether to interpret probabilities as subjective beliefs (as opposed to asymptotic frequencies)
Whether a Bayesian or frequentist algorithm is better suited to solving a particular problem.
Why are these arguments so commonly conflated?
Given the rest of your article, it looks like "conflated" could be replaced by "correlated" here. Calling the relationship between ideals and algorithms "conflation" already judges the issue a bit :)
In order to fix "Bayesian Decision Theory" so that it works in multiplayer games, have it search through strategies for the one that leads to maximum utility, rather than just going action by
action. I guess this may be a non-mainstream thing?
Nope, that's definitely a standard thing to do. It's what I was referring to when I said:
We can incorporate notions like planning and value of information by defining U(θ; a) recursively in terms of an identical agent to ourselves who has seen one additional observation (or, if we
are planning against an adversary, in terms of the adversary).
However, this doesn't actually work that well, since recursively modeling other agents is expensive, and if the adversary is more complicated than our model can capture, we will do poorly. It is
often much better to just not assume that we have a model of the adversary in the first place.
If you "need guaranteed performance," just include that information in the utility function.
There is a difference between "guaranteed performance" and "optimizing for the worst case". Guaranteed performance means that we can be confident, before the algorithm gets run, that it will hit some
performance threshold. I don't see how you can do that with a Bayesian method, except by performing a frequentist analysis on it.
Given the rest of your article, it looks like "conflated" could be replaced by "correlated" here. Calling the relationship between ideals and algorithms "conflation" already judges the issue a
bit :)
When terminology fails to carve reality at its joints then I think it is fair to refer to it as conflation. If it was indeed the case that ideals mapped one-to-one onto algorithms then I would
reconsider my word choice.
There is a difference between "guaranteed performance" and "optimizing for the worst case". Guaranteed performance means that we can be confident, before the algorithm gets run, that it will hit
some performance threshold.
Ah, okay. Whoops.
I don't see how you can do that with a Bayesian method, except by performing a frequentist analysis on it.
How about a deliberate approximation to an ideal use of the evidence? Or do any approximations with limited ranges of validity (i.e. all approximations) count as "frequentist"? Though then we might
have to divide computer-programming frequentists into "bayesian frequentists" and "frequentist frequentists" depending on whether they made approximations or applied a toolbox of methods.
How about a deliberate approximation to an ideal use of the evidence?
I'm confused by what you are suggesting here. Even a Bayesian method making no approximations at all doesn't necessarily have guaranteed performance (see my response to Oscar_Cunningham).
There is a difference between "guaranteed performance" and "optimizing for the worst case".
I'm not sure what the difference between these two is, could you spell it out for me? | {"url":"http://www.lesswrong.com/posts/o32tEFf5zBiByL2xv/beyond-bayesians-and-frequentists","timestamp":"2024-11-09T10:57:14Z","content_type":"text/html","content_length":"1049168","record_id":"<urn:uuid:b42eaddf-2ec0-45b5-8ff5-17b3616d9ac7>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00358.warc.gz"} |
ProD: Variations and Searching
An example of ProD
The dropdown menu provides examples derived from various sources: Weblicht, Penn Treebank, Stanford parser, and PAISÀ.
The branches can be drawn as curves, diagonal lines, or horizontal and vertical lines. The diagrams can be laid out as pure dendrograms, pure trees, or a hybrid of the two. Clicking on a node will
toggle whether its descendants are displayed or collapsed.
You can search for nodes in the diagrams. Node search terms are regular expressions, so NP will match any node containing "NP", while ^NP$ will match only "NP" nodes. You can also search for paths in
the diagram, by specifying a (top down) sequence of nodes, separated by spaces. A * can be used within a sequence as a wildcard. If there is more than one path, you can highlight them successively by
using the left and right arrows.
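The matching behaviour described above can be checked with any regular-expression engine; for instance, in Python, whose regex semantics agree with the description here (the node labels are just examples):

```python
import re

labels = ["NP", "NP-SBJ", "WHNP", "VP"]

# An unanchored pattern matches any label containing it...
print([l for l in labels if re.search("NP", l)])    # ['NP', 'NP-SBJ', 'WHNP']
# ...while anchoring with ^ and $ requires the whole label to match.
print([l for l in labels if re.search("^NP$", l)])  # ['NP']
```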
| {"url":"http://linguistics.chrisculy.net/lx/software/ProD/more_samples/variations_searching.html","timestamp":"2024-11-03T04:25:57Z","content_type":"text/html","content_length":"21910","record_id":"<urn:uuid:8e8df6bc-f1ac-43ea-a318-1621442402a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00650.warc.gz"}
Which of the following lenses would you prefer while reading small letters in a dictionary? (a) A convex lens with focal length of 50 cm (b) A concave lens with focal length of 50 cm (c) A convex lens with focal length of 5 cm (d) A concave lens with focal length of 5 cm
Which of the following lenses would you prefer while reading small letters in a dictionary?
(a) A convex lens with focal length of 50 cm
(b) A concave lens with focal length of 50 cm
(c) A convex lens with focal length of 5 cm
(d) A concave lens with focal length of 5 cm | {"url":"https://www.educart.co/ncert-solutions/which-of-the-following-lenses-would-you-prefer-while-reading-small-letters-in-a-dictionary","timestamp":"2024-11-06T08:55:56Z","content_type":"text/html","content_length":"201545","record_id":"<urn:uuid:19c752b2-bba7-41a2-ac1f-68d4dbbabbba>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00399.warc.gz"} |
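The intended answer is (c). A quick check, assuming the conventional least distance of distinct vision D = 25 cm and the simple-magnifier relation m = 1 + D/f (neither is stated in the question), shows why the shorter focal length is preferred:

```python
D = 25.0  # cm, assumed near-point distance

def magnification(f_cm):
    """Angular magnification of a simple magnifier with the image at the near point."""
    return 1.0 + D / f_cm

print(magnification(5.0))   # 6.0  (option c: convex, f = 5 cm)
print(magnification(50.0))  # 1.5  (option a: convex, f = 50 cm)
# Concave lenses (options b and d) form diminished virtual images,
# so they cannot be used to magnify small print.
```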
Too many zeros and/or highly skewed? A tutorial on modelling health behaviour as count data with Poisson and negative binomial regression
Background: Dependent variables in health psychology are often counts, for example, of a behaviour or number of engagements with an intervention. These counts can be very strongly skewed, and/or
contain large numbers of zeros as well as extreme outliers. For example, ‘How many cigarettes do you smoke on an average day?’ The modal answer may be zero but may range from 0 to 40+. The same can
be true for minutes of moderate-to-vigorous physical activity. For some people, this may be near zero, but take on extreme values for someone training for a marathon. Typical analytical strategies
for this data involve explicit (or implied) transformations (smoker v. non-smoker, log transformations). However, these data types are ‘counts’ (i.e. non-negative whole numbers) or quasi-counts (time
is ratio but discrete minutes of activity could be analysed as a count), and can be modelled using count distributions–including the Poisson and negative binomial distribution (and their
zero-inflated and hurdle extensions, which alloweven more zeros). Methods: In this tutorial paper I demonstrate (in R, Jamovi, and SPSS) the easy application of these models to health psychology
data, and their advantages over alternative ways of analysing this type of data using two datasets–one highly dispersed dependent variable (number of views on YouTube, and another with a large number
of zeros (number of days on which symptoms were reported over a month). Results: The negative binomial distribution had the best fit for the overdispersed number of views on YouTube. Negative
binomial, and zero-inflated negative binomial were both good fits for the symptom data with over-abundant zeros. Conclusions: In both cases, count distributions provided not just a better fit but
would lead to different conclusions compared to the poorly fitting traditional regression/linear models.
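As a rough illustration of the overdispersion problem the abstract describes (this sketch is mine, not the paper's R/Jamovi/SPSS code): negative binomial counts can be simulated as a gamma-Poisson mixture, and their variance then far exceeds their mean, which is exactly what a plain Poisson model cannot accommodate.

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
# Gamma-Poisson mixture with shape 1 and scale 5: E[count] = 5, while
# Var[count] = mean + mean**2 / shape = 30, so the dispersion ratio is ~6.
counts = [poisson_sample(rng.gammavariate(1.0, 5.0), rng) for _ in range(5000)]

mean = statistics.fmean(counts)
var = statistics.pvariance(counts)
print(round(mean, 1), round(var / mean, 1))
assert var / mean > 2  # a Poisson model would require this ratio to be near 1
```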
• Count data
• negative binomial regression
• Poisson regression
• skewed data
• tutorial
‘Too many zeros and/or highly skewed? A tutorial on modelling health behaviour as count data with Poisson and negative binomial regression’
Cryptography Use
We use encryption every day, so it is worth having a basic understanding of what the different parts do, especially as they relate to public-key cryptography. Public-key cryptography, sometimes called public-key encryption, uses two cryptographic keys: a public key and a private key. It is what makes TLS/SSL possible. Encryption algorithms help to protect sensitive information from unauthorised access. (Is obfuscation good in cryptography? Some developers rely on two-key XOR encryption, hashing techniques such as SHA-1 applied to the keys, and similar simple tricks; these are obfuscation rather than real security.)
Public-key cryptography, or asymmetric cryptography, is the field of cryptographic systems that use pairs of related keys. Each key pair consists of a public key and a private key.
The most common and probably most easily understood use of cryptography is the symmetric cipher. A symmetric encryption algorithm uses a key to alter data so that it is scrambled. More generally, cryptography is the study of securing communications from outside observers: most cryptographic algorithms that guarantee confidentiality work by having Alice use a key to encrypt a message, the plaintext, changing it into a scrambled form, the ciphertext. Cryptography is a foundational technology that enables secure communication and data protection in the digital world; for example, it is used to build the encryption protocols that are regularly used to protect data. Symmetric-key cryptographic algorithms use the same cryptographic key for both the encryption of the plaintext and the decryption of the ciphertext. Beyond confidentiality, cryptography is now also used for data integrity, entity authentication, data-origin authentication, and non-repudiation.
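To make the symmetric-key idea concrete, here is a deliberately toy Python sketch: a repeating-key XOR "cipher". It shows the defining property of symmetric schemes (one shared key both encrypts and decrypts), while also being exactly the kind of obfuscation warned about above; repeating-key XOR is trivially breakable and must never be used for real security.

```python
# Toy symmetric "cipher": XOR each byte with a repeating key.
# Illustrative only -- repeating-key XOR offers no real security.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
key = b"secret"

ciphertext = xor_cipher(plaintext, key)
recovered = xor_cipher(ciphertext, key)
print(ciphertext.hex())
```

Because XOR is its own inverse, applying the same function twice with the same key returns the plaintext, which is the "one shared key" property in miniature.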
Cryptography is commonly used with authentication: we spoke earlier about hashing passwords so that we can store them on a system for comparison. Cryptography gives secure communication in the presence of malicious third parties, known as adversaries. There are two main types of encryption in use today, symmetric cryptography and asymmetric cryptography, and both use keys to encrypt and decrypt the data sent. Bitcoin (as well as Ethereum and many other cryptocurrencies) uses public-private key encryption, which allows them to be “trustless”. NIST has released its first three finalized post-quantum encryption standards; cryptography uses mathematical techniques to transform data and prevent it from being read or tampered with. Do you use cryptography? Yes, for sure, every day: if you are accessing the internet, watching pay TV, using credit cards, or talking with your buddies on WhatsApp, you are using it. Cryptography is widely used on the internet to help protect user data and prevent eavesdropping, and more broadly it is an information security tactic used to protect enterprise information and communication from cyber threats through the use of codes. High-level cryptographic libraries are safe and easy to use and don't require developers to make many decisions; the other level is low-level cryptographic primitives, which are often much harder to use safely.
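The public/private key-pair idea mentioned above can be sketched with "textbook RSA" on tiny numbers. The primes, exponent, and message below are toy values chosen purely for illustration; real RSA uses keys thousands of bits long together with randomized padding, and nobody should hand-roll this in practice.

```python
# Textbook RSA with tiny numbers, to illustrate the key-pair idea only.
p, q = 61, 53          # two small primes (toy values)
n = p * q              # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                 # public exponent
d = pow(e, -1, phi)    # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(message, ciphertext, recovered)
```

The point of the sketch is the asymmetry: anyone holding (e, n) can encrypt, but only the holder of d can decrypt.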
To summarise: cryptography is a method of protecting information and communications using codes, so that only those for whom the information is intended can read and process it. It refers to techniques for securing information and communications between two or more parties through various kinds of codes. In symmetric-key cryptography, sender and receiver share a single key; the sender uses this key to encrypt the plaintext, and the resulting ciphertext is sent to the receiver. As a second question: could decent privacy software be developed using only basic math operations? Another classic example of military cryptography is the Bombe, the machine designed with the help of the British mathematician Alan Turing to decipher encrypted Enigma communications from Germany.
Graphing inequalities worksheets
graphing inequalities worksheets Related topics: real life examples of quadratic functions
year 6 math exam sample uk
graphing ordered pairs to make a picture worksheet
radical expressions
algebraic expressions 5th grade worksheets
state chart diagram for online examination
math prayers
animation demo of sodium added to chlorine
graphing pictures from equations
graph of quadratic functions,2
zeros of polynomials
Author Message
gockd Posted: Wednesday 25th of Nov 07:35
Well there are just two people who can help me out right now: either it has to be some math guru or it has to be God himself. I’m fed up with trying to solve problems on
graphing inequalities worksheets and some related topics such as algebraic signs and graphing equations. I have my midterms coming up in a week from now and I don’t know what
to do. Is there anyone out there who can actually spare some time and help me with my problems? Any sort of help would be highly appreciated.
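As an aside for anyone searching for what "graphing an inequality" actually involves, here is a tiny, tool-agnostic Python sketch (unrelated to any software discussed in this thread): evaluate the inequality at each grid point and mark where it holds.

```python
# Shade the solution region of y <= 2*x + 1 as ASCII art:
# '#' marks grid points satisfying the inequality, '.' marks the rest.
def shade(f, xs, ys):
    rows = []
    for y in ys:
        rows.append("".join("#" if f(x, y) else "." for x in xs))
    return "\n".join(rows)

xs = range(-5, 6)
ys = range(5, -6, -1)  # top row = largest y, as on a graph
print(shade(lambda x, y: y <= 2 * x + 1, xs, ys))
```

Each `#` marks a grid point satisfying y <= 2x + 1; the edge of the shaded region traces the boundary line y = 2x + 1.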
espinxh Posted: Thursday 26th of Nov 09:01
You don’t need to ask anyone to solve any sample questions for you; in fact all you need is Algebrator. I’ve tried many such algebra simulation programs but Algebrator is way
better than most of them. It’ll solve all the questions that you have and it’ll even explain each and every step involved in reaching that solution. You can try out as many
examples as you would like to, and unlike us human beings, it would never say, Oh! I’ve had enough for the day! ;) I used to have some problems in solving questions on powers
and algebraic signs, but this software really helped me get over those.
Dolknankey Posted: Friday 27th of Nov 21:31
It would really be great if you could tell us about a tool that can offer both. If you could get us a home tutoring software package that would give a step-by-step solution to our
problem, it would really be nice. Please let us know the genuine websites from where we can get the tool.
TtN Posted: Sunday 29th of Nov 20:46
Great! I think that’s what I am looking for. Can you tell me where to get it?
pcaDFX Posted: Tuesday 01st of Dec 07:02
You don’t have to call them up; it can be ordered online. Here’s the link: https://softmath.com/links-to-algebra.html. They even provide an unreserved money-back assurance,
which is just great!
Gog Posted: Wednesday 02nd of Dec 08:26
Algebrator is a user-friendly software package and is certainly worth a try. You will also find quite a few interesting things there. I use it as reference software for my math
problems and can swear that it has made learning math more fun.
The Best Writing on Mathematics 2010
Suggested Reading January 4, 2011
While compiling the new math releases for my site that helps you “discover new books”, I came across a brand new book by Princeton University Press, called “The Best Writing on Mathematics 2010”.
While I haven’t had the opportunity to read this book yet (it was just released yesterday), I think it looks extremely promising.
Here is a description of it directly from the publisher:
This anthology brings together the year’s finest writing on mathematics from around the world. Featuring promising new voices alongside some of the foremost names in mathematics, The Best Writing
on Mathematics makes available to a wide audience many articles not easily found anywhere else–and you don’t need to be a mathematician to enjoy them. These writings offer surprising insights
into the nature, meaning, and practice of mathematics today. They delve into the history, philosophy, teaching, and everyday occurrences of math, and take readers behind the scenes of today’s
hottest mathematical debates. Here readers will discover why Freeman Dyson thinks some mathematicians are birds while others are frogs; why Keith Devlin believes there’s more to mathematics than
proof; what Nick Paumgarten has to say about the timing patterns of New York City’s traffic lights (and why jaywalking is the most mathematically efficient way to cross Sixty-sixth Street); what
Samuel Arbesman can tell us about the epidemiology of the undead in zombie flicks; and much, much more.
In addition to presenting the year’s most memorable writing on mathematics, this must-have anthology also includes a foreword by esteemed mathematician William Thurston and an informative
introduction by Mircea Pitici. This book belongs on the shelf of anyone interested in where math has taken us–and where it’s headed.
I’m eagerly waiting for my copy to arrive, as it sounds like the kind of book that can provide readers with an overview of the direction in which some of the most exciting and cutting-edge
mathematical research is currently headed.
If you’d like to pick up your own copy to start the year off with some advanced math reading, this title is now available and in stock on Amazon.
Tag: ginibre
Tags → #ginibre
• “We derive a precise asymptotic formula for the density of the small singular values of the real Ginibre matrix ensemble shifted by a complex parameter. In particular we prove that away from the
real axis real and complex Ginibre matrices have the same local statistics.” | {"url":"https://dominik.page/tags/ginibre/","timestamp":"2024-11-07T13:00:11Z","content_type":"text/html","content_length":"14025","record_id":"<urn:uuid:380331b6-3358-4da1-a932-c7e9c45d992e>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00566.warc.gz"} |
docs/Performance.rst - llvm-project/polly - Git at Google
.. include:: <isonum.txt>
High-Performance Generalized Matrix Multiplication
==================================================
Polly automatically detects and optimizes generalized matrix multiplication,
the computation C |larr| α ⊗ C ⊕ β ⊗ A ⊗ B, where A, B, and C are three appropriately sized matrices,
the ⊕ and ⊗ operations originate from the corresponding matrix semiring, and α and β are
constants with β not equal to zero. This allows Polly to obtain a highly optimized form, structured
similarly to the expert implementations of GEMM found in GotoBLAS and its successors. The
performance evaluation of GEMM is shown in the following figure.
.. image:: images/GEMM_double.png
:align: center
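For reference, the computation described above can be sketched as a naive triple loop. This is a plain Python illustration of C |larr| α ⊗ C ⊕ β ⊗ A ⊗ B over the ordinary (+, ×) semiring, not Polly's optimized output; a compiler like Polly would tile, pack, and vectorize a loop nest of this shape.

```python
# Naive generalized matrix multiplication over the (+, *) semiring:
# out = alpha * C + beta * (A @ B), written as an explicit loop nest.
def gemm(alpha, beta, A, B, C):
    n, k = len(A), len(A[0])
    m = len(B[0])
    out = [[alpha * C[i][j] for j in range(m)] for i in range(n)]
    for i in range(n):
        for p in range(k):
            for j in range(m):
                out[i][j] += beta * A[i][p] * B[p][j]
    return out

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [0, 1]]
print(gemm(1, 1, A, B, C))  # -> [[20, 22], [43, 51]]
```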
Compile Time Impact of Polly
============================
Clang+LLVM+Polly are compiled using Clang on an Intel(R) Core(TM) i7-7700 based system. The experiment
is repeated twice, with and without Polly enabled, in order to measure its compile-time impact.
The following versions are used:
- Polly (git hash 0db98a4837b6f233063307bb9184374175401922)
- Clang (git hash 3e1d04a92b51ed36163995c96c31a0e4bbb1561d)
- LLVM git hash 0265ec7ebad69a47f5c899d95295b5eb41aba68e)
`ninja <https://ninja-build.org/>`_ is used as the build system.
For both cases the whole compilation was performed five times. The compile times in seconds are shown in the following table.
==============  =============
Polly Disabled  Polly Enabled
==============  =============
964             977
964             980
967             981
967             981
968             982
==============  =============
The median compile time without Polly enabled is 967 seconds and with Polly enabled it is 981 seconds. The overhead is 1.4%.
Develop a daily time step simulation model that consists of a reservoir with a single inflow and single outflow (release)
Use units of million m3 and include any necessary parameters (e.g., capacity k) as separate adjustable variables
Implement the standard linear operating policy (SLOP).
Assume the reservoir is 500 mcm (k=500).
Develop yield-reliability results for target (T) delivery values of 1, 3, 5, and 7 mcm/day.
(The mean inflow for the time series is 34.8 m3/s, or 3.0 million m3/day.)
The standard linear operating policy provides a basic rule for reservoir release. | {"url":"https://insightmaker.com/tag/Reservoir","timestamp":"2024-11-03T00:26:59Z","content_type":"text/html","content_length":"31165","record_id":"<urn:uuid:1a5b547e-d7c0-4cc0-bf10-50c5777a3a88>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00002.warc.gz"} |
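A minimal sketch of the exercise above, in Python rather than a visual modelling tool: a daily-time-step reservoir in mcm with a single inflow and release, operated under the standard linear operating policy (release the target T when storage plus inflow allows, otherwise release whatever is available). The inflow series and the reliability metric below are illustrative assumptions, not the time series the exercise refers to.

```python
# Daily reservoir simulation under the standard linear operating policy.
# Units: million m^3 (mcm). Capacity k and target T are adjustable.
def simulate(inflows, k=500.0, target=3.0, s0=0.0):
    s = s0
    releases = []
    for q in inflows:
        avail = s + q
        r = min(avail, target)   # SLOP: deliver T, or all that is available
        s = min(avail - r, k)    # any excess above capacity k spills
        releases.append(r)
    return releases

inflows = [5.0, 0.0, 0.0, 0.0, 10.0]        # synthetic example series
rel = simulate(inflows, k=500.0, target=3.0)
reliability = sum(r >= 3.0 for r in rel) / len(rel)
print(rel, reliability)
```

Yield-reliability results then come from re-running the simulation for T = 1, 3, 5, and 7 mcm/day over the full inflow record and recording, for each T, the fraction of days on which the full target was delivered.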
Centers Of Triangles Worksheet Kuta - TraingleWorksheets.com
Center Of Triangle Worksheet – Triangles are one of the most fundamental shapes in geometry. Understanding the concept of triangles is essential for understanding more advanced geometric concepts.
In this blog we will go over the various kinds of triangles, including triangle angles and the methods for calculating the perimeter and area of a triangle, and also provide examples of each. Types of
Triangles There are three types of triangles: equilateral, isosceles, as well as … Read more
Center Of Triangles Worksheet
Center Of Triangles Worksheet – Triangles are among the most basic shapes found in geometry. Understanding triangles is vital to learning more advanced geometric concepts. In this blog post, we will
cover the different types of triangles, including triangle angles and the methods for calculating the perimeter and area of a triangle, and present examples of each. Types of Triangles There are
three types of triangles: equilateral, isosceles, and scalene. Equilateral triangles include … Read more
Chapter 5 exercise
Yuzhao Wang
Associate Professor in Mathematics
Mailing Address
School of Mathematics
University of Birmingham
Watson Building
Edgbaston, Birmingham
B15 2TT, United Kingdom
e-mail: y dot wang dot 14 [at] bham.ac.uk
Research Interest: Analysis group at UoB
Nonlinear Partial Differential Equations and Harmonic Analysis. In particular, the study of nonlinear dispersive PDEs such as nonlinear Schrödinger equations, nonlinear wave equations, and
the KdV equation by using techniques from PDEs, Harmonic Analysis, and Probability theory. Mainly, well-posedness (existence, uniqueness, and stability of solutions) in both deterministic and
probabilistic settings, existence of invariant measures, Strichartz estimates in different settings, etc. Also, interested in Fourier restriction theory and \(\ell^2\) decoupling theory.
Seminars: Analysis Seminar
UoE seminars
ICMS events
London Analysis and Probability Seminar
Paris-London Analysis Seminar
See the journal versions for the published papers. Preprints on arXiv may not be up-to-date.
1. (with V.D. Dinh, N. Rougerie, L. Tolomeo) Statistical mechanics of the radial focusing nonlinear Schrödinger equation in general traps, arXiv:2312.06232.
2. (with R. Liu, N. Tzvetkov) Existence, uniqueness, and universality of global dynamics for the fractional hyperbolic $\Phi^4_3$-model, arXiv:2311.00543.
3. (with G. Li, R. Liang) Optimal divergence rate of the focusing Gibbs measure, arXiv:2310.08783.
4. (with R. Liang) Gibbs Dynamics for the weakly dispersive nonlinear Schrödinger equations, arXiv:2306.07645, to appear in Comm. Math. Phys. (2024).
5. (with C. Ilya, T. Oh) Norm inflation for the cubic nonlinear heat equation above the scaling critical regularity, arXiv:2205.14488, to appear in Funkcialaj Ekvacioj (2024).
6. (with T. Robert, K. Seong, L. Tolomeo) Focusing Gibbs measures with harmonic potential, to appear in Annales de l'Institut Henri Poincare (2023).
7. (with Y. Zine) Norm inflation for the derivative nonlinear Schrödinger equation, arXiv:2206.08719, to appear in C. R. Math. Acad. Sci. Paris (2023).
8. (with T. Oh, L. Tolomeo, G. Zheng) Hyperbolic P(Phi)_2-model on the plane, arXiv:2211.03735.
9. (with R. Liang) Gibbs measure for the focusing fractional NLS on the torus, SIAM J. Math. Anal. 54 (2022), no. 6, 6096--6118.
10. (with T. Oh, Y. Zine) Three-dimensional stochastic cubic nonlinear wave equation with almost space-time white noise, Stoch. Partial Differ. Equ. Anal. Comput. 10 (2022), no. 3, 898--963.
11. (with T. Oh, T. Robert) On the parabolic and hyperbolic Liouville equations, Comm. Math. Phys. 387 (2021), no. 3, 1281--1351.
12. (with T. Oh, T. Robert, and P. Sosoe) Invariant Gibbs dynamics for the dynamical sine-Gordon model, Proc. Roy. Soc. Edinburgh Sect. A 151 (2021), no. 5, 1450--1466.
13. (with T. Oh, T. Robert, and P. Sosoe) On the two-dimensional hyperbolic stochastic sine-Gordon equation, Stoch. Partial Differ. Equ. Anal. Comput. 9 (2021), no. 1, 1--32.
14. (with T. Oh) On global well-posedness of the modified KdV equation in modulation spaces, Discrete Contin. Dyn. Syst. 41 (2021), no. 6, 2971--2992.
15. (with T. Oh) Normal form approach to the one-dimensional periodic cubic nonlinear Schrödinger equation in almost critical Fourier-Lebesgue spaces, J. Anal. Math. 143 (2021), no. 2,
16. (with T. Oh, T. Robert, N. Tzvetkov) Stochastic quantization of Liouville conformal field theory, arXiv:2004.04194, 77 pages.
17. (with T. Oh, N. Tzvetkov) Solving the 4NLS with white noise initial data, Forum Math. Sigma 8 (2020), Paper No. e48, 63 pp.
18. (with T. Oh) Global well-posedness of the one-dimensional cubic nonlinear Schrödinger equation in almost critical spaces, J. Differential Equations 269 (2020), no. 1, 612--640.
19. (with W. Wang) Liouville-type theorems for the stationary MHD equations in 2D, Nonlinearity 32 (2019), no. 11, 4483--4505.
20. (with O. Pocovnicu) An $L^p$-theory for almost sure local well-posedness of the nonlinear Schrödinger equations, C. R. Math. Acad. Sci. Paris 356 (2018), no. 6, 637--643.
21. (with T. Oh, O. Pocovnicu) On the stochastic nonlinear Schrödinger equations with non-smooth additive noise, Kyoto J. Math. 60 (2020), no. 4, 1227–1243.
22. (with T. Oh) Global well-posedness of the periodic cubic fourth order NLS in negative Sobolev spaces, Forum Math. Sigma 6 (2018), e5, 80 pp.
23. (with R. Mosincat, O. Pocovnicu, L. Tolomeo) Global well-posedness of three-dimensional periodic stochastic nonlinear beam equations.
24. (with T. Oh) On the ill-posedness of the cubic nonlinear Schrödinger equation on the circle, An. Stiint. Univ. Al. I. Cuza Iasi. Mat. (N.S.) 64 (2018), no. 1, 53–84.
25. (with J. Xiao) A Liouville problem for the stationary fractional Navier-Stokes-Poisson system, J. Math. Fluid Mech. 20 (2018), no. 2, 485--498.
26. (with Z. Guo, Y. Sire, L. Zhao) On the energy-critical fractional Schrödinger equation in the radial case. Dyn. Partial Differ. Equ. 15 (2018), no. 4, 265--282.
27. (with J. Xiao) Well/ill-posedness for the dissipative Navier-Stokes system in generalized Carleson measure spaces, Adv. Nonlinear Anal. https://doi.org/10.1515/anona-2016-0042
28. (with J. Xiao) A constructive approach to positive solutions of \(\Delta_p u + f(u,\nabla u) \le 0\) on Riemannian manifolds, Ann. Inst. H. Poincaré Anal. Non Linéaire 33 (2016), no. 6,
29. (with J. Xiao) A uniqueness principle for \(u^p\leq(-\Delta)^{\frac\alpha 2}u\) in the Euclidean space, Commun. Contemp. Math. 18 (2016), no. 6, 1650019, 17 pp.
30. (with Y. Liu, J. Xiao) Nonnegative solutions of a fractional sub-Laplacian differential inequality on Heisenberg group, Dyn. Partial Differ. Equ. 12 (2015), no. 4, 379--403.
31. (with J. Xiao) Homogeneous Campanato-Sobolev classes, Appl. Comput. Harmon. Anal. 39 (2015), no. 2, 214--247.
32. (with Z. Guo, T. Oh) Strichartz estimates for Schrödinger equations on irrational tori, Proc. Lond. Math. Soc. 109 (2014), no. 4, 975--1013.
33. (with Z. Guo) Improved Strichartz estimates for a class of dispersive equations in the radial case and their applications to nonlinear Schrödinger and wave equations. J. Anal. Math. 124
(2014), 1--38.
34. (with L. Molinet) Dispersive limit from the Kawahara to the KdV equation, J. Differential Equations 255, (2013), 2196--2219.
35. Periodic nonlinear Schrödinger equation in critical \(H^s(\mathbb{T}^n)\) spaces, SIAM J. Math. Anal. 45, (2013), 1691--1703.
36. Periodic Cubic Hyperbolic Schrödinger equation on \(\mathbb{T}^2\), J. Funct. Anal. 265 (2013), 424--434.
37. Local well-posedness for hyperbolic-elliptic Ishimori equation, Nonlinear Anal. 75 (2012), 2534--2541.
38. Quadratic dispersive generalized Benjamin-Ono equation, J. Math. Anal. Appl. 387 (2012), 844--856.
39. Global well-posedness and scattering for derivative Schrödinger equation, Comm. Partial Differential Equations 36 (2011), 1694--1722.
40. (with Z. Guo, L. Peng, B. Wang) Uniform well-posedness and inviscid limit for the Benjamin-Ono-Burgers equation, Adv. in Math. 228 (2011), 647--677.
41. (with Z. Guo) On the well-posedness of the Schrödinger-KdV system, J. Differential Equations 249 (2010), 2500--2520.
42. The Cauchy problem for the elliptic-hyperbolic Davey-Stewartson system in Sobolev space, J. Math. Anal. Appl. 367 (2010), 174--192.
□ James Patterson, since September 2022.
□ Rui Liang, September 2020 - July 2024.
Postdoctoral researcher at University of Massachusetts Amherst since 2024, USA.
□ Guopeng Li, September 2018 - July 2022.
Assistant Professor at Beijing Institute of Technology since 2024, China.
□ Engin Basakoglu, supported by EPSRC April 2023 - March 2024.
Postdoctoral researcher at ShanghaiTech University, 2024 - 2027, China.
□ Tomoyuki Tanaka, supported by JSPS and EPSRC September 2021 - March 2023.
Associate professor at Yokohama National University since 2024, Japan. | {"url":"https://web.mat.bham.ac.uk/Y.Wang/","timestamp":"2024-11-06T10:19:08Z","content_type":"text/html","content_length":"24771","record_id":"<urn:uuid:eb6a8baf-eef7-4311-93c9-e92bfd48b9af>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00626.warc.gz"} |
3 pass affine fourier resample
Just an example from the resample stuff I was looking at recently.
Source input - Lenna scaled by 1/8 using ImageMagick with a Lanczos filter:
I'm applying an affine matrix with a 7° rotation and an 8x scale in each dimension. Apart from the test algorithm, the rest are just using Java2D for the resampling on 8-bit images.
Just to see how little information is present, I'll start with the nearest-neighbour version. The ringing at the top of the hat suggests artefacts have been added by the downsampling process.
Now for bilinear, which makes a right pig's breakfast of things:
And finally the one which is based on a three-pass shear and/or scale algorithm. It comprises three separate stages.
1. Horizontal shear and scale;
2. Vertical shear and scale;
3. Horizontal shear.
Each operates only in a single dimension, which greatly simplifies the resampling required - one only needs to be able to resample and scale in one dimension.
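For the rotation-only case (the Paeth/Owen-Makedon idea this builds on), the three-pass structure comes from a matrix identity: a 2-D rotation factors into shear × shear × shear. Here is a quick numerical check of that identity, under the assumption of pure rotation with no scaling:

```python
# Verify R(t) = Sx(a) @ Sy(b) @ Sx(a), with a = -tan(t/2) and b = sin(t):
# a rotation decomposed into horizontal, vertical, horizontal shears.
import math

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = math.radians(7)              # the 7-degree rotation used in this post
a, b = -math.tan(t / 2), math.sin(t)
Sx = [[1, a], [0, 1]]            # horizontal shear
Sy = [[1, 0], [b, 1]]            # vertical shear
R = matmul(Sx, matmul(Sy, Sx))

expected = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
assert all(abs(R[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```

With scaling folded in (as in the full affine decomposition used here), the outer passes become shear-and-scale rather than pure shears, but the separable one-axis-at-a-time structure is the same.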
I'm using a trick to avoid the typical sinc filter ringing along the edges of the image itself, and I'm not cropping the result properly yet.
Unfortunately, due to using a Fourier Transform for scaling, I end up with a bit of ringing, primarily due to the Gibbs phenomenon. How much of this is present depends on the source image too, and even
the nearest-neighbour result shows that the Lanczos downsampling has added some ringing to start with.
Even with the ringing certain features are significantly smoother - such as the brim of her hat, top of her shoulder, or the frame of the mirror.
Most of the design, including using the Fourier Transform for arbitrary shift/scaling, is from the paper "Methods for Efficient, High Quality Volume Resampling in the Frequency Domain"; Aili Li, Klaus Mueller. But I'm using the affine matrix decomposition in "Bottleneck-free separable affine image warping"; Owen, C.B.; Makedon, F., Image Processing, 1997. Proceedings., International Conference on (Volume:1). A related paper which just covers rotation is "High Quality Alias Free Image Rotation"; Charles B. Owen, Fillia Makedon.
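The frequency-domain primitive those papers build on is the Fourier shift theorem: multiplying the spectrum by a linear phase ramp shifts the signal, including by sub-pixel amounts. A 1-D sketch using a naive DFT (a real implementation would use an FFT, and would special-case the Nyquist bin for fractional shifts of even-length real signals):

```python
# Shift a 1-D signal via the Fourier shift theorem (naive DFT for brevity).
import cmath

def dft(x, sign=-1):
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def fourier_shift(x, delta):
    n = len(x)
    X = dft(x, -1)
    # frequencies in FFT order: 0, 1, ..., n//2-1, -n//2, ..., -1
    freqs = list(range(0, (n + 1) // 2)) + list(range(-(n // 2), 0))
    Xs = [X[j] * cmath.exp(-2j * cmath.pi * freqs[j] * delta / n)
          for j in range(n)]
    return [v.real / n for v in dft(Xs, +1)]   # inverse DFT, real part

x = [0, 0, 1, 0, 0, 0, 0, 0]   # impulse at index 2
y = fourier_shift(x, 2)        # integer shift: exact circular shift to 4
```

A shear pass in the frequency domain is essentially this shift applied row by row (or column by column) with a per-row delta, which is why the decomposition into one-axis passes pairs so naturally with Fourier resampling.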
Spectral Analysis
Visual appearnce is one thing, but how true to the desired signal is each result mathematically? Taking a log power spectrum of a portion of the image (without edges) allows me to look a bit deeper.
Since i've upscaled by 8x in each dimension an ideal (i.e. sinc filter) resampling will contain no higher frequencies than were originally present - i.e. for a 64x64 image upscaled by any amount,
only the lowest 64x64 frequencies should contain any information at all (it will be rotated along with the signal however). To emphasise this I've zeroed out the signal-bearing frequencies in the
following spectrograms so that only the distortion added by each algorithm is displayed.
To be fairer to the 8-bit resampling, I've also taken the power spectrum of the quantised 8-bit result of the Fourier-based algorithm as used to generate the PNG. This quantisation mostly shows up
(?) as noise along the axes.
Note that each spectrum is auto-scaled so the brightness levels do not necessarily represent equivalent amounts of distortion.
On to the spectra ... first the nearest-neighbour. This basically leaks a copy of the original signal as a grid across the whole spectrum.
Bilinear reduces these signal copies significantly apart from along the axes.
Bicubic reduces them further but there is still significant signal leaking out to the adjacent frequencies. Approximately 1.5x along each axis.
And finally the result from the Fourier-based method. Apart from the axes, which are primarily due to the quantisation to 8-bit (I think), most of the signal is just noise plus a little from the
rotation leaking past the mask.
No comments: | {"url":"http://a-hackers-craic.blogspot.com/2013/10/3-pass-affine-fourier-resample.html","timestamp":"2024-11-10T03:08:25Z","content_type":"application/xhtml+xml","content_length":"53124","record_id":"<urn:uuid:6c0a95a5-5cd3-4fa2-8f32-59f6a8870ae1>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00321.warc.gz"} |
How math deals with Pythagoras theorem and Limit of function
Pythagoras Theorem
Mathematics is an abstract subject with unique signs, colorful graphs, and diverse shapes and structures in geometry. This amazingly exciting subject sometimes gives us a tough time, but adding a little adventure makes it more interesting.
One of the many interesting and adventurous topics is algebra, which is itself a diverse subject, and Pythagoras’ theorem is one of its standard results. This theorem has acted as a basic building block of mathematics, contributing to the solution of many problems, particularly in algebra and geometry, for centuries now.
In geometry, a triangle with a right angle (90 degrees) has a longest side, called the hypotenuse, which lies opposite the right angle. The square of the hypotenuse equals the sum of the squares of the other two sides of the triangle (the base and the perpendicular).
A triangle is formed from three line segments of possibly different lengths joined following the head-to-tail rule: each segment’s tail is attached to the next segment’s head, so the three segments close up into a triangle.
The triangle with a right angle is the one used in Pythagoras’ theorem.
As an example, verify the theorem for a right triangle with side lengths
Base = a = 4 cm
Perpendicular = b = 3 cm
Hypotenuse = c = 5 cm
Now take the square of each value. According to Pythagoras’ theorem, the square of the hypotenuse c is the sum of the squares of a and b.
Formula used in the Pythagorean theorem:
a² + b² = c²
4² + 3² = 5²
16 + 9 = 25
Pythagorean theorem solver helps to find the steepness of slopes. The slope can be of a hill and mountain. To observe the angles of these slopes, you can use a telescope to look toward the measuring
object maintaining a fair distance so that the observations must be clear. So you can achieve the right angle of the slope.
Why is the Pythagorean theorem important in mathematics?
It has many uses, the standard one being to work with slopes and triangles. You can use the Pythagorean theorem to tell whether a triangle is acute, obtuse, or right. If the theorem holds, with the square of the hypotenuse equal to the sum of the squares of the other two sides, it is a right triangle. If the square of the longest side is larger than that sum, the triangle is obtuse; if it is smaller, the triangle is acute.
This theorem also helps in laying out rectangles and in finding missing side lengths of a triangle.
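The comparison described above (square of the longest side versus the sum of the other two squares) can be sketched as a small function — the helper name and sample triangles here are my own, not from the text:

```python
def classify_triangle(a, b, c):
    """Classify a triangle by comparing the square of its longest
    side against the sum of squares of the other two sides."""
    a, b, c = sorted((a, b, c))  # ensure c is the longest side
    lhs, rhs = c**2, a**2 + b**2
    if lhs == rhs:
        return "right"
    return "obtuse" if lhs > rhs else "acute"

print(classify_triangle(3, 4, 5))  # right
print(classify_triangle(2, 3, 4))  # obtuse: 16 > 4 + 9
print(classify_triangle(4, 5, 6))  # acute: 36 < 16 + 25
```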
Applications of Pythagorean Theorem
The real-life applications of the Pythagorean theorem in our daily life are given as follows:
• Pythagorean Theorem is used to find the diagonal length of the roof’s slope.
• It is used in physical construction works like woodworking or architecture etc.
• Engineers in computing and information technology, science, and construction still use this age-old formula in their practical work.
• Pythagorean theorem is also used in two-dimensional navigation.
• It is also used to find the slope or steepness of high hills or mountains.
• You can also find the original height of a broken tree by using the Pythagorean theorem.
Introduction to Limits
The limit is one of the foundational concepts of calculus. Although the idea of a limit is easy to understand, it cannot be fully explained on a single piece of paper; limits form a vast part of calculus, and arguably its central concept.
A limit describes continuous change: we approach some value so closely that the remaining difference between them becomes negligible.
What is the Limit of a Function?
A function f assigns to every input x an output f(x). We say the function has a limit L at a point p if, as the input x comes closer and closer to p, the output f(x) comes closer and closer to L.
The precise definition of Limits
The precise definition of a limit is somewhat different from its informal description. It also shows why limits are important and usable in mathematics, and especially in calculus, since they underlie both derivatives and integration.
The precise definition of a limit can be stated as:
the limit of f(x) as x approaches some number c is the number L if f(x) can be made as close as we like to L by taking x sufficiently close to c, without x ever being exactly equal to c.
Limit of a function mathematics
The limit of a function is evaluated within calculus, where the processes involved are infinite: sums of infinitely many terms, and values approached but never reached. Limits act as a pillar that makes these infinite processes precise. A limit does not require the function ever to attain a value exactly; instead, it gives a rigorous meaning to "approaching" a value, so we can say exactly how close the outputs get as the inputs close in on a point.
For example, if the limit of a function at a point is 1, then as the inputs approach that point the outputs run through values like 0.9, 0.99, 0.999, …, getting ever closer to 1 without needing to reach it.
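As a numerical sketch (the function below is my own example, not from the text), we can watch outputs approach a limit even where the function itself is undefined:

```python
def f(x):
    # Not defined at x = 1, yet its values approach a limit there.
    return (x**2 - 1) / (x - 1)

for x in (0.9, 0.99, 0.999, 0.9999):
    print(x, f(x))
# The outputs approach 2, so the limit of f(x) as x -> 1 is 2,
# even though f(1) itself is undefined.
```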
Applications of Limits
The real-life applications of limits in our daily life are given as follows:
• Limits are helpful while measuring the magnetic field or electric field etc.
• It is used to filter out useable information from raw data.
• Limits are used as real-life approximations of derivatives.
• Limits are used in the speedometers of our cars.
| {"url":"https://www.postingtree.com/how-math-deals-with-pythagoras-theorem-and-limit-of-function/","timestamp":"2024-11-06T15:36:33Z","content_type":"text/html","content_length":"130807","record_id":"<urn:uuid:4ec329de-df76-4bcf-9c86-b0c650d4628d>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00003.warc.gz"}
What are the factors on which the landing distance depends? - Tourist guide
The determination of landing distance required for aircraft to land is calculated by taking into account the effect of various influencing factors, including runway construction, surface conditions
and the use of aircraft devices which are available to assist deceleration.
What does landing distance required mean?
Actual landing distance is the distance used in landing and braking to a complete stop (on a dry runway) after crossing the runway threshold at 50 feet. Required landing distance is the distance derived by applying a factor to the actual landing distance.
What is the factored landing distance formula?
The factored landing distance is the certified landing distance multiplied by 1.67, which can then be compared directly to the available landing distance.
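As a quick arithmetic sketch of that 1.67 factor (the function name and the 1500 m figure here are illustrative, not from the source):

```python
def factored_landing_distance(certified_distance_m):
    """Certified landing distance multiplied by the standard 1.67 factor."""
    return certified_distance_m * 1.67

# Hypothetical certified landing distance of 1500 m:
required = factored_landing_distance(1500)
print(required)  # 2505.0
# The available landing distance must be at least this factored figure.
```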
What are the factors affecting takeoff and landing distance?
They apply various factors, including density altitude, type of operation, runway surface, runway slope and wind, to readily determine take-off and landing distances for a particular set of conditions.
What are the four factors that affect flight and landing?
Four forces affect an airplane while it is flying: weight, thrust, drag and lift. See how they work when you do these activities as demonstrations.
What is the difference between factored and unfactored landing distance?
Unfactored landing distances assume precise control; factored landing distances reflect the effects of deviating from an ideal landing profile.
What is the P factor during landing?
P-Factor. P-Factor, which is also called "asymmetric propeller loading", happens when the downward moving propeller blade takes a bigger "bite" of air than the upward moving blade.
How do pilots calculate landing?
When it comes to calculating the landing distance, it's not as simple as just plugging in the landing weight and coming up with a number. The major elements which affect the landing distance are:
aircraft weight, flap setting, wind, runway surface and runway slope.
How do airports decide take off and landing runway?
Weather, in particular wind speed and direction, is usually the main reason for selecting which runways are used at an airport, the direction aircraft take off and land, and the flight paths that are flown.
What factors affect takeoff and landing distance?
In general the LDR depends on a number of factors, principally:
• The aircraft landing mass;
• The surface wind and temperature;
• The runway elevation and slope;
• The runway surface conditions (dry, wet, slippery or contaminated);
• The condition of aircraft wheel-brakes and braking systems; and,
• The approach speed increment.
What are factored vs unfactored loads?
A factored load is a load multiplied by a certain factor designated by codes of practice to determine the strength of structural members such as reinforced concrete. An unfactored load is a service load, used to determine the working stress of a structural concrete, steel, or wood member.
Is factored or unfactored loading for deflection?
Note that a deflection-controlled member should be identical with ASD versus LRFD, since unfactored loads are used for deflection calculations. Note also that compression perpendicular to grain uses a different KF factor, so this analysis would show different results for bearing-controlled applications.
| {"url":"https://uaetrip.ae/faq/102714/","timestamp":"2024-11-13T11:33:38Z","content_type":"text/html","content_length":"161062","record_id":"<urn:uuid:6e023c88-f794-4bfc-8eab-9b2b25a47598>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00100.warc.gz"}
What are the Types of Quantum Numbers?
Quantum numbers are a set of constant values. Quantum numbers, also known as electronic quantum numbers, are numerical values assigned to electrons that arise in the solutions of the Schrodinger wave equation for the hydrogen atom. The quantum numbers assigned to all the electrons in a particular atom must together be consistent with the Schrodinger equation. This collection of numbers describes the location, energy, and orientation of an electron in an atom.
Types of Quantum Numbers
The four quantum numbers completely characterize, or offer comprehensive information about, an electron in an atom. The quantum numbers are:
The Principal Quantum Number (n)
Azimuthal Quantum Number (l)
Magnetic Quantum Number (ml)
Spin Quantum Number of Electrons (s)
Principal Quantum Number
The principal quantum number is represented by the symbol 'n'. It represents an atom's primary electron shell. The larger the value of the principal quantum number, the greater the distance between the nucleus and the electrons, and therefore the larger the atom.
The principal quantum number value can be any positive integer greater than or equal to one. The number n=1 signifies an atom’s innermost electron shell, which corresponds to the lowest energy state
of an electron.
Since it counts electron shells, the principal quantum number cannot take a negative value.
The value of the principal quantum number increases when an electron absorbs energy and jumps from one shell to a higher shell. Likewise, when electrons lose energy, they return to lower shells, decreasing the value of n.
An increase in the value of n for an electron is called absorption, highlighting the photons or energy being absorbed. Similarly, a drop in the value of n for an electron is known as emission, where the electron emits its energy.
Azimuthal Quantum Number
The azimuthal quantum number describes the shape of an orbital (or orbital angular momentum). The letter ‘l’ represents it, and its value equals the total number of angular nodes in the orbital.
Azimuthal Quantum Number Formula and Explanation:
An azimuthal quantum number value can represent an s, p, d, or f subshell in many configurations.
Its value is determined (and restricted) by the value of the principal quantum number: it ranges from 0 to (n − 1).
For example, if n = 3, the azimuthal quantum number can be one of three values: zero, one, or two.
The resultant subshell is an ‘s’ subshell when l is set to zero.
For l=1 and l=2, the resultant subshells are ‘p’ and ‘d,’ respectively.
As a result, the three feasible subshells for n=3 are 3s, 3p, and 3d. In another situation when n = 5, the available values of l are zero, one, two, three, and four. The atom has three angular nodes
when l = 3.
Magnetic Quantum Number
The magnetic quantum number defines the overall number and orientation of orbitals in a subshell. It is denoted by the symbol ‘ml‘. This value indicates the orbital’s angular momentum projected along
a specified axis. Let us understand the magnetic quantum number formula and detailed explanation:
The magnetic quantum number is determined by the azimuthal quantum number.
The value of ml for a given l lies between -l and +l. As a result, the value of n has an indirect effect on it.
If n = 4 and l = 3, the magnetic quantum number in an atom might be -3, -2, -1, 0, +1, +2, and +3. The orbital’s ‘l’ value determines the overall number of orbitals in a particular subshell.
The formula (2l + 1) is used to compute it. The ‘3d’ subshell (n=3, l=2), for example, has 5 orbitals (2*2 + 1). Each orbital may accommodate two electrons. As a consequence, the 3d subshell may hold
a total of 10 electrons.
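The counting rules above (l running from 0 to n − 1, ml from −l to +l, and two electrons per orbital) can be sketched in a few lines — the helper name here is my own:

```python
def orbitals(n):
    """All allowed (l, ml) pairs for a given principal quantum number n."""
    return [(l, ml) for l in range(n) for ml in range(-l, l + 1)]

# n = 3 allows the subshells l = 0, 1, 2 (3s, 3p, 3d).
for l in range(3):
    count = sum(1 for ll, _ in orbitals(3) if ll == l)
    print(f"l={l}: {count} orbitals, up to {2 * count} electrons")
# l=0: 1 orbitals, up to 2 electrons
# l=1: 3 orbitals, up to 6 electrons
# l=2: 5 orbitals, up to 10 electrons
```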
Electron Spin Quantum Number
The values of n, l, and ml have no effect on the electron spin quantum number. The value of this number, represented by the symbol ms, represents the spin direction of the electron.
The ms value tells which way the electron is spinning. The electron spin quantum number can be +1/2 or −1/2.
A positive ms value indicates that the electron has an upward spin, often called "spin up".
If ms is negative, the electron has a downward spin, known as "spin down".
The quantum number of electron spin determines whether an atom can produce a magnetic field. The value of ms can be generalized to ±1/2.
| {"url":"http://cornimant.icu/what-are-the-types-of-quantum-numbers/","timestamp":"2024-11-07T18:18:32Z","content_type":"application/xhtml+xml","content_length":"15201","record_id":"<urn:uuid:efd6bddb-b255-4551-bddf-bfbee49c5ad5>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00675.warc.gz"}
Enhancing Daily Calculations with NumPy Multiply Function - Adventures in Machine Learning
In a world where calculations are an essential part of daily life, multiplication is a fundamental concept. From simple arithmetic problems to complex mathematical formulas, multiplication plays a
crucial role in many areas of our lives.
NumPy Multiply Function is a useful tool for performing multiplication operations on arrays and scalars in NumPy. In this article, we will learn about NumPy Multiply, its syntax, and how it enhances
our daily calculation experiences.
Definition of NumPy Multiply Function
NumPy Multiply Function is an in-built function in the NumPy library that enables multiplication operations on arrays and scalars. It is a versatile tool that performs element-wise multiplication on
two given input arrays.
The inputs are multiplied using the broadcasting feature of NumPy, which ensures that arrays or scalars with different shapes can be multiplied efficiently.
Importance of Multiplication in Daily Life
Multiplication is a critical concept in daily life and is actively used in many areas. For instance, multiplication helps to calculate the total amount of money to be paid for an item bought in
several quantities.
It also helps to calculate the area of a rectangular room. Additionally, multiplication plays a vital role in various scientific and engineering fields, including physics, chemistry, and engineering.
For these fields, multiplication is crucial in calculating distances, volumes, and forces.
Syntax of NumPy Multiply Function
Now that we understand the importance of multiplication let us dive into the syntax of NumPy Multiply Function. The syntax for NumPy Multiply Function is straightforward.
Syntax : numpy.multiply(x1, x2, out=None, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj]).
The function takes two input parameters – x1 and x2 – and returns the element-wise multiplication of the two arrays.
It also accepts optional arguments for out and where, which we will discuss further.
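As a brief illustration of those optional arguments (the arrays and values below are my own examples), `out` writes the product into an existing array, and `where` masks which elements are computed:

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 10, 10, 10])

# out= writes the product into an existing array instead of allocating one.
result = np.zeros(4, dtype=int)
np.multiply(a, b, out=result)
print(result)  # [10 20 30 40]

# where= computes the product only where the mask is True; other
# positions keep whatever value `out` already held (here, -1).
masked = np.full(4, -1)
np.multiply(a, b, out=masked, where=np.array([True, False, True, False]))
print(masked)  # [10 -1 30 -1]
```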
Inputs for NumPy Multiply (Scalar and Arrays)
The NumPy Multiply Function can accept scalar inputs, arrays of different dimensions and shapes, or a combination of the two. For scalar multiplication, elements of the array are multiplied by a
single scalar value, resulting in a new array with the same shape as the original one.
Scalar Multiplication Example
import numpy as np

a = np.array([1, 2, 3, 4])
print(np.multiply(a, 2))  # [2 4 6 8]
In this example, the scalar input value two multiplies each element of the array, resulting in an output array with the same shape as the input array. For array multiplication, the input arrays must
have compatible shapes for element-wise multiplication, making it necessary to use the broadcasting feature of NumPy. Broadcasting ensures that arrays or scalars with different shapes can still be
multiplied efficiently.
Broadcasting Example
import numpy as np
# Creating two arrays of different dimensions
x = np.array([[1, 2, 3], [4, 5, 6]])
y = np.array([2, 2, 2])
# Broadcasting the scalar array y to match the same shape as x
result = np.multiply(x, y)
array([[ 2, 4, 6],
[ 8, 10, 12]])
In conclusion, NumPy Multiply Function allows us to perform multiplication operations on arrays and scalars. By using the broadcasting feature of NumPy, it enables us to perform efficient
element-wise multiplication of arrays or scalars with different shapes.
With the code snippets shown in this article, you can now perform multiplication operations comfortably with NumPy. So the next time you need to perform multiplication operations, use NumPy Multiply
Function, and achieve accurate and efficient results.
Examples of NumPy Multiply Function
NumPy Multiply Function is a versatile tool that can be used to perform efficient element-wise multiplication of arrays and scalars. In this section, we will provide examples of how NumPy Multiply
Function can be used in different scenarios.
NumPy Multiply with Scalar Values
Let us start by looking at an example of NumPy’s Multiply Function with scalar values.
import numpy as np
# Scalar multiplication using NumPy Multiply Function
a = np.array([1, 2, 3, 4])
b = 2 # scalar value
print(np.multiply(a, b))
In this example, we used NumPy Multiply Function to perform scalar multiplication on an array. By multiplying the scalar value with each element of the array, NumPy creates a new array with the same
shape as the original array.
NumPy Multiply with a NumPy array and Scalar value
Now, let’s take a look at an example of NumPy Multiply Function with a NumPy array and a scalar value.
import numpy as np
# Scalar multiplication of a NumPy array
a = np.array([[1, 2, 3], [4, 5, 6]])
b = 2 # scalar value
print(np.multiply(a, b))
array([[ 2, 4, 6],
[ 8, 10, 12]])
In this example, the scalar value is multiplied by each element of the NumPy array. By using the broadcasting feature in NumPy, the scalar value is multiplied element-wise to each row of the array.
NumPy Multiply with two Same-Sized NumPy arrays
The NumPy Multiply Function can also be used with same-sized NumPy arrays, enabling us to perform element-wise multiplication efficiently. Here’s an example:
import numpy as np
# Element-wise multiplication on two same-sized NumPy arrays
a = np.array([1, 2, 3])
b = np.array([2, 4, 6])
result = np.multiply(a, b)
print(result)  # [ 2  8 18]
With NumPy Multiply Function, we can perform element-wise multiplication on two same-sized NumPy arrays quickly and easily.
NumPy Multiply with a Matrix and a Vector
The NumPy Multiply Function is also useful in matrix multiplication and can be combined with the broadcasting feature in NumPy. Here's an example of NumPy Multiply Function with a Matrix and a Vector:
import numpy as np
# A matrix with shape 2x3
matrix = np.array([[1, 2, 3], [4, 5, 6]])
# A vector with shape 1x3
vector = np.array([2, 2, 2])
# Broadcasting vector to match the shape of matrix
# Then, perform element-wise multiplication
result = np.multiply(matrix, vector)
print(result)
In this example, we used NumPy Multiply Function to perform matrix-vector multiplication. By using broadcasting, we made sure the vector was compatible with the matrix.
NumPy Multiply Function then performs the element-wise multiplication of the matrix and vector.
Conclusion and Summary
In summary, NumPy Multiply Function is a useful tool for performing multiplication operations on arrays and scalar values efficiently. It can perform scalar multiplication on arrays, element-wise
multiplication on same-sized arrays, and matrix multiplication with broadcasting.
NumPy Multiply Function is versatile and easy to use, making it a valuable tool for calculations in scientific, engineering, and other related fields. We recommend experimenting with NumPy Multiply
Function to leverage its full potential and improve your calculations.
In conclusion, NumPy Multiply Function is a versatile tool that enables efficient element-wise multiplication of arrays and scalars. The article has covered its syntax and usage with examples, including scalar values, same-sized arrays, and matrix and vector multiplication with broadcasting. With it we can perform accurate and efficient multiplication operations in fields such as science and engineering; we recommend experimenting with the function to embrace its full potential and improve your calculations. | {"url":"https://www.adventuresinmachinelearning.com/enhancing-daily-calculations-with-numpy-multiply-function/","timestamp":"2024-11-07T00:13:31Z","content_type":"text/html","content_length":"82169","record_id":"<urn:uuid:600066ef-e942-42ce-b9cb-f39c6f720e59>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00612.warc.gz"}
Autodiff with Zygote: issues with setting seeds
Hello all,
I would like to differentiate a function with fixed seed but obtain a “can’t differentiate foreign call” error when using Zygote. Any advise would be appreciated.
The following is a minimal working example and one possible, but for me somewhat limiting, work around.
using Zygote
using Random
# Random.seed!( ) does not work with Zygote. It produces a "can't differentiate foreigncall" error
function simulator(x, id::Int64)
    Random.seed!(id)
    return simulator(x)
end
"This is a work around. The simulator needs to take a rng as input."
function simulator(x, rng::AbstractRNG)
    noise1 = randn(rng)
    noise2 = randn(rng)
    @show noise1
    @show noise2
    return x+noise1+noise2
end
function simulator(x)
    noise1 = randn()
    noise2 = randn()
    @show noise1
    @show noise2
    return x+noise1+noise2
end
function distance(sim, obs)
    return sum((sim-obs).^2)
end
"This will work"
function loss(x, obsdata, id::Int64)
    rng = Xoshiro(id)
    sim = simulator(x, rng)
    return distance(sim, obsdata)
end
"This won't work"
function loss_with_issue(x, obsdata, id::Int64)
    sim = simulator(x, id)
    return distance(sim, obsdata)
end
# data
myobs = 2.0;
# to fix the seed
id = 123
# test point
xtest = 3.0
# This works
Zygote.gradient(x->loss(x, myobs, id), xtest)
2*(simulator(xtest, Xoshiro(id))-myobs)
# This throws an error: can't differentiate foreigncall expression"
Zygote.gradient(x->loss_with_issue(x, myobs, id), xtest)
Arguably simulator(x, rng::AbstractRNG) is cleaner code and may be preferred anyway, but I needed to be able to differentiate my loss also for simulators such as simulator(x) that do not work with an
explicit RNG instance.
Would someone know how to make Zygote work without having to pass around a RNG instance, i.e. for the loss_with_issue case?
Many thanks!
I think you can just tell Zygote not to look inside that function, like so:
julia> Zygote.gradient(x->loss_with_issue(x, myobs, id), xtest)
noise1 = -0.6457306721039767
noise2 = -1.4632513788889214
ERROR: Can't differentiate foreigncall expression $(Expr(:foreigncall, :(:jl_get_current_task), Ref{Task}, svec(), 0, :(:ccall))).
[4] setstate!
@ /Applications/Julia-1.10.app/Contents/Resources/julia/share/julia/stdlib/v1.10/Random/src/Xoshiro.jl:132 [inlined]
julia> function simulator(x, id::Int64)
           Zygote.@ignore Random.seed!(id)
           return simulator(x)
       end
simulator (generic function with 3 methods)
julia> Zygote.gradient(x->loss_with_issue(x, myobs, id), xtest)
noise1 = -0.6457306721039767
noise2 = -1.4632513788889214
I believe that could be made permanent by a one-line PR here.
Tangential remark: why do you want to differentiate a function that returns random values? Autodiff engines are not designed to deal with such situations by default, so you might obtain unexpected (and backend-dependent) results.
Thank you very much. That indeed resolves the issue.
The motivation for this is the implementation of a statistical inference procedure that works by fixing the seed of the stochastic generative model (the simulator). Details about the method would be | {"url":"https://discourse.julialang.org/t/autodiff-with-zygote-issues-with-setting-seeds/121957","timestamp":"2024-11-05T02:39:23Z","content_type":"text/html","content_length":"29574","record_id":"<urn:uuid:73503924-b470-4610-9759-9e50fdc06e48>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00782.warc.gz"} |
Soil Water Potential Calculation
Soil Water Potential
Soil water potential is the amount of pressure that must be applied to the soil to move water through the soil column. In the context of agriculture, it can be thought of as the amount of energy crop
roots must exert to obtain water. When the soil water potential approaches zero, it is easy for crops to obtain water. As the potential becomes more and more negative, it becomes increasingly harder
for crops to obtain water. At a potential of -1500 kPa, the “wilting point” is reached, and the water in the soil becomes unavailable to plants. Because soil water potential is not a relative
measure, it is a more objective measure for assessing crop stress.
Water Potential Curve Derivation
Soil water potential varies by soil type, so soil samples are needed for it to be calculated. At select Montana Mesonet stations, soil samples have been collected, allowing for the derivation of soil
water potential curves in our lab. In this process, the soil samples are saturated and progressively dried. The potential is then measured at various points in the drying process. With these data
points, the Fredlund-Xing (FX) ^1 function is parameterized, which allows soil water content to be calculated from water potential. We then invert the FX equation to solve for soil water potential.
This allows us to provide realtime estimates of soil water potential using measured volumetric water content:
\[SWP = h \times \left( \exp\!\left( \left(\frac{s - r}{VWC - r}\right)^{\frac{1}{m}} \right) - \exp(1) \right)^{\frac{1}{n}}\]
where $h$, $s$, $r$, $m$ and $n$ are the parameters fit to the FX model, VWC is soil volumetric water content (m^3/m^3), and SWP is soil water potential (kPa)
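A minimal sketch of evaluating this curve, assuming the formula above and entirely made-up fit parameters (real values are soil-specific and come from the lab fitting described earlier):

```python
import math

def soil_water_potential(vwc, h, s, r, m, n):
    """Invert the Fredlund-Xing curve: suction (kPa) from volumetric
    water content, given fitted parameters h, s, r, m, n."""
    ratio = (s - r) / (vwc - r)
    return h * (math.exp(ratio ** (1.0 / m)) - math.e) ** (1.0 / n)

# Illustrative (made-up) fit parameters -- not from any real soil sample.
params = dict(h=10.0, s=0.45, r=0.05, m=0.5, n=2.0)
for vwc in (0.40, 0.30, 0.20):
    print(vwc, round(soil_water_potential(vwc, **params), 1))
# Suction grows rapidly as the soil dries (as VWC decreases).
```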
1. Fredlund, D. G., & Xing, A. (1994). Equations for the soil-water characteristic curve. Canadian geotechnical journal, 31(4), 521-532. ↩ | {"url":"https://climate.umt.edu/mesonet/ag_tools/swp/","timestamp":"2024-11-10T12:39:30Z","content_type":"text/html","content_length":"13786","record_id":"<urn:uuid:49b14d5a-9007-4ee3-9a2e-ba93b8879f43>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00182.warc.gz"} |
2012 A-level H2 Mathematics (9740) Paper 2 Question 3 Suggested Solutions - The Culture SG
All solutions here are SUGGESTED. Mr. Teng will hold no liability for any errors. Comments are entirely personal opinions.
KS Comments:
Sketching of the graph can be done using the GC, and since it is one mark, we do not really need to label everything. Students can consider completing the square to solve (ii), or using the quadratic formula to resolve the complex roots directly. (iii) is a simple replacement and it's a "state" question. For (v), students need to note that
| {"url":"https://theculture.sg/2015/08/2012-a-level-h2-mathematics-9740-paper-2-question-3-suggested-solutions/","timestamp":"2024-11-02T17:08:30Z","content_type":"text/html","content_length":"105688","record_id":"<urn:uuid:46c27273-45fe-49fe-8eab-9b2b25a47678>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00532.warc.gz"}
Kids.Net.Au - Encyclopedia > Moment of inertia
Moment of inertia is rotational inertia, i.e., moment of inertia is to rotational motion as mass is to linear motion. Rotational versions of Newton's Second Law, momentum, and the formula for kinetic
energy use this value (with torque, angular velocity and angular acceleration replacing force, velocity and acceleration, respectively). The moment of inertia for an object depends on its shape and
distribution of mass within that shape: the more the mass is on the outside with respect to the axis of rotation, the larger the moment of inertia. For a given mass M and radius r, in order of
increasing moment of inertia we have a solid sphere, a solid cylinder, a hollow sphere and a hollow cylinder, namely cMr^2, with c=2/5, 1/2, 2/3 and 1, respectively. The general form of the moment of
inertia involves an integral.
The moment of inertia is often represented by the letter I.
For a bunch of infinitely small particles, each with mass <math>m_n</math> at distance <math>r_n</math> from a particular axis, the moment of inertia of those particles around that axis will be
<math>I = \sum_n r_n^2 m_n</math>
Continuous mass distributions require an infinite sum over all the point mass moments which make up the whole. This is accomplished by an integration over all the mass:
<math>I = \int_0^M r^2\,dm</math>
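The point-mass sum, and the c·M·r² shape factors listed above, can be sketched as follows (the masses, M, and r values are illustrative, not from the article):

```python
def moment_of_inertia(particles):
    """Rotational inertia of point masses: sum of m * r^2,
    where r is each particle's distance from the axis."""
    return sum(m * r**2 for m, r in particles)

# Two 1 kg masses, each 2 m from the axis.
print(moment_of_inertia([(1.0, 2.0), (1.0, 2.0)]))  # 8.0

# Shape factors c in I = c*M*r^2 for the bodies listed above,
# in order of increasing moment of inertia.
M, r = 3.0, 0.5
for name, c in [("solid sphere", 2 / 5), ("solid cylinder", 1 / 2),
                ("hollow sphere", 2 / 3), ("hollow cylinder", 1.0)]:
    print(name, c * M * r**2)
```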
See also: torque
All Wikipedia text is available under the terms of the GNU Free Documentation License | {"url":"http://encyclopedia.kids.net.au/page/mo/Moment_of_inertia","timestamp":"2024-11-12T16:57:30Z","content_type":"application/xhtml+xml","content_length":"13813","record_id":"<urn:uuid:0181090a-951b-4dcc-bd9a-cd94a8236ecb>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00470.warc.gz"} |
We spend a lot of time in school mathematics learning how to work with numbers, but what are they? What is a number and how can we construct our collection of these very abstract things that we use
every day in conversation and planning?
⇒a more formal way to define numbers
We use them along with arithmetic like `times` and `+` to help us describe and understand many properties of the world we observe around us. An astonishing variety of very different quantities we
measure behave like numbers and have important properties derived using arithmetic. In physics we measure distance, time, mass and electrical charge and from them calculate properties like position,
area, volume, speed, acceleration, force, pressure, temperature, density and energy to build a model of the way physical things interact.
We use numbers for momentum, angle, probability and anything we can count as well as all kinds of properties we are interested in analysing, talking about and making predictions about that we can
measure, put in an order or assign meaningful values to. Not only science and technology but also design, business, planning and management rely on mathematical models to use and understand the
systems they are engaged with. Essentially numbers are part of understanding, describing and predicting how many kinds of things behave.
The view of numbers described here is rather more formal than is usual in school, but it can be a helpful way to think about these things for some students. There are several different approaches to
this kind of exploration of what lies behind mathematics. Here we consider numbers as a mathematical abstraction, a way of describing and working with both quantities and the order of things. We
count and measure things, and we use algebra to reason about those things using numbers. Some of the words I will use here are defined and explained first. The ideas are more important than the
words, but you will need to know some of the words to understand the explanations, or the questions you could be asked — and to communicate about these ideas.
some important ideas first ...
commutative means you can swap the order, that is: `qquad2times3=3times2`.
It works for `times` and for `+` (these, strictly, only have a meaning for pairs of numbers)
but `-` and `div` are not commutative:
`quad 2-3 quad`is the negative of`quad 3-2 quad`and
`quad 2div3 quad`is the reciprocal of`quad 3div2`
associative means you can group addition however you want: `qquad(2+3)+4 = 2+(3+4)`.
This applies to `times` as well, together these rules mean that you can rearrange `+` or `times` operations.
If you have `-` or `div` then it is not so simple: they are not associative (and, as we saw, not commutative).
It does not work when operations `times` and `+` are mixed together: `quad(2+3)times4 ne 2+(3times4).quad` This means we need to make up a rule to tell us what we mean when we write something like `2+3times4` without brackets:
Order of operations in arithmetic:
do the brackets first;
then any powers;
then multiply and divide;
then finally plus and minus.
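These rules are easy to check for particular numbers. Here is a quick sketch in Python, whose arithmetic follows the same conventions (the numbers chosen are just examples):

```python
# Checking the rules above with Python, which uses the same order of operations.
assert 2 * 3 == 3 * 2                  # commutative
assert (2 + 3) + 4 == 2 + (3 + 4)      # associative
assert (2 + 3) * 4 != 2 + (3 * 4)      # mixing `times` and `+` needs a rule
assert 2 + 3 * 4 == 2 + (3 * 4) == 14  # multiply before plus
assert 2 - 3 == -(3 - 2)               # `-` is not commutative
print("all rules check out")
```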
distributive tells how `+` and `times` work together, `quad 2times(3+4)=(2times3)+(2times4)=14`. This looks much neater (and is easier to read and understand) in our algebra notation where we do not
use the `times` symbol… \[\large a(b+c)=ab+ac.\]
Saying that counting numbers are closed under addition means that whenever you add two counting numbers the result is also a counting number. If we want numbers to be “closed under division” we need
to include fractions as well. Then if we want numbers to be “closed under subtraction” we need our numbers to include zero and the negative numbers. Understanding zero as a number, and the particular
problems this brings up, was difficult. It was the last part of this puzzle to be solved, thousands of years after fractions, powers, roots and negatives were widely used.
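One way to see closure concretely is with Python's `fractions.Fraction` type, which models exact rational arithmetic (the particular values here are arbitrary examples):

```python
from fractions import Fraction

# Counting numbers are not closed under division or subtraction ...
a, b = Fraction(2), Fraction(3)
print(a / b)   # 2/3  -- we needed fractions
print(a - b)   # -1   -- we needed zero and the negatives

# ... but the rationals are closed under +, -, * and / (never dividing by zero):
assert a / b + b / a == Fraction(13, 6)
```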
Number Algebra is the symbolic language that we use to write down and work with ‘facts’ we are given, or ‘facts’ we want to know, about numbers and things we can measure or count that behave like numbers.
We use letters to represent the values we are interested in but do not know yet, the ‘unknowns’ in our problem, the values we are trying to find.
We write algebra a little differently compared to arithmetic. We do not usually write the `times` or `div` symbols. We always use a fraction-bar instead of `div` and we write numbers and letters next
to each other to mean multiplication, so that: \begin{align*} \phantom{\frac12}42ab\quad\text{means} \quad&42 \times a \times b,\\ \text{and: } \quad&\\ \frac{4x}{yz}\quad\text{means} \quad&(4 \times
x)\div(y \times z). \end{align*} We call these groups of symbols a term.
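The same shorthand, written out explicitly in Python (the letter values below are made up just for illustration):

```python
# The algebra notation above, expanded back into arithmetic.
a, b = 2, 5
x, y, z = 3, 2, 6
print(42 * a * b)         # 42ab              -> 420
print((4 * x) / (y * z))  # 4x over yz        -> 1.0
```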
The collection of numbers found by starting with one, then adding, then dividing, then subtracting we call the Rational Numbers. The Hindu mathematicians put these ideas together in a formal,
mathematical way during the time Europeans call the Dark Ages, from about 650AD. Their analysis was soon translated into Arabic. In the 1500s the Persian and Islamic mathematics that followed was
translated and published in Europe, including much of our basic algebra and the algorithms we use for arithmetic. Those words come from the Arabic title and Persian author of one of those books. The
modern symbols for the ten digits are originally from the Hindu written script of that time.
This is a rather important idea, and we only touch on it a little in high school, but mathematicians keep creating new kinds of entities like these to talk about new kinds of models of things that we observe in the world around us. We develop the algebras that work with these new collections … what we are learning now is Number Algebra, and we also look at Set Algebra, manipulating sets and their elements using operations including union and intersection with relations like `in` (membership). At school we also explore Vector Algebra.
What is a number?
We can start by deciding that “one is a number”, the idea of ‘a single’ thing.
… now let’s make some more!
First we learn how to count.
We give the name two to the number after one, three to the number after two and make up a way of giving names to every next number. For example after ninety-nine we call the next number one hundred.
So we end up with an endless list of number-names, in a fixed order.
I will call this first collection of named numbers the Counting Numbers. Sometimes we call them Natural Numbers.
I am going to use some words that I explained above.
□ this means that we can rearrange any sequence of additions
□ and that the addition of numbers always gives a number
□ for example we learn to “count by threes”, using our fingers to multiply three by a number, keeping a tally of how many times we add `3`.
☆ that means that you can rearrange products (reorder the number and letters within a term)
☆ multiplication of numbers always gives a number, we say numbers are “closed under multiplication”
□ multiplying any number by `1` leaves it unchanged, so we call `1` the “multiplicative identity”
☆ hence we can leave out the `1` in any term — it is always implied
□ certainly not the other way round!
☆ and why we write equations with `+` and `-` signs between terms (rather than using `times` symbols)
□ hence we ‘collect like terms’ by adding their number-parts
□ hence we multiply an expression by multiplying every (top level) term of the expression.
• the reciprocal, or multiplicative inverse, of a number is the number you multiply it by to get `1`
□ or, the same thing if you already can divide: one divided by a number is its reciprocal
□ we use this to do division, we multiply by the reciprocal to divide.
□ we need fractions to make a set of numbers (excluding zero) “closed under division”
Fractions (that is: division and ratio) transform our number system from one for counting distinct things into one able to measure continuous properties like length, time, weight or probability.
This is a very big shift: we start to use this new and very different kind of number in primary school, but it takes a few years to really understand that difference.
□ the fraction bar in algebra combines brackets and dividing
• we decide that zero is a number (this might be very familiar, but it is not at all trivial or obvious!)
• the negative, or additive inverse, of a number is the number you add to it to get zero
□ or, put the other way round: a number subtracted from zero is its negative
□ thus we define subtraction: we add the negative to subtract.
□ put another way: zero is any number subtracted from itself
□ we need zero and the negative numbers to make our set of numbers “closed under subtraction”
• now we have our new set of numbers — built up from the Counting Numbers — we call them the Rational Numbers. We can add, subtract, multiply and (almost always) divide these numbers to get another as the result.
These properties, these ‘facts’ about numbers, are in the right hand column in the more formal, abstract representation here.
We use these rules to build our Number Algebra.
more numbers
Mathematicians extend the idea of number, and apply it in all sorts of practical or abstract models and explorations.
there are important non-numbers
When describing numbers above we saw that dividing by zero was not possible, it did not make sense. Yet in other cases like that we pressed on, we made new numbers to answer the questions the old
ones could not, and so made our collections of numbers more complete. We cannot find any solution to \(\ x^2=-1\ \) in the Real Numbers, so we define a new number that is the solution and give it a name: \(\rm i\). It is not a Real Number of course but we find that a whole new kind of number emerges. We call them Complex Numbers and find they do behave very nicely as numbers — following all the
rules we talked about here. We also discover many areas where they are very useful, we find many systems where values are naturally expressed as Complex Numbers.
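Python’s built-in complex type models these numbers, so we can spot-check that \(\rm i\) really does follow the number rules (the example values are arbitrary):

```python
i = 1j                                   # Python writes i as 1j
assert i * i == -1                       # the defining property of i

a, b, c = 2 + 3j, 4 - 1j, 0.5j
assert a * b == b * a                    # commutative
assert (a + b) + c == a + (b + c)        # associative
assert a * (b + c) == a * b + a * c      # distributive
print("i follows all the number rules")
```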
Perhaps, then, we should simply define a new number, call it \(\large\infty\) and extend our number system even further. The problem is that it does not play by the number rules. If we include it as
a number we lose much of what makes numbers so useful and powerful, we cannot use it as a value within our algebra without extra care and work.
This is an ancient problem, discussed here. The ways we deal with it are very important and we get started with this mathematics in school. Beyond school we dig much more deeply (and into many other
kinds of abstractions and generalisations as well). The mathematics we call “Advanced” touches on this Infinitesimal Calculus in the last year of school, in our “Extension” courses we get quite
thoroughly into basic, 1-variable calculus — especially as it applies to rates of change in continuous quantities. This was the huge advance in mathematical thinking, physics and analysis that was
developed over the 1600s by a few generations of mathematicians including many very famous names — Kepler, Descartes, Pascal, Galileo, Leibniz and Newton are probably the best known. This
application of calculus to understanding how physical objects move is called dynamics.
Routing Area Working Group G. Enyedi, Ed.
Internet-Draft A. Csaszar
Intended status: Informational Ericsson
Expires: January 16, 2014 A. Atlas, Ed.
C. Bowers
Juniper Networks
A. Gopalan
University of Arizona
July 15, 2013
Algorithms for computing Maximally Redundant Trees for IP/LDP Fast-
Reroute

Abstract
A complete solution for IP and LDP Fast-Reroute using Maximally
Redundant Trees is presented in [I-D.ietf-rtgwg-mrt-frr-
architecture]. This document defines the associated MRT Lowpoint
algorithm that is used in the default MRT profile to compute both the
necessary Maximally Redundant Trees with their associated next-hops
and the alternates to select for MRT-FRR.
Status of This Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on January 16, 2014.
Copyright Notice
Copyright (c) 2013 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
Enyedi, et al. Expires January 16, 2014 [Page 1]
Internet-Draft MRT FRR Algorithm July 2013
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3
2. Terminology and Definitions . . . . . . . . . . . . . . . . . 4
3. Algorithm Key Concepts . . . . . . . . . . . . . . . . . . . 6
3.1. Partial Ordering for Disjoint Paths . . . . . . . . . . . 6
3.2. Finding an Ear and the Correct Direction . . . . . . . . 8
3.3. Low-Point Values and Their Uses . . . . . . . . . . . . . 11
3.4. Blocks in a Graph . . . . . . . . . . . . . . . . . . . . 13
3.5. Determining Local-Root and Assigning Block-ID . . . . . . 15
4. Algorithm Sections . . . . . . . . . . . . . . . . . . . . . 16
4.1. MRT Island Identification . . . . . . . . . . . . . . . . 17
4.2. Root Selection . . . . . . . . . . . . . . . . . . . . . 18
4.3. Initialization . . . . . . . . . . . . . . . . . . . . . 18
4.4. MRT Lowpoint Algorithm: Computing GADAG using lowpoint
inheritance . . . . . . . . . . . . . . . . . . . . . . . 19
4.5. Augmenting the GADAG by directing all links . . . . . . . 21
4.6. Compute MRT next-hops . . . . . . . . . . . . . . . . . . 23
4.6.1. MRT next-hops to all nodes partially ordered with
respect to the computing node . . . . . . . . . . . . 24
4.6.2. MRT next-hops to all nodes not partially ordered with
respect to the computing node . . . . . . . . . . . . 24
4.6.3. Computing Redundant Tree next-hops in a 2-connected
Graph . . . . . . . . . . . . . . . . . . . . . . . . 25
4.6.4. Generalizing for graph that isn't 2-connected . . . . 27
4.6.5. Complete Algorithm to Compute MRT Next-Hops . . . . . 28
4.7. Identify MRT alternates . . . . . . . . . . . . . . . . . 30
4.8. Finding FRR Next-Hops for Proxy-Nodes . . . . . . . . . . 34
5. MRT Lowpoint Algorithm: Complete Specification . . . . . . . 36
6. Algorithm Alternatives and Evaluation . . . . . . . . . . . . 37
6.1. Algorithm Evaluation . . . . . . . . . . . . . . . . . . 37
7. Algorithm Work to Be Done . . . . . . . . . . . . . . . . . . 41
8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 42
9. Security Considerations . . . . . . . . . . . . . . . . . . . 42
10. References . . . . . . . . . . . . . . . . . . . . . . . . . 42
10.1. Normative References . . . . . . . . . . . . . . . . . . 42
10.2. Informative References . . . . . . . . . . . . . . . . . 42
Appendix A. Option 2: Computing GADAG using SPFs . . . . . . . . 44
Appendix B. Option 3: Computing GADAG using a hybrid method . . 48
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 50
1. Introduction
MRT Fast-Reroute requires that packets can be forwarded not only on
the shortest-path tree, but also on two Maximally Redundant Trees
(MRTs), referred to as the MRT-Blue and the MRT-Red. A router which
experiences a local failure must also have pre-determined which
alternate to use. This document defines how to compute these three
things for use in MRT-FRR and describes the algorithm design
decisions and rationale. The algorithm is based on those presented
in [MRTLinear] and expanded in [EnyediThesis].
Just as packets routed on a hop-by-hop basis require that each router
compute a shortest-path tree which is consistent, it is necessary for
each router to compute the MRT-Blue next-hops and MRT-Red next-hops
in a consistent fashion. This document defines the MRT Lowpoint
algorithm to be used as a standard in the default MRT profile for MRT-FRR.
As now, a router's FIB will contain primary next-hops for the current
shortest-path tree for forwarding traffic. In addition, a router's
FIB will contain primary next-hops for the MRT-Blue for forwarding
received traffic on the MRT-Blue and primary next-hops for the MRT-
Red for forwarding received traffic on the MRT-Red.
What alternate next-hops a point-of-local-repair (PLR) selects need
not be consistent - but loops must be prevented. To reduce
congestion, it is possible for multiple alternate next-hops to be
selected; in the context of MRT alternates, each of those alternate
next-hops would be equal-cost paths.
This document defines an algorithm for selecting an appropriate MRT
alternate for consideration. Other alternates, e.g. LFAs that are
downstream paths, may be preferred when available, and that policy-
based alternate selection process [I-D.ietf-rtgwg-lfa-manageability]
is not captured in this document.
[E]---[D]---| [E]<--[D]<--| [E]-->[D]
| | | | ^ | |
| | | V | | V
[R] [F] [C] [R] [F] [C] [R] [F] [C]
| | | ^ ^ | |
| | | | | V |
[A]---[B]---| [A]-->[B] [A]---[B]<--|
(a) (b) (c)
a 2-connected graph MRT-Blue towards R MRT-Red towards R
Figure 1
Algorithms for computing MRTs can handle arbitrary network topologies
where the whole network graph is not 2-connected, as in Figure 2, as
well as the easier case where the network graph is 2-connected
(Figure 1). Each MRT is a spanning tree. The pair of MRTs provide
two paths from every node X to the root of the MRTs. Those paths
share the minimum number of nodes and the minimum number of links.
Each such shared node is a cut-vertex. Any shared links are cut-links.
[E]---[D]---| |---[J]
| | | | |
| | | | |
[R] [F] [C]---[G] |
| | | | |
| | | | |
[A]---[B]---| |---[H]
(a) a graph that isn't 2-connected
[E]<--[D]<--| [J] [E]-->[D]---| |---[J]
| ^ | | | | | ^
V | | | V V V |
[R] [F] [C]<--[G] | [R] [F] [C]<--[G] |
^ ^ ^ | ^ | | |
| | | V | V | |
[A]-->[B]---| |---[H] [A]<--[B]<--| [H]
(b) MRT-Blue towards R (c) MRT-Red towards R
Figure 2
2. Terminology and Definitions
network graph: A graph that reflects the network topology where all
links connect exactly two nodes and broadcast links have been
transformed into the standard pseudo-node representation.
Redundant Trees (RT): A pair of trees where the path from any node X
to the root R on the first tree is node-disjoint with the path
from the same node X to the root along the second tree. These can
be computed in 2-connected graphs.
Maximally Redundant Trees (MRT): A pair of trees where the path
from any node X to the root R along the first tree and the path
from the same node X to the root along the second tree share the
minimum number of nodes and the minimum number of links. Each
such shared node is a cut-vertex. Any shared links are cut-links.
Any RT is an MRT but many MRTs are not RTs.
MRT-Red: MRT-Red is used to describe one of the two MRTs; it is
used to describe the associated forwarding topology and MT-ID.
Specifically, MRT-Red is the decreasing MRT where links in the
GADAG are taken in the direction from a higher topologically
ordered node to a lower one.
MRT-Blue: MRT-Blue is used to describe one of the two MRTs; it is
used to describe the associated forwarding topology and MT-ID.
Specifically, MRT-Blue is the increasing MRT where links in the
GADAG are taken in the direction from a lower topologically
ordered node to a higher one.
cut-vertex: A vertex whose removal partitions the network.
cut-link: A link whose removal partitions the network. A cut-link
by definition must be connected between two cut-vertices. If
there are multiple parallel links, then they are referred to as
cut-links in this document if removing the set of parallel links
would partition the network.
2-connected: A graph that has no cut-vertices. This is a graph
that requires two nodes to be removed before the network is partitioned.
spanning tree: A tree containing links that connects all nodes in
the network graph.
back-edge: In the context of a spanning tree computed via a depth-
first search, a back-edge is a link that connects a descendant of
a node x with an ancestor of x.
2-connected cluster: A maximal set of nodes that are 2-connected.
In a network graph with at least one cut-vertex, there will be
multiple 2-connected clusters.
block: Either a 2-connected cluster, a cut-edge, or an isolated node.
DAG: Directed Acyclic Graph - a digraph containing no directed cycle.
ADAG: Almost Directed Acyclic Graph - a digraph that can be
transformed into a DAG by removing a single node (the root node).
GADAG: Generalized ADAG - a digraph, which has only ADAGs as all of
its blocks. The root of such a block is the node closest to the
global root (e.g. with uniform link costs).
DFS: Depth-First Search
DFS ancestor: A node n is a DFS ancestor of x if n is on the DFS-
tree path from the DFS root to x.
DFS descendant: A node n is a DFS descendant of x if x is on the
DFS-tree path from the DFS root to n.
ear: A path along not-yet-included-in-the-GADAG nodes that starts
at a node that is already-included-in-the-GADAG and that ends at a
node that is already-included-in-the-GADAG. The starting and
ending nodes may be the same node if it is a cut-vertex.
X >> Y or Y << X: Indicates the relationship between X and Y in a
partial order, such as found in a GADAG. X >> Y means that X is
higher in the partial order than Y. Y << X means that Y is lower
in the partial order than X.
X > Y or Y < X: Indicates the relationship between X and Y in the
total order, such as found via a topological sort. X > Y means
that X is higher in the total order than Y. Y < X means that Y is
lower in the total order than X.
proxy-node: A node added to the network graph to represent a multi-
homed prefix or routers outside the local MRT-fast-reroute-
supporting island of routers. The key property of proxy-nodes is
that traffic cannot transit them.
3. Algorithm Key Concepts
There are five key concepts that are critical for understanding the
MRT Lowpoint algorithm and other algorithms for computing MRTs. The
first is the idea of partially ordering the nodes in a network graph
with regard to each other and to the GADAG root. The second is the
idea of finding an ear of nodes and adding them in the correct
direction. The third is the idea of a Low-Point value and how it can
be used to identify cut-vertices and to find a second path towards
the root. The fourth is the idea that a non-2-connected graph is
made up of blocks, where a block is a 2-connected cluster, a cut-edge
or an isolated node. The fifth is the idea of a local-root for each
node; this is used to compute ADAGs in each block.
3.1. Partial Ordering for Disjoint Paths
Given any two nodes X and Y in a graph, a particular total order
means that either X < Y or X > Y in that total order. An example
would be a graph where the nodes are ranked based upon their unique
IP loopback addresses. In a partial order, there may be some nodes
for which it can't be determined whether X << Y or X >> Y. A partial
order can be captured in a directed graph, as shown in Figure 3. In
a graphical representation, a link directed from X to Y indicates
that X is a neighbor of Y in the network graph and X << Y.
[A]<---[R]    [E]         R << A << B << C << D << E
 |             ^          R << A << B << F << G << H << D << E
 |             |
 V             |          Unspecified Relationships:
[B]--->[C]--->[D]              C and F
 |             ^               C and G
 |             |               C and H
 V             |
[F]--->[G]--->[H]
Figure 3: Directed Graph showing a Partial Order
To compute MRTs, the root of the MRTs is at both the very bottom and
the very top of the partial ordering. This means that from any node
X, one can pick nodes higher in the order until the root is reached.
Similarly, from any node X, one can pick nodes lower in the order
until the root is reached. For instance, in Figure 4, from G the
higher nodes picked can be traced by following the directed links and
are H, D, E and R. Similarly, from G the lower nodes picked can be
traced by reversing the directed links and are F, B, A, and R. A
graph that represents this modified partial order is no longer a DAG;
it is termed an Almost DAG (ADAG) because if the links directed to
the root were removed, it would be a DAG.
[A]<---[R]<---[E]         R << A << B << C << R
 |      ^      ^          R << A << B << C << D << E << R
 |      |      |          R << A << B << F << G << H << D << E << R
 V      |      |
[B]--->[C]--->[D]         Unspecified Relationships:
 |             ^               C and F
 |             |               C and G
 V             |               C and H
[F]--->[G]--->[H]
Figure 4: ADAG showing a Partial Order with R lowest and highest
Most importantly, if a node Y >> X, then Y can only appear on the
increasing path from X to the root and never on the decreasing path.
Similarly, if a node Z << X, then Z can only appear on the decreasing
path from X to the root and never on the increasing path.
When following the increasing paths, it is possible to pick multiple
higher nodes and still have the certainty that those paths will be
disjoint from the decreasing paths. E.g. in the previous example
node B has multiple possibilities to forward packets along an
increasing path: it can either forward packets to C or F.
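This behaviour can be checked with a short Python sketch (not part of this specification) that encodes the ADAG of Figure 4 as a successor map, as read from the figure, and follows the increasing and decreasing paths from G described above:

```python
# Directed links of the ADAG in Figure 4 (successor = next node higher
# in the partial order), as read from the ascii figure.
adag = {
    "R": ["A"], "A": ["B"], "B": ["C", "F"], "C": ["D", "R"],
    "D": ["E"], "E": ["R"], "F": ["G"], "G": ["H"], "H": ["D"],
}

# Predecessor map: reverse every link for the decreasing direction.
pred = {}
for u, vs in adag.items():
    for v in vs:
        pred.setdefault(v, []).append(u)

def walk(node, links):
    """Follow links until the root R is reached (choices from G are unique)."""
    path = [node]
    while node != "R":
        node = links[node][0]
        path.append(node)
    return path

print(walk("G", adag))  # increasing: ['G', 'H', 'D', 'E', 'R']
print(walk("G", pred))  # decreasing: ['G', 'F', 'B', 'A', 'R']
```

The two paths from G share only G itself and the root R, which is exactly the disjointness property the partial order guarantees.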
3.2. Finding an Ear and the Correct Direction
For simplicity, the basic idea of creating a GADAG by adding ears is
described assuming that the network graph is a single 2-connected
cluster so that an ADAG is sufficient. Generalizing to multiple
blocks is done by considering the block-roots instead of the GADAG
root - and the actual algorithm is given in Section 4.4.
In order to understand the basic idea of finding an ADAG, first
suppose that we have already a partial ADAG, which doesn't contain
all the nodes in the block yet, and we want to extend it to cover all
the nodes. Suppose that we find a path from a node X to Y such that
X and Y are already contained by our partial ADAG, but all the
remaining nodes along the path are not added to the ADAG yet. We
refer to such a path as an ear.
Recall that our ADAG is closely related to a partial order, more
precisely, if we remove root R, the remaining DAG describes a partial
order of the nodes. If we suppose that neither X nor Y is the root,
we may be able to compare them. If one of them is definitely lesser
with respect to our partial order (say X<<Y), we can add the new path
to the ADAG in a direction from X to Y. As an example consider Figure 5.
E---D---| E<--D---| E<--D<--|
| | | | ^ | | ^ |
| | | V | | V | |
R F C R F C R F C
| | | | ^ | | ^ ^
| | | V | | V | |
A---B---| A-->B---| A-->B---|
(a) (b) (c)
(a) A 2-connected graph
(b) Partial ADAG (C is not included)
(c) Resulting ADAG after adding path (or ear) B-C-D
Figure 5
In this partial ADAG, node C is not yet included. However, we can
find path B-C-D, where both endpoints are contained by this partial
ADAG (we say those nodes are *ready* in the sequel), and the
remaining node (node C) is not contained yet. If we remove R, the
remaining DAG defines a partial order, and with respect to this
partial order we can say that B<<D, so we can add the path to the
ADAG in the direction from B to D (arcs B->C and C->D are added). If
B were strictly greater than D, we would add the same path in reverse
If in the partial order where an ear's two ends are X and Y, X << Y,
then there must already be a directed path from X to Y already in the
ADAG. The ear must be added in a direction such that it doesn't
create a cycle; therefore the ear must go from X to Y.
In the case, when X and Y are not ordered with each other, we can
select either direction for the ear. We have no restriction since
neither of the directions can result in a cycle. In the corner case
when one of the endpoints of an ear, say X, is the root (recall that
the two endpoints must be different), we could use both directions
again for the ear because the root can be considered both as smaller
and as greater than Y. However, we strictly pick that direction in
which the root is lower than Y. The logic for this decision is
explained in Section 4.6.
A partial ADAG is started by finding a cycle from the root R back to
itself. This can be done by selecting a non-ready neighbor N of R
and then finding a path from N to R that doesn't use any links
between R and N. The direction of the cycle can be assigned either
way since it is starting the ordering.
Once a partial ADAG is already present, we can always add ears to it:
just select a non-ready neighbor N of a ready node Q, such that Q is
not the root, find a path from N to the root in the graph with Q
removed. This path is an ear where the first node of the ear is Q,
the next is N, then the path until the first ready node the path
reached (that second ready node is the other endpoint of the path).
Since the graph is 2-connected, there must be a path from N to R
without Q.
It is always possible to select a non-ready neighbor N of a ready
node Q so that Q is not the root R. Because the network is
2-connected, N must be connected to two different nodes and only one
can be R. Because the initial cycle has already been added to the
ADAG, there are ready nodes that are not R. Since the graph is
2-connected, while there are non-ready nodes, there must be a non-
ready neighbor N of a ready node that is not R.
Create an empty ADAG. Add root to the ADAG.
Mark root as IN_GADAG.
Select an arbitrary cycle containing root.
Add the arbitrary cycle to the ADAG.
Mark cycle's nodes as IN_GADAG.
Add cycle's non-root nodes to process_list.
while there exists connected nodes in graph that are not IN_GADAG
Select a new ear. Let its endpoints be X and Y.
if Y is root or (Y << X)
add the ear towards X to the ADAG
else // (a) X is root or (b)X << Y or (c) X, Y not ordered
Add the ear towards Y to the ADAG
Figure 6: Generic Algorithm to find ears and their direction in
2-connected graph
Algorithm Figure 6 merely requires that a cycle or ear be selected
without specifying how. Regardless of the way of selecting the path,
we will get an ADAG. The method used for finding and selecting the
ears is important; shorter ears result in shorter paths along the
MRTs. The MRT Lowpoint algorithm's method using Low-Point
Inheritance is defined in Section 4.4. Other methods are described
in the Appendices (Appendix A and Appendix B).
As an example, consider Figure 5 again. First, we select the
shortest cycle containing R, which can be R-A-B-F-D-E (uniform link
costs were assumed), so we get to the situation depicted in Figure 5
(b). Finally, we find a node next to a ready node; that must be node
C and assume we reached it from ready node B. We search a path from C
to R without B in the original graph. The first ready node along
this is node D, so the open ear is B-C-D. Since B<<D, we add arc B->C
and C->D to the ADAG. Since all the nodes are ready, we stop at this point.
3.3. Low-Point Values and Their Uses
A basic way of computing a spanning tree on a network graph is to run
a depth-first-search, such as given in Figure 7. This tree has the
important property that if there is a link (x, n), then either n is a
DFS ancestor of x or n is a DFS descendant of x. In other words,
either n is on the path from the root to x or x is on the path from
the root to n.
global_variable: dfs_number
DFS_Visit(node x, node parent)
D(x) = dfs_number
dfs_number += 1
x.dfs_parent = parent
for each link (x, w)
if D(w) is not set
DFS_Visit(w, x)
Run_DFS(node root)
dfs_number = 0
DFS_Visit(root, NONE)
Figure 7: Basic Depth-First Search algorithm
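Figure 7 translates almost line for line into Python. The sketch below (not part of this specification) runs it on the 2-connected graph of Figure 1(a); the edge list and the sorted neighbor ordering are assumptions made here so the visit order is deterministic:

```python
# Figure 7's DFS in Python, run on the 2-connected graph of Figure 1(a).
graph = {
    "R": ["A", "E"], "A": ["B", "R"], "B": ["A", "C", "F"],
    "C": ["B", "D"], "D": ["C", "E", "F"], "E": ["D", "R"],
    "F": ["B", "D"],
}

D = {}            # D(x): DFS number of x
dfs_parent = {}   # x.dfs_parent
dfs_number = 0    # global_variable: dfs_number

def dfs_visit(x, parent):
    global dfs_number
    D[x] = dfs_number
    dfs_number += 1
    dfs_parent[x] = parent
    for w in graph[x]:        # for each link (x, w)
        if w not in D:        # if D(w) is not set
            dfs_visit(w, x)

dfs_visit("R", None)          # Run_DFS(root)
print(D)  # {'R': 0, 'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6}
```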
Given a node x, one can compute the minimal DFS number of the
neighbours of x, i.e. min( D(w) if (x,w) is a link). This gives the
highest attachment point neighbouring x. What is interesting,
though, is what is the highest attachment point from x and x's
descendants. This is what is determined by computing the Low-Point
value, as given in Algorithm Figure 9 and illustrated on a graph in
Figure 8.
[E]---| [J]-------[I] [P]---[O]
| | | | | |
| | | | | |
[R] [D]---[C]--[F] [H]---[K] [N]
| | | | | |
| | | | | |
[A]--------[B] [G]---| [L]---[M]
(a) a non-2-connected graph
[E]----| [J]---------[I] [P]------[O]
(5, ) | (10, ) (9, ) (16, ) (15, )
| | | | | |
| | | | | |
[R] [D]---[C]---[F] [H]----[K] [N]
(0, ) (4, ) (3, ) (6, ) (8, ) (11, ) (14, )
| | | | | |
| | | | | |
[A]---------[B] [G]----| [L]------[M]
(1, ) (2, ) (7, ) (12, ) (13, )
(b) with DFS values assigned (D(x), L(x))
[E]----| [J]---------[I] [P]------[O]
(5,0) | (10,3) (9,3) (16,11) (15,11)
| | | | | |
| | | | | |
[R] [D]---[C]---[F] [H]----[K] [N]
(0, ) (4,0) (3,0) (6,3) (8,3) (11,11) (14,11)
| | | | | |
| | | | | |
[A]---------[B] [G]----| [L]------[M]
(1,0) (2,0) (7,3) (12,11) (13,11)
(c) with low-point values assigned (D(x), L(x))
Figure 8
global_variable: dfs_number

Lowpoint_Visit(node x, node parent, interface p_to_x)
   D(x) = dfs_number
   L(x) = D(x)
   dfs_number += 1
   x.dfs_parent = parent
   x.dfs_parent_intf = p_to_x
   x.lowpoint_parent = NONE
   for each interface intf of x
      if D(intf.remote_node) is not set
         Lowpoint_Visit(intf.remote_node, x, intf)
         if L(intf.remote_node) < L(x)
            L(x) = L(intf.remote_node)
            x.lowpoint_parent = intf.remote_node
            x.lowpoint_parent_intf = intf
      else if intf.remote_node is not parent
         if D(intf.remote_node) < L(x)
            L(x) = D(intf.remote_node)
            x.lowpoint_parent = intf.remote_node
            x.lowpoint_parent_intf = intf

Run_Lowpoint(node root)
   dfs_number = 0
   Lowpoint_Visit(root, NONE, NONE)
Figure 9: Computing Low-Point value
From the low-point value and lowpoint parent, there are two very
useful things which motivate our computation.
First, if there is a child c of x such that L(c) >= D(x), then there
are no paths in the network graph that go from c or its descendants
to an ancestor of x - and therefore x is a cut-vertex. This is
useful because it allows identification of the cut-vertices and thus
the blocks. As seen in Figure 8, even if L(x) < D(x), there may be a
block that contains both the root and a DFS-child of a node while
other DFS-children might be in different blocks. In this example,
C's child D is in the same block as R while F is not.
Second, by repeatedly following the path given by lowpoint_parent,
there is a path from x back to an ancestor of x that does not use the
link [x, x.dfs_parent] in either direction. The full path need not
be taken, but this gives a way of finding an initial cycle and then
ears.
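The Low-Point computation of Figure 9 can be sketched in Python as follows (a non-normative simplification: plain adjacency lists instead of interfaces, and our own names):

```python
# Illustrative Python version of the Low-Point computation in Figure 9.
def compute_lowpoints(graph, root):
    D, L, parent = {}, {}, {}
    counter = [0]

    def visit(x, p):
        D[x] = L[x] = counter[0]
        counter[0] += 1
        parent[x] = p
        for w in graph[x]:
            if w not in D:
                visit(w, x)
                L[x] = min(L[x], L[w])   # inherit the child's low-point
            elif w != p:
                L[x] = min(L[x], D[w])   # back-link towards an ancestor

    visit(root, None)
    return D, L, parent

# A square R-A-B-C with a pendant node P hanging off C:
g = {'R': ['A', 'C'], 'A': ['R', 'B'], 'B': ['A', 'C'],
     'C': ['B', 'R', 'P'], 'P': ['C']}
D, L, parent = compute_lowpoints(g, 'R')
# L(P) >= D(C), so C is a cut-vertex, matching the test described above.
```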
3.4. Blocks in a Graph
A key idea for an MRT algorithm is that any non-2-connected graph is
made up of blocks (e.g. 2-connected clusters, cut-links, and/or
isolated nodes). To compute GADAGs and thus MRTs, computation is
done in each block to compute ADAGs or Redundant Trees and then those
ADAGs or Redundant Trees are combined into a GADAG or MRT.
[E]---| [J]-------[I] [P]---[O]
| | | | | |
| | | | | |
[R] [D]---[C]--[F] [H]---[K] [N]
| | | | | |
| | | | | |
[A]--------[B] [G]---| [L]---[M]
(a) A graph with four blocks that are:
3 2-connected clusters and a cut-link
[E]<--| [J]<------[I] [P]<--[O]
| | | ^ | ^
V | V | V |
[R] [D]<--[C] [F] [H]<---[K] [N]
^ | ^ ^
| V | |
[A]------->[B] [G]---| [L]-->[M]
(b) MRT-Blue
[E]---| [J]-------->[I] [P]-->[O]
| | |
V V V
[R] [D]-->[C]<---[F] [H]<---[K] [N]
^ | ^ | ^ |
| V | | | V
[A]<-------[B] [G]<--| [L]<--[M]
(c) MRT-Red
Figure 10
Consider the example depicted in Figure 10 (a). In this figure, a
special graph is presented, showing us all the ways 2-connected
clusters can be connected. It has four blocks: block 1 contains R,
A, B, C, D, E, block 2 contains C, F, G, H, I, J, block 3 contains K,
L, M, N, O, P, and block 4 is a cut-edge containing H and K. As can
be observed, the first two blocks have one common node (node C) and
blocks 2 and 3 do not have any common node, but they are connected
through a cut-edge that is block 4. No two blocks can have more than
one common node, since two blocks with at least 2 common nodes would
qualify as a single 2-connected cluster.
Moreover, observe that if we want to get from one block to another,
we must use a cut-vertex (the cut-vertices in this graph are C, H,
K), regardless of the path selected, so we can say that all the paths
from block 3 along the MRTs rooted at R will cross K first. This
observation means that if we want to find a pair of MRTs rooted at R,
then we need to build up a pair of RTs in block 3 with K as a root.
Similarly, we need to find another one in block 2 with C as a root,
and finally, we need the last one in block 1 with R as a root. When
all the trees are selected, we can simply combine them; when a block
is a cut-edge (as in block 4), that cut-edge is added in the same
direction to both of the trees. The resulting trees are depicted in
Figure 10 (b) and (c).
Similarly, to create a GADAG it is sufficient to compute ADAGs in
each block and connect them.
It is necessary, therefore, to identify the cut-vertices, the blocks
and identify the appropriate local-root to use for each block.
3.5. Determining Local-Root and Assigning Block-ID
Each node in a network graph has a local-root, which is the
cut-vertex (or root) in the same block that is closest to the root.
The local-root is used to determine whether two nodes share a common
block.
Compute_Localroot(node x, node localroot)
   x.localroot = localroot
   for each DFS child c
      if L(c) < D(x)   //x is not a cut-vertex
         Compute_Localroot(c, x.localroot)
      else
         mark x as cut-vertex
         Compute_Localroot(c, x)

Compute_Localroot(root, root)
Figure 11: A method for computing local-roots
There are two different ways of computing the local-root for each
node. The stand-alone method is given in Figure 11 and better
illustrates the concept; it is used by the MRT algorithms given in
Appendix A and Appendix B. The second method is integrated into the
MRT Lowpoint algorithm, which computes a GADAG using Low-Point
inheritance; its essence is given in Figure 12.
Get the current node, s.
Compute an ear (either through lowpoint inheritance
   or by following dfs parents) from s to a ready node e.
(Thus, s is not e, if there is such an ear.)
if s is e
   for each node x in the ear that is not s
      x.localroot = s
else
   for each node x in the ear that is not s or e
      x.localroot = e.localroot
Once the local-roots are known, two nodes X and Y are in a common
block if and only if one of the following three conditions applies.
o Y's local-root is X's local-root : They are in the same block and
neither is the cut-vertex closest to the root.
o Y's local-root is X: X is the cut-vertex closest to the root for
Y's block
o Y is X's local-root: Y is the cut-vertex closest to the root for
X's block
Once we have computed the local-root for each node in the network
graph, we can assign to each node a block ID that represents the
block in which the node is present. This computation is shown in
Figure 13.
global_var: max_block_id

Assign_Block_ID(x, cur_block_id)
   x.block_id = cur_block_id
   foreach DFS child c of x
      if (c.local_root is x)
         max_block_id += 1
         Assign_Block_ID(c, max_block_id)
      else
         Assign_Block_ID(c, cur_block_id)

max_block_id = 0
Assign_Block_ID(root, max_block_id)
Figure 13: Assigning block id to identify blocks
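Figures 11 and 13 can be combined into a runnable Python sketch (non-normative; the input encoding, a DFS-children map plus the D and L values, and all names are ours):

```python
# Illustrative Python versions of Figure 11 (local-roots) and Figure 13
# (block IDs), driven by precomputed DFS numbers D and low-points L.
def localroots_and_blocks(children, D, L, root):
    localroot, block_id = {}, {}
    state = {'max_block_id': 0}

    def compute_localroot(x, lr):            # Figure 11
        localroot[x] = lr
        for c in children.get(x, []):
            if L[c] < D[x]:                  # x is not a cut-vertex for c
                compute_localroot(c, lr)
            else:                            # x is a cut-vertex
                compute_localroot(c, x)

    def assign_block_id(x, cur):             # Figure 13
        block_id[x] = cur
        for c in children.get(x, []):
            if localroot[c] == x:
                state['max_block_id'] += 1
                assign_block_id(c, state['max_block_id'])
            else:
                assign_block_id(c, cur)

    compute_localroot(root, root)
    assign_block_id(root, 0)
    return localroot, block_id

# DFS results for a square R-A-B-C plus a pendant node P off C:
children = {'R': ['A'], 'A': ['B'], 'B': ['C'], 'C': ['P']}
D = {'R': 0, 'A': 1, 'B': 2, 'C': 3, 'P': 4}
L = {'R': 0, 'A': 0, 'B': 0, 'C': 0, 'P': 4}
localroot, block_id = localroots_and_blocks(children, D, L, 'R')
# A, B and C share a block rooted at R; P's local-root is the
# cut-vertex C, so P lands in its own block.
```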
4. Algorithm Sections
This algorithm computes one GADAG that is then used by a router to
determine its MRT-Blue and MRT-Red next-hops to all destinations.
Finally, based upon that information, alternates are selected for
each next-hop to each destination. The different parts of this
algorithm are described below. These work on a network graph after,
for instance, its interfaces are ordered as per Figure 14.
1. Compute the local MRT Island for the particular MRT Profile.
[See Section 4.1.]
2. Select the root to use for the GADAG. [See Section 4.2.]
3. Initialize all interfaces to UNDIRECTED. [See Section 4.3.]
4. Compute the DFS value, e.g. D(x), and lowpoint value, L(x). [See
Figure 9.]
5. Construct the GADAG. [See Section 4.4]
6. Assign directions to all interfaces that are still UNDIRECTED.
[See Section 4.5.]
7. From the computing router x, compute the next-hops for the MRT-
Blue and MRT-Red. [See Section 4.6.]
8. Identify alternates for each next-hop to each destination by
determining which one of the blue MRT and the red MRT the
computing router x should select. [See Section 4.7.]
To ensure consistency in computation, all routers MUST order
interfaces identically. This is necessary for the DFS, where the
selection order of the interfaces to explore results in different
trees, and for computing the GADAG, where the selection order of the
interfaces to use to form ears can result in different GADAGs. The
required ordering between two interfaces from the same router x is
given in Figure 14.
Interface_Compare(interface a, interface b)
   if a.metric < b.metric
      return A_LESS_THAN_B
   if b.metric < a.metric
      return B_LESS_THAN_A
   if a.neighbor.loopback_addr < b.neighbor.loopback_addr
      return A_LESS_THAN_B
   if b.neighbor.loopback_addr < a.neighbor.loopback_addr
      return B_LESS_THAN_A
   // Same metric to same node, so the order doesn't matter anymore.
   // To have a unique, consistent total order,
   // tie-break based on, for example, the link's linkData as
   // distributed in an OSPF Router-LSA
   if a.link_data < b.link_data
      return A_LESS_THAN_B
   return B_LESS_THAN_A
Figure 14: Rules for ranking multiple interfaces. Order is from low
to high.
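The ordering of Figure 14 can be expressed as a Python sort key (a sketch only; the Interface record and the string comparison of loopback addresses are our illustrative assumptions):

```python
# Illustrative Python rendering of the interface ordering of Figure 14:
# metric first, then neighbor loopback address, then link data.
from collections import namedtuple

Interface = namedtuple('Interface', 'metric neighbor_loopback link_data')

def interface_sort_key(intf):
    # Tuple comparison gives exactly the cascaded tie-breaking rules.
    return (intf.metric, intf.neighbor_loopback, intf.link_data)

intfs = [Interface(10, '192.0.2.2', 7),
         Interface(10, '192.0.2.1', 3),
         Interface(5, '192.0.2.9', 1)]
ordered = sorted(intfs, key=interface_sort_key)
# Lowest metric sorts first; equal metrics tie-break on loopback
# address, and the link data gives a unique total order.
```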
4.1. MRT Island Identification
The local MRT Island for a particular MRT profile can be determined
by starting from the computing router in the network graph and doing
a breadth-first-search (BFS), exploring only links that aren't MRT-
ineligible and only routers that support the given MRT profile, as
shown in Figure 15.
MRT_Island_Identification(topology, computing_rtr, profile_id)
   for all routers in topology
      rtr.IN_MRT_ISLAND = FALSE
   computing_rtr.IN_MRT_ISLAND = TRUE
   explore_list = { computing_rtr }
   while (explore_list is not empty)
      next_rtr = remove_head(explore_list)
      for each interface in next_rtr
         if interface is not MRT-ineligible
            if ((interface.remote_node supports profile_id) and
                (interface.remote_node.IN_MRT_ISLAND is FALSE))
               interface.remote_node.IN_MRT_ISLAND = TRUE
               add_to_tail(explore_list, interface.remote_node)
Figure 15: MRT Island Identification
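The BFS of Figure 15 is easy to render as runnable Python (a sketch; the topology encoding, a map from router to (neighbor, eligible) pairs, is our illustrative assumption):

```python
# Illustrative Python version of the MRT Island BFS of Figure 15.
from collections import deque

def mrt_island(links, supports_profile, computing_rtr):
    """links: dict router -> list of (neighbor, eligible) pairs."""
    in_island = {computing_rtr}
    explore = deque([computing_rtr])
    while explore:
        rtr = explore.popleft()
        for nbr, eligible in links[rtr]:
            # Skip MRT-ineligible links and routers without the profile.
            if eligible and supports_profile[nbr] and nbr not in in_island:
                in_island.add(nbr)
                explore.append(nbr)
    return in_island

links = {'S': [('A', True), ('B', False)],
         'A': [('S', True), ('B', True)],
         'B': [('S', False), ('A', True)]}
supports = {'S': True, 'A': True, 'B': False}
# B does not support the profile, so the island from S is {S, A}.
```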
4.2. Root Selection
In [I-D.atlas-ospf-mrt], a mechanism is given for routers to
advertise the GADAG Root Selection Priority and consistently select a
GADAG Root inside the local MRT Island. Before beginning
computation, the network graph is reduced to contain only the set of
routers that support the specific MRT profile whose MRTs are being
computed.
Off-line analysis that considers the centrality of a router may help
determine how good a choice a particular router is for the role of
GADAG root.
4.3. Initialization
Before running the algorithm, there is the standard type of
initialization to be done, such as clearing any computed DFS-values,
lowpoint-values, DFS-parents, lowpoint-parents, any MRT-computed
next-hops, and flags associated with the algorithm.
It is assumed that a regular SPF computation has been run so that the
primary next-hops from the computing router to each destination are
known. This is required for determining alternates at the last step.
Initially, all interfaces must be initialized to UNDIRECTED. Whether
they are OUTGOING, INCOMING or both is determined when the GADAG is
constructed and augmented.
It is possible that some links and nodes will be marked as unusable,
whether because of configuration, IGP flooding (e.g. MRT-ineligible
links in [I-D.atlas-ospf-mrt]), overload, or due to a transient cause
such as [RFC3137]. In the algorithm description, it is assumed that
such links and nodes will not be explored or used and no more
discussion is given of this restriction.
4.4. MRT Lowpoint Algorithm: Computing GADAG using lowpoint inheritance
As discussed in Section 3.2, it is necessary to find ears from a node
x that is already in the GADAG (known as IN_GADAG). There are two
methods to find ears; both are required. The first is by going to a
not IN_GADAG DFS-child and then following the chain of low-point
parents until an IN_GADAG node is found. The second is by going to a
not IN_GADAG neighbor and then following the chain of DFS parents
until an IN_GADAG node is found. As an ear is found, the associated
interfaces are marked based on the direction taken. The nodes in the
ear are marked as IN_GADAG. In the algorithm, first the ears via
DFS-children are found and then the ears via DFS-neighbors are found.
By adding both types of ears when an IN_GADAG node is processed, all
ears that connect to that node are found. The order in which the
IN_GADAG nodes are processed is, of course, key to the algorithm. The
order is a stack of ears so the most recent ear is found at the top
of the stack. Of course, the stack stores nodes and not ears, so an
ordered list of nodes, from the first node in the ear to the last
node in the ear, is created as the ear is explored and then that list
is pushed onto the stack.
Each ear represents a partial order (see Figure 4) and processing the
nodes in order along each ear ensures that all ears connecting to a
node are found before a node higher in the partial order has its ears
explored. This means that the direction of the links in the ear is
always from the node x being processed towards the other end of the
ear. Additionally, by using a stack of ears, this means that any
unprocessed nodes in previous ears can only be ordered higher than
nodes in the ears below it on the stack.
In this algorithm that depends upon Low-Point inheritance, it is
necessary that every node have a low-point parent that is not itself.
If a node is a cut-vertex, that may not yet be the case. Therefore,
any nodes without a low-point parent will have their low-point parent
set to their DFS parent and their low-point value set to the DFS-
value of their parent. This assignment also properly allows an ear
between two cut-vertices.
Finally, the algorithm simultaneously computes each node's local-
root, as described in Figure 12. This is further elaborated as
follows. The local-root can be inherited from the node at the end of
the ear unless the end of the ear is x itself, in which case the
local-root for all the nodes in the ear would be x. This is because
whenever the first cycle is found in a block, or an ear involving a
bridge is computed, the cut-vertex closest to the root would be x
itself. In all other scenarios, the properties of lowpoint/dfs
parents ensure that the end of the ear will be in the same block, and
thus inheriting its local-root would be the correct local-root for
all newly added nodes.
The pseudo-code for the GADAG algorithm (assuming that the adjustment
of lowpoint for cut-vertices has been made) is shown in Figure 16.
Construct_Ear(x, Stack, intf, type)
   ear_list = empty
   cur_node = intf.remote_node
   cur_intf = intf
   not_done = true

   while not_done
      cur_intf.UNDIRECTED = false
      cur_intf.OUTGOING = true
      cur_intf.remote_intf.UNDIRECTED = false
      cur_intf.remote_intf.INCOMING = true

      if cur_node.IN_GADAG is false
         cur_node.IN_GADAG = true
         add_to_list_end(ear_list, cur_node)
         if type is CHILD
            cur_intf = cur_node.lowpoint_parent_intf
            cur_node = cur_node.lowpoint_parent
         else   // type must be NEIGHBOR
            cur_intf = cur_node.dfs_parent_intf
            cur_node = cur_node.dfs_parent
      else
         not_done = false

   if (type is CHILD) and (cur_node is x)
      //x is a cut-vertex and the local root for
      //the block in which the ear is computed
      localroot = x
   else
      // Inherit local-root from the end of the ear
      localroot = cur_node.localroot

   while ear_list is not empty
      y = remove_end_item_from_list(ear_list)
      y.localroot = localroot
      push(Stack, y)

Construct_GADAG_via_Lowpoint(topology, root)
   root.IN_GADAG = true
   root.localroot = root
   Initialize Stack to empty
   push root onto Stack
   while (Stack is not empty)
      x = pop(Stack)
      foreach interface intf of x
         if ((intf.remote_node.IN_GADAG == false) and
             (intf.remote_node.dfs_parent is x))
            Construct_Ear(x, Stack, intf, CHILD)
      foreach interface intf of x
         if ((intf.remote_node.IN_GADAG == false) and
             (intf.remote_node.dfs_parent is not x))
            Construct_Ear(x, Stack, intf, NEIGHBOR)
Construct_GADAG_via_Lowpoint(topology, root)
Figure 16: Low-point Inheritance GADAG algorithm
4.5. Augmenting the GADAG by directing all links
The GADAG, regardless of the algorithm used to construct it, at this
point could be used to find MRTs but the topology does not include
all links in the network graph. That has two impacts. First, there
might be shorter paths that respect the GADAG partial ordering and so
the alternate paths would not be as short as possible. Second, there
may be additional paths between a router x and the root that are not
included in the GADAG. Including those provides potentially more
bandwidth to traffic flowing on the alternates and may reduce
congestion compared to just using the GADAG as currently constructed.
The goal is thus to assign direction to every remaining link marked
as UNDIRECTED to improve the paths and number of paths found when the
MRTs are computed.
To do this, we need to establish a total order that respects the
partial order described by the GADAG. This can be done using Kahn's
topological sort[Kahn_1962_topo_sort] which essentially assigns a
number to a node x only after all nodes before it (e.g. with a link
incoming to x) have had their numbers assigned. The only issue with
the topological sort is that it works on DAGs and not on ADAGs or
GADAGs.
To convert a GADAG to a DAG, it is necessary to remove all links that
point to a root of a block from within that block. That provides the
necessary conversion to a DAG and then a topological sort can be
done. Finally, all UNDIRECTED links are assigned a direction based
upon the partial ordering. Any UNDIRECTED links that connect to a
root of a block from within that block are assigned a direction
INCOMING to that root. The exact details of this whole process are
captured in Figure 17.
Set_Block_Root_Incoming_Links(topo, root, mark_or_clear)
   foreach node x in topo
      if node x is a cut-vertex or root
         foreach interface i of x
            if (i.remote_node.localroot is x)
               if i.UNDIRECTED
                  i.OUTGOING = true
                  i.remote_intf.INCOMING = true
                  i.UNDIRECTED = false
                  i.remote_intf.UNDIRECTED = false
               if i.INCOMING
                  if mark_or_clear is mark
                     if i.OUTGOING // a cut-edge
                        i.STORE_INCOMING = true
                        i.INCOMING = false
                        i.remote_intf.STORE_OUTGOING = true
                        i.remote_intf.OUTGOING = false
                     i.TEMP_UNUSABLE = true
                     i.remote_intf.TEMP_UNUSABLE = true
                  else
                     i.TEMP_UNUSABLE = false
                     i.remote_intf.TEMP_UNUSABLE = false
               if i.STORE_INCOMING and (mark_or_clear is clear)
                  i.INCOMING = true
                  i.STORE_INCOMING = false
                  i.remote_intf.OUTGOING = true
                  i.remote_intf.STORE_OUTGOING = false
Run_Topological_Sort_GADAG(topo, root)
   Set_Block_Root_Incoming_Links(topo, root, MARK)
   foreach node x
      set x.unvisited to the count of x's incoming interfaces
          that aren't marked TEMP_UNUSABLE
   Initialize working_list to empty
   Initialize topo_order_list to empty
   add_to_list_end(working_list, root)
   while working_list is not empty
      y = remove_start_item_from_list(working_list)
      add_to_list_end(topo_order_list, y)
      foreach interface i of y
         if (i.OUTGOING) and (not i.TEMP_UNUSABLE)
            i.remote_node.unvisited -= 1
            if i.remote_node.unvisited is 0
               add_to_list_end(working_list, i.remote_node)
   next_topo_order = 1
   while topo_order_list is not empty
      y = remove_start_item_from_list(topo_order_list)
      y.topo_order = next_topo_order
      next_topo_order += 1
   Set_Block_Root_Incoming_Links(topo, root, CLEAR)

Add_Undirected_Links(topo, root)
   Run_Topological_Sort_GADAG(topo, root)
   foreach node x in topo
      foreach interface i of x
         if i.UNDIRECTED
            if x.topo_order < i.remote_node.topo_order
               i.OUTGOING = true
               i.UNDIRECTED = false
               i.remote_intf.INCOMING = true
               i.remote_intf.UNDIRECTED = false
            else
               i.INCOMING = true
               i.UNDIRECTED = false
               i.remote_intf.OUTGOING = true
               i.remote_intf.UNDIRECTED = false

Add_Undirected_Links(topo, root)
Figure 17: Assigning direction to UNDIRECTED links
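A minimal Kahn-style sort, as used inside Figure 17, can be sketched in Python. The encoding (a successor-list map) is our illustration, and we assume the links back into the block root have already been removed; here we use the ADAG of Figure 21 with its E->R link dropped:

```python
# Illustrative Kahn topological sort over a DAG given as successor lists.
from collections import deque

def topo_order(succ, root):
    # Count incoming links per node (the "unvisited" counters).
    indegree = {n: 0 for n in succ}
    for n in succ:
        for m in succ[n]:
            indegree[m] += 1
    order, work, next_num = {}, deque([root]), 1
    while work:
        y = work.popleft()
        order[y] = next_num          # number y once all predecessors done
        next_num += 1
        for m in succ[y]:
            indegree[m] -= 1
            if indegree[m] == 0:
                work.append(m)
    return order

# The ADAG of Figure 21 with the link back into the root R removed:
succ = {'R': ['A'], 'A': ['B'], 'B': ['F', 'C'], 'F': ['D'],
        'C': ['D'], 'D': ['E'], 'E': []}
order = topo_order(succ, 'R')
# Every remaining link goes from a lower topo_order to a higher one,
# so the total order respects the GADAG's partial order.
```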
Proxy-nodes do not need to be added to the network graph. They
cannot be transited and do not affect the MRTs that are computed.
The details of how the MRT-Blue and MRT-Red next-hops are computed
and how the appropriate alternate next-hops are selected are given in
Section 4.8.
4.6. Compute MRT next-hops
As was discussed in Section 3.1, once an ADAG is found, it is
straightforward to find the next-hops from any node X to the ADAG
root. However, in this algorithm, we want to reuse the common GADAG
and find not only the one pair of MRTs rooted at the GADAG root with
it, but find a pair rooted at each node. This is useful since it is
significantly faster to compute. It may also provide easier
troubleshooting of the MRT-Red and MRT-Blue.
The method for computing differently rooted MRTs from the common
GADAG is based on two ideas. First, if two nodes X and Y are ordered
with respect to each other in the partial order, then an SPF along
OUTGOING links (an increasing-SPF) and an SPF along INCOMING links (a
decreasing-SPF) can be used to find the increasing and decreasing
paths. Second, if two nodes X and Y aren't ordered with respect to
each other in the partial order, then intermediary nodes can be used
to create the paths by increasing/decreasing to the intermediary and
then decreasing/increasing to reach Y.
As usual, the two basic ideas will be discussed assuming the network
is two-connected. The generalization to multiple blocks is discussed
in Section 4.6.4. The full algorithm is given in Section 4.6.5.
4.6.1. MRT next-hops to all nodes partially ordered with respect to the
computing node
Finding two node-disjoint paths from the computing router X to any
node Y depends upon whether Y >> X or Y << X. As shown in Figure
18, if Y >> X, then there is an increasing path that goes from X to Y
without crossing R; this contains nodes in the interval [X,Y]. There
is also a decreasing path that decreases towards R and then decreases
from R to Y; this contains nodes in the interval [X,R-small] or
[R-great,Y]. The two paths cannot have common nodes other than X and
Y.
[Y]<---(Cloud 2)<--- [X]
| ^
| |
V |
(Cloud 3)--->[R]--->(Cloud 1)
MRT-Blue path: X->Cloud 2->Y
MRT-Red path: X->Cloud 1->R->Cloud 3->Y
Figure 18: Y >> X
Similar logic applies if Y << X, as shown in Figure 19. In this
case, the increasing path from X increases to R and then increases
from R to Y to use nodes in the intervals [X,R-great] and [R-small,
Y]. The decreasing path from X reaches Y without crossing R and uses
nodes in the interval [Y,X].
[X]<---(Cloud 2)<--- [Y]
| ^
| |
V |
(Cloud 3)--->[R]--->(Cloud 1)
MRT-Blue path: X->Cloud 3->R->Cloud 1->Y
MRT-Red path: X->Cloud 2->Y
Figure 19: Y << X
4.6.2. MRT next-hops to all nodes not partially ordered with respect to
the computing node
When X and Y are not ordered, the first path should increase until we
get to a node G, where G >> Y. At G, we need to decrease to Y. The
other path should be just the opposite: we must decrease until we get
to a node H, where H << Y, and then increase. Since R is both
smaller and greater than Y, such G and H must exist. It is also easy
to see that
these two paths must be node disjoint: the first path contains nodes
in interval [X,G] and [Y,G], while the second path contains nodes in
interval [H,X] and [H,Y]. This is illustrated in Figure 20. It is
necessary to decrease and then increase for the MRT-Blue and increase
and then decrease for the MRT-Red; if one simply increased for one
and decreased for the other, then both paths would go through the
root R.
(Cloud 6)<---[Y]<---(Cloud 5)<------------|
| |
| |
V |
[G]--->(Cloud 4)--->[R]--->(Cloud 1)--->[H]
^ |
| |
| |
(Cloud 3)<---[X]<---(Cloud 2)<-----------|
MRT-Blue path: decrease to H and increase to Y
X->Cloud 2->H->Cloud 5->Y
MRT-Red path: increase to G and decrease to Y
X->Cloud 3->G->Cloud 6->Y
Figure 20: X and Y unordered
This gives disjoint paths as long as G and H are not the same node.
Since G >> Y and H << Y, if G and H could be the same node, that
would have to be the root R. This is not possible because there is
only one incoming interface to the root R which is created when the
initial cycle is found. Recall from Figure 6 that whenever an ear
was found to have an end that was the root R, the ear was directed
from R so that the associated interface on R is outgoing and not
incoming. Therefore, there must be exactly one node M which is the
largest one before R, so the MRT-Red path will never reach R; it will
turn at M and decrease to Y.
4.6.3. Computing Redundant Tree next-hops in a 2-connected Graph
The basic ideas for computing RT next-hops in a 2-connected graph
were given in Section 4.6.1 and Section 4.6.2. Given these two
ideas, how can we find the trees?
If some node X only wants to find the next-hops (which is usually the
case for IP networks), it is enough to find which nodes are greater
and less than X, and which are not ordered; this can be done by
running an increasing-SPF and a decreasing-SPF rooted at X and not
exploring any links from the ADAG root. (Traversal algorithms other
than SPF could safely be used instead, where one traversal takes the
links in their given directions and the other reverses the links'
directions.)
An increasing-SPF rooted at X and not exploring links from the root
will find the increasing next-hops to all Y >> X. Those increasing
next-hops are X's next-hops on the MRT-Blue to reach Y. A
decreasing-SPF rooted at X and not exploring links from the root will
find the decreasing next-hops to all Z << X. Those decreasing next-
hops are X's next-hops on the MRT-Red to reach Z. Since the root R
is both greater than and less than X, after this increasing-SPF and
decreasing-SPF, X's next-hops on the MRT-Blue and on the MRT-Red to
reach R are known. For every node Y >> X, X's next-hops on the MRT-
Red to reach Y are set to those on the MRT-Red to reach R. For every
node Z << X, X's next-hops on the MRT-Blue to reach Z are set to
those on the MRT-Blue to reach R.
For those nodes which were not reached, we have the next-hops as
well. The increasing MRT-Blue next-hop for a node that is not
ordered is the next-hop along the decreasing MRT-Red towards R, and
the decreasing MRT-Red next-hop is the next-hop along the increasing
MRT-Blue towards R. Naturally, since R is ordered with respect to all
the nodes, there will always be an increasing and a decreasing path
towards it. This algorithm does not provide the complete specific
path taken but just the appropriate next-hops to use. The identity
of G and H is not determined.
The final case to consider is when the root R computes its own
next-hops. Since the root R is << all other nodes, running an
increasing-SPF rooted at R will reach all other nodes; the MRT-Blue
next-hops are those found with this increasing-SPF. Similarly, since
the root R is >> all other nodes, running a decreasing-SPF rooted at
R will reach all other nodes; the MRT-Red next-hops are those found
with this decreasing-SPF.
E---D---| E<--D<--|
| | | | ^ |
| | | V | |
R F C R F C
| | | | ^ ^
| | | V | |
A---B---| A-->B---|
(a) (b)
A 2-connected graph A spanning ADAG rooted at R
Figure 21
As an example, consider the situation depicted in Figure 21. There
node C runs an increasing-SPF and a decreasing-SPF. The
increasing-SPF reaches D, E and R and the decreasing-SPF reaches B, A
and R. So towards E the increasing next-hop is D (E was reached
through D), and the decreasing next-hop is B (since R was reached
through B). Since D, B, A and R will compute their next-hops
similarly, the packets will reach E.
We have the next-hops towards F as well: since F is not ordered with
respect to C, the MRT-Blue next-hop is the decreasing one towards R
(which is B) and the MRT-Red next-hop is the increasing one towards R
(which is D). Since B is ordered with respect to F, it will find,
for its MRT-Blue, a real increasing next-hop, so a packet forwarded
to B will get to F on path C-B-F. Similarly, D will have, for its
MRT-Red, a real decreasing next-hop, and the packet will use path
C-D-F.
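The example around Figure 21 can be reproduced with a short Python sketch (non-normative; a BFS stands in for the SPF since all metrics are equal, and the successor-list encoding and names are ours):

```python
# From node C on the ADAG of Figure 21, traverse along OUTGOING links
# (increasing) and along INCOMING links (decreasing), never exploring
# links out of the root R; record the first hop to each reached node.
from collections import deque

succ = {'R': ['A'], 'A': ['B'], 'B': ['F', 'C'], 'F': ['D'],
        'C': ['D'], 'D': ['E'], 'E': ['R']}    # the ADAG of Figure 21

def first_hops(start, root, forward=True):
    if forward:
        edges = succ
    else:
        edges = {n: [] for n in succ}          # reverse every link
        for n in succ:
            for m in succ[n]:
                edges[m].append(n)
    hop, seen, work = {}, {start}, deque([start])
    while work:
        x = work.popleft()
        if x == root and x != start:
            continue                           # do not traverse the root
        for y in edges[x]:
            if y not in seen:
                seen.add(y)
                hop[y] = y if x == start else hop[x]
                work.append(y)
    return hop

blue = first_hops('C', 'R', forward=True)   # increasing: reaches D, E, R
red = first_hops('C', 'R', forward=False)   # decreasing: reaches B, A, R
# As in the text: towards E the increasing next-hop is D, and R is
# reached on the decreasing side through B. F stays unordered.
```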
4.6.4. Generalizing for a graph that isn't 2-connected
If a graph isn't 2-connected, then the basic approach given in
Section 4.6.3 needs some extensions to determine the appropriate MRT
next-hops to use for destinations outside the computing router X's
blocks. In order to find a pair of maximally redundant trees in that
graph we need to find a pair of RTs in each of the blocks (the root
of these trees will be discussed later), and combine them.
When computing the MRT next-hops from a router X, there are three
basic differences:
1. Only nodes in a common block with X should be explored in the
increasing-SPF and decreasing-SPF.
2. Instead of using the GADAG root, X's local-root should be used.
This has the following implications:
a. The links from X's local-root should not be explored.
b. If a node is explored in the outgoing SPF so Y >> X, then X's
MRT-Red next-hops to reach Y use X's MRT-Red next-hops to reach
X's local-root, and if Z << X, then X's MRT-Blue next-hops to
reach Z use X's MRT-Blue next-hops to reach X's local-root.
c. If a node W in a common block with X was not reached in the
increasing-SPF or decreasing-SPF, then W is unordered with
respect to X. X's MRT-Blue next-hops to W are X's decreasing
(aka MRT-Red) next-hops to X's local-root. X's MRT-Red next-hops
to W are X's increasing (aka MRT-Blue) next-hops to X's
local-root.
3. For nodes in different blocks, the next-hops must be inherited
via the relevant cut-vertex.
These are all captured in the detailed algorithm given in
Section 4.6.5.
4.6.5. Complete Algorithm to Compute MRT Next-Hops
The complete algorithm to compute MRT Next-Hops for a particular
router X is given in Figure 22. In addition to computing the MRT-
Blue next-hops and MRT-Red next-hops used by X to reach each node Y,
the algorithm also stores an "order_proxy", which is the proper cut-
vertex to reach Y if it is outside the block, and which is used later
in deciding whether the MRT-Blue or the MRT-Red can provide an
acceptable alternate for a particular primary next-hop.
In_Common_Block(x, y)
   if (((x.localroot is y.localroot) and (x.block_id is y.block_id))
       or (x is y.localroot) or (y is x.localroot))
      return true
   return false
Store_Results(y, direction, spf_root, store_nhs)
   if direction is FORWARD
      y.higher = true
      if store_nhs
         y.blue_next_hops = y.next_hops
   if direction is REVERSE
      y.lower = true
      if store_nhs
         y.red_next_hops = y.next_hops
SPF_No_Traverse_Root(spf_root, block_root, direction, store_nhs)
   Initialize spf_heap to empty
   Initialize nodes' spf_metric to infinity and next_hops to empty
   spf_root.spf_metric = 0
   insert(spf_heap, spf_root)
   while (spf_heap is not empty)
      min_node = remove_lowest(spf_heap)
      Store_Results(min_node, direction, spf_root, store_nhs)
      if ((min_node is spf_root) or (min_node is not block_root))
         foreach interface intf of min_node
            if (((direction is FORWARD) and intf.OUTGOING) or
                ((direction is REVERSE) and intf.INCOMING) and
                In_Common_Block(spf_root, intf.remote_node))
               path_metric = min_node.spf_metric + intf.metric
               if path_metric < intf.remote_node.spf_metric
                  intf.remote_node.spf_metric = path_metric
                  if min_node is spf_root
                     intf.remote_node.next_hops = make_list(intf)
                  else
                     intf.remote_node.next_hops = min_node.next_hops
                  insert_or_update(spf_heap, intf.remote_node)
               else if path_metric is intf.remote_node.spf_metric
                  if min_node is spf_root
                     add_to_list(intf.remote_node.next_hops, intf)
                  else
                     add_list_to_list(intf.remote_node.next_hops,
                                      min_node.next_hops)
SetEdge(y)
   if y.blue_next_hops is empty and y.red_next_hops is empty
      if (y.local_root != y) {
         SetEdge(y.local_root)
      }
      y.blue_next_hops = y.localroot.blue_next_hops
      y.red_next_hops = y.localroot.red_next_hops
      y.order_proxy = y.localroot.order_proxy
Compute_MRT_NextHops(x, root)
   foreach node y
      y.higher = y.lower = false
      clear y.red_next_hops and y.blue_next_hops
      y.order_proxy = y
   SPF_No_Traverse_Root(x, x.localroot, FORWARD, TRUE)
   SPF_No_Traverse_Root(x, x.localroot, REVERSE, TRUE)

   // red and blue next-hops are stored to x.localroot as different
   // paths are found via the SPF and reverse-SPF.
   // Similarly any nodes whose local-root is x will have their
   // red_next_hops and blue_next_hops already set.
// Handle nodes in the same block that aren't the local-root
foreach node y
if (y.IN_MRT_ISLAND and (y is not x) and
(y.localroot is x.localroot) and
((y is x.localroot) or (x is y.localroot) or
(y.block_id is x.block_id)))
if y.higher
y.red_next_hops = x.localroot.red_next_hops
else if y.lower
y.blue_next_hops = x.localroot.blue_next_hops
else
    y.blue_next_hops = x.localroot.red_next_hops
    y.red_next_hops = x.localroot.blue_next_hops
// Inherit next-hops and order_proxies to other components
if x is not root
root.blue_next_hops = x.localroot.blue_next_hops
root.red_next_hops = x.localroot.red_next_hops
root.order_proxy = x.localroot
foreach node y
    if (y is not root) and (y is not x) and y.IN_MRT_ISLAND
        if y.blue_next_hops is empty and y.red_next_hops is empty
            y.blue_next_hops = y.localroot.blue_next_hops
            y.red_next_hops = y.localroot.red_next_hops
            y.order_proxy = y.localroot.order_proxy
max_block_id = 0
Assign_Block_ID(root, max_block_id)
Compute_MRT_NextHops(x, root)
Figure 22
4.7. Identify MRT alternates
At this point, a computing router S knows its MRT-Blue next-hops and
MRT-Red next-hops for each destination in the MRT Island. The
primary next-hops along the SPT are also known. It remains to
determine for each primary next-hop to a destination D, which of the
MRTs avoids the primary next-hop node F. This computation depends
upon data set in Compute_MRT_NextHops, namely each node y's
y.blue_next_hops, y.red_next_hops, y.order_proxy, y.higher, y.lower
and y.topo_order.  Recall that a router knows only which nodes are
greater and which are lesser than itself; it cannot easily decide
the relation between two arbitrary nodes, which is why the
topological ordering is needed.
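The asymmetry of the topo_order test can be illustrated with a toy DAG (a sketch, not part of the specification): a topological number can only rule an ordering out, never prove one, since unordered pairs still receive distinct numbers.

```python
from graphlib import TopologicalSorter

# Toy GADAG-like DAG: R -> A -> C and R -> B -> C.
# A and B are NOT ordered with respect to each other.
dag = {"A": {"R"}, "B": {"R"}, "C": {"A", "B"}}
order = list(TopologicalSorter(dag).static_order())
topo = {node: i for i, node in enumerate(order)}

def definitely_not_less(x, y):
    # topo[x] >= topo[y] proves x is not below y in the partial
    # order; topo[x] < topo[y] by itself proves nothing, since
    # unordered pairs like A and B still get distinct numbers.
    return topo[x] >= topo[y]
```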
For each primary next-hop node F to each destination D, S can call
Select_Alternates(S, D, F, primary_intf) to determine whether to use
the MRT-Blue next-hops as the alternate next-hop(s) for that primary
next hop or to use the MRT-Red next-hops. The algorithm is given in
Figure 23 and discussed afterwards.
Select_Alternates_Internal(S, D, F, primary_intf,
D_lower, D_higher, D_topo_order)
//When D==F, we can do only link protection
if ((D is F) or (D.order_proxy is F))
if an MRT doesn't use primary_intf
indicate alternate is not node-protecting
return that MRT color
else // parallel links are cut-edge
return AVOID_LINK_ON_BLUE
if (D_lower and D_higher and F.lower and F.higher)
if F.topo_order < D_topo_order
return USE_RED
return USE_BLUE
if (D_lower and D_higher)
if F.higher
return USE_RED
return USE_BLUE
if (F.lower and F.higher)
if D_lower
return USE_RED
else if D_higher
return USE_BLUE
    else // D and S are unordered
        if primary_intf.OUTGOING and primary_intf.INCOMING
            return AVOID_LINK_ON_BLUE
        if primary_intf.OUTGOING is true
            return USE_BLUE
        if primary_intf.INCOMING is true
            return USE_RED
if D_higher
if F.higher
if F.topo_order < D_topo_order
return USE_RED
return USE_BLUE
else if F.lower
return USE_BLUE
// F and S are neighbors so either F << S or F >> S
else if D_lower
if F.higher
return USE_RED
else if F.lower
if F.topo_order < D_topo_order
return USE_RED
return USE_BLUE
// F and S are neighbors so either F << S or F >> S
else // D and S not ordered
if F.lower
return USE_RED
else if F.higher
return USE_BLUE
// F and S are neighbors so either F << S or F >> S
Select_Alternates(S, D, F, primary_intf)
    if D.order_proxy is not D
        D_lower = D.order_proxy.lower
        D_higher = D.order_proxy.higher
        D_topo_order = D.order_proxy.topo_order
    else
        D_lower = D.lower
        D_higher = D.higher
        D_topo_order = D.topo_order
    return Select_Alternates_Internal(S, D, F, primary_intf,
                                      D_lower, D_higher, D_topo_order)
Figure 23
If either D>>S>>F or D<<S<<F holds true, the situation is simple: in
the first case, we should choose the increasing Blue next-hop; in the
second case, the decreasing Red next-hop is the right choice.
However, when both D and F are greater than S, the situation is not
so simple; there are three possibilities: (i) F>>D, (ii) F<<D, or (iii)
F and D are not ordered. In the first case, we should choose the
path towards D along the Blue tree. In contrast, in case (ii) the
Red path towards the root and then to D would be the solution.
Finally, in case (iii) both paths would be acceptable. However,
observe that if e.g. F.topo_order>D.topo_order, either case (i) or
case (iii) holds true, which means that selecting the Blue next-hop
is safe. Similarly, if F.topo_order<D.topo_order, we should select
the Red next-hop. The situation is almost the same if both F and D
are less than S.
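The decision rule for the case where both D and F are greater than S reduces to a single topo_order comparison, as in this tiny Python sketch (hypothetical helper name):

```python
def pick_mrt_when_both_higher(f_topo_order, d_topo_order):
    """Both D >> S and F >> S: choose the MRT that provably avoids F.

    If F's topological number is below D's, F might lie on the
    increasing Blue path to D, so take the decreasing Red path;
    otherwise F cannot precede D on the Blue path, so Blue is safe.
    """
    if f_topo_order < d_topo_order:
        return "USE_RED"
    return "USE_BLUE"
```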
Recall that we have added each link to the GADAG in some direction,
so it is impossible that S and F are not ordered. But it is possible
that S and D are not ordered, so we need to deal with this case as
well.  If F<<S, we can use the Red next-hop, because that path first
increases until a node definitely greater than D is reached, then
decreases; this path must avoid using F.  Similarly, if F>>S, we
should use the Blue next-hop.
Additionally, the cases where either F or D is ordered both higher
and lower must be considered; this can happen when one is a block-
root or its order_proxy is. If D is both higher and lower than S,
then the MRT to use is the one that avoids F so if F is higher, then
the MRT-Red should be used and if F is lower, then the MRT-Blue
should be used; F and S must be ordered because they are neighbors.
If F is both higher and lower, then if D is lower, using the MRT-Red
to decrease reaches D and if D is higher, using the Blue MRT to
increase reaches D; if D is unordered compared to S, then the
situation is a bit more complicated.
In the case where F is both higher and lower than S (F<<S and S<<F)
and D and S are unordered, the direction of the link in the GADAG
between S and F should be examined.  If the
link is directed S -> F, then use the MRT-Blue (decrease to avoid
that link and then increase). If the link is directed S <- F, then
use the MRT-Red (increase to avoid that link and then decrease). If
the link is S <--> F, then the link must be a cut-link and there is
no node-protecting alternate. If there are multiple links between S
and F, then they can protect against each other; of course, in this
situation, they are probably already ECMP.
Finally, there is the case where D is also F. In this case, only link
protection is possible. The MRT that doesn't use the indicated
primary next-hop is used. If both MRTs use the primary next-hop,
then the primary next-hop must be a cut-edge so either MRT could be
used but the set of MRT next-hops must be pruned to avoid that
primary next-hop.  To indicate this case, Select_Alternates returns
AVOID_LINK_ON_BLUE.
As an example, consider the ADAG depicted in Figure 24 and first
suppose that G is the source, D is the destination, and H is the
failed next-hop.  Since D>>G, we need to compare H.topo_order and
D.topo_order.  Since D.topo_order>H.topo_order, D cannot be smaller
than H, so we should select the decreasing Red path towards the root.
If, however, the destination were instead J, we would find that
H.topo_order>J.topo_order, so we must choose the increasing Blue
next-hop to J, which is I.  When instead the destination is C, we
find that we need to first decrease to avoid using H, so the Blue,
first decreasing then increasing, path is selected.
a 2-connected graph
Figure 24
4.8. Finding FRR Next-Hops for Proxy-Nodes
As discussed in Section 10.2 of
[I-D.ietf-rtgwg-mrt-frr-architecture], it is necessary to find MRT-
Blue and MRT-Red next-hops and MRT-FRR alternates for named proxy-
nodes.  An example case is a router that is not part of the local MRT
Island, when there is only partial MRT support in the domain.
A first incorrect and naive approach to handling proxy-nodes, which
cannot be transited, is to simply add these proxy-nodes to the
network graph and connect each to the routers through which it can
be reached.  Unfortunately, this can introduce some
new ordering between the border routers connected to the new node
which could result in routing MRT paths through the proxy-node.
Thus, this naive approach would need to recompute GADAGs and redo
SPTs for each proxy-node.
Instead of adding the proxy-node to the original network graph, each
individual proxy-node can be individually added to the GADAG. The
proxy-node is connected to at most two nodes in the GADAG.
Section 10.2 of [I-D.ietf-rtgwg-mrt-frr-architecture] defines how the
proxy-node attachments MUST be determined. The degenerate case where
the proxy-node is attached to only one node in the GADAG is trivial
as all needed information can be derived from that attachment node;
if there are different interfaces, then some can be assigned to MRT-
Red and others to MRT-Blue.
Now, consider the proxy-node that is attached to exactly two nodes in
the GADAG. Let the order_proxies of these nodes be A and B. Let the
current node, where the next-hop is just being calculated, be S.  If one
of these two nodes A and B is the local root of S, let A=S.local_root
and the other one be B. Otherwise, let A.topo_order < B.topo_order.
A valid GADAG has been constructed.  Instead of doing an increasing-
SPF and a decreasing-SPF to find the ordering for the proxy-nodes,
the following simple rules, which provide the same result, can be
used independently for each proxy-node.  For the following rules, let
X=A.localroot, and if A is the local root, let it be considered
strictly lower than any other node.  Always take the first rule that
matches.
Rule Condition Blue NH Red NH Notes
1 S=X Blue to A Red to B
2 S<<A Blue to A Red to R
3 S>>B Blue to R Red to B
4 A<<S<<B Red to A Blue to B
5 A<<S Red to A Blue to R S not ordered w/ B
6 S<<B Red to R Blue to B S not ordered w/ A
7 Otherwise Red to R Blue to R S not ordered w/ A+B
These rules are realized in the following pseudocode where P is the
proxy-node, X and Y are the nodes that P is attached to, and S is the
computing router:
Select_Proxy_Node_NHs(P, S, X, Y)
    if (X.order_proxy.topo_order < Y.order_proxy.topo_order)
        // This fits even if X.order_proxy is S.localroot
        A = X.order_proxy
        B = Y.order_proxy
    else
        A = Y.order_proxy
        B = X.order_proxy
    if (S is A.localroot)           // rule 1
        P.blue_next_hops = A.blue_next_hops
        P.red_next_hops = B.red_next_hops
        return
    if (A.higher)                   // rule 2: S << A
        P.blue_next_hops = A.blue_next_hops
        P.red_next_hops = R.red_next_hops
        return
    if (B.lower)                    // rule 3: S >> B
        P.blue_next_hops = R.blue_next_hops
        P.red_next_hops = B.red_next_hops
        return
    if (A.lower and B.higher)       // rule 4: A << S << B
        P.blue_next_hops = A.red_next_hops
        P.red_next_hops = B.blue_next_hops
        return
    if (A.lower)                    // rule 5: S not ordered with B
        P.blue_next_hops = A.red_next_hops
        P.red_next_hops = R.blue_next_hops
        return
    if (B.higher)                   // rule 6: S not ordered with A
        P.blue_next_hops = R.red_next_hops
        P.red_next_hops = B.blue_next_hops
        return
    // rule 7: S not ordered with A or B
    P.blue_next_hops = R.red_next_hops
    P.red_next_hops = R.blue_next_hops
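The seven rules above can be transcribed mechanically. This hypothetical Python helper takes the five order relations as booleans and returns the (Blue NH, Red NH) pair of the first matching rule from the table:

```python
def proxy_next_hops(s_is_x, s_below_a, s_above_b, a_below_s, s_below_b):
    """First-match evaluation of rules 1-7 for a 2-attached proxy-node.

    s_below_a means S << A, s_above_b means S >> B, etc.  Returns the
    (Blue next-hop, Red next-hop) descriptions from the rule table.
    """
    if s_is_x:                        # rule 1: S = X
        return ("Blue to A", "Red to B")
    if s_below_a:                     # rule 2: S << A
        return ("Blue to A", "Red to R")
    if s_above_b:                     # rule 3: S >> B
        return ("Blue to R", "Red to B")
    if a_below_s and s_below_b:       # rule 4: A << S << B
        return ("Red to A", "Blue to B")
    if a_below_s:                     # rule 5: S not ordered w/ B
        return ("Red to A", "Blue to R")
    if s_below_b:                     # rule 6: S not ordered w/ A
        return ("Red to R", "Blue to B")
    return ("Red to R", "Blue to R")  # rule 7: unordered w/ A and B
```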
After finding the red and the blue next-hops, it is necessary to know
which of them to use in the case of failure.  This can be done by
Select_Alternates_Internal().  In order to use it, we need to know
whether P is greater than, less than, or unordered with respect to S,
as well as P.topo_order.  P.lower = B.lower, P.higher = A.higher, and
any value is acceptable for P.topo_order, as long as
A.topo_order<=P.topo_order<=B.topo_order and P.topo_order is not
equal to the topo_order of the failed node.  So for simplicity, let
P.topo_order=A.topo_order when the failed next-hop is not A, and
P.topo_order=B.topo_order otherwise.  This gives the following
pseudocode:
Select_Alternates_Proxy_Node(S, P, F, primary_intf)
    if (F is not P.neighbor_A)
        return Select_Alternates_Internal(S, P, F, primary_intf,
                   P.lower, P.higher, P.neighbor_A.topo_order)
    else
        return Select_Alternates_Internal(S, P, F, primary_intf,
                   P.lower, P.higher, P.neighbor_B.topo_order)
Figure 25
5. MRT Lowpoint Algorithm: Complete Specification
This specification defines the MRT Lowpoint Algorithm, which includes
the construction of a common GADAG and the computation of MRT-Red and
MRT-Blue next-hops to each node in the graph.  An implementation MAY
select any subset of next-hops for MRT-Red and MRT-Blue that respects
the allowed nodes described in Section 4.6 for each of the MRT-Red
and MRT-Blue, as long as the selected next-hops are further along in
the interval of allowed nodes towards the destination.
For example, the MRT-Blue next-hops used when the destination Y >> S,
the computing router, MUST be one or more nodes, T, whose topo_order
is in the interval [X.topo_order, Y.topo_order] and where Y >> T or Y
is T.  Similarly, the MRT-Red next-hops MUST have a topo_order in
the interval [R-small.topo_order, X.topo_order] or [Y.topo_order,
Implementations SHOULD implement the Select_Alternates() function to
pick an MRT-FRR alternate.
In a future version, this section will include pseudo-code describing
the full code path through the pseudo-code given earlier in this
document.
6. Algorithm Alternatives and Evaluation
This specification defines the MRT Lowpoint Algorithm, which is one
option among several possible MRT algorithms. Other alternatives are
described in the appendices.
In addition, it is possible to calculate a Destination-Rooted GADAG,
where for each destination, a GADAG rooted at that destination is
computed.  A router can then compute the blue MRT and red MRT next-
hops to that destination.  Building GADAGs per destination is
computationally more expensive but may give somewhat shorter
alternate paths.  It may be useful for live-live multicast along
MRTs.
6.1. Algorithm Evaluation
This section compares MRT and remote LFA for IP Fast Reroute in 16
service provider network topologies, focusing on coverage and
alternate path length. Figure 26 shows the coverage provided by MRT
and RLFA for protection against different failure modes in these
topologies. The coverage values are calculated as the percentage of
source-destination pairs protected by the given IPFRR method relative
to source-destination pairs protected by optimal routing, against the
same failure modes. For example, the second column is the percentage
of source-destination pairs protected by MRT against next-hop node
failure and against next-hop link failure relative to source-
destination pairs protected by optimal routing against the same
failure modes. The particular variants of MRT and RLFA used for this
analysis are described at the end of this section.
When the requirement is to provide IPFRR protection against a single
link or node failure, MRT is able to provide protection for any
source-destination pair for which some path still exists after the
failure. For the topologies analyzed here, RLFA provides protection
against a single link or node failure for 41% to 98% of the source-
destination pairs, with an average of 73% coverage.
+------------+------------------+-----------------------------------+
| Topology   |  link and node   |     link-only failure coverage    |
|            | failure coverage |                                   |
|            +---------+--------+--------+-----------+--------------+
|            |   MRT   |  RLFA  |  MRT   |   RLFA    |     RLFA     |
|            |         |        |        |no possible|   possible   |
|            |         |        |        |   loops   |    loops     |
+------------+---------+--------+--------+-----------+--------------+
| T101 | 100.0% | 89.0% | 100.0% | 97.1% | 99.4% |
| T102 | 100.0% | 41.0% | 100.0% | 96.5% | 100.0% |
| T103 | 100.0% | 81.7% | 100.0% | 94.9% | 99.6% |
| T104 | 100.0% | 65.4% | 100.0% | 86.2% | 100.0% |
| T105 | 100.0% | 69.0% | 100.0% | 85.7% | 93.8% |
| T106 | 100.0% | 80.3% | 100.0% | 91.2% | 100.0% |
| T107 | 100.0% | 79.6% | 100.0% | 82.1% | 93.7% |
| T108 | 100.0% | 60.4% | 100.0% | 54.9% | 66.9% |
| T109 | 100.0% | 50.7% | 100.0% | 52.9% | 67.0% |
| T110 | 100.0% | 80.5% | 100.0% | 75.4% | 100.0% |
| T111 | 100.0% | 85.1% | 100.0% | 89.5% | 99.9% |
| T112 | 100.0% | 89.1% | 100.0% | 76.9% | 100.0% |
| T113 | 100.0% | 66.7% | 100.0% | 93.7% | 100.0% |
| T114 | 100.0% | 73.6% | 100.0% | 96.0% | 100.0% |
| T115 | 100.0% | 97.7% | 100.0% | 96.2% | 100.0% |
| T116 | 100.0% | 65.0% | 100.0% | 95.7% | 99.9% |
| Average | 100.0% | 73.4% | 100.0% | 85.3% | 95.0% |
| Median | 100.0% | 76.6% | 100.0% | 90.4% | 100.0% |
Figure 26
When the requirement is to provide protection against a single link
failure, MRT is able to provide 100% coverage. The coverage provided
by RLFA against a single link failure depends on whether or not one
restricts RLFA repairs to those that are guaranteed not to cause
loops in the event of a node failure.  When RLFAs are chosen
to exclude the possibility of such loops, coverage for these
topologies ranges from 52% to 97%, with an average of 85%.  If one
allows for the possibility of loops being created by the use of an
RLFA, then the coverage increases to range from 67% to 100%, with an
average of 95%.
Note that for most of the topologies, the calculated RLFA coverage
increases when reducing the protection requirements from link and
node failure coverage to link-only failure coverage. However, for
several of the topologies, the calculated RLFA coverage decreases
when going from link and node failure coverage to link-only failure
coverage. While the absolute number of source-destination pairs
protected by RLFA increases for all of the topologies when the
protection requirements are reduced, for some topologies the absolute
number of source-destination pairs protectable by optimal
routing increases even more, resulting in a decrease in the relative
RLFA coverage.
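The relative-coverage arithmetic explains this effect; with invented counts for a single topology (not taken from the draft's data):

```python
# Hypothetical counts for one topology.  Relaxing the requirement
# from node+link to link-only protection raises both the RLFA count
# and the optimally-protectable count.
rlfa_node, optimal_node = 800, 1000    # node+link protection
rlfa_link, optimal_link = 900, 1300    # link-only protection

cov_node = 100.0 * rlfa_node / optimal_node   # 80.0%
cov_link = 100.0 * rlfa_link / optimal_link   # about 69.2%
# Absolute RLFA coverage rose (900 > 800), yet relative coverage
# fell, because the denominator grew faster.
```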
| Topology | average alternate path length |
| | (relative to optimal) |
| +--------------------------------------------------+
| | MRT | RLFA | Next-Next-Hop |
| T101 | 1.28 | 1.01 | 1.04 |
| T102 | 1.22 | 1.01 | 1.02 |
| T103 | 1.13 | 1.01 | 1.02 |
| T104 | 1.01 | 1.00 | 1.00 |
| T105 | 2.97 | 1.42 | 1.16 |
| T106 | 1.16 | 1.06 | 1.07 |
| T107 | 86.87 | 99.51 | 1.04 |
| T108 | 1.07 | 1.01 | 1.03 |
| T109 | 1.09 | 1.02 | 1.05 |
| T110 | 1.06 | 1.03 | 1.25 |
| T111 | 1.25 | 1.02 | 1.10 |
| T112 | 1.11 | 1.05 | 1.32 |
| T113 | 1.03 | 1.00 | 1.02 |
| T114 | 1.77 | 1.00 | 1.06 |
| T115 | 1.01 | 1.00 | 1.04 |
| T116 | 1.31 | 1.01 | 1.04 |
| Median | 1.14 | 1.01 | 1.04 |
Figure 27
The first three columns of Figure 27 compare the lengths of the
alternate paths used by MRT and RLFA across the 16 topologies
(measured as the sum of IGP costs). The alternate path lengths for
the FRR methods are computed relative to the optimal alternate path
length, which is computed by removing the failed node and running an
SPF to find the shortest path from the PLR to the destination. The
alternate path lengths are averaged over all source-destination pairs
for which RLFA provides a node-protecting alternate or a link-
protecting alternate that cannot loop in the event of a node failure.
The fourth column of Figure 27 presents results for Next-Next-Hop
alternate paths. The calculated Next-Next-Hop alternate lengths
would apply to the Not-Via IPFRR method as well as RSVP-TE based FRR
using next-next-hop bypass tunnels.
Before drawing any general conclusions from this data, it is useful
to understand the origin of the large values of average relative
alternate path length calculated for topology T107, with a value of
87 for MRT and 99 for RLFA.  The network topology represented by
T107 uses values of 10, 100, and 1000 as IGP costs, so small
deviations from the optimal alternate path can result in large
differences in relative path length.  The fact that the Next-Next-Hop
average relative alternate path length for T107 is 1.04 (much closer
to optimal than either MRT or RLFA) is reasonable.  The next-next-hop
alternate path length is computed by removing the failed node and
finding the shortest path from the source to the next-next-hop node,
and adding that to the shortest path from the next-next-hop node to
the destination. Both of the paths that make up the Next-Next-Hop
alternate path follow shortest paths, so the resulting alternate path
from source to destination will only use a link with a cost of 1000
if absolutely necessary.  In contrast, the other two methods allow at
least one hop in the alternate path to be chosen independently of the
cost of the link, often resulting in the use of a link with cost
1000.
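A toy calculation (invented numbers) shows how one forced high-cost link dominates the ratio in a topology with costs of 10, 100, and 1000:

```python
# Optimal alternate: three cost-10 hops.  The FRR alternate follows
# the same hops plus one unavoidable cost-1000 link.
optimal_alt = 3 * 10
frr_alt = optimal_alt + 1000
relative_length = frr_alt / optimal_alt   # roughly 34x optimal
```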
For most of the topologies analyzed, the average RLFA alternate paths
are within a few percent of optimal with a median value of 1.01.
Most of the average MRT alternate path lengths are within 30% of
optimal with a median value of 1.14. In general, it appears that the
100% coverage provided by MRT comes at the expense of a modest
increase in alternate path length. This may be a desirable trade-off
if one considers that the alternate path length for a source-
destination pair without an RLFA alternate is effectively infinite.
The results for Next-Next-Hop alternate path lengths tend to fall in
between MRT and RLFA with a median of 1.04. However, for some
topologies the Next-Next-Hop results are higher than the MRT results.
The analysis presented here uses the MRT Lowpoint Algorithm defined
in this specification with a common GADAG root. The particular
choice of a common GADAG root is expected to affect the quality of
the MRT alternate paths, with a more central common GADAG root
resulting in shorter MRT alternate path lengths. For this analysis,
a single GADAG root was chosen for each topology by calculating the
sum of costs of all shortest paths to and from a given node. The
node with the lowest sum was chosen as the common GADAG root. In
actual deployments, the common GADAG root would be chosen based on
the GADAG Root Selection Priority advertised by each router, the
values of which would be determined off-line.
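The root-selection heuristic described above can be sketched as follows (illustrative Python; assumes symmetric metrics, so the to-and-from sums coincide):

```python
import heapq

def most_central_node(adj):
    """Node minimizing the sum of shortest-path costs to all others.

    adj maps node -> list of (neighbor, metric).  With symmetric
    metrics one SPF per node suffices; asymmetric metrics would also
    need a reverse SPF per node.
    """
    def spf(src):
        dist = {src: 0}
        heap = [(0, src)]
        done = set()
        while heap:
            d, n = heapq.heappop(heap)
            if n in done:
                continue
            done.add(n)
            for nbr, m in adj[n]:
                nd = d + m
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr))
        return dist

    totals = {n: sum(spf(n).get(m, float("inf")) for m in adj)
              for n in adj}
    return min(totals, key=totals.get)
```

On a path A-B-C with unit metrics, B is selected, since it has the smallest total distance to the other nodes.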
Both MRT and remote LFA have the option of using a local LFA in the
alternate selection process. The details of the alternate selection
processes used in this analysis are provided below.
For the MRT analysis, for each source, destination, and primary next-
hop, each alternate was chosen with the following priority:
1. node-protecting downstream local LFA
2. node-protecting MRT alternate
3. link-protecting downstream local LFA
4. MRT alternate
For the RLFA analysis, each alternate was chosen with the following
priority:
1. node-protecting local LFA
2. link-protecting remote LFA providing node-protection
3. link-protecting downstream local LFA
4. link-protecting downstream remote LFA
5. link-protecting local LFA
6. link-protecting remote LFA
7. Algorithm Work to Be Done
Broadcast Interfaces: The algorithm assumes that broadcast
interfaces are already represented as pseudo-nodes in the network
graph.  Given maximal redundancy, one of the MRTs will try to avoid
both the pseudo-node and the next hop. The exact rules need to be
fully specified.
8. IANA Considerations
This document includes no request to IANA.
9. Security Considerations
This architecture is not currently believed to introduce new security
concerns.
10. References
10.1. Normative References
[I-D.ietf-rtgwg-mrt-frr-architecture]
Atlas, A., Kebler, R., Enyedi, G., Csaszar, A., Tantsura,
J., Konstantynowicz, M., and R. White, "An Architecture
for IP/LDP Fast-Reroute Using Maximally Redundant Trees",
draft-ietf-rtgwg-mrt-frr-architecture-03 (work in
progress), July 2013.
10.2. Informative References
Enyedi, G., "Novel Algorithms for IP Fast Reroute",
Department of Telecommunications and Media Informatics,
Budapest University of Technology and Economics Ph.D.
Thesis, February 2011, <http://www.omikk.bme.hu/
Atlas, A., Hegde, S., Chris, C., and J. Tantsura, "OSPF
Extensions to Support Maximally Redundant Trees", draft-
atlas-ospf-mrt-00 (work in progress), July 2013.
Bryant, S., Previdi, S., and M. Shand, "A Framework for IP
and MPLS Fast Reroute Using Not-via Addresses", draft-
ietf-rtgwg-ipfrr-notvia-addresses-11 (work in progress),
May 2013.
Litkowski, S., Decraene, B., Filsfils, C., and K. Raza,
"Operational management of Loop Free Alternates", draft-
ietf-rtgwg-lfa-manageability-00 (work in progress), May
Bryant, S., Filsfils, C., Previdi, S., Shand, M., and S.
Ning, "Remote LFA FRR", draft-ietf-rtgwg-remote-lfa-02
(work in progress), May 2013.
Kahn, A., "Topological sorting of large networks",
Communications of the ACM, Volume 5, Issue 11 , Nov 1962,
Retvari, G., Tapolcai, J., Enyedi, G., and A. Csaszar, "IP
Fast ReRoute: Loop Free Alternates Revisited", Proceedings
of IEEE INFOCOM , 2011, <http://opti.tmit.bme.hu/~tapolcai
Enyedi, G., Retvari, G., Szilagyi, P., and A. Csaszar, "IP
Fast ReRoute: Lightweight Not-Via without Additional
Addresses", Proceedings of IEEE INFOCOM , 2009,
Enyedi, G., Retvari, G., and A. Csaszar, "On Finding
Maximally Redundant Trees in Strictly Linear Time", IEEE
Symposium on Computers and Comunications (ISCC) , 2009,
[RFC3137] Retana, A., Nguyen, L., White, R., Zinin, A., and D.
McPherson, "OSPF Stub Router Advertisement", RFC 3137,
June 2001.
[RFC5286] Atlas, A. and A. Zinin, "Basic Specification for IP Fast
Reroute: Loop-Free Alternates", RFC 5286, September 2008.
[RFC5714] Shand, M. and S. Bryant, "IP Fast Reroute Framework", RFC
5714, January 2010.
[RFC6571] Filsfils, C., Francois, P., Shand, M., Decraene, B.,
Uttaro, J., Leymann, N., and M. Horneffer, "Loop-Free
Alternate (LFA) Applicability in Service Provider (SP)
Networks", RFC 6571, June 2012.
Appendix A. Option 2: Computing GADAG using SPFs
The basic idea in this option is to use slightly-modified SPF
computations to find ears. In every block, an SPF computation is
first done to find a cycle from the local root and then SPF
computations in that block find ears until there are no more
interfaces to be explored. The used result from the SPF computation
is the path of interfaces indicated by following the previous hops
from the mininized IN_GADAG node back to the SPF root.
To do this, first all cut-vertices must be identified and local-roots
assigned as specified in Figure 12.
The slight modifications to the SPF are as follows. The root of the
block is referred to as the block-root; it is either the GADAG root
or a cut-vertex.
a. The SPF is rooted at a neighbor x of an IN_GADAG node y. All
links between y and x are marked as TEMP_UNUSABLE. They should
not be used during the SPF computation.
b. If y is not the block-root, then it is marked TEMP_UNUSABLE. It
should not be used during the SPF computation. This prevents
ears from starting and ending at the same node and avoids cycles;
the exception is because cycles to/from the block-root are
acceptable and expected.
c. Do not explore links to nodes whose local-root is not the block-
root. This keeps the SPF confined to the particular block.
d. Terminate when the first IN_GADAG node z is minimized.
e. Respect the existing directions (e.g. INCOMING, OUTGOING,
UNDIRECTED) already specified for each interface.
Mod_SPF(spf_root, block_root)
Initialize spf_heap to empty
Initialize nodes' spf_metric to infinity
spf_root.spf_metric = 0
insert(spf_heap, spf_root)
found_in_gadag = false
while (spf_heap is not empty) and (found_in_gadag is false)
min_node = remove_lowest(spf_heap)
if min_node.IN_GADAG is true
found_in_gadag = true
foreach interface intf of min_node
if ((intf.OUTGOING or intf.UNDIRECTED) and
((intf.remote_node.localroot is block_root) or
(intf.remote_node is block_root)) and
(intf.remote_node is not TEMP_UNUSABLE) and
(intf is not TEMP_UNUSABLE))
path_metric = min_node.spf_metric + intf.metric
if path_metric < intf.remote_node.spf_metric
intf.remote_node.spf_metric = path_metric
intf.remote_node.spf_prev_intf = intf
insert_or_update(spf_heap, intf.remote_node)
return min_node
SPF_for_Ear(cand_intf.local_node, cand_intf.remote_node, block_root,
            method)
    Mark all interfaces between cand_intf.remote_node
and cand_intf.local_node as TEMP_UNUSABLE
if cand_intf.local_node is not block_root
Mark cand_intf.local_node as TEMP_UNUSABLE
Initialize ear_list to empty
end_ear = Mod_SPF(spf_root, block_root)
y = end_ear.spf_prev_intf
while y.local_node is not spf_root
add_to_list_start(ear_list, y)
y.local_node.IN_GADAG = true
y = y.local_node.spf_prev_intf
    if method is not hybrid
        Set_Ear_Direction(ear_list, cand_intf.local_node,
                          cand_intf.remote_node, block_root)
Clear TEMP_UNUSABLE from all interfaces between
cand_intf.remote_node and cand_intf.local_node
Clear TEMP_UNUSABLE from cand_intf.local_node
return end_ear
Figure 28: Modified SPF for GADAG computation
Assume that an ear is found by going from y to x and then running an
SPF that terminates by minimizing z (e.g. y<->x...q<->z). Now it is
necessary to determine the direction of the ear; if y << z, then the
path should be y->x...q->z but if y >> z, then the path should be
y<-x...q<-z. In Section 4.4, the same problem was handled by finding
all ears that started at a node before looking at ears starting at
nodes higher in the partial order. In this algorithm, using that
approach could mean that new ears aren't added in order of their
total cost since all ears connected to a node would need to be found
before additional nodes could be found.
The alternative is to track the order relationship of each node with
respect to every other node. This can be accomplished by maintaining
two sets of nodes at each node. The first set, Higher_Nodes,
contains all nodes that are known to be ordered above the node. The
second set, Lower_Nodes, contains all nodes that are known to be
ordered below the node. This is the approach used in this algorithm.
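Maintaining the two sets amounts to a transitive-closure update each time an ear imposes a << b. A small Python sketch (hypothetical helper; per-node dicts of sets standing in for Higher_Nodes and Lower_Nodes):

```python
def add_edge_order(higher, lower, a, b):
    """Record a << b in Higher_Nodes/Lower_Nodes style sets.

    higher[n] holds nodes known to be above n; lower[n] holds nodes
    known to be below n.  Everything at or below a becomes lower
    than everything at or above b.
    """
    above_b = {b} | higher[b]
    below_a = {a} | lower[a]
    for x in below_a:
        higher[x] |= above_b
    for y in above_b:
        lower[y] |= below_a
```

After recording a << b and then b << c, the transitive relation a << c is available directly from the sets without any graph traversal.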
Set_Ear_Direction(ear_list, end_a, end_b, block_root)
// Default of A_TO_B for the following cases:
// (a) end_a and end_b are the same (root)
// or (b) end_a is in end_b's Lower Nodes
// or (c) end_a and end_b were unordered with respect to each
// other
direction = A_TO_B
if (end_b is block_root) and (end_a is not end_b)
direction = B_TO_A
else if end_a is in end_b.Higher_Nodes
direction = B_TO_A
if direction is B_TO_A
foreach interface i in ear_list
i.UNDIRECTED = false
i.INCOMING = true
i.remote_intf.UNDIRECTED = false
i.remote_intf.OUTGOING = true
else
    foreach interface i in ear_list
i.UNDIRECTED = false
i.OUTGOING = true
i.remote_intf.UNDIRECTED = false
i.remote_intf.INCOMING = true
if end_a is end_b
    return
// Next, update all nodes' Lower_Nodes and Higher_Nodes
if (end_a is in end_b.Higher_Nodes)
foreach node x where x.localroot is block_root
if end_a is in x.Lower_Nodes
foreach interface i in ear_list
add i.remote_node to x.Lower_Nodes
if end_b is in x.Higher_Nodes
foreach interface i in ear_list
add i.local_node to x.Higher_Nodes
else
    foreach node x where x.localroot is block_root
if end_b is in x.Lower_Nodes
foreach interface i in ear_list
add i.local_node to x.Lower_Nodes
if end_a is in x.Higher_Nodes
foreach interface i in ear_list
add i.remote_node to x.Higher_Nodes
Figure 29: Algorithm to assign links of an ear direction
A goal of the algorithm is to find the shortest cycles and ears. An
ear is started by going to a neighbor x of an IN_GADAG node y. The
path from x to an IN_GADAG node is minimal, since it is computed via
SPF. Since a shortest path is made of shortest paths, to find the
shortest ears requires reaching from the set of IN_GADAG nodes to the
closest node that isn't IN_GADAG. Therefore, an ordered tree is
maintained of interfaces that could be explored from the IN_GADAG
nodes. The interfaces are ordered by their characteristics of
metric, local loopback address, remote loopback address, and ifindex,
as in the algorithm previously described in Figure 14.
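As an illustrative sketch (not from the draft), that ordered collection can be modelled with a heap keyed on the tie-breaker tuple; the record fields here are hypothetical stand-ins for the interface attributes named above:

```python
import heapq
from collections import namedtuple

# Hypothetical candidate-interface record.
Intf = namedtuple("Intf", "metric local_loopback remote_loopback ifindex name")

def intf_key(i):
    # Order by metric, then local loopback, remote loopback, and ifindex.
    return (i.metric, i.local_loopback, i.remote_loopback, i.ifindex)

heap = []
for i in [Intf(10, "1.1.1.1", "2.2.2.2", 7, "b"),
          Intf(5, "1.1.1.1", "3.3.3.3", 2, "a"),
          Intf(10, "1.1.1.1", "2.2.2.2", 3, "c")]:
    heapq.heappush(heap, (intf_key(i), i))

lowest = heapq.heappop(heap)[1]
print(lowest.name)  # "a": the lowest-metric interface is explored first
```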
The algorithm ignores interfaces picked from the ordered tree that
belong to the block root if the block in which the interface is
present already has an ear that has been computed. This is necessary
since we allow at most one incoming interface to a block root in each
block. This requirement stems from the way next-hops are computed as
will be seen in Section 4.6. After any ear gets computed, we
traverse the newly added nodes to the GADAG and insert interfaces
whose far end is not yet on the GADAG to the ordered tree for later
processing.

Finally, cut-edges are a special case because there is no point in
doing an SPF on a block of 2 nodes. The algorithm identifies cut-
edges simply as links where both ends of the link are cut-vertices.
Cut-edges can simply be added to the GADAG with both OUTGOING and
INCOMING specified on their interfaces.
for each interface of node
    if intf.remote_node.IN_GADAG is false
        insert the interface into the ordered tree

block_has_ear = false
for all interfaces of x
    if (intf.remote_node.block_id == block_id) &&
       (intf.remote_node.IN_GADAG is true)
        block_has_ear = true
return block_has_ear
Construct_GADAG_via_SPF(topology, root)
    Compute_Localroot(root, root)
    root.IN_GADAG = true
    while ordered_intfs_tree is not empty
        cand_intf = remove_lowest(ordered_intfs_tree)
        if cand_intf.remote_node.IN_GADAG is false
            if L(cand_intf.remote_node) == D(cand_intf.remote_node)
                // Special case for cut-edges
                cand_intf.UNDIRECTED = false
                cand_intf.remote_intf.UNDIRECTED = false
                cand_intf.OUTGOING = true
                cand_intf.INCOMING = true
                cand_intf.remote_intf.OUTGOING = true
                cand_intf.remote_intf.INCOMING = true
                cand_intf.remote_node.IN_GADAG = true
            else if (cand_intf.remote_node.local_root ==
                     cand_intf.local_node) &&
                /* Skip the interface since the block root
                   already has an incoming interface in the
                   block */
            else
                ear_end = SPF_for_Ear(cand_intf.local_node,
                                      SPF method)
                y = ear_end.spf_prev_hop
                while y.local_node is not cand_intf.local_node
                    y = y.local_node.spf_prev_intf
Figure 30: SPF-based GADAG algorithm
Appendix B. Option 3: Computing GADAG using a hybrid method
In this option, the idea is to combine the salient features of the
above two options. To this end, we process nodes as they get added
to the GADAG just like in the lowpoint inheritance by maintaining a
stack of nodes. This ensures that we do not need to maintain lower
and higher sets at each node to ascertain ear directions since the
ears will always be directed from the node being processed towards
the end of the ear. To compute the ear, however, we resort to an SPF
to have the possibility of better ears (path lengths), thus giving
more flexibility than the restricted use of lowpoint/dfs parents.
Regarding ears involving a block root, unlike the SPF method which
ignored interfaces of the block root after the first ear, in the
hybrid method we would have to process all interfaces of the block
root before moving on to other nodes in the block since the direction
of an ear is pre-determined. Thus, whenever the block already has an
ear computed, and we are processing an interface of the block root,
we mark the block root as unusable before the SPF run that computes
the ear. This ensures that the SPF terminates at some node other
than the block-root. This in turn guarantees that the block-root has
only one incoming interface in each block, which is necessary for
correctly computing the next-hops on the GADAG.
As in the SPF GADAG, bridge ears are handled as a special case.

The entire algorithm is shown below in Figure 31.
find_spf_stack_ear(stack, x, y, xy_intf, block_root)
    if L(y) == D(y)
        // Special case for cut-edges
        xy_intf.UNDIRECTED = false
        xy_intf.remote_intf.UNDIRECTED = false
        xy_intf.OUTGOING = true
        xy_intf.INCOMING = true
        xy_intf.remote_intf.OUTGOING = true
        xy_intf.remote_intf.INCOMING = true
        xy_intf.remote_node.IN_GADAG = true
        push y onto stack
        return
    if (y.local_root == x) &&
        // Avoid the block root during the SPF
        Mark x as TEMP_UNUSABLE
    end_ear = SPF_for_Ear(x, y, block_root, hybrid)
    If x was set as TEMP_UNUSABLE, clear it
    cur = end_ear
    while (cur != y)
        intf = cur.spf_prev_hop
        prev = intf.local_node
        intf.UNDIRECTED = false
        intf.remote_intf.UNDIRECTED = false
        intf.OUTGOING = true
        intf.remote_intf.INCOMING = true
        push prev onto stack
        cur = prev
    xy_intf.UNDIRECTED = false
    xy_intf.remote_intf.UNDIRECTED = false
    xy_intf.OUTGOING = true
    xy_intf.remote_intf.INCOMING = true
Compute_Localroot(root, root)
root.IN_GADAG = true
Initialize Stack to empty
push root onto Stack
while (Stack is not empty)
    x = pop(Stack)
    for each interface intf of x
        y = intf.remote_node
        if y.IN_GADAG is false
            find_spf_stack_ear(stack, x, y, intf, y.block_root)
Figure 31: Hybrid GADAG algorithm
Authors' Addresses
Gabor Sandor Enyedi (editor)
Konyves Kalman krt 11
Budapest 1097
Email: Gabor.Sandor.Enyedi@ericsson.com
Andras Csaszar
Konyves Kalman krt 11
Budapest 1097
Email: Andras.Csaszar@ericsson.com
Alia Atlas (editor)
Juniper Networks
10 Technology Park Drive
Westford, MA 01886
Email: akatlas@juniper.net
Chris Bowers
Juniper Networks
1194 N. Mathilda Ave.
Sunnyvale, CA 94089
Email: cbowers@juniper.net
Abishek Gopalan
University of Arizona
1230 E Speedway Blvd.
Tucson, AZ 85721
Email: abishek@ece.arizona.edu
Why Bagging is So Ridiculously Effective At Variance Reduction?
Random forest is a pretty powerful and robust model, which is a combination of many different decision trees.
What makes them so powerful over a traditional decision tree model is Bagging:
Bagging diagram
Anyone who has ever heard of Random Forest has surely heard of Bagging and how it works.
This is because, in my experience, there are plenty of resources that neatly describe:
• How Bagging algorithmically works in random forests.
• Experimental demo on how Bagging reduces the overall variance (or overfitting).
However, these resources often struggle to provide an intuition on:
1. Why Bagging is so effective.
2. Why we sample rows from the training dataset with replacement.
3. The mathematical demonstration that verifies variance reduction.
Thus, in this article, let me address all of these above questions and provide you with a clear and intuitive reasoning on:
• Why bagging makes the random forest algorithm so effective at variance reduction.
• Why does bagging involve sampling with replacement?
• How do we prove variance reduction mathematically?
The code for this article and its practice exercise notebook has been provided towards the end of the article.
Let’s begin!
The overfitting experiment
Decision trees are popular for their interpretability and simplicity.
Yet, unknown to many, they are pretty infamous when it comes to overfitting any data they are given.
This happens because a standard decision tree algorithm greedily selects the best split at each node, making its nodes more and more pure as we traverse down the tree.
Unless we restrict its growth, nothing can stop a decision tree from 100% overfitting the training dataset.
For instance, consider that we have the following dummy data, and we intentionally want to 100% overfit it with, say, a linear regression model.
Dummy regression dataset
This task will demand some serious effort by the engineer.
In other words, we can’t just run linear_model.fit(X, y) in this case to directly overfit the dataset.
Instead, as mentioned above, this will require some serious feature engineering effort to entirely overfit the given dataset.
For instance, to intentionally overfit this dummy dataset, we would have to explicitly create relevant features, which, in this case, would mostly be higher-degree polynomial features.
This is shown below:
Linear regression overfits the dataset when we create higher-order polynomial features
As shown above, as we increase the degree of our feature $x$ in our polynomial regression, the model starts to overfit the dataset more and more.
With a polynomial degree of $40$, the model entirely overfits the dataset.
The point is that overfitting this dataset (or any dataset, for that matter) with linear regression typically demands some engineering effort.
While the above dataset was easy to overfit, a complex dataset with all sorts of feature types may require serious effort to intentionally overfit the data.
However, this is NEVER the case with a decision tree model.
In fact, overfitting any dataset with a decision tree demands no effort from the engineer.
In other words, we can simply run dtree_model.fit(X, y) to overfit any dataset, regression or classification.
This happens because a standard decision tree always continues to add new levels to its tree until all leaf nodes are pure.
As a result, it always $100\%$ overfits the dataset by default, as shown below:
Decision tree overfits the regression dataset
The same problem is observed in classification datasets as well.
For instance, consider the following dummy binary classification dataset.
Dummy binary classification dataset
It’s clear that there is some serious overlap between the two classes.
Yet, a decision tree does not care about that.
The model will still meticulously create its decision boundary such that it classifies the dataset with 100% accuracy.
This is depicted below:
It is important to address this problem.
Remedies to Prevent Overfitting
Of course, there are many ways to prevent this, such as pruning and ensembling.
The main focus of this article is ensembling, specifically bagging, so we won’t get into much detail about pruning.
Pruning is commonly used in tree-based models, where it involves removing branches (or nodes) to simplify the model.
For instance, we can intentionally restrict the decision tree from growing after a certain depth. In sklearn’s implementation, we can do this by specifying the max_depth parameter.
Specifying max_depth parameter in a decision tree model
Pruning is also possible by specifying the minimum number of samples required to split an internal node.
Another pruning technique is called the cost-complexity-pruning (CCP).
CCP considers a combination of two factors for pruning a decision tree:
• Cost (C): Number of misclassifications
• Complexity (C): Number of nodes
Of course, dropping nodes will result in a drop in the model’s accuracy.
Thus, in the case of decision trees, the core idea is to iteratively drop sub-trees, which, after removal, leads to:
• a minimal increase in classification cost
• a maximum reduction of complexity (or nodes)
This is depicted below:
In the image above, both sub-trees result in the same increase in cost. However, it makes more sense to remove the sub-tree with more nodes to reduce computational complexity.
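That tie-breaking rule is easy to state in code (a sketch; `candidates` is a hypothetical list of (cost_increase, nodes_removed) pairs, one per prunable sub-tree):

```python
def pick_subtree_to_prune(candidates):
    # Prefer the smallest increase in misclassification cost; on a tie,
    # prefer the sub-tree whose removal drops MORE nodes.
    return min(candidates, key=lambda c: (c[0], -c[1]))

# Two sub-trees tie on cost (+2); the 7-node one is pruned first.
print(pick_subtree_to_prune([(2, 3), (2, 7), (5, 10)]))  # (2, 7)
```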
In sklearn, we can control cost-complexity-pruning using the ccp_alpha parameter:
• large value of ccp_alpha → results in underfitting
• small value of ccp_alpha → results in overfitting
The objective is to determine the optimal value of ccp_alpha, which gives a better model.
The effectiveness of cost-complexity-pruning is evident from the image below:
• Training the decision tree without any cost-complexity-pruning results in a complex decision region plot, and the model exhibits 100% accuracy.
• However, by tuning the ccp_alpha parameter, we prevented overfitting while improving the test set accuracy.
Ensemble learning
Another widely used technique to prevent overfitting is ensemble learning.
In a gist, an ensemble combines multiple models to build a more powerful model.
Whenever I wish to intuitively illustrate their immense power, I use the following image:
They are fundamentally built on the idea that by aggregating the predictions of multiple models, the weaknesses of individual models can be mitigated. Combining models is expected to provide better
overall performance.
Ensembles are primarily built using two different strategies:
1. Bagging
2. Boosting
1) Bagging
Bagging diagram
Here’s how it works:
• Bagging creates different subsets of data with replacement (this is called bootstrapping).
• Next, we train one model per subset.
• Finally, we aggregate all predictions to get the final prediction.
Some common models that leverage Bagging are:
• Random Forests
• Extra Trees
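The bootstrapping step itself is tiny. The sketch below (pure Python; column sampling and model training omitted) also hints at why sampling with replacement matters: each bootstrap sample of size n contains only about 63% of the distinct original rows, so every model sees a genuinely different dataset:

```python
import random

def bootstrap_sample(rows, rng):
    # Draw len(rows) rows WITH replacement.
    return [rng.choice(rows) for _ in range(len(rows))]

rng = random.Random(42)
rows = list(range(1000))
sample = bootstrap_sample(rows, rng)

print(len(sample))                             # 1000: same size as the original
print(round(len(set(sample)) / len(rows), 2))  # ~0.63 (about 1 - 1/e) distinct rows
```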
2) Boosting
Boosting diagram
Here’s how it works:
• Boosting is an iterative training process.
• The subsequent model puts more focus on misclassified samples from the previous model
• The final prediction is a weighted combination of all predictions
Some common models that leverage Boosting are:
Overall, ensemble models significantly boost the predictive performance compared to using a single model. They tend to be more robust, generalize better to unseen data, and are less prone to overfitting.

As mentioned above, the focus of this article is specifically Bagging.
In my experience, there are plenty of resources that neatly describe:
• How Bagging algorithmically works in random forests.
• Experimental demo on how Bagging reduces the overall variance (or overfitting).
For instance, we can indeed verify variance reduction ourselves experimentally.
The following diagram shows the decision region plot obtained from a decision tree and random forest model:
It’s pretty clear that a random forest does not exhibit as high variance (overfitting) as the decision tree model does.
Typically, these resources explain the idea of Bagging as follows:
Instead of training one decision tree, train plenty of them, each on a different subset of the dataset generated with replacement. Once trained, average the predictions of all individual
decision tree models to obtain the final prediction. This reduces the overall variance and increases the model’s generalization.
However, these resources often struggle to provide an intuition on:
1. Why Bagging is so effective.
2. Why we sample rows from the training dataset with replacement.
3. The mathematical demonstration that verifies variance reduction.
Thus, in this article, let me address all of these above questions and provide you with a clear and intuitive reasoning on:
• Why bagging makes the random forest algorithm so effective at variance reduction.
• Why does bagging involve sampling with replacement?
• How do we prove variance reduction mathematically?
Towards the end, we shall also build an intuition for the Extra Trees algorithm and how it further contributes to variance reduction.
Once we understand the objective bagging tries to achieve, we shall also formulate new strategies to build our own bagging algorithms.
Let’s begin!
Motivation for Bagging
As shown in an earlier diagram, the core idea in a random forest model is to train multiple decision tree models, each on a different sample of the training dataset.
Bagging diagram
During inference, we take the average of all predictions to get the final prediction:
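In code, that aggregation step might look like the following sketch (the lambdas are stand-ins for trained trees; real random forests average tree outputs for regression and majority-vote for classification):

```python
def bagged_regression_predict(models, x):
    # Average the individual predictions.
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

def bagged_classification_predict(models, x):
    # Majority vote over the individual predicted labels.
    preds = [m(x) for m in models]
    return max(set(preds), key=preds.count)

trees = [lambda x: x + 1, lambda x: x - 1, lambda x: x + 3]
print(bagged_regression_predict(trees, 10))           # 11.0

classifiers = [lambda x: "cat", lambda x: "dog", lambda x: "cat"]
print(bagged_classification_predict(classifiers, 0))  # cat
```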
And as we saw earlier, training multiple decision trees reduces the model's overall variance.
But why?
Let’s dive into the mathematics that will explain this.
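As a preview of that mathematics: a standard result says that the average of n identically distributed estimators, each with variance σ² and average pairwise correlation ρ, has variance ρσ² + (1 − ρ)σ²/n. With independent models (ρ = 0) the variance shrinks like 1/n; with perfectly correlated models (ρ = 1) averaging does nothing — which is exactly why bagging tries to decorrelate the trees. A quick numeric sketch:

```python
def var_of_average(sigma2, n, rho):
    # Variance of the mean of n identically distributed estimators with
    # individual variance sigma2 and average pairwise correlation rho.
    return rho * sigma2 + (1 - rho) * sigma2 / n

print(var_of_average(1.0, 100, 0.0))  # 0.01 -> full 1/n reduction
print(var_of_average(1.0, 100, 1.0))  # 1.0  -> no reduction at all
print(var_of_average(1.0, 100, 0.3))  # ~0.307 -> floor of rho * sigma2
```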
Worksheet on Calculating Distance
Practice the questions given in the worksheet on calculating distance. Learn how to solve different problems on calculating distance.
We know, the formula to calculate distance = speed × time.
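For instance, the formula applied to problem 1 below looks like this (a quick Python check):

```python
# Problem 1: a train covers 168 km in 4 hours.
speed = 168 / 4             # speed = distance / time = 42 km/hr

# Distance covered in 80 minutes (converted to hours):
distance = speed * 80 / 60
print(distance)             # 56.0 km, matching answer 1
```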
1. A train covers 168 km in 4 hours. How much distance will it cover in 80 minutes?
2. If a motorist moves with a speed of 30 km/hr and covers the distance from place A to place B in 3 hours 30 minutes, find the distance between place A and place B.
3. Mohan drives a car at a uniform speed of 60 km/hr, find how much distance is covered in 90 minutes?
4. A train is running at a speed of 72 km/hr, how far will it go in 7 seconds?
5. A car travels at a speed of 54 km/hr. How many meters will it travel in 1 second?
6. How much farther can an interstate bus go travelling at 70 km/hr rather than 60 km/hr in 6 hours?
7. I walk at 3 km/hr and miss the bus by 2 minutes. If I walk at 4 km/hr, I reach 3 minutes before the bus arrives. How far do I walk to reach the bus stand?
8. A car travels a distance of 170 km in 2 hours, partly at a speed of 100 km/hr and partly at 50 km/hr. Find the distance travelled at the speed of 100 km/hr.
9. Aaron and Sam cover the same distance on foot at the speed of 8 km/hr and 6 km/hr. Find the distance covered by each one of them when one takes 15 minutes longer than the other.
10. Sound travels at a speed of 1188 km in one hour. How many meters will it travel in one second?
Answers for the worksheet on calculating distance are given below to check the exact answers of the above questions on different problems.
1. 56 km
2. 105 km
3. 90 km
4. 140 m
5. 15 m
6. 60 km
7. 1 km
8. 140 km
9. 6 km
10. 330 m/sec
Worksheet on Conversion of Units of Speed
Worksheet on Calculating Speed
Worksheet on Calculating Distance
Worksheet on Train Passes through a Pole
Worksheet on Train Passes through a Bridge
Worksheet on Decimal into Percentage
Math Home Work Sheets
Unscramble RALJ
How Many Words are in RALJ Unscramble?
By unscrambling letters ralj, our Word Unscrambler aka Scrabble Word Finder easily found 7 playable words in virtually every word scramble game!
Letter / Tile Values for RALJ
Below are the values for each of the letters/tiles in Scrabble. The letters in ralj combine for a total of 15 points (not including bonus squares)
What do the Letters ralj Unscrambled Mean?
The unscrambled words with the most letters from RALJ word or letters are below along with the definitions.
• jarl (n.) - A chief; an earl; in English history, one of the leaders in the Danish and Norse invasions.
Fairly new to Smartsheets - Having trouble with a countifs / date formula. Need your help!
I have a "master data" sheet that has a "DATE OF INCIDENT" field column, and a "FOUND ON A SCHEDULED INSPECTION" field column that is populated as either yes or no. These are the two references I'll
want to use in my formula on my metrics sheet.
In my metrics sheet I want to use a COUNTIFS statement to tell me how many passed each month for example.
=COUNTIFS({FOUND ON A SCHEDULED INSPECTION}, "NO", [{MONTH({DATE OF INCIDENT})}, "7"])
This was the formula I came up with, but it returned an "UNPARSEABLE" error. Is this because I'm trying to use multiple functions nested together in the same cell? I have seen some videos on YouTube
where users seem to be adding more columns to their "master data" sheet, breaking the date column into a column with a numerical month number and a column for the year. Can anyone explain this? And when to
use a report rather than your "master data" sheet for extra columns?
thanks in advance for any insight :)
Best Answers
• Hello @cable_guist and welcome!
When you're doing a criterion on a range you need to do it after defining the range.
COUNTIFS(range1, criterion1, range2, criterion2…)
In this case that means you need to adjust your formula to this:
=COUNTIFS({FOUND ON A SCHEDULED INSPECTION}, "NO", {DATE OF INCIDENT}, Month(@cell) = 7)
The Month(@cell) = 7 part is basically saying for any given cell within that range that matches the criteria of equaling 7.
• @ericncarr And if you wanted to expand that formula further to only count the cells that also match a specific year ?
• You would expand your criteria for the Date of Incident range with AND.
AND(logical_expression1, logical_expression2,…)
In your case:
=COUNTIFS({FOUND ON A SCHEDULED INSPECTION}, "NO", {DATE OF INCIDENT}, AND(MONTH(@cell ) = 7, YEAR(@cell) = 2024))
I don't know how you have your metric sheet set up, but I usually have dates in one column of the metric sheet so I can use a column formula instead of having to update the formula in every cell
which would look something like this:
=COUNTIFS({FOUND ON A SCHEDULED INSPECTION}, "NO", {DATE OF INCIDENT}, AND(MONTH(@cell ) = Month(Date@row), YEAR(@cell) = YEAR(Date@row)))
My date column in the metric sheet would just have the first day of each month, so 7/1/24, 8/1/24, 9/1/24 and so on.
• If I wanted to build a leaderboard with the name of the person who submitted / Reported the most tickets/incidents (essentially my rows in the master sheet), how would I go about using formulas
to achieve this output table on my dashboard?
If I have 200 people working at the company using the system submitting tickets then I have a problem with the current smartsheet knowledge.
The only way I know how to do that now is by creating a 200-row table in my metrics sheet that, where each row, would count for a single unique name from the 200 possible. Basically using SUMIF
or COUNTIF formula. What is the recommended way to do this efficiently to avoid this extra 200 rows of work?
• @cable_guist if I'm understanding correctly you can either use rank or max.
There are multiple ways you could go about, but I would sum the number submitted/reported as a column formula in the main dataset - so do your countif based on month, year, and submitter (or
whatever criteria you want to be counting for).
Then in your metric sheet or wherever you want to list the person(s) who submitted the most, you could pull the submitter with an index(collect()) based, again, on whatever criteria you want for
that (month or year or all time) and then pull in the number submitted from the column you created in the data set.
Max would get you the top submitter. If you want a certain number of top submitters you could have a column for rank (1, 2, 3, 4…etc.) that uses the rankeq function to pull in the top submitter
based on the rank@row.
So only 1 row or as many rows as you want for the number of top submitters you want to trap.
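A hedged sketch of what those formulas might look like (the column and cross-sheet range names here — Count, Rank, {Submitter}, {Rank}, {Count} — are assumptions for illustration, not taken from the thread):

```
// In the metric/data sheet: rank each submitter by their count (0 = descending)
=RANKEQ(Count@row, Count:Count, 0)

// In the leaderboard: pull the submitter whose rank matches Rank@row
=INDEX(COLLECT({Submitter}, {Rank}, Rank@row), 1)

// Or just the single top submitter, via MAX
=INDEX(COLLECT({Submitter}, {Count}, MAX({Count})), 1)
```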
Reconstruction of Markov random fields from samples: Some observations and algorithms
Markov random fields are used to model high dimensional distributions in a number of applied areas. Much recent interest has been devoted to the reconstruction of the dependency structure from
independent samples from the Markov random fields. We analyze a simple algorithm for reconstructing the underlying graph defining a Markov random field on n nodes and maximum degree d given
observations. We show that under mild nondegeneracy conditions it reconstructs the generating graph with high probability using Θ(dε^-2 δ^-4 log n) samples, where e, δ depend on the local
interactions. For most local interactions ε,δ are of order exp(- O(d)). Our results are optimal as a function of n up to a multiplicative constant depending on d and the strength of the local
interactions. Our results seem to be the first results for general models that guarantee that the generating model is reconstructed. Furthermore, we provide explicit O(n^d+2 ε^-2 δ^-4 log n)
running-time bound. In cases where the measure on the graph has correlation decay, the running time is O(n^2 log n) for all fixed d. We also discuss the effect of observing noisy samples and show
that as long as the noise level is low, our algorithm is effective. On the other hand, we construct an example where large noise implies nonidentifiability even for generic noise and interactions.
Finally, we briefly show that in some simple cases, models with hidden nodes can also be recovered.
All Science Journal Classification (ASJC) codes
• General Computer Science
• General Mathematics
• Algorithms
• Correlation decay
• Markov random fields
AI Is Helping Mathematicians Build A Periodic Table Of Shapes
Atomic shapes are so simple that they can’t be broken down any further. Mathematicians are trying to build a “periodic table” of these shapes, and they hope artificial intelligence can help.
Mathematicians attempting to build a “periodic table” of shapes have turned to artificial intelligence for help – but say they don’t understand how it works or whether it can be 100 per cent
Tom Coates at Imperial College London and his colleagues are working to classify shapes known as Fano varieties, which are so simple that they can’t be broken down into smaller components. Just as
chemists arranged elements in the periodic table by their atomic weight and group to reveal new insights, the researchers hope that organising these “atomic” shapes by their various properties will
help in understanding them.
The team has assigned each atomic shape a sequence of numbers derived from features such as the number of holes it has or the extent to which it twists around itself. This acts as a bar code to
identify it.
Coates and his colleagues have now created an AI that can predict certain properties of these shapes from their bar code numbers alone, with an accuracy of 98 per cent – suggesting a relationship
that some mathematicians intuitively thought might be real, but have found impossible to prove.
Unfortunately, there is a vast gulf between demonstrating that something is very often true and mathematically proving that it is always so. While the team suspects a one-to-one connection between
each shape and its bar code, the mathematics community is “nowhere close” to proving this, says Coates.
“In pure mathematics, we don’t regard anything as true unless we have an actual proof written down on a piece of paper, and no advances in our understanding of machine learning will get around this
problem,” says team member Alexander Kasprzyk at the University of Nottingham, UK.
Even without a proven link between the Fano varieties and bar codes, Kasprzyk says that the AI has let the team organise atomic shapes in a way that begins to mimic the periodic table, so that when
you read from left to right, or up and down, there seem to be generalisable patterns in the geometry of the shapes.
“We had no idea that would be true, we had no idea how to begin doing it,” says Kasprzyk. “We probably would still not have had any idea about this in 50 years’ time. Frankly, people have been trying
to study these things for 40 years and failing to get to a picture like this.”
The team hopes to refine the model to the point where missing spaces in its periodic table could point to the existence of unknown shapes, or where clustering of shapes could lead to logical
categorisation, resulting in a better understanding and new ideas that could create a method of proof. “It clearly knows more things than we know, but it’s so mysterious right now,” says team member
Sara Veneziale at Imperial College London.
Graham Niblo at the University of Southampton, UK, who wasn’t involved in the research, says that the work is akin to forming an accurate picture of a cello or a French horn just from the sound of a
G note being played – but he stresses that humans will still need to tease understanding from the results provided by AI and create robust and conclusive proofs of these ideas.
“AI has definitely got uncanny abilities. But in the same way that telescopes didn’t put astronomers out of work, AI doesn’t put mathematicians out of work,” he says. “It just gives us a new tool
that allows us to explore parts of the mathematical landscape that were out of reach, or, like a microscope, that were too obscure for us to notice with our current understanding.”
For more such insights, log into www.international-maths-challenge.com.
*Credit for article given to Matthew Sparkes*
Python Binary Search
Here you will learn about python binary search with program and algorithm.
In linear search, we have to check each element one by one, so the time complexity grows linearly with the size of the input. Binary search reduces this cost: after just one comparison, half of the given array can be ignored.
The main point to note is that binary search only works on a sorted array.
If the array is sorted in ascending order, all we have to do is find the middle index of the array and compare the element at that index with the element we are looking for. If the given element is greater than the element at the middle index, we ignore the left half and continue the search from the index just after the middle. If the given element is less than the element at the middle index, we ignore the right half and continue the search up to the index just before the middle. If the given element is equal to the element at the middle index, the search is complete. If the first index ever becomes greater than the last index, we have gone through the entire array and the given element is not present in it.
Python Binary Search
We have a sorted array [2, 14, 19, 21, 99, 210, 512, 1028, 4443, 5110] and the element to be found is 4443.
Binary_search(arr, starting index, last index, element)
Step 1: mid = (starting index + last index) / 2
Step 2: If starting index > last index
            Then, print "Element not found"
        Else if element > arr[mid]
            Then, starting index = mid + 1
            Go to Step 1
        Else if element < arr[mid]
            Then, last index = mid - 1
            Go to Step 1
        Else { element == arr[mid] }
            Print "Element present at position" + mid
Program for Binary Search in Python
Iterative Approach (Using Loop):
def Binary_search(arr, start_index, last_index, element):
    while start_index <= last_index:
        mid = (start_index + last_index) // 2
        if element > arr[mid]:
            start_index = mid + 1
        elif element < arr[mid]:
            last_index = mid - 1
        else:  # element == arr[mid]
            return mid
    return -1
arr = [2, 14, 19, 21, 99, 210, 512, 1028, 4443, 5110]
element = 4443
start_index = 0
last_index = len(arr) - 1
found = Binary_search(arr, start_index, last_index, element)
if found == -1:
    print("element not present in array")
else:
    print("element is present at index " + str(found))
Output:

element is present at index 8
Recursive Approach (Using Recursion):
def Binary_search(arr, start_index, last_index, element):
    if start_index > last_index:
        print("Element not found")
        return
    mid = (start_index + last_index) // 2
    if element > arr[mid]:
        Binary_search(arr, mid + 1, last_index, element)
    elif element < arr[mid]:
        Binary_search(arr, start_index, mid - 1, element)
    else:
        print("element is present at index " + str(mid))
arr = [2, 14, 19, 21, 99, 210, 512, 1028, 4443, 5110]
element = 99
start_index = 0
last_index = len(arr) - 1
Binary_search(arr, start_index, last_index, element)

Output:

element is present at index 4
Comment below if you have any queries related to above python binary search algorithm tutorial.
6 thoughts on “Python Binary Search”
so easy. and understanding for everyone
Great post and nice explanation.
One question though…
Why use this syntax for type conversion?
mid = (int)(start_index+last_index)/2
It is used in the C language and I'm not sure these are the same things.
This works for any function:
>>> def f(a):
...     return a + 1
>>> (f)(1.5)
2.5
You could have done this:
mid = int((start_index+last_index)/2)
which makes more sense in Python. Also, it seems you are using Python 2, in which the '/' operator returns an integer if both operands are integers, so the type conversion is unnecessary.
how to develop our coding skills
loop approach is wrong
your program is not working please check it
The program does not give the correct output in case of data(input value) duplication.
Please rectify the bug in the program.
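On the duplicate-values point raised in the comments above: a leftmost-match binary search avoids returning an arbitrary occurrence. A minimal sketch using Python's standard `bisect` module (the function name `binary_search_leftmost` is mine, not from the tutorial):

```python
import bisect

def binary_search_leftmost(arr, element):
    """Return the index of the FIRST occurrence of element in sorted arr, or -1."""
    i = bisect.bisect_left(arr, element)  # leftmost insertion point for element
    if i < len(arr) and arr[i] == element:
        return i
    return -1

arr = [2, 14, 14, 14, 19, 21, 99]
print(binary_search_leftmost(arr, 14))   # -> 1 (first of the duplicates)
print(binary_search_leftmost(arr, 100))  # -> -1 (not present)
```

The same idea can be written by hand: instead of returning as soon as arr[mid] == element, keep shrinking the right boundary until the interval closes on the leftmost match.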
Recitation 7: Trees and Traversal Techniques - Solution
1. Tree Data Structure
2. Binary Trees
3. Preorder traversal
4. Inorder traversal
5. Postorder traversal
6. Exercise on traversal techniques
Recitation 7,
Trees and traversal techniques
1.Tree Data Structure
Technical definition: A tree is a collection of entities called nodes, connected by edges. Each node contains a value or data, and it may or may not have child nodes.
A tree is an example of a non-linear data structure: it represents the hierarchical nature of a structure in graphical form.
In simple terms, a tree is a data structure similar to a linked list, but instead of each node pointing to a single next node in a linear fashion, each node can point to a number of nodes.
In the tree ADT (abstract data type), the order of elements is not important. If we need ordering information, linear data structures such as linked lists, stacks and queues can be used.
□ The root of a tree is the node with no parents. There can be at most one root node in a tree.
□ An edge refers to the link from parent to child.
□ A node with no children is called leaf node.
□ Children of same parent are called siblings.
□ A node p is an ancestor of node q if there exists a path from root to q and p appears on the path.
□ The set of all nodes at a given depth is called a level of the tree. The root node is at level 0.
2. Binary Trees
A binary tree is a tree in which each node has at most two children, referred to as the left child and the right child; i.e. each node has zero, one or two children.
An empty tree is also a valid binary tree. We can visualise a binary tree as consisting of a root and two disjoint binary trees, called the left and right subtrees of the root.
Types of binary trees
1. Full Binary Tree – A full binary tree is a binary tree in which all nodes except leaves have two children.
2. Complete Binary Tree – A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible.
3. Perfect Binary Tree – A perfect binary tree is a binary tree in which all internal nodes have two children and all leaves are at the same level.
Example code to create a Binary Tree:
struct Node
{
    int data;
    Node *left;
    Node *right;
};
Applications of trees data structure
1. Expression trees are used in compilers.
2. Manipulate hierarchical data.
3. Used to implement search algorithms.
3. Binary Tree Traversals
In order to process trees, we need a mechanism for traversing them. The process of visiting all nodes of a tree is called tree traversal. Each node is processed only once, but it may be visited more than once. As we have already seen, in linear data structures like linked lists, stacks and queues, the elements are visited in sequential order; in tree structures, there are many different orders in which the nodes can be visited.
Traversal possibilities
Starting at the root of a binary tree, there are 3 main steps that can be performed and the order in which they are performed defines the traversal type. These steps are: performing an action on the
current node, traversing to the left child node, and traversing to the right child node. This process can be easily defined through recursion.
1. LDR: Process left subtree, process the current node data and then process the right subtree
2. LRD: Process left subtree, process right subtree, process current node data.
3. DLR: Process current node data, process left subtree, process right subtree.
4. DRL: Process current node data, process right subtree, process left subtree.
5. RDL: Process right subtree, process current node data, process left subtree.
6. RLD: Process right subtree, process left subtree, process current node data.
Classifying the traversals
The sequence in which these nodes are processed defines a particular traversal method. The classification is based on the position of the current node (D): if D comes in the middle, it does not matter whether L or R sits to its left, and likewise for the other positions, so each traversal and its mirror image fall into the same class. This reduces the six possibilities to three (keeping the variants that process the left subtree before the right):
1. Preorder Traversal (DLR)
2. Inorder Traversal (LDR)
3. PostOrder Traversal (LRD)
There is another traversal method which does not depend on above orders and it is :
Level Order Traversal. (We will cover this later.)
Note – The traversal techniques below can be implemented both recursively and iteratively. For this class, however, we will only focus on the recursive versions, since they are much simpler and easier to understand.
PreOrder Traversal
In preorder traversal, each node is processed before (pre) either of its subtrees. This is the simplest traversal to understand. However, even though each node is processed before its subtrees, some information must still be maintained while moving down the tree: processing must return to the right subtree after finishing the left subtree, so we must keep the root information. The obvious ADT for such information is a stack; because of its LIFO structure, it gives back the pending right subtrees in reverse order.
Preorder traversal is defined as follows.
1. Visit the root.
2. Traverse the left subtree in Preorder.
3. Traverse the right subtree in Preorder.
void preorder(Node *root)
{
    if (root != NULL)
    {
        cout << root->data << " ";  // visit the root
        preorder(root->left);
        preorder(root->right);
    }
}
InOrder Traversal
In inorder traversal, the root is visited between the subtrees. Inorder traversal is defined as follows.
1. Traverse the left subtree in Inorder.
2. Visit the root.
3. Traverse the right subtree in Inorder.
void inorder(Node *root)
{
    if (root != NULL)
    {
        inorder(root->left);
        cout << root->data << " ";  // visit the root
        inorder(root->right);
    }
}
PostOrder Traversal
In post order traversal, the root is visited after both subtrees. Postorder traversal is defined as follows.
1. Traverse the left subtree in PostOrder.
2. Traverse the right subtree in PostOrder.
3. Visit the root.
void postorder(Node *root)
{
    if (root == NULL) {
        return;
    }
    postorder(root->left);
    postorder(root->right);
    cout << root->data << " ";  // visit the root
}
Example tree with PreOrder, InOrder and PostOrder traversals:

        1
       / \
     12   9
    /  \
   5    6

Inorder (Left, Root, Right): 5, 12, 6, 1, 9
Preorder (Root, Left, Right): 1, 12, 5, 6, 9
Postorder (Left, Right, Root): 5, 6, 12, 9, 1
4. Complexity Analysis of Binary Tree Traversals
For all of these traversals – whether done recursively or iteratively we visit every node in the binary tree. That means that we’ll get a runtime complexity of O(n) – where n is the number of nodes
in the binary tree.
Download the Lab7 zipped file from Moodle. It has the header and implementation files for the tree class. Follow the TODO details.
1. Implement deleteTree function which deletes all the nodes of the tree (Silver Problem – Mandatory)
2. Implement sumNodes function which returns the sum of all nodes present in the tree (Gold Problem)
Solver Option | Oasys GSA Documentation
Analysis Wizard : Solver Option
All analysis options are controlled from the analysis wizard and depend on the selected solver option.
Task Name
This is a name to associate with this task. This is used as the basis of the analysis case name, so that if there is more than one modal analysis the particular modes can be clearly identified.
Solution Type
The solution type breaks down into a number of main categories:
Solver, Task and Initial Case
The solver option depends on the analysis requested. The task number is determined by GSA, but the initial case can be specified by the user.
GSA can apply a range of imperfections in an analysis. The most commonly specified is an imperfection that is a function of height: this option allows an imperfection in the x or y direction that varies linearly with height. More complex imperfections are set in the Advanced options.
Analysis Stage
The analysis task will refer to the selected stage. If stages are not required, the 'Whole model' option includes everything in the model.
varTestnlme citation info
To cite varTestnlme in publications use:
Baey C, Kuhn E (2023). “varTestnlme: An R Package for Variance Components Testing in Linear and Nonlinear Mixed-Effects Models.” Journal of Statistical Software, 107(6), 1–32. doi:10.18637/jss.v107.i06.
To cite the associated theoretical paper use:
Baey C, Cournède P, Kuhn E (2019). “Asymptotic distribution of likelihood ratio test statistics for variance components in nonlinear mixed effects models.” Computational Statistics and Data
Analysis, 135, 107-122. doi:10.1016/j.csda.2019.01.014.
Corresponding BibTeX entries:

  @Article{,
    title = {{varTestnlme}: An {R} Package for Variance Components
      Testing in Linear and Nonlinear Mixed-Effects Models},
    author = {Charlotte Baey and Estelle Kuhn},
    journal = {Journal of Statistical Software},
    year = {2023},
    volume = {107},
    number = {6},
    pages = {1--32},
    doi = {10.18637/jss.v107.i06},
  }

  @Article{,
    title = {Asymptotic distribution of likelihood ratio test
      statistics for variance components in nonlinear mixed effects
      models},
    author = {Charlotte Baey and Paul-Henry Cournède and Estelle Kuhn},
    year = {2019},
    journal = {Computational Statistics and Data Analysis},
    volume = {135},
    pages = {107-122},
    doi = {10.1016/j.csda.2019.01.014},
  }
The Least Memorization Possible
January 11, 2017
I’ve found that there are two general groups of people when it comes to subjects like mathematics and physics. There are those who memorize, and those who internalize the material. Both can bring
understanding to the student, but they are much different.
One of my mathematics professors illustrated this when he said, “As a mathematician, I like to do the least memorization possible.”
At first, this struck me as a little ironic, because a staple of mathematics exams is just remembering truckloads of formulas for various situations. I know that at least in my classes in university, we get no formula sheets, no commonly used expressions (such as the trigonometric identities), no unit circle, or anything else. Everything needs to stay in our heads, which means we have to memorize some things.
However, the deeper point I think he was trying to get at was that mathematics isn't about remembering formulas and knowing when to use them. Sure, that's what happens when we work with these ideas for a long time and get used to them, but the steps and procedures we take shouldn't feel foreign. At the very least, they need to be logical and consistent. In a double integration by parts with, say, the factors $x^2$ and $e^x$, choosing the former as $u$ and the latter as $dv$ but then making the opposite choice after the first integration by parts isn't logical.
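To spell that example out (my worked version, not the professor's): keep choosing the polynomial factor as $u$ at every step, and the integral resolves cleanly:

```latex
\begin{aligned}
\int x^2 e^x \, dx
  &= x^2 e^x - 2\int x e^x \, dx && (u = x^2,\; dv = e^x\,dx) \\
  &= x^2 e^x - 2\Bigl(x e^x - \int e^x \, dx\Bigr) && (u = x,\; dv = e^x\,dx) \\
  &= \bigl(x^2 - 2x + 2\bigr) e^x + C.
\end{aligned}
```

Swapping the roles after the first step (taking $u = e^x$, $dv = x\,dx$) just reproduces the original integral, which is exactly the kind of inconsistency he meant.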
I try to keep this in mind when working on both improving my skills in a mathematics or physics class and while trying to tie everything together for the end of the semester. I don’t want to remember
a thousand different formulas. Instead, I want to remember the intuitive and powerful principles that I learned throughout the semester and be able to apply them when I get to problems, without
necessarily memorizing everything.
However, I do want to point out one final thing: a lot of teachers are being a bit disingenuous when they tell you, "It's not the end of the world if you don't remember a formula. You can easily re-derive it." Sure, that's true, but most students do not have the time to re-derive a formula and then answer the question that was troubling them on a test. This is particularly true if the test is short (as mine were) or if there are multiple questions for which you have to do this. Given enough time, I'm sure I could get the formulas I needed, but that kind of time isn't typically available during tests. That's why it bugs me when teachers say this: it's true, but not practical.
In a broader sense though, there's something nice about being able to remember a few principles and working from there. I'm not saying you have to reinvent calculus for your test, but it might not hurt to try to “compress” the number of things you need to remember into broader categories that can adapt to your specific situation.
Our SSTP course covers the following areas:
• Reasoning – Mathematics
• General ability – Quantitative
• Reasoning – Reading
• General ability – Verbal
• Persuasive and creative writing
When should you prepare?
Students can begin preparing for the test from year 7. Every year we commence the Mathematics course in February and the English course in July.
Program structure
Our program consists of two main courses:
The Mathematics course prepares students for the mathematics and quantitative tests, while the English course prepares students for the writing, reading and general ability – verbal tests. In each course, students learn theory followed by questions practising different strategies, and learn to answer questions within a given time frame. At the end of each course, students complete practice tests covering the content learnt. Students who wish to take both the Mathematics and English courses need to attend twice weekly.
Procedures and Systems
Circle true or false to the following: True False
I like to keep things in order A C
I am quick at making conclusions about most things C A
Traditional solutions are the best A C
Other people's problems don't interest me B C
I rarely question or doubt what other people say C B
I don't always finish tasks on time C A
I feel comfortable in nearly all social situations C B
I like to predict results before beginning to do anything A C
I like working under pressure B C
I enjoy being challenged by new tasks C A
People are usually convinced by my arguments C B
Checking detail is not one of my strong points C A
Clear and distinct thought is important to me B C
I find it hard to express myself in groups B C
I always try to finish what I start A C
The beauty of nature often astounds me C B
Total A answers__4___
Total B answers__2___
Total A and B answers___6__
Communications and the Arts
Circle true or false to the following: True False
I would like to present TV programmes A C
I sometimes find it difficult to say what I mean C A
I think I could write good short stories A C
I could do drawings for new designs B C
My knowledge of the arts is rather limited C B
I prefer doing practical things to reading or creative writing C A
I rarely notice the design of clothes C B
I enjoy talking to others about their opinions A C
I am full of creative ideas B C
I find most fiction rather uninteresting C A
I am not very inventive C B
I am a very down-to-earth person C A
I would like to exhibit my photographs or paintings for others to see B C
I could design something which was visually attractive B C
Translating foreign languages would appeal to me A C
Unconventional people make me feel uncomfortable C B
Total A answers__6___
Total B answers__3___
Total A and B answers___9__
Science and Engineering
Circle true or false to the following: True False
I am good at finding the weaknesses in arguments A C
I nearly always make spontaneous decisions C A
Thinking up new ideas is easy for me A C
I'm not good at persuading others B C
I enjoy organizing things in advance C B
Thinking in the abstract helps to solve problems C A
Mending things is not one of my strong points C B
Talking about possibilities that might never happen is enjoyable A C
Other people's comments about me don't hurt me B C
I try to solve problems by intuition and personal feelings C A
I don't always finish what I begin C B
I don't try to hide my emotions C A
I find it easy to find solutions to practical problems B C
Traditional methods are usually the best ones B C
My independence is very important to me A C
I enjoy reading classical literature C B
Total A answers__3___
Total B answers_3____
Total A and B answers__6___
Ignore all C responses. They simply indicate a lack of interest in a particular area, and should not be included in your scoring.
You should now have four scores, each between 0 and 16. A score of 0-4 shows very little interest in a particular area. 5-12 is about average. A score of 13 and over shows a strong interest, and the
highest of your four scores indicates which area of work is most likely to suit the requirements of your individual personality.
Caring Influence
Medical Control
Welfare Commercial
Education Managerial
Date: 2016-01-05; view: 1523
The Advanced Numerical Simulation Domain
The SNA Scientific Domain
Laurent Cambier, Scientific Director of the Advanced Numerical Simulation domain (SNA)
based on INRIA's Gamma team (meshing methods for numerical simulation). This alliance will accelerate the adoption of adaptive meshing methods for ASD applications such as combustion,
magneto-hydro-dynamics, plasma and hypersonic flows.
On October 18, 2022, ONERA co-founded the LARTISSTE scientific interest group (GIS) for the quantification of uncertainties, with Université Paris-Saclay and a very wide range of participants, from
academic research to industry, including major organizations such as Airbus, Safran, EDF, CEA, IFPEN, Cerfacs, ENS Paris-Saclay, CentraleSupélec... This GIS, which brings together major players in
the energy and aeronautics sectors, will explore important issues in the reasoned use of numerical models for prediction and decision-making purposes.
Major HPC challenges for young researchers As its Sator supercomputer ramps up, ONERA has offered its young research engineers a few million hours of computing over two months in autumn 2021. The
initiative was spearheaded by Alain Refloch, in charge of the High-Performance Computing mission, and Laurent Cambier, Scientific Director for Advanced Numerical Simulation. See images of some of
these challenges.
LMA2S Maths Apps Lab - Laboratory of Applied Mathematics for Aeronautics and Space is a virtual inter-departmental laboratory (created in 2019) that federates ONERA's scientific community of
mathematicians and numerical engineers, giving it greater visibility. The aim is to optimize the resources allocated to this discipline by encouraging the pooling of work.. [w3.onera.fr/lma2s]
Next-generation software SONICS and A-set are the digital simulation codes of the future for ONERA and its partners, respectively for aerodynamics and mechanics of materials and structures, in
partnership with the Safran group. Deadline 2025.
ONERA is a member of the OpenTURNS consortium. OpenTURNS is a software library written in Python and C++ dedicated to the treatment of uncertainties in numerical simulation. The consortium includes Airbus, EDF R&D, IMACS (Polytechnique), Phimeca and ONERA.
SNA in brief
SNA integrates the entire research and development process associated with ONERA's scientific and technical production in the form of software, and thus covers, for all physics disciplines, research
on modeling, studies on algorithms and applied mathematics, high-performance computing issues, coupling between different physics, model reduction techniques, and propagation and quantification of uncertainties.
The aims of numerical simulation are to provide an in-depth understanding of physics, to meet the needs of government agencies (safety, environment, certification, etc.) and to enhance the
competitiveness of industry. These activities are underpinned by high-level fundamental research and fruitful collaborations, leading to structuring projects that are often large-scale and long-term.
ONERA's software products take full advantage of ONERA's unique positioning. In addition to the triptych of experimentation-modeling-simulation and the availability of multi-disciplinary skills in
physical modeling and digital sciences, the close relationships forged with industrial partners enable detailed and evolving knowledge of needs, rapid transfer of research to industry and validation
through use on realistic configurations. ONERA's development of its own software meets its need for research autonomy and capitalization on its scientific and technical heritage.
The SNA domain concerns the development of methods and tools for numerical simulation. SNA is a cross-disciplinary field in close interaction with the four other scientific domains and ONERA's seven
research departments.
Scientific Officers
Philippe Villedieu
Bertrand Langrand
Hélène Piet-Lahanier
SNA themes
The SNA domain brings together three main scientific themes spread across four ONERA departments: Materials and Structures; Multiphysics for Energy; Aerodynamics, Aeroelasticity, Acoustics; Information Processing and Systems.
Numerous other scientific fields are related to the SNA domain. Examples include
• Acoustics; multi-fidelity optimization; multiphase flows; reactive flows MFE
• Aeroelasticity; structural mechanics; structural design and optimization MAS
• Simulation of the electromagnetic scene; environment and signatures for optronic sensors; lightning, plasmas and electric thrusters PHY
• Systems design and optimization TIS
Numerical methods
DMAS Christophe Bovet, Johann Rannou
• Development of finite element methods : theory, technology, software engineering
• Code coupling for multiphysics and multiscale modeling
• Model reduction
• New methods for high-performance computing (HPC) and exploitation of massive data
• Structural optimization algorithms
• Test-simulation dialogue algorithms
• Uncertainties and stochastic modeling
Phase field and cohesive zone simulation of accreted ice shedding
This type of calculation is carried out in order to design a technological test to be performed in ONERA's icing wind tunnel. Once a block of ice has formed on a profile, pressurized air is injected to induce detachment, and the characteristics of the detached ice are studied in relation to the aero-icing conditions and the characteristics of the substrate. Calculations are performed using the Z-set code, exploiting its parallel resolution capabilities. This work is carried out in collaboration with the Multiphysics for Energy department as part of the ONERA TRICEPS project.
DMPE Guillaume Puigt, Eric Quémerais, Ghislain Blanchard
• Space-time discretization: focus on finite volume and spectral differences
• Meshes: mesh operations, mesh adaptation, high-order curved mesh, etc.
• Multi-physics coupling: multi-solver coupling, multi-fluid approaches, fluid and solid particles, atomization...
• Adjoint methods: sensitivities, mesh adaptation...
• Eigenvalue problems
• Large linear systems: implicit methods for multi-physics...
• HPC: mesh management, solver architecture for HPC
Finite Volume axis - Cedre solver. Objective: spatial schemes of order greater than 2 on unstructured meshes.
Example of application to primary atomization in cryogenic combustion (finite volume with a 2nd-order multi-slope MUSCL scheme).
Spectral Difference axis - JAGUAR solver. Objective: scheme development on triangles/tetrahedra.
Family of schemes of order 3 to 6 (p = 2 to 5) on triangles; convergence of calculations on a laminar transonic NACA 0012 airfoil at M∞ = 0.8, α = 10°, Re = 500, despite non-alignment of the mesh with the solution.
JEROBOAM is a collaborative project focusing on discontinuous spectral methods for which the solutions in each cell are defined as polynomials of given degree p, and for which no continuity of fields
at cell interfaces is assumed. These approaches provide a class of numerical schemes whose order is controllable, and by their "mathematical nature" enable good scalability on massively parallel
machines (here, measured scalability of 96% on more than 121,000 cores for the JAGUAR solver). We are interested in these approaches in a perfect gas and reactive multi-species gas context. At the
heart of the project: numerical time integration schemes, spatial numerical schemes including shock capture and entropy schemes, coupling for multiphysics, boundary conditions for large-scale
turbulence simulations... Validation of these approaches involves stationary (minority) and especially unsteady flows, in order to demonstrate the LES-like capabilities of the approaches. The project ends in fall 2023 and will continue with HOGSMEADE, with a specific focus on multi-species flows, shocks, and an Euler/Euler approach for liquid/gas interaction.
DAAA Florent Renac, Bruno Maugars
• Space and time discretization
• Schemes for hypersonic flows
• Inversion of ill-conditioned linear systems
• Coupling methods for multiphysics
• A posteriori error estimation
• Mesh (h), scheme (p) and model (m) adaptations
• High-order CAD representation
• Methods for long-range noise propagation
REALITY - High-order numerical schemes for compressible fluid mechanics. To reduce aircraft design costs, numerical simulation has become an indispensable tool. We are concerned with turbulent flow
phenomena characterized by a wide range of time and space scales. The use of high-accuracy methods for their simulation is becoming a necessity as the levels of accuracy required in the aeronautical industry tend to become ever higher. In this context, the REALITY project aimed to enhance the robustness and efficiency of high-order simulations of the compressible fluid mechanics equations (encompassing turbulence, multi-species and two-phase models) using discontinuous Galerkin methods, where the solution is sought as piecewise polynomials in each element of unstructured polyhedral meshes.
A separate project (2020-2023, with ENSAM, Bundeswehr University Munich and CETC) aims to bring fluidic control technology closer to industrial use in aircraft engine compressors, in order to extend the surge (pumping) margin.
ONERA's first activity in this project was to evaluate the ability of the elsA software to predict the operability limit (rotating stall) of the CME2 compressor (Laboratoire de Mécanique des Fluides de Lille) using unsteady simulations over its entire circumference. Experimentally, only one stalled cell was observed. The simulations made it possible to better understand the mechanisms involved in the stall of the CME2 compressor rotor in the absence of fluidic control. In particular, elsA is able to reproduce the single-cell rotating stall observed during testing.
The next step involved numerical simulations of rotating stall control on the same CME2 compressor fitted with fluidic injectors. The simulations model the entire circumference of the machine,
including the complete injector geometry (20 pairs of injectors distributed over the entire circumference). Simulations show the beneficial effect of fluidic control. The actuators modify the flow
within the compressor, delaying entry into rotating stall by shifting it towards lower engine flows. Ultimately, this will enable the engine to operate at a higher efficiency level, and therefore
with lower fuel consumption, over a wider operating range.
Simulation and supercomputing
DAAA Ivan Mary, Yann Mauffrey
• Deformation and mesh adaptation (fluid and structure)
• Lattice-Boltzmann method (LBM)
• Chimera technique, IBM
• Octree
• Numerical methods for parallel and exascale computing
• Software architecture
• Pre- and post-processing of calculations
• Validation on complex configurations and international benchmarks
• MDA (high-fidelity model, code coupling)
One of the objectives of the cooperation between ONERA and the US Army is to evaluate and compare the immersed boundary methods developed by ONERA in the ONERA FAST/IBM environment and by the US Army
in HELIOS/ROAM on a rotor-fuselage interaction case and on the evaluation of the drag of a helicopter rotor head in a wind tunnel. The aim is twofold: to be able to carry out CFD simulations in a
short turnaround time around complex geometries, while at the same time capturing the physics in a sufficiently precise way, something for which so-called "low-fidelity" methods alone are not adequate.
ONERA has therefore adopted an original approach to calculating this type of configuration. Rotating blades are discretized by moving curvilinear meshes, for which a URANS model is used. The
helicopter fuselage is taken into account by Cartesian meshes using an immersed boundary method. As a result, calculations can be carried out in just a few tens of minutes, even for very complex
geometries, as the process is automated.
The coupling between the rotating curvilinear grids and the Cartesian mesh is achieved by an optimized Chimera approach, which limits the transfer time between grids to 20% of the overall time. The
calculations are implemented by coupling Python/CGNS HPC modules developed by ONERA/DAAA (Cassiopee, FastS, Quantum...). The first figure illustrates an academic calculation of the wake generated by
an isolated blade, while the next two figures represent a more industrial case of a 4-blade rotor in forward flight modeled by a URANS model which takes into account the interaction with the fuselage
thanks to an immersed boundary method.
Applied mathematics and scientific computing
DTIS François Rogier, Guillaume Dufour
• Numerical analysis of PDEs, Inverse Problems
• Particle models
• High-performance computing HPC
• Mesh adaptation
• Coupling of numerical models
• Uncertainty propagation, treatment of rare events
• Optimization, sensitivity analysis
• Substitution models
• Multi-physics and multi-scale simulations
Simulation of a tip-plane plasma discharge in a Mach 3 flow. The test case studied concerns the interaction of a plasma generated by a negative point-to-plane discharge with a shock wave from a
supersonic flow.
The calculation was carried out using the COPAIER code, which is able to perform simulations of plasmas outside of the thermodynamic equilibrium assumption, for 2D or 2D-axisymmetric geometries. The
solver is based on the resolution of conservation equations (of the convection-diffusion-reaction type) for the dynamics of charged species densities, coupled with the computation of the electric
field induced by the potential applied to the powered electrode (here -7.5 kV) and by the space charge distribution.
This is a multi-scale simulation in time and space. Indeed, to represent field effects correctly, mesh sizes must be of the order of the Debye length, micrometric in our case, while the device size
is centimetric. On these mesh sizes, the CFL condition corresponding to electron transport imposes time steps of less than a picosecond, while the characteristic time of the flow along the device is
on the order of a hundred microseconds (8 orders of magnitude).
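These scales can be illustrated with a short Python sketch; the electron density and temperature used below are generic illustrative assumptions, not values taken from the COPAIER case:

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C

def debye_length(n_e, T_e_eV):
    """Electron Debye length (m) for density n_e (m^-3) and temperature T_e (eV)."""
    kT = T_e_eV * E_CHARGE                        # thermal energy in joules
    return math.sqrt(EPS0 * kT / (n_e * E_CHARGE**2))

# Assumed, illustrative discharge values: n_e = 1e19 m^-3, T_e = 1 eV
lam = debye_length(1e19, 1.0)                     # a few micrometres: micrometric mesh sizes

# Time-scale separation quoted in the text: sub-picosecond electron CFL steps
# against a ~100 microsecond flow time along the device
dt_electron = 1e-12                               # s
t_flow = 1e-4                                     # s
orders = math.log10(t_flow / dt_electron)         # 8 orders of magnitude
```

With these assumed numbers the Debye length comes out near 2 micrometres, consistent with the micrometric mesh sizes mentioned above.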
Below: Positive ion density profile during the first 4 pulses of the discharge | {"url":"https://onera.fr/en/sna","timestamp":"2024-11-14T17:11:57Z","content_type":"text/html","content_length":"105136","record_id":"<urn:uuid:b57aa103-7e27-4d5c-9ce0-d137f508f139>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00438.warc.gz"} |
What is meant by Apogee and perigee,Eccentricity? How do we Calculate them? | Socratic
What is meant by Apogee and perigee,Eccentricity? How do we Calculate them?
1 Answer
Apogee, perigee and eccentricity are terms which describe aspects of an elliptic orbit.
Planets, moons and satellites follow elliptic orbits. There are a number of parameters which can be used to describe an orbit. At least two parameters are required.
The semi-major axis $a$ is half of the greatest width of the ellipse.
The eccentricity $0 \le e < 1$ describes the shape of the ellipse. An eccentricity of zero is a perfect circle.
Apogee means the furthest distance the Moon or a satellite gets from Earth in its orbit. The related term aphelion is the furthest distance a planet or other body gets from the Sun. The generic term
is apoapsis, which describes the furthest distance an orbit gets from anything. Apogee $A$ is related to the semi-major axis and eccentricity.
$A = a \left(1 + e\right)$
Perigee means the closest distance the Moon or a satellite gets to Earth in its orbit. The related term perihelion is the closest distance a planet or other body gets to the Sun. The generic term is
periapsis, which describes the closest distance an orbit gets to anything. Perigee $P$ is related to the semi-major axis and eccentricity.
$P = a \left(1 - e\right)$
Given any two of apogee, perigee, eccentricity and semi-major axis, it is easy to calculate the other two and completely define the ellipse.
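For apogee and perigee in particular, adding and subtracting the two formulas gives a = (A + P)/2 and e = (A - P)/(A + P). A minimal Python sketch, using approximate lunar apsis distances as illustrative inputs:

```python
def orbit_from_apsides(apogee, perigee):
    """Recover semi-major axis a and eccentricity e from A = a(1+e), P = a(1-e)."""
    a = (apogee + perigee) / 2                    # A + P = 2a
    e = (apogee - perigee) / (apogee + perigee)   # A - P = 2ae
    return a, e

# Approximate lunar values in km: apogee ~405,500, perigee ~363,300
a, e = orbit_from_apsides(405_500, 363_300)
# a = 384,400 km and e is about 0.055; a*(1+e) and a*(1-e) recover the apsides
```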
| {"url":"https://socratic.org/questions/what-is-meant-by-apogee-and-perigee-eccentricity-how-do-we-calculate-them","timestamp":"2024-11-08T11:43:30Z","content_type":"text/html","content_length":"34783","record_id":"<urn:uuid:d28012b9-7c08-47b3-8453-e74724706ebe>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00563.warc.gz"} |
Why doesn't a volume fraction algorithm agree with what was programmed?
I'm relatively new at coding, and I'm using Mathematica to create a 3D image of ellipsoids in a box. I have an array of major semi-axis lengths for the ellipsoids that I want, call it "axis", e.g.
axis = {16,16,14,12,10,8,6};
Now I have a desired ellipsoid volume fraction, call it "vol".
vol = 0.02; (or 2% volume fraction of ellipsoids)
The relationship of ellipsoid major semi-axis/minor semi-axis is "N", e.g. major semi-axis = N*minor semi-axis. The total volume of the ellipsoids is given by
total = Total[4/3*Pi*axis*(axis/N)^2];
I back-calculate the box dimension "h" to obtain the desired volume fraction using
h = (total/vol)^(1/3) since total/h^3 = vol
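The same back-calculation can be reproduced outside Mathematica to confirm the arithmetic; the Python sketch below uses the axis list from the post, with `n_ratio` standing in for the post's N (an assumed value of 2 here):

```python
import math

axis = [16, 16, 14, 12, 10, 8, 6]   # major semi-axes from the post
vol = 0.02                          # desired ellipsoid volume fraction
n_ratio = 2.0                       # assumed major/minor semi-axis ratio

# Each ellipsoid has semi-axes (a, a/n, a/n), so its volume is 4/3*pi*a*(a/n)^2
total = sum(4/3 * math.pi * a * (a / n_ratio)**2 for a in axis)

# Box edge length chosen so that total / h^3 == vol
h = (total / vol) ** (1 / 3)
# total / h**3 recovers vol to machine precision
```

Since the analytic volume fraction is exact here, any mismatch in the rendered result would have to arise downstream, in the pixel counting rather than in this arithmetic.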
Now I have all of the information that I need for my ellipsoids and my box. I use an algorithm to place these ellipsoids in the box at random as follows:
img = {FaceForm[], EdgeForm[], Cuboid[{0, 0, 0}, {h, h, h}]}; (*make a transparent box without edges*)

(*graylevel ellipsoids in 3D*)
For[k = 1, k <= Length[totalparticles], k++, guessCount = 1;
 (*create an ellipsoid*)
 While[k == k, longleng = totalparticles[[k]]; (*this is a dummy variable so that the ellipsoids are guaranteed not to extend beyond the box*)
  (*create the center point of the ellipsoid using three offsets*)
  xoff = RandomReal[{0 + longleng, h - longleng}]; (*randomly choose a value between 0 and the width of the image to use as an x offset for the ellipsoid centroid*)
  yoff = RandomReal[{0 + longleng, h - longleng}]; (*randomly choose a value between 0 and the width of the image to use as a y offset for the ellipsoid centroid*)
  zoff = RandomReal[{0 + longleng, h - longleng}]; (*randomly choose a value between 0 and the width of the image to use as a z offset for the ellipsoid centroid*)
  obj = Ellipsoid[{xoff, yoff, zoff}, {axis[[k]], axis[[k]]/N, axis[[k]]/N}];
  (*place the ellipsoid just created into a 3D graphic with all of the other existing ellipsoids*)
  graph = Graphics3D[{GrayLevel[.2, .2], img, GrayLevel[.2, .2], obj}, Method -> "Shrinkwrap", Boxed -> False];
  (*flatten out the pixel data from a multi-dimensional array into a single vector of graylevels*)
  data = Flatten[ImageData[graph]];
  (*check to see if any graylevels fall below a designated threshold; if they do, then two ellipsoids are touching and the new ellipsoid needs to be replaced*)
  check = Select[data, # < 0.6 &, 1];
  If[Length[check] == 0, Break[]]; guessCount++;];
 img = Prepend[img, obj];];
Now I check the pixel data and make sure the desired volume fraction is what I want:
graphic2 = Graphics3D[{GrayLevel[.2, .7], img}, Method -> "Shrinkwrap", Boxed -> False];
Totalpixels = Dimensions[ImageData[graphic2]][[1]]*Dimensions[ImageData[graphic2]][[2]];
epixels = N[Length[Select[Flatten[ImageData[graphic2]], # < 1. &]]/3];
earea = Round[N[epixels/Totalpixels], .001]
where "img" is my 3D image of all of the ellipsoids, "epixels" is the number of pixels assigned to ellipsoids, and "earea" is the volume fraction of the image's ellipsoids. When I run all of this,
the pixel volume fraction is always much higher than the desired volume fraction, usually by a factor of between 4 and 10 depending on "N" and "vol", and I can't figure out why. Any help would be
greatly appreciated!! | {"url":"https://community.wolfram.com/groups/-/m/t/378896","timestamp":"2024-11-09T16:36:13Z","content_type":"text/html","content_length":"89871","record_id":"<urn:uuid:7ae7c86d-cee9-4091-8529-6aa2f7f50047>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00342.warc.gz"} |
What Work Opportunities Involve Math - Grain d'pirate
What exactly do different jobs require in math? Whether it is your first year of college or your tenth, knowing which jobs demand math is important for a number of reasons.
Math majors tend to be more likely to hold higher positions than majors in management. Because of their knowledge and training in mathematics, mathematics majors are often the ones that get promoted.
Management positions call for mathematics degrees, as do other positions such as lawyer, accountant, and computer programmer.
A lot of people have some sort of mathematics background, whether basic or advanced. And while there are lots of professions that rely on math, many others do not.
It is easier to figure out which kind of mathematics degree you wish to pursue once you know which jobs involve math. Listed below are the main kinds of math degrees:
Basic Math – this is the most elementary mathematics, with terms like ratio and square root, and is taught in schools. Students learn a few elementary operations like
multiplication, division, addition, subtraction, and simple fractions.
Calculus – this is one of the core math courses in college, where students learn about the analysis of graphs, Newton's laws of motion, and linear equations. Calculus is used in many
careers, including business, law, and engineering. It also plays a role in economics.
Analytical Math – this includes geometry, algebra, trigonometry, and calculus. Analytical math is often used in engineering and scientific fields, and in information
that is used in everyday life. It can also help you with mortgages and life insurance.
Statistics – statistics is the study of relationships and patterns, and it is used in nearly every career. This is the branch of mathematics that can be learned by everybody, including non-majors.
Engineering Math – this covers not only math but related topics such as chemistry, physics, and engineering. Engineering jobs demand some math, and those in the mathematics fields can also
make use of this coursework. If you have a flair for math, there is a good chance you can succeed in this career.
Applied Math – this coursework is used in just about every industry and can include various theories, control systems, and data analysis. These courses are applied in the practice of
transportation, manufacturing, accounting, and far more. Math is something that nearly every occupation requires, and it will help you.
Since so many occupations demand math, it is important to read mathematics books and learn from them, instead of depending only on the free courses that schools provide. Get into
the habit of taking classes and reading books that cover the subjects you need to master in order to get a career.
Getting an advanced degree can help you find a career that requires advanced mathematics, and it is one of many ways to get a good job. In addition, there are many high school programs that
help prepare students for careers in math, including careers that deal with gaming software and many other careers that deal with calculations and numbers.
| {"url":"http://www.graindpirate.fr/what-work-opportunities-involves-z/","timestamp":"2024-11-08T15:38:08Z","content_type":"text/html","content_length":"38841","record_id":"<urn:uuid:d9bd752c-2148-4776-b580-573cfeaddcbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00130.warc.gz"} |
All About Math Journaling
This year I transitioned into the guided math model for my math block. The change happened in October and has been going strong ever since. It was spurred on by the fact that I have a smaller
physical classroom and 24 students.
Having all my students at their math centers all at once was too chaotic for me. Even though my students were engaged and learning, the noise level and space issues were making me anxious. I kept
trying it because the year prior, it worked so well for me. This year, it just would not work!
With the help of my team, I began changing my math block into a workshop model. I have shared this before HERE on my blog.
One new component of my math block is math journaling. Having students keep a math journal began when I decided to break up my math center time into different components rather than having everyone
at math centers all at once.
I couldn’t be happier with the result! My student’s journals are full of all the lessons and practice from our year of guided math! There’s such a variety of different activities. The concepts have
spiraled through the year and gotten progressively more advanced.
Don’t get me wrong, I still firmly believe in the hands-on practice and application of math centers, and my students do that EVERY. SINGLE. DAY.
Journaling adds to hands-on practice, providing a bit of novelty and a way to shake things up.
For journaling math concepts, my students have a ten-minute dedicated time window during their guided math rotations. They use this time to show what they know in our current concept or in a spiraled
review.
My students can work together at times, and they always have a buddy-check system so they can reflect, compare, and self-monitor while I am doing a guided math lesson at my teacher table.
Having an example journal finished for them (without answers) helps as a reference when they are just getting used to how these journal activities work. My students no longer need it, but at first,
it was REALLY a life saver.
I am presenting a guided math class this week in my district! I plan to go into more depth on how I structure my math block and give options for different ages/abilities/classrooms. I will then share
math journal ideas here on the blog, including the structure of my math block as well!
Keep in mind that the students participate in building their own journals. This adds ownership to the process and helps students build attachment to their journals.
That makes math journaling more compelling, and it incentivizes the students to keep track of and take care of their journals throughout the year.
If you are interested in trying out math journals, I have a freebie here for you. It can be done without being placed in a journal. Click this picture to download on Google Docs!
We are still working through both our Winter and February Journal activities in our classroom. Plus, there are a couple more activities in my Oh for the Love of Math Centers that we still need for
our journals, too!
Since November, every math center packet I have made has a journal component. This way, I know they are practicing the concept throughout the math block in many different ways. I can also look back
and use the journals as a reference to track the progress for each student on an individual basis.
One thing I can say for sure is that changing things up mid-year was the best thing I did. If things aren’t working for you, don’t be afraid to change it up. My students adjusted well and it gave us
all something new in our day to focus on.
If you are interested in trying out math journaling, I have the following units in my store.
Or each month is also sold if you don’t need the bundle.
11 Comments
1. Your math journals are such a fabulous addition to my math block. My students love them and so do I!
A Day in First Grade
2. Do you teach from a math book or just use your standards and plan your daily lessons? We are required to use the math book in our planning, I use it for whole group. My math journal, math
manipulative station, and teacher guided group is what I do each day. I am looking for a more fluent flow of learning for my kinders.
3. Great post, thank you for sharing! 🙂
4. Great post. My kiddos are too noisy during math stations so I think I will have to try this.
5. Your math journals look awesome!
Lovin’ Kindergarten
6. This is awesome! I can't wait to see how you structure your math block. I want to do guided math and have some sort of system. I know it will take lots of prep time but I'm ready for the change.
Once I get it worked out I know it will change my way of teaching. 🙂
7. This week I introduced math journals to my class using your awesome February math journals packet! Things went really well and my kids really enjoyed the journal activities. I am super excited to
hear more about the structure of your math block! I'm hoping to re-structure a few things in mine. Thanks so much for sharing!
Ship Shape First Grade
8. I need these pronto Tonto! Can you BE any more fabulous??
Sunny Days In Second Grade
9. Wow! I love those journals! I would so love to move to a guided math type approach in my classroom! Our district was making a move in that direction and then we had an adoption change and we have
gone in a different direction. I can't wait to see more on this-I want to give it a go!
10. Such a fan of your journals. Plan to have them all by summer!!
One Fab Teacher
11. I have looked for your Back to School Journaling work and am wondering if you have them. I have gotten all the other necessary packs for guided math and all I need now are the journaling that
matches it all. I am excited and love your blogs and work. It is easy to follow and use. Will you please help? Thanks,
[email protected] | {"url":"https://tunstallsteachingtidbits.com/2014/02/all-about-math-journaling.html","timestamp":"2024-11-05T15:16:41Z","content_type":"text/html","content_length":"429755","record_id":"<urn:uuid:2b2158bc-2ae9-4275-8889-8917f7c33b72>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00348.warc.gz"} |
Pulsatile flow through idealized renal tubules: Fluid-structure interaction and dynamic pathologies
[1] I. Mnassri, A. El Baroudi, Vibrational frequency analysis of finite elastic tube filled with compressible viscous fluid, Acta Mech. Solida Sin., 30 (2017), 435-444.
[2] R. M. Terrill, An exact solution for flow in a porous pipe, Z. Angew. Math. Phys., 33 (1982), 547-552.
[3] O. San, A. E. Staples, Dynamics of pulsatile flows through elastic microtubes, Int. J. Appl. Mech., 4 (2012).
[4] S. Nag, A. Resnick, Biophysics and biofluid dynamics of primary cilia: Evidence for and against the flow-sensing function, Am. J. Physiol. Renal Physiol., 313 (2017), 706-720.
[5] M. J. Lighthill, Mathematical Biofluiddynamics, Society for Industrial and Applied Mathematics, Philadelphia, 1975.
[6] A. El Baroudi, F. Razafimahery, L. Rakotomanana, Fluid-structure interaction within three-dimensional models of an idealized arterial wall, Int. J. Eng. Sci., 84 (2014), 113-126.
[7] M. Zamir, The Physics of Pulsatile Flow, Springer-Verlag, New York, 2000.
[8] P. D. Cabral, J. L. Garvin, Luminal flow regulates NO and O2(-) along the nephron, Am. J. Physiol. Renal Physiol., 300 (2011), 1047-1053.
[9] L. M. Satlin, S. Sheng, C. B. Woda, T. R. Kleyman, Epithelial Na(+) channels are regulated by flow, Am. J. Physiol. Renal Physiol., 280 (2001), 1010-1018.
[10] M. Essig, G. Friedlander, Tubular shear stress and phenotype of renal proximal tubular cells, J. Am. Soc. Nephrol, 14 (2003), S33-35.
[11] J. B. Freund, J. G. Goetz, K. L. Hill, J. Vermot, Fluid flows and forces in development: Functions, features and biophysical principles, Development, 139 (2012), 1229-1245.
[12] F. Kotsis, R. Nitschke, M. Doerken, G. Walz, E. W. Kuehn, Flow modulates centriole movements in tubular epithelial cells, Pflugers Arch, 456 (2008), 1025-1035.
[13] A. B. Maunsbach, G. H. Giebisch, B. A. Stanton, Effects of flow rate on proximal tubule ultrastructure, Am. J. Physiol., 253 (1987), 582-587.
[14] H. A. Praetorius, K. R. Spring, Removal of the MDCK cell primary cilium abolishes flow sensing, J. Membr. Biol., 191 (2002), 69-76.
[15] H. A. Praetorius, K. R. Spring, The renal cell primary cilium functions as a flow sensor, Curr. Opin. Nephrol Hypertens, 12 (2003), 517-520.
[16] D. J. Furley, J. S. Wilkie, Galen on Respiration and the Arteries: an Edition with English Translation and Commentary of De usu Respirationis, An in Arteriis Natura Sanguis Contineatur, De usu
Pulsum, and De Causis Respirationis, Princeton University Press, Guildford, 1984.
[17] W. Harvey, On the Motion of the Heart and Blood in Animals; and on the Circulation of the Blood; and on the Generation of Animals, William Benton, Chicago, 1952.
[18] J. R. Womersley, Method for the calculation of velocity, rate of flow and viscous drag in arteries when the pressure gradient is known, J. Physiol., 127 (1955), 553-563.
[19] J. R. Womersley, XXIV. Oscillatory motion of a viscous liquid in a thin-walled elastic tube I: The linear approximation for long waves, London, Edinburgh, Dublin Philos. Mag. J. Sci., 46 (1955).
[20] J. Womersley, An elastic tube theory of pulse transmission and oscillatory flow in mammalian arteries, 1957.
[21] R. I. Macey, Pressure flow patterns in a cylinder with reabsorbing walls, Bull. Math. Biophy., 25 (1963), 1-9.
[22] E. A. Marshall, E. A. Trowbridge, Flow of a newtonian fluid through a permeable tube - application to proximal renal tubule, Bull. Math. Bio., 36 (1974), 457-476.
[23] C. Pozrikidis, Stokes flow through a permeable tube, Arch. Appl. Mech., 80 (2010), 323-333.
[24] A. M. Weinstein, Nonequilibrium thermodynamic model of the rat proximal tubule epithelium, Biophys. J., 44 (1983), 153-70.
[25] A. M. Weinstein, A mathematical model of the rat proximal tubule, Am. J. Physiol., 250 (1986), 860-873.
[26] A. M. Weinstein, A mathematical model of rat ascending henle limb. i. cotransporter function, Am. J. Physiol. Renal Physiol., 298 (2010), 512-524.
[27] A. M. Weinstein, A mathematical model of rat ascending henle limb. iii. tubular function, Am. J. Physiol. Renal Physiol., 298 (2010), 543-556.
[28] A. M. Weinstein, T. A. Krahn, A mathematical model of rat ascending henle limb. ii. epithelial function, Am. J. Physiol. Renal Physiol., 298 (2010), 525-542.
[29] A. M. Weinstein, A mathematical model of rat collecting duct. i. flow effects on transport and urinary acidification, Am. J. Physiol. Renal Physiol., 283 (2002), 1237-1251.
[30] M. E. Downs, A. M. Nguyen, F. A. Herzog, D. A. Hoey, C. R. Jacobs, An experimental and computational analysis of primary cilia deflection under fluid flow, Comput. Methods Biomech. Biomed. Eng.,
17 (2014), 2-10.
[31] W. Liu, S. Xu, C. Woda, P. Kim, S. Weinbaum, L. M. Satlin, Effect of flow and stretch on the [Ca2+] i response of principal and intercalated cells in cortical collecting duct, Am. J. Physiol.
Renal Physiol., 285 (2003), 998-1012.
[32] A. K. O'Connor, E. B. Malarkey, N. F. Berbari, M. J. Croyle, C. J. Haycraft, P. D. Bell, et al., An inducible CiliaGFP mouse model for in vivo visualization and analysis of cilia in live tissue,
Cilia, 2 (2013), 8.
[33] A. T. Layton, L. C. Moore, H. E. Layton, Signal transduction in a compliant thick ascending limb, Am. J. Physiol. Renal Physiol., 302 (2012), 1188-1202.
[34] D. J. Marsh, O. V. Sosnovtseva, E. Mosekilde, N. H. Holstein-Rathlou, Vascular coupling induces synchronization, quasiperiodicity, and chaos in a nephron tree, Chaos, 17 (2007), 015114.
[35] A. T. Layton, H. E. Layton, A computational model of epithelial solute and water transport along a human nephron, PLoS Comput. Biol., 15 (2019), e1006108.
[36] COMSOL Multiphysics^® v. 5.4. www.comsol.com. COMSOL AB, Stockholm, Sweden.
[37] M. Dejam, Dispersion in non-newtonian fluid flows in a conduit with porous walls, Chem. Eng. Sci., 189 (2018), 296-310.
[38] M. Dejam, H. Hassanzadeh, Z. X. Chen, Shear dispersion in combined pressure-driven and electro-osmotic flows in a channel with porous walls, Chem. Eng. Sci., 137 (2015), 205-215.
[39] M. Zamir, Hemo-Dynamics, Springer International Publishing, 2015.
[40] A. T. Layton, Modeling transport and flow regulatory mechanisms of the kidney, ISRN Biomath., 2012 (2012).
[41] A. T. Layton, L. C. Moore, H. E. Layton, Multistable dynamics mediated by tubuloglomerular feedback in a model of coupled nephrons, Bull. Math. Bio., 71 (2009).
[42] H. B. Atabek, Wave propagation through a viscous fluid contained in a tethered, initially stressed, orthotropic elastic tube, Biophys. J., 8 (1968), 626-649.
[43] M. A. Day, The no-slip condition of fluid dynamics, Erkenntnis, 33 (1990), 285-296.
[44] B. J. Cox, J. M. Hill, Flow through a circular tube with a permeable navier slip boundary, Nanoscale Res. Lett., 6 (2011), 389.
[45] L. N. Reinking, B. Schmidt-Nielsen, Peristaltic flow of urine in the renal capillary collecting ducts of hamsters, Kidney Int., 20 (1981), 55-60.
[46] M. Pradella, R. M. Dorizzi, F. Rigolin, Relative density of urine: Methods and clinical significance, Crit. Rev. Clin. Lab. Sci., 26 (1988), 195-242.
[47] B. Vahidi, N. Fatouraee, A. Imanparast, A. N. Moghadam, A mathematical simulation of the ureter: Effects of the model parameters on ureteral pressure/flow relations, J. Biomech. Eng., 133
(2011), 031004.
[48] S. Cortell, F. J. Gennari, M. Davidman, W. H. Bossert, W. B. Schwartz, A definition of proximal and distal tubular compliance. Practical and theoretical implications, J. Clin. Invest., 52
(1973), 2330-2339.
[49] N. H. Holstein-Rathlou, D. J. Marsh, Oscillations of tubular pressure, flow, and distal chloride concentration in rats, Am. J. Physiol., 256 (1989), 1007-1014.
[50] E. Gonzalez, P. Carpi-Medina, G. Whittembury, Cell osmotic water permeability of isolated rabbit proximal straight tubules, Am. J. Physiol., 242 (1982), 321-330.
[51] E. Frömter, C. Müller, T. Wick, Permeability properties of the proximal tubular epithelium of the rat kidney studied with electrophysiological methods, Electrophysiol. Epithelial Cells, (1971),
[52] J. A. Schafer, Transepithelial osmolality differences, hydraulic conductivities, and volume absorption in the proximal tubule, Annu. Rev. Physiol., 52 (1990), 709-726.
[53] J. Howard, Mechanics of Motor Proteins and the Cytoskeleton, Sinauer, 2001.
[54] T. Sakai, D. A. Craig, A. S. Wexler, D. J. Marsh, Fluid waves in renal tubules, Biophys. J., 50 (1986), 805-813.
[55] M. Mahran, A. ELsabbagh, H. Negm, A comparison between different finite elements for elastic and aero-elastic analyses, J. Adv. Res., 8 (2017), 635-648.
[56] R. Carrisoza-Gaytan, Y. Liu, D. Flores, C. Else, H. G. Lee, G. Rhodes, et al., Effects of biomechanical forces on signaling in the cortical collecting duct (ccd), Am. J. Physiol. Renal Physiol.,
307 (2014), 195-204.
[57] J. J. Kang, I. Toma, A. Sipos, F. McCulloch, J. Peti-Peterdi, Quantitative imaging of basic functions in renal (patho) physiology, Am. J. Physiol. Renal Physiol., 291 (2006), 495-502.
[58] A. T. Layton, L. C. Moore, H. E. Layton, Multistability in tubuloglomerular feedback and spectral complexity in spontaneously hypertensive rats, Am. J. Physiol. Renal Physiol., 291 (2006) 79-97.
[59] E. B. Pitman, R. M. Zaritski, K. J. Kesseler, L. C. Moore, H. E. Layton, Feedback-mediated dynamics in two coupled nephrons, Bul. Math. Bio., 66 (2004), 1463-1492.
[60] J. L. Laugesen, E. Mosekilde, N.-H. Holstein-Rathlou, Synchronization of period-doubling oscillations in vascular coupled nephrons, Chaos: Interdiscip. J. Nonlinear Sci., 21 (2011), 033128.
[61] D. J. Marsh, O. V. Sosnovtseva, E. Mosekilde, N. H. Holstein-Rathlou, Vascular coupling induces synchronization, quasiperiodicity, and chaos in a nephron tree, Chaos: Interdiscip. J. Nonlinear
Sci., 17 (2007), 015114.
| {"url":"http://www.aimspress.com/article/doi/10.3934/mbe.2020094","timestamp":"2024-11-08T12:02:37Z","content_type":"text/html","content_length":"133357","record_id":"<urn:uuid:bbe10b3d-7a3f-4153-8d80-9643a5aae418>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00047.warc.gz"} |
Line graphs
Example: Assume that every student in Example 1 attends exactly 2 courses, and that no two students attend the same pair of courses. Assume students know each other if and only if they attend some common course. All
students are asked who knows whom. Is it possible to reconstruct all the attendance lists?
Graphs occurring in this way are called line graphs. More precisely, the line graph L(G) of a graph G=(V,E) has the edge set E as vertex set, and two such former edges are adjacent in L(G) whenever
they share a common vertex in G. Thus L(G) is the intersection graph of the edge set of G, when edges are viewed as 2-element subsets of the vertex set.
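The construction can be sketched in a few lines of Python (a minimal illustration, with a hypothetical helper name). The example also exhibits the exceptional pair mentioned below: the triangle K[3] and the claw K[1,3] have the same line graph, namely K[3]:

```python
from itertools import combinations

def line_graph(edges):
    """Return the line graph of G: vertices are the edges of G,
    and two vertices are adjacent iff the corresponding edges share an endpoint."""
    e = [frozenset(x) for x in edges]
    adjacency = {(i, j) for i, j in combinations(range(len(e)), 2) if e[i] & e[j]}
    return len(e), adjacency

# K_3 (a triangle) and K_{1,3} (a claw) both have K_3 as their line graph:
n1, adj1 = line_graph([(1, 2), (2, 3), (3, 1)])   # K_3
n2, adj2 = line_graph([(0, 1), (0, 2), (0, 3)])   # K_{1,3}
# both: 3 vertices with all 3 pairs adjacent, i.e. a triangle
```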
Line graphs appeared implicitly in a 1932 paper of H. Whitney and were explicitly introduced by J. Krausz in 1943.
From the beginning, two questions were central in the investigation of line graphs: whether and how line graphs G can be recognized, and what the so-called roots H (graphs for which L(H)=G holds) of a
line graph G may look like. Whitney showed that every connected graph except K[3] has at most one root (K[3] has two roots: K[1,3] and K[3]). Krausz gave in his paper a characterization of line
graphs which, however, didn't lead to an efficient recognition algorithm. Such algorithms follow from the characterizations of line graphs in [RW65], [B67]; see [L74], [R73], for instance.
Thus the question in the Example above can be answered in the affirmative: if the knowledge graph of the students is connected, and if there are at least 4 students, then we can find all
attendance lists in polynomial time, and these lists are unique (up to relabeling of the courses; of course we don't know the names of the courses).
Interval graphs
Example: The mathematics department has a small cafeteria, so small, indeed, that it is impossible not to see everybody who is there at the same time. Everybody claims to have been there only once,
but 8 meals have been eaten by 7 people, so somebody must have been there twice. The investigator asks everybody whom he or she saw at the cafeteria, and the result is the graph on the right.
Who is the overeater? (This is a shortened version of examples by Berge and Golumbic.)
An interval graph is an intersection graph of a family of intervals of the real line. These graphs were introduced by G. Hajós in 1957. They gained some fame, however, after the molecular
biologist S. Benzer tested in 1961 the hypothesis that the subelements of genes are linearly linked together, in the following way: among the mutations, he identified 32 so-called deletions
(mutations caused by the loss of a connected DNA fragment). By recombination tests it is possible to decide whether deletions overlap or not. The resulting graph was, as should be the case, an
interval graph. One year later, Lekkerkerker and Boland gave two `good' characterizations of interval graphs, and nowadays several efficient recognition algorithms exist.
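As an illustrative sketch of the definition (the function name and the sample intervals are my own, not from the page), two closed intervals [a, b] and [c, d] intersect exactly when max(a, c) <= min(b, d):

```python
def interval_graph(intervals):
    """Intersection graph of closed intervals: vertex i is adjacent
    to vertex j iff intervals i and j overlap."""
    n = len(intervals)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            (a, b), (c, d) = intervals[i], intervals[j]
            if max(a, c) <= min(b, d):  # the intervals intersect
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Three intervals on the line: 0 and 1 overlap, 2 is disjoint from both
print(interval_graph([(0, 2), (1, 3), (5, 6)]))
# {0: {1}, 1: {0}, 2: set()}
```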
Line graphs and interval graphs behave well
The classes of line graphs and interval graphs are both algorithmically relatively tame: Membership in both classes can be recognized in polynomial time, furthermore many optimization problems that
are NP-hard for general graphs are polynomial-time solvable when restricting the input to interval or line graphs.
It seems that many researchers, when facing new, apparently difficult problems, routinely try line graphs and interval graphs. This, and the prominence of interval graphs and line graphs, has seduced
some people into the opinion that problems for intersection graphs should always be relatively easy to tackle. In my opinion, this is far from being true; see here, here, here, and here, for instance.
Back to the start page for intersection graphs.
Erich Prisner
made on January 12, 1999 | {"url":"http://eprisner.de/Journey/Evergreens.html","timestamp":"2024-11-06T05:57:58Z","content_type":"text/html","content_length":"5819","record_id":"<urn:uuid:8f454776-1a2e-4618-8f09-bb52b529b751>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00137.warc.gz"} |
How to Conditionally Concat 2 Columns In Python Pandas Dataframe?
You can use the numpy.where() function in pandas to conditionally concatenate two columns in a dataframe: the + operator builds the concatenated string, and numpy.where() selects it only where the
condition is met. Here is an example:
import pandas as pd
import numpy as np

# Create a sample dataframe
data = {'A': [1, 2, 3, 4],
        'B': [5, 6, 7, 8]}
df = pd.DataFrame(data)

# Conditionally concatenate columns A and B
df['C'] = np.where(df['A'] > 2, df['A'].astype(str) + df['B'].astype(str), '')

print(df)
In this example, a new column 'C' is created which conditionally concatenates columns 'A' and 'B' based on the condition that values in column 'A' are greater than 2.
How to handle non-numeric values while concatenating columns based on conditions in Pandas DataFrame?
To handle non-numeric values while concatenating columns based on conditions in a Pandas DataFrame, you can use the np.where() function from the NumPy library to apply conditions and concatenate the
columns accordingly. Here is an example:
import pandas as pd
import numpy as np

# Create a sample DataFrame
data = {'A': [1, 2, 'Non-numeric', 4],
        'B': ['String', 3, 5, 6]}
df = pd.DataFrame(data)

# Concatenate columns based on conditions
df['C'] = np.where(df['A'].apply(lambda x: str(x).isnumeric()),
                   df['A'].astype(str) + df['B'].astype(str),
                   df['B'].astype(str))

print(df)
In this example, the np.where() function is used to check if the values in column 'A' are numeric or not. If the value is numeric, it concatenates columns 'A' and 'B' as strings and assigns the
result to a new column 'C'. If the value is non-numeric, it assigns the value of column 'B' directly to column 'C'.
How to concatenate columns with different data types based on conditions in Python Pandas DataFrame?
You can concatenate columns with different data types based on conditions in a Python Pandas DataFrame using the apply() method along with a custom function that performs the concatenation based on
your conditions. Here's an example:
import pandas as pd

# Sample DataFrame
df = pd.DataFrame({'A': [1, 2, 3, 4],
                   'B': ['a', 'b', 'c', 'd'],
                   'C': [True, False, True, False]})

# Custom function to concatenate columns based on conditions
def concat_values(row):
    if row['C']:
        return str(row['A']) + row['B']
    else:
        return row['B']

# Apply the custom function to create a new column with concatenated values
df['D'] = df.apply(concat_values, axis=1)

print(df)
In this example, the concat_values function takes a row as input and checks the value of column 'C' in that row. It then concatenates columns 'A' and 'B' if the value of 'C' is True, otherwise it
returns just the value of column 'B'.
After applying the function to the DataFrame using df.apply(), a new column 'D' is created with the concatenated values based on the conditions. You can adjust the logic in the custom function to fit
your specific requirements.
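For the same DataFrame, an equivalent vectorized form avoids the row-wise apply entirely. This is an alternative sketch, not from the original answer:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4],
                   'B': ['a', 'b', 'c', 'd'],
                   'C': [True, False, True, False]})

# Same result as the apply-based version, computed column-wise:
# concatenate where C is True, otherwise keep B
df['D'] = np.where(df['C'], df['A'].astype(str) + df['B'], df['B'])
print(df['D'].tolist())  # ['1a', 'b', '3c', 'd']
```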
What is the best approach for handling special cases during conditional concatenation in Pandas DataFrame?
Handling special cases during conditional concatenation in a Pandas DataFrame can be done using the np.where function along with boolean indexing.
Here is an example to demonstrate how to handle special cases during conditional concatenation in a Pandas DataFrame:
import pandas as pd
import numpy as np

# Create a sample DataFrame
data = {'A': [1, 2, 3, 4, 5],
        'B': [10, 20, 30, 40, 50]}
df = pd.DataFrame(data)

# Define the condition
condition = df['A'] > 2

# Define special case values
special_case_values_A = [100, np.nan, 300, np.nan, 500]
special_case_values_B = [1000, np.nan, 3000, np.nan, 5000]

# Substitute the special-case values where the condition holds
df['A'] = np.where(condition, special_case_values_A, df['A'])
df['B'] = np.where(condition, special_case_values_B, df['B'])

# Display the updated DataFrame
print(df)
In this example, we first define a condition based on the values in column 'A'. We then define special-case values for columns 'A' and 'B'. Finally, we use np.where to substitute the special-case
values wherever the condition holds.
This approach allows for flexibility in handling special cases during conditional concatenation in a Pandas DataFrame. By using np.where, we can easily apply different values to the DataFrame based
on specific conditions.
How to conditionally concatenate 2 columns in Python Pandas DataFrame?
You can conditionally concatenate two columns in a Pandas DataFrame using the apply method along with a custom function. Here's an example:
import pandas as pd

# Create a sample DataFrame
data = {'A': [1, 2, 3, 4],
        'B': ['a', 'b', 'c', 'd'],
        'C': [True, False, True, False]}
df = pd.DataFrame(data)

# Define a custom function to conditionally concatenate columns A and B
def concatenate_cols(row):
    if row['C']:
        return str(row['A']) + row['B']
    else:
        return row['B']

# Apply the custom function to create a new column D
df['D'] = df.apply(concatenate_cols, axis=1)

# Display the updated DataFrame
print(df)
In this example, we use the apply method to apply the concatenate_cols function to each row in the DataFrame. The function checks the value in column C, and if it is True, it concatenates columns A
and B; otherwise, it returns the value in column B. The result is stored in a new column D in the DataFrame.
How to improve performance while conditionally concatenating large datasets in Pandas DataFrame?
When conditionally concatenating large datasets in a Pandas DataFrame, it is important to optimize performance to avoid long processing times. Here are some tips to improve performance:
1. Use vectorized operations: Instead of looping through the rows of the DataFrame to check the condition and concatenate values, use vectorized operations provided by Pandas such as DataFrame.loc
or DataFrame.apply.
2. Avoid nested loops: Nested loops can significantly slow down the processing time. Try to use built-in Pandas functions or list comprehensions instead.
3. Use boolean indexing: Use boolean indexing to filter the rows that meet the condition before concatenating them. This can help reduce the size of the DataFrame and improve performance.
4. Use the concat function: Instead of using the + operator for concatenating DataFrames, use the pd.concat function which is optimized for concatenating large datasets.
5. Consider using the merge function: If you need to concatenate DataFrames based on a common column, consider using the merge function which can be more efficient than concatenation.
6. Use the pd.Series.str.cat method: If you are concatenating string columns, consider using the pd.Series.str.cat method which is specifically designed for this purpose and can be more efficient.
7. Consider using the chunksize parameter: If memory usage is a concern, consider using the chunksize parameter in functions like pd.read_csv or pd.concat to process the data in smaller chunks.
By following these tips, you can improve the performance of conditionally concatenating large datasets in a Pandas DataFrame.
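As a concrete sketch of tips 1 and 6 (illustrative, with made-up data), combining `np.where` with `Series.str.cat` keeps the whole operation vectorized instead of calling a Python function once per row:

```python
import numpy as np
import pandas as pd

# A largish frame of string columns plus a boolean condition
df = pd.DataFrame({'A': ['x', 'y', 'z'] * 100000,
                   'B': ['1', '2', '3'] * 100000,
                   'flag': [True, False, True] * 100000})

# Build the concatenation once for the whole column,
# then select per row with np.where
concatenated = df['A'].str.cat(df['B'], sep='-')
df['C'] = np.where(df['flag'], concatenated, df['B'])

print(df.head(3))
```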
What is the impact of memory usage on conditional concatenation in Pandas DataFrame?
Memory usage can have a significant impact on conditional concatenation in Pandas DataFrame. When concatenating DataFrames based on certain conditions, the resulting DataFrame may consume more memory
if the individual DataFrames being concatenated are large.
If the DataFrames being concatenated are already consuming a large amount of memory, the resulting DataFrame may exceed the available memory, leading to potential memory errors or slowdowns in
performance due to increased swapping between memory and disk.
In order to mitigate the impact of memory usage on conditional concatenation, it is important to carefully manage memory allocation and optimize the size of the DataFrames being concatenated. This can be
done by selecting only the relevant columns or rows based on the conditions, dropping unnecessary columns, and using data types that consume less memory, such as the category data type for categorical
columns.
Additionally, it is recommended to periodically check and monitor the memory usage of the DataFrame during concatenation processes to ensure efficient memory management and prevent potential memory errors. | {"url":"https://tech-blog.v6.rocks/blog/how-to-conditionally-concat-2-columns-in-python","timestamp":"2024-11-14T15:27:31Z","content_type":"text/html","content_length":"195589","record_id":"<urn:uuid:be9dd8f2-6c3d-45d6-96e8-b0e73e643a7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00024.warc.gz"} |
Cannot execute an extremely long SQL statement under batch mode
• Status: Closed
• Resolution: Fixed
• 1.12.2
execute command
bin/sql-client.sh embedded -d conf/sql-client-batch.yaml
content of conf/sql-client-batch.yaml
catalogs:
  - name: bnpmphive
    type: hive
    hive-conf-dir: /home/gum/hive/conf
    hive-version: 3.1.2
execution:
  planner: blink
  type: batch
  #type: streaming
  result-mode: table
  parallelism: 4
  max-parallelism: 2000
  current-catalog: bnpmphive
  #current-database: snmpprobe
#configuration:
#  table.sql-dialect: hive
modules:
  - name: core
    type: core
  - name: myhive
    type: hive
deployment:
  # general cluster communication timeout in ms
  response-timeout: 5000
  # (optional) address from cluster to gateway
  gateway-address: ""
  # (optional) port from cluster to gateway
  gateway-port: 0
1. Execute an extremely long SQL statement under batch mode:
select
'CD' product_name,
r.code business_platform,
5 statisticperiod,
cast('2021-03-24 00:00:00' as timestamp) coltime,
cast(r1.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_00002,
cast(r2.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_00007,
cast(r3.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_00005,
cast(r4.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_00006,
cast(r5.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00029,
cast(r6.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00028,
cast(r7.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00015,
cast(r8.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00014,
cast(r9.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00011,
cast(r10.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00010,
cast(r11.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00013,
cast(r12.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00012,
cast(r13.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00027,
cast(r14.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00026,
cast(r15.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00046,
cast(r16.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00047,
cast(r17.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00049,
cast(r18.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00048,
cast(r19.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00024,
cast(r20.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00025,
cast(r21.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00022,
cast(r22.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00023,
cast(r23.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00054,
cast(r24.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00055,
cast(r25.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00033,
cast(r26.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00032,
cast(r27.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00053,
cast(r28.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00052,
cast(r29.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00051,
cast(r30.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00050,
cast(r31.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00043,
cast(r32.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00042,
cast(r33.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00017,
cast(r34.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00016,
cast(r35.indicatorvalue as double) as YWPT_ZHQI_CD_038_GZ_00003,
cast(r36.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00045,
cast(r37.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00044,
cast(r38.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00038,
cast(r39.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00039,
cast(r40.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00037,
cast(r41.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00036,
cast(r42.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00040,
cast(r43.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00041,
cast(r44.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00034,
cast(r45.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00035,
cast(r46.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00030,
cast(r47.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00031,
cast(r48.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00020,
cast(r49.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00021,
cast(r50.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00018,
cast(r51.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00019,
cast(r52.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_00004,
cast(r53.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_00008,
cast(r54.indicatorvalue as double) as YWPT_ZHQI_CD_038_XT_00061,
cast(r55.indicatorvalue as double) as YWPT_ZHQI_CD_038_YW_00009,
localtimestamp as crtime,
'2021-03-24' as dt
from prod_mysql_bnpmp.r_biz_product r
left join raw_restapi_load.p_hcd r1 on r1.dt='2021-03-24' and r1.coltime =cast('2021-03-24 00:00:00' as timestamp) and r1.businessplatform=r.code and r1.indicatornumber='YWPT-ZHQI-CD-038-GZ-00002'
left join raw_restapi_load.p_hcd r2 on r2.dt='2021-03-24' and r2.coltime =cast('2021-03-24 00:00:00' as timestamp) and r2.businessplatform=r.code and r2.indicatornumber='YWPT-ZHQI-CD-038-YW-00007'
left join raw_restapi_load.p_hcd r3 on r3.dt='2021-03-24' and r3.coltime =cast('2021-03-24 00:00:00' as timestamp) and r3.businessplatform=r.code and r3.indicatornumber='YWPT-ZHQI-CD-038-YW-00005'
left join raw_restapi_load.p_hcd r4 on r4.dt='2021-03-24' and r4.coltime =cast('2021-03-24 00:00:00' as timestamp) and r4.businessplatform=r.code and r4.indicatornumber='YWPT-ZHQI-CD-038-YW-00006'
left join raw_restapi_load.p_hcd r5 on r5.dt='2021-03-24' and r5.coltime =cast('2021-03-24 00:00:00' as timestamp) and r5.businessplatform=r.code and r5.indicatornumber='YWPT-ZHQI-CD-038-XT-00029'
left join raw_restapi_load.p_hcd r6 on r6.dt='2021-03-24' and r6.coltime =cast('2021-03-24 00:00:00' as timestamp) and r6.businessplatform=r.code and r6.indicatornumber='YWPT-ZHQI-CD-038-XT-00028'
left join raw_restapi_load.p_hcd r7 on r7.dt='2021-03-24' and r7.coltime =cast('2021-03-24 00:00:00' as timestamp) and r7.businessplatform=r.code and r7.indicatornumber='YWPT-ZHQI-CD-038-XT-00015'
left join raw_restapi_load.p_hcd r8 on r8.dt='2021-03-24' and r8.coltime =cast('2021-03-24 00:00:00' as timestamp) and r8.businessplatform=r.code and r8.indicatornumber='YWPT-ZHQI-CD-038-XT-00014'
left join raw_restapi_load.p_hcd r9 on r9.dt='2021-03-24' and r9.coltime =cast('2021-03-24 00:00:00' as timestamp) and r9.businessplatform=r.code and r9.indicatornumber='YWPT-ZHQI-CD-038-XT-00011'
left join raw_restapi_load.p_hcd r10 on r10.dt='2021-03-24' and r10.coltime =cast('2021-03-24 00:00:00' as timestamp) and r10.businessplatform=r.code and r10.indicatornumber='YWPT-ZHQI-CD-038-XT-00010'
left join raw_restapi_load.p_hcd r11 on r11.dt='2021-03-24' and r11.coltime =cast('2021-03-24 00:00:00' as timestamp) and r11.businessplatform=r.code and r11.indicatornumber='YWPT-ZHQI-CD-038-XT-00013'
left join raw_restapi_load.p_hcd r12 on r12.dt='2021-03-24' and r12.coltime =cast('2021-03-24 00:00:00' as timestamp) and r12.businessplatform=r.code and r12.indicatornumber='YWPT-ZHQI-CD-038-XT-00012'
left join raw_restapi_load.p_hcd r13 on r13.dt='2021-03-24' and r13.coltime =cast('2021-03-24 00:00:00' as timestamp) and r13.businessplatform=r.code and r13.indicatornumber='YWPT-ZHQI-CD-038-XT-00027'
left join raw_restapi_load.p_hcd r14 on r14.dt='2021-03-24' and r14.coltime =cast('2021-03-24 00:00:00' as timestamp) and r14.businessplatform=r.code and r14.indicatornumber='YWPT-ZHQI-CD-038-XT-00026'
left join raw_restapi_load.p_hcd r15 on r15.dt='2021-03-24' and r15.coltime =cast('2021-03-24 00:00:00' as timestamp) and r15.businessplatform=r.code and r15.indicatornumber='YWPT-ZHQI-CD-038-XT-00046'
left join raw_restapi_load.p_hcd r16 on r16.dt='2021-03-24' and r16.coltime =cast('2021-03-24 00:00:00' as timestamp) and r16.businessplatform=r.code and r16.indicatornumber='YWPT-ZHQI-CD-038-XT-00047'
left join raw_restapi_load.p_hcd r17 on r17.dt='2021-03-24' and r17.coltime =cast('2021-03-24 00:00:00' as timestamp) and r17.businessplatform=r.code and r17.indicatornumber='YWPT-ZHQI-CD-038-XT-00049'
left join raw_restapi_load.p_hcd r18 on r18.dt='2021-03-24' and r18.coltime =cast('2021-03-24 00:00:00' as timestamp) and r18.businessplatform=r.code and r18.indicatornumber='YWPT-ZHQI-CD-038-XT-00048'
left join raw_restapi_load.p_hcd r19 on r19.dt='2021-03-24' and r19.coltime =cast('2021-03-24 00:00:00' as timestamp) and r19.businessplatform=r.code and r19.indicatornumber='YWPT-ZHQI-CD-038-XT-00024'
left join raw_restapi_load.p_hcd r20 on r20.dt='2021-03-24' and r20.coltime =cast('2021-03-24 00:00:00' as timestamp) and r20.businessplatform=r.code and r20.indicatornumber='YWPT-ZHQI-CD-038-XT-00025'
left join raw_restapi_load.p_hcd r21 on r21.dt='2021-03-24' and r21.coltime =cast('2021-03-24 00:00:00' as timestamp) and r21.businessplatform=r.code and r21.indicatornumber='YWPT-ZHQI-CD-038-XT-00022'
left join raw_restapi_load.p_hcd r22 on r22.dt='2021-03-24' and r22.coltime =cast('2021-03-24 00:00:00' as timestamp) and r22.businessplatform=r.code and r22.indicatornumber='YWPT-ZHQI-CD-038-XT-00023'
left join raw_restapi_load.p_hcd r23 on r23.dt='2021-03-24' and r23.coltime =cast('2021-03-24 00:00:00' as timestamp) and r23.businessplatform=r.code and r23.indicatornumber='YWPT-ZHQI-CD-038-XT-00054'
left join raw_restapi_load.p_hcd r24 on r24.dt='2021-03-24' and r24.coltime =cast('2021-03-24 00:00:00' as timestamp) and r24.businessplatform=r.code and r24.indicatornumber='YWPT-ZHQI-CD-038-XT-00055'
left join raw_restapi_load.p_hcd r25 on r25.dt='2021-03-24' and r25.coltime =cast('2021-03-24 00:00:00' as timestamp) and r25.businessplatform=r.code and r25.indicatornumber='YWPT-ZHQI-CD-038-XT-00033'
left join raw_restapi_load.p_hcd r26 on r26.dt='2021-03-24' and r26.coltime =cast('2021-03-24 00:00:00' as timestamp) and r26.businessplatform=r.code and r26.indicatornumber='YWPT-ZHQI-CD-038-XT-00032'
left join raw_restapi_load.p_hcd r27 on r27.dt='2021-03-24' and r27.coltime =cast('2021-03-24 00:00:00' as timestamp) and r27.businessplatform=r.code and r27.indicatornumber='YWPT-ZHQI-CD-038-XT-00053'
left join raw_restapi_load.p_hcd r28 on r28.dt='2021-03-24' and r28.coltime =cast('2021-03-24 00:00:00' as timestamp) and r28.businessplatform=r.code and r28.indicatornumber='YWPT-ZHQI-CD-038-XT-00052'
left join raw_restapi_load.p_hcd r29 on r29.dt='2021-03-24' and r29.coltime =cast('2021-03-24 00:00:00' as timestamp) and r29.businessplatform=r.code and r29.indicatornumber='YWPT-ZHQI-CD-038-XT-00051'
left join raw_restapi_load.p_hcd r30 on r30.dt='2021-03-24' and r30.coltime =cast('2021-03-24 00:00:00' as timestamp) and r30.businessplatform=r.code and r30.indicatornumber='YWPT-ZHQI-CD-038-XT-00050'
left join raw_restapi_load.p_hcd r31 on r31.dt='2021-03-24' and r31.coltime =cast('2021-03-24 00:00:00' as timestamp) and r31.businessplatform=r.code and r31.indicatornumber='YWPT-ZHQI-CD-038-XT-00043'
left join raw_restapi_load.p_hcd r32 on r32.dt='2021-03-24' and r32.coltime =cast('2021-03-24 00:00:00' as timestamp) and r32.businessplatform=r.code and r32.indicatornumber='YWPT-ZHQI-CD-038-XT-00042'
left join raw_restapi_load.p_hcd r33 on r33.dt='2021-03-24' and r33.coltime =cast('2021-03-24 00:00:00' as timestamp) and r33.businessplatform=r.code and r33.indicatornumber='YWPT-ZHQI-CD-038-XT-00017'
left join raw_restapi_load.p_hcd r34 on r34.dt='2021-03-24' and r34.coltime =cast('2021-03-24 00:00:00' as timestamp) and r34.businessplatform=r.code and r34.indicatornumber='YWPT-ZHQI-CD-038-XT-00016'
left join raw_restapi_load.p_hcd r35 on r35.dt='2021-03-24' and r35.coltime =cast('2021-03-24 00:00:00' as timestamp) and r35.businessplatform=r.code and r35.indicatornumber='YWPT-ZHQI-CD-038-GZ-00003'
left join raw_restapi_load.p_hcd r36 on r36.dt='2021-03-24' and r36.coltime =cast('2021-03-24 00:00:00' as timestamp) and r36.businessplatform=r.code and r36.indicatornumber='YWPT-ZHQI-CD-038-XT-00045'
left join raw_restapi_load.p_hcd r37 on r37.dt='2021-03-24' and r37.coltime =cast('2021-03-24 00:00:00' as timestamp) and r37.businessplatform=r.code and r37.indicatornumber='YWPT-ZHQI-CD-038-XT-00044'
left join raw_restapi_load.p_hcd r38 on r38.dt='2021-03-24' and r38.coltime =cast('2021-03-24 00:00:00' as timestamp) and r38.businessplatform=r.code and r38.indicatornumber='YWPT-ZHQI-CD-038-XT-00038'
left join raw_restapi_load.p_hcd r39 on r39.dt='2021-03-24' and r39.coltime =cast('2021-03-24 00:00:00' as timestamp) and r39.businessplatform=r.code and r39.indicatornumber='YWPT-ZHQI-CD-038-XT-00039'
left join raw_restapi_load.p_hcd r40 on r40.dt='2021-03-24' and r40.coltime =cast('2021-03-24 00:00:00' as timestamp) and r40.businessplatform=r.code and r40.indicatornumber='YWPT-ZHQI-CD-038-XT-00037'
left join raw_restapi_load.p_hcd r41 on r41.dt='2021-03-24' and r41.coltime =cast('2021-03-24 00:00:00' as timestamp) and r41.businessplatform=r.code and r41.indicatornumber='YWPT-ZHQI-CD-038-XT-00036'
left join raw_restapi_load.p_hcd r42 on r42.dt='2021-03-24' and r42.coltime =cast('2021-03-24 00:00:00' as timestamp) and r42.businessplatform=r.code and r42.indicatornumber='YWPT-ZHQI-CD-038-XT-00040'
left join raw_restapi_load.p_hcd r43 on r43.dt='2021-03-24' and r43.coltime =cast('2021-03-24 00:00:00' as timestamp) and r43.businessplatform=r.code and r43.indicatornumber='YWPT-ZHQI-CD-038-XT-00041'
left join raw_restapi_load.p_hcd r44 on r44.dt='2021-03-24' and r44.coltime =cast('2021-03-24 00:00:00' as timestamp) and r44.businessplatform=r.code and r44.indicatornumber='YWPT-ZHQI-CD-038-XT-00034'
left join raw_restapi_load.p_hcd r45 on r45.dt='2021-03-24' and r45.coltime =cast('2021-03-24 00:00:00' as timestamp) and r45.businessplatform=r.code and r45.indicatornumber='YWPT-ZHQI-CD-038-XT-00035'
left join raw_restapi_load.p_hcd r46 on r46.dt='2021-03-24' and r46.coltime =cast('2021-03-24 00:00:00' as timestamp) and r46.businessplatform=r.code and r46.indicatornumber='YWPT-ZHQI-CD-038-XT-00030'
left join raw_restapi_load.p_hcd r47 on r47.dt='2021-03-24' and r47.coltime =cast('2021-03-24 00:00:00' as timestamp) and r47.businessplatform=r.code and r47.indicatornumber='YWPT-ZHQI-CD-038-XT-00031'
left join raw_restapi_load.p_hcd r48 on r48.dt='2021-03-24' and r48.coltime =cast('2021-03-24 00:00:00' as timestamp) and r48.businessplatform=r.code and r48.indicatornumber='YWPT-ZHQI-CD-038-XT-00020'
left join raw_restapi_load.p_hcd r49 on r49.dt='2021-03-24' and r49.coltime =cast('2021-03-24 00:00:00' as timestamp) and r49.businessplatform=r.code and r49.indicatornumber='YWPT-ZHQI-CD-038-XT-00021'
left join raw_restapi_load.p_hcd r50 on r50.dt='2021-03-24' and r50.coltime =cast('2021-03-24 00:00:00' as timestamp) and r50.businessplatform=r.code and r50.indicatornumber='YWPT-ZHQI-CD-038-XT-00018'
left join raw_restapi_load.p_hcd r51 on r51.dt='2021-03-24' and r51.coltime =cast('2021-03-24 00:00:00' as timestamp) and r51.businessplatform=r.code and r51.indicatornumber='YWPT-ZHQI-CD-038-XT-00019'
left join raw_restapi_load.p_hcd r52 on r52.dt='2021-03-24' and r52.coltime =cast('2021-03-24 00:00:00' as timestamp) and r52.businessplatform=r.code and r52.indicatornumber='YWPT-ZHQI-CD-038-YW-00004'
left join raw_restapi_load.p_hcd r53 on r53.dt='2021-03-24' and r53.coltime =cast('2021-03-24 00:00:00' as timestamp) and r53.businessplatform=r.code and r53.indicatornumber='YWPT-ZHQI-CD-038-YW-00008'
left join raw_restapi_load.p_hcd r54 on r54.dt='2021-03-24' and r54.coltime =cast('2021-03-24 00:00:00' as timestamp) and r54.businessplatform=r.code and r54.indicatornumber='YWPT-ZHQI-CD-038-XT-00061'
left join raw_restapi_load.p_hcd r55 on r55.dt='2021-03-24' and r55.coltime =cast('2021-03-24 00:00:00' as timestamp) and r55.businessplatform=r.code and r55.indicatornumber='YWPT-ZHQI-CD-038-YW-00009'
where 1=1
and r.code='YWPT-ZHQI-CD-038'
and r.type='biz';
2. get error
[ERROR] Could not execute SQL statement. Reason:
java.io.IOException: Can not make a progress: all selected inputs are already finished
3. Execute the same SQL under streaming mode and get the expected output
| {"url":"https://issues.apache.org/jira/browse/FLINK-22443?attachmentOrder=desc","timestamp":"2024-11-04T15:14:24Z","content_type":"text/html","content_length":"275928","record_id":"<urn:uuid:ea2cb863-876c-4d64-9c3b-500b7d273dd2>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00843.warc.gz"} |
Mathematic Symbols Part 2: Comparison and Fraction
Part 1 of this series had the basic math glyphs (addition, subtraction, etc.) and some comparative symbols (equal, not equal, approximately equal): + − ± × ÷ = ≠ and ≈. This post, Part 2, includes
some more comparative symbols, degree, percent, and fraction slash: > < ≤ ≥ ° % and ⁄.
Squinty Eyes >< ≥≤
The greater than (>) and less than (<) signs are mirror images of each other, as are the greater than or equal to (≥) and less than or equal to (≤) signs. So I will just talk about the greater than
signs with the understanding that everything applies to the less than signs as well.
The > sign is basically a chevron pointing right. It is the same width and height as the + sign, and its vertical center aligns with the minus.
The ≥ sign has the minus on the baseline, just like the ± symbol. Similarly, the symbol sitting atop the minus can have a reduced height, but often the ≥ sign is taller than the ± sign. In many
instances the > on top is the same size as the standalone > sign. That’s what I’ve opted to do for Protest, since it gives a bit more whitespace to an otherwise fairly heavy symbol.
Pieces of the Pie:
Degree, Percent, Fraction
The degree symbol (°) is a circle, not an o. It’s monoline and doesn’t show any contrast, unlike the masculine ordinal (º). The top of the degree symbol aligns with cap height. It can be the same
size as the masculine ordinal, but is often smaller. This is usually the case in low contrast fonts, so that the degree is more easily distinguished from the ordinal. I’ve chosen to make the degree
smaller in Protest for the same reason.
The percent sign (%), like the degree, aligns at cap height. It also sits at the baseline, so that it will align with the figures. Also similar to the degree, the zeros in the percent sign can be
the same proportion as the superscript figures and ordinals, though they are sometimes smaller. Some fonts treat the zeros as circles, like the degree sign, though I'm choosing not to.
The zero in Protest has a slash through it to distinguish it from the uppercase O, but the zeroes in the percent sign are too small for that. Given the context, the zeros don't need the slash.
The slash in the percent sign is thinner than the regular slash, and its angle is smaller (about 53°, as opposed to the slash's 72°) to accommodate the zeroes on either side.
Fraction Slash
The fraction slash ( ⁄ ) is sometimes steeper, sometimes less steep, and sometimes equal in angle to the percent slash. Regardless, its angle is always close, which makes sense given that the
percent sign is just a zero-over-zero fraction. I've chosen to make the fraction slash for Protest the same size and angle as the percent slash.
Up Next
The next post covers radical, integral, partial derivative, and infinity.
• Mathematic Symbols
1. + − ± × ÷ = ≠ ≈
2. > < ≤ ≥ % ° ⁄
3. √ ∫ ∂ ∞
4. ∑ ∏ Δ π
• Diacritics | {"url":"https://www.societyoffonts.com/2018/02/02/mathematic-symbols-part-2comparison-and-fraction/","timestamp":"2024-11-09T09:07:20Z","content_type":"text/html","content_length":"137311","record_id":"<urn:uuid:33fc690b-4983-425a-abed-b89d803afe45>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00871.warc.gz"} |
The World’s Shortest IQ Test Has 3 Questions, But Almost No One Can Answer Them All Correctly
IQ stands for intelligence quotient and is used to generally measure intelligence. And the cognitive reflection test does just that.
The cognitive reflection test comes from the paper ‘Cognitive Reflection and Decision Making’ by Shane Frederick. Perhaps the most interesting aspect of this study is how simple it seems
at first glance. However, when you actually sit down to answer the test, it is much more complicated than it appears.
Out of 3,428 people, 33% missed all three questions, and 83% missed at least one. Only 48% of MIT students were able to answer all three questions correctly. Glance at them, calculate your
answers, and then compare them to the correct answers below.
(1) A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents
(2) If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes
(3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of
the lake? _____ days
What answers did you come up with? Here are the correct answers:
(1) 5 cents (not 10)
(2) 5 minutes (not 100)
(3) 47 days (not 24)
1. If the ball costs x dollars, the bat costs x + 1. So x + (x + 1) = 1.10, which gives 2x = 0.10, so x = 0.05.
2. The answer to this one is in the question. If it takes 5 machines 5 minutes to make 5 widgets, then each machine makes one widget in 5 minutes. So, it would take 100 machines 5 minutes to make 100 widgets.
3. Every day moving forward, the patch doubles. So, going backward, the patch halves in size each day. That means the lake is half covered on day 47, not day 24.
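The three solutions above can be checked with a short Python sketch (the variable names are mine):

```python
# (1) Bat and ball: x + (x + 1.00) = 1.10, so 2x = 0.10
ball = (1.10 - 1.00) / 2          # price of the ball in dollars
assert abs(ball - 0.05) < 1e-9    # 5 cents

# (2) Each machine makes one widget in 5 minutes, and the machines
# work in parallel, so 100 machines need the same 5 minutes.
minutes_for_100 = 5
assert minutes_for_100 == 5

# (3) The patch doubles daily and covers the lake on day 48,
# so it covered half the lake one day earlier.
half_covered_day = 48 - 1
assert half_covered_day == 47
```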
To be honest, my brain hurts. How did you do? | {"url":"https://awarenessact.com/the-worlds-shortest-iq-test-has-3-questions-but-almost-no-one-can-answer-them-all-correctly/","timestamp":"2024-11-06T18:25:45Z","content_type":"text/html","content_length":"142952","record_id":"<urn:uuid:3f181c4a-ad40-4a7f-b76a-50df73b85230>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00044.warc.gz"} |
torch.optim is a package implementing various optimization algorithms.
Most commonly used methods are already supported, and the interface is general enough, so that more sophisticated ones can also be easily integrated in the future.
How to use an optimizer¶
To use torch.optim you have to construct an optimizer object that will hold the current state and will update the parameters based on the computed gradients.
Constructing it¶
To construct an Optimizer you have to give it an iterable containing the parameters (all should be Variable s) to optimize. Then, you can specify optimizer-specific options such as the learning rate,
weight decay, etc.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer = optim.Adam([var1, var2], lr=0.0001)
Per-parameter options¶
Optimizer s also support specifying per-parameter options. To do this, instead of passing an iterable of Variable s, pass in an iterable of dict s. Each of them will define a separate parameter
group, and should contain a params key, containing a list of parameters belonging to it. Other keys should match the keyword arguments accepted by the optimizers, and will be used as optimization
options for this group.
You can still pass options as keyword arguments. They will be used as defaults, in the groups that didn’t override them. This is useful when you only want to vary a single option, while keeping all
others consistent between parameter groups.
For example, this is very useful when one wants to specify per-layer learning rates:
optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
This means that model.base’s parameters will use the default learning rate of 1e-2, model.classifier’s parameters will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters.
Taking an optimization step¶
All optimizers implement a step() method, that updates the parameters. It can be used in two ways:
This is a simplified version supported by most optimizers. The function can be called once the gradients are computed using e.g. backward().
for input, target in dataset:
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    optimizer.step()
Some optimization algorithms such as Conjugate Gradient and LBFGS need to reevaluate the function multiple times, so you have to pass in a closure that allows them to recompute your model. The
closure should clear the gradients, compute the loss, and return it.
for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)
Base class¶
class torch.optim.Optimizer(params, defaults)[source]¶
Base class for all optimizers.
Parameters need to be specified as collections that have a deterministic ordering that is consistent between runs. Examples of objects that don’t satisfy those properties are sets and iterators
over values of dictionaries.
• params (iterable) – an iterable of torch.Tensor s or dict s. Specifies what Tensors should be optimized.
• defaults (Dict[str, Any]) – a dict containing default values of optimization options (used when a parameter group doesn’t specify them).
Optimizer.add_param_group Add a param group to the Optimizer s param_groups.
Optimizer.load_state_dict Loads the optimizer state.
Optimizer.state_dict Returns the state of the optimizer as a dict.
Optimizer.step Performs a single optimization step (parameter update).
Optimizer.zero_grad Resets the gradients of all optimized torch.Tensor s.
Adadelta Implements Adadelta algorithm.
Adagrad Implements Adagrad algorithm.
Adam Implements Adam algorithm.
AdamW Implements AdamW algorithm.
SparseAdam SparseAdam implements a masked version of the Adam algorithm suitable for sparse gradients.
Adamax Implements Adamax algorithm (a variant of Adam based on infinity norm).
ASGD Implements Averaged Stochastic Gradient Descent.
LBFGS Implements L-BFGS algorithm.
NAdam Implements NAdam algorithm.
RAdam Implements RAdam algorithm.
RMSprop Implements RMSprop algorithm.
Rprop Implements the resilient backpropagation algorithm.
SGD Implements stochastic gradient descent (optionally with momentum).
Many of our algorithms have various implementations optimized for performance, readability and/or generality, so we attempt to default to the generally fastest implementation for the current device
if no particular implementation has been specified by the user.
We have 3 major categories of implementations: for-loop, foreach (multi-tensor), and fused. The most straightforward implementations are for-loops over the parameters with big chunks of computation.
For-looping is usually slower than our foreach implementations, which combine parameters into a multi-tensor and run the big chunks of computation all at once, thereby saving many sequential kernel
calls. A few of our optimizers have even faster fused implementations, which fuse the big chunks of computation into one kernel. We can think of foreach implementations as fusing horizontally and
fused implementations as fusing vertically on top of that.
In general, the performance ordering of the 3 implementations is fused > foreach > for-loop. So when applicable, we default to foreach over for-loop. Applicable means the foreach implementation is
available, the user has not specified any implementation-specific kwargs (e.g., fused, foreach, differentiable), and all tensors are native and on CUDA. Note that while fused should be even faster
than foreach, the implementations are newer and we would like to give them more bake-in time before flipping the switch everywhere. You are welcome to try them out though!
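The difference between the for-loop and foreach styles can be illustrated with a plain-Python toy (scalar parameters standing in for tensors; the real foreach path uses the torch._foreach_* multi-tensor kernels):

```python
lr = 0.1
params = [1.0, 2.0, 3.0]
grads = [0.5, 0.5, 0.5]

# For-loop style: one small update ("kernel launch") per parameter.
updated_loop = []
for p, g in zip(params, grads):
    updated_loop.append(p - lr * g)

# Foreach style: hand every parameter to a single batched call,
# so the whole update runs as one chunk of computation.
def foreach_sgd_step(ps, gs, lr):
    return [p - lr * g for p, g in zip(ps, gs)]

updated_foreach = foreach_sgd_step(params, grads, lr)

# Both styles compute exactly the same update.
assert updated_loop == updated_foreach
```

The payoff on a GPU comes from launching one kernel for the whole parameter list instead of one per tensor; the arithmetic is identical.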
Below is a table showing the available and default implementations of each algorithm:
Algorithm Default Has foreach? Has fused?
Adadelta foreach yes no
Adagrad foreach yes no
Adam foreach yes yes
AdamW foreach yes yes
SparseAdam for-loop no no
Adamax foreach yes no
ASGD foreach yes no
LBFGS for-loop no no
NAdam foreach yes no
RAdam foreach yes no
RMSprop foreach yes no
Rprop foreach yes no
SGD foreach yes no
How to adjust learning rate¶
torch.optim.lr_scheduler provides several methods to adjust the learning rate based on the number of epochs. torch.optim.lr_scheduler.ReduceLROnPlateau allows dynamic learning rate reducing based on
some validation measurements.
Learning rate scheduling should be applied after the optimizer’s update; e.g., you should write your code this way:
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = ExponentialLR(optimizer, gamma=0.9)
for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler.step()
Most learning rate schedulers can be called back-to-back (also referred to as chaining schedulers). The result is that each scheduler is applied one after the other on the learning rate obtained by
the one preceding it.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler1 = ExponentialLR(optimizer, gamma=0.9)
scheduler2 = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)
for epoch in range(20):
    for input, target in dataset:
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        optimizer.step()
    scheduler1.step()
    scheduler2.step()
In many places in the documentation, we will use the following template to refer to scheduler algorithms.
>>> scheduler = ...
>>> for epoch in range(100):
>>> train(...)
>>> validate(...)
>>> scheduler.step()
Prior to PyTorch 1.1.0, the learning rate scheduler was expected to be called before the optimizer’s update; 1.1.0 changed this behavior in a BC-breaking way. If you use the learning rate scheduler
(calling scheduler.step()) before the optimizer’s update (calling optimizer.step()), this will skip the first value of the learning rate schedule. If you are unable to reproduce results after
upgrading to PyTorch 1.1.0, please check if you are calling scheduler.step() at the wrong time.
lr_scheduler.LambdaLR Sets the learning rate of each parameter group to the initial lr times a given function.
lr_scheduler.MultiplicativeLR Multiply the learning rate of each parameter group by the factor given in the specified function.
lr_scheduler.StepLR Decays the learning rate of each parameter group by gamma every step_size epochs.
lr_scheduler.MultiStepLR Decays the learning rate of each parameter group by gamma once the number of epochs reaches one of the milestones.
lr_scheduler.ConstantLR Decays the learning rate of each parameter group by a small constant factor until the number of epochs reaches a pre-defined milestone: total_iters.
lr_scheduler.LinearLR Decays the learning rate of each parameter group by linearly changing a small multiplicative factor until the number of epochs reaches a pre-defined milestone: total_iters.
lr_scheduler.ExponentialLR Decays the learning rate of each parameter group by gamma every epoch.
lr_scheduler.PolynomialLR Decays the learning rate of each parameter group using a polynomial function in the given total_iters.
lr_scheduler.CosineAnnealingLR Set the learning rate of each parameter group using a cosine annealing schedule, where $\eta_{max}$ is set to the initial lr and $T_{cur}$ is the number of
epochs since the last restart in SGDR:
lr_scheduler.ChainedScheduler Chains list of learning rate schedulers.
lr_scheduler.SequentialLR Receives a list of schedulers that are expected to be called sequentially during the optimization process, together with milestone points that specify
which scheduler is supposed to be called at a given epoch.
lr_scheduler.ReduceLROnPlateau Reduce learning rate when a metric has stopped improving.
lr_scheduler.CyclicLR Sets the learning rate of each parameter group according to cyclical learning rate policy (CLR).
lr_scheduler.OneCycleLR Sets the learning rate of each parameter group according to the 1cycle learning rate policy.
lr_scheduler.CosineAnnealingWarmRestarts Set the learning rate of each parameter group using a cosine annealing schedule, where $\eta_{max}$ is set to the initial lr, $T_{cur}$ is the number of
epochs since the last restart and $T_{i}$ is the number of epochs between two warm restarts in SGDR:
Weight Averaging (SWA and EMA)¶
torch.optim.swa_utils implements Stochastic Weight Averaging (SWA) and Exponential Moving Average (EMA). In particular, the torch.optim.swa_utils.AveragedModel class implements SWA and EMA models,
torch.optim.swa_utils.SWALR implements the SWA learning rate scheduler and torch.optim.swa_utils.update_bn() is a utility function used to update SWA/EMA batch normalization statistics at the end of training.
SWA has been proposed in Averaging Weights Leads to Wider Optima and Better Generalization.
EMA is a widely known technique to reduce the training time by reducing the number of weight updates needed. It is a variation of Polyak averaging, but using exponential weights instead of equal
weights across iterations.
Constructing averaged models¶
The AveragedModel class serves to compute the weights of the SWA or EMA model.
You can create an SWA averaged model by running:
>>> averaged_model = AveragedModel(model)
EMA models are constructed by specifying the multi_avg_fn argument as follows:
>>> decay = 0.999
>>> averaged_model = AveragedModel(model, multi_avg_fn=get_ema_multi_avg_fn(decay))
Decay is a parameter between 0 and 1 that controls how fast the averaged parameters are decayed. If not provided to get_ema_multi_avg_fn, the default is 0.999.
get_ema_multi_avg_fn returns a function that applies the following EMA equation to the weights:
$W^\textrm{EMA}_{t+1} = \alpha W^\textrm{EMA}_{t} + (1 - \alpha) W^\textrm{model}_t$
where alpha is the EMA decay.
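The update can be checked with scalar weights in plain Python (a sketch of the equation above, not the torch implementation; the model weight is held constant for simplicity):

```python
alpha = 0.999   # EMA decay
w_ema = 0.0     # running average, started at zero
w_model = 1.0   # current model weight, held constant here

for _ in range(1000):
    w_ema = alpha * w_ema + (1 - alpha) * w_model

# After 1000 steps the average has moved most of the way toward the
# (constant) model weight: w_ema = 1 - alpha**1000, roughly 0.63.
assert 0.6 < w_ema < 0.7
```

With a decay this close to 1, the average responds slowly; smaller decay values track the model weights more quickly but smooth less.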
Here the model model can be an arbitrary torch.nn.Module object. averaged_model will keep track of the running averages of the parameters of the model. To update these averages, you should use the
update_parameters() function after the optimizer.step():
>>> averaged_model.update_parameters(model)
For SWA and EMA, this call is usually made right after optimizer.step(). In the case of SWA, it is usually skipped for some number of steps at the beginning of training.
Custom averaging strategies¶
By default, torch.optim.swa_utils.AveragedModel computes a running equal average of the parameters that you provide, but you can also use custom averaging functions with the avg_fn or multi_avg_fn parameters:
• avg_fn allows defining a function operating on each parameter tuple (averaged parameter, model parameter) and should return the new averaged parameter.
• multi_avg_fn allows defining more efficient operations acting on a tuple of parameter lists, (averaged parameter list, model parameter list), at the same time, for example using the
torch._foreach* functions. This function must update the averaged parameters in-place.
In the following example ema_model computes an exponential moving average using the avg_fn parameter:
>>> ema_avg = lambda averaged_model_parameter, model_parameter, num_averaged:\
>>> 0.9 * averaged_model_parameter + 0.1 * model_parameter
>>> ema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)
In the following example ema_model computes an exponential moving average using the more efficient multi_avg_fn parameter:
>>> ema_model = AveragedModel(model, multi_avg_fn=get_ema_multi_avg_fn(0.9))
SWA learning rate schedules¶
Typically, in SWA the learning rate is set to a high constant value. SWALR is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the
following code creates a scheduler that linearly anneals the learning rate from its initial value to 0.05 in 5 epochs within each parameter group:
>>> swa_scheduler = torch.optim.swa_utils.SWALR(optimizer, \
>>> anneal_strategy="linear", anneal_epochs=5, swa_lr=0.05)
You can also use cosine annealing to a fixed value instead of linear annealing by setting anneal_strategy="cos".
Taking care of batch normalization¶
update_bn() is a utility function that computes the batch normalization statistics for the SWA model on a given dataloader loader at the end of training:
>>> torch.optim.swa_utils.update_bn(loader, swa_model)
update_bn() applies the swa_model to every element in the dataloader and computes the activation statistics for each batch normalization layer in the model.
update_bn() assumes that each batch in the dataloader loader is either a tensor or a list of tensors where the first element is the tensor that the network swa_model should be applied to. If your
dataloader has a different structure, you can update the batch normalization statistics of the swa_model by doing a forward pass with the swa_model on each element of the dataset.
Putting it all together: SWA¶
In the example below, swa_model is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs and we switch to the SWA learning rate schedule and start
to collect SWA averages of the parameters at epoch 160:
>>> loader, optimizer, model, loss_fn = ...
>>> swa_model = torch.optim.swa_utils.AveragedModel(model)
>>> scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
>>> swa_start = 160
>>> swa_scheduler = SWALR(optimizer, swa_lr=0.05)
>>> for epoch in range(300):
>>> for input, target in loader:
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
>>> if epoch > swa_start:
>>> swa_model.update_parameters(model)
>>> swa_scheduler.step()
>>> else:
>>> scheduler.step()
>>> # Update bn statistics for the swa_model at the end
>>> torch.optim.swa_utils.update_bn(loader, swa_model)
>>> # Use swa_model to make predictions on test data
>>> preds = swa_model(test_input)
Putting it all together: EMA¶
In the example below, ema_model is the EMA model that accumulates the exponentially-decayed averages of the weights with a decay rate of 0.999. We train the model for a total of 300 epochs and start
to collect EMA averages immediately.
>>> loader, optimizer, model, loss_fn = ...
>>> ema_model = torch.optim.swa_utils.AveragedModel(model, \
>>> multi_avg_fn=torch.optim.swa_utils.get_ema_multi_avg_fn(0.999))
>>> for epoch in range(300):
>>> for input, target in loader:
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
>>> ema_model.update_parameters(model)
>>> # Update bn statistics for the ema_model at the end
>>> torch.optim.swa_utils.update_bn(loader, ema_model)
>>> # Use ema_model to make predictions on test data
>>> preds = ema_model(test_input) | {"url":"http://pytorch.org/docs/2.2/optim.html","timestamp":"2024-11-06T17:43:07Z","content_type":"text/html","content_length":"115118","record_id":"<urn:uuid:9cb26204-b072-4ce8-9d82-24a25da1d98e>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00348.warc.gz"} |
Algebraic.net - Pure_And_Applied_Math: Category Theory
21. Aristotle's Theory of Substance: The Categories and Metaphysics Zeta (Oxford Aristotle Studies) by Michael V. Wedin, October, 2002
22. Measure and category; a survey of the analogies between topological and measure spaces by John C. Oxtoby,
23. Classifying Spaces and Classifying Topoi (Lecture Notes in Mathematics, 1616) by Ieke Moerdijk, October, 1995
24. A Course in GB Syntax: Lectures on Binding and Empty Categories (Current Studies in Linguistics) by Howard Lasnik, Juan Uriagereka, 23 February, 1988
25. Categories of Symmetries and Infinite-Dimensional Groups (London Mathematical Society Monographs, New Series, No 16) by Yu. A. Neretin, G. G. Gould, et al., August 1996
26. Lectures on Tensor Categories and Modular Functors (University Lecture Series, No 21) by Bojko Bakalov, Alexander, Jr. Kirillov, November, 2000
27. Triangulated Categories (Annals of Mathematics Studies, No. 148) by Amnon Neeman, 01 February, 2001
28. Derived L-Adic Categories for Algebraic Stacks (Memoirs of the American Mathematical Society, No. 774) by K. Behrend, 01 May, 2003
29. Categorical Foundations : Special Topics in Order, Topology, Algebra, and Sheaf Theory by Maria Pedicchio, Walter Tholen, October, 2003
30. Uncountably Categorical Theories (Translations of Mathematical Monographs, Vol 117) by Boris Zilber, July, 1997 | {"url":"http://algebraic.net/pure_and_applied_math/category_theory_page_no_3.html","timestamp":"2024-11-07T10:14:56Z","content_type":"text/html","content_length":"15590","record_id":"<urn:uuid:cfb235b3-693a-4708-b5ab-a91300d07a6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00718.warc.gz"} |
Transactions Online
Kenji SATO, Yoshiharu MUROYA, Tetsuro OKUDA, "Design of High Slope-Efficiency Phase-Shifted DFB Laser Diodes with Asymmetrically-Pitch-Modulated (APM) Gratings" in IEICE TRANSACTIONS on Electronics,
vol. E83-C, no. 6, pp. 855-859, June 2000, doi: .
Abstract: A theoretical study on high slope-efficiency phase-shifted DFB laser diodes is presented. We have proposed a new grating structure called asymmetrically-pitch-modulated (APM) grating, and
calculated its slope- efficiency and single-mode-yield. In order to take into account the modulated grating period; we have developed an F-matrix which directly includes a chirped grating structure.
APM phase-shifted DFB laser diodes consist of a uniform grating in one half section of the cavity and a chirped grating in the other half. This structure causes asymmetrical field distribution inside
the cavity and the optical output power from one facet is larger than that from the other facet. According to the simulation results, when the normalized coupling coefficient κ L is 3.0, the
front-to-rear output power ratio is 2.6, while the single-mode-yield remains at 100%, and simultaneously the slope-efficiency improvement becomes 65% better than that of ordinary quarter-wave
phase-shifted DFB lasers of the same κ L value.
URL: https://global.ieice.org/en_transactions/electronics/10.1587/e83-c_6_855/_p
Math, Grade 7, Getting Started, Introduction To Ratio Tables
Justify Equivalent Ratios
Discuss the following with your classmates.
• How did you know that your ratio card was equivalent to your partner’s ratio card?
• What is another ratio that is also equivalent to your ratio cards?
• What is another ratio that is also equivalent to the two ratios shown in the image? | {"url":"https://oercommons.org/courseware/lesson/2562/student/?section=6","timestamp":"2024-11-10T09:31:06Z","content_type":"text/html","content_length":"35319","record_id":"<urn:uuid:e7cfd032-13e5-4147-a0e4-1cbb6f407f5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00074.warc.gz"} |
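One way to justify equivalence numerically: two ratios a:b and c:d are equivalent exactly when the cross products are equal, a × d = b × c. A small sketch (the example ratios are my own):

```python
def equivalent_ratios(a, b, c, d):
    """Return True when a:b and c:d are equivalent ratios."""
    return a * d == b * c

assert equivalent_ratios(2, 3, 4, 6)       # 2:3 and 4:6 are equivalent
assert not equivalent_ratios(2, 3, 3, 4)   # 2:3 and 3:4 are not
```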
The Workshop is organized by the Department of Probability Theory, Statistics and Actuarial Mathematics of Taras Shevchenko National University of Kyiv
All sessions are organized as Zoom meetings
Time is everywhere local time in Kyiv (GMT+3)
October 11: official program, scientific talks, 14:15-18:40
October 12: scientific talks, 14:00-18:00
Program committee: Kestutis Kubilius, Yuliya Mishura
Organizing committee: Iryna Bodnarchuk, Kostiantyn Ralchenko
Invited Speakers
Thomas Augustin, Ludwig-Maximilians-Universität München, Germany
Title of the talk: Survival Analysis under Generalized Measurement Error Models
Sandor Baran, University of Debrecen, Hungary
Title of the talk: K-optimal designs for regression models driven by Ornstein-Uhlenbeck processes and fields
Taras Bodnar, Stockholm University, Sweden
Title of the talk: Singular Conditional Autoregressive Wishart Model for Realized Covariance Matrices
Khalifa Es-Sebaiy, Kuwait University, Kuwait
Title of the talk: Berry-Esseen bounds of second moment estimators for Gaussian processes observed at high frequency
Alexander Ivanov, National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute", Ukraine
Title of the talk: LSE Consistency of the Symmetric Textured Surface Parameters
Kestutis Kubilius, Vilnius university, Lithuania
Title of the talk: Stochastic differential equation with a soft wall
Rostyslav Maiboroda, Taras Shevchenko National University of Kyiv, Ukraine
Title of the talk: Estimation of concentrations parameters in models of mixtures with varying concentrations
Lutz Mattner, Universität Trier, Germany
Title of the talk: A convolution inequality, yielding a sharper Berry-Esseen theorem for summands Zolotarev-close to normal
Yuliya Mishura, Taras Shevchenko National University of Kyiv, Ukraine
Title of the talk: Statistical estimation in the models with memory
Ostap Okhrin, Technical University of Dresden, Germany
Title of the talk: Infinitely stochastic micro reserving (Jointly with Matúš Maciak and Michal Pešta)
Kostiantyn Ralchenko, Taras Shevchenko National University of Kyiv, Ukraine
Title of the talk: Drift parameters estimation in the Cox–Ingersoll–Ross model
Shalabh, Indian Institute of Technology Kanpur, India
Title of the talk: Goodness of Fit Statistic in Non-parametric Measurement Error Model
Sergiy Shklyar, Taras Shevchenko National University of Kyiv, Ukraine
Title of the talk: Sufficiency estimator in the inverse exponential regression
Nakahiro Yoshida, University of Tokyo, Japan
Title of the talk: Adaptive and non-adaptive estimation for degenerate diffusion processes | {"url":"https://probability.knu.ua/sspdct2022/","timestamp":"2024-11-11T23:51:53Z","content_type":"text/html","content_length":"19040","record_id":"<urn:uuid:414a6a02-6949-46d8-94cc-502649c2b8d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00742.warc.gz"} |
What is Kinetic Energy - Definition
What is Kinetic Energy
The kinetic energy, K, is defined as the energy stored in an object because of its motion. An object in motion has the ability to do work and thus can be said to have energy. It is called kinetic
energy, from the Greek word kinetikos, meaning “motion.”
The kinetic energy depends on the speed of an object and is the ability of a moving object to do work on other objects when it collides with them. On the other hand, the kinetic energy of an object
represents the amount of energy required to increase the velocity of the object from rest (v = 0) to its final velocity. The kinetic energy also depends linearly on the mass, which is a numerical
measure of object’s inertia and the measure of an object’s resistance to acceleration when a force is applied.
We define the quantity:
K = ½ mv^2
to be the translational kinetic energy of the object. It must be added, it is called the “translational” kinetic energy to distinguish it from rotational kinetic energy.
Conservation of Mechanical Energy
The principle of the conservation of mechanical energy states:
The total mechanical energy (defined as the sum of its potential and kinetic energies) of a particle being acted on by only conservative forces is constant.
See also: Conservation of Mechanical Energy
An isolated system is one in which no external force causes energy changes. If only conservative forces act on an object and U is the potential energy function for the total conservative force, then
E[mech] = U + K
The potential energy, U, depends on the position of an object subjected to a conservative force.
It is defined as the object’s ability to do work and is increased as the object is moved in the opposite direction of the direction of the force.
The potential energy associated with a system consisting of Earth and a nearby particle is gravitational potential energy.
The kinetic energy, K, depends on the speed of an object and is the ability of a moving object to do work on other objects when it collides with them.
K = ½ mv^2
The above mentioned definition (E[mech] = U + K) assumes that the system is free of friction and other non-conservative forces. The difference between a conservative and a non-conservative force is
that when a conservative force moves an object from one point to another, the work done by the conservative force is independent of the path.
In any real situation, frictional forces and other non-conservative forces are present, but in many cases their effects on the system are so small that the principle of conservation of mechanical
energy can be used as a fair approximation. For example the frictional force is a non-conservative force, because it acts to reduce the mechanical energy in a system.
Note that non-conservative forces do not always reduce the mechanical energy. A non-conservative force changes the mechanical energy, there are forces that increase the total mechanical energy, like
the force provided by a motor or engine, is also a non-conservative force.
Block sliding down a frictionless incline slope
The 1 kg block starts out at a height H (say 1 m) above the ground, with potential energy mgH and zero kinetic energy. It slides to the ground (without friction) and arrives with no
potential energy and kinetic energy K = ½ mv^2. Calculate the velocity of the block at the ground and its kinetic energy.
E[mech] = U + K = const
=> ½ mv^2 = mgH
=> v = √(2gH) = 4.43 m/s
=> K = ½ x 1 kg x (4.43 m/s)^2 = 9.81 J

Note that this equals mgH = 1 kg x 9.81 m/s^2 x 1 m = 9.81 J, as conservation of energy requires.
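The arithmetic can be checked with a short script (same values as above: m = 1 kg, H = 1 m, g = 9.81 m/s²); note that ½mv² comes back to mgH = 9.81 J, as conservation requires:

```python
import math

g = 9.81   # gravitational acceleration, m/s^2
m = 1.0    # block mass, kg
H = 1.0    # drop height, m

# Energy conservation: mgH = (1/2) m v^2  =>  v = sqrt(2 g H)
v = math.sqrt(2 * g * H)
K = 0.5 * m * v**2   # kinetic energy at the bottom, J

print(f"v = {v:.2f} m/s, K = {K:.2f} J")  # v ≈ 4.43 m/s, K ≈ 9.81 J
```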
Assume a pendulum (a ball of mass m suspended on a string of length L) that we have pulled up so that the ball is at a height H < L above the lowest point of its arc. The
pendulum is subjected to the conservative gravitational force, and frictional forces such as air drag and friction at the pivot are negligible.
We release it from rest. How fast is it going at the bottom?
The pendulum reaches greatest kinetic energy and least potential energy when in the vertical position, because it will have the greatest speed and be nearest the Earth at this point. On the other
hand, it will have its least kinetic energy and greatest potential energy at the extreme positions of its swing, because it has zero speed and is farthest from Earth at these points.
If the amplitude is limited to small swings, the period T of a simple pendulum, the time taken for a complete cycle, is:

T = 2π√(L/g)

where L is the length of the pendulum and g is the local acceleration of gravity. For small swings the period is approximately the same for swings of different size; that is, the period is
independent of amplitude.
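A quick numerical check of both the bottom speed (energy conservation again) and the small-angle period T = 2π√(L/g); the values L = 1 m and H = 0.1 m are illustrative, not taken from the text:

```python
import math

g = 9.81  # local acceleration of gravity, m/s^2
L = 1.0   # pendulum length, m (illustrative value)
H = 0.1   # release height above the lowest point, m (illustrative value)

# Energy conservation gives the speed at the bottom of the swing
v_bottom = math.sqrt(2 * g * H)

# Small-angle period of a simple pendulum
T = 2 * math.pi * math.sqrt(L / g)

print(f"v_bottom = {v_bottom:.2f} m/s, T = {T:.2f} s")
```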
Relativistic Kinetic Energy
As the velocity of an object approaches the speed of light, its relativistic kinetic energy approaches infinity; this is caused by the Lorentz factor, which diverges as v → c.
The previous relationships between work and kinetic energy are based on Newton's laws of motion. When we generalize these laws according to the principle of relativity, we need a corresponding
generalization of the equation for kinetic energy. If an object's speed is close to the speed of light, it is necessary to use relativistic mechanics to calculate its kinetic energy.
In classical mechanics, kinetic energy and momentum are expressed as:

K = ½ mv^2 and p = mv

The derivation of their relativistic counterparts is based on the relativistic energy-momentum relation:

E^2 = (pc)^2 + (mc^2)^2

From it, the relativistic kinetic energy and the relativistic momentum can be derived:

K = ɣmc^2 − mc^2 = (ɣ − 1)mc^2 and p = ɣmv

where ɣ = 1/√(1 − v^2/c^2) is the Lorentz factor.
The first term (ɣmc^2) of the relativistic kinetic energy increases with the speed v of the particle. The second term (mc^2) is constant; it is called the rest energy of the particle, and
represents a form of energy that a particle has even when at rest. As the velocity of an object approaches the speed of light, the kinetic energy approaches infinity, caused by the Lorentz
factor, which approaches infinity for v → c. Therefore the speed of light cannot be reached by any massive particle.
The first term (ɣmc^2) is known as the total energy E of the particle, because it equals the rest energy plus the kinetic energy:
E = K + mc^2
For a particle at rest (K = 0), the total energy is its rest energy:
E = mc^2
One of the striking results of Einstein's theory of relativity is that mass and energy are equivalent and convertible into one another. The equivalence of mass and energy is described by
Einstein's famous formula E = mc^2. This result has been experimentally confirmed countless times in nuclear and elementary particle physics. For example, see Positron-electron Pair Production or
Conservation of Energy in Nuclear Reactions.
See also: Relativistic Mass
Example: Proton’s kinetic energy
A proton (m = 1.67 x 10^-27 kg) travels at a speed v = 0.9900c = 2.968 x 10^8m/s. What is its kinetic energy?
According to a classical calculation, which is not correct, we would obtain:
K = 1/2mv^2 = ½ x (1.67 x 10^-27 kg) x (2.968 x 10^8m/s)^2 = 7.355 x 10^-11 J
With relativistic correction the relativistic kinetic energy is equal to:
K = (ɣ – 1)mc^2
where the Lorentz factor
ɣ = 7.089
K = 6.089 x (1.67 x 10^-27 kg) x (2.9979 x 10^8m/s)^2 = 9.139 x 10^-10 J = 5.701 GeV
This is about 12 times the energy given by the classical calculation. According to this relationship, accelerating a proton beam to 5.7 GeV requires energies of a different order of magnitude than the classical estimate suggests.
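The proton example can be reproduced directly (constants as given in the text):

```python
import math

c = 2.9979e8        # speed of light, m/s
m = 1.67e-27        # proton mass, kg
v = 0.9900 * c      # proton speed, m/s

gamma = 1 / math.sqrt(1 - (v / c) ** 2)   # Lorentz factor
K_rel = (gamma - 1) * m * c**2            # relativistic kinetic energy, J
K_cls = 0.5 * m * v**2                    # classical value (not valid at this speed), J

K_rel_GeV = K_rel / 1.602e-10             # 1 GeV = 1.602e-10 J
print(f"gamma = {gamma:.3f}, K = {K_rel:.3e} J = {K_rel_GeV:.2f} GeV")
print(f"classical K = {K_cls:.3e} J, ratio = {K_rel / K_cls:.1f}")
```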
Geometry, Topology and Mathematical Physics | VCU Department of Mathematics and Applied Mathematics
Geometry and topology are branches of pure mathematics that constitute a highly active area of central importance in the current mathematical landscape. Geometry is one of the most ancient academic
disciplines. Geometers and topologists are concerned with the shape, size and abstract properties of spaces and spatial relationships. Mathematical physicists give a rigorous mathematical framework
to physical theories of the natural world.
Modern research in geometry, topology and mathematical physics includes many subdisciplines that employ techniques from neighboring branches of mathematics, including algebra and representation
theory, combinatorics and discrete mathematics, or analysis.
Research and Application
Our research group has interests in algebraic geometry, low-dimensional topology and knot theory, geometric measure theory and analysis, string theory and conformal field theory. Members of our
department also investigate the applications of these areas to the study of structures in theoretical physics, quantum computing, superconductivity, molecular biology and materials science.
Excel Formula for Changing Entire Line Information
In this tutorial, you will learn how to use an Excel formula to change an entire line of information using a list. The formula we will be using is the VLOOKUP function, which allows you to
search for a value in a table array and return a value from the same row in a specified column. This formula is particularly useful when you have a list of information and you want to update the
entire line based on a specific value.
To use the VLOOKUP function, you need to provide four arguments: the value to search for, the table array, the column number, and a flag for exact match. The value to search for is typically a unique
identifier or key in the list. The table array is the range of cells that contains the list of information you want to use to change the line. The column number is dynamically retrieved using the
COLUMN() function, which allows the formula to return the corresponding value from the same column in the list. The flag for exact match is used to specify whether the formula should only return a
value if the exact value is found in the first column of the table array.
To demonstrate how this formula works, let's consider an example. Suppose we have a sheet named 'List' with the following data:
| A | B | C | D |
|---|---|---|---|
| 1 | Red | Small | 10 |
| 2 | Green | Large | 20 |
| 3 | Blue | Small | 30 |
And in another sheet, we have the following keys in column A:

| A | B | C | D |
|---|---|---|---|
| 2 | | | |
| 1 | | | |
| 3 | | | |

If we enter the formula =VLOOKUP(A2, List!$A$2:$D$10, COLUMN(), FALSE) in cell B2 and copy it across to D2 and down to row 4, the result will be:
| A | B | C | D |
|---|---|---|---|
| 2 | Green | Large | 20 |
| 1 | Red | Small | 10 |
| 3 | Blue | Small | 30 |
The entire line information in column B, C, and D has been changed based on the values in column A, using the list in the 'List' sheet.
In conclusion, the VLOOKUP function is a powerful tool in Excel that can be used to change entire lines of information using a list. By providing the appropriate arguments, you can update multiple
columns in a table based on a specific value. This formula is particularly useful when working with large datasets or when you need to perform bulk updates. By understanding how to use the VLOOKUP
function, you can enhance your data manipulation capabilities and streamline your workflow.
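Outside Excel, the same row-wise lookup can be sketched in plain Python; the data below mirrors the example, and the keys 2, 1, 3 are chosen so that the output reproduces the result table:

```python
# The "List" sheet: key in column A, values in columns B-D
lookup_table = {
    1: ("Red", "Small", 10),
    2: ("Green", "Large", 20),
    3: ("Blue", "Small", 30),
}

# Keys entered in column A of the other sheet
keys = [2, 1, 3]

# Equivalent of =VLOOKUP(A_n, List!$A$2:$D$10, COLUMN(), FALSE)
# copied across columns B-D: fetch the whole row for each key.
rows = [lookup_table[k] for k in keys]

for k, row in zip(keys, rows):
    print(k, *row)
```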
An Excel formula
=VLOOKUP(A2, List!$A$2:$D$10, COLUMN(), FALSE)
Formula Explanation
This formula uses the VLOOKUP function to change the entire line information using a list.
Step-by-step explanation
1. The VLOOKUP function searches for a value in the first column of a table array and returns a value in the same row from a column you specify.
2. The first argument, A2, is the value to search for. This is typically a unique identifier or key in the list.
3. The second argument, List!$A$2:$D$10, is the table array. This is the range of cells that contains the list of information you want to use to change the line.
4. The third argument, COLUMN(), is used to dynamically retrieve the column number of the cell where the formula is entered. This allows the formula to return the corresponding value from the same
column in the list.
5. The fourth argument, FALSE, is used to specify an exact match. This means that the formula will only return a value if the exact value in A2 is found in the first column of the table array.
6. The formula can be copied and pasted to other cells in the same column to change the entire line information based on the values in column A.
For example, let's say we have the following data in the "List" sheet:
| A | B | C | D |
|---|---|---|---|
| 1 | Red | Small | 10 |
| 2 | Green | Large | 20 |
| 3 | Blue | Small | 30 |
And in another sheet, we have the following data:
| A | B | C | D |
|---|---|---|---|
| 2 | | | |
| 1 | | | |
| 3 | | | |
If we enter the formula =VLOOKUP(A2, List!$A$2:$D$10, COLUMN(), FALSE) in cell B2 and copy it across to D2 and down to row 4, the result will be:
| A | B | C | D |
|---|---|---|---|
| 2 | Green | Large | 20 |
| 1 | Red | Small | 10 |
| 3 | Blue | Small | 30 |
The formula has changed the entire line information based on the values in column A, using the list in the "List" sheet. | {"url":"https://codepal.ai/excel-formula-generator/query/5Pr92CAg/excel-formula-change-entire-line-information","timestamp":"2024-11-03T15:47:54Z","content_type":"text/html","content_length":"103654","record_id":"<urn:uuid:fde861ec-03ae-485e-92ca-ad5627744e7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00106.warc.gz"} |
Tips and examples of Math Symbol Palettes
This article contains some examples of math symbol palettes which you can copy and paste to use on your Inspera Assessment. You can also use parts of it, depending on what math symbols are relevant
for your questions.
The article Math Symbol Palette provides you with more information about creating a math palette and how to add it to a question.
What is a math palette toolbar?
In the question types Math entry, Math working and Essay, the candidates have access to a math palette by clicking the Sigma button. With this function, the candidates get the opportunity to answer
with math symbols or any other LaTeX symbols.
Math working and Math Entry:
Why create customised math palette?
By default one Basic and one Advanced math palette are available, and if you are happy with these you do not need to create your own math symbol palettes.
The purpose of the configurable math palette is to enable institutions to customise the palette to the specific needs and requirements in the question.
Use our examples of Math Symbol Palettes
Listed at the bottom of this article are some examples of math palettes we have made to help you on your way. All examples include three tabs, but in theory it is possible to have as many tabs as you
want. If you use only one tab, no name/label will be displayed to the candidates.
Do the following to use these examples:
1. In Inspera Assessment: Choose Math Symbol Palettes in the Author tab in the main menu
2. Create a new Math Symbol Palette (read more here on how to create a new math symbol palette)
3. Remove the entire default code in the palette
4. From this article: Download the file you find relevant (they are in a plain text format)
5. Copy the entire LaTeX code from the downloaded file
6. Insert the copied code to your palette in Inspera Assessment
7. Click "Save"
Once you have checked the palette in the preview on the left-hand side to ensure it looks as it should, you can edit the code to rename tabs, add or remove symbols and rearrange sequences to make
it suit your specific needs. Remember to save when you are done, and test the palette both in the preview on the left and on a test before using it with candidates on real exams.
Read more about how to edit your math symbol palette in the article Math Symbol Palette.
Example 1
If you copy the LaTeX code from example 1, the palette will look like this:
│Greek letters │Binary operators │Relations│
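As a reminder of the kind of commands such tabs expose, here is a small sample of generic LaTeX symbol commands (plain LaTeX for illustration only; this is not Inspera's palette configuration format):

```latex
\[
  \alpha,\ \beta,\ \gamma,\ \Omega \quad  % Greek letters
  a \pm b,\ a \times b,\ a \cdot b \quad  % binary operators
  x \leq y,\ x \neq y,\ x \approx y       % relations
\]
```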
Example 2 - Derivation (Derivasjon)
If you copy the LaTeX code from example 2, the palette will look like this:
│Formler│Derivation │Symboler │
Example 3 - Integration
If you copy the LaTeX code from example 3, the palette will look like this:
│Basic│Integration │Symbols │
Example 4 - Medicine (Norwegian)
If you copy the LaTeX code from example 4, the palette will look like this:
Example 5 - Math (Sweden K12)
If you copy the LaTeX code from example 5, the palette will look like this:
International Journal of Metrology and Quality Engineering (IJMQE)
Issue Int. J. Metrol. Qual. Eng.
Volume 12, 2021
Article Number 12
Number of page(s) 11
DOI https://doi.org/10.1051/ijmqe/2021010
Published online 28 May 2021
© A. Ainane et al., Published by EDP Sciences, 2021
1 Introduction
Cereal storage is designed to keep grains for later use, protected from external factors, such as humidity, temperature and light, that can lead to their degradation [1]. In addition, good storage also
protects its content from other influences such as microorganisms, shocks, odours, vibrations, dust and compressive forces [2]. Technological development has made it possible to access
countless foods from every part of the globe, so protecting their organoleptic properties is important for food quality conformity [3–5].
More precisely, stored cereals are prone to postharvest losses in quality and quantity due to infestation by different groups of insects, which is considered the principal storage problem [6]. Among
the stored-product beetles, Sitophilus granarius (Coleoptera: Dryophthorinae) is a primary pest of stored grain-based products, principally wheat, but it also attacks other cereals (oats, rye,
barley, corn) as well as derived products (flour) and other seeds, especially Fabaceae [7,8]. Control of these insects relies heavily on treatment with synthetic insecticides such as pyrethroids,
organochlorines, organophosphates, carbamates and fumigants (mainly methyl bromide and phosphine) [9]. However, due to toxicity to consumers, insecticide resistance and the resurgence of pests associated
with synthetic insecticides, alternative solutions are being sought [10]. Pesticides from natural sources, which are relatively cheap, biodegradable, accessible and available, less toxic
to non-target organisms and less prone to resistance by insect species, are considered potential resources [11]. Among these resources, essential oils produced from aromatic and medicinal plants have
received much attention due to their broad spectrum of activities. They are volatile, complex compounds characterized by a strong odour, formed by aromatic plants as secondary metabolites
(monoterpenes, sesquiterpenes and phenylpropanoid compounds), and they show various insecticidal properties (larvicidal, antifeedant, repellent, fumigant and ovicidal activities) [12]. Owing to
these properties, essential oils are potentially suitable for integrated pest management programmes [13].
Cedar extracts are considered an effective natural pesticide, especially the essential oil. Oils obtained from different geographical regions, such as Morocco, Algeria, Cyprus, Lebanon, Syria,
Turkey, Afghanistan and the Himalayas, have been reported in the entomological literature, with many treatments traditionally ascribed insecticidal activity [14]. Regarding previous reports on
the chemical composition of this plant, the oil from the wood of this species has been extensively studied, sesquiterpenoids of the himachalane family being the major components [15].
The objective of this work is to carry out a desorption transfer study of the essential oil of Cedrus atlantica after fixation in a porous clay, using experimental designs (full factorial
design) and analytical models of the diffusion process (Fick's second law), in order to understand the behaviour of the essential oil and to determine the relationships between certain physical
parameters (diffusivity, evaporation constant and evaporation rate) that can explain the mechanism at play in the insecticidal activity against Sitophilus granarius.
2 Materials and methods
2.1 Materials
2.1.1 Clay
The studied natural porous clay RC was selected from Bejaad, Morocco (32°47'01.7”N 6°13'52.3”W). The chemical composition of the clay raw material (Tab. 1) shows that the major elements are SiO[2],
Fe[2]O[3] and Al[2]O[3]. The amounts of K[2]O, Na[2]O and MgO are significant. From XRD analyses (Fig. 1), it is found that this clay is mainly constituted of quartz, calcite, dolomite, kaolinite,
illite and hematite [16].
2.1.2 Essential oil of cedrus atlantica
The essential oil of Cedrus atlantica (EOCa), obtained by steam distillation from sawdust of Cedrus atlantica Man. from Morocco, was analysed by gas chromatography–mass spectrometry
(GC–MS). The GC analysis was performed on a SHIMADZU GC-14B equipped with an FID detector and an LM-5 (30m×0.25mm×0.3mm) capillary column. The components of the oil were identified by comparing
the mass spectra obtained by GC–MS with literature data. The results of the analysis are shown in Table 2. In total, 22 constituents were identified (92.10%).
The data obtained from the chemical analysis are in accord with previous works reported in the literature: the sesquiterpene hydrocarbons α-himachalene (15.63%), β-himachalene
(31.24%) and γ-himachalene (14.46%) are the major compounds. β-Himachalene is the most abundant, and the three himachalenes together represent practically two-thirds of the identified
composition [17].
2.1.3 Preparation of the mixture clay/essential oil
The RC clay was homogenized, finely ground (<63µm) and heated for 2h at 200°C for complete combustion of organic matter. A well-defined volume of EOCa was mixed with a quantity of clay powder,
prepared at between 10 and 40% (v/w) in order to obtain total fixation of the oil on the powder. The prepared materials (RC+EOCa) were then transferred to metal cylinders (diameters 1cm and 2cm) for the
subsequent experimental work.
2.2 Theoretical considerations
2.2.1 Assumptions
• Material transfer of EOCa is considered with one-dimensional diffusion in the cylindrical coordinate system.
• The diffusion takes place under transient conditions with a constant diffusivity and the rate of evaporation.
• During sorption, the concentration of EOCa on clay RC face reaches the equilibrium value as soon as the EOCa is mixed with RC.
• At the beginning of desorption, the concentration of EOCa throughout the clay is constant.
2.2.2 Model of transfer of matter by diffusion
Fick's second law for diffusion in terms of cylindrical coordinates (r, θ, z) can be written as [18,19]:

$\frac{\partial C}{\partial t} = \frac{1}{r}\left\{\frac{\partial}{\partial r}\left(r D_r \frac{\partial C}{\partial r}\right) + \frac{\partial}{\partial \theta}\left(\frac{D_\theta}{r}\,\frac{\partial C}{\partial \theta}\right) + \frac{\partial}{\partial z}\left(r D_z \frac{\partial C}{\partial z}\right)\right\}$ (1)

Since the cylinder height is small compared with the diameter and the lateral surface of the cylinder, the diffusion transfer is directed along the height, which allows the transfer model to be
considered one-dimensional. The problem simplifies, and the diffusion equation becomes the one-dimensional form of Fick's second law:

$\frac{\partial C}{\partial t} = D_z \frac{\partial^2 C}{\partial z^2}$ (2)
Considering the following boundary conditions:

$\begin{cases} t = 0: & C = C_0, \quad 0 \le z \le h \\ t > 0: & C = C_t, \quad z = h \end{cases}$ (3)
The analytical solution of equation (2) has been demonstrated by Crank (1979) [18] according to the conditions displayed in equation (3); it can be written as follows:

$C(z,t) = \frac{4}{\pi}\sum_{n=0}^{\infty} \exp\left(-\frac{D_z (2n+1)^2 \pi^2 t}{h^2}\right) \cos\left(\frac{(2n+1)\pi z}{2h}\right)$ (4)
The total mass M[t] of EOCa in the clay RC at an instant t is obtained by integrating the concentration C over the thickness of the material, over a surface S subjected to the
concentration flow:

$M_t = \int_0^h S \cdot C(z,t)\, dz$ (5)
The analytical solution of these equations, taking into account the boundary conditions and tending towards equilibrium after an infinite desorption time, is given by:

$\frac{M_t - M_\infty}{M_0 - M_\infty} = \frac{8}{\pi^2}\sum_{n=0}^{\infty} \frac{1}{(2n+1)^2}\, \exp\left(-\frac{D_z (2n+1)^2 \pi^2 t}{h^2}\right)$ (6)
M[0]: The initial mass of EOCa absorbed.
M[t]: The mass of EOCa desorbed after a given time of aging.
M[∞]: the mass of liquid at equilibrium.
Simplifying equation (6) by taking the first term of the series solution and assuming M[∞] = 0:

$\frac{M_t}{M_0} = \frac{8}{\pi^2} \exp\left(-\frac{D_z \pi^2 t}{h^2}\right)$ (7)
The term (8/π^2) is assumed to be equal to 1, so the last equation becomes:

$\frac{M_t}{M_0} = \exp\left(-\frac{D_z \pi^2 t}{h^2}\right)$ (8)
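The quality of the one-term truncation relative to the full series solution can be checked numerically; the values D_z = 1×10⁻⁵ cm²/h, h = 0.5 cm and t = 24 h below are illustrative:

```python
import math

D_z = 1.0e-5   # diffusivity, cm^2/h (illustrative value)
h = 0.5        # cylinder height, cm
t = 24.0       # time, h

def ratio_series(t, n_terms=200):
    """M_t/M_0 from the full series (with M_inf = 0), prefactor 8/pi^2."""
    s = 0.0
    for n in range(n_terms):
        k = 2 * n + 1
        s += math.exp(-D_z * k**2 * math.pi**2 * t / h**2) / k**2
    return 8.0 / math.pi**2 * s

def ratio_one_term(t):
    """One-term approximation with the 8/pi^2 prefactor set to 1."""
    return math.exp(-D_z * math.pi**2 * t / h**2)

full = ratio_series(t)
approx = ratio_one_term(t)
print(f"series: {full:.4f}, one-term: {approx:.4f}")
```

At this short time the two differ by a few percent; the one-term form becomes accurate at longer times, when the higher modes have decayed.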
In our modelling study, and in view of the theoretical approximations already cited, we focus on three parameters that describe the desorption process: the diffusivity D[z], the evaporation
constant K and the evaporation rate F:
• The diffusivity D[z] is obtained in practice by linear extrapolation of the logarithmic form of equation (8).
• The evaporation constant K is related to the diffusivity by the following equation:

$K = \frac{\pi^2 D_z}{h^2}$ (9)

• The evaporation rate F is determined using the initial value of the desorption flux; it is the initial gradient of the desorption curve as a function of time, as indicated in
equation (10):

$F = F_0 = \frac{1}{S} \lim_{t \to 0} \frac{dM}{dt}$ (10)
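A minimal sketch of the D_z extraction, using synthetic desorption data generated from equation (8) itself; h = 0.5 cm, and the assumed true value 9.81×10⁻⁶ cm²/h matches the fit reported later for the optimal conditions:

```python
import math

h = 0.5                 # cylinder height, cm
D_true = 9.81e-6        # assumed "true" diffusivity, cm^2/h

# Synthetic desorption data generated from equation (8)
times = [0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0]               # h
ratios = [math.exp(-D_true * math.pi**2 * t / h**2) for t in times]

# ln(M_t/M_0) = -(D_z * pi^2 / h^2) * t  ->  the slope gives D_z
ys = [math.log(r) for r in ratios]
n = len(times)
mean_t = sum(times) / n
mean_y = sum(ys) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, ys)) \
        / sum((t - mean_t) ** 2 for t in times)

D_fit = -slope * h**2 / math.pi**2
print(f"fitted D_z = {D_fit:.3e} cm^2/h")
```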
2.3 Experimentation and modelling
2.3.1 Insecticidal activity test
An insecticide-susceptible population of the adult weevil species (Sitophilus granarius) were used in this study. The population of Sitophilus granarius was obtained from the Department of
environmental engineering of EST-Khenifra (University of Sultan Moulay Slimane, Khenifra, Morocco). The strain was reared on wheat grains free of insecticide residue in glass containers (1L) within
growth chambers at 27±2°C, 70±10% of relative humidity (RH), and 12:12h photoperiod (D: L) [20].
EOCa or EOCa+RC were placed in steel cylinders (h=0.5cm; Φ=1cm, 2cm or 3cm), and the cylinders were then placed in glass Petri dishes containing 10 insects. The assemblies, together with a
negative control (without product), were introduced into a fumigation chamber inside the experimental enclosure (controlled temperature and relative humidity). Insect mortality was recorded as a
function of time for 24h. Each trial was performed in triplicate to minimize errors.
Corrected mortality of the treated insects is expressed by equation (11):

$M\% = \frac{M_{test} - M_{control}}{100 - M_{control}} \times 100$ (11)
M%: Mortality corrected;
M[test]: Mortality observed during the test;
M[control]: mortality observed in the control.
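Equation (11) is Abbott's correction; in code, with illustrative mortality values (70% in the test, 10% in the control):

```python
def corrected_mortality(m_test, m_control):
    """Abbott-corrected mortality, equation (11); inputs in percent."""
    return (m_test - m_control) / (100.0 - m_control) * 100.0

# Illustrative values, not taken from the paper's data
M = corrected_mortality(70.0, 10.0)
print(f"corrected mortality = {M:.2f}%")
```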
The 50% lethal dose (LD50) is determined by linear interpolation on curves giving the percentage of mortality as a function of the logarithm of the concentration tested.
2.3.2 Optimization of transfer conditions
In order to optimize the experimental conditions of the diffusion transfer of the EOCa essential oil in the porous clay medium RC, we adopted a statistical approach based on experimental designs.
The transfer is represented by the evaporation rate F [21]. The design matrix (full factorial design) was constructed for 4 factors X[i]:
• Factor 1=C: Essential oil concentration (0.01μL/cm^3 and 0.02μL/cm^3);
• Factor 2=T: Temperature (25 and 30°C);
• Factor 3=D: Cylinder diameter (1cm and 2cm);
• Factor 4=M: Mass of clay rock (0.05g and 0.10g).
The number of tests carried out is calculated by the formula:

$\text{number of tests} = 2^k$ (12)

where 2 corresponds to the number of levels (−1 and +1) and k stands for the number of factors studied.
In our study, the number of factors equals 4, so the number of tests is 2^4 = 16.
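The enumeration of the 2^4 = 16 coded runs can be sketched programmatically (Python used for illustration; factor order C, T, D, M as above):

```python
from itertools import product

factors = ["C", "T", "D", "M"]   # concentration, temperature, diameter, mass

# All 2^4 = 16 combinations of the coded levels -1 / +1
design = list(product([-1, +1], repeat=len(factors)))

for run, levels in enumerate(design, start=1):
    print(run, dict(zip(factors, levels)))

print(len(design), "runs")
```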
Table 3 presents the matrix of the experiments and the evaporation rates obtained; its rows give the coded tests for the factors studied according to the high (+1) or low (−1) level of
each factor.
The polynomial model is:

$F = b_0 + \sum_{i=1}^{n} b_i X_i + \sum_{i=1}^{n}\sum_{j=1}^{n-1} b_{ij} X_i X_j + \sum_{i=1}^{n}\sum_{j=1}^{n-1}\sum_{k=1}^{n-2} b_{ijk} X_i X_j X_k + b_{ijkl} X_i X_j X_k X_l$ (13)
• Mean: b[0];
• 4 main effects: b[i];
• 6 second-order interaction effects: b[ij];
• 4 third-order interaction effects: b[ijk];
• 1 fourth-order interaction: b[ijkl].
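Once the 16 responses are measured, each coefficient of model (13) can be estimated as the average of the response weighted by the corresponding ±1 contrast column. A minimal sketch with a synthetic response (the values b0 = 2.6e-4 and bC = 5e-5 are illustrative, not the paper's fitted coefficients):

```python
from itertools import product

# Coded 2^4 design (columns: C, T, D, M)
design = list(product([-1, +1], repeat=4))

# Synthetic responses: F = b0 + bC * C  (assumed values, for illustration)
b0_true, bC_true = 2.6e-4, 5.0e-5
responses = [b0_true + bC_true * x[0] for x in design]

def effect(column):
    """Estimate one coefficient: mean of response times the contrast column."""
    return sum(c * y for c, y in zip(column, responses)) / len(responses)

b0 = sum(responses) / len(responses)
bC = effect([x[0] for x in design])
bCT = effect([x[0] * x[1] for x in design])  # second-order interaction C*T

print(f"b0 = {b0:.2e}, bC = {bC:.2e}, bCT = {bCT:.2e}")
```

The orthogonality of the ±1 columns is what lets each coefficient be estimated independently; here the recovered bC equals the assumed value and the unused interaction comes out zero.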
2.3.3 Study of insecticidal activity behaviour of essential oil mixture in clay porous media
The interaction between the behaviour of EOCa in the porous clay medium RC and the insecticidal activity was examined by processing the matter-transfer diffusion parameters against the mortality
from the insecticidal activity, using Principal Component Analysis (PCA), an extremely powerful statistical tool for compressing and synthesizing information, useful when there is a body of
quantitative data to be processed and interpreted and when the correlations between variables need to be known [22].
2.3.4 Computer and statistical processing
All computer processing was done using MATLAB software. The numerical modelling results are compared with the experimental results in order to extract the parameters sought and to support
interpretation. Other software tools were used in parallel for certain calculations and graphical representations: EXCEL, ORIGIN LAB, DESIGN PLAN and XLSTAT.
3 Results and analysis
3.1 Insecticidal activities in porous clay media
All the results of the insecticidal tests of EOCa, alone or with the porous clay media RC and GC, against Sitophilus granarius are displayed in Table 4 as 50% lethal doses (LD[50])
depending on the parameters studied: the cylinder diameter D and the incubation temperature T. From this table, we deduce that the LD[50] decreases in the presence of the porous clayey
RC and GC media, which shows that the insecticidal power changes when the essential oil is fixed on the porous media. In addition, the factors studied (cylinder diameter and temperature)
retain the same influence on insecticidal activity: increasing the cylinder diameter and increasing the temperature promote higher activity. It can also be concluded that the insecticidal
activity of EOCa fixed on porous media, and its persistence, depend on the nature of the interactions between the constituent compounds of the essential oils and the media studied [23].
3.2 Optimization of essential oil transfer conditions in porous media
As described previously, the experimental design chosen for optimizing the evaporation flux of EOCa fixed in the porous medium is a full factorial design with four (4)
factors: the essential oil concentration C, the temperature T, the cylinder diameter D and the mass of clay medium M. Each factor has 2 levels: the high level is coded +1 and the low level −1, which
implies a total of 16 runs. The associated mathematical model is a first-degree polynomial in the factors plus the second-, third- and fourth-order
interactions of these factors.
The coefficients of the flux model of the essential oil fixed on the RC medium are displayed in Table 5. From the optimization results, the mean flux is 2.6061×10^−4. Considering
only the individual factors, we deduce that:
• increasing the essential oil concentration increases the flux;
• increasing the temperature increases the flux;
• increasing the cylinder diameter decreases the flux;
• increasing the mass of porous medium decreases the flux.
The set of graphical representations of the flux as a function of two factors in 3D is shown in Figure 2. The interaction of two of the four factors also influences the flux values. The flux
increases with the concentration-temperature interaction; it decreases with the concentration-diameter, concentration-mass, mass-temperature and temperature-diameter interactions; and it also
decreases with the diameter-mass interaction.
In general, according to the results in Tables 6 and 7, the coefficients of determination R^2 for the two media show that the optimized models are well explained. The Fisher test value is F
[test]=2.42, which indicates that the model is not significant with respect to the noise (uncertainties). The predictive coefficient of determination R[pred]^2 is negative, which implies that
the overall mean flux may be a better predictor of the response than the model studied [24]. The response variation ratio, which measures the flux/uncertainty ratio, is equal to 6.68, which
confirms that the response is adequate (a ratio greater than 4 is desirable).
3.3 Modelling of the essential oil diffusion process in porous media
To fully understand the behavior and the diffusion process of EOCa in the RC medium, two intermediate paths were made: the study of the desorption kinetics and the simulation of the concentration
profiles inside the cylinder. This study makes it possible to explain the mechanism involved, as well as a means of precisely determining the parameters: the diffusion coefficient D[z], the
evaporation flux F, the evaporation constant K and the activation energy E[a].
Several kinetic models cited in the literature can describe the desorption mechanism [25–27]; in our case the model tested is the analytical solution of the Fickian diffusion model described previously. The methodology adopted consists of comparing the mass percentage of the oil as a function of time from the experimental results with simulations from the analytical treatment during desorption. To run the simulation with this model, it is necessary to determine the diffusion coefficient D, which is the principal parameter. The kinetic study was carried out under the optimal conditions corresponding to the tests in the previous paragraph (C = 0.01 μL/cm^3, T = 30 °C, M = 0.10 g and D = 2 cm). A linear fit of equation (6) to the experimental results obtained by the gravimetric technique gives an approximate diffusion coefficient of D[z] = 9.81×10^−6 cm^2/h under the optimal conditions. The experimental results and the simulation of the desorption kinetics of the EOCa in the RC medium are shown in Figure 3. The curves show a clear decreasing branch in the mass percentage of essential oil, reflecting the desorption mechanism; the amount desorbed is 5.77% over 24 h.
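As an illustration of what such a simulation involves, the sketch below evaluates the classical series solution of Fick's second law for desorption from a plane sheet. This is a generic stand-in, not the paper's equation (6): the paper works with a cylinder, and the half-thickness L used here is a hypothetical value; only D[z] is taken from the text.

```python
import math

def desorbed_fraction(t, D, L, n_terms=2000):
    """Fraction desorbed M_t/M_inf for a plane sheet of half-thickness L:
    1 - sum_n 8/((2n+1)^2 pi^2) * exp(-D (2n+1)^2 pi^2 t / (4 L^2))."""
    s = 0.0
    for n in range(n_terms):
        k = 2 * n + 1
        s += (8.0 / (k * math.pi) ** 2) * math.exp(
            -D * (k * math.pi) ** 2 * t / (4.0 * L ** 2))
    return 1.0 - s

D_z = 9.81e-6   # cm^2/h, the fitted value reported in the text
L = 1.0         # cm, hypothetical half-thickness for illustration
print(desorbed_fraction(24.0, D_z, L))   # small fraction desorbed after 24 h
```

With such a function, the simulated mass-loss curve can be compared point by point against the gravimetric data, which is the fitting procedure the paragraph describes.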
The simulation closely matches the experimental results, which demonstrates the validity of the analytical model developed and, moreover, its ability to give a reliable estimate of the other parameters sought. In general, the simulation has practical value: it can be used to deduce the quantity of oil remaining over the duration of the biological applications (insecticidal activities). Figure 4 shows the simulated concentration profiles of the essential oil in the medium. These concentration profiles give useful information about the transfer of EOCa within the porous medium, so that the concentration can be determined at each height and at any given time.
3.4 Interaction of the behavior of essential oils in porous media and insecticidal power
As described previously, the behavior of the EOCa in the RC medium is characterized on the one hand by the diffusion parameters (the evaporation flux F, the diffusivity D[z], the evaporation constant K and the activation energy E[a]) and on the other hand by the insecticidal activity, expressed as the mortality M%. The calculated values of all parameters are listed in Table 8; the interpretation of these results remains complex. The activation energy values are constant and low, which confirms that the two clay porous media have the same energetic behavior; moreover, the negative value indicates that desorption runs in the opposite direction (adsorption) and shows the great capacity of porous media to fix essential oils [28].
Principal Component Analysis (PCA) is a statistical tool used here to study the behaviour of EOCa in RC media with respect to insecticidal activity and to uncover hidden information. The case studied crosses 16 observations (the matrices produced during the optimization of the evaporation flux in the two porous media) with four variables: the evaporation flux F, the diffusivity D[z], the evaporation constant K and the mortality M% (Tab. 8).
To interpret the results, we must start with the choice of the axes relevant to the analysis. Table 9 presents the eigenvalues of the four representative axes obtained. According to the Kaiser criterion [29], the axes F1 and F2, whose eigenvalues are greater than 1 (λ[1] = 1.89 and λ[2] = 1.38), were retained, with variabilities of 47.31% and 34.70% respectively, corresponding to 82% of the total information. These eigenvalues indicate a good quality of analysis: the sum of the inertia explained by the selected axes represents a significant part of the total inertia.
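The axis-selection step described above (eigenvalues of the correlation matrix, Kaiser criterion λ > 1, percentage of explained inertia) can be sketched as follows. The data here are random placeholders standing in for the 16×4 matrix of Table 8, so the numbers will not reproduce λ[1] = 1.89 and λ[2] = 1.38:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(16, 4))   # placeholder for 16 runs x (F, D_z, K, M%)

# PCA on standardized variables = eigendecomposition of the correlation matrix
R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending

variability = 100.0 * eigvals / eigvals.sum()    # % of inertia per axis
kaiser_axes = np.flatnonzero(eigvals > 1.0)      # axes retained by Kaiser

print(eigvals.round(2), variability.round(1), kaiser_axes)
```

Because the trace of a 4×4 correlation matrix is 4, the eigenvalues always sum to 4, so the percentages of variability sum to 100 regardless of the data.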
In the mapping displayed in Figure 5, we can see the following interpretations:
The diffusion coefficient D[z] and the evaporation constant K lie very close to the correlation circle along the F1 axis and are therefore very well represented on the map. The rather small angle (measured from the origin) formed by the points (D[z]: cos^2(θ) = 0.78 and K: cos^2(θ) = 0.72) indicates that these two variables are well correlated with each other. This reflects the dependence between the two parameters expressed in equation (9);
The evaporation flux F and the mortality percentage M% lie close to the correlation circle along the F2 axis, so they too are well represented on the map. The rather small angle formed by the points (F: cos^2(θ) = 0.24 and M%: cos^2(θ) = 0.15) indicates that these two variables are correlated with each other. This shows that the observed mortality M% of the insecticidal activity depends on the evaporation flux F;
On the other hand, the nearly right angle between the two pairs of parameters indicates that the diffusion coefficient D[z] and the evaporation constant K are independent of the other two variables, the evaporation flux F and the mortality percentage M%.
4 General discussion
The essential oil of Cedrus atlantica showed good insecticidal activity against Sitophilus granarius, a pest of stored wheat; moreover, this activity was enhanced when the oil was fixed in the clay porous medium. The mechanism of action of this activity is fumigation, hence the appropriate model of the process is one of material transfer.
To explain the mechanism, we studied the relationship between the behavior of the essential oil in the porous medium and the insecticidal activity. The transfer of essential oil was analyzed using mathematical analytical models of Fickian diffusion under assumptions and theoretical approximations that facilitate numerical calculation. As a result, the physical parameters (the flux F, the diffusivity D[z] and the evaporation constant K) were determined in parallel with the biological parameter, the mortality M% of the insecticidal activity against Sitophilus granarius. Principal component analysis (PCA), which determines the correlations among all the parameters obtained, shows that the biological activity depends on the flux F.
In parallel, the diffusion transfer of the essential oil was optimized using the design-of-experiments method with a full factorial design. The results show that the flux F increases with the concentration of the essential oil and with the temperature, but decreases with the cylinder diameter (contact surface) and the mass of the porous medium.
In general, to obtain good insecticidal activity against Sitophilus granarius, the evaporation flux of the essential oil fixed on the porous medium should be increased directly, by raising the concentration of the essential oil and the temperature and by reducing the cylinder diameter and the mass of the porous medium.
Finally, the limitations of this study can be cited as follows:
• The constituents of essential oils do not all have the same affinity for the porous medium;
• The insecticidal activity of the essential oil fixed in the porous medium, and its persistence, depend on the nature of the interactions between the constituent compounds of the essential oil and the medium studied.
5 Conclusion
The desorption behavior of Cedrus atlantica essential oil fixed on a porous clay medium, in relation to its insecticidal activity against Sitophilus granarius, was studied using mathematical treatments based on Fick's law together with well-defined statistical approaches, namely principal component analysis and design of experiments. The data collected were used to explain the mechanism behind the remarkable biological activity observed when the oil is fixed in the porous medium.
The observed mortality of this insecticidal activity depends on the evaporation flux, and this parameter in turn depends on the concentration of essential oil, the temperature, the diameter of the cylinder used and the mass of the porous medium. Taken together, the results indicate that the surface of the porous medium, through its affinity for the oil's constituents (bioactive compounds), delays their release during desorption.
| {"url":"https://www.metrology-journal.org/articles/ijmqe/full_html/2021/01/ijmqe210009/ijmqe210009.html","timestamp":"2024-11-03T00:04:12Z","content_type":"text/html","content_length":"147322","record_id":"<urn:uuid:043a2300-0e47-4fdb-8544-a7011a7bde6d>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00025.warc.gz"} |
Car Delivery Cost Calculator - Certified Calculator
When it comes to delivering a car, whether for personal use or business purposes, it’s essential to estimate the cost involved. The Car Delivery Cost Calculator is a handy tool that allows you to
calculate the delivery cost of a car based on the distance it needs to be transported and the delivery cost per mile. This calculation helps you plan your budget and make informed decisions when it
comes to car delivery.
The Car Delivery Cost Calculator employs a straightforward formula to determine the delivery cost:
Delivery Cost = Distance × Delivery Cost Per Mile
• Distance: This represents the number of miles the car needs to be transported.
• Delivery Cost Per Mile: The delivery cost per mile is the cost incurred for delivering the car one mile.
By using this formula, you can calculate the total delivery cost with ease.
How to Use
Using the Car Delivery Cost Calculator is a breeze:
1. Input the distance the car needs to be transported in the “Distance” field.
2. Enter the delivery cost per mile in the “Delivery Cost Per Mile” field.
3. Click the “Calculate” button.
4. The calculator will process the data and provide you with the estimated delivery cost.
For instance, if you need to deliver a car 300 miles, and the delivery cost per mile is $1.50, after clicking “Calculate,” the result may be:
Delivery Cost: $450.00
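The formula and the worked example above can be sketched in a few lines (a hypothetical stand-in, not the site's actual implementation):

```python
def delivery_cost(distance_miles: float, cost_per_mile: float) -> float:
    """Delivery Cost = Distance x Delivery Cost Per Mile."""
    return distance_miles * cost_per_mile

# The worked example from the text: 300 miles at $1.50 per mile
print(f"Delivery Cost: ${delivery_cost(300, 1.50):.2f}")  # Delivery Cost: $450.00
```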
FAQs
1. Why should I estimate the delivery cost of a car?
□ Estimating the delivery cost helps you plan your budget and make informed decisions about car transportation.
2. Does the calculator consider factors like vehicle type or specific delivery service rates?
□ The calculator provides a basic estimate based on the distance and delivery cost per mile. It does not account for variations due to vehicle type or specific service rates.
3. Can I use this calculator for other types of vehicles besides cars?
□ While it’s designed for cars, you can use it for any type of vehicle by adjusting the distance and delivery cost per mile accordingly.
4. Is this calculator applicable for international car delivery as well?
□ The calculator can be used for international delivery if you enter the distance in miles and the corresponding delivery cost per mile in the chosen currency.
5. Can I use different currency formats in the calculator?
□ Yes, you can use your preferred currency format as long as you input the amounts consistently.
6. Does the calculator provide an accurate estimate in real-world scenarios?
□ The calculator provides a basic estimate. For a more precise calculation, consider additional factors such as taxes, import/export duties, and the specific services offered by the delivery service provider.
The Car Delivery Cost Calculator is a valuable tool for estimating the cost of delivering a car based on the distance and the delivery cost per mile. By using this calculator, you can plan your
budget effectively and make informed decisions when it comes to car delivery. Keep in mind that this calculator provides a basic estimate and does not consider all factors that may influence the
total delivery cost, such as additional services, taxes, and location-specific variables. It’s advisable to conduct further research and consult with delivery service providers when making a final decision.
| {"url":"https://certifiedcalculator.com/car-delivery-cost-calculator/","timestamp":"2024-11-02T18:52:56Z","content_type":"text/html","content_length":"56623","record_id":"<urn:uuid:a87e6cd9-fe05-44ae-aa04-c1aa56286203>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00638.warc.gz"} |
Amount of Water Discharge in the Zanja near Mill Creek, Mentone, California
I. Introduction
The Zanja roughly translates to “ditch” in Spanish and was built in the early 1800’s by Native Americans, under the guidance of Spanish missionaries, to bring water to an outpost of Mission San
Gabriel. [1]
My parents own a house along the Zanja in Mentone. Water flows at roughly bank full depth year round. The Zanja has been the subject of a number of lawsuits between various municipalities and the
home owners who live along its banks. These lawsuits have focused on the issue of water rights since the county and various cities want to divert the flow of the Zanja and use it for drinking water,
effectively cutting off the flow of the Zanja to the homeowners. A settlement was eventually reached in which both sides agreed not to use the water for drinking or irrigation, and the Zanja would
continue to be allowed to flow through the private properties of homeowners who lived along its banks.
Many of these lawsuits happened when I was fairly young, so I don’t remember many details about them, or the studies both sides presented for their cases. Regardless of this, I was curious to see how
much water flows through the Zanja. Was the amount of water that the cities wanted to get their hands on that significant? Thanks to reading the book Cadillac Desert and recently finishing a
geomorphology class, my curiosity got the best of me. So I set out to find just how much water is flowing through the Zanja.
II. Methods
In order to determine the amount of water flowing through a given spot in the Zanja at any one second, I needed to find 3 variables: Depth (D), Width (W) and Water Velocity. The width was easily determined by simply measuring across a specific spot, which we’ll call cross section ‘A’. Depth was determined by taking a series of 3 measurements across cross section ‘A’ and then averaging them.
Velocity was probably the most difficult aspect. I measured the distance between two points along the bank and then threw a tennis ball in the water, recording the amount of time (T) it took for the
ball to move between those two points (H). I repeated this process six times and then came up with the average time it took for the tennis ball to cover that distance.
Once I had the physical data, I did some calculations to come up with the cross-sectional area of the water at that point (W x D[avg]) as well as the Water Velocity (H/T). The cross-section calculation was in inches and I wanted feet. Since W x D[avg] gives units of square inches, I divided by 144 (the number of square inches in one square foot) to convert to square feet. Water Velocity was already measured in feet per second, so no conversion was necessary.
III. Results
D[avg] = Average depth
W = Width of stream
H = Distance between two points along river
V[avg] = Velocity of tennis ball averaged over 6 trials
A = Area of Cross Section ‘A’
Q[w] = Amount of water discharge
D[avg] = 6.3 inches
W = 89 inches
H = 7 feet
V[avg] = 0.97 ft/sec
A = (D[avg] x W) = 561 sq. inches ÷ 144 sq. inches per sq. foot = 3.9 sq. feet
Q[w] = A x V[avg] = 3.9 sq. feet x 0.97 feet per second = 3.8 cubic feet per second
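The arithmetic above can be reproduced directly, with the unit conversion made explicit (the six individual float timings are not given, so the reported average velocity is used as-is):

```python
d_avg_in = 6.3       # average depth, inches
w_in = 89.0          # width, inches
v_avg_ftps = 0.97    # average surface velocity, ft/s

# Cross-sectional area: in^2 -> ft^2 (144 in^2 per ft^2)
area_sqft = (d_avg_in * w_in) / 144.0

# Discharge Q = A x V
q_cfs = area_sqft * v_avg_ftps

print(round(area_sqft, 1), round(q_cfs, 1))  # 3.9 3.8
```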
IV. Discussion
My final result, after rounding to the correct amount of significant figures was 3.8 cubic feet per second. Comparing this to the discharge of many famous rivers, this amount is extraordinarily
miniscule. The Mississippi River has an average discharge of 470,000 cubic feet per second. [2] The Santa Ana River, which flows to the west of the Zanja, and where much of its water ultimately ends
up, has a mean annual discharge of 33.8 cubic feet per second. [3] For being one of the largest rivers in Southern California, this is a very small amount. Needless to say, we do live in a very arid region.
Does enough water flow through the Zanja to justify local municipalities trying to take it? To simplify things when dealing with quantities of water, many organizations speak in terms of acre-feet.
An acre-foot is the amount of water a family of four will need for one year. [4] According to Google, 1 acre-foot is equivalent to 43,560 cubic feet. Dividing this by 3.8 cubic feet per second, we
find that it takes 11,463 seconds (or just over 3 hours) to fill the amount of space required by one acre-foot of water.
According to the 2000 census, the nearby city of Redlands has a population of 63,591 people. To simplify calculations, I divided by 4 to come up with the number of “families” who will be needing
water, or the number of acre-feet that Redlands would need. Almost 16,000 acre-feet! Multiplying that by 3 hours per acre-foot, it would take nearly 5 and a half years to store enough water from the
Zanja to supply the residential needs of Redlands for one year. As you can see, that in itself isn’t too practical. Not accounting for evaporation or infiltration, by itself the Zanja would be able
to meet about 20% of the residential needs for the city of Redlands. This isn’t that much in the scheme of things and almost doesn’t justify the cost and effort that would be needed to bring the
water into Redlands or any other city. However, in Southern California, water is nearly more valuable than gold.
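The acre-foot arithmetic above can be checked the same way; the exact year count depends on how the intermediate figures are rounded, so it lands slightly above the "nearly 5 and a half years" quoted in the text:

```python
q_cfs = 3.8                 # Zanja discharge, ft^3/s
ACRE_FOOT_FT3 = 43_560.0    # cubic feet per acre-foot
population = 63_591         # Redlands, 2000 census

seconds_per_af = ACRE_FOOT_FT3 / q_cfs       # ~11,463 s per acre-foot
hours_per_af = seconds_per_af / 3600         # just over 3 hours
acre_feet_needed = population / 4            # one acre-foot per family of four

years = acre_feet_needed * seconds_per_af / (365 * 24 * 3600)
print(round(seconds_per_af), round(hours_per_af, 1), round(years, 1))
```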
V. Conclusions
My data should be taken with a grain of salt, as most of it is based on rough estimates and many assumptions. There are quite a few sources of error, such as the average velocity. In most cases, you would measure velocity just below the surface, where the water flows fastest, and you would take discharge measurements at multiple locations and average them to get an overall discharge for the river. My data represents the discharge at a single spot on the Zanja, and I would assume it is roughly average, based on my observations of the water level over the years.
However, I have no data to quantify that.
Regardless of these issues, the amount of water flowing through the Zanja at any given moment is quite small. Given the scarcity of water in Southern California, the cost and consequences of removing
the water from its “natural” channel to use for drinking water outweigh the cost of leaving the water in the channel for many to enjoy, as it runs through Redlands and many of its parks.
VI. References
[1] How Big Were Their Footprints? “Mission Era 1,” [online]: [Accessed 30th May, 2004].
[2] LA Coast. “Mississippi River Delta Basin,” [online]: [Accessed 30th May, 2004].
[3] 1999 California Hydrologic Data Report. “11051500 SANTA ANA RIVER NEAR MENTONE, CA,” [online] [Accessed 30th May, 2004].
[4] National Resources Defense Council. “Drawdown – Groundwater Mining on Black Mesa,” [online] [Accessed 30th May, 2004].
| {"url":"https://daveschumaker.net/amount-of-water-discharge-in-the-zanja-near-mill-creek-mentone-california/","timestamp":"2024-11-01T19:18:46Z","content_type":"text/html","content_length":"196574","record_id":"<urn:uuid:4316a3b4-e631-4062-8390-390e1fa777e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00665.warc.gz"} |
eJournal: uffmm.org, ISSN 2567-6458,
6.June 2022 – 13.June 2022, 10:30h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
In the uffmm review section the different papers and books are discussed from the point of view of the oksimo paradigm, which is embedded in the general view of a generalized ‘citizen science’ as a
‘computer aided sustainable applied empirical theory’ (CSAET). In the following text the author discusses the introduction of the book “Theory of Sets” from the series “Elements of Mathematics” by
N.Bourbaki (1968) [1b]
In the foundational post with the title “From SYSTEMS Engineering to THEORY Engineering” [3], the assumptions of the whole formalization approach in logic, mathematics and science are questioned as too narrow to allow a modern sustainable theory of science dealing explicitly with the future. To sharpen some of the arguments of that post, it seems helpful to discuss one of the cornerstones of modern (formalized) mathematics, substantiated in the book ‘Theory of Sets’ by the Bourbaki group. [1a] It should be mentioned that the insufficiency of formalization has been discussed in several earlier posts of the uffmm blog. (cf. e.g. [2])
In the introduction to the ‘Set Theory’ book the Bourbaki group reveals a little of their meta-mathematical point of view, which ultimately belongs to the perspective of philosophy. On the one hand they try to be ‘radically formal’, but in doing so they notice themselves that this is — for several reasons — only a ‘regulative idea’, important for our thinking but not completely feasible. This ‘practical impossibility’ is not necessarily a problem as long as one is conscious of it. The Bourbaki group is conscious of this problem, but in contrast to their rigor in the specialized formalization of mathematical ideas, they leave it largely ‘undefined’ what follows from the practical impossibility of being ‘completely rigorous’. In the following text the Bourbaki position will be described along both dimensions: the idea of ‘formalization’ and the reality of the ‘non-formalized’, which provides the ‘ground’ for everything, even for formalization. In doing so it will — hopefully — become clear that the idea of formalization was a great achievement of philosophical and scientific thinking, but that it did not really solve our problems of understanding the world. The most important aspects of knowledge lie ‘outside’ of this formalization approach, and many ‘problems’ which seem to bother our present thinking are perhaps only ‘artifacts’ of this simplified formalization approach (somewhat similar to the problems induced by the metaphysical thinking of the older philosophy). To say it flatly: introducing new names for old problems does not necessarily solve them. It enables new ways of speaking and perhaps some new kinds of knowledge, but it does not really solve the big problems of knowledge. And the biggest problem of knowledge is — perhaps — the primary ‘knowledge machine’ itself: the biological actors whose brains transform ‘reality’ into ‘virtual models’, and whose communication tools ‘communicate’ these virtual models to enable ‘collective intelligence’ as well as ‘collective cooperation’. As long as we do not understand this, we do not really understand the ‘process of knowing’.
before formalization
With the advent of the Homo sapiens population on the planet earth about 300,000 years ago [4], it became possible for biological systems to transform their perceptions of the reality around them into ‘internal’, ‘virtual’ models in their brains, which provided ‘reference points’ for ‘acting’ and for a ‘cooperation’ synchronized by ‘symbolic communication’. Those properties of the internal virtual models which have no clear ‘correspondence’ to the ‘reality between the brains’ are difficult to communicate.
Everyday symbolic communication refers to parts of reality by certain types of expressions, which are ‘combined’ in ways that encode different types of ‘relations’ or even ‘changes’. Expressions which ‘refer’ to ‘concrete’ properties can be ‘overloaded’ by expressions which refer to other expressions, which in turn refer either to further expressions or to ‘concrete meanings’. Those objects which are the targets of a referring relation — concrete objects or other expressions — are here called ‘the meaning’ of the expressions. Thus the ‘meaning space’ is populated either by expressions related to ‘concrete’ properties or by expressions ‘pointing forward’ to other expressions; these ‘pointing-forward’ expressions are here called ‘abstract meanings’. While concrete meanings are usually ‘decidable’ in everyday situations as being ‘given’ (‘true’) or ‘not given’ (‘false’), abstract meanings are as expressions ‘undefined’: they can lead to some concrete property which in turn can perhaps be decided, or not.
The availability of ‘abstract expressions’ in ordinary language can be seen as a ‘problem’ or as a ‘blessing’. Being able to generate and use abstract terms manifests a great flexibility in talking — and thinking! — about possible realities, one which allows us to overcome the dictatorship of the ‘now’ and of the ‘individual single case’. Without abstraction, thinking would indeed be impossible. If one understands that ‘thinking’ is a real process with sequences of different states which eventually reveal more abstract classes, structures and changes, then abstraction is the ‘opener’ for more reality, the ‘enabler’ of a broader and more flexible knowledge. Only by ‘transcending’ the eternal ‘Now’ do we get access to phenomena like time, change and all kinds of dynamics, and only thereby do ‘pictures of some possible future’ become feasible!
Clearly, the potential of abstraction can also be a source of ‘non-real’ ideas, of ‘fantastic’ pictures, of ‘fake news’ and the like.
But these possible ‘failures’ — if they are ‘recognized’ as failures! — are inevitable if one wants to dig out some ‘truth’ in the nearly infinite space of the unknown. Before the ‘knowledge that something is true’ one has to master a ‘path of trial and error’, consuming ‘time’ and ‘resources’.
This process of creating new abstract ideas to guide the search in the space of the unknown is the only key to finding, besides ‘errors’, sometimes some ‘truth’.
Thus the ‘problem’ with abstract ideas is an unavoidable condition for finding necessary ‘truths’. Stepping back in the face of possible problems is no option for surviving in the unknown future.
the formal view of the world according to bourbaki
Figure 1: Graphical interpretation of N.Bourbaki, Set Theory (1968), Introduction, ‘liberal version’
Language, object language, meta language
Talking about mathematical objects and their properties within an ordinary language is not simple, because the expressions of an ordinary language are usually part of a network of meanings which can overlap, which can be fuzzy, and which leave room for many interpretations. Additionally, that which is called a ‘mathematical object’ is not a kind of object given in everyday world experience. What can be done in such a situation?
Bourbaki proposes to introduce a ‘specialized language’ constructed from a finite set of elements constituting the ‘alphabet’ of a new language, together with ‘syntactical rules’ which describe how to construct, from the elements of the alphabet, chains of elements called ‘(well-formed) expressions’; these constitute the ‘language’ L[O] which shall be used to talk about mathematical objects. But because mathematics is not restricted to ‘static objects’ and also deals with ‘transformations’ (‘changes’) of objects, one needs ‘successions of objects’ (‘sequences’) which are related by ‘operations on mathematical objects’. These operations are likewise represented by ‘expressions’, but expressions of a ‘higher order’, whose referenced subjects are the expressions representing objects. Thus Bourbaki needs, right from the beginning, two languages: an ‘object language’ (expressions of a language L[O] representing mathematical objects) and a ‘meta language’ LL (expressions referring to expressions of the object language L[O], including certain ‘types of changes’ occurring with the object-language expressions). A mathematical language L[m] then consists in the combination of an object language L[O] with a meta language LL (L[m] = (L[O], LL)).
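To make the distinction concrete, here is a deliberately tiny toy example (my own illustration, not Bourbaki's actual formal system): an object language L_O of terms built from a small alphabet by one syntactic rule, and a meta-level operation that talks about, and rewrites, object-language expressions:

```python
# Object language L_O: '0' is a term; if t is a term, 'S(t)' is a term.
def well_formed(expr: str) -> bool:
    """Syntactic rule deciding membership in the object language L_O."""
    if expr == "0":
        return True
    if expr.startswith("S(") and expr.endswith(")"):
        return well_formed(expr[2:-1])
    return False

# Meta language LL: operations ABOUT expressions of L_O.
def apply_successor(expr: str) -> str:
    """A meta-level operation mapping one object-language term to another."""
    if not well_formed(expr):
        raise ValueError("not an expression of L_O")
    return f"S({expr})"

print(well_formed("S(S(0))"), apply_successor("0"))  # True S(0)
```

The point of the toy is only structural: `well_formed` and `apply_successor` are not themselves expressions of L_O, they are statements in a language about L_O, which is exactly the object-language/meta-language split described above.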
And, as becomes clear from this procedure, to introduce such a mathematical language L[m] one needs yet another language to talk about the mathematical language L[m]. This is either the everyday (normal) language L, which is assumed to be a language that everybody can ‘understand’ and ‘apply correctly’, or a third specialized language LLL which can talk with special expressions about the mathematical language L[m]. Independent of which solution one prefers, the ordinary language L will finally become the meta language for all other thinkable meta languages.
Translating(?) math objects into formal expressions
If the formalized expressions of the mathematical language L[m] = (L[O], LL) were the mathematical objects themselves, then mathematics would consist only of those expressions. And because no other criteria would be available, whatever expressions one introduced, every expression would claim to be a relevant mathematical expression. This situation would be a construct of ‘maximum nonsense’: nothing could be ‘false’.
Thus the introduction of formal expressions of some language alone is not enough to establish a language deserving the name ‘mathematical’ language L[m], different from other languages which talk about other kinds of objects. But what could it be that relates to ‘specific math objects’ which are not yet the expressions used to ‘refer’ to these specific math objects?
Everybody knows that the main reason to ‘speak’ (or ‘write’) about specific math objects are humans claiming to be ‘mathematicians’, who claim to have some ‘knowledge’ about specific objects called ‘math objects’, which is the ‘content’ they ‘translate’ into the expressions of a certain language called a ‘mathematical language’.[5] Thus, if the ‘math objects’ are not the expressions themselves, then these ‘math objects’ have to be located ‘inside the talking humans’. According to modern science one would specify this ‘inside’ as the ‘brain’, which is connected in complex ways to a body, which in turn is connected to the ‘outside world of the body’. Until today it is not possible to ‘observe’ directly the math objects assumed to be in the brain of someone claiming to be a mathematician. Thus one mathematician A cannot decide what another mathematician B has ‘available in his brain’ at a given point of time.
Bourbaki uses some formulations in the introduction which give a ‘flavor’ of this ‘impossibility of translation into a formalized mathematical language’. Thus at one point in the text Bourbaki appeals to the “common sense” of the mathematicians [6] or to the “reader’s intuition”. [7] Other phrases refer to the “modes of reasoning” which cannot be formalized [8], or simply to the “experience” on which “the opinion rests”. [9] Expressions like ‘common sense’, ‘intuition’, ‘modes of reasoning’ and ‘experience’ are difficult to interpret. All of them describe something ‘inside’ the brains which cannot be observed directly. How, then, can mathematician A know what mathematician B ‘means’ when he utters a statement or writes it down? Does it make a difference whether a mathematician is a man or a woman, or belongs to some other kind of ‘gender’? Does the ‘age’ of the mathematician make a difference? How ‘tall’ he is? What ‘weight’ he has?
Thus, from a philosophical point of view, the question of the specific criteria which classify a language as a ‘mathematical language’ and not some other language leads us into a completely unsatisfying situation: there are no ‘hard facts’ which can give us a hint what ‘mathematical objects’ could be. What did we ‘overlook’ here? What is the key to those specific mathematical objects which have inspired the brains of many, many thousands of people through the centuries? Is mathematics a ‘big fake’, or is there more to it?
A mathematician as an ‘actor’?
Figure 2: Graphical interpretation of N.Bourbaki, Set Theory (1968), Introduction, ‘Actor view’
This last question, “Is mathematics a ‘big fake’ or is there more than this?”, can lead to the assumption that it is not enough to talk about ‘mathematics’ without including the mathematician himself. Only the mathematician is that ‘mysterious source’ of knowledge which seems to trigger the production of ‘mathematical expressions’ in speaking or writing (or drawing). Thus a meta-mathematical — and thereby philosophical — ‘description’ of mathematics should have at least the ‘components’ (MA, L[O], LL, L), with ‘MA’ as abbreviation for the set of actors, where each element of the set MA is a mathematician, and — this is decisive! — it is this math actor MA which is in possession of those ‘criteria’ which decide whether an expression E belongs to the ‘mathematical language L[m]’ or not.
The phrase of the ‘mathematician’ as a ‘mysterious source of knowledge’ is justified by an ‘empirical observational point of view’: nobody can directly look into the brain of a mathematician. Thus
the question of what an expression called a ‘mathematical expression’ can ‘mean’ is, in such an empirical view, not decidable and appears to be a ‘mystery’.
But in the reality of everyday life we can observe that every human actor — not only mathematicians — seems to be able to use expressions of the everyday language which refer to ‘internal states’
of the brain in a way which seems to ‘work’. If we are talking about ‘pain in my teeth’, about ‘being hungry or thirsty’, or about ‘having an idea’, then other human actors usually seem to
‘understand’ what one is uttering. The ‘evidence’ of a ‘working understanding’ grows through the ‘confirmation of expectations’: if one is hungry, then one has a certain kind of ‘feeling’, and
usually this feeling leads — depending on the cultural patterns one lives in — to a certain kind of ‘behavior’, which usually has the ‘felt effect’ of being ‘not hungry’ again. This
functional relation of ‘feeling hungry’, ‘behavior of consuming something’, and ‘feeling not hungry again’ is an association between ‘bodily functions’ common to all human actors, and
additionally it involves a certain kind of observable behavior which is common to all human actors too. And it seems to work: human actors are able to associate ‘common internal states’ with ‘common
observable behavior’, and to associate this observable behavior and the ‘presupposed internal states’ with certain kinds of ‘expressions’. Thus although the internal states are not directly observable,
they can be ‘communicated by expressions’ because these internal states are — usually — ‘common to the internal experience of every human actor’.
From this follows the ‘assumption’ that we should extend the necessary elements for ‘inter-actor communication’ with the factor of ‘common human actor HA experience’, abbreviated as ‘HA[X]’: (HA, HA[X],
MA, L[O], LL, L). This experience is usually accompanied by certain kinds of observable behavior ‘B[X]’, which can be used as a point of reference for certain expressions ‘L[X]’, which ‘point to’ the associated
internal states HA[X], which are not directly observable. This yields the structure (HA, HA[X], MA, B[X], L[O], LL, L[X], L).
Having reached this state of assumptions, another assumption arises regarding the relationship between ‘expressions of a language’ — like (L[O], LL, L[X], L) — and those properties which
constitute the ‘meaning’ of these expressions. In this context ‘meaning’ is not a kind of ‘object’ but a ‘relation’ between two different things: the expressions on one side and the properties
‘referred to’ on the other side. Moreover this ‘meaning relation’ seems not to be a ‘static’ relation but a ‘dynamic’ one, associating two different kinds of properties with one another. This is reminiscent
of what mathematicians call a ‘mapping, a function’, and what engineers call a ‘process, an operation’. If we abbreviate this ‘dynamic meaning relation’ with the sign ‘μ’, then we could agree on the
convention ‘μ[X] : L[X] <—> (B[X], HA[X])’, saying that there exists a meaning function μ[X] which maps the special expressions of L[X] to the special internal experience HA[X], which in turn is
associated with the special behavior B[X]. Thus, we extend our hypothetical structure to the format (HA, HA[X], MA, B[X], L[O], LL, L[X], L, μ[X]).
With these assumptions we get a first idea of how human actors in general can communicate about internal, not directly observable states with other human actors by using external language
expressions. We have to notice that the assumed dynamic meaning relation μ[X] itself has to be located ‘inside’ the body, ‘inside’ the brain. This triggers the further assumption of ‘internal
counterparts’ of the external observable behavior as well as of the external expressions. From this follows the further assumption that there must exist some ‘translation/transformation’ ‘τ’ which ‘maps’
the internal ‘counterparts’ of the observable behavior and the observable expressions into external behavior (cf. figure 2). Thus, we reach the further extended format: (HA, HA[X], MA, B[X],
L[O], LL, L[X], L, μ[X], τ).
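The tuple structure and the two functions introduced above can be made concrete in a small sketch. The following Python snippet is purely illustrative — all names and example values are hypothetical, not taken from the text — and models μ[X] as a mapping from expressions to presupposed internal states, and τ as a mapping from internal states to observable behavior:

```python
# Illustrative sketch (hypothetical names and values) of the proposed structure:
# a meaning function mu_X mapping expressions L[X] to presupposed internal
# states HA[X], plus a translation tau from internal states to observable
# behavior B[X].

# Meaning function mu_X : L[X] -> HA[X]
mu_X = {
    "I am hungry": "hunger",
    "I am thirsty": "thirst",
    "I have an idea": "insight",
}

# Translation tau : internal state -> external observable behavior B[X]
tau = {
    "hunger": "eats something",
    "thirst": "drinks something",
    "insight": "writes it down",
}

def interpret(expression):
    """Map an uttered expression to the presupposed internal state and the
    observable behavior through which that state is externally confirmed."""
    state = mu_X[expression]
    return state, tau[state]

print(interpret("I am hungry"))  # ('hunger', 'eats something')
```

The point of the sketch is only that an unobservable internal state becomes communicable because both the expression and the accompanying behavior are shared conventions.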
Mathematical objects
Accepting the basic assumptions about an internal meaning function μ[X] as well as an internal translation function τ narrows the space of possible answers about the nature of ‘math objects’ a little
bit, but as such this is possibly not yet a satisfying answer. Or do we nevertheless have to assume that ‘math objects’ and related ‘modes of reasoning’ are also rooted in internal properties and
dynamics of the brain which are ‘common to all humans’?
If one sees that every aspect of the human world view is encoded in some internal states of the brain, and that what we call ‘world’ is only given as a constructed virtual structure in the brains of
bodies including all the different kinds of ‘memories’, then there is no real alternative to the assumption that ‘math objects’ and related ‘modes of reasoning’ have to be located in these — yet not
completely decoded — inner structures and dynamics of the brain.
From everyday experience — additionally enlightened by different scientific disciplines, e.g. experimental (neuro)psychology — we know that the brain, completely automatically, produces all
kinds of ‘abstractions’ from concrete ‘perceptions’, can produce any kind of ‘abstraction of abstractions’, can ‘associate’ abstractions with other abstractions, and can arrange many different kinds of
memories into ‘arrangements’ representing ‘states’/‘situations’, ‘transformations of states’, ‘sequences of states’, and the like. Thus everything which ‘mathematical reasoning’ HA[m] needs seems to
be available as a concrete brain state or brain activity, and this is not ‘special’ to an individual person alone: it is the common structure of all brains.
Therefore one has to assume that the set of mathematicians MA is a ‘subset’ of the set of human actors HA in general. From this one can further assume that ‘mathematical reasoning’ HA[m] is a
subset of the general human everyday experience HA[X]. And, saying this, the meaning function μ[X] as well as the translation function τ should be applicable to mathematical reasoning and to
mathematical objects as well: (HA, MA, HA[X], HA[m], B[X], L[O], LL, L[X], L, μ[X], τ).
These assumptions would explain why it is not only possible but ‘inevitable’ to use the everyday language L to introduce and to use a mathematical language L[m] with different kinds of sub-languages
(L[O], LL, LLL, …). Thus, in analogy to the ‘feeling’ of ‘being hungry’ with a culturally encoded kind of accompanying behavior B[X], we have to assume that the different kinds of internal states and
transformations in the case of mathematical reasoning can be associated with an observable kind of ‘behavior’ by using ‘expressions’ embedded (encoded) in certain kinds of ‘behavior’ accepted as
‘typically mathematical’. Introducing expressions like ‘0’, ‘1’, ‘2’, …, ’10’, … (belonging to a language L[o]) for certain kinds of ‘objects’, and expressions like ‘+’, ‘-’, … for certain kinds of
operations on these previously introduced objects (belonging to a language LL), one can construct combined expressions like ‘1+2=3’ (belonging to a mathematical language L[m]). Introducing ‘more
abstract objects’ like ‘sets’, ‘relations’, ‘functions’ etc., which have no direct counterpart in the everyday world, does not break these assumptions. The everyday language L already operates only with
abstract objects like ‘cup’, ‘dog’, ‘house’ etc. The expression ‘cup’ is an abstract concept, which can easily be associated with any kind of concrete phenomena provided by perceptions introducing
‘sets of different properties’, which allow the construction of ‘subsets of properties’ constituting a kind of ‘signature’ for a certain abstract ‘class’ which only exists in the dynamics of the
brain. Thus a set C named ‘cup’ introduces ‘possible elements’, whose ‘interpretation’ can be realized by associating different kinds of sets of properties provided by ‘sensory perception’.
But ‘memory’ as well as ‘thinking’ can also provide other kinds of properties which can likewise be used to construct other ‘classes’.
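The construction described above — object expressions from L[o], operator expressions from LL, combined into checkable expressions of L[m] — can be sketched in a few lines. This is a hypothetical illustration, not part of the original text; the function and variable names are invented:

```python
# Hypothetical sketch of building a tiny 'mathematical language' L[m]
# from object expressions (L[o]) and operator expressions (LL).

L_o = {str(n): n for n in range(11)}      # object expressions '0' .. '10'
LL = {"+": lambda a, b: a + b,            # operator expressions
      "-": lambda a, b: a - b}

def holds(expr):
    """Check a combined L[m] expression of the form 'a<op>b=c'."""
    left, right = expr.split("=")
    for op, fn in LL.items():
        if op in left[1:]:                # locate the operator (skip a leading sign)
            a, b = left.split(op, 1)
            return fn(L_o[a], L_o[b]) == L_o[right]
    raise ValueError("no known operator in " + expr)

print(holds("1+2=3"))   # True
print(holds("5-2=4"))   # False
```

The sketch only shows that once the sub-languages are fixed by convention, the ‘correctness’ of a combined expression is mechanically checkable.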
In this outlined perspective of brain dynamics, mathematics appears to be a science which uses these brain dynamics in a most direct way, without recourse to the huge amount of everyday
experience. Thus mathematical languages (today summarized in the language which enables so-called ‘set theory’) and mathematical thinking in general seem to reflect the basic machinery of
brain processing directly, across all different cultures. Engineers worldwide, speaking hundreds of different ‘everyday languages’, can work together by using mathematics as their ‘common language’,
because this is their ‘common thinking’ underlying all those different everyday languages.
Being a human means being able to think mathematically … besides many other things which characterize a human actor.
Basically, every ordinary language offers all the elements which are necessary for mathematics (mathematics is the kernel of every language). But history illustrates that it can be helpful to ‘extend’ an
ordinary language with additional expressions (L[o], LL, LLL, …). However, the development of modern mathematics and especially of computer science shows increasing difficulties arising from ‘cutting away’
everyday experience when elaborating structures, models, and theories in a purely formal manner, and then using these formal structures afterwards to interpret the everyday world. This separates the formal
productions from the main body of users, of citizens, leading into a situation where only a few specialists have some ‘understanding’ (usually only partial, because the formalization goes beyond the
individual capacity of understanding), while all others no longer have any understanding at all. This has the flavor of a ‘cultural self-destruction’.
In the mentioned oksimo software, as part of a new paradigm of a more advanced citizen science, this problem of ‘cultural self-destruction’ is avoided, because in the format of a ‘computer-aided
sustainable applied empirical theory’ (CSAET) the basic language for investigating possible futures is the ordinary language, which can be extended as needed with quantifying properties. The computer is
no longer needed for ‘understanding’ but only for ‘supporting’ the ‘usage’ of everyday language expressions. This enables the best of both worlds: human thinking as well as machine operations.
Def: wkp := Wikipedia; en := English
[1a] Bourbaki Group, see: https://en.wikipedia.org/wiki/Nicolas_Bourbaki
[1b] Theory of Sets with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique
[2] Gerd Doeben-Henisch, “Extended Concept for Meaning Based Inferences, Version 1”, August 2020, see: https://www.uffmm.org/wp-content/uploads/2020/08/TruthTheoryExtended-v1.pdf
[3] Gerd Doeben-Henisch, “From SYSTEMS Engineering to THEORY Engineering”, June 2022, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/
[4] Humans, How many years ago?, see: wkp (en): https://en.wikipedia.org/wiki/Human#:~:text=Homo%20sapiens,-Linnaeus%2C%201758&text=
[5] The difference between ‘talking’ about math objects and ‘writing’ is usually not thematised in the philosophy of mathematics. Because the ordinary language is the most general ‘meta-language’ for
all kinds of specialized languages, and speaking is the primary mode of the ordinary language, ‘speaking’ is perhaps not really a ‘trivial’ aspect in understanding the ‘meaning’ of any kind of language.
[6] “But formalized mathematics cannot in practice be written down in full, and therefore we must have confidence in what might be called the common sense of the mathematician …”. (p.11)
[7] “Sometimes we shall use ordinary language more loosely … and by indications which cannot be translated into formalized language and which are designed to help the reader to reconstruct the whole
text. Other passages, equally untranslatable into formalized language, are introduced in order to clarify the ideas involved, if necessary by appealing to the reader’s intuition …”(p.11)
[8] “It may happen at some future date that mathematicians will agree to use modes of reasoning which cannot be formalized in the language described here : it would then be necessary, if not to
change the language completely, at least to enlarge its rules of syntax.”(p.9)
[9] “To sum up, we believe that mathematics is destined to survive … but we cannot pretend that this opinion rests on anything more than experience.”(p.13)
The Complex Monge-Ampère Equation and Pluripotential Theorysearch
The Complex Monge-Ampère Equation and Pluripotential Theory
eBook ISBN: 978-1-4704-0441-3
Product Code: MEMO/178/840.E
List Price: $57.00
MAA Member Price: $51.30
AMS Member Price: $34.20
• Memoirs of the American Mathematical Society
Volume: 178; 2005; 64 pp
MSC: Primary 32; Secondary 53
We collect here results on the existence and stability of weak solutions of the complex Monge-Ampère equation, proved by applying pluripotential theory methods and obtained in the past three decades.
First we set the stage by introducing basic concepts and theorems of pluripotential theory. Then the Dirichlet problem for the complex Monge-Ampère equation is studied. The main goal is to give
as detailed a description as possible of the nonnegative Borel measures which, on the right-hand side of the equation, give rise to plurisubharmonic solutions satisfying additional requirements such as
continuity, boundedness, or some weaker ones. In the last part the methods of pluripotential theory are implemented to prove the existence and stability of weak solutions of the complex
Monge-Ampère equation on compact Kähler manifolds. This is a generalization of the Calabi-Yau theorem.
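For orientation, the Dirichlet problem referred to in the abstract takes the following standard pluripotential-theoretic form (this formulation, in the usual Bedford–Taylor notation, is supplied here as background and is not quoted from the book):

```latex
% Given a bounded domain $\Omega \subset \mathbb{C}^n$, a nonnegative Borel
% measure $\mu$, and boundary data $\varphi$, find
\begin{cases}
u \in PSH(\Omega) \cap L^{\infty}(\Omega), \\
(dd^c u)^n = d\mu & \text{in } \Omega, \\
u = \varphi & \text{on } \partial\Omega,
\end{cases}
```

where $(dd^c u)^n$ denotes the complex Monge-Ampère operator acting on the plurisubharmonic function $u$.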
Graduate students and research mathematicians interested in differential equations.
□ Chapters
□ 1. Positive currents and plurisubharmonic functions
□ 2. Siciak’s extremal function and a related capacity
□ 3. The Dirichlet problem for the Monge-Ampère equation with continuous data
□ 4. The Dirichlet problem continued
□ 5. The Monge-Ampère equation for unbounded functions
□ 6. The complex Monge-Ampère equation on a compact Kähler manifold
consensus algorithm
Dantzig-Wolfe decomposition (DWD) is a classical algorithm for solving large-scale linear programs whose constraint matrix involves a set of independent blocks coupled with a set of linking rows. The
algorithm decomposes such a model into a master problem and a set of independent subproblems that can be solved in a distributed manner. In a typical …
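The block-angular structure that DWD exploits can be illustrated with a toy example. The sketch below is hypothetical (the data and function names are invented for illustration) and only shows the structural split — linking rows that couple the blocks versus per-block rows that each subproblem can handle independently — not the column-generation algorithm itself:

```python
# Hypothetical block-angular LP data: variables x0..x3, where each block owns
# a disjoint set of variables and its own constraint rows, and a linking row
# couples all blocks together. Rows are (coefficients, rhs) for a <= b.
blocks = {
    "block_A": {"vars": [0, 1], "rows": [([1, 2, 0, 0], 4)]},  # x0 + 2*x1 <= 4
    "block_B": {"vars": [2, 3], "rows": [([0, 0, 3, 1], 6)]},  # 3*x2 + x3 <= 6
}
linking_rows = [([1, 1, 1, 1], 5)]  # couples all blocks: x0+x1+x2+x3 <= 5

def subproblem_feasible(block, x):
    """Check a candidate point against one block's own rows only — the part a
    Dantzig-Wolfe subproblem treats independently of the other blocks."""
    return all(sum(a[i] * x[i] for i in block["vars"]) <= b
               for a, b in block["rows"])

x = [1, 1, 1, 1]
print(all(subproblem_feasible(b, x) for b in blocks.values()))  # True
print(all(sum(a[i] * x[i] for i in range(4)) <= b
          for a, b in linking_rows))                            # True
```

In the full algorithm, the master problem prices out the linking rows (via dual values) and each subproblem proposes new columns; the sketch only makes the row partition concrete.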
Adiabatic process in context of pressure volume work
31 Aug 2024
Journal of Thermodynamics and Kinetics
Volume 12, Issue 3, 2023
Adiabatic Processes in the Context of Pressure-Volume Work
In this article, we discuss the concept of adiabatic processes in relation to pressure-volume work. We provide a theoretical framework for understanding the thermodynamic implications of such
processes and explore their relevance in various engineering applications.
An adiabatic process is a thermodynamic process that occurs without any heat transfer between the system and its surroundings. In the context of pressure-volume work, an adiabatic process involves a
change in volume with no heat exchange, so the internal energy changes only through the work done. This type of process is commonly encountered in compressors, turbines, and other mechanical devices.
Theoretical Framework
Consider a system undergoing an adiabatic process, where the pressure (P) and volume (V) are related by the equation:
P = f(V)
where f(V) represents a function of volume. The work done on or by the system during this process is given by:
W = ∫P dV
Since the process is adiabatic, there is no heat transfer between the system and its surroundings:

dQ = 0

Using the first law of thermodynamics, we can write:

dQ - dW = dU

Substituting dQ = 0, we get:

dU = -dW

That is, the work done by the system comes entirely at the expense of its internal energy:

dW = -dU
The adiabatic process has several implications for pressure-volume work. Firstly, since no heat is exchanged, the work done by the system is drawn entirely from its internal energy, which therefore
changes along with the pressure and volume. Secondly, the process is reversible only when it is carried out quasi-statically and without friction; such a reversible adiabatic process is isentropic,
and the system can then be returned to its initial state without any residual effects.
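For a reversible adiabatic process of an ideal gas, P·V^γ is constant and the work done by the gas between states 1 and 2 is W = (P₁V₁ − P₂V₂)/(γ − 1). The following sketch, with hypothetical numbers (SI units: Pa, m³, J), illustrates the relation; the state values are invented for the example:

```python
# Worked sketch (hypothetical values) of pressure-volume work in a reversible
# adiabatic process for an ideal gas, where P * V**gamma is constant.

gamma = 1.4                 # heat capacity ratio for a diatomic gas (e.g. air)
P1, V1 = 100_000.0, 0.010   # initial state: 100 kPa, 10 L
V2 = 0.005                  # adiabatic compression to 5 L

P2 = P1 * (V1 / V2) ** gamma            # from P * V**gamma = const
W = (P1 * V1 - P2 * V2) / (gamma - 1)   # work done BY the gas, in joules

print(round(P2))    # pressure rises sharply on compression
print(round(W, 1))  # W is negative: the surroundings do work on the gas
```

Consistent with the first law above, the gas's internal energy change ΔU = (P₂V₂ − P₁V₁)/(γ − 1) equals −W: the compression work appears entirely as increased internal energy (the gas heats up).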
In conclusion, adiabatic processes play a crucial role in understanding pressure-volume work. The theoretical framework provided above highlights the thermodynamic implications of such processes and
their relevance in various engineering applications. Further research into this area is necessary to fully explore the potential benefits and limitations of adiabatic processes in the context of
pressure-volume work.
[1] Callen, H. B. (1960). Thermodynamics: An Introduction to the Physical Theories of Equilibrium Thermostatics and Their Applications. John Wiley & Sons.
[2] Zemansky, M. W. (1957). Heat and Thermodynamics. McGraw-Hill Book Company.
Note: The references provided are classic texts in thermodynamics and are not specific to the topic of adiabatic processes in pressure-volume work.
Them w/ Briahna Joy Gray and Trevor Beaulieu - Unlocked! | Struggle Session
(upbeat music)
- Hi, I'm Leslie Lee III of Struggle Session.
- I am Briahna Joy Gray of Bad Faith Podcast.
- And Trevor Beaulieu of Champagne Sharks. Today
we're talking about the new Amazon horror series,
capital-H Horror series, Them. Them.
Now, if you haven't watched it,
we'll probably be getting into spoilers.
I would suggest if you're thinking about watching it,
if you do not own a Rob Zombie DVD,
or maybe perhaps The Hills Have Eyes,
this is probably not the series for you.
If you think, if you're coming in thinking,
you're gonna get a PG-13, Get Out-esque
racial thriller, that's not what the show is.
I think that's why it's been so divisive.
I think I'm the only one on the show
who actually likes the show, but T.B.
What did y'all think?
The female protagonist's hair, makeup, and skin generally
looked fabulous throughout.
I will give them that: stunning costuming,
the sets, the color treatment — visually,
very, very compelling stuff.
- Personally, I think it's one of the best
looking T.V. shows I've ever seen.
A lot of Hitchcock, a lot of Hitchcock, as well.
I mean, but there is all sorts of horror reference.
Like I said, you have Wes Craven in there.
You have The Texas Chain Saw Massacre.
You have European horror films like The Devils
and all the folk horror and all the sorts of things.
And it all looks very, very beautiful.
I will say this, it doesn't even look like,
it looks better than a lot of movies
that you're asked to see in the movie theater.
Certainly looks a lot better than Coming 2 America;
a lot more filmmaking went into this show than that.
- I think, not to demean it,
but just physically, the actress was really beautiful
and striking and I feel like she deserves better
than this role.
I would like to see her as an Angela Bassettish
kind of role where she gets to be a little more regal.
This was, I think, I don't know.
Like, I like things like I Spit on Your Grave and stuff like that,
where people get degraded and there's a lot of horror and stuff,
but there's a real, like, comeuppance. It's like,
in I Spit on Your Grave, if I remember correctly,
she chops off one of the rapists' penises,
and she is straight up, like, waxing them left and right.
And this one, I just felt like the revenge just did,
I mean, to have like, what was it, eight, 10 episodes
of this level of torture porn?
And then the revenge just felt like,
we're gonna stand up with a head held high at the end of this
and we're gonna, there's a little bit of a beating,
but it just didn't feel like the payoff was worth
sitting through all of that.
I think it's where it falls short from those other movies for me.
- Well, I actually thought the ending,
if I would knock it for anything,
I thought the ending was maybe a little bit falsely too upbeat,
because for a horror of this level,
everyone should end up dead at the end.
- I wouldn't mind if they killed everybody
and then went out in a blaze of glory too.
Like, I agree with you.
I could have dealt with the ending being less upbeat,
but I thought the upbeatness was more about keeping your dignity.
You know what I mean?
Which, look at it.
- I don't know, I mean, 'cause they had already killed two people.
Like both of the protagonists murdered people
before they ended the show,
the female protagonist, Lucky, beats —
wait, is the nurse racist too?
Because not every evil white person in this show is racist.
That's one thing I actually liked about the show,
there's different levels of racists.
Some racists want to do violence; some are
like, nah, nah, we just want to scare them off legally.
I don't want to get in trouble.
I have a jail and shit like that.
But yeah, they do, but the male protagonist
just guns down the cop, not even a racist cop.
- But I think it's part of my problem,
I'm sorry, it's the last thing I'm sorry.
The people they kill aren't really the ones
doing the most grief to them.
That's the problem.
- In I Spit on Your Grave and stuff, she kills the rapists,
and then in Get Out, like, he kills the people
specifically torturing him, but I felt like it was like
the lesser transgressors against them
that get killed by them, you know?
- Yeah, I actually kind of liked that
because it made it seem like more of control,
like the horror, there was no real clean way out.
Like the horror, they were constantly under attack
from mundane racism, from a ghost as well as this trauma
that they underwent in the past.
And I really liked the fact that it spilled out
in ways where they ended up turning violent
and doing horrible things because that's what
the horror movie is supposed to be.
In fact, I talked about on Lovecraft Country,
my disappointment was that all the heroes
stay pretty noble throughout.
But this is supposed to be a horror movie;
if you're the protagonist of a horror movie,
you're supposed to be fucked up by the end of it.
You're supposed to be turning to do something,
you're supposed to be especially with a ghost trying
to specifically drive you to killing your family,
like something fucked up should happen.
And I was glad that the fucked up things happened;
I understand the ending wasn't as pat
because I think the problem was,
it exists in several different genres at the same time
and the ending genre is Haunted House,
which I think always has the least satisfying endings
of any horror story.
Basically, they kind of stand up and tell the ghost no,
and then that's it, and that's what happens in this series.
But I felt like if maybe it was a little bit more focused
on the genre, if it focused on the suspense aspects
or the noir aspects,
stuff that seems like Walter Mosley,
that could have been one shorter series.
Episode five was really controversial,
but I thought that was, like, for, like, a Hills
Have Eyes sort of thing, or like Rob Zombie's The Devil's Rejects.
That was about the best kind of,
that was absolutely extreme, high quality, extreme horror,
but they didn't really go in that direction.
They also had, like, the ghost and stuff,
which was a completely separate thing.
I actually thought the episode nine,
where they tell the story of this racist preacher ghost
haunting the suburbs of California.
I thought that was really like a really good capsule episode
and you could just watch that episode
without watching the rest of the series and enjoy it.
- Would push back against the cop not being really racist.
He's only, like, playing at being kind of on their side
because he's in league with-- - Yeah, yeah, he's being played.
- Lady, yeah, and ultimately when the woman
who rented them the house
starts to kind of have some reservations
and show some ethics,
he's the one that threatens her to not out the scheme
to the black people because he's in it for the money
and wants to keep pushing all these black families
into these predatory housing arrangements.
So you could argue that he's the real,
the real one of the worst in terms of the systemic issues
that are going on here.
- No, what I meant more was that like,
not that he's not a villain, but that his villainy
is not just, I hate black people, right?
It's more complicated than that.
And all of them, all of the races have a little bit more
going on than that.
I was actually surprised, and I wish the show
was promoted more as, like, an ensemble horror show
where you follow both the antagonists and the protagonist
which is how they promote American horror stories.
- So this is part of the problem is that you come into this
in the first few episodes are structured around this family
and then at one point, this black family
and at one point, you are spinning a lot of time
with this white woman neighbor who is evil
and seems to be the only one that's motivated
by pure racial hatred.
Now, I don't, we could have a whole conversation
about what it means to have different kinds of racial motives
'cause I frankly think the most common kind of racism
is one that's motivated, it's mixed in
with these other incentives, my housing value,
my, you know, payola, whatever it is.
And that we don't talk as a society, you know,
are an episode that we have on bad faith that's dropping
the day after we record this.
We talked to Heather McGhee about how I think sometimes
we don't do very good job of explaining racism
in a way that white people can understand
because we don't talk about the ways that it's explicitly
cultivated by elites to be weaponized, you know,
for profit to put it like really simplistically.
But what I was watching and that when the episode
would flip in follow these other characters,
I did not care, I was just waiting impatiently
to get back to the characters I had been told about
for the first few episodes.
But they spent an inordinate amount of time with this woman
developing backstory and plot points
that never come up again and never go anywhere.
And to your point, Leslie, about who ultimately gets
to kill the baddie, she set up as one of the worst villains
in the show and she's randomly killed by the milkman
who abducts her and then shoots her.
I love that.
And the field where no one even sees it happen.
Yeah, it felt very — it was pretty unsatisfying to me.
I agree with Brue.
Oh, I loved it.
She got exactly what she wanted.
She wanted the, she wanted a house with a white man
and a black people around and the milkman gave it to her.
And then when she had a problem with that,
he gave her something else.
I loved her getting shot dead in the field
and no one ever finding her body.
I actually, they spent episodes,
episodes, cultivating this woman as evil,
giving our backstories so we could better understand
what's motivating her evil, setting up a conflict
between her and our female black protagonist.
And then the whole sum and total of her purpose
in the series ends up being that the fact
that she's missing and the other white people in neighborhood
don't know where she is, incentivizes them,
incites them to come for the black people harder.
So they basically supplant all of the,
what we got really invested in this one white woman's hatred
for this family and then they end up just using her
as a spark to invest all these other white people
in hating the family a lot too, instead of just building up
those other characters.
And so to me, it just, it felt like this like weird transitive,
you know, like,
and like, like, like, off-shored hatred.
It's like, well, they could have had her go missing
at any point, at any time.
Why did I invest in this whole plot and narrative
that could have been spent explaining more
about this weird white preacher ghost?
Like, we're gonna have to talk about this preacher ghost.
- I just disagree.
I think the way they did it was, like, really good storytelling.
It's like, yeah, you think she's gonna end up in one direction
where the milkman is gonna be the one directly confronted
but he has another plans.
He kills her off, but that still doesn't solve
the problem for the black family because guess what?
Now the white lady's missing
and now the men who kind of didn't want to push it so far
are now willing to push it even further.
I actually liked that even if you take her
out of the equation entirely, she just disappears
and half the people think she's run off with another man.
The racism still is there and still gets worse
for the black family.
- I was also kind of confused about exactly
what these people's history or experience was
with running black people out of town,
because the men seem kind of feckless in their racism
and they need her to egg them on.
And she's like, you know, you gotta do this,
and they're like, well, we don't really want to.
They seem a little bit hesitant.
They just want to glare at them really hard for a while,
and she's like, you gotta do more.
But they've shown that they'd run black people out of town before,
and those people were driven insane
and something crazy happened.
- Oh, that was the ghost.
That was the ghost, actually.
- So it was the ghost that was--
- Talk about the ghost.
- And it was a different,
I think it was in a slightly different neighborhood.
It wasn't that exact neighborhood.
- So, okay, 'cause I was confused about that
'cause it seemed like they'd done this before, I thought.
- No, it was not them, not them.
- Can we talk about the ghost and how we interpreted the ghost?
So I don't know how much you want to set this up
at all for people who might be listening
who haven't seen the show
'cause we haven't really set it up at all.
- Oh, I mean, it's too much.
I mean, I don't know how to explain.
It's like five or six horror movies put together.
There's a lot going on in this show.
- But just for the purposes of this:
this is how I understood it, this is what I saw happening,
and I wanna know if this is what you saw happening.
They set up that a nice young black family
is moving into this all white town
after something bad has happened to them
that isn't revealed for a few episodes.
But ultimately we discover that the wife has been brutally raped
and their baby child has been brutally killed
by some white people who randomly come up to their nice big
country house in North Carolina
out of the blue to attack them.
Now, the husband is apparently an architect, which,
I don't know what he was doing in North Carolina
but it seemed like a really dramatic change of scenery
and kind of like social context.
It seemed like kind of a farming, rural existence
down south in North Carolina, and then suddenly he's Mr. Suit and Tie
waltzing into the architecture firm as the only black person,
but okay, okay, fine.
And then to the point, you made this point on your show, T,
the family having experienced all this racial trauma
chooses to enter into another white neighborhood.
And expose themselves to that again
on the heels of the worst thing that could happen to you.
I mean, okay, so here we are.
Now, there's some implication in the beginning
that maybe there's something wrong with this house.
It's been empty, they've been trying to sell it for a while.
But that's never really followed up on,
the question of whether the house is haunted.
What we do discover eventually is that the neighborhood
is on land that was occupied by settlers
way back when.
And there was a preacher who lived there
who was kind of a good guy, thought he was speaking to God.
But one day these black people come into town,
and he takes them in and lets them stay.
The other white people in the town get really angry at this.
And eventually he joins in with the other white people
in the town in deciding to maim and kill the black people.
And that preacher has made a deal with the devil somehow,
so that he's a ghost now in the present, in the 1950s or whatever.
And he's the one that's secretly been terrorizing
this black family, because in addition to the external racism
they're getting from the neighbors,
they're also having these visions, every member of the family,
that are making them especially paranoid.
And it's not clear for a lot of the show
what is real racism and what are paranoid delusions,
which we later discover are being caused by this ghost.
And I found that ambiguity to be unsatisfying
especially because surreal things would happen
and it wasn't clear to me like is this a ghost thing
or not ghost thing until later in the show.
Like, for example, at one point this black woman
in this racist white neighborhood runs out of her house
waving a gun around in the air.
And there are no consequences.
Now, later we learn that the cop on the beat,
who is apparently the only cop in the neighborhood,
who comes up when the gun incident happens,
is actually being paid, is in cahoots with the real estate folks,
to basically protect, like, keep the black people in their homes,
allow the black people to stay there,
so that they can keep hoodwinking them into these
lease agreements.
- Predatory mortgages, yes.
But that just seemed kind of unrealistic,
that a single cop could accomplish that.
Like, logistically it didn't make sense.
- He just shows up when there might be trouble.
And it's not like there are that many black people moving in.
And the whole thing they say is like it's a domino effect
once they start coming in, the white people just start leaving.
He's just there to make sure that they don't kill
the first black family that moves in.
- But I guess the problem is he can't be there 24/7.
There's plenty of times they could have killed them.
I just feel like--
- But they didn't really want to.
They didn't really want to.
Like, I think people came into this series expecting it
to be like Get Out, where it's immediate,
like, full-on murderous racism.
What you see in Lovecraft Country,
and I think Watchmen too,
and these other types of shows where it's, like, antebellum,
like, all of it is immediate murderous racism.
But you come in, they are just like regular
suburban white folk,
and they kind of don't know what to do.
And they're not ready to kill them.
They're just like mad and want to drive them away
without actually getting in trouble themselves.
And that's why they actually listen to him
when he comes over and says leave them alone or whatever.
Like, it's not as intense a situation, I think,
as you're making it out to be.
- I guess my problem is a lot of the stuff
that did happen under his watch
would have been enough, I think,
to drive a lot of black families out.
So I feel like he wasn't doing that great a job
'cause enough stuff still happened.
I think a lot of families would have left.
Did the ghost kill the dog?
That's a question I've got. Or was it the white people?
- I thought the white people killed the dog.
Although it happened in the basement,
so it's one of those intentionally confusing things.
- I think it might have been the ghost.
- 'Cause the basement is the ghost's domain to me,
but the white people were talking about poison
before the dog got dead.
- I thought maybe it was a misdirection by the writers.
- Maybe it was Alison Pill's character, I forget.
It was 10 episodes ago.
They did say the window was too small
for an adult human to get in through,
so then I thought maybe it was a misdirection to hint at the ghost.
- You could just chuck a poisoned dog treat down there.
- That's true.
- Here's the thing, here's the thing.
Here's my problem.
I accept a ghost story.
I accept a Beloved kind of plot, where you don't know
what's going on, you know the house is haunted,
and then over the course of the novel,
or the somewhat regrettable movie,
you discover that, oh, okay, she had a horrible trauma
she experienced, her child was killed,
and this is the baby's ghost.
It ends up all joining up and being connected
and making sense as a narrative.
And it's like a manifestation of trauma.
It's a little allegorical and non-literal,
but even in a literal film form, it hangs together,
it makes sense.
The weird part of that movie was my girl
Thandie Newton coming out full nine months pregnant
in that weird getup,
but that's a different conversation.
And in this, there were all of these potential causes
for what was going on.
And to me, if it was just a ghost story,
then fine, but it wasn't just a ghost story.
It was a story of racism, and of racists
who were acting in a way that made them seem like they were possessed.
Like, it wasn't just normal racism,
it was like a psychotic obsession,
but it wasn't clear whether that was also part of the ghost stuff.
- It is, it is.
- And then there was also the fact that this man,
before ever moving to California,
had a mental breakdown as a consequence of fighting in the war.
And the ghost seems to be preying on that;
his own sense of mental instability
seems to be part and parcel of this,
with the motif of him not being able to eat sweet foods
coming up in the context of the ghost,
even though it was something that started
as a consequence of his wartime trauma.
So if you're gonna have all of these things going on at once,
every one of them has to be purposeful.
Why do we need all of these motivations
for all of the dissonance that's happening?
Like, one would have sufficed,
but if you're gonna have all of them,
I need a payoff.
I need a reason why.
Why introduce that this man has war trauma?
Okay, except that it makes him sympathetic
to what his wife's going through.
Okay, all right.
But then ultimately he doesn't believe his wife
when she's seeing visions of the ghost,
even though he knows perfectly well
that he's also seeing visions
of this weird vaudeville character,
this racist vaudeville character.
And they never communicate about any of the things
they're all seeing, except the little girl,
and then nobody believes the little girl,
even though they're all apparently seeing visions
at the same time.
Then they introduce the teenage daughter's visions
a little later in the show in a way that,
I don't know, her character is just kind of underdeveloped
as a whole.
Yeah, she was a little bit underdeveloped,
her plot was a little bit underdeveloped,
but I liked the ending visual,
the visual where she covers herself
in white paint in front of the bonfire.
I thought that was cool.
But she didn't really have, like, that much going on,
and I kept thinking back to her character
and, like, what she was even there for.
It almost just felt like an extension
of another movie.
Also, like, everything felt so textbook basic.
Like, oh, a black girl goes to an all-white school
and she wants to be white.
And the whole Bluest Eye Easter egg
I thought was a little too on the nose.
Like, I felt like they tried to do a little black-art Easter egg
where she looks into the mirror and her eyes are blue.
I'm like, okay, Toni Morrison, The Bluest Eye, I get it,
but you're not really going anywhere with it, really.
It's corny. Like, also, her mom is like the most beautiful
woman I've ever seen,
chocolate, walking around this town.
And we just have this really flat narrative.
There was never any conversation like,
hey, Mom, I'm feeling insecure at school,
and we could have had it instead of
spending an entire episode
on that white lady's backstory
when she just gets killed like a dog in the field anyway.
Develop a relationship between this mother
and this daughter and her sense of self-esteem
in a white space.
You know, imagine if maybe the mom had taken the daughter
to go to the other part of Compton
where the black people lived.
And that whole part of the story, it's like,
it begs the question: why are you living in that neighborhood?
- So what you described would have been a great scene.
If she brought her daughter over to the black side
to help her get over feeling bad around the white school,
that would have been a great scene.
I would love to see that.
But that's not a horror scene.
That's just a regular drama scene.
That's not like, you don't bring in the daughter to solve the problem.
- But Lucky going over there was a regular drama scene anyway,
when she goes over and hangs out with the people.
So it would have also created an opportunity
for some narrative consistency, because the daughter could
have asked some questions that I, as a viewer, was asking.
For example, why is it so important for you to live
in this all white neighborhood?
Something the daughter could have asked
and then the mother could have answered it in a way
that, you know, I wouldn't agree with, but that makes sense.
Like, you know, your father has been working so hard
to feel capable, to belong.
He has this chip on his shoulder.
Like, I just really want to give you the best.
These white people can't scare us.
Somebody has to be a pioneer.
We have to put our foot down.
Yeah, you know, the kinds of rationalizations people make.
But at least then it wouldn't just be this big thing hanging
in the air, like, everything's chill over here, it's great,
and for some reason we've gone into this predatory mortgage
with people who are literally trying to kill us next door,
after I've just been raped and my child killed by white people.
- I think that question would be fair
if the show took place over a long time, but it's only,
like, they want to leave,
but they can't get out of the loan,
and then she ends up in the hospital.
It feels like a long time, but the show takes place
over only like a week and a half.
So I think it is realistic
that no matter what racism you experience
when you get to your brand-new house,
you're going to stay for more than a week.
- Probably. And they do try to leave in the middle,
but then she gets institutionalized,
so then they're kind of stuck.
One thing: the dog dies, and my kids aren't going to be in that house anymore.
That's all I have to say.
How are we going to have a horror movie, Bri?
The first scary thing, we're gone.
They're poisoning small creatures up in my house,
and I have my little toddler baby right around here,
talking about cat in the bag.
I think the mistake they made is that they should not have worked
in the down-south Jim Crow stuff at all.
Like, I think what happens is, once you have her gang-raped
while watching her kid get, uh, killed in a bag,
it ruins the rest of your story, because like Bri said,
why do you want to move into this neighborhood so bad,
with these white people, after what white people did to you?
Like, yeah, you just up and move?
And I felt like what this movie,
what this show was really about was, like, assimilation anxiety.
And I think for this kind of class of black people
who make this kind of art, their big life problem is assimilation anxiety.
And that's not a bad thing to write a show about.
That's fine, but I feel like they feel kind of guilty,
like this is not a good enough thing to do a show about,
so we have to tie it into something deeper.
So: Great Migration.
Like, one thing I liked about the Jordan Peele thing with Get Out was,
I think he was just kind of honest: hey, this is kind of what I worry about.
Assimilation anxiety.
I'm not going to try to tie it into, you know, a lot deeper stuff than that,
you know, but I felt like they really just want to talk about
having to live around white people and deal with microaggressions
and deal with occasional explosions of violence and whatever,
but they felt like, oh, that's not heavy enough, you know,
let's tie it into extreme Jim Crow violence.
And then it just creates this kind of mismatch that, to me, doesn't work.
It ends up undermining the contemporary part, because you're like,
why are you moving into this area and dealing with this white-picket-fence stuff?
And here's another problem: the black part of town isn't really the hood yet.
You know, it hasn't really hit that point.
Yeah, it's nice.
So it's like, unless you just, like, hate black people,
why wouldn't you move there?
It's not '80s Compton.
It's pretty nice over there.
- But people did do this in real life.
So that's the thing.
Like, people did do this.
There were families who were the first to move in, who skipped over the nice black neighborhood
and moved into all-white Compton.
- So, but were they the same ones who dealt with, like, that type of gang-rape Jim Crow?
- No. It's Cory Booker's family.
You know, Cory Booker always tells that story because, you know,
it's about how, like, his parents' Jewish friend got them the lease, and then they moved in,
and his parents were, like, practically passing, they were that pale.
And, like, I feel like that kind of family, who was always upwardly mobile,
light-skinned, like, that's who ends up in those spaces.
That's who's the first to move.
Or oftentimes people who no one even knew was black until it was too late.
It's not this family, who apparently were some kind of rural, who knows what they did.
But old girl was standing outside in a white cotton, like, slip dress and apron
one second down south in North Carolina,
and the next second they were like corporate executives walking into the firm
in the big city of LA.
They seem like completely different people between those two settings.
It was just disjointed.
It didn't make any sense.
And to your point, T, it starts to feel like, racism is bad enough.
Like, that kind of rape and violence, that happened.
And I don't want to be sitting here thinking, oh, was it an evil spirit that did it? No.
It's white people who do that stuff.
Now, the stuff about the predatory lending, and actually playing out how that happens,
I thought that was fascinating.
And even though it's kind of basic, like, oh, look, I read the redlining book and
I'm going to write it into my script, I have never seen it spelled out quite that clearly
and explicitly in a fiction narrative before.
And it's fascinating.
Yeah, there could be a whole show on that.
I know that wouldn't be a horror show, Leslie.
But I actually liked that aspect of it.
It's the ghost stuff that I started to feel was overkill, because, like,
racial hatred is real. And when you start introducing this fantasy element to it, it makes it seem
like it had to be a supernatural priest from 100 years ago to make white people act like
this, when white people just were acting like that.
- Well, I think the thing is, the show is not about race or racism.
It's just not.
It's about horror.
Like, he didn't make this show because of the race stuff, even though the showrunner puts so much race and racism
in it, because that would feel exploitative.
- But in interviews he claims it is.
That's the problem.
- He might claim it is, but when you watch this show, like, he's not obsessing
about getting the racial facts right, he's obsessing about getting the shots right.
- I'm sorry, there's just so much focus on the racial facts.
It's like you're reading a sociology book from, like, freshman year.
- Yeah, yeah, it's a little like that.
- I mean, I actually think the racial stuff is much better done
than anything in Watchmen. Anything.
Like, it felt almost like, what if Mad Men, a show of that quality, did it. And the
redlining scene, that's a perfect example: that could have been a scene in a high-quality prestige
drama about race, with no violence or anything in it.
I think the show should be complimented for being able to do that stuff
extremely well, when the shows built around it can't do it well.
But ultimately, I think with this show, you don't do that episode five if you're just
trying to tell, you know, a fairy-tale story about race.
You do that when you want to do, like, a pulpy, gross, you know, Rob Zombie-type fucked-up
horror thing.
Episode five, you're talking about when the rape happens.
Yeah, yeah.
But the thing is, there are tons of racial movies with scenes just like that. Rosewood, for example.
- But I just think the way that this show is shot,
and the people who are directing, like Ti West, who is, you know,
a horror director's horror director,
I just think that the way it was marketed as being similar to, like, these racial prestige shows
was a disservice, because I think it's much closer to something like Channel Zero or American
Horror Story.
Like American Horror Story, it deals with social issues, but I don't think anyone would
say that any of those seasons actually have anything particularly strong
to say about them.
I think it's mostly about, you know, titillation, shock, horror.
And I think that's what this show really is about, even though it does do some racial
stuff. I think it does a lot of the racial, like, fact-dropping much better than these other
shows.
But on the whole, like, it just doesn't feel all that,
like, racism is an element of the horror, but the show doesn't really have anything to say
about race or racism. It's just, like, people deal with racism,
and this is how a racist ghost would be. It just seemed to be more about exploring these
different modes of horror than it was about saying anything at all about racism.
I think American horror story is a good example because I think it's true that that show does
not get into any particulars about the substance of what's going on.
I mean, sometimes it's just like witches, so there's not a lot of particulars to get into.
It's just about a coven or whatever.
But this show doesn't just treat the racist 1950s as mere architecture
for a horror show, right?
They could have set a show in the '50s.
They could have set a show in the 1850s.
They could have set a show in all kinds of racist times in American history and not made
it about racism.
There's Abraham Lincoln: Vampire Hunter, which I didn't see, but I can guess: it probably,
you know, dealt with Abraham Lincoln and slavery, but it didn't, you know, it didn't
have people raping slaves and bludgeoning babies in a pillowcase.
And at a certain point, you are using racial horror to heighten the genre horror, and not
everyone is going to love that, because racial horror is real and it's, like, really crappy.
And so if you're going to do that, I think, I think you're allowed.
I'm not a prude.
I'm not saying, like, some things you just shouldn't make light of or, you know, shouldn't
take too lightly.
But I think you have to have a payoff.
You have to be saying something.
For whatever we just want to say about Get Out,
I felt like it was saying something.
It was saying something about white liberal guilt and the hypocrisy of white liberals.
It was saying something in that final scene where everyone gasped when the cop car comes up,
and we have this collective experience of realizing that black people have a different
relationship to law enforcement than the white people in the theater.
The movie has a point.
I felt differently about Us.
I felt like Us didn't know what it was saying at all.
It was a fine movie.
There are aspects of it I enjoyed, but it just didn't know what it was doing.
There were all these things I felt were metaphors, but I didn't know,
a metaphor for what?
I don't know.
And that's how I feel about it.
It was beautiful.
The gruesome horrible shots were beautiful.
It was very well produced.
It looked expensive.
The people looked amazing.
And I wanted to invest.
There were aspects of it that I found to be interesting.
For sure.
But at the end of the day, it's even hard to describe exactly why I didn't like it because
there just wasn't a payoff of all of the horror.
And that's when it becomes torture porn instead of just a horror movie to me.
- Yeah, I just, the payoff for me is the horror.
Most of the things people are knocking on the show are things that have a firm tradition.
It references, block for block, shot for shot, stuff that you can see in other horror movies
with a similar tone and no payoff.
At the end of The Hills Have Eyes, usually,
there's no payoff.
There's no victory.
You don't necessarily get your revenge.
And so I understand.
But this is not stuff that's for everybody.
This is not stuff that's for even most horror fans.
Like, most horror fans aren't going to like this stuff.
But this show has been elevated to, like, the front page of Amazon. It's next to the next, you
know, Michael B. Jordan movie, it's next to The Marvelous Mrs. Maisel.
I think it's a very specific gritty, grimy type of horror thing, and it is very nasty.
And I would not recommend this to most people.
I would only recommend it to people really invested in some, like, shocking, gruesome horror.
I would not recommend this to just the average person looking for a good new TV show.
Like, that's not what this is.
- But I think part of the problem is that The Hills Have Eyes, that kind of stuff,
is not trafficking in real, serious history. Like, if you've got to do,
like, Jim Crow and violence and lynching and all this stuff, it would be kind of like
doing The Hills Have Eyes but with the Armenian genocide or something, you know?
- There's movies, there's movies. Compare it to Jojo Rabbit.
Let's talk about, like, Jojo Rabbit, which, when I heard what it was about, I was like, how are they
going to pull this off? Or even the one with Brad Pitt, Inglourious Basterds.
Both of those movies could have gone really off the rails.
And some people didn't like Inglourious Basterds because they felt like it was, what, like,
appropriating. They wanted it to be, like, a Jewish hero, as opposed to, is Brad
Pitt's character, like, supposed to be a Gentile?
I remember there was some discourse about whether he was, like, appropriating a moment
of potential, like, Jewish self-determination and, like, glory in killing the Nazis.
But regardless, Jojo Rabbit never, and Inglourious Basterds also never, kind of plays,
and I want to be careful with this, because I don't think that Them, the show Them,
is doing this intentionally, but it would be as though, in Inglourious Basterds or
in Jojo Rabbit, they introduced a power, a force, that is persecuting and genociding Jews
other than Hitler and the Nazis.
- If we found out that Hitler wasn't just an anti-Semitic piece of crap? You mean like the entire MCU?
- Yes, and that's whack.
- Yeah, that's it.
- I mean, yeah, but that's, I mean, that's not so unusual.
It's not such an unusual thing to have, like, a Nazi be a zombie Nazi or a vampire Nazi.
- But in the Marvel movies, you're not getting up close on what's going down
in Auschwitz, right?
You're not getting people just gruesomely, with their ribs poking out of their ribcage. Like,
you are getting the racial, you're getting explicit racial terror in Them.
- But sorry, go on.
- And at the same time, you're getting the antagonist, the ultimate antagonist, who is literally
a long-dead Scandinavian priest.
I mean, it literally offloads even the American-ness of it.
It's just, like, some random new immigrant with an accent who is apparently responsible, because
of a deal with the devil, for the racial torture of this family that we've just been subjecting
ourselves to for 10 hours.
And that's what starts to feel like it cheapens the experience of the horror.
Like, you can zoom out on the racial terror and play fast and loose with the
facts of history, or you can zoom in and, like, really respect that this was a real-life happening
and these are historical events.
And this show is apparently based on a real story, right?
This is a real family, at least that's what the title card at the beginning said.
I don't know if that was part of the conceit.
- Yeah, there was, like, I don't know.
It talked about the Great Migration in a title card, but I don't know if it mentioned the
specific family.
- It said the name of a family that was the first black family to move into this neighborhood.
I don't know if that was fake.
- They might have just looked up the records.
I mean, there's no reason not to.
- Yeah, yeah.
It was just a name.
- I mean, if that's a real name of a real family, obviously they can do what they want, but
the implication, I mean, something happened to that family. That family had a real experience,
and it wasn't because of a Swedish devil.
- I just don't think that, I don't think the show really says that the Swedish
devil is responsible for all the racism, because, first of all,
the racism started with the people themselves.
Like, if anything, I'm actually surprised at the show for explicitly, like,
condemning the racism in Christianity and in the Bible, which you almost never
see in anything. Like, even before you think he's possessed, he's quoting scripture
from the Bible about how black people are, like, foreigners who can be treated as slaves and such
and such. But I don't think the show really wants you to think that those people aren't racist.
And Alison Pill's character, she's not haunted or anything.
She's just, like, pure, virulent racism, like most of the people in the neighborhood.
I think maybe until the very end, we're not supposed to think of them as being under any
sort of influence, similar to what we see in episode nine.
We're not supposed to think their racism is influenced by the ghosts until the absolute
very end, when they all, like, you know, end up torching themselves in
this, you know, religious racist fervor. I can see that reading of it, but I really don't
think that's the show's intention. But I can see why that confusion is happening, because
he says it himself: there are three problems for the protagonists. The internal struggle
with the ghosts. The external struggle with the racist neighbors, which is not supernatural.
And then the struggle with their past and the things that they've already
dealt with, their own psychological issues, which the ghosts take advantage of and
the racists exacerbate themselves. Like, all of them aren't necessarily
connected in every way until the end.
I can see why you would feel that way, that it was kind of offloading the
racism onto the demon, and a lot of stuff does that, but I actually
don't think so. I think this show was nuanced enough that it made it very clear: these people were
racist, and also the ghost is racist, but he was a racist as a human too.
- I mean, it doesn't seem like the ghost even needs to be there or do anything. Like, what's the point?
Like, so take a movie like Black Swan, for instance. And I would say, okay, Black
Swan, and again, it's been like 15 years since I saw that movie, so grain of salt, but
I seem to recall it being extremely difficult to watch in a really good
way, because it was genuinely subtle.
The horror of her internal breakdown, externalized in these ways where we didn't know, as the viewer,
what was real and what wasn't, because we were seeing the distortions through her eyes.
The scenes of her meticulously, like, picking at her cuticles until they start to bleed,
you want to turn away from the screen, not because it's a baby literally being whacked against
the floor in a frickin' pillowcase, but because there was a sense of restraint. And
I remember watching that movie through my fingers, and, like, nothing
dramatic even happens.
I guess the thing is, that's a fair point, but that's a completely different type of movie. There are a lot of movies where babies are getting whacked in bags against floors and things like that. That scene did not shock me at all because I've seen somebody...
It's not about being shocked.
It's not about being shocked. I'm not, like, prudish. I mean, I did not watch that through my fingers because it was almost cartoonish.
Yeah, it was.
It was. I think, like, the delivery of a lot of this extreme horror can be a little bit cartoonish, but it just wasn't trying to be subtle, is what I'm kind of saying, and some people are going to prefer that. I would imagine most people would prefer Black Swan to something more violent or more extreme or schlocky, and I think the show at times can be a little bit schlocky, because that bag scene is not really, like, convincing.
I don't mind schlocky either. I enjoyed many seasons of American Horror Story. I don't mind.
My point is that, okay, I once dated a musical theater student who explained that in a good musical, when they break into song, it does something more and better: it tells a better story or conveys a better emotion than you could have done with spoken dialogue. And if it doesn't elevate a scene, if you're just having to explain what happened anyway, then it's a bad musical and bad writing. And I think that principle kind of applies here, where all I want is for the parts that were introduced to be additive.
I'm not mad at the idea of schlockiness, I'm not mad at the idea of gruesomeness, I'm not mad at the idea of horror.
But at some point, in my opinion, the various elements that were introduced are competing
with each other for airtime.
And any one of those elements, or any two of those elements, would have been interesting and good. The criticism isn't that I didn't like the elements; it's that no one of the elements was as free to thrive and be expressed fully as it might have been, because half of my brain power was devoted to trying to figure out what the hell was going on at any given moment, what was motivating what, and whether the events happening on the screen, which a lot of time was being devoted to at certain points, were actually going to matter in the long run.
I think it was the confusion too that was bothering me, where it's trying to explore real-life racial trauma and real-life history, redlining, Jim Crow, but also trying to do schlocky and over-the-top stuff. And I don't think the two should have been mixed. Either do full allegory, or say we need to explore intergenerational trauma and things that people really feel very triggered by, but don't then also try to make a grindhouse thing out of it. I think it's different than The Hills Have Eyes, because The Hills Have Eyes is something totally fictional. Here you're suddenly making people relive things that, you know, they worry their ancestors went through, things their parents lived in terror of. And that's kind of the problem when you introduce ghosts and grindhouse and everything into it: you're almost cheapening it.
It's weird, because it seems like the show wants to be homework TV, wants to be profound, wants to be a history lesson, but it also wants to be exploitation, mondo, grindhouse entertainment. And I think it's that blending of the two that has a lot of people with a bad taste in their mouth about this stuff.
My main thing is, like, if you want edutainment, this is not the show for you. You should watch it as horror, and anything else you get out of it, I would say, is a bonus. I just think the core of the show, the thing they're most interested in, is the horror. This had, like, several different horror directors who aren't known for films about racial justice; they're known more for stuff about, like, vampire women. So I think that horror element is just so strong, and that's what really won me over to it.
Now, I do agree with you 100%. Briana said this: it has three lanes of horror it could go in. And the ghost one absolutely could have been left out, actually, even though I liked a lot of it. I think the show might have been stronger if they had left it out, or if it was shorter. Or, honestly, it could have been three different shows, right? It could have been about multiple different families going through different kinds of things, or it could have been only the ghost.
I imagine a show... part of the thing that's pernicious about racism is that sometimes you don't know when it's racism and when it's just, oh, that person was rude, or that person genuinely wanted to help me in the grocery store and wasn't following me. You know, there's this sense where, as a black person, you're always forced to doubt yourself, because you don't want to make a wrong accusation, but you know some part of what's happening to you, some percentage of it, really is racism, right?
And so imagine a world, imagine a show, where they move into this neighborhood, and they're very nervous about it for obvious reasons. There are tensions and glances exchanged with neighbors, and you don't really know how to interpret them. But then all of this stuff starts happening in their internal world, and it's making them more and more paranoid, and they act in ways that genuinely piss off the neighbors, for good reasons and not racist reasons. And there are all these tensions. The ghosts could genuinely be stirring crap in the neighborhood, as opposed to being kind of disconnected from the external racism that just exists in the world.
You know, because most racism isn't as overt, even in the 1950s, as what went down in that neighborhood, with just the full-on insanity that happened day one. It's a slower build, I think, where either the ghost is pulling all of the strings, or the ghost is affecting them so much in the house that it's making them unable to behave, you know, their racism radar is, like, misfiring and causing tensions with the neighbors. There are ways this could go that wouldn't feel like those ideas were in competition.
Yeah, I see what you're saying. That's an interesting take, but I'll say this about it: if they had made that one, it absolutely would have had to end with the black family being murdered by the white people. That would have had to be the ending.
But yeah, I felt there were a lot of things in the show that did feel like genuine slay-king, slay-queen moments. Like when that cracker talks jive to him and he just walks up and busts him upside the head with the gun, I liked that. Him going into the bathroom because he's dealing with his white bosses, that felt real to me. Her walking across the street and slapping Allison Pill, finally someone has done that, that felt real to me. When he just shoots the cop and just drives off and faces no consequences for it.
That felt real to you?
Yeah, it did feel real. But it felt good. It felt good.
So there were some fun moments, even though it was a really dark show. I think there were some fun moments, and it does technically have kind of a happy ending. They did beat the ghost, and inadvertently they actually saved all their white neighbors too from being consumed in the flames of their racial fury.
What did you guys make of the kind of late in show dead baby in a box reveal?
What I thought was that, you know, that was an editing thing. I think it was supposed to be a storyline where we weren't sure if what happened to Lucky actually happened. We weren't supposed to be sure about that; we were supposed to question it towards the end. The daughters are like, you killed our little brother. I think that was something that was supposed to be set up earlier on, and somewhere it got lost.
It did not land for me.
Yeah, I think we may have been supposed to think she killed that baby.
Yeah, I think they were trying. I mean, even that scene in episode five felt like it should have been the first scene in the show, but they didn't want to open the show with a black woman being raped by some mutants. I think that's why, and I think that changed. And I think after that, we were maybe supposed to see scenes where we questioned Lucky's sanity, or her story, because she was the only one there. But the surreal nature of what happened to her, that was a real sticking point for me.
I got to say, because to this day... I mean, the way they filmed it, they're wearing all white, the house is in the middle of nowhere, the people came and then left, and there was no sense of community. There was no sense of what town they lived in. It felt like a dream. But I would have liked to have spent more time with them. Do they have parents? We got to meet the white lady's parents. Do they have parents? Do they have any sense of community? Did they get support from the community when their baby was bludgeoned to death? Were there police involved? And what happened to the people who did all of that? Can people really just fully go and bang someone's baby to death and there's just nothing, no follow-up? I mean, there was just... I don't know.
I think those are fair questions, but again, I think it's just a genre thing. Like, are they literally ghosts? No, but as far as the film goes, yes, they are. They're like the mutants in The Hills Have Eyes; they just come out and attack you. They're the Strangers, if you've seen that movie. Why do they do this? Because you're home. They're just the things that come out.
That's the part that makes it seem cheapened a little bit, right? Because part of the story of why those kinds of events are horrible is, okay, you go to the cops and the cops don't do anything, because black people don't get justice. And the racism isn't quite so senseless. I mean, rape, you don't have to explain rape. But in real life, there would be something much more twisted. Like, I thought that white woman was going to have always wanted a baby, couldn't have her own kids, and just wanted to steal a black baby. I thought she was going to take the baby, and then the mother was going to have fewer legal recourses to get the baby back. There's something at least sinister about that, and that feels really true to life, given the obvious history of selling black children. But instead they wrote her in as a sociopath who just travels with rapists, going door to door doing that.
I mean, I don't know, man. They're just villains from another movie. That's really what it is, and that's either going to satisfy you or not. But I think that's really what it is: they are villains from another movie who showed up in this one.
Yeah, that's what it felt like.
I will also say that that little baby was beautiful, and was one of the most charming little on-screen talents, which in the end just made it worse. It was such a great baby. And I just... great baby.
I don't usually like babies, but this was a notable, just emotive, gorgeous baby with big old-man eyes.
And I didn't feel I got anything out of Them that made it worth sitting through. Like, you know, I got invested in the baby and sat through all that nonsense and didn't really get anything out of it. It was just like, okay, this is all right. Now what?
I think, you know, when you look at these films, there's a thing you're supposed to get out of it. People were very critical of episode nine, of the ending of that. It's really violent, really bleak, but also kind of schlocky and sloppy and cartoonish too; the makeup when they put their eyes out is kind of puffy and just kind of over the top. But I think you're supposed to feel that revulsion and that horror and that absolute sense of dread, and in most of these types of stories, you don't get any resolution.
I actually don't think it's similar to I Spit on Your Grave, because in I Spit on Your Grave she does get revenge directly.
Yeah, you're right.
Yeah, and it's closer to the home-invasion movies where they just come kill you and then they leave, and that's it. But then the story goes on.
So I think he's trying to mix all these different dissonant types of horror together too, and I think that's a legitimate problem that you have with the show. And sometimes it's not even horror: the stuff with the redlining is like Mad Men, and the stuff with paying off the cop, that's just a crime thriller. That's just like an Easy Rawlins, Devil in a Blue Dress thing. It could have been that type of show as well. So I do think there's a bit too much in it, and it's a bit too long. But I do have to say, every single individual part is so well done for a guy whose first major project this is, with a bunch of different directors. The consistency of the visual style is just so strong, and the way they are able to, mostly successfully I think, mix the different genres is just very impressive to me.
I wish this was maybe two half-seasons of two different stories that split up some of these elements, or maybe you just have more characters going through different things that aren't necessarily all related, and make it kind of a bigger show in some ways.
I will say, even outside of the trauma porn, like, say the trauma porn problems weren't there, I didn't really find it that good as far as plot pacing. I don't think there needed to be 10 hours of this. I felt like it was just very padded. It was full of micro-aggressions just to pad out eight hours. Like, after eight hours, I was like, okay, I get it. They're going to walk in somewhere and a white person is going to say something slick. I get it. I don't need to see eight hours of the grocer saying something slick and then the mailman saying something slick. I get it. This could have been a two-hour movie.
It could have been a two-hour movie, and it could have been a quite good two-hour movie.
It could have been a two-hour movie. I just didn't think it was really well paced or worth the 10 hours it wanted out of me. There's a good two-hour movie in there somewhere that just got bloated.
I even think episode nine could have been its own, like, two-hour movie. That's just Midsommar with black people, basically.
Yeah, but they also could have cut it up. I've got to say, I'm still not sure what happened in it. I confess, I started zoning out; I was just letting the series bulldoze me by the end. But as I recall, there was a nice preacher who was at odds with the rest of the town because he thought he had a direct relationship with God. But he was a nice guy. He had taken in this orphan, and he had just experienced a personal tragedy because his family had all died. Then here come these two black people wheeling into town, and everyone wants the black people to go away and wants nothing to do with them. But the nice white preacher says no, no, Jesus says we've got to, you know, be nice to strangers and whatnot. Then at some point I glanced up from my Twitter scrolling, and the nice white preacher had completely flipped and was suddenly talking about how we've got to pluck the eyes out and burn these Negroes up in this church, and then the mob was off and popping.
Well, he was being haunted. That child he took in was actually not a child; it was a demon. And so from the beginning of the episode he is being haunted and driven mad by this child. And he ends up encouraging and working up the town into a frenzy; he gets into a frenzy himself because of the sort of untenable situation. So he does flip on them, which I think was very interesting, because he's the protagonist of that episode. He's the main character, not the black couple, and I thought, just as a capsule, it was quite interesting. If this was an episode of an anthology like The Outer Limits or, I don't know, Tales from the Crypt or Creepshow, it would be a really strong entry, because he starts off as kind of a good man, but, unbeknownst to the audience, he's already sold his soul to the devil in that first scene, where we think he's praying to God. He's actually praying to Satan, and then from there there's that turn, and it just ends up horribly for the family. But we get the small satisfaction that all the white people die, and he dies, but he continues on as, like, a revenant or a ghost of some sort.
Why do you think the devil was against black people? That's what I didn't even understand. Like, why is the devil...
Yeah, that's one thing that isn't explained. I don't know if it's necessarily the devil; it's maybe a demon, a demonic force of some sort. But its motivation ultimately goes unexplained, which... I think the show did a pretty good job of explaining the motivations of many characters, even the ones you wouldn't expect, even the little racists. You get to know why they feel this way, why they're acting out in this way. But they never really explain where this original evil comes from. I think that was kind of a deliberate choice, but it does require that, as you noticed, the ghost, the original evil, is also racist too. It specifically wants to bring that racism out of you when it haunts people, for some reason. I don't know why.
And that's part of the cheapening, right? Because there's other stuff going on.
Yeah, but I don't think any explanation would have been satisfactory.
The one explanation I want... I want to be on the record that I wish there were no ghosts in this, no explanation of the motivation. I think the ghost is dumb and we shouldn't have a ghost. Because, also, okay, there was this implication that one of the white women's husbands is gay, and that's part of her tension, her issue, that she isn't feeling desired or whatever. So is there another, like, anti-gay demon that's going to be in season two? Or is it just black people? There was a lot of bigotry happening in the neighborhood in the fifties, but somehow it's just the blackness that gets its own special black devil. Is there a gay devil? Is there, like, a misogynist devil? A demon for everything?
And it's the intersectional demon. It just hates everything.
It just hates, specifically, black queer women who are poor. It's just the intersectional demon.
That'd be a good one.
You know, or, and I know this may be digging deeper, but that demon seems to have a tie to the land specifically; it's haunting that area. Is that supposed to be the spirit of America, and that's why it's specifically a racist demon? See, you thought Marvin didn't have it together, but he's working on multiple levels, perhaps.
Maybe. I'm thinking about the intersectional demon: he doesn't hate black people, he doesn't hate women, he doesn't hate gay people, but when they come together in one body, it just activates. I hate it! You know? I don't know.
But this is supposed to be American Horror Story style, so he got approved for two seasons off the bat, and he's going to have a second season with a new story; it's not going to continue from this one.
Yeah, it's supposed to be an anthology.
Yeah. I'm curious what the second season will be. Maybe it'll be interesting if they keep it on the same land or something, like you said. But yeah, even if you hate it, I just really think this is the best-looking thing that almost any of the streaming services have shot, so I think it's kind of valuable for that, especially with this young writer. I think the writing is mostly pretty good; the dialogue is very good compared to other stuff. Especially when you compare it... a lot of people were talking about Lena Waithe and her influence on this. I can tell she didn't write one word of the script, if she did in fact write the Queen & Slim script; these are not by similar writers.
Yeah, I don't think she's ready for those credits.
Yeah, she's just the executive producer on this, so I don't know what this says about her work. But I'm just really excited about this show. I wish it was promoted more accurately, because it gets extremely negative reviews in something like Vice or Vox or something. But nobody who works for Vox should even be allowed to review any of this stuff; they're not mature enough for this sort of extreme horror, for the most part. A lot of the mainstream people who were writing about this were writing about it like they were about to write about Scandal. This is not, you know, a mainstream thing.
I don't think it's fair to say that the criticism is because there's a genre mismatch. I think some of the critics are like that, but we have just come out of a world where we've had, like, a ten-year trajectory of 12 Years a Slave through Django. Jojo Rabbit is in this kind of oeuvre. We have Antebellum, which just came out last year; Antebellum is so... you know. I mean, I feel like there was another one of these shows that we just had. We have people doing Harriet Tubman movies that have whole fictional love stories with slave owners that didn't exist. People are bringing up Octavia Butler. We live in a world where this kind of genre mix... oh, you know, the one with the... tick...
Oh, Lovecraft. Lovecraft Country, that's another one.
Yeah. This is not like... people are like, obviously Them and Us... sorry, Us and... Jesus, sorry, Us and the other one, the good one, the first one.
Oh, Get Out. Get Out.
So it's not like people are like, I've never seen this before. But Get Out is like a baby thing compared to this; that's like a PG-13 movie.
Oh, I don't think so.
I think sometimes, and it's completely fine to like stuff that isn't that good, we all know that; I am famous for fully embracing lots of stuff that other people don't like, but I kind of know it's not good even if I like it. I think there are a lot of beautiful pieces of this, and like you say, I think visually it's absolutely stunning. But sometimes something just doesn't hang together, and I think you could probably go in and edit the crap out of this, film a couple extra scenes, and make something that was extraordinary. I think there's a lot of good material here, but it's not there narratively. You say the dialogue was good; it wasn't bad, but I don't really remember much of it.
Yeah, I mean... oh, I remember every time he calls somebody an ofay.
Okay, I thought the husband's arc in particular was very weak. Who was he? Was he kind of a protector who could get things done and throw down, like, I was in the army and I had a little bit of a, you know, mental health crisis, but I'm here and I'm here to protect my family? Or is he a little bit of a punk who just keeps kind of taking it at work? There were some character inconsistencies, I think.
I think the show was trying to show that he was both; you have to be both if you're a black man. Like, that was the point of the character. The blackface character was throwing it back in his face, like, oh, you're big and bad but you're smiling at these ofays at the office. That's something I have to deal with. I felt that.
So what did it mean when the blackface character, they wiped the paint off him, and he's a white man? Because that just rendered it utterly incoherent to me.
That was a little bit confusing to me too. I was hoping that he was going to wipe his face and be, like, a blank void or something like that. I don't know why he was just a white guy, because he wasn't even played by a white guy; he was played by...
Yeah, it was really unclear to me what it was trying to say.
That's the thing. I think what they're trying to do is show that he's trying to convince himself that he's not him, that he's not that jiggaboo character. So when he wipes it off and you see it's actually a white man under there, even though he actually is a black man for most of the scenes, then he's like, oh, this isn't me, this isn't another black man, this is actually just a minstrel show, it's not real. I think that was the point. But I wish it was some scary other; that's what I was looking for, actually.
Yeah. And the late-in-series reveal that he wasn't home for the rape because he had taken the girls to the movies... I mean, I think there were some interesting ways you could have played with his guilt around not being home, but I didn't even know why he wasn't home until late in the episodes.
Yeah, I think a lot of that was shifted from the first episode to later. You're absolutely right, that was another plot thread that felt like it was supposed to be injected quite a bit earlier on, just like the possibility that Lucky might have actually killed the baby. I wish the show did more of that.
I think the reason it wasn't injected earlier was because it was trying to do the mystery-box theater stuff, where you introduce things non-linearly just to give the show the illusion of depth. That was just something Lost and Watchmen, and anything Lindelof or Abrams do, do a lot. Every show has two timelines now: a flashback that reveals stuff that there's no reason why it couldn't be revealed in linear fashion. There's no reason why you couldn't reveal where he was earlier, except that's just mystery-box storytelling now, and I really can't stand it. Non-linear for no reason at all. Like, there's a comic book called Sleeper that reveals things non-linearly, and every non-linear reveal accentuates the narrative in a way it wouldn't if you revealed it linearly. But most mystery-box shows, I think, just make things non-linear to keep the audience guessing as to what am I even watching, and a key piece to understanding what you were just watching is revealed later. I always say the difference between a mystery and a mystery-box show is: in a mystery, you're finding out things along with the protagonist, but in a mystery-box show, the protagonist knows things that are withheld from the audience for no good reason. And it kind of hurts the show, because it keeps you from connecting to the protagonist. You're missing a key part of what the protagonist knows, and you're wondering, why is the protagonist acting that way? What is this protagonist's problem? And then two-thirds of the way into the series you realize, oh, he went to the movies, or he did this, and it's like, okay, yeah, now I'm starting to connect, but there are only two episodes left.
Yeah, exactly. And every show does this so much, it must be a mandate, because they could have edited this show straight-up linear and it wouldn't have changed that much, except for maybe episode nine and, like, the one flashback. They could have done that. But I really think the streaming services must mandate it: no one who sits down to write decides, actually, I want all my backstory told in alternating flashbacks throughout. No one wants to write a script like that; that's nonsense. But I think that's what all the streaming services do now, because it makes it so that when you're bored in episodes one and two, you're still confused and don't actually know what's going on, so you're like, yes, I want to know what's going on.
Let me ask you guys something: if it was told linearly, do you think finding out about the rape and the baby killing in the first episode would have hurt the narrative at all?
I don't think it would have hurt the narrative, but I think people would have been turned all the way off.
Yeah, maybe. But couldn't they have just heavily hinted that something bad had happened that prompted the move, without literally showing the scene of the bludgeoning until later?
You mean still showing the scene, but showing it later, and just telling us something terrible happened first.
Maybe you can even tell us we lost a child, and not tell us how. Maybe we thought it was a miscarriage, whatever. But lay the foundation for it so we understand why she's sad, why they're moving, some of the emotional motivation, the emotional underpinnings of the first episodes. And then you would still get a payoff from showing this gruesome event down the line.
Oh, I thought they did hint at it. But maybe that's because I knew it was happening, because I saw people complain about it before. It might have been subtle and I just picked up on it because I already knew. They did show that the baby is not there, but they don't say why. So maybe I just assumed some pieces were there that weren't there.
Yeah. I mean, honestly, that jump, that backstory, doesn't bother me. To me that's not a sequence-of-events issue; that's just, I wish it didn't happen.
Yeah. I mean, it bothered me at first when they were jumping back, because the only way to tell what time period you were in was the husband's beard, and he had a longer beard in the past, which is usually not how it goes; usually in the future you have more facial hair. So it was kind of a chore, and it just reminded me that every single show does this for no reason. Even a show like The Witcher has multiple timelines going on, even though it takes place in fucking 1600s Poland. You need to keep track of two different time periods in fantasy 1600s Poland just to keep up with The Witcher. I wish this style of storytelling would just completely die out, but as long as we're watching streaming stuff, I think it's going to keep going. I think all those awards that Watchmen just won are going to put a whole bunch of fresh gas in the tank for that style. Because one thing about a lot of these new writers who are trying to get into TV: they really cynically study what works and what doesn't work. Like, I was on the social media app Clubhouse, and there were a bunch of them in a room comparing notes about what's won awards recently and what's been buzzy recently, and there was a guy taking notes, and they were naming shows, you know, and what's worked about them. And I was like, wow, this guy is going to totally reverse-engineer a show. He was trying to write a show, and he was asking a bunch of other quote-unquote creatives, you know, what are some good shows that have gotten a lot of buzz lately and stuff. So I think a lot of these writers deliberately try to reverse-engineer what's getting awards and what's getting a lot of good reviews.
So that's why we said TV writers aren't creators. Creatives are here.
Yeah, exactly. I'll say this about Watchmen:
I was I was ground zero for why is this so confusing like I did not enjoy the ambiguity of the first
like half the season I was like I'm not trying to read 15 books in 80 years of comics and watch a movie
to understand what's going on in a TV show that I'm only watching to keep up with the zeitgeist
because I love old girl from always commercials like this is not this like this is not as like it
shouldn't be hard it shouldn't be work for me to understand a freaking show but I think watchman
paid off I think I was frustrated for seven tenths of this thing but by the end they linked up all of
the bits and a paid off now I wish they I think they could have done it without requiring all of that
confusion and frustration and making me stick it out for the beginning but I did appreciate that
it all seemed to be for purpose and that is not how I felt about them I think I could have liked
watchman if it just didn't call stuff for watchman that was my biggest problem with it to be honest it
just was not watchman if they just gave it a different name and different characters and just wrote
like a little one season mystery box show I would have I would have been annoyed with the mystery
boxness of it but I could have just enjoyed it like 50 shades of gray you wanted them to 50 shades of
gray you know how that's like Twilight and one of them like fanficking is that what 50 shades of gray is
like a fanfick twilight oh I didn't even know that it's interesting yeah exactly just give it
its own name and whatever yeah that's perfect I get where you're coming from now yeah yeah exactly
it's twilight set without vampires rich people instead of vampires which same yeah totally
like like which is funny because the original watchman was that too like um these comic book heroes
like Alan Moore wanted to write about these comic book heroes they told him he couldn't you mean
this alternate version if Damon Lindelof had made his alternate name-swap version of Watchmen and just
did his own thing I think I could have appreciated it as a mystery box show been annoyed at the flashbacks
but you know been in the same place where you were but yeah in general yeah this whole timeline
jumping thing in flashback thing I think is 90% of the time unwarranted wonderful to have such a
impassioned discussion thank you I really love this show and thank you all for uh tolerating my
my passion because this is I really feel like what I watch is I'm like this is one of my favorite
things I've watched like especially when it comes to like streaming this is not a easy show to go
to the mattresses for us I'm impressed that you know you were willing to go to the mattresses but
the show oh yeah protect little Marvin at all can we at least agree can we at least agree that he
did not know how to even the way you're gonna say it no I don't agree it can can we at least agree
that he did not completely drop the bag like a lot of people would on their on their first show can
you can't even give him that this is better than the first season of what any rando would do if
they were given Amazon money can you can we at least give him a little Marvin nothing I'm sorry oh come
on I don't think it was the worst thing in the world it was just so much of it that I got I presented it
over time if it were two hour movie that was only okay then I wouldn't be so frustrated yeah I can
see I can see that and that was my thinking like about halfway through I was like man I just wish
it was a two hour movie but then I then it kind of started winning me over especially with episode nine
I just thought that was just really really good much like it was as good as that one it was much I
think it was actually better than one episode of watchman everyone says is good that also think
is episode nine I think that's another thing on the streaming shows are doing where like only like
the eighth or ninth episode is a black and white flashback ends the only good episode of the season
yeah like that that's that's another thing they're all doing but I don't know that I can go aside
but the problem with that one good episode of watchman was that it was just one particular issue of
the watchman tv show they just redid it again like there's one episode the there's one issue of the
comic where because I watched watchman then I reread the comic right so I watched the watchman tv show
I'm like I don't like this tv show but I won't lie this episode is good then I reread the comic
in preparation of us doing a show on it then I got to that issue of the comic and I was like damn I
can't even give it credit for that it was just a retelling of this issue and I was so upset because
that was the one thing I was giving the show credit on and I had forgotten that it was actually a
not for no retelling basically of an issue of the comic all right folks that was culturally black
for struggle session I am Leslie Lee III I'm Briahna Joy Gray I'm Trevor Beaulieu
thank you so much peace
(gentle music) | {"url":"https://sesh.show/episode/them-w-briahna-joy-gray-and-trevor-beaulieu-unlocked","timestamp":"2024-11-01T19:41:52Z","content_type":"text/html","content_length":"430759","record_id":"<urn:uuid:1badc5c6-6040-4f54-9f76-1b47dd7f5fa2>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00090.warc.gz"} |
sort – Drawing with Numbers
I recently got a question via email about how to sort a view by two different criteria at the same time. Here’s an Excel table of opportunities where for each month and Opportunity ID there’s a
forecast of size of opportunity:
The goal here is to sort the opportunities within each account type by the nearest (by month) and biggest opportunities (by Forecast size) first, so it looks more like this in Tableau:
Now with that data in Excel arranged in a tabular form I can sort on December, then November, then October, and so on, and get the desired sort:
But in Tableau I can’t do that: if I try to work with the data in that “wide” format with a column for each month, it just won’t work. If I use Tableau’s pivot feature to make the data “tall”, with a record for each Opportunity ID and Month, I still run into problems. I want to sort the opportunities by each month and by Forecast, but when I try to sort the Opportunity ID dimension I can’t get it to work; it only sorts by a single month’s values, so in the view below Opportunity ID 864280 should be the first new one for August since:
The Excel way isn’t good because each month I have to manually re-sort the data. And in Tableau it just seems impossible to get the right sort because it looks like we need to sort in two different
directions at once (get the earliest non-zero month for each opportunity, and then sort up and down the opportunities in each account type), and Tableau only lets us sort on one thing at a time.
However, it is possible – read on for how to get this kind of sort in Tableau and maybe learn a few math tricks!
Part of how this kind of problem can be more challenging is the way the problem (and data) is initially presented to us. When we see the data in the crosstab form in Tableau the *appearance* is that
we need to sort in two different directions. In fact, we really only need to sort in one direction based on the forecast value in the first month for each opportunity, so in the view below we’d want
Opportunity ID 864271 to be the first one sorted because it’s from July 2016.
Each opportunity row in the view needs to be sorted within the account type by the first (earliest) month where there is a non-zero forecast and then by the value of Forecast in descending order for
that month.
The key to sorting headers and panes in Tableau is that it’s done using the discrete (blue) pills on Rows or Columns from left to right. So the left-most discrete (blue) pill’s headers are sorted, then the 2nd discrete pill’s headers are sorted, and so on. For discrete dimensions from a primary source we can sort by a measure, use the default alphanumeric sort, or sort manually; otherwise any discrete pills are by default alphanumerically sorted or manually sorted.
Therefore in this case I knew I needed to either return a measure that could sort some dimension (like the Opportunity ID) or return a discrete dimension value that, with the default alphanumeric sort, would sort correctly. Note that filtering wouldn’t work here because the goal is to show a sorted crosstab.
The next part of working out the solution is how to structure this value for sorting. I’ve done some multi-level sorting in the past where I needed a nested sort of a single dimension by two different criteria, and a common construct is a number of the form X.Y, where the integer portion X is from one criterion and the decimal portion Y is from the other criterion. So with the default alphanumeric sort 1.2 comes before 1.3, which comes before 2.1, etc.
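The X.Y construct can be sketched outside Tableau. Here's a minimal Python illustration (the record values are made up for the example): an ascending sort on the combined number orders by the integer criterion first and the decimal criterion second.

```python
# Hypothetical records: (id, date number, inverted-forecast decimal part).
# The integer carries the primary criterion (earlier date = smaller number),
# the decimal carries the secondary criterion.
records = [
    ("opp_A", 42552, 0.3),
    ("opp_B", 42552, 0.1),
    ("opp_C", 42521, 0.8),
]

def sort_value(date_number, decimal_part):
    """Primary criterion in the integer part, secondary in the decimals."""
    return date_number + decimal_part

ordered = sorted(records, key=lambda r: sort_value(r[1], r[2]))
print([r[0] for r in ordered])  # ['opp_C', 'opp_B', 'opp_A']
```

opp_C comes first because its integer (date) part is smallest; opp_B beats opp_A within the same date because its decimal part is smaller.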
So for the integer part of the sort I need to convert the date for each opportunity into a number where the Forecast is greater than 0. The Date Number (temp) calc has the formula:
IF [Forecast] > 0 THEN
This converts the date into an integer, in this case the number of days since 1/1/1900. To get the first (earliest) month for each opportunity, all I need to do is aggregate it with MIN() at the
level of Opportunity ID:
Ultimately, this is what we’re going to do to get that pill sort of Opportunity IDs in the final view.
For the decimal part of the sort I needed a number where the smallest numbers reflected the largest values, and it needed a value between 0 and 0.999999 (it can’t be a whole value of 1 because that
would affect the integer sort). A way to turn a set of positive numbers into decimal numbers between 0 and 1 is to do X/(max X). In this case X is the Forecast, so to get the max X in the data I used
the Level of Detail Expression, here’s the Max Forecast (temp) formula:
{FIXED : MAX([Forecast])}
Now if I do [Forecast]/MAX([Forecast]) that’s going to return a number between 0 and 1 that preserves the original ordering of values, i.e. bigger values of Forecast are closer to 1. So to invert that I used 1 – X/(max X). If (max X) is 10 and X is 9, then the result of (1 – 9/10) is 0.1, while if X is 2 then the result of (1 – 2/10) is 0.8, a bigger number.
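As a quick check of the inversion trick in plain Python (forecast values made up): dividing by the maximum maps forecasts into the 0-to-1 range, and subtracting from 1 flips the order so the biggest forecast gets the smallest decimal.

```python
forecasts = [9.0, 2.0, 10.0, 5.0]
max_f = max(forecasts)

# 1 - X/(max X): biggest forecast -> 0.0, smaller forecasts -> closer to 1
inverted = [1 - f / max_f for f in forecasts]

# sorting ascending on the inverted value puts the biggest forecast first
order = sorted(range(len(forecasts)), key=lambda i: inverted[i])
print([forecasts[i] for i in order])  # [10.0, 9.0, 5.0, 2.0]
```
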
We avoid results of 1 that could affect the sort by skipping values of where the Forecast is 0, here’s the Invert Forecast (temp) formula:
IF [Forecast] > 0 THEN
1-[Forecast]/[LOD Max Forecast Value]
I could have avoided the LOD expression for the max value by just setting a gigantically huge number; however, past experience with foreign currencies has shown me that whatever huge number I can imagine is likely to be smaller than reality, so I chose to make sure that the value comes from the data.
With all the values worked out I could now put everything together into a single calculation, this is the Sort calc that returns record-level values:
IF [Forecast] > 0 THEN
+ (1-[Forecast]/{FIXED : MAX([Forecast])})
//Forecast is 0, return a really big number that will be put at the end of the sort
This calc returns the numbers as record level values.
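Putting the pieces together outside Tableau, here is a Python sketch of the whole approach (the record values are hypothetical): compute the record-level sort value, take the MIN per Opportunity ID as the pill sort does, and order the opportunities. The `max_forecast` line stands in for the `{FIXED : MAX([Forecast])}` LOD expression.

```python
from collections import defaultdict

BIG = 10**9  # stands in for the "really big number" used when Forecast is 0

# hypothetical records: (opportunity_id, date_number, forecast)
rows = [
    ("864271", 42552, 100.0),  # July 2016
    ("864271", 42583, 50.0),   # August 2016
    ("864280", 42583, 200.0),  # August 2016
    ("864280", 42614, 10.0),   # September 2016
    ("999999", 42614, 0.0),    # no non-zero forecast yet
]

# stand-in for {FIXED : MAX([Forecast])}
max_forecast = max(f for _, _, f in rows)

def sort_calc(date_number, forecast):
    """Record-level analogue of the Sort calc described in the post."""
    if forecast > 0:
        return date_number + (1 - forecast / max_forecast)
    return BIG  # zero forecasts go to the end of the sort

# MIN() at the level of Opportunity ID, like the pill sort on the dimension
min_sort = defaultdict(lambda: BIG)
for opp, d, f in rows:
    min_sort[opp] = min(min_sort[opp], sort_calc(d, f))

ordered = sorted(min_sort, key=min_sort.get)
print(ordered)  # ['864271', '864280', '999999']
```

864271 sorts first because it has the earliest non-zero month; 999999 lands at the end because all of its forecasts are zero.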
To show how the sort works out I set up this view where the Sort calc is used as the leftmost discrete dimension to show what gets sorted first, with the bar chart we can quickly visually verify that
the dates are sorted with earliest months first and then by the Forecast within each month:
Note that there’s a different value for each Opportunity ID/month combination, when what we really want is the single minimum value for each Opportunity ID. So we need to aggregate the Sort measure with MIN() at the level of detail of Opportunity ID, and we can do just that using a pill sort on the Opportunity ID dimension:
Now we can arrange pills to build the original crosstab view and have the desired sort:
And as the data updates the sort will automatically work, in this case I’ve added January 2017 to the data:
The following bits of Tableau knowledge were necessary to get the desired results:
• How Tableau sorts views using discrete pills.
• How Tableau’s pill sorts work.
• A tiny bit of Level of Detail Expression building.
And then the following math tricks were used:
• Combining two different numbers into one using an integer number for one value and a decimal number for the second value.
• Making positive numbers into a range of 0-1 using X/(max X). A different formula would be needed if there were negative numbers and/or the desired range was different.
• Inverting ranges to make big numbers small and small numbers big using 1 – X/(max X)
FYI if LOD expressions are not available in your particular data source then you could use a table calculation, a data blend, or just manually enter your equivalent of the Max Forecast value. I set
up a table calculation version as well in the Sorting by Two Values Tableau Public workbook.
sort by more than one measure
Two options – sort by RANK, or by using one measure that is offset by a number of decimal places plus the other measure http://community.tableausoftware.com/ideas/3261#comment-9526
Sorting and Top N
Why Tableau sorts the way it does (Ross Bunker post):
Nested Sort
Dynamic sort:
find top N within category (using a set):
can also use Index() instead of a Running Total of # of Records
this also shows how to do a nested sort
Top N within a Category:
Just show top N lines, using filter within IF statement within Top N filter:
Find Top N based on most recent results:
Show Top N, then put everyone else in an “other” group
Long thread on how to do this:
another thread on this:
easy top N and bottom N via v8 sets:
Top N filter and order of filter application
some complex sorting within table calcs
Sorting or ranking within a dimension, with some funky stuff with groups and colors
Sorting an “Other” dimension member at the end of a list, post by Andy Kriebel w/useful comment from Joe Mako at the bottom:
Sorting a view by a table calc
– create the table calc
– create a calculation that uses the table calc to generate something that Tableau’s default sort will work just fine with, e.g. low to high numbers
– put the calculation on the Rows shelf
– set the calculation to Discrete
– drag the blue pill to the leftmost position on the Rows shelf
– uncheck Show Header for that pill
Example from a forum post I created:
Example from Joe Mako, for making a fake color legend that sorts based on a measure:
Standard Rank, Standard Competition Rank
James Baker post:
Show top results with >N% of total, everything else as other
Top 25%/bottom 25%
(Joe’s workbook is another example of “partitioning” by table calculations)
Another partitioning by table calc workbook:
| {"url":"https://drawingwithnumbers.artisart.org/tag/sort/","timestamp":"2024-11-09T04:57:52Z","content_type":"text/html","content_length":"133175","record_id":"<urn:uuid:6738b118-0e3b-4541-8cda-3bb07c8eaf38>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00341.warc.gz"}
Class 9 Maths Notes Exercise 11.2 PDF - FBISE Solved Past Papers
Federal Board Class 9 Maths Notes Exercise 11.2 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page.
Federal Board Class 9 Computer Science Notes Chapter 5 Computer Networks solution of all the past and model papers questions. Class 9 Computer Science Notes Chapter 5 See also Physics 9 Chapter 3
Past Papers Questions
Federal Board Class 9 Maths Notes Exercise 15 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Exercise 15 See also Class 9 Maths Chapter 5 Notes
Federal Board Class 9 Computer Science Notes Chapter 1 Fundamentals of Computer solution of all the past papers and model papers questions. Class 9 Computer Science Notes Chapter 1 See also Class 9
Chemistry Chapter 1 Fundamentals of Chemistry Complete Notes
Federal Board Class 9 Maths Notes Review Exercise 10 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Review Exercise 10 See also Class 9 Maths
Important MCQs for Exam 2021 FBISE
Federal Board Class 9 Maths Notes Exercise 4.4 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Exercise 4.4 See also Math Book of Class 9
Federal & Punjab Board PDF Download
Federal Board Class 9 Maths Notes Exercise 5.4 PDF is solved for the students. For more notes visit the Class 9 Maths Notes Page. Class 9 Maths Notes Exercise 5.4 See also Math Book of Class 9
Federal & Punjab Board PDF Download | {"url":"https://fbisesolvedpastpapers.com/class-9-maths-notes-exercise-11-2/","timestamp":"2024-11-13T15:17:14Z","content_type":"text/html","content_length":"120671","record_id":"<urn:uuid:f7e489fd-90ac-4086-bc61-7d1151a65f50>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00699.warc.gz"} |
As a student I was an excellent maths student, but due to scarcity of time I couldn't give attention to my daughter's math education. It was an issue I could not resolve, and then I came across this software. Algebrator was of immense help for her. She could now learn the basics of algebra. This was shown in her next term grades.
Max Duncan, OH
I am using this program to help both of my children's Algebra homework although they are of different ages. It is such a blessing to have it around.
D.H., Tennessee
Algebrator is truly an educational software. My students feel at ease while using it. It's like having an expert sit next to you.
Allen Donland, GA
My math professor suggested I use your Algebrator product to help me learn the quadratic equations, and non-linear inequalities, since I just could not follow what he was teaching. I was very
skeptical at first, but when I started to understand how to enter the equations, I was amazed with the solution process your software provides. I tell everyone in my class that has problems to
purchase your product.
Rick Edmondson, TX
I am a 9th grade student and always wondered how some students always got good marks in mathematics, but could never imagine that I'd be one of them. Hats off to Algebrator! Now I have a firm grasp over algebra and my approach to problem solving is more methodical.
J.F., Alaska | {"url":"https://sofsource.com/math-homework/how-to-write-a-rule-for-a-quad.html","timestamp":"2024-11-10T08:53:18Z","content_type":"text/html","content_length":"89378","record_id":"<urn:uuid:2fab7501-90de-4fb7-bc7b-aa809b7f2241>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00775.warc.gz"} |
Test 18
Flow Past an Elastic Object
Two classic benchmarks are provided to validate the capability of SPH model for FSI problems with different density ratios. The two test cases correspond to the FSI2 and FSI3 cases proposed by Turek
and Hron (2006). As shown in Fig.1, an elastic plate is attached to a rigid cylinder. The elastic plate oscillates under the forces induced by the vortex shedding. Case FSI2 is more widely used in
the literature than FSI3 because in FSI3 the density ratio between the structure and the fluid is only 1, which can be regarded as a small density ratio, and therefore makes FSI3 more challenging and
requires a stronger FSI coupling algorithm in the numerical solver.
Fig. 1. Sketch for the problem of FSI between an elastic object and laminar incompressible flow
Flow phenomena
Incompressible Flow
Viscous Flow
Fluid-structure Interaction
As shown in Fig.1, an elastic plate is attached to a rigid cylinder of diameter 𝐷=0.1 m and they are immersed in a viscous current with density 𝜌=1000 kg/m3 and kinematic viscosity 𝜈=1E−3 m2/s. The
length of the fluid domain is 𝐿2=2.5 m and the width is 𝐻2=0.41 m.
The cylinder center marked by 𝐶 is located at (0,0) which is the origin of the reference frame, and the tip point 𝐴 of the plate is located at (0.25 m, 0). The bottom of the flow is located at 𝑦=−0.2
m, and therefore the axis of symmetry of the immersed body is located slightly below the axis of symmetry of the fluid domain with a shifted distance of 0.005 m.
Boundary conditions
On all the solid walls, including the lateral walls, the cylinder and plate surfaces, no-slip boundary conditions are imposed.
The current enters from the left side and freely exits from the right side. A parabolic velocity profile is prescribed at the left channel inflow:
𝑢(−0.25,𝑦) = 1.5 𝑈 (𝑦+0.2)(𝐻2−𝑦−0.2) (0.5𝐻2)^(-2),
where the mean inflow velocity is 𝑈 and the maximum of the inflow velocity profile is 1.5 𝑈.
The right-side boundary is treated as a free outlet boundary.
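The prescribed inflow profile can be sanity-checked numerically. This small Python sketch (the discretization is ours, not part of the benchmark) verifies that the profile peaks at 1.5 𝑈 on the channel centerline y = 0.005 m and averages to 𝑈 across the inlet, as stated above.

```python
U = 1.0    # mean inflow velocity (FSI2 value)
H2 = 0.41  # channel width in metres

def u_in(y):
    """Parabolic inflow profile u(-0.25, y) from the benchmark definition."""
    return 1.5 * U * (y + 0.2) * (H2 - y - 0.2) / (0.5 * H2) ** 2

# midpoint-rule average of u over the inlet, y in [-0.2, H2 - 0.2]
N = 20000
dy = H2 / N
mean_u = sum(u_in(-0.2 + (i + 0.5) * dy) for i in range(N)) * dy / H2

print(u_in(0.005))  # ~1.5, the profile maximum at the centerline
print(mean_u)       # ~1.0, recovers the mean inflow velocity U
```
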
Initial conditions
In the test case FSI2, the density of the elastic plate is 𝜌0𝑠=10^4 kg/m3, which leads to a density ratio between the structure and fluid of 𝜌0𝑠/𝜌0𝑓=10. The Young modulus and Poisson ratio are 𝐸𝑠=1.4×10^6 Pa and 𝜈𝑠=0.4, respectively. The mean inflow velocity at the in-flow boundary is 𝑈=1 m/s (which leads to a Reynolds number ℜ=𝑈𝐷/𝜈=100). Due to the lift force exerted by the viscous fluid on the elastic plate, the plate deforms and vibrates under the periodic vortex shedding from the structure surface and, finally, a steady VIV state is achieved.
The challenging test case named FSI3, also proposed by Turek and Hron (2006), is considered by increasing the mean inflow velocity to 𝑈=2 m/s (which leads to a larger Reynolds number ℜ=𝑈𝐷/𝜈=200) and increasing the Young modulus of the elastic plate to 𝐸𝑠=5.6×10^6 Pa (which leads to a higher vibrating frequency). In addition, the density ratio is reduced to 𝜌0𝑠/𝜌0𝑓=1.
Results specifications
To validate the numerical results, the displacements of the tip of the elastic plate (point 𝐴 in Fig. 1) is compared with other reference solutions. For FSI2, solutions from FEM, LBM and SPH models
are available (see the folder named “FSI2”). For FSI3, solutions from SPH and IB-RLB (immersed boundary - regularized lattice Boltzmann) are available (see the folder named “FSI3”).
Results format
1. For all provided ASCII files, the first column is time (unit: s) and the second column is the displacement of point A (unit: m).
2. In the folder named “FSI2”, four numerical results related to BEM (Turek and Hron, 2006), LBM (Li et al., 2017), IBFEM (Bhardwaj and Mittal, 2012) and SPH (Sun et al., 2021) are provided with an
ASCII format.
3. In the folder named “FSI3”, two numerical results related to IB-RLB (Li et al., 2019) and SPH (Sun et al., 2021) are provided with an ASCII format.
Benchmark results of FSI2
Fig.2. Comparison of the vertical displacement of point A between the results of δ+-SPH, Bhardwaj and Mittal (2012), Li and Favier (2017) and Turek and Hron (2006)
Fig. 3. δ+-SPH results of the case FSI2: distributions of the stress component 𝜎11in the elastic plate and the vorticity in the flow field at four time instants
Benchmark results of FSI3
Fig.4. Time evolution of the vertical displacement of point A at the end of the elastic plate, compared to the reference result by Turek and Hron (2006).
Fig.5. Time evolution of the vertical displacement of point A at the end of the elastic plate, compared to the LBM result by Li et al. (2019)
Fig. 6. δ+-SPH results of the case FSI3: distributions of the stress component 𝜎11in the elastic plate and the vorticity in the flow field at four time instants
You can download the full test case below:
Publications using this test case as a benchmark
O'Connor, J., & Rogers, B.D. (2021). A fluid–structure interaction model for free-surface flows and flexible structures using smoothed particle hydrodynamics on a GPU. Journal of Fluids and
Structures, 104, 103312.
Turek, S., & Hron, J. (2006). Proposal for numerical benchmarking of fluid-structure interaction between an elastic object and laminar incompressible flow. In Fluid-structure interaction (pp.
371-385). Springer, Berlin, Heidelberg.
Li, Z., Cao, W., & Le Touzé, D. (2019). On the coupling of a direct-forcing immersed boundary method and the regularized lattice Boltzmann method for fluid-structure interaction. Computers & Fluids,
190, 470-484.
Li, Z., Wang, K., Wu, W., Leo, C. J., & Wang, N. (2017). Vertical vibration of a large-diameter pipe pile considering the radial inhomogeneity of soil caused by the construction disturbance effect.
Computers and Geotechnics, 85, 90-102.
Bhardwaj, R., & Mittal, R. (2012). Benchmarking a coupled immersed-boundary-finite-element solver for large-scale flow-induced deformation. AIAA journal, 50(7), 1638-1642.
Sun, P. N., Le Touze, D., Oger, G., & Zhang, A. M. (2021). An accurate FSI-SPH modeling of challenging fluid-structure interaction problems in two and three dimensions. Ocean Engineering, 221,
Sun, P. N., Le Touzé, D., & Zhang, A. M. (2019). Study of a complex fluid-structure dam-breaking benchmark problem using a multi-phase SPH method with APR. Engineering Analysis with Boundary
Elements, 104, 240-258.
Zhang, C., Rezavand, M., & Hu, X. (2021). A multi-resolution SPH method for fluid-structure interactions. Journal of Computational Physics, 429, 110028. | {"url":"https://www.spheric-sph.org/tests/test-18","timestamp":"2024-11-08T08:04:10Z","content_type":"text/html","content_length":"641637","record_id":"<urn:uuid:74dec8b3-759d-454a-a5a3-a907bbda51ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00128.warc.gz"}
How to Calculate GDP Deflator | Sapling
The GDP deflator is a fudge factor that allows us to compare an economy's Gross Domestic Product in two or more different years. It also allows us to accurately assess an economy's real growth rate
over time. It does this by providing a compensating factor that backs inflation out of the GDP results.
The Problem the GDP Deflator Helps Solve
One problem with trying to understand an economy's performance over a period of years is that price inflation skews results. For example, if over the past year your wages increased by 7 percent, but
now as a result of price inflation it costs 10 percent more to buy goods, you've actually lost buying power. Your own personal economy isn't 7 percent greater; it's about 3 percent less.
Nominal vs. Real GDP
The same concept holds true for GDP, which economists define as the total market value in a given year of everything produced within that country's borders, plus exports less imports. For example,
consider a GDP that's growing at the rate of 7 percent annually, but during the same period price inflation grew at the rate of 10 percent. Although what economists call the "Nominal GDP" grew by 7
percent, the economy's "Real GDP" actually shrank by around 3 percent. Comparing nominal GDPs doesn't tell us very much.
GDP Inflator Formulas
One way of overcoming this problem is to establish a base year for annual GDP calculations, then back inflation out of the nominal GDP numbers in later years by using a compensating inflation rate
factor, the "GDP Deflator."
The GDP Deflator equals nominal GDP divided by real GDP times 100
If nominal GDP equals $600 billion and real GDP equals $500 billion, then the GDP Deflator equals 120.
When the GDP Deflator is known, it can be used to calculate Real GDP from Nominal GDP:
Real GDP equals Nominal GDP divided by the GDP Deflator, times 100
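The two formulas can be sketched in a few lines of Python, using the numbers from the article's own example. Note that because the deflator is an index scaled by 100, recovering real GDP from nominal GDP divides by the deflator and multiplies by 100.

```python
def gdp_deflator(nominal, real):
    """GDP Deflator = nominal GDP / real GDP * 100."""
    return nominal / real * 100

def real_gdp(nominal, deflator):
    """Invert the deflator: real GDP = nominal GDP / deflator * 100."""
    return nominal / deflator * 100

print(gdp_deflator(600, 500))  # 120.0, matching the article's example
print(real_gdp(600, 120))      # 500.0, recovering real GDP
```
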
The GDP Deflator and Growth Rate Comparisons
Comparing the growth rates of two economies requires using the GDP inflator to differentiate between real and nominal growth in successive years.
For example, using the GDP Deflator allows you to understand that real Chinese GDP in 2014 grew at the rate of 7.4 percent. Compared to the real U.S. growth rate in 2014 of 2.4 percent, that seems impressive.
However, a static comparison of GDPs in a single year doesn't tell you all you need to know. An analysis of real Chinese GDPs in several different years shows that Chinese growth rates declined year
over year from 2009 through 2014, while the U.S. real GDP increased year over year for the same period. China's economy seems to be slowing down while the U.S. economy is speeding up.
By comparing real GDP growth year over year, economists can more accurately determine a country's long-term economic trend and more accurately compare the growth rates of different economies. The GDP
Deflator gives economists a convenient way of doing this. | {"url":"https://www.sapling.com/5055470/calculate-gdp-deflator","timestamp":"2024-11-03T14:02:46Z","content_type":"text/html","content_length":"313049","record_id":"<urn:uuid:35f20c9c-101c-45e0-9fee-13c9073f1674>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00702.warc.gz"} |
Heat Equation Worksheet - Equations Worksheets
Heat Equation Worksheet
Heat Equation Worksheet – The objective of Expressions and Equations Worksheets is to help your child learn more effectively and efficiently. They include interactive activities and problems based on
the sequence of operations. These worksheets make it simple for children to grasp complicated concepts and simple concepts quickly. These PDFs are free to download and can be utilized by your child
to test math problems. These resources are beneficial for students in 5th-8th grades.
Download Free Heat Equation Worksheet
These worksheets can be used by students in the 5th through 8th grades. These two-step word puzzles are constructed using decimals or fractions. Each worksheet contains ten problems. You can access
them through any print or online resource. These worksheets are a fantastic way to get your students to practice rearranging equations. In addition to allowing students to practice changing
equations, they aid students in understanding the characteristics of equality and reverse operations.
These worksheets are geared towards fifth and eighth graders. These worksheets are perfect for those who are struggling to calculate percentages. You can select from three different kinds of problems.
You can decide to tackle one-step problems containing whole or decimal numbers, or you can use word-based approaches to do fractions or decimals. Each page contains 10 equations. The Equations
Worksheets are used by students from 5th to 8th grade.
These worksheets can be used for practicing fraction calculations and other concepts in algebra. You can choose from many kinds of challenges with these worksheets. It is possible to select the one
that is numerical, word-based or a mix of both. The problem type is also vital, as each will be a unique problem type. Each page will have ten challenges and is a wonderful resource for students in
5th-8th grade.
These worksheets aid students in understanding the connection between variables and numbers. The worksheets let students work on solving polynomial problems and to learn how to apply equations in
everyday life. These worksheets are a fantastic opportunity to gain knowledge about equations and expressions. They will assist you in learning about the different types of mathematical problems and
the various kinds of symbols used to describe them.
These worksheets can also be beneficial to students in the early grades. These worksheets will teach students how to graph equations and solve them. The worksheets are great for practicing polynomial
variables. They will also help you discover how to factor and simplify equations. You can find a great set of equations, expressions and worksheets designed for children of every grade level.
Working through the worksheet yourself is the best way to get a grasp of equations.
There are a variety of worksheets available for teaching quadratic equations, and each level comes with its own worksheet. These worksheets are designed to help you solve problems at the
fourth level. After you've completed a step, you'll be able to move on to solving other types of equations, and then work on problems at the same level. For example, you can find a problem
with the same axis, but as an elongated number.
du Sautoy on symmetry
An interesting TED talk:
Marcus du Sautoy on symmetry
-- interesting to watch, lots of pictures; ignore the fact that it starts with the standard slightly overwrought version of Galois' story. If you want a more accurate version, I recommend Amir
Alexander's Duel at Dawn: Heroes, Martyrs, and the Rise of Modern Mathematics. The talk includes images of tilings from the Alhambra, on which du Sautoy has overlaid animations showing the effect
of rotating them. Presumably they won't let you do this if you actually go there.
cgttrs: solves one of the systems of equations A * X = B, A**T * X = B, or A**H * X = B, - Linux Manuals (l)
CGTTRS - solves one of the systems of equations A * X = B, A**T * X = B, or A**H * X = B,
SUBROUTINE CGTTRS( TRANS, N, NRHS, DL, D, DU, DU2, IPIV, B, LDB, INFO )
CHARACTER TRANS
INTEGER INFO, LDB, N, NRHS
INTEGER IPIV( * )
COMPLEX B( LDB, * ), D( * ), DL( * ), DU( * ), DU2( * )
CGTTRS solves one of the systems of equations
A * X = B, A**T * X = B, or A**H * X = B,
with a tridiagonal matrix A using the LU factorization computed by CGTTRF.
TRANS (input) CHARACTER*1
Specifies the form of the system of equations. = 'N': A * X = B (No transpose)
= 'T': A**T * X = B (Transpose)
= 'C': A**H * X = B (Conjugate transpose)
N (input) INTEGER
The order of the matrix A.
NRHS (input) INTEGER
The number of right hand sides, i.e., the number of columns of the matrix B. NRHS >= 0.
DL (input) COMPLEX array, dimension (N-1)
The (n-1) multipliers that define the matrix L from the LU factorization of A.
D (input) COMPLEX array, dimension (N)
The n diagonal elements of the upper triangular matrix U from the LU factorization of A.
DU (input) COMPLEX array, dimension (N-1)
The (n-1) elements of the first super-diagonal of U.
DU2 (input) COMPLEX array, dimension (N-2)
The (n-2) elements of the second super-diagonal of U.
IPIV (input) INTEGER array, dimension (N)
The pivot indices; for 1 <= i <= n, row i of the matrix was interchanged with row IPIV(i). IPIV(i) will always be either i or i+1; IPIV(i) = i indicates a row interchange was not required.
B (input/output) COMPLEX array, dimension (LDB,NRHS)
On entry, the matrix of right hand side vectors B. On exit, B is overwritten by the solution vectors X.
LDB (input) INTEGER
The leading dimension of the array B. LDB >= max(1,N).
INFO (output) INTEGER
= 0: successful exit
< 0: if INFO = -k, the k-th argument had an illegal value
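As a sketch of what CGTTRS computes (using NumPy rather than calling LAPACK, and with arbitrary illustrative diagonal values), the snippet below builds the dense tridiagonal matrix from the DL, D and DU arrays and solves the three system variants selected by TRANS. The DU2 and IPIV factorization outputs of CGTTRF are not modeled here.

```python
import numpy as np

n = 4
dl = np.array([1 - 1j, 2 + 0j, 1 + 2j])         # sub-diagonal, length n-1
d = np.array([4 + 0j, 5 + 1j, 6 + 0j, 3 - 1j])  # main diagonal, length n
du = np.array([1 + 0j, 2 - 1j, 1 + 1j])         # super-diagonal, length n-1

# Dense tridiagonal A assembled from its three diagonals.
A = np.diag(dl, -1) + np.diag(d, 0) + np.diag(du, 1)
B = np.arange(1, n + 1, dtype=complex).reshape(n, 1)  # one right-hand side

X = np.linalg.solve(A, B)           # TRANS = 'N': A * X = B
XT = np.linalg.solve(A.T, B)        # TRANS = 'T': A**T * X = B
XH = np.linalg.solve(A.conj().T, B) # TRANS = 'C': A**H * X = B

assert np.allclose(A @ X, B)
```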
Viscoelastic Model
Elastic materials having the capacity to dissipate the mechanical energy due to viscous effects are characterized as viscoelastic materials. For multi-axial stress state, the constitutive relation
can be written as:
where e and f are the deviatoric and volumetric strains; G(t − τ) and K(t − τ) are the shear and bulk relaxation functions. The relaxation functions can then be represented by the mechanical model
(shown in this figure), which is usually referred to as a Generalized Maxwell Model, with the following expressions:
where G0 and K0 are the initial shear and bulk moduli (t = 0) given by: G0 = E/2(1+ν) and K0 = E/3(1−2ν).
g_i, k_i, τ_iG, and τ_iK are the i-th shear and bulk moduli and the corresponding relaxation times.
The effect of temperature on the material behavior is introduced through the time-temperature correspondence principle. The mathematical form of the principle is:
where ξ is the reduced time and γ is the shift function. The WLF (Williams-Landel-Ferry) equation is used to approximate the function:
where T0 is the reference temperature, which is usually picked as the glass transition temperature; C1 and C2 are material-dependent constants.
The required parameters include the following:
Parameter category                Symbol               Description
Linear Elastic Parameters         EX                   Elastic modulus
                                  NUXY                 Poisson's ratio
                                  GXY (optional)       Shear modulus
Relaxation Function Parameters    G1, G2, ..., G8      represent g1, g2, ..., g8 in the Generalized Maxwell Model equations
                                  TAUG1, ..., TAUG8    represent τ1G, τ2G, ..., τ8G in the Generalized Maxwell Model equations
                                  K1, K2, ..., K8      represent k1, k2, ..., k8 in the Generalized Maxwell Model equations
                                  TAUK1, ..., TAUK8    represent τ1K, τ2K, ..., τ8K in the Generalized Maxwell Model equations
WLF Equation Parameters           REFTEMP              represents T0 in the WLF equation
                                  VC1                  represents C1 in the WLF equation
                                  VC2                  represents C2 in the WLF equation
In the Tables & Curves tab, the first point of the curve is the G1 or K1 modulus at time t1. At time t = 0, the program automatically computes G0 or K0 from the elastic modulus and Poisson's ratio.
The viscoelastic material model can be used with the draft and high quality solid and thick shell elements.
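As an illustration of the formulas above, the following Python sketch evaluates the WLF shift factor, log10(a_T) = −C1(T − T0)/(C2 + T − T0), and a one-term Generalized Maxwell (Prony series) shear relaxation function. All parameter values are hypothetical, not taken from any real material; the C1/C2 pair are the oft-quoted "universal" WLF constants.

```python
import math

def wlf_log_shift(T, T0, C1, C2):
    """WLF equation: log10(a_T) = -C1*(T - T0) / (C2 + T - T0)."""
    return -C1 * (T - T0) / (C2 + (T - T0))

def shear_relaxation(t, G0, g1, tau1):
    """One-term Prony series: G(t) = G0 * (1 - g1*(1 - exp(-t/tau1)))."""
    return G0 * (1.0 - g1 * (1.0 - math.exp(-t / tau1)))

# Hypothetical values: reference temperature and WLF constants.
T0, C1, C2 = 100.0, 17.44, 51.6
log_aT = wlf_log_shift(T=120.0, T0=T0, C1=C1, C2=C2)  # negative above T0

G0 = 1.0e9                                  # initial shear modulus (t = 0)
G_long = shear_relaxation(1.0e6, G0, g1=0.4, tau1=10.0)  # -> G0*(1 - g1)
```

At t = 0 the relaxation function returns G0, and at long times it decays to G0(1 − g1), matching the Generalized Maxwell behavior the section describes.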
sgelq2.f - Linux Manuals (3)
subroutine sgelq2 (M, N, A, LDA, TAU, WORK, INFO)
SGELQ2 computes the LQ factorization of a general rectangular matrix using an unblocked algorithm.
Function/Subroutine Documentation
subroutine sgelq2 (integer M, integer N, real, dimension( lda, * ) A, integer LDA, real, dimension( * ) TAU, real, dimension( * ) WORK, integer INFO)
SGELQ2 computes the LQ factorization of a general rectangular matrix using an unblocked algorithm.
SGELQ2 computes an LQ factorization of a real m by n matrix A:
A = L * Q.
M is INTEGER
The number of rows of the matrix A. M >= 0.
N is INTEGER
The number of columns of the matrix A. N >= 0.
A is REAL array, dimension (LDA,N)
On entry, the m by n matrix A.
On exit, the elements on and below the diagonal of the array
contain the m by min(m,n) lower trapezoidal matrix L (L is
lower triangular if m <= n); the elements above the diagonal,
with the array TAU, represent the orthogonal matrix Q as a
product of elementary reflectors (see Further Details).
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,M).
TAU is REAL array, dimension (min(M,N))
The scalar factors of the elementary reflectors (see Further Details).
WORK is REAL array, dimension (M)
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Further Details:
The matrix Q is represented as a product of elementary reflectors
Q = H(k) . . . H(2) H(1), where k = min(m,n).
Each H(i) has the form
H(i) = I - tau * v * v**T
where tau is a real scalar, and v is a real vector with
v(1:i-1) = 0 and v(i) = 1; v(i+1:n) is stored on exit in A(i,i+1:n),
and tau in TAU(i).
Definition at line 122 of file sgelq2.f.
Generated automatically by Doxygen for LAPACK from the source code.
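To see what SGELQ2 computes, here is a NumPy sketch (not the unblocked LAPACK algorithm itself) that forms an LQ factorization A = L * Q via the QR factorization of the transpose, since A = L Q is equivalent to Aᵀ = Qᵀ Lᵀ being a QR factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5)).astype(np.float32)  # m by n with m <= n

# LQ via QR of the transpose: A.T = Q_r @ R  =>  A = R.T @ Q_r.T = L @ Q
Q_r, R = np.linalg.qr(A.T)  # reduced QR: Q_r is n x m, R is m x m
L = R.T                     # m x m lower triangular
Q = Q_r.T                   # m x n with orthonormal rows

assert np.allclose(L @ Q, A, atol=1e-5)
assert np.allclose(np.triu(L, 1), 0, atol=1e-6)    # L is lower triangular
assert np.allclose(Q @ Q.T, np.eye(3), atol=1e-5)  # rows of Q orthonormal
```

LAPACK instead represents Q compactly as a product of elementary reflectors H(i) = I − tau * v * vᵀ, as described under Further Details, rather than forming Q explicitly.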
In the circuit shown, the switch S is connected to position P for a long time so that the charge on the capacitor becomes $q_1$ μC. Then S is switched to position Q. After a long time, the charge on
the capacitor is $q_2$ μC.
Q.1 The magnitude of $q_1$ is ___ .
Q.2 The magnitude of $q_2$ is ___ .
The figure below shows the situation when switch S is connected to position P.
No current flows through the capacitor in steady state.
So, $I=\frac {2-1}{1+2}=\frac {1}{3} A$
$V_C =2-2I=2-2\times \frac {1}{3}=\frac {4}{3} V$
$q_1 =CV_C =1\times \frac {4}{3} = \frac {4}{3} \mu C$
Since the unit (μC) is already given in the question, the answer to Q.1 is entered as $q_1 =\frac {4}{3} \approx 1.33$.
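As a quick check of the arithmetic above (assuming the circuit values used in the solution: 2 V and 1 V sources, 1 Ω and 2 Ω resistors, and a 1 μF capacitor):

```python
# Steady state with S at P: no current through the capacitor,
# so the loop current is set by the two sources and the two resistors.
I = (2 - 1) / (1 + 2)  # loop current, A
V_C = 2 - 2 * I        # voltage across the capacitor, V
q1 = 1 * V_C           # q = C * V with C = 1 uF, so q1 is in uC

assert abs(q1 - 4 / 3) < 1e-12
```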
The figure below shows the situation when switch S is connected to position Q.
Program Files
There are a series of scripts used to run DENSS. The core functions of DENSS are stored in the saxstats.py module. The main script to run a single reconstruction is denss.py. There are also scripts
for performing multiple DENSS runs and aligning and averaging maps, as well as helper scripts that can perform some useful tasks. The following is a full list of the available scripts and a brief
description of what they can do:
• denss.py – A tool for calculating an electron density map from solution scattering data.
• denss.fit_data.py – A tool for fitting solution scattering data with smooth function based on Moore’s algorithm for fitting a trigonometric series.
• denss.all.py – Generate, align, and average multiple electron density maps using DENSS.
• superdenss – Generate, align and average multiple electron density maps using EMAN2.
• denss.align.py – A tool for aligning electron density maps.
• denss.align2xyz.py – A tool for aligning an electron density map such that its principal axes of inertia are aligned with the x,y,z axes.
• denss.align_by_principal_axes.py – A tool for aligning an electron density map to another electron density map based only on alignment of principal axes (no minimization).
• denss.average.py – A tool for averaging multiple pre-aligned electron density maps.
• denss.align_and_average.py – A tool for aligning and averaging multiple electron density maps.
• denss.refine.py – A tool for refining an electron density map from solution scattering data.
• denss.calcfsc.py – A tool for calculating the Fourier Shell Correlation between two pre-aligned MRC formatted electron density maps.
• denss.get_info.py – Print some basic information about an MRC file.
• denss.pdb2mrc.py – A tool for calculating electron density maps from pdb files.
• denss.mrc2sas.py – A tool for calculating simple scattering profiles from MRC formatted electron density maps.
• denss.regrid.py – A tool for regridding a scattering profile to another set of q values.
• denss.mrcops.py – A tool for performing basic operations on MRC formatted electron density maps.
• denss.select_enantiomer.py – A tool for selecting the best enantiomer of a density map.
• denss.select_enantiomers.py – A tool for selecting the best enantiomers from a set of multiple density maps.
• denss.generate_reference.py – A tool for generating a reference from a set of maps using a binary averaging procedure.
• best_enantiomers.sh – A tool for generating and selecting enantiomers using EMAN2.
• fsc2res.py – A tool for plotting Fourier Shell Correlation curves and estimating resolution.
Input Data
Solution scattering data, in particular small angle X-ray scattering (SAXS) data, are typically highly oversampled, meaning that there are far more data points measured in an experiment than there
are actual unique pieces of information available. SAXS data are often 20 to 50-fold oversampled, depending on the size of your particle and the geometry of your experiment.
However, running DENSS at such high oversampling ratios is computationally costly, as the size of the array grows as N^3. As DENSS typically runs at an oversampling ratio of 3 to 5 (see below for
details), we need to first reduce the highly oversampled SAXS data to something more manageable, while still retaining all the useful precision brought about by the experimental oversampling.
To do this, we utilize well-known procedures for performing an indirect Fourier transform of the SAXS data. Usually this is used for extracting pair distribution functions, here we need it for
providing a smooth fit to the experimental scattering data and for estimating I(0), which is an important parameter in the data.
Multiple file formats are currently acceptable to denss.py and denss.all.py (the two primary scripts for running DENSS): .dat files, .fit files or .out files. Smooth fits will be extracted from .fit
and .out files. Raw data can be given as a .dat file, which will then be automatically fit with a smooth curve. Alternatively you can manually run the fitting with the denss.fit_data.py script below.
Files ending in .dat that are already smooth curves can also be used, as DENSS will check if the file is raw data or a smooth fit.
Using denss.fit_data.py
A script called denss.fit_data.py is provided which can be used to fit experimental data with a smooth curve based on an extended version of Peter Moore’s approach (Moore 1979) using a trigonometric
series. The denss.fit_data.py script includes a simple interactive GUI for selecting Dmax and the smoothing factor alpha and displays the experimental data, the smooth fit to the data, and the real
space pair distribution function. denss.fit_data.py will save a .fit file containing the smooth fit to the data which can then be used as input to denss.py (see below). Additionally, useful
parameters calculated from the fit, such as the radius of gyration and Porod volume, are displayed and saved in the file header. The manuscript describing the mathematical derivation and the
algorithm of this new approach is currently in preparation.
denss.fit_data.py can be run simply from the command line as:
> denss.fit_data.py -f experimental_data.dat
where experimental_data.dat is the noisy scattering profile, given as a three-column ASCII text file with columns q, I, error. An interactive GUI will appear showing the experimental scattering
profile on the left along with the fit to the data, and the associated pair distribution function (P(r)) on the right. Two interactive sliders on the bottom left can be adjusted for Dmax (the maximum
particle dimension) and the alpha smoothing factor. See denss.fit_data.py -h for more options. When opened, denss.fit_data.py will estimate Dmax directly from the data and will estimate the optimal
alpha value to maximize the smoothness of the P(r) curve while still fitting the I(q) profile. The user can also trim the beginning and ending data points by entering values in the “First point” and
“Last point” input boxes. Additionally, by default the algorithm will extrapolate the fitted intensity curve to ameliorate truncation ripples in the P(r) curve. The “Extrapolate” checkbox can be
clicked to disable this option if desired. The size parameters calculated from the fitted curve (including the forward scattering I(0), radius of gyration Rg, average length vector r, Porod volume
Vp, Volume of correlation Vc, and length of correlation lc) will be displayed with their corresponding uncertainties below the P(r) plot.
When closing the GUI window, this will save a *.fit file that can be used with denss.py for density reconstruction. Additional details such as the Dmax will be printed to the console (Dmax will be
needed for denss.py also). The *.fit file can be used directly in denss.py and the Dmax will then be read from the header. The P(r) function will also be saved to a *_pr.dat file.
Using GNOM from ATSAS
One of the most popular software routines for performing this calculation is called GNOM, from the ATSAS suite of SAXS programs.
DENSS can natively accept GNOM output files (with a .out extension) as input to the denss.py script. An example of such a file, 6lyz.out, is provided along with the DENSS download. This file is a
GNOM formatted file calculated from simulated SAXS data from PDB entry 6LYZ (hen egg white lysozyme). The small, 14 kDa globular particle provides for convenient and quick testing.
However, any valid procedure you like may be used for calculating the smooth fit to the scattering data. These smoothed curves can be given to DENSS as simple ASCII text files with three columns: q
(=4π sin(θ)/λ in Å^-1); intensity; and error on the intensity. DENSS will interpolate the given scattering profile to the necessary q values used in the reconstruction.
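As a minimal sketch of this input format, the snippet below writes a hypothetical three-column profile (the file name and toy intensity curve are illustrative, not part of DENSS) and regrids it onto a coarser set of q values. Linear interpolation is only an assumption here; DENSS's actual interpolation scheme may differ.

```python
import numpy as np

# Hypothetical smooth profile: columns q (1/A), intensity, error.
q = np.linspace(0.01, 0.5, 200)
I = 100.0 * np.exp(-((20.0 * q) ** 2) / 3.0)  # toy Guinier-like intensity
err = 0.01 * I
np.savetxt("profile.dat", np.column_stack([q, I, err]))

# Read the profile back and interpolate onto a coarser grid of q values,
# analogous to the regridding DENSS performs before a reconstruction.
data = np.loadtxt("profile.dat")
q_new = np.linspace(0.01, 0.5, 64)
I_new = np.interp(q_new, data[:, 0], data[:, 1])

assert I_new.shape == (64,)
```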
It is not recommended to use the automated version of GNOM, called datgnom (or autognom). This version truncates the data to low resolution (something like q < 7/Rg) due to limitations of bead
modeling algorithms (such as DAMMIF) that cannot utilize higher resolution data. DENSS, however, is fully capable of using high resolution data (though, due to the limited information content, it
may not improve the resolution of the final map). It is highly recommended to run GNOM manually (for example from within the Primus interface) and ensure that all high resolution data points are
used. Even in cases where the high resolution data may not be of the greatest quality, it is likely to be better than the simple Porod extrapolation that DENSS performs when the voxel sizes are
smaller than the corresponding resolution of the data defined by q_max.
What does a large test statistic indicate?
The larger the test statistic, the smaller the p-value and the more likely you are to reject the null hypothesis. A p-value is an area in the tail of a distribution that tells you the odds of a
result happening by chance.
What test statistic tells you how big the effect was?
Pearson r correlation: this parameter of effect size summarises the strength of the bivariate relationship. The value of the Pearson r correlation effect size varies between -1 (a perfect negative
correlation) and +1 (a perfect positive correlation).
How do you know if a test statistic is appropriate?
How to calculate a test statistic
1. Find the raw scores of the populations. Assume you want to perform a z-test to determine whether the means of two populations are equal.
2. Calculate the standard deviation of the population.
3. Calculate the population mean.
4. Evaluate the z-value.
5. Apply the t-test formula.
6. Interpret the results.
What is a large sample size test?
Further, a t-test may be used in the case of both small samples (n < 30) and large samples (n > 30), but a z-test can be used only in the case of large samples.
How do you interpret a test statistic?
Interpreting test statistics The agreement between your calculated test statistic and the predicted values is described by the p-value. The smaller the p-value, the less likely your test statistic is
to have occurred under the null hypothesis of the statistical test.
What does it mean to have a large effect size?
Effect size tells you how meaningful the relationship between variables or the difference between groups is. It indicates the practical significance of a research outcome. A large effect size means
that a research finding has practical significance, while a small effect size indicates limited practical applications.
When the sample size is large more than 30 we use the statistic?
The central limit theorem (CLT) states that the distribution of sample means approximates a normal distribution as the sample size gets larger, regardless of the population’s distribution. Sample
sizes equal to or greater than 30 are often considered sufficient for the CLT to hold.
What is large sample data?
Large Sample Theory is a name given to the search for approximations to the behaviour of statistical procedures which are derived by computing limits as the sample size, n, tends to infinity. Suppose
we have a data set with a fairly large sample size, say n = 100.
Is large effect size good?
Effect size tells you how meaningful the relationship between variables or the difference between groups is. A large effect size means that a research finding has practical significance, while a
small effect size indicates limited practical applications.
What effect size is small medium and large?
The effect is small because 0.384 is between Cohen's value of 0.2 for a small effect size and 0.5 for a medium effect size. The size of the difference of the means for the two companies is small,
indicating that there is not a significant difference between them. Cohen's standards for small, medium, and large effect sizes:

Size of effect   d
Small            0.2
Medium           0.5
Large            0.8
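The pooled-standard-deviation form of Cohen's d can be sketched as follows (the sample statistics below are made up for illustration):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

# Equal SDs of 1.0 make the pooled SD exactly 1.0, so d is the mean difference.
d = cohens_d(m1=5.0, s1=1.0, n1=30, m2=4.5, s2=1.0, n2=30)
# d = 0.5: a "medium" effect by Cohen's benchmarks
```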
What is the t statistic calculator?
Use the t-statistic calculator (t-value calculator or t test statistic calculator) to compute the t-value of a given dataset using its sample mean, population mean, standard deviation and sample size.
When you calculate a test statistic you have?
The formula for the test statistic depends on the statistical test being used. Generally, the test statistic is calculated as the pattern in your data (i.e. the correlation between variables or
difference between groups) divided by the variance in the data (i.e. the standard deviation).
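For example, a one-sample t statistic divides the difference between the sample mean and the hypothesized mean by the standard error of the mean (the sample values below are made up for illustration):

```python
import math

def one_sample_t(sample, mu0):
    """t = (sample mean - mu0) / (s / sqrt(n)), with s the sample SD (ddof=1)."""
    n = len(sample)
    mean = sum(sample) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return (mean - mu0) / (s / math.sqrt(n))

t = one_sample_t([14.2, 15.1, 15.8, 16.0, 14.9], mu0=15.0)
```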
What is a test statistic?
The test statistic is a number calculated from a statistical test of a hypothesis. It shows how closely your observed data match the distribution expected under the null hypothesis of that
statistical test.
What is the test statistic for one population mean calculator?
The Test Statistic for One Population Mean Calculator is a calculator that is used when the variable is numerical and only one population or group is being studied. Let’s say that an economist,
Economist William German, believes that students who work and go to college only spend, on average, $15 a day on food.
What does the standardized test statistic calculator do?
The standardized test statistic calculator is an easy way for anyone to compare results against a 'normal' population. T-scores and z-scores are broadly similar and tend to do similar things, but
the t-distribution is a bit shorter and fatter-tailed than the normal distribution.
What is the formula for the test statistic in a t-test?
Formulas for the test statistic in t-tests include the sample size, as well as its mean and standard deviation. The exact formula depends on the t-test type – check the sections dedicated to each
particular test for more details.
Linear Dependence and Independence in Linear Algebra
Last updated: Jan 9, 2024
Linear combinations of vectors
If $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ are vectors, then their linear combination is defined as follows:
$$c_1\boldsymbol{v}_1+ c_2\boldsymbol{v}_2+ \cdots+ c_n\boldsymbol{v}_n$$
Where $c_1$, $c_2$, $\cdots$, $c_n$ are scalar constants.
Expressing a vector as a linear combination of other vectors
Consider the following vectors:
$$\boldsymbol{v}_1= \begin{pmatrix} 3\\4 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 1\\2 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix}5\\8\end{pmatrix}$$
Express $\boldsymbol{v}_3$ as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$.
Solution. The linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ that generates $\boldsymbol{v}_3$ is:
$$\boldsymbol{v}_3=\boldsymbol{v}_1+2\boldsymbol{v}_2 \;\;\;\;\;\;\;\;\;\;\Longleftrightarrow\;\;\;\;\;\;\;\;\;\; \begin{pmatrix} 5\\8 \end{pmatrix}= \begin{pmatrix} 3\\4 \end{pmatrix}+ 2\begin
{pmatrix} 1\\2 \end{pmatrix}$$
We say that $\boldsymbol{v}_3$ can be expressed as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$.
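We can verify this combination numerically; a NumPy sketch, solving for the coefficients directly:

```python
import numpy as np

v1 = np.array([3, 4])
v2 = np.array([1, 2])
v3 = np.array([5, 8])

# Solve [v1 v2] @ c = v3 for the combination coefficients c = (c1, c2).
M = np.column_stack([v1, v2])
c = np.linalg.solve(M, v3)

assert np.allclose(c, [1, 2])               # v3 = 1*v1 + 2*v2
assert np.allclose(c[0] * v1 + c[1] * v2, v3)
```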
Linear dependence and independence of two vectors
Linearly dependent vectors
Consider the following two vectors:
Can one vector be expressed as a constant multiple of the other? To answer this question, consider the following equation:
$$$$\label{eq:m3vR3xzP1TnBBuSrILn} c\begin{pmatrix} 1\\2 \end{pmatrix} = \begin{pmatrix} 2\\4 \end{pmatrix}$$$$
Clearly, if we let $c=2$, then the equality holds. This means that if we double the shorter vector, we get the longer vector - this should be clear from the diagram as well.
In fact, recall that as long as the two vectors are pointing in the same direction, there will always exist a constant $c$ that satisfies \eqref{eq:m3vR3xzP1TnBBuSrILn} because
multiplying a vector by a constant involves stretching/shrinking the vector but preserving the direction.
Whenever we can express two vectors as a multiple of one another, we say that the two vectors are linearly dependent.
Linearly independent vectors
Consider the following two vectors:
We ask ourselves the same question - can we express one vector as a multiple of the other? Again, we are interested in finding a constant that makes the following equality hold:
$$$$\label{eq:oDqV0a3tTqzI5YprYaC} c\begin{pmatrix} 1\\2 \end{pmatrix} = \begin{pmatrix} 3\\4 \end{pmatrix}$$$$
Clearly, there exists no constant value $c$ that satisfies the equality. Setting $c=3$ will make the first elements match, but the second elements will not match. Similarly, setting $c=2$ will only
match the second elements.
The fact that $c$ does not exist should make sense from the diagram because $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are pointing in different directions. No matter how much we stretch $\boldsymbol
{v}_1$, we will never be able to obtain $\boldsymbol{v}_2$.
Whenever we cannot express two vectors as a constant multiple of one another, we say that the two vectors are linearly independent.
Deriving the formal definition
Linearly dependent case
Let's now modify equation \eqref{eq:m3vR3xzP1TnBBuSrILn} slightly and include a constant multiple on the right-hand side as well:
$$$$\label{eq:BRDlEdUuNi3ZgHYtFz2} c_1\begin{pmatrix} 1\\2 \end{pmatrix} = c_2\begin{pmatrix} 2\\4 \end{pmatrix}$$$$
Notice that \eqref{eq:BRDlEdUuNi3ZgHYtFz2} is true when $c_1=c_2=0$. However, this is trivial because shrinking any two vectors to a zero vector will always make them identical. If we can find a pair
of non-zero constants $c_1$ and $c_2$ that satisfies the equality, then this implies that we can express one vector using a multiple of the other. To understand why this is true, divide both sides by
$c_2$ to get:
$$\frac{c_1}{c_2}\begin{pmatrix} 1\\2 \end{pmatrix}= \begin{pmatrix} 2\\4 \end{pmatrix}$$
Since $c_1$ and $c_2$ are just constants, the fraction can also be treated as a constant. In this case, $c_1=2$ and $c_2=1$ will satisfy the equation. Because we managed to express one vector as a
multiple of the other, the two vectors must be linearly dependent.
The key takeaway here is that two vectors are linearly dependent only if there exist non-zero constants for the equality \eqref{eq:BRDlEdUuNi3ZgHYtFz2} to hold.
Linearly independent case
Similarly, let's now modify \eqref{eq:oDqV0a3tTqzI5YprYaC} from earlier using two constants:
$$$$\label{eq:VQbMJFwXncwU5HjWo1t} c_1\begin{pmatrix} 1\\2 \end{pmatrix}= c_2\begin{pmatrix} 3\\4 \end{pmatrix}$$$$
There exists no pair of non-zero constants that satisfy the above equality. In other words, the only solution to \eqref{eq:VQbMJFwXncwU5HjWo1t} is $c_1=c_2=0$, which is the trivial solution. Because
neither of the vectors can be expressed as a linear combination of the other, the vectors must be linearly independent.
This means that two vectors are linearly independent only if the constants have to be all zero for the equality \eqref{eq:VQbMJFwXncwU5HjWo1t} to hold.
We have demonstrated that we can determine whether two vectors $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are linearly dependent by checking the existence of non-zero constants that satisfy:
$$$$\label{eq:Yc8qA0BuXIwROxIAtXs} c_1\boldsymbol{v_1}= c_2\boldsymbol{v_2}$$$$
Let's move the vector on the right-hand side to the left-hand side to get:
$$$$\label{eq:PxgyGjfIqOXoUf1mtmG} c_1\boldsymbol{v_1}- c_2\boldsymbol{v_2}=\boldsymbol{0}$$$$
We now convert the sign of $c_2$ to positive:
$$$$\label{eq:mSkwupqaA4DqVy0fcn9} c_1\boldsymbol{v_1}+ c_2\boldsymbol{v_2}=\boldsymbol{0}$$$$
We are allowed to alter the sign of the constants because if there exist constant terms $c_1$ and $c_2$ that satisfy \eqref{eq:mSkwupqaA4DqVy0fcn9}, then there will always exist constant terms $c_1$
and $c_2$ satisfying \eqref{eq:PxgyGjfIqOXoUf1mtmG}, and vice versa. For instance, consider the following equation:
$$c_1\begin{pmatrix} 1\\2 \end{pmatrix} - c_2\begin{pmatrix} 2\\4 \end{pmatrix}= \boldsymbol{0}$$
One solution is $c_1=2$ and $c_2=1$. If we swap the sign of $c_2$, then the solution becomes $c_1=2$ and $c_2=-1$. The constant terms have changed but the actual values they take on do not matter -
all we care about when determining the linear dependence of two vectors is whether or not there exist constant terms $c_1$ and $c_2$ such that the equality in \eqref{eq:mSkwupqaA4DqVy0fcn9} holds.
For our example, the vectors were in $\mathbb{R}^2$, but they can reside in any finite dimension.
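A convenient equivalent of this criterion: stack the two vectors as columns of a matrix; they are linearly independent exactly when the matrix has full rank 2. A NumPy sketch using the vectors from this section:

```python
import numpy as np

def independent(v1, v2):
    # rank 2 <=> only c1 = c2 = 0 satisfies c1*v1 + c2*v2 = 0
    return np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2

assert not independent([1, 2], [2, 4])  # (2,4) = 2*(1,2): dependent
assert independent([1, 2], [3, 4])      # no such constant exists: independent
```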
Linear dependence and independence of more than two vectors
When we have more than two vectors, the question changes from "are the two vectors dependent?" to "is the set of vectors dependent?". If one vector in the set can be expressed as a linear combination of the other vectors, then the set is said to be dependent. Otherwise, the set is independent.
As an example, consider the following set of three vectors:
Here, $\boldsymbol{v}_3$ can be expressed as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ like so:
$$\label{eq:znCaorY29bofgU2SYSI} \boldsymbol{v}_3=2\boldsymbol{v}_1+\boldsymbol{v}_2$$
Visually, this means that $\boldsymbol{v}_3$ lies in the plane spanned by $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$.
Therefore, our set of vectors is dependent.
Every pair of vectors in this set is independent - yet the set of three vectors is dependent.
Now, let's do what we did earlier and generalize the criterion for linear dependence. Recall that the criterion in the case of two vectors \eqref{eq:mSkwupqaA4DqVy0fcn9} is:
$$c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2=\boldsymbol{0}$$
For the case when we have three vectors, we can guess that the criterion would be:
$$\label{eq:KPNLoqNjC0SvplKn6mK} c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+c_3\boldsymbol{v}_3=\boldsymbol{0}$$
It turns out that this is precisely the criterion for linear dependence of three vectors! Let's now understand why. Suppose there exist $c_1$, $c_2$ and $c_3$ that are not all zero and satisfy \eqref{eq:KPNLoqNjC0SvplKn6mK}. Assuming $c_3\ne0$ (if $c_3=0$, we can instead solve for whichever vector has a non-zero coefficient), let's rewrite \eqref{eq:KPNLoqNjC0SvplKn6mK} such that $\boldsymbol{v}_3$ is the subject:
$$\label{eq:NhBnLsW2XRQzdLZ97cp} \boldsymbol{v}_3=-\frac{c_1}{c_3}\boldsymbol{v}_1 -\frac{c_2}{c_3}\boldsymbol{v}_2$$
Here, we can regard the fractional coefficients as constants. We have managed to express $\boldsymbol{v}_3$ as a linear combination of the vectors $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$, which means that the set of vectors is linearly dependent.
Now, what if the only coefficients that satisfy equation \eqref{eq:KPNLoqNjC0SvplKn6mK} are all zeros? In that case, every term on the left-hand side vanishes and we are left with $\boldsymbol{0}=\boldsymbol{0}$. Because we cannot make $\boldsymbol{v}_1$, $\boldsymbol{v}_2$ or $\boldsymbol{v}_3$ the subject, none of the vectors in our set can be constructed as a linear combination of the other vectors in the set.
To generalize further, instead of just three vectors, what if we had $n$ vectors? The criterion to test for linear dependence would be:
$$\label{eq:lpuN20Wpqn1PEOzGrWE} c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_n\boldsymbol{v}_n =\boldsymbol{0}$$
What we have derived is a criterion to test for linear dependence given multiple vectors. If there exist coefficients, not all zero, satisfying \eqref{eq:lpuN20Wpqn1PEOzGrWE}, then the set of vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ is linearly dependent. On the other hand, if the only way to get the zero vector is by having all the constant terms be zero, then we have a linearly independent set.
Formal definition of linear dependence and independence
The set of vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ is said to be linearly dependent if there exist scalars $c_1$, $c_2$, $\cdots$, $c_n$, not all zero, such that:
$$\label{eq:LB5A64eXwsweyudsytL} c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_n\boldsymbol{v}_n =\boldsymbol{0}$$
If the only way for the equality to hold is for all $c_1$, $c_2$, $\cdots$, $c_n$ to equal zero, then the set of vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ is said
to be linearly independent.
Note that the left-hand side, which is a sum of scalar-vector products, is called a linear combination of vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$.
Remark. Instead of the above definition, we could have defined linear dependence and independence as follows:
• a set of vectors is linearly dependent if one vector can be constructed as a linear combination of the other vectors.
• a set of vectors is linearly independent if none of the vectors can be constructed as a linear combination of the other vectors.
The reason why the formal definition is preferred is that equation \eqref{eq:LB5A64eXwsweyudsytL} can be expressed in a more practical way using matrices. We shall get back to this later.
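Because the formal definition reduces linear (in)dependence to whether a homogeneous system has a non-trivial solution, it can also be checked numerically. Below is a sketch (assuming NumPy is available, and using the fact - not stated above - that a set of $n$ column vectors is linearly dependent exactly when the matrix they form has rank less than $n$):

```python
import numpy as np

# Columns of each matrix are the vectors being tested.
# The first pair, (1, 2) and (2, 4), appeared earlier in the text;
# the second pair is an illustrative independent example.
A_dep = np.array([[1, 2],
                  [2, 4]])   # second column is 2x the first
A_ind = np.array([[1, 2],
                  [2, 5]])

# n column vectors are linearly dependent exactly when rank < n.
rank_dep = np.linalg.matrix_rank(A_dep)
rank_ind = np.linalg.matrix_rank(A_ind)

print(rank_dep)  # 1 -> linearly dependent
print(rank_ind)  # 2 -> linearly independent
```

For larger sets, the same test applies: stack the vectors as columns and compare the rank to the number of vectors.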
Linearly dependent set in R2
Consider the following set of vectors:
$$\boldsymbol{v}_1= \begin{pmatrix} 3\\2 \end{pmatrix},\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 6\\4 \end{pmatrix}$$
Show that $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are linearly dependent.
Solution. At first glance, $\boldsymbol{v}_2$ is simply twice $\boldsymbol{v}_1$, so we immediately know that they are linearly dependent. Let's still go through the steps to reach this conclusion mathematically. The criterion to test for linear dependence is:
$$\label{eq:R2jcsXyswMYpOp4zuNL} c_1 \begin{pmatrix}3\\2\end{pmatrix} + c_2\begin{pmatrix}6\\4\end{pmatrix} =\boldsymbol{0}$$
We can reformulate this as a system of linear equations:
$$\begin{cases} 3c_1+6c_2=0\\ 2c_1+4c_2=0 \end{cases}$$
From the first equation, we know that:
$$\label{eq:T1uZG54EHUKQ84EhWv5} c_1=-2c_2$$
Substituting this into the second equation gives:
$$2(-2c_2)+4c_2=0\;\;\;\;\Longleftrightarrow\;\;\;\;0=0$$
This means that there are infinitely many solutions - any pair of $c_1$ and $c_2$ that satisfies the equality \eqref{eq:T1uZG54EHUKQ84EhWv5} is a valid solution. For instance, $c_1=4$ and $c_2=-2$ is
one possible solution. Since there exist coefficients $c_1$ and $c_2$ that are not both zeros and satisfy \eqref{eq:R2jcsXyswMYpOp4zuNL}, we conclude that our vectors are linearly dependent.
Because $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are linearly dependent, we can express one vector as a scalar multiple of the other. Let's pick the coefficients $c_1=4$ and $c_2=-2$ because they
satisfy \eqref{eq:T1uZG54EHUKQ84EhWv5}. Substituting them into \eqref{eq:R2jcsXyswMYpOp4zuNL} gives:
$$(4)\begin{pmatrix}3\\2\end{pmatrix} +(-2)\begin{pmatrix}6\\4\end{pmatrix} =\boldsymbol{0} \;\;\;\;\;\;\;\;\;\Longleftrightarrow\;\;\;\;\;\;\;\;\; 4\boldsymbol{v}_1 -2\boldsymbol{v}_2 =\boldsymbol{0} \;\;\;\;\;\;\;\;\;\Longleftrightarrow\;\;\;\;\;\;\;\;\; \boldsymbol{v}_2= 2\boldsymbol{v}_1 $$
Indeed, $\boldsymbol{v}_2$ can be expressed as $2$ times $\boldsymbol{v}_1$.
Linearly independent set in R2
Consider the following pair of vectors:
$$\boldsymbol{v}_1= \begin{pmatrix} 3\\2 \end{pmatrix},\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 6\\5 \end{pmatrix}$$
Show that $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are linearly independent.
Solution. We can easily tell that $\boldsymbol{v}_2$ cannot be expressed as a scalar multiple of $\boldsymbol{v}_1$ and vice versa. Therefore, the two vectors must be linearly independent. Let's
still confirm this mathematically using our criterion:
$$\label{eq:ClVxXzPeL3eG5xOgjBO} c_1 \begin{pmatrix}3\\2\end{pmatrix} + c_2\begin{pmatrix}6\\5\end{pmatrix}= \boldsymbol{0}$$
We can reformulate this as a system of linear equations:
$$\begin{cases} 3c_1+6c_2=0\\ 2c_1+5c_2=0 \end{cases}$$
From the first row, we have that:
$$\label{eq:vlpR9E6sNDEy66izBb9} c_1=-2c_2$$
Substituting this into the second row gives:
$$2(-2c_2)+5c_2=0\;\;\;\;\Longleftrightarrow\;\;\;\;c_2=0$$
From \eqref{eq:vlpR9E6sNDEy66izBb9}, we have that $c_1=0$ as well. Since the only way \eqref{eq:ClVxXzPeL3eG5xOgjBO} can be satisfied is by having the coefficients be $0$, we conclude that the
vectors are linearly independent.
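As a quick numerical cross-check of this example (a sketch, assuming NumPy): for a square matrix $A$ whose columns are the vectors, $A\boldsymbol{c}=\boldsymbol{0}$ has only the trivial solution precisely when $\det A\neq 0$.

```python
import numpy as np

# Columns are v1 = (3, 2) and v2 = (6, 5) from the example above.
A = np.array([[3.0, 6.0],
              [2.0, 5.0]])

det_int = int(round(float(np.linalg.det(A))))  # 3*5 - 2*6 = 3
c = np.linalg.solve(A, np.zeros(2))            # unique solution of A c = 0

print(det_int)  # 3 -> A is invertible
print(c)        # [0. 0.] -> only the trivial solution: independent
```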
Linearly dependent set in R3
Consider the following three vectors:
$$\boldsymbol{v}_1= \begin{pmatrix} 2\\4\\3 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 1\\2\\2 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix} 4\\8\\7 \end{pmatrix}$$
Show that the vectors are linearly dependent.
Solution. Let's solve for $c_1$, $c_2$ and $c_3$ below:
$$\label{eq:jobxCfha2MUEOEJEA5i} c_1\begin{pmatrix} 2\\4\\3 \end{pmatrix} +c_2\begin{pmatrix} 1\\2\\2 \end{pmatrix}+ c_3\begin{pmatrix} 4\\8\\7 \end{pmatrix} =\boldsymbol{0}$$
Rewriting this as a system of linear equations:
$$\begin{cases} 2c_1+c_2+4c_3=0\\ 4c_1+2c_2+8c_3=0\\ 3c_1+2c_2+7c_3=0\\ \end{cases}$$
Multiplying the top equation by $2$ gives the middle equation, so the two equations are equivalent and one of them is redundant. Since we effectively have $2$ equations with $3$ unknowns, one constant term is free to vary. This means that there are infinitely many solutions $c_1$, $c_2$ and $c_3$ satisfying \eqref{eq:jobxCfha2MUEOEJEA5i}. Because there exists a set of coefficients, not all zero, that satisfies \eqref{eq:jobxCfha2MUEOEJEA5i}, the three vectors must be linearly dependent.
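We can also exhibit a concrete non-trivial solution. The coefficients below ($c_1=-1$, $c_2=-2$, $c_3=1$) were worked out by hand from the remaining equations and are not given in the text; the check itself is a sketch assuming NumPy:

```python
import numpy as np

v1 = np.array([2, 4, 3])
v2 = np.array([1, 2, 2])
v3 = np.array([4, 8, 7])

# One non-trivial solution of c1*v1 + c2*v2 + c3*v3 = 0
# (found by hand: set c3 = 1, then solve the remaining 2x2 system).
combo = -1 * v1 - 2 * v2 + 1 * v3

print(combo)  # [0 0 0] -> the set is linearly dependent
```

Equivalently, $\boldsymbol{v}_3=\boldsymbol{v}_1+2\boldsymbol{v}_2$.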
Linearly independent set in R3
Consider the following three vectors:
$$\boldsymbol{v}_1= \begin{pmatrix}1\\0\\0\end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix}0\\1\\0\end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix}0\\0\\1\end{pmatrix}$$
Show that the set of vectors is linearly independent.
Solution. By inspection, these vectors are linearly independent because we cannot construct any of these vectors using the other vectors. Let's still solve for $c_1$, $c_2$ and $c_3$ below:
$$\label{eq:yFQuisYj2aZCWXuWXmA} c_1\begin{pmatrix}1\\0\\0\end{pmatrix} +c_2\begin{pmatrix}0\\1\\0\end{pmatrix}+ c_3\begin{pmatrix}0\\0\\1\end{pmatrix} =\boldsymbol{0}$$
This corresponds to the following system of linear equations:
$$\begin{cases} c_1=0\\c_2=0\\c_3=0\\ \end{cases}$$
Because the only coefficients that satisfy \eqref{eq:yFQuisYj2aZCWXuWXmA} are all zeros, our set of vectors must be linearly independent.
Expressing linear combinations using matrix-vector product
Consider the following linear combination:
$$x_1\boldsymbol{a}_1+ x_2\boldsymbol{a}_2+ \cdots+ x_n\boldsymbol{a}_n$$
where $x_i\in\mathbb{R}$ and $\boldsymbol{a}_i\in\mathbb{R}^m$ for $i=1,2,\cdots,n$. This can be expressed as a matrix-vector product:
$$\label{eq:y0B5oUWXjxeqQX1HUIN} x_1\boldsymbol{a}_1+ x_2\boldsymbol{a}_2+ \cdots+ x_n\boldsymbol{a}_n= \begin{pmatrix} \vert&\vert&\vert&\vert\\ \boldsymbol{a}_1&\boldsymbol{a}_2&\cdots&\boldsymbol{a}_n\\ \vert&\vert&\vert&\vert \end{pmatrix} \begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{pmatrix}= \boldsymbol{A}\boldsymbol{x}$$
Here, $\boldsymbol{A}$ is a matrix whose columns are composed of vectors $\boldsymbol{a}_i$.
Proof. This theorem is equivalent to a theorem proved earlier.
Linear combination of three vectors
Consider the following linear combination of vectors:
$$\label{eq:NN895vYDrnrfWzljlhZ} 2\boldsymbol{a}_1+ 5\boldsymbol{a}_2+ \boldsymbol{a}_3$$
where the vectors are defined as:
$$\boldsymbol{a}_1=\begin{pmatrix} 1\\ 3\\ 2 \end{pmatrix},\;\;\;\; \boldsymbol{a}_2=\begin{pmatrix} 9\\ 3\\ 2 \end{pmatrix},\;\;\;\; \boldsymbol{a}_3=\begin{pmatrix} 5\\ 4\\ 2 \end{pmatrix}$$
Express \eqref{eq:NN895vYDrnrfWzljlhZ} as a matrix-vector product.
Solution. We can directly use the theorem above to express equation \eqref{eq:NN895vYDrnrfWzljlhZ} as a matrix-vector product:
$$\begin{align*} 2\boldsymbol{a}_1+5\boldsymbol{a}_2+\boldsymbol{a}_3 &=\begin{pmatrix} \vert&\vert&\vert\\ \boldsymbol{a}_1&\boldsymbol{a}_2&\boldsymbol{a}_3\\ \vert&\vert&\vert \end{pmatrix} \begin{pmatrix} 2\\5\\1 \end{pmatrix}\\&= \begin{pmatrix} 1&9&5\\3&3&4\\2&2&2 \end{pmatrix} \begin{pmatrix} 2\\ 5\\ 1 \end{pmatrix} \end{align*}$$
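As a numerical sanity check (a sketch, assuming NumPy), both sides of the computation above evaluate to the same vector:

```python
import numpy as np

a1 = np.array([1.0, 3.0, 2.0])
a2 = np.array([9.0, 3.0, 2.0])
a3 = np.array([5.0, 4.0, 2.0])
x = np.array([2.0, 5.0, 1.0])

# Left: the linear combination written out term by term.
lhs = x[0] * a1 + x[1] * a2 + x[2] * a3

# Right: stack the vectors as columns and take the matrix-vector product.
A = np.column_stack([a1, a2, a3])
rhs = A @ x

print(lhs)                    # [52. 25. 16.]
print(np.allclose(lhs, rhs))  # True
```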
Formal definition of linear dependence and independence in matrix-vector form
Recall that the criterion of linear dependence is:
$$c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_n\boldsymbol{v}_n =\boldsymbol{0}$$
Using the theorem above, we can rewrite this in terms of a matrix-vector product:
$$\label{eq:kTGrGeyO2ZMz4lDlHFg} \boldsymbol{0}= \begin{pmatrix} \vert&\vert&\vert&\vert\\ \boldsymbol{v}_1&\boldsymbol{v}_2&\cdots&\boldsymbol{v}_n\\ \vert&\vert&\vert&\vert \end{pmatrix} \begin{pmatrix} c_1\\ c_2\\ \vdots\\ c_n \end{pmatrix}$$
Note the following:
• if the only solution is $c_1=c_2=\cdots=c_n=0$, then the set of vectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_n$ is linearly independent.
• otherwise, the set is linearly dependent.
Remark. The matrix-vector form has two advantages:
• we can easily use a computer program to solve for the coefficients.
• we can use Gaussian elimination to solve for the coefficients. We will see an example of this later.
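As a concrete illustration of the first point, the coefficient vectors solving \eqref{eq:kTGrGeyO2ZMz4lDlHFg} form the null space of the matrix, and one standard way to compute a basis for it is via the singular value decomposition. This is a sketch assuming NumPy; `null_space_basis` is our own helper name, not a library function.

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Columns spanning {c : A c = 0}, computed from the full SVD.

    Rows of Vt whose corresponding singular value is (near) zero
    span the null space of A.
    """
    U, s, Vt = np.linalg.svd(A)     # full SVD: Vt is n x n
    s_full = np.zeros(Vt.shape[0])  # pad s for rectangular A
    s_full[:len(s)] = s
    return Vt[s_full < tol].T

# Columns are the dependent pair (3, 2) and (6, 4) from an earlier example.
A = np.array([[3.0, 6.0],
              [2.0, 4.0]])
N = null_space_basis(A)

print(N.shape[1])             # 1 -> a non-trivial solution exists: dependent
print(np.allclose(A @ N, 0))  # True: every column of N satisfies A c = 0
```

An empty basis (zero columns) would instead indicate that only the trivial solution exists, i.e. linear independence.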
Showing linear independence using Gaussian elimination
Consider the following two vectors:
$$\boldsymbol{v}_1=\begin{pmatrix} 3\\2 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2=\begin{pmatrix} 6\\2 \end{pmatrix}$$
Show that these two vectors are linearly independent.
Solution. Using matrix-vector notation, the criterion for linear dependence is:
$$\label{eq:r436dJsdMikBCUppLTT} \begin{pmatrix} 0\\ 0 \end{pmatrix}= \begin{pmatrix} 3&6\\ 2&2 \end{pmatrix} \begin{pmatrix} c_1\\ c_2 \end{pmatrix}$$
We can ignore the zero vector on the left during Gaussian elimination, because performing any elementary row operation (e.g. row addition, scaling, swapping) has no effect on a column of zeros. Therefore, we can focus on row-reducing the coefficient matrix only:
$$\label{eq:Gw1enp022fFpeSRSviZ} \begin{pmatrix} 3&6\\ 2&2 \end{pmatrix} \sim \begin{pmatrix} 1&2\\ 1&1 \end{pmatrix} \sim \begin{pmatrix} 1&2\\ 0&1 \end{pmatrix} \sim \begin{pmatrix} 1&0\\ 0&1 \end{pmatrix}$$
This means that \eqref{eq:r436dJsdMikBCUppLTT} can be reformulated as below since they share the same solution set:
$$\label{eq:Z4UDMRfkEtliiAfOQ14} \begin{pmatrix} 0\\ 0 \end{pmatrix}= \begin{pmatrix} 1&0\\ 0&1 \end{pmatrix} \begin{pmatrix} c_1\\ c_2 \end{pmatrix}$$
The only way the above can be true is if $c_1$ and $c_2$ are both equal to zero. This means that the original vectors $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ are linearly independent.
Showing linear dependence using Gaussian elimination
Consider the following vectors:
$$\boldsymbol{v}_1= \begin{pmatrix} 2\\3\\1 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 1\\1\\3 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix} 7\\9\\11 \end{pmatrix}$$
Show that this set of vectors is linearly dependent. Also, express $\boldsymbol{v}_3$ as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$.
Solution. The criterion for linear dependence is:
$$\label{eq:Yp1nezlKJNYTxna9JZi} c_1\boldsymbol{v}_1 +c_2\boldsymbol{v}_2 +c_3\boldsymbol{v}_3 =\boldsymbol{0}$$
Expressing this in matrix-vector form:
$$\begin{pmatrix} \vert&\vert&\vert\\\boldsymbol{v}_1&\boldsymbol{v}_2&\boldsymbol{v}_3\\\vert&\vert&\vert \end{pmatrix} \begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}=\boldsymbol{0} \;\;\;\;\;\;\;\;\Longleftrightarrow\;\;\;\;\;\;\;\; \begin{pmatrix}2&1&7\\3&1&9\\1&3&11\end{pmatrix} \begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}=\boldsymbol{0}$$
Now, we perform row-reduction on the coefficient matrix like so:
$$\begin{pmatrix} 2 & 1 & 7\\ 3 & 1 & 9\\ 1 & 3 & 11 \end{pmatrix} \sim \begin{pmatrix} 2 & 1 & 7\\ 0 & 1 & 3\\ 0 & -5 & -15 \end{pmatrix} \sim \begin{pmatrix} 2 & 1 & 7\\ 0 & 1 & 3\\ 0 & 1 & 3 \end{pmatrix} \sim \begin{pmatrix} 2 & 1 & 7\\ 0 & 1 & 3\\ 0 & 0 & 0 \end{pmatrix} \sim \begin{pmatrix} 2 & 0 & 4\\ 0 & 1 & 3\\ 0 & 0 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2\\ 0 & 1 & 3\\ 0 & 0 & 0 \end{pmatrix}$$
Because the last row is all zeros, $c_3$ can take on any value, so the system has infinitely many solutions. This means that the set of vectors is linearly dependent, and we can therefore express one vector as a linear combination of the other vectors.
For simplicity, let's say $c_3=1$. Using the second row of the reduced row echelon form, we have that $c_2=-3$. Using the first row, we have that $c_1=-2$. Substituting these coefficients into \eqref
{eq:Yp1nezlKJNYTxna9JZi} gives us:
$$-2\boldsymbol{v}_1 -3\boldsymbol{v}_2 +\boldsymbol{v}_3 =\boldsymbol{0}$$
Making $\boldsymbol{v}_3$ the subject:
$$\boldsymbol{v}_3= 2\boldsymbol{v}_1+3\boldsymbol{v}_2$$
We have managed to express $\boldsymbol{v}_3$ as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$.
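The row-reduction used in this example can be automated. Below is a minimal Gauss-Jordan elimination sketch in plain Python (our own illustration, not code from the text), using exact `Fraction` arithmetic to avoid floating-point issues; applied to the coefficient matrix above, it reproduces the reduced row echelon form:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a non-zero entry in this column.
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]   # swap into place
        scale = M[pivot_row][col]
        M[pivot_row] = [x / scale for x in M[pivot_row]]  # scale pivot to 1
        for r in range(rows):                             # clear the column elsewhere
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Coefficient matrix whose columns are v1, v2, v3 from the example.
R = rref([[2, 1, 7],
          [3, 1, 9],
          [1, 3, 11]])
R_int = [[int(x) for x in row] for row in R]  # entries here are exact integers
print(R_int)  # [[1, 0, 2], [0, 1, 3], [0, 0, 0]]
```

Reading off the non-zero rows gives $c_1=-2c_3$ and $c_2=-3c_3$, matching the solution above.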
Practice problems
Consider the following vectors:
$$\boldsymbol{v}_1= \begin{pmatrix} 5\\3 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 2\\1 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix}3\\1\end{pmatrix}$$
Express $\boldsymbol{v}_1$ as a linear combination of $\boldsymbol{v}_2$ and $\boldsymbol{v}_3$. How many possible linear combinations are there?
We want to solve for coefficients $c_1$, $c_2$ and $c_3$ below:
$$\begin{pmatrix} 5&2&3\\ 3&1&1 \end{pmatrix} \begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}= \begin{pmatrix}0\\0\end{pmatrix}$$
We can solve for $c_1$, $c_2$ and $c_3$ by row-reducing the coefficient matrix:
$$\begin{pmatrix} 5&2&3\\3&1&1 \end{pmatrix}\sim \begin{pmatrix} 15&6&9\\15&5&5 \end{pmatrix}\sim \begin{pmatrix} 15&6&9\\0&1&4 \end{pmatrix}\sim \begin{pmatrix} 5&2&3\\0&1&4 \end{pmatrix}\sim \begin{pmatrix} 5&0&-5\\0&1&4 \end{pmatrix}$$
Using this row echelon form, we can solve for the coefficients. From the second row, we have that:
$$\begin{align*} c_2+4c_3&=0\\ c_2&=-4c_3\\ \end{align*}$$
Here, $c_3$ is a free variable so let's set $c_3=1$ for simplicity. This would give us $c_2=-4$.
Next, we look at the first row:
$$\begin{align*} 5c_1-5c_3&=0\\ c_1&=c_3 \end{align*}$$
Since we've set $c_3=1$, we have that $c_1=1$. Therefore, one possible linear combination is:
$$\boldsymbol{v}_1 -4\boldsymbol{v}_2 +\boldsymbol{v}_3=\boldsymbol{0} $$
Solving for $\boldsymbol{v}_1$ gives:
$$\boldsymbol{v}_1= 4\boldsymbol{v}_2 -\boldsymbol{v}_3 $$
Since $c_3$ is a free variable, there are infinitely many solutions for $(c_1,c_2,c_3)$. Note, however, that every solution is a scalar multiple of $(1,-4,1)$, so they all lead to the same expression $\boldsymbol{v}_1=4\boldsymbol{v}_2-\boldsymbol{v}_3$ - in fact, because $\boldsymbol{v}_2$ and $\boldsymbol{v}_3$ are themselves linearly independent, this is the only way to write $\boldsymbol{v}_1$ as a linear combination of them. The existence of a non-zero solution also means that the set of these vectors is linearly dependent.
Consider the following vectors:
$$\boldsymbol{v}_1= \begin{pmatrix} 1\\3\\2 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 2\\1\\4 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix} 10\\10\\20 \end{pmatrix}$$
Express $\boldsymbol{v}_3$ as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$. Is this set of vectors linearly dependent?
We want to solve for $c_1$, $c_2$ and $c_3$ in the following:
$$\begin{pmatrix} 1&2&10\\ 3&1&10\\ 2&4&20 \end{pmatrix} \begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}= \begin{pmatrix}0\\0\\0\end{pmatrix}$$
We row-reduce the coefficient matrix:
$$\begin{pmatrix} 1&2&10\\3&1&10\\2&4&20 \end{pmatrix}\sim \begin{pmatrix} 6&12&60\\6&2&20\\6&12&60 \end{pmatrix}\sim \begin{pmatrix} 6&12&60\\0&10&40\\0&0&0 \end{pmatrix}\sim \begin{pmatrix} 1&2&10\\0&1&4\\0&0&0 \end{pmatrix}$$
We have 2 equations and 3 unknowns, which means that $c_3$ is a free variable. For simplicity, let's set $c_3=1$. From the second row, we have that:
$$\begin{align*} c_2+4&=0\\ c_2&=-4\\ \end{align*}$$
Finally, using the top row:
$$\begin{align*} c_1+2(-4)+10&=0\\ c_1&=-2 \end{align*}$$
We can now express $\boldsymbol{v}_3$ as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$ like so:
$$\begin{align*} -2\boldsymbol{v}_1 -4\boldsymbol{v}_2 +\boldsymbol{v}_3 &=\boldsymbol{0}\\ \boldsymbol{v}_3 &=2\boldsymbol{v}_1+ 4\boldsymbol{v}_2 \end{align*}$$
Since we've managed to express $\boldsymbol{v}_3$ as a linear combination of $\boldsymbol{v}_1$ and $\boldsymbol{v}_2$, the set of vectors is linearly dependent.
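A one-line numerical confirmation of this result (a sketch, assuming NumPy):

```python
import numpy as np

v1 = np.array([1, 3, 2])
v2 = np.array([2, 1, 4])
v3 = np.array([10, 10, 20])

check = 2 * v1 + 4 * v2  # the claimed combination

print((check == v3).all())  # True -> v3 = 2*v1 + 4*v2
```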
Consider the following:
$$\boldsymbol{v}_1= \begin{pmatrix} 7\\0\\4 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 3\\2\\2 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix} 1\\3\\1 \end{pmatrix}$$
Which of the following is true?
The set of vectors is linearly dependent.
The set of vectors is linearly independent.
The goal is to solve the following homogeneous linear system:
$$\begin{pmatrix} 7&3&1\\0&2&3\\4&2&1\\ \end{pmatrix} \begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}= \begin{pmatrix}0\\0\\0\end{pmatrix}$$
We perform row-reduction to get:
$$\begin{pmatrix} 7&3&1\\0&2&3\\4&2&1 \end{pmatrix}\sim \begin{pmatrix} 28&12&4\\0&2&3\\28&14&7 \end{pmatrix}\sim \begin{pmatrix} 28&12&4\\0&2&3\\0&-2&-3 \end{pmatrix}\sim \begin{pmatrix} 7&3&1\\0&2&3\\0&0&0 \end{pmatrix}$$
Since we have a row of all zeros, the homogeneous system has infinitely many solutions. This means that we can express one vector as a linear combination of the other two vectors, and hence the set is linearly dependent.
Consider the following:
$$\boldsymbol{v}_1= \begin{pmatrix} 1\\0\\1 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_2= \begin{pmatrix} 0\\1\\1 \end{pmatrix},\;\;\;\;\; \boldsymbol{v}_3= \begin{pmatrix} 1\\1\\1 \end{pmatrix}$$
Which of the following is true?
The set of vectors is linearly dependent.
The set of vectors is linearly independent.
The goal is to solve the following homogeneous linear system:
$$\begin{pmatrix} 1&0&1\\0&1&1\\1&1&1 \end{pmatrix} \begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}= \begin{pmatrix}0\\0\\0\end{pmatrix}$$
The reduced row echelon form of the coefficient matrix is:
$$\begin{pmatrix} 1&0&1\\0&1&1\\1&1&1 \end{pmatrix}\sim \begin{pmatrix} 1&0&1\\0&1&1\\0&-1&0 \end{pmatrix}\sim \begin{pmatrix} 1&0&1\\0&1&1\\0&0&1 \end{pmatrix}\sim \begin{pmatrix} 1&0&0\\0&1&0\\0&0&1 \end{pmatrix}$$
The only solution to the system is therefore $c_1=c_2=c_3=0$. By definition, this means that the set of vectors is linearly independent; that is, none of the vectors can be expressed as a linear combination of the other two.
Philip M. Whitman
Philip Martin Whitman is an American mathematician who contributed to lattice theory, particularly the theory of free lattices.
Living in Pittsburgh,^[3] he attended Haverford College, where he earned a corporation scholarship for 1936–37,^[4] and a Clementine Cope fellowship for 1937–38,^[5] and was awarded highest honors in mathematical astronomy in 1937.^[6] He was elected to the college's chapter of the Phi Beta Kappa Society.^[7] In June 1937, he was conferred the Bachelor of Science degree from Haverford.^[8] According to Garrett Birkhoff, Whitman was an undergraduate Harvard student in 1937,^[9] and an outstanding graduate student not later than 1940, one of the first who taught elementary courses to freshmen in the mathematics department.^[10] In 1938 he earned his AM,^[11] and in June 1941 he obtained his Ph.D. degree from Harvard University.^[12] He was a member of the AMS not later than 1947,^[13] and was awarded an AMS honorary membership not later than 1995.^[14]
This page was last edited on 26 September 2022, at 22:18
Browsing by Author "Rampersad, Narad"
• Blanchet-Sadri, F.; Currie, James D.; Rampersad, Narad; Fox, Nathan (Integers, 2014-02-20)
We study the combinatorics of vtm, a variant of the Thue-Morse word generated by the non-uniform morphism 0 ↦ 012, 1 ↦ 02, 2 ↦ 1 starting with 0. This infinite ternary sequence appears a lot in
the literature and finds ...
• Lacroix, Anne; Rampersad, Narad (Discrete Mathematics and Theoretical Computer Science, 2013)
If L is a language, the automaticity function AL(n) (resp. NL(n)) of L counts the number of states of a smallest deterministic (resp. non-deterministic) finite automaton that accepts a language
that agrees with L on all ...
• Camungol, Serina; Rampersad, Narad (Mathematical Sciences Publishers, 2015-09-17)
Ochem, Rampersad, and Shallit gave various examples of infinite words avoiding what they called approximate repetitions. An approximate repetition is a factor of the form x x', where x and x' are
close to being identical. ...
• Currie, James D.; Rampersad, Narad (2015-09-14)
In previous work, Currie and Rampersad showed that the growth of the number of binary words avoiding the pattern xxxR was intermediate between polynomial and exponential. We now show that the
same result holds for the ...
• Currie, James D.; Rampersad, Narad; Shallit, Jeffrey (The Electronic Journal of Combinatorics, 2006-09-22)
We characterize the squares occurring in infinite overlap-free binary words and construct various α power-free binary words containing infinitely many overlaps.
• Currie, James D.; Rampersad, Narad (Discrete Mathematics and Theoretical Computer Science, 2014-05-13)
We construct infinite cubefree binary words containing exponentially many distinct squares of length n . We also show that for every positive integer n , there is a cubefree binary square of
length 2n.
• Krawchuk, Colin; Rampersad, Narad (Integers, 2018-03)
Cassaigne et al. introduced the cyclic complexity function c_x(n), which gives the number of cyclic conjugacy classes of length-n factors of a word x. We study the behavior of this function for
the Fibonacci word f and the ...
• Currie, James; Rampersad, Narad (EDP Sciences, 2009)
We show that Dejean’s conjecture holds for n ≥ 27. This brings the final resolution of the conjecture by the approach of Moulin Ollagnier within range of the computationally feasible.
• Zamboni, Luca Q.; Saari, Kalle; Rampersad, Narad; Currie, James D. (Elsevier, 2014-01-22)
Given an infinite word x over an alphabet A, a letter b occurring in x, and a total order \sigma on A, we call the smallest word with respect to \sigma starting with b in the shift orbit closure
of x an extremal word of ...
• Currie, James; Mol, Lucas; Rampersad, Narad (World Scientific, 2017)
We present an infinite family of formulas with reversal whose avoidability index is bounded between 4 and 5, and we show that several members of the family have avoidability index 5. This family
is particularly interesting ...
• Currie, James D.; Rampersad, Narad (The Electronic Journal of Combinatorics, 2008-08-31)
The critical exponent of an infinite word w is the supremum of all rational numbers α such that w contains an α-power. We resolve an open question of Krieger and Shallit by showing that for each
α>2 there is an infinite ...
• Rampersad, Narad (The Electronic Journal of Combinatorics, 2011-06-21)
In combinatorics on words, a word w over an alphabet ∑ is said to avoid a pattern p over an alphabet ∆ if there is no factor x of w and no non-erasing morphism h from ∆* to ∑* such that h(p) = x.
Bell and Goh have recently ...
• Currie, James D.; Rampersad, Narad (Elsevier, 2016-01)
Abstract Consider the set of those binary words with no non-empty factors of the form xxx^R. Du, Mousavi, Schaeffer, and Shallit asked whether this set of words grows polynomially or
exponentially with length. In this ...
• Currie, James; Rampersad, Narad (EDP Sciences, 2010)
Richomme asked the following question: what is the infimum of the real numbers α > 2 such that there exists an infinite word that avoids α-powers but contains arbitrarily large squares beginning
at every position? We resolve ...
• Rampersad, Narad (Integers, 2018-03)
• Charlier, Émilie; Rampersad, Narad; Rigo, Michel; Waxweiler, Laurent (Integers, 2011-12-02)
We study the structure of automata accepting the greedy representations of N in a wide class of numeration systems. We describe the conditions under which such automata can have more than one
strongly connected component ...
• Charlier, Émilie; Lacroix, Anne; Rampersad, Narad (EDP Sciences, 2011)
We prove that the subsets of Nd that are S-recognizable for all abstract numeration systems S are exactly the 1-recognizable sets. This generalizes a result of Lecomte and Rigo in the
one-dimensional setting.
• Currie, James; Rampersad, Narad; Aberkane, Ali (2004-06-19)
We show that the number of ternary words of length n avoiding abelian cubes grows faster than r^n, where r = 2^{1/24}
• Currie, James D.; Mol, Lucas; Rampersad, Narad (EDP Sciences, 2018-02-13)
While a characterization of unavoidable formulas (without reversal) is well-known, little is known about the avoidability of formulas with reversal in general. In this article, we characterize
the unavoidable formulas ...
• Rampersad, Narad (University of WinnipegUniversity of Waterloo, 2007)
The study of combinatorics on words dates back at least to the beginning of the 20th century and the work of Axel Thue. Thue was the first to give an example of an infinite word over a three
letter alphabet that contains ...
A note on distinct distances in rectangular lattices
In his famous 1946 paper, Erdős (1946) proved that the points of an n×n portion of the integer lattice determine Θ(n/√(log n)) distinct distances, and a variant of his technique derives the same bound for n×n portions of several other types of lattices (e.g., see Sheffer (2014)). In this note we consider distinct distances in rectangular lattices of the form {(i, j) ∈ Z² : 0 ≤ i ≤ n^(1-α), 0 ≤ j ≤ n^α}, for some 0 < α < 1/2, and show that the number of distinct distances in such a lattice is Θ(n). In a sense, our proof "bypasses" a deep conjecture in number theory, posed by Cilleruelo and Granville (2007). A positive resolution of this conjecture would also have implied our bound.
Funders and funder numbers:
Hermann Minkowski-MINERVA Center for Geometry
Israel Science Fund
Instituto de Ciencias Matemáticas 338/09, 892/13, SEV-2011-0087
Tel Aviv University
Ministerio de Ciencia e Innovación
Israeli Centers for Research Excellence 4/11
• Discrete geometry
• Distinct distances
• Lattice
Etihad Airlines - the promise and the reality
Hell, half the time I can barely stand up straight to pee... let alone put on a full face of makeup.
I want to see a video that has some turbulence added.
Chris, the larger the aircraft, the less you feel turbulence.
And the A380 aircraft is really worth the trip on any airline and in any class, a cut above experience.
Now one of these new 'apartments' introduced today by Etihad Airlines with "64-inch sliding door, minibar, personal vanity unit, wardrobe and swiveling TV monitor for viewing from either the seat or
the bed" sound like a dream.
Edited by Steven_Draker
No more transcons on a Cessna for me!
Chris….note my avatar. You don't have to stand up at all, pee right out the bottom, and still have fun!
Do you have one of those bent bottle pee things?
Don't need it! Just hang out…..w a t c h . o u t . b e l o w w w w w
No muss, no fuss.
No makeup mirror, tho. Oh, and wear a coat and scarf. Goggles, too.
Always a,
No more transcons on a Cessna for me!
What about one of these? Not all Cessnas are created equal
Might just have to change from my pedal-pusher to this one……long - and - smooth - and sleek - and warm. And a pot to piss in. Guess you have to have one of those to afford it. Would love to just
touch it, maybe stroke it, kiss it.
Bet I could chase down that nasty Red Baron!
Ever Fun
If you could buy the Cessna, who would your crew be? Pilot (AKA Top Dog), Copilot (BACK UP Stud), and the crew (the SCREWS).
I could fantasize that trip with a whole bunch of different hot guys.
First group---Chris E.(pilot), Ace (co pilot), and Nate SF and Dane Scott to serve the passengers during flight :-)
Boston Bill
I just want to be a steward for one cross-continental flight so I can use the name of a book I read 41 years ago while sitting on a beach on Dunk Island off the coast of Brisbane. The name of the
book was "Coffee, Tea or Me."
If you could buy the Cessna, who would your crew be? Pilot (AKA Top Dog), Copilot (BACK UP Stud), and the crew (the SCREWS).
I could fantasize that trip with a whole bunch of different hot guys.
First group---Chris E.(pilot), Ace (co pilot), and Nate SF and Dane Scott to serve the passengers during flight :-)
Boston Bill
All the way with that crew - won't give them turn around time either. Well, turn around, but not turn-around. Can't wait to see the uniforms; open air?
What about one of these? Not all Cessnas are created equal
You won't be lonely on that Citation for long, darling.
I just want to be a steward for one cross-continental flight so I can use the name of a book I read 41 years ago while sitting on a beach on Dunk Island off the coast of Brisbane. The name of the
book was "Coffee, Tea or Me."
Was that the one by/about the stews (ahem, flight attendants)? I want to be a steward who's involved in the flight!
You won't be lonely on that Citation for long, darling.
Well my darling, indeed not, but something tells me that you know your way around a Corporate Jet....
I did get the chance to fly in a corporate 727 a while ago. :o
The "residence" cabin is to make its debut on the Heathrow to Abu Dhabi route. Less than 7 hours flying for $20,000 one way, but that does appear to be for 2 people....
These airlines keep trying to shove more and more people into a smaller space.
Actually, in some cases, I think it's more and more people into bigger spaces - hence the new double deckers.
It's telling my age but my first trip to Hawaii was on the Pan Am double decker in the 50's. What fun for a kid!! (of course, Chris, you're still a kid! and I'm still fun!)
here's a guy who takes a shower on an Emirates flight....I did post this several months ago if anybody is wondering....
My priorities would be a totally flat bed and high enough walls to shield the kitten from the eyes of passers by. A makeup mirror would be great.
These airlines keep trying to shove more and more people into a smaller space.
Aren't they?
http://media.treehugger.com/assets/images/2011/10/airplaneseating.jpg http://nycaviation.com/newspage/wp-content/uploads/2009/06/standing-only-hannibal-300x252.jpg
Steven, I've seen that proposed configuration elsewhere. While it complies with FAA regulations (believe it or not), it's doubtful that airlines would actually implement it because no passenger wants
to fly that way (golly, you don't say). Sheesh, the airlines would have better luck convincing passengers to spoon together to save space.
Yeah, it is crazy that this product will debut on a relatively short hop from London to Abu Dhabi, but I guess that's where all the rich sheikhs are going. Although I guess ultimately it will show up
on longer routes as well. Their 787-9's are supposed to have similar F class apartments and business class. And I think that is going to premiere on IAD-AUH and AUH-HKG or something like that.
Steven, I've seen that proposed configuration elsewhere. While it complies with FAA regulations (believe it or not), it's doubtful that airlines would actually implement it because no passenger
wants to fly that way (golly, you don't say). Sheesh, the airlines would have better luck convincing passengers to spoon together to save space.
If I could fly with Steven I wouldn't mind spooning. HIS ride might not be comfortable but I'd enjoy the hell out of it.
• 3 years later... | {"url":"https://www.companyofmen.org/topic/62008-etihad-airlines-the-promise-and-the-reality/","timestamp":"2024-11-10T18:15:34Z","content_type":"text/html","content_length":"420311","record_id":"<urn:uuid:1be7f07c-5759-42e8-a2da-4dfa45733382>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00632.warc.gz"} |
Bar to Megapascal Converter | bar to MPa
Bar to Megapascal converter | bar to MPa conversion
Are you struggling with converting Bar to Megapascal? Don’t worry! Our online “Bar to Megapascal Converter” is here to simplify the conversion process for you.
Here’s how it works: simply input the value in Bar. The converter instantly gives you the value in Megapascal. No more manual calculations or headaches – it’s all about smooth and effortless conversion.
Think of this Bar (bar) to Megapascal (MPa) converter as your best friend who helps you do the conversion between these pressure units. Say goodbye to manually calculating how many Megapascals
are in a certain number of Bars – this converter does it all for you automatically!
What are Bar and Megapascal?
In simple words, Bar and Megapascal are units of pressure used to measure how much force is applied over a certain area. It’s like measuring how tightly the air is pushing on something.
The short form for Bar is “bar” and the short form for Megapascal is “MPa”.
In everyday life, we use pressure units like Bar and Megapascal to measure how much things are getting squeezed or pushed. It helps us with tasks like checking tire pressure or understanding the
force in different situations.
How to convert from Bar to Megapascal?
If you want to convert between these two units, you can do it manually too. To convert from Bar to Megapascal just use the given formula:
MPa = Value in bar * 0.1
Here are some examples of conversion:
• 2 bar = 2 * 0.1 = 0.2 MPa
• 5 bar = 5 * 0.1 = 0.5 MPa
• 10 bar = 10 * 0.1 = 1 MPa
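The formula and the examples above can also be wrapped in a small helper function. The sketch below is our own illustration (the name bar_to_mpa is not part of this site's tool):

```python
def bar_to_mpa(bar):
    """Convert a pressure in Bar to Megapascal (1 bar = 0.1 MPa)."""
    return bar * 0.1

# The examples above, reproduced in code:
for value in (2, 5, 10):
    print(value, "bar =", bar_to_mpa(value), "MPa")
```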
Bar to Megapascal converter: conclusion
Here we have learned what the pressure units Bar (bar) and Megapascal (MPa) are, and how to convert from Bar to Megapascal manually; we have also created an online tool for conversion between these units.
The "Bar to Megapascal converter", or simply bar to MPa converter, is a valuable tool for simplifying pressure unit conversions. By using this tool you don’t have to do manual calculations for conversion,
which saves you time. | {"url":"https://calculatorguru.net/bar-to-megapascal/","timestamp":"2024-11-06T02:36:29Z","content_type":"text/html","content_length":"121966","record_id":"<urn:uuid:78bd06bb-f86b-44af-8aa8-2f84c74c97c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00718.warc.gz"} |
6.5.5 Divisibility Part 2 CodeHS Answers » Quizzma6.5.5 Divisibility Part 2 CodeHS Answers
The most common answer is:
numerator = int(input(“Enter a numerator: “))
denominator = int(input(“Enter denominator: “))

# If denominator is 0, this will result in a division-by-zero error!
# Add in code to solve this issue:

try:
    if numerator / denominator * denominator == numerator:
        print “Divides evenly!”
except ZeroDivisionError:
    print “Doesn’t divide evenly.”
To correctly handle division by zero and check for divisibility, we can modify your code to use a try-except block properly and ensure the usage of standard ASCII quotation marks.
Additionally, the logic for checking divisibility can be simplified by using the modulo operator %, which directly checks if the remainder of the division is zero.
Here’s the revised version of the code:
numerator = int(input("Enter a numerator: "))
denominator = int(input("Enter a denominator: "))

# Checking for division by zero and divisibility
try:
    # Correct way to check for divisibility
    if numerator % denominator == 0:
        print("Divides evenly!")
    else:
        print("Doesn't divide evenly.")
except ZeroDivisionError:
    print("Cannot divide by zero.")
This program will:
• Prompt the user to enter a numerator and a denominator.
• Use a try-except block to catch and handle division by zero errors.
• Check if the numerator divides evenly by the denominator using the modulo operator %.
• Print “Divides evenly!” if there’s no remainder.
• If the denominator is zero, it will catch the ZeroDivisionError and print “Cannot divide by zero.”
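The same logic can also be packaged as a reusable function instead of an interactive script; this variant (the name divides_evenly is our own, not part of the CodeHS exercise) is easier to test:

```python
def divides_evenly(numerator, denominator):
    """Return a message saying whether numerator is divisible by denominator."""
    try:
        if numerator % denominator == 0:
            return "Divides evenly!"
        else:
            return "Doesn't divide evenly."
    except ZeroDivisionError:
        return "Cannot divide by zero."

print(divides_evenly(10, 5))   # Divides evenly!
print(divides_evenly(10, 3))   # Doesn't divide evenly.
print(divides_evenly(10, 0))   # Cannot divide by zero.
```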
Was this helpful?
Let us know if this was helpful. That’s the only way we can improve. | {"url":"https://quizzma.com/6-5-5-divisibility-part-2-codehs-answers/","timestamp":"2024-11-05T17:01:56Z","content_type":"text/html","content_length":"218225","record_id":"<urn:uuid:5b433141-8105-4b27-b338-68f0908a3787>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00754.warc.gz"} |
Celebrating National Pi Day: The Significance of the Mathematical Constant
Pi (π) is one of the most important and fascinating constants in mathematics, representing the ratio of a circle's circumference to its diameter. The significance of Pi (π) is not just limited to
mathematics; it has also been found applicable in other fields such as engineering, physics, and computer science. It is an irrational number, which means it has an infinite number of decimal
places without repeating. National Pi Day is celebrated on 14th March every year; today we are going to explore the origins of National Pi (π) Day and why it is significant.
What is National Pi (π) Day?
National Pi (π) Day is celebrated on March 14th (3/14) every year. This date was chosen because it corresponds to the first three digits of Pi (π), i.e. 3.14. Physicist Larry Shaw organized the
first celebration of National Pi (π) Day in 1988 at the San Francisco Exploratorium. The celebration included a pi-themed parade and the recitation of pi to as many decimal places as possible.
What is Pi (π)?
Pi is a non-repeating, non-terminating number that has an infinite number of decimal places. The value of Pi is approximately 3.14159, but it is often simplified to 3.14 for practical purposes. Pi has been known and studied for thousands of years, and its discovery and value evolved over time. The ancient Egyptians and Babylonians
were aware of the existence of Pi but did not have a precise value for it. The Greek mathematician Archimedes is credited with discovering one of the earliest methods for approximating Pi in the 3rd
century BCE. To calculate the value of Pi, Archimedes used the "circumscribing and inscribing" process; using this method, he determined that Pi lies between 3.1408 and 3.1429.
In the following centuries, many mathematicians developed new methods for approximating the value of Pi. The Indian mathematician Madhava developed an infinite series of fractions in the 14th century
that could be used to calculate Pi to any degree of accuracy. Another infinite series developed by the English mathematician John Wallis in the 17th century could be used to calculate Pi to many
decimal places.
Despite these advances, the true nature of Pi as an irrational number was not fully understood until the 18th century. The Swiss mathematician Johann Lambert proved that Pi was irrational in 1761,
which means it cannot be expressed as a ratio of two integers. In the centuries following, the search for more and more digits of Pi became a fascination for mathematicians and computer scientists.
The first electronic computer, ENIAC, calculated Pi to over 2000 decimal places in 1949. Today, advanced computers can calculate the value of Pi (π) to trillions of decimal places, revealing patterns
and insights into the nature of this fascinating mathematical constant.
Here are a few of the most famous methods to calculate the value of Pi (π):
1. The Greek Method: The Greek mathematician Archimedes used a process called "circumscribing and inscribing" to calculate the value of Pi to within a few decimal places.
2. The Infinite Series Method: This method was discovered by the Indian mathematician Madhava in the 14th century.
3. The Monte Carlo Method: This method involves using random numbers to estimate the value of Pi.
4. The Spigot Algorithm: This algorithm was developed by two mathematicians, Simon Plouffe and David Bailey, in the 1990s; this method is very efficient for calculating Pi to a billion digits.
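As an illustration, the Monte Carlo method (method 3 above) can be sketched in a few lines of Python. The function name, sample count, and seed below are our own choices, not from any particular source:

```python
import random

def estimate_pi(samples=100_000, seed=0):
    """Estimate Pi by throwing random points at the unit square and
    counting how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    # inside/samples approximates the quarter-circle area, pi/4
    return 4 * inside / samples

print(estimate_pi())  # roughly 3.14 (the estimate improves with more samples)
```

The estimate is statistical, so it only converges slowly: roughly 100 times as many samples are needed for each extra digit of accuracy.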
Why is National Pi Day significant?
National Pi Day celebrates the significance of Pi in mathematics and its applications in other fields. The search for the digits of Pi has been a fascination for many mathematicians for centuries,
and it has led to the discovery of many new formulas and identities.
National Pi Day is celebrated in various ways around the world. Some of the activities that are organized include:
1. Pi recitation contests
2. Pi-themed events
3. Pi-themed lessons
4. Pi-themed sales, etc.
10 most interesting facts about Pi (π):
National Pi Day is a celebration of the significance of Pi in mathematics and its applications in other fields. It's a way to celebrate the beauty and complexity of a mathematical constant that has
intrigued mathematicians and scientists for centuries. We acknowledge the importance of Pi in mathematics and its applications, and we honor the curiosity and creativity of the mathematicians
and scientists who have contributed to our understanding of Pi and its significance.
Post a Comment | {"url":"https://www.thesciencekida.in/2023/03/celebrating-national-pi-day-significance.html","timestamp":"2024-11-11T01:03:42Z","content_type":"application/xhtml+xml","content_length":"283900","record_id":"<urn:uuid:cd0928f3-b7d9-4c47-a20d-ee7960748b44>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00558.warc.gz"} |
Python log2 Function
Kodeclik Blog
Python’s math.log2() function
If you're working with data that follows a logarithmic scale, the log2 function in Python can be an essential tool for your data analysis. In this post, we'll explore what the log2 function is, how
to use it in Python, and some practical applications.
What is the log2 function?
The log2 function is a mathematical function that returns the logarithm of a number with base 2. Let us recall what the logarithm means. Logarithms are just another name for exponents or powers.
Because 10^2 = 100, we say that the logarithm of 100 (to the base 10) is 2. Similarly, because 2^5 = 32, we say that the logarithm of 32 to the base 2 is 5. This is what log2 is, i.e., it is
computing a logarithm to the base 2 (unlike the previous example that was with respect to base 10).
Stated differently, log2() tells you how many times you need to divide a number by 2 before you reach 1. For example, the log2 of 8 is 3, because 8 can be divided by 2 three times to get 1 (8/2/2/2 =
1). Similarly, the log2 of 1 is 0 because you do not have to divide by 2 any number of times: it is already 1!
How do we use log2 in Python?
Using the log2 function in Python is easy. You first need to import the math module, which gives you access to the log2 function. Here is a very simple program you can explore:
import math as m
print(m.log2(32))
In this code, we import the math module as “m”, and then use the log2 function to find and print the logarithm (to the base 2) of 32. The output is 5.0, as expected.
Here is a more elaborate program exploring log2 of various numbers and printing a very descriptive output:
import math as m
for x in range(13):
y = pow(2,x)
print("2^" + str(x) + " = " + str(y) +
", so log2(" + str(y) + ") = " + str(m.log2(y)))
In this program, we use the pow() function to first compute powers of 2, in a loop, and then use log2() to reconstruct the exponent. The resulting values are printed in a user-friendly sentence, one
for each step of the loop. The output is:
2^0 = 1, so log2(1) = 0.0
2^1 = 2, so log2(2) = 1.0
2^2 = 4, so log2(4) = 2.0
2^3 = 8, so log2(8) = 3.0
2^4 = 16, so log2(16) = 4.0
2^5 = 32, so log2(32) = 5.0
2^6 = 64, so log2(64) = 6.0
2^7 = 128, so log2(128) = 7.0
2^8 = 256, so log2(256) = 8.0
2^9 = 512, so log2(512) = 9.0
2^10 = 1024, so log2(1024) = 10.0
2^11 = 2048, so log2(2048) = 11.0
2^12 = 4096, so log2(4096) = 12.0
Parameters of the log2 function
As seen above, the log2 function takes only one argument, which is the number you want to find the logarithm of. The argument must be a positive number, or else a ValueError will be raised.
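For example, passing zero (or any negative number) raises a ValueError, which you can catch like this:

```python
import math

try:
    math.log2(0)                    # zero is not in the domain of log2
except ValueError as err:
    print("ValueError:", err)       # ValueError: math domain error
```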
Practical applications of log2
The log2 function has practical applications in various fields, including data science, cryptography, and signal processing. For example, in data science, it is often used to transform data that
follows a logarithmic scale into a linear scale, which can be easier to analyze.
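As a tiny illustration of that idea, values spanning several powers of 2 become evenly spaced once log2-transformed (the sample data here is made up for the example):

```python
import math

sizes = [1, 8, 64, 512, 4096]              # hypothetical data on a logarithmic scale
log_sizes = [math.log2(s) for s in sizes]
print(log_sizes)  # [0.0, 3.0, 6.0, 9.0, 12.0] -- now on a linear scale
```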
In this post, we've explored what the log2 function is, how to use it in Python, and some practical applications. By using this function, you can easily perform logarithmic calculations in your
Python programs and manipulate data that follows a logarithmic scale.
Interested in more things Python? Check out our post on Python queues. Also see our blogpost on Python's enumerate() capability. Also, if you like Python+math content, see our blogpost on Magic Squares
. Finally, master the Python print function! | {"url":"https://www.kodeclik.com/python-log2/","timestamp":"2024-11-02T02:09:48Z","content_type":"text/html","content_length":"91410","record_id":"<urn:uuid:edacde8f-55f6-49d8-8187-17ccc5543dfa>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00501.warc.gz"} |
Hollow Hemisphere Calculators | List of Hollow Hemisphere Calculators
List of Hollow Hemisphere Calculators
This page gives you a list of online Hollow Hemisphere calculators: tools that perform calculations on the concepts and applications of Hollow Hemisphere geometry.
These calculators will be useful for everyone and save time on the complex procedures involved in obtaining the calculation results. You can also download, share, and print the list of Hollow
Hemisphere calculators with all the formulas. | {"url":"https://www.calculatoratoz.com/en/hollow-hemisphere-Calculators/CalcList-4405","timestamp":"2024-11-12T15:17:16Z","content_type":"application/xhtml+xml","content_length":"118904","record_id":"<urn:uuid:8a382269-e040-4941-84d8-cd5c54b2d65e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00387.warc.gz"} |
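By way of illustration, the kind of calculation such a tool performs can be sketched as follows. The formula for the volume of material in a hollow hemisphere (outer radius R, inner radius r) is the standard (2/3)π(R³ − r³); the function name here is our own:

```python
import math

def hollow_hemisphere_volume(outer_r, inner_r):
    """Volume of the material in a hollow hemisphere:
    (2/3) * pi * (R^3 - r^3), with outer radius R and inner radius r."""
    return (2 / 3) * math.pi * (outer_r ** 3 - inner_r ** 3)

# Example: outer radius 2, inner radius 1
print(hollow_hemisphere_volume(2, 1))  # ~14.66
```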
A truly flat hat top (and part 4 of "pocket hats")
reminder--like the other posts in this series, the top part shows the general directions, the bottom part, directions for the KAL of the "pocket hats" which illustrate the post
For years, a truly flat hat top in ribbing eluded me. With spiral decreases, no matter how ferocious the rate of decrease, the darn thing never lay flat, nor could I find a suitable pattern for
progressively eliminating ribs. With "all-at-once decreases" the ribs became even more distorted and the darn thing still wouldn't lay flat--the 12, 10 or 8 stitches at the top would form a little
nipple when the yarn was drawn through--not quite the thing. On both kinds of decrease, the needles at the top of the hat went slipping out as fewer and fewer stitches remained. But today, I go on my
way rejoicing, because a few months ago, a new trick revealed itself to me, a ...
for a ribbed hat
This hat top features 5 decrease rounds and a final working together at the top, 6 steps in all: these are labeled steps A through F on the illustration, and are described in the text.
Step D is not labeled on the illustration because it is only a minor (although mathematically important) decrease.
Some of these steps are ordinary decreases with which you are familiar: knit 2 together (k2tog) and purl 2 together (p2tog), but some of these steps have a little trick involved.
In the top part of this post -- the general directions -- the number of plain rounds to work between the decreases is left vague, partly because much depends on the yarn you're using: baby weight
requires more plain rounds between decreases than does bulky yarn. In the bottom part of this post, round-by-round directions are given for the hat top in double knitting weight (DK, also called
"light worsted") and you can use these row-by-row directions as a starting point, adding plain rows for thinner yarn, subtracting for thicker.
Part 1: general directions for the truly flat hat top
STEP A (decrease on the purl ribs)
This first round of decreases features an ordinary decrease done on the PURL stitches. Written succinctly:
• *purl 2 together, knit 2* all the way around the first decrease round
Hidden in the purl ribs, the purl 2 together (p2tog) decrease will never show on the outside (although it does show inside). After this decrease round, knit several rounds without decreasing in the
new pattern (k2,p1), and that is the end of step A.
One little note before leaving step A: On a 2x2 ribbing, the rate of decrease in step A leaves 3/4 of the original number of stitches on your needles.
Step B (decrease on the knit ribs)
On this step, a knit 2 together (k2tog) decrease is done on the knit ribs. However, there is a TRICK: an extra round is added, a set-up round where only the purl stitches are worked, while the knit
stitches are not worked at all, only slipped. As shown in the illustration, working the k2togs WITHOUT the set up row results in flabby, slanted stitches, while WITH the set up row, the stitches are
more upright.
Including the set-up row, written succinctly, step B is in 2 rounds:
• round 1 (set up round): * p1, slip the 2 knit stitches of the knit rib purlwise, while holding yarn in back* all the way around the round.
• round 2 (decrease round): *p1, k2tog* all the way around the round.
By inserting an extra round of purls but not knits, the knits are forced to stretch upward. Of course they will slant somewhat, but with much of their slack devoted to stretching up that extra round,
the k2togs will lay smoother and more upright than if the slipping row were omitted.
Three little notes before we leave step B:
• After this step, you will have 1/2 of your original stitch count on the needle.
• From here on out, when you come to count rows, it will look like you lost a row -- if you find that you need to count rows, the row on which you slipped the knits will be invisible--you won't be
able to see it unless you turn the work inside out to count rows!
• Performing the decreases on this round will certainly result in so few stitches that they cannot POSSIBLY be stretched around a circular needle, however short. Therefore, if you were not already
working on double pointed needles (dpn's) or by the magic loop technique, you would have to switch to one of those techniques now.
Step C (decrease away all purl ribs, change gauge)
The problem now is that there is STILL too much yarn and too much slack to make a hat top lay nicely flat. So the little trick of this step is to CHANGE GAUGE. Yes, simply by knitting with a smaller
needle (2 sizes smaller works well) you'll be putting a lot less yarn into the hat top, and that'll help a lot with laying flat.
Changing gauge is not a conventional method of decrease, so it bears repeating one more time: by switching to a smaller needle, you are putting a lot less yarn into the fabric, and this creates a
decrease all by itself. In other words, you will now:
• switch to needles 2 sizes smaller and use these smaller sized needles for the remainder of the hat top.
As it happens, in step C we NOT ONLY want to tighten up all the future stitches we are going to knit, but we ALSO want to decrease away even more of them. Therefore, IN THIS SAME ROUND that you're
switching to smaller needles, you are ALSO going to do another k2tog decrease all around.
The k2tog's will look best if you arrange matters so that the purl stitch lays behind the knit stitch when the k2tog is finished. Written succinctly, and incorporating the previous part about smaller
needles, step C can be summarized:
• arrange matters so that one purl stitch is on the tip of your left needle, with a knit stitch the left of that. *Knit together the knit stitch with the purl stitch* all the way around the round,
using the smaller needles.
As you can see from the illustration below, the combination of gauge change and decreases results in a very pretty and very distinctive change in fabric.
After this decrease, work an additional round or 2 with the smaller needles and this will end step C.
Two little notes before leaving step C:
• After this step, you will have 25% of your original stitch count on the needles.
• In this step, you work away all the purls, and all the rest of the hat top will be by means of knit stitches only.
Step D (another set up row, possibly with decreases)
In step D, you're going to decrease away as many stitches as are needed so that at the end of the step, you'll have a multiple of 4 stitches on your needles. If the number of stitches on your needles
is ALREADY divisible by 4, then you're all set: just knit a round plain. If you wonder WHY you need a multiple of 4, there is a
paragraph of explanation below. If today is not a "why" day for you, just scroll past.
In Step F, the last step, you Kitchener-stitch (graft) the top of the hat together. Step E-- just before that final grafting--is a last decrease round in which you're going to decrease away the
remaining number of stitches by half. So, in order to have the correct number of stitches for steps E and F, this step--step D--gets rid of any extra stitches which would throw off final stitch
counts. In other words, this step, D, is a "set-up round." As you may remember from middle-school math, only numbers which are multiples of 4 (4, 8, 12, 16, 20, 24, 28, 32, 36, 40 or 44, etc.) will
yield an even number when further divided by 2. In other words, round E gets rid of half the remaining stitches, and the number of stitches at the end of round E has to be an even number of stitches,
so rounds E and F will only work if we use round D to decrease away any leftover stitches ahead of time.
Written succinctly, in step D you must proceed as follows:
• If on your needles, you have ANY NUMBER OF STITCHES DIVISIBLE BY 4, you're all set: just knit one round plain. If, however, you have ANY OTHER sort of a number, you must use this step to bring
the number of stitches on your needles to the nearest multiple of 4 by knitting 2 stitches together as many times as needed. You can simply put the necessary number of decreases anywhere at
random in this round, just so long as you don't put them next to one another.
Knit one additional round plain (no decreases) and that is the end of step D.
Step E (last decrease round)
This step is simply a round of k2togs--and it is the last decrease round. The only thing remarkable about this round is that these last decreases are a bit miserable. As a result of previously
switching to smaller needles in step C, these stitches are already at a very small gauge. Knitting two together at this gauge (and on so few loops) means stitches that just want to POP off the
needle. However, persevere, because the final result is worth it.
To make it easier, the little trick
in this step is that you can either use a smaller needle (a tiny sock needle) OR a crochet hook to work the actual decreases, then replace the resulting loops back onto the needles you have been
using (replacing the stitches back on the needles they came from assures that these last loops are of the correct diameter for step F; replace the stitches RIGHT ARM FORWARD so they are in the
correct position for step F).
Step F (graft--a.k.a. Kitchener stitch--the hat top shut)
Arrange the stitches so that they are divided in half. If you are working on dpn's, arrange 1/2 the stitch count on each of two needles. If you are working by magic loop, put 1/2 the stitch count on
each needle of your magic loop set-up. The grand finale, the final trick
of this hat top, is to graft the top together.
(Click here for an easy method of Kitchener stitching.)
Kitchener-stitching the top shut achieves three big aims:
• first, the short lengthwise graft pulls the entire top into the pleasant-looking oval shape of the finished decrease (see illustration below)
• second, grafting prevents you from having to work final decrease rounds on a tiny number of stitches, with needles falling out in all directions and
• third, it makes a really smooth top, avoiding the pointy-looking decreases of other hat tops
This decrease looks very well, and goes very fast--faster and faster on each round as you decrease away more and more stitches.
Part 2: Round-by-round directions for the pocket hat KAL
In the
previous post of the pocket hat KAL
, we left the pocket hats 4 stripes high, with the fifth color just added by the jogless back join method. In this post, all the decreasing of the entire hat will be done in this final color.
The row instructions are written for the Watch Cap.
For the Stocking Cap, add another round in pattern between rows 2 and 3. For the Rasta Hat add another round in pattern between rows 2 and 3 and also between rows 5 and 6.
Stitch counts appear for the two different widths of hats (116 sts = regular size, 120 sts = extra large head size)
• Round 1: After performing the jogless back join in the new color yarn, knit around (no purling) ending just before the first two purl stitches to be worked in the new color. Place marker. (116,
120 sts)
• Rounds 2 and 3: *p2, k2* repeat around (for stocking cap and rasta hat, add another round between rounds 2 and 3, per note in PURPLE, above)
• Round 4: *p2tog, k2* repeat around (87, 90 sts remaining) ROUND 4 corresponds to STEP A of the general directions in part 1 of this post, above.
• Rounds 5 and 6: *p1, k2* repeat around (for rasta hat, add another round between rounds 5 and 6, per note in PURPLE, above)
• Round 7: *p1, sl2* repeat around. (Slip stitches purlwise so as not to twist, slip with yarn in back.)
• Round 8: *p1, k2tog, repeat around (58, 60 sts remaining). Things will start to get tight at this round – it’s time to switch to dpns or the magic loop technique if you haven't done so already.
ROUNDS 7 and 8 correspond to STEP B of the general directions, above.
• Rounds 9 and 10: *p1, k1* repeat around
• Round 11: Switching to needles 2 sizes smaller, and using a long magic-loop type set-up, or a set of dpns, *k2tog* around, setting up so that you have (15, 15) stitches on your first needle, and
(14, 15) on the second (29, 30 stitches remaining). ROUND 11 corresponds to step C of the general directions, above.
• Round 12: Knit plain (no further purling on this hat top).
• Round 13: In order to make the final graft work on this hat it is important to have a multiple of 4 stitches at this point:
□ For the 116 st hat: k13, k2tog, k to end of round (28 sts remaining)
□ For the 120 st hat: k13, k2tog, k13, k2tog (28 sts remaining). ROUND 13 corresponds to step D of the general directions, above.
• Round 14: k plain
• Round 15: *k2tog* around. (14, 14 sts remaining). ROUND 15 corresponds to step E of the general directions, above.
• Round 16: Kitchener-stitch the top of the hat closed. ROUND 16 corresponds to step F of the general directions, above.
Try the hat on. If you find it is too short or too long, the hat need not be ripped out all the way to the beginning to rebalance the color proportion among the stripes. No. As stated in the first
post of this KAL, the ultimate fit of this hat can be somewhat adjusted by working more or fewer rounds only in this last color. So, if you need to fix the fit, rip out to round one of this last
color, then re-work, adding or subtracting in the plain rows to make the hat longer or shorter. If you want to know WHY this procedure does not distort the color pattern even though it seems like it
would, a (long-ish) art-history type explanation is available in a separate post.
If you are following along in the KAL, the steps still remaining to a finished hat are
• BLOCKING and
• TWO ALTERNATIVES to conquer ITCHY-FOREHEAD syndrome
* * *
ADDENDUM, 2011
: The KAL laid out above stretches out over 5 posts, of which this is fourth, and it is free. However, some folks have written to say they find it hard to follow the pattern over so many posts.
So...if you like,
you can buy the pattern in an easy-to-print, all-in-one place pdf.
* * * | {"url":"https://techknitting.blogspot.com/2008/03/truly-flat-hat-top-and-part-4-of-pocket.html","timestamp":"2024-11-13T16:11:00Z","content_type":"application/xhtml+xml","content_length":"116436","record_id":"<urn:uuid:6ad9a672-a485-4527-af8f-98dd2021a62a>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00060.warc.gz"} |