content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
[SOLVED] Lindblad from infinitesimal Kraus sum representation ~ Physics ~ TransWikia.com
I think that your notes want to show that any (time-independent) Markovian master equation is written in the Gorini-Kossakowski-Sudarshan-Lindblad (GKLS) form. My feeling is that they are ignoring
some mathematical details, but intuitively their procedure is sound. The rigorous proof of the equivalence Markovianity-GKLS form is usually a bit more elaborate, and, for instance, you can find it
in the original papers [1,2] or in the standard textbook by Breuer and Petruccione [3].
In my opinion, trying to follow your notes to get to the desired equivalence may be quite confusing. I just would like to point out that the appearance of the time-dependent Kraus operators $$M_k
(delta t)$$, expanded as you have written for small $$delta t$$, is an ansatz, i.e. a priori is not due to any mathematical constraint, but we introduce it for our convenience. Anyway, I suggest you
checking the rigorous proof [3] and trying to compare each step with the discussion in your notes. You can see that, ultimately, they follow the same lines.
I have to say, however, that the approach of your notes is very useful to obtain the Kraus decomposition of the quantum map associated to a given master equation. Let us start from the GKLS form of a
Markovian dynamics: $$dot{rho}(t)=lim_{dtrightarrow 0}frac{rho(t+dt)-rho(t)}{dt}=-i[H,rho(t)]+sum_k gamma_k left(L_krho(t)L_k^dagger-frac{1}{2}{L_k^dagger L_k,rho(t)} right).$$ We want to find the
Kraus decomposition of the quantum map $$phi_{delta t}$$ such that $$phi_{delta t}[rho(t)]=rho(t+delta t)$$, for a small but finite $$delta t$$. We have $$phi_{delta t}[rho(t)]=rho(t)+mathcal{L}[rho
(t)]delta t+O(delta t^2)$$, that can be rewritten as: $$begin{split} phi_{delta t}[rho(t)]=&left(mathbb{I}-i Hdelta t-frac{1}{2}sum_k gamma_k L_k^dagger L_k delta tright)rho(t)left(mathbb{I}+i Hdelta
t-frac{1}{2}sum_k gamma_k L_k^dagger L_k delta tright)\ &+sum_kgamma_k L_krho(t)L_k^daggerdelta t+O(delta t^2). end{split}$$ In conclusion, by setting $$K=-frac{1}{2}sum_k gamma_k L_k^dagger L_k$$,
$$phi_{delta t}$$ can be decomposed through the Kraus operators $$M_0=mathbb{I}-delta t(i H-K)$$, $$M_k=sqrt{gamma_kdelta t}L_k$$, up to a precision of the order of $$O(delta t^2)$$. Note that this
does not tell us how to decompose the general quantum map $$phi_tau[rho(t)]=sum_k tilde{M}_k(tau)rho(t)tilde{M}_k^dagger(tau)$$ which drives the evolution for any large time $$tau$$, and, as far as I
know, such a decomposition is in general not easy to find (one has to solve the master equation, find the Choi matrix, etc...). However, it provides us with a great method to reconstruct the dynamics
generated by the master equation via repeated applications of the map $$phi_{delta t}$$, within a certain precision bounded by $$O(delta t^2)$$. As you can guess, this is very important for the
quantum simulation of open systems: the Kraus operators $$M_0$$ and $$M_k$$ may be obtained as the first-order expansion of some unitary operators (quantum gates) $$U(delta t)$$.
[1] G. Lindblad, Comm. Math. Phys. 48, 119 (1976).
[2] V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, J. Math. Phys. 17, 821 (1976).
[3] H.-P. Breuer and F. Petruccione, The theory of open quantum systems (Oxford University Press, 2002).
Correct answer by Goffredo_Gretzky on December 12, 2020
|
{"url":"https://transwikia.com/physics/lindblad-from-infinitesimal-kraus-sum-representation/","timestamp":"2024-11-07T04:13:00Z","content_type":"text/html","content_length":"51393","record_id":"<urn:uuid:c2734c73-882c-42cd-a1fe-3511bffead4d>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00487.warc.gz"}
|
A Matrix Factorization Approach to Linear Regression | Python-bloggersA Matrix Factorization Approach to Linear Regression
A Matrix Factorization Approach to Linear Regression
This post is intended to shed light on why the closed form solution to linear regression estimates is avoided in statistical software packages. But we start by first by deriving the solution to the
normal equations within the standard multivariate regression setting.
The normal linear regression model specifies that the sampling variability around the mean is i.i.d. from a normal distribution:
$\epsilon_{1}, \cdots, \epsilon_{n} \sim \mathrm{i.i.d.}\hspace{.25em}\mathrm{normal}(0, \sigma^{2})\\ y_{i} = \beta^{T}x_{i} + \epsilon_{i}.$
Therefore the specification for the joint PDF of observed data conditional upon and values is given by:
\begin{align*} f(y_{1}, \cdots, y_{n}|x_{1}, \cdots, x_{n}, \beta, \sigma^{2}) &= \prod_{i=1}^{n} f(y_{i}|x_{i}, \beta, \sigma^{2})\\ &= (2\pi \sigma^{2})^{-n/2} \mathrm{exp}\big(-\frac{1}{2\sigma^
{2}} \sum_{i=1}^{n}(y_{i} - \beta^{T}x_{i})^{2}\big). \end{align*}
Alternatively, the joint PDF can be represented in terms of the multivariate normal distribution. Let $y$ be the n-dimensional response vector, and $X$ the $n \times p$ design matrix whose $i^{th}$
row is $x_{i}$. We have:
$y|X,\beta, \sigma^{2} \sim \mathrm{MVN}(X\beta, \sigma^{2} \mathrm{I})$
Where $I$ represents the $p \times p$ identity matrix.
The density depends on $\beta$ through the residuals. Given the observed data, the term in the exponent $(y_{i} - \beta^{T}x_{i})$ is maximized when the sum of squared residuals, $\mathrm{SSR} = \
sum_{i=1}^{n} (y_{i} - \beta^{T}x_{i})$ is minimized. To find the optimal $\beta$, we expand the expression for the residual sum of squares:
\begin{align*} \mathrm{SSR} &= \sum_{i=1}^{n} (y_{i} - \beta^{T}x_{i})\\ &=(y - X\beta)^{T}(y - X\beta)\\ &=y^{T}y - 2\beta^{T}X^{T}y + \beta^{T}X^{T}X\beta. \end{align*}
Computing the first derivative of the last expression above w.r.t. $\beta$ and setting equal to 0 yields $-2X^{T}y + 2X{T}X\beta = 0$, which can be rearranged to obtain the familiar expression for
the ordinary least squares solution:
$\beta = (X^{T}X)^{-1}X^{T}y.$
Why isn’t this expression implemented in linear model solvers directly?
The condition number of a matrix is the ratio of maximum-to-minimum singular values (which, for a normal matrix, is the ratio of the maximum-to-minimum absolute value of eigenvalues). Essentially,
the condition number tells you how much solving a linear system will magnify any noise in your data. It can be thought of as a measure of amplification. The smaller the condition number, the better
(the best value being 1).
Demonstrating the equivalence of computing the condition
number as the ratio of maximum-to-minimum singular values
and using np.linalg.cond, as well as a comparison of the
condition numbers for X vs. X^T*X.
import numpy as np
rng = np.random.default_rng(516)
X = rng.normal(size=(50, 10))
# SVD for X.
U0, S0, Vt0 = np.linalg.svd(X, full_matrices=True)
c0 = np.linalg.cond(X, p=None)
# SVD for X^T*X.
U1, S1, Vt1 = np.linalg.svd(X.T @ X, full_matrices=True)
c1 = np.linalg.cond(X.T @ X, p=None)
# S0 and S1 represent the singular values of X and X^T*X.
print(f"S0.max() / S0.min() : {S0.max() / S0.min():.8f}.")
print(f"Condition number of X : {c0:.8f}.")
print(f"S1.max() / S1.min() : {S1.max() / S1.min():.8f}.")
print(f"Condition number of X^T*X : {c1:.8f}.")
S0.max() / S0.min() : 2.44498390.
Condition number of X : 2.44498390.
S1.max() / S1.min() : 5.97794628.
Condition number of X^T*X : 5.97794628.
In terms of numerical precision, computing $X^{T}X$ roughly squares the condition number. As an approximation, $\mathrm{log}_{10}(\mathrm{condition})$ represents the number of digits lost in a given
matrix computation. So by merely forming the Gram matrix, we’ve doubled the loss of precision in our final result, since
$\mathrm{log}_{10}(\mathrm{condition}(X^{T}X)) \approx 2 \times \mathrm{log}_{10}(\mathrm{condition}(X)).$
If the condition number of $X$ is small, forming the Gram matrix and solving the system via $\beta = (X^{T}X)^{-1}X^{T}y$ should be fine. But as the condition number grows, solving the normal
equations becomes increasingly unstable, ultimately resulting in a solution devoid of accuracy. Since statistical software packages need to handle an incredible variety of potential design matrices
with a wide range of condition numbers, the normal equations approach cannot be relied upon.
The QR Decomposition
The QR Decomposition factors a matrix $X$ into the product of an orthonormal matrix $Q$ and an upper triangular matrix $R$, $X = QR$. Because $Q$ is orthonormal, $Q^{T} = Q^{-1}$. Beginning with a
version of the normal equations solution, substitute $QR$ for $X$, rearrange and arrive at:
\begin{align*} X^{T}X \beta &= X^{T}y\\ (QR)^{T}(QR) \beta &= (QR)^{T}y\\ R^{T}(Q^{T}Q)R \beta &= R^{T} Q^{T} y\\ R^{T}R \beta &= R^{T} Q^{T} y\\ (R^{T})^{-1} R^{T} R \beta &= (R^{T})^{-1} R^{T} Q^
{T} y\\ R \beta &= Q^{T} y, \end{align*}
where we’ve taken advantage of how transpose distributes over matrix products (i.e. $(AB)^{T} = B^{T}A^{T}$), and the fact that since $Q$ is orthonormal, $Q^{T}Q = I$.
Because $R$ is upper triangular, $\beta$ can be solved for using back substitution. A quick demonstration of how this can be accomplished with Scipy:
# Solving for regression coefficients using normal equations and QR decomposition.
import numpy as np
from scipy.linalg import solve_triangular
rng = np.random.default_rng(516)
X = rng.normal(size=(50, 5))
y = rng.normal(scale=5, size=50)
# Normal equations solution.
B0 = np.linalg.inv(X.T @ X) @ X.T @ y
# QR Decomposition solution.
Q, R = np.linalg.qr(X, mode="reduced")
B1 = solve_triangular(R, Q.T @ y)
print(f"B0: {B0}")
print(f"B1: {B1}")
print(f"np.allclose(B0, B1): {np.allclose(B0, B1)}")
B0: [ 0.42402765 -1.21951527 0.22396056 0.26773935 -0.72067314]
B1: [ 0.42402765 -1.21951527 0.22396056 0.26773935 -0.72067314]
np.allclose(B0, B1): True
Using the QR Decomposition, we’ve eliminated the need to explicitly create the Gram matrix, and no longer need to invert matrices, which is computationally expensive and has its own set of precision
degradation issues.
The Singular Value Decomposition
The Singular Value Decomposition is a generalization of the Eigendecomposition to any $nxp$ matrix. The SVD decomposes a matrix $A$ into 3 matrices ($A = U \Sigma V^{T}$):
• $U$ is a $n \times p$ orthogonal matrix (assuming $A$ is real); columns represent left singular vectors.
• $\Sigma$ is a $p \times p$ diagonal matrix with diagonal entries representing the singular values of $A$.
• $V^{T}$ is a $p \times p$ orthogonal matrix (assuming $A$ is real); rows represent right singular vectors.
Beginning with the normal equations solution, replace $X$ with $U \Sigma V^{T}$ and solve for $\beta$:
\begin{align*} X^{T}X \beta &= X^{T}y\\ (U \Sigma V^{T})^{T}U \Sigma V^{T}\beta &= (U \Sigma V^{T})^{T}y\\ V \Sigma^{T} U^{T} U \Sigma V^{T} \beta &= V \Sigma U^{T} y\\ V \Sigma^{T} \Sigma V^{T} \
beta &=V \Sigma U^{T} y\\ V V^{T} \beta &= V \Sigma^{-1} U^{T} y\\ \beta &= V \Sigma^{-1} U^{T} y \end{align*}
Since $\Sigma$ is a diagonal matrix, the inverse is simply the reciprocal of the diagonal elements, which doesn’t incur the runtime associated with a conventional matrix inversion. In addition, $VV^
{T} = I$. Note that we assume the singular values are strictly greater than 0. If this is not the case, the condition number would be infinite, and a well-defined solution wouldn’t exist.
Determining the estimated coefficients using SVD can be accomplished as follows:
# Solving for regression coefficients using normal equations and SVD.
import numpy as np
rng = np.random.default_rng(516)
X = rng.normal(size=(50, 5))
y = rng.normal(scale=5, size=50)
# Normal equations solution.
B0 = np.linalg.inv(X.T @ X) @ X.T @ y
# SVD solution.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
B1 = Vt.T @ np.diag(1 / S) @ U.T @ y
print(f"B0: {B0}")
print(f"B1: {B1}")
print(f"np.allclose(B0, B1): {np.allclose(B0, B1)}")
B0: [ 0.42402765 -1.21951527 0.22396056 0.26773935 -0.72067314]
B1: [ 0.42402765 -1.21951527 0.22396056 0.26773935 -0.72067314]
np.allclose(B0, B1): True
Each of these methods (normal equations, QR Decomposition, SVD) incur different computational cost. From Golub & Van Loan, the flop count associated with each algorithm is presented below:
Normal Equations | mn^2 + n^3 / 3 |
Householder QR | n^3 / 3 |
Modified Gram-Schmidt | 2mn^2 |
SVD | 2mn^2 + 11n^3 |
Householder and Modified Gram-Schmidt are two approaches used in the first step of the QR Decomposition. SVD offers far more stability, but comes with added runtime complexity. Other matrix
decomposition methods such as LU and Cholesky can be leveraged, but the QR Decomposition represents a good trade-off between runtime and numerical stability. This explains its wide use in statistical
software packages. Check out A Deep Dive into how R Fits a Linear Model for an in-depth explanation of how the QR Decomposition is used to fit linear models in R.
Want to share your content on python-bloggers?
|
{"url":"https://python-bloggers.com/2024/02/a-matrix-factorization-approach-to-linear-regression/","timestamp":"2024-11-07T00:38:08Z","content_type":"text/html","content_length":"48970","record_id":"<urn:uuid:be858c04-e8f8-46d0-92e1-978702dec737>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00875.warc.gz"}
|
Machine learning hyperparameter optimization with Argo - Canva Engineering Blog
Canva uses a variety of machine learning (ML) models, such as recommender systems, information retrieval, attribution models, and natural language processing for various applications. A typical
problem is the amount of time and engineering effort in choosing a set of optimal hyperparameters and configurations used to optimize a learning algorithm's performance.
Hyperparameters are parameters set before a model's learning procedure begins. Hyperparameters, such as the learning rate and batch sizes, control the learning process and affect the predictive
performance. Some hyperparameters might also have a significant impact on model size, inference throughput, latency, or other considerations.
The number of hyperparameters in a model and their characteristics form a search space of possible combinations to optimize. In the same way that a rectangle's area is quadratic to its width, when
experimenting with two continuous hyperparameters the permissible search space is the area constructed by all combinations of the two hyperparameters. Every hyperparameter introduced grows the search
space exponentially, leading to a combinatorial explosion of the search space, as shown below.
Search space grows with the increased dimensionality of permissible hyperparameters
The intense effort to optimize hyperparameters is typical of modern natural language processing applications that involve fine-tuning large pre-trained language models, which take a few days to
finish training. In fact, training a model, such as GPT-3, from scratch takes hundreds of GPU years and millions of dollars. Simpler models, such as XGBoost, still have a myriad of hyperparameters,
each with nuanced effects. For example, increasing max_depth increases memory footprint, while tuning the tree construction implementation has a significant effect on latency and throughput.
A distributed hyperparameter optimization solution is the answer to the general trend of larger models trained on larger data. It also fits in well with how Canva operates machine learning models:
moving and iterating fast in an environment of compounding scale.
Machine learning engineers always have the opportunity to perform their custom hyperparameter optimization on top of vertically scaled model trainers. Yet, at the limits of vertical scaling,
distributed hyperparameter optimization is the process of spreading the individual model trainers across different pods and hosts.
Difference between vertical and horizontal scaling
Resource constraints are a massive challenge when relying solely on vertical scaling. Despite the ever-increasing instance sizes available on modern cloud platforms, it's difficult to scale model
training time linearly across multiple accelerators (GPUs). Moreover, any hyperparameter optimization procedure on top of an existing model trainer that efficiently uses all the processes involves
either trading off the number of processes a trainer can use when tuning in parallel or running all the experiments in sequence
This post shows how Canva solves these challenges.
Hyperparameter Optimization with Argo
Argo Workflows is a Kubernetes-native workflow orchestration framework that takes care of pod execution, management, and other common needs when running ephemeral jobs on Kubernetes. Argo's
extendability and ability to provide a single deployable unit were some of the benefits that led us to pick Argo over other workflow orchestration frameworks. At Canva, we leverage it to schedule and
run all model trainers on our Kubernetes clusters.
Distributed hyperparameter optimization's complexity can be separated into computational and algorithmic complexity. We use Argo workflows to support and orchestrate the computational requirements of
these hyperparameter optimization jobs. The algorithmic problem is delegated to one of many available open-source frameworks (we use Optuna at Canva).
Argo and Alternatives
It's desirable to use the exact model trainer framework and apply it to hyperparameter optimization jobs because of the many choices of existing custom tooling and integrations. Moreover, this
enables engineers to treat hyperparameter optimization as a model training procedure. Doing so allows the learning of the model's architecture along with the usual model parameters without coupling
the optimization and training concerns.
There are alternatives, of course.
Over the last few years, there's been an explosion of open-source and proprietary libraries and platforms attempting to do the full scope of model training, tuning, and even serving. Concentrating on
the latest open-source solutions, a large number of libraries, including Optuna, have tooling for running hyperparameter optimization jobs on Kubernetes.
We only use the algorithms of third-party optimization frameworks due to the cost of introducing new platform technology. It's so important to choose technology carefully, as opposed to installing
the newest Kubernetes framework. Any new technology introduces costs, such as maintenance, tooling, and training time. This principle is also why each item in our technology stack has minimal overlap
with each other.
Argo gives us operational support, monitoring, scalability, and custom tooling from our machine learning trainers. It also doesn't compromise the benefits of using or extending the best optimization
libraries available. We can maintain the option to replace the optimization algorithms and libraries depending on industry tides or use-case nuances. We no longer need to worry about how the pods are
Defining the Hyperparameter Optimization Workflow
There are many ways of defining the dataflow in a hyperparameter optimization workflow. One option is to define a temporary (but relatively long-living) optimization service that computes the next
hyperparameter experiment in which a model trainer consumes.
Another is to treat hyperparameter optimization as a sequence of map-reduce operations. Although this has downsides, such as requiring all parallel model trainers to finish before running the next
batch, it's far easier to extend and reason about within a workflow.
The map-reduce approach requires two kinds of application containers with differing responsibilities:
• Optimizer: A single container generating the next batch of hyperparameters to explore based on all previous hyperparameters and model evaluation results.
• Model Trainers: A batch of model trainer containers that accepts hyperparameter values and returns pre-defined evaluation metrics.
Interaction between the optimization and model training containers in one iteration
The interaction between the optimization and model training containers is essential. The optimization container first generates the next batch of hyperparameters to explore within a single
optimization iteration. The workflow then fans out the batch of hyperparameters to each model trainer running in parallel. The bulk of time spent within each iteration is then in the model training
itself. When the model trainers finish they return their respective metrics. Argo then aggregates the values into the next instantiation, beginning the next iteration if the termination criteria is
not satisfied.
One challenge of defining a hyperparameter optimization workflow like this is the handling of optimization state. Each optimization container must have access to all preceding hyperparameter values
and their results. A solution is to persist the intermediate results into a database. This would be ideal if many model trainers share the same optimizers. We found that passing the optimization
state explicitly through the workflow itself is more desirable because it isolates each hyperparameter optimization job and mitigates the need to maintain a long-living persistence mechanism.
An example of unrolled hyperparameter optimization workflow
After solving the orchestration challenges, the optimizer itself wraps open-source optimization libraries, such as Optuna. This gave us an average speedup of at least five times over our previous
process. We went from over a week to optimize down to a little over a day.
Defining the Optimizer
A separate optimization container means that the model trainers do not need to know about hyperparameter optimization. Their concerns are delineated with minimal coupling. The optimization is thus
free to use any heuristic or algorithm it chooses, such as Randomized Search.
We found that it's far easier to refit the entire optimization model after each batch instead of doing a partial fit. This has only a marginal effect on the optimization time. It also avoids the need
to maintain custom optimization state representations that depend on the algorithm.
Argo CLI and UI enable machine learning engineers to specify their desired search spaces and hyperparameter configurations at run-time. The search space gets supplied as a set of hyperparameters to
search through and their probability distributions. These distributions (such as uniform, discrete-uniform, log-uniform, or categorical) are a form of prior knowledge. The prior knowledge enriches
the hyperparameter optimization process so that the optimizer can better navigate the search space.
Machine learning engineers can also select the desired optimization algorithm, control the degree of parallelism, the number of iterations, and others as run-time parameters. Lastly, by passing the
optimization state through the workflow explicitly, engineers can also create new hyperparameter optimization jobs from the state of a previous one. This effectively enables engineers to warm-start
the optimization and iteratively relax or constrain the search space across multiple jobs.
Bayesian Model-based Hyperparameter Optimization
At Canva, we leverage Bayesian optimization methods to efficiently navigate large hyperparameter search spaces. These methods are sample-efficient because they select the next hyperparameter (the
vertical line in the graph below) which is likely to yield the largest improvement, and thus reduce the number of times an objective function needs to be evaluated in order to reach a well-optimized
point. This characteristic is especially important for ML.
Iterations of Bayesian Optimization. The first row illustrates the function being approximated and optimized while the second row highlights the acquisition function that determines the next sample
By building a probabilistic "surrogate" model from hyperparameter values and previous model evaluation results, Bayesian optimization methods balance exploring regions in the search space with high
uncertainty while exploiting regions around the best-known values of the probabilistic model. As a form of sequential model-based optimization (SMBO), Bayesian optimization is most efficient when the
evaluation of the expensive objective function is performed sequentially.
One of the problems with using pure SMBO in batch with high degrees of concurrency is that the batch of suggested hyperparameters tends to clump together, limiting the effectiveness of distributed
hyperparameter optimization and wasting compute by evaluating a small concentrated subset of the search space.
To provide a solution generic to the optimizer for this problem, we use a Constant Liar (CL) heuristic when sampling within a batch. This strategy generates temporarily "fake" objective function
values when sampling sequentially for each batch. By modifying the optimism and pessimism of the optimizer via the generated objective values, we control the degree of exploration within a batch of
concurrent ML experiments. Finally, since exploration is almost certainly needed at the start of any hyperparameter optimization job, we force the optimizer to generate random samples in the first
An example of converging objective function values while maintaining exploration. Best values are the most optimal objective values seen at that point in time of the optimization. They improve as the
optimizer develops a better understanding of the search space
Distributed hyperparameter optimization is an unavoidable problem when iterating on ML models at scale. We've accelerated the experimentation and tuning of ML models in a manner that's enabled both
the computational and algorithmic components to evolve individually. In practice, this has decreased the optimization time of some models from a week to a little over a day. By iterating on ML
use-cases faster, we hope to empower our users' experience to be more magical and delightful.
Special thanks to Jonathan Belotti, Sachin Abeywardana, and Vika Tskhay for their help and guidance. Huge thanks to Grant Noble for editing and improving the post.
Interested in advancing our machine learning infrastructure? Join us!
Subscribe to the Canva Engineering Blog
By submitting this form, you agree to receive Canva Engineering Blog updates. Read our
Privacy Policy
|
{"url":"https://www.canva.dev/blog/engineering/machine-learning-hyperparameter-optimization-with-argo/","timestamp":"2024-11-14T18:10:49Z","content_type":"text/html","content_length":"66479","record_id":"<urn:uuid:57b38f5c-6dcd-4bb4-8c76-e1eea0d5c39a>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00295.warc.gz"}
|
Solving Equations Quadratic in Form
In this section, we make use of all the techniques that we have learned so far for solving quadratic equations. In fact, the equations found here are reducible to quadratic form.
Here are most of the reducible equations that we are likely to encounter. Begin by trying to identify what can be squared to obtain the leading variable term.
: Look at the middle term for a hint as to what
should be.
In the previous solved problem, we certainly could have distributed the expression on the left side, put the equation in standard form then re-factored it. Instead, here we are illustrating a
technique that will be used to easily solve many other equations that are quadratic in form.
Solve by making a u-substitution.
: x^6 + 26x^3 -27
Six Answers: { -3, 1, (3±3iSqrt(3))/2, (-1±iSqrt(3))/2 }
So far we have been able to factor after we make the
-substitution. If the resulting quadratic equation does not factor, then use the quadratic formula.
YouTube Videos:
|
{"url":"https://www.openalgebra.com/2013/10/solving-equations-quadratic-in-form.html","timestamp":"2024-11-08T12:32:07Z","content_type":"application/xhtml+xml","content_length":"67091","record_id":"<urn:uuid:19193d10-d537-4e1c-9bf0-9dc820c9242f>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00427.warc.gz"}
|
tensordot(x1: array, x2: array, /, *, axes: int | Tuple[Sequence[int], Sequence[int]] = 2) array¶
Returns a tensor contraction of x1 and x2 over specific axes.
The tensordot function corresponds to the generalized matrix product.
☆ x1 (array) – first input array. Should have a numeric data type.
☆ x2 (array) –
second input array. Should have a numeric data type. Corresponding contracted axes of x1 and x2 must be equal.
Contracted axes (dimensions) must not be broadcasted.
☆ axes (Union[int, Tuple[Sequence[int], Sequence[int]]]) –
number of axes (dimensions) to contract or explicit sequences of axis (dimension) indices for x1 and x2, respectively.
If axes is an int equal to N, then contraction must be performed over the last N axes of x1 and the first N axes of x2 in order. The size of each corresponding axis (dimension) must
match. Must be nonnegative.
○ If N equals 0, the result is the tensor (outer) product.
○ If N equals 1, the result is the tensor dot product.
○ If N equals 2, the result is the tensor double contraction (default).
If axes is a tuple of two sequences (x1_axes, x2_axes), the first sequence must apply to x1 and the second sequence to x2. Both sequences must have the same length. Each axis (dimension)
x1_axes[i] for x1 must have the same size as the respective axis (dimension) x2_axes[i] for x2. Each index referred to in a sequence must be unique. If x1 has rank (i.e, number of
dimensions) N, a valid x1 axis must reside on the half-open interval [-N, N). If x2 has rank M, a valid x2 axis must reside on the half-open interval [-M, M).
If either x1 or x2 has a complex floating-point data type, neither argument must be complex-conjugated or transposed. If conjugation and/or transposition is desired, these operations should be
explicitly performed prior to computing the generalized matrix product.
out (array) – an array containing the tensor contraction whose shape consists of the non-contracted axes (dimensions) of the first array x1, followed by the non-contracted axes (dimensions)
of the second array x2. The returned array must have a data type determined by Type Promotion Rules.
Changed in version 2022.12: Added complex data type support.
Changed in version 2023.12: Allow negative axes.
|
{"url":"https://data-apis.org/array-api/latest/API_specification/generated/array_api.tensordot.html","timestamp":"2024-11-06T18:22:21Z","content_type":"text/html","content_length":"26857","record_id":"<urn:uuid:4c52e5ef-f214-45be-8b38-8508fd4de26a>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00370.warc.gz"}
|
How to Find the Zeros of a Polynomial Function - A Step-by-Step Guide
How to Find the Zeros of a Polynomial Function: A Step-by-Step Guide
To find the zeros of a polynomial function, I would first understand what a zero of a polynomial means. In mathematics, a zero of a polynomial ( p(x) ) is a value ( x_i ) such that when substituted
into the polynomial, the output is zero, i.e., ( p(x_i) = 0 ). Identifying these values is fundamental in graphing the function and solving polynomial equations, as zeros represent the points where
the graph intersects the x-axis.
My process includes examining the polynomial’s degree to ascertain the maximum number of possible zeros and using methods like synthetic division, the Rational Zero Theorem, or the Fundamental
Theorem of Algebra for more complex polynomials, which might have real and/or complex zeros. Stay tuned as I explore these techniques that make polynomial zeros less of a mystery and more of a
Finding Zeros of a Polynomial Function
When I’m looking to find the zeros of a polynomial function, I consider where the graph of the function crosses the x-axis. These points, also known as x-intercepts or roots, represent the values for
x where the function value is zero. Here’s a simple guide to locating these points:
Synthetic Division: This method helps me evaluate potential zeros by dividing them into the polynomial, which is expressed as a sum of its terms, each with their coefficients and variables raised to
a power, indicating the degree of the polynomial.
Step-by-Step Using Synthetic Division:
1. Propose a possible zero.
2. Perform synthetic division with the polynomial.
3. If the remainder is (0), congrats! That candidate is a zero.
Rational Zeros Theorem: This handy rule provides a list of potential rational zeros based on the ratio of the factors of the constant term to the factors of the leading coefficient.
For example:
Polynomial Equation Potential Rational Zeros
$f(x) = 2x^3 – 5x^2 + x – 2$ $\pm1$, $\pm2$, $\pm1/2$
I also use the Factor Theorem: If a value ( c ) is a zero of the polynomial function ( f(x) ), then ( x – c ) is a factor of the function. This helps me in factoring the polynomial to find more zeros
Graphing Techniques: Sometimes, I simply graph the polynomial equation to visually identify the zeros. Wherever the graph cuts the x-axis, those are the x-intercepts of the function.
Example of Graph Analysis:
1. Plot the polynomial function ( f(x) ).
2. Identify points where ( y) is ( 0 ).
By combining these techniques, I can efficiently determine all zeros of a polynomial function and solve the corresponding equation.
Advanced Techniques for Finding Zeros
When I tackle the challenge of finding zeros in polynomials, advanced techniques come into play that require a deeper understanding of algebra. One of the key methods I use is synthetic division,
which simplifies the process of testing possible rational zeros. It’s a valuable shortcut when the traditional long division seems too cumbersome.
To begin with, synthetic division helps me determine if a rational number is a zero of the polynomial by providing the remainder when dividing. If the remainder is zero, that number is indeed a zero
of the polynomial. The process looks like this when I apply it:
1. List down all coefficients of the polynomial.
2. Write down the potential zero to test.
3. Carry out the synthetic division algorithm.
Synthetic Division Example:
In this table, I’m testing $x = 1$ as a potential zero for the polynomial $x^3 – 6x^2 + 11x – 6$. The zero remainder confirms that $x = 1$ is a zero.
The Fundamental Theorem of Algebra assures me that a polynomial of degree n will have exactly n roots, which could be real or complex. Complex zeros always come in conjugate pairs, which helps
predict the zeros in complex solutions. For instance, if I know that $2 + 3i$ is a zero, so is $2 – 3i$.
Multiplicity refers to the number of times a particular zero occurs. A zero’s multiplicity affects the graph’s behavior at the intercept: an odd multiplicity causes the graph to cross the axis, while
an even multiplicity causes it to touch and turn around.
When looking for real zeros algebraically, Descartes’ Rule of Signs can be employed to predict the number of positive and negative real zeros. Counting sign changes in the polynomial’s coefficients
gives the maximum number of positive real zeros, while applying the rule to the polynomial with $x$ replaced by $-x$ gives information about the negative zeros.
By combining these techniques, finding the complex and real zeros of any polynomial becomes a structured and approachable task.
Practice and Applications
When I’m working on polynomial equations, I like to start by looking for all possible real roots. One practical method I often use is factoring by grouping. This involves rearranging and grouping
terms in the polynomial in such a way that I can factor them separately, which can drastically simplify finding solutions. For instance, given a polynomial $ P(x) = ax^4 + bx^3 + cx^2 + dx + e $, I
can sometimes group terms to factor out common elements and eventually find the real roots.
However, not all polynomials are easily factorable using simple methods like grouping. In these situations, the quadratic formula can be incredibly helpful, especially for polynomial equations of a
second degree, like $ ax^2 + bx + c = 0 $. The quadratic formula is given by $ x = \frac{-b \pm \sqrt{b^2 – 4ac}}{2a} $. This yields potential real roots or complex roots if the discriminant, $ b^2 –
4ac $, is negative.
To gain proficiency, it’s essential to practice. Here’s a structured approach I follow:
Step Practice Exercise
1. Identify the polynomial degree Observe whether it’s quadratic, cubic, etc.
2. Factor if possible Apply factoring by grouping if applicable
3. Apply relevant formulas Use quadratic or cubic formulas as needed
4. Verify the roots Plug the roots back into the original equation to check
For more complex polynomials, like a cubic function, we might have an equation like $ ax^3 + bx^2 + cx + d = 0 $, which may require more advanced methods like synthetic division.
By regularly solving exercises and applying these practices to real-world scenarios, like calculating the trajectory of a projectile or financial modeling, I enhance my understanding of polynomials.
Furthermore, seeing polynomials in factored form enables me to better grasp the relationships between the roots and the function’s graph.
In our exploration of finding the zeros of a polynomial function, I’ve highlighted several reliable methods that can assist you. Remember, the Rational Zero Theorem provides a way to list all
possible rational zeros by considering the factors of the constant term and the leading coefficient. For complex zeros, rely on the Fundamental Theorem of Algebra, which assures us that a polynomial
of degree n will have exactly n zeros (including both real and non-real zeros).
When you’re working through polynomial equations, don’t forget to apply the Linear Factorization Theorem to write a polynomial as a product of its linear factors. This can be particularly handy when
you know some zeros and need to find a corresponding polynomial function. Also, the Remainder Theorem is an excellent tool for verifying potential zeros by performing polynomial division.
Armed with these techniques, I’m confident you’ll tackle polynomial functions effectively. Should you need a refresher, revisit the Rational Zero Theorem or the Fundamental Theorem of Algebra for
guidance. By practicing these methods, you’ll enhance your ability to solve for the zeros of any polynomial function you encounter. Remember, patience and practice are key to mastering these concepts
in algebra.
|
{"url":"https://www.storyofmathematics.com/how-to-find-the-zeros-of-a-polynomial-function/","timestamp":"2024-11-03T00:20:25Z","content_type":"text/html","content_length":"145381","record_id":"<urn:uuid:32a4d241-560a-440f-b77e-d222e38f1fc3>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00420.warc.gz"}
|
The Atmosphere and the Sea in Motion - NYU Courant
Thermal and Hydraulic Performance of Compact Brazed Plate
The distribution function f gives the probability of finding an electron at time t, at position r, with momentum p. The Boltzmann transport equation [1] accounts for all possible mechanisms by which
f may change. The electron flow in a metal can be affected by applied fields, temperature gradients, and collisions (scattering). Problem 2.2. Boltzmann Transport Equation and Thermal Conductivity
Consider a medium with temperature gradient B and a constant particle concentration. (a) Use the Boltzmann transport equation in the relaxation time approximation to find the first order
nonequilibrium classical distribution: ffo - VTC (EB? 2017-01-20 · The strain effect on the thermal transport in graphene was theoretically investigated by Lindsay et al using the Boltzmann–Peierls
av B MINOVSKI · Citerat av 3 — Furthermore, heat transfer rates from the burned gas to the engine introduce two additional transport equations that represent the turbulent properties of. av P
Samuelsson · Citerat av 114 — lot of energy to melt, and it has low heat transfer capability which more or less The soil heat transfer equation applied for each soil layer can be written as. Chapter
3, which addresses energy transport caused by thermal conduction and convection, examines a derivation of the heat transport equation. Finally Avhandlingar om BOLTZMANN TRANSPORT EQUATION. of
particles can then be accelerated to energies significantly higher than the thermal energy. We also feature Thermtest's devices for testing thermal conductivity.
Equation 1 can now be re-written in its fully developed form as: $$ \frac{\partial c}{\partial t}+ abla\cdot(D abla c)+ abla\cdot (uc)=S_S+S_R \tag{2}$$ Heat Transfer. During thermal simulations,
the temperature field (which is scalar) is transported according to the convection-diffusion equation.
heat in Swedish - English-Swedish Dictionary Glosbe
The most basic rule of heat transfer is that heat always flows from a warmer medium to a colder medium. Heat exchangers are devices to facilitate this heat Know the governing equations describing
neutron transport, flow transport, and heat transfer in nuclear reactors. Solve such governing As a result, scale-up models are used to predict or simulate heat transfer values are used to simplify
complicated differential equations describing the system. av B MINOVSKI · Citerat av 3 — Furthermore, heat transfer rates from the burned gas to the engine introduce two additional transport
equations that represent the turbulent properties of.
HelmholtzMedia — A Fluid Properties Library
Heat conduction originates from Heat transfer takes place as conduction in a solid if there is a temperature gradient. due to surface convection and radiation is not included in this equation. In
this study, the bio‐heat transfer equation is solved for variable blood perfusion values and the temperature field resulting after a hyperthermia treatment is A systematic exposition for the NEGF
method is presented, starting from the fundamental definitions of the Green's functions, and ending with equations of motion The relationship between fractional-order heat conduction models and
Boltzmann transport equations (BTEs) 28 Jan 2020 The celebrated Fourier's equation for heat conduction was developed in 1822 and nowadays is still used to model heat transfer, such as in Control
volume showing energy inflow and outflow by conduction (diffusion) and convection. Page 4. Governing Equation for Heat Transfer Derived from. Energy From this equation the objective is solve and
analyze the heat flux distribution, the temperature distribution and the temperature gradient of bodies of different 1.1 The Conduction Equation. The basic objective of this course can be stated as:
given an object that is subjected to known temperature and/or heat flux Heat transfer has direction as well as magnitude.
. . .
Studievägledare kth ict
At this point, we have to add a new mechanism, which is known as advection (the transport of a substance by bulk motion). In addition to the thermal transport in solids carried by phonons and
electrons, photons can transfer heat via thermal radiation. By tuning the photon transport, wavelength- or directional-selective emitters/absorbers have been designed and applied to thermal
photovoltaics, 28,29 28. The inside surface temperature of the steel is 800 K and the outside surface temperature of the insulation board is 350 K. The thermal conductivity of the stainless steel is
19 W/(m K) and the thermal conductivity of the insulation board is 0.7 W/(m K). The conductive heat transport through the layered wall can be calculated as Boltzman Transport Equation.
• Sep 8, 2015. 245. 3. Share. Save. 245 / 3 Correspondingly the usual heat diffusion equation gets replaced by a non- 23 Apr 2020 They also indicate that the thermal conductivity of materials can
be and the mode-resolved phonon Boltzmann equation for thermal transport nanoscale thermal conductivity could bring radical improvements in the performance 1b] we use the well-established Boltzmann
transport equation28– 37 with 8 Jul 2010 Consequently, it can be modeled by an advection‐diffusion equation using macroscopic dispersivities.
Lås upp sim ipad
Many of the heat transfer and energy conversion phenomena in our research are governed by thermal conduction, and in many cases the dominant mechanism is phonon transport. In order to account for
nanoscale scattering mechanisms and sub-continuum aspects, it can be useful to use phonon transport theory involving either Monte Carlo methods or solution of th Peierls-Boltzmann transport equation.
2017-01-26 · Steady-state thermal transport in nanostructures with dimensions comparable to the phonon mean-free-path is examined. Both the case of contacts at different temperatures with no internal
heat gener Key-words: Sensitivity equation method, virtual reality, thermal transport Ecole Polytech'Nice y INRIA Opale Project-Team z INRIA Dream Team x SimplySim company inria-00530166, version 1 -
27 Oct 2010 Thermal transport at the nanoscale: A Fourier’s law vs. phonon Boltzmann equation study J. Kaiser,1,a) T. Feng,2 J. Maassen,3 X. Wang,4 X. Ruan,2 and M. Lundstrom4 1Department of
Electrical Engineering and Information Science, Ruhr-University Bochum, D-44780 Bochum, The relaxation of an one-dimensional transient thermal grating (TTG) in a medium with phonon-mediated thermal
transport is analyzed within the framework of the Boltzmann transport equation (BTE), with the goal of extracting phonon mean free path (MFP) information from TTG measurements of non-diffusive phonon
Over the past decades, these effects have been studied and interpreted by nonequilibrium molecular dynamics (NEMD) and phonon Boltzmann transport equation (BTE) simulations separately, but no Donate
here: http://www.aklectures.com/donate.phpWebsite video link: http://www.aklectures.com/lecture/thermal-radiationFacebook link: https://www.facebook.c The in-plane thermal transport in nanofilms with
internal heating can be characterized by the two-dimensional Boltzmann transport equation, ∂ f / ∂ t + v g x ∂ f / ∂ x + v g y ∂ f / ∂ y = (f 0 − f) / τ + S ˙ Ω. Here, the MC technique is employed to
solve this equation. The schematic of simulating object is illustrated in figure 1. The 1-D Heat Equation 18.303 Linear Partial Differential Equations Matthew J. Hancock Fall 2006 1 The 1-D Heat
Equation 1.1 Physical derivation Reference: Guenther & Lee §1.3-1.4, Myint-U & Debnath §2.1 and §2.5 [Sept. 8, 2006] In a metal rod with non-uniform temperature, heat (thermal energy) is transferred
2013-03-29 · In the case of nonlinear systems, we make general comments on the thermal expansion effect, phonon relaxation time, and a certain class of mean-field approximations.
Vad är optimerad bemanning
Anmärkningsvärt förbättrad termisk transport baserad på en
. . . . . .
Pentti airikkala
Lediga jobb för Chemistry Programming - januari 2021
equation, given by the first law of thermodynamics (i.e. conservation of energy), is written in the following form (assuming no mass transfer or extensions to more complicated problems of heat
transfer in porous media, This equation is mathematically equivalent to the problem of a Darcy flow in a. Non-Continuum Energy Transfer: Boltzmann Transport Equation We can treat phonons as particles
and therefore determine the thermal conductivity based on 30 Jun 2019 These are called the boundary conditions.
New Foundations Laid - TerpConnect
2020-07-15 · Download PDF Abstract: Transport equations for electron thermal energy in the high $\beta_e$ intracluster medium (ICM) are developed that include scattering from both classical
collisions and self-generated whistler waves. 2.2 Thermal transport equation Since the out-of-equilibrium character of phonon transport maybesignificantinnano-devices,theuseoftheBoltzmann transport
formalism to study the heat diffusion is particu-larly relevant. In contrast to the case of charged particles, the trajectories of phonons are not modified by any external driving force. Nano-size
confinement induces many intriguing non-Fourier heat conduction phenomena, such as nonlinear temperature gradients, temperature jumps near the contacts, and size-dependent thermal conductivity. Over
the past decades, these effects have been studied and interpreted by nonequilibrium molecular dynamics (NEMD) and phonon Boltzmann transport equation (BTE) simulations separately, but no Donate here:
http://www.aklectures.com/donate.phpWebsite video link: http://www.aklectures.com/lecture/thermal-radiationFacebook link: https://www.facebook.c The in-plane thermal transport in nanofilms with
internal heating can be characterized by the two-dimensional Boltzmann transport equation, ∂ f / ∂ t + v g x ∂ f / ∂ x + v g y ∂ f / ∂ y = (f 0 − f) / τ + S ˙ Ω. Here, the MC technique is employed to
solve this equation. The schematic of simulating object is illustrated in figure 1.
. . . . .
|
{"url":"https://skattersuoj.netlify.app/23615/87558","timestamp":"2024-11-11T10:38:55Z","content_type":"text/html","content_length":"20774","record_id":"<urn:uuid:94ccf8ea-e2d5-49c5-8ffe-a43815d5fb13>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00849.warc.gz"}
|
GlobalGrowthComparison: GlobalGrowthComparison class in biogrowth: Modelling of Population Growth
The GlobalGrowthComparison class contains several functions for model comparison and model selection of growth models. It should not be instanced directly. Instead, it should be constructed using
compare_growth_fits(). It is similar to GrowthComparison, although with specific tools to deal with several experiments.
It includes two type of tools for model selection and comparison: statistical indexes and visual analyses. Please check the sections below for details.
Note that all these tools use the names defined in compare_growth_fits(), so we recommend passing a named list to that function.
## S3 method for class 'GlobalGrowthComparison' coef(object, ...) ## S3 method for class 'GlobalGrowthComparison' summary(object, ...) ## S3 method for class 'GlobalGrowthComparison' print(x, ...) ##
S3 method for class 'GlobalGrowthComparison' plot(x, y, ..., type = 1, add_trend = TRUE)
object an instance of GlobalGrowthComparison
... ignored
x an instance of GlobalGrowthComparison
y ignored
type if type==1, the plot compares the model predictions. If type ==2, the plot compares the parameter estimates. If type==3, the plot shows the residuals
add_trend should a trend line of the residuals be added for type==3? TRUE by default
if type==1, the plot compares the model predictions. If type ==2, the plot compares the parameter estimates. If type==3, the plot shows the residuals
should a trend line of the residuals be added for type==3? TRUE by default
GlobalGrowthComparison implements two S3 methods to obtain numerical values to facilitate model comparison and selection.
the coef method returns a tibble with the values of the parameter estimates and their corresponding standard errors for each model.
the summary returns a tibble with the AIC, number of degrees of freedom, mean error and root mean squared error for each model.
The S3 plot method can generate three types of plots:
when type = 1, the plot compares the fitted growth curves against the experimental data used to fit the model.
when type = 2, the plot compares the parameter estimates using error bars, where the limits of the error bars are the expected value +/- one standard error. In case one model does not has some model
parameter (i.e. either because it is not defined or because it was fixed), the parameter is not included in the plot.
when type=3, the plot shows the tendency of the residuals for each model. This plot can be used to detect deviations from independence.
For more information on customizing the embed code, read Embedding Snippets.
|
{"url":"https://rdrr.io/cran/biogrowth/man/GlobalGrowthComparison.html","timestamp":"2024-11-14T04:30:47Z","content_type":"text/html","content_length":"38782","record_id":"<urn:uuid:4716a0e7-eea4-40ea-b193-502201eb3a64>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00751.warc.gz"}
|
Work and Energy
Tanusri Gururaj, Academic content writer of Physics at Edumarz
• A body having the ability to do work possesses energy.
Unit of energy: Joule (J)
• The body which does work loses energy whereas, the body on which work is done gains energy.
• The energy required to do 1 joule of work is 1 joule.
1 kilo joule = 1000 joule
The energy possessed by bodies in motion.
Kinetic energy increases with speed.
According to the third equation of motion,
2as = v2 – u2
Work done = F s = m a s
W = ms (v2 – u2)/2s
W ½ m (v2 – u2)
Work done is equal to change in kinetic energy and taking initial velocity as 0 we get,
Ek = ½ x mv2
The energy possessed by bodies by virtue of position or configuration.
• Gravitational potential energy:
It is the work done to raise a body from the ground to a certain height against gravity.
Ep = mgh
• Work done by gravity depends on the initial and final heights of the body and not on the path.
• Law of conservation of energy:
Energy can neither be created nor be destroyed and can only be transformed from one form to another.
The total energy of a system remains constant.
Let us take an example of a ball of mass ‘m’ falling freely from a height ‘h.’
Initially, the kinetic energy is zero as velocity is zero. The potential energy is mgh.
Total energy = mgh
As the ball continues falling, the kinetic energy keeps increasing whereas, the potential energy keeps decreasing.
When the ball reaches the ground and, the height is zero, kinetic energy is maximum (½ mv2) whereas, potential energy is minimum (0).
Total energy = Kinetic energy + Potential energy = constant
Where kinetic energy = ½ mv2
Potential energy = mgh
|
{"url":"https://edumarz.com/work-and-energy-3/","timestamp":"2024-11-03T16:05:28Z","content_type":"text/html","content_length":"202236","record_id":"<urn:uuid:d419c7bb-6c3f-41a6-ab44-7624451d96d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00253.warc.gz"}
|
LibGuides: Particle Physics: Course Outline
A Review of Particle Physics: Fundamental forces, Quarks and color, Weak interactions, Natural units.
Symmetries: Symmetries and conservation laws, Noether's theorem Parity, Charge conjugation, CP violation. Symmetries and groups, Group SU(2), Isospin, Group SU(3), Strangeness, Mesons, Baryons,
Magnetic moments, Heavy quarks, Charm and beyond, Hadron masses, Color factors.
Relativistic Kinematics: Lorentz transformations, Four-vectors, Energy and momentum, Mandelstam variables.
Introduction to QED: The Klein-Gordon equation, Dirac's interpretation of negative energy solutions, Feynman-Stuckelberg interpretation of negative energy solutions, The Dirac equation, Covariant
form of Dirac equation, Continuity equation for Dirac equation, Free-Particle solution of Dirac equation, Normalization of spinors and the completeness relations, Trace theorems and properties of γ
Gauge Symmetries: U(1), SU(2) and SU(3) gauge transformations, Transformation law for A[μ], Lagrangian density of a free particle, Invariance of a theory under global gauge transformations,
Lagrangian density and local U(1) gauge transformations, Lagrangian density and non-abelian continuous group of local gauge transformations.
|
{"url":"https://libguides.riphah.edu.pk/c.php?g=482729&p=3301474","timestamp":"2024-11-02T08:06:40Z","content_type":"text/html","content_length":"41769","record_id":"<urn:uuid:e9759914-05c0-4f11-a90b-e6955218508c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00087.warc.gz"}
|
Research articles »
Articles index
search articles ScienXe.org
» my Online CV
reviews guidelines arXiv.org » Free
reviews 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018
articles index
arXiv.org, 1997
My Pages
Months: 1 2 3 4 5 6 7 8 9 10 11 12
my alerts
arXiv.org, 6.1997
my messages
Results 401 to 600 of 1'104. [ 1 2 3 4 5 6 ] Next
my reviews
my favorites cond-mat/9706089 Low-frequency Raman scattering in model disordered solids: percolators above threshold
cond-mat/9706090 Scaling Laws of Polyelectrolyte Adsorption
cond-mat/9706091 Orbital Kondo-effect from tunneling impurities
Stat cond-mat/9706092 Instability of the marginal commutative model of tunneling centers interacting with metallic environment: Role of the
electron-hole symmetry breaking
Members: 3658 cond-mat/9706093 Analysis of the Interplay of Quantum Phases and Nonlinearity Applied to Dimers with Anharmonic Interactions
Articles: 2'599'751 cond-mat/9706094 Particle-hopping Models of Vehicular Traffic: Distributions of Distance Headways and Distance Between Jams
Articles rated: 2609 cond-mat/9706095 Superconducting gap node spectroscopy using nonlinear electrodynamics
cond-mat/9706096 Random Mass Dirac Fermions in Doped Spin-Peierls and Spin-Ladder systems: One-Particle Properties and Boundary Effects
02 November 2024 cond-mat/9706097 Renormalization Group Approach to Non-equilibrium Green Functions in Correlated Impurity Systems
cond-mat/9706098 Monte Carlo Study of Correlations Near the Ground State of the Triangular Antiferromagnetic Ising Model
cond-mat/9706099 Aging Exponents in Self-Organized Criticality
cond-mat/9706099 Aging Exponents in Self-Organized Criticality
cond-mat/9706101 Protein folding, anisotropic collapse and blue phases
cond-mat/9706102 Interface dynamics at the depinning transition
cond-mat/9706103 Relaxation process in a regime of quantum chaos
cond-mat/9706104 Dualities in Spin Ladders
cond-mat/9706105 Generalized CP^1 model from t_1-t_2-J model
cond-mat/9706106 Aperiodic Ising Quantum Chains
cond-mat/9706107 Anderson-localization versus delocalization of interacting fermions in one dimension
cond-mat/9706108 The Interpretation of Magnetisation and Entropy Jumps in the Flux-line Lattice
cond-mat/9706109 Exchange in the linear cuprate antiferromagnets Sr2CuO3 and Ca2CuO3
cond-mat/9706110 Melting of Quasi-Two-Dimensional Charge Stripes in La5/3Sr1/3NiO4
cond-mat/9706111 Theory of Interplay of Nuclear Magnetism and Superconductivity in AuIn2
cond-mat/9706112 Time ordering in the evolution of information processing and modulation systems
cond-mat/9706113 Calculations of the Knight Shift Anomalies in Heavy Electron Materials
cond-mat/9706114 First-order phase transitions in one-dimensional steady states
cond-mat/9706115 Icosahedral coincidence rotations
cond-mat/9706116 Random Tiling Transition in Three Dimensions
cond-mat/9706117 Random tiling quasicrystals in three dimensions
cond-mat/9706118 Low-lying excitations around a single vortex in a d-wave superconductor
cond-mat/9706119 Comment on `Formation of a Dodecagonal Quasicrystalline Phase in a Simple Monatomic Liquid’
cond-mat/9706120 Electrical Conduction in Nonlinear Composites
cond-mat/9706121 Direction dependent free energy singularity of the asymmetric six-vertex model
cond-mat/9706122 Coordination sequences for root lattices and related graphs
cond-mat/9706123 Susceptibilities of Sr(Cu_(1-x)Zn_x)_2O_3 Studied by Quantum Monte Carlo Simulation
cond-mat/9706124 Density matrix renormalization group for the Berezinskii-Kosterlitz-Thouless transition of the 19-vertex model
cond-mat/9706125 Protein folding and heteropolymers
cond-mat/9706126 The 1D t-J model with next-nearest neighbor hopping - breakdown of the Luttinger liquid?
cond-mat/9706127 Mobility Edge and Level Statistics of Random Tight-Binding Hamiltonians
cond-mat/9706128 Correlated ab-initio calculations for ground-state properties of II-VI semiconductors
cond-mat/9706129 Effects of two dimensional plasmons on the tunneling density of states
cond-mat/9706130 Effects of pressure on diffusion and vacancy formation in MgO from non-empirical free-energy integrations
cond-mat/9706131 Rotational Reconstruction of Sapphire (0001)
cond-mat/9706132 Modification of the Landau-Lifshitz Equation in the Presence of a Spin-Polarized Current in CMR and GMR Materials
cond-mat/9706133 Orthogonal Symmetric Polynomials Associated with the Calogero Model
cond-mat/9706134 Competition between crystalline electric field singlet and itinerant states of f electrons
cond-mat/9706135 The bulk Josephson responce of the d-g-wave cuprate superconductor
cond-mat/9706136 Integrable su(3) spin chain combining different representations
cond-mat/9706137 Electronic transport in a series of multiple arbitrary tunnel junctions
cond-mat/9706138 Identification of Nuclear Relaxation Processes in a Gapped Quantum Magnet: Proton NMR in the S=1/2 Heisenberg Ladder Cu2
cond-mat/9706139 Inertial Mass of a Vortex in Cuprate Superconductors
cond-mat/9706140 Speckle from phase ordering systems
cond-mat/9706141 Airline Crew Scheduling Using Potts Mean Field Techniques
cond-mat/9706142 Statistical Properties of Unrestricted Crew Scheduling Problems
cond-mat/9706143 Spectral Function of 2D Fermi Liquids
cond-mat/9706144 Dynamic instabilities in resonant tunneling induced by a magnetic field
cond-mat/9706145 Doping Dependence of the Pseudogap State in the ab plane IR responce of La(2-x)Sr(x)CuO(4)
cond-mat/9706146 Field-induced incommensurate-to-commensurate transition in Ba_2CuGe_2O_7
cond-mat/9706147 Lightly Doped t-J Three-Leg Ladders - an Analog for the Underdoped Cuprates
cond-mat/9706148 Response of a $d_{x^2-y^2}$ Superconductor to a Zeeman Magnetic Field
cond-mat/9706149 Baryon Asymmetry of Universe: View from Superfluid 3He
cond-mat/9706150 Aspects of Dielectric Breakdown in a Model for Disordered Nonlinear Composites
cond-mat/9706151 Bose-Einstein to BCS Crossover Picture for High-T_c Cuprates
cond-mat/9706152 Thermodynamics of the anisotropic Heisenberg chain calculated by the density matrix renormalization group method
cond-mat/9706153 Triangular Antiferromagnets
cond-mat/9706154 Free energies of crystalline solids: a lattice-switch Monte Carlo method
cond-mat/9706155 Optical phonons in the reflectivity spectrum of FeSi
cond-mat/9706156 The Calogero Model: Integrable Structures and Orthogonal Basis
cond-mat/9706157 Fractional Quantum Hall Effect Measurements at Zero g-Factor
cond-mat/9706158 A single chain analysis of doped quasi one dimensional spin 1 compounds: paramagnetic versus spin 1/2 doping
cond-mat/9706159 Surface critical exponents for a three-dimensional modified spherical model
cond-mat/9706160 New model for surface fracture induced by dynamical stress
cond-mat/9706161 Threading dislocation lines in two-sided flux array decorations
cond-mat/9706162 Ground State Entropy of Potts Antiferromagnets: Bounds, Series, and Monte Carlo Measurements
cond-mat/9706163 Dynamics of Diblock Copolymers in Dilute Solutions
cond-mat/9706164 Bound states and impurity averaging in unconventional superconductors
cond-mat/9706165 Kondo Insulator: p-wave Bose Condensate of Excitons
cond-mat/9706166 Exclusion Statistics in Conformal Field Theory Spectra
cond-mat/9706167 Order versus Disorder in the Quantum Heisenberg Antiferromagnet on the Kagom{é lattice: an approach through exact spectra
cond-mat/9706168 Critical Exponents for Granular Phase Transitions
cond-mat/9706169 Comment on ``One-Dimensional Disordered Bosonic Hubbard Model: A Density-Matrix Renormalization Group Study"
cond-mat/9706170 Disappearance of the Spin Gap in a Zn doped 2-Leg Ladder Compound Sr(Cu1-xZnx)2O3
cond-mat/9706171 Boundary Effects in the One Dimensional Coulomb Gas
cond-mat/9706172 Simulation of Quantum Field Theory and Gravity in Superfluid He-3
cond-mat/9706173 Charge and Spin Structures of a $d_{x^2 - y^2}$ Superconductor in the Proximity of an Antiferromagnetic Mott Insulator
cond-mat/9706174 A simple physical model of liquid-glass transition: intrinsic fluctuating interactions and random fields hidden in
glass-forming liquids
cond-mat/9706175 Generalized calculation of magnetic coupling constants for Mott-Hubbard insulators: Application to ferromagnetic Cr compounds
cond-mat/9706176 Two-Order-Parameter Description of Liquids: Critical Phenomena and Phase Separation of Supercooled Liquids
cond-mat/9706177 Nuclear relaxation in the spin-1/2 antiferromagnetic chain compound Sr_2CuO_3 --- comparison between theories and experiments
cond-mat/9706178 Spatially nonuniform energy distribution of electrons in a submicron semiconductor film
cond-mat/9706179 Transition Temperature of Josephson Junction Arrays with Long-Range Interaction
cond-mat/9706180 Theory of thermoelectric effects when the temperature approximation is incorrect
cond-mat/9706181 Problem of formation of an emf in a semiconductor and its transfer to an external circuit
cond-mat/9706182 Time Dependent Floquet Theory and Absence of an Adiabatic Limit
cond-mat/9706183 Renormalization-group approach to the metal-insulator transitions in (DCNQI)_2M (DCNQI is N,N’-dicyanoquinonediimine and M=
Ag, Cu)
cond-mat/9706184 Properties of High-Tc Single Crystals as Natural Interferometers in the THz Frequency Range
cond-mat/9706185 Kinetic Theory Approach to the SK Spin Glass Model with Glauber Dynamics
cond-mat/9706186 Band Formation during Gaseous Diffusion in Aerogels
cond-mat/9706187 Fermi-Bose Correspondence at Finite Temperature
cond-mat/9706188 Continuous renormalization for fermions and Fermi liquid theory
cond-mat/9706189 Coupling between Smectic and Twist Modes in Polymer Intercalated Smectics
cond-mat/9707001 Quantum phase transitions in electronic systems
cond-mat/9707002 Co-operative Two-Channel Kondo Effect
cond-mat/9707003 Local Moments in an Interacting Environment
cond-mat/9707004 New Vortex with Zero Fluxoid Quantum in a Superconductor Anomalous Hall Effect in High T_c and Low T_c Superconductors
cond-mat/9707005 Eigenvalue distribution of large random matrices, from one matrix to several coupled matrices
cond-mat/9707006 The Superfluid State of Atomic Li6 in a Magnetic Trap
cond-mat/9707007 Object orientation and visualization of physics in two dimensions
cond-mat/9707008 Invaded Cluster Dynamics for Frustrated Models
cond-mat/9707009 Ordering and Demixing Transitions in Multicomponent Widom-Rowlinson Models
cond-mat/9707009 Ordering and Demixing Transitions in Multicomponent Widom-Rowlinson Models
cond-mat/9707011 Theory of Anomalous Hall Effect in a Heavy fermion System with a Strong Anisotropic Crystal Field
cond-mat/9707012 Discrete scale invariance and complex dimensions
cond-mat/9707013 Multicanonical Methods vs. Molecular Dynamics vs. Monte Carlo: Comparison for Lennard-Jones Glasses
cond-mat/9707014 e-h Coherence and Charging Effects in Ultrasmall Metallic Grains
cond-mat/9707015 Analysis of a three-component model phase diagram by Catastrophe Theory
gr-qc/9706001 O(N) Quantum fields in curved spacetime
gr-qc/9706002 Forks in the Road, on the Way to Quantum Gravity
gr-qc/9706003 Toward a Complete Analysis of the Global Structure of Kerr-Newman Spacetime
gr-qc/9706004 Can extreme black holes have (long) Abelian Higgs hair?
gr-qc/9706005 Statistical mechanics of Kerr-Newman dilaton black holes and the bootstrap condition
gr-qc/9706006 Quantum inequalities in two dimensional Minkowski spacetime
gr-qc/9706007 Classicality, Matter-Antimatter Asymmetry, and Quantum-Gravity Deformed Uncertainty Relations
gr-qc/9706008 Particle creation and non-adiabatic transitions in quantum cosmology
gr-qc/9706009 On the structure of the energy-momentum and the spin currents in Dirac’s electron theory
gr-qc/9706009 On the structure of the energy-momentum and the spin currents in Dirac’s electron theory
gr-qc/9706011 Quantum Decay of Domain Walls In Cosmology I: Instanton Approach
gr-qc/9706012 Lorentz Invariance and the Cosmological Constant
gr-qc/9706013 Singularities and asymptotic behavior of the Tolman-Bondi model
gr-qc/9706014 Stability Analysis of Spherically Symmetric Star in Scalar-Tensor Theories of Gravity
gr-qc/9706015 Cylindrical analogue of NUT space: spacetime of a line gravomagnetic monopole
gr-qc/9706016 Quantum Squeezing and Late Time Classical Behavior of Massive Fields in Expanding Robertson-Walker Universe
gr-qc/9706017 Nucleosynthesis Constraints on Scalar-Tensor Theories of Gravity
gr-qc/9706018 Accelerated Detectors and Temperature in (Anti) de Sitter Spaces
gr-qc/9706019 Free Energy of Gravitating Fermions
gr-qc/9706020 Dynamics of a self-gravitating thin string in scalar-tensor theories of gravitation
gr-qc/9706021 Canonical Quantization Inside the Schwarzschild Black Hole
gr-qc/9706022 Gauge Invariant Hamiltonian Formalism for Spherically Symmetric Gravitating Shells
gr-qc/9706023 Do Vortex Filaments in a Superfluid Neutron Star Produce Gravimagnetic Forces ?
gr-qc/9706024 Cosmological CMBR dipole in open universes ?
gr-qc/9706025 Chaos in black holes surrounded by gravitational waves
gr-qc/9706026 Generating New Perfect-fluid Solutions From Known Ones II
gr-qc/9706027 Degenerate Metric Phase Boundaries
gr-qc/9706028 Gravitational Field of the Early Universe: I.Non-linear Scalar Field as the Source
gr-qc/9706029 Maximal Acceleration Is Nonrotating
gr-qc/9706030 Black Hole Entropy: a spacetime foam approach
gr-qc/9706031 New exact solutions in standard inflationary models
gr-qc/9706032 Abelian Higgs hair for extreme black holes and selection rules for snapping strings
gr-qc/9706033 Quantum Decay of Domain Walls in Cosmology II: Hamiltonian Approach
gr-qc/9706034 A fully (3+1)-D Regge calculus model of the Kasner cosmology
gr-qc/9706035 Symmetries in two-dimensional dilaton gravity with matter
gr-qc/9706036 Disks in Expanding FRW Universes
gr-qc/9706037 Birth of the Universe in string cosmology
gr-qc/9706038 Simplifying the spectral analysis of the volume operator
gr-qc/9706039 Investigation of the Interior of Colored Black Holes and the Extendability of Solutions of the Einstein-Yang/Mills Equations
gr-qc/9706040 Dynamics of Cosmic Strings in Schwarzschild Spacetime
gr-qc/9706041 On a certain formulation of the Einstein equations
gr-qc/9706042 Elliptic fibrations associated with the Einstein spacetimes
gr-qc/9706043 A new class of inhomogeneous cosmological models with Yang-Mills fields
gr-qc/9706044 Thermodynamics of Black Holes in Brans-Dicke Gravity
gr-qc/9706045 General-relativistic coupling between orbital motion and internal degrees of freedom for inspiraling binary neutron stars
gr-qc/9706046 Late time behaviour of the maximal slicing of the Schwarzschild black hole
gr-qc/9706047 Fuzzy Surfaces of Genus Zero
gr-qc/9706048 Non-Newtonian Dynamic Gravitational Field from The Longitudinally Asymmetric Rotating Objects
gr-qc/9706049 On Gravitational Repulsion
gr-qc/9706050 Gravitational excitons from extra dimensions
gr-qc/9706051 Reduced phase space formalism for spherically symmetric geometry with a massive dust shell
gr-qc/9706052 The Schwarzschild Solution in the 4-Dimensional Kaluza-Klein Description of The Einstein’s Equations
gr-qc/9706053 The fine tuning problem in pre-big-bang inflation
gr-qc/9706054 Quantum correction to thermodynamical entropy of black hole
gr-qc/9706055 Generalization Of Lorentz-Poincare Ether Theory To Quantum Gravity
gr-qc/9706056 Inertia as the ``Threshold of Elasticity’’ of Quantum States
gr-qc/9706057 Tidal Stabilization of Rigidly Rotating, Fully Relativistic Neutron Stars
gr-qc/9706058 The self-screening Hawking atmosphere
gr-qc/9706059 Foundation of The Two dimensional Quantum Theory of Gravity
gr-qc/9706060 The random walks of a Schwarzschild black hole
gr-qc/9706061 Adiabatic Invariants and Scalar Fields in a de Sitter Space-Time
gr-qc/9706062 The TIGA technique for detecting gravitational waves with a spherical antenna
gr-qc/9706063 Singularities inside non-Abelian black holes
gr-qc/9706064 Stationary perturbations and infinitesimal rotations of static Einstein-Yang-Mills configurations with bosonic matter
gr-qc/9706065 Chaos in the Einstein-Yang-Mills Equations
gr-qc/9706066 Singular Regions in Black Hole Solutions in Higher Order Curvature Gravity
gr-qc/9706067 Power-law mass inflation in Einstein-Yang-Mills-Higgs black holes
gr-qc/9706068 Post-Riemannian Spacetimes Admit a Causal Structure
gr-qc/9706069 Geometrical Formulation of Quantum Mechanics
gr-qc/9706070 Riemannian and Teleparallel Descriptions of the Scalar Field Gravitational Interaction
gr-qc/9706071 On the sources of static plane symmetric vacuum space-times
gr-qc/9706072 Instability of cosmological event horizons of non-static global cosmic strings
gr-qc/9706073 Axial instability of rotating relativistic stars
gr-qc/9706074 Adiabatic Invariant Treatment of a Collapsing Sphere of Quantized Dust
gr-qc/9706075 A new class of unstable modes of rotating relativistic stars
gr-qc/9706076 Solutions of Quantum Gravity Coupled to the Scalar Field
gr-qc/9706077 Notes On The Born-Oppenheimer Approach In A Closed Dynamical System
gr-qc/9706078 On a global conformal invariant of initial data sets
gr-qc/9706079 Probing Black Holes and Relativistic Stars with Gravitational Waves
gr-qc/9706080 Supersymmetric quantum cosmology for Bianchi class A models
gr-qc/9706081 Classical and Quantum Shell Dynamics, and Vacuum Decay
gr-qc/9706082 Propagation Speed of Longitudinally Oscillating Gravitational and Electrical Fields
gr-qc/9706083 Physically valid black-hole interior models
gr-qc/9707001 Effective dynamics of self-gravitating extended objects
home | contact | terms of use | sitemap
Copyright © 2005-2024 - Scimetrica
Research articles
search articles
reviews guidelines
articles index
My Pages
my alerts
my messages
my reviews
my favorites
Members: 3658
Articles: 2'599'751
Articles rated: 2609
02 November 2024
Research articles
search articles
reviews guidelines
articles index
My Pages
my alerts
my messages
my reviews
my favorites
Members: 3658
Articles: 2'599'751
Articles rated: 2609
02 November 2024
Articles index
arXiv.org, 1997
Months: 1 2 3 4 5 6 7 8 9 10 11 12
arXiv.org, 6.1997
Results 401 to 600 of 1'104. [ 1 2 3 4 5 6 ] Next
cond-mat/9706089 Low-frequency Raman scattering in model disordered solids: percolators above threshold
cond-mat/9706090 Scaling Laws of Polyelectrolyte Adsorption
cond-mat/9706091 Orbital Kondo-effect from tunneling impurities
cond-mat/9706092 Instability of the marginal commutative model of tunneling centers interacting with metallic environment: Role of the electron-hole symmetry breaking
cond-mat/9706093 Analysis of the Interplay of Quantum Phases and Nonlinearity Applied to Dimers with Anharmonic Interactions
cond-mat/9706094 Particle-hopping Models of Vehicular Traffic: Distributions of Distance Headways and Distance Between Jams
cond-mat/9706095 Superconducting gap node spectroscopy using nonlinear electrodynamics
cond-mat/9706096 Random Mass Dirac Fermions in Doped Spin-Peierls and Spin-Ladder systems: One-Particle Properties and Boundary Effects
cond-mat/9706097 Renormalization Group Approach to Non-equilibrium Green Functions in Correlated Impurity Systems
cond-mat/9706098 Monte Carlo Study of Correlations Near the Ground State of the Triangular Antiferromagnetic Ising Model
cond-mat/9706099 Aging Exponents in Self-Organized Criticality
cond-mat/9706099 Aging Exponents in Self-Organized Criticality
cond-mat/9706101 Protein folding, anisotropic collapse and blue phases
cond-mat/9706102 Interface dynamics at the depinning transition
cond-mat/9706103 Relaxation process in a regime of quantum chaos
cond-mat/9706104 Dualities in Spin Ladders
cond-mat/9706105 Generalized CP^1 model from t_1-t_2-J model
cond-mat/9706106 Aperiodic Ising Quantum Chains
cond-mat/9706107 Anderson-localization versus delocalization of interacting fermions in one dimension
cond-mat/9706108 The Interpretation of Magnetisation and Entropy Jumps in the Flux-line Lattice
cond-mat/9706109 Exchange in the linear cuprate antiferromagnets Sr2CuO3 and Ca2CuO3
cond-mat/9706110 Melting of Quasi-Two-Dimensional Charge Stripes in La5/3Sr1/3NiO4
cond-mat/9706111 Theory of Interplay of Nuclear Magnetism and Superconductivity in AuIn2
cond-mat/9706112 Time ordering in the evolution of information processing and modulation systems
cond-mat/9706113 Calculations of the Knight Shift Anomalies in Heavy Electron Materials
cond-mat/9706114 First-order phase transitions in one-dimensional steady states
cond-mat/9706115 Icosahedral coincidence rotations
cond-mat/9706116 Random Tiling Transition in Three Dimensions
cond-mat/9706117 Random tiling quasicrystals in three dimensions
cond-mat/9706118 Low-lying excitations around a single vortex in a d-wave superconductor
cond-mat/9706119 Comment on `Formation of a Dodecagonal Quasicrystalline Phase in a Simple Monatomic Liquid’
cond-mat/9706120 Electrical Conduction in Nonlinear Composites
cond-mat/9706121 Direction dependent free energy singularity of the asymmetric six-vertex model
cond-mat/9706122 Coordination sequences for root lattices and related graphs
cond-mat/9706123 Susceptibilities of Sr(Cu_(1-x)Zn_x)_2O_3 Studied by Quantum Monte Carlo Simulation
cond-mat/9706124 Density matrix renormalization group for the Berezinskii-Kosterlitz-Thouless transition of the 19-vertex model
cond-mat/9706125 Protein folding and heteropolymers
cond-mat/9706126 The 1D t-J model with next-nearest neighbor hopping - breakdown of the Luttinger liquid?
cond-mat/9706127 Mobility Edge and Level Statistics of Random Tight-Binding Hamiltonians
cond-mat/9706128 Correlated ab-initio calculations for ground-state properties of II-VI semiconductors
cond-mat/9706129 Effects of two dimensional plasmons on the tunneling density of states
cond-mat/9706130 Effects of pressure on diffusion and vacancy formation in MgO from non-empirical free-energy integrations
cond-mat/9706131 Rotational Reconstruction of Sapphire (0001)
cond-mat/9706132 Modification of the Landau-Lifshitz Equation in the Presence of a Spin-Polarized Current in CMR and GMR Materials
cond-mat/9706133 Orthogonal Symmetric Polynomials Associated with the Calogero Model
cond-mat/9706134 Competition between crystalline electric field singlet and itinerant states of f electrons
cond-mat/9706135 The bulk Josephson responce of the d-g-wave cuprate superconductor
cond-mat/9706136 Integrable su(3) spin chain combining different representations
cond-mat/9706137 Electronic transport in a series of multiple arbitrary tunnel junctions
cond-mat/9706138 Identification of Nuclear Relaxation Processes in a Gapped Quantum Magnet: Proton NMR in the S=1/2 Heisenberg Ladder Cu2(C5H12N2)2Cl4
cond-mat/9706139 Inertial Mass of a Vortex in Cuprate Superconductors
cond-mat/9706140 Speckle from phase ordering systems
cond-mat/9706141 Airline Crew Scheduling Using Potts Mean Field Techniques
cond-mat/9706142 Statistical Properties of Unrestricted Crew Scheduling Problems
cond-mat/9706143 Spectral Function of 2D Fermi Liquids
cond-mat/9706144 Dynamic instabilities in resonant tunneling induced by a magnetic field
cond-mat/9706145 Doping Dependence of the Pseudogap State in the ab plane IR responce of La(2-x)Sr(x)CuO(4)
cond-mat/9706146 Field-induced incommensurate-to-commensurate transition in Ba_2CuGe_2O_7
cond-mat/9706147 Lightly Doped t-J Three-Leg Ladders - an Analog for the Underdoped Cuprates
cond-mat/9706148 Response of a $d_{x^2-y^2}$ Superconductor to a Zeeman Magnetic Field
cond-mat/9706149 Baryon Asymmetry of Universe: View from Superfluid 3He
cond-mat/9706150 Aspects of Dielectric Breakdown in a Model for Disordered Nonlinear Composites
cond-mat/9706151 Bose-Einstein to BCS Crossover Picture for High-T_c Cuprates
cond-mat/9706152 Thermodynamics of the anisotropic Heisenberg chain calculated by the density matrix renormalization group method
cond-mat/9706153 Triangular Antiferromagnets
cond-mat/9706154 Free energies of crystalline solids: a lattice-switch Monte Carlo method
cond-mat/9706155 Optical phonons in the reflectivity spectrum of FeSi
cond-mat/9706156 The Calogero Model: Integrable Structures and Orthogonal Basis
cond-mat/9706157 Fractional Quantum Hall Effect Measurements at Zero g-Factor
cond-mat/9706158 A single chain analysis of doped quasi one dimensional spin 1 compounds: paramagnetic versus spin 1/2 doping
cond-mat/9706159 Surface critical exponents for a three-dimensional modified spherical model
cond-mat/9706160 New model for surface fracture induced by dynamical stress
cond-mat/9706161 Threading dislocation lines in two-sided flux array decorations
cond-mat/9706162 Ground State Entropy of Potts Antiferromagnets: Bounds, Series, and Monte Carlo Measurements
cond-mat/9706163 Dynamics of Diblock Copolymers in Dilute Solutions
cond-mat/9706164 Bound states and impurity averaging in unconventional superconductors
cond-mat/9706165 Kondo Insulator: p-wave Bose Condensate of Excitons
cond-mat/9706166 Exclusion Statistics in Conformal Field Theory Spectra
cond-mat/9706167 Order versus Disorder in the Quantum Heisenberg Antiferromagnet on the Kagom{é lattice: an approach through exact spectra analysis
cond-mat/9706168 Critical Exponents for Granular Phase Transitions
cond-mat/9706169 Comment on ``One-Dimensional Disordered Bosonic Hubbard Model: A Density-Matrix Renormalization Group Study"
cond-mat/9706170 Disappearance of the Spin Gap in a Zn doped 2-Leg Ladder Compound Sr(Cu1-xZnx)2O3
cond-mat/9706171 Boundary Effects in the One Dimensional Coulomb Gas
cond-mat/9706172 Simulation of Quantum Field Theory and Gravity in Superfluid He-3
cond-mat/9706173 Charge and Spin Structures of a $d_{x^2 - y^2}$ Superconductor in the Proximity of an Antiferromagnetic Mott Insulator
cond-mat/9706174 A simple physical model of liquid-glass transition: intrinsic fluctuating interactions and random fields hidden in glass-forming liquids
cond-mat/9706175 Generalized calculation of magnetic coupling constants for Mott-Hubbard insulators: Application to ferromagnetic Cr compounds
cond-mat/9706176 Two-Order-Parameter Description of Liquids: Critical Phenomena and Phase Separation of Supercooled Liquids
cond-mat/9706177 Nuclear relaxation in the spin-1/2 antiferromagnetic chain compound Sr_2CuO_3 --- comparison between theories and experiments
cond-mat/9706178 Spatially nonuniform energy distribution of electrons in a submicron semiconductor film
cond-mat/9706179 Transition Temperature of Josephson Junction Arrays with Long-Range Interaction
cond-mat/9706180 Theory of thermoelectric effects when the temperature approximation is incorrect
cond-mat/9706181 Problem of formation of an emf in a semiconductor and its transfer to an external circuit
cond-mat/9706182 Time Dependent Floquet Theory and Absence of an Adiabatic Limit
cond-mat/9706183 Renormalization-group approach to the metal-insulator transitions in (DCNQI)_2M (DCNQI is N,N’-dicyanoquinonediimine and M=Ag, Cu)
cond-mat/9706184 Properties of High-Tc Single Crystals as Natural Interferometers in the THz Frequency Range
cond-mat/9706185 Kinetic Theory Approach to the SK Spin Glass Model with Glauber Dynamics
cond-mat/9706186 Band Formation during Gaseous Diffusion in Aerogels
cond-mat/9706187 Fermi-Bose Correspondence at Finite Temperature
cond-mat/9706188 Continuous renormalization for fermions and Fermi liquid theory
cond-mat/9706189 Coupling between Smectic and Twist Modes in Polymer Intercalated Smectics
cond-mat/9707001 Quantum phase transitions in electronic systems
cond-mat/9707002 Co-operative Two-Channel Kondo Effect
cond-mat/9707003 Local Moments in an Interacting Environment
cond-mat/9707004 New Vortex with Zero Fluxoid Quantum in a Superconductor Anomalous Hall Effect in High T_c and Low T_c Superconductors
cond-mat/9707005 Eigenvalue distribution of large random matrices, from one matrix to several coupled matrices
cond-mat/9707006 The Superfluid State of Atomic Li6 in a Magnetic Trap
cond-mat/9707007 Object orientation and visualization of physics in two dimensions
cond-mat/9707008 Invaded Cluster Dynamics for Frustrated Models
cond-mat/9707009 Ordering and Demixing Transitions in Multicomponent Widom-Rowlinson Models
cond-mat/9707009 Ordering and Demixing Transitions in Multicomponent Widom-Rowlinson Models
cond-mat/9707011 Theory of Anomalous Hall Effect in a Heavy fermion System with a Strong Anisotropic Crystal Field
cond-mat/9707012 Discrete scale invariance and complex dimensions
cond-mat/9707013 Multicanonical Methods vs. Molecular Dynamics vs. Monte Carlo: Comparison for Lennard-Jones Glasses
cond-mat/9707014 e-h Coherence and Charging Effects in Ultrasmall Metallic Grains
cond-mat/9707015 Analysis of a three-component model phase diagram by Catastrophe Theory
gr-qc/9706001 O(N) Quantum fields in curved spacetime
gr-qc/9706002 Forks in the Road, on the Way to Quantum Gravity
gr-qc/9706003 Toward a Complete Analysis of the Global Structure of Kerr-Newman Spacetime
gr-qc/9706004 Can extreme black holes have (long) Abelian Higgs hair?
gr-qc/9706005 Statistical mechanics of Kerr-Newman dilaton black holes and the bootstrap condition
gr-qc/9706006 Quantum inequalities in two dimensional Minkowski spacetime
gr-qc/9706007 Classicality, Matter-Antimatter Asymmetry, and Quantum-Gravity Deformed Uncertainty Relations
gr-qc/9706008 Particle creation and non-adiabatic transitions in quantum cosmology
gr-qc/9706009 On the structure of the energy-momentum and the spin currents in Dirac’s electron theory
gr-qc/9706009 On the structure of the energy-momentum and the spin currents in Dirac’s electron theory
gr-qc/9706011 Quantum Decay of Domain Walls In Cosmology I: Instanton Approach
gr-qc/9706012 Lorentz Invariance and the Cosmological Constant
gr-qc/9706013 Singularities and asymptotic behavior of the Tolman-Bondi model
gr-qc/9706014 Stability Analysis of Spherically Symmetric Star in Scalar-Tensor Theories of Gravity
gr-qc/9706015 Cylindrical analogue of NUT space: spacetime of a line gravomagnetic monopole
gr-qc/9706016 Quantum Squeezing and Late Time Classical Behavior of Massive Fields in Expanding Robertson-Walker Universe
gr-qc/9706017 Nucleosynthesis Constraints on Scalar-Tensor Theories of Gravity
gr-qc/9706018 Accelerated Detectors and Temperature in (Anti) de Sitter Spaces
gr-qc/9706019 Free Energy of Gravitating Fermions
gr-qc/9706020 Dynamics of a self-gravitating thin string in scalar-tensor theories of gravitation
gr-qc/9706021 Canonical Quantization Inside the Schwarzschild Black Hole
gr-qc/9706022 Gauge Invariant Hamiltonian Formalism for Spherically Symmetric Gravitating Shells
gr-qc/9706023 Do Vortex Filaments in a Superfluid Neutron Star Produce Gravimagnetic Forces ?
gr-qc/9706024 Cosmological CMBR dipole in open universes ?
gr-qc/9706025 Chaos in black holes surrounded by gravitational waves
gr-qc/9706026 Generating New Perfect-fluid Solutions From Known Ones II
gr-qc/9706027 Degenerate Metric Phase Boundaries
gr-qc/9706028 Gravitational Field of the Early Universe: I.Non-linear Scalar Field as the Source
gr-qc/9706029 Maximal Acceleration Is Nonrotating
gr-qc/9706030 Black Hole Entropy: a spacetime foam approach
gr-qc/9706031 New exact solutions in standard inflationary models
gr-qc/9706032 Abelian Higgs hair for extreme black holes and selection rules for snapping strings
gr-qc/9706033 Quantum Decay of Domain Walls in Cosmology II: Hamiltonian Approach
gr-qc/9706034 A fully (3+1)-D Regge calculus model of the Kasner cosmology
gr-qc/9706035 Symmetries in two-dimensional dilaton gravity with matter
gr-qc/9706036 Disks in Expanding FRW Universes
gr-qc/9706037 Birth of the Universe in string cosmology
gr-qc/9706038 Simplifying the spectral analysis of the volume operator
gr-qc/9706039 Investigation of the Interior of Colored Black Holes and the Extendability of Solutions of the Einstein-Yang/Mills Equations
gr-qc/9706040 Dynamics of Cosmic Strings in Schwarzschild Spacetime
gr-qc/9706041 On a certain formulation of the Einstein equations
gr-qc/9706042 Elliptic fibrations associated with the Einstein spacetimes
gr-qc/9706043 A new class of inhomogeneous cosmological models with Yang-Mills fields
gr-qc/9706044 Thermodynamics of Black Holes in Brans-Dicke Gravity
gr-qc/9706045 General-relativistic coupling between orbital motion and internal degrees of freedom for inspiraling binary neutron stars
gr-qc/9706046 Late time behaviour of the maximal slicing of the Schwarzschild black hole
gr-qc/9706047 Fuzzy Surfaces of Genus Zero
gr-qc/9706048 Non-Newtonian Dynamic Gravitational Field from The Longitudinally Asymmetric Rotating Objects
gr-qc/9706049 On Gravitational Repulsion
gr-qc/9706050 Gravitational excitons from extra dimensions
gr-qc/9706051 Reduced phase space formalism for spherically symmetric geometry with a massive dust shell
gr-qc/9706052 The Schwarzschild Solution in the 4-Dimensional Kaluza-Klein Description of The Einstein’s Equations
gr-qc/9706053 The fine tuning problem in pre-big-bang inflation
gr-qc/9706054 Quantum correction to thermodynamical entropy of black hole
gr-qc/9706055 Generalization Of Lorentz-Poincare Ether Theory To Quantum Gravity
gr-qc/9706056 Inertia as the ``Threshold of Elasticity’’ of Quantum States
gr-qc/9706057 Tidal Stabilization of Rigidly Rotating, Fully Relativistic Neutron Stars
gr-qc/9706058 The self-screening Hawking atmosphere
gr-qc/9706059 Foundation of The Two dimensional Quantum Theory of Gravity
gr-qc/9706060 The random walks of a Schwarzschild black hole
gr-qc/9706061 Adiabatic Invariants and Scalar Fields in a de Sitter Space-Time
gr-qc/9706062 The TIGA technique for detecting gravitational waves with a spherical antenna
gr-qc/9706063 Singularities inside non-Abelian black holes
gr-qc/9706064 Stationary perturbations and infinitesimal rotations of static Einstein-Yang-Mills configurations with bosonic matter
gr-qc/9706065 Chaos in the Einstein-Yang-Mills Equations
gr-qc/9706066 Singular Regions in Black Hole Solutions in Higher Order Curvature Gravity
gr-qc/9706067 Power-law mass inflation in Einstein-Yang-Mills-Higgs black holes
gr-qc/9706068 Post-Riemannian Spacetimes Admit a Causal Structure
gr-qc/9706069 Geometrical Formulation of Quantum Mechanics
gr-qc/9706070 Riemannian and Teleparallel Descriptions of the Scalar Field Gravitational Interaction
gr-qc/9706071 On the sources of static plane symmetric vacuum space-times
gr-qc/9706072 Instability of cosmological event horizons of non-static global cosmic strings
gr-qc/9706073 Axial instability of rotating relativistic stars
gr-qc/9706074 Adiabatic Invariant Treatment of a Collapsing Sphere of Quantized Dust
gr-qc/9706075 A new class of unstable modes of rotating relativistic stars
gr-qc/9706076 Solutions of Quantum Gravity Coupled to the Scalar Field
gr-qc/9706077 Notes On The Born-Oppenheimer Approach In A Closed Dynamical System
gr-qc/9706078 On a global conformal invariant of initial data sets
gr-qc/9706079 Probing Black Holes and Relativistic Stars with Gravitational Waves
gr-qc/9706080 Supersymmetric quantum cosmology for Bianchi class A models
gr-qc/9706081 Classical and Quantum Shell Dynamics, and Vacuum Decay
gr-qc/9706082 Propagation Speed of Longitudinally Oscillating Gravitational and Electrical Fields
gr-qc/9706083 Physically valid black-hole interior models
gr-qc/9707001 Effective dynamics of self-gravitating extended objects
arXiv.org, 1997
Months: 1 2 3 4 5 6 7 8 9 10 11 12
arXiv.org, 6.1997
Results 401 to 600 of 1'104. [ 1 2 3 4 5 6 ] Next
cond-mat/9706089 Low-frequency Raman scattering in model disordered solids: percolators above threshold
cond-mat/9706090 Scaling Laws of Polyelectrolyte Adsorption
cond-mat/9706091 Orbital Kondo-effect from tunneling impurities
cond-mat/9706092 Instability of the marginal commutative model of tunneling centers interacting with metallic environment: Role of the electron-hole symmetry breaking
cond-mat/9706093 Analysis of the Interplay of Quantum Phases and Nonlinearity Applied to Dimers with Anharmonic Interactions
cond-mat/9706094 Particle-hopping Models of Vehicular Traffic: Distributions of Distance Headways and Distance Between Jams
cond-mat/9706095 Superconducting gap node spectroscopy using nonlinear electrodynamics
cond-mat/9706096 Random Mass Dirac Fermions in Doped Spin-Peierls and Spin-Ladder systems: One-Particle Properties and Boundary Effects
cond-mat/9706097 Renormalization Group Approach to Non-equilibrium Green Functions in Correlated Impurity Systems
cond-mat/9706098 Monte Carlo Study of Correlations Near the Ground State of the Triangular Antiferromagnetic Ising Model
cond-mat/9706099 Aging Exponents in Self-Organized Criticality
cond-mat/9706099 Aging Exponents in Self-Organized Criticality
cond-mat/9706101 Protein folding, anisotropic collapse and blue phases
cond-mat/9706102 Interface dynamics at the depinning transition
cond-mat/9706103 Relaxation process in a regime of quantum chaos
cond-mat/9706104 Dualities in Spin Ladders
cond-mat/9706105 Generalized CP^1 model from t_1-t_2-J model
cond-mat/9706106 Aperiodic Ising Quantum Chains
cond-mat/9706107 Anderson-localization versus delocalization of interacting fermions in one dimension
cond-mat/9706108 The Interpretation of Magnetisation and Entropy Jumps in the Flux-line Lattice
cond-mat/9706109 Exchange in the linear cuprate antiferromagnets Sr2CuO3 and Ca2CuO3
cond-mat/9706110 Melting of Quasi-Two-Dimensional Charge Stripes in La5/3Sr1/3NiO4
cond-mat/9706111 Theory of Interplay of Nuclear Magnetism and Superconductivity in AuIn2
cond-mat/9706112 Time ordering in the evolution of information processing and modulation systems
cond-mat/9706113 Calculations of the Knight Shift Anomalies in Heavy Electron Materials
cond-mat/9706114 First-order phase transitions in one-dimensional steady states
cond-mat/9706115 Icosahedral coincidence rotations
cond-mat/9706116 Random Tiling Transition in Three Dimensions
cond-mat/9706117 Random tiling quasicrystals in three dimensions
cond-mat/9706118 Low-lying excitations around a single vortex in a d-wave superconductor
cond-mat/9706119 Comment on `Formation of a Dodecagonal Quasicrystalline Phase in a Simple Monatomic Liquid’
cond-mat/9706120 Electrical Conduction in Nonlinear Composites
cond-mat/9706121 Direction dependent free energy singularity of the asymmetric six-vertex model
cond-mat/9706122 Coordination sequences for root lattices and related graphs
cond-mat/9706123 Susceptibilities of Sr(Cu_(1-x)Zn_x)_2O_3 Studied by Quantum Monte Carlo Simulation
cond-mat/9706124 Density matrix renormalization group for the Berezinskii-Kosterlitz-Thouless transition of the 19-vertex model
cond-mat/9706125 Protein folding and heteropolymers
cond-mat/9706126 The 1D t-J model with next-nearest neighbor hopping - breakdown of the Luttinger liquid?
cond-mat/9706127 Mobility Edge and Level Statistics of Random Tight-Binding Hamiltonians
cond-mat/9706128 Correlated ab-initio calculations for ground-state properties of II-VI semiconductors
cond-mat/9706129 Effects of two dimensional plasmons on the tunneling density of states
cond-mat/9706130 Effects of pressure on diffusion and vacancy formation in MgO from non-empirical free-energy integrations
cond-mat/9706131 Rotational Reconstruction of Sapphire (0001)
cond-mat/9706132 Modification of the Landau-Lifshitz Equation in the Presence of a Spin-Polarized Current in CMR and GMR Materials
cond-mat/9706133 Orthogonal Symmetric Polynomials Associated with the Calogero Model
cond-mat/9706134 Competition between crystalline electric field singlet and itinerant states of f electrons
cond-mat/9706135 The bulk Josephson responce of the d-g-wave cuprate superconductor
cond-mat/9706136 Integrable su(3) spin chain combining different representations
cond-mat/9706137 Electronic transport in a series of multiple arbitrary tunnel junctions
cond-mat/9706138 Identification of Nuclear Relaxation Processes in a Gapped Quantum Magnet: Proton NMR in the S=1/2 Heisenberg Ladder Cu2(C5H12N2)2Cl4
cond-mat/9706139 Inertial Mass of a Vortex in Cuprate Superconductors
cond-mat/9706140 Speckle from phase ordering systems
cond-mat/9706141 Airline Crew Scheduling Using Potts Mean Field Techniques
cond-mat/9706142 Statistical Properties of Unrestricted Crew Scheduling Problems
cond-mat/9706143 Spectral Function of 2D Fermi Liquids
cond-mat/9706144 Dynamic instabilities in resonant tunneling induced by a magnetic field
cond-mat/9706145 Doping Dependence of the Pseudogap State in the ab plane IR responce of La(2-x)Sr(x)CuO(4)
cond-mat/9706146 Field-induced incommensurate-to-commensurate transition in Ba_2CuGe_2O_7
cond-mat/9706147 Lightly Doped t-J Three-Leg Ladders - an Analog for the Underdoped Cuprates
cond-mat/9706148 Response of a $d_{x^2-y^2}$ Superconductor to a Zeeman Magnetic Field
cond-mat/9706149 Baryon Asymmetry of Universe: View from Superfluid 3He
cond-mat/9706150 Aspects of Dielectric Breakdown in a Model for Disordered Nonlinear Composites
cond-mat/9706151 Bose-Einstein to BCS Crossover Picture for High-T_c Cuprates
cond-mat/9706152 Thermodynamics of the anisotropic Heisenberg chain calculated by the density matrix renormalization group method
cond-mat/9706153 Triangular Antiferromagnets
cond-mat/9706154 Free energies of crystalline solids: a lattice-switch Monte Carlo method
cond-mat/9706155 Optical phonons in the reflectivity spectrum of FeSi
cond-mat/9706156 The Calogero Model: Integrable Structures and Orthogonal Basis
cond-mat/9706157 Fractional Quantum Hall Effect Measurements at Zero g-Factor
cond-mat/9706158 A single chain analysis of doped quasi one dimensional spin 1 compounds: paramagnetic versus spin 1/2 doping
cond-mat/9706159 Surface critical exponents for a three-dimensional modified spherical model
cond-mat/9706160 New model for surface fracture induced by dynamical stress
cond-mat/9706161 Threading dislocation lines in two-sided flux array decorations
cond-mat/9706162 Ground State Entropy of Potts Antiferromagnets: Bounds, Series, and Monte Carlo Measurements
cond-mat/9706163 Dynamics of Diblock Copolymers in Dilute Solutions
cond-mat/9706164 Bound states and impurity averaging in unconventional superconductors
cond-mat/9706165 Kondo Insulator: p-wave Bose Condensate of Excitons
cond-mat/9706166 Exclusion Statistics in Conformal Field Theory Spectra
cond-mat/9706167 Order versus Disorder in the Quantum Heisenberg Antiferromagnet on the Kagom{é lattice: an approach through exact spectra analysis
cond-mat/9706168 Critical Exponents for Granular Phase Transitions
cond-mat/9706169 Comment on ``One-Dimensional Disordered Bosonic Hubbard Model: A Density-Matrix Renormalization Group Study"
cond-mat/9706170 Disappearance of the Spin Gap in a Zn doped 2-Leg Ladder Compound Sr(Cu1-xZnx)2O3
cond-mat/9706171 Boundary Effects in the One Dimensional Coulomb Gas
cond-mat/9706172 Simulation of Quantum Field Theory and Gravity in Superfluid He-3
cond-mat/9706173 Charge and Spin Structures of a $d_{x^2 - y^2}$ Superconductor in the Proximity of an Antiferromagnetic Mott Insulator
cond-mat/9706174 A simple physical model of liquid-glass transition: intrinsic fluctuating interactions and random fields hidden in glass-forming liquids
cond-mat/9706175 Generalized calculation of magnetic coupling constants for Mott-Hubbard insulators: Application to ferromagnetic Cr compounds
cond-mat/9706176 Two-Order-Parameter Description of Liquids: Critical Phenomena and Phase Separation of Supercooled Liquids
cond-mat/9706177 Nuclear relaxation in the spin-1/2 antiferromagnetic chain compound Sr_2CuO_3 --- comparison between theories and experiments
cond-mat/9706178 Spatially nonuniform energy distribution of electrons in a submicron semiconductor film
cond-mat/9706179 Transition Temperature of Josephson Junction Arrays with Long-Range Interaction
cond-mat/9706180 Theory of thermoelectric effects when the temperature approximation is incorrect
cond-mat/9706181 Problem of formation of an emf in a semiconductor and its transfer to an external circuit
cond-mat/9706182 Time Dependent Floquet Theory and Absence of an Adiabatic Limit
cond-mat/9706183 Renormalization-group approach to the metal-insulator transitions in (DCNQI)_2M (DCNQI is N,N’-dicyanoquinonediimine and M=Ag, Cu)
cond-mat/9706184 Properties of High-Tc Single Crystals as Natural Interferometers in the THz Frequency Range
cond-mat/9706185 Kinetic Theory Approach to the SK Spin Glass Model with Glauber Dynamics
cond-mat/9706186 Band Formation during Gaseous Diffusion in Aerogels
cond-mat/9706187 Fermi-Bose Correspondence at Finite Temperature
cond-mat/9706188 Continuous renormalization for fermions and Fermi liquid theory
cond-mat/9706189 Coupling between Smectic and Twist Modes in Polymer Intercalated Smectics
cond-mat/9707001 Quantum phase transitions in electronic systems
cond-mat/9707002 Co-operative Two-Channel Kondo Effect
cond-mat/9707003 Local Moments in an Interacting Environment
cond-mat/9707004 New Vortex with Zero Fluxoid Quantum in a Superconductor Anomalous Hall Effect in High T_c and Low T_c Superconductors
cond-mat/9707005 Eigenvalue distribution of large random matrices, from one matrix to several coupled matrices
cond-mat/9707006 The Superfluid State of Atomic Li6 in a Magnetic Trap
cond-mat/9707007 Object orientation and visualization of physics in two dimensions
cond-mat/9707008 Invaded Cluster Dynamics for Frustrated Models
cond-mat/9707009 Ordering and Demixing Transitions in Multicomponent Widom-Rowlinson Models
cond-mat/9707009 Ordering and Demixing Transitions in Multicomponent Widom-Rowlinson Models
cond-mat/9707011 Theory of Anomalous Hall Effect in a Heavy fermion System with a Strong Anisotropic Crystal Field
cond-mat/9707012 Discrete scale invariance and complex dimensions
cond-mat/9707013 Multicanonical Methods vs. Molecular Dynamics vs. Monte Carlo: Comparison for Lennard-Jones Glasses
cond-mat/9707014 e-h Coherence and Charging Effects in Ultrasmall Metallic Grains
cond-mat/9707015 Analysis of a three-component model phase diagram by Catastrophe Theory
gr-qc/9706001 O(N) Quantum fields in curved spacetime
gr-qc/9706002 Forks in the Road, on the Way to Quantum Gravity
gr-qc/9706003 Toward a Complete Analysis of the Global Structure of Kerr-Newman Spacetime
gr-qc/9706004 Can extreme black holes have (long) Abelian Higgs hair?
gr-qc/9706005 Statistical mechanics of Kerr-Newman dilaton black holes and the bootstrap condition
gr-qc/9706006 Quantum inequalities in two dimensional Minkowski spacetime
gr-qc/9706007 Classicality, Matter-Antimatter Asymmetry, and Quantum-Gravity Deformed Uncertainty Relations
gr-qc/9706008 Particle creation and non-adiabatic transitions in quantum cosmology
gr-qc/9706009 On the structure of the energy-momentum and the spin currents in Dirac’s electron theory
gr-qc/9706009 On the structure of the energy-momentum and the spin currents in Dirac’s electron theory
gr-qc/9706011 Quantum Decay of Domain Walls In Cosmology I: Instanton Approach
gr-qc/9706012 Lorentz Invariance and the Cosmological Constant
gr-qc/9706013 Singularities and asymptotic behavior of the Tolman-Bondi model
gr-qc/9706014 Stability Analysis of Spherically Symmetric Star in Scalar-Tensor Theories of Gravity
gr-qc/9706015 Cylindrical analogue of NUT space: spacetime of a line gravomagnetic monopole
gr-qc/9706016 Quantum Squeezing and Late Time Classical Behavior of Massive Fields in Expanding Robertson-Walker Universe
gr-qc/9706017 Nucleosynthesis Constraints on Scalar-Tensor Theories of Gravity
gr-qc/9706018 Accelerated Detectors and Temperature in (Anti) de Sitter Spaces
gr-qc/9706019 Free Energy of Gravitating Fermions
gr-qc/9706020 Dynamics of a self-gravitating thin string in scalar-tensor theories of gravitation
gr-qc/9706021 Canonical Quantization Inside the Schwarzschild Black Hole
gr-qc/9706022 Gauge Invariant Hamiltonian Formalism for Spherically Symmetric Gravitating Shells
gr-qc/9706023 Do Vortex Filaments in a Superfluid Neutron Star Produce Gravimagnetic Forces ?
gr-qc/9706024 Cosmological CMBR dipole in open universes ?
gr-qc/9706025 Chaos in black holes surrounded by gravitational waves
gr-qc/9706026 Generating New Perfect-fluid Solutions From Known Ones II
gr-qc/9706027 Degenerate Metric Phase Boundaries
gr-qc/9706028 Gravitational Field of the Early Universe: I.Non-linear Scalar Field as the Source
gr-qc/9706029 Maximal Acceleration Is Nonrotating
gr-qc/9706030 Black Hole Entropy: a spacetime foam approach
gr-qc/9706031 New exact solutions in standard inflationary models
gr-qc/9706032 Abelian Higgs hair for extreme black holes and selection rules for snapping strings
gr-qc/9706033 Quantum Decay of Domain Walls in Cosmology II: Hamiltonian Approach
gr-qc/9706034 A fully (3+1)-D Regge calculus model of the Kasner cosmology
gr-qc/9706035 Symmetries in two-dimensional dilaton gravity with matter
gr-qc/9706036 Disks in Expanding FRW Universes
gr-qc/9706037 Birth of the Universe in string cosmology
gr-qc/9706038 Simplifying the spectral analysis of the volume operator
gr-qc/9706039 Investigation of the Interior of Colored Black Holes and the Extendability of Solutions of the Einstein-Yang/Mills Equations
gr-qc/9706040 Dynamics of Cosmic Strings in Schwarzschild Spacetime
gr-qc/9706041 On a certain formulation of the Einstein equations
gr-qc/9706042 Elliptic fibrations associated with the Einstein spacetimes
gr-qc/9706043 A new class of inhomogeneous cosmological models with Yang-Mills fields
gr-qc/9706044 Thermodynamics of Black Holes in Brans-Dicke Gravity
gr-qc/9706045 General-relativistic coupling between orbital motion and internal degrees of freedom for inspiraling binary neutron stars
gr-qc/9706046 Late time behaviour of the maximal slicing of the Schwarzschild black hole
gr-qc/9706047 Fuzzy Surfaces of Genus Zero
gr-qc/9706048 Non-Newtonian Dynamic Gravitational Field from The Longitudinally Asymmetric Rotating Objects
gr-qc/9706049 On Gravitational Repulsion
gr-qc/9706050 Gravitational excitons from extra dimensions
gr-qc/9706051 Reduced phase space formalism for spherically symmetric geometry with a massive dust shell
gr-qc/9706052 The Schwarzschild Solution in the 4-Dimensional Kaluza-Klein Description of The Einstein’s Equations
gr-qc/9706053 The fine tuning problem in pre-big-bang inflation
gr-qc/9706054 Quantum correction to thermodynamical entropy of black hole
gr-qc/9706055 Generalization Of Lorentz-Poincare Ether Theory To Quantum Gravity
gr-qc/9706056 Inertia as the ``Threshold of Elasticity’’ of Quantum States
gr-qc/9706057 Tidal Stabilization of Rigidly Rotating, Fully Relativistic Neutron Stars
gr-qc/9706058 The self-screening Hawking atmosphere
gr-qc/9706059 Foundation of The Two dimensional Quantum Theory of Gravity
gr-qc/9706060 The random walks of a Schwarzschild black hole
gr-qc/9706061 Adiabatic Invariants and Scalar Fields in a de Sitter Space-Time
gr-qc/9706062 The TIGA technique for detecting gravitational waves with a spherical antenna
gr-qc/9706063 Singularities inside non-Abelian black holes
gr-qc/9706064 Stationary perturbations and infinitesimal rotations of static Einstein-Yang-Mills configurations with bosonic matter
gr-qc/9706065 Chaos in the Einstein-Yang-Mills Equations
gr-qc/9706066 Singular Regions in Black Hole Solutions in Higher Order Curvature Gravity
gr-qc/9706067 Power-law mass inflation in Einstein-Yang-Mills-Higgs black holes
gr-qc/9706068 Post-Riemannian Spacetimes Admit a Causal Structure
gr-qc/9706069 Geometrical Formulation of Quantum Mechanics
gr-qc/9706070 Riemannian and Teleparallel Descriptions of the Scalar Field Gravitational Interaction
gr-qc/9706071 On the sources of static plane symmetric vacuum space-times
gr-qc/9706072 Instability of cosmological event horizons of non-static global cosmic strings
gr-qc/9706073 Axial instability of rotating relativistic stars
gr-qc/9706074 Adiabatic Invariant Treatment of a Collapsing Sphere of Quantized Dust
gr-qc/9706075 A new class of unstable modes of rotating relativistic stars
gr-qc/9706076 Solutions of Quantum Gravity Coupled to the Scalar Field
gr-qc/9706077 Notes On The Born-Oppenheimer Approach In A Closed Dynamical System
gr-qc/9706078 On a global conformal invariant of initial data sets
gr-qc/9706079 Probing Black Holes and Relativistic Stars with Gravitational Waves
gr-qc/9706080 Supersymmetric quantum cosmology for Bianchi class A models
gr-qc/9706081 Classical and Quantum Shell Dynamics, and Vacuum Decay
gr-qc/9706082 Propagation Speed of Longitudinally Oscillating Gravitational and Electrical Fields
gr-qc/9706083 Physically valid black-hole interior models
gr-qc/9707001 Effective dynamics of self-gravitating extended objects
home | contact | terms of use | sitemap
Copyright © 2005-2024 - Scimetrica
|
{"url":"http://science-advisor.net/article/index.php?source=arxiv&year=1997&month=6&s=400","timestamp":"2024-11-02T06:11:08Z","content_type":"text/html","content_length":"41470","record_id":"<urn:uuid:a5aa1729-daa2-42e0-ae78-f4fc37ed388e>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00439.warc.gz"}
|
Syllabus for
Syllabus for College Algebra
Prerequisites: Two units of high school algebra and appropriate mathematics
placement result OR completion of Math 080.
Catalog Description: Functions (e.g., polynomial, exponential, and logarith-
mic). Zeros of polynomials . Solutions of systems of equations and in-
equalities. Triangle trigonometry . Selected topics from algebra such as
matrices and determinants, and the binomial theorem .
Objectives: After completing this course the student will be able to:
1. Determine the domain and codomain of relations and domain and
image set of functions.
2. Perform binary operations on functions .
3. Find the composite of two functions and determine its domain and
image set.
4. Determine if the inverse of a function exists and relate the graphs of
the function and its inverse.
5. Find the formula for the inverse of a one-to-one function.
6. Evaluate and graph functions including piece-wise defined functions.
7. Apply symmetries, reflections, and translations to curve-sketching.
8. Use technology to t curves to points in the xy-plane.
9. Solve exponential and logarithmic equations .
10. Apply exponential and logarithmic functions to solve real-world prob-
11. Apply polynomial and synthetic division to finding zeros of a poly-
12. Apply the Remainder, Factor , and Rational Root Theorems to de-
termine zeros of a polynomial.
13. Sketch the graphs of polynomial functions.
14. Approximate zeros of polynomials.
15. Solve systems of equations and inequalities .
16. Evaluate and graph rational functions .
17. Graph a convex polygon to represent a given set of inequalities.
18. Determine the vertices of a convex polygon to maximize or minimize
a function of two variables .
19. Calculate the sum and product of two matrices when defined.
20. Perform scalar multiplication .
21. Apply the inverse of a (2 × 2 or 3 × 3) matrix to find the solution of
Ax = b.
22. Find a partial fraction decomposition for a rational function.
Text: Algebra & Trigonometry, Second Edition, John W. Coburn, McGraw-
Hill, ISBN: 007-808730-9
MathZone computer supplement is also required.
Material Covered: 2.4-2.8, Modeling with Technology 1, 1.4, 3.1-3.5, 4.1-4.5,
8.1-8.4, 9.1-9.4
Office Hours: Monday 9-10 A.M. and 2-4 P.M.
Tuesday 8-10 A.M.
Wednesday 9-10 A.M. and 2-3 P.M.
Thursday 8-11 A.M
Friday 9-10 A.M. Other times are available by appointment.
Calculator: A graphing calculator is required for this course. Any graphing
calculator is acceptable as long as it is not capable of symbolic computa-
tions like the TI -89 or TI-92. If you prefer to use a graphing calculator
that is not one of the TI series, I must approve it before you may use it
on tests.
Grading: There will be online homework assignments, 12 online quizzes worth
10 points each, and five tests worth 100 points during the course. Tests
will be given on September 25, October 16, November 6, and November
20, and December 9. There are NO make ups for tests. If you know ahead
of time that you will be missing a test, see me to arrange a time to take
it EARLIER. The final exam is scheduled for Wednesday, December 16,
from 5:15-7:15 P.M. The location will given at a later date. The final exam
cannot be taken at any other time. ALL graded work is to be done without
collaboration. You may not share a calculator with another person during
a test. The course grades will be computed as follows:
│Five Tests ││
│Quiz Total ││
│Homework Total ││
│Final Exam ││
The percentages translate into the grades as follows:
MathZone: There will be online homework assignments and quizzes that you
will be expected to do for this course. You will have as many tries as
you wish on each homework problem until you get it correct. Quizzes
will give you one attempt at each question. Homework assignments will
be available for 2 weeks after they are assigned or until the final exam
begins, whichever is soon. Quizzes will be assigned on Thursday nights
and will be available until 11:59 P.M. the following Tuesday. The Section
Enrollment Code is: EA8-8A-DAE.
Academic Accommodations: If you are eligible for academic accommoda-
tions due to a disability, please supply a letter from Student Academic
Support regarding the accommodations.
Attendance: Attendance is expected for all class meetings. In most cases, lack
of attendance leads to lower grades in the course.
Extra Help: If you need any help outside of class, feel free to drop by my office
to see if I am available. There is also a math lab where you may seek extra
help. It is open from 9 A.M. to 3 P.M. MTWR in HUB103A and from 4
P.M. to 9 P.M. MTWR in HU408.
|
{"url":"https://softmath.com/tutorials-3/reducing-fractions/syllabus-for-college-algebra-4.html","timestamp":"2024-11-14T13:15:08Z","content_type":"text/html","content_length":"37147","record_id":"<urn:uuid:71a9d8a2-7dcc-4239-a77a-cff134a46c8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00034.warc.gz"}
|
The model_selection package
The model_selection package¶
Surprise provides various tools to run cross-validation procedures and search the best parameters for a prediction algorithm. The tools presented here are all heavily inspired from the excellent
scikit learn library.
Cross validation iterators¶
The model_selection.split module contains various cross-validation iterators. Design and tools are inspired from the mighty scikit learn.
The available iterators are:
KFold A basic cross-validation iterator.
RepeatedKFold Repeated KFold cross validator.
ShuffleSplit A basic cross-validation iterator with random trainsets and testsets.
LeaveOneOut Cross-validation iterator where each user has exactly one rating in the testset.
PredefinedKFold A cross-validation iterator to when a dataset has been loaded with the load_from_folds method.
This module also contains a function for splitting datasets into trainset and testset:
train_test_split Split a dataset into trainset and testset.
Cross validation¶
surprise.model_selection.validation.cross_validate(algo, data, measures=['rmse', 'mae'], cv=None, return_train_measures=False, n_jobs=1, pre_dispatch='2*n_jobs', verbose=False)[source]¶
Run a cross validation procedure for a given algorithm, reporting accuracy measures and computation times.
See an example in the User Guide.
☆ algo (AlgoBase) – The algorithm to evaluate.
☆ data (Dataset) – The dataset on which to evaluate the algorithm.
☆ measures (list of string) – The performance measures to compute. Allowed names are function names as defined in the accuracy module. Default is ['rmse', 'mae'].
☆ cv (cross-validation iterator, int or None) – Determines how the data parameter will be split (i.e. how trainsets and testsets will be defined). If an int is passed, KFold is used with
the appropriate n_splits parameter. If None, KFold is used with n_splits=5.
☆ return_train_measures (bool) – Whether to compute performance measures on the trainsets. Default is False.
☆ n_jobs (int) –
The maximum number of folds evaluated in parallel.
○ If -1, all CPUs are used.
○ If 1 is given, no parallel computing code is used at all, which is useful for debugging.
○ For n_jobs below -1, (n_cpus + n_jobs + 1) are used. For example, with n_jobs = -2 all CPUs but one are used.
Default is 1.
☆ pre_dispatch (int or string) –
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
○ None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs.
○ An int, giving the exact number of total jobs that are spawned.
○ A string, giving an expression as a function of n_jobs, as in '2*n_jobs'.
Default is '2*n_jobs'.
☆ verbose (int) – If True accuracy measures for each split are printed, as well as train and test times. Averages and standard deviations over all splits are also reported. Default is False
: nothing is printed.
A dict with the following keys:
○ 'test_*' where * corresponds to a lower-case accuracy measure, e.g. 'test_rmse': numpy array with accuracy values for each testset.
○ 'train_*' where * corresponds to a lower-case accuracy measure, e.g. 'train_rmse': numpy array with accuracy values for each trainset. Only available if return_train_measures is True.
○ 'fit_time': numpy array with the training time in seconds for each split.
○ 'test_time': numpy array with the testing time in seconds for each split.
Return type:
Parameter search¶
class surprise.model_selection.search.GridSearchCV(algo_class, param_grid, measures=['rmse', 'mae'], cv=None, refit=False, return_train_measures=False, n_jobs=1, pre_dispatch='2*n_jobs',
The GridSearchCV class computes accuracy metrics for an algorithm on various combinations of parameters, over a cross-validation procedure. This is useful for finding the best set of parameters
for a prediction algorithm. It is analogous to GridSearchCV from scikit-learn.
See an example in the User Guide.
☆ algo_class (AlgoBase) – The class of the algorithm to evaluate.
☆ param_grid (dict) – Dictionary with algorithm parameters as keys and list of values as keys. All combinations will be evaluated with desired algorithm. Dict parameters such as sim_options
require special treatment, see this note.
☆ measures (list of string) – The performance measures to compute. Allowed names are function names as defined in the accuracy module. Default is ['rmse', 'mae'].
☆ cv (cross-validation iterator, int or None) – Determines how the data parameter will be split (i.e. how trainsets and testsets will be defined). If an int is passed, KFold is used with
the appropriate n_splits parameter. If None, KFold is used with n_splits=5.
☆ refit (bool or str) – If True, refit the algorithm on the whole dataset using the set of parameters that gave the best average performance for the first measure of measures. Other
measures can be used by passing a string (corresponding to the measure name). Then, you can use the test() and predict() methods. refit can only be used if the data parameter given to fit
() hasn’t been loaded with load_from_folds(). Default is False.
☆ return_train_measures (bool) – Whether to compute performance measures on the trainsets. If True, the cv_results attribute will also contain measures for trainsets. Default is False.
☆ n_jobs (int) –
The maximum number of parallel training procedures.
○ If -1, all CPUs are used.
○ If 1 is given, no parallel computing code is used at all, which is useful for debugging.
○ For n_jobs below -1, (n_cpus + n_jobs + 1) are used. For example, with n_jobs = -2 all CPUs but one are used.
Default is 1.
☆ pre_dispatch (int or string) –
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
○ None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs.
○ An int, giving the exact number of total jobs that are spawned.
○ A string, giving an expression as a function of n_jobs, as in '2*n_jobs'.
Default is '2*n_jobs'.
☆ joblib_verbose (int) – Controls the verbosity of joblib: the higher, the more messages.
Using an accuracy measure as key, get the algorithm that gave the best accuracy results for the chosen measure, averaged over all splits.
dict of AlgoBase
Using an accuracy measure as key, get the best average score achieved for that measure.
dict of floats
Using an accuracy measure as key, get the parameters combination that gave the best accuracy results for the chosen measure (on average).
dict of dicts
Using an accuracy measure as key, get the index that can be used with cv_results that achieved the highest accuracy for that measure (on average).
dict of ints
A dict that contains accuracy measures over all splits, as well as train and test time for each parameter combination. Can be imported into a pandas DataFrame (see example).
dict of arrays
Runs the fit() method of the algorithm for all parameter combinations, over different splits given by the cv parameter.
data (Dataset) – The dataset on which to evaluate the algorithm, in parallel.
Call predict() on the estimator with the best found parameters (according the the refit parameter). See AlgoBase.predict().
Only available if refit is not False.
test(testset, verbose=False)¶
Call test() on the estimator with the best found parameters (according the the refit parameter). See AlgoBase.test().
Only available if refit is not False.
class surprise.model_selection.search.RandomizedSearchCV(algo_class, param_distributions, n_iter=10, measures=['rmse', 'mae'], cv=None, refit=False, return_train_measures=False, n_jobs=1,
pre_dispatch='2*n_jobs', random_state=None, joblib_verbose=0)[source]¶
The RandomizedSearchCV class computes accuracy metrics for an algorithm on various combinations of parameters, over a cross-validation procedure. As opposed to GridSearchCV, which uses an
exhaustive combinatorial approach, RandomizedSearchCV samples randomly from the parameter space. This is useful for finding the best set of parameters for a prediction algorithm, especially using
a coarse to fine approach. It is analogous to RandomizedSearchCV from scikit-learn.
See an example in the User Guide.
☆ algo_class (AlgoBase) – The class of the algorithm to evaluate.
☆ param_distributions (dict) – Dictionary with algorithm parameters as keys and distributions or lists of parameters to try. Distributions must provide a rvs method for sampling (such as
those from scipy.stats.distributions). If a list is given, it is sampled uniformly. Parameters will be sampled n_iter times.
☆ n_iter (int) – Number of times parameter settings are sampled. Default is 10.
☆ measures (list of string) – The performance measures to compute. Allowed names are function names as defined in the accuracy module. Default is ['rmse', 'mae'].
☆ cv (cross-validation iterator, int or None) – Determines how the data parameter will be split (i.e. how trainsets and testsets will be defined). If an int is passed, KFold is used with
the appropriate n_splits parameter. If None, KFold is used with n_splits=5.
☆ refit (bool or str) – If True, refit the algorithm on the whole dataset using the set of parameters that gave the best average performance for the first measure of measures. Other
measures can be used by passing a string (corresponding to the measure name). Then, you can use the test() and predict() methods. refit can only be used if the data parameter given to fit
() hasn’t been loaded with load_from_folds(). Default is False.
☆ return_train_measures (bool) – Whether to compute performance measures on the trainsets. If True, the cv_results attribute will also contain measures for trainsets. Default is False.
☆ n_jobs (int) –
The maximum number of parallel training procedures.
○ If -1, all CPUs are used.
○ If 1 is given, no parallel computing code is used at all, which is useful for debugging.
○ For n_jobs below -1, (n_cpus + n_jobs + 1) are used. For example, with n_jobs = -2 all CPUs but one are used.
Default is 1.
☆ pre_dispatch (int or string) –
Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
○ None, in which case all the jobs are immediately created and spawned. Use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning of the jobs.
○ An int, giving the exact number of total jobs that are spawned.
○ A string, giving an expression as a function of n_jobs, as in '2*n_jobs'.
Default is '2*n_jobs'.
☆ random_state (int, RandomState or None) – Pseudo random number generator seed used for random uniform sampling from lists of possible values instead of scipy.stats distributions. If int,
random_state is the seed used by the random number generator. If RandomState instance, random_state is the random number generator. If None, the random number generator is the RandomState
instance used by np.random. Default is None.
☆ joblib_verbose (int) – Controls the verbosity of joblib: the higher, the more messages.
Using an accuracy measure as key, get the algorithm that gave the best accuracy results for the chosen measure, averaged over all splits.
dict of AlgoBase
Using an accuracy measure as key, get the best average score achieved for that measure.
dict of floats
Using an accuracy measure as key, get the parameters combination that gave the best accuracy results for the chosen measure (on average).
dict of dicts
Using an accuracy measure as key, get the index that can be used with cv_results that achieved the highest accuracy for that measure (on average).
dict of ints
A dict that contains accuracy measures over all splits, as well as train and test time for each parameter combination. Can be imported into a pandas DataFrame (see example).
dict of arrays
Runs the fit() method of the algorithm for all parameter combinations, over different splits given by the cv parameter.
data (Dataset) – The dataset on which to evaluate the algorithm, in parallel.
Call predict() on the estimator with the best found parameters (according the the refit parameter). See AlgoBase.predict().
Only available if refit is not False.
test(testset, verbose=False)¶
Call test() on the estimator with the best found parameters (according the the refit parameter). See AlgoBase.test().
Only available if refit is not False.
|
{"url":"https://surprise.readthedocs.io/en/latest/model_selection.html","timestamp":"2024-11-08T09:20:51Z","content_type":"text/html","content_length":"93948","record_id":"<urn:uuid:5a796b0d-544b-4e64-8f3c-5fd9cc84378d>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00389.warc.gz"}
|
NetLogo User Community Models
(back to the NetLogo User Community Models)
If clicking does not initiate a download, try right clicking or control clicking and choosing "Save" or "Download".(The run link is disabled for this model because it was made in a version prior to
NetLogo 6.0, which NetLogo Web requires.)
## WHAT IS IT?
This model simulates the spread of an infectious disease in a closed population. It is an introductory model in the curricular unit called epiDEM (Epidemiology: Understanding Disease Dynamics and
Emergence through Modeling). This particular model is formulated based on a mathematical model that describes the systemic dynamics of a phenomenon that emerges when one infected person is introduced
in a wholly susceptible population. This basic model, in mathematical epidemiology, is known as the Kermack-McKendrick model.
The Kermack-McKendrick model assumes a closed population, meaning there are no births, deaths, or travel into or out of the population. It also assumes that there is homogeneous mixing, in that each
person in the world has the same chance of interacting with any other person within the world. In terms of the virus, the model assumes that there are no latent or dormant periods, nor a chance of
viral mutation.
Because this model is so simplistic in nature, it facilitates mathematical analyses and also the calculation of the threshold at which an epidemic is expected to occur. We call this the reproduction
number, and denote it as R_0. Simply, R_0 stands for the number of secondary infections that arise as a result of introducing one infected person in a wholly susceptible population, over the course
of the infected person's contagious period (i.e. while the person is infective, which, in this model, is from the beginning of infection until recovery).
This model incorporates all of the above assumptions, but each individual has a 5% chance of being initialized as infected. This model shows the disease spread as a phenomenon with an element of
stochasticity. Small perturbations in the parameters included here can in fact lead to different final outcomes.
Overall, this model helps users
1) engage in a new way of viewing/modeling epidemics that is more personable and relatable
2) understand how the reproduction number, R_0, represents the threshold for an epidemic
3) think about different ways to calculate R_0, and the strengths and weaknesses in each approach
4) understand the relationship between derivatives and integrals, represented simply as rates and cumulative number of cases, and
5) provide opportunities to extend or change the model to include some properties of a disease that interest users the most.
## HOW IT WORKS
Individuals wander around the world in random motion. Upon coming into contact with an infected person, by being in any of the eight surrounding neighbors of the infected person or in the same
location, an uninfected individual has a chance of contracting the illness. The user sets the number of people in the world, as well as the probability of contracting the disease.
An infected person has a probability of recovering after reaching their recovery time period, which is also set by the user. The recovery time of each individual is determined by pulling from an
approximately normal distribution with a mean of the average recovery time set by the user.
The colors of the individuals indicate the state of their health. Three colors are used: white individuals are uninfected, red individuals are infected, green individuals are recovered. Once
recovered, the individual is permanently immune to the virus.
The graph INFECTION AND RECOVERY RATES shows the rate of change of the cumulative infected and recovered in the population. It tracks the average number of secondary infections and recoveries per
tick. The reproduction number is calculated under different assumptions than those of the Kermack McKendrick model, as we allow for more than one infected individual in the population, and introduce
aforementioned variables.
At the end of the simulation, the R_0 reflects the estimate of the reproduction number, the final size relation that indicates whether there will be (or there was, in the model sense) an epidemic.
This again closely follows the mathematical derivation that R_0 = beta*S(0)/ gamma = N*ln(S(0) / S(t)) / (N - S(t)), where N is the total population, S(0) is the initial number of susceptibles, and S
(t) is the total number of susceptibles at time t. In this model, the R_0 estimate is the number of secondary infections that arise for an average infected individual over the course of the person's
infected period.
## HOW TO USE IT
The SETUP button creates individuals according to the parameter values chosen by the user. Each individual has a 5% chance of being initialized as infected. Once the model has been setup, push the GO
button to run the model. GO starts the model and runs it continuously until GO is pushed again.
Note that in this model each time-step can be considered to be in hours, although any suitable time unit will do.
What follows is a summary of the sliders in the model.
INITIAL-PEOPLE (initialized to vary between 50 - 400): The total number of individuals in the simulation, determined by the user.
INFECTION-CHANCE (10 - 100): Probability of disease transmission from one individual to another.
RECOVERY-CHANCE (10 - 100): Probability of an infected individual to recover.
AVERAGE-RECOVERY-TIME (50 - 300): The time it takes for an individual to recover on average. The actual individual's recovery time is pulled from a normal distribution centered around the
AVERAGE-RECOVERY-TIME at its mean, with a standard deviation of a quarter of the AVERAGE-RECOVERY-TIME. Each time-step can be considered to be in hours, although any suitable time unit will do.
A number of graphs are also plotted in this model.
CUMULATIVE INFECTED AND RECOVERED: This plots the total percentage of infected and recovered individuals over the course of the disease spread.
POPULATIONS: This plots the total number of people with or without the flu over time.
INFECTION AND RECOVERY RATES: This plots the estimated rates at which the disease is spreading. BetaN is the rate at which the cumulative infected changes, and Gamma rate at which the cumulative
recovered changes.
R_0: This is an estimate of the reproduction number, only comparable to the Kermack McKendrick's definition if the initial number of infected were 1.
## THINGS TO NOTICE
As with many epidemiological models, the number of people becoming infected over time, in the event of an epidemic, traces out an "S-curve." It is called an S-curve because it is shaped like a
sideways S. By changing the values of the parameters using the slider, try to see what kinds of changes make the S curve stretch or shrink.
Whenever there's a spread of the disease that reaches most of the population, we say that there was an epidemic. As mentioned before, the reproduction number indicates the number of secondary
infections that arise as a result of introducing one infected person in a totally susceptible population, over the course of the infected person's contagious period (i.e. while the person is
infected). If it is greater than 1, an epidemic occurs. If it is less than 1, then it is likely that the disease spread will stop short, and we call this an endemic.
## THINGS TO TRY
Try running the model by varying one slider at a time. For example:
How does increasing the number of initial people affect the disease spread?
How does increasing the recovery chance the shape of the graphs? What about changes to average recovery time? Or the infection rate?
What happens to the shape of the graphs as you increase the recovery chance and decrease the recovery time? Vice versa?
Notice the graph Cumulative Infected and Recovered, and Infection and Recovery Rates. What are the relationships between the two? Why is the latter graph jagged?
## EXTENDING THE MODEL
Try to change the behavior of the people once they are infected. For example, once infected, the individual might move slower, have fewer contacts, isolate him or herself etc. Try to think about how
you would introduce such a variable.
In this model, we also assume that the population is closed. Can you think of ways to include demographic variables such as births, deaths, and travel to mirror more of the complexities that surround
the nature of epidemic research?
## NETLOGO FEATURES
Notice that each agent pulls from a truncated normal distribution, centered around the AVERAGE-RECOVERY-TIME set by the user. This is to account for the variation in genetic differences and the
immune system functions of individuals.
Notice that R_0 calculated in this model is a numerical estimate to the analytic R_0. In the special case of one infective introduced to a wholly susceptible population (i.e., the Kermack-McKendrick
assumptions), the numerical estimations of R0 are very close to the analytic values.
## RELATED MODELS
AIDS, Virus and Virus on a Network are related models.
## HOW TO CITE
If you mention this model in a publication, we ask that you include these citations for the model itself and for the NetLogo software:
* Yang, C. and Wilensky, U. (2011). NetLogo epiDEM Basic model. http://ccl.northwestern.edu/netlogo/models/epiDEMBasic. Center for Connected Learning and Computer-Based Modeling, Northwestern
Institute on Complex Systems, Northwestern University, Evanston, IL.
* Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern Institute on Complex Systems, Northwestern University,
Evanston, IL.
## COPYRIGHT AND LICENSE
Copyright 2011 Uri Wilensky.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/ or send a
letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at uri@northwestern.edu.
|
{"url":"http://ccl.northwestern.edu/netlogo/models/community/epiDEM%20plus%20baboon","timestamp":"2024-11-10T13:50:48Z","content_type":"text/html","content_length":"14846","record_id":"<urn:uuid:5b9106a5-1a06-425b-bb08-411d087f9b37>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00528.warc.gz"}
|
What is a true error?
A true error ( E t {\displaystyle E_{t}} ) is defined as the difference between the true (exact) value and an approximate value. This type of error is only measurable when the true value is
What is true error and sample error?
The true error represents the probability that a randomly drawn instance from the population (distribution) is misclassified while the sample error is the fraction of the sample which is
misclassified. The true error is used for the population while sample error is used for the sample.
What is true error and relative error?
The difference between the actual value and the measured value of a quantity is called absolute error. The ratio of absolute error of a measurement and the actual value of the quantity is known as a
relative error.
What is meant by the term true error in surveying?
True Error
The difference between the observed value and the true value of a quantity is knows as true error.
What is the true error rate?
The true error rate is statistically defined as the error rate of the classifier on a large number of new cases that converge in the limit to the actual population distribution.
44 related questions found
Is true error and absolute error same?
Absolute Error is the amount of error in your measurements.
It is the difference between the measured value and “true” value. For example, if a scale states 90 pounds but you know your true weight is 89 pounds, then the scale has an absolute error of 90 lbs –
89 lbs = 1 lbs.
How do you find the true value error?
How to calculate error
1. Subtract the actual value from the expected value. First, subtract the actual value from the expected value. ...
2. Divide by the actual value. After you find the difference between the actual and expected value, you can divide the result of the calculation by the actual value. ...
3. Multiply the value by 100.
What is true error in surveying formula?
The value of quantity free from all the error is known as true value of quantity. True error :- • The difference between the true value and the observed value is known as true error. True error =
True Value – Observed value • As the true value can not be find perfectly Page 15 ❑Most probable value :- •
What are the three types of errors in surveying?
Errors typically arise from three sources; natural errors, instrument errors, and human errors. Natural errors are caused by environmental conditions or significant changes in environmental
What is true error and residual error in surveying?
The error of an observation is the deviation of the observed value from the true value of a quantity of interest (for example, a population mean). The residual is the difference between the observed
value and the estimated value of the quantity of interest (for example, a sample mean).
What is meant by relative error?
The relative error is defined as the ratio of the absolute error of the measurement to the actual measurement. Using this method we can determine the magnitude of the absolute error in terms of the
actual size of the measurement.
What are the two types of error?
What are Type I and Type II errors? In statistics, a Type I error means rejecting the null hypothesis when it's actually true, while a Type II error means failing to reject the null hypothesis when
it's actually false.
What is relative error example?
Relative error is a measure of the uncertainty of measurement compared to the size of the measurement. Examples: Three weights are measured at 5.05 g, 5.00 g, and 4.95 g. The absolute error is ± 0.05
g. The relative error is 0.05 g/5.00 g = 0.01 or 1%.
What is the most common type of errors?
Today, we're going to talk about the seven most common types of programming errors and how you can avoid them.
1. Syntax Errors. Just like human languages, computer languages have grammar rules. ...
2. Logic Errors. ...
3. Compilation Errors. ...
4. Runtime Errors. ...
5. Arithmetic Errors. ...
6. Resource Errors. ...
7. Interface Errors.
How many types of error is there?
Generally errors are classified into three types: systematic errors, random errors and blunders.
What are main error types?
The types of errors are: Compile Time Errors (Syntax errors and Semantic Errors) Runtime Errors. Logical Errors.
What is the difference between true score and measurement error?
The Observed score is the actual score on the exam and True score is the person's actual ability. Error is the difference between observed and true scores. Error can be random or systematic.
How do you calculate absolute true error?
Using the Actual Value and Measured Value
equals the actual value. Subtract the actual value from the measured value. Since absolute error is always positive, take the absolute value of this difference, ignoring any negative signs. This will
give you the absolute error.
What does true value mean in statistics?
True Value
The value that characterizes a quantity perfectly defined in the conditions that exist when that quantity is considered. It is an ideal value which could be arrived at only if all causes of measure-
ment error were eliminated, and the entire population was sampled.
What true value means?
1. : fair market value. called also true cash value. 2. : the depreciated book value of personal property for purposes of taxation of such property used in business.
What is true value with example?
It is the 'value' that the buyer is willing to pay for an item especially a second-hand or used vehicle; usually "true value'' is a fixed price tag on any used vehicle after assessing it's value
based on it's condition and usage.
What are examples of absolute errors?
The absolute error is the difference between the true value and the estimated value. For example, if you measure the length of a room and get a measurement of 10 feet, but the true length of the room
is 12 feet, then the absolute error is 2 feet.
What does mean absolute error mean?
What is Mean Absolute Error (MAE)? In the context of machine learning, absolute error refers to the magnitude of difference between the prediction of an observation and the true value of that
What is the difference between absolute error and mean absolute error?
Absolute error is the magnitude of the difference between the measured value and the original value. However, the mean absolute error is the mean of magnitudes of the absolute errors in all the
measurements throughout the experiment.
How do you find relative and absolute error?
How to calculate the absolute error and relative error
1. To find out the absolute error, subtract the approximated value from the real one: |1.41421356237 - 1.41| = 0.00421356237.
2. Divide this value by the real value to obtain the relative error: |0.00421356237 / 1.41421356237| = 0.298%
|
{"url":"https://www.calendar-uk.co.uk/frequently-asked-questions/what-is-a-true-error","timestamp":"2024-11-07T19:36:51Z","content_type":"text/html","content_length":"72158","record_id":"<urn:uuid:1529adcd-b055-4a04-81f2-71db467e8e87>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00089.warc.gz"}
|
$20 LCR ESR Transistor checker project
Changes for v1.31m:
- New tool: IR RC transmitter.
- Added support for dedicated signal output via OC1B, when OC1B isn't used for test pin #2's probe resistor.
- Changed battery monitoring settings to support also other power options.
- Driver for SSD1306 based graphic OLED modules.
- Color support for item selection (menues and tools).
- Driver for ILI9163 based graphic color LCD modules.
- Fixed tiny issue in squarewave signal generator.
- Added support for 180° rotated output to PCD8544 LCD driver.
- Fixed edit error in Servo_Check().
The IR RC supports following protocols at the moment:
- NEC standard & extended
- Samsung/Toshiba (32 bits)
- Sony SIRC 12, 15 and 20 bits
More protocols to come...
|
{"url":"https://www.eevblog.com/forum/testgear/$20-lcr-esr-transistor-checker-project/msg1385796/?PHPSESSID=s07toh2pd89ja1jjgb32ofm3b0","timestamp":"2024-11-11T21:33:53Z","content_type":"application/xhtml+xml","content_length":"109871","record_id":"<urn:uuid:227074ef-4a4b-41d7-8348-44be77e8ccf5>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00498.warc.gz"}
|
Solar cell discharge calculation
Solar Battery Charge Time Calculator (12v, 24v, 48v)
3. Enter the battery voltage (V): Is this a 12, 24, or 48-volt battery?Enter 12 for a 12V battery. 4. Select your battery type from the options provided. 5. Enter the battery depth of discharge
(DoD): Battery DoD indicates how much of the battery capacity is discharged relative to its total capacity. For example, enter 50 for a battery that is half …
|
{"url":"https://blackpistonsmc.pl/Oct_23_solar_cell_discharge_calculation.html","timestamp":"2024-11-04T13:36:14Z","content_type":"text/html","content_length":"35616","record_id":"<urn:uuid:f097c78b-8f95-4655-8a47-41d2a8c6afd4>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00763.warc.gz"}
|
Correlate (R² Correlation)
The coefficient of determination, often referred to as R² correlation or R-squared, is a statistical measure that assesses the goodness-of-fit of a regression model to the observed data. It
quantifies the proportion of the variance in the dependent variable that can be explained by the independent variables in the model.
This document will describe how this tool can help assess the linear relationship between the data collected by a Sensorbee Device and the data collected by a reference station / device, by using the
Pearson’s correlation coefficient (R²).
Step 1: Select Sensor Choose the specific sensor from the device that you want to assess. This will be the sensor that the correlation will be based on.
Step 2: Interval/Resolution Determine the time interval or resolution that you want to use for the correlation. This could be every minute, every hour, or another interval that makes sense based on
the data and the situation.
Step 3: Action - Correlate Choose the "Correlate" action. This will initiate the process of calculating the R-squared correlation between the sensor data and the reference data.
Step 4: Select reference data Choose the reference data that you will compare the sensor data against. This could be data from a different sensor, a different device, or a known standard or
Step 5: Select installation Choose the specific installation that the device is part of. This could be a specific location, project, or other grouping of devices.
5. The graph includes the following details:
Pearson correlation coefficient (r) value Strength Direction
Greater than .5 Strong Positive
Between .3 and .5 Moderate Positive
Between 0 and .3 Weak Positive
0 None None
Between 0 and –.3 Weak Negative
Between –.3 and –.5 Moderate Negative
Less than –.5 Strong Negative
☆ The linear equation
○ Technical note: Linear regression is represented by an equation Y= BX + A. The B is the slope that is equal to r(Sy/Sx) where r is the correlation coefficient, Sy is the standard
deviation of y values and Sx is the standard deviation of x value. The equation of A (the intercept) is equal to the meanY-(B*meanX), where meanY and meanX are the means of the y
values and x values, respectively.
☆ Significance of the Result
○ Using the Significance level of 0.001
■ p < 0.001 means the relationship is statistically significant.
■ p > 0.001 means the relationship is not statistically significant.
|
{"url":"https://docs.sensorbee.com/sensorbee-cloud-services/calibration/correlate-(r-correlation)","timestamp":"2024-11-02T23:39:44Z","content_type":"text/html","content_length":"164542","record_id":"<urn:uuid:78d9b91a-bcee-4a2e-be5d-1345b1f3929a>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00534.warc.gz"}
|
CCCII in Hindu Arabic Numerals
CCCII = 302
M C X I
MM CC XX II
MMM CCC XXX III
CD XL IV
D L V
DC LX VI
DCC LXX VII
DCCC LXXX VIII
CM XC IX
CCCII is valid Roman numeral. Here we will explain how to read, write and convert the Roman numeral CCCII into the correct Arabic numeral format. Please have a look over the Roman numeral table given
below for better understanding of Roman numeral system. As you can see, each letter is associated with specific value.
Symbol Value
I 1
V 5
X 10
L 50
C 100
D 500
M 1000
How to write Roman Numeral CCCII in Arabic Numeral?
The Arabic numeral representation of Roman numeral CCCII is 302.
How to convert Roman numeral CCCII to Arabic numeral?
If you are aware of Roman numeral system, then converting CCCII Roman numeral to Arabic numeral is very easy. Converting CCCII to Arabic numeral representation involves splitting up the numeral into
place values as shown below.
C + C + C + I + I
100 + 100 + 100 + 1 + 1
As per the rule highest numeral should always precede the lowest numeral to get correct representation. We need to add all converted roman numerals values to get our correct Arabic numeral. The Roman
numeral CCCII should be used when you are representing an ordinal value. In any other case, you can use 302 instead of CCCII. For any numeral conversion, you can also use our roman to number
converter tool given above.
Current Date and Time in Roman Numerals
The current date and time written in roman numerals is given below. Romans used the word nulla to denote zero because the roman number system did not have a zero, so there is a possibility that you
might see nulla or nothing when the value is zero.
|
{"url":"https://romantonumber.com/cccii-in-arabic-numerals","timestamp":"2024-11-10T21:00:22Z","content_type":"text/html","content_length":"89469","record_id":"<urn:uuid:4485ec2b-5179-4730-ad69-0c0e82cbdffa>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00855.warc.gz"}
|
Verifications and applications of the Einstein-Podolsky-Rosen argument that “Quantum mechanics is not a complete theory”
Prof. Ruggero Santilli^1; Pinchas Mandell^2;
^1THUNDER ENERGIES CORPORATION, Tarpon Springs, United States; ^2FAMILY OF ISRAEL FOUNDATION, , Israel;
Type of Paper: Regular
Id Paper: 334
Topic: 38
In this talk we outline: 1) The novel isomathematics for the representation of the size, shape and density of particles in deep mutual overlapping with ensuing non-linear, non-local and
non-Hamiltonian interactions; 2) The Einstein-Podolsky-Rosen (EPR) "completion" via isomathematics of quantum mechanics and chemistry into hadronic mechanics and chemistry; 3) The ensuing new
notion of EPR entanglement in which particles are in continuous and instantaneous communications via the overlapping of their wavepackets without need for superluminal communications; 4) The
inapplicability of Bell's inequality under the indicated EPR entanglement with ensuing recovering of classical images; and 5) The progressive achievement of Einstein's determinism in the
structure of hadrons, nuclei and stars and its full achievement at the limit of gravitational collapse. We then outline representative applications in physics, chemistry and biology, including
expected new cancer treatments and the new conception of living organisms characterized by extended constituents in continuous and instantaneous EPR entanglement.
Santilli iso- geno- hyper- and isodual-numbers; Mathematics; isomathematics;quantum mechanics;EPR entanglement
References on recent verifications of the EPR argument
Nine minutes video on EPR
Cite this article as:
Santilli P and Mandell P. (2022). Verifications and applications of the Einstein-Podolsky-Rosen argument that “Quantum mechanics is not a complete theory”. In F. Kongoli, E. Aifantis, T.
Vougiouklis, A. Bountis, P. Mandell, R. Santilli, A. Konstantinidis, G. Efremidis. (Eds.), Sustainable Industrial Processing Summit SIPS2022 Volume 16 Intl. Symp on Mathematics, Modelling and
Geomechanics (pp. 191). Montreal, Canada: FLOGEN Star Outreach
|
{"url":"https://www.flogen.org/sips2022/paper-16-334.html","timestamp":"2024-11-07T13:47:55Z","content_type":"application/xhtml+xml","content_length":"11331","record_id":"<urn:uuid:5c868609-a60d-42ec-892d-fcc7af97b724>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00523.warc.gz"}
|
Circle | Part-2 | S N Dey Examhoop
Circle | Part-2 | S N Dey
In the previous article , we discussed 10 Very Short Answer Type Questions. In this article, we will discuss few more VSA type Questions from Chhaya Mathematics , Class 11 (S N De book ).
Circle related Problems and Solutions | S N Dey Mathematics
11. The co-ordinates of the centre of the circle
The given equation of the circle can be rewritten as :
Now, comparing the equation
So, the centre of the circle as represented by
• To download full PDF solution of Circle ( Chhaya Mathematics, Class 11 ), click here.
12. Find the equation of the circle which touches both the co-ordinate axes at a distance
From the given condition, we can say that the centre of the circle is
13. Find the parametric equation of the circle
We have,
Comparing the equation
The centre of the given circle is :
The radius of the circle :
Hence, the equation of the circle is :
The equations
14. The parametric equation of a circle are,
We have,
Hence, from
15. The equation of the in-circle of an equilateral triangle is
The equation of the in-circle of an equilateral triangle is given by
Comparing the equation
The radius of the circle is :
Clearly, from the figure we notice that
Leave a Comment
|
{"url":"https://examhoop.com/sn-dey-solution-for-class-11-circle-part-2/","timestamp":"2024-11-12T02:46:05Z","content_type":"text/html","content_length":"171963","record_id":"<urn:uuid:84009560-789f-45a9-a175-650ee519f57c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00232.warc.gz"}
|
Geographic Coordinate Systems
Understanding concepts such as coordinate reference systems and map projections is becoming increasingly crucial as drones become more common in the surveying, construction, mining, and earthwork
Knowing how the coordinate system functions will aid in getting precise measurements.
Coordinate Reference System
Coordinates are ordered pairs of points that help us locate any point in a 2D plane or 3D space.
Coordinates are an instrumental piece of data in a variety of industries. Surveyors require precise points on the Earth from which to build.
A coordinate system is used to express the location of a point on a plane or sphere. Locations in two-dimensional coordinate systems are organized in a grid of columns and rows. An X and Y pair of
coordinates represent each grid location. The X coordinate specifies the grid’s row and the Y coordinate specifies the column.
There are many different kinds of coordinate systems. We will be discussing the two classes of coordinate systems.
Geographic Coordinate System (GCS)
A Geographic Coordinate System (GCS) is a reference framework that defines the locations of features on an earth model. These coordinates are based on an ellipsoid that approximates the Earth’s
shape, allowing us to measure distances and angles between different points of the Earth. In Geographic Coordinate System, locations are defined using angular measurements, usually in decimal
degrees. A GCS is required for the data to pinpoint its location on the earth’s surface accurately. The most commonly used geographic coordinate system is the World Geodetic System 1984 (WGS 84).
To represent a location in GCS, you need the location in either degree minute seconds or decimal degrees. This is an example of GCS.
Let us find out the location of the Bangalore Cantonment Railway Station. Consider the location of the railway station to be 12°59'37.38"N and 77°35'52.87"E in Geographical Coordinate System. Here
the location is in Degree Minute Second format.
Figure 1: Location in Degree Minute Seconds in the GCS system
Suppose you are wondering how to convert Degree Minute Second to Decimal Degrees. In that case, you can use the degree minute second to decimal degree converter tool from Surveyaan Geoworkspace to
convert the coordinates.
Once we convert the coordinates to decimal degrees, we get these coordinates 12.99371,77.598019.
Figure 2: Location in Decimal Degrees in the GCS system
Figure 3: Latitude
From the figure above, we can see horizontal lines run east — west. They are called Latitudes or Parallels. They form concentric circles around the Earth and are equally spaced and parallel to one
another. The Equator is the line of latitude that cuts across the center of the Earth and marks zero degrees of latitude. Positive latitudes from 0 to +90 degrees are found north of the equator,
while negative latitudes from 0 to -90 degrees are located south of the equator.
Figure 4: Longitude
From the figure above, we can see vertical lines run north — south. They are called Longitudes or Meridians. The Prime Meridian is the line of longitude that is at zero degrees and runs from the
North Pole to the South Pole. For most geographic coordinate systems, the prime meridian is the longitude that passes through Greenwich, England. Positive longitudes from 0 to +90 degrees are found
east of the Prime Meridian, while negative longitudes from 0 to -90 degrees are located west of the Prime Meridian.
Figure 5: Graticule
From the figure above, the grid-like network of latitude and longitude lines encircles the entire globe and is called the Graticule. The equator and prime meridian intersection define the origin of
the graticule (0,0).
A spheroid or a sphere approximating the Earth’s shape can define a coordinate system. A sphere is based on a circle, whereas a spheroid is an ellipsoid based on an ellipse.
Figure 6: Sphere(left) and Spheroid (right)
While a spheroid approximates the earth’s shape, a Datum defines the position of the spheroid relative to the center of the earth. A datum provides a frame of reference for measuring locations on the
earth’s surface. It defines the origin and orientation of latitude and longitude lines.
There are many different models of the earth’s surface and, therefore, many different GCS. World Geodetic System 1984 (WGS 1984) is designed as a one-size-fits-all GCS, good for mapping global data.
Australian Geodetic Datum 1984 is designed to fit the earth snugly around Australia, giving you good precision for this continent but poor accuracy anywhere else.
Whenever you change the datum, or the geographic coordinate system, the coordinate values of your data will change.
Please find the video on our YouTube channel from here.
This brings us to the end of the blog. I hope this article gained some knowledge for you!
Thank you for reading.
About SurveyGyaan
SurveyGyaan is an educational initiative under the Surveyaan brand, which is a subsidiary of Nibrus Technologies Private Limited. Surveyaan specializes in drone manufacturing and the development of
photogrammetry software.
Surveyaan: www.surveyaan.com
Surveyaan GeoWorkspace: app.surveyaan.com
|
{"url":"https://surveygyaan.medium.com/geographic-coordinate-systems-47789198349b?source=read_next_recirc-----61697e1c59a8----1---------------------530cf45e_4a0f_4c8f_96a7_6f55e292050a-------","timestamp":"2024-11-06T08:04:56Z","content_type":"text/html","content_length":"129483","record_id":"<urn:uuid:b53f1b9a-0664-4fd3-b878-6333794ea925>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00793.warc.gz"}
|
Surface X-Ray Diffraction
R. Fung, R. Harder, H. W. Ma, V. L. Shneerson, D. K. Saldin,
H. Vogler, W. Moritz,
H. T. Johnson-Steigelman, S. S. Parihar, P. F. Lyman
In the technique of surface X-ray diffraction X-rays are incident at a glancing angle on a surface. The directions of the diffracted beams may be determined by the usual Ewald sphere construction of
crystallography. An important difference with bulk crystallography is that the wavevectors of the diffracted beams are determined by the intersection of the Ewald sphere with not Bragg spots, but
rather rods in reciprocal space oriented perpendicular to the surface.
GaAs(111) (2×2) O/Cu(104)
Sb/Ge(113) Work in progress O/Cu(104) Work on Exp. Data
Experimentally measurable are the moduli |F(h,k,l)| of the structure factors of the 2D unit cells, with discrete Miller indices h and k specifying the reciprocal-space rods, and reflecting the
periodicity parallel to the surface. The index l is continuous along each rod, a consequence of the lack of periodicity perpendicular to the surface.
Determination of the structure of the surface requires a knowledge of the phases of these structure factors, which are not directly avaliable from the experiment. Information about these phases is
available in coded form in the diffraction data, but coaxing it out into a usable form is a problem that has defied solution up to now.
In this page and the ones clickable from here, we show the results of an algorithm that combines information about the known part of the structure (the bulk) with the measured difraction data to
recover the (unknown) electron density distribution of a surface unit cell. In concept this is somewhat similar to holography, where the scattered wave from the known part of the structure plays the
role of a reference wave.
Click on the titles of the panels above for more details about the application of the method to the model systems indicated.
The algorithm we have developed is called the Phase and Amplitude Recovery And Diffraction Image Generation Method (PARADIGM) [1]. It combines a knowledge of the bulk structure to obtain initial
phases of the CTRs and alternate satisfaction of constraints in real and reciprocal space [2] as in recently developed methods of diffractive imaging.
The method has been extended to deal also with cases where the measured diffraction data may be from mixed surface domains [3].
The method has been successfully applied to experimental data in the recovery of the known surface structure of Au(110)-(2×1) including not only the gross feature of the missing row, but even
exquisite details of the buckling and relaxations of deeper atomic layers [4]. It has also now been used to determine the previously unknown structures of (2×2)Sb/Au(110) [5] and (rt3xrt3)R30Sb/Au
(110) [1].
|
{"url":"https://sites.uwm.edu/saldin-research-group/surface-x-ray-diffraction/","timestamp":"2024-11-08T04:10:32Z","content_type":"text/html","content_length":"36626","record_id":"<urn:uuid:3f9a358f-ec5c-4a31-b8f0-d2cb873ccbc1>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00202.warc.gz"}
|
Isentropic Flow Relation Between Pressure And Total Pressure
Isentropic Flow Relation Between Pressure And Total Pressure Calculator
Calculate the Isentropic flow relation between pressure and total pressure with our free online tool using the input parameters: Density, Ratio
Explore our convenient online calculator that determines the Isentropic flow relation between pressure and total pressure. By inputting the values for density, total density, and specific heat ratio,
you can swiftly calculate the relationship between pressure and total pressure. This calculation proves valuable in various applications such as pumps, gas compressors, turbines, nozzles, and
diffusers. The Isentropic process refers to a scenario where entropy remains constant throughout. Benefit from our calculator to simplify your calculations and attain accurate results.
Send the result to an email
\frac{P}{P_t} = \left(\frac{\rho}{\rho_t}\right)^\gamma
The variables used in the formula are:
P / P[t] = Isentropic Flow Relation Between Pressure and Total Pressure
P = Pressure
P[t] = Total Pressure
ρ = Density
ρ[t] = Total Density
γ = Specific Heat Ratio
|
{"url":"https://calchub.xyz/isentropic-flow-relation-between-pressure-and-total-pressure/","timestamp":"2024-11-06T11:59:56Z","content_type":"text/html","content_length":"45046","record_id":"<urn:uuid:6ede546d-f6e7-475b-86ea-a10962daee12>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00771.warc.gz"}
|
To construct a tangent to a circle of radius 4 cm from a point on the concentric circle of
Hint: For solving this problem, we should be aware about the concept of tangents. Basically, a tangent to the curve is a straight line that touches a curve but does not cross it. In addition, it is
important to know the basics of construction to find the tangent line.
Complete step-by-step answer:
To construct a tangent to a circle of radius 4 cm from a point on the concentric circle of radius 6 cm,
we follow few basic steps,
Step 1: We first draw two concentric circles of radius 4 cm and 6 cm respectively (Both the circles
should have the same center).
Step 2: We now mark a point A on the larger circle from where we could draw a tangent to the smaller circle. Thus, we have
Step 3: We join A to the center of the circle (O) and then make a perpendicular bisector of this line segment AO. We do this by taking a compass and taking length in it about more than half the
distance between PO. Then make two arcs above and below AO from points A and O as shown. Now,we join the line made of arc intersections.
Step 4: Now, we draw the circle with XO as radius and center as X. Thus, we get the figure below. We mark the points of intersections as points B and C.
Step 5: We join line AB and AC. These are the required tangents. Thus, we have,
Hence, AB and AC are the required tangents.
Note: While constructing the tangents, we come across a step where we have to make a perpendicular bisector. Thus, while solving questions about construction, it is important to be aware that
although arcs can be drawn on one side of the line (we refer to line AO in this case) to make a perpendicular bisector. We make arcs on both sides of the line AO (as seen two arcs have been made on
both sides of line AO), since this makes it easier while construction on the paper while doing it with hand. Further, one can choose any other point on the larger circle and use a similar method to
get tangents from that point.
|
{"url":"https://www.vedantu.com/question-answer/to-construct-a-tangent-to-a-circle-of-radius-4-class-9-maths-cbse-5ee712c3cd14f768db6068b0","timestamp":"2024-11-05T12:47:38Z","content_type":"text/html","content_length":"199006","record_id":"<urn:uuid:ee01a9dc-dc58-44f0-b65e-7cc1b0fbfe1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00644.warc.gz"}
|
New Scaling Laws for Large Language Models
On March 29th, DeepMind published a paper, “Training Compute-Optimal Large Language Models”, that shows that essentially everyone—OpenAI, DeepMind, Microsoft, etc. -- has been training large language
models with a deeply suboptimal use of compute.
Following the new scaling laws that they propose for the optimal use of compute, DeepMind trains a new, 70-billion parameter model that outperforms much larger language models, including the
175-billion parameter GPT-3 and DeepMind’s own 270-billion parameter “Gopher”.
I’m going to walk through the background of the now-falsified scaling laws from prior to this paper; then I’m going to describe the new laws given by this paper, and why they weren’t found earlier;
and finally I’ll briefly mention some possible implications of this paper.
Independently of the consequences—this paper is exciting! Machine learning researchers thought they knew laws about how to scale compute optimally, and the laws turned out to be wrong! It’s a nice
clear instance of science-functioning-in-ways-it-should in ML.
In 2020 OpenAI proposed scaling laws which have since been used (at least implicitly) to guide the training of large models.
These scaling laws attempt to answer several questions. One of these questions is “Given a certain quantity of compute, how large of a model should I train in order to get the best possible
The answer isn’t “as large a model as possible” because, for a fixed quantity of compute, a larger model must be trained on less data. So training a 1-million parameter model on 10 books takes about
as many floating point operations (FLOPs) as training a 10-million parameter model on one book.
In the case of very large language models like GPT-3, these alternatives look more like training a 20-billion parameter model on 40% of an archive of the Internet, or training a 200-billion parameter
model on 4% of an archive of the Internet, or any of an infinite number of points along the same boundary.
Compute on this scale is not cheap—so if you’re going to be spending 10 million dollars per training run on a model scaled up to be 100x bigger than your toy version of the model, you want principles
better than a feeling in your gut to guide how you allocate this compute between “amount of data the model sees” and “how big the model should be.”
So if you get 10x more compute, how much bigger do you make your model? What about 100x more compute? Or 1000x more compute?
Well, the OpenAI paper answers the question. If you get 10x more compute, you increase your model size by about 5x and your data size by about 2x. Another 10x in compute, and model size is 25x bigger
and data size is only 4x bigger.
Model size is almost everything.
Model Size Is (Almost) Everything
Subsequent researchers and institutions took this philosophy to heart, and focused mostly on figuring out how to engineer increasingly-large models, rather than training comparatively smaller models
over more data. Thus, the many headlines of increasingly-larger models that we’ve seen coming from ML research institutions and AI accelerator startups.
See, for instance, the following chart from the new DeepMind paper.
Large Subsequent Models
Note the increase to half a trillion parameters, with identical quantities of training data.
And note that this understanding of the world has also been used to project forward future data requirements—NVIDIA, for instance, talks about training a trillion parameter model with only 450
billion tokens. Everyone had decided model size was much more important than data size.
The DeepMind paper re-approaches the issue of scaling laws.
It uses three separate methods to try to find the correct scaling law, but I’m going to zoom in on the second because I think it’s the easiest to comprehend.
The method is simple. They choose 9 different quantities of compute, ranging from about FLOPs to FLOPs.
For each quantity of compute, they train many different-sized models. Because the quantity of compute is constant for each level, the smaller models are trained for more time and the larger models
for less.
The following chart from the paper illustrates this. Each line connects models (at different sizes) trained using the same amount of compute. The vertical axis is the loss, where lower is better:
IsoFLOP Curves
Each of these curves has a clear interpretation. To the left of the minima on each curve, models are too small—a larger model trained on less data would be an improvement. To the right of the minima
on each curve, models are too large—a smaller model trained on more data would be an improvement. The best models are at the minima.
If you connect the minima at each curve and extend the line outwards, you get a new law! Specifically, it looks like for every increase in compute, you should increase data size and model size by
approximately same amount.
If you get a 10x increase in compute, you should make your model 3.1x times bigger and the data you train over 3.1x bigger; if you get a 100x increase in compute, you should make your model 10x
bigger and your data 10x bigger.
Now, all of these experimental runs graphed above were on relatively small models, trained with non-insane quantities of compute. So you could have argued that this rule wouldn’t work with much
larger numbers.
But to verify that the law was right, DeepMind trained a 70-billion parameter model (“Chinchilla”) using the same compute as had been used for the 280-billion parameter Gopher. That is, they trained
the smaller Chinchilla with 1.4 trillion tokens, while the larger Gopher had only been trained with 300 billion tokens.
And, as the new scaling laws predicts, Chinchilla is a lot better than Gopher on pretty much everything. It is better by the standard less-perplexity-per-word measure, and by the more interesting
usefulness-on-downstream-task measures. I could insert a bunch of graphs here, but if you aren’t familiar with the measures in question they basically all sum to “Hey, number goes up!”
Number goes up (Or down when appropriate)
Given the evidence of Chinchilla, it appears pretty definite that OpenAI got the scaling laws wrong. So one natural question is “What happened that they got it wrong?”
Well, background: The learning rate of a deep neural network dictates how much the parameters of a network are updated for each piece of training data. Learning rates on large training runs are
typically decreased according to a schedule, so that data towards the end of a training run adjusts the parameters of a neural network less than data towards the beginning of it. You can see this as
reflecting the need to not “forget” what was learned earlier in the training run.
It looks like OpenAI used a single total annealing schedule for all of their runs, even those of different lengths. This shifted the apparent best-possible performance downwards for the networks on a
non-ideal annealing schedule. And this lead to a distorted notion of what laws should be.
One funky thing about this is that we shouldn’t see larger language models… at all, for at least a few years.
DeepMind provides a helpful chart of how much training data and compute you’d need to optimally train models of various sizes.
Note that it wouldn’t make sense to train a model with 520 billion parameters until you had 60x as much compute as was used for Gopher / Chinchilla. You don’t hit the need for a trillion parameters
until you have 200x as much compute as was used for Gopher / Chinchilla.
(You might need even more compute; in part of the paper, DeepMind says that at large quantities of compute the scaling laws bend slightly, and the optimal behavior might be to scale data by even more
than you scale model size. In which case you might need to increase compute by more than 200x before it would make sense to use a trillion parameters.)
So until wafer-scale chips decrease the cost of compute ten times, and Google also decides all it really needs for AGI is to put ten times as much money into LM’s, we’ve seen the largest LM’s we’re
likely to see. However long that may be.
One potential thing that could follow from this is that, because inference costs are obviously smaller for small language models, services such as OpenAI’s GPT-3 should be cheaper for them to
provide. The cost to run them, at the same level of quality, should drop by at least 3x. I don’t know what percent the cost of providing these services is running them rather than training them, but
potentially it could make services based on these models more efficient than they were before, and open up economic viability in places that didn’t exist before.
One last consequence is that this paper makes the engineering involved in training large language models easier. Gathering more good data would be (I think) far easier than trying to efficiently
split computation for increasingly large LM’s across 1000s of machines.
• Man, these data requirements for large models really show just how horrendously data-inefficient current deep learning actually is, you need to hit a model with thousands of different variations
of a sentence for it to learn anything. I fear that we might be just one or two crucial insights away from cutting down those data numbers by orders of magnitude.
□ If I understand this correctly Deepmind is using each token in at most one update (They say they are training for less than one epoch), which means that it is hard to say anything about data
efficiency of DL from this paper since the models are not trained to convergence on the data that they have seen.
They are probably doing this since they already have the data, and a new data point is more informative than an old one, even if your model is very slow to extract the available information
with each update.
• Excellent & timely analysis, thank you!
• Is anyone working on updating the Biological Anchors Report model based on the updated slopes/requirements here?
• Minor correction. You’re saying:
> So training a 1-million parameter model on 10 books takes about as many FLOPS as training a 10-million parameter model on one book.
You link to FLOP per second aka FLOPS, whereas you’re talking about the plural of FLOP, a quantity (often used is FLOPs).
• Typos:
□ trained a 70-billion parameter model (“Chinchilla”) using the same compute as had been used for the 280-parameter Gopher. → 280-billion parameter Gopher
□ Number go up → goes up
□ Not sure why this was downvoted.
☆ Number go up is a meme not a typo
• Am I understanding this correctly, in that it means that scaling language models will require significantly more training data than OpenAI thought?
□ It mostly only means that training them compute optimally will require much more data, and doesn’t rule out OpenAI-style mostly-parameter scaling at all. Data scaling can be necessary to
minimise loss to get optimal estimates of certain entropic variables, while still being unnecessary for general intelligence. Large undertrained models still learn faster. This new paper
mostly makes parameter and data scaling both significantly more efficient, but data scaling to a larger degree, such that it’s more optimal to trade off these losses 1:1.
Below the fold is musing and analysis around this question. It is not a direct answer to it though.
We can take a look at the loss function, defined in terms of the irreducible loss, aka. the unmodelable entropy of language, the number of parameters , and the number of data tokens .
If we put in the parameters for Chinchilla, we see , and . Although these equations have been locally tuned and are not valid in the infinite limit of a single variable, it does roughly say
that just scaling parameter counts without training for longer will only tackle about a third of the remaining reducible loss.
Note the implicit assumption that we are working in the infinite data limit, where we never intentionally train on the same tokens twice. If you run out of data, it doesn’t mean you are no
longer able to train your models for longer as you scale, it only means that you will have to make more use of the data you already have, which can mean as little as multiple epochs or as
much as sophisticated bootstrapping methods.
The original scaling laws did not decompose so easily. I present them in simplified form.
(Note that the dataset was different so the exact losses shouldn’t be centered identically.)
This has major issues, like there is no irreducible loss and the values aren’t disentangled. We can still put in the parameters for GPT-3: and ; or in the limits, and . It isn’t clear what
this means about the necessary amount of data scaling, as in what fraction of the loss that it captures, especially because there is no entropy term, but it does mean that there is still
about 1:1 contributions from both losses at the efficient point, at least if you ignore the fact that the equation is wrong. That you have to scale both in tandem to make maximal progress
remains true in this older equation, it’s just more convoluted and has different factors.
☆ Interesting!
What does the irreducible loss of 1.69 actually mean? I assume it’s something like entropy/symbol? What does that convert to in terms of entropy/word? Does that agree with the
‘standard’ approximations of the entropy of English text?
○ It’s the cross-entropy that is left after you scale to infinity, and it is measured per symbol, yes. It is measured using BPEs, and the unit is nats/token. It might be equal to the
true entropy, but this is conjecture, as the model might never learn some aspects of language at any size within the regimes we can model.
For a large enough dataset, and given you are changing only the model and not the BPEs or data distribution, then the loss should be a constant factor multiple of bits/character,
bits/byte, or bits/word. Chinchilla gets bits/byte on pile_cc and a loss of on Wikitext103 (), which is unhelpfully not at all controlled but should suffice for ballpark
It might be equal to the true entropy, but this is conjecture, as the model might never learn some aspects of language at any size within the regimes we can model.
That’s actually precisely what I’m interested in finding out. How closely this scaling would match the ‘expected’ entropy of English in the infinite limit. (Of course, this
assumes that said approximation actually matches in the limit.)
It is measured using BPEs
Hm. Any idea what the compression level is of using BPE on English text? A quick look shows ~51%^[1] compression ratio on BPE on the Brown corpus, which I suppose I could use as a
starting point.
and the unit is nats/token.
So if I’m understanding correctly (one nat == 1.4 bits of entropy), ~2.43 bits / token? Assuming a BPE compression ratio of 51.08% on English text (each token encoding 4.0864
bits, given 51.08% compression on what I assume to be 8-bit ASCII), that means ~0.595 bits / character.
...which actually matches Shannon’s estimation of the entropy of English surprisingly well (0.6-1.3 bits / character).
★ This is the vocab file GPT uses. Don’t stare too long, I have heard the jank is too great for human conception. I might already be infected. Most models don’t bother changing
the BPEs, but those that do probably don’t have it any better. (This is machine learning where your inputs can be almost infinitely awful and nothing will stop working as long
as your models are large enough.)
True entropy of text is not the best defined, and it’s hard to tell whether something the model can’t learn regardless of scale is a true feature of the distribution or just
intractable. I would say that models do seem to be capturing the shape of what looks to my mind like the true distribution, and if they do fall short in the limit, it
shouldn’t be by very much.
I noted that Chinchilla gets bits/byte on pile_cc, which is basically the same as bits per character on random internet text. The difference being that pile_cc isn’t ASCII,
but that makes up a sufficiently large fraction that I wouldn’t worry about the details.
□ Correct. It means that if you want a very powerful language model, having compute & having data is pretty much the bottleneck, rather than having compute & being able to extend an incredibly
massive model over it.
Hey look at the job listing. (https://boards.greenhouse.io/deepmind/jobs/4089743?t=bbda0eea1us)
□ Yes. However, before the idea was ‘scaling is powerful/smart’. It is, but doing things this other way, is apparently more powerful. So if you want powerful models, grab a gallon of compute
and a gallon of data, instead of two gallons of compute.*
This is probably a bad analogy, because at some point you’re going to want to a) increase the stuff you were leaving the same. in your recipe, and then later, b) mix stuff up in smaller
batches than ‘all the ingredients’.
• TLDR: I’m scared Figure 3 is wrong (the one with training loss/parameters).
WHY: From page 2: ”… we perform our analysis on the smoothed training loss which is an unbiased estimate of the test loss ”
This claim is true. However, it is estimating average loss during training. For a fixed compute budget, larger models take less gradient steps and thus exhibit larger loss for a larger fraction
of training time. If they estimate training loss in this way for Figure 3, I would expect them to underestimate the training loss of the larger models.
EXPERIMENT: If anyone has access to training loss .csv files, we can reproduce Figure 3 using loss from the last 100 iterations. All my concerns go away if we get the same plot.
One funky thing about this is that we shouldn’t see larger language models… at all, for at least a few years.
How long does it take to train them though? For a large enough value of large, the above seems obvious, and yet...why couldn’t a larger model be trained over more time? (Thinking Long And Slow)
So until wafer-scale chips decrease the cost of compute ten times, and Google also decides all it really needs for AGI is to put ten times as much money into LM’s, we’ve seen the largest LM’s
we’re likely to see. However long that may be.
The numbers in the DeepMind figure indicate an exponential increase in FLOPS. With compute increasing after Moore’s law and compute usage in AI even faster, why would larger models be most likely
impossible? Based on these trends, it looks very reasonable to me, that the trend of larger models will continue.
• Two thoughts:
□ [IGNORE; as gwern pointed out I got this backwards] the fact that data and compute need to scale proportionally seems… like a big point in favor of NNs as memorizers/interpolators.
□ Maybe this is baseless, but I somewhat feel better about a path to AGI based more on lots of data than “thinking really hard about a finite amount of data”. Choices over data seem much more
interpretable and human-influenceable (e.g. by curating learning curricula for RL) than just throwing more compute at the same set of data and hoping it doesn’t learn anything weird.
the fact that data and compute need to scale proportionally seems… like a big point in favor of NNs as memorizers/interpolators.
Surely it’s the opposite? The more bang you get out of each parameter, the less it looks like ‘just’ (whatever that means) memorization/interpolation. When you needed to increase parameters
a lot, disproportionately, to cope with some more data, that does not speak well of abstractions or understanding. (If I can train a 1t model to get the same loss as what I thought was going
to take a 100t model, why would I think that that 100t model must be memorizing/interpolating less?) Let’s take your claim to its logical extreme: suppose we discovered tomorrow a scaling
law that made parameters near-constant (log, let’s say); would that not suggest that those parameters are super useful and it’s doing an amazing job of learning the underlying algorithm and
is not memorizing/interpolating?
and hoping it doesn’t learn anything weird.
They already learn weird stuff, though.
☆ Sorry, you’re completely right about the first point. I’ll correct the original comment.
Re: learning weird stuff, they definitely do, but a lot of contemporary weirdness feels very data dependent (e.g. I failed to realize my data was on a human-recognizably weird
submanifold, like medical images from different hospitals with different patient populations) versus grokking-dependent (e.g. AlphaFold possibly figuring out new predictive principles
underlying protein folding, or a hypothetical future model thinking about math textbooks for long enough that it solves a Millenium Prize problem).
EDIT: though actually AlphaFold might be a bad example, because it got to simulate a shit-ton of data, so maybe I’ll just stick to the “deep grokking of math” hypothetical.
|
{"url":"https://www.greaterwrong.com/posts/midXmMb2Xg37F2Kgn/new-scaling-laws-for-large-language-models","timestamp":"2024-11-02T07:49:11Z","content_type":"text/html","content_length":"163238","record_id":"<urn:uuid:94f98dad-b05c-4712-aa4f-1c77616e7cf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00286.warc.gz"}
|
ing E
Lessons from The Lorax: Using Graphs to Study Change
Author(s): Rob Quaden, & Alan Ticotsky Subject: Cross-Curricular
In this lesson, students read The Lorax and draw graphs to illustrate the changes that happen over the course of the story. Using simply stated questions, readers grapple with the complex themes in
the book and movie. Students will investigate how cycles compete for dominance, and think about how the needs of business and natural resources can collide.
Complex Systems Connection: Separate Cause and Effect, Short and Long Term Conflicts. Short-term focus on making money results in depletion of resource and environmental degradation over time and
the collapse of the business. Actions and consequences are separated by time.
Lessons in Mathematics Section 0: Overview
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 0 covers basic skills in representing equations using a visual modeling
tool, the STELLA software. Available from iseesystems.
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 1: Linear Behavior
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 1 on linear motion covers the theory of finite differences and
distance-time and position-time exercises using a motion detector. Available from iseesystems.
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 2: Quadratic Behavior
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 2 on quadratic behavior covers the theory of finite differences and
distance-time and position-time exercises using a motion detector. Available from iseesystems.
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 3: Exponential Behavior
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 3 covers the theory of finite differences for exponential functions.
Students build models of many processes, including bacteria growth, drug elimination from the body and compounding interest, among others.
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 4: Review
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 4 reviews previous concepts, combines model structures to solve new
problems and extends problems from previous lessons. Available from iseesystems.
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 5: Oscillatory Behavior
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 5 covers sinusoidal functions, including simple harmonic motion.
Distance versus time lessons using a motion detector are also included, as are lessons on predator-prey systems.
Complex Systems Connection: Cause within System. The lessons in this section can be used as building blocks for the Oscillations curriculum that illustrates "the cause of the problem is within the
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 6: Convergent and Logistic Behavior
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 6 takes students through the steps from exponential to convergent to
logistic models, culminating with an application to a system of deer and vegetation.
Complex Systems Connection: Cause within System. The lessons in this section can be used as building blocks for the Oscillations curriculum that illustrates "the cause of the problem is within the
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 7: Differential Equations
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 7 builds on the skills of Sections 5 and 6 using classic problems
studied with differential equations. Exponential, convergent and logistic models are presented using examples from contagious disease, the path of lead through the body and predator-prey
interactions. This section also compares the numerical derivation of Euler's Method to a corresponding version in STELLA.
Complex Systems Connection: Cause within System. The lessons in this section can be used as building blocks for the Oscillations curriculum that illustrates "the cause of the problem is within the
system." Available from iseesystems.
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
Lessons in Mathematics Section 8: Miscellaneous Topics
Author(s): Diana M. Fisher Subject: Math
This book provides a set of tools that enables educators to teach mathematics using the framework of System Dynamics. Section 8 presents a natural extension of linear and exponential growth study -
arithmetic and geometric sequences. Application to an age-specific population is also included. Available from iseesystems.
More about the book at: http://www.iseesystems.com/store/college_university/MathBook.aspx
|
{"url":"https://www.clexchange.org/curriculum/doc_search.asp?searchstring=&category=alldocs&key1=Social+Studies&key2=All&key3=All&key_join=and&sort=&detail=show&page=5","timestamp":"2024-11-12T09:21:27Z","content_type":"text/html","content_length":"47371","record_id":"<urn:uuid:9026a64e-3b90-429a-b668-66f96ec70767>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00627.warc.gz"}
|
operator spectrum
So you are saying I should put redirects? I was hoping sombody would feel inspired to create at least a stub for operator spectrum
Well you may try to decide between spectral theory which is more general, and spectral theorem which is more specific. In future we could have separate entries for versions like spectral theorem for
bounded normal operators, for families etc.
to the functional analysis crew of the $n$Lab: where should operator spectrum point to? Do we have any suitable entry?
No, I am saying that the term “operator spectrum” is a bit sidetrack form the organization we have started here. There is a classical subject of spectral theorems for various kinds of operators which
belong to spectral theorem entry, just as on wikipedia. On the other hand, spectral theory is wider subject and most of the things are about families of operators, algebras of operators, spectra of
Banach algebras, of C*-algebras and so on. It would be superfluous to mess up with orthogonal entry with vague associative name like “operator spectrum” (what is it ? a spectrum of an operator ? a
spectrum of an operator algebra ? any spectrum appearing in operator theory ?). At the least I could agree with a name like spectrum of an operator, but writing an entry for it would most likely boil
down to writing spectral theorem. So, until we have so much material to go into a reorganization I would vote for a redirect of your choice.
By the way there is a recent entry spectral theorem for bounded selfadjoint operators (zoranskoda). I am taking it back a bit. spectrum of an operator should exist as a separate entry (separate from
spectral theorem), though it is not easy to write it. In any case operator spectrum is not a good term.
OK, there is a new entry spectrum of an operator with the redirect (though I am not real happy with the redirect) operator spectrum.
OK, there is a new entry
|
{"url":"https://nforum.ncatlab.org/discussion/2163/operator-spectrum/","timestamp":"2024-11-07T17:27:22Z","content_type":"application/xhtml+xml","content_length":"46757","record_id":"<urn:uuid:3f861877-eb15-4198-bff5-f57c5df0cc1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00720.warc.gz"}
|
Pattern block hexagons
Students will need lots of pattern blocks to complete this activity.
If you do not have a large number of pattern blocks, you can download a series of pattern blocks sheets (46 KB PDF) and print them onto coloured paper.
Students use their pattern blocks to make
• regular hexagons
• irregular hexagons
• convex irregular hexagons
• non-convex irregular hexagons.
It is possible to build a hexagon that has equal sides but unequal angles.
You can download the Pattern blocks hexagons: Student worksheet which also has some suggestions for pattern block combinations if students run out of ideas.
|
{"url":"https://topdrawer.aamt.edu.au/Geometric-reasoning/Activities/Pattern-block-hexagons","timestamp":"2024-11-12T19:53:23Z","content_type":"text/html","content_length":"51141","record_id":"<urn:uuid:f4372794-5c56-4bf1-a4d1-8205b57e00e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00881.warc.gz"}
|
Draw The Moment Diagram For The Beam.
Draw The Moment Diagram For The Beam. - Solve σm a = 0 (sum of moments about support a). Web our calculator generates the reactions, shear force diagrams (sfd), bending moment diagrams (bmd),
deflection, and stress of a cantilever beam or simply supported beam. Web shear force and bending moment diagrams are powerful graphical methods that are used to analyze a beam under loading. Draw
the beam free body diagram. Web steps to construct shear force and bending moment diagrams.
In each problem, let x be the distance measured from left end of the beam. You'll get a detailed solution from a subject matter expert that helps you learn core concepts. Web this problem has been
solved! Assume the upward reaction provided by the ground to be uniformly distributed. Also, draw shear and moment diagrams, specifying values at all change of loading positions and at. Web draw the
shearing force and bending moment diagrams for the beam with an overhang subjected to the loads shown in figure 4.7a. Let a = 5.0 ft, b = 4.5 ft, p = 21 kips, and w = 3.0 kips/ft.
Solved Draw the shear and moment diagrams for the beam using
Web draw one bending moment and one shearing force diagram for the given beam by combining the diagrams in step 9. Web learn to draw shear force and moment diagrams using 2 methods, step by.
Solved Draw the shear and moment diagrams for the beam.
This page will walk you through what shear forces and bending moments are, why they are useful, the procedure for drawing the diagrams and some other keys aspects as well. You'll get a detailed
Solved Draw the shear and moment diagrams for the beam.
This will give you r b (reaction at support b). Web in a beam, the internal force system consist of a shear force and a bending moment acting on the cross section of the bar..
Solved Draw the shear and moment diagrams for the beam
Label all significant points on each diagram. This will give you r b (reaction at support b). Web in general the process goes like this:1) calcul. You'll get a detailed solution from a subject
Solved Draw the shear and moment diagrams for the beam.
Web solving for beam reactions. Draw the shear and moment diagrams for the beam. Web engineering mechanical engineering mechanical engineering questions and answers draw the moment diagram for the
cantilevered beam. Web draw the shearing.
Solved Draw the shear and moment diagrams for the beam (a)
You'll get a detailed solution from a subject matter expert that helps you learn core concepts. Below is a simple example of what shear and moment diagrams look like, afterwards, the relation between
the load.
Solved Draw the shear and moment diagrams for the beam, and
This will give you r a. Draw the moment diagram for the cantilevered beam. Web introduction figures 1 through 32 provide a series of shear and moment diagrams with accompanying formulas for design of
Solved Draw the Shear and Moment Diagram for the beam shown
Solve σm a = 0 (sum of moments about support a). Draw a free body diagram of the beam with global coordinates (x) calculate the reaction forces using equilibrium equations ( ∑ forces = 0.
Solved Draw the moment diagram for the beam. Follow the
Web in a beam, the internal force system consist of a shear force and a bending moment acting on the cross section of the bar. Web our calculator generates the reactions, shear force diagrams (sfd),.
Solved Draw the moment diagram for the beam. Follow the sign
Draw the shear and moment diagrams for the cantilevered beam. The internal forces give rise to two kinds of stresses on a transverse section of a beam: (1) normal stress that is caused by Shear.
Draw The Moment Diagram For The Beam. Web introduction figures 1 through 32 provide a series of shear and moment diagrams with accompanying formulas for design of beams under various static loading
conditions. To pave its way, this section will deal on how to draw moment diagram by parts and to calculate the moment of such diagrams about a specified axis. Solve σm b = 0. If the cross section of
the beam is rectangular with height of 0.2ft and width of 0.1ft, and the beam has a young's modulus of 29e3ksi, please identify which. The shear force and the bending moment usually vary continuously
along the length of the beam.
Draw The Moment Diagram For The Beam. Related Post :
|
{"url":"https://classifieds.independent.com/print/draw-the-moment-diagram-for-the-beam.html","timestamp":"2024-11-07T13:59:35Z","content_type":"application/xhtml+xml","content_length":"24399","record_id":"<urn:uuid:70eb8e45-cc06-454d-b08f-8b45e99e90c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00482.warc.gz"}
|
Longdom Publishing SL | Open Access Journals
Mehran Ghasempour Mouziraji, Department of Mechanical Engineering, Islamic Azad University of Sari, Sari Branch, Iran, Tel: +98-9378865650
It is evident that while the work piece is entering the rollers, has less velocity than peripheral velocity of the rollers, however when it is exiting, it has more velocity than that of the rollers
peripheral. Therefore, due to the fact that entering and exiting velocities are not equal, there must be a space in rollers to make the velocity of work piece and horizontal components equal. This
area is called neutral line or plane. The entering and existing velocity is denoted by V1 and W2, respectively. Since the section velocity and horizontal component of peripheral velocity are equal
only in neutral plane, therefore, a slip can occur in section and roller or in any other points. In exiting plane the section velocity is more than the peripheral velocity of rollers. The velocity
inequality is called forward slip.
No-twist mills
There have been a jump in the final rolling mills since 1962 and the first sample was installed in 1966. These machines use rollers with smaller diameter and each of their axes goes around each other
about 90˚ and the whole set is adapted horizontally on angle of 45º. Smallwork rolls with approximately diameter of 8 in provide more decrease in cutting levels without increasing their width. Each
roller has a groove and therefore there is no need for a setting in these machines. Thus, not only is not the required time for setting the roller wasted, which is done in the ordinary mills, but
also it allows the rest of equipment to be set in the central line of the rollers. In order to reducing the level of flatness and carefully acquiring higher dimension, work rolls are usually made of
Tungsten Carbide, which its modulus of elasticity is more than that for steel and also possesses higher erosion resistance. Consequently, hard material strips such as rust-proof steel alloy, titanium
alloy, nickel, and steel alloy can be manufactured through rolling.
Forward slip studies
Studies which are performed on forward slip are divided into three categories:
1. Studies through which other parameters of the rolling such as the flatness of the roller’s surface, lubrication, and rollers velocity in forward slip are considered. Azizi et al. studied the
effect of friction or the amount of upside and downside surface of rollers flatness, pre-tensile, and post-tensile forces on the process quantities such as the force given to the rollers, as well as
enough torque for the formation and pressure in the rollers [1]. It was observed that as the friction coefficient increases, the maximum rolling pressure increases, which is certainly undesirable and
causes defects like rollers flatness and eventually brings about a poor process of production. Basti et al. made a Temp-disp. coupled and three-dimension finite element model for analyzing the
rolling process [2] Xian et al. studied the rolling strip based on MAS rolling process and presented mathematical forward and backward slip models at the beginning as well as the end of strip [3]
computation analysis of temperature distribution, anticipating the rolling force and torque, analyzing of fishing tail in rolling process have been done by different researchers [4-11].
2. Researches which are carried out in order to measure the relative velocity and forward slip. For instance, Li et al. studied the model of laser Doppler Velocimetry (LDV), which is a new way of
determining the small relative velocity between the strip and roller in the cold rolling [12].
3. Simulation of rolling process by FEM which has been carried out in some researches. Young-gang et al. introduced the energy principle and upper-bound principle methods to solve forceenergy
parameters [13]. The fault between the theory and test model does not exceed 15%. Rezai et al. considered an Elasto- Visco plastic problem [14]. In addition, Temp-disp. and explicit have been chosen
as the type of dynamic solutions. Jiang et al. employed the 3-dimensional rigid plastic FEM for rolling simulation of thin strips [15]. By doing so, they calculated the modification of level of
friction in the rollers entrance. Jiang and Tieu studied the cog height in hot rolling of cog strips [16]. Lenard and Liu et al. proved that in bite rollers the friction will alter, Lenard showed
that friction coefficient also changes as a decreasing stream [17-19] Changes add friction to rolling power and model precision.
Hot tensile test
For being cognizant of the mechanical property of input rod in the mill process and using it in the input of ABAQUS/Explicit software, a few hot tensile tests have been carried out. These tests have
invaluable numerical significance and this is due to the fact that for the employed steel and for the particular conditions of test which is done in high temperatures, the adequate data for
simulation and other utilizations is not available. Furthermore, there has been a thorough study on the data which is provided from a test by MATLAB software. Moreover, several codes, charts, and
tables have been achieved which can be used for any other conditions. Neural networks are used for curve fitting and stress versus strain diagram is obtained. Results which are attained in this part
represent good comparison with models and references. The entering rod of mill is made of RSt34-2 steel whose chemical element percentage was achieved according to key to steel [20]. The more the
Carbon percentage gets, the less the level of steel resistance gets. Conversely as the carbon percentage rises, the steel would be far softer [21].
Make of hot tensile test specimens and performing the hot tensile test
Standard characteristics of tested examples are chosen as ASTM E8M [22]. These tests for three temperatures, namely 900, 950 and 1000 degrees of centigrade and each of them with strain rates of 200,
350 and 500 mm per minute, were done. The first results of the test were saved as a raw data. Moreover, the true as well as the engineering stress and strain were calculated by MATLAB. Consequently,
there were codification in the calculation of stress, strain, modulus of elasticity, and yield stress. Eventually, engineering as well as true stress and strain diagrams were drawn by MATLAB.
Figures 1 and 2 given in the following are the results of MATLAB which are drawn for a better comprehension of stress-strain diagram in different temperatures and strain rate. Figure 1 shows the true
stressstrain diagram for rates of 200, 350 and 500 mm/min in a temperature of 900ºC. Figure 2 shows the true stress-strain diagram for temperature of 900, 950 and 1000ºC in a strain rate of 500 mm/
Calculation of modulus of elasticity
It is important to mention that obtaining modulus of elasticity through stress-strain diagram by conducting the hot tensile test is impossible. Due to the fact that the test has been done in high
temperatures and it is highly unlikely to install gage in order to measuring the modulus of elasticity. Furthermore, the elastic property of jaws also causes a mistake in measuring the modulus of
elasticity by the diagram. Therefore, data which are related to references are used for acquiring modulus of elasticity due to the lack of developed measuring tools and equipment justifiers. It must
be mentioned that the modulus of elasticity is only dependent on temperature. Furthermore, it is crucial to mentioned that the stress-strain curve is the same for each material [23]. With respect to
the table of resistant steel, the value of yield stress is 13.8 MPa for the temperature of 940ºC which is related to the strain rate of 0.2 mm/min. Thus, by using the multi-statement curve fitting of
No. 5, the magnitudes of modulus of elasticity for temperatures of higher than 940ºC are those which are presented in Table 1. Moreover, the general levels of modulus of elasticity in different
temperatures are those presented in Table 2, which demonstrates that the obtained results are not against expectation of curve fitting and can be a precise prediction of modulus of elasticity for the
high temperatures. Further details can be found in [22]. Since the yield point, and stress-strain curve depends on the strain rate, the higher the strain rate are, the more the steel goes through a
hardening In addition, the resistance of steel raises against fatigue. On the other hand, as the strain rate increases, the elastic properties decrease, and therefore the steel becomes crisper. As a
result, when the strain rate becomes more than 200, 350 and 500 mm/min, it is expected that the amounts related to yield point decreases drastically and the steel has had plastic property since the
very beginning, particularly the test is done in high temperatures.
Temperature (°C) Room Temperature (°C) 900°C 930°C 950°C 980°C 1000°C
Modulus of Elasticity (GPa) 205 37.65 27.89 22.89 18.31 17.67
Table 1: The values of modulus of elasticity for different temperatures by using multi-statement Equation (5), curve fitting.
Material Modulus of Elasticity (GPa)
Room Temperature (°C) 250°C 425°C 540°C 650°C
Carbon Steel 207 186 155 134 124
Austenitic Stainless Steel 193 176 159 155 145
Titanium Alloys 114 96.5 74 70 -
Aluminum Alloys 72 65.5 54 - -
Table 2: General levels of modulus of elasticity in different temperatures.
Using neural network to estimate the relationship between true stress and true strain
Regarding the presented data of ASM for metals, the plastic behaviors of steels for strain rate of 10-1000 1/s and for the temperature range of 800 ºC -1100ºC, can be expressed as Equation (1).
σ = K + Blog ε (1)
In addition, for plastic behaviors of low carbon steels and for the temperature range of 30ºC -1100ºC, the plastic stress-strain relation can be expressed as;
σ = Kε^nσ = Kε^n (2)
Where, 0 ≤ n ≤ 1and the value of zero is for a completely plastic material, whereas the value of n=1 is for an elastic material. For customary metals, the values of n are altered from 0.1 to 0.5.
n and k for some metals at room temperature are represented in Mechanical Metallurgy [21].
By using MATLAB, the required code in neural network was developed and the input matrix had a number of data from the plastic area with temperature’s arrays, strain and strain rate. This matrix along
with target matrix was defined for MATLAB which included corresponding data of true stress. The employed network consists of three layers in which the number of neurons obtained by using the method
of trying and mistake, and ten numbers in two layers are chosen. Furthermore, the “traingda” algorithm was also used for training the network. This algorithm presented far better results than any
other algorithms. While using the instruction of “triangda”, the BP method is utilized with variable learning rate and consequently the errors decrease and much more optimum results attained.
Overall, an increase in the number of neurons does not necessarily because a better approximation of function, however, an increase in training stages brings about a more ideal approximation of
function. Finally, the network is tested and the output is used for the range of temperature data and also for the higher strain rate. Figure 3 depicts the level of adaptation of neural network used
for input data for both before and after training. In these figures, the blue lines demonstrate neural network simulation and the green circles demonstrate input data matrix. The stable R which
states this adaptation illustrates an acceptable value. Accordingly, the trained neural network can be reliable and can be utilized for data temperature area and higher strain rates.
Analytical procedure
The velocity inequality and work piece velocity on the neutral plane is as follows:
V[n] : Work piece velocity on the neutral plane
V[r] : Peripheral velocity of rollers
D[w] : Diameter of work rolls
n : Velocity of rollers (rpm)
Forward slip, which is a result of exiting and peripheral velocity, is defined as that of Equation (6)
Therefore, the velocity of work piece is the summation of whole roller velocity and steel flow velocity. In addition, the work piece moves in the direction of exiting layer. Providing that the work
piece has breadth in the length of the roller, forward and backward slip do not occur. The movement velocity of the first pass rollers is gained as 138.4 rad/s and the movement velocity of the second
pass rollers is gained as 170.2 rad/s. regarding the velocity inequality, the exiting velocity, and the neutral angle, the rod velocity is taken as the entering velocity. Thus, the work piece
velocity while entering the space between the rollers is as follows:
In the relations of yield criterion and plasticity, it is presumed that the behavior of material is elastic-ideally plastic. For hot deformation and for the case of high strain rate, the
corresponding metal may illustrate hard working behavior. For metals without hard working, yield stress becomes constant by increasing strain. Nevertheless, for metals with hard working yield stress
is depended on the material strain. Furthermore, by increasing the strain of material, yield stress increases. Thus, for these types of metal plastic stress or the equivalent stress or effective
stress is implemented. Moreover, in the condition of hot work, yield stress of material is intensely dependent on strain rate. For average strain rate, the following relations can be utilized [22];
First method
The first method of calculation of average strain rate in hot rolling is presented in this section. In this procedure in the deformation zone, the integral of functions is performed with respect to
the variable X.
By employing the rule of constant volume, it can be conthat:
By integration with respect to the length which is in contact between roller and work piece, average strain rate can be expressed as;
2 tanϕ dx = dh (16)
For a case with low value of bite angle:
Second method
In the second method and for deformation zone, in order to calculate average strain rate in hot rolling, the integration is performed with respect to the variable ϕ
For the low value of bite angle:
Simulation of No-Twist mills by the ABAQUS/Explicit software
In this study in the first stage, the whole roller is given a model by two side grooves. The size of the grooves is achieved by using the caliber maps. Of the two existing grooves on the framework of
the side roller, a groove is used in the procedure of No-Twist mills; in fact the other groove plays the role of a substitute. Therefore, the roller height is of no importance in simulation. On the
other hand, the inner diameter of the roller, which is the point of bush placement, also bears no significant importance. Due to the fact that simulation is carried out in the ABAQUS software and the
cost evaluation of calculation and the time of performing is highly important, only the groove through which the rod passes is defined for ABAQUS. The rollers are defined as “Discrete Rigid” and 3D
substances in the ABAQUS. In the hot metal forming procedures, the work piece contains a preliminary temperature which will go through plastic deformation. Some of this plastic work is converted into
heat and increases the temperature of the work piece, while due to the fact that the work piece has contact with other tools and owing to being in the outside atmosphere, the work piece temperature
decreases through the processes of heat conduction, convection, and radiation. The more the temperature of work piece is changed, the more material behavior is altered [24]. The strain rate was
previously mentioned, thus for having a thorough analysis, it is obligatory to define a material behavior complying with the temperature and the strain rate. Therefore, the dynamic Temp-disp.
explicit is chosen as a relevant solver. In this approach, the hardening is considered a prerequisite of isotropic. Some references present the thermal property of steel such as Structural Fire
Engineering Design [25].
In conditions which contain a high level of deformation during the procedure, the elements get highly deformed. This disarrangement and distortion may bring the solution to a halt and by using the
adaptive mesh it can be obviated. In this method, the elements are able to move free of the matter. As a result, the procedure conduction quality stays in an optimum level. Consequently, the place of
nodes is changed and the element’s topologies remain unchanged. The entire meshes have the shape of square and they are structured. The improved relation of Ockland, which was acquired by the
assistance of a wide range of studies, was provided to the friction. This set portrays the friction coefficient, which is presented in Equation (26), versus roller material and roller velocity.
f = ak(1.05 − 0.0005t) (26)
Section geometry of the rod outlet
The main objective of shape rolling stages is to achieve a distinct section level according to a set of existing standards. Therefore, monitoring of section outlet through different calibers is
highly significant. In addition to final section being standard, the produced product in every stage of inter-caliber must be free of any over-breadth and geometry defect. Figure 4 shows the rod
section level after exiting stands 1 and 2. The comparison of results with reference of steel company demonstrates a good accordance between finite element and the measured models.
Input and output velocity of rod in each stand
In order to calculate the forward slip, input and output velocity of rod is required in each stand which is achieved by simulation of finite element. Figure 5(A) demonstrates a comparison of velocity
between FEM and experimental values for stands 1 to 4 versus three distinct sizes of meshes. According to this figure, it is deduced that different sizes in mesh do not present a significant change
and do not alter the output results.
Calculation of forward slip
Results of forward slip in each stand and versus three different sizes of meshes are presented in Figure 5(B).
Influential parameters on forward slip
Impact of coefficient friction: One of the determining parameters in forward slip is the impact of coefficient friction between the roller and the rod. Coefficient friction is only 0.25 to 0.7 in
which amounts higher than 0.55 are achievable through sophisticated ragging of the roller surface. In this step, by changing the coefficient friction, the effect of coefficient on the output velocity
of rod is investigated. Figure 6 presents that as the coefficient friction increases to a 10% in every step; the forward slip parameter for rolling stands increases.
Rotational velocity of the roller: Alteration in the circuit of operating motor of No-Twist mills or in coefficients of rotary stands leads to a change in the rotational velocity of roller. The
rotational velocity of the roller increases towards the end stands. Figure 7 illustrates changes in the forward slip parameter due to the rotational velocity of rollers. Obviously, as the rotational
velocity of rollers increases, forward slip decreases in each stand, as it can be concluded from the figures.
Variations in the early temperature of the rod: An increase in the early temperature of the rod brings about a reduction in average plastic rate, consequently, it is expected that slip decreases
between the rod and the roller. Figure 8 illustrates changes in forward slip due to an increase in the early temperature of the rod. As it can be concluded forward slip is decreasing in every stand,
while the early temperature of the rod is increasing.
Considering the capability of the ABAQUS software in modeling of complex contact conditions and metal forming issues, this set of tools is a proper set for being utilized in hot rolling modeling of
the No-Twist mills and rod rolling with high velocity. Parameters such as various finite element meshes, using different size of the meshes, modeling of mechanical and thermal properties of
materials, and its capability of providing several desired outlets are the most prominent advantages of the ABAQUS software. Obtaining the mechanical property and plastic strains in the shape
rolling, while the section is in such a condition that faces high deformation and strain rate, is necessary for the validity of modeling results. Therefore, the hot tensile test is carried out to
attain the mechanical property and plastic strains for the steel of interest. Furthermore, the modulus of elasticity decreases as the temperature increases. Creating a detailed modeling through the
rod’s hot rolling of the No-Twist mills can be a suitable tool for anticipating the parameters of interest, some of which are the dimension of surface of outlet section in each stand and forward
slip, during the hot rolling of the machines. In addition, considering the accordance between the results obtained from modeling of this research and the manufactory quantities of Isfahan’s steel
company, it can be understood the validity of modeling which is performed. Therefore, the method employed may be used as a tool for eliminating and removing the experimental stages and phases.
Regarding the effective role of the velocity of roller and the rod, during a high velocity rolling and a variation between these two parameters, which leads to a slip in the rolling and rod ,as well
as the effect of these velocities in the end product geometry, therefore, in this research, while calculating the forward slip parameter in four stands of the No-Twist mills, the effect of the
parameters of the rod’s initial temperature and the roller’s rotational velocity on the forward slip parameter studied. By considering the obtained results, it can be concluded that as the initial
temperature of the rod raises, the forward slip parameter in each stand declines. An increase in the roller’s rotational velocity marks a decrease in the forward slip. Coefficient friction, functions
as a helpful parameter in the forward slip. For the coefficient, according to Okland’s formula, different values were considered, regarding various velocities in stands 1 to 4. Results of the
conversion of this parameter in each stand proved that the escalation of coefficient friction in the roller and rod leads to an increase in forward slip in each stage and phase.
|
{"url":"https://www.longdom.org/open-access/experimental-and-finite-element-investigation-of-forward-slip-in-rod-rolling-and-the-influences-on-rolled-bars-process-17815.html","timestamp":"2024-11-14T04:47:03Z","content_type":"text/html","content_length":"168343","record_id":"<urn:uuid:149eee04-5a26-426d-9cb4-2a2353126189>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00372.warc.gz"}
|
Posted 19th November 2021 by Holger Schmitz
In a previous post, I started talking about natural numbers and how the Peano axioms define the relation between natural numbers. These axioms allow you to work with numbers and are good enough for
most everyday uses. From a philosophical point of view, the Peano axioms have one big drawback. They only tell us how natural numbers behave but they don’t say anything about what natural numbers
actually are. In the late 19th Century mathematicians started using set theory as the basis to define the axioms of arithmetic and other branches of mathematics. Two mathematicians, first Frege and
later Bertrand Russell came up with a definition of natural numbers that gives us some insight into the nature of these elusive objects. In order to understand their definitions, I will first have to
make the little excursion into set theory.
You may have encountered the basics of set theory already in primary school. Naïvely speaking sets are collections of things. Often the object in a set share some common property but this is not
strictly necessary. You may have drawn Venn diagrams to depict sets, their unions and intersections. Something that is not taught in primary school is that you can define relations between sets that,
in turn, define the so-called cardinality of a set.
Functions and Bijections
One of the central concepts is the mapping between two sets. For the following let’s assume we have two sets, \(\mathcal{A}\) and \(\mathcal{B}\). A function simply defines a rule that assigns an
element of set \(\mathcal{B}\) to each element of set \(\mathcal{A}\). We call \(\mathcal{A}\) the domain of the function and \(\mathcal{B}\) the range of the function. If the function is named \(f\)
, then we write \[
f: \mathcal{A} \to \mathcal{B}
\] to indicate what the domain and the range of the function are.
Example: For example, if \(\mathcal{A}\) is the set of uppercase and lowercase vowels, \[
\mathcal{A} = { A, E, I, O, U, a, e, i, o, u },
\] and \(\mathcal{B}\) is the set of all uppercase letters in the alphabet, \[
\mathcal{B} = { A, B, C, D, \ldots, Z}.
Now we can define a function that assigns the uppercase letter in \(\mathcal{B}\) to each vowel in \(\mathcal{A}\). So the mapping looks like shown in the figure.
You will notice two properties about this function. Firstly, not all elements from \(\mathcal{B}\) appear as a mapping of an element from \(\mathcal{A}\). We say that the uppercase consonants in \(\
mathcal{B}\) are not in the image of \(\mathcal{A}\).
The second thing to note is that some elements in \(\mathcal{B}\) appear twice. For example, both the lowercase e and the uppercase E in \(\mathcal{A}\) map to the same uppercase E in \(\mathcal{B}\)
Definition of a Bijection
The example shows a function that is not a bijection. In order to be a bijection, a function must ensure that each element in the range is mapped to by exactly one element from the range. In other
words for a function \[
f: \mathcal{A} \to \mathcal{B}
• every element in \(\mathcal{B}\) appears as a function value. No element is left out.
• no element in \(\mathcal{B}\) appears as a function value more than once.
A bijection implies a one-to-one relationship between the elements in set \(\mathcal{A}\) and set \(\mathcal{B}\).
Equinumerosity and Cardinality
Intuitively, it is clear that you can only have a bijection between two sets if they have the same number of elements. After all, each element in \(\mathcal{A}\) is mapped onto exactly one element in
\(\mathcal{B}\). This can be used to define a relation between any two sets.
Two sets are called equinumerous if there exists a bijection between the two sets. Equinumerous literally means “having the same number”. But we have to be careful here because we don’t yet know what
the term “number” is supposed to mean. That is the reason why we define the term by using bijections and not referring to any “amount” or “number of elements”. Instead of saying that two sets are
equinumerous, we can also say that they have the same cardinality.
Now comes the clever bit that Frege proposed. Let’s create a class of sets that all share the same cardinality. We can do that because equinumerosity is an equivalence relation but I won’t go into
detail about what that means. We will call this cardinality class \(N\), so \[
\] is the class of all the sets that are equinumerous to \(\mathcal{A}\).
Intuitively we now have a class with all the sets that contain exactly one element, another class with all the sets that contain exactly two elements, and so forth. But we don’t know anything about
numbers yet, so we also don’t really know what one and two are supposed to mean.
Constructing Natural Numbers
Now we have all the tools to construct the natural numbers \(\mathbb{N}\). Of course, we want our numbers to obey the Peano axioms, so we need two things. We need a zero element and we need a
successor function \(S(n)\) that produces the next number from any given number.
The Zero Element
The zero-element is easily defined. We can construct the empty set, \[
\emptyset = \{\}.
\] This is the set with no elements in it. Now the zero-element is simply the cardinality class of the empty set, \[
0 = N(\emptyset).
\] This means that zero is a class of sets that all share the same cardinality as the empty set. You can show that this class consists of only one element, the empty set, by I won’t go into that
The Successor Function
Given that we have defined the zero element, \(0\), we can now define a set that contains zero as a single element, \[
\] Intuitively, this set has one element and we can thus define the natural number \(1\) as the cardinality class of this set, \[
1 = N(\{0\}).
\] In general, given any natural number \(n\) we can define the successor \(S(n)\) by creating the cardinality class of the set that contains \(n\) together with all its predecessors, \[
n+1 = S(n) = N(\{0, 1, \ldots, n\}).
\] You might think that this definition is somewhat circular. We are defining the successor function by using the concept of the predecessors. But this is not as problematic as it might seem at first
sight. We know that the predecessor of \(1\) is \(0\) and each time we construct the next natural number, we can keep track of all the predecessors that we have constructed so far.
The zero and the successor function defined above are enough to define all the natural numbers \(\mathbb{N}\). I will not go into the proof that all the Peano axioms are satisfied by this
construction. It is relatively straightforward and not very instructive in my opinion. If you want, you can try doing the proof as an exercise.
I personally find the Frege definition of natural numbers the most satisfying. It tells us that a number is not just some random symbol that doesn’t relate to the real world. A natural number is the
class of all sets that share the same property. Each set in the class has the same cardinality and we can identify the cardinality with that number. It means that any set of objects in the real world
can be thought of as an instance of a number. The number itself is the collection of sets and the concrete set is contained within it as an element. For example, if you see five apples on a table,
you can think of them as a manifestation of the number \(5\).
Another consequence of the definition of cardinality is that it gives us the ability to speak about infinities. A set might have an infinite number of elements. We already encountered \(\mathbb{N}\),
the set of all natural numbers. Using the cardinality, we can compare infinite sets and create a hierarchy of infinities. I might talk about this more in a later post.
It would not be fair, however, if I didn’t mention some serious problems with the definition that I Frege came up with. The main problem arises because we are creating classes of sets without
explicitly saying which elements we are allowing to be in those sets. This allows sets to contain arbitrary elements, including other sets. A set can even include itself as an element. This leads to
the famous paradox by Russel which can be summarised as follows. Construct a set \(\mathcal{R}\) of all the sets that do not include themselves as an element. Then ask the question, does \(\mathcal
{R}\) include itself? There are mathematical frameworks that attempt to save the essence of Frege’s definition of the natural numbers without running into these problems. In my personal opinion, they
always lose some of the beauty and simplicity. But this is a necessary concession to make if you want to end up with a mathematical framework that doesn’t contain internal contradictions.
Posted 24th August 2021 by Holger Schmitz
Problems in physics almost always require us to solve mathematical equations with real-valued solutions, and more often than not we want to find functional dependencies of some quantity of a
real-valued domain. Numerical solutions to these problems will only ever be approximations to the exact solutions. When a numerical outcome of the calculation is obtained it is important to be able
to quantify to what extent it represents the answer that was sought. Two measures of quality are often used to describe numerical solutions: accuracy and precision. Accuracy tells us how will a
result agrees with the true value and precision tells us how reproducible the result is. In the standard use of these terms, accuracy and precision are independent of each other.
Accuracy refers to the degree to which the outcome of a calculation or measurement agrees with the true value. The technical definition of accuracy can be a little confusing because it is somewhat
different from the everyday use of the word. Consider a measurement that can be carried out many times. A high accuracy implies that, on average, the measured value will be close to the true value.
It does not mean that each individual measurement is near the true value. There can be a lot of spread in the measurements. But if we only perform the measurement often enough, we can obtain a
reliable outcome.
Precision refers to the degree to which multiple measurements agree with each other. The term precision in this sense is orthogonal to the notion of accuracy. When carrying out a measurement many
times high precision implies that the outcomes will have a small spread. The measurements will be reliable in the sense that they are similar. But they don’t necessarily have to reflect the true
value of whatever is being measured.
Accuracy vs Precision
To fully grasp the concept of accuracy vs precision it is helpful to look at these two plots. The crosses represent measurements whereas the line represents the true value. In the plot above, the
measurements are spread out but they all lie around the true value. These measurements can be said to have low precision but high accuracy. In the plot below, all measurements agree with each other,
but they do not represent the true value. In this case, we have high precision but low accuracy.
A moral can be gained from this: just because you always get the same answer doesn’t mean the answer is correct.
When thinking about numerical methods you might object that calculations are deterministic. Therefore the outcome of repeating a calculation will always be the same. But there is a large class of
algorithms that are not quite so deterministic. They might depend on an initial guess or even explicitly on some sequence of pseudo-random numbers. In these cases, repeating the calculation with a
different guess or random sequence will lead to a different result.
Posted 17th April 2021 by Holger Schmitz
We all use numbers every day and some of us feel more comfortable dealing with them than others. But have you ever asked yourself what numbers really are? For example, what is the number 4? Of
course, you can describe the symbol “4”. But that is not really the number, is it? You can use Roman numerals IV, Urdu ۴, or Chinese and Japanese Kanji 四. Each one of these symbols represents the
same number. And yet, somehow we would all probably agree that there is only one number 4.
The question about the nature of numbers is twofold. You can understand this question purely mathematical one and ask for a clear definition of a number and the set of numbers in a mathematical
sense. This will be the topic of this article. You can also ask yourself what numbers are in a philosophical sense. Do numbers exist? If yes, in what way do they exist, and what are they? This may be
the topic of a future article.
Now that we have settled what type of question we want to answer, we should start with the simplest type of numbers. These are the natural numbers 0, 1, 2, 3, 4, …
I decided to include the number zero even though it might seem a little abstract at first. After all, what does it mean to have zero of something? But that objection strays into the philosophical
realm again and, as I said above, I want to focus on the mathematical aspect here.
When doing the research for this article, I was slightly surprised at the plethora of different definitions for the natural numbers. But given how fundamental this question is, it should be no wonder
that many mathematicians have thought about the problem of defining numbers and have come up with different answers.
The Peano Axioms
Let’s start with an axiomatic definition of numbers called the Peano axioms. This is one of the earliest strict definitions of the natural numbers in the modern sense. It doesn’t really state what
the natural numbers are but focuses on how they behave. We start with the set of natural numbers, which we call $\mathbb{N}$, and a successor function $S$.
Peano Axiom 1:
$0$ is a natural number or, more formally, $0 \in \mathbb{N}$
This axiom just tells us that there is a natural number zero. We could have chosen 1 as the starting point but this is arbitrary.
Peano Axiom 2:
Every natural number $x$ has a successor $y$.
In other words, given that $x \in \mathbb{N}$ then it follows that $y = S(x) \in \mathbb{N}$.
Intuitively, the successor function will always produce the next natural number.
Mathematicians call this property of the natural number being closed under the successor operation $S$. All this means is that the successor operation will never produce a result that is outside of
the natural numbers.
Peano Axiom 3:
If we have two natural numbers $x$ and $y$, and we know that the successors of $x$ and $y$ are equal, then this is the same as saying that $x$ and $y$ are equal.
Again, written more formally we say, given $x \in \mathbb{N}$ and $y \in \mathbb{N}$ and $S(x) = S(y)$ then it follows that $x=y$.
This means that any natural number cannot be the successor of two different number. In other words, if you have two different numbers then they can’t have the same successor.
Peano Axiom 4:
$0$ is not the successor of any other natural number.
In mathematical notation, if $x \in \mathbb{N}$ then $S(x) \ne 0$
At first sight, we might think that these axioms are complete. We can start from zero and use the successor function $S$ to iterate through the natural numbers. This is intuitively shown in the image
1 = S(0)
2 = S(S(0)) = S(1)
3 = S(S(S(0))) = S(2)
and so on.
But we haven’t guaranteed that this iteration will eventually be able to reach all natural numbers. Imagine that, in addition to the above, there is some special number $z$ that creates a closed loop
when applying the successor function. And this loop is separate from the sequence that we get when starting from zero.
So we need another axiom that guarantees that all natural numbers are reachable from zero by repeatedly applying the successor function. This axiom is usually stated in the following way.
Peano Axiom 5: Axiom of Induction
Given any set $U \subseteq \mathbb{N}$ with $0 \in U$.
If $U$ is such that for every $n \in U$ the successor is also in $U$, i.e. $S(n) \in U$,
then $U = \mathbb{N}$
The idea behind this axiom is that $U$ can be chosen as the minimal set that contains no additional loops or numbers that are not reachable from zero. The fact that any set $U$ that contains zero and
is closed under the successor function is identical to the natural numbers guarantees that all natural numbers are eventually reachable from zero. The axiom of induction is maybe more familiar in its
alternative form.
Peano Axiom 5: Axiom of Induction, alternative form
Consider a mathematical statement that is true for zero.
If it can be proven that,
given the statement is true for a number $n$, then it is also true for $S(n)$,
then it follows that the statement is true for all natural numbers.
Here you can see that this axiom forms the basis of the familiar proof by induction.
Some Remarks
The Peano Axioms are helpful in defining the set of natural numbers $\mathbb{N}$ and arithmetic operations on them. But personally, I feel unsatisfied by this definition. The Peano Axioms tell us how
natural numbers behave but they don’t really give any additional insight as to what numbers really are.
Take for example the number 2. We now know that 2 can be expressed as the successor of the successor of 0, i.e. $2 = S(S(0))$. We also know that this second successor must be a member of the set of
natural numbers, but not much more. The problem here is that we never defined what the successor function should be.
Nonetheless, the Peano axioms can serve as a basis for more in-depth definitions of the natural numbers. These definitions can be considered models of the Peano Axioms in the sense that they define a
zero element and some concrete successor function. The set of natural numbers can then be constructed from these and the Peano Axioms follow as consequences from these definitions.
In a future post, I will look at some set-theoretic definitions of the natural numbers. If you liked this post, please leave a comment and check back soon for more content.
Posted 5th August 2020 by Holger Schmitz
In a previous post, I wrote about the way that the computer stores and processes integers. This description referred to the basic architecture of the processor. In this post, I want to talk about how
different programming languages present integers to the developer. Programming languages add a layer of abstraction and in different languages that abstraction may be less or more pronounced. The
languages I will be considering here are C++, Python, and JavaScript.
Integers in C++
C++ is a language that is very close to the machine architecture compared to other, more modern languages. The data that C++ operates on is stored in the machine’s memory and C++ has direct access to
this memory. This means that the C++ integer types are exact representations of the integer types determined by the processor architecture.
The following integer datatypes exist in C++
Type Alternative Names Number of Bits G++ on Intel 64 bit (default)
char at least 8 8
short int short at least 16 16
int at least 16 32
long int long at least 32 64
long long int long long at least 64 64
This table does not give the exact size of the datatypes because the C++ standard does not specify the sizes but only lower limits. It is also required that the larger types must not use fewer bits
than the smaller types. The exact number of bits used is up to the compiler and may also be changed by compiler options. To find out more about the regular integer types you can look at this
reference page.
The reason for not specifying exact sizes for datatypes is the fact that C++ code will be compiled down to machine code. If you compile your code on a 16 bit processor the plain int type will
naturally be limited to 16 bits. On a 64 bit processor on the other hand, it would not make sense to have this limitation.
Each of these datatypes is signed by default. It is possible to add the signed qualifier before the type name to make it clear that a signed type is being used. The unsigned qualifier creates an
unsigned variant of any of the types. Here are some examples of variable declarations.
char c; // typically 8 bit
unsigned int i = 42; // an unsigned integer initialised to 42
signed long l; // the same as "long l" or "long int l"
As stated above, the C++ standard does not specify the exact size of the integer types. This can cause bugs when developing code that should be run on different architectures or compiled with
different compilers. To overcome these problems, the C++ standard library defines a number of integer types that have a guaranteed size. The table below gives an overview of these types.
Signed Type Unsigned Type Number of Bits
int8_t uint8_t 8
int16_t uint16_t 16
int32_t uint32_t 32
int64_t uint64_t 64
More details on these and similar types can be found here.
The code below prints a 64 bit int64_t using the binary notation. As the name suggests, the bitset class interprets the memory of the data passed to it as a bitset. The bitset can be written into an
output stream and will show up as binary data.
#include <bitset>
void printBinaryLong(int64_t num) {
std::cout << std::bitset<64>(num) << std::endl;
Integers in Python
Unlike C++, Python hides the underlying architecture of the machine. In order to discuss integers in Python, we first have to make clear which version of Python we are talking about. Python 2 and
Python 3 handle integers in a different way. The Python interpreter itself is written in C which can be regarded in many ways as a subset of C++. In Python 2, the integer type was a direct reflection
of the long int type in C. This meant that integers could be either 32 or 64 bit, depending on which machine a program was running on.
This machine dependence was considered bad design and was replaced be a more machine independent datatype in Python 3. Python 3 integers are quite complex data structures that allow storage of
arbitrary size numbers but also contain optimizations for smaller numbers.
It is not strictly necessary to understand how Python 3 integers are stored internally to work with Python but in some cases it can be useful to have knowledge about the underlying complexities that
are involved. For a small range of integers, ranging from -5 to 256, integer objects are pre-allocated. This means that, an assignment such as
will not create the number 25 in memory. Instead, the variable n is made to reference a pre-allocated piece of memory that already contained the number 25. Consider now a statement that might appear
at some other place in the program.
The value of b is clearly 25 but this number is not stored separately. After these lines b will reference the exact same memory address that n was referencing earlier. For numbers outside this range,
Python 3 will allocate memory for each integer variable separately.
Larger integers are stored in arbitrary length arrays of the C int type. This type can be either 16 or 32 bits long but Python only uses either 15 or 30 bits of each of these "digits". In the
following, 32 bit ints are assumed but everything can be easily translated to 16 bit.
Numbers between −(2^30−1) and 2^30−1 are stored in a single int. Negative numbers are not stored as two’s complement. Instead the sign of the number is stored separately. All mathematical
operations on numbers in this range can be carried out in the same way as on regular machine integers. For larger numbers, multiple 30 bit digits are needed. Mathamatical operations on these large
integers operate digit by digit. In this case, the unused bits in each digit come in handy as carry values.
Integers in JavaScript
Compared to most other high level languages JavaScript stands out in how it deals with integers. At a low level, JavaScript does not store integers at all. Instead, it stores all numbers in floating
point format. I will discuss the details of the floating point format in a future post. When using a number in an integer context, JavaScript allows exact integer representation of a number up to 53
bit integer. Any integer larger than 53 bits will suffer from rounding errors because of its internal representation.
const a = 25;
const b = a / 2;
In this example, a will have a value of 25. Unlike C++, JavaScript does not perform integer divisions. This means the value stored in b will be 12.5.
JavaScript allows bitwise operations only on 32 bit integers. When a bitwise operation is performed on a number JavaScript first converts the floating point number to a 32 bit signed integer using
two’s complement. The result of the operation is subsequently converted back to a floating point format before being stored.
|
{"url":"https://www.notjustphysics.com/tag/numbers/","timestamp":"2024-11-06T17:02:51Z","content_type":"text/html","content_length":"73676","record_id":"<urn:uuid:a6614ce6-8a16-413d-8858-f0e00d204371>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00719.warc.gz"}
|
Ferroelectricity, Superconductivity, and SrTiO3—Passions of K.A. Müller
Ferroelectricity, Superconductivity, and SrTiO[3]—Passions of K.A. Müller
Department of Quantum Matter Physics, University of Geneva, 24 Quai Ernest-Ansermet, 1211 Geneva 4, Switzerland
Author to whom correspondence should be addressed.
Submission received: 9 September 2020 / Revised: 7 October 2020 / Accepted: 13 October 2020 / Published: 15 October 2020
SrTiO$3$ is an insulating material which, using chemical doping, pressure, strain or isotope substitution, can be turned into a ferroelectric material or into a superconductor. The material itself,
and the two aforementioned phenomena, have been subjects of intensive research of Karl Alex Müller and have been a source of inspiration, among other things, for his Nobel prize-winning research on
high temperature superconductivity. An intriguing outstanding question is whether the occurrence of ferroelectricity and superconductivity in the same material is just a coincidence, or whether a
deeper connection exists. In addition there is the empirical question of how these two phenomena interact with each other. Here we show that it is possible to induce superconductivity in a
two-dimensional layer at the interface of SrTiO$3$ and LaAlO$3$ when we make the SrTiO$3$ ferroelectric by means of $18$O substitution. Our experiments indicate that the ferroelectricity is perfectly
compatible with having a superconducting two-dimensional electron system at the interface. This provides a promising avenue for manipulating superconductivity in a non centrosymmetric environment.
1. Introduction
Karl Alex Müller has numerous interests and passions. Most likely quite high in the list are ferroelectricity, superconductivity and SrTiO
—a material that, we believe, he called the drosophila of solid state physics. Known worldwide for their discovery of superconductivity in the cuprates, J.G. Bednorz and K.A. Müller explained in
their Nobel lecture that their search for high T
superconductivity in complex oxides had been partly motivated by SrTiO
, which, once doped, has a maximum T
of 0.5 K, actually very high when compared to its Fermi energy [
Close to ferroelectricity and to superconductivity, SrTiO
is indeed an amazing material. By itself it is an insulating cubic perovskite at room temperature. Below 105 K, an antiferrodistortive transition makes the system weakly tetragonal. Electronically,
is a quantum paraelectric—a compound often seen as “failed ferroelectric” with its inverse static dielectric constant
revealing a Curie–Weiss behavior. Unlike for ferroelectric materials, however,
never diverges but saturates at low temperatures as shown in 1979 by Müller and Burkard [
]. When doped, SrTiO
can be turned into a ferroelectric or into a superconductor. To achieve the former, Ca can be partially substituted for Sr [
] or
O can be replaced by
O [
]—in thin film form, strain also allows the ferroelectric state to be reached [
]. Superconductivity can be obtained by partially substituting Sr with La, Ti with Nb or by reducing the oxygen content—in all cases, the system is doped with electrons and the maximum T
is around 500 mK [
]. SrTiO
has, over time, revealed other amazing properties including the emission of blue-light once irradiated with Ar ions [
] or the electrolysis of water [
]. With the discovery in 2004 of conductivity [
] and in 2007 of superconductivity [
] at the interface between LaAlO
and SrTiO
, this "magic" perovskite was again at the center of worldwide attention. More recently, it is the prediction and discovery of an increase of T
in electron-doped and Ca or
O substituted SrTiO
that triggered a lot of interest, discoveries marrying the passions of K.A. Müller—ferroelectricity, superconductivity and SrTiO
In this paper, we aim to discuss how the proximity of a ferroelectric state to the superconducting phase may explain the Cooper pair coupling mechanism. We first review the properties of SrTiO$3$,
presenting a short summary of its phase diagram with the different ground states obtained by the various dopings and substitutions. We then recall the different models proposed since as far back as
1964 that may explain superconductivity in SrTiO$3$ and we discuss in particular ideas allowing the recent observation of T$c$ enhancement when SrTiO$3$ is pushed toward ferroelectricity to be
understood. Finally, we briefly introduce the LaAlO$3$/SrTiO$3$ system and show some experimental results obtained on these superconducting interfaces for which $16$O was partially substituted by
$18$O in the SrTiO$3$ single crystal substrate used for the growth of the LaAlO$3$ layer. We end the paper with a brief conclusion.
2. SrTiO$3$: Properties, Phase Diagram and Tuning Parameters
The centrosymmetric cubic perovskite structure (tolerance factor
= 1) that SrTiO
adopts at room temperature reflects the perfect balance between the ionic radii of its cations: deviations from
=1 would lead to various types of distortions, the most common ones being the oxygen octahedral rotations occurring for
< 1 [
]. As mentioned above, at 105 K SrTiO
goes through an antiferrodistortive (AFD) transition resulting in a tetragonal structure with oxygen octahedra rotated out of phase about the c-axis (a
in Glazer notation) [
]. Lowering the temperature further produces a softening of the ferroelectric phonon mode with a strong Curie–Weiss type increase of the static dielectric response, suggesting a transition into a
ferroelectric state at 20 K [
]. However, Müller and Burkard discovered that the dielectric constant saturates, reaching a value of 2 × 10
at 4 K [
]: they interpreted this saturation as the signature of an intrinsic quantum paraelectric state, i.e., an avoided ferroelectric state due to the quantum fluctuations of the atoms about their
centrosymmetric positions. Monte Carlo calculations have confirmed this scenario and revealed the role of quantum fluctuations on the reduction of the AFD transition temperature [
]. Given such proximity to a ferroelectric state, several groups have explored different approaches to obtain a polar state, applying mechanical [
] and epitaxial [
] strain or performing chemical—replacing Sr with Ca [
]—or isotopic substitutions—
O with
O [
]. These different avenues have induced a ferroelectric ground state with Curie temperatures exceeding, in some cases, room temperature [
Figure 1
shows schematically how the ferroelectric state develops beyond the quantum critical point (QCP) for the case of Ca-doping and
The large dielectric susceptibility is thought to be responsible for the fact that the electronic transition from the insulating state to a metallic state occurs at an extremely low carrier density
of 10
$− 3$
]. Such doping can be induced by chemical substitution of La for Sr [
], Nb for Ti or by oxygen reduction [
]. At these low dopings the mean free path of the conduction electrons is about 100 times greater than the Fermi wavelength [
]. One of the consequences is that quantum oscillations in the magneto-resistance are observed [
], a feature that allows the topology of the Fermi surface to be determined as a function of carrier density. At low temperatures (below 0.4 K), the metallic electron doped system undergoes a phase
transition into a superconducting condensate for a carrier density in the range 10
$− 3$
]. With 10
$− 3$
, doped SrTiO
is the lowest density superconductor and displays a unique broad range of charge concentration over which the superconducting state is observed. The origin of the superconducting state and the
dependence of the superconducting temperature on carrier density have been subjects of several studies [
]. An appealing proposition is that the two different order parameters may be somehow coupled: according to certain models that apply to perovskite-type structures, the ferroelectric instability is
the condition necessary to pair electrons [
]. Such an idea has been explored recently, leading to a clear prediction of the dependence of the superconducting critical temperature upon the proximity to the ferroelectric state [
3. Superconductivity in Doped SrTiO$3$ from 1964 until 2020
Using the linear combination of atomic orbitals method, Kahn and Leyendecker predicted in 1964 that the electronic energy bands in strontium titanate exhibit six conduction band ellipsoids lying
along [100] directions of momentum space with minima probably at the edges of the Brillouin zone [
]. In the same year, Marvin Cohen predicted that the attractive electron–electron interaction arising from the exchange of intravalley and intervalley phonons can cause these materials to exhibit
superconducting properties [
]. In less than a year, Schooley et al. [
] reported superconductivity in electron-doped SrTiO
with carrier concentrations in the range from 10
to 10
$− 3$
, and
ranging from 50 mK at the lowest doping to about 0.5 K for
$n c = 10 20$
$− 3$
. While these results confirmed Cohen’s prediction of superconductivity in electron-doped SrTiO
, it has been demonstrated by later band structure calculations [
] that there is only a single valley, which is located at the Brillouin zone center for each of the three conduction bands. The three bands at the zone center are non-degenerate due to spin-orbit
splitting and (below 105 K) a weak tetragonal crystal field [
], causing the sudden onset of quantum oscillations [
] at the critical dopings where the second and third band become occupied [
]. This also agrees well with the doping dependence of the two superconducting gaps observed by Binnig et al. [
]. While the Fermi surface properties agree well with the ab initio band structure predictions, the experimental values of the effective mass are a factor of two higher than the ab initio predictions
]. From the analysis of the mid-infrared absorption in doped SrTiO
it has become clear that the factor of two for the mass enhancement observed in the experiments is a consequence of the coupling of the conduction electrons to the longitudinal optical phonons, and
that the mid-infrared peaks originate from large polaron formation [
In the course of the more than five decades of research on SrTiO
a variety of models have been proposed for the pairing mechanism: intervalley scattering [
], bipolarons [
], two-phonon exchange [
], longitudinal optical phonons [
], full dielectric function for longitudinal phonons and screened Coulomb [
], acoustic phonons [
]. A possible role of ferroelectricity was proposed by Bussmann-Holder [
], an idea that has gained momentum in recent years [
]. A detailed theoretical prediction of a giant isotope effect on the superconducting T
] with an opposite sign from the BCS prediction has spurred on a number of isotope-substitution experiments [
] and Ca-substitution experiments, which are expected to have a similar effect on T
]. These experiments have confirmed the theoretical predictions. The theory was based on the coupling of electrons to the soft transverse optical phonon (the “TO1” mode). A problem has meanwhile been
pointed out, that the coupling to this phonon is far too small to account for T
on the order of several hundred mK [
]. A possible remedy is to couple the electrons to pairs of transverse optical phonons of opposite momentum [
]. A recent analysis of the optical oscillator strength of the TO1 mode has brought to light that this type of bi-phonon exchange is indeed unusually strong in SrTiO
], strong enough in fact to account for superconductivity in this material. In this scenario quantum ferroelectric fluctuations induce the pairing interaction that leads to superconductivity [
]; however, the main channel of interaction is mediated by pairs of phonons rather than single phonons as was originally proposed. In this context it is not an accident that superconductivity occurs
in proximity to a ferroelectric quantum critical point.
4. Superconductivity in Two Dimensions
Recent research on SrTiO
has focused on the two-dimensional electron systems that emerge at the crystal surface or in thin films. Mobile electrons can be localized at the surface of undoped SrTiO
crystals by cleaving in vacuum [
], in
-doped SrTiO
thin films [
] or in SrTiO
-based heterostructures. The well-known conducting interface between an insulating SrTiO
substrate and a thin film of LaAlO
belongs to the last class [
]. This heterostructure hosts a two-dimensional conducting system confined in SrTiO
within a few nanometers from its interface with LaAlO
. The electrons are transferred to SrTiO
to compensate for the polar discontinuity occurring between the two materials along the [001] direction [
]. Similar to the bulk case, this system undergoes a superconducting transition when cooled down below 300 mK. As-grown samples have a critical temperature of ∼200 mK, and a 1D critical current
density of 100
A/cm [
]. The superconductivity in this system has a two-dimensional character. Indeed, the analysis of the critical magnetic field parallel, H
$‖ *$
, and perpendicular, H
$⊥ *$
, to the interface yields a Ginzburg–Landau coherence length,
, of 60 nm at T = 0 K, and a thickness of the superconducting slab of 11 nm. As expected for a superconducting thin film (thickness ≪
) [
], H
$‖ *$
is much higher than H
$⊥ *$
as a 2D superconducting layer cannot accommodate a vortex parallel to the plane. Interestingly, the value of H
$‖ *$
exceeds the paramagnetic limit set by BCS theory,
$μ 0 · H ‖ * = 1.84 · T c$
$μ 0 · H ‖ *$
in T and
$T c$
in K), by a factor of 4–5 [
], and this effect might be linked to the presence of strong spin-orbit coupling in the system [
In 2008, Caviglia et al. showed that the superconductivity is tunable by the electric field effect [
]. The phase diagram of LaAlO
resembles that of bulk SrTiO
, but it extends over a much smaller carrier density range, between
$1 ×$
$− 3$
and 4
$× 10 19$
$− 3$
]. In the underdoped region of the phase diagram, a quantum critical point separates the superconducting regime from an insulating phase, related to the weak-localization effect [
5. $18$O Isotope Effect
Following the ferroelectric quantum critical scenario proposed by Rowley et al. [
], Edge et al. considered a specific scenario in which the ferroelectric soft mode is tuned by isotopic
O-substitution [
]. By tuning the
O substitution level beyond the QCP, they predicted both an increase of the maximum
$T c$
and a shift of the maximum of the dome to lower carrier densities. Experimentally, the increase of
$T c$
was first observed by Stucky et al. on 35%-isotope substituted samples that were electron-doped by oxygen removal [
]. In the BCS weak-coupling limit,
$T c$
is inversely proportional to the isotope mass
$T c ∝ M − α$
with an isotope coefficient
$α = + 0.5$
. The experimentally determined increase of
$T c$
$50 %$
] leads, however, to a negative and much larger value of
$α ≈ − 10$
, matching the theoretical prediction made by Edge and coworkers, both in sign and order of magnitude [
]. In a later work an enhancement of
$T c$
upon isotope substitution was further confirmed in isotope-substituted samples that were electron-doped by substituting Sr with La [
In this context, we measured the electronic properties of LaAlO$3$/SrTi($18$O$y 16$O$1 − y$)$3$ heterostructures: a system where a superconducting two-dimensional electron system is confined at the
interface between an insulating thin film and a ferroelectric substrate.
We optimized the isotopic substitution on commercial TiO$2$ terminated SrTiO$3$ substrates provided by CrysTec GmbH. Several crystals, 5 × 2.5 × 0.5 mm$3$ and 5 × 2.5 × 0.25 mm$3$ in size, are put in
a standard quartz tube, which is then sealed to fix the internal pressure of $18$O$2$ at 0.4–0.7 bar. The sealed tubes are placed in a tube furnace and heated up at temperatures between 700 and 1100
°C for 20–40 days. Before the LaAlO$3$ thin film growth, we evaluated the effect of the substitution procedure on the substrate topography using an atomic force microscope (AFM).
As-received TiO
terminated substrates have an atomically flat surface with a clear step-and-terrace topography (see
Figure 2
a). AFM imaging revealed that after the thermal treatment the crystal surface is completely reconstructed. Instead of the usual step-and-terrace structure, we found a “block-terrace” structure (see
Figure 2
b). In order to restore a controlled TiO
termination, the crystals have been re-polished and then treated in an HF (hydrofluoric acid) bath for 30 s, followed by a rinsing with demineralized water. After this procedure, the substrates
recovered the initial step-terrace structure (see
Figure 2
c), with atomically flat terraces and unit cell-high steps, and were ready for the LaAlO
deposition. The thin films of LaAlO
were grown by pulsed laser deposition, following the recipe used for standard LaAlO
heterostructures [
]. Their thickness, typically 6–7 unit cells, was monitored during the growth using reflection high energy electron diffraction (see
Figure 2
d). After the growth, a 20 nm gold layer was sputtered on the back side of the substrate to be used as an electrode for dielectric measurements.
We prepared and analyzed three LaAlO
$y 16$
$1 − y$
heterostructures with nominal
O contents in the SrTiO
substrate of 35%, 45%, and 67%, respectively. Compared to pure SrTiO
, the low-temperature dielectric constant,
$ϵ r$
, is strongly enhanced by the presence of
O (see
Figure 2
e). At
= 35% the substrate is on the verge of the ferroelectric transition and
$ϵ r$
saturates at roughly 4.7
$× 10 4$
(compared to 2
$× 10 4$
for SrTiO
). For
= 45% and 67%, the dielectric constant has a double peak structure, with the maxima indicating the position of the ferroelectric transition. The first peak occurs approximately at the Curie
temperature of 12 K (17 K), which agrees well with the nominal
O content 45% (67%), as indicated by the black arrow in
Figure 2
e. The second peak, which occurs at lower temperature (gray arrows), may be due to inhomogeneities in the
O content. It is worth noting that the substrates are heated up to 800 °C in an
O atmosphere during the growth of the LaAlO
film [
] and some
O may re-substitute part of the
O present at the SrTiO
surface. The second peak is visible at 6.9 K (8.1 K) and corresponds to an
O content of roughly 35% (40%) for a nominal content of 45% (67%).
Figure 2
f,g shows the sheet resistance (R
) as a function of temperature. Between 300 and 1.5 K, the behavior of these samples is similar to that of standard LaAlO
heterostructures [
]. The resistance has a slight dip at ∼95 K, which is presumably due to the antiferrodistortive transition at 105 K [
]. For all investigated samples, the resistance shows a small upturn below ∼15 K, the origin of which is still under investigation. It should be noted that an anomaly/upturn has been observed in the
low-temperature resistivity of bulk Ca-substituted SrTiO
$3 − δ$
samples and was associated to a ferroelectric-like state still existing in metallic samples [
]. Similarly, a study performed on heterostructures of LaAlO
grown on top of Ca-substituted SrTiO
substrates showed the presence of a resistance upturn occurring just below the Curie temperature, possibly linked to the ferroelectricity of the substrate [
]. If the temperature is further decreased, the samples undergo a superconducting transition at
$T c$
= 340, 255, and 300 mK for
= 35, 45, and 67%, respectively.
$T c$
is defined as the temperature at which the resistance is 50% of its value in the normal state (here at 800 mK). The transition temperature observed for the three samples is similar to that reported
in the 2D electron system confined in standard SrTiO
substrates [
]. We note that a comparison between the phase diagram shown in
Figure 1
and our data for the LaAlO
$y 16$
$1 − y$
interfaces is difficult due to the uncertainty on the equivalent 3D carrier density of the 2DES, which has an exponential charge profile inside the substrate. This pilot study shows that the presence
of a ferroelectric SrTiO
substrate is compatible with the formation of a conducting—and even superconducting—system at the interface with LaAlO
, and opens the path to the exploration of its effect on the electronic properties.
6. Conclusions
SrTiO$3$ plays host to a large variety of interesting physical phenomena. In particular, superconductivity can be obtained in the bulk and at a two-dimensional interface. Following the idea that
superconductivity can be enhanced by $18$O substitution in SrTiO$3$ we studied the properties of LaAlO$3$/SrTi($18$O$y 16$O$1 − y$)$3$ heterostructures with different $18$O concentrations (35%, 45%,
and 67%) in the SrTiO$3$ substrate. The observation of superconductivity at the interface of LaAlO$3$ and isotope substituted SrTiO$3$ with $T c$ on the order of 300 mK demonstrates that it is
experimentally possible to induce two-dimensional superconductivity in a ferroelectric-like environment. Further investigations with different levels of doping may reveal higher superconducting
critical temperatures in this system that combines ferroelectricity, superconductivity and SrTiO$3$—the passions of K.A. Müller.
Author Contributions
Data curation, G.S., M.B., C.W.R. and A.W.; Formal analysis, G.S. and M.B.; Investigation, G.S., M.B., D.P., C.W.R., A.W., S.G. and E.G.; Supervision, S.G., E.G., D.v.d.M. and J.-M.T.;
Writing—original draft, G.S., M.B., C.W.R., A.W., S.G., D.v.d.M. and J.-M.T.; Writing–review & editing, G.S., M.B., D.P., C.W.R., A.W., S.G., E.G., D.v.d.M. and J.-M.T. All authors have read and
agreed to the published version of the manuscript.
This work was supported by the Swiss National Science Foundation through Division II (projects 200020-179155 and 200020-179157). The research leading to these results has received funding from the
European Research Council under the European Union’s Seventh Framework Program (FP7/2007-2013)/ERC Grant Agreement 319286 Q-MAC).
We would like to thank Jennifer Fowlie for providing useful comments on the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 1.
Schematic phase diagram of SrTiO
showing the ferroelectric (FE) and superconducting (SC) phases. Upon chemical substitution of Ca for Sr, i.e., Sr
$1 − x$
$0.002 < x < 0.02$
], or by oxygen isotope substitution, i.e., SrTi
$18 O y 16 O 1 − y 3$
$y > 0.33$
], the material develops a FE ground state beyond a quantum critical point (QCP). This FE phase occurs well below the structural transition from cubic to tetragonal [
]. Charge doping turns the material from an insulator (I) into a metal (M) at a critical carrier density (n
) of 10
$− 3$
, while SC develops in a doping range n
$3 D$
between 5 × 10
and 10
$− 3$
Figure 2. Growth and physical properties of LaAlO$3$/SrTi($18$O$y 16$O$1 − y$)$3$ interfaces. AFM images of the SrTiO$3$ substrate (a) as-received, (b) after the $18$O$y$ substitution process, and (c
) after re-polishing and HF treatment (the size of all AFM images is 4 $μ$m × 4 $μ$m). (d) RHEED signal during the growth of the LaAlO$3$ layer. One oscillation corresponds to the deposition of one
unit cell of LaAlO$3$. (e) Dielectric constant versus temperature of a SrTiO$3$ substrate ($y = 0$) and of LaAlO$3$/SrTi($18$O$y 16$O$1 − y$)$3$ samples for different values of substitution. The
dielectric properties have been measured in a homemade Helium cryostat using the Agilent E4980A Precision LCR Meter. The electric field was applied between the back electrode and the 2DES used as a
top-electrode. (f,g) Sheet resistance versus temperature of the 2D electron system for $18$O substituted samples. The resistance jump visible in the curve for y = 35% at ∼0.45 K is due to an electric
spike, which occurred in the measurement system during our study.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Scheerer, G.; Boselli, M.; Pulmannova, D.; Rischau, C.W.; Waelchli, A.; Gariglio, S.; Giannini, E.; van der Marel, D.; Triscone, J.-M. Ferroelectricity, Superconductivity, and SrTiO[3]—Passions of
K.A. Müller. Condens. Matter 2020, 5, 60. https://doi.org/10.3390/condmat5040060
AMA Style
Scheerer G, Boselli M, Pulmannova D, Rischau CW, Waelchli A, Gariglio S, Giannini E, van der Marel D, Triscone J-M. Ferroelectricity, Superconductivity, and SrTiO[3]—Passions of K.A. Müller.
Condensed Matter. 2020; 5(4):60. https://doi.org/10.3390/condmat5040060
Chicago/Turabian Style
Scheerer, Gernot, Margherita Boselli, Dorota Pulmannova, Carl Willem Rischau, Adrien Waelchli, Stefano Gariglio, Enrico Giannini, Dirk van der Marel, and Jean-Marc Triscone. 2020. "Ferroelectricity,
Superconductivity, and SrTiO[3]—Passions of K.A. Müller" Condensed Matter 5, no. 4: 60. https://doi.org/10.3390/condmat5040060
Article Metrics
|
{"url":"https://www.mdpi.com/2410-3896/5/4/60","timestamp":"2024-11-08T14:26:41Z","content_type":"text/html","content_length":"477137","record_id":"<urn:uuid:cb8b0404-3678-459c-9c3e-9847009bf2e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00201.warc.gz"}
|
Browsing by Author "Warro, Olli"
Now showing items 1-1 of 1
• Warro, Olli (2023)
In many real-world problems, the task is to find an optimal solution within a finite set of solutions. Many of the problems, also known as combinatorial optimization problems, are NPhard. In
other words, finding an optimal solution for the problems is computationally difficult. However, being important for many real-world applications, there is a demand for efficient ways to solve
the problems. One approach is the declarative approach, where the problems are first encoded into a mathematical constraint language. Then, the encoded problem instance is solved by an algorithm
developed for that constraint language. In this thesis, we focus on declarative pseudo-Boolean optimization (PBO). PBO is the set of integer programs (IP) where the variables can only be assigned
to 0 or 1. For many real-world applications, finding an optimal solution is too time-consuming. Instead of finding an optimal solution, incomplete methods attempt to find good enough solutions in
a given time limit. To the best of our knowledge, there are not many incomplete algorithms developed specifically for PBO. In this thesis, we adapt an incomplete method developed for the maximum
satisfiability problem to PBO. In the adapted algorithm, which we call LS-ORACLE-PBO, a given PBO instance is solved using a form of local search that utilizes a pseudo-Boolean decision oracle
when moving from one solution to another. We implement and empirically compare LS-ORACLE-PBO to another recent incomplete PBO algorithm called LS-PBO. The results show that, in general, our
implementation is not competitive against LS-PBO. However, for some problem instances, our implementation provides better results than LS-PBO.
Now showing items 1-1 of 1
|
{"url":"https://ethesis.helsinki.fi/repository/handle/123456789/13/browse?type=author&value=Warro%2C+Olli","timestamp":"2024-11-03T13:52:57Z","content_type":"application/xhtml+xml","content_length":"19649","record_id":"<urn:uuid:c8f55c17-6fef-426c-bd29-8d15bbb9f6e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00358.warc.gz"}
|
Surface Integrals of Vector Fields
Show Mobile Notice Show All Notes Hide All Notes
Mobile Notice
You appear to be on a device with a "narrow" screen width (i.e. you are probably on a mobile phone). Due to the nature of the mathematics on this site it is best views in landscape mode. If your
device is not in landscape mode many of the equations will run off the side of your device (should be able to scroll to see them) and some of the menu items will be cut off due to the narrow screen
Section 17.4 : Surface Integrals of Vector Fields
Just as we did with line integrals we now need to move on to surface integrals of vector fields. Recall that in line integrals the orientation of the curve we were integrating along could change the
answer. The same thing will hold true with surface integrals. So, before we really get into doing surface integrals of vector fields we first need to introduce the idea of an oriented surface.
Let’s start off with a surface that has two sides (while this may seem strange, recall that the Mobius Strip is a surface that only has one side!) that has a tangent plane at every point (except
possibly along the boundary). Making this assumption means that every point will have two unit normal vectors, \({\vec n_1}\) and \({\vec n_2} = - {\vec n_1}\). This means that every surface will
have two sets of normal vectors. The set that we choose will give the surface an orientation.
There is one convention that we will make in regard to certain kinds of oriented surfaces. First, we need to define a closed surface. A surface \(S\) is closed if it is the boundary of some solid
region \(E\). A good example of a closed surface is the surface of a sphere. We say that the closed surface \(S\) has a positive orientation if we choose the set of unit normal vectors that point
outward from the region \(E\) while the negative orientation will be the set of unit normal vectors that point in towards the region \(E\).
Note that this convention is only used for closed surfaces.
In order to work with surface integrals of vector fields we will need to be able to write down a formula for the unit normal vector corresponding to the orientation that we’ve chosen to work with. We
have two ways of doing this depending on how the surface has been given to us.
First, let’s suppose that the function is given by \(z = g\left( {x,y} \right)\). In this case we first define a new function,
\[f\left( {x,y,z} \right) = z - g\left( {x,y} \right)\]
In terms of our new function the surface is then given by the equation \(f\left( {x,y,z} \right) = 0\). Now, recall that \(\nabla f\) will be orthogonal (or normal) to the surface given by \(f\left(
{x,y,z} \right) = 0\). This means that we have a normal vector to the surface. The only potential problem is that it might not be a unit normal vector. That isn’t a problem since we also know that we
can turn any vector into a unit vector by dividing the vector by its length. In our case this is,
\[\vec n = \frac{{\nabla f}}{{\left\| {\nabla f} \right\|}}\]
In this case it will be convenient to actually compute the gradient vector and plug this into the formula for the normal vector. Doing this gives,
\[\vec n = \frac{{\nabla f}}{{\left\| {\nabla f} \right\|}} = \frac{{ - {g_x}\,\vec i - {g_y}\,\vec j + \vec k}}{{\sqrt {{{\left( {{g_x}} \right)}^2} + {{\left( {{g_y}} \right)}^2} + 1} }}\]
Now, from a notational standpoint this might not have been so convenient, but it does allow us to make a couple of additional comments.
First, notice that the component of the normal vector in the \(z\)-direction (identified by the \(\vec k\) in the normal vector) is always positive and so this normal vector will generally point
upwards. It may not point directly up, but it will have an upwards component to it.
This will be important when we are working with a closed surface and we want the positive orientation. If we know that we can then look at the normal vector and determine if the “positive”
orientation should point upwards or downwards. Remember that the “positive” orientation must point out of the region and this may mean downwards in places. Of course, if it turns out that we need the
downward orientation we can always take the negative of this unit vector and we’ll get the one that we need. Again, remember that we always have that option when choosing the unit normal vector.
Before we move onto the second method of giving the surface we should point out that we only did this for surfaces in the form \(z = g\left( {x,y} \right)\). We could just as easily done the above
work for surfaces in the form \(y = g\left( {x,z} \right)\) (so \(f\left( {x,y,z} \right) = y - g\left( {x,z} \right)\)) or for surfaces in the form \(x = g\left( {y,z} \right)\) (so \(f\left(
{x,y,z} \right) = x - g\left( {y,z} \right)\)).
Now, we need to discuss how to find the unit normal vector if the surface is given parametrically as,
\[\vec r\left( {u,v} \right) = x\left( {u,v} \right)\vec i + y\left( {u,v} \right)\vec j + z\left( {u,v} \right)\vec k\]
In this case recall that the vector \({\vec r_u} \times {\vec r_v}\) will be normal to the tangent plane at a particular point. But if the vector is normal to the tangent plane at a point then it
will also be normal to the surface at that point. So, this is a normal vector. In order to guarantee that it is a unit normal vector we will also need to divide it by its magnitude.
So, in the case of parametric surfaces one of the unit normal vectors will be,
\[\vec n = \frac{{{{\vec r}_u} \times {{\vec r}_v}}}{{\left\| {{{\vec r}_u} \times {{\vec r}_v}} \right\|}}\]
As with the first case we will need to look at this once it’s computed and determine if it points in the correct direction or not. If it doesn’t then we can always take the negative of this vector
and that will point in the correct direction.
Finally, remember that we can always parameterize any surface given by \(z = g\left( {x,y} \right)\) (or \(y = g\left( {x,z} \right)\) or \(x = g\left( {y,z} \right)\)) easily enough and so if we
want to we can always use the parameterization formula to find the unit normal vector.
Okay, now that we’ve looked at oriented surfaces and their associated unit normal vectors we can actually give a formula for evaluating surface integrals of vector fields.
Given a vector field \(\vec F\) with unit normal vector \(\vec n\) then the surface integral of \(\vec F\) over the surface \(S\) is given by,
\[\iint\limits_{S}{{\vec F\centerdot d\vec S}} = \iint\limits_{S}{{\vec F\centerdot \vec n\,dS}}\]
where the right hand integral is a standard surface integral. This is sometimes called the flux of \(\vec F\) across \(S\).
Before we work any examples let’s notice that we can substitute in for the unit normal vector to get a somewhat easier formula to use. We will need to be careful with each of the following formulas
however as each will assume a certain orientation and we may have to change the normal vector to match the given orientation.
Let’s first start by assuming that the surface is given by \(z = g\left( {x,y} \right)\). In this case let’s also assume that the vector field is given by \(\vec F = P\,\vec i + Q\,\vec j + R\,\vec k
\) and that the orientation that we are after is the “upwards” orientation. Under all of these assumptions the surface integral of \(\vec F\) over \(S\) is,
\[\begin{align*}\iint\limits_{S}{{\vec F\centerdot d\vec S}} & = \iint\limits_{S}{{\vec F\centerdot \vec n\,dS}}\\ & = \iint\limits_{D}{{\left( {P\,\vec i + Q\,\vec j + R\,\vec k} \right)\centerdot \
left( {\frac{{ - {g_x}\,\vec i - {g_y}\,\vec j + \vec k}}{{\sqrt {{{\left( {{g_x}} \right)}^2} + {{\left( {{g_y}} \right)}^2} + 1} }}} \right)}}\sqrt {{{\left( {{g_x}} \right)}^2} + {{\left( {{g_y}}
\right)}^2} + 1} \,dA\\ & = \iint\limits_{D}{{\left( {P\,\vec i + Q\,\vec j + R\,\vec k} \right)\centerdot \left( { - {g_x}\,\vec i - {g_y}\,\vec j + \vec k} \right)}}\,dA\\ & = \iint\limits_{D}{{ -
P{g_x} - Q{g_y} + R}}\,dA\end{align*}\]
Now, remember that this assumed the “upward” orientation. If we’d needed the “downward” orientation, then we would need to change the signs on the normal vector. This would in turn change the signs
on the integrand as well. So, we really need to be careful here when using this formula. In general, it is best to rederive this formula as you need it.
When we’ve been given a surface that is not in parametric form there are in fact 6 possible integrals here. Two for each form of the surface \(z = g\left( {x,y} \right)\), \(y = g\left( {x,z} \right)
\) and \(x = g\left( {y,z} \right)\). Given each form of the surface there will be two possible unit normal vectors and we’ll need to choose the correct one to match the given orientation of the
surface. However, the derivation of each formula is similar to that given here and so shouldn’t be too bad to do as you need to.
Notice as well that because we are using the unit normal vector the messy square root will always drop out. This means that when we do need to derive the formula we won’t really need to put this in.
All we’ll need to work with is the numerator of the unit vector. We will see at least one more of these derived in the examples below. It should also be noted that the square root is nothing more
\[\sqrt {{{\left( {{g_x}} \right)}^2} + {{\left( {{g_y}} \right)}^2} + 1} = \left\| {\nabla f} \right\|\]
so in the following work we will probably just use this notation in place of the square root when we can to make things a little simpler.
Let’s now take a quick look at the formula for the surface integral when the surface is given parametrically by \(\vec r\left( {u,v} \right)\). In this case the surface integral is,
\[\begin{align*}\iint\limits_{S}{{\vec F\centerdot d\vec S}} & = \iint\limits_{S}{{\vec F\centerdot \vec n\,dS}}\\ & = \iint\limits_{D}{{\vec F\centerdot \left( {\frac{{{{\vec r}_u} \times {{\vec r}
_v}}}{{\left\| {{{\vec r}_u} \times {{\vec r}_v}} \right\|}}} \right)\left\| {{{\vec r}_u} \times {{\vec r}_v}} \right\|\,dA}}\\ & = \iint\limits_{D}{{\vec F\centerdot \left( {{{\vec r}_u} \times {{\
vec r}_v}} \right)\,dA}}\end{align*}\]
Again, note that we may have to change the sign on \({\vec r_u} \times {\vec r_v}\) to match the orientation of the surface and so there is once again really two formulas here. Also note that again
the magnitude cancels in this case and so we won’t need to worry that in these problems either.
Note as well that there are even times when we will use the definition, \(\iint\limits_{S}{{\vec F\centerdot d\vec S}} = \iint\limits_{S}{{\vec F\centerdot \vec n\,dS}}\), directly. We will see an
example of this below.
Let’s now work a couple of examples.
Example 1
Evaluate \( \displaystyle \iint\limits_{S}{{\vec F\centerdot d\vec S}}\) where \(\vec F = y\,\vec j - z\,\vec k\) and \(S\) is the surface given by the paraboloid \(y = {x^2} + {z^2}\), \(0 \le y \le
1\) and the disk \({x^2} + {z^2} \le 1\) at \(y = 1\). Assume that \(S\) has positive orientation.
Show Solution
Okay, first let’s notice that the disk is really nothing more than the cap on the paraboloid. This means that we have a closed surface. This is important because we’ve been told that the surface has
a positive orientation and by convention this means that all the unit normal vectors will need to point outwards from the region enclosed by \(S\).
Let’s first get a sketch of \(S\) so we can get a feel for what is going on and in which direction we will need to unit normal vectors to point.
As noted in the sketch we will denote the paraboloid by \({S_1}\) and the disk by \({S_2}\). Also note that in order for unit normal vectors on the paraboloid to point away from the region they will
all need to point generally in the negative \(y\) direction. On the other hand, unit normal vectors on the disk will need to point in the positive \(y\) direction in order to point away from the
Since \(S\) is composed of the two surfaces we’ll need to do the surface integral on each and then add the results to get the overall surface integral. Let’s start with the paraboloid. In this case
we have the surface in the form \(y = g\left( {x,z} \right)\) so we will need to derive the correct formula since the one given initially wasn’t for this kind of function. This is easy enough to do
however. First define,
\[f\left( {x,y,z} \right) = y - g\left( {x,z} \right) = y - {x^2} - {z^2}\]
We will next need the gradient vector of this function.
\[\nabla f = \left\langle { - 2x,1, - 2z} \right\rangle \]
Now, the \(y\) component of the gradient is positive and so this vector will generally point in the positive \(y\) direction. However, as noted above we need the normal vector point in the negative \
(y\) direction to make sure that it will be pointing away from the enclosed region. This means that we will need to use
\[\vec n = \frac{{ - \nabla f}}{{\left\| { - \nabla f} \right\|}} = \frac{{\left\langle {2x, - 1,2z} \right\rangle }}{{\left\| {\nabla f} \right\|}}\]
Let’s note a couple of things here before we proceed. We don’t really need to divide this by the magnitude of the gradient since this will just cancel out once we actually do the integral. So,
because of this we didn’t bother computing it. Also, the dropping of the minus sign is not a typo. When we compute the magnitude we are going to square each of the components and so the minus sign
will drop out.
\({S_1}\) : The Paraboloid
Okay, here is the surface integral in this case.
\[\begin{align*}\iint\limits_{{{S_1}}}{{\vec F\centerdot d\vec S}} & = \iint\limits_{D}{{\left( {y\,\vec j - z\,\vec k} \right)\centerdot \left( {\frac{{\left\langle {2x, - 1,2z} \right\rangle }}{{\
left\| {\nabla f} \right\|}}} \right)\,\left\| {\nabla f} \right\|\,dA}}\\ & = \iint\limits_{D}{{ - y - 2{z^2}\,dA}}\\ & = \iint\limits_{D}{{ - \left( {{x^2} + {z^2}} \right) - 2{z^2}\,dA}}\\ & = - \
iint\limits_{D}{{{x^2} + 3{z^2}\,dA}}\end{align*}\]
Don’t forget that we need to plug in the equation of the surface for \(y\) before we actually compute the integral. In this case \(D\) is the disk of radius 1 in the \(xz\)-plane and so it makes
sense to use polar coordinates to complete this integral. Here are polar coordinates for this region.
\[\begin{array}{c}x = r\cos \theta \hspace{0.5in}z = r\sin \theta \\ 0 \le \theta \le 2\pi \hspace{0.5in}0 \le r \le 1\end{array}\]
Note that we kept the \(x\) conversion formula the same as the one we are used to using for \(x\) and let \(z\) be the formula that used the sine. We could have done it any order, however in this way
we are at least working with one of them as we are used to working with.
Here is the evaluation of this integral.
\[\begin{align*}\iint\limits_{{{S_1}}}{{\vec F\centerdot d\vec S}} & = - \iint\limits_{D}{{{x^2} + 3{z^2}\,dA}}\\ & = - \int_{{\,0}}^{{\,2\pi }}{{\int_{{\,0}}^{{\,1}}{{\left( {{r^2}{{\cos }^2}\theta
+ 3{r^2}{{\sin }^2}\theta } \right)r\,dr}}\,d\theta }}\\ & = - \int_{{\,0}}^{{\,2\pi }}{{\int_{{\,0}}^{{\,1}}{{\left( {{{\cos }^2}\theta + 3{{\sin }^2}\theta } \right){r^3}\,dr}}\,d\theta }}\\ & = -
\int_{{\,0}}^{{\,2\pi }}{{\left( {\frac{1}{2}\left( {1 + \cos \left( {2\theta } \right)} \right) + \frac{3}{2}\left( {1 - \cos \left( {2\theta } \right)} \right)} \right)\left. {\left( {\frac{1}{4}{r
^4}} \right)} \right|_0^1\,d\theta }}\\ & = - \frac{1}{8}\int_{{\,0}}^{{\,2\pi }}{{4 - 2\cos \left( {2\theta } \right)\,d\theta }}\\ & = - \left. {\frac{1}{8}\left( {4\theta - \sin \left( {2\theta }
\right)} \right)} \right|_0^{2\pi }\\ & = - \pi \end{align*}\]
\({S_2}\) : The Cap of the Paraboloid
We can now do the surface integral on the disk (cap on the paraboloid). This one is actually fairly easy to do and in fact we can use the definition of the surface integral directly. First let’s
notice that the disk is really just the portion of the plane \(y = 1\) that is in front of the disk of radius 1 in the \(xz\)-plane.
Now we want the unit normal vector to point away from the enclosed region and since it must also be orthogonal to the plane \(y = 1\) then it must point in a direction that is parallel to the \(y\)
-axis, but we already have a unit vector that does this. Namely,
\[\vec n = \vec j\]
the standard unit basis vector. It also points in the correct direction for us to use. Because we have the vector field and the normal vector we can plug directly into the definition of the surface
integral to get,
\[\iint\limits_{{{S_2}}}{{\vec F\centerdot d\vec S}} = \iint\limits_{{{S_2}}}{{\left( {y\,\vec j - z\,\vec k} \right)\centerdot \left( {\vec j} \right)\,dS}}\, = \iint\limits_{{{S_2}}}{{y\,dS}}\]
At this point we need to plug in for \(y\) (since \({S_2}\)is a portion of the plane \(y = 1\) we do know what it is) and we’ll also need the square root this time when we convert the surface
integral over to a double integral. In this case since we are using the definition directly we won’t get the canceling of the square root that we saw with the first portion. To get the square root
well need to acknowledge that
\[y = 1 = g\left( {x,z} \right)\]
and so the square root is,
\[\sqrt {{{\left( {{g_x}} \right)}^2} + 1 + {{\left( {{g_z}} \right)}^2}} \]
The surface integral is then,
\[\begin{align*}\iint\limits_{{{S_2}}}{{\vec F\centerdot d\vec S}} & = \iint\limits_{{{S_2}}}{{y\,dS}}\\ & = \iint\limits_{D}{{1\sqrt {0 + 1 + 0} \,dA}} = \iint\limits_{D}{{dA}}\end{align*}\]
At this point we can acknowledge that \(D\) is a disk of radius 1 and this double integral is nothing more than the double integral that will give the area of the region \(D\) so there is no reason
to compute the integral. Here is the value of the surface integral.
\[\iint\limits_{{{S_2}}}{{\vec F\centerdot d\vec S}} = \pi \]
Finally, to finish this off we just need to add the two parts up. Here is the surface integral that we were actually asked to compute.
\[\iint\limits_{S}{{\vec F\centerdot d\vec S}} = \iint\limits_{{{S_1}}}{{\vec F\centerdot d\vec S}} + \iint\limits_{{{S_2}}}{{\vec F\centerdot d\vec S}} = - \pi + \pi = 0\]
Example 2
Evaluate \( \displaystyle \iint\limits_{S}{{\vec F\centerdot d\vec S}}\) where \(\vec F = x\,\vec i + y\,\vec j + {z^4}\,\vec k\) and \(S\) is the upper half the sphere \({x^2} + {y^2} + {z^2} = 9\)
and the disk \({x^2} + {y^2} \le 9\) in the plane \(z = 0\). Assume that \(S\) has the positive orientation.
Show Solution
So, as with the previous problem we have a closed surface and since we are also told that the surface has a positive orientation all the unit normal vectors must point away from the enclosed region.
To help us visualize this here is a sketch of the surface.
We will call \({S_1}\) the hemisphere and \({S_2}\) will be the bottom of the hemisphere (which isn’t shown on the sketch). Now, in order for the unit normal vectors on the sphere to point away from
enclosed region they will all need to have a positive \(z\) component. Remember that the vector must be normal to the surface and if there is a positive \(z\) component and the vector is normal it
will have to be pointing away from the enclosed region.
On the other hand, the unit normal on the bottom of the disk must point in the negative \(z\) direction in order to point away from the enclosed region.
\({S_1}\) : The Sphere
Let’s do the surface integral on \({S_1}\) first. In this case since the surface is a sphere we will need to use the parametric representation of the surface. This is,
\[\vec r\left( {\theta ,\varphi } \right) = 3\sin \varphi \cos \theta \,\vec i + 3\sin \varphi \sin \theta \,\vec j + 3\cos \varphi \,\vec k\]
Since we are working on the hemisphere here are the limits on the parameters that we’ll need to use.
\[0 \le \theta \le 2\pi \hspace{0.25in}\hspace{0.25in}0 \le \varphi \le \frac{\pi }{2}\]
Next, we need to determine \({\vec r_\theta } \times {\vec r_\varphi }\). Here are the two individual vectors and the cross product.
\[\begin{align*}{{\vec r}_\theta }\left( {\theta ,\varphi } \right) & = - 3\sin \varphi \sin \theta \,\vec i + 3\sin \varphi \cos \theta \,\vec j\\ {{\vec r}_\varphi }\left( {\theta ,\varphi } \
right) & = 3\cos \varphi \cos \theta \,\vec i + 3\cos \varphi \sin \theta \,\vec j - 3\sin \varphi \,\vec k\end{align*}\] \[\begin{align*}{{\vec r}_\theta } \times {{\vec r}_\varphi } & = \left| {\
begin{array}{*{20}{c}}{\vec i}&{\vec j}&{\vec k}\\{ - 3\sin \varphi \sin \theta }&{3\sin \varphi \cos \theta }&0\\{3\cos \varphi \cos \theta }&{3\cos \varphi \sin \theta }&{ - 3\sin \varphi }\end
{array}} \right|\\ & = - 9{\sin ^2}\varphi \cos \theta \,\vec i - 9\sin \varphi \cos \varphi {\sin ^2}\theta \,\vec k - 9{\sin ^2}\varphi \sin \theta \,\vec j - 9\sin \varphi \cos \varphi {\cos ^2}\
theta \,\vec k\\ & = - 9{\sin ^2}\varphi \cos \theta \,\vec i - 9{\sin ^2}\varphi \sin \theta \,\vec j - 9\sin \varphi \cos \varphi \left( {{{\sin }^2}\theta \, + {{\cos }^2}\theta } \right)\vec k\\
& = - 9{\sin ^2}\varphi \cos \theta \,\vec i - 9{\sin ^2}\varphi \sin \theta \,\vec j - 9\sin \varphi \cos \varphi \,\vec k\end{align*}\]
Note that we won’t need the magnitude of the cross product since that will cancel out once we start doing the integral.
Notice that for the range of \(\varphi \) that we’ve got both sine and cosine are positive and so this vector will have a negative \(z\) component and as we noted above in order for this to point
away from the enclosed area we will need the \(z\) component to be positive. Therefore, we will need to use the following vector for the unit normal vector.
\[\vec n = - \frac{{{{\vec r}_\theta } \times {{\vec r}_\varphi }}}{{\left\| {{{\vec r}_\theta } \times {{\vec r}_\varphi }} \right\|}} = \frac{{9{{\sin }^2}\varphi \cos \theta \,\vec i + 9{{\sin }^
2}\varphi \sin \theta \,\vec j + 9\sin \varphi \cos \varphi \,\vec k}}{{\left\| {{{\vec r}_\theta } \times {{\vec r}_\varphi }} \right\|}}\]
Again, we will drop the magnitude once we get to actually doing the integral since it will just cancel in the integral.
Okay, next we’ll need
\[\vec F\left( {\vec r\left( {\theta ,\varphi } \right)} \right) = 3\sin \varphi \cos \theta \,\vec i + 3\sin \varphi \sin \theta \,\vec j + 81{\cos ^4}\varphi \,\vec k\]
Remember that in this evaluation we are just plugging in the \(x\) component of \(\vec r\left( {\theta ,\varphi } \right)\) into the vector field etc.
We also may as well get the dot product out of the way that we know we are going to need.
\[\begin{align*}\vec F\left( {\vec r\left( {\theta ,\varphi } \right)} \right)\centerdot \left(- {{{\vec r}_\theta } \times {{\vec r}_\varphi }} \right) & = 27{\sin ^3}\varphi {\cos ^2}\theta + 27{\
sin ^3}\varphi {\sin ^2}\theta + 729\sin \varphi {\cos ^5}\varphi \\ & = 27{\sin ^3}\varphi + 729\sin \varphi {\cos ^5}\varphi \end{align*}\]
Now we can do the integral.
\[\begin{align*}\iint\limits_{{{S_{\,1}}}}{{\vec F\centerdot d\vec S}} & = \iint\limits_{D}{{\vec F\centerdot \left( {\frac{{{{\vec r}_\theta} \times {{\vec r}_\varphi}}}{{\left\| {{{\vec r}_\theta }
\times {{\vec r}_\varphi }} \right\|}}} \right)\,\left\| {{{\vec r}_\theta } \times {{\vec r}_\varphi }} \right\|\,\,dA}}\\ & = \int_{{\,0}}^{{\,2\pi }}{{\int_{{\,0}}^{{\frac{\pi }{2}}}{{27{{\sin }^
3}\varphi + 729\sin \varphi {{\cos }^5}\varphi \,d\varphi }}\,d\theta }}\\ & = \int_{{\,0}}^{{\,2\pi }}{{\int_{{\,0}}^{{\frac{\pi }{2}}}{{27\sin \varphi \left( {1 - {{\cos }^2}\varphi } \right) + 729
\sin \varphi {{\cos }^5}\varphi \,d\varphi }}\,d\theta }}\\ & = - \int_{{\,0}}^{{\,2\pi }}{{\left. {\left( {27\left( {\cos \varphi - \frac{1}{3}{{\cos }^3}\varphi } \right) + \frac{{729}}{6}{{\cos }^
6}\varphi } \right)} \right|_0^{\frac{\pi }{2}}\,d\theta }}\\ & = \int_{{\,0}}^{{\,2\pi }}{{\frac{{279}}{2}\,d\theta }}\\ & = 279\pi \end{align*}\]
\({S_2}\) : The Bottom of the Hemi-Sphere
Now, we need to do the integral over the bottom of the hemisphere. In this case we are looking at the disk \({x^2} + {y^2} \le 9\) that lies in the plane \(z = 0\) and so the equation of this surface
is actually \(z = 0\). The disk is really the region \(D\) that tells us how much of the surface we are going to use. This also means that we can use the definition of the surface integral here with
\[\vec n = - \vec k\]
We need the negative since it must point away from the enclosed region.
The surface integral in this case is,
\[\begin{align*}\iint\limits_{{{S_2}}}{{\vec F\centerdot d\vec S}} & = \iint\limits_{{{S_2}}}{{\left( {x\,\vec i + y\,\vec j + {z^4}\,\vec k} \right)\centerdot \left( { - \vec k} \right)\,dS}}\,\\ &
= \iint\limits_{{{S_2}}}{{ - {z^4}\,dS}}\end{align*}\]
Remember, however, that we are in the plane given by \(z = 0\) and so the surface integral becomes,
\[\iint\limits_{{{S_2}}}{{\vec F\centerdot d\vec S}} = \iint\limits_{{{S_2}}}{{ - {z^4}\,dS}} = \iint\limits_{{{S_2}}}{{0\,dS}} = 0\]
The last step is to then add the two pieces up. Here is surface integral that we were asked to look at.
\[\iint\limits_{S}{{\vec F\centerdot d\vec S}} = \iint\limits_{{{S_1}}}{{\vec F\centerdot d\vec S}} + \iint\limits_{{{S_2}}}{{\vec F\centerdot d\vec S}} = 279\pi + 0 = 279\pi \]
We will leave this section with a quick interpretation of a surface integral over a vector field. If \(\vec v\) is the velocity field of a fluid then the surface integral
\[\iint\limits_{S}{{\vec v\centerdot d\vec S}}\]
represents the volume of fluid flowing through \(S\) per time unit (i.e. per second, per minute, or whatever time unit you are using).
|
{"url":"https://tutorial.math.lamar.edu/Classes/CalcIII/SurfIntVectorField.aspx","timestamp":"2024-11-11T06:37:02Z","content_type":"text/html","content_length":"101031","record_id":"<urn:uuid:d4eddc21-ee31-442e-8cf5-4916bc88c404>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00642.warc.gz"}
|
Inspiring Drawing Tutorials
How To Determine Amp Draw
How To Determine Amp Draw - Therefore, there are 5 amperes. Web the first thing you need to do, is add up the power draw from all of your graphs. Learn to determine the amperage drawn by an
electrical device to test for a malfunction. Web the phase current i in amps (a) is equal to the power p in watts (w), divided by the power factor pf times the rms voltage v in volts (v): Verify the
ratings you’re working with. Web simply put, ohm’s law states “the resistor’s current i in amps (a) is equal to the resistor’s voltage vr=v in volts (v) divided by the resistance r in ohms (o):” to
most, it’s as confusing today, as it was the first time we heard it.
Web simply put, ohm’s law states “the resistor’s current i in amps (a) is equal to the resistor’s voltage vr=v in volts (v) divided by the resistance r in ohms (o):” to most, it’s as confusing today,
as it was the first time we heard it. Therefore, there are 5 amperes. Learn to determine the amperage drawn by an electrical device to test for a malfunction. Before you attach your multimeter to the
circuit, you need to make sure that the meter is rated for the number of amps traveling through that circuit. Get the actual running loads.
Ac three phase watts to amps calculation. Ex graph 1 2a + graph 2 4a + graph 3 1.5a= total 7.5a draw. Divide the total number of watts by the system's volts. Amps = watts / voltage this formula
calculates the amperage draw by dividing the power consumption in watts by the voltage of the appliance. How many amps is 240 watts at 120 volts.
Insert the red lead into the amp (a) terminal and the black lead into the ground (com) terminal. Web to calculate amp draw, we need to understand two fundamental electrical principles: The rms power
output and the amplifier’s efficiency. Web the formula used by the amps draw calculator to determine the amperage draw is as follows: The equivalent of available.
Web in this video we show you how to check the amp draw on a wire or piece of equipment using a multimeter with a clamp.want to learn more about what a career in. I show how to test how many amps an
electronic device pulls with a standard multimeter. How many amps is 1500 watts at 12 volts? Procedure.
Apply the conversion formula for dc electricity. Web by inputting variables such as motor kv, battery voltage, and propeller size, the calculator can provide an estimate of the amp draw of the motor.
Amps = watts ÷ volts. Get readings on no load conditions. Web the first thing you need to do, is add up the power draw from all.
The equivalent of available electricity at the power source is voltage, or volts. Suppose you measured 60w and 12v, calculate the amperes. I(a) = p(w) pf × v(v) the power factor of resistive
impedance load is equal to 1. Therefore, there are 5 amperes. What setting on multimeter should i use for.
Testing the cold cranking amps of a car battery. Finding the amperage rating of a circuit breaker. Amps = watts / voltage this formula calculates the amperage draw by dividing the power consumption
in watts by the voltage of the appliance. Procedure for measuring the amps or current draw by an individual electrical appliance, device, motor, etc. Connect the multimeter.
Verify the ratings you’re working with. Web by inputting variables such as motor kv, battery voltage, and propeller size, the calculator can provide an estimate of the amp draw of the motor. This is
represented by the following formula: Apply the conversion formula for dc electricity. Procedure for measuring the amps or current draw on an individual electrical circuit device.
23k views 10 years ago electrical solutions. Ex graph 1 2a + graph 2 4a + graph 3 1.5a= total 7.5a draw. What setting on multimeter should i use for. Voltage (v) = current (i) x resistance (r) 2. Web
the first thing you need to do, is add up the power draw from all of your graphs.
Using a brushless motor amp draw calculator can save time and money by preventing damage to the motor, battery, and speed controller due to overloading. Verify the ratings you’re working with. Get
readings on no load conditions. This is represented by the following formula: This is the maximum amperage that the circuit can take before the circuit breaker trips.
(100w / 12v = 8.3a) for systems with multiple lights, you would add up the wattage of each light and divide that by the system's volts. This is represented by the following formula: What setting on
multimeter should i use for. I(a) = p(w) pf × v(v) the power factor of resistive impedance load is equal to 1. Amps =.
How to calculate amps drawn by an appliance : Ways to check amps on a motor. Finding the amperage rating of a circuit breaker. Amps = watts / voltage this formula calculates the amperage draw by
dividing the power consumption in watts by the voltage of the appliance. What setting on multimeter should i use for.
How To Determine Amp Draw - Amps = watts / volts. For example, if you want to. Web by inputting variables such as motor kv, battery voltage, and propeller size, the calculator can provide an estimate
of the amp draw of the motor. Ways to check amps on a motor. You will find the calculator further on, complete with the amp hours chart (tells you how many ah you need to power different devices for
1h, 2h, 4h, and 8h ). Web to calculate amp draw, we need to understand two fundamental electrical principles: I (a) = p (w) / v (v) [4] or, more simply: How many amps is 240 watts at 120 volts. P =
2000w, v = 110v, pf = 0.8. Learn to determine the amperage drawn by an electrical device to test for a malfunction.
Then take that number and calculate how long you want to go. Ways to check amps on a motor. Insert the red lead into the amp (a) terminal and the black lead into the ground (com) terminal. Get the
actual running loads. Before you attach your multimeter to the circuit, you need to make sure that the meter is rated for the number of amps traveling through that circuit.
I show how to test how many amps an electronic device pulls with a standard multimeter. Procedure for measuring the actual or momentary amps in use (current draw) at a building circuit amps: Web the
amps draw calculator divides the power value by the voltage value to calculate the current or amperage. Web in this video we show you how to check the amp draw on a wire or piece of equipment using a
multimeter with a clamp.want to learn more about what a career in.
Web the phase current i in amps (a) is equal to the power p in watts (w), divided by the power factor pf times the rms voltage v in volts (v): This is represented by the following formula: Web divide
the watts of a given electrical item by the total number of volts available from the electric outlet to calculate amperage draw.
For example, if you want to. Web to calculate amp draw, we need to understand two fundamental electrical principles: Web in this video we show you how to check the amp draw on a wire or piece of
equipment using a multimeter with a clamp.want to learn more about what a career in.
Get The Actual Running Loads.
(100w / 12v = 8.3a) for systems with multiple lights, you would add up the wattage of each light and divide that by the system's volts. How many amps is 1500 watts at 12 volts? In the united states,
standard household circuits are rated for 15 or. Check the nameplate on your battery or breaker to determine its maximum amps.
Electric Current, Represented By I, Which Is Measured In Amps (A), Can Be Found By Dividing Power In Watts (W) By The Volts (V) Or Voltage.
This is represented by the following formula: Each circuit breaker should have its amperage marked on the handle. Learn to determine the amperage drawn by an electrical device to test for a
malfunction. Web the phase current i in amps (a) is equal to the power p in watts (w), divided by the power factor pf times the rms voltage v in volts (v):
Voltage (V) = Current (I) X Resistance (R) 2.
Web amp draw is a simple mathematical calculation. Web simply put, ohm’s law states “the resistor’s current i in amps (a) is equal to the resistor’s voltage vr=v in volts (v) divided by the
resistance r in ohms (o):” to most, it’s as confusing today, as it was the first time we heard it. How to calculate amps drawn by an appliance : Ways to check amps on a motor.
Ac Three Phase Watts To Amps Calculation.
Web divide the watts of a given electrical item by the total number of volts available from the electric outlet to calculate amperage draw. Procedure for measuring the actual or momentary amps in use
(current draw) at a building circuit amps: Web the formula used by the amps draw calculator to determine the amperage draw is as follows: P = 2000w, v = 110v, pf = 0.8.
|
{"url":"https://one.wkkf.org/art/drawing-tutorials/how-to-determine-amp-draw.html","timestamp":"2024-11-09T03:14:11Z","content_type":"text/html","content_length":"34969","record_id":"<urn:uuid:d424b1b1-af00-461f-a0ed-c60162ae96e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00768.warc.gz"}
|
The spectral density of dense random networks and the breakdown of the Wigner semicircle law of random matrix theory
The theory of random networks is useful in modelling systems of many interacting units, ranging from neurons in the brain and computers and routers in the Internet to species in an ecosystem. In this
theory, a key mathematical quantity is the eigenvalue spectrum of the adjacency matrix, the entries of which reflect the connections between different network elements. The eigenvalue spectrum
controls the dynamical behaviour of many processes occurring on a network, so understanding how network architecture influences this spectrum is a natural focus of considerable research.
One network model of special interest – known as the configuration model – allows researchers to freely adjust the degree distribution of the network, while maintaining an entirely random pattern of
network connections. The degree of a network element counts the number of links attached to it. In a notable work, LML External Isaac Pérez Castillo, along with collaborators, established a set of
exact equations determining the eigenvalue distribution of the configuration model. This result provides a starting point to study how the nature of degree fluctuations influences network spectra. So
far, these equations can be solved analytically only in the so-called dense limit, when the average degree becomes infinitely large, and the random network effectively becomes fully connected.
Previous works have shown that the eigenvalues of dense networks follow the Wigner semicircle law of random matrix theory, and it is widely believed that this result is universal, i.e., the details
of the network structure are irrelevant to the dense limit.
However, all results for the spectra of dense networks are limited to models with weak degree fluctuations. Little is known about the role of degree fluctuations in the dense limit and the universal
character of the semicircle distribution. In a new paper, LML External Fellow Fernando Metz addresses this question in a work with Jeferson Silva of the Federal University of Rio Grande do Sul in
Porto Alegre, Brazil. Focussing on the dense limit of the eigenvalue distribution of networks, they consider four different examples in which the degree variance scales differently with the average
degree. In general, their results show that, in the dense limit, the eigenvalue distribution is determined by the degree fluctuations, and the standard Wigner result breaks down if the degree
fluctuations are strong enough. In one case – for dense random graphs with an exponential degree distribution – they find a surprising logarithmic divergence in the spectral density and the absence
of sharp spectral edges. In other words, the eigenvalue distribution has support on the entire real line, in sharp contrast to the Wigner law of random matrix theory. Based on an exact calculation,
the authors also establish a sufficient condition for the breakdown of the Wigner law.
The LML provided financial support for Jeferson Silva by funding a scientific initiation scholarship.
His paper is available at https://arxiv.org/pdf/2007.15155.pdf
|
{"url":"https://lml.org.uk/2020/10/07/the-spectral-density-of-dense-random-networks-and-the-breakdown-of-the-wigner-semicircle-law-of-random-matrix-theory/","timestamp":"2024-11-06T10:49:24Z","content_type":"text/html","content_length":"72970","record_id":"<urn:uuid:7489740c-5022-4504-949e-eac49c3724c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00270.warc.gz"}
|
Neha Shaah - MATLAB Central
Neha Shaah
Followers: 0 Following: 0
I like to code using matlab and have 7+ years of experience working with the tool.
of 295,014
1 Question
0 Answers
of 20,161
0 Files
of 153,031
1 Problem
72 Solutions
Unique - Very Very Large Numbers
Given a vector column, with some very large numbers, create the ascending sort and unique vector. *Input:* A (column vector)...
2 years ago
is matlab 'fit' function changing behavior ?
I was working with curve fitting problem and use fit function as follows : fit(x,y,fitType) Now when I am running the code ...
2 years ago | 2 answers | 0
Calculate the mean of each half of a matrix
Given a matrix with an even number of columns, n, return a 1-by-2 row vector where the first element is the mean of all the elem...
2 years ago
Find Closest Constant
Given a number x, return the value that is closest to x from this list of constants: 0, 1, , e, , (also known as ). For exampl...
2 years ago
Matrix Quadrants
Write a function that takes N as the input, and outputs a matrix whose upper-left (NxN) quadrant contains all ones, the lower-ri...
2 years ago
How many jokers?
* Given DNA codes of a group of suspects, * and a code for certain types of jokers, * Count how many jokers of that type. * ...
2 years ago
The Hitchhiker's Guide to MATLAB
Output logical "true" if the input is the answer to life, the universe and everything. Otherwise, output logical "false".
2 years ago
Find the maximum number of decimal places in a set of numbers
Given a vector or matrix of values, calculate the maximum number of decimal places within the input. Trailing zeros do not coun...
2 years ago
Make roundn function
Make roundn function using round. x=0.55555 y=function(x,1) y=1 y=function(x,2) y=0.6 y=function(x,3) ...
2 years ago
Rounding off numbers to n decimals
Inspired by a mistake in one of the problems I created, I created this problem where you have to round off a floating point numb...
2 years ago
Matlab Basics - Rounding III
Write a script to round a large number to the nearest 10,000 e.g. x = 12,358,466,243 --> y = 12,358,470,000
2 years ago
Check that number is whole number
Check that number is whole number Say x=15, then answer is 1. x=15.2 , then answer is 0. http://en.wikipedia.org/wiki/Whole_numb...
2 years ago
MATLAB Basic: rounding IV
Do rounding towards plus infinity. Example: -8.8, answer -8 +8.1 answer 9 +8.50 answer 9
2 years ago
MATLAB Basic: rounding III
Do rounding towards minus infinity. Example: -8.8, answer -9 +8.1 answer 8 +8.50 answer 8
2 years ago
MATLAB Basic: rounding II
Do rounding nearest integer. Example: -8.8, answer -9 +8.1 answer 8 +8.50 answer 9
2 years ago
Verify Law of Large Numbers
If a large number of fair N-sided dice are rolled, the average of the simulated rolls is likely to be close to the mean of 1,2,....
2 years ago
Calculate Inner Product
Given two input matrices, |x| and |y|, check if their inner dimensions match. * If they match, create an output variable |z|...
2 years ago
Momentum Calculation
A shopping cart of mass 'm1' is traveling with velocity 'u' and collides with a second shopping cart of mass 'm2.' The two shopp...
2 years ago
Create a cell array out of a struct
Create a cell array out of a (single) struct with the fieldname in the first column and the value in the second column: in: ...
2 years ago
Vector creation
Create a vector using square brackets going from 1 to the given value x in steps on 1. Hint: use increment.
2 years ago
Doubling elements in a vector
Given the vector A, return B in which all numbers in A are doubling. So for: A = [ 1 5 8 ] then B = [ 1 1 5 ...
2 years ago
Create a vector
Create a vector from 0 to n by intervals of 2.
2 years ago
Flip the vector from right to left
Flip the vector from right to left. Examples x=[1:5], then y=[5 4 3 2 1] x=[1 4 6], then y=[6 4 1]; Request not ...
2 years ago
Find max
Find the maximum value of a given vector or matrix.
2 years ago
|
{"url":"https://au.mathworks.com/matlabcentral/profile/authors/5082624","timestamp":"2024-11-03T04:33:52Z","content_type":"text/html","content_length":"111139","record_id":"<urn:uuid:dae34005-0277-4827-bc09-2d0c0073bfda>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00815.warc.gz"}
|
Fruechte Gravity Theory Blog
Mar 29 2022
The gamma ray field we live in is extremely rich and dense. For the forms we find in nuclei and assorted particles, there is all the energy needed to drive all physical processes.
A Calabi-Yau shape within a nucleus or particle needs an external energy supply to maintain it. Gravity provides the energy. Here we are talking about force and pressure within a nucleus or particle,
with only indirect connection to the outside, or connection at a point, curve, or surface. There may also be tears joining and reforming.
Occasionally we refer to neutrons, protons, electrons, and nuclei. A proton can be a hydrogen nucleus, though we list it separately when we talk about free protons, such as in the solar wind,
particle colliders, or elsewhere. Let’s take an Oxygen nucleus for example with the makings of 8 protons and 8 neutrons. Inside the nucleus, at the top, parachutes with baskets attached through
ropes, or strings, instead of a parachutist, may cause some gravitons to loop around the insides of the parachutes, or branes, and into the baskets with enough force to hold the parachutes against
the highest flux density of gravitons. Then the gravitons would find ways to tunnel through the baskets, pushed from behind. In the motions of O[2] in air, the parachutes may slide around to stay
opposite the maximum flux.
This may also help explain weak interaction parity violation, because as an electron forming within a nucleus tries to escape, out the bottom is easier, due to escape out the top involving going
through the gaps in the parachutes. More than 50% would come out downward.
The manifold of the sun’s gamma ray field, the manifold of the earth’s gamma ray field, and likewise with other celestial bodies, provides a combination of symmetric spaces. During the day, at noon
let’s say, the vectors of the sun’s manifold are in the opposite direction as the vectors of the earth’s terrestrial manifold. The Coulomb field uses all vectors of all manifolds to propagate,
because all vectors, within a distance of 10 meters at least, are frozen in time for phonon transmission.
Let’s say M[1] is the earth’s manifold, and M[2] is the combination of the earth’s and sun’s manifolds. “…a diffeomorphism F: M[1] → M[2] of manifolds oriented by Ω[1], Ω[2], is
orientation-preserving if F*Ω[2] = λΩ[1], where λ > 0 is a C^ꝏ function on M.” ([1] pg. 209) In our example here, λ > 1, and we have neglected the earth’s moon for simplification.
We may call a negative charge a left coset space, and a positive charge a right coset space. Each creates its own homomorphism in the dense gamma ray field, by a diffeomorphism on the electric fields
of the gamma rays. For one thing, there is circular polarization. For another, perpendicular to the greatest flux density of gamma rays the electric fields of the gamma rays may have skewed sine wave
lobes, somewhere between a normal sine wave and a sawtooth. The Coulomb field acts tangent to the R vector sphere, and “(∇[X]Y)[p] depends not on the vector field X but only on its value X[p] at p.”
([1] pg. 309] The way that the Coulomb field transmits radially is by centrifugal force through the gamma ray field.
The inside of an atom may be called a geodesic. An electron path in an atomic orbital may also be called a geodesic, and “a long geodesic may not be minimal.” ([2] pg. 62) This is due to the Lorentz
Gravity is an integral manifold. Each orbital arc is a line integral absorbing gravitons. The Coulomb field, on the other hand, is a charge induced diffeomorphism in the gamma ray field.
Substantially outside of neutral atoms there is a propensity for positive and negative charges to cancel, though in the near field we have van der Waals forces.
Phonons for the Coulomb interaction are generated inside a charge. The field created, that acts on another charge, may act on the outside of another charge, possibly only 5% of the diameter deep. The
fields may also act in the interspace, producing backflush to the charges that generate the fields. Phonons of opposite chirality attract, and of the same chirality repel.
As points meet for the Coulomb force, the acceleration would be periodic, and relates to the vector potential. A Fourier Series can be applied to the vector potential, with the direction of force
being the side of the ‘x’ axis where the sine or cosine function has larger lobes. Often a geodesic is called piecewise smooth, due to gravitons being separate, though on a classical scale the motion
is smooth.
Two electrons can occupy the same atomic orbital if they have opposite half-integer spin projections. This is the Pauli exclusion principle. In terms of tensor math, “the subspaces are mutually
orthogonal and each is a nontrivial irreducible subspace.” ([1] pg. 242)
[1] Boothby, William M., An Introduction to Differentiable Manifolds and Riemannian Geometry, Academic Press, 2003
[2] J. Milnor, based on lecture notes by M. Spivak and R. Wells, Morse Theory, Princeton University Press, 1969
|
{"url":"https://www.fruechtetheory.com/blog/2022/03/","timestamp":"2024-11-05T18:48:56Z","content_type":"application/xhtml+xml","content_length":"41272","record_id":"<urn:uuid:51fe66e5-d56c-4bc3-9842-19d3a9b989b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00450.warc.gz"}
|
Quantum Game Development: Looking Back and then Beyond
“If I have a thousand ideas and only one turns out to be good, I am satisfied. “
-Alfred Nobel (1833–1896)
A Radical New Idea
The year is 1981. Brought together by MIT, A group of the earth’s brightest minds in the fields of physics and computing convened inside of a Boston mansion to discuss the frontier of computational
theory. This would be no ordinary meeting of great minds, however. Something different was going to be discussed. Something incredible.
The Attendees
Only a year prior to the conference, a computer scientist, Paul Benioff, published a theoretical paper proposing the possibility of a special type of computer that could operate under the laws of
quantum mechanics. He theorized that this special computer could vastly outperform classical computers in certain computations using quantum bits or “qubits”.
It’s no stretch of the imagination to think these attendees were eager to discuss this new wild idea. We could even venture to say that they wanted to hear what the great, Nobel prize winning,
Richard Feynman had to say about the matter.
Feynman would not disappoint. He delivered an electrifying, attention-snatching lecture that not a single person in that room could have imagined the impact of, not even Feynman himself.
Feynman’s idea was radical: a practical implementation of a quantum computer. He proposed using a two-state system, such as a spin-1/2 particle, to represent one of these “qubits”. He suggested that
the qubits could be manipulated using a magnetic field to perform these superior quantum operations. He also included a proposal for a quantum simulator to simulate the behavior of quantum systems,
which he believed could never be truly simulated on classical computers.
Richard Feynman
This transcendent idea that Feynman turned into a concrete blueprint and then gifted to the world would serve as the guiding light for quantum computing over the coming decades.
Turning Radical to Real
In the mid-1990s, the trailblazing research team led by Peter Shor and Lov Grover at Bell Labs reached the first significant milestone in realizing Feynman’s vision: by working out the first quantum
algorithms based on experiments with protype quantum computers. “Shor’s algorithm” showed that quantum computers could solve certain mathematical problems exponentially faster than classical
computers, specifically the problem of factoring large numbers, which is the basis of all modern encryption methods.
Peter Shor
Grover also made similar contributions to the cause with his quantum eponymous algorithm, which can search an unsorted database with the efficiency and speed that classical computers could never
achieve. Grover’s algorithm has the potential to revolutionize data search and optimization in almost every field imaginable.
Lov Grover
These breakthroughs proved that Feynman’s vision of a quantum computer was a tangible, obtainable reality. Thus, the floodgates of imagination were suddenly and violently thrown open in millions of
minds across the world and innovation would soon flowing freely in this new and alien-like age of quantum computing.
Quantum Thinking
It wouldn’t take long for someone to start applying this strange “quantum thinking” to other domains of science to create entirely new areas of study straight out of thin air. In 1999, David Meyer, a
professor at UC San Diego, did just that with his landmark paper titled “Quantum Strategies”. The paper introduced the world the concept of applying quantum mechanics to game theory.
David Meyer
Traditional game theory is the study of decision-making in situations where two or more individuals or groups have competing interests. It involves analyzing the strategies that players can use to
maximize their outcomes in such scenarios. The strategies are typically determined based on assumptions about the preferences and rationality of the players, and the outcomes are evaluated based on a
set of rules or criteria.
In the quantum version, players have access to quantum signals through the phenomenon of entanglement, which allows for a greater range of possible strategies. These quantum strategies are based on
superpositions and entangled states, which can lead to novel and more efficient solutions to the game. Essentially, quantum game theory expands the range of strategies available to players beyond
those of classical game theory, potentially leading to new and interesting outcomes.
Quantum Chess
Let’s take the fictional game of real-life, physical, “you can touch it”, Quantum Chess as an example:
players could use quantum superposition and entanglement to create new strategies that are not available in classical chess. For example, a player could use superposition to move a single piece to
two different squares simultaneously or use entanglement to coordinate multiple pieces at once.
These strategies would most certainly lead to highly complex and mind-bending gameplay that would require a radically different way of thinking and strategizing. Of course, something like physical
quantum chess isn’t possible with our current understanding of physics and reality but it’s great example for making Quantum game theory more intuitive.
It’s a fun and thought-provoking concept, but if you have some programming skills this can and has been able to be done with classic computer for a long time now.
You can check out an implementation of Quantum Chess here.
Although the simulated entanglement mechanics wouldn’t be truly random, the game could still provide an identical experience to its quantum counterpart that would be almost impossible to tell apart.
We will come back to that point in a (qu)bit though.
Quantum Leap into Game Development
We need to talk about IBM.
In 2016, IBM once again showed us that it was still more than capable of providing disruptive, world changing technology via their open-source Quantum software development platform, Qiskit, which
allows developers to write quantum computing algorithms, execute them on IBM’s own quantum processors and hardware.
One man immediately saw the light provided by IBM and knew what he wanted to do. He wanted to make quantum games.
Dr. James Wooten
Dr. Wootton was able to claim his stake on the first ever Quantum game created by making a simple, quantum version of Battleships that leaned more towards educating users on the underlying mechanics
of quantum computing. Although it could hardly be considered a game by today's standards, it was truly the first intentionally made quantum computer game.
You can read about it here from the man himself.
Quantum Battleships: The first game for a quantum computer | by Dr James Wootton | Medium
Dr. Wootton has come a long way since that first game he created. He has even partnered with IBM to create educational games and is still pushing the boundary on Quantum Game Development. You can
read more about his journey in this highly informative article below.
The History of Games for Quantum Computers | by Dr James Wootton | Medium.
IMB has some exciting things planned for quantum computing in the coming years as well. Here is their road map up to 2026!
Synergy Is Key
This roadmap up to 2026 outlines plans to build quantum-centric supercomputers that incorporate quantum processors, classical processors, quantum communication networks, and classical networks to
completely transform how we compute.
In this roadmap here, a glimpse at the true potential of quantum computers is shown.
Hybrid Architecture.
By combining classical and quantum computing power in a hybrid architecture, these systems will be stronger together than either could be alone. Classical and quantum computers are two sides of the
same coin, the yin to the yang, the sun and the moon. The future of the everyday widespread use of quantum computing lies in the synergistic collaboration of these two technologies. Synergy is the
This here in lies the issue with the Quantum games we have today.
Quantum Games are not “Fun”
Now, we are getting back to that (qu)bit from earlier.
While quantum game development is still in its infancy, it’s understandable why pure quantum games aren’t inherently “fun” — pioneers like Dr. Wootton are just operating as well as they can under the
current constraints while staying “true” to the quantum mechanics.
The big Delima here is that our brains have been wired by decades of game development, spoiling us with culturally iconic series like The Legend of Zelda, World of Warcraft, Pokémon, and entirely too
many more to list. Pure quantum computing games alone can’t compete with what classical computers can already do and have achieved and especially won't be able to compete with our brains, hard as we
try. I believe the gap is far too wide to close. Thankfully we can rely on companies like IBM to help us solve this problem by paving the way to a future of Hybrid Architecture Quantum computing and
unlock the spooky potential of the quantum world.
That said, I strongly believe we’re on the right path with quantum game development. While it’s fun to know you’re getting truly random information or fast computation from IBM’s hardware, our lizard
brains are really good at making things not novel and fun anymore.
Gazing Beyond the horizon
I want to speculate and get theoretical about the quantum gaming environment we might see in 50 years but also stay close to the physical boundaries we are helplessly governed by.
Quantum internet and MMO’s
There is a ridiculous amount of potential here. Let’s imagine an MMO that allows 1 million players online in the same world simultaneously and consider all of the challenges.
First, let’s consider the communication aspect. A quantum internet would rely on quantum entanglement and Quantum teleportation to securely transmit information between nodes. The process of
entangling qubits involves creating a pair of qubits that are linked in such a way that measuring the state of one of the qubits instantly determines the state of the other. Quantum teleportation is
a process that can be used to transmit quantum information between two nodes that share an entangled state. However, it does not transmit information faster than the speed of light and it requires a
classical communication channel to transmit information about the result of the measurement made on the entangled qubit.
To address these challenges, a classical computer could be used to handle the initial communication and setup of the entanglement between nodes. The classical computer could coordinate the creation
and entanglement of the qubits, as well as handle any error correction that is required. Once the entanglement is established, the quantum internet could then be used to transmit information between
the nodes using Quantum teleportation.
A hybrid quantum-classical approach could also be used to enhance the communication efficiency of the quantum internet. For instance, classical networks could be used to handle the bulk of the
communication, while the quantum internet could be used for high-security, low-latency transmissions. Additionally, classical algorithms could be used to preprocess data before sending it over the
quantum network, reducing the amount of information that needs to be transmitted and minimizing the potential for errors.
This would require a complex network of nodes and entanglement links between them to achieve this. Additionally, the communication speed would depend on the distance between the nodes and the quality
of the entangled states used for the communication. As the number of nodes and the distance between them increase, the complexity and difficulty of maintaining the entanglement links also increase.
Nevertheless, advances in quantum networking and quantum error correction techniques may make it possible to achieve a scalable quantum internet in the future.
How could it be possible to store all that information for one million players?
There are a few possibilities to overcome this, one possibility is using a technique called quantum error correction, which can protect quantum states from decoherence, and errors caused by external
factors. This technique involves storing quantum information across multiple qubits in a way that allows errors to be detected and corrected.
In the context of an MMO, the concatenated error codes technique could be used to store and protect and store large amounts of player data, such as player profiles, inventory, and game progress. For
example, each player’s data could be encoded onto a set of qubits, and then those qubits could be further encoded onto additional qubits at higher levels of the concatenated codes hierarchy. This
would allow for efficient storage and retrieval of player data, while also protecting against errors and decoherence.
classical computers could be used to manage the storage and retrieval of data, while the quantum computer could be used for the actual encoding and decoding of the data onto the qubits. This would
take advantage of the strengths of both types of computers and could result in a more efficient and reliable storage and retrieval system.
Quantum data compression could also be used, this technique involves finding patterns in data and using them to represent the information in a more compact form. One way to achieve this is through a
process called “quantum machine learning,” where quantum algorithms are used to analyze large datasets and extract meaningful information. By using quantum algorithms to identify patterns in the
data, it may be possible to reduce the amount of information that needs to be stored, while still preserving the relevant details.
Processing Power
Processing all of that information for 1 million players also presents us with a massive headache.
One thing game developers would have to keep in mind is to constantly leverage the hybrid architecture, where a classical computer is used to handle some of the processing tasks while the quantum
processor handles the quantum-specific calculations. This approach could allow for more efficient and reliable processing of the massive amount of data generated by 1 million players.
Quantum annealing could potentially be used to optimize certain aspects of the game, such as pathfinding algorithms or artificial intelligence decision-making. Quantum annealing is a process that
seeks to find the lowest energy state of a given problem, and it can be applied to optimization problems in various domains, including game development.
Data-Oriented Technology Stack (DOTS) architecture, developed by Unity Technologies, focuses on efficiently processing large amounts of data in parallel. This architecture could potentially be
enhanced with quantum computing to further optimize the processing of game data and improve game performance.
Quantum Game AI
I think this is an area where Quantum computing can really make an impact like we haven’t seen before. What if we Leveridge the Power of generative AI like ChatGPT, and quantum computing? Quantum
computing could potentially be used to improve the performance of generative AI like Chat GPT, by making it a real-time AI capable of interacting with the game world. One way this could be achieved
is through the use of a quantum “context delivery system”, which could instantly update the Chat GPT with the latest context and allow it to respond instantly accordingly to any situation within the
prompt engineering would be used to train the Chat GPT to play any character asked of it, allowing for more dynamic and engaging gameplay. The quantum computing power could be used to accelerate the
training and optimization process, allowing for more efficient development of the game AI.
However, there are some considerations to take into account. For example, the quantum hardware required to support such a system would need to be powerful enough to handle the computational demands
of real-time AI and game world interactions. Additionally, there would need to be effective algorithms and software to make the most of the quantum hardware and optimize the performance of the
Furthermore, the quantum context delivery system would need to be able to handle the potentially massive amount of data generated by interactions with the game world and other players in a secure and
efficient manner. Overall, the integration of quantum computing and generative AI in game development has the potential to revolutionize the way we approach game AI and create more immersive and
dynamic gaming experiences.
Procedural generation
This is the area in which Dr. Wootton sees the most potential.
procedural generation is a technique used to create content algorithmically rather than manually. With the power of quantum computing, it is possible to generate vast amounts of content at a scale
that was previously impossible with classical computing.
For example, a game world the size of the United States with incredible detail could be generated using quantum algorithms that analyze data such as satellite images and terrain data to create a
realistic and immersive game environment.
Similarly, an entire galaxy to explore with detailed planets could be generated using quantum algorithms that simulate astronomical phenomena and planetary systems. The level of detail in such a game
world could be incredible, with unique ecosystems and alien species that are procedurally generated based on scientific principles.
we could develop extremely sophisticated evolution and breeding mechanics in games. By simulating complex genetic algorithms and evolutionary processes using quantum computing, we could create games
where creatures or characters can evolve and breed in same way that life on earth does.
On The Shoulders of Giants
Over time, Game developers have become masters at smoke and mirrors, and tricking the player into believing what we want. We can use that power to create seamless hybrid experiences as we continue to
learn and unlock more of the quantum world. We also need more people like Dr. Wootton on the front lines of innovation working on pure quantum games to keep pushing that boundary, and we should
remember the giants whose shoulders we stand on — like Richard Feynman and the team at Bell Labs — and be brave enough to think outside the box and share our ideas with the world.
I hope this has been thought provoking in some way.
Chandler Lane
Game Developer — Software Engineer
|
{"url":"https://chandler-lane.medium.com/quantum-game-development-looking-back-and-then-beyond-the-horizon-of-tomorrow-5047ee428415","timestamp":"2024-11-10T09:29:09Z","content_type":"text/html","content_length":"183358","record_id":"<urn:uuid:5c7fe02a-7627-4cab-85dc-4a06e63c592c>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00836.warc.gz"}
|
CHIP-2021-03: Bigger Script Integers
OWNERS: Jason Dreyzehner, Rosco Kalis, Jonathan Silverblood
DISCUSSION: Bitcoin Cash Research, Telegram
VERSION: 1.0
MILESTONES: Published, Specification, Testnet, Accepted (BU - BCHN - Knuth - Verde - Bitauth), Deployed (May 15th, 2022).
This proposal expands the integer range allowed in BCH contracts (from 32-bit to 64-bit numbers) and re-enables the multiplication opcode (OP_MUL).
Deployment of this specification is proposed for the May 2022 upgrade.
BCH virtual machine (VM) math operations are currently limited to signed 32-bit integers, preventing contracts from operating on values larger than 2147483647 – representing satoshis, this is ~21
BCH. Workarounds which allow contracts to emulate higher-precision math are often impractical, difficult to secure, and significantly increase transaction sizes.
This unusually-low limit on arithmetic operations has been present since the earliest Bitcoin release to avoid standardizing a strategy for overflow handling (though 64-bit math was always used
internally). Since the development of covenants, this unusual overflow-handling strategy has become a source of contract vulnerabilities and a barrier to real-world applications. Few remaining
computing environments operate using 32-bit integers, and this limit has never been relevant to worst-case transaction validation performance.
By allowing contracts to efficiently operate on larger numbers, this proposal enables new use cases, improves contract security, and reduces transaction sizes.
Larger Contract Values
By expanding the upper bound of arithmetic inputs to 9223372036854775807, contracts can efficiently operate on values larger than the total possible satoshi value of any transaction output (approx.
2100000000000000). This enables contracts to manage balances of any size, clearing the way for large, public decentralized applications.
Expanded arithmetic capabilities also enable greater practical use of new payment and financial systems including: scheduled and recurring payments, risk-hedging contracts, synthetic assets,
decentralized exchanges, inheritance and treasury management systems, crowdfunding and crowdmatching applications, loyalty point and token systems, delayed-withdrawal vaults (and other contract
security strategies), and more.
Safer Contracts
This proposal obviates the need for higher-precision math emulation. As a result, existing applications can be simplified, making them easier to develop and review. Additionally, by making arithmetic
overflows less common (many common operations overflow 32 bits, but few approach 64 bits), contracts are less likely to include vulnerabilities or faults which can expose users to attacks and losses.
Reduced Transaction Sizes
Because this proposal allows existing contracts to remove higher-precision math emulation, transactions employing these contracts are reduced in size. This reduces transaction fees for contract
users, and it reduces storage and bandwidth costs for validators.
Costs & Risk Mitigation
The following costs and risks have been assessed.
Modification to Transaction Validation
Modifications to VM limits have the potential to increase worst-case transaction validation costs and expose VM implementations to Denial of Service (DOS) attacks.
Mitigations: migration from 32-bit to 64-bit arithmetic has no impact on worst-case validation performance. (Notably, most implementations already use 64-bit or larger number representations
internally, and overflow-checked 64-bit math is also available natively in most programming languages and computing environments.) Even at a significantly higher practical limit approaching 10,000
operations, 64-bit arithmetic operations are thousands to millions of times less expensive than existing scenarios (varies by environment and implementation).
Node Upgrade Costs
This proposal affects consensus – all fully-validating node software must implement these VM changes to remain in consensus.
These VM changes are backwards-compatible: all past and currently-possible transactions remain valid under these new rules.
Ecosystem Upgrade Costs
Because this proposal only affects internals of the VM, most wallets, block explorers, and other services will not require software modifications for these changes. Only software which offers VM
evaluation (e.g. Bitauth IDE) will be required to upgrade.
Wallets and other services may also upgrade to add support for new contracts which will become possible after deployment of this proposal.
Protocol Implementation Costs
By requiring support for 64-bit math, this proposal could increase the cost of new protocol implementations in unusual programming languages which lack native support for overflow-checked, signed,
64-bit math.
Mitigations: nearly all modern platforms and languages include native support for 64-bit or larger integers. Additionally, transaction output values are already encoded using unsigned, 64-bit
integers, so many BCH software libraries already include support for 64-bit integers.
For example, until recent years, JavaScript supported only 64-bit floating point numbers (IEEE-754). While JavaScript now widely supports BigInt, many older BCH JavaScript libraries still include
big integer polyfills to support encoding of transaction output values without using BigInt. These polyfills typically support 64-bit math, making implementation easier.
Technical Specification
All BCH VM operations which operate on Script Numbers (A.K.A. CSCriptNum) are modified to support values within the expanded range of 8 bytes (64 bits), and OP_MUL (0x95/149) is re-enabled.
Script Number Range
The Script Number format (A.K.A. CSCriptNum) is a consensus-standardized, variable-length, signed integer format used by all VM operations which consume or produce numeric values. In practice, the
allowable range of the Script Number format is limited by the parsing of Script Number values within all VM operations which consume Script Numbers.
Prior to activation of this proposal, Script Number parsing is limited to 4 byte stack values. After activation, Script Number parsing must be limited to 8 byte stack values. This expands the
available range from a minimum of 0xffffffff (-2147483647) and maximum of 0xffffff7f (2147483647) to a minimum of 0xffffffffffffffff (-9223372036854775807) and maximum of 0xffffffffffffff7f
Note: an unusual property of the existing Script Number format reduces its negative range by 1: the Script Number format can hypothetically represent both “positive” 0 (0x, the empty stack item)
and “negative” 0 (0x80) (despite minimal-encoding requirements preventing this in practice). As such, the minimum Script Number which can be represented in 8 bytes is -9223372036854775807 rather
than -9223372036854775808 (the minimum signed 64-bit integer in C-like programming languages).
All operations which consume Script Numbers must immediately fail evaluation if an input is received which exceeds the allowed range. (Note: since 2019-11-15, Script Numbers are also required by
consensus to be minimally encoded in most cases; this rule remains in effect.)
Notice of Possible Future Expansion
While unusual, it is possible to design contracts which rely on the rejection of otherwise-valid Script Numbers which are larger than 8 bytes. Contract authors are advised that future upgrades may
further expand the supported range of BCH VM Script Numbers beyond 8 bytes.
This proposal interprets such failure-reliant constructions as intentional – they are designed to fail unless/until a possible future network upgrade in which larger Script Numbers are enabled (i.e.
a contract branch which can only be successfully evaluated in the event of such an upgrade).
As always, the security of a contract is the responsibility of the entity locking funds in that contract; funds can always be locked in insecure contracts (e.g. OP_DROP OP_1). This notice is
provided to warn contract authors and explicitly codify a network policy: the possible existence of poorly-designed contracts will not preclude future upgrades from further expanding the range of
Script Numbers.
To ensure a contract will always fail when arithmetic results overflow or underflow 8-byte Script Numbers (in the rare case that such a behavior is desirable), that behavior must be either 1)
explicitly validated or 2) introduced to the contract prior to the activation of any future upgrade which expands the range of Script Numbers.
Script Number Test Vectors
Valid Script Numbers
Hex Value
0x (empty) 0
0x01 1
0x02 2
0x03 3
0x7e 126
0x7f 127
0x8000 128
0x8100 129
0x8200 130
0xff00 255
0xfe7f 32766
0xff7f 32767
0x008000 32768
0x018000 32769
0x028000 32770
0xffff00 65535
0xffffff00 16777215
0xfeff7f 8388606
0xffff7f 8388607
0x00008000 8388608
0x01008000 8388609
0x02008000 8388610
0xfeffff7f 2147483646
0xffffff7f 2147483647
0x0000008000 2147483648
0x0100008000 2147483649
0xffffffff7f 549755813887
0x000000008000 549755813888
0xffffffffff7f 140737488355327
0x00000000008000 140737488355328
0xffffffffffff7f 36028797018963967
0x0000000000008000 36028797018963968
0xffffffffffffff7f 9223372036854775807 (maximum)
0xffffffffffffffff -9223372036854775807 (minimum)
0xfeffffffffffffff -9223372036854775806
0xffffffffffffff -36028797018963967
0xffffffffffff -140737488355327
0xffffffffff -549755813887
0xffffffff -2147483647
0xfeffffff -2147483646
0xfdffffff -2147483645
0xffffff80 -16777215
0x01008080 -8388609
0x00008080 -8388608
0xffffff -8388607
0xfeffff -8388606
0xfdffff -8388605
0xffff80 -65535
0x018080 -32769
0x008080 -32768
0xffff -32767
0xfeff -32766
0xfdff -32765
0xff80 -255
0x8180 -129
0x8080 -128
0xff -127
0xfe -126
0xfd -125
0x82 -2
0x81 -1
Invalid Script Numbers
Hex Error
0x000000000000008000 9223372036854775808 exceeds the maximum Script Number.
0x000000000000008080 -9223372036854775808 is less than the minimum Script Number.
0x00 Non-minimal encoding (for 0x/0)
0x0000 Non-minimal encoding (for 0x/0)
0x80 Non-minimal encoding (for 0x/0)
0x0080 Non-minimal encoding (for 0x/0)
0x0180 Non-minimal encoding (for 0x81/-1)
0x010080 Non-minimal encoding (for 0x81/-1)
0x01000080 Non-minimal encoding (for 0x81/-1)
0x0100000080 Non-minimal encoding (for 0x81/-1)
0xffffffffffff0080 Non-minimal encoding (for 0xffffffffffff80/-281474976710655)
Arithmetic Operation Overflows
All arithmetic VM operations must use (C-like) signed, 64-bit integer operations with overflow detection (e.g. using the X86-64 GNU C Integer Overflow Builtins, __builtin_ssubll_overflow,
__builtin_saddll_overflow, and __builtin_smulll_overflow, for OP_SUB, OP_ADD, and OP_MUL, respectively). If an operation overflows or underflows, the operation must immediately fail evaluation.
Additionally, operations which produce precisely the minimum value (-9223372036854775808) – requiring 9 bytes to be encoded as a Script Number – must immediately fail evaluation. (Implementation
note: this error can be enforced during Script Number re-encoding.)
Re-Enable Multiplication (OP_MUL)
The OP_MUL multiplication operation is re-enabled (at 0x95/149, its original codepoint). OP_MUL performs C-style, overflow-checked, integer multiplication (e.g. using the X86-64 GNU C Integer
Overflow Builtins, __builtin_smulll_overflow).
This section documents design decisions made in this specification.
Inclusion of OP_MUL
The OP_MUL operation was excluded from the upgrade restoring disabled opcodes (May 2018) because a solution for handling overflows was not yet decided; it was expected that OP_MUL would be re-enabled
during another upgrade which expanded the accepted range of Script Number arithmetic inputs.
Because this proposal offers a solution for arithmetic underflows and overflows, OP_MUL is no longer blocked. Re-activation is included directly in this proposal because the two changes are strongly
connected and will benefit from a combined review.
Alternative Overflow Behavior
Until this proposal, overflows have only been prevented indirectly: VM implementations typically employ signed 64-bit integers internally, and because numeric inputs to all operations have been
limited to 4-byte Script Numbers, no operations are capable of producing Script Number results larger than 5 bytes. (While inputs are constrained to 4 bytes, 5-byte results are allowed.) With this
proposal, overflows would be handled explicitly: they cause an evaluation to immediately fail.
Alternatively, this proposal could attempt to maintain the previous “undefined” overflow behavior, where overflows aren’t explicitly handled by the VM. However, that behavior would require a much
less efficient implementation: to support 64-bit multiplication, VM implementations would be required to use at least 128-bit arithmetic internally (while still preventing contracts from using inputs
larger than 64 bits).
To demonstrate, the maximum 64-bit/8-byte input 0xffffffffffffffff (18446744073709551615), multiplied by itself is 0xfffffffffffffffe0000000000000001 (340282366920938463426481119284349108225),
which requires 128-bits/16 bytes to represent.
The overflow handling behavior implemented by this proposal is both more common (among popular programming languages and computing environments) and more efficient than the existing undefined
overflow handling strategy. Additionally, this proposal’s explicit overflow handling strategy also enables potential future operations (e.g. exponentiation) to be enabled using simple, common
Limiting Arithmetic Operations to 8-byte Script Numbers
This proposal limits all inputs and outputs of arithmetic operations to the range which can be encoded in 8-byte Script Numbers. This range is nearly equivalent to the range of signed, 64-bit
integers (excluding only one value), and in the positive range (9223372036854775807 maximum) significantly exceeds the largest possible value of any transaction output: ~2100000000000000 (21 million
BCH * 100 million satoshis).
Because signed, 64-bit integer arithmetic is natively implemented in most computing environments, this limit also offers practically-equivalent worst-case performance vs. the existing 4-byte Script
Number limitation. (Notably, derivatives of the Satoshi implementation already use 64-bit numbers internally, but enforce the lower 4-byte limit to prevent overflows.) As such, an 8-byte limit
significantly expands the functionality of the VM without impacting worst-case transaction validation costs or VM implementation complexity.
A future upgrade which adds support for significant subdivision of satoshi values could create demand for Script Numbers with a greater range than 8 bytes. However, given a maximum possible satoshi
supply of ~2100000000000000, the 9223372036854775807 maximum provides ample room for 1/1000th satoshi subunits before representing even the largest balances might require arithmetic workarounds. And
even in these cases, many contracts will be able to either 1) emulate support for larger arithmetic operations using multi-step computations, or 2) operate on rounded values for very large numbers.
Given this additional flexibility, the 8-byte limit is likely sufficient until a distant future upgrade.
Note: a popular BCH token protocol, Simple Ledger Protocol (SLP), technically allows tokens to be created with much greater divisibility than BCH – BCH supports 8 decimal places (satoshis), while
SLP tokens can support up to 18. This proposal does not consider these unusual cases to currently warrant a greater arithmetic range: divisibility beyond that of BCH is unlikely to be practically
useful in commerce, and if satoshis become insufficiently divisible in the distant future, arithmetic range can be increased at the same time as divisibility.
Users of higher-level protocols like SLP who intend to employ VM arithmetic are advised to target an arithmetic range less than or equal to satoshis for maximum contracting flexibility.
Continued Separation of Cryptographic and Arithmetic Operations
Past proposals have suggested larger arithmetic limits in an effort to support number sizes useful to cryptosystems. While deeper analysis indicates that larger arithmetic limits are unlikely to be
useful in implementing new cryptosystems, such larger limits could negatively impact transaction validation costs.
In short, BCH VM cryptographic operations do not operate on numbers: they are high-level APIs which operate on data structures representing (sets of) numbers. Compatibility between arithmetic and
cryptographic operations would be complex and likely introduce performance bottlenecks.
Note, limiting arithmetic inputs to 8-byte Script Numbers does not prevent larger numbers from being represented and used elsewhere in the BCH VM instruction set. (In fact, larger numbers are
already in use within signatures and public keys.) Future proposals could introduce new operations specifically designed to perform mathematical operations on cryptographic data structures
(including greater than 8-byte Script Numbers).
Inclusion of Future Expansion Notice
The Notice of Possible Future Expansion is included in this specification to avert future controversy by documenting the proposal’s intent with respect to future (not-yet specified) upgrades: the BCH
VM is not guaranteed to forever limit Script Numbers to 8 bytes.
If this were not clarified, any future Script Number upgrade proposals could be more easily mischaracterize by publicizing deposits made to contracts that are intentionally designed to rely on the
8-byte overflow behavior. With this notice, such misdirection might be more easily identified as disingenuous.
Disallowance of 9-byte Script Numbers from Arithmetic Results
Only one 9-byte Script Number can be represented within the signed 64-bit integer range to be used by VM arithmetic operations after activation of this proposal: -9223372036854775808. This precise
value is disallowed (by limiting all Script Number arithmetic inputs and outputs to 8 bytes) to simplify both VM implementation and contract security analysis.
If this 9-byte value were allowed in arithmetic results, it would break the assumption that all valid arithmetic results are also valid arithmetic inputs. In some covenants, this could present a
subtle exploit: if an attacker can force the contract to somehow retain this precise 9-byte result, the attacker could place the covenant in an unintended state, preventing the 9-byte result from
being successfully passed into other arithmetic operations. Furthermore, analyzing contracts for this vulnerability requires detailed information about the possible numeric ranges of arithmetic
inputs and outputs, creating an unnecessary burden for static contract analysis.
For example, if the 9-byte value were allowed, the script <-9223372036854775807> OP_1SUB OP_1ADD would successfully produce the 9-byte value after OP_1SUB, but the resulting 9-byte Script Number
would be rejected by OP_1ADD. Implementations could add a special case for handling this particular signed 64-bit integer, 9-byte Script Number, but the corresponding positive number
(9223372036854775808) is also not representable as a signed 64-bit integer (in most computing environments), so an operation like <-9223372036854775808> OP_NEGATE would also overflow.
(in progress)
Test Cases
(in progress)
Evaluation of Alternatives
Alternative designs for several components of this proposal have been documented in the Rationale section, and several past proposals have also informed the design of this proposal:
128-bit Integers
An earlier proposal for 128-bit integers would also enable up to 128-bit arithmetic operations. The larger 128-bit range may impact worst-case validation performance, and implementation is likely to
be more complex in many computing environments.
While future proposals could further expand the range of VM arithmetic operations, 64-bit math is likely sufficient even for operation on 1/1000th “fractional satoshis”, so further expansion requires
additional research.
NextChain BigNum
NextChain BigNum would enable up to 4096-bit integer arithmetic, add the OP_SETBMD, OP_BIN2BIGNUM, and OP_BIGNUM2BIN operations, and introduce “type” information to all stack items (with a new BigNum
Implementation of NextChain BigNum is notably more complex than other proposals, and it is unclear whether support for larger arithmetic inputs would have practical applications (see Continued
Separation of Cryptographic and Arithmetic Operations).
Primary Stakeholders
At least five primary stakeholder groups exist:
Node Developers
At least six node implementations must be upgraded:
Library Developers
At least five libraries must be upgraded:
Contract Developers
Contract developers affected by existing limits include:
Node Operators & Miners
(in progress)
These individuals and organizations have invested in the BCH currency and ecosystem on the premise that it can become peer to peer electronic cash for the world. These stakeholders expect the token
to serve effectively as money, including in the innovative financial services which could be enabled by expanded arithmetic support.
Node Developers
(in progress)
Library Developers
(in progress)
Contract Developers
(in progress)
General Protocols
Developing workarounds to this limitation has cost General Protocols a large amount of time and money. This is still ongoing as the added complexity makes further smart contract changes more
difficult. Because of the required code to address this, there are also other contract features that do not fit within the contract bytecode size limits.
Node Operators & Miners
(in progress)
(in progress)
This section summarizes the evolution of this document.
• v1.0 – 2021-6-9 (current)
□ Completed technical specification
□ Added Rationale section, revised supporting material (Summary, Deployment, Motivation, Benefits, Evaluation of Alternatives, etc.)
• v0 – 2021-2-21 (32e9d5ed)
Copyright Notice
Copyright © 2021 GeneralProtocols / Research
Permission is granted to copy, distribute and/or modify this document under the terms of the MIT license.
|
{"url":"https://reference.cash/protocol/forks/chips/2022-05-bigger-script-integers","timestamp":"2024-11-01T22:55:22Z","content_type":"text/html","content_length":"80283","record_id":"<urn:uuid:d19d453c-e161-423e-90a9-579cbd00db12>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00195.warc.gz"}
|
Daily Themed Crossword August 15 2024 Crossword Clue
Daily Themed Crossword August 15 2024
The Daily Themed Crossword Aug 15 2024 Answers were just published after we played around with it and solved today’s puzzle in a timely matter. This puzzle special in the sense that everyday it
allows you to play with a different theme hence the name Daily Themed Crossword. The clues are grouped by their orientation on the puzzle grid and by their number.
For another Daily Themed Crossword Answers go in the homepage.
|
{"url":"https://dailythemedcrosswordsanswers.com/daily-themed-crossword-august-15-2024/","timestamp":"2024-11-02T02:22:57Z","content_type":"text/html","content_length":"108146","record_id":"<urn:uuid:ec3e19d9-cdfb-48d0-9ebc-eb9b7aeff487>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00639.warc.gz"}
|
Encrypt and decrypt using PyCrypto AES-256 python - Paperbun
In this article, we will see how to use PyCrypto to encrypt and decrypt data using AES-256.
We will use the AES CBC mode which is a popular encryption algorithm used for secure communication and data protection to encrypt & decrypt the data.
Install the PyCrypto library using the following command
pip install pycrypto
Following is the example program for encrypting and decrypting the data using PyCrypto.
# Import the required modules
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad
from Crypto.Random import get_random_bytes
from base64 import b64encode, b64decode
# Define the encryption function
def encrypt_AES_CBC_256(key, message):
key_bytes = key.encode('utf-8')
message_bytes = message.encode('utf-8')
iv = get_random_bytes(AES.block_size)
cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
padded_message = pad(message_bytes, AES.block_size)
ciphertext_bytes = cipher.encrypt(padded_message)
ciphertext = b64encode(iv + ciphertext_bytes).decode('utf-8')
return ciphertext
# Define the decryption function
def decrypt_AES_CBC_256(key, ciphertext):
key_bytes = key.encode('utf-8')
ciphertext_bytes = b64decode(ciphertext)
iv = ciphertext_bytes[:AES.block_size]
cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
ciphertext_bytes = ciphertext_bytes[AES.block_size:]
decrypted_bytes = cipher.decrypt(ciphertext_bytes)
plaintext_bytes = unpad(decrypted_bytes, AES.block_size)
plaintext = plaintext_bytes.decode('utf-8')
return plaintext
# Set the 256-bit key and plaintext message
key = 'ThisIsASecretKeyForAES-256-CBCEncryption'
message = 'This is a secret message that needs to be encrypted.'
# Encrypt the message
encrypted_message = encrypt_AES_CBC_256(key, message)
# Decrypt the message
decrypted_message = decrypt_AES_CBC_256(key, encrypted_message)
# Print the original and decrypted messages
print('Original Message:', message)
print('Encrypted Message:', encrypted_message)
print('Decrypted Message:', decrypted_message)
The AES module provides the implementation of the AES encryption algorithm. The Padding module provides functions for adding and removing padding from messages. The get_random_bytes function
generates cryptographically secure random bytes, which we will use to generate the IV (initialization vector) for CBC mode and the base64 module provides functions for encoding and decoding messages
in base64 format.
The encrypt_AES_CBC_256 function takes two arguments: key and message. The key argument is the 256-bit key that will be used for encryption and the message is the plain text that we want to encrypt.
We first encode the key and message as bytes and generate the random IV (initialization vector) using the get_random_bytes function. The AES cipher is created in CBC mode using the AES.new method,
with the IV as the initialization parameter. After encrypting the padded message, the resulting cipher text is combined with the IV and encoded using base64.
In the decryption function, we first decode the key and ciphertext as bytes and then decode the IV from the ciphertext. Using the AES.new method, we create the AES cipher in the CBC mode using
key_bytes and the IV and then pass the encrypted data to the cipher.decrypt function to decrypt it.
|
{"url":"https://paperbun.org/encrypt-and-decrypt-using-pycrypto-aes-256-python/?amp=1","timestamp":"2024-11-10T19:01:17Z","content_type":"text/html","content_length":"109675","record_id":"<urn:uuid:360913d6-a382-4493-aa61-d851674964e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00290.warc.gz"}
|
This function is a method for the generic function plot() for class "lda". It can be invoked by calling plot(x) for an object x of the appropriate class, or directly by calling plot.lda(x) regardless
of the class of the object.
The behaviour is determined by the value of dimen. For dimen > 2, a pairs plot is used. For dimen = 2, an equiscaled scatter plot is drawn. For dimen = 1, a set of histograms or density plots are
drawn. Use argument type to match "histogram" or "density" or "both".
|
{"url":"https://www.rdocumentation.org/packages/MASS/versions/7.3-56/topics/plot.lda","timestamp":"2024-11-14T12:31:06Z","content_type":"text/html","content_length":"62289","record_id":"<urn:uuid:eabb7d40-168c-407a-95c2-a1f336d4be93>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00527.warc.gz"}
|
Sphere Calc
The Sphere Calculator contains equations for volumes, surface areas, and moments of inertia for objects shaped like a geometric sphere. The user can enter basic dimensions and the calculator returns
the values. Note, different units are available for input and outputs of the equations.
Sphere Calculators:
1. Sphere Surface Area based radius (r)
2. Sphere Volume based radius (r)
3. Sphere Weight (Mass) based on volume and mean density (mD)
4. Sphere Cap Surface Area
5. Sphere Cap Volume
6. Sphere Cap Weight (Mass)
7. Sphere Segment Volume
8. Sphere Segment Weight (Mass)
9. Sphere Segment Wall Surface Area (without the circular top and bottom ends)
10. Sphere Segment Full Surface Area (with the top and bottom circles, aka ends).
11. moment of inertia of a spherical shaped object based around the central axis (diameter)
12. moment of inertia of a spherical shaped object based around the edge of the sphere
13. Volume of Spherical Shell
14. Mass of Spherical Shell
15. mean density of common substances (useful in calculating the mass/weight and the moments of inertia)
The Sphere
A sphere (from Greek — sphaira, "globe, ball") is a perfectly round geometrical and circular object in three-dimensional space that resembles the shape of a completely round ball. Like a circle,
which, in geometric contexts, is in two dimensions, a sphere is defined mathematically as the set of points that are all the same distance r from a given point in three-dimensional space. This
distance r is the radius of the sphere, and the given point is the center of the sphere. The maximum straight distance through the sphere passes through the center and is thus twice the radius; it is
the diameter.
In mathematics, a distinction is made between the sphere (a two-dimensional closed surface embedded in three-dimensional Euclidean space) and the ball (a three-dimensional shape that includes the
interior of a sphere).
Pairs of points on a sphere that lie on a straight line through the sphere's center are called antipodal points. A great circle is a circle on the sphere that has the same center and radius as the
sphere and consequently divides it into two equal parts. The shortest distance along the surface between two distinct non-antipodal points on the surface is on the unique great circle that includes
the two points. Equipped with the great-circle distance, a great circle becomes the Riemannian circle.
If a particular point on a sphere is (arbitrarily) designated as its north pole, then the corresponding antipodal point is called the south pole, and the equator is the great circle that is
equidistant to them. Great circles through the two poles are called lines (or meridians) of longitude, and the line connecting the two poles is called the axis of rotation. Circles on the sphere that
are parallel to the equator are lines of latitude. This terminology is also used for such approximately spheroidal astronomical bodies as the planet Earth.
Any plane that includes the center of a sphere divides it into two equal hemispheres. Any two intersecting planes that include the center of a sphere subdivide the sphere into four lunes or biangles,
the vertices of which all coincide with the antipodal points lying on the line of intersection of the planes.
The antipodal quotient of the sphere is the surface called the real projective plane, which can also be thought of as the northern hemisphere with antipodal points of the equator identified.
The round hemisphere is conjectured to be the optimal (least area) filling of the Riemannian circle.
The circles of intersection of any plane not intersecting the sphere's center and the sphere's surface are called spheric sections
Eleven Properties of a sphere
In their book Geometry and the imagination^ David Hilbert and Stephan Cohn-Vossen describe eleven properties of the sphere and discuss whether these properties uniquely determine the sphere. Several
properties hold for the plane, which can be thought of as a sphere with infinite radius. These properties are:
1. The points on the sphere are all the same distance from a fixed point. Also, the ratio of the distance of its points from two fixed points is constant.
The first part is the usual definition of the sphere and determines it uniquely. The second part can be easily deduced and follows a similar result of Apollonius of Perga for the circle. This
second part also holds for the plane.
2. The contours and plane sections of the sphere are circles.
This property defines the sphere uniquely.
3. The sphere has constant width and constant girth.
The width of a surface is the distance between pairs of parallel tangent planes. Numerous other closed convex surfaces have constant width, for example the Meissner body. The girth of a surface
is the circumference of the boundary of its orthogonal projection on to a plane. Each of these properties implies the other.
4. All points of a sphere are umbilics.
At any point on a surface a normal direction is at right angles to the surface because the sphere these are the lines radiating out from the center of the sphere. The intersection of a plane that
contains the normal with the surface will form a curve that is called a normal section, and the curvature of this curve is the normal curvature. For most points on most surfaces, different
sections will have different curvatures; the maximum and minimum values of these are called the principal curvatures. Any closed surface will have at least four points called umbilical points. At
an umbilic all the sectional curvatures are equal; in particular the principal curvatures are equal. Umbilical points can be thought of as the points where the surface is closely approximated by
a sphere.
For the sphere the curvatures of all normal sections are equal, so every point is an umbilic. The sphere and plane are the only surfaces with this property.
5. The sphere does not have a surface of centers.
For a given normal section exists a circle of curvature that equals the sectional curvature, is tangent to the surface, and the center lines of which lie along on the normal line. For example,
the two centers corresponding to the maximum and minimum sectional curvatures are called the focal points, and the set of all such centers forms the focal surface.
For most surfaces the focal surface forms two sheets that are each a surface and meet at umbilical points. Several cases are special:
□ For channel surfaces one sheet forms a curve and the other sheet is a surface
□ For cones, cylinders, tori and cyclides both sheets form curves.
□ For the sphere the center of every osculating circle is at the center of the sphere and the focal surface forms a single point. This property is unique to the sphere.
6. All geodesics of the sphere are closed curves.
Geodesics are curves on a surface that give the shortest distance between two points. They are a generalization of the concept of a straight line in the plane. For the sphere the geodesics are
great circles. Many other surfaces share this property.
7. Of all the solids having a given volume, the sphere is the one with the smallest surface area; of all solids having a given surface area, the sphere is the one having the greatest volume.
It follows from isoperimetric inequality. These properties define the sphere uniquely and can be seen in soap bubbles: a soap bubble will enclose a fixed volume, and surface tension minimizes its
surface area for that volume. A freely floating soap bubble therefore approximates a sphere (though such external forces as gravity will slightly distort the bubble's shape).
8. The sphere has the smallest total mean curvature among all convex solids with a given surface area.
The mean curvature is the average of the two principal curvatures, which is constant because the two principal curvatures are constant at all points of the sphere.
9. The sphere has constant mean curvature.
The sphere is the only imbedded surface that lacks boundary or singularities with constant positive mean curvature. Other such immersed surfaces as minimal surfaces have constant mean curvature.
10. The sphere has constant positive Gaussian curvature.
Gaussian curvature is the product of the two principal curvatures. It is an intrinsic property that can be determined by measuring length and angles and is independent of how the surface is
embedded in space. Hence, bending a surface will not alter the Gaussian curvature, and other surfaces with constant positive Gaussian curvature can be obtained by cutting a small slit in the
sphere and bending it. All these other surfaces would have boundaries, and the sphere is the only surface that lacks a boundary with constant, positive Gaussian curvature. The pseudosphere is an
example of a surface with constant negative Gaussian curvature.
11. The sphere is transformed into itself by a three-parameter family of rigid motions.
Rotating around any axis a unit sphere at the origin will map the sphere onto itself. Any rotation about a line through the origin can be expressed as a combination of rotations around the
three-coordinate axis (see Euler angles). Therefore a three-parameter family of rotations exists such that each rotation transforms the sphere onto itself; this family is the rotation group SO
(3). The plane is the only other surface with a three-parameter family of transformations (translations along the x and y axis and rotations around the origin). Circular cylinders are the only
surfaces with two-parameter families of rigid motions and the surfaces of revolution and helicoids are the only surfaces with a one-parameter family.
See Also
• Geometry (3D) - Calculator associated with various shapes in three dimensions.
• Mass / Weight - Calculator with formulas for the mass and weight of various 3D objects
• Moments of Inertia - Calculator with formulas for the moments of inertia for different objects about different axes
• Haversine - Great circle arc distance equation between points on a sphere
• Cone Calculator - Equations related to conic shaped objects.
• Cylinder Calculator - Equations for cylinders
• Sphere Calculator - Equations related to spheres.
• Circle Calculator - Equations related to circles
• Ellipse Calculator - Equations related to ellipses
• Frustum - Equations related to the frustums of various objects
• Wikipedia - http://en.wikipedia.org/wiki/Sphere
1. ^ Hilbert, David; Cohn-Vossen, Stephan (1952). Geometry and the Imagination (2nd ed.). Chelsea. ISBN 0-8284-1087-9.
|
{"url":"https://www.vcalc.com/wiki/sphere-calculator","timestamp":"2024-11-06T15:33:04Z","content_type":"text/html","content_length":"75089","record_id":"<urn:uuid:c9b32fa8-debc-438f-9539-8abdb2f45797>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00003.warc.gz"}
|
Distribution may refer to:
In mathematics, science, and technology
In mathematics
□ Distribution (mathematics), generalized functions used to formulate solutions of partial differential equations
□ Probability distribution, the probability of a particular values or value range of a variable
□ Frequency distribution, a list of the values recorded in a sample
□ Inner distribution and outer distribution, in coding theory
□ Distribution (differential geometry), a subset of the tangent bundle of a manifold
□ Distributed parameter system, systems that have an infinite-dimensional state-space
□ Distribution of terms, a situation in which all members of a category are accounted for
□ Distributivity, is a property of binary operations that generalises the distributive law from elementary algebra
In science
□ Complementary distribution, in linguistics, a relationship between elements found in opposite environments
□ Distribution (pharmacology), the transfer of a drug within the body
□ Distribution function, in physics, the number of particles per unit volume in phase space
□ Population distribution, the geographical area in which a species lives
□ Spectral power distribution, in color science, the power per unit area per unit wavelength of an illumination
□ Trip distribution, part of the four-step transportation forecasting model
In technology and computer science
□ Electricity distribution, the final stage in the delivery of electricity
□ Electronic brakeforce distribution, an automotive technology that varies brake force based on prevailing conditions
□ Distributed computing, in which a program is run on multiple networked computers
□ Software distribution, a bundle of a specific software already compiled and configured
□ Distribution (concurrency), the projection operator in a history monoid, a representation of the histories of concurrent computer processes
□ Key distribution center, part of a cryptosystem intended to reduce the risks inherent in exchanging keys
□ Content distribution, publishing and web design as method to provide information
□ Digital distribution, publishing media digitally
□ Distribution of elements in the distributed element model of electric circuits
□ distributed computing, the coordinated use of physically distributed computers.
□ A specific packaging of an operating system containing components such as the kernel, a toolchain, utilities and other software. The most common use in this context is for Linux distributions
In economics
Other uses
□ Film distributor, an agent between a film producer and an exhibitor
□ Purse distribution, in a horse race, the distribution of winnings among the highest finishers
□ Record distribution, process of shipping and promoting record labels
□ Distribution Select, a Canadian record and video label
See also
This disambiguation page lists articles associated with the same title.
If an internal link led you here, you may wish to change the link to point directly to the intended article.
Wikimedia Foundation. 2010.
|
{"url":"https://en-academic.com/dic.nsf/enwiki/947987","timestamp":"2024-11-06T04:30:59Z","content_type":"text/html","content_length":"43238","record_id":"<urn:uuid:4318f310-ab41-4294-b85a-f5fbabd16dcc>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00583.warc.gz"}
|
EViews Help: threshold
threshold Equation Methods
Estimation by discrete or smooth threshold least squares, including threshold autoregression.
eq_name.threshold(options) y z1 [z2 z3 ...] [@nv x1 x2 x3 ...] @thresh t1 [t2 t3 ...]
List the dependent variable first, followed by a list of the independent variables that have coefficients that are allowed to vary across threshold, followed optionally by the keyword @nv and a list
of non-varying coefficient variables.
List a threshold variable or variables (for model selection) or a single integer or range pairs after the keyword @thresh. The integer or range pairs indicate a self-exciting model with the lagged
dependent variable as the threshold variable.
For smooth threshold equations you may specify variables that are to be included only in the base specification or only in the alternative specification. Base-only variables should be specified in
parentheses using the @base key, as in “@base(x1) @base(x2) @base(x3 x4)”. Alternative-only variables may be specified analogously using the @alt key.
Specification Options
type=arg (default=“discrete”) Type of threshold estimation: “discrete” (discrete), “smooth” (smooth).
Discrete Threshold Options
method=arg (default=“seqplus1”) Threshold selection method: “seqplus1” (sequential tests of single
nthresh=arg (default=1) Number of thresholds for fixed number threshold selection methods.
Sub-method setting (options depend on “method=”).
select=arg (1) if “method=glob”: Sequential ("seq") (default), Highest significant ("high"),
(2) if “method=globinfo”: Schwarz criterion (“bic” or “sic”) (default), Liu-Wu-Zidek criterion (“lwz”).
trim=arg (default=5) Trimming percentage for determining minimum segment size (5, 10, 15, 20, 25).
maxthresh=integer (default=5) Maximum number of thresholds to allow (not applicable if “method=seqall”).
maxlevels=integer (default=5) Maximum number of threshold levels to consider in sequential testing (applicable when “method=sequall”).
size=arg (default=5) Test sizes for use in sequential determination and final test evaluation (10, 5, 2.5, 1) corresponding to 0.10, 0.05, 0.025, 0.01, respectively
heterr Assume regimes specific error distributions in variance computation.
commondata Assume a common distribution for the data across segments (only applicable if original equation is estimated with a robust covariance method, “heterr” is not
Smooth Threshold Options
smoothtrans=arg Smooth threshold transition function: “logistic” (logistic), “logistic2” (second-order logistic), “exponential” (exponential), “normal” (normal).
smoothstart=arg Smoth threshold starting value method: or fixed number threshold selection methods: “grid_conc” (grid search with concentrated regression coefficients”, “grid_zeros” (grid
(default=“grid_conc”) search with zero regression coefficients), “data” (data-based), “user” (user-specified using the contents of the coefficient vector in the workfile).
Sub-method setting (options depend on “method=”).
smoothst=arg (1) if “method=glob”: Sequential ("seq") (default), Highest significant ("high"),
(2) if “method=globinfo”: Schwarz criterion (“bic” or “sic”) (default), Liu-Wu-Zidek criterion (“lwz”).
General Options
w=arg Weight series or expression.
wtype=arg (default Weight specification type: inverse standard deviation (“istdev”), inverse variance (“ivar”), standard deviation (“stdev”), variance (“var”).
Weight scaling: EViews default (“eviews”), average (“avg”), none (“none”).
The default setting depends upon the weight type: “eviews” if “wtype=istdev”, “avg” for all others.
cov=keyword Covariance type (optional): “white” (White diagonal matrix), “hac” (Newey-West HAC).
nodf Do not perform degree of freedom corrections in computing coefficient covariance matrix. The default is to use degree of freedom corrections.
covlag=arg Whitening lag specification: integer (user-specified lag value), “a” (automatic selection).
covinfosel=arg Information criterion for automatic selection: “aic” (Akaike), “sic” (Schwarz), “hqc” (Hannan-Quinn) (if “lag=a”).
covmaxlag=integer Maximum lag-length for automatic selection (optional) (if “lag=a”). The default is an observation-based maximum of
covkern=arg Kernel shape: “none” (no kernel), “bart” (Bartlett, default), “bohman” (Bohman), “daniell” (Daniel), “parzen” (Parzen), “parzriesz” (Parzen-Riesz), “parzgeo” (Parzen-Geometric),
(default=“bart”) “parzcauchy” (Parzen-Cauchy), “quadspec” (Quadratic Spectral), “trunc” (Truncated), “thamm” (Tukey-Hamming), “thann” (Tukey-Hanning), “tparz” (Tukey-Parzen).
covbw=arg (default Kernel Bandwidth: “fixednw” (Newey-West fixed), “andrews” (Andrews automatic), “neweywest” (Newey-West automatic), number (User-specified bandwidth).
covnwlag=integer Newey-West lag-selection parameter for use in nonparametric kernel bandwidth selection (if “covbw=neweywest”).
covbwint Use integer portion of bandwidth.
prompt Force the dialog to appear from within a program.
p Print basic estimation results.
equation eq1.threshold(method=fixedseq, type=discrete) ss_transf c ss_transf(-1 to -11) @thresh 2
uses the fixed number of thresholds test to determine the optimal threshold in a model regressing SS_TRANSF on the threshold variables C and SS_TRANSF(-1 to -11).
equation eq2.threshold(method=fixedseq, type=discrete) ss_transf c ss_transf(-1 to -11) @thresh 1 5
uses the fixed number of thresholds test to determine the optimal threshold and does model selection over lags of SS_TRANSF from SS_TRANSF(-1) to SS_TRANSF(-5).
equation eq3.threshold(method=user, threshold=7.44) ss_transf c @nv ss_transf(-1 to -11) @thresh 2
estimates the model with one user-specified threshold value. In addition, the variables SS_TRANSF(-1 to -11) are restricted to have common coefficients across the regimes.
“Discrete Threshold Regression”
“Smooth Transition Regression”
for a discussion of the various forms of threshold models.
|
{"url":"https://help.eviews.com/content/equationcmd-threshold.html","timestamp":"2024-11-11T21:10:56Z","content_type":"application/xhtml+xml","content_length":"46680","record_id":"<urn:uuid:281bf036-4dc3-4250-887c-80fc1c676e0c>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00821.warc.gz"}
|
If a spring has a constant of 6 kgs^-2, how much work will it take to extend the spring by 67 cm ? | Socratic
If a spring has a constant of #6# #kgs^-2#, how much work will it take to extend the spring by #67# # cm #?
1 Answer
The work done is equal to the change in spring potential energy: ${E}_{p} = \frac{1}{2} k {x}^{2} = \frac{1}{2} \cdot 6 \cdot {0.67}^{2} = 1.35$$J$.
There's probably not a lot more explanation needed. The work required to extend the spring is stored as spring potential energy.
We could also calculate this using $W = F d$, but because the force changes as the distance changes, this requires integrating the force over the distance, so the spring potential energy method is
Impact of this question
1434 views around the world
|
{"url":"https://socratic.org/questions/if-a-spring-has-a-constant-of-6-kg-s-2-how-much-work-will-it-take-to-extend-the--2#225985","timestamp":"2024-11-07T01:13:17Z","content_type":"text/html","content_length":"33205","record_id":"<urn:uuid:7b4bb64a-71a5-4755-9923-fa27e9af7cf8>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00668.warc.gz"}
|
[out,csi] = lteEqualizeMMSE(rxgrid,channelest,noiseest) returns equalized data in multidimensional array, out. MMSE equalization is applied to the received data resource grid in the matrix, rxgrid,
using the channel information in the channelest matrix. noiseest is an estimate of the received noise power spectral density.
Alternatively, the input channelest can be provided as a 3-D array of size NRE-by-NRxAnts-by-P, and the input rxgrid can be provided as a matrix of size NRE-by-NRxAnts. In this case, the first two
dimensions have been reduced to one dimension by appropriate indexing through the frequency and time locations of the resource elements of interest, typically for a single physical channel. The
outputs, out and csi, are of size (N×M)-by-P.
Equalize MMSE for RMC R.4
Equalize the received signal for RMC R.4 after channel estimation. Use the MMSE equalizer.
Create cell-wide configuration structure and generate transmit signal. Configure propagation channel.
enb = lteRMCDL('R.4');
[txSignal,~,info] = lteRMCDLTool(enb,[1;0;0;1]);
chcfg.DelayProfile = 'EPA';
chcfg.NRxAnts = 1;
chcfg.DopplerFreq = 70;
chcfg.MIMOCorrelation = 'Low';
chcfg.SamplingRate = info.SamplingRate;
chcfg.Seed = 1;
chcfg.InitPhase = 'Random';
chcfg.InitTime = 0;
txSignal = [txSignal; zeros(15,1)];
N = length(txSignal);
noise = 1e-3*complex(randn(N,chcfg.NRxAnts),randn(N,chcfg.NRxAnts));
rxSignal = lteFadingChannel(chcfg,txSignal)+noise;
Perform synchronization and OFDM demodulation.
offset = lteDLFrameOffset(enb,rxSignal);
rxGrid = lteOFDMDemodulate(enb,rxSignal(1+offset:end,:));
Create channel estimation configuration structure and perform channel estimation.
cec.FreqWindow = 9;
cec.TimeWindow = 9;
cec.InterpType = 'Cubic';
cec.PilotAverage = 'UserDefined';
cec.InterpWinSize = 3;
cec.InterpWindow = 'Causal';
[hest,noiseEst] = lteDLChannelEstimate(enb, cec, rxGrid);
Equalize and plot received and equalized grids.
eqGrid = lteEqualizeMMSE(rxGrid, hest, noiseEst);
title('Received grid')
xlabel('OFDM symbol')
title('Equalized grid')
xlabel('OFDM symbol')
Equalize MMSE for RMC R.5
This example applies MMSE equalization on the received signal for reference measurement channel (RMC) R.5, after channel estimation.
Set the DL reference measurement channel to R.5
Set channel estimator configuration PilotAverage field to UserDefined. as follows: averaging window of 9 resource elements in both frequency and time domain, cubic interpolation with a casual window.
cec = struct('FreqWindow',9,'TimeWindow',9,'InterpType','cubic');
cec.PilotAverage = 'UserDefined';
cec.InterpWinSize = 1;
cec.InterpWindow = 'Causal';
Generate the txWaveform.
txWaveform = lteRMCDLTool(enb,[1;0;0;1]);
n = length(txWaveform);
Apply some random noise to the transmitted signal and save as the rxWaveform.
rxWaveform = repmat(txWaveform,1,2)+complex(randn(n,2),randn(n,2))*1e-3;
Next, demodulate the received data.
rxGrid = lteOFDMDemodulate(enb,rxWaveform);
Then, perform channel estimation.
[hest,n0] = lteDLChannelEstimate(enb,cec,rxGrid);
Finally, apply the MMSE equalization.
out = lteEqualizeMMSE(rxGrid,hest,n0);
Show scatter plot of one component carrier.
Input Arguments
rxgrid — Received data resource grid
3-D numeric array | 2-D numeric matrix
Received data resource grid, specified as a 3-D numeric array or a 2-D numeric matrix. As a 3-D numeric array, it has size N-by-M-by-NRxAnts, where N is the number of subcarriers, M is the number of
OFDM symbols, and NRxAnts is the number of receive antennas.
Alternatively, as a 2-D numeric matrix, it has size NRE-by-NRxAnts. In this case, the first two dimensions have been reduced to one dimension by appropriate indexing through the frequency and time
locations of the resource elements of interest, typically for a single physical channel.
Data Types: double
Complex Number Support: Yes
channelest — Channel information
4-D numeric array | 3-D numeric array
Channel information, specified as a 4-D numeric array or a 3-D numeric array. As a 4-D numeric array, it has size N-by-M-by-NRxAnts-by-P. N is the number of subcarriers, M is the number of OFDM
symbols, NRxAnts is the number of receive antennas, and P is the number of transmit antennas. Each element is a complex number representing the narrowband channel for each resource element and for
each link between transmit and receive antennas. This matrix can be obtained using the channel estimation command lteDLChannelEstimate.
Alternatively, as a 3-D numeric array, it has size NRE-by-NRxAnts-by-P. In this case, the first two dimensions have been reduced to one dimension by appropriate indexing through the frequency and
time locations of the resource elements of interest, typically for a single physical channel.
Data Types: double
Complex Number Support: Yes
noiseest — Noise power estimate
numeric scalar
Noise power estimate, specified as a numeric scalar. It is an estimate of the received noise power spectral density per RE on rxgrid.
Data Types: double
Output Arguments
out — Equalized output data
3-D numeric array | 2-D numeric matrix
Equalized output data, returned as a 3-D numeric array or a 2-D numeric matrix. As a 3-D numeric array, it has size N-by-M-by-P, where N is the number of subcarriers, M is the number of OFDM symbols,
and P is the number of transmit antennas.
Alternatively, if channelest is provided as a 3-D array, out is a 2-D numeric matrix of size (N×M)-by-P. In this case, the first two dimensions have been reduced to one dimension by appropriate
indexing through the frequency and time locations of the resource elements of interest, typically for a single physical channel.
Data Types: double
Complex Number Support: Yes
csi — Soft channel state information
3-D numeric array | 2-D numeric matrix
Soft channel state information, returned as a 3-D numeric array of the same size as out. As a 3-D numeric array, it has size N-by-M-by-P, where N is the number of subcarriers, M is the number of OFDM
symbols, and P is the number of transmit antennas. csi provides an estimate (via MMSE) of the received RE gain for each received RE.
Alternatively, if channelest is provided as a 3-D array, csi is a 2-D numeric matrix of size (N×M)-by-P. In this case, the first two dimensions have been reduced to one dimension by appropriate
indexing through the frequency and time locations of the resource elements of interest, typically for a single physical channel.
Data Types: double
Version History
Introduced in R2014a
|
{"url":"https://de.mathworks.com/help/lte/ref/lteequalizemmse.html","timestamp":"2024-11-12T06:17:12Z","content_type":"text/html","content_length":"93114","record_id":"<urn:uuid:ea0d6809-ef5b-43b5-a7cd-a05dd4cc0ec9>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00111.warc.gz"}
|
Which parameters of NDSolve to change to get a more accurate PDE solution?
1764 Views
9 Replies
4 Total Likes
Which parameters of NDSolve to change to get a more accurate PDE solution?
Hello, I have been trying to solve PDEs using NDSolve, in particular the following one:
However, as can be seen at the end of the above code, the solution obtained is provided with only 6 significant digits, whereas I would be happy to obtain more accurate solutions. Changing parameters
such as PrecisionGoal and AccuracyGoal does not seem to help, but cause some error messages. I managed to obtain some improvement only by increasing MaxStepSize. It is less clear whether changing
DifferenceOrder brings any improvement. Which other parameters one can change to get a better accuracy of the solution, without causing error messages? Manuals do not seem to provide any discussion
of this essential issue. Manuals are also incomplete, as they do not explain which are default settings for various parameters, so that one cannnot even know which particular method or available
option is used. Ideally, I would like to make use of the multiprecision available in MATHEMATICA, in order to obtain solutions with as many as 19 accurate significant digits. Is this possible at all?
If yes, I would also be happy to learn how to split the PDE solving calculations into several runs, as they are likely to be time-consuming. Or, perhaps there are more accurate and efficient
alternatives to NDSolve?
9 Replies
There are lists and descriptions of methods, including implicit one-step methods:
LSODA is the default solver for ODE's. IDA for DAEs. Not sure about other forms of DEs.
According to the Overview, "FixedStep" is restricted to one-step methods, so you can't use BDF or LSODA with it. A list of such submethods is in the docs. for NDSolve. Strangely, the FixedStep
section gives an example that claims to show how to use Method -> "BDF" using fixed steps, but the steps are in fact of different sizes. You can use BDF for time integration in a PDE, but I do not
know how to fix the step size or to restrict the minimum step size.
I do not know of any implementation of Rosebrock methods in Mathematica/WL. How to implement your own methods is described in the Plugin Framework.
Responding to the person "Updating Name": I tried your version of the code, but I get a message: "The initial conditions did not evaluate to an array of numbers of depth 1 on the spatial grid.
Initial conditions for partial differential equations should be specified as scalar functions of the spatial variables". To make your code work I had to remove WorkingPrecision -> 40, but then I get
an accuracy of no more than 4-11 digits, depending on the choice of MaxPoints and MinPoints, while increasing AccuracyGoal or PrecisionGoal resulted in an enormous slowing down. My feeling is that
NDSolve does not tolerate multiprecision, at least not in adaptive calculations.
Concerning the pseudospectral discretization: isn't it practical for periodic boundary conditions only?
Responding to Michael Rogers: I'll try your suggestions, but I am not happy about "ExplicitRungeKutta". Is there any possibility to employ implicit methods, say BDF? Or, other robust implicit
schemes, like for example Rosenbrock methods? Some of these are very handy for nonlinear problems, as they do not require Newton iterations. I noticed in some examples on the internet that LSODA was
used in NDSolve, and LSODA is based on BDF, if I remember. Explicit methods are likely to cause stability problems. Unfortunately, I cannot find anywhere a complete list of available integrators and
other options for NDSolve. Leslaw
Used "FixedStep" + StartingStepSize for a fixed time step and whichever method for the spatial grid. Ignore the error/warning message about StartingStepSize. You've already complained about error
messages, and I have no defense for this one. (It seems StartingStepSize may be used for both temporal and spatial discretization, and it's okay for one but not the other in the examples below.)
Example 1:
\[Nu] = 0.01;
mysol = First[
NDSolve[{D[u[x, t], t] == \[Nu] D[u[x, t], x, x] -
u[x, t] D[u[x, t], x], u[x, 0] == E^-x^2, u[-5, t] == u[5, t]}
, u, {x, -5, 5}, {t, 0, 4}
, Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MinPoints" -> 257, "MaxPoints" -> 257,
"DifferenceOrder" -> 4},
Method -> {"FixedStep", Method -> "ExplicitRungeKutta"}},
StartingStepSize -> 1/2^13]]
Example 2:
mygrid = Join[-5. + 10 Range[0, 48]/80, 1. + Range[1, 4 70]/70];
\[Nu] = 0.01;
mysol = First[
NDSolve[{D[u[x, t], t] == \[Nu] D[u[x, t], x, x] -
u[x, t] D[u[x, t], x], u[x, 0] == E^-x^2, u[-5, t] == u[5, t]}
, u, {x, -5, 5}, {t, 0, 4}
, Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"Coordinates" -> {mygrid}},
Method -> {"FixedStep", Method -> "ExplicitRungeKutta"}},
StartingStepSize -> 1/2^13]]
The following gives the distinct step sizes:
u["Coordinates"] /. mysol // Map@Differences // Map@DeleteDuplicates
Examples adapted from the [Method of Lines tutorial.]
Suppose that I don't want to use automatic features of NDSolve, but just prefer fixed, uniform spatial and temporal grids, combined with my choice of the spatial and temporal discretization orders,
plus my choice of WorkingPrecision. How to do this? I suspect I might choose SpatialDiscretization->MaxSteps == SpatialDiscretization->MinSteps == (my value) to eliminate automatic spatial grid
selection. Is this correct? But how to set the other options so that PrecisionGoal and AccuracyGoal are effectively not used by NDSolve? Given the fragility of NDSolve adaptive calculations, the
above approach might be the best one, if only no criptic error messages occur; I could test the accuracy of the procedure by comparing the actual errors with some known solutions, instead of relying
on the adaptive algorithm. Leslaw
I don't understand the connection of your remarks about errors and error message to the problem at hand. I get no errors with this:
Clear[foo, sol, x, w, psi];
ClearSystemCache["Numeric"]; (* Why? IDK. Seems to fix a bug when re-run. *)
nstep = 0; neval = 0; (* steps, PDE evaluations *)
PrintTemporary@Dynamic@{Clock@Infinity, foo, nstep, neval}; (* monitors progress *)
xmax = 10;
wmax = 10;
xmin = 10^-20;
psiinit[w_, x_] := (Exp[-w^2]/Sqrt[\[Pi]] - w*Erfc[w])*x;
sol = NDSolve[{2*x*D[psi[w, x], x] ==
D[psi[w, x], {w, 2}] + 2*w*D[psi[w, x], w] - (x*psi[w, x])^2,
psi[w, xmin] == psiinit[w, xmin],
Derivative[1, 0][psi][0, x] == -x, psi[wmax, x] == 0},
psi, {w, 0, wmax}, {x, xmin, xmax},
PrecisionGoal -> 20, AccuracyGoal -> 10,
WorkingPrecision -> 40,
StepMonitor :> (++nstep; foo = x),
EvaluationMonitor :> (++neval),
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {"TensorProductGrid",
"MaxStepSize" -> 1/5000, "DifferenceOrder" -> 4}}]
Takes a few seconds to set up the first step. Takes several minutes to get to x > xmin. I have only 32GB RAM and NDSolve[] uses around 38.5GB, so much of the slowness is probably due to swapping.
Have you tried "Pseudospectral"?:
Method -> {"MethodOfLines",
"SpatialDiscretization" -> {
(*"MinPoints" -> 65,(* or higher *)*)
"DifferenceOrder" -> "Pseudospectral"}}
A trial run shows that psi approaches zero very rapidly, going below $10^{-20}$. To get 20 digits of precision, the AccuracyGoal needs to be more than the maximum of $20-\log_{10} | \psi |$
throughout the solution domain. I haven't been able to determine the maximum. To achieve, say, AccuracyGoal -> 60, I need WorkingPrecision to be around 400 or more, or some internal numerical glitch
happens and I get an error like the one you described (but did not provide code for!). It also takes a long time. Note: Compilers do not give run-time errors, which is what this is. I used to get an
uncaught exception and a "core dump" that could be used with a debugger, if the program had been compiled with debugger metadata. Mathematica gives you a stack trace, but only of user-level
expressions. I don't get to trace the numerical glitch, which probably occurs in an untraceable library anyway. It would be nice if it indicated the "non-numerical value" it encountered, instead of
just giving the time. It would be a good hint to what went wrong.
I tried some settings for WorkingPrecision, but then, if I remember, I get a series of error messages stating that something is not a number. The messages are totally incomprehensible, like most of
error messages in MATHEMATICA, giving no clues to where and why the errors occurred. Forgive my irritation, but I really don't understand why the error handling is so bad in MATHEMATICA. Any compiler
for programming languages is usually able to provide a precise location of errors, together with suggestions for eliminating the errors. Why this can't be done in MATHEMATICA? This should be
particularly easy given the fact that compilation/execution is limited to a clear sequence of, or even to single commands. Leslaw
Have you tried WorkingPrecision -> 40 or some such setting for it?
Well, thanks, this should be useful for output purpose, but my main concern and question is how to force NDSolve to produce results that have a precision of number representation higher than the
standard 16 digits, and how to activate an effective use of higher order accurate discretizations (hopefully) built-into the procedure. My goal is to obtain the solution having many accurate digits,
ideally 19-20 digits. This cannot be normally achieved by using standard floating point numbers and discretizations that are only 2nd or even 4th order accurate. Leslaw
Sorry, the computer ate my post. I don't have time to rewrite it. :/
Here is the full precision (53-bits, 15.95 digits):
Style[psi[0, 7] /. sol, PrintPrecision -> 20]
(* {1.1442768817404132} *)
Read http://reference.wolfram.com/language/tutorial/Numbers.html for more about kinds of numbers.
Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use
|
{"url":"https://community.wolfram.com/groups/-/m/t/3118998?sortMsg=Recent","timestamp":"2024-11-07T12:38:16Z","content_type":"text/html","content_length":"143271","record_id":"<urn:uuid:047911ce-ed4f-43c3-836e-c2f8a9f05179>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00082.warc.gz"}
|
Regression plot · Feyn Documentation
by: Kevin Broløs & Chris Cave
(Feyn version 3.0 or newer)
Aside from the training metrics, Feyn offers a range of tools to help you evaluate your Model. For a regression Model, you have the option to plot the predicted values against the actual values to
better evaluate your Model.
Below, we use plot_regression to compare the true values of the target variable to the predicted values from the regressor.
As sample data we are going for the Diabetes dataset made available by scikit-learn. Below we import data, prepare it and find a good Model from a QLattice:
import feyn
from sklearn.datasets import load_diabetes
import pandas as pd
from feyn.tools import split
# Load diabetes dataset into a pandas dataframe
dataset = load_diabetes()
df_diabetes = pd.DataFrame(dataset.data, columns=dataset.feature_names)
df_diabetes['response'] = dataset.target
# Train/test split
train, test = split(df_diabetes, ratio=[0.6, 0.4], random_state=42)
# Instantiate a QLattice
ql = feyn.QLattice(random_seed=42)
models = ql.auto_run(
# Select the best Model
best = models[0]
Plotting the model predictions
We use plot_regression to plot the actual values (x-axis, labelled Actuals) to the predicted values (y-axis, labelled Predictions) from the regressor.
If the prediction is perfect, then all the points should lie on the y=x dashed line. We can use this to see whether we overestimate or underestimate certain regions.
The line of equality is an aid to see just how close the points are to the truth.
Saving the plot
You can save the plot using the filename parameter. The plot is saved in the current working directory unless another path specifed.
best.plot_regression(data=train, filename="feyn-plot")
If the extension is not specified then it is saved as a png file.
Location in Feyn
This function can also be found in feyn.plots module.
from feyn.plots import plot_regression
y_true = train['response']
y_pred = best.predict(train)
plot_regression(y_true, y_pred)
|
{"url":"https://docs.abzu.ai/docs/guides/plotting/plot_regression","timestamp":"2024-11-03T17:13:54Z","content_type":"text/html","content_length":"24397","record_id":"<urn:uuid:dd33399f-3560-440f-9a92-9585e9101498>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00223.warc.gz"}
|
Li Guo^1, Yihao Guo^1, Yingjie Mei^1,2, Jijing Guan^1, Wufan Chen^1, and Yanqiu Feng^1
^1Guangdong Provincial Key Laborary of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, People's Republic of China, ^2Philips Healthcare, Guangzhou,
People's Republic of China
MEDI reduces streaking artifacts in QSMs by minimizing total variation in smooth regions in the susceptibility map. However, MEDI still contains artifacts near image edges because this method does
not impose any constraint on voxels near edges. We aim to improve the reconstruction of quantitative susceptibility map from MR phase data by introducing morphology-adaptive TV regularization which
imposes the TV constraint on the whole susceptibility map but with different weights in smooth and non-smooth regions. The performance of the proposed method is demonstrated in both simulation and in
vivo data sets.
Quantitative susceptibility mapping (QSM) determines tissue magnetic susceptibility by the field map deconvolution with the dipole kernel. Due to the zero values of the dipole kernel along the magic
angle in the k-space domain, the computed susceptibility map contains streaking artifacts [1]. Various regularization methods have been proposed to reduce the effect of this ill-posed inversion
problem on quantified susceptibility map from single-orientation acquisitions. Morphology enabled dipole inversion (MEDI) [2, 3] can reduce streaking artifacts by minimizing total variation (TV) in
smooth regions, which are identified as voxels with small magnitude image gradients. However, the susceptibility map generated by MEDI still contains severe artifacts near image edges because this
method does not impose any regularization on voxels near edges. This study aims to improve the reconstruction of quantitative susceptibility map from MR phase data by introducing morphology-adaptive
TV regularization which imposes the TV constraint on the whole susceptibility map but with different weights in smooth and non-smooth regions.
Theory and methods
The reconstruction of quantitative susceptibility map using morphology-adaptive TV regularization can be formulated as follows:
$$$\min_{\chi}\parallel W\left(F^{-1}DF\chi-\phi\right)\parallel_2^2+\lambda_{1}\parallel M\triangledown \chi\parallel_{1}+\lambda_{2}\parallel \left(1-M\right)\triangledown \chi\parallel_{1}$$$ (1)
where χ is the susceptibility map, $$$\phi$$$ the local tissue phase, $$$F$$$ the Fourier transform operator, $$$D$$$ the dipole kernel in k-space, $$$W$$$ the noise weighting, $$$\parallel \star\
parallel_2^2$$$ the L2 norm, $$$\triangledown$$$ the gradient operator, $$$M$$$ the binary mask of smooth regions in the magnitude image, $$$\lambda_{1}$$$ and $$$\lambda_{2}$$$ regularization
parameters, $$$\parallel\star\parallel_{1}$$$ the L1 norm, and $$$\parallel\triangledown\chi\parallel_{1}$$$ denotes the L1 norm of gradient, i.e., TV. The second term picks a solution with smooth
regions matching those of magnitude images, and thus can effectively suppress streaking artifacts in these smooth regions. The third term imposes the piecewise constant constraint on reconstructed
susceptibility map in non-smooth regions to reduce quantification errors at these edge voxels. Eq. (1) provides a spatially adaptive regularization for the QSM inversion. The regularization parameter
$$$\lambda_{2}$$$ is generally set to be smaller than $$$\lambda_{1}$$$, and this means less smooth constraint on voxels near edges. When parameter $$$\lambda_{2}$$$ equals to zero, the proposed
method reduces to MEDI. When $$$\lambda_{2}$$$ equals to $$$\lambda_{1}$$$, Eq. (1) is a conventional TV constrained QSM inversion without prior morphological information from magnitude images.
To evaluate the performance of the proposed method, the simulation and in vivo data sets from the online QSM repository [4] were reconstructed using the proposed method and MEDI. The associated
reference standard images are also provided by the online QSM repository. The parameters used in MEDI were set to the optimal values suggested by previous work [5]. The parameters used in the
proposed method were as follows: $$$\lambda_{1}=$$$0.003 and $$$\lambda_{2}=$$$0.00001 for numerical simulation data, and $$$\lambda_{1}=$$$0.003 and $$$\lambda_{2}=$$$ 0.0003 for in vivo data.
Figs. 1 and 2 exhibit the results of numerical simulation and in vivo data, respectively. Table 1 summarizes the quantitative measures of root mean square error (RMSE), structure similarity index
(SSIM), high frequency error norm (HFEN), and the slope and determination coefficient ($$$R^{2}$$$) from the regression analysis. As shown by the simulation results in Fig. 1, the proposed method
significantly outperforms MEDI in reducing quantification errors. From in vivo imaging results in Fig. 2, it can be observed that MEDI results contain obvious errors on voxels near edges. Compared
with edges in the MEDI results, the edges in the susceptibility map generated by the proposed method are more similar to the edges in the reference maps. Table 1 shows that the proposed method
results in reduced susceptibility quantification for RMSE and HFEN in both human brain and gadolinium phantom data sets.
We develop a novel morphology-adaptive TV regularization method for the reconstruction of quantitative susceptibility map from MR phase data. In the proposed method, regularization weights are
adaptively determined according to local morphological information: smooth regions are reconstructed with strong regularization, and non-smooth regions near tissue boundaries are reconstructed with
less regularization. Compared with MEDI, which does not impose any constraints on voxels near edges, the proposed method can achieve more accurate reconstruction of susceptibility maps, especially
near tissue boundaries.
Our results demonstrate that the reconstruction of susceptibility map using MEDI can be improved by introducing an additional TV constraint on edge voxels with a smaller regularization degree. More
effective regularization models can be further investigated for the accurate reconstruction of the susceptibility near tissue boundaries.
[1] Karin, Shmueli, et. al. MRM 2009; 62:1510-1522. [2] Tian, Liu, et. al. MRM 2011; 66(3):777-783. [3] Jing, Liu, et. al. NeuroImage 2012; 59(3):2560-2568. [4] Online QSM repository. http://
weill.cornell.edu/mri/QSM/Online.zip. [5] Shuai, Wang, et. al. IEEE Trans Biomed Eng 2013; 60(12):3441-3448.
No acknowledgement found.
No reference found.
|
{"url":"https://cds.ismrm.org/protected/17MProceedings/PDFfiles/1422.html","timestamp":"2024-11-02T21:51:51Z","content_type":"application/xhtml+xml","content_length":"12512","record_id":"<urn:uuid:a5e55dff-abce-4bcf-8349-ed01a5983177>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00299.warc.gz"}
|
MU Structural Analysis - 2 - May 2015 Exam Question Paper | Stupidsid
MU Civil Engineering (Semester 5)
Structural Analysis - 2
May 2015
Total marks: --
Total time: --
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
For the structures shown in figure, calculate.
1 (a) i) Static indeterminacy.
3 M
1 (a) ii) Kinematic indeterminacy (neglecting axial deformation for flexural members)
3 M
1 (b) Determine the horizontal displacement of joint D of the rigid jointed plane frame as shown in figure, due to change of temperature of member surfaces. Consider depth of all the member as 500mm.
Take α[t]=12×10^-6/°C.
10 M
1 (c) Determine the shape factor for the triangular section as shown in figure
4 M
2 (a) Analysis the rigid joined plane frame as shown in figure, using moment distribution Method. Draw BMD,SFD and elastic curve.
10 M
2 (b) Analyse the beam by three moment theorem and Draw B.M.D
10 M
3 (a) Analysis the rigid joined plane frame as shown in figure, using slope deflection method. Draw BMD,SFD and deflected shape.
12 M
3 (b) A two hinged parabolic arch of span-20 meter and rise 4 meter carries uniformly distributed load of 40kN/m on right half span find the reaction at the supports and deaw BMD.
8 M
4 (a) Analysis the rigid joined plane frame as shown in figure, using Flexibility Method. Draw BMD,SFD and deflected shape.
12 M
4 (b) Analysis the beam by flexibility method and draw BMD.
8 M
5 (a) Analysis the rigid joined plane frame as shown in figure, using stiffness Method. Draw BMD, SFD and deflected shape.
12 M
5 (b) Calculate the plastic moment capacity required for the continuous beam with working load as shown in figure.
8 M
6 (a) Analysis pin joined plane frame shown in figure and calculate the forces in all members. Take AE constant.
12 M
6 (b) Analyze the beam as shown in figure using stiffness method.
8 M
More question papers from Structural Analysis - 2
|
{"url":"https://stupidsid.com/previous-question-papers/download/structural-analysis-2-10620","timestamp":"2024-11-10T08:46:35Z","content_type":"text/html","content_length":"139096","record_id":"<urn:uuid:42c11a52-b22d-4f4a-bafc-eca264bd95c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00712.warc.gz"}
|
How to build the "Best Portfolio" using index funds and ETFs?
Want to construct your portfolio completely with index funds? Or want to construct the “Best portfolio” using index funds or ETFs?
How would you do that?
Good that there are many options available in the passive investing space. There are cap-based indices (Nifty 50, Nifty Next 50, Nifty Midcap 150 etc) and there are factor indices (Momentum, Low
Volatility, Quality, Value etc).
Having such options is fine but how would you construct a portfolio with such indices?
How much weightage would you give to each of these factors in your portfolio? Which are the best factor index funds or ETFs?
In this post, let’s find objective answers to the above question, albeit with many caveats. In other words, we will find, based on past data, the “Best portfolios” based on your requirements.
Which index funds or ETFs to consider?
We consider the following (price return) indices.
1. Nifty 50 index
2. Nifty Midcap 150 index
3. Nifty 200 Quality 30 index
I have written about all these indices in my earlier posts and discussed about their methodologies. Have also compared the performance of these factor indices However, I have been mostly concerned
about the performance of indices in isolation. I have not focussed on the interplay or the correlation between the indices. Or if combining 2 or 3 strategies would yield better results. And this is a
problem because you wouldn’t best all your money on just a single strategy.
Because we know that, when it comes to investing, nothing works all the time. Thus, no strategy, no matter how good, will outperform all the time. In fact, there will be times when it will struggle
badly. And it is difficult to stick with an underperforming strategy for a long time if you have bet all your money there. You might bail out at just about the worst time.
Now, if we construct the portfolio using two or more of these indices (strategies), it is possible when one strategy struggles, the remaining ones are doing well. This can result in a smooth
performance overall and help maintain discipline.
In this post, let’s figure out how to construct a portfolio using a combination of these indices.
Or in other words, what combination of these indices will result in the “Best” portfolio?
I have picked popular cap-based indices single factor indices (Nifty 50, Nifty Next 50, Nifty Midcap 150), single factor indices (Quality, momentum, low volatility, value) and even a multi-factor
index (Alpha Low Volatility 30) index. I have tried to pick indices for which we already have index funds or ETFs. The only exception in Nifty Midcap Quality 50 index.
A note about Nifty 50 Value 20 index (NV 20): I didn’t pick a pure value index (Nifty 500 Value 50 index) because its long-term performance has been pathetic. Have chosen Nifty 50 Value 20 even
though it is not a pure value index. NV 20 has very high weightage to ROCE (return on capital employed), a metric you would usually associate with a quality stock. So, it is more of a Quality + Value
What is the Best Portfolio?
There can’t be one objective definition of the “Best portfolio”. Because all of us have different expectations from our portfolios. While some of us shoot for the highest returns, the others are
content with moderate but stable returns.
Some of the desirable features of any portfolio could be:
1. High returns (CAGR/IRR)
2. Low volatility (Low standard deviation)
3. High Sharpe ratio (Sharpe ratio is a measure of risk-adjusted returns. Higher the better)
4. High average rolling returns
I have presented a small list above. There may be many other metrics that you would want your portfolio to rank well on. For instance, you may just be concerned about downside deviation.
Additionally, a portfolio may not rank well on all the metrics. For instance, a portfolio/fund may offer the best CAGR but may be the most volatile or may have the deepest drawdowns.
Thus, you first need to decide what you want from your portfolio and can try to optimize the portfolio for that metric accordingly. For instance, the highest CAGR portfolio may be different from the
lowest drawdown portfolio.
How have the Factor Indices performed?
I compared the performance from January 2009 until September 2021.
Reason: The data for Nifty 50 Value 20 index is available only from January 1, 2009.
I have highlighted the portions as follows:
1. Best Performer on a metric: Blue
2. 2^nd Best Performer on a metric: Grey
3. Worst performer on a metric: Pink
You can see, no index has rank 1 or 2 on all the metrics. And this brings us to an important point. Can we improve the performance on various metrics by mixing these indices?
Let’s find out. The first thing to check here is the correlation between the various indices. Correlation is a measure of how various indices move together. A correlation of 1 means that both the
variables move together in the same direction. A correlation of -1 means that when one variable goes up, the other goes down and vice-versa.
For the sake of completion, I present the “Rs 100 grows to” and rolling returns charts below.
Correlation between the Factor Indices
Note that all these indices comprise of Indian stocks. Hence, would have very high correlation with each other. And you can see this in the table above. Most of the numbers are above 0.8. I have
highlighted those below 0.85. Thus, you must appreciate the limitation of a portfolio mix of the above indices. What we test in the remainder of the post is about optimising your domestic equity
You can’t rely on a portfolio with a mix of these indices for diversification. For diversification, we need much lower correlation coefficients (than the numbers we see in the above table). And that
happens when you mix completely different assets in a portfolio.
For comparison, I present the correlation of monthly returns between Nifty, Gold, Nasdaq 100 index and a debt fund since March 2011. Have used Nippon Gold BeES as proxy for gold. Motilal Oswal Nasdaq
100 ETF for Nasdaq 100 and HDFC Liquid fund for debt fund.
The number are either negative or low positive. And that’s how you diversify a portfolio and reduce portfolio losses. By bringing together assets with negative or low correlation. Now, let’s go back
to the main topic.
What do we optimize for?
Your best portfolio combination will depend on the metric you want to portfolio to optimize for. I don’t which is your preferred metric. Hence, we will find optimized portfolios for all the metrics
discussed above.
Firstly, we will see the results for each metric for uncapped weights. You can even go 100% to a single index. Negative weights (or shorting) is not allowed.
Then, we take a more practical approach. To avoid going too heavy with a particular strategy, we will cap the maximum weight at 25% and 40%. Or we will find the “Best portfolios” using 2 maximum
weight caps.
I have used Excel Solver function to identify the best portfolios for each metric, subject to the weight caps.
I have highlighted the metric being optimized in Blue.
Highest CAGR
The highest CAGR portfolio is heavy on Nifty Midcap 150 Quality 50 index, Nifty Momentum index and Nifty 200 Quality 30 index.
Highest Sharpe Ratio
Heavy on Nifty Alpha Low Vol 30, Quality 30 and Nifty Momentum index.
Lowest Standard Deviation
Heavy on Nifty Alpha Low Vol 30, Nifty Low Volatility 30 index and Nifty Quality 30 index. Nifty Momentum Index also comes in a capped portfolio.
Lowest Maximum Drawdown
This is interesting. Nifty Midcap 150 index had the deepest drawdowns. Still, it commands a good weight in lowest drawdown portfolios. Nifty Quality index and NV20 index are the other prominent
players in such a portfolio.
Best 3-year Rolling Returns
Nifty Momentum index is the biggest weight here. In constrained portfolios, Alpha Low Vol 30, Nifty Quality and Midcap Quality index come in.
How do we use the above information?
One surprising finding is that you don’t find any weightage to Nifty 50 in any of the optimized portfolios. Nothing.
Does that make Nifty 50 a bad choice?
No. Nifty 50 is not a bad choice. And I have listed down some of the reasons in the “Caveats” section below.
In a post on How to build a long term portfolio, I mentioned that the core equity portfolio should be built around market-cap based indices. And I stick with that.
Depending on your preference, you can use the “Best portfolios” for the satellite portion of your equity portfolio.
The Caveats (and there are so many of them)
1. This is based on past data. There is no guarantee that the past will repeat.
2. I have used data for about last 13 years. The results can change if you change the time period. For instance, if we are optimizing on Sharpe ratio, the best combination can change if we use the
last 5-year or last 10-year data.
3. There is a start point and end point bias. We have used Jan 2009-Sep 2021. The best combinations for any metric will be different for say, Jan 2009-Sep 2020. Or for Jan 2011- Jan 2021.
4. Even for 13-year data, the results we have seen for 2009-2021will be different for returns from 2023-2035.
5. The calculations I have done assumes monthly rebalancing to target allocation. Quite impractical. Unlikely you would do that. Firstly, it is a lot of work for investors like you and me. Secondly,
there will be tax implications. Annual rebalancing is just fine.
6. The factor indices have been launched recently. Do not have sufficiently long track record. The good results that we see in the factor indices could be a result of back-fitting. The performance
of live indices could be very different. Moreover, the alpha can shrink when serious money chases the indices (or a similar strategy). Cap-based indices such as Nifty 50 and Nifty Next 50 have a
long track record.
Therefore, take these findings with a pinch of salt. At the same time, the past data is not completely useless either. Relying on past data is better than crystal-gazing.
How would you use this information?
Which metric do you want to optimize your portfolio for? And which factor indices will you use for your portfolio?
Let me know in the comments section.
Additional Links/Source/Credit
1. A tweet from Kora Reddy (@paststat) got me thinking in this direction. I also attended his workshop on building such optimized portfolios, where he patiently explained how to build such models.
If you are interested in this topic, suggest you attend the workshop the next time it is organized.
2. Which is the Best Nifty Factor Index? By Anoop VijayKumar from CapitalMind
11 thoughts on “How to build the “Best Portfolio” using index funds and ETFs?”
1. Sumanas
Another well-presented article!
1) The combination of the Nifty 200 momentum index 30 fund and Nifty 100 Low volatility 30 has a correlation factor of 0.86.
Does this mean, the overlapping of the underlying stocks along with their weightage, the percentage is 86?
2) Also, depending on the stocks rejigged by Nifty bi-annually, wouldn’t this keep changing?
3) I am also wondering, how is the correlation between Nifty50 and niftymidcap150 be 0.89. Can you explain how this correlation coefficient is calculated?
4) Objective is to generate returns that will beat inflation considerably and reduce volatility over a longer period. And for that we will need a portfolio returns in double digits. That
portfolio returns should be diversified enough to have very little correlation coefficient, meaning, if one goes down, other asset class should lift it up ).
In this regard, based on the life stage of the investor, can split it this way ( 60% equity, 20% gold, 20% debt )
Above can be achieved through either ACTIVE-based funds or PASSIVE-based funds.
Since the trend now is to minimize human risk and follow the indices, the PASSIVE-based portfolio construction can be considered.
Here, either it can be index funds or it can be ETF/FoF.
Also, in Equity as an asset, we can consider domestic vs international.
In domestic equities, we can either get into pure market-cap-based indices ( NIFTY 50 etc ) or we can add factor-based ( index with filters: akin to rule-based active investment ).
In this article, we can see the correlation between market-cap/factor-based is almost similar. Meaning, it doesn’t make sense to have a bunch of them create a diversified domestic equity
portfolio. In other words, a combination of ” Nifty 200 momentum index 30 fund and Nifty 100 Low volatility 30 ” is almost the same as just having either ONE of them.
In conclusion, just having 4 funds should be enough to construct a portfolio which be all-weather-friendly.
1 domestic index/ETF fund ( Nifty50 or Nifty 200 momentum index 30 fund and Nifty 100 Low volatility 30 )
1 Gold ETF
1 International equity ETF
1 liquid fund
Can you share your views Deepesh, on my interpretation?
1. Deepesh Raghaw
Hi Sumanas,
Thanks for the feedback. So many questions.
1. Correlation does not imply overlap. It simply indicates how the two securities move in relation to each other. Since all these are domestic indices, the correlation will be high.
2. Bi-annual does not change the results. It is already accounted for.
3. Not the right person to explain statistical concepts. I use excel functions to calculate.
4. What you suggest is a fine approach. This was just about domestic stocks. True diversification will come when we add different kind of assets. Gold, fixed income or sub-assets such as
international equity. It is all about conviction. You pick up factors where you have conviction and the choice must also be backed by empirical performance. Low Vol 100 is not the same as
Momentum index (even though Momentum has an inbuilt low volatility filter while selecting stocks_
2. Shiv
Excellant analysis .
Can you suggest a few funds which tracks Nifty Midcap Quality 50 index ?
1. Deepesh Raghaw
Thanks Shiv. None as yet.
As Sumanas pointed out, UTI has filed for one. Let’s see when it is launched.
3. Sumanas
Only UTI has filed for this Nifty Midcap Quality 50 index fund and no dates on the NFO yet.
If one already has constructed a portfolio using a mix of Nifty 200 momentum 30 and Alpha Low volatility or any other factor-based funds with a certain mix of weightage, not sure if adding “Nifty
Midcap Quality 50 index fund” index fund to the mix will add any more value considering a high correlation coefficient of 0.8+ as shared in this article.
However, if someone has constructed a portfolio using a mix of active and passive funds, and now wants to replace the active fund with the passive fund for 2 reasons ( lower expense ratio and
higher point to point returns ), then yes, maybe replacing Axis Midcap fund or other in this category with Nifty Midcap Quality 50 index fund could be the only reason to cheer.
Churning and cluttering will only lead to a dip in returns and create unnecessary anxiety in this process of trying to optimize the portfolio returns.
1. Deepesh Raghaw
Yes Sumanas. Clutter and churn must be avoided.
No matter which factor index you pick, it will test your patience sooner or later.
Conviction is important.
If you believe in Quality and momentum, just pick those 2 indices.
Thanks for helping with the response.
1. Sumanas
Rightly said.
I see a few more sub-asset classes in the coming days.
Crypto-currency based ETF/index funds and Clean energy-based ETFs.
This will only create more chaos in the mind of investors who are already flooded with numerous permutations and combinations of funds in composing that perfect all-weather portfolio.
If the objective of investments of an investor isn’t clear and if one is spoilt for choices with various routes to generate income, and as you rightly pointed earlier, diving into
something without conviction, then those set of investors will have a hard time deciding where to invest and also will keep suspecting and having serious doubts on the capability of the
existing already invested options to generate superior returns viz-a-viz the new fund/theme in the market.
As you always point out in your articles “Because we know that, when it comes to investing, nothing works all the time. Thus, no strategy, no matter how good, will outperform all the
4. Wilfred
Alpha 50 is missing in comparison.why?
1. Deepesh Raghaw
Yes, could have used. Would have presented a complete picture.
A couple of reasons.
1. Too many indices would have cluttered the charts.
2. Thought Alpha Low Vol 30 was a better choice than Alpha 50.
I did something similar with value index too. Chose NV20 instead of pure value index.
5. Kalai
How is Nifty 500 Index fund compares to Nifty 50 Index. I am planning to invest only one index fund for equity part of portfolio. Just curious to understand the choice between N50 vs N500 index
fund. The TER is more in N500 compare to N50 index fund. Not sure about Tracking error in these 2 funds
1. Deepesh Raghaw
Hi Kalai,
Can’t answer this objectively.
Different indices. Nifty 500 is a much broader index. At the same time, the correlation between the two indices will likely be very high. Nifty 50 stocks will be subset of Nifty 500 stocks.
You can check the expense ratios and tracking error here (https://www.indiaetfs.in/list-of-index-funds).
Would expect tracking error to stabilize as the fund gets older and bigger.
Leave a Comment
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
{"url":"https://www.personalfinanceplan.in/best-mutual-fund-portfolio-index-funds-etfs-passive-factor-value-momentum-quality-low-volatility/","timestamp":"2024-11-07T18:38:38Z","content_type":"text/html","content_length":"284428","record_id":"<urn:uuid:a1c03324-fca9-4a43-913e-d5e770af6e4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00012.warc.gz"}
|
Jonas Meckas | Agency of Unrealized Projects
Dear Hans Ulrich: Tonight I am thinking about my unrealized projects. I have many of them. One is: To publish a book of IDEAS. I consider that ideas are the cheapest thing in the world. I have about
10,000 of them. But I have no time to put them all in ...
Dear Hans Ulrich:
Tonight I am thinking about my unrealized projects. I have many of them. One is: To publish a book of IDEAS. I consider that ideas are the cheapest thing in the world. I have about 10,000 of them.
But I have no time to put them all in a book form.
Another project: To issue all my film diaries, 60 hours of it, But I have no fiances for it. So I keep issuing them piece by piece.
Another unrealized project: all my written diaries. Anno 1946-1997. I need one year vacation from Anthology -- which means, someone else has to raise c.$300,000 to keep it going --- to prepare these
diaries for publication. Many have asked for them. But I have no time do it: I am slave of Anthology.
Project Four: I would like, as a project, to side on a bank of a river and watch the waters flow and do nothing, for one year. That would give me greatest ecstasy.
Jonas Meckas
P.S. Now mmm I'll reveal my most secret dream: to build, some day, a house, exactly the same way that my father used to build, of wood, and of stone. This will be my ultimate project, some day, such
a house. It has eternity in it. And I know how to do it. Ihave watched my father do it many many times, when I was a child. I know I can continue his craft.
|
{"url":"https://aup.e-flux.com/project/jonas-meckas/","timestamp":"2024-11-09T17:19:27Z","content_type":"text/html","content_length":"19916","record_id":"<urn:uuid:276b63fc-1696-470d-8012-4e9c92b1a109>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00612.warc.gz"}
|
Basic math equations
Bing visitors found our website yesterday by entering these keywords :
Calculas, free algebra calculator, how to use fx101 casio calculator, constructions gcse maths revision print-off sheets.
Algebra word problems free help online, 6th grade worksheet make a bar graph, polysmlt ti-89.
Worksheet on addition and subtraction of positive and negative numbers, holt pre-algebra part a chapter one test, decimal square, solve equation by the square root property.
Investigatory project (math), Algebra symbol for square root, pre-algebra set theory math, java code extract digits, free exam 11+ papers, Downloadable TI 84 Plus.
6th grade square roots math, solve for a variable online, adding and subtracting positive negative numbers worksheet, multiplying radicals calculator, non homogenous differential equation.
Column addition worksheets-4th grade, (surds + powerpoint), Integer worksheets.
GCF of all numbers 1 through 1,000, yr 10 maths examinations, adding fractions pre-calc, write equation in standard form with 3rd degree, what is the square root of 48.
Pre algebra evaluate expressions, quadratic equation gcse exercises, java code for square roots, solve simultaneous equations online.
Aptitude test papers, free downloads, trig problems/answers, square roots practice quiz 9th grade, distributive property adding and subtracting of integers, ti-84 "solve function", greatest common
factor of 871, Glencoe Mcgraw hill Linear Programming skills practice.
Namber 1/10 work sheet, Decimal + java, decimal to base 8.
Graphing functions polynomials cubes, "common multiple" trick, double intercept formula, free math problem solver, 5thgrade division tests, java decimal computation, Two Step Equations Worksheets for
eighth grade.
Algebra 1 an integrated approach, factoring quadratic trinomials-worksheets, how to solve probabilities, statistics summation solver.
Examples of mathematics trivia, worksheets on multiplying and dividing positive and negative numbers, algebra Age Problem example, fifth grade algerbra.
Solve 3 equations with 3 unknowns with a ti-89, how to solve quadratic equation computationally, glencoe/mcgraw-hill simplifying radical expressions page 89, "multiplying decimals" worksheets, TI 83+
"Partial Differentiation" program download, convert base 5 to decimal, ebook download "Discrete Mathematics with Applications".
Maths work sheat for 7th grade on decimal, dividing integers with patterns, how to solve a system containing fractions by the addition method, applications for real life polynomials, answers to holt
california mathematics course 2:pre algebra.
6th houghton mifflin worksheet, ti 83+ vertex of parabolas, "factoring" monomials to solve mathematical equations, pre algebra expression worksheets, least common multple for 41 and 11.
Glencoe/mcgraw-hill pre-algebra 2-4 practice adding integers simplify each exspression, how to find equation of the graph+cubic functions, alegebra 1, adding and subtracting integers review.
Solving second order differential equations in matlab, Induction Training for beginers.ppt, tricks for factorising quadratics, 4th square root, glencoe/mcgraw-hill math quiz answer.
Free 9th grade maths games, grade 7 work sheet, conjugate nth root, least common multiple worksheets, online algebra calculator, square root calculator to fraction, how to solve differential equation
using matlab?.
Solving systems of equations with a ti 83, set theory worksheets, numerical solution of nonlinear multivariable equations, free +interger worksheet, convert fraction to decimal in java.
Free formulas solving, multiplying and dividing integers activities, equation answerer, polynomials addition and subtraction exercises, simple interest games maths, to the power of a fraction, sample
problems of prime factorization.
"mastering physics solutions manual" download, download the t.i. 84 calculator, cheat sheet for clep test, simultaneous equations online solver, texas pre algebra/ prentice hall, examples of poems
about solving problems in math, algebra worksheets.
Permutation combination multiplication problems, radicals calculator, logarithmic equation solver.
3rd order polynomial in 2 variables, algebra formula, free check Division of monomials homework.
Radical expressions lesson plan, how to simplify cubed polynomials, aptitude test-free sample papers, an expression is factored when it is written as a product, online algebraic calculator, regents
squae roots.
Free equations worksheets, math algebra poems math poems, set of second order differential equations matlab, factoring solver, free trig homework help, ti 89 log key, worksheets for variables.
F of g functions solver, simplifying with variables, "prentice hall mathematics" pre-algebra "north carolina", multi step equation worksheets.
Boolean algebra calculator, program for sum of numbers in java, laplace for dummies, matlab-programing-tutorial free.
Answers for the worksheet angles in the coordinate plane, how to solve least common factors, solutions to contemporary abstract algebra torrent, lesson plan for solving monomials, graphs of absolute
value AND sample multiple choice questions.
How to solve fractions within fractions, solve the Least Common Denominator, write equation describe functional relationship, multiplication expression.
Yr 9 maths past exam papers, Diamond structure for permutation and combination, HOW DO YOU TYPE NUMBER 1 THRU 9, Precalculus with Limits A Graphing Approach 2nd Edition Answers and Explanations.
Free worksheets positive and negative worksheets, finding the common denominator in algebra, slope of line calculator TI-83, evaluating exponential expressions.
Help me with my algebra 2, Prentice Hall mathematics book geometry answers, complex rational equations calculator, algebra work problems wordsheet for free, SOLving nonhomogeneous second ODE,
vocabulary power plus for the new SAT book 4 lesson 1 quiz answers.
Free mcdougal littell algebra 1 worksheets, online pre-algebra calculator, "java code" linear programming, solve by elimination calculators.
Example of a math trivia, Rewrite each expression using a radical and simplify when appropriate., learn algebra free.
How to solve mixed numbers, help in powerpoint presentation mathematica equation, powerpoint showing multiplying decimals, 9th grade math worksheets, simplifying expressions containing radicals.
Freeworksheet, CHEMICAL EQUATIONS animation, dividing decimal cheater, SOLVE 6TH ORDER EQUATIONS JAVA, free maths sheets+yr1, equation of vertex, year 7 free maths test.
Comparison between negative & positive integers, glencoe algebra 1 the four digits problem answers, Solve a formula for one of the variables, rounding decimals 6th grade worksheets, online copy of
Houghton Mifflin Companys Prealgebra.
Aptitude question answer, completing the square using TI-83, graphing linear interpolation TI-83 calculator, the highest common factor between 142 and 148, free math powerpoints for graphing
calculartor, ks3 year 7 free worksheets.
How to solve for y-intercept, what is the difference between square root and positive square root, algrebra terms, Class VIII test papers, calculator needed for college algebra, free 8th grade
english worksheets online, use free online graphing calculator ti 83.
Decimal to fraction to decimal worksheets, absolute power in ti 83, Converting mixed fractions to a decimal, Lowest common denominator in algebra, math pre test grade 9.
Answers to algebra 2 homework, how to use ti83 graphing calculator activity worksheet, adding and subtracting test for grade one, find slope of line on ti-83 plus, college algebra sample problems,
free pdf download previous gre question paper.
• Prealgebra, 3rd edition by Blair/Tobey/Slater, published by Prentice Hall ebook, glencoe california algebra 1, solver for matrices using complex numbers, algebra, EASY WAY TO LEARN MATHEMATICS,
Holt Algebra 1 homework help, ti 84 rom download.
Mcdougal littell pre-algebra online textbook, hyperbola graph, rules in adding a fraction, simplifying radicals calculator.
Importance of Algebra, properties of addition worksheets, contemporary abstract algebra solutions, calculator for adding and subtracting negative and positive numbers.
Adding square roots calculator, remainder divisor calculator, radical equations worksheet, "online honors geometry book", factoring polynomial cooperative learning lesson, solving slope ti 89,
glencoe; science; for slow learners.
Top selling algebra software, List of Fractions Least to Greatest, adding and subtracting big numbers lesson, online algebra 2 problem solver, adding and subtracting signed numbers worksheet, math
properties worksheet, aptitude test - agarwal - free download.
Algebra sequences, algebra structure and method software, difference quotients solver, 7th grade mathmatic problems.
Calculator on root, beginning algebra proofs, Scientific Calculator(fourth roots), McDougal Littell Geometry texas addition answers, what is a product and a factor in a math problems?, moving a
square root equation.
Website where teacher can view all pages of prentice hall algebra 2 book online for free, matlab nonlinear fit, site that assist you with college algebra , cost accounting tutorials, converting a
mixed number to a decimal, solving term number general rule.
Elementary Algebra, 6th Edition, by Mark Dugopolski., adding and subtracting integers interactive activity, least common denominator calculator online, log base in ti 83, cheat sheet ratio scale high
Boolean algebra exponent, fifth grade decimal graphing worksheets, advanced algebra grade level, iowa algebra aptitude test power point, adding positive negative worksheets.
How to calculate intersections on ti-83 calculators, introductory algebra textbook, california 7th grade math placement test practice, holt texas textbook algebra 2.
Online Reviewer for Statistics and Algebra, addition and subtraction of fraction worksheets, online pre algebra extbook by holt, solving complex polynomial for x, download Aldebra font, multiply and
divding intgers worksheets.
Radical expressions word problems, example of incomplete quadratic equations by the square root method, adding subtracting decimals jeopardy, subtracting fractions - free worksheets for children,
algebra one common denominators, free e-book on accounting.
Holt Physics Answers, free algebra graphing program, 2nd Order Linear non Homogeneous ODE.
Adding and subtracting with negative numbers worksheet, how to simplify expressions calculator and description, online practice printable ged tests, dividing mixed decimal.
Write each subtraction expression as an addition expression. Then add., simplify radical form, Maths question bank for 6th Class.
Literal equations worksheets, TI 83 Calculator Programs, "advanced mathematical concepts, merrill", answers.
Free algebra for dummies online, mixed number to decimal converter, prentice hall mathematics algebra 1 online glossary, Introductory math for economics worksheet problem set, DISTRIBUTION AND
How to subtract fractions by regrouping 3 1\4- 3\4, solving fractions with fractional exponents, MATH TRICKS FOR THE TI-83 FOR SAT.
Convert Decimal To Fraction, how to simplify with negative fraction power, math order of operations distributed property, precalculus powerpoint larson.
Solving Elementary Partial Differential Equations, factoring a cube root function, 25 essential expression worksheet.
Pre algibra, prentice hall algebra 2 ,ca adition, multiply and simplify with exponents, prentice hall algebra 1 online workbook, addition and subtraction leniar story problems, balance algebraic
equation ratio.
Adding Subtracting fraction Integers, expressions hardware requirements, graphing calculator holt, saxon algebra 1 answers, Algebra Basic Steps, sample algebra math problems, algebra with pizzazz
answers for worksheet 161.
6th math probability formulas, physics 3rd edition james walker 2007 notes answers, intercepts with square roots, calculator that can solve large problem.
Prentice Hall answers, download c answer book, 2nd order nonhomogeneous, solve leanear equations using matrices, downloadable third grade math sheets.
Downloads to TI-84, square roots conjugates, answers for Saxon Algebra 2 book, usable online ti 83, HOW TO SIMPLIFY DECIMALS TO FRACTIONS IN SIMPLEST FORM, Solve the system of equations using the
substitution method with a calculator, Algebra Problem Solving Solver.
Maths sums for yr 8, mathematical word problems for 4th yr, glenco math workbook 7th grade page 2, maths work sheets eight year old, free algebra 2 homework solver, perimeters in distributive
Texas algebra 1 textbooks, what is root 3 as a decimal, decimals to radical calculator, Universal Algebra Applications.
Online general dividing calcuator, printable puzzle worksheet on adding and subtracting integers, code programming in svgalib, saxon math factor, how to solve third order polynomials, solving
expressions with exponets.
North carolina sixth grade math review, Divisibility Test Worksheet, worksheets for basic mathematics through application fourth edition, clericel aptitude question paper, free test CPT math practis,
difficult math worksheets for algebra II.
Teach yourself algebra, software algebra gratis, operations ANd positive and negative integers and free worksheets.
Glenco math 7th grade study guide, easy explanation for simple fractions, algebra 1 holt, automatic trigonometry simplification, solve for specified variable, latitude convert meters.
Common algebra mistakes correct, input negative exponents in TI-84, graph complex linear equation.
Rules on adding,subtracting,multiplying and dividing, convert 9 digit number, Scientific calculator Ti-83 online demo, online calculator for trinomials, clep college algebra, math GCF LCM crossword
Grade 6online learn .ca, slope given three points, simplify cubed square root, old Maths 11+ exams, free TI-84 emulator, orleans hanna pre-algebra, simplifying square rules.
Samples problem solving questions intermediate algebra, download ti 84 plus, maple solve 2 equation.
Free pizzazz worksheets, free repeated quadratic factors reviewer math, Free Math Solver, combining like terms and distributing review, mathemtica tutor.
Absolute values and fractions, calculator solving equations by multiplying fractions, How do the trends in physical properties of the halogens compare with those of noble gases? Compare boiling
temperature, melting temperatures, and ionization energies., multiply and dividing integers worksheets.
Free answers to math problems with work shown, dividing cube roots, simplified form of a square root.
Formula to find greatest common factor, prentice hall algebra 2 answer key, find the greatest common factor of 52 and 81.
Free online scientific calculator ti 83, how to factor radical expressions where the denominator is a radical numerator is whole number, 3rd grade math sheets, algebra question answers.
Examples of math trivia, +Math skills sheet 2nd grade, add and subtract integers worksheets, algebraic errors in mathematics, square roots and log bases, solve second order polynomial, addition of
square roots and exact numbers.
Simple order of operation worksheets, axis of chemical equations example, simplification worksheet.
What is the least common mulitiple of 4 and 14, basic algebra for beginners, free step by step algebra simplifier, solve addition and subtraction of negative numbers, poems for linear functions,
combination permutation gre, TI Integral program download.
Algebra Answers, Greatest common factor of two monomials calculator, ti-83 3rd power, applet "production possibilities frontier", converting decimals into standard form, least common denominator
Answers to california saxon math intermediate 5, aptitude test download, free kids maths printable sheets, Math Scale, exercice de mathématique worksheet, "graph parabolas online", equations: square
roots with exponents.
Fraction calculator + java source code, greatest common factors table, x to a fraction power, ks3 practise printouts, 6th grade nc math review.
How to factor cubic quadratic equation, multiplying and dividing integers, Cube Root of 16.
Middle school math/course one/ chapter one/ practice workbook/mcdougal littell, download free college calculator, how do i enter a compound interest question into a ti83 calculator.
Foerster algebra 1 test answers, show me how to solve recursive formula, rational expressions solver, 4x4 inverse matrix worksheet, Combining like Terms Worksheets, systems of linear equations
graphing adding substitution disadvantage.
Multiply divide fractions, how to solve ratios, solve linear equation using slope and y-intercept, solving 2nd order differential equation example initial value problem.
Divisor poem, holt math book algebra key code, free sample papers for class 11th, holt california algebra 1 answers, simplify the fifth root of -1215.
Gre combination and permutation, Multiplying integers grade 8 worksheet, what is the method to square roots, Glencoe Physics Principles and Problems Problems and Solutions Manual.
Worksheets using exponents, adding and subtracting integers practice, Radical Square Root Calculator Online.
Algebra Software, free easy steps to help with algebra & pre algebra, monomial calculator, formula from fraction to decimal, online aptitude test downloads, solving quadratic equation on ti89, free
maths revision sheet, area.
Quadratic equation of 11th std, JAVA EQUATION SOLVER POWER 6, prentice hall mathematics algebra 2 teacher addition, glencoe algebra.
Turning .86 to a decimal, factoring equations calculator, multiple variable fraction equations, advanced 7th grade math free downloads, factoring fractional and negative exponents.
Quadratic inequalities graph, how to factor quadratic equations up to 4th power, long equation calculator, examples of the divison ladder for GCF, glencoe algebra 1 teachers edition.
Square root approximate value calculators online, polynomial factoring, Best homeschool Algebra II course.
Hard algebra questions, gre factorial combination question, entering exponents on TI-84 Plus, 9th grade math practice, multiplying absolute values fractions, 7 grade fractions online mathtest.
Free online algebraic fraction calculator, pre algebra expression for each quantity, precalculus fourth edition homework helper.
Simplifying rational expression online solver, negative parentheses square root, aptitude test books + free downloads, Order of Operations helper, modern algebraicexpression problem with solution,
solve rational inequality ti 89, worksheets for divisibility rules for 2,3,4,5,6.
Simplify complex rational expressions, free math puzzle sheets to print, Simple equations activity, graphing calculator practice worksheets.
How to write and solve for x using the TI-83 plus step by step, software for solving simultaneous equations, Calculating Bearings in Maths.
Mcdougal littell answers, boolean algebra for dummies, LCM GCF SOLVED WORKSHEET INDIA PDF, 6TH simplifying algebraic expressions, 1 problem solving of subtraction and addition example, how to work
8th grade algebra problems, square root simplifying calculator.
6th class sample papers, free elementary algebra tutorial, MULTIPLYING/DIVIDING SIGNED NUMBERS WORKSHEETS.
Quadratic equations of hyperbolas, T1-83 plus calculator-online, simplify addition under square root, step by step linear reression calculator.
Computer calculator decimal into fraction conversion, Algebraic expression solvers, graphical approach to college algebra 4th ed ch 1.
Free 11+ exam papers, solve the equasion, square root properties, trinomial calculator.
Mcdougal littell geometry answers texas, AJmain, Graphing on the coordinate plane video, cubedpolynomials, download saxon math 87.
Simplify 25 times square root of, combinations worksheet third grade, free rules for factoring monomials, worksheet on translate numbers among standard, expanded, scientific, and exponential forms.
Dividing Decimals Worksheet, free sample paper of aptitude test of hewitt associates, simplify square root of fractions, multiplying powers, Practice adding, subtracting,multiplying, and dividing
fractions, adding integers worksheets, prentice hall algebra 2 teachers edition.
Algebra with pizzazz answer key, algebra 1 prentice hall book online for free, positive and negative fraction addition and subtraction, quadratic inequalities worksheets.
Calculator to factor a cubed equation, Simplifying Rational Expressions Step by Step, free Adding & Subtracting Integers worksheets, modeling adding like terms, aptitude question papers.
Exponent properties worksheets, SETS-VENN DIAGRAMS, Math Worksheets with Variables, answer help to prentice hall pre-algebra, solving equation of ordered pairs quadratic, math solving program,
download accounting books.
How to calculate least common denominator, convert square metres, grade 8 algebra au, Practice TAKS Sheets for 4th Math, aptitude model questions.
Quadratic Formula TI-89 calculator, exponents and square roots, application of permutation and combination, how are radical equations used in everyday life, algebra calculator with fractions.
Math formula percentage, adding subtracting multiplying and dividing decimals worksheet, how to find missing integers.
Calculate rational expressions, roots in radical form, printables math practice for beginners.
Chapter 2 intermediate algebra for college students, how to convert fractions and decimals formulas, algebra structure and method online, class 7 maths exercises + india, Algebra Common denominators,
Glencore Algebra 1 book.
Online algebra calculator downloads, 9th grade math book texas, how to combine like terms in 8th grade, introductory algebra problems example, free science worksheets for 8th graders, solve for x
with the TI-83 plus, alegebra problems.
Free 8th grade math printable worksheets, long division polynomial solver, free online 12th matric maths physics and chemistry question papersl, algebraic expression lesson plan video, texas ti-84
plus games, c apptitude questions and answers with explanations for free download, Converting 10 digit Sting to Integer java example.
Radical equation used for real life, maths sums for school with a multiple choice answer, geometry cheat sheet, calculator that solve rational problems.
Changing mixed numbers to decimals, algebra 2 online calculator, solving algebra no numbers, simplifying exponents in an algebraic equation, adding and subtracting integers, free mathematica 5
tutorial, free online trinomial solver.
Discrete Mathematics and its Applications download, in what year was the mathematical concept of devision invented?, free algebra word problem solver, powerpoint for simultaneous equation.
Algebra variables with squares, Algebra number sets evaluate, trivia questions chemistry worksheet, printable factor tree.
Cost accounting books free online, how to calculate square metre divided into lineal metre, introductory algebra 10th edition answer key, how is multiplying by the reciprocal to divide simalar to
adding the opposite to divide.
Exponents of multiplication, algebra test cheating tools, decimal to octal calculator fraction, second order homogeneous differential equation, algebra word problems 5th grade basic, BBC KS2 YEAR6.
Greatest common factor with more than two variables, 3rd Grade Entrance Math Test, Workseets on Integers, square root algebra help, calculate gcd, polynomial java program, slope of quadratic.
Mcdougal littell algebra 2 worksheet answers, glencoe algebra 1 workbook, add & subtract decimals 5th grade, free mathematical problem solver, systems of linear calculator program ti 83, distance
formula with square roots, write in simplified radical form.
Solve variable equations in excel, algebra bracket calculator, prentice hall algebra 1, comparing decimals calculator, how to solve log math problems, online t-89 calculators.
How to do a cube root on a ti-83, write a simple program that find factors with sets "prime numbers", graphing app non functions, "mind power math high school" download, rearrange algebraic
expressions worksheet.
What's the least common multiple of 34 and 52, free ti 83 emulator download, multiplication /division algebra, algebra 1 workbork answers, free algebra calculator with square root, free algebra 2
online tutors.
Ontario grade 10 math help, how to use a ti-83 graph lines, math addition test 1-10, free math pritables 8th grade, rational function difference quotient simplify], evaluate expressions worksheet.
What is prime-factored form, worksheets variables, ppt aptitude questions, Simplifying Expressions Involving Rational Exponents, free 8th grade worksheets.
Introducing algebra to students, fractions for idiots, year 6 decimal maths sheet.
Adding and subtracting fractions activities, help with algebra solving for mixed numbers variables, GMAT permutation problems, ti-84 interpolate.
Find greatest common factors for 26,13,39, answers to college algebra problems, software algebra, how to multiply a root by an unknown variable, aleks workbook, algebra, online simultaneous equation
calculator, how to subtract numbers in the radicand.
Sientific algebra solver, world's hardest math problem, Pre Algebra with Pizzazz book AA, Algebra1 florida, easy steps to algebra & pre algebra, anwsers homework.
Fraction power, free online algeb ra solver, ti 83 emulator rom, parabola online graphing calculator vertex, pre algebra textbook refence sheet online, Integral calculator by substitution.
Example of linear equation real life, algebra calculator program, TI-89 differential equation to laplace, answer your algebra question, radical caculator.
Free PMF PROBLEM-Solver, TI calculators and economic analysis ppt, Crime Scene MATH WORD PROBLEMS and answers for algebra, Free Algebra Solver Using Substitution, pre- algabra work sheets, ONLINE 9TH
Convert bar notation decimals to fractions, online calculators to solving perfect squares, how to make visual aids applying quadratic equation by factoring, common denominator square root.
Online t-83 calculator, Free printable 3rd grade Math papers, Methode of completing the square, Download Aptitude Test for grade 9, free math test paper for primary 5, difference quotient problem
Algebra applications forensics, college algebra problems, simplifying radicals in the denominator, decimal practice sheets for fifth graders.
Log, ti-83, 8th STD. English practice papers, LCM C Programming, rules of rational expressions of addition and subtraction.
Algebra substitution, how do i multiply and simplify fractions with exponents, solving graphing questions online', download ti 83 plus rom.
Multiply and divide 3 digit numbers, absolute value math gcse, fifth grade calculator activities.
Holt algebra 1 textbook answers, prentice hall physics answer key, 9th grade english syllabus with mcdougal littell.
Online evaluation of quadratic equation, multiply radical decimal, newton multivariable root matlab, step by step calculator, free TI-89 calculator download, glencoe algebra 2 workbook answers.
Interactive completing the square math site equation of a circle, TEXAS 84 PLUS GAMES, adding algebraic equations.
Algebra clock problems, order of operations worksheets for 8th grade, Free Quadratic Worksheets, 11+ math free papers, california glencoe mcgraw hill algebra 1 teacher solution guide, foil solver.
2nd order homogeneous differential, entalphy animations, the answer sheets to the nie thinking mathematically part a, simple adding subtracting, Algebra: sketching and factorization of polynomials,
simultaneous linear and quadratic equations and inequalities, pre algebra for 6th graders online worksheets, 3rd exponent on ti-83.
Simplify: the square root of x^10+2x^4, 6th grade pre algebra, how to solve differential equations nonhomogeneous, decimal form of a mixed number, trigonometry speed and distance problems, adding/
subtracting scientific notation worksheet, how to solve two fifths plus one fifth.
Southwestern algebra 2 an integrated approach, solving differentials in matlab, how to teach permutations.
Symbols function in calculator TI-84 plus, free answers to college algebra graphs and functions, divisibility worksheet, LCM LESSON PDF, worksheets on algebraic diamond problems, year 9 algebra test,
year 12 algebra notes and questions and answers.
Where can i find the best math investigatory, FACTOR TRINOMIAL CALCULATOR ONLINE, Algebra 2 lesson plans.
Hoe to do compound inequalities on ti-84, cube roots for idiots, simplified radical, simplified radical calculator, how to figure out which odd numbered fraction is bigger, free 8th grade algebra
test, algebra equations absolute value worksheet problems.
McDougal Littell Algebra 2 even answers, free download of technical apptitude test questions and answers, Mcdougal Littell Geometry online answer key.
Holt texas algebra one book, 9th grade math homework, CUBE calculator.
Ladder method for Greatest Common Multiple, sample unit plan in fluid mechanics, how to solve algebraic problems, ti 81 how to convert decimals into fractions.
Checking a number between 0 and 9 in java, free problem solving worksheets for grade4, ti-83 plus linear equation, online algebraic fraction calculator, examples for scale math.
Ti plus emulator, ANSWERS TO holt middle school math course 3 algebra readiness, consumer math pre-tests, free cost accounting textbooks download.
How to convert bases with ti-89, algebra solve, solving equations fractions and two variables, exponents and radicals practice.
Complex operations solver, vector algebra questions, TI-83 ppc rom image, mcgraw hill sample north carolina end of year fifth grade math test, free math factoring polynomials machine, beginners
college algebra, factoring cubed roots.
Free worksheets for beginning algebra, ratio as a formulae, dividing algebraic equations.
Factoring rational exponents, solving linear differential equations in MATLAB, how do you find the equation of a function given two points, expressions containing square roots worksheet, saxon math
answer sheet, accounting books free download, graphing calculator download, quadratic equation.
Add subtract multiply divide integers, algebraic definitions, download ti-83 plus rom, differential equation solver matlab nonlinear, substitution principle of math 7th grade, adding and subtracting
decimals worksheet, Permutation Combination tutorials.
Quadratic equation with two variable, convert decimal to mixed fraction, graphing.com, set of first order equations matlab, simplifying calculator, let me use the teacher addition prentice hall
mathematics algebra2, free sample of third grade work of math.
Online factoring expressions, factorising quadratics calculator, adding, subtracting, multiplying, and dividing integers worksheets, easy tips to calculate maths, rules of summation calculator.
Check Division of monomials homework, Question papers+math+grade6, holt physics module 1, adding subtracting intergers, Blitzer college algebra 2nd edition lessons.
Slope of a line algebraic expression, calculus solving equations with fractions, online calculators with exponents, online calculator emulator, FREE elementary algebra:, algebra concept of functions,
implicit differentiation solver.
Write .089 as a fraction, tutor de algebra online gratuito, Holt introductory Algebra I, add scientific notation worksheet, combining like terms worksheet.
Adding real numbers that are in fraction form, printable algebra exams, printable third grade math sheets.
PEARSON PRENTICE HALL BEGINNING AND INTEMEDIATE ALGEBRA FOURTH EDITION, simplifying radical examples, solving newton using matlab.
Multiplying and dividing with metrics, java program to implement polynomial series, adding and subtracting fractions calculator online, Holt algebra 1 chapter 1 lesson 3.
Gr. 10 integers made easy, greatest common divisor equation, difference between greastest and the least numbers in the set of data.
Free mcdougal littell geometry answers textbook, holt algebra 1 + texas 9th grade, free elementary algebra videos.
Input fraction exponents in TI-84, algebra 1 book texas holt, prealebra, standard form ax+by=c meaning of a and b, ti-89 laplace transform, getting answers for algebra expressions.
Solving college algebra problems, CALCULATE ADDING PERCENTS, algebra online evaluate, i don't understand how to solve simultaneous linear equations using graphical method., FREE SAMPLE PAPERS FOR
CLASS 11th, subtract positive and negative fractions.
Decimal to square feet converter, dividing ellispes into quadrants, integrated algebra 1 extended test bank, dividing and multiplying fractions practice test.
Rational expressions online, exponential expression examples, slope of quadratic equation.
C# hex tı decimal, Division Problem Solver, subtracting integers worksheet, ti 89 integrate polar, mixed numbers to decimals, nonlinear simultaneous equations solution.
Factoring cubed binomials, online scientific calculator with 2nd function, base two number system interactive activities, algebra 1 for 9th graders, calculations with imaginary numbers in excel,
algorithms worksheets brackets year seven, holt algebra 1 answers.
CONVERTING DECIMALS INTO STANDARD FORM, suare root of one, variable exponent expressions, combining like terms solver, factorization online, easy density worksheets, teach me elementary algebra.
Free examples of college introductory algebra, denominator calculator, iowa algebra aptitude sample test.
Least common denominator calculator, free texas printouts, learning alegebra, log form on ti 83 calculator, download the first chapter in the basic mathematics book by bittinger 9th edition.
How to teach yourself college algebra, intermediate algebra 3rd edition book by alan tussy, quadratics games.
Algebra 2 and integrated approach power points, chapter 1 lesson 8 simplify each expression 9th, how to learn algebra worksheets, software.
SQUARE ROOT TO EXPONENT, online TI-83 emulator, solve for x calculator.
Ti-83 solving a system of equations, download Elementary and Intermediate Algebra 4th Edition, factoring equations cubed, subtract by 1 school time test, expression calculations "division", bar
graphs worksheets.
Real life example of quadratic inequality, reduce index of each radical, multiplying and divinding integers worksheets, alegbra help software program, free mixed number calculator, math factor
calculator, Variation and Exponents ppt.
Cost accounting learning programs, Complete Systems Analysis: the Workbook, the Textbook, the Answers pdf, introductory to algebra math answers.
Expressions calculation order, why study algebra II, local extrema roots of quadratic factor, algebra 2 calculators step by step.
Holt Geometry Book 2007 answers Chapter 1 Section 1, Holt Algebra 1 TAKS and TEKS, show me how to solve a property of exponential expression, adding, subtracting, multiplying, dividing fractions
Radical perfect square worksheets, worksheets in solving quadatic equations by factoring and completing the square, "how to reduce the index of the radical", solving simultaneous equations on ti 89,
find free practice work for a 7th grade.
Glencoe algebra 1 practice workbook, solving algebra problems, adding and subtracting positive and negative numbers, glencoe + mathmatics algebra1 answers, website of algebra 2 heath, polynomials
cubed, what is the difference between finite math and college algebra.
Laws of exponent for addition and subtraction, chemical equations balancing steps, use the free fraction calculator online, Math properties worksheets, multiply and divide integers interactive, "two
linear equations" factorize parabola OR ellipse OR hyperbola.
Free clep math tests, adding and subtracting decimals, positive and negative, 2nd order differential problems, square root simplify calculator, Elementary Algebra Problems finding the value, free
ebook for time and distance aptitude, square plus equation solving.
How to find square root for kids, who invented algebraic expression, square root exponents, excel fractions calculator, what is the name for subtract, Algebra formula sheet.
Algebra 1 help, differential equation calculator, Free Printable Algebra Worksheets, "solve equation" given variables, to simplify the evaluate expression calculator, ti 84 rom.
Free 9th grade printables, T1-83 plus calculator games, how to calculate exponent of 4 function calculator, first grade number line worksheets, order of operations solver.
6th Grade Factor trees of 56, multiple choice factoring test, ti-83 x cube, Free Aptitude Tests Downloads, dividing calculator.
Math question solver, prentice hall mathematics online book, adding and dividing negative and positive numbers, absolute value of fractions.
Glencoe algebra solutions, dividing integers by fractions and decimals, answers to algebra 1 worksheets, free pre algebra worksheets equations, math factoring calculator.
Least Common Denominator Calculator, factors and multiples exercises, greatest common factor with variables, 1 LINEAL METRE =, tatterson aptitude question paper, algebra eqations, multiplying
integers with a power.
<<free printable precalculus questions>>\, rules of adding subtracting dividing and multiplying negative and positive numbers, cubed root on TI-83, aptitude question and answers, repeated quadratic
factors reviewer, decimal,percent, fractions converter calculator, free Calculator downloads for simplify radical expressions.
Rounding and estimating sums 5th grade math, ti 84 simulator, solve polynomial automatic, equation solver, epidemiology MCQ pdf, algeba solver.
Challenge skills and applications algebra I chapter 3, free algebra 1 help, G-MAT FREE MODEL QUESTION PAPERS.
Rules and example of changing fraction to decimal, algebra 2 factor formula, ks3 algebra free work sheets.
Find Least Common Denominator Calculator, sat question papers biology free download, polynomial simplifying calculators online, 7th grade math pre-test, mathematics percents formula, graphic
calculator online print, Easy steps to learn LCM For kids.
Softmath.com/a1/polynomials_polynomials.htm, how we write a programe in java to diplay * in squre, answers to my conversion homework.
Labor day lesson +6th grade, Algebra 1 Holt, write a mixed number to decimals, intermediate algebra cheat sheet, fractions to decimals calculator, how to do square root property problems, pg 25 85
holt algebra texas.
Free maths 11+ practice paper, online solve x^3, DO ALGEBRA HOMEWORK, 6th grade least common and greatest common factors answers, free online calculator with square root function.
Algebra 2 Saxon answers, "mcdougal littell" online assessments, greatest common factors worksheet, quadratic inequations, parabola, harcourt grade 9 ebooks, equation factor calculator.
Adding and subtracting decimal worksheets for 6th grade, squaring parenthesis, automated entrance exam using vb6.
Mac algebra, algebra 2 mcdougal littel questions, algebricexpression exercise, simplifying algebraic fractions calculator, ready to use step by step algebraic calculator, how to do algerbra 1.
Simplify square root with ti 30x, how do you express the square root of 25 as a fraction?, multiple choice question "of equation" solved by algebraic methods, Math First Grade papers to print.
Nonlinear differential equations, simplify linear equations, fraction radical calculator, writing the domain of a parabola, heath algebra 2 integrated approach tests.
3 times the square root of 2, solve and graph simple inequalities, design printable ged math test, programming values in a graphing calculator, algebra of ninth standard.
1-5 skills practice the distributive property teacher book, stuents' understanding of quadratic, convert to radical form, completing the sqare.
Free math homework sheets for 5th grade math, What is a scale in math, Holt algebra 1 answers, free algebra connections answers.
Solving multivariable linear systems, solving nonlinear differential equations, solving math problems(adding,subtracting,multiplying,dividing fractions).
Finding slope of line on graphing calculator, Third Grade Algebraic Expressions PowerPoints, mathmatics/Algebra 1.
Online graphing calculator with table, factoring two cubes, pearson prentice hall algebra 1 answer key, quadratics calculator show working, Free aptitude test papers, middle school math pizzazz b
How do you solve logarithms?, algebraic fractions solving, variables and expressions worksheet for 6th grade, how to use graphing calculator to find graph equation, answers to skills practice solving
multistep equations worksheet.
Equation involving radical calculator, solving equations with exponents and multiple terms, calculatar free answers to any math problem, teacher edition glencoe mathematics algebra 1, ALGEBRA,
fractions least to greatest.
Root solver, TI 84 plus emulator, math trivias for kids, how to do quadratic functions on your calculator.
Find least common denominator calculator, free printable worksheets and anwser sheets for 8th grade pre algebra, combining like terms solving equation calculators, dividing polynomials calculator,
greatest common factor songs.
"first order" "non linear" nonhomogeneous, step-by-step (basic physic problems) conversions, prealgebra by plato, alegbra made easy.
Free positive and negative numbers worksheets, statistical worksheet, binary numbers a step by step free workbook, free downloadable boolean algebra books, holt algebra II, free printable 6th grade
worksheet exponents.
Parabola online graphing calculator, how to use exponents on a TI-83, softmath reviews, polynomials for dummies, "addition properties" algebra "3rd grade".
Partial-sum addition method, algebra for beginners, ti 89 accounting programs, Lesson Plans 8th grade math, algebra software, dividing square roots within fractions.
Math book answer prentice halls mathematics course 1 pgs 21 answer, equations, multiple variables, printable math placement test basic, algebra equations multiple variables, when to use quadratic,
easy mathematics primary school LCM, java code for sum of the integers in the number.
Free softmath, FreeProblem Solver For College Algebra Problems, TI 83 buy australia.
Online square root maths calculator, homogenous particular second order, ti 84 Cramer's rule downloads, boolean algebra software.
Sample Aptitude test paper, only answers to Prentice-Hall,Inc skill 5:Practice worksheet, how to factor a cube, discrete mathematics and its applications 6th download, how to calculate the distance
on a TI-84 plus.
Ebooks permutations & combinations, ti calculator convert decimal to fractions, how do i subtract variables from integers?.
Adding positive and negative integers worksheet, four-term polynomial solver, cubed polynomials, simplify algebrae equations polynomial.
TI-84 Calculator Download, Aptitude question paper sample, keys to factoring expressions algebra.
Algebra brush up quiz, diving by decimal worksheets, what is the square root of 121 plus 144, How to Use a TI 84 Plus Calculator.
Algebre sheet, how do i work my casio scientific calculator, common algebraic equation uses, grade 11 algebra pre-tests.
Year 8 maths tests online, worksheets on writing formulas for patterns and sequences, how to solve range of an equation, square root as exponent, math ivestigatory projects, free 11+ maths paper.
Two step equation with negative numbers and fractions, Math rules cheat sheet, Algebra calculators for simplifying expressions, need to simplify a radical exponent problem, glencoe indiana test prep,
free calculator for college algebra, radical form.
Sove algebra in excel, helpwithalgebra, class of polynomial order 3, 9th algebra lesson plan, 3rd grade work.
Second grade math sheets, y intercept finder, "where i can" download the java for dummies cd-rom, simplify radical expressions solver, "illinois algebra 1"+textbook, equation with subtraction and
division that equals 10.
T183 graphing calculator, adding and subtractin integers, pre-algebra worksheets, foiling equations FACTORING, intermediate algebra review cards.
Factors worksheet, dividing polynomials on a TI-84, ti 84 emulator, algebra 1 quizzes for free, download puzzle pack for ti 84 plus.
Linear first order differential examples with sin, Find the greatest common factor of the two expressions caculator, square root, algebra help, multiplying with unlike denominators, online graphing
circles, algebra help cubes.
Online textbook for Mcdougal littell texas edition, Basic concepts of Algebra worksheet answers, glencoe geometry calculator.
Bing visitors found our website today by entering these algebra terms:
│how to cube fractions │formula for working square root math problems │
│activity for common fractions in decimal form │.ppt+Program in C to find out the nature of roots of quadratic equation │
│hyperbola grapher │apptitude question with answer │
│alegbra for dummies │Texas Algebra 1, Prentice-Hall, online textbook │
│calculus seventh edition answers to even problems │simplify the radical expression 1/2 │
│common denominator + variables │java fraction calculator program │
│pre algebra subtracting integers │least common method │
│Adding Square roots with variable │simplify cube roots │
│error 13 TI 86\ │algebra practice year 10 │
│common denominator solver │conversion meters to lineal meters │
│algebra problems │Beginner permutations combinations │
│real life linear equations by substitution │pritables math 8th cordinates │
│holt algebra 1 books and help with excersises │substitution method algebra │
│examples of inequality math problems daily life │algebraic thinking fifth grade worksheets │
│free algebra 1 materials │algebra and percentage equations with variables on both sides │
│Free Exponent Worksheets │scientific notation worksheets │
│what is the minimun of this promble (X)+2(Y)<6 4(4)+5(1)=21 4(1)+5(4)=24 4(0)+5(5)=25│graphing calculator program factoring │
│algebra 1 quizzes and worksheets for free │lineal metre to metre square │
│the slope of a quadratic equation │answers for mcdougal littell books │
│online scientific calculator with inequalities │algebra square roots test questions │
│adding and subtracting intergers in the same operation │least common denominator calculator │
│how to solve square root radical expressions │absolute value fractions simplifying │
│online abstract or modern algebra class │rational expression calculators │
│subtracting integers challenge word problems │simplify exponential expressions │
│adding polynomial sample worksheets with answer keys │how do you square a decimal │
│downloadable games for ti 84 plus │ALGEBRA REVISION YEAR 10 │
│nonlinear simultaneous equations numerical │equations of lines worksheet graphing │
│middle school algebraic equations with absolute value worksheets │Adding, subtracting, multiplying and dividing with parenthesis │
│MATHEMATICA COS CUBED │subtracting integers in real life │
│math problems.com │differentiate in time ti 89 x(t) │
│basic power algebra worksheet │free online math programs │
│algebra expression simplifier │simplifying radicals with variables │
│notes to take about 5th grade algebra │free printable 10th grade worksheets │
│Maple7 download free │algebra help for solving square roots │
│what is a factor in a math problems │Examples of Verbal Problems │
│"MATLAB to C" converter "free software" │Puzzle for solving quadratic equation │
│solve for variable calculator │the number in a power that is used as a factor │
│free math worksheets integers │algebra 2 worksheets (sequence problems) │
│maths workbooks │prealgebra third edition by blair │
│how to do powers and square roots │solving perimeter worksheets │
│9th grade algebra 1 │equation of a nonlinear function solver │
│list of rules to solve algebraic expressions │+Free Mathematics Videos │
│Elementary Intermediate Algebra with Aleks user guide University of Phoenix Custom │third root │
│TI -84 online calculator application │algebrator screenshots │
│math algebra poems │solving equations with fractions and variables, general algebra │
│quadratic formula in making baskets │texas instruments calculators decimals to fractions │
│adding multiple integers │ged test/nyc │
│how to add fractions on a TI-83 │extracting squareroot │
│Solving Square Roots │california math book answers │
│SOLVE THE FOLLOWING PAIR OF SIMULTANEOUS EQUATIONS with exponents │fun integer games online for kids │
│roots of third order equation │algebra graphs │
│how to solve equation denominator │excel simultaneous equations │
│Formula to Convert Decimal to Fraction │class 6 th sample papers │
│simple probability and odds worksheets │Arithmetic & Elementary Algebra equations │
│(cube root of 2) squared, simplified │9th grade algebra problems │
│solve fractions │chemistry clep sample paper online free │
│solve prealgbra problems │free homeschool worksheets for 8th grade │
│intermediate algebra 5th edition with the software │High School Physics Test Questions 9th grade │
│ti 84 rom downlad │intermediate algebra 2 college textbook │
│CONVERT DECIMAL TO RADICAL FRACTION EXPRESSION │solver ti83 │
│simplifying exponent expressions solver │free help online algebra set of numbers │
│Prentice Hall Mathematics Algebra 2 solution │operations with rational expressions │
│free maths exercises for beginners │inventor of the quadratic equation │
│real world app. for algebra │"T1-83" graphing calculator app │
│Numerical and pre algebra practice questions │SIMPLIFY RADICALS ICON ON COMPUTER │
│addition and subracting exponets │linear algebra solutions otto │
│mathmatic equasion │free help with inequality problems │
│simplif divide integer by square root │eamples of radical equations applied to the real world word problems │
│square root principal quadratic │online T183 │
│incomplete quadratic equations by the square root method │3rd order polynomial │
│conversion .83 to fractions │ti-89 scientific notation │
│saxon algebra 1 and michigan standards │variable exponent │
│solving quadratic equation lesson plan │TI 84 FIND SLOPE │
│Online Elementary and Intermediate Algebra Book 4th edition │high school mathematics algebra 2 test generator │
│pre algabra │algebra standard form vertex form factored form │
│glencoe advanced algebra california │hardest type of maths equation │
│compatible numbers math problem solver │intermediate algebra tutors │
│what is the conjugate of the square root of -15 │free integer worksheets │
│Ladder Method │how to you figure out a fraction that comes between two other fractions │
│practice algebra subtracting and adding integers │examples of math trivia with solutions and answers │
│free beginners algebra lessons │Finding the Least Common Denominator │
│Integer Worksheets │simplifying inverse expressions trigonometry │
│free printable math word problems and brain teasers │mathmatical pie │
│nonlinear differential equation solution │simplifying square roots worksheet │
│math trivia with answer for elementary │What are the four fundamental math concepts used in evaluating an expression│
│solving 2nd order plynomials. │solving non linear differential equations │
│quadratic equation+graph │college algebra clep test │
│teach yourself college │rules in applying addition of fractions or rational expression │
│adding and subtracting negatives │simple algebra solver │
│factoring square root of 250 │How do you solve trigonometric addition formulas? │
│multiplying fractions with unlike signs │how to solve the square root of negative twenty │
│aptitude test question bank download │Order Of Operations Practice Worksheets │
│solutions Abstract Algebra Dummit Foote │Mckeague Elementary & Int Algebra │
│power point systems of equations │algebrator │
│free college algebra problem solving │free homework help 9th grade │
│4th root calculator │7th grade pre algebra worksheets │
│pre algebra questions help for 8th grade examples │Free Printable Math Sheets for adults │
│radical form in scientific calculator │adding subtracting multiplying and dividing fractions worksheet │
│2664831 │calculate scale factor for dummies │
│ti-84 plus games │quadratic formula with square variables │
│free aptitude ebook │free 10th standard solving the maths problem online │
│5th grade science equations │algebraic equations square root │
│"ratios worksheet" primary school │math fractions.com │
│free worksheets on eauation and property of equations │are addition and subtraction factors? │
│basic algebra problems questions │9th grade quadratic nth problems │
│decimal fraction to octal calculator │glencoe algebra 2 answers │
│online TI 84 plus │Free Algebra Answers │
│algebra worksheet │factoring fraction algebraic equations │
│substitution principle math │free intermediate algebra help │
│functions multiplication solver │rewriting fractions in decimal form │
│general equation for parabola │how to get rid of a negative denominator │
│adding radical expressions │math-need help with 8th grade algebra │
│Math for dummies │poems with math words │
Google visitors found our website yesterday by using these algebra terms:
• turning decimals into mixed numbers
• exam papers in middle schools for downloading
• prentice hall algebra 1 answers
• how do you convert mixed fractions to decimals
• free online Algebra Equation Solver
• comparing integers for loop java
• programming using gnuplot
• math solver that solves consecutive +integer problems
• WWW.CALCULAS.EDU
• radical expression online
• completely free ged testing online for Kentucky
• how to cube root on a ti-83 plus
• how to convert mix number to decimals
• prentice hall algebra 1 textbook answers
• Expanded Notation Math Worksheets for the 6th grade
• rational expressions and equations calculator
• free answers to math equations
• how do you use TI 83 need step by step
• algebra +fun worksheet
• "worksheet for LCM"
• ti-84+ statistics
• solving equations with multiple square roots
• good aptitude questions
• solve non homogenous partial diffrential equation + Green's function
• The length of a rectangle is twice the width. The area is 512 yd find the length and width
• adding 3 numbers worksheet
• multiply square roots calculator
• (c program) program to accept 3 digits integer,reverse it,calculate the sum of digits & calculate the sum of integers and reversed
• learn how to do algerbra and fractions for 9th graders on the computer
• online usable graphing calculator
• simplifying roots with no decimals
• free online algebra solver that shows work
• Algebra 2 answers from book
• step by step learning basic trigonometry
• physics formula worksheets
• glenco math workbook 7th grade printable sheets
• Rules of Adding,subtracting,multiplying,dividing integers
• SUBSTITUTION METHOD
• software companies aptutude test questions
• percentage equations
• solving absolute value equations
• square root an imperfect square
• graphing linear relationships
• 9th grade algebra 1 honors SOLVING EQUATIONS BY USING MULTIPLICATION AND DIVISION
• rational expression calc
• free downloads algebra worksheets
• equations,inequalities, and problem solving with positive and negative integers
• mathimatics trivia
• rules of graphing an equation or an inequality?
• factoring cubed rational expressions
• school level + mathematics apptitute questions
• is factoring used in trignometry
• plato interactive mathematics hacks
• what is consumer math
• factoring binomial equations
• math terms intermediate algebra vocabulary
• t-89 calculators online
• adding more than two mixed numbers
• Algebra Trivia Questions
• cubic function on t1-83
• simplifying trinomials
• a running program c++ in solving simultaneously equation
• Algebra 1 power of quotient worksheets
• 'worksheets for manipulating roots in algebra'
• "Video Game" "Probability Study"
• Holt Algebra 2 lesson 1-2
• free worksheets on integers
• adding, subtracting, multiplying, and dividing integers worksheet
• polynomial factoring calculator
• online usable calculator with divisibility
• free algebra II learning material
• free pre algebra questions onlilne
• Developing skills in Algebra Book A Answers
• glencoe mcgraw hill answer algebra 2
• free elementarygeometry warm ups
• integers add, subtract, multiply, divide worksheets
• free algebra help now
• divide decimals calculator
• cube root ti-84 plus
• set theory "pre algebra" worksheets
• inequalities in mathamatics
• how to program quadratic formula into ti-86 manually
• middle school math with pizzazz book e answers
• Free Math Trivia
• california standars 7th grade pretest and review
• pre algebra with pizzazz answer key
• glencoe math 5th grade
• simplifying absolute value
• basic algebraic expressions worksheet
• how to solve easy square brackets problems in math
• eighth grade physics worksheets
• reducing radical expressions
• fraction calculator radical
• practice for 5th grade equivalent decimals
• Activities for Beginning and Intermediate Algebra by Garrison, Jones, Rhodes, 2nd Edition
• dividing polynomials activity
• convert decimal time to time
• online grapher velocity degrees
• finding zeros of an equation
• square root on circle equations
• ti calculator the to the 3rd root
• algebra calculator fractions
• how to solve for x in nonlinear equations
• algebra simplifying the denominator
• liner equation
• california mathmatics homework practice and problem-solving practice workbook grade 5
• free TI 84 plus graphing games
• use the square root property to solve the equation
• online Holt Algebra 1problems
• rules of addition and subtraction of fraction or rational
• how do you solve distributive properties to solve equations? Worksheet
• how to solve and simplify radicals
• algebra answers calculator
• holt geometry answers cumulative test
• linear equations made easy
• maths trivia algebra
• algebra 2 for dummies
• square root of variable
• how to do cube root ti
• adding three or more integers worksheet
• free algebra expression worksheet
• convert fraction to decimal notation
• 1st Grade Math Sheets
• free worksheet notation with and without exponents
• glencoe algebra 2 test
• distance formula program ti-84
• equation elimination calculator
• logarithmic solvers
• algebra least common denominator
• examples of mathematical poems about factoring
• TI-89 trig programs
• precalculus problem solver
• Conceptual Physics Fundamentals chapter 1 download
• boolean simplifier matlab
• history of pie-mathematical sign
• Holt California Algebra 1 Textbooks
• How Do I Factor 3rd Degree Polynomials
• c code for calculator - octal to decimal
• how to find cube root + ti 89
• beginners Physics Formulas
• how to differential equations in mathlab
• solve my quadratic
• multiplying and dividing rational numbers worksheet
• algebra review made simple
• algebra 1 formulas percents
• pre-algebra with pizzazz answers
• worksheet fractional integers
• radical equations worksheets
• 7th grade algrebra sheet
• Adding, Subtracting, Multiplying and Dividing Integers
• free worksheets using integers for 7th grade
• intermediate algebra, sixth edition by dugopolski, teacher's edition
• solving linear ODE MATLAB
• core java simple calculator program for square root
• euler graphing TI-84+
• primary 6 common entrance sample question papers
• free algebra calculators
• linear combination problem solver
• solving fractions with exponents
• Mathematics anwsers
• 2004 mcdougal littell algebra 2 book problems
• free algebra calculator
• solve equations for multiple variables
• solving hexagon algebra equations
• basic math marvin bittinger ninth edition
• freee gmat math test
• algebra problems examples
• calculators converting decimals into factions on calculator
• addition using distributive property
• glencoe accounting real-world applications y connections first-year course
• california middle school mathematics course 2 mcdougal littell workbook
• Rational Expressions And Equations Solvers
• mathematics trivia for elementary
• online alg inequality calculator
• how to solve properties of exponents
• vertex form of quadratic equation
• Math First Grade workbook papers to print
• integers and how to add, subtract. multiply, and divide
• solving quadratics by factoring
• logarithms text book free download
• what is the least common factor of 19, 14, and 4
• 8th grade math textbook prentice hall page 96
• order
• how to get rid of a variable in the denominator
• free math quizzes-8th grade
• nonlinear matrix differential equations
• mathquizes for kids
• free downloadable math exercises
• nc discrete math practice problems
• solving radical expressions, functions, and models
• factorization algebra perfect quadratic binomial
• how to do algebra
• 9th grade algebra ebooks
• tutorials on correlation college algebra
• addition, subtraction, multiplication equations worksheet
• free holt california algebra 1 answers
• greatest common factor sheets
• glencoe math tutorial distributive property
• Free Add Math Sheets
• 5th grade free printables
• simplifying exponents expressions
• accounting book online
• FREE ONLINE EXAMS FOR YR 6
• algebra sums for 5th grade
• teaching solving one step equations
• Order Operations Math Worksheet
• online factor trinomials
• online algebra worksheet
• simplifying expressions with variables and exponents
• factoring involving fractional and negative exponents
• square root property
• help with common denominator
• simplify equation
• ti 183 calculator download
• free 3th grade math worksheets
• check to see if a number is divisible by 7 java
• beginning algebra 4th grade
• equations with fractional exponents
• variables and expressions lesson plans
• math lesson plans for learning the coordinant system
• ti84 plus emulator
• free game downloads for ti-83 calculator
• dividing fraction sequence solving
• Give three examples of ratio and how to solve it
• free ppt algebra distance problems
• how do you use a ti-83 plus calculator for GCF formula
• aptitude questions with answer
• HOW TO DO ALGEBRA 1
• multiply and simplify radical expressions
• free ninth grade geometry 1 pretest pdf
• ti 84 emulator 8xp
• prentice hall algebra 2 book access code
• equations
• 9th grade calculations sheet
• Free Online Square Root Calculator
• prentice hall mathematics algebra 1 book online
• adding and subtracting with negative numbers in equations
• answer to math homework
• mathematical prove from farenhiet to celsius
• AAAMATH Square roots
• NCO sample papers for class 8
• free mathematics exams for primary schools
• finding the unknown exponent
• math-how to find lCM ?
• worksheet for rules for adding rational numbers
• decimal, 0-9 a-f
• repeated radical square root
• free precalculus Chapter 1 problem solver
• solving nonlinear differential equation having three variable
• convert percentage to decimal
• nonlinear simultaneous equations solution mathcad
• Alegebra
• greatest common factor with variables calculator
• java convert numbers to time
• scale factoring
• functions statistics and trigonometry chapter 13 answers
• simplify roots
• algebra calculating how many miles your car goes if you use y gallons
• introductory algebra 1
• simplified radical forms with 2 variables
• ti 83 "ROm code" download
• TI-84 calculator emulator
• free online ti 83 calculator
• free 7th grade english worksheets
• adding polynomials equations and answer keys
• prentice hall mathematics pre-algebra
• multiple trigonometry questions and answers
• physics word problems worksheet
• free maths worksheets high school
• create integer worksheet
• prentice hall mathematics
• ti 83 ROm code images download
• 3 simultaneous equation solver
• quadratic simultaneous equation solver
• algebra formulas
• percentage problems in aptitude
• numerical expressions cheats
• Prentice Hall Algebra 1 and Algebra 2 with Trigonometry teacher
• free online algebra solver answers
• simultaneous quadratic equations
• factoring trinomials generator
• absolute value test
• difference between an expression and equation
• Formula for Square Root
• z transformation ti89
• square root fractions
• TI-83 plus--how to do fractions
• slope from quadratic
• adding and subtracting integers worksheet
• Solve my math riddle
• math worksheets factorising algebra
• rings tutorials in maths
• Book on Cost Accounting
• online calculator for college algebra with conversion from decimal to numbers
• how to solve a radical function
• Least Common Multiple Calculator
• ks3 math work sheets
• least common multiple of three number is 154
• my daughter is struggling with algebra
• step by step online graphing calculator
• sixth grade math exponents
• 7th grade subtraction practice worksheets
• 9th and 10th grade worksheets
• quadratic functions coordinates equation solver
• write the number 1 as a exponential expression using the base 13
• conversion formulas for fractions and decimals
• iowa algebra aptitude test preparation
• getting fractions out of equations
• free beginning Algebra textbook
• factoring roots
• how to solve algebra equations
• free 9th grade math test online
• indiana prentice hall mathematics algebra 1 answer key
• pre-test, 6th grade, math
• martin university online ged test.com
• solving coupled differential equations in matlab
• solution+"linear difference equation"+"first order"
• "glencoe advanced mathematical concepts answers"
• euclid method greatest common divisor c++
• Dummy "Math book"
• end of year fifth grade multiplication test
• Percentage math formula
• dividing scientific notations
• algebraic formulas
• when should a student not take pre algebra
• latest math trivia questions
• mathematical formula for Gini coefficient
• pass data TI 89
• solve online year 11 maths for free
• repeated interger inequalities
• Resolve problems of Algebra
• ti-92 system of equations
• pre-algebra coefficients
• free 10th grade worksheets
• online textbook, texas algebra 1 by prentice-hall
• Advanced Algebra Worksheets
• free algebrator
• fraction equations
• factor real numbers algebra
• simplify square root with calculator
• distributive property with exponents
• Pre-algebra cumultive frequency table pre-algebra
• check a character is punctuation in Java
• "online book" Introduction to Accounting: An Integrated Approach, 4th Edition
• computation / formula of diameter when circumference is given
• pre-algebra worksheets
• California Mathematics Homework: Printable Assessments
• free school work for grde 3 ontrio
• graphing calculater
• pre algebra cheat sheets
• worksheets on formulas
• calculate lowest common denominator
• math problem answer free
• wipro aptitude test to download
• what is the subsititution method in algrebra
• linear equations on a TI-83
• Holt Algebra worksheets
• integers game multiply
• solving radicals in fraction form
• simplifying integer exponents
• radical equation solver
• subtracting integers fractions
• subtracting, adding, multiplying, and dividing decimal numbers
• cube root on calculator
• formula sheet
• free algebrator download
• order of operations worksheet
• t1-89 calculator game apps
• algebra and angles worksheets
• using the substitution method in algebra
• algebra printable workbooks
• addison wesley secondary math advanced algebra answers key
• cognitive.tutor.com
• solution of a quadratic equation by extracting square roots
• substitution in algebra
• 8th grade english aptitude test
• trigonometry yr 10
• Formula for volume of a Partial cone
• free online learning games for 7th graders
• general online aptitude test with workout
• ti- emulator download free
• adding 9 worksheets
• free 8th grade printable worksheets
• rational expression calculator
• equation balancer maths
• elementary algebra bool allen
• combining like terms calculator
• how to evaluate radical with ti-83 calculator
• proof of odd numbers difference of squares
• multiply and divide integers powerpoint
• ks2 maths formulae worksheets
• computer solve algebra
• homework geometry cheat
• when a fraction is changed to a decimal and the remander is zero the decimal is called
• what are the steps simplifying numerical expressions
• algebra factoring difference of two cubes
• free samples on how to do algebra1
• simplify(squareroot(x)*cuberoot(1-x))
• solve my fraction problem
• ti-83 plus how to cube root
• lowest common denominator calculator
• adding & Subtracting integers games
• subtracting simple non algebraic exponents
• solve homogeneous second order differential equation
• how to do difference quotient
• add and subtract 2 digit numbers to 100 worksheet
• Free Math Test Paper
• apptitude test papers with solutions
• answers for chapter 9 of the university of phoenix elmentary and intermidate algebra second edition
• glencoe mathematics answers
• simplify inverse operations in pre algebra
• sample college algebra problems
• tricks for factoring trinomials
• diamond problem solver
• rom file ti92
• mcgraw-hill algebra 1 calculator
• difference of two square
• online calculator WITH SIMPLIFYING RADICALS
• rule book for the TI-84 Plus Graphing Calculator
• disadvantages of solving a system by graphing
• www.algebricexpression.com
• algebra pie
• download algebrator
• year 9 expanding and simplifying algebraic
• factor trinomial ti-89 calculator
• simple order of operation for 6th grade math worksheets
• matlab codes for solving differential equations
• algebra: distributive property with multiple variables
• finding limiting with graph calculator
• convert a mixed number into a decimal
• 11TH STD MATRICULATION MATHS QUESTION PAPER
• adding subtracting fractions demo
• pictograph worksheets
• radical algebra
• t1-84 plus download games
• convert percentage to fractions
• world's hardest math equations
• holt physics problem workbook answers
• solve algebra
• radical simplifying calculator
• nth term solver
• Kumon Papers
• learning algebra games
• TI-83 graphing calculator emulator java
• simultaneous equations ks3 worksheet
• www.tenthmaths.com
• math cheat sheets 6th grade
• free printable algebra 1 test
• Maths symbol like square root in HTML
• multiple variable equations
• free online fourth grade work
• ga 9th grade math relations and functions
• basic algebra worksheets
• how to put equations into ti-83
• simplify radical expression
• how to solve for x with the TI83 plus
• holt algebra
• order of operations + worksheet
• TI-83 Plus Graphing Calculator Tutorial and Manual
• balancing an linear equation
• how to declare BigDecimal in java
• multiply and divide negative numbers reason
• houghton mifflin math 5th grade expaned form with exponents
• worlds hardest maths equation
• the ladder method math
• fractions with Nth root
• introductory abstract algebra homeworks solution
• algebra problem examples
• what is the highest common factor of 30 and 45
• subset calculator solver
• t.i calculator download
• Online calculators for solving y=mx+b
• substitutions calculator
• t184 calculator program
• answers for algebra 1 problems
• KIDS MATHS SOLUTIONS SHEETS
• write nine over 17 as a decimal in the thousandths place
• 2000 video text interactive algebra answer key
• how to integrate math beginner
• learn algebra 1
• example of age problem w/ solutions "word problems"
• online calculator tells u any answer
• factoring third order equations
• online Glencoe Algebra 1 Florida Edition
• simplifying square roots on a calculator
• PAST examination papers and answers+ grade 9 mathematics
• vertex linear quadratic equations
• decimal worksheet for 5th grade
• expanded and standard form 2nd grade worksheets
• Algebra 2 help
• math algebraic poems
• free worksheet on absolute
• COLLEGE LEVEL TUTORING
• holt algrebra 1 textbook online
• how to solve fraction equations
• maths aptitude question papers
• difference between saxon algebra 2 editions 2 and 3
• how to put games on a t1-84 plus texas instrument calculator
• least common denominator algebraic formulas
• where can i find power points on decimals
• McDougall Littell, pre algebra, workbook
• help finding a fraction exponents
• easiest way to find the GCF
• equations of lines worksheet
• convert mixed number to decimal
• solving limits + graphing calculator
• 6th grade geographic features worksheet
• conceptual physics textbook answers
• test answer to math/cheating 7 grade
• 5th grade math combinations
• convert decimals to fraction in fomular
• printable mathematics worksheet
• real numbers and their proportions + 12th grade math problems
• online trinomial factorer
• distance, ratios, square roots, pobability fromulae
• algebra powers
• radius of a circle formula using quadratic euation
• PreAlgrebra Fifth Edition
• fifth grade math worksheets
• Free Algebra Worksheets
• rationalizing a denominator with two terms long form
• henderson hasselback gmat
• factoring with complex
• square root in radical form with variables
• how to solve logarithmic equations without a calculator
• simplify numerical radical expressions sample problems
• instructor Solution Manual to Linear Algebra with Applications bretscher pdf
• lesson plan for multiplication of two binomials
• simplifying fractions with square roots
• factoring cubed
• how to simplify square roots
• scott foresman physical science free online worksheets
• year 8 algebra tests
• radical expression solver
• "ti-83" emulator
• First day activities for 7th grade "Introduction to Algebra"
• how to put formulas on a ti-84
• free worksheets, positive and negative numbers
• partial sums
• adding and subtracting positive and negative fractions
• free highschool mathamatics problems
• question for aptitude test+answer
• algebra de Baldor
• word problems adding and subtracting integers worksheets
• radical calculator
• adding and subtracting real numbers worksheet
• online numeracy skills test year 8
• adding and subtracting time in spreadsheets
• brackets mean when solving an equation
• emulateur TI-84
• glencoe algebra 1 answers
• nc 8th grade worksheets
• Converting celcius to farenheit practice tests
• solving second order homogeneous differential equations
• "English Grammer Tutor"
• ti-84 download
• www.softmath.com
• adding and subtracting integers free on line worksheets
• holt algebra 1 book answers
• holt science and technology grade 8 cheats
• addition method to solve linear equations disadvantages
• algebra calculator reduce to lowest terms
• free practice worksheets on subtracting whole numbers
• answers for prentice hall course 1 mathbook
• multiplying and dividing rational expressions calculator
• cubed functions
• Online T 83 Calculators+free
• how to solve graphic equations
• prentice hall online math book
• how to change mixed numbers into decimals
• order of operations worksheet including exponents
• ax*2+bx+c=0 to slope form
• free worksheets on add, subtract, divide, and multiply integers
• algebra2 (adv) basics online
• Orleans-Hanna Test
• subtraction examples
• TI-83 Rom Image
• add the fraction equation
• free accounting books
• exprecion algebraica
• cheating with TI-84 plus
• fun ways to learn about decimals
• Simplifying Complex Rational Expressions
• learn intermediate college algebra
• final exam free online print off with key for pre algebra
• rules for adding subtracting multiplying and dividing whole numbers
• free subtract integer worksheet
• Simultaneous Equation Advanced Mathamatics O Level
• second grade differential equations
• Algebra quiz
• prentice hall mathematics algebra 1 9th grade answers
• percent fraction and decimal solver
• "a first course in abstract algebra "+"free download"
• Algebra Equations Solver
• ti 89 how to solve equation two variables
• (ft) cubed root solved by factoring
• least commom multiple
• algebra hints and tips
• McDougal Littell algebra 1 answers
• least to greatest fraction
• excel.programming create best fit ellipse
• factoring cubed polynomials
• 6th grademath word sheet
• how to factor a parabola
• decimal as mixed number elementary
• maths at ks2+maths in movie+assessment
• math area
• pre algebra fractions multiplying and dividing
• math problem sal sue was old
• Principles+of+Mathematical+Analysis+Rudin+Solution+Manual
• completing the square worksheet
• simplifying fractions with negative exponents
• algebra 1 teachers edition worksheets
• easy algebra questions
• learning simple algebra
• algebra two: how to change decimal into a improper fraction????
• multiplying integer
• permutations and combination problems in GRE
• aptitude test paper to download
• free printable english test exams
• multiplying pie in java code
• online textbook-McDougal Littell Algebra 1 Fl Ed. 2004
• exponential form calculator
• how do i teach my self basic algebra
• distributive property with fractions
• TI 84 plus quadratic applications
• free printable algebra sheet
• free arithimetic objective questions with answer
• mixed fraction as decimal
• pythagoras theory for kids worksheets
• convert fraction to radical
• factoring Cubed polynomials
• square root of difference of squares
• graphing on ti-83 plus rational functions how to
• glencoe algebra 1 math book
• simplify trinomials
• rules for adding subtracting integers
• subtracting fractions with parenthesis
• worksheets for adding and subtracting positive and negative numbers
• holt algebra one
• single rational expression
• free worksheets over multiplying and dividing integers
• algebra homework solver
• learning algebra
• fractional exponent solver
• how to solve fractional exponents
• worksheet on adding and subtracting decimals
• worksheets for bar graphs
• free worksheets on positive and negative numbers
• operations with polynominals
• free IQ tutorials.pdf
• Integer free worksheets
• algebra work sheats
• free worksheets ordering numbers to 12
• Holt Physics Problem Workbook answers
• how to convert decimal to fraction on the Ti 86
• examples of trivia's
• mix numbers
• cube root Ti-83 plus
• distributive property using exponents
• conversion of 23,08 to decimal formula
• multiplying scientific notations
• download ti rom
• free algebra 2 quizzes
• "ALGEBRA OF SUMMATION"
• "free online algebra help"
• nonhomogeneous differential equations
• TI-84 plus SE Emulator
• algebra 2 answer
• Factoring as a algebra problem
• converting mixed numbers to decimals
• How do you add negitive and positive numbers
• how to solve the equation for the given variable
• nonlinear equation solver
• combining like terms fun worksheet
• solutions to contemporary abstract algebra
• quit while loop java string int
• 8th grade slope math projects
• Rules of adding,subtracting,multiplying,dividing integers
• solve nonlinear differential equations
• scale factor
• iowa algebra readiness test
• addition & subtraction of fractions or rational expressions
• free math worksheets +year8
• worksheets for adding/subtracting/multiplying/dividing positive and negative integers
• math investigatory project students
• ti-84 calculator download
• printable worksheets using variables
• Linear Equation worksheets
• how to solve square root of a variable Radical Equations
• scientific notation worksheet
• grade 10 algebra help
• gcse free work sheets
• prime number poem
• how to calculate 2 times square root 10
• a final test for pre-algebra print off with key
• Free Algebra Equations
• distributive property in pre algebra
• how many positive divisors does 2310 have
• sample lesson plan for addition of monomials
• applications of algebra
• celsius worksheets
• trig calculator
• best way to understand algebra
• simplifying variables
• give the importance of boolean algebra
• how to solve exponential fractions
• free worksheets over multiplying positive and negative integers
• free printable practice tests for the ged
• Worksheets Adding Subtracting Decimals
• simplifying complex linear function
• online graphic calculator for algebra
• year 8 math sat exam paper
• equations of lines worksheets free
• free math quizzes for 9th grade
• lesson plans + adding integers
• Simplifying Function Calculator
• how to put games on a t1-84 plus calculator
• "why important to simplify radical expressions"
• aptitude free books
• Algebra 2 2004 workbook answers
• Intermeate Algerbra
• examples of math prayers
• factoring program
• formula to add fractions
• aptitude question
• fraction worksheets with integers
• gateway to mathematics book 3,chapter 12 read answers free ,tangents ,5edition
• pre-algebra posttest
• math textbook problem solver
• solving algebra cubic equations
• Free calculation sheets
• two points are given, solve for the vertex of a parabola
• how multiply a square metre into lineal metre
• test of abstract aljebra
• free pre algebra pre test
• greatest common divisor
• adding Scientific Notation Worksheet
• free TI 84 emulator
• Mathematics factorization questions
• college algebra solver
• alegbra worksheets
• free online simplify calculator
• holt exponents and fractions
• year eleven free math work sheet
• binomials lcd
• finding y intercept on graph calculator
• "a first course in abstract algebra "+"free ebook"
• algebra trivia games
• find slope of line on graphing calculator
• hard order of operations worksheets
• combining like terms manipulatives
• pearson prentice hall precalculus fourth edition answer key texas edition
• how to take limits with a graphing calculator
• online calculator for polynomial long division
• free algebra quiz printable
• calculator simplify each question
• java,convert decimal amount to text
• 9th grade math reviews
• answers to McDougal Littell
• Algebra 1 homework cheats
• how to solve all quadratic equations square roots property
• algebra 2 easy
• maths tutor fo 10th standard
• glencoe algebra answer generators
• "sat ii chemistry" "worksheets"
• algebric practice
• number sense square roots
• simplifying algebraic functions with exponents
• WORKSHEETS ON SLOPES
• free math answer keys 9th
• programming the velocity formula in Ti-84
• formula to make percentage
• glencoe mathematics applications and connection course 2 unit 1
• converting system equations linear diophantine inequalities
• Prentice Hall Algebra 1 teachers manual California
• adding fractions on the ti-84 plus
• "sixth grade math"and "percent circles"
|
{"url":"https://softmath.com/math-com-calculator/function-range/basic-math-equations.html","timestamp":"2024-11-12T23:12:09Z","content_type":"text/html","content_length":"145074","record_id":"<urn:uuid:980d7b4c-81dc-4673-8337-471f0c340e29>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00151.warc.gz"}
|
Unscramble WABBLIER
How Many Words are in WABBLIER Unscramble?
By unscrambling letters wabblier, our Word Unscrambler aka Scrabble Word Finder easily found 129 playable words in virtually every word scramble game!
Letter / Tile Values for WABBLIER
Below are the values for each of the letters/tiles in Scrabble. The letters in wabblier combine for a total of 19 points (not including bonus squares)
• W [4]
• A [1]
• B [3]
• B [3]
• L [1]
• I [1]
• E [1]
• R [5]
What do the Letters wabblier Unscrambled Mean?
The unscrambled words with the most letters from WABBLIER word or letters are below along with the definitions.
• wabblier () - Sorry, we do not have a definition for this word
|
{"url":"https://www.scrabblewordfind.com/unscramble-wabblier","timestamp":"2024-11-06T02:22:06Z","content_type":"text/html","content_length":"59233","record_id":"<urn:uuid:5e92ae5a-e90d-4454-933a-4527ced580dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00673.warc.gz"}
|
ITYPE is INTEGER
Specifies the problem type to be solved:
[in] ITYPE = 1: A*x = (lambda)*B*x
= 2: A*B*x = (lambda)*x
= 3: B*A*x = (lambda)*x
JOBZ is CHARACTER*1
[in] JOBZ = 'N': Compute eigenvalues only;
= 'V': Compute eigenvalues and eigenvectors.
UPLO is CHARACTER*1
[in] UPLO = 'U': Upper triangles of A and B are stored;
= 'L': Lower triangles of A and B are stored.
N is INTEGER
[in] N The order of the matrices A and B. N >= 0.
A is COMPLEX*16 array, dimension (LDA, N)
On entry, the Hermitian matrix A. If UPLO = 'U', the
leading N-by-N upper triangular part of A contains the
upper triangular part of the matrix A. If UPLO = 'L',
the leading N-by-N lower triangular part of A contains
the lower triangular part of the matrix A.
[in,out] A On exit, if JOBZ = 'V', then if INFO = 0, A contains the
matrix Z of eigenvectors. The eigenvectors are normalized
as follows:
if ITYPE = 1 or 2, Z**H*B*Z = I;
if ITYPE = 3, Z**H*inv(B)*Z = I.
If JOBZ = 'N', then on exit the upper triangle (if UPLO='U')
or the lower triangle (if UPLO='L') of A, including the
diagonal, is destroyed.
LDA is INTEGER
[in] LDA The leading dimension of the array A. LDA >= max(1,N).
B is COMPLEX*16 array, dimension (LDB, N)
On entry, the Hermitian matrix B. If UPLO = 'U', the
leading N-by-N upper triangular part of B contains the
upper triangular part of the matrix B. If UPLO = 'L',
the leading N-by-N lower triangular part of B contains
[in,out] B the lower triangular part of the matrix B.
On exit, if INFO <= N, the part of B containing the matrix is
overwritten by the triangular factor U or L from the Cholesky
factorization B = U**H*U or B = L*L**H.
LDB is INTEGER
[in] LDB The leading dimension of the array B. LDB >= max(1,N).
W is DOUBLE PRECISION array, dimension (N)
[out] W If INFO = 0, the eigenvalues in ascending order.
WORK is COMPLEX*16 array, dimension (MAX(1,LWORK))
[out] WORK On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
LWORK is INTEGER
The length of the array WORK.
If N <= 1, LWORK >= 1.
If JOBZ = 'N' and N > 1, LWORK >= N + 1.
If JOBZ = 'V' and N > 1, LWORK >= 2*N + N**2.
[in] LWORK
If LWORK = -1, then a workspace query is assumed; the routine
only calculates the optimal sizes of the WORK, RWORK and
IWORK arrays, returns these values as the first entries of
the WORK, RWORK and IWORK arrays, and no error message
related to LWORK or LRWORK or LIWORK is issued by XERBLA.
RWORK is DOUBLE PRECISION array, dimension (MAX(1,LRWORK))
[out] RWORK On exit, if INFO = 0, RWORK(1) returns the optimal LRWORK.
LRWORK is INTEGER
The dimension of the array RWORK.
If N <= 1, LRWORK >= 1.
If JOBZ = 'N' and N > 1, LRWORK >= N.
If JOBZ = 'V' and N > 1, LRWORK >= 1 + 5*N + 2*N**2.
[in] LRWORK
If LRWORK = -1, then a workspace query is assumed; the
routine only calculates the optimal sizes of the WORK, RWORK
and IWORK arrays, returns these values as the first entries
of the WORK, RWORK and IWORK arrays, and no error message
related to LWORK or LRWORK or LIWORK is issued by XERBLA.
IWORK is INTEGER array, dimension (MAX(1,LIWORK))
[out] IWORK On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK.
LIWORK is INTEGER
The dimension of the array IWORK.
If N <= 1, LIWORK >= 1.
If JOBZ = 'N' and N > 1, LIWORK >= 1.
If JOBZ = 'V' and N > 1, LIWORK >= 3 + 5*N.
[in] LIWORK
If LIWORK = -1, then a workspace query is assumed; the
routine only calculates the optimal sizes of the WORK, RWORK
and IWORK arrays, returns these values as the first entries
of the WORK, RWORK and IWORK arrays, and no error message
related to LWORK or LRWORK or LIWORK is issued by XERBLA.
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
> 0: ZPOTRF or ZHEEVD returned an error code:
<= N: if INFO = i and JOBZ = 'N', then the algorithm
failed to converge; i off-diagonal elements of an
intermediate tridiagonal form did not converge to
[out] INFO zero;
if INFO = i and JOBZ = 'V', then the algorithm
failed to compute an eigenvalue while working on
the submatrix lying in rows and columns INFO/(N+1)
through mod(INFO,N+1);
> N: if INFO = N + i, for 1 <= i <= N, then the leading
minor of order i of B is not positive definite.
The factorization of B could not be completed and
no eigenvalues or eigenvectors were computed.
|
{"url":"https://netlib.org/lapack/explore-html-3.4.2/de/d8b/zhegvd_8f.html","timestamp":"2024-11-09T17:33:45Z","content_type":"application/xhtml+xml","content_length":"20076","record_id":"<urn:uuid:7369ec98-79e9-49d5-970f-c6cbd9594255>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00707.warc.gz"}
|
Computational power and big data reviews
financial market Current computational power , much greater than in the past, and the possibility of using a huge amount of data – big data – have been opening up new possibilities for analysis and
reforming the previous paradigms that we used to model economic and financial phenomena. It is not a linear process and is often confusing. The purpose of this article, therefore, is to throw a
little more light on the subject, without intending to exhaust it, not least because this is a very limited space for that. We live in a huge frenzy with regard to startups, Artificial Intelligence,
Big Data and other tools, in a diffuse mixture in which optimism, misconceptions and naivety come together, being essential and very desirable to separate the credible and realistic from the fantasy
, sometimes deliberately fraudulent. To separate the wheat from the chaff, our strategy is to understand the main elements and put them in historical perspective, so that much of what we believe to
be revolutionary has actually been around for over 30 years . The difference now is in the popularization of computational processing capacity and data storage . The supposed revolution is in the
simple fact that before these things were not computationally feasible and scalable. That is, almost every theory already existed , as well as its mathematical frameworks, but we could not have
sufficient amounts of data and calculation power.
Especially in view of the computational problem, we prefer more simplified mathematical and econometric instruments, which, in turn, also imply the assumption of oversimplifying premises, in the face
of a complex world in which the occurrence of rare events , such as those seen in March 2020 or the “ Black Monday ” of 1987 is more frequent than historically presumed. To get an idea of the
limitation of more traditional mathematics applied to this topic, whose framework is largely based on simpler and more analytically tractable statistical distributions, we reached absurd conclusions,
such as that the event observed in March 2020 corresponds to something observable at a frequency of 1 in 20 million; or that the misfortune of the Dow Jones falling 7.7% in a single day would be one
in 50 billion; or that even the Black Monday event would be 1 to 10 raised to 50 – calculations here borrowed from one of the most recent books by Mandelbrot , one of the exponents in this area of
knowledge. Clearly, given this frequency of catastrophic events, the widely held assumptions must be wrong, even though the tools to adequately quantify such possibilities have existed since the
1960s. In terms of credit policies, it is possible to see that eventually even more naive assumptions still permeate how we see default risk . Especially in a scenario that advances in terms of
collateralized instruments (with guarantee), such as home equity, payroll loans and discounting of bills, not seeing how several variables, such as labor lawsuits and social security debts, impact
the probability of nullity or unavailability of real guarantees seems to be surreal, indeed an alarming mistake . Apparently, this diverse information needs to be crossed, eventually using artificial
intelligence, to obtain a more realistic precision of the default probabilities . In fact, in times of digital bookkeeping, open banking, among others, not seeing this entire informational ecosystem
is despising unfavorable chance . And this is precisely what promises to be a disruptive factor for large and traditional financial institutions. It is no longer necessary to have a huge
computational park. And even if yes, depending on the solution and the problem, this is just a click away at very low costs, on the most diverse computing platforms available.
This is the challenge we have in our daily lives with our partners and customers: how to create and scale databases and computational capacity? How to create econometric models that preserve
intelligibility and simplicity as much as possible? These are the key business premises where machine learning and big data are applied to finance. Unlike other sectors, such as medicine, in which it
doesn’t matter (immediately) to explain why a patient has cancer (the hurry lies in diagnosing whether or not he has the disease, in the best possible way), in finance and economics there is a need
to understand the reason for things and correct the trajectory of decisions when necessary . All this with the incredible challenge that datasets in finance are much scarcer than in other areas,
especially equities, derivatives and interest. We only have one story from each company; a single history of Brazil’s monetary policies; we observe a single event from the Twin Towers, and so on. The
Herculean challenge is to employ the most computationally and mathematically sophisticated techniques in order to preserve the intelligibility of the algorithms’ outputs, given the idiosyncrasies of
financial data. But it’s not magic or rocket science . It is computational power with a tooling already quite scientifically consolidated. We are talking about Markov chains , Gibbs Sampler,
Wavelets , fractional Brownian motion , among others, subjects that already exist and have been studied since the middle of the last century. It is worth remembering that the Medallion fund , owned
by legendary manager Jim Simmons, has operated these tools for decades. Finally, we take the opportunity to paraphrase Benoit Mandelbrot – one of our quantitative references,
alongside Simmons and Bishop – about risk, return and ruin . Identifying opportunities first involves analyzing the most likely value that an opportunity will generate. The typical problem resides
in risk analysis and, probably to a lesser extent but equally important, in the events of investor ruin – which are generally highly improbable – that guide the attractiveness of these opportunities.
Most of the simpler models allow a reasonably good analysis of the most probable events, but tend to hugely underestimate the unlikely events, making agents take much greater risks than they should
or could. Portfolio managers operating at excessive leverage and less restrictive credit policies in certain situations are classic examples.
You must be logged in to post a comment.
|
{"url":"https://pspmyspace.com/computational-power-and-big-data-reviews/","timestamp":"2024-11-12T02:38:46Z","content_type":"text/html","content_length":"64321","record_id":"<urn:uuid:13448ac5-1edc-4288-b42a-e2850706c5ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00042.warc.gz"}
|
Review Analysis | NLU API Dashboard
The review analysis API identifies the sentiment(label) of the given phrase for each category(type).
Sentiment labels can be either numeric or qualitative.
Example 1) -2, -1, 0, 1,2 Example 2) Negative, Neutral, Positive
Performance Metrics
Review Analysis uses 4 metrics to measure model performance.
Basic Concepts
To measure the model performance of Review Analysis, we compare the predicted result by model with the ground truth via confusion matrix.
1. Ground Truth
Ground truth is a term of a reference point against which model predictions can be compared or assessed. It refers to the correct and verified labels or outcomes that a model aims to predict.
You can set one of three labels: POSITIVE, NEUTRAL, and NEGATIVE for each category TYPE as below.
If you don’t specify the label for a type, the label is filled to NONE, though you can not see the NONE label in the performance metric table.
Let's assume that the trained model returned the following inference results through the test sample data.
2. Predicted Result
The predicted result refers to the inference result that is produced as a result when test data is processed by the trained model.
From the tables of ground truth and predicted result, we can make a confusion matrix for each category.
3. Confusion Matrix
For example, the confusion matrix table for type 3 is as follows.
Using the confusion matrices we can easily calculate the traditional performance metrics as [Table 4] : precision, recall, f1-score, and accuracy.
How to Calculate Metric
We will cover you how to calculate the metric values on the ‘Model History’ page and ‘Metrics Details’ one by one.
First, we can check the metrics by each Type category as below. These metrics can be checked by pressing the 'Details' button for each model in the model history page.
1. Precision
Precision is a metric that measures the accuracy of a model. You can calculate a value from the ratio of the correct predictions for that category to the overall predictions for that category, i.e.
the sum of a single row. In this example, the precision of the POSITIVE label is
100 * TRUE POSITIVE / POSITIVE PREDICTION = 100 * 2 / (2 + 0 + 2 + 0) = 50%
You can confirm this value if you choose the POSITIVE label from the drop-down menu at the top left corner of the 'Metrics Details' window.
The precision for NEUTRAL is 0 because the support (number of truth samples) is 0. Also, the precision for NEUTRAL is zero because the count of correct predictions is zero. However, the precision of
NONE is 100%.
Using the values, we calculate a weighted average of precision for all labels:
(50 * 2 + 0 * 0 + 0 * 2 + 100 * 3) / (2 + 0 + 2 + 3) = 57.1%
Secondly, we can check the overall metrics by model. The metric values on the ‘Model History’ page are weighted averages over categories. As you can see, all category has the same support. So we can
take a simple numeric average of them.
For example, overal precision of the model in this example is
(57.1 + 52.4 + 42.9) / 3 = 50.8%
2. Recall
Recall measures the ability of a model to identify categories. We can calculate this value from the ratio of the number of correct predictions in that category to the number of samples in that
category, i.e. the sum of one column. In this example, the recall of the POSITIVE label is
100 * TRUE POSITIVE / POSITIVE LABELS = 100 * 2 / (2 + 0 + 0 + 0) = 100%
The recall for NEUTRAL is 0 because there is no support for this label. The recall for NEGATIVE is 0, too. However, the reason is not the same. The model failed to find any NEGATIVE label. In other
words, recall for the NONE is 100%.
Using these results, we can calculate a weighted average of recall for all labels is
(100 * 2 + 0 * 0 + 0 * 2 + 100 + 3) / (2 + 0 + 2 + 3) = 71.4%
3. F1 Score
Precision and recall represent different aspects of a model. In some situations, we may want a model with good precision but not recall, or vice versa. A single metric is useful if you want a model
that has both good precision and recall. The F1 score with the following definition is one of the most popular metrics for this purpose.
2 * precision * recall / (precision + recall).
In this example, the F1-score of the POSITIVE label is
2 * 50 * 100 / (50 + 100) = 66.7%
F1 scores for NEUTRAL and NEGATIVE are 0, but the F1 score for the NONE is 100.
Using these results, the weighted average of the F1 score for all labels is
(66.7 * 2 + 0 * 0 + 0 * 2 + 100 * 3) / (2 + 0 + 2 + 3) = 61.9%
4. Accuracy
We can calculate accuracy, which is the ratio of the number of correct predictions to the number of all samples. If you select 'ALL' from the menu in the top left corner, you will notice that no
accuracy values appear. This means that the value cannot be defined in these circumstances. However, if you select 'ALL', you can now see the accuracy values..
In this example case, the accuracy value for Type 3 cateegory is
100 * (2 + 0 + 0 + 3) / 7 = 71.4%
Inference Basis and Confidence Level
There is a "View details" icon located to the right of the review sentence column in the data area of the dashboard and inference pages.
When you click on this icon, you can check the phrases (highlighted) that served as evidence for the inference by category that the model extracted, as well as the confidence value of the inference
in the layer window that pops up.ima
Confidence is a measure of how reliable the model's prediction is. It outputs a value up to two decimal places between 0 and 1, and the higher the value, the higher the confidence in the inference
can be interpreted.
Interpreting the category-based inference evidence and confidence for the example sentence above, the score for the "fit" category was calculated as 1 in the second row for the following phrase in
the sentence:
Phrase serving as evidence: "I'm happy that the fit is good."
Note that if the number of test data samples for the TYPE category is relatively small, the confidence value for the category-based inference may be slightly lower.
|
{"url":"https://docs.allganize.ai/nlu_dashboard_guide/model-management/performance-metrics/review-analysis","timestamp":"2024-11-14T03:23:13Z","content_type":"text/html","content_length":"594374","record_id":"<urn:uuid:999f4efc-f1b0-4e63-9303-469b798ec39c>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00145.warc.gz"}
|
magnetic levitation calculator
Magnetic Levitation Calculator
Magnetic levitation, or maglev, is a method by which an object is suspended with no support other than magnetic fields. Magnetic force is used to counteract gravitational pull and any other forces.
This technology is widely used in applications like maglev trains and certain types of bearings. This article introduces a magnetic levitation calculator to help you understand the dynamics of maglev
How to Use
To use the magnetic levitation calculator, you need to input the necessary parameters such as the magnetic field strength, the distance between magnets, and the weight of the object. Once you input
these values, click on the “Calculate” button to get the result.
The formula to calculate the magnetic force required for levitation is derived from the balance of forces acting on the levitated object. The key formula used is:
• F is the magnetic force.
• μ0 is the permeability of free space (4π×10−7Tm/A
• M1 and M2 are the magnetic moments of the two magnets.
• d is the distance between the two magnets.
Example Solve
Let’s consider an example where we have the following parameters:
• Magnetic moment of magnet 1 (M1): 1.2A⋅m2
• Magnetic moment of magnet 2 (M2): 1.5A⋅m2
• Distance between the magnets (d): 0.05m
Using the formula, we can calculate the force as follows:
What is magnetic levitation?
Magnetic levitation is a method by which an object is suspended with no support other than magnetic fields.
What are the applications of magnetic levitation?
Maglev trains, magnetic bearings, and contactless melting are some applications.
What is the principle behind magnetic levitation?
Magnetic levitation works on the principle of using magnetic force to counteract gravitational pull and other forces.
A magnetic levitation calculator is a useful tool for understanding the forces involved in maglev systems. By inputting the necessary parameters, you can easily calculate the required magnetic force
for levitation. This can be useful for both educational purposes and practical applications in engineering and design.
|
{"url":"https://calculatordoc.com/magnetic-levitation-calculator/","timestamp":"2024-11-02T11:41:31Z","content_type":"text/html","content_length":"92449","record_id":"<urn:uuid:52bb71b2-edd4-4828-baa5-527cd05a9c2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00784.warc.gz"}
|
Week 3
Logistic Regression
Logistic regression is used to solve binary classification problems where the output label \( y \) is either 0 or 1. Linear regression usually doesn't work well for classification tasks because it
can predict values outside the range [0, 1]. Logistic regression addresses this limitation by using the sigmoid function to map predictions to probabilities.
The Sigmoid (Logistic) Function
Purpose: Maps any real-valued number into the range between 0 and 1, making it suitable for probability estimation.
\[ g(z) = \frac{1}{1 + e^{-z}} \]
• e is the mathematical constant approximately equal to 2.71828.
• When \( z \rightarrow +\infty \): \( g(z) \rightarrow 1 \)
• When \( z = 0 \): \( g(0) = 0.5 \)
• When \( z \rightarrow -\infty \): \( g(z) \rightarrow 0 \)
Graph Shape: S-shaped curve (sigmoid curve) that approaches 0 for large negative inputs and 1 for large positive inputs.
Logistic Regression Model Formulation
Linear Combination:
\[ z = w \cdot x + b \]
• \( w \) is the weight vector.
• \( x \) is the input feature vector.
• \( b \) is the bias term.
Applying the Sigmoid Function:
\[ f(x) = g(z) = \frac{1}{1 + e^{-(w \cdot x + b)}} \]
Output: \( f(x) \) produces a value between 0 and 1, representing the estimated probability that \( y = 1 \).
Interpreting the Output as Probability
Probability Estimation: \( f(x) \) estimates \( P(y = 1 \mid x) \), the probability that the label is 1 given input \( x \).
• If \( f(x) = 0.7 \), there is a 70% chance that \( y = 1 \) (e.g., tumor is malignant).
• Consequently, there's a 30% chance that \( y = 0 \) (tumor is benign).
Making Predictions with Logistic Regression
• Decision Rule: If \( f(x) \geq 0.5 \), predict \( y = 1 \); otherwise, predict \( y = 0 \).
Decision Boundary in Logistic Regression
Definition: The decision boundary is the set of points where the probability \( f(x) \) is exactly 0.5. In logistic regression, this corresponds to the points where the linear combination \( z = w \
cdot x + b = 0 \).
Purpose: It separates the input feature space into two regions:
• Region 1: Where \( f(x) \geq 0.5 \), and the model predicts \( y = 1 \).
• Region 0: Where \( f(x) \lt 0.5 \), and the model predicts \( y = 0 \).
• In two dimensions, the decision boundary is a line that divides the plane into two halves.
• In higher dimensions, it becomes a hyperplane that separates the space.
Consider a logistic regression model with parameters \( w \) and \( b \), where:
\[ z = w_1 x_1 + w_2 x_2 + b \]
If we set \( w_1 = 1 \), \( w_2 = 1 \), and \( b = -3 \), the decision boundary equation becomes:
\[ z = x_1 + x_2 - 3 = 0 \] \[ \Rightarrow x_1 + x_2 = 3 \]
This equation represents a straight line in the feature space that separates the predictions for \( y = 0 \) and \( y = 1 \).
Non-linear Decision Boundaries:
By incorporating polynomial features, logistic regression can model more complex decision boundaries:
\[ z = w_1 x_1^2 + w_2 x_2^2 + b \]
For example, with \( w_1 = 1 \), \( w_2 = 1 \), and \( b = -1 \), the decision boundary becomes:
\[ x_1^2 + x_2^2 = 1 \]
This represents a circle with radius 1 centered at the origin, allowing the model to separate data in a non-linear fashion.
Cost Function
The cost function gives you a way to measure how well a specific set of parameters fits the training data.
Loss is a measure of the difference of a single example to its target value while the Cost is a measure of the losses over the training set
For Linear Regression key points :
• Avoid the Squared Error in Classification: It leads to non-convex cost functions in logistic regression.
• Use Logistic Loss Function: It provides a suitable measure of error for binary classification and results in a convex cost function.
• Convex Cost Function Benefits: Guarantees that optimization algorithms can find the global minimum, leading to better model performance.
• Implementation: The logistic regression cost function is calculated using the negative log-likelihood of the predictions, averaged over all training examples.
The Logistic Regression Loss Function
\[J(\overrightarrow{w}, b) = \frac{1}{m} \sum_{i=1}^m L(f_{\overrightarrow{w},b}(\overrightarrow{x}^{(i)}), y^{(i)})\]
Where the loss function L is defined as:
\[L(f_{\overrightarrow{w},b}(\overrightarrow{x}^{(i)}), y^{(i)}) = \begin{cases} -\log(f_{\overrightarrow{w},b}(\overrightarrow{x}^{(i)})) & \text{if } y^{(i)} = 1 \\[2ex] -\log(1 - f_{\
overrightarrow{w},b}(\overrightarrow{x}^{(i)})) & \text{if } y^{(i)} = 0 \end{cases} \]
Summary of Gradient Descent for Logistic Regression
Use gradient descent to find the w and b parameters that minimse the cost function for a logistic regression model
Once w and b are chosen, put a new x into the model and it will predict the probability that y is 1 (not 0)
The cost function measures the discrepancy between the predicted probabilities and the actual binary labels (0 or 1).
Cost function:
\[J(\overrightarrow{w}, b) = -\frac{1}{m} \sum_{i=1}^m \left[y^{(i)}\log(f_{\overrightarrow{w},b}(\overrightarrow{x}^{(i)})) + (1 - y^{(i)})\log(1 - f_{\overrightarrow{w},b}(\overrightarrow{x}^
Equations for w_j and b look similar to linear regression but they are different because f(x) is different for linear and logistic regression:
w[j] update equation:
\[w_j = w_j - \alpha \left[\frac{1}{m} \sum_{i=1}^m (f_{\overrightarrow{w},b}(\overrightarrow{x}^{(i)}) - y^{(i)})x_j^{(i)}\right]\]
b update equation:
\[b = b - \alpha \left[\frac{1}{m} \sum_{i=1}^m (f_{\overrightarrow{w},b}(\overrightarrow{x}^{(i)}) - y^{(i)})\right]\]
Linear regression equation:
\[f_{\overrightarrow{w},b}(\overrightarrow{x}) = \overrightarrow{w} \cdot \overrightarrow{x} + b\]
Logistic regression equation:
\[f_{\overrightarrow{w},b}(\overrightarrow{x}) = \frac{1}{1 + e^{-(\overrightarrow{w} \cdot \overrightarrow{x} + b)}}\]
Similar to linear regression :
• Implementing gradient descent requires simultaneous updates of all parameters.
• Feature scaling is critical
• Vectorisation techniquesgreatly improve the computational efficiency of gradient descent, especially when dealing with large datasets.
Overfitting and Underfitting:
Overfitting occurs when a model fits the training data too well, capturing noise and irrelevant details, which leads to poor generalisation to new data. This often happens when using too many
features or overly complex models, such as high-order polynomials.
Underfitting happens when the model is too simple and unable to capture the underlying patterns in the data, leading to poor performance on both training and new data.
Bias and Variance:
High bias (underfitting) means the model makes strong assumptions (e.g., linearity) that may not match the data, leading to systematic errors.
High variance (overfitting) means the model is too sensitive to small variations in the training data, producing highly variable predictions for new data.
A model’s ability to perform well on unseen data is called generalisation. A good model should balance fitting the training data while generalizing well to new examples.
Techniques to Address Overfitting:
Collect more data: With more training data, complex models are less likely to overfit as they better capture the true patterns.
Reduce the number of features: By selecting a smaller subset of relevant features (feature selection), the model can focus on the most important aspects and avoid overfitting. Also removes risk there
is insufficient data, but does increase risk useful features are lost.
Regularisation: Keep all the features, but reduce their effect.
Regularisation techniques reduce the impact of large parameters by encouraging the model to shrink parameter values, preventing overly complex models from fitting noise. This method allows keeping
all features without overfitting. NB Usually just reduce w parameters and leave b.
Regularisation in Practice:
Regularisation works by limiting the size of the parameters (e.g., in linear or logistic regression) to prevent overfitting while still retaining the features.
Common regularisation methods include L1 (Lasso) and L2 (Ridge) regularisation, which control how much the model relies on each feature.
Cost Function with Regularisation
Instead of removing a feature, you minimise it (eg multiply w_3 will reduce w_3 close to zero.
Because we dont know which w to minimse, we minimse all the w using a regularisation parameter lamda to penalise all the values of w. So have to choose a value for lamda.
The regularised cost function is modified as follows:
Here, \(\lambda\) is the regularisation parameter, and \(m\) is the number of training examples. This new term encourages the parameters \(w_j\) to stay small, helping reduce overfitting.
Importantly, regularisation does not apply to the bias term \(b\), as regularising it has little practical effect.
Choosing \(\lambda\):
• If \(\lambda = 0\), the model is not regularised and is likely to overfit, producing a highly complex function.
• If \(\lambda\) is very large, the parameters are forced to be very small, leading to underfitting, where the model is overly simplistic.
• The key is to choose a value for \(\lambda\) that strikes a balance between fitting the training data well and keeping the parameters small to avoid overfitting.
The trade-off between these two goals is central to regularisation. Finding the right value of \(\lambda\) helps ensure the model generalises well without being overly complex or too simplistic.
Regularised Logistic Regression
For linear regression, f(x) is a linear function, for logistic regression, f(x) is the sigmoid (logistic) function
When logistic regression is applied to a dataset with many features, especially high-order polynomial features, it can be prone to overfitting, leading to overly complex decision boundaries.
Regularisation helps to address this by adding a penalty term to the cost function which discourages the parameters from becoming too large, leading to a simpler, more generalised decision boundary
When using gradient descent to minimize this regularised cost function, the derivative of the penalty term affects the update rule. For each parameter , the update rule becomes:
where is the learning rate. Note that the bias term b is not regularized, so its update rule remains unchanged.
In the lab, the cost function is shown to be a bit higher with the regularisation (as expected as its the normal cost plus regularisation cost)
|
{"url":"https://www.pour.it/ai/concepts/week3.html","timestamp":"2024-11-06T18:25:14Z","content_type":"text/html","content_length":"17633","record_id":"<urn:uuid:7a3f4b47-34fc-47a0-883a-e73ce75deb5b>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00080.warc.gz"}
|
articoli di Lorenzo Bardella
Eulerian rates of elastic incompatibilities applied to size-dependent hardening in finite torsion
M.B. Rubin
Faculty of Mechanical Engineering, Technion-Israel Institute of Technology, 32000 Haifa, Israel
Lorenzo Bardella
Department of Civil, Environmental, Architectural Engineering and Mathematics, University of Brescia, Via Branze, 43 25123, Brescia, Italy
Measures of rates of elastic incompatibilities are developed within an Eulerian framework for finite-deformation response of anisotropic elastic-inelastic materials. Such framework relies on the
evolution of microstructural vectors. It is emphasized that the rates of incompatibilities, here denoted as R_{ij}, depend on the constitutive equation for the rate of inelasticity. For small strains
and rotations, R_{ij} are equal to the negative of the components of the rate of Nye-Kroner's dislocation density tensor. In contrast to these small strain components, each R_{ij} is invariant under
superposed rigid body motions such that it can be used independently in the constitutive equations to describe the material behavior. Specifically, in this work, R_{ij} provide a size-dependent
enhancement to hardening that can increase or decrease during loading history, modeling the generation and annihilation of densities of geometrically necessary dislocations in metal plasticity. The
application to the finite-deformation cyclic torsion of thin wires demonstrates the potential of this approach and the importance of the constitutive equation for the plastic spin rate both on the
rotations of the microstructural vectors and on the predicted size-effect.
Author Keywords: anisotropic elastic response; elastic incompatibility; elastic-inelastic; Eulerian formulation; large deformation; small-scale metal plasticity; size-effect
|
{"url":"https://lorenzo-bardella.unibs.it/articoli/incompatibility.htm","timestamp":"2024-11-04T11:23:39Z","content_type":"text/html","content_length":"3574","record_id":"<urn:uuid:5ca15686-d2ef-484a-af13-2dab96db5d1c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00496.warc.gz"}
|
What is Haemorrhage?
Hint: The word ‘haem’ means haemoglobin which is an iron containing pigment found in red blood cells and the word ‘rrhage’ means excessive discharge or flow. By this you can have an idea that
haemorrhage is something related to blood or a blood vessel. This term usually refers to bleeding.
Complete answer:
By looking at the hint, it is clear that haemorrhage is something that is related to bleeding. Now we shall know the other details like what causes haemorrhage and how it happens? From a medical
point of view, bleeding is referred to as haemorrhage. The reason why bleeding occurs is when there is damage to the blood vessel, the blood starts expelling out from it. This can happen either
internally or externally. When this is internal, it can be life threatening. External haemorrhage can happen when there is a cut on your skin or when bleeding occurs through natural openings.
Internal haemorrhage occurs within the body. Haemorrhages can also happen due to vitamin K deficiency because this vitamin is essential for the blood to clot and in its absence, the blood starts
leaking continuously without forming a clot. Based on the amount of blood lost, haemorrhages are divided into four classes.
Class I - $15\%$ blood loss
Class II - 15 to $30\%$ blood loss
Class III - 30 to $40\%$ blood loss
Class IV - greater than $40\%$ blood loss
This is a brief note regarding haemorrhage.
The most common haemorrhage occurs in the brain under the arachnoid layer which functions as a protective layer to the brain and spinal cord. The most common reasons for brain haemorrhages are high
blood pressure, trauma, bleeding disorders or any blood vessel abnormalities.
|
{"url":"https://www.vedantu.com/question-answer/what-is-haemorrhage-class-11-biology-cbse-609fca5cbbbfb94918638978","timestamp":"2024-11-03T19:18:43Z","content_type":"text/html","content_length":"160812","record_id":"<urn:uuid:55eace3a-c832-47d2-85db-67b31ac52ee4>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00065.warc.gz"}
|
A Sequential Parity Checker - Logic Design Questions and Answers - Sanfoundry
Logic Design Questions and Answers – A Sequential Parity Checker
This set of Logic Design Multiple Choice Questions & Answers (MCQs) focuses on “A Sequential Parity Checker”.
1. What is the main purpose of a parity bit?
a) Maintain odd number of 1s in a bit stream
b) Maintain even number of 1s in a bit stream
c) Error detection
d) Increase the stream length
View Answer
Answer: c
Explanation: Parity bits are used to detect error in a data transmission system. Suppose a data is being transmitted in groups of 7 bits. If any odd parity system finds the number of 1s in a group as
an even number, then the system will add an additional parity bit 1 at the end of the package to make the total number of 1s in the package odd. Otherwise, it will add a 0 bit at the end of the
package. Thus, at the receiving end, an error can be detected if even number of 1s are found.
2. A digital communication system sends and receives 7-bit data packages. In which of the following cases a parity bit 1 will be needed if an even parity transmission system is used?
a) 0110010
b) 0110110
c) 1100110
d) 0010001
View Answer
Answer: a
Explanation: Only the data package 0110010 contains odd number of 1s. So, an even parity system will add a parity bit of 1 to avoid error when transmitting the package. Hence the right answer is
3. Which flip flop is generally used to build a sequential even parity checker?
a) SR flip-flop
b) JK flip-flop
c) T flip-flop
d) Master Slave flip-flop
View Answer
Answer: c
Explanation: If we use a negative edge triggered T flip-flop with initial state 0, and pass the input data pack to the T input, then an even number of 1s will be needed to make the final output zero
as the flip flop will toggle its output after each negative clock pulse as 1 → 0 → 1 → 0……… So, the output will only be 0 if the parity is even.
4. How many states are there in the state machine of a sequential parity checker?
a) 1
b) 2
c) 3
d) 4
View Answer
Answer: b
Explanation: There are two cases those can be detected in a parity checker – Even parity or Odd parity. Hence, two states can be found in any sequential parity checker. The right answer is 2.
5. Which pair is the correct match for an odd parity transmission system?
a) Data = 1101 000 Parity = 1
b) Data = 1001 000 Parity = 0
c) Data = 1101 000 Parity = 0
d) Data = 1101 110 Parity = 1
View Answer
Answer: c
Explanation: Only the combination “1101000” and parity bit 0 combines together keeping an odd number of 1s in the final byte “11010000”. So that combination is the right combination for an odd parity
transmission system.
6. Which sequential device must be used to build an odd parity checker?
a) Positive edge triggered JK flip-flop
b) Positive edge triggered T flip-flop
c) Negative edge triggered JK flip-flop
d) Negative edge triggered T flip-flop
View Answer
Answer: d
Explanation: If a negative edge triggered T flip-flop with initial state 0 is used, and if the input data pack is sent to the T input, then an even number of 1s will make the final output zero as the
flip flop will toggle its output after each positive clock pulse between 0 and 1……… So odd number of 1s will give the output 1 which will detect an odd parity.
7. Which one of the following is not true?
a) Parity checking procedure can detect errors
b) Parity bits can be both 0 and 1
c) Only one parity bit is added after a data pack
d) Parity bit can only be 1 but not 0
View Answer
Answer: d
Explanation: Parity bits are used to detect error in a data transmission system. Based on the number of 1s in a data pack and based on the parity system, the last parity bit is added. Suppose a data
is being transmitted in groups of 7 bits. If any odd parity system finds the number of 1s in a group as an even number, then the system will add an additional parity bit 1 at the end of the package
to make the total number of 1s in the package odd. Otherwise, it will add a 0 bit at the end of the package.
8. What parity bit will be added to the following data pack after examined by an even parity checker?
a) 1
b) 0
c) Can’t be determined
d) No parity bit will be added
View Answer
Answer: b
Explanation: Parity bits are used to detect error in a data transmission system. Based on the number of 1s in a data pack and based on the parity system, the last parity bit is added. The data pack
100100101 has total 8 bits. So, the even parity system will add a 0-parity bit to keep the total number of 1s even in the pack.
9. What is the parity of the binary representation of -23 in an 8 bit system?
a) Even
b) Odd
c) Can’t be determined directly
d) Adequate data not given
View Answer
Answer: b
Explanation: Parity bits are used to detect error in a data transmission system. Based on the number of 1s in a data pack and based on the parity system, the last parity bit is added. The binary
representation of 23 is 00010111. Thus by 2’s complement method, we get -23 as its 2’s complement. Hence, -23 is equivalent to binary 11101001 in an 8 bit system. This value has 5 1s in it. Thus, the
parity is odd.
10. A sequential odd parity checker checked a data pack and added a 1 after it. Which one of the following data packs can be the possible one?
a) 111010
b) 010101
c) 111000
d) 100000
View Answer
Answer: a
Explanation: Parity bits are used to detect error in a data transmission system. Based on the number of 1s in a data pack and based on the parity system, the last parity bit is added. Here the first
data pack alone consists of even number of 1s. So, the odd parity checker will add a 1 after it to make the number of 1s odd.
11. For which one of the following, sequential parity checker is useful?
a) Analog Communication
b) Digital Communication
c) Both analog or digital
d) In specific cases of digital communication
View Answer
Answer: b
Explanation: Parity bits are used to detect error in a data transmission system. Based on the number of 1s in a data pack and based on the parity system, the last parity bit is added. As we send bit
streams in case of digital communication only thus, the answer will be digital communication.
Sanfoundry Global Education & Learning Series – Logic Design.
To practice all areas of Logic Design, here is complete set of 1000+ Multiple Choice Questions and Answers.
|
{"url":"https://www.sanfoundry.com/logic-design-questions-answers-sequential-parity-checker/","timestamp":"2024-11-03T22:07:40Z","content_type":"text/html","content_length":"137481","record_id":"<urn:uuid:dadffaac-f82c-4322-b038-a4b9f53e4818>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00012.warc.gz"}
|
Frequentism and Bayesianism IV: How to be a Bayesian in Python
I've been spending a lot of time recently writing about frequentism and Bayesianism.
• In Frequentism and Bayesianism I: a Practical Introduction I gave an introduction to the main philosophical differences between frequentism and Bayesianism, and showed that for many common
problems the two methods give basically the same point estimates.
• In Frequentism and Bayesianism II: When Results Differ I went into a bit more depth on when frequentism and Bayesianism start to diverge, particularly when it comes to the handling of nuisance
• In Frequentism and Bayesianism III: Confidence, Credibility, and why Frequentism and Science Don't Mix I talked about the subtle difference between frequentist confidence intervals and Bayesian
credible intervals, and argued that in most scientific settings frequentism answers the wrong question.
Here I want to back away from the philosophical debate and go back to more practical issues: in particular, demonstrating how you can apply these Bayesian ideas in Python. The workhorse of modern
Bayesianism is the Markov Chain Monte Carlo (MCMC), a class of algorithms used to efficiently sample posterior distributions.
Below I'll explore three mature Python packages for performing Bayesian analysis via MCMC:
• emcee: the MCMC Hammer
• pymc: Bayesian Statistical Modeling in Python
• pystan: The Python Interface to Stan
I won't be so much concerned with speed benchmarks between the three, as much as a comparison of their respective APIs. This post is not meant to be a tutorial in any of the three; each of them is
well documented and the links above include introductory tutorials for that purpose. Rather, what I want to do here is a side-by-side comparison which will give a feel for how each package is used.
I'll propose a single relatively non-trivial test problem, and show the implementation and results of this problem using all three packages. Hopefully by seeing the three approaches side-by-side, you
can choose which package might be best suited for your particular application.
For our test problem, we'll do a three-parameter model which fits a straight line to data. The parameters will be the slope, the intercept, and the scatter about the line; the scatter in this case
will be treated as a nuisance parameter.
Let's define some data that we'll work with:
In [1]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
theta_true = (25, 0.5)
xdata = 100 * np.random.random(20)
ydata = theta_true[0] + theta_true[1] * xdata
# add scatter to points
xdata = np.random.normal(xdata, 10)
ydata = np.random.normal(ydata, 10)
In [2]:
plt.plot(xdata, ydata, 'ok')
The data are clearly correlated, and we'll assume that we don't know the errors. Let's construct a linear model to fit this data.
Recall that Bayes' theorem gives
$$ P(\theta~|~D) \propto P(D~|~\theta) P(\theta) $$
Where $D$ represents the observed data, and $\theta$ represents the model.
We'll assume a linear model for our data, parametrized by a slope $\beta$ and a y-intercept $\alpha$:
$$ \hat{y}(x_i~|~\alpha,\beta) = \alpha + \beta x_i $$
Assuming gaussian errors on the observed y values, the probability for any data point under this model is given by:
$$ P(x_i, y_i~|~\alpha, \beta, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[\frac{-[y_i - \hat{y}(x_i~|~\alpha, \beta)]^2}{2\sigma^2}\right] $$
where $\sigma$ here is an unknown measurement error, which we'll treat as a nuisance parameter
Multiplying these for all $i$ gives the likelihood: $$ P(\{x_i\}, \{y_i\}~|~\alpha, \beta, \sigma) \propto (2\pi\sigma^2)^{-N/2} \exp\left[- \frac{1}{2\sigma^2} \sum_{i-1}^N [y_i - \hat{y}(x_i~|~\
alpha,\beta)]^2\right] $$
Here we're going to be a bit more careful about the choice of prior than we've been in the previous posts. We could simply choose flat priors on $\alpha$, $\beta$, and $\sigma$, but we must keep in
mind that flat priors are not always uninformative priors! A better choice is to follow Jeffreys and use symmetry and/or maximum entropy to choose maximally noninformative priors. This is the kind of
exercise in choosing priors is one that frequentists often complain about as overly subjective, but the approach is well-founded and very well-supported from an information theoretic standpoint.
Why might a flat prior a bad choice? This is perhaps easiest to see in the case of slope. Let's visualize this by plotting some lines with slopes evenly-spaced between 0 and 10:
In [3]:
fig, ax = plt.subplots(subplot_kw=dict(aspect='equal'))
x = np.linspace(-1, 1)
for slope in np.arange(0, 10, 0.1):
plt.plot(x, slope * x, '-k')
ax.axis([-1, 1, -1, 1], aspect='equal');
These lines have evenly-spaced slopes in units of 0.1, yet the higher slopes are bunched together. With a flat prior, you're essentially saying that any one of these slopes is just as likely as
another. Due to this bunching seen above, it's clear that a flat prior on slope will highly favor very steep slopes! A flat prior on slope is not a minimally informative prior, and may end up biasing
your result (though with enough data the effect is almost zero).
You might imagine coming up with a better scheme by-hand (perhaps use a flat prior on the angle $\theta$ between the line and the x-axis) but we can be even more rigorous. The following problem has
been well-explored in the Bayesian literature; the best resource I've found is a paper by Jaynes: Straight Line Fitting: A Bayesian Solution (pdf)
If our model is given by
$$ y = \alpha + \beta x $$
then we can construct a parameter-space probability element $P(\alpha, \beta) ~d\alpha ~d\beta$.
Because $x$ and $y$ are symmetric, we could just as easily use another set of parameters
$$ x = \alpha^\prime + \beta^\prime y $$
with probability element $Q(\alpha^\prime, \beta^\prime)d\alpha^\prime d\beta^\prime$, where it's easy to show that
$$ (\alpha^\prime,~\beta^\prime) = (- \beta^{-1}\alpha,~\beta^{-1}). $$
From the Jacobian of the transformation, we can show that
$$ Q(\alpha^\prime, \beta^\prime) = \beta^3 P(\alpha, \beta). $$
Maintaining the symmetry of the problem requires that this change of variables should not affect the prior probability, so we can write:
$$ \beta^3 P(\alpha, \beta) = P(- \beta^{-1}\alpha, \beta^{-1}). $$
This is a functional equation which is satisfied by
$$ P(\alpha, \beta) \propto (1 + \beta^2)^{-3/2}. $$
which is equivalent to saying that $\alpha$ is uniformly distributed, and $\beta$ is distributed uniformly in $\sin\theta$ where $\theta = \tan^{-1}\beta$.
This might surprise you that the slopes are distributed according to $\sin\theta$ rather than uniformly in $\theta$. This $\sin\theta$ term, though, can actually be thought of as coming from the
intercept! If we change variables from $\alpha$ to $\alpha_\perp = \alpha\cos\theta$, then it's straightforward to show that our variables are uniformly distributed in $(\alpha_\perp,~\theta)$. We'll
make use of this fact in the PyStan solution below.
Similarly, we want the prior on $\sigma$ to be invariant to rescalings of the problem (i.e. changing units). So our probability must satisfy
$$ P(\sigma)d\sigma = P(\sigma / c)d\sigma / c. $$
This is a functional equation satisfied by
$$ P(\sigma) \propto 1 / \sigma. $$
This is known as the Jeffreys Prior, after Harold Jeffreys.
Putting these together, we see that symmetry arguments have led to the following minimally informative prior on our model:
$$ P(\alpha, \beta, \sigma) \propto \frac{1}{\sigma}(1 + \beta^2)^{-3/2} $$
As alluded to above, you could equivalently address this by using flat priors on transformed parameters, namely $(\alpha, \beta, \sigma) \to (\alpha_\perp, \theta, \log\sigma)$, but I personally
think the symmetry/maximum entropy approach is cleaner and clearer – it also gives us a chance to demonstrate the definition of nontrivial custom priors within the three packages.
Now that we have the data, the likelihood, and the prior, let's show how to solve this problem in Python using emcee, PyMC, and PyStan. First, though, let's quickly define some convenience routines
which will help us visualize the results:
In [4]:
# Create some convenience routines for plotting
def compute_sigma_level(trace1, trace2, nbins=20):
"""From a set of traces, bin by number of standard deviations"""
L, xbins, ybins = np.histogram2d(trace1, trace2, nbins)
L[L == 0] = 1E-16
logL = np.log(L)
shape = L.shape
L = L.ravel()
# obtain the indices to sort and unsort the flattened array
i_sort = np.argsort(L)[::-1]
i_unsort = np.argsort(i_sort)
L_cumsum = L[i_sort].cumsum()
L_cumsum /= L_cumsum[-1]
xbins = 0.5 * (xbins[1:] + xbins[:-1])
ybins = 0.5 * (ybins[1:] + ybins[:-1])
return xbins, ybins, L_cumsum[i_unsort].reshape(shape)
def plot_MCMC_trace(ax, xdata, ydata, trace, scatter=False, **kwargs):
"""Plot traces and contours"""
xbins, ybins, sigma = compute_sigma_level(trace[0], trace[1])
ax.contour(xbins, ybins, sigma.T, levels=[0.683, 0.955], **kwargs)
if scatter:
ax.plot(trace[0], trace[1], ',k', alpha=0.1)
def plot_MCMC_model(ax, xdata, ydata, trace):
"""Plot the linear model and 2sigma contours"""
ax.plot(xdata, ydata, 'ok')
alpha, beta = trace[:2]
xfit = np.linspace(-20, 120, 10)
yfit = alpha[:, None] + beta[:, None] * xfit
mu = yfit.mean(0)
sig = 2 * yfit.std(0)
ax.plot(xfit, mu, '-k')
ax.fill_between(xfit, mu - sig, mu + sig, color='lightgray')
def plot_MCMC_results(xdata, ydata, trace, colors='k'):
"""Plot both the trace and the model together"""
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
plot_MCMC_trace(ax[0], xdata, ydata, trace, True, colors=colors)
plot_MCMC_model(ax[1], xdata, ydata, trace)
Without further ado, let's do some MCMC.
The emcee package (also known as MCMC Hammer, which is in the running for best Python package name in history) is a Pure Python package written by Astronomer Dan Foreman-Mackey. It is a lightweight
package which implements a fairly sophisticated Affine-invariant Hamiltonian MCMC. Because the package is pure Python (i.e. it contains no compiled extensions) it is extremely easy to install; with
pip, simply type at the command-line
[~]$ pip install emcee
Emcee does not have much specific boilerplate code; it simply requires you to pass it a Python function which returns a value proportional to the log-posterior probability, and returns samples from
that posterior. Here's how we solve the above problem with emcee:
In [5]:
import emcee
In [6]:
# Define our posterior using Python functions
# for clarity, I've separated-out the prior and likelihood
# but this is not necessary. Note that emcee requires log-posterior
def log_prior(theta):
alpha, beta, sigma = theta
if sigma < 0:
return -np.inf # log(0)
return -1.5 * np.log(1 + beta ** 2) - np.log(sigma)
def log_likelihood(theta, x, y):
alpha, beta, sigma = theta
y_model = alpha + beta * x
return -0.5 * np.sum(np.log(2 * np.pi * sigma ** 2) + (y - y_model) ** 2 / sigma ** 2)
def log_posterior(theta, x, y):
return log_prior(theta) + log_likelihood(theta, x, y)
In [7]:
# Here we'll set up the computation. emcee combines multiple "walkers",
# each of which is its own MCMC chain. The number of trace results will
# be nwalkers * nsteps
ndim = 3 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nburn = 1000 # "burn-in" period to let chains stabilize
nsteps = 2000 # number of MCMC steps to take
# set theta near the maximum likelihood, with
starting_guesses = np.random.random((nwalkers, ndim))
In [8]:
# Here's the function call where all the work happens:
# we'll time it using IPython's %time magic
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[xdata, ydata])
%time sampler.run_mcmc(starting_guesses, nsteps)
CPU times: user 5.73 s, sys: 13 ms, total: 5.75 s
Wall time: 5.76 s
In [9]:
# sampler.chain is of shape (nwalkers, nsteps, ndim)
# we'll throw-out the burn-in points and reshape:
emcee_trace = sampler.chain[:, nburn:, :].reshape(-1, ndim).T
plot_MCMC_results(xdata, ydata, emcee_trace)
On the left we show the resulting traces marginalized over the nuisance parameter $\sigma$. On the right, we show the line of best-fit along with the 2-$\sigma$ uncertainty region. This is exactly
the type of result we expect from MCMC: marginalized uncertainty contours around a model which provides a good by-eye fit to the data.
The PyMC package has many more features than emcee, including built-in support for efficient sampling of common prior distributions. PyMC by default uses the classic Metropolis-Hastings sampler, one
of the earliest MCMC algorithms. For performance, it uses compiled fortran libraries, so it is less trivial to install using tools like pip. My machine has a working fortran compiler, so pip install
pymc worked without a problem (but from working with students, colleagues, and tutorial attendees, I can tell you that few scientists today have a system setup such that this will work
out-of-the-box). For folks who don't have a fortran compiler installed, PyMC binaries for many systems can be quite easily installed with conda.
I should mention that the future PyMC version 3 removes fortran dependence and makes the installation much more streamlined; I've also been told that the API of PyMC 3 is much cleaner, and that
performance is much better. Because PyMC 3 is still listed as an alpha release, I've decided to stick with the current supported release for this post:
In [10]:
import pymc
In [11]:
# Define the variables needed for the routine, with their prior distributions
alpha = pymc.Uniform('alpha', -100, 100)
def beta(value=0):
return -1.5 * np.log(1 + value ** 2)
def sigma(value=1):
return -np.log(abs(value))
# Define the form of the model and likelihood
def y_model(x=xdata, alpha=alpha, beta=beta):
return alpha + beta * x
y = pymc.Normal('y', mu=y_model, tau=1. / sigma ** 2, observed=True, value=ydata)
# package the full model in a dictionary
model1 = dict(alpha=alpha, beta=beta, sigma=sigma,
y_model=y_model, y=y)
In [12]:
# run the basic MCMC: we'll do 100000 iterations to match emcee above
S = pymc.MCMC(model1)
S.sample(iter=100000, burn=50000)
[-----------------100%-----------------] 100000 of 100000 complete in 17.1 sec
In [13]:
# extract the traces and plot the results
pymc_trace = [S.trace('alpha')[:],
plot_MCMC_results(xdata, ydata, pymc_trace)
We get traces very similar to those provided by emcee.
The PyStan project is the official Python wrapper of the Stan Probabilistic programming language, which is implemented in C++. It uses a No U-Turn Sampler, which is more sophisticated than classic
Metropolis-Hastings or Gibbs sampling. As far as API goes, the important difference between PyStan as compared to emcee and PyMC is that it requires you to write and compile non-Python code within
your Python script when defining your model.
Because PyStan depends on the Stan package, installation can be difficult. In particular, you need the full Stan library to be installed in order to use the python wrapper. If you have a well-set-up
environment for compiling C/C++ code, this shouldn't be a problem – using pip install pystan downloaded, compiled, and installed everything necessary on my system.
From what I could find online, it doesn't appear that the Stan team provides pre-built binaries, including for conda. Because of this, if you don't have your system set up for C/C++ compilation,
installation will probably be more difficult.
I'm using version 2.2 of PyStan:
In [14]:
import pystan
There is one more hiccup with this package: for some reason, PyStan runs extremely slowly and crashes my computer when I run it in the IPython notebook itself, but works without issues when I run it
as an executable Python file. I'm not sure why this is but it may have something to do with this issue. To get around this, I'll save the PyStan script as a standalone Python file and execute it from
the command-line, writing the resulting traces to disk:
In [15]:
%%file pystan_example.py
import numpy as np
import pystan
# Generate data (same as used in the notebook)
theta_true = (25, 0.5)
xdata = 100 * np.random.random(20)
ydata = theta_true[0] + theta_true[1] * xdata
# add scatter to points
xdata = np.random.normal(xdata, 10)
ydata = np.random.normal(ydata, 10)
# Create the Stan model
# this is done by defining a string of Stan code.
fit_code = """
data {
int<lower=0> N; // number of points
real x[N]; // x values
real y[N]; // y values
parameters {
real alpha_perp;
real<lower=-pi()/2, upper=pi()/2> theta;
real log_sigma;
transformed parameters {
real alpha;
real beta;
real sigma;
real ymodel[N];
alpha <- alpha_perp / cos(theta);
beta <- sin(theta);
sigma <- exp(log_sigma);
for (j in 1:N)
ymodel[j] <- alpha + beta * x[j];
model {
y ~ normal(ymodel, sigma);
# perform the fit
fit_data = {'N': len(xdata), 'x': xdata, 'y': ydata}
fit = pystan.stan(model_code=fit_code, data=fit_data, iter=25000, chains=4)
# extract the traces
traces = fit.extract()
pystan_trace = [traces['alpha'], traces['beta'], traces['sigma']]
# save the traces with numpy
np.save("pystan_trace.npy", pystan_trace)
Overwriting pystan_example.py
In [16]:
# run the code we've created on the command-line
!python pystan_example.py
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_c1dba2ed7f485b674d7ce5eb738ffe05 NOW.
INFO:pystan:NOW ON CHAIN 0
INFO:pystan:NOW ON CHAIN 1
INFO:pystan:NOW ON CHAIN 2
INFO:pystan:NOW ON CHAIN 3
Iteration: 1 / 25000 [ 0%] (Warmup)
Informational Message: The current Metropolis proposal is about to be rejected becuase of the following issue:
Error in function stan::prob::normal_log(N4stan5agrad3varE): Scale parameter is 0:0, but must be > 0!
If this warning occurs sporadically, such as for highly constrained variable types like covariance matrices, then the sampler is fine,
but if this warning occurs often then your model may be either severely ill-conditioned or misspecified.
Iteration: 25000 / 25000 [100%] (Sampling)
Elapsed Time: 0.664267 seconds (Warm-up)
0.724006 seconds (Sampling)
1.38827 seconds (Total)
Iteration: 25000 / 25000 [100%] (Sampling)
Elapsed Time: 0.6626 seconds (Warm-up)
0.781297 seconds (Sampling)
1.4439 seconds (Total)
Iteration: 25000 / 25000 [100%] (Sampling)
Elapsed Time: 0.669715 seconds (Warm-up)
0.799694 seconds (Sampling)
1.46941 seconds (Total)
Iteration: 25000 / 25000 [100%] (Sampling)
Elapsed Time: 0.667423 seconds (Warm-up)
0.795613 seconds (Sampling)
1.46304 seconds (Total)
As we can see, the execution takes ~6 sec in serial to draw 100,000 samples. Additionally, on my machine, the compilation phase takes about ~20 seconds before the model is run.
In [17]:
# load the results from file; plot as above
pystan_trace = np.load('pystan_trace.npy')
plot_MCMC_results(xdata, ydata, pystan_trace)
Again, the results look similar to what we've seen above.
As a more direct comparison of the results, let's quickly over-plot the contours derived by the three methods:
In [18]:
fig, ax = plt.subplots(figsize=(8, 8))
plot_MCMC_trace(ax, xdata, ydata, emcee_trace, True,
colors='blue', linewidths=2)
plot_MCMC_trace(ax, xdata, ydata, pymc_trace,
colors='red', linewidths=2)
plot_MCMC_trace(ax, xdata, ydata, pystan_trace,
colors='green', linewidths=2)
ax.legend(ax.collections[::2], ['emcee', 'pymc', 'pystan'], fontsize=16);
As we would expect, the results agree! This indicates that we've defined the models consistently in all three packages. Additionally, we see that the "true" values used to generate the distribution
of points (25, 0.5) fall within the 1-$\sigma$ error ellipse.
So which package is the best? The answer to that will likely depend on what your problem is. Here's a quick table summarizing my own impressions of the three packages:
Complexity Execution time (100,000 samples; Ease of Installation Learning Curve/Ease of Use Number of Features
includes burn-in)
emcee Very lightweight ~6 sec Pure python; installs easily with pip Straightforward & Pythonic not much built-in beyond basic MCMC
pymc2 Lots of functionality & options ~17 sec Requires fortran compiler; binaries Pythonic, but lots of pymc-specific Lots of built-in functionality in
available via conda boilerplate Python
pystan Large package; requires coding ~20 sec compilation + ~6 sec Requires C compiler + Stan installation; Not pure Python; must learn Stan model Lots of built-in functionality in
in Stan language computation no binaries available specification language Stan-specific language
More verbosely:
emcee is extremely lightweight, and that gives it a lot of power. All you need to do is define your log-posterior (in Python) and emcee will sample from that distribution. Because it's pure-Python
and does not have specially-defined objects for various common distributions (i.e. uniform, normal, etc.) I thought it might be slower than the others, but its performance on this problem is
impressive. This is perhaps due to the more sophisticated default sampling scheme, so the benchmark may not be a fair comparison.
pymc is more full-featured, and once you get the hang of the decorator syntax for defining variables, distributions, and derived quantities, it is very easy to use. Its performance lagged the other
two: the same query took several times longer, despite having optimized objects for sampling from various priors. From what I hear, though, the still-in-alpha PyMC version 3 – a complete rewrite of
the package – blows PyMC version 2 out of the water.
pystan is the most difficult of the three to use, but that's because it's not really a Python package. Models are specified not in Python, but in a custom statistical expression language. This
language is very powerful, but has a steep learning curve. The ~20sec compilation time for each model is annoying, but I suspect that as models get more complicated and sample sizes grow, this won't
seem like such an issue. The fact that Stan is specifically designed for this type of operation, and the fact that all its models compile directly to byte code, makes it seem like a reasonable choice
for large, specialized computations.
I wouldn't put much weight on the timings here; as mentioned above, the packages use very different default sampling schemes (Metropolis-Hastings for PyMC, no U-Turn sampling for PyStan, and
affine-invariant ensemble sampling for emcee) so the times are not directly comparable. Each of these sampling schemes has its advantages and disadvantages, which you can read about in the links in
above sections.
I hope that I've given a fair comparison of these packages here. Note that I'm not an expert or contributor to any of these packages: I've simply read the online documentation and adapted the
tutorial examples to the specific problem I proposed here. If any power users or package developers are reading this and have feedback about how to better use or represent any of these packages, I'd
love to hear your response! I'd say two things:
1. Please write a blog comment and let me know how to use your package more effectively!
2. Please realize that the average user likely uses your package as I did here: by reading the intro tutorials, and adapting those examples to their own problems. If that has led me to do something
silly, it's likely that many of your users are making the same silly mistake!
Thanks for reading!
This post was written entirely in the IPython notebook. You can download this notebook, or see a static view on nbviewer.
|
{"url":"http://jakevdp.github.io/blog/2014/06/14/frequentism-and-bayesianism-4-bayesian-in-python/","timestamp":"2024-11-07T05:52:42Z","content_type":"text/html","content_length":"393798","record_id":"<urn:uuid:bd1c5291-f19d-4ebb-b47e-c47afdae743c>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00107.warc.gz"}
|
1328. Fireball
Today almost everybody knows about the scientific research department of the Night Watch, located in Solovetz city. Due to the artful actions of Zavulon (the boss of Day Watch, you know) this
Scientific Research Institute for Thaumaturgy and Spellcraft (SCITS) was absolutely declassified and removed from secret list already in 60s.
However this fact had not made any harm to its ability of research work. For example right now 3^rd-level wizard Vitka Korneev tests a new battle-spell of fireball in his lab in SCITS. Oh… fireball
is such a ball of fire that is used for… m-m-m… for neutralization of undesirable consequences.
New fireball appeared to be just an ingenious invention! First of all due to the incongruence of transgression inside the incub-transformation’s psy-field it has a zero radius. But its greatest
characteristic is the ability of remaining stable during the predefined number of collisions with obstacles. This characteristic is called N-stability: fireball is N-stable if it stays stable after N
collisions but explodes after (N + 1)^th collision. So, you may consider, that N-stable fireball loses one level of stability and becomes (N − 1)-stable after each collision with a wall. For example
ordinary fireball is 0-stable. So with this invention it became possible to strike an enemy with fireball after several ricochets from the walls. So the military value of this invention is beyond
questions. In addition, new N-stable fireball (N > 0) has quite unusual behavior: After collisions it rebounds only from concrete walls! So, it easily flies through any other obstacles. (The theory
ties this fact with the accumulation of bio-emotional energy by all static constructions of living quarters). This fact, as you can guess, causes additional military value of new invention: now it is
not necessary to provide clear trajectory for the thrown fireball — it will fly thwough any obstacles before it damages the target.
But it is long way from the first prototype to the mass usage. First of all it is necessary to investigate the trajectory of the fireball flight. The following experiment is prepared for this
purpose: in the rectangle room two points A and B are being chosen at random. One wizard stands at the point A and the target is placed at the point B. Wizard creates N-stable fireball while his
assistant calculates the direction of throw with the help of special program. The direction of throw is selected such way that thrown fireball rebounds from the walls exactly N times and then hits
the target. At the same time it should do this with the shortest trajectory (i.e. as quickly as possible).
So, you are to write this special program for direction calculation. The scientists of SCITS tell that all fireballs rebound from the walls according to the law: “angle of incidence equals angle of
reflection”. And after collision with room’s corner it rebounds exactly in the opposite direction. Moreover, the theory of fireballs says that, due to continuity, one collision with corner equals two
collisions with walls. So, 2-stable fireball explodes after the second collision if the first was with room’s corner. And finally you may assume that the fireball is a point moving in straight lines
with constant velocity.
The first line contains two numbers — width and length of the room. The second line contains the number N — N-stability of fireball. The third line contains four numbers — coordinates of points A and
All numbers are integers and are separated by one or more spaces. Points A and B lie inside the room but not on its border. The room’s width and length do not exceed 1000 and are greater than 1. N is
between 0 and 10 inclusive.
Angle in degrees (with 2 digits after decimal point), that gives the desired direction of fireball. If there are several such angles your program should output the minimal one.
Angle and coordinates are measured as shown on the figure.
input output
3 45.00
Characters and background are taken from books “Monday Begins on Saturday” (Arkady and Boris Strugatsky) and tetralogy “Night Watch”, “Day Watch”, “Twilight Watch” and “Final Watch” (Sergey
Problem Author: Pavel Egorov
Problem Source: The 10th Collegiate Programming Contest of the High School Pupils of the Sverdlovsk Region (October 16, 2004)
|
{"url":"https://timus.online/problem.aspx?space=1&num=1328","timestamp":"2024-11-13T02:22:57Z","content_type":"text/html","content_length":"10336","record_id":"<urn:uuid:c9ee2b06-8513-4a7e-b933-b04fbd11f294>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00165.warc.gz"}
|
Against the odds: a caution to practical Bayesians
Lots of people I know like to use Bayes' Theorem in their daily life to estimate the odds that various statements are true. See http://betterexplained.com/articles/
understanding-bayes-theorem-with-ratios/ for a quick tutorial. This post is about a pitfall to watch out for once you're feeling pretty good about applying Bayes' Theorem.
Upshot: if you do multiple Bayesian updates on the odds of a proposition (i.e. P(X):P(¬X)), then you're probably making errors unless X and ¬X are pretty simple hypotheses.
A typical example of using Bayes' Theorem: I randomly choose either a fair coin or a double-headed coin. I flip it 3 times and get heads each time. What's your credence that it's the double-headed
coin? The initial odds fair:double-headed are assumed 1:1. A fair coin will produce heads with probability 0.5, and a double-headed coin will produce heads with probability 1, so the likelihood ratio
to multiply by every time you see a heads is 0.5:1 = 1:2. So after seeing 3 heads, the fair:double-headed odds are 1:1 × 1:2 × 1:2 × 1:2 = 1:8, so there's a 8/(8+1) = 89% chance it's double-headed.
Let's try a more complicated example. Suppose somebody stole my lunch from the fridge. I know the culprit was one of my three "friends", Alice (A), Bob (B), or Charlie (C). I don't really care much
about Bob or Charlie, but I'm considering starting a business with Alice, so I really want to know whether it was Alice or not. That is, the odds I care about are Alice-did-it:Alice-didn't-do-it, P
Initially, I have no reason to suspect any one of them more than the others, so my odds are 1:2 in Alice's favor; there's a 1/3 chance that Alice is the culprit. Luckily, there were two witnesses
exonerating Alice. Suppose that the witnesses are independent, and each witness identifies the true thief with 80% probability, and otherwise names one of the innocent friends (with equal
probability, so 10% each). If a witness names someone other than Alice as the thief, what does that tell me? If Alice did it, there's a 10% chance of hearing that, and if Alice didn't do it, there's
a 90% chance, so a witness testimony clearing Alice contributes a likelihood ratio of 1:9. That means that the odds after two testimonies are 1:2 × 1:9 × 1:9 = 1:162, so a 1/163 = 0.6% chance that
Alice took my lunch, right?
WRONG! Our fixation on Alice messed us up. Let's go back and keep track of all the hypotheses, so the initial odds of the culprit being Alice:Bob:Charlie are 1:1:1. If a witness says Bob took the
lunch, it contributes 1:8:1. If a second witness says Bob did it, the odds are 1:64:1, so there's a 1/(1+64+1) = 1.5% chance Alice did it; significantly more than our previous estimate of 0.6%.
Worse, if one witness says Bob and the other says Charlie, the odds are 1:8:8, which is a 1/(1+8+8) = 5.9% chance it was Alice.
What went wrong? First of all, that 1:9 likelihood ratio is bogus. If we condition on the thief not being Alice, the probablity of a witness saying some specific person other than Alice (say Bob) did
it is really $$ P(witness_B|¬A) = 0.8×P(B|¬A) + 0.1×P(C|¬A) = 0.8×0.5 + 0.1×0.5 = 0.45, $$
so the likelihood ratio should have been 10:45 = 2:9 instead of 1:9. But using this corrected ratio still gives us the wrong answer for a deeper reason: if you condition on ¬Alice, then the witness
testimonies are not logically independent! Once I hear one testimony against Bob, then when computing $P(witness_B|¬A)$, $P(B|¬A)$ and $P(C|¬A)$ aren't 0.5 anymore. Note that this is consistent with
the assumption that the witness testimonies are causally independent.
This problem arises whenever your evidence differentiates between different sub-hypotheses. What to do about it? I think the best answer is to use unnormalized probability distributions, like we did
to get the right answer above. Of course, it's infeasible to keep track of all possible states of the universe, but maybe we can develop decent intuitions about what the hypothesis space should be.
Most people I've talked to about Alice recognize that there should be a difference between witness testimonies agreeing and disagreeing. I'd be curious if there was an example where it really feels
right to do the wrong thing.
|
{"url":"https://stacky.net/wiki/index.php?title=Against_the_odds:_a_caution_to_practical_Bayesians","timestamp":"2024-11-13T23:12:15Z","content_type":"text/html","content_length":"25114","record_id":"<urn:uuid:4060a383-fbc9-41d9-afc9-53ea9ebb4aec>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00662.warc.gz"}
|
Math-Physics Tutoring
Linear Algebra and Vector Spaces, Ordinary and Partial Differential Equations, Integral Equations (regular and singular), Perturbation Techniques and Asymptotics for linear and nonlinear systems,
Energy-Based Dynamics and Variational Calculus, Green's Function constructions for canonical and arbitrary spatial domains, Potential Theory and Laplace's Equation and their generalization to handle
wave propagation in Fluid Mechanics, Acoustics, Electromagnetics, Optics (geometric or otherwise), and in Vibration-Elasticity Theory for Dispersive and Nondispersive systems, Scattering and
Diffraction, Transform Methods, Time- and Frequency-Domain formulations and solutions, Complex Variables, the Wiener-Hopf Technique, Correlation Models and Ensembles, Uncertainty Quantification via
generalized Polynomial Chaos (gPC), Statistically Stationary and Non-Stationary Processes and their frequency-wavenumber spectra, Tensors and Differential Geometry.
* Student clients for advanced private teaching are encouraged to submit a sample problem a priori. Our purpose is to maximize each meeting's efficiency so that the student will get the greatest
'bang for his/her buck'. Note also our Mission Statement for tutoring, which follows.
Top Canonical: Mission statement and philosophy for tutoring in mathematical physics
The process of acquiring mathematical intuition involves repeated constructions and deconstructions leading to seemingly tangible visualizations of the previously unimaginable. This exciting
metamorphosis also achieves for the individual a kind of cultural and historical connectivity: because it re-enacts humanity's longing for powerful symbols – a longing that each age has only partly
satisfied through increasingly higher levels of abstraction.
We'll return to this business of tearing down and rebuilding momentarily. But we first recall a few famous examples of how the history of the development of the symbolic language of mathematical
physics has painfully mirrored its learning by nearly everyone who comes to it in its modern form:
(1) Newton gave irrelevant definitions of functional continuity because he lacked the special shorthand of set theory, which finally evoked the needed images;
(2) Euler routinely mangled the product of complex numbers because he inconsistently expressed the imaginary unit in terms of both its present day stand-in letter, 'i' (or 'j'), and its much less
transportable original version √-1;
(3) Maxwell's ignorance of the eventual curl operator made his synthesis of Ampere's Law and Faraday's Law into a coherent electrodynamics nearly impossible to apply – though his archaic un-distilled
notation had in fact unified electricity, magnetism, and optics;
(4) Lorentz's narrow view of his similarly concealing terminology left him to die believing in an ether medium that his own transformations had rendered superfluous 25 years before;
(5) In the years immediately following Special Relativity (1905), Einstein complained petulantly about how mathematicians had cast his analysis into groups of analogous 4-dimensional vectors. And yet
he laboriously went on to master a far more abstract set of equally long (i.e., with 4 components) second-order tensors. The result was his monumental geometrization of gravity in General Relativity
Now finally about those earlier comments about mathematical constructions and their opposite. We believe that an effective program of one-on-one tutoring in mathematical physics should be based on
deliberate reversals in the ultra-condensation of its daunting language. For each short project, or topic, we first consider a set of representative problems through a primitive and down-to-earth
vernacular made more natural to the student by her-his initial capacity to visualize. We quickly run through the correspondingly awkward solutions.
We next set the analytical paths to those solutions side by side and 'discover' how their commonality exposes a more economic set of operations that until now had remained hidden – a shorter set of
symbols that by their compact form are not only far more convenient, but that also become significant through their capacity to unify, simplify, and thereby illuminate.
Those willing to undergo this imaginative ritual of deconstruction and reconstruction of a mathematical object invariably end up owning its underlying concept. Bizarre abstractions that initially
seemed gratuitous now become compelling and effectively touchable. Students feel empowered and look forward to the next challenge. Everything now appears to be within their reach.
Needless to say that the implementation of this basic approach is a strong function of the topic's level, maturity of each student, and other lesser subtleties. Regardless of where you are, we hope
that you'll choose it, and us at Top Canonical, in an inevitable and exhilarating reliving of mathematical-physical history – as we tackle together both old and new problems while standing on the
shoulders of those linguistically imperfect giants who tried so hard to describe this perfect world.
|
{"url":"https://topcanonical.com/index.html","timestamp":"2024-11-11T22:35:37Z","content_type":"text/html","content_length":"17186","record_id":"<urn:uuid:90a782e4-6518-4cdd-8946-3328df488fa2>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00476.warc.gz"}
|
How to make special functions/orthogonal polynomials as callable symbolic expression.
How to make special functions/orthogonal polynomials as callable symbolic expression.
I'd like to make some special functions/orthogonal polynomials as callable symbolic expression. However, those functions always remind me the argument is not an integer.
var('n a x')
f(x) = gen_laguerre(n,a,x)
TypeError: unable to convert x (=n) to an integer
, and
var('n x')
g(x) = spherical_bessel_J(n, x)
TypeError: unable to convert x (=n) to an integer
Even if I tried the "domain" keyword, there's still the same problem:
var('n', domain=ZZ)
var('a x')
f(x) = gen_laguerre(n,a,x)
TypeError: unable to convert x (=n) to an integer
How do I reassure those functions that I will give integers to n later in each calculation?
3 Answers
Sort by ยป oldest newest most voted
gen_laguerre wraps maxima's function with the same name. In maxima, if you enter gen_laguerre(5,6,x), for example, you get a polynomial in x, and this is what the sage function is intended for.
If you enter in maxima gen_laguerre(n,a,x) it will accept it and spit gen_laguerre(n,a,x) back at you, and will later know how to differentiate it with respect to x, for example, but this capability
is currently not wrapped in sage. You can work directly with maxima objects:
sage: f = maxima('gen_laguerre(n,a,x)')
sage: f.diff(x)
but if you only intend to evaluate your function later, and not use symbolic manipulations (such as diff), you can create a python function instead of a symbolic object:
sage: f = lambda n,a,x:gen_laguerre(n,a,x)
sage: f(3,4,5)
BTW, this is the code of gen_laguerre:
sage_eval(maxima.eval('gen_laguerre(%s,%s,x)'%(ZZ(n),a)), locals={'x':x})
so you see that n is evaluated to an integer. If you do
instead you will just get a string saying gen_laguerre(n,a,x) which is not very useful.
Another thing you can do is to initialize a symbolic function with its derivative etc. See Ticket #11143 for a lot of discussion of how to do this. Of course, we'd love to have you contribute that to
Sage in that case so that others can use it!
There is a patch attached to #9706 which fixes this. It's quite old, so it might need some manual care. It would be great if someone started working on that ticket again. I don't think Stefan has
time any more.
|
{"url":"https://ask.sagemath.org/question/8446/how-to-make-special-functionsorthogonal-polynomials-as-callable-symbolic-expression/","timestamp":"2024-11-13T19:52:05Z","content_type":"application/xhtml+xml","content_length":"64600","record_id":"<urn:uuid:29989309-d9b6-4cdc-9d40-ceaa38bcbb65>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00215.warc.gz"}
|
Bayesian methods
Generative modeling or predictive modeling?
The terms inference and prediction both describe tasks where we learn from data in a supervised manner in order to find a model that describes the relationship between the independent variables and
the outcome. Inference and prediction, however, diverge when it comes to the use of the resulting model: Inference: Use the model to learn about the data generation process. Prediction: Use the model
to predict the outcomes for new data points.
|
{"url":"https://www.datascienceblog.net/tags/bayesian/","timestamp":"2024-11-08T16:01:41Z","content_type":"text/html","content_length":"32384","record_id":"<urn:uuid:2472154f-c5e7-4b14-961c-f2189678b669>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00294.warc.gz"}
|
First-passage percolation, semi-directed bernoulli percolation, and failure in brittle materials
We present a two-dimensional, quasistatic model of fracture in disordered brittle materials that contains elements of first-passage percolation, i.e., we use a minimum-energy-consumption criterion
for the fracture path. The first-passage model is employed in conjunction with a "semi-directed" Bernoulli percolation model, for which we calculate critical properties such as the correlation length
exponent v^sdir and the percolation threshold p[c]^sdir. Among other results, our numerics suggest that v^sdir is exactly 3/2, which lies between the corresponding known values in the literature for
usual and directed Bernoulli percolation. We also find that the well-known scaling relation between the "wandering" and energy fluctuation exponents breaks down in the vicinity of the threshold for
semi-directed percolation. For a restricted class of materials, we study the dependence of the fracture energy (toughness) on the width of the distribution of the specific fracture energy and lind
that it is quadratic in the width for small widths for two different random fields, suggesting that this dependence may be universal.
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Mathematical Physics
• Brittle materials
• First-passage percolation
• Fracture
• Semi-directed percolation
Dive into the research topics of 'First-passage percolation, semi-directed bernoulli percolation, and failure in brittle materials'. Together they form a unique fingerprint.
|
{"url":"https://collaborate.princeton.edu/en/publications/first-passage-percolation-semi-directed-bernoulli-percolation-and","timestamp":"2024-11-08T13:00:54Z","content_type":"text/html","content_length":"52372","record_id":"<urn:uuid:11b8458b-b80b-47fb-aacc-90617975260f>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00499.warc.gz"}
|
KSEEB Solutions for Class 9 Maths Chapter 12 Circles Ex 12.2
Students can Download Class 9 Maths Chapter 12 Circles Ex 12.2 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 9 Maths helps you to revise the complete Karnataka State Board Syllabus and
to clear all their doubts, score well in final exams.
Karnataka State Syllabus Class 9 Maths Chapter 12 Circles Ex 12.2
Question 1.
Recall that two circles are congruent if they have the same radii. Prove that equal chords of congruent circles subtend equal angles at their centres.
Data : Two circles having centres O and O’ and having equal radius. Chord AB = CD.
To Prove: Equal chords of congruent circles subtend equal angles at their centres.
Or ∠AOB = ∠CO’D
Proof: In ∆AOB and ∆CO’D,
OA = O’C (Data)
OB = O’D (Data)
AB = CD (Data)
∴ ∆AOB ≅ ∆CO’D
∴ ∠AOB = ∠CO’D.
Question 2.
Prove that if chords of congruent circles subtend equal angles at their centres, then the chords are equal.
Circle with centres A and B are congruent and ∠PAQ = ∠RBS
To Prove PQ = RS.
Proof: In ∆APQ and ∆BRS,
AP = BR ……… radii of congruent circles
AQ = BS are equal
∠PAQ = ∠RBS (Data)
∴ ∆PQR ≅ ∆BRS (SAS postulate)
∴ PQ = RS.
|
{"url":"https://kseebsolutions.in/kseeb-solutions-for-class-9-maths-chapter-12-ex-12-2/","timestamp":"2024-11-04T04:44:43Z","content_type":"text/html","content_length":"131892","record_id":"<urn:uuid:1facc352-5c67-4b42-a8c2-10f98f2b4805>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00238.warc.gz"}
|
In this study, a new formulation for phase based electrical properties tomography (EPT) has been proposed to eliminate the boundary artifact issue and to provide robustness against noise. The
feasibility of the proposed method, which is called "generalized phase based EPT", has been demonstrated using simulation, experimental phantom, and in vivo experiments.
It has been shown that electrical conductivity ($$$\sigma$$$) can be imaged using only MR phase
, without B1-mapping. This approach (called “phase based EPT”) also does not use the transceive phase assumption (TPA), i.e. approximating the transmit phase as half of the transceive phase, and
therefore, it can be used for any transmit-receive coil configuration. With these advantages, phase based EPT seems to be the most suitable EPT method for clinical applications. However, in its
current form, boundary artifact and low SNR issues precludes the practical application of this method. In this study, we have proposed a new formulation for phase based EPT (called “generalized phase
based EPT”) to eliminate the boundary artifact and to provide robustness against noise. We have demonstrated the feasibility of the proposed method using simulation, phantom, and in vivo experiments.
Convection-reaction equation based MREPT (cr-MREPT) formula^4-5 can be written in its logarithm form as
$$$\beta^{\pm }\cdot\nabla\ln(\gamma)-\nabla^{2}B^{\pm}_{1}+i\omega\mu_{0}\gamma{B^{\pm}_{1}}=0$$$
where $$$γ=σ+iωϵ$$$,$$$∇ln(γ)=\begin{bmatrix}∂ln(γ)/∂x\\∂ln(γ)/∂y\\∂ln(γ)/∂z\end{bmatrix}$$$,$$$\beta^{\pm}=\begin{bmatrix}\frac{\partial{B^{\pm }_{1}}}{\partial{x}}\mp{i}\frac{\partial{B^{\pm}_{1}}}
{\partial{y}}+\frac{1}{2} \frac{\partial{B_{z}}}{\partial{z}}\\\pm{i}\frac{\partial{B^{\pm}_{1}}}{\partial{x}}+\frac{\partial{B^{\pm}_{1}}}{\partial{y}}\pm{i}\frac{1}{2}\frac{\partial{B_{z}}}{\
partial{z}}\\-\frac{1}{2}\frac{\partial{B_{z}}}{\partial{x}}\mp{i}\frac{1}{2}\frac{\partial B_{z}}{\partial{y}}+\frac{\partial{B^{\pm }_{1}}}{\partial{z}}\end{bmatrix}$$$
Substituting $$$B^{\pm}_{1}=\left|B^{\pm}_{1}\right|e^{i\phi^{\pm}}$$$ in the above equation, and assuming $$$\nabla\left|B^{\pm}_{1}\right|=0 $$$ yield transmit (or receive) phase based EPT formula
$$$\Omega^{\pm }\cdot\nabla\ln(\gamma)+(((\frac{\partial{\phi^{\pm}}}{\partial{x}})^2+(\frac{\partial{\phi^{\pm}}}{\partial{y}})^2+(\frac{\partial{\phi^{\pm}}}{\partial{z}})^2)-i\nabla^{2}\phi^{\pm})
where $$$\Omega^{\pm}=\begin{bmatrix}{i}\frac{\partial{{\phi^{\pm}}}}{\partial{x}}\pm\frac{\partial{\phi^{\pm}}}{\partial{y}}\\\mp\frac{\partial{\phi^{\pm}}}{\partial{x}}+i\frac{\partial{\phi^{\pm}}}
{\partial{y}}\\+i\frac{\partial{\phi^{\pm}}}{\partial{z}}\end{bmatrix}+\frac{1}{B^{\pm}_{1}}\begin{bmatrix}\frac{1}{2} \frac{\partial{B_{z}}}{\partial{z}}\\\pm{i}\frac{1}{2}\frac{\partial{B_{z}}}{\
partial{z}}\\-\frac{1}{2}\frac{\partial{B_{z}}}{\partial{x}}\mp{i}\frac{1}{2}\frac{\partial B_{z}}{\partial{y}}\end{bmatrix}$$$
Assuming the gradients of B[z] are negligible compared to $$$B^{\pm}_{1}$$$, the second term of $$$\Omega^{\pm}$$$ can be neglected compared to the first term. The rest are the transmit (or receive)
phase components, which cannot be measured directly via MRI. To obtain the equation in terms of measurable transceive phase, i.e. $$$\phi^{tr}=\phi^{+}+\phi^{-}$$$, we can sum the transmit and
receive phase based formulae, and this gives us
where $$$k_{real}$$$ has no imaginary components. Writing only the imaginary terms, which are related with the conductivity, and assuming that $$$\sigma^2\gg(\omega\epsilon)^2$$$ yields
where $$$\rho=\frac{1}{\sigma}$$$ (resistivity). This equation is in the form of convection-reaction-diffusion equation, and the coefficients are solely based on the transceive phase. Since the
convection term $$$(\nabla\phi^{tr}\cdot\nabla\rho)$$$ dominates the diffusion term (no diffusion term) in our formulation, the solution will have unwanted spurious oscillations near interior and
boundary layers^6-7. Therefore, an artificial diffusion term $$$(-c\nabla^2\rho)$$$ is added for the purpose of stabilization, and the governing equation of the generalized phase based EPT method
will be
where c is the constant diffusion coefficient.
Apart from the previous cr-MREPT studies
, the governing equation is solved for $$$\rho$$$ using finite difference scheme. This is important because since the measured transceive phase is already on Cartesian grid, we do not have to
generate a grid for the region of interest (ROI). Partial derivatives in the governing equations are directly represented with the central difference formulae. Final matrix equation ($$$A\rho=b$$$)
is solved using MATLAB (Mathworks, Natick, MA) by applying Dirichlet boundary condition. For electromagnetic simulations, RF birdcage coil was modeled and loaded with head phantom (see Fig. 1) using
COMSOL Multiphysics
. The simulations were made at 128 MHz with a voxel size of 2x2x2 mm
. The conductivity maps were calculated using the simulated transceive phase, which is acquired by the summation of $$$B_1^+$$$ and $$$B_1^-$$$ phases of the coil. For the phantom experiment,
background was prepared using agar-saline solution (20 g/L Agar, 2.5 g/L NaCl, 0.2 g/L CuSO4), and anomalies were prepared using a saline solution (8.8 g/L NaCl, 0.2 g/L CuSO4). Finally, a healthy
23-years-old male volunteer has been scanned with the approval of Institutional Review Board of Bilkent University. Experiments were conducted on Siemens Tim Trio 3T MR scanner (Erlangen, Germany)
using a quadrature body coil and 12-channel receive only phased array head coil. The transceive phase was acquired using 3D balanced SSFP (FA=40 deg, TE-TR=2.33-4.66 ms, FOV=200x200x50 mm
, RES=1.56x1.56x1.56 mm
, coronal, NEX=32, total scan time ~10min for the experimental phantom, and FA=40 deg, TE-TR=2.23-4.46 ms, FOV=210x210x52.5 mm
, RES=1.64x1.64x1.64 mm
, axial, NEX=7, total scan time ~ 2.5min for volunteer study). For noisy simulations and experiments, isotropic Gaussian filter with 5x5x5 voxels and a standard deviation of 1.06 was applied to the
phase data.
Fig. 2-4 show the reconstructed conductivity results for the simulated head model, experimental phantom, and human experiments, respectively. Using generalized phase based EPT, boundary artifacts
were eliminated completely in the phantom experiment, and were significantly reduced in human head simulations and experiments. The generalized phase based EPT method also provides immunity against
noise due to the use of diffusion term without significantly blurring internal layers.
Discussion and Conclusion
With the boundary artifact reduction and noise immunity advantages of generalized phase based EPT, and the inherent advantages of the phase based EPT (fast, TPA free, and being suitable for any
transmit-receive coil configuration), the proposed method provides fast and reliable electrical conductivity images for clinical applications.
This study was supported by TUBITAK 114E522 research grant. Experimental data were acquired using the facilities of UMRAM, Bilkent University, Ankara.
[1] Voigt T et al. Magn. Reson. Med. 2011;66(2):456-466
[2] Van Lier et al. Magn. Reson. Med. 2012;67:552–561
[3] Katscher U et al. Comput. Math. Methods Med. 2013;2013:546562
[4] Hafalir et al. IEEE Trans. Med Imaging, 2014;33(3) 777-793
[5] Gurler et al. ISMRM22(2014):3247
[6] John V et al. Comput Method Appl M 2007;196:2197-2215
[7] John V et al. Comput Method Appl M 2008;197:1997-2014
[8] Gurler et al. Concepts Magn Reson Part B 2015;45:13-32.
|
{"url":"https://cds.ismrm.org/protected/16MProceedings/PDFfiles/2991.html","timestamp":"2024-11-02T22:00:32Z","content_type":"application/xhtml+xml","content_length":"19369","record_id":"<urn:uuid:3ba6ad8f-e73a-4fd7-a338-5cd58b867ea6>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00532.warc.gz"}
|
Educational Aids | supercoolsliderule
top of page
Listed below are links to PDF notes for most of the tutorial videos produced by SuperCool Slide Rule. Please feel free to download, reference, and distribute these materials for educational use only.
All literature is copyrighted and property of SuperCool Slide Rule. Any reproduction or modification of these notes for commercial use is prohibited.
bottom of page
|
{"url":"https://www.supercoolsliderule.com/educational-aids","timestamp":"2024-11-09T20:16:55Z","content_type":"text/html","content_length":"491384","record_id":"<urn:uuid:55252de1-984e-4209-8010-d4c6f49f8cbb>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00325.warc.gz"}
|
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11609-11618, 2020.
Training machine learning models that are robust against adversarial inputs poses seemingly insurmountable challenges. To better understand adversarial robustness, we consider the underlying problem
of learning robust representations. We develop a notion of representation vulnerability that captures the maximum change of mutual information between the input and output distributions, under the
worst-case input perturbation. Then, we prove a theorem that establishes a lower bound on the minimum adversarial risk that can be achieved for any downstream classifier based on its representation
vulnerability. We propose an unsupervised learning method for obtaining intrinsically robust representations by maximizing the worst-case mutual information between the input and output
distributions. Experiments on downstream classification tasks support the robustness of the representations found using unsupervised learning with our training principle.
Cite this Paper
Related Material
|
{"url":"http://proceedings.mlr.press/v119/zhu20e.html","timestamp":"2024-11-03T10:49:28Z","content_type":"text/html","content_length":"15818","record_id":"<urn:uuid:ae1534b7-1e74-43b5-bf3e-296393eb4240>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00128.warc.gz"}
|
Tuned and Mistuned Long Turbine Blade Resonant Response
"Interference Diagrams” also termed by some as “Safe Diagrams" are utilized for fully-coupled rotating blades. In Reference No. 1 resonance where harmonic of speed is equal to nodal diameters is
reviewed, along with when difference in numbers of blades and vanes equal to number of diameters.
An example of nodal diameter “critical speeds” is shown above where there are 60 interlocked blades with a speed range of 3600 to 4800 rpm.
Nozzle passing frequency is not of much concern for modern designs utilize continuous shrouded blades, termed Z-lock or Z-notch, where there is beneficial friction damping at the shrouds. For long
tapered, twisted blades there will be vibratory motion for each direction, axial and tangential giving first and second families of modes shown in the plot above.
Many other designs for long turbine blade are not fully connected (one packet) but are assembled into separate packets that could be resonant with even lower harmonics of speed. For these the tuned
“Safe Diagram” theory is more likely to apply than for short blades as described in the author’s previous article. Each packet of long blades could have individual frequencies very close to each
other, just as assumed with models for finite element analyses. Tuned packets result in coupled modes where patterns produce diametral nodal lines with adjacent packets out-of-phase with each other.
With 24 packets the highest in-phase mode would have 12 diameters as shown in the figure below for 120 blades, 24 tuned packets with five nearly identical last stage blades per packet.
12 Diameter Mode Shape: 12 packets In-Phase and 12 Packets Out-of Phase
Lower harmonic excitation can arise from non-uniformity of upstream nozzles as well as potential flow of non-uniform 90-degree exhaust casings. Can the 24-packet assembly respond for resonance with
five times operating speed of the fundamental In-Phase mode? There would be response for this example as five is less than 24/2 = 12.
If this design has 10 or more packets it could respond with fairly high response at 5X. With 24 packets five times speed would excite the 5-diameter mode. But by changing to only 8 packets some
claim it would not respond if it was tuned; i.e. the point 8 over 2 = 4 shown in the figure above is less than 5 times running speed. The five diameter mode would not exist for eight tuned packets.
Due to phase cancellation five times speed also would not excite a 4-diameter mode if its frequency was at five time speed. Note that nodal diameter frequencies are close together in the figure
above. In Reference 4 the authors Wagner and Griffin give an equation to use to check for other cases of tuned packet response where they say for a mode wirh "m" diameters:
It was shown in Part I that the condition required for an N group bladed disk to respond to an excitation harmonic h while constrained to mode harmonic m is given by h = jN ± m
where j = 0, 1, 2, . . . , with the further condition that h cannot be negative, and that negative values of m need not be considered.
Thus for 120 blades with eight packets, a 3-diameter mode has response when at resonance with five times speed as: 5 = 8 packets – 3.
See Table 2 highlighted in red below showing positive participation factor.
For certain numbers of blades and packet selections even tuned assemblies will respond. With a change to 112 total blades and using seven packets with 16 blades there would be phase cancellation if
the 3-diameter mode was resonant with 5 times speed. However the possibility of mistuned packets should also be considered, not only from design and manufacturing differences, but from changes during
operation such as from erosion/corrosion. Using many more blades per packets than five or six is usually much better for phase cancellation. An example of experimental results for a case with 12
packets, resonant with 3X is shown in Figure 10 of Reference 5.
Let us again review a design with 120 rotating blades, but with mistuned packets, resonant with five times speed. Recommending a change from 24 packets to 8 packets will reduce the resonant response
factor at 5X speed from 0.93 to 0.47 as shown in the table below. This may not be safe enough, especially if the stage is a transition stage with wet steam and corrodants in the steam. The optimum
would be five packets of blades; the resonant response factor would be near zero for the fundamental mode. This is called the “long arc” or “harmonic shrouding” method of selecting blade packets for
back-end stages. (Reference 6). Using Weaver & Prohl equations with excitation at the 5’Th harmonic:
Table: Resonant Response Variation with Packet Changes
120 Blades : Resonance Of Fundamental IN-Phase Mode With 5 X Speed
Number Of Blades Per Packet
Packet Resonant Response Factor
It usually turns out where shrouds are attached with peened-on tangs that as many blades as possible should be used instead of only five or six per packet often found in practice. (Some older designs
are welded into groups). However all designs should be checked with Weaver and Prohl equations for fundamental mode response. For some stages with very low “Goodman Factors” checking the next highest
packet mode where half the blades vibrate out-of-phase with the other blades may also be needed as explained in References 1 and 4.
This review is for fundamental modes with vibration in tangential direction; similar analysis applies to fundamental axial direction in-phase modes. Again for long tapered, twisted blades there will
be vibratory motion for each direction for the axial and tangential mode families. Failure analysis of high cycle fatigue of blades assembled into packets should review potential resonance, reduction
in strength of excitation sources, and the benefit of change to an optimum number of blades per packet. In addition correct operating conditions and steam purity must be reviewed for root cause
failure analysis.
1. Kushner, F., 1979, “Disc Vibration - Rotating Blade and Stationary Vane Interaction,” ASME Paper No. 79-Det-83: 1980 ASME Transactions, Journal of Mechanical Design, 102, pp. 579-584.
2. Weaver, F. L., and Prohl, M. A., 1958, “High Frequency Vibration of Steam Turbine Buckets,” Trans. ASME, Vol. 78, pp. 181-189, 1958.
3. Kushner, F., 2004, "Rotating Component Modal Analysis and Resonance Avoidance Recommendations," Tutorial, Proceedings of the 33rd Turbomachinery Symposium, Turbomachinery Laboratory, Texas A&M
University, College Station, TX.
4. Wagner, L. F. and Griffin, J. H., 1996, “Forced Harmonic Response of Grouped Blade Systems: Part II—Application,” ASME Journal of Engineering for Gas Turbines and Power, 118, pp. 137-145.
5. Sanvito, M., Pesatori, E., Bachschmid, N., Chatterton, S., 2012, “Analysis of LP Steam Turbine Blade Vibrations: Experimental Results and Numerical Simulations”; 10th International Conference on
Vibrations in Rotating Machinery, IMech, pp 189 – 197.
6. Ortolano, R.J., La Rosa J.A., and Welch W.P., 1981, "Long Arc Shrouding - A Reliability Improvement for Untuned Steam Turbine Blading," ASME J. Eng. Gas Turbines Power 103(3), pp. 522-527.
frank_kushnner (at) comcast.net
|
{"url":"https://www.turbomachinerymag.com/view/tuned-and-mistuned-long-turbine-blade-resonant-response","timestamp":"2024-11-04T00:41:51Z","content_type":"text/html","content_length":"157453","record_id":"<urn:uuid:e39b7ab3-3282-4626-9113-3ceae22d6a99>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00029.warc.gz"}
|
How do I change cake recipe quantities for different bakes?
How do I change cake recipe quantities?
by Lindy Smith
I keep being asked how to adapt or scale a cake recipe quantities to bake a larger or smaller cake. It’s not difficult and instructions are given in some of my cake decorating books! Below, however,
is the chart you will need if you are up for the maths! If not see my alternative approach at the bottom of the page.
How to use the cake recipe quantities chart
The chart assumes that your own basic recipe will be for a 20cm (8in) round cake that is 7.5cm (3in) deep, as this, at the time of writing, is the most common size. Therefore, if you want to make a
25.5cm (10in) round cake, for example, look at the chart and you will see that you need 1 1⁄2 times the quantity of your usual recipe.
│Cake size │Multiples of your own basic recipe │
├─────────────┬──────────────────────────────────────────┬────────────┤ │
│Round and │Square and hexagon (measured side to side)│Ball │(approximate quantities) │
│ │ │ │ │
│petal │ │ │ │
│7.5cm (3in) │ │ │1/8 [not really practical to make any smaller] │
│10cm (4in) │7.5cm (3in) │10cm (4in) │¼ │
│12.5cm(5in) │10cm (4in) │ │1/3 │
│15cm (6in) │12.5cm (5in) │12.5cm (5in)│½ │
│18cm (7in) │15cm (6in) │ │¾ │
│20cm (8in) │18cm(7in) │15cm (6in) │1 │
│23cm (9in) │20cm (8in) │ │1 ¼ │
│25.5cm (10in)│23cm (9in) │ │1 ½ │
│28cm (11in) │25.5cm (10in) │ │2 │
│30cm(12in) │28cm (11in) │ │2 ½ │
│33cm (13in) │30cm (12in) │ │3 │
│35.5cm (14in)│33cm (13in) │ │3 ½ │
Cake recipe quantities chart developed and tested by Lindy Smith
Further explanation of the maths
Below is a Facebook Live video that I made to help explain the chart a little further…hope it helps.
Changing recipe quantities for other shapes and sizes
If you wish to use a tin (pan) that is not mentioned above, such as a pre-formed shaped tin or oval, fill a 20cm (8in) x 7.5cm (3in) tin with water and compare it with the quantity of water that your
tin holds. The basic recipe quantity can then be multiplied or divided as necessary.
Scaling down recipe quantities
If you want to make a very small cake, often I think the best solution is to forget the maths and follow your standard recipe quantities, using the leftover mixture to create cupcakes or mini cakes.
Store these in your freezer for another time or share with family or friends.
Tasty cake recipes to try
Before you go…Would you like to try a new cake recipe? We have a whole blog category dedicated to tasty bakes, so why not visit ‘Recipes to Bake‘ to see what appeals? Cake recipes include my
delicious chocolate fudge cake recipe – yummy!
Sweet wishes
‘Bringing world-class sugarcraft into your kitchen’
1. Debbie Oliver says
hi Lindy I have a friend that wants me to make a cake for her baby shower, the cake is made in a 2 litre bowl and the recepie is a 5 egg maderia mixture my problem is my friend wants the
chocolate cake in your inspire and desire book but i can’t work out what cake size to use, i thought maybe the 8 inch round and the mix that was left over i’d make some cup cakes, Would that be
right or way too much? thanks so much Debbie xx
2. Lindy Smith says
Hi Debbie
Yes I think that is what I’d do as well
3. abbie says
just a quick question. If i am making in a 28cm tine do I need to alter cooking time? recipe is for an 8” round tin, and bake at 140 degrees C for 40 mins-1 hour. Obviously I am making this by 2
1/2 so do i double up on cooking time or should it be the same!
4. Jane Dolder says
Hello Abbie
A 28cm cake is equivalent to an 11″. The baking times for this would be 1 3/4 – 2 hours at 180C.
5. bonnie says
thank you for being there.have just made a cake for my great
-gran daughter christening could not have done it with out your help.
6. Jessica Ellis says
Hi Lindy,
My Mum and I are going to make my wedding cake. I would like a madeira cake at the bottom, a lemon cake in the middle and a fruit cake on on top. Would a 12 inch, 9 inch and a 6 inch be big
enough for 90 people? If not, what sizes would you use? Also, what quantities would you use and what temperature and for how long would you bake the cakes for?
Thank you,
7. Jane Dolder says
Hello Jessica,
I would suggest you look at Lindy’s Cakes to Inspire and Desire which contains recipes, portions and baking times. You can obtain this from our online shop.
8. lisa says
I’m making a plain cake in a 12″ square tin but what temperature and how long do i bake it.
9. Lindys Team says
You would need to bake a 12 inch square madeira cake for between 2 and a quarter and 2 and a half hours at 160 degrees Celcius (325 degrees Fahrenheit / Gas Mark 3).
10. Dawn Springett says
If I have a wedding cake recipe for a 9″ fruit cake, how do I work out quantities for a 12″ and 6″ cake?
Hope you can help,
11. Lindys Team says
Hello Dawn,
We tend to use an 8″ basic recipe as our starting point when adapting recipes.
If your start point was an 8″ fruit cake, you would need 2 1/2 times the ingredients for a 12″ cake and 1/2 the ingredients for a 6″ cake.
I hope this helps
12. sharon says
hi i am going to try to make a two tier weeding cake i would like your advise on the right ingedients for a 35.5cm and a 23cm mudcake reciept i would be greatful for your help
13. Lindys Team says
Hi Sharon,
There is a mud cake recipe on Lindys blog.
Click here
Use this recipe with the chart above to work out how much mixture you will need for each cake
Hope this helps!
14. lora says
hi lindy ,
i am wanting to make an 10″ maderia cake , as your recpie is for an 8″ , i have read your instructions for changing the measurments, but i dont fully understand, it says times the quanty to 1 1/2
but what does that mean …. like if the butter is 12 oz make it 30 oz or does it mean make it 18 oz ??? if you could get back to me asap that would be great as i am making the cake tonight for a
40th wedding anniversary party ..
thank you
15. Lindy Smith says
Hi Lora
You times all the quanitities by 1.5, so 12oz becomes 18oz etc
Hope that is clearer
16. Emma Roberts says
Hi Lindy
I am due to make a 12 inch oval madeira this week and have been unable to find any recipe quantities for oval cakes – could you give me any advice? I used all of your tips re the perfect madeira
with the last cake i made (except the glycerine as i couldn’t get hold of any) and i couldn’t believe how great the cake turned out! When baking an oval cake are there any other tips you could
offer as i’m concerned that the assymetrical shape may result in uneven cooking. Thanks very much, Emma.
17. Lindy Smith says
Hi Emma
To work out the amount of cake batter you will need, firstly measure the volume of the oval tin. Do this by filling the tin with water and then tipping this carefully into a measuring jug. Do
this for a standard 8in tin or tin you know your recipe suits and then compare the volume and work out the appropriate multiple.
Hope that makes sense
Good luck
18. Tracy says
I purchased one of your books a few months ago and say that the madeira cake turns out brilliantly everytime! I’m going to make a 14″ square madeira. Would you multiply the receipe by 4 for this
19. Lindys Team says
Hello Tracy,
We have lots of queries about changing cake recipes to suit a different sized cake tin. See our blog post on this subject for details of how to convert a recipe to the size that you would like.
20. lora says
hi lindy
for a 6in round, how would i do that , like do i half everthing or do i take 25% off .
21. Lindy Smith says
Hi Lora
You have to halve everything
22. AIleen Walsh says
lindy thank you so much for sharing! your cakes incredible….and inspirational!
23. karen says
Hi Lindy
I love your cake recipes, the chocolate is soooo good, and all my friends rave about the fruit cake but i would like to make it deeper, 3 1/2 ins for a 6 ins cake, how do i do that and how would
i adjust the cooking time, thanks so much
24. Jane says
Hello Karen,
We don’t have instructions for a 3.5″ high cake. Perhaps you could layer two cakes together to get the height.
25. Rhonda says
Hi Lindy
I have been scouring the net for moist chocolate cakes, as I have been designated to bake a wedding cake, to ice the cake (which I have never done before)and to decorate. So although I have found
tons of recipes for chocolate & mudcake I need to know if your adaption chart for quantities would still be applicable to any recipe I use. Wedding in two weeks. Cake to be 3 tiers.
26. Lindys Team says
Hello Rhonda,
The adaptation chart will work for any recipe provided the recipe you start with is for a 3″high, 8″ cake.
Good luck with the wedding!
27. Jackie says
I have been asked to make an open book cake in sponge (not madeira, so have bought a large shaped wilton tin (30 serving) size. I tried adapting the recipe but there isn’t enough mix in the tin.
Could you please tell me how much ingredients I should use and how long I should bake the cake. I normally use a quickmix recipe and bake at Gas Mark 3.
I have used your method of adapting the recipe lots of times with success, but not managed it this time.
Thank you.
28. julie says
im not sure if im putting this in the correct section so sorry if im wrong,
just a quick question does anyone know why my cakes seem
to rise in the middle with a crack??
if tried changing oven temp lower?
ive done the indent in the middle even to the point where
i sat worrying in will be lower in the middle but no its still the same?
i know familt dont mind this but i would like to move on
and decorate the tops with icing etc but want the top to look even?
thank you for any advice on this
29. Jane says
Hello Jackie,
If you count a serving as a 2″x 1″ piece of cake, an 8″ square tin recipe would make 29 servings and a 9″ square tin recipe would make 35 servings.
Hope this is of help.
30. Jane says
Hello Julie,
Have you looked at our blog post . There are lots of tips including covering the outside of the tin with newspaper. The cake may still rise and crack slightly but you can always level it off
before you decorate.
31. lauren says
hello lindy on your chat for changing recipes for a larger tin, where it tells you ‘Multiples of your own basic recipe
then it gives you ither 1/4 or 1/3
do you weigh out the ingredients first, then weigh out ither
a 1/4 or 1/3 of the recipe and add it to the other mix?
can you give me an example or explain it for me in a simpler way thank you xx
32. lauren says
hi lindy i love your website, its the only plce were ive been able to find out how to properly convert recipes for lareger tins,
my question is does the method for the ounces and pounds work for teeaspoon and table spoons? please reply asap as i am making the cake tomorrow
many thanks, your great, laurenxx
33. Jane says
Hello Lauren,
For example, to change an 8″ round recipe to a 10″ round recipe you would need to increase the ingredients to 1 1/2 times, e.g. an 8″ recipe calls for 350g of butter. You would times 350 by 1 1/
2. Weight the total amount of each ingredient needed, it just saves time.
Good luck
34. Jane says
Hello Lauren,
Yes, the measurements work for teaspoons and tablespoons – be sure to use a measuring spoon so you can get an accurate measure.
35. Natasha says
Hello Lindy
I am going to be attempting to make my daughter a Dora birthday cake for her party next weekend. Last year I made a Victoria Sponge Peppa Pig which turned out great but am going to attempt a
Madeira this year so I can make it in advance. After doing a practice madeira yesterday which was a disaster I came across your blog and it sounds great. I have a template and need to do it in
two parts so an 8″ round for the head and 9″ round for the body. I am therefore after the quantities for a 9″ round cake. I understand the table above and have worked out the new quantities
needed but, and sorry if it’s a bit of a silly question, as an 8″ is 6 eggs, a 9″ would be 7.5 eggs. How would I split the egg or do you round it up or down? Also how would I adjust the cooking
time? I have an electric oven which can be used as fan or conventional.
Many thanks
36. Jane says
Hi Natasha,
I would use 8 medium eggs (all Lindy’s recipes use large eggs) and cook as for an 8″ and monitor the cake after the timer has gone off, e.g. add 10 mins and check, add a few more minutes each
time you check.
Good luck
37. Lynda Parkinson says
Thanks for the info on adjusting cake mixtures to various sizes of tins. Could you let me know how to work out the appropriate cooking time adjustments as well please?
Many thanks
38. stephanie says
just realised i do have 2 0f your cake decorating books but could you still give me some advice on my maderia receipe i have been using thank you
39. Maree H says
I understand the cake quantity chart, but does the cooking time differ to a normal 20in cook time as i just made a 23in cake & it didn’t cook right through.
40. Jane says
Hello Stephanie,
Try using the tips in our blog “Baking the Perfect Madeira”
Hope this helps.
41. Jane says
Hello Maree,
You would need to adjust the cooking times when increasing the recipe. Sometimes its just a case of adding an extra 15/20 mins and checking, and then adding more minutes and checking again.
Good Luck
42. Jane says
Hello Lynda,
The best way is to find the nearest size recipe, use these cooking times and then add a little at the end, say 15/20mins, before checking. Then if the cake is not cooked add 5/10 mins at a time,
checking each time to see if the cake is cooked.
Good Luck
43. Ollie says
Hi, I have an 8″ round tin thats 3″ deep. I would like to make enough mixture to make a cake big enough to rise to the top so that I can just cut it in half when it’s cooked rather than make 2
sepreate cakes. Is this possible? Or am I better off making 2 sepreate cakes? If so how much mixture would I need and would it need a long time in the oven?
44. Jane says
Hello Ollie,
If you use the 8″ Madeira recipe in Lindy’s books you should get a 3″ deep cake. Look at our blog post “Baking the Perfect Madeira” for tips.
45. Samantha says
Hi Lindy
I have been asked to make a retirement cake for a friend. I have the measurements for a 6″ round cake but a 6″ square cake is required. Please can you tell me how to convert from a round to a
square cake? This is quite urgent so your earliest reply would be greatly appreciated.
Thanks kindly
46. Ollie says
Hi Jane, thank you for your reply!
I have recently bought some of Lindy’s Books, I still haven’t attempted it just yet. However I am going to bake my son a birthday cake in a couple of weeks and I am using a 12×10″ cake tin. I
can’t seem to figure out what mixture I need to use for this tin could you possibly help? I have thought of maybe just using the 12″ square tin recipe or would this be too much?
Cheers Ollie
47. Tahira says
Hi Lindy, I’ll be making a 3 or 4 tier heart shaped wedding cake, however I can’t seem to find any recipes for heart shaped tins. Also there will be 120 guests at the wedding so would I need to
do a 4 tier to make sure there is enough to go round? Finally what should the size of bottom tier be if it should be a 4 tier wedding cake? Can you help please?
48. sue says
Hi, i want to make a castle cake for my daughter, i have heard that you can use empty food cans to make the turrets. Do i need to line them and how full with mixture should i fill them? thanks.
49. Lindys Team says
Hi Tahira,
Inside Lindy’s book: http://www.lindyscakes.co.uk/CakestoInspireandDesire.htm there is information on cake sizes and quantities.
If you wanted an 8″ heart shape, then you would need to use the quantities for an 8″ round tin.
Each tier should be 2-3 inches different in size to the other and Lindy recommend starting from the top tier.
Investing in Cake dummies may help you see the size of tiers you like: http://www.lindyscakes.co.uk/OnlineShop-Dummies.htm#miniDummies, as you can play around with the different sizes.
For the amount of guests you are catering for, Lindy would make a spare cutting cake using the coloured icing of the main cake and keep this in the kitchen. She would use this to give to the
guests and they wouldn’t know it wasn’t from the main cake. This way you do not need a huge main cake and the bride and groom could also keep a tier this way if they wish.
We hope you enjoy making the wedding cake.
50. Lindys Team says
Hi Ollie,
Thank you for buying some of Lindy’s books. For the 12×10″ cake tin, treat it as if its an 11″ square tin. Then your quantities will be correct.
51. Lindys Team says
Hi Samantha,
A 6″ square cake is equivalent to a 7″ round tin. You go up one size when using a square tin.
Lindy’s book Cakes to Inspire and Desire has various charts that detail the different sizes and quantities to bake various sized cakes:
Have fun!
52. Lindys Team says
Hi Sue,
The castle cake sounds wonderful!
Empty food cans are good for making the turrets and yes you need to line them.
How high you fill them depends on what mixture you are using and how tall you would like them.
A fruit cake doesn’t rise very much whereas a Madeira rises quite a lot.
Experiment with a few before making the end product.
Good luck and have fun!
53. caela says
Ok so we need to go back to basics- I have never baked a large cake (cupcakes are my limit!) let alone decorated one yet I am about to make a christening cake for my god-daughter for sunday!! Can
somebody please please give me the basic recipe to make a 10 inch square madeira(?) type cake. I am a complete novice- any advice would be greatly received. need to get over the baking hurdle
before I think about decorating!!Many thanks, Caela
54. Lindys Team says
Hi Caela,
Here is the link for the Madeira Recipe.
By the looks of Lindy’s chart you will need to do 2 times the quantities.
Have a practice first!
Lindy’s classes are wonderful and teach beginners through to professionals the skills needed to create true works of art:
Why not come along, and prepare yourself ready for your god-daughters birthday cake, you may get asked to make!
55. Sam says
please can someone let me know asap I have a reciepe for a 10×12 inch cake but my tin is 13.5×9.5 inch will this work
Many Thanks
56. Lindys Team says
Hi Sam,
The recipe for the 10×12 inch cake (120 square inches) falls just short of the 13.5 x 9.5 (128.25 square inches), so you could choose to have a slightly shallower cake or make slightly more.
57. Helen Gray says
Please could you advise the quantity I will need for my 9 x 14 inch tin or how best to scale up an oblong tin. Thank you
58. Lindys Team says
Hi Helen,
The quantity needed for your shape tin of 9 x 14 inch is equivalent to a 12 inch round recipe.
We multiplied the 9 x 14 to make 126 square inches. This is equivalent to a 12 inch round.
Hope this helps and happy baking!
59. Doreen says
I have my favorite cake recipe of 9 inch tin and i need to make a cake for 11 inch tin . can you help me pls
Your reply would be greatly appreciated
60. Liz says
Can you tell how I scale up from an 8″ round madeira to a 10″ square? all the recipes seem to be for round cakes and I can’t figure out the quanties…help!
61. Lindys Team says
Hi Liz,
The chart on this page assumes that your own basic recipe will be for a 20cm (8in) round cake, as this is the most common size.
Looking at this chart it says you need to multiply the recipe x 2 for a 10″ square cake.
Hope this helps.
62. Lindys Team says
Hi Doreen,
We would work on it being 2 x your usual recipe and this will leave you with enough left over to make a few cupcakes!
Have fun!
63. lorraine says
A friend has asked me to make her wedding cake and she would like a 3 tier sponge of various flavours covered in white cigarillos. My problem is that I’ve tried to bake a victoria sponge cake in
one 8in deep tin but it was very heavy and sank in the middle. I need a chocolate, vanilla sponge and carrot cake recipe suitable to bake in a 12, 9 and 6 inch tin respective. I would appreciate
any advise.
64. Kerry says
Hi, please help I have a recipe for an 8″ Square Fruit cake but need to scale it up to a 14″ square fruit cake for my friends wedding cake. Please can you tell me how much I need to scale up the
ingredients by? thank you, Kerry
65. Hayley says
Hi there,
Im attempting to make my own wedding cake. Its going to be a 4 tier Victoria sponge 12″ 10″ 8″ and 6″. Ive already done a trial and turned out ok but need exact measurments for each tier and
cooking times. Im planning on as i cook each tier putting all the ingredients in the same tin at once and then cutting the cake into 3 layers then having 2 layers of jam and buttercream. The tins
are 5 inches deep. i will be decorating with summer berries and a dusting of icing sugar. Any advice very welcome
66. Lindys Team says
Hi Hayley,
We recommend our recipes on this blog as they are tried an tested by Lindy and her team and therefore we cannot comment on a victoria sponge I’m afraid.
Can anyone else advise on this?
Wishing you the best.
67. Lindy Smith says
Hi Kerry
You need to think about volume. Assuming the 8in square recipe gives a 3in deep cake your volume would be 8x8x3 = 192 sq inches to bake a 14in cake you will need a volume of 14x14x3 = 588 sq
inches. 588/192 = 3.06 so you will need approximately 3 times the recipe.
Does that make it clearer?
Good luck and happy baking
68. Gemma Fell says
Hi Lindy
i bought a cake tin today but thinking it must be odd measurements because i cant find it listed above in your quantities. Its says its exterior size is 9 1/2″ x 3″ and interior is 8 3/4″ x 2 3/
4″. So now im not sure which measurements to follow and the quantities now. What would be your advice?
Thanks Gemma
69. Lindy Smith says
It’s the internal measurements that are the ones you need to measure. Regards cake recipes I suggest you use 23cm (9in) ones for this tin.
Happy Baking
70. Lindy Smith says
Hi Lorraine, recipes for chocolate and a madiera cake, which can be flavoured with vanilla, in the sizes you require are in many of my books eg ‘cakes to inspire and desire’. There is a very
tasty carrot cake under the recipe section of this blog.
Hope this helps
Happy Baking
71. Adele Woods says
Hi Lindy
I am adapting my usual receipe for an 8 inch cake in accordance with your chart above for a 13 by 9 rectangular cake. Normally I use 8oz each of butter, sugar and flour and 4 medium eggs. Using
your chart I would need 14 eggs and 28oz each of butter, sugar and flour. Does this sound right to you? it just seems such a lot.
Many thanks
72. Jackie says
I hope you can help me. I have been asked to make a 50th birthday cake in fruit in the shaped numeral tins. I don’t know what quantities to use for a shaped fruit cake or how long the cakes would
need to be baked. Is it possible to do this and is a normal rich fruit recipe the best recipe to follow?
73. Lindys Team says
Hi Jackie,
To work out the sizes, first fill the numerical tins with water to work out the volume and then find a normal round or square tin that holds the same amount. Then use the recipe for that normal
size tin for your numerical ones.
Take care,
74. Jackie says
Thank you so much for replying to me. I wasn’t sure if it would work the same for a fruit cake, but I will do that.
75. Lindys Team says
Hi Adele,
The quantities that you have quoted for the 13×9″ rectangular cake sound correct. Before making it, just check the quantities again just to make sure.
Best wishes
76. Sam says
I am baking a batman wings cake for my boyfriends sons 5th birthday and I am using a 14″ square cake tin (it is approx 3 ” deep), could you tell me how much extra I need to add on to the
quantities. Your advice would be greatly appreciated.
Thanks in advance
77. Danielle says
For a rectangle tin 11″by 15″by 3″ how many times would I need to modify the ingredients list??
78. Karen Duxbury says
Thanks for this really useful chart, one question I have though is how do cooking times change.
I will be baking a 10in square cake. The 8in round recipe says to bake for 1.5 – 1.75 hrs at 170C, 325F Mark 3. Do you know how I would adapt this?
79. Lindys Team says
Hi Sam,
We recommend you could fill the tin with water and transfer this to a regular cake tin. By doing this you will see how much mixture you will need and can tweak your recipe accordingly.
Sounds like a fun cake!
80. Lindys Team says
Hi Karen,
It really is trial and error, we would recommend baking for another 10 minutes and keep checking until your knife comes out clean.
Best wishes
81. Lindys Team says
Hi Danielle,
If you wish to use a tin (pan) that is not mentioned in the chart, such as a pre-formed shaped tin or oval, fill a 20cm (8in) tin with water and compare it with the quantity of water that your
tin holds. The basic recipe quantity can then be multiplied or divided as necessary.
Take it easy
82. karen says
hi is it possable to use the chocolate fudge cake in the 2.5 multi cake tin many thanks karen
83. Lindy Smith says
Yes I’m sure it would, I use the chocolate fudge cake recipe or cupcakes and it works just fine.
Happy baking
84. sarah skeggs says
I have a 9 inch round tin that is 4 inch deep. How do I calculate this?
85. Rachel Emmerson says
Hello Linda,
i need help to convert my 8″ inch round by 2 1/2 inch deep xmas cake recipe for use in a 12″ Square by 4″ inch deep multisize pan . I wish to make 9, 4″ inch square cakes. Can i just triple my
favourite recipe?
86. Sheri says
I am making a rectangular Madeira cake the tin is 14inches by 10inches.
I am a bit unsure as to what quantities to use?
Thank you
87. Lindys Team says
Hi Rachel, Sarah, Sheri
We have amended our blog post to show the depth of the tin we are using as an example.
You should now find all the relevant information regarding tin sizes and quantities on this post.
Happy baking!
88. kellie mcmahon says
I am making a 10″ square sponge cake. I have seen your chart on how to adjust the recipe quantities but do I need to adjust the cooking time and temperature as well?
89. Karen says
I am wanting to make a 12 inch cake that is 5 inches deep. I was going to make a maderia but was not sure on the recipe to use.
Do you have any advice?
Many Thanks
90. Sophie Robertson says
Hi, many thanks for the conversion chart! I have searched high and low for information on how to change quantities over the last couple of years when making my kids birthday cakes and always end
up decorating a victoria sandwich!!
Anyway, I would like to make and decorate a rectangular shaped cake and was thinking of using a victoria sponge recipe (would this be ok??)
I have found various rectangular baking trays to buy (I only have small round sandwich tins) which measure 9x13in (depths vary from 4cm to 5cm).
What quantities would you suggest I use? Is the depth of these trays enough?
Many thanks
91. Jane says
Hello Kellie,
For our Madeira recipe you would just need to adjust the cooking time. The larger the cake, the longer the cooking time.
92. Jane says
Hello Karen,
You would need to increase the ingredients for a 5″ deep cake. You could fill the cake tin with water to find the volume and then pour the water into another 3″ cake tin and find the size.
Hope this makes sense.
93. Jane says
Hello Sophie,
We always recommend using a dense cake for decorating with sugarpaste, i.e. a Madeira. We have an excellent post on our blog for baking the perfect Madeira. The cake should be at least 7.5cm
deep. We have a multi-sized tin on our website if you would like to take a look.
94. Ann says
Hello Jane
Please help?
I am baking a 10″ round cake 3″ deep. I have used the basic recipe as on Lindy’s instructions and then increased everything by 1 1/2 times as on the table (18oz of unsalted butter, castor sugar,
self raising flour, 9 oz of plain flour, 9 eggs and 9 qtr teaspoons glycerin, lemon juice of two lemons). The cake was in the oven for 3hrs 10 minutes, when tried with a skewer it came out clean.
The cake in the middle appears wet and heavy but the remainder of the cake is very tasty. Can you please advise what it is that I am doing wrong.
Many thanks i
95. Natalie says
I’m making a 14″ numerical birthday cake, with a Fruitcake no. 4 and a sponge no. 0 and wonder how long I would have to cook the fruit cake for as my usual recipe states 4 hours cooking time.,
and also how long should I cook the sponge receipe. Also,I note that you state you should measure the amount of water your normal container holds and adjust the recipe accordingly and wondered if
this still applies to shaped tins.
Many Thanks
96. Lindys Team says
Hi Natalie,
Yes Lindy’s cake quantity chart applies to shaped tins too.
With regards to the shaped cakes, bake them as usual and keep an eye on them. They may bake sooner as the shape is thinner than a normal round tin.
Wishing you the best.
97. Lindys Team says
Hi Ann,
If you have followed the recipe exactly and the quantity chart, I wonder if your oven may need turning down by 20 degress and the cake baking for a little longer.
I would give this a try and see how it turns out.
Sugar and sweet.
98. elaine says
hi, i made the fruit cake recipe for the 9 inch cake-however, it took nearly 7 hours to cook! i have moved house so could be the cooker. I made an 8 inch one previously which also took longer to
cook. I cooked that one on normal oven and the 9 inch on fan oven. The knife was not completely clean-but that could have been the cherries or fruit in the cake. Would this be normal??Also-what
do you advise regarding putting a barrier between the fruit cake and cake board as i’ve heard that this can cause some kind of reaction wit the fruit.Sorry this is a long post!!
99. Jill Middleton says
I have been asked to make a 40th birthday cake in madeira using the numerical tins, i am not sure of quantity to use and cannot use conversion cchart as the numerical tins do not have a bottom so
would be impossible to fill with water,can you help at all and also how long would cookin time be.
Many thanks
100. Lindys Team says
Hi Elaine,
Everyones oven is different and you do need to tweak the times accordingly. However we cannot understand why it would take 7 hours to bake a 9″ cake, is your oven working well otherwise?
To protect the cake board we advise placing baking parchment between the board and the cake.
Take care,
101. elaine says
I tried a different recipe which baked the cake at a steady temperature of 150 c and the 8 inch took 2- 2 and a half hours. My oven seems to be fine and i must say that the cakes that took so
long to cook still taste very moist!
102. Lindys Team says
Hi Jill,
We think the best way to go about this is to make more mixture than you normally do and pour in until they are almost full.
Then you will see what is left of the mixture and be able to estimate a baking time.
103. Emma Hillyard says
I am making a basic victoria sponge recipe, normally 4oz and 2 eggs etc…. using a 12″ square tin , so doing 3 x the quantities, what’s the baking time in the oven??
104. Lindys Team says
Hi Elaine,
Each oven can differ so much from another, the only way to get to know yours is by trial and error.
What a shame, this may mean baking lots of delicious cakes to experiment 🙂
Keep on the good work!
105. Sarah J says
I am baking a heart shaped cake in a silicone mould.I have measured around the mould and divided by Pi which gives me 8.3inches. I’m going to assume I’ll be OK to make up a maderia as per the 8″
Do you think this will be right?
Also, are there any extra precautions when using silicone moulds?
Many thanks in advance of your reply.
106. Amber says
Most of my recipes are for 9 inch cakes therefore this would be my base. How should I adjust this? I’m sure it isn’t as easy as shifting the entire right column on the chart. Any help would be
greatly appreciated. Thank you.
107. Karhen says
Hi Lindy,
Do I also change the oven temperatures? By how much for each conversion?
Thank you in advance 🙂
108. Heidi Marshall says
Hi Lindy,
The cake tin I have is an odd size at 24cm square (9 1/2 inches). Looking at your chart above, would I follow the 10″ square quantities and times the ingrediants by 2? Would this be the best one
to follow? Also, how long should I cook it for? I am nervous about it as I am making my daughter’s birthday cake and it is the first time I have done anything like this.
109. Jane says
Hello Amber,
Another way to adjust recipe amounts is to fill your 9″ tin with water and then transfer the water into another tin and use the quantites for this tin – does that make sense!!
110. Jane says
Hello Heidi,
Fill the tin with water and then pour the water into a different tin to find the volume. You can then use the recipe amounts from this sized tin.
111. Jane says
Hello Karhen,
You keep the temperatures the same for the Madeira recipe.
112. Jane says
Hello Emma,
The easiest way to do this would be to fill your 2 egg recipe tin with water and then pour it into the 12″ tin. How ever many times you need to do this is the amount you need to times the recipe
113. Jane says
Hello Sarah,
This should work, you would just need to adjust the cooking time. I don’t think they are any other precautions with silicone moulds.
114. becki wainwright says
Hmmmmm what about cupcakes????
115. Jane says
Hello Becki,
It all depends on your cupcake cakes and cupcake pan. Lindy has a chart in her “bake me I’m yours … cupcake celebration” book which will give you more information.
116. jenn says
Hello, I am making a GF cake for my daugthers first birthday and the recipe yields a 9 inch cake. However I need to make a 10 inch and 6inch. Please tell me how to convert or substitute…. I am
completely confused!
117. Louise says
Just wondered – do you have a chart similar to the cake one for quantities of cream filling and also for icing (both royal icing and ready roll icing? It would be very useful!
Many thanks!
118. Lindys Team says
Hi Jenn
Please refer to the recipe adaptation chart. I would work out the quantities based on an 8″ cake and then if you have any mixture left, make some cupcakes!
Happy Baking!
119. Kimberley Gallacher says
Hi, Can you help me please, need to make a 12″ square madeira cake, according to your chart the ingredients should be
1050g Unsalted Butter
1050g Caster Sugar
1050g Self Raising Flour
525g Plain Flour
18 Large Eggs
can you tell me if this is right as it seems a lot compared to other recipes for 12″ square cakes ive seen online?
Thanks for your help!
120. Lindys Team says
Hi Kimberley,
The measurements you have quoted seem right. If you end up extra mixture, I would make some cupcakes with it as a treat.
121. Emily says
How would you work it out if your recipe is for a 12 inch cake? I want to make a 12 inch, 9 inch and 6 inch. Would I 1/2 for the 9 inch? And then what for the 6 inch, just under a 1/4? Thanks!
122. Lindys Team says
Hi Emily,
For a 12″ round cake you need to times an 8″ recipe by 2 1/2 the ingredients.
Please see the chart above.
Keep on baking!
123. Ali Owens says
Hi Lindy,
im using your tables but am a little confused….if my original recipe is for an 8′ round tin how to i work it out for an 8′ square tin then do i use these figures to work out different sized
sqaure tins
thanks in advance
124. Lindys Team says
Dear Ali
For an 8″ cake you need 1 1/4 times the quantities of your usual recipe.
Happy Baking!
125. Lindys Team says
Dear Louise
I’m sorry we do not have a chart for quantities of cream filling/royal icing/ready roll icing.
Have you tried searching on the internet, or can anyone else help?
Happy Baking!
126. Romaana says
Hi, I bake a regular sponge mixture (12oz fat, flour and sugar, 6 eggs and a
tsp of vanilla) This is enough for two cakes in a 9 inch tin, baked at 170
degrees (fan oven) for 32-34 mins. Everytime I try making it for a 6inch tin
or 12 inch it turns out horrible, and deflates if I check it too early. How
much mixture, how long should I bake the cakes for and what should the oven
temp be for these tins? Is there a special tip/system? Any help would be
very much appreciated, Romaana xx
127. Lindys Team says
Dear Romaana
It could be that your oven is too hot. Cakes are ready when a skewer inserted into the middle comes out clean. I can’t comment on how much mixture, etc you’d need for your recipe. Perhaps you
could try one of Lindy’s recipes?
Happy Baking!
128. ricky sanders says
could you tell me.. If i want to make a wedding cake for 150 people. what size tins would i need to get 150 slices.. i cant seem to find this information.
129. Lindys Team says
Hi Ricky
You are probably going to have to make several tiers to feed this many people! I find that an 8″ round cake feeds approx 25 people.
130. Valerie says
I’m trying to make a bible cake with the wilton’s bible mould which is 41cm by 29 1/2 cm and a depth where it’s depest of 6 1/2 cm. I’ve made this cake 3 times and only 1 has it come out really
nice and soft. The one i just did now is really hard and dense and i need to make another one for mothers day. I was wondering if you could please kindly give me precise measurements or
approximation of how much egg flour sugar and butter i’ll need. Thank you very much this will be greatly appreciated.
131. Tracy says
Help needed!
I’m making a wedding cake with a 12″ pan, 4″ deep. What quantity would I need? Oh, and how long would it take to bake?
Many, many thanks!!
132. Lindys Team says
Hi Tracy
To upscale from an 8″ round to a 10″ round you would need 2 1/2 times your usual recipe. This is for 3″ deep cakes.
Good luck!
133. Jane says
Hello Valerie,
You need to fill a cake tin with water, say an 8″. Pour the water into the Bible tin. If it fills the tin you should use the 8″ recipe. If not try with a larger cake tin, etc.
134. sharon says
Have you baked a topsy turvy cake if so how do you scale it up? and what sort of cooking time would it require?
Also I have just baked a 10inch square madeira (needed it deep so made an 11inch mix ) and it was in the oven at least 2 1/2 hours on testing it came clean but when cutting/trimming you could see
raw patches in the middle, I remade with a 10 inch mix and it was in the oven nearly 3 1/2 hours again clean on testing and only just cooked when cut, Is this normal?? I did protect the sides
with newspaper and added a bowl of water, could i protect the base with newspaper too or would this slow the cooking time even more? I hope my rambling makes sense…
135. Kathy Simmons says
I,m going to bake your above recipe for Chocolate Fudge Cake in a six inch cake tin, I know how to adapt the quantities as I have Lindy’s books but am unsure of the time it will take to cook,
could you please help, this is a sample cake to taste for a wedding cake I’m baking for my neice, I have used all of Lindy’s recipes so far and they are great, thank Lindy.
136. birdyboo says
Hi there.
I am hoping you can give me some advice on cooking in deep cake tins. I need to make a 10″ Victoria sponge but the last one I did came out very flat took over an hour to cook.
Do you have any advice or recipe ideas that may help… Also how deep should I fill the cake tin x
137. Jane says
Hello Sharon,
Not sure exactly what you mean by scaling up. Are you using topsy turvy tins?
When you put a skewer into a cake it will leave a mark inside, so when you carve the cake it can look like it has a small patch of uncooked batter. This may be the problem.
138. Jane says
Use this link to our Madeira Cake recipe. There are lots of hints and tips to help you.
139. Jane says
Hello Kathy,
I would start with about 45mins-1hour and then judge it from there.
140. sarah says
Please can you help…. I have been asked to make 4 cakes for yr 11 leavers at my school. The cake tin in 42cm by 29cm (approx A3 size) with depth approx 5cm. How much basic cake mixture will I
need? and how long will it take to cook on what temperature?
141. Jonathan says
Hi there,
Like Jill asked back in November last year, I too have been asked to make a 40th birthday cake using the number tins, and plan to make in Madiera cake. I get the whole quantity bit and baking
bit, but am unsure on the scooping the middle bit to the side (showing the base) like you would do in a normal tin. As the size between sides is a lot smaller, I would imagine it not being as
possible to do. Any tips?
Many thanks in advance,
142. Jane says
Hello Sarah,
Wow! That’s a big cake. The recipes in Lindy’s books are for cakes with a depth of 7.5cm (3″) so I think you would need 2 x 10″ recipes. I’m really not sure of baking times. If it is a Madeira, I
would start off with one and a half hours and test it from there.
143. Jane says
Hello Jonathan,
Yes it would be difficult to scoop the middle of the tin. Still wrap it in newspaper to protect the sides, maybe turn the oven down a little more and cook it slowly to stop it cooking too quickly
around the outside.
144. natasha says
hey, your site is a great help, i now know how to change the amounts but i was wondering what i should do with cooking time etc? we are making my brother wedding cake for him & the wedding is
friday 🙁 our first go at the cakes last night did not turn out good…. we are doing an 8″ heart in victoria, 12″ round in chocolate & 14″ square in maderia…. i know what i am doing with the
chocolate asits a friends recipe but the spong we need help with, do you have any recipe ideas & how how long we should cook them for please? need you help asap??
many thanks
145. Jane says
Hello Natasha,
The 8″ Victoria sponge would need about 30-35mins, and the Madeira I would start with 2 1/2 hours and then keep checking it. You may already know, but just in case, the Victoria sponge would not
be suitable to cover in sugarpaste as it is not dense enough to hold the weight of the paste. As to flavour, you could use any citrus, or be a little more adventurous with alcohol!
146. Kayleigh says
Hi doing my friends wedding cake which is on monday iv got packet cake mix (500g) eacg was just wondering if anyone know how many packets it would take to fill a 10″ deep square and a 7″ deep
147. Jane says
Hello Kayleigh,
Sorry I’m not familiar with packet mix. Does it give the size of cake it will make on the instructions? Does anyone have any advice for Kayleigh?
148. Aud says
Hi i have been asked to make a 12 x 16 madeira union jack cake for the jubilee please can you give me instructions and ingredients for the cake and buttercream i.e. how much and how long !
DESPERATE !
149. Jane says
Hello Aud,
12 x 16 roughly equates to a 14″ square recipe (use an 8″ round recipe and times it by 4). Have a look at this link to changing cake recipe quantities on our blog.
150. Lucy says
I’m working out the measurments for a 12″ square cake, which is x3, so is it correct i need 18 large eggs?
Does seem quite a lot?
Many thanks
151. Jane says
That’s right Lucy, you need an 18 egg recipe.
152. Linh says
Hi Lindy,
I am a complete novice at baking cakes but I want to make a number cake (30th). My plan is to use a rectangle cake pan 13.5” x 10” and then cut the numbers out. My dilemma is my recipe is round 9
inch cake. Obviously I can’t use your chat but I would like to use advise ‘Other shapes and sizes not mentioned: fill a 20cm (8in) x 7.5cm (3in) tin with water and compare it with the quantity of
water that your tin holds. The basic recipe quantity can then be multiplied or divided as necessary. But I don’t quite understand what I have to do?
Also, would welcome any other advices on how to make number cake or recipts
Many Thanks
153. Tim says
Hi, we are making a 30 cm sponge and plan to use 567g of each of the ingredients and 10 eggs. All good so far? Cook for how long at 180?
Your website has been very helpful so far
154. Jackie says
This isn’t a question about cake size but I hope you can help me. When I make a chocolate cake I use cocoa powder rather than melted dark chocolate. However, I need a chocolate cake with a very
dark brown colour. Is there a recipe which will give me a darker coloured cake or could I perhaps use a small amount of dark brown colour paste to give me gthe colour I want. Hope you can help
155. Lyndsey says
Hi Lindy.
I am attempting to make my own wedding cake and your guide to quantities is brilliant thanks, couple of quick questions –
1. do you extend the cooking times to the same as the quantities? ie – 10″ to double the time?
2. also i am hoping to do a 10″ and an 8″ cake, top layer sitting directly on top of the bottom layer, with no gaps, and frost the whole cake, would i still need to use dowling to place the top
tier on? or will it be sturdy enough to place one cake on top of other? (if that makes sense?!)
3. lastly, after baking the cake, if i place in the fridge in foil, how long should it last ? (how far in advance can i make it before i frost it the day before?)
Thanks, Lyndsey
156. Jane says
Hello Linh,
Your rectangular tin has a volume of 405 sq in (assuming it is 3″ deep, i.e. 13.5 x 10 x 3). If you times a 9″ square recipe by 1.75 you will get a cake that has a volume of 425 (9 x 9 x 3 x
1.75) which means you will have mixture left over but enough to fill your rectangular tin. Don’t forget to adjust your baking time too.
Hope this helps.
157. Jane says
Hello Tim,
I would start with 2 hours and check thereafter.
158. Jane says
Hello Lyndsey,
You don’t double the baking times if you double the recipe. For a 10″ Madeira I would start with 1 1/2 to 1 3/4 hours then check.
I would always recommend dowelling the cakes.
We don’t tend to put our Madeira cakes in the fridge. A Madeira should last 2 weeks wrapped in baking parchment and foil in a cool place – we say a week to decorate and a week to eat!
159. Brian Lisher says
I made a furit cake for an anniversar which went down well and have now been asked to makea sponge/maderia for abirthday (12 inch square) Having mnever madea madeitra I used Lindys recipe for an
8inch round as trial run but it came out very stodgy more like a suet pudding what did I possibly do wrong. I put newspaper around the outside as suggested and the skewer came out clean after
about 2.5 -3 hours cooking.
160. Kim Cruikshanks says
Hi – are you able to advise me on a recipe for a victoria sponge. I have 2 x 23cm tins and what like them to be “thick when cooked (please also advise of cooking time and oven temp if you can)
Many thanks Kim x
161. Jane says
Hello Jackie,
You can use the Squires or Sugarflair paste colours to colour your sponges, but a really nice chocolate cake recipe which would give a deep brown colour is the Chocolate Fudge Cake. Give it a go.
162. Jane says
Hello Brian,
When you test with the skewer, try inserting in the centre and further away from the centre. I can only think that it was undercooked, or perhaps you weighed out the ingredients incorrectly.
163. Steph Rose says
Hi there
I’m making a 12″ square madeira cake if i’ve read the conversions correctly i need 18 eggs!!!!!!!!!
Also is it absolutely necessary to add the glycerine and lemon zest and juice?
Many thanks
164. Jane says
Hello Steph,
Yes that’s right, 18 eggs! You only need to add the lemon if you want to flavour it and there are lots of other flavourings you can use. The glycerine is to help the cake stay moist, although
once you get the hang of baking a Madeira you may not need to use this.
165. Kathryn says
I am challenging myself to making a Peppa Pig cake for my daughters 2nd birthday. I am using a victoria sponge recipe and I have converted this using your table above as I have a 12 inch square
cake. Therefore using 18oz SR Flour, Butter, Caster sugar, 9 eggs, 6 tsp vanilla essence. I am not sure what temperature or how long to cook this for. PLease could you help?
166. Lindys Team says
Hi Kathryn
The original recipe should tell you what temperature the oven should be at, and also give an indication of how long to cook it for. Then it will need to be a little longer or shorter depending on
what size tin the original recipe was for. Lindy always says to check on the cake using the original time, by inserting a skewer into the cake and if it comes out clean the cake is ready, if not
it needs a little longer!
Good Luck with Peppa Pig, I’m sure your daughter will be delighted!
167. Lindys Team says
Hi Kim
Lindy does not like using Victoria Sponges when she is baking, I have attached a link to her blog eplaining why! You could always try one of her other recipes on her blog!!!
168. Glenn Pearce says
Realy loving the website been sooooo helpfull for us as new beginers of the baking world 🙂 been looking at this table was wondering how to work out the amount of eggs need for each size?
Thank you 🙂
169. Jane says
Hello Caroline,
We suggest you “feed” the cake after baking when it has cooled down slightly. That should be enough. Fruit cakes are best matured for at least a month. Wedding cakes are traditionally stored for
up to three months to let them mature and then they are easier to cut. Your cake board should be 2″ larger than your cake.
Hope this helps.
170. Donna says
Hi Linda,
I’m fairly new to baking and I’ve been asked to make a 9 inch vanilla sponge cake. Is there a recipe that you can recommend? A lot of them I’ve found are in cup sized and I go by grams or ounces.
Any help you could give would be great.
171. Jane says
Hello Glen,
If you multiply the recipe and it says you need 2.4 eggs, round up to 3. This should work.
172. constance says
hie Lindy i really wanted proper explanation on the cake pan issue i need to make 6,8,10 inch deep panabout 31/2 inches fruit cakes what measurement am i supposed to use
173. Kate says
I have purchased Lindy’s book The Contemporary Cake Decorating Bible. Using the recipes in the book, please can you tell me how many servings the 7 inch, 8 inch and 9 inch round cakes will yield?
174. Sarah says
Hi, I am making a wedding cake with 2 tiers using Madeira sponge. I would like to make the smaller cake a chocolate fla our. Can this be done with Madeira and how much coco powder do I use for 8
inch cake? Many than
175. Deni says
I am making a chocolate mud cake for a wedding. The size tin is 33 cm (13 inches). Do I triple the recipe that I have and can you tell me how long I would need to cook it for, and at what
Thank you!
176. charlene says
HI Lindy and team, Have tried the chocolate cake from the cake decorating bible this afternoon. Made enough for an 8 inch tin however I divided this batter into two 8 inch tins. Cakes did not
rise all that much. Should I double the quantities stated in the book or it the batter enough to be shared? The cake was suppose to cook in between 1 hr and 1 hr 45 mins however after checking it
at 45 mins, cake was done. is this normal or have I messed it up? Thanks
177. Jane says
Hello Donna,
If you intend to cover the cake in sugarpaste you would need a firm cake, a lot firmer than a Victoria sponge type cake. We have a Madeira recipe on our blog that is ideal and very tasty.
178. Jane says
Hello Kate,
Just look at page 31 “Cake Portions”. It’s all there!
179. Jane says
Hello Charlene,
This recipe is for a 3″ deep cake, so you would need to fill one 8″ by 3″ deep tin rather than two 8″ tins. If you want to fill the cake just cut across and fill.
180. Caroline says
Hi Lindy,
Could you please tell me the quantites I would use for a giant cupcake tin, and I’m not sure if I would cook both parts of the cupcake at the same time. Also I need to colour the buttercream a
quite a deep Royal Blue colour and not sure what colouring to use e.g paste or gel and what colour would acheive this and how much so and if it would impair the flavour. I would appreciate your
help with this.
Thank you
181. Lindys Team says
Hi Sarah
Lindy does not flavour the madeira cake with chocolate, she uses a chocolate cake recipe which produces a cake similar to madeira, the recipe is on page 11 of “The Contemporary Cake Decorating
Good Luck!
182. Lindys Team says
Hi Deni
I have attached a link to Lindy’s blog where she talks about chenging tin size and recipe quantities, http://www.lindyscakes.co.uk/2009/07/27/how-to-i-change-a-cake-recipe-quanities/, hopefully
this will help. The oven temperature should be the same, and it should be cooked for about 2 1/2 to 3 hours, check it after 2 1/2hours.
Good Luck
183. Caroline says
Hi Lindy,
Could you please tell me the quantites I would use for a giant cupcake tin, and I’m not sure if I would cook both parts of the cupcake at the same time. Also I need to colour the buttercream a
quite a deep Royal Blue colour and not sure what colouring to use e.g paste or gel and what colour would acheive this and how much so and if it would impair the flavour. I would appreciate your
help with this.
Thank you
Caroline x
184. Lindys Team says
Hi Caroline
To work out the quantities I would fill an 8″ tin with water and then compare it with the quantity your tin holds. Your recipe can then be multiplied or divided as necessary. Here is a link to
our blog post about quantities:- http://www.lindyscakes.co.uk/2009/07/27/how-to-i-change-a-cake-recipe-quanities/
You can add paste colour to your buttercream to achieve the colour you need. We stock a range of colours on our online shop with photos that show you their exact colour. Here is a link to all the
blues:- http://www.lindyscakes.co.uk/shop/search.php?mode=search&page=1
Hope this helps and good luck!
185. sammie says
i’m making a wedding cake for my friend and i have a 9inch round by 5inches deep what quantity would i put in it?
many thanks
186. Lindys Team says
Hi Sammie
It depends what recipe you are using. If you have a recipe for an 8″cake you can upscale the quantities as per the instructions in our blog post:-
187. Jane says
Hello Constance,
First, take your 8″ cake recipe. For the 6″ cake you would need to half the ingredients in the 8″ recipe. For the 8″ cake, obviously just us the amounts as per your 8″ recipe. For the 10″ cake
you would need to times the 8″ recipe by 1.5.
Hope this makes sense.
188. Kat says
Hi. Have found your conversion chart very helpful. But can you help? Am making my sisters wedding cake. The bottom tier is going to be a 12 inch fruit cake. I know I need to times the 8 inch
recipe by 2 and a half. But what about cooking time? Any advice would be appreciated. Kat
189. Lindys Team says
Hi Kat
The cooking time is as follows; 2 1/2 hours at 150 degrees C/300 degrees F/ Gas 2 , then 5 1/2 hours at 120 degrees C/250 degrees F/ Gas 1/2 , total cooking time 8 hours.
Hope this helps, and good luck with the cake!
190. Barbara says
I want to make my friends some individual Christmas cakes and have 10 cm tins to use. Am happy (via your great table) with quantities required but how should I determine the cooking time? Should
I stay at the same temperature as well? Your help would be very much appreciated.
Thank you
191. Amy says
i have got to make 2 wedding cakes…i am an amateur…both sponge
1 has a 16″ round base and 1 is a 16″ heart shape!
i have cooked plenty of cake and and worked outthat i need to quadruple the recipe i usually use…i think but what do i do with the cooking temp/time? lower the temp and increase the time…120 –
3-3.5 hours??
will the outside burn and inside be raw??
this is the sticking bit i can’t work out!
Please help!!
many thanks Amy
192. Lindys Team says
Hi Barbara
That sounds like a lovely gift for Christmas! I would keep the temperature of the oven the same,and try cooking for 20-30 mins. Keep checking though by inserting a skewer into the cake and when
it comes out clean they will be ready!
Happy Baking!
193. Lindys Team says
Hi Amy
It really will depend on the type of sponge recipe that you are using. Lindy uses madeira so our calculations are based on that recipe and timings, but a 16 inch cake would take about 3 to 3 1/4
hours at the same temperature. To prevent the sides becoming crusty newspaper is tied around the cake tin. For the heart shaped cake, again based on this recipe we would recommend 2 1/4 to 2 1/2
hours. You may also need to cover the cake loosely during cooking to prevent the top from over crusting. Keep checking the cakes and when a skewer is inserted it will come out clean when the cake
is ready.
Good Luck, and let us know how you get on!
194. june says
i want to make a Madeira cake – and create a pastel layered cake , i have a 14 inch square cake tin 3 inch deep, according to the chart it would approx 4 times the amount. How long would this
need to cook for and what would the temperature for a fan assisted oven be? Because i am creating layers how much mix do i use at one time.
many thanks
195. Sarah says
I wondered if you could help me? I am looking to bake a cake for my dad’s 60th and decided to do 3 tiers. However, each tier is wonky. I bought some specially made wonky tins from Lakeland (saves
me cutting) which are the following sizes: 15cm, 20cm and 25cm.
I was going to make all 3 layers Maderia, but wondered what quanties of mixture I would need for each? Can anyone help? Thanks, Sarah. 🙂
196. Lindys Team says
Dear June
A 14″ square madeira cake will take approx 2 3/4 – 3hours. This is only a guide so keep checking the cake and when a skewer inserted into the middle comes out clean, it’s cooked.
Lindy’s madeira cake recipe is cooked at 160 0c but you can turn your fan oven down to 140 0c. There are more tips regarding baking a madeira cake on our blog page:-
Good luck!
197. Lindys Team says
Hi Sarah
I would fill an 8″ x 3″ tin with water and compare it to the amount your tins hold. You then multiply or divide the 8″ recipe as necessary. This is explained on Lindy’s blog page:-http://
Happy Baking!
198. Sarah says
Hi Zoe
Thank you for your reply, its much appreciated! So nervous as this is the first time I am going to be icing a cake too, and the fact that I am going to do 3, well I have definetly set myself a
Well, if it doesnt turn out right, it will give my family something to talk about for a long time to come! Fingers crossed!
🙂 Sarah
199. joanna says
I make a rich fruit cake every Xmas, its an ancient Good Housekeeping recipe for a 9 inch tin. This year I also want to make 2 smaller 5 inch. How can I judge cooking times and quantity. will the
9inch quantity make 2 5inch? I also have probs stopping the cake from over browning at the edges
Thanks Joanna
200. Anna says
I am going to be making a 12″ square madeira cake using your recipe. How long will I need to cook this for? I will be baking it in a fan oven so will have the oven temp at 140 degrees as
Many thanks
201. Emily says
I’m wondering if anyone can advise. I need to cook a 30 cm cake and don’t know how long and what temperature. It’s a dense chocolate cake with *lots* of eggs and no rising agent. The original
recipe (for cupcakes) has it 200 degrees for 10-12 minutes. I’m thinking 170 for 50-60 but that’s really just a stab in the dark. Any help would be greatly appreciated!
202. Lindys Team says
Hi Anna
You will need to have the oven at 160 and it will need to cook for 2 1/4 – 2 1/2 hours.
Good Luck
203. Lindys Team says
Hi Emily
Lindy’s Chocolate Cake recipe, for a 30cm round cake, has the oven at 180 and cooks for 2 – 2 1/4 hours. Hope that helps!
204. Lindys Team says
Hi Joanna
Lindy’s fruit cake recipe for a 5″ round cake is cooked for 30 mins at 150 and 1 hour at 120. You can also wrap newspaper round the outside of the cake tin to prevent the cake browning round the
Hope they turn out well!
205. GRACE says
WHEN I BAKE MADEIRA CAKE USING YOUR RECIPE,IT KEEPS SINKING IN THE MIDDLE. WHAT COULD BE WRONG?
206. Lindys Team says
Hi Grace
Cakes usually sink in the middle because they are not set properly, we would suggest not taking the cake out of the oven before the time on the recipe and not opening the oven door.
Good Luck!
207. Charlotte says
I am making a birthday cake in a 10inch circle tin. The original recipe (8ich tin) requires 5 eggs, so should I use 7 or 8 eggs in the new?
Thank you!
208. Zain says
Hi thank you for this very very useful chart I’ve saved many cake disasters 🙂
But can anyone tell me how much would I reduce a recipe if I wanted in smaller pan eg-: 8″ cake recipe but want to cook in 6″ x 3″ deep cake pan (round)
209. Lindys Team says
Hi there
The table shows that if you wanted to make a 6 inch cake using a 8 inch cake recipe you would use half the quantities.
Good Luck!
210. Lindys Team says
Hi Charlotte
We would suggest using 7 large eggs and 1 medium egg, to get around the half egg issue.
Let us know how you get on!
211. Dal says
Hello, I am using an 11inch square tin which is 3 inches deep-making a Victoria sponge cake. I have used your chart to help me and I will be doubling the recipe and adding the half.
I was wondering if I would have to make the cake mixture twice (due to the tin’s deepness) or just make the mixture once and then slice the cake in half after I have baked it, bearing in mind it
is a birthday cake and I want to make sure the cake rises.
I also wanted to know what degrees I would set the oven and how long I would bake I for.
Could you please reply by tomorrow, I would really appreciate it as I am making the cake on Friday. Thank you for your help.
212. Lindys Team says
Hi there
We would recommend cooking it how the recipe you are following suggests cooking it, Lindy doesn’t make Victoria Sponges but we think they are usually cooked in two tins, however if it is a
Madeira it should be cooked in one tin and sliced in half once cooked. We would also recommend using the temperature of the original recipe and the cooking time, and check it regularly after
Good Luck with the birthday cake!
213. Tina says
Hi I want to put more dried fruit in my cake mix than the recipe states to make it fruitier,will the batter still hold the mix?As i don’t really want to increase other ingredients! thanks
214. Lindys Team says
Hi Christina, should be okay to put more fruit in the mix, as long as its not too much!
Good Luck!
215. Stephanie short says
Sorry this might not be related to the ongoing chat but can anyone offer any advice on icing for gingerbread?
I always find when icing gingerbread cookies etc the icing always looks dry but when put into packaging (cellophane bags)
It smudges and always looks ‘squashed’and it really spoils the look of the biscuits, and its so annoying after spending so long on them and i make a lot of them this time of year.
Any advice would be really appreciated, thank you
216. Lindys Team says
Hi Stephanie
If you are using royal icing the key would be to let the icing dry before putting the cookies in the cellophane bags. If the icing is too dry, tiny amounts of cooled boiled water can be added to
make the icing moist and give it a sheen, to give it the right consistency.
Hope this helps and good luck with the gingerbread men!
217. Amelia says
I was wondering if you can help? I am cooking a 12 inch square Madeira using your recipe. Can I double check that after converting from the 8 inch round recipe this would be 18 eggs??!! Also, how
long would I need to cook this for before checking on it. It’s for my grandads 90th and all the family will be there to eat it so can’t afford any disasters lol.
Thanks x
218. Lindy's Team says
Hi Amelia
It is 18 eggs for a 12 inch square madeira cake. The baking time would be approximately 2 1/4 to 2 1/2 hours at 160 C/325 F/Gas 3.
Good Luck with the cake, I’m sure the family will love it!!!
219. Cassie says
Hi Thanks for this, I need to make my 8″ recipe into a 12″ but I use 5 eggs and 285g of everything else for an 8″ How would I upscale that by the 2 1/2 method to a 12″? Also how do you upscale
the cooking times accordingly for the different sizes? my normal 8″ usually takes around 45 mins.
Thanks for any help you can give me
220. Lindy's team says
Hi Cassie
Thanks for your query about cake quantities.
On Lindy’s Blog there is a piece about Baking the Perfect Madeira cake. Click on the link below to see her changing quantities and times chart.
Lindy’s book The Contemporary Cake Decorating Bible has all sorts of tips and charts which help with different cake sizes.
Hope this helps.
Kind regards
221. Lee says
I have an 8″ cake receipe that takes 1 hours 20 minutes at 160 or 140 for a fan oven.
I want to use this receipe to make 2 6″ cakes, therefore, i just need to use the same receipe amount, although what I am not clear of, is how long I would need to bake this for.
Can anybody help please.
222. Lindys Team says
Hi Lee
Lindy’s 6″ round madeira cake recipe takes approx 1 – 1 1/4 hours to cook.
All ovens are different so you need to keep an eye on the cake. When the cake is ready, a skewer inserted into the middle will come out clean.
I hope this helps and good luck with your cakes.
223. Rachel says
I am making a wedding cake in the Summer and I need to get deep tiers about 4″. I am planning to use Lindy’s madeira cake receipe but it never comes out with such deep cakes. Should I bake two
cakes or double the quantity of mixture to get a deeper cake?
224. Lindys Team says
To change cake quantities go to Lindy’s Blog where there are tips on how to do this under FAQ.
Lindy has some cake quantity charts in her Contemporary Cake Decorating Bible.
225. Carolyn says
I am making a Rapunzal princess dress cake (2l Pyrex bowl) to sit on top of a 25cm 3in deep square chocolate cake.
Not sure how to work out the amounts?
Any help would be much appreciated.
Thank you
226. Jojobinx says
Hi can someone please help me with a recipe for a firm chocolate madeira cake please? I would like to use Lindy’s madeira recipe, but how do I make it into a chocolate madeira? I need a firm yet
moist chocolate cake for a friends wedding cake. I will be doing 8″ and 10″ squares stacked directly on top of one another but have yet to find a nice enough recipe that gives the desired result
(trialling many recipes is costing me a fortune!). Have the most trouble getting the cakes cooked evenly (would you therefore recommend doing 2 separate layers?). So If i can hopefully alter
Lindy’s recipe to make a chocolate version, I presume I would then be able to use the scaling up chart? Sorry for so many questions, am stressed to the max!
227. karen says
I’ve been asked to bake/decorate a 2 tier Victoria sponge cake (7″ and 10″). Recipe I normally use is basically to weigh
eggs and use equal quantities of flour, sugar & butter. I’m ok with 7″ cake but not sure of how to calculate number of
eggs to use for 10″ tin, please can you help?
228. Lindys Team says
Hi Jo
Lindy has a firm, but moist chocolate cake recipe in her books. She also has a chocolate fudge cake recipe which is delicious. Here is a link to the recipe on our blog:-http://
Hope this helps.
229. Lindys Team says
Hi Carolyn
Have a look on Lindy’s Blog under FAQ (link below). She gives some great tips about adapting quantities. Or in Lindy’s Contemporary Cake Decorating Bible there are a few charts with various cakes
and tin sizes.
Hope this helps and good luck.
230. Lindys Team says
Hi Karen
Not sure about Victoria Sponge quantities as we don’t have a recipe for that, but Lindy has a chocolate cake chart in her Contemporary Cake Decorating Bible which, for a 7″ round cake, it uses 8
large eggs. For the 10″ it uses 12 eggs. This wouldn’t be the same as a Victoria Sponge but might give you some idea how many more eggs you might need.
You could try to Google the information you need or just try and increase your whole recipe by a quarter or half and see how much batter you get.
On Lindy’s Blog she has some useful tips about adapting favourite cake recipes. Click on the link below and see if this helps.
Good luck
231. karen says
Hi Susie,
Many thanks for information, much appreciated will let u know how it turns out!
Thank U, Karen :o)
232. Lindys Team says
Hi Rachel
To get a deeper cake you could try filling a larger 3″ inch deep tin with water and comparing volumes with your 4″ tin.
Good Luck!
233. suzanne sharp says
i am going to bake a madeira cake in a 12″ x 9″ tin and i am struggling to convert the quantities using your table please can you help me
thank you
234. Lindys Team says
Hi Suzanne
Lindy has some tips on her Blog FAQ about converting favourite recipes. Click on the link below and hopefully this will help you with your tin size.
Kind regards
235. Kate says
I was wondering how to calculate the baking time for a cake I’ve adjusted to fit a larger pan. I have a 9″ square tin so have done 1.5 times the recipe. Do I multiply the baking time by 1.5 too?
So bake for 2hrs 15mins instead of 1hr 30mins?
236. Lindys Team says
Hi Kate
If you just add another 15-20 minutes baking time and keep checking it regularly after this point with a skewer. It should come out clean if it is cooked.
Good luck.
237. Carla says
Hi, is the additional time the same when baking a 12″ cake? Normal baking time plus 15-20mins? X
238. Lindys Team says
Hi Carla
A 12″ cake will take longer. As a guide Lindy’s 12″ round madeira cake takes approx 2 – 2 1/4 hours.
All ovens are different so you will need to keep an eye on your cake. The cake will be cooked when a skewer inserted into the middle comes out clean.
239. maral says
I am planning on making a 6 layer cake. The original recipe uses three 8″ cake pans but since I need a larger cake I was hoping I could add to the batter and make it with three 9″ round pans. So
based on your chart I should multiply the amounts by 1 3/4? Also The recipe suggests baking the cake at 350 for 20 minutes, how do I change it for a 9″ round pan?
Many thanks.
240. Lindys Team says
Hi Maral
Increase the recipe to 1 1/4 to add more batter to a 9″ round tin – per tin, so if you are making 3 cakes then yes it would be 1 3/4. Or if its easier to double it, then you can used the
leftovers for cupcakes.
You probably would be best to cook for 20 mins as stated and then keep checking it with a skewer to see if it is cooked.
Good luck
Kind regards
Lindy’s Team
241. Caraline says
Hi, I’m baking a 14″ square cake, please can you advise how long I need to cook it for? Many thanks
242. Naomi Samuels says
I am trying to make a princess cake and have a basin tin, not tiffin, measuring 21cm wide and 15cm deep, please could you advise what quantity’s i should be using for a sponge cake and how long
to bake for?
Thank you
243. Lindys Team says
Hi Caraline
It really depends on the type of cake you are baking and what the recipe says. Lindy has a chocolate cake recipe which bakes for 2 1/2 to 2 3/4 hours at 180degc for a 14″ square. You would need
to keep checking with a skewer to see if it is cooked.
Hope this helps.
Lindy’s team
244. Lindys Team says
Hi Naomi
Not sure what princess cake you are making, but if you are decorating it with sugarpaste, then it needs to be a dense cake like madeira, not a victoria sponge, as that will collapse under the
For a standard round cake tin measuring 20cm, it would need to be baked for 1 1/4 to 1 1/2 hours.
Good luck with your cake.
Lindy’s Team
245. Abby cooper says
Just wondered how u calculate for rectangle cake tins? Eg a 10″x12″?
Thank u
246. Lindys Team says
Hi Abby
You could try filling the tin with water and comparing it to an 8″ tin as explained at the bottom of this table. http://www.lindyscakes.co.uk/2009/07/27/how-to-i-change-a-cake-recipe-quanities/
I think quantities for an 11″ square would be about right.
247. b kelly says
My mum has a cake recipe for a 10 X 8 X 3 inch cake…using 6 eggs, 14oz of marg, sugar and flour.
She has been asked to scale this up to a cake for a 16 X 10 X 3 inch. I am struggling a bit with the table
Can you help, what multiple of ingrediants should be used…and should the baking time be extended, if so to what?
Thank you in anticipation, I am being asked for these details by the end of tomorrow 04/06/13..
248. Lindys Team says
Try clicking on the link below to adapt recipes.
Hope this helps.
Kind regards
Lindy’s Team
249. Melanie says
I want to make a football cake. Your receipe says that I need one 8″ quantity mixture for a 6″ ball. I have to bake one half at a time as I only have one tin. What receipe would I use for half
the tin as I will probably bake on 2 separate days and presumably the mixture will not last. Thanks
250. Sarah says
Would you be able to tell me the madeira recipe I would need for a 12 x 8 inch tin
Thank you
251. Kirsty Wood says
Hia, I’m making a two tiered birthday cake and plan to use your madeira recipe and fill with buttercream and jam. My question is when scaling the recipe using the above tool – will the cakes be
roughly the same height? Making the cake for Saturday so should be baking soon!
Also, can I add flavouring (i.e. lemon) and how might that change the recipe?
Many thanks!
Kirsty x
252. Lindys Team says
Hi Melanie
If you are only baking one half of the cake, then halve the recipe.
Good luck with your cake
253. Lindys Team says
Hi Kirsty.
Yes the cakes will be about the same height (approx 3″).
You can add lemon. Use the zest of 2 lemons for an 8″ round cake.
Happy Baking!
254. Lindys Team says
Hi Sarah
These are the quantities you need.
700g butter, 700g caster sugar, 700g self-raising flour, 350g plain flour, 12 eggs – bake for 1.45 – 2 hours. I have just given you a recipe for a 10″ square which is approximately the same size
as a 12×8″.
Good luck
Lindy’s Team
255. Kirsty says
I was wondering if you can help me please?
I am looking to make a 10″ and an 8″ cake to tier on top of each other and was wondering if you would be able to advise me on the recipes and what quantities i will need? I have bout 3″ deep
Thanks for any help you can give me :).
256. Lindys Team says
Hello Kirsty
Here are links to Lindy’s madeira cake and chocolate fudge cake recipes which are both delicious. The recipes are for 8″ round so for a 10″ cake you will need 1 1/2 x the quantities.
257. Lisa says
I was wondering what your recommendation is for baking times when you use your chart above? For example if I need to make an 8 inch and a 12 inch white choc mud cake, should I cook them both for
the same amount of time?
Much appreciated.
Kind regards, Lisa
258. Jane says
Hello Lisa,
It all depends on your oven. What I do is cook the cake for the stated time on the 8″, and then check every 10 minutes. For a 12″ I would add roughly 30 minutes to the cooking time check and then
if not ready check every 10 minutes.
Hope this helps.
259. MJ says
How do I change a recipe for marble cake 8″ round to a 10″ round cake?
In terms of number of eggs, flour, sugar, cocoa butter etc??
260. Lindys Team says
You will need 1 1/2 times the quantities of the 8″ round cake to make the 10″ round.
Happy Baking!
261. sara says
hello, i have a recipe for an 8″ round cake and i need to make a 10″ square cake. what will the upscale be?
262. Jane says
Hello Sara,
You would need to increase the recipe by 1/2, i.e. if you are using 100g you would need to use 150g.
263. Chris knox says
I have a cake tin that I want to make a fruit cake in order to decorate with a golf scene. The cake tin is size is 20cm by 27cm. What quanity of ingredients would l use.
Kind regards chris knox
264. Lucieswain says
Hi there I am wanting to make a 20cm cake 7cm deep. have any good recipes? And oven temp and time? I’m new to this see. 🙂
Thanks x
265. Lindys Team says
Hello Chris,
You need to find the volume of the tin, so for example 20cm(8″) x 27cm(11″) x 3″ tin would have a volume of 264″. The nearest volume would be a 10″, 3″ deep tin, so you would need to do a 10″
square recipe, but would have quite a lot left over.
266. Lindys Team says
Hello Lucie,
Here is a link to our recipe page on our blog. Hopefully you can find something that takes your fancy!
267. Claire says
I’m looking to bake a cake but using a 10″ X 6″ X 2″ roaster tin as the shape suits, would you have any idea as to what recipe I would use and how long to bake?
268. Lindys Team says
Hello Claire,
You need to find the volume of the tin, so 10 x 6 x 2 = 120. Find a tin that has the nearest volume, i.e. 7 x 7 x 3 = 147 and you would have a little left over. Cooking times depends on your
oven, but I would start with 1hr 15m and take it from there!
269. dawnsietay79 says
I need to scale a cake up from a square 20cmx20cm to a rectangle 31cm by 26 cm
please help me 🙂
270. Debbie says
Hi Lindy
Thank you for sharing your recipe. I am making a 21st cake, 3 tiers 5″, 7″ and 10″ and I plan to use your Madeira recipe. I would like to make one of the layers chocolate would your chocolate
fudge recipe work for a tiered cake? If so which tier would you make chocolate? I plan to dowel and sugar paste the cake
Thank you
271. Sarah says
Can you help? I have a recipe for an 8″ cake that I want to adapt to a 14″ tin. I’ve looked at the chart above & worked out my ingredients quantities but how should I deal with the cooking time?
The 8″ cake is cooked for 4-4.5 hours at 120C in my fan oven, how much longer would a 14″ cake take?
Any help would really be appreciated.
272. Katrina says
I have to do 150 cupcakes for a friends wedding. I want to use Lindys lovely madeira cake recipe because I’ve used it for cakes before but I’m not sure how many cupcakes an 8″ quantity would
make. Any ideas please?
273. Catherine says
I would like to make a 12″ square chocolate fudge cake and can see this takes triple the recipe, but how long and at temperature would I bake the cake, as it may bee too expensive to try more
than one practice cake, thank you
274. suzi says
HELP – NEED INFO TODDAY 🙁
Hi guys, please can you help me! I am making a 11″ square porter (fruit) cake. Can I use the conversion chart on here for fruit cakes instead of madera? And any idea on how much longer than the
normal 90 mins cooking time it will take?!!!
Many thanks
275. Lindys Team says
Hi Suzi
You can use this chart for any recipe. The baking time will depend on your oven as well as the cake tin and depth of the cake. Keep checking the cake and it will be done when a skewer inserted
into the middle comes out clean.
276. Lindys Team says
Hi Dawn
The volume of the rectangular tin is about twice the size of the square one. Therefore I would probably double the quantities. If you have too much mixture you can always make some cupcakes!
277. Lindys Team says
Hi Debbie
The chocolate fudge cake works well in a tiered cake. It does not matter which layer is chocolate, however, if one layer is fruit, I would put this as the bottom layer.
Good luck with your cake!
278. Lindys Team says
Hi Sarah
It’s hard to say exactly how much longer you would need to cook it for. I would say it would take approx 1 hour longer but you will need to keep checking the cake and when a skewer inserted into
the middle comes out clean, it is ready.
Happy Baking!
279. Lindys Team says
Hi Catherine
Unfortunately we don’t have anyone in the office at the moment who can answer cake baking queries.
Try logging onto this tip on Lindy’s Blog, although I couldn’t see any help with baking times.
Usually we recommend baking it for the time stated, adding 20 minutes on and then keep checking the cake with a skewer until it comes out clean.
Lindy’s chocolate cake recipe for a 12 inch square bakes between 2.15 and 2.45 minutes.
Hope this helps, but it is usally trial and error, constantly checking when you scale up recipes.
Kind regards
Lindy’s Team
280. Lindys Team says
Hi Katrina
Unfortunately, we don’t have anyone in the office that can help with your question at the moment.
You could try making one quantity of 8″ mix and see how many cupcakes it makes. It would all depend on how deep you want your cakes to be.
Sorry we can’t help you at the moment.
Kind regards
Lindy’s Team
|
{"url":"https://lindyscakes.co.uk/2009/07/27/how-to-i-change-a-cake-recipe-quanities/","timestamp":"2024-11-11T03:55:02Z","content_type":"text/html","content_length":"405279","record_id":"<urn:uuid:44c0e73a-5f5b-4e21-9a64-afa71c3747e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00077.warc.gz"}
|
【How-to】How much is 5 yards in feet - Howto.org
How much is 5 yards in feet
Ads by Google
What is 1 yard equal to in feet?
1 yard is equal to 3 feet, which is the conversion factor from yards to feet.
Is one yard 5ft long?
The yard is a unit of length measurement equal to 3 feet or 36 inches.
What is the total number of feet in 5 yards?
Yards to Feet table
Yards Feet
5 yd 15.00 ft
6 yd 18.00 ft
7 yd 21.00 ft
8 yd 24.00 ft
How many inches is 5 yards?
Five yards are 180 inches.
How long is a yard?
1 yard is 3 feet long. Remember the width can change. It could be 60″ wide, 72″ wide or even 102″ wide, but the length of a yard is always 36 inches or 3 feet.
What is the rate of 12 feet in 4 yards?
Explanation: First let’s know that the conversion between feet and yards is 3 feet equals 1 yard. And so 12 feet is the same as 4 yards. They are equal.
What is a 5 yard object?
How long is 5 yards? It’s about one-and-one-tenth times as long as a Beetle (Volkswagen) In other words, 5 yards is 1.121 times the length of a Beetle (Volkswagen), and the length of a Beetle
(Volkswagen) is 0.8921 times that amount. (1964 model) (a.k.a. Volkswagen 1200, a.k.a. Käfer)
How wide is a yard of fabric?
Very simply, one yard of fabric is 36 inches long. But working out how much fabric you need for a sewing project is a little bit more complicated than that. While a yard in length is always a yard,
fabric width varies according to where you’re buying it. Average widths are between 33-44 inches.
Is 150 inches bigger than 5 yards?
5 yards; it is 30 inches greater. 150 inches; it is 36 inches greater..
What is about 1 yard long?
A yard is equal to 3 feet. A yardstick equals 3 rulers. Use rulers to measure shorter lengths. Use yardsticks to measure longer lengths.
What can be measured in yards?
Use yards to measure distances, such as the length of a football field. Use miles to measure great distances, such as the distance between cities.
How big is a yard stick?
Normal length of a meterstick made for the international market is either one or two meters, while a yardstick made for the U.S. market is typically one yard (3 feet or 0.9144 meters) long.
What is an example of 1 yard?
Yard is defined as a measurement of length that equals 3 feet or 36 inches. An example of a yard is the length measurement that is used to sell fabric. The definition of a yard is an outdoor area of
a house or other building. An example of a yard is the lawn in front of your house; a front yard.
What item is a yard?
A yard is about: half the length of a bed. the width of a large fridge. the height of a countertop.
How long is a human foot?
foot, plural feet, in measurement, any of numerous ancient, medieval, and modern linear measures (commonly 25 to 34 cm) based on the length of the human foot and used exclusively in English-speaking
countries, where it generally consists of 12 inches or one-third yard.
What is yd math?
A unit of length (or distance) in US units equal to 3 feet or 36 inches. The abbreviation is: yd. Example: 5 yards can be written 5 yd. One yard is exactly 0.9144 meters in the Metric System (the
yard is defined that way). Example: 5 yd = 4.572 m.
What is a yard for math?
Home > Math Vocabulary > Yard. A unit of linear measurement in the customary system equal to 3 feet (36 inches).
What is yard house?
Definition of yard
(Entry 1 of 4) 1a : a small usually walled and often paved area open to the sky and adjacent to a building : court. b : the grounds of a building or group of buildings. 2 : the grounds immediately
surrounding a house that are usually covered with grass.
Ads by Google
|
{"url":"https://howto.org/how-much-is-5-yards-in-feet-13128/","timestamp":"2024-11-14T21:20:18Z","content_type":"text/html","content_length":"48157","record_id":"<urn:uuid:68113323-8df1-4f28-88a4-26957f9a96be>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00667.warc.gz"}
|
CFA Discussion Topic: floater
Author Topic: floater
A two-year floater is paying a coupon of six-month LIBOR + 2%. Assuming a discount margin of 1% and that LIBOR is currently 6%. What is the most likely price of the floater?
wexwarez A $ 98.31
@2005-05-17 B $100.00
04:21:54 C $101.84
D $103.72
if this question comes out in exam, I would have spent 5 mins and not knowing whether I got it right.
N = 2yrsX2 = 4 (six-mth)
FV = 100 (usually you use 1000 depending on the answers provided. In this case, use 100)
katentim coupon pymt = 6% + 2% = 8%;
@2005-05-17 bcos it is six-mth, 8/2 = 4%.
19:35:51 Therefore, 4% of 100 = $4
I/Y = 8% - 1%(discount margin) = 7%
therefore, I/Y = 7/2 = 3.5%
Use your calculator and find PV. Answer is C.
Frankly speaking, I m still blur after attempting this question. I also don't know what is discount margin. I m just trying to make sense to find the answer. What's the answer wexwarez?
Hi no background, well done you got it!
Floating rate securities are also known as variable rate securities or floaters. The coupon paid over the life of the note fluctuates by reference to an agreed formula. The coupon is
typically based on a "reference rate + margin".
Discount margin = IRR - Reference rate
Hence IRR = Reference Rate + Discount Margin
@2005-05-18 IRR = (6 + 1) = 7%
Semi-annual discount rate = 1/2 x IRR = 1/2 x 7 = 3.5%
Then discount (as no background did) to find PV
= Answer(C)
It is straight forward if you know what the reference rate and discount margin are; if you know neither of them it is impossible how ever much time you spend on it!
In this case you also have to recognise (although it tells you) that it will be semi-annual discounting.
I am not familiar w/ the specific term "discount margin" and have never seen it. So that threw me off for a second. It is usually posed at the yield on the bond, and if so, they may not
tell you if it's an annual yield (usually assumed to be this) or semi-annual (or whatever frequency the interest payments are). If it's annual and the payments are semi, you'd need to
MGM13 take the square root of 1+ annual yield to get the true semi-annual yield. Cutting it in half gets you close, but not precise enough if two of the answer choices for the bond's PV are
@2005-05-18 close to each other (not uncommon).
For the question you indicated here, using the true semi-annual yield (3.4408%) as "i" in your HP, you should get a bond price of 102.06.
Basically, though, you're on the right track.
CFA Discussion Topic: floater
To post a new topic or reply to a topic, please
for a free user account.
I just wanted to share the good news that I passed CFA Level I!!! Thank you for your help - I think the online question bank helped cut the clutter and made a positive difference.
|
{"url":"https://analystnotes.com/cfa-topic-floater.html","timestamp":"2024-11-07T09:49:04Z","content_type":"text/html","content_length":"22218","record_id":"<urn:uuid:07549208-6df7-4a03-a2e6-fbcd9511e0fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00486.warc.gz"}
|
10.4 Multiple Slit Diffraction
Learning Objectives
Learning Objectives
By the end of this section, you will be able to do the following:
• Discuss the pattern obtained from diffraction grating
• Explain diffraction grating effects
The information presented in this section supports the following AP® learning objectives and science practices:
• 6.C.3.1 The student is able to qualitatively apply the wave model to quantities that describe the generation of interference patterns to make predictions about interference patterns that form
when waves pass through a set of openings whose spacing and widths are small, but larger than the wavelength. (S.P. 1.4, 6.4)
An interesting thing happens if you pass light through a large number of evenly spaced parallel slits, called a diffraction grating. An interference pattern is created that is very similar to the one
formed by a double slit (see Figure 10.16). A diffraction grating can be manufactured by scratching glass with a sharp tool in a number of precisely positioned parallel lines, with the untouched
regions acting like slits. These can be photographically mass produced rather cheaply. Diffraction gratings work both for transmission of light, as in Figure 10.16, and for reflection of light, as on
butterfly wings and the Australian opal in Figure 10.17 or the CD pictured in the opening photograph of this chapter, Figure 10.1. In addition to their use as novelty items, diffraction gratings are
commonly used for spectroscopic dispersion and analysis of light. What makes them particularly useful is the fact that they form a sharper pattern than double slits do. That is, their bright regions
are narrower and brighter, while their dark regions are darker. Figure 10.18 shows idealized graphs demonstrating the sharper pattern. Natural diffraction gratings occur in the feathers of certain
birds. Tiny, finger-like structures in regular patterns act as reflection gratings, producing constructive interference that gives the feathers colors not solely due to their pigmentation. This is
called iridescence.
The analysis of a diffraction grating is very similar to that for a double slit (see Figure 10.19). As we know from our discussion of double slits in Young's Double Slit Experiment, light is
diffracted by each slit and spreads out after passing through. Rays traveling in the same direction (at an angle $θθ size 12{θ} {}$ relative to the incident direction) are shown in the figure. Each
of these rays travels a different distance to a common point on a screen far away. The rays start in phase, and they can be in or out of phase when they reach a screen, depending on the difference in
the path lengths traveled. As seen in the figure, each ray travels a distance $dsinθdsinθ size 12{d`"sin"θ} {}$ different from that of its neighbor, where $dd size 12{d} {}$ is the distance between
slits. If this distance equals an integral number of wavelengths, the rays all arrive in phase, and constructive interference (a maximum) is obtained. Thus, the condition necessary to obtain
constructive interference for a diffraction grating is
10.13 $dsinθ=mλ,form=0,1,–1,2,–2,… (constructive),dsinθ=mλ,form=0,1,–1,2,–2,… (constructive), size 12{d"sin"θ=mλ,~m="0,"`"1,"`"2,"` dotslow } {}$
where $dd size 12{d} {}$ is the distance between slits in the grating, $λλ size 12{λ} {}$ is the wavelength of light, and $mm size 12{m} {}$ is the order of the maximum. Note that this is exactly the
same equation as for double slits separated by $d.d. size 12{d} {}$ However, the slits are usually closer in diffraction gratings than in double slits, producing fewer maxima at larger angles.
Where are diffraction gratings used? Diffraction gratings are key components of monochromators used, for example, in optical imaging of particular wavelengths from biological or medical samples. A
diffraction grating can be chosen to specifically analyze a wavelength emitted by molecules in diseased cells in a biopsy sample or to help excite strategic molecules in the sample with a selected
frequency of light. Another vital use is in optical fiber technologies where fibers are designed to provide optimum performance at specific wavelengths. A range of diffraction gratings are available
for selecting specific wavelengths for such use.
Take-Home Experiment: Rainbows on a CD
The spacing $dd size 12{d} {}$ of the grooves in a CD or DVD can be well determined by using a laser and the equation $dsinθ=mλ,form=0,1,–1,2,–2,….dsinθ=mλ,form=0,1,–1,2,–2,…. size 12{d"sin"θ=mλ,`m=
"0,"`"1,"`"2,"` dotslow } {}$ However, we can still make a good estimate of this spacing by using white light and the rainbow of colors that comes from the interference. Reflect sunlight from a CD
onto a wall and use your best judgment of the location of a strongly diffracted color to find the separation $d.d. size 12{d} {}$
Example 10.3 Calculating Typical Diffraction Grating Effects
Diffraction gratings with 10,000 lines per centimeter are readily available. Suppose you have one, and you send a beam of white light through it to a screen 2.00 m away. (a) Find the angles for the
first-order diffraction of the shortest and longest wavelengths of visible light (380 and 760 nm). (b) What is the distance between the ends of the rainbow of visible light produced on the screen for
first-order interference (see Figure 10.20)?
The angles can be found using the equation
10.14 $dsinθ=mλ(form=0,1,–1,2,–2,…)dsinθ=mλ(form=0,1,–1,2,–2,…) size 12{d"sin"θ=mλ,`m="0,"`"1,"`"2,"` dotslow } {}$
once a value for the slit spacing $dd size 12{d} {}$ has been determined. Since there are 10,000 lines per centimeter, each line is separated by $1/10,0001/10,000$ of a centimeter. Once the angles
are found, the distances along the screen can be found using simple trigonometry.
Solution for (a)
The distance between slits is $d=(1 cm)/10,000=1.00×10−4cmd=(1 cm)/10,000=1.00×10−4cm size 12{d= \( 1`"cm" \) /"10","000"=1 "." "00" times "10" rSup { size 8{ - 4} } `"cm"} {}$ or
$1.00×10−6m.1.00×10−6m. size 12{1 "." "00" times "10" rSup { size 8{ - 6} } `m} {}$ Let us call the two angles $θVθV size 12{θ rSub { size 8{V} } } {}$ for violet (380 nm) and $θRθR size 12{θ rSub {
size 8{R} } } {}$ for red (760 nm). Solving the equation $dsinθV=mλdsinθV=mλ size 12{d"sin"θ rSub { size 8{V} } =mλ} {}$ for $sinθVsinθV size 12{"sin"θ rSub { size 8{V} } } {}$
10.15 $sin θ V = mλ V d , sin θ V = mλ V d , size 12{"sin"θ rSub { size 8{V} } = { {mλ rSub { size 8{V} } } over {d} } ,} {}$
where $m=1m=1 size 12{m=1} {}$ for first order and $λV=380nm=3.80×10−7m.λV=380nm=3.80×10−7m. size 12{λ rSub { size 8{V} } ="380"`"nm"=3 "." "80" times "10" rSup { size 8{ - 7} } `m} {}$ Substituting
these values gives
10.16 $sinθV=3.80×10−7m1.00×10−6m=0.380.sinθV=3.80×10−7m1.00×10−6m=0.380. size 12{"sin"θ rSub { size 8{V} } = { {3 "." "80" times "10" rSup { size 8{ - 7} } `m} over {1 "." "00" times "10" rSup {
size 8{ - 6} } `m} } =0 "." "380"} {}$
Thus the angle $θVθV size 12{θ rSub { size 8{V} } } {}$ is
10.17 $θV=sin−10.380=22.33º.θV=sin−10.380=22.33º. size 12{θ rSub { size 8{V} } ="sin" rSup { size 8{ - 1} } 0 "." "380"="22" "." 3°} {}$
10.18 $sinθR=7.60×10−7m1.00×10−6m.sinθR=7.60×10−7m1.00×10−6m. size 12{"sin"θ rSub { size 8{R} } = { {7 "." "60" times "10" rSup { size 8{ - 7} } `m} over {1 "." "00" times "10" rSup { size 8{ - 6} }
`m} } } {}$
Thus the angle $θRθR size 12{θ rSub { size 8{R} } } {}$ is
10.19 $θR=sin−10.760=49.46º.θR=sin−10.760=49.46º. size 12{θ rSub { size 8{R} } ="sin" rSup { size 8{ - 1} } 0 "." "760"="49" "." 5°} {}$
Notice that in both equations, we reported the results of these intermediate calculations to four significant figures to use with the calculation in part (b).
Solution for (b)
The distances on the screen are labeled $yVyV size 12{y rSub { size 8{V} } } {}$ and $yRyR size 12{y rSub { size 8{R} } } {}$ in Figure 10.20. Noting that $tanθ=y/x,tanθ=y/x, size 12{"tan"θ=y/x} {}$
we can solve for $yVyV size 12{y rSub { size 8{V} } } {}$ and $yR.yR. size 12{y rSub { size 8{R} } } {}$ That is
10.20 $y V = x tan θ V = ( 2.00 m ) ( tan 22.33º ) = 0.815 m y V = x tan θ V = ( 2.00 m ) ( tan 22.33º ) = 0.815 m size 12{y rSub { size 8{V} } =x"tan"θ rSub { size 8{V} } = \( 2 "." "00"`m \) \(
"tan""22" "." 3° \) =0 "." "822"`m} {}$
10.21 $yR=xtanθR=(2.00 m)(tan 49.46º)=2.338 m.yR=xtanθR=(2.00 m)(tan 49.46º)=2.338 m. size 12{y rSub { size 8{R} } =x"tan"θ rSub { size 8{R} } = \( 2 "." "00"`m \) \( "tan""49" "." 5° \) =2 "." "339"
`m} {}$
The distance between them is therefore
10.22 $yR−yV=1.52 m.yR−yV=1.52 m. size 12{y rSub { size 8{R} } - y rSub { size 8{V} } =1 "." 52`m} {}$
The large distance between the red and violet ends of the rainbow produced from the white light indicates the potential this diffraction grating has as a spectroscopic tool. The more it can spread
out the wavelengths—greater dispersion—the more detail can be seen in a spectrum. This depends on the quality of the diffraction grating—it must be very precisely made in addition to having closely
spaced lines.
|
{"url":"https://texasgateway.org/resource/104-multiple-slit-diffraction?book=79106&binder_id=78846","timestamp":"2024-11-07T22:59:27Z","content_type":"text/html","content_length":"89738","record_id":"<urn:uuid:66f96994-5c00-4185-b079-f002832d366c>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00882.warc.gz"}
|
Action-angle coordinates
Action-angle coordinates¶
galpy can calculate actions and angles for a large variety of potentials (any time-independent potential in principle). These are implemented in a separate module galpy.actionAngle. This module
contains classes for computing both the forward (x, v) –> (J, O, a) and the reverse (J, a) –> (x, v, O) transformations. It is also possible to compute most forward transformations as methods of the
Orbit class, which is typically the simplest way to compute actions, frequencies, and angles for a given orbit.
If you want to quickly and easily compute actions, angles, or frequencies using the Staeckel approximation, using the Orbit interface as described in this section is recommended. Especially if you
are starting from observed coordinates, as Orbit instances can easily be initialized using these.
Forward action-angle transformations can be calculated for the following potentials/approximations:
• Isochrone potential
• Harmonic potential
• Spherical potentials
• Axisymmetric potentials with the adiabatic approximation
• Axisymmetric potentials with the Staeckel approximation
• Static potentials with a general orbit-integration-based technique
• One-dimensional potentials
There are classes corresponding to these different potentials/approximations and actions, frequencies, and angles can typically be calculated using these three methods:
• __call__: returns the actions
• actionsFreqs: returns the actions and the frequencies
• actionsFreqsAngles: returns the actions, frequencies, and angles
These are not all implemented for each of the cases above yet. The adiabatic and Staeckel approximation have also been implemented in C and using grid-based interpolation, for extremely fast
action-angle calculations (see below). Forward transformations are discussed in this section.
Reverse action-angle transformations can be calculated for the following potentials/approximations:
• Isochrone potential
• Harmonic potential
• 1D potentials
• Axisymmetric potentials using the TorusMapper.
Reverse transformations are discussed in this section.
We start by discussing the forward and reverse transformations for the two specific potentials: the isochrone and the harmonic potentals.
Action-angle coordinates for the isochrone/harmonic potentials¶
The harmonic and isochrone potentials are the only potentials for which all of the actions, frequencies, and angles can be calculated analytically and for which the action-angle transformation can be
straightforwardly reversed. For the isochrone potential, we can do this in galpy by doing
>>> from galpy.potential import IsochronePotential
>>> from galpy.actionAngle import actionAngleIsochrone
>>> ip= IsochronePotential(b=1.,normalize=1.)
>>> aAI= actionAngleIsochrone(ip=ip)
aAI is now an instance that can be used to calculate action-angle variables for the specific isochrone potential ip. Calling this instance returns \((J_R,L_Z,J_Z)\)
>>> aAI(1.,0.1,1.1,0.1,0.) #inputs R,vR,vT,z,vz
# (array([ 0.00713759]), array([ 1.1]), array([ 0.00553155]))
or for a more eccentric orbit
>>> aAI(1.,0.5,1.3,0.2,0.1)
# (array([ 0.13769498]), array([ 1.3]), array([ 0.02574507]))
Note that we can also specify phi, but this is not necessary
>>> aAI(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.13769498]), array([ 1.3]), array([ 0.02574507]))
We can likewise calculate the frequencies as well
>>> aAI.actionsFreqs(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.13769498]),
# array([ 1.3]),
# array([ 0.02574507]),
# array([ 1.29136096]),
# array([ 0.79093738]),
# array([ 0.79093738]))
The output is \((J_R,L_Z,J_Z,\Omega_R,\Omega_\phi,\Omega_Z)\). For any spherical potential, \(\Omega_\phi = \mathrm{sgn}(L_Z)\Omega_Z\), such that the last two frequencies are the same.
We obtain the angles as well by calling
>>> aAI.actionsFreqsAngles(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.13769498]),
# array([ 1.3]),
# array([ 0.02574507]),
# array([ 1.29136096]),
# array([ 0.79093738]),
# array([ 0.79093738]),
# array([ 0.57101518]),
# array([ 5.96238847]),
# array([ 1.24999949]))
The output here is \((J_R,L_Z,J_Z,\Omega_R,\Omega_\phi,\Omega_Z,\theta_R,\theta_\phi,\theta_Z)\).
To check that these are good action-angle variables, we can calculate them along an orbit
>>> from galpy.orbit import Orbit
>>> o= Orbit([1.,0.5,1.3,0.2,0.1,0.])
>>> ts= numpy.linspace(0.,100.,1001)
>>> o.integrate(ts,ip)
>>> jfa= aAI.actionsFreqsAngles(o.R(ts),o.vR(ts),o.vT(ts),o.z(ts),o.vz(ts),o.phi(ts))
which works because we can provide arrays for the R etc. inputs.
We can then check that the actions are constant over the orbit
>>> plot(ts,numpy.log10(numpy.fabs((jfa[0]-numpy.mean(jfa[0])))))
>>> plot(ts,numpy.log10(numpy.fabs((jfa[1]-numpy.mean(jfa[1])))))
>>> plot(ts,numpy.log10(numpy.fabs((jfa[2]-numpy.mean(jfa[2])))))
which gives
The actions are all conserved. The angles increase linearly with time
>>> plot(ts,jfa[6],'b.')
>>> plot(ts,jfa[7],'g.')
>>> plot(ts,jfa[8],'r.')
The reverse transformation is implemented as actionAngleIsochroneInverse. For example, for the same isochrone potential as above, we set up the inverse transformation as
>>> from galpy.actionAngle import actionAngleIsochroneInverse
>>> aAII= actionAngleIsochroneInverse(ip=ip)
We can then reverse the transformation as follows:
>>> jr,jp,jz,oR,op,oz,ar,ap,az= aAI.actionsFreqsAngles(1.,0.5,1.3,0.2,0.1,0.)
>>> print(aAII(jr,jp,jz,ar,ap,az))
# (array([1.]), array([0.5]), array([1.3]), array([0.2]), array([0.1]), array([0.]))
We can also do this for an entire orbit and compare to the orbit
>>> ar0,ap0,az0= 0.,0.,0.
>>> ts= numpy.linspace(0.,10.,1001)
>>> ars,apz,azs= ar0+oR*ts, ap0+op*ts, az0+oz*ts
>>> xv= aAII(jr,jp,jz,ars,apz,azs)
>>> plot(xv[0],xv[3])
>>> o= Orbit([xv[0][0],xv[1][0],xv[2][0],xv[3][0],xv[4][0],xv[5][0]])
>>> o.integrate(ts,ip)
>>> o.plot(gcf=True)
which gives
We see that the action-angle calculated orbit and the numerically-integrated orbit are right on top of each other.
For the 1D harmonic oscillator, do, e.g.,
>>> from galpy.actionAngle import actionAngleHarmonic
>>> aAH= actionAngleHarmonic(omega=1.)
>>> print(aAH(1.,0.2))
# 0.52
>>> print(aAH.actionsFreqs(1.,0.2))
# (0.52, 1.0)
>>> print(aAH.actionsFreqsAngles(1.,0.2))
# (0.52, 1.0, 1.373400766945016)
We can also reverse this using actionAngleHarmonicInverse
>>> from galpy.actionAngle import actionAngleHarmonicInverse
>>> aAHI= actionAngleHarmonicInverse(omega=1.)
>>> J,O,a= aAH.actionsFreqsAngles(1.,0.2)
>>> print(aAHI(J,a))
# (1.0, 0.19999999999999996)
Forward action-angle transformations (x, v) –> (J, O, a)¶
Action-angle coordinates for spherical potentials¶
Action-angle coordinates for any spherical potential can be calculated using a few simple numerical integrations. These are implemented in galpy in the actionAngleSpherical module. For example, we
can do
>>> from galpy.potential import LogarithmicHaloPotential
>>> lp= LogarithmicHaloPotential(normalize=1.)
>>> from galpy.actionAngle import actionAngleSpherical
>>> aAS= actionAngleSpherical(pot=lp)
For the same eccentric orbit as above we find
>>> aAS(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.22022112]), array([ 1.3]), array([ 0.02574507]))
>>> aAS.actionsFreqs(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.22022112]),
# array([ 1.3]),
# array([ 0.02574507]),
# array([ 0.87630459]),
# array([ 0.60872881]),
# array([ 0.60872881]))
>>> aAS.actionsFreqsAngles(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.22022112]),
# array([ 1.3]),
# array([ 0.02574507]),
# array([ 0.87630459]),
# array([ 0.60872881]),
# array([ 0.60872881]),
# array([ 0.40443857]),
# array([ 5.85965048]),
# array([ 1.1472615]))
We can again check that the actions are conserved along the orbit and that the angles increase linearly with time:
>>> o.integrate(ts,lp)
>>> jfa= aAS.actionsFreqsAngles(o.R(ts),o.vR(ts),o.vT(ts),o.z(ts),o.vz(ts),o.phi(ts),fixed_quad=True)
where we use fixed_quad=True for a faster evaluation of the required one-dimensional integrals using Gaussian quadrature. We then plot the action fluctuations
>>> plot(ts,numpy.log10(numpy.fabs((jfa[0]-numpy.mean(jfa[0])))))
>>> plot(ts,numpy.log10(numpy.fabs((jfa[1]-numpy.mean(jfa[1])))))
>>> plot(ts,numpy.log10(numpy.fabs((jfa[2]-numpy.mean(jfa[2])))))
which gives
showing that the actions are all conserved. The angles again increase linearly with time
>>> plot(ts,jfa[6],'b.')
>>> plot(ts,jfa[7],'g.')
>>> plot(ts,jfa[8],'r.')
We can check the spherical action-angle calculations against the analytical calculations for the isochrone potential. Starting again from the isochrone potential used in the previous section
>>> ip= IsochronePotential(b=1.,normalize=1.)
>>> aAI= actionAngleIsochrone(ip=ip)
>>> aAS= actionAngleSpherical(pot=ip)
we can compare the actions, frequencies, and angles computed using both
>>> aAI.actionsFreqsAngles(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.13769498]),
# array([ 1.3]),
# array([ 0.02574507]),
# array([ 1.29136096]),
# array([ 0.79093738]),
# array([ 0.79093738]),
# array([ 0.57101518]),
# array([ 5.96238847]),
# array([ 1.24999949]))
>>> aAS.actionsFreqsAngles(1.,0.5,1.3,0.2,0.1,0.)
# (array([ 0.13769498]),
# array([ 1.3]),
# array([ 0.02574507]),
# array([ 1.29136096]),
# array([ 0.79093738]),
# array([ 0.79093738]),
# array([ 0.57101518]),
# array([ 5.96238838]),
# array([ 1.2499994]))
or more explicitly comparing the two
>>> [r-s for r,s in zip(aAI.actionsFreqsAngles(1.,0.5,1.3,0.2,0.1,0.),aAS.actionsFreqsAngles(1.,0.5,1.3,0.2,0.1,0.))]
# [array([ 6.66133815e-16]),
# array([ 0.]),
# array([ 0.]),
# array([ -4.53851845e-10]),
# array([ 4.74775219e-10]),
# array([ 4.74775219e-10]),
# array([ -1.65965242e-10]),
# array([ 9.04759645e-08]),
# array([ 9.04759649e-08])]
Action-angle coordinates using the adiabatic approximation¶
For non-spherical, axisymmetric potentials galpy contains multiple methods for calculating approximate action–angle coordinates. The simplest of those is the adiabatic approximation, which works well
for disk orbits that do not go too far from the plane, as it assumes that the vertical motion is decoupled from that in the plane (e.g., 2010MNRAS.401.2318B).
Setup is similar as for other actionAngle objects
>>> from galpy.potential import MWPotential2014
>>> from galpy.actionAngle import actionAngleAdiabatic
>>> aAA= actionAngleAdiabatic(pot=MWPotential2014)
and evaluation then proceeds similarly as before
>>> aAA(1.,0.1,1.1,0.,0.05)
# (0.01351896260559274, 1.1, 0.0004690133479435352)
We can again check that the actions are conserved along the orbit
>>> from galpy.orbit import Orbit
>>> ts=numpy.linspace(0.,100.,1001)
>>> o= Orbit([1.,0.1,1.1,0.,0.05])
>>> o.integrate(ts,MWPotential2014)
>>> js= aAA(o.R(ts),o.vR(ts),o.vT(ts),o.z(ts),o.vz(ts))
This takes a while. The adiabatic approximation is also implemented in C, which leads to great speed-ups. Here is how to use it
>>> timeit(aAA(1.,0.1,1.1,0.,0.05))
# 10 loops, best of 3: 73.7 ms per loop
>>> aAA= actionAngleAdiabatic(pot=MWPotential2014,c=True)
>>> timeit(aAA(1.,0.1,1.1,0.,0.05))
# 1000 loops, best of 3: 1.3 ms per loop
or about a 50 times speed-up. For arrays the speed-up is even more impressive
>>> s= numpy.ones(100)
>>> timeit(aAA(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 10 loops, best of 3: 37.8 ms per loop
>>> aAA= actionAngleAdiabatic(pot=MWPotential2014) #back to no C
>>> timeit(aAA(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 1 loops, best of 3: 7.71 s per loop
or a speed-up of 200! Back to the previous example, you can run it with c=True to speed up the computation
>>> aAA= actionAngleAdiabatic(pot=MWPotential2014,c=True)
>>> js= aAA(o.R(ts),o.vR(ts),o.vT(ts),o.z(ts),o.vz(ts))
We can plot the radial- and vertical-action fluctuation as a function of time
>>> plot(ts,numpy.log10(numpy.fabs((js[0]-numpy.mean(js[0]))/numpy.mean(js[0]))))
>>> plot(ts,numpy.log10(numpy.fabs((js[2]-numpy.mean(js[2]))/numpy.mean(js[2]))))
which gives
The radial action is conserved to about half a percent, the vertical action to two percent.
Another way to speed up the calculation of actions using the adiabatic approximation is to tabulate the actions on a grid in (approximate) integrals of the motion and evaluating new actions by
interpolating on this grid. How this is done in practice is described in detail in the galpy paper. To setup this grid-based interpolation method, which is contained in actionAngleAdiabaticGrid, do
>>> from galpy.actionAngle import actionAngleAdiabaticGrid
>>> aAG= actionAngleAdiabaticGrid(pot=MWPotential2014,nR=31,nEz=31,nEr=51,nLz=51,c=True)
where c=True specifies that we use the C implementation of actionAngleAdiabatic for speed. We can now evaluate in the same was as before, for example
>>> aAA(1.,0.1,1.1,0.,0.05), aAG(1.,0.1,1.1,0.,0.05)
# ((array([ 0.01352523]), array([ 1.1]), array([ 0.00046909])),
# (0.013527010324238781, 1.1, 0.00047747359874375148))
which agree very well. To look at the timings, we first switch back to not using C and then list all of the relevant timings:
>>> aAA= actionAngleAdiabatic(pot=MWPotential2014,c=False)
# Not using C, direct calculation
>>> timeit(aAA(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 1 loops, best of 3: 9.05 s per loop
>>> aAA= actionAngleAdiabatic(pot=MWPotential2014,c=True)
# Using C, direct calculation
>>> timeit(aAA(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 10 loops, best of 3: 39.7 ms per loop
# Grid-based calculation
>>> timeit(aAG(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 1000 loops, best of 3: 1.09 ms per loop
Thus, in this example (and more generally) the grid-based calculation is significantly faster than even the direct implementation in C. The overall speed up between the direct Python version and the
grid-based version is larger than 8,000; the speed up between the direct C version and the grid-based version is 36. For larger arrays of input phase-space positions, the latter speed up can increase
to 150. For simpler, fully analytical potentials the speed up will be slightly less, but for MWPotential2014 and other more complicated potentials (such as those involving a double-exponential disk),
the overhead of setting up the grid is worth it when evaluating more than a few thousand actions.
The adiabatic approximation works well for orbits that stay close to the plane. The orbit we have been considering so far only reaches a height two percent of \(R_0\), or about 150 pc for \(R_0 = 8\)
>>> o.zmax()*8.
# 0.17903686455491979
For orbits that reach distances of a kpc and more from the plane, the adiabatic approximation does not work as well. For example,
>>> o= Orbit([1.,0.1,1.1,0.,0.25])
>>> o.integrate(ts,MWPotential2014)
>>> o.zmax()*8.
# 1.3506059038621048
and we can again calculate the actions along the orbit
>>> js= aAA(o.R(ts),o.vR(ts),o.vT(ts),o.z(ts),o.vz(ts))
>>> plot(ts,numpy.log10(numpy.fabs((js[0]-numpy.mean(js[0]))/numpy.mean(js[0]))))
>>> plot(ts,numpy.log10(numpy.fabs((js[2]-numpy.mean(js[2]))/numpy.mean(js[2]))))
which gives
The radial action is now only conserved to about ten percent and the vertical action to approximately five percent.
Frequencies and angles using the adiabatic approximation are not implemented at this time.
Action-angle coordinates using the Staeckel approximation¶
A better approximation than the adiabatic one is to locally approximate the potential as a Staeckel potential, for which actions, frequencies, and angles can be calculated through numerical
integration. galpy contains an implementation of the algorithm of Binney (2012; 2012MNRAS.426.1324B), which accomplishes the Staeckel approximation for disk-like (i.e., oblate) potentials without
explicitly fitting a Staeckel potential. For all intents and purposes the adiabatic approximation is made obsolete by this new method, which is as fast and more precise. The only advantage of the
adiabatic approximation over the Staeckel approximation is that the Staeckel approximation requires the user to specify a focal length \(\Delta\) to be used in the Staeckel approximation. However,
this focal length can be easily estimated from the second derivatives of the potential (see Sanders 2012; 2012MNRAS.426..128S).
Starting from the second orbit example in the adiabatic section above, we first estimate a good focal length of the MWPotential2014 to use in the Staeckel approximation. We do this by averaging
(through the median) estimates at positions around the orbit (which we integrated in the example above)
>>> from galpy.actionAngle import estimateDeltaStaeckel
>>> estimateDeltaStaeckel(MWPotential2014,o.R(ts),o.z(ts))
# 0.40272708556203662
We will use \(\Delta = 0.4\) in what follows. We set up the actionAngleStaeckel object
>>> from galpy.actionAngle import actionAngleStaeckel
>>> aAS= actionAngleStaeckel(pot=MWPotential2014,delta=0.4,c=False) #c=True is the default
and calculate the actions
>>> aAS(o.R(),o.vR(),o.vT(),o.z(),o.vz())
# (0.019212848866725911, 1.1000000000000001, 0.015274597971510892)
The adiabatic approximation from above gives
>>> aAA(o.R(),o.vR(),o.vT(),o.z(),o.vz())
# (array([ 0.01686478]), array([ 1.1]), array([ 0.01590001]))
The actionAngleStaeckel calculations are sped up in two ways. First, the action integrals can be calculated using Gaussian quadrature by specifying fixed_quad=True
>>> aAS(o.R(),o.vR(),o.vT(),o.z(),o.vz(),fixed_quad=True)
# (0.01922167296633687, 1.1000000000000001, 0.015276825017286706)
which in itself leads to a ten times speed up
>>> timeit(aAS(o.R(),o.vR(),o.vT(),o.z(),o.vz(),fixed_quad=False))
# 10 loops, best of 3: 129 ms per loop
>>> timeit(aAS(o.R(),o.vR(),o.vT(),o.z(),o.vz(),fixed_quad=True))
# 100 loops, best of 3: 10.3 ms per loop
Second, the actionAngleStaeckel calculations have also been implemented in C, which leads to even greater speed-ups, especially for arrays
>>> aAS= actionAngleStaeckel(pot=MWPotential2014,delta=0.4,c=True)
>>> s= numpy.ones(100)
>>> timeit(aAS(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 10 loops, best of 3: 35.1 ms per loop
>>> aAS= actionAngleStaeckel(pot=MWPotential2014,delta=0.4,c=False) #back to no C
>>> timeit(aAS(1.*s,0.1*s,1.1*s,0.*s,0.05*s,fixed_quad=True))
# 1 loops, best of 3: 496 ms per loop
or a fifteen times speed up. The speed up is not that large because the bulge model in MWPotential2014 requires expensive special functions to be evaluated. Computations could be sped up ten times
more when using a simpler bulge model.
Similar to actionAngleAdiabaticGrid, we can also tabulate the actions on a grid of (approximate) integrals of the motion and interpolate over this look-up table when evaluating new actions. The
details of how this look-up table is setup and used are again fully explained in the galpy paper. To use this grid-based Staeckel approximation, contained in actionAngleStaeckelGrid, do
>>> from galpy.actionAngle import actionAngleStaeckelGrid
>>> aASG= actionAngleStaeckelGrid(pot=MWPotential2014,delta=0.4,nE=51,npsi=51,nLz=61,c=True)
where c=True makes sure that we use the C implementation of the Staeckel method to calculate the grid. Because this is a fully three-dimensional grid, setting up the grid takes longer than it does
for the adiabatic method (which only uses two two-dimensional grids). We can then evaluate actions as before
>>> aAS(o.R(),o.vR(),o.vT(),o.z(),o.vz()), aASG(o.R(),o.vR(),o.vT(),o.z(),o.vz())
# ((0.019212848866725911, 1.1000000000000001, 0.015274597971510892),
# (0.019221119033345408, 1.1000000000000001, 0.015022528662310393))
These actions agree very well. We can compare the timings of these methods as above
>>> timeit(aAS(1.*s,0.1*s,1.1*s,0.*s,0.05*s,fixed_quad=True))
# 1 loops, best of 3: 576 ms per loop # Not using C, direct calculation
>>> aAS= actionAngleStaeckel(pot=MWPotential2014,delta=0.4,c=True)
>>> timeit(aAS(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 100 loops, best of 3: 17.8 ms per loop # Using C, direct calculation
>>> timeit(aASG(1.*s,0.1*s,1.1*s,0.*s,0.05*s))
# 100 loops, best of 3: 3.45 ms per loop # Grid-based calculation
This demonstrates that the grid-based interpolation again leeds to a significant speed up, even over the C implementation of the direct calculation. This speed up becomes more significant for larger
array input, although it saturates at about 25 times (at least for MWPotential2014).
We can now go back to checking that the actions are conserved along the orbit (going back to the c=False version of actionAngleStaeckel)
>>> aAS= actionAngleStaeckel(pot=MWPotential2014,delta=0.4,c=False)
>>> js= aAS(o.R(ts),o.vR(ts),o.vT(ts),o.z(ts),o.vz(ts),fixed_quad=True)
>>> plot(ts,numpy.log10(numpy.fabs((js[0]-numpy.mean(js[0]))/numpy.mean(js[0]))))
>>> plot(ts,numpy.log10(numpy.fabs((js[2]-numpy.mean(js[2]))/numpy.mean(js[2]))))
which gives
The radial action is now conserved to better than a percent and the vertical action to only a fraction of a percent. Clearly, this is much better than the five to ten percent errors found for the
adiabatic approximation above.
For the Staeckel approximation we can also calculate frequencies and angles through the actionsFreqs and actionsFreqsAngles methods.
Frequencies and angles using the Staeckel approximation are only implemented in C. So use c=True in the setup of the actionAngleStaeckel object.
Angles using the Staeckel approximation in galpy are such that (a) the radial angle starts at zero at pericenter and increases then going toward apocenter; (b) the vertical angle starts at zero at z=
0 and increases toward positive zmax. The latter is a different convention from that in Binney (2012), but is consistent with that in actionAngleIsochrone and actionAngleSpherical.
>>> aAS= actionAngleStaeckel(pot=MWPotential2014,delta=0.4,c=True)
>>> o= Orbit([1.,0.1,1.1,0.,0.25,0.]) #need to specify phi for angles
>>> aAS.actionsFreqsAngles(o.R(),o.vR(),o.vT(),o.z(),o.vz(),o.phi())
# (array([ 0.01922167]),
# array([ 1.1]),
# array([ 0.01527683]),
# array([ 1.11317796]),
# array([ 0.82538032]),
# array([ 1.34126138]),
# array([ 0.37758087]),
# array([ 6.17833493]),
# array([ 6.13368239]))
and we can check that the angles increase linearly along the orbit
>>> o.integrate(ts,MWPotential2014)
>>> jfa= aAS.actionsFreqsAngles(o.R(ts),o.vR(ts),o.vT(ts),o.z(ts),o.vz(ts),o.phi(ts))
>>> plot(ts,jfa[6],'b.')
>>> plot(ts,jfa[7],'g.')
>>> plot(ts,jfa[8],'r.')
>>> plot(jfa[6],jfa[8],'b.')
Action-angle coordinates using an orbit-integration-based approximation¶
The adiabatic and Staeckel approximations used above are good for stars on close-to-circular orbits, but they break down for more eccentric orbits (specifically, orbits for which the radial and/or
vertical action is of a similar magnitude as the angular momentum). This is because the approximations made to the potential in these methods (that it is separable in R and z for the adiabatic
approximation and that it is close to a Staeckel potential for the Staeckel approximation) break down for such orbits. Unfortunately, these methods cannot be refined to provide better approximations
for eccentric orbits.
galpy contains a new method for calculating actions, frequencies, and angles that is completely general for any static potential. It can calculate the actions to any desired precision for any orbit
in such potentials. The method works by employing an auxiliary isochrone potential and calculates action-angle variables by arithmetic operations on the actions and angles calculated in the auxiliary
potential along an orbit (integrated in the true potential). Full details can be found in Appendix A of Bovy (2014).
We setup this method for a logarithmic potential as follows
>>> from galpy.actionAngle import actionAngleIsochroneApprox
>>> from galpy.potential import LogarithmicHaloPotential
>>> lp= LogarithmicHaloPotential(normalize=1.,q=0.9)
>>> aAIA= actionAngleIsochroneApprox(pot=lp,b=0.8)
b=0.8 here sets the scale parameter of the auxiliary isochrone potential; this potential can also be specified as an IsochronePotential instance through ip=). We can now calculate the actions for an
orbit similar to that of the GD-1 stream
>>> obs= numpy.array([1.56148083,0.35081535,-1.15481504,0.88719443,-0.47713334,0.12019596]) #orbit similar to GD-1
>>> aAIA(*obs)
# (array([ 0.16605011]), array([-1.80322155]), array([ 0.50704439]))
An essential requirement of this method is that the angles calculated in the auxiliary potential go through the full range \([0,2\pi]\). If this is not the case, galpy will raise a warning
>>> aAIA= actionAngleIsochroneApprox(pot=lp,b=10.8)
>>> aAIA(*obs)
# galpyWarning: Full radial angle range not covered for at least one object; actions are likely not reliable
# (array([ 0.08985167]), array([-1.80322155]), array([ 0.50849276]))
Therefore, some care should be taken to choosing a good auxiliary potential. galpy contains a method to estimate a decent scale parameter for the auxiliary scale parameter, which works similar to
estimateDeltaStaeckel above except that it also gives a minimum and maximum b if multiple R and z are given
>>> from galpy.actionAngle import estimateBIsochrone
>>> from galpy.orbit import Orbit
>>> o= Orbit(obs)
>>> ts= numpy.linspace(0.,100.,1001)
>>> o.integrate(ts,lp)
>>> estimateBIsochrone(lp,o.R(ts),o.z(ts))
# (0.78065062339131952, 1.2265541473461612, 1.4899326335155412) #bmin,bmedian,bmax over the orbit
Experience shows that a scale parameter somewhere in the range returned by this function makes sure that the angles go through the full \([0,2\pi]\) range. However, even if the angles go through the
full range, the closer the angles increase to linear, the better the converenge of the algorithm is (and especially, the more accurate the calculation of the frequencies and angles is, see below).
For example, for the scale parameter at the upper and of the range
>>> aAIA= actionAngleIsochroneApprox(pot=lp,b=1.5)
>>> aAIA(*obs)
# (array([ 0.01120145]), array([-1.80322155]), array([ 0.50788893]))
which does not agree with the previous calculation. We can inspect how the angles increase and how the actions converge by using the aAIA.plot function. For example, we can plot the radial versus the
vertical angle in the auxiliary potential
>>> aAIA.plot(*obs,type='araz')
which gives
and this clearly shows that the angles increase very non-linearly, because the auxiliary isochrone potential used is too far from the real potential. This causes the actions to converge only very
slowly. For example, for the radial action we can plot the converge as a function of integration time
>>> aAIA.plot(*obs,type='jr')
which gives
This Figure clearly shows that the radial action has not converged yet. We need to integrate much longer in this auxiliary potential to obtain convergence and because the angles increase so
non-linearly, we also need to integrate the orbit much more finely:
>>> aAIA= actionAngleIsochroneApprox(pot=lp,b=1.5,tintJ=1000,ntintJ=800000)
>>> aAIA(*obs)
# (array([ 0.01711635]), array([-1.80322155]), array([ 0.51008058]))
>>> aAIA.plot(*obs,type='jr')
which shows slow convergence
Finding a better auxiliary potential makes convergence much faster and also allows the frequencies and the angles to be calculated by removing the small wiggles in the auxiliary angles vs. time (in
the angle plot above, the wiggles are much larger, such that removing them is hard). The auxiliary potential used above had b=0.8, which shows very quick converenge and good behavior of the angles
>>> aAIA= actionAngleIsochroneApprox(pot=lp,b=0.8)
>>> aAIA.plot(*obs,type='jr')
>>> aAIA.plot(*obs,type='araz')
We can remove the periodic behavior from the angles, which clearly shows that they increase close-to-linear with time
>>> aAIA.plot(*obs,type='araz',deperiod=True)
We can then calculate the frequencies and the angles for this orbit as
>>> aAIA.actionsFreqsAngles(*obs)
# (array([ 0.16392384]),
# array([-1.80322155]),
# array([ 0.50999882]),
# array([ 0.55808933]),
# array([-0.38475753]),
# array([ 0.42199713]),
# array([ 0.18739688]),
# array([ 0.3131815]),
# array([ 2.18425661]))
This function takes as an argument maxn= the maximum n for which to remove sinusoidal wiggles. So we can raise this, for example to 4 from 3
>>> aAIA.actionsFreqsAngles(*obs,maxn=4)
# (array([ 0.16392384]),
# array([-1.80322155]),
# array([ 0.50999882]),
# array([ 0.55808776]),
# array([-0.38475733]),
# array([ 0.4219968]),
# array([ 0.18732009]),
# array([ 0.31318534]),
# array([ 2.18421296]))
Clearly, there is very little change, as most of the wiggles are of low n.
This technique also works for triaxial potentials, but using those requires the code to also use the azimuthal angle variable in the auxiliary potential (this is unnecessary in axisymmetric
potentials as the z component of the angular momentum is conserved). We can calculate actions for triaxial potentials by specifying that nonaxi=True:
>>> aAIA(*obs,nonaxi=True)
# (array([ 0.16605011]), array([-1.80322155]), array([ 0.50704439]))
Action-angle coordinates for one dimensional potentials¶
As for spherical potentials, actions, frequencies, and angles can be computed for any one-dimensional potential using a few simple numerical integrals. In the context of galactic dynamics, this is,
for example, useful for studying the dynamical in the vertical direction perpendicular to the main plane of a disk.
The action-angle coordinates for one-dimensional potentials are computed using the actionAngleVertical class. This class can be initialized with a linearPotential instance or with a list of such
instances (verticalPotential instances which represent the vertical direction of 3D potentials are examples of a linearPotential). As an example, we’ll consider orbits in the one-dimensional version
of the MWPotential2014 model
>>> from galpy.potential import MWPotential2014, toVerticalPotential
>>> from galpy.actionAngle import actionAngleVertical
>>> pot= toVerticalPotential(MWPotential2014,1.) # vertical potential at R=1.
>>> aAV= actionAngleVertical(pot=pot)
Now we can compute the actions, frequencies, and angles using the usual set of methods:
>>> aAV.actionsFreqsAngles(1.,0.1)
# (array([0.40513006]), array([0.73363198]), array([1.40229361]))
To check the accuracy of the 1D action-angle coordinates, we can compute them along an orbit
>>> from galpy.orbit import Orbit
>>> o= Orbit([1.,0.1])
>>> ts= numpy.linspace(0.,10.,1001)
>>> o.integrate(ts,pot)
>>> plot(ts,aAV(o.x(ts),o.vx(ts)))
which gives
The angles increase linearly with time
>>> plot(ts,aAV.actionsFreqsAngles(o.x(ts),o.vx(ts))[2],'.')
which gives
Reverse action-angle transformations (J, a) –> (x, v, O)¶
Reverse action-angle transformations for one-dimensional potentials¶
The actionAngleVerticalInverse class also allows to compute the phase-space coordinates for given actions and angles in one-dimensional potentials. This uses a robust root-finding/
Fourier-transformation implementation of the torus mapping algorithm that can optionally use a point-transformation to improve the quality of the transformation (Bovy in prep. who knows). As an
example, we use a simple IsothermalDiskPotential:
>>> from galpy.potential import IsothermalDiskPotential
>>> isopot= IsothermalDiskPotential(amp=1.,sigma=0.5)
We then set up the actionAngleVerticalInverse object, e.g., to only reverse the transformation for three values of the energy:
>>> from galpy.actionAngle import actionAngleVerticalInverse
>>> aA1Dinv= actionAngleVerticalInverse(pot=isopot,nta=4*128,
Let’s then compute an orbit using the reverse transformation. First, we obtain the frequency, e.g., for the E=1. orbit (note that the input to this method is action, so we use aA1Dinv.J to convert
energy to action):
>>> O= aA1Dinv.Freqs(aA1Dinv.J(1.))
Then we integrate an orbit
>>> a0= 0.1
>>> ts= numpy.linspace(0.,10.,1001)
>>> angles= a0+O*ts
>>> xv= aA1Dinv(aA1Dinv.J(1.),angles)
>>> plot(xv[0],xv[1])
>>> from galpy.orbit import Orbit
>>> o= Orbit([xv[0][0],xv[1][0]])
>>> o.integrate(ts,isopot)
>>> o.plot(gcf=True)
which gives
To be able to reverse the action-angle transformation for many orbits, we can set up the instance so it supports interpolation between the tori at which the reverse transformation is computed. This
can be done as follows
>>> aA1Dinv= actionAngleVerticalInverse(pot=isopot,nta=2*128,
where we have also used a point transformation. Now we can compute the reverse transformation at any orbit within the energy range, e.g.,
>>> a0= 0.1
>>> ts= numpy.linspace(0.,10.,1001)
>>> angles= a0+O*ts
>>> xv= aA1Dinv(aA1Dinv.J(3.706),angles) # between grid points
>>> plot(xv[0],xv[1])
>>> from galpy.orbit import Orbit
>>> o= Orbit([xv[0][0],xv[1][0]])
>>> o.integrate(ts,isopot)
>>> o.plot(gcf=True)
which gives
Energy conservation is very good
>>> from galpy.potential import evaluatelinearPotentials
>>> plot(ts,xv[1]**2./2.+evaluatelinearPotentials(isopot,xv[0]))
Action-angle coordinates using the TorusMapper code¶
galpy also contains some support for computing the reverse action-angle transformation for general axisymmetric potentials using an interface to the TorusMapper code. Currently, this is limited to
axisymmetric potentials, because the TorusMapper code is limited to such potentials.
The basic use of this part of galpy is to compute an orbit \((R,v_R,v_T,z,v_z,\phi)\) for a given torus, specified by three actions \((J_R,L_Z,J_Z)\) and as many angles along a torus as you want.
First we set up an actionAngleTorus object
>>> from galpy.actionAngle import actionAngleTorus
>>> from galpy.potential import MWPotential2014
>>> aAT= actionAngleTorus(pot=MWPotential2014)
To compute an orbit, we first need to compute the frequencies, which we do as follows
>>> jr,lz,jz= 0.1,1.1,0.2
>>> Om= aAT.Freqs(jr,lz,jz)
This set consists of \((\Omega_R,\Omega_\phi,\Omega_Z,\mathrm{TM err})\), where the last entry is the exit code of the TorusMapper code (will be printed as a warning when it is non-zero). Then we
compute a set of angles that fall along an orbit as \(\mathbf{\theta}(t) = \mathbf{\theta}_0+\mathbf{\Omega}\,t\) for a set of times \(t\)
>>> times= numpy.linspace(0.,100.,10001)
>>> init_angle= numpy.array([1.,2.,3.])
>>> angles= numpy.tile(init_angle,(len(times),1))+Om[:3]*numpy.tile(times,(3,1)).T
Then we can compute the orbit by transforming the orbit in action-angle coordinates to configuration space as follows
>>> RvR,_,_,_,_= aAT.xvFreqs(jr,lz,jz,angles[:,0],angles[:,1],angles[:,2])
Note that the frequency is also always computed and returned by this method, because it can be obtained at zero cost. The RvR array has shape (ntimes,6) and the six phase-space coordinates are
arranged in the usual (R,vR,vT,z,vz,phi) order. The orbit in \((R,Z)\) is then given by
>>> plot(RvR[:,0],RvR[:,3])
We can compare this to the direct numerical orbit integration. We integrate the orbit, starting at the position and velocity of the initial angle RvR[0]
>>> from galpy.orbit import Orbit
>>> orb= Orbit(RvR[0])
>>> orb.integrate(times,MWPotential2014)
>>> orb.plot(overplot=True)
The two orbits are exactly the same.
Of course, we do not have to follow the path of an orbit to map the entire orbital torus and thus reveal the orbital building blocks of galaxies. To directly map a torus, we can do (don’t worry, this
doesn’t take very long)
>>> nangles= 200001
>>> angler= numpy.random.uniform(size=nangles)*2.*numpy.pi
>>> anglep= numpy.random.uniform(size=nangles)*2.*numpy.pi
>>> anglez= numpy.random.uniform(size=nangles)*2.*numpy.pi
>>> RvR,_,_,_,_= aAT.xvFreqs(jr,lz,jz,angler,anglep,anglez)
>>> plot(RvR[:,0],RvR[:,3],',',alpha=0.02)
which directly shows where the orbit spends most of its time:
actionAngleTorus has additional methods documented on the action-angle API page for computing Hessians and Jacobians of the transformation between action-angle and configuration space coordinates.
Accessing action-angle coordinates for Orbit instances¶
While the most flexible way to access the actionAngle routines is through the methods in the galpy.actionAngle modules, action-angle coordinates can also be calculated for galpy.orbit.Orbit instances
and this is often more convenient. This is illustrated here briefly. We initialize an Orbit instance
>>> from galpy.orbit import Orbit
>>> from galpy.potential import MWPotential2014
>>> o= Orbit([1.,0.1,1.1,0.,0.25,0.])
and we can then calculate the actions (default is to use the staeckel approximation with an automatically-estimated delta parameter, but this can be adjusted)
>>> o.jr(pot=MWPotential2014), o.jp(pot=MWPotential2014), o.jz(pot=MWPotential2014)
# (0.018194068808944613,1.1,0.01540155584446606)
o.jp here gives the azimuthal action (which is the z component of the angular momentum for axisymmetric potentials). We can also use the other methods described above or adjust the parameters of the
approximation (see above):
>>> o.jr(pot=MWPotential2014,type='staeckel',delta=0.4), o.jp(pot=MWPotential2014,type='staeckel',delta=0.4), o.jz(pot=MWPotential2014,type='staeckel',delta=0.4)
# (0.019221672966336707, 1.1, 0.015276825017286827)
>>> o.jr(pot=MWPotential2014,type='adiabatic'), o.jp(pot=MWPotential2014,type='adiabatic'), o.jz(pot=MWPotential2014,type='adiabatic')
# (0.016856430059017123, 1.1, 0.015897730620467752)
>>> o.jr(pot=MWPotential2014,type='isochroneApprox',b=0.8), o.jp(pot=MWPotential2014,type='isochroneApprox',b=0.8), o.jz(pot=MWPotential2014,type='isochroneApprox',b=0.8)
# (0.019066091295488922, 1.1, 0.015280492319332751)
These two methods give very precise actions for this orbit (both are converged to about 1%) and they agree very well
>>> (o.jr(pot=MWPotential2014,type='staeckel',delta=0.4)-o.jr(pot=MWPotential2014,type='isochroneApprox',b=0.8))/o.jr(pot=MWPotential2014,type='isochroneApprox',b=0.8)
# 0.00816012408818143
>>> (o.jz(pot=MWPotential2014,type='staeckel',delta=0.4)-o.jz(pot=MWPotential2014,type='isochroneApprox',b=0.8))/o.jz(pot=MWPotential2014,type='isochroneApprox',b=0.8)
# 0.00023999894566772273
We can also calculate the frequencies and the angles. This requires using the Staeckel or Isochrone approximations, because frequencies and angles are currently not supported for the adiabatic
approximation. For example, the radial frequency
>>> o.Or(pot=MWPotential2014,type='staeckel',delta=0.4)
# 1.1131779637307115
>>> o.Or(pot=MWPotential2014,type='isochroneApprox',b=0.8)
# 1.1134635974560649
and the radial angle
>>> o.wr(pot=MWPotential2014,type='staeckel',delta=0.4)
# 0.37758086786371969
>>> o.wr(pot=MWPotential2014,type='isochroneApprox',b=0.8)
# 0.38159809018175395
which again agree to 1%. We can also calculate the other frequencies, angles, as well as periods using the functions o.Op, o.oz, o.wp, o.wz, o.Tr, o.Tp, o.Tz.
All of the functions above also work for Orbit instances that contain multiple objects. This is particularly convenient if you have data in observed coordinates (e.g., RA, Dec, etc.), for example,
>>> numpy.random.seed(1)
>>> nrand= 30
>>> ras= numpy.random.uniform(size=nrand)*360.*u.deg
>>> decs= 90.*(2.*numpy.random.uniform(size=nrand)-1.)*u.deg
>>> dists= numpy.random.uniform(size=nrand)*10.*u.kpc
>>> pmras= 2.*(2.*numpy.random.uniform(size=nrand)-1.)*u.mas/u.yr
>>> pmdecs= 2.*(2.*numpy.random.uniform(size=nrand)-1.)*u.mas/u.yr
>>> vloss= 200.*(2.*numpy.random.uniform(size=nrand)-1.)*u.km/u.s
>>> co= SkyCoord(ra=ras,dec=decs,distance=dists,
>>> orbits= Orbit(co)
>>> orbits.jr(pot=MWPotential2014)
# [2363.7957, 360.12445, 690.32238, 1046.2924, 132.9572, 86.989812, 272.06487, 360.73566, 55.568238, 698.18447, 24.783574, 21.889352, 16.148216, 3870.4286, 743.63456, 317.66551, 325.93816, 183.86429, 56.087796, 180.42838, 1121.8019, 8700.8335, 977.8525, 7.569396, 8.2847477, 210.72127, 160.9785, 680.63864, 1093.7413, 87.629873]kmkpcs
Example: Evidence for a Lindblad resonance in the Solar neighborhood¶
We can use galpy to calculate action-angle coordinates for a set of stars in the Solar neighborhood and look for unexplained features. For this we download the data from the Geneva-Copenhagen Survey
(2009A&A…501..941H; data available at viZier). Since the velocities in this catalog are given as U,V, and W, we use the radec and UVW keywords to initialize the orbits from the raw data. For each
object ii
>>> o= Orbit(vxvv[ii,:],radec=True,uvw=True,vo=220.,ro=8.)
We then calculate the actions and angles for each object in a flat rotation curve potential
>>> lp= LogarithmicHaloPotential(normalize=1.)
>>> myjr[ii]= o.jr(lp)
Plotting the radial action versus the angular momentum
>>> import galpy.util.plot as galpy_plot
>>> galpy_plot.plot(myjp,myjr,'k.',ms=2.,xlabel=r'$J_{\phi}$',ylabel=r'$J_R$',xrange=[0.7,1.3],yrange=[0.,0.05])
shows a feature in the distribution
If instead we use a power-law rotation curve with power-law index 1
>>> pp= PowerSphericalPotential(normalize=1.,alpha=-2.)
>>> myjr[ii]= o.jr(pp)
We find that the distribution is stretched, but the feature remains
Code for this example can be found here (note that this code uses a particular download of the GCS data set; if you use your own version, you will need to modify the part of the code that reads the
data). For more information see 2010MNRAS.409..145S.
Example: actions in an N-body simulation¶
To illustrate how we can use galpy to calculate actions in a snapshot of an N-body simulation, we again look at the g15784 snapshot in the pynbody test suite, discussed in The potential of N-body
simulations. Please look at that section for information on how to setup the potential of this snapshot in galpy. One change is that we should set enable_c=True in the instantiation of the
InterpSnapshotRZPotential object
>>> spi= InterpSnapshotRZPotential(h1,rgrid=(numpy.log(0.01),numpy.log(20.),101),logR=True,zgrid=(0.,10.,101),interpPot=True,zsym=True,enable_c=True)
>>> spi.normalize(R0=10.)
where we again normalize the potential to use galpy’s natural units.
We first load a pristine copy of the simulation (because the normalization above leads to some inconsistent behavior in pynbody)
>>> sc = pynbody.load('Repos/pynbody-testdata/g15784.lr.01024.gz'); hc = sc.halos(); hc1= hc[1]; pynbody.analysis.halo.center(hc1,mode='hyb'); pynbody.analysis.angmom.faceon(hc1, cen=(0,0,0),mode='ssc'); sc.physical_units()
and then select particles near R=8 kpc by doing
>>> sn= pynbody.filt.BandPass('rxy','7 kpc','9 kpc')
>>> R,vR,vT,z,vz = [numpy.ascontiguousarray(hc1.s[sn][x]) for x in ('rxy','vr','vt','z','vz')]
These have physical units, so we normalize them (the velocity normalization is the circular velocity at R=10 kpc, see here).
>>> ro, vo= 10., 294.62723076942245
>>> R/= ro
>>> z/= ro
>>> vR/= vo
>>> vT/= vo
>>> vz/= vo
We will calculate actions using actionAngleStaeckel above. We can first integrate a random orbit in this potential
>>> from galpy.orbit import Orbit
>>> numpy.random.seed(1)
>>> ii= numpy.random.permutation(len(R))[0]
>>> o= Orbit([R[ii],vR[ii],vT[ii],z[ii],vz[ii]])
>>> ts= numpy.linspace(0.,100.,1001)
>>> o.integrate(ts,spi)
This orbit looks like this
We can now calculate the actions by doing
>>> from galpy.actionAngle import actionAngleStaeckel
>>> aAS= actionAngleStaeckel(pot=spi,delta=0.45,c=True)
>>> jr,lz,jz= aAS(R,vR,vT,z,vz)
These actions are also in natural units; you can obtain physical units by multiplying with ro*vo. We can now plot these actions
>>> from galpy.util import plot as galpy_plot
>>> galpy_plot.scatterplot(lz,jr,'k.',xlabel=r'$J_\phi$',ylabel=r'$J_R$',xrange=[0.,1.3],yrange=[0.,.6])
which gives
Note the similarity between this figure and the GCS figure above. The curve shape is due to the selection (low angular momentum stars can only enter the selected radial ring if they are very
elliptical and therefore have large radial action) and the density gradient in angular momentum is due to the falling surface density of the disk. We can also look at the distribution of radial and
vertical actions.
>>> galpy_plot.plot(jr,jz,'k,',xlabel=r'$J_R$',ylabel=r'$J_z$',xrange=[0.,.4],yrange=[0.,0.2],onedhists=True)
With the other methods in the actionAngle module we can also calculate frequencies and angles.
|
{"url":"https://docs.galpy.org/en/latest/actionAngle.html","timestamp":"2024-11-12T03:47:26Z","content_type":"text/html","content_length":"192125","record_id":"<urn:uuid:daccd2d8-b480-4d51-8e79-b0116ce27fb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00407.warc.gz"}
|
Why is googolplex the biggest number?
Googol is so big that it’s almost useless, but the boy who created it came up with the name “googolplex,” which means “googolplex.” A googolplex is one with a number of googol zeros. There isn’t
enough ink in the world’s pens to write so many zeros, but feel free to give it a shot.
Is there a larger number than googolplex?
Graham’s phone number is larger than the googolplex. It’s so large that the Universe doesn’t have enough stuff to write its digits on; it’s literally too big to write. However, this number is finite;
it is also an entire number, and despite its enormous size, we are aware that it is divisible by three and ends in a 7.
What is the googolplex’s largest number?
A googolplex (10googol) is the most frequently mentioned number, which stands at 1010100.
Is Tree 3 the most popular?
As a result, TREE(2) = 3. You may be able to figure out where it all goes from here. The result, TREE(3), is incomprehensible when you play the game with three seed colors.
TREE(3) is the maximum number of trees you can build without ending the game.
HOW HIGH CAN the numbers go?
Of course, trillion is the sum of a billion. Then there’s quadrillion, quintrillion, sextillion, septillion, octillion, nonmillion, and decillion. One of my favorite challenges is for my math class
to keep counting by “illions” as far as possible.
Why is googolplex so much bigger than infinity?
It’s a lot bigger than a small googol! The largest number named by Googolplex is likely to be designated with a single word, but that does not rule it out as the largest number. True, but there isn’t
anything quite like infinity: infinity isn’t a number.
It refers to the state of endlessness.
What is the highest number of people who have been recorded?
A hundred zeros is the number googol. It was named after a nine-year-old boy.
A googol is more than just a bunch of hairs on the planet.
Is it true that numbers come to an end?
Natural number sequences are infinite and never end. As a result, when we see a number like “0.999” (a decimal number with an endless series of 9s), the number of 9s is infinite. “But what happens if
it ends in an 8,?” you can’t say because it doesn’t end.
What size is a Googolplexianth?
Googolplex – Googolplex.com – 1000000000000000000000000000000000 etc. Googol: A very large number! A “1” is followed by a hundred zeros.
What is the smallest number in the world?
Aleph 0 (or aleph zero), which is the sum of all integers, is the smallest version of infinity. Aleph 1 is two times as powerful as aleph 0.
Is Marioplex larger than Googolplex?
Marioplex far exceeds the number of atoms in the visible universe (estimated between 1078 and 1082 atoms) because it is larger than the googol to the 100th power.
What is the name of a
Thousand: 1000 (3 zeros) ten thousand 10,000 (4 zeros) ten thousand 100,000 (5 zeros) ten thousand 1,000,000 (6 zeros) ten thousand 100,000 (5 zeros) ten thousand 100,000 (5 zeros) ten thousand
1,000,000 (6 zeros) ten thousand
Why is the number 100 so important?
100 is a perfect square number with a square root of 10. It is based on percentages (“per cent” in Latin, which means “per hundred”), with 100% being a full amount. In a dollar, there are 100
Is 28 the ideal number?
A perfect number is a positive integer with the sum of its proper divisors. The sum of 1, 2, and 3 is the smallest perfect number.
28, 496, and 8,128 are the other perfect numbers.
Is a billion a real number?
A zillion is a large, but unspecific, number. Because of its similarity to billions, millionaires, and trillions, Zillion sounds like a real number, and it’s modeled on real numerical values.
Zillion, like its cousin jillion, is an informal way to discuss a massive but indefinite number.
In a googolplex, how many zeros do you have?
The number 10googol, or the equivalent of 10, is a googolplex. It’s 1 followed by 10100 zeroes, or a 1 followed by a googol zeroes, written in ordinary decimal notation.
|
{"url":"https://tipsfolder.com/googolplex-biggest-number-bd89a5678f86ceccbba8d640e6c609a7/","timestamp":"2024-11-09T12:57:55Z","content_type":"text/html","content_length":"97795","record_id":"<urn:uuid:34b13a33-8151-4039-9a17-d22810131ed0>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00758.warc.gz"}
|
Flows of generalized Oldroyd-B fluids in curved pipes
Pires, Marília; Sequeira, Adélia
Parabolic Problems, Progress in Nonlinear Differential Equations and Their Applications, Springer Basel, 80 (2011), 21-43
The aim of this work is to present a numerical study of generalized Oldroyd-B flows with shear-thinning viscosity in a curved pipe of circular cross section and arbitrary curvature ratio. Flows are
driven by a given pressure gradient and behavior of the solutions is discussed with respect to different rheologic and geometric flow parameters.
|
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=6&member_id=156&doc_id=1654","timestamp":"2024-11-12T12:06:50Z","content_type":"text/html","content_length":"8474","record_id":"<urn:uuid:d50422fa-fd00-46a2-96ea-807ec3cfdd47>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00345.warc.gz"}
|
Source code for neurokit2.complexity.entropy_range
from .entropy_approximate import entropy_approximate
from .entropy_sample import entropy_sample
def entropy_range(signal, dimension=3, delay=1, tolerance="sd", approximate=False, **kwargs):
"""**Range Entropy (RangEn)**
Introduced by Omidvarnia et al. (2018), Range Entropy (RangEn or RangeEn) refers to a modified
form of SampEn (or ApEn).
Both ApEn and SampEn compute the logarithmic likelihood that runs of patterns that are close
remain close on the next incremental comparisons, of which this closeness is estimated by the
Chebyshev distance. Range Entropy uses instead a normalized "range distance", resulting in
modified forms of ApEn and SampEn, **RangEn (A)** (*mApEn*) and **RangEn (B)** (*mSampEn*).
However, the RangEn (A), based on ApEn, often yields undefined entropies (i.e., *NaN* or
*Inf*). As such, using RangEn (B) is recommended instead.
RangEn is described as more robust to nonstationary signal changes, and has a more linear
relationship with the Hurst exponent (compared to ApEn and SampEn), and has no need for signal
amplitude correction.
Note that the :func:`corrected <entropy_approximate>` version of ApEn (cApEn) can be computed
by setting ``corrected=True``.
signal : Union[list, np.array, pd.Series]
The signal (i.e., a time series) in the form of a vector of values.
delay : int
Time delay (often denoted *Tau* :math:`\\tau`, sometimes referred to as *lag*) in samples.
See :func:`complexity_delay` to estimate the optimal value for this parameter.
dimension : int
Embedding Dimension (*m*, sometimes referred to as *d* or *order*). See
:func:`complexity_dimension` to estimate the optimal value for this parameter.
tolerance : float
Tolerance (often denoted as *r*), distance to consider two data points as similar. If
``"sd"`` (default), will be set to :math:`0.2 * SD_{signal}`. See
:func:`complexity_tolerance` to estimate the optimal value for this parameter.
approximate : bool
The entropy algorithm to use. If ``False`` (default), will use sample entropy and return
*mSampEn* (**RangEn B**). If ``True``, will use approximate entropy and return *mApEn*
(**RangEn A**).
Other arguments.
See Also
entropy_approximate, entropy_sample
RangEn : float
Range Entropy. If undefined conditional probabilities are detected (logarithm
of sum of conditional probabilities is ``ln(0)``), ``np.inf`` will
be returned, meaning it fails to retrieve 'accurate' regularity information.
This tends to happen for short data segments, increasing tolerance
levels might help avoid this.
info : dict
A dictionary containing additional information regarding the parameters used.
.. ipython:: python
import neurokit2 as nk
signal = nk.signal_simulate(duration=2, sampling_rate=100, frequency=[5, 6])
# Range Entropy B (mSampEn)
RangEnB, info = nk.entropy_range(signal, approximate=False)
# Range Entropy A (mApEn)
RangEnA, info = nk.entropy_range(signal, approximate=True)
# Range Entropy A (corrected)
RangEnAc, info = nk.entropy_range(signal, approximate=True, corrected=True)
* Omidvarnia, A., Mesbah, M., Pedersen, M., & Jackson, G. (2018). Range entropy: A bridge
between signal complexity and self-similarity. Entropy, 20(12), 962.
if approximate is False: # mSampEn - RangeEn (B)
out = entropy_sample(
else: # mApEn - RangeEn (A)
out = entropy_approximate(
return out
|
{"url":"https://neuropsychology.github.io/NeuroKit/_modules/neurokit2/complexity/entropy_range.html","timestamp":"2024-11-07T07:11:34Z","content_type":"text/html","content_length":"28850","record_id":"<urn:uuid:4f24a4be-6d47-486b-88fc-7ff8a0240d6c>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00804.warc.gz"}
|
UPSC Previous Year Question Papers
UPSC Previous Year Question Papers for Prelims – A Key Resource for Success
UPSC Previous Year Question Papers Topic Wise
Topic Direction Sense
Q1. The houses of A and B face each other on a road going north-south; A's being on the western side. A comes out of his house, turns left, travels 5 km, turns right, travels 5 km to the front of D's
house. B does exactly the same and reaches the front of C's house. In this context, which one of the following statements is correct? [CSAT 2011]
(a) C and D live on the same street.
(b) C's house faces south.
(c) The houses of C and D are less than 20 km apart.
(d) None of the above
Q2. Location of B is north of A and location of C is east of A. The distances AB and AC are 5 km and 12 km respectively. The shortest distance (in km) between the locations B and C is [CSAT 2014]
(a) 60
(b) 13
(c) 17
(d) 7
Q3. Consider the following statements: [CSAT 2014]
There are six villages A, B, C, D, E and F.
F is 1 km to the west of D.
B is 1 km to the east of E.
A is 2 km to the north of E.
C is 1 km to the east of A.
D is 1 km to the south of A.
Which three villages are in a line?
(a) A, C, B
(b) A, D, E
(c) C, B, F
(d) E, B, D
Q4. Shahid and Rohit start from the same point in opposite directions. After each 1 km, Shahid always turns left and Rohit always turns right. Which of the following statements is correct? [CSAT
(a) After both have travelled 2 km, the distance between them is 4 km.
(b) They meet after each has travelled 3 km.
(c) They meet for the first time after each has travelled 4 km.
(d) They go on without ever meeting again.
Q5. A person walks 12 km due north, then 15 km due east, after that 19 km due west and then 15 km due south. How far is the from the starting point? [CSAT 2016]
(a) 5 km
(b) 9 km
(c) 37 km
(d) 61 km
Q6. A person X was driving in a place where all roads ran either north-south or east-west, forming a grid Roads are at a distance of 1 km from each other in a parallel. He started at the intersection
of two roads, drove 3 km north, 3 west and 4 km south which further route could bring him back to his starting point, if the same route is not repeated? [CSAT 2016]
(a) 3 km east, then 2 km south
(b) 3 km east, then 1 km north
(c) 1 km north, then 2 km west
(d) 3 km south, then 1 km north
Q7. 'A' started from his house and walked 20 m towards East, where his friend 'B' joined him. They together walked 10 m in the same direction. Then 'A' turned left while 'B' turned right and
travelled 2 m and 8 m respectively. Again 'B' turned left to travel 4 m followed by 5 m to his right to reach his office. 'A' turned right and travelled 12 m to reach his office. What is the shortest
distance between the two offices? [CSAT 2019]
(a) 15 m
(b) 17 m
(c) 19 m
(d) 20 m
Q8. P, Q and Rare three towns. The distance between P and Q is 60 km, whereas the distance between P and R is 80 km. Q is in the West of P and R is in the South of P. What is the distance between Q
and R? [CSAT 2019)
(a) 140 km
(b) 130 km
(c) 110 km
(d) 100 km
Q9. A man walks down the backside of his house straight 25 meters, then turns to the right and walks 50 meters again; then he turns towards left and again walks 25 meters. If his house faces to the
East, what is his direction from the starting point? [CSAT 2020]
(a) South-East
(b) South-West
(c) North-East
(d) North-West
Q10. A woman runs 12 km towards her North, then 6 km towards her South and then 8 km towards her East. In which direction is she from her starting point? [CSAT 2021]
(a) An angle less than 45° South of East.
(b) An angle less than 45° North of East.
(c) An angle more than 45° South of East.
(d) An angle more than 45° North of East.
Q11. A bank employee drives 10 km towards South from her house and turns to her left and drives another 20 km. She again turns left and drivers 40 km, and then she turns to her right and drives for
another 5 km. She again turns to her right and drives another 30 km to reach her bank where she works. What is the shortest distance between her bank and her house? [CSAT 2021]
(a) 20 km
(b) 25 km
(c) 30 km
(d) 35 km
Q12. Two friends X and Y start running and they run together for 50 m in the same direction and reach a point. X turns right and runs 60 m, while Y turns left and runs 40 m. Then X turns left and
runs 50 m and stops, while Y turns right and runs 50 m and then stops. How far are the two friends from each other now? [CSAT 2022]
(a) 100 m
(b) 90 m
(c) 60 m
(d) 50 m
Q13. A person walks 100 m straight from his house, turns left and walks 100 m, again turns left and walks 300 m, then turns right and walks 100 m to reach his office. In which direction does he walk
initially from his house if his office is exactly in the North-East direction? [CSAT 2024]
(a) North-West
(b) West
(c) South
(d) South-West
Q14. A person walks 100 m Westward, then turns left and walks 100 m. He then takes 225" turn clockwise. In which direction is he walking now? [CSAT 2024]
(a) South-West
(b) South-East
(c) North-West
(d) North-East
1. C 8. D
2. B 9. D
3. B 10. B
4. B 11. B
5. A 12. A
6. B 13. C
7. B 14. D
|
{"url":"https://www.iassetu.com/previous-year-question-papers","timestamp":"2024-11-04T13:41:12Z","content_type":"text/html","content_length":"42118","record_id":"<urn:uuid:447ea485-f4aa-4409-992d-330174d1e293>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00447.warc.gz"}
|
Motion Modeling and Coordinate Systems
Perform array and target trajectory modeling, coordinate transformations, and compute Doppler shift
The Phased Array System Toolbox™ lets you model the motion of radars, sonars, targets, jammers, or interference sources using the phased.Platform System object™. This System object provides constant
velocity and constant acceleration motion models. These motion models can generate almost any type of trajectory. You can display a 3-D visualization of a radar scenario using the
phased.ScenarioViewer System object. The toolbox contains several utility functions that let you transform between coordinates systems, transform between angular coordinates, and convert between
velocity and Doppler shift.
phased.Platform Model platform motion
phased.ScenarioViewer Display motion of radars and targets
Motion Platform Motion platform
Range and Doppler Transformations
dop2speed Convert Doppler shift to speed
speed2dop Convert speed to Doppler shift
radialspeed Relative radial speed
rangeangle Range and angle calculation
Local to Global Coordinate Transformations
global2localcoord Convert global to local coordinates
local2globalcoord Convert local to global coordinates
Local Coordinate Operations
rotx Rotation matrix for rotations around x-axis
roty Rotation matrix for rotations around y-axis
rotz Rotation matrix for rotations around z-axis
cart2sphvec Convert vector from Cartesian components to spherical representation
sph2cartvec Convert vector from spherical basis components to Cartesian components
azelaxes Spherical basis vectors in 3-by-3 matrix form
Angle Conversion
uv2azel Convert u/v coordinates to azimuth/elevation angles
azel2uv Convert azimuth/elevation angles to u/v coordinates
phitheta2azel Convert angles from phi/theta form to azimuth/elevation form
azel2phitheta Convert angles from azimuth-elevation form to phi-theta form
uv2phitheta Convert u/v coordinates to phi/theta angles
phitheta2uv Convert phi/theta angles to u/v coordinates
Motion Modeling
• Doppler Shift and Pulse-Doppler Processing
Compute target motion using Doppler processing.
• Motion Modeling in Phased Array Systems
A critical component in phased array system applications is the ability to model motion in space.
• Model Motion of Circling Airplane
Start with an airplane moving along a circular track with a radius of 10 km at a horizontal speed of 100 m/s and descending at a rate of 1 m/sec.
|
{"url":"https://kr.mathworks.com/help/phased/motion-modeling-and-coordinate-systems-1.html","timestamp":"2024-11-05T22:43:26Z","content_type":"text/html","content_length":"79685","record_id":"<urn:uuid:ce34a10e-bddd-4646-9915-83b379a3d4c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00438.warc.gz"}
|
Walk-through to Morel-Voevodsky A¹-homotopy theory, page 48-50
Friday, February 05th, 2010 | Author: Konrad Voelkel
We look at the model structure Voevodsky and Morel use in their 1999 IHES paper and discuss 1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 1.9, 1.10. There is nothing difficult or particularly interesting, but you
might want to look up some specific issue or reference.
I wrote another posting that explains what an enriched model category, enriched over a monoidal model category is; we turn to simplicial model categories in this post. There, I also explain the
notion of monoidal and enriched model categories beside some notions of simplicial sets and topoi, the most important being for now:
a simplicial model category is just an enriched model category which is enriched over the monoidal model category of simplicial sets.
but details are also to be found below.
The simplicial model structure on simplicial sheaves on a topos
In Definition 1.2, for every small site $T$, a model structure on $\Delta^{op}Shv(T)$ is defined:
1. The weak equivalences $W_s$ are the stalkwise (pointwise) weak equivalences
2. The cofibrations $C$ are the monomorphisms
3. The fibrations $F_s$ are defined via the right lifting property with respect to acyclic cofibrations
Remark 1.3 is a technical subtlety. If you happen to have a conservative set of points $P$ of a topos $T$, then weak equivalence of a morphism $f : X \rightarrow Y$ of sheaves on $T$ can be tested
pointwise: $f \in W_s \Leftrightarrow \forall x^\ast \in P : x^\ast(f) \in W$, where $W$ denotes the weak equivalences in the standard model structure of simplicial sets. A conservative set of points
$P$ is just a set of points that is a conservative family of functors, which is by definition, that the product functor $\prod_{x \in P} x$ is a conservative functor.
A functor $F$ is conservative if it reflects isomorphisms. That means, $F(f)$ isomorphism implies $f$ isomorphism for each morphism $f$.
This technical lemma is used later in the text, but the homotopy sheaves are not, so I guess you can forget the proof details when reading the text for the first time.
See also: conservative functor in nLab
Theorem 1.4 (the structure defined by $(W_s,C,F_s)$ is a model category structure) cites the result of Corollary 2.7 in Jardine: Simplicial Presheaves, in no. 47 J.Pure Applied Math, 1987 which is
originally due to Joyal. Since the article is behind a paywall, I'll give you a rough idea:
• (MC1), (MC2) and (MC3) are deduced from the model structure on simplicial sets.
• (MC4) relies on the fact that the morphism from a presheaf to its associated sheaf is a weak equivalence and then applying the axiom for $\Delta^{op}Preshv(T)$ with the global fibration and
topological weak equivalence model structure. (MC4) for $\Delta^{op}Preshv(T)$ is proved with a trick that uses (MC5).
• (MC5) is essentially a small object argument.
The corresponding homotopy category of $(W_s,C,F_s)$ on $\Delta^{op}Shv(T)$ is written $\mathcal{H}_s(T)$.
See also: small object argument in nLab
Proper model categories
Remark 1.5 states that the model structure is a proper one. The proof is available in Jardine, J.F.: Stable homotopy theory of simplicial presheaves, in no. 39 Can. Math. J, 1987 which is available
for free here.
A simplicial model category is proper if
• (P1) the pullback $j^\ast(g)$ of a weak equivalence $g$ along a fibration $j$ is always a weak equivalence,
• (P2) the pushout $i_\ast(f)$ of a weak equivalence $f$ along a cofibration $i$ is always a weak equivalence.
(P1) is proved for simplicial sets via fibrant replacement, such that one has a cartesian diagram up to weak equivalence, and then application of K. Brown's coglueing lemma, which is Lemma 1 on page
428 of Brown, K.: Abstract Homotopy Theory and Generalized Sheaf Cohomology, in Vol. 186 Transactions of the American Mathematical Society, 1973 which you can download from the nLab for free.
(P2) is proved for simplicial sets in a dual fashion, using the fact that simplicial sets are always cofibrant and a dual of Brown's coglueing lemma.
For simplicial presheaves on a topos, the proofs are similar. For (P1), fibrant replacement yields a cartesian diagram (up to weak equivalence) in which all objects are locally fibrant simplicial
presheaves (which form a category of fibrant objects) and the coglueing argument can be applied. For simplicial sheaves, (P1) and (P2) follow since the associated sheaf morphism is a weak
It should be mentioned that (P1) is also called right proper and similarly (P1) left proper.
See also: proper model category in nLab
Functorial fibrant replacements (1.6)
(MC5) demands in particular, that every morphism is functorially factorizable into a fibration after an acylic cofibration.
A resolution on a site $T$ (which carries a model structure) is defined to be a functor $Ex : \Delta^{op}Shv(T) \rightarrow \Delta^{op}Shv(T)$ and a transformation $\theta : Id \rightarrow Ex$ such
that for every simplicial sheaf $X \in \Delta^{op}Shv(T)$, the object $Ex(X)$ is fibrant and $\theta_X : X \rightarrow Ex(X)$ is an acyclic cofibration.
Indeed, if $f : X \to \ast$ is a morphism, we can factorize it into an acyclic cofibration followed by a fibration. Rename the acyclic cofibration $\theta_X$ and the object $\theta_X(X) =: Ex(X)$,
then $Ex(X) \rightarrow \ast$ is a fibration, thus $Ex(X)$ fibrant. Voilà - since (MC5) demands this to be functorial, the functor/transformation conditions for a resolution are fulfilled.
It should be clear that this works the same way for cofibrant replacements, although we won't need this here, since in the simplicial model structure we're looking at on $\Delta^{op}Shv(T)$, all
objects are cofibrant.
See also: Kan fibrant replacement in nLab
Simplicial model categories
For every two objects $X,\ Y \in \Delta^{op}Shv(T)$, we defined
is a simplicial set because
is a cosimplicial object. If you take an object
$U \in T$
as constant simplicial sheaf in degree 0, you can look at
, which is just the simplicial set of sections
for the simplicial sheaf
. Now we have to see that this enrichment is compatible with the model structure. This is done in Remark 1.9. resp. Lemma 1.8. The proof indication for Lemma 1.8. is to prove 1) via points of
. This is easy if you already know that the standard model structure on simplicial sets is a simplicial model structure (the model category of simplicial sets enriched over the monoidal model
category of simplicial sets), which is not too hard to prove.
If you already know about the "subtleties" in the definition of simplicial model categories (maybe from my article about simplicial model categories), skip the next two paragraphs.
A category $\mathcal{C}$ is a simplicial model category if it is a model category that is enriched over simplicial sets, that satisfies the additional axioms (Quillen):
• (SM0): for all $X \in \mathcal{C}$ and all finite simplicial sets $K$, $X \otimes K$ and $X^K$ exist.
• (SM7): If $i: A \rightarrow B$ is a cofibration and $p:X \rightarrow Y$ a fibration, then is a fibration of simplicial sets, which is trivial if either $i$ or $p$ is trivial. (The S denotes the
simplicial mapping object of $\mathcal{C}$).
(SM0) is also phrased "$X$ is powered and copowered" and sometimes already included in the definition of an enriched model category (like I did in my article about simplicial model categories). (SM7)
is also phrased "the copower functor is a left Quillen bifunctor" and sometimes already included in the definition of an enriched model category (like I did, again). So, if you take the "modern"
definition of a model category enriched over a monoidal model category, those axioms are already included (I put them in here just because they will show up in the literature and also because you
might not have read my article about the definition of simplicial model categories).
Lemma 1.10, different notions of equivalence are the same
For $X,\ Y \in \Delta^{op}Shv(T)$ fibrant and $f:X\rightarrow Y$ a morphism, these three statements are equivalent:
1. $f$ is a simplicial homotopy equivalence,
2. $f$ is a weak equivalence,
3. $\forall U \in T : S(U,f)$ is a weak equivalence.
The proof indication is mostly a list of references, so let's have a more detailed look, which will then finish this posting.
• (2)=>(1)
factorise the weak equivalence $f$ into a cofibration $i : X \rightarrow X'$ followed by an acyclic fibration $p : X' \rightarrow Y$. Then $i$ is a weak equivalence again (by 2-out-of-3). By an
argument in Quillen's Homotopical Algebra (Corollary 2.5), obtain a retraction $r$ of $i$ by the lift in the diagram
and then get a simplicial homotopy from $ir$ to $id_{X'}$ by the lift in the diagram
and now $r$ is a simplicial homotopy inverse of $i$. To actually obtain a simplicial homotopy inverse of $f$, we're going to build a simplicial homotopy inverse of $p$. For this, observe that all
objects are cofibrant (since cofibrations are by definition just monomorphisms), and that the dual statement to what we just proved is that a trivial fibration between cofibrant objects is a
simplicial homotopy equivalence.
What is $I$? What is $X^I$? you might ask. The object $I$ is just the simplicial set $\Delta^1$, whose geometric realisation in $\mathbb{R}$ looks like the interval $[0,1]$, hence the name (and I
used this notation here because it's the same as in Quillen's book). The object $X^I$ is the internal mapping object $\underline{Hom}(\Delta^1,X)$. If this remains unclear, you might want to read
some introduction to enriched category theory.
• (1)=>(3)
We will not try to construct a weak homotopy equivalence but a homotopy equivalence:
Using the definition of $Y(U)=S(U,Y)$ for $U \in T$ and $Y \in \Delta^{op}Shv(T)$, you'll see the canonical isomorphism $X^{\Delta^1}(U) \xrightarrow{\simeq} X(U)^{\Delta^1}$. Now take a
simplicial homotopy inverse $g$ to the map $f$ and choose a simplicial homotopy $h_X : X \rightarrow X^{\Delta^1}$ between $id_X$ and $gf$. This yields a map $S(U,h_X) : X(U) \rightarrow X^{\
Delta^1}(U)$ which, composed with the canonical isomorphism above, is the homotopy between $S(U,g)\circ S(U,f)$ and $id_{X(U)}$ we're looking for. The other composition $fg$ is handled the same
• (3)=>(2)
From SGA4 6.8.2 we learn that every point $x^\ast$ of $T$ has an associated functor $Vois_T(x) \rightarrow T$, where $Vois_T(x)$ is the category of neighbourhoods (French: voisinages) of $x^\ast$
. A neighbourhood is a couple $(U,u)$ where $U\in T$ and $u \in x^{\ast}U$. The cofiltrant category of neighbourhoods of $x^\ast$ admits a small cofinite full subcategory, so by abstract nonsense
the functor $Vois_T(x) \rightarrow T$ is a pro-object in $T$. A pro-object is, by definition, just a functor from a small cofiltered category to $T$ (think of it as a diagram to form a projective
limit, hence the name). Let's write the pro-object $\{U_\alpha\}$, hiding the small cofinal full subcategory of $Vois_T(x)$ in the indices.
Now for a point $x^\ast$, $x^\ast(f)$ is a filtering colimit (=projective limit) of all $S(U_\alpha, f)$, thus a filtering colimit of weak equivalences. We conclude that $x^\ast$ is itself a weak
equivalence. Since this holds for every point, $f$ is a weak equivalence.
|
{"url":"https://www.konradvoelkel.com/2010/02/walk-through-to-morel-voevodskys-a1-homotopy-theory-page-48-50/","timestamp":"2024-11-05T13:04:22Z","content_type":"text/html","content_length":"69491","record_id":"<urn:uuid:ef110667-cb0a-4f67-8fc2-54d7e7e2518e>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00617.warc.gz"}
|
Understanding statistics through interactive visualizations
Statistical concepts can be difficult to understand from words and equations alone but understanding can come more easily if one gets to play with some numbers. This page features a number of
interactive visualizations made to increase the reader's understanding of important statistical concepts.
The following visualizations are currently available
Statistical basics
• Statistical distributions Statistical methods
• Classification using multiple predictors Statistical artifacts
• Restriction of range
• Ceiling/floor effects and the relationship to a criteria variable
• Ceiling/floor effects and the estimation of group differences
• Tail effects
• Regression towards the mean (measurement)
• Regression towards the mean (breeding)
• Dichotomization and cut-off values
• Discretization
• Discretization and relative risk Statistical fallacies
• The NHST subgroup fallacy
• Simpson's paradox Test bias
• Test bias
• Test bias and omitted variable bias Psychology
• Jensen's method
• The Dunning-Kruger effect
Looking for more?
Looking for more visualizations of statistical concepts? Check out these great sites:
Dynamic iframe
|
{"url":"http://emilkirkegaard.dk/understanding_statistics/?app=distributions","timestamp":"2024-11-11T06:58:39Z","content_type":"text/html","content_length":"5638","record_id":"<urn:uuid:b76251bc-5c6d-46ea-ab63-4493dc6c4c7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00512.warc.gz"}
|
B.Sc Mathematics Tuition Of Differential Equation - Online Mathematics TutorB.Sc Mathematics Tuition Of Differential Equation - Online Mathematics Tutor
B.Sc Mathematics Tuition Of Differential Equation
Are You Looking For B.Sc Mathematics Tuition Of Differential Equation? Math Education Is A Live Tutoring Sites For The Best Online Math Tutorial. Highly Qualified Faculties Available To Teach
University Students With Mathematics Subjects. Call Now +91-9818003202 For The Best Online B.Sc Mathematics Tuition Of Differential equation.
The main objectives of this course are to introduce the students to the exciting world of Differential Equations, Mathematical Modelling and their applications.
Course Learning Outcomes:
The course will enable the students to:
1. Formulate Differential Equations for various Mathematical models.
2. Solve first order non-linear differential equation and linear differential equations of higher order using various techniques.
3. Apply these techniques to solve and analyze various mathematical models.
Syllabus Content May vary as per curriculum by university
Course Contents:
Differential equations and mathematical models, Order and degree of a differential equation, Exact differential equations and integrating factors of first order differential equations, Reducible
second order differential equations, Application of first order differential equations to equations to acceleration-velocity model, Growth and decay model.
Unit 2: Population Growth Models Tuition
Introduction to compartmental models, Lake pollution model (with case study of Lake Burley Griffin), Drug assimilation into the blood (case of a single cold pill, case of a course of cold pills, case
study of alcohol in the bloodstream), Exponential growth of population, Limited growth of population, Limited growth with harvesting.
General solution of homogeneous equation of second order, Principle of superposition for a homogeneous equation; Wronskian, its properties and applications, Linear homogeneous and non-homogeneous
equations of higher order with constant coefficients, Euler’s equation, Method of undetermined coefficients, Method of variation of parameters, Applications of second order differential equations to
mechanical vibrations.
Interacting population models, Epidemic model of influenza and its analysis, Predator-prey model and its analysis, Equilibrium points, Interpretation of the phase plane, Battle model and its
1. Barnes, Belinda & Fulford, Glenn R. (2015). Mathematical Modelling with Case Studies, Using Maple and MATLAB (3rd ed.). CRC Press, Taylor & Francis Group.
2. Edwards, C. Henry, Penney, David E., & Calvis, David T. (2015). Differential Equation and Boundary Value Problems: Computing and Modeling (5th ed.). Pearson Education.
3. Ross, Shepley L. (2004). Differential Equations (3rd ed.). John Wiley & Sons. India
|
{"url":"https://mathedu.co.in/b-sc-mathematics-tuition-of-differential-equation/","timestamp":"2024-11-14T19:01:30Z","content_type":"text/html","content_length":"81801","record_id":"<urn:uuid:bcbd7ea9-5c10-4a85-83cd-4bc7fce9971d>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00897.warc.gz"}
|
Lesson 9
Equivalent Equations and Functions
These materials, when encountered before Algebra 1, Unit 7, Lesson 9 support success in that lesson.
9.1: More Equivalent Equations (5 minutes)
In this warm-up, students have the opportunity to explain why several equations are equivalent. These skills will be useful in the associated Algebra 1 lesson when students manipulate quadratic
equations to solve them in factored form. For the second question, monitor for students who:
1. Distribute the 5 from the original equation and distribute the 10 from the solution to obtain matching equations.
2. Factor 2 from the original equation and combine with the 5 to get the solution.
Student Facing
Explain why each of these equations is equivalent to \(5(2x-20) + 4 = 8\).
1. \(10x - 100 = 4\)
2. \(10(x-10) + 4 = 8\)
3. \(10x=104\)
4. \(x = \frac{52}{5}\)
Activity Synthesis
The purpose of this discussion is to remind students of some standard moves for rearranging linear equations. Select students to share their responses. Select previously identified students to share
their methods for approaching the second question. Connect the responses by showing that \(10x - 100\) can factor into \(10(x-10)\).
9.2: Finding Solutions and Functions (20 minutes)
In this activity, students find solutions to equations from a list of values, then rearrange the equations into functions whose graphs have \(x\)-intercepts at the same place as the solutions to the
equations. In the associated Algebra 1 lesson, students solve quadratic equations using the factored form. This activity gives an opportunity to preview that work with additional support.
Student Facing
Here is a list of possible solutions to equations.
1. For each equation, find any values on the list that are solutions. (Some equations have two solutions, and some only have one.)
1. \(35 = x^2 - 1\)
2. \((x - 5)(x + 7) = 0\)
3. \(0 = (7 - x) \boldcdot x\)
4. \((x + 3)^2 = 36\)
5. \(x^2 + 8x + 16 = 0\)
2. For each function, explain how it is related to the associated equation from the previous question. Then, graph the function using technology. Where can you see the solution to each equation on
its graph?
1. \(f(x) = x^2 - 36\)
2. \(g(x) = (x - 5)(x + 7)\)
3. \(h(x) = (7-x)\boldcdot x\)
4. \(k(x) = (x+3)^2 - 36\)
5. \(m(x) = x^2 + 8x + 16\)
Activity Synthesis
The purpose of the discussion is to recognize how solutions to equations are related to the graphs of similar functions. Select students to share their solutions. Ask students,
• “For all of the solutions in the first question, what are the associated function values from the second question?” (Zero)
• “Why is it helpful to have the factored form equal to zero for the second and third equations of the first question?” (It can be solved using the zero product property.)
9.3: Card Sort: Matching Equations (15 minutes)
In this partner activity, students take turns matching equivalent equations. As students trade roles explaining their thinking and listening, they have opportunities to explain their reasoning and
critique the reasoning of others (MP3).
Arrange students in groups of 2. Distribute one set of cards to each group of students. Give students time to work with their partner, followed by a whole-class discussion.
Student Facing
Your teacher will give you a set of cards.
Take turns with your partner to match two equivalent expressions.
1. For each match that you find, explain to your partner how you know it’s a match.
2. For each match that your partner finds, listen carefully to their explanation. If you disagree, discuss your thinking and work to reach an agreement.
Activity Synthesis
Select groups to share their matches and how they sorted their equations. Attend to the language that students use to describe their matches, giving them opportunities to describe their equivalent
equations more precisely.
|
{"url":"https://curriculum.illustrativemathematics.org/HS/teachers/4/7/9/index.html","timestamp":"2024-11-04T12:22:45Z","content_type":"text/html","content_length":"84573","record_id":"<urn:uuid:d6317861-8092-4198-b953-93351bf5cc39>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00380.warc.gz"}
|
Locally quasi-finite morphisms
Lemma 101.23.1. Let $f : \mathcal{X} \to \mathcal{Y}$ be a morphism of algebraic stacks. Assume $f$ is representable by algebraic spaces. The following are equivalent
1. $f$ is locally quasi-finite (as in Properties of Stacks, Section 100.3), and
2. $f$ is locally of finite type and for every morphism $\mathop{\mathrm{Spec}}(k) \to \mathcal{Y}$ where $k$ is a field the space $|\mathop{\mathrm{Spec}}(k) \times _\mathcal {Y} \mathcal{X}|$ is
Comments (5)
Comment #1786 by Matthew Emerton on
Unless I missed it, this section discusses locally q.f. morphisms, but doesn't actually define quasi-finite morphisms, despite its title. I guess these should be loc. q.f. plus fin. type?
Comment #1822 by Johan on
OK, I put this on the todo list. Right now I do not have the time to think through completely the following: should we also put some condition on the diagonal?
Comment #3177 by Ariyan Javanpeykar on
Typo: "with all of the already existing notion" --> "with the already existing notion"
Comment #3290 by Johan on
Thanks, fixed here.
Comment #5479 by Johan on
OK, I have now (finally) added a section on quasi-finite morphisms of algebraic stacks which are defined as quasi-compact, locally quasi-finite morphisms. See this commit.
Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 06PT. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 06PT, in case you are confused.
|
{"url":"https://stacks.math.columbia.edu/tag/06PT","timestamp":"2024-11-08T23:55:48Z","content_type":"text/html","content_length":"33168","record_id":"<urn:uuid:8afd2a0b-62fb-45b7-b63c-c49059f575d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00481.warc.gz"}
|
Problem Solving and Decision Making-Back to the Basics
The typical activities of what managers do include solving problems and hopefully making the right decisions. Some managers, in particular, often solve problems by reacting to them. They are
undertaken, anxious and often very short for time or resources. As a result, when they encounter problems they must solve, they respond with a decision that is familiar and or seemed to work before.
It is very easy to get caught in this thinking process because it reduces the element of risk. However, an experienced manager always looks to find smarter and simpler way of doing things. This
approach relies on using an organized systematic process - problem-solving process. This process is sometime referred to as “Work Simplification”. There is a word of caution, because this process
cannot solve all problems.
However, if the basic guidelines are applied carefully, they can result in considerable benefits. After some practice, they'll become second nature and a common approach to your thinking process.
Here my take, successful managers should view problems as opportunities to correct deficiencies and to look for improvements. Therefore, excellent managers view "problems" as "opportunities.”
Gather information and define the problem
This is often where managers fall short. They react to what they think the problem is. Instead, they need to seek to understand more about why you think there's a problem. Is the problem a repeat,
unique and or different? “A problem well defined is half –way solved. “
Get the facts, and ask a lot of question. Seek input from yourself and ask others.
• Who is involved? How extensive is the problem?
• How long it has been going on?
• Why do you think it is a problem? Who is causing the problem?
• Where, how, and when, and with whom is it happening?
Write down a brief description of the problem
Specify in terms of "The following should be happening, but isn’t..." or "Record the job as it’s, not the way it should be”…
As much as possible, be specific in your description, including what is happening, where, how, with whom and why. The inquiring attitude relies on having an open mind while utilizing the what, where,
when and how questions. Work with facts, not opinions.
Recognize the difference between "important “and "urgent" problems.
Often, what we consider to be important problems to consider are really just urgent problems. Important problems deserve more attention. For example, if you're continually answering "urgent" phone
calls, then you've probably got a more "important" problem and that's to design a system that screens and prioritizes your phone calls.
Verify your understanding of the problems.
It helps a great deal to verify your problem analysis. Ask for conferring with a peer or someone else not related to the problem.
Understand your role in the problem and the role of others.
For example, if you're very stressed out, it'll probably look like others are too, or you may resort too quickly to blaming and reprimanding others. Or, if you are feeling very guilty about your role
in the problem, you may ignore the accountability of others.
Select an approach to resolve the problem
When selecting the best approach, consider the following questions:
• Which approach is the most realistic to accomplish for now? Do you have the resources?
• Are they affordable? Do you have enough time to implement the approach?
• What is the extent of risk associated with each alternative?
Plan the implementation of the best alternative (this is your action plan)
Carefully consider the following questions:
• What will the situation look like when the problem is solved?
• What steps should be taken to implement the best alternative to solving the problem?
• What systems or processes should be changed in your organization, for example, a new policy or procedure? Don't resort to solutions where someone is "just going to try harder."
• How will you know if the steps are being followed or not? (These are your indicators of the success of your plan)
What resources will you need in terms of people, money and facilities?
• How much time will you need to implement the solution?
• Write a schedule that includes the start and completion times, and when you expect to see certain indicators of success.
• Who will primarily be responsible for ensuring implementation of the plan?
Write down your action plan.
Communicate the plan to those who will involve in implementing it and, at least, to your immediate supervisor. (An important this process is to continually observe and ask for feedback.)
Monitor implementation of the plan
Monitor the indicators of success:
• Are you seeing what you would expect from the indicators?
• Will the plan be done according to schedule? If the plan is not being followed as expected, then consider: Was the plan realistic?
• Are there sufficient resources to accomplish the plan on schedule? Should more priority be placed on various aspects of the plan? Should the plan be changed?
Verify if the problem has been resolved or not
One of the best ways to verify if a problem has been solved or not is to resume normal operations in the organization. Still, you should consider the following:
• What changes should be made to avoid this type of problem in the future?
• Consider changes to policies and procedures, training, etc.
• Consider "What did you learn from this problem solving?" Consider new knowledge, understanding and/or skills.
Consider writing a brief memo that highlights the success of the problem
Describe the problem and the solving efforts, and what you learned as a result. Share it with your manager-supervisor, peers and subordinates.
This critical step is often ignored. New managers are often focused on a getting "a lot done." This usually means identifying and solving problems. Experienced managers come to understand that
acknowledging and celebrating a solution to a problem can be also be as important as the solution itself. Without ongoing acknowledgement of success, employees can become doubtful and even
distrustful about true efforts in the organization.
For further information, call us, here at Searchtec Consulting Group.
Also, check www.searchtec1.com
for further information and details
|
{"url":"https://www.searchtec1.com/single-post/2017/09/19/problem-solving-and-decision-making-back-to-the-basics","timestamp":"2024-11-02T23:20:01Z","content_type":"text/html","content_length":"1050489","record_id":"<urn:uuid:bd649f44-44ff-4948-b08e-b3890f583288>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00303.warc.gz"}
|
Sponsored Search Auctions
Sponsored Search Auctions#
Used to auction ad slots on websites. Model:
1. There are \(N\) slots with slot \(j \in N\) having associated click-through rate \(\alpha_j\) which is assumed to only depend on \(j\) (so for instance, quality of the ad itself does not matter);
2. There are \(M\) advertisers with advertiser \(i \in M\) having a value of \(v_i\) per click;
3. An allocation is a function \(A: N \to M\) that matches the \(i\)th slot to the \(A(i)\)th advertiser;
4. Define social welfare to be
\[W(A) = \sum_j v_{A(j)} \alpha_j.\]
What is the optimal allocation \(A\) to maximize \(W\)?
(General Second Price Auction)
General Second Price Auction
1. Ask each advertiser for a bid \(b_i\);
2. Assign highest bid to first slot, second highest bid to second slot, etc;
3. For each slot \(j\), advertiser \(A(j)\) pays \(b_{A(j+1)}\).
With the special case of one slot \(j=1\), GSP is equivalent to the second price auction.
The allocation rule in step two maximizes $\(\sum_j b_{A(j)}\alpha_j.\)$ However, GSP is not strategyproof: some advertisers may prefer to win a lower slot at an even lower price than a higher slot
at a high price.
Vickrey-Clarke-Groves Mechanism#
The externality of agent \(i\) is the difference in utility for all other agents when \(i\) is present versus when \(i\) is not.
Suppose \(v_1 > v_2 > v_3\) and there are two slots. In a welfare-optimal outcome, \(1\) gets \(v_1 \alpha_1\) for the top slot, \(2\) gets \(v_2 \alpha_2\) for the top slot, and \(3\) gets nothing.
If \(1\) were not present, \(2\) gets \(v_2 \alpha_1\) and \(3\) gets \(v_3 \alpha_2\). Thus, \(1\)’s externality is
\[(v_2 \alpha_2) - (v_2 \alpha_1 + v_2 \alpha_1).\]
(Vickrey–Clarke–Groves Mechanism)
1. Ask each bidder for their valuation;
2. Find the welfare-maximizing allocation with respect to solicited bids in step \(1\);
3. Allocate slots via the welfare-maximizing allocation;
4. For each bidder \(i\):
1. Find the allocation that maximizes welfare for all agents other than \(i\);
2. Set \(i\)’s payment equal to the difference between how satisfied everyone else is when \(i\) is present versus when \(i\) is not.
The VCG auction is dominant strategy incentive compatible.
Proof. Bidder \(i\)’s payoff is
\[ \begin{split} v_i(X) - p_i(X) &= v_i(X) + \sum_{k \neq i} b_k(X) - \sum_{k \neq i} b_k(X^{-i}). \end{split} \]
However, \(\sum_{k \neq i} b_k(X^{-i})\) does not depend on the bid \(i\) submits so \(i\)’s maximization problem is equivalent to maximizing \(v_i(X) + \sum_{k \neq i} b_k(X)\). Since the VCG
mechanism already chooses the socially optimal outcome, it is a dominant strategy for all individuals to truthfully report.
In the context of sponsored search auctions, the assignment rule is still the same (highest bidder gets first slot, second highest bidder gets second slot, etc.) but payments are different. For slot
\(\alpha_j\), advertiser \(A(j)\) pays
\[\sum_{k > j} b_{A(k)}(\alpha_{k-1}-\alpha_k).\]
Another nice property of VCG auctions is that it is envy-free:
An assignment \((A,p)\) consisting of an allocation rule and prices is envy-free if for all advertisers \(i,j\), advertiser \(i\) does not envy advertiser \(j\): \(i\)’s utility is (weakly) greater
than \(i\)’s utility if they got \(j\)’s slot and paid \(j\)’s price.
In the context of sponsored search auctions, this means that
\[\alpha_{A^{-1}(i)}(v_i-p_i) \geq \alpha_{A^{-1}(j)}(v_i-p_j)\]
for all \(i,j \in M\).
Unnatural Equilibria#
Suppose there is a single item and there are two bidders with \(v_A = 10, v_B = 9\). One equilibrium is \(b_A = 7, b_B = 100\). Then, \(B\) wins the auction and pays \(\$7\). This is an equilibrium
(exercise for the reader to check this).
While no bidder has a profitable deviation, this outcome is not envy free: \(A\) would rather get \(B\)’s outcome than their current outcome. Adding the envy free requirement gets the following:
There is a correspondence between the following:
• Envy-free equilibria of the GSP action;
• Competitive market equilibria;
• Stable matchings between buyers and (price,good) pairs.
As such, we can get two immediate corollaries:
• We can efficiently find equilibria using deferred acceptance with prices.
• The buyer-optimal (seller-worst) equilibrium corresponds to VCG payments.
GSP vs VCG in Practice#
• In late 1990’s: Overture runs first price auctions;
• Early 2000’s: Google, Yahoo, Bing start running GSP auctions;
• Late 2000’s: Facebook runs auctions using VCG;
• Late 2010’s: Google switches back to first price auctions.
Why the switch back to first price auctions? Many advertisers participate in different auctions using the same bids, and given fixed bids a first price auction makes more money than GSP, which makes
more money than VCG.
Another trend: auctions are sensitive to bidder collusion.
• 2005: Bidding software must be authorized by search engine (easy to prevent collusion);
• Early 2010’s: most ad bidding is through a small number of agencies (bad for competition and revenue);
• Late 2010’s: ML based auto-bidders on Google, Bing, etc.
What about now? General move away from explicit auction rules:
• Auction details are highly optimized and hard to understand (a lot of things are ML);
• Big tech companies often know vales better than bidders (so they provide in-house bidders);
• Advertisers exploit the fact that different companies have to compete (less incentive to explain rules).
|
{"url":"https://flyingworkshop.github.io/incentives-in-computer-science/notes/lec9_sponsored_search.html","timestamp":"2024-11-13T02:41:48Z","content_type":"text/html","content_length":"31631","record_id":"<urn:uuid:0457dd4f-705c-430b-8d47-d521bbaa8b1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00573.warc.gz"}
|
RD Sharma Class 8 Solutions Chapter 17 Understanding Shapes III Ex 17.3
RD Sharma Class 8 Solutions Chapter 17 Understanding Shapes III (Special Types of Quadrilaterals) Ex 17.3
These Solutions are part of RD Sharma Class 8 Solutions. Here we have given RD Sharma Class 8 Solutions Chapter 17 Understanding Shapes III Ex 17.3
Other Exercises
Question 1.
Which of the following statements are true for a rectangle ?
(i) It has two pairs of equal sides.
(ii) It has all its sides of equal length.
(iii) Its diagonals are equal.
(iv) Its diagonals bisect each other.
(v) Its diagonals are perpendicular.
(vi) Its diagonals are perpendicular and bisect each other.
(vii) Its diagonals are equal and bisect each other.
(viii) Its diagonals are equal and perpendicular, and bisect each other.
(ix) All rectangles are squares.
(x) All rhombuses are parallelograms.
(xi) All squares are rhombuses and also rectangles.
(xii) All squares are not parallelograms.
(i) True.
(ii) False. (Only pair of opposite sides is equal)
(iii) True
(iv) True
(v) False (Diagonals are not perpendicular)
(vi) False (Diagonals are not perpendicular to each other)
(vii) True
(viii) False (Diagonals are equal but not perpendicular)
(ix) False (All rectangles are not square but a special type can be a square)
(x) True
(xi) True
(xii) False (All squares are parallelograms because their opposite sides are parallel and equal)
Question 2.
Which of the following statements are true for a square ?
(i) It is a rectangle.
(ii) It has all its sides of equal length.
(iii) Its diagonals bisect each other at right angle.
(iv) Its diagonals are equal to its sides.
(i) True
(ii) True
(iii) True
(iv) False (Each diagonal of a square is greater than its side)
Question 3.
Fill in the blanks in each of the following so as to make the statement true :
(i) A rectangle is a parallelogram in which ……..
(ii) A square is a rhombus in which ……….
(iii) A square is a rectangle in which ………
(i) A rectangle is a parallelogram in which one angle is right angle.
(ii) A square is a rhombus in which one angle is right angle.
(iii) A square is a rectangle in which adjacent sides are equal.
Question 4.
A window frame has one diagonal longer than the other. Is the window frame a rectangle ? Why or why not ?
No, it is not a rectangle as rectangle has diagonals of equal length.
Question 5.
In a rectangle ABCD, prove that ∆ACB = ∆CAD.
In rectangle ABCD, AC is its diagonal.
Now in ∆ACB and ∆CAD
AB = CD (Opposite sides of a rectangle)
BC = AD
AC = AC (Common)
∆ACB = ∆CAD (SSS condition)
Question 6.
The sides of a rectangle are in the ratio 2 : 3 and its perimeter is 20 cm. Draw the rectangle.
Perimeter of a rectangle = 20 cm
Ratio in the sides = 2 : 3
Let breadth (l) = 2x
Then length (b) = 3x
Perimeter = 2 (l + b)
⇒ 20 = 2 (2x + 3x)
⇒ 4x + 6x = 20
⇒ 10x = 20
⇒ x = \(\frac { 20 }{ 10 }\) = 2
Length = 3x = 3 x 2 = 6
and breadth = 2x = 2 x 2 = 4 cm
Steps of construction:
(i) Draw a line segment AB = 6 cm.
(ii) At A and B draw perpendicular AX and BY.
(iii) Cut off from AX and BY,
AD = BC = 4 cm.
(iv) Join CD.
Then ABCD is the required rectangle.
Question 7.
The sides of a rectangle are the ratio 4 : 5. Find its sides if the perimeter is 90 cm.
Perimeter of a rectangle = 90 cm.
Ratio in sides = 4 : 5
Let first side = 4x
Then second side = 5x
Perimeter = 2 (l + b)
⇒ 2 (4x + 5x) = 90
⇒ 2 x 9x = 90
⇒ 18x = 90
⇒ x = 5
First side = 4x = 4 x 5 = 20 cm
and second side = 5x = 5 x 5 = 25 cm
Question 8.
Find the length of the diagonal of a rectangle whose sides are 12 cm and 5 cm.
In rectangle ABCD, AB = 12 cm and AD = 5 cm
BD is its diagonal.
Now, in right angled ∆ABD,
BD² = AB² + AD² (Pythagoras theorem)
= (12)² + (5)² = 144 + 25 = 169 = (13)²
BD = 13 cm
Length of diagonal = 13 cm
Question 9.
Draw a rectangle whose one side measures 8 cm and the length of each of whose diagonals is 10 cm.
Steps of construction :
(i) Draw a line segment AB = 8 cm
(ii) At B, draw a perpendicular BX
(iii) With centre A and radius 10 cm, draw an arc which intersects BX at C.
(iv) With centre C and radius equal to AB and with centre A and radius equal to BC, draw arcs which intersect at D.
(v) Join AD, AC, CD and BD.
Then ABCD is the required rectangle.
Question 10.
Draw a square whose each side measures 4.8 cm.
Steps of construction :
(i) Draw a line segment AB = 4.8 cm.
(ii) At A and B, draw perpendiculars AX and BY.
(iii) Cut off AD = BC = 4.8 cm
(iv) Join CD.
Then ABCD is the required square.
Question 11.
Identify all the quadrilaterals that have:
(i) Four sides of equal length.
(ii) Four right angles.
(i) A quadrilateral whose four sides are equal can be a square or a rhombus.
(ii) A quadrilateral whose four angle are right angle each can be a square or a rectangle.
Question 12.
Explain how a square is
(i) a quadrilateral
(ii) a parallelogram
(iii) a rhombus
(iv) a rectangle ?
(i) A square is a quadrilateral as it has four sides and four angles.
(ii) A square is a parallelogram, because its opposite sides are parallel and equal.
(iii) A square is a rhombus because it has all sides equal and opposite sides are parallel.
(iv) A square is a rectangle as its opposite sides are equal and each angle is of 90°.
Question 13.
Name the quadrilaterals whose diagonals:
(i) bisect each other
(ii) are perpendicular bisector of each other
(iii) are equal.
(i) A quadrilateral whose diagonals bisect each other can be a square, rectangle, rhombus or a parallelogram.
(ii) A quadrilateral whose diagonals are perpendicular bisector of each other can be a square or a rhombus.
(iii) A quadrilateral whose diagonals are equal can be a square or a rectangle.
Question 14.
ABC is a right-angled triangle and O is the mid-point of the side opposite to the right angle. Explain why O is equidistant from A, B and C.
In ∆ABC, ∠B = 90°.
O is the mid-point of AC i.e. OA = OC.
BO is joined.
Now, we have to prove that OA = OB = OC
Produce BO to D such that OD = OB.
Join DC and DA.
In ∆AOB and ∆COD
OA = OC (O is the mid point of AC)
OB = OD (Construction)
∠AOB = ∠COD (Vertically opposite angles)
∆AOB = ∆COD (SAS condition)
AB = CD (c.p.c.t.) …..(i)
Similarly, we can prove that
∆BOC = ∆AOD
BC = AD …….(ii)
From (i) and (ii)
ABCD is a rectangle.
But diagonals of a rectangle bisect each other and are equal in length.
AC and BD bisect each other at O.
OA = OC = OB.
O is equidistant from A, B and C.
Question 15.
A mason has made a concrete slap. He needs it to be rectangular. In what different ways can he make sure that it is a rectangular ?
By definition, a rectangle has each angle of 90° and their diagonals are equal.
The mason will check the slab whether it is a rectangular in shape by measuring that
(i) its each angle is 90°
(ii) its both diagonals are equal.
Hope given RD Sharma Class 8 Solutions Chapter 17 Understanding Shapes III Ex 17.3 are helpful to complete your math homework.
If you have any doubts, please comment below. Learn Insta try to provide online math tutoring for you.
|
{"url":"https://ncertmcq.com/rd-sharma-class-8-solutions-chapter-17-understanding-shapes-iii-ex-17-3/","timestamp":"2024-11-12T03:29:46Z","content_type":"text/html","content_length":"65868","record_id":"<urn:uuid:fc9cb68a-2c2f-4fd5-9f7d-1d7e7eec8cba>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00636.warc.gz"}
|
What is .8 as a Fraction? - StuffSure
If you’re like most people, you probably think of a fraction as a number like 1/2 or 3/4. But what if you see a fraction like .8? What is .8 as a fraction?
.8 is actually a very simple fraction. It is equal to 8/10. So, if you’re ever wondering what .8 as a fraction is, now you know!
.8 as a fraction is equal to 8/10. This can be easily determined by using a calculator, or by mentally converting the decimal place into a fraction.
To convert .8 to a fraction, simply divide 8 by 10. The answer will be 8/10, or .8 as a fraction.
What is .8 as a Fraction?
.8 as a fraction is equal to 4/5. To convert .8 to a fraction, divide 8 by 10. The answer is 4/5 because 8 divided by 10 is equal to .8, which is equal to 4/5.
How to Convert .8 to a Fraction
To convert a decimal to a fraction, place the decimal number over a power of ten equal to the number of places after the decimal point. In other words, the number to the right of the decimal point is
the denominator. So, .8 becomes 8/10.
To sum it up, .8 as a fraction is equal to 4/5. This fraction can be simplified to 1/2 + 1/5, which is a pretty simple fraction to work with. You can calculate this by starting with 4/5 and then
adding 1/5 to it, which will give you the answer of 9/10.
|
{"url":"https://stuffsure.com/what-is-8-as-a-fraction/","timestamp":"2024-11-09T22:49:51Z","content_type":"text/html","content_length":"57815","record_id":"<urn:uuid:8e89c592-1fe7-452d-8c21-bf03af8297b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00028.warc.gz"}
|
Polar and radial charts with Power BI Core Visuals - EXPLORATIONS IN DATA STORYTELLING WITH POWER BI
Polar and radial charts with Power BI Core Visuals
Aside from the pie and donut charts, there is not much in the way of polar plots in the Power BI core visual set.
Polar plots are rarely used in business context. We might use radars in skills assessments, but others have a very specific use case or are used as a break from the mundane.
To hack a polar plot, on a cartesian chart, we need to calculate the polar positions of our datapoints, and transform them into cartesian coordinates.
A limitation we will reach, at the time of writing, is that lines between these points on a Power BI core visual will only plot left to right.
If we were to create a circular shape, we might create separate measures for positive and negative values.
And add logic to those measures to plot the first and last points
Another limitation is that of guidelines. To create circular guidelines, we would need to deselect the linear gridlines from the visual formatting options, and instead utilise a custom background
image, with circular guidelines, that fill the area of the plot neatly. This would also require setting the axes min and max values be have equal positive and negative magnitude.
Polar Scatter Plot
Creating a polar scatter is fairly simple.
In this example I’ve collected the dominant colours of Bob Ross paintings using python in Microsft Fabric Notebooks, converted these colours to HSL values.
HSL = Hue, Saturation and Lightness. Where hue is an angular dimension with values between 0 and 360 and saturation and lightness are values between 0 and 100
In Power BI, with the following data,
I can convert the hue values to X and Y coordinates with the following measures:
Angle (to convert degrees to radians)
Angle =
// degrees = 360/number of categories
// To convert from degrees to radians, multiply the number of degrees by π/180
X =
// x = rsinθ
var r = Max(dominanthslcolours[hsl.s]) // length to plot (saturation)
var theta = [Angle] // (hue in radians)
r * SIN([Angle])
Y =
// y = rcosθ
var r = Max(dominanthslcolours[hsl.s]) // length to plot (saturation)
var theta = [Angle] // (hue in radians)
r * COS([Angle])
These measures can then be dragged into a scatterplot and the point size set as percentage of colour dominance in each painting,
and the points to be conditionally formatted to RGB Values
The X and Y axis can be set, gridlines turned off, and a plot area background image of a colour wheel selected
To create a plot showing the dominance of colours along the hue and saturation dimensions. A third dimension will need to be used for lightness.
To learn more of the other visual analyses I conducted with this dataset, my presentation file can be downloaded here:
Radial Column
To create a radial column chart is a little more complex, and that will be featured in an upcoming blog.
|
{"url":"https://kerrykolosko.com/polar-and-radial-charts-with-power-bi-core-visuals/","timestamp":"2024-11-09T10:58:40Z","content_type":"text/html","content_length":"96258","record_id":"<urn:uuid:3c4cd430-53d7-486c-851f-881b951d6562>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00237.warc.gz"}
|
Volume 3, 2024 of International J.Mathematical Combinatorics is Released Today
The Volume 3, 2024 of International J.Mathematical Combinatorics is released today. There are 12 papers published in this issue, including
1. Fuzzy Product Rule for Solving Fully Fuzzy Linear Systems by Tahir Ceylan
2. Geometry of Chain of Spheres Inside an Ellipsoidal Fragment
by Abhijit Bhattacharya, Kamlesh Kumar Dubey and Arindam Bhattacharyya
3. On Derivative of Eta Quotients of Levels 12 and 16 by K. R. Vasuki, P. Nagendra and P. Divyananda
4. General Connectivity Entropies of Certain Interconnection Networks}
by Yanyan Ge and Zhen Lin
5. Pair Mean Cordial Graphs Paired with Ladder
by R. Ponraj and S. Prabhu
6.On Generalized Integral Type $\alpha-\tilde{\mathcal{F}}$ Contraction Mappings
in Partial Metric Spaces by Heeramani Tiwari and Padmavati
7. On Modified Maximum Degree Energy of Graph and
HDR Energy of Graph by Raju S., Puttaswamy and Nayaka S. R.
8. On Grundy Coloring of Degree Splitting Graphs
by R. Pavithra and D. Vijayalakshmi
9. Generalized Perfect Neighborhood Number of a Graph by C. Nandeeshkumar
10. Pair Difference Cordial Number of Some Degree Splitting Graph
by R. Ponraj and A. Gayathri
11. A Counterexample to a Theorem about Orthogonal Latin Squares
by Zhiguo Ding and Michael E. Zieve
12. Corrigendum: Variations of Orthogonality of Latin Squares
by Vadiraja Bhatta G. R. and B.R.Shankar
|
{"url":"http://mathcombin.com/article/?fid=0cb18e50dc77479cb5a0b2b63425b6f2","timestamp":"2024-11-07T10:33:21Z","content_type":"text/html","content_length":"7796","record_id":"<urn:uuid:477f0031-5f1a-47ec-b00e-f837ffeea652>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00001.warc.gz"}
|
Find the area of an equilateral triangle that has sides equal t... | Filo
Find the area of an equilateral triangle that has sides equal to .
Not the question you're searching for?
+ Ask your question
Let , and be the vertices of the equilateral triangle and the midpoint of segment .
Because the triangle is equilateral, is a right triangle.
Let us find , the height of using the Pythagorean theorem.
Solving the above equation for leads to:
We now find the area using the formula:
Was this solution helpful?
Found 8 tutors discussing this question
Discuss this question LIVE
8 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice questions from Triangles in the same exam
Practice more questions from Triangles
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Find the area of an equilateral triangle that has sides equal to .
Topic Triangles
Subject Mathematics
Class Grade 12
Answer Type Text solution:1
Upvotes 53
|
{"url":"https://askfilo.com/mathematics-question-answers/find-the-area-of-an-equilateral-triangle-that-has-sides-equal-to-10-mathrm~cm","timestamp":"2024-11-15T04:27:47Z","content_type":"text/html","content_length":"291384","record_id":"<urn:uuid:708b108e-1af8-4df2-b4bf-5c28f952441a>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00602.warc.gz"}
|
What is: Algebra Of Random Variables
Understanding Algebra of Random Variables
The Algebra of Random Variables is a fundamental concept in probability theory and statistics that deals with the manipulation and combination of random variables. It provides a framework for
understanding how random variables interact, allowing statisticians and data scientists to derive new random variables from existing ones through various algebraic operations. This algebraic approach
is essential for modeling complex systems and analyzing data in fields such as data science, machine learning, and statistical inference.
Random Variables: A Brief Overview
A random variable is a numerical outcome of a random phenomenon. It can be classified into two main types: discrete and continuous. Discrete random variables take on a countable number of values,
while continuous random variables can assume an infinite number of values within a given range. Understanding the nature of random variables is crucial for applying the algebraic operations that
follow, as these operations often depend on the type of random variable being analyzed.
Basic Operations in Algebra of Random Variables
The primary operations in the Algebra of Random Variables include addition, subtraction, multiplication, and division. These operations allow for the creation of new random variables from existing
ones. For instance, if X and Y are two random variables, the sum Z = X + Y is also a random variable. Each operation has specific implications for the distribution and expected value of the resulting
random variable, which is critical for accurate data analysis.
Expectation and Variance in Random Variables
Expectation and variance are two key concepts in the Algebra of Random Variables. The expectation, or mean, of a random variable provides a measure of its central tendency, while variance measures
the spread or dispersion of the variable’s possible values. When performing algebraic operations on random variables, it is essential to understand how these metrics change. For example, the
expectation of the sum of two random variables is the sum of their expectations, while the variance of the sum depends on whether the variables are independent.
Independence and Its Role in Algebra
Independence is a critical concept in probability that significantly affects the Algebra of Random Variables. Two random variables are considered independent if the occurrence of one does not
influence the occurrence of the other. This property simplifies many algebraic operations, particularly in calculating the joint distribution of independent variables. Understanding independence is
vital for accurate modeling and analysis in statistics and data science.
Transformations of Random Variables
Transformations of random variables involve applying functions to random variables to create new ones. Common transformations include linear transformations, which can be expressed in the form Y = aX
+ b, where a and b are constants. These transformations are essential for adjusting the scale and location of random variables, making them a powerful tool in data analysis and statistical modeling.
Joint and Marginal Distributions
In the context of the Algebra of Random Variables, joint and marginal distributions play a significant role. The joint distribution describes the probability distribution of two or more random
variables simultaneously, while marginal distributions provide the probabilities of individual random variables. Understanding these distributions is crucial for performing algebraic operations and
for analyzing the relationships between multiple random variables.
Conditional Expectation and Its Importance
Conditional expectation is another vital concept in the Algebra of Random Variables. It refers to the expected value of a random variable given that another random variable takes on a specific value.
This concept is essential for understanding how random variables interact and for making predictions based on available data. Conditional expectation is widely used in various applications, including
regression analysis and Bayesian statistics.
Applications in Data Science and Statistics
The Algebra of Random Variables has numerous applications in data science and statistics. It is used in various fields, including finance, engineering, and social sciences, to model uncertainty and
make informed decisions based on data. By understanding the algebraic relationships between random variables, data scientists can develop more accurate predictive models and perform robust
statistical analyses.
Conclusion: The Significance of Algebra of Random Variables
The Algebra of Random Variables is a cornerstone of probability theory and statistics, providing essential tools for data analysis and interpretation. By mastering the concepts and operations within
this algebraic framework, statisticians and data scientists can enhance their analytical capabilities and derive meaningful insights from complex datasets.
|
{"url":"https://statisticseasily.com/glossario/what-is-algebra-of-random-variables/","timestamp":"2024-11-05T06:10:01Z","content_type":"text/html","content_length":"138696","record_id":"<urn:uuid:5878396a-7ebf-4dc9-876d-e219fc102028>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00731.warc.gz"}
|