Logic for Computer Science/Applications - Wikibooks, open books for an open world
In the following we briefly consider some applied problems where the expressibility of languages matters. It quickly becomes clear that FO does not suffice in many cases. To overcome the restrictions
of FO there exist many extensions, such as fixed-point logic, counting logic, miscellaneous flavours of second-order logic, etc., which are not covered in this chapter.
Database Theory: SQL
Relations have named columns called attributes
frequents | drinker, bar
serves | bar, beer
likes | drinker, beer
Structured Query Language (SQL):
Relational calculus, a variant of FO on a relational vocabulary (no functions, just relations).
Here constants have a fixed interpretation, which is slightly different from FO logic.
Examples of relational queries:
(i) Find all bars that serve Bud
${\displaystyle \{b:bar|serves(b,Bud)\}}$
b is a free variable, Bud is a constant.
(ii) Find the drinkers who frequent some bar that serves Bud
${\displaystyle \{d:drinker|\exists b(frequents(d,b)\wedge serves(b,Bud))\}}$
(iii) Find the drinkers who frequent only bars serving Bud
${\displaystyle \{d:drinker|\forall b[frequents(d,b)\longrightarrow serves(b,Bud)]\}}$
(iv) Find drinkers who frequent only bars serving some beer they like
${\displaystyle \{d:drinker|\forall b(freq(d,b)\longrightarrow \exists c(serves(b,c)\wedge likes(d,c)))\}}$
SQL Representation of these queries:
SELECT s.bar
FROM serves s
WHERE s.beer = 'Bud'
SELECT f.drinker
FROM freq f, serves s
WHERE f.bar = s.bar and s.beer = 'Bud'
SELECT DISTINCT f.drinker
FROM frequents f
WHERE f.drinker NOT IN
(SELECT f2.drinker
FROM frequents f2
WHERE f2.bar NOT IN
(SELECT bar
FROM serves
WHERE beer = 'Bud'))
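Query (iv) has no SQL counterpart in the list above. One way to express "drinkers who frequent only bars serving some beer they like" is with a nested NOT EXISTS; the following sketch runs it against a tiny in-memory SQLite database (the sample drinkers, bars, and beers are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE frequents(drinker TEXT, bar TEXT);
CREATE TABLE serves(bar TEXT, beer TEXT);
CREATE TABLE likes(drinker TEXT, beer TEXT);
INSERT INTO frequents VALUES ('alice','joes'), ('bob','moes'),
                             ('carol','joes'), ('carol','moes');
INSERT INTO serves VALUES ('joes','Bud'), ('moes','Miller');
INSERT INTO likes VALUES ('alice','Bud'), ('bob','Bud'), ('carol','Miller');
""")

# (iv): exclude every drinker who frequents at least one bar where
# no served beer is liked by that drinker.
rows = con.execute("""
SELECT DISTINCT f.drinker
FROM frequents f
WHERE f.drinker NOT IN (
  SELECT f2.drinker
  FROM frequents f2
  WHERE NOT EXISTS (
    SELECT 1
    FROM serves s JOIN likes l ON s.beer = l.beer
    WHERE s.bar = f2.bar AND l.drinker = f2.drinker))
""").fetchall()
print(rows)  # with this sample data only alice qualifies: [('alice',)]
```

Bob fails because moes serves nothing he likes, and carol fails because joes serves nothing she likes.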
Relational Algebra
Relational algebra is an alternative representation language underlying SQL. What you can represent using relational algebra is exactly what you can represent in FO logic. It consists of simple
operations on relations that allow you to specify queries.
Main Operations
${\displaystyle \Pi }$ Projection : Projects a relation onto a subset of its columns (attributes).
${\displaystyle \sigma }$ Selection : Selects a subset of tuples from a particular relation, based upon a specified selection condition.
${\displaystyle \cup }$ , ${\displaystyle -}$ Union, Diff : similar to the set operations.
${\displaystyle \bowtie }$ Join : allows you to combine tuples from two relations.
Rename : renames attribute A to B.
Expressions built from these operations are called relational algebra queries. Whatever we can express in relational algebra we can also express in FO.
(i) ${\displaystyle \Pi _{bar}(\sigma _{beer=Bud}(serves))}$
(ii) ${\displaystyle \Pi _{dr}(freq\bowtie \sigma _{beer=Bud}(serves))}$
(iii) ${\displaystyle \Pi _{d}(freq)-\Pi _{d}[freq\bowtie (\Pi _{bar}(freq)-\Pi _{bar}(\sigma _{beer=Bud}(serves)))]}$
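The main operations can be sketched directly in Python over relations represented as lists of attribute dictionaries; query (ii) then composes them. The data values are illustrative, and this is a naive sketch, not an efficient implementation:

```python
def select(rel, pred):
    # sigma: keep the tuples satisfying the condition
    return [t for t in rel if pred(t)]

def project(rel, attrs):
    # Pi: restrict each tuple to the given attributes, dropping duplicates
    seen, out = set(), []
    for t in rel:
        row = tuple((a, t[a]) for a in attrs)
        if row not in seen:
            seen.add(row)
            out.append(dict(row))
    return out

def join(r, s):
    # natural join: combine tuples agreeing on all shared attributes
    return [{**t, **u} for t in r for u in s
            if all(t[a] == u[a] for a in set(t) & set(u))]

freq = [{"drinker": "alice", "bar": "joes"},
        {"drinker": "bob", "bar": "moes"}]
serves = [{"bar": "joes", "beer": "Bud"},
          {"bar": "moes", "beer": "Miller"}]

# (ii) drinkers who frequent some bar that serves Bud
result = project(join(freq, select(serves, lambda t: t["beer"] == "Bud")),
                 ["drinker"])
print(result)  # [{'drinker': 'alice'}]
```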
Expressive Power
Graph properties correspond to properties of data structures in relational databases (see the chapter on database queries), e.g.:
• Consider a company database that contains all managers together with the 'is superordinate' relation amongst them. In a proper hierarchy the database should contain no cycles, i.e. a manager
cannot be a superordinate of his own superordinate. Querying this corresponds to checking a graph for cycles. As shown above, this cannot be done in FO.
• Say two managers want to find out whether one of them is more powerful than the other. So they want to query the database whether the numbers of their subordinates are equal, i.e. whether the cardinalities of the
sets of subordinates (say, direct and indirect) are equal. This cannot be done in FO ("FO cannot count"). This is the reason why SQL is extended by a counting function.
• Consider a database of airports and connecting flights among them. In order to query the direct reachability of airport b from airport a we can write
${\displaystyle q_{0}(a,b)=F(a,b)}$ .
Now in order to query connections with one change of plane we write
${\displaystyle q_{1}(a,b)=\exists _{c}(F(a,c)\land F(c,b))}$ and get ${\displaystyle Q_{1}(a,b)=q_{0}(a,b)\lor q_{1}(a,b)}$
for connections with zero or one change. Thus in order to extend this to reachability (no matter how many changes) we would have to write
${\displaystyle \bigvee _{k\in \mathbb {N} }q_{k}}$
which is not an FO expression. So we are fine with FO for restricted reachability up to a certain k, but not for reachability as it appears in graph theory. In fact it can be shown that reachability
cannot be queried in FO.
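This is precisely the gap that fixed-point extensions of FO close; in practice it shows up in SQL as WITH RECURSIVE. A sketch over SQLite, with an illustrative flight relation F(a, b) whose airports are just numbered for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE F(a INTEGER, b INTEGER);
INSERT INTO F VALUES (1, 2), (2, 3), (3, 4);
""")

# Reachability as a least fixed point: start from direct flights and
# repeatedly extend by one more leg -- not expressible in plain FO,
# but expressible with recursion.
rows = con.execute("""
WITH RECURSIVE reach(a, b) AS (
  SELECT a, b FROM F
  UNION
  SELECT r.a, f.b FROM reach r JOIN F f ON r.b = f.a)
SELECT b FROM reach WHERE a = 1 ORDER BY b
""").fetchall()
print(rows)  # airports reachable from 1: [(2,), (3,), (4,)]
```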
Descriptive Complexity Theory
As mentioned above, Hamiltonicity cannot be expressed in FO. So now one can think of an extension of FO in order to express this property. This can be done like
${\displaystyle \exists L\exists S(isLinOrd(L)\land isSucRelOf(S,L)\land \forall _{x}\exists _{y}(L(x,y)\lor L(y,x))\land \forall _{x}\forall _{y}(S(x,y)\implies E(x,y)))}$
where the quantifiers on the left state the existence of the binary relations L and S that satisfy the formula on the right. The relation isLinOrd states that L is a linear order and isSucRelOf means
that S is the successor relation of L. Both can be expressed in FO. The pattern above, second-order existential quantifiers followed by a first-order formula, is called existential second-order logic.
Now it is well known that Hamiltonicity is an NP-complete problem, and one can ask: is there a natural connection between NP and second-order logic? Indeed, there is a very amazing one: existential
second-order logic corresponds exactly (!) to the class NP! This result is known as Fagin's theorem; it has led to the new area of descriptive complexity, where complexity
classes are described by means of logical formalisms.
2. Phezzan uses vAMM model, how does it work?
Written by the Phezzan team
First, what is an AMM? From the website of Uniswap (first ever AMM in the world):
An automated market maker (AMM) is a smart contract on a blockchain that holds on-chain liquidity reserves. Users can trade against these reserves at prices set by an automated market making formula.
When you buy ETH from an ETH-USDC pool on an AMM, an on-chain algorithm determines how much ETH you get.
In most Automated Market Maker (AMM) DEXes, each pool maintains an x * y = k invariant, where x and y are the amounts of the quote token and the base token in the market. For example, in an ETH-USDC pool, if the
initial ETH amount is 10 and the initial USDC amount is 1000, then k = x * y = 10 * 1000 = 10000. Unless someone adds or removes liquidity, k stays constant.
Phezzan Protocol uses a virtual AMM (vAMM). That is, there is no real liquidity in the pool. When you want to go 2x long on ETH in the ETH-USDC pool, our on-chain algorithm determines how much virtual ETH
you get. Since this ETH is virtual, it is not worth anything outside of Phezzan.
Since the assets in the liquidity pool are virtual, as an LP, as long as you have enough collateral (whether USDC, ETH, stETH, etc.), you can provide liquidity to any pool. You don't actually need to
own the underlying tokens of the pool. For example, you don't need either ETH or USDC to provide liquidity for ETH-USDC. As long as you have enough free collateral, you can provide liquidity.
There is no better way to illustrate the trading process than by giving a concrete example. Since there are many kinds of AMM algorithms, for simplicity, let's assume the AMM uses the x * y = k algorithm.
In this example, Alice will trade with ETH-USDC pool. The initial state of ETH-USDC virtual AMM will be at 100 virtual ETH (vETH) and 10,000 virtual USDC (vUSDC).
Alice deposits 100 USDC and market buys ETH with 2x leverage.
Since she is going 2x long, the clearing house will mint 100 x 2 = 200 vUSDC. The clearing house will also record that Alice contributed 100 USDC and has a cost basis of 200 vUSDC.
The amount of virtual ETH in the AMM after Alice's order can be calculated from the x * y = k invariant.
100 vETH x 10,000 vUSDC = x * (10,000 + 200) vUSDC
x = 98.04 vETH (rounded to the hundredth decimal for simplicity in this example; in reality it won't be rounded)
So Alice will receive 100 - 98.04 = 1.96 vETH.
After three days, ETH's price goes up. Now the AMM holds 11,000 vUSDC and 90.91 vETH. Alice sends her 1.96 vETH to the clearing house to close her position.
The amount of vUSDC in the AMM after Alice's order can be calculated from the x * y = k invariant.
90.91 vETH x 11,000 vUSDC = (90.91 + 1.96) vETH * x
x = 10767.85 vUSDC (again, rounded to hundredth decimal for simplicity)
So the change in vUSDC is 11,000 - 10,767.85 = 232.15 vUSDC. Since Alice's cost basis is 200 vUSDC, Alice's PnL is 232.15 vUSDC - 200 vUSDC = 32.15 vUSDC. So Alice can get back 32.15 USDC as her
profit, along with her 100 USDC of collateral.
So, Alice now has a balance of: 32.15 USDC + 100 USDC = 132.15 USDC
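The arithmetic of Alice's example can be replayed in a few lines of Python (the two-decimal rounding mirrors the text and is for presentation only):

```python
# Initial virtual pool: 100 vETH, 10,000 vUSDC
k = 100 * 10_000

# Alice deposits 100 USDC at 2x leverage -> 200 vUSDC are minted
cost_basis = 100 * 2
v_eth_after_buy = round(k / (10_000 + cost_basis), 2)  # 98.04
alice_veth = round(100 - v_eth_after_buy, 2)           # 1.96 vETH received

# Three days later, the pool state given in the text:
v_eth, v_usdc = 90.91, 11_000
v_usdc_after_close = round(v_eth * v_usdc / (v_eth + alice_veth), 2)  # 10767.85

pnl = round(v_usdc - v_usdc_after_close - cost_basis, 2)  # 32.15
balance = round(100 + pnl, 2)                             # 132.15
print(alice_veth, pnl, balance)
```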
7.12 Retrieving infeasibility certificates
When a continuous problem is declared as primal or dual infeasible, MOSEK provides a Farkas-type infeasibility certificate. If, as it happens in many cases, the problem is infeasible due to an
unintended mistake in the formulation or because some individual constraint is too tight, then it is likely that infeasibility can be isolated to a few linear constraints/bounds that mutually
contradict each other. In this case it is easy to identify the source of infeasibility. The tutorial in Sec. 9.3 (Debugging infeasibility) has instructions on how to deal with this situation and
debug it by hand. We recommend Sec. 9.3 (Debugging infeasibility) as an introduction to infeasibility certificates and how to deal with infeasibilities in general.
Some users, however, would prefer to obtain the infeasibility certificate using Fusion API for C++, for example in order to repair the issue automatically, display the information to the user, or
perhaps simply because the infeasibility was one of the intended outcomes that should be analyzed in the code.
In this tutorial we show how to obtain such an infeasibility certificate with Fusion API for C++ in the most typical case, that is when the linear part of a problem is primal infeasible. A
Farkas-type primal infeasibility certificate consists of the dual values of linear constraints and bounds. Each of the dual values (multipliers) indicates that a certain multiple of the corresponding
constraint should be taken into account when forming the collection of mutually contradictory equalities/inequalities.
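Concretely, for a linear system the certificate takes the classical Farkas form (stated here in generic notation, not MOSEK's):

```latex
% The system A x \leq b is primal infeasible if and only if
% there exists a multiplier vector y with
y \ge 0, \qquad A^{\mathsf T} y = 0, \qquad b^{\mathsf T} y < 0,
% since then 0 = y^{\mathsf T} A x \le y^{\mathsf T} b < 0,
% a contradiction; the nonzero entries of y mark the mutually
% contradictory constraints.
```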
7.12.1 Example PINFEAS
For the purpose of this tutorial we use the same example as in Sec. 9.3 (Debugging infeasibility), that is the primal infeasible problem
Creating the model
In order to fetch the infeasibility certificate we must have access to the objects representing both variables and constraints after optimization. We will implement the problem as having two linear
constraints s and d of dimensions 3 and 4, respectively.
// Construct the sample model from the example in the manual
auto sMat = Matrix::sparse(3, 7, new_array_ptr<int,1>({0,0,1,1,2,2,2}),
auto sBound = new_array_ptr<double,1>({200, 1000, 1000});
auto dMat = Matrix::sparse(4, 7, new_array_ptr<int,1>({0,0,1,2,2,3,3}),
auto dBound = new_array_ptr<double,1>({1100, 200, 500, 500});
auto c = new_array_ptr<double,1>({1, 2, 5, 2, 1, 2, 1});
Model::t M = new Model("pinfeas"); auto _M = finally([&]() { M->dispose(); });
Variable::t x = M->variable("x", 7, Domain::greaterThan(0));
Constraint::t s = M->constraint("s", Expr::mul(sMat, x), Domain::lessThan(sBound));
Constraint::t d = M->constraint("d", Expr::mul(dMat, x), Domain::equalsTo(dBound));
M->objective(ObjectiveSense::Minimize, Expr::dot(c,x));
Checking infeasible status and adjusting settings
After the model has been solved we check that it is indeed infeasible. If yes, then we choose a threshold for when a certificate value is considered as an important contributor to infeasibility
(ideally we would like to list all nonzero duals, but just like an optimal solution, an infeasibility certificate is also subject to floating-point rounding errors). Finally, we declare that we are
interested in retrieving certificates and not just optimal solutions by calling Model.acceptedSolutionStatus, see Sec. 8.1.4 (Retrieving solution values). All these steps are demonstrated in the
snippet below:
// Check problem status
if (M->getProblemStatus() == ProblemStatus::PrimalInfeasible) {
// Set the tolerance at which we consider a dual value as essential
double eps = 1e-7;
// We want to retrieve infeasibility certificates
Going through the certificate for a single item
We can define a fairly generic function which takes an array of dual values and all other required data and prints out the positions of those entries whose dual values exceed the given threshold.
These are precisely the values we are interested in:
//Analyzes and prints infeasibility certificate for a single object,
//which can be a variable or constraint
static void analyzeCertificate(std::string name, // name of the analyzed object
long size, // size of the object
std::shared_ptr<ndarray<double, 1>> duals, // actual dual values
double eps) // tolerance determining when a dual value is considered important
{
for(int i = 0; i < size; i++) {
if (abs((*duals)[i]) > eps)
std::cout << name << "[" << i << "], dual = " << (*duals)[i] << std::endl;
}
}
Full source code
All that remains is to call this function for all variable and constraint bounds for which we want to know their contribution to infeasibility. Putting all these pieces together we obtain the
following full code:
#include <iostream>
#include "fusion.h"
using namespace mosek::fusion;
using namespace monty;
//Analyzes and prints infeasibility certificate for a single object,
//which can be a variable or constraint
static void analyzeCertificate(std::string name, // name of the analyzed object
long size, // size of the object
std::shared_ptr<ndarray<double, 1>> duals, // actual dual values
double eps) // tolerance determining when a dual value is considered important
{
for(int i = 0; i < size; i++) {
if (abs((*duals)[i]) > eps)
std::cout << name << "[" << i << "], dual = " << (*duals)[i] << std::endl;
}
}
int main(int argc, char ** argv)
{
// Construct the sample model from the example in the manual
auto sMat = Matrix::sparse(3, 7, new_array_ptr<int,1>({0,0,1,1,2,2,2}),
auto sBound = new_array_ptr<double,1>({200, 1000, 1000});
auto dMat = Matrix::sparse(4, 7, new_array_ptr<int,1>({0,0,1,2,2,3,3}),
auto dBound = new_array_ptr<double,1>({1100, 200, 500, 500});
auto c = new_array_ptr<double,1>({1, 2, 5, 2, 1, 2, 1});
Model::t M = new Model("pinfeas"); auto _M = finally([&]() { M->dispose(); });
Variable::t x = M->variable("x", 7, Domain::greaterThan(0));
Constraint::t s = M->constraint("s", Expr::mul(sMat, x), Domain::lessThan(sBound));
Constraint::t d = M->constraint("d", Expr::mul(dMat, x), Domain::equalsTo(dBound));
M->objective(ObjectiveSense::Minimize, Expr::dot(c,x));
// Useful for debugging
M->setLogHandler([ = ](const std::string & msg) { std::cout << msg << std::flush; } );
// Solve the problem
M->solve();
// Check problem status
if (M->getProblemStatus() == ProblemStatus::PrimalInfeasible) {
// Set the tolerance at which we consider a dual value as essential
double eps = 1e-7;
// We want to retrieve infeasibility certificates
// Go through variable bounds
std::cout << "Variable bounds important for infeasibility: " << std::endl;
analyzeCertificate("x", x->getSize(), x->dual(), eps);
// Go through constraint bounds
std::cout << "Constraint bounds important for infeasibility: " << std::endl;
analyzeCertificate("s", s->getSize(), s->dual(), eps);
analyzeCertificate("d", d->getSize(), d->dual(), eps);
}
else {
std::cout << "The problem is not primal infeasible, no certificate to show" << std::endl;
}
return 0;
}
Running this code will produce the following output:
Variable bounds important for infeasibility:
x[5], dual = 1.0
x[6], dual = 1.0
Constraint bounds important for infeasibility:
s[0], dual = -1.0
s[2], dual = -1.0
d[0], dual = 1.0
d[1], dual = 1.0
indicating the positions of bounds which appear in the infeasibility certificate with nonzero values.
How much does a down payment lower a mortgage - Dollar Keg
A down payment is a percentage of the price of your house. If you don't have much money saved, a lender might accept a smaller down payment from you, but this can make it harder to buy a house. Get help
with this decision by asking some questions about the minimum down payment for a house, the advantages and disadvantages of a large down payment, and how much down payment you need for a 200k house or 500k
house.
A down payment is a portion of the cash you use to buy a house. To buy a house without a large down payment, ask the lender what the minimum down payment is for your situation. If you have extra
money, consider putting a larger down payment on your house. We can help you find more information about this topic by answering questions like "What is the minimum down payment?", "What are the
advantages and disadvantages of large down payments?", and "What is the amount of down payment needed to buy a 200k or 500k property?"
If a down payment is less than 20%, you might have to pay mortgage insurance. A high down payment lowers the risk for the lender, and they might not require PMI on your loan. By
asking questions about the minimum down payment, and about the amount of down payment needed for a house at a certain price, you can get the information you need to make your decision.
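As a rough numerical illustration of the percentages above (the 20% PMI threshold is the conventional rule of thumb mentioned in this article; actual lender requirements vary):

```python
def down_payment(price, pct):
    """Cash due up front and whether PMI is typically required (pct < 20%)."""
    amount = price * pct
    pmi_likely = pct < 0.20
    return amount, pmi_likely

for price in (200_000, 500_000):
    for pct in (0.05, 0.20):
        amount, pmi = down_payment(price, pct)
        print(f"${price:,} house, {pct:.0%} down -> ${amount:,.0f}"
              f"{' (PMI likely)' if pmi else ''}")
# $200,000 house, 5% down -> $10,000 (PMI likely)
# $200,000 house, 20% down -> $40,000
# $500,000 house, 5% down -> $25,000 (PMI likely)
# $500,000 house, 20% down -> $100,000
```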
A down payment is a certain amount of money you pay to get the house. If you don’t have enough money, you can ask for less down payment from the seller or from your bank.
A down payment is a percentage of the cost of a property. It means you do not have to spend all your money at once. Once you have saved up that amount, you pay it to the seller as part of
your offer.
As a percentage of the total cost, it is one of the factors that determine the overall price.
If you’re thinking about buying a house, the decision can be a very stressful one. The right down payment could make all the difference between being able to afford your dream home or not. In this
article, we’ll walk you through what is the minimum down payment for a house and do our best to answer some of the most asked questions.
What is the minimum down payment for a house? You might be asking yourself this question. If you’ve strolled around your local real estate office or looked at some real estate websites, you’ve
probably noticed the price of the home will include a money down payment of 20% to 35%. However, there are many more variables to consider when determining whether you should put down 20% or more
than that on a house, including taxes and insurance. Even if you don’t have much in savings to use as a down payment, there are other ways to reduce your mortgage costs. Here are some tips we may not
have mentioned before for reducing mortgage costs to make it possible for you to buy a home without having equal debt obligations.
You’ve probably heard how the minimum down payment for a house is 20% of the cost, but there’s much more to consider when it comes to determining how much money you should use for your down payment.
In general, most mortgage lenders will recommend putting at least 20% of the home price as a down payment because it assures that you can afford the full amount of your mortgage payment each month.
However, there are some other things you should consider when determining how much money to put toward your down payment, including taxes and insurance. Here are some tips we may not have mentioned
before that could reduce your mortgage costs and make it easier to own your own home without being underwater on your debt.
The best way to start thinking about mortgage costs is to consider how much money you’ll need in your savings account to qualify for the loan. If you’ve already gotten pre-approved for a loan or know
the approximate value of the home you’d like to purchase, understanding your price range will help you determine how much should be in your savings account as a down payment. First, you have to come
up with 5% of the full price of the home in order to make your closing costs fit the requirements or 20% for FHA loans. However, there are other ways to reduce your mortgage costs. For example,
making sure that the mortgage company knows that you have a certain amount of cash available within the next few months can let them know that if they charge you lower fees, you’ll be able to pay
more upfront.
How much down payment for a 200k house
Home buying is a major investment, and you need to know what kind of house you can afford. Are you thinking about putting down 20% for your down payment? If not, there are several ways to reduce your
mortgage costs even if you don’t have a lot of savings.
You might wonder how much you should pay when buying a home or even if it is possible to buy a home without making a deposit. You might want to take a look at this article because we’ve found
significant information that may surprise you!
The easy part is figuring out the initial down payment. What you may not know, however, is that there are several different types of loans and other ways to pay for your home. Below, find a few tips
you might like to try!
The minimum down payment for a house varies from state to state. Also, the size of the down payment also comes into play when it comes to how much house loan you have access to.
How much down payment is needed for a house? I was asked that question a few times during my job as a real estate agent and I think it’s worth a blog post.
But how much down payment is too much? That's the question on everyone's lips when they're wondering whether or not they should put 10% down on their house. The thing is, it really depends on a lot
of factors, and there are always pros and cons to each situation. There are many hard numbers that you need to consider when buying a house, but knowing your personal situation is just as important
(if not more). If you have realized that you will be faced with hefty monthly house payments in the future, maybe that's reason enough to start thinking about alternatives. But if this doesn't sound
like your case, then it might be best to avoid making such a large down payment.
What is the minimum down payment? How much is a down payment? We can answer these questions too. In this article, we will look at how much is a down payment for a house, what are the disadvantages of
large down payments and how do you calculate the down payment for a house.
Hi, I found your question very interesting and I wanted to go ahead and share my point of view. If you want to know how much down payment is required for a house, then please make sure to read this
post because it has all the answers that you need.
The first step when buying a home is to get an idea of how much house you can afford. You need to know the minimum down payment one must put in; otherwise, you may not be able to purchase the house
that you have dreamed of owning.
If you’re about to buy a new house and you don’t know what the down payment should be, then this post is for you.
A large down payment may help you make your first home purchase more affordable, but it can also make the process of buying a house take longer than anticipated.
Interest rates are rising, and this is making buying a home extremely expensive, especially for first-time home buyers. To make matters worse, two factors make it harder for first-time
buyers to get approved for a loan: the minimum down payment on the home and other requirements.
On the core of a unicyclic graph
A set S ⊆ V is independent in a graph G = (V, E) if no two vertices from S are adjacent. By core(G) we mean the intersection of all maximum independent sets. The independence number α(G) is the
cardinality of a maximum independent set, while μ(G) is the size of a maximum matching in G. A connected graph having only one cycle, say C, is a unicyclic graph. In this paper we prove that if G is
a unicyclic graph of order n and n - 1 = α(G) + μ(G), then core(G) coincides with the union of the cores of all trees in G - C.
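A brute-force sanity check of the statement's setting on the 5-cycle C₅, where n - 1 = 4 = α + μ, the forest G - C is empty, and indeed the core is empty. The code is illustrative only (exponential search, suitable only for tiny graphs):

```python
from itertools import combinations

def is_independent(S, edges):
    return not any(u in S and v in S for u, v in edges)

def maximum_independent_sets(n, edges):
    # all independent sets of the largest possible size
    for k in range(n, -1, -1):
        found = [set(c) for c in combinations(range(n), k)
                 if is_independent(set(c), edges)]
        if found:
            return found

def maximum_matching_size(edges):
    # largest set of pairwise vertex-disjoint edges
    for k in range(len(edges), 0, -1):
        for c in combinations(edges, k):
            ends = [v for e in c for v in e]
            if len(ends) == len(set(ends)):
                return k
    return 0

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # the unicyclic graph C5
mis = maximum_independent_sets(5, edges)
alpha, mu = len(mis[0]), maximum_matching_size(edges)
core = set.intersection(*mis)  # intersection of all maximum independent sets
print(alpha, mu, core)  # 2 2 set()
assert alpha + mu == 5 - 1  # the hypothesis n - 1 = alpha(G) + mu(G) holds
```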
• Core
• König–Egerváry graph
• Matching
• Maximum independent set
• Unicyclic graph
5 Functions for Character Table Constructions
The functions described in this chapter deal with the construction of character tables from other character tables, so they complement the functions in Section Reference: Constructing Character Tables
from Others. But since they are used in situations that are typical for the GAP Character Table Library, they are described here.
An important ingredient of the constructions is the description of the action of a group automorphism on the classes by a permutation. In practice, these permutations are usually chosen from the
group of table automorphisms of the character table in question, see AutomorphismsOfTable (Reference: AutomorphismsOfTable).
Section 5.1 deals with groups of the structure M.G.A, where the upwards extension G.A acts suitably on the central extension M.G. Section 5.2 deals with groups that have a factor group of type S_3.
Section 5.3 deals with upward extensions of a group by a Klein four group. Section 5.4 deals with downward extensions of a group by a Klein four group. Section 5.6 describes the construction of
certain Brauer tables. Section 5.7 deals with special cases of the construction of character tables of central extensions from known character tables of suitable factor groups. Section 5.8 documents
the functions used to encode certain tables in the GAP Character Table Library.
Examples can be found in [Breb] and [Bref].
5.1 Character Tables of Groups of Structure M.G.A
For the functions in this section, let H be a group with normal subgroups N and M such that H/N is cyclic, M ≤ N holds, and such that each irreducible character of N that does not contain M in its
kernel induces irreducibly to H. (This is satisfied for example if N has prime index in H and M is a group of prime order that is central in N but not in H.) Let G = N/M and A = H/N, so H has the
structure M.G.A. For some examples, see [Bre11].
5.1-1 PossibleCharacterTablesOfTypeMGA
‣ PossibleCharacterTablesOfTypeMGA( tblMG, tblG, tblGA, orbs, identifier ) ( function )
Let H, N, and M be as described at the beginning of the section.
Let tblMG, tblG, tblGA be the ordinary character tables of the groups M.G = N, G, and G.A = H/M, respectively, and orbs be the list of orbits on the class positions of tblMG that is induced by the
action of H on M.G. Furthermore, let the class fusions from tblMG to tblG and from tblG to tblGA be stored on tblMG and tblG, respectively (see StoreFusion (Reference: StoreFusion)).
PossibleCharacterTablesOfTypeMGA returns a list of records describing all possible ordinary character tables for groups H that are compatible with the arguments. Note that in general there may be
several possible groups H, and it may also be that "character tables" are constructed for which no group exists.
Each of the records in the result has the following components.
table: a possible ordinary character table for H, and
MGfusMGA: the fusion map from tblMG into the table stored in table.
The possible tables differ w. r. t. some power maps, and perhaps element orders and table automorphisms; in particular, the MGfusMGA component is the same in all records.
The returned tables have the Identifier (Reference: Identifier for character tables) value identifier. The classes of these tables are sorted as follows. First come the classes contained in M.G,
sorted compatibly with the classes in tblMG, then the classes in H ∖ M.G follow, in the same ordering as the classes of G.A ∖ G.
5.1-2 BrauerTableOfTypeMGA
‣ BrauerTableOfTypeMGA( modtblMG, modtblGA, ordtblMGA ) ( function )
Let H, N, and M be as described at the beginning of the section, let modtblMG and modtblGA be the p-modular character tables of the groups N and H/M, respectively, and let ordtblMGA be the p-modular
Brauer table of H, for some prime integer p. Furthermore, let the class fusions from the ordinary character table of modtblMG to ordtblMGA and from ordtblMGA to the ordinary character table of
modtblGA be stored.
BrauerTableOfTypeMGA returns the p-modular character table of H.
5.1-3 PossibleActionsForTypeMGA
‣ PossibleActionsForTypeMGA( tblMG, tblG, tblGA ) ( function )
Let the arguments be as described for PossibleCharacterTablesOfTypeMGA (5.1-1). PossibleActionsForTypeMGA returns the set of orbit structures Ω on the class positions of tblMG that can be induced by
the action of H on the classes of M.G in the sense that Ω is the set of orbits of a table automorphism of tblMG (see AutomorphismsOfTable (Reference: AutomorphismsOfTable)) that is compatible with
the stored class fusions from tblMG to tblG and from tblG to tblGA. Note that the number of such orbit structures can be smaller than the number of the underlying table automorphisms.
Information about the progress is reported if the info level of InfoCharacterTable (Reference: InfoCharacterTable) is at least 1 (see SetInfoLevel (Reference: InfoLevel)).
5.2 Character Tables of Groups of Structure G.S_3
5.2-1 CharacterTableOfTypeGS3
‣ CharacterTableOfTypeGS3( tbl, tbl2, tbl3, aut, identifier ) ( function )
‣ CharacterTableOfTypeGS3( modtbl, modtbl2, modtbl3, ordtbls3, identifier ) ( function )
Let H be a group with a normal subgroup G such that H/G ≅ S_3, the symmetric group on three points, and let G.2 and G.3 be preimages of subgroups of order 2 and 3, respectively, under the natural
projection onto this factor group.
In the first form, let tbl, tbl2, tbl3 be the ordinary character tables of the groups G, G.2, and G.3, respectively, and aut be the permutation of classes of tbl3 induced by the action of H on G.3.
Furthermore assume that the class fusions from tbl to tbl2 and tbl3 are stored on tbl (see StoreFusion (Reference: StoreFusion)). In particular, the two class fusions must be compatible in the sense
that the induced action on the classes of tbl describes an action of S_3.
In the second form, let modtbl, modtbl2, modtbl3 be the p-modular character tables of the groups G, G.2, and G.3, respectively, and ordtbls3 be the ordinary character table of H.
CharacterTableOfTypeGS3 returns a record with the following components.
the ordinary or p-modular character table of H, respectively,
the fusion map from tbl2 into the table of H, and
the fusion map from tbl3 into the table of H.
The returned table of H has the Identifier (Reference: Identifier for character tables) value identifier. The classes of the table of H are sorted as follows. First come the classes contained in G.3,
sorted compatibly with the classes in tbl3, then the classes in H ∖ G.3 follow, in the same ordering as the classes of G.2 ∖ G.
In fact the code is applicable in the more general case that H/G is a Frobenius group F = K C with abelian kernel K and cyclic complement C of prime order, see [Bref]. Besides F = S_3, e. g., the
case F = A_4 is interesting.
5.2-2 PossibleActionsForTypeGS3
‣ PossibleActionsForTypeGS3( tbl, tbl2, tbl3 ) ( function )
Let the arguments be as described for CharacterTableOfTypeGS3 (5.2-1). PossibleActionsForTypeGS3 returns the set of those table automorphisms (see AutomorphismsOfTable (Reference:
AutomorphismsOfTable)) of tbl3 that can be induced by the action of H on the classes of tbl3.
Information about the progress is reported if the info level of InfoCharacterTable (Reference: InfoCharacterTable) is at least 1 (see SetInfoLevel (Reference: InfoLevel)).
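As an illustration, a schematic GAP session for the G.S_3 construction could look as follows. This is a sketch only: the table identifiers "G", "G.2", "G.3" are hypothetical, and the class fusions from the table of G into the other two tables are assumed to be stored.

```
gap> t  := CharacterTable( "G" );;    # table of G (hypothetical identifier)
gap> t2 := CharacterTable( "G.2" );;  # table of G.2
gap> t3 := CharacterTable( "G.3" );;  # table of G.3; fusions from t assumed stored
gap> acts:= PossibleActionsForTypeGS3( t, t2, t3 );;
gap> res := CharacterTableOfTypeGS3( t, t2, t3, acts[1], "G.S3" );;
```

The record res then contains the ordinary character table of G.S_3 together with the fusion maps from t2 and t3 into it, as described above.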
5.3 Character Tables of Groups of Structure G.2^2
The following functions are intended for constructing the possible ordinary character tables of a group of structure G.2^2 from the known tables of the three normal subgroups of type G.2.
5.3-1 PossibleCharacterTablesOfTypeGV4
‣ PossibleCharacterTablesOfTypeGV4( tblG, tblsG2, acts, identifier[, tblGfustblsG2] ) ( function )
‣ PossibleCharacterTablesOfTypeGV4( modtblG, modtblsG2, ordtblGV4[, ordtblsG2fusordtblG4] ) ( function )
Let H be a group with a normal subgroup G such that H/G is a Klein four group, and let G.2_1, G.2_2, and G.2_3 be the three subgroups of index two in H that contain G.
In the first version, let tblG be the ordinary character table of G, let tblsG2 be a list containing the three character tables of the groups G.2_i, and let acts be a list of three permutations
describing the action of H on the conjugacy classes of the corresponding tables in tblsG2. If the class fusions from tblG into the tables in tblsG2 are not stored on tblG (for example, because the
three tables are equal) then the three maps must be entered in the list tblGfustblsG2.
In the second version, let modtblG be the p-modular character table of G, modtblsG2 be the list of p-modular Brauer tables of the groups G.2_i, and ordtblGV4 be the ordinary character table of H. In
this case, the class fusions from the ordinary character tables of the groups G.2_i to ordtblGV4 can be entered in the list ordtblsG2fusordtblG4.
PossibleCharacterTablesOfTypeGV4 returns a list of records describing all possible (ordinary or p-modular) character tables for groups H that are compatible with the arguments. Note that in general
there may be several possible groups H, and it may also be that "character tables" are constructed for which no group exists. Each of the records in the result has the following components.
a possible (ordinary or p-modular) character table for H, and
the list of fusion maps from the tables in tblsG2 into the table component.
The possible tables differ w.r.t. the irreducible characters and perhaps the table automorphisms; in particular, the G2fusGV4 component is the same in all records.
The returned tables have the Identifier (Reference: Identifier for character tables) value identifier. The classes of these tables are sorted as follows. First come the classes contained in G, sorted
compatibly with the classes in tblG, then the outer classes in the tables in tblsG2 follow, in the same ordering as in these tables.
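A schematic GAP session for this construction might look as follows. This is a sketch, not a verified session: the table identifiers are hypothetical, and the class fusions from the table of G into the three tables of the G.2_i are assumed to be stored.

```
gap> tblG  := CharacterTable( "G" );;                             # hypothetical
gap> tblsG2:= List( [ "G.2_1", "G.2_2", "G.2_3" ], CharacterTable );;
gap> acts  := PossibleActionsForTypeGV4( tblG, tblsG2 );;
gap> poss  := PossibleCharacterTablesOfTypeGV4( tblG, tblsG2,
>                                               acts[1], "G.2^2" );;
```

Each entry of poss is a record with a candidate table and the corresponding fusion maps; recall that some candidates may not belong to any actual group H.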
5.3-2 PossibleActionsForTypeGV4
‣ PossibleActionsForTypeGV4( tblG, tblsG2 ) ( function )
Let the arguments be as described for PossibleCharacterTablesOfTypeGV4 (5.3-1). PossibleActionsForTypeGV4 returns the list of those triples [ π_1, π_2, π_3 ] of permutations for which a group H may
exist that contains G.2_1, G.2_2, G.2_3 as index 2 subgroups which intersect in the index 4 subgroup G.
Information about the progress is reported if the info level of InfoCharacterTable (Reference: InfoCharacterTable) is at least 1 (see SetInfoLevel (Reference: InfoLevel)).
5.4 Character Tables of Groups of Structure 2^2.G
The following functions are intended for constructing the possible ordinary or Brauer character tables of a group of structure 2^2.G from the known tables of the three factor groups modulo the normal
order two subgroups in the central Klein four group.
Note that in the ordinary case, only a list of possibilities can be computed, whereas in the modular case, where the ordinary character table is assumed to be known, the desired table is uniquely
determined.
5.4-1 PossibleCharacterTablesOfTypeV4G
‣ PossibleCharacterTablesOfTypeV4G( tblG, tbls2G, id[, fusions] ) ( function )
‣ PossibleCharacterTablesOfTypeV4G( tblG, tbl2G, aut, id ) ( function )
Let H be a group with a central subgroup N of type 2^2, and let Z_1, Z_2, Z_3 be the order 2 subgroups of N.
In the first form, let tblG be the ordinary character table of H/N, and tbls2G be a list of length three, the entries being the ordinary character tables of the groups H/Z_i. In the second form, let
tbl2G be the ordinary character table of H/Z_1 and aut be a permutation; here it is assumed that the groups Z_i are permuted under an automorphism σ of order 3 of H, and that σ induces the
permutation aut on the classes of tblG.
The class fusions onto tblG are assumed to be stored on the tables in tbls2G or tbl2G, respectively, except if they are explicitly entered via the optional argument fusions.
PossibleCharacterTablesOfTypeV4G returns the list of all possible character tables for H in this situation. The returned tables have the Identifier (Reference: Identifier for character tables) value
id.
5.4-2 BrauerTableOfTypeV4G
‣ BrauerTableOfTypeV4G( ordtblV4G, modtbls2G ) ( function )
‣ BrauerTableOfTypeV4G( ordtblV4G, modtbl2G, aut ) ( function )
Let H be a group with a central subgroup N of type 2^2, and let ordtblV4G be the ordinary character table of H. Let Z_1, Z_2, Z_3 be the order 2 subgroups of N. In the first form, let modtbls2G be
the list of the p-modular Brauer tables of the factor groups H/Z_1, H/Z_2, and H/Z_3, for some prime integer p. In the second form, let modtbl2G be the p-modular Brauer table of H/Z_1 and aut be a
permutation; here it is assumed that the groups Z_i are permuted under an automorphism σ of order 3 of H, and that σ induces the permutation aut on the classes of the ordinary character table of H
that is stored in ordtblV4G.
The class fusions from ordtblV4G to the ordinary character tables of the tables in modtbls2G or modtbl2G are assumed to be stored.
BrauerTableOfTypeV4G returns the p-modular character table of H.
5.5 Character Tables of Subdirect Products of Index Two
The following function is intended for constructing the (ordinary or Brauer) character tables of certain subdirect products from the known tables of the factor groups and normal subgroups involved.
5.5-1 CharacterTableOfIndexTwoSubdirectProduct
‣ CharacterTableOfIndexTwoSubdirectProduct( tblH1, tblG1, tblH2, tblG2, identifier ) ( function )
Returns: a record containing the character table of the subdirect product G that is described by the first four arguments.
Let tblH1, tblG1, tblH2, tblG2 be the character tables of groups H_1, G_1, H_2, G_2, such that H_1 and H_2 have index two in G_1 and G_2, respectively, and such that the class fusions corresponding
to these embeddings are stored on tblH1 and tblH2, respectively.
In this situation, the direct product of G_1 and G_2 contains a unique subgroup G of index two that contains the direct product of H_1 and H_2 but does not contain any of the groups G_1, G_2.
The function CharacterTableOfIndexTwoSubdirectProduct returns a record with the following components.
the character table of G,
the class fusion from tblH1 into the table of G, and
the class fusion from tblH2 into the table of G.
If the first four arguments are ordinary character tables then the fifth argument identifier must be a string; this is used as the Identifier (Reference: Identifier for character tables) value of the
result table.
If the first four arguments are Brauer character tables for the same characteristic then the fifth argument must be the ordinary character table of the desired subdirect product.
5.5-2 ConstructIndexTwoSubdirectProduct
‣ ConstructIndexTwoSubdirectProduct( tbl, tblH1, tblG1, tblH2, tblG2, permclasses, permchars ) ( function )
ConstructIndexTwoSubdirectProduct constructs the irreducible characters of the ordinary character table tbl of the subdirect product of index two in the direct product of tblG1 and tblG2, which
contains the direct product of tblH1 and tblH2 but does not contain any of the direct factors tblG1, tblG2. W. r. t. the default ordering obtained from that given by CharacterTableDirectProduct (
Reference: CharacterTableDirectProduct), the columns and the rows of the matrix of irreducibles are permuted with the permutations permclasses and permchars, respectively.
5.5-3 ConstructIndexTwoSubdirectProductInfo
‣ ConstructIndexTwoSubdirectProductInfo( tbl[, tblH1, tblG1, tblH2, tblG2] ) ( function )
Returns: a list of construction descriptions, or a construction description, or fail.
Called with one argument tbl, an ordinary character table of the group G, say, ConstructIndexTwoSubdirectProductInfo analyzes the possibilities to construct tbl from character tables of subgroups H_1
, H_2 and factor groups G_1, G_2, using CharacterTableOfIndexTwoSubdirectProduct (5.5-1). The return value is a list of records with the following components.
the list of class positions of H_1, H_2 in tbl,
the list of orders of H_1, H_2,
the list of Identifier (Reference: Identifier for character tables) values of the GAP library tables of the factors G_2, G_1 of G by H_1, H_2; if no such table is available then the entry is
fail, and
the list of Identifier (Reference: Identifier for character tables) values of the GAP library tables of the subgroups H_2, H_1 of G; if no such tables are available then the entries are fail.
If the returned list is empty then either tbl does not have the desired structure as a subdirect product, or tbl is in fact a nontrivial direct product.
Called with five arguments, the ordinary character tables of G, H_1, G_1, H_2, G_2, ConstructIndexTwoSubdirectProductInfo returns a list that can be used as the ConstructionInfoCharacterTable (3.7-4)
value for the character table of G from the other four character tables using CharacterTableOfIndexTwoSubdirectProduct (5.5-1); if this is not possible then fail is returned.
5.6 Brauer Tables of Extensions by p-singular Automorphisms
As for the construction of Brauer character tables from known tables, the functions PossibleCharacterTablesOfTypeMGA (5.1-1), CharacterTableOfTypeGS3 (5.2-1), and PossibleCharacterTablesOfTypeGV4 (
5.3-1) work for both ordinary and Brauer tables. The following function is designed specially for Brauer tables.
5.6-1 IBrOfExtensionBySingularAutomorphism
‣ IBrOfExtensionBySingularAutomorphism( modtbl, act ) ( function )
Let modtbl be a p-modular Brauer table of the group G, say, and suppose that the group H, say, is an upward extension of G by an automorphism of order p.
The second argument act describes the action of this automorphism. It can be either a permutation of the columns of modtbl, or a list of the H-orbits on the columns of modtbl, or the ordinary
character table of H such that the class fusion from the ordinary table of modtbl into this table is stored. In all these cases, IBrOfExtensionBySingularAutomorphism returns the values lists of the
irreducible p-modular Brauer characters of H.
Note that the table head of the p-modular Brauer table of H, in general without the Irr (Reference: Irr) attribute, can be obtained by applying CharacterTableRegular (Reference:
CharacterTableRegular) to the ordinary character table of H, but IBrOfExtensionBySingularAutomorphism can be used also if the ordinary character table of H is not known, and just the p-modular
character table of G and the action of H on the classes of G are given.
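A minimal sketch of a call, under the assumptions that the 2-modular Brauer table of a group G is available under the hypothetical identifier "G" and that the order 2 automorphism interchanges the given pairs of columns:

```
gap> modtbl:= CharacterTable( "G" ) mod 2;;  # 2-modular table of G (hypothetical)
gap> act:= (2,3)(4,5);;                      # assumed action on the columns of modtbl
gap> ibr:= IBrOfExtensionBySingularAutomorphism( modtbl, act );;
```

The result ibr is then the list of values lists of the irreducible 2-modular Brauer characters of the extension H.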
5.7 Character Tables of Coprime Central Extensions
5.7-1 CharacterTableOfCommonCentralExtension
‣ CharacterTableOfCommonCentralExtension( tblG, tblmG, tblnG, id ) ( function )
Let tblG be the ordinary character table of a group G, say, and let tblmG and tblnG be the ordinary character tables of central extensions m.G and n.G of G by cyclic groups of prime orders m and n,
respectively, with m ≠ n. We assume that the factor fusions from tblmG and tblnG to tblG are stored on the tables. CharacterTableOfCommonCentralExtension returns a record with the following
components.
the character table t, say, of the corresponding central extension of G by a cyclic group of order m n that factors through m.G and n.G; the Identifier (Reference: Identifier for character tables)
value of this table is id,
true if the Irr (Reference: Irr) value is stored in t, and false otherwise,
the list of irreducibles of t that are known; it contains the inflated characters of the factor groups m.G and n.G, plus those irreducibles that were found in tensor products of characters of
these groups.
Note that the conjugacy classes and the power maps of t are uniquely determined by the input data. Concerning the irreducible characters, we try to extract them from the tensor products of characters
of the given factor groups by reducing with known irreducibles and applying the LLL algorithm (see ReducedClassFunctions (Reference: ReducedClassFunctions) and LLL (Reference: LLL)).
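Schematically, for central extensions 2.G and 3.G of a group G (hypothetical identifiers; the factor fusions to the table of G are assumed to be stored), the call would look like this:

```
gap> tblG := CharacterTable( "G" );;
gap> tbl2G:= CharacterTable( "2.G" );;  # factor fusion to tblG assumed stored
gap> tbl3G:= CharacterTable( "3.G" );;  # factor fusion to tblG assumed stored
gap> res := CharacterTableOfCommonCentralExtension( tblG, tbl2G, tbl3G, "6.G" );;
```

Whether res contains the full list of irreducibles can be read off from its components; if the tensor product approach described above does not yield all irreducibles, the missing ones have to be computed by other means.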
5.8 Construction Functions used in the Character Table Library
The following functions are used in the GAP Character Table Library, for encoding table constructions via the mechanism that is based on the attribute ConstructionInfoCharacterTable (3.7-4). All
construction functions take as their first argument a record that describes the table to be constructed, and the function adds only those components that are not yet contained in this record.
5.8-1 ConstructMGA
‣ ConstructMGA( tbl, subname, factname, plan, perm ) ( function )
ConstructMGA constructs the irreducible characters of the ordinary character table tbl of a group m.G.a where the automorphism a (a group of prime order) of m.G acts nontrivially on the central
subgroup m of m.G. subname is the name of the subgroup m.G which is a (not necessarily cyclic) central extension of the (not necessarily simple) group G, factname is the name of the factor group G.a.
Then the faithful characters of tbl are induced from m.G.
plan is a list, each entry being a list containing positions of characters of m.G that form an orbit under the action of a (the induction of characters is encoded this way).
perm is the permutation that must be applied to the list of characters that is obtained on appending the faithful characters to the inflated characters of the factor group. A nonidentity permutation
occurs for example for groups of structure 12.G.2 that are encoded via the subgroup 12.G and the factor group 6.G.2, where the faithful characters of 4.G.2 shall precede those of 6.G.2.
Examples where ConstructMGA is used to encode library tables are the tables of 3.F_{3+}.2 (subgroup 3.F_{3+}, factor group F_{3+}.2) and 12_1.U_4(3).2_2 (subgroup 12_1.U_4(3), factor group
6_1.U_4(3).2_2).
5.8-2 ConstructMGAInfo
‣ ConstructMGAInfo( tblmGa, tblmG, tblGa ) ( function )
Let tblmGa be the ordinary character table of a group of structure m.G.a where the factor group of prime order a acts nontrivially on the normal subgroup of order m that is central in m.G, tblmG be
the character table of m.G, and tblGa be the character table of the factor group G.a.
ConstructMGAInfo returns the list that is to be stored in the library version of tblmGa: the first entry is the string "ConstructMGA", the remaining four entries are the last four arguments for the
call to ConstructMGA (5.8-1).
5.8-3 ConstructGS3
‣ ConstructGS3( tbls3, tbl2, tbl3, ind2, ind3, ext, perm ) ( function )
‣ ConstructGS3Info( tbl2, tbl3, tbls3 ) ( function )
ConstructGS3 constructs the irreducibles of an ordinary character table tbls3 of type G.S_3 from the tables with names tbl2 and tbl3, which correspond to the groups G.2 and G.3, respectively. ind2 is
a list of numbers referring to irreducibles of tbl2. ind3 is a list of pairs, each referring to irreducibles of tbl3. ext is a list of pairs, each referring to one irreducible character of tbl2 and
one of tbl3. perm is a permutation that must be applied to the irreducibles after the construction.
ConstructGS3Info returns a record with the components ind2, ind3, ext, perm, and list, as are needed for ConstructGS3.
5.8-4 ConstructV4G
‣ ConstructV4G( tbl, facttbl, aut ) ( function )
Let tbl be the character table of a group of type 2^2.G where an outer automorphism of order 3 permutes the three involutions in the central 2^2. Let aut be the permutation of classes of tbl induced
by that automorphism, and facttbl be the name of the character table of the factor group 2.G. Then ConstructV4G constructs the irreducible characters of tbl from that information.
5.8-5 ConstructProj
‣ ConstructProj( tbl, irrinfo ) ( function )
‣ ConstructProjInfo( tbl, kernel ) ( function )
ConstructProj constructs the irreducible characters of the record encoding the ordinary character table tbl from projective characters of tables of factor groups, which are stored in the
ProjectivesInfo (3.7-2) value of the smallest factor; the information about the name of this factor and the projectives to take is stored in irrinfo.
ConstructProjInfo takes an ordinary character table tbl and a list kernel of class positions of a cyclic kernel of order dividing 12, and returns a record with the components
a character table that is permutation isomorphic with tbl, and sorted such that classes that differ only by multiplication with elements in the classes of kernel are consecutive,
a record being the entry for the projectives list of the table of the factor of tbl by kernel, describing this part of the irreducibles of tbl, and
the value of irrinfo that is needed for constructing the irreducibles of the tbl component of the result (not the irreducibles of the argument tbl!) via ConstructProj.
5.8-6 ConstructDirectProduct
‣ ConstructDirectProduct( tbl, factors[, permclasses, permchars] ) ( function )
The direct product of the library character tables described by the list factors of table names is constructed using CharacterTableDirectProduct (Reference: CharacterTableDirectProduct), and all its
components that are not yet stored on tbl are added to tbl.
The ComputedClassFusions (Reference: ComputedClassFusions) value of tbl is enlarged by the factor fusions from the direct product to the factors.
If the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly.
factors must have length at least two; use ConstructPermuted (5.8-11) in the case of only one factor.
5.8-7 ConstructCentralProduct
‣ ConstructCentralProduct( tbl, factors, Dclasses[, permclasses, permchars] ) ( function )
The library table tbl is completed with the help of the table obtained by taking the direct product of the tables with names in the list factors, and then factoring out the normal subgroup that is
given by the list Dclasses of class positions.
If the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly.
5.8-8 ConstructSubdirect
‣ ConstructSubdirect( tbl, factors, choice ) ( function )
The library table tbl is completed with the help of the table obtained by taking the direct product of the tables with names in the list factors, and then taking the table consisting of the classes
in the list choice.
Note that in general, the restriction to the classes of a normal subgroup is not sufficient for describing the irreducible characters of this normal subgroup.
5.8-9 ConstructWreathSymmetric
‣ ConstructWreathSymmetric( tbl, subname, n[, permclasses, permchars] ) ( function )
The wreath product of the library character table with identifier value subname with the symmetric group on n points is constructed using CharacterTableWreathSymmetric (Reference:
CharacterTableWreathSymmetric), and all its components that are not yet stored on tbl are added to tbl.
If the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly.
5.8-10 ConstructIsoclinic
‣ ConstructIsoclinic( tbl, factors[, nsg[, centre]][, permclasses, permchars] ) ( function )
first constructs the direct product of library tables as given by the list factors of admissible character table names, and then constructs the isoclinic table of the result.
If the argument nsg is present and a record or a list then CharacterTableIsoclinic (Reference: CharacterTableIsoclinic) gets called, and nsg (as well as centre if present) is passed to this function.
In both cases, if the optional arguments permclasses, permchars are given then the classes and characters of the result are sorted accordingly.
5.8-11 ConstructPermuted
‣ ConstructPermuted( tbl, libnam[, permclasses, permchars] ) ( function )
The library table tbl is computed from the library table with the name libnam, by permuting the classes and the characters by the permutations permclasses and permchars, respectively.
So tbl and the library table with the name libnam are permutation equivalent. With the more general function ConstructAdjusted (5.8-12), one can derive character tables that are not necessarily
permutation equivalent, by additionally replacing some defining data.
The two permutations are optional. If they are missing then the lists of irreducible characters and the power maps of the two character tables coincide. However, different class fusions may be stored
on the two tables. This is used for example in situations where a group has several classes of isomorphic maximal subgroups whose class fusions are different; different character tables (with
different identifiers) are stored for the different classes, each with appropriate class fusions, and all these tables except the one for the first class of subgroups can be derived from this table
via ConstructPermuted.
5.8-12 ConstructAdjusted
‣ ConstructAdjusted( tbl, libnam, pairs[, permclasses, permchars] ) ( function )
The defining attribute values of the library table tbl are given by the attribute values described by the list pairs and –for those attributes which do not appear in pairs– by the attribute values of
the library table with the name libnam, whose classes and characters have been permuted by the optional permutations permclasses and permchars, respectively.
This construction can be used to derive a character table from another library table (the one with the name libnam) that is not permutation equivalent to this table. For example, it may happen that
the character tables of a split and a nonsplit extension differ only by some power maps and element orders. In this case, one can encode one of the tables via ConstructAdjusted, by prescribing just
the power maps in the list pairs.
If no replacement of components is needed then one should rather use ConstructPermuted (5.8-11), because the system can then exploit the fact that the two tables are permutation equivalent.
5.8-13 ConstructFactor
‣ ConstructFactor( tbl, libnam, kernel ) ( function )
The library table tbl is completed with the help of the library table with name libnam, by factoring out the classes in the list kernel.
Our users:
Wow! A wonderful algebra tutor that has made equation solving easy for me.
Sonya Johnson, TX
I really liked the ability to choose a particular transformation to perform, rather than blindly copying the solution process.
Jessie James, AK
Kudos to The Algebrator! My daughter Sarah has been getting straight A's on her report card thanks to this outstanding piece of software. She is no longer having a hard time with her algebra
homework. After using the program for a few weeks, we said goodbye to her demanding tutor. Thank you!
Camila Denton, NJ
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
them?
Search phrases used on 2014-04-02:
• simplify math radical fraction
• free download apptitude test
• Quick way to solve equations
• combination permutation math applet
• radical math worksheet
• "pythagorean theorem" + "free worksheet"
• online T-89 calculator
• simplifying radicals with a factor table
• chapter 13 chicago math algebra answer key
• solving exponentials
• statistics aleks cheats
• mathmatical equation
• saxon math algebra 2
• math 6th grade group like terms test
• probability worksheets for 4th
• solution of partial differential equation using method of characteristics
• dugopolski elementary algebra companion website
• how to use the pie formula to lay a building out square
• factor tree probability 6th grade
• common entrance games
• exact equations and multivariable cald
• solve math problem download program
• algebra 1 homework solver
• is kumon level k like pre calc
• free elementary rounding printables
• least common denominator calculator
• algebra 2 dvd compatible with hrw
• NUMERICAL ANALYSIS FILETYPE: PDF
• fifth grade TAKS reading printable practice
• year 8 algebra online tests
• Free Trigonometry Word Problem Solver
• cool math worksheets
• operations with radical expressions solver
• how to learn algebra for free
• nonlinear system matlab
• algebrator 4.0
• algebra word problems worksheet version 1
• how do I simplify a radical expression by factoring
• least common multiple polynomials
• algebra 1 answers to worksheets
• test of genius+worksheet
• free help with elementary and intermediate algebra
• work on math factoring polynomials WORKSHEETS
• grade nine math
• "TI 83 plus" log "change base"
• adding subtracting multiplying dividing square roots
• ti 84 emulator
• combinations worksheets
• Glencoe/McGraw-Hill chapter 6 umulative Review and answers
• change of base in logarithms worksheet free
• free cost accounting tutorials
• summer math practice sheets for 6 th grade
• free precal study sheets
• conic solver
• "ti-83 plus" +"absolute value"
• math projects for sixth graders
• graphing linear equations, free worksheets
• convert exponential to linear equation
• algabra
• how to solve using rationalization?
• sat /act chapter test answers/chapter 7 assessment book
• 9th grade math worksheets
• Holt algebra 2 worksheet
• Foil program for ti-83
• free printable first grade math sheets
• FACTORIZATION USE IN TENTH GRADE
• basics of accounting ebook + pdf+free
• download apptitude book
• math tutor for writing two-step equation
• free online exam
• download online real estate test exam ontario
• free online calculator for intersections and graphs
• ontario grade 7 math worksheets
• lesson master math sheets
• hardest questions in the 6th grade in a crossword
• algebra chapter 6 review
• model questions in aptitude
• easy studying problems for algebra
The norms of graph spanners
A t-spanner of a graph G is a subgraph H in which all distances are preserved up to a multiplicative t factor. A classical result of Althöfer et al. is that for every integer k and every graph G,
there is a (2k − 1)-spanner of G with at most O(n^{1+1/k}) edges. But for some settings the more interesting notion is not the number of edges, but the degrees of the nodes. This spurred interest in
and study of spanners with small maximum degree. However, this is not necessarily a robust enough objective: we would like spanners that not only have small maximum degree, but also have “few” nodes
of “large” degree. To interpolate between these two extremes, in this paper we initiate the study of graph spanners with respect to the ℓ_p-norm of their degree vector, thus simultaneously modeling
the number of edges (the ℓ_1-norm) and the maximum degree (the ℓ_∞-norm). We give precise upper bounds for all ranges of p and stretch t: we prove that the greedy (2k − 1)-spanner has ℓ_p-norm of at
most max(O(n), O(n^{(k+p)/(kp)})), and that this bound is tight (assuming the Erdős girth conjecture). We also study universal lower bounds, allowing us to give “generic” guarantees on the
approximation ratio of the greedy algorithm which generalize and interpolate between the known approximations for the ℓ_1 and ℓ_∞ norms. Finally, we show that at least in some situations, the ℓ_p
norm behaves fundamentally differently from ℓ_1 or ℓ_∞: there are regimes (p = 2 and stretch 3 in particular) where the greedy spanner has a provably superior approximation to the generic guarantee.
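The greedy spanner and the ℓ_p-norm of its degree vector are easy to sketch in code. The following Python fragment is illustrative only and is not taken from the paper: it assumes an unweighted graph given as an edge list, uses a naive bounded BFS, and scans edges in an arbitrary fixed order.

```python
import math
from itertools import combinations

def bounded_dist(adj, s, t, limit):
    """BFS distance from s to t in the current spanner, capped at limit:
    returns limit + 1 if t is not reached within limit hops."""
    if s == t:
        return 0
    frontier, seen, d = {s}, {s}, 0
    while frontier and d < limit:
        d += 1
        frontier = {w for v in frontier for w in adj[v]} - seen
        if t in frontier:
            return d
        seen |= frontier
    return limit + 1

def greedy_spanner(n, edges, k):
    """Greedy (2k-1)-spanner of an unweighted graph on nodes 0..n-1:
    keep an edge only if its endpoints are currently more than 2k-1 apart."""
    adj = {v: set() for v in range(n)}
    kept = []
    for u, v in edges:
        if bounded_dist(adj, u, v, 2 * k - 1) > 2 * k - 1:
            kept.append((u, v))
            adj[u].add(v)
            adj[v].add(u)
    return kept, adj

def lp_norm(adj, p):
    """ell_p norm of the degree vector; p = math.inf gives the max degree."""
    degs = [len(nbrs) for nbrs in adj.values()]
    return max(degs) if p == math.inf else sum(d ** p for d in degs) ** (1 / p)

# On the complete graph K_5 with k = 2 (stretch 3), this edge order keeps
# the star centered at node 0: 4 edges, degree vector (4, 1, 1, 1, 1).
edges = list(combinations(range(5), 2))
kept, adj = greedy_spanner(5, edges, 2)
```

On this example the ℓ_1 norm (total degree) is 8 and the ℓ_∞ norm (maximum degree) is 4, illustrating how the two extremes of the paper's ℓ_p interpolation can differ even on one small graph.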
Original language: American English
Title of host publication: 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019
Editors: Christel Baier, Ioannis Chatzigiannakis, Paola Flocchini, Stefano Leonardi
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
ISBN (Electronic): 9783959771092
State: Published - 1 Jul 2019
Event: 46th International Colloquium on Automata, Languages, and Programming, ICALP 2019 - Patras, Greece, 9 Jul 2019 → 12 Jul 2019
Publication series: Leibniz International Proceedings in Informatics (LIPIcs), Volume 132
Push-Pull in HiFi (Audio 1959/4)
Author: MANNIE HOROWITZ
AUDIO, APRIL, 1959, VOL. 43, No. 4 (Successor to RADIO, Est. 1917).
The push-pull amplifier has become standardized as the optimum circuit arrangement for providing adequate power output with a minimum of distortion - so long as the tubes are used under proper
conditions. The author makes the performance of this type of amplifier thoroughly understandable.
The push-pull power output stage can be studied from many angles. A theoretical discussion on composite tube characteristics is interesting and informative. A survey of the practical applications
of different push-pull or driver circuits is an important asset to any audiofan's library.
In this article, several refinements in push-pull circuits will be discussed. These refinements are frequently designed into the amplifier on an intuitive basis rather than a scientific one. The
importance of a scientific analysis rather than instinctive motivation can be well appreciated by the serious hi-fi enthusiast.
Graphical Analysis
A typical self-biased triode push-pull output amplifier is drawn in Fig. 1. Everything discussed about this triode refers to the pentode as well – but to an even larger degree due to the greater
curvature of the tube characteristics.
Fig. 1. Typical push-pull amplifier.
It is a well-known fact that there is a phase shift of 180 deg. between the grid and the plate of any tube. When the signal at the grid reaches a crest, the signal at the plate is at a trough. The
reverse is also true. The phase relationship of a sine wave signal at the grid and plate of a tube is shown in Fig. 2. Note the crest and trough reversal indicating a 180 deg. phase shift.
Fig. 2. Grid-plate reversal - 180 deg. phase shift.
This is true in the case of both tubes in the circuit shown in Fig. 1. When W is at a crest, Y is at a trough; when X is at a trough, Z is at a crest - and vice versa.
It is equally well known that the voltage at W and X must be exactly 180 deg. out of phase and exactly equal in amplitude in order that the push-pull amplifier operate properly. These voltages at
W and X will appear as shown at (A) in Fig. 3. In class A operation, the voltages at the respective plates, Y and Z will appear as shown at (B) in Fig. 3, each equally shifted in phase due to the
grid-plate phase relationship of the tube. The signal voltages at the plates will be greater than that at the grids due to tube amplification.
The signal-voltage amplitude appearing between the plate of each tube and signal ground (B+ since C2 in Fig. 1 is a short circuit to ground for signals) appears across one half of the output
transformer. The signal voltage between Y and B+ due to tube I appears across the upper half of the transformer, while the signal voltage between Z and B+ due to tube II appears across the lower half
of the transformer.
When the voltages at Z and Y are equal, there is no difference of potential between the ends of the transformer. The signals will then cancel out and no voltage will appear at the output.
If the voltages are unequal, or equal and 180 deg. out of phase, the difference of the instantaneous voltages at the plates will appear across the transformer. This will be the output signal.
In (B) of Fig. 3, let us assume a peak signal voltage of 30 volts between the plates of each tube and ground. At the beginning of the cycle, at the midpoint (180 deg.) and at the end of the cycle
(360 deg.), there is zero signal voltage. Thus there is no signal difference of potential between the two plates and there is no signal voltage across the output transformer.
Fig. 3. Push-pull signal under normal operation. Note phase relationship of grids W and X and phase relationship of plates Y and Z. Output is double the output from each plate individually.
At the 90-deg. point, the Y plate has a trough of -30 volts and the Z plate has a crest of +30 volts. Thus there is a difference of potential of 60 volts between these two points. Assuming the Z
plate as the "0"-voltage reference level, the voltage between plates, or at the Y plate (across the output transformer) is -60 volts.
At the 270-deg. point in the cycle, the reverse is true. The Y plate has a crest of +30 volts. Again assuming the Z plate as the "0"-voltage reference level, the voltage between plates, or at the
Y plate is +60 volts.
When plotting this information, the voltage between plates of the tube (across the output transformer) is a sine wave of double the amplitude of either plate output voltage alone.
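The readings taken off Fig. 3 can be checked with a short numeric sketch (Python here, using the article's 30-volt example peak):

```python
import math

PEAK = 30.0  # peak signal voltage at each plate, as in the Fig. 3 example


def plate_voltages(theta_deg):
    """Instantaneous signal voltages at plates Y and Z, in volts.

    Y and Z swing 180 deg. out of phase, so Z is simply the negative of Y.
    """
    theta = math.radians(theta_deg)
    v_y = -PEAK * math.sin(theta)  # Y is at its trough at the 90-deg. point
    v_z = +PEAK * math.sin(theta)
    return v_y, v_z


def transformer_voltage(theta_deg):
    """Signal voltage across the whole primary: difference of the two plates."""
    v_y, v_z = plate_voltages(theta_deg)
    return v_y - v_z
```

At 90 deg. this gives -60 volts across the primary, at 270 deg. +60 volts, and zero at 0, 180 and 360 deg. - the same values read off the figure.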
It should then become obvious that if the grid signals were of equal amplitude and in phase (Fig. 4), the voltages between plates Y and Z would be in phase. Being in phase, there would be no
difference of potential, during any point of the cycle, between plate Y and Z. This would result in a zero signal output.
Fig. 4. Phase relationship when signals are fed in phase to the two grids. Note the zero output across the output transformer.
From this graphical analysis, two things governing push-pull operation become obvious.
1. A signal applied 180 deg. out of phase to each grid, results in double the usual output from each tube individually.
2. A signal applied in phase to each grid, results in zero output from the push-pull arrangement.
Rule 2 applies to all cases, while rule 1 applies only to Class A operation of the output tubes. In Class AB[1], usually used in hi-fi amplifiers, the output is greater than indicated due to
increased efficiency.
Class AB[1]
In class A, we may assume operation of the tube along a linear portion of its characteristic curve, as shown in Fig. 5.
Fig. 5. Class A operation. Q is operating point (bias voltage). Undistorted sine wave at input and output.
Figure 6 shows the same tube operating in class AB[1]. The signal reaches cut off (or at least a non-linear portion of the curve) resulting in a distorted output from each tube. Since the output
from each tube is of identical wave shape, but 180 deg. out of phase, the distortion partially cancels itself out, resulting in a "pure" sine wave at the output.
Fig. 6. Class AB[1] operation of the same tube. The quiescent point is moved down so that less current flows when no signal is applied - which means less power dissipated, resulting in greater tube
efficiency.
The distortion from a tube can be studied most beneficially by a Fourier analysis. This is covered in many texts^1,2 and will not be derived here. The results of this analysis are simple and can be
stated briefly.
Represent the plate current to Y in Fig. 1 as i[b1]. This plate current consists of three factors.
First, there is a d.c. component due to the plate power supply or B+. Let us call this d.c. current B[0].
The second is the fundamental signal component. When a sine wave is fed to the grid of a tube, a large signal component at the original sine-wave frequency appears at the output. The amplitude of
this component can be labelled B[1]. Designating the fundamental frequency as f[1], the B[1] component varies sinusoidally with this frequency. Thus the complete fundamental signal component of the
current is B[1]cosωt, where ω = 2πf[1].
The output being somewhat distorted, must of necessity also consist of some harmonic components. Following the procedure for finding the fundamental, the amplitude of the second harmonic component
is B[2], the third is B[3], the fourth is B[4], and so on. Similarly, the sinusoidal variations at these frequencies are respectively cos2ωt, cos3ωt, cos4ωt, and so on. The complete harmonic content
of i[b1] is then B[2]cos2ωt + B[3]cos3ωt + B[4]cos4ωt...etc.
The plate current i[b1], is the sum of all these factors. Approximating the result only as far as the third harmonic - disregarding the fourth and higher order distortion components, the plate
current is:
i[b1] = B[0] + B[1]cosωt + B[2]cos2ωt + B[3]cos3ωt (1)
Assuming first that i[b2], the current of tube II, is in phase with i[b1], then:
i[b2] = B[0] + B[1]cosωt + B[2]cos2ωt + B[3]cos3ωt (2)
It can be taken for granted that the impedances of the two halves of the output transformer are equal. The voltage drops across each half are proportional to the plate currents (E = Zi[b]).
The total signal voltage appearing across the transformer is then proportional to i[b1] - i[b2], the difference of potential between the two plates, as explained above
graphically. Subtracting Eq. (2) from Eq. (1) shows a resultant zero output. This is the same result previously deduced graphically in Fig. 4.
Assume next that i[b1] and i[b2] are 180 deg. out of phase - the case for normal push-pull operation illustrated in Fig. 3. Since 180 deg. is equivalent to π in radian measure, adding π to each of
the angles in Eq. (1) is equivalent to a 180-deg. phase shift, giving the current of tube II:
i[b2] = B[0] + B[1]cos(ωt + π) + B[2]cos2(ωt + π) + B[3]cos3(ωt + π) ...=
B[0] - B[1]cosωt + B[2]cos2ωt - B[3]cos3ωt ... (3)
Equation (3) follows from the trigonometric identities:
cos(ωt + π) = -cosωt
cos(2ωt + 2π) = +cos2ωt
cos(3ωt + 3π) = -cos3ωt
Subtracting Eq. (3) from Eq. (1) results in an expression which is proportional to the voltage across the output transformer:
i = i[b1] - i[b2] = 2(B[1]cosωt + B[3]cos3ωt) ... (4)
This indicates that all even harmonics are cancelled out in the push-pull output. Only the fundamental and the third and higher odd harmonics remain. The "2" in Eq. (4) indicates what we already found
graphically. The amplitude is double the output of a single tube.
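The cancellation in Eq. (4) is easy to verify numerically. The sketch below (Python, with arbitrary example amplitudes for B[0] through B[3]) builds the two plate currents of Eqs. (1) and (3) and confirms that their difference matches Eq. (4) at every instant:

```python
import math

# Example harmonic amplitudes for one tube's plate current: B0 is the d.c.
# component, B1 the fundamental, B2/B3 the distortion terms (arbitrary values).
B0, B1, B2, B3 = 50.0, 10.0, 2.0, 1.0


def i_b(t, phase=0.0):
    """Plate current per Eq. (1), with an optional grid-signal phase shift."""
    w = 2 * math.pi  # take f = 1 Hz for illustration
    return (B0
            + B1 * math.cos(w * t + phase)
            + B2 * math.cos(2 * (w * t + phase))
            + B3 * math.cos(3 * (w * t + phase)))


def transformer_current(t):
    """i = i_b1 - i_b2 for normal push-pull drive (grids 180 deg. apart)."""
    return i_b(t) - i_b(t, phase=math.pi)


def eq4(t):
    """Eq. (4): the d.c. term and the even harmonic cancel; odd terms double."""
    w = 2 * math.pi
    return 2 * (B1 * math.cos(w * t) + B3 * math.cos(3 * w * t))
```

Checking `transformer_current` against `eq4` at several instants confirms the algebra: B[0] and the second harmonic vanish, while the fundamental and third harmonic double.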
This long-winded discussion may be considered to be a lot of trouble to prove some factors which are common knowledge. Everyone knows that even harmonics are cancelled in push-pull. Everyone also
knows that the signal applied to the two grids, W and X, must be of equal amplitude and 180 deg. out of phase. So why this dissertation?
Amplifiers are made out of tubes, resistors, transformers, capacitors - not out of tube manuals, theoretical text books or magazine articles.
Bypass the Cathode Resistor?
Assume for one moment that the two output tubes are dissimilarly non-linear. In that case, the plate currents in Eq. (1) are not equal to the plate currents in Eq. (2). The fundamental amplitudes,
B[1], and the harmonic amplitudes, B[2] and B[3], in the two equations are then unequal. Subtraction of (2) from (1), in the case of in-phase signals, or (3) from (1) in the case of out-of-phase
signals, will result in incomplete cancellation of the high-amplitude second harmonics.
The total plate current i, which is equal to i[b1] - i[b2], appears across the common cathode resistor, R[3] in Fig. 1, as a voltage (i[b1] - i[b2])R[3].
Due to circuitry configuration, the voltage across this resistor appears between the cathode and grid of each tube (between B and W, and A and X). Being an amplifier, the tubes amplify this signal
as well as the desired signal appearing at the grids.
Assume that the resistor R[3] is bypassed by a large capacitor, C[1], as is the case in Fig. 1. All the harmonics are then bypassed to ground and not amplified. This may be the most desirable
condition.
In many amplifiers on the market, the cathode resistor is not bypassed to ground^3. What happens then?
In class A operation, there is very little effect on the harmonic distortion. The signal across the cathode resistor causes the harmonics to appear at the two grids, W and X in phase. These
harmonic components will cancel out, resulting in no or little additional harmonic distortion.
In class A, and more so in class AB[1] due to nonlinearity, the harmonics between the cathode and ground will modulate the fundamental input signal appearing between the grid and ground. These
resultant signals are not in phase and will not cancel. The final outcome is additional intermodulation distortion.
Experiments of this type are interesting and should be tried by the reader who possesses harmonic and IM distortion measuring instruments. First make the measurements without a bypass capacitor
across the cathode resistor and then with the bypass capacitor connected. The results are predictable. The record of the magnitudes is interesting.
Results will indicate the desirability of a bypass capacitor in Class A operation and the necessity of this component in class AB[1].
Separate Cathode Resistor
Figure 1 shows one common resistor in both cathodes to develop bias voltage. Is this the most desirable arrangement? Figure 7 shows the same circuit, but with two resistors, one in each cathode
and separately bypassed. Is this better or worse?
Fig. 7. Same circuit as Fig. 1, but with separate bias resistors for each tube. Resistor value is twice that of Fig. 1, for only half of total current goes through it. Bypass capacitor need be only
half that of Fig. 1 to keep the circuits identical
Output tubes vary by as much as 40 per cent from each other. The plate currents can be quite different - especially when operated class AB[1] or more so in Class AB[2].
Assume that tube I has a lower plate current than tube II when operating at the same bias condition. Let us also suppose that at 8 volts bias, tube I has a plate current of 30 mA and tube II has a
plate current of 50 mA. The total d.c. plate current through the common cathode resistor of Fig. 1 would then be 80 mA. Assuming a cathode resistor of 100 ohms (R[3]), the voltage across this
resistor is E[k] = (50 mA + 30 mA)(100 ohms) = 8 volts. Under the conditions illustrated in Fig. 1, the plate current through tube I would be 20 mA less than the plate current through tube II.
Following the same line of reasoning, consider the circuit of Fig. 7. The plate current in tube I is 30 mA, through a 200-ohm resistor, resulting in a bias voltage of (30 mA)(200 ohms) = 6 volts;
the plate current in tube II is 50 mA, through a 200-ohm resistor, resulting in a bias voltage of (50 mA)(200 ohms) = 10 volts. If this condition can exist, the difference in quiescent plate current
would still be 20 mA, as in the case shown in Fig. 1.
However, the 6 volts bias at tube I will permit more than 30 mA to flow in Fig. 7, since a bias as high as 8 volts in Fig. 1 was necessary to limit the current to 30 mA. The plate current will
increase, increasing the bias which is directly dependent on this plate current (E[k] = i[b]R[3]). It will increase until a point of equilibrium is reached. Let us say this equilibrium point is where
the plate current is 35 mA and the bias voltage is E[k] = (35 mA)(200 ohms)=7 volts.
In the case of tube II, quite the opposite effect is achieved. The 50-mA plate current is possible only with an 8-volt bias. When the bias is 10 volts, the plate current must be less than 50 mA.
It will decrease until a point of equilibrium is reached. Let us assume this point to be 45 mA - the cathode bias will then be E[k] = (45 mA)(200 ohms) = 9 volts.
It then becomes obvious that the difference of quiescent currents due to the configuration in Fig. 7 is 45 mA for tube II minus 35 mA for tube I which is equal to 10 mA, while the difference in
the case of Fig. 1 is 50 mA - 30 mA, which is equal to 20 mA. The arrangement with two separate bias resistors will thus tend to make a better-balanced output stage than the one with a
single resistor.
It should be noted that the figures taken for the currents in the second case are purely theoretical. However, the example indicates that the tendency is toward better balance with separate
resistors than with a single resistor. With a good pair of balanced tubes this difference is negligible.
D.C. Balance?
D.c. balance adjusts the bias on tubes so that the quiescent, or d.c. plate current of the two tubes are equal.
Since the d.c. balance is usually adjusted on both tubes to a portion of the curve with equal nonlinearity, there is a tendency towards lower distortion. This is not the main function of the d.c.
balance adjustments.
The d.c. saturation current in the output transformer is a limiting factor on the low-frequency response.
The d.c. current flows from both tubes in opposite directions through the transformer. When these two currents are made equal, the effect of each d.c. current is cancelled by the d.c. current
passing through the transformer from the opposite tube. With no d.c. magnetization (saturation) of the transformer core, the low-frequency response is improved.
This d.c. balance will incidentally also help balance out the hum. Since relatively unfiltered voltages are applied to the plates of the output tubes, there will be a large hum ripple across the
transformer due to plate current. When balanced, the hum ripple across one half of the transformer cancels that appearing across the other half - resulting in no hum output. In fact - the condition
for minimum hum is an excellent point of adjustment for the d.c. balance control.
Fixed Bias
Schematics of two popular circuits used in fixed bias operation are shown in Fig. 8.
Fig. 8. EL34 or KT88 may be used with adjustable bias of about 50 volts. Two arrangements are shown to measure bias and balance voltages.
All the d.c. current passing through a tube - the sum of the plate and screen currents - must pass through the cathode as well. To measure the total tube current conveniently, a small resistor can
be placed in the cathode of each tube. Due to the cathode current being conducted through the resistor, there will be a voltage drop across this small resistor. This voltage is proportional to the
total tube current [E[k]=(i[p] + i[sc])R[3]]. The voltage E[k], measured across this small resistor with any type of voltmeter, is actually a measurement of the tube current.
In (B) of Fig. 8, a 10-ohm resistor is included between cathode and ground in each tube. A balance control is provided so that the d.c. currents in both tubes can be adjusted to be equal. This goal
is achieved when the measured voltages across both resistors are equal.
The bias on a tube controls the current through a tube. This current is measured as a voltage across either 10-ohm resistor. The bias voltage is adjusted to the point that the voltage across
either of the 10-ohm resistors will indicate the optimum operating point for the tubes used.
In Fig. 8, (A) shows the two cathodes connected together and provides a common 10-ohm resistor between the junction of the two cathodes and ground. The current through this resistor is the sum of
the plate and screen currents through both tubes. A bias adjustment is also provided here to adjust the total current to a predetermined value. No balance control is provided and thus only the sum
of the currents through both tubes is controlled. The individual currents through each of the tubes are assumed equal. This may be the case if the tubes are identical.
The advantage of the two-resistor system over the single resistor is only in the flexibility of permitting the individual adjustment of the d.c. currents through each tube.
A good case can be made for the two-resistor system, similar to the excellent case made for using two individual bias resistors in Fig. 7. The voltages developed across the two small resistors, or the
small single resistor of (A) in Fig. 8, are too small to have any real effect in providing balance - signal or d.c. They serve the sole purpose of convenience in measurement.
The advantages of d.c. balance need not be discussed further. The facts outlined above for self-bias conditions, apply here as well.
A.C. Balance
The fact that the voltage inputs to the grids and the outputs to the transformer must be exactly equal and out of phase, is indisputable.
The inputs to both grids may be kept identical without an a.c. balance control when carefully selected load resistors are used in the phase-splitter circuits. The excellent modern phase
splitters^4 make any further balance controls unnecessary.
The signals from both tubes to the output transformer are kept equal only when the tubes have equal gain and fairly similar curves. Providing any balance control or "gimmick" will be worthless if
the tubes are not similar. However, dynamic balance can best be achieved in similar tubes when they are first statically balanced with a d.c. balance control.
1. Hugh Hildreth Skilling, "Electrical Engineering Circuits", John Wiley & Sons, New York, 1957, pp 403-410.
2. MIT, "Applied Electronics", John Wiley & Sons, New York, 1943, pp 438-439.
3. Robert M. Mitchell, "Effect of the cathode capacitor on P-P output stage", AUDIO, Nov. 1955, pp 21-23, 75.
4. Mannie Horowitz, "Phase inverters for hi-fi amplifiers", Radio & TV News, May 1957, pp 9-97.
Pseudo Random String
The Pseudo Random String Generator (or PseudoRandomStringGenerator) is a tool for generating pseudorandom sequences of characters. The application provides a simple graphical user interface where
you can define the alphabet and the length of the string to be generated. You can choose the alphabet for the string either explicitly (character by character, all Unicode characters are valid), or
use preconfigured alphabets like all Latin lowercase letters, all Latin uppercase letters, all numbers, the most common special characters and combinations of these.
Why build a new random number generator, when there are already enough of them out there? While this is true in terms of quantity, I found it necessary to develop my own specially tailored
application. Most existing generators were developed for experts (e.g. scientific software, modules to integrate into other software), or they are online applications. Also, there are many suspicious
closed source generators, which I wouldn't trust. My goal is to deliver a simple tool with a straightforward user interface, so that anybody should be able to use it. Therefore, it is also free and
open source. Additionally, my software is working offline, to be independent from the internet and to avoid vulnerabilities caused by sending the output across the internet.
The current version produces cryptographically secure pseudorandom sequences of characters. You can expect local randomness, even distribution and a cryptographically strong generation process.
Roughly, this means that it is computationally hard - though not impossible - to predict the next characters of a sequence, even if an attacker could read the past output sequence and knows parts of
the inner state of the computer.
The software uses the Java Class SecureRandom to be able to deliver cryptographically secure pseudorandom sequences. Therefore, the output of this class complies with the statistical random number
generator tests as specified in the FIPS 140-2, Security Requirements for Cryptographic Modules, section 4.9.1. and additionally with RFC 1750: Randomness Recommendations for Security ("must produce
non-deterministic output"). I cannot guarantee that my software still meets the above requirements, but from a software engineering point of view, I see no reason why the output of my
implementation should be any weaker than the output of the original Java class. As the software is free and open source, you are always welcome to verify, comment and contribute.
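The application itself is written in Java around java.security.SecureRandom. For readers who want to experiment, an equivalent sketch in Python's standard-library `secrets` module (also backed by a CSPRNG) might look like this - the function name and default alphabet are my own choices, not part of the original program:

```python
import secrets
import string

# Default alphabet: Latin letters and digits; any Unicode characters would work.
DEFAULT_ALPHABET = string.ascii_letters + string.digits


def random_string(length, alphabet=DEFAULT_ALPHABET):
    """Generate a cryptographically secure pseudorandom string."""
    if length < 0 or not alphabet:
        raise ValueError("need a non-negative length and a non-empty alphabet")
    # secrets.choice draws each character from the OS CSPRNG without bias
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

For example, `random_string(12)` yields a 12-character string drawn uniformly from the 62-character default alphabet.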
If you are interested in more information about some of the more technical phrases above, keep reading. Otherwise you may want to go directly to the Download section at the bottom of
this page.
Commonly, randomness is understood as "pertaining to chance" (Parzen, Emanuel: Stochastic Processes. 1962, p. 7). Another simple definition would be, that a random event is one you don't know the
result of. Randomness plays a fundamental role in many disciplines like gaming, lottery, quantum physics, thermodynamics, statistics, probability theory, information theory, biological evolution,
cryptography and even philosophy. Most of these domains use their own definitions of randomness, because a useful universal definition is still missing. Mainly, these definitions deal with properties such as:
• unpredictability: a process can be unpredictable to one person even if it is predictable in principle, because the person does not have enough information about it. Other processes may not be
predictable because our measures are not good enough or the costs are too high. And there may also be processes that are not predictable at all, as is assumed in current definitions of quantum
physics.
• independence: resulting values that were derived from a random process must be independent from each other. E.g. when tossing a fair (unbiased) coin or throwing a fair dice, there are always the
same probabilities for all possible outcomes of an experiment, and no realisation has any effect whatsoever on any other realisation.
• non-determinism: deterministic processes (fixed sequences of calculations, like recipes) always lead to the same output for a given starting condition. Such output is called pseudorandom and
shares many statistical characteristics with random sequences. Like all imitations, pseudorandom sequences may be appropriate in some cases, and as they are simpler and faster to produce,
pseudorandom numbers are widely used today. In contrast, true randomness derives from random processes that are not describable deterministically.
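The determinism described above is easy to demonstrate. Seeding Python's standard (non-cryptographic) Mersenne Twister generator with the same value reproduces the identical "random" sequence every time:

```python
import random


def sequence(seed, n=5):
    """Draw n pseudorandom integers from a generator with a fixed seed."""
    rng = random.Random(seed)  # same starting condition on every call
    return [rng.randrange(100) for _ in range(n)]


# Identical seed, identical output - the hallmark of a deterministic process.
assert sequence(42) == sequence(42)
```

A different seed gives a different starting condition and so, almost surely, a different sequence; but for any one seed the output never varies.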
As scientists still try to solve basic questions like "does randomness exist at all?" (determinism), today's definitions may be subject to quite fundamental changes.
Computers and randomness
Computers are deterministic machines, so for one given input they always produce the same output, or at least they follow an algorithm (a recipe) to produce the output. In the end, the output is
always "signed" by this process. Examining the output reveals information about the structure of the internal code and hardware. Or in the words of John von Neumann: "Anyone who considers
arithmetical methods of producing random digits is, of course, in a state of sin."
Bugs and randomness
Computers are designed to do exactly what they are instructed to; otherwise we call it a bug. There are two kinds of bugs: software bugs and hardware failures. Unfortunately, you can't use a software
bug to invoke "some" randomness, because the bug would not produce random results. Even if it looks like the bug produces random output, there must be a structure in it, because of the program code.
In contrast, if a computer has a hardware failure like a loose contact on the main board, then this could be a true random source. But in practice, the computer most likely wouldn't be able to
process anything else correctly either.
True randomness
So, to generate true random numbers, you need to have some kind of hardware involved that is a true random source. For example, you could make use of a loose contact in a custom-built random
output machine. Or you could use a radio, tune it to a frequency, where you can only hear white noise and then feed this into the sound card of a computer and generate bits based on the input. The
output of this is something a computer can not calculate on his own. On the other side, hardware random number generators are often sensitive to environmental influences like temperature and humidity
and may produce biased output under bad conditions, so they tend to be more error-prone than software based pseudorandom number generators.
A pseudorandom sequence looks random, but it is not. The elements of the sequence were not obtained from random events, but from algorithmic calculations (a deterministic process).
The better the quality of a pseudorandom number generator, the more statistical random number tests it passes. There is a big range of tests, like tests about the distribution of values, the
arithmetic mean value and more specialised ones like the Chi-Square Test and the Autocorrelation Test. Local randomness is also an important property of a pseudorandom string, because in many cases
it is not acceptable if the generator pops out a "random" sequence of 1 million '0's. Sure, it is a valid result from a theoretical viewpoint, but I am also sure that the gaming software industry is
not interested in theoretically perfect generators. In many applications you need a "more reliable" randomness, when examined in local detail. In contrast, if you consider a true random sequence of
sufficient length (consider millions or billions of numbers), there is a high probability to have local areas, that do not look random and exhibit a pattern.
The predictability of pseudorandom number generators can be especially useful for automated tests of software that has to handle random input, because software tests are usually easier to interpret,
if they are repeatable.
Cryptographically secure pseudorandom number generators
As pseudorandom sequences are reproducible exactly and infinitely by definition, computer scientists wanted to utilise the advantages of pseudorandom number generators (working fast, being
independent from external sources of randomness and therefore also less error-prone, because the inappropriate use of these sources can cause severe non-randomness) and add "some" unpredictability,
to generate "cryptographically strong" output sequences. You can find links to the definitions of these concepts in the introducing part on top of this page. In short, the definition of a
cryptographically secure number generator requires that it is hard to compute the output, even when parts of the state of the generator are known.
The influence of alphabet size and string length on possible variations
If you generate a word that consists of 8 digits, there are 100.000.000 possibilities to build a unique word. The formula obviously reads simply as "<alphabet length> to the power of <string length>
". If you just use a larger alphabet, like all Latin characters, digits and some special characters - let's say you have 84 different characters (like the default alphabet in my software) - and stick to
the 8-character length, there are now already 2.478.758.911.082.496 possible combinations. Taking 4 more characters, so that the length of the string is 12 characters, there are now roughly
123.410.307.017.276.100.000.000 possible words. You see, the numbers grow quickly when increasing either of the two parameters, although the string length has more influence.
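The quoted figures follow directly from the formula and can be checked in a couple of lines:

```python
def variations(alphabet_size, length):
    """Number of distinct strings: <alphabet size> ** <string length>."""
    return alphabet_size ** length


# The three cases quoted in the text:
assert variations(10, 8) == 100_000_000            # 8 digits
assert variations(84, 8) == 2_478_758_911_082_496  # 84-character alphabet
assert len(str(variations(84, 12))) == 24          # roughly 1.234 * 10**23
```

Python's arbitrary-precision integers make the exact values trivial to compute even for long strings and large alphabets.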
Proving randomness
How can you prove that something behaves randomly? In fact, you can't, as any possible outcome is a valid random result, as we already mentioned before.
However, here is an example: before proving that a sequence is random, you may want to prove that a single item of the sequence is random, e.g.: Is 7 a random number? You see, this question does not
make sense at all. But if you go further and ask the question for a specific sequence, e.g.: Is 723384 a random sequence? Well, while this question does make sense now, it is not a trivial one and
you will get many different answers. Ian Craw, a mathematician, would answer something like: "Randomness is not a property of individual numbers. More properly it is a property of infinite sequences
of numbers."
Mathematicians have an interesting approach to avoid questions about the randomness of finite strings. They define that a finite sequence itself cannot be called random: a random sequence must be
infinite, and this infinite sequence must contain every possible subsequence an infinite number of times. With this in mind we can now enjoy the following definition: "An infinite random
sequence is infinitely-distributed, in which any possible finite sub-sequence of numbers is uniformly distributed." In practice, this means, that you can not test, if a single sequence was derived
from a random process, but if you take more and more sequences into account, the test results should confirm your assumption more and more. But test results are to be treated with caution, because
they can only serve as indicators, but not as absolute measures for randomness. Note, that you can not quantify randomness itself. A process is either random or not, there is nothing in between.
Thus, a sequence derived from such a process therefore is also either random or not. But you can test, as laid out before, if sequences appear more or less random in specific aspects.
Statistical randomness
Statistics offers a plethora of random number tests, to verify statistical randomness. As always, when using statistical tools, you have to be very careful about how and what questions (hypotheses)
you ask, about where you get the data to test, and about the conclusions you make or try to make. I said that it is impossible to judge whether a sequence is random or not. Having said that, I also have
to admit that statistical tests surely can help a lot when used properly. E.g. the mean value of one specific sequence can be anything; it does not tell you anything about its randomness at this point.
But, as you add more and more sequences to the tested sequence, the mean value should slowly come nearer and nearer to the expected mean value (the mean value of all possible numbers in that test).
If this process does not manifest, you maybe found a hint about a certain non-randomness. You could then do some further testing and eventually decide whether the found hint indicates a correct
assumption, or not.
With statistical methods, you almost never get answers like yes or no, but answers like "with a probability of 95 % this is true". So, a test may lead to the correct answer, but you can never be
sure. Another problem is, that it is not possible to compare values of different tests. Thus, statistics offers helpful instruments to measure aspects of randomness, but is not an all in one
solution. After testing a lot, you may also state as Ian Craw did: "If you want to be thoroughly perverse you could argue that the fact that it passes all such tests is itself evidence of a certain
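The frequency check described above can be sketched with a chi-square statistic over digit counts. A minimal sketch: the function name, sample size, and seed below are our own choices, and the 16.92 threshold is the standard 5 % critical value for 9 degrees of freedom (10 digit classes) from chi-square tables.

```python
# Chi-square statistic for digit frequencies: how far do observed counts
# stray from the uniform expectation?
import random

def chi_square_digits(digits):
    counts = [0] * 10
    for d in digits:
        counts[d] += 1
    expected = len(digits) / 10
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(1)                     # arbitrary seed for reproducibility
sample = [rng.randrange(10) for _ in range(10_000)]
stat = chi_square_digits(sample)
print(round(stat, 2))  # values far above 16.92 would hint at non-uniformity
```

A single pass below the threshold is only an indicator, exactly as cautioned above; consistently large values across many independent samples would be the stronger hint.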
But there are more interesting approaches to test randomness:
Entropy and randomness
The concept of entropy as it was developed in statistical thermodynamics (where entropy is associated with the amount of disorder in a system, e.g. gas at room temperature is of high entropy, as its molecules do not show any describable order) is connected to the concept of entropy in information theory, where entropy is a measure of the amount of information that a message contains (note that in this context the amount of information does not say anything about its usefulness). According to the definition of information theory, a random sequence has the maximum possible entropy. In other words, such a sequence contains so much information that you could not remove a single bit from it without losing information.
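The information-theoretic notion can be made concrete with Shannon's formula for entropy in bits per symbol; this is a small sketch, and the helper name is our own.

```python
# Shannon entropy of a string: -sum(p * log2(p)) over symbol frequencies p.
import math
from collections import Counter

def shannon_entropy(s):
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

print(shannon_entropy("aaaa"))  # a constant string carries no information
print(shannon_entropy("abab"))  # two equally likely symbols: 1.0 bit per symbol
```

A sequence drawn uniformly from k symbols approaches the maximum of log2(k) bits per symbol, matching the claim that random sequences have maximal entropy.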
Compressibility and randomness
The concept of entropy soon led to other measures, such as Kolmogorov-Chaitin complexity in algorithmic information theory. Also known as algorithmic entropy, descriptive complexity, Kolmogorov complexity, program-size complexity or stochastic complexity, the complexity of an object is defined as the amount of information needed to specify this object exactly. Consequently, a random object must be of high complexity, as there should be no way to describe the object any more briefly than the object describes itself. This directly implies that random objects must be incompressible. But you also have to be careful when testing for compressibility. Consider a substring of one million '0's, taken from an infinite true random string. This is a perfectly valid substring, as it must appear in an infinite random string. But this sequence, taken on its own, can be compressed very effectively, even though it was produced by a true random process.
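The zeros-versus-noise contrast can be demonstrated with any general-purpose compressor; in this sketch Python's zlib stands in for an incompressibility test, and the sample sizes are arbitrary.

```python
# Compress an all-zeros string and pseudorandom bytes, and compare sizes.
import random
import zlib

zeros = b"0" * 1_000_000                 # a legal excerpt of a random stream
rng = random.Random(42)                  # arbitrary seed for reproducibility
noise = bytes(rng.randrange(256) for _ in range(100_000))

print(len(zlib.compress(zeros, 9)))      # collapses to roughly a kilobyte
print(len(zlib.compress(noise, 9)))      # stays close to the original 100,000
```

This is exactly the caveat in the text: the zeros excerpt compresses dramatically even though a true random source can emit it, so compressibility alone cannot condemn the source, only the particular string.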
Examples of usage for the software
First, a few words of caution: never use any software that you do not trust, especially if security matters (e.g. to generate passwords).
I use the Pseudo Random String Generator to create passwords for online services, because those are most vulnerable to weak passwords, especially when the password length is limited to 8 or fewer characters. But again, I do not encourage you to generate your personal passwords with my software unless you, or somebody you trust, reads and understands the source code of the application.
Generating random text to fill in "personal" information at suspicious websites/accounts is also helpful, as you inevitably reveal information about yourself if you try to produce a random string manually (you can't escape your own "circuits" and your environment).
Create random-looking patterns/images of characters as works of art, backgrounds, or just for the fun of it.
Play with Fortuna (with her cryptographically secure, algorithmic and deterministic side, though :-)
Downloads and system requirements
This is free and open source software, released under the MIT license, so you can use the software however you like, as long as you don't forget to cite the author and the license.
The current stable version is 1.1.0.9, released 2013-01-26.
See a screenshot including a brief introduction to the operation of the software.
Download the PseudoRandomStringGenerator (197 KB) for Microsoft Windows XP / Vista / 7 / 8. This version needs the Java 6 Runtime (or newer) installed. Alternatively, you can download a standalone and portable version of the PseudoRandomStringGenerator (17 MB) with an included Java 6 Runtime.
Download the PseudoRandomStringGenerator (406 KB) for Apple Mac OS X 10.5 (64-bit Intel only) / 10.6 / 10.7 / 10.8. Mac OS X 10.5 and 10.6 usually have the Java 6 Runtime installed. If not, it is available under System Preferences > Software Update. Mac OS X 10.7 and 10.8 do not include the Java 6 Runtime, but if it is needed, a dialog will pop up and ask if Java should be downloaded and installed.
Download the PseudoRandomStringGenerator (147 KB) for Linux / Unix. The software should run on all operating systems that have the Java 6 Runtime (or newer) installed.
Contact and support
If you have any questions or need support, send a mail to the address mentioned in the copyright notice at the bottom of this page.
Random links
• kolmogorov.com - homepage of Andrei Nikolaevich Kolmogorov, who was an important mathematician. He surely would deserve a more decent homepage though.
• umcs.maine.edu/~chaitin - homepage of Gregory J. Chaitin, containing most of his work. He is a mathematician, computer scientist and philosopher, contributing a lot to algorithmic information
theory and mathematics, with a bias towards topics like limits, paradoxes, randomness, incompleteness and unprovability. Books: Exploring Randomness, Information, Randomness & Incompleteness, The
Unknowable, The Limits Of Mathematics.
• jucs.org - a brief introduction into (pseudo)random number generators by Makoto Matsumoto et al. "Pseudorandom Number Generation: Impossibility and Compromise" Journal of Universal Computer
Science, Vol. 12, No. 6 (2006), 677-679.
• cg.scs.carleton.ca/~luc - a quite extensive link list to random number generation by Luc Devroye.
• cs.auckland.ac.nz/~pgut001 - homepage of Peter Gutmann, a "Professional Paranoid" and thus also dealing a lot with random numbers.
• cwi.nl/~paulv - a paper by Paul M.B. Vitanyi providing an elaborate introduction into many aspects of randomness.
• cs.berkeley.edu/~daw - David Wagner's links to randomness for cryptography.
• random.mat.sbg.ac.at - about random number generators, maintained by mathematicians and computer scientists at the Department of Mathematics of the University of Salzburg.
• random.org - a true random number service that generates randomness via atmospheric noise.
• ciphersbyritter.com - Ciphers By Ritter/Ritter Labs by Terry Ritter, offering information on novel encryption technology (including randomness).
• mindprod.com - Java-related information and links about random numbers, located in the often helpful Java & Internet Glossary of Roedy Green's remarkable homepage.
• statpages.org - an extensive collection of links to free accessible statistical software, online tools, books, manuals, demos, tutorials and related resources with basic descriptions about the
linked web pages.
• r-project.org - a free, open source, multi-platform and widely used standard software for statistical computing. Another good resource to it is the R Programming Wikibook.
• ent - a pseudorandom number sequence test program. The website also has a nice "Introduction to Probability and Statistics" including some online calculators.
• Diehard tests - a battery of statistical random number tests: the original Diehard Battery of Tests of Randomness by George Marsaglia and Dieharder: A Random Number Test Suite by Robert G. Brown,
which is meant to be a cleaned up version of the original, additionally incorporating the tests from the Statistical Test Suite (STS) developed by the National Institute for Standards and
Technology (NIST).
• Online calculators (chi-square tests of goodness of fit and independence, correlation test for independent correlation coefficients and more...) provided by Kristopher J. Preacher.
• bjurke.net - an online pseudorandom string generator from Benjamin Jurke showing some statistical test values about the distribution and entropy of the generated strings.
Pseudo Random String Generator: Copyright (c) 2011 René Herbst (herbst[at]ist.org)
Academy of Chinese Culture Creating a Truth Table for Expressions Questions - Custom Scholars
Logic is, basically, the study of valid reasoning. When searching the internet, we use
Boolean logic – terms like “and” and “or” – to help us find specific web pages that fit in the
sets we are interested in. After exploring this form of logic, we will look at logical
arguments and how we can determine the validity of a claim.
Boolean Logic
We can often classify items as belonging to sets. If you went to the library to search for a book and they asked you to express your search using unions, intersections, and complements of sets, that would feel a little strange. Instead, we typically use words like "and", "or", and "not" to connect our keywords together to form a search. These words, which form the basis of Boolean logic, are directly related to our set operations.
Boolean Logic
Boolean logic combines multiple statements that are either true or false into an
expression that is either true or false.
In connection to sets, a search is true if the element is part of the set.
Suppose M is the set of all mystery books, and C is the set of all comedy books. If we search
for “mystery”, we are looking for all the books that are an element of the set M; the search is
true for books that are in the set.
When we search for “mystery and comedy”, we are looking for a book that is an element of
both sets, in the intersection. If we were to search for “mystery or comedy”, we are looking
for a book that is a mystery, a comedy, or both, which is the union of the sets. If we searched
for “not comedy”, we are looking for any book in the library that is not a comedy, the
complement of the set C.
Connection to Set Operations
A and B    elements in the intersection A ⋂ B
A or B     elements in the union A ⋃ B
Not A      elements in the complement Aᶜ
Notice here that or is not exclusive. This is a difference between the Boolean logic use of the word "or" and its common everyday use. When your significant other asks "do you want to go to the
park or the movies?” they usually are proposing an exclusive choice – one option or the
other, but not both. In Boolean logic, the or is not exclusive – more like being asked at a
restaurant “would you like fries or a drink with that?” Answering “both, please” is an
acceptable answer.
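The search connectives map directly onto Python's set operations, which can make the correspondence above concrete; the tiny catalog below is invented for illustration.

```python
# A toy library catalog: each genre is a set of titles.
mystery = {"Gone Girl", "The Big Over Easy", "The Maltese Falcon"}
comedy = {"Good Omens", "The Big Over Easy"}
library = mystery | comedy | {"Dune"}

print(mystery & comedy)  # "mystery and comedy": the intersection
print(mystery | comedy)  # "mystery or comedy": the union, not exclusive
print(library - comedy)  # "not comedy": the complement within the library
```

Note that the union contains books with both properties, matching the non-exclusive or discussed above.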
Example 1
Suppose we are searching a library database for Mexican universities. Express a reasonable
search using Boolean logic.
We could start with the search “Mexico and university”, but would be likely to find results
for the U.S. state New Mexico. To account for this, we could revise our search to read:
Mexico and university not “New Mexico”
In most internet search engines, it is not necessary to include the word and; the search engine
assumes that if you provide two keywords you are looking for both. In Google’s search, the
keyword or has to be capitalized as OR, and a negative sign in front of a word is used to
indicate not. Quotes around a phrase indicate that the entire phrase should be looked for.
The search from the previous example on Google could be written:
Mexico university -“New Mexico”
Example 2
Describe the numbers that meet the condition:
even and less than 10 and greater than 0
The numbers that satisfy all three requirements are {2, 4, 6, 8}
Sometimes statements made in English can be ambiguous. For this reason, Boolean logic
uses parentheses to show precedent, just like in algebraic order of operations.
Example 3
The English phrase "Go to the store and buy me eggs and bagels or cereal" is ambiguous; it is not clear whether the requester is asking for eggs always along with either bagels or cereal, or for either the combination of eggs and bagels, or just cereal.
For this reason, using parentheses clarifies the intent:
Eggs and (bagels or cereal) means Option 1: Eggs and bagels, Option 2: Eggs and cereal
(Eggs and bagels) or cereal means Option 1: Eggs and bagels, Option 2: Cereal
Example 4
Describe the numbers that meet the condition:
odd number and less than 20 and greater than 0 and (multiple of 3 or multiple of 5)
The first three conditions limit us to the set {1, 3, 5, 7, 9, 11, 13, 15, 17, 19}
The last grouped conditions tell us to find elements of this set that are also either a multiple
of 3 or a multiple of 5. This leaves us with the set {3, 5, 9, 15}
Notice that we would have gotten a very different result if we had written
(odd number and less than 20 and greater than 0 and multiple of 3) or multiple of 5
The first grouped set of conditions gives {3, 9, 15}. When combined with the last condition, though, the set expands without limit:
{3, 5, 9, 10, 15, 20, 25, 30, 35, 40, 45, …}
Be aware that when a string of conditions is written without grouping symbols, it is often interpreted from left to right, resulting in the latter interpretation.
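The two groupings from Example 4 can be checked mechanically with set comprehensions; the range 1..49 below is an arbitrary stand-in for "all numbers", since the second set is unbounded.

```python
# Grouping 1: odd AND 0 < n < 20 AND (multiple of 3 OR multiple of 5)
grouped_or = {n for n in range(1, 50)
              if n % 2 == 1 and 0 < n < 20 and (n % 3 == 0 or n % 5 == 0)}

# Grouping 2: (odd AND 0 < n < 20 AND multiple of 3) OR multiple of 5
or_last = {n for n in range(1, 50)
           if (n % 2 == 1 and 0 < n < 20 and n % 3 == 0) or n % 5 == 0}

print(sorted(grouped_or))  # [3, 5, 9, 15]
print(sorted(or_last))     # [3, 5, 9, 10, 15, 20, 25, 30, 35, 40, 45]
```

Moving the parentheses changes which numbers qualify, exactly as in the worked example.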
Beyond searching, Boolean logic is commonly used in spreadsheet applications like Excel to
do conditional calculations. A statement is something that is either true or false. A statement like 3 < 5 is true; a statement like "a rat is a fish" is false. A statement like "x < 5" is true for some values of x and false for others. When an action is taken or not depending on the value of a statement, it forms a conditional.

Statements and Conditionals
A statement is either true or false. A conditional is a compound statement of the form "if p then q" or "if p then q, else s".

Example 5
In common language, an example of a conditional statement would be "If it is raining, then we'll go to the mall. Otherwise we'll go for a hike." The statement "If it is raining" is the condition; this may be true or false for any given day. If the condition is true, then we will follow the first course of action and go to the mall. If the condition is false, then we will use the alternative and go for a hike.

Example 6
As mentioned earlier, conditional statements are commonly used in spreadsheet applications like Excel. In Excel, you can enter a conditional expression of the form =IF(condition, value if true, value if false).

Example 7
Suppose the expression =IF(A1>5, 2*A1, 3*A1) is used. Find the result if A1 is 3, and the result if A1 is 8.
This is equivalent to saying: if A1 > 5, then calculate 2*A1; otherwise, calculate 3*A1.
If A1 is 3, then the condition is false, since 3 > 5 is not true, so we do the alternate action and multiply by 3, giving 3*3 = 9.
If A1 is 8, then the condition is true, since 8 > 5, so we multiply the value by 2, giving 2*8 = 16.
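The spreadsheet expression =IF(A1>5, 2*A1, 3*A1) behaves like a conditional expression in Python; a minimal sketch, with a function name of our own choosing.

```python
# =IF(condition, value if true, value if false) as a Python ternary.
def spreadsheet_if(a1):
    return 2 * a1 if a1 > 5 else 3 * a1

print(spreadsheet_if(3))  # condition false, so 3*3 = 9
print(spreadsheet_if(8))  # condition true, so 2*8 = 16
```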
Example 8
An accountant needs to withhold 15% of income for taxes if the income is below $30,000,
and 20% of income if the income is $30,000 or more. Write an expression that would
calculate the amount to withhold.
Our conditional needs to compare the value to 30,000. If the income is less than 30,000, we need to calculate 15% of the income: 0.15*income. If the income is 30,000 or more, we need to calculate 20% of the income: 0.20*income.
In words we could write "If income < 30,000, then multiply by 0.15, otherwise multiply by 0.20". In Excel, we would write: =IF(A1<30000, 0.15*A1, 0.20*A1)

Excel also allows compound conditions built with AND and OR. For the condition "A1<3000 and A1>100" in Excel, you would need to enter "AND(A1<3000, A1>100)". Likewise, for the condition "A1=4 or A1=6" you would enter "OR(A1=4, A1=6)".

Example 10
In a spreadsheet, cell A1 contains annual income, and A2 contains number of dependents. A certain tax credit applies if someone with no dependents earns less than $10,000, or if someone with dependents earns less than $20,000. Write a rule that describes this.
There are two ways the rule is met: income is less than 10,000 and dependents is 0, or income is less than 20,000 and dependents is not 0. Informally, we could write these as
(A1 < 10000 and A2 = 0) or (A1 < 20000 and A2 > 0)
Notice that the A2 > 0 condition is actually redundant and not necessary, since we'd only be considering that or case if the first pair of conditions were not met. So this could be simplified to
(A1 < 10000 and A2 = 0) or (A1 < 20000)
In Excel's format, we'd write:
=IF( OR( AND(A1 < 10000, A2 = 0), A1 < 20000), "you qualify", "you don't qualify")

Truth Tables
Because complex Boolean statements can get tricky to think about, we can create a truth table to keep track of which truth values for the simple statements make the complex statement true and false.

Truth table
A table showing what the resulting truth value of a complex statement is for all the possible truth values of the simple statements.

Example 11
Suppose you're picking out a new couch, and your significant other says "get a sectional or something with a chaise". This is a complex statement made of two simpler conditions: "is a sectional" and "has a chaise". For simplicity, let's use S to designate "is a sectional" and C to designate "has a chaise". The condition S is true if the couch is a sectional.

A truth table for this would look like:

S  C  S or C
T  T  T
T  F  T
F  T  T
F  F  F

In the table, T is used for true and F for false. In the first row, if S is true and C is also true, then the complex statement "S or C" is true. This would be a sectional that also has a chaise, which meets our desire. Remember also that or in logic is not exclusive; if the couch has both features, it does meet the condition.

To shorthand our notation further, we're going to introduce some symbols that are commonly used for and, or, and not.

Symbols
The symbol ⋀ is used for and: A and B is notated A ⋀ B.
The symbol ⋁ is used for or: A or B is notated A ⋁ B.
The symbol ~ is used for not: not A is notated ~A.

You can remember the first two symbols by relating them to the shapes for the union and intersection. A ⋀ B would be the elements that exist in both sets, in A ⋂ B. Likewise, A ⋁ B would be the elements that exist in either set, in A ⋃ B.

In the previous example, the truth table was really just summarizing what we already know about how the or statement works. The truth tables for the basic and, or, and not statements are shown below.

Basic truth tables

A  B  A ⋀ B        A  B  A ⋁ B        A  ~A
T  T  T            T  T  T            T  F
T  F  F            T  F  T            F  T
F  T  F            F  T  T
F  F  F            F  F  F

Truth tables really become useful when analyzing more
complex Boolean statements.

Example 12
Create a truth table for the statement A ⋀ ~(B ⋁ C)

It helps to work from the inside out when creating truth tables, and create tables for intermediate operations. We start by listing all the possible truth value combinations for A, B, and C. Notice how the first column contains 4 Ts followed by 4 Fs, the second column contains 2 Ts, 2 Fs, then repeats, and the last column alternates. This pattern ensures that all combinations are considered. Along with those initial values, we'll list the truth values for the innermost expression, B ⋁ C.

A  B  C  B ⋁ C
T  T  T  T
T  T  F  T
T  F  T  T
T  F  F  F
F  T  T  T
F  T  F  T
F  F  T  T
F  F  F  F

Next we can find the negation of B ⋁ C, working off the B ⋁ C column we just created.

A  B  C  B ⋁ C  ~(B ⋁ C)
T  T  T  T      F
T  T  F  T      F
T  F  T  T      F
T  F  F  F      T
F  T  T  T      F
F  T  F  T      F
F  F  T  T      F
F  F  F  F      T

Finally, we find the values of A ⋀ ~(B ⋁ C).

A  B  C  B ⋁ C  ~(B ⋁ C)  A ⋀ ~(B ⋁ C)
T  T  T  T      F          F
T  T  F  T      F          F
T  F  T  T      F          F
T  F  F  F      T          T
F  T  T  T      F          F
F  T  F  T      F          F
F  F  T  T      F          F
F  F  F  F      T          F

It turns out that this complex expression is only true in one case: if A is true, B is false, and C is false.

Try it Now 1
Create a truth table for this statement: (~A ⋀ B) ⋁ ~B

When we discussed conditions earlier, we discussed the type where we take an action based on the value of the condition. We are now going to talk about a more general version of a conditional, sometimes called an implication.

Implications
Implications are logical conditional sentences stating that a statement p, called the antecedent, implies a consequence q. Implications are commonly written as p → q.

Implications are similar to the conditional statements we looked at earlier; p → q is typically written as "if p then q" or "p therefore q". The difference between implications and conditionals is that the conditionals we discussed earlier suggest an action: if the condition is true, then we take some action as a result. An implication is a logical statement asserting that the consequence must logically follow if the antecedent is true.

Example 13
The English statement "If it is raining, then there are clouds in the sky" is a logical implication. It is a valid argument because if the antecedent "it is raining" is true, then the consequence "there are clouds in the sky" must also be true. Notice that the statement tells us nothing of what to expect if it is not raining. If the antecedent is false, then the implication becomes irrelevant.

Example 14
A friend tells you that "if you upload that picture to Facebook, you'll lose your job". There are four possible outcomes:
1) You upload the picture and keep your job
2) You upload the picture and lose your job
3) You don't upload the picture and keep your job
4) You don't upload the picture and lose your job

There is only one possible case where your friend was lying: the first option, where you upload the picture and keep your job. In the last two cases, your friend didn't say anything about what would happen if you didn't upload the picture, so you can't conclude their statement is invalid, even if you didn't upload the picture and still lost your job.

In traditional logic, an implication is considered valid (true) as long as there are no cases in which the antecedent is true and the consequence is false. It is important to keep in mind that symbolic logic cannot capture all the intricacies of the English language.

Truth values for implications

p  q  p → q
T  T  T
T  F  F
F  T  T
F  F  T

Example 15
Construct a truth table for the statement (m ⋀ ~p) → r

We start by constructing a truth table for the antecedent.

m  p  ~p  m ⋀ ~p
T  T  F   F
T  F  T   T
F  T  F   F
F  F  T   F

Now we can build the truth table for the implication.

m  p  r  ~p  m ⋀ ~p  (m ⋀ ~p) → r
T  T  T  F   F        T
T  T  F  F   F        T
T  F  T  T   T        T
T  F  F  T   T        F
F  T  T  F   F        T
F  T  F  F   F        T
F  F  T  T   F        T
F  F  F  T   F        T

In this case, when m is true, p is false, and r is false, then the antecedent m ⋀ ~p will be true but the consequence false, resulting in an
invalid implication; every other case gives a valid implication.

For any implication, there are three related statements: the converse, the inverse, and the contrapositive.

Related Statements
The original implication is "if p then q": p → q
The converse is "if q then p": q → p
The inverse is "if not p then not q": ~p → ~q
The contrapositive is "if not q then not p": ~q → ~p

Example 16
Consider again the valid implication "If it is raining, then there are clouds in the sky".
The converse would be "If there are clouds in the sky, it is raining." This is certainly not always true.
The inverse would be "If it is not raining, then there are not clouds in the sky." Likewise, this is not always true.
The contrapositive would be "If there are not clouds in the sky, then it is not raining." This statement is valid, and is equivalent to the original implication.

Looking at truth tables, we can see that the original implication and the contrapositive are logically equivalent, and that the converse and inverse are logically equivalent.

p  q  Implication p → q  Converse q → p  Inverse ~p → ~q  Contrapositive ~q → ~p
T  T  T                  T               T                T
T  F  F                  T               T                F
F  T  T                  F               F                T
F  F  T                  T               T                T

Equivalence
A conditional statement and its contrapositive are logically equivalent. The converse and inverse of a statement are logically equivalent.

We have now examined several truth-functional connectives, three of which are two-place connectives (conjunction, disjunction, and implication) and one of which is a one-place connective (negation). There is one remaining connective that is generally studied in sentential logic, the biconditional, which corresponds to the English "if and only if". Like the implication, the biconditional is a two-place connective; if we fill the two blanks with statements, the resulting expression is also a statement. For example, we can begin with the statements "I am happy" and "I am relaxed" and form the compound statement "I am happy if and only if I am relaxed".

Truth values for the biconditional

p  q  p ↔ q
T  T  T
T  F  F
F  T  F
F  F  T

Arguments
A logical argument is a claim that a set of premises support a conclusion. There are two
general types of arguments: inductive and deductive arguments.

Argument types
An inductive argument uses a collection of specific examples as its premises and uses them to propose a general conclusion.
A deductive argument uses a collection of general statements as its premises and uses them to propose a specific situation as the conclusion.

Example 17
The argument "when I went to the store last week I forgot my purse, and when I went today I forgot my purse. I always forget my purse when I go to the store" is an inductive argument.
The premises are:
I forgot my purse last week
I forgot my purse today
The conclusion is:
I always forget my purse
Notice that the premises are specific situations, while the conclusion is a general statement. In this case, this is a fairly weak argument, since it is based on only two instances.

Example 18
The argument "every day for the past year, a plane has flown over my house at 2pm. A plane will fly over my house every day at 2pm" is a stronger inductive argument, since it is based on a larger set of evidence.

Evaluating inductive arguments
An inductive argument is never able to prove the conclusion true, but it can provide either weak or strong evidence to suggest it may be true.

Many scientific theories, such as the big bang theory, can never be proven. Instead, they are inductive arguments supported by a wide variety of evidence. Usually in science, an idea is considered a hypothesis until it has been well tested, at which point it graduates to being considered a theory. The commonly known scientific theories, like Newton's theory of gravity, have all stood up to years of testing and evidence, though sometimes they need to be adjusted based on new evidence. For gravity, this happened when Einstein proposed the theory of general relativity.

A deductive argument is more clearly valid or not, which makes deductive arguments easier to evaluate.

Evaluating deductive arguments
A deductive argument is considered valid if all the premises are true and the conclusion follows logically and necessarily from those premises.

Example 19
The argument "All cats are mammals and a tiger is a cat, so a tiger is a mammal" is a valid deductive argument.
The premises are:
All cats are mammals
A tiger is a cat
The conclusion is:
A tiger is a mammal
Both premises are true. To see that the premises must logically lead to the conclusion, one approach is to use a Venn diagram. From the first premise, we can conclude that the set of cats is a subset of the set of mammals. From the second premise, we are told that a tiger lies within the set of cats. From that, we can see in the Venn diagram that the tiger also lies inside the
set of mammals, so the conclusion is valid.

[Venn diagram: the Cats circle inside the Mammals circle, with the tiger marked "x" inside Cats.]

Analyzing arguments with Venn diagrams¹
To analyze an argument with a Venn diagram:
1) Draw a Venn diagram based on the premises of the argument
2) If the premises are insufficient to determine the location of an element, indicate that
3) The argument is valid if it is clear that the conclusion must be true

¹Technically, these are Euler circles or Euler diagrams, not Venn diagrams, but for the sake of simplicity we'll continue to call them Venn diagrams.

Example 20
Premise: All firefighters know CPR
Premise: Jill knows CPR
Conclusion: Jill is a firefighter

From the first premise, we know that firefighters all lie inside the set of those who know CPR. From the second premise, we know that Jill is a member of that larger set, but we do not have enough information to know whether she is also a member of the smaller subset that is firefighters.

[Venn diagram: the Firefighters circle inside the Know CPR circle, with Jill marked "x?" both inside and outside Firefighters.]

Since the conclusion does not necessarily follow from the premises, this is an invalid argument, regardless of whether Jill actually is a firefighter. It is important to note that whether or not Jill is actually a firefighter is not important in evaluating the validity of the argument; we are only concerned with whether the premises are enough to prove the conclusion.

Try it Now 2
Determine the validity of this argument:
Premise: No cows are purple
Premise: Fido is not a cow
Conclusion: Fido is purple

In addition to these categorical style premises of the form "all ___", "some ___", and "no ___", it is also common to see premises that are implications.

Example 21
Premise: If you live in Seattle, you live in Washington.
Premise: Marcus does not live in Seattle
Conclusion: Marcus does not live in Washington

From the first premise, we know that the set of people who live in Seattle is inside the set of those who live in Washington. From the second premise, we know that Marcus does not lie in the Seattle set, but we have insufficient information to know whether or not Marcus lives in Washington. This is an invalid argument.

[Venn diagram: the Seattle circle inside the Washington circle, with Marcus marked "x?" both inside and outside Washington.]

Example 22
Consider the argument "You are a married man, so you must have a wife." This is an invalid argument, since there are, at least in parts of the world, men who are married to other men, so the premise is not sufficient to imply the conclusion.

Some arguments are better analyzed using truth tables.

Example 23
Consider the argument
Premise: If you bought bread, then you went to the store
Premise: You bought bread
Conclusion: You went to the store

While this example is hopefully fairly obviously a valid argument, we can analyze it using a truth table by representing each of the premises symbolically. We can then look at the implication that the premises together imply the conclusion. If the truth table is a tautology (always true), then the argument is valid. We'll let B represent "you bought bread" and S represent "you went to the store". Then the argument becomes:
Premise: B → S
Premise: B
Conclusion: S
To test the validity, we look at whether the combination of both premises implies the conclusion: is it true that [(B → S) ⋀ B] → S ?

B  S  B → S  (B → S) ⋀ B  [(B → S) ⋀ B] → S
T  T  T      T             T
T  F  F      F             T
F  T  T      F             T
F  F  T      F             T

Since the truth table for [(B → S) ⋀ B] → S is always true, this is a valid argument.

Try it Now 3
Determine if the argument is valid:
Premise: If I have a shovel, I can dig a hole.
Premise: I dug a hole
Conclusion: Therefore I had a shovel

Analyzing arguments using truth tables
To analyze an argument with a truth table:
1. Represent each of the premises symbolically
2. Create a conditional statement, joining all the premises with and to form the antecedent, and using the conclusion as the consequent
3. Create a truth table for that statement. If it is always true, then the argument is valid.

Example 24
Premise: If I go to the mall, then I'll buy new jeans
Premise: If I buy new jeans, I'll buy a shirt to go with it
Conclusion: If I go to the mall, I'll buy a shirt.

Let M = I go to the mall, J = I buy jeans, and S = I buy a shirt. The premises and conclusion can be stated as:
Premise: M → J
Premise: J → S
Conclusion: M → S
We can construct a truth table for [(M → J) ⋀ (J → S)] → (M → S).

M  J  S  M → J  J → S  (M → J) ⋀ (J → S)  M → S  [(M → J) ⋀ (J → S)] → (M → S)
T  T  T  T      T      T                   T      T
T  T  F  T      F      F                   F      T
T  F  T  F      T      F                   T      T
T  F  F  F      T      F                   F      T
F  T  T  T      T      T                   T      T
F  T  F  T      F      F                   T      T
F  F  T  T      T      T                   T      T
F  F  F  T      T      T                   T      T

From the truth table, we can see this is a valid argument.

The previous problem is an example of a syllogism.

Syllogism
A syllogism is an implication derived from two others, where the consequence of one is the antecedent to the other. The general form of a syllogism is:
Premise: p→q
Premise: q→r
Conclusion: p→r
This is sometimes called the transitive property for implication.

Example 25
Premise: If I work hard, I’ll get a raise.
Premise: If I get a raise, I’ll buy a boat.
Conclusion: If I don’t buy a boat, I must not have worked hard.
If we let W = working hard, R = getting a raise, and B = buying a boat, then we can represent our argument symbolically:
Premise: W→R
Premise: R→B
Conclusion: ~B → ~W
We could construct a truth table for this argument, but instead, we will use the notation of the contrapositive we learned earlier to note that the implication ~B → ~W is equivalent to the implication W → B. Rewritten, we can see that this conclusion is indeed a logical syllogism derived from the premises.

Try it Now 4
Is this argument valid?
Premise: If I
go to the party, I’ll be really tired tomorrow.
Premise: If I go to the party, I’ll get to see friends.
Conclusion: If I don’t see friends, I won’t be tired tomorrow.
Lewis Carroll, author of Alice in Wonderland, was a math and logic teacher, and wrote two books on logic. In them, he would propose premises as a puzzle, to be connected using syllogisms.

Example 26
Solve the puzzle. In other words, find a logical conclusion from these premises.
All babies are illogical.
Nobody is despised who can manage a crocodile.
Illogical persons are despised.
Let B = is a baby, D = is despised, I = is illogical, and M = can manage a crocodile. Then we can write the premises as:
B→I
M → ~D
I→D
From the first and third premises, we can conclude that B → D; that babies are despised. Using the contrapositive of the second premise, D → ~M, we can conclude that B → ~M; that babies cannot manage crocodiles. While silly, this is a logical conclusion from the given premises. Logical Fallacies
in Common Language In the previous discussion, we saw that logical arguments can be invalid when the premises are not true, when the premises are not sufficient to guarantee the conclusion, or when
there are invalid chains in logic. There are a number of other ways in which arguments can be invalid, a sampling of which are given here. Ad hominem An ad hominem argument attacks the person making
the argument, ignoring the argument itself. Example 27 “Jane says that whales aren’t fish, but she’s only in the second grade, so she can’t be right.” Here the argument is attacking Jane, not the
validity of her claim, so this is an ad hominem argument. Example 28 “Jane says that whales aren’t fish, but everyone knows that they’re really mammals – she’s so stupid.” This certainly isn’t very
nice, but it is not ad hominem since a valid counterargument is made along with the personal insult. Appeal to ignorance This type of argument assumes something is true because it hasn’t been proven
false. Example 29 “Nobody has proven that photo isn’t Bigfoot, so it must be Bigfoot.” Appeal to authority These arguments attempt to use the authority of a person to prove a claim. While often
authority can provide strength to an argument, problems can occur when the person’s opinion is not shared by other experts, or when the authority is irrelevant to the claim. Example 30 “A diet
high in bacon can be healthy – Doctor Atkins said so.” Here, an appeal to the authority of a doctor is used for the argument. This generally would provide strength to the argument, except that the opinion that eating a diet high in saturated fat is healthy runs counter to general medical opinion. More supporting evidence would be needed to justify this claim. Example 31 “Jennifer Hudson lost weight with
Weight Watchers, so their program must work.” Here, there is an appeal to the authority of a celebrity. While her experience does provide evidence, it provides no more than any other person’s
experience would. Appeal to consequence An appeal to consequence concludes that a premise is true or false based on whether the consequences are desirable or not. Example 32 “Humans will travel
faster than light: faster-than-light travel would be beneficial for space travel.” False dilemma A false dilemma argument falsely frames an argument as an “either or” choice, without allowing for
additional options. Example 33 “Either those lights in the sky were an airplane or aliens. There are no airplanes scheduled for tonight, so it must be aliens.” This argument ignores the possibility
that the lights could be something other than an airplane or aliens. Circular reasoning Circular reasoning is an argument that relies on the conclusion being true for the premise to be true.
Example 34 “I shouldn’t have gotten a C in that class; I’m an A student!” In this argument, the student is claiming that because they’re an A student, they shouldn’t have gotten a C. But because
they got a C, they’re not an A student. Straw man A straw man argument involves misrepresenting the argument in a less favorable way to make it easier to attack. Example 35 “Senator Jones has
proposed reducing military funding by 10%. Apparently he wants to leave us defenseless against attacks by terrorists” Here the arguer has represented a 10% funding cut as equivalent to leaving us
defenseless, making it easier to attack. Post hoc (post hoc ergo propter hoc) A post hoc argument claims that because two things happened sequentially, then the first must have caused the second.
Example 36 “Today I wore a red shirt, and my football team won! I need to wear a red shirt every time they play to make sure they keep winning.” Correlation implies causation Similar to post hoc, but
without the requirement of sequence, this fallacy assumes that just because two things are related one must have caused the other. Often there is a third variable not considered. Example 37 “Months
with high ice cream sales also have a high rate of deaths by drowning. Therefore ice cream must be causing people to drown.” This argument is implying a causal relation, when really both are more
likely dependent on the weather; that ice cream and drowning are both more likely during warm summer months. Try it Now 5 Identify the logical fallacy in each of the arguments a. Only an
untrustworthy person would run for office. The fact that politicians are untrustworthy is proof of this. b. Since the 1950s, both the atmospheric carbon dioxide level and obesity levels have
increased sharply. Hence, atmospheric carbon dioxide causes obesity. c. The oven was working fine until you started using it, so you must have broken it. d. You can’t give me a D in the class – I
can’t afford to retake it. e. The senator wants to increase support for food stamps. He wants to take the taxpayers’ hard-earned money and give it away to lazy people. This isn’t fair so we shouldn’t
do it.

Try it Now Answers
1.
A | B | ~A | ~A ⋀ B | ~B | (~A ⋀ B) ⋁ ~B
T | T |  F |   F    |  F |      F
T | F |  F |   F    |  T |      T
F | T |  T |   T    |  F |      T
F | F |  T |   F    |  T |      T

2. Since no cows are purple, we know there is no overlap between the set of cows and the set of purple things. We know Fido is not in the cow set, but that is not enough to conclude that Fido is in the purple things set.
[Venn diagram: disjoint circles for “Cows” and “Purple things”; Fido is outside the cow circle, but may be either inside or outside the purple circle.]
3. Let S: have a shovel, D: dig a hole. The first premise is equivalent to S→D. The second premise is D. The conclusion is S. We are testing [(S→D) ⋀ D] → S

S | D | S→D | (S→D) ⋀ D | [(S→D) ⋀ D] → S
T | T |  T  |     T     |        T
T | F |  F  |     F     |        T
F | T |  T  |     T     |        F
F | F |  T  |     F     |        T
This is not a tautology, so this is an invalid argument.
4. Letting P = go to the party, T = being tired, and F = seeing friends, then we can represent this argument as:
Premise: P→T
Premise: P→F
Conclusion: ~F → ~T
We could rewrite the second premise using the contrapositive to state ~F → ~P, but that does not allow us to form a syllogism. If we don’t see friends, then we didn’t go to the party, but that is not sufficient to claim I won’t be tired tomorrow. Maybe I stayed up all night watching movies.
5.
a. Circular
b. Correlation does not imply causation
c. Post hoc
d. Appeal to consequence
e. Straw man

Exercises
Compute the truth values of the following symbolic statements, supposing that the truth value of A, B, C is T, and the truth value of X, Y, Z is F. 1. ~A ⋁ B 2. ~B ⋁ X
3. ~Y ⋁ C 4. ~Z ⋁ X 5. (A ⋀ X) ⋁ (B ⋀ Y) 6. (B ⋀ C) ⋁ (Y ⋀ Z) 7. ~(C ⋀ Y) ⋁ (A ⋀ Z) 8. ~(A ⋀ B) ⋁ (X ⋀ Y) 9. ~(X ⋀ Z) ⋁ (B ⋀ C) 10. ~(X ⋀ ~Y) ⋁ (B ⋀ ~C) 11. (A ⋁ X) ⋀ (Y ⋁ B) 12. (B ⋁ C) ⋀ (Y ⋁ Z)
13. (X ⋁ Y) ⋀ (X ⋁ Z) 14. ~(A ⋁ Y) ⋀ (B ⋁ X) 15. ~(X ⋁ Z) ⋀ (~X ⋁ Z) 16. ~(A ⋁ C) ⋁ ~(X ⋀ ~Y) 17. ~(B ⋁ Z) ⋀ ~(X ⋁ ~Y) 18. ~[(A ⋁ ~C) ⋁ (C ⋁ ~A)] 19. ~[(B ⋀ C) ⋀ ~(C ⋀ B)] 20. ~[(A ⋀ B) ⋁ ~(B ⋀ A)]
21. [A ⋁ (B ⋁ C)] ⋀ ~[(A ⋁ B) ⋁ C] 22. [X ⋁ (Y ⋀ Z)] ⋁ ~[(X ⋁ Y) ⋀ (X ⋁ Z)] 23. [A ⋀ (B ⋁ C)] ⋀ ~[(A ⋀ B) ⋁ (A ⋀ C)] 24. ~{[(~A ⋀ B) ⋀ (~X ⋀ Z)] ⋀ ~[(A ⋀ ~B) ⋁ ~(~Y ⋀ ~Z)]} 25. ~{~[(B ⋀ ~C) ⋁ (Y ⋀
~Z)] ⋀ [(~B ⋁ X) ⋁ (B ⋁ ~Y)]} 26. A → B 27. A → X 28. B → Y 29. Y → Z 30. (A → B) → Z 31. (X → Y) → Z 32. (A → B) → C 33. (X → Y) → C 34. A → (B → Z) 35. X → (Y → Z) 36. [(A → B) → C] → Z 37. [(A
→ X) → Y] → Z 38. [A → (X → Y)] → C 39. [A → (B → Y)] → X 40. [(X → Z) → C] → Y 41. [(Y → B) → Y] → Y 42. [(A → Y) → B] → Z 43. [(A ⋀ X) → C] → [(X → C) → X] 44. [(A ⋀ X) → C] → [(A → X) → C] 45. [(A
⋀ X) → Y] → [(X → A) → (A → Y)] 46. [(A ⋀ X) ⋁ (~A ⋀ ~X)] → [(A → X) ⋀ (X → A)] 47. {[A → (B → C)] → [(A ⋀ B) → C]} → [(Y → B) → (C → Z)] 48. {[(X → Y) → Z] → [Z → (X → Y)]} → [(X → Z) → Y] 49. [(A ⋀
X) → Y] → [(A → X) ⋀ (A → Y)] 50. [A → (X ⋀ Y)] → [(A → X) ⋁ (A → Y)] Construct the complete truth table for each of the following formulas: 51. (P ⋀ Q) ⋁ (P ⋀ ~Q) 52. ~(P ⋀ ~P) 53. ~(P ⋁ ~P) 54. ~(P
⋀ Q) ⋁ (~P ⋁ ~Q) 55. ~(P ⋁ Q) ⋁ (~P ⋀ ~Q) 56. (P ⋀ Q) ⋁ (~P ⋀ ~Q) 57. ~(P ⋁ (P ⋀ Q)) 58. ~(P ⋁ (P ⋀ Q)) ⋁ P 59. (P ⋀ (Q ⋁ P)) ⋀ ~P 60. ((P → Q) → P) → P 61. ~(~(P → Q) → P) 62. (P → Q) ↔ ~P 63. P → (Q
→ (P ⋀ Q)) 64. (P ⋁ Q) ↔ (~P → Q) 65. ~(P ⋁ (P → Q)) 66. (P → Q) ↔ (Q → P) 67. (P → Q) ↔ (~Q → ~P) 68. (P ⋁ Q) → (P ⋀ Q) 69. (P ⋀ Q) ⋁ (P ⋀ R) 70. [P ↔ (Q ↔R)] ↔ [(P ↔Q) ↔R] 71. [P → (Q ⋀ R)] → [P →
R] 72. [P → (Q ⋁ R)] → [P → Q] 73. [(P → Q) → R] → [P → R] 74. [(P ⋀ Q) → R] → [P → R] 75. [(P ⋀ Q) → R] → [(Q ⋀ ~R) → ~P] | {"url":"https://customscholars.com/academy-of-chinese-culture-creating-a-truth-table-for-expressions-questions/","timestamp":"2024-11-14T09:00:26Z","content_type":"text/html","content_length":"86355","record_id":"<urn:uuid:c02d9352-f2fb-4e86-9b41-ec3afd52a74b>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00595.warc.gz"} |
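The exercise values above can also be checked mechanically. As an editorial sketch (Python is not part of the original text), here are a few of the numbered formulas evaluated with A = B = C = T and X = Y = Z = F, encoding → with a small helper:

```python
# Truth values given in the exercises: A, B, C true; X, Y, Z false.
A = B = C = True
X = Y = Z = False

def implies(p, q):
    # Material conditional: p -> q is false only when p is true and q is false.
    return (not p) or q

print((not A) or B)               # exercise 1:  ~A v B          -> True
print((A and X) or (B and Y))     # exercise 5:  (A ^ X) v (B ^ Y) -> False
print(implies(A, B))              # exercise 26: A -> B          -> True
print(implies(implies(A, B), Z))  # exercise 30: (A -> B) -> Z   -> False
```

The same helper can be reused for the longer formulas in the list.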
Introducing Dilemmas in Symbolic Logic
This video completes a series on the rules of inference by covering the two types of dilemma: Constructive Dilemma and Destructive Dilemma. If you don’t understand any of the terminology used in this
blog post, check out the earlier blog posts about this series on the rules of inference.
These two rules are extended forms of Modus Ponens and Modus Tollens. Constructive Dilemma is an extended form of Modus Ponens. Where the latter has one conditional with the affirmation of its
antecedent, Constructive Dilemma has a conjunction of two conditionals with the affirmation of at least one of their antecedents. This implies that at least one of the consequents is true.
Constructive Dilemma
(P ⊃ Q) & (R ⊃ S)
P v R
∴ Q v S
Destructive Dilemma is an extended form of Modus Tollens. Where the latter has one conditional with the denial of its consequent, Destructive Dilemma has the conjunction of two conditionals with the
denial of at least one of their consequents. This implies that at least one of the antecedents is false.
Destructive Dilemma
(P ⊃ Q) & (R ⊃ S)
~Q v ~S
∴ ~P v ~R
The validity of Constructive Dilemma is demonstrated with a truth table in the video, and showing the validity of Destructive Dilemma with a truth table is left as an exercise. Intuitively, you
should be able to understand them as valid if you understand how they work like Modus Ponens and Modus Tollens. For the same reason that these are valid, so are these:
(P ⊃ Q) & (R ⊃ S) & (T ⊃ U)
(P v R) v T
∴ (Q v S) v U
(P ⊃ Q) & (R ⊃ S) & (T ⊃ U)
(~Q v ~S) v ~U
∴ (~P v ~R) v ~T
No matter how many conditionals you string together in a single conjunction, you can infer that one of the consequents is true if you know that one of the antecedents is true, or you can infer that
one of the antecedents is false if you know that one of the consequents is false. These longer arguments could also be shown to be valid with truth tables, but the truth tables would be large. With
rules of inference and rules of replacement (yet to be explained), these arguments can be shown to be valid.
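As the paragraph above notes, the truth tables for these extended dilemmas get large. A brute-force check over all truth assignments does the same verification mechanically; this is an editorial Python sketch, not part of the original post:

```python
from itertools import product

def implies(p, q):
    # Material conditional: p -> q.
    return (not p) or q

# Extended Constructive Dilemma:
#   (P > Q) & (R > S) & (T > U),  (P v R) v T,  therefore (Q v S) v U.
# The argument is valid iff no assignment makes every premise true
# while the conclusion is false; there are only 2^6 = 64 assignments.
valid = True
for P, Q, R, S, T, U in product([False, True], repeat=6):
    premises = (implies(P, Q) and implies(R, S) and implies(T, U)
                and (P or R or T))
    if premises and not (Q or S or U):
        valid = False
print(valid)  # True: no row makes the premises true and the conclusion false
```

The same loop, with the disjunctions negated, checks the extended Destructive Dilemma.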
There are three ways to deal with a dilemma.
1. You can go between the horns of a dilemma, which means to deny or question its disjunctive premise.
2. You can grab a dilemma by its horns, which means to deny or question one of its conditionals.
3. You can also create a counterdilemma, which keeps the antecedents of the old dilemma but replaces the consequents with ones that put the issue in a different light.
Details and examples are given in the video. | {"url":"https://fortheloveofwisdom.net/53/logic/dilemmas/","timestamp":"2024-11-12T21:49:18Z","content_type":"text/html","content_length":"63856","record_id":"<urn:uuid:c2090df1-5c99-4080-82d7-92ffb1adcc07>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00675.warc.gz"} |
Algebra Class 8 - NCERT Solutions, Worksheets, MCQ (with videos)
Updated for new NCERT.
Get NCERT Solutions of Chapter 8 Class 8 Algebraic Expressions and Identities free at Teachoo. Answers to all exercise questions and examples have been solved with step-by-step solutions. Concepts are
explained before doing the questions.
In this chapter, we will learn
• What are algebra expressions
• Terms, Factors and Coefficients in an Algebra Expression
• What are monomials, binomials, trinomials and polynomials
• What are like and unlike terms in an algebraic expression
• Adding and Subtracting Algebra Expression
• Multiplication of Algebra Expressions
□ Multiplying two monomials
□ Multiplying three or more monomials
□ Multiplying Monomial by a Binomial
□ Multiplying Monomial by a Trinomial
□ Multiplying Binomial by a Binomial
□ Multiplying Binomial by a Trinomial
• Algebra Identities
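The multiplication topics in the list above all come down to multiplying terms and collecting like terms. As an editorial illustration (this code is not part of the Teachoo page), a polynomial in one variable can be represented by its coefficient list and multiplied with two nested loops:

```python
# Multiply two polynomials in one variable, represented as coefficient
# lists [c0, c1, c2, ...], where c_k is the coefficient of x^k.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b   # x^i * x^j contributes to x^(i+j)
    return out

# Binomial times binomial: (x + 3)(x + 5) = x^2 + 8x + 15
print(poly_mul([3, 1], [5, 1]))   # [15, 8, 1]

# Identity check: (x + 1)^2 = x^2 + 2x + 1
print(poly_mul([1, 1], [1, 1]))   # [1, 2, 1]
```

The second call verifies the identity (a + b)² = a² + 2ab + b² in the special case a = x, b = 1 by direct expansion.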
Here, we have divided the chapter into 2 parts - Serial Order Wise and Concept Wise.
Just like the NCERT Book, in Serial Order Wise, the chapter is divided into exercises and examples. This is useful if you are looking for answer to a specific question.
That is not a good way of studying.
In the NCERT Book, the first question is on one topic, the second question is on some other topic. There is no order.
We have solved that using Concept Wise.
In Concept Wise, the chapter is divided into concepts. First the concept is taught. Then the questions of that concept are answered, from easy to difficult.
Click on a link to start doing the chapter | {"url":"https://www.teachoo.com/subjects/cbse-maths/class-8/chapter-9-algebraic-expressions-and-identities/","timestamp":"2024-11-10T11:34:42Z","content_type":"text/html","content_length":"109997","record_id":"<urn:uuid:33ffac2b-5651-4a0f-8a6c-a04e6972ee8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00626.warc.gz"} |
Metamath Proof Explorer
Syntax definition walsi
Description: Extend wff definition to include "all some" applied to a top-level implication, which means ps is true whenever ph is true, and there is at least one x where ph is true.
(Contributed by David A. Wheeler, 20-Oct-2018)
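Based on the description above, the intended meaning of this "all some" construct can be sketched as follows (an editorial reading, not part of the Metamath page; φ and ψ render ph and ps):

```latex
% "All some" applied to an implication: the implication holds for every x,
% and the antecedent is satisfied by at least one x.
\forall! x\,(\varphi \to \psi)
  \;\leftrightarrow\;
  \forall x\,(\varphi \to \psi) \,\wedge\, \exists x\,\varphi
```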
Ref Expression
Assertion walsi $wff ∀! x φ → ψ$ | {"url":"http://metamath.tirix.org/mpests/walsi.html","timestamp":"2024-11-03T00:03:09Z","content_type":"text/html","content_length":"2889","record_id":"<urn:uuid:d1fc93f9-0ca2-419d-aa35-e7b44ec654a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00735.warc.gz"} |
Efficiently Finding Minimum Values in Subarrays Using Precalculation with JavaScript
Our task here involves an array composed of at most 1,000 elements and potentially millions of queries. Each query is a pair of integers denoted as l and r, which correspond to some indices in the
array. Your goal is to write a JavaScript function that, for each query, returns the minimum value in the array between indices l and r (inclusive).
The catch is this: rather than directly finding the minimum value for each query one by one, we're required to optimize the process. The idea here is to pre-calculate the minimum value for each
possible l and r, store these values, and then proceed with the queries. This way, we can simplify the problem and enhance the speed of our solution by eliminating redundant computations.
The function will accept three parameters: arr, Ls, and Rs. The primary array is arr, while Ls and Rs are arrays that hold the l and r values respectively for each query. For instance, let's say you
have an array like [2, 1, 3, 7, 5] and the following queries: [0, 2, 4] and [1, 3, 4]. The aim of our function would be to return [1, 3, 5] as the minimum values within the ranges of the three queries. | {"url":"https://learn.codesignal.com/preview/lessons/3353","timestamp":"2024-11-13T15:12:50Z","content_type":"text/html","content_length":"155508","record_id":"<urn:uuid:6bee97fb-b2d3-4a60-8a43-a89575e5efd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00318.warc.gz"} |
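The lesson asks for a JavaScript function; as an editorial sketch of the precalculation idea described above (the function name and the O(n²) table layout are illustrative choices, not taken from the lesson), here is the same approach in Python: precompute min(arr[l..r]) for every pair (l, r) once, then answer each query in O(1):

```python
def solution(arr, Ls, Rs):
    n = len(arr)
    # mins[l][r] holds min(arr[l..r]); each row is filled left to right
    # by extending the running minimum, so the whole table costs O(n^2).
    mins = [[0] * n for _ in range(n)]
    for l in range(n):
        cur = arr[l]
        for r in range(l, n):
            cur = min(cur, arr[r])
            mins[l][r] = cur
    # Every query is now a constant-time lookup.
    return [mins[l][r] for l, r in zip(Ls, Rs)]

print(solution([2, 1, 3, 7, 5], [0, 2, 4], [1, 3, 4]))  # [1, 3, 5]
```

With n ≤ 1,000 the table has at most a million entries, so millions of queries cost only the lookups.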
Source for wiki GeneralizedEqualCowan version 16
This site is a static rendering of the Trac instance that was used by R7RS-WG1 for its work on R7RS-small (PDF), which was ratified in 2013. For more information, see Home.
== Generalized `equal?` predicate ==
`(generalized-equal? `''obj1 obj2''` . `''comparator-list''`)`
Compares ''obj1'' and ''obj2'' for equality. A ''comparator'' is a procedure that is given two arguments to compare. It returns `#t` if its arguments are to be considered equal, `#f` if they are to be considered unequal, and the symbol `pass` if it cannot decide. It is an error for a comparator to return anything else. The third argument passed to a comparator is ''comparator-list'', to be used in recursive calls to `generalized-equal?`.
First, each element of ''comparator-list'' is invoked on ''obj1'' and ''obj2'', passing ''comparator-list'' as its third argument. If the comparator returns `#t` or `#f`, that is the result.
If all comparators in ''comparator-list'' have been invoked with a `pass` result, then `generalized-equal?` behaves as if it had been invoked on the comparators `list-comparator`, `string-comparator`, `vector-comparator`, and `bytevector-comparator` defined below, plus additional implementation-defined comparators if any. If all of these return `pass`, then `generalized-equal?` returns what `eqv?` returns.
When `generalized-equal?` is invoked with an empty comparator list, it returns what `equal?` returns, except possibly on implementation-defined object types that are not record types. When it is invoked with `numeric-comparator`, `char-ci-comparator`, `string-ci-comparator`, and a comparator that descends into hash tables, it returns what Common Lisp's `equalp` returns.
`(make-atomic-comparator `''type-predicate compare-predicate''`)`
Returns a comparator that invokes ''type-predicate'' on its first and its second arguments. If they both return `#t`, then they are assumed to be of the same type, and ''compare-predicate'' is invoked on the first and second arguments together. If the result is `#t` or `#f`, then the comparator returns `#t` or `#f` respectively. If they are not of the same type, a third value is returned. The resulting comparator always ignores its third argument.
`(make-specific-equality ` . `''comparator-list''`)`
Return a curried version of `generalized-equal?` that accepts two arguments to compare and uses the comparators in ''comparator-list''.
== Standard comparators ==
`(numeric-comparator `''obj1 obj2 comparators-list''`)`
A comparator that returns `#t` if ''obj1'' and ''obj2'' are numbers that are equal in the sense of `=`, `#f` if they are numbers that are not equal in the sense of `=`, and `pass` otherwise. The ''comparators-list'' argument is ignored.
`(char-ci-comparator `''obj1 obj2 comparators-list''`)`
A comparator that returns `#t` if ''obj1'' and ''obj2'' are characters that are equal in the sense of `char-ci=?`, `#f` if they are characters that are not equal in the sense of `char-ci=?`, and `pass` otherwise. The ''comparators-list'' argument is ignored.
`(list-comparator `''obj1 obj2 comparators-list''`)`
A comparator that returns `#t` if ''obj1'' and ''obj2'' are lists of the same length whose elements are equal in the sense of `generalized-equal?` when passed ''comparators-list'', `#f` if they are lists that are not equal in that sense, and `pass` otherwise. The ''comparators-list'' argument is ignored.
`(string-comparator `''obj1 obj2 comparators-list''`)`
A comparator that returns `#t` if ''obj1'' and ''obj2'' are strings that are equal in the sense of `string=?`, `#f` if they are strings that are not equal in the sense of `string=?`, and `pass` otherwise. The ''comparators-list'' argument is ignored.
`(string-ci-comparator `''obj1 obj2 comparators-list''`)`
A comparator that returns `#t` if ''obj1'' and ''obj2'' are strings that are equal in the sense of `string-ci=?`, `#f` if they are strings that are not equal in the sense of `string-ci=?`, and `pass` otherwise. The ''comparators-list'' argument is ignored.
`(vector-comparator `''obj1 obj2 comparators-list''`)`
A comparator that returns `#t` if ''obj1'' and ''obj2'' are vectors of the same length whose elements are equal in the sense of `generalized-equal?` when passed ''comparators-list'', `#f` if they are vectors that are not equal in that sense, and `pass` otherwise. The ''comparators-list'' argument is ignored.
`(bytevector-comparator `''obj1 obj2 comparators-list''`)`
A comparator that returns `#t` if ''obj1'' and ''obj2'' are bytevectors of the same length whose elements are equal in the sense of `generalized-equal?` when passed ''comparators-list'', `#f` if they are bytevectors that are not equal in that sense, and `pass` otherwise. The ''comparators-list'' argument is ignored.
When used by an implementation that doesn't provide bytevectors, this procedure always returns `pass`.
2013-05-23 10:22:01 | {"url":"https://small.r7rs.org/wiki/GeneralizedEqualCowan/16/source.html","timestamp":"2024-11-06T06:17:24Z","content_type":"text/html","content_length":"6747","record_id":"<urn:uuid:f35b4665-20e3-46d3-9e17-dba266407c00>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00558.warc.gz"} |
How to solve this simple non-linear equation numerically with Abs[]?
I was just trying to get into Mathematica a little more but I've been stuck when trying to solve an equation. The H function models a low pass filter and I want to find out the cut off frequency.
Z[\[Omega]_] := 1 /(\[ImaginaryJ]*\[Omega]*c)
H[\[Omega]_] := Z[\[Omega]]/(r + Z[\[Omega]])
c = 1*^-6; r = 1000;
LogLogPlot[Abs[H[2 \[Pi]*f]], {f, 1, 1000000}, ImageSize -> Large, AxesOrigin -> {1, 1*^-3}, GridLines -> {{160}, {}}]
NSolve[Abs[H[2 \[Pi]*f]] == 1/Sqrt[2], f, Reals]
NSolve[Abs[H[2 \[Pi]*f]] <= 1/Sqrt[2], f, Reals]
f = 160; Abs[H[2 \[Pi]*f]] < 1/Sqrt[2]
Plotting the simple diagram worked. Now, when I try to numerically solve this (in)equation, I get this error:
NSolve::nddc: "The system ... contains a nonreal constant -500000\ I. With the domain Reals specified, all constants should be real."
Even though the Abs[] around it should get rid of the i. However, when I "manually" test the inequation, I get back true. What am I doing wrong?
Thanks for any help
In[1]:= Z[\[Omega]_] := 1/(\[ImaginaryJ]*\[Omega]*c);
H[\[Omega]_] := Z[\[Omega]]/(r + Z[\[Omega]]);
c = 1*^-6; r = 1000;
Reduce[Abs[H[2 \[Pi]*f]] == 1/Sqrt[2] && f > 0, f]
Out[4]= f == 500/\[Pi]
This gives the exact result in terms of \[Pi] rather than a decimal approximation, as long as you don't use a decimal point anywhere in the input.
From the documentation on NSolve (see the Details section) you can find out that
NSolve deals primarily with linear and polynomial equations.
You are obviously dealing with a non-polynomial equation because of the Abs[] function. In such cases, use FindRoot. From the plot we see that the solution is located around 160:
Z[\[Omega]_] := 1/(\[ImaginaryJ]*\[Omega]*c)
H[\[Omega]_] := Z[\[Omega]]/(r + Z[\[Omega]])
c = 1*^-6; r = 1000;
LogLogPlot[{1/Sqrt[2], Abs[H[2 \[Pi]*f]]}, {f, 1, 1000000},
ImageSize -> Large, AxesOrigin -> {1, 1*^-3}, GridLines -> {{160}, {}}]
And this simple line solves your problem:
FindRoot[Abs[H[2 \[Pi]*f]] == 1/Sqrt[2], {f, 100}]
(* {f -> 159.155} *)
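As an editorial cross-check outside Mathematica (not part of the original thread): for this RC low-pass, |H(2πf)| = 1/sqrt(1 + (2πfRC)²), so the cutoff is f = 1/(2πRC) = 500/π ≈ 159.155 Hz, which a simple bisection also finds:

```python
import math

c, r = 1e-6, 1000.0

def gain(f):
    # |H(2*pi*f)| for an RC low-pass: 1 / sqrt(1 + (2*pi*f*r*c)^2)
    w = 2 * math.pi * f
    return 1 / math.sqrt(1 + (w * r * c) ** 2)

# gain(f) is strictly decreasing in f, so bisect on gain(f) - 1/sqrt(2).
lo, hi = 1.0, 1e6
for _ in range(100):
    mid = (lo + hi) / 2
    if gain(mid) > 1 / math.sqrt(2):
        lo = mid
    else:
        hi = mid

print(round(lo, 3), round(500 / math.pi, 3))  # both 159.155
```

This agrees with both the FindRoot result and the exact Reduce result above.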
| {"url":"https://community.wolfram.com/groups/-/m/t/171094","timestamp":"2024-11-08T10:42:28Z","content_type":"text/html","content_length":"101217","record_id":"<urn:uuid:3e64c324-591e-4252-80fd-a879d6de9805>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00713.warc.gz"} |
AI-powered effective lens position prediction improves the accuracy of existing lens formulas
Aims To assess whether incorporating a machine learning (ML) method for accurate prediction of postoperative anterior chamber depth (ACD) improves the refraction prediction performance of existing
intraocular lens (IOL) calculation formulas.
Methods A dataset of 4806 patients with cataract was gathered at the Kellogg Eye Center, University of Michigan, and split into a training set (80% of patients, 5761 eyes) and a testing set (20% of
patients, 961 eyes). A previously developed ML-based method was used to predict the postoperative ACD based on preoperative biometry. This ML-based postoperative ACD was integrated into new effective
lens position (ELP) predictions using regression models to rescale the ML output for each of four existing formulas (Haigis, Hoffer Q, Holladay and SRK/T). The performance of the formulas with
ML-modified ELP was compared using a testing dataset. Performance was measured by the mean absolute error (MAE) in refraction prediction.
Results When the ELP was replaced with a linear combination of the original ELP and the ML-predicted ELP, the MAEs±SD (in Diopters) in the testing set were: 0.356±0.329 for Haigis, 0.352±0.319 for
Hoffer Q, 0.371±0.336 for Holladay, and 0.361±0.331 for SRK/T which were significantly lower (p<0.05) than those of the original formulas: 0.373±0.328 for Haigis, 0.408±0.337 for Hoffer Q,
0.384±0.341 for Holladay and 0.394±0.351 for SRK/T.
Conclusion Using a more accurately predicted postoperative ACD significantly improves the prediction accuracy of four existing IOL power formulas.
• lens and zonules
• treatment surgery
Data availability statement
Data may be obtained from the Sight Outcomes Research Collaborative (SOURCE) repository for participating institutions and are not publicly available.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this
work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is
non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
The estimation of postoperative intraocular lens (IOL) position is essential to IOL power calculations for cataract surgery. Norrby and Olsen have reported that inaccuracy in the prediction of the
postoperative anterior chamber depth (ACD) is the number one source of error for postoperative refraction prediction.1 2 In addition to its vital role in IOL formulas, the postoperative ACD is also a
critical variable in ray tracing, where the uncertainty in the postoperative ACD directly affects the accuracy of the results. Methods to improve the accuracy of the prediction of postoperative ACD
have been studied for decades. In first-generation formulas, the lens position was represented by a constant. Later, more and more preoperative biometric variables such as the axial length (AL) and
the corneal power were added to calculate the postoperative IOL position. In 1993, Holladay first proposed the term ‘expected lens position’ or ELP to indicate the location of the lens as it relates
to a given optical model of the eye.3 The ELP estimates in SRK/T, Holladay1 and Hoffer Q are derived based on theoretical formulas. The ELP estimate in the Haigis formula is a simple linear
combination of the AL and the preoperative ACD. Although ELP was initially intended to estimate the position of the IOL, ELPs in the aforementioned formulas were developed to account for different
formula-specific assumptions and regression results.1 4 In order to reflect the use of ELP to account for these formula-specific assumptions and regression results, the term ELP today refers to
‘effective lens position’ rather than ‘expected lens position’. In view of the limitations of the ELP in existing formulas, recently, more efforts have been devoted to constructing ELPs that better
reflect the true location of the IOL.5–9 New IOL power prediction methods have also been developed based on the new-generation ELP prediction methods, and they have shown that using a more accurately
predicted IOL position helps to improve the IOL power prediction accuracy.5
It is so far largely unexplored whether inserting a more accurately predicted ELP into existing formulas improves refraction prediction accuracy. This is an important question because: (1) it
provides a fast and efficient way to modify and improve on existing IOL formulas whose reliability has been tested extensively; (2) such research can provide supports for translating the continued
improvements in accuracy in postoperative ACD prediction into better refraction predictions in published formulas. Several previous studies had modified the ELPs in existing formulas in order to
achieve better refraction prediction results in certain cataract cases. Modification of ELP calculation in the Haigis formula for sulcus-implanted IOLs was reported to improve performance.10 Kim et
al adjusted the ELP estimation in SRK/T formulas with the corneal height in postrefractive patients and achieved satisfactory accuracy.11 It remains to be explored whether improvement of ELP
estimates for in-the-bag IOL placement can improve IOL power calculations of existing formulas for general cataract patients.
Since most recently published IOL formulas (eg, Barrett Universal II,12 13 Holladay 2, Olsen formula14) are either not disclosed to the public or do not have the option to customise the value of ELP
during the prediction of postoperative refraction, here we applied our previously developed postoperative ACD prediction methods to a dataset of 4806 cataract surgery patients and replaced the ELP
estimates in 4 existing IOL formulas: Haigis, Hoffer Q, Holladay and SRK/T. We combined our machine learning (ML) prediction of true postoperative ACD with the original ELP estimated by each formula
and substituted this updated ELP prediction for each formula. We then compared the refraction prediction performance of each formula using its original and enhanced ELP estimates. The findings
reported here demonstrate that existing formulas can benefit from improved methods for predicting true postoperative ACD.
Materials and methods
Postoperative ACD prediction ML model
In previous work,15 we developed an ML-based postoperative ACD prediction model, which predicts the postoperative ACD (in mm) based on preoperative biometry. In the present study, an ACD
prediction ML model was trained using the method and dataset (847 patients, 1205 eyes, 4137 records) described in the previous research. The dataset was composed of the preoperative and postoperative
biometry measured by the Lenstar LS900 optical biometers (Haag-Streit USA, EyeSuite software V.i9.1.0.0) at the University of Michigan’s Kellogg Eye Center. The postoperative ACD was defined as the
distance from the front surface of the cornea to the front surface of the IOL. The postoperative ACD predicted by the ML model is referred to as in this manuscript.
Data collection
In this study, biometry records were collected using the same approach as for the development of the ML postoperative ACD prediction model at University of Michigan’s Kellogg Eye Center.15
The inclusion criteria were: (1) patients who had cataract surgery (Current Procedural Terminology (CPT) code = 66984 or 66982) but no prior refractive surgery and no additional surgical procedures
at the time of cataract surgery. (2) The implanted lens was an Alcon SN60WF single-piece acrylic monofocal lens (Alcon, USA). Each case in the dataset corresponds to one operation of a single eye
with preoperative and postoperative information. The preoperative information includes the measurements of the AL, lens thickness (LT), ACD, flat keratometry (K1), steep keratometry (K2), and the
average keratometry, which was calculated as . The postoperative information includes the postoperative refraction (spherical component SC and cylindrical component CC) recorded at the time closest to 1 month (30 days) after surgery. Since the patients were measured in a lane 10 feet (3.048 m) long, which was shorter than the standard length of 20 feet (6 m), the SC was
adjusted for the vergence distance by adding according to Simpson and Charman’s recommendation.16 The spherical equivalent (SE) refraction was therefore calculated as . Samples that were used to
train the postoperative ACD prediction ML model were excluded from the dataset so that the dataset better simulates unseen samples.
The dataset in total consisted of 4806 patients (figure 1). The dataset was split into a training dataset used for the development of the methods and a testing dataset used for performance
comparison. Eighty per cent of the patients were randomly assigned to the training set, and the rest of the patients (20%) were assigned to the testing set. For patients who had more than one
associated case in the testing set (ie, patients who had both eyes operated on), one case was randomly selected to ensure each patient had the same weight when the prediction performance was
evaluated. At the end of this process, the training set had 3845 patients (5761 eyes), and the testing set had 961 patients (961 eyes).
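The splitting procedure described above (an 80/20 split by patient, then one randomly selected eye per test patient) can be sketched as follows. This is an illustrative sketch only — the paper does not publish its splitting code, and all names here are hypothetical:

```python
import random

def split_patients(cases, test_frac=0.2, seed=0):
    """Split (patient_id, eye_id) cases by patient, then keep one randomly
    chosen eye per test patient so each patient carries equal weight."""
    rng = random.Random(seed)
    patients = sorted({p for p, _ in cases})
    rng.shuffle(patients)
    test_ids = set(patients[:int(round(test_frac * len(patients)))])
    train = [c for c in cases if c[0] not in test_ids]
    eyes_by_patient = {}
    for c in cases:
        if c[0] in test_ids:
            eyes_by_patient.setdefault(c[0], []).append(c)
    test = [rng.choice(eyes) for eyes in eyes_by_patient.values()]
    return train, test

# Synthetic demo: 10 patients, every second patient with both eyes operated.
cases = [(p, e) for p in range(10) for e in ([0, 1] if p % 2 == 0 else [0])]
train, test = split_patients(cases)
```

Splitting by patient (rather than by eye) prevents the two eyes of one person from landing in both sets, which would leak information into the evaluation.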
Linear regression model
We implemented four existing formulas (Haigis, Hoffer Q, Holladay, and SRK/T) in Python based on their publications.17–24 The existing formulas calculated the ELP () as a function of the preoperative
biometry (figure 1): . The predicted ELP () was then used to predict the postoperative refraction: . Here, the goal was to reduce the refraction prediction error by replacing with a different value,
. Our approach involves two steps: (1) finding the theoretically most optimal ELP values, (2) modelling the most optimal ELP with and the ML-predicted postoperative ACD, denoted .
In the first step, the most optimal ELP (denoted ) was found by the standard method of back-calculating the ELP when the predicted refraction was set to equal the true refraction (ie, ). In other
words, when , the refraction prediction errors of all patients equal zero. More details on the computation of can be found in online supplemental materials.
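The exact back-calculation is given in the supplement; generically, the idea is a one-dimensional root-finding problem: search for the ELP at which the formula's predicted refraction equals the observed refraction. The sketch below assumes a monotone prediction function and an illustrative toy model — neither the bracket nor the 1.5 D/mm slope is from the paper:

```python
def back_solve_elp(predict_refraction, observed_refraction,
                   lo=2.0, hi=8.0, tol=1e-8):
    """Bisection for the ELP (mm) at which predicted == observed refraction.
    Assumes the prediction is monotone in ELP and brackets the root on [lo, hi]."""
    f = lambda elp: predict_refraction(elp) - observed_refraction
    f_lo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = f(mid)
        if abs(f_mid) < tol:
            return mid
        if (f_lo < 0) == (f_mid < 0):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy monotone "formula": each mm of ELP shifts predicted refraction by +1.5 D.
elp_star = back_solve_elp(lambda e: 1.5 * (e - 5.0), 0.75)
```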
Supplemental material
After the computation of , , and we modelled using a linear function of and/or so as to obtain an approximation of the most optimal ELP using available variables. We compared four different
approaches of approximating : (1) original, : using the original (2) Formula LR, : using linearly adjusted (3) ML LR, : using linearly adjusted (4) Formula & ML LR, using a linear combination of and
. Here, , and are constants. Outliers with large refraction errors (ie, or ) were excluded for each formula before establishing the linear regression model, in order to obtain better modelling
results. The refraction prediction errors were calculated as . The linear regression was performed using scikit-learn 0.20.3.
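The paper fit these models with scikit-learn 0.20.3; as a dependency-free sketch of the same idea, the ordinary least-squares fit of the optimal ELP as a linear combination of the formula ELP and the ML-predicted ACD (plus intercept) can be solved from the normal equations. All data below are synthetic placeholders:

```python
def fit_linear(X, y):
    """Least squares for y ≈ c0 + c1*x1 + c2*x2 via the normal equations."""
    rows = [[1.0, x1, x2] for (x1, x2) in X]
    n, p = len(rows), 3
    A = [[sum(rows[i][a] * rows[i][b] for i in range(n)) for b in range(p)]
         for a in range(p)]
    v = [sum(rows[i][a] * y[i] for i in range(n)) for a in range(p)]
    for col in range(p):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    coef = [0.0] * p
    for r in range(p - 1, -1, -1):
        s = v[r] - sum(A[r][c] * coef[c] for c in range(r + 1, p))
        coef[r] = s / A[r][r]
    return coef  # [intercept, weight_formula_elp, weight_ml_acd]

# Synthetic (formula ELP, ML-predicted ACD) pairs and an exactly linear target.
X = [(4.0, 4.2), (4.5, 4.8), (5.0, 5.1), (5.5, 5.9), (6.0, 6.3)]
y = [0.5 + 0.3 * x1 + 0.7 * x2 for (x1, x2) in X]
c0, c1, c2 = fit_linear(X, y)
```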
On the testing set, was calculated based on the values of , , and obtained through linear regression. The predicted refraction was calculated as The mean absolute error (MAE), median absolute error
(MedAE) and mean error (ME) were calculated for performance comparison.
A-constant optimisation
The A-constants for the formulas were optimised based on the training dataset so that the ME in refraction prediction was closest to zero. The A-constants were optimised separately for the unmodified
formulas and formulas with a modified ELP estimate (see additional details in the A-constant optimisation section and online supplemental figure S1). The optimised A-constants for the original
formulas were: a0=−0.733, a1=−0.234, a2=0.217 for Haigis, ACD constant=5.724 for Hoffer Q, surgeon factor=1.864 for Holladay, and A=119.089 for SRK/T (online supplemental table S1).
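Constant optimisation of this kind can be sketched as a fixed-point iteration that nudges the constant against the mean error (ME) until the ME is near zero. The linear response assumed below is a toy model for illustration, not any formula's actual behaviour:

```python
def optimise_constant(mean_error, a0, step=1.0, iters=200):
    """Iteratively shift the lens constant against the mean prediction error.
    Converges when |1 - step * d(ME)/dA| < 1 (toy assumption)."""
    a = a0
    for _ in range(iters):
        a -= step * mean_error(a)
    return a

# Toy model: ME grows 0.4 D per unit of A-constant around A = 119.
a_opt = optimise_constant(lambda a: 0.4 * (a - 119.0), a0=118.0)
```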
Statistical analysis
Linear regression analysis was used to assess the significance of the correlation between , , and . To test whether the MAE and ME of different methods were significantly different, a Friedman test
followed by a post hoc paired Wilcoxon signed-rank test with Bonferroni correction was used. Statistical significance was defined as the p value<0.05. All the above analyses were performed with
Python V.3.7.3.
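SciPy provides these tests directly (`scipy.stats.friedmanchisquare`, `scipy.stats.wilcoxon`); purely for illustration, the Friedman chi-square statistic over paired per-eye errors can also be computed by hand, with average ranks for ties. The data here are synthetic:

```python
def friedman_statistic(errors_by_method):
    """errors_by_method: k lists of n paired errors (one list per method,
    aligned by eye). Returns the Friedman chi-square statistic."""
    k = len(errors_by_method)
    n = len(errors_by_method[0])
    rank_sums = [0.0] * k
    for i in range(n):
        row = sorted((errors_by_method[j][i], j) for j in range(k))
        r = 0
        while r < k:
            s = r
            while s + 1 < k and row[s + 1][0] == row[r][0]:
                s += 1
            avg_rank = (r + s) / 2 + 1          # average rank across ties
            for t in range(r, s + 1):
                rank_sums[row[t][1]] += avg_rank
            r = s + 1
    return 12.0 / (n * k * (k + 1)) * sum(R * R for R in rank_sums) - 3 * n * (k + 1)

# Three methods, five eyes; method 0 is always best, method 2 always worst.
errs = [[1.0 + 0.1 * i for i in range(5)],
        [2.0 + 0.1 * i for i in range(5)],
        [3.0 + 0.1 * i for i in range(5)]]
chi2 = friedman_statistic(errs)
alpha_bonferroni = 0.05 / 3   # 3 pairwise Wilcoxon comparisons for k = 3 methods
```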
Dataset overview
The cases in the training and testing datasets had a similar distribution according to the summary statistics shown in table 1. As elaborated in the Materials and methods section, we calculated , and
based on the formulas and their optimised A-constants. The mean and SD of the ELPs calculated based on the original formulas were summarised in online supplemental table S2. and had similar mean
values in contrast to .
The Pearson correlation coefficients ( R ) between and were shown in table 2. Three ELP-related variables were positively intercorrelated with each other. The correlation coefficients, R , between
and were the weakest among the three pairs of variables across all formulas.
Linear regression results on the training set
Linear regression models were established based on the training set and the of alternative linear models were shown in table 3. The coefficients of the fitted linear regression line are shown in
online supplemental table S3. The mean and SD of the resulting from different models are shown in online supplemental table S4. For ‘Formula LR’, the was larger than that of ‘ML LR’ for all four
formulas. For ‘Formula & ML LR’, the was larger than that when one of or was excluded from the linear combination for all four formulas.
Refraction prediction performance comparison on the testing set
We tested the performance of four scenarios on the testing set and summarised the MAE and SD in table 4. The ME and MedAE were shown in online supplemental tables S5 and S6. Statistical tests were
used to compare the difference in the MAEs of different models (see the Materials and methods section). Using a linear combination of and , the refraction prediction results of four existing formulas
were significantly improved compared with original (statistical test results shown in online supplemental tables S7 and S8).
We further compared the MAEs of ‘Original’ and ‘Formula & ML LR’ among patients with short, medium and long AL (online supplemental table S9). It was observed that the short and medium AL groups had
a higher percentage decrease in MAE than the long AL group for Hoffer Q and SRK/T. For Haigis, the medium AL group achieved a higher decrease than the other two groups, and for Holladay, the long AL
group achieved a greater decrease in MAE than the other two groups.
In this study, we applied a previously developed ML method for postoperative ACD prediction to an unseen dataset of 4806 cataract surgery patients to assess whether it was possible to improve the
performance of existing IOL formulas (Haigis, Hoffer Q, Holladay, and SRK/T) by replacing each formula’s ELP estimate.
We computed three ELP-related quantities: the ML-predicted postoperative ACD (), formula-predicted ELP (), and a back-calculated ELP () that minimised the refraction error for each eye in the
dataset. They are strongly correlated with each other (table 2), which indicates that (1) and are both predictive of the most optimal ELP , (2) and contain partially overlapping information, which is
consistent with our expectation. is an estimation of the value of the true postoperative ACD. On the other hand, was designed by the originators of each formula to serve a similar purpose but was
based on the theoretical assumptions in each formula. Our findings are consistent with observations of previous studies that the ELP estimates made by IOL formulas were numerically different from the
true postoperative ACD.9
Using a training dataset of 3845 patients, we sought to evaluate whether the machine-predicted postoperative ACD, , was able to provide information that could be used to refine each formula’s
predicted ELP, . We established regression models between the , , and to evaluate whether a linear combination of and used in place of the original could lower the refraction prediction error. Using
the modified ELPs, we obtained significantly lower MAEs in refraction prediction compared with the formulas with the original ELPs on the unseen testing set (table 4). Notably, the accurately
predicted postoperative ACD () alone did not outperform the original ELP () when it was inserted into the formulas (table 4, row 3 compared with row 1). This is likely because the original method of
calculating ELP in each formula compensates for its particular model of the eye and its associated assumptions. Our , however, does not have any components that compensate for the assumptions and
constants in the formulas. On the other hand, has information about the true postoperative ACD, which it appears can beneficially alter the original ELP estimate.
In this study, the A-constants were optimised separately when was replaced with different . The means of , as shown in online supplemental table S4, were numerically close to those of as shown in
online supplemental table S2. However, in our method, the similarity between and was not among the restrictions and goals of the optimisation. The reason that and the original had similar means might
be that the other parts of each formula put restrictions on the values of ELP in order to obtain reasonable results. This could also be the reason why and had similar means as shown in online
supplemental table S2.
Previous studies involving replacement of ELP in existing formulas have focused on special cases, such as sulcus implantation and postrefractive surgery eyes, where ELP estimates of traditional
formulas would be expected to be inapplicable.10 11 However, the method for replacing ELP estimates presented here provides a simple way of improving the refraction prediction performance of existing
formulas for the general cataract surgery population. While it would be ideal to evaluate this method on modern formulas such as Barrett Universal II or Holladay 2, the absence of published equations
for these formulas prevents such a study. As such, we studied the application of the ML predicted postoperative ACD in four existing formulas whose mathematical equations were published. Although it
awaits further validation, similar results can likely be transferred to other refraction prediction methods, since many modern IOL power formulas use predicted postoperative ACD as an
intermediate step for predicting postoperative refraction. A limitation of the study was the absence of an external validation set, despite the use of a large unseen testing dataset (961 eyes).
Accordingly, evaluation of the method at additional institutions and the extension to additional formulas will be future directions of this work.
In summary, the results of this study demonstrate that an ML method for postoperative ACD prediction based on postoperative optical biometry can be incorporated into a variety of existing IOL power
formulas to improve their accuracy in refraction prediction.
Data availability statement
Data may be obtained from the Sight Outcomes Research Collaborative (SOURCE) repository for participating institutions and are not publicly available.
Ethics statements
Patient consent for publication
Ethics approval
Institutional review board approval was obtained for the study, and it was determined that informed consent was not required because of its retrospective nature and the anonymised data used in this
study. The study was carried out in accordance with the tenets of the Declaration of Helsinki.
Supplementary materials
• Supplementary Data
This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.
• Contributors TL: data analysis, programming and writing of the manuscript. JS: data collection. NN: data collection, guidance on method development, and writing of the manuscript.
• Funding This work was supported by the Lighthouse Guild, New York, NY (JDS) and National Eye Institute, Bethesda, MD, 1R01EY026641-01A1 (JDS).
• Competing interests None declared.
• Provenance and peer review Not commissioned; externally peer reviewed.
• Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or
recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the
content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology,
drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Linked Articles | {"url":"https://bjo.bmj.com/content/106/9/1222","timestamp":"2024-11-05T15:39:51Z","content_type":"text/html","content_length":"245429","record_id":"<urn:uuid:568d8273-dc1d-4a8c-a73d-b3e0b91db4c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00701.warc.gz"} |
Kinetic Theory - NCERT Solutions
CBSE Class 11 Physics
NCERT Solutions
Chapter 13
1. Estimate the fraction of molecular volume to the actual volume occupied by oxygen gas at STP. Take the diameter of an oxygen molecule to be
Ans. Given that,
1. Diameter of an oxygen molecule, d=
Therefore, radius will be
Now we know that,actual volume occupied by 1 mole of oxygen gas at STP = 22400
Molecular volume of oxygen gas,
Where, N is Avogadro's number = 6.023 × 10²³ molecules/mole
Ratio of the molecular volume to the actual volume of oxygen =
2. Molar volume is the volume occupied by 1 mol of any (ideal) gas at standard temperature and pressure (STP: 1 atmospheric pressure, 0 °C). Show that the measured volume at STP is 22.4 litres.
Ans. The ideal gas equation relating pressure (P), volume (V), and absolute temperature (T) is given as:
PV = nRT
R is the universal gas constant = 8.314 J mol⁻¹ K⁻¹
n= Number of moles = 1(Taken here)
T= Standard temperature = 273 K(Since given condition is STP)
P= Standard pressure = 1 atm =
∴ V = nRT/P = 0.0224 m³
= 22.4 liters
Hence, the molar volume of a gas at STP is 22.4 liters.
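The result above can be checked numerically from PV = nRT with n = 1 mol:

```python
# Molar volume of an ideal gas at STP (0 °C, 1 atm).
R = 8.314        # J mol^-1 K^-1
T = 273.0        # K
P = 1.013e5      # Pa
V = R * T / P    # m^3 for 1 mol; should be about 0.0224 m^3 = 22.4 L
```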
3. Figure 13.8 shows the plot of PV/T versus P for
(a) What does the dotted plot signify?
(b) Which is true:
(c) What is the value of PV/T where the curves meet on the y-axis?
(d) If similar plots are obtained for hydrogen, would we get the same value of PV/T at the point where the curves meet on the y-axis? If not, what mass of hydrogen yields the same value of PV/T (for the low pressure, high temperature region of
the plot)? (Given: molecular mass of H₂ = 2.02 u, of O₂ = 32.0 u, R = 8.31 J mol⁻¹ K⁻¹.)
Ans. (a) The dotted plot in the graph signifies the ideal behavior of the gas. We know from the ideal gas law that
where n=no. of moles
R= Universal Gas constant
Since both n and R are constants, hence the RHS is also constant.
From eq. (1) we hence conclude that the LHS is also a constant. Thus PV/T is a constant quantity and is independent of any change in pressure.
(b) The dotted plot in the given graph represents an ideal gas. The curve of the gas at temperature
(c) The value of the ratio PV/T, where the two curves meet, is nR. This is because the ideal gas equation is given as:
PV = nRT
P is the pressure
T is the temperature
V is the volume
n is the number of moles
R is the universal constant
Molecular mass of oxygen = 32.0 g
Mass of oxygen =
R = 8.314 J mol⁻¹ K⁻¹
∴ PV/T = nR = 0.26 J K⁻¹
Therefore, the value of the ratio PV/T, where the curves meet on the y-axis, is 0.26 J K⁻¹.
(d) If we obtain similar plots for hydrogen, we will not get the same value of PV/T at the point where the curves meet the y-axis. This is because the molecular mass of hydrogen (2.02 u) is different from that of oxygen (32.0 u).
We have the value obtained from the last problem for oxygen that,
Now we know that, R = 8.314 J
Molecular mass (M) of
Now, PV/T=nR (at constant temperature)
Where, n=m/M
m = Mass of
Hence, PV/T.
4. An oxygen cylinder of volume 30 litres has an initial gauge pressure of 15 atm and a temperature of 27 °C. After some oxygen is withdrawn from the cylinder, the gauge pressure drops to 11 atm and
its temperature drops to 17 °C. Estimate the mass of oxygen taken out of the cylinder (R = 8.31 J mol⁻¹ K⁻¹).
Ans. Volume of oxygen,
Gauge pressure,
Universal gas constant, R = 8.314 J mol⁻¹ K⁻¹
Let the initial number of moles of oxygen gas in the cylinder be
Now,the ideal gas equation is given as:
M= Molecular mass of oxygen = 32 g
∴ n₁ = 18.276
After some oxygen is withdrawn from the cylinder, the pressure and temperature reduces.
Gauge pressure,
The gas equation is given as:
∴ n₂ = 13.86
The mass of oxygen taken out of the cylinder is given by the relation:
Initial mass of oxygen in the cylinder – Final mass of oxygen in the cylinder
= 584.84 g– 453.1 g
= 131.74 g
= 0.131 kg
Therefore, 0.131 kg of oxygen is taken out of the cylinder.
5. An air bubble of volume 1.0 cm3 rises from the bottom of a lake 40 m deep at a temperature of 12 °C. To what volume does it grow when it reaches the surface, which is at a temperature of 35 °C?
Ans. Volume of the air bubble,
Given that bubble rises to height, d = 40 m
Temperature at a depth of 40 m,
Temperature at the surface of the lake,
The pressure on the surface of the lake:
The pressure at the depth of 40 m:
is the density of water =
g is the acceleration due to gravity =
= 493300 Pa
Now we have from the ideal gas law that,
Therefore, when the air bubble reaches the surface, its volume becomes 5.263 cm³.
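The combined-gas-law computation above can be checked numerically with the values used in the solution:

```python
# V2 = P1 * V1 * T2 / (P2 * T1) for the rising air bubble.
rho, g, d = 1000.0, 9.8, 40.0        # water density (kg/m^3), gravity, depth (m)
P2 = 1.013e5                         # Pa at the surface
P1 = P2 + rho * g * d                # Pa at 40 m depth (= 493300 Pa)
V1, T1, T2 = 1.0, 285.0, 308.0       # cm^3; 12 °C and 35 °C in kelvin
V2 = P1 * V1 * T2 / (P2 * T1)        # cm^3
```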
6. Estimate the total number of air molecules (inclusive of oxygen, nitrogen, water vapour and other constituents) in a room of capacity 25.0
Ans. Volume of the room, V= 25.0
Temperature of the room, T= 27°C = 300 K
Pressure in the room, P= 1 atm =
The ideal gas equation relating pressure (P), Volume (V), and absolute temperature (T) can be written as:
PV =
N is the number of air molecules in the room
Therefore, the total number of air molecules in the given room is
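The final count (elided above) can be recomputed from N = PV/(kT) with standard constants:

```python
# Number of air molecules in a 25 m^3 room at 1 atm and 300 K.
k_B = 1.38e-23                 # J/K, Boltzmann constant
P, V, T = 1.013e5, 25.0, 300.0
N = P * V / (k_B * T)          # about 6.1e26 molecules
```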
7. Estimate the average thermal energy of a helium atom at (i) room temperature (27 °C), (ii) the temperature on the surface of the Sun (6000 K), (iii) the temperature of 10 million Kelvin (the
typical core temperature in the case of a star).
Ans. (i) At room temperature, T= 27°C = (273+27)K=300 K
We know that, the average thermal energy
Where k is the Boltzmann constant = 1.38 × 10⁻²³ J/K
Hence, the average thermal energy of a helium atom at room temperature (27°C) is
(ii) On the surface of the sun, T= 6000 K
Average thermal energy
Hence, the average thermal energy of a helium atom on the surface of the sun is
(iii) At temperature, T = 10⁷ K
Average thermal energy
Hence, the average thermal energy of a helium atom at the core of a star is
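All three energies follow from E = (3/2)kT and can be checked numerically:

```python
# Average thermal energy of a helium atom at the three temperatures asked.
k_B = 1.38e-23                 # J/K
E = lambda T: 1.5 * k_B * T
E_room = E(300.0)              # room temperature, ~6.21e-21 J
E_sun = E(6000.0)              # surface of the Sun, ~1.24e-19 J
E_core = E(1e7)                # stellar core, ~2.07e-16 J
```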
8. Three vessels of equal capacity have gases at the same temperature and pressure. The first vessel contains neon (monatomic), the second contains chlorine (diatomic), and the third contains uranium
hexafluoride (polyatomic). Do the vessels contain an equal number of respective molecules? Is the root mean square speed of molecules the same in the three cases? If not, in which case is it the largest?
Ans. Yes. All the vessels contain the same number of the respective molecules.
Since the three vessels have the same capacity, they have the same volume.
Also it is given that each gas has the same pressure and temperature.
According to Avogadro's law, equal volumes of different gases at the same temperature and pressure contain equal numbers of molecules. This number is equal to Avogadro's number, N = 6.023 × 10²³.
Now, we know that the root mean square speed (v_rms) of a gas molecule of mass m, at a temperature T, is given by the relation:
Where, k is Boltzmann constant
Since the temperature for all three gases is the same, v_rms depends only on the mass, such that
Therefore, the root mean square speed of the molecules in the three cases is not the same. Among neon, chlorine, and uranium hexafluoride, the mass of neon is the smallest. Hence, neon has the
largest root mean square speed among the given gases.
9. At what temperature is the root mean square speed of an atom of an argon gas equal to the rms speed of a helium gas atom at - 20 °C? (atomic mass of Ar = 39.9 u, of He = 4.0 u).
Ans. Given,
Temperature of the helium atom,
Atomic mass of argon,
Atomic mass of helium,
Now, let,
The rms speed of argon is given by:
R is the universal gas constant
And, the rms speed of helium is given by:
It is given that:
∴ T(Ar) = T(He) × M(Ar)/M(He) = 253 × 39.9/4.0 = 2523.675 K
Therefore, the temperature of the argon atom is 2523.675 K.
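Setting the two rms speeds equal gives 3RT(Ar)/M(Ar) = 3RT(He)/M(He), so T(Ar) = T(He) × M(Ar)/M(He); a quick numerical check:

```python
# Temperature at which argon's v_rms equals helium's v_rms at -20 °C.
T_He = 253.0              # K  (-20 °C)
M_Ar, M_He = 39.9, 4.0    # atomic masses in u
T_Ar = T_He * M_Ar / M_He # should be 2523.675 K
```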
10. Estimate the mean free path and collision frequency of a nitrogen molecule in a cylinder containing nitrogen at a pressure 2.0 atm and temperature 17 °C. Take the radius of a nitrogen molecule to
be roughly 1.0
Ans. Mean free path =
Collision frequency =
Successive collision time
Pressure inside the cylinder containing nitrogen, P= 2.0 atm =
Temperature inside the cylinder, T= 17°C =290 K
Radius of a nitrogen molecule, r = 1.0 Å = 1 × 10⁻¹⁰ m
Diameter, d = 2r = 2 × 10⁻¹⁰ m
Molecular mass of nitrogen, M= 28.0 g =
The root mean square speed of nitrogen is given by the relation:
R is the universal gas constant = 8.314 J mol⁻¹ K⁻¹
The mean free path (l) is given by the relation:
k is the Boltzmann constant = 1.38 × 10⁻²³ J/K
Collision frequency
Collision time is given as:
Time taken between successive collisions:
Hence, the time taken between successive collisions is 500 times the time taken for a collision.
11. A metre long narrow bore held horizontally (and closed at one end) contains a 76 cm long mercury thread, which traps a 15 cm column of air. What happens if the tube is held vertically with the
open end at the bottom?
Ans. Length of the narrow bore, L= 1 m = 100 cm
Length of the mercury thread, l= 76 cm
Length of the air column between mercury and the closed end,
Since the bore is held vertically in air with the open end at the bottom, the mercury length that occupies the air space is: 100–(76 + 15) = 9 cm
Hence, the total length of the air column = 15 + 9 = 24 cm
Let h cm of mercury flow out as a result of atmospheric pressure.
∴Length of the air column in the bore= 24 + h cm
And, length of the mercury column = 76 – h cm
Initial pressure, P1 = 76 cm of mercury
Initial volume, V1 = 15A cm³ (where A is the cross-sectional area of the bore)
Final pressure, P2 = 76 − (76 − h) = h cm of mercury
Final volume, V2 = (24 + h)A cm³
Temperature remains constant throughout the process.
Hence, from Boyle's law, we have:
76 × 15A = h × (24 + h)A
h² + 24h − 1140 = 0
⇒ h = 23.8 cm or −47.8 cm
Since height cannot be negative, −47.8 cm is an invalid answer. Hence, 23.8 cm of mercury will flow out from the bore and 52.2 cm of mercury will remain in it. The length of the air column will
be 24 + 23.8 = 47.8 cm.
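The quadratic obtained from Boyle's law (76 × 15 = h(24 + h), i.e. h² + 24h − 1140 = 0) can be checked numerically:

```python
import math

# Positive root of h^2 + 24h - 1140 = 0 from Boyle's law.
h = (-24 + math.sqrt(24 ** 2 + 4 * 1140)) / 2
air_column = 24 + h   # final length of the trapped air column, in cm
```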
12. From a certain apparatus, the diffusion rate of hydrogen has an average value of 28.7 cm3
[Hint: Use Graham's law of diffusion:
Ans. Rate of diffusion of hydrogen,
Rate of diffusion of another gas,
According to Graham's Law of diffusion, we have:
32 g is the molecular mass of oxygen. Hence, the unknown gas is oxygen.
13. A gas in equilibrium has uniform density and pressure throughout its volume. This is strictly true only if there are no external influences. A gas column under gravity, for example, does not have
uniform density (and pressure). As you might expect, its density decreases with height. The precise dependence is given by the so-called law of atmosphere
Where R is the universal gas constant.] [Hint: Use Archimedes principle to find the apparent
Ans. According to the law of atmospheres, we have:
is the number density at height
mg is the weight of the particle suspended in the gas column
Density of the medium = ρ′
Density of the suspended particle =
Mass of one suspended particle = m'
Mass of the medium displaced = m
Volume of a suspended particle = V
According to Archimedes' principle for a particle suspended in a liquid column, the effective weight of the suspended particle is given as:
Weight of the medium displaced – Weight of the suspended particle
= mg– m'g
Gas constant, R =
Substituting equation (ii) in place of mg in equation (i) and then using equation (iii), we get:
14. Given below are densities of some solids and liquids. Give rough estimates of the size of their atoms:
[Hint: Assume the atoms to be
Atomic mass of a substance = M
Density of the substance =
Avogadro's number = N=
Volume of each atom
Volume of N number of molecules N … (i)
Volume of one mole of a substance = M/ρ … (ii)
N =
For carbon:
Hence, the radius of a carbon atom is 1.29 Å.
For gold:
Hence, the radius of a gold atom is 1.59 Å.
For liquid nitrogen:
Hence, the radius of a liquid nitrogen atom is 1.77 Å.
For lithium:
p =
Hence, the radius of a lithium atom is 1.73 Å.
For liquid fluorine: | {"url":"https://oneliner.gyandarpan.in/2020/09/kinetic-theory-ncert-solutions.html","timestamp":"2024-11-14T08:18:34Z","content_type":"application/xhtml+xml","content_length":"240699","record_id":"<urn:uuid:ada9da7f-2df7-4c02-b94c-d3c557dbcf87>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00486.warc.gz"} |
Algorithms for Coding Interviews in C# - AI-Powered Course
Course Overview
With algorithms being one of the most common themes in coding interviews, having a firm grip on them can be the difference between being hired or not. After completing this comprehensive course,
you’ll have an in-depth understanding of different algorithm types in C# and be equipped with a simple process for approaching complexity analysis. As you progress, you’ll be exposed to the most
important algorithms you’ll likely encounter in an interview. You’ll work through over 50 interactive coding challenges a... | {"url":"https://www.educative.io/courses/algorithms-for-coding-interviews-in-csharp","timestamp":"2024-11-03T03:13:22Z","content_type":"text/html","content_length":"887039","record_id":"<urn:uuid:5d011dea-fdcf-40b5-bb0a-48c3dc5d772f>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00729.warc.gz"} |
Rationalize Denominators Worksheets
How Do You Rationalize Denominators? When radicals have values that are not perfect squares, they are irrational expressions. Rationalizing the denominator removes such values from the divisor. If we have the value 4/√6, we rationalize the denominator with the following steps: Step#1: Multiply the fraction by a radical that will eliminate the radical in the denominator entirely. What do we do when we have a square root in the denominator? We multiply both the numerator and the denominator by that square root, so that the radicand in the denominator becomes a perfect square. Here, that means multiplying both the numerator and denominator by the square root of six. Mathematically: 1. 4/√6 × √6/√6 2. 4√6/√36 -> here, we have a perfect square under the square root in the denominator. Step#2: Check that you have simplified all radicals. Step#3: Simplify the fraction as far as possible. For example, in 4√6/√36, the square root of 36 equals 6, giving 4√6/6. Dividing the numbers outside the square root by their common factor of 2, the answer is 2√6/3. Simplify such fractions carefully: the value inside the radical cannot be combined with the numbers outside it, so working step by step makes it easy to avoid mistakes.
With these problems you will be given an irrational number (such as the square root of three) as a denominator of a fraction. This is just another approach to this skill, in general. The first step
is to multiply the top (numerator) and bottom (denominator) of the fraction by the radical itself. Then you just simplify away the radical first followed by the fraction. You will also be faced with
multiple terms on the top or bottom. In those cases you just have extra operations to perform, but the basic procedure for rationalizing should remain the same. We work with flat rationalizations
that mimic denominators that we will often use in geometric figures. Students should already be familiar with reciprocals, conjugates, and expressions.
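When the denominator is a binomial containing a radical — the case where conjugates come in — multiply the top and bottom by the conjugate instead. For example:

```latex
\frac{1}{2+\sqrt{3}}
  = \frac{1}{2+\sqrt{3}}\cdot\frac{2-\sqrt{3}}{2-\sqrt{3}}
  = \frac{2-\sqrt{3}}{4-3}
  = 2-\sqrt{3}
```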
Get Free Worksheets In Your Inbox! | {"url":"https://www.easyteacherworksheets.com/math/geometry-rationalizedenominators.html","timestamp":"2024-11-12T19:36:27Z","content_type":"text/html","content_length":"26520","record_id":"<urn:uuid:92fdf63c-2d26-48e4-907f-17ace605c24a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00012.warc.gz"} |
General | Algorithm | Binary Search Algorithm | Codecademy
Binary Search Algorithm
Binary Search is an algorithm for searching an element within a sorted collection of items, primarily implemented with arrays or lists. The binary search algorithm follows a divide-and-conquer
approach by repeatedly dividing the collection into two halves and comparing the target value with the middle element of the current search space. If there is a match, it provides the index of the
middle element; otherwise, it proceeds to either side of the array, depending on the current comparison result.
Note: The collection must be sorted and have constant time indexing such as arrays to implement the binary search algorithm. Binary search is incompatible with data structures that do not support
constant-time indexing.
The Algorithm
The steps for the binary search algorithm are as follows:
1. Set the start pointer to the beginning of the collection (index 0).
2. Set the end pointer to the end of the collection (length(collection) - 1).
3. While the start is less than or equal to the end pointer, repeat these steps:
1. Calculate the middle element index: mid = start + (end - start) / 2.
2. Compare the value at middle index (mid) with the target value.
1. If arr[mid] is equal to the target value, return mid (search successful).
2. If arr[mid] is less than the target value, set the start to mid + 1.
3. If arr[mid] is greater than the target value, set the end to mid - 1.
4. If the start pointer becomes greater than the end pointer, the target value is not in the collection. Return -1 to indicate that the target is not present.
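The steps above translate directly into code. Here is a minimal Python version (the article itself is language-agnostic, so this choice of language is ours):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    start, end = 0, len(arr) - 1
    while start <= end:
        # Written as start + (end - start) // 2 to avoid integer overflow
        # in languages with fixed-width integers.
        mid = start + (end - start) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            start = mid + 1
        else:
            end = mid - 1
    return -1
```

For the worked example below, `binary_search([1, 3, 4, 6, 8, 9, 11], 9)` returns 5.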
Complexities for Binary Search Algorithm
Time Complexity
• Average Case: O(log n)
• Worst Case: O(log n)
• Best Case: O(1)
The binary search algorithm has logarithmic time complexity because it divides the array repeatedly until the target element is discovered or the search space is empty.
In the worst-case scenario, the target element does not exist in the collection. In such cases, the algorithm keeps dividing the collection until it has exhausted the search space.
Space Complexity
The iterative binary search has a space complexity of O(1), since it maintains only a constant number of index variables; a recursive implementation would instead use O(log n) stack space.
In the example below, a sorted array has elements such as [1, 3, 4, 6, 8, 9, 11]. The aim is to implement the binary search algorithm for searching the number 9.
In the first iteration, start is at 0, end is at 6, and mid becomes 3 after calculating. The algorithm compares mid to the target value. Since the target value (9) is greater than the middle element
(6), the algorithm proceeds the search to the right half by updating the start index to mid + 1, which is 4. Now, the algorithm will focus on finding the target value in the array’s right portion
(index 4 to 6).
In the second iteration, start is 4 and end is 6, so mid becomes 5, which is the index of the target value (9). Since the value at mid equals the target, the search space does not need to be narrowed any further: the algorithm recognizes the match and the search concludes, reporting that the target value is found at index 5 and the binary search is successful.
Learn more on Codecademy | {"url":"https://www.codecademy.com/resources/docs/general/algorithm/binary-search","timestamp":"2024-11-07T23:24:41Z","content_type":"text/html","content_length":"183056","record_id":"<urn:uuid:75d1089b-061a-432c-9044-81a343f5ae16>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00724.warc.gz"} |
HJM: A unified approach to dynamic models for fixed income, credit and equity markets
The purpose of this paper is to highlight some of the key elements of the HJM approach as originally introduced in the framework of fixed income market models, to explain how the very same philosophy
was implemented in the case of credit portfolio derivatives and to show how it can be extended to and used in the case of equity market models. In each case we show how the HJM approach naturally
yields a consistency condition and a no-arbitrage condition in the spirit of the original work of Heath, Jarrow and Morton. Even though the actual computations and the derivation of the drift
condition in the case of equity models seem to be new, the paper is intended as a survey of existing results, and as such, it is mostly pedagogical in nature.
Publication series
Name Lecture Notes in Mathematics
Volume 1919
ISSN (Print) 0075-8434
All Science Journal Classification (ASJC) codes
• Algebra and Number Theory
• Arbitrage-free term structure dynamics
• Heath-Jarrow-Morton theory
• Implied volatility surface
• Local volatility surface
• Market models
Dive into the research topics of 'HJM: A unified approach to dynamic models for fixed income, credit and equity markets'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/hjm-a-unified-approach-to-dynamic-models-for-fixed-income-credit-","timestamp":"2024-11-04T07:36:19Z","content_type":"text/html","content_length":"51459","record_id":"<urn:uuid:478a45bb-3fe1-4ade-8940-4bfac89ac7af>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00729.warc.gz"} |
Problem 726
Let $\F$ be a finite field of characteristic $p$.
Prove that the number of elements of $\F$ is $p^n$ for some positive integer $n$.
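A standard proof sketch (not part of the original problem page) goes through the prime subfield:

```latex
% Sketch: a finite field F of characteristic p has p^n elements.
% The map n \mapsto n \cdot 1_F embeds \F_p = \Z/p\Z as the prime subfield of F.
% F is then a vector space over \F_p; being finite, it has a finite basis
% e_1, \dots, e_n, so every element is uniquely a_1 e_1 + \cdots + a_n e_n
% with each a_i \in \F_p.  Hence
|F| = \left|\{\, a_1 e_1 + \cdots + a_n e_n : a_i \in \F_p \,\}\right| = p^n.
```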
Prove that $\F_3[x]/(x^2+1)$ is a Field and Find the Inverse Elements
Problem 529
Let $\F_3=\Zmod{3}$ be the finite field of order $3$.
Consider the ring $\F_3[x]$ of polynomial over $\F_3$ and its ideal $I=(x^2+1)$ generated by $x^2+1\in \F_3[x]$.
(a) Prove that the quotient ring $\F_3[x]/(x^2+1)$ is a field. How many elements does the field have?
(b) Let $ax+b+I$ be a nonzero element of the field $\F_3[x]/(x^2+1)$, where $a, b \in \F_3$. Find the inverse of $ax+b+I$.
(c) Recall that the multiplicative group of nonzero elements of a field is a cyclic group.
Confirm that the element $x$ is not a generator of $E^{\times}$, where $E=\F_3[x]/(x^2+1)$ but $x+1$ is a generator.
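Claims like these are easy to sanity-check by brute force. The sketch below (ours, not from the original page) represents an element $ax+b+I$ as the pair (a, b) and multiplies using $x^2 \equiv -1$ in the quotient:

```python
MOD = 3  # working in F_3[x]/(x^2+1)

def mul(p, q):
    """Multiply a*x+b by c*x+d modulo x^2+1 over F_3."""
    a, b = p
    c, d = q
    # (ax+b)(cx+d) = ac*x^2 + (ad+bc)*x + bd, and x^2 = -1 in the quotient
    return ((a * d + b * c) % MOD, (b * d - a * c) % MOD)

def mult_order(el):
    """Order of el in the multiplicative group of the quotient ring."""
    one = (0, 1)
    acc, n = el, 1
    while acc != one:
        acc = mul(acc, el)
        n += 1
    return n

elements = [(a, b) for a in range(MOD) for b in range(MOD)]  # all 9 elements
nonzero = [e for e in elements if e != (0, 0)]
# Every nonzero element has a multiplicative inverse, so the quotient is a field:
assert all(any(mul(e, f) == (0, 1) for f in nonzero) for e in nonzero)
# x has order 4 (so it cannot generate the order-8 group), while x+1 has order 8:
print(mult_order((1, 0)), mult_order((1, 1)))  # -> 4 8
```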
Each Element in a Finite Field is the Sum of Two Squares
Problem 511
Let $F$ be a finite field.
Prove that each element in the field $F$ is the sum of two squares in $F$.
Any Automorphism of the Field of Real Numbers Must be the Identity Map
Problem 507
Prove that any field automorphism of the field of real numbers $\R$ must be the identity automorphism.
Example of an Infinite Algebraic Extension
Problem 499
Find an example of an infinite algebraic extension over the field of rational numbers $\Q$ other than the algebraic closure $\bar{\Q}$ of $\Q$ in $\C$.
The Cyclotomic Field of 8-th Roots of Unity is $\Q(\zeta_8)=\Q(i, \sqrt{2})$
Problem 491
Let $\zeta_8$ be a primitive $8$-th root of unity.
Prove that the cyclotomic field $\Q(\zeta_8)$ of the $8$-th root of unity is the field $\Q(i, \sqrt{2})$.
A Rational Root of a Monic Polynomial with Integer Coefficients is an Integer
Problem 489
Suppose that $\alpha$ is a rational root of a monic polynomial $f(x)$ in $\Z[x]$.
Prove that $\alpha$ is an integer.
Cubic Polynomial $x^3-2$ is Irreducible Over the Field $\Q(i)$
Problem 399
Prove that the cubic polynomial $x^3-2$ is irreducible over the field $\Q(i)$.
Prove that any Algebraic Closed Field is Infinite
Problem 398
Prove that any algebraic closed field is infinite.
Extension Degree of Maximal Real Subfield of Cyclotomic Field
Problem 362
Let $n$ be an integer greater than $2$ and let $\zeta=e^{2\pi i/n}$ be a primitive $n$-th root of unity. Determine the degree of the extension of $\Q(\zeta)$ over $\Q(\zeta+\zeta^{-1})$.
The subfield $\Q(\zeta+\zeta^{-1})$ is called maximal real subfield.
Equation $x_1^2+\cdots +x_k^2=-1$ Doesn’t Have a Solution in Number Field $\Q(\sqrt[3]{2}e^{2\pi i/3})$
Problem 358
Let $\alpha= \sqrt[3]{2}e^{2\pi i/3}$. Prove that $x_1^2+\cdots +x_k^2=-1$ has no solutions with all $x_i\in \Q(\alpha)$ and $k\geq 1$.
Application of Field Extension to Linear Combination
Problem 335
Consider the cubic polynomial $f(x)=x^3-x+1$ in $\Q[x]$.
Let $\alpha$ be any real root of $f(x)$.
Then prove that $\sqrt{2}$ can not be written as a linear combination of $1, \alpha, \alpha^2$ with coefficients in $\Q$.
Irreducible Polynomial $x^3+9x+6$ and Inverse Element in Field Extension
Problem 334
Prove that the polynomial
\[f(x)=x^3+9x+6\] is irreducible over the field of rational numbers $\Q$.
Let $\theta$ be a root of $f(x)$.
Then find the inverse of $1+\theta$ in the field $\Q(\theta)$.
Explicit Field Isomorphism of Finite Fields
Problem 233
(a) Let $f_1(x)$ and $f_2(x)$ be irreducible polynomials over a finite field $\F_p$, where $p$ is a prime number. Suppose that $f_1(x)$ and $f_2(x)$ have the same degrees. Then show that the fields $\F_p[x]/(f_1(x))$ and $\F_p[x]/(f_2(x))$ are isomorphic.
(b) Show that the polynomials $x^3-x+1$ and $x^3-x-1$ are both irreducible polynomials over the finite field $\F_3$.
(c) Exhibit an explicit isomorphism between the splitting fields of $x^3-x+1$ and $x^3-x-1$ over $\F_3$.
Galois Extension $\Q(\sqrt{2+\sqrt{2}})$ of Degree 4 with Cyclic Group
Problem 231
Show that $\Q(\sqrt{2+\sqrt{2}})$ is a cyclic quartic field, that is, it is a Galois extension of degree $4$ with cyclic Galois group.
Galois Group of the Polynomial $x^2-2$
Problem 230
Let $\Q$ be the field of rational numbers.
(a) Is the polynomial $f(x)=x^2-2$ separable over $\Q$?
(b) Find the Galois group of $f(x)$ over $\Q$.
Polynomial $x^p-x+a$ is Irreducible and Separable Over a Finite Field
Problem 229
Let $p\in \Z$ be a prime number and let $\F_p$ be the field of $p$ elements.
For any nonzero element $a\in \F_p$, prove that the polynomial
\[f(x)=x^p-x+a\] is irreducible and separable over $\F_p$.
(Dummit and Foote “Abstract Algebra” Section 13.5 Exercise #5 on p.551)
Show that Two Fields are Equal: $\Q(\sqrt{2}, \sqrt{3})= \Q(\sqrt{2}+\sqrt{3})$
Problem 215
Show that fields $\Q(\sqrt{2}+\sqrt{3})$ and $\Q(\sqrt{2}, \sqrt{3})$ are equal.
Galois Group of the Polynomial $x^p-2$.
Problem 110
Let $p \in \Z$ be a prime number.
Then describe the elements of the Galois group of the polynomial $x^p-2$.
Two Quadratic Fields $\Q(\sqrt{2})$ and $\Q(\sqrt{3})$ are Not Isomorphic
Problem 99
Prove that the quadratic fields $\Q(\sqrt{2})$ and $\Q(\sqrt{3})$ are not isomorphic.
Momentum Conservation
Newton's second law, ``force equals mass times acceleration,'' implies that the pressure gradient in a gas is proportional to the acceleration of a differential volume element in the gas.
[The derivation here, Eqs. (C.45) ff., did not survive extraction. It computes the net force to the right across the volume element, the mass of the volume element, and its center-of-mass acceleration in terms of the velocity, then applies Newton's second law, divides through by the mass, and rewrites the result in terms of the logarithmic derivative of the cross-sectional area; when time and/or position arguments are dropped, they are understood. The result is expressed in terms of the small-signal acoustic pressure.]
In the case of cylindrical tubes, the logarithmic derivative of the area variation vanishes, and Eq. (C.148) reduces to the usual momentum conservation equation for plane waves [318,349,47]. The present case reduces to the cylindrical case when the relative change in cross-sectional area is much less than the relative change of the wave along the tube. In other words, the tube area variation must be slower than the spatial variation of the wave itself. This assumption is also necessary for the ``one-parameter-wave'' approximation to hold in the first place.
For sinusoidal spatial waves, this condition says that the spatial frequency of the wall variation must be much less than that of the wave. Another way to say this is that the wall must be approximately flat across a wavelength. This is true for smooth horns/bores at sufficiently high wave frequencies.
| {"url":"https://www.dsprelated.com/freebooks/pasp/Momentum_Conservation_Nonuniform_Tubes.html","timestamp":"2024-11-01T19:42:03Z","content_type":"text/html","content_length":"39117","record_id":"<urn:uuid:1cc3c7cb-5dc5-43e3-9c6c-62fb4120c4a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00574.warc.gz"} |
New open-source platform allows users to evaluate performance of AI-powered chatbots
Researchers have developed a platform for the interactive evaluation of AI-powered chatbots such as ChatGPT. "Anyone using an LLM, for any application, should always pay attention to the output and verify it themselves," said Albert Jiang. A team of computer scientists, engineers, mathematicians and cognitive scientists, led by the University of Cambridge, developed an open-source evaluation platform called CheckMate, which allows human users to interact with and evaluate the performance of large language models (LLMs). | {"url":"https://www.myscience.uk/news/Mathematics&s=0","timestamp":"2024-11-12T17:18:00Z","content_type":"text/html","content_length":"60250","record_id":"<urn:uuid:5da84eea-f825-475a-9de6-5482100d2df7>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00074.warc.gz"} |
End to End ML Project - Fashion MNIST - Selecting the Model - Cross-Validation - Softmax Regression
Please follow the below steps:
Import the module cross_val_score and cross_val_predict from sklearn.model_selection
from sklearn.model_selection import << your code comes here >>
Import the module confusion_matrix from sklearn.metrics.
from sklearn.metrics import << your code comes here >>
Define a function called display_scores() which should print the score value which is passed to it as argument, and also calculate and print the 'mean' and 'standard deviation' of this score.
def display_scores(scores):
<<your code comes here>>
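One way to complete the function body using only the standard library (the exact print format is a choice; the exercise leaves it open):

```python
import statistics

def display_scores(scores):
    """Print the cross-validation scores plus their mean and standard deviation."""
    print("Scores:", scores)
    print("Mean:", statistics.mean(scores))
    # Note: statistics.stdev is the *sample* standard deviation (n-1 divisor);
    # NumPy's scores.std() defaults to the population version, so values differ slightly.
    print("Standard deviation:", statistics.stdev(scores))

display_scores([0.84, 0.86, 0.85])  # example fold accuracies (made-up numbers)
```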
Please create an instance of LogisticRegression called log_clf by passing to it the parameters - multi_class="multinomial", solver="lbfgs", C=10 and random_state=42
log_clf = LogisticRegression(<<your code comes here>>)
Please call cross_val_score() function by passing following parameters to it - the model (log_clf), the scaled training dataset (X_train_scaled), y_train, cv=3 and scoring="accuracy" - and save the
returned value in a variable called log_cv_scores.
Call display_scores() function, by passing to it the log_cv_scores variable, to calculate and display(print) the 'accuracy' score, the mean of the 'accuracy' score and the 'standard deviation' of the
'accuracy' score.
log_cv_scores = cross_val_score(<<your code comes here>>)
Call mean() method on log_cv_scores object to get the mean accuracy score and store this mean accuracy score in a variable log_cv_accuracy.
log_cv_accuracy = log_cv_scores.<<your code comes here>>
Please call cross_val_predict() function by passing following parameters to it - the model (log_clf), the scaled training dataset (X_train_scaled), y_train, cv=3 - and save the returned value in a
variable called y_train_pred.
y_train_pred = cross_val_predict(<<your code comes here>>)
Compute the confusion matrix by using confusion_matrix() function
confusion_matrix(y_train, <<your code comes here>>)
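To see what confusion_matrix computes, here is a minimal from-scratch version (illustrative only; use sklearn's function in the exercise). Rows are true classes and columns are predicted classes, matching sklearn's convention:

```python
def confusion_matrix_simple(y_true, y_pred, labels):
    """Count (true, predicted) pairs: m[i][j] = number of samples whose true
    label is labels[i] and whose predicted label is labels[j]."""
    index = {lab: i for i, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[index[t]][index[p]] += 1
    return m

# One class-0 sample misclassified as class 1:
print(confusion_matrix_simple([0, 0, 1, 1], [0, 1, 1, 1], [0, 1]))  # -> [[1, 1], [0, 2]]
```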
Calculate the precision score by the using the precision_score() function
log_cv_precision = precision_score(y_train, <<your code comes here>>, average='weighted')
Calculate the recall score by the using the recall_score() function
log_cv_recall = recall_score(y_train, <<your code comes here>>, average='weighted')
Calculate the F1 score by the using the f1_score() function
log_cv_f1_score = f1_score(y_train, <<your code comes here>>, average='weighted')
Print the above calculated values of log_cv_accuracy, log_cv_precision, log_cv_recall , log_cv_f1_score | {"url":"https://cloudxlab.com/assessment/displayslide/2457/end-to-end-ml-project-fashion-mnist-selecting-the-model-cross-validation-softmax-regression?playlist_id=190","timestamp":"2024-11-10T08:40:23Z","content_type":"text/html","content_length":"86653","record_id":"<urn:uuid:7e462d80-6bc4-47cc-9f6a-7b539db08fe7>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00653.warc.gz"} |
ford_fulkerson(G, s, t, capacity='capacity')¶
Find a maximum single-commodity flow using the Ford-Fulkerson algorithm.
This is the legacy implementation of maximum flow. See Notes below.
This algorithm uses Edmonds-Karp-Dinitz path selection rule which guarantees a running time of \(O(nm^2)\) for \(n\) nodes and \(m\) edges.
G : NetworkX graph
Edges of the graph are expected to have an attribute called ‘capacity’. If this attribute is not present, the edge is considered to have infinite capacity.
s : node
Source node for the flow.
t : node
Sink node for the flow.
capacity : string
Edges of the graph G are expected to have an attribute capacity that indicates how much flow the edge can support. If this attribute is not present, the edge is considered to have
infinite capacity. Default value: ‘capacity’.
Returns :
R : NetworkX DiGraph
The residual network after computing the maximum flow. This is a legacy implementation, see Notes and Examples.
Raises :
NetworkXError : The algorithm does not support MultiGraph and MultiDiGraph. If the input graph is an instance of one of these two classes, a NetworkXError is raised.
NetworkXUnbounded : If the graph has a path of infinite capacity, the value of a feasible flow on the graph is unbounded above and the function raises a NetworkXUnbounded.
This is a legacy implementation of maximum flow (before 1.9). This function used to return a tuple with the flow value and the flow dictionary. Now it returns the residual network resulting after
computing the maximum flow, in order to follow the new interface to flow algorithms introduced in NetworkX 1.9.
Note however that the residual network returned by this function does not follow the conventions for residual networks used by the new algorithms introduced in 1.9. This residual network has
edges with capacity equal to the capacity of the edge in the original network minus the flow that went through that edge. A dictionary with infinite capacity edges can be found as an attribute
of the residual network.
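The Edmonds-Karp path-selection rule mentioned above (shortest augmenting path found by BFS) is compact enough to sketch without NetworkX. This illustrative version, not NetworkX's implementation, works on a plain dict-of-dicts capacity graph:

```python
from collections import deque

def edmonds_karp(graph, s, t):
    """Max flow from s to t. graph: {u: {v: capacity}} adjacency dict."""
    # Residual capacities, with zero-capacity reverse edges added.
    residual = {u: dict(nbrs) for u, nbrs in graph.items()}
    for u, nbrs in graph.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS: shortest augmenting path (the Edmonds-Karp rule).
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:      # no augmenting path left
            return flow
        # Walk back from t to find the bottleneck, then augment along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][w] for u, w in path)
        for u, w in path:
            residual[u][w] -= bottleneck
            residual[w][u] += bottleneck
        flow += bottleneck

# The example graph from the docstring below:
G = {'x': {'a': 3.0, 'b': 1.0}, 'a': {'c': 3.0},
     'b': {'c': 5.0, 'd': 4.0}, 'd': {'e': 2.0},
     'c': {'y': 2.0}, 'e': {'y': 3.0}}
print(edmonds_karp(G, 'x', 'y'))  # -> 3.0
```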
>>> import networkx as nx
>>> from networkx.algorithms.flow import ford_fulkerson
The functions that implement flow algorithms and output a residual network, such as this one, are not imported to the base NetworkX namespace, so you have to explicitly import them from the flow package:
>>> G = nx.DiGraph()
>>> G.add_edge('x','a', capacity=3.0)
>>> G.add_edge('x','b', capacity=1.0)
>>> G.add_edge('a','c', capacity=3.0)
>>> G.add_edge('b','c', capacity=5.0)
>>> G.add_edge('b','d', capacity=4.0)
>>> G.add_edge('d','e', capacity=2.0)
>>> G.add_edge('c','y', capacity=2.0)
>>> G.add_edge('e','y', capacity=3.0)
This function returns the residual network after computing the maximum flow. This network has graph attributes that contain: a dictionary with edges with infinite capacity flows, the flow value,
and a dictionary of flows:
>>> R = ford_fulkerson(G, 'x', 'y')
>>> # A dictionary with infinite capacity flows can be found as an
>>> # attribute of the residual network
>>> inf_capacity_flows = R.graph['inf_capacity_flows']
>>> # There are also attributes for the flow value and the flow dict
>>> flow_value = R.graph['flow_value']
>>> flow_dict = R.graph['flow_dict']
You can use the interface to flow algorithms introduced in 1.9 to get the output that the function ford_fulkerson used to produce:
>>> flow_value, flow_dict = nx.maximum_flow(G, 'x', 'y',
... flow_func=ford_fulkerson) | {"url":"https://networkx.org/documentation/networkx-1.9/reference/generated/networkx.algorithms.flow.ford_fulkerson.html","timestamp":"2024-11-06T04:25:36Z","content_type":"text/html","content_length":"25062","record_id":"<urn:uuid:fd09e11f-9676-4e2d-988e-04899e770ffb>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00258.warc.gz"} |
The Simple Regression Model - ppt video online download
1 The Simple Regression Model
Interval Estimation, Section 15.3 (also read confidence and prediction intervals); Correlation, Section 15.4; Estimation and Tests, Sections 15.5, 15.6, 15.7
2 The Standard Error of the Estimate - Se or Sy.x
The least squares method minimizes the distance between the predicted y and the observed y, the SSE. We need a statistic that measures the variability of the observed y values from the predicted y, i.e., the variability of the observed y values around the sample regression line. It is also our estimate of the scatter of the y values in the population around the population regression line, that is, an estimate of σy|x. PP 9
3 Standard Error of the Estimate
Se is an estimate of σy|x. [Figure: conditional means E(Y|X=20), E(Y|X=50), E(Y|X=80) with scatter σy|x; panels A and B are samples, C is the population.] PP 9
4 Standard Error of the Estimate
The standard error has the units of the dependent variable, y. The formula requires us to find first the predicted value for each observation in the data set and second, the error term for that observation, so we can calculate the error or residual for an observation. PP 9
5 Calculating a Residual
xi, yi, ei: (40, 165, 54); (85, 125.2, -40.2); (9, 37.5, -28.5)
For a given x value, you should be able to calculate the error term. However, to calculate the standard error of the estimate, you will want to use the computational formula, which is faster. PP 9
6 Calculating the Standard Error of the Estimate
Substituting into the computational formula [numeric work lost in extraction] gives Se; the units are deaths per 1000 live births. Since we chose b0 and b1 to minimize the SSE, we were implicitly minimizing the standard error of the estimate. PP 9
7 The Coefficient of Determination
We want to develop a measure of how well the independent variable predicts the dependent variable, answering the following question: of the total variation among the y's, how much can be attributed to the relationship between X and Y, and how much to chance? PP 9
8 The Coefficient of Determination
By total variation among the y's, we mean the changes in Y from one sample observation to another. Why do the values of Y differ from observation to observation? The answer, according to our hypothesized regression model, is that the variation in Y is partly due to changes in X, which lead to changes in the expected value of Y, and partly due to chance, that is, the effect of the random error term. PP 9
9 The Coefficient of Determination
We ask how much of the observed variation in Y can be attributed to the variation in X and how much is due to other factors (error). Define the "sample variation of Y": if there were no variation in Y, all the values of Y when plotted against X would lie on a straight line corresponding to the average value of Y. PP 9
11 The Coefficient of Determination
Now in reality the observed values of Y are scattered around this line. Variation in Y can be measured as the distance of the observed yi from the average of Y. PP 9
12 SST = SSR + SSE
Total variation can be decomposed into explained variation and unexplained variation: SST = SSR + SSE. PP 9
13 Coefficient of Determination or R2
R2 is the proportion of the variation of Y that can be attributed to the variation of X: R2 = SSR/SST, or R2 = 1 - SSE/SST. Since SST = SSR + SSE, dividing through by SST gives SST/SST = SSR/SST + SSE/SST, i.e., 1 = R2 + SSE/SST. PP 9
14 Coefficient of Determination or R2
R2 describes how well the sample regression line fits the observed data; it tells us the proportion of the total variation in the dependent variable explained by variation in the explanatory variable. R2 is an index: it has no units, and 0 ≤ R2 ≤ 1. PP 9
15 Interpreting R2
R2 = 1 indicates a perfect fit; an R2 close to zero indicates a very poor fit of the regression line to the data. PP 9
16 Computational Formulas
SSR and SST [numeric values lost in extraction]. Here is an interesting question: if we have the R2 value and we want to know the correlation coefficient (the R value), can we determine it? The answer is that we will not know the sign of the correlation coefficient; we could, however, determine the sign of the relationship between the two variables by looking at the coefficient on the estimated slope of the regression equation. R2 = SSR/SST = 0.6874: 68.74% of the variation in mortality rates is explained by variation in immunization rates. PP 9
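SST, SSE, and R2 follow mechanically from the fitted line; a sketch on hypothetical data (not the slides' dataset):

```python
def r_squared(xs, ys):
    """Coefficient of determination R^2 = SSR/SST = 1 - SSE/SST."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    sst = sum((y - my) ** 2 for y in ys)                          # total variation
    sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))   # unexplained variation
    return 1 - sse / sst

print(r_squared([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # perfect fit -> 1.0
```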
17 Interpretation of R2 as a Descriptive Statistic
Suppose we find a very low R2 for a given sample, implying that the sample regression line fits the observations poorly. A possible explanation is that X is a poor explanatory variable. This is a statement about the population regression line, namely that the population regression line is horizontal. We can test this with reference to the sample data; the null hypothesis is H0: β1 = 0. If we do not reject this null hypothesis, we conclude that Y is influenced only by the random error term. Another explanation of a low R2 is that X is a relevant explanatory variable, but that its influence on Y is weak compared to the influence of the error term. PP 9
18 Pearson's Correlation Coefficient
Correlation is used to measure the strength of the linear association between two variables. The correlation coefficient is an index: it has no units of measurement, and it carries a positive or negative sign. Its boundaries are -1 ≤ r ≤ 1; the values r = 1 and r = -1 occur when there is an exact linear relationship between x and y. PP 9
19 Pearson's Correlation Coefficient
[Figure: scatter plots showing X and Y perfectly negatively correlated, perfectly positively correlated, and uncorrelated.] PP 9
20 Pearson's Correlation Coefficient
As the relationship between x and y deviates from perfect linearity, r moves away from |1| toward 0. For data with a strongly nonlinear pattern, the correlation model should not be applied. If y tends to decrease as x increases, the correlation is negative; if y tends to increase as x increases, the correlation is positive. If r = 0, we say x and y are uncorrelated: there is no linear relationship between the two variables, although a nonlinear relationship may exist. PP 9
21 Computational Formula for r
[Formula lost in extraction.] Based on this sample there appears to be a fairly strong linear relationship between the percentage of children immunized in a specified country and its under-5 mortality rate: the correlation coefficient is fairly close to 1 in magnitude, and the relationship is negative. Mortality decreases as percent immunized increases. PP 9
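The usual computational formula for r can be sketched directly (the data here are hypothetical, chosen only to mimic an inverse immunization/mortality relationship):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient r = Sxy / sqrt(Sxx * Syy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# An inverse relationship (made-up numbers) gives r close to -1:
print(pearson_r([26, 50, 77, 98], [200, 90, 40, 10]))
```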
22 Pearson’s Correlation CoefficientLimitations of the Correlation Model The correlation model does not specify the nature of the relationship Do not infer causality An effective immunization program
might be the primary reason for the decrease in mortality, but it is possible that the immunization program is a small part of an overall health care system that is responsible for the decrease in
mortality The model measures linear relationships The Y values for a given X are assumed to be normally distributed and the X values for a given Y are also assumed to be normally distributed Sampling
from a “bivariate normal distribution” The model is very sensitive to outliers If there are pairs of data points way outside the range of the other data points, this can alter the value of the
correlation coefficient and give misleading results Do not extrapolate the correlation coefficient outside the range of data points The relationship between X and Y may change outside the range of
sample points PP 9
23 Testing Hypotheses about the Population Correlation Coefficient
We test whether there is a significant correlation, ρ, in the population between X and Y. H0: ρ = 0 (there is no linear association); H1: ρ ≠ 0 (there is a significant linear association). The sample correlation coefficient is an unbiased estimator of the population correlation coefficient, which we designate as ρ; that is, E(r) = ρ. The sampling distribution of the statistic r is approximately normally distributed. PP 9
24 Testing Hypotheses about the Population Correlation Coefficient
The standard error of the sample correlation coefficient, and the resulting test statistic, are [formulas lost in extraction]. PP 9
25 Testing Hypotheses about the Population Correlation Coefficient
Critical value at ⍺ = 0.05: t18,.05/2 = t18,.025 = 2.101, with degrees of freedom df = n - 2. Decision rule: if (-2.101 ≤ t ≤ 2.101) do not reject; therefore, reject [the computed statistic, whose value was lost in extraction, falls outside this range]. Comparing the test statistic with the critical value, we reject the null hypothesis and conclude that there is a significant linear association between immunization rates and mortality rates. PP 9
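The test statistic for H0: ρ = 0 is t = r·sqrt((n - 2)/(1 - r²)) with n - 2 degrees of freedom; a sketch (the critical value would come from a t-table or scipy.stats.t.ppf, not from this code):

```python
import math

def corr_t_stat(r, n):
    """t statistic for testing H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Toy numbers: r = 0.8 from a sample of n = 4 observations.
t = corr_t_stat(0.8, 4)
print(round(t, 4))  # -> 1.8856
```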
26 Relationship between Correlation, R, and Coefficient of Determination, R2
r = R = the square root of the coefficient of determination R2, where R is the correlation coefficient and R2 the coefficient of determination. PP 9
27 Computer Presentation of Correlation Matrix
[Correlation matrix over MORTRATE and IMMUNRATE; diagonal entries are 1, off-diagonal values lost in extraction.] PP 9
28 Inferences about the Population Parameters
We want to create a confidence interval for the slope (or intercept), or to test whether the population slope, β1 (or the intercept), equals zero. We saw before, from the OLS properties, the sampling distributions: E(b0) = β0 and E(b1) = β1, both normal. The estimators b0 and b1 are linear combinations of the yi, so their distributions follow the distribution of the yi (or of the error term): if the error terms are normal, the distributions of b0 and b1 are normal, and in large samples they are approximately normal even if the error terms are not. PP 9
29 Inferences about the Population Parameters
Among all linear unbiased estimators, OLS estimators have the smallest variance. [The formulas for the standard errors of b0 and b1 were lost in extraction.] PP 9
30 Inferences about the Population Parameters
Since σy|x is unknown, we substitute the standard error of the estimate, Se, and use the t distribution. In order to use the t distribution, we now have to assume the yi's are normal; in large samples, the t provides a good approximation even if the yi's are not normal. PP 9
31 Confidence Intervals: Population Slope and Intercept
Use information about the sampling distributions to construct confidence intervals for the population slope and intercept, assuming the conditional probability distribution of Y|X follows a normal distribution. PP 9
32 Confidence Intervals: Population Slope and Intercept
For the slope: t18,.05/2 = t18,.025 = 2.101; the resulting interval [bounds partly lost in extraction; the lower bound shown is -3.77] contains β1 with a degree of confidence of .95. A corresponding interval contains β0 with a degree of confidence of .95. In 95 out of 100 such intervals the population parameter will fall within the interval. PP 9
33 Confidence Intervals: Population Slope and Intercept
The interval estimates appear wide: the sample size is small, and there is large variation in mortality for given immunization rates, so Se is large. PP 9
34 Tests of Hypotheses
The most common type of hypothesis that is tested with the regression model is that there is no relationship between the explanatory variable X and the dependent variable Y. The relationship between X and Y is given by the linear dependence of the mean value of Y on X, that is, E(Y|X) = β0 + β1x. To say there is no relationship means E(Y|X) is not linearly dependent on X, which is to say β1 equals zero. H0: β1 = 0 (there is no relationship between X and Y); H1: β1 ≠ 0 (there is a significant relationship between X and Y). If we have a theory that suggests the direction of the relationship, then we will want a one-tail test. The test statistic is [formula lost in extraction]. PP 9
35 H0: 1 = 0 There is no relationship between X and Y
36 Tests of Hypotheses. The test statistic is t = b1 / Sb1, whose sampling distribution under the null hypothesis is t with n - 2 degrees of freedom. Set the level of significance, find the critical value tcv in a t-table with df = n - 2, and apply the decision rule: if -tcv ≤ t ≤ tcv, do not reject H0; otherwise, reject.
37 Tests of Hypotheses. For our problem: H0: β1 ≥ 0 (no inverse relationship between X and Y); H1: β1 < 0 (an inverse relationship between X and Y). Let α = 0.05, so the critical value is t18,0.05 = -1.734. Decision rule: if t ≥ -tcv, do not reject; if t < -tcv, reject. The test statistic is t = -6.291, which falls in the rejection region.
38 Tests of Hypotheses. Conclude that the immunization rate is significantly and inversely related to the mortality rate. Remember: you want to reject the null; you have found that your independent variable is related to the dependent variable.
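To make the mechanics concrete, here is a Python sketch of the slope test with a small hypothetical data set (not the mortality/immunization sample from the slides, so the numbers differ):

```python
import math

# Hypothetical sample: immunization rate (x) vs. mortality (y)
x = [40, 50, 60, 70, 80, 90]
y = [180, 150, 120, 100, 70, 40]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b1 = sxy / sxx              # least-squares slope estimate
b0 = my - b1 * mx           # intercept estimate

residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
se = math.sqrt(sum(r * r for r in residuals) / (n - 2))  # std. error of estimate
sb1 = se / math.sqrt(sxx)   # standard error of the slope
t = b1 / sb1                # test statistic for H0: beta1 = 0
```

With df = n - 2 = 4 here, the critical value differs from the slides' t18 value; the point is only the computation of b1, Sb1, and t.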
39 Computer Output of the Problem

  Statistic            MORTALITY, Y    IMMUNIZED, X
  Mean                 62.2            76.3
  Standard Error       -               -
  Median               31              83
  Mode                 9               -
  Standard Deviation   -               -
  Sample Variance      4700.8          -
  Range                220             72
  Minimum              6               26
  Maximum              226             98
  Sum                  1244            1526
  Count                20              20
40 Excel Output: Se, b0, b1, Sb0, Sb1
41 Online Homework - Chapter 15 Overview, Simple Regression: CengageNOW fourteenth assignment
Wrong Way Risk with Copulas
This example shows an approach to modeling wrong-way risk for Counterparty Credit Risk using a Gaussian copula.
A basic approach to Counterparty Credit Risk (CCR) (see Counterparty Credit Risk and CVA example) assumes that market and credit risk factors are independent of each other. A simulation of market
risk factors drives the exposures for all contracts in the portfolio. In a separate step, Credit-Default Swap (CDS) market quotes determine the default probabilities for each counterparty. Exposures,
default probabilities, and a given recovery rate are used to compute the Credit-Value Adjustment (CVA) for each counterparty, which is a measure of expected loss. The simulation of risk factors and
the default probabilities are treated as independent of each other.
In practice, default probabilities and market factors are correlated. The relationship may be negligible for some types of instruments, but for others, the relationship between market and credit risk
factors may be too important to be ignored when computing risk measures. When the probability of default of a counterparty and the exposure resulting from particular contract tend to increase
together we say that the contract has wrong-way risk (WWR).
This example demonstrates an implementation of the wrong-way risk methodology described in Garcia Cespedes et al. (see References).
Exposures Simulation
Many financial institutions have systems that simulate market risk factors and value all the instruments in their portfolios at given simulation dates. These simulations are used to compute exposures
and other risk measures. Because the simulations are computationally intensive, reusing them for subsequent risk analyses is important.
This example uses the data and the simulation results from the Counterparty Credit Risk and CVA example, previously saved in the ccr.mat file. The ccr.mat file contains:
• RateSpec: The rate spec when contract values were calculated
• Settle: The settle date when contract values were calculated
• simulationDates: A vector of simulation dates
• swaps: A struct containing the swap parameters
• values: The NUMDATES x NUMCONTRACT x NUMSCENARIOS cube of simulated contract values over each date/scenario
This example looks at expected losses over a one-year time horizon only, so the data is cropped after one year of simulation. Simulation dates over the first year are at a monthly frequency, so the
13th simulation date is our one-year time horizon (the first simulation date is the settle date).
load ccr.mat
oneYearIdx = 13;
values = values(1:oneYearIdx,:,:);
dates = simulationDates(1:oneYearIdx);
numScenarios = size(values,3);
The credit exposures are computed from the simulated contract values. These exposures are monthly credit exposures per counterparty from the settle date to our one-year time horizon.
Since defaults can happen at any time during the one-year time period, it is common to model the exposure at default (EAD) based on the idea of expected positive exposure (EPE). The time-averaged
exposure for each scenario is computed, which is called PE (positive exposure). The average of the PEs over all scenarios is the EPE, which can also be obtained from the exposureprofiles function.
The positive exposure matrix PE contains one row per simulated scenario and one column per counterparty. It is used as the EAD in this analysis.
% Compute counterparty exposures
[exposures, counterparties] = creditexposures(values,swaps.Counterparty, ...
numCP = numel(counterparties);
% Compute PE (time-averaged exposures) per scenario
intervalWeights = diff(dates) / (dates(end) - dates(1));
exposureMidpoints = 0.5 * (exposures(1:end-1,:,:) + exposures(2:end,:,:));
weightedContributions = bsxfun(@times,intervalWeights,exposureMidpoints);
PE = squeeze(sum(weightedContributions))';
% Compute total portfolio exposure per scenario
totalExp = sum(PE,2);
% Display size of PE and totalExp
whos PE totalExp
Name Size Bytes Class Attributes
PE 1000x5 40000 double
totalExp 1000x1 8000 double
Credit Simulation
A common approach for simulating credit defaults is based on a "one-factor model", sometimes called the "asset-value approach" (see Gupton et al., 1997). This is an efficient way to simulate
correlated defaults.
Each company i is associated with a random variable Yi, such that
$Y_i = \beta_i Z + \sqrt{1 - \beta_i^2}\,\epsilon_i$
where Z is the "one-factor", a standard normal random variable that represents a systematic credit risk factor whose values affect all companies. The correlation between company i and the common factor is beta_i; the correlation between companies i and j is beta_i*beta_j. The idiosyncratic shock epsilon_i is another standard normal variable that may reduce or increase the effect of the systematic factor, independently of what happens with any other company.
If the default probability for company i is PDi, a default occurs when
$\Phi(Y_i) < PD_i$
where $\Phi$ is the cumulative standard normal distribution.
The Yi variable is sometimes interpreted as asset returns, or sometimes referred to as a latent variable.
This model is a Gaussian copula that introduces a correlation between credit defaults. Copulas offer a particular way to introduce correlation, or more generally, co-dependence between two random
variables whose co-dependence is unknown.
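The same mechanism can be sketched outside MATLAB; the following Python snippet is an illustrative stand-alone version of the one-factor simulation with hypothetical parameters (beta = 0.3, PD = 2%), unrelated to the portfolio data used in this example:

```python
import random
from statistics import NormalDist

random.seed(42)
Phi = NormalDist().cdf     # standard normal CDF

beta = 0.3                 # sensitivity to the systematic factor Z
pd = 0.02                  # one-year default probability (hypothetical)
n = 200_000                # number of simulated credit scenarios

defaults = 0
for _ in range(n):
    z = random.gauss(0.0, 1.0)            # systematic credit factor
    eps = random.gauss(0.0, 1.0)          # idiosyncratic shock
    y = beta * z + (1 - beta ** 2) ** 0.5 * eps
    defaults += Phi(y) < pd               # default when Phi(Y) < PD

default_rate = defaults / n
```

Because Y is standard normal by construction, Phi(Y) is uniform and the simulated default rate converges to PD; the correlation parameter only shapes how defaults cluster across scenarios.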
Use CDS spreads to bootstrap the one-year default probabilities for each counterparty. The CDS quotes come from the swap-portfolio spreadsheet used in the Counterparty Credit Risk and CVA example.
% Import CDS market information for each counterparty
swapFile = 'cva-swap-portfolio.xls';
cds = readtable(swapFile,'Sheet','CDS Spreads');
cdsDates = datenum(cds.Date);
cdsSpreads = table2array(cds(:,2:end));
% Bootstrap default probabilities for each counterparty
zeroData = [RateSpec.EndDates RateSpec.Rates];
defProb = zeros(1, size(cdsSpreads,2));
for i = 1:numel(defProb)
    probData = cdsbootstrap(zeroData, [cdsDates cdsSpreads(:,i)], ...
        Settle, 'probDates', dates(end));
    defProb(i) = probData(2);
end
Now simulate the credit scenarios. Because defaults are rare, it is common to simulate a large number of credit scenarios.
The sensitivity parameter beta is set to 0.3 for all counterparties. This value can be calibrated or tuned to explore model sensitivities. See the References for more information.
numCreditScen = 100000;
% Z is the single credit factor
Z = randn(numCreditScen,1);
% epsilon is the idiosyncratic factor
epsilon = randn(numCreditScen,numCP);
% beta is the counterparty sensitivity to the credit factor
beta = 0.3 * ones(1,numCP);
% Counterparty latent variables
Y = bsxfun(@times,beta,Z) + bsxfun(@times,sqrt(1 - beta.^2),epsilon);
% Default indicator
isDefault = bsxfun(@lt,normcdf(Y),defProb);
Correlating Exposure and Credit Scenarios
Now that there is a set of sorted portfolio exposure scenarios and a set of default scenarios, follow the approach in Garcia Cespedes et al. and use a Gaussian copula to generate correlated
exposure-default scenario pairs.
Define a latent variable Ye that maps into the distribution of simulated exposures. Ye is defined as
$Y_e = \rho Z + \sqrt{1 - \rho^2}\,\epsilon_e$
where Z is the systematic factor computed in the credit simulation, epsilon_e is an independent standard normal variable, and rho is interpreted as a market-credit correlation parameter. By construction, Ye is a standard normal variable correlated with Z with correlation parameter rho.
The mapping between Ye and the simulated exposures requires us to order the exposure scenarios in a meaningful way, based on some sortable criterion. The criterion can be any meaningful quantity, for
example, it could be an underlying risk factor for the contract values (such as an interest rate), the total portfolio exposure, and so on.
In this example, use the total portfolio exposure (totalExp) as the exposure scenario criterion to correlate the credit factor with the total exposure. If rho is negative, low values of the credit
factor Z tend to get linked to high values of Ye, hence high exposures. This means negative values of rho introduce WWR.
To implement the mapping between Ye and the exposure scenarios, sort the exposure scenarios by the totalExp values. Suppose that the number of exposure scenarios is S (numScenarios). Given Ye, find
the value j such that
$\frac{j-1}{S} \le \Phi(Y_e) < \frac{j}{S}$
and select the scenario j from the sorted exposure scenarios.
Ye is correlated to the simulated exposures and Z is correlated to the simulated defaults. The correlation rho between Ye and Z is, therefore, the correlation link between the exposures and the
credit simulations.
% Sort the total exposure
[~,totalExpIdx] = sort(totalExp);
% Scenario cut points
cutPoints = 0:1/numScenarios:1;
% epsilonExp is the idiosyncratic factor for the latent variable
epsilonExp = randn(numCreditScen,1);
% Set a market-credit correlation value
rho = -0.75;
% Latent variable
Ye = rho * Z + sqrt(1 - rho^2) * epsilonExp;
% Find corresponding exposure scenario
binidx = discretize(normcdf(Ye),cutPoints);
scenIdx = totalExpIdx(binidx);
totalExpCorr = totalExp(scenIdx);
PECorr = PE(scenIdx,:);
The following plot shows the correlated exposure-credit scenarios for the total portfolio exposure as well as for the first counterparty. Because of the negative correlation, negative values of the
credit factor Z correspond to high exposure levels (wrong-way risk).
% We only plot up to 10000 scenarios
numScenPlot = min(10000,numCreditScen);
% Scatter calls reconstructed to match the legend (not the original code)
figure;
plot(Z(1:numScenPlot),totalExpCorr(1:numScenPlot),'.')
hold on
plot(Z(1:numScenPlot),PECorr(1:numScenPlot,1),'.')
xlabel('Credit Factor (Z)')
ylabel('Exposure')
title(['Correlated Exposure-Credit Scenarios, \rho = ' num2str(rho)])
legend('Total Exposure','CP1 Exposure')
hold off
For positive values of rho, the relationship between the credit factor and the exposures is reversed (right-way risk).
rho = 0.75;
Ye = rho * Z + sqrt(1 - rho^2) * epsilonExp;
binidx = discretize(normcdf(Ye),cutPoints);
scenIdx = totalExpIdx(binidx);
totalExpCorr = totalExp(scenIdx);
% Scatter call reconstructed (not the original code)
figure;
plot(Z(1:numScenPlot),totalExpCorr(1:numScenPlot),'.')
xlabel('Credit Factor (Z)')
title(['Correlated Exposure-Credit Scenarios, \rho = ' num2str(rho)])
Sensitivity to Correlation
You can explore the sensitivity of the exposures or other risk measures to a range of values for rho.
For each value of rho, compute the total losses per credit scenario as well as the expected losses per counterparty. This example assumes a 40% recovery rate.
Recovery = 0.4;
rhoValues = -1:0.1:1;
totalLosses = zeros(numCreditScen,numel(rhoValues));
expectedLosses = zeros(numCP, numel(rhoValues));
for i = 1:numel(rhoValues)
rho = rhoValues(i);
% Latent variable
Ye = rho * Z + sqrt(1 - rho^2) * epsilonExp;
% Find corresponding exposure scenario
binidx = discretize(normcdf(Ye),cutPoints);
scenIdx = totalExpIdx(binidx);
simulatedExposures = PE(scenIdx,:);
% Compute actual losses based on exposures and default events
losses = isDefault .* simulatedExposures * (1-Recovery);
totalLosses(:,i) = sum(losses,2);
% We compute the expected losses per counterparty
    expectedLosses(:,i) = mean(losses)';
end
displayExpectedLosses(rhoValues, expectedLosses)
Expected Losses
Rho CP1 CP2 CP3 CP4 CP5
-1.0   604.10   260.44   194.70  1234.17   925.95
-0.9   583.67   250.45   189.02  1158.65   897.91
-0.8   560.45   245.19   183.23  1107.56   865.33
-0.7   541.08   235.86   177.16  1041.39   835.12
-0.6   521.89   228.78   170.49   991.70   803.22
-0.5   502.68   217.30   165.25   926.92   774.27
-0.4   487.15   211.29   160.80   881.03   746.15
-0.3   471.17   203.55   154.79   828.90   715.63
-0.2   450.91   197.53   149.33   781.81   688.13
-0.1   433.87   189.75   144.37   744.00   658.19
 0.0   419.20   181.25   138.76   693.26   630.38
 0.1   399.36   174.41   134.83   650.66   605.89
 0.2   385.21   169.86   130.93   617.91   579.01
 0.3   371.21   164.19   124.62   565.78   552.83
 0.4   355.57   158.14   119.92   530.79   530.19
 0.5   342.58   152.10   116.38   496.27   508.86
 0.6   324.73   145.42   111.90   466.57   485.05
 0.7   319.18   140.76   108.14   429.48   465.84
 0.8   303.71   136.13   103.95   405.88   446.36
 0.9   290.36   131.54   100.20   381.27   422.79
 1.0   278.89   126.77    95.77   358.71   405.40
You can visualize the sensitivity of the Economic Capital (EC) to the market-credit correlation parameter. Define EC as the difference between a percentile q of the distribution of losses, minus the
expected loss.
Negative values of rho result in higher capital requirements because of WWR.
pct = 99;
ec = prctile(totalLosses,pct) - mean(totalLosses);
% Plot call reconstructed (not the original code)
figure;
plot(rhoValues,ec)
xlabel('\rho')
title('Economic Capital (99%) versus \rho')
ylabel('Economic Capital');
Final Remarks
This example implements a copula-based approach to WWR, following Garcia Cespedes et al. The methodology can efficiently reuse existing exposures and credit simulations, and the sensitivity to the
market-credit correlation parameter can be efficiently computed and conveniently visualized for all correlation values.
The single-parameter copula approach presented here can be extended for a more thorough exploration of the WWR of a portfolio. For example, different types of copulas can be applied, and different
criteria can be used to sort the exposure scenarios. Other extensions include simulating multiple systemic credit risk variables (a multi-factor model), or switching from a one-year to a multi-period
framework to calculate measures such as credit value adjustment (CVA), as in Rosen and Saunders (see References).
References
1. Garcia Cespedes, J. C. "Effective Modeling of Wrong-Way Risk, Counterparty Credit Risk Capital, and Alpha in Basel II." The Journal of Risk Model Validation. Vol. 4, No. 1, pp. 71-98, Spring 2010.
2. Gupton, G., C. Finger, and M. Bhatia. CreditMetrics™ - Technical Document. J.P. Morgan, New York, 1997.
3. Rosen, D., and D. Saunders. "CVA the Wrong Way." Journal of Risk Management in Financial Institutions. Vol. 5, No. 3, pp. 252-272, 2012.
Local Functions
function displayExpectedLosses(rhoValues, expectedLosses)
fprintf('          Expected Losses\n');
fprintf('  Rho      CP1      CP2      CP3      CP4      CP5\n');
for i = 1:numel(rhoValues)
    % Display expected losses for this correlation value
    fprintf('% .1f%9.2f%9.2f%9.2f%9.2f%9.2f\n', rhoValues(i), expectedLosses(:,i));
end
end
See Also
cdsbootstrap | cdsprice | cdsspread | cdsrpv01
Dual Momentum Trading Strategy (Gary Antonacci) – Video, Rules, Setup, Backtest Analysis - QuantifiedStrategies.com
The dual momentum trading strategy by Gary Antonacci, what is that about? As the Oracle of Omaha, Warren Buffett, once said: “Trying to time the market is the number one mistake to avoid.” It is
almost impossible to consistently time the market — you will either be buying or selling too late or too early rather than at the right time, which is why many professional investors advise against
that. One method that has offered a way to spot and get into the right trend is Dual Momentum.
The dual momentum trading strategy by Gary Antonacci is a method of investing that selects only assets that have outperformed their peers over a given period and are also generating positive returns. It is based on the idea that an asset with superior relative momentum and positive absolute momentum will continue to perform until another overtakes it. Thus, it is a sort of trend-following strategy.
To help you understand the topic, we will discuss it under the following headings:
What is dual momentum?
Dual momentum investing strategy uses two kinds of momentum to determine which security to buy and when to do that.
The strategy got its name from the fact that it uses two types of momentum in its analysis. It compares the current momentum of two or more financial securities and chooses the one with the greatest
momentum and then compares that with what it was in the past.
That is to say, the dual momentum approach seeks to invest in an asset only if it is performing better than its peers over a given period and has a positive (upward) momentum at the same time. It,
therefore, does not aim to buy the best among losers; it only aims to buy the best among the performers.
On the other hand, in markets such as futures, where you can go short as easily as you can go long, the dual momentum strategy can also be used to identify and short the worst-performing asset that has negative (downward) momentum at that moment — the weakest of the weak.
Plenty of academic research has documented the momentum effect in stocks over holding periods ranging from one to 12 months; for longer and shorter periods it has not held up well. For example, stocks that have performed well over the last 6 months tend to continue to perform well over the next 6 months.
To better understand the concept of dual momentum, we need to understand what momentum means, the types of momentum, and the dual momentum formula.
The momentum formula: what does momentum mean?
Momentum, as it’s used in the momentum oscillator in many trading platforms, is also known as the rate of change (ROC), which measures the amount that a security’s price has changed over a given
period. It is often expressed as the percentage change in the current price compared to a specific period in the past.
The formula is given as follows:
Momentum = [Current Price / Price at a given period in the past (usually 14)] x 100
As you can see, the momentum is obtained by dividing the current price by the price at a specific previous period, which gives the proportional rate of change; multiplying the quotient by 100 gives the percentage rate of change. For the momentum (ROC) indicator seen on different trading platforms, this produces an indicator that oscillates around 100, such that values less than 100 indicate negative or downward momentum (decreasing prices) while values more than 100 indicate positive or upward momentum (increasing prices).
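As a quick sketch, the reading can be computed in Python (the helper name is ours, not a library function):

```python
def momentum(current_price, past_price):
    """Rate-of-change momentum, expressed around 100:
    readings above 100 mean rising prices, below 100 falling prices."""
    return current_price / past_price * 100

up = momentum(110, 100)    # price up 10% -> reading near 110
down = momentum(95, 100)   # price down 5% -> reading near 95
```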
Types of momentum
The formula we explained above is for one type of momentum known as the absolute momentum, but there is also another concept of momentum known as the relative momentum. Basically, there are two types
of momentum:
• Absolute momentum
• Relative momentum
Absolute momentum
This is the momentum of an asset relative to itself. It is simply the percentage rate of change of the price of the asset over a specified period. If the asset has a positive change over the period
of interest, the absolute momentum would be positive, and the greater the change, the greater the momentum and vice versa.
Relative momentum
This type of momentum compares the returns of an asset over a given period to those of other assets. In other words, it compares the absolute momentum of one asset to that of another asset or even
those of many assets such that the asset with a higher momentum than the others is said to have positive relative momentum.
The dual momentum formula
It is a bit difficult to express the dual momentum concept in a simplified formula that a newbie can easily understand. However, we can simplify the concept as follows:
Dual Momentum = Absolute Momentum of Asset 1 - Absolute Momentum of Asset 2, where Asset 1 is the asset with the higher absolute momentum and both absolute momentums have the same sign.
And the interpretation would be as follows:
• Go long on Asset 1 if both assets have positive absolute momentum (buy the strongest of the strong).
• Go short on Asset 2 if both assets have negative absolute momentum (sell the weakest of the weak).
The formula simply identifies a positive relative momentum in a broadly positive market or a negative relative momentum in a broadly negative market. With that, you will be buying the best of the positively performing assets and selling the worst of the negatively performing assets.
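The two rules can be condensed into a small Python helper (the function and ticker names are illustrative; inputs are ROC-style readings where 100 means an unchanged price):

```python
def dual_momentum(readings):
    """readings: dict mapping asset name -> ROC momentum (100 = flat).
    Long the strongest asset when every reading is positive (> 100);
    short the weakest when every reading is negative (< 100);
    otherwise stand aside."""
    if all(m > 100 for m in readings.values()):
        return ('long', max(readings, key=readings.get))
    if all(m < 100 for m in readings.values()):
        return ('short', min(readings, key=readings.get))
    return ('flat', None)

signal_up = dual_momentum({'SPY': 110, 'GLD': 105})   # both rising: buy the leader
signal_down = dual_momentum({'CL': 90, 'NG': 95})     # both falling: short the laggard
```

Mixed readings return no trade, which mirrors the idea that dual momentum never buys the best of the losers.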
How dual momentum works
The dual momentum strategy works by targeting the assets with the most momentum in a particular market situation. The idea is to select assets that have historically outperformed other assets and are
also themselves in positive momentum — these would be the assets to buy, especially in a bull market. On the flip side, if you want to go short in a downward market, you should aim to select those
assets that have both negative absolute and relative momentum.
Here’s the explanation: When the general market is bullish, you want to be buying the best-performing securities, and when the market is bearish, you want to be selling the worst-performing
securities. So, you aim to buy securities with both positive absolute momentum and positive relative momentum, while you sell securities that have both negative absolute momentum and negative
relative momentum.
Note that it is possible to find assets that have positive absolute momentum but negative relative momentum — for example, when the market is generally going up, the market laggards can have positive momentum but still perform poorly compared to their peers.
You don't want to buy these laggards; instead, you want to buy the leaders with positive relative momentum. On the other hand, there can be assets with positive relative momentum and negative absolute momentum — for instance, when the market is generally going down (a bear market), some assets with negative absolute momentum will have positive relative momentum because they are losing less than others. But you surely won't want to be buying them because they are also losers; instead, you will look to short the worst losers in a market where you can easily go short.
The first step in applying the dual momentum approach is to find assets with positive absolute momentum. So, you have to calculate the absolute momentum of individual assets of interest over a
specific period you have chosen. You get this by dividing the current price of each asset by what it was at a specified period in the past. If an asset is trading at a higher price than it was in the
past, its absolute momentum would be positive, and you should consider it for a long position after checking the relative momentum. However, if it’s trading lower, its absolute momentum would be
negative, making it a candidate for short selling.
The second step is to compare the assets with positive absolute momentum against one another to get the relative momentum; likewise, you compare assets with negative absolute momentum against one another. Among assets with positive absolute momentum, the ones with the highest relative momentum should be selected for buying. Among those with negative absolute momentum, the ones with the lowest relative momentum (biggest negative value) should be selected for short positions.
In essence, the dual momentum approach forces you to buy assets that are both going up and outperforming their peers, while selling those ones that are both going down and performing worse than the
other losers. For the equity market, the dual momentum strategy performs better for the long side because the equity market tends to have a long-term upward bias.
If you want to take advantage of the short side too, you should look for a market that offers easy access to both the long and short sides, such as futures and forex markets. However, you may not
want to use the strategy in forex because long-term trends are hard to find in the currency market these days.
The dual momentum strategy: other things you need to know
There are other things you should know about the dual momentum strategy. These are some of them:
What is the origin of the dual momentum strategy?
The origin of the dual momentum strategy can be traced to the seminal work of Jegadeesh and Titman (1993), which showed that using relative momentum to make investment decisions could provide profitable trading opportunities that are robust enough to be exploited using certain parameters.
Their work indicated that the returns of relative momentum outperformed benchmark returns, but the volatility inherent in the strategy is only marginally better than that of the benchmark. So, some
active investors and hedge fund managers don’t believe that the reward is big enough to justify the risk.
However, in 2012, Gary Antonacci published a book titled, “Dual Momentum Investing: An Innovative Strategy for Higher Returns with Lower Risk”, where he explained a simple but highly effective
extension of the relative momentum approach. He added another aspect into the concept by bringing the asset’s absolute momentum into the analysis. What he found was that by combining the two types of
momentum, it was possible to enjoy the rewards of the relative momentum approach while greatly reducing the volatility that’s inherent in the approach.
How to identify the dual momentum signals
There is no special way to identify the dual momentum signal except to follow the main idea in the concept to buy the strongest among the strong and sell the weakest among the weak. The flow chart
below simplifies the entire process.
You start by finding the absolute momentum of the assets of interest, which could be securities in related sectors of the same market or different markets like equities and bonds. Then, compare their
momentum to choose the best performers among those securities that have positive absolute momentum. These are the ones showing a long signal.
For markets where you can safely go short, compare all the securities showing negative momentum and select the worst performers, as they are the ones showing a short signal. There is no hard and fast
rule about when to enter a trade, but there tend to be more long signals when the general market is bullish, and more short signals in a bearish market environment. However, in a strong
Examples of the dual momentum strategy
Let’s discuss two scenarios to show how the dual momentum strategy can be applied in real-life situations:
First example: S&P 500 Index vs. Gold
In our first example, let’s say you are monitoring the S&P 500 Index and the commodity market — say gold, for instance. Assuming the S&P 500 Index was trading at 3025.52 three months ago and has made
a 10% increase in price over that period. Gold, on the other hand, has made a gain of about 5% over that same three months.
Thus, in terms of absolute momentum, the S&P 500 Index would be reading an absolute momentum of 110%, while the absolute momentum of gold would be 105%. Comparing the momentum in the two assets, it’s
clear that the S&P has greater momentum than gold at the moment, so the relative momentum is positive for the S&P.
Since the S&P Index has both positive relative momentum and positive absolute momentum, it is the right asset to buy at the moment since it is the better of the two assets. The belief is that the
asset with a higher momentum is more likely to continue gaining more momentum, with prices rising more in the nearest future. However, you don’t buy the S&P 500 Index directly; instead, you buy an
ETF that tracks it, such as the SPY ETF (SPDR S&P 500 Trust ETF).
We made a backtest of the SPY vs. GLD ratio (momentum):
Second example: Crude Oil and Natural Gas Futures
For our second example, we will consider two commodities in the futures market where you can easily trade on the short side no matter your account size. Now, let’s say you are monitoring crude oil
and natural gas futures, and over the last one month, both assets have been on the decline. Assuming crude oil made a 10% decline in price over that period, and natural gas, on the other hand, made a
decline of about 5% over that same period.
Using those values to calculate the absolute momentum of those assets, crude oil would be reading an absolute momentum of 90%, while the absolute momentum of natural gas would be 95%. Comparing the momentum in the two assets, it's clear that crude oil has a greater downward momentum (negative momentum) than natural gas at the moment. In other words, the relative momentum between those two assets would be negative for crude oil.
With crude oil showing both negative relative momentum and negative absolute momentum, it is the right asset to short at the moment because it is the worse of the two assets. This is in line with the
belief that the asset with a greater downward momentum is more likely to continue to decline in the nearest future.
How has the dual momentum strategy performed in the past?
Interestingly, the dual momentum strategy, despite using a relatively simple approach, has performed remarkably well over the years, as Gary Antonacci (who created the strategy) demonstrated with his
Global Equities Momentum (GEM) model, which holds U.S. or non-U.S. stock indices when stocks are strong and uses bonds as a safe harbor when stocks are weak.
Gary presented his GEM results from as far as 1950 up to 2018 and compared them to a global asset allocation (GAA) benchmark of 45% in U.S. stocks, 28% in non-U.S. stocks, and 27% in 5-year bonds.
Gary stated that these percentages represent the amount of time GEM spent in each of these markets and may also be considered to represent a typical global asset allocation portfolio. However, these
backtesting results are simulated and hypothetical. They should never be considered indicative of future results, as they do not represent returns that any investor actually attained in the past. You
should know that indexes cannot be directly traded, so they do not reflect management or trading fees when used in backtesting.
The results show that the correlation in monthly returns between GEM and GAA is 0.60, but between GEM and the S&P 500, the correlation is 0.50. Since, on average, there were only 1.5 trades per year
with GEM, transaction costs must have been minimal.
Interestingly, these results are still ongoing and are updated monthly on the Performance pages of Gary’s website.
The chart below is taken from Allocatesmartly.com and shows how the strategy has performed relative to the traditional 60/40 portfolio:
Allocatesmartly.com goes on to show the performance metrics and statistics:
The drawdowns are low and relatively short-lived. We know that most traders abandon a strategy when drawdowns approach 20% or more, thus this strategy might be interesting as a substitute for the
traditional buy & hold of stocks.
Dual momentum strategy – backtest and performance
Let’s make a backtest of Antonacci’s dual momentum strategy. The trading rules are like this:
Trading Rules
THIS SECTION IS FOR MEMBERS ONLY. _________________
BECOME A MEMBER TO GET ACCESS TO TRADING RULES IN ALL ARTICLES CLICK HERE TO SEE ALL 400 ARTICLES WITH BACKTESTS & TRADING RULES
These trading rules are evaluated at the close of every month, twelve times per year. Thus, this is a momentum strategy with monthly rebalancing, and we hold one asset at a time. Rinse and repeat.
For the backtest above, we used the following ETFs: SPY for S&P 500, EFA for global stocks (world index ex. USA), and AGG for Treasury Bills (not an optimal proxy, but serves our purpose). Dividends
are included and reinvested.
The equity curve of the strategy looks like this:
The annual return was 6.75%, and the max drawdown was 30%. This result is better than the world index, but it failed to beat the S&P 500, which compounded at 9.2%, although with a larger max drawdown.
Investing with the dual momentum strategy
Surely there are many ways you can implement the dual momentum concept when trading/investing. You can use the approach to trade individual assets, such as stocks and securities on the futures
markets. However, Gary’s research shows that momentum works best when used to invest in ETFs, especially the ones that track geographically diversified equity indexes.
Implementing the dual momentum concept with individual assets
You can tweak this concept and use it to select stocks in different market conditions. For example, during a bullish market, you can screen for the stocks with the greatest momentum and invest in those.
Similarly, in a bear market, you can use the concept to select the securities with the biggest downward momentum and go short on them.
Implementing the dual momentum strategy with ETFs
The best way to implement the dual momentum strategy is with ETFs. Gary described a modular approach where, every month, you compare two related sectors or two parts of a single sector and select the
one that performed better over a certain period — usually the preceding twelve months. You buy the better performer if it has positive absolute momentum, but if it has negative absolute momentum, you
opt for treasury bonds or investment-grade bonds.
These are examples of the markets that Gary suggested for this approach:
• Bonds: credit bonds & high yield bonds
• Equities: US equities & international equities
• Economic stress: gold & treasury bonds
• REITs: mortgage REITs & credit REITs
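The monthly decision described above can be sketched in a few lines of Python. This is a simplified illustration, not the members-only rules: the dictionary of 12-month returns, the tickers, and the T-bill hurdle for absolute momentum are assumptions based on Gary's published description.

```python
# Hedged sketch of a monthly dual-momentum (GEM-style) decision.
def dual_momentum_pick(returns_12m, tbill_return_12m):
    """Pick one asset from a dict of 12-month returns.

    Relative momentum: choose the best performer.
    Absolute momentum: hold it only if it beat the T-bill return;
    otherwise fall back to bonds as the safe harbor.
    """
    best = max(returns_12m, key=returns_12m.get)
    if returns_12m[best] > tbill_return_12m:
        return best       # positive absolute momentum: hold equities
    return "BONDS"        # negative absolute momentum: safe harbor

# Example month: US stocks up 12%, international up 8%, T-bills up 2%
print(dual_momentum_pick({"US": 0.12, "INTL": 0.08}, 0.02))    # US
# Bear month: both equity sleeves trail the T-bill return
print(dual_momentum_pick({"US": -0.10, "INTL": -0.15}, 0.02))  # BONDS
```

Running this once at the close of every month, and holding whatever it returns, reproduces the "one asset at a time, monthly rebalancing" structure of the backtest above.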
If well implemented, the approach has the potential to yield some really good results. However, Gary indicated that the dual momentum strategy tends to underperform during strong bull markets or when
the market rebounds strongly, but with its ability to weather bear markets, the strategy has outperformed in the long term.
Using the dual momentum concept in sector rotation
The dual momentum concept can also be applied to the sector rotation approach. In this case, you may create a universe of ETFs that represent various sectors, regions, or asset classes and then rank
the sectors/regions/asset classes according to their absolute momentum over a chosen period.
You can then decide to buy the best three ETFs with positive absolute momentum. After a certain period, you rotate again into the best three at that moment. So, you are basically leveraging the dual
momentum approach in applying the sector rotation strategy.
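The rotation described above can be sketched as a ranking step followed by a positivity filter. The tickers and momentum figures below are made up for illustration:

```python
# Rank a universe of ETFs by momentum over a chosen lookback, then keep
# the best three that also have positive absolute momentum.
def top_three_rotation(momentum):
    """Return up to three tickers, best momentum first, positives only."""
    ranked = sorted(momentum, key=momentum.get, reverse=True)
    return [t for t in ranked if momentum[t] > 0][:3]

universe = {"XLK": 0.15, "XLE": 0.07, "XLF": -0.02, "XLV": 0.04, "GLD": 0.10}
print(top_three_rotation(universe))  # ['XLK', 'GLD', 'XLE']
```

If fewer than three candidates have positive momentum, the function simply returns fewer holdings; in a broad bear market it returns an empty list, which corresponds to moving entirely into the defensive asset.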
Other momentum strategies
We have backtested hundreds of trading strategies since we started the blog in 2012. Some of them have been momentum strategies:
The benefits of using the dual momentum strategy
There are two key benefits of the dual momentum strategy. The first is that it offers a better result than the buy-and-hold strategy; the backtested results displayed above show that the dual momentum approach performed better.
The second key benefit is that it comes with less volatility. Its worst drawdowns were not as bad as those of the benchmark.
What is the Dual Momentum Trading Strategy by Gary Antonacci?
The Dual Momentum Trading Strategy by Gary Antonacci is an investment approach that selects assets based on their outperformance compared to their peers over a specific period. This strategy
considers both absolute momentum (an asset’s own performance) and relative momentum (performance compared to other assets).
How does the Dual Momentum Investing Strategy work?
The Dual Momentum Investing Strategy uses two types of momentum to decide which security to buy and when. It aims to invest in assets that not only outperform their peers but also demonstrate
positive absolute momentum. The strategy involves selecting the asset with the highest relative momentum among those with positive absolute momentum.
How does Dual Momentum work in Bull and Bear Markets?
In a bullish market, the strategy aims to buy assets with positive absolute and relative momentum. In a bearish market, it seeks to sell assets with negative absolute and relative momentum. This
approach helps investors align with the prevailing market conditions.
Jess will plant up to 27 acres on her farm with wheat and corn. More than 5 acres will be planted with wheat.Let w represent the number of acres of wheat and c represent the number of acres of corn. Identify two inequalities that represent this situation. A. w > 5 B. w + c ≤ 27 C. w – c ≤ 27 D. c < 5
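"Up to 27 acres" in total and "more than 5 acres" of wheat translate directly to w + c ≤ 27 and w > 5, i.e. choices A and B. A quick sanity check in Python:

```python
# A: w > 5 (more than 5 acres of wheat)
# B: w + c <= 27 (up to 27 acres in total)
def feasible(w, c):
    return w > 5 and w + c <= 27

print(feasible(10, 15))  # True: 10 > 5 and 25 <= 27
print(feasible(4, 20))   # False: only 4 acres of wheat
print(feasible(12, 20))  # False: 32 acres exceeds the 27-acre limit
```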
Live change of panel states still doesn't work!
I deleted all panel states before installed SP1.
When I try to change the panel state from Long to Default, the aircraft goes mad: the siren is activated, the displays don't work, the MCP doesn't work...
It seems that when the default panel state is loaded, the sequence fails to spool the engines, and that leads to a bunch of errors.
Try shutting down the engines before you load the panel state. Let us know if that worked.
Engines are already shut down. I'm trying to load Default while Long is active.
What are you starting with?
Long is the default and it loads up the first time the 777 is loaded.
Ok wanted to make sure you didn't modify the install. Are you selecting the 777 from the FSX start screen?
Yes, FSX start screen loads up with trike, I select aircraft/airport/time and click FLY NOW!
So when I went from 777 Long to Default, mine went crazy as well. I just clicked on the warning and caution button and waited until everything finished. It apparently loaded up with just the battery on, nothing else: no external power, no wheel chocks. So it is in between cold-and-dark and Long. If the cockpit is completely dark, click and hold the menu button on the left CDU; it should turn on.
Ok, please open a ticket with PMDG support. We were not seeing this so let's have them take a look.
One note, working from a 737-800 NG and we have begun descending for landing. I will be shutting down shortly.
Ok, I'll wait for tomorrow, if no one report it or recognize it as a bug, I'll open a ticket.
like a real plane that is awesome!
Ok, as no one else is complaining about this, I'm gonna open a ticket.
• Commercial Member
The gist of the support recommendation is to avoid jumping from states where the engines are not running to states where the engines are running. Still - submit the ticket to get it on their radar.
We were not seeing this so let's have them take a look.
We actually did. There was an email stream that went around last night / this morning, I think.
Kyle Rodgers
Hah, that's exactly why I need the tool: To switch between engines running and engines not running when I want to quick test something.
It's important to confirm it's not a bug or a problem on my side only. It's not a show stopper for me, just a little annoyance.
I do not understand how the tool works perfectly in the NGX, but here they cannot get it to work.
What's going on with this? Is it acknowledged by PMDG? Is it going to be fixed or not?
Two Chickens in Bigger Boxes
Solution 1
As in the earlier problem, there are $16$ equiprobable chicken placements, of which six satisfy the conditions of the problem:
$\begin{array}{ccccc} \textit{Box #}&1&2&3&4\\ &\text{chicken}&&&\text{chicken}\\ &&\text{chicken}&&\text{chicken}\\ &&&\text{chicken}&\text{chicken}\\ &\text{chicken}&&\text{chicken}&\\ &&\text{chicken}&\text{chicken}&\\ &&&\text{2 chickens}&\\ \end{array}$
Thus boxes #1-2 each contain a chicken with the probability $\displaystyle \frac{2}{6}=\frac{1}{3};$ for box #3, the probability is $\displaystyle \frac{4}{6}=\frac{2}{3};$ for box #4, it's $\displaystyle \frac{3}{6}=\frac{1}{2}.$
Solution 2
Let's follow where the chickens go. Each has equal probabilities of getting into any of the four boxes. Some configurations are OK, some violate the constraints of the problem.
$\begin{array}{ccccc} \textit{chick 1 in box / chick 2 in box}\\ &1&2&3&4\\ 1&X&X&OK!&OK\\ 2&X&X&OK!&OK\\ 3&OK!&OK!&OK!&OK!\\ 4&OK&OK&OK!&X\\ \end{array}$
There are $11$ legitimate equiprobable situations; in $7$ of them the corner box is not empty, in $6$ box #4 is not empty and in $4$ boxes #1-2 are each not empty. Thus the probabilities for the four
boxes not to be empty come out as $\displaystyle \frac{4}{11},\frac{4}{11},\frac{7}{11},\frac{6}{11}.$
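Solution 2's counts can be checked by brute force. The forbidden placements below are read directly off the X's in the table (both chickens together in the small boxes #1–#2, or both in box #4); this encoding is an assumption taken from the array above, not from the original problem statement:

```python
from itertools import product

# Allowed placements of two distinguishable chickens in boxes 1..4,
# matching the X's in the table above.
def allowed(a, b):
    if a in (1, 2) and b in (1, 2):
        return False  # both chickens in the small boxes
    if a == 4 and b == 4:
        return False  # both chickens in box 4
    return True

legit = [(a, b) for a, b in product(range(1, 5), repeat=2) if allowed(a, b)]
print(len(legit))  # 11 legitimate equiprobable placements

# How often each box is non-empty among the legitimate placements:
counts = {box: sum(box in pair for pair in legit) for box in range(1, 5)}
print(counts)  # {1: 4, 2: 4, 3: 7, 4: 6}  ->  4/11, 4/11, 7/11, 6/11
```

The enumeration reproduces the $11$ legitimate placements and the probabilities $\frac{4}{11},\frac{4}{11},\frac{7}{11},\frac{6}{11}$ stated above.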
Why are there two solutions, and which one is correct? The two can't both be correct at the same time. The difference between them is that the first one treats the chickens as indistinguishable, while the second presumes that they are different: chicken #1 and chicken #2.
I do not believe that the problem is suggestive of the idea of two different chickens, although Solution 2 has been posted several times. I included it here to give a reason to remark on that topic.
This is a modification of a problem from P. J. Nahin's Will You Be Alive 10 Years from Now? (Princeton University Press, 2014).
The two solutions above give different answers. Which is right?
Copyright © 1996-2018
Alexander Bogomolny
Math Words That Start With F: Essential Terms Explained
In this article, you’ll find a comprehensive list of math words that start with the letter “F.”
1. Factor – A number that divides another number exactly.
2. Factorial – The product of all positive integers up to a given number.
3. Fibonacci – A sequence where each number is the sum of the two preceding ones.
4. Fraction – A part of a whole; the quotient of two numbers.
5. Function – A relation between a set of inputs and a set of permissible outputs.
6. Formula – A mathematical relationship expressed in symbols.
7. Frequency – The number of occurrences of a repeating event per unit of time.
8. Frustum – A portion of a solid (normally a cone or pyramid) that lies between two parallel planes.
9. Foci – Plural of focus; points used to define conic sections.
10. Face – The flat surface of a three-dimensional shape.
11. Fibonacci Series – Another term for Fibonacci sequence.
12. Forthcoming – Relatively forecasted or anticipated.
13. Farey Sequence – A sequence of fractions in simplest form between 0 and 1.
14. F-test – A statistical test to determine if variances of two populations are equal.
15. Field – A set in which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers.
16. Function Table – A table used to organize input and output values.
17. Fundamental Theorem of Algebra – Every non-constant polynomial equation has at least one complex root.
18. FOIL Method – A technique for multiplying two binomials.
19. Finite – Having a limited number of elements.
20. Fractional – Related to fractions.
21. Fixed Point – A point that is mapped to itself by a function.
22. First Quartile – The median of the lower half of a data set.
23. Finite Set – A set with a finite number of elements.
24. Five Number Summary – Consists of the minimum, first quartile, median, third quartile, and maximum.
25. Floor Function – The function that takes a real number and gives the greatest integer less than or equal to that number.
26. Flux – The rate of flow of a property per unit area.
27. Fourier Series – A way to represent a function as the sum of simple sine waves.
28. Fixed Cost – A cost that does not change with the level of production or sales.
29. Fundamental Theorem of Calculus – Links the concept of the derivative of a function with the concept of the integral.
30. Fourier Transform – A mathematical transform that decomposes functions depending on space or time into functions depending on spatial or temporal frequency.
31. Free Energy – Used to determine the spontaneity of a process.
32. Final Value – The value approached by the function as the input increases without bound.
33. Factorization – The process of breaking down an entity into a product of other entities.
34. Fundamental Matrix – A square matrix solution to a system of linear differential equations.
35. Feasible Region – The set of all possible points that satisfy a system of inequalities.
36. Figure – A geometric form.
37. Fractal – A complex geometric pattern exhibiting self- similarity.
38. Fraction Bar – The line that separates the numerator and denominator in a fraction.
39. First Derivative – Represents the slope or rate of change of a function.
40. Fundamental Group – A concept in algebraic topology that captures information about the basic shape, or holes, of a topological space.
41. Fubini’s Theorem – Provides conditions under which a double integral can be computed as an iterated integral.
42. Floating Point – A way to represent real numbers in computing that can support a wide range of values.
43. Folded Normal Distribution – A probability distribution related to the normal distribution.
44. Finite Difference – An expression of the form f(x + b) – f(x + a).
45. Fractional Exponent – An exponent expressed as a fraction.
46. Fold – To bring two parts or sides together.
47. Fluctuation – A variation in a set of data points.
48. Fuzzy Logic – A form of logic used when information is imprecise.
49. Fixed Variable – A constant value in a mathematical function or equation.
50. Foster Series – A series in electrical network theory representing two- port networks.
51. F- distribution – The probability distribution of the ratio of two independent chi-squared variables divided by their respective degrees of freedom.
52. Friction – The resistance that one surface or object encounters when moving over another.
53. Frame – A coordinate system.
54. Flux Integral – An integral that calculates the flow of a vector field through a surface.
55. Fourier Coefficient – Coefficients of the terms of the Fourier series of a function.
56. Five-point Discrete Laplace Operator – Used in the numerical solution of partial differential equations.
57. Finite Element – A small piece of a larger geometric object used in finite element analysis.
58. Focus – A point used to construct and define a conic section.
59. Fuzzy Set – A set without a sharp boundary; defined by membership function.
60. Full Rank – When the matrix has no linearly dependent rows or columns.
61. Fibonacci Heap – A data structure for priority queue operations.
62. Filament – A slender thread-like object or fiber.
63. Frobenius Endomorphism – A special kind of mapping associated with fields that have prime characteristic.
64. Fitted Value – The estimated value in a regression.
65. Focal Point – The center of interest or activity.
66. Fermat’s Little Theorem – States that if p is a prime number, then for any integer a, the number a^p – a is an integer multiple of p.
67. Field Extension – An enlarged field containing another field.
68. Finite Automaton – A theoretical machine used in computer science.
69. Fundamental Mode – The mode with the lowest frequency.
70. Fermat’s Last Theorem – States that no three positive integers a, b, and c satisfy a^n + b^n = c^n for any integer value of n greater than 2.
71. Free Variable – A variable in a mathematical function not constrained by conditions.
72. Fibonacci Polynomial – A sequence of polynomials defined by a Fibonacci-like recurrence.
73. Feigenbaum Constant – A mathematical constant that is the limiting ratio of each bifurcation interval to the next.
74. Forcing Term – A term added to a differential equation representing an external influence.
75. Fixed-Point Iteration – A method of computing fixed points of iterated functions.
76. Fractional Part – The non-integer part of a number.
77. Fourier Analysis – The study of how general functions can be represented by sums of simpler trigonometric functions.
78. Fundamental Solution – A solution of a differential equation that represents the response to a delta function source term.
79. Fundamental Polygon – A simple polygon used in the study of surface tilings and topology.
80. Flip – To turn over or rotate.
81. Frobenius Number – The largest monetary amount that cannot be obtained using any combination of specified denominations.
82. Farey Neighbor – Two fractions that are neighbors in a Farey sequence.
83. Fixed Step Size – A uniform interval in numerical methods.
84. Fourier Basis – An orthonormal basis of functions used in Fourier series representation.
85. Fault – An incorrect step, process, or data definition.
86. Fixed-point Arithmetic – Arithmetic using fixed precision for the numbers.
87. Fredholm Alternative – A result concerning the solvability of certain integral equations.
88. Free Energy Change – The change in free energy of a system as it undergoes a process.
89. Focus Directrix – A line surrounding a focus point.
90. Fractional KPZ Equation – A fractional extension of the Kardar-Parisi-Zhang equation.
91. Fermat’s Principle – A principle stating that the path taken between two points by a ray of light is the path that can be traversed in the least time.
92. Field Line – A line that represents both the direction and the strength of a force field.
93. Fourier Multiple – Multiple terms in a Fourier series.
94. Focal Length – The distance from the center of a lens to the focal point.
95. Functional Equation – An equation that specifies a function in implicit form.
96. Finite Geometry – A geometry having only a finite number of points.
97. Frobenius Norm – A measure of matrix size based on the sum of the absolute squares of its elements.
98. Fixed Angle – An angle that does not change.
99. Fractional Linear Transformation – A function of the form (ax + b) / (cx + d).
100. Fourier Integral – Representing a function as an integral of trigonometric functions.
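A few of the definitions above — factorial (2), the Fibonacci sequence (3), and Fermat's Little Theorem (66) — are easy to illustrate in a few lines of Python:

```python
import math

# 2. Factorial: the product of all positive integers up to n
print(math.factorial(5))            # 120

# 3. Fibonacci: each number is the sum of the two preceding ones
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(i) for i in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]

# 66. Fermat's Little Theorem: for prime p, a**p - a is a multiple of p
p, a = 7, 3
print((a**p - a) % p == 0)          # True
```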
Animations, GUIs, Visuals · DynamicalSystems.jl
Using the functionality of package extensions in Julia v1.9+, DynamicalSystems.jl provides various visualization tools as soon as the Makie package comes into scope (i.e., when using Makie or any of
its backends like GLMakie).
The main functionality is interactive_trajectory that allows building custom GUI apps for visualizing the time evolution of dynamical systems. The remaining GUI applications in this page are
dedicated to more specialized scenarios.
The following GUI is obtained with the function interactive_trajectory_timeseries and the code snippet below it!
using DynamicalSystems, GLMakie, ModelingToolkit
# Import canonical time from MTK, however use the unitless version
using ModelingToolkit: t_nounits as t
# Define the variables and parameters in symbolic format
@parameters begin
a = 0.29
b = 0.14
c = 4.52
d = 1.0
@variables begin
x(t) = 10.0
y(t) = 0.0
z(t) = 1.0
nlt(t) # nonlinear term
# Create the equations of the model
eqs = [
Differential(t)(x) ~ -y - z,
Differential(t)(y) ~ x + a*y,
Differential(t)(z) ~ b + nlt - z*c,
nlt ~ d*z*x, # observed variable
# Create the model via ModelingToolkit
@named roessler = ODESystem(eqs, t)
# Do not split parameters so that integer indexing can be used as well
model = structural_simplify(roessler; split = false)
# Cast it into an `ODEProblem` and then into a `DynamicalSystem`.
# Due to low-dimensionality it is preferred to cast into out of place
prob = ODEProblem{false}(model, nothing, (0.0, Inf); u0_constructor = x->SVector(x...))
ds = CoupledODEs(prob)
# If you have "lost" the model, use:
model = referrenced_sciml_model(ds)
# Define which parameters will be interactive during the simulation
parameter_sliders = Dict(
# can use integer indexing
1 => 0:0.01:1,
# the global scope symbol
b => 0:0.01:1,
# the symbol obtained from the MTK model
model.c => 0:0.01:10,
# or a `Symbol` with same name as the parameter
# (which is the easiest and recommended way)
:d => 0.8:0.01:1.2,
# Define what variables will be visualized as timeseries
power(u) = sqrt(u[1]*u[1] + u[2]*u[2])
observables = [
1, # can use integer indexing
z, # MTK state variable (called "unknown")
model.nlt, # MTK observed variable
:y, # `Symbol` instance with same name as symbolic variable
power, # arbitrary function of the state
x^2 - y^2, # arbitrary symbolic expression of symbolic variables
# Define what variables will be visualized as state space trajectory
# same as above, any indexing works, but ensure to make the vector `Any`
# so that integers are not converted to symbolic variables
idxs = Any[1, y, 3]
u0s = [
# we can specify dictionaries, each mapping the variable to its value
# un-specified variables get the value they currently have in `ds`
Dict(:x => -4, :y => -4, :z => 0.1),
Dict(:x => 4, :y => 3, :z => 0.1),
Dict(:x => -5.72),
Dict(:x => 5.72, :y => 0.28, :z => 0.21),
update_theme!(fontsize = 14)
tail = 1000
fig, dsobs = interactive_trajectory_timeseries(ds, observables, u0s;
parameter_sliders, Δt = 0.01, tail, idxs,
figure = (size = (1100, 650),)
step!(dsobs, 2tail)
interactive_trajectory_timeseries(ds::DynamicalSystem, fs, [, u0s]; kwargs...) → fig, dsobs
Create a Makie Figure to visualize trajectories and timeseries of observables of ds. This Figure can also be used as an interactive GUI to enable interactive control over parameters and time
evolution. It can also be used to create videos, as well as customized animations, see below.
fs is a Vector of "indices to observe", i.e., anything that can be given to observe_state. Each observation index will make a timeseries plot. u0s is a Vector of initial conditions. Each is evolved
with a unique color and displayed both as a trajectory in state space and as an observed timeseries. Elements of u0 can be either Vector{Real} encoding a full state or Dict to partially set a state
from current state of ds (same as in set_state!).
The trajectories from the initial conditions in u0s are all evolved and visualized in parallel. By default only the current state of the system is used. u0s can be anything accepted by a
Return fig, dsobs::DynamicalSystemObservable. fig is the created Figure. dsobs facilitates the creation of custom animations and/or interactive applications, see the custom animations section below.
See also interactive_trajectory.
Interactivity and time stepping keywords
GUI functionality is possible when the plotting backend is GLMakie. Do using GLMakie; GLMakie.activate!() to ensure this is the chosen backend.
• add_controls = true: If true, below the state space axis some buttons for animating the trajectories live are added:
□ reset: resets the parallel trajectories to their initial conditions
□ run: when clicked it evolves the trajectories forwards in time indefinitely. click again to stop the evolution.
□ step: when clicked it evolves the trajectories forwards in time for the amount of steps chosen by the slider to its right.
The plotted trajectories can always be evolved manually using the custom animation setup that we describe below; add_controls only concerns the buttons and interactivity added to the created figure.
• parameter_sliders = nothing: If given, it must be a dictionary, mapping parameter indices (any valid index that can be given to set_parameter!) to ranges of parameter values. Each combination of
index and range becomes a slider that can be interactively controlled to alter a system parameter on the fly during time evolution. Below the parameter sliders, three buttons are added for GUI
□ update: when clicked the chosen parameter values are propagated into the system
□ u.r.s.: when clicked it is equivalent with clicking in order: "update", "reset", "step".
□ reset p: when clicked it resets the parameters to their original values
Parameters can also be altered using the custom animation setup that we describe below; parameter_sliders only concerns the buttons and interactivity added to the created figure.
• parameter_names = Dict(keys(ps) .=> string.(keys(ps))): Dictionary mapping parameter keys to labels. Only used if parameter_sliders is given.
• Δt: Time step of time evolution. Defaults to 1 for discrete time, 0.01 for continuous time systems. For internal simplicity, continuous time dynamical systems are evolved non-adaptively with
constant step size equal to Δt.
• pause = nothing: If given, it must be a real number. This number is given to the sleep function, which is called between each plot update. Useful when time integration is computationally
inexpensive and animation proceeds too fast.
• starting_step = 1: the starting value of the "step" slider.
Visualization keywords
• colors: The color for each initial condition (and resulting trajectory and timeseries). Needs to be a Vector of equal length as u0s.
• tail = 1000: Length of plotted trajectory (in units of Δt).
• fade = 0.5: The trajectories in state space are faded towards full transparency. The alpha channel (transparency) scales as t^fade with t ranging from 0 to 1 (1 being the end of the trajectory).
Use fade = 1.0 for linear fading or fade = 0 for no fading. Current default makes fading progress faster at trajectory start and slower at trajectory end.
• markersize = 15: Size of markers of trajectory endpoints. For discrete systems half of that is used for the trajectory tail.
• plotkwargs = NamedTuple(): A named tuple of keyword arguments propagated to the state space plot (lines for continuous, scatter for discrete systems). plotkwargs can also be a vector of named
tuples, in which case each initial condition gets different arguments.
Statespace trajectory keywords
• statespace_axis = true: Whether to create and display an axis for the trajectory plot.
• idxs = 1:min(length(u0s[1]), 3): Which variables to plot in a state space trajectory. Any index that can be given to observe_state can be given here. If three indices are given, the trajectory
plot is also 3D, otherwise 2D.
• lims: A tuple of tuples (min, max) for the axis limits. If not given, they are automatically deduced by evolving each of u0s 1000 Δt units and picking most extreme values (limits are not adjusted
by default during the live animations).
• figure, axis: both can be named tuples with arbitrary keywords propagated to the generation of the Figure and state space Axis instances.
Timeseries keywords
• linekwargs = NamedTuple(): Extra keywords propagated to the timeseries plots. Can also be a vector of named tuples, each one for each unique initial condition.
• timeseries_names: A vector of strings with length equal to fs giving names to the y-labels of the timeseries plots.
• timeseries_ylims: A vector of 2-tuples for the lower and upper limits of the y-axis of each timeseries plot. If not given it is deduced automatically similarly to lims.
• timeunit = 1: the units of time, if any. Sets the units of the timeseries x-axis.
• timelabel = "time": label of the x-axis of the timeseries plots.
Custom animations
The second return argument dsobs is a DynamicalSystemObservable. The trajectories plotted in the main panel are linked to observables that are fields of the dsobs. Specifically, the field
dsobs.state_obserable is an observable containing the final state of each of the trajectories, i.e., a vector of vectors like u0s. dsobs.param_observable is an observable of the system parameters.
These observables are triggered by the interactive GUI buttons (the first two when the system is stepped in time, the last one when the parameters are updated). However, these observables, and hence
the corresponding plotted trajectories that are mapped from these observables, can be updated via the formal API of DynamicalSystem:
step!(dsobs, n::Int = 1)
will step the system for n steps of Δt time, and only update the plot on the last step. set_parameter!(dsobs, index, value) will update the system parameter and then trigger the parameter observable.
Lastly, set_state!(dsobs, new_u [, i]) will set the i-th system state and clear the trajectory plot to the new initial condition.
This information can be used to create custom animations and/or interactive apps. In principle, the only thing a user has to do is create new observables from the existing ones using e.g. the on
function and plot these new observables. Various examples are provided in the online documentation.
interactive_trajectory(ds::DynamicalSystem [, u0s]; kwargs...) → fig, dsobs
Same as interactive_trajectory_timeseries, but does not plot any timeseries only the trajectory in a (projected) state space.
using DynamicalSystems, CairoMakie
F, G, a, b = 6.886, 1.347, 0.255, 4.0
ds = PredefinedDynamicalSystems.lorenz84(; F, G, a, b)
u1 = [0.1, 0.1, 0.1] # periodic
u2 = u1 .+ 1e-3 # fixed point
u3 = [-1.5, 1.2, 1.3] .+ 1e-9 # chaotic
u4 = [-1.5, 1.2, 1.3] .+ 21e-9 # chaotic 2
u0s = [u1, u2, u3, u4]
fig, dsobs = interactive_trajectory(
ds, u0s; tail = 1000, fade = true,
idxs = [1,3],
We could interact with this plot live, like in the example video above. We can also progress the visuals via code as instructed by interactive_trajectory utilizing the second returned argument dsobs:
step!(dsobs, 2000)
(if you progress the visuals via code you probably want to give add_controls = false as a keyword to interactive_trajectory)
In this advanced example we add plot elements to the provided figure, and also utilize the parameter observable in dsobs to add animated plot elements that update whenever a parameter updates. The
final product of this snippet is in fact the animation at the top of the docstring of interactive_trajectory_panel.
We start with an interactive trajectory panel of the Lorenz63 system, in which we also add sliders for interactively changing parameter values
using DynamicalSystems, CairoMakie
ps = Dict(
1 => 1:0.1:30,
2 => 10:0.1:50,
    3 => 1:0.01:10.0,
)
pnames = Dict(1 => "σ", 2 => "ρ", 3 => "β")
lims = ((-30, 30), (-30, 30), (0, 100))
ds = PredefinedDynamicalSystems.lorenz()
u1 = [10,20,40.0]
u3 = [20,10,40.0]
u0s = [u1, u3]
fig, dsobs = interactive_trajectory(
    ds, u0s; parameter_sliders = ps, pnames, lims
)
If one now interactively moved the parameter sliders (when using GLMakie) and then clicked update, the system parameters would be updated accordingly. We can also add new plot elements that depend
on the parameter values using the dsobs:
# Fixed points of the lorenz system (without the origin)
lorenz_fixedpoints(ρ,β) = [
Point3f(sqrt(β*(ρ-1)), sqrt(β*(ρ-1)), ρ-1),
    Point3f(-sqrt(β*(ρ-1)), -sqrt(β*(ρ-1)), ρ-1),
]
# add an observable trigger to the system parameters
fpobs = map(dsobs.param_observable) do params
σ, ρ, β = params
    return lorenz_fixedpoints(ρ, β)
end
# If we want to plot directly on the trajectory axis, we need to
# extract it from the figure. The first entry of the figure is a grid layout
# containing the axis and the GUI controls. The [1,1] entry of the layout
# is the axis containing the trajectory plot
ax = content(fig[1,1][1,1])
scatter!(ax, fpobs; markersize = 10, marker = :diamond, color = :red)
Now, after the live animation "run" button is pressed, we can interactively change the parameter ρ and click update, in which case both the dynamical system's ρ parameter and the location of the
red diamonds will change.
We can also change the parameters non-interactively using set_parameter!
set_parameter!(dsobs, 2, 50.0)
set_parameter!(dsobs, 2, 10.0)
Note that the sliders themselves did not change, as this functionality is for "offline" creation of animations where one doesn't interact with sliders. The keyword add_controls should be given as
false in such scenarios.
using DynamicalSystems, CairoMakie
using LinearAlgebra: norm, dot
# Dynamical system and initial conditions
ds = Systems.thomas_cyclical(b = 0.2)
u0s = [[3, 1, 1.], [1, 3, 1.], [1, 1, 3.]] # must be a vector of states!
# Observables we get timeseries of:
function distance_from_symmetry(u)
v = SVector{3}(1/√3, 1/√3, 1/√3)
t = dot(v, u)
    return norm(u - t*v)
end
fs = [3, distance_from_symmetry]
fig, dsobs = interactive_trajectory_timeseries(ds, fs, u0s;
idxs = [1, 2], Δt = 0.05, tail = 500,
lims = ((-2, 4), (-2, 4)),
timeseries_ylims = [(-2, 4), (0, 5)],
add_controls = false,
    figure = (size = (800, 400),)
)
we can progress the simulation:
step!(dsobs, 200)
or we can even make a nice video out of it:
record(fig, "thomas_cycl.mp4", 1:100) do i
    step!(dsobs, 10)
end
interactive_cobweb(ds::DiscreteDynamicalSystem, prange, O::Int = 3; kwargs...)
Launch an interactive application for exploring cobweb diagrams of 1D discrete dynamical systems. Two sliders control the length of the plotted trajectory and the current parameter value. The
parameter values are obtained from the given prange.
In the cobweb plot, higher-order iterates of the dynamic rule f are plotted as well, starting from order 1 all the way up to the given order O. Both the trajectory in the cobweb and any iterate of
f can be turned off using the buttons.
• fkwargs = [(linewidth = 4.0, color = randomcolor()) for i in 1:O]: plotting keywords for each of the plotted iterates of f
• trajcolor = :black: color of the trajectory
• pname = "p": name of the parameter slider
• pindex = 1: parameter index
• xmin = 0, xmax = 1: limits the state of the dynamical system can take
• Tmax = 1000: maximum trajectory length
• x0s = range(xmin, xmax; length = 101): Possible values for the x0 slider.
The animation at the top of this section was done with
using DynamicalSystems, GLMakie
# the second range is a convenience for intermittency example of logistic
rrange = 1:0.001:4.0
# rrange = (rc = 1 + sqrt(8); [rc, rc - 1e-5, rc - 1e-3])
lo = Systems.logistic(0.4; r = rrange[1])
interactive_cobweb(lo, rrange, 5)
Notice that orbit diagrams and bifurcation diagrams are different things in DynamicalSystems.jl
interactive_orbitdiagram(
    ds::DynamicalSystem, p_index, pmin, pmax, i::Int = 1;
    u0 = nothing, parname = "p", title = ""
)
Open an interactive application for exploring orbit diagrams (ODs) of discrete time dynamical systems. Requires DynamicalSystems.
In essence, the function presents the output of orbitdiagram of the i-th variable of ds, and allows interactively zooming into it.
Keywords control the name of the parameter, the initial state (used for any parameter) or whether to add a title above the orbit diagram.
The application is separated in the "OD plot" (left) and the "control panel" (right). On the OD plot you can interactively click and drag with the left mouse button to select a region in the OD. This
region is then re-computed at a higher resolution.
The options at the control panel are straight-forward, with
• n amount of steps recorded for the orbit diagram (not all are in the zoomed region!)
• t transient steps before starting to record steps
• d density of x-axis (the parameter axis)
• α alpha value for the plotted points.
Notice that at each update n*t*d steps are taken. You have to press update after changing these parameters. Press reset to bring the OD back to its original state (and variable). Pressing back
will go back through the history of your exploration. History is stored when the "update" button is pressed or a region is zoomed in.
You can even decide which variable to get the OD for by choosing one of the variables from the wheel! Because the y-axis limits can't be known when changing variable, they reset to the size of the
selected variable.
Accessing the data
What is plotted on the application window is a true orbit diagram, not a plotting shorthand. This means that all data are obtainable and usable directly. Internally we always scale the orbit diagram
to [0,1]² (to allow Float64 precision even though plotting is Float32-based). This however means that it is necessary to transform the data back to real scale. This is done through the function scaleod
which accepts the 5 arguments returned from the current function:
figure, oddata = interactive_orbitdiagram(...)
ps, us = scaleod(oddata)
scaleod(oddata) -> ps, us
Given the return values of interactive_orbitdiagram, produce orbit diagram data scaled correctly in data units. Return the data as a vector of parameter values and a vector of corresponding
variable values.
The animation at the top of this section was done with
i = p_index = 1
ds, p_min, p_max, parname = Systems.henon(), 0.8, 1.4, "a"
t = "orbit diagram for the Hénon map"
oddata = interactive_orbitdiagram(ds, p_index, p_min, p_max, i;
parname = parname, title = t)
ps, us = scaleod(oddata)
interactive_poincaresos(cds, plane, idxs, complete; kwargs...)
Launch an interactive application for exploring a Poincaré surface of section (PSOS) of the continuous dynamical system cds. Requires DynamicalSystems.
The plane can only be the Tuple type accepted by DynamicalSystems.poincaresos, i.e. (i, r) for the ith variable crossing the value r. idxs gives the two indices of the variables to be displayed,
since the PSOS plot is always a 2D scatterplot. I.e. idxs = (1, 2) will plot the 1st versus 2nd variable of the PSOS. It follows that plane[1] ∉ idxs must be true.
complete is a three-argument function that completes the new initial state during interactive use, see below.
The function returns: figure, laststate with the latter being an observable containing the latest initial state.
Keyword Arguments
• direction, rootkw : Same use as in DynamicalSystems.poincaresos.
• tfinal = (1000.0, 10.0^4) : A 2-element tuple for the range of values for the total integration time (chosen interactively).
• color : A function of the system's initial condition, that returns a color to plot the new points with. The color must be RGBf/RGBAf. A random color is chosen by default.
• labels = ("u₁" , "u₂") : Scatter plot labels.
• scatterkwargs = (): Named tuple of keywords passed to scatter.
• diffeq = NamedTuple() : Any extra keyword arguments are passed into init of DiffEq.
The application is a standard scatterplot, which shows the PSOS of the system, initially using the system's u0. Two sliders control the total evolution time and the size of the marker points (which
is always in pixels).
Upon clicking within the bounds of the scatter plot your click is transformed into a new initial condition, which is further evolved and its PSOS is computed and then plotted into the scatter plot.
Your click is transformed into a full D-dimensional initial condition through the function complete. The first two arguments of the function are the positions of the click on the PSOS. The third
argument is the value of the variable the PSOS is defined on. To be more exact, this is how the function is called:
x, y = mouseclick; z = plane[2]
newstate = complete(x, y, z)
The complete function can throw an error for ill-conditioned x, y, z. This will be properly handled instead of breaking the application. This newstate is also given to the function color that gets a
new color for the new points.
To generate the animation at the start of this section you can run
using InteractiveDynamics, GLMakie, OrdinaryDiffEq, DynamicalSystems
diffeq = (alg = Vern9(), abstol = 1e-9, reltol = 1e-9)
hh = Systems.henonheiles()
potential(x, y) = 0.5(x^2 + y^2) + (x^2*y - (y^3)/3)
energy(x,y,px,py) = 0.5(px^2 + py^2) + potential(x,y)
const E = energy(get_state(hh)...)
function complete(y, py, x)
V = potential(x, y)
Ky = 0.5*(py^2)
Ky + V ≥ E && error("Point has more energy!")
px = sqrt(2(E - V - Ky))
ic = [x, y, px, py]
    return ic
end
plane = (1, 0.0) # first variable crossing 0
# Coloring points using the Lyapunov exponent
function λcolor(u)
λ = lyapunovs(hh, 4000; u0 = u)[1]
λmax = 0.1
    return RGBf(0, 0, clamp(λ/λmax, 0, 1))
end
state, scene = interactive_poincaresos(hh, plane, (2, 4), complete;
labels = ("q₂" , "p₂"), color = λcolor, diffeq...)
interactive_poincaresos_scan(A::StateSpaceSet, j::Int; kwargs...)
interactive_poincaresos_scan(As::Vector{StateSpaceSet}, j::Int; kwargs...)
Launch an interactive application for scanning a Poincare surface of section of A like a "brain scan", where the plane that defines the section can be arbitrarily moved around via a slider. Return
figure, ax3D, ax2D.
The input dataset must be 3 dimensional, and here the crossing plane is always chosen to be when the j-th variable of the dataset crosses a predefined value. The slider automatically gets all
possible values the j-th variable can obtain.
If given multiple datasets, the keyword colors attributes a color to each one, e.g. colors = [JULIADYNAMICS_COLORS[mod1(i, 6)] for i in 1:length(As)].
The keywords linekw, scatterkw are named tuples that are propagated as keyword arguments to the line and scatter plots respectively, while the keyword direction = -1 is propagated to
DynamicalSystems.poincaresos.
The animation at the top of this page was done with
using GLMakie, DynamicalSystems
using OrdinaryDiffEq: Vern9
diffeq = (alg = Vern9(), abstol = 1e-9, reltol = 1e-9)
ds = PredefinedDynamicalSystems.henonheiles()
ds = CoupledODEs(ds, diffeq)
u0s = [
[0.0, -0.25, 0.42081, 0.0],
[0.0, 0.1, 0.5, 0.0],
    [0.0, -0.31596, 0.354461, 0.0591255]
]
# inputs
trs = [trajectory(ds, 10000, u0)[1][:, SVector(1,2,3)] for u0 ∈ u0s]
j = 2 # the dimension of the plane
interactive_poincaresos_scan(trs, j; linekw = (transparency = true,))
MultiArray Concept
The MultiArray concept defines an interface to hierarchically nested containers. It specifies operations for accessing elements, traversing containers, and creating views of array data. MultiArray
defines a flexible memory model that accommodates a variety of data layouts.
At each level (or dimension) of a MultiArray's container hierarchy lie a set of ordered containers, each of which contains the same number and type of values. The depth of this container hierarchy is
the MultiArray's dimensionality. MultiArray is recursively defined; the containers at each level of the container hierarchy model MultiArray as well. While each dimension of a MultiArray has its own
size, the list of sizes for all dimensions defines the shape of the entire MultiArray. At the base of this hierarchy lie 1-dimensional MultiArrays. Their values are the contained objects of interest
and not part of the container hierarchy. These are the MultiArray's elements.
Like other container concepts, MultiArray exports iterators to traverse its values. In addition, values can be addressed directly using the familiar bracket notation.
MultiArray also specifies routines for creating specialized views. A view lets you treat a subset of the underlying elements in a MultiArray as though it were a separate MultiArray. Since a view
refers to the same underlying elements, changes made to a view's elements will be reflected in the original MultiArray. For example, given a 3-dimensional "cube" of elements, a 2-dimensional slice
can be viewed as if it were an independent MultiArray. Views are created using index_gen and index_range objects. index_ranges denote elements from a certain dimension that are to be included in a
view. index_gen aggregates range data and performs bookkeeping to determine the view type to be returned. MultiArray's operator[] must be passed the result of N chained calls to
index_gen::operator[], i.e.

indices[a0][a1]...[aN];

where N is the MultiArray's dimensionality and indices an object of type index_gen.
where N is the MultiArray's dimensionality and indices an object of type index_gen. The view type is dependent upon the number of degenerate dimensions specified to index_gen. A degenerate dimension
occurs when a single index is specified to index_gen for a certain dimension. For example, if indices is an object of type index_gen, then the following example:

indices[index_range(0,5)][2][index_range(0,4)]

has a degenerate second dimension. The view generated from the above specification will have 2 dimensions with shape 5 x 4. If the "2" above were replaced with another index_range object, for
example index_range(2,4), then the view would have 3 dimensions.
MultiArray exports information regarding the memory layout of its contained elements. Its memory model for elements is completely defined by 4 properties: the origin, shape, index bases, and strides.
The origin is the address in memory of the element accessed as a[0][0]...[0], where a is a MultiArray. The shape is a list of numbers specifying the size of containers at each dimension. For example,
the first extent is the size of the outermost container, the second extent is the size of its subcontainers, and so on. The index bases are a list of signed values specifying the index of the first
value in a container. All containers at the same dimension share the same index base. Note that since positive index bases are possible, the origin need not exist in order to determine the location
in memory of the MultiArray's elements. The strides determine how index values are mapped to memory offsets. They accommodate a number of possible element layouts. For example, the elements of a
2-dimensional array can be stored by row (i.e., the elements of each row are stored contiguously) or by column (i.e., the elements of each column are stored contiguously).
Two concept checking classes for the MultiArray concepts (ConstMultiArrayConcept and MutableMultiArrayConcept) are in the namespace boost::multi_array_concepts in
<boost/multi_array/concept_checks.hpp>.
What follows are the descriptions of symbols that will be used to describe the MultiArray interface.
Table 27.1. Notation
A A type that is a model of MultiArray
a,b Objects of type A
NumDims The numeric dimension parameter associated with A.
Dims Some numeric dimension parameter such that 0<Dims<NumDims.
indices An object created by some number of chained calls to index_gen::operator[](index_range).
index_list An object whose type models Collection
idx A signed integral value.
tmp An object of type boost::array<index,NumDims>
Table 27.2. Associated Types
Type Description
value_type This is the value type of the container. If NumDims == 1, then this is element. Otherwise, this is the value type of the immediately nested containers.
reference This is the reference type of the contained value. If NumDims == 1, then this is element&. Otherwise, this is the same type as template subarray<NumDims-1>::type.
const_reference This is the const reference type of the contained value. If NumDims == 1, then this is const element&. Otherwise, this is the same type as template const_subarray<NumDims-1>::type.
size_type This is an unsigned integral type. It is primarily used to specify array shape.
difference_type This is a signed integral type used to represent the distance between two iterators. It is the same type as std::iterator_traits<iterator>::difference_type.
iterator This is an iterator over the values of A. If NumDims == 1, then it models Random Access Iterator. Otherwise it models Random Access Traversal Iterator, Readable Iterator, Writable Iterator, and Output Iterator.
const_iterator This is the const iterator over the values of A.
reverse_iterator This is the reversed iterator, used to iterate backwards over the values of A.
const_reverse_iterator This is the reversed const iterator, used to iterate backwards over the values of A.
element This is the type of objects stored at the base of the hierarchy of MultiArrays. It is the same as template subarray<1>::value_type
index This is a signed integral type used for indexing into A. It is also used to represent strides and index bases.
index_gen This type is used to create a tuple of index_ranges passed to operator[] to create an array_view<Dims>::type object.
index_range This type specifies a range of indices over some dimension of a MultiArray. This range will be visible through an array_view<Dims>::type object.
template subarray<Dims>::type This is the subarray type with Dims dimensions. It is the reference type of the (NumDims - Dims) dimension of A and also models MultiArray.
template const_subarray<Dims>::type This is the const subarray type.
template array_view<Dims>::type This is the view type with Dims dimensions. It is returned by calling operator[](indices). It models MultiArray.
template const_array_view<Dims>::type This is the const view type with Dims dimensions.
Table 27.3. Valid Expressions
Expression Return type Semantics
A::dimensionality size_type This compile-time constant represents the number of dimensions of the array (note that A::dimensionality == NumDims).
a.shape() const size_type* This returns a list of NumDims elements specifying the extent of each array dimension.
a.strides() const index* This returns a list of NumDims elements specifying the stride associated with each array dimension. When accessing values, strides are used to calculate an element's location in memory.
a.index_bases() const index* This returns a list of NumDims elements specifying the numeric index of the first element for each array dimension.
a.origin() element* if a is mutable, const element* otherwise. This returns the address of the element accessed by the expression a[0][0]...[0]. If the index bases are positive, this element won't exist, but the address can still be used to locate a valid element given its indices.
a.num_dimensions() size_type This returns the number of dimensions of the array (note that a.num_dimensions() == NumDims).
a.num_elements() size_type This returns the number of elements contained in the array. It is equivalent to the following code:
std::accumulate(a.shape(), a.shape() + a.num_dimensions(), size_type(1), std::multiplies<size_type>());
a.size() size_type This returns the number of values contained in a. It is equivalent to a.shape()[0];
a(index_list) element& if a is mutable, const element& otherwise. This expression accesses a specific element of a. index_list is the unique set of indices that address the element returned. It is equivalent to the following code (disregarding intermediate temporaries):
// multiply indices by strides
std::transform(index_list.begin(), index_list.end(), a.strides(), tmp.begin(), std::multiplies<index>());
// add the sum of the products to the origin
*std::accumulate(tmp.begin(), tmp.end(), a.origin());
a.begin() iterator if a is mutable, const_iterator otherwise. This returns an iterator pointing to the beginning of a.
a.end() iterator if a is mutable, const_iterator otherwise. This returns an iterator pointing to the end of a.
a.rbegin() reverse_iterator if a is mutable, const_reverse_iterator otherwise. This returns a reverse iterator pointing to the beginning of a reversed.
a.rend() reverse_iterator if a is mutable, const_reverse_iterator otherwise. This returns a reverse iterator pointing to the end of a reversed.
a[idx] reference if a is mutable, const_reference otherwise. This returns a reference type that is bound to the index idx value of a. Note that if i is the index base for this dimension, the above expression returns the (idx-i)th element (counting from zero). The expression is equivalent to *(a.begin()+idx-a.index_bases()[0]);.
a[indices] array_view<Dims>::type if a is mutable, const_array_view<Dims>::type otherwise. This expression generates a view of the array determined by the index_range and index values used to construct indices.
a == b bool This performs a lexicographical comparison of the values of a and b. The element type must model EqualityComparable for this expression to be valid.
a < b bool This performs a lexicographical comparison of the values of a and b. The element type must model LessThanComparable for this expression to be valid.
a <= b bool This performs a lexicographical comparison of the values of a and b. The element type must model EqualityComparable and LessThanComparable for this expression to be valid.
a > b bool This performs a lexicographical comparison of the values of a and b. The element type must model EqualityComparable and LessThanComparable for this expression to be valid.
a >= b bool This performs a lexicographical comparison of the values of a and b. The element type must model LessThanComparable for this expression to be valid.
begin() and end() execute in amortized constant time. size() executes in at most linear time in the MultiArray's size.
Table 27.4. Invariants
Valid range [a.begin(),a.end()) is a valid range.
Range size a.size() == std::distance(a.begin(),a.end());.
Completeness Iteration through the range [a.begin(),a.end()) will traverse across every value_type of a.
Accessor Equivalence Calling a[a1][a2]...[aN] where N==NumDims yields the same result as calling a(index_list), where index_list is a Collection containing the values a1...aN.
The following MultiArray associated types define the interface for creating views of existing MultiArrays. Their interfaces and roles in the concept are described below.
index_range objects represent half-open strided intervals. They are aggregated (using an index_gen object) and passed to a MultiArray's operator[] to create an array view. When creating a view, each
index_range denotes a range of valid indices along one dimension of a MultiArray. Elements that are accessed through the set of ranges specified will be included in the constructed view. In some
cases, an index_range is created without specifying start or finish values. In those cases, the object is interpreted to start at the beginning of the MultiArray dimension and extend to its end.
index_range objects can be constructed and modified several ways in order to allow convenient and clear expression of a range of indices. To specify ranges, index_range supports a set of
constructors, mutating member functions, and a novel specification involving inequality operators. Using inequality operators, a half open range [5,10) can be specified as follows:
5 <= index_range() < 10;
4 < index_range() <= 9;
and so on. The following describes the index_range interface.
Table 27.6. Associated Types
Type Description
index This is a signed integral type. It is used to specify the start, finish, and stride values.
size_type This is an unsigned integral type. It is used to report the size of the range an index_range represents.
Table 27.7. Valid Expressions
Expression Return type Semantics
index_range(idx1,idx2,idx3) index_range This constructs an index_range representing the interval [idx1,idx2) with stride idx3.
index_range(idx1,idx2) index_range This constructs an index_range representing the interval [idx1,idx2) with unit stride. It is equivalent to index_range(idx1,idx2,1).
index_range() index_range This constructs an index_range with unspecified start and finish values.
i.start(idx) index& This sets the start index of i to idx.
i.finish(idx) index& This sets the finish index of i to idx.
i.stride(idx) index& This sets the stride length of i to idx.
i.start() index This returns the start index of i.
i.finish() index This returns the finish index of i.
i.stride() index This returns the stride length of i.
i.get_start(idx) index If i specifies a start value, this is equivalent to i.start(). Otherwise it returns idx.
i.get_finish(idx) index If i specifies a finish value, this is equivalent to i.finish(). Otherwise it returns idx.
i.size(idx) size_type If i specifies both start and finish values, this is equivalent to (i.finish()-i.start())/i.stride(). Otherwise it returns idx.
i < idx index_range This is another syntax for specifying the finish value. This notation does not include idx in the range of valid indices. It is equivalent to index_range(i.start(), idx, i.stride()).
i <= idx index_range This is another syntax for specifying the finish value. This notation includes idx in the range of valid indices. It is equivalent to index_range(i.start(), idx + 1, i.stride()).
idx < i index_range This is another syntax for specifying the start value. This notation does not include idx in the range of valid indices. It is equivalent to index_range(idx + 1, i.finish(), i.stride()).
idx <= i index_range This is another syntax for specifying the start value. This notation includes idx in the range of valid indices. It is equivalent to index_range(idx, i.finish(), i.stride()).
i + idx index_range This expression shifts the start and finish values of i up by idx. It is equivalent to index_range(i.start()+idx, i.finish()+idx, i.stride()).
i - idx index_range This expression shifts the start and finish values of i down by idx. It is equivalent to index_range(i.start()-idx, i.finish()-idx, i.stride()).
index_gen aggregates index_range objects in order to specify view parameters. Chained calls to operator[] store range and dimension information used to instantiate a new view into a MultiArray.
Table 27.8. Notation
Dims,Ranges Unsigned integral values.
x An object of type template gen_type<Dims,Ranges>::type.
i An object of type index_range.
idx Objects of type index.
Table 27.9. Associated Types
Type Description
index This is a signed integral type. It is used to specify degenerate dimensions.
size_type This is an unsigned integral type. It is used to report the size of the range an index_range represents.
template gen_type<Dims,Ranges>::type This type generator names the result of Dims chained calls to index_gen::operator[]. The Ranges parameter is determined by the number of degenerate ranges specified (i.e. calls to operator[](index)). Note that index_gen and gen_type<0,0>::type are the same type.
Table 27.10. Valid Expressions
Expression Return type Semantics
index_gen() gen_type<0,0>::type This constructs an index_gen object. This object can then be used to generate tuples of index_range values.
x[i] gen_type<Dims+1,Ranges+1>::type Returns a new object containing all previous index_range objects in addition to i. Chained calls to operator[] are the means by which index_range objects are aggregated.
x[idx] gen_type<Dims,Ranges+1>::type Returns a new object containing all previous index_range objects in addition to a degenerate range, index_range(idx,idx). Note that this is NOT equivalent to x[index_range(idx,idx)], which will return an object of type gen_type<Dims+1,Ranges+1>::type.
Practical Dependent Types: Type-Safe Neural Networks
Kiev Functional Programming, Aug 16, 2017
The Big Question
The big question of Haskell: What can types do for us?
Dependent types are simply the extension of this question, pushing the power of types further.
Artificial Neural Networks
Feed-forward ANN architecture
Parameterized functions
Each layer receives an input vector, x:ℝ^n, and produces an output y:ℝ^m.
They are parameterized by a weight matrix W:ℝ^(m×n) (an m×n matrix) and a bias vector b:ℝ^m, and the result is (for some activation function f):

y = f(Wx + b)

A neural network would take a vector through many layers.
Networks in Haskell
data Weights = W { wBiases :: !(Vector Double) -- n
, wNodes :: !(Matrix Double) -- n x m
} -- "m to n" layer
data Network :: Type where
O :: !Weights -> Network
(:~) :: !Weights -> !Network -> Network
infixr 5 :~
Generating them
randomWeights :: MonadRandom m => Int -> Int -> m Weights
randomWeights i o = do
seed1 :: Int <- getRandom
seed2 :: Int <- getRandom
let wB = randomVector seed1 Uniform o * 2 - 1
wN = uniformSample seed2 o (replicate i (-1, 1))
return $ W wB wN
randomNet :: MonadRandom m => Int -> [Int] -> Int -> m Network
randomNet i [] o = O <$> randomWeights i o
randomNet i (h:hs) o = (:~) <$> randomWeights i h <*> randomNet h hs o
Haskell Heart Attacks
• What if we mixed up the dimensions for randomWeights?
• What if the user mixed up the dimensions for randomWeights?
• What if layers in the network are incompatible?
• How does the user know what size vector a network expects?
• Is our runLayer and runNet implementation correct?
Backprop (Outer layer)
go :: Vector Double -- ^ input vector
-> Network -- ^ network to train
-> (Network, Vector Double)
-- handle the output layer
go !x (O w@(W wB wN))
= let y = runLayer w x
o = logistic y
-- the gradient (how much y affects the error)
-- (logistic' is the derivative of logistic)
dEdy = logistic' y * (o - target)
-- new bias weights and node weights
wB' = wB - scale rate dEdy
wN' = wN - scale rate (dEdy `outer` x)
w' = W wB' wN'
-- bundle of derivatives for next step
dWs = tr wN #> dEdy
in (O w', dWs)
Backprop (Inner layer)
-- handle the inner layers
go !x (w@(W wB wN) :~ n)
= let y = runLayer w x
o = logistic y
-- get dWs', bundle of derivatives from rest of the net
(n', dWs') = go o n
-- the gradient (how much y affects the error)
dEdy = logistic' y * dWs'
-- new bias weights and node weights
wB' = wB - scale rate dEdy
wN' = wN - scale rate (dEdy `outer` x)
w' = W wB' wN'
-- bundle of derivatives for next step
dWs = tr wN #> dEdy
in (w' :~ n', dWs)
Compiler, O Where Art Thou?
• Haskell is all about the compiler helping to guide you as you write your code. But how much did the compiler help there?
• How can the “shape” of the matrices guide our programming?
• We basically rely on naming conventions to make sure we write our code correctly.
Haskell Red Flags
• How many ways can we write the function and have it still typecheck?
• How many of our functions are partial?
A Typed Alternative

data Weights i o = W { wBiases :: !(R o)
                     , wNodes  :: !(L o i)
                     }

An o x i layer
A Typed Alternative
From HMatrix:
An R 3 is a 3-vector, an L 4 3 is a 4 x 3 matrix.
Data Kinds
With -XDataKinds, all values and types are lifted to types and kinds.
In addition to the values True, False, and the type Bool, we also have the type 'True, 'False, and the kind Bool.
In addition to : and [] and the list type, we have ': and '[] and the list kind.
A Typed Alternative
data Network :: Nat -> [Nat] -> Nat -> Type where
O :: !(Weights i o)
-> Network i '[] o
(:~) :: KnownNat h
=> !(Weights i h)
-> !(Network h hs o)
-> Network i (h ': hs) o
infixr 5 :~
runLayer :: (KnownNat i, KnownNat o)
=> Weights i o
-> R i
-> R o
runLayer (W wB wN) v = wB + wN #> v
runNet :: (KnownNat i, KnownNat o)
=> Network i hs o
-> R i
-> R o
runNet (O w) !v = logistic (runLayer w v)
runNet (w :~ n') !v = let v' = logistic (runLayer w v)
in runNet n' v'
Exactly the same! No loss in expressivity!
Much better! Matrices and vector lengths are guaranteed to line up!
Also, note that the interface for runNet is better stated in its type. No need to rely on documentation.
The user knows that they have to pass in an R i, and knows to expect an R o.
randomWeights :: (MonadRandom m, KnownNat i, KnownNat o)
=> m (Weights i o)
randomWeights = do
s1 :: Int <- getRandom
s2 :: Int <- getRandom
let wB = randomVector s1 Uniform * 2 - 1
wN = uniformSample s2 (-1) 1
return $ W wB wN
No need for explicit arguments! The user can demand i and o. No reliance on documentation or parameter order.
But, for generating nets, we have a problem:
randomNet :: forall m i hs o. (MonadRandom m, KnownNat i, KnownNat o)
=> m (Network i hs o)
randomNet = case hs of [] -> ??
Pattern matching on types
The solution for pattern matching on types: singletons.
Pattern matching on types
Implicit passing
Explicitly passing singletons can be ugly.
Implicit passing
randomNet :: forall i hs o m. (MonadRandom m, KnownNat i, SingI hs, KnownNat o)
=> m (Network i hs o)
randomNet = randomNet' sing
Now the shape can be inferred from the functions that use the Network.
train :: forall i hs o. (KnownNat i, KnownNat o)
=> Double -- ^ learning rate
-> R i -- ^ input vector
-> R o -- ^ target vector
-> Network i hs o -- ^ network to train
-> Network i hs o
train rate x0 target = fst . go x0
go :: forall j js. KnownNat j
=> R j -- ^ input vector
-> Network j js o -- ^ network to train
-> (Network j js o, R j)
-- handle the output layer
go !x (O w@(W wB wN))
= let y = runLayer w x
o = logistic y
-- the gradient (how much y affects the error)
-- (logistic' is the derivative of logistic)
dEdy = logistic' y * (o - target)
-- new bias weights and node weights
wB' = wB - konst rate * dEdy
wN' = wN - konst rate * (dEdy `outer` x)
w' = W wB' wN'
-- bundle of derivatives for next step
dWs = tr wN #> dEdy
in (O w', dWs)
-- handle the inner layers
go !x (w@(W wB wN) :~ n)
= let y = runLayer w x
o = logistic y
-- get dWs', bundle of derivatives from rest of the net
(n', dWs') = go o n
-- the gradient (how much y affects the error)
dEdy = logistic' y * dWs'
-- new bias weights and node weights
wB' = wB - konst rate * dEdy
wN' = wN - konst rate * (dEdy `outer` x)
w' = W wB' wN'
-- bundle of derivatives for next step
dWs = tr wN #> dEdy
in (w' :~ n', dWs)
Surprise! It’s actually identical! No loss in expressivity.
Also, typed holes can help you write your code in a lot of places. And shapes are all verified.
By the way, still waiting for linear types in GHC :)
Type-Driven Development
The overall guiding principle is:
1. Write an untyped implementation.
2. Realize where things can go wrong:
□ Partial functions?
□ Many, many ways to implement a function incorrectly with the current types?
□ Unclear or documentation-reliant API?
3. Gradually add types in selective places to handle these.
I recommend not going the other way (use perfect type safety before figuring out where you actually really need them). We call that “hasochism”.
Plot more than two functions
11-13-2017, 05:28 PM
Post: #1
MullenJohn Posts: 47
Junior Member Joined: Sep 2017
Plot more than two functions
With the HP Prime in CAS Mode I am trying to plot on one screen the Function, the First Derivative and the Second Derivative to interpret Concavity and Points of Inflection of the function.
For example:
√ F1(X): 2*(X²-9)/(X²-4)
√ F2(X): ∂F1(X)/∂X
√ F3(X): ∂F2(X)/∂X
Upon pressing the Plot key my plot only shows F1(X) and does not show F2(X) and/or F3(X).
Please tell me what I am doing incorrectly so that I may plot all three functions simultaneously.
Thanks – Cheers!
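As an aside not from the thread: the derivative being plotted can be sanity-checked off the calculator. This Python sketch (function names are mine) compares a central-difference derivative of F1 against a closed form worked out by hand with the quotient rule.

```python
def f1(x):
    # F1(X) from the post: 2*(X^2 - 9)/(X^2 - 4)
    return 2 * (x**2 - 9) / (x**2 - 4)

def central_diff(f, x, h=1e-6):
    # numerical first derivative, standing in for dF1(X)/dX
    return (f(x + h) - f(x - h)) / (2 * h)

def f2_closed_form(x):
    # hand-worked dF1/dX via the quotient rule (my algebra):
    # 20*X / (X^2 - 4)^2
    return 20 * x / (x**2 - 4) ** 2

# away from the poles at X = +/-2 the two agree closely,
# e.g. at X = 3 both give 2.4
```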
11-13-2017, 08:14 PM
Post: #2
ThomasA Posts: 27
Junior Member Joined: May 2014
RE: Plot more than two functions
to plot the derivatives you just have to add '=X' after the 'X' in the denominator of the expressions. It should look like 'dX=X' in the denominator (without the apostrophes, of course), and with the 'funny d' (∂), not the plain 'd'.
11-13-2017, 08:25 PM
Post: #3
MullenJohn Posts: 47
Junior Member Joined: Sep 2017
RE: Plot more than two functions
Thank you Thomas - it worked just fine - Cheers!
11-13-2017, 08:29 PM
Post: #4
Tim Wessman Posts: 2,293
Senior Member Joined: Dec 2013
RE: Plot more than two functions
(11-13-2017 08:14 PM)ThomasA Wrote: Hi,
to plot the derivatives you have just to add '=X' behind the 'X's in the denominator of the expressions. It should look like 'dX=X' in the denominator (without the apostrophes, of course) and
with the
'funny d' and not the 'd'.
Note that you don't need to do this any more on the current Beta version...
Although I work for HP, the views and opinions I post here are my own.
11-13-2017, 08:31 PM
Post: #5
MullenJohn Posts: 47
Junior Member Joined: Sep 2017
RE: Plot more than two functions
Thomas, I just noticed that the Num Key shows no values for F1, F2 or F3.
Would you know the reason why the cells are empty in the Numeric view?
11-14-2017, 10:06 AM
Post: #6
ThomasA Posts: 27
Junior Member Joined: May 2014
RE: Plot more than two functions
I am sorry, but I can't reproduce your observation. On my machine all columns are filled with numbers or with NaNs. I use the November beta release.
11-14-2017, 04:11 PM
Post: #7
MullenJohn Posts: 47
Junior Member Joined: Sep 2017
RE: Plot more than two functions
With the HP Prime in CAS Mode I am trying to plot on one screen the Function, the First Derivative and the Second Derivative to interpret Concavity and Points of Inflection of the function.
For example:
√ F1(X): 2*(X²-9)/(X²-4)
√ F2(X): ∂F1(X)/∂X=X
√ F3(X): ∂F2(X)/∂X=X
Upon pressing the Plot key my plot shows F1(X), F2(X) and F3(X) simultaneously on the Plot screen correctly.
However, upon pressing the Num key the X, F1, F2, F3 cells are blank.
| X | F1 | F2 | F3 |
| | | | |
Please tell me what I am doing incorrectly so that I may see all three functions numerically.
Also, should I install the November Beta Release?
If so where do I find the procedure for step-by-step instructions?
Thanks Thomas – Cheers!
11-14-2017, 07:16 PM
Post: #8
roadrunner Posts: 450
Senior Member Joined: Jun 2015
RE: Plot more than two functions
go to shift/setup, make sure num type is set to automatic
11-14-2017, 07:34 PM
Post: #9
MullenJohn Posts: 47
Junior Member Joined: Sep 2017
RE: Plot more than two functions
Thank you RoadRunner! I had it set to "BuildYourOwn" - No idea why. Now it works - Cheers!
2014 A-level H2 Mathematics (9740) Paper 2 Question 2 Suggested Solutions - The Culture SG
All solutions here are SUGGESTED. Mr. Teng will hold no liability for any errors. Comments are entirely personal opinions.
Using partial fractions formula found in MF15,
Using GC,
Personal Comments:
Firstly, this question is worth a lot of marks! It is quite a standard tutorial question that tests students on all their integration techniques. I think they combined systems of linear equations here too, which is really neat. Students could also have solved the partial fractions using the substitution (cover-up) method. Do use the MF15 for the partial fractions and integration formulas!
Finally, the answers are to be left in rational form, which means to be expressed as a fraction of two integers!
On self-majorizing elements in Archimedean vector lattices
A finite element in an Archimedean vector lattice is called self-majorizing if its modulus is a majorant. Such elements exist in many vector lattices and naturally occur in different contexts. They
are also known as semi-order units as the modulus of a self-majorizing element is an order unit in the band generated by the element. In this paper the properties of self-majorizing elements are
studied systematically, and the relations between the sets of finite, totally finite and self-majorizing elements of a vector lattice are provided. In a Banach lattice, an element f is self-majorizing if and only if the ideal and the band generated by f coincide.
Deformations of $(G,\mu)$-Displays
December 22, 2023 at 15:30 – 17:00 CET
Seminar on Arithmetic Geometry
Mohammad Hadi Hedayatzadeh (Institute for Research in Fundamental Sciences)
In this talk, I will discuss a joint project with A. Partofard, on prismatic displays with additional structures. I will start with a concise overview of the theory of displays developed by Th. Zink,
which serves as a generalization of Dieudonné theory. Displays play a crucial role in the study of Barsotti-Tate groups when the base is not a perfect field of positive characteristic. Zink has
further expanded the theory by introducing windows over frames. In another direction, in order to construct integral models of certain Shimura varieties that are not of Abelian type, O. Bültel
defined and studied displays with additional structures called $(G,\mu)$-displays.
In this joint project with Partofard, we define and study the stack of prismatic $(G,\mu)$-displays over the quasi-syntomic site, which is better adapted to the setting of perfectoid geometry and is
closely related to the stack of $G$-torsors over the Fargues-Fontaine curve and local Shimura varieties. When $G$ is the general linear group, our stack is the same as the stack of admissible
prismatic $F$-crystals developed by Anschütz-LeBras, which is equivalent to the stack of $p$-divisible groups. We also prove a Grothendieck-Messing style deformation result for these prismatic
displays, which, for the general linear group, answers a question of Anschütz-LeBras.
Zoom (635 7328 0984, Password: smallest six digit prime).
What to Expect From a Physics PhD Qualifying Exam - Do My Physics Exam
The Physics PhD qualifying exams should be taken during the first two years of a graduate student's studies, once the first core courses for the MS degree have been taken. This exam will determine whether a student is qualified to continue further study in the field and earn the PhD.
The Physics PhD requires students to undergo a rigorous examination, which is intended to measure their skills and knowledge. This exam is given by both faculty and graduate students and involves
both theoretical and experimental calculations. Some of the subjects that will be covered include:
The first section of the exam is generally considered the theoretical portion of physics. This includes topics such as general relativity, thermodynamics, nuclear fission, the strong and weak nuclear
force, electromagnetism, and gravity. It also covers topics such as radiation, electricity, magnetism, and sound. The student will need to analyze these topics using a set of equations.
Students must pass this exam with a score of around 720 on the test. Students must also be able to demonstrate that they understand and are confident about these concepts. In addition, they should
also be able to demonstrate that they have at least a good grasp of these concepts through practice and experimentation. In addition to this, students must also demonstrate that they can apply this
understanding in their laboratory work.
The second section of the Physics PhD qualifying exams, which covers the experimental portion, is often referred to as the laboratory portion. It is important to know that many students choose to
take this portion separately. If a student chooses to do so, he or she should consider taking part in a laboratory setting with a mentor and a fellow student. If the student chooses to do this alone,
he or she must have enough information on theoretical concepts to be able to understand how the laboratory works.
Some of the topics that are covered in this exam are quantum mechanics, the theory of relativity, wave mechanics, electrodynamics and special relativity. The student will need to understand these
concepts from the perspective of science in order to succeed in this test. Students will need to know that there are multiple forms of energy, namely electromagnetic, gravitational, acoustic, or
nuclear. and chemical.
Students must be able to understand and show how to calculate equations and solve for specific quantities, as well as explain why they choose certain quantities to be part of an equation. The student needs to be able to calculate the energy and kinetic energy of objects.
A Physics PhD course includes many laboratory experiments as well as independent study. The student should choose a good teacher, who can offer practical demonstrations and help prepare them for this
exam. This student needs to do a lot of independent research as well as attend and participate in meetings where their questions and ideas are considered. The student will need to develop good oral
and written communication skills in order to pass this exam.
Many physics students have a very successful career as they complete their Physics PhD course. They have found employment in research laboratories, government agencies, and the private sector.
However, there are some who find their career options are limited because of the time and money spent to get a degree and obtain jobs in their chosen field.
Many of these individuals may decide to enter graduate school. If this is the case, they need to take the first two qualifying exams to become eligible to sit the advanced PhD exam. In addition, they will have to take at least one independent study course in order to gain experience in completing their chosen field.
Other requirements are needed for students who want to get a PhD in Physics. The student will need to take at least five semesters in order to finish the coursework required for the advanced graduate
program. A doctoral student must maintain an average GPA of 3.5 each semester in order to successfully finish this program.
Graduates of this graduate program will not only have a great job waiting for them, but also a solid educational background in physics and research. They may earn their bachelor’s degree in less time
than traditional grad students. It is important that students carefully review the requirements for this program, since this is one of the highest honors a university can bestow upon its graduates.
CPM Homework Help
The graph at right was made by shifting the first cycle of $y = \sin x$ to the left.
1. How many units to the left was it shifted?
It is shifted π units to the left.
2. Figure out how to change the equation of $y = \sin x$ so that the graph of the new equation will look like the one in part (a). If you do not have a graphing calculator at home, sketch the graph
and check your answer when you get to class.
In the general equation $y = \sin\left(x − h\right) + k$, which parameter moves the graph horizontally?
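The hint can be checked numerically: a shift of π units to the left corresponds to h = −π (with k = 0), and sin(x + π) = −sin x. A small Python sketch (the function name is mine):

```python
import math

def shifted(x):
    # first cycle of y = sin x moved pi units to the left:
    # h = -pi (and k = 0) in y = sin(x - h) + k
    return math.sin(x + math.pi)

# sin(x + pi) = -sin(x), so the shifted curve is the reflection
# of y = sin x across the x-axis
```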
JNTUH R18 B.Tech Strength Of Materials Syllabus 2022
December 28, 2021 2021-12-29 11:31
JNTUH R18 B.Tech Strength Of Materials Syllabus 2022
JNTUH R18 B.Tech Strength Of Materials Syllabus 2022
The syllabus for Strength Of Materials, along with the course objectives, course outcomes, and the list of textbooks and reference books, is given in this blog. The subject of Strength Of Materials has 5 units in total. The topics and sub-topics of Strength Of Materials are listed below in detail. If you have any problem in understanding Strength Of Materials or any other Engineering subject in any semester or year, then you can view the video lectures on the official CynoHub app.
Strength Of Materials Unit 1
Concept of stress and strain- St. Venant’s Principle-Stress and Strain Diagram – Elasticity and plasticity
– Types of stresses and strains- Hooke’s law – stress – strain diagram for mild steel – Working stress
– Factor of safety – Lateral strain, Poisson’s ratio and volumetric strain – Pure shear and Complementary shear – Elastic moduli, Elastic constants and the relationship between them – Bars of varying section – composite bars – Temperature stresses.
STRAIN ENERGY – Resilience – Gradual, sudden, and impact loadings – simple applications.
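The "relationship between them" listed for the elastic constants in this unit can be illustrated with a quick numeric check. The material values below are illustrative (roughly mild steel) and are not part of the syllabus; the identity used is E = 2G(1 + ν) = 3K(1 − 2ν), equivalently 9/E = 3/G + 1/K.

```python
E = 200e9   # Young's modulus, Pa (typical mild steel, illustrative)
nu = 0.3    # Poisson's ratio (illustrative)

G = E / (2 * (1 + nu))       # modulus of rigidity (shear modulus)
K = E / (3 * (1 - 2 * nu))   # bulk modulus

# the three moduli are tied together by 9/E = 3/G + 1/K
assert abs(9 / E - (3 / G + 1 / K)) < 1e-15
```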
Strength Of Materials Unit 2
Types of beams – Concept of shear force and bending moment – S.F and B.M diagrams for cantilever, simply supported including overhanging beams subjected to point loads, uniformly distributed load,
uniformly varying load, couple and combination of these loads – Point of contraflexure – Relation between S.F., B.M and rate of loading at a section of a beam.
Strength Of Materials Unit 3
Theory of simple bending – Assumptions – Derivation of bending equation- Section Modulus Determination of flexural/bending stresses of rectangular and circular sections (Solid and Hollow), I,T, Angle
and Channel sections – Design of simple beam sections.
Shear Stresses:
Derivation of formula for shear stress distribution – Shear stress distribution across various beam sections like rectangular, circular, triangular, I, T angle and channel sections.
Strength Of Materials Unit 4
Slope, deflection and radius of curvature – Differential equation for the elastic line of a beam – Double integration and Macaulay’s methods – Determination of slope and deflection for cantilever and
simply supported beams subjected to point loads, U.D.L, Uniformly varying load and couple -Mohr’s theorems
– Moment area method – Application to simple cases.
CONJUGATE BEAM METHOD: Introduction – Concept of conjugate beam method – Difference between a real beam and a conjugate beam – Deflections of determinate beams with constant and different moments of inertia.
Strength Of Materials Unit 5
Introduction – Stresses on an oblique plane of a bar under axial loading – compound stresses – Normal and tangential stresses on an inclined plane for biaxial stresses – Two perpendicular normal
stresses accompanied by a state of simple shear –Principal stresses – Mohr’s circle of stresses – ellipse of stress
– Analytical and graphical solutions.
THEORIES OF FAILURE: Introduction – Various theories of failure – Maximum Principal Stress Theory, Maximum Principal Strain Theory, Maximum shear stress theory- Strain Energy and Shear Strain Energy
Theory (Von Mises Theory).
Strength Of Materials course objectives:
The objective of this Course is
• To understand the nature of stresses developed in simple geometries such as bars, cantilevers and beams for various types of simple loads
• To calculate the elastic deformation occurring in simple members for different types of loading.
• To show the plane stress transformation with a particular coordinate system for different orientation of the plane.
• To know different failure theories adopted in designing of structural members
Strength Of Materials course outcomes:
On completion of the course, the student will be able to:
• Describe the concepts and principles, understand the theory of elasticity including strain/displacement and Hooke’s law relationships, and perform calculations related to the strength of structural and mechanical components.
• Recognize various types of loads applied on structural components of simple framing geometries and understand the nature of internal stresses that will develop within the components.
• To evaluate the strains and deformation that will result due to the elastic stresses developed within the materials for simple types of loading
• Analyze various situations involving structural members subjected to plane stresses by application of Mohr’s circle of stress;
• Frame an idea to design a system, component, or process.
Strength Of Materials reference books:
1. Strength of Materials by R. K Rajput, S. Chand & Company Ltd.
2. Mechanics of Materials by Dr. B.C Punmia, Dr. Ashok Kumar Jain and Dr. Arun Kumar Jain
3. Strength of Materials by R. Subramanian, Oxford University Press
4. Mechanics of material by R.C. Hibbeler, Prentice Hall publications
5. Engineering Mechanics of Solids by Egor P. Popov, Prentice Hall publications
6. Strength of Materials by T.D.Gunneswara Rao and M.Andal, Cambridge Publishers
7. Strength of Materials by R.K. Bansal, Lakshmi Publications House Pvt. Ltd.
8. Strength of Materials by B.S.Basavarajaiah and P. Mahadevappa, 3rd Edition, Universities Presss
Scoring Marks in Strength Of Materials
Scoring good grades in Strength Of Materials is a difficult task. CynoHub is here to help. We have made a video that will help Engineering Students get rank 1 in their B.tech exams. This video will
help students to score good grades in Strength Of Materials. There are many reasons that scoring in Strength Of Materials exams is difficult so this video will help you to rectify the mistakes
students make in exams.
JNTUH R18 B.Tech Strength Of Materials was made clear in this article. To know about the syllabus of other Engineering subjects of JNTUH, check out the official CynoHub application. Click below to download the CynoHub application.
Separated sort
Given a sequence of natural numbers, first, print the even numbers in increasing order, and then the odd numbers in decreasing order.
Input consists of zero or more cases. Each case consists of a line with at most 1000 natural numbers strictly positive. Each line ends with a 0 that indicates the end.
For each case, print in a line the even numbers in increasing order, and in the following line the odd numbers in decreasing order.
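The statement does not fix a language; one possible Python sketch (all names are mine) assumes each case fits on one input line terminated by its 0:

```python
import sys

def separated_sort(nums):
    """Evens in increasing order, odds in decreasing order."""
    evens = sorted(x for x in nums if x % 2 == 0)
    odds = sorted((x for x in nums if x % 2 != 0), reverse=True)
    return evens, odds

def main():
    for line in sys.stdin:
        values = list(map(int, line.split()))
        if not values:
            continue
        case = values[:values.index(0)]  # the trailing 0 ends the case
        evens, odds = separated_sort(case)
        print(*evens)
        print(*odds)

if __name__ == "__main__":
    main()
```

Sorting the two partitions separately keeps each case at O(n log n).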
Priority v. 1.1
By Michael Palmer
It's come to my attention that many of the questions being asked on our forums here at netrep.net have been the same questions regarding priority and specific monsters and how they interact. First,
I'll say the golden rule that no one seems to understand as of right now. A monster does not have priority! YOU THE PLAYER HAVE PRIORITY!!! Some people just don't understand that so the first thing
we always say while answering questions is "This monster doesn't have priority, no monster has priority. The player has the priority." So make sure you rephrase your questions before posting them if
you ever ask about a monster's priority.
With that pushed aside, I thought up a few situations with certain monsters that you could use their effects with while using YOUR priority:
Player A summons Tribe-Infecting Virus to the field.
Player B responds with Trap Hole.
Player A choose to use turn priority to activate Tribe's effect.
Player B's Trap Hole is then added on the chain as link 2.
Link 1: Tribe-Infecting Virus's effect is activated.
Link 2: Trap Hole is activated.
Link 2: Trap Hole first resolves since it was the last card on the chain and destroys Tribe-Infecting Virus.
Link 1: Then Tribe's effect resolves since it was not negated destroying all monsters of the specific type called.
Reason: I know what many of you are thinking. How can a card resolve fully if it's no longer present on the field at resolution? Well, to put it quite simply, it's like chaining MST to Raigeki. Even though you destroyed Raigeki in the chain, its effect was never negated, so it will resolve as normal even though the card was destroyed before its resolution would take place. The same goes for Tribe and any other monster: its effect is what the trap card is being chained to. Since you can't chain to a summon, the trap card has to be chained to the cost effect of the monster. Since the trigger effect is spell speed 1, it has to be the first link in the chain. Then you add on the spell speed 2 effect of the trap card, in this case Trap Hole, and it destroys Tribe first, and then Tribe's effect destroys all monsters of the specific type called.
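The last-in, first-out resolution used in these examples can be sketched as a stack. This toy Python model is my own illustration, not an official rules engine; it replays the Tribe-Infecting Virus chain above:

```python
def resolve_chain(links):
    """links: (name, effect) pairs in activation order (link 1 first).
    Chains resolve in reverse order; destroying a card does NOT negate
    an effect that was already activated as an earlier link."""
    order = []
    for name, effect in reversed(links):
        order.append(name)
        effect()
    return order

field = {"Tribe-Infecting Virus", "Some Warrior"}

chain = [
    # Link 1: Tribe's effect, declared on summon (cost already paid)
    ("Tribe-Infecting Virus effect", lambda: field.discard("Some Warrior")),
    # Link 2: Trap Hole, chained to that effect
    ("Trap Hole", lambda: field.discard("Tribe-Infecting Virus")),
]

# Trap Hole resolves first and destroys Tribe, yet Tribe's effect
# still resolves afterwards and clears the declared type.
```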
Player A summons Magicial Scientist.
Player B activates Ring of Destruction.
Player A activates Scientist's effect by paying 1000 Life Points.
Player B's Ring of Destruction then resolves destroying Scientist and dealing 300 points of damage to both players.
Player B's Scientist's effect resolves special summoning his fusion monster to the field.
Link 1: Magical Scientist's effect is activated.
Link 2: Ring of Destruction is activated.
Link 2: Ring of Destruction resolves destroying Magical Scientist and dealing 300 points of damage to both players.
Link 1: Magical Scientist's effect resolves special summoning a Fusion monster.
Reason: Basically the same reasoning as with TIV above.
Player B has Skill Drain face-up on the field.
Player A tribute summons Jinzo.
Player B's Skill Drain is already active and is a continuous effect.
Player A's Jinzo is negated upon the successful summoning.
With this it's a simple time stamp effect. Since Skill Drain was in effect first on the field, Jinzo's effect is negated.
Player B has a face up Level Limit Area B on the field.
Player A tribute summons Spell Canceller.
Same issue as above: since Level Limit was in effect first, it will turn Spell Canceller to defense position. Then Spell Canceller's effect will trigger, negating Level Limit. I'll also add to this: even though Level Limit is negated, that DOES NOT mean you can change the position of Spell Canceller. You cannot change the position of a monster summoned that same turn, so it'll stay in defense until it's either destroyed or until you can turn it on your next turn. You can, however, change the position of any other monster you may control at that time, since Level Limit is now negated by Spell Canceller.
Reason: In this case, I'm demonstrating that continuous effects take priority over other effects. What I showed you is that a continuous effect that's on the field will take priority over
resolution against another continuous effect introduced later, due to it being in effect first. In this case, since Skill Drain was active first, its effect will affect Jinzo before Jinzo could affect Skill Drain. Since Jinzo is negated, Skill Drain is not negated by Jinzo's effect. In the second demonstration, Spell Canceller vs. Level Limit Area B, the end result is Spell
Canceller goes to defense mode and then negates Level Limit, the simultaneous effects would go on chain as I showed above.
Player A tribute summons Mobius The Frost Monarch and targets two spell/trap cards on the field.
Player B responds with Torrential Tribute.
Player A's Mobius The Frost Monarch resolves, since its effect is activated as soon as it hits the field, and the spell or trap cards that were targeted upon summoning are destroyed. If Torrential Tribute is one of these targeted cards, it does not negate Torrential Tribute.
Player B's Torrential Tribute then resolves destroying all of the monsters on the field, including Mobius The Frost Monarch.
Link 1: Mobius's effect is activated, targeting up to two spell/trap cards on the field.
Link 2: Torrential Tribute is activated.
Link 2: Torrential Tribute resolves destroying all monsters on the field.
Link 1: Mobius's effect resolves destroying the two spell/trap cards that were designated as the targets upon activation (summoning).
Reason: This one should be apparent: the effect activates as soon as Mobius is summoned, which means that as soon as it hits the field, the player controlling Mobius gets to select up to two targets with its effect. Then Player B has the right to respond with a trap after the selection is made; the trap resolves first, and then Mobius's effect resolves as normal.
Here's a tad bit different of a situation...
Player A's D.D. Warrior Lady attacks Player B's Face Down Card.
Player B flips their Face Down Card and reveals their own D.D. Warrior Lady.
Damage Calculation is reached and Player A takes 100 points of damage for running into D.D.'s 1600 defense with a 1500 ATK.
The question being is who gets the choice to remove first?
This one is quite simple: the turn player has first choice on whether or not to remove. Player A would be the one to make the first choice; if they choose not to remove, it then goes to Player B, who now has the choice with their D.D. Warrior Lady. If they choose to remove, then both monsters are removed from play. If not, then nothing happens and both monsters stay on the field,
Player B's in face up defense position and Player A's in face up attack position.
That's enough for cards you would have "priority" with. It should be a little more evident that cards with normal face up effects would have their effect active on the field before any trap can be
activated in response to the summon (not chained to the summon since another Golden Rule is that summons have no spell speed, which means for you new guys, they're non-chainable).
If you read the above, you'll notice that what comes next is cards whose certain effects you have no priority to activate. First I'll talk about the one card that
almost everyone seems to want to confuse.
Player A summons Breaker The Magical Warrior
Player B activates Bottomless Trap Hole
Player A chooses to use priority... but wait, what does that mean!?
Link 1: Breaker is summoned, activating his effect to add the counter.
Link 2: Bottomless Trap Hole is activated.
Link 2: Bottomless Trap Hole resolves destroying and removing Breaker from the game.
Link 1: Since Breaker is no longer face-up on the field, the counter cannot be added to the card.
Reason: Breaker's effect is very tricky, and some people don't understand how it's tricky. Breaker essentially has two effects. The first is the addition of the counter; without this counter you
cannot activate the secondary effect, so it's essential. Breaker's face-up effect as soon as it's summoned is the addition of the counter, not its "breaking" effect itself. So the only priority
you have when an opponent responds to the summon of your Breaker is the addition of the counter. If you look at the above chain you'll see that Breaker's counter is never added, because Breaker is no
longer face-up on the field to receive the counter.
This goes toward the Giant Orc summoning, sacrificed to Catapult Turtle, vs. Torrential Tribute. It's still my reasoning and my opinion that you could sacrifice the monster to Catapult Turtle, since
when you look at the above chains, you see that it's always the trap being chained to a speed 1 effect.
What would happen in this case is the situation would look like this:
NOTE: This is still being debated, I've got many people I know who are very good judges agreeing with me and others who are very good disagreeing, it's a very hot topic, but I hope to have something
on it soon (I've already started looking into it).
Player A summons Giant Orc.
Player B activates Torrential Tribute.
Player A uses turn priority to activate the trigger effect of Catapult Turtle.
Here's the chain:
Link 1: Catapult Turtle's effect is activated, the cost of the effect is sending Giant Orc to the graveyard, which is done at activation.
Link 2: Torrential Tribute is activated.
Link 2: Torrential Tribute resolves destroying all monsters on the field.
Link 1: Since Catapult Turtle's effect was never negated, it would resolve as normal dealing 1100 points of direct damage to Player B.
Reasoning: I'm calling this reasoning for a reason: if someone shows it not to be true, I want them to understand my complete reasoning behind my explanation. If a monster is considered
face-up on the field after the summon, and if priority chains are the way I and many others have described them in the previous thread, then Giant Orc would in fact be on the field for the sacrifice
to Catapult Turtle. Since the player with turn priority can choose to activate any effect, including trigger effects, it would only make sense that they could activate Catapult Turtle's effect.
Since the sacrificing of Giant Orc is a cost, it has already been tributed and destroyed by the time Torrential Tribute (which is chained to the trigger effect) resolves. Since Catapult was not
negated (much like the Tribe example and Magical Scientist example above) then it would resolve as normal dealing 1100 points of damage to the opposing player.
I see no reason why it would be any other way, and no reason why it would be contradicted within the game. It would only confuse even the most expert of players into second-guessing every aspect
of the game. Situations like this tend to cause people to quit; they cause massive confusion with the game, and it just really isn't worth the cost, if you get my point.
I'll look into maybe getting a few answers from UDE about the proposed chain, but for now I'm leaving this in the essay as another example of turn priority. It might be contested, but I still have
yet to see a very good reason (the one reason someone gave only strengthens the argument I have).
In any case, that's all the updates I'm doing to this; most other things can be asked about in this thread. If you have any questions or beef about something I've explained, feel free to explain
yourself. That's what this is all about: to help others reach a better understanding of this aspect of the game, because without that help, we're doomed to confusion and uncertainty for the rest
of our lives... well... for the rest of the time we're playing Yu-Gi-Oh!
Re: Priority v. 1.1 (All Read before posting priority quest.)
Player B has Skill Drain face-up on the field.
Player A tribute summons Jinzo.
Player B's Skill Drain is already active and is a continuous effect.
Player A's Jinzo is negated upon the successful summoning.
Link 1: Jinzo's effect is activated.
Link 2: Skill Drain's effect is activated.
Link 2: Skill Drain's effect resolves first since it was the last effect on the chain.
Link 1: Jinzo's effect resolves, but is now negated by Skill Drain.
Reason: In this case, I'm demonstrating that continuous effects take priority over other effects. What I showed is that a continuous effect already on the field will take priority in
resolution over another continuous effect introduced later. In this case, since Skill Drain was active first, its effect resolves first in the chain I showed above. Since Jinzo is
negated, Skill Drain is not negated by Jinzo's effect. In the second demonstration I showed you Spell Canceller vs. Level Limit - Area B; the end result is Spell Canceller goes to defense mode
and then negates Level Limit, with the simultaneous effects going on a chain as I showed above.
Woah woah! This is not correct.
There is no chain here. SEGOC only applies to Triggers (auto-activated effects); man, this acronym causes so many problems. There is no chaining going on here; it is a timestamped application.
So if [Skill Drain] is already active and [Jinzo] is summoned, you apply the effect in timestamp order. Since [Skill Drain] has an earlier timestamp, it is applied first (negating monster effects),
since [Jinzo]'s effect is now negated, it cannot become active.
Once again, continuous effects that clash, do not at any time form a chain whatsoever.
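That timestamp rule can be sketched loosely as follows (a toy model of the claim above, not an official ruleset; the tuple layout and flags are invented purely for illustration):

```python
# Toy model: clashing continuous effects are applied in timestamp order,
# not chained. A monster effect that is already negated never turns on.

def apply_in_timestamp_order(effects):
    """effects: tuples of (timestamp, name, negates_monster_effects, is_monster_effect)."""
    monster_effects_negated = False
    active = []
    for _ts, name, negates_monster_effects, is_monster_effect in sorted(effects):
        if is_monster_effect and monster_effects_negated:
            continue  # e.g. Jinzo's effect cannot become active under Skill Drain
        active.append(name)
        if negates_monster_effects:
            monster_effects_negated = True
    return active

# Skill Drain (timestamp 1) is applied before Jinzo (timestamp 2),
# so Jinzo's effect never becomes active.
print(apply_in_timestamp_order([(2, "Jinzo", False, True),
                                (1, "Skill Drain", True, False)]))
```

Note there is no stack here at all: effects are walked oldest-first, which is exactly the opposite of how a chain resolves.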
What would happen in this case is the situation would look like this:
Player A summons Giant Orc.
Player B activates Torrential Tribute.
Player A uses turn priority to activate the trigger effect of Catapult Turtle.
Here's the chain:
Link 1: Catapult Turtle's effect is activated, the cost of the effect is sending Giant Orc to the graveyard, which is done at activation.
Link 2: Torrential Tribute is activated.
Link 2: Torrential Tribute resolves destroying all monsters on the field.
Link 1: Since Catapult Turtle's effect was never negated, it would resolve as normal dealing 1100 points of direct damage to Player B.
This, as was discussed in the earlier thread, should not be correct. You can only use the Spell Speed 1 effect of the monster summoned; you cannot use any other Spell Speed 1 effect.
Manual Spell Speed 1 effects are not, in most cases, intended to be used for responding. Cost Effect Priority is a special provision for the monster summoned, but not for any other Spell Speed 1 (manual) effect.
Re: Priority v. 1.1 (All Read before posting priority quest.)
[quote author=novastar]
Woah woah! This is not correct.
There is no chain here. SEGOC only applies to Triggers (auto-activated effects); man, this acronym causes so many problems. There is no chaining going on here; it is a timestamped application.
So if [Skill Drain] is already active and [Jinzo] is summoned, you apply the effect in timestamp order. Since [Skill Drain] has an earlier timestamp, it is applied first (negating monster effects),
since [Jinzo]'s effect is now negated, it cannot become active.
Once again, continuous effects that clash, do not at any time form a chain whatsoever.[/quote]
I actually was second guessing myself at the time I wrote that. I had written it the way that was right to begin with and I ended up changing it to match the format of everything else, I'm thinking
too much right now and my brain is fried from having to work (offline wise), manage the site, manage the forums, try and think up 50 million possible priority ways from Sunday... lol.
I'm not disagreeing with this, I agree with this, however, I will still disagree with the other for now, this is mainly a way for us to all branch out and discuss and try to understand the situation
at hand even better. I'll update the other issue and I'll add another note (which I think I said it was debatable) in the area of the Catapult Turtle issue, I still want discussion on the topic.
Re: Priority v. 1.1 (All Read before posting priority quest.)
I'm not disagreeing with this, I agree with this, however, I will still disagree with the other for now, this is mainly a way for us to all branch out and discuss and try to understand the
situation at hand even better. I'll update the other issue and I'll add another note (which I think I said it was debatable) in the area of the Catapult Turtle issue, I still want discussion on
the topic.
I totally understand, I am simply giving my thoughts, in an effort to help. I have spent a great deal of time researching these types of mechanics.
To add to the discussion: in 99% of cases, throughout the turn, every action/event/chain resolution can be responded to (let's leave 1% for anomalies). They create a response timing, and a
resulting Response Chain can be formed.
Assume that I am specifically referring to manually activated effects, not Triggers.
In general, Spell Speed 1 effects (or cards) are not intended to be used for this timing. They generally have no timing at all; they are used to start chains, not for responding or chaining. They are
supposed to be activated when the Chain Block is empty and no event/action is outstanding. Timing is reserved for Spell Speed 2 or higher effects.
Cost Effect Priority was a provision created to allow Spell Speed 1 effects to be used for responding, but only at the time of summoning, and only from the monster summoned.
Re: Priority v. 1.1 (All Read before posting priority quest.)
That's the most sense that anyone has been able to make of the opposite end of the situation, not bad. Makes sense, and is very logical and fits in with the game mechanics (slightly) but it does show
that there was a need for effects to be able to be activated in response to traps in some way, due to effects being legally present on the field immediately after summoning.
But it doesn't really match up with what priority essentially was to begin with: the fact that one player has the ability to activate a card. Now, with summons having no spell speed, it's arguable
that a summon isn't considered an action that requires priority to shift. Even if that were the case, the chain would end with the trap card or other spell speed 2 effect being activated instead of
the monster's; since the monster's effect is speed 1 you couldn't chain it, and if priority is passed soon after summoning, then you'd have to chain with something if your opponent did in fact
activate a card in response.
So it only naturally makes sense that a summon is not an action that would require the passing of priority (unlike chains and such, where you must pass after activating a card in a chain). That would
mean the activation of a trap in response to a summon is always illegal, and it becomes legal only if the turn player chooses to activate a card's effect or not to activate anything at all (since the
trap activated still met its conditions; bah, that even adds more to the confusion).
You see my point in how you can explain it? It's not a matter of logic; it's a matter of one game mechanic that was changed completely to allow something. That's what you're saying. And then
all you can basically do to explain it to someone is just say "Because Konami said so!" You can't explain it using actual game mechanics.
Re: Priority v. 1.1 (All Read before posting priority quest.)
helpoemer316 said:
Player A summons Tribe-Infecting Virus to the field.
Player B responds with Trap Hole.
Player A chooses to use turn priority to activate Tribe's effect.
Player B's Trap Hole is then added on the chain as link 2.
I think this situation should be rewritten as follows:
Player A summons Tribe-Infecting Virus to the field.
Player A activates the effect of Tribe-Infecting Virus (notice that Player B has not yet had a chance to respond).
Player B chains with Trap Hole.
The other possible situation would be:
Player A summons Tribe-Infecting Virus to the field. He chooses not to immediately activate its effect.
Player B responds with Trap Hole.
It is now too late for Player A to use the effect of Tribe-Infecting Virus
At least this is how I see priority currently.
Re: Priority v. 1.1 (All Read before posting priority quest.)
Here's something new to talk about, I found this on UDE's FAQ and essentially on netrep's FAQ since we mirror UDE.
If your opponent Summons "Breaker the Magical Warrior" and activates "Breaker the Magical Warrior"'s effect to place a Spell Counter on him, and you chain "Enemy Controller" to the effect and take
control of "Breaker the Magical Warrior", resolve "Enemy Controller" first and "Breaker the Magical Warrior"'s effect of placing a Spell Counter second, and the player who used "Enemy Controller" can
use "Breaker the Magical Warrior"'s effect to destroy a Spell or Trap Card.
That's from under Enemy Controller in both FAQ's.
Anyone mind explaining how one can remove a counter on a monster that was summoned by the opponent... during their turn? Last I checked, Breaker wasn't a multi-trigger monster (meaning only spell
speed 1).
Anyone want to explain THIS exception to the rules?
Added in:
beautifulsazuka said:
I think this situation should be rewritten as follows:
Player A summons Tribe-Infecting Virus to the field.
Player A activates the effect of Tribe-Infecting Virus (notice that Player B has not yet had a chance to respond).
Player B chains with Trap Hole.
The other possible situation would be:
Player A summons Tribe-Infecting Virus to the field. He chooses not to immediately activate its effect.
Player B responds with Trap Hole.
It is now too late for Player A to use the effect of Tribe-Infecting Virus
At least this is how I see priority currently.
If Trap Hole is activated in response to Tribe hitting the field, that means it has just hit the field and there has been no time to declare the intent to activate Tribe's effect. Remember,
Trap Hole must be activated upon the summoning; you can't activate it later.
How I explained it is correct: Trap Hole would most assuredly always be activated first in response to the summon, then the chain occurs as the controller of Tribe wishes to use priority.
Re: Priority v. 1.1 (All Read before posting priority quest.)
I have a very important question about this topic. As has been explained previously, a player has priority when they summon; if the opponent activates a trap such as Torrential or Trap Hole when
the turn player summons a monster, without allowing them to declare that they are using the effect, then the opponent played out of turn and has to turn the trap back face-down. This is not
necessarily a "chain". It is playing out of turn. So the first example:
Player A summons Tribe-Infecting Virus to the field.
Player B responds with Trap Hole.
Player A chooses to use turn priority to activate Tribe's effect.
Player B's Trap Hole is then added on the chain as link 2.
Is in itself incorrect, as it makes a base assumption that Player B is playing out of turn. We really shouldn't give examples of a game mechanic with the suggestion that this would be proper game
play. Correctly written, it would need to be:
Player A summons Tribe-Infecting Virus to the field.
Player A has the option to activate Tribe-Infecting Virus's effect.
If the effect is activated then Player B would be allowed to chain an appropriate response to the summon such as Trap Hole.
If the effect is not activated, then Player B would be allowed to start a new chain by activating a response to the summon such as Trap Hole. (If Player B responds with Trap Hole after Player A chose
not to activate the effect, then Player A does not have the right to change his mind and activate Tribe, as that would be attempting to chain a Spell Speed 1 effect to a Spell Speed 2 effect in the
chain.)
Re: Priority v. 1.1 (All Read before posting priority quest.)
Should be and IS are two different answers, I whole heartedly agree with you, but that's just not how the cards are ever played. They're always played the way I have set them up, that's something
some people need to understand while reading my essays.
I write them as they're played; people tend to always jump ahead and respond to the summon before an effect can be activated. In all technicality (even though I said it's arguable either way above),
you can say that summoning is an action that passes priority to an opponent. If that's the case, then Trap Hole, etc. is being activated at a correct time. After the trap is activated, the priority
is passed back, and then you have the exception to the rule allowing the turn player the priority to activate a spell speed 1 effect, such as Tribe's.
With all this in mind, and since you can't put a speed 1 effect anywhere but at the beginning of the chain, the chain would be as I described it above.
There is no clear-cut way to explain it, and the end result is known by everyone, which is the funny thing about it. The rules of priority with summons are more exceptions to pre-existing rules than
anything else actually in the gameplay mechanics of passing priority.
Re: Priority v. 1.1 (All Read before posting priority quest.)
I understand completely that this is how the game is currently played. That comes to the very crux of "Priority": we are going to have to be much more exact in proper execution of steps, allowing
each player to do what they are doing before "jumping the gun". If I draw a card and then say "Oh, I activate MST in your end phase to destroy the card you set", I'm going to probably be told I can't
do that after I have obviously started my draw phase by drawing a card. I can set the drawn card back on the top of the deck and activate my set MST and then start my draw phase by drawing that card,
but that is playing the game with improper knowledge of an event that may or may not happen (ie. knowing what that draw was tells me I'm okay to use the MST where I may have not been willing to do so
and draw into a different card).
The same needs to be true of "Priority". Once you know what your opponent is planning to do, it is unfair to decide at that point whether to use your "Priority". Judges currently give warnings
and then escalate if a player continuously makes illegal plays. Waiting for knowledge of an opponent's response and then deciding to push an effect into a chain is an illegal play. We need to clarify
this before putting down "The Rules of Priority".
Re: Priority v. 1.1 (All Read before posting priority quest.)
First let me say that I agree with you on the whole part about being able to use another spell speed 1 effect, such as Giant Orc + Catapult Turtle VS. Torrential Tribute.
Though, this has also brought another question to mind. I'll try to explain this as best as I can:
1) I summon a monster to the field
2) Since it is logical to assume, under your thought, that summoning does not pass priority, can I then play Pot of Greed (which passes priority)?
3) Assuming such, I believe that it would still be possible for my opponent to activate a card such as Torrential Tribute.
This is due in part to the fact that the last thing to have fully "resolved" (though I know a summoning cannot resolve since it has no speed; it was merely the last thing to have fully happened
without being contested) was the successful summon of a monster. Thus the activation requirements had been fulfilled.
Am I correct in assuming this, or am I totally off the mark?
I guess it looks kind of like this:
Player A: Summons Giant Orc {Last Action to occur FULLY: "Summoning"}
Player A: Still Has Priority {However the last action to FULLY occur: "Summoning"}
Player A: Then "Activates" Pot of Greed {Passes Priority} {Last Action: "Summoning"}
Player B: Responds with Torrential Tribute {Last FULLY occurred action: "Summoning"}
Player A: Receives Priority Back from Player B, and does not Respond.
Link 1: Pot of Greed
Link 2: Torrential Tribute
Link 2: Torrential Tribute Resolves (Destroys all Monsters on the Field)
Link 1: Pot of Greed Resolves (Allowing Player A to Draw 2 cards)
Last Action to occur FULLY: "Drawing of 2 Cards by Player A"
In this scenario, is it safe to assume that cards of the nature of having activation requirements will not have missed their timing as long as the last action to have fully occurred was the
requirement to activate?
If this is the case, then it is necessary to believe that there is a certain "State" to the field, and that this "State" can only change when:
1) An action occurs that changes it (i.e. Summoning)
2) A card "RESOLVES" its effect, not "Activates"
In essence, the "state" cannot change unless an action is performed fully, or a card resolves its effect fully.
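For what it's worth, that proposed "state" idea could be sketched as a toy model (this encodes only the poster's proposal, not an official rule; the class and event names are invented for illustration):

```python
# Toy model of the proposed "state" idea: the field state changes only when
# an action fully occurs or a card fully resolves, and a card may activate
# only if the current state matches its activation requirement.

class FieldState:
    def __init__(self):
        self.last_full_event = "Start of Main Phase 1"

    def complete(self, event):
        """An action fully occurs, or a card fully resolves."""
        self.last_full_event = event

    def can_activate(self, required_event):
        return self.last_full_event == required_event

state = FieldState()
state.complete("Summoning")
print(state.can_activate("Summoning"))   # Torrential Tribute's requirement is met
state.complete("Resolution of Horn of Heaven")
print(state.can_activate("Summoning"))   # too late once the state has changed
```

Under this model, activations in the middle of a chain never change the state; only full resolutions (and fully performed actions) do.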
Re: Priority v. 1.1 (All Read before posting priority quest.)
solitarywolf17 said:
In this scenario, is it safe to assume that cards of the nature of having activation requirements will not have missed their timing as long as the last action to have fully occurred was the
requirement to activate?
If this is the case, then it is necessary to believe that there is a certain "State" to the field, and that this "State" can only change when:
1) An action occurs that changes it (i.e. Summoning)
2) A card "RESOLVES" its effect, not "Activates"
In essence, the "state" cannot change unless an action is performed fully, or a card resolves its effect fully.
Yet this sets up the question, "Can I now activate Magic Drain on your Pot of Greed, then Chain with Horn of Heaven to negate the summon of Giant Orc?"
Re: Priority v. 1.1 (All Read before posting priority quest.)
Hmm... so what you are saying is this (now mind you, this is considering that the first "action" performed was the start of Main Phase 1 by Player A):
P.S. Horn of Heaven does have an activation requirement: a monster must be summoned (including special summoned) in order for it to be activated. Therefore this is still similar to the Torrential
Tribute one I presented, only that this time the last action to occur is returned to the action performed prior to the summoning.
Player A (Me): Summons Giant Orc (Still Retains Priority) {Last Action Occurred Fully: "Summoning"}
Player A: Activates Pot of Greed {Passes Priority} {Last Action Occurred Fully: "Summoning"}
Player B: Chains Magic Drain {Passes Priority} {Last Action Occurred Fully: "Summoning"}
Player A: Does Not Respond
Player B: Chains Horn of Heaven {Passes Priority} (Legal) {Last FULLY performed action: "Summoning"}
Player A: Does Not Respond
Last Fully Performed Action: "Summoning"
Link 1: Pot of Greed
Link 2: Magic Drain
Link 3: Horn of Heaven
Link 3: Horn of Heaven resolves (Negating the summon) {Last action to occur,(Resolution of HoH, State changes- "Start of Main Phase 1")}
Link 2: Magic Drain resolves (Player A discards a spell) {Last Action to occur,(Resolution of Magic Drain- "Discarding of Spell")}
Link 1: Pot of Greed resolves (Player A draws 2 cards) {Last action to occur, (Resolution of PoG- "Drawing of cards")}
Last Action to occur: "Resolution of Pot of Greed (Drawing 2 Cards)"
You see how the only real change throughout the entire chain, aside from the cards themselves, has been the changing of the "State".
Also this next example goes back to the ruling on Magic Cylinder vs. Gravity Bind:
Let's say you do this instead:
Last action occurred fully: "Entering of Main Phase 1"
Player A: Summons Giant Orc (Retains Priority) {Last Action to occur fully: "Summoning"}
Player A: Activates Pot of Greed (Passes Priority) {Last Action to occur fully: "Summoning"}
Player B: Activates Imperial Order (Passes Priority) {Last Action to occur fully: "Summoning"}
Player A: Activates Torrential Tribute (Passes Priority) {Last action to occur fully: "Summoning"}
Player B: Activates Horn of Heaven (Passes Priority) {Last action to occur fully: "Summoning"}
Player A: Does Not Respond (Passes)
Player B: Does Not Respond
Last action to occur Fully, "Summoning"
Link 1: Pot of Greed
Link 2: Imperial Order
Link 3: Torrential Tribute
Link 4: Horn of Heaven
Link 4: Horn of Heaven Resolves (Changes the "State") {Last action to occur fully: Resolution of HoH, State change to "Start of Main Phase 1"}
Link 3: Torrential Tribute Resolves*** (Destroys all monsters on the field) {LAtOF: Resolution of TT, "Destruction of monsters"}
Link 2: Imperial Order Resolves (Negates effects of all spells on the field) {LAtOF: Resolution of IO, "Negation of spells on the field"}
Link 1: Pot of Greed resolves (Tries, but cannot due to IO)
Last Action to Occur Fully: Resolution of IO, "Negation of all Spells on the Field"
***Cards like Torrential Tribute and Magic Cylinder need only to have their activation requirements met in order to activate, not resolve. So even though Horn of Heaven negated the summon, it
cannot stop Torrential Tribute from resolving its effect, since the activation requirements had been met prior to Torrential being activated.
(The same goes for Magic Cylinder vs. Gravity Bind; if you want to know what it is, it is in the card registry under Magic Cylinder.)
The short answer to your question would be "Yes."
To AnthonyJ: However, if you are trying to imply that you could now activate Magic Drain in response to the last action to occur fully, "Resolution of Pot of Greed (Drawing of 2 cards)", then I have
to ask: how can you activate Magic Drain since PoG has already resolved and changed the "State"?
The same thinking goes for Horn of Heaven chained to Magic Drain: since in the thought above the last action to occur fully was the "Resolution of Pot of Greed (Drawing of 2 cards)", how could you
then activate HoH, since the last action would have needed to be a summoning?
Though if this is not what you were trying to get across, then ignore this last bit.
Re: Priority v. 1.1 (All Read before posting priority quest.)
anthonyj said:
solitarywolf17 said:
In this scenario, is it safe to assume that cards of the nature of having activation requirements will not have missed their timing as long as the last action to have fully occurred was the
requirement to activate?
If this is the case, then it is necessary to believe that there is a certain "State" to the field, and that this "State" can only change when:
1) An action occurs that changes it (i.e. Summoning)
2) A card "RESOLVES" its effect, not "Activates"
In essence, the "state" cannot change unless an action is performed fully, or a card resolves its effect fully.
Yet this sets up the question, "Can I now activate Magic Drain on your Pot of Greed, then Chain with Horn of Heaven to negate the summon of Giant Orc?"
You cannot, because just like [Negate Attack] must be the first card activated in response to an attack, [Horn of Heaven] must be the first card activated in response to a summoning, or else the
timing is incorrect.
The short answer is "No".
Additionally, [Pot of Greed] cannot be activated in response to a summon.
Let's understand that the confusion stems from our lack of comprehensive understanding here in North America. This is not a "changed" mechanic, but one that has existed and was not properly explained
to us.
[quote author=helpoemer316]So it only naturally makes sense that a summon is not an action that would require the passing of priority (unlike chains and such, where you must pass after activating a
card in a chain). That would mean the activation of a trap in response to a summon is always illegal, and it becomes legal only if the turn player chooses to activate a card's effect or not to
activate anything at all (since the trap activated still met its conditions; bah, that even adds more to the confusion).[/quote]
Yes, a summoning doesn't automatically pass priority. But it is all about timing. As the Turn Player you can choose from 3 options: pass, activate a Spell Speed 1 effect of the monster summoned (if
there is one), or activate a Spell Speed 2 effect. It must be one of those 3, so if you cannot perform either of the latter 2, you are forced to pass.
Re: Priority v. 1.1 (All Read before posting priority quest.)
But if the turn player gets priority after summoning, how can the opponent activate Horn of Heaven if the turn player uses his priority? Can the opponent ever use Horn of Heaven at all?
Re: Priority v. 1.1 (All Read before posting priority quest.)
Actually Nova Star, no one said anything about activating PoG in response to a summon. The thought was that if summoning a monster to the field does not pass priority to the opposing player, then
isn't it legal for the turn player to activate a card, or a monster effect, before the opponent has a chance to respond with anything?
Re: Priority v. 1.1 (All Read before posting priority quest.)
Raigekick said:
But if the turn player gets priority after summoning, how can the opponent activate Horn of Heaven if the turn player uses his priority? Can the opponent ever use Horn of Heaven at all?
There is no real solid logic behind this (that I can explain).
Essentially, cards like [Horn of Heaven] are given a special timing, that is, prior to the successful summon of the monster. Basically, there is a summon declaration step during which the opponent can
play [Horn of Heaven]. After that, you cannot activate it. This is a step before Cost Effect Priority can be used.
Re: Priority v. 1.1 (All Read before posting priority quest.)
You couldn't chain both Magic Drain and Horn of Heaven; it'd have to be one or the other. What you're creating in your previous example is two different chains at one time, which isn't possible.
What will eventually happen is you'll have to choose to activate either the Magic Drain or the Horn of Heaven; you can't do both. Remember, counter traps (spell speed 3 traps) have to be chained
directly to what they are countering. So if you were to activate Magic Drain, you couldn't then chain Horn of Heaven, and if you were to activate Horn of Heaven, then you couldn't chain Magic Drain.
You get my point?
Re: Priority v. 1.1 (All Read before posting priority quest.)
solitarywolf17 said:
Actually Nova Star, no one said anything about activating PoG in response to a summon. The thought was that if summoning a monster to the field does not pass priority to the opposing player, then
isn't it legal for the turn player to activate a card, or a monster effect, before the opponent has a chance to respond with anything?
Please read everything before responding.
I read it, and what i'm saying, is that there are restrictions on what you can and cannot activate.
So essentially I'm saying that what you proposed is infact illegal.
Re: Priority v. 1.1 (All Read before posting priority quest.)
When a monster is initially summoned, there's a very brief "limbo" state where essentially it's declared but not resolved yet. "Horn of Heaven" or "Solemn Judgment" can be activated at this point
(only) to negate the summon. Because the monster is not considered to have hit the field yet, these traps can be activated even if it is "Jinzo" being summoned.
If either of those cards are not activated at that immediate moment, then the monster is considered properly summoned and the window has been closed. Now other cards in response to a properly
summoned monster can be activated by either player (priority issues aside).
- Andrew
3 Ways to Remove Noise from Data/Signal
By: Abubakar Abid
Removing noise from data is an important first step in machine learning. What can data scientists learn from noise-canceling headphones?
As data scientists and researchers in machine learning, we usually don’t think about how our data is collected. We focus on analysis, not measurement. While that abstraction is useful, it can be
dangerous if we’re dealing with noisy data. A dirty dataset can be a bottleneck that reduces the quality of the entire analysis pipeline.
In this post, I’m not going to talk about data collection. But I do want to talk about what to do if you are given a dirty dataset. Are there any steps you can easily take to improve the quality of
your data? I will mention 3 high-level ideas to denoise data. As we’ll see, each of these methods has an analogue in signal processing, as electrical engineers have been thinking about similar
problems for a long time!
1. Get More Data
The first and simplest thing you can do is to ask the person who gave you the dirty data to give you more of it. Why does having more data help? Won’t it also increase the amount of noise you
have to deal with?
To answer that question, let’s go back to 1948 when Claude Shannon discovered a quantitative relationship between communication capacity and the signal-to-noise ratio of a communication channel – an
equation that started the entire field of information theory. We don’t need the exact relationship here, but the proportionality is: \[C \propto \log_2{(1 + SNR)}\]
This equation is saying that the better your signal-to-noise ratio (\(SNR\)), the more information (\(C\)) you can communicate over the channel per unit time (under certain assumptions). Why are
these two things (\(C\) and \(SNR\)) related? Because, if you have a noisy channel, you can compensate for it by adding redundancy: repeatedly sending your signal and having the person on the receiving
end vote to see what your original signal was. Here’s how that would work:
Suppose you wanted to communicate the digit \(0\) to your friend, but when you text him, there’s a small chance the \(0\) gets “lost” and your friend sees a random digit instead: a \(0\) or \(1\). So
you could just text him the same number 5 times. On the other end, he receives: \[{0, 0, 0, 1, 0}\]
One of the digits has been corrupted, but he can simply take the majority vote, which produces: \(0\). Now, there are more complicated schemes as we’ll see next, but for our purposes, the interesting
thing to note is that our basic voting scheme works even if the noise dominates the signal. In other words, even if the probability that a digit is lost is more than 50%, this still works, because
whereas noise is random, the signal is consistently \(0\). So, in expectation, there will still be more \(0\)’s than \(1\)’s.
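As a toy illustration of the repetition-and-vote scheme (the parameters are hypothetical: 5 repetitions and a 20% chance that a digit is replaced by a random 0 or 1):

```python
import random
from collections import Counter

def send_with_repetition(digit, reps=5, p_corrupt=0.2, rng=None):
    """Send `digit` reps times over a channel that, with probability
    p_corrupt, replaces it with a random 0 or 1; decode by majority vote."""
    rng = rng or random.Random(0)
    received = [rng.choice([0, 1]) if rng.random() < p_corrupt else digit
                for _ in range(reps)]
    # Majority vote: the most common received value is our estimate.
    return Counter(received).most_common(1)[0][0]

print(send_with_repetition(0))  # → 0
```

Even with substantial corruption the vote tends to recover the digit, and the recovery probability grows with the number of repetitions.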
This brings us back to machine learning: why does having more dirty data help? Because if the noise in the data is random (this isn’t always the case; we’ll relax this assumption in Point #3), having
more data will cause the effects of noise to start canceling each other out, while the effect of the underlying true signal will add up.
2. Look at Your Data in a New Light (or “Basis”)
What if collecting additional data is too expensive? It turns out that you can often remove noise from your data, especially if your signal has structure to it. This structure is often
not visible in the original basis [Note: a basis (pl. bases) is the domain used to reveal the information in a signal. Examples of bases are: the time domain (because you can plot an audio signal as
a function of time) and the frequency domain (because you can plot the same audio signal as a function of frequency)] in which you received your data. But a transformation of your data into a new basis can
reveal the structure.
Perhaps the best-known example of this is the application of the Fourier Transform to audio signals. If you take a recording of someone speaking, it’s going to have all sorts of background noise.
This may not be clear from the raw signal. But if you break your signal down by frequencies, you’ll see that most of the audio signal lies in just a few frequencies. The noise, because it’s random,
will still be spread out across all of the frequencies. This means that we can filter out most of the noise by retaining only the frequencies with lots of signal and removing all other frequencies.
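A minimal sketch of this idea with NumPy (the signal, noise level, and the `keep_frac` cutoff are all made up for illustration):

```python
import numpy as np

def fft_denoise(signal, keep_frac=0.05):
    """Zero out all but the strongest `keep_frac` of frequency
    components; noise is spread across all frequencies, so most of
    what we discard is noise."""
    spectrum = np.fft.rfft(signal)
    k = max(1, int(len(spectrum) * keep_frac))
    top = np.argsort(np.abs(spectrum))[-k:]   # k strongest frequencies
    mask = np.zeros_like(spectrum)
    mask[top] = spectrum[top]
    return np.fft.irfft(mask, n=len(signal))

t = np.linspace(0, 1, 500, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)                     # a 5 Hz tone
noisy = clean + 0.5 * np.random.default_rng(0).standard_normal(500)
denoised = fft_denoise(noisy)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```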
What’s the connection with cleaning up data? For this part, it helps to have a specific dataset in mind: so consider a dataset that consists of \(m\) users who have each read \(n\) books and assigned
each one a real-number rating (similar to the Netflix Challenge). This dataset can be succinctly described by a matrix \(A \in R^{m \times n}\). Now, suppose this data matrix is heavily corrupted
(e.g. by Gaussian noise or by missing entries).
Can we clean this dataset up by finding a better basis onto which to project this data? Yes, a very common way is to use singular value decomposition (SVD). SVD allows us to rewrite a matrix in terms of a
new basis: \[A = \sum_{i=1}^{k} s_i u_i v^T_i\]
Here, we can think of each \(u_i v_i^T\) as a basis vector (which can be thought of as representing some particular characteristic of the books that affects users’ ratings, e.g. the ‘amount of
suspense’ or ‘whether the characters are animals’), and each \(s_i\) (singular value) reflects the contribution of that basis in the decomposition of \(A\) (so \(s_i\) would represent the relative
importance of ‘suspense’ to ‘character species’ in the final rating). Here, \(k\) represents the rank of the matrix. However, even if the matrix is full-rank, usually only the first few values of \(s_i\) are large. What this means is that we can ignore all of the basis vectors with small singular values without significantly changing \(A\). (This is the reason PCA works!)
This is really nice, because if the noise is random, it will still be spread out across all of the basis vectors. That means we can discard all of the basis vectors with low singular values, leaving
us with a few basis vectors that the signal is concentrated in, diluting the effect of the noise. Many teams that did well on the Netflix Challenge used this idea in their algorithms.
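A minimal sketch of low-rank SVD denoising (the matrix shape, rank, and noise level below are invented for illustration):

```python
import numpy as np

def svd_denoise(A, rank):
    """Keep only the top-`rank` singular vectors of A, discarding the
    basis vectors with small singular values (where the noise lives)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(0)
true = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 40))  # rank-2 "ratings"
noisy = true + 0.5 * rng.standard_normal((100, 40))                  # Gaussian corruption
approx = svd_denoise(noisy, rank=2)
print(np.linalg.norm(approx - true) < np.linalg.norm(noisy - true))  # True
```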
3. Use a Contrastive Dataset
So far, we have assumed that the noise is random, or at the very least, that it does not have any significant structure in the new basis we have used to transform our data. But what if that’s not the
case? Imagine that you are trying to record audio signal from someone speaking, but there is a jet engine rumbling in the background.
If you were to take the Fourier Transform of the audio signal and only look at the frequencies with the largest amplitude of sound, then you would pick out the frequencies corresponding to the jet
engine instead of the frequencies corresponding to human speech, if the former are louder than the latter. In this case, it's probably more appropriate to think of the rumbling jet engine as a confounding
signal rather than noise, but nevertheless, fundamentally, it represents unwanted signal.
To combat this problem (and make things like noise-canceling headphones possible), electrical engineers have developed adaptive noise cancellation, a strategy that uses two signals: the target
signal, which is the corrupted sound, and a background signal that only contains the noise. The background is essentially subtracted from the target signal to estimate the uncorrupted sound.
The same kind of approach can be used to clean dirty datasets that contain significant background trends that are not of interest to the data scientist. Imagine, for example, that we have genetic
measurements from cancer patients of different ethnicities and sexes. If we directly apply SVD (or PCA) to this data, the top basis vectors will correspond to the demographic variations of the
individuals instead of the subtypes of cancer because the genetic variations due to the former are likely to be larger than and overshadow the latter. But if we have a background dataset consisting
of healthy patients that also contains the variation associated with demographic differences, but not the variation corresponding to subtypes of cancer, we can search for the top basis vectors in our
primary dataset that are absent from the background.
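The background-subtraction idea can be sketched as an eigendecomposition of the difference of covariance matrices. This is a toy version: the function name and the contrast parameter `alpha` are illustrative, not the paper's actual API, and the data is synthetic.

```python
import numpy as np

def contrastive_directions(target, background, alpha=1.0, k=2):
    """Directions with high variance in `target` but low variance in
    `background`: top eigenvectors of cov(target) - alpha * cov(background)."""
    diff = np.cov(target, rowvar=False) - alpha * np.cov(background, rowvar=False)
    w, v = np.linalg.eigh(diff)          # eigenvalues in ascending order
    return v[:, ::-1][:, :k]             # reorder to descending

rng = np.random.default_rng(0)
# Feature 0 varies a lot in both datasets (the "demographic" confound);
# feature 1 varies only in the target (the structure of interest).
background = rng.standard_normal((2000, 3)) * np.array([5.0, 0.1, 0.1])
target = rng.standard_normal((2000, 3)) * np.array([5.0, 3.0, 0.1])
top = contrastive_directions(target, background, alpha=1.0, k=1)
print(np.argmax(np.abs(top[:, 0])))  # → 1: the target-only feature wins
```

Plain PCA on `target` alone would instead pick feature 0, the shared confound.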
Our lab has developed a technique to do this analysis in a principled way. The technique is called Contrastive PCA. Check out our paper for more details and experiments: https://arxiv.org/abs/
2. Let a=3−n where is is a nansi number if z is the leant posi... | Filo
Question asked by Filo student
2. Let where is is a nansi number if is the leant posilive value of $\sqrt{p}+\frac{1}{\sqrt{p}}$
Updated On Mar 30, 2024
Topic All topics
Subject Mathematics
Class Class 9
Answer Type Video solution: 1
Upvotes 104
Avg. Video Duration 5 min
Matter Time, Aethertime
Understanding quantum phase and its decoherence or decay are very important for understanding the mystery of our microscopic quantum reality, but the mystery of quantum phase does not play much of a
role in how things happen in our macroscopic reality. This is because by the time macroscopic things happen, quantum matter phase coherence has usually decayed or collapsed into classical reality.
Macroscopic gravity particles in general relativity have the property of mass but do not have the property of phase. However, quantum particles have both the properties of mass and quantum phase,
which means that quantum matter periodically goes out of and comes back into existence with a complementary spin. There is then a perpetual cycle of matter oscillation that defines the quantum
mystery of existence and relativistic gravity is simply missing this oscillation of matter.
Classical science and relativistic gravity define existence as unchanging matter moving along determinate paths in space and time. The classical determinate path of a particle with relativistic
gravity does not change unless acted on by some other force. Quantum science, though, defines existence as a perpetual matter action or oscillation that never stops, and so there is an inherent uncertainty in the path of every quantum particle through space...even without any other action force.
Quantum gravity has the same oscillation of matter as quantum charge, but this oscillation must always be along a determinate relativistic gravity path in space and time. This makes quantum gravity
particles uncertain in matter and action, but not in path, which therefore remains determinate. Space and time actually emerge along with the determinate path of a gravity particle from the matter and
action of quantum gravity. As a result, it is matter and action that define space and time and therefore also define quantum gravity. Unlike relativistic gravity, quantum charge acts upon itself as
well as upon other particles. Quantum gravity, though, is necessarily complementary and so the quantum gravity of a particle does indeed act upon itself.
Just as quantum spin represents the action of a quantum particle upon itself with photons, the spin of quantum gravity represents the action of a quantum gravity particle upon itself with biphotons.
It's just that the states of quantum gravity are some 10^39 times weaker than those of quantum charge. Gravity particle wavefunctions then show dispersions that span the universe, and it is convenient to
use biphoton exchange for gravity quadrupoles just as single photon exchanges drive charge force. There is a photon of charge exchange that binds every atom of matter and that exchange photon
entangles with its complementary emitted photon from creation at the CMB.
Just as there is an uncertainty with quantum spin, there is a corresponding uncertainty with gravity quadrupole spin driven by gravity self energy. However, the complementary effect of gravity bodies
on each other means that there are still determinate paths for those bodies. The complementary determinate paths of two gravity bodies, though, are still subject to uncertainties in matter and action
along those paths.
It is therefore not possible to precisely measure both the matter and the action of two orbiting bodies even though it is possible to know their respective paths though space and time with arbitrary
precision. It is only the noise of chaos that limits measurements of gravity paths and it is the noise of quantum phase that limits measurements of matter and action.
Perpetual photon exchange binds every atom today from the emission of a photon of light at the CMB creation when electrons bonded to protons and other matter. Those two events are entangled with each
other and define the size of the universe with a biphoton gravity quadrupole. The coupling between the emission of CMB photons and the photon exchange of stable atoms is the mystery of quantum
gravity. This means that gravity force depends on the size of the universe and since the size of the universe changes over time, gravity therefore also depends on time.
Typical descriptions of what is often called the mystery of quantum particle dispersion often do not include any description of phase or of phase decay. This is odd because quantum phase and quantum
phase decay are really at the root of the quantum mystery. Classically, a single particle is in a knowable state even though it can be in either of two states or places. Once an observer measures
that particle state, it is then certain that the particle was always in that knowable measured state even before the measurement.
A quantum particle, however, can be in a superposition of two states or places and when an observer measures the particle state, the particle collapses into just one state or place. However, the
particle was perpetually oscillating and therefore was never in just one knowable state or place before the observation. Even when an observer sees a quantum particle on one path, that does not mean
that the quantum particle was not perpetually oscillating. Rather it means that the quantum particle was on a superposition of both paths until the observer saw it and that quantum coherence decayed
into one state.
Much quantum knowledge is therefore unknowable and therefore quantum knowledge involves both knowable classical knowledge as well as the unknowable. However, we do have a quantum intuition that also
represents choices that we make by our gut or instinct. Thus, our knowledge, reason, and intuition all contribute to our wisdom and the choices that we make.
Bar Bending Schedule for Tie Beams/Strap Beams | BBS of Tie Beam
In this article we will cover the Bar Bending Schedule (BBS) for a Tie Beam or Strap Beam, i.e. the estimation of the steel reinforcement quantity required.
What is bar bending Schedule ?
Bar Bending Schedule (BBS) is basically the representation of the bend shapes and cut lengths of bars as per the structural drawings. BBS is prepared from the construction drawings. A separate BBS is
prepared for each member because bars are bent into various shapes depending on the shape of the member.
Bar Bending Schedule for Tie Beam / Strap Beam:
The Bar Bending Schedule plays a vital role in finding the quantities of the reinforcement required for the building. In order to understand the tie beam/strap beam reinforcement in the substructure, I
recommend first learning the Bar Bending Schedule for footings.
Tie Beam and Strap Beam:
A Tie Beam (straight beam) is a beam which connects two footings in the substructure. A tie beam is provided when the two footings are in the same line. A Strap Beam (inclined beam) is similar to a tie
beam but connects two footings at a certain angle. A strap beam is laid when the two footings are at different levels. Tie beams/strap beams are typically located between pile caps and shallow
foundations. Their primary function is to force all shallow foundations or pile caps to have approximately the same settlement.
Quantity of reinforcement (steel) required for Tie Beam/ Strap beams or Bar bending schedule for Tie Beam/ Strap Beam:
In this post, I am finding out the Estimation of Steel reinforcement in Tie Beam or Strap Beam / Bar Bending Schedule for Tie Beam/ Strap beam. For this, I considered a plan as shown below. The
horizontal bars which tie one footing to the other are main bars, and the vertical bars are called stirrups. Stirrups help hold the main bars in the correct position.
Before getting into this article I recommend you to remember these Important Points to understand the reinforcement in tie beams:
1. Main bars (top bars, bottom bars, side bars) are tied from the center of one footing to the center of the other footing.
2. Whereas stirrups run from one face of a footing to the face of the other footing. Refer to the image below for a clear view of how the reinforcement is tied in a tie beam.
Steps to be followed while finding out the total wt. of steel required for constructing Tie beams / Strap beams:
Tie beam reinforcement calculation is divided into two parts Main bars and stirrups.
Part-I:- Main Bars
1. Check the Length of Main bars in top, bottom, side bars.
2. Then Check the No. of Main bars in top, bottom, side bars
3. Check the Diameter of Main bars in top, bottom, side bars
4. Calculate the total length of Main bars in top, bottom and side direction.
5. Find the total weight of the main bars.
Part-II:- Stirrups
1. Deduct the concrete cover from all sides of tie and find out the length of stirrup.
2. Calculate the length of stirrup including hook.
3. Calculate the total no. of stirrups.
4. Find the total length of stirrups
5. Then Calculate the total wt. of stirrups.
The length, diameter and number of bars are adopted and designed by the structural engineer by carrying out the load analysis.
Consider the below shown Figure:
Assume for calculation:
Dia of Top Bars = 10mm, Dia of Bottom bars = 10mm ,Dia of Side bars = 8mm, Dia of Stirrups = 6mm, Spacing between ties = 0.1m. No. of Top Main bars = 4, No. of Bottom Main bars = 4 , No. of Side
Main bars = 2
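Using the standard unit-weight formula for reinforcement, w = D²/162 kg per metre (D in mm), the main-bar weight of one tie beam can be sketched as follows. The 4.5 m centre-to-centre length below is an assumed value for illustration, not taken from the plan above.

```python
def bar_weight(dia_mm, length_m, count=1):
    """Weight of reinforcement in kg, via the standard unit-weight
    formula w = D^2 / 162 kg per metre (D in mm)."""
    return dia_mm ** 2 / 162.0 * length_m * count

# Hypothetical tie beam, 4.5 m centre-to-centre of footings:
top_bars    = bar_weight(10, 4.5, count=4)
bottom_bars = bar_weight(10, 4.5, count=4)
side_bars   = bar_weight(8, 4.5, count=2)
print(round(top_bars + bottom_bars + side_bars, 2))  # → 25.78
```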
Calculation for the Quantity of Tie beams (Main Bars):
Hyp² = Adj² + Opp²
Hypotenuse = length of the strap beam.
Part-I: Calculate the total weight required for the main bars (refer to the figure).
Apply the above method for all the tie beams in Horizontal and the vertical axis. the result of all tie beams have been entered in the below table.
Calculation for the quantity of Tie Beams (Stirrups):
As I have already mentioned, stirrups run from one face of a footing to the face of the other footing.
Tie Beam on Axis I between A-B:
Below are the five steps for finding the quantities of Tie beam (Stirrups)
1. Deduct the concrete cover from all sides of the tie beam to find out the length of each tie. From the figure it is clear that the reinforcement details of tie beams on the horizontal axis and
the vertical axis are different. Accordingly, deduct a concrete cover of 0.05 from all sides of the stirrups for horizontal-axis tie beams and 0.025 from all sides of the stirrups for
vertical-axis tie beams.
Apply the above method for the remaining tie beams. The results are mentioned below. Check your result with the table below.
Abstract For Finding out the total quantity of Steel reinforcement required for Tie Beam/Strap Beam for the above given plan:
Hence, Total wt of Steel 512.81 Kgs required for Tie beam/Strap Beam (for above plan).
Thank you for reading this full article on the Bar Bending Schedule for a Tie Beam or Strap Beam on "The Civil Engineering" platform. If you find this post helpful, then help others by sharing it on
social media. If you have any question regarding the article, please tell me in the comments.
Linear Regression Applied
This post looks at some examples of linear regression, as introduced in a previous post.
To summarize the objective and the notation: a set of $n$ data points $(\mathbf{x}_i, y_i)$, $i=1,\ldots,n$, is given with $\mathbf{x}_i \in \mathbb{R}^p$. We now have the optimization problem
$\argmin_{\mathbf{f \in \mathbb{R}^p}} \| \mathbf{y} - \mathbf{X}^T \mathbf{f} \|_2$
with $\mathbf{y} = (y_1, y_2, \ldots, y_n)$, $\mathbf{X} = [ \mathbf{x}_1 \; \mathbf{x}_2 \; \cdots \; \mathbf{x}_n ] \in \mathbb{R}^{p \times n}$.
Note that it is sometimes useful to consider the columns of $\mathbf{X}^T$ as feature vectors. With $[\textbf{v}_1 \; \textbf{v}_2 \; \ldots \; \textbf{v}_p] = \mathbf{X}^T$ we see that $\textbf{v}_j$ contains the $j$-th component of all data points. Doing linear regression now becomes: find the linear combination of the feature vectors that best approximates the target vector $\mathbf{y}$.
As example data we use the following points $y_1, \ldots, y_{100}$ as target/output values:
Note that in this plot, and all the following related to these particular data, the coordinate along the first axis for the $i$th point is $t_i$, where the $t_i$'s are evenly spaced with $t_1=0$ and
Simple Linear Regression
By using $\mathbf{x}_i = (1, t_i)$ we get the optimization problem
$\argmin_{f_1, f_2 \in \mathbb{R}} \sum_{i=1}^{100} \left| y_i - (f_1 + f_2 t_i) \right|^2$
which corresponds to fitting a line to the data points. This is the most common form of linear regression and is often called simple linear regression. By considering the feature vectors of $\mathbf
{x}_i$ we see that we have a constant vector and a vector with the $t_i$-values as components:
The solution to this optimization problem (a closed-form formula is available for this special case) can be visualized as follows:
Not a particularly good fit, but it is the best we can do with a line.
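A sketch of this fit with NumPy's least-squares solver (the target values `y` are made up for illustration):

```python
import numpy as np

t = np.linspace(0, 8, 100)
y = 0.5 * t + np.sin(t)                     # hypothetical target values
XT = np.column_stack([np.ones_like(t), t])  # feature vectors: constant, t
f, *_ = np.linalg.lstsq(XT, y, rcond=None)  # minimizes ||y - X^T f||_2
print(f.shape)  # (2,): intercept and slope
```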
Fitting a Cubic Polynomium
A line is not a particularly flexible model, so let us try a cubic polynomium instead. We use $\mathbf{x}_i = (1, t_i, t_i^2, t_i^3)$ and get the following four feature vectors:
The solution to the optimization problem now leads to a much better fit:
Piecewise Linear Features
There is no need to limit ourselves to polynomials. Let us consider these (continuous) feature functions:
\begin{aligned} p_1(x) &= 1, \\ p_2(x) &= \max \{ 2-x, 0 \}, \\ p_3(x) &= \max \{ x-6, 0 \}, \end{aligned}
each defined for $0 \leq x \leq 8$. We can now sample these functions for each value of $t_i$ to obtain the input vectors, $\mathbf{x}_i = (p_1(t_i), p_2(t_i), p_3(t_i))$.
The feature vectors now look like this:
It is easy to see that any linear combination of these vectors will have a constant value for $2 < t < 6$, but that may be ok:
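A sketch with these three feature functions (the target values are invented; the point is that the fit is exactly constant on $2 < t < 6$):

```python
import numpy as np

t = np.linspace(0, 8, 100)
rng = np.random.default_rng(0)
y = np.where(t < 2, 4 - t, np.where(t > 6, t - 4, 2.0))  # hypothetical target
y = y + 0.1 * rng.standard_normal(t.size)

XT = np.column_stack([np.ones_like(t),        # p1
                      np.maximum(2 - t, 0),   # p2
                      np.maximum(t - 6, 0)])  # p3
f, *_ = np.linalg.lstsq(XT, y, rcond=None)
fit = XT @ f
# Both hinge features vanish on 2 < t < 6, so the fit is constant there.
print(np.ptp(fit[(t > 2) & (t < 6)]))  # → 0.0
```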
Fitting an Ellipse
Linear regression, however, is useful for more than just fitting real functions to some data points.
Consider the equation of an ellipse:
$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1.$
Now consider the following set of points $(u_i, v_i)$, $i=1,\ldots,100$, in the $(u,v)$-plane:
Is it possible to find the coefficients $f_1, f_2$ such that $f_1 u_i^2 + f_2 v_i^2 = 1$ for all $i$? Obviously not, since the points cannot possibly lie on the circumference of a single ellipse. But
we can find the coefficients such that $f_1 u_i^2 + f_2 v_i^2$ comes close to $1$ in the least squares sense.
We do this by setting $\mathbf{x}_i = (u_i^2, v_i^2)$ and $y_i = 1$ for $i=1,\ldots,100$. Solving the optimization problem now leads to values of $f_1$ and $f_2$ and this means that the semi-major
and the semi-minor axis of the "best" ellipse are given by $1/\sqrt{f_1}$ and $1/\sqrt{f_2}$.
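A sketch of this ellipse fit (the sample points are generated around a hypothetical ellipse with $a=3$, $b=2$ plus noise):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
u = 3 * np.cos(theta) + 0.1 * rng.standard_normal(100)  # semi-major a = 3
v = 2 * np.sin(theta) + 0.1 * rng.standard_normal(100)  # semi-minor b = 2

XT = np.column_stack([u ** 2, v ** 2])    # x_i = (u_i^2, v_i^2)
y = np.ones(100)                          # every target value is 1
(f1, f2), *_ = np.linalg.lstsq(XT, y, rcond=None)
a, b = 1 / np.sqrt(f1), 1 / np.sqrt(f2)
print(a, b)  # close to 3 and 2
```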
For the data points shown above we get the following ellipse:
(All the computations and plots in this post can be found as a Kaggle notebook.)
COS 226 Programming Assignment
8 Puzzle
Write a program to solve the 8 puzzle problem.
The problem. The 8 puzzle is a game invented by Sam Loyd in the 1870s. It is played on a 3-by-3 grid with 8 square tiles labeled 1 through 8 and a blank square. Your goal is to rearrange the tiles so
that they are in order. You are permitted to slide tiles horizontally or vertically (but not diagonally) into the blank square. The following shows a sequence of legal moves from an initial board
configuration to the desired solution.
A solution. We now describe a classic solution to the problem that illustrates a general artificial intelligence methodology known as the A* algorithm. First, insert the starting board position into
a priority queue. Then, delete the board position with the minimum priority from the queue. Insert each neighboring board position onto the queue. Repeat this procedure until one of the board
positions dequeued represents a winning configuration.
The success of this method hinges on the choice of priority function. A natural priority function for a given board position is the number of tiles in the wrong position plus the number of moves made
to get to this board position. Intuitively, board positions with low priority correspond to solutions near the target board position, and we prefer board positions that have been reached using a small
number of moves.
We make a key observation: to solve the puzzle from a given board position on the queue, the total number of moves we need to make (including those already made) is at least its priority. As soon
as we dequeue a board position corresponding to a winning configuration, we have not only discovered a solution to the 8 puzzle problem, but one that uses the minimum number of moves. Why is this true?
Well, any other solution must begin with one of the board positions already on the queue. But, each one of these takes at least as many moves as its priority, and we always remove the one with the
minimum priority.
A useful trick. When you use the above method, you will notice one annoying feature: the same board configuration is repeated many times. One way to fix this is to keep track of all the board
configurations that have already been explored and avoid repeating any. A simpler strategy is to avoid revisiting the board position that led you directly to the current board position.
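The procedure (priority queue, dequeue-minimum, enqueue neighbors, skip the predecessor) can be sketched as follows. This is an illustrative sketch, not the assignment's required solution; it uses the number-of-misplaced-tiles priority described above.

```python
import heapq
import itertools

def neighbors(board):
    """Boards reachable by sliding one tile into the blank (0)."""
    n = len(board)
    flat = [t for row in board for t in row]
    z = flat.index(0)
    r, c = divmod(z, n)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < n and 0 <= nc < n:
            swapped = flat[:]
            swapped[z], swapped[nr * n + nc] = swapped[nr * n + nc], 0
            yield tuple(tuple(swapped[i * n:(i + 1) * n]) for i in range(n))

def hamming(board):
    """Number of tiles in the wrong position (blank not counted)."""
    flat = [t for row in board for t in row]
    return sum(1 for i, t in enumerate(flat) if t != 0 and t != i + 1)

def solve(start, goal, h):
    """A*: dequeue the minimum-priority board, enqueue its neighbors,
    and stop when the goal is dequeued; the predecessor board is skipped
    to avoid immediately undoing the last move."""
    tie = itertools.count()               # tie-breaker for the heap
    pq = [(h(start), next(tie), 0, start, None)]
    while pq:
        _, _, moves, board, prev = heapq.heappop(pq)
        if board == goal:
            return moves
        for nb in neighbors(board):
            if nb != prev:
                heapq.heappush(pq, (moves + 1 + h(nb), next(tie),
                                    moves + 1, nb, board))
    return None

goal = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
start = ((1, 2, 3), (4, 5, 6), (0, 7, 8))
print(solve(start, goal, hamming))  # → 2
```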
Manhattan distance. You can decrease the number of nodes examined by using a more refined heuristic. The Manhattan distance between a tile and its desired position is the minimum number of moves it
would take to move the tile to its desired position assuming you don't have to worry about other tiles. The Manhattan distance priority of a given board position is the sum of the Manhattan distances
between tiles and their desired positions, plus the number of moves made to get to this board position. As before, if we solve the puzzle from a given board position on the queue, the total number of
moves we need to make is at least its priority. This ensures that we find a solution to the 8 puzzle that uses the minimum number of moves.
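The heuristic part of the Manhattan priority can be computed like this (assuming the goal places tiles 1 through N²-1 in row-major order with the blank last); the full priority is then this value plus the number of moves made so far:

```python
def manhattan(board):
    """Sum over tiles of the Manhattan distance to the tile's goal cell,
    assuming tiles 1..N*N-1 belong in row-major order, blank last."""
    n = len(board)
    total = 0
    for r, row in enumerate(board):
        for c, tile in enumerate(row):
            if tile != 0:
                goal_r, goal_c = divmod(tile - 1, n)
                total += abs(r - goal_r) + abs(c - goal_c)
    return total

print(manhattan(((1, 2, 3), (4, 5, 6), (7, 8, 0))))  # → 0
print(manhattan(((0, 1, 3), (4, 2, 5), (7, 8, 6))))  # → 4
```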
Input format. The input will consist of the board size N followed by the N-by-N initial board configuration. The integer 0 is used to represent the blank square. As an example,
Output format. Print the sequence of board positions.
It's fine to print the board positions vertically instead of horizontally. Also print out the number of moves and the number of nodes dequeued.
Good hash for pointers
Malcolm McLean
2024-05-23 11:11:19 UTC
What is a good hash function for pointers to use in portable ANSI C?
The pointers are nodes of a tree, which are read only, and I want to
associate read/write data with them. So potentially a large number of
pointers, and they might be consecutively ordered if they are taken from
an array, or they might be returned from repeated calls to malloc() with
small allocations. Obviously I have no control over pointer size or
internal representation.
Check out Basic Algorithms and my other books:
166/164 Applied Measurements for Electrical Instrumentation
For Whom Intended Engineers, scientists, and managers, as well as aides and technicians. This course will be of interest to personnel involved in making or understanding experimental test
measurements. Some background in electronics is helpful but is not essential. The course will be tailored to student objectives.
Objectives This course provides a basic understanding of electrical measurement systems, as well as the engineering concepts for the whole measurement system. It provides an introduction to the many
varieties of meters, 'scopes and transducers available, their operating principles, strengths and weaknesses. A variety of measurands and device types is covered, as well as signal conditioning,
recording and analysis. It covers climatic measuring systems and reviews dynamic theory, which is essential for a better understanding of the measurand under consideration.
One of the course objectives is to give students enough applications information that they can select optimum meters, transducers, amplifiers, recording and readout devices to assemble a system for
routine measurements of electrical phenomena. The problems of signal noise, accuracy and error are covered in some depth before continuing on to spectral analysis, sampling and discussion of aliasing
problems, filter types and anti-aliasing solutions.
The uncertainty surrounding the value of the measurand is discussed and an introduction to statistics as applied to engineering is covered.
One of the most difficult tasks for the measurement engineer is the selection of the proper instrumentation system. A procedure for attaining this goal is discussed and a typical instrumentation
selection list developed.
While calibration is beyond the scope of this course, a procedure for calibrating a sensor device is developed and discussed.
Mainly lectures, supported by slides, transparencies, videotapes and sample hardware. Students are expected to participate in classroom discussions, as well as read text materials and class notes.
The course emphasizes a non-mathematical approach to understanding concepts and mechanisms.
Participants are encouraged to bring a specific measurement problem to class for discussion.
DIPLOMA programs This course is required for TTi’s Electronic Design Specialist (EDS) and Mechanical Design Specialist (MDS) Diploma Programs. It satisfies the course 164 or 166 requirement(s) for
TTi’s Data Acquisition & Analysis Specialist (DAS), Electronic Telecommunications Specialist (ETS), Instrumentation Test Specialist (ITS) and Metrology/Calibration Specialist (MCS) Diploma
Programs, and may be used as an elective for any other TTi diploma program.
Related Courses Course 164/166 combines Course 164, Electrical Instrumentation for Test & Measurement with course 166, Applied Measurements. Course 163, Instrumentation for Test and Measurement
covers some of the same material, with more emphasis on dynamics.
Prerequisites There are no definite prerequisites, but participation in TTi’s course Electronics for Non-Electronic Engineers or the equivalent would be helpful. This course is aimed toward
individuals actively involved in related technical fields.
Text Each student will receive access to the on-line electronic course workbook, including most of the presentation slides. An initial subscription is included in the price of the course and renewals
are available for an additional fee. Printed textbooks are also available for purchase.
Course Hours, Certificate and CEUs Class hours/days for on-site courses can vary from 21–35 hours over 3–5 days as requested by our clients. Upon successful course completion, each participant
receives a certificate of completion and one Continuing Education Unit (CEU) for every ten class hours.
Internet Complete Course 166/164 features over 14 hours of video as well as more in-depth reading material. All chapters of course 166/164 are also available as OnDemand Internet Short Topics. See
the course outline below for details.
Click for a printable course outline (pdf).
Course Outline
Chapter 1 - Introduction to Instrumentation for Electrical Test and Measurement
• Accurate Measurements
• Case Study Procedure
• Sensors and Systems
• Components of an Instrumentation System
• Functional Components of a Measurement Chain
• Basic Radio Telemetry System—Block Diagram
• Carrier Modulation
Chapter 2 - Types of Data Signals
• Periodic Signals
• Sinusoidal Signals
• Sine Wave as Projection of Rotating Vector
• Complex Signals
• Square Wave Signals
• Complex Spectrum of a Periodic Time Function
• Transient Signals
• Complex (Pyroshock) Time History
• Random Signals
• Power Spectral Density
• Examples of Time vs. Frequency Spectra
• Understanding rms
• Average and RMS Values of Common Waveforms
• Language of Digital Measurement Systems
• Digital Data Nomenclature
• Digital Codes
Chapter 3 - Noise
• Noise Signal, Gaussian Distribution
• Detecting a Weak Signal
• Noise Calculations
• Noise Suppression for Sensor Signals
• Noise Figure and Distortion
• Electronic Noise Measurements
• Phase Noise
• Phase Noise Display
• Phase Noise in Communications
• The Noise Corner Frequency
• External Noise Sources
• Common Electrical Noise from External Sources
• Types of Noise
• Shot (or Schottky) Noise
• Thermal (or Johnson) Noise
• Flicker (1/f) Noise
• Burst Noise
• Avalanche Noise
• Noise Should be Viewed as a Vector Quantity
• Noise Colors
Chapter 4 - Decibels (dB), Logarithmic vs. Linear Scaling, Frequency Spectra, Octaves
• Understanding Decibels (dB) and Octaves
□ Decibels—Power Ratio
□ Decibels—Voltage Ratio
□ dB Ratio Conversions
☆ Reference Levels for Decibel Notation
☆ Adding Two Power Ratios in dB
□ Logarithmic vs. Linear Scaling
• Introduction to Frequency, Octaves and Sound
□ Sound Perception
☆ Frequency Spectra for Various Noise Sources
Chapter 5 - Parameters of Linear Systems
• Frequency Response
• Dynamic Range and Linearity
□ Non-Linear Mechanical System
□ Non-Linear Systems
□ Input-Output Characteristic Curve
□ Distortion of a Sine Wave
□ Typical Linearity Curve of an Instrument
□ Design/Performance Characteristics of Sensors
□ Methods of Computing Linearity
• Signal and Spectrum Before and After Clipping
• Effects of Inadequate Frequency Response
• System Response to a Rectangular Pulse
• Low-pass, High-pass, Bandpass and Notch Filters
• Phase Response
• Response of a Linear Network to a Sine Wave
Chapter 6 - Accuracy and Error
• Accuracy, Calibration and Error Assessment
• Common Terms
• Accuracy vs. Precision
• Classification of Errors
• Error Assessment
• Improper Functioning of Instruments
• Effect of Transducer on Process
• Dual Sensitivity Errors
• Minimizing Error
Chapter 7 - Safety, Grounding, Circuit Protection, Input/Output Impedance, Power Transfer
• Laboratory Practice—Safety
□ Effects of 60 Hz electric shock on the human body.
□ Safety Rules
• Grounding
□ Types of Grounds
□ Grounds—Three Wire Outlet
• Circuit Protection Devices
• Input Impedance, Output Impedance and Loading
• Input Impedance
• Loading Errors
• Input Impedance and Loading
• Input and Output Impedance
• Equivalent Resistance
• Equivalent Resistance and Output Resistance
• Power Transfer and Impedance Matching
Chapter 8-1 - Analog and Digital DC and AC Meters
• Analog DC and AC Meters
• D’Arsonval Galvanometer Movement
• Electrodynamometer Movement
• Analog DC Ammeters
□ Analog DC Ammeters—Example
□ Analog DC Ammeter—Solution
□ Analog DC Ammeter—Shunts
□ Analog DC Ammeters—Problem
• Analog DC Voltmeter—Multiplier
□ Analog DC Voltmeter—Sensitivity
□ Analog DC Voltmeter—Problem
□ Sensitivity of a voltmeter
• AC Ammeters and Voltmeters
□ Alternating Current
□ AC Ammeters and Voltmeters—Problem
□ AC Ammeters and Voltmeters—Solution
• RMS Responding Meters
• Peak Responding AC Meters
• Analog Multimeter
• Special-Purpose Analog Meters
• How to Use Meters
• Meter Errors
• Digital Electronic Meters
Chapter 8-2 - Digital Measurement Instruments: Digital Multimeter Operation
• Digital Multimeter—Agilent 3458A
• Digital Multimeter—Agilent 34401A
• Calibrator— Keithley Model 263
• Current-to-Voltage Converter—SR570
• Agilent 3458A Digital Multimeter
□ Power Requirements
□ General Purpose Interface Bus (GPIB Bus)
□ Power-on Self Test, Ranging
□ Display, Function Keys
□ Self-Test
□ Remote Operation — GPIB
□ Display/Use GPIB Address
□ Calibration
□ High Resolution Digitizing
• LabView Graphical Solutions
Chapter 8-3 - Making Measurements with a Digital Multimeter
• Agilent 3458A Digital Multimeter
• Connection Configuration
• Guarding
• Measuring DC Voltage
• AC or AC+DC Voltage
• Measuring DC Current
• Measuring Resistance
□ 2-wire Ohms Measurements
□ 4-wire Ohms Measurements
• A/D Converter
□ A/D Reference Frequency
□ A/D Integration Time
□ A/D—Power Line Cycles
□ A/D—Specifying Resolution
• Autozero Function
• Offset Compensation
• Identifying Resistors
□ Color Codes For Resistors and Capacitors
Chapter 8-4 - Guarded Voltmeter
• Guard Shields
• Grounded Measurement
□ Grounded Measurement with a Common-Mode Voltage
• Floating Measurements
□ Inside an Ideal Floating Voltmeter
□ More Realistic View of a Floating Voltmeter
• Guarded Voltmeter
□ Connecting the Guard
□ Guard Connection to Low at Voltmeter
□ Guard Connected to Earth Ground
□ Don’t Leave The Guard Open
• Bridge Measurement
□ Guard Connected to Low at Voltmeter Input
□ Guard Connected to Low at the Bridge
□ Guard Connected to Ground at the Bridge
□ Driving the Guard in a Bridge Measurement
□ Summary
Chapter 9 - Oscilloscopes
• Analog Oscilloscopes
□ Analog Oscilloscope Display Screen
□ Analog Display Subsystem
• Making Measurements with an Analog Oscilloscope
□ Analog Voltage Measurements
□ Analog Time and Frequency
□ Analog Phase Measurements
□ Analog Pulse Measurements
• Lissajous Patterns
• Digital Oscilloscope
□ Digital Oscilloscope—Two Channel
□ Digital Oscilloscope Considerations
Chapter 10 - Time and Frequency Measurements
• Frequency Measurement
• Counter Resolution
• Period Measurement
• Portable 18-Channel Data Acquisition Recorder
• Portable Data Acquisition Recorder
• Portable Hybrid Recorder
Chapter 11 - Power and Energy Measurements
• Introduction to Power and Energy Measurements
• Power in AC Circuits
□ Example
□ Single-Phase Power Measurements
□ Errors—Dynamometer Wattmeters
□ Measuring P_avg and P_apparent Simultaneously
• Polyphase Circuits
□ Phasor Voltages
□ Three-phase Y-connected Generator
□ Three-phase Generator Example
□ Three Phase delta-Connected Generator
□ Polyphase Power and Measurements
□ Polyphase Measurements
• Using a Dynamometer to Measure Power
• Power Measurements at Higher Frequencies
Chapter 12 - Wheatstone Bridges
• Basic Laws of Networks
• Voltage Divider Circuit
• Thevenin’s Theorem
• Methods of Measurement
• Bridge Circuits
• Wheatstone Bridge
• Application of Thevenin’s Theorem in a Wheatstone Bridge Circuit—Example
• Voltage-Sensitive Bridges
• Current-Sensitive Bridges
• Bridge Sensitivity
• Three-Wire Bridge: Compensation for Leads
• Effects of Temperature Change on Sensing Resistor
• Four Sensing Resistors in a Wheatstone Bridge
• Shunt Calibration
• Voltage Insertion Calibration
• Strain Gage Compensation
• Guidelines for Setting up the Bridge Measurement System
• Wheatstone Bridge
• AC Bridges—Classic Inductance Bridges
• Classic Capacitance Bridges
Chapter 13 - DC and AC Signal Sources
• Batteries in Series and Parallel
• Power Supply with a Regulator
• DC Power Supply Specification
• How to Use a Power Supply
□ Proper Connections with Multiple Loads
• Kinds of Oscillators
□ Oscillator Configurations
• Sweep-Frequency Generators
• Square Wave
□ Pulse Shape of Square Wave
□ Use of Square Waves in Testing
• Function Generators
□ Testing with a Function Generator
• HP 33120A Function Generator
□ Waveforms Generated By Function Generator
□ RMS Waveform
□ Signal Generation Process
□ Equivalent Circuit
□ Output Resistance and Load Resistance
□ Front Panel
□ Front Panel Number Entry
□ Frequency, Amplitude Selection
□ Offset Voltage Selection, Duty Cycle
□ Modification of Standard Waveforms
□ BenchLink and User-defined Arbitrary Waveforms
□ Specifications
Chapter 14 - Sensors / Transducers
• It starts with the user and the sensor
□ Characteristics of the Ideal Transducer
□ Mechanisms in General
• Displacement—Direct Measurement
• Strain Gauge
• Silicon Semiconductor Transducers
• Accelerometers
• Linear Variable Differential Transducer (LVDT)
• Potentiometric Transducers
• Piezoelectric Transduction
• Velocity Sensing Module
• 4–20 mA loop
Chapter 15 - Introduction to Measurement Engineering
• Definitions
• Preparing to Make Measurements
• Open and Closed Loop Systems
• Analog and Digital
• Transfer of Energy
• Measurement System Responses
Chapter 16-1 - Climatic Measurements: Temperature
Chapter 16-2 - Climatic Measurements: Humidity
Chapter 16-3 - Climatic Measurements: Pressure
Chapter 16-4 - Climatic Measurements: Flow
Chapter 17 - Review of Dynamic Theory
• Laws of Motion
• Weight, Mass and Gravity
• Force, Mass and Acceleration
• Work, Power
• Energy
• Linear and Angular Displacement; Linear Velocity
• Tangential Acceleration
• Torque
• Stress and Strain
• Simple Tension or Compression
• Shear Strain
Chapter 18 - Reducing Signal Noise
• Unwanted Signals
• Shield Strategies
• Twisted Pair
• Electrical Noise: High Signal Source Impedance
• Low Signal Source Impedance
• Source Shunting
• Parallel Conductors
• Twisted Conductors
• Microvolt-Level Signal Cables
• Ground Loops
• Eliminating Multiple Grounds
• A Stable System Ground
• Amplifier Guard Shield
• Common-Mode Rejection
Chapter 19 - Spectral and Fourier Analysis
• Spectral Analysis
• Sinusoidal, Complex and Random Signals
• Phase of Frequency Domain Components
• Time and Frequency Domain
• Fourier Analysis
• Adding Two Signals-Using RMS Values
• The Fourier Transform
• Discrete Fourier Analysis
• FFT
• Classification of Types of Data
• Random Signals
• Correlation
• Cross-Correlation, Coherence
• Auto Spectral Density (ASD)
• Power Spectral Density
• Calculating RMS From PSD
Chapter 20 - Signal Analysis and Aliasing
• Signal Acquisition
• Shannon's Theorem and Corollaries
• Aliasing Viewed as Folding
• Where Does the Aliased Data Appear?
• Example: Sine Signal
• Aliasing/Multiple Folding
• Digitizing "Rules"
• Interpolation: When is it Needed?
Chapter 21 - Filters
• Integrating and Differentiating Circuits
• Acoustic Weighting
• Bandpass Filter
• Undamped (high Q) vs. Damped (low Q) Filters
• Selective Filtering
• Characteristics of Butterworth, Chebyshev and Bessel Filters
• RC and LR Circuits
• Anti-Alias Filters
• Brick-Wall vs Real Filters
• Aliasing Analysis
• Anti-Alias Filters-Hardware
• Filter "Construction"
• How Filters Behave
• Group Delay
• Filter Cutoff Frequency
• Sampling Ratio Calculation
• FR/FD Ratio
Chapter 22 - Measurement Uncertainty and Introduction to Statistics
• Error and Uncertainty
• ISO Definitions
• Simple Statistics of Measurement
• Probability-Definitions
• Data Distributions
• Cumulative Frequency Curve Summation
• Degrees of Freedom
• Mean, Median and Mode
• Standard Deviation
• Variance
• Normal Distribution
• Gaussian Curve
• Confidence
• Gaussian (s-Normal) Distribution
• Special Definitions for Random Vibration
• Computing the Standard Deviation-Example
• Confidence Levels
Appendix A - Glossary of Terms
Appendix B - Standard Deviation Calculation worksheet
Appendix C - Typical Instrumentation Selection Check List
Appendix D - Transducer Calibration
Appendix E - Analog Oscilloscope Controls
Appendix F - ASCII codes
Summary and overview
Revised 180807
Significance Test
A significance test is used to determine whether the difference between the value assumed in the null hypothesis and the value observed in an experiment is big enough to reject the possibility
that the result was produced by a purely chance process.
A significance test is always accompanied by a threshold that specifies how improbable an observed difference must be, under the null hypothesis, before a chance process is ruled out and the null hypothesis rejected. Common levels used in
statistical analysis are 5% and 1%. This is called the significance level.
For example, suppose a researcher is studying the effect of breakfast in the morning to the performance in class of students. She will first select two random samples of students. One of these
groups will have breakfast each day and the other will not, for a specified period of time.
At the end, she will compare the average grades of both these groups. Suppose that the group that had breakfast each day had an average grade of 4.32/5.00 and the group that did not have
breakfast each day obtained an average grade of 4.05/5.00.
The question now is, can the researcher conclude that her research has shown that having breakfast in the morning increases the grades of students?
There can be several reasons to explain the results of the experiment. It may happen that the students picked for the group that had breakfast each day simply were brighter students already. Or it
may just be by chance that the group having breakfast each day just performed better than their counterparts on the examination days or were simply plain lucky.
For the above case, a significance test consists of determining whether the difference of 0.28/5.00 is big enough to conclude that students having breakfast indeed perform better than those who do not.
The null hypothesis in this case is:
H[0]: “Having breakfast in the morning has no effect on the grades of students”.
Depending on the distribution of students and their grades, the researcher must therefore determine whether the probability of obtaining this difference of 0.28/5.00 by chance is low or high. She
will then set a significance level (say 5%) for the case.
If the probability of obtaining the difference of 0.28/5.00 by chance is lower than the significance level, then the significance test entails that the null hypothesis is rejected. The experimenter can
now conclude that having breakfast in the morning indeed has a positive effect on the grades of students.
On the other hand, if the probability of obtaining the difference of 0.28/5.00 by chance is higher than the significance level, then the null hypothesis cannot be rejected. However, it is to be
noted that in this case we cannot conclude that the null hypothesis is true, i.e. we cannot accept the null hypothesis just because it is not rejected.
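The "probability of obtaining this difference by chance" can be made concrete with a small permutation test (a sketch: the individual grades below are invented, since the article only reports the two group means, roughly 4.32 and 4.05; a standard two-sample t-test would be the more usual tool):

```python
import random
import statistics

# Hypothetical grades; only the approximate group means come from the
# example above, the individual scores are made up for illustration.
breakfast    = [4.6, 4.1, 4.5, 4.0, 4.3, 4.4, 4.2, 4.5]
no_breakfast = [4.0, 3.9, 4.2, 4.1, 4.0, 4.2, 3.9, 4.1]

observed = statistics.mean(breakfast) - statistics.mean(no_breakfast)

# Permutation test: shuffle the group labels many times and count how
# often chance alone produces a mean difference at least this large.
random.seed(0)
pooled = breakfast + no_breakfast
n = len(breakfast)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if diff >= observed - 1e-12:
        extreme += 1
p_value = extreme / trials

# Reject the null hypothesis at the 5% significance level iff p < 0.05.
print(f"difference={observed:.3f}  p={p_value:.4f}")
```

The p-value is exactly the quantity the text discusses: the probability of a difference this large arising from a purely chance process.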
Peak Height Velocity; how to apply it in gymnastics
In this blog I will tell you more about the Peak Height Velocity. Especially how to apply and use it in practice.
Formula for peak height velocity
To measure the Peak Height Velocity, you need a special formula. There is a separate formula for boys and for girls.
The following formulas are used to calculate the PHV:
For boys:
-9.236 + (0.0002708*(BL*ZL)) - (0.001663*(LT*BL)) + (0.007216*(LT*ZL)) + (0.02292*(KG/LL*100))
For girls:
-9.376 + (0.0001882*(BL*ZL)) + (0.0022*(LT*BL)) + (0.005841*(LT*ZL)) - (0.002658*(LT*KG)) + (0.07693*(KG/LL*100))
BL = leg length (cm), which you can calculate as the difference between body length and seat height
ZL = seat height (cm)
LT = age (years)
LL = body length (cm)
KG = weight (in kg)
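The two formulas can be wired up directly in a few lines (a sketch: variable names follow the legend above, the coefficients are the published Mirwald ones — note the boys' (LT*BL) term enters with a minus sign there — and the example athlete below is invented):

```python
def maturity_offset(sex, LT, LL, ZL, KG):
    """Years until (negative) or since (positive) Peak Height Velocity.

    LT = age (years), LL = body length (cm), ZL = seat height (cm),
    KG = weight (kg); BL (leg length) is LL - ZL, as described above.
    """
    BL = LL - ZL
    if sex == "boy":
        return (-9.236 + 0.0002708 * (BL * ZL) - 0.001663 * (LT * BL)
                + 0.007216 * (LT * ZL) + 0.02292 * (KG / LL * 100))
    return (-9.376 + 0.0001882 * (BL * ZL) + 0.0022 * (LT * BL)
            + 0.005841 * (LT * ZL) - 0.002658 * (LT * KG)
            + 0.07693 * (KG / LL * 100))

# A made-up 12-year-old girl: 150 cm tall, 78 cm seat height, 40 kg.
offset = maturity_offset("girl", 12.0, 150.0, 78.0, 40.0)
phv_age = 12.0 - offset  # a negative offset means PHV is still to come
```

For this example athlete the offset comes out slightly negative (about -0.18), i.e. her fastest growth is predicted around age 12.2.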
In order to be able to use and apply this formula, it is useful to be able to represent the particles of the formula in an Excel file. It is quite difficult to put together and create such an Excel
file. Fortunately, there are a number of existing Excel files that already use this formula. So you only have to fill in a few things and the result of the formula is already there. So that is super handy.
Should you wish to have one, you can always ask for it through Gymnastics Tools.
Collecting data
To be able to fill in the formula and to be able to put your data into Excel, you will of course need some data. You choose an athlete of whom you want to calculate the Peak Height Velocity. The
first thing you have to do is to check the date of birth. So that you know what age this athlete is. Because that age has to be entered in the Excel file.
The next thing you need to do to calculate the formula is to measure the length. That in itself does not sound very difficult, and in fact it is not. However, it is a useful tip to use a correct and
reliable length gauge. If you have a ruler or tape measure against the wall, you have to sight with your hand above the gymnast's head. That can sometimes be a bit tricky. You also have mobile length
gauges where you can move the top. You can put this against the head and then you can measure how tall someone is from the side. These are just a bit more reliable than a tape measure. But then
again, you have to look at what you have in the room or what you have available at all.
In addition to the length, you also need to measure the seat height. To do this, you put someone on a chair so that their feet can reach the floor and their back can sit against the wall. So you have
to see which chair he/she needs. Then you do the same thing you do when you measure the length. You measure with a mobile length gauge against the head of the athlete. Or you look at the tape or
ruler how high the athlete is when he/she is sitting.
Measuring weight
Thirdly, you need the weight of the athlete. It is important that you have a relationship of trust with the athlete because the weight can be a personal matter for some athletes. In any case, make
sure you are open about it. That someone may not be ashamed of his/her weight. And that he/she can share it with you from safety.
Tip: use the same scales every time because you probably also know that not every scale is calibrated in the same way, so not every scale reads the same weight.
In the existing Excel file I used, it is customary to measure length, seat height and weight twice because sometimes - especially with the length - there can be a difference if you measure it again.
The Excel file takes the average, so we can be sure that we arrive at the correct length we use for the formula.
Reliability of measurement results
When do you measure and how long should you measure for a reliable Peak Height Velocity? The longer you measure, i.e. the more years you measure, the more reliable the Peak Height Velocity becomes.
If you think today, "I'd like to measure my 14-year-old gymnast and see when she has or has not had her Peak Height Velocity," you are already too late. So start measuring on time.
For girls, it is a good idea to start measuring from about the age of seven, i.e. the year in which they turn seven or are seven to start measuring Peak Height Velocity. For boys, it is convenient to
start measuring from about age nine because their Peak Height Velocity is often later than a girl's Peak Height Velocity.
Starting to measure
If you start measuring when they are seven or nine years old, or perhaps a little older, how often should you measure? To make the Peak Height Velocity even more reliable, it is useful to measure a
little more often as you move towards the Peak Height Velocity age.
For example, you start measuring your gymnast at the age of 7. Then you don't really need to measure her weight and height every week; you can do this about every two months. As she gets older and
older and moves closer to Peak Height Velocity or the age of Peak Height Velocity, it is a good idea to measure her more frequently, about two years before Peak Height Velocity, because you will
already know more or less at what age it will occur. That estimate is not yet very reliable, but you have already calculated it with the formula. Then you might start measuring every other month or every fortnight.
Frequency of measurements
When do you calculate the entire Peak Height Velocity? If you have created or are using an existing Excel file, the Peak Height Velocity will automatically come out of it. The literature indicates
that it is fine and convenient to measure or at least calculate the Peak Height Velocity between 2-4 times a year. So you measure height and weight more often, but the Peak Height Velocity
calculation only needs to be done 2-4 times a year. So it is based on all those small measurements. In an existing Excel file, the Peak Height Velocity is already calculated more often than the 2-4
times per year.
Reading results
What does the formula give you in terms of data? That is not the age of the Peak Height Velocity but a maturity offset. When the maturity offset is in the plus, so for example +2.5, that means you
have already had your Peak Height Velocity. For example, you are 17 years old and your maturity offset is 2.5. You have to subtract that 2.5 from the age of 17, so that is 14.5. That means that you
have had your Peak Height Velocity at the age of 14.5. This does not mean that you have stopped growing, but the period in which you were growing fastest was when you were 14.5
years old.
When the maturity offset is in the minus, for example -1.0, that means it is still to come. You have not yet had your Peak Height Velocity. For example, if you are 11 years old and have a
maturity offset of -1.0 from the formula, this means that you will have your fastest growth at about the age of 12. It does not mean that at the age of 11 you are not already growing. But you will
grow fastest at the age of 12.
I hope that this information has made you a little wiser. And now you know how to calculate and use Peak Height Velocity in practice. If you need help to start calculating Peak Height Velocity,
please let me know. Gymnastics Tools can help you!
Sound Doppler Shift Calculator
The Doppler Effect calculator for sound waves calculates the observed frequency and wavelength of a sound wave given the source frequency, speed of source, and speed of sound in the air.
The user can input the source frequency in Hertz (Hz), the speed of the source in meters per second (m/s), and the speed of sound in air in meters per second (m/s).
The calculator uses the formula for the Doppler effect (can be found below the calculator), which relates the observed frequency to the source frequency, speed of the source, and speed of sound in
air. The formula accounts for the change in frequency due to the relative motion of the source and the observer, resulting in either a higher or lower pitch. A negative speed of the source indicates
that it is moving away from the observer.
The Doppler Effect
Source: http://en.wikipedia.org/wiki/Doppler_effect
The Doppler effect is the change in frequency of a wave for an observer moving relative to its source. It is commonly heard when a vehicle sounding a siren or horn approaches, passes and moves away
from an observer. The received frequency is higher (compared to the emitted frequency) during the approach, it is identical at the instant of passing by, and it is lower during the moving away.
The relative changes in frequency can be explained as follows. When the source of the waves is moving toward the observer, each successive wave crest is emitted from a position closer to the observer
than the previous wave. Therefore each wave takes slightly less time to reach the observer than the previous wave. Therefore the time between the arrival of successive wave crests at the observer is
reduced, causing an increase in the frequency. While they are traveling, the distance between successive wavefronts is reduced; so the waves "bunch together". Conversely, if the source of waves is
moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the
frequency. The distance between successive wavefronts is increased, so the waves "spread out".
The Doppler effect for sound can be expressed as follows:
Frequency change: f_observed = f_source · v / (v + v')
Wavelength change: λ_observed = λ_source · (v + v') / v
For the approaching source, the speed v' should be negative; for receding source, speed v' should be positive.
v - the speed of sound in air. By default, it is equal to the speed of sound in dry air at 20 degrees Centigrade; see Sound Speed in Gases.
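The formulas translate directly into code (a sketch; the 343 m/s default and the example speeds are my own choices, matching the calculator's sign convention):

```python
SPEED_OF_SOUND = 343.0  # m/s, dry air at about 20 degrees Centigrade

def doppler(f_source, v_source, v=SPEED_OF_SOUND):
    """Moving source, stationary observer.

    Sign convention as above: v_source < 0 for an approaching source,
    v_source > 0 for a receding one.
    """
    f_observed = f_source * v / (v + v_source)
    wavelength_observed = (v + v_source) / f_source  # = lambda_source*(v+v')/v
    return f_observed, wavelength_observed

# A 440 Hz siren moving at 30 m/s:
f_approach, _ = doppler(440.0, -30.0)  # higher pitch while approaching
f_recede, _ = doppler(440.0, +30.0)    # lower pitch while receding
```

The asymmetry is visible in the numbers: the approaching shift (+42 Hz here) is larger than the receding shift (-35 Hz), because the source speed sits in the denominator.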
Benign landscape of low-rank approximation: Part I – Race to the bottom — the OPTIM@EPFL blog
The fact
The Eckart–Young–Mirsky theorem states that the best approximation of a matrix \(M\) by a matrix of rank at most \(r\) can be computed by truncating an SVD of \(M\) to \(r\) dominant components. This
is widely known. It is also well known (but rarely proved) that the corresponding optimization problem has a nonconvex yet benign landscape. This post spells out a proof of that fact:
Theorem 1 (Benign landscape) Let \(M \in \Rmn\) be arbitrary. Consider \[ \min_{X \in \Rmn} \frac{1}{2}\sqfrobnorm{X - M} \qquad \textrm{ subject to } \qquad \rank(X) \leq r. \] Second-order
necessary optimality conditions are sufficient for global optimality, in the following sense:
1. If \(\rank(X) < r\) and \(X\) is first-order critical, then \(X\) is optimal.
2. If \(\rank(X) = r\) and \(X\) is second-order critical, then \(X\) is optimal.
In particular, local minima are global minima and saddle points are strict.
The oldest other proof we could locate is the PhD thesis of Ngoc-Diep Ho (Ho 2008, Thm. 1.14)—he argues succinctly that local minima are global. A paper by Helmke and Shayman (1995) comes close but
doesn’t prove the statement here. Please e-mail us if you know an older one—this should be ancient. (There are many sources for the fact that the optimal solution is the SVD of \(M\) truncated at
rank \(r\). The point here is to have a proof that the optimization landscape is benign.)
We follow up with neat corollaries of that theorem in Part II.
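For intuition, Theorem 1 can also be checked numerically (a sketch using NumPy; the dimensions, rank, and random competitors are arbitrary choices): the truncated SVD attains the value half the sum of the squared discarded singular values, and no random rank-\(r\) matrix does better.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 5, 2
M = rng.standard_normal((m, n))

# Best rank-r approximation: truncate the SVD (Eckart-Young-Mirsky).
U, s, Vt = np.linalg.svd(M, full_matrices=False)
X_star = (U[:, :r] * s[:r]) @ Vt[:r, :]

def f(X):
    return 0.5 * np.linalg.norm(X - M, "fro") ** 2

# The optimal value is half the energy in the discarded singular values...
assert np.isclose(f(X_star), 0.5 * np.sum(s[r:] ** 2))

# ...and no randomly drawn rank-r matrix beats it.
for _ in range(200):
    X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    assert f(X) >= f(X_star)
```

Of course a finite sample of competitors proves nothing; the point of the theorem (and of the proof below) is that every second-order critical point is globally optimal.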
Proof structure
One particularity of this optimization problem is that its search space \[ \Rmnlr = \{ X \in \Rmn : \rank(X) \leq r \} \] is not smooth: it is an algebraic variety. Fortunately, it splits in two
• The matrices of rank strictly less than \(r\), and
• The set \(\Rmnr\) of matrices of rank exactly \(r\), which is a smooth manifold embedded in \(\Rmn\).
Accordingly, the proof of Theorem 1 has two parts. First:
1. Show that if \(X\) has rank strictly less than \(r\) and it is stationary then it is optimal.
Intuitively, this is because if the rank constraint is not active then it is “as if” there was no constraint. And since the cost function is convex, stationary implies optimal.
Second, for the cost function \[ f(X) = \frac{1}{2}\sqfrobnorm{X - M} \] specifically, do the following:
2. Write down the necessary optimality conditions of order two.
3. Deduce that if \(X\) has rank \(r\) and it is second-order critical then \(f(X)\) is a certain value, independent of \(X\).
4. Conclude that all such \(X\) are optimal, because one of them must be.
Item 2 takes a bit of know-how but it’s straightforward since \(\Rmnr\) is smooth. Item 3 requires a little insight into the problem. Item 4 is direct.
Notice how items 3 and 4 together allow us to conclude without knowing that the optima are truncated SVDs of \(M\). Actually, we recover that fact as a by-product.
Matrices of lesser rank
This part works for \(\min_{\rank(X) \leq r} f(X)\) with any continuously differentiable \(f \colon \Rmn \to \reals\).
Let \(\nabla f\) denote the Euclidean gradient of \(f\) with respect to the usual inner product \(\inner{A}{B} = \trace(A\transpose B)\). First-order criticality at a matrix \(X\) of rank strictly
less than \(r\) is equivalent to the condition \(\nabla f(X) = 0\). This can be formalized in terms of tangent cones and their polars (Schneider and Uschmajew 2015). Here is a short proof:
Lemma 1 If \(\rank(X) < r\), the first-order necessary optimality conditions at \(X\) are: \(\nabla f(X) = 0\).
Proof. First-order conditions require in particular that \((f \circ c)'(0) \geq 0\) for all curves \(c \colon [0, 1] \to \Rmnlr\) which start at \(c(0) = X\) and which are differentiable at that
point. (That is, the value of \(f\) cannot decrease at first order if we move away from \(X\) along the curve.) For all \(x \in \Rm, y \in \Rn\), the smooth curve \(c(t) = X + txy\transpose\) indeed
satisfies \(\rank(c(t)) \leq r\) and so first-order criticality requires \[ 0 \leq (f \circ c)'(0) = \inner{\nabla f(c(0))}{c'(0)} = \inner{\nabla f(X)}{xy\transpose} = x\transpose \nabla f(X) y. \]
This holds for all \(x, y\), so \(\nabla f(X) = 0\). Conversely, if \(\nabla f(X) = 0\), then the full first-order conditions (not spelled out here) hold.
For \(X \mapsto \frac{1}{2} \sqfrobnorm{X - M}\) which is convex, a zero gradient implies optimality: this takes care of the first part of Theorem 1. The general statement is:
Corollary 1 Assume \(f\) is convex. If \(\rank(X) < r\) and \(X\) is first-order critical, then \(X\) is optimal.
Matrices of maximal rank
When \(X\) has rank \(r\), first-order necessary optimality conditions typically are not sufficient for optimality. Writing down second-order conditions for minimization of a function \(f\) on an
arbitrary set is a tad uncomfortable. Luckily, the set \(\Rmnr\) of matrices of rank exactly \(r\) is an embedded submanifold of \(\Rmn\). As a result, necessary optimality conditions are well
understood; they even take a reasonably nice form. We spell them out below: see for example (Boumal 2023, sec. 7.5) for details.
Second-order conditions
Let \(X = U\Sigma V\transpose\) be a thin SVD of \(X\), so that \(U, V\) each have \(r\) orthonormal columns and \(\Sigma\) is diagonal of size \(r \times r\) with diagonal entries \(\sigma_1 \geq \cdots \geq \sigma_r > 0\). Select \(U_\perp, V_\perp\) to complete the orthonormal bases, that is, \([U, U_\perp] \in \Om\) and \([V, V_\perp] \in \On\) are orthogonal. The tangent space to \(\Rmnr\) at \(X\) is the subspace \(\T_X\Rmnr\) consisting of all \(\dot X \in \Rmn\) of the form \[ \dot X = \begin{bmatrix} U & U_\perp \end{bmatrix} \begin{bmatrix} A & B \\ C & 0 \end{bmatrix} \begin{bmatrix} V & V_\perp \end{bmatrix}\transpose, \qquad(1)\] where \(A \in \reals^{r \times r}, B \in \reals^{r \times (n - r)}, C \in \reals^{(m - r) \times r}\) are arbitrary.
The first-order necessary optimality conditions at \(X\) require that \((f \circ c)'(0) = 0\) for all smooth curves on \(\Rmnr\) passing through \(c(0) = X\). This holds exactly if \(\nabla f(X)\) is orthogonal to the tangent space, that is, there exists \(W \in \reals^{(m-r) \times (n-r)}\) such that \[ \nabla f(X) = \begin{bmatrix} U & U_\perp \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & -W \end{bmatrix} \begin{bmatrix} V & V_\perp \end{bmatrix}\transpose. \] (We could also express this as: \(\nabla f(X) X\transpose = 0\) and \(X\transpose \nabla f(X) = 0\).)
If \(X\) is first-order critical for \(f\), then it is also second-order critical if and only if \((f \circ c)''(0) \geq 0\) for all smooth curves on \(\Rmnr\) passing through \(c(0) = X\).
Particularizing (Boumal 2023, Ex. 7.6) to \(f\), we see that this holds exactly if \[ 0 \leq \innerbig{\dot X}{\dot X + (X-M)(X^\dagger \dot X)\transpose + (\dot X X^\dagger)\transpose (X-M)} \] for all \(\dot X \in \T_X\Rmnr\) (a dagger denotes pseudo-inverse). This uses that the Euclidean gradient and Hessian of \(f\) are \(\nabla f(X) = X - M\) and \(\nabla^2 f(X)[\dot X] = \dot X\). If we plug in an arbitrary tangent vector of the form Eq. 1 and work through the expression, it follows that \[ 0 \leq \sqfrobnorm{A} + \sqfrobnorm{B} + \sqfrobnorm{C} - 2\inner{C \Sigma^{-1} B}{W} \qquad(2)\] for all \(A, B, C\).
Second-order points have the same cost value
It only remains to choose \(A, B, C\) in Eq. 2 to reveal interesting conclusions about \(X\). Clearly, we should set \(A = 0\) (that only makes the inequality stronger). Also let \(B = e_r y\transpose\) and \(C = xe_r\transpose\), where \(e_r\) is the \(r\)th column of the identity matrix of size \(r\), and \(x,y\) are unit-norm vectors. Then, the inequality becomes: \[ 0 \leq 2 - 2 (x\transpose W y) / \sigma_r. \qquad(3)\] Choose \(x,y\) to be singular vectors of \(W\) associated to its largest singular value to conclude that \[ \sigmamax(\nabla f(X)) = \sigmamax(W) \leq \sigma_r(X). \] In words: the \(r\) positive singular values of \(X\) are all at least as large as the singular values of \(\nabla f(X)\).
After the fact, we can also figure out from this choice of \(A, B, C\) a particular curve \(c\) for which the inequality \((f \circ c)''(0) \geq 0\) yields Eq. 3. With such a curve in hand we don’t need much in the way of Riemannian optimization, but it obfuscates how one could discover the proof.
Yet, we also know that \(\nabla f(X) = X - M\), so that \[ M = X - \nabla f(X) = \begin{bmatrix} U & U_\perp \end{bmatrix} \begin{bmatrix} \Sigma & 0 \\ 0 & W \end{bmatrix} \begin{bmatrix} V & V_\perp \end{bmatrix}\transpose. \] Thus, the singular values of \(M\) are exactly those of \(X\) together with those of \(\nabla f(X)\). Moreover, \(X\) holds the \(r\) largest singular values of \(M\) while \(\nabla f(X)\) holds the rest. It follows that \[ f(X) = \frac{1}{2} \sqfrobnorm{X - M} = \frac{1}{2} \sum_{i = r+1}^{\min(m, n)} \sigma_i(M)^2. \qquad(4)\] In particular, all second-order critical points have the same cost function value.
Hence second-order points are optimal
We know \(f\) has a minimizer on \(\Rmnlr\) because the latter is closed and the former is strongly convex; let’s call it \(X^\star\). There are two scenarios to entertain:
• Either \(\rank(M) \leq r\), in which case the function value in Eq. 4 is zero. This confirms that second-order critical points are optimal. (Actually, in this case, first-order criticality is already sufficient.)
• Or \(\rank(M) > r\), in which case \(\rank(X^\star) = r\). (Indeed, if \(X^\star\) had lesser rank, we would have \(\nabla f(X^\star) = 0\) so that \(X^\star = M\): a contradiction.) In
particular, \(X^\star\) is second-order critical on \(\Rmnr\), and we know from Eq. 4 that all such points have the same cost function value; hence, they are all optimal.
This takes care of the second part of Theorem 1.
Notice that a by-product of this proof is that the minimizers of \(f\) are the SVDs of \(M\) truncated to rank \(r\). In other words: we re-proved the Eckart–Young–Mirsky theorem for the Frobenius norm.
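As a quick numerical illustration (not part of the original argument): with NumPy one can check that the rank-\(r\) truncated SVD of a random \(M\) attains exactly the value in Eq. 4; the dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 5, 2
M = rng.standard_normal((m, n))

# Candidate minimizer: the SVD of M truncated at rank r.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

f_val = 0.5 * np.linalg.norm(X - M, "fro") ** 2
# Eq. 4: half the sum of the squared tail singular values of M.
predicted = 0.5 * np.sum(s[r:] ** 2)
assert np.isclose(f_val, predicted)
```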
Boumal, N. 2023.
An Introduction to Optimization on Smooth Manifolds
. Cambridge University Press.
Helmke, U., and M. A. Shayman. 1995.
“Critical Points of Matrix Least Squares Distance Functions.” Linear Algebra and Its Applications
215: 1–19. https://doi.org/
Ho, N.-D. 2008.
“Nonnegative Matrix Factorization: Algorithms and Applications.”
PhD thesis, Louvain-la-Neuve, Belgium:
Université catholique de Louvain.
Schneider, R., and A. Uschmajew. 2015.
“Convergence Results for Projected Line-Search Methods on Varieties of Low-Rank Matrices via Łojasiewicz Inequality.” SIAM Journal on Optimization
25 (1): 622–46.
BibTeX citation:
author = {Boumal, Nicolas and Criscitiello, Christopher},
title = {Benign Landscape of Low-Rank Approximation: {Part} {I}},
date = {2023-12-11},
url = {racetothebottom.xyz/posts/low-rank-approx/},
langid = {en},
abstract = {The minimizers of \$\textbackslash sqfrobnorm\{X-M\}\$
subject to \$\textbackslash rank(X) \textbackslash leq r\$ are given
by the SVD of \$M\$ truncated at rank \$r\$. The optimization
problem is nonconvex but it has a benign landscape. That’s folklore,
here with a proof.} | {"url":"https://www.racetothebottom.xyz/posts/low-rank-approx/index.html","timestamp":"2024-11-04T23:58:34Z","content_type":"application/xhtml+xml","content_length":"57495","record_id":"<urn:uuid:b76093c3-f657-4244-8a31-c06400be2f0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00035.warc.gz"} |
Tensor Parallelism
• We can horizontally partition the computation for one tensor operation across multiple devices; this is called tensor parallelism (TP).
• We write the derivations by splitting in two; in general, however, the matrices are split equally across the GPUs in a given node
Parallelizing a GEMM (General Matrix multiply)
• Y = XA, where X is the input and A is $d_{model} \times d_{hidden}$
• First option (parallelize and aggregate, each “thread” computes a matrix of the same dimension as Y) (more memory efficient, but requires an all_reduce at the end):
  - split A along its rows and the input X along its columns: X = [X_1, X_2], A = [A_1; A_2]
  - X_i holds half the columns of X and A_i holds half the rows of A
  - Then, Y = X_1 A_1 + X_2 A_2 (it’s true, I checked)
  - intuition:
    - A matrix-matrix mul can be seen as multiple matrix-vector muls concatenated (along the columns of A).
    - In such a matrix-vector mul, each element of a given column in A is responsible for picking out its corresponding column in X, multiplying it, and then the results are summed to obtain the new column in Y.
    - Here, we parallelize the computation over these columns, and aggregate at the end
• Second option (parallelize and concatenate, each “thread” produces a slice of Y) (less memory efficient but no synchronization):
  - split A along its columns: A = [A_1, A_2]
  - Then, Y = [X A_1, X A_2]
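Both partitioning options are easy to check numerically; here is a small NumPy sketch (the shapes and the 2-way split are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_model, d_hidden = 4, 8, 16
X = rng.standard_normal((batch, d_model))
A = rng.standard_normal((d_model, d_hidden))
Y = X @ A  # reference result

# First option: split A along its rows, X along its columns, sum (all-reduce).
X1, X2 = np.hsplit(X, 2)   # each: batch x d_model/2
A1, A2 = np.vsplit(A, 2)   # each: d_model/2 x d_hidden
assert np.allclose(Y, X1 @ A1 + X2 @ A2)

# Second option: split A along its columns, concatenate the slices.
A1c, A2c = np.hsplit(A, 2)  # each: d_model x d_hidden/2
assert np.allclose(Y, np.hstack([X @ A1c, X @ A2c]))
```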
• X is the input, A is $d_{model} \times d_{hidden}$, B is $d_{hidden} \times d_{model}$
• The usual two-layer MLP block is
  □ Y = GeLU(X A) B
  □ i.e. one GEMM (general matrix multiply)
  □ one GeLU
  □ one GEMM
• Parallelizing the first GEMM, X A
  □ First option (parallelize and aggregate, each “thread” computes a matrix of the same dimension as X A):
    ☆ GeLU is nonlinear, so GeLU(X_1 A_1 + X_2 A_2) ≠ GeLU(X_1 A_1) + GeLU(X_2 A_2)
    ○ Thus we need to synchronize before the GeLU function
  □ Second option (parallelize and concatenate, each “thread” produces a slice of X A):
  □ This partitioning allows the GeLU nonlinearity to be independently applied to the output of each partitioned GEMM
  □ This is advantageous as it removes a synchronization point
• Parallelizing the second GEMM, Y B
  □ Given we receive Y = [Y_1, Y_2], split by the columns, we split B by its rows
  □ Compute Y_i B_i on each GPU
  □ Synchronization
    ☆ Z = all_reduce(Y_i B_i), summing the partial products
    ☆ Called g in the diagram
• Diagram
  □ g is an all-reduce in the forward pass, where the partial matrices are aggregated by summing, and an identity (or splitting) in the backward pass
  □ f is an identity (or splitting) in the forward pass, and an all-reduce in the backward pass
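Putting the pieces together, the full MLP scheme described above (column-split first weight, local GeLU, row-split second weight, sum as the forward all-reduce) can be simulated on one machine with NumPy; the 2-way split and the tanh GeLU approximation are illustrative:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GeLU (applied elementwise)
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 8))
A = rng.standard_normal((8, 16))
B = rng.standard_normal((16, 8))

# Serial reference: Z = GeLU(X A) B
ref = gelu(X @ A) @ B

# Tensor-parallel: column-split A, row-split B; GeLU is applied locally,
# and the partial products are summed (the forward all-reduce).
partials = [gelu(X @ Ai) @ Bi
            for Ai, Bi in zip(np.hsplit(A, 2), np.vsplit(B, 2))]
Z = sum(partials)
assert np.allclose(ref, Z)

# Why the row-split of A needs a sync before GeLU: GeLU is nonlinear.
X1, X2 = np.hsplit(X, 2)
A1, A2 = np.vsplit(A, 2)
assert not np.allclose(gelu(X @ A), gelu(X1 @ A1) + gelu(X2 @ A2))
```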
• They exploit inherent parallelism in the multihead attention operation.
  □ partitioning the GEMMs associated with the key (K), query (Q), and value (V) in a column-parallel fashion, such that the matrix multiply corresponding to each attention head is done locally on one GPU
  □ This allows us to split the per-attention-head parameters and workload across the GPUs, and doesn’t require any immediate communication to complete the self-attention.
  □ The subsequent GEMM from the output linear layer (after self-attention) is parallelized along its rows, given it receives the self-attention output split by columns, by design (requiring no communication)
  □ Finally, we apply g, the all_reduce, to obtain the result (before dropout)
• Diagram | {"url":"https://notes.haroldbenoit.com/ml/engineering/training/parallelism/tensor-parallelism","timestamp":"2024-11-09T20:26:43Z","content_type":"text/html","content_length":"130366","record_id":"<urn:uuid:e2cf683f-dc84-493d-bc07-1cb996e1e4dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00230.warc.gz"} |
What is Power Dissipation? Explain with Examples - EE-Vibes
Power Dissipation
Power dissipation is the conversion of electrical energy into other forms, chiefly heat, in an electrical circuit. Power dissipates when current flows through a resistance and produces heat, light or sound. Power dissipation can also occur when two or more circuits interact with each other, such as when one circuit supplies voltage to another circuit. Power dissipation is measured in watts (W) and is an important factor in the calculation of electrical circuit efficiency.
Power dissipation can result from a variety of causes, including current flowing through resistors, capacitors and inductors, as well as the interaction between two or more circuits.
How to Reduce the Power Dissipation?
Power dissipated by components in a circuit can be reduced by adding additional circuitry to reduce resistance, improve current flow or reduce inductive reactance. Power dissipation can also be
reduced by using components that are better able to dissipate heat and other forms of radiation.
Power Dissipation is a major concern in electronics, as it can lead to overheating and permanent damage to electrical components. Power management systems are used to control power dissipation in
electronic circuits and ensure proper operation. Power dissipation must be taken into account when designing and constructing electronic devices to ensure that components remain within their
specified temperature ranges. Power Dissipation is also an important factor in the design of power supplies, which need to dissipate generated heat and voltage drops in order to avoid damage to
connected components.
Power Dissipation calculation
Power Dissipation calculation is a crucial step in any power system design. Power dissipation calculations are used to determine the amount of power dissipated through a resistor or other component
in a circuit, as well as its accompanying temperature rise. Power dissipation can also be used to calculate current flowing through an inductor and capacitor when combined with Ohm’s Law. Power
Dissipation calculation is important as it helps to ensure that components in the circuit are not operating at levels which could cause damage or failure, and can also be used to determine the
efficiency of a power system. Power dissipation calculations should take into account both the load current and voltage when determining the total dissipated power.
Additionally, thermal resistance can also be taken into account in Power Dissipation calculation to ensure that components are not operating at temperatures which would cause damage or failure. Power
dissipation calculations should always be performed with accuracy and precision, as errors can lead to malfunctioning of the system and limit its safety and performance. Power Dissipation calculation
is essential for ensuring a safe, efficient and reliable power system design.
Formula for Power Dissipation calculation
Power dissipated by a resistor is equal to the product of the voltage and current flowing through it.
Power Dissipated (P) = Voltage (V) x Current (I), i.e. P=VI; combined with Ohm’s law (V = IR), this also gives P=V^2/R and P=I^2R.
Therefore, in order to calculate power dissipation, one needs to determine the voltage and current values for a given circuit. Once these values are known, the formula can be used to calculate the
total power dissipation in the circuit. Additionally, if thermal resistance is taken into account, one can use the power dissipation calculation to determine temperature rise of components or
Example of Power Dissipation Calculation :
For example, consider a basic circuit with an 8 ohm resistor and 10 volts applied. Power dissipation can be calculated by taking the square of the voltage (10^2 = 100) and dividing it by the resistance (8 ohms).
Example of Power Dissipation Calculation
This gives us a total power dissipation of 12.5 watts. Additionally, if we take into account thermal resistance, we can calculate the temperature rise of the resistor due to power dissipation. With a
thermal resistance of 0.2°C/W, the total temperature rise would be 2.5°C (12.5 watts X 0.2°C/W). Power dissipation calculations are an essential part of any power system design and should be
performed with accuracy and precision to ensure safety and reliability. Power dissipation calculations can also be used to monitor the efficiency of a system or component and determine if any changes
are needed for improved performance.
By understanding Power Dissipation calculation, power engineers can make sure that components in their circuits operate within safe operating temperatures, as well as maximizing the efficiency of
their systems. Power Dissipation calculations should always be performed with accuracy and precision to ensure safety and reliability.
Power Dissipation in Series and Parallel Configurations
In a series configuration, power dissipation occurs when current is forced through a resistor, which in turn causes the power supply to dissipate more energy than it is meant to. In a parallel
configuration, power dissipation occurs due to resistance imbalance between components that are connected in parallel, meaning the current splits unequally through them. Power dissipation can also be
caused by an insufficient heat sink or improper ventilation.
Power Dissipation Calculation in Series Networks
Here is an example of power dissipation calculation for resistors connected in series. For the given circuit, we can use the following formula for calculating the power dissipation in series
P_T = P_R1 + P_R2 + P_R3 + P_R4
power dissipation calculation for resistors connected in series.
The voltage drop across each resistor can be added to check it is equal to the source voltages.
V_S = V_R1 + V_R2 + V_R3 + V_R4
V_S = 3.23 V + 3.23 V + 0.32 V + 3.22 V = 10 V
Finally, you can apply the formula P=V^2/R to calculate the power dissipated across each resistor. You can also calculate the total current of the network and then apply the formula P=I^2R, where I is the total current and R is the equivalent resistance of this circuit.
Hence the current in this circuit is I=V/R=10/310=0.03226 A, and the power dissipated is P=I^2R=(0.03226^2)(310)=0.322 W
power dissipation calculation for resistors connected in series using P=I^2R
or the following table depicts the power drop across each resistor which is then finally added in case of series resistors.
power dissipation in series configuration
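The series calculation can be reproduced in a few lines of Python. Note that the individual resistor values are not shown here (the original figure is missing), so 100, 100, 10 and 100 ohms are assumed; they are consistent with the quoted 310-ohm total and the ~3.23 V / 0.32 V drops.

```python
# Series network: assumed resistor values (the original figure is not shown);
# 100, 100, 10 and 100 ohms match the quoted 310-ohm total.
V_s = 10.0
resistors = [100.0, 100.0, 10.0, 100.0]

R_total = sum(resistors)                 # series: resistances add -> 310 ohms
I = V_s / R_total                        # one current flows through all resistors
drops = [I * R for R in resistors]       # per-resistor voltage drops
powers = [I**2 * R for R in resistors]   # per-resistor dissipation

assert abs(sum(drops) - V_s) < 1e-9      # drops add up to the source voltage
print(round(I, 5), round(sum(powers), 3))  # → 0.03226 0.323
```

The exact total is 100/310 ≈ 0.3226 W; the article quotes 0.322 W because it rounds the current first.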
Power Dissipation Calculation in Parallel Networks
Consider the following configuration where the resistors are connected in parallel for power dissipation calculation.
power dissipation calculation for resistors connected in parallel.
The total resistance of the parallel network is calculated from the reciprocal rule: 1/R_T = 1/R_1 + 1/R_2 + … + 1/R_n.
The equivalent resistance calculated in this case is 14.28 Ω. In a parallel network, the voltage across each resistor is the same but the currents differ, and the total current is calculated as:
I_T = I_1 + I_2 + I_3 + … + I_n
After calculating R and total current, we can use the formula P=I^2R for calculating the total power dissipated in this circuit. The following table shows the calculation results:
power dissipated example of parallel networks
hence the total power dissipated in this case is 7W.
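The parallel case can be checked the same way. The branch resistances are not shown in the text, so 50, 25 and 100 ohms are assumed below; they reproduce the quoted ~14.28-ohm equivalent and the 7 W total with a 10 V source.

```python
# Parallel network: assumed branch resistances (the figure is not shown);
# 50, 25 and 100 ohms reproduce the quoted ~14.28-ohm equivalent.
V_s = 10.0
resistors = [50.0, 25.0, 100.0]

R_eq = 1.0 / sum(1.0 / r for r in resistors)    # reciprocal rule
branch_currents = [V_s / r for r in resistors]  # same voltage across each branch
I_total = sum(branch_currents)                  # 0.2 + 0.4 + 0.1 = 0.7 A
P_total = I_total**2 * R_eq                     # equals V_s * I_total

print(round(R_eq, 2), round(I_total, 3), round(P_total, 1))  # → 14.29 0.7 7.0
```

(The exact equivalent is 14.2857 Ω; the article truncates it to 14.28 Ω.)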
It is important for engineers to understand how power dissipation in series and parallel configurations works so that they can design circuits with proper power management. To reduce the amount of power dissipated, engineers must choose components with low resistance values and maximize voltage efficiency. Furthermore, using heat sinks or other cooling methods will help to reduce the amount of heat that builds up in the circuit. | {"url":"https://eevibes.com/electrical-machines/what-is-power-dissipation-explain-with-examples/","timestamp":"2024-11-06T12:18:15Z","content_type":"text/html","content_length":"71690","record_id":"<urn:uuid:5b9de5e8-5f52-4a76-922d-8eef71ce358b>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00552.warc.gz"}
A man wants to reach point B on the opposite bank of a river (Class 12 Physics, JEE Main)
Hint: The approach used here is the relative motion concept. We are going to work with the boatman’s velocity with respect to the water, so that it becomes easy to examine the boatman’s motion in the water.
Before proceeding, the student must know what the relative approach means. In the simplest words, relative motion is the motion of an object with respect to some other moving or stationary object; for example, a person sitting in a moving bus is at zero velocity relative to the bus, but is moving at the same velocity as the bus with respect to the ground.
Complete step by step solution:
According to the question given we have to find the minimum speed so that the boatman will reach at point B starting from A.
Let v be the velocity of a boatman with respect to water or velocity in still water, u is the velocity of water.
Resultant of v and u should be along AB.
Let the absolute velocity of boatman (velocity with respect to ground) be $v_b^ \to $ which is along AB
So, the resultant of u and $v_b^ \to $ should be along AB
Let us resolve the components of the resultant velocity along the x and y directions, where ${v_x}$ and ${v_y}$ are the components along the horizontal and vertical directions respectively.
So, coming to the calculation part
$\Rightarrow {v_x}$$ = u - v\sin \theta $
$\Rightarrow {v_y} = v\cos \theta $
Further, the direction of the resultant gives $\tan 45^\circ = \dfrac{{v_y}}{{v_x}}$, and since $\tan 45^\circ = 1$:
$\dfrac{v\cos \theta }{u - v\sin \theta } = 1$
$\Rightarrow v\cos \theta = u - v\sin \theta$
$\therefore v = \dfrac{u}{\sin \theta + \cos \theta } = \dfrac{u}{\sqrt 2 \,\sin (\theta + 45^\circ)}$
So, v is minimum when the denominator is maximum; in the denominator we have a sine, whose maximum value is 1.
$\Rightarrow \theta + 45^\circ = 90^\circ \Rightarrow \theta = 45^\circ$
And, $v = \dfrac{u}{\sqrt 2}$
Speed relative to water is $\dfrac{u}{{\sqrt 2 }}$.
Direction is 45 degrees to the north west.
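A quick numerical check of this minimisation (sweeping the heading angle in one-degree steps) confirms both the optimal angle and the minimum speed:

```python
import math

# v(theta) = u / (sin(theta) + cos(theta)); sweep theta over 1..89 degrees.
u = 1.0
degrees = range(1, 90)
v = [u / (math.sin(math.radians(d)) + math.cos(math.radians(d))) for d in degrees]
best = min(range(len(v)), key=v.__getitem__)

print(degrees[best])      # → 45 (degrees)
print(round(v[best], 4))  # → 0.7071 (= u / sqrt(2))
```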
Note: The question above asks for the minimum speed, but for minimum distance we should move in such a direction that our motion becomes along AC relative to the water, and for minimum time the motion should be along AC directly. | {"url":"https://www.vedantu.com/jee-main/a-man-wants-to-reach-point-b-on-the-opposite-physics-question-answer","timestamp":"2024-11-07T21:50:48Z","content_type":"text/html","content_length":"154712","record_id":"<urn:uuid:d826e18a-5db0-45da-a89b-15f2c2ad2a3a>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00591.warc.gz"}
CliffsNotes Basic Math and Pre-Algebra Quick Review: 2nd Edition
CliffsNotes Basic Math and Pre-Algebra Quick Review: 2nd Edition by Jerry Bobrow
Inside the Book:
* Preliminaries
* Whole numbers
* Decimals
* Fractions
* Percents
* Integers and rationals
* Powers, exponents, and roots
* Powers of ten and scientific notation
* Measurements
* Graphs
* Probability and statistics
* Number series
* Variables, algebraic expressions, and simple equations
* Word problems
* Review questions
* Resource center
* Glossary

Why CliffsNotes? Go with the name you know and trust. Get the information you need-fast!

Master the Basics Fast
* Complete coverage of core concepts
* Easy topic-by-topic organization
* Access hundreds of practice problems at | {"url":"https://www.wob.com/en-us/books/jerry-bobrow/cliffsnotes-basic-math-and-pre-algebra-quick-review-2nd-edition/9780470880401","timestamp":"2024-11-11T22:54:16Z","content_type":"text/html","content_length":"252888","record_id":"<urn:uuid:7410f252-59af-484e-bc4b-fa1c700d918e>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/WC/CC-MAIN-20241111222353-20241112012353-00461.warc.gz"}
How much importance should be given to the energy cost situation? What are the project’s cash flows for the next twenty years? What assumptions can you use?
I have a finance class final paper. I believe I can complete the paper myself, but I need your help to do the calculations using an Excel spreadsheet, plus a few sentences to explain the required questions (6 of them).
My request consists of two parts:
1. Calculate the financial factors in an Excel spreadsheet; all data will be given under “ASSESSMENT INSTRUMENT”. No outside resources needed.
o Operating Cash Flows
o Incremental Cash Flows, including investment and salvage
o Warranty costs
o WACC
o NPV, IRR, Payback, and Profitability Index
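For reference, the capital-budgeting metrics in the list above are straightforward to compute; the sketch below uses plain Python with hypothetical cash flows (the actual case data is not reproduced here):

```python
def npv(rate, cashflows):
    # cashflows[0] is the (negative) initial investment at t = 0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    # Bisection on NPV(rate) = 0; assumes a sign change on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def payback_period(cashflows):
    # First year in which the cumulative cash flow turns non-negative
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf
        if cum >= 0:
            return t
    return None  # investment never recovered

def profitability_index(rate, cashflows):
    # PV of future inflows divided by the initial outlay
    pv_future = npv(rate, [0.0] + list(cashflows[1:]))
    return pv_future / -cashflows[0]

cfs = [-1000.0, 400.0, 400.0, 400.0, 400.0]  # hypothetical project
print(round(npv(0.10, cfs), 2))   # → 267.95
print(payback_period(cfs))        # → 3
```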
2. Several sentences to explain the following questions, based on the calculation results:
1) How much importance should be given to the energy cost situation?
2) What are the project’s cash flows for the next twenty years? What assumptions
did you use?
3) What is the company’s cost of capital? What is the appropriate discount factor
(which may be different) for you to use in evaluating the bus project?
4) If you decide to go ahead with the project, which of the two engines should be
used in the bus, and why?
5) Evaluate the quality of the project, by using appropriate capital budgeting
6) Would you recommend that Shrieves Transportation Company accept or reject the
project? What are the key factors on which you base your recommendation?
I will upload three additional materials to help you finish the job, but you do not need to complete the whole paper (which requires at least 6 pages). I only need you to finish the calculation part and briefly explain those 6 questions for me.
Thank you so much! | {"url":"https://onlineessaywritinghelp.com/how-much-importance-should-be-given-to-the-energy-cost-situation-what-are-the-projects-cash-flows-for-the-next-twenty-years-what-assumptions-can-you-use/","timestamp":"2024-11-06T10:25:59Z","content_type":"text/html","content_length":"48873","record_id":"<urn:uuid:651a63fc-98c9-49e6-a49c-808eae9df364>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00076.warc.gz"} |
Time-to-event model library
This page presents the TTE model library included in Monolix. It includes an introduction on time-to-event data, the different ways to model this kind of data, and typical parametric models.
What is time-to-event data
In case of time-to-event data the recorded observations are the times at which events occur. We can for instance record the time (duration) from diagnosis of a disease until death, or the time
between administration of a drug and the next epileptic seizures. In the first case, the event is one-off, while in the second it can be repeated.
In addition, the event can be:
• exactly observed: we know the event has happen exactly at time
• interval censored: we know the event has happen during a time interval, but not exactly when
• right censored: the observation period ends before the event can be observed
Formatting of time-to-event data in the MonolixSuite
In the data set, exactly observed events, interval-censored events and right censoring are recorded for each individual. Contrary to other software for survival analysis, the MonolixSuite requires one to specify the time at which the observation period starts. This allows the data set to be defined using absolute times, in addition to durations (if the start time is zero, the records represent durations between the start time and the event).
For instance for single events, exactly observed (with or without right censoring), one must indicate the start time of the observation period (Y=0), and the time of event (Y=1) or the time of the
end of the observation period if no event has occurred (Y=0). In the following example:
ID TIME Y
the observation period last from starting time t=0 to the final time t=80. For individual 1, the event is observed at t=34, and for individual 2, no event is observed during the period. Thus it is
indicated that at the final time (t=80), no event had occurred. Using absolute times instead of durations, we could equivalently write:
ID TIME Y
The durations between start time and event (or end of the observation period) are the same as before, but this time we record the day at which the patients enter the study and the days at which they
have events or leave the study. Different patients may enter the study at different times.
Examples for repeated events, and interval censored events are available in the data set documentation.
Important concepts: hazard and survival
Two functions have a key role in time-to-event analysis: the survival function and the hazard function. The survival function S(t) is the probability that the event happens after time t. A common way
to estimate it non-parametrically is to calculate the Kaplan-Meier estimate. The hazard function h(t) is the instantaneous rate of an event, given that it has not already occurred. Both are linked by
the following equation:
Different types of approaches
Depending on the goal of the time-to-event analysis, different modeling approaches can be used: non-parametric, semi-parametric (Cox models) and parametric.
• Non-parametric models do not require assumptions on the shape of the hazard or survival. Using the Kaplan-Meier estimate, statistical tests can be performed to check if the survival differs
between sub-populations. The main limitations of this approach are that (i) only categorical covariates can be tested and (ii) the way the survival is affected by the covariate cannot be
• Semi-parametric models (Cox models) assume that the hazard can be written as a baseline hazard (that depends only on time), multiplied by a term that depends only on the covariates (and not
time). Under this hypothesis of proportional covariate effect, one can analyze the effect of covariates (categorical and continuous) in a parametric way, leaving the baseline hazard undefined.
• Parametric models require to fully specify the hazard function. If a good model can be found, statistical tests are more powerful than for semi-parametric models. In addition, there is no
restrictions on how the covariates affects the hazard. Parametric models can also be easily used for predictions.
The table below synthesizes the possibilities for the 3 approaches.
Focus on parametric modeling with the MonolixSuite
In the MonolixSuite, the only possible approach is the parametric approach. The model is defined via the hazard function, which in a population approach typically depends on the individual parameters: h_i(t) = h(t, ψ_i), where ψ_i denotes the parameter vector of individual i.
The typical syntax to define the output is the following:
Event = {type=event, maxEventNumber=1, hazard=h}
The output Event will be matched to the time-to-event data of the data set. The hazard function h is usually defined via an expression including the input individual parameters. For one-off events,
the maximal number of events per individual is 1. It is important to indicate it in the maxEventNumber argument to speed up calculations. To use the model for simulations with Simulx,
rightCensoringTime must be given as an additional argument. Check here for details.
Note that the hazard can be a function of other variables such as drug concentration or tumor burden for instance (joint PK-TTE or PD-TTE models). An example of the syntax is given here.
Library of parametric models for time-to-event data
To describe the various shapes that the survival Kaplan-Meier estimate can take, several hazard functions have been proposed. Below we display the survival curves for the most typical hazard functions.
A few comments:
• We have reparametrized the hazard functions as a function of Te, the characteristic time-to-event.
• All parameters are positive. If we assume inter-individual variability, a log-normal distribution is usually appropriate.
The table below summarizes the number of parameters and typical parameter values:
For each model, we can in addition consider a delay del as an additional parameter. The delay will simply shift the survival curve to the right (later times). For t < del, the survival is S(t) = 1.
Lognormal TTE model
The lognormal TTE model is quite standard but not yet included in the library. The Mlxtran model for it is shown below:

[LONGITUDINAL]
input = {mu, sigma}

EQUATION:
if t < 0.001
  h = 0
else
  a = (log(t)-mu)/sigma
  h = (sigma*t*sqrt(3.14*2))^(-1) * exp(-1/2 * a^2) / (1 - normcdf(a))
end

DEFINITION:
Event = {type=event, maxEventNumber=1, hazard=h}

OUTPUT:
output = {Event}
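As a sanity check (outside Mlxtran), the lognormal hazard above is just the lognormal density divided by the survival function, so it should agree with -d/dt log S(t) computed numerically. The Python sketch below is illustrative only; it uses math.pi where the model file approximates 2*pi as 3.14*2, and the parameter values are arbitrary.

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lognormal_hazard(t, mu, sigma):
    # h(t) = pdf(t) / survival(t) for a lognormal(mu, sigma) event time
    if t < 1e-3:
        return 0.0
    a = (math.log(t) - mu) / sigma
    pdf = math.exp(-0.5 * a * a) / (sigma * t * math.sqrt(2.0 * math.pi))
    return pdf / (1.0 - norm_cdf(a))

# check: h(t) should equal -d/dt log S(t), with S(t) = 1 - Phi((log t - mu)/sigma)
mu, sigma, t, eps = 1.0, 0.5, 2.0, 1e-5
S = lambda x: 1.0 - norm_cdf((math.log(x) - mu) / sigma)
numeric = -(math.log(S(t + eps)) - math.log(S(t - eps))) / (2 * eps)
print(abs(lognormal_hazard(t, mu, sigma) - numeric) < 1e-5)  # True
```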
All models are available as Mlxtran model files in the TTE library. Each model can be used with or without delay and for single or repeated events. For performance reasons, it is important to choose the file ending with '_singleEvent.txt' if you want to model one-off events (death, drop-out, etc.).
Top Math Tutors in Amsterdam Nieuw-West, Amsterdam - MyPrivateTutor
• 3 classes
• Algebra, Calculus, Differential Equations, Geometry, Probability & Statistics
I teach Math and Physics for all grades, high school level, and University. I teach IB, IG, SAT
My name is Taha Selim, I work as a theoretician and finalizing my Ph.D. I hold a B.Sc. in Physics and Mathematics, and M.Sc. in Physics. I do have exp...
Renew LPI 201-400 - An Overview 61 to 70
Want to know Testking 201-400 exam practice test features? Want to learn more about the LPI LPIC-2 Exam 201 Part 1 of 2 version 4.0 certification experience? Study validated LPI 201-400 answers to the 201-400 questions at Testking. Get success with an absolute guarantee to pass the LPI 201-400 (LPIC-2 Exam 201 Part 1 of 2 version 4.0) test on your first attempt.
Q61. - (Topic 3)
What are the main network services used by the PXE protocol? (Choose TWO correct answers.)
A. DNS
B. DHCP
C. HTTP
D. TFTP
E. NFS
Answer: B,D
Q62. - (Topic 6)
Which of the following files are used to resolve hostnames to IP addresses? (Choose TWO correct answers.)
A. /etc/systems
B. /etc/hosts
C. /etc/network
D. /etc/dns.conf
E. /etc/resolv.conf
Answer: B,E
Q63. - (Topic 6)
Which of the following commands can be used to script interactions with various TCP or UDP services?
A. ftp
B. nc
C. tcpdump
D. strings
E. wget
Answer: B
Q64. - (Topic 5)
A system has one hard disk and one CD writer which are both connected to SATA controllers. Which device represents the CD writer?
A. /dev/hdb
B. /dev/sdd
C. /dev/scd1
D. /dev/sr0
E. /dev/sr1
Answer: D
Q65. - (Topic 4)
What does a 0 in the last field (fsck order) of /etc/fstab indicate about the filesystem?
A. The filesystem should be checked before filesystems with higher values.
B. The filesystem should be checked after filesystems with higher values.
C. The filesystem check counter is ignored.
D. The filesystem has been disabled from being checked and mounted on the system.
E. The filesystem does not require an fsck check when being mounted.
Answer: E
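Since the fsck order is simply the sixth whitespace-separated field of an /etc/fstab line, it is easy to inspect from the shell. A minimal sketch using a made-up fstab entry:

```shell
# Hypothetical fstab entry; the sixth field is the fsck pass number.
# 0 means the filesystem is skipped by fsck at boot.
line='/dev/sdb1 /data ext4 defaults 0 0'
fsck_order=$(printf '%s\n' "$line" | awk '{print $6}')
echo "$fsck_order"   # prints 0
```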
Q66. CORRECT TEXT - (Topic 8)
You have to type your name and title frequently throughout the day and would like to decrease the number of key strokes you use to type this. Which one of your configuration files would you edit to
bind this information to one of the function keys?
Answer: .inputrc (~/.inputrc if asked for the full path)
Explanation: The inputrc file is used to map keystrokes to text or commands. You can use this file to make a function key display your name and title. Other common uses include mapping a function key to lock your computer or run a command.
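For illustration, a readline binding in ~/.inputrc could look like the following. The name, title, and key are made up, and the escape sequence a function key sends varies by terminal (many xterm-like terminals send \e[12~ for F2):

```
# Hypothetical ~/.inputrc entry: make F2 type a name and title.
"\e[12~": "Jane Doe, Senior Technical Editor"
```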
Q67. - (Topic 5)
Which of the following is a CD-ROM filesystem standard?
A. OSI9660
B. ISO9660
C. SR0FS
D. ISO8859
E. ROM-EO
Answer: B
Q68. - (Topic 1)
In the following output, what is the 5 minute load average for the system?
# uptime
12:10:05 up 18 days, 19:00, 2 users, load average: 0.47, 24.71, 35.31
A. 0.47
B. 24.71
C. 35.31
D. There is no 5 minute interval. It is some value between 0.47 and 24.71.
E. There is no 5 minute interval. It is some value between 24.71 and 35.31.
Answer: B
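The ordering matters: the three numbers after "load average:" are the 1-, 5-, and 15-minute averages, in that order. A small shell sketch extracting the 5-minute value from the sample line in the question:

```shell
# Sample uptime output from the question; the second value after
# "load average:" is the 5-minute average.
line='12:10:05 up 18 days, 19:00, 2 users, load average: 0.47, 24.71, 35.31'
five_min=$(printf '%s\n' "$line" | awk -F'load average: ' '{print $2}' | cut -d',' -f2 | tr -d ' ')
echo "$five_min"   # prints 24.71
```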
Q69. CORRECT TEXT - (Topic 2)
Which directory contains the system-specific udev rule files? (Specify the absolute path including the directory name)
Answer: /etc/udev/rules.d (also accepted: /etc/udev/rules.d/)
Q70. - (Topic 8)
You typed the following at the command line: ls -al /home/ hadden
What key strokes would you enter to remove the space between the ‘/’ and ‘hadden’ without having to retype the entire line?
A. Ctrl-B, Del
B. Esc-b, Del
C. Esc-Del, Del
D. Ctrl-b, Del
Answer: B
Explanation: The Esc-b keystroke combination will move the cursor back one word (to the start of the word 'hadden'). The Del keystroke will delete the previous character; in this case, it will delete the space before the word 'hadden'.
Reference: http://sseti.udg.es/marga/books/O'Reilly-The_Linux_Web_Server-CD- Bookshelfv1.0/linux_web/lnut/ch07_06.htm
Incorrect Answers
A: The Ctrl-B keystroke will move the cursor back one letter.
C: The Esc-Del keystroke will cut the previous word, for pasting later.
D: The Ctrl-b keystroke will move the cursor back one letter. (Ctrl-b is the same as Ctrl-B.)
Two-fluid flow between rotating annular disks
The general hydrodynamic behavior at small clearance Reynolds numbers of two fluids of different density and viscosity occupying the finite annular space between a rotating and stationary disk is
explored using a simplified version of the Navier-Stokes equations which retains only the centrifugal force portion of the inertia terms. A criterion for selecting the annular flow fields that are
compatible with physical reservoirs is established and then used to determine the conditions under which two-fluid flows in the annulus might be expected for specific fluid combinations.
NASA STI/Recon Technical Report A
Pub Date: October 1975
• Annular Plates
• Flow Distribution
• Hydrodynamics
• Rotating Disks
• Two Fluid Models
• Viscous Fluids
• Boundary Value Problems
• Flow Velocity
• Fluid Flow
• Graphs (Charts)
• Navier-Stokes Equation
• Radial Velocity
• Velocity Distribution
• Fluid Mechanics and Heat Transfer
Algebraic topology

The early 20th century saw the emergence of a number of theories whose power and utility reside in large part in their generality. Typically, they are marked by an attention to the set or space of all examples of a particular kind. (Functional analysis is such an endeavour.) One of the most energetic of these general theories was that of algebraic topology. In this subject a variety of ways are developed for replacing a space by a group and a map between spaces by a map between groups. It is like using X-rays: information is lost, but the shadowy image of the original space may turn out to contain, in an accessible form, enough information to solve the question at hand.

Interest in this kind of research came from various directions. Galois's theory of equations was an example of what could be achieved by transforming a problem in one branch of mathematics into a problem in another, more abstract branch. Another impetus came from Riemann's theory of complex functions. He had studied algebraic functions, that is, loci defined by equations of the form f(x, y) = 0, where f is a polynomial in x whose coefficients are polynomials in y. When x and y are complex variables, the locus can be thought of as a real surface spread out over the x plane of complex numbers (today called a Riemann surface). To each value of x there correspond a finite number of values of y. Such surfaces are not easy to comprehend, and Riemann had proposed to draw curves along them in such a way that, if the surface was cut open along them, it could be opened out into a polygonal disk. He was able to establish a profound connection between the minimum number of curves needed to do this for a given surface and the number of functions (becoming infinite at specified points) that the surface could then support.

The natural problem was to see how far Riemann's ideas could be applied to the study of spaces of higher dimension. Here two lines of inquiry developed. One emphasized what could be obtained from looking at the projective geometry involved. This point of view was fruitfully applied by the Italian school of algebraic geometers. It ran into problems, which it was not wholly able to solve, having to do with the singularities a surface can possess. Whereas a locus given by f(x, y) = 0 may intersect itself only at isolated points, a locus given by an equation of the form f(x, y, z) = 0 may intersect itself along curves, a problem that caused considerable difficulties. The second approach emphasized what can be learned from the study of integrals along paths on the surface. This approach, pursued by Charles-Émile Picard and by Poincaré, provided a rich generalization of Riemann's original ideas.

On this base, conjectures were made and a general theory produced, first by Poincaré and then by the American engineer-turned-mathematician Solomon Lefschetz, concerning the nature of manifolds of arbitrary dimension. Roughly speaking, a manifold is the n-dimensional generalization of the idea of a surface; it is a space any small piece of which looks like a piece of n-dimensional space. Such an object is often given by a single algebraic equation in n + 1 variables. At first the work of Poincaré and of Lefschetz was concerned with how these manifolds may be decomposed into pieces, counting the number of pieces and decomposing them in their turn. The result was a list of numbers, called Betti numbers in honour of the Italian mathematician Enrico Betti, who had taken the first steps of this kind to extend Riemann's work. It was only in the late 1920s that the German mathematician Emmy Noether suggested how the Betti numbers might be thought of as measuring the size of certain groups. At her instigation a number of people then produced a theory of these groups, the so-called homology and cohomology groups of a space.

Two objects that can be deformed into one another will have the same homology and cohomology groups. To assess how much information is lost when a space is replaced by its algebraic topological picture, Poincaré asked the crucial converse question "According to what algebraic conditions is it possible to say that a space is topologically equivalent to a sphere?" He showed by an ingenious example that having the same homology is not enough and proposed a more delicate index, which has since grown into the branch of topology called homotopy theory. Being more delicate, it is both more basic and more difficult. There are usually standard methods for computing homology and cohomology groups, and they are completely known for many spaces. In contrast, there is scarcely an interesting class of spaces for which all the homotopy groups are known. Poincaré's conjecture that a space with the homotopy of a sphere actually is a sphere was shown to be true in the 1960s in dimensions five and above, and in the 1980s it was shown to be true for four-dimensional spaces. In 2006 Grigori Perelman was awarded a Fields Medal for proving Poincaré's conjecture true in three dimensions, the only dimension in which Poincaré had studied it.

Developments in pure mathematics

The interest in axiomatic systems at the turn of the century led to axiom systems for the known algebraic structures, that for the theory of fields, for example, being developed by the German mathematician Ernst Steinitz in 1910. The theory of rings (structures in which it is possible to add, subtract, and multiply but not necessarily divide) was much harder to formalize. It is important for two reasons: the theory of algebraic integers forms part of it, because algebraic integers naturally form into rings; and (as Kronecker and Hilbert had argued) algebraic geometry forms another part. The rings that arise there are rings of functions definable on the curve, surface, or manifold or are definable on specific pieces of it.

Problems in number theory and algebraic geometry are often very difficult, and it was the hope of mathematicians such as Noether, who laboured to produce a formal, axiomatic theory of rings, that, by working at a more rarefied level, the essence of the concrete problems would remain while the distracting special features of any given case would fall away. This would make the formal theory both more general and easier, and to a surprising extent these mathematicians were successful.

A further twist to the development came with the work of the American mathematician Oscar Zariski, who had studied with the Italian school of algebraic geometers but came to feel that their method of working was imprecise. He worked out a detailed program whereby every kind of geometric configuration could be redescribed in algebraic terms. His work succeeded in producing a rigorous theory, although some, notably Lefschetz, felt that the geometry had been lost sight of in the process.

The study of algebraic geometry was amenable to the topological methods of Poincaré and Lefschetz so long as the manifolds were defined by equations whose coefficients were complex numbers. But, with the creation of an abstract theory of fields, it was natural to want a theory of varieties defined by equations with coefficients in an arbitrary field. This was provided for the first time by the French mathematician André Weil, in his Foundations of Algebraic Geometry (1946), in a way that drew on Zariski's work without suppressing the intuitive appeal of geometric concepts. Weil's theory of polynomial equations is the proper setting for any investigation that seeks to determine what properties of a geometric object can be derived solely by algebraic means. But it falls tantalizingly short of one topic of importance: the solution of polynomial equations in integers. This was the topic that Weil took up next.

The central difficulty is that in a field it is possible to divide but in a ring it is not. The integers form a ring but not a field (dividing 1 by 2 does not yield an integer). But Weil showed that simplified versions (posed over a field) of any question about integer solutions to polynomials could be profitably asked. This transferred the questions to the domain of algebraic geometry. To count the number of solutions, Weil proposed that, since the questions were now geometric, they should be amenable to the techniques of algebraic topology. This was an audacious move, since there was no suitable theory of algebraic topology available, but Weil conjectured what results it should yield. The difficulty of Weil's conjectures may be judged by the fact that the last of them was a generalization to this setting of the famous Riemann hypothesis about the zeta function, and they rapidly became the focus of international attention.

Weil, along with Claude Chevalley, Henri Cartan, Jean Dieudonné, and others, created a group of young French mathematicians who began to publish virtually an encyclopaedia of mathematics under the name Nicolas Bourbaki, taken by Weil from an obscure general of the Franco-German War. Bourbaki became a self-selecting group of young mathematicians who were strong on algebra, and the individual Bourbaki members were interested in the Weil conjectures. In the end they succeeded completely. A new kind of algebraic topology was developed, and the Weil conjectures were proved. The generalized Riemann hypothesis was the last to surrender, being established by the Belgian Pierre Deligne in the early 1970s. Strangely, its resolution still leaves the original Riemann hypothesis unsolved.

Bourbaki was a key figure in the rethinking of structural mathematics. Algebraic topology was axiomatized by Samuel Eilenberg, a Polish-born American mathematician and Bourbaki member, and the American mathematician Norman Steenrod. Saunders Mac Lane, also of the United States, and Eilenberg extended this axiomatic approach until many types of mathematical structures were presented in families, called categories. Hence there was a category consisting of all groups and all maps between them that preserve multiplication, and there was another category of all topological spaces and all continuous maps between them. To do algebraic topology was to transfer a problem posed in one category (that of topological spaces) to another (usually that of commutative groups or rings). When he created the right algebraic topology for the Weil conjectures, the German-born French mathematician Alexandre Grothendieck, a Bourbaki of enormous energy, produced a new description of algebraic geometry. In his hands it became infused with the language of category theory. The route to algebraic geometry became the steepest ever, but the views from the summit have a naturalness and a profundity that have brought many experts to prefer it to the earlier formulations, including Weil's.

Grothendieck's formulation makes algebraic geometry the study of equations defined over rings rather than fields. Accordingly, it raises the possibility that questions about the integers can be answered directly. Building on the work of like-minded mathematicians in the United States, France, and Russia, the German Gerd Faltings triumphantly vindicated this approach when he solved the Englishman Louis Mordell's conjecture in 1983. This conjecture states that almost all polynomial equations that define curves have at most finitely many rational solutions; the cases excluded from the conjecture are the simple ones that are much better understood.

Meanwhile, Gerhard Frey of Germany had pointed out that, if Fermat's last theorem is false, so that there are integers u, v, w such that u^p + v^p = w^p (p greater than 5), then for these values of u, v, and p the curve y^2 = x(x - u^p)(x + v^p) has properties that contradict major conjectures of the Japanese mathematicians Taniyama Yutaka and Shimura Goro about elliptic curves. Frey's observation, refined by Jean-Pierre Serre of France and proved by the American Ken Ribet, meant that by 1990 Taniyama's unproven conjectures were known to imply Fermat's last theorem.

In 1993 the English mathematician Andrew Wiles established the Shimura-Taniyama conjectures in a large range of cases that included Frey's curve and therefore Fermat's last theorem, a major feat even without the connection to Fermat. It soon became clear that the argument had a serious flaw; but in May 1995 Wiles, assisted by another English mathematician, Richard Taylor, published a different and valid approach. In so doing, Wiles not only solved the most famous outstanding conjecture in mathematics but also triumphantly vindicated the sophisticated and difficult methods of modern number theory.
What is the fear of 666 called?
What is the fear of 666 called?
Hexakosioihexekontahexaphobia: Fear of the Number 666.
How do you spell Pneumonoultramicroscopicsilicovolcanoconiosis?
Also spelt pneumonoultramicroscopicsilicovolcanokoniosis. What is pneumonoultramicroscopicsilicovolcanoconiosis? noun | A lung disease caused by the inhalation of very fine silicate or quartz dust,
causing inflammation in the lungs.
What is the longest Korean word?
The longest word appearing in the Standard Korean Dictionary published by the National Institute of the Korean Language is 청자 양인각 연당초상감 모란 문은구 대접 (靑瓷陽印刻蓮唐草象嵌牡丹文銀釦대접); Revised Romanization: cheongjayang-in-gakyeondangchosang-gammoranmuneun-gudaejeop, which is a kind of ceramic bowl from the Goryeo dynasty; that word is …
What is the longest phobia?
What is the longest word in Spanish?
What is Megalophobia?
Also known as a “fear of large objects,” this condition is marked by significant nervousness that is so severe, you take great measures to avoid your triggers. It may also be serious enough to
interfere with your daily life. Like other phobias, megalophobia is tied to underlying anxiety.
Is the word supercalifragilisticexpialidocious a real word?
The Oxford English Dictionary defines the word as “a nonsense word, originally used esp. by children, and typically expressing excited approbation: fantastic, fabulous”, while Dictionary.com says it
is “used as a nonsense word by children to express approval or to represent the longest word in English.”
What is the longest protein?
What is the longest word in the world that takes 3 hours to say full word?
A word of warning… the “word” takes about 3.5 hours to say. The word is 189,819 letters long. It’s actually the name of a giant protein called Titin.
What fears are humans born with?
They are the fear of loud noises and the fear of falling. As for the universal ones, being afraid of heights is pretty common, but are you afraid of falling, or do you feel that you are in control enough not to be scared?
What is the longest word in the Hawaiian language?
Is there a word with 100 letters?
The longest words in Oxford Dictionaries are: antidisestablishmentarianism – opposition to the disestablishment of the Church of England – 28 letters.
letters. pneumonoultramicroscopicsilicovolcanoconiosis – a supposed lung disease – 45 letters.
What is the longest chemical formula?
(C18H24N2O6), a miticide and contact fungicide used to control powdery mildew in crops. The IUPAC name for Titin. This is the largest known protein and so has the longest chemical name. Written in
full, it contains 189,819 letters.
What’s the shortest word?
What is Athazagoraphobia phobia?
Athazagoraphobia is a fear of forgetting someone or something, as well as a fear of being forgotten. For example, you or someone close to you may have anxiety or fear of developing Alzheimer’s
disease or memory loss. This might come from caring for someone with Alzheimer’s disease or dementia.
Why is the chemical name for titin so long?
Linguistic significance The name titin is derived from the Greek Titan (a giant deity, anything of great size). As the largest known protein, titin also has the longest IUPAC name of a protein.
What is the least common phobia?
10 Least Common Phobias
• Ephebiphobia: The fear of youths.
• Ergasiophobia: The fear of work.
• Optophobia: The fear of opening one’s eyes.
• Neophobia: The fear of newness.
• Anthophobia: The fear of flowers.
• Pteronophobia: The fear of being tickled by feathers.
• Vestiphobia: The fear of clothing.
• Phronemophobia: The fear of thinking.
What level of math do you need for college?
Most colleges want students to have at least 3 years of high school math, though more selective colleges prefer 4 years. Prioritize taking several of the following courses: Algebra 1. Geometry.
Does math anxiety exist?
People who experience feelings of stress when faced with math-related situations may be experiencing what is called “math anxiety.” Math anxiety affects many people and is related to poor math
ability in school and later during adulthood.
What is the hardest math course in high school?
List of the Hardest Maths Class in High School
• Algebra.
• Calculus.
• Combinatory.
• Topology and Geometry.
• Dynamic system and Differential equations.
• Mathematical physics.
• Information theory and signal processing.
What is the order of math classes in college?
MATH Course Listing
• Topics for Mathematical Literacy (MATH 105, 3 Credits)
• College Algebra (MATH 107, 3 Credits)
• Trigonometry and Analytical Geometry (MATH 108, 3 Credits)
• Pre-Calculus (MATH 115, 3 Credits)
• Calculus I (MATH 140, 4 Credits)
• Calculus II (MATH 141, 4 Credits)
• Calculus III (MATH 241, 4 Credits)
Why are students afraid of mathematics?
LACK OF HANDLING THE PRESSURE One of the common reasons why students are Scared for Mathematics and why they fail in the subject is because of the peer pressure which they are not able to handle.
They have self-doubt on their abilities and are unable to cope with the pressure of performance at school and other levels.
What is the fear of math called?
All about Arithmophobia “Also known as Numerophobia, it is often an exaggerated, constant and irrational fear of numbers that can affect one’s daily routine. Performing complex mathematical
computations becomes a herculean task, with individuals stuttering and sloughing through the ups and downs of number.
What is the lowest level of math in college?
Students who start at the lowest level of remedial math may otherwise face a long slog through three or even four remedial courses in arithmetic, beginning algebra and intermediate algebra. And
that’s before they can even get to the first college-level math course, generally “college algebra” or pre-calculus.
What is the most rare phobia?
13 of the most unusual phobias
• Xanthophobia – fear of the colour yellow.
• Turophobia- fear of cheese.
• Somniphobia- fear of falling asleep.
• Coulrophobia – fear of clowns.
• Hylophobia- fear of trees.
• Omphalophobia- fear of the navel.
• Nomophobia- fear of being without mobile phone coverage.
• Ombrophobia- fear of rain.
What are people most scared of?
Top 10 Things People Fear Most
• Going to the dentist.
• Snakes.
• Flying.
• Spiders and insects.
• Enclosed spaces. Fear of enclosed spaces, or claustrophobia, plagues many people, even those who would not readily list it as their greatest fear.
• Mice.
• Dogs.
• Thunder and Lightning.
What are the different levels of math in college?
Levels Of Math In College
• Precalculus.
• Calculus I.
• Calculus II.
• Calculus III.
• Linear Algebra.
[Solved]: Questions Draw the Temperature- Entropy (T-s) diag | solutionspile.com
Questions:
1. Draw the Temperature-Entropy (T-s) diagrams of (a) an ideal Rankine cycle, (b) an ideal Reheat Rankine cycle, and (c) an ideal Regenerative Rankine cycle with an Open Feedwater Heater. (3 × 2 = 6 marks)
2. In a 210-MW steam power plant that operates on a simple ideal Rankine cycle, steam enters the turbine at 10 MPa and 500°C and is cooled in the condenser at a pressure of 10 kPa. Show the cycle on a T-s diagram with respect to saturation lines, and determine (a) the quality of the steam at the turbine exit, (b) the thermal efficiency of the cycle, and (c) the mass flow rate of the steam. (10 marks)
3. An ideal reheat Rankine cycle with water as the working fluid operates the boiler at , the reheater at 2000 kPa, and the condenser at 100 kPa. The temperature is 450°C at the entrance of the high-pressure and low-pressure turbines. The mass flow rate through the cycle is . Show the cycle on a T-s diagram with respect to saturation lines, and determine the power used by pumps, the power produced by the cycle, the rate of heat transfer in the reheater, and the thermal efficiency of this system. (14 marks)
Breakover Angle Calculator: Optimize Off-Road Capabilities
Home » Simplify your calculations with ease. » Mechanical Calculators »
Breakover Angle Calculator: Optimize Off-Road Capabilities
Breakover angle is a crucial parameter in the world of off-roading and vehicle design. It represents the maximum angle at which a vehicle can traverse a crest without its underbody or midpoint making
contact with the ground. In this blog post, we’ll explore the importance of break-over angle, learn how to calculate it and introduce a user-friendly Breakover Angle Calculator to help you make
informed decisions about your off-road vehicle’s performance.
Importance of Breakover Angle in Off-Roading and Vehicle Design
Breakover angle significantly impacts a vehicle’s off-road capabilities. A higher break-over angle indicates that the vehicle can traverse steeper crests without the risk of getting stuck or damaging
its undercarriage. This is particularly important for off-road enthusiasts who tackle challenging terrains and obstacles.
In vehicle design, the break-over angle is an essential consideration for off-road vehicles like SUVs, trucks, and all-terrain vehicles (ATVs). By optimizing the break-over angle, designers can
enhance a vehicle’s off-road performance and minimize the risk of damage during off-road adventures.
How to Calculate Breakover Angle
The Breakover Angle Formula
To calculate the breakover angle, you can use the following formula:
BA = 2 * atan(2 * GC / WB)
• BA is the Breakover Angle (degrees)
• GC is the ground clearance (in)
• WB is the wheelbase (in)
Practical Examples of Breakover Angle Calculations
Example 1: Suppose an off-road vehicle has a ground clearance of 12 inches and a wheelbase of 120 inches. To calculate the break-over angle, use the formula:
BA = 2 * atan(2 * GC / WB)
BA = 2 * atan(2 * 12 / 120)
BA = 2 * atan(0.2)
BA ≈ 22.62 degrees
In this case, the break-over angle is approximately 22.62 degrees.
Example 2: An SUV has a ground clearance of 8 inches and a wheelbase of 110 inches. To find the break-over angle, use the same formula:
BA = 2 * atan(2 * GC / WB)
BA = 2 * atan(2 * 8 / 110)
BA = 2 * atan(0.1455)
BA ≈ 16.55 degrees
The break-over angle of the SUV is approximately 16.55 degrees.
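The two worked examples can be reproduced in a few lines of Python. This is a sketch of the same formula; the function and variable names are our own, not part of the calculator's code:

```python
import math

def breakover_angle(ground_clearance, wheelbase):
    """Breakover angle in degrees; both inputs in the same length unit (e.g. inches)."""
    # math.atan returns radians, so convert before doubling.
    return 2 * math.degrees(math.atan(2 * ground_clearance / wheelbase))

print(round(breakover_angle(12, 120), 2))  # Example 1: 22.62
print(round(breakover_angle(8, 110), 2))   # Example 2: 16.55
```

Evaluating the arctangent in radians and converting to degrees afterward gives the same result as working in degrees throughout.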
Introducing the Breakover Angle Calculator
Key Features of the Calculator
Our Breakover Angle Calculator is a web-based tool to help you quickly and easily calculate break-over angles. Some of its key features include:
• Simple and intuitive interface
• Support for various units, including inches
• Embeddable in websites and blogs
• Real-time calculations as you input data
• Compatible with all major browsers
How to Use the Breakover Angle Calculator
Using the calculator is a breeze. Just follow these simple steps:
1. Enter the ground clearance (in inches) in the appropriate field.
2. Enter the wheelbase (in inches) in the corresponding field.
3. Click the “Calculate” button, and the calculator will display the break-over angle in degrees.
Benefits of Using the Calculator
The Breakover Angle Calculator offers several benefits, such as:
• Saving time and effort in performing manual calculations
• Reducing the risk of calculation errors
• Enhancing your understanding of break-over angle and its impact on vehicle performance
• Assisting in making informed decisions about off-road vehicle selection and design
Frequently Asked Questions (FAQs)
What are the units used for Breakover Angle calculations?
The break-over angle is usually measured in degrees. The ground clearance and wheelbase are typically measured in inches, but they can also be measured in centimeters or other units of length.
How does Breakover Angle affect off-road vehicle performance?
A higher break-over angle allows a vehicle to traverse steeper crests and more challenging terrain without the risk of getting stuck or damaging its undercarriage. A vehicle with a low break-over
angle may struggle on uneven terrain and could suffer damage to its underbody, ultimately limiting its off-road capabilities.
Can the calculator be used for various vehicle types?
Yes, the Breakover Angle Calculator can be used for a wide range of vehicles, including SUVs, trucks, ATVs, and even custom off-road vehicles. As long as you have the ground clearance and wheelbase
measurements, the calculator can help you determine the break-over angle.
Can I embed the Breakover Angle Calculator on my own website or blog?
Yes, you can embed the Breakover Angle Calculator into your website or blog using the provided HTML, CSS, and JavaScript code. Make sure to follow the instructions in the code comments and adapt the
styling as needed to match your site’s design.
Understanding break-over angles is essential for off-road enthusiasts, vehicle designers, and anyone interested in maximizing a vehicle’s off-road performance. Our Breakover Angle Calculator
simplifies the process of calculating break-over angles, allowing you to make informed decisions about your off-road vehicle’s capabilities. With this tool at your disposal, you’ll be better prepared
to tackle challenging terrain and enjoy the thrill of off-roading.
Formula for Exterior Angles and Interior Angles, illustrated examples with practice problems on how to calculate. (2024)
Interior Angle Sum Theorem
Interior Angles of a Polygons Worksheet
Exterior Angles of a Polygon Worksheet
What is true about the sum of interior angles of a polygon ?
The sum of the measures of the interior angles of a convex polygon with n sides is $ (n-2)\cdot180^{\circ} $
Shape Formula Sum Interior Angles
$$ \red 3 $$ sided polygon $$ (\red 3-2) \cdot180 $$ $$ 180^{\circ} $$
$$ \red 4 $$ sided polygon $$ (\red 4-2) \cdot 180 $$ $$ 360^{\circ} $$
$$ \red 6 $$ sided polygon $$ (\red 6-2) \cdot 180 $$ $$ 720^{\circ} $$
Problem 1
What is the total number of degrees of all interior angles of a triangle?
You can also use Interior Angle Theorem:$$ (\red 3 -2) \cdot 180^{\circ} = (1) \cdot 180^{\circ}= 180 ^{\circ} $$
Problem 2
What is the total number of degrees of all interior angles of the polygon ?
360° since this polygon is really just two triangles and each triangle has 180°
You can also use Interior Angle Theorem:$$ (\red 4 -2) \cdot 180^{\circ} = (2) \cdot 180^{\circ}= 360 ^{\circ} $$
Problem 3
What is the sum measure of the interior angles of the polygon (a pentagon) ?
Use Interior Angle Theorem:$$ (\red 5 -2) \cdot 180^{\circ} = (3) \cdot 180^{\circ}= 540 ^{\circ} $$
Problem 4
What is sum of the measures of the interior angles of the polygon (a hexagon) ?
Use Interior Angle Theorem: $$ (\red 6 -2) \cdot 180^{\circ} = (4) \cdot 180^{\circ}= 720 ^{\circ} $$
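The theorem used in Problems 1–4 is easy to check programmatically; here is a minimal sketch (the function name is ours):

```python
def interior_angle_sum(n):
    """Sum of the interior angles of an n-sided convex polygon, in degrees."""
    if n < 3:
        raise ValueError("a polygon needs at least 3 sides")
    return (n - 2) * 180

# The four problems above: triangle, quadrilateral, pentagon, hexagon.
for sides in (3, 4, 5, 6):
    print(sides, interior_angle_sum(sides))  # 180, 360, 540, 720
```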
Video Tutorial
on Interior Angles of a Polygon
Definition of a Regular Polygon:
A regular polygon is simply a polygon whose sides all have the same length and whose angles all have the same measure. You have probably heard of the equilateral triangle and the square, which are the
two most well-known and most frequently studied types of regular polygons.
Examples of Regular Polygons
Regular Hexagon
Regular Pentagon
More on regular polygons here .
Measure of a Single Interior Angle
Shape Formula Sum Interior Angles
$$ \red 3 $$ sided polygon $$ (\red 3-2) \cdot180 $$ $$ 180^{\circ} $$
$$ \red 4 $$ sided polygon $$ (\red 4-2) \cdot 180 $$ $$ 360^{\circ} $$
$$ \red 6 $$ sided polygon $$ (\red 6-2) \cdot 180 $$ $$ 720^{\circ} $$
What about when you just want 1 interior angle?
In order to find the measure of a single interior angle of a regular polygon (a polygon with sides of equal length and angles of equal measure) with n sides, we calculate the sum of the interior
angles, or $$ (\red n-2) \cdot 180 $$, and then divide that sum by the number of sides, $$ \red n$$.
The Formula
The measure of any interior angle of a regular polygon with $$ \red n $$ sides is
$ \text {any angle}^{\circ} = \frac{ (\red n -2) \cdot 180^{\circ} }{\red n} $
Example 1
Let's look at an example you're probably familiar with-- the good old triangle $$\triangle$$ . Now, remember this new rule above only applies to regular polygons. So, the only type of triangle we
could be talking about is an equilateral one like the one pictured below.
You might already know that the sum of the interior angles of a triangle measures $$ 180^{\circ}$$ and that in the special case of an equilateral triangle, each angle measures exactly $$ 60^{\circ}$$.
So, our new formula for finding the measure of an angle in a regular polygon is consistent with the rules for angles of triangles that we have known from past lessons.
Example 2
To find the measure of an interior angle of a regular octagon, which has 8 sides, apply the formula above as follows:
$ \text{any angle}^{\circ} = \frac{ (\red n -2) \cdot 180^{\circ} }{\red n} = \frac{(\red 8-2) \cdot 180}{\red 8} = 135^{\circ} $
Finding 1 interior angle of a regular Polygon
Problem 5
What is the measure of 1 interior angle of a regular octagon?
Substitute 8 (an octagon has 8 sides) into the formula to find a single interior angle
Problem 6
Calculate the measure of 1 interior angle of a regular dodecagon (12 sided polygon)?
Substitute 12 (a dodecagon has 12 sides) into the formula to find a single interior angle
Problem 7
Calculate the measure of 1 interior angle of a regular hexadecagon (16 sided polygon)?
Substitute 16 (a hexadecagon has 16 sides) into the formula to find a single interior angle
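Problems 5–7 all apply the same substitution, which a short Python sketch makes explicit (the function name is ours):

```python
def regular_interior_angle(n):
    """One interior angle of a regular n-gon: (n - 2) * 180 / n degrees."""
    return (n - 2) * 180 / n

print(regular_interior_angle(8))   # octagon: 135.0
print(regular_interior_angle(12))  # dodecagon: 150.0
print(regular_interior_angle(16))  # hexadecagon: 157.5
```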
Challenge Problem
What is the measure of 1 interior angle of a pentagon?
This question cannot be answered because the shape is not a regular polygon. You can only use the formula to find a single interior angle if the polygon is regular!
Consider, for instance, the irregular pentagon below.
You can tell, just by looking at the picture, that $$ \angle A and \angle B $$ are not congruent.
The moral of this story- While you can use our formula to find the sum of the interior angles of any polygon (regular or not), you can not use this page's formula for a single angle measure--except
when the polygon is regular.
How about the measure of an exterior angle?
Exterior Angles of a Polygon
Formula for sum of exterior angles:
The sum of the measures of the exterior angles of a polygon, one at each vertex, is 360°.
Measure of a Single Exterior Angle
Formula to find 1 exterior angle of a regular convex polygon of n sides = $$ \frac{360^{\circ}}{\red n} $$
$$ \angle1 + \angle2 + \angle3 = 360° $$
$$ \angle1 + \angle2 + \angle3 + \angle4 = 360° $$
$$ \angle1 + \angle2 + \angle3 + \angle4 + \angle5 = 360° $$
Practice Problems
Problem 8
Calculate the measure of 1 exterior angle of a regular pentagon?
Substitute 5 (a pentagon has 5sides) into the formula to find a single exterior angle
Problem 9
What is the measure of 1 exterior angle of a regular decagon (10 sided polygon)?
Substitute 10 (a decagon has 10 sides) into the formula to find a single exterior angle
Problem 10
What is the measure of 1 exterior angle of a regular dodecagon (12 sided polygon)?
Substitute 12 (a dodecagon has 12 sides) into the formula to find a single exterior angle
Challenge Problem
What is the measure of 1 exterior angle of a pentagon?
This question cannot be answered because the shape is not a regular polygon. Although you know that the sum of the exterior angles is 360°, you can only use the formula to find a single exterior angle if the
polygon is regular!
Consider, for instance, the pentagon pictured below. Even though we know that all the exterior angles add up to 360°, we can see, by just looking, that $$ \angle A $$ and $$ \angle B $$
are not congruent.
Determine Number of Sides from Angles
It's possible to figure out how many sides a polygon has based on how many degrees are in its exterior or interior angles.
Problem 11
If each exterior angle measures 10°, how many sides does this polygon have?
Use formula to find a single exterior angle in reverse and solve for 'n'.
Problem 12
If each exterior angle measures 20°, how many sides does this polygon have?
Use formula to find a single exterior angle in reverse and solve for 'n'.
Problem 13
If each exterior angle measures 15°, how many sides does this polygon have?
Use formula to find a single exterior angle in reverse and solve for 'n'.
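Problems 11–13 invert the single-exterior-angle formula; a quick sketch (with our own helper name) shows why a non-integer result means no such polygon exists:

```python
def sides_from_exterior(exterior_deg):
    """Number of sides of a regular polygon whose exterior angles each measure exterior_deg."""
    return 360 / exterior_deg

for angle in (10, 20, 15, 80):
    print(angle, sides_from_exterior(angle))  # 36.0, 18.0, 24.0, 4.5
```

A non-whole-number answer (such as 4.5 for an 80° exterior angle) means no regular polygon has exterior angles of that measure.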
Challenge Problem
If each exterior angle measures 80°, how many sides does this polygon have?
There is no solution to this question.
When you use formula to find a single exterior angle to solve for the number of sides , you get a decimal (4.5), which is impossible. Think about it: How could a polygon have 4.5 sides? A
quadrilateral has 4 sides. A pentagon has 5 sides.
LCC '23 Contest 3 J5 - Gingerbread Houses
Submit solution
Points: 7 (partial)
Time limit: 1.0s
Memory limit: 256M
Christmas is almost over, and so as one last activity, Elsa and Anna invite their friends over to build gingerbread houses! Each person builds a community full of gingerbread houses with trails,
connecting each gingerbread house with another. Some trails may be longer than others, and so each gingerbread trail is given a distance, . It is guaranteed that every gingerbread house can be
reached in a community from another gingerbread house.
Anna notices that some communities could be small and lonely, and so to expand the economy, she wants to connect all these communities together! Since the number of gingerbread materials are limited,
she wants to know the minimum number of trails that need to be made, as well as which houses the trails will connect, so that the maximum distance between any two houses within all the communities is
minimized. By giving this challenge to everyone, it will force everyone to compete over the best answer!
Elsa wants to solve this problem first, and so before Anna reveals any information, she asks you to create a program that will give her the answer! Will you help Elsa or get turned to ice?
All the houses in each community will have a unique integer representation.
Subtask 1 (50%)
Input Specification
The first line will contain integer .
For the next communities, there will be 1 integer, , followed by lines.
Each of these lines will contain values , , and . This will result in a trail from house to , with a distance of . This will be -indexed.
Output Specification
Print K, the minimum number of houses that need to be connected.
For the next K lines, print the house number which should form a trail to another house. The house number order should based on the order of the community it came from, so if the answer is house 3 in
and house 1 , then you would print 3, followed by a 1 on the next line. If there are two answers for which house to pick, print the one with the smaller house number.
Sample Input
Sample Output
In the first community, there should be a trail formed with house 1.
In the second community, there should be a trail formed with house 4.
In the third community, there should be a trail formed with house 1.
Saleh Tanveer
Contact Details
tanveer@math.ohio-state.edu Professor of Mathematics
The Ohio State University
231 West 18th Avenue
Columbus, OH 43210-1174
Office: 402 Math Tower
Office phone: 1-614-292-5710
Fax: 1-614-292-1479
Research Interests
• Asymptotics
• Free boundary problems
• Partial Differential Equations
• Fluid Mechanics and Turbulence
Personal Information
Recent preprints
Recent Talks
Course Material for Au, '20
Last updated: 08/24/20
Questions? tanveer@math.ohio-state.edu
Machine learning (ML) has become increasingly popular in data science applications due to its ability to analyze complex relationships automatically without explicit programming [
]. Artificial neural networks (ANN), in particular, have gained attention for their capability to analyze large and complex datasets that cannot be easily simplified using traditional statistical
techniques [
]. ANN has a long-established history in data science, and its wide range of applications makes it a powerful tool in data analysis, prediction and decision-making. ANNs can detect non-linear
relationships between input variables, extending their application to various fields like healthcare, climate and weather, stock markets, transportation systems and more. ANNs have also proved their
applicability in handling problems in agriculture, medical science, education, finance, cyber security, and commodity trading. These neural networks have successfully found solutions to problems that
could not be solved by the computational ability of conventional procedures. ANN has been keenly used by researchers in the field of water resources management such as estimation of evaporation
losses, exploration of association between groundwater and drought, the prediction of groundwater salinity, groundwater quality forecasting, prediction of suspended sediment levels, determination of
flow friction factors in irrigation pipes, rainfall-runoff estimation, studying soil moisture using satellite data, modeling of contaminant transport, mapping vulnerability of saltwater intrusion,
and modeling of irrigation water infiltration [
]. The applications of artificial intelligence in predicting and monitoring groundwater quality and quantity are rapidly growing. ANN offers advantages in reducing the time needed for data sampling
and its ability to identify the nonlinear patterns of input and output makes it superior compared to other classical statistical methods. These prediction models have the potential to be very
accurate in predicting water quality parameters [
]. In recent times, ANN, ANFIS and fuzzy logic have been widely used in predicting and monitoring groundwater quality and quantity [
]. Nonlinear methods such as ANNs, which are suited for complex models, are used for the analysis of real world temporal data. Neural networks provide a powerful inference engine for regression
analysis, which stems from their ability to map nonlinear relationships, a task that is more difficult and less successful with conventional time-series analysis [
]. Since environmental data is inherently complex, with data sets containing nonlinearities; temporal, spatial, and seasonal trends; and non-Gaussian distributions, neural networks are widely preferred.
One of the main advantages of ML is that it helps in solving scaling issues from a data-driven perspective and can also help to build uniform parameterization schemes. New advances in ML models
present opportunities to understand the network instead of perceiving it as a black box. These models can be combined with optimization algorithms to yield better results and more robust models.
Researchers have combined the ability of nature-inspired optimization algorithms to optimize neural networks, helping to produce better prediction results. Lu et al. [
] adopted an ant colony optimization (ACO) model to train the perceptron and to predict pollutant levels. The approach was shown to be feasible and effective in solving real air-quality problems
by comparison with the simple back-propagation (BP) algorithm [
]. A modified ACO in conjunction with a simulated annealing technique was also studied [
]. An ACO-based neural network was used to analyze the outcomes of construction claims, and the performance of the ACO-based ANN was found to be better than that of BP [
Groundwater plays a significant role in satisfying global water demand. Globally, over 2 billion people rely on groundwater as a primary source of water [
]. Several regions of the world depend on the use of groundwater for various requirements. In India too, about 80% of the rural population and 50% of the urban population uses groundwater for
domestic purposes [
]. Overexploitation in several parts of the country has resulted in groundwater contamination, declining groundwater levels, drying of springs and shallow aquifers, and land subsidence in some cases
]. Along with declining water levels, deterioration of groundwater quality has also become a growing concern. Groundwater quality depends on geological as well as the anthropogenic features of a
region. Over the past decades, many anthropogenic and geogenic contaminants in groundwater have emerged as serious threats to human health when consumed orally. Ingestion of contaminated groundwater
can cause severe health effects and can also cause chronic health conditions like cancer [
]. Thus, groundwater quality assessment and monitoring are necessary considering the potential risk of groundwater contamination and its effects on suitability for human consumption [
]. Hence, water quality monitoring plays an important role in water resources management. Conventional water quality monitoring techniques involve manual collection of water samples and analysis in
the laboratory. This process is expensive, time consuming and involves lot of manual labor. Data-driven models based on artificial intelligence can efficiently be used to solve such problems and
overcome these difficulties especially when historic quality data is available. The conjunction of ACO with ANN is a technique used successfully in optimizing parameters in other research areas.
However, until now no one has explored the applicability of this technique to predict multiple groundwater quality parameters, although it has been used in several other domains in water resources
management. Hence, the aim of our study is to build an ant colony optimized multiperceptron neural network for predicting multiple groundwater quality parameters.
Constructible universe
"Gödel universe" redirects here. For Kurt Gödel's cosmological solution to the Einstein field equations, see Gödel metric.
In mathematics, in set theory, the constructible universe (or Gödel's constructible universe), denoted L, is a particular class of sets that can be described entirely in terms of simpler sets. It was
introduced by Kurt Gödel in his 1938 paper "The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis".^[1] In this, he proved that the constructible universe is an inner
model of ZF set theory, and also that the axiom of choice and the generalized continuum hypothesis are true in the constructible universe. This shows that both propositions are consistent with the
basic axioms of set theory, if ZF itself is consistent. Since many other theorems only hold in systems in which one or both of the propositions is true, their consistency is an important result.
What is L?
L can be thought of as being built in "stages" resembling the von Neumann universe, V. The stages are indexed by ordinals. In von Neumann's universe, at a successor stage, one takes V[α+1] to be the
set of all subsets of the previous stage, V[α]. By contrast, in Gödel's constructible universe L, one uses only those subsets of the previous stage that are:
By limiting oneself to sets defined only in terms of what has already been constructed, one ensures that the resulting sets will be constructed in a way that is independent of the peculiarities of
the surrounding model of set theory and contained in any such model.
L is defined by transfinite recursion as follows:
• L[0] = {} (the empty set).
• L[α+1] = Def(L[α]), the set of definable subsets of L[α].
• If λ is a limit ordinal, then L[λ] is the union of the sets L[α] for all α < λ.
• L is the union of the L[α] over all ordinals α.
If z is an element of L[α], then z = {y | y ∈ L[α] and y ∈ z} ∈ Def (L[α]) = L[α+1]. So L[α] is a subset of L[α+1], which is a subset of the power set of L[α]. Consequently, this is a tower of nested
transitive sets. But L itself is a proper class.
The elements of L are called "constructible" sets; and L itself is the "constructible universe". The "axiom of constructibility", aka "V=L", says that every set (of V) is constructible, i.e. in L.
Additional facts about the sets L[α]
An equivalent definition for L[α] is:
For any ordinal α, L[α] is the union of Def(L[β]) taken over all ordinals β < α.
For any finite ordinal n, the sets L[n] and V[n] are the same (whether V equals L or not), and thus L[ω] = V[ω]: their elements are exactly the hereditarily finite sets. Equality beyond this point
does not hold. Even in models of ZFC in which V equals L, L[ω+1] is a proper subset of V[ω+1], and thereafter L[α+1] is a proper subset of the power set of L[α] for all α > ω. On the other hand, V
equals L does imply that V[α] equals L[α] if α = ω[α], for example if α is inaccessible. More generally, V equals L implies H[α] equals L[α] for all infinite cardinals α.
If α is an infinite ordinal then there is a bijection between L[α] and α, and the bijection is constructible. So these sets are equinumerous in any model of set theory that includes them.
As defined above, Def(X) is the set of subsets of X defined by Δ[0] formulas (that is, formulas of set theory containing only bounded quantifiers) that use as parameters only X and its elements.
An alternate definition, due to Gödel, characterizes each L[α+1] as the intersection of the power set of L[α] with the closure of L[α] ∪ {L[α]} under a collection of nine explicit functions, similar
to Gödel operations. This definition makes no reference to definability.
All arithmetical subsets of ω and relations on ω belong to L[ω+1] (because the arithmetic definition gives one in L[ω+1]). Conversely, any subset of ω belonging to L[ω+1] is arithmetical (because
elements of L[ω] can be coded by natural numbers in such a way that ∈ is definable, i.e., arithmetic). On the other hand, L[ω+2] already contains certain non-arithmetical subsets of ω, such as the
set of (natural numbers coding) true arithmetical statements (this can be defined from L[ω+1] so it is in L[ω+2]).
All hyperarithmetical subsets of ω and relations on ω belong to L[ω[1]^CK] (where ω[1]^CK stands for the Church–Kleene ordinal), and conversely any subset of ω that belongs to L[ω[1]^CK] is hyperarithmetical.^[2]
L is a standard inner model of ZFC
L is a standard model, i.e. it is a transitive class and it uses the real element relationship, so it is well-founded. L is an inner model, i.e. it contains all the ordinal numbers of V and it has no
"extra" sets beyond those in V, but it might be a proper subclass of V. L is a model of ZFC, which means that it satisfies the following axioms:
• Axiom of regularity: Every non-empty set x contains some element y such that x and y are disjoint sets.
(L,∈) is a substructure of (V,∈), which is well founded, so L is well founded. In particular, if x∈L, then by the transitivity of L, y∈L. If we use this same y as in V, then it is still disjoint
from x because we are using the same element relation and no new sets were added.
• Axiom of extensionality: Two sets are the same if and only if they have the same elements.
If x and y are in L and they have the same elements in L, then by L's transitivity, they have the same elements (in V). So they are equal (in V and thus in L).
• Axiom of empty set: {} is a set.
{} = L[0] = {y | y∈L[0] and y=y} ∈ L[1]. So {} ∈ L. Since the element relation is the same and no new elements were added, this is the empty set of L.
• Axiom of pairing: If x and y are sets, then {x,y} is a set.
If x∈L and y∈L, then there is some ordinal α such that x∈L[α] and y∈L[α]. Then {x,y} = {s | s∈L[α] and (s=x or s=y)} ∈ L[α+1]. Thus {x,y} ∈ L and it has the same meaning for L as for V.
• Axiom of union: For any set x there is a set y whose elements are precisely the elements of the elements of x.
If x ∈ L[α], then its elements are in L[α] and their elements are also in L[α]. So y is a subset of L[α]. y = {s | s∈L[α] and there exists z∈x such that s∈z} ∈ L[α+1]. Thus y ∈ L.
• Axiom of infinity: There exists a set x such that {} is in x and whenever y is in x, so is the union y U {y}.
From transfinite induction, we get that each ordinal α ∈ L[α+1]. In particular, ω ∈ L[ω+1] and thus ω ∈ L.
• Axiom of separation: Given any set S and any proposition P(x,z[1],...,z[n]), {x|x∈S and P(x,z[1],...,z[n])} is a set.
By induction on subformulas of P, one can show that there is an α such that L[α] contains S and z[1],...,z[n] and (P is true in L[α] if and only if P is true in L (this is called the "reflection
principle")). So {x | x∈S and P(x,z[1],...,z[n]) holds in L} = {x | x∈L[α] and x∈S and P(x,z[1],...,z[n]) holds in L[α]} ∈ L[α+1]. Thus the subset is in L.
• Axiom of replacement: Given any set S and any mapping (formally defined as a proposition P(x,y) where P(x,y) and P(x,z) implies y = z), {y | there exists x∈S such that P(x,y)} is a set.
Let Q(x,y) be the formula that relativizes P to L, i.e. all quantifiers in P are restricted to L. Q is a much more complex formula than P, but it is still a finite formula, and since P was a
mapping over L, Q must be a mapping over V; thus we can apply replacement in V to Q. So {y | y∈L and there exists x∈S such that P(x,y) holds in L} = {y | there exists x∈S such that Q(x,y)} is a
set in V and a subclass of L. Again using the axiom of replacement in V, we can show that there must be an α such that this set is a subset of L[α] ∈ L[α+1]. Then one can use the axiom of
separation in L to finish showing that it is an element of L.
• Axiom of power set: For any set x there exists a set y, such that the elements of y are precisely the subsets of x.
In general, some subsets of a set in L will not be in L. So the whole power set of a set in L will usually not be in L. What we need here is to show that the intersection of the power set with L
is in L. Use replacement in V to show that there is an α such that the intersection is a subset of L[α]. Then the intersection is {z | z∈L[α] and z is a subset of x} ∈ L[α+1]. Thus the required
set is in L.
• Axiom of choice: Given a set x of mutually disjoint nonempty sets, there is a set y (a choice set for x) containing exactly one element from each member of x.
One can show that there is a definable well-ordering of L, whose definition works the same way within L itself. So one chooses the least element of each member of x to form y, using the axioms of union and separation in L.
Notice that the proof that L is a model of ZFC only requires that V be a model of ZF, i.e. we do NOT assume that the axiom of choice holds in V.
L is absolute and minimal
If W is any standard model of ZF sharing the same ordinals as V, then the L defined in W is the same as the L defined in V. In particular, L[α] is the same in W and V, for any ordinal α. And the same
formulas and parameters in Def (L[α]) produce the same constructible sets in L[α+1].
Furthermore, since L is a subclass of V and, similarly, L is a subclass of W, L is the smallest class containing all the ordinals that is a standard model of ZF. Indeed, L is the intersection of all
such classes.
If there is a set W in V that is a standard model of ZF, and the ordinal κ is the set of ordinals that occur in W, then L[κ] is the L of W. If there is a set that is a standard model of ZF, then the
smallest such set is such a L[κ]. This set is called the minimal model of ZFC. Using the downward Löwenheim–Skolem theorem, one can show that the minimal model (if it exists) is a countable set.
Of course, any consistent theory must have a model, so even within the minimal model of set theory there are sets that are models of ZF (assuming ZF is consistent). However, those set models are
non-standard. In particular, they do not use the normal element relation and they are not well founded.
Because both the L of L and the V of L are the real L and both the L of L[κ] and the V of L[κ] are the real L[κ], we get that V=L is true in L and in any L[κ] that is a model of ZF. However, V=L does
not hold in any other standard model of ZF.
L and large cardinals
Since On⊂L⊆V, properties of ordinals that depend on the absence of a function or other structure (i.e. Π[1]^ZF formulas) are preserved when going down from V to L. Hence initial ordinals of cardinals
remain initial in L. Regular ordinals remain regular in L. Weak limit cardinals become strong limit cardinals in L because the generalized continuum hypothesis holds in L. Weakly inaccessible
cardinals become strongly inaccessible. Weakly Mahlo cardinals become strongly Mahlo. And more generally, any large cardinal property weaker than 0^# (see the list of large cardinal properties) will
be retained in L.
However, 0^# is false in L even if true in V. So all the large cardinals whose existence implies 0^# cease to have those large cardinal properties, but retain the properties weaker than 0^# which
they also possess. For example, measurable cardinals cease to be measurable but remain Mahlo in L.
Interestingly, if 0^# holds in V, then there is a closed unbounded class of ordinals that are indiscernible in L. While some of these are not even initial ordinals in V, they have all the large
cardinal properties weaker than 0^# in L. Furthermore, any strictly increasing class function from the class of indiscernibles to itself can be extended in a unique way to an elementary embedding of
L into L. This gives L a nice structure of repeating segments.
L can be well-ordered
There are various ways of well-ordering L. Some of these involve the "fine structure" of L, which was first described by Ronald Bjorn Jensen in his 1972 paper entitled "The fine structure of the
constructible hierarchy". Instead of explaining the fine structure, we will give an outline of how L could be well-ordered using only the definition given above.
Suppose x and y are two different sets in L and we wish to determine whether x<y or x>y. If x first appears in L[α+1] and y first appears in L[β+1] and β is different from α, then let x<y if and only
if α<β. Henceforth, we suppose that β=α.
Remember that L[α+1] = Def (L[α]), which uses formulas with parameters from L[α] to define the sets x and y. If one discounts (for the moment) the parameters, the formulas can be given a standard
Gödel numbering by the natural numbers. If Φ is the formula with the smallest Gödel number that can be used to define x, and Ψ is the formula with the smallest Gödel number that can be used to define
y, and Ψ is different from Φ, then let x<y if and only if Φ<Ψ in the Gödel numbering. Henceforth, we suppose that Ψ=Φ.
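A toy illustration of such a numbering (a hypothetical scheme, much simpler than any standard Gödel numbering): treat each formula as a string over a fixed finite alphabet and read its symbols as digits of a number. Distinct formulas then receive distinct natural numbers, which is all the well-ordering argument needs:

```python
ALPHABET = "()∈=∧¬∀xy|"  # hypothetical symbol set, for illustration only

def godel_number(formula):
    """Read the formula's symbols as digits in base len(ALPHABET)+1.

    Digit 0 is never used, so no two distinct strings collide, and
    shorter formulas always receive smaller numbers than longer ones."""
    base = len(ALPHABET) + 1
    n = 0
    for ch in formula:
        n = n * base + ALPHABET.index(ch) + 1
    return n

print(godel_number("x∈y"))                               # 1010
print(godel_number("x∈y") < godel_number("∀x(x=x)"))     # True
```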
Suppose that Φ uses n parameters from L[α]. Suppose z[1],...,z[n] is the sequence of parameters that can be used with Φ to define x, and w[1],...,w[n] does the same for y. Then let x<y if and only if
either z[n]<w[n] or (z[n]=w[n] and z[n-1]<w[n-1]) or (z[n]=w[n] and z[n-1]=w[n-1] and z[n-2]<w[n-2]) and so on. This is called the reverse-lexicographic ordering; if there are multiple sequences of parameters that define one of the sets, we choose the least one under this ordering. It is understood that each parameter's possible values are ordered according to the restriction of the ordering of L to L[α], so this definition involves transfinite recursion on α.
The well-ordering of the values of single parameters is provided by the inductive hypothesis of the transfinite induction. The values of n-tuples of parameters are well-ordered by the product
ordering. The formulas with parameters are well-ordered by the ordered sum (by Gödel numbers) of well-orderings. And L is well-ordered by the ordered sum (indexed by α) of the orderings on L[α+1].
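The reverse-lexicographic comparison of parameter sequences described above can be made concrete. A small Python sketch (an illustration only; here the parameters are modeled as integers that are already ordered by <):

```python
def revlex_less(z, w):
    """True if tuple z precedes tuple w in the reverse-lexicographic
    order: compare the last coordinates first, then work backwards."""
    for a, b in zip(reversed(z), reversed(w)):
        if a != b:
            return a < b
    return False  # equal sequences

print(revlex_less((9, 1), (0, 2)))  # True: last coordinates decide, 1 < 2
print(revlex_less((1, 5), (3, 5)))  # True: tie at the end, then 1 < 3
```

If the coordinate order is a well-order, so is this product comparison on n-tuples, which is what the transfinite recursion on α relies on.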
Notice that this well-ordering can be defined within L itself by a formula of set theory with no parameters, only the free-variables x and y. And this formula gives the same truth value regardless of
whether it is evaluated in L, V, or W (some other standard model of ZF with the same ordinals) and we will suppose that the formula is false if either x or y is not in L.
It is well known that the axiom of choice is equivalent to the ability to well-order every set. Being able to well-order the proper class V (as we have done here with L) is equivalent to the axiom of
global choice, which is more powerful than the ordinary axiom of choice because it also covers proper classes of non-empty sets.
L has a reflection principle
Proving that the axiom of separation, axiom of replacement, and axiom of choice hold in L requires (at least as shown above) the use of a reflection principle for L. Here we describe such a principle.
By mathematical induction on n<ω, we can use ZF in V to prove that for any ordinal α, there is an ordinal β>α such that for any sentence P(z[1],...,z[k]) with z[1],...,z[k] in L[β] and containing
fewer than n symbols (counting a constant symbol for an element of L[β] as one symbol) we get that P(z[1],...,z[k]) holds in L[β] if and only if it holds in L.
The generalized continuum hypothesis holds in L
Let S ∈ L[α], and let T be any constructible subset of S. Then there is some β with T ∈ L[β+1], so T = {x | x∈L[β] and x∈S and Φ(x,z[1],...,z[n])} = {x | x∈S and Φ(x,z[1],...,z[n])}, for some formula Φ and some z[1],...,z[n] drawn from L[β]. By the downward Löwenheim–Skolem theorem, there must be some transitive set K containing L[α] and some w[1],...,w[n], and having the same first-order theory as L[β] with the w's substituted for the z's; and this K will have the same cardinal as L[α]. Since V=L is true in L[β], it is also true in K, so K = L[γ] for some γ having the same cardinal as α. And T = {x | x∈L[β] and x∈S and Φ(x,z[1],...,z[n])} = {x | x∈L[γ] and x∈S and Φ(x,w[1],...,w[n])} because L[β] and L[γ] have the same theory. So T is in fact in L[γ+1].
So all the constructible subsets of an infinite set S have ranks with (at most) the same cardinal κ as the rank of S; it follows that if α is the initial ordinal for κ^+, then L[α] serves as the "power set" of S within L. And this in turn means that the "power set" of S has cardinal at most ||α||. Assuming S itself has cardinal κ, the "power set" must then have cardinal exactly κ^+. But this is precisely the generalized continuum hypothesis relativized to L.
Constructible sets are definable from the ordinals
There is a formula of set theory that expresses the idea that X=L[α]. It has only free variables for X and α. Using this we can expand the definition of each constructible set. If s∈L[α+1], then s =
{y|y∈L[α] and Φ(y,z[1],...,z[n]) holds in (L[α],∈)} for some formula Φ and some z[1],...,z[n] in L[α]. This is equivalent to saying that: for all y, y∈s if and only if [there exists X such that X=L[
α] and y∈X and Ψ(X,y,z[1],...,z[n])] where Ψ(X,...) is the result of restricting each quantifier in Φ(...) to X. Notice that each z[k]∈L[β+1] for some β<α. Combine formulas for the z's with the
formula for s and apply existential quantifiers over the z's outside and one gets a formula that defines the constructible set s using only the ordinals α that appear in expressions like X=L[α] as parameters.
Example: The set {5,ω} is constructible. It is the unique set s that satisfies a certain (rather long) formula of set theory whose only parameters are ordinals. Even that formula would be a simplification of what the instructions given in the first paragraph would yield. But the point remains: there is a formula of set theory that is true only for the desired constructible set s and that contains parameters only for ordinals.
Relative constructibility
Sometimes it is desirable to find a model of set theory that is narrow like L, but that includes or is influenced by a set that is not constructible. This gives rise to the concept of relative
constructibility, of which there are two flavors, denoted L(A) and L[A].
The class L(A) for a non-constructible set A is the intersection of all classes that are standard models of set theory and contain A and all the ordinals.
L(A) is defined by transfinite recursion as follows:
• L[0](A) = the smallest transitive set containing A as an element, i.e. the transitive closure of {A}.
• L[α+1](A) = Def (L[α](A))
• If λ is a limit ordinal, then L[λ](A) is the union of L[α](A) over all α < λ.
• L(A) is the union of L[α](A) over all ordinals α.
If L(A) contains a well-ordering of the transitive closure of {A}, then this can be extended to a well-ordering of L(A). Otherwise, the axiom of choice will fail in L(A).
A common example is L(R), the smallest model that contains all the real numbers, which is used extensively in modern descriptive set theory.
The class L[A] is the class of sets whose construction is influenced by A, where A may be a (presumably non-constructible) set or a proper class. The definition of this class uses Def[A] (X), which
is the same as Def (X) except instead of evaluating the truth of formulas Φ in the model (X,∈), one uses the model (X,∈,A) where A is a unary predicate. The intended interpretation of A(y) is y∈A.
Then the definition of L[A] is exactly that of L only with Def replaced by Def[A].
L[A] is always a model of the axiom of choice. Even if A is a set, A is not necessarily itself a member of L[A], although it always is if A is a set of ordinals.
It is essential to remember that the sets in L(A) or L[A] are usually not actually constructible and that the properties of these models may be quite different from the properties of L itself.
1. Gödel, 1938
2. Barwise 1975, page 60 (comment following proof of theorem 5.9)
This article is issued from Wikipedia, version of 9/3/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
by raja 2010-01-28 18:45:27
Algebra, trigonometry and calculus also originated in India. Quadratic equations were used by Sridharacharya in the 11th century. The largest numbers the Greeks and the Romans used were of the order of 10^6, whereas Hindus used numbers as big as 10^53 (i.e. 10 to the power of 53), with specific names, as early as 5000 B.C. during the Vedic period. Even today, the largest commonly used number is tera: 10^12 (10 to the power of 12).
CSCE 317
CSCE 317: Computer Systems Engineering
TTh 1625-1740 SWGN 2A14
Prerequisites: CSCE 212 (Introduction to Computer Architecture) MATH 242 (Elementary Differential Equations), STAT 509 (Statistics for Engineers).
Instructor: Marco Valtorta
Office: Swearingen 3A55, 777-4641
E-mail: mgv@cse.sc.edu
Office Hours: 1030-1200 TTh
Any student with a documented disability should contact the Office of Student Disability Services at (803)777-6142 to make arrangements for proper accommodations.
Reference materials:
Mor Harchol-Balter. Performance Modeling and Design of Computer Systems: Queueing Theory in Action. Cambridge University Press, 2013 (ISBN 9781107027503). We will refer to this text as [H] in the following.
The departmental syllabus for the course lists the following four course outcomes. We will concentrate on the third one.
take an overall system and lifecycle view of the design and operation of a system,
model and evaluate the reliability of system architectures,
model and evaluate the performance and dynamic behavior of a system,
model and evaluate the economics of cash flows in system design, development, and operation.
Some assignments are only listed in the lecture log.
Final exam from spring 2015.
Quiz 1 of 2017-01-17, with answer
Quiz 2 of 2017-01-26, with answer
Quiz 3 of 2017-02-16, with answer
Introductory slides
Introduction to Probability
Notes used in 2017-01-12 class
Notes for probability review (first part)
Notes for probability review used in 2017-01-24 class
More notes for probability review used in 2017-01-24 class
More notes for probability review used in 2017-01-26 class
More notes for probability review used in 2017-01-26 class
More notes for probability review used in 2017-01-31 class
More notes (on reliability) used in 2017-02-07
Notes on Ch.4 [H] used in class on 2017-02-21
Notes on Ch.5 [H] used in class on 2016-02-28
Notes on Ch.6 [H] used in class on 2017-03-14
Notes on Ch.7 [H] used in class on 2017-03-23
Notes on Ch.8 [H] used in class on 2017-04-04
Notes on Ch.9 [H] used in class on 2017-04-13 and 18
Notes on Ch.10 [H] used in class on 2017-04-18 and 20
The USC Blackboard has a site for this course.
Some useful links:
1. Ivo Adan and Jaques Resing. Queueing Theory. Dated March 26, 2015. Accessed 2016-01-11 (local copy).
2. Theoretical Computer Science Cheat Sheet, by Steve Seiden (This version has the Escher's knot figure but is otherwise harder to read.)
3. Bianca Schroeder, Adam Wierman, and Mor Harchol-Balter. "Open Versus Closed: A Cautionary Tale." Proceedings of the Conference on Networked Systems Design and Implementation (NSDI 2006), San
Jose, CA, May 2006, 239-252 (local copy).
how much does it cost for a cscs card – sssts.net
Construction workers often pay excessive amounts for CSCS cards. Find out the true cost of a CSCS card and how to apply for one.
The Cost of CSCS Card: Everything You Need to Know
Learn all about the cost of CSCS cards, including the factors that affect the price and whether it’s worth the investment. Find out how to apply for a CSCS card and avoid overpaying.
Scripting API
For a given plane described by planeNormal and a given vector vector, Vector3.ProjectOnPlane generates a new vector orthogonal to planeNormal and parallel to the plane. Note: planeNormal does not
need to be normalized.
The red line represents vector, the yellow line represents planeNormal, and the blue line represents the projection of vector on the plane.
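Algebraically, the projection subtracts from vector its component along planeNormal: v - ((v·n)/(n·n))·n, which is why the normal need not be normalized. A plain Python sketch of that formula (an illustration of the underlying math, not Unity's actual implementation):

```python
def project_on_plane(v, n):
    """Project vector v onto the plane through the origin with normal n.

    Implements v - ((v.n)/(n.n)) * n; n need not be unit length."""
    d = sum(a * b for a, b in zip(v, n))   # v.n
    nn = sum(a * a for a in n)             # n.n
    return tuple(a - (d / nn) * b for a, b in zip(v, n))

p = project_on_plane((1.0, 2.0, 3.0), (0.0, 0.0, 2.0))
print(p)  # (1.0, 2.0, 0.0): the component along the normal is removed
```

Note that the result is orthogonal to n by construction, since its dot product with n is v·n - (v·n/n·n)(n·n) = 0.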
The script example below makes Update() generate a vector position and a planeNormal normal. The Vector3.ProjectOnPlane static method receives these arguments and returns the Vector3 position.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Vector3.ProjectOnPlane - example
// Generate a random plane in xy. Show the position of a random
// vector and a connection to the plane. The example shows nothing
// in the Game view but uses Update(). The script reference example
// uses Gizmos to show the positions and axes in the Scene.

public class Example : MonoBehaviour
{
    private Vector3 vector, planeNormal;
    private Vector3 response;
    private float radians;
    private float degrees;
    private float timer = 12345.0f;

    // Generate the values for all the examples.
    // Change the example every two seconds.
    void Update()
    {
        if (timer > 2.0f)
        {
            // Generate a position inside xy space.
            vector = new Vector3(Random.Range(-1.0f, 1.0f), Random.Range(-1.0f, 1.0f), 0.0f);

            // Compute a normal from the plane through the origin.
            degrees = Random.Range(-45.0f, 45.0f);
            radians = degrees * Mathf.Deg2Rad;
            planeNormal = new Vector3(Mathf.Cos(radians), Mathf.Sin(radians), 0.0f);

            // Obtain the ProjectOnPlane result.
            response = Vector3.ProjectOnPlane(vector, planeNormal);

            // Reset the timer.
            timer = 0.0f;
        }

        timer += Time.deltaTime;
    }

    // Show a Scene view example.
    void OnDrawGizmosSelected()
    {
        // Left/right and up/down axes.
        Gizmos.color = Color.white;
        Gizmos.DrawLine(transform.position - new Vector3(2.25f, 0, 0), transform.position + new Vector3(2.25f, 0, 0));
        Gizmos.DrawLine(transform.position - new Vector3(0, 1.75f, 0), transform.position + new Vector3(0, 1.75f, 0));

        // Display the plane.
        Gizmos.color = Color.green;
        Vector3 angle = new Vector3(-1.75f * Mathf.Sin(radians), 1.75f * Mathf.Cos(radians), 0.0f);
        Gizmos.DrawLine(transform.position - angle, transform.position + angle);

        // Show the projection on the plane as a blue line.
        Gizmos.color = Color.blue;
        Gizmos.DrawLine(Vector3.zero, response);
        Gizmos.DrawSphere(response, 0.05f);

        // Show the vector perpendicular to the plane in yellow.
        Gizmos.color = Color.yellow;
        Gizmos.DrawLine(vector, response);

        // Now show the input position.
        Gizmos.color = Color.red;
        Gizmos.DrawSphere(vector, 0.05f);
        Gizmos.DrawLine(Vector3.zero, vector);
    }
}
Export Reviews, Discussions, Author Feedback and Meta-Reviews
Submitted by Assigned_Reviewer_6
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
The paper addresses the problem of estimation of eigenvalues and eigenvectors of large sparse graph matrices. It uses a nice divide-and-conquer approach to obtain better estimates of top-k
eigenspace. Such an estimate can be used in several classic tasks, such as link prediction and recommender systems.
The paper is build upon a divide-and-conquer method, taking advantage of the particular characteristics of some graphs (e.g. social networks): sparsity and cluster structure.
The theoretical part is strong, with theoretical bounds provided. The divide part uses a decomposition of the adjacency matrix as a sum of a block-diagonal matrix and an almost zero non-diagonal
matrix. A spectral decomposition of block matrices is done, and the k first eigen values are chosen among the resulting union of eigenvalues.
In the conquer step, the corresponding eigenvectors are used as initialization step in a block-Lanczos (or random SVD) method on the whole matrix.
Theoretical bounds are provided, with respect to the norm of the remaining matrix (the non-block one).
Experiments show that the method is quite accurate for the eigenspace identification, compared to classic spectral methods (randomized SVD or Lanczos).
In tasks such as label propagation and matrix completion, the methods performs as well as other classical methods.
The paper is well written. It uses quite sophisticated methods and the theoretical part is solid.
The main weakness of the paper is that the tasks presented as highly dependent on the accuracy of eigenspace estimation (link prediction, etc.) turn out not to be so dependent: the experiments show that the method presented in the paper is much more accurate than the previous state of the art in eigenspace estimation, but this does not imply significantly better results in the label propagation or matrix completion tasks.
In other words, the goal of the method (a good estimate of the eigenspace) is reached, but it appears to have no effect on the tasks it was supposed to solve (except for the computing time).
Also, the method (and the bound) should highly depend on the quality of the first step (the graph clustering step), and it would have been nice to have an idea of the accuracy of the method with
respect to this first step.
However, the method is original and could be used in other spectral decomposition problems (provided that they are sparse and likely to be clustered).
- nice divide-and-conquer algorithm
- solid theoretical part
- theoretical bounds
- experiments show that the method reaches good accuracy
- for real-world tasks, the algorithm does not perform significantly better than previous methods
- dependency between first clustering step and quality of the solution is not studied.
Q2: Please summarize your review in 1-2 sentences
The method described in the paper seems original, and could be promising, although the experiments show only marginal improve wrt other classical methods.
Submitted by Assigned_Reviewer_12
Q1: Comments to author(s).
The authors propose a new approach to compute the $k$ dominant eigenvector/value pairs of large sparse graphs (i.e. positive and symmetric adjacency matrices). Their methods consist of (i)
recursively clustering the graph (divide), and (ii) at each step (while going back from leafs to root) using the dominant eigenvectors of the clusters as initialisations to a block Laczos algorithm
which approximate the dominant eigenvector/value pairs of the cluster from the next level (conquer). Theoretical guarantees are provided and the method is experimentally validated on two real machine
learning applications (label propagation and inductive matrix completion) where it outperforms state of the art methods.
The theoretical guarantees and experiments are convincing. Two points that may need clarification:
- You didn't comment on the choice of the number $r$ of top eigenvector/value pairs to extract from each cluster. However, it seems that its choice could be tedious: if $r$ is too small we may miss a dominant eigenvector of the original graph in one of the clusters, and choosing $r$ too big may lead to bad time complexity. (If $r$ is indeed a parameter of your algorithm, you should list it as an input of Algorithm 1.)
- You provided theoretical guarantees for the one-level spectral decomposition but not for the multi-level one. Can you comment on how this multi-level strategy would affect your theoretical results?
The paper is well written and easy to follow.
The idea seems original.
Fast and efficient methods to compute the spectral decomposition of large matrices are of great interest to the machine learning community. The fact that the proposed method can easily be
parallelised and that an early termination strategy is presented is nice.
Q2: Please summarize your review in 1-2 sentences
The paper is well written, the idea seems original and both the theoretical analysis and the experiments are convincing.
Submitted by Assigned_Reviewer_22
Q1: Comments to author(s).
The paper proposes a technique for computing the eigenvectors of large matrices
arising from graphs, by exploiting the clustering structure that usually appears
in multiple levels of such objects. Graph clustering is first applied followed by
spectral decomposition on each cluster. This serves as an initial approximation
for a block Lanczos algorithm on the complete graph.
The paper is well written, and computing eigenvectors of huge matrices has many applications in machine learning. The contribution of this paper is quite straightforward, and the experiments show the diversity of applications that this work benefits.
I think the paper could do a better job putting the existing
work in context. In particular I am very curious as to why randomized SVD is such a poor contender in this paper. It might be explained by performing unnecessarily too many passes over the dataset
(usually more than two passes do not add anything more). Minimizing the number of passes is the most relevant quantity in big data applications, and in light of this it is unclear whether the presented
presented results would hold once we move to a setting where the data does not fit in memory and has to be streamed from
disk (or even when the data lives in a distributed file system). Randomized SVD has
gained popularity because its data access pattern enables such scenarios.
At the same time RSVD does not need any preprocessing while the proposed method needs to run a graph clustering algorithm and in the experiments it is not even clear what is the proportion of time
spent clustering versus executing Lanczos. It is also unclear whether the graph clustering algorithms would work efficiently as we move to datasets that don't fit to a single machine's memory.
Other notes:
-Theorems and Lemmas seem to depend on results from Stewart and Sun though no reference is given
-Line 418: MSIEGS -> MSEIGS
Q2: Please summarize your review in 1-2 sentences
The paper proposes a smart initialization for block Lanczos of matrices arising from natural graphs. It would meet the bar if it was clear
a) why it beats RSVD so much
b) what happens when datasets don't fit in memory
c) what is the running overhead from clustering.
--Update after author response:
The authors provided good explanations for (a), plausible ways to handle (b) and concrete timings for (c). Therefore, I have substantially increased my score.
Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a
maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.
We thank reviewers for their comments/suggestions.
To Reviewer_12:
1:The choice of r eigenpairs extracted from each cluster.
As in lines 191 to 193 in the paper, r is proportional to the Frobenius norm of each cluster. The intuition is that larger clusters tend to have more influence over the spectrum of the entire matrix.
Suppose we target the top-100 eigenpairs of A and partition A into 3 clusters whose Frobenius-norm ratios are 0.2, 0.3 and 0.5. First we decide the number of eigenvectors to get from the
child clusters using a small amount of oversampling (0.2*k), and then distribute the 120 eigenvectors to the clusters. So r for each cluster is 24, 36 and 60.
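The allocation arithmetic in the example above can be sketched in a few lines (Python, using round-to-nearest for the split):

```python
k = 100                          # target number of eigenpairs
ratios = [0.2, 0.3, 0.5]         # Frobenius-norm share of each cluster
total = k + round(0.2 * k)       # oversampling: 100 + 20 = 120 eigenvectors
r = [round(total * w) for w in ratios]
assert r == [24, 36, 60]         # eigenpairs extracted per cluster
```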
2: How multi-level strategy affects the theoretical results.
The multi-level strategy is used to speed up the computation when some clusters are still too large to compute their eigendecompositions efficiently, so we further divide the clusters to smaller
ones. Thus multi-level MSEIGS is an approximation of single-level MSEIGS. The theoretical guarantees may be generalized to multi-level MSEIGS by accumulating errors from each level of the hierarchy,
which potentially implies a cumulative error bound for the multi-level case as compared to the single-level case.
To Reviewer_22:
3: Why does MSEIGS beat RSVD so much?
With the same number of iterations, block Lanczos and RSVD have the same number of passes over the graph. The number of passes (or iterations) required is related to the desired accuracy and decay of
the eigenvalues. If the decay is slow or a highly accurate approximation is needed, we usually need several iterations (more than 5).
As described from lines 176 to 189 in the paper, block Lanczos uses the j-th Krylov subspace, span(V,A*V,...,A^j*V), at the j-th iteration, while RSVD restricts to a smaller subspace span(A^j*V)
discarding information from previous iterations. So block Lanczos outperforms RSVD with the same number of iterations. For example, on the CondMat dataset with target rank 100, block Lanczos needs 7
iterations, while RSVD takes more than 10 iterations to achieve similar accuracy (0.99). Furthermore, MSEIGS needs only 5 iterations by using a better initialization than block Lanczos. Figures 1(b)
and 1(c) show that MSEIGS is more accurate than both block Lanczos and RSVD with a fixed number of iterations.
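To make the subspace comparison concrete, here is a small NumPy sketch on a toy symmetric matrix (not the paper's implementation): after j multiplications, the RSVD-style basis spans only A^j*V, while the block-Krylov basis keeps every intermediate block, so its Rayleigh-Ritz estimates of the top-k spectrum can only be at least as good.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, j = 200, 10, 3

# Toy symmetric matrix with a slowly decaying spectrum.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(1.0, 0.1, n)) @ Q.T
V = rng.standard_normal((n, k))

# RSVD-style subspace after j steps: span(A^j V) only.
P = V
for _ in range(j):
    P = A @ P
Q_rsvd, _ = np.linalg.qr(P)

# Block-Krylov subspace: span(V, A V, ..., A^j V).
blocks, B = [V], V
for _ in range(j):
    B = A @ B
    blocks.append(B)
Q_kry, _ = np.linalg.qr(np.hstack(blocks))

def ritz_topk(Qb):
    # Rayleigh-Ritz: eigenvalues of the projected matrix approximate A's top-k.
    return np.sort(np.linalg.eigvalsh(Qb.T @ A @ Qb))[-k:].sum()

true_topk = np.sort(np.linalg.eigvalsh(A))[-k:].sum()
# The richer Krylov subspace captures at least as much of the spectrum,
# and neither Ritz estimate can exceed the true eigenvalue sum.
assert ritz_topk(Q_rsvd) <= ritz_topk(Q_kry) + 1e-9
assert ritz_topk(Q_kry) <= true_topk + 1e-9
```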
4: What happens when the data doesn't fit in memory, or lives in a distributed file system?
In this paper, we mainly focus on approximating eigendecomposition under the multi-core shared-memory setting. Dealing with graphs that cannot fit into memory is one of our future research
directions. We believe that with careful implementation of MSEIGS, it can also be efficient in streaming and distributed settings. Here we briefly outline one way to implement MSEIGS for such cases.
Let us start with the single-level MSEIGS. First we can apply streaming graph clustering algorithms[1] to generate c clusters such that each cluster can fit into memory. The graph can be organized
into a c-by-c block matrix where A_ij consists of links between cluster i and j. Then we load the subgraphs A_11,...,A_c1 and compute eigenvectors U_1 of A_11. After that, we multiply U_1 with the
previously loaded subgraphs. Then we repeatedly apply this procedure for each cluster and obtain AU where U=diag(U_1,...,U_c). For the multi-level case, we can apply MSEIGS under the shared-memory
setting when we load each A_ii into memory.
For the distributed case, we can use existing distributed graph clustering algorithms (e.g., ParMetis) and then compute each cluster's eigenpairs independently on each machine. For the top level, we apply
distributed Lanczos algorithms[2].
While this is future work, the above ideas show the potential of the ideas presented in the paper.
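A toy version of the single-level procedure outlined above, with in-memory NumPy arrays standing in for out-of-core loads (the cluster sizes and r below are made up): compute U_i from each diagonal block, multiply it with the corresponding column block, and check the result against the dense computation of A times the block-diagonal U.

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [40, 30, 30]               # c = 3 clusters (made-up sizes)
n = sum(sizes)
S = rng.standard_normal((n, n))
A = (S + S.T) / 2                  # symmetric stand-in for the graph matrix
offs = np.cumsum([0] + sizes)      # block offsets
r, c = 5, len(sizes)               # eigenvectors kept per cluster (illustration)

U = np.zeros((n, c * r))           # block-diagonal U = diag(U_1, ..., U_c)
AU = np.zeros((n, c * r))
for i in range(c):
    lo, hi = offs[i], offs[i + 1]
    # In a streaming run, only the column block A[:, lo:hi] is in memory here.
    _, evecs = np.linalg.eigh(A[lo:hi, lo:hi])
    U_i = evecs[:, -r:]            # top-r eigenvectors of the diagonal block A_ii
    U[lo:hi, i * r:(i + 1) * r] = U_i
    AU[:, i * r:(i + 1) * r] = A[:, lo:hi] @ U_i

assert np.allclose(AU, A @ U)      # matches the dense computation
```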
5: What is the running overhead from clustering, and how do graph clustering algorithms work when the data cannot fit into memory?
Compared with the block Lanczos step, the overhead of clustering is very low, usually less than 10% of the total time. For the FriendsterSub dataset with 10M nodes and 83M edges, to achieve 0.99
accuracy, MSEIGS takes 1825 secs where clustering takes only 80 secs. MSEIGS is a framework, so we can apply any graph clustering algorithm including distributed/streaming algorithms, such as
ParMetis, Fennel[1] and GEM([25] in the paper), to generate the partitions when the data cannot fit into memory. We will add these results/discussions to the paper.
[1] C. Tsourakakis, C. Gkantsidis, B. Radunovic and M. Vojnovic. Fennel: Streaming graph partitioning for massive scale graphs. WSDM, 2014.
[2] M. R. Guarracino, F. Perla and P. Zanetti. A parallel block Lanczos algorithm and its implementation for the evaluation of some eigenvalues of large sparse symmetric matrices on multicomputers.
AMCS 16(2):241-249, 2006.
To Reviewer_6:
6: The comparison of MSEIGS with other methods for real-world tasks.
MSEIGS is faster and more accurate than other methods for approximating eigenpairs. To test the performance of MSEIGS for various machine learning tasks, we run each algorithm to achieve similar
accuracy and compare the computational time. So all methods have similar accuracy, with different running times. To achieve similar accuracy, MSEIGS is much faster. For instance, on the Aloi dataset,
MSEIGS is about 5 times faster and MSEIGS-Early is almost 20 times faster than Matlab's EIGS.
7: Dependency between clustering step and quality of solution.
Better quality of clustering (more within-cluster links) implies higher accuracy of MSEIGS. To vary the clustering quality, we cluster the CondMat graph into 4 clusters and then randomly perturb
clusters by moving a portion of vertices from their original cluster to another random cluster to reduce within-cluster links. The result is:
Ratio of vertices shuffled: 0 0.2 0.4 0.8 1
Ratio of within-cluster links: 0.8631 0.6457 0.4708 0.2742 0.2492
Accuracy of MSEIGS: 0.9980 0.9757 0.9668 0.9375 0.9268
Calculus Volume 2 - Textbook and Exercise
About Calculus Volume 2 - Textbook and Exercise
Calculus is designed for the typical two- or three-semester general calculus course, incorporating innovative features to enhance student learning. The book guides students through the core concepts
of calculus and helps them understand how those concepts apply to their lives and the world around them. Due to the comprehensive nature of the material, we are offering the book in three volumes for
flexibility and efficiency. Volume 2 covers integration, differential equations, sequences and series, and parametric equations and polar coordinates.
✨Contents of the Application✨
1 Integration
1.1 Approximating Areas
1.2 The Definite Integral
1.3 The Fundamental Theorem of Calculus
1.4 Integration Formulas and the Net Change Theorem
1.5 Substitution
1.6 Integrals Involving Exponential and Logarithmic Functions
1.7 Integrals Resulting in Inverse Trigonometric Functions
Key Terms
Key Equations
Key Concepts
Chapter Review Exercises
2 Applications of Integration
2.1 Areas between Curves
2.2 Determining Volumes by Slicing
2.3 Volumes of Revolution: Cylindrical Shells
2.4 Arc Length of a Curve and Surface Area
2.5 Physical Applications
2.6 Moments and Centers of Mass
2.7 Integrals, Exponential Functions, and Logarithms
2.8 Exponential Growth and Decay
2.9 Calculus of the Hyperbolic Functions
Key Terms
Key Equations
Key Concepts
Chapter Review Exercises
3 Techniques of Integration
3.1 Integration by Parts
3.2 Trigonometric Integrals
3.3 Trigonometric Substitution
3.4 Partial Fractions
3.5 Other Strategies for Integration
3.6 Numerical Integration
3.7 Improper Integrals
Key Terms
Key Equations
Key Concepts
Chapter Review Exercises
4 Introduction to Differential Equations
4.1 Basics of Differential Equations
4.2 Direction Fields and Numerical Methods
4.3 Separable Equations
4.4 The Logistic Equation
4.5 First-order Linear Equations
Key Terms
Key Equations
Key Concepts
Chapter Review Exercises
5 Sequences and Series
5.1 Sequences
5.2 Infinite Series
5.3 The Divergence and Integral Tests
5.4 Comparison Tests
5.5 Alternating Series
5.6 Ratio and Root Tests
Key Terms
Key Equations
Key Concepts
Chapter Review Exercises
6 Power Series
6.1 Power Series and Functions
6.2 Properties of Power Series
6.3 Taylor and Maclaurin Series
6.4 Working with Taylor Series
Key Terms
Key Equations
Key Concepts
Chapter Review Exercises
7 Parametric Equations and Polar Coordinates
7.1 Parametric Equations
7.2 Calculus of Parametric Curves
7.3 Polar Coordinates
7.4 Area and Arc Length in Polar Coordinates
7.5 Conic Sections
Key Terms
Key Equations
Key Concepts
Chapter Review Exercises
A | Table of Integrals
B | Table of Derivatives
C | Review of Pre-Calculus
Mastering Formulas In Excel: What Is The Periodic Payment Formula Of An Annuity
Understanding formulas in Excel is essential for anyone working with financial data or calculations. One important formula to master is the periodic payment formula of an annuity. This formula allows
you to calculate the regular payment required to pay off a loan or investment over a set period of time.
Having a grasp of annuity formulas is crucial for financial analysts, accountants, and anyone involved in financial planning. It allows for accurate projections, budgeting, and decision-making based
on financial data.
Key Takeaways
• Understanding the periodic payment formula in Excel is crucial for financial analysts, accountants, and anyone involved in financial planning.
• The periodic payment formula allows for accurate projections, budgeting, and decision-making based on financial data.
• Common mistakes to avoid when using the periodic payment formula include misinterpreting variables, forgetting to adjust for time periods, and not double-checking the calculation.
• Mastering the formula involves practicing with sample problems, seeking additional resources for understanding, and teaching the formula to someone else to reinforce learning.
• The periodic payment formula can be used for advanced applications such as calculating different payment frequencies, incorporating inflation or interest rate changes, and for investment planning.
Understanding the periodic payment formula
Mastering the periodic payment formula of an annuity in Excel is essential for financial analysts, accountants, and anyone working with financial data. This formula allows you to calculate the
regular payments made or received in an annuity over a certain period of time. In this chapter, we will explore the components and application of this formula.
A. Explanation of the formula
The periodic payment formula of an annuity is used to calculate the regular payments made or received in an annuity. It takes into account the present value, interest rate, and number of periods to
determine the periodic payment.
B. Variables involved in the formula
The key variables involved in the periodic payment formula include the present value (PV), interest rate (r), and number of periods (n). The present value represents the initial amount of the
annuity, the interest rate is the rate at which the annuity grows, and the number of periods determines the duration of the annuity.
C. Real-life application of the formula
The periodic payment formula of an annuity has various real-life applications. For example, it can be used to calculate the regular mortgage payments on a home loan, the monthly payments on a car
lease, or the periodic contributions to a retirement savings plan. Understanding and mastering this formula is crucial for making informed financial decisions and projections.
Organizing the variables
Before diving into the calculation of the periodic payment of an annuity, it is crucial to organize the variables involved in the formula. These variables include the interest rate, the number of
periods, and the present value of the annuity. It is essential to have a clear understanding of these variables and their respective values before proceeding with the calculation.
Plugging the values into the formula
Once the variables are organized, the next step is to plug the values into the periodic payment formula for an annuity. The formula for calculating the periodic payment of an annuity is:
Periodic Payment = PV * (r / (1 - (1 + r)^-n))
• PV: Present value of the annuity
• r: Periodic interest rate
• n: Total number of payments
Using Excel functions to simplify the calculation
Excel offers a range of functions that can simplify the calculation of the periodic payment of an annuity. One such function is the PMT function, which can be used to calculate the periodic payment
of an annuity based on the present value, interest rate, and total number of payments. By using the PMT function, the complex formula for the periodic payment of an annuity can be simplified into a
single cell in an Excel spreadsheet, making the calculation process much more efficient and error-free.
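The formula above translates directly into code. Here is a Python sketch (the loan numbers are made up; note that Excel's built-in PMT(rate, nper, pv) returns the same magnitude with a negative sign, since it treats the payment as a cash outflow):

```python
# Periodic payment of an ordinary annuity, mirroring the formula above:
# Periodic Payment = PV * r / (1 - (1 + r)^-n)
def periodic_payment(pv, r, n):
    return pv * r / (1 - (1 + r) ** -n)

# Example: a $10,000 loan at 6% annual interest, compounded monthly, over 5 years.
# The annual rate is divided by 12 and the term is expressed in months.
pmt = periodic_payment(10_000, 0.06 / 12, 5 * 12)
print(round(pmt, 2))  # 193.33
```

In Excel the equivalent would be `=PMT(0.06/12, 60, 10000)`, which yields -193.33 under its cash-flow sign convention.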
Common mistakes to avoid
When mastering the periodic payment formula of an annuity in Excel, it’s important to be aware of common mistakes that can lead to inaccurate results. By understanding these pitfalls, you can ensure
that your calculations are precise and reliable.
• Misinterpreting variables
One of the most common mistakes when using the periodic payment formula is misinterpreting the variables involved. It’s crucial to accurately identify the variables such as interest rate, number
of periods, and present value. Misinterpreting these variables can lead to significant errors in your calculations.
• Forgetting to adjust for time periods
Another mistake to avoid is forgetting to adjust for time periods. The periodic payment formula requires the time periods to be consistent. Forgetting to adjust for monthly, quarterly, or annual
time periods can result in incorrect payment amounts.
• Not double-checking the calculation
Finally, not double-checking the calculation is a common mistake that can lead to errors. It’s essential to review your work and ensure that all inputs are accurate and that the formula has been
applied correctly. Failing to double-check the calculation can result in misleading payment amounts.
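One concrete way to perform that double-check, sketched in Python with assumed numbers: apply the computed payment period by period and confirm the loan balance is driven to zero after the last payment.

```python
def periodic_payment(pv, r, n):
    # Periodic Payment = PV * r / (1 - (1 + r)^-n)
    return pv * r / (1 - (1 + r) ** -n)

pv, r, n = 10_000, 0.06 / 12, 60        # made-up loan: $10k, 6%/yr monthly, 5 yrs
pmt = periodic_payment(pv, r, n)

# Amortize: each period the balance accrues interest, then the payment is applied.
balance = pv
for _ in range(n):
    balance = balance * (1 + r) - pmt

# A correctly computed payment leaves (essentially) nothing owed.
assert abs(balance) < 1e-6
```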
Tips for Mastering the Periodic Payment Formula of an Annuity in Excel
When it comes to mastering the periodic payment formula of an annuity in Excel, it's important to practice, seek additional resources, and teach the formula to someone else to reinforce your
learning. Here are some tips to help you excel in using this formula:
A. Practice using sample problems
• Work through example exercises: The best way to get comfortable with the formula is to work through sample problems on a regular basis. This will help you familiarize yourself with the
calculations and identify any areas where you may need to improve.
• Use Excel functions: Excel offers functions that can help you solve annuity problems more efficiently. Practice using these functions to streamline your calculations and become more proficient
with the formula.
B. Seeking additional resources for understanding
• Online tutorials and guides: There are numerous online resources available that offer tutorials and guides on how to use the periodic payment formula of an annuity in Excel. Take advantage of
these resources to gain a deeper understanding of the formula and its applications.
• Books and courses: Consider investing in books or enrolling in courses that focus on financial mathematics and Excel. These resources can provide you with in-depth knowledge and practical
examples to enhance your skills.
C. Teaching the formula to someone else to reinforce learning
• Explain the concept to a friend or colleague: Teaching the formula to someone else is a great way to reinforce your own understanding. By explaining the concept to a friend or colleague, you'll
solidify your knowledge and identify any areas that may need further clarification.
• Create a study group: Joining or forming a study group with peers who are also learning about annuities can provide a supportive environment for discussing and teaching the formula to one
another. This collaborative approach can help you gain new perspectives and deepen your understanding.
Advanced applications of the formula
Once you have mastered the basic periodic payment formula of an annuity, you can explore advanced applications that will allow you to use the formula in more complex financial scenarios.
A. Calculating different payment frequencies
• Monthly, quarterly, or yearly payments: The periodic payment formula can be adjusted to accommodate different payment frequencies. By modifying the formula's interest rate and number of periods,
you can calculate the periodic payments for annuities with payment frequencies other than the standard annual payments.
B. Incorporating inflation or interest rate changes
• Adjusting for inflation: In real-world scenarios, the interest rate or inflation rate may change over time. You can modify the periodic payment formula to account for these changes by using the
future value formula to calculate the future value of the annuity at different interest or inflation rates, and then use the modified formula to determine the periodic payments.
C. Using the formula for investment planning
• Projecting future savings: The periodic payment formula can be used to plan for future investments by calculating the periodic payments needed to reach a specific savings goal. This can help
individuals or businesses create a structured savings plan to achieve their financial objectives.
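For the investment-planning case the roles reverse: given a target future value rather than a present value, solve for the payment. A Python sketch with a made-up savings target (Excel's PMT can do the same via its optional fv argument):

```python
# Payment per period needed to grow to a future value FV (ordinary annuity):
# Payment = FV * r / ((1 + r)^n - 1)
def savings_payment(fv, r, n):
    return fv * r / ((1 + r) ** n - 1)

# Example: reach $50,000 in 10 years at 5% annual, compounded monthly.
r, n = 0.05 / 12, 10 * 12
pmt = savings_payment(50_000, r, n)

# Cross-check by simulating the deposits forward.
balance = 0.0
for _ in range(n):
    balance = balance * (1 + r) + pmt
assert abs(balance - 50_000) < 1e-6
```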
Recap: Understanding the periodic payment formula of an annuity is crucial for making accurate financial calculations and projections. It allows you to determine the regular payment amount needed to
pay off a loan or achieve a savings goal.
Encouragement: As you continue to delve into the world of Excel formulas, don't be disheartened if mastering them takes time. Keep practicing, seeking out resources, and experimenting with different
scenarios. The more familiar you become with these formulas, the more confident and efficient you'll be in using Excel for your financial and data analysis needs.
A School Educator Is Interested In Determining The Relationships Between Grade Point Average (GPA). - A Grade Essay Writers
Study Description: A school educator is interested in determining the relationships between grade point average (GPA) and IQ scores among ninth graders. The educator takes a random sample of 40 ninth
graders aged 14 years old and administers the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV). The WISC-IV includes a Full Scale IQ (FSIQ; however for this assignment we will just
call it IQ) that comprises verbal comprehension, perceptual reasoning, working memory, and processing speed skills.
Output file: See Week_2_SPSS_Output.pdf
Answer the following Questions:
1. Hypothesis – Formulate a hypothesis about the two variables. What do you think is the relationship between IQ scores and GPA?
2. Variables – Identify the variables and each of their attributes: discrete or continuous, quantitative or categorical, scale of measurement (nominal, ordinal, interval, or ratio), and
independent or dependent.
3. Descriptive statistics – Write an overview of the descriptive statistics (at least two paragraphs), including the appropriate and necessary statistical results within sentences and in proper
APA formatting. Be sure to provide sufficient explanation for any numbers presented. Include the following in your discussion:
□ How do the measures of central tendency and variability provide us with an overview of the characteristics and shape of the distribution of each variable? What are these statistics?
□ Keeping in mind that the WISC-IV has a mean of 100 and Standard Deviation of 15, what assumptions could you make about the IQ scores and suitability of this IQ test for the group of students
□ Keeping in mind that the WISC-IV has a mean of 100 and Standard Deviation of 15, how many students’ IQ scores in this sample are within one standard deviation below the test’s mean? Two
standard deviations below the test’s mean? What percentage of students in this sample had an IQ score less than or equal to 70? An IQ score greater or equal to 100?
4. Correlation – Write an overview of the results of the correlation (at least two paragraphs), including the appropriate and necessary statistical results within sentences and in proper APA
formatting. Be sure to provide sufficient explanation for any numbers presented. Consider the following in your overview and conclusions:
□ Is there a significant correlation between IQ scores and GPA? If so, what does a significant correlation mean?
□ Using the correlation table and scatterplot, explain whether the relationship is positive, negative, or no correlation.
□ Describe the strength of the relationship (e.g. very strong, moderate, weak, etc.).
□ What do the results tell us about our hypotheses?
□ What conclusions can we draw from these results? What conclusions can we NOT make using these results?
□ What issues regarding the sample used or how the data was collected should be considered in the interpretation of the data?
5. Regression – Write an overview of the results of the regression (1 paragraph), including the appropriate and necessary statistical results within sentences and in proper APA formatting. Be sure
to provide sufficient explanation for any numbers presented. Consider the following in your overview and conclusions:
□ In the regression, what variable is the dependent variable and what variable is the independent variable?
□ What do the regression results tell us about IQ scores and GPA?
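For illustration only (the numbers below are synthetic, not the study's actual sample of 40 students), the descriptive statistics, Pearson correlation, and simple regression these questions call for might be computed like this:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for 40 ninth graders: IQ ~ N(100, 15), plus a
# made-up linear relation between IQ and GPA with some noise.
iq = rng.normal(100, 15, 40)
gpa = 1.0 + 0.025 * iq + rng.normal(0, 0.3, 40)

# Descriptive statistics (sample standard deviation uses ddof=1).
print("IQ mean:", round(iq.mean(), 1), " IQ SD:", round(iq.std(ddof=1), 1))

# Pearson correlation between IQ and GPA.
corr = np.corrcoef(iq, gpa)[0, 1]

# Simple linear regression: GPA (dependent) on IQ (independent).
slope, intercept = np.polyfit(iq, gpa, 1)
```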
An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation
This is a development version of this entry. It might change over time and is not stable. Please refer to release versions for citations.
In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form $\
bigwedge_{i=1}^n \mathbf{G}\mathbf{F} \varphi_i \vee \mathbf{F}\mathbf{G} \psi_i$, where $\varphi_i$ and $\psi_i$ contain only past operators. Some years later, Chang, Manna, and Pnueli built on this
result to derive a similar normal form for LTL. Both normalisation procedures have a non-elementary worst-case blow-up, and follow an involved path from formulas to counter-free automata to star-free
regular expressions and back to formulas. We improve on both points. We present an executable formalisation of a direct and purely syntactic normalisation procedure for LTL yielding a normal form,
comparable to the one by Chang, Manna, and Pnueli, that has only a single exponential blow-up.
Session LTL_Normal_Form
Power of Porter Governor if Angle Made by Upper and Lower Arms are Not Equal Calculator
STEP 0: Pre-Calculation Summary
STEP 1: Convert Input(s) to Base Unit
Mass of Ball: 6 Kilogram --> 6 Kilogram No Conversion Required
Mass of Central Load: 21 Kilogram --> 21 Kilogram No Conversion Required
Ratio of Length of Link to Length of Arm: 0.9 --> No Conversion Required
Percentage Increase in Speed: 60 --> No Conversion Required
Acceleration Due to Gravity: 9.8 Meter per Square Second --> 9.8 Meter per Square Second No Conversion Required
Height of Governor: 0.337891 Meter --> 0.337891 Meter No Conversion Required
STEP 2: Evaluate Formula
STEP 3: Convert Result to Output's Unit
10226.2683225124 Watt --> No Conversion Required
At Long Last, Hobbyist Discovers "Einstein" Tile
In his free time, David Smith designs tiles. More specifically, the retired print technician and recreational mathematician pieces together as many tiles as he can (no gaps allowed) before the
pattern either repeats or cannot continue.
Until recently, every shape anyone had ever tested met one of those two fates — despite the scrutiny of many brilliant minds over the past 50 years. Then, one day last November, Smith found the only
known exception.
13-Sided Shape
Using an app called PolyForm Puzzle Solver, Smith constructed a jagged, 13-sided shape. Vaguely reminiscent of a top hat, he began filling the screen with copies of it. They joined seamlessly and, to
his surprise, without repeating.
“The tessellations were something I had not seen before,” he says.
Keen to investigate further, he cut out dozens of paper copies and started fresh. One after another, the tiles kept falling into place. They invited him deeper into a striking visual pattern — and as
it grew, so too did his excitement.
He showed this promising creation to Craig Kaplan, a computer scientist at the University of Waterloo. “Almost immediately it looked like he was onto something new and profound,” Kaplan says.
Though it took a while longer to prove mathematically, their instinct was sound: As they and two other colleagues announced in March, in a paper that has yet to be peer-reviewed, Smith had stumbled
upon the long-sought “einstein” shape.
Elusive Einstein Tile
As used here, the word “einstein” has nothing to do with a certain German physicist. Instead, it evokes the literal meaning of his last name: “one stone.”
It’s a less yawn-inducing moniker for what is technically known as an “aperiodic monotile” — a single tile that can fill the infinite plane, on and on for eternity, in a pattern that never repeats.
On a typical bathroom floor, you’ll find clean-cut squares, triangles or hexagons, arranged in some clearly visible order. Now, picture those neat rows replaced by an apparently random jumble of
blocks, and voila — aperiodic decor.
In other words, there is no section you can cut and paste to complete the rest of the tiling.
Other Aperiodic Tilings
Before the 13-sided “hat” revealed itself to Smith, it was anyone’s guess whether an einstein even existed.
Mathematicians have been hunting for one since 1966, when Robert Berger devised the first set of tiles that could be laid out aperiodically. This was a landmark innovation but, with an unwieldy
20,426 distinct shapes, it was not yet a viable option for DIY home renovators.
As the quest continued for more elegant combinations, that number shrank from quintuple to single digits in just a few years. Before long, the monotile seemed within reach.
In 1973, Oxford mathematician and physicist Roger Penrose set the bar at two tiles — by organizing a pair of shapes called kites and darts in aperiodic fashion. But then progress stalled, and the
final challenge stood for five decades.
Read More: How a Mathematician Solved a Problem That Puzzled Computer Scientists for 30 Years
Looking for the Unknown
After so much time, it may seem astonishing that an amateur beat the professionals to the finish line. In fact, Smith wasn’t even searching for an einstein. He attributes his success to “persistence
mainly,” and perhaps a bit of luck, “although I feel like I was the chosen one,” he says.
Marjorie Senechal, a professor emerita at Smith College who has studied tiling since the 1970s, notes that the field’s history is strewn with contributions from untrained tinkerers. Most notably,
around the time Penrose unveiled his kites and darts, a hobbyist and mail sorter named Robert Ammann independently invented a strikingly similar solution.
“This is a subject where you can literally get your hands on it,” says Senechal, who profiled Ammann in 2004 for The Mathematical Intelligencer. “If you have a good eye and an inquiring mind, you can
find things that other people trying to work through theory can’t find.”
A zoomed-out patch of hat tiles. (Credit: David Smith, Joseph Samuel Myers, Craig S. Kaplan, Chaim Goodman-Strauss/CC BY-SA 4.0)
The Math of Patterns
Smith’s ingenuity set things in motion. But because we’re dealing with an infinite plane, no amount of fiddling with finite tiles can guarantee the pattern won’t eventually start over.
The only path to certainty? Mathematical proof. So, Smith and Kaplan enlisted two more experts: Chaim Goodman-Strauss, a mathematician at the University of Arkansas, and Joseph Myers, a British
software engineer.
Actually, aperiodicity is child’s play. Plain old rectangles can satisfy the non-repeating requirement, even though you could easily reassemble them periodically.
The real trick is to find a shape that only works aperiodically, one with just the right balance of complexity — enough to disrupt periodic pattern, but not so much that all pattern degenerates.
“That’s the magic that makes aperiodicity interesting,” Kaplan says. “They have to do a very careful dance between order and chaos.”
Read More: Is It Actually Impossible to “Square the Circle?”
Proving the Aperiodic
To make sure the hat hit that sweet spot, Myers first employed a tried-and-true method, pioneered by Berger himself.
It begins with a set of “metatiles,” simple polygons that roughly resemble small groupings of hats. From there, you can combine metatiles into supertiles, supertiles into supersupertiles, and so on
to the endless reaches of infinity.
Their paper demonstrates that this hierarchy is the only way to tile the plane with hats, which amounts to proving that the shape will never slip into periodicity. But then Myers forged a new kind of
proof, and to understand it we’ll need to break the hat down into its basic parts.
Exotic as the shape appears, it’s well known to geometers as a polykite; start with a hexagon, draw three lines connecting opposite sides at their midpoints, and you wind up with six kites.
Combine two or more of these, and you get a polykite. Slap together eight in an especially fortuitous order, and you get a groundbreaking mathematical discovery. As the team writes in their paper,
“the shape is almost mundane in its simplicity.”
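For readers who want to experiment, the hexagon-to-kites construction described above is easy to reproduce in code. The sketch below is my own illustration (not from the team's paper; the function name is invented): it splits a regular hexagon into its six kites by joining the midpoints of opposite sides through the centre.

```python
import math

def hexagon_kites(r=1.0):
    """Split a regular hexagon (circumradius r) into six kites.
    Each kite = centre -> midpoint of one side -> a vertex -> midpoint of the next side."""
    verts = [(r * math.cos(math.radians(60 * k)),
              r * math.sin(math.radians(60 * k))) for k in range(6)]
    mids = [((verts[k][0] + verts[(k + 1) % 6][0]) / 2,
             (verts[k][1] + verts[(k + 1) % 6][1]) / 2) for k in range(6)]
    # Kite k is bounded by the centre, the previous side's midpoint,
    # vertex k, and the next side's midpoint.
    return [[(0.0, 0.0), mids[k - 1], verts[k], mids[k]] for k in range(6)]
```

Any union of two or more of these kites glued edge-to-edge is a polykite; the hat is one particular union of eight of them.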
Read More: The Shape of Madness
Adjusting Aperiodicity
The hat’s 13 sides come in two lengths, and Myers realized that by adjusting the length of either set, he could create new shapes with the same properties. That means there’s not one einstein — but
an infinite family of them, each morphing into the next along a vast spectrum.
At the two extremes (where the long and short sides disappear) and at the midpoint (where they become equal) lie periodic shapes, which can be used to establish the aperiodicity of the rest.
This intriguing addition to the repertoire of tiling proofs gives mathematicians something to chew on, but Senechal explains that it has roots in a more traditional strategy.
“There are connections to long-standing theory,” she says. “This situates their work in a continuum not just of tilings, but of thoughts about tilings.”
These hat tiles have a local center of threefold rotation. (Credit: David Smith, Joseph Samuel Myers, Craig S. Kaplan, Chaim Goodman-Strauss/CC BY-SA 4.0)
The First of Many Einsteins
By the mere fact of its existence, the hat has settled one mystery. But it also raises new ones. Could there be more einsteins, for example, separate from this family? Senechal suspects there must
be, and that this triumph could reinvigorate the search.
One of the most intriguing questions is the reason for its aperiodicity. The “special sauce” is still unknown, as Kaplan puts it: “I can’t point at part of the shape and say, ‘That’s why.’”
Nicolaas de Bruijn, a Dutch mathematician, eventually explained Penrose tiles as two-dimensional projections of a five-dimensional tiling. But as for a theoretical account of the hat, Kaplan says,
“we’re not anywhere near that yet.”
It also remains to be seen whether this abstract curiosity will apply itself in the real world.
Read More: 5 Interesting Things About Albert Einstein
Bringing the Hat to Life
It may, of course, have a promising future in interior design — especially considering how snugly it fits against a grid of hexagons. It begs to be arrayed on the kitchen floor; anyone with hexagonal
tiling and enough determination could trace the pattern themselves.
Many aperiodic tiles run roughshod over an orderly background, “but this one,” Kaplan says, “just kind of sits there very nicely. It’s almost ridiculously well-behaved.”
Supposing the hat does find its way to your local Home Depot, keep in mind that aperiodic arrangements are not for the faint of heart.
“If you just proceed blindly,” Kaplan says, “you’re probably going to get stuck.” And if by some miracle you find a contractor willing to humor you, “there’d be a lot of labor costs.”
Read More: How Mathematicians Cracked the Zodiac Killer’s Cipher
Free Printable Graph Paper For Maths Template Free Graph Paper | Grid Paper Printable
Free Printable Graph Paper For Maths Template Free Graph Paper – Well, here are some reasons for using grid paper: it is helpful for producing graphs, geometric patterns, and illustrations. And it is free!
What is Grid Paper Printable?
Grid paper is a special type of paper that has a grid pattern printed on it. These papers are used in mathematics and are also useful for making and drawing game maps. They typically have a 1-inch grid, come in United States Letter size, and are distributed as printable PDFs. Users can print them out using their preferred PDF viewer software; most programs have a print option on the menu.
There are various kinds of graph paper; the most common is the plain grid. Graph paper has a predefined grid pattern, which is useful for many subjects, such as mathematics and science: it helps students work through equations with precision and plot functions proportionally. Graph paper is generally available in sheets with half-inch or one-quarter-inch grid spacing.
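As a concrete illustration of how simple such paper is to produce yourself, here is a small sketch (the function name and styling are my own, not from any particular template) that emits a quarter-inch grid on a US-Letter-sized SVG, which most browsers can open and print directly:

```python
def grid_svg(width_in=8.5, height_in=11.0, spacing_in=0.25, dpi=96):
    """Return an SVG document of a grid: US Letter page, quarter-inch spacing."""
    w, h, s = int(width_in * dpi), int(height_in * dpi), spacing_in * dpi
    parts = []
    x = 0.0
    while x <= w:  # vertical grid lines
        parts.append(f'<line x1="{x:.0f}" y1="0" x2="{x:.0f}" y2="{h}" '
                     f'stroke="#bbb" stroke-width="1"/>')
        x += s
    y = 0.0
    while y <= h:  # horizontal grid lines
        parts.append(f'<line x1="0" y1="{y:.0f}" x2="{w}" y2="{y:.0f}" '
                     f'stroke="#bbb" stroke-width="1"/>')
        y += s
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="{w}" height="{h}">'
            + "".join(parts) + "</svg>")
```

Changing `spacing_in` to 0.5 or 1.0 produces the half-inch and 1-inch variants mentioned above.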
Why Do We Use Grid Paper?
Grid paper is available in a variety of sizes, making it useful for different purposes. Using grid paper is a great way to distinguish between simpler and more challenging tasks, and it helps improve visual perception skills. It is also ideal for math problems, as it lets students see numbers more easily and provides a separate space for each answer.
Grid paper is usually used for drawing or art projects, but it can also be used for construction planning or web design. Some people believe they can produce sketches faster with pencil and paper, but in practice a computer program is more efficient for creating multiple drafts of the same design. To produce the right scale for a layout, you need to know the size of the layout in advance, and this is easier to do with grid paper. Grid paper is also an excellent tool for illustrating geometric principles. The square patterns suggest a rational, consistent system, and in uncertain times people crave rational solutions to their problems. In addition, the history of gridded paper and notebooks is fascinating and reveals a great deal about its cultural significance.
Printable Graph Paper For Math
Graph paper can be hard to come by in your classroom or at home. Whether you are a student studying for a test or planning a home renovation project, printable graph paper for math is a valuable resource.
Grid paper is a convenient tool for math and science work, and it can also be used to create graphs and illustrations. Many people assume that pencil-and-paper sketches are faster than computer-aided layouts.
How do you prepare 0.002 N Na2S2O3? - Answers
Why do you use 25 grams to prepare 0.1N sodium thiosulfate solution?
To prepare 1 L of a 0.1 N sodium thiosulfate solution, you weigh out about 25 grams because the commonly used pentahydrate, Na2S2O3·5H2O, has a formula weight of 248.18 g/mol and supplies one equivalent per mole. With the formula weight known, 0.1 mol × 248.18 g/mol ≈ 24.8 g per litre, which rounds to 25 g.
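The arithmetic behind both the 0.1 N and the 0.002 N preparations can be sketched in a few lines (assuming the pentahydrate form and one equivalent per mole, as in iodometric use; names are my own):

```python
MOLAR_MASS_PENTAHYDRATE = 248.18  # g/mol for Na2S2O3·5H2O
EQUIVALENTS_PER_MOLE = 1          # thiosulfate acts as a 1-equivalent reagent here

def grams_needed(normality, volume_l=1.0):
    """Mass of Na2S2O3·5H2O required for the target normality and volume."""
    return normality / EQUIVALENTS_PER_MOLE * MOLAR_MASS_PENTAHYDRATE * volume_l

print(round(grams_needed(0.1), 1))    # 24.8 g per litre, rounded to ~25 g in practice
print(round(grams_needed(0.002), 3))  # 0.496 g per litre for the 0.002 N solution
```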
Sonic Frontiers SFL-2 Preamplifier Measurements
Link: written about by Jason Thorpe on SoundStage! Ultra on October 1, 2024
General information
The Sonic Frontiers SFL-2 was released in about 1995. The SFL-2 under test here was owned by SoundStage! writer Jason Thorpe. He bought it in the late 1990s and had used it in his system until late
2023. He’s since sold it to a consumer. But before he sold it, we wanted to measure it to gauge its current level of performance. It was measured in March 2024. Jason reports that the tubes at time
of measurements had under 2000 hours on them, which he felt meant they had plenty of life left since they were rated for about 10,000 hours.
All measurements taken using an Audio Precision APx555 B Series analyzer.
The SFL-2 was conditioned for 30 minutes at 2Vrms at the output before any measurements were taken. All measurements were taken with both channels driven.
The SFL-2 offers five sets of line-level unbalanced (RCA) inputs, one set of line-level balanced (XLR) inputs, and two sets each of unbalanced and balanced outputs. The balanced outputs offer 6dB
more gain than the unbalanced outputs. The volume control is a stepped attenuator with 23 positions (including the minimum position, which is grounded). Based on the accuracy and repeatable nature of
the channel deviation (table below), the volume control is in the analog domain but must have been, at one time, carefully level matched between channels at each volume position due to the high level
of accuracy. The stepped attenuator offers, for the most part, 3dB increments throughout the entire range. There’s an additional 0dB/-1.5dB switch that can enable 1.5dB of attenuation, for a finer level of adjustment between steps.
There is a significant difference in terms of THD between unbalanced and balanced inputs and outputs in the SFL-2 (see both the main table and FFTs below). The lowest THD configuration measured was
balanced in/balanced out, although the left channel exhibits much higher THD than the right channel. Unless otherwise stated, measurements were made with the volume set to unity gain, using the XLR
inputs and outputs, with a 2Vrms input.
Volume-control accuracy (measured at preamp outputs): left-right channel tracking
Volume position Channel deviation
1 0.144
2 0.138
3 0.140
4 0.146
5 0.145
6 0.142
7 0.144
8 0.143
9 0.144
10 0.142
11 0.144
12 0.145
13 0.143
14 0.143
15 0.143
16 0.144
17 0.146
18 0.149
19 0.149
20 0.144
21 0.146
22 0.143
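The tracking numbers above can be summarized in a couple of lines (values transcribed from the table; this quick check is my own, not part of the original report):

```python
from statistics import mean

# Channel deviation (dB) at volume positions 1-22, transcribed from the table above
deviation_db = [0.144, 0.138, 0.140, 0.146, 0.145, 0.142, 0.144, 0.143,
                0.144, 0.142, 0.144, 0.145, 0.143, 0.143, 0.143, 0.144,
                0.146, 0.149, 0.149, 0.144, 0.146, 0.143]

print(f"mean {mean(deviation_db):.3f} dB, "
      f"spread {max(deviation_db) - min(deviation_db):.3f} dB")
```

The ~0.14 dB offset is essentially constant across the whole range, which supports the interpretation that the attenuator was carefully level-matched between channels.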
Primary measurements
Our primary measurements revealed the following using the balanced line-level inputs (unless otherwise specified, assume a 1kHz sinewave, 2Vrms input and output into a 200k ohms load, and a 10Hz to 22.4kHz measurement bandwidth):
Parameter Left channel Right channel
Crosstalk, one channel driven (10kHz) -58.9dB -59.7dB
DC offset <0.3mV <0.7mV
Gain (default) 28.7dB 28.5dB
IMD ratio (CCIF, 18kHz + 19kHz stimulus tones, 1:1) <-76dB <-89dB
IMD ratio (SMPTE, 60Hz + 7kHz stimulus tones, 4:1) <-67dB <-88dB
Input impedance (balanced) 116k ohms 123k ohms
Input impedance (unbalanced) 57.4k ohms 54.0k ohms
Maximum output voltage (at clipping 1% THD+N) 85.9Vrms 90Vrms
Maximum output voltage (at clipping 1% THD+N into 600 ohms) 1.2Vrms 1.5Vrms
Noise level (with signal, A-weighted) <84uVrms <62uVrms
Noise level (with signal, 20Hz to 20kHz) <125uVrms <101uVrms
Noise level (no signal, A-weighted, volume min) <84uVrms <62uVrms
Noise level (no signal, 20Hz to 20kHz, volume min) <125uVrms <101uVrms
Output impedance (balanced) 797 ohms 786 ohms
Output impedance (unbalanced) 442 ohms 430 ohms
Signal-to-noise ratio (A-weighted, 2Vrms out) 86.7dB 89.1dB
Signal-to-noise ratio (20Hz-20kHz), 2Vrms out) 83.7dB 86.0dB
Signal-to-noise ratio (max volume, 2Vrms out, A-weighted) 85.9dB 87.8dB
THD (unweighted, balanced) <0.014% <0.0006%
THD (unweighted, unbalanced) <0.081% <0.118%
THD+N (A-weighted) <0.014% <0.0032%
THD+N (unweighted) <0.014% <0.006%
Frequency response
In our measured frequency-response plot above, the SFL-2 is essentially perfectly flat within the audioband, and 0dB at 5Hz and -1.5dB at 200kHz. The SFL-2 appears to be DC-coupled, as it yielded 0dB
of deviation at 5Hz. In the graph above and most of the graphs below, only a single trace may be visible. This is because the left channel (blue or purple trace) is performing identically to the
right channel (red or green trace), and so they perfectly overlap, indicating that the two channels are ideally matched.
Phase response
Above is the phase-response plot from 20Hz to 20kHz. The blue/red traces are with the phase switch set to 0 degrees, the purple/green traces are with the phase switch set to 180 degrees. The SFL-2
does not invert polarity (except when the phase switch is set to 180 degrees), and it yielded a worst-case 5 degrees or so of phase shift at 20kHz.
THD ratio (unweighted) vs. frequency
The plot above shows THD ratios at the output as a function of frequency (20Hz to 20kHz) for a sinewave input stimulus. The blue and red plots are for left and right channels into 200k ohms, while
purple/green (L/R) are into 600 ohms. As previously mentioned, there are gross differences in THD ratios between the left and right channels into 200k ohms. The right channel ranged from 0.003% at
20Hz, down to 0.0004% at 1 to 2kHz, then up to 0.003% at 20kHz. It should be noted that because this is a tube-based preamp, noise levels are higher than what would be seen in a well-designed
solid-state preamp. As such, the limiting factor in assigning THD ratios for the right channel into 200k ohms is largely due to the noise floor (the analyzer cannot assign a THD ratio below the noise
floor). THD ratios for the left channel into 200k ohms were relatively constant through the audioband, just above 0.01%. The 600-ohm THD ratios were considerably higher, ranging from 0.03/0.1% (left/
right) at 20Hz, then up to 0.3/0.4% (left/right) from 200Hz to 20kHz. The SFL-2 is significantly impacted by a lower impedance load, due to the high output impedance (about 800 ohms) tube outputs.
This is also evidenced by the maximum output signals (at 1% THD) measured at the balanced outputs into 200k ohms and 600 ohms: a staggering 90Vrms versus 1.5Vrms.
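The load sensitivity follows from a plain voltage divider between the output impedance and the load. This back-of-the-envelope sketch (values taken from the measurements above, simplified to a purely resistive source; the function is my own) shows why 600 ohms is such a hard load for an ~800-ohm source:

```python
import math

def load_loss_db(z_out, z_load):
    """Level drop (dB) when a source with output impedance z_out drives z_load."""
    return 20 * math.log10(z_load / (z_out + z_load))

print(round(load_loss_db(800, 200_000), 2))  # into 200k ohms: negligible loss
print(round(load_loss_db(800, 600), 2))      # into 600 ohms: severe loss
```

Beyond the level loss, the low-impedance load also demands far more current from the tube output stage, which is consistent with the rising THD and the collapse of maximum output from 90Vrms to 1.5Vrms.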
THD ratio (unweighted) vs. output voltage
The plot above shows THD ratios measured at the output of the SFL-2 as a function of output voltage into 200k ohms with a 1kHz input sinewave. At the 10mVrms level, THD values measured around 0.1% at
10mVrms, dipping down to around 0.0003% at 2Vrms for the right channel and 0.003% at 0.4Vrms for the left channel. Between 1 and 30Vrms at the output, we find a 10 to 25dB difference in THD between
the left and right channels. The 1% THD point is reached at around 90Vrms. It’s also important to mention that anything above 2-4Vrms is not typically required to drive most power amps to full power.
THD+N ratio (unweighted) vs. output voltage
The plot above shows THD+N ratios measured at the output of the SFL-2 as a function of output voltage into 200k ohms with a 1kHz input sinewave. At the 10mVrms level, THD+N values measured around
1.5%, dipping down to around 0.003% at 5-7Vrms for the right channel, and 0.01% at 1-2Vrms for the left channel. Between 3 and 30Vrms at the output, we find a 10 to 25dB difference in THD+N between
the left and right channels.
FFT spectrum – 1kHz (balanced in, balanced out)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output into a 200k-ohm load, for the balanced inputs and outputs. We see that the signal’s second
harmonic, at 2kHz, is at around -80dBrA, or 0.01%, for the left channel, and only -120dBrA, or 0.0001% for the right channel. The third harmonic, at 3 kHz, is at -115dBrA, or 0.0002%, for both
channels. There are no power-supply-related noise peaks above the relatively high noise floor, which varies from -100dBrA at low frequencies, down to -130dBrA at 20kHz.
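The dBrA-to-percent conversions used throughout these FFT descriptions are just the standard 20·log10 voltage-ratio rule; a two-line helper (my own, matching the figures quoted above) makes the mapping explicit:

```python
import math

def dbra_to_percent(db):
    """Level relative to the 0dBrA (2Vrms) reference, expressed as a percentage."""
    return 100 * 10 ** (db / 20)

def percent_to_dbra(pct):
    """Inverse mapping: percentage of the reference back to dBrA."""
    return 20 * math.log10(pct / 100)

print(round(dbra_to_percent(-80), 4))   # -80dBrA is 0.01%
print(round(dbra_to_percent(-120), 6))  # -120dBrA is 0.0001%
```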
FFT spectrum – 1kHz (unbalanced in, balanced out)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output into a 200k-ohm load, for the unbalanced inputs and balanced outputs. We see that the
signal’s second harmonic, at 2kHz, is at around -70/-75dBrA (left/right), or 0.03/0.02%. The third harmonic, at 3kHz, is at -110dBrA, or 0.0003%, for both channels. There are no significant
power-supply-related noise peaks above the relatively high noise floor, except for a very small -110dBrA (left channel), or 0.0003%, peak at 120Hz.
FFT spectrum – 1kHz (unbalanced in, unbalanced out)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output into a 200k-ohm load, for the unbalanced inputs and outputs. We see that the signal’s second
harmonic, at 2kHz, is at around -70/-60dBrA (left/right), or 0.03/0.1%. The third harmonic, at 3 kHz, is at -100dBrA, or 0.001%, for both channels. There are no significant power-supply-related noise
peaks above the relatively high noise floor, except for the very small -110dBrA (right channel), or 0.0003%, peak at 120Hz.
FFT spectrum – 1kHz (balanced in, unbalanced out)
Shown above is the fast Fourier transform (FFT) for a 1kHz input sinewave stimulus, measured at the output into a 200k-ohm load, for the balanced inputs and unbalanced outputs. We see that the
signal’s second harmonic, at 2kHz, is at around -60dBrA, or 0.1%. The third harmonic, at 3kHz, is at -100dBrA, or 0.001%, for both channels. Using the unbalanced outputs appears to yield the
worst-case THD ratios. Again, there are no significant power-supply-related noise peaks above the relatively high noise floor, except for a very small -110dBrA (right channel), or 0.0003%, peak at 120Hz.
FFT spectrum – 50Hz
Shown above is the FFT for a 50Hz input sinewave stimulus measured at the output into a 200k-ohm load. The X axis is zoomed in from 40Hz to 1kHz, so that peaks from noise artifacts can be directly
compared against peaks from the harmonics of the signal. The only significant non-signal peak is from the signal’s second harmonic (100Hz) at -80/-105dBrA (left/right), or 0.01/0.0006%. The third
signal harmonic (150Hz) is at -115dBrA, or 0.0002%. As above, there are no significant power-supply-related noise peaks above the relatively high noise floor, except for the very small -110dBrA (left
channel), or 0.0003%, peak at 120Hz.
Intermodulation distortion FFT (18kHz + 19kHz summed stimulus)
Shown above is an FFT of the intermodulation distortion (IMD) products for an 18kHz + 19kHz summed sinewave stimulus tone measured at the output into a 200k-ohm load. The input RMS values are set at
-6.02dBrA so that, if summed for a mean frequency of 18.5kHz, they would yield 2Vrms (0dBrA) at the output. We find that the second-order modulation product (i.e., the difference signal of 1kHz) is at -85/-95dBrA (left/right), or 0.006/0.002%, while the third-order modulation products, at 17kHz and 20kHz, are at -115dBrA, or 0.0002%.
Intermodulation distortion FFT (line-level input, APx 32 tone)
Shown above is the FFT of the balanced outputs of the SFL-2 with the APx 32-tone signal applied to the analog balanced input. The combined amplitude of the 32 tones is the 0dBrA reference, and
corresponds to 2Vrms. The intermodulation products—i.e., the “grass” between the test tones—are distortion products from the amplifier and are around the -115dBrA level for the left channel and the
-130dBrA level for the right channel (below 15kHz or so).
Square-wave response (10kHz)
Above is the 10kHz squarewave response at the output into 200k ohms. Due to limitations inherent to the Audio Precision APx555 B Series analyzer, this graph should not be used to infer or extrapolate
the SFL-2’s slew-rate performance. Rather, it should be seen as a qualitative representation of its very high bandwidth. An ideal squarewave can be represented as the sum of a sinewave and an
infinite series of its odd-order harmonics (e.g., 10kHz + 30kHz + 50kHz + 70kHz . . .). A limited bandwidth will show only the sum of the lower-order harmonics, which may result in noticeable
undershoot and/or overshoot, and softening of the edges. The SFL-2’s reproduction of the 10kHz squarewave is essentially perfect, with sharp corners and no ringing.
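That odd-harmonic decomposition is easy to demonstrate numerically. The sketch below is my own illustration of the textbook Fourier series (not the analyzer's method): it sums the first N odd harmonics of a 10kHz square wave, and as N grows the partial sum approaches ±1 on each half-cycle, while small N gives the softened, ringing corners described above.

```python
import math

def square_partial(t, f0=10_000.0, n_harmonics=50):
    """Partial Fourier sum of a unit square wave at frequency f0:
    (4/pi) * sum over odd k of sin(2*pi*k*f0*t)/k."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * k * f0 * t) / k
        for k in range(1, 2 * n_harmonics, 2))

T = 1 / 10_000.0
print(round(square_partial(T / 4), 2))      # near +1 on the positive half-cycle
print(round(square_partial(3 * T / 4), 2))  # near -1 on the negative half-cycle
```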
Diego Estan
Electronics Measurement Specialist
Statistical Damage Model of Altered Granite under Dry-Wet Cycles
School of Civil and Architectural Engineering, Beijing Jiaotong University, Beijing 100044, China
Tunneling and Underground Engineering Research Center of Ministry of Education, Beijing 100044, China
School of Civil Engineering and Architecture, Shandong University of Science and Technology, Qingdao 266590, China
Authors to whom correspondence should be addressed.
Submission received: 4 December 2018 / Revised: 18 December 2018 / Accepted: 25 December 2018 / Published: 2 January 2019
This paper presents a new statistical damage constitutive model based on a symmetric normal distribution. The strength of the broken rock micro-body is assumed to obey a symmetric normal distribution, together with the equivalent strain principle of damage mechanics. Uniaxial compression tests of samples subjected to dry-wet cycles were performed. The damage model was established using the equivalent strain principle and the symmetric normal distribution, with the damage variable defined by the elastic modulus under various numbers of dry-wet cycles. Parameters of the damage constitutive model were identified using MATLAB software, and the proposed model is verified to be in good agreement with the uniaxial compression test results. Fracturing of the rock micro-body is well described by the symmetric normal distribution, and the proposed statistical damage constitutive model fits the uniaxial compression stress-strain curve well.
1. Introduction
Rocks and soils are engineering materials that are widely utilized in geotechnical engineering. Rock heterogeneity and anisotropy makes the investigation of rock constitutive relations complicated
and difficult. Micro-cracks and pores are commonly present in rock and represent a kind of initial damage. The expansions of micro-cracks and pores are a common side effect of the excavation of
roadways, hydraulic action (water-rock coupling, dry-wet cycles, and hydraulic fracturing), or temperature changes (freeze-thaw cycles), which makes the study of rock damage constitutive models complicated []. With the development of continuum damage mechanics, the concept of effective stress and the strain equivalence hypothesis provide a mechanical basis for establishing rock damage constitutive models. Parisio F. et al. [] studied the stability of an excavated area in Opalinus Clay using anisotropic plasticity coupled with a damage constitutive model. A new damage constitutive model considering structural healing and decay was established for engineering soft rock []. Mortazavi A. et al. [] presented a realistic damage model with a damage process and plastic flow to analyze the stability of underground tunnels. Cerfontaine B. [] studied the mechanical response of surrounding rock under cyclic loading in a circular tunnel, and a damage constitutive model based on the boundary surface concept was proposed. A damage constitutive model reflecting rock strain-softening characteristics was established to describe rock deformation []. Asadollahi P. [] studied a constitutive model using an empirical equation with normal stress and a modified Joint Roughness Coefficient (JRC). Unteregger D. [] established a constitutive model reflecting the nonlinear behavior of rock on the basis of plasticity theory and damage mechanics. Amorosi A. [] studied a critical-state-based constitutive model against in-situ plate load tests of the mechanical response of pyroclastic rock. Rock is weakened by the rise and fall of the water level, which damages its internal structure. Dry-wet cycles adversely affect slope stability, so investigations of rock damage constitutive models are critical to understanding rock deformation and failure characteristics. The results can provide a mechanical basis for slope stability analysis, and a rock damage constitutive model for rock subjected to dry-wet cycles has great significance for slope stability analysis [].
In this paper, the results of rock uniaxial compression tests under various dry-wet cycles are studied. According to the theory of damage mechanics, a comprehensive damage variable is defined by coupling the rock hydraulic damage variable and the loading damage variable. The adaptability of the damage constitutive model is discussed and evaluated against the uniaxial compression test results.
2. Materials and Methods
2.1. Research Background
The open pit is used as a tailings pond. A large amount of tailings was stored there, and the water used to transport the tailings accumulated in the open pit, causing the reservoir water level to rise. At the same time, the clarified water was used as the water source of the concentrator. Therefore, the water level of the tailings pond varies cyclically over time. In order to observe the rise and fall of the water level, a sonic water level detector was installed (Figure 1). From March 2016 to October 2018, the water level varied significantly (Figure 2). The water level initially increased, then decreased. The slope rocks have been experiencing dry-wet cycles for a long time due to the cyclical rise and fall of the water level in the open pit. This
leads to a decrease in rock strength and increases the probability of a landslide. Therefore, it is of great significance to study the rock damage constitutive model under the action of dry-wet
cycles, especially focusing on rock deformation and failure mechanisms from the perspective of damage mechanics.
2.2. Sample Tested
The rock samples tested were taken from the slope of the open pit in the San Shandao deposit []. According to the results of electron probe analysis (Figure 3
), the main compositions of the altered granite are SiO2 and K2O. The samples were symmetrical cylinders, 50 mm in diameter and 100 mm in length, obtained through core drilling and sawing (Figure 4).
2.3. Test of Dry-Wet Cycles
The influence of water level fluctuation in the tailings reservoir was simulated to study the changing of rock mechanical properties. Rock specimens were subjected to the dry-wet cycles test.
The time at which the rock reaches saturation was determined by rock moisture testing. Rock samples were immersed in a neutral solution at standard atmospheric pressure. According to the results of the moisture content tests [], a drying and wetting cycle consisted of freely submerging the specimen in water until it was saturated (24 h), then placing it into a 105 °C oven for 12 h, and then cooling it to room temperature.
Uniaxial compression tests were performed after a certain number of dry-wet cycles (0, 5, 15, 20, 30, 60). Four samples were prepared for each series of dry-wet cycles, and one of the rock specimens
was used as a spare rock sample.
2.4. Results of Uniaxial Compression Test
The uniaxial compression tests were performed after 0, 5, 15, 20, 30, and 60 dry-wet cycles. A SHIMADZU compression tester was used to obtain the compression stress-strain curves of the rock under dry-wet cycles (Figure 5). According to Figure 5, the uniaxial compression stress-strain curve shows five distinct stages: the internal defect closure stage (OA), linear elastic deformation stage (AB), unsteady rupture stage (BC), plastic
yield stage (CD) and strain softening stage (DE). The peak strength of the rock gradually decreases as the number of cycles increases.
3. Damage Variables under Dry-Wet Cycles
Damage mechanics describes the mechanical behavior of engineering materials. This paper proposes a new statistical damage constitutive model considering the number of dry-wet cycles and the loading
process. A comprehensive damage variable is defined by coupling the rock hydraulic damage variable and the loading damage variable.
3.1. Hydraulic Damage Variable (D[w])
This paper presents the hydraulic damage variable using an elastic modulus. The elastic modulus of samples under various dry-wet cycles decreased with an increasing number of cycles. Meanwhile, the
damage variable was defined by the elastic modulus []. The hydraulic damage variable $D_w$ was defined as Equation (1):

$D_w = 1 - \frac{E_N}{E_0} \quad (1)$

In Equation (1), $D_w$ is the hydraulic damage variable, $E_N$ is the elastic modulus after $N$ dry-wet cycles, and $E_0$ is the elastic modulus after 0 dry-wet cycles.
3.2. Loading Damage Variable (D[m])
A normal distribution, also called a Gaussian distribution, is one of the most important probability distributions. Its form is simple, and it can be used to describe the variation of rock micro-body strength. In general, random variables tend to follow a Gaussian distribution; however, due to the complexity of external conditions, the normal distribution is extended into multidimensional space. According to the central limit theorem, the properly normalized sum of a large number of independent random variables converges to a normal (Gaussian) distribution. Due to the existence of defects such as pores and fissures inside the rock, and the irregularity of these defects, the distribution of rock micro-body strength tends to follow the normal distribution []. An $n$-dimensional random variable $X = (X_1, X_2, \dots, X_n)$ obeys the normal distribution with parameters $(a, B)$, and its probability density function is given by Equation (2) []:

$p(x) = f(x_1, x_2, \dots, x_n) = \frac{1}{(2\pi)^{n/2} (\det B)^{1/2}} \exp\left\{-\frac{1}{2}(X - a)^{T} B^{-1} (X - a)\right\} \quad (2)$
Here $B$ is a positive definite symmetric matrix (Equation (6)), $\det(B)$ is its determinant, and $B^{-1}$ is its inverse matrix; $a$ represents a real-valued column vector. A lowercase letter denotes a vector and an uppercase letter denotes a matrix, as shown in Equations (3)–(6) []:

$a = (a_1, a_2, \dots, a_n)^T \quad (3)$

$X = (x_1, x_2, \dots, x_n)^T \quad (4)$

$X - a = (x_1 - a_1, x_2 - a_2, \dots, x_n - a_n)^T \quad (5)$

$B = \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nn} \end{bmatrix} \quad (6)$
At present, the most commonly used one-dimensional normal distribution is

$f(x) = \frac{1}{\zeta \sqrt{2\pi}} \exp\left[-\frac{(x - \mu)^2}{2\zeta^2}\right] \quad (7)$

where $\mu$ is the mathematical expectation and $\zeta$ is the standard deviation. Normal distribution curves are absolutely symmetric (Figure 6). The parameter $\mu$ determines the position of the axis of symmetry and indicates the peak location of the symmetric normal distribution; the axis of symmetry is $x = \mu$, and a change in $\mu$ does not change the shape of the curve. The parameter $\zeta$ mainly affects how flat or peaked the symmetric normal distribution curve is.
The rock is damaged by external loading during the uniaxial compression loading process. Generally, the rock can be divided into several micro-element bodies. For the mechanical damage caused by the
dry-wet cycles, the probability density of rock micro-body strength follows a symmetric normal distribution law, with the probability density of Equation (7). Based on the above analysis, the mechanical damage variable $D_m$ can be obtained by defining the ratio of the number of broken micro-elements to the total number of micro-elements, as shown in Equation (8):

$D_m = \frac{N_f}{N} = \frac{\int_0^{\varepsilon} N \cdot f(x)\, dx}{N} = \frac{1}{\zeta \sqrt{2\pi}} \int_0^{\varepsilon} \exp\left[-\frac{(x - \mu)^2}{2\zeta^2}\right] dx \quad (8)$

where $N_f$ is the number of damaged micro-elements, $N$ is the total number of micro-elements, and $\varepsilon$ is the rock strain.
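Because Equation (8) is a truncated Gaussian integral, it can be evaluated in closed form with the error function rather than by numerical integration. The sketch below is my own illustration (the parameter values are illustrative only, not the fitted values of Table 1):

```python
import math

def norm_cdf(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def d_m(eps, mu, zeta):
    """Loading damage variable of Eq. (8):
    (1/(zeta*sqrt(2*pi))) * integral_0^eps exp(-(x-mu)^2/(2*zeta^2)) dx."""
    return norm_cdf((eps - mu) / zeta) - norm_cdf(-mu / zeta)

# Illustrative values: damage grows monotonically with strain
print(round(d_m(0.005, mu=0.004, zeta=0.002), 3))
print(round(d_m(0.010, mu=0.004, zeta=0.002), 3))
```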
3.3. Comprehensive Damage Variable (D)
Considering the combined damage of dry-wet cycles and mechanical loading on rock strength, the hydraulic damage variable ($D_w$) and the loading damage variable ($D_m$) were defined respectively, as shown in Equations (1) and (8). The comprehensive damage variable ($D$) was defined in Equation (11):
$1 - D = (1 - D_w)(1 - D_m)$
$D = D_w + D_m - D_w \cdot D_m$
$D = \left(1 - \frac{E(N)}{E_0}\right) + \frac{1}{\zeta \sqrt{2\pi}} \int_0^{\varepsilon} \exp\left[-\frac{(x - \mu)^{2}}{2\zeta^{2}}\right] dx - \left(1 - \frac{E(N)}{E_0}\right) \cdot \frac{1}{\zeta \sqrt{2\pi}} \int_0^{\varepsilon} \exp\left[-\frac{(x - \mu)^{2}}{2\zeta^{2}}\right] dx$
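As a numerical sketch of how the two damage variables combine, the integral in Equation (8) can be evaluated with the standard normal CDF. The modulus, strain, and distribution values below are illustrative placeholders, not the paper's fitted parameters:

```python
import math

def normal_cdf(x, mu, zeta):
    """CDF of a normal distribution with mean mu and standard deviation zeta."""
    return 0.5 * (1.0 + math.erf((x - mu) / (zeta * math.sqrt(2.0))))

def mechanical_damage(eps, mu, zeta):
    """D_m: integral of the normal density from 0 to eps (Equation (8))."""
    return normal_cdf(eps, mu, zeta) - normal_cdf(0.0, mu, zeta)

def comprehensive_damage(D_w, D_m):
    """D = D_w + D_m - D_w * D_m, i.e. 1 - D = (1 - D_w)(1 - D_m) (Equation (11))."""
    return D_w + D_m - D_w * D_m

# Illustrative values: hydraulic damage from a modulus ratio, loading damage at strain eps
E0, E_N = 30.0, 24.0          # hypothetical elastic moduli before/after cycles
D_w = 1.0 - E_N / E0          # Equation (1)
D_m = mechanical_damage(eps=9.0, mu=8.6, zeta=0.5)
D = comprehensive_damage(D_w, D_m)
```

Note that the additive and multiplicative forms of Equation (11) are algebraically identical, and the multiplicative form guarantees $0 \le D \le 1$ whenever both component variables lie in $[0, 1]$.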
4. Damage Constitutive Model under Dry-Wet Cycles
4.1. Damage Constitutive Model
According to the strain equivalence hypothesis of damage mechanics, the constitutive relationship of rock before and after damage has the same form; the difference lies in replacing the Cauchy (nominal) stress with the effective stress. Therefore, the rock damage constitutive equation is shown in Equation (12):
$\varepsilon = \tilde{\sigma}/E = \sigma/((1 - D)E)$
where $\varepsilon$ is the rock strain, $\sigma$ is the rock nominal stress, $\tilde{\sigma}$ is the rock effective stress, $E$ is the original elastic modulus, and $D$ is the comprehensive damage variable.
Therefore, the damage constitutive model under dry-wet cycles in uniaxial loading conditions is given by Equation (13):
$\sigma = E(N) \cdot \lambda \cdot (1 - D_m) \cdot \varepsilon = E_0 \cdot (1 - D_w)(1 - D_m) \cdot \lambda \cdot \varepsilon$
where $\sigma$ and $\varepsilon$ are the stress and strain of the rock, $\lambda$ is a constant used to describe the defects in rock, $D_w$ is the hydraulic damage variable, $D_m$ is the loading damage variable, and $D$ is the comprehensive damage variable of the rock.
The damage constitutive model of rock under the action of dry-wet cycles was established by combining Equations (1), (8), and (11), as shown in Equation (14):
$\sigma = E(N) \cdot \lambda \cdot \left(1 - \frac{1}{\zeta \sqrt{2\pi}} \int_0^{\varepsilon} \exp\left[-\frac{(x - \mu)^{2}}{2\zeta^{2}}\right] dx\right) \cdot \varepsilon$
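Equation (14) can be sketched as a stress-strain curve. The $\mu$ and $\zeta$ values below are the 0-cycle entries from Table 1 and $\lambda = 2$ as stated in the text, while the modulus value is hypothetical and chosen only for illustration:

```python
import math

def sigma(eps, E_N=10.0, lam=2.0, mu=8.653, zeta=0.394):
    """Stress from Equation (14): sigma = E(N) * lambda * (1 - D_m(eps)) * eps."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    D_m = phi((eps - mu) / zeta) - phi((0.0 - mu) / zeta)       # damage accumulated up to eps
    return E_N * lam * (1.0 - D_m) * eps

# The curve rises almost linearly while eps << mu (D_m ~ 0), peaks near mu,
# and softens afterwards as damage accumulates.
stresses = [sigma(e / 10.0) for e in range(0, 120)]
```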
4.2. Comparison of Damage Model and Test Results
Parameters of the damage constitutive model under different dry-wet cycles were obtained from the fitting results using Matlab software (Table 1). The constant used to describe the defects in rock was taken as $\lambda = 2$. Parameters in the rock damage constitutive model were obtained using the uniaxial compression test results. According to Table 1 and Figure 7, $\mu$ decreased with an increasing number of cycles, and $\zeta$ increased with an increasing number of cycles.
The damage constitutive model and test results are compared in Figure 8. Aiming at the construction of a rock damage constitutive model under dry-wet cycles, this paper considers the mechanical damage caused by dry-wet cycles and uniaxial loading, and defines the comprehensive damage variable of rock. The adaptability of the constitutive model to the rock stress-strain curve was analyzed. Results show that the comprehensive damage variable, which considers both rock hydraulic damage and mechanical damage, can describe the rock damage under dry-wet cycles. The damage constitutive model has good adaptability to the rock stress-strain curve.
4.3. Discussion
According to Figure 8 and Table 1, the fitting variance (R²) is greater than 0.910, and the fitting accuracy meets the needs of engineering stability analysis. The damage constitutive model has strong adaptability to the rock stress-strain curve and can provide a mechanical basis for engineering stability analysis.
The rock stress-strain behavior before and after the peak can be better described using the damage constitutive model, but the model has poor adaptability to the deformation characteristics in the elastic deformation stage, which is related to the closure and development of the internal pores and fissures of the rock. Therefore, the mesoscopic damage constitutive model of rock should be studied further.
5. Conclusions
In this paper, a damage constitutive model of rock subjected to dry-wet cycles is studied, focusing on the definition of the damage variable. The main conclusions are as follows:
According to the deformation and failure characteristics of rock under the action of dry-wet cycles, a rock damage constitutive model was established. This model was based on the principle of damage
mechanics and the symmetric normal distribution theory. The parameters of the damage constitutive model under dry-wet cycles were identified. The fitting variance is greater than 0.910, and the
fitting accuracy meets the needs of engineering stability analysis. The rock damage constitutive model and uniaxial compression test results were compared, and results show that the damage
constitutive model has good adaptability to rock stress-strain curves.
Author Contributions
X.C. and Z.Q. designed and directed the project. X.C. processed the experimental data, performed the analysis, drafted the manuscript and designed the figures. P.H. provided critical revision and
acquisition of the financial support for the project leading to this publication; Y.G. and J.L. processed the experimental data. All authors contributed to the final version of the manuscript.
This research was funded by Shandong Provincial Natural Science Foundation grant number NO. ZR2017BEE014 and Scientific Research Foundation of Shandong University of Science and Technology for
Recruited Talents grant number 2017RCJJ050.
This paper is supported by Project NO. ZR2017BEE014 of the Shandong Provincial Natural Science Foundation and the Scientific Research Foundation of Shandong University of Science and Technology for Recruited Talents (2017RCJJ050). This financial support is gratefully acknowledged.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Figure 3. (a) Results of electron probe; (b) Element recognition of position 1; (c) Element recognition of position 2; (d) Element recognition of position 3.
Figure 6. (a) One-dimensional normal distribution with different $μ$, (b) One-dimensional normal distribution with different $ζ$.
Figure 8. (a) Comparison of the damage model and test results after 0 dry-wet cycle; (b) Comparison of damage model and test results after 5 dry-wet cycles; (c) Comparison of damage model and test
results after 15 dry-wet cycles; (d) Comparison of damage model and test results after 20 dry-wet cycles; (e) Comparison of damage model and test results after 30 dry-wet cycles; (f) Comparison of
damage model and test results after 60 dry-wet cycles.
Number of Cycles μ ζ R²
0 8.653 0.394 0.9758
5 8.667 0.614 0.9787
15 8.592 0.505 0.9622
20 8.637 0.618 0.9404
30 8.297 0.577 0.9132
60 8.140 0.589 0.9163
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Chen, X.; He, P.; Qin, Z.; Li, J.; Gong, Y. Statistical Damage Model of Altered Granite under Dry-Wet Cycles. Symmetry 2019, 11, 41. https://doi.org/10.3390/sym11010041
Precalc Problems Explained
Find out how to convert fractions of a degree with degrees, minutes, and seconds, and enter the wonderful world of the radian. I have some examples of how to convert from degrees to radians and back again, as well as angular and linear speed using radians:
Video Link
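As a quick reference for these conversions, here is a small Python sketch (the function names are mine, not from the videos):

```python
import math

def deg_to_rad(degrees):
    """Convert degrees to radians: multiply by pi/180."""
    return degrees * math.pi / 180.0

def rad_to_deg(radians):
    """Convert radians to degrees: multiply by 180/pi."""
    return radians * 180.0 / math.pi

def dms_to_decimal(d, m, s):
    """Degrees, minutes, seconds -> decimal degrees (1 degree = 60' = 3600")."""
    return d + m / 60.0 + s / 3600.0

def angular_to_linear_speed(omega_rad_per_s, radius):
    """Linear speed v = omega * r for a point moving on a circle of the given radius."""
    return omega_rad_per_s * radius
```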
Training Forecasting Models with Darts | Unit8
Any quantity varying over time can be represented as a time series: sales numbers, rainfalls, stock prices, CO2 emissions, Internet clicks, network traffic, etc. Time series forecasting — the ability
to predict the future evolution of time series— is thus a key capability in many domains where anticipation is important. Until recently, the most popular time series forecasting techniques were
focusing on isolated time series; that is, predicting the future of one time series considering the history of this series alone. In the last couple of years, deep learning has made its entry into the domain of time series forecasting, and it is bringing many exciting innovations. First, it allows for building more accurate models that can potentially capture more patterns and also work on
multi-dimensional time series. Second, these models can also potentially be trained on multiple related series. There are many contexts where this capability can be beneficial: for instance for
electricity producers observing the energy demand of many of their customers, or for retailers observing the sales of many potentially-related products. However, one commonly-occurring drawback is
that such deep learning models are typically less trivial to work with for data scientists than some of their simpler statistical counterparts. That is, until Darts came around 🙂
One of the missions of the open-source Darts Python library is to break this barrier of entry, and provide an easy and unified way to work with different kinds of forecasting models.
In this post, we'll show how Darts can be used to easily train state-of-the-art deep learning forecasting models on multiple and potentially multi-dimensional time series, in only a few lines of code.
A notebook containing code and explanations related to this article is available here. If you are new to Darts, we recommend starting by reading our earlier short introductory blog post.
Create a Global Forecasting Model
from darts.models import RNNModel
model = RNNModel(...hyper_parameters...)
Models working with multiple time series are: RNNModel, BlockRNNModel, TCNModel, NBEATSModel, TransformerModel, and RegressionModel (incl. LinearRegressionModel and RandomForest).
Train a Model on Multiple Time Series
model.fit([series1, series2, …])
Forecast Future Values of Any Series
future = model.predict(n=36, series=series_to_forecast)
Train and Forecast with Past and/or Future Covariates Series
model.fit(series=[series1, series2, ...],
          past_covariates=[past_cov1, past_cov2, ...],
          future_covariates=[future_cov1, future_cov2, ...])
future = model.predict(n=36,
                       series=series_to_forecast,
                       past_covariates=[past_cov, ...],
                       future_covariates=[future_cov, ...])
future_covariates have to be known n time steps in advance at prediction time.
Training a Model on Multiple Series
All the deep learning forecasting models implemented in Darts as well as RegressionModel are global forecasting models. This means that these models can be trained on multiple series, and can
forecast future values of any time series, even series that are not contained in the training set. In contrast, the other non neural-net forecasting models in Darts (ARIMA, Exponential Smoothing,
FFT, etc) are currently all local models — namely, they are trained on a single time series to forecast the future of this series.
The ability to train a single model on multiple series is a very important feature, because usually deep learning models shine most when they are trained on an extensive amount of data. It allows
them to match patterns across a potentially large amount of related time series. For example, the N-BEATS model published recently obtains winning forecasting performance when trained on tens of
thousands of time series in the M4 competition (a well-known forecasting competition). We have implemented N-BEATS in Darts, and so it can now be trained and used out-of-the-box on large datasets
with only a few lines of code.
In a future article, we’ll show an example of how to train such large models on big datasets. For the time being however, we would like to expose the functionalities and mechanics of global models in
Darts, from the point of view of users who need to understand and control what’s going on.
Predicting Air Traffic Using Cow Milk Production…
As a toy/cartoon example, we’ll train a model on two time series that have not much in common. Our first series contains the number of monthly airline passengers in the 1950’s, and our second series
contains the monthly milk production (in pounds per cow) around the 1960’s. These two series obviously represent two very different things, and they do not even overlap in time. However,
coincidentally, they express quantities in similar orders of magnitude, so we can plot them together:
Monthly number of air passengers and monthly milk production per cow (in pounds)
Although different, these series share two important characteristics: a strong yearly seasonality, and an upward trend, which could perhaps be seen as an effect of the general economic growth of this
era (from looking at the blue curve we can ask ourselves whether cows’ overall well-being has also been on an upward trend; but that’s a different topic).
Training on Multiple Series
Training a model on several series (in this case two) is really easy with Darts, it can be done like that:
from darts.models import NBEATSModel
model_air_milk = NBEATSModel(input_chunk_length=24,
                             output_chunk_length=12)
model_air_milk.fit([train_air, train_milk])
In this code snippet, we create an NBEATSModel instance (we could also have used any other global model). The input_chunk_length and output_chunk_length parameters specify the lengths of the time
series slices taken by the internal N-BEATS neural network in input and output. In this case, the internal neural net will look 24 months in the past and produce forecasts by outputting “chunks” of
12 points in the future. We’ll give more details on these parameters later.
We then train our model by calling the fit() method with a list of series to train on. Here, train_air and train_milk are two TimeSeries instances containing the training parts of the series.
Producing Forecasts
Once the model is trained, producing forecasts for one (or several) series is a one-liner. For instance, to forecast the future air traffic, we would do:
pred = model_air_milk.predict(n=36, series=train_air)
Note here that we can specify a horizon value n larger than the output_chunk_length : when this happens, the internal neural network will simply be called on its own outputs in an auto-regressive
fashion. As always, the output of the predict() function is itself a TimeSeries. We can quickly plot it, along with the prediction obtained when the same model is trained on the air series alone:
Two forecasting models for air traffic: one trained on two series and the other trained on one. The values are normalised between 0 and 1. Both models use the same default hyper-parameters, but the
number of epochs has been increased in the second model to make the number of mini-batches match.
In this case we get a MAPE error of 5.72% when the model is trained on both series, compared to 9.45% when trained on the air passengers series alone.
Well, that’s an important question, no doubt. And in this very particular case, for this particular set of model and data, it seems to be the case. This is not so surprising though, because here the
model just gets more examples of what monthly time series often look like. We can think of the milk series as providing a sort of data augmentation to the air series. This obviously wouldn’t
necessarily work for any combination of unrelated time series.
Producing Forecasts for Any New Series
Note that we can also just as easily produce forecasts for series that are not in the training set. For the sake of example, here’s how it looks on an arbitrary synthetic series made by adding a
linear trend and a sine seasonality:
from darts.utils.timeseries_generation import linear_timeseries, sine_timeseries
series = 0.2 * sine_timeseries(length=45) + \
         linear_timeseries(length=45, end_value=0.5)
pred = model_air_milk.predict(n=36, series=series)
Even though our synthetic series has not much to do with either air traffic or milk (it doesn’t even have the same seasonality, and it has a daily frequency!), our model is actually able to produce a
decent-looking forecast (note that it probably wouldn’t work well in most cases).
This hints to some pretty nice one-shot learning applications, and we’ll explore this further in future articles.
How it Works (Behind the Scenes)
It’s helpful to go slightly more in details and understand how the models work. You can skip this section if you’re not interested or if you don’t need more control.
Model Architecture
So how does it look internally? First, as already mentioned, the internal neural net is built to take some chunks of time series in input (of length input_chunk_length), and produce chunks of time
series in output (of length output_chunk_length). Importantly, a TimeSeries in Darts can have several dimensions — when this happens the series is called multivariate, and its values at each time
stamp are simply vectors instead of scalars. For example, the inputs and outputs on a model working with “past covariates” look like this:
The input and output time series chunks consumed and produced by the neural network to make forecasts. This example is for “past covariates” model; where past values of the covariate series are
stacked with past target values to form the neural net input.
We distinguish two different kinds of time series: the target series is the series we are interested to forecast (given its history), and optionally some covariate series are other time series that
we are not interested to forecast, but which can potentially help forecasting the target. Both target and covariate series may or may not be multivariate — Darts will automatically figure out the
right input/output dimensions of the internal neural net based on the training data. In addition, some models support “past” covariates — i.e. covariate series whose past values are known at
prediction time, while others support “future” covariates — i.e. covariate series whose future (and possibly historic) values are known at prediction time. These covariates are stacked with the
target (their dimensions concatenated) in order to build the neural net input. We refer to this article for more information on past and future covariates in Darts.
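This stacking of target and covariate dimensions can be sketched with plain Python lists standing in for what Darts does on NumPy arrays (the helper name is mine, not a Darts API):

```python
def build_input_chunk(target_vals, covariate_vals):
    """Concatenate target and covariate dimensions at each time stamp.

    target_vals / covariate_vals: equal-length lists of per-timestamp value vectors.
    The result is a width-(d_target + d_covariate) input vector per time stamp."""
    assert len(target_vals) == len(covariate_vals)
    return [t + c for t, c in zip(target_vals, covariate_vals)]

# A univariate target plus a 2-dimensional past covariate -> width-3 input vectors
target = [[1.0], [2.0], [3.0]]
covariate = [[0.1, 10.0], [0.2, 20.0], [0.3, 30.0]]
net_input = build_input_chunk(target, covariate)
```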
Finally, not all models need an output_chunk_length. RNNModel is a “truly recurrent” RNN implementation, and so it always produces outputs of length 1, which are used auto-recursively to produce
forecasts for a desired horizon n. Our implementation of RNNModel is similar to DeepAR, and it supports future covariates.
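This auto-regressive mechanism, also used by chunk-based models when n exceeds output_chunk_length, can be sketched with a toy stand-in predictor (pure Python, illustrative only, not Darts internals):

```python
def autoregressive_forecast(series, predict_chunk, input_len, output_len, n):
    """Repeatedly call a one-chunk predictor, feeding its outputs back as inputs,
    until n points have been produced (mimicking predict(n=...) for n > output_len)."""
    history = list(series)
    forecast = []
    while len(forecast) < n:
        chunk = predict_chunk(history[-input_len:])[:output_len]
        history.extend(chunk)   # outputs become part of the next input window
        forecast.extend(chunk)
    return forecast[:n]

# Toy "model": repeat the last value of the input window 12 times
naive = lambda window: [window[-1]] * 12
pred = autoregressive_forecast(list(range(24)), naive, input_len=24, output_len=12, n=36)
```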
Training Procedure
In order to train the neural network, Darts will build a dataset consisting of multiple input/output pairs from the provided time series. The inputs are used as inputs of the neural network and the
outputs serve to compute the training loss. There are several possible ways to slice series to produce training samples, and Darts contains a few datasets in the darts.utils.data submodule.
By default, most models will use a SequentialDataset, which simply builds all the consecutive pairs of input/output sub-series (of lengths input_chunk_length and output_chunk_length) existing in the
series. On two time series, the slicing would look as follows:
The slicing of two target time series to produce some input/output training samples, in the case of a SequentialDataset (no covariate in this example).
The series used for training need not be the same length (in fact, they don’t even need to have the same frequency).
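The SequentialDataset slicing can be sketched as follows (a simplification of what darts.utils.data does, for a single series and no covariates):

```python
def sequential_samples(series, input_len, output_len):
    """All consecutive (input, output) sub-series pairs of the given lengths."""
    samples = []
    for start in range(len(series) - input_len - output_len + 1):
        inp = series[start:start + input_len]
        out = series[start + input_len:start + input_len + output_len]
        samples.append((inp, out))
    return samples

# A series of length 10 yields 10 - (3 + 2) + 1 = 6 samples
pairs = sequential_samples(list(range(10)), input_len=3, output_len=2)
```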
As another example, HorizonBasedDataset is inspired from the N-BEATS paper, and produces samples closer to the end of the series, possibly even ignoring the beginning of long series.
All of the slicing operations done in datasets are done efficiently, using Numpy views of the arrays underlying the time series, in order to optimize training speed (a GPU can be used as well). To
support large datasets that do not fit in memory, the Darts training datasets can also be built manually from Sequences of TimeSeries, which makes it possible to implement lazy data loading. In this
case, the models can be fit by calling fit_from_dataset() instead of fit(). Finally, if you need to specify your own slicing logic, you can implement your own training dataset, by
subclassing TrainingDataset.
Using Covariates
Covariates represent time series that are likely to provide information about the target series, but which we are not interested in forecasting. As an example, we will build a synthetic series
by multiplying two sines:
series1 = sine_timeseries(length=400, value_frequency=0.08)
series2 = sine_timeseries(length=400, value_frequency=0.007)
target = series1 * series2
covariates = series2
This is what these series look like when plotted:
Let’s also split them in train and validation sub-series of lengths 300 and 100, respectively:
target_train, target_val = target[:300], target[300:]
cov_train, cov_val = covariates[:300], covariates[300:]
Let’s then build a BlockRNNModel model and fit it on the target series without using covariates:
from darts.models import BlockRNNModel
model_nocov = BlockRNNModel(input_chunk_length=100,
                            output_chunk_length=100)
model_nocov.fit(target_train)
We can now get a forecast for 100 points after the end of the training series. As the series has many near-zero values, we’ll use the Mean Absolute Scaled Error to quantify the error:
from darts.metrics import mase
pred_nocov = model_nocov.predict(n=100)
mase_err_nocov = mase(target, pred_nocov, target_train)
This is actually really not bad, given that we’ve just used a vanilla RNN with default parameters and we are producing a single 100-points ahead forecast. Let’s look if we can do even better by using
the covariates series. Using covariates is meant to be really easy — we don’t even have to worry about it when building the model; we can just call fit() with a past_covariates argument specifying
our past covariate series:
model_cov = BlockRNNModel(input_chunk_length=100,
                          output_chunk_length=100)
model_cov.fit(target_train, past_covariates=cov_train)
The only difference (w.r.t. not using covariates) is that we specify past_covariates=cov_train when training the model. At prediction time, we also have to specify this past covariate:
pred_cov = model_cov.predict(n=100,
                             past_covariates=covariates)
mase_err_cov = mase(target, pred_cov, target_train)
This forecast is even more spot-on than the previous one. In this case the covariate series explicitly informs the RNN about the slowly varying low-frequency component of the target series. Just by specifying the covariates, we've been able to divide the MASE error by 2, not bad!
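For reference, the non-seasonal variant of the MASE metric used in this comparison can be sketched in plain Python (darts.metrics.mase works on TimeSeries objects and also handles seasonality; the helper below is only an illustration):

```python
def mase_nonseasonal(actual, predicted, train, m=1):
    """Mean Absolute Scaled Error: MAE of the forecast divided by the MAE of
    the m-step naive forecast computed on the training series."""
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    scale = sum(abs(train[i] - train[i - m]) for i in range(m, len(train))) / (len(train) - m)
    return mae / scale

# A forecast exactly as good as the 1-step naive method scores 1.0
train = [1.0, 2.0, 3.0, 4.0]
err = mase_nonseasonal(actual=[5.0, 6.0], predicted=[4.0, 5.0], train=train)
```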
Here we have used a BlockRNNModel, which supports past_covariates. Darts has other models supporting future_covariates, and we recommend checking this other article in order to have a better view of
past and future covariates.
We have just seen in this example how to use covariates with models trained on a single target series. The procedure can however be seamlessly extended to multiple series. To do this, it’s enough to
provide a list containing the same number of covariates to fit() and predict() as the number of target series. Let us also mention that backtesting (using either the backtest()
or historical_forecasts() functions of models) and grid-searching hyper parameters (using the gridsearch() method) also support specifying past and/or future covariates.
We are very excited about the nascent success of applying deep learning to the domain of time series. With Darts, we are trying to make it extremely easy to train and use state-of-the-art deep
learning forecasting models on a large number of time series. The latest release of Darts goes a long way in this direction, but we are still actively working on future developments, among which: support for non-time-series conditioning and a treatment of probabilistic time series.
At Unit8, we are a team of software engineers and data scientists on a mission to democratise machine learning and good data practices in the industry, and we work on many other things besides time
series. If you‘d like to talk with us, do not hesitate to get in touch.
Acknowledgements — We'd like to thank everyone who has already contributed to Darts: Francesco Lässig, Léo Tafti, Marek Pasieka, Camila Williamson, and many other contributors. We always welcome issues and pull requests on our github repo. You can also let us know what you think by dropping us a line.
Thanks to Michal Rachtan, Gael Grosch, and Unit8.
The Foucault pendulum
In this article I'm assuming that the reader is comfortable with the physics that was discussed in the previous articles of the Coriolis effect series. In particular, I'm assuming the reader is at
ease with the information in the articles about Rotational-vibrational coupling and Oceanography: inertial oscillations
The Foucault pendulum
A Foucault pendulum located at the latitude of Paris takes about 32 hours to complete a precession cycle. This means that after one sidereal day, when the Earth has returned to its orientation of one day before, the pendulum has precessed three quarters of a cycle. If it has started in the north-south direction, it is swinging east-west when the Earth has returned to the orientation of one day before.
Animation 1 represents another example, the case of a Foucault pendulum located at 30 degrees latitude. One can recognize that the plane of swing takes two sidereal days to precess clockwise with
respect to the surface it is suspended above. At the equator the pendulum keeps swinging in the same direction relative to the surface it is suspended above. Diagram 2 depicts that from the poles to
the equator there is a smooth transition.
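These periods all follow the sine law, T = T_sidereal / sin(latitude), which can be checked numerically (the sidereal day is taken as 23.934 hours, a standard value not stated in the article):

```python
import math

SIDEREAL_DAY_H = 23.934  # length of a sidereal day in hours

def precession_period_hours(latitude_deg):
    """Foucault precession period at a given latitude; infinite at the equator."""
    s = math.sin(math.radians(latitude_deg))
    return math.inf if s == 0 else SIDEREAL_DAY_H / s

paris = precession_period_hours(48.85)   # ~31.8 h, i.e. "about 32 hours"
at_30 = precession_period_hours(30.0)    # exactly two sidereal days
```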
The challenge is to explain the behavior of a Foucault pendulum that is not located on either the poles or the equator. Many discussions of the Foucault pendulum merely rephrase the observation. For instance, Hart, Miller and Mills point out that "the angular velocity of the plane of the pendulum is simply the projection of the Earth's angular velocity onto a line joining the center of the Earth and the point of suspension of the pendulum".^1 That's an empty statement; it merely restates what is already known.
A satisfactory explanation of the motion pattern of the latitudinal Foucault pendulum has been given by meteorologists^2 . The fact that meteorologists have made a crucial contribution is not as
surprising as it seems. Air mass, the object of study in meteorology, is pretty much prevented from moving in vertical direction, but there is hardly any restraint to motion parallel to the local
Earth surface. Likewise, the pendulum bob has no freedom in vertical direction, but it is unconstrained in the directions parallel to the Earth's surface. Whenever air mass has a velocity relative to
the Earth there is an effect that arises from the overall Earth rotation. Likewise, the fact that the swinging pendulum is circumnavigating the Earth's axis affects the motion of the pendulum.
I will call a Foucault pendulum located on one of the poles a polar Foucault pendulum and a Foucault pendulum located somewhere on the equator an equatorial Foucault pendulum. A Foucault pendulum
located at any latitude between the poles and the equator I will call a latitudinal Foucault pendulum.
The motion pattern of the Foucault pendulum involves two oscillations, at different scales of magnitude. At the small scale there is the swing of the pendulum, which I will refer to as 'the
vibration'. At the large scale there is the overall rotation of the Earth around its axis. The vibration participates in the overall rotation and is affected by it. Due to the large difference in
period of oscillation the effect is very small during each separate swing. It appears to be negligible, but it's actually significant because the effect is cumulative. My purpose is to show why the
effect is cumulative.
Table of contents
· The Wheatstone-Foucault device
· The Wheatstone pendulum
· Physics principles
· Wheatstone pendulum forces
· Foucault pendulum forces
· Decomposition in vector components
· Exchange of angular momentum
· The sine of the latitude
· Applicability
· Mathematical derivations
· The two forces in the equation of motion
· The centrifugal term and the Coriolis term
· The full equation of motion
· What the equation describes
· Overview: the centripetal force doing work
· Historical note
The Wheatstone-Foucault device
In 1851 a paper by Charles Wheatstone was read to the Royal Society in which Wheatstone described the device that is shown in Image 3. A transcript of the paper by Wheatstone is available at Wikisource: "Note relating to M. Foucault's new mechanical proof of the Rotation of the Earth".
The helical spring acts as a heavy string; when plucked, it vibrates. The circular platform can rotate around a vertical axis. I will refer to this axis as the central axis.
The Wheatstone pendulum
I will discuss the physics of the Foucault pendulum by drawing parallels with Wheatstone's device. The features that the Foucault pendulum and the Wheatstone-Foucault device have in common are the
features that matter for understanding the physics taking place.
To underline the similarity with the Foucault pendulum, I will discuss a slightly different version of the Wheatstone-Foucault device. I refer to the device depicted in image 4 as "the Wheatstone
pendulum". The small sphere corresponds to the bob of the Foucault pendulum, the force exerted by the inside spring corresponds to the gravitational force that is being exerted on the bob, and the
force exerted by the outside spring corresponds to the force that the pendulum wire exerts on the bob.
The University at Buffalo, State University of New York physics department has a permanent exhibition that includes a tabletop model. The tabletop model uses an elastic pendulum to emulate a Foucault
pendulum at 43° degrees latitude, the latitude of Buffalo, New York.
Other examples of such a tabletop mechanical model demonstrating the Foucault effect:
Finally, Image 5 depicts yet another version of the device that I will use later to focus on a particular factor that is involved in the motion pattern of the Foucault pendulum. The suspension points
are aligned with the central axis, but the bob is located at some distance from the central axis.
Physics principles
The assumptions for the purpose of simplifying the analysis.
• All of the mass of the bob is taken as concentrated in the midpoint of the bob
• The springs are taken as massless.
• The range of motion of the bob with respect to the rest state is taken to be within the limits of the small angle approximation.
• Only the forces exerted by the springs are considered.
Wheatstone pendulum forces
Images 6 and 7 show a detail of the Wheatstone pendulum, in different states of motion.
Image 6:
The top side of the image shows the shape of the springs when the Wheatstone pendulum is not in motion. Both springs are equally stretched, and they are aligned. The bottom side of the image
illustrates the outward displacement of the pendulum bob when the platform is rotating. The inner spring is more extended than in the rest state, the outside spring is less extended than in the rest
state, and thus the springs provide the required centripetal force to make the pendulum bob circumnavigate the central axis of rotation.
Image 7:
The top side of the image shows the shape of the springs when the Wheatstone pendulum is vibrating, but not rotating around the central axis. Both springs are equally stretched, and at the
equilibrium point they are aligned. The bottom side of the image shows that when the pendulum bob is vibrating while the platform is rotating the midpoint of the vibration is the outward displaced
point of the pendulum bob.
Foucault pendulum forces
Image 8 depicts the forces in the case of a plumb line. It's necessary to distinguish between true gravity and effective gravity. In the diagram the blue arrow represents true gravity, it is directed
toward the center of gravitational attraction. On a non-rotating celestial body true gravity and effective gravity coincide, but in the case of a rotating celestial body (due to the rotation deformed
to a corresponding oblate spheroid) some of the gravity is spent in providing required centripetal force. (For a quantitative discussion, see the 'differences in gravitational acceleration' section
in the Equatorial bulge article.) Due to the rotation the plumb line swings wide. The plumb line's equilibrium point is the point where the angle between true gravity and the tension in the wire
provides the required centripetal force.
Of course, since inertial mass is equal to gravitational mass a local gravimetric measurement cannot distinguish between effective gravity and true gravity. Nevertheless, for understanding the
dynamics of the situation it is necessary to remain aware that at all times a centripetal force is required to sustain circumnavigating motion.
At 45 degrees latitude the required centripetal force, resolved in the direction parallel to the local surface, is 0.017 newton for every kilogram of mass. In the case of the 28 kilogram bob of the
Foucault pendulum in the Pantheon in Paris that is about 0.5 newton of force. If you have a small weighing scale among your kitchen utensils then you can press down on it and feel how much force
corresponds to a weight of 50 grams. (2.2 pounds corresponds to 1 kilogram.) Or you can calculate how much centripetal force you are subject to yourself, and then use the scale to feel how much force
that is. If you weigh 75 to 80 kilograms you are looking at roughly 130 grams of force. The angle between true gravity and the direction of the plumb line provides that force.
The angle between true gravity and a plumb line at 45 degrees latitude is about 0.1 of a degree. For the Foucault pendulum in the Pantheon, with a length of 67 meters, the corresponding horizontal
displacement is 0.11 meter. (Interestingly, for the pendulum in the Pantheon Foucault has documented that on occasion there was opportunity for uninterrupted runs of six to seven hours. As far as he
could tell the precession rate remained the same. William Tobin writes in his biography of Foucault that according to calculations after 7 hours friction will have reduced the amplitude of the swing
to about 0.1 meter.)
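The numbers in the two paragraphs above can be reproduced with a short script. The constants (sidereal day, mean Earth radius, effective gravity) are standard values, not taken from the article:

```python
import math

# Assumed standard constants (not quoted in the article itself)
OMEGA = 2 * math.pi / 86164.1   # Earth's rotation rate, rad/s (sidereal day)
R_EARTH = 6.371e6               # mean Earth radius, m
G = 9.81                        # effective gravity, m/s^2

def horizontal_centripetal(lat_deg):
    """Required centripetal acceleration at latitude lat_deg,
    resolved parallel to the local surface (N per kg of mass).

    The bob circles the axis at radius R*cos(lat); projecting the
    centripetal acceleration onto the local horizontal adds sin(lat)."""
    lat = math.radians(lat_deg)
    return OMEGA**2 * R_EARTH * math.cos(lat) * math.sin(lat)

a_h = horizontal_centripetal(45.0)   # horizontal force per kg at 45 degrees
angle = math.atan(a_h / G)           # plumb-line deflection from true gravity
sag = 67.0 * math.tan(angle)         # horizontal displacement for a 67 m wire

print(f"{a_h:.4f} N/kg, {math.degrees(angle):.3f} deg, {sag:.3f} m")
```

This gives about 0.017 N per kilogram at 45 degrees latitude, a plumb-line deflection of about 0.1 degree, and a horizontal displacement of about 0.11 to 0.12 m for the 67 m wire of the Pantheon pendulum, in line with the figures quoted above.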
Decomposition of the force
Picture 9 is an interactive diagram. The red dot is draggable. The two horizontal lines are sliders.
The blue dot represents the central rotation axis. The green dot is circumnavigating the blue dot. The green vector represents the actually exerted force. For simplicity this force is treated as a
perfect harmonic force; a force that is at every point proportional to the distance to the midpoint. The blue vector represents required centripetal force.
The brown vector is the green vector minus the blue vector. The grey dot represents the displaced midpoint of the vibration. The brown vector can be thought of as a restoring force towards the
displaced midpoint of the vibration.
The sliders adjust the size of the required centripetal force and the actually exerted force respectively.
Purpose of the decomposition
The derivation of the equation of motion further down in this article uses the components, not the actually exerted force. The advantage: in the equation of motion for the rotating coordinate system
the required centripetal force term and the centrifugal term drop away against each other.
Exchange of angular momentum
I will first discuss the motion pattern of the device depicted in image 10.
Let the platform be rotating counterclockwise. To underline the comparison with the Foucault pendulum I will call the bob's motion in a direction tangent to the rotation 'east-west motion'.
Motion towards and away from the central axis
At every point in time, the bob has a particular angular momentum with respect to the central axis of rotation. When the bob is pulled closer to the central axis of rotation the centripetal force is
doing mechanical work. As a consequence of the centripetal force doing work the angular velocity of the bob increases. (Compare a spinning ice-skater who pulls her arms closer to her body to make
herself spin faster.) When the pendulum bob is moving away from the central axis the centripetal force is doing negative work, and the bob's angular velocity decreases. The animation illustrates that
the effects during the halfswing towards the central axis and halfswing away from the central axis do not cancel each other; the combined effect is cumulative.
Velocity relative to co-rotating motion
The amount of centripetal force that is exerted upon the pendulum bob is "tuned" to the state of co-rotating with the system as a whole. I will refer to the velocity that corresponds to co-rotating
with the system as a whole as 'equilibrium velocity'. During a swing of the pendulum bob in eastward direction the bob is circumnavigating the central axis faster than the equilibrium velocity, hence
during an eastward swing the bob will swing wide. During a swing of the pendulum bob in westward direction the bob is circumnavigating the central axis slower than the equilibrium velocity, hence
during a westward swing there is a surplus of centripetal force, which will pull the bob closer to the central axis. The animation illustrates that the effects during the halfswing from east-to-west
and the halfswing from west-to-east do not cancel each other; the combined effect is cumulative.
Java simulation
To further illustrate the symmetries that underlie the motion of animation 11 I have created the following 2D Java simulation: Circumnavigating pendulum
Swinging towards and away from the central axis is a vibration in radial direction. Swing in tangential direction superimposed on the circumnavigating motion is a vibration in tangential direction.
The pendulum bob and the pendulum support are exchanging angular momentum all the time.
Of course, in the case of a Foucault pendulum the entire Earth is effectively the pendulum support. Obviously since the Earth is so much heavier than the pendulum bob the Earth's change of angular
momentum is utterly negligible. But as a matter of principle change of (angular) momentum is always in the course of an exchange of (angular) momentum.
Animation 11 shows a remarkable symmetry: the direction of the bob's oscillation with respect to inertial space remains the same, just as in the case of a polar Foucault pendulum!
In the case of a polar Foucault pendulum the fact that the direction of the plane of swing remains the same is due to straightforward conservation of momentum. In the case depicted in image 10 and
animation 11 there is a precession relative to the rotating system that precisely cancels the system's rotation. In the section in which I discuss the mathematical treatment I show how that occurs.
The sine of the latitude
Animation 13 shows the case of the Wheatstone pendulum when it is set to model a Foucault pendulum at 30 degrees latitude. At 30 degrees latitude a full precession cycle takes two days. Compared to
the case represented in animation 10 the precession relative to the co-rotating system is slower.
The closer to the equator a Foucault pendulum is located, the slower its precession relative to the ground it is suspended above. At the equator the plane of swing is completely co-rotating with the
rotation of the overall system.
Image 14 illustrates why the precession is slower at lower latitudes.
The amount of precession (relative to the co-rotating system) is determined by the exchange of angular momentum. At 30 degrees latitude: when the swing from one extremal point to the opposite
extremal point covers a distance of L, then the motion towards (or away from) the central axis is over a distance of 1/2 L. So in the case of a setup at 30 degrees latitude the force is half as
effective, and the precession is half the rate of a Foucault setup near the poles.
Java applets
I have created a 3D simulation that models the Foucault pendulum
The ratio of vibrations to rotations
In actual Foucault setups the ratio of vibrations to rotations is in the order of thousands to one. In the examples given in this article the ratio of vibrations to rotations is in the order of ten
or twenty to one. Interestingly, this difference is inconsequential. In both ranges the physics principles are the same; the considerations that are presented are valid for all vibration-to-rotation ratios.
However, below a ratio of around 10 to 1, approaching 1 to 1, the motion associated with the rotation and the motion associated with the vibration blur into each other so much that there is no
meaningful Foucault effect to be observed.
The amplitude of the vibration
Images 6 and 7 depict a case where the amplitude of the vibration is slightly smaller than the outward displacement of the bob; in the usual setup the amplitude of the vibration is larger than the
displacement, much larger. Does this matter? It does not matter in the following sense: one of the properties of the Foucault pendulum is that its precession does not depend on the amplitude of the
swing; when the amplitude of a Foucault pendulum decays the rate of precession remains the same. The Java simulations Foucault pendulum and Foucault rod illustrate the property that the rate of
precession is independent from the vibration amplitude.
Mathematical derivations
I will start with a derivation for the Wheatstone pendulum setup that is depicted in image 15. That setup is effectively a 2-dimensional case; all forces that are involved act parallel to the plane
of the equator; all motion is in a plane that is parallel to the equator. At the very end of the mathematical discussion I will add the modification that generalizes the result to cases where the
suspension points are not aligned with the central axis of rotation.
The equation of motion
As discussed in the section Decomposition in vector components, the mathematics of the equation of motion is simplified by representing the force exerted upon the pendulum bob as a combination of two components:
· Centripetal force that sustains the rotation
· Restoring force that sustains the vibration
Both can be treated as harmonic forces.
The centripetal force that sustains rotation with constant angular velocity Ω, for a coordinate system with the zero point at the central axis, is given by:
The restoring force acts towards the equilibrium point (plumb line direction) of the pendulum. The coordinate system can be chosen in such a way that the central axis and the equilibrium point are
both on the y-axis. Let the y-coordinate of the equilibrium point be called y_e. Let the frequency of the pendulum swing be called ψ.
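The two force expressions referred to above are not reproduced in this copy of the article. Written per unit of mass (consistent with the 'No factor m' note further down), a plausible reconstruction is:

```latex
% Sketch, per unit mass; zero point of the coordinate system on the
% central axis, equilibrium point at (0, y_e), swing frequency psi.
\begin{align}
  \vec{a}_{\mathrm{centripetal}} &= -\Omega^{2}\,(x,\;y) \\
  \vec{a}_{\mathrm{restoring}}   &= -\psi^{2}\,(x,\;y - y_{e})
\end{align}
```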
The centrifugal term and the Coriolis term
I assume that the reader is at ease with the centrifugal term and the Coriolis term. The necessary information is present in the articles Rotational-vibrational coupling and Oceanography: inertial
oscillations and in the following interactive animation Coriolis effect. In the rotational-vibrational coupling article I present a derivation of the Coriolis term (the centrifugal term is trivial),
and in the inertial oscillation article I show how things work out for terrestrial effects.
To help recognize the notation: in the following system of equations (which is for motion relative to a coordinate system that rotates with angular velocity Ω) the term that is proportional to x and
y is the centrifugal term. The Coriolis term is proportional to dx/dt and dy/dt respectively.
The acceleration that is associated with the Coriolis effect is perpendicular to the velocity. If the velocity vector is (dx/dt, dy/dt), then a vector perpendicular to it is (dy/dt, -dx/dt): the components are swapped and one of them is negated. In the system of motion equations (for the x-direction and the y-direction) you see that the term 2Ω dy/dt appears in the equation for acceleration in the x-direction, and vice versa.
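The system of equations that this paragraph refers to does not survive in this copy. For motion relative to a coordinate system rotating with angular velocity Ω, with F_x and F_y the components of the exerted force per unit mass, the standard form is:

```latex
% Rotating-frame equations of motion: centrifugal terms proportional
% to x and y, Coriolis terms proportional to dx/dt and dy/dt.
\begin{align}
  \frac{d^{2}x}{dt^{2}} &= F_{x} + \Omega^{2}x + 2\Omega\frac{dy}{dt} \\
  \frac{d^{2}y}{dt^{2}} &= F_{y} + \Omega^{2}y - 2\Omega\frac{dx}{dt}
\end{align}
```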
The full equation
No factor m
Let me explain first why I have omitted the factor 'm' for the mass in the equation of motion below. The restoring force arises from elasticity. If the bob is replaced with a heavier bob then the
elastic material stretches some more before settling into an equilibrium state. The required force is proportional to m, and the setup simply self-adjusts to provide the required force.
The full equation of motion for motion with respect to a rotating coordinate system has four terms: the two components of the force plus the centrifugal term and the Coriolis term.
It's assured that 'centrifugal' and 'centripetal' drop away against each other because the system is self-adjusting: if the angular velocity of the system would increase then the springs deform a
little more until the point is reached where the springs once more provide the required amount of centripetal force.
Letting the centrifugal term and the expression for the centripetal force drop away against each other:
This equation of motion describes the effects of the centripetal force on the motion pattern.
The above expression can be simplified further by shifting the zero point of the coordinate system. The Coriolis term only contains velocity with respect to the rotating system, so it is independent
of where the zero point of the coordinate system is positioned. In the following equations x and y are not the distance to the central axis but the distance to the center point of the vibration.
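The simplified equations themselves are missing from this copy. After the centripetal force and the centrifugal term cancel and the origin is shifted to the center point of the vibration, the equations of motion reduce to:

```latex
% Reduced equations: harmonic restoring force plus Coriolis coupling,
% with x and y measured from the center point of the vibration.
\begin{align}
  \frac{d^{2}x}{dt^{2}} &= -\psi^{2}x + 2\Omega\frac{dy}{dt} \\
  \frac{d^{2}y}{dt^{2}} &= -\psi^{2}y - 2\Omega\frac{dx}{dt}
\end{align}
```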
As a reminder: the above equations apply for only a single case, the case when the suspension of the pendulum is aligned with the central axis of rotation, as depicted in image 15.
This system of equations is the same as the equations of motion for two coupled oscillators. Here the oscillations are vibration in x-direction and vibration in y-direction. The Coriolis term
describes that acceleration in x-direction is proportional to velocity in y-direction, and that acceleration in y-direction is proportional to velocity in x-direction. In other words: the Coriolis
term describes the transfer of the vibration direction.
Weakened coupling
When the attachment points are not aligned with the central axis of rotation the coupling between the rotation and the vibration is weakened. Less motion towards and away from the central axis of
rotation means the centripetal force will be doing less work.
The derivation of an analytic solution to that equation of motion is given in the second Foucault pendulum article.
Image 17 has been created by plotting the analytic solution to the above equation of motion. It represents the case where the ratio of ψ to Ω is 11 to 1 (Usually the ratio of ψ to Ω is in the order
of thousands to one). The image depicts the case of releasing the bob in such a way that on release it has no velocity with respect to the rotating system.
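The equations of motion can be checked numerically. The sketch below is my own code, not part of the original article: it integrates the reduced rotating-frame equations d²x/dt² = -ψ²x + 2Ω dy/dt, d²y/dt² = -ψ²y - 2Ω dx/dt with a fourth-order Runge-Kutta step and compares the result against the closed-form solution, using the same ψ:Ω ratio of 11 to 1 and a release at rest in the rotating frame:

```python
import math
import cmath

def simulate(psi, omega, t_end, dt=1e-3):
    """RK4 integration of the reduced rotating-frame equations,
    starting from x=1, y=0, at rest with respect to the rotating frame."""
    def deriv(s):
        x, y, vx, vy = s
        return (vx, vy,
                -psi * psi * x + 2 * omega * vy,
                -psi * psi * y - 2 * omega * vx)
    s = (1.0, 0.0, 0.0, 0.0)
    for _ in range(round(t_end / dt)):
        k1 = deriv(s)
        k2 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
        k3 = deriv(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
        k4 = deriv(tuple(a + dt * b for a, b in zip(s, k3)))
        s = tuple(a + dt / 6.0 * (b + 2 * c + 2 * d + e)
                  for a, b, c, d, e in zip(s, k1, k2, k3, k4))
    return s  # (x, y, vx, vy)

def analytic(psi, omega, t):
    """Closed-form solution. In complex form z = x + iy the equation is
    z'' + 2i*Omega*z' + psi^2 z = 0, with eigenfrequencies
    -Omega +/- sqrt(Omega^2 + psi^2): the swing pattern precesses at
    rate -Omega relative to the rotating frame."""
    w0 = math.hypot(psi, omega)
    A = 0.5 * (1.0 + omega / w0)
    B = 0.5 * (1.0 - omega / w0)
    z = cmath.exp(-1j * omega * t) * (A * cmath.exp(1j * w0 * t)
                                      + B * cmath.exp(-1j * w0 * t))
    return z.real, z.imag

# Example: psi:Omega = 11:1, release at rest in the rotating frame
state = simulate(11.0, 1.0, 1.0)
```

The precession factor e^(-iΩt) in the closed-form solution is the mathematical counterpart of the statement above: relative to the rotating system the vibration precesses at exactly -Ω, cancelling the system's rotation when the coupling is 100%.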
What the equations describe
Remarkably, the equations describe that the shape of the trajectory of the Foucault pendulum is exactly the same at all latitudes; the latitude of deployment affects only the rate of precession. That
means that just from the shape of the trajectory you will not be able to observe the magnitude of the Earth's rotation rate. For instance, if you observe that it takes 32 hours for the pendulum to
complete a precession cycle then maybe the Earth takes 32 hours to rotate, or maybe you are on a latitude where the precession takes 32 hours, with the Earth rotating at some unknown faster rate. So
if you limit yourself rigidly to using data from the pendulum motion only, you cannot observe the Earth's rotation rate.
The similarity of the shape on every latitude is quite surprising, for in the case of a polar pendulum and in the case of a latitudinal pendulum the mechanism that is involved is entirely different.
In the case of a polar pendulum the only physics taking place is the swing of the pendulum, the Earth merely rotates underneath the pendulum, without affecting it; the "precession" of a polar
pendulum is apparent precession. On the other hand, in the case of a latitudinal pendulum the pendulum setup as a whole is circumnavigating the Earth's axis and consequently the vibration is
affected: there is a coupling of latitudinal and longitudinal vibration. When the coupling is 100% as in the setup depicted in animation 10 the vibration retains the same orientation with respect to
inertial space. When the coupling is less than 100%, as in the case of a Foucault setup somewhere between the poles and the equator, there is an actual precession of the pendulum swing.
Overview: the influence of the centripetal force.
The reason that the centripetal force is crucial is the fact that the direction of the centripetal force is not constant.
For contrast: compare with the case of a displacing force that is constant in direction. Take for instance the following setup: a pendulum suspended in a train carriage that accelerates uniformly in
a straight line. That uniform acceleration can be incorporated as a tilt of the vertical (just like a plumb line in a uniformly accelerating train carriage will be tilted accordingly); it will not
affect the direction of the pendulum swing.
In the case of a Foucault pendulum the centripetal force does affect the direction of the plane of swing. While the centripetal force's change of direction during each separate swing is minute, it is
nonetheless the determining factor because the effect is cumulative.
Historical note.
Gustave Gaspard Coriolis undertook a theoretical examination of the efficiency of waterwheels. In the opening paragraph of his article, Coriolis mentions that Bernoulli had analysed the force that
acts upon an object that can slide freely inside a rotating horizontal tube. Coriolis then took it upon himself to work out the most general formulas, formulas for handling any
motion. These formulas would allow Coriolis to calculate, for any shape of water conduit, the amount of work that could be extracted. In the course of deriving the formulas, one of the terms that arose was
what today is referred to as 'the Coriolis term'.
Coriolis' investigations dealt with mechanical work, that is, with energy conversion. And in the case of a latitudinal Foucault pendulum the actual precession is due to work being
done by the centripetal force.
The main sources of information for this article:
• Persson, Anders, 1998 How do we Understand the Coriolis Force? Bulletin of the American Meteorological Society 79, 1373-1385.
• Persson, Anders The Coriolis effect - a conflict between common sense and mathematics PDF-file. 17 pages. A general discussion by Anders Persson of various aspects of the Coriolis effect,
including Foucault's Pendulum and Taylor columns. (PDF-file, 800 KB)
• Phillips, N. A., What Makes the Foucault Pendulum Move among the Stars? Science and Education, Volume 13, Number 7, November 2004, pp. 653-661(9)
(Previously published in French: Ce qui fait tourner le pendule de Foucault par rapport aux étoiles, La Météorologie, no. 34, August 2001.)
note 1
John B. Hart, Raymond Miller, Robert L. Mills, A simple geometric model for visualizing the motion of a Foucault pendulum American Journal of physics 55(1), January 1987
note 2
Authors that I am aware of who discuss the mechanism of the Foucault pendulum's precession.
- Ferrel, W. 1858. The influence of the Earth's rotation upon the relative motion of bodies near its surface. Astron. J., Vol V. No. 109, 97-100. (PDF file, 548 KB, scans of article pages )
- Phillips, N. A., What Makes the Foucault Pendulum Move among the Stars? Science and Education, Volume 13, Number 7, November 2004, pp. 653-661(9)
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
Last time this page was modified: December 29 2023
2.11.14 fft_filters
Menu Information
Analysis: Signal Processing: FFT Filters
Brief Information
Perform FFT Filtering
Command Line Usage
1. fft_filters Col(2) cutoff:=5;
2. fft_filters Col(2) filter:=bandpass Freq1:=3 Freq2:=3.6;
3. fft_filters Col(2) filter:=threshold threshold:=1;
4. fft_filters Col(2) filter:=bandblock Freq1:=1.1 Freq2:=1.3 oy:= (Col(3), Col(4));
X-Function Execution Options
Please refer to this page for additional option switches when accessing the X-Function from script.
Variables

Input (iy)
    I/O: Input. Default: <active>.
    Specifies the input range.

Filter Type (filter)
    I/O: Input. Data type: int. Default: Low Pass.
    Specifies the type of the filter. Option list:
    • low: Low Pass. Allows only low frequency components to pass.
    • high: High Pass. Allows only high frequency components to pass.
    • bandpass: Band Pass. Allows only frequency components within a specified range to pass.
    • bandblock: Band Block. Allows only frequency components outside a specified range to pass.
    • threshold: Threshold. Allows only frequency components whose amplitudes are larger than the threshold to pass.
    • lowpp: Low Pass Parabolic. A parabolic low-pass filter.

Lower Cutoff Frequency (freq1)
    I/O: Input. Data type: double. Default: 0.
    Available only when the filter type is band pass or band block. Specifies the lower cutoff frequency.

Upper Cutoff Frequency (freq2)
    I/O: Input. Data type: double. Default: -1.
    Available only when the filter type is band pass or band block. Specifies the upper cutoff frequency.

Cutoff Frequency (cutoff)
    I/O: Input. Data type: double. Default: 0.
    Available only when the filter type is low pass or high pass. Specifies the cutoff frequency. If no cutoff frequency is specified ("Auto"), an arbitrary cutoff at 25% or 75% of the frequency range of the raw data is used.

Pass Frequency (pass)
    I/O: Input. Data type: double. Default: -1.
    Available only when the filter type is Low Pass Parabolic. All frequencies below this value are kept unchanged after filtering.

Stop Frequency (stop)
    I/O: Input. Data type: double. Default: -1.
    Available only when the filter type is Low Pass Parabolic. All frequencies above this value are removed completely.

Threshold (threshold)
    I/O: Input. Data type: double. Default: -1.
    Available only when the filter type is threshold. Specifies the amplitude threshold.

Keep DC Offset (offset)
    I/O: Input. Data type: int. Default: 1.
    Available only when the filter type is high pass, band pass or band block. If this option is checked, the DC offset remains unchanged during the filtering.

Output (oy)
    I/O: Output. Data type: XYRange. Default: <new>.
    Specifies the output. See the syntax here.
Filtering is a process of selecting frequency components from a signal. An FFT filter performs filtering by using the Fourier transform to analyze the frequency components in the input signal.
There are six types of filters available in this function: low-pass, high-pass, band-pass, band-block, low-pass parabolic and threshold. The first four types are actually ideal filters. The low-pass
filters block all frequency components above the cutoff frequency, allowing only the low frequency components to pass. High-pass filters are just the opposite: they block frequency components that
are below the cutoff frequency. Band-pass filters only allow frequencies within a specific range determined by the lower and upper cutoff frequencies to pass the filters, while band-block filters
remove all frequencies within the chosen range. The low-pass parabolic filter is different from the ideal low-pass filter in that its window function does not jump abruptly at the cut-off frequency.
Between the pass frequency and the stop frequency, the window function used to select the frequencies looks like a parabolic curve. On the other hand, you can choose to use the threshold filter,
which removes frequencies whose amplitudes are below a specific threshold value.
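The ideal filters described above can be sketched with NumPy's real FFT. This is only an illustration of the technique, not Origin's implementation; the function name `ideal_fft_filter` and its arguments are my own:

```python
import numpy as np

def ideal_fft_filter(y, dt, kind="low", f1=None, f2=None, keep_dc=True):
    """Apply an ideal FFT filter to a real, uniformly sampled signal y.

    kind: "low" (cutoff f1), "high" (cutoff f1), "bandpass" or
    "bandblock" (band f1..f2). keep_dc leaves the zero-frequency
    component untouched for the high/band filters, as described above.
    """
    Y = np.fft.rfft(y)
    f = np.fft.rfftfreq(len(y), d=dt)
    if kind == "low":
        w = (f <= f1)
    elif kind == "high":
        w = (f > f1)
    elif kind == "bandpass":
        w = (f > f1) & (f < f2)
    elif kind == "bandblock":
        w = ~((f > f1) & (f < f2))
    else:
        raise ValueError(kind)
    w = w.astype(float)
    if keep_dc and kind != "low":
        w[0] = 1.0  # first point of the window set to 1 (Keep DC Offset)
    return np.fft.irfft(Y * w, n=len(y))

# Example: a 2 Hz tone plus a 50 Hz tone; a low pass at 10 Hz keeps only 2 Hz.
dt = 1e-3
t = np.arange(0, 1, dt)
y = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = ideal_fft_filter(y, dt, kind="low", f1=10.0)
```

Because the example tones fall on exact FFT bins, the low-pass output recovers the 2 Hz tone essentially exactly; for real data, spectral leakage makes ideal filters less clean, which is one motivation for the parabolic roll-off.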
The dialog of this function allows you to see the real-time preview of the filtered signal when you change the variables. The cutoff frequencies, pass frequency, stop frequency or the threshold can
be selected by dragging the lines on the preview pane.
• To perform low pass filtering with a cutoff frequency of 3 on the XY data in column 2 of the active worksheet, use the script command:
fft_filters iy:=col(2) cutoff:=3
• To apply FFT filtering using a pre-saved theme file, save your preferences in the dialog, and then execute it with the following script command, substituting your own saved theme name:
fft_filters -t "my fft filter theme.oth"
• Code Sample
This script removes the high frequency part of a section of sound.
The sample data is located in exe_path\Samples\Signal Processing\.
1. Import a wav file to a new book.
2. Perform low-pass filters on the data to remove the high frequency part of the sound.
3. Export the data to a new wav file.
// Import the wav file
fnin$ = system.path.program$ + "Samples\Signal Processing\sample.wav";
fnout$ = system.path.program$ + "Samples\Signal Processing\Low-pass sample.wav";
newbook s:=0;
newsheet col:=1;
impWav fnin$;
string bkn$=%H;
// Remove the high frequency part
fft_filters [bkn$]1!col(1) cutoff:=2000 oy:=(<input>,<new name:="Low Frequency of the Sound">);
// Set the new wav column's format to be short(2) type for later export
// Export to a new wav file;
expWav iw:=[bkn$]1! left:=2 fname:=fnout$;
The Fourier transform of the input signal is first computed. Then for low pass, high pass, band pass, band block and low pass parabolic filters, a window (determined by the filter type) is used to
multiply the Fourier transform. If 1 is chosen for the variable Keep DC Offset, the first point of the window is set to 1. For the threshold filter, the power of every frequency component is examined; if it is not larger than the threshold, the corresponding frequency component is discarded. After the frequencies have been altered, a backward (inverse) Fourier transform is applied to obtain the filtered signal.
Window for low pass filter:
Let $f_c$ be the cut-off frequency. The window function can be expressed by:
$w(f) = \begin{cases} 1, & \mbox{if }f \le f_{c} \\ 0, & \mbox{if }f > f_{c} \end{cases}$
Window for high pass filter:
Let $f_c$ be the cut-off frequency. The window function can be expressed by:
$w(f) = \begin{cases} 0, & \mbox{if }f \le f_{c} \\ 1, & \mbox{if }f > f_{c} \end{cases}$
Window for band pass filter:
Let $f_{c1}$ be the lower cutoff frequency and $f_{c2}$ be the upper cutoff frequency. The window function can be expressed by: $w(f) = \begin{cases} 1, & \mbox{if } f_{c1} < f < f_{c2} \\ 0, & \mbox{otherwise} \end{cases}$
Window for band block filter:
Let $f_{c1}$ be the lower cutoff frequency and $f_{c2}$ be the upper cutoff frequency. The window function can be expressed by: $w(f) = \begin{cases} 0, & \mbox{if } f_{c1} < f < f_{c2} \\ 1, & \mbox{otherwise} \end{cases}$
Window for low-pass parabolic filter:
Let $f_{c1}$ be the pass frequency and $f_{c2}$ be the stop frequency. The window function can be expressed by: $w(f) = \begin{cases} 1, & \mbox{if } f \le f_{c1} \\ 1-\frac{(f-f_{c1})^2}{(f_{c2}-f_{c1})^2}, & \mbox{if } f_{c1} < f < f_{c2} \\ 0, & \mbox{if } f \ge f_{c2} \end{cases}$
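The parabolic window can be transcribed directly (the function name `parabolic_window` is mine, not part of the X-Function):

```python
def parabolic_window(f, fc1, fc2):
    """Low-pass parabolic window: 1 below the pass frequency fc1,
    a parabolic roll-off between fc1 and the stop frequency fc2,
    and 0 above fc2."""
    if f <= fc1:
        return 1.0
    if f >= fc2:
        return 0.0
    return 1.0 - (f - fc1) ** 2 / (fc2 - fc1) ** 2
```

Note that the window is continuous at both ends: it equals 1 at the pass frequency and falls to exactly 0 at the stop frequency, unlike the ideal filters, which jump abruptly at the cutoff.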
Related X-Functions
fft1, ifft1
Keywords: fourier, fft, window, band, block, threshold
21st May 2024 at 9:42pm
How to handle uncertainty in the analysis of complex situations? How to juggle a constellation of alternative hypotheses? How to derive actionable advice to offer a harried decisionmaker?
Technologists and software developers, bless them, don't have the area expertise to build the tools that the specialist-analyst needs — so the analyst had better learn to talk to the technologist.
Epistemology — the study of knowledge itself — is the true foundation of powerful information-handling tools. Substantive experts don't need to know the mathematics of uncertainty, but they do need
to at least speak some of the language. Then they can guide the toolsmith-methodologist toward useful action. A basic vocabulary of these ideas is also a splendid way to lift a discussion up from the
"yes it is" / "no it isn't" level to a higher-dimensional space — where all sides can see how the uncertainties and alternatives balance out. Moreover, knowing something about the underpinnings of
knowledge gives the thinker a new stock of metaphors ... and thereby enhances the ability to formulate and solve tough problems.
There are a host of disciplines that help wrestle down uncertainty, including:
• Probability & Statistics — discrete & continuous probability distributions, means, standard deviations, dependent & independent variables, conditional probabilities, and error propagation
• Combinatorics — permutations, combinations, multivariate experiment design, clustering & similarity metrics, and heuristics for scenario generation
• Logic — deduction, induction, syllogisms, fuzzy logic, logic programming, and idea-mapping techniques to assist in structured argumentation
• Inverse Methods — reverse-engineering, matrix inversion, back-propagation, etc.; see Bypasses by Z. A. Melzak (enlightening reading on these and related topics at the graduate level ... but
anyone should feel free to skip the equations and explore Melzak's ideas of metaphor and transformation in literature and language)
• Curve-fitting — data modeling, error estimation, and key variable identification; see Mathematical Methods That [Usually] Work by Forman J. Acton (ideas on numerical methods, readable at the
advanced undergraduate level, but with many nonmathematical parables of universal relevance)
• Noise & Random Perturbations — power spectra, correlation functions, and pattern discovery in unclean data
• Game Theory — from rock-paper-scissors to Mutual Assured Destruction (MAD), two-person zero-sum & beyond, prisoners dilemmas, minimax, etc.; see The Compleat Strategyst by John Williams (highly
entertaining, with many stories and funny illustrations; needs only high-school math or less)
• Information Theory — bits of data, entropy, & evidence; see The Recursive Universe by William Poundstone (fascinating popular-level exposition, with chapters on cellular automata,
self-reproducing systems, and deep concepts of information)
• Systems Analysis — sources, sinks, valves, delays, positive & negative feedback loops, attractors & instabilities, critical paths & chokepoints; see The Fifth Discipline by Peter Senge (a
self-improvement and applied math short-course disguised as a business book ... powerful and important concepts presented in engaging fashion)
This quick tour of the epistemological engine-room will help a captain pilot the ship of thought with more precision. A little understanding of how the machinery works will also assist in diagnosis
and recovery when things go awry. The bottom line: clearer thinking about complex issues.
(The above snapshot of a work-in-progress is derived from a talk I gave today to a small class. More to come!)
Thursday, August 10, 2000 at 21:17:01 (EDT) = 2000-08-10
TopicThinking - TopicScience - TopicPhilosophy - TopicOrganizations
(correlates: Twitter Poetry, MinimaxStrategy, BooksToConsider, ...) | {"url":"https://zhurnaly.com/z/EpistemologicalEnginerooms.html","timestamp":"2024-11-11T17:13:21Z","content_type":"text/html","content_length":"7037","record_id":"<urn:uuid:d7895543-100e-4ee5-9b3c-de0dd075ba36>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00638.warc.gz"} |
How many grams of chlorine can be liberated from the decomposition of 64.0g of AuCl3 by the following reaction:
2 AuCl3 --> 2 Au + 3Cl2? | HIX Tutor
Answer 1
Who cares about the chlorine? Grab the GOLD!
The equation is in front of you:
#2AuCl_3 rarr 2Au + 3Cl_2# (in practice the reduction would probably be done electrochemically). There are #(64.0*g)/(303.33*g*mol^(-1))# #=# #0.211# #mol# #AuCl_3#. By the stoichiometry of the reaction, you
know that #3/2# moles of chlorine (#Cl_2#) result from each mole of #AuCl_3#. So #0.316# moles of #Cl_2# will result. How many grams does this represent if the molar mass of molecular chlorine is #
70.906*g*mol^(-1)#? Approx. #22.4# #g#.
A more realistic question would ask you to calculate the volume of such #Cl_2# at standard temperature and pressure.
This seems like a very costly way to produce chlorine. I've seen people discard the mixture down the sink after they're done.
Answer 2
To find out how many grams of chlorine can be liberated from the decomposition of 64.0g of AuCl3, we need to use stoichiometry.
1. Calculate the molar mass of AuCl3:
   Au: 197.0 g/mol
   Cl: 35.45 g/mol × 3 = 106.35 g/mol
   Total molar mass = 197.0 g/mol + 106.35 g/mol = 303.35 g/mol
2. Determine the number of moles of AuCl3:
   Number of moles = Mass / Molar mass = 64.0 g / 303.35 g/mol ≈ 0.211 moles
3. Use the stoichiometric coefficients from the balanced equation to find the moles of Cl2 produced:
   From the balanced equation, 2 moles of AuCl3 produce 3 moles of Cl2, so 0.211 moles of AuCl3 will produce (0.211 moles × 3 moles Cl2) / 2 moles AuCl3 = 0.3165 moles of Cl2.
4. Convert moles of Cl2 to grams:
   Mass = Number of moles × Molar mass of Cl2 = 0.3165 moles × 70.90 g/mol ≈ 22.4 grams
Therefore, approximately 22.4 grams of chlorine can be liberated from the decomposition of 64.0 grams of AuCl3.
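The arithmetic in the steps above is easy to check programmatically. This short sketch mirrors the calculation, using the same molar masses as the answer:

```python
# Stoichiometry of 2 AuCl3 -> 2 Au + 3 Cl2, mirroring the steps above.
M_Au, M_Cl = 197.0, 35.45          # g/mol, values used in the answer
M_AuCl3 = M_Au + 3 * M_Cl          # 303.35 g/mol
M_Cl2 = 2 * M_Cl                   # 70.90 g/mol

mass_AuCl3 = 64.0                  # g of AuCl3 decomposed
mol_AuCl3 = mass_AuCl3 / M_AuCl3   # ~0.211 mol
mol_Cl2 = mol_AuCl3 * 3 / 2        # 3 mol Cl2 per 2 mol AuCl3
mass_Cl2 = mol_Cl2 * M_Cl2         # ~22.4 g
print(round(mass_Cl2, 1))          # 22.4
```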
| {"url":"https://tutor.hix.ai/question/how-many-grams-of-chlorine-can-be-liberated-from-the-decomposition-of-64-0g-of-a-8f9af83ae4","timestamp":"2024-11-07T23:14:13Z","content_type":"text/html","content_length":"584422","record_id":"<urn:uuid:892cf0cc-8ad4-436a-b775-64562bd14626>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00669.warc.gz"}
Information theory
Liquid state machine computation
Modern information processing rests on the Turing machine: information is first digitized into a stream of bits, then processed in computers or communication networks. The Turing machine has already proved the extent and ubiquity of its power, and has enabled the prodigious leap into the ongoing information era. However, the best-performing computer we know, the human brain, is not a Turing machine. Its processing capabilities can be approximated with the paradigm of the Liquid State Machine (LSM), whose archetype is the plane surface of a quiet volume of liquid. If an object is thrown into a quiet lake from one edge, "an Intelligence" (in the sense of Laplace, i.e., with infinite computational power) located at the other edge could analyze the induced surface waves and extract information about the object in question: its volume, shape, velocity, etc. This kind of processing is performed in real time, and it does not involve any storage of information; moreover, the liquid comes back to its initial quiet state after the excitation. This type of computing is strikingly similar to the operating mode of the brains of living beings, which process information in real time from external stimuli.
As displayed by the figure, the "liquid" is considered as a reservoir, yielding a given output depending on the input data. The reservoir can therefore be "trained" to perform a given task (mainly pattern recognition) by optimizing the output relative to the expected result. LSMs have been extensively investigated in the cognitive science and neuronal systems communities. We are currently involved at the FEMTO-ST Institute in research activities whose objective is to achieve ultra-fast LSM computation with optoelectronic systems.
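A toy software analogue of the reservoir idea can make this concrete. The sketch below uses an echo-state-style random tanh network in place of a real liquid or of FEMTO-ST's optoelectronic hardware; the reservoir size and the delayed-recall task are illustrative assumptions, not details of the actual systems:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reservoir: a fixed random recurrent tanh network stands in for the "liquid".
N = 100
W_in = rng.uniform(-0.5, 0.5, N)                 # input weights
W = rng.uniform(-0.5, 0.5, (N, N))               # recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale so perturbations fade (echo-state property)

def run_reservoir(u):
    """Drive the reservoir with the scalar input sequence u; collect its states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in * u_t + W @ x)
        states[t] = x
    return states

# Task: recall the input from two steps ago (a short-term-memory benchmark).
u = rng.uniform(-1, 1, 500)
target = np.roll(u, 2)
X = run_reservoir(u)

# Only the linear readout is trained (ridge regression); the reservoir itself
# stays fixed, exactly as in liquid-state / reservoir computing.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ target)
mse = np.mean((X @ W_out - target)[10:] ** 2)
```

The key design point, shared with the lake analogy above, is that nothing inside the reservoir is adjusted: the rich transient response does the work, and training touches only the readout.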
Random number generation
Chaotic signals can serve as pseudo-random numbers because they are deterministic outputs of nonlinear functions that can nevertheless satisfy some stringent statistical properties. These (pseudo-)random numbers are useful in several applications ranging from real-time random sampling to hardware cryptographic applications. Generating (tens of) billions of such pseudo-random numbers is in fact a very difficult task. Achieving this objective by way of photonic solutions is the focus of dedicated research activities in several laboratories worldwide.
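A deliberately simple software illustration of the principle: a deterministic chaotic map, thresholded, yields a well-balanced bit stream. The logistic map here is only a classic textbook stand-in for the optoelectronic chaos described below, which plays the same role at multi-GHz bandwidths:

```python
# Pseudo-random bits from a deterministic chaotic system.
# The logistic map at r = 4 is fully chaotic; thresholding its orbit
# at 0.5 converts the analogue signal into a bit stream.
def chaotic_bits(n, x0=0.123456789):
    x, bits = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)       # one step of the chaotic map
        bits.append(1 if x > 0.5 else 0)
    return bits

bits = chaotic_bits(100_000)
balance = sum(bits) / len(bits)       # close to 0.5 for a well-balanced source
```

Passing real statistical test suites takes far more than a balanced mean, which is why dedicated research effort goes into the hardware sources.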
We are currently exploring the idea of generating pseudo-random numbers with optoelectronic systems. As shown in the figure, the output of these systems is a continuous chaotic (and thus pseudo-random) signal whose probability density function converges towards a Gaussian curve. The bandwidth of such signals can be as high as 10 GHz, and nothing theoretically prevents an increase to significantly higher frequencies. | {"url":"https://members.femto-st.fr/laurent_larger/en/information-theory","timestamp":"2024-11-01T22:07:14Z","content_type":"application/xhtml+xml","content_length":"20417","record_id":"<urn:uuid:d20df5bd-4395-46bd-a498-aa64e94e4571>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00877.warc.gz"}
[Solved] Determine the values of the given trigono | SolutionInn
Determine the values of the given trigonometric functions directly on a calculator. The angles are approximate.
tan 0.8035
Step by Step Answer:
You may easily calculate the value of the angle's tangent on a calculator set to radian mode: tan 0.8035 ≈ 1.0369.
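Since the angle carries no degree symbol, it is read in radians, the usual textbook convention for such exercises. A quick check, assuming a radian-mode evaluation:

```python
import math

# tan of 0.8035 rad, as a calculator in radian mode would report it
value = math.tan(0.8035)
print(round(value, 4))   # 1.0369
```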
| {"url":"https://www.solutioninn.com/study-help/basic-technical-mathematics/determine-the-values-of-the-given-trigonometric-functions-directly-on-a-calculator-the-angles-are-approximate-tan-08035","timestamp":"2024-11-03T21:52:28Z","content_type":"text/html","content_length":"79039","record_id":"<urn:uuid:ac5c5c0f-c708-4d83-b481-bfc596776ffd>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00212.warc.gz"}