Revisiting the AA-DD Model in Zero Lower Bound
1. Introduction
The AA-DD model of Krugman, Obstfeld, & Melitz (2012: p. 453) summarises three markets: the foreign exchange market, the money market, and the market for goods and services. The main aim of the AA-DD model is to analyse government and central bank policies using two simple curves. The first curve (named the AA curve), which represents the asset-market equilibria, summarises the money market and the foreign exchange market. The second curve (the DD curve) represents the goods-market equilibria. At the intersection of the two curves, each of the three markets is in equilibrium (Krugman, Obstfeld, & Melitz, 2012: p. 453).
In existing AA-DD models, the ZLB interest rate implies a fixed exchange rate (Krugman et al., 2012: p. 453).^1 The fixed exchange rate leads to a larger output effect of an increase in government spending than in normal periods. Thus, existing AA-DD models are not consistent with predictions provided by recent dynamic stochastic general equilibrium (DSGE) models in open economies.
^2 For example, using a DSGE model in open economies, Mao Takongmo (2017) shows that increases in government spending during the ZLB period increase aggregate demand, which leads to an appreciation of the real exchange rate greater than that in normal periods. The appreciation of the real exchange rate then dampens the effect of government spending on total production. The AA-DD model is widely taught in fourth-year courses in economics departments of almost all universities, in part because the intuitions that drive the results are simple and understandable. The AA-DD model is also used by many central banks and governments. It is therefore important that the model provide correct predictions.
During the 2007 financial crisis and the recession that followed, the nominal interest rate reached a lower bound and remained at a very low level for a long period of time (called the zero lower
bound [ZLB] period). During the ZLB period, central banks lost their conventional monetary policy, which consisted of lowering the nominal interest rate to increase output.^3 As a result, governments
of many countries, including the United States, started to increase government spending to boost output.^4
Using US data, Boskin (2012) provides empirical evidence that the government spending policy did not work in the United States. Boskin observes that the increase in debt exceeded the improvement in
total production during the ZLB period. Mao Takongmo & Lebihan (2021) use data to analyze the role the real exchange rate plays in the Granger causality measure between government spending and gross
domestic product (GDP) in the United States during the zero lower bound (ZLB) period. Mao Takongmo & Lebihan (2021) show that causality measures between government spending and GDP are larger and
persistent in the ZLB period, but only if the exchange rate is not taken into account. When the exchange rate is taken into account, the Granger causality measure becomes very small and not persistent.
It is important to note that, before observing data, many policymakers usually base their decisions on the widely taught AA-DD model. Existing AA-DD model predictions can be misleading and may not be
consistent with the data. Moreover, the AA-DD model predictions are not consistent with the recent DSGE literature on the ZLB in open economies. It is therefore very important to revisit the model in ZLB periods.
In ZLB periods, agents usually take into account the expected near-future interest rate when making their decisions. In this paper, we propose a theoretical model that allows the expected near-future interest rate to be endogenous. Our proposed model no longer implies a fixed exchange rate and predicts that the output effect of an increase in government spending is lower than that implied by the existing AA-DD model, because of an appreciation in the exchange rate.
Unlike the current AA-DD model, our new AA-DD model is consistent with recent results from the literature that highlights the negative effect of the exchange rate on the government spending
multiplier in the ZLB (see, for example, Mao Takongmo, 2017; Mao Takongmo & Lebihan, 2021).
The remainder of the paper is organized as follows. Section 2 presents the model and the resulting government spending multiplier (GSM) in ZLB periods. The classical model, the classical GSM, and an analytical comparison between our GSM and the classical GSM in the ZLB are presented in Section 3. Section 4 focuses on a graphical comparison between the two results. The final section concludes the paper.
2. The New Model in ZLB
2.1. The New Foreign Exchange Market
Interaction between buyers and sellers of foreign currency bank deposits is assumed to determine the exchange rate in the foreign exchange market. There are two countries: our country, which uses the U.S. dollar ($), and the rest of the world, which uses the euro (€). There are two periods: the current ZLB period and the medium period. The interest rate in the current period, ${\stackrel{¯}{r}}_{ZLB}$ , is assumed to be exogenous and fixed. The interest rate in the medium period, ${r}_{M}^{e}$ , is endogenous. The nominal interest rates in the current period and in the medium period abroad, denoted by ${\stackrel{¯}{r}}_{ZLB}^{*}$ and ${r}^{e*}$ , respectively, are both exogenous. The nominal exchange rate, ${E}_{\text{/}€}$ , is defined as the price of one euro in terms of U.S. dollars. The exchange rates in the current period and in the next period are denoted by ${E}_{ZLB}$ and ${E}_{M}^{e}$ , respectively. The expected exchange rate is assumed to be exogenous.
Definition 1. The foreign exchange market is said to be in equilibrium when U.S. dollar deposits and euro deposits, made at the beginning of the first period, offer the same expected value at the end of the second period. The condition for equilibrium in the foreign exchange market is called the new interest parity condition in ZLB periods.
Proposition 1. The new interest rate parity condition is ${r}_{M}^{e}=\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB\left(\text{/}€\right)}}-1$ .
Proof. The expected value of a $1 deposit in U.S. dollars is $1×\left(1+{\stackrel{¯}{r}}_{ZLB}\right)\left(1+{r}_{M}^{e}\right)$ . The expected U.S. dollar value of a $1 deposit in euros is $\frac{{E}_{M\left(\text{/}€\right)}^{e}}{{E}_{ZLB\left(\text{/}€\right)}}\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right)$ .
The foreign exchange market is in equilibrium if $\left(1+{\stackrel{¯}{r}}_{ZLB}\right)\left(1+{r}_{M}^{e}\right)=\frac{{E}_{M\left(\text{/}€\right)}^{e}}{{E}_{ZLB\left(\text{/}€\right)}}\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right)$ . Thus the new interest parity condition is ${r}_{M}^{e}=\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB\left(\text{/}€\right)}}-1$ . □
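The parity condition can be checked numerically. The sketch below (all numbers are illustrative assumptions, not values from the paper) solves for the endogenous medium-period rate and verifies that both deposits then earn the same expected two-period dollar value:

```python
def expected_medium_rate(r_zlb, r_zlb_star, r_e_star, e_zlb, e_m_e):
    """Endogenous medium-period home rate implied by the new interest parity:
    r_M^e = (1 + r*_ZLB)(1 + r^e*) E^e_M / ((1 + r_ZLB) E_ZLB) - 1."""
    return (1 + r_zlb_star) * (1 + r_e_star) * e_m_e / ((1 + r_zlb) * e_zlb) - 1

# Illustrative values (assumed)
r_zlb, r_zlb_star, r_e_star = 0.001, 0.005, 0.02
e_zlb, e_m_e = 1.10, 1.15  # current and expected $/euro exchange rates

r_m_e = expected_medium_rate(r_zlb, r_zlb_star, r_e_star, e_zlb, e_m_e)

# Both deposits must offer the same expected two-period dollar value
home = (1 + r_zlb) * (1 + r_m_e)
foreign = (e_m_e / e_zlb) * (1 + r_zlb_star) * (1 + r_e_star)
assert abs(home - foreign) < 1e-12
```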
2.2. The New Equilibrium in the Money Market
Other things being equal, people prefer assets that offer higher expected returns. Since the current interest rate is fixed at its ZLB value, the expected return increases when the expected interest rate increases. Because a rise in the expected interest rate raises the expected rate of return on less liquid assets relative to the rate of return on money, agents will want to hold more of their wealth in non-money assets that pay the expected interest rate and less in the form of money. Thus, a rise in the expected interest rate, ${r}_{M}^{e}$ , causes the demand for money, L, to fall.
We also assume that agents hold money to avoid the costs of barter trade (see Hicks, 1937; Mundell, 1963; Baumol, 1952; Rogoff, 1985 for details). The demand for money, L, is then assumed to be an increasing function of output, Y. Let ${M}^{s}$ represent the nominal money supply, and P the price level. Equilibrium in the money market is achieved when the aggregate real money supply, $\frac{{M}^{s}}{P}$ , is equal to the aggregate real money demand. That is
$\frac{{M}^{s}}{P}=L\left({\stackrel{¯}{r}}_{ZLB},{r}_{M}^{e},Y\right)$ ,
with the assumptions that $\frac{\partial L}{\partial r}<0,\text{\hspace{0.17em}}\frac{\partial L}{\partial {r}_{M}^{e}}<0$ and $\frac{\partial L}{\partial Y}>0$ .
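The paper leaves L general; a hypothetical linear functional form satisfying exactly these sign restrictions (purely illustrative) is:

```python
def money_demand(r, r_m_e, y, a=0.5, b=0.5, c=1.0):
    # Illustrative money demand L(r, r_M^e, Y): decreasing in both the
    # current and expected interest rates, increasing in output.
    return c * y - a * r - b * r_m_e

# Sign checks matching the assumptions above
assert money_demand(0.02, 0.05, 10) < money_demand(0.01, 0.05, 10)  # dL/dr < 0
assert money_demand(0.01, 0.06, 10) < money_demand(0.01, 0.05, 10)  # dL/dr_M^e < 0
assert money_demand(0.01, 0.05, 11) > money_demand(0.01, 0.05, 10)  # dL/dY > 0
```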
2.3. Equilibrium in the Goods Market
The aggregate demand is the sum of consumption demand (C), investment demand (I), government spending demand (G), and net export demand (NX). Consumption demand is assumed to be an increasing function of disposable income, ${Y}^{d}=Y-T$ . That is, $\frac{\partial C}{\partial {Y}^{d}}>0$ . We assume that net exports are a function of the real exchange rate.^5 We assume that a depreciation of the domestic currency leads to an increase in net exports ( $\frac{\partial NX}{\partial \mathcal{E}}>0$ ).^6 Government spending (G) and taxes (T) are assumed exogenous. For simplicity, investment (I) is assumed to be a function of the current ZLB interest rate and is therefore fixed. By definition, equilibrium is attained when aggregate output is equal to the aggregate demand for goods and services. That is
$Y=C\left(Y-T\right)+I\left({\stackrel{¯}{r}}_{ZLB}\right)+G+NX\left(\frac{{P}^{*}}{P}E\right)$ ,
where ${P}^{*}$ represents the price index abroad. Assumptions A summarise the assumptions presented in this section.
Assumptions A
1). $\frac{\partial L}{\partial r}<0;\text{\hspace{0.17em}}\frac{\partial L}{\partial {r}_{M}^{e}}<0$ and $\frac{\partial L}{\partial Y}>0$ .
2). $\frac{\partial NX}{\partial E}>0;\text{\hspace{0.17em}}\frac{\partial C}{\partial {Y}^{d}}>0$ .
Definition 2. The new government spending multiplier in the ZLB (New GSM[ZLB]) is defined as the change in aggregate output, Y, generated by a one-unit change in government spending, when the new interest rate parity condition holds and the money market, as well as the market for goods and services, are both in equilibrium.
Proposition 2. The new government spending multiplier in the zero lower bound is equal to
$New\text{\hspace{0.17em}}GS{M}_{ZLB}=\frac{\text{1}}{1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\left(\frac{\partial L}{\partial Y}\right)\left(1+{\stackrel{¯}{r}}
_{ZLB}\right){E}_{ZLB}^{2}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}\right)}$
Proof. At the equilibrium of all markets, we have, $\frac{{M}^{s}}{P}=L\left({\stackrel{¯}{r}}_{ZLB},{r}_{M}^{e},Y\right)$ , ${r}_{M}^{e}=\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^
{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB\left(\text{/}€\right)}}-1$ and $Y=C\left(Y-T\right)+I\left({\stackrel{¯}{r}}_{ZLB}\right)+G+NX\left(\frac
{{P}^{*}}{P}E\right)$ .
The first two equations lead to
$\frac{{M}^{s}}{P}=L\left({\stackrel{¯}{r}}_{ZLB},\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB\left(\text{/}€\right)}}-1,Y\right)$(4)
Applying the total derivative on both sides of Equation (4) leads to
$\frac{\text{d}{M}^{s}}{P}=\text{d}L=\frac{\partial L}{\partial r}\text{d}{\stackrel{¯}{r}}_{ZLB}+\frac{\partial L}{\partial {r}_{M}^{e}}\text{d}{r}_{M}^{e}+\frac{\partial L}{\partial Y}\text{d}Y$ .
Since $\text{d}{M}^{s}=0$ and $\text{d}{\stackrel{¯}{r}}_{ZLB}=0$ , we have
$0=\frac{\partial L}{\partial {r}_{M}^{e}}\text{d}{r}_{M}^{e}+\frac{\partial L}{\partial Y}\text{d}Y.$(5)
From the interest parity condition, $\text{d}{r}_{M}^{e}=-\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}\text{d}{E}_{ZLB}$ . By replacing $\text{d}{r}_{M}^{e}$ in Equation (5) we obtain
$\frac{\text{d}{E}_{ZLB}}{\text{d}Y}=\frac{\frac{\partial L}{\partial Y}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}}$(6)
We also have
$Y=C\left(Y-T\right)+I\left({\stackrel{¯}{r}}_{ZLB}\right)+G+NX\left(\frac{{P}^{*}}{P}E\right)$(7)
Applying the total derivative on both sides of Equation (7) leads to $\text{d}Y=\text{d}C+\text{d}I+\text{d}G+\text{d}NX$ . Since $\text{d}I=0$ , we have $\text{d}Y=\text{d}C+\text{d}G+\text{d}NX$ .
Note that $\text{d}NX=\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\text{d}{E}_{ZLB}$ . Thus,
$1=\frac{\text{d}C}{\text{d}Y}+\frac{\text{d}G}{\text{d}Y}+\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\frac{\text{d}{E}_{ZLB}}{\text{d}Y}$ (after dividing both sides by $\text{d}Y$ )(9)
Replacing $\frac{\text{d}{E}_{ZLB}}{\text{d}Y}$ in Equation (9) by its expression in Equation (6) leads to
$1=\frac{\text{d}C}{\text{d}Y}+\frac{\text{d}G}{\text{d}Y}+\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\frac{\partial L}{\partial Y}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}}\right)$
$\frac{\text{d}G}{\text{d}Y}=1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\frac{\partial L}{\partial Y}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}}\right)$
$\frac{1}{\left(\frac{\text{d}Y}{\text{d}G}\right)}=1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\frac{\partial L}{\partial Y}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}}\right)$
$\begin{array}{c}\frac{\text{d}Y}{\text{d}G}=\frac{1}{1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\frac{\partial L}{\partial Y}}{\left(\frac{\partial L}{\partial {r}
_{M}^{e}}\right)\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}}\right)}\\ =\frac{1}{1-\
frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\left(\frac{\partial L}{\partial Y}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}{\left(\frac{\partial L}{\
partial {r}_{M}^{e}}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}\right)}.\end{array}$
This concludes the proof. □
3. The Classical Government Spending Multiplier in ZLB
3.1. The Classical Equilibrium in the Money Market in ZLB
The classical equilibrium in the money market can be written without the expected interest rate as
$\frac{{M}^{s}}{P}=L\left({\stackrel{¯}{r}}_{ZLB},Y\right)$ .
3.2. The Classical Interest Rate Parity in ZLB
The classical interest rate parity is
$\left(1+{\stackrel{¯}{r}}_{ZLB}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right)=\frac{{E}_{M\left(\text{/}€\right)}^{e}}{{E}_{ZLB\left(\text{/}€\right)}}\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)$ .
In fact, the return on a $1 deposit in our country is $1×\left(1+{\stackrel{¯}{r}}_{ZLB}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right)$ . The return on a $1 deposit abroad, in U.S. dollar units, is $\frac{{E}_{M\left(\text{/}€\right)}^{e}}{{E}_{ZLB\left(\text{/}€\right)}}\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)$ . The classical interest rate parity is then
$\left(1+{\stackrel{¯}{r}}_{ZLB}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right)=\frac{{E}_{M\left(\text{/}€\right)}^{e}}{{E}_{ZLB\left(\text{/}€\right)}}\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)$ .
The interest rate parity in the classical case implies that the nominal exchange rate is fixed in ZLB periods, since the ZLB nominal interest rate is fixed.
3.3. The Classical Equilibrium in the Goods Market
In the goods market, equilibrium is the same as in the previous section and is
$Y=C\left(Y-T\right)+I\left({\stackrel{¯}{r}}_{ZLB}\right)+G+NX\left(\frac{{P}^{*}}{P}E\right)$(13)
Definition 3. The classical government spending multiplier in the ZLB (Classical GSM[ZLB]) is defined as the change in aggregate output, Y, generated by a one-unit change in government spending, when the classical interest rate parity condition holds and the classical money market and the classical market for goods and services are both in equilibrium.
Proposition 3. The classical government spending multiplier is equal to
$Classical\text{\hspace{0.17em}}GS{M}_{ZLB}=\frac{1}{1-\frac{\text{d}C}{\text{d}Y}}$
Proof. The classical interest parity is $\left(1+{\stackrel{¯}{r}}_{ZLB}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right)=\frac{{E}_{M\left(\text{/}€\right)}^{e}}{{E}_{ZLB\left(\text{/}€\right)}}\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)$ .
In the zero lower-bound period, since the interest rate is fixed, the exchange rate should be fixed. This means that equilibrium in the money market
will just guarantee the fixed interest rate. By taking the total differential in both sides of the Equation (13), we have
$\text{d}Y=\text{d}C+\text{d}I\left({\stackrel{¯}{r}}_{ZLB}\right)+\text{d}G+\text{d}NX\left(\frac{{P}^{*}}{P}E\right)=\text{d}C+\text{d}G$ (since the exchange rate is fixed and $\text{d}I=0$ ). Dividing by $\text{d}Y$ gives $1=\frac{\text{d}C}{\text{d}Y}+\frac{\text{d}G}{\text{d}Y}$ , and therefore $\frac{\text{d}Y}{\text{d}G}=\frac{1}{1-\frac{\text{d}C}{\text{d}Y}}$ .
This concludes the proof. □
Proposition 4. If Assumptions A hold, the new government spending multiplier (New GSM[ZLB]) will be lower than the classical government spending multiplier (Classical GSM[ZLB]) in the zero lower bound period, with
$New\text{}GS{M}_{ZLB}=\frac{\text{1}}{1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\left(\frac{\partial L}{\partial Y}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(/€\right)}^{e}}\right)}<\frac{1}{1-\frac{\text{d}C}{\text{d}Y}}=Classical\text{\hspace{0.17em}}GS{M}_{ZLB}$ .
Proof. By assumptions A, $\frac{\partial L}{\partial {r}_{M}^{e}}<0;\text{\hspace{0.17em}}\frac{\partial L}{\partial Y}>0$ and $\frac{\partial NX}{\partial E}>0$ .
$\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\left(\frac{\partial L}{\partial Y}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}\right)<0$ . Thus
$1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\left(\frac{\partial L}{\partial Y}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}{\left(\frac{\partial L}{\
partial {r}_{M}^{e}}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}\right)>1-\frac{\text{d}C}{\text{d}Y}$
$\begin{array}{c}New\text{}GS{M}_{ZLB}=\frac{\text{1}}{1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\left(\frac{\partial L}{\partial Y}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(/€\right)}^{e}}\right)}\\ <\frac{1}{1-\frac{\text{d}C}{\text{d}Y}}=Classical\text{\hspace{0.17em}}GS{M}_{ZLB}.\end{array}$
This concludes the proof. □
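Proposition 4 can be illustrated numerically. The sketch below plugs assumed values for the behavioral derivatives (chosen only to satisfy Assumptions A; none are estimates from the paper) into both multipliers:

```python
def new_gsm(dC_dY, dNX_dE, dL_dY, dL_dre, r_zlb, r_zlb_star, r_e_star, e_zlb, e_m_e):
    # New GSM_ZLB from Proposition 2
    term = dNX_dE * (dL_dY * (1 + r_zlb) * e_zlb**2) / (
        dL_dre * (1 + r_zlb_star) * (1 + r_e_star) * e_m_e)
    return 1.0 / (1.0 - dC_dY - term)

def classical_gsm(dC_dY):
    # Classical GSM_ZLB = 1 / (1 - MPC)
    return 1.0 / (1.0 - dC_dY)

# Assumed values consistent with Assumptions A (dL/dr_M^e < 0, dL/dY > 0, dNX/dE > 0)
new = new_gsm(dC_dY=0.6, dNX_dE=0.5, dL_dY=0.4, dL_dre=-2.0,
              r_zlb=0.001, r_zlb_star=0.005, r_e_star=0.02, e_zlb=1.10, e_m_e=1.15)
classical = classical_gsm(0.6)
assert 0 < new < classical  # the appreciation channel dampens the multiplier
```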
4. Graphical Illustration: AA and DD Schedule in Zero Lower Bound
4.1. The Market of Goods and Services: The DD Schedule
The DD schedule is the relationship between exchange rates and output at which the output market is in equilibrium. In this paper the DD schedule is similar to the one proposed by Krugman, Obstfeld, & Melitz (2012: p. 429). The equation representing the DD schedule is
$Y=C\left(Y-T\right)+I\left({\stackrel{¯}{r}}_{ZLB}\right)+G+NX\left(\frac{{P}^{*}}{P}E\right)$ .
An increase in E is associated with an increase in NX and therefore an increase in Y: the DD curve is upward sloping.
4.2. The Asset Market: The New AA Schedule
The AA schedule is defined as the relationship between exchange rates, E[ZLB], and output, Y, at which the market of money and the foreign exchange market are both in equilibrium.
The new equilibrium in the money market is represented by Equation (15), $\frac{{M}^{s}}{P}=L\left({\stackrel{¯}{r}}_{ZLB},{r}_{M}^{e},Y\right)$ , and the equilibrium in the foreign exchange market by Equation (16), ${r}_{M}^{e}=\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB\left(\text{/}€\right)}}-1$ .
Proposition 5. The new AA curve in the ZLB is downward sloping. The derivative of E[ZLB] with respect to Y can be written as
$\frac{\text{d}{E}_{ZLB}}{\text{d}Y}=\frac{\frac{\partial L}{\partial Y}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}}$ .
Proof. Replacing ${r}_{M}^{e}$ from Equation (16) in Equation (15) leads to
$\frac{{M}^{s}}{P}=L\left({\stackrel{¯}{r}}_{ZLB},\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB\left(\text{/}€\right)}}-1,Y\right)$(17)
Taking the total derivative gives
$\frac{\text{d}{M}^{s}}{P}=\text{d}L=\frac{\partial L}{\partial r}\text{d}{\stackrel{¯}{r}}_{ZLB}+\frac{\partial L}{\partial {r}_{M}^{e}}\text{d}{r}_{M}^{e}+\frac{\partial L}{\partial Y}\text{d}Y.$
The money supply is fixed, thus $\text{d}{M}^{s}=0$ . The interest rate is fixed, thus $\text{d}{\stackrel{¯}{r}}_{ZLB}=0$ , and
$0=\frac{\partial L}{\partial {r}_{M}^{e}}\text{d}{r}_{M}^{e}+\frac{\partial L}{\partial Y}\text{d}Y.$(18)
Taking the derivative of ${r}_{M}^{e}$ with respect to ${E}_{ZLB}$ using Equation (16) leads to
$\text{d}{r}_{M}^{e}=-\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}\text{d}{E}_{ZLB}$ .
Equation (18) becomes $0=-\frac{\partial L}{\partial {r}_{M}^{e}}\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_
{ZLB}\right){E}_{ZLB}^{2}}\text{d}{E}_{ZLB}+\frac{\partial L}{\partial Y}\text{d}Y$
$\frac{\text{d}{E}_{ZLB}}{\text{d}Y}=\frac{\frac{\partial L}{\partial Y}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\frac{\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}{\left(1+{\stackrel{¯}{r}}_{ZLB}\right){E}_{ZLB}^{2}}}$(19)
$\frac{\partial L}{\partial Y}>0$ and $\frac{\partial L}{\partial {r}_{M}^{e}}<0$ by Assumptions A. In addition, all other variables are positive. Equation (19) then shows that $\frac{\text{d}{E}_{ZLB}}{\text{d}Y}<0$ . Thus the new AA curve in the ZLB is downward sloping. □
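A quick numerical check of the slope sign under the same sign assumptions (all parameter values below are illustrative):

```python
def aa_slope(dL_dY, dL_dre, r_zlb, r_zlb_star, r_e_star, e_zlb, e_m_e):
    # dE_ZLB/dY along the new AA curve, Equation (19)
    k = (1 + r_zlb_star) * (1 + r_e_star) * e_m_e / ((1 + r_zlb) * e_zlb**2)
    return dL_dY / (dL_dre * k)

slope = aa_slope(dL_dY=0.4, dL_dre=-2.0,
                 r_zlb=0.001, r_zlb_star=0.005, r_e_star=0.02,
                 e_zlb=1.10, e_m_e=1.15)
assert slope < 0  # the new AA curve is downward sloping
```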
4.3. The Classical Asset Market: The Old AA Schedule
In the previous section we derived Equations (20) and (21), which represent respectively the classical equilibrium in the money market, $\frac{{M}^{s}}{P}=L\left({\stackrel{¯}{r}}_{ZLB},Y\right)$ , and in the foreign exchange market, $\left(1+{\stackrel{¯}{r}}_{ZLB}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}\right)=\frac{{E}_{M\left(\text{/}€\right)}^{e}}{{E}_{ZLB\left(\text{/}€\right)}}\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)$ . The old AA schedule is defined as the relationship between the exchange rate, E[ZLB], and output at which Equations (20) and (21) both hold. Recall that in Equation (20) the money supply adjusts in order to maintain the fixed interest rate. In other words, the central bank loosens its monetary policy in the classical framework.
The interest rate parity in the classical case (Equation (21)) implies that the nominal exchange rate is fixed in zero lower-bound periods. This is because, in the zero lower-bound period, the nominal interest rate is fixed. Recall that, by assumption, the expected exchange rate is exogenous.
4.4. Graphical Illustration: The Old vs the New Government Spending Multiplier
The classical model is represented in panel (a) and the new model is displayed in panel (b) of Figure 1.^7 Each equilibrium is observed when the AA and the DD curves cross each other. The classical
AA curve in ZLB is a horizontal line, while the new AA curve is downward-sloping. An increase in government spending
shifts the DD schedule to the right by $\frac{\text{d}G}{1-MPC}$ ( $MPC=\frac{\text{d}C}{\text{d}Y}$ ). In the classical
analysis, (panel a), the DD curve shifts from DD[1] to DD[2] and the equilibrium
output moves from ${Y}_{1}$ to ${Y}_{2}$ . $\text{d}Y={Y}_{2}-{Y}_{1}=\frac{\text{d}G}{1-MPC}$ .
In the new model (panel b), the DD curve shifts from DD[n][1] to DD[n][2] and output moves from ${Y}_{n1}$ to ${Y}_{n2}$ . The increase in output is smaller than that observed in the classical model:
$\text{d}{Y}_{n}={Y}_{n2}-{Y}_{n1}=\frac{\text{d}G}{1-\frac{\text{d}C}{\text{d}Y}-\frac{\text{d}NX}{\text{d}{E}_{ZLB}}\left(\frac{\left(\frac{\partial L}{\partial Y}\right)\left(1+{\stackrel{¯}{r}}_
{ZLB}\right){E}_{ZLB}^{2}}{\left(\frac{\partial L}{\partial {r}_{M}^{e}}\right)\left(1+{\stackrel{¯}{r}}_{ZLB}^{*}\right)\left(1+{r}^{e*}\right){E}_{M\left(\text{/}€\right)}^{e}}\right)}$
Figure 1. Effect of Government spending on output in zero lower-bound: comparing the new effect with the classical effect. Note: The classical model is represented in panel (a) and the new model is
displayed in panel (b). In panel (a), an increase in Government spending shifts the DD schedule to the right from DD[1] to DD[2] and equilibrium output moves from ${Y}_{1}$ to ${Y}_{2}$ . In panel
(b), the DD curve shifts from DD[n][1] to DD[n][2] and output moves from ${Y}_{n1}$ to ${Y}_{n2}$ , with $\text{d}{Y}_{n}<\text{d}Y$ .
5. Conclusion
In ZLB periods, the existing AA-DD model proposed by Krugman, Obstfeld, & Melitz (2012) predicts a very large output effect of an increase in government spending compared to that in a normal period.
We propose a simple model in which the expected near-future interest rate is endogenous. In our new model, the output effect of an increase in government spending in the ZLB period is dampened by an appreciation of the current exchange rate. The predictions of our new AA-DD model are consistent with the recent DSGE literature on open economies in ZLB periods. The AA-DD model is widely taught in many universities and is also used by many policymakers. Our new AA-DD model will help to update the existing AA-DD model in ZLB periods. It will also help central bankers and governments when designing their policies, especially when they do not have access to data.
In this article, we limit ourselves to theory. Additional research could be conducted on estimating the model presented in this article using real data.
^1The interest parity condition that summarizes the relationship between the output and the nominal exchange rate, when both the money market and the foreign exchange market are in equilibrium,
implies a fixed exchange rate during ZLB periods.
^2For a description and estimation of standard DSGE models, see Mao Takongmo (2021) .
^3See Funashima (2018) for unconventional monetary policies and their effectiveness in Japan during the ZLB periods.
^4See, for example, the American Recovery and Reinvestment Act (ARRA), which was designed to increase US government spending by $831 billion between 2009 and 2019. Another example is the European Economic Recovery Plan (EERP), meant to increase European government spending by €200 billion between 2008 and 2010.
^5In fact, the net export can also be a function of many other variables, such as the national and foreign disposable income; since our focus in this paper is on the role played by the exchange rate,
for simplicity, we assume that the impact of other factors on the net export is negligible.
^6Note that $\frac{\partial NX}{\partial \mathcal{E}}>0$ is equivalent to $\frac{\partial NX}{\partial E}>0$ because in the short run P is fixed by definition, and ${P}^{*}$ is exogenous.
^7Note that an increase in the exchange rate is a depreciation of the national currency.
Deterministic extractors for bit-fixing sources by obtaining an independent seed
An (n, k)-bit-fixing source is a distribution X over {0,1}^n such that there is a subset of k variables in X[1], ... , X[n] which are uniformly distributed and independent of each other, and the
remaining n - k variables are fixed. A deterministic bit-fixing source extractor is a function E : {0, 1}^n → {0, 1}^m which on an arbitrary (n, k)-bit-fixing source outputs m bits that are
statistically close to uniform. Recently, Kamp and Zuckerman [Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003, pp. 92-101] gave a construction of a deterministic bit-fixing source extractor that extracts Ω(k^2/n) bits and requires k > √n. In this paper we give constructions of deterministic bit-fixing source extractors that extract (1 - o(1))k bits whenever k > (log n)^c for some universal constant c > 0. Thus, our constructions extract almost all the randomness from bit-fixing sources and work even when k is small. For k ≫ √n the extracted bits have statistical distance 2^{-n^{Ω(1)}} from uniform, and for k < √n the extracted bits have statistical distance k^{-Ω(1)} from uniform. Our technique gives a general method to transform deterministic bit-fixing source extractors that extract few bits into extractors which extract almost all the bits.
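As a concrete baseline (a classical fact, not from this paper): the parity function is a deterministic extractor outputting a single perfectly uniform bit from any (n, k)-bit-fixing source with k ≥ 1, since XOR-ing in even one uniform bit makes the output uniform. A brute-force check on a tiny source:

```python
from itertools import product

def xor_extract(bits):
    # Parity of all input bits: a 1-bit deterministic extractor
    # for bit-fixing sources.
    out = 0
    for b in bits:
        out ^= b
    return out

# (n=5, k=2) bit-fixing source: positions 1 and 3 are uniform, the rest fixed
fixed = {0: 1, 2: 0, 4: 1}
free_positions = [1, 3]
outputs = []
for assignment in product([0, 1], repeat=len(free_positions)):
    x = [0] * 5
    for i, v in fixed.items():
        x[i] = v
    for pos, v in zip(free_positions, assignment):
        x[pos] = v
    outputs.append(xor_extract(x))

# Each output bit occurs for exactly half of the equally likely source outcomes
assert outputs.count(0) == outputs.count(1) == 2
```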
All Science Journal Classification (ASJC) codes
• General Computer Science
• General Mathematics
• Bit-fixing sources
• Derandomization
• Deterministic extractors
• Seed obtainers
• Seeded extractors
Graph Neural Network
The Graph Neural Network (GNN) implements a function \(\tau(G, n) \in \mathbb{R}^m\) that maps a graph \(G\) and one of its node \(n\) into an \(m\)-dimensional Euclidean space. The originally
proposed GNN is an extension of both recursive neural networks and random walk models. GNNs are based on an information diffusion mechanism (propagating the information to neighbors) which is
constrained to ensure that a unique stable equilibrium always exists.
The Original Model
GNN in the positional form involves a parametric local transition function \(f\) and a local output function \(g\) such that, for node \(v\), we have:
\[\begin{aligned} \mathbf{h}_v &= f(\mathbf{x}_v, \mathbf{x}_{co[v]}, \mathbf{h}_{ne[v]}, \mathbf{x}_{ne[v]}) \\ \mathbf{o}_v &= g(\mathbf{h}_v, \mathbf{x}_v) \end{aligned}\]
where \(\mathbf{x}_v, \mathbf{x}_{co[v]}, \mathbf{h}_{ne[v]}, \mathbf{x}_{ne[v]}\) are, respectively, the features of \(v\), the features of its incident edges, the states of its neighbors, and the features of its neighbors.
Note 1: The features of the neighbors \(\mathbf{x}_{ne[v]}\) could be removed because the states of the neighbors \(\mathbf{h}_{ne[v]}\) implicitly contain this information.
Note 2: For directed graphs, the function \(f\) can take an extra input variable indicating the directions of the edges linked to node \(v\).
Note 3: For simplicity, the original GNN uses the same \(f\) and \(g\) for all nodes, but they could in principle differ across nodes.
Let \(\mathbf{H}, \mathbf{O}, \mathbf{X}\), and \(\mathbf{X}_N\) be the vectors constructed by stacking all the states, all the outputs, all the features, and all the node features, respectively. Then we have the compact form:
\[\begin{aligned} \mathbf{H} &= F(\mathbf{H}, \mathbf{X})\\ \mathbf{O} &= G(\mathbf{H}, \mathbf{X}_N) \end{aligned}\]
where \(F\) and \(G\) are global functions according to Note 3.
By Banach’s fixed point theorem, \(\mathbf{H}\) reaches the fixed point of the compact expression above, provided that \(F\) is a contraction map. Under this assumption, \(\mathbf{H}\) can be iteratively updated as \(\mathbf{H}^{t+1} = F(\mathbf{H}^t, \mathbf{X})\), and this dynamical system converges exponentially fast to the fixed-point solution for any initial value \(\mathbf{H}(0)\).
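A minimal numerical sketch of this iteration (a hand-rolled linear \(F\), not the original parametric network): with \(F(\mathbf{H}) = \alpha A \mathbf{H} + \mathbf{X}\) and \(\alpha \lVert A \rVert < 1\), \(F\) is a contraction, so repeated application converges to the unique fixed point.

```python
import numpy as np

def iterate_to_fixed_point(A, X, alpha=0.5, iters=80):
    """Iterate H <- alpha * A @ H + X; a contraction when alpha * ||A|| < 1."""
    H = np.zeros_like(X)
    for _ in range(iters):
        H = alpha * A @ H + X
    return H

# Tiny 2-node graph (row-normalized adjacency), 2-dimensional node states
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
X = np.array([[1.0, 0.0],
              [0.0, 1.0]])

H = iterate_to_fixed_point(A, X)
residual = np.abs(H - (0.5 * A @ H + X)).max()
assert residual < 1e-10  # H is (numerically) the unique fixed point
```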
Next, we need to learn the parameters of \(F\) and \(G\). With \(p\) supervised nodes and target information \(\mathbf{t}_i\) for the \(i\)th node, we have the loss function \(loss = \sum_{i=1}^p (\mathbf{t}_i - \mathbf{o}_i)^2\). Then a gradient-descent-based algorithm trains the model as follows:
• Iteratively update \(\mathbf{H}^t\) until reaching the fixed point (set time \(T\) as the upper time limit).
• Compute the gradient of the weights w.r.t. the loss function.
• Update the parameters according to the optimization algorithm.
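The three steps above can be sketched end-to-end for a toy linear GNN with a single scalar weight (finite differences stand in for the backpropagation-through-the-fixed-point of the original paper; all values here are illustrative assumptions):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # row-normalized adjacency
X = np.array([1.0, 2.0])        # scalar node features
ALPHA = 0.5                     # keeps the transition a contraction

def fixed_point(w, iters=60):
    # Step 1: iterate H <- alpha*A@H + w*X until (numerically) at the fixed point
    H = np.zeros_like(X)
    for _ in range(iters):
        H = ALPHA * A @ H + w * X
    return H

def loss(w, t):
    return float(np.sum((t - fixed_point(w)) ** 2))

t = 2.0 * np.linalg.solve(np.eye(2) - ALPHA * A, X)  # targets from true w = 2

w, lr, eps = 0.0, 0.02, 1e-6
history = [loss(w, t)]
for _ in range(50):
    # Step 2: gradient of the loss w.r.t. the weight (finite differences)
    g = (loss(w + eps, t) - loss(w - eps, t)) / (2 * eps)
    # Step 3: parameter update
    w -= lr * g
    history.append(loss(w, t))

assert history[-1] < 1e-6 * history[0]  # training drives the loss toward zero
```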
Limitations: (1) Using the fixed point makes it inefficient to update the hidden states of nodes and less informative for distinguishing each node (so not suitable for node-focused tasks); (2) most popular neural networks use different parameters in different layers, while the original GNN shares the same global functions across iterations; (3) informative features on the edges cannot be effectively modeled in the original model, and how to learn hidden states for edges is also a problem.
• Zhou, J., Cui, G., Zhang, Z., Yang, C., Liu, Z., & Sun, M. (2018). Graph Neural Networks: A Review of Methods and Applications. ArXiv, abs/1812.08434.
• Scarselli, F., Gori, M., Tsoi, A., Hagenbuchner, M., & Monfardini, G. (2009). The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20, 61-80.
• Li, B., Hao, D., Zhao, D., & Zhou, T. (2017). Mechanism Design in Social Networks. AAAI.
Self-Blinded Mineral Water Taste Test
Blind randomized taste-test of mineral/distilled/tap waters using Bayesian best-arm finding; no large differences in preference.
The kind of water used in tea is claimed to make a difference in the flavor: mineral water being better than tap water or distilled water. However, mineral water is vastly more expensive than tap water.
To test the claim, I run a preliminary test of pure water to see if any water differences are detectable at all. I compared my tap water, 3 distilled water brands (Great Value, Nestle Pure Life, &
Poland Spring), 1 osmosis-purified brand (Aquafina), and 3 non-carbonated mineral water brands (Evian, Voss, & Fiji) in a series of n = 67 blinded randomized comparisons of water flavor. The
comparisons are modeled using a Bradley-Terry competitive model implemented in Stan; comparisons were chosen using an adaptive Bayesian best-arm sequential trial (racing) method designed to
locate the best-tasting water in the minimum number of samples by preferentially comparing the best-known arm to potentially superior arms. Blinding & randomization are achieved by using a Lazy
Susan to physically randomize two identical (but marked in a hidden spot) cups of water.
The final posterior distribution indicates that some differences between waters are likely to exist but are small & imprecisely estimated and of little practical concern.
Tap water taste reportedly differs a lot between cities/states. This is plausible since tea tastes so much worse when microwaved, which is speculated to be due to the oxygenation, so why not the
mineral/chlorine content as well? (People often complain about tap water and buy water filters to improve the flavor, and sometimes run blinded experiments testing water filters vs tap; 2018 find in
a blind taste test of bottled “fine water”, subjects were slightly better than chance at guessing, preferred tap or cheap water as often, and were unable to match fine waters to advertising, while
Food52’s 2 testers of 17 blind sparkling waters had difficulty distinguishing types & noticed no advantage to more expensive ones.^1)
Testing tea itself, rather than plain water, is tricky for a few reasons:
• hot tea is harder to taste differences in
• the tea flavor will tend to overpower & hide any effects from the water
• each batch of tea will be slightly different (even if carefully weighed out, temperature checked with a thermometer, and steeped with a timer)
• boiling different waters simultaneously requires two separate kettles (and for blinding/randomization, raises safety issues)
• requires substantial amounts of a single or a few teas to run (since leaves can’t be reused)
• and the results will either be redundant with testing plain water (if simple additive effects like ‘bad-tasting water makes all teas taste equally worse’) or will add in additional variance to
estimate interaction effects which probably do not exist^2 or are small but will use up more data (in psychology and related fields, the main effects tend to be much more common than interaction
effects, which also require much larger data samples).
So a tea test is logistically more complicated and highly unlikely to deliver any meaningful inferences with feasible sample sizes as compared to a water test. On the other hand, a water test, if it
indicated large differences existed, might not be relevant since those differences might still be hidden by tea or turn out to be interactions with tea-specific effects. This suggests a two-step
process: first see if there are any differences in plain water; if there aren’t, there is no need to test tea, but if there is, proceed to a tea test.
This question hearkens back to R. A. Fisher’s famous “lady tasting tea” experiment and turns out to provide a bit of a challenge to my usual blinding & methods, motivating a look into Bayesian
“best-arm identification” algorithms.
Well water & distilled water were already on hand, so I needed good commercial spring water (half a gallon or less each). I obtained 8 kinds of water:
• tap water (I ran the tap for several minutes and then stored 3.78l in an empty distilled water container, and stored at room temperature with the others)
• Walmart:
□ Great Value distilled water
□ Nestle Pure Life
□ Aquafina
□ Voss
□ Evian
□ Fiji Natural Water
• Amazon:
□ Poland Spring
Price, type, pH, and contents:
Water brand Water kind Country Price ($/l) Price ($) Volume (L) pH Total mg/l Calcium Sodium Potassium Fluoride Magnesium Nitrate Chloride Copper Sulfate Arsenic Lead Bicarbonates Silica
tap water well USA 0 0 3.78
Great Value distilled USA 0.23 0.88 3.78
Nestle Pure Life distilled USA 0.26 0.98 3.79 6.95 36 7.6 6.8 1 0.1 3.6 0.4 13.45 0.05 13.5 0.002 0.005
Voss Still Water mineral Norway 3.43 2.74 0.8 5.5 44 5 6 1 0.1 1 0.4 12 0.05 5 0.002 0.005
Evian mineral France 1.78 1.78 1 7.2 309 80 1 26 6.8 12.6 36 15
Fiji Natural Water mineral Fiji 1.88 1.88 1 7.7 222 18 18 4.9 0.28 15 0.27 9 0.05 1.3 0.002 0.005 152 93
Aquafina osmosis USA 1 1 1 5 5 4 10 250 1 250 0.010 0.005
Poland Spring distilled USA 3.15 9.45 3 7.2 61 7.5 5.9 0.6 0.115 1.145 0.6 10.05 0.05 3 0.0014 0.005
• the pH & mineral content of my tap water is unknown; it is well water untreated with chlorine or fluoride, described as very soft
• the Great Value/Walmart distilled water doesn’t report any data on the label and there don’t seem to be any datasheets online (the pH of distilled water can apparently vary widely from the
nominal value of 7 and cannot be assumed to be 7, but the mineral contents should all at least be close to 0)
• the Nestle Pure Life numbers are not reported on the packaging but in the current online datasheet (pg4, “2015 Water Analysis Report”); I have taken the mean when a range is reported, and the
upper bound when that is reported (specifically, “ND”^3)
• Voss Still reports some numbers on the bottle, but more details are reported in an undated (metadata indicates 2011[13ya]) report on the Voss website; for “ND” I reuse the upper bound from Nestle
Pure Life
• the Evian label reports a total of “dissolved solids at 180C: 309ppm (mg/l)”.
• Fiji Water provides a 2014 datasheet which is more detailed than the label; “ND” as before
• Aquafina labels provide no information beyond using reverse osmosis; they provide a 2015 datasheet, which omits pH and several other minerals; various online tests suggest Aquafina samples have
pHs of 4-6, so I class it as 5
• Poland Spring 2016 datasheet; note that the price may be inflated considerably because I had to order it online instead of buying in person from a normal retailer; like Nestle Pure Life, ranges
are reported and “ND” taken as ceiling
Reporting of the mineral contents of waters is inconsistent & patchy enough that they’re unlikely to be helpful in predicting flavor ratings (while 2010/2013 finds testers can taste mineral content,
Gallagher & Dietrich 2010[14ya] reports no particular preference, and 2015 finds no consistent relationship with fine-water prices other than both very low & very high mineral contents predicts
higher prices).
In rating very subtle differences in flavor, the usual method is binary forced-choice comparisons, as the waters cannot be rated on their own (they just taste like water). So the measured data would be the
result of a comparison, better/worse or win/loss. Fisher’s original “lady tasting tea” experiment used permutation tests, but he was only considering two cases & was testing the null hypothesis,
while I have 8 waters where I am reasonably sure the null hypothesis of no difference in taste is indeed false and I am more interested in how large the differences are & which is best, so the
various kinds of permutation or chi-squared tests in general do not work. The analogy to sporting competitions suggests that the paradigm here should be the Bradley-Terry model which is much like
chess’s Elo rating system in that it models each competitor (water) as having a performance variable (flavor) on a latent scale, where the difference between one competitor’s rating and another
translates into the probability it will win a comparison. (For more detailed discussion of the Bradley-Terry model, see references in Resorter.) To account for ties, the logistic distribution is
expanded into an ordered logistic distribution with cutpoints to determine whether the outcome falls into 1 of 3 ranges (win/tie/loss).
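The resulting likelihood can be sketched in a few lines (illustrative Python of my own, mirroring the ordered-logistic parameterization with symmetric cutpoints at ±c used in the Stan model later in this write-up):

```python
import math

def bt_probs(beta_i, beta_j, c):
    """Ordered-logistic Bradley-Terry with cutpoints at -c and +c:
    returns (P(i loses), P(tie), P(i wins)) for a comparison of i vs j."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    nu = beta_i - beta_j                          # latent quality difference
    p_loss = sigmoid(-c - nu)                     # outcome category 1
    p_tie = sigmoid(c - nu) - sigmoid(-c - nu)    # outcome category 2
    p_win = 1.0 - sigmoid(c - nu)                 # outcome category 3
    return p_loss, p_tie, p_win

# equal ratings give a symmetric outcome; a higher rating raises P(win)
probs_equal = bt_probs(0.0, 0.0, 0.5)
probs_better = bt_probs(1.0, 0.0, 0.5)
```

As c shrinks toward 0 the tie band vanishes and the model reduces to the ordinary Bradley-Terry logistic win probability.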
With 8 waters to be ranked hierarchically using uninformative binary comparisons (which are possibly quite noisy) and sampling being costly (to my patience), it would be nice to have an adaptive
experiment design which will be more sample-efficient than the simplest experiment design of simply doing a full factorial with 2 of each possible comparison (there are (8 choose 2) = 28 possible pairs,
since order doesn’t matter, so 2 samples each would give n = 56). In particular, I am interested less in estimating as accurately as possible all the waters (for which the optimal design minimizing
total variance probably would be some sort of full factorial experiment) than I am in finding out which, if any, of the waters tastes best—which is not the multi-armed bandit setting (to which the
answer would be Thompson sampling) but the closely connected “best-arm identification” problem (as well as, confusingly, the “dueling multi-armed bandit” and “preference learning”/“preference
ranking” areas; I didn’t find a good overview of them all comparing & contrasting, so I’m unsure what would be the state-of-the-art for my exact problem or whose wheel I am reinventing).
Best-arm identification algorithms are often called ‘racing’ algorithms because they sample by ‘racing’ the best candidate arms against each other, focusing their sampling on only the arms likely to
be best, and periodically eliminating the worst-ranked arms (in “successive rejects”). So they differ from Thompson sampling in that Thompson sampling, in order to receive as many rewards as possible,
will tend to over-focus on the best arm while not sampling the second-best enough. 2014^4 introduces a Bayesian best arm identification algorithm, “Ordered-Statistic Thompson Sampling”, which selects
the arm to sample each round by:
1. fitting the Bayesian model and returning the posterior distribution of estimates for each arm
2. taking the mean of each distribution, ranking them, and finding the best arm’s mean^5
3. for the other arms, sampling 1 sample from their posteriors (similar to Thompson sampling); add a bonus constant to tune the ‘aggressiveness’ and sample more or less heavily from lower-ranked arms
4. select the action: if any of the arm samples are greater than the best arm mean, sample from that arm, otherwise, sample again from the best arm
5. repeat indefinitely until the experiment halts (indefinite horizon)
This works because it frequently samples from any arm which threatens to surpass the current best arm, in proportion to its chance of success; otherwise, it concentrates on making the best arm's
estimate more precise. The usual best-arm identification algorithms are for binomial or normal distribution problems, but here we don't have 28 ‘arms’ of pairs of waters, because it's not the pairs we care
about but the waters themselves. My suggestion for adapting the 2014 algorithm to the Bradley-Terry competitive setting: take the water with the highest posterior mean as the first water of the
comparison, then draw one sample from the posteriors of the other waters and take the water with the highest sampled value as the second. This is simple to implement, and like the regular best-arm
identification algorithm, focuses on alternatives in proportion to their probability of being superior to the current estimated best arm. (Who knows if this is optimal.)
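That selection rule can be sketched in Python/NumPy (my own re-implementation for illustration; `posterior` is a draws × waters matrix of MCMC samples of the latent quality ratings):

```python
import numpy as np

def select_comparison(posterior, rng):
    """Pick the incumbent by posterior mean, then pick the challenger as the
    argmax of a single random posterior draw with the incumbent excluded
    (ordered-statistic Thompson sampling adapted to pairwise comparisons)."""
    means = posterior.mean(axis=0)
    incumbent = int(np.argmax(means))
    draw = posterior[rng.integers(posterior.shape[0])].copy()
    draw[incumbent] = -np.inf      # never pair the incumbent with itself
    challenger = int(np.argmax(draw))
    return incumbent, challenger

# toy posterior: arm 2 is clearly best on average, arm 5 a plausible rival
rng = np.random.default_rng(1)
post = rng.normal(size=(4000, 8)) + np.array([0, 0, 2, 0, 0, 1.5, 0, 0])
incumbent, challenger = select_comparison(post, rng)
```

Arms whose posteriors overlap the incumbent's get chosen as challenger often; clearly inferior arms almost never produce the top sampled draw and so are rarely sampled again.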
One alternative to my variant on ordered-statistic Thompson sampling would be to set a fixed number of samples I am willing to take (eg. n = 30) and define a reward of 1 for picking the true best
water and 0 for picking any other water—thereby turning the experiment into a finite-horizon Markov decision process whose decision tree can be solved exactly by dynamic programming/backwards
induction, yielding a policy for which waters to compare to maximize the probability of selecting the right one at the end of the experiment. This runs into the curse of dimensionality: with 28
possible comparisons with 2 possible outcomes, each round has 56 possible results, so over 67 samples, there are 56^67 possible sequences.
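A quick sanity check of those counts (Python):

```python
import math

# 8 waters -> unordered pairs; each comparison has 2 recorded outcomes
pairs = math.comb(8, 2)          # 28 possible pairs
results_per_round = pairs * 2    # 56 possible results per round

# distinct outcome sequences over 67 rounds: a 118-digit number,
# far beyond exact backwards induction over the full decision tree
digits = len(str(results_per_round ** 67))
print(pairs, results_per_round, digits)
```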
With such subtle differences, subjective expectations become a serious issue and blinding would be good to have, which also requires randomization. My usual methods of blinding & randomization, using
containers of pills, do not work with water. It would be possible to do the equivalent by using water bottles shaken in a large container but inconvenient and perhaps messy. A cleaner (literally) way
would be to use identical cups, one marked on the bottom to keep track of which is which after rating the waters—but how to randomize them? You can’t juggle or shake or mix up cups of water.
It occurred to me that one could use a spinning table—a Lazy Susan—to randomize pairs of cups. And then blinding is trivial, because you can easily spin a Lazy Susan without looking at it, and it
will coast after spinning so you cannot subconsciously count revolutions to guesstimate which cup winds up closest to you.
Modified version of Ken Butler’s btstan Stan code for Bradley-Terry models and my own implementation of best arm sampling for the Bradley-Terry model:
## list of unique competitors for conversion into numeric IDs:
competitors <- function(df) { unique(sort(c(df$Type1, df$Type2))) }
fitBT <- function(df) {
types <- competitors(df)
team1 = match(df$Type1, types)
team2 = match(df$Type2, types)
y = df$Win
N = nrow(df)
J = length(types)
data = list(y = y, N = N, J = J, x = cbind(team1, team2))
m <- "data {
int<lower=0> N; // number of games
int<lower=1> J; // number of teams
int<lower=1,upper=3> y[N]; // results (1 = first listed water wins, 3 = second wins)
int x[N,2]; // indices of teams playing
}
parameters {
vector[J] beta;
real<lower=0> cc;
}
model {
real nu;
int y1;
int y2;
vector[2] d;
beta ~ normal(0,5);
cc ~ normal(0,1);
for (i in 1:N) {
y1 = x[i,1];
y2 = x[i,2];
nu = beta[y1] - beta[y2];
d[1] = -cc;
d[2] = cc;
y[i] ~ ordered_logistic(nu,d);
} }"
model <- stan(model_code=m, chains=1, data=data, verbose=FALSE, iter=30000)
return(model) }
sampleBestArm <- function(model, df) {
types <- competitors(df)
posteriorSampleMeans <- get_posterior_mean(model, pars="beta")
bestEstimatedArm <- max(posteriorSampleMeans[,1])
bestEstimatedArmIndex <- which.max(posteriorSampleMeans[,1])
## pick one row/set of posterior samples at random:
posteriorSamples <- extract(model)$beta[sample.int(nrow(extract(model)$beta), size=1),]
## ensure that the best estimated arm is not drawn, as this is pairwise:
posteriorSamples[bestEstimatedArmIndex] <- -Inf
bestSampledArm <- max(posteriorSamples)
bestSampledArmIndex <- which.max(posteriorSamples)
return(c(types[bestEstimatedArmIndex], types[bestSampledArmIndex])) }
plotBT <- function(df, fit, labels) {
posteriors <- as.data.frame(extract(fit)$beta)
colnames(posteriors) <- labels
posteriors <- melt(posteriors)
colnames(posteriors) <- c("Water", "Rating")
return(ggplot(posteriors, aes(x=Rating, fill=Water)) +
ggtitle(paste0("n=",as.character(nrow(df)),"; last comparison: ", tail(df$Type1,n=1),
" vs ", tail(df$Type2,n=1))) +
geom_density(alpha=0.3) +
coord_cartesian(ylim = c(0,0.23), xlim=c(-12,12), expand=FALSE)) }
It would be better to enhance the btstan code to fit a hierarchical model with shrinkage since the different waters will surely be similar, but I wasn’t familiar enough with Stan modeling to do so.
To run the experiment, I stored all 8 kinds of water in the same place at room temperature for several weeks. Before running, I refrained from food or drink for 5 hours and brushed/flossed/
water-picked my teeth.
For blinding, I took my two identical white Corelle stoneware mugs, and put a tiny piece of red electrical tape on the bottom of one. For randomization, I borrowed a Lazy Susan table.
The experimental procedure was:
1. empty out both mugs and the measuring cup into a tub sitting nearby
2. select two kinds of water according to the best arm Bayesian algorithm (calling fitBT & sampleBestArm on an updated data-frame)
3. measure a quarter cup of the first kind of water into the marked mug and a quarter cup into the second
4. place them symmetrically on the Lazy Susan with handles inward and touching
5. closing my eyes, rotate the Lazy Susan at a moderate speed (to avoid tipping over the mugs) for a count of at least 30
6. eyes still closed for good measure, grab the mug on left and take 2 sips
7. grab the mug on the right, take 2 sips
8. alternate sips as necessary until I decide which one is slightly better tasting
9. after deciding, look at the bottom of the mug chosen
10. record the winner of the comparison, and run the Bayesian model and best arm algorithm again.
Following this procedure, I made n = 67 pairwise comparisons of water:
water <- read.csv(stdin(), header=TRUE, colClasses=c("character", "character", "integer"))
"Type1","Type2","Win"
"tap water","Voss",3
"Voss","Great Value distilled",3
"Great Value distilled","Poland Spring",1
"Poland Spring","Nestle Pure Life",3
"Nestle Pure Life","Fiji",1
"Evian","tap water",1
"Great Value distilled","Poland Spring",1
"Great Value distilled","tap water",1
"Great Value distilled","Nestle Pure Life",1
"Fiji","Poland Spring",3
"Fiji","Poland Spring",1
"tap water","Poland Spring",1
"Poland Spring","Fiji",3
"Poland Spring","Fiji",3
"Poland Spring","Fiji",3
"Poland Spring","Fiji",1
"Poland Spring","Fiji",3
"Poland Spring","Fiji",1
"Poland Spring","Fiji",3
"Poland Spring","Fiji",1
"Poland Spring","Aquafina",1
"Poland Spring","Fiji",3
"Poland Spring","Aquafina",1
"Poland Spring","Aquafina",1
"Fiji","Poland Spring",3
"Fiji","Poland Spring",3
"Fiji","Poland Spring",3
"Fiji","tap water",1
"Fiji","Poland Spring",1
"tap water","Aquafina",1
"Fiji","Poland Spring",1
"Fiji","Poland Spring",3
"Fiji","Poland Spring",3
"Great Value distilled","Evian",3
"Voss","Nestle Pure Life",3
"Nestle Pure Life","tap water",3
"tap water","Aquafina",1
"Aquafina","Poland Spring",3
"Evian","Great Value distilled",1
"Great Value distilled","Nestle Pure Life",3
"Nestle Pure Life","Voss",3
"Voss","tap water",3
"tap water","Aquafina",1
"Aquafina","Poland Spring",1
"Aquafina","Poland Spring",3
"Evian","Great Value distilled",1
"Great Value distilled","Nestle Pure Life",3
"Nestle Pure Life","tap water",3
"tap water","Voss",3
"Evian","Great Value distilled",3
"Great Value distilled","Nestle Pure Life",3
"Nestle Pure Life","tap water",3
"tap water","Aquafina",3
"Evian","Great Value distilled",1
"Great Value distilled","Nestle Pure Life",3
"Nestle Pure Life","tap water",3
"tap water","Aquafina",1
types <- competitors(water); types
# [1] "Aquafina" "Evian" "Fiji" "Great Value distilled" "Nestle Pure Life"
# [6] "Poland Spring" "tap water" "Voss"
fit <- fitBT(water); print(fit)
# 8 chains, each with iter=30000; warmup=15000; thin=1;
# post-warmup draws per chain=15000, total post-warmup draws=120000.
# mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
# beta[1] 1.18 0.01 1.89 -2.52 -0.09 1.18 2.46 4.89 23182 1
# beta[2] -2.69 0.01 2.11 -6.95 -4.08 -2.66 -1.25 1.33 27828 1
# beta[3] 1.83 0.01 1.89 -1.89 0.55 1.83 3.10 5.56 23147 1
# beta[4] -0.48 0.01 1.90 -4.21 -1.76 -0.48 0.81 3.21 23901 1
# beta[5] -0.40 0.01 1.88 -4.11 -1.66 -0.40 0.88 3.27 23487 1
# beta[6] 1.42 0.01 1.89 -2.29 0.15 1.42 2.70 5.14 23173 1
# beta[7] -0.49 0.01 1.87 -4.17 -1.74 -0.49 0.78 3.17 23084 1
# beta[8] -0.48 0.01 1.93 -4.29 -1.78 -0.48 0.82 3.29 24494 1
# cc 0.03 0.00 0.03 0.00 0.01 0.02 0.05 0.13 83065 1
# lp__ -49.58 0.01 2.27 -54.90 -50.87 -49.24 -47.91 -46.19 40257 1
## example next-arm selection at the end of the experiment:
sampleBestArm(fit, water)
# [1] "Fiji" "Poland Spring"
posteriorSamples <- extract(fit, pars="beta")$beta
rankings <- matrix(logical(), ncol=8, nrow=nrow(posteriorSamples))
## for each set of 8 posterior samples of each of the 8 water's latent quality, calculate if each sample is the maximum or not:
for (i in 1:nrow(posteriorSamples)) { rankings[i,] <- posteriorSamples[i,] >= max(posteriorSamples[i,]) }
df <- data.frame(Water=types, Superiority.p=round(digits=3,colMeans(rankings)))
df[order(df$Superiority.p, decreasing=TRUE),]
# Water Superiority.p
# 3 Fiji 0.718
# 6 Poland Spring 0.145
# 1 Aquafina 0.110
# 8 Voss 0.014
# 4 Great Value distilled 0.007
# 5 Nestle Pure Life 0.006
# 7 tap water 0.001
# 2 Evian 0.000
plotBT(water, fit, types)
## render the sequential-fitting animation (assumes library(animation) is loaded for saveGIF)
saveGIF({
for (n in 7:nrow(water)) {
df <- water[1:n,]
fit <- fitBT(df)
p <- plotBT(df, fit, types)
print(p) } },
interval=0.6, ani.width = 1000, ani.height=800,
movie.name = "tea-mineralwaters-bestarm-sequential.gif")
Means in descending order, with posterior probability of being the #1-top-ranked water (not the same thing as having a good mean ranking):
1. Fiji (P = 0.72)
2. Poland Spring (P = 0.15)
3. Aquafina (P = 0.11)
4. Nestle Pure Life (P = 0.01)
5. Great Value distilled (P = 0.01)
6. Voss (P = 0.01)
7. tap water (P = 0.00)
8. Evian (P = 0.00)
Results of n = 67 blinded randomized paired taste-testing comparisons of 8 mineral, distilled, and tap waters: final estimated posterior distributions of win probability in a comparison, showing the
poor taste of Evian mineral water but likely similar tastes of most of the others.
Animation of mineral water taste-test showing how the posterior distributions evolve over n = 7 to n = 67, guided by Bayesian best arm sampling.
For the first 7 comparisons, since I didn’t want to insert any informative priors about my expectations, the best arm choice would be effectively random, so to initialize it, I did a round-robin set
of comparisons: put the waters into a quasi-random order ABCD, then compared A/B, B/C, C/D and so on. For the next 4 comparisons, I made a mistake in recording my data since I forgot that ‘1’ coded
for the left water winning and ‘3’ for the right water winning, and so I had reversed the rankings and was actually doing a ‘worst-arm’ algorithm, as it were. After fixing that, the comparisons began
focusing on Fiji and Poland Spring, eventually expanding to Aquafina as it improved in the rankings.
Comparing water turns out to be quite difficult. In some cases, a bad taste was quickly distinguishable—I quickly learned that Evian, Great Value distilled, and Nestle Pure Life had distinctly sour
or metallic overtones which I disliked (but apparently enough people buy Evian & Nestle to make them viable in a Walmart!). Despite repeated sampling, I had a hard time ever distinguishing Poland
Spring/Fiji/Aquafina/Voss, but I thought they might have been ever so slightly better than my tap water in a way I can’t verbalize except that they felt ‘cooler’ somehow.
By n = 41, Fiji continued to eke out a tiny lead over Poland Spring & Aquafina, but I ran out of the Fiji and could no longer run the best-arm algorithm (since it would keep sampling Fiji). I was also
running low on the Poland Spring. So at that point I went back to round-robin, this time using the order of posterior means.
With additional data, the wide posterior distributions began to contract around 0. At around n = 67, I was bored and not looking forward to sampling Evian/Great Value/Nestle many more times, and
looking at the posterior distributions, it increasingly seemed like an exercise in futility—even after this much data, there was still only a 72% probability of correctly picking the best water.
Further testing would probably show Evian/Great Value/Nestle as worse than my tap water (amusingly enough), but would be unable to meaningfully distinguish between my tap water and the decent ones, which
answered the original question: no, the decent mineral & bottled waters are almost indistinguishable even under the most optimal taste-testing conditions, they would be even less distinguishable when
used in hot tea, and there was zero chance they were worth their enormous cost compared to my free tap water; they were the scam I expected. (After all, they are many times more expensive on a unit
basis compared even to ordinary bottled water; the mineral contents are generally trivial fractions of RDAs at their highest; and they appear equally likely to taste worse or better, to the extent they
taste different at all.)
I ended the experiment there, dumped the remaining water—except the remaining sealed Poland Spring bottles which are conveniently small, so I kept for use in my car—and recycled the containers.
1. Additional amusing blind taste tests include wine, dog food vs pâté, and chocolate (data)—Valrhona is apparently overpriced.↩︎
2. An interaction here implies that the effect happens only with the combination of two variables. On a chemical level, what would be going on to make good-tasting mineral water combined with
good-tasting tea in distilled water turn into bad-tasting tea?↩︎
3. “MRL—Minimum Reporting Limit. Where available, MRLs reflect the Method Detection Limits (MDLs) set by the U.S. Environmental Protection Agency or the Detection Limits for Purposes of Reporting
(DLRs) set by the California Department of Health Services. These values are set by the agencies to reflect the minimum concentration of each substance that can be reliably quantified by
applicable testing methods, and are also the minimum reporting thresholds applicable to the Consumer Confidence…ND—Not detected at or above the MRL.” –Nestle Pure 2015↩︎
4. See also et al 2009, et al 2010, et al 2011, 2014, 2014, et al 2014.↩︎
5. This use of the posterior mean of the best arm distinguishes it from the simplest form of Thompson sampling for pairwise comparisons, which would be to simply Thompson sample each arm and compare
the arms with the two highest samples, which is called “double Thompson sampling” by 2016. Double Thompson sampling achieves good regret but like regular Thompson sampling for MABs, doesn’t come
with any proofs about best arm identification.↩︎
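For contrast, a drastically simplified sketch of the double Thompson sampling selection step (illustrative Python only; the published algorithm works over a posterior on the pairwise-preference matrix with confidence-bound pruning, which this omits):

```python
import numpy as np

def double_thompson_pair(posterior, rng):
    """Take one joint posterior draw and duel the two arms with the highest
    sampled values -- no posterior-mean incumbent, unlike the
    ordered-statistic variant used in the main experiment."""
    draw = posterior[rng.integers(posterior.shape[0])]
    top2 = np.argsort(draw)[-2:]
    return int(top2[1]), int(top2[0])  # (highest, runner-up)

# a degenerate posterior with a fixed ordering always duels the top two arms
post = np.tile(np.array([0.0, 1.0, 2.0]), (100, 1))
pair = double_thompson_pair(post, np.random.default_rng(0))
```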
Zichao Dong, Convex polytopes in non-elongated point sets in $\mathbb{R}^d$ - Discrete Mathematics Group
January 23 Tuesday @ 4:30 PM - 5:30 PM KST
Room B332, IBS (기초과학연구원)
For any finite point set $P \subset \mathbb{R}^d$, we denote by $\text{diam}(P)$ the ratio of the largest to the smallest distances between pairs of points in $P$. Let $c_{d, \alpha}(n)$ be the
largest integer $c$ such that any $n$-point set $P \subset \mathbb{R}^d$ in general position, satisfying $\text{diam}(P) < \alpha\sqrt[d]{n}$ (informally speaking, `non-elongated’), contains a convex
$c$-polytope. Valtr proved that $c_{2, \alpha}(n) \approx \sqrt[3]{n}$, which is asymptotically tight in the plane. We generalize the results by establishing $c_{d, \alpha}(n) \approx n^{\frac{d-1}
{d+1}}$. Along the way we generalize the definitions and analysis of convex cups and caps to higher dimensions, which may be of independent interest. Joint work with Boris Bukh.
On Beating the Hybrid Argument
Title: On Beating the Hybrid Argument
Publication Type: Journal Article
Year of Publication: 2012
Authors: Fefferman, B., Shaltiel, R., Umans, C., Viola, E.
Journal: Proceedings, ITCS
Volume: 9
Pages: 809-843
Date Published: 2013/11/14
The hybrid argument allows one to relate the distinguishability of a distribution (from uniform) to the predictability of individual bits given a prefix. The argument incurs a loss of a factor
k equal to the bit-length of the distributions: ε-distinguishability implies only ε/k-predictability. This paper studies the consequences of avoiding this loss – what we call “beating the hybrid
argument” – and develops new proof techniques that circumvent the loss in certain natural settings.
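For reference, the factor-k loss is the standard telescoping bound (sketched here in my own notation, not the paper's). Writing $H_i$ for the hybrid whose first $i$ bits come from the distribution $X$ and whose remaining $k-i$ bits are uniform, so that $H_0 = U_k$ and $H_k = X$:

```latex
\epsilon \le \bigl|\Pr[D(X)=1]-\Pr[D(U_k)=1]\bigr|
        \le \sum_{i=1}^{k} \bigl|\Pr[D(H_i)=1]-\Pr[D(H_{i-1})=1]\bigr|
```

By averaging, some index $i$ contributes at least $\epsilon/k$; since $H_{i-1}$ and $H_i$ differ only in bit $i$, that gap converts into predicting bit $i$ from the preceding prefix with advantage $\epsilon/k$.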
Specifically, we obtain the following results:
1. We give an instantiation of the Nisan-Wigderson generator (JCSS ’94) that can be broken by quantum computers, and that is o(1)-unpredictable against AC0. This is not enough
to imply indistinguishability via the hybrid argument because of the hybrid-argument loss; nevertheless, we conjecture that this generator indeed fools AC0, and we prove this
statement for a simplified version of the problem. Our conjecture implies the existence of an oracle relative to which BQP is not in the PH, a longstanding open problem.
2. We show that the “INW” generator by Impagliazzo, Nisan, and Wigderson (STOC ’94) with seed length O(log n log log n) produces a distribution that is 1/log n-unpredictable
against poly-logarithmic width (general) read-once oblivious branching programs. Thus avoiding the hybrid-argument loss would lead to a breakthrough in generators against
small space.
3. We study pseudorandom generators obtained from a hard function by repeated sampling. We identify a property of functions, “resamplability,” that allows us to beat the
hybrid argument, leading to new pseudorandom generators for AC0[p] and similar classes. Although the generators have sub-linear stretch, they represent the best-known
generators for these classes.
Thus we establish that “beating” or bypassing the hybrid argument would have two significant
consequences in complexity, and we take steps toward that goal by developing techniques that
indeed beat the hybrid argument in related (but simpler) settings, leading to best-known PRGs
for certain complexity classes.
URL http://users.cms.caltech.edu/~umans/papers/FSUV10.pdf
RSA_check_key — validate private RSA keys
#include <openssl/rsa.h>
int RSA_check_key(RSA *rsa);
This function validates RSA keys. It checks that p and q are in fact prime, and that n = p*q.
It also checks that d*e = 1 mod ((p-1)*(q-1)), and that dmp1, dmq1 and iqmp are set correctly or are NULL.
As such, this function can not be used with any arbitrary RSA key object, even if it is otherwise fit for regular RSA operation. See NOTES for more information.
RSA_check_key() returns 1 if rsa is a valid RSA key, and 0 otherwise. -1 is returned if an error occurs while checking the key.
If the key is invalid or an error occurred, the reason code can be obtained using ERR_get_error(3).
This function does not work on RSA public keys that have only the modulus and public exponent elements populated. It performs integrity checks on all the RSA key material, so the RSA key structure
must contain all the private key data too.
Unlike most other RSA functions, this function does not work transparently with any underlying ENGINE implementation because it uses the key data in the RSA structure directly. An ENGINE
implementation can override the way key data is stored and handled, and can even provide support for HSM keys - in which case the RSA structure may contain no key data at all! If the ENGINE in
question is only being used for acceleration or analysis purposes, then in all likelihood the RSA key data is complete and untouched, but this can't be assumed in the general case.
A method of verifying the RSA key using opaque RSA API functions might need to be considered. Right now RSA_check_key() simply uses the RSA structure elements directly, bypassing the RSA_METHOD table
altogether (and completely violating encapsulation and object-orientation in the process). The best fix will probably be to introduce a "check_key()" handler to the RSA_METHOD function table so that
alternative implementations can also provide their own verifiers.
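A minimal usage sketch (illustrative only; compile with -lcrypto, and note that RSA_generate_key() is the historical key-generation interface contemporary with this function):

```c
#include <stdio.h>
#include <openssl/rsa.h>
#include <openssl/err.h>

int main(void)
{
    /* Generate a throwaway 2048-bit key and validate it. */
    RSA *rsa = RSA_generate_key(2048, RSA_F4, NULL, NULL);
    if (rsa == NULL) {
        fprintf(stderr, "key generation failed\n");
        return 1;
    }
    switch (RSA_check_key(rsa)) {
    case 1:
        printf("key is valid\n");
        break;
    case 0:
        printf("key is invalid\n");
        break;
    default: /* -1: an error occurred while checking */
        fprintf(stderr, "error while checking: %lu\n", ERR_get_error());
        break;
    }
    RSA_free(rsa);
    return 0;
}
```

Because the key was just generated in full, all private components are populated, so the restriction described in NOTES does not apply here.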
rsa(3), ERR_get_error(3)
RSA_check_key() appeared in OpenSSL 0.9.4.
BECE 2019 Mathematics Questions and Answers
The Basic Education Certificate Examination, also known as the BECE, is an examination of great importance for students in many West African countries, especially Ghana and Nigeria.
Mathematics is one of the fundamental disciplines, and due to the technical nature of the subject, it frequently presents a difficulty to a large number of candidates.
Being able to prepare properly for the BECE Mathematics exam can have a significant impact on a student’s performance. It is therefore vital to understand the kinds of questions and answers asked on past exams, such as the BECE 2019 Mathematics paper. The purpose of this post is to give students a full understanding of the questions and answers from the BECE 2019 Mathematics exam, along with techniques that will help them approach exam questions with confidence.
Overview of the BECE 2019 Mathematics Exam
The BECE 2019 Mathematics paper assessed students’ understanding of various mathematical concepts, ranging from arithmetic to geometry and algebra. It was structured into two sections: Section A
contained objective questions, while Section B focused on more elaborate, problem-solving questions. Section A required students to select the correct option from multiple choices, while Section B
demanded students to demonstrate their working process and logical reasoning.
This structure tests a student’s ability to think analytically and solve complex problems. Section A generally tests fundamental concepts, while Section B requires a deeper understanding and
problem-solving skills. Reviewing both sections can help students grasp the range of topics and the expected level of difficulty.
BECE Past Questions and Answers
Key Topics Covered in the 2019 BECE Mathematics Paper
The BECE 2019 Mathematics paper included topics across multiple branches of mathematics. Key topics included:
• Arithmetic and Number Theory – These questions required students to perform calculations with whole numbers, fractions, percentages, ratios, and proportions. Understanding the foundational
principles behind numbers and operations was essential for success.
• Algebraic Expressions and Equations – Algebra questions ranged from simple equations to more complex expressions requiring simplification or factorization. Students needed to understand
variables and constants and know how to solve equations for unknown values.
• Geometry and Measurement – Geometry questions involved calculating perimeters, areas, and volumes. Students were tested on shapes, sizes, angles, and spatial understanding. Measurement also
included units of measure and converting between them.
• Statistics and Probability – This section focused on interpreting data, calculating averages, and understanding probability. Students needed to analyze data sets, find the mean, median, and
mode, and determine the likelihood of various outcomes.
• Sets and Venn Diagrams – A common topic in the BECE exams, sets required students to understand the basic principles of union, intersection, and complement of sets. Venn Diagrams were also
included to visually represent set relationships.
By focusing on these key areas, students can identify and strengthen areas that might need additional practice.
Sample BECE 2019 Mathematics Questions and Answers
To better understand the BECE 2019 Mathematics paper, let’s look at a few sample questions and answers from each major topic.
Example Question from Arithmetic and Number Theory
Question: What is the LCM of 12, 18, and 24?
Answer: To find the least common multiple (LCM), list the multiples of each number and identify the smallest common multiple. The LCM of 12, 18, and 24 is 72.
This type of question tests the student’s understanding of factors and multiples. Knowing how to find the LCM helps in tackling more complex problems in both arithmetic and algebra.
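To make the procedure concrete, here is a short Python sketch (the helper `lcm` is our own illustration, not part of the exam):

```python
from math import gcd
from functools import reduce

def lcm(*numbers):
    """Least common multiple of any count of positive integers."""
    # lcm(a, b) = a * b / gcd(a, b), folded across all arguments
    return reduce(lambda a, b: a * b // gcd(a, b), numbers)

print(lcm(12, 18, 24))  # → 72
```

The same function works for any list of numbers, which is handy for checking hand calculations.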
Example Question from Algebraic Expressions and Equations
Question: Simplify 3x + 2x - 4 = 0 and solve for x.
Answer: Combine like terms: 5x - 4 = 0. Add 4 to both sides: 5x = 4, then divide by 5: x = 4/5.
Algebra questions often require students to combine like terms or isolate variables. Practicing such equations helps in mastering algebraic manipulation.
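A tiny Python check of the same manipulation (the helper `solve_linear` is hypothetical, added for illustration):

```python
from fractions import Fraction

def solve_linear(a, b):
    """Solve a*x + b = 0 exactly, assuming a != 0."""
    return Fraction(-b, a)

# 3x + 2x - 4 = 0 combines to 5x - 4 = 0, i.e. a = 5, b = -4
print(solve_linear(5, -4))  # → 4/5
```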
Example Question from Geometry and Measurement
Question: A rectangle has a length of 10 cm and a width of 6 cm. Calculate its perimeter.
Answer: The perimeter of a rectangle is given by 2(l + w). Here, 2(10 + 6) = 2 × 16 = 32 cm.
Geometry questions typically assess the understanding of basic formulas for shapes. Being familiar with perimeter and area calculations is essential for geometry problems.
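The perimeter formula translates directly into code; a minimal sketch (function name is our own):

```python
def rectangle_perimeter(length, width):
    """Perimeter of a rectangle: 2 * (l + w)."""
    return 2 * (length + width)

print(rectangle_perimeter(10, 6))  # → 32
```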
Example Question from Statistics and Probability
Question: What is the mean of the following numbers: 5, 10, 15, 20, and 25?
Answer: To find the mean, add the numbers and divide by the count. Here, (5 + 10 + 15 + 20 + 25) ÷ 5 = 75 ÷ 5 = 15.
In statistics, students often need to calculate mean, median, and mode, which are essential skills in data interpretation.
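Python’s standard library can verify such calculations; a short sketch (the sample data used for the mode is our own, since the exam list has no repeated value):

```python
import statistics

data = [5, 10, 15, 20, 25]
print(sum(data) / len(data))          # mean: 75 / 5 → 15.0
print(statistics.median(data))        # middle value → 15
print(statistics.mode([2, 3, 3, 7]))  # most frequent value → 3
```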
Example Question from Sets and Venn Diagrams
Question: In a group of students, 30 study Mathematics, 25 study Science, and 15 study both subjects. How many study only Mathematics?
Answer: The number of students who study only Mathematics is 30 - 15 = 15.
Sets questions test students’ ability to work with relationships and apply logical reasoning. Practicing with Venn Diagrams can make these questions more approachable.
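The inclusion-exclusion reasoning can be checked with Python sets (the student ids below are made up purely for illustration):

```python
# Count form: students studying only Mathematics
only_math = 30 - 15
print(only_math)  # → 15

# Set form with hypothetical student ids:
# ids 0..29 study Mathematics, ids 15..39 study Science,
# so ids 15..29 (15 students) study both subjects
maths = set(range(30))
science = set(range(15, 40))
print(len(maths & science))  # both subjects → 15
print(len(maths - science))  # only Mathematics → 15
```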
BECE Past Questions & Answers
Tips for Preparing for BECE Mathematics Exams
Studying past questions like those from BECE 2019 is an effective strategy. Here are some additional tips to help with preparation:
• Understand Key Concepts – Familiarity with fundamental principles in each topic area is essential. Don’t just memorize formulas; understand the reasoning behind them.
• Practice Regularly – Frequent practice with past questions will improve speed and accuracy. It helps students to familiarize themselves with the question format and difficulty level.
• Focus on Weak Areas – Identify topics where you face challenges and focus more time on those areas. By strengthening weaknesses, you can improve your overall performance.
• Work on Time Management – During exams, time is of the essence. Practicing under timed conditions will help you allocate time wisely during the exam.
• Use Study Aids – Many students find it helpful to use study aids like flashcards for formulas or even study groups to discuss problem-solving strategies.
BECE Questions and Answers
Preparing for the BECE Mathematics exam requires a strong grasp of fundamental concepts, consistent practice, and familiarity with past questions. By studying past papers like the BECE 2019 questions
and answers, students gain insights into common question formats, the exam structure, and the expected difficulty level. With dedication and strategic preparation, students can approach the BECE
Mathematics exam with confidence, ready to achieve high marks and secure a bright future in their academic journey.
Refer to the Exhibit.
In the Exhibit, the table shows the values for the input Boolean attributes "A", "B", and "C". It also shows the values for the output attribute "class". Which decision tree is valid for the data?
Which method is used to solve for coefficients bO, b1, ... bn in your linear regression model:
Question-34. Stories appear on the front page of Digg as they are "voted up" (rated positively) by the community. As the community becomes larger and more diverse, the promoted stories can better reflect the average interest of the community members. Which of the following techniques is used to make such a recommendation engine?
What is the probability that the total of two dice will be greater than 8, given that the first die is a 6?
Clustering is a type of unsupervised learning with the following goals
Suppose you have made a model for the rating system, which rates between 1 to 5 stars. And you calculated that RMSE value is 1.0 then which of the following is correct
In which phase of the analytic lifecycle would you expect to spend most of the project time?
Classification and regression are examples of___________.
If E1 and E2 are two events, how do you represent the conditional probability given that E2 occurs given that E1 has occurred?
Suppose that we are interested in the factors that influence whether a political candidate wins an election. The outcome (response) variable is binary (0/1); win or lose. The predictor variables of
interest are the amount of money spent on the campaign, the amount of time spent campaigning negatively and whether or not the candidate is an incumbent.
Above is an example of
Which of the following could be features?
A researcher is interested in how variables, such as GRE (Graduate Record Exam scores), GPA (grade point average) and prestige of the undergraduate institution, affect admission into graduate school.
The response variable, admit/don't admit, is a binary variable.
Above is an example of
What describes a true limitation of Logistic Regression method?
You are asked to create a model to predict the total number of monthly subscribers for a specific magazine. You are provided with 1 year's worth of subscription and payment data, user demographic
data, and 10 years worth of content of the magazine (articles and pictures). Which algorithm is the most appropriate for building a predictive model for subscribers?
What are the key outcomes of the successful analytical projects?
Spam filtering of the emails is an example of
You are using an approach to classification where you teach the agent not by giving explicit categorizations, but by using some sort of reward system to indicate success, where agents might be rewarded for doing certain actions and punished for doing others. Which kind of learning is this?
Simultaneous equations modulo two variables
simultaneous equations modulo two variables Related topics: trivias in math
positive and negative worksheets
transition to algebra test
ratio formula
ti 83 calculator programs quadratic formula text
trinomial solution
writing an expression containing rational exponents as a single product
Author Message
banhilspr Posted: Monday 01st of Nov 11:01
Greetings, I am a high-school student and at the end of the term I will have my exams in algebra. I was never a math genius, but this year I am worried that I won't be able to finish this course. I came across simultaneous equations modulo two variables and some other math problems that I can’t understand. The following topics really made me panic: linear algebra and 3x3 systems of equations. Getting a tutor is not possible for me, because I don't have that kind of money. Please help me!!
Registered: 01.01.2005
Back to top
ameich Posted: Wednesday 03rd of Nov 09:59
That’s exactly the kind of problem I had encountered. Can you elaborate a bit more on what your problems are with simultaneous equations modulo two variables? Yes, getting an affordable tutor suited to your needs is not easy nowadays. I too went on to do exactly what you are doing now. But then my hunt ended when I found out that there are a number of programs in algebra. They come at an affordable price too. I was really thrilled with it. Maybe this is just what suits you. What do you think about this?
Registered: 21.03.2005
From: Prague, Czech
Back to top
Dolknankey Posted: Friday 05th of Nov 07:38
I fully agree with what was just said . Algebrator has always come to my rescue, be it a homework or be it my preparation for the midterm exams, Algebrator has always helped
me do well in math . It really helped me on topics like powers, factoring polynomials and y-intercept. I would highly recommend this software.
Registered: 24.10.2003
From: Where the trout
streams flow and the air
is nice
Back to top
medaxonan Posted: Friday 05th of Nov 19:55
Thanks for the detailed information , this seems awesome. I wanted something exactly like Algebrator, because I don't want a software which only solves the exercise and shows
the final result, I want something that can actually explain me how the exercise needs to be solved. That way I can learn it and next time solve it without any help , not
just copy the results. Where can I find the program ?
Registered: 24.08.2003
Back to top
sxAoc Posted: Saturday 06th of Nov 14:59
I remember having often faced problems with algebra formulas, synthetic division and lcf. A really great piece of algebra software is Algebrator. By simply typing in a homework problem, a step-by-step solution appears at a click on Solve. I have used it through many algebra classes – Algebra 1, College Algebra and Basic Math. I greatly recommend the program.
Registered: 16.01.2002
From: Australia
Back to top
Mov Posted: Saturday 06th of Nov 18:28
Don’t worry my friend. As I said, it shows the solution for the problem, so you won’t just have to copy the answer; it makes you understand how the software came up with the answer. Just go to this site https://softmath.com/about-algebra-help.html and prepare to learn and solve quicker.
Registered: 15.05.2002
Back to top
What is Pi123: Benefits, Features, Security Concerns
In the fields of mathematics and technology, Pi123 is a relatively recent term. It includes a range of ideas and uses, such as a mathematical extension of the well-known pi (π) and an online calculator that can compute pi to any desired number of decimal places.
Pi123 as a Mathematical Extension:
By expanding on the idea of pi (π), the ratio of a circle’s circumference to its diameter, Pi123 explores the mysterious realm of mathematics. Pi123 investigates the mathematical characteristics and
ramifications of this expanded concept of pi, which may provide fresh perspectives on the nature of circles and their uses in geometry and other fields.
Pi123 as an Online Pi Calculator:
Additionally, Pi123 is an effective and user-friendly web application that allows users to compute pi to whatever number of decimal places they choose. This tool serves a broad spectrum of users,
including instructors mentoring the future generation of mathematicians and students delving into the depths of mathematics.
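To illustrate what such a calculator does under the hood, here is a small Python sketch (our own illustration, not the actual Pi123 implementation) that computes pi to a requested number of decimal places using Machin's formula, pi = 16·arctan(1/5) − 4·arctan(1/239):

```python
from decimal import Decimal, getcontext

def compute_pi(digits):
    """Return pi as a string truncated to `digits` decimal places."""
    getcontext().prec = digits + 10          # extra guard digits
    eps = Decimal(10) ** -(digits + 5)       # stop once terms are this small

    def arctan_inv(x):
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1))
        power = Decimal(1) / x
        total = Decimal(0)
        k = 0
        while power > eps:
            term = power / (2 * k + 1)
            total += term if k % 2 == 0 else -term
            power /= x * x
            k += 1
        return total

    pi = 16 * arctan_inv(Decimal(5)) - 4 * arctan_inv(Decimal(239))
    return str(pi)[: digits + 2]             # "3." plus the requested decimals

print(compute_pi(15))  # → 3.141592653589793
```

Raising `digits` only increases the working precision and the number of series terms, so the same sketch scales to many more decimal places.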
Pi123 in the Context of Pi Network:
Regarding the cryptocurrency Pi Network, Pi123 is an example of a community-driven project to advance and improve the Pi Network ecosystem. This project includes some activities, including creating
teaching materials, offering technical assistance, and cultivating a thriving Pi Network community.
Overall Significance of Pi123:
Pi123 is important in the fields of technology and mathematics. It expands our comprehension of pi and its ramifications mathematically. It democratizes access to accurate pi calculations as an
internet utility. It helps the cryptocurrency flourish and be adopted in the context of Pi Network.
Pi123’s future is yet unknown, although it might see more developments in the areas of mathematical research, technology applications, and community involvement inside the Pi Network environment.
Benefits of Pi123
Pi123 offers advantages in some fields, including technology, community involvement, and mathematics.
Mathematical Benefits:
Enhanced Understanding of Pi: Pi123 explores the idea of pi in greater detail, offering fresh perspectives on its characteristics and applications. This may result in a deeper comprehension of
mathematics and its applications.
Exploration of Uncharted Mathematical Territory: There are more opportunities for mathematical investigation when pi is extended beyond its conventional form. New mathematical relationships and
concepts may be found as a result of this.
Technological Benefits:
Precise Pi Calculations: An online calculator called Pi123 allows you to calculate pi to as many decimal places as you’d like. With the help of this program, users of all skill levels—from experts to
students—can obtain accurate pi values for a variety of applications.
Accessibility and Convenience: Pi123 is easily accessible to everyone with an internet connection because it is available online. This removes the requirement for sophisticated mathematical
operations or specialized calculators.
Educational Resource: Pi123 is a useful teaching tool that both teachers and students may use. It may be used to investigate the history of pi, show off the useful uses of pi computations, and teach
mathematical ideas.
Community Engagement Benefits:
Fostering a Pi Network Community: Pi123 plays a role in the expansion and advancement of the Pi Network community. It gives members of the Pi Network a forum for communication, teamwork, and
knowledge exchange.
Promoting Pi Network Awareness: Pi123 projects spread knowledge about the Pi Network, its objectives, and its possible influence on the cryptocurrency industry and other fields.
Empowering Pi Network Participants: Pi123 gives Pi Network users access to instructional materials, technical assistance, and chances to participate in the network’s development.
How to Set up and Use Pi123
Before you begin setting up and using Pi123, ensure you have the following:
1. A Raspberry Pi device (any model is compatible)
2. An SD card with at least 8GB of storage capacity
3. A computer with an internet connection
4. A Pi123 image file (available for download from the Pi123 website)
Download the Pi123 Image File: Get the compatible Pi123 image file for your Raspberry Pi model by going to the official Pi123 website.
Flash the Pi123 Image to the SD Card: The downloaded Pi123 image file should be written to the SD card using an appropriate SD card flashing tool. Before continuing, make sure the SD card is
formatted correctly.
Insert the SD Card into the Raspberry Pi: Place the SD card with the Pi123 image inside the Raspberry Pi apparatus.
Connect Power and Peripherals: Using the included power adapter, connect the Raspberry Pi to a power source. Attach any required peripherals to the Raspberry Pi’s relevant ports, such as a keyboard,
mouse, and display.
Boot Up the Raspberry Pi: Turn on the Raspberry Pi. The operating system on the Pi123 will begin to boot up.
Alternatives to Pi123
While Pi123 is an excellent resource for learning about Raspberry Pi and the pi idea, other options have distinct features and advantages. Here are some alternatives to think about:
Raspberry Pi Imager: The Raspberry Pi Foundation’s official utility offers an easy-to-use interface for flashing SD cards with Raspberry Pi operating system images. Pi123 is among the many Raspberry
Pi models and operating systems that it supports.
Etcher: Etcher is a cross-platform utility that may be used for flashing operating system images onto USB devices and SD cards. It is free and open-source. It works with many different operating
systems, including as Windows, macOS, and Linux.
Balena Etcher: A clone of Etcher called Balena Etcher is made especially for flashing operating system images onto Raspberry Pi hardware. Along with other capabilities like picture validation and
burning multiple photos to an SD card, it offers an easier-to-use interface.
NOOBS Lite: Lightweight and simple to use, NOOBS Lite is an operating system installer for the Raspberry Pi that comes with a number of well-known operating systems, such as LibreELEC, Ubuntu MATE,
and Raspbian. For novices who are unsure about which operating system to use, it is a suitable alternative.
PINN: The Raspberry Pi operating system known as PINN was created by the community with educators and kids in mind. It has several instructional tools, including games, tutorials, and programming exercises.
These are but a handful of the numerous substitutes for Pi123. The ideal choice for you will depend on your own requirements and tastes.
Challenges of Using Pi123
Being a relatively new platform, Pi123 has certain issues that might affect its adoption and usage. The following are some possible difficulties:
Limited Scope: Learning materials for Raspberry Pi and pi computations are the main areas of interest for Pi123. Because of this, those looking for more functionality or cross-platform compatibility
may find it less appealing.
Community Reliance: The development and upkeep of Pi123 mostly depend on community contributions. A sense of ownership and teamwork may be encouraged by this, but it may also result in shorter
development cycles and less technical help.
Long-Term Viability: Pi123’s capacity to draw in developers, draw in users, and obtain finance will determine its long-term success. It can find it difficult to stay relevant and sustainable if these
issues are not resolved.
Documentation and Tutorials: Pi123 may use more thorough tutorials and documentation to help new users and give detailed descriptions of all of its functions.
Integration with Other Tools: By integrating with current educational platforms and resources, Pi123 might improve its standing by making it simpler for teachers and students to use it in their daily lessons.
Mobile App Compatibility: Considering how common mobile devices are becoming, creating a mobile app for Pi123 might increase its user base and reach.
Troubleshooting and Error Handling: By offering more thorough troubleshooting instructions, error warnings, and support channels to help customers when they run across technical issues, Pi123 might
enhance its user experience.
Accessibility Features: Pi123 might improve accessibility by adding features like keyboard navigation, screen readers, and alternate text descriptions that accommodate users with different needs and abilities.
Regular Updates and Maintenance: Pi123 should continue to provide updates and bug fixes on a regular basis to guarantee peak performance, security, and interoperability with newer technologies.
Community Engagement and Feedback Mechanisms: By actively interacting with users, gathering their input, and incorporating their recommendations into future development plans, Pi123 might strengthen its community.
What is Pi123?
Pi123 is a relatively new idea that includes an online tool for computing pi to any desired number of decimal places, as well as a mathematical extension of the well-known pi (π). Additionally, it
alludes to a community-driven project to advance and improve the Pi Network environment.
What are the benefits of using Pi123?
Pi123 offers a range of benefits, including:
• Enhanced understanding of pi and its mathematical properties
• Precise pi calculations to any desired number of decimal places
• Accessibility and convenience through an online platform
• Educational resources for learning about mathematics and pi
• A vibrant community forum for interaction and collaboration
How do I set up and use Pi123?
A Raspberry Pi device, an SD card, a computer with an internet connection, and a Pi123 image file are required for Pi123 setup. The image file must be flashed to the SD card, the Raspberry Pi must be
inserted, and power and peripherals must be connected. The desktop interface of Pi123 allows you to access its functionality after it has been booted up.
What are some alternatives to Pi123?
Pi123 has a number of substitutes, such as Etcher, Balena Etcher, NOOBS Lite, PINN, and Raspberry Pi Imager. These substitutes address a variety of user demands and preferences by providing varying
features and degrees of complexity.
What are the challenges of using Pi123?
Pi123 has a number of issues, including its narrow focus, its dependence on community assistance, its long-term sustainability, and the need for more thorough tutorials and documentation. It may also
gain from frequent updates, enhanced troubleshooting, accessibility features, mobile app compatibility, interaction with other tools, and more community involvement.
What is the future of Pi123?
Pi123’s future rests on its capacity to overcome obstacles, attract and retain users, and obtain funding. Should it be able to get over these challenges, it might end up being a useful resource for learning about pi, mathematics, and the Raspberry Pi.
Pi123 is an innovative idea that has the potential to significantly advance technology, education, and mathematics. It is a useful tool for anyone who wants to learn more about pi and its
applications since it offers accurate pi calculations, instructional materials, and a lively community platform. Nevertheless, Pi123 has certain obstacles to overcome before it can be widely used and
viable in the long run. Maintaining its success will depend on addressing these issues, which include broadening its reach, improving community support, and making sure that updates are provided
regularly. All things considered, Pi123 offers a fascinating fusion of community involvement, technical innovation, and mathematical discovery. It can grow into an effective instrument for
cooperation, learning, and discovery with more development and improvement.
Sophie Germain Research Paper
Sophie Germain was a French mathematician, philosopher, and physicist born during the revolutionary period. At that time, women did not have the same rights as men. Her family was wealthy, but she had to work harder to be recognized as a mathematician because she was a girl. She studied acoustics, elasticity, and the theory of numbers. Sophie struggled in these pursuits because of the social prejudices of her time. Despite the many challenges she faced, Sophie became very well known as a mathematician.
Sophie Germain was born in Paris, France on April 1, 1776. Her parents were Ambroise-Francois and Marie Germain. Sophie had two sisters, Marie-Madeline and Angelique-Ambroise Germain. As a child she read a wide variety of books in her father’s library. Sophie taught herself the Greek and Latin languages and was able to read Isaac Newton’s and Leonhard Euler’s work. Her parents did not approve of her learning mathematics, but she loved it.
She tried two more times and on her third attempt won, becoming the first woman to win a prize from the Paris Academy of Sciences. She also became interested in the study of number theory and prime numbers. Sophie wrote a letter to Carl Friedrich Gauss in 1815, telling him that number theory was her preferred field, and outlined a strategy for Fermat’s Last Theorem. Gauss never answered her letter. Germain tried very hard to become known for her work. Not only was Germain a mathematician, but she also studied philosophy and psychology. “She classified the facts by generalizing them into laws as foundation of science of psychology and sociology,” stated the author from Famous Mathematician. Her study in philosophy was highly praised by Auguste Comte. Her nephew Lherbette published her writings later on. Although she studied many things, mathematics is what she is most known for.
Bridges in the graph

Those edges in a graph whose removal results in two or more connected components are called bridges.

Consider an example graph in which:
the edge between 4 and 5 is a bridge,
the edge between 5 and 6 is a bridge,
and the edge between 8 and 10 is a bridge.
We will be using two things in the algorithm (the traversal itself is a depth-first search):

Time of insertion of the node
Lowest time of insertion of the node

Time complexity: O(n + e)
Space complexity: O(2n) ~ O(n)

We will use Tarjan's algorithm for finding the bridges in the graph. We keep two arrays:

timeOfInsertion[], lowestTimeOfInsertion[]

Note: timeOfInsertion[] keeps track of the step/time at which a node was first visited (we use DFS for traversing the nodes).

lowestTimeOfInsertion[] is updated as we visit adjacent nodes; when a node is first visited, its timeOfInsertion and lowestTimeOfInsertion are equal. When we encounter a neighbour B of the current node A that is already visited and is not A's parent (a back edge), we update lowestTimeOfInsertion[A] = min(lowestTimeOfInsertion[A], timeOfInsertion[B]). Likewise, when the DFS call for a child finishes, the parent takes the minimum of its own value and the child's.

At the end of a DFS call we check whether the edge node----adjacentNode can be removed without splitting the graph into components. If lowestTimeOfInsertion[adjacentNode] <= timeOfInsertion[node], then adjacentNode can still reach node (or one of node's ancestors) through some other path, so node----adjacentNode is not a bridge; otherwise it is a bridge.

We do the same for the rest of the nodes and so determine all the bridges in the given graph.
class Solution {
    public List<List<Integer>> criticalConnections(int n, List<List<Integer>> connections) {
        int[] timeOfInsertion = new int[n];       // dfs insertion time of each node
        int[] lowestTimeOfInsertion = new int[n]; // min insertion time reachable via the subtree plus one back edge
        // creating the adjacency list
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (List<Integer> list : connections) {
            // since the edges are undirected, add both directions
            adj.get(list.get(0)).add(list.get(1));
            adj.get(list.get(1)).add(list.get(0));
        }
        int[] visited = new int[n];
        List<List<Integer>> bridges = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            if (visited[i] == 0) {
                dfs(i, -1, adj, visited, bridges, 0, timeOfInsertion, lowestTimeOfInsertion);
            }
        }
        return bridges;
    }

    private void dfs(int node, int parent, List<List<Integer>> adj, int[] visited,
                     List<List<Integer>> bridges, int time, int[] t, int[] low) {
        visited[node] = 1;
        t[node] = low[node] = time++;
        for (int adjNode : adj.get(node)) {
            if (adjNode == parent) continue;
            if (visited[adjNode] == 0) {
                dfs(adjNode, node, adj, visited, bridges, time, t, low);
                // once the dfs of adjNode is done, node updates its own lowestTimeOfInsertion
                low[node] = Integer.min(low[node], low[adjNode]);
                // if adjNode cannot reach node (or an ancestor of node) without this edge, it is a bridge
                if (t[node] < low[adjNode]) {
                    List<Integer> edge = new ArrayList<>();
                    edge.add(node);
                    edge.add(adjNode);
                    bridges.add(edge);
                }
            } else {
                // back edge to an already visited node: take its insertion time
                low[node] = Integer.min(low[node], t[adjNode]);
            }
        }
    }
}
For more detailed explanation refer
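To complement the Java solution, the same bridge search can be sanity-checked with a compact Python version (an illustrative sketch added here, not from the original post):

```python
def find_bridges(n, edges):
    """Tarjan-style bridge finding: a tree edge (u, v) is a bridge iff
    tin[u] < low[v], i.e. v's subtree cannot climb back above u."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    tin = [0] * n            # DFS insertion time
    low = [0] * n            # lowest insertion time reachable, ignoring the parent edge
    visited = [False] * n
    bridges = []
    timer = [1]              # list so the nested function can mutate it

    def dfs(node, parent):
        visited[node] = True
        tin[node] = low[node] = timer[0]
        timer[0] += 1
        for nxt in adj[node]:
            if nxt == parent:
                continue
            if not visited[nxt]:
                dfs(nxt, node)
                low[node] = min(low[node], low[nxt])
                if tin[node] < low[nxt]:      # nxt cannot reach back above node
                    bridges.append((node, nxt))
            else:
                low[node] = min(low[node], low[nxt])

    for i in range(n):
        if not visited[i]:
            dfs(i, -1)
    return bridges

# Cycle 0-1-2-0 plus a pendant edge 1-3: only 1---3 is a bridge.
print(find_bridges(4, [(0, 1), (1, 2), (2, 0), (1, 3)]))  # [(1, 3)]
```

Removing any edge of the cycle leaves the graph connected, but removing 1---3 disconnects node 3, which is exactly what the `tin`/`low` comparison detects.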
|
{"url":"https://dev.to/prashantrmishra/brides-in-the-graph-44dc","timestamp":"2024-11-12T07:39:23Z","content_type":"text/html","content_length":"92177","record_id":"<urn:uuid:5e914f01-ea5b-4d0f-bf92-3fa9ddee3187>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00541.warc.gz"}
|
The n-Category Café
March 31, 2016
Foundations of Mathematics
Posted by John Baez
Roux Cody recently posted an interesting article complaining about FOM — the foundations of mathematics mailing list:
Cody argued that type theory and especially homotopy type theory don’t get a fair hearing on this list, which focuses on traditional set-theoretic foundations.
This will come as no surprise to people who have posted about category-theoretic foundations on this list. But the discussion became more interesting when Harvey Friedman, the person Cody was
implicitly complaining about, joined in. Friedman is a famous logician who posts frequently on Foundations of Mathematics. He explained his “sieve” — his procedure for deciding what topics are worth
studying further — and why this sieve has so far filtered out homotopy type theory.
This made me think — and not for the first time — about why different communities with different attitudes toward “foundations” have trouble understanding each other. They argue, but the arguments
aren’t productive, because they talk past each other.
Posted at 7:43 PM UTC |
Followups (84)
March 24, 2016
$\mathrm{E}_8$ Is the Best
Posted by John Baez
As you may have heard, Maryna Viazovska recently proved that if you center spheres at the points of the $\mathrm{E}_8$ lattice, you get the densest packing of spheres in 8 dimensions:
• Maryna S. Viazovska, The sphere packing problem in dimension 8, 14 March 2016.
The $\mathrm{E}_8$ lattice is
$\mathrm{E}_8 = \left\{x \in \mathbb{Z}^8 \cup (\mathbb{Z}+ \frac{1}{2})^8 \; : \;\, \sum_{i = 1}^8 x_i \in 2 \mathbb{Z} \right\}$
and the density of the packing you get from it is
$\frac{\pi^4}{2^4 \cdot 4!} \approx 0.25367$
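As a quick numerical sanity check of the quoted value (a check added here, not part of the post):

```python
import math

# Density of the E8 sphere packing: pi^4 / (2^4 * 4!)
density = math.pi ** 4 / (2 ** 4 * math.factorial(4))
print(round(density, 5))  # 0.25367
```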
Using ideas in her paper, Viazovska teamed up with some other experts and proved that the Leech lattice gives the densest packing of spheres in 24 dimensions:
• Henry Cohn, Abhinav Kumar, Stephen D. Miller, Danylo Radchenko and Maryna Viazovska, The sphere packing problem in dimension 24, 21 March 2016.
The densest packings of spheres are only known in dimensions 0, 1, 2, 3, and now 8 and 24. Good candidates are known in many other low dimensions: the problem is proving things, and in particular
ruling out the huge unruly mob of non-lattice packings.
For example, in 3 dimensions there are uncountably many non-periodic packings of spheres that are just as dense as the densest lattice packing! There are also infinitely many periodic but non-lattice
packings that are just as dense.
In 9 dimensions, the densest known packings form a continuous family! Only one comes from a lattice. The others are obtained by moving half the spheres relative to the other half. They’re called the
‘fluid diamond packings’.
In high dimensions, some believe the densest packings will be periodic but non-lattice.
For a friendly introduction to Viazovska’s discoveries, see:
• Gil Kalai, A breakthrough by Maryna Viazovska leading to the long awaited solutions for the densest packing problem in dimensions 8 and 24, Combinatorics and More, 23 March 2016.
I’m no expert on this stuff, but I’ll try to get into a tiny bit more detail of how the proofs work.
Posted at 4:58 PM UTC |
Followups (14)
March 23, 2016
The Involute of a Cubical Parabola
Posted by John Baez
In his remarkable book The Theory of Singularities and its Applications, Vladimir Arnol’d claims that the symmetry group of the icosahedron is secretly lurking in the problem of finding the shortest
path from one point in the plane to another while avoiding some obstacles that have smooth boundaries.
Arnol’d nicely expresses the awe mathematicians feel when they discover a phenomenon like this:
Thus the propagation of waves, on a 2-manifold with boundary, is controlled by an icosahedron hidden at an inflection point at the boundary. This icosahedron is hidden, and it is difficult to
find it even if its existence is known.
I would like to understand this!
I think the easiest way for me to make progress is to solve this problem posed by Arnol’d:
Puzzle. Prove that the generic involute of a cubical parabola has a cusp of order 5/2 on the straight line tangent to the parabola at the inflection point.
There’s a lot of jargon here! Let me try to demystify it. (I don’t have the energy now to say how the symmetry group of the icosahedron gets into the picture, but it’s connected to the ‘5’ in the
cusp of order 5/2.)
Posted at 4:50 PM UTC |
Followups (21)
March 21, 2016
Prime Numbers and the Riemann Hypothesis
Posted by John Baez
I hope this great book stays open-access, but I urge everyone to download a free copy now:
It’s the best elementary introduction to the connection between prime numbers and zeros of the Riemann zeta function. Fun, fun, fun!
Posted at 5:12 AM UTC |
Followups (3)
Coalgebraic Geometry
Posted by Qiaochu Yuan
Hi everyone! As some of you may remember, some time ago I was invited to post on the Café, but regrettably I never got around to doing so until now. Mainly I thought that the posts I wanted to write
would be old hat to Café veterans, and also I wasn’t used to the interface.
Posted at 5:00 AM UTC |
Followups (6)
March 19, 2016
The Most Common Prime Gaps
Posted by John Baez
Twin primes are much beloved. But a computer search has shown that among numbers less than a trillion, the most common distance between successive primes is 6. It seems this goes on for quite a while.
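That claim is easy to probe at a much smaller scale (a sketch added here: it only sieves up to one million, far below the trillion quoted above, but 6 is already the most common gap there):

```python
from collections import Counter

def most_common_gap(limit):
    """Sieve of Eratosthenes up to `limit`, then histogram the gaps
    between successive primes and return the most frequent gap."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    primes = [i for i in range(2, limit + 1) if is_prime[i]]
    gaps = Counter(b - a for a, b in zip(primes, primes[1:]))
    return gaps.most_common(1)[0][0]

print(most_common_gap(1_000_000))  # 6
```

At very small cutoffs the twin gap 2 still wins; 6 takes over well before a million.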
Posted at 6:10 AM UTC |
Followups (12)
March 15, 2016
Weirdness in the Primes
Posted by John Baez
What percent of primes end in a 7? I mean when you write them out in base ten.
Well, if you look at the first hundred million primes, the answer is 25.000401%. That’s very close to 1/4. And that makes sense, because there are just 4 digits that a prime can end in, unless it’s
really small: 1, 3, 7 and 9.
So, you might think the endings of prime numbers are random, or very close to it. But 3 days ago two mathematicians shocked the world with a paper that asked some other questions, like this:
If you have a prime that ends in a 7, what’s the probability that the next prime ends in a 7?
I would have expected the answer to be close to 25%. But these mathematicians, Robert Lemke Oliver and Kannan Soundararajan, actually looked. And they found that among the first hundred million primes, the answer is just 17.757%.
So if a prime ends in a 7, it seems to somehow tell the next prime "I'd rather you didn't end in a 7. I just did that."
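The effect can be reproduced at a much smaller scale (a sketch added here; it sieves only up to one million rather than examining the first hundred million primes, but the deficit below 25% is already clear):

```python
def seven_follows_seven(limit):
    """Among primes up to `limit` that end in 7, return the fraction
    whose successor prime also ends in 7."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    primes = [i for i in range(2, limit + 1) if is_prime[i]]
    sevens = followed = 0
    for a, b in zip(primes, primes[1:]):
        if a % 10 == 7:
            sevens += 1
            if b % 10 == 7:
                followed += 1
    return followed / sevens

print(f"{seven_follows_seven(1_000_000):.3f}")  # noticeably below the naive 0.25
```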
Posted at 2:52 AM UTC |
Followups (19)
March 10, 2016
Category Theory Course Notes
Posted by John Baez
Here are the notes from a basic course on category theory:
Unlike my Fall 2015 seminar, this quarter I tried to give a systematic introduction to the subject. However, many proofs (and additional theorems) were offloaded to another more informal seminar, for
which notes are not available. So, many proofs here are left as ‘exercises for the reader’.
Posted at 9:25 PM UTC |
Followups (4)
March 9, 2016
Category Theory Seminar Notes
Posted by John Baez
Here are some students’ notes from my Fall 2015 seminar on category theory. The goal was not to introduce technical concepts from category theory—I started that in the next quarter. Rather, I tried
to explain how category theory unifies mathematics and makes it easier to learn. We began with a study of duality, and then got into a bit of Galois theory and Klein geometry:
Posted at 6:17 PM UTC |
Followups (5)
March 4, 2016
Hyperbolic Kac–Moody Groups
Posted by John Baez
Just as the theory of finite-dimensional simple Lie algebras is connected to differential geometry and physics via the theory of simple Lie groups, the theory of affine Lie algebras is connected to
differential geometry and physics by the realization that these are the Lie algebras of central extensions of loop groups:
• Andrew Pressley and Graeme Segal, Loop Groups, Oxford U. Press, Oxford, 1988.
• Graeme Segal, Loop groups.
Indeed it’s not much of an exaggeration to say that central extensions of loop groups are to strings as simple Lie groups are to particles!
What comes next?
Posted at 6:47 PM UTC |
Followups (4)
|
{"url":"https://classes.golem.ph.utexas.edu/category/2016/03/index.shtml","timestamp":"2024-11-04T05:32:18Z","content_type":"application/xhtml+xml","content_length":"82292","record_id":"<urn:uuid:204658c5-a655-4c18-a80a-9dc02493bb0b>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00704.warc.gz"}
|
Plane Curves and Space Curves
Plane Curves and Space Curves
We are about to look at the definitions of some types of curves, but before we do, we must first look at the definition of a smooth curve.
Definition: If $C$ is a curve given by the vector-valued function $\vec{r}(t)$ on an interval $I$, then $C$ is said to be Smooth if:
a) $\vec{r'}(t)$ is continuous on $I$.
b) $\vec{r'}(t) \neq \vec{0}$ except possibly at the endpoints of $I$.
Geometrically, a curve $C$ is smooth if it does not have any sharp points, kinks, or cusps. The following image represents a smooth curve in contrast with a curve that is not smooth.
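A standard illustrative example (added here; it is not in the original text) of a continuous curve that fails condition (b):

```latex
\vec{r}(t) = (t^3, t^2), \qquad \vec{r}\,'(t) = (3t^2, 2t), \qquad \vec{r}\,'(0) = \vec{0}
```

The trace of this curve is $y = x^{2/3}$, which has a cusp at the origin, so $C$ is not smooth at $t = 0$ even though $\vec{r}\,'$ is continuous there.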
Definition: Let $\vec{r}(t)$ be a vector-valued function. Then for the interval $I$, if $\vec{r}(t)$ is continuous on $I$ and the curve $C$ traced by $\vec{r}(t)$ lies entirely in a single plane, then $\vec{r}(t)$ is called a Plane Curve.
We have already dealt with tons of plane curves. For example, the curve $f(x) = x^3$ is a plane curve because the graph of $f$ lies on the $xy$-plane. All lines in $\mathbb{R}^3$ are plane curves as well. Another plane curve could be defined by the vector equation $\vec{r}(t) = (1, t, t^2)$, which represents the graph of the parabola $z = y^2$ on the plane $x = 1$ as depicted below:
If a curve is not a plane curve, then it will be what is called a space curve.
Definition: Let $\vec{r}(t)$ be a vector-valued function. Then for the interval $I$, if $\vec{r}(t)$ is continuous on $I$ then the curve $C$ traced by the parametric equations $x = x(t)$, $y = y(t)$,
and $z = z(t)$ is called a Space Curve.
We will primarily be looking at space curves in $\mathbb{R}^3$.
|
{"url":"http://mathonline.wikidot.com/plane-curves-and-space-curves","timestamp":"2024-11-13T17:49:08Z","content_type":"application/xhtml+xml","content_length":"16268","record_id":"<urn:uuid:b99d2485-25c2-47a0-9deb-c8e72420a592>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00668.warc.gz"}
|
University Physics
Author : Hugh D. Young
Publisher : Pearson Higher Ed
Total Pages : 1601
Release : 2015-07-15
ISBN-10 : 9781292100326
ISBN-13 : 129210032X
Rating : 4/5 (26 Downloads)
The full text downloaded to your computer With eBooks you can: search for key concepts, words and phrases make highlights and notes as you study share your notes with friends eBooks are downloaded to
your computer and accessible either offline through the Bookshelf (available as a free download), available online and also via the iPad and Android apps. Upon purchase, you'll gain instant access to
this eBook. Time limit The eBooks products do not have an expiry date. You will continue to access your digital ebook products whilst you have your Bookshelf installed. For courses in calculus-based
physics. Since its first edition, University Physics has been revered for its emphasis on fundamental principles and how to apply them. This text is known for its clear and thorough narrative, as
well as its uniquely broad, deep, and thoughtful sets of worked examples that provide students with key tools for developing both conceptual understanding and problem-solving skills. The 14th Edition
improves the defining features of the text while adding new features influenced by education research to teach the skills needed by today’s students.
University Physics
Author : Francis Weston Sears
Publisher :
Total Pages : 1058
Release : 1955
ISBN-10 : STANFORD:36105030211242
ISBN-13 :
Rating : 4/5 (42 Downloads)
University Physics
Author : Samuel J. Ling
Publisher :
Total Pages : 818
Release : 2017-12-19
ISBN-10 : 9888407619
ISBN-13 : 9789888407613
Rating : 4/5 (19 Downloads)
University Physics is designed for the two- or three-semester calculus-based physics course. The text has been developed to meet the scope and sequence of most university physics courses and provides
a foundation for a career in mathematics, science, or engineering. The book provides an important opportunity for students to learn the core concepts of physics and understand how those concepts
apply to their lives and to the world around them. Due to the comprehensive nature of the material, we are offering the book in three volumes for flexibility and efficiency. Coverage and Scope Our
University Physics textbook adheres to the scope and sequence of most two- and three-semester physics courses nationwide. We have worked to make physics interesting and accessible to students while
maintaining the mathematical rigor inherent in the subject. With this objective in mind, the content of this textbook has been developed and arranged to provide a logical progression from fundamental
to more advanced concepts, building upon what students have already learned and emphasizing connections between topics and between theory and applications. The goal of each section is to enable
students not just to recognize concepts, but to work with them in ways that will be useful in later courses and future careers. The organization and pedagogical features were developed and vetted
with feedback from science educators dedicated to the project. VOLUME II Unit 1: Thermodynamics Chapter 1: Temperature and Heat Chapter 2: The Kinetic Theory of Gases Chapter 3: The First Law of
Thermodynamics Chapter 4: The Second Law of Thermodynamics Unit 2: Electricity and Magnetism Chapter 5: Electric Charges and Fields Chapter 6: Gauss's Law Chapter 7: Electric Potential Chapter 8:
Capacitance Chapter 9: Current and Resistance Chapter 10: Direct-Current Circuits Chapter 11: Magnetic Forces and Fields Chapter 12: Sources of Magnetic Fields Chapter 13: Electromagnetic Induction
Chapter 14: Inductance Chapter 15: Alternating-Current Circuits Chapter 16: Electromagnetic Waves
|
{"url":"https://inkedinchapters.net/version/university-physics-with-modern-physics-global-edition/","timestamp":"2024-11-09T00:51:25Z","content_type":"text/html","content_length":"72670","record_id":"<urn:uuid:cc098807-54ad-4976-9efa-5251c9667ff0>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00292.warc.gz"}
|
Appeal to Nature
Also called
This fallacy involves assuming something is good or correct on the basis that it happens in nature, is bad because it does not, or that something is good because it "comes naturally" in some
way. This is fallacious because it turns "natural" into an ideal state without any meaningful reason, effectively using it as a synonym for "desirable" or "normal."
Bob: "My father is terribly ill at the moment, but the doctors say this new treatment will save his life."
Alice: "That treatment is unnatural. You need to accept that it's your father's time rather than trying to fight it."
Obviously, Bob's father is unlikely to consider himself better off dead than alive. This fallacy is sometimes combined with Retrospective Determinism, arguing that a given event was "just the
way things are" and hence should not be regarded as negative. "It's nature's way." See below for this variant.
This is an expression of the is/ought dichotomy, which separates what is from what ought to be (should be) and has generally defied philosophers' attempts to use one to prove the other.
It can also can arise from a fallacy of ambiguity since the words "normal" and "natural" have two meanings: "what is", and "what should be".
In politico-religious discussion, the phrase: "homosexuality is (not) normal/natural" epitomizes this semantic and logical problem.
This is related to Science Is Bad. Frequently results in All-Natural Snake Oil.
Examples of Appeal to Nature include:
• In Troll 2, an evil witch is able to convince someone to drink a steaming green broth that has just turned someone else into green goo because "it is made from vegetable extracts".
• Here's an example from The Bible commonly used to demonstrate why one should always interpret its passages carefully: Proverbs 17:8 tells us that bribery works. Whoa! The Bible is telling us to
bribe people? Um, no, pay attention. What Proverbs 17:8 tells us is that bribery works. It doesn't say that therefore we should bribe people to be successful. Yes, this passage might be a rather
cynical observation, but that's all it is: an observation.
• In the Discworld novel Carpe Jugulum, King Verence is talked into drinking brose after being told "It's got herbs in", on the assumption it must be healthy. He spends most of the remainder of the
book foaming at the mouth and randomly attacking inanimate objects. This, however, turns out to be useful. It should be noted that brose is what the Nac mac Feegle, six-inch pictsies who can
drink their weight in lamp oil with no ill effects, drink to get their spirits up before marching into battle.
□ Similarly, the popular drinks Scumble (made of "mostly apples") and Splot, containing such vaguely defined ingredients as "tree bark" and "naturally occurring mineral salts".
□ Pratchett has a lot of fun with this trope; both Verence and his wife Magrat fall prey to it on a regular basis, usually for the worse (in Witches Abroad, teetotaller and lightweight Magrat
drinks a third of a bottle of absinthe because she vaguely recognizes it as involving wormwood, after which point she, Granny Weatherwax, and Nanny Ogg start calling it "herbal wine"). In
another book, Ankh-Morpork's notorious CMOT Dibbler is making himself a killing off a particularly desperate dandruff sufferer selling herbal shampoo "now with more herbs!" One character
notes, "throw a bunch of weeds in the pot and you've got herbs."
□ In The Fifth Elephant, when Colon says he's opposed to "unnatural things" like Sonky's contraceptives, the Patrician replies "You mean you eat your meat raw and sleep up a tree?"
• The Sarah Jane Adventures, where aliens convince millions of people to drink a new energy soda that contains alien parasites called "Bane" simply by claiming that Bane is "organic" (and by
extension "healthy").
• Eureka had an episode where everyone was becoming dumber, and the supposedly-a-genius farmer didn't think the additives she was using were bad, since they were "organic"... In a town of
super-geniuses, granted sometimes lacking in common sense, this seemed rather glaring in its stupidity.
• Parodied in a Fry and Laurie sketch where a doctor is offering his patient cigarettes as a cure. "They're herbal are they?" asks the patient. "Yes, a naturally-occurring herb called tobacco, I
□ Another Fry and Laurie sketch had a bedtime drink containing "nature's own barbiturates and heroin".
Tropes
• See All-Natural Snake Oil for a lot of examples of this.
• This is the underlying logic of The Social Darwinist and often the Evilutionary Biologist; there are various versions, typically some variant of:
□ The strong deserve to rule over and / or destroy the weak, because it's nature's way.
□ Mankind has perverted the course of nature, so society needs to be destroyed / someone needs to genetically engineer a killer something to prey on man / whatever.
Real Life
• This is often used with regard to social issues; for example, the more extreme opponents of feminism argue that the natural order is for males to be dominant, so women should not be allowed the
same rights as men. (This argument is itself unsound, as many species, especially bugs, will attest to.)
□ Not just bugs. In quite a lot of species the lowest-ranking female is well above the most powerful male. Some species, such as the whiptail lizard, have even done away with males
altogether and created a Real Life One-Gender Race.
• The idea of the superior "Noble Savage" has popped up repeatedly for centuries. The superiority of primitive people or beasts over civilized man has been a repeated trope. Of course, the facts
that 25% of men died from war, that 20-50% of children never survived childhood, and that polygamy and the heavy repression of women arose primarily to repopulate after those losses
are conveniently ignored.
• A famous example from mathematics is Giovanni Saccheri's attempt to prove the parallel postulate. In his book, Euclid Freed of Every Flaw, Saccheri assumed the postulate was false and tried to
derive a contradiction. Instead, he derived results that got stranger and stranger (but remained logically consistent), finally concluding that they were "repugnant to the nature of straight
lines". Saccheri didn't know it, but he was developing what we now call hyperbolic geometry—a fruitful field of study that just doesn't work the same way Euclidean geometry does.
• Eric Schlosser mentions this in Fast Food Nation: sometimes artificial things are better for you than natural ones. The example he uses is almond flavoring; extracted naturally, it contains trace
amounts of cyanide.
□ That's the nature of "natural" and "artificial" ingredients, at least as defined under United States law. Often, the active chemical is identical, the difference being that the "artificial"
ingredient is synthesized directly from its components as a pure substance while the "natural" ingredient is extracted from some naturally occurring source but usually includes contaminants
that aren't removed in the extraction process.
• Lots of "herbal" supplements. The idea being that because they are "herbal" they can't be harmful. Belladonna, also called deadly nightshade, which is a poison, is an herb (in small amounts, it
can be used as a soporific, but still). This also ignores the fact that anyone can be allergic to a plant that is not usually poisonous, making it harmful to him/her.
□ In an extension of this, multiple supplements are now claiming (word for word) "It's all natural, so there are no side effects." Depending on the product, this is either a case of misleading
truth (it's natural AND there are no side effects), a case of Blatant Lies (it's natural, and there are some side effects, but that's not because it's natural) or a case of selective omission
(it's all natural, and there are no side effects... There are no PRIMARY effects either, this is basically a placebo to help you psychologically while you do the rest of the stuff we tell
you, THAT'S what makes you healthy.)
• In German, the word "Chemie" (literally "chemistry", but in this case a more accurate translation would be "chemicals") is often used to refer to certain food additives and basically any other
substance that the speaker considers to be "unnatural". The fallacy is that, technically, water is a chemical too, and so is everything else. So if you're condemning the use of "chemicals", you
are basically against every substance known to man, the healthy ones as well as the unhealthy ones.
• War is often said to be bad because it's a human invention, which isn't really true.
□ Also not human inventions: Agriculture (ants and termites, among others), division of labor (multiple species), language (disputed- multiple species), ownership (disputed- multiple species),
tool use (apes, certain birds, and others), or... well, actually, we didn't invent a lot. We mostly just do a lot of things other species do, but do it on a grander scale. What makes humans,
or perhaps even just certain cultures, unique is the method in which we adapt and pass information on, forming increasingly complex societies that have greater ecological impacts.
☆ We didn't even invent paper. Wasps did that.
• Both sides of the LGBT issue are guilty of this, claiming either that it is unnatural, and therefore wrong, or that it is perfectly natural, and therefore acceptable. This is particularly jarring
since nobody really seems to have any idea what they mean by "natural" in this context.
• This is a point in arguments against preservation of endangered species. Extinction is a natural event that occurs when a species is no longer fit to survive in its environment. Attempting to
repopulate a Dying Race works against the natural order in both the target species as well as those that share a niche.
• A mode of thought that pops up with many above topics such as war and the LGBT issue is the idea that if a human practice has a parallel among some other species, then it is acceptable.
Proponents of this idea tend to forget that animals also have plenty of habits that pretty much everyone would consider reprehensible if practiced by humans. This is what gave rise to the rhyme:
"monkeys throw their poo, should we do so too?"
Looks like this fallacy but isn't
• Natural Law Theory, in which the nature appealed to refers to the essence of something, not the wild and woolly outdoors.
|
{"url":"https://tropedia.fandom.com/wiki/Appeal_to_Nature","timestamp":"2024-11-03T04:09:40Z","content_type":"text/html","content_length":"173771","record_id":"<urn:uuid:1f78528e-f398-4abb-80a3-fb43f4122061>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00785.warc.gz"}
|
Block 5: Perimeter, Area and Volume Year 6 for Spring Term - URBrainy.com
Support material for Year 6 Maths Mastery: Spring Term Block 5: Perimeter, Area and Volume.
|
{"url":"https://urbrainy.com/maths-mastery/year-6/spring-term/block-5","timestamp":"2024-11-12T06:08:14Z","content_type":"text/html","content_length":"102218","record_id":"<urn:uuid:9e68eea8-884e-42b2-b39c-fb0503b10501>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00015.warc.gz"}
|
Paper IPM / P / 6937
School of Physics
Title: Lewenstein-Sanpera Decomposition of A Generic 2x2 Density Matrix by Using Wootters's Basis
Author(s): 1. S.J. Akhtarshenas
2. M.A. Jafarizadeh
Status: Published
Journal: Quantum Information & Computation
No.: 108
Vol.: 3
Year: 2003
Pages: 229-248
Supported by: IPM
The Lewenstein-Sanpera decomposition for a generic two-qubit density matrix is obtained by using Wootters's basis. It is shown that the average concurrence of the decomposition is equal to the
concurrence of the state. It is also shown that all the entanglement content of the state is concentrated in the Wootters state |x_1⟩ associated with the largest eigenvalue λ_1 of the Hermitian
matrix √(√ρ ρ̃ √ρ). It is shown that a given density matrix ρ, with its corresponding set of positive numbers λ_i and Wootters basis, transforms under SO(4,c) into a generic 2×2 matrix with the
same set of positive numbers but with a new Wootters basis, where the local unitary transformations correspond to SO(4,r) transformations; hence ρ can be represented as the coset space SO(4,c)/SO(4,r)
together with the positive numbers λ_i. By giving an explicit parameterization we characterize a generic orbit of the group of local unitary transformations.
Download TeX format
|
{"url":"https://www.ipm.ac.ir/ViewPaperInfo.jsp?PTID=6937&school=Physics","timestamp":"2024-11-13T06:25:57Z","content_type":"text/html","content_length":"42281","record_id":"<urn:uuid:d9172de9-2d2d-4d4d-8657-9fcbb1ad0eea>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00233.warc.gz"}
|
Ordinary differential equation
In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) that depends on only a single independent variable. As with other DEs, its unknowns consist of one or more functions,
and it involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations, which may involve more than one independent variable.
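For illustration (an example added here, not part of the original entry):

```latex
\frac{dy}{dt} = -3\,y(t) \quad \text{(an ODE: the single independent variable is } t\text{)}
\qquad\text{vs.}\qquad
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \quad \text{(a PDE: two independent variables, } t \text{ and } x\text{)}
```

The first has the general solution $y(t) = C e^{-3t}$, a one-parameter family of functions of the single variable $t$.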
|
{"url":"https://texonom.com/ordinary-differential-equation-37d1aac17f07485e9fb58087d9a1f2fd","timestamp":"2024-11-04T14:27:56Z","content_type":"text/html","content_length":"127204","record_id":"<urn:uuid:37c8cdb5-5288-4073-ba7f-bd2fbaed95ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00641.warc.gz"}
|
Positive Abcissa: Exploring Cartesian Coordinates - flewedoutmovie.com
When we delve into the world of mathematics, one fundamental concept that arises is the Cartesian coordinate system. This system, developed by the French mathematician René Descartes in the 17th
century, revolutionized the way we understand geometry and algebra by providing a systematic way to plot points and visualize mathematical relationships. Central to the Cartesian coordinate system is
the notion of the abscissa and the ordinate, commonly known as the x-axis and y-axis respectively. In this article, we will focus specifically on the positive abscissa, exploring its significance,
applications, and relevance in various mathematical contexts.
Understanding Cartesian Coordinates:
Before we dive into the positive abscissa, let’s briefly revisit the Cartesian coordinate system as a whole. The system consists of two perpendicular lines – the x-axis and y-axis – that intersect at
a point called the origin. The x-axis is horizontal, with values increasing to the right being positive and values decreasing to the left being negative. The y-axis is vertical, with values
increasing upwards being positive and values decreasing downwards being negative.
The Positive Abscissa:
The abscissa refers to the horizontal distance of a point from the y-axis in a Cartesian coordinate system. When we talk about the positive abscissa, we are specifically referring to points on the
right side of the y-axis (towards the positive direction of the x-axis). Points on the left side of the y-axis have a negative abscissa, while points on the right side have a positive abscissa.
Significance of the Positive Abscissa:
1. Location of Points: The positive abscissa helps us determine the location of points to the right of the y-axis. By noting the value of the abscissa, we can pinpoint where a point lies along the x-axis.
2. Quadrant Identification: In the Cartesian plane, the positive abscissa is associated with the right-hand side of the plane, which corresponds to Quadrant I and Quadrant IV. Understanding the
quadrant in which a point lies is crucial for various mathematical operations.
3. Graphing Functions: When graphing functions or equations, the positive abscissa plays a key role in determining how the graph will extend towards the right side of the coordinate plane. It helps
visualize the relationship between variables.
Applications of Positive Abscissa:
The positive abscissa finds application in various mathematical and real-world scenarios. Some notable applications include:
• Distance and Displacement: In physics, the positive abscissa can represent the distance traveled in a particular direction. It is crucial for calculating displacements and velocities.
• Profit and Loss: In economics and finance, the positive abscissa can represent profits made from an investment or business venture. It helps analyze financial trends over time.
• Geospatial Mapping: In geography and cartography, the positive abscissa is used to map out locations on a coordinate grid, facilitating navigation and spatial analysis.
• Computer Graphics: In computer science, the positive abscissa is essential for rendering graphics, defining shapes, and positioning elements on a screen.
Properties of Positive Abscissa:
1. Always Non-Negative: By definition, the positive abscissa is never negative. It starts from zero at the y-axis and extends infinitely in the positive direction.
2. Increases to the Right: The positive abscissa increases as we move from left to right along the x-axis. This directional increase is a fundamental characteristic of the Cartesian coordinate system.
3. Coordinates Notation: In the standard (x, y) coordinate notation, the positive abscissa is denoted by a positive value for x. For example, a point in Quadrant I may have coordinates (3, 4) where
the positive abscissa is 3 units.
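The sign conventions described above can be captured in a short script. This is our own illustrative helper (the function name `quadrant` is not standard notation):

```python
def quadrant(x, y):
    """Name the quadrant of point (x, y); points on an axis get a label instead."""
    if x == 0 or y == 0:
        return "on an axis"
    if x > 0:                        # positive abscissa: right of the y-axis
        return "I" if y > 0 else "IV"
    return "II" if y > 0 else "III"  # negative abscissa: left of the y-axis

print(quadrant(3, 4))    # I  (positive abscissa, positive ordinate)
print(quadrant(3, -4))   # IV (positive abscissa, negative ordinate)
print(quadrant(-3, 4))   # II (negative abscissa)
```

As the first two calls show, a positive abscissa alone narrows the location down to Quadrant I or IV; the ordinate's sign decides between them.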
Frequently Asked Questions (FAQs):
1. What is the opposite of positive abscissa?
The opposite of positive abscissa is negative abscissa, which refers to points on the left side of the y-axis in a Cartesian coordinate system.
2. Can the abscissa of a point be zero?
Yes, the abscissa of a point can be zero if the point lies on the y-axis. In this case, the x-coordinate is zero, and only the y-coordinate is relevant.
3. How does the positive abscissa relate to the concept of vectors?
In vector analysis, the positive abscissa of a vector indicates its horizontal component or magnitude in the x-direction. It helps in decomposing vectors into their x and y components.
4. Is the positive abscissa unique to two-dimensional Cartesian coordinates?
While the positive abscissa is commonly associated with two-dimensional Cartesian coordinates, it can be extended to higher dimensions in coordinate systems such as three-dimensional space.
5. Why is the positive abscissa important in graphing linear equations?
When graphing linear equations, points of positive abscissa show the line's behavior to the right of the y-axis, including any intersection with the positive x-axis. This provides valuable
information about the behavior of the function.
In conclusion, the positive abscissa plays a vital role in the Cartesian coordinate system, offering insights into location, direction, and spatial relationships in mathematics and beyond. Its
significance extends across various disciplines, making it a fundamental concept in geometric and algebraic interpretations. By understanding the nuances of the positive abscissa, we deepen our grasp
of coordinates and pave the way for more advanced mathematical explorations.
The Complete (?) List of References for the Game of Sprouts
This is intended to be a complete list of all available information for the Game of Sprouts. If you are aware of anything missing, please send email.
This page has these four sections:
Discussion Forums
1. Sprouts-Theory
Discussions on the theory of Sprouts representations, analysis, etc.
2. Geometry-Research (Math Forum)
The above is a 1995 thread, which includes Conway describing a strategy for playing the game. The complete Math forum Geometry-Research (1992-present) is also available, and sometimes discusses Sprouts.
1. GLOP
Program to search the game tree and find whether a given Sprouts position is a win for the first or second player. So far, it has determined the winner for all normal games of up to 44 spots (and
for 46, 47, 53), and all misere games up to 20 spots.
2. 3Graph
Windows software allowing the user to play Sprouts against the computer.
3. Aunt Beast and Small Beast are programs used by Sprouts competitors, but are not available on the Internet.
Anthony, Piers (1969) Macroscope. New York: Avon.
A science fiction book, in which Sprouts is described, many of the characters play Sprouts, and it plays some role in the plot.
D. Applegate, G. Jacobson, and D. Sleator (1991) Computer Analysis of Sprouts, Tech. Report CMU-CS-91-144, Carnegie Mellon University Computer Science Technical Report, 1991. Also in The
Mathemagician and the Pied Puzzler (1999) honoring Martin Gardner; E. Berlekamp and T. Rogers, eds, A K Peters, Natick, MA, pp. 199-201.
The classic paper on computationally solving Sprouts. Gives solutions up to 11 spots. Proposes the Sprouts Conjecture: the first player loses the n-spot game if n is 0, 1, or 2 modulo 6, and
wins otherwise.
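The Sprouts Conjecture above is easy to state in code. This sketch is our own illustration of the conjectured rule, not part of the paper:

```python
def conjectured_winner(n):
    """Sprouts Conjecture: the first player loses the n-spot normal game
    exactly when n mod 6 is 0, 1, or 2."""
    return "second" if n % 6 in (0, 1, 2) else "first"

# Matches the solved small cases: e.g. the 3-, 4-, 5-spot games are first-player wins.
for n in range(1, 7):
    print(n, conjectured_winner(n))
```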
Balyta, Peter, Keeble, Tracy, and McQuatty, Pat (1999) Routing Your Way Through Edgy Mathematics: Graph Theory, Mathematics 514 Programme M.E.Q. 568-514, June 1999 MAPCO implementation session on
Graph Theory.
Sprouts is used (pages 8-11) to teach graph theory to Secondary level 5 (11th grade) math students in Quebec as part of the Math 514 program.
Elwyn Berlekamp, John Conway, and Richard Guy (1982) Winning Ways for your Mathematical Plays, vol. 2: Games in Particular, chapter 17, pp. 564-568, Academic Press, London, 1982. (Also see Vol. 1,
1982; Vol. 1, 2001; Vol. 2, 2003; Vol. 3, 2003; Vol. 4, 2004.)
The classic books from 1982, re-released 2001-2004, with a wealth of information about Sprouts and numerous other games.
Baird, Leemon C. III & Schweitzer, Dino (2010) Complexity of the Game of Sprouts FCS'10 - 6th International Conference on Foundations of Computer Science, Las Vegas, Nevada, July 2010.)
Proves the NP-completeness of two problems: given a Sprouts position and integer K, can the game continue for at least K more moves? And can it end in less than K moves?
Brown, Wayne and Baird, Leemon C. III (2008) A graph drawing algorithm for the game of sprouts, The 2008 International Conference on Computer Graphics and Virtual Reality, Las Vegas, Nevada,
July 14-17.
Describes algorithms for keeping curves spread out when drawing Sprouts positions
Brown, Wayne and Baird, Leemon C. III (2008) A non- trigonometric, pseudo area preserving, polyline smoothing algorithm, Journal of Computing Sciences in Colleges, (Also in the Proceedings of the
Consortium for Computing Sciences in Colleges Mid-South Conference).
Describes algorithms for making the curves smooth when drawing Sprouts positions.
Butler, Ralph M., Trimble, Selden Y., and Wilkerson, Ralph W. (1987) A logic programming model of the game of sprouts, ACM SIGCSE Bulletin, Volume 19, Issue 1 (February 1987), Pages: 319 - 323.
Describes how Sprouts positions are encoded so a 700-line Prolog program could play sprouts, making random moves.
Copper, M. (1993) Graph theory and the game of sprouts. American Mathematical Monthly 100(May):478.
Proves bounds on the length of a game as a function of the number of initial spots, whether the final graph is connected, and whether the final graph is biconnected. Then proposes further
questions, some of which were answered by Lam (A Math Monthly, 1997).
Draeger, Joachim, Hahndel, Stefan, Koestler, Gerhard, and Rosmanith, Peter (1990). Sprouts: Formalisierung eines topologischen Spiels [Sprouts: formalization of a topological game]. Technical
Report TUM-I9015, Technische Universitaet Muenchen, March 1990.
Eddins, Susan K. (1998) Networks and the game of sprouts. NCTM Student Math Notes (May/June).
Eddins, Susan Krulik (2006) Sprouts: Analyzing a simple game, IMSA Math Journal.
A simple analysis with questions and answers, suitable for teaching students.
Focardi, Riccardo and Luccio, Flaminia L. (2004) A modular approach to sprouts, Discrete Applied Mathematics 144 (2004), no. 3, 303-319. (A preliminary version of this paper appeared as A new
analysis technique for the Sprouts Game, proc. of the 2nd International Conference on FUN with Algorithms 2001, Carleton Scientific Press, Isola d'Elba, Italy, May 2001)
From the abstract: We study some new topological properties of this game and we show their effectiveness by giving a complete analysis of the case x0=7 for which, to the best of our
knowledge, no formal proof has been previously given.
Fraenkel, Aviezri S. (2009) Combinatorial games: Selected biography with a succinct gourmet introduction The Electronic Journal of Combinatorics 2009.
A survey of combinatorial games with a huge list of references, covering many related topics, including Sprouts.
Gardner, Martin, Mathematical games: of sprouts and brussels sprouts, games with a topological flavor, Scientific American 217 (July 1967), 112-115.
Gardner, Martin (1989) Sprouts and Brussels sprouts. In Mathematical Carnival. Washington, D.C.: Mathematical Association of America.
Giganti, Paul Jr. (2009) Parent Involvement and Awareness: The Game of Sprouts, CMC ComMuniCator, Dec 2009.
Advice on teaching math reasoning to kindergarten through middle school, using Sprouts.
Lam, T.K. (1997) Connected sprouts. American Mathematical Monthly 104(February):116.
Answers questions posed by Copper (American Mathematical Monthly, 1993), about the graph that is obtained at the end of the game. Gives the length of the shortest game for connected and
biconnected graphs.
Lemoine, Julien and Viennot, Simon (2007) A further computer analysis of Sprouts. (7 April 2007)
The first paper since 1991 giving the results for large games. It describes the authors' pseudo-canonization techniques, and gives their results for all games of up to 35 spots, except those
with 27,30,31,32,33 spots. It also proposes the Nimber Conjecture: the nimber of the n-spot game is floor((n mod 6)/3).
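The Nimber Conjecture can likewise be sketched; this snippet is our own illustration of the conjectured formula, not part of the paper:

```python
def conjectured_nimber(n):
    """Nimber Conjecture (Lemoine & Viennot): the nimber of the n-spot game
    is floor((n mod 6) / 3), i.e. 0 for residues 0-2 and 1 for residues 3-5."""
    return (n % 6) // 3

# Nimber 0 means a second-player win under normal play, which agrees with
# the Sprouts Conjecture's losing residues 0, 1, 2 (mod 6).
print([conjectured_nimber(n) for n in range(12)])  # [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
```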
Lemoine, Julien and Viennot, Simon (2010) Computer analysis of Sprouts with nimbers (13 August 2010) (first released as "A further computer analysis of Sprouts", listed above).
Contains details about the representation of Sprouts positions, the algorithms based on the central idea of nimbers, and a description of some interesting features of the first version of
Glop, like drawing the proof tree or interacting in real-time with the computation.
Lemoine, Julien and Viennot, Simon (2009) Sprouts games on Compact surfaces (2 March 2009, original 29 November 2008)
Extends Sprouts theory beyond the usual plane/sphere case to other surfaces.
Lemoine, Julien and Viennot, Simon (2009) Analysis of misere Sprouts game with reduced canonical trees (30 August 2009)
Describes how the authors were able to solve misere games up to 17 spots using reduced canonical trees, which are analogous to nimbers for normal Sprouts.
Lemoine, Julien and Viennot, Simon (2010) Nimbers are inevitable (26 November 2010)
Proves a theorem: from the proof tree for the outcome of a sum of impartial games, it is possible to deduce the nimber of one component. It implies that in some way nimbers are inevitable,
even when trying to compute outcomes, and justifies the efficiency of the algorithms detailed at the end of the article. Also details the results obtained on two impartial games, Sprouts and
Peterson, Ivar (1997) Sprouts For Spring, Science News, April 5.
Short Science News article from 5 April 1997 giving the rules, history, Macroscope quotes, and a few references.
Pritchett, Gordon (1976) The game of Sprouts, Two-Year College Mathematics Journal, 7, 7 (Dec 1976) pp. 21-25.
Proves several Sprouts theorems.
Selig Brodetsky - Biography
Quick Info
Born: 10 February 1888 in Olviopol (near Odessa), Ukraine
Died: 18 May 1954 in London, England
Selig Brodetsky was educated at Cambridge and Leipzig. He became a lecturer at Bristol and later lecturer and professor at Leeds. He worked on fluid flow with particular emphasis on aerodynamics.
He was President of the Hebrew University of Jerusalem for a short time.
Selig Brodetsky's father was Akiva Brodetsky, a synagogue official, and his mother was Adel Prober. Selig was the second son from a large family of thirteen children. His parents were both
Russian Jews and his father, tired of the harassment which the family were suffering in Russia, decided to move to London in 1893. He moved to the East End of London, leaving his wife Adel,
Selig, and three other young children in Russia. He managed to earn enough to support his family and he wrote asking them to join him in London.
Their journey across Europe was a difficult one described in [3]:-
Little Selig, then a child of four, retained to the end of his life, a vivid memory of their hiding in a hen-coop till nightfall when a kindly officer in charge of Frontier Forces signalled
them out of the darkness that now was the time to cross the frontier. His baby sister started to cry at this critical time but her mother pushed her shawl into the child's mouth and so
silenced her.
In London Akiva and Adel could not find permanent work and so they made a living from sewing, with the children earning a little to help by packing matches into boxes. In 1894, when Selig was old
enough to begin school, he attended the Jew's Free School in Whitechapel, London. From there, having won a scholarship, he went to the Central Foundation School in Cowper Street, central London,
in 1900. In 1902 he came top of the list of those winning intermediate scholarships in London. Further success came when he was awarded a scholarship to study mathematics at Trinity College,
Cambridge, matriculating in 1905.
Brodetsky achieved a very fine record at Cambridge. He graduated in 1908 being placed as bracketed Senior Wrangler (first equal). This in fact made news in a rather disturbing way since [1]:-
Newspaper editorials noted that if the Aliens Act restricting immigration had been passed earlier the Brodetsky family would have been barred.
In 1910 Brodetsky was awarded the Isaac Newton Scholarship which enabled him to study at Leipzig for his doctorate.
The University of Leipzig awarded Brodetsky a doctorate in 1913 for a thesis on gravitation and he returned to England in 1914 where he accepted a lectureship in Applied Mathematics at the
University of Bristol. Of course, World War I broke out shortly after this and, in addition to his teaching duties, Brodetsky was assigned as an advisor to a firm making optical equipment, such
as periscopes for submarines, for the war. He also collaborated with G H Bryan of University College, Bangor, on mathematical aeronautics, and some of his later publications on this topic are
mentioned below.
On 13 January 1919 Brodetsky married Mania Berenblum. They had two children, a son Paul born in 1924 and a daughter Adèle. In 1919 Brodetsky accepted a position as a lecturer at the University
of Leeds and he was made a Reader in 1920. He was appointed as the first holder of the Chair of Applied Mathematics at Leeds in 1924.
Brodetsky's work was mainly on aerodynamics and fluid mechanics. His papers include work on the stability of a parachute and fluid flow past circular and elliptic cylinders. He published The
mechanical principles of the aeroplane in 1921. He also wrote the book A first course in nomography in 1920. A nomograph was widely used in engineering and in industry. It is a graphic
representation that consists of several lines with scales arranged so that by using a straight edge between known values on two lines an unknown value can be read at the intersection with a third line.
The 5th International Congress for Applied Mechanics was held at Cambridge, Massachusetts, in 1938 and Brodetsky delivered a paper on the equations of motion of an airplane. He expanded his
theory further, publishing a major article The general motion of the aeroplane in the Philosophical Transactions of the Royal Society in 1940. We list the chapter and section headings of this
paper to give an indication of the approach Brodetsky took.
Chapter I. Longitudinal Motion without Screw Thrust.
Sections: Equations of motion, coefficients of statical and dynamical stability $k, t$; first approximation. The three standard conditions of the symmetrical aeroplane. Longitudinal stability;
usual value of $t$ in standard normal condition. Standard normal condition: $k$ of zero order, Lanchester's phugoids; $k$ small, extended phugoids; $k$ negligible, neutral phugoids; second
approximation, Lanchester's phugoids, the loop. Standard diving condition, diving phugoids. Elevator in rotation during motion, flattening out from a dive. Lanchester's phugoids corrected for
Chapter II. Longitudinal Motion with Engines in Action.
Sections: Equations of motion, first approximation. Standard normal condition: moderate power, Lanchester's, extended and neutral phugoids, large power, power phugoids, zooming.
Chapter III. Three-dimensional Motion.
Sections: Symmetrical aeroplane: equations of motion, first approximation. Standard normal condition; $k$ of zero order, three-dimensional phugoids, Immelmann turn; $k$ small, extended
three-dimensional phugoids; $k$ negligible. Standard stalled conditions, the slow spin. Aeroplane with displaced controls, additional moments. Standard normal condition: $k$ of zero order; small
asymmetry, three-dimensional phugoids; large aileron displacement, the slow roll.
Lecturing seems to have been one of Brodetsky's real strengths. His abilities in this area are described in [3] as follows:-
As a university teacher for students he could hardly be surpassed. Day in, day out, he kept them spell-bound. His subject matter, clarity of exposition, style of delivery, choice of
phraseology made an indelible impression on all his listeners whether derived from the Faculties of Arts, Science or Technology. In addition to purely routine standard mathematics, he made
mathematical personalities and the history of mathematics live down the centuries by his vivid presentation. ... On one occasion, he gave a lecture on Sir Isaac Newton in a room in the
university constructed to seat an audience of 250, 400 turned up and were all accommodated, sitting or standing.
However, it is not only for his contributions to mathematics that Brodetsky is well known. His other contributions are described in [1]:-
Already at Cambridge Brodetsky had established the pattern of dividing his time between academic work and public service, especially but by no means exclusively for the Jewish community and
the Zionist movement. While at Leeds he was active in the affairs of the League of Nations Union (later the United Nations Association) and of the Association of University Teachers. In 1928
he became a member of the World Zionist Executive and head of its political department in London. In 1940 he became president of the Board of Deputies of British Jews, the lay head of British
Jewry. His election symbolized a democratic revolution, with the communal leadership being taken over from the old-established families by the descendants of the late nineteenth-century
immigration. It also demonstrated that Zionism, towards which much of the establishment had been indifferent or hostile, had now the support of the majority of the community, undoubtedly
owing to the traumatic events of the Nazi era and the Second World War.
W P Milne was Head of Mathematics at Leeds until he retired in 1946 when Brodetsky took on the role. In 1948 he retired from Leeds University and went to Jerusalem to become the President of the
Hebrew University there. This, however, proved much more difficult than Brodetsky expected. He thought he was going to take up a position similar to that of Vice-Chancellor of an English
university but many in Jerusalem saw the position as essentially an honorary one, like the Chancellor of an English university. Brodetsky was effective in reforming the Hebrew University but at
considerable cost in terms of arguments and disputes. This soon had a detrimental effect on his health and after suffering a heart attack he returned to England in 1951. He resigned as President
of the Hebrew University in 1952 and led a quiet life for his last couple of years in contrast to the hectic lifestyle he had led for most of his life.
He died at his home in Cromwell Road, London, and was buried in the Willesden Jewish cemetery, London, on 20 May 1954.
1. Biography by Leon Mestel, in Dictionary of National Biography (Oxford, 2004). See THIS LINK.
2. S Brodetsky, Memoirs : from ghetto to Israel (1960).
3. W P Milne, Selig Brodetsky, J. London Math. Soc. 31 (1) (1956), 121-125.
Written by J J O'Connor and E F Robertson
Last Update February 2005
Discontinuity | Lexique de mathématique
Property of a function that is not continuous in a given interval of its domain.
If we consider an interval of the domain of a function \(f\) of the real variable \(x\) and a value \(a\) of this interval, we say that the function \(f\) is discontinuous in this interval if
\(f(a)\) is not defined or if \(\lim\limits_{x \rightarrow a} f(x) ≠ f(a)\).
These two cases are illustrated in the examples below.
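The page's worked examples are not reproduced here, but the second case (the limit differing from the function's value) can be sketched numerically. This is our own illustration, not part of the lexicon entry:

```python
def f(x):
    # Piecewise function with a discontinuity at x = 1: f follows x**2
    # everywhere except that f(1) is (re)defined to be 3.
    return 3.0 if x == 1 else x * x

a = 1
# Crude one-sided estimate of lim_{x -> a} f(x): evaluate just beside a.
limit_estimate = f(a + 1e-9)   # close to 1, the true limit
print(limit_estimate != f(a))  # True: the limit differs from f(a), so f is discontinuous at a
```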
Small Total-Cost Constraints in Contextual Bandits with Knapsacks...
Keywords: multi-armed bandits, bandits with knapsacks, primal-dual approaches
TL;DR: Motivated by a fairness application, we extend typical $\mathrm{OPT}/B \cdot \sqrt{T}$ results in contextual bandits with knapsacks to work with small total-cost constraints $B \ll T$, that can be as close as possible to $\sqrt{T}$
Abstract: We consider contextual bandit problems with knapsacks [CBwK], a problem where at each round, a scalar reward is obtained and vector-valued costs are suffered. The learner aims to maximize
the cumulative rewards while ensuring that the cumulative costs are lower than some predetermined cost constraints. We assume that contexts come from a continuous set, that costs can be signed, and
that the expected reward and cost functions, while unknown, may be uniformly estimated---a typical assumption in the literature. In this setting, total cost constraints had so far to be at least of
order $T^{3/4}$, where $T$ is the number of rounds, and were even typically assumed to depend linearly on $T$. We are however motivated to use CBwK to impose a fairness constraint of equalized
average costs between groups: the budget associated with the corresponding cost constraints should be as close as possible to the natural deviations, of order $\sqrt{T}$. To that end, we introduce a
dual strategy based on projected-gradient-descent updates, that is able to deal with total-cost constraints of the order of $\sqrt{T}$ up to poly-logarithmic terms. This strategy is more direct and
simpler than existing strategies in the literature. It relies on a careful, adaptive, tuning of the step size.
Supplementary Material: pdf
Submission Number: 3728
Polynomial arithmetic
The calculator evaluates a polynomial expression. The expression may contain polynomials and the operations +, -, *, /, as well as the functions mod (division remainder), gcd (greatest common
divisor), egcda, egcdb, lc, deg, pp, content, and monic.
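As a rough sketch of the kind of operations listed (polynomial remainder, gcd, and monic normalization over exact rational coefficients), here is our own illustration in Python; it is not the calculator's actual implementation or API:

```python
from fractions import Fraction

def poly_mod(a, b):
    """Remainder of polynomial division a mod b; coefficients are listed
    from highest to lowest degree."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b) and any(a):
        factor = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= factor * b[i]
        a.pop(0)                     # leading term is now zero
    while len(a) > 1 and a[0] == 0:  # strip remaining leading zeros
        a.pop(0)
    return a

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials; result returned in monic form."""
    while any(b):
        a, b = b, poly_mod(a, b)
    lead = a[0]
    return [c / lead for c in a]     # monic: divide by the leading coefficient

# gcd(x^2 - 1, x^2 - 2x + 1) is x - 1
print(poly_gcd([1, 0, -1], [1, -2, 1]))  # [Fraction(1, 1), Fraction(-1, 1)], i.e. x - 1
```

Using `Fraction` keeps the arithmetic exact, which is essential for a correct Euclidean algorithm; floating point would make the "is the remainder zero?" test unreliable.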
PLANETCALC, Polynomial arithmetic
Spanish Lottery Statistics
Spanish Lottery Statistics For 2022
As well as showing you the hot and cold numbers for the Spanish Lottery for 2022, we've also been crunching the numbers to generate some stats about number popularity. If we take all the results
either collectively or over a year we can determine if there are any interesting patterns. Below you'll find the most popular single numbers, most popular couples and triples drawn in the Spanish
Lottery for 2022.
These results are based on our own set of data and are provided for information purposes only. This data will probably not help you win the lottery, as the draw is, after all, a random event.
However, by looking at the stats for the Spanish Lottery for 2022 you might pick out a few numbers to look at. If they do help you win the big one then do let us know.
You can also find the hot numbers for All, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022
Most Popular Single Numbers
29 Drawn 24 Times
40 Drawn 24 Times
1 Drawn 23 Times
30 Drawn 23 Times
11 Drawn 22 Times
8 Drawn 22 Times
Most Popular Pairs
20 & 22 Drawn 7 times
8 & 29 Drawn 7 times
1 & 29 Drawn 6 times
1 & 30 Drawn 6 times
11 & 29 Drawn 6 times
16 & 40 Drawn 6 times
21 & 40 Drawn 6 times
23 & 29 Drawn 6 times
40 & 41 Drawn 6 times
43 & 45 Drawn 6 times
5 & 30 Drawn 6 times
6 & 11 Drawn 6 times
Most Popular Triplets
11, 15 & 22 Drawn 3 times
16, 20 & 22 Drawn 3 times
16, 38 & 47 Drawn 3 times
23, 40 & 41 Drawn 3 times
3, 40 & 41 Drawn 3 times
30, 40 & 49 Drawn 3 times
5, 30 & 46 Drawn 3 times
6, 11 & 19 Drawn 3 times
Most Popular Quadruplets
No results returned
Don't forget to check out the hot and cold numbers for the Spanish Lottery. Plus find out if your lucky numbers have come up lately with the latest Spanish Lottery results.
Prof. Barbara and her colleague at New Mexico Tech, Prof. Nikolai Kalugin, awarded a three-year NSF collaborative grant - Department of Physics
Prof. Barbara and her colleague at New Mexico Tech, Prof. Nikolai Kalugin, were recently awarded a three-year NSF collaborative grant for a project titled Floquet-Bloch topological states in quantum
Hall systems. Recent theories predict that light can act as a switch to induce new quantum electronic states in some atomically thin materials. These new states are named topological states and they
yield robust, dissipationless currents along the perimeter of the two-dimensional material. This project will involve collaborations with the University of Chile-Santiago (Prof. Luis Foa Torres) and
the National High Magnetic Field Laboratory in Tallahassee, FL (Dr. Alexey Suslov) to study the generation of these edge states by driving two-dimensional materials away from equilibrium, thereby
inducing topological states “on demand”.
Convert lbs to kg
Please provide values below to convert pound [lbs] to kilogram [kg], or vice versa.
Definition: A pound (symbol: lb) is a unit of mass used in the imperial and US customary systems of measurement. The international avoirdupois pound (the common pound used today) is defined as
exactly 0.45359237 kilograms. The avoirdupois pound is equivalent to 16 avoirdupois ounces.
History/origin: The pound descended from the Roman libra, and numerous different definitions of the pound were used throughout history prior to the international avoirdupois pound that is widely used
today. The avoirdupois system is a system that was commonly used in the 13^th century. It was updated to its current form in 1959. It is a system that was based on a physical standardized pound that
used a prototype weight. This prototype weight could be divided into 16 ounces, a number that had three even divisors (8, 4, 2). This convenience could be the reason that the system was more popular
than other systems of the time that used 10, 12, or 15 subdivisions.
Current use: The pound as a unit of weight is widely used in the United States, often for measuring body weight. Many versions of the pound existed in the past in the United Kingdom (UK), and
although the UK largely uses the International System of Units, pounds are still used within certain contexts, such as labelling of packaged foods (by law the metric values must also be displayed).
The UK also often uses both pounds and stones when describing body weight, where a stone is comprised of 14 pounds.
Definition: A kilogram (symbol: kg) is the base unit of mass in the International System of Units (SI). It is currently defined based on the fixed numerical value of the Planck constant, h, which is
equal to 6.62607015 × 10^-34 in the units of J·s, or kg·m^2·s^-1. The meter and the second are defined in terms of c, the speed of light, and cesium frequency, ΔνCs. Even though the definition of the
kilogram was changed in 2019, the actual size of the unit remained the same. The changes were intended to improve the definitions of SI base units, not to actually change how the units are used
throughout the world.
History/origin: The name kilogram was derived from the French "kilogramme," which in turn came from adding Greek terminology meaning "a thousand," before the Late Latin term "gramma" meaning "a small weight."
Unlike the other SI base units, the kilogram is the only SI base unit with an SI prefix. SI is a system based on the meter-kilogram-second system of units rather than a centimeter-gram-second system.
This is at least in part due to the inconsistencies and lack of coherence that can arise through use of centimeter-gram-second systems, such as those between the systems of electrostatic and
electromagnetic units.
The kilogram was originally defined in 1794 as the mass of one liter of water at its freezing point, but was eventually re-defined, since measuring the mass of a volume of water was imprecise and cumbersome.
A new definition of the kilogram was introduced in 2019 based on Planck's constant and changes to the definition of the second. Prior to the current definition, the kilogram was defined as being
equal to the mass of a physical prototype, a cylinder made of a platinum-iridium alloy, which was an imperfect measure. This is evidenced by the fact that the original prototype
kilogram now weighs 50 micrograms less than the official copies of the standard kilogram.
Current use: As a base unit of SI, the kilogram is used globally in nearly all fields and applications. Even in countries like the United States, the kilogram is used in many
areas, at least to some extent (such as science, industry, government, and the military), though typically not in everyday applications.
Pound to Kilogram Conversion Table
Pound [lbs] Kilogram [kg]
0.01 lbs 0.0045359237 kg
0.1 lbs 0.045359237 kg
1 lbs 0.45359237 kg
2 lbs 0.90718474 kg
3 lbs 1.36077711 kg
5 lbs 2.26796185 kg
10 lbs 4.5359237 kg
20 lbs 9.0718474 kg
50 lbs 22.6796185 kg
100 lbs 45.359237 kg
1000 lbs 453.59237 kg
How to Convert Pound to Kilogram
1 lbs = 0.45359237 kg
1 kg = 2.2046226218 lbs
Example: convert 15 lbs to kg:
15 lbs = 15 × 0.45359237 kg = 6.80388555 kg
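Because the conversion is a single multiplication by the exact factor 0.45359237, it is easy to script. A minimal Python sketch (written for this page, not taken from any conversion library):

```python
KG_PER_LB = 0.45359237  # exact by definition (since 1959)

def lbs_to_kg(lbs):
    """Convert pounds to kilograms."""
    return lbs * KG_PER_LB

def kg_to_lbs(kg):
    """Convert kilograms to pounds."""
    return kg / KG_PER_LB

print(round(lbs_to_kg(15), 8))   # 6.80388555, matching the example above
print(round(kg_to_lbs(1), 10))   # 2.2046226218
```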
Popular Weight And Mass Unit Conversions
Convert Pound to Other Weight and Mass Units
Categorical Axes in F#
How to make categorical charts in F# with Plotly.
In [1]:
#r "nuget: Plotly.NET, 2.0.0-preview.8"
#r "nuget: Plotly.NET.Interactive, 2.0.0-preview.8"
This page shows examples of how to configure 2-dimensional Cartesian axes to visualize categorical (i.e. qualitative, nominal or ordinal data as opposed to continuous numerical data). Such axes are a
natural fit for bar charts, waterfall charts, funnel charts, heatmaps, violin charts and box plots, but can also be used with scatter plots and line charts. Configuring gridlines, ticks, tick labels
and axis titles on categorical axes is done the same way as with linear axes.
2-D Cartesian Axis Type and Auto-Detection¶
The different types of Cartesian axes are configured via the LinearAxis.AxisType attribute, which can take on the following values:
• 'Linear' (see the linear axes tutorial)
• 'Log' (see the log plot tutorial)
• 'Date' (see the tutorial on timeseries)
• 'Category' (see below)
• 'MultiCategory' (see below)
The axis type is auto-detected by looking at data from the first trace linked to this axis:
• First check for MultiCategory, then Date, then Category; otherwise default to Linear (Log is never automatically selected).
• MultiCategory is just a shape test: is the array nested?
• Date and Category require more than twice as many distinct date or category strings as distinct numbers in order to choose that axis type. Both of these test an evenly-spaced sample of at most 1000 values.
Forcing an axis to be categorical¶
It is possible to force the axis type by setting explicitly AxisType. In the example below the automatic X axis type would be linear (because there are not more than twice as many unique strings as
unique numbers) but we force it to be category.
In [2]:
open Plotly.NET
open Plotly.NET.LayoutObjects
let x = [|"a"; "a"; "b"; "c"|]
let y = [|1; 2; 3; 4|]
let xy = Array.zip x y
// The chart-constructing line was missing from this cell; Chart.Column is a reasonable reconstruction.
Chart.Column(xy)
|> Chart.withXAxis(LinearAxis.init(AxisType = StyleParam.AxisType.Category))
Box plots and violin plots are often shown with one categorical and one continuous axis.
In [3]:
#r "nuget: Deedle"
#r "nuget: FSharp.Data"
In [4]:
open Deedle
open FSharp.Data
let data =
    Http.RequestString "https://raw.githubusercontent.com/plotly/datasets/master/tips.csv"
    |> fun csv -> Frame.ReadCsvString(csv, true, separators = ",")
let getColumnData column =
    data
    |> Frame.getCol column
    |> Series.values
    |> Array.ofSeq
let x = getColumnData "sex" |> Seq.cast<string>
let y = getColumnData "total_bill" |> Seq.cast<decimal>
In [5]:
open Plotly.NET
Automatically Sorting Categories by Name or Total Value¶
Categories can be sorted alphabetically or by value using the CategoryOrder attribute for Axis:
Set CategoryOrder to "StyleParam.CategoryOrder.CategoryAscending" or "StyleParam.CategoryOrder.CategoryDescending" for the alphanumerical order of the category names, or "TotalAscending" or
"TotalDescending" for numerical order of values. See CategoryOrder for more information. Note that sorting the bars by a particular trace isn't possible right now - it's only possible to sort by the
total values. Of course, you can always sort your data before plotting it if you need more customization.
This example orders the categories alphabetically with CategoryOrder: 'CategoryAscending'
In [6]:
open Plotly.NET
open Plotly.NET.LayoutObjects
let x = ['b'; 'a'; 'c'; 'd']
[
    Chart.Column(x, [2.; 5.; 1.; 9.], Name = "Montreal")
    Chart.Column(x, [1.; 4.; 9.; 16.], Name = "Ottawa")
    Chart.Column(x, [6.; 8.; 4.5; 8.], Name = "Toronto")
]
|> Chart.combine
|> Chart.withLayout(Layout.init(BarMode = StyleParam.BarMode.Stack))
|> Chart.withXAxis(LinearAxis.init(CategoryOrder = StyleParam.CategoryOrder.CategoryAscending))
In [7]:
open Plotly.NET
let x = ['b'; 'a'; 'c'; 'd']
[
    Chart.Column(x, [2.; 5.; 1.; 9.], Name = "Montreal")
    Chart.Column(x, [1.; 4.; 9.; 16.], Name = "Ottawa")
    Chart.Column(x, [6.; 8.; 4.5; 8.], Name = "Toronto")
]
|> Chart.combine
|> Chart.withXAxis(LinearAxis.init(CategoryOrder = StyleParam.CategoryOrder.TotalAscending))
This example shows how to control category order by defining CategoryOrder to "Array" to derive the ordering from the attribute CategoryArray.
In [8]:
open Plotly.NET
let x = ['b'; 'a'; 'c'; 'd']
[
    Chart.Column(x, [2.; 5.; 1.; 9.], Name = "Montreal")
    Chart.Column(x, [1.; 4.; 9.; 16.], Name = "Ottawa")
    Chart.Column(x, [6.; 8.; 4.5; 8.], Name = "Toronto")
]
|> Chart.combine
|> Chart.withLayout(Layout.init(BarMode = StyleParam.BarMode.Stack))
|> Chart.withXAxis(LinearAxis.init(CategoryOrder = StyleParam.CategoryOrder.Array, CategoryArray = ['d'; 'a'; 'c'; 'b']))
Gridlines, Ticks and Tick Labels¶
By default, gridlines and ticks are not shown on categorical axes but they can be activated:
In [9]:
open Plotly.NET
Chart.Column(["A"; "B"; "C"], [1; 3; 2])
|> Chart.withXAxis(LinearAxis.init(ShowGrid = true, Ticks = StyleParam.TickOptions.Outside))
Multi-categorical Axes¶
A two-level categorical axis (also known as grouped or hierarchical categories, or sub-categories) can be created by specifying a trace's x or y property as a 2-dimensional list. The first sublist
represents the outer categorical value while the second sublist represents the inner categorical value.
Passing in a two-dimensional list as the x or y value of a trace causes the type of the corresponding axis to be set to multicategory.
Here is an example that creates a figure with a 2-level categorical x-axis.
In [10]:
open Plotly.NET
let trace x y name = // workaround: build the bar trace directly, since multi-category axes are not exposed through the Chart API in this preview
    let tmp = Trace("bar")
    tmp?x <- x
    tmp?y <- y
    tmp?name <- name
    tmp
[
    GenericChart.ofTraceObject(trace [["First"; "First"; "Second"; "Second"]; ["A"; "B"; "A"; "B"]] [2; 3; 1; 5] "Adults")
    GenericChart.ofTraceObject(trace [["First"; "First"; "Second"; "Second"]; ["A"; "B"; "A"; "B"]] [8; 3; 6; 5] "Children")
]
|> Chart.combine
|> Chart.withLayout(Layout.init(Title = Title.init("Multi-category axis"), Width = 700))
Laminar Cylinder
Written by @economon for version 7.0.0. Revised by @talbring on 2020-03-03 (revised version 7.0.2).
At one glance:
Solver: Navier_Stokes
Uses: SU2_CFD
Prerequisites: None
Complexity: Basic
Upon completing this tutorial, the user will be familiar with performing a simulation of external, laminar flow around a 2D geometry. The specific geometry chosen for the tutorial is a cylinder.
Consequently, the following capabilities of SU2 will be showcased:
• Steady, 2D Laminar Navier-Stokes equations
• Multigrid
• Roe convective scheme in space (2nd-order, upwind)
• Corrected average-of-gradients viscous scheme
• Euler implicit time integration
• Navier-Stokes wall (no-slip) and far-field boundary conditions
In this tutorial, we discuss some numerical method options, including how to activate a slope limiter for upwind methods.
The resources for this tutorial can be found in the compressible_flow/Laminar_Cylinder directory in the tutorial repository. You will need the configuration file (lam_cylinder.cfg) and the mesh file (mesh_cylinder_lam.su2).
Experimental results for drag over a cylinder at low Reynolds numbers are reported in the following article: D. J. Tritton, “Experiments on the flow past a circular cylinder at low Reynolds numbers,”
Journal of Fluid Mechanics, Vol. 6, No. 4, pp. 547-567, 1959. Note that the mesh used for this tutorial is rather coarse, and for comparison of the results with literature, finer meshes should be used.
The following tutorial will walk you through the steps required when solving for the external flow around a cylinder using SU2. It is assumed you have already obtained and compiled the SU2_CFD code
for a serial computation. If you have yet to complete these requirements, please see the Download and Installation pages.
The flow around a 2D circular cylinder is a case that has been used extensively both for validation purposes and as a legitimate research case over the years. At very low Reynolds numbers of less
than about 46, the flow is steady and symmetric. As the Reynolds number is increased, asymmetries and time-dependence develop, eventually resulting in the well-known von Kármán vortex street, and
then on to turbulence.
Problem Setup
This problem will solve for the external, compressible flow over the cylinder with these conditions:
• Freestream temperature = 273.15 K
• Freestream Mach number = 0.1
• Angle of attack (AOA) = 0.0 degrees
• Reynolds number = 40 for a cylinder radius of 1 m
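As a quick sanity check on these conditions, the freestream velocity implied by the Mach number and temperature can be computed by hand. A small Python sketch (not part of SU2; standard air properties γ = 1.4 and R = 287.058 J/(kg·K) are assumed):

```python
import math

gamma = 1.4    # ratio of specific heats for air (assumed)
R = 287.058    # specific gas constant for air, J/(kg*K) (assumed)
T = 273.15     # freestream temperature, K
mach = 0.1     # freestream Mach number

a = math.sqrt(gamma * R * T)  # speed of sound
u = mach * a                  # freestream velocity
print(f"a = {a:.1f} m/s, U_inf = {u:.1f} m/s")  # a = 331.3 m/s, U_inf = 33.1 m/s
```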
Mesh Description
The problem geometry is 2D. The mesh has 26,192 triangular elements and 13,336 points. It is fine near the surface of the cylinder to resolve the boundary layer. The exterior boundary is
approximately 15 diameters away from the cylinder surface to avoid interaction between the boundary conditions. Far-field boundary conditions are used at the outer boundary. No-slip boundary
conditions are placed on the surface of the cylinder.
The outer boundary in red is the far-field, and the small circle in the center is the cylinder which uses the Navier-Stokes Wall boundary condition.
Configuration File Options
Several of the key configuration file options for this simulation are highlighted here. This case exhibits exceptional convergence, given the combination of very aggressive CFL, linear solver
settings, and multigrid. In this tutorial, we would like to highlight the options concerning the specification of convective schemes:
% -------------------- FLOW NUMERICAL METHOD DEFINITION -----------------------%
% Convective numerical method (JST, LAX-FRIEDRICH, CUSP, ROE, AUSM, HLLC,
% TURKEL_PREC, MSW)
CONV_NUM_METHOD_FLOW= ROE
% Monotonic Upwind Scheme for Conservation Laws (TVD) in the flow equations.
% Required for 2nd order upwind schemes (NO, YES)
MUSCL_FLOW= YES
% Slope limiter (NONE, VENKATAKRISHNAN, VENKATAKRISHNAN_WANG,
% BARTH_JESPERSEN, VAN_ALBADA_EDGE)
SLOPE_LIMITER_FLOW= NONE
% Coefficient for the Venkat's limiter (upwind scheme). Larger values decrease
% the extent of limiting, values approaching zero cause
% lower-order approximation to the solution (0.05 by default)
VENKAT_LIMITER_COEFF= 0.05
For laminar flow around the cylinder, we choose the Roe upwind scheme with 2nd-order reconstruction (MUSCL_FLOW = YES). This low-speed case is executed without a slope limiter. Note that, in order to
activate the slope limiter for the upwind methods, SLOPE_LIMITER_FLOW must be set to something other than NONE. Otherwise, no limiting will be applied to the convective flux during the higher-order
reconstruction. Limiting is not applicable if MUSCL_FLOW = NO, as there is no higher-order reconstruction, and thus, no need to limit the gradients. The viscous terms are computed with the corrected
average-of-gradients method (by default). Several limiters are available in SU2, including the popular VENKATAKRISHNAN limiter for unstructured grids. It is recommended that users experiment with the
VENKAT_LIMITER_COEFF value for their own applications.
Lastly, it should be mentioned that the MUSCL reconstruction and slope limiting apply only to the upwind schemes. If you choose one of the centered convective schemes, e.g., JST or Lax-Friedrich,
there is no reconstruction process. JST and Lax-Friedrich are 2nd-order and 1st-order by construction, respectively, and the scalar dissipation for these schemes can be tuned with the
JST_SENSOR_COEFF and LAX_SENSOR_COEFF options, respectively.
Running SU2
The cylinder simulation for the 13,336 node mesh is small and will execute relatively quickly on a single workstation or laptop in serial. To run this test case, follow these steps at a terminal
command line:
1. Move to the directory containing the configuration file (lam_cylinder.cfg) and the mesh file (mesh_cylinder_lam.su2). Make sure that the SU2 tools were compiled, installed, and that their install
location was added to your path.
2. Run the executable by entering
$ SU2_CFD lam_cylinder.cfg
at the command line.
3. SU2 will print residual updates with each iteration of the flow solver, and the simulation will terminate after meeting the specified convergence criteria.
4. Files containing the results will be written upon exiting SU2. The flow solution can be visualized in ParaView (.vtk) or Tecplot (.dat for ASCII).
The following results show the flow around the cylinder as calculated by SU2 (note that these were for a slightly higher Mach number of 0.3).
Cool Math Stuff
We've been doing a lot of work with patterns, more formally called sequences or infinite series. Infinite series are a little different though, as they are guaranteed to go on forever without
stopping. For instance, the sequence with formula n^2, or 1, 4, 9, 16..., is infinite. However, one like √(3 - n) is not, as it will go √2, 1, 0, and then will be hitting complex
numbers, which are not valid for these sequences.
For most patterns, it is pretty obvious, and would never be on a test, or even be a valid question for a teacher to ask. However, what about the prime numbers? If you don't know, primes are numbers with
only two factors, one and itself. So, 5 is prime, because its factors are 1 and 5. However, 6 is not prime, because it has more than two factors, namely 1, 2, 3, and 6. That means that 6 is
composite, which means having three or more factors. Numbers such as one are known as units, as they have only one factor.
Anyways, are prime numbers infinite? This is definitely a valid question, and answerable too! Can you create a list with all of the prime numbers on it? How about you try to. I'll bet you can't.
Say someone pops up and says, "I have made a list with all of the prime numbers that exist on it." The list would be much longer if someone really did say that, but I made a small one below:
2, 3, 5, 7, 11
Okay. Let's multiply all of these numbers you've found together. 2 x 3 x 5 x 7 x 11 = 2310. Great. According to you, this is the product of all of the prime numbers out there. Try adding one. Now, we
have 2311, which is not a multiple of any prime number on the list. Since every number either is prime or is composed of primes, this is not possible. That means the number is either prime itself, as
2311 happens to be, or is a multiple of another prime that is not present on the list.
Is that all of the primes? No! We can do that process forever, and always find a prime that is missing. This is not a formula to generate prime numbers, as the first ten primes multiplied together is
6,469,693,230 which if you add one gives you 6,469,693,231, which you have no clue if it is prime or not! You guys can figure that one out. However, it is a cool proof that answers a question that
definitely gets you thinking!
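The argument is easy to check by machine. The sketch below (plain Python, written for this post) multiplies the first ten primes, adds one, and hunts for a factor by trial division: none of the original ten primes divide it, but a brand-new prime, 331, does.

```python
def first_primes(k):
    """Return the first k primes by simple trial division."""
    primes, n = [], 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

primes = first_primes(10)
prod = 1
for p in primes:
    prod *= p

n = prod + 1                           # 6,469,693,231
assert all(n % p for p in primes)      # none of the ten primes divide it
# ...yet n is not prime: trial division turns up a new prime factor.
d = next(d for d in range(2, int(n**0.5) + 1) if n % d == 0)
print(n, d)  # 6469693231 331
```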
In the first week, we learned how to multiply two numbers in the teens together instantly, as if you memorized the answer. To the average person, this is the only way possible to get the answer so
quickly! Now, you can take that up a notch, and multiply together the biggest two digit numbers. Say someone says 97 x 94. You can say, "That's easy! It's 9118!"
We will take it step by step here. For the first two digits, you take the bigger number and see how far away it is from 100. In this case, it is three away. Then, you subtract that from the other
number. 94 - 3 = 91. There's your first two digits.
For the last two digits, take how far both numbers are from 100. 97 is 3 away and 94 is 6 away. The answer is precisely 3 x 6 = 18. Put them together and you have 9118.
Let's try another one, 95 x 89. 95 is 5 away from 100, and 89 - 5 = 84. Then, 89 is 11 away from 100, so 11 x 5 = 55. Then, we put them together to get 8455.
You are probably wondering why this works, for teens or nineties. Let's start with the teens. Pretend that the problem is (z + a)(z + b) with z being 10. We will leave it as z for the moment.
(z + a)(z + b) = z(z + a + b) + ab
If you factor it out, the z(z + a + b) becomes z^2 + za + zb.
(z + a)(z + b) = z^2 + za + zb + ab
If you FOIL out the (z + a)(z + b), you get:
z^2 + zb + za + ab = z^2 + za + zb + ab
This shows that they are equal. If you think about it, this is what we are doing. Take 17 x 16.
(10 + 7)(10 + 6) = 10(10 + 7 + 6) + (7)(6)
This is what you are actually doing. What about for 89 x 95.
(100 - 5)(100 - 11) = 100(100 - 5 - 11) + (-5)(-11)
I got a comment pointing out that 20 wouldn't work for the teen method I described in the first post, like 20 x 18. However, this formula directs us to do it as so.
(10 + 10)(10 + 8) = 10(10 + 10 + 8) + (10)(8)
This will give us the 360 as promised. If you can do 2 x 1 multiplication problems in your head, you might want to try things like 32 x 37 with 30 as your z, or 68 x 66 with 70 as your z. If you get
really good at that, you could even try doing 448 x 442 with 400 as your z, which makes you add 48 x 42, using 40 or 50 as your z. This would be difficult, but you would get 198016 as an answer.
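Since (z + a)(z + b) = z(z + a + b) + ab is an identity, the trick can be checked mechanically against ordinary multiplication. A small Python sketch (written for this post):

```python
def near_base_multiply(m, n, base):
    """Mental-math trick: (base + a)(base + b) = base*(base + a + b) + a*b."""
    a, b = m - base, n - base
    return base * (base + a + b) + a * b

# The examples from the post, including 448 x 442 with 400 as the base.
for m, n, base in [(17, 16, 10), (97, 94, 100), (95, 89, 100),
                   (20, 18, 10), (448, 442, 400)]:
    assert near_base_multiply(m, n, base) == m * n
print(near_base_multiply(448, 442, 400))  # 198016
```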
Problem of the Week Solutions (from July):
b = 16
z = 1
n = 42
p = 116
odds = 58%
s = 53.1 degrees
t = 36.9 degrees
a = 1
b = 8
c = 0
h = -4
k = -16
area = 50.3 sq. cm
Today, we will finish up August's problems. Since the easy problem usually has an order of operations problem and a probability problem, we will have both. Since we did deal with chess this week, the
probability problem requires knowledge of chess. If you are not familiar with chess, please do the other problem, because I don't want you to get the wrong answer because of inexperience in an area
besides mathematics. However, the hard problem has only one way to find the answer, which is something I have not introduced yet.
Easy Problem:
Probability: If a chess player knows nothing about chess, and makes a completely random first move, what are the chances he will do d4 as his first move?
p = ___
Order of Operations: p = (25b + g - a - 3e)/3
p = ___
Hard Problem: Today, I will introduce a great tool in Algebra, the Quadratic Formula. The Quadratic Formula states that if ax^2 + bx + c = 0, then x = (-b ± √(b^2 - 4ac))/2a. So, if x^2 - 5x + 6 = 0,
then to find x, you would do:
(-(-5) ± √(5^2 - 4(1)(6)))/2(1)
(5 ± √(25 - 24))/2
(5 ± 1)/2
(5 + 1)/2 OR (5 - 1)/2
6/2 OR 4/2
3 OR 2
x = 3
x = 2
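The same plug-and-chug can be scripted. A small Python sketch of the quadratic formula (written for this post; it returns only real roots):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                       # no real roots
    r = math.sqrt(disc)
    return tuple(sorted(((-b - r) / (2 * a), (-b + r) / (2 * a))))

print(solve_quadratic(1, -5, 6))  # (2.0, 3.0), matching the worked example
```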
1) I made a couple of errors on Monday's problem. Please complete these simple calculations to have the correct z. We will call this number y.
y = 4z/5 + (4)(5) + 1
y = ___
2) If Ax = y, what does x equal? Use the explicit formula you created with the quadratic formula to achieve the answer.
Tip: There will be two possibilities. Choose the one that is reasonable. For instance, if you had 3 and -5, 3 would be the correct answer because you cannot cut a pizza with -5 straight lines.
x = ___
Also, let's call your false answer f.
f = ___
Today, the easy problem will move on and work on some geometry. For the hard problem, we will still be cracking away at the pizza problem!
Easy Problem: You probably know that to find the area of a circle, you square the radius and multiply it by π (3.14...). However, that is a little too easy. So, we will try finding the area of a
quarter circle. In order to do that, we will do (πr^2)/4, or the radius squared times pi divided by four. Basically, you've found the area of the circle, and divided it by four.
The number of games it takes for b players will be called g.
What is the area of a quarter circle with radius g? Round to the nearest whole number this time.
a = ___
Hard Problem: You should remember the process behind solving a system from the last few months. If you don't, you will just create a variable with the same coefficients, and subtract the two
equations. Continue this until you have isolated a variable. Then, you will substitute to find the rest.
Solve yesterday's system from the pizza problem. What is the explicit formula for this sequence?
Hint: It is quadratic (polynomial of degree two).
If you want, try finding the recursive formula. There is a systematic way to do it, which I'll bet you can figure out! You are solving a system! Remember, the recursive formula is An-1 + d with d
being the difference.
Today, we will finish up on the chess tournament problem. You should have already noticed a pattern.
Easy Problem: There are two types of formulas you can have in a sequence. One is the explicit formula, which is a formula based on the nth term. For instance, the explicit formula for the
sequence 2, 4, 6, 8, 10, ... would be An = 2n. If you wanted the 6th term, you would plug 6 in for n to get 2(6) = 12.
The other type of formula is the recursive formula, which is based on the previous term. In the even number sequence, the recursive formula would be An = An-1 + 2 because it is the previous term plus 2.
1) What is the explicit formula for the chess tournament problem?
2) What is the recursive formula for the chess tournament problem?
3) If there were b players in the tournament, how many games would it take to find a winner fairly?
Hint: Use the explicit formula for number three!
Hard Problem: At CTY, we used various strategies to determine explicit formulas. However, I tend to lean towards the method I taught last month, with the systems. Just to remind you how to find the
system, you must first find your common differences. For the sequence 2, 4, 6, 8, 10, ..., the differences are in the first row. Therefore, you are dealing with a first-degree, or linear, equation.
So, we take our base equation for this: An = mn + b. Then, plug in values for n and An to create the system. For instance, you would first plug 1 in for n and 2 in for An, and have the first
equation, m + b = 2. Then, you would create a second one, 2m + b = 4, to get our constants, m = 2 and b = 0. This makes the explicit formula An = 2n + 0 which becomes An = 2n.
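That elimination step can be sketched in a couple of lines of Python (written for this post), using the even-number sequence worked above:

```python
# Fit An = m*n + b to the sequence 2, 4, 6, 8, ... using its first two terms.
# Subtracting the system's equations  m + b = 2  and  2m + b = 4  eliminates b.
a1, a2 = 2, 4
m = a2 - a1        # (2m + b) - (m + b) = 4 - 2
b = a1 - m         # back-substitute into m + b = 2
print(f"An = {m}n + {b}")  # An = 2n + 0
```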
1) Find common differences in the Pizza Problem.
2) Create a system of equations.
If you want to get ahead, try solving the system, or even try finding a recursive formula.
Today, we will do start our main problems! We will mainly be generating values to work with today.
Easy Problem: If there is a chess tournament with n people in it, how many games must they play to get a winner?
For example, if there are seven players. One player will get a bye (sit out for the round and automatically move on) while the other six play. There will be three winners plus the bye makes four.
Then, they will play two games to get the two finalists who will play for the win. So, in the first round, there were three games, then two, then one, giving you a six game tournament.
Find data for the values from 1 - 6.
n:  1  2  3  4  5  6  7
An: _  _  _  _  _  _  6
Hard Problem: If you cut a pizza with n straight cuts, what is the maximum pieces of pizza you can get?
If you were to use one straight cut, you would just be able to split the pizza in half, which would make two pieces of pizza.
Find data from 2 - 5 or 6.
n:  1  2  3  4  5  6  7
An: 2  _  _  _  _  _  _
Hint: The pieces don't have to be equal in size, and shouldn't be equal.
Since I just recently came back from CTY, the problems will mainly be a problem we discussed in class. The easy one will be a problem about a chess tournament, and the hard will be about slicing
pizzas. However, we are going to start off with some triangles as always!
Easy Problem: In the last two problems, we worked with the famous Pythagorean Theorem, developed to find the missing side of a right triangle when given two. Just to remind you, the shortest side is
labeled a, the medium labeled b, and the longest labeled c. The formula states that a^2 + b^2 = c^2.
If you have a right triangle with a equalling 26.25 and with c equalling 43.75, what is the value of b? Remember to round to the nearest tenth!
b = ___
You won't need b's value until Wednesday, so hold onto it after you find it.
Hard Problem: Last month, we used the button on the calculator that turns a sine into its angle. Let me review how it works.
"You might not have it on your average calculator, but if you hit the 2nd button on a scientific calculator or iPhone calculator, you will get a button where the sine function has a little -1 above
it, in the place of an exponent. That button takes the sine of an angle, and turns it into the angle. So, you could divide the side opposite to an angle by c and get the sine, and then hit that
button to retrieve your angle."
If a right triangle's sides are lengths 27, 29.5, and 40, what is the measurement of the angle opposite to side a? This angle will be referred to later on as z.
z = ___
You will not need this value until Friday.
Additional Challenge: At camp, we also had a puzzle called region revenge. It was pretty difficult, but I'd like to share it with you. I will put up the answer in two months.
The problem is basically to create a vertex and see how many shaded areas are in the circle. So, for one vertex, there is one shaded region. Now, make two vertexes and connect them with a line. That
makes two shaded areas. Then, you make three, and connect every vertex to one another, making a triangle. There, there would be four shaded areas. Keep doing this for five to seven vertexes, and look
for differences. Can you find an explicit formula? It is not what you think it is.
Hint: The formula is a quartic equation (equation of degree four).
If you look closely, you'll find that today is in fact a Fibonacci day. 13 is the seventh Fibonacci number, so we're in for a treat!
Last month, we added up the Fibonacci numbers. This time, let's add the Fibonacci numbers in the even positions. We'll list some out.
1 = 1
1 + 3 = 4
1 + 3 + 8 = 12
1 + 3 + 8 + 21 = 33
See the pattern? Same thing as last time! Just the odd spots minus one!
1 = 1 = 2 - 1
1 + 3 = 4 = 5 - 1
1 + 3 + 8 = 12 = 13 - 1
1 + 3 + 8 + 21 = 33 = 34 - 1
Why does this work? Let's split each number into the previous two Fibonacci numbers.
1 + 3 + 8 + 21 = 33 = 34 - 1
1 + (1 + 2) + (3 + 5) + (8 + 13) = 34 - 1
Because of the associative property, we can eliminate the parentheses, giving us what we did last time, adding all the Fibonacci numbers. That would always give us a Fibonacci number minus one as the sum.
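The pattern is quick to verify by brute force with a short Python sketch (written for this post; here fib(1) = fib(2) = 1):

```python
def fib(n):
    """fib(1) = fib(2) = 1, fib(3) = 2, fib(4) = 3, ..."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

# Sum of the even-position Fibonacci numbers = the next odd-position one, minus 1.
sums = [sum(fib(2 * i) for i in range(1, k + 1)) for k in range(1, 5)]
print(sums)  # [1, 4, 12, 33]
assert all(sum(fib(2 * i) for i in range(1, k + 1)) == fib(2 * k + 1) - 1
           for k in range(1, 10))
```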
If you know something cool about Fibonacci numbers, please let me know!
This is the last week of CTY camp. Of course, a few activities deal with π because, who doesn't love pi? We did two activities that were attempts at calculating pi, with very little to use. Of
course, we did the old make a circle and divide the circumference by the diameter. However, we also did a variation on the Buffon's Needle experiment.
The problem goes like this: take a ruler and draw a bunch of parallel vertical lines that are two inches apart and a foot or two long. Then, take a two inch needle and toss it onto the grid and
determine if it crosses the gridlines or not. What do you think the probability is that the needle will cross the line?
If you do the work, you should get a number around 63.66%. However, try plugging in these values into this equation:
2t/c, where:
t = total number of tosses
c = total number of times the needle crosses a line
What do you get? Pi, exactly. Our whole class tried it, and with over 1000 tosses, our class average is 3.35. Pretty good, right?
Our instructor never went over the proof in class, though it involves geometry, calculus, and more advanced statistics. I think it has to do with the center point of the needle, and where it is in
relation to the gridlines. If you have a proof, please post it!
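The experiment is also easy to simulate. The sketch below (plain Python, written for this post, not from the class) drops virtual needles whose length equals the line spacing, with the needle's center uniform between lines, and applies the 2t/c formula. Like the classroom tally, it only lands near π, not exactly on it:

```python
import math
import random

def buffon_pi(tosses, seed=0):
    """Drop needles of length 2 onto lines 2 apart; return the 2t/c estimate."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(tosses):
        center = rng.uniform(0.0, 1.0)         # distance from needle center to nearest line
        theta = rng.uniform(0.0, math.pi / 2)  # needle angle against the lines
        if center <= math.sin(theta):          # half-length * sin(theta) reaches the line
            crossings += 1
    return 2 * tosses / crossings

print(buffon_pi(100_000))  # lands near 3.14, but it is a random estimate
```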
Additional Puzzle: A census taker walks up to a house, and records the house number. Then, he knocks on the door and a man answers the door. The census taker asks if anybody else lives with him. The
man responds that he lives with his three children. The census taker then asks for their ages. The man responds, "Their ages add up to the number on the door and their product is 36." The census
taker then says, "I need one more clue. Is the youngest child a twin?" The man says that the youngest child is not a twin. What are the kids' ages, and what is the number on the door?
Hint: Find all the possible combinations of three numbers multiplied together to get 36 (including ones where the smaller numbers are equal).
Rationalizable Strategic Behavior and the Problem of Perfection
This paper explores the fundamental problem of what can be inferred about the outcome of a noncooperative game, from the rationality of the players and from the information they possess. The answer
is summarized in a solution concept called rationalizability. Strategy profiles that are rationalizable are not always Nash equilibria; conversely, the information in an extensive form game often
allows certain "unreasonable" Nash equilibria to be excluded from the set of rationalizable profiles. A stronger form of rationalizability is appropriate if players are known to be not merely
"rational" but also "cautious."

"WHAT CONSTITUTES RATIONAL BEHAVIOR in a noncooperative strategic situation?" This paper explores the issue in the context of a wide class of finite noncooperative
games in extensive form. The traditional answer relies heavily upon the idea of Nash equilibrium (Nash [17]). The position developed here, however, is that as a criterion for judging a profile of
strategies to be "reasonable" choices for players in a game, the Nash equilibrium property is neither necessary nor sufficient. Some Nash equilibria are intuitively unreasonable, and not all
reasonable strategy profiles are Nash equilibria. The fact that a Nash equilibrium can be intuitively unattractive is well-known: the equilibrium may be "imperfect." Introduced into the literature by
Selten [20], the idea of imperfect equilibria has prompted game theorists to search for a narrower definition of equilibrium. While this research, some of which will be discussed here, has been
extremely instructive, it remains inconclusive. Theorists often agree about what should happen in particular games, but to capture this intuition in a general solution concept has proved to be very
difficult. If this paper is successful it should make some progress in that direction. The other side of the coin has received less scrutiny. Can all non-Nash profiles really be excluded on logical
grounds? I believe not. The standard justifications for considering only Nash profiles are circular in nature, or make gratuitous assumptions about players' decision criteria or beliefs. The
following discussion of these points is extremely brief, due to space constraints; more detailed arguments may be found in Pearce [18].
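The concept can be made concrete with a small sketch (an illustration added here, not code from the paper): in two-player games, rationalizability is closely tied to the iterated elimination of strictly dominated strategies, provided domination by mixed strategies is allowed. The toy implementation below checks only pure-strategy domination, which is a coarser test.

```python
def iterated_elimination(A, B):
    """Iteratively delete pure strategies strictly dominated by another
    pure strategy.  A[i][j] is the row player's payoff and B[i][j] the
    column player's when row plays i and column plays j.
    Returns the indices of the surviving strategies."""
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    while True:
        dead_rows = {i for i in rows
                     if any(all(A[k][j] > A[i][j] for j in cols)
                            for k in rows if k != i)}
        dead_cols = {j for j in cols
                     if any(all(B[i][k] > B[i][j] for i in rows)
                            for k in cols if k != j)}
        if not dead_rows and not dead_cols:
            return rows, cols
        rows = [i for i in rows if i not in dead_rows]
        cols = [j for j in cols if j not in dead_cols]

# Prisoner's dilemma: strategy 1 ("defect") strictly dominates 0 for both
# players, so one round of elimination pins down the unique profile.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]
print(iterated_elimination(A, B))  # ([1], [1])
```

In games where no strategy is dominated, the full set survives — which is exactly the sense in which rationalizable profiles need not be Nash equilibria.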
Truncating first-order Dyson-Schwinger equations in Coulomb gauge Yang-Mills theory
The non-perturbative domain of QCD contains confinement, chiral symmetry breaking, and the bound state spectrum. For the calculation of the latter, the Coulomb gauge is particularly well-suited.
Access to these non-perturbative properties should be possible by means of the Green's functions. However, Coulomb gauge is also very involved, and thus hard to tackle. We introduce a novel BRST-type
operator r, and show that the left-hand side of Gauss' law is r-exact. We investigate a possible truncation scheme of the Dyson-Schwinger equations in first-order formalism for the propagators based
on an instantaneous approximation. We demonstrate that this is insufficient to obtain solutions with the expected property of a linear-rising Coulomb potential. We also show systematically that a
class of possible vertex dressings does not change this result.
GECG Curriculum - Biomedical Engineering
Sector –28, Nr.Pashujaivik,
Tel: (079)29289540 ; Fax : (079)23243708
[email protected]
B.E FIRST YEAR
101 - MATHEMATICS
Differential Calculus
Successive Differentiation : Leibnitz Theorem, Taylor's And Maclaurin Expansions,
Indeterminate Forms.
Tracing Of Curves
Asymptotes (Parallel To Axis And Oblique) Tracing Of Cartesian, Parametric And
Polar Curves.
Partial Differentiation
Partial And Total Differential Coefficient, Euler's Theorem, Transformations,
Geometrical Interpretation Of Partial Derivatives, Tangent Plane And Normal
Line, Jacobians, Taylor's Expansions For
Two Variables, Errors And
Approximations, Maxima And Minima Of Functions
Lagrange's Method Of Undetermined Multipliers To Determine Stationary Values.
Integral Calculus
Reduction Formulae: Beta, Gamma And Error Functions, Elliptic Functions
Application Of Integration : Area Of A Bounded Region, Length Of A Curve,
Volume And Surface Area Of A Solid Of Revolution For Cartesian, Parametric And
Polar Curves.
Multiple Integrals
Double Integral, Change Of Order Of Integration, Transformation Of Variables By
Jacobian Only For Double Integration, Change To Polar Co-Ordinates In Double
Integrals Only. Triple Integral, Application Of Multiple Integration To Find Areas,
Volumes, C.G., M.I. And Mean Values.
Complex Numbers
De Moivre's Theorem And Its Applications, Functions Of Complex Variables:
Exponential, Hyperbolic And Hyperbolic Trigonometric Functions.
Infinite Series
Definition, Comparison Test, Cauchy's Integral Test, Ratio Test, Root Test,
Leibnitz's Rule For Alternating Series, Power Series, Range Of Convergence,
Uniform Convergence.
Matrix Algebra
Elementary Transformations And Rank, Inverse By Elementary Transformation,
Normal Form Of A Matrix, Consistency Of System Of Linear Equations, Solution
Of Systems Of Equations, Linearly Dependent Vectors In R3, Linear And
Orthogonal Transformations, Eigenvalues, Eigenvectors.
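The rank computation listed above can be sketched numerically; a minimal Python illustration (the syllabus itself does not prescribe a language) of rank via Gaussian elimination with partial pivoting:

```python
def matrix_rank(M, eps=1e-9):
    """Rank of a matrix via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n_rows, n_cols = len(A), len(A[0])
    rank = 0
    for c in range(n_cols):
        if rank == n_rows:
            break
        # Pick the largest pivot in column c among the unreduced rows.
        pivot = max(range(rank, n_rows), key=lambda r: abs(A[r][c]))
        if abs(A[pivot][c]) < eps:
            continue  # column already reduced; no new pivot here
        A[rank], A[pivot] = A[pivot], A[rank]
        for r in range(n_rows):
            if r != rank and abs(A[r][c]) > eps:
                factor = A[r][c] / A[rank][c]
                for k in range(c, n_cols):
                    A[r][k] -= factor * A[rank][k]
        rank += 1
    return rank

# Row 2 is twice row 1, so only two rows are linearly independent.
print(matrix_rank([[1, 2, 3], [2, 4, 6], [1, 0, 1]]))  # 2
```

The same elimination, applied to an augmented matrix, decides consistency of a linear system, which is how the two topics in this unit connect.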
Differential Equations And Modeling
Modeling Of Engineering System (Leading To Ode Of First Order, First Degree,
Including Orthogonal Trajectories), Exact Differential Equations And Integrating
Factors, Unified Approach To Solve First Order Equations, Linear, Reducible To
Linear, Applications Including Modeling, Solution Of First Order And Higher
Degree Differential Equations (Clairaut's Equations Only)
Reference Books
1 Elementary Engineering Mathematics & Higher Engineering Mathematics By B.S.
Grewal (Khanna Publishers, Delhi)
2 Mathematics For Engineer's By Chandrika Prasad (Prasad Mudranalaya, Allahabad)
3 Engineering Mathematics I & II By Wartikar & Wartikar (Pune Vidyarthi Griha
Prakashan, Pune)
4 Engineering Mathematics (II) Advanced Engineering Mathematics By H.K. Das (S.
Chand & Co. Ltd., New Delhi)
5 Engineering Mathematics By Vol. I,II & III By Kandaswamy, Thilagavathi And
Gunavathi (S. Chand & Co. Ltd., New Delhi)
6 A Text Book Of Engineering Mathematics By Srivastava And Dhawan
(Dhanpat Rai & Sons, Delhi)
7 Engineering Mathematics Vol. I & II By S.S. Sastry (Phi)
8 A Text Book Of Engineering Mathematics By Bali, Saxena & Iyengar (Laxmi
Publications, New Delhi)
9 Engineering Mathematics I,II(F.E.Sem.I & II) By Kumbhojkar G. V. (C. Jamnadas
& Co., Bombay)
102 - OFFICE AUTOMATION
Installation & Use Of Single User Operating System
Note: Any Two Widely Used Operating System May Be Used As Examples.
Hardware Requirements For Microsoft Dos And Windows Installation, Ms-Dos
Commands And Programming, Customizing And Configuring Windows, Configuring
The Taskbar, The Start Menu, Using The Windows Interface To Create, Print, And
Store A File Using Explorer, Managing Disk Resources And Utilities, Manage Long
And Short Filenames In A Mixed Environment, Disk Defragmenter, Scan Disk,
Running Applications, Installation Methods, Architecture And Memory, The Memory
Usage Of A Ms-Dos Based Application Operating In Windows.
Microsoft Word
Overview Of Microsoft Word, Formatting Text And Documents, Headings, Footers
And Footnotes, Tabs, Tables And Sorting, Working With Graphics, Templates,
Wizards And Sample Documents, Tools – Spell Check, Mail Merge.
Fundamental Of Worksheet Usage
Worksheet Fundamentals, Embedding, Enhancing And Modifying Charted Data,
Formatting Worksheet Data, Producing List Type Information, Customization Of The
User Interface For Optimal Performance, Data Organization, Data Analysis, Data
Manipulation, Data Access, Querying External Database Form Within The
Worksheet, Import And Export Of Data Integration With Other Applications.
Microsoft Power Point
Overview Of PowerPoint, Creating And Formatting Presentations, Working With
Text, Working With Graphics.
Usage Of Internet
Getting Connected To Internet And Visit Of Popular Web Sites.
Practical and Term work
The practical and Term work will be based on the topics covered in the syllabus.
Minimum 10 experiments should be carried out.
Mastering Dos 6.2 – By Robbins(BPB)
Mastering Windows-95 By Coward(BPB)
The Compact Guide To Microsoft Office Professional – Ron Mansfield (BPB)
Books Online By Microsoft
103 - MECHANICS OF SOLIDS
Introduction : Scalar and Vector Quantities, composition and resolution of vectors,
definition and units of space, time, mass, force, the science of mechanics, SI Units.
Statics: Principles of statics, particle, rigid body, coplanar, concurrent and non-concurrent, parallel and non-parallel forces, composition and resolution of forces,
moment, couples and their properties, combination of coplanar couples and forces,
equilibrant, equilibrium, free body diagrams, analytical and graphical conditions of
equilibrium for coplanar force systems.
3 Truss: Simple planar trusses and analysis for member forces, methods of
joints and methods of sections
4 Distributed forces, center of gravity and moment of inertia: Center of gravity of
lines, plane areas, volumes and bodies, Pappus-Guldinus Theorem, Moment of
Inertia of Areas, polar moment of inertia, radius of gyration, parallel axes theorem,
moment of inertia of bodies
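The parallel axes theorem in item 4 is easy to verify numerically; a small Python sketch (illustrative only, not part of the syllabus):

```python
def parallel_axis(I_centroid, area, d):
    """Parallel axes theorem for area moment of inertia: I = I_c + A*d**2."""
    return I_centroid + area * d ** 2

# Rectangle b x h: shifting the centroidal axis to the base (d = h/2)
# turns I_c = b*h**3/12 into the textbook result b*h**3/3.
b, h = 0.1, 0.2
I_c = b * h ** 3 / 12
I_base = parallel_axis(I_c, b * h, h / 2)
print(abs(I_base - b * h ** 3 / 3) < 1e-12)  # True
```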
5 Friction: Theory of friction, static and sliding friction, laws of friction, angle and coefficient of friction, inclined plane friction, ladder friction, wedges, belt and rope friction
6 Strength and elasticity: Stresses: axial, normal, in-plane, tensile, compressive, shear,
flexural, thermal and hoop, complementary shear.
7 Strain: Linear, shear, lateral, Thermal and volumetric, Poisson’s ratio
8 Elasticity: Elastic, homogeneous, isotropic materials; limits of elasticity and
proportionality, yield limit, ultimate strength, plastic state, proof stress, factor of
safety, working stress, load factor, section of composite materials, prismatic and non-prismatic sections
9 Bending Moment and Shear Force: Types of load, beams and supports, bending
moment and shear force diagrams in statically determinate beams subjected to
concentrated, uniformly distributed loading, relation between bending moment, shear
force and rate of loading, point of contraflexure.
10 Stress in Beams: Theory of simple bending, bending stresses and their distribution,
moment of resistance, modulus of section, distribution of shear stress in different sections.
11 Torsion: Shaft subjected to Torsion, shear stress, shear strain, angle of twist, power
transmitted by shaft.
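The last item, power transmitted by a shaft, reduces to P = 2πNT/60 with N in rpm and T in N·m; a quick Python check (illustrative only):

```python
import math

def shaft_power_kw(torque_nm, rpm):
    """Power transmitted by a rotating shaft, P = 2*pi*N*T/60, in kilowatts."""
    return 2 * math.pi * rpm * torque_nm / 60 / 1000

# A shaft carrying 500 N*m at 1200 rpm transmits about 62.8 kW.
print(round(shaft_power_kw(500.0, 1200.0), 2))  # 62.83
```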
Term work:
At least 8 experiments based on the above syllabus and solutions of at least 20 problems.
Text Books:
1. Applied Mechanics by S.B.Junarkar
2. Engineering Mechanics by A.K.Tayal
3. Mechanics of structures Vol-I by S.B.Junarkar & H.J.Shah
4. Strength of Materials by Ramamurth
104 - PROGRAMMING PARADIGM
1 Introduction to C
Data representation, Flow Charts, Algorithms
Overview of C
Constants, variables and Data types
Operators and Expressions
Managing Input & Output Operators
Decision Making and looping
Handling of Character strings
User defined functions
Structures and Unions
File Management in C
Dynamic Memory Allocation and Linked List
The Preprocessors
2 Introduction to C++
- Object Oriented Concepts:
Object oriented Development, Objects and classes, generalization and
Inheritance, aggregation
- Object Oriented Programming style and languages:
Object oriented style, reusability, extensibility, class definitions, creating
objects, calling operations, using inheritance, implementing associations,
object-oriented language features.
- Object oriented Languages : an example
Basic programming output using cout, preprocessor directives, variables,
input and output, Manipulators, type conversion, operators, library functions.
- Structures, enumerated data types, simple functions, passing arguments,
overloaded functions, inline functions, default arguments
- A simple class, objects as physical objects and as data types, constructors,
constructor objects as function arguments, returning objects from functions, structures
and classes, classes, objects and memory, static class data, array fundamentals, arrays
as class member data, array of objects, strings.
- Function overloading and operator overloading, unary operators, overloading
binary operators, data conversions
- Inheritance, class hierarchies, public and private inheritance, levels of
inheritance, multiple inheritance, containership, classes within classes, pointers,
memory management, new and delete, pointers to objects, pointers to pointers,
debugging pointers.
- Virtual function, friend functions, static functions, assignment and copy
initialization, this pointer, multiple objects, file pointers, disk I/O, object I/O, I/O
with multiple objects, file pointers, disk I/O with member functions, error
handling, redirection, command line arguments, printer output, overloading the
extraction and insertion operations, multi-file-programs, using the project feature.
1 Programming in C by E.Balagurusami (TMH-95)
2 Programming with C++ by E.Balagurusami(TMH-95)
3 Programming with C by Kernighan and Richie
4 Object Oriented Programming in Turbo C++ by Robert Lafore(Galgotia-1994)
105 – BIOCHEMISTRY
General principles. Buffers. Electroanalytical methods - Potentiometric and
conductometric. Photometry. Chromatographic methods of separation - gel permeation,
ion exchange, reverse-phase and affinity chromatography, HPLC and FPLC.
Centrifugation. Radiotracer techniques. Gel electrophoresis techniques - molecular weight
determination, electroblotting and electroelution, capillary electrophoresis, API-electrospray and MALDI-TOF mass spectrometry. Analysis of carbohydrates, lipids,
proteins and nucleic acids. Enzyme and cell immobilization techniques. Immunochemical
methods of analysis. Biosensors and diagnostics.
Text/references :
D. Holme & H. Peck; Analytical Biochemistry. Longman, 1983.
T.G. Cooper; The Tools of Biochemistry. Wiley Intersciences, 1977.
R. Scopes; Protein Purification - Principles & Practices. Springer Verlag, 1982.
Selected readings from Methods in Enzymology, Academic Press
106 - ENGINEERING GRAPHICS
Part I
Plane Geometry And Machine Parts
Introduction To Engineering Graphics : Principles Of Projection Lines And
Dimensioning. B.I.S. Code Of Practice (SP 46), Scale, Representative Fraction,
Plane Scale, Diagonal Scale, Vernier Scale And Scale Of Chords.
Engineering Curves
Classification Of Engineering Curves. Construction Of Conics, Cycloidal Curves,
Involute And Spiral.
Loci Of Points
Simple Mechanism Like Slider Crank Mechanism Four Bar Chain Mechanism
Fastening And Connecting Methods
Screw Threads, Bolts, Nut, Stud, Locking Devices, Simple Riveted And Welded
Joints, Pipe Fitting, Couplings, Cotter Joints, Pin Joints.
Electrical, Electronics, Chemical And Pipe Drawing, Basic Notation And Symbols For
Simple Flow Diagram.
Part 2 Solid Geometry
Introduction To Projection Of Point, Line And Plane :
Projection Of Line Inclined To Both Planes And Simple Cases, True Length Of
Straight Line And Its Inclination With Reference Planes (Traces Are Not Included),
Projections Of Perpendicular And Oblique Planes.
Introduction To Projection Of Solids, Section Of Solids And Interpenetration Of Solids :
Classification Of Solids, Projection Of Right And Regular Solids With Their Axis
Inclined To Both Planes, Projection Of Sphere, Section Of Pyramid, Cone, Prism
And Cylinder, Method Of Determining Line Of Intersection And Curve Of
Intersection Of Prism-Prism, Cone-Cylinder, Cylinder-Cylinder, Cylinder-Cone,
Development Of Surfaces
Parallel Line Development, Radial Line Development, Development Of Sphere By
Zone Method And Lune Method.
Part 3 Orthographic Projections
Orthographic Projection :
Conversion Of Pictorial Views Into Orthographic Views, Type Of Sections (Full,
Half, Offset, Broken, Removed, Revolved) Section Views, Orthographic Reading,
Missing Views And Missing Line Problems.
9 Isometric View
Conversion Of Orthographic Views Into Isometric Views.
10 Introduction To Computer Aided Drafting
Advantages Of Cad, Elements Of Cad, Components Of Computer, Input And Output
Devices, Types Of Software, Basic Functions, Drafting Softwares.
Part 4 Civil Engineering Drawing
Introduction Of Civil Engineering Drawing.
Plan Elevation, Section And Foundation Plan Of A Residential, Public Bldg.,
Industrial Buildings.
Typical Layout Of Residential, Public And Industrial Buildings.
Drawing Of Building Details Such As
Roofs & Roof Trusses
Column Footings
Simple Machine Foundations
Typical Wall Sections (For 20 Cm, 30 Cm, 40 Cm Thick Wall) Through Door, Window, Steps, Etc.
RCC Lintel With Chhajjas
Abbreviation And Symbols Of Building Items, Water Supply And Sanitary,
Electrification, Etc.
Electric Wiring Diagrams For Residential, Public And Industrial Buildings And
Domestic Appliances, Standard Electrical Symbols, Main And Distributory Boards,
Simple Earthing Etc.
Term Work :
Part 1,2 And 3
Each Candidate Shall Submit A Set Of The Following Sheets Certified By The
Principal Of The College That They Have Been Executed In A Satisfactory Manner In
The Drawing Halls Of The College.
1 One Sheet Of Engineering Curves.
2 One Sheet Of Loci Of Points.
3 One Sheet Of Projections Of Points, Line And Plane Surfaces.
4 One Sheet Of Orthographic View With Section (Two Problems One In 1st Angle
And Other In 3rd Angle System Of Projections).
5 One Sheet Of Reading Of Orthographic Views And Missing Lines/Missing Views.
6 One Sheet On Projections Of Solids And Sections Of Solids.
7 One Sheet On Development Of Surfaces And Interpenetration Of Surfaces.
8 One Sheet On Isometric Projections/Views.
9 Sketch Book Containing Sketches Of Machine Parts, Electrical, Electronics,
Chemical And Pipe Drawing, Lines, Dimensioning, Scale, Students Are Given
Complete Understanding Of BIS Code SP 46.
Part 4
One Sheet On Residential Building Showing Electric Wiring Diagram And
Domestic Applications Standard Electric Symbols, Main And Distribution Boards
And Simple Earthing.
One Sheet On Industrial Structure Layout.
Reference Books :
Engineering Drawing Vol. I & II P. J. Shah
Engineering Drawing N. D. Bhatt
Civil Engineering Drawing Gurucharansingh
Civil Engineering Drawing Malik And Meo
Machine Drawing N.D. Bhatt
107 - ELEMENTS OF MECHANICAL AND CIVIL ENGG.
Mechanical Engineering
1. Introduction : Force and Mass, Pressure, Work, Power, Energy, Heat,
Temperature, Units of Heat, Specific Heat capacity, Interchange of heat, Change
of state, Mechanical equivalent of heat, Internal energy, Enthalpy, Efficiency,
Types of prime movers, Sources of Energy.
2 Fuel and combustion: Introduction, Classifications, Solid fuel, Liquid fuel, Gaseous
fuel, Combustion, calorific values.
3 Properties of gases: Gas laws: Boyle's law, Charles's law, Combined gas law, Gas
constant, Non-flow process, Constant volume process, constant pressure process,
Internal energy, Relations between Cp and Cv, Enthalpy, Isothermal process,
Polytropic process, Adiabatic process.
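The combined gas law in item 3 can be put to work directly; a minimal Python sketch (illustrative, not part of the syllabus):

```python
def final_pressure(p1, v1, t1, v2, t2):
    """Combined gas law p1*v1/t1 = p2*v2/t2 (temperatures in kelvin),
    solved for the final pressure p2."""
    return p1 * v1 * t2 / (t1 * v2)

# Halving the volume while heating from 300 K to 450 K triples the pressure.
print(final_pressure(100e3, 2.0, 300.0, 1.0, 450.0))  # 300000.0
```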
4 Properties of steam: Introduction, Steam formation, Enthalpy, Specific volume of
steam, Steam tables, Internal energy, Non-flow process, Throttling calorimeter,
Separating calorimeter and Combined calorimeter.
5 Heat engine: Thermal prime movers, Elements of heat engine, sources of heat,
Working substances, Converting machines, Sink of heat, Classification of heat
engine, Heat engine cycles, Carnot cycle, Carnot cycle with vapour, Rankine
cycle, Otto cycle, Diesel cycle.
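The Carnot cycle in item 5 sets the efficiency ceiling η = 1 − Tc/Th for any heat engine; a quick Python illustration (not part of the syllabus):

```python
def carnot_efficiency(t_hot, t_cold):
    """Ideal Carnot efficiency between reservoirs at t_hot and t_cold (kelvin)."""
    return 1.0 - t_cold / t_hot

# Source at 600 K, sink at 300 K: no heat engine can exceed 50% efficiency.
print(carnot_efficiency(600.0, 300.0))  # 0.5
```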
6 Steam Boilers: Introduction, Classification, Simple vertical boiler, Vertical multi-tubular boiler, Cochran type, Lancashire boiler, Locomotive boiler, Marine
Boiler (Scotch type), Babcock and Wilcox boiler, High Pressure boiler, Boiler Details, Performances, Functions of different Mountings and accessories.
7 Internal combustion Engine: Introduction, Classification, Engine details, Otto four
stroke cycle, Two stroke cycle, Difference between two and four stroke cycles,
Indicated Power Efficiency.
8 Speed Control: Introduction, Governors, Steam engine governing, I.C. engine
governing, Flywheel
9 Pump: Introduction, Reciprocating pump-Types & Operations, Bucket Pump, Air
chamber, Centrifugal pump and type of centrifugal pump, Priming, Rotary pump.
10 Air compressor: Introduction, Reciprocating compressor, Operation of
Compressor, Work for compression, Power Required, Reciprocating compressor
efficiency, Multi stage reciprocating compressor, Rotary compressor.
11 Refrigeration and Air Conditioning: Introduction, Refrigerant, Types of
refrigerators, Vapor compression refrigerators, Window air conditioners.
12 Transmission of motion and Power: Introduction, Methods of drive, Power
transmission Elements, Shafting, Belt Drive, belting, pulleys, Velocity ratio of
pulleys, Power transmitted by a belt, Chain drive, Friction drive, Gear drive, Spur
gear, Helical gear, Spiral gear, Bevel gear, Worm gear, Rack and Pinion, Velocity
ratio of toothed gears, train of wheels, Power transmission by gearing, Bearing
classification, Bush bearing, Ball and roller bearings.
Term Work: Based on the above syllabus, at least five practicals.
Text books:
1 Elements of mechanical engineering by Hajra Chaudhary
2 Elements of mechanical engineering by Mathur & Mehta
Civil Engineering
Surveying: Introduction: Surveying and Leveling, Principle of Surveying, Common
surveying Instruments: Chain, Tapes, Compass, Theodolite, Measurements involved
in surveying and leveling.
Building Construction: Different types of building planning, types of load on
structures, Load bearing and frame structures and Building components,
requirements for different building i.e. Residential, Commercial, Industrial and
Public Building, Masonry construction: Construction of walls, various types of
bonds, Concrete constructions: Common concrete elements like slabs, beams,
column, foundation, lintel etc.
Environmental engineering: Introduction to global environmental problems,
Ecology and Biosphere, Green House effect, Depletion of Ozone and Acid rain.
Transportation Engineering: Introduction, Roads-Types and Material of Construction.
Railways-Types and methods of construction, Classification of Indian Railways, Role
in Transportation
Term Work: Minimum five basic assignments to be carried out.
Text Books:
1 Elements of Civil Engineering by M.B.Gohil & J.N.Patel
Reference Books:
Building construction by B.C.Punmia
Surveying and leveling by T.P.Kanetkar
Environment Engineering by B.S.Kapoor
109 - WORKSHOP PRACTICES
Instruction / Demonstration
Should Be Given For Each Of Following Shops / Trade Which Include Importance
Of The Shop / Trade In Engineering, New Materials Available, Tools / Equipments
Required Indicating The Use Of Each Tool / Equipment, Methods Of Processing
Any Special Machines, Power Required Etc.
1. Joining Process
2. Sheet Metal Work
3. Plumbing (Metallic & Non-Metallic Pipe Fittings)
4. Electro Plating
5. Metal Cladding & Painting
6. Manufacture Of Plastic Products
7. Metal Machining , Turning, Drilling, Grinding Etc.
8. Carpentry / Pattern Making
9. Fitting / Assembly Practice
10. Forging And Hot Working Processes
11. Moulding And Casting Processes
Exercise And Term Work:
Each Student Is Required To Prepare Simple Exercises In Following So As To
Have A Feeling Of How The Jobs / Parts Are Prepared And Use Of Tools /
Equipments (Any Eight)
1 Electroplating - 1 No
2 Painting Of Metal Piece - 1 No
3 Sheet Metal Part - 1 No
4 Plumbing - 1 No
5 Arc Welding / Gas Welding / Welding - 1 No
6 Soldering / Brazing - 1 No
7 Drilling Practice - 1 No
8 Fitting / Assembly - 1 No
9 Carpentry Practice - 1 No
10 Forging Practice - 1 No
Over And Above These Exercises Each Student Is Required To Prepare A Laboratory
Report On Instruction / Demonstration And Exercises Prepared By Him As A Part Of His Term Work.
B.E SEM III
301 - MATHEMATICS-III
1. Fourier series: Periodic functions, Dirichlet's conditions, Fourier series, Euler's
formula. Fourier expansion of periodic functions with period 2π, Fourier series of
even and odd functions. Fourier series of periodic
functions with arbitrary periods, half range Fourier series. Harmonic analysis.
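Euler's formulas for the coefficients lend themselves to a numerical check; the Python sketch below (illustrative only — the course itself works analytically) approximates a_n and b_n by the midpoint rule and recovers the classic 4/(nπ) coefficient of a square wave:

```python
import math

def fourier_coefficients(f, n, period=2 * math.pi, samples=100_000):
    """Approximate the Fourier coefficients a_n and b_n of f over one
    period, using Euler's formulas and the midpoint rule."""
    h = period / samples
    a = b = 0.0
    for k in range(samples):
        t = (k + 0.5) * h
        w = 2 * math.pi * n * t / period
        a += f(t) * math.cos(w) * h
        b += f(t) * math.sin(w) * h
    return 2 * a / period, 2 * b / period

# Odd square wave on [0, 2*pi): its series has b_n = 4/(n*pi) for odd n.
square = lambda t: 1.0 if t < math.pi else -1.0
a1, b1 = fourier_coefficients(square, 1)
print(round(b1, 4))  # 1.2732
```

The same routine, run for n = 1, 2, 3, …, is exactly the harmonic analysis named at the end of this unit.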
2. Laplace Transform: Motivation, definition, Linearity property, Laplace
transforms of elementary functions, shifting theorem. Inverse Laplace
transforms, Laplace transforms of derivatives and integrals, Convolution
theorem, Application of Laplace theorems in solving ordinary differential
equations, Laplace transforms of periodic, unit step and impulse functions.
3. Ordinary differential equations: Linear differential equations of higher
order with constant coefficients, Method of variation of parameters, Higher
order linear differential equations with variable coefficients (Cauchy's and
Legendre forms), Simultaneous linear differential equations, Models for the
real world problems and their solutions in particular, Modeling of electric
circuits, Deflection of beams, Free oscillations, forced oscillations,
Resonance, Solution of Bessel and Legendre equations by series method,
Definition and properties of Bessel's functions, Legendre's polynomials and
properties like recurrence relations, orthogonality.
4. Partial Differential equations: formation of differential equations, Directly
integrable equations, Models of engineering problems leading to first order
partial differential equations, Lagrange's equation, Solutions of special type of
first order partial differential equations, homogeneous linear equations with
constant coefficients, application of partial differential equations, Boundary
value problems and method of separation of variables, Modeling of vibration
of a stretched string - one dimensional wave equation.
5. Numerical methods: Motivation, Errors, Truncation error, Round-off error,
Absolute error, Relative error and percentage error, solution of algebraic and
transcendental equations by Newton-Raphson method, Bisection, False
position, iteration and extended iteration methods, Convergence of these methods.
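The Newton-Raphson iteration named above, x_{n+1} = x_n − f(x_n)/f'(x_n), is easy to sketch in code; a minimal Python illustration (the syllabus itself does not prescribe a language):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by the iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    raise RuntimeError("Newton-Raphson did not converge")

# Root of x**2 - 2 starting near 1.5, i.e. sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
print(round(root, 6))  # 1.414214
```

Bisection and false position work the same way with a bracketing interval instead of a derivative, trading Newton's quadratic convergence for guaranteed convergence.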
Reference Books:
1. Higher Engineering mathematics, by Dr.B.S.Grewal, Khanna publishers,
New Delhi.
2. Engineering Mathematics vol- II, by Prof. Wartikar & Wartikar Pune
Vidhyarthi Gruh, Pune.
3. Engineering Mathematics by Dhavan & Shrivastava, Dhanpatrai & Sons,
New Delhi.
4. Engineering Mathematics- Vol I & II, by S.S.Sastry, prentice Hall of India,
New Delhi.
5. Mathematics for engineering students, by P.D.S Verma kalyani publishers,
Ludhiyana and Delhi.
6. A textbook on engineering by N.P.Bali, Ashok Saxena & Iyengar, Laxmi
publications (p) ltd. New Delhi.
7. Engineering mathematics vol-I, II & III by Kandaswamy, Thilagvati &
Gunavathi, S.Chand & Co. (p) ltd, New Delhi.
8. A first course in mathematics for Engineering, Mathematics for engineers,
Advanced Mathematics for engineers by chandrika prasad, Prasad
mudranalaya, 7 Beli Avenue, Allahabad.
9. Engineering mathematics vol I, II, III&IV by Kumbhojkar,
G.V.C.Jamnnadas & Co., Bombay.
10. Engineering Mathematics- Vol I by Chintamani das, New central book
agency (P) ltd, Calcutta.
11. Advanced Engineering Mathematics (Fifth Edition), by Erwin Kreyszig,
Wiley Eastern Ltd., New Delhi.
302 - CIRCUIT THEORY
1. Circuit Concept: Charge and Energy, The relationship of field and circuit
concepts, Capacitance, Resistance and Inductance parameters, Units and Scaling,
Approximation of a physical system as a circuit.
2. Network Convention: Reference direction for current and voltage, Active element
convention, Dot convention, Topological Description.
3. Network Equations: Kirchhoff's laws, The number of network equations, source
transformation, loop variable analysis, Node variable analysis, Determinants,
Minors, and Gauss Methods, Duality, State variable analysis.
4. Initial conditions in network: Initial condition in elements, Geometrical
Interpretation of Derivatives, Procedure for Evaluating initial conditions, Initial
state of network.
5. Solution of differential equation by classical and Transform methods: Second
order equation, Higher order equation, Network excited by external energy
sources, response as related to s-plane location of the roots.
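The simplest network excited by an external source is a series RC circuit driven by a voltage step; its classical solution can be evaluated directly (a Python illustration, not part of the syllabus):

```python
import math

def rc_step_response(r, c, v_step, t):
    """Capacitor voltage when a step v_step drives a series RC circuit
    from rest: v(t) = v_step * (1 - exp(-t / (r * c)))."""
    return v_step * (1.0 - math.exp(-t / (r * c)))

# 1 kOhm and 1 uF give a time constant of 1 ms; after one time constant
# the capacitor has reached about 63.2% of the step.
print(round(rc_step_response(1e3, 1e-6, 5.0, 1e-3), 3))  # 3.161
```

The single real root at s = −1/RC in the s-plane is what makes this response a decaying exponential, connecting the classical and transform viewpoints in this unit.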
6. Transform of other signal waveforms: Unit step function, Ramp and impulse
function, waveform synthesis, initial value and final value theorems.
7. Impedance Functions and network theorems: concept of complex frequency,
Transform impedance and transform circuits, series and parallel combinations of
elements, superposition and reciprocity, Thevenin and Norton theorem.
8. Network Functions: Poles and Zeros: Terminal pairs, network functions for the
one port and two port, calculation of network function, poles and zeroes of
network function, Restrictions on poles and zero locations for driving point
function, Time domain behavior from the pole and zero plot, Stability of active networks.
9. Two port parameters: Relationship of Two port variables, Short Circuit
Admittance parameter, The open circuit impedance parameter, Transmission
parameters, The Hybrid parameters, Relationships between parameter sets,
parallel connection of two-port networks
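The open-circuit impedance parameters in item 9 take a particularly clean form for a T network, which makes a handy sanity check (an illustrative Python sketch, not part of the syllabus):

```python
def t_network_z_params(za, zb, zc):
    """Open-circuit impedance (z) parameters of a T network with series
    arms za, zb and shunt arm zc:
    z11 = za + zc, z12 = z21 = zc, z22 = zb + zc."""
    z11, z12 = za + zc, zc
    z21, z22 = zc, zb + zc
    return z11, z12, z21, z22

# A passive reciprocal network always has z12 == z21.
print(t_network_z_params(10.0, 20.0, 30.0))  # (40.0, 30.0, 30.0, 50.0)
```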
10. Sinusoidal steady state analysis: The sinusoidal steady state, The sinusoid and
e^(jωt), solution using e^(jωt), solution using real and imaginary parts, phasor and
phasor diagrams.
The Practical and Term work will be based on the topics covered in the syllabus.
1. Network Analysis (3rd Edition) by Van Valkenburg
2. Network Analysis by G.K.Mithal
303 - HUMAN ANATOMY AND PHYSIOLOGY
1. Cell and Tissues:
Physical Structure of the Cell; Functional System of the cell-Transport of Ions and
Molecules through the cell membrane; Membrane Potentials and Action Potentials;
Inhibition of Excitability; Recording Membrane potentials and Action potentials.
2. Respiratory System:
Structure of Respiratory tract, Lungs, Diaphragm. Mechanics of Pulmonary
Ventilation, Pulmonary Volumes and Capacities. Physical Principle of Gas Exchange.
Pulmonary function testing. Artificial Respiration.
3. Cardiovascular System:
Structure of Heart, Heart valves, Arteries, Veins, Coronary Circulation.
Heart as a pump, Physiology of Cardiac muscle, Cardiac Cycle. Rhythmic excitation of
the heart. Control of excitation and conduction in the heart. Introduction to ECG and
cardiac activity. Physics of Blood pressure, flow and resistance. Vascular
distensibility and functions of Arterial and Venous Systems. Heart rate and normal
Heart sounds.
4. Body Fluid:
Blood, its composition and functions. Various cells and their structures, numbers, and cell
counting. Haemoglobin and its estimation, Anaemia, Blood counts and ESR.
5. Skeletal and Muscular System:
Structure and Formation of bone; Types of bones, joints, Classification of movements;
Classification of muscles- Muscle contraction mechanism, EMG.
6. Nervous System:
Outline of Cranial and Spinal nerves. Structure of Spinal Cord and different Brain parts.
Vertebral column and Cranial cavity. Excitation of skeletal muscle - Neuromuscular
transmission, Excitation-Contraction Coupling, Contraction of Smooth muscle. General
design of Nervous System - CNS - its function, Synapses, Receptors, Types of Sensation.
7. Excretory System:
Structure of Kidney, Formation of Urine, Concentration and Dilution of Urine, Renal
Function Tests, Artificial Kidneys, Dialysis.
8. Special Senses:
Vision: Eye as a camera, Mechanism of accommodation, Visual acuity, Ophthalmoscope,
Colour vision, Perimetry.
Hearing: Tympanic membrane and the Ossicular system, the cochlea, Hearing mechanics
and abnormality, Deafness, Audiometry.
9. Reproductive System:
Spermatogenesis; Semen; Ovarian cycle physiology.
10. Endocrine System:
Physiological actions of the hormones secreted by: Pituitary, Thyroid, Parathyroid, Islets
of Langerhans, Adrenal, Testes and Ovaries. Bio feedback
mechanism of hormone regulation. Homeostasis - Regulation of the Internal Environment.
11. Functions of the Skin.
Term Work and Practicals shall be based on the above syllabus.
1. Anatomy and Physiology in Health and Illness (Ross and Wilson).
By - Anne Waugh, Allison Grant.
2. Illustrated Physiology.
By- Mc Naught, Callander.
3. Human Anatomy: Regional and Applied.
By- Chaurasia, B.D.
4. Physiology of Human Body.
By- Guyton.
5. Engineering Principles of Physiological Functions.
By- Chneck.
6. Grants Atlas of Anatomy.
7. Principles of General Anatomy.
By - A.K. Datta.
8. Human Anatomy and Physiology, 1995.
By - Van Wynsberghe, D., Noback, C.R. and Carola, R.
304 - BASIC ELECTRONICS
1. Introduction to electronics theory: atomic structure, terminologies, Bohr's
postulates, electron theory, energy band diagram, ionization potential, metals,
semiconductors, insulators.
2. Semiconductor theory: transport phenomenon in semiconductors-mobility,
conductivity, drift, diffusion, concept of hole, semiconductor materials, types of
semiconductors, mass action law, hall effect, generation and recombination of
charges, continuity equation, potential variation in graded semiconductor.
3. Semiconductor diodes: introduction, ideal diode, resistance levels, equivalent
circuits, switching characteristics, notation of diode and its testing, zener diode,
LEDs, diode arrays,
4. Diode applications : Load line analysis, diode approximations, series diode
config. With DC inputs, parallel and series – parallel config.AND / OR gates,
sinusoidal inputs, half wave and full wave rectifications, clipper, clampers, zener
diode, voltage multipliers.
5. Special purpose devices: schottky barrier diodes, varactor diodes, power diodes,
tunnel diodes, photodiodes, photoconductive cells, IR emitters, LCD, solar cells,
6. Bipolar junction transistor: construction, operation, various configurations, limits
of operation, notation, specifications and their use, testing, casing and terminal identification.
7. DC biasing of BJTs: operating point, fixed bias ckt., emitter stabilized ckt.,
voltage divider bias, dc bias with voltage feedback, miscellaneous bias
configurations, design operations, switching networks, bias stabilization.
8. BJT transistor modeling: amplification in the AC domain, transistor modeling,
important parameters: Zi, Zo, Av, Ai, The re transistor model, hybrid equivalent
model, graphical determination of h-parameters, variation of transistor parameters.
9. Field Effect Transistors: construction and char. of JFET, transfer char, important
relationship, depletion type MOSFET, enhancement type MOSFET, MOSFET
handling, VMOS, CMOS.
10. FET biasing: fixed bias, self bias, voltage divider bias, depletion type MOSFET,
enhancement type MOSFET ,combination networks, design, trouble shooting,
universal JFET bias curve.
11. Discrete and IC manufacturing techniques: introduction, semiconductor materials,
discrete diodes, transistor fabrication, integrated circuits, monolithic integrated
ckt., production cycle, thin film and thick film integrated circuits, hybrid
integrated ckts.
The Practical and Term work will be based on the topics covered in the syllabus.
Text book:
1. Millman - Halkias, "Integrated Electronics - Analog and Digital Circuits and
Systems", Tata McGraw-Hill.
2. Robert Boylestad, "Electronic Devices and Circuit Theory", PHI edition.
305 - DIGITAL TECHNIQUES
1. Introduction To Digital Circuits:
Binary systems, number base conversion, octal and hexadecimal numbers,
Complements, Binary codes, binary storage and registers, binary logic, Boolean
algebra and logic gates, basic theorems and properties of Boolean algebra,
Boolean functions, canonical and standard forms of Boolean functions, digital
logic gates, simplification of Boolean functions, map method,
Two to six variable maps, NAND-NOR implementation and other two level
implementation, don’t care conditions, tabulation method, determination and
selection of prime implicants.
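A map-method simplification can always be verified exhaustively, since a two-variable function has only four input rows. The function below is a made-up example, not one from the syllabus: the sum of minterms F = A'B + AB' + AB should reduce to the two-literal form A + B.

```python
from itertools import product

# K-map style simplification checked against the full truth table:
# F = A'B + AB' + AB  should equal the minimized form  A + B.
def F(a, b):
    return ((not a) and b) or (a and (not b)) or (a and b)

def F_min(a, b):
    return a or b

# Exhaustive check over all input combinations (the whole truth table).
for a, b in product([False, True], repeat=2):
    assert F(a, b) == F_min(a, b)
print("F(A,B) = A'B + AB' + AB  ==  A + B  for all inputs")
```

The same brute-force idea scales to the five- and six-variable maps in the syllabus (2^5 and 2^6 rows), which is why it is a handy cross-check on hand-drawn K-map groupings.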
Combinational Logic Circuits:
Adders, subtractors, Code converters, Analysis procedure, multilevel NAND-NOR
circuits, EX-OR and equivalence functions. Binary and decimal parallel adders,
magnitude comparator, Decoder, multiplexer, ROM and PLA.
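The binary parallel adder above is just chained full adders. A minimal behavioral model (my own illustration, not code from the syllabus) that checks the 4-bit ripple structure against ordinary integer addition:

```python
# Gate-level full adder: sum = a XOR b XOR cin, carry = majority(a, b, cin).
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_add(x, y, bits=4):
    """4-bit binary parallel adder built by chaining full adders."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry       # (4-bit sum, carry out)

total, cout = ripple_add(0b1011, 0b0110)   # 11 + 6
print(bin(total), cout)
```

Since 11 + 6 = 17 does not fit in four bits, the model returns the low nibble with the carry-out set, exactly as the hardware adder would.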
Sequential Logic Circuits:
Flip-flops, triggering of flip-flops, analysis and conversion of flip-flops, Analysis of
clocked sequential circuits, State reduction and assignments, Flip-flop excitation
tables. Design procedure, design of counters, design with state equations.
Registers, counters And Memory Unit :
Registers, shift registers, ripple counters, synchronous counters, timing sequences,
memory unit.
Register transfer logic
Inter register transfer, arithmetic logic and Shift micro operation, conditional
control statements, fixed point binary data , over flow, arithmetic shifts, decimal
,floating points, non numeric data, Instruction codes.
Processor Logic Design
Processor organization, Arithmetic logic unit (ALU)Design of arithmetic, logic
and arithmetic-logic ckts. Status registers, Processor unit, Design of accumulator.
Control Logic Design
Control organization Micro program control, Control of processor unit
PLA control, micro program sequencer.
Computer Design
System of configuration, Computer instructions, Timing and control
Execution of instruction, Design of computer registers & control.
Digital Integrated Circuits: Digital IC specification terminology.
Logic families - RTL, DTL, TTL, I2L, ECL, MOS, CMOS, Interfacing.
The Practical and Term work will be based on the topics covered in the syllabus.
BOOKS :
Morris Mano, "Digital Logic and Computer Design", PHI.
A. Anand Kumar, "Fundamentals of Digital Circuits", PHI.
306 - Instrumentation Workshop
Instruction / Demonstration Should Be Given For Each Of Following:
1. Tools and their use: pliers, cutter, stripper, screw driver, crimping tool, soldering
iron etc.
2. Understanding of single phase and three phase wiring.
3. Understanding of switch accessories, wires and cables.
4. Understanding and use of multi-meter, tester, series lamp, megger.
5. Understanding of house wiring, fuse, earthing, MCB, ELCB.
Exercise on:
1. Fan wiring
2. Tube light wiring.
3. Staircase wiring
4. Application of series lamp for troubleshooting of electrical appliances.
5. Soldering practice.
Over And Above These Exercises Each Student Is Required To Prepare A Laboratory
Report On Instruction / Demonstration And Exercises Prepared By Him As A Part Of His
Term work.
B.E. SEM IV
401 - Applied Mathematics
1. Vector Calculus: Reorientation, Differentiation of vectors, Scalars and vector
fields, Gradient of a scalar function, Directional derivative, Divergence and curl
of a vector function and their physical meanings, Line Surface and volume
integrals, Green's theorem, Gauss and stoke's theorems (Without proof),
Irrotational, Solenoidal and conservative vector fields. Curvilinear coordinates
(Cylindrical and spherical polar), applications.
2. Functions of complex variables: Reorientation, Analytic functions, Cauchy-Riemann
equations (Cartesian and polar forms), Harmonic functions, conformal
mappings, Complex integration, Cauchy's theorem and integral formula,
Singularities, Taylor's and Laurent's series, Residue theorem, Evaluation of
integrals using residues.
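A standard illustration of the residue-based evaluation listed above (a textbook example of my choosing, not one specified in the syllabus):

```latex
% Evaluate I = \int_{-\infty}^{\infty} \frac{dx}{1+x^2} by closing the contour
% with a large semicircle in the upper half-plane. The integrand has simple
% poles at z = \pm i; only z = i is enclosed.
\[
  \operatorname{Res}_{z=i}\frac{1}{1+z^2}
  = \left.\frac{1}{2z}\right|_{z=i} = \frac{1}{2i},
  \qquad
  I = 2\pi i \cdot \frac{1}{2i} = \pi .
\]
```

The arc contribution vanishes because the integrand decays like 1/|z|^2, and the answer agrees with the elementary antiderivative arctan x evaluated at ±∞.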
3. Fourier Transforms: Fourier integral theorem (only statement), Fourier sine and
cosine integrals, Complex form of Fourier integral, Fourier sine and cosine
transforms, Inverse Laplace transforms by residues, Solutions of boundary value
problems using Fourier transforms. Application to transverse vibration of a string
and transmission lines.
4. Solution of cubic equations by Cardan's method and biquadratic equations by Ferrari's method.
5. Matrices: Cayley-Hamilton Theorem, Special matrices like Hermitian, Skew-Hermitian and unitary, Quadratic forms.
Reference Books:
1. Higher Engineering Mathematics, by B.S. Grewal, Khanna Publishers, New Delhi.
2. A textbook on Engineering mathematics, N.P.Bali, Ashok Saxena & Iyengar,
Laxmi publications (p) ltd, New Delhi.
3. Engineering mathematics- Vol -III by Prof. Wartikar & Wartikar, Pune
Vidhyarthi Gruh, Pune.
4. Mathematics for engineers, advanced mathematics for engineers, by
Chandrika prasad, Prasad Mudranalaya, 7 beli Avenue, Allahabad.
5. Advanced Engineering Mathematics by Erwin Kreyszig (Fifth Edition), Wiley
Eastern Ltd., New Delhi.
402 - ADVANCE ELECTRONICS
1. Operational amplifiers: introduction, differential and common mode operation,
op-amp basics, practical op-amp circuits, op-amp parameters.
2. Op-amp applications: constant gain multiplier, voltage summing, voltage buffer,
controlled sources, comparators, instrumentation circuits, integrator,
differentiator, log and antilog amplifiers, full wave and half wave rectifiers, active filters.
3. Feedback and oscillator circuits: feedback concepts, feedback connection types,
practical feedback circuits, feedback amplifier- phase and frequency
considerations, oscillator operation, phase shift oscillator, Wien bridge oscillator,
tuned oscillator, and crystal oscillator.
4. Linear Wave-shaping circuits: introduction, RC high pass and low pass circuits,
response to standard input signals.
5. 555 Timer: introduction, operation, 555 as astable, bistable, monostable
multivibrator, applications.
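The 555 astable mode of item 5 is governed by simple RC timing relations. A sketch using the standard design equations, with illustrative component values (not from the syllabus):

```python
# 555 astable multivibrator timing from the standard design equations:
# the capacitor charges through R1+R2 and discharges through R2 alone,
# each interval lasting 0.693*R*C (0.693 = ln 2).
def astable(r1, r2, c):
    t_high = 0.693 * (r1 + r2) * c        # output high while C charges
    t_low  = 0.693 * r2 * c               # output low while C discharges
    period = t_high + t_low
    return 1.0 / period, t_high / period  # (frequency in Hz, duty cycle)

f, duty = astable(r1=1e3, r2=10e3, c=0.1e-6)   # illustrative values
print(f"f = {f:.1f} Hz, duty = {duty*100:.1f} %")
```

Because the charge path always includes R2 twice in the period, the duty cycle of the basic circuit is always above 50%; making R2 much larger than R1 pushes it toward a square wave.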
6. BJT small signal analysis: common emitter fixed bias, voltage divider bias,
emitter bias and emitter follower configurations, common base configuration and
collector feedback configuration, collector DC feedback configuration.
Approximate hybrid equivalent model
7. Systems approach- Effects of Rs and RL: Two port systems, effect of load
impedance and source impedance, combined effect of Rs and RL, BJT CE, CB,
Emitter follower networks, FET networks, cascaded systems.
8. BJT and JFET frequency response: introduction, logarithms, decibels, general
frequency considerations, low frequency analysis- bode plot, low frequency
response for BJT and FET amplifier, miller effect capacitance, high frequency
response of BJT and FET amplifiers, multistage frequency effects, square wave testing.
The Practical and Term work will be based on the topics covered in the syllabus.
Text book:
1. Robert Boylestad, "Electronic Devices and Circuit Theory", PHI edition.
2. Millman - Halkias, "Integrated Electronics - Analog and Digital Circuits and
Systems", Tata McGraw-Hill.
403 - BIOMEDICAL TRANSDUCERS
1. Generalised Instrumentation Scheme:
Transducers and their Static and Dynamic performance characteristics. Electrical
Design Consideration.
2. Transduction Principles:
Resistive, Inductive and Capacitive Transduction; Photoconductive and Photo
voltaic Transduction. Fibre Optic Sensor. Strain Gauge- types, construction,
selection materials, Gauge factor, Bridge circuit, Temperature compensation.
LVDT - construction, sensitivity, merits etc. Capacitive Transducer - variable
separation, variable area and variable dielectric types; merits and demerits.
Piezoelectric Transducer: piezo crystals- output equation, mode of operation,
merits and demerits.
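The strain gauge quantities in item 2 (gauge factor and bridge circuit) reduce to two short formulas. A sketch with illustrative numbers, assuming the usual small-strain quarter-bridge approximation:

```python
# Strain gauge in a quarter Wheatstone bridge.
# Gauge factor: GF = (dR/R) / strain.
# For small strain, the bridge output is approximately Vex * GF * strain / 4.
GF  = 2.0        # typical metal-foil gauge factor (illustrative)
Vex = 5.0        # bridge excitation voltage in volts (illustrative)
eps = 500e-6     # applied strain: 500 microstrain

dR_over_R = GF * eps              # fractional resistance change of the gauge
v_out = Vex * GF * eps / 4.0      # approximate bridge output voltage

print(f"dR/R = {dR_over_R:.6f}, Vout = {v_out*1e3:.3f} mV")
```

The millivolt-level output explains why a bridge plus instrumentation amplifier, and temperature compensation via a dummy gauge in an adjacent arm, are standard parts of the measurement chain.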
3. Temperature Transducers:
Thermo-resistive transducers - RTD and Thermistor; Thermo-emf transducers -
thermocouples; Non-contact type infrared thermometry; optical pyrometer.
Thermistor used for cardiac output measurement, nasal air flow measurement.
4. Pressure Transducers:
Extra vascular and Intra vascular pressure sensors; Strain Gauge type Blood
pressure transducers; Diaphragm type capacitive pressure transducer; Piezo
electric pressure transducer; Intra vascular fibre optic pressure transducer; Fibre
optic pressure transducer for intracranial pressure measurement in newborns;
Tonometry; Stethoscopes; Phonocardiograph sensor.
5. Flow Transducers:
Electromagnetic Blood flow transducer; Elasto-resistive plethysmographic
transducer; Air flow transducer for Fleish pneumotachometer; Ultrasonic flow transducers.
6. Displacement Transducers:
LVDT and resistive potentiometric transducers for translational and angular
displacement measurement; Strain gauge displacement transducer; capacitive and
displacement transducer for respiration sensing.
7. Nuclear Radiation Transducers:
Ionization transducer- GM counter; Scintillation transducer- Scintillation counter.
8. Bioanalytical Sensors:
Enzyme based glucose sensor; Microbial biosensors for ammonia and nitrogen
dioxide; optical biosensor for antibody-antigen detection. Blood-gas sensors -
Polarographic Clark PO2 sensor; Transcutaneous PO2 sensor, PCO2 electrode,
SO2 sensor of pulse oximeter.
9. Biopotential Measurement:
Electrode-Electrolyte interface, half cell potential, Polarization- polarizable and
non-polarizable electrodes, Ag/AgCl electrodes, Electrode circuit model;
Electrode and Skin interface and motion artifact.
Body Surface recording electrodes for ECG, EMG, EEG and EOG. Electrode
standards. Internal Electrodes - needle and wire electrodes. Microelectrodes -
metal microelectrodes, micropipette electrodes. Electrical properties of
microelectrodes. Electrodes for electrical stimulation of tissue; methods of use of
electrodes.
Term Work and Practicals shall be based on the above syllabus.
1. Biomedical Sensors- Fundamentals and applications.
By- Harry.N. Norton.
2. Transducers for Biomedical measurements. ( Principles and Applications.)
By- Richard S.C. Cobbold.
3.Medical Instrumentation application and design.
By- John G. Webster.
4.Principles of Applied Biomedical Instrumentation.
By- Geddes,L.A and Baker,L.E
5. Bio-Sensors. By-Hall, E.A.H.
6. Biomedical Transducers and Instruments (CRC Press)
By - Tatsuo Togawa, Toshiyo Tamura, P. Åke Öberg.
404 - CONTROL THEORY
Introduction to Control system :
Introduction, Examples of Control Systems, Closed-Loop Control Versus Open-Loop Control, Design of Control Systems.
Mathematical Modeling :
Introduction, Transfer Function and Impulse-Response Function, Block Diagrams,
Modeling in State-Space, Mechanical Systems, Electrical Systems, Liquid – level
Systems, Thermal Systems.
Transient Response :
Introduction, First-Order Systems, Second-Order Systems, Transient-Response
Analysis with MATLAB, An Example Problem Solved with MATLAB.
Basic Control Systems
Introduction, Higher-Order Systems, Routh's Stability Criterion, Steady-State
Errors in Unity-Feedback Control Systems.
Root Locus:
Introduction, Root-Locus Plots, Summary of General Rules for Constructing
Root Loci, Root-Locus Plots with MATLAB, Special Cases, Root-Locus
Analysis of Control Systems, Root Loci for Systems with Transport Lag, Root
Contour Plots.
Frequency Response :
Introduction, Bode Diagrams, Plotting Bode Diagrams with MATLAB, Polar
Plots, Drawing Nyquist Plots with MATLAB, Log-Magnitude Versus Phase
Plots, Nyquist Stability Criterion, Stability Analysis, Relative Stability, Closed
Loop Frequency Response, Experimental Determination of Transfer Functions
The Practical and Term work will be based on the topics covered in the syllabus.
Text Books :
K. Ogata, "Modern Control Engineering", Prentice Hall of India Pvt. Ltd., 3rd edition.
References :
Nagrath & Gopal, "Control Systems Engineering", Wiley Eastern.
405 - ELECTRICAL AND ELECTRONIC MEASUREMENT
1. Experimental Data & Errors: Measurement Recording and Reporting, Graphical
presentation of data, Precision & Accuracy, Resolution & sensitivity, Errors in
Measurement, Statistical evaluation of measurement data and errors, The decibel, problems.
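The decibel in item 1 is defined on power ratios, with the familiar factor of 20 for voltage ratios following from P ∝ V². A minimal illustration (the ratios chosen are my own examples):

```python
import math

# The decibel: 10*log10 for power ratios, 20*log10 for voltage ratios
# measured across the same resistance (since P is proportional to V^2).
def db_power(p2, p1):
    return 10 * math.log10(p2 / p1)

def db_voltage(v2, v1):
    return 20 * math.log10(v2 / v1)

print(round(db_power(2, 1), 2))    # doubling the power
print(round(db_voltage(10, 1), 2)) # a tenfold voltage gain
```

Doubling the power gives about +3 dB and a tenfold voltage gain gives +20 dB; these two anchors make most gain and attenuation figures quick to estimate mentally.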
2. Analog DC and AC meters: Electromechanical meter movements, Analog DC
ammeters, Analog DC voltmeters, Analog AC ammeters and voltmeters, Analog
multimeters, Special purpose analog meters, How to use basic meters, Meter
errors, problems.
3. Oscilloscope: Oscilloscope subsystem, Display sub system, Vertical deflection
subsystem, Dual trace Feature, Horizontal Deflection subsystems, oscilloscope
probes, oscilloscope controls, How to operate an oscilloscope, oscilloscope
photography, Special purpose oscilloscope, Problems.
4. Time & Frequency Measurement: Time measurements, Frequency measurements,
Harmonic Analysis and spectrum analyzers, Problems.
5. Power & Energy Measurement: Power in Ac circuits, single-phase power
measurements, Polyphase power and measurement, Electrical energy
measurements, power measurements at high frequency, problems.
6. Resistance and measurement of Resistance: Resistance and resistor, resistor type,
color coding of resistors, measurement of resistance, Wheatstone bridges, Making
balanced Wheatstone bridge measurements, Low value resistance measurement,
7. Measurement of Capacitance, Inductance, and Impedance: Capacitance and
capacitors, capacitor circuit models and losses, capacitor types, color coding of
capacitors, Inductor and inductance, Inductor structure, Transformers, Impedance,
Capacitance and Inductance Measurement, complete impedance measurement,
8. A.C.signal source: Oscillators, sweep frequency Generators, Pulse generators,
function generators.
9. Interference signals and their elimination: capacitive interference, inductive
interference and shielding, electromagnetic interference and shielding,
conductively coupled interference, ground loop interference and input guarding to
reduce it, internal noise.
The Practical and Term work will be based on the topics covered in the syllabus.
Text book:
1. Wolf & Smith, "Student Reference Manual for Electronic and Instrumentation
Measurement", PHI.
Reference book :
Cooper, "Electronic Instrumentation and Measuring Techniques", PHI.
406 - Simulation Packages
Following packages to be studied:
MATLAB, OrCAD, Electronics Workbench.
Simulation programs need to be carried out using the above packages, and the term
work and practicals should be based on the same.
B.E. SEM V
501 - MICROPROCESSOR, INTERFACING & APPLICATIONS
1. Introduction to Micro Computers, Micro Processors:
Introduction to digital computer, Data representation, microprocessor
organization, memory, input output devices, system bus.
2. Micro Processor Architecture and Micro Computer Systems:
Microprocessor 8085 architecture, operation, memory, and interfacing devices
3. Instructions and Timings:
Data transfer group instruction, branching group, arithmetic instruction, logical
instruction, special purpose instruction, time diagram of single byte, two byte, three
byte instruction, and addressing modes.
4. Assembly Language Programming:
Programming using data transfer instruction, arithmetic and logical instruction
branching instruction, subroutine and stack, time delay and counter implementation,
code conversion.
5. Interrupts:
Vector interrupts, priority of interrupts, interrupt level
6. Parallel I/O and Interfacing Applications:
Basic interfacing concepts, interfacing displays and memories.
7. Code converters:
Data converters, ADC, DAC, their types & construction, their interfacing with 8085.
8. Interfacing of Multipurpose programmable devices:
8155/8156,8355/8755,8279 programmable keyboard/display interface
9. Interfacing of General purpose programmable peripheral devices:
8255A PPI, 8253 programmable interval timer, and 8259A programmable
interrupt controller, DMA & 8257 DMA controller
10. Serial I/O and Data Applications:
Serial I/O data communication & bus interfacing standards, basic concepts in serial
I/O using programmable chips, bus standards - RS-232, IEEE 488.
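One basic piece of arithmetic behind the serial I/O topic above: framing overhead. Each asynchronous byte carries start, parity, and stop bits, so not all of the baud rate is data. A sketch (the frame formats shown are the common conventions, used here as illustrative assumptions):

```python
# Asynchronous serial throughput: every byte is framed by a start bit,
# optional parity bit, and stop bit(s), so the byte rate is the baud rate
# divided by the total bits per frame.
def byte_rate(baud, data_bits=8, start_bits=1, stop_bits=1, parity_bits=0):
    frame_bits = start_bits + data_bits + parity_bits + stop_bits
    return baud / frame_bits

print(byte_rate(9600))                              # common 8N1 frame
print(byte_rate(9600, parity_bits=1, stop_bits=2))  # 8E2 frame
```

At 9600 baud the familiar 8N1 frame moves 960 bytes/s, while adding parity and a second stop bit drops that to 800 bytes/s, a 17% overhead increase from two extra bits per frame.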
11. EPROM and RAM memories. 2764 and 6264
The Practical and Term work will be based on the topics covered in the syllabus.
Ramesh Gaonkar - "Microprocessor Architecture, Programming and Applications
with the 8085/8080A" - Wiley Eastern Limited.
P.K. Ghosh and P.R. Shridhar - "0000 to 8085 - Introduction to Microprocessors
for Engineers and Scientists" - Prentice Hall of India Pvt. Ltd., 2nd edition.
502 - DIAGNOSTIC TECHNIQUES & INSTRUMENTATION
SECTION A:
Phonocardiography, Blood pressure & Heart rate measurements, Blood flow
measurements, Doppler.
RESPIRATORY DISORDERS AND THEIR DIAGNOSIS: X-rays, Volumes &
capacities (Spirometry), Bronchoscopy, Laryngoscopy, etc.
G. I. TRACT DISORDERS THEIR DIAGNOSIS: Laparoscopy, Cystoscopy, Upper
G. I. Endoscopy, Colonoscopy, Sigmoidoscopy, Proctoscopy.
METABOLIC DISORDERS AND THEIR DIAGNOSIS: Thermometry, Oxygen &
Carbon - dioxide content and pressure, various enzyme assays. Diseases related to
Kidney & Urinary system and their diagnosis.
EMG, Scanning Techniques etc.
OCULAR DISORDERS AND THEIR DIAGNOSIS: Perimetry, Refractometry,
Tonometry, Ophthalmoscopy, Ultrasound, VER, ERG, EOG, ENG etc.
Auditory disorders and diagnosis: AER, Audiometry etc.
Obstetric & Gynecological problems and their diagnosis like USG & Endoscopy.
Biotelemetry and their clinical significance.
SECTION B:
ELECTROCARDIOGRAPH: The ECG waveform, Block diagram, Front panel,
Controls, ECG Pre-amplifier, ECG recorders.
ELECTROENCEPHALOGRAPH: EEG waveform (Frequency range & Amplitude),
Multi channel recording system & control panel details, Block - diagram, Pre-amplifier &
filter circuits.
PARAMETERS: Electronic manometer, Electro-sphygmomanometer, Electronic
stethoscope, Blood flow meter, Thermometer, Tonometer, Auto-refractometer,
Spirometer, Audiometer.
DIAGNOSTIC X-RAY MACHINE: Generation, Fluoroscopy and Image Intensifier,
Fundamentals only.
Diagnostic Ultrasound: Principle of measurements, Ultrasound imaging, Foetal
monitor, Echocardiograph, Echoencephalograph.
The Practical and Term work will be based on the topics covered in the syllabus.
BOOKS :
Diagnosis Procedures in Cardiology - Warren & Lewis
ECG Made Easy - Atul Luthra
Practical Echocardiography - Setu Raman
Advanced Ophthalmic Diagnosis & Therapeutics - Mckinney
Prenatal Diagnosis and Therapy - A. Chakraborthy
Bio-Medical Instrumentation & Measurements - Cromwell
Bio-Medical Instrumentation - R. S. Khandpur
Medicine and Clinical Engineering - Bertil Jacobson & John Webster
503 - ANALYTICAL & OPTICAL INSTRUMENTATION
1. Principle involved in Biochemical, Pathological & microbiological laboratory
techniques in clinical diagnosis.
Instrumentation techniques:
1. Use of general instruments like Incubators, autoclaves, centrifuges, hot air oven,
Balances, Auto pipettes.
2. Microtome, Processing like automatic tissue processing ("Histokinette").
3. Laboratory counters; anaerobic apparatus, Laminar flow tables, Culture
Techniques, Blood banking procedures & Instruments.
3. Analytical techniques like Spectrophotometry, Colorimetry, Auto analysis, Semi-auto
gas & electrolyte analysis, flame photometry, chromatography, Electrophoresis,
Glucometry, Measurement of pH, RIA units, PCR units, ELISA, Dispenser & washer,
Pulse oximetry, Capnography, Arterial blood analysis, etc. and their significance in the
diagnosis and prognosis of various clinical disorders.
4. Microscopy - simple, compound, binocular, trinocular, dark ground microscopy
Phase contrast microscopy, electron microscopy, CCTV, microphotography &
projection etc &
their importance in clinical diagnosis.
5. Special microscopy like endoscopy. use of fiber optics.
Study of principle of operation, Block-diagram; Circuit diagram, Control panel,
specification, Design aspects & Applications of the following Equipments:
1. Incubators, autoclaves, centrifuges, hot air oven, Balances, Auto pipettes, microtome,
Processing like automatic tissue processing ("Histokinette”), Laboratory counters;
Anaerobic apparatus, Laminar flow tables, Spectrophotometer, Colorimeter, Auto
analyzer, Semi-auto analyzer, flame photometer, Glucometer, pH meter, RIA units,
PCR units, ELISA reader, Dispenser & washer, Pulse oximeter, Arterial blood analyzer.
2. Endoscopes.
3. Microscopes of various types including electron microscope.
4. Chromatograph.
5. Electrophoresis apparatus.
6. Blood Gas Analyzers:
Blood PO2 Measurement; Measurement of blood PCO2; Complete Blood Gas Analyzers.
7. Blood cell counter:
Method of cell counting, coulter counter, automatic recognition and differential
counting of cells.
The Practical and Term work will be based on the topics covered in the syllabus.
1. Medical Laboratory Technology (Methods & Interpretation) - Ramnik Sood, 5th
Edition, Jaypee.
2. Widmann's Clinical Interpretation of Laboratory Tests - Sacher, 10th edition, Jaypee.
3. Interpretation of Common Investigations - Gupta & Gupta, 4th edition, Jaypee.
4. The Textbook of Blood Bank and Transfusion Medicine - Satish Gupta, 1st edition.
5. Principle clinical Biochemistry - Chavda R,
6. Clinical diagnosis and management by laboratory methods- Henry, 19th
Edition, Anuja book company
7. Bio-medical Instrumentation & Measurements - Cromwell.
8. Bio-medical Instrumentation - R.S. Khandpur.
9. Medicine and Clinical Engineering - Bertil Jacobson & John Webster.
504 - BIOMATERIALS & IMPLANTS
1) Introduction To The Use Of Non-Pharmaceutical Biomaterials:
Types: Synthetics, Metals & Non-Metallic Alloys, Ceramics, Inorganics & Glasses,
Bioresorbable & Biologically Derived Materials, Bioderived Macromolecules, Tissue
Adhesives, Bioactive Coatings, Carbon Composites.
Medical Device Design Requirements, Material Selection Criteria, Preclinical
2) Polymers:
Polymerization, Polyethylene, Prosthodontic Polymers, Clinical Study Of Soft
Polymers, Bioerodible Polymers, Blood Compatible Polymers, Bioactive Polymers,
Hydrogels, Hard Methacrylates, Drug Incorporation in Polymer Gels, Biocompatibility
Of Polymers-Blood Compatibility Improvement, Compatibility Evaluation.
3) Metals, Metallic Alloys & Ceramics:
Stainless Steel, Titanium & Titanium Alloys, Cobalt-Based Alloys, Nitinol, Ceramics -
Introduction to Biomedical Usages - Bonding to Natural Tissue. Bio-Active Glass,
High Density Alumina, Calcium Phosphate Ceramics - Porous Materials, Biological
Interactions, Dental Ceramics-High Strength Materials-Thermal Expansions, Fracture
Toughness. Drug Delivery from Ceramics. Wet Chemical Synthesis.
4) Composite Bio-Materials:
Soft Composites, Dental Composites, Silane Coupling Agents, Microfilled Materials,
White-Light Systems, Bonding to Teeth. Clinical Trials, Synthesis of Fillers, Matrix
Resins, Mechanical & Physical Evaluation.
5) Mechanical Properties:
Standards & Assessments of Bio-Materials, Surface Properties of Biomaterials &
Their Testing.
6) Bio-Compatibility:
Tissue Reaction to External Materials, Blood/Biomaterial Interaction. Corrosion &
Wear of Bio-Materials. Treatment of Materials for Bio-Compatibility, Biodegradable
Materials & Their Applications, Rheological Properties of Biological Solids - Bone,
Tendons, Blood Vessels, Biological Liquids, Mucus, etc.
7) Implant Surgical Devices, Rehabilitation Devices Used For Physiological Functions
Of Human Body Systems- Improvement Or Replacement
The Practical and Term work will be based on the topics covered in the syllabus.
1) Bio-Materials - An Interfacial Approach.
- Hench, L.L. & Ethridge, E.C.
Academic Press, New York
2) Bio-Materials Science & Engineering.
- J.B. Park
Plenum Press, New York
3) Bio-Materials, Medical Devices & Tissue Engineering
- Frederick H. Silver
Chapman & Hall, London
4) Human Bio-Materials Applications.
- Wise
505 - MODERN DIGITAL AND ANALOG COMMUNICATION
1. Introduction
Communication systems, Analog and digital messages, S/N Ratio, Channel
Bandwidth and the rate of communication, Modulation.
2. Introduction to signals
Introduction to signals, Classification of signals, Signal operations, Unit impulse
functions, Correlation.
3. Analysis and Transmission Of Signals
Signal transmission through a linear system, Signal distortion over a
communication channel, Signal energy and energy spectral density, Signal power
and power spectral density.
4. Linear Amplitude Modulation
Base band and carrier communication, DSB, AM, QAM, SSB, Carrier acquisition,
Super heterodyne AM receiver
5. Sampling and Pulse Code Modulation
Sampling theorem, PCM, DPCM
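The sampling-theorem and PCM topics above reduce to short design arithmetic: sample at (or above) twice the message bandwidth, quantize with n bits, and the bit rate is fs × n. The numbers below are the classic telephone-speech figures, used as an illustrative assumption:

```python
# PCM design arithmetic for a bandlimited message.
# Nyquist: fs >= 2B; bit rate = fs * n; ideal quantization SNR for a
# full-scale sine wave is approximately 6.02*n + 1.76 dB.
B  = 4000          # message bandwidth in Hz (telephone-speech example)
fs = 2 * B         # sampling at the Nyquist rate
n  = 8             # bits per sample

bit_rate = fs * n                 # bits per second on the channel
sqnr_db  = 6.02 * n + 1.76        # ideal signal-to-quantization-noise ratio

print(bit_rate, round(sqnr_db, 2))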
6. Principles of Digital Data Transmission
Digital communication system, Line coding, Pulse shaping, Scrambling,
Detection-Error probability, Digital carrier systems, Digital multiplexing
7. Emerging Digital Communications Technologies
The North American Hierarchy, Digital services, Broad band digital
8. Miscellaneous
Cellular telephone (Mobile radio) system, Spread spectrum systems
The Practical and Term work will be based on the topics covered in the syllabus.
1) "Modern Digital and Analog Communication Systems", 3rd Edition, B.P. Lathi.
2) “Electronic Communications Systems", George Kennedy, TMH.
3) "Electronic communication systems", William Schweber, PHI.
B.E. SEM VI
601 - MICROCONTROLLER & APPLICATION
1. Microcontroller: The 8051 Architecture:
8051 Microcontroller Hardware, Input/Output Pins, Ports, and Circuits, External
Memory, Counter and Timers, Serial Data Input/Output, Interrupts.
2. Basic Assembly Language Programming Concepts :
A Generic Counter, The Mechanics of Programming, The Assembly Language
Programming Process, Programming Tools and Techniques, Programming the 8051.
3. Moving Data :
Addressing Modes, External Memory Read-Only Data Moves, Push and Pop
Opcodes & Data Exchanges.
4. Logical Operations :
Byte-Level Logical Operations, Bit -Level Logical Operations, Rotate and Swap
5. Arithmetic Operations :
Flags, Incrementing and Decrementing, addition, Subtraction, Multiplication and
Division, Decimal Arithmetic.
6. Jump and Call Instructions :
The Jump and Call Program Range, Jump, Calls and Subroutines, Interrupts and
Returns, More Detail on Interrupts.
7. An 8051 Micro controller Design :
A Micro controller Design, Testing the Design, Timing Subroutines, Lookup
Tables for the 8051,Serial Data Transmission.
8. Applications :
Keyboards, Displays, Pulse Measurement, D/A and A/D Conversions, Multiple
9. Serial Data Communication:
Network Configurations,8051 Data Communication Modes.
The Practical and Term work will be based on the topics covered in the syllabus.
Books :
1. " The 8051 Micro controller, architecture, programming and applications" ,
K.J.Ayala - Penram International Publisher India.
2. "The 8051 Micro Controller" By: Mazidi & Mazidi.
1. DC defibrillator: need and circuit description. Rectangular wave defibrillator,
electrodes used. DC defibrillator with synchronizer, cardioverter. Performance
aspects of DC defibrillator. Implantable defibrillator and defibrillator analyzers.
2. Dialysers: Principle of dialysis, artificial kidney, function and working of dialyser,
parallel flow dialyser, coil hemodialyser, hollow fiber hemodialyser. Performance
analysis of dialysers, membranes used for hemodialysis. Block diagram and
working of hemodialysis machine. Blood leak detector, portable kidney machine
–working and flow diagram.
3. Principle of surgical diathermy. Electrosurgical equipment and techniques.
Electrotomy, fulguration, coagulation, desiccation. Electrosurgery units: spark gap,
valve, solid-state generator. Construction and working of surgical diathermy
machine, electrodes used. Safety aspects like burns, high-frequency current
hazard, and explosion hazard; operating principle of surgical diathermy analyzer.
4. Basic concepts about LASER. LASER coherence. Its principle of operation,
properties, gain medium, pumping mechanism and resonator design. Types of
LASER: pulsed ruby laser, Nd:YAG laser, argon laser and CO2 laser. Applications
of laser in medicine: control of gastric hemorrhage by photocoagulation, retinal
detachment.
5. Short-wave, diapulse , microwave, ultrasonic therapy: circuit description,
application and dosage control. Electrotherapy: diagnosis, electrical stimulation
for pain relief, apparatus and current waveforms, electrodes. Spinal cord
stimulator and cerebral stimulation.
6. Neonatal instrumentation: Incubator : physiological heat balance, heat production
and heat loss methods. Apnea detection. Photo therapy devices.
7. Anesthesia machine: Gas supply and delivery, vapor delivery. Patient breathing
circuit. Complete schematic of anesthesia machine.
The Practical and Term work will be based on the topics covered in the syllabus.
1. Handbook of biomedical instrumentation
R.S. Khandpur
PUB: Tata Mcgraw- New Delhi
2. Introduction to biomedical equipment and technology
Carr and Brown PUB: Pearson Education- Asia
3. Medical instrumentation
John Webster.
PUB: John wiley and sons-New York.
603 - BIOMECHANICS
Mechanics of Blood flow - Hemorheology.
Mechanics of Locomotion: Muscular mechanism, GAIT & Posture.
Mechanics of muscle.
Mechanics of cardiovascular & pulmonary system
Nature and Mechanism of biological control system. Feed back control and its
components. Control of body temperature, Control of Blood pressure, Heart rate.
Control of secretion. Control of movements etc.
6. Biomechanics of solids
7. Fundamentals of fluid mechanics
8. Physiological fluid mechanics
9. Mass transfer
10. Bioheat transfer
11. The interaction of biomaterials and biomechanics
12. Locomotion and muscle biomechanics
The Practical and Term work will be based on the topics covered in the syllabus.
BOOKS :
Introduction to Bioengineering - S. A. Berger, Lewis, Goldsmith, Oxford Press.
-Sahay & Saxena
Orthopedic Mechanics
-D. N. Ghista & Roaf
604 - COMPUTER NETWORK & DATA COMMUNICATION
1. Introduction
2. Data communications: transmission media, fundamentals of digital transmission,
digitization and multilevel transmission, terminal devices, wireless
communication, video conferencing
3. The OSI seven layer network model
4. LAN Technologies: overview, hardware, services and operating systems.
5. TCP/IP and the internet: architecture, protocol and datagrams, UDP,TCP, internet
standard services etc.
6. Switching and virtual LAN
7. Communication and network security: cryptography, digital certificate and public
key infrastructure,firewalls, SSL and VPN.
1. Networks for Computer Scientists and Engineers - Oxford Press, Youlu Zheng
2. Computer Networks, Fourth Edition - Tanenbaum, PHI
3. Data & Computer Communication, 7th Edition - Stallings, PHI
4. Data Communication - Forouzan, TMH
605 - DIGITAL SIGNAL PROCESSING
Discrete-Time Signals and Systems :
Introduction, Discrete-Time Signals, Discrete-Time Systems, LTI Systems,
Properties of LTI Systems, Linear Constant Co-efficient Difference equations,
Frequency domain representation of Discrete-Time Signals & Systems,
Representation of sequences by Fourier Transform, Properties of Fourier
Transform, Fourier Transform Theorems, correlation of signals, Discrete-Time
random signals.
The Z- Transform:
Z-Transform, Properties of ROC for Z-transform, the inverse Z-transform, Z-transform properties.
Sampling of Continuous-Time Signals:
Periodic Sampling, Frequency domain representation of sampling,
Reconstructions of band limited signals from its samples.
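The reconstruction topic above can be made concrete with a short sketch of Whittaker-Shannon (sinc) interpolation. This is an illustrative aside, not prescribed course material; the signal, sampling rate, and evaluation point below are arbitrary choices:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with the removable singularity at 0."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild x(t) from uniform samples
    x[n] = x(n/fs). Exact only for the infinite sum; truncating to a finite
    record introduces a small error that grows toward the record edges."""
    return sum(x_n * sinc(t * fs - n) for n, x_n in enumerate(samples))

if __name__ == "__main__":
    fs = 10.0  # samples/s, well above the Nyquist rate of the 1 Hz signal
    samples = [math.cos(2 * math.pi * n / fs) for n in range(200)]
    t = 10.05  # a point between sample instants, near mid-record
    error = abs(reconstruct(samples, fs, t) - math.cos(2 * math.pi * t))
    print(error < 0.1)  # True
```

Away from the record edges the truncated sum already agrees closely with the underlying band-limited signal.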
Transform Analysis of Linear Time-Invariant System:
Frequency response of LTI system, System functions for systems with linear
constant-coefficient Difference equations, Frequency response of rational system
functions, relationship between magnitude & phase, All pass systems,
Minimum/Maximum phase systems, Linear systems with generalized linear phase.
Structures for Discrete Time Systems:
Block Diagram representation of Linear Constant-Coefficient Difference
equations, Basic Structures of IIR Systems, Transposed forms, Basic Structures
for FIR Systems, Overview of finite-precision numerical effects, Effects of
coefficient quantization, Effect of round-off noise in digital filters, Zero-input limit
cycles in Fixed-point realizations of IIR filters, Lattice structures.
Filter Design Techniques:
Design of Discrete-Time IIR filters from Continuous-Time filters, Design of FIR
filters by windowing, Optimum approximations of FIR filters, FIR equiripple
Discrete-Fourier Transform:
Representation of Periodic sequences: The discrete Fourier Series, Properties of
discrete Fourier Series, Fourier Transform of Periodic Signals, Sampling the
Fourier Transform, The Discrete-Fourier Transform, Properties of DFT, Linear
Convolution using DFT.
9. Computation of Discrete-Fourier Transform:
Efficient Computation of DFT, Goertzel Algorithm, Decimation-in-Time FFT
Algorithms, Decimation-in-Frequency FFT Algorithm.
10. Applications of DSP.
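As a concrete aside to item 9 above, the Goertzel algorithm computes a single DFT bin with one real second-order recursion, which is cheaper than a full FFT when only a few bins are needed. The sketch below (illustrative only, not part of the prescribed material) checks the result against a direct DFT sum:

```python
import cmath
import math

def goertzel(samples, k):
    """Goertzel algorithm: compute the k-th DFT bin of `samples` using a
    real second-order recursion."""
    n = len(samples)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s_prev, s_prev2 = x + coeff * s_prev - s_prev2, s_prev
    # One extra zero-input step aligns the phase so the output equals X[k].
    s_prev, s_prev2 = coeff * s_prev - s_prev2, s_prev
    return s_prev - cmath.exp(-1j * w) * s_prev2

def dft_bin(samples, k):
    """Direct evaluation of the k-th DFT bin, for comparison."""
    n = len(samples)
    return sum(x * cmath.exp(-2j * math.pi * k * m / n)
               for m, x in enumerate(samples))

if __name__ == "__main__":
    sig = [math.cos(2 * math.pi * 3 * i / 8) for i in range(8)]  # energy at bin 3
    print(abs(goertzel(sig, 3) - dft_bin(sig, 3)) < 1e-9)  # True
```

This per-bin cost is why Goertzel is popular for tone detection (e.g. a handful of frequencies), while the FFT wins when all N bins are required.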
The Practical and Term work will be based on the topics covered in the syllabus.
1. "Discrete-Time Signal Processing", Oppenheim, Schafer, Buck, Pearson
Education publication, 2nd Edition, 2003.
2. “Digital Signal Processing: Principles, Algorithm & Application”, Proakis,
Manolakis, PHI, 2003, 3rd Edition.
3. “Digital Signal Processing: A Computer Based approach”, Sanjit Mitra,
4. MATLAB user’s guide.
606 – SEMINAR
Student is supposed to select topic on advance area of biomedical engineering and to
carry out a literature survey and based on it prepare and present the seminar.
B.E. SEM VII
701 - SIMULATION & MODELING OF BIOLOGICAL SYSTEMS
1. Introduction to Physiological control systems, Illustration- example of a physiological
control system. Difference between engineering and physiological control systems.
2. Art of modeling Physiological systems, linear models of physiological systems - distributed parameter versus lumped parameter models. Principle of superposition.
3. Cardiovascular system modeling and simulation. Theoretical basis, model
development, heart model, circulatory model, computational flow diagram of the
cardiac system, software development.
4. Pulmonary mechanics modeling and simulation. Theoretical basis, model
development, Lung tissue viscoelastance, chest wall, airways-full model of
respiratory mechanics. Pulmonary system software development-computational flow
5. Interaction of Pulmonary and Cardiovascular models. Computational flow diagram
for cardiopulmonary software development.
6. Eye movement system and Westheimer's saccade eye model. Oculomotor muscle
model. Linear muscle model.
7. Simple models of muscle stretch reflex action, ventilatory control action, Lung
mechanics and their SIMULINK implementation.
8. Study of steady state analysis of muscle stretch reflex action, ventilatory control
action by MATLAB tools.
9. Study of transient response analysis of neuromuscular reflex model action by
MATLAB tools.
10. Study of frequency domain analysis of linearized model of lungs mechanics,
circulatory control model and glucose insulin regulation model by MATLAB tools.
The Practical and Term work will be based on the topics covered in the syllabus.
1. Physiological control systems: Analysis, Simulation and Estimation.
By: Michael C.K.Khoo.
Pub: Prentice Hall of India Pvt. Ltd. New Delhi.
2. Virtual Bioinstrumentation.Biomedical, Clinical and Healthcare applications.
By: Jon B. Olansen and Eric Rosow.
Pub: Prentice Hall PTR. Upper Saddle River, NJ.
702 - POWER ELECTRONICS
1. Introduction to Power Electronics, Power Diodes, Power Transistors
Overview of Power Electronics, Power Semiconductor Devices, Control Characteristics
of Power Devices, Characteristics and Specification of Switches, Types of Power
Electronic Circuits, Reverse Recovery Characteristics, Types of Power Diodes, Diodes
with RC, RL, LC and RLC Loads, Free wheeling Diodes, Performance Parameters of
Rectifiers, Power BJTs, Power MOSFETs, COOLMOSs, SITs, IGBTs, MOSFET Gate
and BJT Base Drive Circuits, Isolation of Base and Gate Drive Circuits
2. Thyristors
Thyristor Characteristics, Two Transistor model of Thyristor, Thyristor Turn-On,
Thyristor Turn-Off, Types of Thyristors, Series & Parallel Connections of Thyristors,
Gate drive circuits.
3. Controlled Rectifiers
Principle of Phase Controlled Converter, Single Phase Semi Converter, Single Phase Full
Converter, Single Phase Dual Converter, Three Phase Halfwave Converters, Three Phase
Semi Converter, Three Phase Full Converter, Three Phase Dual Converter. (Without
analysis for RL load)
4. Inverters
Principle of Operation of Pulse Width Modulated Inverters, Performance Parameters,
Single Phase Bridge Inverters, Three Phase Inverters, Current Source Inverter, Series
Resonant Inverter, Parallel Resonant Inverter, Class E Resonant Inverter, Multilevel
Inverter Concept, Applications & features of Multilevel Inverter.
5. DC-DC Converters
Principle of Step Down Converter, Principle of Step Up Converter, Performance
Parameters, Converter Classification, Switch Mode Buck, Boost and Buck-Boost
Regulators, Chopper Circuit Design.
6. AC Controllers
Principle of On-Off Control, Principle of Phase Control, Cycloconverters, PWM
Controlled AC Voltage Controllers.
7. Protection of Devices and Circuits
Cooling and Heat Sinks, Snubber Circuits, Reverse Recovery, Supply and Load Side
Transients, Current & Voltage Protection, Electromagnetic Interference.
8. Power Drives & Applications
Characteristics of DC Motor, Operating Modes of DC Drives, Single Phase DC Drives,
Braking Schemes of DC-DC Converter Drives, Microcomputer Control of DC Drives,
Control of AC Induction Motors using Voltage, Current and Frequency Control, Stepper
Motor Control, Introduction to FACTS, Introduction to DC Power Supplies & Flyback
Converter, UPS as AC Power Supply, Magnetic Design Considerations.
The Practical and Term work will be based on the topics covered in the syllabus.
Books :
1. Power Electronics Circuits, Devices and Applications by Muhammad H. Rashid,
from PHI and Pearson Education. Third Edition.
2. Power Electronics by M D Singh and K B Khanchandani, TMH Publication
3. Power Electronics by M S Jamil Asghar, PHI Publication
703 - REHABILITATION ENGINEERING
Meaning & types of physical impairment, Engineering concept in sensory & motor
Intelligent prosthetic knee, Prosthetic hand, Advanced and automated prosthetics and
orthotics, Externally powered and controlled orthotics and prosthetics-FES system,
Restoration of Hand function, Restoration of standing and walking, Hybrid assistive
system(HAS).Myoelectric Hand and Arm prosthesis, Intelligent Hand Prosthesis
(MARCUS ).
3) MOBILITY:
Electronic Travel Aids (ETA):
Path Sounder, Laser Cane, Ultrasonic Torch, Sonic Guide, Light Probes,
Nottingham Obstacle Sensor, Electro-cortical Prosthesis, Electro Roftalam. Polarized
Ultrasonic Travel Aid. Types of wheel chair, Design of wheelchair, Ergonomic
considerations in design of wheelchair, Powered wheelchair, Tricycle, Walkers, Crutches.
Classification of visual impairment, prevention and cure of visual impairment, Visual
augmentation, Tactual vision substitution, auditory substitution and augmentation,
Tactual auditory substitution, Assistive devices for the visually impaired.
Subjective and Objective measurement methods, Measurement and assessment,
Measurement objectives and approaches, characterizing human systems and subsystems, characterizing assistive.
Interfaces in compensation for visual perception, Improvement of orientation and
Sleeping aids, Seating, walking & Postural aids.
The Practical and Term work will be based on the topics covered in the syllabus.
BOOKS :
Rehabilitation Engineering
-Robinson C. J.
Rehabilitation Technology
-Ballabio E.
Text Book of Bio-Medical Engineering
-R. M. Kennedy
Hand Book of Bio-Medical Engineering
-Richard Skalak & Shu Chien
704 - BIOMEDICAL SIGNAL ANALYSIS
Part I: Signal Processing Techniques
1. Random Process
Stationary and nonstationary random process
Correlation Matrix
Stochastic Models
Asymptotic stationarity of an autoregressive process
Selecting the order of the model
2. Eigenanalysis
Eigenvalue problem
Properties of Eigenvalue and Eigenvectors
3. Wiener Filter
Linear filter
Principle of orthogonality
Minimum mean-squared error .
Wiener-Hopf Equation
Error-performance surface
Linearly constrained minimum variance filter
4. Linear Prediction
Forward Linear Prediction
Backward linear prediction
Levinson-Durbin Algorithm
Whitening property of prediction-error filters
Cholesky Factorization
Lattice Predictors
Joint-process estimation
Burg Algorithm
5. Kalman Filters
Recursive minimum mean-square estimation
Kalman filtering problem
Innovation process
State estimation using Innovation process
Initial condition
Relations between Kalman filter and autoregressive process
Square-root Kalman filtering
6. Methods of Steepest Descent
Steepest Descent algorithm
Stability of the Steepest Descent algorithm
Transient behavior of the mean-squared error
Effects of eigenvalue spread and step-size parameter on the Steepest Descent algorithm
7. Stochastic Gradient-Based Algorithms
Least-mean-square (LMS) adaptation algorithm
Adaptive deconvolution
Instantaneous frequency measurement
Adaptive noise cancelling
Stability analysis of the LMS algorithm
Average tap-weight behavior
Weight-error correction matrix
Mean-squared-error behavior
Transient behavior of the mean-squared error
Computer results on adaptive first order prediction
Operation of the LMS in a nonstationary environment
Normalized LMS algorithm
Gradient adaptive lattice algorithm
8. Methods of Least Squares
Statement of the linear least-squares estimation problem
Data windowing
Temporal version of the orthogonality principle
Minimum sum of error squares
Normal equation and linear least-squares filters
Properties of the time averaged correlation matrix
Properties of least-squares estimation
FBLP algorithm
MVDR spectral estimation
9. Standard Recursive Least-Squares Estimation
Exponentially weighted recursive least-squares algorithm
Convergence analysis of the RLS algorithm
Example: Results of computer experiments on adaptive equalization
Operation of the RLS in a nonstationary environment
Comparison of the LMS and RLS algorithms
Relationship between the RLS and Kalman filter theory
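The least-mean-square (LMS) recursion listed under topic 7 of Part I can be sketched in a few lines. The following Python illustration is an aside, not prescribed course material; the plant coefficients, step size, and tap count are arbitrary choices made only for the demonstration. It identifies an "unknown" 2-tap FIR system from its input and output:

```python
import random

def lms_identify(x, d, n_taps, mu):
    """LMS adaptation: adjust the tap weights `w` so that the filter output
    w . [x[n], x[n-1], ...] tracks the desired signal d[n].
    Returns the final weights and the squared-error history."""
    w = [0.0] * n_taps
    sq_errors = []
    for n in range(n_taps - 1, len(x)):
        window = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y
        # Stochastic-gradient step, proportional to the error and the input.
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        sq_errors.append(e * e)
    return w, sq_errors

if __name__ == "__main__":
    random.seed(0)
    x = [random.gauss(0, 1) for _ in range(2000)]
    # "Unknown" plant to identify: d[n] = 0.7*x[n] - 0.3*x[n-1]
    d = [0.7 * x[n] - (0.3 * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
    w, sq_errors = lms_identify(x, d, n_taps=2, mu=0.05)
    print(w)  # weights converge close to 0.7 and -0.3
```

With white, noise-free input the weights converge geometrically to the true plant coefficients; the step size mu must be small enough for stability.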
Part II: Applications to Biomedical Signals
Evoked potential Estimation
Adaptive Interference Cancellation
o Stimulus Artifact canceling in Somatosensory evoked potentials
o Stimulus Artifact canceling in noncortical somatosensory evoked potentials
o Ocular Artifact canceling in EEG signals
Cardiovascular System
o Arrhythmia Detection
o QRS Detection
o ECG Signal Analysis
o Heart Rate Variability Analysis
EMG Analysis
EEG Signal Processing
The Practical and Term work will be based on the topics covered in the syllabus.
1. S. Haykin, "Adaptive Filter Theory", Prentice-Hall, Inc., Englewood Cliffs, 1996.
2. J. S. Lim and A. V. Oppenheim, "Advanced Topics in Signal Processing," Prentice
Hall, Englewood Cliffs, New Jersey, 1988.
3. Papers from the literature.
4. M. Akay (Edited), "Nonlinear Biomedical Signal Processing: Dynamic Analysis
and Modeling," (Vol. II), IEEE Press.
705 - PROJECT PART - I
The student will have to plan his project out and discuss the feasibility of the project with
the faculty members. Faculty member will be their internal guide.
He has to submit a report on what he intends to do and how he will proceed with his
project. He is supposed to give a complete outline of the project work and submit it as
term work in the form of a report.
The intended/planned project has to be continued in Sem. VIII and completed as the project work.
B.E. SEM VIII
801 - MEDICAL IMAGING TECHNOLOGY
1. Physical Principles of Imaging:
Fundamentals of Physics and Radiation: Concepts of Radiation science;
Radiographic definition and Mathematics review; Electromagnetic Radiation:
Photons, Electromagnetic Spectrum, Wave Particle Duality; Interactions between
Radiation and matters; Fundamentals of acoustic propagation; Interaction between
sonic beams and matter; concepts of ultrasonic diagnostics.
2. Imaging with X-Rays:
X-ray tube: The generation: Electron-Target Interactions, X-ray emission
spectrum: Characteristic x-ray spectrum, Bremsstrahlung x-ray spectrum, Factors
affecting X-ray Emission Spectrum: Effect of mA, kVp, added filtration; X-ray unit:
generators, filters and grids; Image intensifiers; X-ray detectors: Screen film detector,
Image Intensifier; Radiographic techniques, quality and exposure.
3. X-ray Diagnostic Methods:
Fluoroscopy: Fluoroscopy and Visual Physiology, Image intensifier tube and
Multifield intensification; Angiography: Arterial access, Catheters, Contrast media;
Mammography: Soft tissue radiography, Equipments: Target composition, Filtration
grids, Photo timers, Image receptors; Xeroradiography; Digital radiography; 3-D
construction of images.
4. Computed Tomography:
Operational modes: First generation scanners, Second, Third, Fourth, Fifth
generation scanners; System components: Gantry, Collimation; High Voltage generators;
Image characteristics: Image matrix, CT numbers; Image reconstruction; Image Quality:
Spatial resolution, Contrast resolution, System noise, Linearity, Spatial Uniformity.
5. Imaging with Ultrasonography:
Piezoelectric effect; Ultrasonic transducers: Mechanical and Electrical matching;
The characteristics of transducer beam: Huygens principle, Beam profiles, Pulsed
ultrasonic field, Visualization and mapping of the Ultrasonic field; Doppler effect,
Doppler methods; Pulse echo systems [Amplitude mode, Brightness mode, Motion
mode & Constant depth mode]; Tissue characterization: velocity, Attenuation or
absorption,
6. Developments in Ultrasound technique:
Color Doppler flow imaging: CW Doppler imaging device, Pulsed Doppler
imaging system, clinical applications; Intracavity imaging: Design of the Phased array
probe, Transoesophageal, Transvaginal or Transrectal scanning; Ultrasound contrast
Utilization of micro air bubbles, galactose microparticles and albumin encapsulated
microairbubbles; 3-D image reconstruction; 2-D echo cardiography
7. Biological effects of Radiation and Ultrasound and its protection:
Modes of Biological effects: Composition of the body and Human response to
Ionizing radiation; Physical and Biological factors affecting Radiosensitivity, Radiation
Dose-response relationships; Time variance of radiation exposure; Thermal / Nonthermal
effects due to cavitation in ultrasound fields; Designing of radiation protections and its
8. Advances in Imaging:
Introduction to Magnetic Resonance Imaging, Radionuclide Imaging, Longitudinal
section Tomography, Single Photon Emission Computed Tomography, Positron Emission Tomography.
9. Nuclear medical imaging: Radionuclides. Interaction of photons with matter. Data
acquisition. Imaging: image quality, equipment. Clinical use. Biological effects and
10. Infrared imaging: Infrared photography. Transillumination. Infrared imaging. Liquid
crystal thermography. Microwave thermography.
The Practical and Term work will be based on the topics covered in the syllabus.
1. Principles of Medical Imaging.
By: K. Kirk Shung, Michael B. Smith, Benjamin Tsui.
Pub: Academic Press.
2. Radiologic Science for Technologists.
By: Stewart C. Bushong.
Pub: Mosby: A Harcourt Health Sciences Company.
3. Quality Management: In the Imaging Sciences.
By: Jeffery Papp.
Pub: Mosby: A Harcourt Health Sciences Company.
4. Fundamentals of Medical Imaging.
By: Paul Suetens.
Pub: Cambridge University Press.
5. Introduction to Biomedical Imaging.
By: Andrew Webb.
Pub: IEEE Press Series: Wiley Interscience.
6. The Physics of Medical Imaging.
By: Steve Webb.
Pub: Institute of Physics Publishing, Bristol and Philadelphia.
7. Fundamentals of Digital Image Processing.
By: A. K. Jain.
Pub: Prentice Hall.
8. Digital Image Processing, 2nd Edition, 1987.
By: R. C. Gonzalez and P. Wintz.
Pub: Addison Wesley.
802 - FUZZY LOGIC & NEURAL NETWORK
1. Introduction. Foundation of Fuzzy systems; Fuzzy systems at work; Fuzzy system
2. Crisp vs. Fuzzy sets; Fuzzy sets to fuzzy events. Fuzzy logic. Practical fuzzy
measures. Fuzzy set operations; properties of fuzzy sets, Fuzzification techniques.
Relational inference. Compositional inference. Linguistic variables and logic
operators. Inference using fuzzy variables. Fuzzy implication.
3. Fuzzy systems and algorithms, Defuzzification. Adaptive fuzzy systems algorithms.
Expert systems vs fuzzy inference engines. Basic fuzzy inference algorithm. Overall
algorithm. Input data processing. Evaluating antecedent fuzzy variables. Left hand
side computations; Right hand side computations. Output processing.
4. Fuzzy system design and its elements. Design options, processes and background
requirements. Knowledge acquisition. Principle of fuzzy inference design criteria.
Systems ontology and problem types. Useful supporting design tools.
5. Fuzzy systems: Fuzzy information; Fuzzy neural networks; Fuzzy approaches for
supervised learning networks. Fuzzy generalization of unsupervised learning
methods. Reasoning with uncertain information. Preprocessing and postprocessing
using fuzzy techniques. Application in biomedical engineering.
6. Hybrid systems: Knowledge based and data based approaches; neural network
component; time series data; Use of complex data structures; Design methodology;
Applications; Automatic ECG analyzer. Diagnosis of heart disease.
Neural Network:
1. Introduction: What is a Neural Network?, Structural Levels of Organization in the
Brain, Models of a Neuron, Network Architectures, Knowledge Representation, Artificial
Intelligence and Neural Network
2. Learning Process: Error-Correction Learning, Hebbian Learning, Competitive
Learning, Boltzmann Learning, The Credit Assignment Problem, Supervised Learning,
Reinforcement Learning, Unsupervised Learning, Adaptation and Learning, Statistical
Nature of Learning Process
3. Self-Organizing Systems I: Hebbian Learning: Some Intuitive Principles of Self-Organization, Self-Organized Feature Analysis, Discussion, Principal Component
Analysis, A Linear Neuron Model as Maximum Eigenfilter, Self-Organized Principal
Component Analysis, Adaptive Principal Component Analysis Using Lateral
Inhibition, Two Classes of PCA Algorithms, How Useful is Principal Component Analysis?
4. Temporal Processing: Spatio-Temporal Models of Neuron, FIR Multilayer
Perceptron, Temporal Back-propagation Learning, Temporal Back-propagation Learning
with Adaptive Time Delays, Back-propagation Through Time, Real-Time Recurrent
Networks, Real-Time Nonlinear Adaptive Prediction of Nonstationary Signals, Partially
Recurrent Network.
States, Attractors, Strange Attractors and Chaos, Neurodynamical Models, Manipulation
of Attractors as a Recurrent Network Paradigm, Dynamics of Hopfield Models, The
Cohen-Grossberg Theorem, A Hopfield Model as a Content-Addressable Memory,
Brain-State-in-a-Box Model, Recurrent Back-propagation.
The Practical and Term work will be based on the topics covered in the syllabus.
1. Fuzzy systems Design Principles.
By: Riza C. Berkan and Sheldon L. Trubatch.
Pub: Standard Publishers and Distributors. Delhi.
2. Fuzzy Control Systems.
By: Abraham Kandel and Gideon Langholz.
Pub: CRC Press- Boca Raton.
3. Biomedical engineering handbook.
By: Joseph D Bronzino
Pub: CRC Press – IEEE Press
4. S. Haykin, Neural Networks: A comprehensive foundation, NJ: Prentice-Hall,
803 -HOSPITAL MANAGEMENT
Hospital Organization:
1. Various Aspects Of Hospital Services:
2. Outdoor Patient, Indoor, Supportive, Emergency, Dietary, Nursing, Medical
Supplies Etc.
3. Hospital Administrative Hierarchy
4. Typical Layout of a Hospital Showing Various Departments.
5. Organization of Various Clinical Departments.
6. Organization Of Bio-Medical Engineering Department
7. Duties & Responsibilities, Activity Planning, Dept. Layout, Staff Deployment,
Equipment Deployment, Tools Requirement, Planning of Documents Such As
Operating Manual, Ckt Diagrams, Data-Book, Preventive Maintenance Schedule,
Data Bases Of Equipment Supply, Tech. Specification Of Equipments, List Of Users,
Spare Part List.
8. Patient Care Services: Recovery Room, ICU, ICCU, PICU, NICU
9. Support Services: Central Sterilization Service, Kitchen, Laundry Etc.
10. Engineering Services:
11. Organization Of Operation Theatre
12. Blood Bank
13. Physiotherapy & Rehabilitation Center
14. Others
Hospital Management:
General Management Functions: Planning, Coordinating, Monitoring,
Controlling, Decision Making, Motivation
Management Information System For Hospital : Information gathering &
acquisition, flow, storage, processing, hospital statistics, use of computer in
database management.
Material Management
Personnel Management
Quality Management & Audit
Maintenance Management
Waste Management
Cost Control & Financial Management
1. Principles of Hospital Administration & Planning
--Dr. B.M. Sakharkar, J.P. Brothers
2. Managing a Modern Hospital
--A.B. Srinivasan, Response Books
3. Hospital Planning & Administration
--Llewelyn-Davies, J.P. Brothers
4. Human Resource Management in Hospital
--R.C. Goyal, PHI New Delhi
5. Hospital & Health Care Administration
--Shakti Gupta, J.P. Brothers
804 - PROJECT PART- II
The intended/planned project has to be continued in Sem. VIII and completed as the project work.
Official seal of college
Seal and signature
of principal
Excel Formula for Copying Information
In this tutorial, we will learn how to copy information from one sheet to another in Excel using a formula. Specifically, we will copy values from column A of Sheet1 into the corresponding rows of column A of Sheet2 whenever column A of Sheet1 contains the text 'trois-rivières'. (The formula mirrors matching rows in place rather than appending them to the end of Sheet2.) This can be achieved using the IF function entered as an array formula.
To begin, let's understand the formula and its step-by-step explanation. We will also look at some examples to better understand how it works.
The formula we will be using is:
=IF(Sheet1!A:A="trois-rivières", Sheet1!A:A, "")
This formula checks whether each value in column A of Sheet1 is 'trois-rivières'. Where it is, the value from column A of Sheet1 is copied to the corresponding row of column A of Sheet2; where it is not, the cell in Sheet2 is left blank.
Here is a breakdown of the formula:
1. The IF function checks whether each value in column A of Sheet1 is 'trois-rivières'.
2. The result of the comparison is an array of TRUE and FALSE values, where TRUE marks the cells whose value in column A of Sheet1 is 'trois-rivières'.
3. Where the value is TRUE, the formula copies the value from column A of Sheet1 to the corresponding row in column A of Sheet2.
4. Where the value is FALSE, the formula leaves the cell in Sheet2 blank.
Let's now look at some examples to see how this formula works in practice.
Example 1:
Assume we have the following data in column A of Sheet1:
| A |
| trois-rivières |
| montréal |
| québec |
| trois-rivières |
| laval |
And column A of Sheet2 is initially empty.
Entering the formula =IF(Sheet1!A:A="trois-rivières", Sheet1!A:A, "") in column A of Sheet2 copies the values from column A of Sheet1 to the corresponding rows where the value is 'trois-rivières', and leaves the other rows blank. The result in column A of Sheet2 would be:
| A |
| trois-rivières |
| |
| |
| trois-rivières |
| |
Note that the formula is entered as an array formula by pressing Ctrl + Shift + Enter instead of just Enter.
In conclusion, this tutorial has provided a formula for copying information from one column to another in Excel based on a specific condition. By using the IF function and an array formula, you can
easily copy data from one sheet to another. Practice this formula with different conditions and datasets to enhance your Excel skills.
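Outside Excel, the same conditional copy is easy to express in ordinary code. Below is a minimal Python sketch; the list-of-dicts worksheet representation is a hypothetical stand-in for the sheet, and no Excel library is involved:

```python
def copy_matching(rows, key, match):
    """Mimic the array formula =IF(Sheet1!A:A="trois-rivières", Sheet1!A:A, ""):
    keep a row's value where its key column equals `match`, and an empty
    string elsewhere, preserving row positions."""
    return [row[key] if row[key] == match else "" for row in rows]

if __name__ == "__main__":
    sheet1 = [{"A": "trois-rivières"}, {"A": "montréal"},
              {"A": "québec"}, {"A": "trois-rivières"}, {"A": "laval"}]
    print(copy_matching(sheet1, "A", "trois-rivières"))
    # ['trois-rivières', '', '', 'trois-rivières', '']
```

As with the Excel formula, row positions are preserved; collapsing the matches to the top would be a separate filtering step.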
|
{"url":"https://codepal.ai/excel-formula-generator/query/fJrQ4g5Z/excel-formula-copy-information-column-sheet","timestamp":"2024-11-07T09:20:33Z","content_type":"text/html","content_length":"100218","record_id":"<urn:uuid:3a85a268-2b41-41ad-9a57-90740fbf1c97>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00267.warc.gz"}
|
Type: section
Appearance: simple
The settings of this section control the numerical accuracy of the simulation setup.
To ease usage, JCMsolve aims to automatically adapt numerical settings. As an ultimate goal, we only want you to specify a single Precision value, which should be reached at the smallest numerical
cost. However, with some physical insight and a basic understanding of Finite Element convergence, the user can accelerate computations by tuning the numerical parameters.
Guideline for the user
If the user has special knowledge about the physics of the expected solution, a user-adapted mesh can enhance convergence and increase the accuracy and speed of simulations. For electromagnetic projects,
the following rules are good practice:
• For wave propagation problems apply local mesh options in JCMgeo such that patch sizes are about the wavelength in the given material.
• At metal or high index contrast corners, where the solution is expected to have singularities, refine the mesh locally in advance within JCMgeo.
• Use Precision (or PrecisionFarField for scattering problems) to define your goal precision.
• If the field is expected to have strong singularities or to check the convergence, use the adaptive refinement loop (see Refinement). Check how many mesh refinements are needed to reach the
desired accuracy of your output quantity.
Accuracy parameters of the Finite Element Method
The Finite Element Method represents the solution field of a partial differential equation in a discrete subspace of an infinite dimensional function space, in which the true physical solution
resides. This Finite Element subspace is created in two steps. First, the underlying geometrical layout is subdivided into a finite number of sub patches. Then, on each of the patches a number of
local polynomial ansatz functions is defined. The Finite Element solution is a superposition of all local ansatz functions on all patches.
In order to increase the accuracy of the Finite Element solution, the discrete ansatz space has to be enlarged. In general, this can be achieved by making the mesh finer, i.e., decreasing the patch
size, or by increasing the polynomial order of the local ansatz functions. The mesh can be refined manually within JCMgeo (e.g. refinements at corners) or by automatic mesh refinements within JCMsolve.
The above error representation refers to the near field approximation of the solution. For scattering computations the far field approximation is typically requested. Often the far field error is
much less sensitive to the singular part of the solution than the near field error. Practically, this implies that fewer (or no) mesh refinements are needed to reach the desired accuracy in the far field data.
Controlling mesh size
JCMsolve provides sophisticated a priori estimation techniques to set up the Finite Element parameters with minimal user input.
The mesh generation tool JCMgeo subdivides the user defined geometry into small patches and allows the definition of maximum mesh sizes for different parts of the geometrical setup. The mesh is read
by JCMsolve, and the Finite Element degrees are set on the cells of the mesh afterwards. The Precision controls the self-adaptive setup of the Finite Element degrees on the mesh and only requires the user
to define a single accuracy level. The polynomial degree on each patch is then chosen automatically to meet this accuracy level.
Refinement loop
In addition to the above a priori Precision settings, applied before the simulation, JCMsolve also has a number of built-in mesh adaption and/or local polynomial degree adaption schemes defined in the
Refinement section. The refinement loop is intended to reach and check the Precision level the user specified. The user can define a prerefinement of the mesh before the simulation is started.
Further, a uniform or adaptive mesh refinement loop can be defined.
|
{"url":"https://docs.jcmwave.com/JCMsuite/html/ParameterReference/0a7f12e43aae20b0fed0417b330e17bd.html?version=6.4.1","timestamp":"2024-11-09T19:05:33Z","content_type":"text/html","content_length":"21571","record_id":"<urn:uuid:8a50c8ec-0788-45a9-af22-d3ebe63ddf2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00376.warc.gz"}
|
Evaluate And Graph Polynomial Functions Worksheet Answers - Function Worksheets
Evaluating Polynomial Functions Worksheet Answers – A well-designed functions worksheet with answers provides students with answers to a number of important questions about … Read more
Graphing Polynomial Functions Worksheet With Answers
Graphing Polynomial Functions Worksheet With Answers – A well-designed functions worksheet with answers provides students with answers to a variety of important questions about functions.
… Read more
Evaluating Polynomial Functions Worksheet
Evaluating Polynomial Functions Worksheet – A well-designed functions worksheet with answers provides students with answers to a variety of important questions about functions. It …
Read more
Graphing Polynomial Functions Worksheet Answers
Graphing Polynomial Functions Worksheet Answers – A well-designed functions worksheet with answers provides students with answers to a variety of significant questions about functions. It …
Read more
Graphs Of Polynomial Functions Worksheet Answers
Graphs Of Polynomial Functions Worksheet Answers – A well-designed functions worksheet with answers provides students with solutions to a variety of significant questions about … Read more
|
{"url":"https://www.functionworksheets.com/tag/evaluate-and-graph-polynomial-functions-worksheet-answers/","timestamp":"2024-11-10T14:54:59Z","content_type":"text/html","content_length":"78153","record_id":"<urn:uuid:5077d0b3-3117-44e5-8023-e01fa8ce22a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00676.warc.gz"}
|
How Many LED Lights Can You Run On a 15 Amp Circuit?
LED light bulbs are much more energy efficient than older incandescent lights. In fact, LED light bulbs produce so much more light for the same amount of energy that many manufacturers emphasize
the wattage of the equivalent incandescent light bulb more than the actual wattage of the LED light bulb.
Although the user can run more LED light bulbs than incandescent bulbs on the same circuit, it is a good idea to calculate how many LED lights a user can actually run on a given electric circuit -
for example, a 15 Amp circuit.
Published: August 3, 2022.
Little Bit of Math
Electric circuits are designed with a certain safety margin - for example, a 15 Amp circuit can withstand even more current, but a 15 Amp electric breaker limits the actual current through such a circuit.
So, to avoid tripping such electric breakers, an "80% Rule" is applied - instead of 15 Amps, we will do the calculation for 12 Amps:
15 Amps * 0.8 = 12 Amps
If the line voltage is 120 volts, that means that such a circuit can constantly provide 1440W:
P(W) = U(V) * I(A) = 120V * 12A = 1440 Watts
If we are going to connect only 100W incandescent light bulbs, that means that we can add "only" 14 of them:
1440W / 100W = 14.4
(Of course, 14.4 light bulbs is rounded down to 14 light bulbs.)
LED light bulb with a similar light strength (given in lumens) on average requires ~14W.
Thus, in order to calculate how many 14W LED light bulbs we can run on a 1440W line, we write:
1440W / 14W = 102.86
Thus, a 120V 15A circuit can support up to 102 14W LED light bulbs - more than 7x as many bulbs as 100W incandescent bulbs for the same power consumption.
Of course, running so many light bulbs at home is unnecessary, but one can see how much energy can be saved by replacing incandescent light bulbs with LED light bulbs.
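The arithmetic above can be wrapped in a short, self-contained Python sketch. The 80% rule and the example wattages are the assumptions stated in the article:

```python
def max_bulbs(breaker_amps, line_voltage, bulb_watts, safety_factor=0.8):
    """How many bulbs of a given wattage a circuit can run,
    applying the 80% continuous-load rule to the breaker rating."""
    usable_watts = breaker_amps * safety_factor * line_voltage  # e.g. 12 A * 120 V = 1440 W
    return int(usable_watts // bulb_watts)  # round down: partial bulbs don't count

print(max_bulbs(15, 120, 100))  # incandescent: 14
print(max_bulbs(15, 120, 14))   # LED: 102
```

Changing `bulb_watts` lets you redo the calculation for any other bulb type on the same circuit.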
|
{"url":"https://www.batteryequivalents.com/how-many-led-lights-can-you-run-on-a-15-amp-circuit.html","timestamp":"2024-11-02T01:21:26Z","content_type":"text/html","content_length":"27123","record_id":"<urn:uuid:e69c272e-01d5-44a0-8127-f89e067a1d94>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00295.warc.gz"}
|
Maharashtra Board Class 9 Science Solutions Chapter 2 Work and Energy
Balbharti Maharashtra State Board Class 9 Science Solutions Chapter 2 Work and Energy Notes, Textbook Exercise Important Questions and Answers.
Maharashtra State Board Class 9 Science Solutions Chapter 2 Work and Energy
Class 9 Science Chapter 2 Work and Energy Textbook Questions and Answers
1. Write detailed answers?
a. Explain the difference between potential energy and kinetic energy.
Kinetic Energy Potential Energy
(i) Kinetic energy is the energy possessed by the body due to its motion. (i) Potential energy is the energy possessed by the body because of its shape or position.
(ii) K.E = 1/2 mv^2 (ii) P.E = mgh
(iii) e.g., flowing water, such as when falling from a waterfall. (iii) e.g., water at the top of a waterfall, before the drop.
b. Derive the formula for the kinetic energy of an object of mass m, moving with velocity v.
Suppose a stationary object of mass ‘m’ moves because of an applied force. Let ‘u’ be its initial velocity (here u = 0). Let the applied force be ‘F’. This generates an acceleration a in the object,
and after time T, the velocity of the object becomes equal to ‘v’. The displacement during this time is s. The work done on the object is
W = F x s ……………….. (1)
Using Newton’s 2nd law of motion,
F = ma ……………….. (2)
Using Newton’s 2nd equation of motion
\(s=u t+\frac{1}{2} a t^{2}\)
However, as the initial velocity is zero, u = 0,
∴ \(s=\frac{1}{2} a t^{2}\) ……………….. (3)
Also, v = u + at = at, so \(t=\frac{v}{a}\) and hence \(s=\frac{v^{2}}{2 a}\)
Substituting (2) and (3) in (1):
W = ma × \(\frac{v^{2}}{2 a}\) = \(\frac{1}{2} m v^{2}\)
This work done is stored in the object as its kinetic energy, ∴ K.E. = \(\frac{1}{2} m v^{2}\)
c. Prove that the kinetic energy of a freely falling object on reaching the ground is nothing but the transformation of its initial potential energy.
Let us look at the kinetic and potential energies of an object of mass (m), falling freely from height (h), when the object is at different heights.
As shown in the figure, the point A is at a height (h) from the ground. Let the point B be at a distance (x), vertically below A. Let the point C be on the ground directly below A and B. Let us
calculate the energies of the object at A, B and C.
(1) When the object is stationary at A, its initial velocity is u = 0,
∴ K.E. = 0, P.E. = mgh, ∴ Total energy = K.E. + P.E. = mgh ……………….. (i)
(2) Let the velocity of the object be v[B] when it reaches point B, having fallen through a distance x. Then v[B]^2 = u^2 + 2gx = 2gx,
∴ K.E. = 1/2 mv[B]^2 = mgx, P.E. = mg(h - x), ∴ Total energy = mgx + mg(h - x) = mgh ……………….. (ii)
(3) Let the velocity of the object be v[C] when it reaches the ground, near point C. Then v[C]^2 = 2gh,
∴ K.E. = 1/2 mv[C]^2 = mgh, P.E. = 0, ∴ Total energy = mgh ……………….. (iii)
From equations (i) and (iii) we see that the total potential energy of the object at its initial position is the same as its kinetic energy at the ground.
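As a quick numerical sanity check of the conservation argument above, here is a short Python sketch. The mass and drop height are arbitrary illustrative values; g = 9.8 m/s² as elsewhere in the chapter:

```python
g = 9.8           # acceleration due to gravity, m/s^2
m, h = 2.0, 10.0  # arbitrary mass (kg) and drop height (m)

for x in (0.0, 4.0, 10.0):     # distance fallen at points A, B and C
    v_squared = 2 * g * x      # v^2 = u^2 + 2gx, with u = 0
    ke = 0.5 * m * v_squared   # kinetic energy
    pe = m * g * (h - x)       # potential energy relative to the ground
    total = ke + pe
    print(x, total)            # total stays equal to m*g*h = 196 J throughout
```

The kinetic energy grows exactly as fast as the potential energy shrinks, which is the content of equations (i)-(iii).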
d. Determine the amount of work done when an object is displaced at an angle of 30° with respect to the direction of the applied force.
When an object is displaced by displacement 's' by applying a force 'F' at an angle of 30°, the work done is
W = Fs cos 30° = \(\frac{\sqrt{3}}{2}\) Fs ≈ 0.866 Fs
e. If an object has zero momentum, does it have kinetic energy? Explain your answer.
• No, it does not have kinetic energy if it does not have momentum.
• Momentum is the product of mass and velocity. If it is zero, it implies that v = 0 (since mass can never be zero).
• Now K.E. = 1/2 mv^2, so if v = 0 then K.E. will also be zero.
• Thus, if an object has no momentum, then it cannot possess kinetic energy.
f. Why is the work done on an object moving with uniform circular motion zero?
• In uniform circular motion, the force acting on an object is along the radius of the circle.
• Its displacement is along the tangent to the circle. Thus, they are perpendicular to each other.
Hence θ = 90° and cos 90° = 0
∴ W = Fs cos θ = 0
2. Choose one or more correct alternatives.
a. For work to be performed, energy must be ….
(i) transferred from one place to another
(ii) concentrated
(iii) transformed from one type to another
(iv) destroyed
b. Joule is the unit of …
(i) force
(ii) work
(iii) power
(iv) energy
c. Which of the forces involved in dragging a heavy object on a smooth, horizontal surface, have the same magnitude?
(i) the horizontal applied force
(ii) gravitational force
(iii) reaction force in vertical direction
(iv) force of friction
d. Power is a measure of the …….
(i) the rapidity with which work is done
(ii) amount of energy required to perform the work
(iii) The slowness with which work is performed
(iv) length of time
e. While dragging or lifting an object, negative work is done by
(i) the applied force
(ii) gravitational force
(iii) frictional force
(iv) reaction force
3. Rewrite the following sentences using a proper alternative.
a. The potential energy of your body is least when you are …..
(i) sitting on a chair
(ii) sitting on the ground
(iii) sleeping on the ground
(iv) standing on the ground
(iii) sleeping on the ground
b. The total energy of an object falling freely towards the ground …
(i) decreases
(ii) remains unchanged
(iii) increases
(iv) increases in the beginning and then decreases
(iii) increases
c. If we increase the velocity of a car moving on a flat surface to four times its original speed, its potential energy ….
(i) will be twice its original energy
(ii) will not change
(iii) will be 4 times its original energy
(iv) will be 16 times its original energy.
(ii) will not change
d. The work done on an object does not depend on ….
(i) displacement
(ii) applied force
(iii) initial velocity of the object
(iv) the angle between force and displacement.
(iii) initial velocity of the object
4. Study the following activity and answer the questions.
1. Take two aluminium channels of different lengths.
2. Place the lower ends of the channels on the floor and hold their upper ends at the same height.
3. Now take two balls of the same size and weight and release them from the top end of the channels. They will roll down and cover the same distance.
1. At the moment of releasing the balls, which energy do the balls have?
2. As the balls roll down which energy is converted into which other form of energy?
3. Why do the balls cover the same distance on rolling down?
4. What is the form of the eventual total energy of the balls?
5. Which law related to energy does the above activity demonstrate? Explain.
1. At the moment of releasing the ball they possess Potential energy as they are at a height above the ground.
2. As the balls roll down, the Potential energy is converted into Kinetic energy since they are now in motion.
3. Since they have been released from the same height, they will cover the same distance.
4. The eventual form of the total energy of the balls is “Mechanical Energy” i.e, a combination of Potential energy and Kinetic energy
5. The above activity demonstrates the “Law of Conservation of Energy”
5. Solve the following examples.
a. An electric pump has 2 kW power. How much water will the pump lift every minute to a height of 10 m? (Ans : 1224.5 kg)
Power (P) = 2 kW = 2000 W
Height (h) = 10 m
Time (t) = 1 min = 60 s
Acceleration due to gravity (g) = 9.8 m/s^2
To Find:
Mass of water (m) = ?
P = W/t, ∴ W = P × t = 2000 × 60 = 120000 J
W = mgh, ∴ m = W/(g × h) = 120000/(9.8 × 10) = 1224.5 kg
Water lifted by the pump is 1224.5 kg
b. If the energy of a ball falling from a height of 10 metres is reduced by 40%, how high will it rebound? (Ans : 6 m)
Given: Initial height (h[1]) = 10m
Let Initial (P.E[1]) = 100
Final (P.E[2]) = 100 – 40
= 60
To Find:
Final height (h[2]) = ?
P.E. = mgh, ∴ P.E[2]/P.E[1] = h[2]/h[1]
∴ h[2] = (60/100) × h[1] = 0.6 × 10 = 6 m
The ball will rebound by 6 m.
d. The velocity of a car increases from 54 km/hr to 72 km/hr. How much is the work done if the mass of the car is 1500 kg? (Ans.: 131250 J)
u = 54 km/hr = 15 m/s, v = 72 km/hr = 20 m/s
W = 1/2 m(v^2 - u^2) = 1/2 × 1500 × (20^2 - 15^2) = 750 × 175 = 131250 J
Work done to increase the velocity = 131250 J
e. Ravi applied a force of 10 N and moved a book 30 cm in the direction of the force. How much was the work done by Ravi? (Ans: 3 J)
Force (F) = 10 N
θ = 0°, (Since force and displacement are in same direction)
Displacement (s) = 30 cm = 30/100 m
To Find:
Work (W) = ?
W = Fs cos θ = 10 × 0.3 × cos 0° = 10 × 0.3 × 1 = 3 J
The work done by Ravi is 3 J
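The solved examples above can be cross-checked with a few lines of Python, using g = 9.8 m/s² and the same unit conversions as the worked solutions:

```python
g = 9.8  # m/s^2

# (a) Electric pump: mass lifted per minute, from P*t = m*g*h
mass = (2000 * 60) / (g * 10)          # W = 120000 J, h = 10 m
print(round(mass, 1))                  # 1224.5 kg

# (b) Rebound height after the ball loses 40% of its energy
h2 = 0.6 * 10                          # P.E. is proportional to height
print(h2)                              # 6.0 m

# (d) Work to take a 1500 kg car from 54 km/hr to 72 km/hr
u = 54 * 1000 / 3600                   # 15.0 m/s
v = 72 * 1000 / 3600                   # 20.0 m/s
work_car = 0.5 * 1500 * (v**2 - u**2)
print(work_car)                        # 131250.0 J

# (e) Work done by Ravi: 10 N over 30 cm, same direction
print(10 * 0.30)                       # 3.0 J
```

Each print reproduces the answer quoted in the corresponding example.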
Numericals For Practice
Class 9 Science Chapter 2 Work and Energy Intext Questions and Answers
Question 1.
What are different types of forces? Give examples.
Forces are of two types.
• Contact force e.g.: Mechanical force, frictional force, muscular force
• Non-contact force e.g.: gravitational force, magnetic force, electrostatic force
Question 2.
Minakshee wants to displace a wooden block from point A to point B along the surface of a table as shown. She has used force F for the purpose.
(a) Has all the energy she spent been used to produce an acceleration in the block?
(b) Which forces have been overcome using that energy?
(a) Only part of the energy applied by Minakshee is used in accelerating the block.
(b) Force of friction has been overcome using the energy.
Question 3.
Mention the type of energy used in the following examples.
(i) Stretched rubber string.
(ii) Fast-moving car.
(iii) The whistling of a cooker due to steam.
(iv) A fan running on electricity.
(v) Drawing out pieces of iron from garbage, using a magnet.
(vi) Breaking of a glass window pane because of a loud noise.
(vii) The crackers exploded on Diwali.
(i) Potential energy
(ii) Kinetic energy
(iii) Sound energy
(iv) Electrical energy
(v) Magnetic energy
(vi) Sound energy
(vii) Sound energy, light energy and heat energy
Question 4.
Study the pictures given below and answer the questions:
(a) In which of the pictures above has work been done?
(b) From scientific point of view, when do we say that no work was done?
(a) Girl studying : No work done
Boy playing with ball: Work is done
Girl watching T.V.: No work done Person lifting sack of grains : Work is done
(b) No work is said to be done when force is applied but there is no displacement.
Question 5.
Make two pendulums of the same length with the help of thread and two nuts. Tie another thread in the horizontal position.
Tie the two pendulums to the horizontal thread in such a way that they will not hit each other while swinging. Now swing one of the pendulums and observe. What do you see?
You will see that as the speed of oscillation of the first pendulum slowly decreases, the second pendulum, which was initially stationary, begins to swing. Thus, one pendulum transfers its energy to the other.
Question 6.
Ajay and Atul have been asked to determine the potential energy of a ball of mass m kept on a table as shown in the figure. What answers will they get? Will they be different? What do you conclude
from this?
• According to Ajay P.E[1] = mgh[1] and according to Atul P.E[2] = mgh[2].
• Yes, the answer will be different as the two heights are different.
• Potential energy is relative.
Question 7.
Discuss the directions of force and of displacement in each of the following cases.
(i) Pushing a stalled vehicle.
(ii) Catching the ball which your friend has thrown towards you.
(iii) Tying a stone to one end of a string and swinging it round and round by the other end of the string.
(iv) Walking up and down a staircase; climbing a tree.
(v) Stopping a moving car by applying brakes.
(i) Force and displacement are in the same direction.
(ii) Force and displacement are in the opposite direction.
(iii) Force and displacement are perpendicular to each other.
(iv) Force and displacement are in the opposite direction.
(v) Force and displacement are in the opposite direction.
Question 8.
(A) An arrow is released from a stretched bow.
(B) Water kept at a high flows through a pipe into the tap below.
(C) A compressed spring is released.
(a) Which words describe the state of the object in the above examples?
(b) Where did the energy required to cause the motion of the objects come from?
(c) If the objects were not brought in those states, would they have moved?
(a) Words such as stretched bow, water kept at a height and compressed spring describe the state of the objects.
(b) The energy required for the objects came from its specific state or motion in the form of potential energy.
(c) No, if the objects were not brought in those states, they would have not moved.
Question 9.
Study the activity and answer the following questions.
(a) Figure A – Why does the cup get pulled?
(b) Figure B – What is the relation between the displacement of the cup and the force applied through the ruler?
(c) In Figure C-Why doesn’t the cup get displaced?
(d) What is the type of work done in figures A, B and C?
(e) In the three actions above, what is the relationship between the applied force and the displacement?
(a) The cup gets pulled as the force of the nut and the displacement of the cup is in the same direction.
(b) The displacement of the cup and the force applied through the ruler are in opposite directions.
(c) The cup does not get displaced as two equal forces are acting in opposite directions.
(d) The work done in figure A is positive, figure B is negative and in figure C is zero.
(e) In figure A the applied force and the displacement is in the same direction, in figure B the applied force and the displacement is in the opposite direction and in figure C the applied force and
displacement is perpendicular to each other.
Question 10.
From the following activities find out whether work is positive, negative or zero. Give reasons for your answers.
(a) A boy is swimming in a pond.
(b) A coolie is standing with a load on his head.
(c) Stopping a moving car by applying brakes.
(d) Catching the ball which you friend has thrown towards you.
(a) A boy is swimming in a pond: The work done is positive because the direction of applied force and displacement are the same.
(b) A coolie is standing with a load on his head: The work done is zero because the applied force does not cause any displacement.
(c) Stopping a moving car by applying brakes: The work done is negative because the fore applied by the brakes acts in a direction opposite to the direction of motion of car.
(d) Catching the ball which you friend has thrown towards you : Negative work because the force required to stop the ball, acts opposite to the displacement of the ball.
Question 11.
(a) Can your father climb stairs as fast as you can?
(b) Will you fill the overhead water tank with the help of a bucket or an electrical motor?
(c) Suppose Rajashree, Yash and Ranjeet have to reach the top of a small hill. Rajashree went by car. Yash went cycling while Ranjeet went walking. If all of them choose the same path, who will
reach first and who will reach last? (Think before you answer.)
(a) No, father takes more time to climb stairs.
(b) Overhead water tank can be filled with the help of one electric motor rather than filling it with bucket.
(c) Rajashree will reach first, followed by Yash, and Ranjeet will reach last because a car moves faster than a cycle and a person walking.
Class 9 Science Chapter 2 Work and Energy Additional Important Questions and Answers
1. Choose and write the correct option:
Question 1.
Forces are of …………………… types.
(a) 2
(b) 3
(c) 4
(d) 5
(a) 2
Question 2.
Example of Contact force is ………………….. .
(a) Gravitational Force
(b) Magnetic Force
(c) Electrostatic Force
(d) Muscular Force
(d) Muscular Force
Question 3.
Example of Non-contact force is ………………….. .
(a) Mechanical Force
(b) Frictional Force
(c) Muscular Force
(d) Electrostatic Force
(d) Electrostatic force
Question 4.
Work is said to be done on a body when a …………………… is applied on object causes displacement of the object.
(a) Direction
(b) Area
(c) Volume
(d) Force
(d) force
Question 5.
W = ………………. .
(a) mgh
(b) mdh
(c) mv^2
(d) mfe
(a) mgh
Question 6.
The energy stored in the dry cell is in of ………………. energy.
(a) Light
(b) Chemical
(c) Solar
(d) Kinetic
(b) chemical
Question 7.
The work done is zero if there is no ……………… .
(a) Direction
(b) Displacement
(c) Mass
(d) Angle
(b) displacement
Question 8.
Flowing water has ………………. energy.
(a) Potential
(b) Chemical
(c) Solar
(d) Kinetic
(d) kinetic
Question 9.
By stretching the rubber strings of a we store ………………. energy in it.
(a) Potential
(b) Chemical
(c) Electric
(d) Kinetic
(a) potential
Question 10.
………………. is the unit of force.
(a) Both B and C
(b) Newton
(c) Dyne
(d) Volts
(a) Both B and C
Question 11.
For a freely falling body, kinetic energy is ………………. at the ground level.
(a) Maximum
(b) Minimum
(c) Neutral
(d) Reversed
(a) Maximum
Question 12.
Energy can neither be ………………. nor ……………… .
(a) Destroyed
(b) Created
(c) Saved
(d) Both A and B
(d) Both A and B
Question 13.
Work and …………………… have the same unit.
(a) Energy
(b) Electricity
(c) Force
(d) Both B and C
(a) Energy
Question 14.
S.I. unit of energy is ………………….. .
(a) Joule
(b) Ergs
(c) m/s^2
(d) Both A and B
(a) Joule
Question 15.
Work is the product of ………………….. .
(a) force and distance
(b) displacement and velocity
(c) kinetic and potential energy
(d) force and displacement
(d) force and displacement
Question 16.
S.I. unit of work is ………………….. .
(a) dyne
(b) newton-meter or erg
(c) N/m2 or joule
(d) newton-meter or joule
(d) newton-meter or joule
Question 17.
…………………… is the capacity to do work.
(a) Energy
(b) Force
(c) Power
(d) Momentum
(a) Energy
Question 18.
Kinetic energy of a body (KE) = ………………….. .
(a) mv^2
(b) 1/2 mv^2
(c) mgh
(d) Fs
(b) 1/2 mv^2
Question 19.
Potential energy of a body is given by (P.E.) = ………………….. .
(a) Fs
(b) mgh
(c) ma
(d) mv^2
(b) mgh
Question 20.
1 hp = ………………….. .
(a) 476 watts
(b) 746 watts
(c) 674 watts
(d) 764 watts
(b) 746 watts
Question 21.
…………………… is the commercial unit of power.
(a) kilowatt second
(b) dyne
(c) kilowatt
(d) erg
(c) kilowatt
Question 22.
1 kWh = …………………… joules.
(a) 3.6 x 10^3
(b) 3.6 x 10^6
(c) 6.3 x 10^6
(d) 6.3 x 10^3
(b) 3.6 x 10^6
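The conversion in the last answer follows directly from the definitions (1 kW = 1000 W, sustained for one hour = 3600 s); a one-line Python check:

```python
# 1 kWh = 1000 W x 3600 s
kwh_in_joules = 1000 * 3600
print(kwh_in_joules)  # 3600000, i.e. 3.6 x 10^6 J
```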
Based on Practicals
Question 23.
The work done by a force is said to be …………………… when the applied force does not produce displacement.
(a) positive
(b) negative
(c) zero
(d) none of these
(c) zero
Question 24.
When some unstable atoms break up, they release a tremendous amount of …………………… energy.
(a) chemical
(b) potential
(c) nuclear
(d) mechanical
(c) nuclear.
Name the following:
Question 1.
Unit of energy used for commercial purpose.
Kilowatt-hour kW h is the unit of energy used for commercial purpose.
Question 2.
Unit used in industry to measure power.
Horse power (hp) is the unit used in industry to express power.
Question 3.
SI unit of energy.
SI unit of energy is Joule (J).
Question 4.
Two types of mechanical energy.
Potential energy and kinetic energy are the two types of mechanical energy.
Question 5.
An example where force acting on an object does not do any work.
In a simple pendulum, the gravitational force acting on the bob does not do any work as there is no displacement in the direction of force.
Question 6.
The relationship between 1 joule and 1 erg.
1 joule = 10^7 erg.
Question 7.
Various forms of energy
The various forms of energy are mechanical, heat, light, sound, electro-magnetic, chemical, nuclear and solar.
State whether the following statements are true or false:
(1) The potential energy of a body of mass 1 kg kept at height 1 m is 1 J.
(2) Water stored at some height has potential energy.
(3) Unit of power is joule.
(4) Mechanical energy can be converted into electrical energy.
(5) Work is a vector quantity.
(6) Power is a scalar quantity.
(7) The kilowatt hour is the unit of energy.
(8) The CGS unit of energy is dyne.
(9) The SI unit of work is newton.
(10) Kinetic energy has formula 1/2 mv^2
(1) False
(2) True
(3) False
(4) True
(5) False
(6) True
(7) True
(8) False
(9) False
(10) True
Find the odd man out.
Question 1.
Work, Energy, Power, Force.
Question 2.
A stretched spring, A body placed in at some height, A bullet fired from gun.
A bullet fired from gun.
Question 3.
A stretched spring, A rock rolling downhill, A bullet fired from gun.
A stretched spring.
Write the formula of the following.
Question 1.
Kinetic energy
K.E. = 1/2 mv^2
Question 2.
Potential energy
P.E. = mgh
Question 3.
Work
W = Fs or Fs cosθ
Question 4.
Question 5.
One line answer.
Question 1.
(i) When is work done said to be zero?
Work done is zero when force acting on the body and its displacement are perpendicular to each other.
(ii) Which quantities are measured in ergs?
Work and energy are measured in ergs.
(iii) What is the relationship between newton, meter and joule?
1 joule = 1 newton x 1 meter
(iv) What is energy?
The ability of a body to do work is called energy.
(v) Give 4 examples of energy
Solar, wind, mechanical and heat.
(vi) Which device converts electrical energy into heat?
Electric water heater (Geyser) converts electrical energy into heat.
(vii) What is the relationship between second, horsepower and joule?
1 horse power = \(\frac{746 \text{ joules}}{1 \text{ second}}\)
Question 2.
Find whether work is positive, negative or zero.
(a) Person moving along circle from A to B.
Work done is positive as direction of applied force and displacement are the same.
(b) Person completing one circle and returns to position A.
Work done is zero because there is no displacement for the person.
(c) Person pushing a car in the forward direction.
Work done is positive as the motion of car is in the direction of the applied force.
(d) A car coming downhill even after pushing it in the opposite uphill direction.
Work done is negative as the motion of car is in opposite direction of the applied force.
(e) Motion of the clock pendulum.
Work done is zero as there is no net displacement of the pendulum: it comes back to its original position.
Give Scientific reasons:
Question 1.
A moving ball hits a stationary ball and displaces it.
• The moving ball has certain energy.
• When it hits the stationary ball, the energy is transferred to the stationary ball, because of which it moves.
• Hence, a moving ball hits a stationary ball and displaces it.
Question 2.
Flowing water from some height can rotate turbine.
• Flowing water has certain energy.
• When it hits the turbine, energy is transferred to the turbine, because of which it rotates.
• Hence, flowing water from some height can rotate a turbine.
Question 3.
A stretched rubber band when released regains its original length.
• When we stretch a rubber band we give energy to it.
• This energy is stored in it.
• Hence, when we release it, it regains its original length.
Question 4.
Wind can move the blades of a windmill.
• Wind has certain energy.
• When it hits the windmill energy is transferred to the windmill because of which it moves.
• Hence, wind can move the blades of a wind mill.
Question 5.
An exploding firecracker lights up as well as makes a sound.
• The exploding firecracker converts the chemical energy stored in it into light energy and sound energy.
• Here, energy is converted from one type to another.
• Hence, an exploding firecracker lights as well as makes a sound.
Question 6.
Work done on an artificial satellite by gravity is zero while moving around the earth.
• When the artificial satellite moves around the earth in a circular orbit, the gravitational force acts on it.
• The gravitational force acting on the satellite and its displacement are perpendicular to each other, i.e. θ = 90°.
• For θ = 90°, the work done is zero (∵ cos 90° = 0).
• Hence, work done on an artificial satellite by gravity is zero while moving around the earth.
Difference between :
Question 1.
Work and Power:
Work:
(i) Work is the product of force and displacement.
(ii) Work is given by the formula: W = Fs
(iii) MKS unit – joule; CGS unit – erg
Power:
(i) Power is the rate of doing work.
(ii) Power is given by the formula: P = W/t
(iii) MKS unit – joule/sec (watt); CGS unit – erg/sec
Question 2.
Work and Energy:
Work:
(i) It is the product of the magnitude of the force acting on the body and the displacement of the body in the direction of the force.
(ii) It is the effect of energy.
Energy:
(i) It is the capacity to do work.
(ii) It is the cause of work.
Solve the following:
Type – A
W = Fs cosθ
If force and displacement are in same direction, then θ = 0°, and cos θ = 1
If force and displacement are in opposite direction, then θ = 180°, and cos θ = -1
If force and displacement are perpendicular, then θ = 90°, and cos θ = 0
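The three sign cases above are easy to check numerically. A minimal Python sketch (the function name `work_done` is ours, not from the textbook):

```python
import math

def work_done(force_n, displacement_m, angle_deg):
    """Work done by a force: W = F * s * cos(theta)."""
    return force_n * displacement_m * math.cos(math.radians(angle_deg))

print(work_done(10, 2, 0))    # same direction: +20.0 J (positive work)
print(work_done(10, 2, 180))  # opposite direction: -20.0 J (negative work)
print(work_done(10, 2, 90))   # perpendicular: ~0 J (zero up to float rounding)
```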
Question 1.
Pravin has applied a force of 100 N on an object, at an angle of 60° to the horizontal. The object gets displaced in the horizontal direction and 400 J of work is done. What is the displacement of the object? (cos 60° = 1/2)
To Find:
Displacement (s) = ?
W = Fs cos θ
∴ 400 = 100 × s × cos 60° = 100 × s × 1/2 = 50s
∴ s = 400/50 = 8 m
The object will be displaced through 8 m.
Question 2.
A force of 50 N acts on an object and displaces it by 2 m. If the force acts at an angle of 60° to the direction of its displacement, find the work done.
50 J
Question 3.
Raj applied a force of 20 N and moved a book 40 cm in the direction of the force. How much was the work done by Raj?
W = Fs = 20 N × 0.40 m = 8 J
Type -B
1) W = K.E = ½mv²
2) W = P.E = mgh
• W = P.E, W = K.E
1 km/hr =
\(\frac{1000}{3600} \mathrm{~m} / \mathrm{s}=\frac{5}{18} \mathrm{~m} / \mathrm{s}\)
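The Type-B formulas and the km/hr conversion can be bundled into small helpers (the function names are ours); the calls reproduce the answers to Questions 4, 5 and 9 below:

```python
def kinetic_energy(mass_kg, velocity_ms):
    """K.E. = 1/2 m v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

def potential_energy(mass_kg, height_m, g=9.8):
    """P.E. = m g h, in joules."""
    return mass_kg * g * height_m

def kmph_to_ms(v_kmph):
    """1 km/hr = 1000 m / 3600 s = 5/18 m/s."""
    return v_kmph * 5 / 18

print(kinetic_energy(0.25, 2))        # Question 4: 0.5 J
print(potential_energy(500, 10))      # Question 5: ≈ 49000 J
print(kinetic_energy(100_000, 1000))  # Question 9: 5e10 J (100 tonnes = 100,000 kg)
```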
Question 4.
A stone having a mass of 250 gm is falling from a height. How much kinetic energy does it have at the moment when its velocity is 2 m/s?
K.E = ½mv² = ½ × 0.25 kg × (2 m/s)² = 0.5 J
The kinetic energy of the stone is 0.5 J.
Question 5.
500 kg water is stored in the overhead tank of a 10 m high building. Calculate the amount of potential energy stored in the water.
Mass (m) = 500 kg
Height (h) = 10 m
Acceleration due to gravity (g) = 9.8 m/s^2
To Find:
Potential energy (P.E) = ?
P.E = mgh
P.E = mgh
= 500 x 9.8 x 10
= 500 x 98
= 49000J
The P.E of the stored water is 49000 J
Question 6.
Calculate the work done to take an object of mass 20 kg to a height of 10 m. (g = 9.8 m/s^2)
Mass (m) = 20 kg
Acceleration due to gravity (g) = 9.8 m/s^2
Displacement (s) = (h) = 10 m.
To Find:
Work done (W) = ?
W = P.E = mgh
= 20 x 9.8 x 10
= 1960 J
The work done (against gravity) to take an object of mass 20 kg to a height of 10 m is 1960 J.
Question 7.
A body of 0.5 kg thrown upwards reaches a maximum height of 5 m. Calculate the work done by the force of gravity during this vertical displacement.
Mass (m) = 0.5 kg
Acceleration due to gravity (g) = -9.8 m/s^2
Displacement (s) = 5 m.
To Find:
Work done (W) = ?
W = P.E = mgh
W = mgh
= 0.5 x (-9.8) x 5
= -24.5 J
The work done by the force of gravity is -24.5 joule.
Question 8.
1 kg mass has a kinetic energy of 2 joule. Calculate its velocity.
Mass (m) = 1 kg
Kinetic Energy (K.E) = 2 J
To Find:
Velocity (v) = ?
K.E = ½mv², so v = √(2 × K.E / m) = √(2 × 2 / 1) = 2 m/s
The velocity is 2 m/s.
Question 9.
A rocket of mass 100 tonnes is propelled with a vertical velocity 1 km/s. Calculate kinetic energy.
K.E = ½mv² = ½ × 100,000 kg × (1000 m/s)² = 5 × 10^10 J
The kinetic energy of the rocket is 5 × 10^10 J.
Type – C
\(\text { 1) Power }=\frac{\text { work }}{\text { time }}=\frac{\text { mgh }}{t}\)
Power should be expressed in kW
Time should be expressed in hours
1 kWh = 1 unit
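A corresponding sketch for the Type-C relations (function names ours). Power comes out in watts when mass, height and time are in SI units; for electricity billing, energy in kWh ("units") is power in kW times time in hours:

```python
def power_watts(mass_kg, height_m, time_s, g=9.8):
    """P = W / t = m g h / t, in watts."""
    return mass_kg * g * height_m / time_s

def energy_units(power_kw, hours):
    """Energy consumed in kWh; 1 kWh = 1 'unit' on an electricity bill."""
    return power_kw * hours

print(power_watts(20, 5, 20))       # Question 10 below: ≈ 49 W
print(energy_units(25 / 1000, 10))  # a 25 W bulb used for 10 hrs: 0.25 kWh
```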
Question 10.
Swaralee takes 20 s to carry a bag weighing 20 kg to a height of 5 m. How much power has she used?
Mass (m) = 20 kg
Height (h) = 5 m
Time (t) = 20s
Acceleration due to gravity (g) = 9.8 m/s^2
To Find:
Power (P) = ?
P = mgh / t
= (20 × 9.8 × 5) / 20
= 9.8 × 5
= 49 W
Power used by Swaralee is 49 W
Write notes on the following:
Question 1.
Derive the expression for potential energy.
(i) To carry an object of mass ‘m’ to a height ‘h’ above the earth’s surface, a force equal to ‘mg’ has to be used against the direction of the gravitational force.
(ii) The amount of work done can be calculated as follows:
Work = force x displacement
∴ W = mg x h
∴ W = mgh
(iii) This work is stored in the object as potential energy because of its displacement.
PE = mgh (W = P.E)
(iv) Displacement to height h causes energy equal to mgh to be stored in the object.
Question 2.
When can you say that the work done is either positive, negative or zero?
• When the force and the displacement are in the same direction, the work done by the force is positive.
• When the force and displacement are in the opposite directions, the work done by the force is negative.
• When the applied force does not cause any displacement or when the force and the displacement are perpendicular to each other, the work done by the force is zero.
Question 3.
Explain the relation between, the commercial and SI unit of energy.
The commercial unit of energy is a kilowatt-hour (kWh) while the SI unit of energy is the joule. Their relation is
1 kWh = 1kW x 1hr
= 1000 Wx 3600 s
= 3600000J
(Watt x Sec = Joule)
1 kWh = 3.6 × 10^6 J.
Question 4.
How is work calculated if the direction of force and the displacement are inclined to each other?
If the direction of the force and the displacement are inclined to each other, then we must take the component of the applied force along the direction of the displacement.
If θ is the angle between the force and the displacement, then the component of the force (F₁) in the direction of the displacement is F₁ = F cos θ, so the work done is W = F₁s = Fs cos θ.
Complete the flow chart.
Question 1.
Transformation of energy
Question 2.
Transformation of energy
Write effects of the following with examples.
Question 1.
• A force can move a stationary object. The force of the engine makes a stationary car move.
• A force can stop a moving object. The force of brakes can stop a moving car.
• A force can change the speed of a moving object. When a hockey player hits a moving ball, the speed of ball increases.
• A force can change the direction of a moving object. In the game of carrom, when we take a rebound, the direction of the striker changes because the edge of the carrom board exerts a force on the striker.
• A force can change the shape and size of an object. The shape of kneaded wet clay changes when a potter converts it into pots of different shapes and sizes, because the potter applies force on the kneaded wet clay.
Give two examples in each of the following cases:
Question 1.
Potential energy
• Water stored in a dam
• A compressed spring
Question 2.
Kinetic energy
• Water flowing
• Bullet fired from a gun
Question 3.
Chemical energy
• Chemical in cell
• Explosive mixture of a bomb
Question 4.
Zero work done
• A stone tied to a string and whirled in a circular path
• Motion of the earth and other planets moving around the sun
Question 5.
Negative work done
• A cyclist applies brakes to his bicycle, but the bicycle still covers some distance.
• When a body is made to slide on a rough surface, the work done by the frictional force.
Question 6.
Positive work done
(i) A boy moving from the ground floor to the first floor.
(ii) A fruit falling down from the tree.
= 0.5 hr x 30 days
= 15 hrs
To Find:
Energy consumed = ?
The units of energy consumed in the month of April by the iron is 18 units.
Question 7.
A 25 W electric bulb is used for 10 hours every day. How much electricity does it consume each day?
Power (P) = 25 W
= 25/1000 kW = 0.025 kW
Time (t) = 10 hrs
To Find:
Electric energy consumed = ?
Electric energy consumed = power x time
= 25/1000 x 10
= 0.25 kWh
The electric bulb consumes 0.25 kWh of electricity each day.
Question 8.
If a TV of rating 100W is operated for 6 hrs per day, find the amount of energy consumed in any leap year?
Time = 6 hrs/day × 366 days = 2196 hrs.
To Find:
Electric energy consumed
Electric energy consumed = power x time
= 0.1 x 2196
= 219.6 kWh
The amount of energy consumed is 219.6 kWh
Complete the paragraph.
Question 1.
………….. is the measure of energy transfer when a force (F) moves an object through a ………….. (d). So when ………….. is done, energy has been transferred from one energy store to another, and so: energy
transferred = ………….. done. Energy transferred and work done are both measured in ………….. (J)
Work is the measure of energy transfer when a force (F) moves an object through a distance (d). So when work is done, energy has been transferred from one energy store to another, and so: energy
transferred = work done. Energy transferred and work done are both measured in joules (J).
Question 2.
………….. energy and ………….. done are the same thing as much as ………….. energy and work done are the same thing. Potential energy is a state of the system, a way of ………….. energy as of virtue of its
configuration or motion, while ………….. done in most cases is a way of channeling this energy from one body to another.
Potential energy and work done are the same thing as much as kinetic energy and work done are the same thing. Potential energy is a state of the system, a way of storing energy as of virtue of its
configuration or motion, while work done in most cases is a way of channeling this energy from one body to another.
Question 3.
In physics, ………….. is the rate of doing work, i.e., the amount of energy transferred or converted per unit time. In the International System of Units, the unit of power is the ………….., equal to one ………….. per second.
Power is a ………….. quantity that requires both a change in the physical system and a specified time interval in which the change occurs. But more ………….. is needed when the work is done in a shorter
amount of time.
In physics, power is the rate of doing work, i.e., the amount of energy transferred or converted per unit time. In the International System of Units, the unit of power is the watt, equal to one joule per second.
Power is a scalar quantity that requires both a change in the physical system and a specified time interval in which the change occurs. But more power is needed when the work is done in a shorter
amount of time.
Activity-based questions
Answer in detail:
Question 1.
State the expression for work done when displacement and force makes an angle θ OR State the expression for work done when force is applied making an angle θ with the horizontal force.
Let ‘F’ be the applied force and F₁ be its component in the direction of displacement. Let ‘s’ be the displacement.
The amount of work done is given by W = F₁s ……………………………………… (1)
The force ‘F’ is applied in the direction of the string.
Let ‘θ’ be the angle that the string makes with the horizontal. We can determine the component F₁ of this force F, which acts in the horizontal direction, by means of trigonometry.
cos θ = base / hypotenuse
∴ cos θ = F₁ / F
∴ F₁ = F cos θ
Substituting the value of F₁ in equation (1), the work done is
W = (F cos θ)s
∴ W = Fs cos θ
Question 2.
When a body is dropped on the ground from some height its P.E is converted into K.E but when it strikes the ground and it stops, what happens to the K.E?
When a body is dropped on the ground, its K.E appears in the form of:
• Heat (collision between the body and the ground).
• Sound (collision of the body with the ground).
• Deformation – a change in the state/shape of the body and the ground.
• Kinetic energy is also utilized to do work i.e., the ball bounces to a certain height and moves to a certain distance vertically and horizontally till Kinetic energy becomes zero.
• The process in which the kinetic energy of a freely falling body is lost in an unproductive chain of energy is called the dissipation of energy.
Question 3.
Explain the statement “Potential Energy is relative”.
• The potential energy of an object is determined and calculated according to the height of the object with respect to a chosen reference level.
• So, a person staying on the 6th floor has more potential energy than one staying on the 3rd floor.
• But the person on the 6th floor will have less potential energy than a person on the 8th floor. Hence potential energy is relative.
Mentally calculating 10^x for non-integer x
This is the last in a sequence of three posts on mental calculation. The first looked at computing sine, cosine, and tangent in degrees. The second looked at computing logarithms, initially in base
10 but bootstrapping from there to other bases as well.
In the previous post, we showed that
log₁₀ x ≈ (x − 1)/(x + 1)
for x between 0.3 and 3.
If we take the inverse of the functions on both sides we get
10^x ≈ (1 + x)/(1 − x).
for x between −0.5 and 0.5.
If the fractional part of x is not between −0.5 and 0.5 use
10^(x + 0.5) = 10^x · 10^0.5
to reduce it to that case.
For example, suppose you want a rough estimate of 10^2.7. Then
10^2.7 = 10^2 · 10^0.5 · 10^0.2 ≈ 100 × 3 × 1.2/0.8 = 450
This calculation approximates the square root of 10 as 3. Obviously you’ll get a better answer if you use a better approximation for square root of 10. Incidentally, π is not a bad approximation to
√10; using 3.14 would be better than using 3.
Plots and error
The graph below shows how good our approximation is. You can see that the error increases with x.
But the function we’re approximating also grows with x, and the relative error is nearly symmetric about zero.
Relative error is more important than absolute error if you’re going to multiply the result by something else, as we do when the fractional part of x has absolute value greater than 0.5.
The virtue of the approximation above is that it is simple, and moderately accurate, with relative error less than 8%. If you want more accuracy, but still want something easy enough to calculate
manually, a couple tweaks make the approximation more accurate.
The approximation
10^x ≈ (1 + x)/(1 − x)
is a little high on the left of zero, and a little low on the right. You can do a little better by using
10^x ≈ 0.95 (1 + x)/(1 − x)
for −0.5 < x < −0.1, and
10^x ≈ 1.05 (1 + x)/(1 − x)
for 0.1 < x < 0.5.
Now the relative error stays below 0.03. The absolute error is a little high on the right end, but we’ve optimized for relative error.
In case you’re wondering where the factors of 0.95 and 1.05 came from, they’re approximately 3/√10 and √10/3 respectively.
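The tweaked approximation is easy to check numerically. Here is a quick sketch (the function names are mine) that reduces a general x to an integer part plus a fraction in [−0.5, 0.5] and measures the relative error on a grid:

```python
def approx_pow10_frac(x):
    """(1 + x)/(1 - x) for -0.5 <= x <= 0.5, with the 0.95/1.05 tweaks."""
    base = (1 + x) / (1 - x)
    if x < -0.1:
        return 0.95 * base
    if x > 0.1:
        return 1.05 * base
    return base

def mental_pow10(x):
    """Split x into nearest integer + fraction in [-0.5, 0.5], then approximate."""
    n = round(x)
    return 10 ** n * approx_pow10_frac(x - n)

# Maximum relative error on a grid over [-3, 3]; it comes out around 0.03.
worst = max(abs(mental_pow10(x) - 10 ** x) / 10 ** x
            for x in [i / 100 for i in range(-300, 301)])
print(worst)
```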
Update: The approximations from this post and several similar posts are all consolidated here.
2 thoughts on “Mentally calculating 10^x”
1. Inverting the functions on both sides of an equation (an approximation in this case) is a technique I’ve not seen before. Does it have a name?
2. 10^.5 is better approximated by 19/6 (.3% error) than by 22/7 or pi.
And 10^.3 is nicely approximated by 2 (.3% error) which gives easy approximations for 10^.6 and 10^.9 as well.
script function
Skip to main content
Description of MUSHclient world function: world.MtRand
Name MtRand
Type Method
Summary Returns pseudo-random number using the Mersenne Twister algorithm
Prototype double MtRand();
This returns a pseudo-random number in the range 0 to 1 (however excluding 1) as a "double" (floating-point number). Thus you can generate a number in any range by multiplying the
result by the range you want.
print (math.floor (MtRand () * 5)) -- returns 0 to 4
See below in the Lua section for an example of doing coin flips.
Note that the MT generator is "global" to all of MUSHclient. If you are trying to reproduce a sequence, then you cannot rely on a sequence staying stable from one script call to
another (since another script, or another world or plugin, might also use random numbers).
This (Lua) script here, for example, would generate a million random numbers, and put them into a table for later use:
nums = {} -- empty table
MtSrand (1234567) -- start with a known seed
for j = 1, 1000000 do
table.insert (nums, MtRand ())
end
This took about a second to execute on my PC.
Over a run of 10,000,000 coins flips it seemed pretty accurate, giving 5,000,625 heads and 4,999,375 tails (in other words, around 50.00625% heads).
Example (in Lua) of rolling a 6-sided die:
for i = 1, 100 do
Tell (math.floor (MtRand () * 6) + 1, " ")
In the above example, multiplying by 6 gives a number in the range 0 to 5, and then we add 1 to make it 1 to 6.
For more details about the Mersenne Twister see:
Note: Available in version 3.57 onwards.
VBscript example: Note MtRand
Jscript example: Note (MtRand ())
heads = 0
tails = 0
for j = 1, 1000000 do
i = math.floor (MtRand () * 2)
if i == 0 then
heads = heads + 1
else
tails = tails + 1
end -- if
end -- for
print ("heads = ", heads) --> 498893
print ("tails = ", tails) --> 501107
Returns: A pseudo-random number in the range [0 to 1). In other words, zero might be returned, however 1 will not be.
To put it another way, the returned number is:
>= 0 and < 1
Introduced in 3.57
See also ...
Function Description
MtSrand Seed the Mersenne Twister pseudo-random number generator
Search for script function
Enter a word or phrase in the box below to narrow the list down to those that match.
The function name, prototype, summary, and description are searched.
Leave blank to show all functions.
Return codes
Many functions return a "code" which indicates the success or otherwise of the function.
You can view a list of the return codes
Function prototypes
The "prototype" part of each function description lists exactly how the function is called (what arguments, if any, to pass to it).
You can view a list of the data types used in function prototypes
Information and images on this site are licensed under the Creative Commons Attribution 3.0 Australia License unless stated otherwise.
[Numpy-discussion] UFunc out argument not forcing high precision loop?
27 Sep 2019, 1:50 p.m.
Hi all,

Looking at the ufunc dispatching rules with an `out` argument, I was a bit surprised to realize this little gem is how things work:

```
arr = np.arange(10, dtype=np.uint16) + 2**15
print(arr)
# array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18], dtype=uint16)
out = np.zeros(10)
np.add(arr, arr, out=out)
print(repr(out))
# array([ 0., 2., 4., 6., 8., 10., 12., 14., 16., 18.])
```

This is strictly speaking correct/consistent. What the ufunc tries to ensure is that whatever the loop produces fits into `out`. However, I still find it unexpected that it does not pick the full precision loop. There is currently only one way to achieve that, and this is by using `dtype=out.dtype` (or similar incarnations) which specify the exact dtype [0].

Of course this is also because I would like to simplify things for a new dispatching system, but I would like to propose to disable the above behaviour. This would mean:

```
# make the call:
np.add(arr, arr, out=out)
# Equivalent to the current [1]:
np.add(arr, arr, out=out, dtype=(None, None, out.dtype))
# Getting the old behaviour requires (assuming inputs have same dtype):
np.add(arr, arr, dtype=arr.dtype)
```

and thus force the high precision loop. In very rare cases, this could lead to no loop being found. The main incompatibility is if someone actually makes use of the above (integer over/underflow) behaviour, but wants to store it in a higher precision array. I personally currently think we should change it, but am curious if we think that we may be able to get away with an accelerated process and not a year-long FutureWarning.

Cheers,

Sebastian

[0] You can also use `casting="no"`, but in all relevant cases that should find no loop, since we typically only have homogeneous loop definitions.
[1] Which is normally the same as the shorter spelling `dtype=out.dtype`, of course.
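This is a 2019 post, and the `out=`-based dispatch it describes may differ across NumPy versions; the two loop choices themselves can be pinned explicitly with `dtype=`, which shows the wrap-around versus full-precision results side by side:

```python
import numpy as np

arr = np.arange(10, dtype=np.uint16) + 2**15   # values 32768 .. 32777

# uint16 loop: 32768 + 32768 wraps modulo 2**16, giving 0, 2, 4, ...
low = np.add(arr, arr, dtype=np.uint16)

# float64 loop: inputs are cast up first, giving 65536.0, 65538.0, ...
high = np.add(arr, arr, dtype=np.float64)

print(low)
print(high)
```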
If #lim_(x rarr 2) (ax^2-b)/(x-2)=4# then find the value of a and b?
2 Answers
We have the following:
${\lim}_{x \to 2} \left(\frac{a x^2 - b}{x - 2}\right)$
Whatever $a$ and $b$ happen to be, if we plug $2$ into the expression the way it is now, we get a zero in the denominator, which makes the expression undefined.
Let's see if we can construct a difference of squares in the numerator, so we can cancel the $x - 2$ in the bottom.
The difference of squares expression $x^2 - 4$ can be factored as $(x + 2)(x - 2)$, which allows us to cancel the $x - 2$ in the numerator and denominator.
If we have $x^2 - 4$ in the numerator, this means $a = 1$ and $b = 4$. We now have
${\lim}_{x \to 2} \left(\frac{x^2 - 4}{x - 2}\right) = {\lim}_{x \to 2} \left(\frac{(x + 2) \cancel{(x - 2)}}{\cancel{(x - 2)}}\right)$
${\lim}_{x \to 2} \left(x + 2\right) = 4$
Now we can evaluate this limit at $x = 2$, and we indeed get $4$.
Notice, if we evaluated the original limit, we would get an undefined expression, so my next thought was to cancel the denominator.
Hope this helps!
We seek constants $a , b \in \mathbb{R}$ such that:
${\lim}_{x \rightarrow 2} \frac{a {x}^{2} - b}{x - 2} = 4$
Noticing that the denominator is zero when $x = 2$, and that the limit exists it must be that the limit is of an indeterminate form $\frac{0}{0}$, as otherwise the limit could not exist.
Thus we require that the numerator is $0$ when $x = 2$, that is
${\left[a {x}^{2} - b\right]}_{x = 2} = 0$
$\implies 4a - b = 0$
$\implies b = 4a$ ..... [A]
If we apply L'Hôpital's rule then we know that for an indeterminate form, then
${\lim}_{x \rightarrow a} \frac{f(x)}{g(x)} = {\lim}_{x \rightarrow a} \frac{f'(x)}{g'(x)}$
So differentiating the numerator and denominator independently, then we have:
${\lim}_{x \rightarrow 2} \frac{\frac{d}{\mathrm{dx}} \left(a {x}^{2} - b\right)}{\frac{d}{\mathrm{dx}} \left(x - 2\right)} = 4$
$\implies {\lim}_{x \rightarrow 2} \frac{2ax}{1} = 4$
$\implies 4a = 4$
$\implies a = 1$
And using [A], we have:
$b = 4 \times 1 = 4$
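Both derivations can be sanity-checked numerically with plain Python (names ours): with $a = 1$ and $b = 4$, the quotient approaches 4 from either side of $x = 2$.

```python
a, b = 1, 4

def f(x):
    return (a * x**2 - b) / (x - 2)

for h in (1e-2, 1e-4, 1e-6):
    print(f(2 + h), f(2 - h))  # both values tend to 4 as h shrinks
```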
Causal Dynamical Triangulation (CDT)
This is derived from Asexperia's thread on "Philochrony" which discusses the properties of Time,
whereas CDT proposes a new look at the "unfolding properties of space"
Causal Dynamical Triangulation
Causal dynamical triangulation (abbreviated as CDT), theorized by Renate Loll, Jan Ambjørn, and Jerzy Jurkiewicz, is an approach to quantum gravity that, like loop quantum gravity, is background independent.
This means that it does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves.
There is evidence [1] that at large scales CDT approximates the familiar 4-dimensional spacetime, but shows spacetime to be 2-dimensional near the Planck scale, and reveals a fractal structure on slices of constant time.
These interesting results agree with the findings of Lauscher and Reuter, who use an approach called Quantum Einstein Gravity, and with other recent theoretical work.
Near the Planck scale, the structure of spacetime itself is supposed to be constantly changing due to quantum fluctuations and topological fluctuations. CDT theory uses a triangulation process
which varies dynamically and follows deterministic rules, to map out how this can evolve into dimensional spaces similar to that of our universe.
The results of researchers suggest that this is a good way to model the early universe, and describe its evolution. Using a structure called a simplex, it divides spacetime into tiny triangular sections. A simplex is the multidimensional analogue of a triangle [2-simplex]; a 3-simplex is usually called a tetrahedron, while the 4-simplex, which is the basic building block in this theory, is also known as the pentachoron. Each simplex is geometrically flat, but simplices can be "glued" together in a variety of ways to create curved spacetimes, whereas previous attempts at triangulation of quantum spaces have produced jumbled universes with far too many dimensions, or minimal universes with too few.
CDT avoids this problem by allowing only those configurations in which the timelines of all joined edges of simplices agree.
See also Physics portal https://en.wikipedia.org/wiki/Causal_dynamical_triangulation
A supporting model is offered here.
But, like Sorkin, Loll and her colleagues found that adding causality changed everything. After all, says Loll, the dimension of time is not quite like the three dimensions of space. “We cannot
travel back and forth in time,” she says. So the team changed its simulations to ensure that effects could not come before their cause — and found that the space-time chunks started consistently
assembling themselves into smooth four-dimensional universes with properties similar to our own
Intriguingly, the simulations also hint that soon after the Big Bang, the Universe went through an infant phase with only two dimensions — one of space and one of time. This prediction has also
been made independently by others attempting to derive equations of quantum gravity, and even some who suggest that the appearance of dark energy is a sign that our Universe is now growing a
fourth spatial dimension. Others have shown that a two-dimensional phase in the early Universe would create patterns similar to those already seen in the cosmic microwave background.
Very interesting, but for me space is not matter even at the quantum level. I think that gravity could be a still unknown quantum force. We must not confuse space with stellar matter.
I don't believe that CDT describes matter, but rather fundamental generic relational quantum values that answer to logical-mathematical guiding equations.
I cannot think of a better candidate for an abstract quantum than a fractal. It naturally self-forms and organizes into a homogenous geometry that allows for the emergence of the most complex
patterns, based on fundamental geometric simplexes.
The four simplexes which can be fully represented in 3D space.
In geometry, a simplex (plural: simplexes or simplices) is a generalization of the notion of a triangle or tetrahedron to arbitrary dimensions. The simplex is so-named because it represents the simplest possible polytope in any given dimension. For example, a 0-simplex is a point, a 1-simplex is a line segment, a 2-simplex is a triangle, and a 3-simplex is a tetrahedron.
Fractals naturally form curved surfaces, which conforms to "curved space" while answering to very uncomplicated equations such as the "gravitational constant".
The gravitational constant (also known as the universal gravitational constant, the Newtonian constant of gravitation, or the Cavendish gravitational constant), denoted by the capital letter G, is an empirical physical constant involved in the calculation of gravitational effects in Sir Isaac Newton's law of universal gravitation and in Albert Einstein's theory of general relativity.
The gravitational constant G is a key quantity in Newton's law of universal gravitation.
But also, IMO it answers to your model of philochrony as well.
It seems that all of this describes an emergent evolving object that can be measured by its dimensional expressions.
In the end it should be reducible to its simplest forms.
I firmly believe that there is no such thing as "irreducible complexity"
1. Properties and Changes of Matter- Comp. Sci. 3 Mrs. Bloch 8-1 Measuring Matter
2. Vocabulary • Weight- a measure of the force of gravity on an object • Mass- the amount of matter in an object • International System of Units- a system of units used by scientists to measure the
properties of matter • Volume- The amount of space that matter occupies • Density- a measure of the mass of a material in a given volume
3. My Planet Diary pg. 290 • Field Trip - Travel to the eastern coast of Africa and you will find the country of Djibouti. There, you can visit one of the saltiest bodies of water in the world. Lake
Assal is ten times saltier than the ocean. Its crystal white beaches are made up of salt. While on your visit to Lake Assal, be sure to take a dip in the clear blue waters. Take a book or
magazine with you to read. Wait, what? Take a book into a lake? It might seem strange, but bodies of water with high salt contents, like Lake Assal or the Dead Sea in the Middle East, allow you
to float so well that it's nearly impossible to sink below the surface of the water.
• Communicate - What water activities might be easier to do in Lake Assal’s salty water? What activities could be more difficult?
4. What Units Are Used to Express Mass and Volume? Pg. 291 • Weight is a measure of the force of gravity on an object. Weight varies with location in the solar system. A more massive object will
exert a greater gravitational force, so the weight of an object on that more massive planet or moon will be greater. Weight is measured with a scale. • Mass is the amount of matter in an object.
It does not change with location. Mass is constant. For this reason, scientists prefer to describe matter in terms of mass rather than weight.
5. Measuring Weight – Figure 1 pg. 291 • Estimate – Use the weight of the first scale to estimate the weight of the fish on the other scales. Draw in the pointers. • Describe – How would their weight change on a small planet like Mercury? Or a large planet like Neptune? _______________________
6. The International System of Units or SI pg. 292 • To measure the properties of matter, scientists use the International System of Units, or SI. The SI unit of mass is the kilogram (kg). If a
smaller unit of mass is needed, the gram (g) is used. There are 1,000 grams in a kilogram or 0.001 kilogram in a gram.
7. Measuring Mass – Figure 2 pg. 292 • The SI system uses grams and kilograms to measure mass. • Calculate – In the table, convert the mass of each object from grams to kilograms. • 2. Challenge – Suppose you are taking a flight to Europe. You are only allowed a 23-kg suitcase. How much is that in pounds? (Hint: 1 kg = 2.2 lb) • 50.6 lb • 46.2 lb • 10.5 lb
8. Volume pg. 293 • Another measurable property of matter is volume, or the amount of space matter occupies. The SI unit of volume is the cubic meter (m3). Other common SI units of volume are the cubic centimeter (cm3), the liter (L), and the milliliter (mL). • There are 1,000 milliliters in a liter, or 0.001 liter in a milliliter. One milliliter is the same volume as 1 cm3. • The volume of a rectangular solid is calculated according to the following formula: Volume = Length x Width x Height
9. Calculating Volume – Figure 3 pg. 293 • What is the volume of the suitcase? Suppose you want to know the volume of a rectangular object, like one of the suitcases shown in Figure 3. First, measure the length, width, and height (or thickness) of the suitcase. Then, multiply the measurements together: Volume = Length x Width x Height. When you multiply the three measurements, you must also multiply the units: cm3 = cm x cm x cm. Find the volume of the suitcase. _______________________
10. Measuring Irregular Objects pg. 293 • One way to measure the volume of an irregular object is to submerge it in liquid in a graduated cylinder. The water level will rise by an amount that is equal to the volume of the object in milliliters.
11. Assess Your Understanding pg. 293 • Describe- Why is mass more useful than weight for measuring matter? _______________________________________________ •
_______________________________________________ • I get it! Now I know that the SI unit for mass is ______________ • the SI unit for volume is ____________
12. Calculating Density pg. 294 • Density is a measure of the mass of a material in a given volume. Density is expressed as the number of grams in one cubic centimeter, or g/cm3. Because one
milliliter is the same volume as one cm3, density can also be expressed as g/mL. You can determine the density of a sample of matter by dividing its mass by its volume.
13. How Is Density Determined? Pg. 294 • The density of water is 1 g/mL, or 1 g/cm³. Objects with greater densities will sink. Objects with lesser densities will float. Density is a physical property
of a substance. It can be used to identify an unknown substance. • The formula to find density is: Density = Mass / Volume. • 1. Apply Concepts – Liquids can form layers based on density. Which colored layer of liquid represents Water: 1.0 g/mL, Honey: 1.36 g/mL, Dish Soap: 1.03 g/mL, Corn Syrup: 1.33 g/mL, Vegetable Oil: 0.91 g/mL? Find and circle the median density. • 2. Calculate – What is the density of a liquid with a mass of 17.4 g and a volume of 20 mL? Where would this liquid be in the column? • 3. Explain why the liquid with the median density floats where it does.
14. Using Density- Virtual Lab- Figure 4 pg. 295 • Density can be used to identify substances. • Estimate- Hypothesize which rock sample is gold. Then calculate the density of each sample. Circle the
rock that is real gold. • My hypothesis is that the gold rock is • A • B • C • A : Mass = 108 g Volume = 12 cm3 Density = _______ • B: Mass = 126 g Volume = 15 cm3 Density = _______ • C: Mass =
386 g Volume = 20 cm3 Density = _______
15. Assess Your Understanding pg. 295 • 2a. Identify – Maple syrup will (float / sink) in water because its density is greater than 1 g/cm3. • b. Calculate - What is the mass of a sample of a
substance with a volume of 120 mL and a density of 0.75 g/mL? • ___________________________________________________ • c. Challenge – Liquid water and ice are the same substance, H2O. How would
you explain why ice floats in water? • ____________________________________________________________________________________________________ • I get it! Now I know density is calculated by
_________________ • ____________________________________________________________________________________________________
Piotr Kowalczyk - Personal Homepage, Wroclaw University of Science and Technology, Department of Mathematics
Wrocław University of Science and Technology, Faculty of Mathematics
Email: piotr.s.kowalczyk@pwr.edu.pl
Address: Building C-19, room A.3.19,
ul. Hoene-Wrońskiego 13C,
50-376 Wrocław
Office hours: Monday 14:30-16:30
Friday 10:00-12:00
Dynamics of piecewise-smooth and hybrid systems
1) Dynamical systems theory with an emphasis on the analysis and computations of bifurcations in non-smooth dynamical systems.
2) Engineering applications of dynamical systems. For instance, understanding the phenomena brought about by the presence of discontinuous nonlinearities in mechanical systems with dry-friction,
preload, impacts, hysteresis, and saturation. Other applications of interest are power electronics and hybrid systems.
3) Robustness of discontinuity induced bifurcations and singularly perturbed piecewise-smooth systems. It is important to understand how small imperfections that are usually ignored by the modeler influence system dynamics. For instance, in the context of modelling DC/DC power converters, there are small capacitances or inductances that are ignored. The question arises whether and how these produce significant effects on the system dynamics.
4) Dynamics of multiple scale systems with switchings and resets.
• Numerical continuation software
• Colleagues
Biped dynamics, human balance and human locomotion
It has been reported that the ability to maintain an upright position decreases with aging, and falls causing serious injuries and even death are quite frequent. With an aging society it is therefore important to understand and isolate the mechanisms that are critical for maintaining balance. Mathematical models of upright standing can be studied to address this question. Some of these models involve the presence of discontinuous nonlinearities and time delays.
Some of the reduced models of human running are based on the essential fact that human running consists of two phases, which are repeated: the phase of contact with the ground (support phase) and the phase of no contact with the ground (flight phase). This situation naturally leads to a switched model system. One of the most elementary of such models is the so-called SLIP model (Spring Loaded Inverted Pendulum model). In spite of its simplicity, the presence of switching and asymmetry captures some essential dynamics of running, and the model has a rich bifurcation structure.
Dynamics of piecewise-smooth systems with time delay
Dynamics of systems with relay feedback, hysteresis and time delay.
Courses (2023/2024 - summer semester)
Diploma Seminar (materials on ePortal)
Mathematical Analysis 2A (materials on ePortal)
Polish scientist worth knowing that you may not know:
M. di Bernardo, C. Budd, A. R. Champneys, P. Kowalczyk, Piecewise-smooth Dynamical Systems: Theory and Applications (2008) (see Springer).
Accepted/Submitted for publication/In revision/In preparation
[29] M. Desroches, P. Kowalczyk, and S. Rodrigues, Discontinuity induced dynamics in Conductance-Based Adaptive Exponential Integrate-and-Fire Model, submitted for publication to the Bulletin of Mathematical Biology, February 2024
[28] Zofia Wróblewska, P. Kowalczyk, and Łukasz Płociniczak, Nonexistence of asymmetric solutions of human running gaits in a switched inverted pendulum model, accepted for publication to Mathematica
Applicanda, February 2024
[27] Zofia Wróblewska, P. Kowalczyk, and Krzysztof Przednowek, Leg stiffness and energy minimization in human running gaits, Sports Engineering, accepted for publication, April 2024
[26] Zofia Wróblewska, P. Kowalczyk, and Łukasz Płociniczak, Stability of fixed points in an approximate solution of the spring-mass running model, accepted for publication in the IMA Journal of
Applied Mathematics, March 2023
Journal publications
[25] P. Kowalczyk, Łukasz Płociniczak and Zofia Wróblewska, Energy variations and periodic solutions in a switched inverted pendulum model of human running gaits, Physica D, 2022
[24] M. Desroches, P. Kowalczyk, and S. Rodrigues, Spike-adding and reset-induced canard cycles in adaptive integrate and fire models, Nonlinear Dynamics, 2021
[23] P. Kowalczyk, The dynamics and event-collision bifurcations in switched control systems with delayed switching, Physica D, Vol. 406, 2020
[22] P. Kowalczyk, A novel route to a Hopf-bifurcation scenario in switched systems with dead zone, Physica D, Vol. 348, pp. 60-66, 2017
[21] S. Nema, P. Kowalczyk, I. Loram, Wavelet-frequency analysis for the detection of discontinuities in switched system models of human balance, Human Movement Science, 2016
[20] P. Glendinning, P. Kowalczyk, and A. B. Nordmark, Multiple attractors in grazing-sliding bifurcations in Filippov type flows, the IMA Journal of Applied Mathematics, 2016
[19] S. Nema, P. Kowalczyk, and I. Loram, Complexity and dynamics of switched human balance during quiet standing, Biological Cybernetics, 2015
[18] S. Nema, P. Kowalczyk, Detecting abrupt changes in a noisy Van der Pol type oscillator, Differential Equations and Dynamical Systems, 2015
[17] P. Kowalczyk, S. Nema, P. Glendinning, I. Loram and M. Brown, ARMA analysis of linear and discontinuous models of human balance during quiet standing, Chaos: An Interdisciplinary Journal of
Nonlinear Science, June 2014
[16] P. Glendinning, P. Kowalczyk, and A. B. Nordmark, Attractors near grazing-sliding bifurcations, Nonlinearity, Vol 25(6), pp. 1867- 1885, 2012
[15] P. Kowalczyk, P. Glendinning, Martin Brown, Gustavo Medrano-Cerda, Houman Dallali, and Jonathan Shapiro, Modelling human balance using switched systems with linear feedback control, in print, Interdisciplinary Journal of the Royal Society Interface, available online 21 June 2011
[14] P. Kowalczyk, P. Glendinning Boundary-equilibrium bifurcations in piecewise-smooth slow-fast systems, Chaos: An interdisciplinary Journal of Nonlinear Science, June 2011
[13] P. Glendinning, P. Kowalczyk, Micro-chaotic dynamics due to digital sampling in hybrid systems of Filippov type, Physica D, 239(1-2), pp.58-71, (2010)
[12] J. Sieber, P. Kowalczyk, Small-scale instabilities in dynamical systems with sliding, Physica D, 239(1-2), pp. 44-57, 2010
[11] J. Sieber, P. Kowalczyk, S.J. Hogan, M. di Bernardo: Dynamics of symmetric dynamical systems with delayed switching. Special Issue of Journal of Vibration and Control on Dynamics and Control of
Systems with Time-Delay, Vol. 16(7-8), 2010
[10] P. Glendinning, P. Kowalczyk, Dynamics of a hybrid thermostat model with discrete sampling time control, Dynamical Systems, 24(3), pp. 343-360, 2009
[9] M. di Bernardo, C. Budd, A. R. Champneys, P. Kowalczyk, A. B. Nordmark, G. Olivar and P.T. Piiroinen, Bifurcations in Nonsmooth Dynamical Systems, SIAM Review, 50(4), pp.629-701, 2008
[8] A. Colombo, M. di Bernardo, J. Hogan and P. Kowalczyk, Complex dynamics in a relay feedback system with hysteresis and delay, Journal of Nonlinear Science, 17(2), pp. 85-108, 2007
[7] P. Kowalczyk, P.T. Piiroinen, Two-parameter sliding bifurcations of periodic solutions in a dry-friction oscillator, Physica D, 237(8), pp.1053-1073, 2007
[6] A. B. Nordmark and P. Kowalczyk, A codimension-two scenario of sliding solutions in grazing-sliding bifurcations, Nonlinearity 19(1), pp. 1-26, 2006
[5] P. Kowalczyk, M. di Bernardo, A. R. Champneys, S. J. Hogan, M. Homer, Yu. A. Kuznetsov, A. B. Nordmark and P. T. Piiroinen, Two-parameter nonsmooth bifurcations of limit cycles: classification
and open problems, International Journal of bifurcation and chaos, Vol. 16 No. 3, pp.601-629, 2006
[4] P. Kowalczyk and M. di Bernardo, Two-parameter degenerate sliding bifurcations in Filippov systems, Physica D, Vol. 204 pp. 204 - 229, 2005
[3] P. Kowalczyk, Robust chaos and border-collision bifurcations in non-invertible piecewise-linear maps, Nonlinearity, Vol. 18 pp. 485-504, 2005
[2] M. di Bernardo, P. Kowalczyk, and A. B. Nordmark, Sliding bifurcations: A novel mechanism for the sudden onset of chaos in dry-friction oscillators, International Journal of Bifurcation and
Chaos, Vol. 13, No. 10 pp. 2935-2948, 2003
[1] M. di Bernardo, P. Kowalczyk, and A. B. Nordmark, Bifurcations of dynamical systems with sliding: derivation of normal-form mappings, Physica D, volume 170, pp. 175-205, 2002
Conference proceedings
P. Kowalczyk, P. Glendinning, Micro-chaos in Relay Feedback Systems with Bang-Bang Control and Digital Sampling, To appear in the proceedings of 18th Word Congress of the International Federation of
Automatic Control, Milan 2011
P. Kowalczyk, J. Sieber, Robustness of grazing-sliding bifurcations in Filippov type systems, In the proceedings of Second IFAC meeting related to analysis and control of chaotic systems, London 2009
P. Kowalczyk, A. B. Nordmark, Bifurcations in non-smooth models of mechanical systems, In the proceedings of the EUROMECH 500 conference on Non-smooth Problems in Vehicle Systems Dynamics - Analysis and Solutions, Lyngby, Denmark 2008
P. Kowalczyk, Grazing bifurcations: A mechanism for the sudden onset of robust chaos, In the proceedings of 10th Experimental Chaos Conference, Catania, Italy 2008
Samia K. Genena, Daniel J. Pagano and P. Kowalczyk: HOSM Control of Stick-Slip Oscillations in Oil Well Drillstrings,in Proceeding to European Control Conference Kos 2007
J. Sieber, P. Kowalczyk: Symmetric event collisions in dynamical systems with delayed switches . To appear in a special issue of "Discrete and Continuous Dynamical Systems Series B" (Proceedings of
the sixth AIMS Conference on Dynamical Systems, Differential Equations and Applications, Poitiers 2007
M. di Bernardo, A.R. Champneys, P. Kowalczyk, Corner-Collision and Grazing-Sliding: practical examples of border-collision bifurcations, Proc. IUTAM Symposium on Chaotic Dynamics and Control of
Systems and Processes in Mechanics, Kluwer Academic, 2003
M. di Bernardo and P. Kowalczyk, On the existence of stable asymmetric limit cycles and chaos in unforced symmetric relay feedback system, in Proceeding to European Control Conference Porto 2001
P. Kowalczyk and M. di Bernardo, On a novel class of bifurcations in hybrid dynamical systems – the case of relay feedback system, in Proceedings of 4th International Workshop on Hybrid Systems
Computation and Control, published by Springer-Verlag, pp. 361-374, 2001
A. Sowa and P. Kowalczyk, Test chamber characteristics – an important factor determining required RF power of amplifier in radiated immunity tests, Accepted for the Electromagnetic Compatibility
Conference Wroclaw 2000, Poland
“Nothing but disaster follows from applause.”
―Thomas Bernhard
Naked Man (by Randy Newman)
Top 56 R Interview Questions
Q.No.1. Name the different data structures in R? Briefly explain.
Answer: – Organizing data in a computer so that it can be used effectively and efficiently is called a data structure. It is a particular technique for arranging data that reduces time complexity and consumes less space, and it can hold multiple values.
R provides many tools for holding multiple values. It supports one-dimensional and multidimensional data, and works easily with both identical (homogeneous) and mixed (heterogeneous) data types.
There are many data structures from which the most important are given below:
1. List- A generic object which has an ordered collection of the objects is called a List. It is heterogeneous. A list can have many data structures in it like a list of vectors, a list of
functions, etc.
2. Data frame – It is used to store the tabular data. It is of two dimensions and it can contain heterogeneous data. We can work with multiple types of data on it.
3. Vectors – An ordered collection of elements of the same basic data type. A vector is a one-dimensional, homogeneous data structure.
4. Array – It is a multidimensional data structure where we can store homogeneous data.
5. Matrices – It is a two-dimensional data structure that has rows and columns in the rectangular set. It is a homogeneous data structure where we can perform multiple operations.
6. Factors – It is used to categorize the data like true/false, male/female, in/out, etc.
Q.No.2. Advantages of using the apply family of functions in R?
Answer: – The apply family of functions is built in; it comes with R's base packages, so it is already installed.
It allows us to manipulate data frames, vectors, arrays, etc. It is often more concise than explicit loops and can be faster at the execution level, reducing the need for explicitly creating a loop in R.
The list of the apply family are as follows: –
1. apply() function: – It helps to apply a function on rows or columns of a data frame.
Syntax: – apply()
2. lapply() function: – It takes a list as an argument and applies a function to each element of the list.
Syntax: – lapply()
3. sapply() function: – It works the same as lapply(), taking a list and applying a function to each element. The only difference is the output: where lapply() always returns a list, sapply() simplifies the result to a vector or matrix when possible.
Syntax: – sapply()
4. tapply() function: – It can be applied to vectors and factors. The data which contain different subgroup and we have to apply a specific function on each subgroup that time we can use it.
Syntax: – tapply()
5. mapply() function: – It is a multivariate version of the sapply() function where we apply the same function to multiple arguments.
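The functions above can be sketched with small examples (outputs shown as comments):

```r
# apply(): mean of each column of a matrix
m <- matrix(1:6, nrow = 2)              # columns: (1,2), (3,4), (5,6)
apply(m, 2, mean)                        # 1.5 3.5 5.5

# lapply() returns a list; sapply() simplifies the result
lapply(list(a = 1:3, b = 4:6), sum)      # list with a = 6, b = 15
sapply(list(a = 1:3, b = 4:6), sum)      # named vector: a = 6, b = 15

# tapply(): apply a function within groups defined by a factor
tapply(c(2, 4, 6, 8), factor(c("x", "y", "x", "y")), mean)   # x = 4, y = 6
```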
Q.No. 3. What do you mean by shinyR?
Answers: – Shiny makes it easy to build interactive web applications with R. You can host a standalone application on a webpage, embed it in an R Markdown document, or build a dashboard. You can also extend a Shiny application with themes (CSS), widgets (HTML), and actions (JavaScript). It unites the computational power of R with the interactivity of the modern web. It is also very easy to write a Shiny program: it comes with a variety of built-in input widgets with minimal syntax, and we can render plots, tables, and anything else we can produce in R.
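A minimal Shiny app might look like the following sketch (assuming the shiny package is installed; the input and output names here are just an illustration):

```r
library(shiny)

# UI: a slider input and a plot output
ui <- fluidPage(
  sliderInput("n", "Number of points:", min = 10, max = 200, value = 50),
  plotOutput("scatter")
)

# Server: re-draws the plot whenever the slider changes
server <- function(input, output) {
  output$scatter <- renderPlot({
    plot(rnorm(input$n), rnorm(input$n))
  })
}

shinyApp(ui, server)   # launches the app in a browser
```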
Q.No. 4. What do you mean by Random Forest? How would you build a Random Forest in R?
Answer: – Random Forest is used for both classification and regression. It is a supervised learning algorithm that creates decision trees on bootstrapped data samples, obtains a prediction from each tree, and selects the final answer by voting. Decision trees are popular for machine learning tasks; because the trees in a random forest partially overlap, the ensemble reads the data redundantly with many trees and looks for the trends, patterns, and structures that support a given outcome.
To build a Random Forest in R we have to follow the given steps: –
1. Create a Bootstrapped Data Set
2. Create a Decision Tree
3. Predict the outcome of the data point
4. Evaluate the Model
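In practice these steps are usually handled by an existing implementation. A minimal sketch using the randomForest package (an assumption; other packages such as ranger also exist) on the built-in iris data:

```r
library(randomForest)
set.seed(42)

# Classification: predict Species from the other iris columns
model <- randomForest(Species ~ ., data = iris, ntree = 500)
print(model)                       # OOB error estimate and confusion matrix

# Predict the class of new observations
predict(model, newdata = iris[1:3, ])
```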
Q.No. 5. What are the functions available in the “dplyr” package?
Answer: – The functions which are available in the “dplyr” package are as follows: –
1. select() function: – Allows us to rapidly zoom in on a useful subset of columns, using operations that usually only work on numeric variable positions.
2. group_by() function: – Allows us to group rows by one or more (possibly modified) columns.
3. mutate() function : -It is useful to add new columns that are functions of previous existing columns.
4. filter() function : -Allows us to select a subset of rows in a data frame.
5. summarize() function :- Allows us to collapses a data frame to a single row.
6. relocate() function : – Allows us to change the column order.
7. slice() function : – Allows us to select, remove and duplicate rows.
8. desc() function : – Allows us to arrange the column in descending order.
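A short pipeline combining several of these functions on the built-in mtcars data (a sketch, not from the original answer):

```r
library(dplyr)

mtcars %>%
  filter(cyl %in% c(4, 6)) %>%          # keep a subset of rows
  select(mpg, cyl, hp) %>%              # keep a subset of columns
  mutate(hp_per_cyl = hp / cyl) %>%     # add a derived column
  group_by(cyl) %>%                     # group by number of cylinders
  summarize(mean_mpg = mean(mpg)) %>%   # collapse each group to one row
  arrange(desc(mean_mpg))               # sort in descending order
```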
Q.No.6. How do you write a custom function in R? Provide an example.
Answer: – There is hundreds of built-in function. Hadley Wickham defined function as “You can do anything with functions that you can do with vectors: You can assign them to variables, store them in
lists, pass them as arguments to other functions, create them inside functions and even return them as the result of the function.”
Generally, we can customize functions in our own way. We can do arithmetic, logical, and graphical work with them. Here is a simple example of an R function:

fahrenheit_to_celsius <- function(temp_F) {
  temp_C <- (temp_F - 32) * 5 / 9
  return(temp_C)
}
Q.No. 7. What do you understand by the confusion matrix?
Answer: – It is a table used to describe the performance of a classification model on a set of test data for which the true values are known. It is simple to understand, though the related terminology can be confusing. A confusion matrix allows us to compute measures such as recall, accuracy, and precision. It visualizes the accuracy of a classifier by comparing the actual and predicted classes. The binary confusion matrix is composed of four cells:
True Positive (TP): actual positive values correctly predicted as positive.
True Negative (TN): actual negative values correctly predicted as negative.
False Positive (FP): actual negative values incorrectly predicted as positive.
False Negative (FN): actual positive values incorrectly predicted as negative.
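A confusion matrix and the usual metrics can be computed in base R with table() (a sketch with made-up labels):

```r
actual    <- factor(c(1, 0, 1, 1, 0, 1, 0, 0), levels = c(0, 1))
predicted <- factor(c(1, 0, 0, 1, 0, 1, 1, 0), levels = c(0, 1))

cm <- table(Predicted = predicted, Actual = actual)
cm

accuracy  <- sum(diag(cm)) / sum(cm)
precision <- cm["1", "1"] / sum(cm["1", ])   # TP / (TP + FP)
recall    <- cm["1", "1"] / sum(cm[, "1"])   # TP / (TP + FN)
```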
Q. No. 8. List packages in R that are used for data imputation.
Answer: – The list of packages in R that are used for data imputation is as follows: –
1. MICE: It stands for Multivariate Imputation by Chained Equations. It is one of the fastest packages for imputing values. The methods used by this package are as follows:
1. PMM (Predictive Mean Matching): for numeric variables.
2. logreg (Logistic Regression): for binary variables.
3. polyreg (Bayesian polytomous regression): for factor variables
4. Proportional odds model
2. Amelia: It performs multiple imputations which generate imputed data sets to deal with the missing values. It helps to reduce bias and increase efficiency.
3. Hmisc: It is a multipurpose package useful for analyzing data, imputing missing values, building advanced tables, linear regression, logistic regression and model fitting, high-level graphics, etc. It has a wide range of functions such as impute() and aregImpute().
4. missForest: It implements a random forest algorithm. It is a non-parametric imputation method that applies to various variable types. It builds a random forest model for each variable and then uses that model to predict the missing values in the variable from the values it has observed.
5. MI: It stands for multiple imputation. It provides several features for dealing with missing values and uses the predictive mean matching method. It uses a Bayesian version of the regression model to handle the issue of separation, and automatically detects irregularities in the data such as high collinearity among variables.
Q.No.9. How do you build a linear regression model in R?
Answer: To build a linear regression model in R we have to follow the following steps:-
1. Experiment with gathering a sample of observed values.
2. Create a relationship model using the lm() function in R.
3. Find the coefficients from the model created.
4. Create the mathematical equation.
5. Find a summary of the relationship model to learn the average error in prediction (the residuals).
6. Predict on new data using the predict() function in R.
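The steps above can be sketched on the built-in mtcars data:

```r
# Step 2: create the relationship model with lm()
fit <- lm(mpg ~ wt, data = mtcars)

# Steps 3-5: coefficients, equation, and residual summary
coef(fit)          # intercept and slope
summary(fit)       # residuals, R-squared, p-values

# Step 6: predict mpg for new weights
predict(fit, newdata = data.frame(wt = c(2.5, 3.0)))
```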
Q.No. 10. How to install packages in R?
Answer: To install packages in R we have to perform the following steps:
Part 1
1. Type install.packages("gplots") and then press the Enter or Return key.
2. If you have already loaded a package from a server in this R session, R will install the package directly. If not, R will prompt you to choose a CRAN mirror; choose one close to you unless you want to watch a loading bar slowly inching its way to completion.
Part 2
1. Type library(gplots) and then press the Enter key.
2. R will produce a lot of output because it also needs to load other packages that gplots requires.
Part 3
1. You only need to do Part 1 once on your computer.
2. You need to do Part 2 each time you start or restart R.
Q.No. 11. What do you understand by Rmarkdown?
Answer- It provides a unified authoring framework for data science, combining our code, its results, and prose commentary. R Markdown documents support dozens of output formats, such as PDFs, Word files, slideshows, and many more, which we can reproduce many times.
Simply put, R Markdown is a text-based file format that allows us to include descriptive text, code blocks, and code output. We can run the code in R using a package called knitr, and export the formatted .Rmd file to a nicely rendered, sharable format like PDF or HTML. When we knit, the code is run, so our outputs, including plots, graphs, and other figures, appear in the rendered document.
Q.No. 12. How can you load the .csv file in R?
Answer- You can load the .csv file in R by following the following steps:
1. The first thing in this process is to getting and setting up the working directory. We need to choose the correct working path of the CSV (comma separated values) file.
2. We can check the default working directory using the getwd() function and change it using the setwd() function.
3. After the setting of the working path as prescribed earlier, we need to import the data set or a CSV file.
4. After getting the data frame as mentioned above, we can analyze the data. we can also extract the particular information from the data frame.
By this process, you can read CSV files in R using the read.csv() function.
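The steps above can be sketched as follows (the directory and file name here are hypothetical):

```r
setwd("C:/data")                        # hypothetical working directory
getwd()                                 # confirm the current directory

df <- read.csv("sales.csv",             # hypothetical file name
               header = TRUE,
               stringsAsFactors = FALSE)
head(df)                                # first rows of the data frame
str(df)                                 # structure of each column
```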
Q.No. 13. How can you do a cross-product of two tables in R?
Answer- We can do a cross-product of two tables in R by using the CJ() function from the data.table package. It produces a data.table out of two vectors by taking their Cartesian (cross) product.
Q.No. 14. How do you extract a word from a string?
Answer- We extract a word from a string by using the word() function (from the stringr package) in R. It extracts the word at the position specified as an argument. Its arguments include string, start, end, and sep.
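A few illustrative calls (assuming the stringr package):

```r
library(stringr)

s <- "R is a language for statistical computing"
word(s, 1)        # "R"
word(s, -1)       # "computing" (negative indices count from the end)
word(s, 2, 4)     # "is a language"
```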
Q.NO. 15. What do you mean by correlation in R?
Answer- To evaluate the association between two or more variables we use correlation. Correlation coefficients indicate the strength of the linear relationship between two variables, say x and y. A correlation coefficient greater than zero indicates a positive relationship, while a value less than zero indicates a negative relationship. A negative correlation, also called an inverse correlation, is a key concept in the creation of diversified portfolios that can better withstand portfolio volatility.
The most common correlation coefficient is the Pearson product-moment correlation, which measures the linear relationship between two variables. The Pearson correlation is also called parametric correlation.
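In R, the cor() function computes these coefficients; a small sketch on the built-in mtcars data:

```r
# Pearson correlation (the default); weight vs. fuel economy is negative
cor(mtcars$wt, mtcars$mpg)

# Non-parametric alternative
cor(mtcars$wt, mtcars$mpg, method = "spearman")

# Correlation matrix for several variables at once
cor(mtcars[, c("mpg", "wt", "hp")])
```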
Q.No. 16. How do you find out the number of missing values in a particular dataset?
Answer- To find the number of missing values in a dataset we use the is.na() function, which returns a logical vector with TRUE in each element location containing a missing value (represented by NA). is.na() works on vectors, data frames, matrices, lists, etc.
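A small sketch:

```r
x <- c(3, NA, 7, NA, 1)
is.na(x)            # FALSE TRUE FALSE TRUE FALSE
sum(is.na(x))       # 2 missing values in total

# Missing values per column of a data frame
df <- data.frame(a = c(1, NA, 3), b = c(NA, NA, 6))
colSums(is.na(df))  # a = 1, b = 2
```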
Q.No. 17. How do you rename a column in a data frame?
Answer- To rename a column in the data frame we can use two functions either names() or colnames(). For this, we have to perform 2 steps for it which as follows:
1. Get the column names using either the names() or colnames() function.
2. Assign the new name to the entry at the desired position.
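Both approaches can be sketched as follows (the column names here are just placeholders):

```r
df <- data.frame(old_name = 1:3, other = 4:6)

# Rename by matching the old name
names(df)[names(df) == "old_name"] <- "new_name"

# Or rename by position
colnames(df)[1] <- "new_name"
```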
Q.No. 18. How would you do left join and right join in R language?
Answer- Left join will take all of the values from the table as we specify as left and match them to the records from the table on the right. The syntax for the left join is as follows:
left_join(tableA, tableB, by=”Customer.ID”)
Right join is the opposite of a left join: the table specified second within the join statement is the one the new table takes all of its values from.
right_join(tableA, tableB, by="Customer.ID")
Q.No. 19. How do you make a box-plot using “plotly”?
Answer- We can make a box plot with plotly by following this sample syntax:

fig <- plot_ly(y = ~rnorm(50), type = "box")
fig <- fig %>% add_trace(y = ~rnorm(50, 1))
However, we can modify the algorithm used to compute the quartiles, choosing an exclusive or inclusive method.
Q.No. 20. What do you mean by evaluate_model() from “statisticalModeling”?
Answer- It is used to find the model outputs for specified inputs. It is similar to the general predict() function, except that it chooses sensible input values by default, which makes it simple to get a quick look at the model's outputs. It takes several arguments, such as model, data, on_training, nlevels, and at.
Q.No. 21. What do you understand by the “initialize()” function?
Answer- The initialize() function is used internally by some imputation algorithms: missing values are imputed with the mean for vectors of class "numeric", the median for vectors of class "integer", and the mode for vectors of class "factor". It initializes missing values with a rough estimate according to the vector's type.
Q.No. 22. How can you find the mean of one column w.r.t. another?
Answer- We can find the mean of one column with respect to another using the colMeans() function along with sapply(), which is helpful for finding the mean of multiple columns. We can also do this with dplyr: the summarise_if() function along with is.numeric() computes the mean of the numeric columns of a data frame.
Q.No. 23. What is the PCA model in R? Explain in detail.
Answer- PCA stands for Principal Component Analysis. It is widely used because correlations and covariances help extract the result. PCA is a very popular statistical method for reducing data with many dimensions by projecting it onto fewer dimensions using linear combinations of the variables, known as principal components. The projected components are uncorrelated with each other and are ordered so that the first few retain most of the variation present in the original variables. It is useful when independent variables are correlated with each other, and can be employed in exploratory data analysis or for building predictive models. It reveals important features of the data such as outliers and departures from a multi-normal distribution.
Q.No. 24. What do you mean by Random Walk Model?
Answer- The Random Walk Model is the integration of a mean-zero white noise series; it is a basic time series model defined as the cumulative sum of a mean-zero WN (white noise) series. When a series follows a random walk model, it is non-stationary. We can make it stationary by taking a first-order difference of the time series, which yields a zero-mean white noise series.
Q.No. 25. What is the White noise model?
Answer- A white noise model is one in which all variables have the same variance and each value has zero correlation with all other values in the series. It is a sequence of random numbers and cannot be predicted. If a model’s residuals are not white noise, it suggests improvements could still be made to the predictive model. A time series is white noise if its variables are independent and identically distributed with a mean of zero.
Q.No. 26. If you are given a vector of values, how would you convert it into a time series object?
Answer- A vector of values can be converted into a time series object by using the ts() function. The syntax is as follows:
ts(vector, start=, end=, frequency= )
Where start is the first and end is the last time of observation, and frequency is the number of observations per unit time, i.e. 12 for monthly, 6 for half-yearly, 4 for quarterly, and 1 for annual data.
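For example, twelve made-up monthly values starting in January 2020 can be turned into a time series object like this:

```r
# Hypothetical monthly observations
sales <- c(12, 15, 14, 18, 21, 19, 23, 25, 24, 22, 20, 26)

ts_sales <- ts(sales, start = c(2020, 1), frequency = 12)
print(ts_sales)
class(ts_sales)  # "ts"
```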
Q.No. 27. How do you facet data using the ggplot2 package?
Answer- Faceting with the ggplot2 package is one of the best graphical statistical analysis tools: the graph is partitioned into multiple panels by the levels of the group that we specify.
For splitting in a vertical direction we use syntax like bp + facet_grid(abc ~ .)
For splitting in a horizontal direction we use syntax like bp + facet_grid(. ~ abc)
The syntaxes above use a single variable; the following syntax is used with two variables.
Rows are abc and columns are xyz
bp + facet_grid(abc ~ xyz)
Rows are xyz and columns are abc
bp + facet_grid(xyz ~ abc)
With the faceting functions we can use multiple parameters: we can adjust the facet scales, supply labels, and also wrap the panels into a grid with facet_wrap().
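A minimal runnable sketch using the built-in mtcars data (bp is just an illustrative name for the base plot):

```r
library(ggplot2)

# Base plot on the built-in mtcars data
bp <- ggplot(mtcars, aes(x = wt, y = mpg)) + geom_point()

bp + facet_grid(cyl ~ .)     # vertical split: one row per level of cyl
bp + facet_grid(. ~ cyl)     # horizontal split: one column per level of cyl
bp + facet_grid(cyl ~ gear)  # two variables: rows by cyl, columns by gear
bp + facet_wrap(~ cyl)       # wrap panels into a grid
```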
Q.No. 28. Give examples of the functions in Stringr?
Answer – There are many stringr functions; the main examples are as follows:
1. str_count(): counts the number of matches of a pattern. Syntax: str_count(x, pattern)
2. str_locate(): gives the location (position) of the match. Syntax: str_locate(x, pattern)
3. str_extract(): extracts the text of the match. Syntax: str_extract(x, pattern)
4. str_match(): extracts parts of the match defined by parentheses. Syntax: str_match(x, pattern)
5. str_split(): splits a string into multiple pieces. Syntax: str_split(x, pattern)
Q. No. 29. What is while and for loop in R? Give examples?
Answer- A while loop is a loop where the statement keeps running as long as the specified condition is satisfied. The syntax for a while loop is as follows:
while (condition) { expr }
We must make sure the condition eventually becomes false, otherwise the loop will run indefinitely.
An example of a while loop is as follows:
# Create a variable with value 1
begin <- 1
# Create the loop
while (begin <= 5) {
  print(paste("This is loop number", begin))
  begin <- begin + 1
}
For loop: the loop used to iterate over a vector in R programming is called a for loop. The syntax for the for loop is as follows:
for (val in sequence) { expr }
An example of a for loop, counting the even values in a vector, is as follows:
x <- c(2, 5, 3, 9, 8, 11, 6)
count <- 0
for (val in x) {
  if (val %% 2 == 0)
    count <- count + 1
}
print(count)
Q.No. 30. Compare R and Python.
Answer- Python takes a more general-purpose approach to data science, while R is used for statistical analysis. The primary objective of Python is deployment and production, while the primary objective of R is data analysis and statistics. Python is used mostly by programmers and developers, while R is used by research and development scientists and professionals. Python is considered easier to learn, while R has a steeper learning curve. Python has many packages and libraries like pandas, SciPy, scikit-learn, and TensorFlow, while R has packages and libraries like caret, zoo, tidyverse, and ggplot2.
Q.No. 31. What is the difference between library() and require() functions in R Language?
Answer- If the requested package does not exist, the library() function throws an error by default, while the require() function gives a warning message and returns a logical value: FALSE if the requested package is not found, and TRUE if the package is loaded.
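That logical return value makes require() suitable for guarding optional code paths, for example:

```r
# require() returns TRUE/FALSE, so it can guard optional code
if (require("stats")) {
  message("stats is available")
} else {
  message("stats is not installed")
}

# library("someMissingPkg")  # would stop with an error instead
```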
Q.No. 32. What do you mean by t-test() in R?
Answer- It is used to determine whether the means of two groups are equal to each other. The assumption for the test is that both groups are sampled from normal distributions with equal variances. The null hypothesis is that the two means are equal, and the alternative is that they are not. Under the null hypothesis, we can calculate a t-statistic that follows a t-distribution with n1 + n2 – 2 degrees of freedom.
The t.test() function is available in R for performing t-tests. To use this function inside something like a simulation, we need to know how to extract the t-statistic from the output of t.test(). The R help page for the function has a detailed list of what the returned object contains.
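A short sketch with simulated data, showing how to pull the t-statistic out of the returned object:

```r
set.seed(1)
g1 <- rnorm(20, mean = 0)  # simulated group 1
g2 <- rnorm(20, mean = 1)  # simulated group 2

result <- t.test(g1, g2, var.equal = TRUE)
result$statistic  # the t-statistic
result$p.value    # the p-value
```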
Q.No. 33. How is with() and By() function used in R?
Answer- The with() function evaluates an R expression in an environment constructed from a data frame, taking the variables of our data into account. For example, we can compute the sum of two or more variables. It can make handling our data much easier, especially when many variables are involved in our expressions.
The by() function applies a function to each level of a factor or factors. It is an object-oriented wrapper for tapply() applied to data frames, and it returns an object of class "by" giving the result for each subset. This is always a list if simplify is FALSE, otherwise a list or array.
Q.No. 34. How are missing values in R represented?
Answer- The missing values are represented by the symbol NA. Impossible values are represented by the symbol NaN(Not-a-number). NA is used for numeric as well as string data also.
Q.No. 35. What is transpose in R?
Answer:- Converting the rows of a matrix into columns, and its columns into rows, is known as the transpose. In R we can do it in two ways: by using the t() function, or by iterating over each value using loops.
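Both approaches can be sketched as follows:

```r
m <- matrix(1:6, nrow = 2)  # a 2 x 3 matrix

# Transpose with t()
t(m)

# Transpose by iterating over each value with loops
tm <- matrix(0, nrow = ncol(m), ncol = nrow(m))
for (i in seq_len(nrow(m))) {
  for (j in seq_len(ncol(m))) {
    tm[j, i] <- m[i, j]
  }
}
all(tm == t(m))  # TRUE
```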
Q.No. 36. Advantages of R?
Answer- The advantages of R language are as follows: –
1. R is an open-source programming language. We can work with R without any need for a license or a fee.
2. R has a vast array of packages. These packages are applied to all areas of the industry.
3. It facilitates quality plotting and graphics. Libraries such as ggplot2 and plotly support aesthetic and visually appealing graphs that set R apart from other programming languages.
4. R is highly compatible and can be paired with many other programming languages also like c, c++, Java, and Python.
5. It can be integrated with technologies like Hadoop and various other database management systems.
6. It provides various facilities for carrying out machine learning operations like regression, classification, and artificial neural networks.
7. R is dominant among the other programming languages for developing statistical tools.
Q.No. 37. Disadvantages of R?
Answer- There are a lot of advantages of R but still some areas of technological system R has some disadvantages also which are as follows:
1. R base package does not have support for 3D graphics.
2. R utilizes more memory compared with Python, because in R objects are stored in physical memory.
3. R lacks basic security, which is an essential feature of most programming languages like Python. Because of this, there are several restrictions with R.
4. R packages, and the R language itself, are much slower than other languages like MATLAB and Python.
5. R is tougher to learn compared to Python.
6. Programmers without knowledge of packages may find it difficult to implement algorithms in R.
Q.No. 38. Which function do you use to add datasets in R?
Answer- A dataset package (for example, the textdata package) provides the infrastructure to make test datasets available within R. Such datasets are often too large to store within the R package itself, or their licenses prevent them from being included in OSS-licensed packages. If you want to add a new dataset to the text data package, follow these steps:
1. Create an R file named prefix_*.R in the R/ folder, where * is the name of the dataset.
2. Supported prefixes include
1. dataset_
2. lexicon_
3. Inside that file create 3 functions named
1. download_*()
2. process_*()
3. dataset_*()
4. Add the process_*() function to the named list process_function in the file process_functions.R
5. Add the download_*() function to the named list download_function in the file download_function.R
6. Modify the print_info list in the info.R file.
7. Add dataset_*.R to the @include tags in download_function.R
8. Add the dataset to the table in README.Rmd
9. Add the dataset to_pkgdown.yml
10. Write a bullet in the RAVI.md file.
Q.No. 39. Difference between matrix and data frames?
Answer- A matrix is an m * n array in which every element has the same data type. It is a homogeneous collection of data arranged in a two-dimensional rectangular organization, with a fixed number of rows and columns. We can perform many arithmetic operations on an R matrix. It has great use in economics, engineering, and electronics, and is also useful in probability and statistics.
Data frames are used for storing data tables. A data frame can contain multiple data types across multiple rows and columns (fields); it looks like an Excel sheet. It has column and row names, where each row has a unique number. It can store multiple data types like numeric, character, or factor, so it is heterogeneous. We can do statistics, data processing, transposition, etc.
Q.No. 40. Difference between seq(4) and seq_along(4)?
Answer- If seq() is called with one unnamed numeric argument of length 1, it returns an integer sequence from 1 up to the value of the argument; seq(4) therefore returns the integers 1, 2, 3, 4. In contrast, seq_along(x) produces the vector of indices of x, i.e. 1 to length(x), so seq_along(4) returns just 1, because its argument has length 1.
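The difference is easy to see at the console:

```r
seq(4)        # 1 2 3 4
seq_along(4)  # 1, because the argument has length 1

x <- c(10, 20, 30)
seq_along(x)  # 1 2 3, one index per element of x
```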
Q.No. 41. Does R have a memory limit? What is it?
Answer- Yes, R has a memory limit. On 32-bit Windows it cannot exceed 3 GB, and most versions are limited to 2 GB; the minimum is currently 32 Mb. If a 32-bit build of R is run on a 64-bit version of Windows, the maximum obtainable memory is 4 GB. For 64-bit versions of R under a 64-bit Windows system, the limit is 8 TB.
Q.No. 42. Name the sorting algorithms available in R?
Answer- The sorting algorithms available in ‘R’ are as follows:
1. Quick Sort
2. Selection Sort
3. Merge Sort
4. Bucket Sort
5. Bubble Sort
6. Bin Sort
7. Radix Sort
8. Shell Sort
9. Heapsort
Q.No. 43. How do you export data in R?
Answer- We can export data from R to various applications and programs. Some examples:
1. R to Excel
write.xlsx(mydata, "c:/mydata.xlsx")
2. R to SAS
# Write out a text data file and
# a SAS program to read it
write.foreign(mydata, "c:/mydata.txt", "c:/mydata.sas", package = "SAS")
3. R to Stata
# Export the data frame to Stata binary format
write.dta(mydata, "c:/mydata.dta")
Q.No. 44. What is coxph()?
Answer- It is the function used to fit a Cox proportional hazards regression model in R. Time-dependent covariates, time-dependent strata, multiple events per subject, and other extensions are incorporated using the counting process formulation, in which the data for a subject are presented as multiple rows or “observations”, each of which applies to an interval of observation (start, stop).
Q.No. 45. Define the MATLAB package?
Answer- Matlab and R are the two interactive, high-level programming languages used in scientific computing. The languages have a lot in common but have very different targets and foci. R is
primarily used by the statistical community for advanced data analysis and research in statistical methodology while Matlab is primarily used by engineers for image processing, differential
equations, and so on.
The RMatlab package provides a path for R to call Matlab functions and for Matlab to call R functions. We can start Matlab from R, or we can embed R within Matlab. The RMatlab package allows us to call R functions from within the Matlab process using the same address space. This makes inter-system communication very fast and allows objects to be shared directly by reference. And since R is embedded within Matlab, it can make calls back to Matlab. This allows for a range of interesting computations and ensures that we can use the two languages back to back, programming in whichever environment is most convenient for our tasks.
R can also access Matlab by starting a separate Matlab process and sending commands to it; this is the engine linkage to Matlab.
Q.No. 46. How do you use corrgram() function?
Answer- The corrgram() function produces a graphical display of a correlation matrix. Its cells can be shaded or colored to show the correlation value. In corrgram() function the non-numeric column
in the data will be ignored.
Q.No. 47. What is the UIWindow object?
Answer- The presentation of one or more views on a screen is coordinated by the UIWindow object. An iOS application usually has only one window but may have many views. Windows and views are both used to present your application’s content on the screen. A window provides a basic container for your application’s views but has no visible content of its own; a view is a segment of a window that you can fill with content.
Q.No. 48. What is the lazy function evaluation in R?
Answer- Lazy evaluation is a programming strategy that allows a symbol to be evaluated only when needed; in other words, a symbol can be defined in a function and will only be evaluated when it is needed. Lazy evaluation is implemented in R because it allows a program to be more efficient when used interactively.
Q. No. 49. Write the difference between “%%” and “%/%”?
Answer – “%%” indicates x mod y and “%/%” indicates integer division. Both are arithmetic operators.
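For example:

```r
7 %% 3    # 1  (remainder of x mod y)
7 %/% 3   # 2  (integer division)
-7 %% 3   # 2  (in R, the result of %% takes the sign of the divisor)
-7 %/% 3  # -3 (integer division rounds toward negative infinity)
```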
Q.No. 50. What is the forecast package?
Answer- It provides methods and tools for displaying and analyzing univariate time series forecasts including exponential smoothing through the state-space model and automatic ARIMA modeling. The
forecast packages will remain in their current state and maintained with bug fixes only.
Q.No. 51. What is auto.arima()?
Answer- It is a forecasting function for time series. auto.arima() function returns the best ARIMA model according to either AIC, AICC, or BIC value. It searches for all possible models within the
order constraints provided.
Q.No. 52. What do you understand by reshaping of data in R?
Answer- Data reshaping changes the way data in a data frame is organized into rows and columns, and it helps in extracting data from the rows and columns of the data frame. It sounds like an easy task, but there are situations when we need the data frame in a format different from the one in which we received it. In R we have many functions to merge, split, and change the rows and columns of a data frame.
Q, No. 53. What is the full form of CFA?
Answer- CFA stands for Confirmatory Factor Analysis.
Q.No. 54. What is the coin package in R?
Answer- It provides a flexible implementation of the abstract framework and a large set of convenience functions to implement classical and non-classical test procedures within the framework. The
coin package provides us an implementation of a general framework for conditional inference procedures which is known as permutation tests.
Q.No. 55. What do you mean by workspace in R?
Answer- The workspace is our current R working environment where the objects like vectors, matrices, list, function, etc. are included. At the end of the session, we can save an image of the current
workspace that is automatically reloaded the next time in R when it is started. We can give commands interactively at the R user prompt and check the history through arrow keys.
Q.No. 56. How many data structures does R have?
Answer- There are six types of data structure in R which are as follows:
1. Vectors
2. Lists
3. Dataframes
4. Matrices
5. Arrays
6. Factors
Definition 19
Rectilinear figures are those which are contained by straight lines, trilateral figures being those contained by three, quadrilateral those contained by four, and multilateral those contained by more
than four straight lines.
This definition was probably not in Euclid’s original Elements. It classifies rectilinear figures by their number of sides. Euclid names polygons by the number of their angles. For example, Book IV
includes constructions of regular pentagons, hexagons, and pentadecagons (15-angled figures) in which those terms are used.
The modern English names are also based on the number of angles (except quadrilateral): triangle, pentagon, hexagon, heptagon, octagon, etc. Quadrilaterals can also be called tetragons. From pentagon
on up these names derive from the Greek, but they’re rarely used past octagon.
Programme Of Study For Year 6 Mathematics
English National Curriculum
Number – fractions (including decimals and percentages)
These are the statements, each one preceded with the words "Pupils should be taught to:"
Summer 2020
Hamilton's discovery of quaternions is inscribed on Broom Bridge in Dublin, a place of pilgrimage for mathematicians. Olinde Rodrigues is usually also credited with a co-discovery of quaternions.
However, Rodrigues mostly deals with the rotation group SO(3) and not with the quaternion algebra; you can look at his paper, in this PDF from 1840. A connection is no surprise, because the unit
sphere in the quaternions is the universal cover of the rotation group. The point of quaternions, however, is that they form an algebra, indeed a division algebra. Rodrigues found a neat formula for
rotations, but I personally believe there is a big gap from that formula to the actual quaternion algebra. Historians like to exaggerate sometimes: that the Babylonians already discovered
trigonometry, that Archimedes already found the Riemann integral, or that the Pythagorean theorem can be seen on clay tablets. Also, one of the major formulas for quaternions, |qp| = |q| |p|, was
already known to Euler, but it would be false to credit Euler with the quaternion discovery. You have to judge for yourself: look at the Rodrigues paper and see whether you can see the quaternion
algebra explicitly.
Here is also a historical paper which calls it a scandal:
mlr_pipeops_blsmote: BLSMOTE Balancing in mlr3pipelines: Preprocessing Operators and Pipelines for 'mlr3'
Adds new data points by generating synthetic instances for the minority class using the Borderline-SMOTE algorithm. This can only be applied to classification tasks with numeric features that have no
missing values. See smotefamily::BLSMOTE for details.
param_vals :: named list List of hyperparameter settings, overwriting the hyperparameter settings that would otherwise be set during construction. Default list().
The output during training is the input Task with added synthetic rows for the minority class. The output during prediction is the unchanged input.
The $state is a named list with the $state elements inherited from PipeOpTaskPreproc.
The parameters are the parameters inherited from PipeOpTaskPreproc, as well as:
K :: numeric(1) The number of nearest neighbors used for sampling from the minority class. Default is 5. See BLSMOTE().
C :: numeric(1) The number of nearest neighbors used for classifying sample points as SAFE/DANGER/NOISE. Default is 5. See BLSMOTE().
dup_size :: numeric Desired times of synthetic minority instances over the original number of majority instances. 0 leads to balancing minority and majority class. Default is 0. See BLSMOTE().
method :: character(1) The type of Borderline-SMOTE algorithm to use. Default is "type1". See BLSMOTE().
quiet :: logical(1) Whether to suppress printing status during training. Initialized to TRUE.
Han, Hui, Wang, Wen-Yuan, Mao, Bing-Huan (2005). “Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning.” In Huang, De-Shuang, Zhang, Xiao-Ping, Huang, Guang-Bin (eds.),
Advances in Intelligent Computing, 878–887. ISBN 978-3-540-31902-3, doi:10.1007/11538059_91.
Other PipeOps: PipeOp, PipeOpEnsemble, PipeOpImpute, PipeOpTargetTrafo, PipeOpTaskPreproc, PipeOpTaskPreprocSimple, mlr_pipeops, mlr_pipeops_adas, mlr_pipeops_boxcox, mlr_pipeops_branch,
mlr_pipeops_chunk, mlr_pipeops_classbalancing, mlr_pipeops_classifavg, mlr_pipeops_classweights, mlr_pipeops_colapply, mlr_pipeops_collapsefactors, mlr_pipeops_colroles, mlr_pipeops_copy,
mlr_pipeops_datefeatures, mlr_pipeops_encode, mlr_pipeops_encodeimpact, mlr_pipeops_encodelmer, mlr_pipeops_featureunion, mlr_pipeops_filter, mlr_pipeops_fixfactors, mlr_pipeops_histbin,
mlr_pipeops_ica, mlr_pipeops_imputeconstant, mlr_pipeops_imputehist, mlr_pipeops_imputelearner, mlr_pipeops_imputemean, mlr_pipeops_imputemedian, mlr_pipeops_imputemode, mlr_pipeops_imputeoor,
mlr_pipeops_imputesample, mlr_pipeops_kernelpca, mlr_pipeops_learner, mlr_pipeops_missind, mlr_pipeops_modelmatrix, mlr_pipeops_multiplicityexply, mlr_pipeops_multiplicityimply, mlr_pipeops_mutate,
mlr_pipeops_nmf, mlr_pipeops_nop, mlr_pipeops_ovrsplit, mlr_pipeops_ovrunite, mlr_pipeops_pca, mlr_pipeops_proxy, mlr_pipeops_quantilebin, mlr_pipeops_randomprojection, mlr_pipeops_randomresponse,
mlr_pipeops_regravg, mlr_pipeops_removeconstants, mlr_pipeops_renamecolumns, mlr_pipeops_replicate, mlr_pipeops_rowapply, mlr_pipeops_scale, mlr_pipeops_scalemaxabs, mlr_pipeops_scalerange,
mlr_pipeops_select, mlr_pipeops_smote, mlr_pipeops_smotenc, mlr_pipeops_spatialsign, mlr_pipeops_subsample, mlr_pipeops_targetinvert, mlr_pipeops_targetmutate, mlr_pipeops_targettrafoscalerange,
mlr_pipeops_textvectorizer, mlr_pipeops_threshold, mlr_pipeops_tunethreshold, mlr_pipeops_unbranch, mlr_pipeops_updatetarget, mlr_pipeops_vtreat, mlr_pipeops_yeojohnson
library("mlr3")
# Create example task
data = smotefamily::sample_generator(500, 0.8)
data$result = factor(data$result)
task = TaskClassif$new(id = "example", backend = data, target = "result")
task$head()
table(task$data(cols = "result"))
# Generate synthetic data for minority class
pop = po("blsmote")
bls_result = pop$train(list(task))[[1]]$data()
nrow(bls_result)
table(bls_result$result)
Upwind or Against the Current
Kayak Navigation by David Burch, has some good information about paddling upwind (or downwind). Especially Figure 5-4 which quantifies upwind slowdown. For example, suppose you can paddle at a
sustainable 4 knots in still air. A 15-knot headwind will slow you down to approximately 3-knots (a drag program I wrote gets about the same approximate answer).
So here’s an easy brainteaser (but it has some interesting ramifications that I’ll get to later):
Assume that Marjorie is an expert forward-stroke paddler. She can paddle continuously at 4 knots.
On Monday morning Marjorie paddled against a 1-knot current in still air. On Friday she paddled against a 15-knot headwind in still water.
More assumptions: 1. She used the same boat and paddle both days (and everything else is the same including her breakfast choice). 2. The water was smooth both days. 3. There's a one-knot slowdown
for paddling against a 15-knot headwind (from Burch’s book).
Question: On which day was Marjorie’s average ground speed faster?
I'll take a guess - in the 1 kt current. Secondary effect is arms getting tired on the return part of the upper blade pushing against the wind.
Edited by JohnHuth
I'll take a guess - in the 1 kt current. Secondary effect is arms getting tired on the return part of the upper blade pushing against the wind.
Yes, that's a secondary effect that’s not nearly as significant as the primary effect that I have in mind for paddling into a strong wind. So let's assume it away for the brainteaser; i.e., assume no
wind resistance against your paddle. Then same question.
Speaking of good wind, today it's too windy (gusts into the 30's) in SE Florida for me to sail or paddle. I gave it the ole college try in the Sunfish and capsized twice on the downwind legs (the GPS
indicated 15 knots while I was on a beam reach). So back to my reading and computer-ing.
If I was Marge,
I would get behind a bigger and stronger paddler and ride his/hers draft and they can help block the wind -- such a girly thing to do, I know --
Ahh, Leon, the same way I make all decisions :-)
If this is a typical guess then your batting average must be pretty high.
If I was Marge,
I would get behind a bigger and stronger paddler and ride his/hers draft and they can help block the wind -- such a girly thing to do, I know --
Good try anyway, Dear Les,
The problem is that Marge is the best paddler out there. Whenever she's around I draft behind her. Oops, I hope I’m not doing such a girly thing.
I'll take a guess - in the 1 kt current. Secondary effect is arms getting tired on the return part of the upper blade pushing against the wind.
No new posts so here goes my answer to the brainteaser. Let me know if you agree.
Based on empirical evidence from my 25 plus years of paddling against wind and current, I’ve come to the conclusion that paddling at the same SOG (speed over ground) against an “equivalent headwind”
is much harder than paddling against a current. So using the brainteaser example and data from Burch’s book here’s (I hope) a quantitative proof.
Paddling upcurrent, Marjorie’s speed relative to the river is still 4-knots. But her SOG is 3-knots (4-knots paddling speed – 1-knot current speed). Note that Marjorie is an excellent paddler.
Because of that she has chosen her paddle and hand position to get her cadence as close as possible to her energetically optimum cadence (EOC). The EOC is that cadence that gives one the maximal
aerobic power. Above or below the EOC the power that a muscle can provide monotonically decreases. More on this later.
Paddling against the headwind at an equivalent SOG, Marjorie has to paddle at 3-knots (both SOG and speed relative to the river).
Drag Budget for 1-knot upcurrent (using Burch’s Fig. 5-3):
D2 (water drag at 4-knots): 4 pounds
Drag Budget for 15-knot headwind (using Burch’s Figs 5-2 – 5-5):
F (Air drag at 18-knot apparent wind [15-knot wind + 3-knot paddling speed]): 3.5 pounds
D3 (water drag at 3 knots): 2 pounds
DW (additional wind-induced small-wave drag): 0.5 pounds (included for completeness, but it’s not needed to prove the point)
TD (today drag into a 15-knot headwind) = (F+D3+DW): 6 pounds
Since power = speed * drag force we have,
For 1-knot upcurrent: power = 4-knots * 4-pounds = 16 knot-pounds
For 15-knot headwind: power = 3-knots * 6-pounds = 18 knot-pounds
So, based on power for the same SOG, it requires more power to paddle into a headwind.
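As a cross-check, the power budget above can be computed in a few lines of R (all figures copied from the budget quoted above):

```r
# Power required = drag force * speed (in knot-pounds, as in the post above)
speed_upcurrent <- 4             # knots through the water, against a 1-knot current
drag_upcurrent  <- 4             # pounds, water drag at 4 knots

speed_headwind <- 3              # knots, against a 15-knot headwind
drag_headwind  <- 3.5 + 2 + 0.5  # pounds: air drag + water drag + wave drag

speed_upcurrent * drag_upcurrent  # 16 knot-pounds
speed_headwind * drag_headwind    # 18 knot-pounds
```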
But that’s not the end of the story. Even if the required power were the same for paddling upwind or upcurrent, it would still be easier to paddle upcurrent. That's because, when paddling upwind, the kayak slows down and the blade's movement backwards with respect to the water decreases. This reduces your cadence (actually the speed at which you can pull your paddle back through the water). Hill's Equation shows that the power a muscle can produce decreases as the speed of contraction drops below some optimal speed.
In case you don’t realize it the reason bicycles have gears is because of Hill's Equation (not to be confused with the little mountains sometimes called hill's). When you try to determine how fast a
bike can go, what you do is you match the power available against the power required, at a given speed. This energy budget indicates whether you can go faster, or whether you can even hold your
current speed. Power produced is the product of torque times cadence (rpm). Starting from 0 cadence where power is 0, power monotonically increases as cadence increases until the EOC point is
reached. The EOC cadence is where power is maximized. Beyond the EOC cadence power decreases.
The beauty of bicycles is that they have multiple gear ratios. So for a particular drag force (gravity on hills, air drag, rolling friction, etc.) one can choose an appropriate gear ratio that allows
one to pedal at (or close to) the EOC cadence.
Too bad paddlers don’t have variable gears like pedalers do (perhaps peddlers do). About the only things that we can do to “change gears” are:
1. Change paddle length.
2. Change distance from your hands to the paddle blades.
3. Change stroke length.
4. Change blade size.
Once I go on an upwind run I don’t have the luxury of changing paddle length or blade size. But I can “choke up” more on the paddle and take shorter strokes. The shorter stroke not only increases cadence but also (partially) counteracts the reduced glide when paddling into a wind.
But these changes are not nearly as effective for changing a kayak’s “gears” as they are for a bike, where you can just change gears to almost perfectly match the EOC for any total drag.
Morning Leon.
Risky saying this, but I think you missed something more obvious than Hill's equation. When paddling upwind in calm water, the paddle of your 'good paddler' is fixed in position over the ground with no wind-induced movement of the blade. In contrast, in moving water the paddle slips over the ground along with the boat. So without increasing the stroke rate or stroke force (bigger blade) there is less effective propulsion in moving water than in calm water.
You’re talking about a blade that doesn’t slip (is locked) with respect to the water, right? So, yes, if the river isn’t moving the blade is locked with respect to both the river and ground. But if
the river is moving the paddle is locked with respect to the river, but is moving with respect to the ground (at the speed of the river).
>>So without increasing the stroke rate or stroke force (bigger blade) there is less effective propulsion in moving water than calm water.
I’m not sure I understand what you think this implies. The “effective” propulsion with respect to the water is the same whether the water is moving or not.
How about this thought experiment: Say (with closed eyes) you’re pushing with a constant force and speed on a box on a walkway (the speed is with respect to the walkway). If the walkway is one of
those constant speed moving walkways used at airports, the force you push with will be no different no matter the speed of the walkway, right? In fact, if the walkway’s movement is vibration-less
you won’t even know whether the walkway is moving or not, right? The power to move the box (force * speed of the box relative to the walkway) is the same whether the walkway is moving or not, right?
Of course, the distance along the fixed ground that the box moves is different since it depends on the speed of the moving walkway. But the power is the same.
Edited by leong
knot-pounds?? First time I've seen those units. What's the speed of light in furlongs per fortnight?
What is "today drag'?
OK, so waves, yup, that'll change things.
“today drag” was a typo, I meant to type “total drag”
Note that applying Hill's Equation to the cadence slowdown for upwind paddling is sufficient to demonstrate that it's easier to paddle upcurrent than against an "equivalent" wind.
I just included an estimate of the drag from wind-generated waves for completeness. I didn’t need it to prove the point.
knot-pounds are my favorite power units when working with kayaks and drag. You want hp, then just multiply by ~ 0.00307
186,000 miles per second ≈ 1,799,885,057,678.61 Furlongs Per Fortnight
Edited by leong
What's wrong with FPF ? I've been using this to express speed all my life. Glad there's finally someone else who does.
Okay, it looks like no one is following my simple proof (or admitting to it). So forget about math and forget about Hill’s Equation. The proof is in the pudding. Think about the following real life
experience below:
Yesterday I was paddling continuously at 4 knots in the lee of an island. When I reached the windward side I turned directly into a ~ 25-knot headwind. Paddling as hard as I could, I only made
about 1.5 knots (SOG). (Note that the extra drag of the wind reduced my cadence from ~ 60 rpm to ~ 25 rpm, even with shorter strokes.) When I stopped paddling I checked my drift speed. It was about
1.5 knots when I was parallel to the wind but, as expected, faster when I was broadside to the wind.
Paddling at 4-knots (water speed) against 1.5-knot current in still air would obviously result in a SOG of 2.5 knots (4 – 1.5).
Yesterday, paddling against a 25-knot wind (“equivalent” to a 1.5-knot current) resulted in a final SOG of 1.5 knots.
Result: I could paddle faster (SOG) into a current than against an “equivalent” wind. And like riding a bike uphill in too high a gear, I couldn’t increase my power when paddling into the wind
because my cadence was too low. That’s Hill’s Equation in simple terms.
Read about Nobel Prize winner, Archibald Hill, here https://en.wikipedia.org/wiki/Archibald_Hill.
I haven't had a chance to analyze it. One question is whether you're double counting things that go into Burch's estimate, or, put a different way, what goes into Burch's estimate? For example, if
resistance due to encountering waves is a factor, might that not already be present in what Burch has?
I *will* get around to checking it out, but the Holiday Season has me hopping.
Okay, let’s assume that the resistance to encountering waves is already present in Burch’s tables and I’m double counting this drag. I don’t think so, but let’s remove the 0.5 pounds of drag that I
added; it was just a guess for completeness anyway. The modified resultant power for going upwind becomes 3* (F + D3) = 3* (3.5 + 2) = 16.5 knot-pounds. This is still greater than the upcurrent power
of 16 knot-pounds.
But I’ll go one more for you. I think that it’s very unlikely, but suppose that the drag going upwind (water drag + wind drag) and upcurrent drag (just water drag) to go at the same SOG are equal.
Call this drag D.
So now we compute the required power for equal SOGs as follows for the Marjorie brainteaser:
Upwind power = 3 * D (SOG = 3 knots)
Upcurrent power = 4 * D (SOG = 3 knots)
So now we have upwind power < upcurrent power. So you might argue that Marjorie might be able to paddle upwind (SOG) faster than she can paddle upcurrent (SOG). However, Hill’s Equation may be the
deciding factor.
To use Hill’s Equation we need to have an expression for cadence. A reasonable assumption is that water speed is proportional to cadence. Thus we have,
3 = k * upwind_cadence
4 = k * upcurrent_cadence
This results in:
upwind_cadence = ¾ * upcurrent_cadence
But if 4 knots is the maximum water-speed that Marjorie can paddle in the absence of wind, it implies that her cadence is no greater than the energetically optimum cadence (EOC). And since we just
demonstrated that upwind_cadence < upcurrent_cadence, Hill’s Equation says that the maximum power that she can produce to paddle upwind is less than the maximum power she can produce to
upcurrent. Depending on her particular “Power vs. Cadence” curve (it varies from person to person), it’s possible that she can paddle upcurrent (SOG) faster than she can paddle upwind (SOG), even in
this case of equal drags where less actual power is required to paddle upwind. The determining factor is whether or not the muscles can generate the required power at the reduced cadence.
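The equal-drag case above can be checked numerically. In the sketch below, the drag value and the power-vs-cadence curve are made-up illustrative numbers (a real curve varies from person to person, and a cubic is used purely as a stand-in for a Hill-style curve); the speeds and the "cadence proportional to water speed" assumption come from the brainteaser as stated.

```python
# Numeric sketch of the equal-drag Marjorie case. Drag and the toy
# power-vs-cadence curve are made-up values for illustration only.

D = 4.0            # pounds of drag, assumed equal upwind and upcurrent
EOC_SPEED = 4.0    # water speed (knots) at which cadence hits the EOC
PEAK_POWER = 16.5  # knot-pounds available at the EOC (made up)

def required_power(water_speed):
    """Power needed to hold a given speed through the water."""
    return water_speed * D

def available_power(water_speed):
    """Toy stand-in for a Hill-style curve: maximum producible power
    rises steeply with cadence (taken proportional to water speed)
    up to the EOC. A cubic is used purely for illustration."""
    x = min(water_speed / EOC_SPEED, 1.0)
    return PEAK_POWER * x ** 3

# Upwind: 3 knots through the water (= SOG). Upcurrent: 4 knots through
# the water against a 1-knot current, also a 3-knot SOG.
for name, water_speed in (("upwind", 3.0), ("upcurrent", 4.0)):
    need, have = required_power(water_speed), available_power(water_speed)
    verdict = "sustainable" if have >= need else "NOT sustainable"
    print(f"{name:9s}: need {need:5.2f}, can produce {have:5.2f} -> {verdict}")
```

With this particular (invented) curve, the upwind leg demands less power but the reduced cadence leaves even less power available, so the upcurrent leg is the sustainable one — the text's point in numbers.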
Merry Xmas to all.
Way back it was the fashion to use long paddles. To go at the same speed the required cadence of a longer paddle was less than the required cadence of a shorter paddle. More recently, the advice has
been to use shorter paddles. I believe this follows from studies addressing elite kayak sprinters where the goals were to determine the optimal relationship between fatigue in athletes and their
performance. A good part of these studies concentrated on the techniques and equipment necessary to drive cadence to the EOC point. Shorter paddles are one result of the studies. Luckily this has
filtered down to the sea kayaking folks.
Below I found a graph of human power vs. cadence. It shows that as you decrease your cadence below the “Energetically Optimal Cadence” (the peak of the curve) the power that you can produce drops
quickly. Although the graph was made for pedaling it is just as relevant for paddling.
Edited by leong
I have to understand Hill's equation better - I see you have some links earlier in the posting. You'll have to give me some time to digest it. I *am* a biophysics fan, but I'm not as familiar with
his work.
On general grounds, I switched to a shorter paddle because I 'test drove' one and for some reason the increased cadence and ease of draw through the water felt like I could maintain a certain speed
longer. Not very scientific, I realize, but it seemed to work for me.
You’ve already demonstrated the effect of Hill’s equation when you switched to a shorter paddle. The increased cadence probably brought you closer to your power peak. Note that as you increase your
cadence starting at 0 cadence, the maximum power that you can produce increases until you reach the energetically optimum cadence (EOC). Of course, you’re not paddling at maximum effort, except for a
short sprint. Say you usually paddle at 30% of your maximum power. The curve of 30% of maximum power vs. cadence will also increase until you reach some optimum cadence. I haven’t seen any data on
this for paddling but I’m pretty sure that your optimal cadence (given an effort of 30% of maximum power) will be approximately the EOC cadence, probably a little less. Unfortunately because you
can’t change the “gear ratio” for paddling very much, paddling cadence is just about proportional to paddling speed.
When I raced a bike I was very interested in this topic. For example, to go at my maximum speed in a race (maximum power), what gear ratio should I use to get my actual cadence as close as possible
to the EOC? And for club rides below maximum speed, what gear ratio should I use to maximize my endurance for the distance of the trip? I never scientifically answered the second question.
Nevertheless, my experience was that I should select a gear ratio that provided a cadence a little lower than the EOC.
I’ve been paddling (sometimes sailing) almost every day in strong winds in southeast FL. Oh how I wish paddles came with changeable gears like bicycles do!
Single rate and dual rate in investment valuation
Suppose I want to buy a property investment, and I want a return of 5.00% on it. If it produces an income of £100 a year, I would pay £2,000 for such an investment. (Note that I ignore acquisition
costs and other complications for the purpose of this explanation).
If it is a freehold, I will still have my £2,000 ten years later – either in the value of the property or, if I sell it, in cash. (Again, I am ignoring changes in yields and property values
generally). For the purpose of the discussion of single and dual rates, it is useful to call this 5.00% the “remunerative rate”. In other words, by laying out £2,000, I get a 5.00% remunerative rate
on my outlay of money and, in ten years’ time, I will still have that money or its equivalent in property value.
Compare that with the acquisition of a leasehold interest where the lease I buy as an investor only lasts for ten years. I still want my remunerative rate of 5.00%, but I must take into account that,
in ten years’ time, whatever I have paid for the investment will be gone: the lease will come to an end, and I will therefore have nothing – no property interest and nothing to sell to recover my
money. Even the income will now be received by the freeholder, not me. To deal with this problem, property valuers long ago developed the idea of the sinking fund. Instead of re-selling (or keeping)
my investment at the end of ten years, as I can with a freehold, I put aside an annual sum out of whatever income I get, so as to reconstitute my capital. The only source for such contributions to my
sinking fund is the income I get from the property during my lease. If that is still £100 a year, then each year, out of that £100, I must put aside a sum sufficient to get me back whatever price I
paid for the investment – in the case of our ten-year lease, one-tenth of the price I pay to buy this leasehold investment.
The attached worksheet gives the results of calculations along these lines. The text and figures outside the thick-bordered, yellow box show the factors and some intermediate calculations, which can
be ignored for this purpose. The first of the lines inside the box shows the result of what is set out in the paragraph above. The line “Disregarding the accumulative rate and tax” shows that the
price I can pay and still achieve my 5.00% remunerative rate is £666.67. I will set aside £66.67 each year out of the £100 a year income as a sinking fund to reconstitute my capital – the price I
paid – in ten years’ time. The balance of £33.33 a year is 5.00% on my outlay of £666.67, so I have achieved the remunerative rate I want.
Most investors have to pay tax on income. Let’s say my rate of tax is 40%, to illustrate. HMRC do not regard sinking fund contributions as a cost (although possibly they should), hence the line
“Disregarding the accumulative rate only”. As my sinking fund contributions will have to pass through a tax sieve, I will have to reserve a larger part of the £100 a year income for those
contributions. Now, I realise, I can only afford to pay £461.54. I will have to make annual sinking fund contributions of £76.92. Of that, £30.77 goes to HMRC, £46.15 to my sinking fund. That leaves
£23.08 for me, which is 5.00% on my outlay of £461.54, again achieving my desired remunerative rate.
We have not yet come to dual rates. The only rate used so far is the single, remunerative rate of 5.00%. However, what I have set out above would result in a slight understatement of the price I
should pay for the investment because, while I am putting aside money for the sinking fund, it will itself attract some interest. I can place that money on deposit, and a small interest accumulation
will result. I say a “small” rate, because the rate on the sinking fund will be different from the remunerative rate. It is known as an “accumulative rate”, because it is the rate that is
progressively added to the accumulating sinking fund. This must be a “risk-free rate” – not, in other words, a rate as high as the remunerative rate, which can be expected to vary with market
conditions, but one which gives me a certain return on my sinking fund to make sure that I definitely can reconstitute my capital when the lease comes to an end in ten years’ time.
This is why a calculation of this kind is called a “dual rate years’ purchase (or YP)” calculation. There are two different rates at work, doing different things: the remunerative rate giving me my
true return; and the accumulative rate enhancing my sinking fund. As regards the accumulative rate, the Lands Tribunal (as it then was – now the Upper Tribunal (Lands Chamber)) decided convincingly
in Sportelli that the risk-free rate is 2.25%.
To illustrate, see the “Dual rate without tax” and “Dual rate with tax” lines here. They show that the two calculations using single rates mentioned above slightly understate the price I can afford
to pay while still getting my 5.00% remunerative rate. In fact, if I can disregard tax, I pay £712.82 and set aside £64.36 annually. If, like most people, I cannot disregard tax, I pay £498.80 and
set aside £75.06 annually.
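The four prices discussed above can be reproduced from the standard years'-purchase formulas in a few lines. This is a sketch using only the inputs given in the text (10-year term, £100 income, 5.00% remunerative rate, 2.25% accumulative rate, 40% tax), not the attached worksheet itself.

```python
# Reproduce the four prices from the standard years'-purchase (YP)
# formulas, using the inputs stated in the text.

income = 100.0   # pounds per year
rem = 0.05       # remunerative rate
acc = 0.0225     # accumulative (risk-free sinking fund) rate
tax = 0.40       # tax rate on income
n = 10           # years remaining on the lease

def annual_sinking_fund(i, n):
    """Annual payment that accumulates to 1 after n years at rate i.
    With i = 0 this degenerates to simple division: 1/n."""
    return 1.0 / n if i == 0 else i / ((1 + i) ** n - 1)

def price(i_acc, taxed):
    """Dual-rate YP price: income must cover the remunerative return
    plus the (possibly tax-grossed-up) sinking fund contribution."""
    sf = annual_sinking_fund(i_acc, n)
    if taxed:
        sf /= (1 - tax)   # gross up: sinking fund contributions are taxed
    return income / (rem + sf)

print(f"single rate, no tax : {price(0.0, False):7.2f}")   # 666.67
print(f"single rate, taxed  : {price(0.0, True):7.2f}")    # 461.54
print(f"dual rate, no tax   : {price(acc, False):7.2f}")   # 712.82
print(f"dual rate, taxed    : {price(acc, True):7.2f}")    # 498.80
```

The four printed figures match the worksheet values quoted in the text, which is a useful sanity check on the formulas.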
As Harry Hill might say, dual rates in a nutshell!
Al, Pablo, and Marsha shared the driving on a 1,500-mile trip. Which
Question Stats:
72% 28% (01:59) based on 2202 sessions
Al, Pablo, and Marsha shared the driving on a 1,500-mile trip. Which of the three drove the greatest distance on the trip?
(1) Al drove 1 hour longer than Pablo but at an average rate of 5 miles per hour slower than Pablo.
(2) Marsha drove 9 hours and averaged 50 miles per hour.
We need to determine who (Al, Pablo, or Marsha) drove the greatest distance of the 1,500-mile trip.
If we know one person had driven more than ½ the distance of the entire trip, i.e., 750 miles, then he or she must be the person who drove the greatest distance. On the other hand, if we know one
person had driven less than ⅓ the distance of the entire trip, i.e., 500 miles, then he or she can’t be the person who drove the greatest distance (the other two would then cover more than 1,000 miles between them, so at least one of them drove more than 500).
Statement One Alone:
Al drove 1 hour longer than Pablo but at an average rate of 5 miles per hour slower than Pablo.
Since we don’t know anything about Marsha, statement one alone is not sufficient to answer the question.
Statement Two Alone:
Marsha drove 9 hours and averaged 50 miles per hour.
We see that Marsha drove 9 x 50 = 450 miles. Since this is less than 500 miles, we know Marsha can’t be the person who drove the greatest distance. So either Al or Pablo is the person who drove the
greatest distance. However, since we don’t know which one that is, statement two alone is not sufficient to answer the question.
Statements One and Two Together:
From the two statements, we see that Al and Pablo together drove 1,050 miles. If we let r = the average rate Al drove and t = the time he drove, we can create the equation:
rt + (r + 5)(t - 1) = 1,050
However, there are two unknowns in this equation, so we can’t determine who (Al or Pablo) drove a greater distance.
For example, suppose first that Al drove for 5 hours. Then, Pablo drove for 4 hours and we have
5r + 4(r + 5) = 1050
9r + 20 = 1050
9r = 1030
r ≈ 114.4 mph
Thus, Al drives approximately 5 x 114.4 ≈ 572 miles and Pablo drives 1050 - 572 = 478 miles. In this scenario, Al drives further than Pablo.
On the other hand, suppose that Al drives for 15 hours. Then, Pablo drives for 14 hours and we have
15r + 14(r + 5) = 1050
29r + 70 = 1050
29r = 980
r ≈ 33.8 mph
Thus, Al drives approximately 15 x 33.8 ≈ 507 miles and Pablo drives 1050 - 507 = 543 miles. In this scenario, Pablo drives further than Al.
Answer: E
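The two scenarios can be checked mechanically: once a driving time for Al is assumed, the combined-statements equation is linear in his rate. The sketch below just automates the arithmetic above.

```python
# With r = Al's rate and t = Al's time, statement (1) gives Pablo
# rate r + 5 and time t - 1, and statement (2) leaves Al and Pablo
# 1500 - 450 = 1050 miles between them:
#     r*t + (r + 5)*(t - 1) = 1050
# For a chosen t this is linear in r, so we can solve directly.

def al_and_pablo_miles(t):
    """Solve for Al's rate given his driving time t (hours), then
    return (Al's distance, Pablo's distance) in miles."""
    # r*t + (r+5)*(t-1) = 1050  =>  r*(2t - 1) = 1050 - 5*(t - 1)
    r = (1050 - 5 * (t - 1)) / (2 * t - 1)
    return r * t, (r + 5) * (t - 1)

for t in (5, 15):
    al, pablo = al_and_pablo_miles(t)
    print(f"t = {t:2d} h: Al {al:6.1f} mi, Pablo {pablo:6.1f} mi")
```

One assumed time makes Al the longer driver, the other makes Pablo the longer driver, which is exactly why the combined statements are insufficient and the answer is E.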
MaffsGuru.com - Making maths enjoyable
What are fractions?
I lie awake all night thinking about how I can make fractions make more sense to Year 7 students. And I think I have the video. I try to make fractions easier by thinking about pizza and friends.
There is also a discussion about chocolate too! I hope you enjoy it.
Evolutionarily Stable Strategies of Random Games, and the Vertices of Random Polygons
Sergiu Hart, Yosef Rinott, and Benjamin Weiss
An evolutionarily stable strategy (ESS) is an equilibrium strategy that is immune to invasions by rare alternative ("mutant") strategies. Unlike Nash equilibria, ESS do not always exist in finite
games. In this paper, we address the question of what happens when the size of the game increases: does an ESS exist for "almost every large" game? Letting the entries in the n x n game matrix be
randomly chosen according to an underlying distribution F, we study the number of ESS with support of size 2. In particular, we show that, as n goes to infinity, the probability of having such an
ESS: (i) converges to 1 for distributions F with "exponential and faster decreasing tails" (e.g., uniform, normal, exponential); and (ii) it converges to 1 - 1/sqrt(e) for distributions F with
"slower than exponential decreasing tails" (e.g., lognormal, Pareto, Cauchy).
Our results also imply that the expected number of vertices of the convex hull of n random points in the plane converges to infinity for the distributions in (i), and to 4 for the distributions in (ii).
• Annals of Applied Probability, 18 (2008), 1, 259-287
© Sergiu Hart
Uncertainty in Measurements - UCalgary Chemistry Textbook
Every measurement you take as a scientist includes some unavoidable uncertainty (also sometimes called “error”, even though no mistake was made). We can see this easily by imagining some of the
measurements we commonly do in lab.
If you place a quarter on a standard electronic balance, you may obtain a reading of 6.72 g. The digits 6 and 7 are certain, and the 2 indicates that the mass of the quarter is likely between 6.71
and 6.73 grams. If you re-weighed the quarter, you might measure 6.71 g or 6.73 g as often as 6.72 g. These fluctuations are caused by variables like random fluctuations in the electronics, centering
of the quarter on the balance pan, and breezes in the room (see Sources of Error for more discussion).
The quarter weighs about 6.72 grams, with a small uncertainty in the measurement of ± 0.01 gram. If the coin is weighed on a more sensitive balance, the mass might be 6.723 g. This means its mass
lies between 6.722 and 6.724 grams, an uncertainty of 0.001 gram. Every measurement has some uncertainty, which depends on the device used (and the user’s ability).
To measure the volume of liquid in a graduated cylinder, you should make a reading at the bottom of the meniscus, the lowest point on the curved surface of the liquid.
To measure the volume of liquid in this graduated cylinder, you must mentally subdivide the distance between the 21 and 22 mL marks into tenths of a milliliter, and then make a reading (estimate) at
the bottom of the meniscus.
In the illustration above, the bottom of the meniscus in this case clearly lies between the 21 and 22 markings, meaning the liquid volume is certainly greater than 21 mL but less than 22 mL. The
meniscus appears to be a bit closer to the 22-mL mark than to the 21-mL mark, and so a reasonable estimate of the liquid’s volume would be 21.6 mL. In the number 21.6, then, the digits 2 and 1 are
certain, but the 6 is an estimate – another person might record this volume as 21.5 or 21.7 mL. Note that it would be pointless to attempt to estimate a digit for the hundredths place, given that the
tenths-place digit is uncertain. Writing 21.62 mL when the volume range is expected to be 21.5-21.7 mL misrepresents how well we actually know this volume. (Consider: if the road sign says it is 190
km from Calgary to Brooks, your car’s odometer measures to 0.1 km, and your passenger says “actually, it’s 190.02 km”, is that difference relevant?)
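The rule illustrated above (report no digits beyond the uncertain one) can be automated. The helper below is a generic sketch, not part of this textbook, and it assumes the uncertainty is stated to one significant figure.

```python
# Round a measured value so its last reported digit matches the decimal
# place of the uncertainty. Assumes the uncertainty has one significant
# figure; generic illustration, not a textbook-defined function.
import math

def report(value, uncertainty):
    """Format `value ± uncertainty` with the value rounded to the
    uncertainty's decimal place."""
    # Decimal place of the uncertainty's single significant figure:
    place = -int(math.floor(math.log10(abs(uncertainty))))
    v = round(value, place)
    u = round(uncertainty, place)
    if place > 0:
        return f"{v:.{place}f} ± {u:.{place}f}"
    return f"{v:.0f} ± {u:.0f}"

print(report(6.723, 0.001))    # 6.723 ± 0.001
print(report(21.62, 0.1))      # 21.6 ± 0.1 -- the hundredths digit is dropped
```

Applied to the graduated-cylinder example, the helper drops the meaningless hundredths digit automatically.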
Our ability to take correct measurements (and the small amount of uncertainty always present) is reflected in the accuracy and precision of the reported values, discussed in the next page.
The Exception: Exact Numbers
The exception to the rule that all measurements have some uncertainty are numbers that are counted or defined rather than measured.
Counted numbers are exact: you have 12 eggs in a dozen, never 12.02. If you took 3 measurements, you took exactly 3, not 2.5. When reading these values, there is no question that you mean exactly the
number reported.
Defined numbers are forced to be exact, for example 1 kg is defined as exactly 1000 g (not 999.9). All unit conversions are defined in this way. Some physical constants are as well – for example the
speed of light in vacuum is defined as exactly 299 792 458 m/s. When we use the full version of this value, it is an exact number (though we often round it off to $3.00 \times 10^{8}$ m/s, which is
then no longer the exact number – it is an approximation).
MSc Student Javier Almonacid Wins 2019 SIAM Conference Poster Contest
The 2019 SIAM PNW Conference was hosted by Seattle University in Seattle Washington.
The poster contest had a total of twenty submissions, and three were given the "Best Poster Award".
MSc Math graduate student Javier Almonacid claimed one of those three winning spots. His poster titled, "High-order discretizations of a linear wave equation", touches on a portion of his thesis
work. Javier notes: "Part of my thesis work consists in the numerical study of energy attractors in the solutions to a nonlocal linear wave equation, and what I presented in this poster constitutes
the first step towards achieving this objective: developing a numerical method that approximates these solutions."
Congratulations Javier!
The linear-array conjecture in communication complexity is false
A linear array network consists of k + 1 processors P[0], P[1], ..., P[k] with links only between P[i] and P[i+1] (0 ≤ i < k). It is required to compute some boolean function f(x,y) in this
network, where initially x is stored at P[0] and y is stored at P[k]. Let D[k](f) be the (total) number of bits that must be exchanged to compute f in the worst case. Clearly, D[k](f) ≤ k·D(f), where D(f)
is the standard two-party communication complexity of f. Tiwari proved that for almost all functions D[k](f) ≥ k(D(f) - O(1)) and conjectured that this is true for all functions. In this paper we
disprove Tiwari's conjecture, by exhibiting an infinite family of functions for which D[k](f) is essentially at most (3/4)·k·D(f). Our construction also leads to progress on another major problem in
this area: It is easy to bound the two-party communication complexity of any function, given the least number of monochromatic rectangles in any partition of the input space. How tight are such
bounds? We exhibit certain functions for which the (two-party) communication complexity is twice as large as the best lower bound obtainable this way.
Bibliographical note
Funding Information:
Mathematics Subject Classification (1991): 68Q22, 68Q10, 94A05. Part of this research was done while the authors were at ICSI, Berkeley. An early version of this paper appeared in the proceedings of
the 28th ACM Symp. on Theory of Computing (STOC), pp. 1-10, May 1996. Supported in part by a grant from the Israeli Academy of Sciences.
Sorting and Merging Single Linked List
So far, in this 3-part series about linked lists in Python, we started our discussion about the linked list. We saw what the linked list is along with its advantages and disadvantages. We also
studied some of the most commonly used linked list methods such as traversal, insertion, deletion, searching, and counting an element. Finally, we saw how to reverse a linked list.
In this article, we will continue from where we left in the last article and will see how to sort a linked list using bubble and merge sort, and how to merge two sorted linked lists.
Before we continue, it is imperative to mention that you should create the Node and LinkedList classes that we created in the last article.
Sorting a Linked List using Bubble Sort
There are two ways to sort a linked list using bubble sort:
1. Exchanging data between nodes
2. Modifying the links between nodes
In this section, we will see how both these approaches work. We will use the bubble sort algorithm to first sort the linked list by changing the data, and then we will see how we can use bubble sort
to change the links in order to sort the linked list.
Sorting Linked List by Exchanging Data
To sort a linked list by exchanging data, we need to declare three variables p, q, and end. The variable p will be initialized with the start node, while the end will be set to None.
Note: It is important to remember that to sort the list with n elements using bubble sort, you need n-1 iterations.
To implement bubble sort, we need two while loops. The outer while loop executes until the value of the variable end is equal to self.start_node.
The inner while loop executes until p becomes equal to the end variable. Inside the outer while loop, p is set to self.start_node, the first node. Inside the inner while loop, q is set to p.ref, the node next to p. The items of p and q are then compared; if p's item is greater than q's item, the two items are swapped, and then p moves to p.ref, the next node. Finally, end is assigned the value of p. This process continues until the linked list is sorted.
Let's understand this process with the help of an example. Suppose we have the following list: 8 → 7 → 1 → 6 → 9.
Let's implement our algorithm to sort the list. We'll see what will happen during each iteration.
The purpose of the bubble sort is that during each iteration, the largest value should be pushed to the end, hence at the end of all iterations, the list will automatically be sorted.
Before the loop executes, the value of end is set to None.
In the first iteration, p will be set to 8 and q to 7. Since p is greater than q, the items are swapped and p becomes p.ref. At this point the list is 7 → 8 → 1 → 6 → 9.
Since p is not equal to end, the loop continues: now p is 8 and q is 1. Since p is again greater than q, the items are swapped and p again becomes p.ref, giving 7 → 1 → 8 → 6 → 9.
Again p is not equal to end, so the loop continues: now p is 8 and q is 6. Since p is greater than q, the items are swapped once more and p becomes p.ref. The list is now 7 → 1 → 6 → 8 → 9.
Again p is not equal to end, so the loop continues: now p is 8 and q is 9. Here p is not greater than q, so the items are not swapped and p becomes p.ref. At this point the reference of p points to None, and end also points to None. Hence the inner while loop breaks, and end is set to p.
In the next set of iterations, the loop will execute until 8, since 9 is already at the end. The process continues until the list is completely sorted.
The Python code for sorting the linked list using bubble sort by exchanging the data is as follows:
def bub_sort_datachange(self):
    end = None
    while end != self.start_node:
        p = self.start_node
        while p.ref != end:
            q = p.ref
            if p.item > q.item:
                p.item, q.item = q.item, p.item
            p = p.ref
        end = p
Add the bub_sort_datachange() method to the LinkedList class that you created in the last article.
Once you add the method to the linked list, create any set of nodes using the make_new_list() method and then use bub_sort_datachange() to sort the list. You should see the sorted list when you
execute the traverse_list() method.
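Since the Node and LinkedList classes live in the previous article, here is a minimal, self-contained sketch you can run directly. The attribute names (item, ref, start_node) match the ones used above; the append() and to_list() helpers are stand-ins for the article's make_new_list() and traverse_list() methods, not part of its API:

```python
class Node:
    def __init__(self, item):
        self.item = item
        self.ref = None  # link to the next node


class LinkedList:
    def __init__(self, values=()):
        self.start_node = None
        for v in values:
            self.append(v)

    def append(self, item):
        # Stand-in for make_new_list(): add one node at the tail.
        new_node = Node(item)
        if self.start_node is None:
            self.start_node = new_node
            return
        n = self.start_node
        while n.ref is not None:
            n = n.ref
        n.ref = new_node

    def to_list(self):
        # Stand-in for traverse_list(): collect items into a Python list.
        out, n = [], self.start_node
        while n is not None:
            out.append(n.item)
            n = n.ref
        return out

    def bub_sort_datachange(self):
        # Bubble sort by swapping node data, exactly as described above.
        end = None
        while end != self.start_node:
            p = self.start_node
            while p.ref != end:
                q = p.ref
                if p.item > q.item:
                    p.item, q.item = q.item, p.item
                p = p.ref
            end = p


lst = LinkedList([8, 7, 1, 6, 9])
lst.bub_sort_datachange()
print(lst.to_list())  # [1, 6, 7, 8, 9]
```

Note that an empty or single-node list falls straight through both loops, so no special casing is needed.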
Sorting Linked Lists by Modifying Links
Bubble sort can also be used to sort a linked list by modifying the links instead of exchanging data. The process is quite similar to sorting by exchanging data; however, in this case we have an additional variable r that always corresponds to the node previous to p.
Let's take a simple example of how we will swap two nodes by modifying links. Suppose we have a linked list with the following items:
And we want to swap 65 and 35. At this point p corresponds to node 65 and q to node 35. The variable r corresponds to node 45 (the node previous to p). Now if p's item is greater than q's item, which is the case here, p.ref is set to q.ref and q.ref is set to p. Similarly, r.ref is set to q. This swaps nodes 65 and 35.
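The rewiring just described can be checked in isolation. This sketch builds only the three nodes 45 → 65 → 35 by hand, with a minimal Node class standing in for the one from the previous article:

```python
class Node:
    # Minimal stand-in for the Node class from the previous article.
    def __init__(self, item):
        self.item = item
        self.ref = None


# Build the chain 45 -> 65 -> 35 by hand.
r = Node(45)
p = Node(65)
q = Node(35)
r.ref = p
p.ref = q

# p.item > q.item, so rewire exactly as described above:
p.ref = q.ref  # p now points past q (here: to None)
q.ref = p      # q points back to p
r.ref = q      # the node before the pair now points to q

# Walk the chain starting at r: it is now 45 -> 35 -> 65.
items = []
n = r
while n is not None:
    items.append(n.item)
    n = n.ref
print(items)  # [45, 35, 65]
```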
The following method implements the bubble sorting for the linked list by modifying links:
def bub_sort_linkchange(self):
    end = None
    while end != self.start_node:
        r = p = self.start_node
        while p.ref != end:
            q = p.ref
            if p.item > q.item:
                p.ref = q.ref
                q.ref = p
                if p != self.start_node:
                    r.ref = q
                else:
                    self.start_node = q
                p, q = q, p
            r = p
            p = p.ref
        end = p
Once you add the method to the linked list, create any set of nodes using the make_new_list() method and then use the bub_sort_linkchange() to sort the list. You should see the sorted list when you
execute the traverse_list() method.
Merging Sorted Linked List
In this section, we will see how we can merge two sorted linked lists so that the resulting linked list is also sorted. There are two approaches to achieve this. We can create a new linked list that contains the merged result, or we can simply change the links of the two linked lists to join them. In the second case, we do not have to create a new linked list.
Let's first see how we can merge two linked lists by creating a new list.
Merging Sorted Linked Lists by Creating a New List
Let's first dry-run the algorithm to see how we can merge two sorted linked lists with the help of a new list.
Suppose we have the following two sorted linked lists: list1 = 10 → 45 → 65 and list2 = 5 → 15 → 35 → 68.
These are the two lists we want to merge. The algorithm is straightforward: all we need are three variables, p, q, and em, and an empty list newlist.
At the beginning of the algorithm, p will point to the first element of list1 whereas q will point to the first element of list2. The variable em will be empty. At the start, we will have the following values:
p = 10
q = 5
em = None
newlist = None
Next, we will compare the first element of the list1 with the first element of list2, in other words, we will compare the values of p and q and the smaller value will be stored in the variable em
which will become the first node of the new list. The value of em will be added to the end of the newlist.
After the first comparison, we will have the following values:
p = 10
q = 15
em = 5
newlist = 5
Since q was less than p, we stored the value of q in em and moved q one index to the right. In the second pass, we will have the following values:
p = 45
q = 15
em = 10
newlist = 5, 10
Here since p was smaller, we added the value of p to newlist, set em to p, and then moved p one index to the right:
p = 45
q = 35
em = 15
newlist = 5, 10, 15
Similarly, in the next iteration:
p = 45
q = 68
em = 35
newlist = 5, 10, 15, 35
In the next iteration, p will again be smaller than q, hence:
p = 65
q = 68
em = 45
newlist = 5, 10, 15, 35, 45
And, finally:
p = None
q = 68
em = 65
newlist = 5, 10, 15, 35, 45, 65
When one of the lists becomes None, all the elements of the second list are added at the end of the new list. Therefore, the final list will be:
p = None
q = None
em = 68
newlist = 5, 10, 15, 35, 45, 65, 68
Let's put all this into code by creating two methods - merge_helper() and merge_by_newlist():
def merge_helper(self, list2):
    merged_list = LinkedList()
    merged_list.start_node = self.merge_by_newlist(self.start_node, list2.start_node)
    return merged_list

def merge_by_newlist(self, p, q):
    if p.item <= q.item:
        startNode = Node(p.item)
        p = p.ref
    else:
        startNode = Node(q.item)
        q = q.ref
    em = startNode
    while p is not None and q is not None:
        if p.item <= q.item:
            em.ref = Node(p.item)
            p = p.ref
        else:
            em.ref = Node(q.item)
            q = q.ref
        em = em.ref
    while p is not None:
        em.ref = Node(p.item)
        p = p.ref
        em = em.ref
    while q is not None:
        em.ref = Node(q.item)
        q = q.ref
        em = em.ref
    return startNode
The merge_helper() method takes a linked list as a parameter and then passes self, which is a linked list itself, along with the linked list received as a parameter, to the merge_by_newlist() method.
The merge_by_newlist() method merges the two linked lists by creating a new linked list and returns its start node. Add these two methods to the LinkedList class. Create two new linked lists, sort them using the bub_sort_datachange() or bub_sort_linkchange() methods that you created in the last section, and then use merge_helper() to see if you can merge two sorted linked lists or not.
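To try the merge logic without the full LinkedList class, here is the same algorithm written as free functions over head nodes, using the lists from the walkthrough. The from_values() and to_list() helpers are illustrative only, not part of the article's API:

```python
class Node:
    def __init__(self, item):
        self.item = item
        self.ref = None


def from_values(values):
    # Build a chain of nodes from a Python list and return its head.
    head = tail = None
    for v in values:
        n = Node(v)
        if head is None:
            head = tail = n
        else:
            tail.ref = n
            tail = n
    return head


def merge_by_newlist(p, q):
    # Same logic as the method above, creating new nodes for the result.
    if p.item <= q.item:
        start = Node(p.item); p = p.ref
    else:
        start = Node(q.item); q = q.ref
    em = start
    while p is not None and q is not None:
        if p.item <= q.item:
            em.ref = Node(p.item); p = p.ref
        else:
            em.ref = Node(q.item); q = q.ref
        em = em.ref
    while p is not None:
        em.ref = Node(p.item); p = p.ref; em = em.ref
    while q is not None:
        em.ref = Node(q.item); q = q.ref; em = em.ref
    return start


def to_list(head):
    out = []
    while head is not None:
        out.append(head.item)
        head = head.ref
    return out


merged = merge_by_newlist(from_values([10, 45, 65]), from_values([5, 15, 35, 68]))
print(to_list(merged))  # [5, 10, 15, 35, 45, 65, 68]
```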
Merging Sorted Linked Lists by Rearranging Links
In this approach, a new linked list is not used to store the merger of two sorted linked lists. Rather, the links of the two linked lists are modified in such a way that the two linked lists are
merged in a sorted manner.
Let's see a simple example of how we can do this. Suppose we have the same two lists: list1 = 10 → 45 → 65 and list2 = 5 → 15 → 35 → 68.
We want to merge them in a sorted manner by rearranging the links. To do so we need variables p, q, and em. Initially, they will have the following values:
p = 10
q = 5
em = None
newlist = None
Next, we will compare the first element of list1 with the first element of list2, in other words, we will compare the values of p and q and the smaller value will be stored in the variable em which
will become the first node of the new list.
After the first comparison, we will have the following values:
p = 10
q = 15
start = 5
em = start
After the first iteration, since q is less than p, the start node will point towards q and q will become q.ref. The em will be equal to the start. The em will always refer to the newly inserted node
in the merged list:
p = 45
q = 15
em = 10
Here, since p was smaller than the q, the variable em now points towards the original value of p and p becomes p.ref:
p = 45
q = 35
em = 15
Since q was smaller than p, em points towards q and q becomes q.ref:
p = 45
q = 68
em = 35
Similarly em here points towards q:
p = 65
q = 68
em = 45
newlist = 5, 10, 15, 35, 45
And here em points towards p:
p = None
q = 68
em = 65
newlist = 5, 10, 15, 35, 45, 65
When one of the lists becomes None, the elements from the second list are simply added at the end:
p = None
q = None
em = 68
newlist = 5, 10, 15, 35, 45, 65, 68
The script that contains methods for merging two lists without creating a new list is as follows:
def merge_helper2(self, list2):
    merged_list = LinkedList()
    merged_list.start_node = self.merge_by_linkChange(self.start_node, list2.start_node)
    return merged_list

def merge_by_linkChange(self, p, q):
    # Rearranges the existing nodes' links; no new nodes are created.
    if p.item <= q.item:
        startNode = p
        p = p.ref
    else:
        startNode = q
        q = q.ref
    em = startNode
    while p is not None and q is not None:
        if p.item <= q.item:
            em.ref = p
            em = em.ref
            p = p.ref
        else:
            em.ref = q
            em = em.ref
            q = q.ref
    if p is None:
        em.ref = q
    else:
        em.ref = p
    return startNode
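As a standalone sketch (again with illustrative helpers rather than the article's LinkedList API), the link-rearranging merge can be checked on the walkthrough lists. Note that the head of the result is the very same node object that held 5 in list2, confirming that no new nodes are created:

```python
class Node:
    def __init__(self, item):
        self.item = item
        self.ref = None


def from_values(values):
    head = tail = None
    for v in values:
        n = Node(v)
        if head is None:
            head = tail = n
        else:
            tail.ref = n
            tail = n
    return head


def merge_by_linkchange(p, q):
    # Rearranges the existing nodes' links; no new nodes are created.
    if p.item <= q.item:
        start = p; p = p.ref
    else:
        start = q; q = q.ref
    em = start
    while p is not None and q is not None:
        if p.item <= q.item:
            em.ref = p; em = em.ref; p = p.ref
        else:
            em.ref = q; em = em.ref; q = q.ref
    em.ref = q if p is None else p
    return start


a = from_values([10, 45, 65])
b = from_values([5, 15, 35, 68])
first_b = b  # remember the original head node of list2

merged = merge_by_linkchange(a, b)

items, n = [], merged
while n is not None:
    items.append(n.item)
    n = n.ref
print(items)              # [5, 10, 15, 35, 45, 65, 68]
print(merged is first_b)  # True: the original node 5 heads the result
```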
In the script above we have two methods: merge_helper2() and merge_by_linkChange(). The first method, merge_helper2(), takes a linked list as a parameter and then passes self, which is a linked list itself, along with the linked list received as a parameter, to merge_by_linkChange(), which merges the two linked lists by modifying the links and returns the start node of the merged list.
Add these two methods to the LinkedList class. Create two new linked lists, sort them using the bub_sort_datachange() or bub_sort_linkchange() methods that you created in the last section, and then use merge_helper2() to see if the two sorted linked lists merge correctly. Let's see this process in action:
new_linked_list1 = LinkedList()
new_linked_list1.make_new_list()
The script will ask you for the number of nodes to enter. Enter as many nodes as you like and then add values for each node as shown below:
How many nodes do you want to create: 4
Enter the value for the node:12
Enter the value for the node:45
Enter the value for the node:32
Enter the value for the node:61
Next, create another linked list repeating the above process:
new_linked_list2 = LinkedList()
new_linked_list2.make_new_list()
Next, add a few dummy nodes with the help of the following script:
How many nodes do you want to create: 4
Enter the value for the node:36
Enter the value for the node:41
Enter the value for the node:25
Enter the value for the node:9
The next step is to sort both the lists. Execute the following script:
new_linked_list1.bub_sort_datachange()
new_linked_list2.bub_sort_datachange()
Finally, the following script merges the two linked lists:
list3 = new_linked_list1.merge_helper2(new_linked_list2)
To see if the lists have actually been merged, execute the following script:
list3.traverse_list()
The output is the merged, sorted list: 9 12 25 32 36 41 45 61.
In this article, we continued from where we left off in the previous article. We saw how we can sort linked lists by exchanging data and by modifying links. Finally, we also studied different ways of merging two sorted linked lists.
In the next article, we'll take a look at how to construct and perform operations on doubly linked lists.
Yards to Kilometers
Yards to Kilometers Converter
Switch to Kilometers to Yards Converter
How to use this Yards to Kilometers Converter
Follow these steps to convert given length from the units of Yards to the units of Kilometers.
1. Enter the input Yards value in the text field.
2. The calculator converts the given Yards into Kilometers in realtime, using the conversion formula, and displays it under the Kilometers label. You do not need to click any button. If the input changes, the Kilometers value is re-calculated, just like that.
3. You may copy the resulting Kilometers value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on button present below the input field.
What is the Formula to convert Yards to Kilometers?
The formula to convert given length from Yards to Kilometers is:
Length[(Kilometers)] = Length[(Yards)] × 0.0009144
Substitute the given value of length in yards, i.e., Length[(Yards)], in the above formula and simplify the right-hand side. The resulting value is the length in kilometers, i.e., Length[(Kilometers)].
Calculation will be done after you enter a valid input.
Consider that a high-end golf course has a fairway measuring 450 yards.
Convert this distance from yards to Kilometers.
The length in yards is:
Length[(Yards)] = 450
The formula to convert length from yards to kilometers is:
Length[(Kilometers)] = Length[(Yards)] × 0.0009144
Substitute the given length Length[(Yards)] = 450 in the above formula.
Length[(Kilometers)] = 450 × 0.0009144
Length[(Kilometers)] = 0.4115
Final Answer:
Therefore, 450 yd is equal to 0.4115 km.
The length is 0.4115 km, in kilometers.
Consider that a luxury mansion has a backyard that extends 200 yards.
Convert this length from yards to Kilometers.
The length in yards is:
Length[(Yards)] = 200
The formula to convert length from yards to kilometers is:
Length[(Kilometers)] = Length[(Yards)] × 0.0009144
Substitute the given length Length[(Yards)] = 200 in the above formula.
Length[(Kilometers)] = 200 × 0.0009144
Length[(Kilometers)] = 0.1829
Final Answer:
Therefore, 200 yd is equal to 0.1829 km.
The length is 0.1829 km, in kilometers.
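The formula translates directly into code. This short Python sketch (the function names are illustrative, not part of the converter page) reproduces both worked examples above:

```python
YD_TO_KM = 0.0009144  # kilometers per yard (1 yd = 0.9144 m, 1 km = 1000 m)


def yards_to_km(yards):
    """Convert a length in yards to kilometers."""
    return yards * YD_TO_KM


def km_to_yards(km):
    """Convert a length in kilometers to yards."""
    return km / YD_TO_KM


print(round(yards_to_km(450), 4))  # 0.4115 (the fairway example)
print(round(yards_to_km(200), 4))  # 0.1829 (the backyard example)
```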
Yards to Kilometers Conversion Table
The following table gives some of the most used conversions from Yards to Kilometers.
Yards (yd) Kilometers (km)
0 yd 0 km
1 yd 0.0009144 km
2 yd 0.0018288 km
3 yd 0.0027432 km
4 yd 0.0036576 km
5 yd 0.004572 km
6 yd 0.0054864 km
7 yd 0.0064008 km
8 yd 0.0073152 km
9 yd 0.0082296 km
10 yd 0.009144 km
20 yd 0.018288 km
50 yd 0.04572 km
100 yd 0.09144 km
1000 yd 0.9144 km
10000 yd 9.144 km
100000 yd 91.44 km
A yard (symbol: yd) is a unit of length commonly used in the United States, the United Kingdom, and Canada. One yard is equal to 0.9144 meters.
The yard originated from various units used in medieval England. Its current definition is based on the international agreement of 1959, which standardized it to exactly 0.9144 meters.
Yards are often used to measure distances in sports fields, textiles, and land. Despite the global shift to the metric system, the yard remains in use in these countries.
A kilometer (km) is a unit of length in the International System of Units (SI), approximately equal to 0.6214 miles. One kilometer is one thousand meters.
The prefix "kilo-" means one thousand. A kilometer is 1000 times the distance light travels in 1/299,792,458 seconds (the SI definition of the meter). That definition may change, but a kilometer will always be one thousand meters.
Kilometers are used to measure distances on land in most countries. However, the United States and the United Kingdom still often use miles. The UK has adopted the metric system, but miles are still
used on road signs.
Frequently Asked Questions (FAQs)
1. How do I convert yards to kilometers?
Multiply the number of yards by 0.0009144 to get the equivalent in kilometers. For example, 1,000 yards × 0.0009144 = 0.9144 kilometers.
2. What is the formula for converting yards to kilometers?
The formula is: kilometers = yards × 0.0009144.
3. How many kilometers are in a yard?
There are 0.0009144 kilometers in 1 yard.
4. Is 1 yard equal to 0.0009144 kilometers?
Yes, 1 yard is equal to 0.0009144 kilometers.
5. How do I convert kilometers to yards?
Divide the number of kilometers by 0.0009144 to get the equivalent in yards. For example, 1 kilometer ÷ 0.0009144 ≈ 1,093.6133 yards.
6. What is the difference between yards and kilometers?
Yards are a unit of length in the imperial system, while kilometers are used in the metric system. One yard equals 0.0009144 kilometers.
7. How many kilometers are there in 500 yards?
500 yards × 0.0009144 = 0.4572 kilometers.
8. How many kilometers are in 1,500 yards?
1,500 yards × 0.0009144 = 1.3716 kilometers.
9. How do I use this yards to kilometers converter?
Enter the value in yards that you want to convert, and the converter will automatically display the equivalent in kilometers.
10. Why do we multiply by 0.0009144 to convert yards to kilometers?
Because there are 0.9144 meters in a yard and 1,000 meters in a kilometer, so 0.9144 meters divided by 1,000 equals 0.0009144 kilometers.
11. What is the SI unit of length?
The SI unit of length is the meter; kilometers are multiples of meters.
12. Are yards shorter than kilometers?
Yes, yards are much shorter than kilometers. One yard equals 0.0009144 kilometers.
13. How many kilometers are in 2,000 yards?
2,000 yards × 0.0009144 = 1.8288 kilometers.
14. How to convert 3,500 yards to kilometers?
3,500 yards × 0.0009144 = 3.2004 kilometers.
15. Is 1 kilometer equal to 1,093.6133 yards?
Yes, 1 kilometer is approximately equal to 1,093.6133 yards.
Coordinate/Also denoted as
It is usual to use the subscript technique to denote the coordinates where $n$ is large or unspecified:
$\tuple {x_1, x_2, \ldots, x_n}$
However, note that some texts (often in the fields of physics and mechanics) prefer to use superscripts:
$\tuple {x^1, x^2, \ldots, x^n}$
While this notation is documented here, its use is not endorsed by $\mathsf{Pr} \infty \mathsf{fWiki}$ because:
there exists the all too likely subsequent confusion with notation for powers
one of the philosophical tenets of $\mathsf{Pr} \infty \mathsf{fWiki}$ is to present a system of notation that is as completely consistent as possible.
FilterPy is a Python library that implements a number of Bayesian filters, most notably Kalman filters. I am writing it in conjunction with my book Kalman and Bayesian Filters in Python [1], a free book written using IPython Notebook, hosted on GitHub, and readable via nbviewer. However, it implements a wide variety of functionality that is not described in the book.
As such this library has a strong pedagogical flavor. It is rare that I choose the most efficient way to calculate something unless it does not obscure exposition of the concepts of the filtering being done. I will always opt for clarity over speed. I do not mean to imply that this is a toy; I use it all of the time in my job.
I mainly develop in Python 3.x, but this should support both Python 2.x and 3.x flavors. At the moment I can not tell you the lowest required version; I tend to develop on the bleeding edge of the
Python releases. I am happy to receive bug reports if it does not work with older versions, but testing backwards compatibility is not a high priority at the moment. As the package matures I will
shift my focus in that direction.
FilterPy requires Numpy [2] and SciPy [3] to work. The tests and examples also use matplotlib [4]. For testing I use py.test [5].
Installation with pip (recommended)¶
FilterPy is available on github (https://github.com/rlabbe/filterpy). However, it is also hosted on PyPi, and unless you want to be on the bleeding edge of development I recommend you get it from
there. To install from the command line, merely type:
To test the installation, from a python REPL type:
>>> import filterpy
>>> filterpy.__version__
and it should display the version number that you installed.
Installation with GitHub¶
You can get the very latest code by getting it from GitHub and then performing the installation. I will say I am not following particularly stringent version control discipline. I mostly stay on
master and commit things that are not entirely ready for prime-time, mostly because I’m the only one developing. I do not promise that any check in that is not tagged with a version number is usable.
$ git clone --depth=1 https://github.com/rlabbe/filterpy.git
$ cd filterpy
$ python setup.py install
–depth=1 just gets you the last few revisions that I made, which keeps the repo small. If you want the entire repo leave out the depth parameter, or fork the repo if you plan to modify it.
There are several submodules, each listed below. But in general you will need to import the classes and/or functions you need from the correct submodule, construct the objects, and then execute your code. Something like:
>>> from filterpy.kalman import KalmanFilter
>>> kf = KalmanFilter(dim_x=3, dim_z=1)
I try to provide examples in the help for each class, but this documentation needs a lot of work. For now I refer you to my book mentioned above if the documentation is not adequate. Better yet,
write an issue on the GitHub issue tracker. I will respond with an answer as soon as I am online and available (minutes to a day, normally), and then revise the documentation. I shouldn’t have to be
prodded like this, but life is limited. So prod.
Raise issues here: https://github.com/rlabbe/filterpy/issues
FilterPy’s Naming Conventions¶
A word on variable names. I am an advocate for descriptive variable names. In the Kalman filter literature the measurement noise covariance matrix is called R. The name R is not descriptive. I could
reasonably call it measurement_noise_covariance, and I’ve seen libraries do that. I’ve chosen not to.
In the end, Kalman filtering is math. To write a Kalman filter you are going to start by sitting down with a piece of paper and doing math. You will be writing and solving normal algebraic equations.
Every Kalman filter text and source on the web uses the same equations. You cannot read about the Kalman filter without seeing this equation
\[\dot{\mathbf{x}} = \mathbf{Fx} + \mathbf{Gu} + w\]
One of my goals is to bring you to the point where you can read the original literature on Kalman filtering. For nontrivial problems the difficulty is not the implementation of the equations, but
learning how to set up the equations so they solve your problem. In other words, every Kalman filter implements \(\dot{\mathbf{x}} = \mathbf{Fx} + \mathbf{Gu} + w\); the difficult part is figuring
out what to put in the matrices \(\mathbf{F}\) and \(\mathbf{G}\) to make your filter work for your problem. Vast amounts of work have been done to apply Kalman filters in various domains, and it
would be tragic to be unable to avail yourself of this research.
So, like it or not you will need to learn that \(\mathbf{F}\) is the state transition matrix and that \(\mathbf{R}\) is the measurement noise covariance. Once you know that the code will become
readable, and until then Kalman filter math, and all publications and web articles on Kalman filters will be inaccessible to you.
Finally, I think that mathematical programming is somewhat different than regular programming; what is readable in one domain is not readable in another. q = x + m is opaque in a normal context. On
the other hand, x = (.5*a)*t**2 + v_0*t + x_0 is to me the most readable way to program the Newtonian distance equation:
\[x = \frac{1}{2}at^2 + v_0 t + x_0\]
We could write it as
distance = (.5 * constant_acceleration) * time_delta**2 +
initial_velocity * time_delta + initial_distance
but I feel that obscures readability. This is debatable for this one equation; but most mathematical programs, and certainly Kalman filters, use systems of equations. I can most easily follow the
code, and ensure that it does not have bugs, when it reads as close to the math as possible. Consider this equation from the Kalman filter:
\[\mathbf{K} = \mathbf{PH}^\mathsf{T}[\mathbf{HPH}^\mathsf{T} + \mathbf{R}]^{-1}\]
Python code for this would be
K = dot(P, H.T).dot(inv(dot(H, P).dot(H.T) + R))
It’s already a bit hard to read because of the dot function calls (required because Python does not yet support an operator for matrix multiplication). But compare this to:
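To make the comparison concrete, here is a minimal, self-contained NumPy sketch of the gain equation with toy matrices (the values are illustrative only, not from any particular filter). Note that newer Python versions (3.5+) do provide the @ matrix-multiplication operator, which reads even closer to the math:

```python
import numpy as np
from numpy.linalg import inv

# Toy 2-state, 1-measurement example; the numbers are illustrative only.
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # prior state covariance
H = np.array([[1.0, 0.0]])  # measurement function
R = np.array([[0.5]])       # measurement noise covariance

# K = P H^T [H P H^T + R]^{-1}, written with dot() as in the text...
K_dot = np.dot(P, H.T).dot(inv(np.dot(H, P).dot(H.T) + R))

# ...and with the @ operator, which reads closer to the math.
K_at = P @ H.T @ inv(H @ P @ H.T + R)

print(np.allclose(K_dot, K_at))  # True
```

For these values H P H^T + R is the 1x1 matrix [[1.5]], so the gain works out to [[2/3], [0]].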
kalman_gain = (
dot(apriori_state_covariance, measurement_function_transpose).dot(
inv(dot(measurement_function, apriori_state_covariance).dot(
measurement_function_transpose) + measurement_noise_covariance)))
which I adapted from a popular library. I grant you this version has more context, but I cannot glance at this and see what math it is implementing. In particular, the linear algebra \(\mathbf{HPH}^\
mathsf{T}\) is doing something very specific - multiplying \(\mathbf{P}\) by \(\mathbf{H}\) in a way that converts \(\mathbf{P}\) from world space to measurement space (we’ll learn what that means).
It is nearly impossible to see that the Kalman gain (K) is just a ratio of one number divided by a second number which has been converted to a different basis. This statement may not convey a lot of
information to you before reading the book, but I assure you that \(\mathbf{K} = \mathbf{PH}^\mathsf{T}[\mathbf{HPH}^\mathsf{T} + \mathbf{R}]^{-1}\) is saying something very succinctly. There are two
key pieces of information here - we are finding a ratio, and we are doing it in measurement space. I can see that in my first Python line, I cannot see that in the second line. If you want a
counter-argument, my version obscures the information that \(\mathbf{P}\) is in this context is a prior .
These comments apply to library code. Calling code should use names like sensor_noise, or gps_sensor_noise, not R. Math code should read like math, and interface or glue code should read like normal
code. Context is important.
I will not win this argument, and some people will not agree with my naming choices. I will finish by stating, very truthfully, that I made two mistakes the first time I typed the second version and
it took me awhile to find it. In any case, I aim for using the mathematical symbol names whenever possible, coupled with readable class and function names. So, it is KalmanFilter.P, not KF.P.
Unless it is deeply private (you don't want someone else seeing proprietary code, for example), please ask questions and such on the issue tracker, not by email. This is solely so that everyone gets to see the answer. "Issue" doesn't mean bug.
The classes in this submodule implement the various Kalman filters. There is also support for smoother functions. The smoothers are methods of the classes. For example, the KalmanFilter class
contains rts_smoother to perform Rauch-Tung-Striebal smoothing.
Linear Kalman Filters¶
Implements various Kalman filters using the linear equations form of the filter.
Unscented Kalman Filter¶
These modules are used to implement the Unscented Kalman filter.
Contains various useful functions that support the filtering classes and functions. Most useful are functions to compute the process noise matrix Q. It also implements the Van Loan discretization of
a linear differential equation.
Contains statistical functions useful for Kalman filtering such as multivariate Gaussian multiplication, computing the log-likelihood, NESS, and mahalanobis distance, along with plotting routines to
plot multivariate Gaussians CDFs, PDFs, and covariance ellipses.
Routines for Markov Chain Monte Carlo (MCMC) computation, mainly for particle filtering.
Routines for performing discrete Bayes filtering.
These classes implement various g-h filters. The functions are helpers that provide settings for the g and h parameters for various common filters.
Implements a polynomial fading memory filter. You can achieve the same results, and more, using the KalmanFilter class. However, some books use this form of the fading memory filter, so it is here
for completeness. I suppose some would also find this simpler to use than the standard Kalman filter.
[1] Labbe, Roger. “Kalman and Bayesian Filters in Python”.
github repo:
read online:
PDF version (often lags the two sources above)
[2] NumPy http://www.numpy.org
[3] SciPy http://www.scipy.org
[4] matplotlib http://matplotlib.org/
[5] pytest http://pytest.org/latest/
Almost linear operators and functionals
Let C(M) be the bounded continuous functions on a topological space M. "Almost linear" operators (and functionals) on C(M) are defined. Almost linearity does not imply linearity in general. However, it is shown that if M = [0, 1] then any almost linear operator (or functional) must be linear. Specifically, if (a) f ≥ 0 implies T(f) ≥ 0, (b) T(f + g) = T(f) + T(g) whenever fg = 0, (c) T(f + g) = T(f) + T(g) whenever g is constant, and M = [0, 1], then T is linear. An application is given to convergence of measures.
tags: non-Archimedean geometry
Let \(k\) be a field. Recall that a valuation \(v: k \to \Gamma\), for \(\Gamma\) an ordered Abelian group, is a group homomorphism \(v: k^\times \to \Gamma\) such that \[ v(x + y) \geq \min\{v(x), v(y)\} \] When \(\Gamma = \mathbb{Z}^n\) the valuation is called a rank \(n\) valuation. In classical terminology rank 1 valuations are just ``valuations''. If the homomorphism is partial, then \(v\) is called a semi-valuation.
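As an illustration (mine, not from the post): the \(p\)-adic valuation on the nonzero integers is the standard example of a rank 1 valuation, and the ultrametric inequality above can be spot-checked numerically. The function name `v_p` is ours.

```python
# Illustrative sketch: the p-adic valuation v_p(n) is the exponent of p
# dividing n. It is a rank-1 valuation, and we spot-check the ultrametric
# inequality v(x + y) >= min(v(x), v(y)) on a range of integers.
def v_p(n, p=2):
    """p-adic valuation of a nonzero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

for x in range(1, 50):
    for y in range(1, 50):
        assert v_p(x + y) >= min(v_p(x), v_p(y))

print(v_p(40))  # 40 = 2^3 * 5, so the 2-adic valuation is 3
```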
Let \(X\) be an algebraic variety over a field \(k\). For a fixed valuation \(v_0\) on \(k\), the space of valuations on \(k(X)\) that restrict to \(v_0\) is an interesting object to study. If \(v_0\) is trivial (i.e. \(v(f)=0\) for all \(f \in k^\times\)) then the space of all rank 1 valuations is called the Riemann-Zariski space. It is topologized as follows: basic opens are sets of valuations \(v\) such that \(v(f_i) \geq 0\) for some fixed \(f_1, \ldots, f_n\). Incidentally, RZ space is the inverse limit of all schemes over \(k\) with the given function field.
One can also fix the value group \(\Gamma\) and consider spaces of valuations with values in \(\Gamma\); this way one gets Berkovich spaces (\(\Gamma = \mathbb{R}\)). They are topologized in a way similar to Riemann-Zariski spaces: basic opens are sets of valuations that are non-negative on some finite set of elements of the field. Hrushovski-Loeser spaces (or spaces of stably dominated types) are a way to regard such valuations from a ``semi-algebraic'' point of view. The setting is much more general here: valuations take values in an arbitrary \(\Gamma\) (which embeds into some very big ordered group, which is fixed).
Stably dominated types
If \(p\) is an arbitrary type and \(f: X \to Y\) is a definable map such that \(p\) is supported on \(X\), then, regarding \(p\) as a finitely additive measure on definable sets that takes values in {0,1}, and the map \(f\) being naturally measurable, one can define the pushforward type (= measure) \(f_*(p)\).
A type \(p\) is called stably dominated by a (stable) type \(q\) via a definable map \(f\) if \(p\) is the unique type such that \(f_*(p)=q\). (caveat: this definition is only correct for types
defined over maximally complete fields, full definition glossed over)
One can show that stably dominated types are definable (since they are generated by formulas that are constructed from the definition of the stable — hence, definable — dominating types). Formula definitions of definable stably dominated types allow for a somewhat more explicit description of these types.
Let \(K\) be a valued field with value ring \({\cal O}\), and let \(V\) be a \(K\)-vector space. An \({\cal O}\)-submodule \(M\) is a semi-lattice if it intersects every one-dimensional \(K\)-subspace of \(V\) in a submodule of the form \(K\), \({\cal O}\), or the trivial module. A semi-lattice \(M\) is a lattice if \(M \otimes K \cong V\).
One easily observes that a definable stably dominated type supported on an affine variety \(X\) can be encoded by a certain set of lattices in the (infinite-dimensional) vector space \(H^0(X, {\cal
O}_X)\). The lattice \(\Lambda(p)\) associated to \(p\) is \[ \Lambda(p) := \{ f \in H^0(X, {\cal O}_X) \mid p \models val(f(x)) \geq 0 \} \]
The type definition makes this definition uniform in coefficients of \(f\).
Theorem. The set of lattices \(\Lambda(p)\) that correspond to stably dominated types \(p\) is pro-definable, even definable.
The set of definable stably dominated types that concentrate on \(X\) is denoted \(\hat X\).
If \(p\) is a stably dominated type supported on an affine variety \(X\), then for any function \(f \in H^0(X, {\cal K}_X)\), \(val(f_*(p))\) is well-defined; this defines a valuation on \(K(X)\) which restricts to the standard valuation on \(K\).
Conversely, if \(K\) is maximally complete, then for any valuation \(v: K(X) \to \Gamma\) there exists the type \[ p_v := \{ val(f) = v(f) \mid f \in H^0(X, {\cal O}_X) \} \] Showing that this type is definable stably dominated is a non-trivial theorem (Theorem 12.18 in the book of Haskell-Hrushovski-Macpherson).
One consequence of definability of the space of types is that it makes sense to consider definable maps from definable sets to sets of definable stably dominated types.
For any definable set \(V\) there is an embedding \(s_V: V \to \hat{V}\) that maps points to types that concentrate on points. The image of \(s_V\) is called the set of simple points.
If \(f: X \to Y\) is a definable map, then \(\widehat{X/Y}\) is the subset of \(\hat X\) of points that project to simple points of \(\hat Y\).
Let \(A\) be a definable domain such that the valuation associated to a type \(p\) is \[ v(f) = \inf_{x \in A} val(f(x)) \] Terminology: \(p\) is in the Shilov boundary of \(A\), and \(p\) is strongly stably dominated.
Alternatively, \(p \in \hat V\) is strongly stably dominated if there exists a map \(f: V \to \mathbb{A}^n\) such that \(f_* p = p_{\cal O}^n\), where \(p_{\cal O}\) is the generic type of the ball \(\{ x \mid v(x) \geq 0\}\).
Let \(U\) be a definable set (typically, a subset of \(\Gamma^n\)) and let \(f: U \hookrightarrow \hat X\) be a definable map. If \(p\) is a definable type concentrated on \(U\), one can define the following type concentrated on \(X\): \[ \int_p f := f(a) \textrm{ where } a \models p \] The definition in fact does not depend on the choice of the realization \(a\), and it also depends only on the germ of \(f\) (I personally think that limit would be a better notation).
It turns out that any definable type is of the form \(\int_p f\) where \(p\) is concentrated on \(\Gamma\).
|
{"url":"http://shenme.de/blog/posts/2015-05-17-non-archimedean.html","timestamp":"2024-11-11T17:33:55Z","content_type":"application/xhtml+xml","content_length":"11682","record_id":"<urn:uuid:0d741e30-150a-455d-b894-3651318923e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00488.warc.gz"}
|
Sets a variable to itself minus the given value (can also compare date-time values). Synonymous with: Var -= Value.
EnvSub, Var, Value , TimeUnits
Var -= Value , TimeUnits
The name of the variable upon which to operate.
Any integer, floating point number, or expression.
If present, this parameter directs the command to subtract Value from Var as though both of them are date-time stamps in the YYYYMMDDHH24MISS format. TimeUnits can be either Seconds, Minutes,
Hours, or Days (or just the first letter of each of these). If Value is blank, the current time will be used in its place. Similarly, if Var is an empty variable, the current time will be used in
its place.
The result is always rounded down to the nearest integer. For example, if the actual difference between two timestamps is 1.999 days, it will be reported as 1 day. If higher precision is needed,
specify Seconds for TimeUnits and divide the result by 60.0, 3600.0, or 86400.0.
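For readers outside AutoHotkey, here is a rough Python analogue of this timestamp subtraction (a sketch, not the AutoHotkey implementation; `env_sub` and the unit table are our own names, and full 14-digit YYYYMMDDHH24MISS stamps are assumed):

```python
from datetime import datetime

# Sketch of EnvSub's TimeUnits behavior: subtract two YYYYMMDDHH24MISS
# timestamps and discard the fractional part of the requested unit,
# so a difference of 1.999 days is reported as 1 day.
UNIT_SECONDS = {"Seconds": 1, "Minutes": 60, "Hours": 3600, "Days": 86400}

def env_sub(var, value, time_units):
    t1 = datetime.strptime(var, "%Y%m%d%H%M%S")
    t2 = datetime.strptime(value, "%Y%m%d%H%M%S")
    diff = (t1 - t2).total_seconds()
    return int(diff / UNIT_SECONDS[time_units])  # truncate toward zero

# A difference of 1 day 23:59:00 is reported as 1 day:
print(env_sub("20240102235900", "20240101000000", "Days"))   # 1
print(env_sub("20240102235900", "20240101000000", "Hours"))  # 47
```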
If either Var or Value is an invalid timestamp or contains a year prior to 1601, Var will be made blank to indicate the problem.
The built-in variable A_Now contains the current local time in YYYYMMDDHH24MISS format.
To precisely determine the elapsed time between two events, use the A_TickCount method because it provides millisecond precision.
To add or subtract a certain number of seconds, minutes, hours, or days from a timestamp, use EnvAdd (subtraction is achieved by adding a negative number).
This command is equivalent to the shorthand style: Var -= Value.
Variables can be increased or decreased by 1 by using Var++, Var--, ++Var, or --Var.
If either Var or Value is blank or does not start with a number, it is considered to be 0 for the purpose of the calculation (except when used internally in an expression and except when using the
TimeUnits parameter).
If either Var or Value contains a decimal point, the end result will be a floating point number in the format set by SetFormat.
EnvAdd, EnvMult, EnvDiv, SetFormat, Expressions, If var is [not] type, SetEnv, FileGetTime
|
{"url":"https://ahk4.us/docs/commands/EnvSub.htm","timestamp":"2024-11-08T11:46:15Z","content_type":"text/html","content_length":"4939","record_id":"<urn:uuid:86b74846-47f9-4e0b-953d-8730e6bcd1a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00076.warc.gz"}
|
MFCS Previous Year Question Paper 2013 | HNBGU BCA First semester
Mathematical foundation of Computer science
HNBGU BCA Previous Question Paper 2013-14
1. If Q be the set of rational numbers and a function f:Q->Q be defined by f(x) = 2x+3, show that f is bijective. Find a formula that defines the inverse function.
2. Define a relation and a function. Give an example of a relation which is reflexive and transitive but not symmetric.
1. Show that the set N of all natural numbers is not a group with respect to addition.
2. Show that the set of all n, nth roots of unity forms a finite abelian group of order n with respect to multiplication
1. Decompose the following permutation into transposition:
1. 1 2 3 4 5 6 7
2. 1 2 3 4 5 6 7 8
2. If a group G has four elements, show that it must be abelian.
1. Give two numeric functions f and g such that neither f asymptotically dominates g nor g asymptotically dominates f.
2. Solve the recurrence relation a[r]-3a[r-1]+2a[r-2]=6, satisfying the initial conditions a[0]=1 and a[1]=4.
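As a quick numerical check of this question (not part of the original paper): the recurrence has characteristic roots 1 and 2, and since 1 is a root, a particular solution is -6r, giving the closed form a_r = -8 + 9·2^r - 6r once the initial conditions are imposed. The derivation above is our sketch.

```python
# Iterate a[r] - 3a[r-1] + 2a[r-2] = 6 with a[0] = 1, a[1] = 4 and compare
# against the closed form a_r = -8 + 9*2**r - 6*r.
a = [1, 4]
for r in range(2, 10):
    a.append(3 * a[r - 1] - 2 * a[r - 2] + 6)

closed = [-8 + 9 * 2**r - 6 * r for r in range(10)]
print(a == closed)  # True
```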
1. Solve the difference equation:
a[r+2] - 2a[r+1] + a[r] = 3r + 5
2. If f is a homomorphism of a group G into a group G’ with kernel K, then K is a normal subgroup of G.
1. Prove that each of the following is a tautology:
1. [(p->q)^(q->r)]->(p->r)
2. [p^(p->q)]->q
7. Write the form of the negation of each of the following:
1. The corresponding sides of two triangles are equal if and only if the triangles are congruent.
2. If the number x is less than 10, then there is a number y such that x^2+y^2-100 is positive
1. If X be the set of factors of 12 and if ≤ be the relation divides, i.e., x ≤ y if and only if x | y. Draw the Hasse Diagram of (X,≤)
2. Prove that any right (left ) cosets of a subgroup are either disjoint or identical
|
{"url":"https://www.parnassianscafe.com/2024/01/mfcs-previous-year-question-paper-2013.html","timestamp":"2024-11-03T03:33:11Z","content_type":"application/xhtml+xml","content_length":"249300","record_id":"<urn:uuid:36d72e35-0dd5-4f5d-ac0e-22dcbc68aeb6>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00715.warc.gz"}
|
Lesson 17
Volume and Density
17.1: A Kilogram by Any Other Name (5 minutes)
The purpose of this warm-up is to get students to think more about what they mean by “light” and “heavy” to prepare for later activities that explore density.
Arrange students in groups of 2. After quiet work time, ask students to compare their responses to their partner’s and decide if they are both correct, even if they are different. Follow with a
whole-class discussion.
Student Facing
Which has more mass, a thousand kilograms of feathers or a thousand kilograms of steel? Explain your reasoning.
Activity Synthesis
Select students to share reasons that each might have more mass. It may be helpful to discuss how mass is measured to conclude that each, by definition, is the same mass. Then ask students to discuss
what it means, specifically, when we say that feathers are lighter than steel, and how much volume a thousand kilograms of each substance might occupy.
Ask students to add this definition to their reference charts as you add it to the class reference chart:
The density of a substance is the mass of the substance per unit volume. That is, \(\text{density}=\frac{\text{mass}}{\text{volume}}\). (Definition)
For example, a metal object whose mass is 150 kilograms with volume 1000 cubic centimeters has a density of \(\frac{150}{1000}\) or 0.15 kilograms per cubic centimeter. Each cubic centimeter of the
metal contains 0.15 kilograms of mass.
17.2: Light as a Feather (15 minutes)
Students use concepts of volume and unit conversion to enhance their understanding of density.
Tell students that 1 cubic meter is equal to 1,000,000 cubic centimeters and 1 kilogram is equal to 1,000 grams. Suggest that students pay careful attention to units as they work through this task.
Monitor for students who calculate the feather density in grams per cubic centimeter then convert to kilograms per cubic meter, and those who begin the task by converting the measurements to
kilograms and cubic meters.
Representing, Conversing: MLR7 Compare and Connect. Use this routine to prepare students for the whole-class discussion about strategies for calculating the density of the pillow and anchor. After
students calculate density of the pillow and anchor in kilograms per cubic meter, invite them to create a visual display of their work for either the pillow or the anchor. Then ask students to
quietly circulate and observe at least two other visual displays in the room. Give students quiet think time to consider what is the same and what is different about their strategies. Next, ask
students to find a partner to discuss what they noticed. Listen for and amplify the language students use to compare and contrast strategies for converting units and calculating density.
Design Principle(s): Cultivate conversation
Representation: Internalize Comprehension. Activate or supply background knowledge about the number of centimeters in a meter and the number of cubic centimeters in a cubic meter. Allow students to
use calculators to ensure inclusive participation in the activity.
Supports accessibility for: Memory; Conceptual processing
Student Facing
The feathers in a pillow have a total mass of 59 grams. The pillow is in the shape of a rectangular prism measuring 51 cm by 66 cm by 7 cm.
A steel anchor is shaped like a square pyramid. Each side of the base measures 20 cm, and its height is 28 cm. The anchor’s mass is 30 kg.
1. What’s the density of feathers in kilograms per cubic meter?
2. What’s the density of steel in kilograms per cubic meter?
3. What’s the volume of 1,000 kg of feathers in cubic meters?
4. What’s the volume of 1,000 kg of steel in cubic meters?
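One way to organize these computations (a sketch of the convert-afterward strategy discussed above; the helper name is ours) is to find each density in g/cm³ and multiply by 1,000 to get kg/m³:

```python
# Compute each density in g/cm^3, then convert: 1 g/cm^3 = 1000 kg/m^3.
def density_kg_per_m3(mass_g, volume_cm3):
    return mass_g / volume_cm3 * 1000

# Feather pillow: 59 g in a 51 cm x 66 cm x 7 cm rectangular prism.
feathers = density_kg_per_m3(59, 51 * 66 * 7)

# Steel anchor: 30 kg square pyramid with 20 cm base edges and 28 cm height;
# pyramid volume = (1/3) * base area * height.
steel = density_kg_per_m3(30_000, 20 * 20 * 28 / 3)

print(round(feathers, 1))      # about 2.5 kg per cubic meter
print(round(steel))            # about 8036 kg per cubic meter
print(round(1000 / feathers))  # about 399 cubic meters per 1,000 kg of feathers
print(round(1000 / steel, 3))  # about 0.124 cubic meters per 1,000 kg of steel
```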
Student Facing
Are you ready for more?
Iridium is one of the densest metals. How many times heavier would a standard pencil be if it were made out of iridium instead of wood?
Anticipated Misconceptions
Students may calculate density in grams per cm^3, then be unsure how to convert to kg per m^3. Prompt them to either convert the measurements to cubic meters and kilograms prior to calculating
density, or to use dimensional analysis to convert the density.
Activity Synthesis
The purpose of the discussion is to draw out relationships between mass, volume, and density. Ask students:
• “How did you deal with the different units in this problem?” (If possible, select a student who calculated the feather density in grams per cm^3 then converted to kg per m^3, and another who
converted the measurements to kilograms and cubic meters prior to calculating the density.)
• “How did you calculate the densities of each material?” (Divided the mass by the volume.)
• “How much space is 400 cubic meters? Would the feathers fill this room?” (A classroom of 30 feet by 30 feet by 12 feet has a volume of about 300 cubic meters.)
• “How much space is 0.124 cubic meters? Would the steel fit in the bed of a pickup truck?” (1,000 kg of steel would make a cube with edge length about 0.5 meters.)
17.3: A Fishy Situation (15 minutes)
This task presents a different way to think about density. Instead of considering mass per unit volume, students analyze animal population density. They use unit conversion and volume calculations to
solve a problem. As students choose and track common units of measurement, they are attending to precision (MP6).
While students work, monitor for a variety of strategies such as:
• converting the density of 16 fish per 100 gallons of water to 0.16 fish per 1 gallon
• multiplying the tank’s volume in gallons by 16, then dividing by 100
• calculating that if 275 fish were used, the density would be about 14 fish per 100 gallons
Tell students that there are 7.48 gallons of water in 1 cubic foot.
Consider showing students pictures of the 82-foot tall cylindrical aquarium at the Radisson Blu hotel in Berlin, Germany.
Reading, Listening, Conversing: MLR6 Three Reads. Use this routine to support reading comprehension of this word problem. Use the first read to orient students to the situation. Ask students to
describe what the situation is about without using numbers (an aquarium has a blueprint for a fish tank and wants to make sure they have the right number of fish in the tank). Use the second read to
identify quantities and relationships. Ask students what can be counted or measured without focusing on the values (dimensions and volume of the cylindrical tank, best average density for the species
of fish, number of fish available). After the third read, ask students to brainstorm possible solution strategies to answer the question. This helps students connect the language in the word problem
and the reasoning needed to solve the problem.
Design Principle(s): Support sense-making
Action and Expression: Internalize Executive Functions. To support development of organizational skills, check in with students within the first 2–3 minutes of work time. Check to make sure students
calculated the volume of the fish tank in cubic feet before they calculate the volume of the fish tank in gallons of water.
Supports accessibility for: Memory; Organization
Student Facing
An aquarium manager drew a blueprint for a cylindrical fish tank. The tank has a vertical tube in the middle in which visitors can stand and view the fish.
The best average density for the species of fish that will go in the tank is 16 fish per 100 gallons of water. This provides enough room for the fish to swim while making sure that there are plenty
of fish for people to see.
The aquarium has 275 fish available to put in the tank. Is this the right number of fish for the tank? If not, how many fish should be added or removed? Explain your reasoning.
Activity Synthesis
The goal of the discussion is to highlight different ways to solve the problem. Ask students what the density of fish per 100 gallons would be if 275 fish were put in the tank, and what that means in
this situation. Invite students to share how they approached rounding. For example, if the calculations show that 315.8 fish are needed, should we round up or down? Both answers can be supported.
Lesson Synthesis
In this lesson, students used mass, volume, and density to solve problems. Here are some questions for discussion:
• “What are some things with very high density or very low density that you encounter in the world?” (Bowling balls, bricks, and certain metals are very dense. Wood has medium density. Styrofoam is
not very dense. Air and other gases have very low density compared to solid objects.)
• “How can you tell if something is more or less dense than air?” (Things that are more dense than air naturally fall, but things that are less dense than air naturally rise, like helium or hot air.)
• “What other kinds of density could there be?” (Any sort of measurement per unit of volume can be interpreted as density. For example, food could have a calorie density, like calories per serving.
Even more abstractly, density can be interpreted as any ratio of measurements. For example, cost per square foot is a kind of density. Another example is the number of people per square mile in a
city, which is called population density.)
17.4: Cool-down - Float or Sink? (5 minutes)
Student Facing
Imagine you have a baseball and an apple the size of a baseball. If we weigh each, we’ll likely find that even though they’re the same size, the baseball weighs more.
A baseball has volume 200 cubic centimeters and weighs 145 grams, while an apple the same volume might weigh about 100 grams. We say that the baseball is more dense than the apple because it has more
mass packed into each unit of volume. The density of the apple in this example is 0.5 grams per cubic centimeter, because \(\frac{100\text{ grams}}{200\text{ cm}^3} = 0.5\) grams per cubic
centimeter. For the baseball, the density is \(\frac{145\text{ grams}}{200\text{ cm}^3} = 0.725\) grams per cubic centimeter.
In general, to find the density of an object, divide its mass by its volume.
|
{"url":"https://curriculum.illustrativemathematics.org/HS/teachers/2/5/17/index.html","timestamp":"2024-11-07T22:08:54Z","content_type":"text/html","content_length":"107433","record_id":"<urn:uuid:b31d0152-54ec-4bc9-ab62-8ed4eb64f1cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00681.warc.gz"}
|
Discovering Mathematical Tourism
Sometimes you don’t have to go far to find travel inspiration and a change of scenery. In my search of the world for sites of mathematical significance, it turned out I’d been overlooking one
practically on my doorstep!
The Union Canal, near Falkirk
In 1822 the Union Canal opened, providing (with the Forth and Clyde Canal) a link between Scotland’s two major cities, Edinburgh and Glasgow. It became known locally as ‘the mathematical river’- by
following a natural contour line, the Union Canal maintained a fixed height for its 31 mile course from Falkirk to Edinburgh, removing the need for time-consuming locks. Nor is this its only
mathematical claim to fame- in 1834, the scientist John Scott Russell discovered what are now known as soliton waves whilst experimenting on the canal:
“I was observing the motion of a boat which was rapidly drawn along a narrow channel by a pair of horses, when the boat suddenly stopped—not so the mass of water in the channel which it had put
in motion; it accumulated round the prow of the vessel in a state of violent agitation, then suddenly leaving it behind, rolled forward with great velocity, assuming the form of a large solitary
elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed. I followed it on horseback, and
overtook it still rolling on at a rate of some eight or nine miles an hour, preserving its original figure some thirty feet long and a foot to a foot and a half in height. Its height gradually
diminished, and after a chase of one or two miles I lost it in the windings of the channel. Such, in the month of August 1834, was my first chance interview with that singular and beautiful
phenomenon which I have called the Wave of Translation.”
As Scott Russell described, such waves are unusual in that they can travel long distances whilst preserving their shape, rather than toppling over or simply flattening out with time. Named in his
honour in 1995, The Scott Russell Aqueduct carries the Union Canal over the Edinburgh city bypass, yet the thousands of people who drive underneath it every day have probably never heard of his work-
many have probably not even heard of the canal! Yet as well as having added to our understanding of physics, electronics and biology, soliton waves are of great practical importance today for their
role in long distance communication with fibre-optics.
It seems that a waterside stroll is often of benefit to the advance of mathematics. Nine years after Scott Russell’s discovery – and several hundred miles away, in Dublin – the Irish mathematician
Sir William Rowan Hamilton had a ‘flash of genius’ whilst walking along the Royal Canal. He had realized the equations for the quaternion group and, fearful that he might forget them just as
suddenly, promptly carved them into the nearby Broom bridge. The original carving did not survive, but there is now a stone plaque in its place, which has been described as “the least visited tourist
attraction in Dublin.”
Despite its clever design, the Union Canal’s importance would be short-lived: within twenty years, trains had overtaken barges as the fastest way to travel. The banks became overgrown and the canal
filled with rubbish, and the decline continued after its eventual closure in 1965, as the construction of housing and the M8 motorway caused sections to be cut or filled in. Fortunately, an
£85-million project – the millennium link – came to the rescue. The two canals had originally been joined by a series of 11 locks in Falkirk, but as these had not survived, a more spectacular
solution was found- the Falkirk Wheel.
This engineering marvel is the world’s only rotating boat lift, capable of transferring boats between the two waterways in minutes – and, thanks to physics, using only as much energy to do so as
boiling 8 kettles! The wheel opened in 2002, providing the final piece to restore the link between the two cities, providing ideal opportunities for walking, cycling or boating. I can’t wait to
explore it further in the spring!
(First published on the SoSauce travel blog.)
3 Comments
1. Pingback: Exploring Cambridge « Modulo Errors
2. Nice find. I’ve added it to our database at http://openplaques.org/plaques/7509 If you set the licence on your Flickr image to a Creative Commons one and tagged it openplaques:id=7509 then we’d
be allowed to display it.
3. Hi Jez, I’ve made the requested changes to the flickr image, and will check my archives this evening to see if I can find you a clearer shot of the plaque itself. Good luck with your project!
|
{"url":"https://maths.straylight.co.uk/archives/256","timestamp":"2024-11-03T03:21:05Z","content_type":"text/html","content_length":"40160","record_id":"<urn:uuid:74ca1842-647c-4b45-98bb-0f4e8a6018c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00094.warc.gz"}
|
Theoretical Condensed Matter Physics
Professor Barnes’ research interests span a number of topics in quantum theory, including spin-based quantum computation, dynamical error suppression in quantum systems, driven non-equilibrium spin
dynamics, non-equilibrium physics in 2D materials, many-body interactions in graphene, and novel topological materials such as topological insulators and Weyl semimetals. There is a particular
emphasis on bridging formal, mathematical constructs with research that is closely connected to experiment.
Professor Cao's research focuses on the intersection of quantum gravity, quantum information, and quantum many-body physics. Topics of interest include emergent spacetime and gravity in the anti-de
Sitter/conformal field theory (AdS/CFT) correspondence, quantum computing, especially quantum error correction, and tensor network methods. The research style ranges from formal theory to close
collaboration with experiments.
Professor Cheng's research interests are in soft condensed matter systems, including both biological and synthetic polymers, nanoparticles, nanocomposites, and membranes. The group uses molecular
dynamics simulations and theoretical models based on statistical mechanics to study phenomena including supramolecular and supramacromolecular self-assembly (for example, microtubules as shown in the
left figure), nanoparticle self-assembly, evaporation, capillarity, wetting, adhesion, and friction.
Prof. Dua's research is in condensed matter physics and quantum information science. His current dominant interests are topological order, quantum error correction, quantum control, and the physics
and applications of deep learning. The research style involves formal theory and numerical computations and discussions with quantum computing laboratories about practical implementations. Check out
his recent papers here and feel free to contact him at adua@vt.edu to discuss exciting and important open questions.
Professor Economou’s interests are in quantum optics, condensed matter theory and quantum information with a range of physical systems, including semiconductor nanostructures, color centers (defects)
in solids, superconducting qubits and photons. Topics of particular interest include spin physics in semiconductors, driven systems coupled to a quantum bath, quantum control and quantum logic gate
design, spin-mechanics in condensed matter systems and protocols for entangled photonic states from solid-state emitters. The research style ranges from the development of formal theories with broad
applicability to the interpretation of specific phenomena via close collaboration with experiment.
Professor Ivanov's work is focused primarily on the areas of topological materials, quantum defects, and strongly correlated materials, studying these systems using a variety of theoretical and
computational models. His recent work includes: studying the collective behavior of large numbers of Weyl points in real materials; the interplay of normal-state topology and unconventional
superconductivity; simulation of color-center defects in various materials including silicon and diamond to study their dynamics and properties for applications in quantum sensing, quantum
communication, and single photon generation.
Prof. Kaplan's research interests are in theoretical soft matter and biological physics. In close connection to experiments, his group develops theories and simulations to elucidate the interplay
between the material composition, dynamics, form, and emergent function in living systems and their synthetic analogs.
Professor Park's research interests are theoretical and computational studies of electronic, magnetic, and transport properties of spin-orbit-coupled nanostructures and their interactions with local
and external environmental factors. A few recent examples include: electron-vibron coupling effects in electron tunneling via a single-molecule magnet, spin dynamics for magnetic nanoparticles, and
topological insulators with non-magnetic or magnetic interfaces. For these calculations we use density-functional theory (DFT), Monte Carlo simulations, and effective model Hamiltonian with
parameters obtained from DFT.
Professor Pleimling's research interests are in condensed matter and non-equilibrium systems. Specific research interests include: out-of-equilibrium dynamical behavior of complex systems; aging
phenomena and dynamical scaling; stochastic population dynamics; statistical mechanics of flux lines in superconductors; disordered systems; critical phenomena in confined geometries. These systems
are explored using the tools of statistical physics.
Research in Professor Scarola's group spans several subfields of theoretical quantum physics with the aim of fostering quantum state engineering in the laboratory. The pristine environments we study
typically allow for close connection with experiment in, e.g., two dimensional materials as well as atomic, molecular, and optical systems. Recent research directions include algorithms for quantum
simulation, modelling of quantum computing hardware, quantum analogue simulation, and topological states of matter.
Research interests in Professor Täuber's group are in soft condensed matter and non-equilibrium systems. Specific research interests include: structural phase transitions; dynamic critical behavior
near equilibrium phase transitions; phase transitions and scaling in systems far from equilibrium; statistical mechanics of flux lines in superconductors; and applications of statistical physics to
biological problems. The group employs Monte Carlo and Langevin molecular dynamics simulations to solve stochastic equations of motion, as well as field theory representations to construct
perturbational treatments and renormalization group approaches that improve on mean-field approximations.
Professor Zhou's research area is at the intersection of condensed matter theory and quantum information. His primary interests are quantum phases and their dynamical behaviors in non-equilibrium states of matter. Research topics include thermalization and localization, entanglement in many-body systems, and critical phenomena out of equilibrium. The group gains physical insights from solvable models, then tests and applies them to more practical experimental setups.
Professor Zia's research interests are in soft condensed matter and non-equilibrium systems. Specific research interests include: non-equilibrium statistical mechanics; phase transitions and critical
phenomena; renormalization group analysis; Monte Carlo simulation techniques; stochastic differential equations and field theory; driven diffusive and reaction-diffusion systems; applications to,
e.g., microbiological systems, population dynamics, adaptive networks, opinion formation and climate science.
|
{"url":"https://www.phys.vt.edu/Research/TheoreticalCondensedMatterPhysics.html","timestamp":"2024-11-14T21:24:36Z","content_type":"text/html","content_length":"99496","record_id":"<urn:uuid:a04c5a62-5fd8-49ad-a535-ead2d7a3b445>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00374.warc.gz"}
|
Algorithm for solving systems of linear algebraic equations with a small parameter by the Lyapunov-Schmidt method in the regular case
Citation: Shamanaev P. A., Prokhorov S. A. ''Algorithm for solving systems of linear algebraic equations with a small parameter by the Lyapunov-Schmidt method in the regular case'' [Electronic
resource]. Proceedings of the International Scientific Youth School-Seminar "Mathematical Modeling, Numerical Methods and Software complexes" named after E.V. Voskresensky (Saransk, October 8-11,
2020). Saransk: SVMO Publ, 2020. - pp. 129-131. Available at: https://conf.svmo.ru/files/2020/papers/paper40.pdf. - Date of access: 14.11.2024.
|
{"url":"https://conf.svmo.ru/en/archive/article?id=285","timestamp":"2024-11-14T00:00:42Z","content_type":"text/html","content_length":"10952","record_id":"<urn:uuid:2ffd1147-2448-48f8-b855-f53eb55ac4d2>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00472.warc.gz"}
|
Society for Industrial and Applied Mathematics
The Society for Industrial and Applied Mathematics
Founded in 1952
The Society for Industrial and Applied Mathematics was founded in 1951 and incorporated on 30 April 1952. There had been a considerable increase in the number of mathematicians working in industry in
the United States following World War II, in part a consequence of the numbers who had applied mathematics to military research as part of the war effort. The increasing use of computers in solving
industrial research problems was a major reason why the need for applied mathematicians rose so sharply. Discussions took place at a meeting of the American Institute of Electrical Engineers in
Atlantic City on 30 November 1951 concerning the setting up of a society to represent the interests of industrial applied mathematicians. I Edward Block, a consulting mathematician at the Philco
Corporation, and George Patterson III, a mathematical logician at the Burroughs Adding Machine Company, were two who were most in favour of founding a new society.
In December 1951 an organizing committee which included I Edward Block, Donald B Houghton, Samuel S McNeary, Cletus O Oakley, George Patterson III and George Sonneman met at an engineering lab at the
Drexel Institute of Technology. They decided to found the Society for Industrial and Applied Mathematics at this meeting and it was incorporated in April of the following year. The aims of the
Society, as set out at the time, were:-
a. to further the application of mathematics to industry and science;
b. to promote basic research in mathematics leading to new methods and techniques useful to industry and science;
c. to provide media for the exchange of information and ideas between mathematicians and other technical and scientific personnel.
The Franklin Institute in Philadelphia provided the new Society with office space, an arrangement which continued from 1952 to 1958.
The first meeting of the Society took place before the Society was incorporated. This first meeting at the Drexel Institute of Technology was held on 17 March 1952 with W F G Swan speaking on
Mathematics, the backbone of science. The second meeting on 28 April 1952 was addressed by Mina Rees speaking on The role of mathematics in government. The rules setting out how the Society would
operate were in place by June 1952 and it was proposed that the first publication of the Society, the Bulletin, was made the responsibility of a Council, but its running devolved to a subcommittee,
namely the Publications Committee. A Program Committee was also set up to run meetings of the Society. The first annual business meeting of the Society was held in October 1952 at the University of
Pennsylvania and was addressed by Grace Hopper on Elementary training of a computer. At this meeting elections were held for the officers of the Society:
President: William E Bradley
Vice President: Grace M Hopper
Vice President: George W Patterson
Treasurer: Emil Amelotti
Secretary: I Edward Block.
The first Council was also appointed at this time.
By November 1952 the new Society had 130 members. The first president soon indicated his wish to resign due to pressure of work and he was formally replaced in May 1953 by Donald Houghton. He was
also editor of the SIAM Newsletter which began publication in February 1953.
The Society was originally based in Philadelphia but it soon widened its coverage. A Boston/Cambridge section was established and the first SIAM meeting outside Philadelphia was held in Cambridge on
20 May 1953, addressed by Norbert Wiener. Further sections throughout the United States were quickly established. The first national meeting of the Society was held in Pittsburgh, Pennsylvania on 28
December 1954.
The Council in its earliest meetings decided that publications would be named "journals" rather than "bulletins". The first issue of the Journal appeared in September 1953. The SIAM Review began
publication in 1959. After this, journals were founded to cover special areas: Control (1962), Numerical Analysis (1964), Applied Mathematics (1966), Mathematical Analysis (1970), Computing (1972),
Scientific and Statistical Computing (1980), Algebraic and Discrete Methods (1980), Matrix Analysis (1988), Discrete Mathematics (1988), Optimization (1991), Applied Dynamical Systems (2002), and
Multiscale Modeling and Simulation (2003). In addition a book publishing programme began in 1961 with the Series in Applied Mathematics. It was followed by: Proceedings in Applied Mathematics (1969),
Regional Conferences in Applied Mathematics (1972), Studies in Applied and Numerical Mathematics (1979), Frontiers of Applied Mathematics (1983), Classics in Applied Mathematics (1988), and eight
further series since then.
Membership of the Society grew rapidly from its small beginnings in 1952. By the end of 1952 there were 130 members, by 1953 there were 350, by 1954 there were 500, by 1955 there were 1000, by 1958
there were 2000, and by 1980 there were 5000. It was in the 1980s that the Society began to consider International Sections with the first set up in 1986. The United Kingdom section was set up in
The first activity group of the Society was founded on 19 July 1982; it was the Linear Algebra group. Further groups followed: Discrete Mathematics (1984), Supercomputing (1984), Optimisation (1985),
Control and System Theory (1986), Dynamical Systems (1989), Geometric Design (1989), and Orthogonal Polynomials and Special functions (1990). Four further groups have been set up since 1990.
The Society has inaugurated Prizes and established prestigious lecture series. The first was the John von Neumann Lecture (1959). There followed: the Theodore von Karman Prize (1968), the George
Polya Prize (1969), and the James H Wilkinson Prize (1979). Many further prizes have been established.
Last Updated January 2005
|
{"url":"https://mathshistory.st-andrews.ac.uk/Societies/SIAM/","timestamp":"2024-11-06T01:29:15Z","content_type":"text/html","content_length":"17491","record_id":"<urn:uuid:4b24f1ac-6d71-4c4e-a941-2896ada74cca>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00488.warc.gz"}
|
New plugin to work with commutative diagrams
Hi everyone,
As part of my PhD, I am developing a coq plugin that tries to automate the usually trivial parts of commutative diagrams reasoning for Coq. I just released the first version on the coq opam
repository as coq-commutative-diagrams recently (for nix users there is also a flake). As of now it is pretty basic and only supports the coq-hott category library, but I am in the middle of a
refactoring that should allow supporting multiple libraries easily. You can find the code on github.
If anyone is willing to give it a try, I'd be happy to get some feedback and suggestions for improvements.
hi @Luc Chabassier the recommended way to make announcements about plugins and other Coq packages is via our Discourse forum: https://coq.discourse.group/c/announcements/8
These will then be cross-posted into the Zulip
as you probably know by now, a plugin is going to need constant updates as Coq evolves. To get this, I believe you already got advice to add the plugin/repo to Coq's CI. But another option these days
is to port the plugin to use MetaCoq, which recently had its 1.0 release: https://github.com/MetaCoq/MetaCoq
Here is one example of a MetaCoq-based plugin https://github.com/vzaliva/coq-switch
also, maybe you have seen the large number of category theory libraries in this thread: https://coq.discourse.group/t/survey-of-category-theory-in-coq/371
In particular, this one seems to be popular: https://github.com/jwiegley/category-theory
You can also port it to coq-elpi :grinning_face_with_smiling_eyes: , it did grow quite substantially since the last time you used it
I agree that coq-elpi is also an option, since "almost-anything-but-maintaining-a-plugin" is preferable these days
Anyway, this was already in Karl's message, but if you keep the plugin in OCaml, you should definitely add it to Coq's CI to get the free upgrades every time breaking changes are made in the Coq API.
I'm not sure coq-elpi is a good fit for the kind of algorithm I'm writing ^^ I'm planning a big refactor anyway, so I'll have a look at metacoq. Depending on the direction it takes I'll add it to the
CI, thanks for the pointer !
Sure, I was just pushing Karl to say what I couldn't agree more with: write OCaml only if your main concern is speed. And even then, only if you are truly desperate for cpu cycles, since even a high
level language like elpi is typically fast enough.
I agree, in plugins you can even do crazy stuff like linking in C/C++ libraries. But the maintenance price you (and your users) pay can be huge
Exciting! Please make sure to announce it on hott zulip too.
Is there a particular algorithm you've implemented? It would be good to document it.
I'm curious about which proofs in the HoTT library you manage to simplify, especially since @Jason Gross, who was the main author of that category theory library, is very keen on automation.
Last updated: Oct 13 2024 at 01:02 UTC
|
{"url":"https://coq.gitlab.io/zulip-archive/stream/237655-Miscellaneous/topic/New.20plugin.20to.20work.20with.20commutative.20diagrams.html","timestamp":"2024-11-06T09:01:41Z","content_type":"text/html","content_length":"10293","record_id":"<urn:uuid:51384067-7701-49bf-afe7-eccd080d04d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00332.warc.gz"}
|
Can you explain the phenomenon of quantum entanglement and its relevance to particle physics?
Quantum entanglement is a fascinating phenomenon of quantum mechanics that has significant relevance to particle physics. It occurs when two or more particles become interconnected in such a way that
the state of one particle directly affects the state of the other(s), regardless of the distance between them. This correlation between the particles’ properties persists even when they are separated
by vast distances, which defies our classical understanding of physics.
The phenomenon of quantum entanglement has profound implications for particle physics. It challenges traditional notions of causality and sets the stage for exploring the fundamental nature of
reality. By studying entangled particles, scientists can gain insights into the behavior of matter and the laws governing the quantum world. These insights, in turn, can help us unravel the mysteries
of particle physics and advance our understanding of the fundamental building blocks of the universe.
How does quantum entanglement work and what does it mean for particle physics?
Quantum entanglement works through a process called “entanglement creation,” which occurs when two or more particles interact in a way that their individual quantum states become correlated. Once
entangled, the particles exhibit a strange property known as “quantum superposition,” where they exist in multiple states simultaneously until a measurement is made. When a measurement is performed
on one of the entangled particles, its state “collapses” instantly, and the state of the other entangled particle(s) is also determined, regardless of their separation.
For particle physics, this means that entanglement allows scientists to study the behavior of particles in a way that was previously impossible. By entangling particles, researchers can manipulate
their states and observe how changes in one particle affect its entangled counterpart. This helps to uncover hidden correlations, test the predictions of quantum mechanics, and refine theories about
the fundamental nature of matter.
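The perfect correlation described above can be made concrete with a tiny numerical sketch. This is an illustrative toy (plain Python, real amplitudes only, measurement in the computational basis), not a model of any particular experiment:

```python
from math import sqrt

# Two-qubit Bell state (|00> + |11>)/sqrt(2), stored as amplitudes
# over the four computational-basis outcomes.
bell = {"00": 1 / sqrt(2), "01": 0.0, "10": 0.0, "11": 1 / sqrt(2)}

# Born rule: the probability of each joint outcome is |amplitude|^2.
probs = {outcome: amp ** 2 for outcome, amp in bell.items()}

# The two qubits are perfectly correlated: only "00" and "11"
# ever occur, each with probability 1/2.
print(probs["00"], probs["11"], probs["01"], probs["10"])

# Conditional collapse: given that qubit A was measured as 0, the
# renormalized state of qubit B is certain to be 0 as well.
p_a0 = probs["00"] + probs["01"]
p_b0_given_a0 = probs["00"] / p_a0
print(p_b0_given_a0)  # 1.0
```

The point of the sketch is that the correlation lives in the joint amplitudes: neither qubit has a definite value on its own, yet the outcomes always agree.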
What are the practical applications of quantum entanglement in particle physics?
The practical applications of quantum entanglement in particle physics are vast. One notable application is in quantum teleportation, where the exact state of a particle can be transmitted from one
location to another using entanglement. This has the potential to revolutionize secure communication and information processing, as well as improve the accuracy of quantum measurements.
Another application is in quantum computing, where entanglement is used to create qubits, the basic units of quantum information. Entangled qubits can store and process exponentially more information
than classical bits, paving the way for powerful computers that can solve complex problems much faster than their classical counterparts.
Furthermore, entanglement plays a crucial role in experiments testing the principles of quantum mechanics and probing the nature of particles. By entangling particles and making precise measurements,
scientists can verify theoretical predictions, study quantum entanglement itself, and advance our understanding of the fundamental laws governing the universe.
What are the key principles behind quantum entanglement and its impact on particle physics?
The key principles behind quantum entanglement are non-locality, superposition, and the collapse of the wave function. Non-locality refers to the instantaneous correlation between the states of
entangled particles, regardless of their separation. Superposition allows particles to exist in multiple states simultaneously until a measurement is made, which is when the wave function describing
their states collapses into a single outcome.
In particle physics, the impact of quantum entanglement is significant. It challenges classical notions of determinism and locality, pushing scientists to develop new theories and models to explain
the behavior of particles at the quantum level. It also enables the development of advanced technologies, such as quantum cryptography and quantum computing, which have the potential to revolutionize
various fields and industries.
Can quantum entanglement help us uncover the mysteries of particle physics?
Quantum entanglement holds great promise in helping us uncover the mysteries of particle physics. By studying entangled particles and their behavior, scientists can gain insights into the fundamental
properties and interactions of particles at a subatomic level.
The correlations revealed through entanglement experiments can help refine existing theories, test predictions, and potentially unveil new physics beyond the current understanding. Furthermore,
quantum entanglement provides a window into the hidden aspects of quantum mechanics and may lead to breakthroughs in our understanding of phenomena like dark matter, the nature of gravity, and the
unification of fundamental forces.
While quantum entanglement alone may not provide all the answers to the mysteries of particle physics, it is a powerful tool that complements other experimental techniques and theoretical frameworks.
As our knowledge and technology advance, quantum entanglement will continue to play a vital role in unraveling the secrets of the universe and expanding our understanding of the fundamental nature of
|
{"url":"https://myquestion.ai/can-you-explain-the-phenomenon-of-quantum-entanglement-and-its-relevance-to-particle-physics/","timestamp":"2024-11-04T16:46:03Z","content_type":"text/html","content_length":"181395","record_id":"<urn:uuid:45c395a9-389a-43b5-ba32-fbba7f22feed>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00605.warc.gz"}
|
Chapter 5: Clustering and Classification | DATA DRIVEN SCIENCE & ENGINEERING
To exploit data for diagnostics, prediction and control, dominant features of the data must be extracted. In the opening chapter of this book, SVD and PCA were introduced as methods for determining
the dominant correlated structures contained within a data set. In the eigenfaces example of Sec. 1.6, for instance, the dominant features of a large number of cropped face images were shown. These
eigenfaces, ordered by their ability to account for commonality (correlation) across the database of faces, were guaranteed to give the best set of r features for reconstructing a given face
in an l2 sense with a rank-r truncation. The eigenface modes gave clear and interpretable features for identifying faces, including highlighting the eyes, nose and mouth regions, as might be expected.
Importantly, instead of working with the high-dimensional measurement space, the feature space allows one to consider a significantly reduced subspace where diagnostics can be performed.
The goal of data mining and machine learning is to construct and exploit the intrinsic low-rank feature space of a given data set. The feature space can be found in an unsupervised fashion by an
algorithm, or it can be explicitly constructed by expert knowledge and/or correlations among the data. For eigenfaces, the features are the PCA modes generated by the SVD. Thus each PCA mode is high-
dimensional, but the only quantity of importance in feature space is the weight of that particular mode in representing a given face. If one performs an r-rank truncation, then any face needs only r
features to represent it in feature space. This ultimately gives a low-rank embedding of the data in an interpretable set of r features that can be leveraged for diagnostics, prediction,
reconstruction and/or control.
Section 5.1: Feature Selection and Data Mining
Section 5.2: Supervised versus Unsupervised Learning
Section 5.3: Unsupervised Learning - k-Means Clustering
Section 5.4: Unsupervised Learning - Dendrograms
Section 5.5: Unsupervised Learning - Mixture Models
Section 5.6: Supervised Learning - Linear Discriminants
Section 5.7: Supervised Learning - Support Vector Machines
Section 5.8: Supervised Learning - Classification Trees
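As one concrete instance of the unsupervised methods listed above, here is a minimal pure-Python sketch of k-means clustering (Sec. 5.3). The data points and the fixed initialization are invented for illustration:

```python
def kmeans(points, centroids, iters=20):
    """Lloyd's algorithm: alternate nearest-centroid assignment and mean update."""
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            d2 = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centroids]
            clusters[d2.index(min(d2))].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (keep the old centroid if a cluster happens to be empty).
        centroids = [
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centroids[k]
            for k, cl in enumerate(clusters)
        ]
    return centroids

# Two well-separated blobs; initial centroids are one point from each.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
final = kmeans(data, [data[0], data[3]])
print(final)  # two centroids near (1/3, 1/3) and (31/3, 31/3)
```

The same alternating structure (assign, then re-estimate) reappears in the mixture models of Sec. 5.5, where hard assignments are replaced by probabilistic ones.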
This video highlights some of the basic ideas of clustering and classification, both for supervised and unsupervised algorithms.
This video highlights some of the more advanced machine learning methods of clustering and classification, both for supervised and unsupervised algorithms.
This video highlights two leading methods in machine learning: support vector machines (SVM) and classification trees.
|
{"url":"http://www.databookuw.com/page/page-8/","timestamp":"2024-11-06T11:54:40Z","content_type":"text/html","content_length":"21411","record_id":"<urn:uuid:14c3d5ad-8960-4427-8415-d5af689f4484>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00228.warc.gz"}
|
Like wires, variables are named containers for values that always maintain their state (the value is retained until overwritten). They can only be given a value when declared or in an initial,
always, or analog process. If given a value in an analog process they are referred to as continuous variables, meaning that they are owned and managed by the continuous kernel. Otherwise they are referred
to as discrete variables and are owned and managed by the discrete kernel.
Continuous variables are not supported in Verilog, and discrete variables are not supported in Verilog-A.
Discrete variables may be initialized when declared, and initially take the value of x if not initialized. Continuous variables may not be initialized and always start off as 0.
Variables retain their value until changed by way of an assignment statement.
A register or reg declaration declares arbitrarily sized logic variables (registers are not supported in Verilog-A). The default size is one bit.
reg enable;
reg [15:0] in;
In these examples, enable is a one bit variable and in is a 16 bit variable. The index of the most significant bit is given first in the range specification, and the index of the least significant
bit is given last. The bounds must be constants (derived from number literals or parameters).
By default the contents of multi-bit registers are interpreted as unsigned numbers (the values are interpreted as positive binary numbers). It is possible to explicitly specify whether the number is
to be interpreted as a signed or unsigned number as follows:
reg unsigned [3:0] gain;
reg signed [6:0] offset;
In this case, gain is unsigned and offset is signed, which means it is interpreted as a two's-complement signed number. So, if gain = 4'hF, its value is interpreted as 15, and if offset = 7'h7F, then
its value is interpreted as -1.
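The unsigned versus two's-complement reading of the same bit pattern can be checked with a short sketch. The helper names below are mine, not part of any Verilog tool:

```python
def as_unsigned(bits, width):
    """Interpret a bit pattern as a plain binary (unsigned) number."""
    return bits & ((1 << width) - 1)

def as_signed(bits, width):
    """Interpret the same pattern as a two's-complement signed number."""
    value = bits & ((1 << width) - 1)
    if value >= 1 << (width - 1):   # top bit set -> negative
        value -= 1 << width
    return value

# A 4-bit register holding all ones: 15 unsigned, -1 signed.
print(as_unsigned(0b1111, 4), as_signed(0b1111, 4))        # 15 -1

# A 7-bit register holding all ones: 127 unsigned, -1 signed.
print(as_unsigned(0b1111111, 7), as_signed(0b1111111, 7))  # 127 -1
```

The bit pattern itself never changes; only the declared interpretation (signed vs. unsigned) decides the numeric value.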
Integer Variables
An integer declaration declares one or more variables of type integer. These variables can hold values ranging from -2^31 to 2^31-1. Arithmetic operations performed on integer variables produce 2’s
complement results. Integers are initialized at the start of a simulation depending on how they are used. Integer variables whose values are assigned in an analog process default to an initial value
of zero (0). Such variables are said to be captured by the analog kernel. Integers that are captured by the analog kernel cannot be initialized and can only hold valid numbers (they may not contain
any x- or z-valued bits). Integer variables whose values are assigned in a digital context default to an initial value of x. These variables are said to be captured by the discrete kernel. Such
integers are implemented as 32-bit signed regs. As such, the values they hold may contain bits that are x or z.
Integer variables can only be given a value using an assignment statement, which can only be found in its declaration or in initial, always, and analog processes.
A type of integer, genvar, has restricted semantics that allow it to be used in static expressions. A genvar can only be assigned within the control section of a for loop. Assignments to the genvar
variable can consist only of expressions of static values (expression involving only parameters, literal constants, and other genvar variables).
genvar i;
Real Variables
A real declaration declares one or more variables of type real. The real variables are stored as 64-bit quantities, as described by IEEE STD-754-1985, an IEEE standard for double precision floating
point numbers. Real variables are initialized to zero (0) at the start of a simulation.
Real variables can only be given a value using an assignment statement, which can only be found in its declaration or in initial, always, and analog processes. Real variables assigned in analog
processes are captured by the analog kernel and their value is assumed to vary continuously with time. All other real variables are captured by the discrete kernel and their values are piecewise constant.
Named Events
Events are normally associated with changes in discrete-event signals, but it is also possible to declare a variable that does not actually hold a value, yet is capable of triggering event
statements. To declare a named event, use:
event failure;
Named events are triggered using '->' in an initial or always process. For example:
-> failure;
The event can now be caught in an initial or always process using:
@(failure) begin
$strobe("Failure detected");
failures = failures + 1;
end
Arrays
Arrays of registers, integers and reals can be declared using a range that defines the upper and lower indices of the array. Both indices are specified with constant expressions that may evaluate to
a positive integer, a negative integer, or to zero.
reg [7:0] mem [1023:0];
integer i, weights[7:0] = {2, 4, 8, 16, 32, 64, 128, 256};
real in1[15:0], in2[15:0], out[15:0];
An array of registers is often referred to as a memory. In the example, mem is an array of 1024 bytes (8-bit values). One would use mem[i] to access a particular byte and mem[i][j] to access a
particular bit of the same byte.
|
{"url":"https://verilogams.com/refman/basics/variables.html","timestamp":"2024-11-05T22:49:46Z","content_type":"text/html","content_length":"15139","record_id":"<urn:uuid:8d984a7e-17f5-4eff-9a5c-75c8c4fc4ac7>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00379.warc.gz"}
|
The equation of a line and the equation of a plane - mathXplain
The equation of the line, The equation of the plane, Normal vector, Vector between 2 points, Distance of 2 points, Direction vector, System of equation of the line.
Text of slideshow
It's time to do some geometry.
There's nothing to worry about, it's just a few little things. Let's start with vectors and lines in the plane.
The equation of a line and the equation of a plane
EQUATION OF A LINE: If P0(x0, y0) is a point on the line, and
n(n1, n2) is the normal vector of the line:
n1(x − x0) + n2(y − y0) = 0
Just a reminder: the normal vector of a line is
a non-zero vector that is perpendicular to that line.
VECTOR BETWEEN 2 POINTS: If A(a1, a2) and B(b1, b2) are points,
then the vector between these points:
AB = (b1 − a1, b2 − a2)
DISTANCE OF 2 POINTS: If A(a1, a2) and B(b1, b2) are points,
then the distance between these points:
d(A, B) = √((b1 − a1)² + (b2 − a2)²)
It's all the same in space, except there are three coordinates.
EQUATION OF A PLANE: If P0(x0, y0, z0) is on the plane and
n(n1, n2, n3) is the normal vector of the plane:
n1(x − x0) + n2(y − y0) + n3(z − z0) = 0
Just a reminder: the normal vector of a plane is
a non-zero vector that is perpendicular to that plane.
VECTOR BETWEEN 2 POINTS: If A(a1, a2, a3) and B(b1, b2, b3) are points,
then the vector between these points:
AB = (b1 − a1, b2 − a2, b3 − a3)
DISTANCE OF 2 POINTS: If A(a1, a2, a3) and B(b1, b2, b3) are points,
then the distance between these points:
d(A, B) = √((b1 − a1)² + (b2 − a2)² + (b3 − a3)²)
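The point-to-point vector and distance formulas can be sketched in a few lines of Python (the coordinates are made up for illustration):

```python
from math import sqrt

def vector_between(a, b):
    """Coordinates of the vector AB: subtract A's coordinates from B's."""
    return tuple(bi - ai for ai, bi in zip(a, b))

def distance(a, b):
    """Length of AB: square root of the sum of squared coordinate differences."""
    return sqrt(sum((bi - ai) ** 2 for ai, bi in zip(a, b)))

# The same code works in the plane (2 coordinates) and in space (3 coordinates).
print(vector_between((1, 2), (4, 6)))    # (3, 4)
print(distance((1, 2), (4, 6)))          # 5.0
print(distance((0, 0, 0), (3, 4, 12)))   # 13.0
```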
Let's try to come up with the equation of a line in space. This would be useful for us, but it is not included in this list.
Unfortunately there will be some problems with it, but let's try anyway.
Let's find the equation of the line where point P0(x0, y0, z0) is on the line, and
v(v1, v2, v3) is the direction vector of the line.
Here, we have to use the direction vector instead of the normal vector, because in space
it is not obvious which vector is perpendicular to the line.
The direction vector, on the other hand, is specific, only its length may vary.
If P(x, y, z) is an arbitrary point on the line, then
P0P = (x − x0, y − y0, z − z0).
This vector is a multiple of the line's direction vector: x − x0 = t·v1, y − y0 = t·v2, z − z0 = t·v3.
If v1 ≠ 0, then we divide by it; if it is zero, then x = x0.
If v2 ≠ 0, then we divide by it; if it is zero, then y = y0.
If v3 ≠ 0, then we divide by it; if it is zero, then z = z0.
All of them are equal to t, therefore they must be equal to each other, too:
(x − x0)/v1 = (y − y0)/v2 = (z − z0)/v3
This is the system of equations of a line in space.
Let's see an example:
Find the equation of the line where point is on the line, and
is the direction vector of the line.
Here is the system of equations of the line:
Unfortunately, a zero component in the direction vector will cause some trouble.
In such cases
Next, let's see a typical exercise.
Find the equation of a line in the plane, where point is on the line, and the line is perpendicular to the line described by the equation of
Find the equation of a plane in space, where point is on the plane and the plane is perpendicular to the line described by the following system of equations:
The normal vector of line is
We can make use of this vector if
we rotate it by 90°, because then
it will be the normal vector of the line we are trying to define.
To rotate a vector in the plane by 90°,
we swap its coordinates,
and multiply one of them by −1.
We have the normal vector, so
the equation of the line is:
Let's see what we can do over here.
The normal vector of the plane happens to be the direction vector of the line.
The direction vector of the line:
The normal vector of the plane:
Here comes the equation of the plane:
And finally, another typical exercise.
Find the equation of a line in the plane, where points and are on the line.
Find the equation of the plane in space, where points , , and are on the plane.
If P0(x0, y0) is a point on the line and
n(n1, n2) is the normal vector of the line, the equation of the line will be:
n1(x − x0) + n2(y − y0) = 0
If P0(x0, y0, z0) is a point on the plane and
n(n1, n2, n3) is the normal vector of the plane, the equation of the plane will be:
n1(x − x0) + n2(y − y0) + n3(z − z0) = 0
We have plenty of points, but we don't have a single normal vector,
so we have to make one.
Let's rotate this by 90°, and that gives us the normal vector.
To rotate a vector in the plane by 90°,
we swap its coordinates,
and multiply one of them by −1.
The equation of the line:
We will have a problem with the plane here.
In space there is no such thing
as rotating a vector by 90°.
We have to figure out something else to get the plane's normal vector.
We would need a vector that is perpendicular to the triangle determined by the three given points. This vector will be the so-called cross product.
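A minimal sketch of that cross-product construction, with three made-up points, might look like this in Python:

```python
def cross(u, v):
    """Cross product of two 3D vectors; the result is perpendicular to both."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Three (made-up) points on a plane.
A, B, C = (0, 0, 0), (1, 0, 0), (0, 1, 0)

# Two edge vectors of the triangle ABC; their cross product is a
# normal vector of the plane through A, B, and C.
AB = tuple(b - a for a, b in zip(A, B))
AC = tuple(c - a for a, c in zip(A, C))
print(cross(AB, AC))  # (0, 0, 1): perpendicular to the xy-plane
```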
|
{"url":"https://www.mathxplain.com/precalculus/vectors/the-equation-of-a-line-and-the-equation-of-a-plane","timestamp":"2024-11-11T04:39:22Z","content_type":"text/html","content_length":"78612","record_id":"<urn:uuid:ca0ba2ae-c6df-423f-a7c4-c0b1e3347818>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00173.warc.gz"}
|
How To Perform A One-Way ANOVA Test In SPSS
This guide will explain, step by step, how to perform a one-way ANOVA test in the SPSS statistical software by using an example. The guide will also explain how to perform post-hoc tests to
investigate significant results further.
What is a one-way ANOVA test?
A one-way analysis of variance (ANOVA) test is a statistical tool to determine whether there are any differences between the means of three or more independent groups on a continuous variable. This
particular test assumes that the data in each group is normally distributed.
Assumptions of a One-Way ANOVA test
Before running a One-Way ANOVA test in SPSS, it is best to ensure the data meets the following assumptions.
1. The dependent variable should be measured on a continuous scale (either interval or ratio).
2. There should be three or more independent (non-related) groups.
3. There are no outliers present in the dependent variable.
4. The dependent variable should be normally distributed. See how to test for normality in SPSS.
5. The dependent variable should have homogeneity of variances across the groups. In other words, the group standard deviations need to be approximately the same.
Example experiment
I will use an example to explain how to perform a one-way ANOVA test. For instance, say we have measured the weights of different rats. There are three groups of rats:
1. Controls: these have not received any physical exercise.
2. Exercised: these have performed 6 weeks of physical exercise.
3. Pill: these have been treated with a diet pill for 6 weeks.
We want to know if there are any differences between the weights of the rats after the 6 week period. We can now formulate two hypotheses.
The null hypothesis would read:
There are no differences in the weights of the rats after the 6 week period.
The alternative hypothesis would be:
There is a difference in weight between the three rat groups.
The one-way ANOVA test will be able to inform us if there is a significant difference between the three groups. However, it cannot directly state which group(s) differ from each other. So, if a one-way ANOVA test indicates a significant result, further post-hoc testing is required to investigate specifically which groups are significantly different.
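Outside SPSS, the same F statistic can be computed directly. Below is a minimal sketch in Python using hypothetical rat weights (the numbers are made up for illustration; the article's actual dataset is not reproduced here), computing the ANOVA by hand and cross-checking it against SciPy:

```python
# Illustrative one-way ANOVA on three hypothetical weight groups (grams).
import numpy as np
from scipy import stats

control = np.array([310.0, 322.0, 315.0, 318.0])
exercised = np.array([282.0, 290.0, 285.0, 279.0])
pill = np.array([295.0, 301.0, 298.0, 292.0])
groups = [control, exercised, pill]

# Between-groups and within-groups sums of squares.
all_data = np.concatenate(groups)
grand_mean = all_data.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1             # k - 1
df_within = len(all_data) - len(groups)  # N - k
f_stat = (ss_between / df_between) / (ss_within / df_within)
p_value = stats.f.sf(f_stat, df_between, df_within)

# SciPy computes the same statistic directly.
f_scipy, p_scipy = stats.f_oneway(control, exercised, pill)
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p = {p_value:.4f}")
```

The hand computation and `scipy.stats.f_oneway` agree, which is a useful sanity check when comparing against SPSS output.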
The dataset
In SPSS, I have created a file containing two data variables labelled ‘Weight’ and ‘Group‘. The first contains all of the rat weights (measured in grams). In the ‘Group‘ column, I have assigned the
numbers ‘1‘, ‘2‘, or ‘3‘ to indicate which experiment group the rats belong to.
Below is a snapshot of what part of the data looks like so you get the idea.
Performing a One-Way ANOVA test in SPSS
Now we have the dataset, let’s perform the one-way ANOVA test in SPSS.
1. Firstly, go to Analyze > Compare Means > One-Way ANOVA....
2. A new window will open. Here you need to move the dependent variable (Weight in the example) into the window called Dependent List and the grouping variable (Group) into the box titled Factor.
3. Since we do not know whether there are any differences in weights between our three groups, we should avoid performing any post-hoc test just yet. It is, however, worth getting further
descriptive data at this point. To do this, click the Options... button. This will bring up a new window, here you should tick the Descriptive option under the Statistics heading and click the
Continue button.
4. Finally, click the OK button to run the ANOVA test.
One-way ANOVA Output
The results are presented in the output window in SPSS. You should be presented with two boxes.
The first (Descriptives) contains a wealth of information including mean, standard deviation, standard error and 95% confidence intervals stratified by each group, as well as combined. We can
clearly see large differences in mean weight values.
The next output box (ANOVA) contains all of the statistical information regarding the one-way ANOVA test. This includes the degrees of freedom (df), the F statistic (F) and the all important
significance value (Sig.).
One-Way ANOVA interpretation
By looking at the table, we can see that the significance (Sig.) value is ‘.000‘, i.e. P < .001. This is considerably lower than our significance threshold of P < 0.05. Therefore, we should reject the null hypothesis in favour of the alternative hypothesis.
One-Way ANOVA reporting
At this point, we can confirm that there is a significant difference in rat weights between the three groups. Thus we could summarise this, including the statistical output, in one simple sentence.
The reporting includes the degrees of freedom, both between and within groups, the F statistic and the P value.
Performing post-hoc tests
Since the results of the one-way ANOVA test returned a significant result, it is now appropriate to carry out post-hoc tests. This is to determine which specific groups are significantly different from one another.
1. To perform post-hoc tests in SPSS, firstly go back to the one-way ANOVA window by going to Analyze > Compare Means > One-Way ANOVA... (as described in Step 1).
2. Now, enter the same data into the appropriate windows again (as described in Step 2).
3. Click the Post Hoc... button to open the Post Hoc Multiple Comparisons window. There are multiple options for post hoc testing, but for this example we will use the commonly adopted Tukey post
hoc test. Tick the Tukey option under the Equal Variances Assumed heading.
Now click the Continue button.
4. To run the test, click the OK button.
Post-hoc (Tukey) output
By going to the output window, you will now see a new section of results titled Post Hoc Tests. The results that we are interested in are presented in the Multiple Comparisons box.
The output compares each possible group. For example, the first row presents the results for the comparison between the ‘Control‘ and the ‘Exercised‘ groups, as well as that between the ‘Control‘
and ‘Pill‘ groups. The Mean Difference is also given, which is the average difference in weights between the groups in comparison. Additionally, the table contains the standard error (Std. Error) and
95% confidence intervals. The P values for each comparison can be found under the Sig. column.
Post hoc (Tukey) interpretation
By looking at the Sig. column, it can be seen that all comparisons are significant since the P values are all .000. Thus, the weights for the three rat groups are significantly different from each other.
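For equal-sized groups, SPSS's Tukey output can be reproduced approximately from the ANOVA error term and the studentized range distribution (available as `scipy.stats.studentized_range` in SciPy ≥ 1.7). A sketch with the same hypothetical weights as before (made-up numbers, not the article's dataset):

```python
# Pairwise Tukey HSD comparisons for equal-sized groups (hypothetical data).
import itertools
import numpy as np
from scipy import stats

groups = {
    "Control":   np.array([310.0, 322.0, 315.0, 318.0]),
    "Exercised": np.array([282.0, 290.0, 285.0, 279.0]),
    "Pill":      np.array([295.0, 301.0, 298.0, 292.0]),
}
k = len(groups)                           # number of groups
n = len(next(iter(groups.values())))      # per-group size (equal sizes assumed)
df_within = sum(len(g) for g in groups.values()) - k

# Mean square within: the ANOVA error term.
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values()) / df_within

results = {}
for (name_a, a), (name_b, b) in itertools.combinations(groups.items(), 2):
    mean_diff = a.mean() - b.mean()
    # Tukey's q statistic and its p-value from the studentized range distribution.
    q = abs(mean_diff) / np.sqrt(ms_within / n)
    p = stats.studentized_range.sf(q, k, df_within)
    results[(name_a, name_b)] = (mean_diff, p)
    print(f"{name_a} vs {name_b}: diff = {mean_diff:+.2f} g, p = {p:.4f}")
```

Each row mirrors one line of the Multiple Comparisons box: the mean difference and its Tukey-adjusted p-value.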
Post hoc (Tukey) reporting
Since we now know the comparisons between each group, we can add to our previous reporting with the additional post-hoc results. I have provided an example for the full reporting below.
IBM SPSS version used: 23
|
{"url":"https://toptipbio.com/perform-one-way-anova-spss/","timestamp":"2024-11-11T03:19:02Z","content_type":"text/html","content_length":"202826","record_id":"<urn:uuid:977ee852-eb8d-4d4b-9fca-56171ff8e705>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00319.warc.gz"}
|
Hertz - Introduction, Uses, Unit of Frequency, Applications, and FAQs
The hertz (symbol: Hz) is the derived unit of frequency in the International System of Units (SI) and is defined as one cycle per second. It is named after Heinrich Rudolf Hertz.
Heinrich Rudolf Hertz was a German physicist who first conclusively proved the existence of the electromagnetic waves predicted by James Clerk Maxwell's equations of electromagnetism. The unit of frequency, the cycle per second, was named "hertz" in his honour.
Here, we are going to discover a few more things about the topic.
Why is Hertz Used?
In physics, the hertz (symbol: Hz) is the derived unit of frequency in the International System of Units. Frequencies are often expressed in multiples of the hertz, such as the kilohertz (kHz, 10^3 Hz), the megahertz (MHz, 10^6 Hz), and the gigahertz (GHz, 10^9 Hz). Expressed in SI base units, one hertz is second^-1 (s^-1); the hertz is an SI derived unit.
In English, "hertz" is also used as the plural form. MHz is 10^6 Hz, GHz is 10^9 Hz, and THz (terahertz) is 10^12 Hz; 100 Hz means "one hundred cycles per second", and so on. The unit may be applied to any periodic event: for example, a clock might be said to tick at 1 Hz, or a human heart might be said to beat at 1.2 Hz.
Hertz as Unit of Frequency
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast with spatial frequency and angular frequency. Frequency is measured in hertz (Hz), equal to one occurrence of a repeating event per second.
The period is the duration of one cycle of a repeating event, so the period is the reciprocal of the frequency. For example, if a newborn baby's heart beats at a frequency of 120 times a minute, that is 2 hertz, and its period T, the time interval between beats, is half a second (60 seconds divided by 120 beats).
Hertz Applications
Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.
The frequency of any phenomenon can be expressed in hertz, but the term is used most frequently in connection with alternating electric currents and electromagnetic waves such as light and radar, as well as sound. The hertz is part of the International System of Units, which is based on the metric system. The unit was adopted in October 1933 by a committee of the International Electrotechnical Commission and is in widespread use today, although it has not entirely replaced the expression "cycles per second".
An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope: an intense, repetitively flashing light (strobe light) whose frequency can be adjusted with a calibrated timing circuit.
FAQs on Hertz
1. Explain what is a Hertz in physics?
Hertz (abbreviated Hz) equals the number of cycles per second. The frequency of any phenomenon with regular periodic variations can be expressed in hertz, but the term is used most frequently in connection with alternating electric currents and electromagnetic waves such as light and radar, as well as sound.
2. What is 1 Hz equal to?
Frequency is the rate at which current changes direction per second. It is measured in hertz (Hz), the international unit of measure, where 1 hertz is equal to 1 cycle per second. That is, one hertz equals one cycle per second, where a cycle is one complete wave of alternating current or voltage.
3. Why is the unit called the hertz?
The hertz (symbol: Hz) is the derived unit of frequency in the International System of Units (SI) and is defined as one cycle per second. It is named after the scientist Heinrich Rudolf Hertz, the first person to provide conclusive proof of the existence of electromagnetic waves.
4. What is a cycle in Hertz?
One complete oscillation of a sound wave is known as a cycle. The term hertz simply measures the frequency of cycles: one hertz is equal to one cycle per second. Cycles are sometimes also referred to as vibrations. The frequency of a sound wave is the number of cycles (vibrations) per unit of time.
|
{"url":"https://www.vedantu.com/physics/hertz","timestamp":"2024-11-02T18:31:17Z","content_type":"text/html","content_length":"217990","record_id":"<urn:uuid:66edb45e-ff23-46a1-a49d-b7897ebf273d>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00443.warc.gz"}
|
Question ID - 154240 | SaraNextGen Top Answer
The electrical resistivity of a conducting wire is K. If its length and area of cross-section are doubled then the new resistivity will be
(a) (b) (c) (d)
Since resistivity is an intrinsic property of the material, independent of the conductor's length and cross-sectional area, the new resistivity is still K.
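A quick numerical check via R = ρL/A makes the point: doubling both L and A leaves the resistance (and, trivially, ρ) unchanged. The numeric values below are illustrative only:

```python
# Resistivity is a material property: R = rho * L / A depends on geometry,
# but rho itself does not change when L and A change.
def resistance(rho, length, area):
    return rho * length / area

K = 1.7e-8       # example resistivity in ohm-metres (illustrative value)
L, A = 2.0, 1e-6  # example length (m) and cross-sectional area (m^2)

r_original = resistance(K, L, A)
r_doubled = resistance(K, 2 * L, 2 * A)  # doubling both L and A

print(r_original, r_doubled)  # equal: the factors of 2 cancel
```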
|
{"url":"https://www.saranextgen.com/homeworkhelp/doubts.php?id=154240","timestamp":"2024-11-07T20:15:56Z","content_type":"text/html","content_length":"16553","record_id":"<urn:uuid:cc6ecb79-f0f3-4180-a9a3-239b89e8938d>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00277.warc.gz"}
|
A unifying version-space representation
In this paper we consider the open problem of how to unify version-space representations. We present a first solution to this problem, namely a new version-space representation called adaptable boundary
sets (ABSs). We show that a version space can have a space of ABSs representations. We demonstrate that this space includes the boundary-set representation and the instance-based boundary-set
representation; i.e., the ABSs unify these two representations.
We consider the task of learning ABSs as a task of identifying a proper representation within the space of ABSs depending on the applicability requirements given. This is demonstrated in a series of
examples where ABSs are used to overcome the complexity problem of the boundary sets.
• machine learning
• concept learning
• version spaces
• boundary sets
• instance-based boundary sets
Dive into the research topics of 'A unifying version-space representation'. Together they form a unique fingerprint.
|
{"url":"https://cris.maastrichtuniversity.nl/en/publications/a-unifying-version-space-representation","timestamp":"2024-11-05T03:14:55Z","content_type":"text/html","content_length":"53354","record_id":"<urn:uuid:07a2c37c-0c9b-453a-b741-30862c4a440b>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00845.warc.gz"}
|
Ruler and Compass Construction – Euclidean Geometry – Mathigon
Euclidean GeometryRuler and Compass Construction
You might have noticed that Euclid’s five axioms don’t contain anything about measuring distances or angles. Up to now, this has been a key part of geometry, for example to calculate areas and volumes.
However, at the time of Thales or Euclid, there wasn’t a universal framework of units like we have today. Distances were often measured using body parts, for example finger widths, or arm lengths.
These are not very accurate and they vary for different people.
To measure longer distances, architects or surveyors used knotted cords: long pieces of string that contained many knots at equal intervals. But these were also not perfectly accurate, and different
strings had the knots placed at slightly different distances.
Greek mathematicians didn’t want to deal with these approximations. They were much more interested in the underlying laws of geometry, than in their practical applications.
That’s why they came up with a much more idealised version of our universe: one in which points can have no size and lines can have no width. Of course, it is impossible to draw these on paper. Visible points
will always take up some space, and lines will always have some width. This is why our drawings are always just “approximations”.
Euclid’s axioms basically tell us what’s possible in his version of geometry. It turns out that we just need two very simple tools to be able to sketch this on paper:
A straight-edge is like a ruler but without any markings. You can use it to connect two points (as in Axiom 1), or to extend a line segment (as in Axiom 2).
A compass allows you to draw a circle of a given size around a point (as in Axiom 3).
Axioms 4 and 5 are about comparing properties of shapes, rather than drawing anything. Therefore they don’t need specific tools.
You can imagine that Greek mathematicians were thinking about Geometry on the beach, and drawing different shapes in the sand: using long planks as straight-edge and pieces of string as compass.
Even though these tools look very primitive, you can draw a great number of shapes with them. This became almost like a puzzle game for mathematicians: trying to find ways to “construct” different
geometric shapes using just a straight-edge and compass.
Draw an equilateral triangle using just a straight-edge and compass.
To begin, draw a line segment anywhere in a box on the right. With the line tool selected, simply drag from start to end. This segment will be one of the sides of the triangle.
Next, draw two circles that have one of the endpoints of the line segments as center, and go through the other endpoint. With the circle tool selected, simply drag from one endpoint to the other.
We already have two vertices of the triangle, and the third one is the intersection of the two circles. Use the line tool again to draw the two missing sides and complete the triangle.
Now, two of these sides are radii of the first circle and two are radii of the second circle, and both circles have the same radius, so all three sides must have the same length. In other words, all three sides of the triangle are congruent – and therefore it is indeed an equilateral triangle.
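The construction can be mimicked numerically: place the initial segment, intersect the two circles, and check that all three sides come out equal. This is only a coordinate-geometry sketch, not the compass construction itself:

```python
# Numerically mimicking the straight-edge-and-compass construction:
# two circles of radius |AB| centred at A and B intersect at the third vertex.
import math

ax, ay = 0.0, 0.0   # endpoint A of the initial segment
bx, by = 1.0, 0.0   # endpoint B (segment placed along the x-axis for simplicity)
r = math.dist((ax, ay), (bx, by))  # both circles have radius |AB|

# For equal circles centred at A and B, the intersections lie on the
# perpendicular bisector of AB, at height sqrt(r^2 - (|AB|/2)^2) above it.
mx, my = (ax + bx) / 2, (ay + by) / 2
h = math.sqrt(r**2 - (r / 2) ** 2)
cx, cy = mx, my + h   # take the upper intersection as vertex C

sides = [math.dist((ax, ay), (bx, by)),
         math.dist((bx, by), (cx, cy)),
         math.dist((cx, cy), (ax, ay))]
print(sides)  # all three sides equal: the triangle is equilateral
```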
Midpoints and Perpendicular Bisectors
COMING SOON – Constructing Midpoints and Perpendicular Bisectors
Angle Bisectors
COMING SOON – Constructing Angle Bisectors
Impossible Constructions
In the next chapter, we will see even more shapes that can be constructed like this. However, there is a limit to Euclidean geometry: some constructions are simply impossible using just straight-edge
and compass.
According to legend, the city of Delos in ancient Greece was once faced with a terrible plague. The oracle in Delphi told them that this was a punishment from the gods, and the plague would go away
if they built a new altar for their temple that was exactly twice the volume of the existing one.
Note that doubling the volume is not the same as doubling an edge of the cube. In fact, if the volume increases by a factor of 2, the edge of the cube will only increase by a factor of the cube root of 2 (about 1.26).
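In numbers, the required edge scaling looks like this:

```python
# Doubling a cube's volume scales each edge by the cube root of 2.
edge = 1.0
new_edge = edge * 2 ** (1 / 3)    # about 1.2599
print(new_edge, new_edge ** 3)    # the new volume is (almost exactly) 2
```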
This still sounds pretty simple, but doubling the cube is actually impossible in Euclidean geometry, using only straight-edge and compass! For the citizens of Delos this unfortunately meant that all
hope was lost. There are two other constructions that are famously impossible. Mathematicians devoted a great amount of time trying to find a solution – but without success:
Trisecting the angle
We already know how to bisect angles. However it is impossible to similarly split an angle into three equal parts.
Doubling the cube
Given the edge of a cube, it is impossible to construct the edge of another cube that has exactly twice the volume.
Squaring the circle
Given a circle, it is impossible to construct a square that has exactly the same area.
Note that these problems can all be solved quite easily with algebra, or using marked rulers and protractors. But they are impossible if you are just allowed to use straight-edge and compass.
|
{"url":"https://vi.mathigon.org/course/euclidean-geometry/construction","timestamp":"2024-11-11T23:45:07Z","content_type":"text/html","content_length":"50542","record_id":"<urn:uuid:c4434166-2a1c-483f-a99d-ae0fd572cb51>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00356.warc.gz"}
|
A circle has a center that falls on the line y = 1/7x +4 and passes through ( 5 ,8 ) and (5 ,6 ). What is the equation of the circle? | HIX Tutor
A circle has a center that falls on the line y = (1/7)x + 4 and passes through (5, 8) and (5, 6). What is the equation of the circle?
Answer 1
The equation of the circle is ${\left(x - 21\right)}^{2} + {\left(y - 7\right)}^{2} = 257$

The center is equidistant from $(5,8)$ and $(5,6)$, so it lies on their perpendicular bisector: the line through the midpoint $(5,7)$ parallel to the x-axis, i.e. $y=7$. This line cuts the line $y=x/7+4$ at the center of the circle.

Let $(a,b)$ be the center of the circle. Then $b=7$, and $7=a/7+4$ gives $a=21$.

The center is $(21,7)$, and the radius satisfies $r^2=(21-5)^2+(7-8)^2=257$.
Answer 2
To find the equation of the circle, we need its center and radius. The center is equidistant from the two given points (5, 8) and (5, 6), so it lies on their perpendicular bisector, the horizontal line y = 7. The center also lies on the line y = (1/7)x + 4.

Setting y = 7 in the equation of the line: 7 = (1/7)x + 4, so (1/7)x = 3 and x = 21. The center is therefore (h, k) = (21, 7).

Now find the radius using one of the given points, say (5, 8): r^2 = (5 - h)^2 + (8 - k)^2 = (5 - 21)^2 + (8 - 7)^2 = 256 + 1 = 257.

The equation of a circle is (x - h)^2 + (y - k)^2 = r^2. Substituting the values, we get:

Thus, the equation of the circle is (x - 21)^2 + (y - 7)^2 = 257.
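A quick numerical check of the answer, using the perpendicular-bisector argument:

```python
# Numerical check: the centre lies on y = x/7 + 4 and is equidistant
# from (5, 8) and (5, 6).
p1, p2 = (5.0, 8.0), (5.0, 6.0)

# Equidistance from the two points puts the centre on their perpendicular
# bisector, the horizontal line y = 7; intersect it with y = x/7 + 4.
cy = (p1[1] + p2[1]) / 2          # 7.0
cx = 7 * (cy - 4)                 # from 7 = x/7 + 4  =>  x = 21
r2 = (cx - p1[0]) ** 2 + (cy - p1[1]) ** 2

print((cx, cy), r2)  # (21.0, 7.0) 257.0
```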
{"url":"https://tutor.hix.ai/question/a-circle-has-a-center-that-falls-on-the-line-y-1-7x-4-and-passes-through-5-8-and-8f9afa34ff","timestamp":"2024-11-14T11:58:35Z","content_type":"text/html","content_length":"583593","record_id":"<urn:uuid:c2edd409-6d5d-4753-87eb-06a6ccdd4350>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00065.warc.gz"}
|
Section: New Results
Foundations of Concurrency
Distributed systems have changed substantially in the recent past with the advent of phenomena like social networks and cloud computing. In the previous incarnation of distributed computing the
emphasis was on consistency, fault tolerance, resource management and related topics; these were all characterized by interaction between processes. Research proceeded along two lines: the
algorithmic side which dominated the Principles Of Distributed Computing conferences and the more process algebraic approach epitomized by CONCUR where the emphasis was on developing compositional
reasoning principles. What marks the new era of distributed systems is an emphasis on managing access to information to a much greater degree than before.
A Concurrent Pattern Calculus
In [16] we detailed how Concurrent pattern calculus (CPC) drives interaction between processes by comparing data structures, just as sequential pattern calculus drives computation. By generalising
from pattern matching to pattern unification, interaction becomes symmetrical, with information flowing in both directions. CPC provides a natural language to express trade where information exchange
is pivotal to interaction. The unification allows some patterns to be more discriminating than others; hence, the behavioural theory must take this aspect into account, so that bisimulation becomes
subject to compatibility of patterns. Many popular process calculi can be encoded in CPC; this allows for a gain in expressiveness, formalised through encodings.
An Intensional Concurrent Faithful Encoding of Turing Machines
The benchmark for computation is typically given as Turing computability; the ability for a computation to be performed by a Turing Machine. Many languages exploit (indirect) encodings of Turing
Machines to demonstrate their ability to support arbitrary computation. However, these encodings are usually by simulating the entire Turing Machine within the language, or by encoding a language
that does an encoding or simulation itself. This second category is typical for process calculi that show an encoding of lambda-calculus (often with restrictions) that in turn simulates a Turing
Machine. Such approaches lead to indirect encodings of Turing Machines that are complex, unclear, and only weakly equivalent after computation. In [25] we developed an approach to encoding Turing
Machines into intensional process calculi that is faithful, reduction preserving, and structurally equivalent. The encoding is demonstrated in a simple asymmetric concurrent pattern calculus before
generalised to simplify infinite terms, and to show encodings into Concurrent Pattern Calculus and Psi Calculi.
Expressiveness via Intensionality and Concurrency
Computation can be considered by taking into account two dimensions: extensional versus intensional, and sequential versus concurrent. Traditionally sequential extensional computation can be captured
by the lambda-calculus. However, recent work shows that there are more expressive intensional calculi such as SF-calculus. Traditionally process calculi capture computation by encoding the
lambda-calculus, such as in the pi-calculus. Following this increased expressiveness via intensionality, other recent work has shown that concurrent pattern calculus is more expressive than
pi-calculus. In [26] we formalised the relative expressiveness of all four of these calculi by placing them on a square whose edges are irreversible encodings. This square is representative of a more
general result: that expressiveness increases with both intensionality and concurrency.
On the Expressiveness of Intensional Communication
The expressiveness of communication primitives has been explored in a common framework based on the pi-calculus by considering four features: synchronism (asynchronous vs synchronous), arity (monadic
vs polyadic data), communication medium (shared dataspaces vs channel-based), and pattern-matching (binding to a name vs testing name equality). In [27] pattern-matching is generalised to account for
terms with internal structure such as in recent calculi like Spi calculi, Concurrent Pattern Calculus and Psi calculi. This explored intensionality on terms, in particular communication primitives
that can match upon both names and structures. By means of possibility/impossibility of encodings, we showed that intensionality alone can encode synchronism, arity, communication-medium, and
pattern-matching, yet no combination of these without intensionality can encode any intensional language.
Weak CCP Bisimilarity with Strong Procedures
Concurrent constraint programming (CCP) is a well-established model for concurrency that singles out the fundamental aspects of asynchronous systems whose agents (or processes) evolve by posting and
querying (partial) information in a global medium. Bisimilarity is a standard behavioral equivalence in concurrency theory. However, only recently a well-behaved notion of bisimilarity for CCP, and a
CCP partition refinement algorithm for deciding the strong version of this equivalence have been proposed. Weak bisimilarity is a central behavioral equivalence in process calculi and it is obtained
from the strong case by taking into account only the actions that are observable in the system. Typically, the standard partition refinement can also be used for deciding weak bisimilarity simply by
using Milner's reduction from weak to strong bisimilarity; a technique referred to as saturation. In [17] we demonstrated that, because of its involved labeled transitions, the above-mentioned
saturation technique does not work for CCP. We gave an alternative reduction from weak CCP bisimilarity to the strong one that allows us to use the CCP partition refinement algorithm for deciding
this equivalence.
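As a general illustration of the partition-refinement idea mentioned above (this is a naive generic sketch for strong bisimilarity on a finite labelled transition system, not the CCP-specific algorithm of the paper):

```python
# Naive partition refinement for strong bisimilarity on a finite LTS.
def bisimulation_classes(states, transitions):
    """transitions: dict mapping (state, label) -> iterable of successors."""
    labels = sorted({label for (_, label) in transitions})
    block = {s: 0 for s in states}          # start with a single block
    while True:
        # A state's signature: its current block plus, per label, the set of
        # blocks it can step into.
        def signature(s):
            return (block[s], tuple(
                (l, frozenset(block[t] for t in transitions.get((s, l), ())))
                for l in labels))
        sigs = {s: signature(s) for s in states}
        ids, new_block = {}, {}
        for s in states:
            new_block[s] = ids.setdefault(sigs[s], len(ids))
        if new_block == block:   # stable: blocks are the bisimilarity classes
            return block
        block = new_block

# p and r mirror each other's behaviour; q has no transitions at all.
lts = {("p", "a"): {"q"}, ("r", "a"): {"q"}}
classes = bisimulation_classes(["p", "q", "r"], lts)
print(classes)
```

Each round splits blocks whose states can reach different sets of blocks under some label; the loop stops once the partition is stable.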
Efficient Algorithms for Program Equivalence for Confluent Concurrent Constraint Programming
While the foundations and principles of CCP (e.g., semantics, proof systems, axiomatizations) have been thoroughly studied over the last two decades, the development of algorithms and automatic verification procedures for CCP has hitherto received far too little consideration. To the best of our knowledge there is only one existing verification algorithm for the standard notion of CCP program (observational) equivalence. In [18] we first showed that this verification algorithm has an exponential-time complexity even for programs from a representative sub-language of CCP; the
summation-free fragment (CCP+). We then significantly improved on the complexity of this algorithm by providing two alternative polynomial-time decision procedures for CCP+ program equivalence. Each
of these two procedures has an advantage over the other. One has a better time complexity. The other can be easily adapted for the full language of CCP to produce significant state space reductions.
The relevance of both procedures derives from the importance of CCP+. This fragment, which has been the subject of many theoretical studies, has strong ties to first-order logic and an elegant
denotational semantics, and it can be used to model real-world situations. Its most distinctive feature is that of confluence, a property we exploited to obtain our polynomial procedures.
A Behavioral Congruence for Concurrent Constraint Programming with Nondeterministic Choice
Weak bisimilarity is one of the most representative notions of behavioral equivalence for models of concurrency. As we mentioned earlier, a notion of weak bisimilarity, called weak saturated barbed
bisimilarity (wsbb), was recently proposed for CCP. This equivalence improves on previous bisimilarity notions for CCP that were too discriminating and it is a congruence for the choice-free fragment
of CCP. In [29] , however, we showed that wsbb is not a congruence for CCP with nondeterministic choice. We then introduced a new notion of bisimilarity, called weak full bisimilarity (wfb), and
showed that it is a congruence for the full language of CCP. We also showed the adequacy of wfb by establishing that it coincides with the congruence induced by closing wsbb under all contexts. The
advantage of the new definition is that, unlike the congruence induced by wsbb, it does not require quantifying over infinitely many contexts.
Abstract Interpretation of Temporal Concurrent Constraint Programs
Timed Concurrent Constraint Programming (tcc) is a declarative model for concurrency offering a logic for specifying reactive systems, i.e. systems that continuously interact with the environment.
The universal tcc formalism (utcc) is an extension of tcc with the ability to express mobility. Here mobility is understood as communication of private names as typically done for mobile systems and
security protocols. In [15] we considered the denotational semantics for tcc, and we extended it to a "collecting" semantics for utcc based on closure operators over sequences of constraints. Relying
on this semantics, we formalized a general framework for data flow analyses of tcc and utcc programs by abstract interpretation techniques. The concrete and abstract semantics we proposed are
compositional, thus allowing us to reduce the complexity of data flow analyses. We showed that our method is sound and parametric with respect to the abstract domain. Thus, different analyses can be
performed by instantiating the framework. We illustrated how it is possible to reuse abstract domains previously defined for logic programming to perform, for instance, a groundness analysis for tcc
programs. We showed the applicability of this analysis in the context of reactive systems. Furthermore, we made use of the abstract semantics to exhibit a secrecy flaw in a security protocol. We also
showed how it is possible to make an analysis which may show that tcc programs are suspension free. This can be useful for several purposes, such as for optimizing compilation or for debugging.
Bisimulation for Markov Decision Processes through Families of Functional Expressions
In [24], we transferred a notion of quantitative bisimilarity for labelled Markov processes to Markov decision processes with continuous state spaces. This notion takes the form of a pseudometric on
the system states, cast in terms of the equivalence of a family of functional expressions evaluated on those states and interpreted as a real-valued modal logic. Our proof amounted to a slight
modification of previous techniques used to prove equivalence with a fixed-point pseudometric on the state-space of a labelled Markov process and making heavy use of the Kantorovich probability
metric. Indeed, we again demonstrated equivalence with a fixed-point pseudometric defined on Markov decision processes; what is novel is that we recasted this proof in terms of integral probability
metrics defined through the family of functional expressions, shifting emphasis back to properties of such families. The hope is that a judicious choice of family might lead to something more
computationally tractable than bisimilarity whilst maintaining its pleasing theoretical guarantees. Moreover, we used a trick from descriptive set theory to extend our results to MDPs with bounded
measurable reward functions, dropping a previous continuity constraint on rewards and Markov kernels.
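The pseudometric above is cast in terms of the Kantorovich probability metric. The paper's construction is not reproduced here; purely as a generic illustration of that metric, the sketch below computes the Kantorovich (Wasserstein-1) distance between two discrete distributions on the real line via the CDF-difference formula — a simplification, since the paper works with continuous state spaces.

```python
from itertools import accumulate

def kantorovich_1d(support, p, q):
    """Kantorovich (Wasserstein-1) distance between discrete distributions
    p and q over the same sorted real support, via the CDF-difference
    formula: W1 = sum_i |F_p(x_i) - F_q(x_i)| * (x_{i+1} - x_i)."""
    Fp = list(accumulate(p))  # cumulative mass of p at each support point
    Fq = list(accumulate(q))  # cumulative mass of q at each support point
    return sum(abs(Fp[i] - Fq[i]) * (support[i + 1] - support[i])
               for i in range(len(support) - 1))

# Moving all mass a distance of 1 costs exactly 1:
print(kantorovich_1d([0.0, 1.0], [1.0, 0.0], [0.0, 1.0]))  # 1.0
```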
|
{"url":"https://radar.inria.fr/report/2014/comete/uid43.html","timestamp":"2024-11-09T01:09:59Z","content_type":"text/html","content_length":"54200","record_id":"<urn:uuid:10250bc9-33a4-4970-996f-767498a04189>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00239.warc.gz"}
|
Design Guidelines for Skeleton Slot Antennas: A Simulation-Driven Approach › AN-SOF Antenna Simulation Software
Dive into the intricacies of Skeleton Slot antennas. Explore optimal designs, balancing geometry parameters, and leveraging simulation tools. Ideal for both engineers and enthusiasts!
This article navigates the intricacies of Skeleton Slot antennas, exploring their sensitivity to geometric parameters and the transformative impact of a simulation-driven methodology. The Skeleton
Slot is treated as an array of two loop antennas with a common feed point. We delve into the balance of loop perimeters, conductor radii, and aspect ratios, unraveling their influence on output
parameters such as input impedance, VSWR, and gain. We present a script-driven approach to optimize designs, empowering engineers and enthusiasts to craft high-performance Skeleton Slot antennas.
Bridging theory and application, the article showcases practical insights, making it an essential resource for anyone seeking to elevate their radio frequency design projects.
Bill Sykes (call sign G2HCG) is acknowledged as the innovator behind the Skeleton Slot antenna, having successfully deployed it in VHF bands. The inherently versatile Skeleton Slot principle extends
its utility to HF communication bands by scaling dimensions based on wavelength, with the physical antenna dimensions remaining practical within the 14-28 MHz bands. Noteworthy advantages of this
design include its lightweight nature, ease of construction, low-angle radiation, bi-directional directivity, and the convenience of mounting it as a simple metal framework without the need for a solid conducting sheet.
The nomenclature “Skeleton Slot” is derived from the slot antenna concept. This aperture antenna is crafted by cutting a rectangular hole in a conducting sheet, essentially serving as a “photographic
negative” of a dipole, where the slot functions as the radiating element. Reducing the metal sheet until it transforms into a rectangular wire frame results in the formation of the “skeleton slot.”
In our previous article, “A Closer Look at the HF Skeleton Slot Antenna,” we introduced a Skeleton model in AN-SOF and presented the results for the 15m (20 MHz) band. Expanding upon that analysis,
this article delves into a comprehensive discussion of the skeleton slot from a general perspective, supported by the theory of loop antennas. This approach complements the insights provided by the
inventor in the January 1955 issue of The Short Wave Magazine in the article titled “The Skeleton Slot Aerial System” (Vol. XII, No. 11, pp. 594-598). In that article, the author describes the antenna as an array of two closely positioned dipoles. Furthermore, we offer dimensioning guidelines for experimenters keen on venturing into antenna construction.
Geometry of Skeleton Slot and Loop Antennas
In Figure 1, a schematic representation of the Skeleton Slot antenna is presented, highlighting key dimensions:
• L: Loop length (the total length of the skeleton slot is 2L).
• w: Loop width.
• p = 2(L + w): Loop perimeter.
• r = L/w: Loop aspect ratio.
Fig. 1: Schematic representation of the Skeleton Slot antenna, highlighting key dimensions.
The skeleton slot, depicted in Figure 1, functions as a vertical antenna that can be conceptualized as an array comprising two identical, closely coupled loops—a top loop and a bottom loop. These
loops share a common feed point located at the antenna’s center, where the feeding transmission line is connected.
In adherence to loop theory, when the loop contour spans approximately half a wavelength, it exhibits an input impedance transitioning from inductive to capacitive. This shift is characterized by
high resistance and reactance values, indicative of a resonance akin to that observed in a parallel RLC circuit. The validation article “Input Impedance and Directivity of Large Circular Loops” illustrates the variation of the input impedance with the loop circumference measured in wavelengths, C/λ (see Figure 2 below). As the loop circumference approaches one
wavelength, the capacitive (negative) reactance decreases in absolute value, reaching resonance similar to a series RLC circuit when the reactance approaches zero. Consequently, the resistance
assumes manageable values in practice, approximately around 100 Ohms. The “useful zone” of the loop in practice is identified when C/λ ≈ 1, as shown in Figure 2.
While Figure 2 refers to circular loops, the analogous behavior is applicable to rectangular loops as well. Therefore, the practical utility of the loop is realized when its perimeter, denoted as
‘p,’ approaches one wavelength (p/λ ≈ 1).
Fig. 2: Input impedance of circular-loop antennas as a function of the normalized loop circumference, C/λ, for Ω = 2 ln(C/a) = 10 (‘a’ being the wire radius).
A further observation drawn from loops with circumferences comparable to the wavelength is the pronounced sensitivity of reactance to variations in the wire radius. This sensitivity manifests in a
logarithmic manner, specifically proportional to ln(C/a), where ‘a’ denotes the wire radius.
Given its configuration, as previously mentioned, the Skeleton Slot antenna can be viewed as comprising two tightly coupled rectangular loops. Consequently, we can anticipate a behavior analogous to
that described for loops in general.
Maintaining a constant loop aspect ratio (r = L/w) and conducting numerous calculations while varying the loop perimeter (p = 2(L + w)), we observe that the Skeleton Slot resonates when the loop
perimeter is approximately one wavelength (p ≈ λ), aligning with expectations for a single loop. However, it’s crucial to note that this perimeter isn’t precisely equal to one wavelength; its value
fluctuates based on the aspect ratio (L/w) and the wire radius compared to the loop perimeter (a/p). While the specific results of these calculations fall beyond the scope of this article, we will
concentrate on the behavior of the skeleton slot when the perimeter of each loop approximates one wavelength. In this condition, the antenna approaches self-resonance, obviating the necessity for an
impedance matching network at the feed point.
In Sykes’ article, the author employs the aspect ratio of the Skeleton Slot, expressed as 2L/w = 2r, rather than that of each individual loop. Through multiple measurements, the conditions for
achieving a self-resonant antenna are outlined as follows:
1. An optimal aspect ratio of 3:1, i.e., 2r = 3, leading to r = 3/2 = 1.5 based on our definition.
2. The total length of the skeleton must be 2L = 0.56λ, so the loop length is L = 0.28λ.
3. The ratio of width to conductor diameter must be 32:1, denoted as w/(2a) = 32.
Given L = 0.28λ and r = L/w = 1.5, the resulting perimeter is calculated as p = 2 (0.28λ + 0.28λ/1.5) = 0.93λ. This closely aligns with our simulation calculations, indicating resonance when the loop
perimeter approximates one wavelength. However, it’s essential to note that this resonance condition varies with the ratio of perimeter to conductor radius, denoted as p/a, rather than the ratio w/
(2a). Subsequent results, presented in the following sections, illustrate that a thicker conductor necessitates an increased loop perimeter for the antenna to be self-resonant with a given aspect
ratio. Conversely, a thinner conductor requires a decreased loop perimeter for the same resonant condition.
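As a quick check of the arithmetic above, the following sketch recomputes the loop perimeter implied by Sykes' conditions (r = 1.5, L = 0.28λ), working in units of the wavelength:

```python
r = 1.5          # loop aspect ratio L/w (Sykes: 2r = 3)
L = 0.28         # loop length in wavelengths (total skeleton length 2L = 0.56)
w = L / r        # loop width in wavelengths
p = 2 * (L + w)  # loop perimeter p = 2(L + w), in wavelengths
print(round(p, 2))  # 0.93
```

This reproduces the 0.93λ perimeter quoted in the text, close to the p ≈ λ resonance condition.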
Script for Varying the Loop Aspect Ratio
A pivotal inquiry in Skeleton Slot antenna design revolves around determining the optimal aspect ratio. Is there a specific aspect ratio that outperforms others? This section aims to delve into this
question, with the pursuit of an “optimal” point focusing on achieving a self-resonant antenna, thereby obviating the need for a matching network. In Sykes’ investigation, a conductor with a radius
of 4.76mm (rounded up to 5mm in our study) was employed, corresponding to a 3/16″ radius (3/8″ diameter).
For our exploration, we maintain a fixed conductor radius of 5mm, and we ensure that the perimeter of each loop remains close to one wavelength, as previously discussed. Simulations conducted using
AN-SOF are set at a frequency of 20 MHz (15-meter band). Importantly, the conclusions drawn from these simulations hold true for any frequency band, contingent upon scaling the antenna dimensions
proportionally with the wavelength. Naturally, the resulting physical dimensions at a given frequency must be practical for constructing the antenna in practice.
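The scaling rule mentioned above (all lengths proportional to the wavelength) can be sketched with a small helper; the 14.8 m perimeter used in this article at 20 MHz serves as the example input:

```python
def scale_length(length_m, f_old_mhz, f_new_mhz):
    """Keep a dimension proportional to the wavelength when changing bands:
    lambda scales as 1/f, so lengths scale by f_old / f_new."""
    return length_m * f_old_mhz / f_new_mhz

# The 14.8 m self-resonant loop perimeter found at 20 MHz,
# moved down to the 14 MHz band:
print(round(scale_length(14.8, 20.0, 14.0), 2))  # 21.14
```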
To perform calculations with varying geometric parameters, we can leverage the “Run Bulk Simulation” function in AN-SOF in conjunction with a script in Scilab. For those unfamiliar with script
programming, a comprehensive tutorial on antenna-related scripts is available in the article “Element Spacing Simulation Script for Yagi-Uda Antennas”, specifically focusing on Yagis with variable
element spacing.
Description of Script Elements
Below, the script is provided to generate multiple files in .nec format, where the loop aspect ratio, L/w, is systematically altered while maintaining a fixed perimeter, p. Through multiple
simulations, we have determined that the “optimal” p value, rendering the antenna self-resonant for a broad range of L/w ratios, is p = 14.8m at 20 MHz, corresponding to p = 0.99λ.
When the perimeter p is held constant and the loop aspect ratio is varied (r = L/w), the antenna dimensions can be calculated using the following formulas: w = p / (2(r + 1)) and L = r·w.
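Assuming the same relations used in the script (w = 0.5·p/(r + 1) and L = r·w), a minimal sketch of the dimension calculation is:

```python
def loop_dimensions(p, r):
    """Loop length L and width w for a fixed perimeter p = 2(L + w)
    and aspect ratio r = L/w."""
    w = 0.5 * p / (r + 1)
    L = r * w
    return L, w

# Perimeter p = 14.8 m (about 0.99 lambda at 20 MHz), aspect ratio 1.8:
L, w = loop_dimensions(14.8, 1.8)
print(round(L, 2), round(w, 2))  # 4.76 2.64
```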
To expedite the task, consider creating a Skeleton Slot antenna model in AN-SOF or downloading the model provided in this article. Then, in AN-SOF, navigate to the File menu, select “Export Wires,”
choose the file format “.sce,” and save the file. Subsequently, open the .sce file with Scilab and make the modifications as illustrated below:
// Script for AN-SOF Professional
// Skeleton Slot Antenna with varying aspect ratio
r_min = 1.0; // Min loop aspect ratio
r_max = 2.5; // Max loop aspect ratio
n = 20; // Number of intervals between r_min and r_max
f = 20.0; // Frequency in MHz
k = 0.987; // Factor for loop perimeter
p = k*299.8/f; // Loop perimeter [m] (299.8/f = wavelength at f MHz)
radius = 5; // Wire radius in [mm]
S = 11; // Number of segments per wire (it must be odd)
for i = 0:n,
r = r_min + i*(r_max-r_min)/n; // Loop aspect ratio
w = 0.5*p/(r+1); // Loop width
L = r*w; // Loop length (total length of skeleton slot = 2L)
antenna = [
CM('Skeleton Slot Antenna')
CM('Loop length-to-width ratio = ' + string(r))
GW(1, S, 0, -0.5*w, 0, 0, 0.5*w, 0, radius*1e-3)
GW(2, S, 0, 0.5*w, -L, 0, -0.5*w, -L, radius*1e-3)
GW(3, S, 0, -0.5*w, L, 0, -0.5*w, 0, radius*1e-3)
GW(4, S, 0, 0.5*w, 0, 0, 0.5*w, L, radius*1e-3)
GW(5, S, 0, -0.5*w, -L, 0, -0.5*w, 0, radius*1e-3)
GW(6, S, 0, 0.5*w, 0, 0, 0.5*w, -L, radius*1e-3)
GW(7, S, 0, 0.5*w, L, 0, -0.5*w, L, radius*1e-3)
FR(0, 1, f, 0.0)
EX(0, 1, (S+1)/2, 1.4142136, 0)
]; // Close the "antenna" matrix before writing it to disk
mputl(antenna,'C:/AN-SOF/Skeleton_Ratio' + string(i) + '.nec');
end // Close the "for" loop
This simple script streamlines the process, allowing for efficient exploration of the Skeleton Slot antenna’s behavior under varying loop aspect ratios. It comprises two main elements:
1. Definition of Constants:
– Fixed values for the extremes of the loop aspect ratio variation range.
– Number of intervals ‘n’ to be calculated (with ‘n+1’ discrete points).
– Loop perimeter ‘p’ and wire radius.
– Perimeter ‘p’ numerically adjusted to 3 significant digits, p = 0.987λ.
2. ‘For’ Loop:
– The script contains a “for” loop where the “antenna” matrix is defined. Each row contains commands (CM, GW, GE, FR, EX, EK) used to describe an antenna in NEC format.
– Each generated .nec file (n+1 files) is named “Skeleton_Ratioi.nec” with i = 0, 1, 2, …, n.
This script is complemented by a second script that reads the results from CSV files and represents them graphically in plots. Additionally, there is a third script that contains the functions
associated with NEC commands.
Running the Scripts in Combination with AN-SOF
Here are the steps to run this script, along with the one displaying graphs with results, in combination with AN-SOF:
1. Download the .zip file containing the three necessary scripts: NECcommands.sce, SkeletonSlot.sce, and SkeletonSlotResults.sce.
2. Unzip the file and save the scripts in a folder to run them from Scilab.
3. Start Scilab and open the scripts.
4. Run NECcommands.sce, which contains functions that write NEC commands.
5. Create a folder C:\AN-SOF and run SkeletonSlot.sce. The n+1 “.nec” files will be saved in this folder.
6. In AN-SOF, go to the menu Run > Run Bulk Simulation, navigate to the C:\AN-SOF folder, and select all the generated .nec files (you can press Ctrl + A). AN-SOF will calculate them one by one,
saving the corresponding results in CSV files.
7. Return to Scilab and run the SkeletonSlotResults.sce script. Three graphs will be displayed: the gain, the input impedance, and the VSWR as a function of the loop aspect ratio.
With these scripts, you can obtain results that will be analyzed in the subsequent sections for the input impedance, VSWR, and antenna gain as a function of the loop aspect ratio.
Input Impedance, VSWR, and Gain vs. Aspect Ratio
In Figure 3, the Skeleton Slot input impedance (R[in] + jX[in]) is depicted as a function of the loop aspect ratio, L/w. It’s crucial to note that the loop perimeter remains constant at approximately
one wavelength, p ≈ λ, resulting in variable antenna length and width to uphold the constant perimeter. The relative sizes of the Skeleton Slot for three aspect ratios—L/w = 1, 1.8, and 2.5—are
illustrated at the bottom of Figure 3. Note that, when L/w = 1, the loops form squares (L = w). These outcomes have been calculated for a conductor radius of 5mm.
Fig. 3: (Top) Input impedance of Skeleton Slot antenna as a function of the loop aspect ratio. (Bottom) Relative sizes of the skeleton slot for three loop aspect ratios—L/w = 1, 1.8, and 2.5.
The input impedance unveils an intriguing property: commencing at an aspect ratio of 1.7, the antenna maintains self-resonance (X[in] = 0) as the aspect ratio increases. The reactance curve (X[in])
is notably flat with values that are practically manageable even when the loops form squares (L/w = 1). However, the input resistance, R[in], exhibits a more pronounced variation, initiating at 140
Ohms for L/w = 1 and steadily decreasing to approximately 30 Ohms for L/w = 2.5. The input impedance approaches 50 + j0 Ohms at L/w ≈ 1.8. This suggests an optimal point where the antenna achieves
self-resonance without requiring an impedance matching network.
Within the range of L/w spanning from 1.6 to 2, we observe a practical sweet spot with the input resistance falling between 40 and 60 Ohms and the reactance approaching zero. Figure 4 (top)
illustrates the Voltage Standing Wave Ratio (VSWR) as a function of the loop aspect ratio, considering a reference impedance of 50 Ohms. The “useful” range for VSWR falls within values of L/w between
1.6 and 2. Additionally, Figure 4 (bottom) showcases the gain of the skeleton slot, demonstrating a monotonic increase with the loop aspect ratio. Opting for L/w = 2 becomes advantageous if the
design objective is to maximize gain.
Fig. 4: (Top) VSWR of Skeleton Slot antenna as a function of the loop aspect ratio (reference impedance of 50 Ohms). (Bottom) Gain of Skeleton Slot antenna as a function of the loop aspect ratio.
In the subsequent sections, we will delve into an analysis of the skeleton slot’s sensitivity to variations in loop perimeter and conductor radius. This exploration will contribute to the
establishment of simulation-driven design guidelines, enabling a more informed and optimized design process.
Sensitivity to the Loop Perimeter Around One Wavelength
With the established optimal loop perimeter for achieving a self-resonant antenna at p = 0.99λ, we explore the impact of a ±1% change in this perimeter. For a frequency of 20 MHz, corresponding to a
wavelength of 15 meters, this adjustment would equate to a ±15 cm change in perimeter. Figure 5 (top) illustrates the input impedance as a function of the loop aspect ratio for three different
perimeters: 0.99p, 1.00p, 1.01p, where p = 0.99λ.
Fig. 5: (Top) Input impedance, (middle) VSWR, and (bottom) gain of Skeleton Slot antenna as a function of the loop aspect ratio for three different loop perimeters: 0.99p, 1.00p, 1.01p, where p =
0.99λ is the self-resonance loop perimeter.
Notably, the resistive part (R[in]) demonstrates minimal variation with changes in perimeter, whereas the reactive part (X[in]) undergoes a significant alteration. The sensitivity of the reactive
part to changes in p is notably higher. An increase in perimeter results in an augmented reactance (X[in]), while a decrease in perimeter leads to a diminished reactance. This observation suggests
that the perimeter of the loops can serve as a tuning parameter for the antenna. If, for a given loop aspect ratio L/w, the antenna is not self-resonant (X[in] ≠ 0), adjustments can be made by
increasing the loop perimeter when X[in] < 0 and decreasing it when X[in] > 0. Consequently, the antenna can always be tuned to a self-resonant state, provided the ability to adjust its physical
dimensions, manipulating both the loop perimeter and aspect ratio.
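The tuning rule above — lengthen the loops when X[in] < 0, shorten them when X[in] > 0 — amounts to a root search on the perimeter. The sketch below illustrates it with a bisection loop; `reactance` is a hypothetical stand-in for re-running the simulation at each candidate perimeter, not an AN-SOF API.

```python
def tune_perimeter(reactance, p_lo, p_hi, tol=1e-6):
    """Drive the input reactance X_in to zero by bisecting on the loop
    perimeter. `reactance(p)` must increase with p across [p_lo, p_hi]."""
    while p_hi - p_lo > tol:
        p_mid = 0.5 * (p_lo + p_hi)
        if reactance(p_mid) < 0:
            p_lo = p_mid  # X_in < 0: the perimeter must grow
        else:
            p_hi = p_mid  # X_in > 0: the perimeter must shrink
    return 0.5 * (p_lo + p_hi)

# Toy reactance model (made up for illustration) crossing zero at
# p = 0.99 wavelengths:
p_res = tune_perimeter(lambda p: 400.0 * (p - 0.99), 0.9, 1.1)
print(round(p_res, 3))  # 0.99
```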
In the central part of Figure 5, the Voltage Standing Wave Ratio (VSWR) is presented as a function of the loop aspect ratio. The observed variation in VSWR is predominantly attributed to changes in
reactance resulting from adjustments in the loop perimeter.
In our model, the antenna is considered in free space without the presence of a ground plane, and no resistivity has been added to the conductors, effectively eliminating power losses. This
deliberate choice allows us to isolate the ideal behavior of the skeleton slot and analyze its parameters independently. In an antenna devoid of ohmic losses, the resistive component of its input
impedance equals its “radiation resistance.” The gain of a lossless antenna is inversely proportional to the radiation resistance. If this resistance remains insensitive to changes in perimeter, we
can anticipate a corresponding insensitivity in gain. This expectation is affirmed in Figure 5 (bottom), where the gain is depicted as a function of the loop aspect ratio for the three distinct
values of loop perimeter used in the upper graphs.
Effect of Changing the Conductor Radius
The investigation explores the impact of changing the conductor radius on the input impedance, VSWR, and gain as a function of the loop aspect ratio. It is widely recognized that loops with a
circumference close to one wavelength exhibit higher reactance for thinner wire radii. To quantify loop thickness, the loop perimeter to wire radius ratio, p/a, is commonly used. However, since the
loop perimeter is held constant in our analysis, we solely vary the radius (a = 5mm) of the example model.
Fig. 6: (Top) Input impedance, (middle) VSWR, and (bottom) gain of Skeleton Slot antenna as a function of the loop aspect ratio for three different conductor radii: a = 2.5mm, 5mm, and 7.5mm.
At the top of Figure 6, the input impedance of the Skeleton Slot is depicted as a function of the loop aspect ratio for three different conductor radii: a = 2.5mm, a = 5mm, a = 7.5mm. As observed,
reactance increases with a thinner conductor and decreases with a thicker conductor. This observation, combined with insights from Figure 5 in the previous section, leads to the conclusion that the
loop perimeter required for a self-resonant antenna is influenced by the wire radius. Given a specific loop wire thickness, p/a, the exact value of p for self-resonance depends on p/a itself, so we
can write p(self-resonance) = k(p/a) λ, where k(p/a) is near 1. While it is not within the scope of this study to generate curves illustrating the behavior of the factor k(p/a), simulation tools like
AN-SOF allow for a simulation-driven design, a topic to be discussed in the next section.
In the middle of Figure 6, the VSWR behavior is presented, with variations predominantly attributed to changes in input reactance. The bottom graph in Figure 6 illustrates the antenna gain,
demonstrating no sensitivity to the wire radius, as expected, since the radiation resistance also remains insensitive to the wire radius, as indicated in the top graph of Figure 6.
Simulation-Driven Design of a Skeleton Slot Antenna
Having established the optimal relationships among the geometric parameters of the Skeleton Slot antenna, conceptualized as two closely coupled loops sharing a feeding point, we can now outline a
procedural approach for designing such antennas using simulation tools like AN-SOF.
In practical scenarios, it’s common to have a conductor or wire with a specific diameter. Therefore, our initial step will involve setting the wire radius as a fixed parameter, followed by running
simulations with slight variations in the perimeter of each loop, starting with p = λ as a reference. The previously described script can be employed for this purpose, keeping the perimeter constant
while adjusting the loop aspect ratio.
Following this, the subsequent step is to identify the loop aspect ratio that maximizes gain within an acceptable VSWR range. To illustrate this procedure, we will present example calculations for HF
and VHF, namely for operating frequencies of 14 MHz and 145 MHz, respectively.
HF Skeleton Slot Antenna
We will take as a reference the same example presented in Sykes’ article in “The Short Wave Magazine.” For applications in the HF band, at an operating frequency of 14 MHz, in accordance with the
Sykes criterion (loop width to conductor diameter of 32:1), a 4¾-inch wire would be required—an impractical dimension. The author suggests using multiple wires (e.g., 6) to form a circular contour
with the desired diameter. However, in our demonstration, we aim to show that achieving a self-resonant antenna with a 3/8″ diameter conductor is indeed feasible.
Figure 7 illustrates the results for input impedance and VSWR obtained from the script with the following input parameters:
f = 14.0; // Frequency in MHz
k = 0.987; // Factor for loop perimeter
p = k*299.8/f; // Loop perimeter [m] (299.8/f = wavelength at f MHz)
radius = (3/16)*25.4; // Wire radius in [mm]
S = 7; // Number of segments per wire (it must be odd)
Fig. 7: (Left) Input impedance and (right) VSWR of HF Skeleton Slot antenna as a function of the loop aspect ratio calculated at 14 MHz, for a wire radius of 3/16″.
The gain is not displayed since it closely resembles the previously shown results. Figure 7 illustrates that the optimal loop perimeter is maintained with a factor k = 0.987, ensuring the antenna is
self-resonant at 14 MHz. The chosen design point is L/w = 1.825, precisely where the VSWR exhibits a dip. Following this, we open the corresponding AN-SOF file (Skeleton_Ratio11.emm) and perform a
frequency sweep around the central frequency of 14 MHz.
Figure 8 portrays the VSWR as a function of frequency for the Skeleton Slot with L/w = 1.825. The observation indicates an achieved bandwidth of almost 600 kHz (for VSWR < 2), equivalent to 4.3% in
the 14 MHz band. The gain obtained is 5.5 dBi.
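The fractional-bandwidth figure quoted here follows from a one-line ratio; the sketch below (plain Python, not part of the AN-SOF toolchain) reproduces it:

```python
def fractional_bandwidth(bw_hz, f_center_hz):
    """Bandwidth expressed as a percentage of the center frequency."""
    return 100.0 * bw_hz / f_center_hz

# 600 kHz of VSWR < 2 bandwidth around 14 MHz:
print(round(fractional_bandwidth(600e3, 14e6), 1))  # 4.3
```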
If we were to choose L/w = 2.05 and conduct a frequency sweep, we would notice a reduced bandwidth of 500 kHz, signifying that making the skeleton slot slimmer is no longer advantageous, despite
yielding slightly higher gain.
Fig. 8: VSWR as a function of frequency around 14 MHz for the HF Skeleton Slot antenna with loop aspect ratio L/w = 1.825. The wire radius is 3/16″.
VHF Skeleton Slot Antenna
For the operation of the Skeleton Slot at 145 MHz, we maintain the same conductor diameter of 3/8″ (wire radius, a = 3/16″). In this case, by executing the script with the same perimeter factor as
the one used before (k = 0.987, with the loop perimeter being p = kλ), we obtain a negative input reactance. As we learned in the previous sections, we will then need to lengthen the loop perimeter
to increase the reactance and approach resonance.
Through several calculations (not shown here), we determined that the optimal value is k = 1.03, for which the results are shown in Figure 9. Therefore, a loop perimeter that is 3% longer than a
wavelength is necessary in this case to obtain a self-resonant antenna in a wide range of loop aspect ratios. The chosen design point is L/w = 1.9.
f = 145.0; // Frequency in MHz
k = 1.03; // Factor for loop perimeter
p = k*299.8/f; // Loop perimeter [m] (299.8/f = wavelength at f MHz)
radius = (3/16)*25.4; // Wire radius in [mm]
S = 7; // Number of segments per wire (it must be odd)
Fig. 9: (Left) Input impedance and (right) VSWR of VHF Skeleton Slot antenna as a function of the loop aspect ratio calculated at 145 MHz, for a wire radius of 3/16″.
By opening the file corresponding to this aspect ratio (Skeleton_Ratio12.emm for L/w = 1.9) with AN-SOF and performing a frequency sweep around 145 MHz, we obtain the VSWR curve shown in Figure 10.
In this case, the obtained bandwidth is 9.5 MHz (for VSWR < 2), which represents 6.6% with respect to the center frequency of 145 MHz. The gain obtained is 5.7 dBi.
Fig. 10: VSWR as a function of frequency around 145 MHz for the VHF Skeleton Slot antenna with loop aspect ratio L/w = 1.9. The wire radius is 3/16″.
With this last example, we believe we have covered the design of Skeleton Slot antennas in a depth that perhaps has not been done before. We complete the information for the designer with a few words
about the number of segments, set using the “S” variable in the script. Since each loop has a perimeter of one wavelength, the total number of segments used per wavelength is 4S. Through comparisons
with theoretical data for the loops, we have established that about 30 or 40 segments per wavelength are sufficient to reproduce theoretical data. Note that when S = 11 in the Skeleton Slot example
for 20 MHz, we have 44 segments per λ, while with S = 7, we have 28 segments per λ. If you have measured data at hand, it is advisable that the number of segments be increased until the simulation
model reproduces these experimental data.
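The segment-density rule described above is simple enough to state as a helper; the two values it returns match the S = 11 and S = 7 cases used in the examples:

```python
def segments_per_wavelength(S):
    """Each loop spans about one wavelength over 4 wires of S segments
    each, so the segment density is 4*S segments per wavelength."""
    return 4 * S

print(segments_per_wavelength(11), segments_per_wavelength(7))  # 44 28
```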
In this comprehensive article, we conducted an in-depth study of the Skeleton Slot antenna, emphasizing its applicability across frequency bands by normalizing dimensions to the wavelength. While the
theoretical analysis holds true for any frequency, practical construction considerations will be constrained by physical dimensions and installation space available in a specific frequency band.
The Skeleton Slot antenna, conceptualized as an array of two loops with a common feed point, was meticulously examined. We provided a script enabling the alteration of the antenna’s aspect ratio,
generating multiple files for bulk simulation in AN-SOF. This facilitated the extraction of input impedance, VSWR, and gain as functions of the aspect ratio. The results highlighted the antenna’s
self-resonance when the perimeter of each loop is approximately one wavelength.
We explored how the antenna’s behavior changes with variations in loop perimeter and conductor thickness. The optimal design point for the Skeleton Slot was identified as the loop aspect ratio
minimizing VSWR for a given conductor radius. The simulation-driven design methodology can be summarized in the following steps:
1. Choose the conductor diameter for constructing the Skeleton Slot.
2. Define the operating frequency and determine the optimal perimeter using the provided scripts. The self-resonance loop perimeter is typically around one wavelength.
3. Choose the design point by selecting the loop aspect ratio that minimizes VSWR (or maximizes gain, depending on the objective). Conduct a frequency sweep to ascertain the obtained bandwidth.
This simulation-driven design approach is particularly valuable for amateur radio enthusiasts, antenna hobbyists, or RF professionals embarking on projects involving Skeleton Slot antennas. We trust
that this article will serve as a valuable resource for those interested in exploring and implementing Skeleton Slot antenna designs.
|
{"url":"https://antennasimulator.com/index.php/knowledge-base/design-guidelines-for-skeleton-slot-antennas-a-simulation-driven-approach/","timestamp":"2024-11-11T03:18:38Z","content_type":"text/html","content_length":"336820","record_id":"<urn:uuid:c695c983-6348-4322-ab60-c6460055d35d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00822.warc.gz"}
|
How much is 24 oz in ml? - WorkSheets Buddy
Final Answer:
24 fluid ounces is equal to approximately 709.764 milliliters. This is done by multiplying the ounces by the conversion factor of 29.5735 ml per ounce. Conversion between these units is important in
various contexts, such as cooking and medication dosages.
Examples & Evidence:
To convert 24 fluid ounces (oz) to milliliters (ml), we need to use the conversion factor that relates the two units:
1 fluid ounce is approximately 29.5735 milliliters.
Now, we can perform the conversion by following these steps:
1. Identify the number of ounces: We start with 24 oz.
2. Use the conversion factor: We know that 1 oz = 29.5735 ml.
3. Multiply the number of ounces by the conversion factor:
24 oz×29.5735 ml/oz=709.764 ml
Thus, 24 oz is equal to approximately 709.764 ml.
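The steps above reduce to a single multiplication by the conversion factor; a minimal sketch:

```python
ML_PER_FL_OZ = 29.5735  # milliliters per US fluid ounce

def oz_to_ml(oz):
    """Convert US fluid ounces to milliliters."""
    return oz * ML_PER_FL_OZ

print(round(oz_to_ml(24), 3))  # 709.764
```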
Additional Information:
• Units of Measurement: Fluid ounces (oz) are commonly used in the United States, while milliliters (ml) are part of the metric system used in most other countries.
• Why it Matters: Knowing how to convert between these units is important, especially when dealing with recipes or medication dosages where precision is key.
Understanding volume conversions helps in many everyday situations, whether you’re cooking or measuring liquids in scientific contexts.
For example, if a recipe calls for 1 cup of water, knowing that 1 cup is equal to 8 oz can help you convert it to milliliters if using a metric measuring cup, as 8 oz equals about 236.588 ml.
Similarly, understanding these conversions is essential in labs when measuring liquids precisely.
The conversion factor of 29.5735 ml per fluid ounce is widely accepted and used in scientific and culinary contexts, confirming the accuracy of the conversion calculations.
|
{"url":"https://www.worksheetsbuddy.com/how-much-is-24-oz-in-ml/","timestamp":"2024-11-05T07:55:50Z","content_type":"text/html","content_length":"130575","record_id":"<urn:uuid:ed1ea108-f123-4b38-b758-29c7b7196675>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00205.warc.gz"}
|
Time is unity…
Understanding 1 dimensional space
Supersymmetry involves the concept of multidimensional space. In order to understand dimensional spaces higher than three, let’s start with the simplest 1D case, that of a 1D observer – a line. You
might think, well that’s quite easy. In fact it is quite easy, but if you really understand it, you might use your knowledge to understand higher dimensions. The animation below shows the observer as
a grey line, who is trying to perceive a reality (a 2D circle in this case) in his 1D limited mind. The animated blue line is what he perceives. Note that the reality, the circle, is not changing in
time; its radius, colour and all other properties are a part of the reality. The observed thing is quite different from this: it is a blue line varying in length WITH TIME. For the observer, it
remains a mystery as to what happened to the original full length of line, why and how it changes length and ‘pops in and out’ of his ‘observed reality’. Also, the 1D observer has no way to find out
whether the oscillating line is due to observing a circle (2D), a sphere (3D) or hypersphere (D>3). Also, in order for an observation to take place, we need the grey line (1D) observer, to have a
‘thickness’. This thickness is very small, just enough for the observed image to be projected on, similar to a projector screen, but has to be greater than zero.
Understanding 2 dimensional space
Let’s now start analyzing a 2D case, that of the classic Flatland example, in which a person lives in a 2D universe and is only aware of two dimensions (shown as the blue grid), or plane, say in the
x and y direction. Such a person can never conceive the meaning of height in the z direction, he cannot look up or down, and can see other 2D persons as shapes on the flat surface he lives in.
Now we know that 3D space exists, and can conceive that, because we see each other in 3D space. So, what does a 3D reality sphere look like into a 2D plane? The answer is again graphically shown in
the animation, which shows a circle expanding and contracting depending on which slice of the sphere intersects the 2D observation plane. In the 2D plane, the thickness of the plane tends to zero,
but again, cannot be absolute zero. There must be enough thickness for the circle to form and be observed. Thus, the 3D sphere is being differentiated with respect to one of its spatial dimensions (z
in our case) across its diameter.
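The expanding and contracting circle the 2D observer sees is just the cross-section of the sphere at whatever height currently intersects his plane. A small sketch of that formula, with a function name of my choosing:

```python
import math

def slice_radius(R: float, z: float) -> float:
    """Radius of the circle where the plane at height z cuts a sphere of radius R."""
    if abs(z) > R:
        return 0.0  # the sphere has not yet reached (or has already left) the plane
    return math.sqrt(R * R - z * z)

# As the sphere drifts through the plane (z sweeping from -R to R), the observed
# circle grows from a point to the full radius and shrinks back again.
for z in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(round(slice_radius(1.0, z), 3))  # 0.0, 0.866, 1.0, 0.866, 0.0
```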
For the person that lives in 2D, the only way to recognize such a 3D structure is through integrating all the circles he sees, on top of each other. But here is the problem, he cannot imagine
anything ‘on top of each other’. A clever 2D guy has just one simple way to refer to this z-axis, which is constantly differentiating the 3D object, and that is TIME.
Time is unity.
“It’s obvious that from a 4D being’s point of view our 3D time is a still, unchanging variable. So everything we experience as 3D beings is just an illusion. The illusion of going through time. The
4D being sees everything that was and will be for a 3D being. And, of course a 4D being has no way of really understanding a 5D dimension.
The question is, how can we know how many dimensions the universe is made up of? All the arguments mentioned above can be applied to any dimension and would imply the possibility of an infinite-dimensional space. However, other known facts, such as the relationship between the gravitational constant and all the matter in the universe, indicate that the universe is closed and limited. Even mathematics shows us that there are yet unknown reasons for which an ultimate dimension may be reached. One very interesting curve is the plot of the surface area of hyperspheres of different
dimensions, shown below. One would easily think that as we go higher in dimensions, the surface area of the n-sphere would increase at each stage, and yet something very strange occurs: a maximum in its surface area is reached at the 7th dimension. This could easily be the reason for the relentless way energy always seeks the lowest energy levels. Could this indicate the real ultimate dimension of the universe? Most probably yes.
│Dimension │Volume│Area │
│1 │2.0000│2.0000 │
│2 │3.1416│6.2832 │
│3 │4.1888│12.5664 │
│4 │4.9348│19.7392 │
│5 │5.2638│26.3189 │
│6 │5.1677│31.0063 │
│7 │4.7248│33.0734 │
│8 │4.0587│32.4697 │
│9 │3.2985│29.6866 │
│10 │2.5502│25.5016 │
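Assuming the table lists the volume of the unit n-ball and the surface area of its boundary, the values follow from the standard closed forms V_n = π^(n/2)/Γ(n/2+1) and S_n = 2π^(n/2)/Γ(n/2). A sketch that reproduces the table and the peak at dimension 7:

```python
import math

def ball_volume(n: int) -> float:
    """Volume of the unit n-ball: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def sphere_area(n: int) -> float:
    """Surface area of the unit n-ball's boundary: 2 * pi^(n/2) / Gamma(n/2)."""
    return 2 * math.pi ** (n / 2) / math.gamma(n / 2)

# Reproduce the table and find where the surface area peaks.
for n in range(1, 11):
    print(n, round(ball_volume(n), 4), round(sphere_area(n), 4))
print(max(range(1, 11), key=sphere_area))  # the maximum is reached at dimension 7
```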
GAPT Seminar 2022/23
Seminar Talks - Autumn 2022
Thursday, 6th October 2022 15:10 - 16:00
Nick Cavenagh (University of Waikato, New Zealand)
Row-column factorial designs of strength at least 2
joint work with Fahim Rahim
The $q^k$ (full) factorial design with replication $\lambda$ is the multi-set consisting of $\lambda$ occurrences of each $q$-ary vector of length $k$; we denote this by $\lambda \times [q]^k$. An $m\times n$ row-column factorial design $q^k$ of strength $t$ is an arrangement of the elements of $\lambda \times [q]^k$ into an $m\times n$ array (which we say is of type $I_k(m,n,q,t)$) such that for each row (column), the set of vectors therein are the rows of an orthogonal array of size $k$, degree $n$ (respectively, $m$), $q$ levels and strength $t$. Such arrays have
been used in practice in experimental design. In this context, for a row-column factorial design of strength $t$, all subsets of interactions of size at most $t$ can be estimated without confounding
by the row and column blocking factors. In this talk we consider row-column factorial designs with strength $t\geq 2$. The constructions presented use Hadamard matrices and linear algebra.
Thursday, 20th October 2022 15:10 - 16:00
Arman Sarikyan (University of Edinburgh)
On the Rationality of Fano-Enriques Threefolds
A three-dimensional non-Gorenstein Fano variety with at most canonical singularities is called a Fano-Enriques threefold if it contains an ample Cartier divisor that is an Enriques surface with at
most canonical singularities. There is no complete classification of Fano-Enriques threefolds yet. However, L. Bayle has classified Fano-Enriques threefolds with terminal cyclic quotient
singularities in terms of their canonical coverings, which are smooth Fano threefolds in this case. The rationality of Fano-Enriques threefolds is an open classical problem that goes back to the
works of G. Fano and F. Enriques. In this talk we will discuss the rationality of Fano-Enriques threefolds with terminal cyclic quotient singularities.
Thursday, 27th October 2022 15:10 - 16:00
Ana Kontrec (MPI / Bonn)
Representation theory and duality properties of some minimal affine $\mathcal{W}$-algebras
One of the most important families of vertex algebras are affine vertex algebras and their associated $\mathcal{W}$-algebras, which are connected to various aspects of geometry and physics.
Among the simplest examples of $\mathcal{W}$-algebras is the Bershadsky-Polyakov vertex algebra $\mathcal{W}^k(\mathfrak{g}, f_{min})$, associated to $\mathfrak{g} = sl(3)$ and the minimal nilpotent
element $f_{min}$.
In this talk we are particularly interested in the Bershadsky-Polyakov algebra $\mathcal W_k$ at positive integer levels, for which we obtain a complete classification of irreducible modules.
In the case $k=1$, we show that this vertex algebra has a Kazama-Suzuki-type dual isomorphic to the simple affine vertex superalgebra $L_{k'} (osp(1 \vert 2))$ for $k'=-5/4$. This is joint work with
D. Adamovic.
Thursday, 3rd November 2022 15:10 - 16:00
Sergio Giron Pacheco (University of Oxford)
Anomalous actions and invariants of operator algebras.
An anomalous symmetry of an operator algebra $A$ is a mapping from a group $G$ to the automorphism group of $A$ which is multiplicative up to inner automorphisms of $A$. This can be rephrased as the
action of a pointed tensor category on $A$. Starting from the basics, I will introduce anomalous actions and discuss some history of their study in the literature. I will then discuss their existence
and classification on simple C*-algebras. For these questions, it will be important to consider K-theoretic invariants of the algebras.
Thursday, 10th November 2022 15:10 - 16:00
Thomas Wasserman (University of Oxford)
The Landau-Ginzburg - Conformal Field Theory Correspondence and Module Tensor Categories
In this talk, I will give a brief introduction to the Landau-Ginzburg - Conformal Field Theory (LG-CFT) correspondence, a prediction from physics. This prediction links aspects of Landau-Ginzburg
models, described by matrix factorisations for a polynomial known as the potential, with Conformal Field Theories, described by for example vertex operator algebras. While both sides of the
correspondence have good mathematical descriptions, it is an open problem to give a mathematical formulation of the correspondence.
After this introduction, I will discuss the only known realisation of this correspondence, for the potential $x^d$. For even $d$ this is a recent result, the proof of which uses the tools of module
tensor categories.
I will not assume prior knowledge of matrix factorisations, CFTs, or module tensor categories. This talk is based on joint work with Ana Ros Camacho.
Thursday, 17th November 2022 15:10 - 16:00
Jacek Krajczok (University of Glasgow)
On the approximation property of locally compact quantum groups
One of the most widely studied properties of groups is the notion of amenability - in one of its many formulations, it gives us a way of approximating the constant function by functions in the
Fourier algebra. The notion of amenability was relaxed in various directions: a very weak form of amenability, called the approximation property (AP), was introduced by Haagerup and Kraus in 1994. It
still gives us a way of approximating the constant function by functions in the Fourier algebra, but in much weaker sense. During the talk I'll introduce AP for locally compact quantum groups,
discuss some of its permanence properties and relation to w*OAP of quantum group von Neumann algebra. The talk is based on a joint work with Matthew Daws and Christian Voigt.
Thursday, 24th November 2022 15:10 - 16:00
Konstanze Rietsch (King's College London)
Tropical Edrei theorem
The classical Edrei theorem from the 1950's gives a parametrisation of the infinite upper-triangular totally positive Toeplitz matrices by positive real parameters with finite sum. These matrices
(and their parameters) are central for understanding characters of the infinite symmetric group, as was discovered by Thoma who reproved Edrei's theorem in the 1960's. A totally different theorem,
related to quantum cohomology of flag varieties and mirror symmetry, gives inverse parametrisations of finite totally positive Toeplitz matrices [R, 06]. The latter theorem has an analogue over the
field of Puiseux series, obtained by Judd and studied further by Ludenbach. In this talk I will explain a new `tropical' version of the Edrei theorem, connecting the finite and infinite theories.
Thursday, 1st December 2022 15:10 - 16:00
Kevin Aguyar Brix (University of Glasgow)
Irreversible dynamics and C*-algebras
How do we model the evolution of a system? A symbolic dynamical system is a coding of certain time evolutions that can be represented by finite graphs and that are usually invertible. However, in
this talk I want to emphasise irreversible symbolic systems, how and why they are mathematically interesting, and their connections to other fields such as C*-algebras (algebras of bounded operators
on Hilbert space). Along the way, I will also discuss the infamous conjugacy problem for shifts of finite type.
Thursday, 8th December 2022 15:10 - 16:00
Christiaan van de Ven (Universität Würzburg)
Strict deformation quantization in quantum lattice models
Quantization in general refers to the transition from a classical to a corresponding quantum theory. The inverse issue, called the classical limit of quantum theories, is considered a much more
difficult problem. A rigorous and natural framework that addresses this problem exists under the name strict (or C*-algebraic) deformation quantization. In this talk, I will first introduce this
concept by means of relevant definitions. Next, I will show its connection with the classical limit of quantum theories, starting with a brief summary of the theory in the context of mean-field
quantum theories. Finally, I will discuss the results of a recent work on how strict deformation quantization applies to more realistic models described by local interactions for periodic boundary
conditions, e.g., the quantum Heisenberg spin chain.
Thursday, 15th December 2022 15:10 - 16:00
Taro Sogabe (University of Tokyo)
The Reciprocal Kirchberg algebras
In the classical homotopy theory, there is the duality, Spanier Whitehead’s duality, connecting homology and cohomology. In this talk, I would like to explain the Spanier Whiteheads duality for
KK-theory which is the homotopy theory for C*-algebras, and I will show that this duality gives a characterization of two unital Kirchberg algebras sharing the same homotopy groups of their
automorphism groups.
Seminar Talks - Spring 2023
Friday, 3rd March 2023
Operator Algebras in the South of the UK
This is the first meeting of a new regional network to promote research in operator algebras in the South of the United Kingdom. Speakers include Francesca Arici (Leiden), Christian Bönicke
(Newcastle), Ian Charlesworth (Cardiff), Kevin Boucher (Southampton) and Samantha Pilgrim (Glasgow).
Thursday, 9th March 2023 15:10 - 16:00
Katrin Wendland (Trinity College Dublin)
Quarter BPS states in K3 theories
In conformal field theories with extended supersymmetry, the so-called BPS states play a special role. The net number of such states, counted according to a natural $\mathbb{Z}_2$ grading, is
protected under deformations. However, pairs of such states with opposite parity can cease being BPS under deformations.
In this talk we will report on joint work with Anne Taormina, investigating this phenomenon for a particular type of deformations in K3 theories. We propose that the process is channelled by an
action of $SU(2)$ which has its origin in the underlying K3 geometry.
Thursday, 16th March 2023 15:10 - 16:00
David Ellis (University of Bristol)
Product-free sets, and the diameter problem, in compact Lie groups.
A subset $S$ of a group $G$ is said to be product-free if there are no solutions to the equation $xy=z$ with x,y and z in $S$. Babai and Sós conjectured in 1985 that any finite group $G$ contains a
product-free subset of size at least $c|G|$, where c is an absolute positive constant, but this was disproved in seminal work of Gowers in 2007. Gowers showed that if a finite group G is
D-quasirandom (meaning that the smallest dimension of a nontrivial ordinary irreducible representation of G is at least D), then any product-free subset of G has measure at most $D^{-1/3}$. This
yields an upper bound of 1/poly(n) on the maximal measure of product-free sets in (for example) $PSL(2,n)$, for n a prime power, and the alternating group $A_n$; constructions of Kedlaya give lower
bounds which are also 1/poly(n). For compact connected Lie groups of rank n, however, the (conjectural) bounds on the maximal measure of measurable product-free sets are much stronger. Indeed, Gowers
conjectured in 2007 that the maximal measure of a measurable product-free subset of $SU_n$ is at most $\exp(-cn)$ for some absolute positive constant c (though his methods only yield an upper bound
of $O(n^{-1/3})$ in this case). We make progress on this conjecture of Gowers, showing that the maximal measure of a measurable product-free subset of $SU_n$, $SO_n$, $\text{Spin}_n$ or $Sp_n$ is at
most $\exp(-cn^{1/3})$, where c is an absolute positive constant. We also give new bounds for the diameter problem in these groups. (Recall that if G is a group and S is a subset of G, the diameter
of S is the minimal integer k, if it exists, such that $S^k = G$; the diameter problem in a compact connected Lie group G asks for the maximum possible diameter of a subset of G of measure m, for
each $m > 0$.) Our techniques are based on non-Abelian Fourier analysis and (new) hypercontractive inequalities on compact connected Lie groups; the latter are obtained by two methods - the first
method being via a coupling with Gaussian space, the second being via Ricci curvature and the Bakry-Emery criterion.
Based on joint work with Guy Kindler (HUJI), Noam Lifshitz (HUJI) and Dor Minzer (MIT).
Thursday, 20th April 2023 15:10 - 16:00
Argam Ohanyan (Universität Wien)
Geometry and curvature of synthetic spacetimes
Non-regular spacetime (or Lorentzian) geometry is a subject that has garnered a lot of interest in recent years. This is unsurprising, as even basic and physically well-motivated operations (e.g.
matching) within smooth spacetime geometry lead to non-smooth scenarios. Another motivation is the success of non-smooth Riemannian geometry, where a study of metric length spaces and curvature
bounds via triangle comparison has led to an incredibly fruitful theory which has delivered many new results even in the smooth context. In 2018, Lorentzian length spaces were put forth by Kunzinger
and Sämann as the suitable synthetic setting for spacetime geometry. Since then, a lot of results from the classical theory of spacetimes have been reproved in this framework. In this talk, which is
meant to be an introduction to the topic, we first discuss the basics of Lorentzian length spaces. We will then continue with various recent developments related to curvature, Gromov-Hausdorff
convergence and differential calculus in the synthetic setting.
Thursday, 27th April 2023 15:10 - 16:00
Daniel Berlyne (University of Bristol)
Braid groups, cube complexes, and graphs of groups
The braid group of a topological space X is the fundamental group of its configuration space, which tracks the motion of some number of particles as they travel through X. When X is a graph, the
configuration space turns out to be a special cube complex, in the sense of Haglund and Wise. These so-called 'graph braid groups' have useful applications outside of mathematics, such as in
topological quantum computing and motion planning in robotics. I will show how the cube complexes are constructed and use graph of groups decompositions to provide methods for computing braid groups
of various graphs. This has numerous algebraic and geometric applications, such as providing criteria for a graph braid group to split as a free product, characterising various forms of hyperbolicity
in graph braid groups, and determining when a graph braid group is isomorphic to a right-angled Artin group.
Thursday, 4th May 2023 15:10 - 16:00
Owen Tanner (University of Glasgow)
Interval Exchange Groups as Topological Full Groups
Let $\Gamma$ be a dense additive subgroup of $\mathbb{R}$. Then, we study the group IE($\Gamma$) of bijections of the unit interval that are formed piecewise by translations in $\Gamma$. These groups
are of interest to geometric group theory because they provided the first examples of simple, amenable, finitely generated (infinite) groups.
The perspective we take is that these groups are so-called "topological full groups", a way of generating a group from a dynamical system. We show IE($\Gamma$)=IE($\Gamma'$) iff $\Gamma$ = $\Gamma'$
as subsets of $\mathbb{R}$. We show IE($\Gamma$) is finitely generated iff $\Gamma$ is finitely generated. We compute homology. We describe generators.
Thursday, 11th May 2023 15:10 - 16:00
Liana Heuberger (University of Bath)
Combinatorial Reid's recipe for consistent dimer models
In the first part of my talk, I will make a gentle introduction to the McKay correspondence for ADE surface singularities. Reid's recipe is a generalisation of this correspondence in dimension three,
in the case of affine toric varieties. It marks interior line segments and lattice points in the fan of the G-Hilbert scheme (a specific crepant resolution of $\mathbb{C}^3/G$ for $G\subset SL(3,\mathbb{C})$) with characters of irreducible representations of $G$. Our goal is to generalise this by marking the toric fan of a crepant resolution of any affine Gorenstein singularity, in a way that
is compatible with both the G-Hilbert case and its categorical counterpart known as Derived Reid's Recipe. This is joint work with Alastair Craw and Jesus Tapia Amador.
Friday, 19th May 2023
Operator Algebras in the South of the UK
The second meeting of our new regional network to promote research in operator algebras in the South of the United Kingdom will take place in Southampton. Speakers include Cornelia Drutu (University
of Oxford), Adrian Ioana (University of San Diego), Maryam Hosseini (Queen Mary, London) and Steven Flynn (University of Bath).
Transient Recovery Voltage | EMTP
IEEE C37.011 - The Transient Recovery Voltage (TRV) is the voltage that appears across the terminals of a circuit breaker after a current interruption. This voltage may be considered in two
successive time intervals: one during which a transient voltage exists (TRV), followed by a second during which a power-frequency voltage alone exists.
Below is an example of the TRV for a 145kV transmission line system. The TRV follows a transmission line fault clearance.
TRV is a consequence of different voltage response of the circuits on the source side and load side of the circuit breaker. This difference creates the TRV across the breaker terminals.
The standards covering TRV analysis are:
• IEEE C37.011-2011: IEEE Guide for the Application of Transient Recovery Voltage for AC High-Voltage Circuit Breaker
• IEEE C37.06-2009: IEEE Standard for AC High-Voltage Circuit Breakers Rated on a Symmetrical Current Basis - Preferred Ratings and Related Required Capabilities for Voltages Above 1000
• IEC 62271-100: High-Voltage switchgear and control gear – Part 100: Alternating-current circuit-breaker. Edition 2.0, 2008-04
Why TRV must be studied?
The TRV appearing across a circuit breaker while it is opening will challenge the longitudinal voltage withstand of the gap between the breaker poles. If the system TRV reaches the gap withstand
voltage, a longitudinal breakdown will take place. This is called a reignition if the breakdown occurs before a quarter cycle following the current interruption, and a restrike if it occurs after.
Restrikes must be avoided by design, and re-ignition minimized, as they can create hot-spots in the circuit breaker and high-frequency transients in the circuit, the worst cases being if the
circuit-breaker is simply not able to interrupt the current, or if the successive breakdowns create a voltage escalation.
Below is an example of an unsuccessful breaker opening in a 13.8kV system, with multiple restrikes and voltage escalation.
What are the main steps of a TRV study?
1. System modeling: The system must be modeled for frequencies ranging from the fundamental to a few kilohertz. For the most part, and especially for conductors, frequency-dependent modeling is recommended.
2. Simulation of worst-case scenarios: depending on the system, some clearing scenarios will be more challenging for the circuit-breaker. For example, 3-phase ungrounded faults are very often the
most challenging events to clear, especially if they occur at the secondary of a series-reactor or a transformer. For lines, short-line-faults (single-phase to ground) and 3-phase terminal faults are
the worst cases. The outcomes of the worst-case scenario simulations are the prospective TRVs.
3. Comparison of the prospective TRVs (obtained by simulation) with the breaker inherent TRV (obtained in lab by standardized tests): the prospective TRV is superimposed with the breaker inherent TRV
envelope, which is built with 2 or 4 parameters and depends on:
- The breaker class
- The rated voltage
- The rated short-circuit current
- The type of fault cleared during the studied event
- The short-circuit current cleared during the studied event
Both the TRV magnitude and its initial slope (known as the Rate of Rise of Recovery Voltage or RRRV) must be inside the inherent TRV envelope, considering a safety margin.
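Step 3 can be sketched as a point-by-point comparison of a simulated waveform against a withstand envelope. The ramp-then-flat two-parameter shape below is a simplification of the IEC/IEEE envelopes, and all numeric values and function names here are illustrative, not taken from any standard's tables:

```python
def envelope(t: float, uc: float, t3: float) -> float:
    """Two-parameter withstand envelope: rises linearly to uc at time t3, then stays flat.
    (The standards also define a four-parameter envelope for higher ratings.)"""
    return uc * min(t / t3, 1.0)

def trv_ok(samples, uc, t3, margin=1.0):
    """True if every (time, voltage) sample of a prospective TRV stays below the
    envelope; margin < 1.0 would tighten the envelope to leave a safety margin."""
    return all(abs(v) <= margin * envelope(t, uc, t3) for t, v in samples)

# Hypothetical simulated TRV peaking at 180 kV, checked against a 215 kV / 100 us envelope.
samples = [(0.0, 0.0), (50e-6, 100e3), (100e-6, 180e3), (150e-6, 160e3)]
print(trv_ok(samples, uc=215e3, t3=100e-6))  # True
```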
The following figure shows a 145kV system prospective TRV obtained during the simulation of a 13kA short-line single-phase-to-ground and superimposed with a 4-parameter inherent TRV of a circuit
breaker rated 30kA and which class is >100kV effectively earthed.
In which circuit configuration TRV analysis must be done?
TRV occurs anytime a circuit-breaker interrupts a current. However, only breakers rated 3.6kV and above are concerned.
TRV studies are typically performed for the following cases:
• Breakers disconnecting transmission lines: In this case, two cases are simulated:
o A short-line single-phase-to-ground fault, which produces the more severe RRRV
o A terminal 3-phase ungrounded fault, which produces the highest TRV
• Breakers disconnecting reactors
• Transformer limited faults
• Induction motor tripping during start-up
• Synchronous generator tripping
• Capacitor-bank de-energization
Why choose EMTP^® for TRV analysis?
EMTP^® is the most versatile platform for TRV analysis, with built-in standards and functionalities that let you effortlessly compare the system prospective TRV, obtained by simulations, and the
breaker inherent TRV envelope.
Here are some major advantages:
• The inherent TRV envelope can either be user-defined or follow IEC and IEEE standards. EMTP^® also precisely calculates the inherent TRV envelope parameters according to the simulated fault
current by interpolation of the standard tests T10, T30, T60 and T100.
EMTP^® Circuit Breaker model for TRV
• Frequency dependent transmission line modeling allows EMTP^® to precisely capture the voltage wave propagation/reflection (travelling waves) that can significantly increase the system RRRV.
• Breaker stray-capacitance has a very important impact on the TRV. A large database of values is available in EMTP^®.
• Restrikes/prestrikes and re-ignitions can be simulated with the advanced TRV circuit breaker model.
EMTP^® Results for a TRV with multiple re-ignitions, including the withstand envelope/voltage and current
• Statistical approach to TRV simulation allows you to effortlessly assess the impact of different switch tripping instants on the TRV waveforms and determine the worst-case scenario.
• EMTP^® scripting allows you to automatically test various fault cases and loading conditions.
EMTP^® circuit with a breaker model for TRV analysis
• Numerical stability: most EMT-type programs use trapezoidal integration for solving network equations. Trapezoidal integration is fast and precise, but it is unstable during discontinuities. The
instabilities are typically damped by artificial resistances. This causes a problem during TRV analysis because it can make the RRRV determination very difficult.
EMTP^®, which also uses trapezoidal integration, solves this problem by using the backward Euler integration method during discontinuities and reduces the integration time-step when they occur.
Below is an example of the TRV results given by EMTP^® compared to a solver using trapezoidal integration with numerical damping resistances.
The discrete bordism category in dimension 1
• Friday 18 October 2019, 16:00-17:00
• MR13.
The discrete bordism category hCob_d has as objects closed (d-1)-manifolds and as morphisms diffeomorphism classes of d-dimensional bordisms. This is a simplified version of the topologically
enriched bordism category Cob_d, whose classifying space B(Cob_d) has been completely determined by Galatius-Madsen-Tillmann-Weiss in 2006. In comparison, little is known about the classifying space B(hCob_d).
I will identify B(hCob_1) as a circle bundle over a delooping of BCob_2, showing, in particular, that the rational cohomology ring of hCob_1 is polynomial on classes \kappa_i in degrees 2i+2 for all
i>=1. The seemingly simpler category hCob_1 hence has a more complicated classifying space than Cob_1. Moreover, I will give combinatorial formulas for cocycles on hCob_1 that represent \kappa_i.
This talk is part of the Junior Geometry Seminar series.
Best Plywood for Earthquake Resistant Shear Walls
THIS 2:00 MINUTE VIDEO CONTAINS MOST OF WHAT YOU NEED TO KNOW.
Plywood: and building shear walls
The trickiest and most important part is the plywood.
The most important factor in a retrofit is the plywood’s ability to resist earthquakes. Plywood is the central component in an assembly consisting of bolts, plywood, and shear transfer ties which
together form a shear wall. Shear walls are the backbone of any retrofit.
The type of nails or staples used, their size and length, plywood thickness, the species of wood, the manufacturing process used in plywood production, and the wall framing all play a part in a shear
wall’s performance.
Plywood is limited in its ability to resist earthquakes because the plywood itself can only be strained so far before it fails. Bolts and shear transfer ties do not have this problem. Therefore,
the plywood connection is the most important connection in a shear wall system.
The table below is from APA Research Report 154. The values are still used in the building code.
Table 1: Earthquake resistance of plywood varies according to nailing and type of plywood. The purple boxes call attention to differing strengths in plywood based on the nail and plywood type. In one case, the plywood can resist 200 pounds of earthquake force per linear foot: when 5/16 plywood with 6d nails penetrating 1-1/4 inches into the framing is spaced 6″ apart on the edges. Or 870 pounds per linear foot: with 15/32 (half inch) plywood with 10d nails penetrating 1-5/8 inches into the framing and spaced 2″ apart. Once the nailing got to 2″ apart, splitting of the framing became a problem.
The good news is (as shown in this video) you can put the nails even one inch apart in old growth lumber and not worry about splitting.
Below are some more detailed instructions on how to read this important table.
The black and red arrows point at numbers that represent the pounds of earthquake force each linear foot of plywood can resist if nailed in a manner consistent with the table. For example, if we go to the row of the table for Structural I 15/32″ (plywood thickness) we see that:
(2) The plywood’s thickness is 15/32″.
(3) The penetration into the framing is 1 1/2 inches.
(4) The nail size is 8d (8d or 10d is simply a way of describing nails a certain length and diameter).
(5) The plywood is nailed on the edges 6″, 4″,3″ or 2″ apart.
As shown by the blue arrow above, Structural I plywood nailed with 8d nails 4″ apart on the edges will provide 430 pounds of resistance per linear foot. As shown by the red arrow above, Structural I
plywood nailed with 10d nails 2″ apart on the edges will provide 870 pounds of resistance per linear foot. Which method of nailing would you want for your house?
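As a quick sanity check, the table lookup above can be sketched in a few lines (a hedged illustration: only the two values quoted in the text are included, not the full table):

```python
# Hypothetical excerpt of Table 1: allowable shear in pounds per linear foot
# for Structural I 15/32" plywood, keyed by (nail size, edge spacing in inches).
TABLE1_PLF = {
    ("8d", 4): 430,   # blue arrow in the text
    ("10d", 2): 870,  # red arrow in the text
}

def wall_capacity(nail: str, spacing_in: int, wall_length_ft: float) -> float:
    """Total allowable shear: tabulated plf times wall length in feet."""
    return TABLE1_PLF[(nail, spacing_in)] * wall_length_ft

print(wall_capacity("8d", 4, 4))   # 4-foot wall: 1720 pounds
print(wall_capacity("10d", 2, 4))  # same wall, denser nailing: 3480 pounds
```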
Plywood Nailing
The best plywood for a seismic retrofit is called Structural I. It is made to resist earthquakes. The closer together the nails are spaced on the edges of the plywood, the stronger and more
earthquake-resistant the wall will be. All retrofit guidelines require plywood edge nailing that is 4″ apart, even though plywood nailed at 2″ apart will have double the earthquake resistance. That
might make sense to some people, but certainly not to me.
These guidelines only allow this low-strength plywood nailing. They recommend the use of the Nailed Blocking Method with four 10d nails in each block. A 10d nail can resist 176# of earthquake force.
Because these blocks split so readily, it was decided that no more than 4 nails would be allowed for each block.
A 4-foot length of plywood will require three 14″ blocks. Each block can resist 704# (4 x 176#). Multiply that by 3 (the number of blocks) and you get a 2,112# block-to-mudsill connection. Four feet
of plywood nailed at 430# per linear foot can resist 1,720#, so our 2,112# block-to-mudsill connection is 392# stronger than we need.
If we want to minimize splitting, we could use 8d nails, which can resist 125# per nail. In that case we would need 14 nails: 5 nails in 2 of the blocks and 4 nails in the 3rd block. There is no
point in nailing the plywood such that it exceeds the strength of these blocks. 8d nails are much less likely to split the block, so that is what is recommended.
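The block arithmetic in the preceding paragraphs can be checked with a short sketch (per-nail values as quoted above; the names are illustrative):

```python
import math

PLF_8D_AT_4IN = 430   # plywood allowable shear, pounds per linear foot (Table 1)
NAIL_10D = 176        # pounds of earthquake force per 10d nail
NAIL_8D = 125         # pounds of earthquake force per 8d nail

wall_ft = 4
blocks = 3            # three 14" blocks per 4 feet of plywood

plywood_capacity = PLF_8D_AT_4IN * wall_ft        # 430 x 4 = 1720#
block_capacity = blocks * 4 * NAIL_10D            # 4 nails per block -> 2112#
margin = block_capacity - plywood_capacity        # 392# of spare capacity

# With milder 8d nails, how many nails match the plywood's 1720#?
nails_8d = math.ceil(plywood_capacity / NAIL_8D)  # 14 nails (5 + 5 + 4)

print(plywood_capacity, block_capacity, margin, nails_8d)
```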
These higher capacity shear walls are not always feasible, and only the person who determines the shape, size, and condition of your existing house can make that determination.
How do Plywood Shear Walls Fail?
It is in the nails. If the shear wall resistance matches or exceeds the force it is supposed to resist, the plywood and nails will not move. For example, if we designed a shear wall to resist
10,000# of force, and it was hit by 10,000# of earthquake force, the nails and plywood remain exactly as they had been installed.
If the earthquake forces exceed the strength of the plywood (say the plywood is nailed to resist 10,000# of force and is hit by 15,000# of force), you will get nail pull-out, as circled in red.
The Way They Used to Do It.
If your house was built with shear walls made the old-fashioned way, you might not need a retrofit.
Wood species can make a huge difference in the effectiveness of a seismic retrofit.
As we can see, in each case the failure in the plywood to framing connection included nails being pulled out of the framing. Different species of wood have greater or lesser abilities to keep nail
withdrawal to a minimum. We can see which types of wood will be the most successful in keeping nail pull out to a minimum. Using the American Wood Council Connection Calculator, we can discover how
much pull-out force it takes to pull a nail out from the framing.
Douglas Fir nail pull out is 77 pounds.
Redwood (either close grain or open grain) nail pull out is 56 pounds.
In other words, nails in redwood will pull out 28% sooner than nails in Douglas Fir. For this reason, some designers recommend adding 25% more nails to the redwood. Unfortunately, there has been no
testing to confirm this.
Why does this happen and how to prevent it.
As you can see in the two photographs above, the buckling plywood has withdrawn the nails from the framing. At the same time, the plywood separated from the nails altogether as shown by the “Punch
out” image above. This nail withdrawal happened because the plywood buckling force exceeded the nail’s withdrawal limit.
The total withdrawal limit or resistance is determined by the total surface area of the nails touching the plywood, the diameter of the nails, as well as the penetration of the nails into the
framing. The greater the number of nails, the greater the nail head surface area touching the plywood, and the greater the embedment of the nails into the framing, the greater the resistance to
withdrawal failure. In other words, if a four-foot-long piece of plywood is nailed with an 8d nail every 4 inches, the nail resistance to withdrawal will be half of that of plywood that is nailed
with 8d nails spaced every 2 inches.
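A minimal sketch of that proportionality, using the pull-out values quoted earlier (end nails are ignored for simplicity, so halving the spacing exactly doubles the capacity):

```python
# Nail pull-out values quoted above (American Wood Council Connection Calculator).
PULLOUT = {"douglas_fir": 77, "redwood": 56}  # pounds per nail

def withdrawal_capacity(species: str, edge_length_in: float, spacing_in: float) -> float:
    """Total withdrawal resistance along one panel edge.
    End nails are ignored, so halving the spacing doubles the capacity."""
    nails = edge_length_in / spacing_in
    return nails * PULLOUT[species]

four_in = withdrawal_capacity("douglas_fir", 48, 4)  # 12 nails -> 924#
two_in = withdrawal_capacity("douglas_fir", 48, 2)   # 24 nails -> 1848#

# Redwood holds nails roughly 27% less than Douglas Fir,
# which the text rounds up to "28% sooner":
redwood_penalty = 1 - PULLOUT["redwood"] / PULLOUT["douglas_fir"]
print(four_in, two_in, round(redwood_penalty, 2))
```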
Non-Conventional Shear Walls are Sometimes Vital to a Good Seismic Retrofit
Double-sided shear walls are useful when space for a shear wall is limited and a new shear wall is required. By using a double-sided shear wall one can take 6 linear feet of foundation, install a
6-foot-long shear wall on each side, and achieve the strength of 12 linear feet of shear wall that normally would have required 12 linear feet of foundation.
Further on in the report it states:
“Typical failure of these walls was in compression and crushing of the lumber framing where the end studs bore against the bottom and top plates. The designer should carefully consider column
buckling (snapping like a pencil) of the end framing members and bearing on the bottom plate in order to transmit these forces in compression to the foundation and in tension to hold-downs. In some
cases, it may be desirable to stop the plate short of these end studs and allow the end studs to bear directly upon the foundation.” In light of this, the designer should carefully consider column
buckling of the end framing members (they can buckle and snap). This is addressed by carefully sizing the end framing members, reinforcing them with steel, or having the end studs bear directly on
the foundation.
The Most Technical Parts of Building Retrofit Shear Walls
This is more of a deflection issue (lateral movement of the top of the shear wall) than a strength issue. 1/8″ of crushing of the end studs into the mudsill at the bottom plate can cause over 1″ of
deflection at the top of a narrow shear wall. The magnification factor is the height of the shear wall divided by the width of the shear wall, times the amount of mudsill compression. If the walls
are too flexible, they will not resist much earthquake force when the whole house deforms (twists) because of the earthquake.
For example, if the end framing studs on an 8-foot-tall by 4-foot-wide shear wall crush the mudsill 1 inch, the deflection at the top will be 2 inches, which is significant (8/4 = 2, and 2 x 1 inch
of compression = 2 inches of movement).
Note that a normal shear wall has deflections at the top of the wall in the range of about 0.2 inches at their design loads, so you can see that a little crushing can cause big problems when a narrow
shear wall is used in line with other conventional shear walls.
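The magnification rule described above can be written out directly (a sketch; the ~0.2″ figure for normal walls is the comparison point, not an input):

```python
def top_deflection(height_ft: float, width_ft: float, crush_in: float) -> float:
    """Deflection at the top of a shear wall caused by mudsill crushing,
    magnified by the wall's aspect ratio (height / width)."""
    return (height_ft / width_ft) * crush_in

# 8 ft tall x 4 ft wide wall with 1" of crushing at the mudsill:
print(top_deflection(8, 4, 1.0))      # 2.0 inches

# Even 1/8" of crushing on a very narrow (18"-wide) wall is large
# next to the ~0.2" a normal wall deflects at design load:
print(top_deflection(8, 1.5, 0.125))  # about 0.67 inches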
High-Capacity Shear Walls
APA Research Report 138 presents the results of a series of experiments done by the APA on the strength of plywood floors. A plywood floor is the same as a shear wall placed horizontally, so the
values below are equivalent to those of a typical vertical shear wall.
APA Research Report 138 describes tests showing that very high-strength shear walls can be produced by using multiple rows of nails or staples in wood framing that is wider than the normal
1-1/2-inch-wide framing used in new construction. The capacity of stapled shear walls is at the bottom of this Table.
Looking at this table we are using:
(1) Structural I Plywood.
(2) Plywood that is 23/32 ” thick.
(3) The stud framing that is 4″ wide.
(4) The rows of nails. For example, 3 lines of fasteners means there are 3 rows of nails going up and down on the studs.
As you can see, a shear wall built in this way can resist 1800 pounds of earthquake force per linear foot and represents the strongest shear wall ever tested. Even though it has never been tested, a
two-sided shear wall of this type could have an enormous ability to resist earthquakes. If old growth lumber is used, even closer spacing of staples can provide an almost limitless shear wall
capacity.
Overturning Forces in High-Capacity Shear Walls
A typical application would be a building in San Francisco where much of the front lower story is taken up by a garage and the rest is taken up by a stair wall. In these circumstances, the front of
the building is not connected to any foundation. Compared to moment columns, this is also the preferred method in terms of effectiveness and often cost.
The next consideration is uplift or overturning forces.
For example, if we build a high-capacity shear wall that can resist 1800# of force per linear foot, the overturning force on an 8-foot-tall wall will be 8 x 1800#, or 14,400#. This kind of force
will crush the mudsill and certainly break out the foundation. If the shear wall can resist 1900 pounds per linear foot of lateral earthquake force, it will also need to resist 15,200 pounds of
overturning force. For this reason, proper sizing of hold-downs is critical. Under “Model No.” are the names of the hold-downs.
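The overturning arithmetic above amounts to one multiplication:

```python
def overturning_force(shear_plf: float, wall_height_ft: float) -> float:
    """Uplift at the end of a shear wall: unit shear times wall height."""
    return shear_plf * wall_height_ft

print(overturning_force(1800, 8))  # 14400 pounds
print(overturning_force(1900, 8))  # 15200 pounds
```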
Stapled Shear Walls
Stapled shear walls are a consideration when one is concerned about the framing behind the plywood splitting.
The thickness of plywood has no bearing on shear wall strength except in the case of high-capacity shear walls. For these shear walls, plywood up to 19/32″ thick was tested, and it was discovered
that shear walls built in this manner were on average three times stronger than the shear walls found in the California Building Code.
This type of shear wall is extremely useful when one needs the strongest shear wall possible and a foundation strong enough to withstand this force. A typical application would be shear at the front
or back of a long apartment house, such as those found in San Francisco, built on a new foundation with extensive steel reinforcing.
According to the American Plywood Association, one should be able to double the number of staples and thereby double the strength of the shear wall, even though this was not tested. In other words,
if you halve the staple spacing, you should be able to double the Target Shear.
Target Shear is another name for “allowable load,” which is the value the building code says you can use when designing a shear wall. “Ultimate Load” is the point at which the test specimen actually
failed. “Load Factor” is the safety factor. If you take the load at which a shear wall fails on the testing table (ultimate load) and divide it by the safety factor (load factor), the result is the
“Target Design Shear,” or allowable load. Scientists have a way of making everything more complicated than it needs to be.
Staples do not create high-capacity shear walls, but if spaced close enough together they are extremely stiff. This can be useful when designing a shear wall that will be working in tandem with
other shear walls made of a stiffer material, such as plaster.
Plywood to Plywood Connections
In this test, plywood was stapled to plywood to see how strong this connection would be. This is useful when it is necessary to attach two pieces of plywood together, which is often the case when
the contractor did not attach the first layer of plywood to the mudsill.
Quality of Cripple Wall Framing
Older homes were built with old growth Douglas Fir and redwood that was centuries old. This wood has very different properties compared to wood grown on modern tree farms. The old wood is much
denser and is very difficult to split compared to tree farm lumber. For this reason, the retrofit guidelines found in the International Existing Building Code, the Bay Area’s Standard Plan A,
Seattle’s Project Impact, and the Los Angeles Retrofit Building Code only allow 8d nails spaced no closer than 4 inches apart. Old growth lumber should always be used whenever possible because it
can easily be nailed with larger 10d nails 2 inches apart on the edges without splitting.
If the initial shock does not collapse the cripple wall, an aftershock might. In the photo below, a man tries to prevent his already leaning cripple wall from fully collapsing in an aftershock.
You need the right type of Plywood
The two types of plywood available are Rated and Structural I; for shear wall use, the plywood must have 5 plies. Rated plywood can be made of any species of wood, while the 10% stronger Structural
I must be made of denser Southern Pine or Douglas Fir. Overall, it is not that big a deal if you use Rated instead of Structural I.
Avoid 3-PLY PLYWOOD
Shear walls made of 3 ply plywood tore in the Northridge Earthquake, so the City of Los Angeles downgraded the acceptable limits for 3-ply plywood to a maximum of #200 plf. On page 10 of the
Wood-Frame Subcommittee Findings Report, published immediately after the Northridge Earthquake it says: “The performance of 3-ply construction has raised questions of its ultimate capacity.
Horizontal tearing has occurred on some outer face plies above the inner ply seam. Values for all 3-ply panel construction were therefore reduced to 200lbs/ft maximum.”
Aspect Ratio-This is very technical
An aspect ratio is the ratio between the height and the width. For example, a shear wall that is 8 feet tall and 4 feet wide has an aspect ratio of 2h/1w (the height is twice the width). Normally
it is written 2:1, or 2/1.
To use the values listed in Table 1 (see the chart at the beginning of this page), which is found in the building code, a shear wall must have a 2/1 aspect ratio or less. If the aspect ratio is
greater than 2/1 but less than 3.5/1, the earthquake resistance measured in per linear foot of resistance, must be reduced by what is called a reduction factor.
This maximum 3.5/1 aspect ratio translates into an 8 ft. shear wall 27.5″ wide. Any narrower than this and you have a post, which is rated at zero.
The way you figure out the aspect ratio of a shear wall is to divide both the height and the width by the width. For example: if a shear wall is 64″ high and 18″ wide, we divide both the height and
the width by 18 to get a ratio of roughly 3.5/1. If the wall is 96″ tall, then 96″/18″ = 5.3/1. At that point it is a post and not a shear wall.
Once we determine the aspect ratio, assuming it is less than 3.5/1, we use a reduction factor of twice the width over the height, 2w/h. So, if we have a shear wall that is 96″ tall and 30″ wide, the
reduction factor is 2 x 30/96 = 0.62. The number in shear Table 1 is multiplied by this factor to get the reduced shear capacity of the narrow wall. In this case, if the plywood can resist 870
pounds per linear foot and the width of the shear wall is 2-1/2 feet, the capacity based on this table is 2.5 x 870, or 2,175#. 2,175# x 0.62 = 1,349#.
Here is the same calculation with lower-capacity plywood. With the same 96″-tall, 30″-wide wall, the reduction factor is again 2 x 30/96 = 0.62. If the plywood can resist 460 pounds per linear foot
and the width of the shear wall is 2-1/2 feet, the capacity based on this table is 2.5 x 460, or 1,150 lbs. 1,150 lbs x 0.62 = 713#, or 285 lbs. per linear foot.
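The aspect-ratio bookkeeping can be checked in a few lines (note that 2 x 30/96 is exactly 0.625; the text rounds it to 0.62, which is why it reports 713# instead of the unrounded 719#):

```python
def aspect_ratio(height_in: float, width_in: float) -> float:
    return height_in / width_in

def reduction_factor(height_in: float, width_in: float) -> float:
    """Narrow-wall reduction factor: twice the width over the height.
    Only meaningful while the aspect ratio is at most 3.5:1."""
    return 2 * width_in / height_in

h, w = 96, 30                        # 8 ft tall, 30" wide
assert aspect_ratio(h, w) <= 3.5     # 3.2:1, still a shear wall, not a post

factor = reduction_factor(h, w)      # 0.625
plf = 460                            # tabulated plywood value
capacity = plf * (w / 12) * factor   # 460 x 2.5 ft x 0.625 = 718.75#

print(round(factor, 3), round(capacity, 2))
```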
Crushing of the Bottom Plates
This is more of a deflection issue (lateral movement of the top of the shear wall) rather than a strength issue. 1/8” crushing at the bottom plate can be translated to over an inch of deflection at
the top for a narrow shear wall. The magnification factor is the height of the shear wall / width of the shear wall. If the walls are too flexible, they will not resist much earthquake force when
the whole house deforms (twists) because of the earthquake.
Let a, b, c ∈ Z. Define the highest common factor hcf (a, b, c) to be the largest positive integer that divides a, b and c. Prove that there are integers s, t, u such that hcf (a, b, c) = sa+tb+uc.
Find such integers s, t, u when a = 91, b = 903, c = 1792
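One constructive route (a sketch of the standard argument, not the only proof): apply the extended Euclidean algorithm to a and b, then to hcf(a, b) and c, and expand the coefficients. For the given numbers this yields hcf = 7 with s = 10, t = -1, u = 0, since 10·91 - 903 = 7.

```python
def ext_gcd(a: int, b: int):
    """Return (g, x, y) with g = gcd(a, b) and g == x*a + y*b."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def hcf3(a: int, b: int, c: int):
    """Return (g, s, t, u) with g = hcf(a, b, c) == s*a + t*b + u*c."""
    g_ab, x, y = ext_gcd(a, b)   # g_ab = x*a + y*b
    g, p, q = ext_gcd(g_ab, c)   # g = p*g_ab + q*c
    return (g, p * x, p * y, q)  # substitute g_ab back in

g, s, t, u = hcf3(91, 903, 1792)
print(g, s, t, u)                # 7 10 -1 0
assert s * 91 + t * 903 + u * 1792 == g
```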
On Zero
In documentaries on the history of mathematics, there is often a scene where the presenter discusses the invention of the number zero. If there is enough money they might even get to fly out to India
to do it. It always seemed strange to me that "zero" was a concept that needed discovering. Surely if you asked someone in Rome what happens if you take 3 denari from a person with 3 denari, they
would have told you that the person now has no denari, or "non denarium habet"?
So what do we mean by the discovery of the number zero? It became clear when I was reading The Joy of X (Bookshop, Amazon) by Steven Strogatz. In Rome, they would obviously have known that:
$XIII - XIII = nihil$
But this isn't actually very useful. Maths is about what you do with numbers that exist, not numbers that don't. So why do we care so much about zero? To realise why, we have to stop thinking about
the number line. Counting in Roman numerals is like counting with tally marks. You just keep adding lines. There are some shortcuts: the symbol for 5 is V, and you can put a lesser symbol before a
greater symbol to indicate that it is one unit less, instead of writing 4 or 9 units, etc.
This is a problem when you do calculations. You need to parse the whole string of numerals because each one can affect the value of numerals to its left and right. In the Hindu-Arabic number system,
numbers revolve around a base of 10, yet there is no single numeral, or digit, for 10. Instead the number 0 acts as a placeholder. The 0 itself still means nothing, but its presence changes the
value of the digit to its left: it raises it to a higher power of 10.
This way of laying out numbers is so much easier. You can read them in one direction and work out the value of digits by which column they are in, rather than just by the shape of the digit itself.
That's why zero is important. Not because it means we have discovered the concept of nothingness, but because it means we can use placeholders in mathematics, and use the position of digits in
numbers to tell us about their value.
Operations on Signals
This section describes the operations that can be performed on signals.
When the "Signal Panel" is selected, the menus and toolbars are updated to provide signal-related actions.
The "Operations" menu allows you to perform various operations on the selected signals, such as arithmetic operations, peak detection, or convolution.
Basic arithmetic operations
Operation Description
Sum \(y_{M} = \sum_{k=0}^{M-1}{y_{k}}\)
Average \(y_{M} = \dfrac{1}{M}\sum_{k=0}^{M-1}{y_{k}}\)
Difference \(y_{2} = y_{1} - y_{0}\)
Product \(y_{M} = \prod_{k=0}^{M-1}{y_{k}}\)
Division \(y_{2} = \dfrac{y_{1}}{y_{0}}\)
Operations with a constant
Create a new signal which is the result of a constant operation on each selected signal:
Operation Description
Addition \(y_{k} = y_{k-1} + c\)
Subtraction \(y_{k} = y_{k-1} - c\)
Multiplication \(y_{k} = y_{k-1} \times c\)
Division \(y_{k} = \dfrac{y_{k-1}}{c}\)
Real and imaginary parts
Operation Description
Absolute value \(y_{k} = |y_{k-1}|\)
Real part \(y_{k} = \Re(y_{k-1})\)
Imaginary part \(y_{k} = \Im(y_{k-1})\)
Data type conversion
The "Convert data type" operation converts the data type of the selected signals.
Data type conversion relies on the numpy.ndarray.astype() function with the default parameters (casting="unsafe").
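A small illustration of what that default implies (a sketch; the array values here are made up for demonstration):

```python
import numpy as np

# With the default casting="unsafe", astype() silently truncates floats
# toward zero rather than raising:
y = np.array([1.7, -2.5, 300.0])
converted = y.astype(np.int32)  # values become 1, -2, 300

# A stricter casting rule makes numpy refuse the lossy conversion:
try:
    y.astype(np.int32, casting="safe")
except TypeError as exc:
    print("refused:", exc)

print(converted)
```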
Basic mathematical functions
Function Description
Exponential \(y_{k} = \exp(y_{k-1})\)
Logarithm (base 10) \(y_{k} = \log_{10}(y_{k-1})\)
Power \(y_{k} = y_{k-1}^{n}\)
Square root \(y_{k} = \sqrt{y_{k-1}}\)
Other mathematical operations
Eliciting Subjective Probabilities with Binary Lotteries
We evaluate the binary lottery procedure for inducing risk neutral behavior in a subjective belief elicitation task. Harrison, Martínez-Correa and Swarthout [2013] found that the binary lottery
procedure works robustly to induce risk neutrality when subjects are given one risk task defined over objective probabilities. Drawing a sample from the same subject population, we find evidence that
the binary lottery procedure induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct
revelation of subjective probabilities in subjects with certain Non-Expected Utility preference representations that satisfy weak conditions that we identify.
Original language English
Place of Publication Atlanta, GA
Publisher CEAR, Georgia State University
Number of pages 50
Publication status Published - 2012
Series Working paper / Center for Economic Analysis of Risk (CEAR)
Number 2012-09
• Subjective Probability Elicitation
• Binary Lottery Procedure
• Experimental Economics
• Risk Neutrality
Resonant Wireless Power Transfer Vs Wireless Power Transfer
What is Resonant Wireless Power Transfer?
Resonant Wireless Power Transfer (RWPT), developed by MIT in 2008, enhances Wireless Power Transfer (WPT) by utilizing compensation capacitors in both transmitter and receiver. This technology
nullifies impedance imaginary parts, enabling higher output power and efficiency compared to traditional WPT. RWPT is widely applied in appliances, wearable gadgets, mobile phones, and electric
vehicle chargers, offering superior performance even at mid-range distances relative to coil size.
Fig. 1. Equivalent Circuit of a typical RWPT System ^[1]
CAD Model
In this application note, a WPT system ^[2] is studied using EMWorks. The system is made of two copper-printed coils. Figure 2 shows the simulated wireless system while Table 1 contains the main
dimensions of the model. Later, we will add resonant capacitors to change the system from a WPT to RWPT and study the data resulting from such a change.
Fig. 2. A printed RWPT
External Coil Interior Coil
Inner Diameter 27.7 mm 11.64 mm
Outer Diameter 41.3 mm 16.14 mm
Inter-Traces Distance 1.4 mm 0.3 mm
Length of the Trace 1.6 mm 0.5 mm
Thickness of the Traces 40 um 40 um
Table 1. Main Dimensions of the RWPT System
Using the EMS ^[3] module of EMWorks, we investigated the following RWPT issues:
• Circuit parameters of the simulated RWPT including R and L matrices,
• Coupling coefficient versus air gap and alignment,
• Efficiency versus frequency and geometry configurations,
• Comparison of EMS results against Powersim results.
Parametric Analysis
Using the AC Magnetic module of EMS, the coil parameters including self and mutual inductances, AC resistances, and coupling coefficient are computed versus different air gap distances and
alignments. These parameters are then used to compute the resonant capacitance and efficiency of the system.
Table 2 contains the AC inductance and resistances for an air gap of 15 mm.
External Coil Interior Coil
Self Inductance L (H) 1.774798e-006 1.281031e-006
Mutual Inductance M (H) 1.207742e-007 1.207742e-007
Self Resistance R (Ohm) 3.410545e-001 6.161653e-001
Mutual Resistance Rm(Ohm) 1.920747e-003 1.920747e-003
Table 2. Circuit Parameters Computed by EMS
From the above circuit parameters, the coupling coefficient can be computed as k = M/√(L1·L2) ≈ 0.08 ^[4].
In addition to the above circuit parameters, the magnetic field and flux are computed. Figures 3a) and 3b) show the cross-section plots of the magnetic flux density results at 2 mm and 40 mm of air
gap size, respectively. Clearly, the magnetic flux around the interior coil, i.e. receiver, is higher with an air gap of 2 mm compared to 40mm. Namely, 40 to 80 micro-Tesla versus 18 micro-Tesla for
2 and 40 mm, respectively.
Fig. 3. Magnetic Flux Results, a) Air Gap 2 mm, b) Air Gap 40 mm
The plots of the mutual inductance M and the coupling coefficient k versus the distance between the coils are shown in Figure 4. Clearly, M and k are inversely proportional to the distance
separating the coils. Furthermore, with the help of the popular closed-form solution of the mutual inductance, the inverse proportionality is cubic in nature, since M ∝ 1/d³ for coaxial coils
separated by a distance d that is large relative to the coil radii.
Fig. 4. Mutual Inductance M and Coupling Coefficient k Results vs Air Gap Distance
The above conclusion is also applicable to the induced voltage, under open circuit operation, in the receiver coil, as shown in Figure 5.
Fig. 5. Induced Voltage in the Receiver Coil vs Air Gap Distance
The results of both mutual inductance M and coupling coefficient k, shown in Figure 6, are inspected versus axial alignment using another parametric study in EMS. The coupling between the coils drops
only when the misalignment becomes larger than 15 mm. Hence, the studied system can operate at the same efficiency in a short range of misalignments up to 15 mm. This behavior can be attributed to
the small size of the receiver.
Fig. 6. Mutual Inductance M and Coupling Coefficient k Results vs Misalignment
Similarly, the induced voltage in the receiver coil is almost constant for a misalignment of less than 15 mm and sharply decreases thereafter, as shown in Figure 7.
Fig. 7. Induced Voltage in the Receiver Coil vs Misalignment
Figure 8 illustrates an animation of the magnetic flux density versus different misalignments. The magnetic flux density that reaches the receiver coil is almost constant up to a certain level of
misalignment.
Fig. 8. Animation of the Magnetic Flux vs Different Misalignment
EMS Circuit-Coupling Analysis
In the above investigated WPT system, the coupling coefficient is relatively low even at a small air gap and aligned conditions. To illustrate the limitation of the WPT, we compute its efficiency
as per the equation η = R_load·|I_load|² / Re(V1·I1*), where V1 is the input voltage, I1 is the current in the primary side, I_load is the current in the load, and R_load is the resistance of the
load.
EMS circuit simulator is used to model the equivalent circuit of the simulated system as illustrated in Figure 9. Windings 1 and 2 are the coils of the WPT system. The input voltage is 2.5 V /
13.58MHz (phase shift is 0deg) and the load is a 10 Ohm resistance.
AC Magnetic module of EMS coupled to the circuit simulator are used to solve the WPT system. The transmitter and receiver are maintained at an aligned position and an air gap of 15 mm. The coupling
coefficient is 0.08 at these conditions, as shown in Figure 4.
Fig. 9. The Simulated Circuit of the Studied WPT System
Table 3 contains the current results in the windings computed by EMS. Using the formula (3), the efficiency of the studied system is 0.17%, which is indeed low.
Current Computed by EMS
Winding 1 4.891836e-003 - j 1.479600e-002
Winding 2 -5.844385e-004 + j 1.333585e-003
Table 3. Current Results Computed by EMS
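The 0.17% figure can be reproduced from the Table 3 phasors (a sketch; it assumes the voltage and currents share the same amplitude convention, so any 1/2 factors cancel in the ratio):

```python
V1 = 2.5 + 0j                         # input voltage, volts
I1 = 4.891836e-3 - 1.479600e-2j       # Winding 1 current from Table 3, amps
I_load = -5.844385e-4 + 1.333585e-3j  # Winding 2 (load) current, amps
R_load = 10.0                         # load resistance, ohms

P_in = (V1 * I1.conjugate()).real     # real power delivered by the source
P_out = R_load * abs(I_load) ** 2     # power dissipated in the load

eta = P_out / P_in
print(f"{eta * 100:.2f}%")            # 0.17%
```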
To improve its efficiency, resonant capacitors are added to the WPT system making it into a RWPT. In the following paragraphs, we shall compute the efficiency of the RWPT and compare it to that of a
WPT system.
Figure 10 shows the new simulated circuit modeled using the EMS circuit simulator. The resonant frequency is 13.58 MHz, while the resonant capacitors are respectively 77.26 pF and 114.42 pF,
computed based on the well-known series-resonance formula C = 1/((2πf₀)²L):
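For reference, the series-resonance relation C = 1/((2πf₀)²L) reproduces the transmitter-side capacitance to within rounding of the quoted inductance and frequency:

```python
import math

def resonant_c(f0_hz: float, l_henry: float) -> float:
    """Capacitance that resonates with L at f0: C = 1 / ((2*pi*f0)^2 * L)."""
    return 1.0 / ((2 * math.pi * f0_hz) ** 2 * l_henry)

f0 = 13.58e6           # resonant frequency, Hz
L1 = 1.774798e-6       # transmitter self-inductance from Table 2, H

c1_pF = resonant_c(f0, L1) * 1e12
print(round(c1_pF, 1))  # about 77.4 pF, close to the quoted 77.26 pF
```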
Fig. 10. Resonant Circuit of Simulated RWPT System Created Using EMS Simulator
The input and output power of the studied RWPT system are computed versus frequency. Both input and output power show peak values at the resonant frequency, as illustrated in Figure 11.
Fig. 11. Input and Output Power Results vs Frequency
Figure 12 shows the efficiency of the RWPT system versus different frequencies. The efficiency is maximum at the resonant frequency. Without resonant capacitors, the efficiency is around 0.17% while
it is close to 11% when the resonant capacitors are used. This proves that the resonant circuit helps in improving efficiency, especially for applications with low coupling coefficients.
Fig. 12. Efficiency vs Frequency
The impact of the load and the air gap distance on the efficiency of the RWPT system is investigated in the following section.
Figure 13 contains the efficiency results versus the air gap distance. The efficiency has a maximum value of around 19% in the range of 2 to 7 mm of air gap distance. It drops as the distance
between the coils becomes larger, i.e. inversely proportional to the distance. Nonetheless, it is still more efficient than the WPT system, even at 30mm.
Fig. 13. Efficiency vs Air Gap Distance
The load is varied from 1 Ohm to 50 Ohm, and the efficiency of the simulated system is computed using the circuit quantities generated by EMS, as shown in Figure 14. The efficiency rises until
reaching a maximum of 11% at a load of 10 Ohm, then decreases. Therefore, the load must be carefully selected.
Fig. 14. Efficiency versus load
Comparison of EMS Against Powersim ^[5] results
In this section, the results computed by EMS are compared to those obtained from Powersim. The RWPT circuit is modeled in Powersim. Figure 15 shows the equivalent circuit of the system in Powersim.
The parameters of the coupled inductor, used to model the wireless coils in Powersim, are given by EMS, as shown in Table 2.
Fig. 15. Equivalent Circuit of RWPT System in Powersim
Figure 16 compares the efficiency computed by EMS with that computed by Powersim. The two sets of results are in excellent agreement, matching almost exactly.
Fig. 16. Efficiency Results Comparison
The system response as a function of frequency is analyzed in Powersim and the results are plotted in Figure 17. The Bode diagram shows that the output amplitude curve peaks at about -16 dB near the
resonant frequency.
Fig. 17. Bode Diagram of the System Extracted from Powersim
In this application note, we began by evaluating a Wireless Power Transfer (WPT) system, finding its efficiency lacking even at close coil distances. Introducing two resonant capacitors transformed
the WPT into a Resonant Wireless Power Transfer (RWPT) system, showcasing its superiority. However, efficient operation requires careful selection of the load. In the final analysis, we compared
EMWorks simulation results for the RWPT system with those obtained using the commercial power-electronics software Powersim, and obtained closely matching efficiency results.
Comment for 1026.14 - Determination of Annual Percentage Rate | Consumer Financial Protection Bureau
14(a) General Rule
1. Tolerance. The tolerance of 1/8th of 1 percentage point above or below the annual percentage rate applies to any required disclosure of the annual percentage rate. The disclosure of the annual
percentage rate is required in §§ 1026.60, 1026.40, 1026.6, 1026.7, 1026.9, 1026.15, 1026.16, 1026.26, 1026.55, and 1026.56.
2. Rounding. The regulation does not require that the annual percentage rate be calculated to any particular number of decimal places; rounding is permissible within the 1/8th of 1 percent tolerance.
For example, an exact annual percentage rate of 14.33333% may be stated as 14.33% or as 14.3%, or even as 14 1/4%; but it could not be stated as 14.2% or 14%, since each varies by more than the
permitted tolerance.
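The rounding examples above follow directly from the 1/8-of-1-percentage-point tolerance; a small sketch (illustrative only, not part of the regulation):

```python
def within_reg_z_tolerance(disclosed_apr: float, exact_apr: float) -> bool:
    # Section 1026.14(a): the disclosed APR may differ from the exact APR by
    # at most 1/8 of 1 percentage point above or below.
    return abs(disclosed_apr - exact_apr) <= 0.125

EXACT = 14.33333  # the exact APR from the comment's example (percent)
for disclosed in (14.33, 14.3, 14.25, 14.2, 14.0):
    verdict = "OK" if within_reg_z_tolerance(disclosed, EXACT) else "outside tolerance"
    print(f"{disclosed:>6}% -> {verdict}")
```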
3. Periodic rates. No explicit tolerance exists for any periodic rate as such; a disclosed periodic rate may vary from precise accuracy (for example, due to rounding) only to the extent that its
annualized equivalent is within the tolerance permitted by § 1026.14(a). Further, a periodic rate need not be calculated to any particular number of decimal places.
4. Finance charges. The regulation does not prohibit creditors from assessing finance charges on balances that include prior, unpaid finance charges; state or other applicable law may do so, however.
5. Good faith reliance on faulty calculation tools. The regulation relieves a creditor of liability for an error in the annual percentage rate or finance charge that resulted from a corresponding
error in a calculation tool used in good faith by the creditor. Whether or not the creditor's use of the tool was in good faith must be determined on a case-by-case basis, but the creditor must in
any case have taken reasonable steps to verify the accuracy of the tool, including any instructions, before using it. Generally, the safe harbor from liability is available only for errors directly
attributable to the calculation tool itself, including software programs; it is not intended to absolve a creditor of liability for its own errors, or for errors arising from improper use of the
tool, from incorrect data entry, or from misapplication of the law.
6. Effect of leap year. Any variance in the annual percentage rate that occurs solely by reason of the addition of February 29 in a leap year may be disregarded, and such a rate may be disclosed
without regard to such variance.
14(b) Annual Percentage Rate - In General
1. Corresponding annual percentage rate computation. For purposes of §§ 1026.60, 1026.40, 1026.6, 1026.7(a)(4) or (b)(4), 1026.9, 1026.15, 1026.16, 1026.26, 1026.55, and 1026.56, the annual
percentage rate is determined by multiplying the periodic rate by the number of periods in the year. This computation reflects the fact that, in such disclosures, the rate (known as the corresponding
annual percentage rate) is prospective and does not involve any particular finance charge or periodic balance.
14(c) Optional Effective Annual Percentage Rate for Periodic Statements for Creditors Offering Open-End Credit Plans Secured by a Consumer's Dwelling
1. General rule. The periodic statement may reflect (under § 1026.7(a)(7)) the annualized equivalent of the rate actually applied during a particular cycle; this rate may differ from the
corresponding annual percentage rate because of the inclusion of, for example, fixed, minimum, or transaction charges. Sections 1026.14(c)(1) through (c)(4) state the computation rules for the
effective rate.
2. Charges related to opening, renewing, or continuing an account. Sections 1026.14(c)(2) and (c)(3) exclude from the calculation of the effective annual percentage rate finance charges that are
imposed during the billing cycle such as a loan fee, points, or similar charge that relates to opening, renewing, or continuing an account. The charges involved here do not relate to a specific
transaction or to specific activity on the account, but relate solely to the opening, renewing, or continuing of the account. For example, an annual fee to renew an open-end credit account that is a
percentage of the credit limit on the account, or that is charged only to consumers that have not used their credit card for a certain dollar amount in transactions during the preceding year, would
not be included in the calculation of the annual percentage rate, even though the fee may not be excluded from the finance charge under § 1026.4(c)(4). (See comment 4(c)(4)-2.) This rule applies even
if the loan fee, points, or similar charges are billed on a subsequent periodic statement or withheld from the proceeds of the first advance on the account.
3. Classification of charges. If the finance charge includes a charge not due to the application of a periodic rate, the creditor must use the annual percentage rate computation method that
corresponds to the type of charge imposed. If the charge is tied to a specific transaction (for example, 3 percent of the amount of each transaction), then the method in § 1026.14(c)(3) must be used.
If a fixed or minimum charge is applied, that is, one not tied to any specific transaction, then the formula in § 1026.14(c)(2) is appropriate.
4. Small finance charges. Section 1026.14(c)(4) gives the creditor an alternative to § 1026.14(c)(2) and (c)(3) if small finance charges (50 cents or less) are involved; that is, if the finance
charge includes minimum or fixed fees not due to the application of a periodic rate and the total finance charge for the cycle does not exceed 50 cents. For example, while a monthly activity fee of
50 cents on a balance of $20 would produce an annual percentage rate of 30 percent under the rule in § 1026.14(c)(2), the creditor may disclose an annual percentage rate of 18 percent if the periodic
rate generally applicable to all balances is 1 and 1/2 percent per month.
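The 50-cent example works out as follows; a sketch of the two permitted computations (illustrative only):

```python
def quotient_apr_percent(finance_charge: float, balance: float) -> float:
    # Quotient method: (finance charge / balance) * periods per year, in percent.
    return finance_charge / balance * 12 * 100

# Comment 14(c)-4 example: a 50-cent activity fee on a $20 balance.
apr_under_c2 = quotient_apr_percent(0.50, 20.0)  # about 30 percent under (c)(2)
apr_under_c4 = 1.5 * 12                          # 1.5%/month periodic rate -> 18 percent
print(apr_under_c2, apr_under_c4)
```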
5. Prior-cycle adjustments.
i. The annual percentage rate reflects the finance charges imposed during the billing cycle. However, finance charges imposed during the billing cycle may relate to activity in a prior cycle.
Examples of circumstances when this may occur are:
A. A cash advance occurs on the last day of a billing cycle on an account that uses the transaction date to figure finance charges, and it is impracticable to post the transaction until the following cycle.
B. An adjustment to the finance charge is made following the resolution of a billing error dispute.
C. A consumer fails to pay the purchase balance under a deferred payment feature by the payment due date, and finance charges are imposed from the date of purchase.
ii. Finance charges relating to activity in prior cycles should be reflected on the periodic statement as follows:
A. If a finance charge imposed in the current billing cycle is attributable to periodic rates applicable to prior billing cycles (such as when a deferred payment balance was not paid in full by the
payment due date and finance charges from the date of purchase are now being debited to the account, or when a cash advance occurs on the last day of a billing cycle on an account that uses the
transaction date to figure finance charges and it is impracticable to post the transaction until the following cycle), and the creditor uses the quotient method to calculate the annual percentage
rate, the numerator would include the amount of any transaction charges plus any other finance charges posted during the billing cycle. At the creditor's option, balances relating to the finance
charge adjustment may be included in the denominator if permitted by the legal obligation, if it was impracticable to post the transaction in the previous cycle because of timing, or if the
adjustment is covered by comment 14(c)-5.ii.B.
B. If a finance charge that is posted to the account relates to activity for which a finance charge was debited or credited to the account in a previous billing cycle (for example, if the finance
charge relates to an adjustment such as the resolution of a billing error dispute, or an unintentional posting error, or a payment by check that was later returned unpaid for insufficient funds or
other reasons), the creditor shall at its option:
1. Calculate the annual percentage rate in accordance with ii.A of this paragraph, or
2. Disclose the finance charge adjustment on the periodic statement and calculate the annual percentage rate for the current billing cycle without including the finance charge adjustment in the
numerator and balances associated with the finance charge adjustment in the denominator.
14(c)(1) Solely Periodic Rates Imposed
1. Periodic rates. Section 1026.14(c)(1) applies if the only finance charge imposed is due to the application of a periodic rate to a balance. The creditor may compute the annual percentage rate:
i. By multiplying each periodic rate by the number of periods in the year; or
ii. By the “quotient” method. This method produces a composite annual percentage rate when different periodic rates apply to different balances. For example, a particular plan may involve a periodic
rate of 1 1/2 percent on balances up to $500, and 1 percent on balances over $500. If, in a given cycle, the consumer has a balance of $800, the finance charge would consist of $7.50 (500 × .015) plus
$3.00 (300 × .01), for a total finance charge of $10.50. The annual percentage rate for this period may be disclosed either as 18 percent on $500 and 12 percent on $300, or as 15.75 percent on a balance of
$800 (the quotient of $10.50 divided by $800, multiplied by 12).
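The arithmetic in the example above can be checked mechanically. The tier boundaries and monthly rates below are those implied by the example's own figures (1 1/2 percent up to $500, 1 percent above); the helper function is an illustration, not regulatory language.

```python
def tiered_finance_charge(balance: float, tiers) -> float:
    # tiers: list of (upper_bound, monthly_rate); the last bound may be None (uncapped).
    charge, prev_bound = 0.0, 0.0
    for bound, rate in tiers:
        top = balance if bound is None else min(balance, bound)
        if top > prev_bound:
            charge += (top - prev_bound) * rate  # charge on this tier's slice
        prev_bound = balance if bound is None else bound
    return charge

# Example from the comment: $800 balance, 1.5% on the first $500, 1% above.
fc = tiered_finance_charge(800.0, [(500.0, 0.015), (None, 0.01)])
composite_apr = fc / 800.0 * 12 * 100  # quotient method, in percent
print(fc, composite_apr)
```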
14(c)(2) Minimum or Fixed Charge, But Not Transaction Charge, Imposed
1. Certain charges not based on periodic rates. Section 1026.14(c)(2) specifies use of the quotient method to determine the annual percentage rate if the finance charge imposed includes a certain
charge not due to the application of a periodic rate (other than a charge relating to a specific transaction). For example, if the creditor imposes a minimum $1 finance charge on all balances below
$50, and the consumer's balance was $40 in a particular cycle, the creditor would disclose an annual percentage rate of 30 percent (1/40 × 12).
2. No balance. If there is no balance to which the finance charge is applicable, an annual percentage rate cannot be determined under § 1026.14(c)(2). This could occur not only when minimum charges
are imposed on an account with no balance, but also when a periodic rate is applied to advances from the date of the transaction. For example, if on May 19 the consumer pays the new balance in full
from a statement dated May 1, and has no further transactions reflected on the June 1 statement, that statement would reflect a finance charge with no account balance.
14(c)(3) Transaction Charge Imposed
1. Transaction charges.
i. Section 1026.14(c)(3) transaction charges include, for example:
A. A loan fee of $10 imposed on a particular advance.
B. A charge of 3 percent of the amount of each transaction.
ii. The reference to avoiding duplication in the computation requires that the amounts of transactions on which transaction charges were imposed not be included both in the amount of total balances
and in the “other amounts on which a finance charge was imposed” figure. In a multifeatured plan, creditors may consider each bona fide feature separately in the calculation of the denominator. A
creditor has considerable flexibility in defining features for open-end plans, as long as the creditor has a reasonable basis for the distinctions. For further explanation and examples of how to
determine the components of this formula, see appendix F to part 1026.
2. Daily rate with specific transaction charge. Section 1026.14(c)(3) sets forth an acceptable method for calculating the annual percentage rate if the finance charge results from a charge relating
to a specific transaction and the application of a daily periodic rate. This section includes the requirement that the creditor follow the rules in appendix F to part 1026 in calculating the annual
percentage rate, especially the provision in the introductory section of appendix F which addresses the daily rate/transaction charge situation by providing that the “average of daily balances” shall
be used instead of the “sum of the balances.”
14(d) Calculations Where Daily Periodic Rate Applied
1. Quotient method. Section 1026.14(d) addresses use of a daily periodic rate(s) to determine some or all of the finance charge and use of the quotient method to determine the annual percentage rate.
Since the quotient formula in § 1026.14(c)(1)(ii) and (c)(2) cannot be used when a daily rate is being applied to a series of daily balances, § 1026.14(d) provides two alternative ways to calculate
the annual percentage rate - either of which satisfies the provisions of § 1026.7(a)(7).
2. Daily rate with specific transaction charge. If the finance charge results from a charge relating to a specific transaction and the application of a daily periodic rate, see comment 14(c)(3)-2 for
guidance on an appropriate calculation method.
SUBROUTINE SPSP(NAX,NAY,NAZ,NBX,NBY,NBZ,NX,NY,NZ,
* X,Y,Z,VX,VY,VZ,SIGMA,WEIGHT,B,BX,BY,BZ)
INTEGER NAX,NAY,NAZ,NBX,NBY,NBZ,NX,NY,NZ
REAL X(NX),Y(NY),Z(NZ),VX(5,NX),VY(5,NY),VZ(5,NZ)
REAL SIGMA,WEIGHT,B(*),BX(*),BY(*),BZ(*)
C Complement to FITPACK
C by Alan Kaylor Cline
C coded -- January 23, 1994
C by Ludek Klimes
C Department of Geophysics
C Charles University, Prague
C This subroutine evaluates the Sobolev scalar products
C of spline under tension basis functions in three variables
C (the Sobolev scalar product consists of integrals of the
C products of partial derivatives of the two argument functions)
C On input--
C NAX, NAY, NAZ are the orders of partial derivatives of
C the first argument function in the scalar product
C NBX, NBY, NBZ are the orders of partial derivatives of
C the second argument function in the scalar product
C NX, NY, NZ are the numbers of grid points in the
C X-, Y-, Z-directions, respectively. (NX, NY, NZ
C should be at least 1)
C X, Y, and Z are arrays of the NX, NY, and NZ coordinates
C of the grid lines in the X-, Y-, and Z-directions,
C respectively. These should be strictly increasing.
C VX, VY,VZ are arrays of lengths 5*NX, 5*NY, 5*NZ,
C respectively, containing the B-spline basis data for the
C X-, Y- and Z-grids. They contain certain coefficients
C to be used for the determination of the B-spline under
C tension basis. Considered as a 5 by N array, for I = 1,
C ... , N, B-spline basis function I is specified by--
C V(1,I) = second derivative at X(I-1), for I .NE. 1,
C V(2,I) = second derivative at X(I), for all I,
C V(3,I) = second derivative at X(I+1), for I .NE. N,
C V(4,I) = function value at X(I-1), for I .NE. 1,
C V(5,I) = function value at X(I+1), for I .NE. N,
C and the properties that it has--
C 1. Function value 1 at X(I),
C 2. Function value and second derivative = 0 at
C X(1), ... , X(I-2), and X(I+2), ... , X(N).
C In V(5,N) and V(3,N) are contained function value and
C second derivative of basis function zero at X(1),
C respectively. In V(4,1) and V(1,1) are contained
C function value and second derivative of basis function
C N+1 at X(N), respectively. Function value and second
C derivative of these two basis functions are zero at all
C other knots. Only basis function zero has non-zero
C second derivative value at X(1) and only basis
C function N+1 has non-zero second derivative at X(N).
C SIGMA contains the tension factor. This value indicates
C the curviness desired. If ABS(SIGMA) is nearly zero
C (e. g. .001) the basis functions are approximately cubic
C splines. If ABS(SIGMA) is large (e. g. 50.) the basis
C functions are nearly piecewise linear. If SIGMA equals
C zero a cubic spline basis results. A standard value for
C SIGMA is approximately 1. in absolute value.
C WEIGHT is the weight of the product of NAX,NAY,NAZ-partial
C derivative of the first argument and NBX,NBY,NBZ-partial
C derivative of the second argument, in the Sobolev scalar
C product. The integral of the product of the partial
C derivatives multiplied by WEIGHT is added to matrix B.
C B is the array containing NN*NN matrix B (NN=NX*NY*NZ),
C stored as a symmetric matrix ( NN*(NN+1)/2 storage
C locations ) if NAX.EQ.NBX and NAY.EQ.NBY and NAZ.EQ.NBZ,
C else stored as a general matrix ( NN*NN storage
C locations ). The II,JJ-element of the matrix B
C will be increased by the integral of the product of
C NAX-,NAY-,NAZ-partial derivative of the II-th basis
C function and NBX-,NBY-,NBZ-partial derivative of the
C JJ-th basis function, multiplied by WEIGHT.
C Here the basis function IX,IY,IZ (1.LE.IX.LE.NX,
C 1.LE.IY.LE.NY, 1.LE.IZ.LE.NZ) is indexed by
C II=IX+NX*((IY-1)+NY*(IZ-1)).
C BX is an auxiliary array of at least NX*(NX+1)/2
C locations for NAX.EQ.NBX, or of at least NX*NX locations
C for NAX.NE.NBX. It is used for scratch storage.
C BY is an auxiliary array of at least NY*(NY+1)/2
C locations for NAY.EQ.NBY, or of at least NY*NY locations
C for NAY.NE.NBY. It is used for scratch storage.
C BZ is an auxiliary array of at least NZ*(NZ+1)/2
C locations for NAZ.EQ.NBZ, or of at least NZ*NZ locations
C for NAZ.NE.NBZ. It is used for scratch storage.
C And
C None of the input parameters, except B, BX, BY, BZ, are
C altered
C The parameters NX, NY, NZ, X, Y, Z, VX, VY, VZ and SIGMA
C should be input unaltered from the output of VAL3B1
C (SURFB1, CURVB1).
C On output--
C B is the input array increased by the integrals of the
C products of NAX-,NAY-,NAZ-partial derivatives and
C NBX-,NBY-,NBZ-partial derivatives of the spline under
C tension basis functions, multiplied by WEIGHT.
C This subroutine references package modules QSPL, QINT,
C and SNHCSH.
EXTERNAL QSPL
C Other variables used inside the subroutine SPSP:
INTEGER IX,JX,KX,MX,IY,JY,KY,MY,IZ,JZ,KZ,MZ,II,JJ,KK,MM
C The matrix element B(II,JJ) is located in the array element
C B(KK), where
C for symmetric matrix B, II.LE.JJ :
C KK= (JJ-1)*JJ/2+II
C for symmetric matrix B, II.GT.JJ :
C KK= (II-1)*II/2+JJ
C for nonsymmetric matrix B :
C KK= (JJ-1)*NN+II
C with NN=NX*NY*NZ being the dimension of the matrix B.
C The matrix element BX(IX,JX) is located in the array element
C BX(KX). The meaning of IX,JX,KX is similar to the meaning
C of II,JJ,KK in the case of matrix B.
C The matrix element BY(IY,JY) is located in the array element
C BY(KY). The meaning of IY,JY,KY is similar to the meaning
C of II,JJ,KK in the case of matrix B.
C The matrix element BZ(IZ,JZ) is located in the array element
C BZ(KZ). The meaning of IZ,JZ,KZ is similar to the meaning
C of II,JJ,KK in the case of matrix B.
C MM, MX, MY, MZ are auxiliary variables considering the
C symmetry of the matrices B, BX, BY, BZ.
C Scalar products of B-splines in X-direction
KX= 0
MX= NX
DO 12 JX=1,NX
C Is BX symmetric matrix ?
IF(NAX.EQ.NBX) MX=JX
DO 11 IX=1,MX
KX= KX+1
CALL QSPL(NAX,NBX,IX,JX,NX,X,VX,SIGMA,BX(KX))
C QSPL
11 CONTINUE
12 CONTINUE
C Scalar products of B-splines in Y-direction
KY= 0
MY= NY
DO 14 JY=1,NY
C Is BY symmetric matrix ?
IF(NAY.EQ.NBY) MY=JY
DO 13 IY=1,MY
KY= KY+1
CALL QSPL(NAY,NBY,IY,JY,NY,Y,VY,SIGMA,BY(KY))
C QSPL
13 CONTINUE
14 CONTINUE
C Scalar products of B-splines in Z-direction
KZ= 0
MZ= NZ
DO 16 JZ=1,NZ
C Is BZ symmetric matrix ?
IF(NAZ.EQ.NBZ) MZ=JZ
DO 15 IZ=1,MZ
KZ= KZ+1
CALL QSPL(NAZ,NBZ,IZ,JZ,NZ,Z,VZ,SIGMA,BZ(KZ))
C QSPL
15 CONTINUE
16 CONTINUE
C Scalar products of 3-D B-splines
C Is B symmetric matrix ?
IF(NAX.EQ.NBX.AND.NAY.EQ.NBY.AND.NAZ.EQ.NBZ) THEN
MM= 1
ELSE
MM= 0
END IF
KK= 0
JJ= 0
DO 27 JZ=1,NZ
DO 26 JY=1,NY
DO 25 JX=1,NX
JJ= JJ+1
II= 0
C Is BZ symmetric matrix ?
IF(NAZ.EQ.NBZ) THEN
KZ= (JZ-1)*JZ/2
ELSE
KZ= (JZ-1)*NZ
END IF
DO 23 IZ=1,NZ
KZ= KZ+1
C Subdiagonal element of matrix BZ
IF(NAZ.EQ.NBZ.AND.IZ.GT.JZ) KZ=KZ+IZ-2
C Is BY symmetric matrix ?
IF(NAY.EQ.NBY) THEN
KY= (JY-1)*JY/2
ELSE
KY= (JY-1)*NY
END IF
DO 22 IY=1,NY
KY= KY+1
C Subdiagonal element of matrix BY
IF(NAY.EQ.NBY.AND.IY.GT.JY) KY=KY+IY-2
C Is BX symmetric matrix ?
IF(NAX.EQ.NBX) THEN
KX= (JX-1)*JX/2
ELSE
KX= (JX-1)*NX
END IF
DO 21 IX=1,NX
KX= KX+1
C Subdiagonal element of matrix BX
IF(NAX.EQ.NBX.AND.IX.GT.JX) KX=KX+IX-2
KK= KK+1
B(KK)= B(KK)+WEIGHT*BX(KX)*BY(KY)*BZ(KZ)
II= II+1
IF(MM*II.GE.JJ) GO TO 24
21 CONTINUE
22 CONTINUE
23 CONTINUE
24 CONTINUE
25 CONTINUE
26 CONTINUE
27 CONTINUE
RETURN
END
SUBROUTINE QSPL(NA,NB,IA,IB,N,X,V,SIGMA,Q)
INTEGER NA,NB,IA,IB,N
REAL X(N),V(5,N),SIGMA,Q
C Complement to FITPACK
C by Alan Kaylor Cline
C coded -- January 23, 1994
C by Ludek Klimes
C Department of Geophysics
C Charles University, Prague
C This subroutine evaluates the Sobolev scalar product
C of spline under tension basis functions in one variable
C (the Sobolev scalar product consists of integrals of the
C products of partial derivatives of the two argument functions)
C On input--
C NA is the order of the partial derivative of
C the first argument function in the scalar product.
C NB is the order of the partial derivative of
C the second argument function in the scalar product.
C IA is the index of the first argument function
C (1.LE.IA.LE.N).
C IB is the index of the second argument function
C (1.LE.IB.LE.N).
C N is the number of grid points.
C (N should be at least 1)
C X is the array of the N coordinates of grid points.
C These should be strictly increasing.
C V is the array of lengths 5*N,
C containing certain coefficients to be used
C for the determination of the B-spline under
C tension basis. Considered as a 5 by N array, for I = 1,
C ... , N, B-spline basis function I is specified by--
C V(1,I) = second derivative at X(I-1), for I .NE. 1,
C V(2,I) = second derivative at X(I), for all I,
C V(3,I) = second derivative at X(I+1), for I .NE. N,
C V(4,I) = function value at X(I-1), for I .NE. 1,
C V(5,I) = function value at X(I+1), for I .NE. N,
C and the properties that it has--
C 1. Function value 1 at X(I),
C 2. Function value and second derivative = 0 at
C X(1), ... , X(I-2), and X(I+2), ... , X(N).
C In V(5,N) and V(3,N) are contained function value and
C second derivative of basis function zero at X(1),
C respectively. In V(4,1) and V(1,1) are contained
C function value and second derivative of basis function
C N+1 at X(N), respectively. Function value and second
C derivative of these two basis functions are zero at all
C other knots. Only basis function zero has non-zero
C second derivative value at X(1) and only basis
C function N+1 has non-zero second derivative at X(N).
C SIGMA contains the tension factor. This value indicates
C the curviness desired. If ABS(SIGMA) is nearly zero
C (e. g. .001) the basis functions are approximately cubic
C splines. If ABS(SIGMA) is large (e. g. 50.) the basis
C functions are nearly piecewise linear. If SIGMA equals
C zero a cubic spline basis results. A standard value for
C SIGMA is approximately 1. in absolute value.
C And
C None of the input parameters are altered.
C The parameters N, X, V, and SIGMA
C should be input unaltered from the output of VAL3B1
C (SURFB1, CURVB1).
C On output--
C Q is the integral of the product of NA-th partial
C derivative of the IA-th basis function and
C NB-th partial derivative of the IB-th spline under
C tension basis function.
C This subroutine references package modules QINT, SNHCSH.
EXTERNAL QINT
C Other variables used inside the subroutine QSPL:
INTEGER I,J
REAL SIGMAP,V1A,V2A,V3A,V4A,V5A,V1B,V2B,V3B,V4B,V5B
C I...Index of the interval.
C J...Position of the second B-spline with respect to the
C interval I.
C SIGMAP...Denormalized tension factor.
C V1A,V2A,V3A,V4A,V5A,V1B,V2B,V3B,V4B,V5B...Auxiliary
C storage locations for V(1,IA),...,V(5,IB).
IF(N.GT.1) GO TO 10
Q = 1.
IF(NA.NE.0.OR.NB.NE.0) Q=0.
GO TO 90
10 SIGMAP= ABS(SIGMA)*FLOAT(N-1)/(X(N)-X(1))
V1A= V(1,IA)
V2A= V(2,IA)
V3A= V(3,IA)
V4A= V(4,IA)
V5A= V(5,IA)
V1B= V(1,IB)
V2B= V(2,IB)
V3B= V(3,IB)
V4B= V(4,IB)
V5B= V(5,IB)
Q = 0.
I = IA-2
IF(I.LT.1) GO TO 20
J = I-IB+3
IF(J.LT.1) GO TO 20
IF(J.GT.4) GO TO 90
GO TO (11,12,13,14),J
11 CALL QINT(X(I),X(I+1),0. ,0. ,V4A,V1A,NA,
* 0. ,0. ,V4B,V1B,NB,SIGMAP,Q)
GO TO 20
12 CALL QINT(X(I),X(I+1),0. ,0. ,V4A,V1A,NA,
* V4B,V1B,1. ,V2B,NB,SIGMAP,Q)
GO TO 20
13 CALL QINT(X(I),X(I+1),0. ,0. ,V4A,V1A,NA,
* 1. ,V2B,V5B,V3B,NB,SIGMAP,Q)
GO TO 20
14 CALL QINT(X(I),X(I+1),0. ,0. ,V4A,V1A,NA,
* V5B,V3B,0. ,0. ,NB,SIGMAP,Q)
C QINT
20 I = IA-1
IF(I.LT.1) GO TO 30
J = I-IB+3
IF(J.LT.1) GO TO 30
IF(J.GT.4) GO TO 90
GO TO (21,22,23,24),J
21 CALL QINT(X(I),X(I+1),V4A,V1A,1. ,V2A,NA,
* 0. ,0. ,V4B,V1B,NB,SIGMAP,Q)
GO TO 30
22 CALL QINT(X(I),X(I+1),V4A,V1A,1. ,V2A,NA,
* V4B,V1B,1. ,V2B,NB,SIGMAP,Q)
GO TO 30
23 CALL QINT(X(I),X(I+1),V4A,V1A,1. ,V2A,NA,
* 1. ,V2B,V5B,V3B,NB,SIGMAP,Q)
GO TO 30
24 CALL QINT(X(I),X(I+1),V4A,V1A,1. ,V2A,NA,
* V5B,V3B,0. ,0. ,NB,SIGMAP,Q)
C QINT
30 I = IA
IF(I.GE.N) GO TO 90
J = I-IB+3
IF(J.LT.1) GO TO 40
IF(J.GT.4) GO TO 90
GO TO (31,32,33,34),J
31 CALL QINT(X(I),X(I+1),1. ,V2A,V5A,V3A,NA,
* 0. ,0. ,V4B,V1B,NB,SIGMAP,Q)
GO TO 40
32 CALL QINT(X(I),X(I+1),1. ,V2A,V5A,V3A,NA,
* V4B,V1B,1. ,V2B,NB,SIGMAP,Q)
GO TO 40
33 CALL QINT(X(I),X(I+1),1. ,V2A,V5A,V3A,NA,
* 1. ,V2B,V5B,V3B,NB,SIGMAP,Q)
GO TO 40
34 CALL QINT(X(I),X(I+1),1. ,V2A,V5A,V3A,NA,
* V5B,V3B,0. ,0. ,NB,SIGMAP,Q)
C QINT
40 I = IA+1
IF(I.GE.N) GO TO 90
J = I-IB+3
IF(J.LT.1) GO TO 90
IF(J.GT.4) GO TO 90
GO TO (41,42,43,44),J
41 CALL QINT(X(I),X(I+1),V5A,V3A,0. ,0. ,NA,
* 0. ,0. ,V4B,V1B,NB,SIGMAP,Q)
GO TO 90
42 CALL QINT(X(I),X(I+1),V5A,V3A,0. ,0. ,NA,
* V4B,V1B,1. ,V2B,NB,SIGMAP,Q)
GO TO 90
43 CALL QINT(X(I),X(I+1),V5A,V3A,0. ,0. ,NA,
* 1. ,V2B,V5B,V3B,NB,SIGMAP,Q)
GO TO 90
44 CALL QINT(X(I),X(I+1),V5A,V3A,0. ,0. ,NA,
* V5B,V3B,0. ,0. ,NB,SIGMAP,Q)
C QINT
90 CONTINUE
RETURN
END
SUBROUTINE QINT(X1,X2,FA1,DA1,FA2,DA2,NA,
* FB1,DB1,FB2,DB2,NB,SIGMAP,Q)
INTEGER NA,NB
REAL X1,X2,FA1,DA1,FA2,DA2,FB1,DB1,FB2,DB2,SIGMAP,Q
C Complement to FITPACK
C by Alan Kaylor Cline
C coded -- January 23, 1994
C by Ludek Klimes
C Department of Geophysics
C Charles University, Prague
C This subroutine evaluates the integral of the product
C of the given derivatives of the two given cubic functions
C or spline under tension basis functions in one variable,
C over a single specified interval.
C On input--
C X1, X2 endpoints of the given interval.
C FA1, DA1 function value and second derivative of the
C first given function at X1.
C FA2, DA2 function value and second derivative of the
C first given function at X2.
C NA is the order of the partial derivative of
C the first argument function in the scalar product.
C FB1, DB1, FB2, DB2 the same as FA1, DA1, FA2, DA2, but
C for the second given function.
C NB is the order of the partial derivative of
C the second argument function in the scalar product.
C SIGMAP is the denormalized tension factor.
C And
C None of the input parameters are altered.
C On output--
C Q is the integral of the product of NA-th partial
C derivative of the first function and
C NB-th partial derivative of the second function,
C over the interval X1,X2.
C This subroutine references package module SNHCSH.
EXTERNAL SNHCSH
C Other variables used inside the subroutine QINT:
INTEGER MA,MB,M
REAL QQ,H,SH,CH,SH1,CH1,SIGMA2
REAL A1,A2,A3,A4,B1,B2,B3,B4,AB11,AB21,AB12,AB22
MA= MOD(NA,2)
MB= MOD(NB,2)
M = MA+MA+MB+1
QQ= 0.
IF(SIGMAP.NE.0.) GO TO 40
C No tension:
H = X2-X1
IF(NA.LE.3.AND.NB.LE.3) GO TO 1
GO TO 91
1 IF(NA.LE.1) GO TO 3
C Coefficients of linear function
A3= DA2/H
A4= -DA1/H
IF(NB.LE.1) GO TO 2
C Coefficients of linear function
B3= DB2/H
B4= -DB1/H
GO TO 80
2 CONTINUE
C Coefficients of cubic and linear functions
B1= DB2/H
B2= -DB1/H
B3= FB2/H-DB2*H/6.
B4= DB1*H/6.-FB1/H
GO TO 30
3 CONTINUE
C Coefficients of cubic and linear functions
A1= DA2/H
A2= -DA1/H
A3= FA2/H-DA2*H/6.
A4= DA1*H/6.-FA1/H
IF(NB.LE.1) GO TO 4
C Coefficients of linear function
B3= DB2/H
B4= -DB1/H
GO TO 20
4 CONTINUE
C Coefficients of cubic and linear functions
B1= DB2/H
B2= -DB1/H
B3= FB2/H-DB2*H/6.
B4= DB1*H/6.-FB1/H
C Integrals of (cubic function)*(cubic function):
GO TO (11,12,13,14),M
C (even derivative)*(even derivative)
11 AB11= (H**7)/252.
AB21= -(H**7)/5040.
AB12= AB21
AB22= AB11
GO TO 15
C (even derivative)*(odd derivative)
12 AB11= (H**6)/72.
AB21= -(H**6)/720.
AB12= -AB21
AB22= -AB11
GO TO 15
C (odd derivative)*(even derivative)
13 AB11= (H**6)/72.
AB21= (H**6)/720.
AB12= -AB21
AB22= -AB11
GO TO 15
C (odd derivative)*(odd derivative)
14 AB11= (H**5)/20.
AB21= (H**5)/120.
AB12= AB21
AB22= AB11
C Accumulation of the computed integral:
15 QQ=QQ+A1*(AB11*B1+AB12*B2)+A2*(AB21*B1+AB22*B2)
C Integrals of (cubic function)*(linear function):
20 GO TO (21,22,23,24),M
C (even derivative)*(even derivative)
21 AB11= (H**5)/30.
AB21= -(H**5)/120.
AB12= AB21
AB22= AB11
GO TO 25
C (even derivative)*(odd derivative)
22 AB11= (H**4)/24.
AB21= -AB11
AB12= AB11
AB22= -AB11
GO TO 25
C (odd derivative)*(even derivative)
23 AB11= (H**4)/8.
AB21= (H**4)/24.
AB12= -AB21
AB22= -AB11
GO TO 25
C (odd derivative)*(odd derivative)
24 AB11= (H**3)/6.
AB21= (H**3)/6.
AB12= AB21
AB22= AB11
C Accumulation of the computed integral:
25 QQ=QQ+A1*(AB11*B3+AB12*B4)+A2*(AB21*B3+AB22*B4)
IF(NB.GT.1) GO TO 80
C Integrals of (linear function)*(cubic function):
30 GO TO (31,32,33,34),M
C (even derivative)*(even derivative)
31 AB11= (H**5)/30.
AB21= -(H**5)/120.
AB12= AB21
AB22= AB11
GO TO 35
C (even derivative)*(odd derivative)
32 AB11= (H**4)/8.
AB21= -(H**4)/24.
AB12= -AB21
AB22= -AB11
GO TO 35
C (odd derivative)*(even derivative)
33 AB11= (H**4)/24.
AB21= (H**4)/24.
AB12= -AB11
AB22= -AB11
GO TO 35
C (odd derivative)*(odd derivative)
34 AB11= (H**3)/6.
AB21= (H**3)/6.
AB12= AB21
AB22= AB11
C Accumulation of the computed integral:
35 QQ=QQ+A3*(AB11*B1+AB12*B2)+A4*(AB21*B1+AB22*B2)
GO TO 80
C Nonzero tension:
40 H = SIGMAP*(X2-X1)
CALL SNHCSH(SH1,CH1,H,0)
C SNHCSH
SH= SH1+H
CH= CH1+1.
SIGMA2= SIGMAP*SIGMAP
C Coefficients of hyperbolic functions (multiplied by SH)
A1= DA2/SIGMA2
A2= -DA1/SIGMA2
B1= DB2/SIGMA2
B2= -DB1/SIGMA2
C Doubled
C integrals of (hyperbolic function)*(hyperbolic function):
GO TO (51,52,53,54),M
C (even derivative)*(even derivative)
51 AB11= CH*SH1+H*CH1
AB21= SH1-H*CH1
AB12= AB21
AB22= AB11
GO TO 55
C (even derivative)*(odd derivative)
52 AB11= SH*SH
AB21= -H*SH
AB12= -AB21
AB22= -AB11
GO TO 55
C (odd derivative)*(even derivative)
53 AB11= SH*SH
AB21= H*SH
AB12= -AB21
AB22= -AB11
GO TO 55
C (odd derivative)*(odd derivative)
54 AB11= SH*CH+H
AB21= SH+H*CH
AB12= AB21
AB22= AB11
C Accumulation of the computed integral:
55 QQ=QQ+(A1*(AB11*B1+AB12*B2)+A2*(AB21*B1+AB22*B2))/(2.*SH*SH)
IF(NB.GT.1) GO TO 70
C Coefficients of linear function
B3= ( FB2-B1)/H
B4= (-FB1-B2)/H
C Integrals of (hyperbolic function)*(linear function):
GO TO (61,62,63,64),M
C (even derivative)*(even derivative)
61 AB11= H*CH1-SH1
AB21= -SH1
AB12= AB21
AB22= AB11
GO TO 65
C (even derivative)*(odd derivative)
62 AB11= CH1
GO TO 65
C (odd derivative)*(even derivative)
63 AB11= H*SH-CH1
AB21= CH1
GO TO 65
C (odd derivative)*(odd derivative)
64 AB11= SH
AB21= SH
AB12= AB21
AB22= AB11
C Accumulation of the computed integral:
65 QQ=QQ+(A1*(AB11*B3+AB12*B4)+A2*(AB21*B3+AB22*B4))/SH
70 IF(NA.GT.1) GO TO 90
C Coefficients of linear function
A3= ( FA2-A1)/H
A4= (-FA1-A2)/H
C Integrals of (linear function)*(hyperbolic function):
GO TO (71,72,73,74),M
C (even derivative)*(even derivative)
71 AB11= H*CH1-SH1
AB21= -SH1
AB12= AB21
AB22= AB11
GO TO 75
C (even derivative)*(odd derivative)
72 AB11= H*SH-CH1
AB21= -CH1
GO TO 75
C (odd derivative)*(even derivative)
73 AB11= CH1
AB21= CH1
GO TO 75
C (odd derivative)*(odd derivative)
74 AB11= SH
AB21= SH
AB12= AB21
AB22= AB11
C Accumulation of the computed integral:
75 QQ=QQ+(A3*(AB11*B1+AB12*B2)+A4*(AB21*B1+AB22*B2))/SH
IF(NB.GT.1) GO TO 90
C Integrals of (linear function)*(linear function):
80 GO TO (81,82,83,84),M
C (even derivative)*(even derivative)
81 AB11= (H**3)/3.
AB21= -(H**3)/6.
AB12= AB21
AB22= AB11
GO TO 85
C (even derivative)*(odd derivative)
82 AB11= (H**2)/2.
GO TO 85
C (odd derivative)*(even derivative)
83 AB11= (H**2)/2.
AB21= AB11
GO TO 85
C (odd derivative)*(odd derivative)
84 AB11= H
AB21= H
AB12= AB21
AB22= AB11
C Accumulation of the computed integral:
85 QQ=QQ+A3*(AB11*B3+AB12*B4)+A4*(AB21*B3+AB22*B4)
C Transformation from independent variable SIGMAP*X to X
90 IF(SIGMAP.NE.0.) QQ=QQ*SIGMAP**(NA+NB-1)
91 Q= Q+QQ
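Two of the coefficient tables above can be spot-checked numerically. The sketch below is in Python for convenience (it is not part of the original Fortran source), and it rests on an interpretation the source does not state explicitly: that the zero-tension basis behaves like x**3/6 on [0, H], the tension basis like sinh(u), and that the "Doubled integrals" values equal twice the corresponding integral, with SH1 = sinh(H) - H and CH1 = cosh(H) - 1 per the SNHCSH convention.

```python
# Numerical cross-check of two AB11 entries from the routine above.
# Assumptions (not stated in the Fortran source): zero-tension basis x**3/6,
# tension basis sinh(u), integration interval [0, H].
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

H = 0.7

# Zero tension, (odd derivative)*(odd derivative), label 14:
# AB11 = H**5/20 should equal the integral of (x**2/2)**2 over [0, H],
# i.e. the product of first derivatives of x**3/6 with itself.
ab11_cubic = H**5 / 20.0
assert abs(simpson(lambda x: (x * x / 2.0) ** 2, 0.0, H) - ab11_cubic) < 1e-12

# Nonzero tension, (even derivative)*(even derivative), label 51:
# with SH1 = sinh(H) - H and CH1 = cosh(H) - 1 (the SNHCSH convention),
# AB11 = CH*SH1 + H*CH1 should equal 2 * integral of sinh(u)**2 over [0, H].
SH1, CH1 = math.sinh(H) - H, math.cosh(H) - 1.0
SH, CH = SH1 + H, CH1 + 1.0
ab11_tension = CH * SH1 + H * CH1
assert abs(2.0 * simpson(lambda u: math.sinh(u) ** 2, 0.0, H) - ab11_tension) < 1e-10
print("tables consistent")
```

The second check also confirms the algebraic identity CH*SH1 + H*CH1 = sinh(H)*cosh(H) - H, which is exactly 2*integral(sinh(u)**2, 0, H).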