# arXiv vs MathOverflow - popularity of disciplines
Inspired by the comparison of programming languages by GitHub and Stack Overflow activity (e.g. this one for 2015) I decided to look at the popularity of mathematical disciplines by using data from both arXiv and MathOverflow (see also my motivation for getting a dump of arXiv metadata). Here it is:
It's based on all data through January 2015. For MO the popularity of topics depends little on time; for arXiv there is some dependence, but it does not change the plot in any drastic way (because of arXiv's growth, the lifetime average is close to the average over the last 5 years).
There is some (positive) correlation between the number of questions here and the number of preprints on arXiv, per discipline. Of course, there are many factors that come into play:
• the number of people interested in a given field,
• coverage of people (not all mathematicians are here, and not all of them post to arXiv),
• difficulty to ask a question,
• difficulty to write a paper,
• etc.
I am not a (real) mathematician, and not even a frequent MO user; so I may be missing some explanations, which are obvious for everyone in a given field.
Do you know plausible explanations why certain fields lie above, or below, the regression line?
(I have some guesses, but don't want to mix this answer with a very partial response.)
• For various topics below the line in your figure, there are competing sites -- e.g. math.SE, stats.SE, physics.SE, Physics Overflow, cstheory.SE -- while, besides mailing lists, there are likely not so many good alternative places to ask advanced questions on, say, group theory, number theory or algebraic geometry.
– Stefan Kohl Mod
Jul 22, 2015 at 12:46
• What's the significance of the dark gray area? Jul 22, 2015 at 12:56
• @GerryMyerson Confidence interval (at 95%) for the linear regression parameters (vide docs.ggplot2.org/current/stat_smooth.html). Jul 22, 2015 at 14:04
• When you count "logic" on MathOverflow, do you also count [set-theory] and other related tags which often appear without the [lo.logic] tag?
– Asaf Karagila Mod
Jul 22, 2015 at 14:16
• @AsafKaragila lo.logic -> logic. I use only arXiv tags. Jul 22, 2015 at 14:37
• mathoverflow.net/questions/6292 Jul 22, 2015 at 14:41
• I think that you're missing out on a substantial part of MathOverflow, then. I think that including more tags (which I'm sure are relevant to other fields, not just logic) might change the outcome, perhaps for more than one topic.
– Asaf Karagila Mod
Jul 22, 2015 at 15:05
• @AsafKaragila I stick to arXiv tags as otherwise the mapping would be hard, inconsistent, or not 1-1. Of course, it is possible that different fields have different levels of diligence (use of at least 1 arXiv tag is highly advised). Jul 22, 2015 at 15:29
• Out of the ~2100 set theory questions, only ~750 have the logic tag. This means that however many questions you counted under logic, there are significantly more. And I am more than certain this situation is not uncommon in other fields. The arXiv tags are not used as much as they were five years ago. For better and for worse.
– Asaf Karagila Mod
Jul 22, 2015 at 15:33
• @AsafKaragila So, do you have a better idea to make a systematic mapping of questions to arXiv fields? (Some may be to more than one, some may be to none - especially not research-level questions (which are discouraged, but happen nonetheless).) Jul 22, 2015 at 15:55
• It should be possible to poll which questions have only set-theory, and not lo.logic, I guess. And you should be able to have this sort of tag grouping. Like one or two dominant tags for each topic.
– Asaf Karagila Mod
Jul 22, 2015 at 16:01
• @AsafKaragila Sure, I can single out set-theory with no effort. But what about all other tags? (And I prefer to have a uniform undercounting rather than manual and subjective case-by-case approach, generating artifacts.) Jul 23, 2015 at 7:31
• Piotr, I'd write code that accepts a list of tags. Then poll the community for help, or at least look at what the top 1-2 tags used in conjunction with each arXiv tag are. But you should also take into account what quid wrote in the answer below, especially the last point.
– Asaf Karagila Mod
Jul 23, 2015 at 8:05
• A different option along the lines of what @Asaf proposes: go through the list of tags by popularity. Those that seem to fit an arXiv category clearly, count there (avoiding double counts); forget the rest; stop when you are tired of it. For example, real-analysis should go to ca.analysis-and-ode, graph-theory to co.combinatorics; yet matrices is likely too vague to go anywhere.
– user9072
Jul 23, 2015 at 10:33
• It's natural for there to be a difference, and to expect one, because MathOverflow (and Stack Exchange in general) is not really a site for presenting new research, aka terra incognita, but instead for working with existing research, aka terra cognita. Short Q/A by experts and a general internet audience is much different from long-form papers by academic experts. One would expect the users/audiences to be somewhat different, though overlapping, as well.
– vzn
Jul 23, 2015 at 20:57
Some observations:
• Some subjects seem to draw more amateur interest, and also "idle curiosity" from professionals, than others. (Number theory, foundational questions (logic/set-theory), and questions on history come to mind, which are all over-represented; and as Asaf remarked, "logic" could have even more questions on MO if only the tag were used.)
• Some subjects are "fundamental" in the sense that researchers in another field might have use for some technical result and ask about it, while they would hardly consider writing a paper in that field. (General topology, and maybe category theory, come to mind.) Relatedly, some subjects are more common in graduate curricula than others.
• MO always was a bit biased towards more pure math (see the link by Steve Huntsman), and the relative lack of applied analysis is well-known.
• The existence of other sites, as mentioned by Stefan Kohl (information theory, statistics, and mathematical physics might be affected by this, in addition to the preceding point).
• Technical artifacts. For instance, arXiv enforces one of those categories while MO does not, and (thus?) hardly anybody here tags general-mathematics, while on arXiv it is sometimes used. Or the consistency and clarity of the use of a tag (e.g., algebraic geometry might benefit from this).
It is hard to tell what is mere rationalization of the data as I saw it; but I would have said some of these things even without the data.
I guess that MO is more forgiving about discussing curious problems, or recreational mathematics, such as Stanley's question about certain chess positions, for example. Problems that are easy to state without a lot of overhead are easier to post on MO, and more people can chip in on them. This might explain why discrete math (combinatorics) and algebraic geometry are prominent. Every mathematician knows (or should know) what a graph, a permutation and a polynomial are.
Number theory, of course, draws attention for a similar reason; primes are understood by everyone and are a hot amateur mathematical topic. The same goes for logic, which attracts a lot of amateur mathematicians.
|
{}
|
# Title case for theorem argument
I would like to make the optional argument I pass to a theorem environment (which puts the argument in a parenthetical next to the theorem number, typically for naming a theorem) have title case, i.e. non-articles should be capitalized. I am able to do this for section headings using the titlecaps and titlesec packages (see the MWE below), but I'm unsure how this behavior can be obtained without doing it manually each time in the theorem argument.
I'm using a self-made definition environment (through amsthm) and have a lot of named definitions at the beginning of my document, so it would be nice not to have to capitalize all of their titles by hand.
MWE:
\documentclass{article}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{titlesec,titlecaps}
\theoremstyle{definition}
\newtheorem{theorem}{Theorem}[subsection]
\newtheorem{definition}[theorem]{Definition}
\titleformat{\section}[block]{}{\normalfont\Large\bfseries\thesection.\;}{0pt}{\formatsectiontitle}
\newcommand{\formatsectiontitle}[1]{\normalfont\Large\bfseries\titlecap{#1}}
\begin{document}
\section{this is an automatically title capitalized section}
\begin{definition}[I wish this was title capitalized]
Definition.
\end{definition}
\end{document}
• Please provide a minimum working example that generates the issue you wish to fix.
– Mico
Jul 28, 2021 at 17:57
• @Mico Updated with an MWE, thanks! Jul 28, 2021 at 22:54
The following redefines the definition environment and inserts \titlecap{...} around the optional argument supplied to it:
\documentclass{article}
\usepackage{amsthm}
\usepackage{titlecaps}
\theoremstyle{definition}
\newtheorem{theorem}{Theorem}[subsection]
\newtheorem{definition}[theorem]{Definition}
\begin{document}
Original definition:
\begin{definition}[I wish this was capitalized]
Definition.
\end{definition}
Manual title case for definition:
\begin{definition}[\titlecap{I wish this was capitalized}]
Definition.
\end{definition}
\let\olddefinition\definition
\let\endolddefinition\enddefinition
\RenewDocumentEnvironment{definition}{ o }{%
\IfValueTF{#1}
{\olddefinition[\titlecap{#1}]}
{\olddefinition}%
}{%
\endolddefinition
}
Automated title cap definition:
\begin{definition}[I wish this was capitalized]
Definition.
\end{definition}
\end{document}
This only holds for the definition environment. You'll need to \usepackage{xparse} if you don't have an up-to-date LaTeX.
• Thank you for the answer! What would the xparse package do? I'm unfamiliar with this. Jul 30, 2021 at 22:54
• @Lockjaw: xparse provides \RenewDocumentEnvironment (and similar definitions) and an easy way to test whether optional arguments are supplied or not (through \IfValueTF{<arg>}{<true>}{<false>}). Its definitions were included as part of the LaTeX kernel/core in October 2020 (press release).
– Werner
Jul 30, 2021 at 23:10
• Maybe it would be instructive to choose an example title containing "and", "in", "to", etc., so that the title case becomes more apparent. Aug 2, 2021 at 16:19
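Per the last comment, the effect is clearest on a title full of function words; a minimal sketch (the word list passed to \Addlcwords is illustrative, and titlecaps always capitalizes the first word regardless):

```latex
\documentclass{article}
\usepackage{titlecaps}
% words that \titlecap should keep lowercase
\Addlcwords{a an and in of the to}
\begin{document}
% typesets roughly as: A Guide to Norms in the Theory of Operators
\titlecap{a guide to norms in the theory of operators}
\end{document}
```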
|
{}
|
Thursday, 27 October 2011
Two-Way Transfer of Heat as OLR/DLR Violates the 2nd Law
In a sequence of posts on radiative heat transfer between two bodies of different temperature, I have compared two views with deep historical roots:
1. One-way transfer from hot body to cold body (Pictet)
2. Two-way transfer with net transfer from hot to cold (Prevost),
where 2. is used to support CO2 alarmism in the form of DLR/backradiation.
1. satisfies the 2nd law of thermodynamics, but what about 2.?
Well, two-way transfer is commonly viewed as two opposite streams of photons, which are not considered to interfere with each other and thus must be viewed as independent. But this means that one of these independent streams of photons represents transfer from cold to hot, and thus violates the 2nd law.
This is a simple argument showing that the mantra of DLR/backradiation lacks rationale, by violating the 2nd law.
The argument can be dressed up in more precise mathematical form, as shown in Computational Thermodynamics and Mathematical Physics of Blackbody Radiation.
10 comments:
1. I suggest to complete the table of options as follows:
1) Sf > Si --> irreversible process
2) Sf = Si --> reversible process
3) Sf < Si --> when external work is done
Michele
2. Let me start by thanking you, Claes. I really enjoy these kinds of discussions, where you have to think hard about the situation and try to understand what is happening, and you are a big help in pushing this struggle forward for me.
After some thought I have come up with a suggestion for an experiment to see whether there is one-way or two-way radiation.
Imagine two identical spheres with radius $r$. They are both considered to be blackbodies. They will have an outgoing radiation $q_i = A_i \sigma T_i^4$, $i\in\{1,2\}$ and they are regulated so that they will never lower their temperature below the start value. We place them in a large cavity with low temperature compared to the spheres.
We imagine that they are separated by a distance $x$, that is far enough so that each one will look like a disk for the other one. Then we have a ratio $R = \frac{\pi r^2}{4\pi x^2}=\frac{r^2}{4x^2}$ that corresponds to the correction factor we need to use to see how much radiation falls from each one upon the other one.
$1)$ If there is only one-way radiation, there can be no radiation between the spheres and the temperature remains constant.
$2)$ If there is two-way radiation, then the absorbed energy will be re-emitted or the temperature will rise. Let's say that the spheres re-emit, and then wait for a new equilibrium. We get a new outgoing radiation $q'_1 = q_1 + R q'_2$ for body one, and similarly for body two, $q'_2 = q_2 + R q'_1$. Due to symmetry, $q'_1 = q'_2$, and we can solve for $q'_1$.
This gives $q'_1 = \frac{q_1}{1-R}$. Using Stefan-Boltzmann we see that $\frac{T'_1}{T_1}=(1-R)^{-1/4}$. The temperature will rise an amount governed by the geometrical factor, shorten $x$ and the temperature will rise even more. And as an extension one could try to find more suitable geometries, maybe parallel plates or something like that.
Claes, is there something you disagree with?
Sincerely,
Dol
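Dol's fixed-point relation can be checked numerically; a minimal sketch with illustrative values for $q_1$ and $R$, iterating the two coupled emission equations until they converge:

```python
# Iterate q1' = q1 + R*q2' and q2' = q2 + R*q1' for two identical spheres;
# by symmetry the fixed point should be q1/(1 - R). Values are illustrative.
q1 = q2 = 100.0   # base emission of each sphere (arbitrary units)
R = 0.01          # geometric factor r^2 / (4 x^2)

qa, qb = q1, q2
for _ in range(100):
    # simultaneous update; the coupling R < 1 makes this a contraction
    qa, qb = q1 + R * qb, q2 + R * qa

fixed_point = q1 / (1 - R)   # analytic solution from symmetry
```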
3. I don't think an absence of any temperature rise will convince any believer about the non-existence of two-way flow. Do you?
4. Let's try with a different syntax, since LaTeX seems to have trouble with my primes.
Let me start to thank you Claes. I really enjoy these kind of discussions where you have to think hard about the situation and try to understand what is happening and you are a big help in pushing this struggle forward for me.
After some thought I come up with a suggestion of an experiment to see if there is one or two way radiation.
Imagine two identical spheres with radius $r$. They are both considered to be blackbodies. They will have an outgoing radiation $q_i = A_i \sigma T_i^4$, $i\in\{1,2\}$ and they are regulated so that they will never lower their temperature below the start value. We place them in a large cavity with low temperature compared to the spheres.
We imagine that they are separated by a distance $x$, that is far enough so that each one will look like a disk for the other one. Then we have a ratio $R = \frac{\pi r^2}{4\pi x^2}=\frac{r^2}{4x^2}$ that corresponds to the correction factor we need to use to see how much radiation falls from each one upon the other one.
$1)$ If there is only one way radiation, there can be no radiation between the spheres and temperature remains.
$2)$ If there is two way radiation then the absorbed energy will be re-emitted or the temperature will rise. Lets say that the spheres re-emit and then wait for a new equilibrium. We get a new outgoing radiation $q_{n,1} = q_1 + R q_{n,2}$ for body one, and similar for body two $q_{n,2} = q_2 + R q_{n,1}$. Due to symmetry, $q_{n,1} = q_{n,2}$ and we can solve for $q_{n,1}$.
This gives $q_{n,1} = \frac{q_1}{1-R}$. Using Stefan-Boltzmann we see that $\frac{T_{n,1}}{T_1}=(1-R)^{-1/4}$. The temperature will rise an amount governed by the geometrical factor, shorten $x$ and the temperature will rise even more. And as an extension one could try to find more suitable geometries, maybe parallel plates or something like that.
Claes, is there something you disagree with?
Sincerely,
Dol
5. Claes wrote:
I don't think an absence of any temperature rise will convince any believer about the non-existence of two-way flow. Do you?
I don't know if you can see the correct statements since Latex seem to have trouble with my primes. So I tried a re-post with different syntax.
The important thing is that if we see a rise in temperature, it falsifies one way flow.
Sincerely,
Dol
6. There could not be any absence of temperature rise. Evidently, the rising temperature would not convince you that your theories are disastrously mistaken.
7. The radiation does travel both ways; back radiation is real. HOWEVER, it is impossible for back radiation to heat the body it originally came from.
That is why the GHE from back radiated heat violates the 2nd law.
8. DLR of course exists, but it can't heat the earth directly because it is re-radiated. The greenhouse gases will, on the other hand, be warmed by the earth because they are not fully transparent and can't re-radiate all the heat they absorb from the earth. And the more greenhouse gases, the warmer they get. And this will make the earth's temperature rise according to the SB law, if the input from the sun is constant.
9. How does a body know where the radiation originally came from?
10. Maybe our thinking about the thermal power emitted by the earth's surface has to be made clearer.
The atmosphere behaves as an impedance for the alternating current because of the energy storage elements that it contains (the well-known GHG) which periodically reverse the direction of the energy flow.
We have to take into account that: The portion of power flow that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as real power (also referred to as active power). That portion of power flow due to stored energy, that returns to the source in each cycle, is known as reactive power. (http://en.wikipedia.org/wiki/Electric_power)
See for example " the attached figure".
We can say that the DLR exists only if we refer to the instantaneous values of the reactive power, which have no effect upon the surface-atmosphere system because of its high thermal inertia. So the system above simply feels the mean value of the active power (the net transfer of energy in one direction), whereas the mean value of the reactive power is nil.
Michele
|
{}
|
Models of DNA Denaturation Dynamics: Universal Properties
#### M. Baiesi, E. Carlon
2013, v.19, Issue 3, 569-576
ABSTRACT
We briefly review some of the models used to describe DNA denaturation dynamics, focusing on the value of the dynamical exponent $z$, which governs the scaling of the characteristic time $\tau\sim L^z$ as a function of the sequence length $L$. The models contain different degrees of simplification; in particular, they sometimes do not include a description of helical entanglement. We discuss how this aspect influences the value of $z$, which ranges from $z=0$ to $z \approx 3.3$. Connections with experiments are also mentioned.
Keywords: DNA denaturation, polymer dynamics, scaling laws, self-avoiding walks
|
{}
|
The open source CFD toolbox
# Properties
• Neumann condition
• implicit
Face values are evaluated according to:
$\phi_f = \phi_c + \Delta\, \nabla\phi_{ref}$
where
• $\phi_f$ = face value
• $\phi_c$ = cell value
• $\nabla\phi_{ref}$ = reference gradient
• $\Delta$ = face-to-cell distance
# Usage
<patchName>
{
|
{}
|
# Tag Info
21
I can only talk about quantitative trading. As a rule of thumb, the lower frequency you work in, the more econometrics is important, whereas for a higher frequency, the more econometrics becomes useless. (I would still recommend a top econometrician for HFT since they have what it takes to succeed, it's just the models aren't out-of-the-box applicable.) But ...
6
The best paper is probably Relative Volume as a Doubly Stochastic Binomial Point Process - James McCulloch. In this paper the volume is modelled via a point process, and theoretical laws are derived (with confidence intervals, etc.). And if you can wait a few days (it will be available very soon), we put elements about this in Market Microstructure in Practice, ...
6
@user2763361 has a very thorough list of useful econometric topics for quantitative finance. I would add missing, mixed frequency, and irregular data as major issues that I'm either constantly dealing with or begrudgingly ignoring. Seasonal adjustment is important too for some data (like electricity futures), though the subject is also related to his ...
5
An AR(1), once the time series and lags are aligned and everything is set up, is in fact a standard regression problem. Let's look, for simplicity's sake, at a "standard" regression problem. I will try to draw some conclusions from there. Let's say we want to run a linear regression where we want to approximate $y$ with $$h(x) = \sum_{i=0}^n \theta_i x_i = ...$$
4
The return equation is just an econometric equation that models stock returns (or other asset returns) as a function of: (i) an intercept (i.e. the average return), (ii) some independent variables/features, (iii) noise that has zero mean and time-varying variance. There are sometimes other things in the return equation too that form more advanced models. The ...
3
Volatility changes over time. Even if daily returns are normal, assuming the conditional volatility each day is known, the unconditional distribution of daily returns will have excess kurtosis. For example, if daily returns have a standard deviation of 1%, 90% of the time, and a standard deviation of 3%, 10% of the time, the presence of the high-volatility ...
3
I recently dealt with some analysis of volume time series, daily volume in € for European stocks. I found that an ARIMA model works well. But some EWMA could also provide a good forecast if it's well parameterized. You can also face some seasonality effects due to macroeconomic events, so you may need to clean your data and treat these days in a ...
3
There are tons of quant-related blogs out there, some of which contain relatively sophisticated content, others less so. Have a look at the following, which aggregates blogs: MoneyScience. Otherwise I could point you to bank/sell-side research. Have a look at the freely available Reuters Messenger (RM); they maintain channels where you can be permissioned ...
3
2) An alternative to Fama-MacBeth is the Fama-French approach. For an explanation of the difference see, for example, here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1271935 The Fama-French approach was used by Carhart (introduced momentum), Pastor-Stambaugh (introduced liquidity), Fama-French themselves (used it to build the 5-factor model), and many others (elsevier or ...
2
The key assumption is that there is no time-series correlation between the error terms. Fama-MacBeth can deal with cross-sectional correlations. See Samuel Thompson's "Simple formulas for standard errors that cluster by both firm and time" in the Journal of Financial Economics (2011) for a treatment of different regression methods for testing equity ...
2
From my point of view, dynamic models like the one developed in Relative Volume as a Doubly Stochastic Binomial Point Process - James McCulloch to provide a dynamic forecast of the volume do not improve the forecast significantly compared to a static volume curve forecast using historical data (last month's intraday data and an EWMA algorithm). I've ...
2
There are a lot of ways to understand why stationarity allows one to apply the usual time-series analysis. Here is one more. Very often, the theoretical justification of what you do in time series needs to be able to identify the mean formula and the expectation: $$\frac{1}{N}\sum_{n=1}^N X_n \underset{N\rightarrow +\infty}{\longrightarrow} \mathbb{E} X,$$ where the ...
2
Saying that you can't analyze something as-is does not make it garbage. You can't eat flour as-is, but that doesn't mean you throw it out. In order to use "standard" analysis tools, you must first transform the series into something compatible. Some examples of such a transformation include k-th order differences or a log transformation. These ...
2
Try the following: perform the logarithmic transformation of the volume data; check if the transformed data fits the normal distribution nicely; if you are working with intraday volume, then adjust for the time-of-day seasonality; if using daily data, in some cases some special seasonalities, like expiry day, etc., might be applied, but it may ...
1
What you could do is apply the methods of portfolio risk analysis. If you buy $n$ stocks with percentages $w_i, i=1,\ldots,n$ then your portfolio return is $r = \sum_{i=1}^n w_i r_i$. Dealing with investment strategies, I would not include an expected profit in the VaR calculation, and put $\mu=0$ for this reason. To calculate the volatility of your ...
1
If the returns are $N(\mu,\Sigma)$ distributed, then $WML\sim N(0,\sigma)$, because the equally-weighted $\mu$'s cancel, while $\sigma=\sqrt{w \Sigma w'}$ with $w=\{1/n,\ldots,1/n\}$. So your new VaR becomes: $$\mbox{VaR}\left(\alpha\right)_{WML}=\Phi^{-1}\left(\alpha\right)\cdot\sigma$$ Your sampling formula from above remains valid, though, just with ...
1
Well, given that either LM or BHHH is supposed to stop when the Kuhn-Tucker condition is satisfied, I infer it has to be stepwise. I would say otherwise if, say, they were potentially using something like SALO (simulated annealing with local optimization), where one algorithm could profitably run in full as a sub-step of the other.
1
A naive reason has been explained by Nassim Nicholas Taleb in his book titled Black Swan. On a deeper look, one should be aware that no historical data analysis can truly estimate the real tail risk of financial markets. By the same token, standard deviation, max drawdown, expected shortfall, VaR, conditional VaR... no single one or combination of such ...
1
Extreme events in financial markets, like the crash of 1987, occur more frequently in the real world than a normal distribution would predict. The economic facts that drive those extreme events vary. Such extreme declines have been observed over many different time periods (Tulip mania, for instance), which suggests that it is more likely inherent to ...
1
I would say that you can use Johansen's method to test for the rank of the co-integration matrix. There are tests for that. If there is no co-integration vector present and both series are I(0), then there is no co-integration. The series might still have some short-run dynamics. If the series are I(1) and no co-integration vector is present, then modeling these series by ...
1
In most of the literature on the information content of various volatility estimators, the relevant question is whether a particular estimator can predict (is correlated with) future realized volatility. Hence, the testing regression would be $$RV(t,T) = \alpha + \beta\, VOL(t) + \epsilon(t)$$ where $RV(t,T)$ is an estimate of the realized volatility from $t$ to ...
1
The classical assumptions of linear regression are that the errors are uncorrelated and the variance of errors is constant (homoskedastic). So regress the returns against the indicators and test for autocorrelation and heteroskedasticity in the errors. If you don't observe any, then there's no issue with conventional hypothesis testing. If you do, use White ...
1
As an overview, Expected Returns, by Antti Ilmanen, was recommended to me. He has a preference for data over theory, so it will appeal to quants. The book is longish, and got a bit heavy at times, but he covers all the investment products and all styles of investing. The biggest problem might be that it is now 3 years old, and was heavily influenced by ...
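The log-transform suggestion in one of the answers above can be sketched quickly; a minimal example with synthetic lognormal volume data (the series is simulated for illustration, not real market data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily volumes: lognormal, so log-volume should look roughly normal.
volume = rng.lognormal(mean=12.0, sigma=0.5, size=500)
log_volume = np.log(volume)

def skewness(x):
    """Sample skewness; near 0 for symmetric (e.g. normal) data."""
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

# Raw volume is right-skewed; the log transform removes most of the skew,
# which is a quick sanity check before fitting normal-based models.
raw_skew, log_skew = skewness(volume), skewness(log_volume)
```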
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
Three masses are connected to two springs as shown. Write Newton's Second Law for each mass using the coordinates shown. Each coordinate is the distance from equilibrium. Assume that $x_1(t)=A_1e^{i\omega t}$, $x_2(t)=A_2e^{i\omega t}$ and $x_3(t)=A_3e^{i\omega t}$. Substitute those expressions in and solve the eigenvalue problem. Find the eigenvalues and the unnormalized eigenvectors. Describe each mode physically.
The picture shows a horizontal chain of masses and springs: m (connected by a spring k) M (connected by a spring k) m, where x1 points from the left-hand m to M, x2 points from M to m, and x3 points from m to the right.
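For a numerical cross-check of the analytic work, a minimal sketch assuming standard absolute displacement coordinates (not the relative coordinates described in the figure) and illustrative values m = 1, M = 2, k = 1:

```python
import numpy as np

# m -- k -- M -- k -- m chain; equations of motion in absolute coordinates:
#   m x1'' = -k(x1 - x2)
#   M x2'' = -k(x2 - x1) - k(x2 - x3)
#   m x3'' = -k(x3 - x2)
# With x_j = A_j e^{i w t} this becomes K A = w^2 M A, a generalized
# eigenvalue problem for w^2.
m, M, k = 1.0, 2.0, 1.0          # illustrative values, not from the problem
Mmat = np.diag([m, M, m])
K = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

w2 = np.sort(np.linalg.eigvals(np.linalg.inv(Mmat) @ K).real)
# Expected modes: w^2 = 0 (rigid translation), k/m (outer masses move
# oppositely, center mass fixed), and k/m + 2k/M (outer masses move together,
# center mass opposite).
```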
|
{}
|
# Use The Definition To Find An Expression For The Area Under The Curve Y = X3 From 0 To 1 As A Limit.
(a) Use Definition 2 to find an expression for the area under the curve y= (x^3) from 0 to 1 as a limit. Then use
the following formula
(1^3) + (2^3) + (3^3) +...... + (n^3) = [(n(n+1))/2]^2
to evaluate the limit.
sciencesolve | Certified Educator
You should create a partition of the interval [0,1] into n subintervals with the following lengths:
`Delta x_i = (1-0)/n = 1/n`
`x_i = i*(1/n)`
You need to use the limit definition to evaluate the definite integral such that:
`int_0^1 x^3 dx = lim_(n->oo) sum_(i=1)^n...
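The limit the answer sets up can be checked numerically; a minimal sketch comparing the right-endpoint Riemann sum with the closed form obtained from the given cube-sum formula:

```python
# Right-endpoint Riemann sum for y = x^3 on [0, 1] with n subintervals:
#   sum_{i=1}^{n} (i/n)^3 * (1/n) = [n(n+1)/2]^2 / n^4  ->  1/4 as n -> oo
def riemann_sum(n):
    return sum((i / n) ** 3 * (1 / n) for i in range(1, n + 1))

def closed_form(n):
    # uses 1^3 + 2^3 + ... + n^3 = [n(n+1)/2]^2
    return (n * (n + 1) // 2) ** 2 / n ** 4

approx = riemann_sum(10_000)   # should be close to the exact area 1/4
```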
|
{}
|
Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01r781wj434
dc.contributor.author: Tunstall, Lori Elizabeth
dc.contributor.other: Civil and Environmental Engineering Department
dc.date.accessioned: 2016-06-08T18:38:23Z
dc.date.available: 2016-06-08T18:38:23Z
dc.date.issued: 2016
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01r781wj434
dc.description.abstract: Air voids are deliberately introduced into concrete to provide resistance against frost damage. However, our ability to control air distribution in both traditional and nontraditional concrete is hindered by the limited amount of research available on air-entraining agent (AEA) interaction with both the solid and solution components of these systems. This thesis seeks to contribute to the information gap in several ways. Using tensiometry, we are able to quantify the adsorption capacity of cement, fly ash, and fly ash carbon for four commercial AEAs. These results indicate that fly ash interference with air entrainment is due to adsorption onto the glassy particles tucked inside carbon, rather than adsorption onto the carbon itself. Again using tensiometry, we show that two of the AEAs show a stronger tendency to micellize and to interact with calcium ions than the others, which seems to be linked to the freezing behavior in mortars, since mortars made with these AEAs require smaller dosages to achieve similar levels of protection. We evaluate the frost resistance of cement and cement/fly ash mortars by measuring the strain in the body as it is cooled and reheated. All of the mortars show some expansion at temperatures ≥ −42 °C. Many of the cement mortars are able to maintain net compression during this expansion, but none of the fly ash mortars maintain net compression once expansion begins. Frost resistance improves with an increase in AEA dosage, but no correlation is seen between frost resistance and the air void system. Thus, another factor must contribute to frost resistance, which we propose is the microstructure of the shell around the air void. The strain behavior is attributed to ice growth surrounding the void, which can plug the pores in the shell and reduce or eliminate the negative pore pressure induced by the ice inside the air void; the expansion would then result from the unopposed crystallization pressure, but this must be verified by future work. If the shell has numerous tiny pores, it is more difficult to eliminate suction, since more ice is needed to plug all the pores.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu/
dc.subject: air-entraining agents
dc.subject: air void shells
dc.subject: cement
dc.subject: fly ash
dc.subject: frost protection
dc.subject: surfactants
dc.subject.classification: Materials Science
dc.subject.classification: Civil engineering
dc.title: A study of surfactant interaction in cement-based systems and the role of the surfactant in frost protection
|
{}
|
# "What PDE is obeyed by the following function..."
1. Mar 9, 2016
### sa1988
1. The problem statement, all variables and given/known data
Part (a) below:
2. Relevant equations
3. The attempt at a solution
There's more to this question but I'm only stuck on this first part so far.
I have no idea what specific PDE the equation $\theta(x,t) = T(x,t) - \frac{T_0 x}{L}$ obeys.
In this module (mathematics) we've covered the wave equation, diffusion equation and Schrodinger equation as examples of PDEs in physics. We've done separation of variables and are now onto using Fourier series to solve PDEs.
In answer to the question, am I supposed to say a straight-forward, "This PDE obeys the wave/diffusion/Schrodinger equation", or does it simply obey some general PDE that I need to figure out for myself? Either way, I can't really see what I'm looking for. Or at least I can't see anything of value which can then take me on to solving the rest of the problem. I presume it should somehow take me back around to the diffusion equation..?
Thanks
2. Mar 9, 2016
### LCKurtz
What happens if you calculate $\Theta_{xx},~\Theta_{tt}$ and compare with your thermal diffusion equation?
3. Mar 9, 2016
### vela
Staff Emeritus
Or solve for T in terms of $\Theta$.
4. Mar 10, 2016
### sa1988
I would get:
$\frac{\partial^2 Θ}{\partial x^2} = \frac{\partial^2 T}{\partial x^2}$
and
$\frac{\partial^2 Θ}{\partial t^2} = \frac{\partial^2 T}{\partial t^2}$
but I can't see how it relates to the diffusion equation?
Unless...
If I do this and then take relevant derivatives, I can turn it into:
$T(x,t) = Θ(x,t) + \frac{T_{0}x}{L}$
Then:
$\frac{\partial T}{\partial t} = \frac{\partial Θ}{\partial t}$
and
$D\frac{\partial^2 T}{\partial x^2} = D\frac{\partial^2 Θ}{\partial x^2}$
which could then be combined to form a diffusion equation, on the assumption that $\frac{\partial Θ}{\partial t} = D\frac{\partial^2 Θ}{\partial x^2}$
But then couldn't I take very similar steps to form a wave equation too? Especially seeing as LCKurtz suggested I take second partial derivatives of both, which is even closer to wave equation territory!
Or am I still missing some glaring important point here?
Thanks.
5. Mar 10, 2016
### LCKurtz
I misread your original equation as having $T_{tt}$ in it, so I meant to suggest calculating $\Theta_t$, not $\Theta_{tt}$. So, as you noted, you do get $\frac{\partial Θ}{\partial t} = D\frac{\partial^2 Θ}{\partial x^2}$. But the important point you are missing is that you haven't answered what boundary conditions $\Theta(x,t)$ satisfies. Your original function satisfies $T(0,t)=0,~T(L,t)=T_0$.
So, I repeat, what boundary conditions does $\Theta$ satisfy? Is there any reason to prefer trying separation of variables on one or the other of the boundary value problem for $T$ or $\Theta$?
6. Mar 10, 2016
### sa1988
Hmm, I think I'm beginning to get it. Because of the boundary conditions on $T(x,t)$, it would mean the boundary conditions:
$Θ(0,t)=0$
and
$Θ(L,t)=0$.
are apparent.
Using separation of variables is presumably easier in solving for $Θ(x,t)$ because of the neater boundary conditions.
But I don't understand how I'm supposed to prove that it obeys the diffusion equation in the first place - it seems to be something of a blind leap to take $Θ_{xx}$ and $Θ_t$, separately, then just equate them with a scaling factor of D.
7. Mar 10, 2016
### LCKurtz
Just think of $T(x,t) = Θ(x,t) + \frac{T_{0}x}{L}$ as a change of variable in your initial problem. Substitute it in and you get a new BVP with homogeneous boundary conditions, which you can solve. With the $T_0$ on the one boundary condition, after you try separation of variables you will wind up with something like $X(L)T_1(t)=T_0$ which won't even get you to $X(L)=0$ and the eigenvalues.
The point is, after you solve the $\Theta$ problem, you just substitute back for the solution to the $T$ problem. This exercise teaches you how to handle non-homogeneous BC's.
[Edited for better explanation]
Last edited: Mar 10, 2016
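The change of variable suggested above is easy to sanity-check symbolically. Here is a minimal sympy sketch (variable names are my own), confirming that because $\frac{T_0x}{L}$ is linear in $x$ and independent of $t$, $\Theta$ satisfies the same diffusion equation as $T$ and picks up homogeneous boundary values:

```python
import sympy as sp

x, t, T0, L, D = sp.symbols('x t T_0 L D', positive=True)
T = sp.Function('T')  # the original temperature field

# Change of variable: Theta(x,t) = T(x,t) - T0*x/L
Theta = T(x, t) - T0 * x / L

# Residual between the diffusion operators applied to Theta and to T:
# (Theta_t - D*Theta_xx) - (T_t - D*T_xx) vanishes identically,
# since the extra term T0*x/L is linear in x and constant in t.
residual = (sp.diff(Theta, t) - D * sp.diff(Theta, x, 2)) \
         - (sp.diff(T(x, t), t) - D * sp.diff(T(x, t), x, 2))
print(sp.simplify(residual))  # 0

# Boundary values: if T(0,t) = 0 and T(L,t) = T0, then
# Theta(0,t) = T(0,t) and Theta(L,t) = T(L,t) - T0 are both zero.
print(sp.simplify(Theta.subs(x, 0) - T(0, t)))         # 0
print(sp.simplify(Theta.subs(x, L) - (T(L, t) - T0)))  # 0
```

So the substitution trades the non-homogeneous boundary value $T_0$ for homogeneous ones without changing the PDE.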
8. Mar 10, 2016
### sa1988
Dammit, posted when I wasn't ready to post. Wait.
9. Mar 10, 2016
### sa1988
Christ I'm still totally stuck here.
Been looking at it for hours and hours.
I'm using the general solution for the diffusion equation to get:
$Θ(x,t) = T_0 +\sum_{n=1}^{\infty} A_n$ $e^{D(\frac{nπ}{L})^2 t} cos(\frac{nπx}{L})$
But the boundary conditions,
$Θ(0,t) = 0$
and
$Θ(L,t) = 0$
are just leading me round in circles. Totally lost here...
I don't know if it helps or not but in this assignment the previous question gave us a function T(x) with boundary conditions almost exactly the same as those given at the start of this question here, which was solved as a Fourier representation.
So the idea is that we're actually supposed to already know what T(x,t) looks like at T=0, and this question is then aimed at describing how it changes with time if it's to obey the diffusion equation.
I'm honestly completely lost now though.
10. Mar 10, 2016
### vela
Staff Emeritus
There's no assuming required here. You know that T satisfies the diffusion equation. Since the derivatives are the same, $\Theta$ satisfies it also.
11. Mar 10, 2016
### LCKurtz
@sa1988: Can you show us how you would solve$$\Theta_t(x,t)=D\Theta_{xx}(x,t) = 0$$ $$\Theta(0,t) = 0,~\Theta(L,t)=0$$by yourself without using the general formula? In the process I think you will find that your "general formula" has errors in it and I think seeing that work will help me clear up some of your confusion. Don't worry about the initial condition for now.
12. Mar 14, 2016
### sa1988
Alright, as a standalone problem I would do the following
I'd use separation of variables, with:
$Θ(x,t) = X(x)T(t)$
So, in the diffusion equation
$\frac{dT(t)}{dt}X(x) = D\frac{d^2X(x)}{dx^2}T(t)$
Which rearranges to give:
$\frac{dT(t)}{dt}\frac{1}{T(t)} - D\frac{d^2X(x)}{dx^2}\frac{1}{X(x)} = 0$
Allowing for the two variables to be treated separately and solved for some constant $k$ such that:
$\frac{dT(t)}{dt} = T(t)k$
$\frac{d^2X(x)}{dx^2} = - X(x) \frac{k}{D}$
This gives:
$T(t) = Ae^{kt}+Be^{-kt}$
$X(x) = Ce^{i\frac{k}{D}x}+De^{-i\frac{k}{D}x}$
So finally:
$Θ(x,t) = (Ae^{kt}+Be^{-kt})(Ce^{i\frac{k}{D}x}+De^{-i\frac{k}{D}x})$
Then, given conditions:
$\Theta(0,t) = 0,~\Theta(L,t)=0$
$Θ(0,t) = (Ae^{kt}+Be^{-kt})(C+D) = 0$
hence
$C=-D$
then
$Θ(L,t) = (Ae^{kt}+Be^{-kt})C(e^{i\frac{k}{D}L}-e^{-i\frac{k}{D}L}) = 0$
only true for
$k = \frac{nπD}{L}$
This gives:
$Θ(x,t) = (Ae^{\frac{nπD}{L}t}+Be^{-\frac{nπD}{L}t})C(e^{i\frac{nπ}{L}x}-e^{-i\frac{nπ}{L}x})$
Expanding the right hand exponential terms to give:
$Θ(x,t) = (Ae^{\frac{nπD}{L}t}+Be^{-\frac{nπD}{L}t})2iC sin(\frac{nπ}{L}x)$
Returning to given conditions:
$\Theta_t(x,t)=D\Theta_{xx}(x,t) = 0$
Firstly:
$(\frac{nπD}{L}Ae^{\frac{nπD}{L}t}-\frac{nπD}{L}Be^{-\frac{nπD}{L}t})2iC sin(\frac{nπ}{L}x) = 0$
True for $A=B$
Then
$A(e^{\frac{nπD}{L}t}+e^{-\frac{nπD}{L}t})(-D2iC)(\frac{nπ}{L})^2 sin(\frac{nπ}{L}x) = 0$
Only true for $AC=0$
And now I'm stuck!
This seems to veer quite far away from the case involving Fourier series, though.
To get to the 'general formula' result involving a Fourier representation I went on the basis that $Θ(x,t) = T_0 + \sum_{n=1}^{\infty} A_n(t) \sin(\frac{nπx}{L})$, for some coefficient $A_n(t)$ which allows the solution to vary with time, and plugged $Θ(x,t)$ into the diffusion equation and solved for $A_n(t)$ to get
$A_n(t) = e^{-D(\frac{nπ}{L})^2t}$
This general formula result is written in my lecture notes too, which is why I used it that way.
13. Mar 14, 2016
### LCKurtz
After you have done a few of these you will realize that it is best to keep the D with the t variable as in$$\frac{T'(t)}{DT(t)} = \frac{X''(x)}{X(x)}$$
The above suggestion would have kept the D out of the complex exponential. Also, you are now using D for two different things.
You complicate the issue by not applying the boundary conditions to the separated variable X(x) before combining with the T(t) factor. This also hides the step where, when your original problem has non-homogeneous boundary conditions, the process fails.
We have discussed this at length before. Look again at this:
It is the wave equation instead of the diffusion equation but the same idea.
I'm afraid this thread may have gotten hopelessly tangled up, possibly beyond repair.
14. Mar 14, 2016
### sa1988
I noticed the similarities to the previous discussion as I started working through that solution, though I didn't think it was so directly linked. Thanks for pointing it out!
Well I have four days left to figure this out. The points you've made here, particularly in relating it to the previous version of the problem with the wave equation, are very useful as I never saw it that way before. The thing throwing me here is that the problem I'm doing now is primarily focussed on solving PDEs with Fourier series, and that the lecture notes in class, including worked examples, don't relate it so strongly to the previous problem in the way you've pointed out here.
If you're curious, the notes from which I'm basing my 'understanding' (or lack thereof) are here - https://drive.google.com/file/d/0B8VhHMaC9ZQuNFdjR2F6UFFPbFE/view?usp=sharing - PDF page 31. Section 5.2.2 Diffusion of particles in a 1-dimensional box. And also the section above it, which involves standing waves on a string.
Thanks for the help so far, it's much appreciated. Relating it to that previous thread was something I never saw before, so it should help a lot.
15. Mar 14, 2016
### LCKurtz
The last thing I want to add for you is an explanation of why the non-homogeneous boundary conditions are a problem. To keep it simple, think about the simpler equation$$u_t(x,t) = u_{xx}(x,t)$$with boundary conditions$$u(0,t) = u(L,t) = 0$$versus the boundary conditions$$u(0,t) = 0,~u(L,t)= T_0$$Substituting $u(x,t) = X(x)T(t)$ into the DE gives $X(x)T'(t) = X''(x)T(t)$, which, as you know, is typically written$$\frac{X''(x)}{X(x)} = \frac {T'(t)}{T(t)}$$The first pair of boundary conditions $u(0,t) = u(L,t) = 0$ give, with that substitution$$X(0)T(t) = 0,~X(L)T(t) = 0$$This is what gives the nice boundary conditions $X(0)=0,~X(L) = 0$. So when you go through the steps solving for $X(x)$ you get $X_n(x) = \sin(\frac {n\pi x}{L})$.
The problem with the second pair of boundary conditions is that when you substitute $u(x,t) = X(x)T(t)$ into $u(0,t) = 0,~u(L,t)= T_0$ you get $X(0)T(t) = 0,~X(L)T(t) = T_0$. The first one gives you $X(0)= 0$, but the second doesn't help at all. You certainly can't conclude $X(L) = 0$.
That is why you need to do some substitution first in the nonhomogeneous case to make separation of variables work.
16. Mar 15, 2016
### sa1988
Thanks for that. It makes sense; I'll go over the problem again with that process now.
I think now also is a good time for me to reveal this problem in its entirety. It may help explain why I've jumped straight to the 'general solution' answer which had the Fourier series bunged in there.
For reference, the function f(x) in part b looks like:
$\sum_{n=1}^{\infty} \frac{2}{L}\bigg(\frac{L}{nπ}(1+\frac{a-L}{L-a})cos(\frac{nπa}{L})-(\frac{L}{nπ})^2(\frac{1}{a}+\frac{1}{L-a})sin(\frac{nπa}{L})\bigg) sin(\frac{nπk}{L})$
which only needs a bit of tweaking to make it represent the situation outlined for T(x) below.
In part c, I suppose I'm confused because it's telling me to express Θ(x,t) as a Fourier series with time-dependent coefficients. There's no mention of me needing to go through the process of solving the entire diffusion equation on its own. Maybe I took it too literally and should have thought 'out of the box' a bit more..? I suppose in hindsight it's fairly obvious since the diffusion equation is spelled out in the wording of the question, heh.
17. Mar 15, 2016
### LCKurtz
OK, let's recap and start over. You started with a pipe on $[0,L]$ whose temperature at $t=0$ is given as$$T(x,0)=\left\{ \begin{array}{ll} 0, & 0\le x \le a \\ T_0\left(\frac{x-a}{L-a}\right),& a\le x\le L \end{array}\right.$$I will call that function $F(x)$.
Now, at $t=0$ the ice is removed and the ends of the pipe are held at $0$ and $T_0$, respectively. So you are given the diffusion boundary value problem (let's use k instead of D because D can be confused with differentiation):$$T_t(x,t) = kT_{xx}(x,t)$$ $$T(0,t)=0,~T(L,t)=T_0$$ $$T(x,0) = F(x)$$Notice that one of the end point conditions is non-homogeneous.
Now you are asked to make the change of variable $\Theta(x,t) = T(x,t) - \frac{T_0x} L$. I would like to see the steps showing how you got the answers for part (b).
18. Mar 16, 2016
### sa1988
@LCKurtz
Success, heh! Turned out the deadline was today at 12:00. Funny how a sudden panic can push things to happen.
I've identified the two parts that confused me.
First of all, I was under the impression that as $t → ∞$ the function $T(x,t)$ should tend to a flat average value since any real life thermal situation should generally involve a final equilibrium temperature which is uniform across the object. It then occurred to me that the given problem has fixed temperatures at each end, $T(0,t) = 0$ and $T(L,t) = T_0$, so of course it won't result in some flat line. I was entering all sorts into Mathematica 'Manipulate' functions and watching it result in some line $\frac{x}{L}$ as I let $t → ∞$ . It suddenly occurred to me that this should be the case, heh, so I moved on to working out what the correct maths should be.
First I discovered this: http://tutorial.math.lamar.edu/Classes/DE/HeatEqnNonZero.aspx#ZEqnNum497179 , a page explaining the thermal diffusion equation for non-homogeneous conditions, which I stared at and looked over for a very long time before realising the very, very obvious thing I was missing : $Θ(x,0)$ is a Fourier series with coefficients defined by $f(x)-\frac{T_0x}{L}$
So, for closure (and if you're interested), here's what I did.
Since $Θ(x,t)$ obeyed the diffusion equation, I used the solution
$Θ(x,t)=A_0 + \sum\limits_{n=1}^∞ A(t)sin(\frac{nπx}{L})$
which I plugged into the diffusion equation and solved for $A(t)$ to give
$Θ(x,t)=A_0 + \sum\limits_{n=1}^∞ A_ne^{-D(\frac{nπ}{L})^2t}sin(\frac{nπx}{L})$
Boundary conditions meant that $A_0 = 0$
And, the real 'Eureka' moment...
When $t=0$ it was already shown that
$Θ(x,0) = f(x) - \frac{T_0x}{L}$
where $f(x)$ is the temperature distribution at $t=0$ and looked very similar to a previous problem in the assignment.
Now $Θ(x,t)$ had just been found as $Θ(x,t) = \sum\limits_{n=1}^∞ A_ne^{-D(\frac{nπ}{L})^2t}sin(\frac{nπx}{L})$
which means
(at $t=0$) $\sum\limits_{n=1}^∞ A_n\sin(\frac{nπx}{L}) = f(x) - \frac{T_0x}{L}$
and this is nothing other than a basic Fourier series situation with coefficient:
$A_n = \frac{2}{L}\int_0^L \! \Big(f(x) - \frac{T_0x}{L}\Big)\sin(\frac{nπx}{L}) \, \mathrm{d}x$
$f(x)$ is the temperature distribution at $t=0$ which is defined as:
$0$ for $0<x<a$
$T_0\frac{x-a}{L-a}$ for $a<x<L$
which mirrored a problem I had previously tackled with a relatively ugly solution so I shan't show it here.
Then all that needed to be done was the integral for $A_n$ which again looked ugly but consisted primarily of $\sin^2(x)$ and $x \sin(x)$ parts which are not too hard to manage, if a little finicky with all those constants and variables bunged in.
The final answer then came to:
$T(x,t) = Θ(x,t) +\frac{T_0x}{L}$
$T(x,t) = \frac{T_0x}{L} + \sum\limits_{n=1}^∞ \Bigg[\Bigg(\frac{2}{L}\bigg(\frac{-L}{nπ}cos(nπ) - (\frac{L}{nπ})^2\frac{1}{L-a}sin(\frac{nπa}{L})\bigg) + \frac{2}{nπ}cos(nπ)\Bigg)sin(\frac{nπx}{L})e^{-D(\frac{nπ}{L})^2t}\Bigg]$
There may be typos along the way here but overall the main solution was one that shows the distribution described above for $t=0$ which tends to a straight line of the form $\frac{T_0x}{L}$ as $t → ∞$
For the ultimate check, the following Mathematica code will give a manipulable plot to demonstrate it:
An = 2/L (-L/(n Pi) Cos[n Pi] - (L/(n Pi))^2 1/(L - a) Sin[n Pi a/L]);
Bn = An + 2/(n Pi) Cos[n Pi];
a = 1; L = 5;
Manipulate[
  Plot[x/L + Sum[Bn Sin[n Pi x/L] E^(-(n Pi/L)^2 t), {n, 1, 20}],
    {x, 0, 5}], {t, 0, 5}]
I just hope I didn't go disastrously wrong with my logic here...
Last edited: Mar 16, 2016
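For an independent check of the same series outside Mathematica, here is a small numpy sketch (helper names are my own; $T_0 = 1$ and $D = 1$, matching the code above). At $t = 0$ it should reproduce the initial profile $F(x)$, and for large $t$ it should relax to the straight line $x/L$:

```python
import numpy as np

a, Lp = 1.0, 5.0        # pipe parameters, matching a = 1; L = 5 above
n = np.arange(1, 2001)  # number of Fourier terms kept

# Coefficients as in the post: Bn = An + 2/(n pi) cos(n pi)
An = (2 / Lp) * (-Lp / (n * np.pi) * np.cos(n * np.pi)
                 - (Lp / (n * np.pi))**2 / (Lp - a) * np.sin(n * np.pi * a / Lp))
Bn = An + 2 / (n * np.pi) * np.cos(n * np.pi)

def T(x, t):
    """T(x,t) = x/L + sum_n Bn sin(n pi x/L) exp(-(n pi/L)^2 t)."""
    return x / Lp + np.sum(Bn * np.sin(n * np.pi * x / Lp)
                           * np.exp(-(n * np.pi / Lp)**2 * t))

# Boundary condition at x = 0 holds exactly:
print(T(0.0, 0.5))              # 0.0
# Initial profile: F(x) = (x-a)/(L-a) for x >= a, so F(3) = 0.5:
print(round(T(3.0, 0.0), 3))    # 0.5
# Steady state as t -> infinity is the line x/L:
print(round(T(2.5, 100.0), 6))  # 0.5
```

The series does recover the piecewise-linear initial profile at $t=0$ and decays to $x/L$, which supports the coefficients used above.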
19. Mar 16, 2016
### LCKurtz
OK, that's good. It looks like you figured it out. I didn't check every little detail but it looks right. You could simplify the Fourier coefficients by noting that $\cos(n\pi) = (-1)^n$. If you don't mind I am going to mark this thread as solved and call it a day.
20. Mar 16, 2016
### sa1988
Ah yeah, I was aware of the $(-1)^n$ thing but didn't 'notice' it until the majority of my working was in the form of $cos(n\pi)$. Time was very much against me so it stayed as it was.
Thanks for the help with this, very much appreciated. Looks like it wasn't unsalvageable after all!
Sure, mark it as solved.
# Thread: Setting up and solving equations to finding the number of stamps
1. ## Setting up and solving equations to finding the number of stamps
Peter has 35 stamps. Some are valued at 2 cents, others at 5 cents, and the remainder at 10 cents. He has three times as many 5-cent stamps as he has 2-cent stamps, and the total value of all stamps is $1.89. How many of each stamp does he have?

2. Originally Posted by Tessarina

Peter has 35 stamps. Some are valued at 2 cents, others at 5 cents, and the remainder at 10 cents. He has three times as many 5-cent stamps as he has 2-cent stamps, and the total value of all stamps is $1.89. How many of each stamp does he have?
Let x be the number of 10-cent stamps, y the number of 2-cent stamps, and z the number of 5-cent stamps. Then the equations are:

10x + 2y + 5z = 189
z = 3y
x + y + z = 35
Go ahead and try it, and do show some of your own steps next time; working through them yourself is what will solve the problem permanently.
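With the three equations above ($x$ ten-cent, $y$ two-cent, $z$ five-cent stamps), a few lines of Python find the answer by simple substitution:

```python
# Substitute z = 3y and x = 35 - y - z into 10x + 2y + 5z = 189,
# then scan the (small) range of possible y values.
solutions = []
for y in range(36):        # y = number of 2-cent stamps
    z = 3 * y              # three times as many 5-cent stamps
    x = 35 - y - z         # the remainder are 10-cent stamps
    if x >= 0 and 10 * x + 2 * y + 5 * z == 189:
        solutions.append((x, y, z))

print(solutions)  # [(7, 7, 21)]: 7 ten-cent, 7 two-cent, 21 five-cent stamps
```

Check: 7 + 7 + 21 = 35 stamps, and 10(7) + 2(7) + 5(21) = 189 cents = $1.89.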
3. ## Word problem
Hello Tessarina
Originally Posted by Tessarina
Peter has 35 stamps. Some are valued at 2 cents, others at 5 cents, and the remainder at 10 cents. He has three times as many 5-cent stamps as he has 2-cent stamps, and the total value of all stamps is $1.89. How many of each stamp does he have?

Let's suppose he has $\displaystyle x$ 2-cent stamps. They are (obviously) worth 2 cents each, so altogether they are worth $\displaystyle 2x$ cents.

He has three times as many 5-cent stamps as 2-cent stamps. If he has $\displaystyle x$ 2-cent stamps, how many 5-cent stamps is that? (You need to multiply.) So how much will they be worth altogether? (You need to multiply again.)

Now he has 35 stamps altogether. To find out how many 10-cent stamps there are, add together the number (not the value) of 2-cent and 5-cent stamps (that's $\displaystyle x$ plus however many 5-cent stamps he has), and take the total away from 35. This will give you something like 35 - (something)$\displaystyle x$. Now multiply this number by 10 to find out how much they are worth.

Then: add together the value of all the stamps, and set the total equal to 189. So you'll get an equation like: $\displaystyle 2x$ + (something)$\displaystyle x$ + (something-else) = 189. Then solve this equation for $\displaystyle x$, and you're there.
Can you do that?
# What is the trigonometric form of (12-2i) ?
Mar 11, 2016
$2 \sqrt{37} \left[\cos \left(- 0.165\right) + i \sin \left(- 0.165\right)\right]$
#### Explanation:
Using the following formulae :
• ${r}^{2} = {x}^{2} + {y}^{2}$
• $\theta = {\tan}^{-1}\left(\frac{y}{x}\right)$
here x = 12 and y = - 2
hence ${r}^{2} = {12}^{2} + {\left(- 2\right)}^{2} = 148 \Rightarrow r = \sqrt{148} = 2 \sqrt{37}$
and $\theta = {\tan}^{-1}\left(-\frac{2}{12}\right) \approx -0.165 \text{ radians}$
$\Rightarrow \left(12 - 2 i\right) = 2 \sqrt{37} \left[\cos \left(- 0.165\right) + i \sin \left(- 0.165\right)\right]$
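The modulus and argument can be double-checked numerically with a short Python sketch using the standard cmath module:

```python
import cmath
import math

z = 12 - 2j
r = abs(z)              # sqrt(12^2 + (-2)^2) = sqrt(148) = 2*sqrt(37)
theta = cmath.phase(z)  # atan2(-2, 12) in radians

print(round(r, 4))      # 12.1655  (= 2*sqrt(37))
print(round(theta, 3))  # -0.165

# Rebuilding z from the trigonometric form recovers the original number:
w = r * (math.cos(theta) + 1j * math.sin(theta))
print(abs(w - z) < 1e-12)  # True
```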
# The wavelength associated with an electron accelerated through a potential difference of 100 V is nearly
A) 100 Å  B) 123 Å  C) 1.23 Å  D) 0.123 Å
Answer [C]: $\lambda =\frac{h}{\sqrt{2mQV}}=\frac{6.6\times {{10}^{-34}}}{\sqrt{2\times 9.1\times {{10}^{-31}}\times 1.6\times {{10}^{-19}}\times 100}} = 1.23\ \text{Å}$
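The arithmetic checks out; a quick Python sketch plugging the same rounded constants into $\lambda = h/\sqrt{2mQV}$:

```python
import math

h = 6.6e-34   # Planck constant, J s (rounded, as in the solution)
m = 9.1e-31   # electron mass, kg
Q = 1.6e-19   # electron charge, C
V = 100.0     # accelerating potential, volts

lam = h / math.sqrt(2 * m * Q * V)       # de Broglie wavelength in metres
print(round(lam * 1e10, 2), "angstrom")  # 1.22 angstrom, i.e. option C
```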
# Impose the compatibility conditions for mixed finite elements method in Stokes equation
$\newcommand{\v}[1]{\boldsymbol{#1}}$ Suppose we have following Stokes flow model equation:
$$\tag{1} \left\{ \begin{aligned} -\mathrm{div}(\nu \nabla \v{u}) + \nabla p &= \v{f} \\ \mathrm{div} \v{u} &= 0 \end{aligned} \right.$$
where the viscosity $\nu(x)$ is a function. For a standard mixed finite element method, say the stable pair of the Crouzeix-Raviart space $\v{V}_h$ for the velocity $\v{u}$ and the element-wise constant space $S_h$ for the pressure $p$, we have the following variational form:
$$\mathcal{L}([\v{u},p],[\v{v},q]) = \int_{\Omega} \nu \nabla\v{u}:\nabla \v{v} -\int_{\Omega} q\mathrm{div} \v{u} -\int_{\Omega} p\mathrm{div} \v{v} =\int_{\Omega} \v{f}\cdot \v{v} \quad \forall \v{v}\times q\in \v{V}_h\times S_h$$
and we know that since the Lagrange multiplier $p$ is determined only up to a constant, the assembled matrix has a one-dimensional null space (the constant pressure mode). To circumvent this we could enforce the pressure $p$ to be zero on some chosen element, so that we don't have to solve a singular system.
So here is my question 1:
• (Q1) Is there other way than enforcing $p=0$ on some element to eliminate the kernel for standard mixed finite element? or say, any solver out there that be able to solve the singular system to get a compatible solution?(or some references are welcome)
And about the compatibility, for (1) it should be $$\int_{\Omega} \nu^{-1} p = 0$$ and the nice little trick is to compute $\tilde{p}$ be the $p$ we got from the solution of the linear system subtracted by its weighted average: $$\tag{2} \tilde{p} = p - \frac{\nu}{|\Omega|}\int_{\Omega} \nu^{-1} p$$
However, recently I implemented the stabilized $P_1-P_0$ mixed finite element for the Stokes equation by Bochev, Dohrmann, and Gunzburger, in which a stabilization term is added to the variational formulation: $$\widetilde{\mathcal{L}}([\v{u},p],[\v{v},q]) =\mathcal{L}([\v{u},p],[\v{v},q]) -\int_{\Omega} (p - \Pi_1 p)(q -\Pi_1 q) =\int_{\Omega} \v{f}\cdot \v{v} \quad \forall \v{v}\times q\in \v{V}_h\times S_h$$ where $\Pi_1$ is the projection from the piecewise constant space $P_0$ to continuous piecewise $P_1$. The constant kernel of the original mixed method is gone, but something strange happens: (2) no longer works. I built the test problem from an interface problem for the diffusion equation; this is what I got for the pressure $p$ (the right figure is the true solution and the left one is the numerical approximation):
however if $\nu$ is a constant, the test problem performs just fine:
I am guessing it is because the way I am imposing the compatibility condition, since it is linked with the inf-sup stability of the whole system, here is my second question:
• (Q2): is there any way other than (2) to impose the compatibility condition for the pressure $p$? Or, when constructing the test problem, what kind of $p$ should I use?
• MathML not working? – Shuhao Cao Sep 10 '12 at 21:30
• We use MathJaX on StackExchange, everything you posted is showing up beautifully, thanks for the detailed question. – Aron Ahmadia Sep 11 '12 at 8:09
The compatibility condition concerns velocity, not pressure. It states that if you only have Dirichlet boundary conditions for the velocity, then these should be compatible with the divergence-free constraint, i.e. $\int_{\partial \Omega} u \cdot n = 0$ with $\partial \Omega$ the boundary of the computational domain (not the cell).
In this case $\nabla p$ cannot be distinguished from $\nabla (p + c)$ with $c$ an arbitrary constant because you have no boundary condition on $p$ to fix the constant. Thus there are infinitely many solutions for the pressure and in order to compare solutions, a convention is needed. Mathematicians prefer choosing $c$ such that $\overline{p}=p_\mathrm{ref}$ (because they can integrate) while physicists prefer $p(x_\mathrm{ref}) = p_\mathrm{ref}$ (because they can measure in a point). If $Bp$ is your discrete equivalent of $\nabla p$, it implies that $B$ has a null space consisting of the identity vector.
Krylov subspace methods can solve a singular system by removing the null space from the Krylov subspace in which they look for the solution. However, that does not mean you will get the solution $p$ that matches a given convention, you will always need to determine the constant yourself in a postprocessing step, no solver can do it for you.
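To illustrate the Krylov-method route with something concrete, here is a minimal scipy sketch on a toy stand-in (my own example, not the actual Stokes matrix): a small symmetric singular system whose null space is the constant vector, which MINRES solves as long as the right-hand side is compatible; the free constant is then fixed by a convention afterwards:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres

# Toy singular symmetric system: 1D Neumann Laplacian, null space = constants.
n = 20
main = 2.0 * np.ones(n); main[0] = main[-1] = 1.0
A = diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1]).tocsr()

rng = np.random.default_rng(1)
b = rng.normal(size=n)
b -= b.mean()        # compatibility: b orthogonal to the null space of A

p, info = minres(A, b)
print(np.linalg.norm(A @ p - b) < 1e-3)  # True: a solution despite singular A

# The additive constant is still free; fix it afterwards by convention,
# e.g. zero mean (the "mathematician's" choice) ...
p_zero_mean = p - p.mean()
# ... or zero at a reference point (the "physicist's" choice).
p_zero_at_ref = p - p[0]
```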
Here are some suggestions to tackle your problem:
• Equation (2) seems strange. If $\nu$ is a function of $x$ how can it be outside the integral?
• Does your velocity field satisfy the compatibility constraint?
• Try not to do anything for the pressure, just let the solver free to come up with a $p$, then look at $p-p_\mathrm{exact}$. Is it a constant?
• If not, are you sure that the null space of $B$ is indeed the identity vector and nothing more? Both on paper and in the code? The problem seems small enough to actually compute the null space.
As for (Q1), you can choose a solver for saddle-point problems that computes a least squares solution for your system. Then an additional condition can be imposed on the multiplier, like setting a specific degree of freedom, or imposing a specific average.
In general, and I think this answers (Q1), you may use a linear constraint that can distinguish different constants.
This constraint may be imposed in a post-processing step, or by an appropriate choice of the trial space (e.g., if you leave out one degree of freedom).
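For the post-processing route, here is a minimal numpy sketch of the weighted-average subtraction in (2) for an element-wise constant pressure (array names are my own: `area`, `nu`, `p` hold per-element measures $|K|$, viscosities, and pressure values; the data are random placeholders). Whether (2) is the right condition for the stabilized scheme is exactly the open question, so this only shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
ne = 50                            # number of elements
area = rng.uniform(0.5, 1.5, ne)   # per-element measures |K|
nu = rng.uniform(1.0, 10.0, ne)    # per-element viscosity
p = rng.normal(size=ne)            # raw pressure from the linear solve

# Discrete version of (2): p_tilde = p - (nu / |Omega|) * int_Omega nu^{-1} p
omega = area.sum()
c = (area * p / nu).sum() / omega
p_tilde = p - c * nu

# The weighted compatibility condition int_Omega nu^{-1} p_tilde = 0 now holds:
print(abs((area * p_tilde / nu).sum()) < 1e-12)  # True
```

The identity holds by construction: subtracting $c\,\nu$ removes exactly the $\nu$-weighted average, since $\int_\Omega \nu^{-1}\cdot\nu = |\Omega|$.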
# EE Program vs Physics (Masters degree)
• Programs
## Main Question or Discussion Point
Hi everyone.
Recently, I started a MS program (terminal) in applied physics. I'm noticing my desire for physics really isn't there. The core coursework (classical mechanics, math physics) thus far isn't too interesting, thus I'm not doing that well.
The honest reason for going is that it seemed like a logical next step and I was excited. My undergraduate GPA wasn't stellar (around 3.0) mostly because I goofed around. Toward the end of my degree I started doing better when I got my priorities straight.
I did consider engineering programs for grad school and applied to two of them but the department at my current program was so welcoming that it was a big draw for me. I don't think my desire for physics though, is there.
I was figuring I could finish a masters degree in applied physics or engineering then work in industry, but it seems like my options with engineering are much more diverse and application based classes always interested me.
I could more than likely get into a terminal MS program for engineering.
Has anyone been down this path? Any advice is appreciated.
Hi,
I have not done so myself but I have worked with a couple of folks who have. One was an okay physics student and the other was a brilliant student who should have gone directly to grad school for a PhD. Both of them finished a master's degree in EE at a local grad school. Unfortunately, the ten courses or so will not make anybody a design engineer. Yes, of course, the math and physics are there but the solid engineering core is missing.
Compare 10 grad level courses that tend to enhance math skills or build upon already existing knowledge to 30+ courses in electronics, systems and signals, computer science, etc.
One of the guys is now an EE engineer, just doing okay. Not really capable of doing real design work. The other guy has become a guru on magnetics and electric machines, which is HOWEVER much closer to physics than the pure EE. He has found the proper balance.
Therefore, I would advise you to pursue the path you wish but bear in mind that you most likely will not be able to compete with people who came with a solid EE UG background. You can find your niche, however, and be very good at it, provided your previous focus on physics and mathematics.
One more thing-- my brother-in-law finished a BS and MS in physics at SUNY. He did work in a lab for a few years but unfortunately, he did not have much vertical mobility without a PhD. Hence, unless you are a stellar student (it's up to you to decide), I would urge you to ponder your job prospects for a moment.
Btw. he did a second master's degree in management and is now a manager at a company where there is not much use for his physics background. As sad as it sounds, it actually makes him happy. And that's what counts.
Thank you for the response.
What is your friend doing at the company? Is he doing well financially?
I'm just feeling disenchanted with physics lately and have an urge to more applicable stuff. I also have a general feeling that it's easier to get a good job as an engineer.
Which person are you talking about? All of those people make $60k+ per year. Some might be closer to $70k, but all have just a few years of experience. I believe that most of them will be making between $80k and $90k in a decade.
It all depends what you mean by doing "ok". Some people are fine with making $30k in a job they love. I'm not sure how well and how soon a UG physics degree pays off.
Ultimately it is all about you-- do you want to make the big (ehh) bucks? Engineering might be a good profession for you. I believe that life is too long to spend in a profession that pays well but does not satisfy you. Money should not be the only differentiator (it might not be for you, but could be for your future/current spouse).
# acceleration due to gravity calculator pendulum
2022.5.23
A pendulum is a weight suspended from a pivot so that it can swing freely. When it is displaced sideways from its equilibrium position, it is subject to a restoring force due to gravity that accelerates it back toward equilibrium; when released, this restoring force causes the pendulum's mass to oscillate about the equilibrium position. An ideal simple pendulum consists of a massive bob suspended by a weightless rod from a frictionless pivot, without air friction; a physical pendulum, by contrast, is any body or mass suspended from a rotation point. Knowing the length of the pendulum, you can determine its frequency, and the frequency f and the period T are reciprocals.

For small amplitudes, the period of a simple pendulum depends only on its length L and the local acceleration due to gravity g:

T = 2π √(L/g)

Rearranging gives g = 4π²L/T², so timing the swing of a pendulum of known length lets you calculate g. Acceleration has the dimensions of velocity (L/T) divided by time, so its units are typically m/s². The value of g differs for every planet.

Worked examples:
- An astronaut uses a 1.40 m pendulum to calculate the acceleration due to gravity on the Moon. The pendulum has a period of 5.88 s, so g = 4π²(1.40)/(5.88)² ≈ 1.6 m/s², roughly one-sixth of Earth's value: g_moon = g_earth/6 = (9.8 m/s²)/6 = 1.63 m/s².
- What is the acceleration due to gravity in a region where a simple pendulum of length 75.000 cm has a period of 1.7357 s? Calculate to find g: g = 4π²(0.75000)/(1.7357)² = 9.8281 m/s². So many digits are needed in the period because g depends on T², so small timing errors are magnified.
- In one laboratory determination the measured value was 9.706 m/s² with a standard deviation of 0.0317, which does not fall within the range of accepted terrestrial values.

Instead of a single measurement, the period can be recorded for several lengths (e.g. the time for 10 oscillations at five different lengths) and T² plotted against L; the slope of the resulting straight line is 4π²/g. For example, with the data L = [20 30 40 50 60 70 80] (cm) and T = [.767 .934 1.11 1.31 1.53 1.72 ...] (s), a first-order polynomial (linear regression) fit in MATLAB gives the slope and hence g. Alternatives include timing free fall, using g = 2h/t² for a drop height h and fall time t, or taking the slope of a velocity-time graph (one such measurement gave 6.74 m/s² [down]).
Acceleration due to gravity can be measured with the help of a simple experiment, The period $$T$$ for a simple pendulum does not depend on the mass or the initial angular displacement, but depends only on the length $$L$$ of the string and the value of the acceleration due to gravity. As a first activity, we will tackle the problem of a swinging pendulum in different ways and will use what we learn to determine the local acceleration due to gravity in this classroom. The acceleration due to gravity is stated as: Here, substitute 6.67 × 10-11 Nm 2 kg-2 for G, 6 × 10 24 kg for M and 6.4 × 10 6 m for r in the above expression to calculate g at the surface of Earth. The formulas to compute the simple pendulum period and frequency are provided below: T = 2π × √ (L/g) f = 1/T. where h is the height of the fall and t is the average time of fall. L T −2.The SI unit of acceleration is the metre per second squared (m s −2); or "metre per second per second", as the velocity in metres per second changes by the acceleration value, every second.. Other forms. Finding the acceleration due to gravity. That is g moon = g earth /6 = (9.8 m/s2)/6 = 1.63 m/s2. To analyze the data from the Acceleration Due to Gravity Experiment by means of a computer and a spreadsheet program and to learn the treatment of experimental errors. Examples using Huygen's Law of for the period of a Pendulum. Angular Frequency: The calculator returns the angular frequency of the pendulum. What is Acceleration due to Gravity? this . Acceleration due to gravity is the instantaneous change in downward velocity ( acceleration) caused by the force of gravity toward the center of mass. Here is how the Angular frequency of a Simple Pendulum calculation can be explained with given input values -> 1.807392 = sqrt (9.8/3). acceleration due to gravity experiment pendulum. A pendulum with a period of 2.00000 s in one location (g = 9.80 m/s 2) is moved to a new location where the period is now 1.99796 s. What is the . 
Where, T = Period of the motion, measured in s. L = Length of the pendulum, measured in cm. Such an object has an acceleration of 9.8 m/s/s, downward (on Earth). The acceleration due to gravity is . Spanish French German Russian Italian Portuguese Polish . 3. (b) Explain why so many digits are needed in the value for the period, based on the relation between the period and the acceleration due to gravity. Calculate the acceleration due to gravity in that planet? The equation is: a= ∆v∆t ("Acceleration") Acceleration is also a vector because it has a direction. Measure the distance the object will fall - in meters. Measuring Acceleration due to Gravity: The Period of a Pendulum. Where, T is the time period of the pendulum. Everything in this class is in metric units! Physics questions and answers. 4ms-2 and highest is 11. INSTRUCTIONS: Choose the preferred units and enter the following: ( L) Length of the Pendulum. A parachutist jumping from an aeroplane falls freely for some time. Solve for acceleration of gravity: Solve for distance from center of mass to pivot: References - Books: Tipler, Paul A.. 1995. Units. Time the object's fall at least 20 times. It has both magnitude and direction, hence, it's a vector quantity. For the 0. This formula employs the acceleration due to gravity at sea-level on Earth (g = 9.80665 m/s²) Envision how the two component forces change as the pendulum swings. This value for the acceleration due to gravity can, if you like, be used for all additional experimentation and calculation this semester. T = 1/f. To use this online calculator for Angular frequency of a Simple Pendulum, enter Acceleration Due To Gravity (g) & Length (L) and hit the calculate button. Calculation of Gravity. 5. Formula: A. L is the length of the pendulum. This numerical value is so important that it is given a special name. Spanish French German Russian Italian Portuguese Polish . 
For example, Again, the pendulum oscillation frequency is half that, or 0.313 hertz. Formula: The following formula is used for the determination of acceleration due to gravity 'g': 1 2 2 2 2 1 1 2 2 2 2 1 8 2 l l T T l l T T g − − + + + π = (1) Here, T1: time periods of the oscillating pendulum from knife-edge K1 T2: time periods of the oscillating . A special reversible bar pendulum called a cutter pendulum was designed to measure the value of g, the acceleration of gravity, and I knew that this . It is because, y = mx The change in velocity of an object in free fall was directly proportional to the displacement. Conclusion: In this experiment we used a bar pendulum that has an extended mass, like a swinging rod, and it is free to swing about a horizontal axis so Bar Pendulum = metal rod with slotted holes at fixed distances, which serve as pivots. Worth Publishers. The slope of the line in the graph of T² against L can be used to determine the gravity of the pendulum motion. Physical Pendulum Calculator. The Angular Frequency of a Pendulum equation calculates the angular frequency of a simple pendulum with a small amplitude. If acceleration due to gravity remains constant, then we can write that T \propto \sq. An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due . L = 0.25 m. g = 1.6 m/s 2. If , adjust the position of knife edge K 2 so that . Acceleration due to gravity is represented by g. The standard value of g on the surface of the earth at sea level is 9.8 . 2 Less than a minute. Step 3: Finally, the gravitational acceleration will be displayed in the . The Simple Pendulum and the Acceleration of Gravity Mr Keefer Suppose you must be made part of this experiment How To Compare Two Files The period of a simple pendulum is T = 2π T = 2 π √L g, L g, where L L is the length of the string and g g is the acceleration due to gravity For this lab we are trying to verify that gravity on earth is 9 . Calculate T 2 also. 
To use this online calculator for Angular frequency of a Simple Pendulum, enter Acceleration Due To Gravity (g) & Length (L) and hit the calculate button. Measure the length of the pendulum to the middle of the pendulum bob. To find the value of acceleration due to gravity . Pendulum Problems. Here is how the Angular frequency of a Simple Pendulum calculation can be explained with given input values -> 1.807392 = sqrt (9.8/3). What is its new period? Simple pendulum to calculate Acceleration due to Gravity 'g' A simple pendulum consists of a heavy or point mass suspended by an inextensible or non-elastic thread from a fixed point. A simple pendulum with a period of 2.00000 s in one location where g = 9.80 m/s 2 is moved to a new location where the period is now 1.99796 s. What is the acceleration due to gravity at its new location? Calculating gravity using the 3.125 second period and Equation 2: Counting the cycles observed over a 60-second time frame resulted in a frequency of 0.626 hertz. Step 2: Now click the button "Calculate the Unknown" to get the acceleration due to gravity. Measuring Acceleration due to Gravity: The Period of a Pendulum. Acceleration is the rate at which an object changes its velocity. Determination of the Acceleration Due to Gravity By A Good Student Abstract The acceleration due to gravity‚ g‚ was determined by dropping a metal bearing and measuring the free-fall time with a pendulum of known period. 221 17. though my higher secondary book lays down procedures to find the acceleration due to gravity(g} and conclude that it there using a simple pendulum and gives the formula g= 4(pi)^2L/T^2 where L is the length of the string and . The mass of the Earth is 5.979 * 10^24 kg and the average radius of the Earth is 6.376 * 10^6 m. Plugging that into the . Acceleration due to Gravity is the acceleration due to the force of gravitation of the earth. • Use a pendulum to measure g, the acceleration due to gravity. 
(a) A pendulum that has a period of 3.00000 s and that is located where the acceleration due to gravity is . Using a simple pendulum, the relation between the periodic time and the length of the wire: . A simple pendulum of length 2 m and a period of 5 seconds is oscillating in a planet. Factor affecting acceleration due to gravity [1] The equation to calculate the period is, T = 2² Lg. By adding a second knife-edge pivot and two adjustable masses to the physical pendulum described in the Physical Pendulum demo, the value of g can be determined to 0.2% precision. . Therefore, for small amplitudes the period of a simple pendulum depends only on its length and the value of the acceleration due to gravity. and g is the acceleration due to gravity (measured in meters/sec2). Units. Facebook Twitter LinkedIn Tumblr Pinterest Reddit VKontakte Odnoklassniki Pocket. g = 4π²L/T2 (3) 1. Measure the distance of the knife-edge K 1 as h 1 and that of K 2 as h 2 from the . Note: This means that the frequency and period would be different on the Moon versus on the Earth. Strategy. Acceleration has the dimensions of velocity (L/T) divided by time, i.e. Balance the pendulum on a sharp wedge and mark the position of its centre of gravity. Acceleration due to gravity is the instantaneous change in downward velocity ( acceleration) caused by the force of gravity toward the center of mass. 3rd ed. Acceleration due to gravity 'g' by Bar Pendulum OBJECT: To determine the value of acceleration due to gravity and radius of gyration using bar pendulum. ( g) Acceleration due to gravity. Angular Frequency: The calculator returns the angular frequency of the pendulum. Physical Pendulum Calculator. Strategy. The acceleration due to gravity is . 
The time period of a simple pendulum depends on the length of the pendulum (l) and the acceleration due to gravity (g), which is expressed by the relation, For small amplitude of oscillations, ie; If we know the value of l and T, we can calculate the acceleration due to gravity, g at that place. Answer (1 of 4): The time period of oscillation of a simple pendulum is given by T=2\pi \sqrt{\frac{l}{g}} From the above equation, we can find the effect of length and gravity in the period of oscillation. What is the acceleration due to gravity in a region where a simple pendulum having a length 75.000 cm has a period of 1.7357 s? A simple pendulum and a mass-spring system have the same oscillation frequency f at the surface of the Earth. The procedure to use the acceleration due to gravity calculator is as follows: Step 1: Enter the mass, radius and "x" for the unknown value in the respective input field. Example 1: Measuring Acceleration due to Gravity: The Period of a Pendulum. Record the length of the pendulum in the . Now with a bit of algebraic rearranging, we may solve Eq. Physics For Scientists and Engineers. g is the acceleration due to gravity. Measuring Acceleration due to Gravity: The Period of a Pendulum. Pendulum Formula How to Calculate the Value of g=9.8 m/s^2 | Class 9 Gravitation | Acceleration due to gravity. You can express acceleration by standard acceleration, due to gravity near the surface of the Earth which is defined as g = 31.17405 ft/s² = 9.80665 m/s . 9ms-2. acceleration due to gravity experiment pendulum. Question: A simple pendulum of length 2 m and a period of 5 seconds is oscillating in a planet. Strategy. Procedure (to determine acceleration due to gravity value using pendulum motion) Measure the effective length of the pendulum from the top of the string to the center of the mass bob. 
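A minimal Python sketch of both routes to g described above: directly from one (L, T) pair via g = 4π²L/T², and from the slope of a T² against L fit. The function names and the regression data are mine for illustration; the fit data are synthetic, generated from a known g, just to exercise the method.

```python
import math

def g_from_pendulum(length_m, period_s):
    """Rearranged small-amplitude pendulum formula: g = 4*pi^2*L / T^2."""
    return 4 * math.pi**2 * length_m / period_s**2

# Worked example from the text: L = 75.000 cm, T = 1.7357 s  ->  ~9.828 m/s^2
g_local = g_from_pendulum(0.750, 1.7357)

def g_from_slope(lengths_m, periods_s):
    """Least-squares slope of T^2 vs L equals 4*pi^2/g, so g = 4*pi^2/slope."""
    ys = [t**2 for t in periods_s]
    n = len(lengths_m)
    xbar = sum(lengths_m) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(lengths_m, ys))
             / sum((x - xbar)**2 for x in lengths_m))
    return 4 * math.pi**2 / slope

# Synthetic check: periods generated from g = 9.81 m/s^2 should be recovered.
lengths = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
periods = [2 * math.pi * math.sqrt(L / 9.81) for L in lengths]
g_fit = g_from_slope(lengths, periods)
```

With real timing data the fitted value will of course scatter around the local g; the T² vs L fit is preferred over a single measurement because it averages out length and timing errors.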
|
{}
|
Browse Questions
# Consider the following statements Statement 1: Voltmeter is much better than a potentiometer for measuring the emf of the cell Statement 2: A potentiometer draws no current while measuring the emf of the cell
$\begin {array} {1 1} (A)\;Both \: the \: statements\: are\: true.\: Statement\: 2\: is\: a\: correct \: explanation\: of\: statement\: 1 \\ (B)\;Both \: the \: statements\: are\: true.\: Statement\: 2\: is\: not\: a\: correct \: explanation\: of\: statement\: 1 \\ (C)\;Statement \: 1\: is\: true,\: statement\: 2 \: is\: false \\ (D)\;Statement \: 2\: is\: true,\: statement\: 1 \: is\: false \end {array}$
Statement 1 is false: a voltmeter draws some current from the cell, so it reads the terminal potential difference, which is less than the emf. A potentiometer draws no current from the cell at the balance point (statement 2 is true), which is precisely why it measures the emf accurately.
Ans : (d)
|
{}
|
# Show that if there exist two complex numbers $a,b$ such that $f(a)=a$ and $f(b)=b$ then $f(z)=z$ for all $z\in B(0,1)$.
Let $f:B(0,1) \to B(0,1)$ holomorphic. Show that if there exist two complex numbers $a,b$ such that $f(a)=a$ and $f(b)=b$ then $f(z)=z$ for all $z\in B(0,1)$.
There is a suggestion in the exercise that says: consider the function
$g(z)=\frac{h(z)-a}{1-\overline{a}h(z)}$ with $h(z)=f \left(\frac{z+a}{1+\overline{a}z} \right)$
and use Schwarz Lemma.
OK, so I've been thinking about this exercise for a while, but I wasn't able to solve it. I don't see how to use the suggestion. By substituting, I easily got that $g(0)=0$, but I'm not able to prove that $|g(z)|<1$, and I can't continue even assuming that is true. Any hint on how to use the suggestion?
Given $c$ with $|c| < 1$, you can easily prove that the map $$h_c(z) = \frac{z - c}{1 - \overline{c}z}$$ is a holomorphic automorphism of the open unit disc sending $c$ to $0$. The inverse of $h_c(z)$ is $h_{-c}$.
If $a$ or $b$ were $0$, the answer would follow from an immediate application of the Schwarz lemma. The hint offers to reduce the problem to that case by using a holomorphic automorphism of $B(0,1)$ that sends $a$ to $0$. More specifically, you are asked to consider $g = h_a \circ f \circ h_{-a} = h_a \circ f \circ (h_a)^{-1}$, the conjugation of $f$ by $h_a$. Since $f$ maps the open unit disc into itself, it follows that $|g(z)| < 1$ for all $z \in B(0,1)$. We also have $$g(0) = h_a(f(h_{-a}(0))) = h_a(f(a)) = h_a(a) = 0$$ and $$g(h_a(b)) = h_a(f(h_a^{-1}(h_a(b)))) = h_a(f(b)) = h_a(b).$$
Since $h_a(b) \neq 0$ (because $h_a$ is an automorphism and $h_a(a) = 0$), the Schwarz lemma implies that $g(z) = z$ for all $z \in B(0,1)$. The only map conjugate to the identity is the identity itself, and so $f(z) = z$ for all $z \in B(0,1)$.
To prove $|g(z)|<1$ directly, one can prove the equivalent statement $|g(z)|^2=g\bar{g}<1$. If you write this inequality out and simplify a bit, you get $$|h(z)|^2+|a|^2<1+|a|^2|h(z)|^2,$$ which rearranges to $(1-|a|^2)(1-|h(z)|^2)>0$. This holds because $|a|<1$ and $\operatorname{Im} f\subseteq B(0,1)$ forces $|h(z)|<1$, and hence $|g(z)|<1$.
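The two properties of $h_c$ used above, that it maps the open unit disc into itself and that $h_{-c}$ inverts it, are easy to sanity-check numerically (a sketch, not a proof; the helper names are mine):

```python
import random

def h(c, z):
    """The Mobius map h_c(z) = (z - c) / (1 - conj(c) * z)."""
    return (z - c) / (1 - c.conjugate() * z)

random.seed(0)

def random_disc_point():
    """A uniformly random point of the open unit disc (rejection sampling)."""
    while True:
        z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
        if abs(z) < 1:
            return z

a = random_disc_point()
assert abs(h(a, a)) < 1e-12           # h_a sends a to 0
for _ in range(1000):
    z = random_disc_point()
    w = h(a, z)
    assert abs(w) < 1                 # h_a maps B(0,1) into itself
    assert abs(h(-a, w) - z) < 1e-9   # h_{-a} undoes h_a
```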
|
{}
|
# The correct option for free expansion of an ideal gas under adiabatic condition is:
The correct option for free expansion of an ideal gas under adiabatic condition is:
(1) q = 0, $\Delta$T = 0 and w = 0
(2) q = 0, $\Delta$T < 0 and w > 0
(3) q < 0, $\Delta$T = 0 and w = 0
(4) q > 0, $\Delta$T > 0 and w > 0
In free expansion the gas expands into a vacuum, so the external pressure is zero and w = 0; under adiabatic conditions q = 0; hence $\Delta$U = 0, and for an ideal gas this means $\Delta$T = 0. The correct option is (1).
|
{}
|
Polynomials and Commutativity
Standard
In high school, I came to know about the statement of the fundamental theorem of algebra:
Every polynomial of degree $n$ with integer coefficients has exactly $n$ complex roots (with appropriate multiplicity).
In high school, a polynomial meant a polynomial in one variable. Then last year I learned 3 different proofs of the following statement of the fundamental theorem of algebra [involving topology, complex analysis and Galois theory]:
Every non-zero, single-variable, degree $n$ polynomial with complex coefficients has, counted with multiplicity, exactly $n$ complex roots.
A more general statement about the number of roots of a polynomial in one variable is the Factor Theorem:
Let $R$ be a commutative ring with identity and let $p(x)\in R[x]$ be a polynomial with coefficients in $R$. The element $a\in R$ is a root of $p(x)$ if and only if $(x-a)$ divides $p(x)$.
A corollary of the above theorem is:
A polynomial $f$ of degree $n$ over a field $F$ has at most $n$ roots in $F$.
(In case you know undergraduate level algebra, recall that $R[x]$ is a Principal Ideal Domain if and only if $R$ is a field.)
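The field hypothesis in the corollary is genuinely needed: over a ring with zero divisors a degree-$n$ polynomial can have more than $n$ roots. A quick check (my own illustration, not from the original post) that $x^2-1$ has four roots in $\mathbb{Z}/8\mathbb{Z}$:

```python
# In Z/8Z the degree-2 polynomial x^2 - 1 has four roots, because Z/8Z has
# zero divisors: (x - 1)(x + 1) = 0 does not force x = 1 or x = -1 there.
roots = [x for x in range(8) if (x * x - 1) % 8 == 0]
print(roots)  # [1, 3, 5, 7]
```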
The key fact that often goes unnoticed regarding the number of roots of a given polynomial (in one variable) is that the coefficients/solutions belong to a commutative ring (and $\mathbb{C}$ is a field, hence a commutative ring). The key step in the proofs of all the above theorems is that the division algorithm holds only in certain special commutative rings (like fields). I would like to illustrate my point with the following fact:
The equation $X^2 + X + 1 = 0$ has only 2 complex roots, namely $\omega = \frac{-1+i\sqrt{3}}{2}$ and $\omega^2 = \frac{-1-i\sqrt{3}}{2}$. But if we look for solutions among the 2×2 matrices (a non-commutative ring), then we have at least 3 solutions (reading 1 as the 2×2 identity matrix and 0 as the 2×2 zero matrix):
$\displaystyle{A=\begin{bmatrix} 0 & -1 \\1 & -1 \end{bmatrix}, B=\begin{bmatrix} \omega & 0 \\0 & \omega^2 \end{bmatrix}, C=\begin{bmatrix} \omega^2 & 0 \\0 & \omega \end{bmatrix}}$
if we allow complex entries. This phenomenon can also be illustrated using a non-commutative number system, like the quaternions. For more details refer to this Math.SE discussion.
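These identities are easy to verify numerically; below is a quick plain-Python check (the matrix helpers are mine, no external libraries) that $A$, $B$ and $C$ all satisfy $X^2 + X + I = 0$:

```python
# Verify that A, B, C satisfy X^2 + X + I = 0 over the 2x2 complex matrices.
w = complex(-0.5, 3**0.5 / 2)  # omega, a primitive cube root of unity

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(*Ms):
    """Entrywise sum of 2x2 matrices."""
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
A = [[0, -1], [1, -1]]
B = [[w, 0], [0, w**2]]
C = [[w**2, 0], [0, w]]

for M in (A, B, C):
    P = matadd(matmul(M, M), M, I)  # M^2 + M + I
    assert all(abs(P[i][j]) < 1e-12 for i in range(2) for j in range(2))
```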
Prime Polynomial Theorem
Standard
I just wanted to point towards a nice theorem, analogous to the Prime Number Theorem, which is not talked about much:
The number of irreducible monic polynomials with coefficients in $\mathbb{F}_q$ and of degree $n$ is asymptotically $\frac{q^n}{n}$, for a prime power $q$.
The proof of this theorem follows from Gauss’ formula:
The number of monic irreducible polynomials with coefficients in $\mathbb{F}_q$ and of degree $n$ equals $\displaystyle{\frac{1}{n}\sum_{d|n}\mu\left(\frac{n}{d}\right)q^d}$; the asymptotic follows because the $d=n$ term dominates the sum.
For details, see first section of this: http://alpha.math.uga.edu/~pollack/thesis/thesis-final.pdf
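Gauss' formula can be cross-checked by brute force for small prime fields. A sketch (function names mine): the brute force marks every product of two monic factors of positive degree as reducible and counts what is left.

```python
from itertools import product

def mobius(n):
    """Mobius function mu(n) by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n is not squarefree
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def count_by_formula(q, n):
    # Gauss: (1/n) * sum over d | n of mu(n/d) * q^d
    return sum(mobius(n // d) * q**d for d in range(1, n + 1) if n % d == 0) // n

def polymul(f, g, p):
    """Multiply coefficient tuples (lowest degree first) over F_p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return tuple(h)

def monic(deg, p):
    """All monic polynomials of the given degree over F_p."""
    for lower in product(range(p), repeat=deg):
        yield lower + (1,)

def count_by_bruteforce(p, n):
    # A monic polynomial is reducible iff it is a product of two monic
    # polynomials of positive degree.
    reducible = {polymul(f, g, p)
                 for a in range(1, n // 2 + 1)
                 for f in monic(a, p)
                 for g in monic(n - a, p)}
    return p**n - len(reducible)

for p, n in [(2, 3), (2, 4), (3, 2), (3, 3), (5, 2)]:
    assert count_by_bruteforce(p, n) == count_by_formula(p, n)
```

For example, the formula gives 2 irreducible monic cubics over $\mathbb{F}_2$ ($x^3+x+1$ and $x^3+x^2+1$), and $\frac{1}{3}(8-2) = 2 \approx \frac{2^3}{3}$, in line with the asymptotic.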
Ulam Spiral
Standard
Some of you may know what Ulam's spiral is (I am not describing it here because the present Wikipedia entry is excellent, though I mentioned it earlier as well). When I first read about it, I thought that it was just a coincidence and a useless observation. But a few days ago, while reading an article by Yuri Matiyasevich, I came to know about the importance of this observation. (Though just now I realised that the Wikipedia article describes it clearly, so in this post I just want to restate that idea.)
It’s an open problem in number theory to find a non-linear, non-constant polynomial which can take prime values infinitely many times. There are some conjectures about the conditions to be satisfied by such polynomials but very little progress has been made in this direction. This is a place where Ulam’s spiral raises some hope. In Ulam spiral, the prime numbers tend to create longish chain formations along the diagonals. And the numbers on some diagonals represent the values of some quadratic polynomial with integer coefficients.
Ulam spiral consists of the numbers between 1 and 400, in a square spiral. All the prime numbers are highlighted. ( Ulam Spiral by SplatBang)
Surprisingly, this pattern continues for large numbers. Note also that the pattern appears in spirals that do not necessarily begin with 1. For example, the values of the polynomial $x^2+x+41$ form a diagonal pattern on a spiral beginning with 41.
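Euler's observation behind that particular diagonal is easy to check. The short Python sketch below (my addition, not part of the original post) verifies that $x^2+x+41$ is prime for every $x$ from 0 to 39, but not for $x = 40$:

```python
def is_prime(n):
    # Simple trial-division primality test.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Euler's polynomial: prime for every x from 0 to 39, which is what
# produces the long diagonal runs of primes in the spiral.
values = [x * x + x + 41 for x in range(40)]
assert all(is_prime(v) for v in values)

# The streak breaks at x = 40, where the value is 41^2 = 1681.
assert not is_prime(40 * 40 + 40 + 41)
```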
Repelling Numbers
An important fact in the theory of prime numbers is the Deuring-Heilbronn phenomenon, which roughly says that:
The zeros of L-functions repel each other.
Interestingly, Andrew Granville in his article for The Princeton Companion to Mathematics remarks that:
This phenomenon is akin to the fact that different algebraic numbers repel one another, part of the basis of the subject of Diophantine approximation.
I am amazed by this repelling relation between two different aspects of arithmetic (a.k.a. number theory). Since I have already discussed colourful plots of complex functions in the post Colourful Complex Functions, I wanted to share this picture of the algebraic numbers in the complex plane, made by David Moore based on earlier work by Stephen J. Brooks:
In this picture, the colour of a point indicates the degree of the polynomial of which it is a root: red represents roots of linear polynomials (i.e. rational numbers), green roots of quadratic polynomials, blue roots of cubic polynomials, yellow roots of quartic polynomials, and so on. Also, the size of a point decreases exponentially with the complexity of the simplest polynomial with integer coefficients of which it is a root, where the complexity is the sum of the absolute values of that polynomial's coefficients.
Moreover, John Baez comments in his blog post that:
There are many patterns in this picture that call for an explanation! For example, look near the point $i$. Can you describe some of these patterns, formulate some conjectures about them, and prove some theorems? Maybe you can dream up a stronger version of Roth’s theorem, which says roughly that algebraic numbers tend to ‘repel’ rational numbers of low complexity.
To read more about complex plane plots of families of polynomials, see this write-up by John Baez. I will end this post with the following GIF from Reddit (click on it for details):
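A rough homemade version of such a picture can be sketched as follows (Python, my own illustration, not Moore's code): collect the roots of all quadratics $ax^2+bx+c$ with small integer coefficients, recording the complexity $|a|+|b|+|c|$, which could then drive point size in a scatter plot.

```python
import cmath

def quadratic_roots(a, b, c):
    # Both roots of a*x^2 + b*x + c via the quadratic formula.
    disc = cmath.sqrt(b * b - 4 * a * c)
    return [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

# Roots of all quadratics with coefficients in [-3, 3], a != 0.
# Each point is stored with its complexity |a| + |b| + |c|.
points = []
for a in range(-3, 4):
    if a == 0:
        continue
    for b in range(-3, 4):
        for c in range(-3, 4):
            for r in quadratic_roots(a, b, c):
                points.append((r, abs(a) + abs(b) + abs(c)))

# For example, i appears as a root of x^2 + 1:
assert any(abs(r - 1j) < 1e-9 for r, _ in points)
```

Feeding `points` to a plotting library, with marker size shrinking as complexity grows, already reproduces some of the clustering visible near points like $i$.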
Arithmetic Operations
There are only 4 binary operations which we call “arithmetic operations”. These are:
• Addition (+)
• Subtraction (−)
• Multiplication (×)
• Division (÷)
Reading this fact, an obvious question is:
Why only four out of the infinitely many possible binary operations are said to be arithmetical?
Before presenting my attempt to answer this question, I would like to remind you that these are the operations you were taught when you learnt about numbers i.e. arithmetic.
In high school when $\sqrt{2}$ is introduced, we are told that real numbers are of two types: “rational” and “irrational”. Then in college when $\sqrt{-1}$ is introduced, we should be told that complex numbers are of two types: “algebraic” and “transcendental”.
As I have commented before, there are various number systems, and for each number system we have some valid arithmetical operations leading to a valid algebraic structure. So only these 4 operations are entitled to be called arithmetic operations, because these are the operations which, applied to algebraic numbers, again yield algebraic numbers.
Now this leads to another obvious question:
Why are we so concerned about algebraic numbers?
To answer this question, we have to look into the motivation for constructing the various number systems: integers, rationals, irrationals, complex numbers… The construction of these number systems was motivated by our need to solve polynomial equations of various degrees (linear, quadratic, cubic…). And the Fundamental Theorem of Algebra says:
Every polynomial of degree $n$ in the variable $x$ with rational (indeed, complex) coefficients has exactly $n$ solutions in the complex number system, counted with multiplicity.
But here is a catch. The complex numbers which satisfy no polynomial equation with rational coefficients (called transcendental numbers) vastly outnumber those which do (called algebraic numbers): the former are uncountable while the latter are countable. And we wish to express the solutions of a polynomial equation (i.e. algebraic numbers) in terms of sums, differences, products, quotients and $m^{th}$ roots of rational numbers (since the coefficients are rational). Therefore sum, difference, product and division are the only four possible arithmetic operations.
My previous statement may lead to a doubt that:
Why isn't taking $m^{th}$ roots an arithmetic operation?
This is because it isn’t a binary operation to start with, since we have fixed $m$. Also, taking $m^{th}$ roots keeps us within the algebraic numbers: if $\alpha$ is a root of the polynomial $p(x)$, then $\alpha^{1/m}$ is a root of $p(x^m)$.
CAUTION: The reverse of $m^{th}$ root is multiplying a number with itself m times and it is obviously allowed. But, this doesn’t make the binary operation of taking exponents, $\alpha^{\beta}$ where $\alpha$ and $\beta$ are algebraic numbers, an arithmetic operation. For example, $2^{\sqrt{2}}$ is transcendental (called Gelfond–Schneider constant or Hilbert number) even though 2 and $\sqrt{2}$ are algebraic.
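As a toy illustration of this closure (my own Python sketch, not part of the original post), consider the rationals, the simplest algebraic numbers: the four arithmetic operations never leave the number system, exactly as claimed.

```python
from fractions import Fraction

a, b = Fraction(3, 7), Fraction(-5, 2)
results = [a + b, a - b, a * b, a / b]

# Every result is again an exact rational: no escape into irrational
# (let alone transcendental) territory under the four operations.
assert all(isinstance(r, Fraction) for r in results)
assert a / b == Fraction(-6, 35)
```

Contrast this with exponentiation: as the Gelfond–Schneider example above shows, $\alpha^{\beta}$ can leave the algebraic numbers entirely.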
• lgbasallote
What are the LaTeX commands for the card-suit symbols, aside from $\diamondsuit$, $\heartsuit$, $\spadesuit$ and $\clubsuit$?
# C++ code to count number of unread chapters
Suppose we have an array of pairs P, where each P[i] = (l, r) gives the first and last page numbers of chapter i, and we have another number k. We are reading a book with n chapters, such that each page belongs to exactly one chapter and each chapter contains at least one page. We have read some pages, and page k is the first page we have not read. We have to find the number of chapters we have not yet completely read.
So, if the input is like P = [[1, 3], [4, 7], [8, 11]]; k = 4, then the output will be 2, because we have completely read the first chapter, and the remaining two chapters are not yet finished.
## Steps
To solve this, we will follow these steps −
n := size of P
for i := 1 to n, do:
    if k >= P[i - 1, 0] and k <= P[i - 1, 1], then:
        return n - i + 1
return 0
## Example
Let us see the following implementation to get better understanding −
```cpp
#include <bits/stdc++.h>
using namespace std;

int solve(vector<vector<int>> P, int k) {
    int n = P.size();
    // Find the chapter containing page k; that chapter and every
    // chapter after it are not completely read.
    for (int i = 1; i <= n; i++) {
        if (k >= P[i - 1][0] && k <= P[i - 1][1])
            return n - i + 1;
    }
    return 0;
}

int main() {
    vector<vector<int>> P = { { 1, 3 }, { 4, 7 }, { 8, 11 } };
    int k = 4;
    cout << solve(P, k) << endl;
}
```
## Input
{ { 1, 3 }, { 4, 7 }, { 8, 11 } }, 4
## Output
2
# Globally altering styles
Is there a generally applicable technique which will, for any tag/environment used in a LaTeX document, allow the default styling of the contents of instances of that tag/environment to be tweaked in the preamble?
• For instance, if I decided that all instances of \emph{} in the document should have their contents rendered not only italicised (a common default) but also bold, then what would I do?
• Similarly, if I decided that all instances of \begin{verbatim}...\end{verbatim} should have their contents rendered not only in a monospaced font (a common default) but also on a grey background, then what would I do?
But more importantly than simply explaining how to handle these two example cases, please can you outline a procedure a LaTeX user can follow in all cases in order to be able to change the styling of any given tag/environment.
-
These are really very different questions. There is no procedure that can work in all cases. – egreg Jul 4 '12 at 17:08
Just one example: \section has to do much more than a <head> tag in HTML, where, for instance, there's no problem with page breaking, widows and orphans. Comparing (La)TeX and HTML is really wrong. – egreg Jul 4 '12 at 17:42
OK, wrong example. But still there is no similarity between LaTeX and HTML, other than both use a kind of markup. – egreg Jul 4 '12 at 17:48
That's a pretty big similarity! Anyhow, I provided the analogy merely in case it helped anyone here see what I was getting at, not for any other reason. If it didn't help you, that's a pity, but it may yet help someone else. My question still stands. – sampablokuper Jul 4 '12 at 17:50
@egreg: In some sense one could argue that from a users point of view, LaTeX is similar to plain HTML 4 (without CSS (and hence <style>), JavaScript, etc.). Except of course that LaTeX automates many tasks (toc, ...). – Caramdir Jul 4 '12 at 18:38
Let's tackle only the first example. LaTeX declares \emph with
\DeclareTextFontCommand{\emph}{\em}
so we need to look up what \em does:
\DeclareRobustCommand\em{%
\@nomath\em
\ifdim \fontdimen\@ne\font >\z@
\upshape
\else
\itshape
\fi}
The first instruction, \@nomath\em, raises an error if the command is used in math mode; otherwise it does nothing. The conditional checks whether the current font is slanted (in which case \fontdimen1\font is positive); if so, it issues \upshape, otherwise \itshape.
So, if you want \emph to choose "boldface italic" in an upright context and "upright boldface" in an italic context you have to say
\DeclareRobustCommand\em{%
\@nomath\em
\ifdim \fontdimen\@ne\font >\z@
\upshape\bfseries
\else
\itshape\bfseries
\fi}
Is this a general method? No.
I won't even attempt the changes necessary to print verbatim material on a grey background: the fancyvrb package provides such a feature, and its code is very complicated.
In LaTeX you can't simply hand styling off to a rendering engine the way HTML hands it to the browser. The two models are completely different.
-
I'm grateful for this, but although it addresses one of my examples, it doesn't really answer my question. How did you establish what LaTeX declares \emph (or any other command) to be? Why was this your first step? How did you look up what \em (or any other command) does? Where should the third code snippet you gave be put in order to apply to all instances of \emph in the present document? These are the sorts of things my question is getting at. – sampablokuper Jul 4 '12 at 18:11
@sampablokuper To look things up: \show\emph (etc.) in your document or texdoc source2e. – Caramdir Jul 4 '12 at 18:30
As egreg says, there is no general way; each case needs to be considered separately. So for \emph you could redefine the command in the preamble:
## Note:
• As egreg pointed out, this requires the \LetLtxMacro macro from the letltxmacro package instead of \let, since \emph is declared with \DeclareRobustCommand.
• As Caramdir pointed out, this may or may not be what is desired for nested uses of \emph{}. This redefinition yields alternating bold italic and bold upright, whereas the original definition yields alternating non-bold italic and non-bold upright, as shown above.
## Code:
\documentclass{article}
\usepackage{letltxmacro}
\LetLtxMacro\OldEmph\emph
\renewcommand{\emph}[1]{\textbf{\OldEmph{#1}}}%
\begin{document}
\OldEmph{original emph}\par
\emph{modified emph}\par
\emph{modified emph with a \emph{nested emph}}
\end{document}
-
\emph is basically defined with \DeclareRobustCommand and it's quite dangerous to redefine it in this way (see the documentation of letltxmacro. – egreg Jul 4 '12 at 18:07
@egreg, but if letltxmacro were used instead of let then this approach would generally be OK? Or would there still be a danger - and if there is still a danger, then what would it be? – sampablokuper Jul 4 '12 at 18:19
@egreg: Thanks, I was mistaken in thinking that \LetLtxMacro was only required when there were optional parameters. Have updated the solution. – Peter Grill Jul 4 '12 at 18:22
This of course might or might not do what you intend when nesting \emphs. – Caramdir Jul 4 '12 at 18:35
# Kaplansky for Projections
Let $H$ be a Hilbert space, and $A$ a $C^*$-subalgebra of $B(H)$ (the bounded operators on $H$). Let $B$ be the strong-operator closure of $A$, so that in particular $B$ is a von Neumann algebra.
According to the Kaplansky density theorem:
The … in the unit ball of $A$ is s-o dense in the … in the unit ball of $B$, where … is:
• unitaries
Does it hold that $\operatorname{Proj}(A)$ is strongly dense in $\operatorname{Proj}(B)$ (or even weakly dense)?
I.e., does the Kaplansky density theorem hold for projections (or does a weaker version of it)?
If not, why not?
-
No. E.g., let $A=C[0,1]$ acting as multiplication operators on $L^2[0,1]$. The only projections in $A$ are $0$ and $1$. The strong closure contains multiplications by the characteristic functions of all measurable subsets of $[0,1]$.
What about if $B$ is, say, the hyperfinite $II_1$ factor? – RS8 Nov 21 '12 at 17:46
RS8: In the example in my answer, the set of partial isometries in $A$ is not strongly dense in the set of partial isometries in $B$. I don't know about weak density. – Jonas Meyer Nov 21 '12 at 21:50
The ScienceMedia Network
#### SPH simulation of a particle impact on a ductile target
Author: Christian Nutto
The sequence shows an impacting body on a ductile target made of Al6061-T1. The impacting body is assumed to be ideally hard. The impact velocity is $45 \mathrm{m/s}$ and is tilted $51^\circ$ from the horizontal axis. The spacing of ... more
#### The future in a new light - Futuris
Author: Euronews Knowledge
From the Sun, the Moon and flames, to ultra-modern semiconductors, human history is inherently bound to light. In this special edition of Futuris, we meet scientists and engineers who are making our futures brighter. Made by eu ... more
#### Before the Big Bang: What Happened? - Space
Author: Euronews Knowledge
The universe was born 13.7 billion years ago with the Big Bang. But what was there before? Scientists are starting to get an answer thanks to the time-travelling Planck mission. Made by euronews, the most watched news channel i ... more
#### Synthesis of colloidal PbS nanosheets
Synthesis of colloidal PbS nanosheets in a three-neck flask as it has been performed e.g. in T. Bielewicz et al. "Tailoring the height of ultrathin PbS nanosheets and their application as field-effect transistors", Small 11 (2015) ... more
#### Why is Earth's magnetic shield weakening? - Space
Author: Euronews Knowledge
Earth's magnetosphere is an invisible shield, protecting our planet from harmful solar radiation. Made by euronews, the most watched news channel in Europe, euronews knowledge gives YouTubers amazing access into the scient ... more
#### Brittle fracture in three point bending test
Author: Claas Bierwisch
Simulation of a three point bending test of a silicon crystal. The upper cylinder is pushed downward in a path-controlled trajectory. On contact, the induced stresses on the upper and lower surface are discernible. Tensile stresse ... more
#### Animation of a setup for the synthesis of colloidal nanoparticles
Animation of a setup for the synthesis of colloidal nanoparticles, as it has been used for the synthesis of cadmium selenide nanoparticles e.g. in M. Meyns et al. "Shape evolution of CdSe nanoparticles controlled by halogen compou ... more
#### Angles of repose observed in heap (de-)construction
Author: Claas Bierwisch
In a Discrete Element Method (DEM) simulation, angles of repose are analyzed, revealing static properties of a granular material. The theoretical maximum angle is related to the internal angle of friction of the material, $\phi = ... more
#### Why is it so hard to detect dark matter?
Author: Euronews Knowledge
Made by euronews, the most watched news channel in Europe. euronews knowledge brings you a fresh mix of the world's most interesting know-hows, directly from space and sci-tech experts.
#### Simulation of a tensile test using smoothed particle hydrodynamics (SPH)
Author: Christian Nutto
The video shows the simulation of a tensile test. The tensile specimen is shown in a half-section. The color scheme represents the plastic strain occurring during the strain. The end of the videos shows a stress-strain diagramm o ... more
# When should we use the principles of tidy data by Hadley Wickham, and when should we avoid them?
I have been studying data science for the past 6 months, but I only just came across the principles of tidy data by Hadley Wickham in an article by Jean-Nicholas Hould.
This completely changed my perspective on how I work with data. Not only should I clean the data, the data should also be formatted correctly. It seems pretty obvious when you think about it, but that is beside the point.
I decided to start applying those principles in my data-cleaning workflow; however, I was wondering whether there are times when having tidy data would not be ideal.
When would we not want to "tidy our data"?
Ideally when should we use Tidy Data and when should we avoid it?
Your input to this would be highly appreciated.
Example: imagine a dataset containing $N$ instances, with columns feature1 … featureX and result1 … resultY, where the result columns hold values produced by several methods/parameters. The tidy version would have columns feature1 … featureX plus method (whose values range over result1, …, resultY) and of course result for the result value. The tidy version would contain $N \times Y$ instances, i.e. for each instance it repeats the values of the $X$ features $Y$ times. If $X$ is large, the dataset becomes very large in memory (and also when stored as a file).
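The blow-up in the example is easy to demonstrate. Here is a plain-Python sketch of the wide-to-long reshape (my own illustration with invented column names; in practice a library routine such as pandas' melt does this):

```python
# Wide form: each row has the feature columns plus one column per method.
wide = [
    {"feature1": 1, "feature2": 10, "result1": 0.1, "result2": 0.4},
    {"feature1": 2, "feature2": 20, "result1": 0.2, "result2": 0.5},
]

# Tidy (long) form: one row per (instance, method) pair.
tidy = []
for row in wide:
    for method in ("result1", "result2"):
        tidy.append({"feature1": row["feature1"],
                     "feature2": row["feature2"],
                     "method": method,
                     "result": row[method]})

# N x Y rows: every feature value is duplicated Y times.
assert len(tidy) == len(wide) * 2
assert tidy[0]["method"] == "result1" and tidy[0]["result"] == 0.1
```

The duplication of the feature columns across all $Y$ rows per instance is exactly the memory cost the example warns about.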
# The Variation in Weight of a Body at Different Positions
The weight of a body is maximum $$\text{__________}$$.
# What's a divisor?
## Question:
What is a divisor?
## Mathematical Terminology
In mathematics, as well as in many other subjects, it's important to use correct terminology. There are many commonly-used math terms that you should know, such as quotient, dividend, numerator, denominator, and polygon.
## Answer and Explanation:
In a division problem, a divisor is the number you divide BY. In other words, it's the number of groups you are making. It's found on the bottom of the fraction line, or the number outside of the long division symbol.
In division problems, you have four main parts. The dividend is the number being divided up into groups, the divisor is the number of groups you are making, the quotient is the number of parts in each group (the answer), and the remainder is how many are left over because they can't fill a whole group.
For example, in the problem 243 divided by 24, the divisor would be 24 because you're dividing 243 by that number. The number 243 in this example would be called the dividend; the answer is a quotient of 10 with a remainder of 3.
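In code form (a small illustration of my own), Python's built-in divmod returns the quotient and remainder of that example directly:

```python
dividend, divisor = 243, 24
quotient, remainder = divmod(dividend, divisor)

assert quotient == 10 and remainder == 3
# Check the defining identity: dividend = divisor * quotient + remainder
assert dividend == divisor * quotient + remainder
```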
• Definition for "centumviral"
• Of or related to the centumviri
Let $$\mathfrak{g}$$ be a complex semisimple Lie algebra. Let $$W$$ be its Weyl group.
I would like to know whether $$W$$ is always finite? If so, why?
Yes, it is always finite. Let $$\mathfrak h$$ be a Cartan subalgebra of $$\mathfrak g$$, and let $$\Phi$$ be the root system of $$(\mathfrak g,\mathfrak h)$$. By definition the Weyl group of $$(\mathfrak g,\mathfrak h)$$ is generated by the reflections $$s_\alpha$$ ($$\alpha\in\Phi$$), where $$s_\alpha(v)=v-2\frac{\langle\alpha, v\rangle}{\langle\alpha,\alpha\rangle}\alpha$$ and $$\langle\cdot,\cdot\rangle$$ is the inner product on $$\mathfrak h^*$$ induced by the Killing form. It turns out that each $$s_\alpha$$ preserves $$\Phi$$, and therefore each element of the Weyl group preserves $$\Phi$$. But $$\Phi$$ spans $$\mathfrak h^*$$, so the Weyl group can be seen as a subgroup of the group of permutations of $$\Phi$$, which is finite.
• That $\Phi$ generates $\mathfrak h^*$ is not really needed, right? The main point is that $\Phi$ is finite, hence so is its permutation group, and $W$ is contained therein. – Torsten Schoeneberg Mar 16 at 15:47
• Suppose that our vector space is $\mathbb{R}^2$, that $\Phi=\{(1,0),(-1,0)\}$, and that $W$ is the group of endomorphisms of $\mathbb{R}^2$ spanned by $s(x,y)=(-x+y,-y)$. Then $W$ is infinite, in spite of the fact that $s(\Phi)\subset\Phi$. – José Carlos Santos Mar 16 at 17:39
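To make the finiteness concrete, here is a small Python sketch (my own illustration, not from the thread) that closes the two simple reflections of the root system $A_2$ (the root system of $\mathfrak{sl}_3$) under composition and recovers the 6-element Weyl group, the symmetric group $S_3$:

```python
import math

def reflection(alpha):
    # 2x2 matrix of s_alpha(v) = v - 2<alpha,v>/<alpha,alpha> * alpha.
    ax, ay = alpha
    n = ax * ax + ay * ay
    return ((1 - 2 * ax * ax / n, -2 * ax * ay / n),
            (-2 * ax * ay / n, 1 - 2 * ay * ay / n))

def mul(A, B):
    # 2x2 matrix product.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def key(A):
    # Round entries so floating-point matrices compare reliably.
    return tuple(round(x, 9) for row in A for x in row)

# Simple roots of A2: alpha1 = (1, 0), alpha2 = (-1/2, sqrt(3)/2).
gens = [reflection((1.0, 0.0)), reflection((-0.5, math.sqrt(3) / 2))]

# Close the generating reflections under multiplication.
group = {key(g): g for g in gens}
changed = True
while changed:
    changed = False
    for A in list(group.values()):
        for B in list(group.values()):
            C = mul(A, B)
            if key(C) not in group:
                group[key(C)] = C
                changed = True

assert len(group) == 6   # W(A2) is the symmetric group S3
```

The closure terminates precisely because, as the answer explains, every product permutes the finite root set, so only finitely many matrices can ever appear.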
# Enthalpy Of Neutralization Lab
The heat of neutralization (ΔHn) is the change in enthalpy that occurs when one equivalent of an acid and one equivalent of a base undergo a neutralization reaction to form water and a salt. It is a special case of the enthalpy of reaction. The enthalpies of neutralization of strong acids by strong bases are all similar, because strong acids fully dissociate in water, so in every case the same net reaction occurs: hydrogen ions and hydroxide ions combine to form water molecules.

In this lab, students determine the enthalpy of the neutralization reaction between hydrochloric acid and sodium hydroxide:

HCl (aq) + NaOH (aq) → NaCl (aq) + H2O (l)

Using a coffee-cup calorimeter, the temperature change on mixing the two solutions is measured; before mixing, the two solutions must be at the same temperature. Record the temperature every 15 seconds until 3 or 4 constant readings are obtained. The heat released is computed as Q = m × c × ΔT, where m is the mass of the solution, c is its specific heat capacity (assumed equal to that of water, 4.18 J/(g·°C)), and ΔT is the observed temperature change; the density of the solutions is assumed to be 1.00 g/mL. The enthalpy of neutralization is then ΔHn = −Q/n, expressed in kJ per mole of water formed; a negative value indicates an exothermic reaction. The amounts of reactants influence ΔT and the heat exchanged, but the value of the molar enthalpy change is constant.

To account for the heat absorbed by the calorimeter itself, its heat capacity must first be determined. It is impossible to eliminate heat losses entirely, and heat lost to the calorimeter and the surroundings is the main source of experimental error. Hess's law can also be applied: for example, the enthalpy of formation of MgO can be determined from the enthalpies of reaction of Mg and MgO in excess acid together with the molar enthalpy of formation of water (a known constant, −285.8 kJ/mol).

Safety: solid NaOH is a severe contact hazard and a serious danger to the eyes, so wear goggles at all times; 2 mol/dm³ hydrochloric acid is an irritant. Rinse off any spilled solutions with water or neutralizer. Before coming to lab, read the lab thoroughly and answer the pre-lab questions.
Our Team; Frequently Asked Questions; News; Radio Show; Join Our Mailing List. Materials Hot Plate Phenolphthalein Evaporating dish Graduated cylinder Dropper HCl Crucible tongs NaOH Appropriate PPE Procedure 1. 0 mL Concentration NaOH = 1. The heat of reaction associated with a neutralization reaction is referred to as the heat of neutralization. Heat is often considered, inaccurately, as a. Sodium Hydroxide (NaOH) Procedure. HCl is needed by the enzyme pepsin to catalyze the digestion of proteins in the food we eat. _____ What color does the phenolphthalein turn in a base? _____ What 2 products are always the results in a neutralization reaction? Write the equation for the reaction between hydrochloric acid (HCl) and sodium hydroxide (NaOH). Chapter 4 Thermochemistry Heat Of Neutralization ITeach - Chem. Change in enthalpy is used to measure heat flow in calorimetry. Calorimetry is a technique used to measure the amount of heat energy evolved or absorbed in some chemical process. Report these data to your instructor by adding your data to the data spreadsheet for the acid(s) you studied. This energy occurred in two parts: the heat change that we were able to measure for the solutions, as well as the heat energy that was lost to warming up the calorimeter itself by 7. Why? Discussion. DATA volume of HCl used = 10. heat and at constant pressure it is defined as heat of reaction or enthalpy change (ΔH). When you enter the lab, switch on the exhaust fan and make sure that all the chemicals and reagents required for the experiment are available. Thermochemistry and Hess's Law. ’s profile on LinkedIn, the world's largest professional community. 2015–present Senior Instructor II, University of Oregon. Using the enthalpies determined in Part 1, calculate the concentration of unknown acid and. 
Lab Session 9, Experiment 8: Calorimetry, Heat of Reaction Specific heat is an intensive property of a single phase (solid, liquid or gas) sample that describes how the temperature of the sample changes as it either absorbs or loses heat energy. Purpose: Reactions in aqueous solution in which protons are transferred between species are called acid/base reactions. LAB B: Molar heat of Neutralization. The students will also be able to determine the enthalpy of reaction using a coffee cup calorimeter. It will be necessary to measure the calorimeter constant of the calorimeter before we can do this. The cylinder is. 5 mol/L solution of HCl(aq). Soap is precipitated as a solid from the suspension by adding common salt to the suspension. Calculate the heat capacity of the system. Heat of neutralization The enthalpy change that takes place when one gram equivalent of an acid is completely neutralized with one gram equivalent of base in dilute solution. 0 mL volume of KOH in graduated cylinder (at beginning) = 15. 5°C is added to 50 cm 3 of 2. DATA volume of HCl used = 10. The heat flow into the reaction surroundings (solution), qsurroundings, from the neutralization reaction can be calculated using the following equation where m is the mass of the calorimeter contents, ∆T is the change in temperature, and Cs is the specific heat of the contents. Prelab Questions. Also, the enthalpy of reaction for the Exp. The heat capacity of the calorimeter (C cal) is 78. Follow the sample calculation to calculate ∆H for the reaction of NaOH and HCl. Get an answer for 'How does the concentration of an acid affect the amount of heat produced when it reacts with a base?' and find homework help for other Science questions at eNotes. This heat is called the heat of neutralization or the heat of reaction. Enthalpy change is the difference between the energy contents of the products and reactants when a reaction occurs. 
Determine the heat capacity of calorimeter and heat of neutralization heat of solution heat of redox reaction Use Hess’ law (the enthalpies of reactions are additive) to calculate the heat of formation ( H f) for MgO Skills Use of digital thermometer Setup a simple calorimeter Operation of graduated cylinder. Thermochemistry is the branch of chemistry relating to the reciprocal relationship of heat with chemical reactions or physical condition changes. These metrics are regularly updated to reflect usage leading up to the last few days. In such instances, the reaction either liberates heat (exothermic) or absorbs heat (endothermic). 00 molar NaOH are available. In order to prevent heat loss, we cover the cup by lid and put the polystyrene into a beaker. And now let me add the other part of the equation. Background: Bacterial endotoxin is a potently inflammatory antigen that is abundant in the human gut. However, what makes a good fuel?. 8, 2016; the entire contents of which is incorporated herein by reference. The specific heat is different at different temperatures but for purposes of this lab we will assume that it is in fact constant over the temperature range we will encounter. heat of neutralization lab answers. HCl (aq) + NaOH (aq) → NaCl (aq) + H 2O ( ) Using a coffee-cup calorimeter, you will deter-mine the enthalpy change for this reaction. Joseph has 5 jobs listed on their profile. 700 mol/L NaOH was mixed in a calorimeter with 25. Heat of neutralization between different strength of Acid and Base: Theory for the heat of neutralization: where QNeutralization is quantity of heat, m is the mass of the solution ,and S. Enthalpy of Neutralisation or Heat of Neutralization Chemistry Tutorial Key Concepts Neutralisation , or neutralization, is the name given to the reaction that occurs between an Arrhenius acid and an Arrhenius base. 0 g) of the acid to your base from the previous lab on NaOH solutions and record the initial temperature. 
DISCUSSION There are times in the lab when we want to know how much heat is given off or absorbed during a reaction. txt) or read online for free. Define neutralization reaction. Thermodynamics: Enthalpy of Reaction and Hess's Law Judy Chen Partner: Mint Date: 13 Sept, 2011 Purpose: The purpose of this lab is verify Hess's law by finding the enthalpies of the reactions; NaOH and HCl, NH 2 Cl and NaOH, and NH 3 and HCl. 500L solution of 1. The reaction studied will be the heat of neutralization, which is the enthalpy change produced when an acid and a base react to form water. degree Celsius. 1 °C Continue to record every 15 seconds until you have 3 or 4 constant temperature points. If, for some reason, you are not able to attend lab, you should contact the instructor before the lab is due to begin. Chem1 General Chemistry Reference Text 3 Introduction to acid-base chemistry † 3 Neutralization Just as an acid is a substance that liberates hydrogen ions into solution, a base yields hydroxide ions. degree Celsius. Main Experiment Menu; Introductory Information. Hence the acid will behave as a mixture of three monoprotic acids. The resultant solution records a temperature of 40. MSDS (the rest listed on review): a. bomb calorimeter: A bomb calorimeter is a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. com - Read reviews, citations, datasheets, protocols & more. Thermodynamics: Understanding the Difference between Heat of Reaction, Temperature Changes, and Enthalpy of Reaction v110414 Objective: The student will be able to differentiate between the concept of heat and temperature. That’s why most neutralizers are very weak: to slow the reaction. Heat of Neutralization. An acid is a substance that forms hydrogen ions (H+ ) when placed in water. 
In order to define the thermochemical properties of a process, it is first necessary to write a thermochemical equation that defines the actual change taking place, both in terms of the formulas of the substances involved and their physical states. Unlike specific heat, the heat capacity does not account for the mass of the material. See the complete profile on LinkedIn and discover Joseph. The heat gained by the resultant solution can be calculated using. In order to determine the amount of heat absorbed by the calorimeter, we must first determine the heat capacity of the calorimeter. Enthalpy is a measure of the total heat content of a system, and is related to both chemical potential energy and the degree to which electrons are attracted to nuclei in molecules. Enthalpy changes of neutralization are always negative - heat is released when an acid and and alkali react. Neutralize, solidify & absorb acid spills. 0 M HCl in a graduated cylinder (record to the nearest 0. If energy, in the form of heat, is liberated the reaction is exothermic and if energy is absorbed the reaction is endothermic. Determine the mass of 100 mL of solution for each reaction (the density of each solution is 1. 3 HCl(aq) + Fe(OH) 3 (s) → 3 H 2 O(ℓ) + FeCl 3 (aq) even though Fe(OH) 3 is not soluble. degree Celsius. Creating indoo. LAB REPORT ON VERIFICATION OF HESS'S LAW Our purpose of doing this lab was to prove the Hess's law correct. The cylinder is. In order to find the heat of neutralization of the reaction though, we first need the specific heat of NaCl which is the next part of the experiment. Heat of Fusion is a thumping action packed shoot-em-up computer game. Helpful jobs and Interviews preparations solved MCQs type question answers. Hughbanks Calorimetry Reactions are usually done at either constant V (in a closed container) or constant P (open to the atmosphere). and Data Sheets. 
Thermodynamics: Enthalpy of Reaction and Hess’s Law Judy Chen Partner: Mint Date: 13 Sept, 2011 Purpose: The purpose of this lab is verify Hess’s law by finding the enthalpies of the reactions; NaOH and HCl, NH 2 Cl and NaOH, and NH 3 and HCl. The heat of reaction associated with a neutralization reaction is referred to as the heat of neutralization. a) the heat capacity of a calorimeter b) the heat of fusion of ice c) the heat of neutralization d) the enthalpy of hydration of magnesium sulfate. Heat of Neutralization of HC-NaOH 1. In fact, the act of dissolving an acid in water is an acid-base reaction as shown to the left. LeChatelier's Principle Lab; Reactivity of metals and Non-metals; Acidbase Inquiry lab; Molar mass of a solid acid; Electrolytic Reactions; Temperature of a bunsen burner; Flame Test; Chromatography; Spectroscopy; Enthalpy of Neutralization; Construction of a Galvanic Cell; lab Inquiry Test; Mass of copper in a brass screw; Concentration of acid in various beverages. Molar Heat of Neutralization (or Molar Enthalpy of Neutralization) The amount of heat transferred during a chemical reaction is called the heat of reaction, an extensive property that is proportional to the amount of the limiting reactant used. The magnitude of the heat change is determined by the particular reaction of interest, as well as by the amount of reactants consumed. I have to measure the enthalpy of a neutralization reaction between NaOH and HCl as well as find the concentration of NaOH the following ways. During the second lab period, data will be collected to calculate the Cp using the reaction of NaOH with HCl (two trials). 8$ to $\pu{22. Large concentrations of hydrogen ions and hydroxide ions cannot coexist in solution, because the neutralization reaction will occur. 
A fantastic professional lab report is correctly organised having a title site, an opening section, here are the fabrics, methodology, statistics presentation and examination, success given in dining tables and graphs, talks, final thoughts and clearly formatted personal references. The heat effect for a chemical reaction run at constant pressure (such as those run on the bench. The unit of enthalpy change is Kilojoule per mole (KJ mol-1). The temperature of the solution drops from$24. • Measure the heat capacity of a Styrofoam cup calorimeter using the heat of neutralization of a strong acid with a strong base • Graph your temperature vs time data to find temperature change when solutions are mixed PreBLaboratoryRequirements. 5 mol/L solution of HCl(aq). What is the molar enthalpy of neutralization per mole of HCl?. Home → Standard Enthalpy of Neutralization It is the enthalpy change accompanying the complete neutralization of an acid by a base or vice versa involving combination of 1 mol of H+ ions (from acid) and 1 mol of 011 ions (fro"} base) to form 1 mol of H p(l) in dilute aqueous solutions. MY formal lab of the enthalpy of Neutralization. Thermodynamics: Enthalpy of Reaction and Hess’s Law Judy Chen Partner: Mint Date: 13 Sept, 2011 Purpose: The purpose of this lab is verify Hess’s law by finding the enthalpies of the reactions; NaOH and HCl, NH 2 Cl and NaOH, and NH 3 and HCl. use Hess’s Law to estimate the enthalpy change for a reaction. In contrast, when. The heat of reaction to be examined in Part II of this experiment is the heat of neutralization (the heat. Thermochemistry: Calorimetry and Hess’s Law Some chemical reactions are endothermic and proceed with absorption of heat while others are exothermic and proceed with an evolution of heat. References. 03 cal / gm ˚C. 
Enthalpy of Neutralisation or Heat of Neutralization Chemistry Tutorial Key Concepts Neutralisation , or neutralization, is the name given to the reaction that occurs between an Arrhenius acid and an Arrhenius base. The heat (qrxn) for this reaction is called the heat of solution for ammonium nitrate. Because the temperature change is greater, assuming the mass is constant, the amount of heat must be greater. The enthalpy of neutralization for the reaction of a strong acid with a strong base is 256 kJ/mol water produced. Measurement of Heat of Reaction: Hess' Law Enthalpy Heat is associated with nearly all chemical reactions. This restores the pH of the soil by neutralizing the effect of excess acids and bases in the soil. Specific heat will be denoted as a lower-case letter ‘s’. 47 C; the final temperature is 23. Neutralization is the act of making an acidic or basic substance chemically neutral, meaning a pH of 7. Therefore, the values are generally expressed under standard conditions of temperature (298K) and pressure (1 atm. This is because there is a difference in the energy between the substances that are reacting, and the products of the reaction. The soap formed remains in suspension form in the mixture. Conclusion From the above six neutralization, we can calculate the enthalpy change of neutralization by (m1c1 + m2c2) * Temp. 0 mL V (H2SO4) = volume of H 2 SO 4(aq) added to achieve neutralisation = 25. 📚 Titration Lab of Naoh and Khp - essay example for free Newyorkessays - database with more than 65000 college essays for studying 】. Cold Pack NH 4 NO 3 + H 2 O+ Heat ⇾ NH 4 + + NO 3-+ H 2 O In this lab, aqueous sodium hydroxide and aqueous hydrochloric acid will neutralize each other and heat will be released. If the acid and base are both very strong (such as concentrated hydrochloric acid or sodium hydroxide), a violent reaction will occur. Creating indoo. 100 mol NaCl = - 2. Chemicals and Apparatus: 1. 
of moles of water formed ok, so u knw the specific heat which is 4. LAB B: Molar heat of Neutralization. 9 kJ/mol\$ Third, you need to approximate that the solution has the heat capacity of water, which is 4. is the specific heat capacity of the solution, and ∆T is the temperature change observed during the reaction. Points to Remember while Performing the Experiment in a Real Laboratory: Always wear lab coat and gloves when you are in the lab. A substance that donates protons (hydrogen ions); any compound that produces hydrogen ions in a solution (H +); a substance with a pH of less than 7 on the pH scale (Lesson 14, Lab 2) acid rain: Rain or any other type of precipitation that is abnormally acidic as a result of air pollution (Lesson 14) actinide series. Determination of enthalpy changes by calorimetry Objectives The aims of the experiment are: (i) to determine the enthalpy change which accompanies the melting of a solid, and (ii) to determine the enthalpy change for the formation of a chemical compound by using calorimetric data and applying Hess' Law. Learn vocabulary, terms, and more with flashcards, games, and other study tools. DISCUSSION There are times in the lab when we want to know how much heat is given off or absorbed during a reaction. However, once the enthalpy of neutralization is known, the amount of heat released per mole can be calculated using the formula ∆HN = QN / n [3] The enthalpy of solution can be determined by performing an experiment in which a salt is dissolved into water. The net ionic equation for any strong acid and strong base neutralization is H+ + OH- = H2O. It is a special case of the heat of reaction. Here we go. 2 Coffee Cup Calorimetry I - Heat of Neutralization Subjects: Thermodynamics, enthalpy, calorimetry Description: Using a coffee cup calorimeter, the heat of neutralization of HCl and NaOH is measured. Herein, we construct an infectious clone of CHIKV and an eGFP reporter CHIKV (eGFP-CHIKV) with an. 
Under such conditions, the total enthalpy is equal to the change in heat (ΔH) of the reaction. Heat of Neutralization: Lab Report In part A of this lab I determined the heat capacity of a calorimeter made out of two Styrofoam cups nesting together with a cardboard top containing a hole in the middle. The independent variable is the amount of substance and the actual substance used in the reaction. INTRODUCTION The heat absorbed or released during a chemical reaction is equal to the enthalpy change (∆H) for the reaction, at constant pressure. In that case, it is referred to as the heat of vaporization, the term 'molar' being eliminated. As normally measured in a lab at this level, these are far less accurate than the simple solution reactions above. • Become familiar with the observable signs of chemical reactions. If solutions with higher concentrations are used, extra caution is advised because neutralization reactions are exothermic. It is a special case of the enthalpy of reaction. If possible a lid should be used. The heat liberated from the neutralization of sulfuric acid (battery acid) is very high and can result in a temperature rise of over 100 0 C (212 0 F). Background: Bacterial endotoxin is a potently inflammatory antigen that is abundant in the human gut. heat of neutralization Set up the Styrofoam cup calorimeter (use a 250 mL beaker to stabilize!), making sure that the thermometer does not touch the bottom or sides of the cup. This is a standard calculation that is dependent on the acid and base used in the reaction to produce the water and salt. Define the word. HNO 3 (aq) + KOH (aq) → H 2 O (l) + KNO 3 (aq) ΔH = -57. measure the enthalpy of a reaction in the laboratory using temperature data. The enthalpy of reaction depends upon the temperature and pressure of reaction. The unit of enthalpy change is Kilojoule per mole (KJ mol-1). • Writing net ionic equations. 0 mol/L volume NaOH = 50. 
184 J = 1 cal (exactly) You have probably “counted calories” in your diet; the nutritional Calorie (Cal) is equal to 1000 cal (or 1 kcal). There are two types of enthalpy changes exothermic (negative enthalpy change) and endothermic (positive enthalpy change). Because we are concerned with the heat of the reaction and because some heat is absorbed by the calorimeter itself, in the first part of this lab, we will determine the heat capacity of the calorimeter. , by Brown, LeMay, & Burstein. DATA volume of HCl used = 10. Modifications - use 1 mol/L HCl, and 3 mol/l HBr.
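The Q = m × c × ΔT calculation can be sketched numerically. The volumes, concentrations, and temperature rise below are hypothetical example data, not measurements from this lab:

```python
# Heat of neutralization from coffee-cup calorimetry data.
# All numbers below are hypothetical example data, not lab measurements.

def heat_of_neutralization(v_acid_mL, v_base_mL, conc_mol_L, delta_T,
                           density=1.00, c=4.18):
    """Return (q in J, delta_H in kJ per mole of water formed)."""
    mass = (v_acid_mL + v_base_mL) * density          # g, assuming ~1.00 g/mL
    q = mass * c * delta_T                            # J absorbed by the solution
    n_water = min(v_acid_mL, v_base_mL) / 1000 * conc_mol_L  # mol, 1:1 stoichiometry
    delta_H = -q / n_water / 1000                     # kJ/mol; exothermic => negative
    return q, delta_H

# 50.0 mL of 1.0 M HCl + 50.0 mL of 1.0 M NaOH warming by 6.8 °C:
q, dH = heat_of_neutralization(50.0, 50.0, 1.0, 6.8)
print(f"q = {q:.0f} J, dH_n = {dH:.1f} kJ/mol")  # q = 2842 J, dH_n = -56.8 kJ/mol
```

The result lands close to the approximately −57 kJ/mol expected for a strong acid/strong base pair; heat lost to the calorimeter would make the measured magnitude somewhat smaller.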
Find repeat regions on 100 eukaryotic assemblies
Question (Chvatil, 18 months ago):
Hello everyone, I have a hundred genomes and I'm looking for repeated regions along these assemblies.
First of all I thought of using RepeatModeler, but as those of you familiar with it will know, it is not well-optimized software and requires a lot of resources and computing time, which is not feasible in my case.
So I would like to know whether any of you have an alternative solution that is faster and more practical on a large number of insect genomes.
I thought, for example, of a blastn approach, using the assemblies as queries against a nucleotide database of transposable elements (TEs), which would perhaps be less sensitive but much faster. The problem is that I can't find a TE database representative of eukaryotes (RepBase is not free).
Thanks a lot.
Tags: repeat, assembly
Answer (colindaven, 18 months ago):
You can have a look at MAKER for annotation. https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-12-491
The Dfam database might help on the data side, but I don't know whether it covers your insects.
https://dfam.org/
Perhaps for large scale projects like this you need to apply for compute time on an available cluster.
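As a sketch of the blastn idea from the question: after searching each assembly against a TE library with `blastn -outfmt 6`, approximate repeat regions can be obtained by merging overlapping hit intervals on each contig. The sample rows and the gap threshold below are illustrative, not real data:

```python
from collections import defaultdict

def merge_repeat_hits(blast_tab_lines, max_gap=0):
    """Merge overlapping/adjacent query intervals from BLAST -outfmt 6 rows.

    Returns {qseqid: [(start, end), ...]} with 1-based inclusive coordinates.
    Columns 7 and 8 of tabular output are qstart and qend.
    """
    hits = defaultdict(list)
    for line in blast_tab_lines:
        f = line.rstrip("\n").split("\t")
        qseqid, qstart, qend = f[0], int(f[6]), int(f[7])
        hits[qseqid].append((min(qstart, qend), max(qstart, qend)))

    merged = {}
    for qseqid, ivals in hits.items():
        ivals.sort()
        out = [list(ivals[0])]
        for s, e in ivals[1:]:
            if s <= out[-1][1] + max_gap + 1:   # overlaps or touches previous interval
                out[-1][1] = max(out[-1][1], e)
            else:
                out.append([s, e])
        merged[qseqid] = [tuple(iv) for iv in out]
    return merged

# Illustrative -outfmt 6 rows (qseqid sseqid pident length mism gapo qstart qend ...):
rows = [
    "contig1\tTE_A\t91.2\t100\t8\t1\t100\t199\t1\t100\t1e-30\t150",
    "contig1\tTE_B\t88.0\t120\t12\t2\t150\t260\t1\t115\t1e-25\t130",
    "contig1\tTE_A\t95.0\t90\t4\t0\t400\t489\t10\t99\t1e-40\t170",
]
print(merge_repeat_hits(rows))  # {'contig1': [(100, 260), (400, 489)]}
```

This only approximates what RepeatMasker-style tools do (no defragmentation or family assignment), but it is cheap enough to run over a hundred assemblies.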
# Tag Info
15
Your 102-digit number is two digits more than the first RSA challenge, RSA-100, which has 330 bits. This can be easily achieved with existing libraries and services: CADO-NFS: http://cado-nfs.gforge.inria.fr/ ; an NFS factoring guide: http://gilchrist.ca/jeff/factoring/nfs_beginners_guide.html ; Factoring as a service: https://seclab.upenn.edu/projects/faas/ . The Factoring as a ...
12
There is no proof that integer factorization is computationally difficult and, similarly, there is no proof that the RSA problem is difficult.
The RSA problem. The RSA problem is finding $P$ given the public key $(n,e)$ and a ciphertext $C$ computed with $C \equiv P^e \pmod n$.
Factoring $\implies$ the RSA problem. This is the easiest part. If ...
12
No, it's not proved that solving the RSA problem [that is, finding $x$ from the value of $x^e\bmod n$ for an unknown random integer $x$ in the interval $[0,n)$, and $(n,e)$ a proper RSA key] is equivalent to factoring. It's even widely believed that this does not hold, in particular for $e$ of fixed magnitude (as used in practice). Trivially, the ability to factor implies ...
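The easy direction, that factoring solves the RSA problem, can be illustrated with a toy example (tiny, insecure parameters chosen only for illustration): knowing $p$ and $q$ lets one compute the private exponent and recover $x$ from $x^e \bmod n$.

```python
from math import gcd

# Toy RSA parameters: far too small to be secure, for illustration only.
p, q, e = 61, 53, 17
n = p * q                                       # public modulus, 3233
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # Carmichael lambda(n) = lcm(p-1, q-1)
d = pow(e, -1, lam)                             # private exponent, from the factorization

x = 1234                     # unknown random integer in [0, n)
c = pow(x, e, n)             # the "RSA problem" instance: recover x from c and (n, e)
assert pow(c, d, n) == x     # knowing the factors of n made the problem easy
print(d, pow(c, d, n))       # 413 1234
```

The converse, recovering the factorization from the ability to compute $e$-th roots, is exactly what is not known to hold in general.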
11
might have the terminology wrong when I say "GF(2) polynomial multiplication" You are thinking of multiplication in the ring of binary polynomials, that is, polynomials with coefficients in the Galois Field with 2 elements. That set is noted $GF(2)[x]$. Its addition reduces to XOR of the coefficients of equal weight. Its multiplication is called ...
6
Although this might not be the solution you're looking for, the Coppersmith theorem offers a simple answer to this. The (general) Coppersmith theorem states: let $f(x)$ be a monic univariate polynomial of degree $d$ with coefficients modulo a positive integer $n$. One can find all integers $x$ such that $|x| \le n^{\beta^2/d}$ and $\gcd(f(x), n) \ge n^{\beta}$ ...
5
I think the question really is: why don't cryptographic libraries generating RSA keys check whether $p$ and $q$ are such that $N$ would be easy to factorize by Lehman's method? That's because the probability of this is negligible. One way to prove this is to establish that Lehman's method overall has cost $O(N^{1/3})$ for a random distribution of $p$ ...
3
I don't understand why this is necessary if $p$ and $q$ are known during generation. Because rsakpv1-basic may be run by something that's not the key generation process; it is there to allow this second-party entity to validate things. We generally keep $p$ and $q$ in the private key (along with the other CRT parameters); 800-56B is apparently envisioning ...
3
All is solved in the comments, but I thought this would be a good opportunity to tell a real story. I was once called urgently by a company that I consulted for due to a bug discovered by their QA department. When they encrypted and decrypted with plain RSA, they would sometimes not get the same input back. The problem was simple: they were using RSA1024 at ...
3
In the original Bellcore attack, the attacker needs to obtain a valid signature and a signature where the computation of one of the coefficients is faulty. The exact nature of the fault does not matter, as long as it affects one of the exponentiations. Therefore it doesn't matter how the coefficients are calculated: blinding has no impact on this attack. In ...
2
A small comment: the Damgård-Fujisaki commitment scheme, which you are referring to, does not depend on the strong RSA assumption. If you instantiate it (for example) over an RSA group, it is perfectly hiding, and binding under the factorization assumption. However, the soundness of the zero-knowledge protocol for proving relations between integers committed ...
2
Using for example cado-nfs, you can find the factorization (~5 min using 32 cores) as 51700365364366863879483895851106199085813538441759 * 3211696652397139991266469757475273013994441374637143
2
Why is that so? Well, we have $m^e \equiv m \pmod n$ if and only if both of the following hold: $$m^e \equiv m \pmod p$$ $$m^e \equiv m \pmod q$$ We know (because of reasoning you accepted) that the number of solutions to the first equation (for $0 \le m < p$) is $\gcd(p-1, e-1) + 1$; we can write out the list as $m_0, m_1, \ldots, m_{k-1}$ (for $k = \gcd(p-1, e-1)$) ...
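The count $\gcd(p-1, e-1) + 1$ of solutions to $m^e \equiv m \pmod p$ quoted in the last excerpt is easy to confirm by brute force for small primes (the parameters below are illustrative):

```python
from math import gcd

def count_fixed_points(p, e):
    """Number of m in [0, p) with m**e congruent to m mod p."""
    return sum(1 for m in range(p) if pow(m, e, p) == m)

# For each small prime p and exponent e, the brute-force count
# matches gcd(p-1, e-1) + 1 (the +1 accounts for m = 0).
for p, e in [(11, 3), (13, 5), (31, 7)]:
    assert count_fixed_points(p, e) == gcd(p - 1, e - 1) + 1
print("counts match gcd(p-1, e-1) + 1")
```

The nonzero solutions are exactly the elements whose multiplicative order divides $e-1$, of which a cyclic group of order $p-1$ has $\gcd(p-1, e-1)$.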
1
The probabilistic algorithm has (at least part) of it's runtime that follows an approximately geometric distribution. So it can sometime take a long time. In some applications, that's an issue: for excellent reasons, there's almost always some finite timeout to any process, often determined experimentally. Geometric distribution of execution time is a tried ...
1
Given $C$, $M$ and all $k$ public keys, can an attacker tell with significant probability which public key was used to encrypt $M$ giving $C$? Well, he can eliminate some of the possibilities (which means with $k=2$, he has a decent chance at finding the correct one). There are two observations he can use: He can eliminate all public keys $K_i$ for which ...
1
The best methods are able to specify 2/3 of the bits of an RSA modulus (see Joye, RSA moduli with a predetermined portion: techniques and applications) and it is suggested this could save bandwidth. Although the paper has only recently been published, it has been in pre-print form since 2008 and no one has suggested any direct attacks (Coppersmith's attack ...
1
Consider the similar RSA key generation process: Select a random odd value $z$ between 1 and $2^\ell - 1$; choose $p$ as usual, fix the lower $\ell$ bits of $q$ to $z p^{-1} \bmod 2^\ell$ and choose $q$ with this constraint. Now, this generates RSA keys with the same asymptotic [1] distribution as the standard RSA key generation process. Now, the only ...
1
Are there any cryptographic primitives/protocols allowing the sender to signal to the recipient faster that the message is intended for them, yet not reveal the true recipient to everyone else and allow them to quickly skip trying to decrypt the message? It doesn't appear that the problem of recognizing the message (without leaking who the message is for) ...
1
Let's start with a quick recap of how the RSA cryptosystem works. Essentially, RSA encryption is based on encoding the message as a number $m$ and raising that number to some odd power $e$ modulo the product $n$ of two large randomly chosen prime numbers $p$ and $q$. This operation is easy to carry out and in principle reversible, but as far as we know, there ...
1
Yes. More generally, suppose you know the size of the group $\mathbb{G}$ you are working in (RSA, class group etc). Let $A = g^n$ where $n$ is the product of the elements in the committed set. Assuming $n$ is co-prime to $|\mathbb{G}|$, you can compute integers $a_1$, $a_2$ such that $$a_1 x + a_2 n \equiv 1 \pmod{|\mathbb{G}|}\;\;,\;\; |a_2| < |x|.$$ ...
Only top voted, non community-wiki answers of a minimum length are eligible
Q16 2019
Hi, why is this statement TRUE? Can't the bounded set A be non-convex? Thanks.
The LMO simply minimizes a linear function over a set X, and the set does not have to be convex. So it makes sense to write $$LMO_A$$. The "convexity" of X we see in the slides results from the fact that we are using Frank-Wolfe to solve an optimization problem constrained to a convex set.
But why is $$LMO_X (g)$$ = $$LMO_A (g)$$? X is a larger set than A. I didn't get it.
Intuitively, $$LMO_X$$ minimizes a linear function over the convex set X, so a minimizer can always be found at a "corner" (extreme point) of X. As X = Conv(A), the extreme points of X must lie in A. Thus $$LMO_X = LMO_A$$.
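This can be checked numerically with a small sketch (the atom set A and the objective below are hypothetical): for X = Conv(A), no point of X beats the best atom in A for a linear objective.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])  # atoms; Conv(A) = unit square
g = np.array([1.5, -2.0])                               # linear objective <g, x>

lmo_A = A[np.argmin(A @ g)]        # LMO over the finite atom set A
best_A = float(np.min(A @ g))      # minimum value over A

# Sample many points of X = Conv(A) as random convex combinations of the atoms.
W = rng.dirichlet(np.ones(len(A)), size=100_000)        # rows: convex weights
samples = W @ A
best_X_sampled = float(np.min(samples @ g))

# The linear minimum over Conv(A) is attained at an atom: no interior point does better.
assert best_X_sampled >= best_A - 1e-12
print(lmo_A, best_A)   # [0. 1.] -2.0
```

Since <g, w1*a1 + ... + wk*ak> is a convex combination of the values <g, ai>, it can never fall below their minimum, which is exactly why the two oracles agree.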
# Incompleteness of quantum physics
Incompleteness of quantum physics is the assertion that the state of a physical system, as formulated by quantum mechanics, does not give a complete description of the system, assuming the usual philosophical requirements ("reality", "locality", etc.).
Einstein, Podolsky, and Rosen had proposed their definition of a "complete" description as one which uniquely determines the values of all its measurable properties. The existence of indeterminacy for some measurements is a characteristic of quantum mechanics; moreover, bounds for indeterminacy can be expressed in a quantitative form by the Heisenberg uncertainty principle.
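The quantitative bound referred to here is the standard position-momentum uncertainty relation (reproduced from the textbook formulation, not from the original article):

$$\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the standard deviations of position and momentum in a given state.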
Incompleteness can be understood in two fundamentally different ways:
1. QM is incomplete because it is not the "right" theory; the right theory would provide descriptive categories to account for all observable behavior and not leave "anything to chance".
2. QM is incomplete, but is a faithful picture of nature.

Incompleteness understood as 1) would motivate a search for a hidden variables theory featuring nonlocality, owing to the results of Bell test experiments. There are many variants of 2), which is widely considered to be the more orthodox view of quantum mechanics.
## Einstein's argument for the incompleteness of quantum physics
Albert Einstein may have been the first person to carefully point out the radical effect the new quantum physics would have on our notion of physical state. For a historical background of Einstein's thinking in regard to QM, see Jackiw and Kleppner [2000], although his best known critique was formulated in the EPR thought experiment; see Bell [1964].
According to Fuchs [2002], Einstein developed a very good argument for incompleteness:
> The best [argument of Einstein] was in essence this. Take two spatially separated systems A and B prepared in some entangled quantum state $|\psi_{AB}\rangle$. By performing the measurement of one or another of two observables on system A alone, one can immediately write down a new state for system B. Either the state will be drawn from one set of states $\{|\phi_i^B\rangle\}$ or another $\{|\eta_i^B\rangle\}$, depending upon which observable is measured. The key point is that it does not matter how distant the two systems are from each other, what sort of medium they might be immersed in, or any of the other fine details of the world. Einstein concluded that whatever these things called quantum states be, they cannot be "real states of affairs" for system B alone. For, whatever the real, objective state of affairs at B is, it should not depend upon the measurements one can make on a causally unconnected system A.
Einstein's argument shows that the quantum state is not a complete description of a physical system, according to Fuchs [2002]:
> Thus one must take it seriously that the new state (either a $|\phi_i^B\rangle$ or $|\eta_i^B\rangle$) represents information about system B. In making a measurement on A, one learns something about B, but that is where the story ends. The state change cannot be construed to be something more physical than that. More particularly, the final state itself for B cannot be viewed as more than a reflection of some tricky combination of one's initial information and the knowledge gained through the measurement. Expressed in the language of Einstein, the quantum state cannot be a "complete" description of the quantum system.
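The argument can be made concrete with the textbook singlet example (an illustration added here, not part of Fuchs's quoted text). For two spin-1/2 systems,

$$|\psi_{AB}\rangle = \tfrac{1}{\sqrt{2}}\left(|{\uparrow}\rangle_A|{\downarrow}\rangle_B - |{\downarrow}\rangle_A|{\uparrow}\rangle_B\right) = \tfrac{1}{\sqrt{2}}\left(|{\rightarrow}\rangle_A|{\leftarrow}\rangle_B - |{\leftarrow}\rangle_A|{\rightarrow}\rangle_B\right),$$

so measuring the spin of A along $z$ leaves B in one of $\{|{\uparrow}\rangle_B, |{\downarrow}\rangle_B\}$, while measuring along $x$ leaves B in one of $\{|{\rightarrow}\rangle_B, |{\leftarrow}\rangle_B\}$: two different sets of possible states for B, selected purely by the choice of measurement on A.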
## Reality of incompleteness
Although Einstein was one of the first to formulate the necessary incompleteness of quantum physics, he never fully accepted it. In a 1926 letter to Max Born, he made a remark that is now famous:

> Quantum mechanics is certainly imposing. But an inner voice tells me it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the Old One. I, at any rate, am convinced that He does not throw dice.
Einstein was mistaken according to Stephen Hawking in [http://www.hawking.org.uk/lectures/lindex.html Does God Play Dice] ,
> Einstein's view was what would now be called, a hidden variable theory. Hidden variable theories might seem to be the most obvious way to incorporate the Uncertainty Principle into physics. They form the basis of the mental picture of the universe, held by many scientists, and almost all philosophers of science. But these hidden variable theories are wrong. The British physicist, John Bell, who died recently, devised an experimental test that would distinguish hidden variable theories. When the experiment was carried out carefully, the results were inconsistent with hidden variables. Thus it seems that even God is bound by the Uncertainty Principle, and can not know both the position, and the speed, of a particle. So God does play dice with the universe. All the evidence points to him being an inveterate gambler, who throws the dice on every possible occasion.
Chris Fuchs [2002] summed up the reality of the necessary incompleteness of information in quantum physics as follows, attributing this idea to Einstein: "He [Einstein] was the first person to say in absolutely unambiguous terms why the quantum state should be viewed as information (or, to say the same thing, as a representation of one's beliefs and gambling commitments, credible or otherwise)."
> Incompleteness, it seems, is here to stay: The theory prescribes that no matter how much we know about a quantum system—even when we have maximal information about it—there will always be a statistical residue. There will always be questions that we can ask of a system for which we cannot predict the outcomes. In quantum theory, maximal information is simply not complete information [Caves and Fuchs 1996]. But neither can it be completed.
The kind of information about the physical world that is available to us according to Fuchs [2002] is “the potential consequences of our experimental interventions into nature” which is the subject matter of quantum physics.
## The Copenhagen Interpretation
It should however be noted that according to the generally accepted Copenhagen Interpretation of quantum mechanics (Niels Bohr), the philosophical requirements assumed by Einstein do not hold: according to this interpretation, quantum mechanics is neither "real", since a quantum mechanical measurement does not simply "state" but rather "prepares" the physics of a system, nor "local", essentially because the state of a system is described by the Hilbert vector $|\psi\rangle$, which includes the value at every site, $|\psi\rangle \to \psi(x,y,z)$.
So in this respect Einstein was simply wrong, although he pinpointed the formalism of quantum mechanics exceptionally sharply.
## Relational Quantum Physics
According to Relational Quantum Physics [Laudisa and Rovelli 2005], the way distinct physical systems affect each other when they interact (and not the way physical systems "are") exhausts all that can be said about the physical world. The physical world is thus seen as a net of interacting components, where there is no meaning to the state of an isolated system. A physical system (or, more precisely, its contingent state) is described by the net of relations it entertains with the surrounding systems, and the physical structure of the world is identified as this net of relationships. In other words, “Quantum physics is the theoretical formalization of the experimental discovery that the descriptions that different observers give of the same events are not universal.”
Quantum mechanics thus forces us to give up the idea of a description of a system independent of the observer providing that description; that is, the concept of the absolute state of a system. I.e., there is no observer-independent data at all. According to Zurek [1982], “Properties of quantum systems have no absolute meaning. Rather they must be always characterized with respect to other physical systems.”
Does this mean that there is no relation whatsoever between views of different observers? Certainly not. According to Rovelli [1996] “It is possible to compare different views, but the process of comparison is always a physical interaction (and all physical interactions are quantum mechanical in nature).”
## References
* A. Einstein, B. Podolsky, and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" "Phys. Rev." 47, 777–780 (1935).
* J. S. Bell,"On the Einstein-Podolsky-Rosen paradox", "Physics" 1, (1964) 195-200. Reprinted in "Speakable and Unspeakable in Quantum Mechanics", Cambridge University Press, 2004.
* W. Pauli, letter to M. Fierz dated 10 August 1954, reprinted and translated in K. V. Laurikainen, "Beyond the Atom: The Philosophical Thought of Wolfgang Pauli", Springer-Verlag, Berlin, 1988 , p. 226.
* Werner Heisenberg, "Physics and Beyond: Encounters and Conversations", translated by A. J. Pomerans, Harper & Row, New York, 1971, pp. 63–64.
* Claude Cohen-Tannoudji, Bernard Diu and Franck Laloë, "Mecanique quantique" (see also "Quantum Mechanics" translated from the French by Susan Hemley, Nicole Ostrowsky, and Dan Ostrowsky; John Wiley & Sons 1982) Hermann, Paris, France. 1977.
* P.S. Hanle, "Indeterminacy before Heisenberg: The Case of Franz Exner and Erwin Schrödinger", Hist. Stud. Phys. Sci. 10, 225 (1979).
* A. Peres and W.H. Zurek, "Is quantum theory universally valid?", Am. J. Phys. 50, 807 (1982).
* Wojciech Zurek, Physical Review D 26, 1862 (1982).
* M. Jammer, "The EPR Problem in Its Historical Development", in "Symposium on the Foundations of Modern Physics: 50 years of the Einstein-Podolsky-Rosen Gedankenexperiment", edited by P. Lahti and P. Mittelstaedt (World Scientific, Singapore, 1985), pp. 129–149.
* A. Fine, "The Shaky Game: Einstein Realism and the Quantum Theory", University of Chicago Press, Chicago, 1986.
* Thomas Kuhn. "Black-Body Theory and the Quantum Discontinuity", 1894-1912 Chicago University Press. 1987.
* A. Peres, "Quantum Theory: Concepts and Methods", Kluwer, Dordrecht, 1993.
* C. M. Caves and C. A. Fuchs, "Quantum Information: How Much Information in a State Vector?", in The Dilemma of Einstein, Podolsky and Rosen – 60 Years Later, edited by A. Mann and M. Revzen, Ann. Israel Phys. Soc. 12, 226–257 (1996).
* Carlo Rovelli, "Relational quantum mechanics", International Journal of Theoretical Physics 35, 1637–1678 (1996).
* R. Omnes, "Understanding Quantum Mechanics", Princeton University Press, Princeton, 1999.
* R. Jackiw and D. Kleppner, "One Hundred Years of Quantum Physics", Science, Vol. 289 Issue 5481, p893, August 2000.
* Orly Alter and Yoshihisa Yamamoto. "Quantum Measurement of a Single System", John Wiley and Sons. 2001.
* Christopher Fuchs, "Quantum mechanics as quantum information (and only a little more)", in A. Khrenikov (ed.) "Quantum Theory: Reconstruction of Foundations" (Växjo: Växjo University Press, 2002).
* E. Joos et al., "Decoherence and the Appearance of a Classical World in Quantum Theory", 2nd edition, Springer, Berlin, 2003.
* Wojciech H. Zurek, "Decoherence and the transition from quantum to classical — REVISITED", arXiv:quant-ph/0306072 (an updated version of the Physics Today 44:36–44 (1991) article).
* Wojciech H. Zurek, "Decoherence, einselection, and the quantum origins of the classical", Reviews of Modern Physics 75, 715 (2003). arXiv:quant-ph/0105127, doi:10.1103/RevModPhys.75.715.
* Asher Peres and Daniel Terno, "Quantum Information and Relativity Theory", "Rev. Mod. Phys." 76 (2004) 93.
* Roger Penrose, "", Alfred Knopf 2004.
* Maximilian Schlosshauer, "Decoherence, the Measurement Problem, and Interpretations of Quantum Mechanics", Reviews of Modern Physics 76, 1267–1305 (2004). arXiv:quant-ph/0312059, doi:10.1103/RevModPhys.76.1267.
* Federico Laudisa and Carlo Rovelli, "Relational Quantum Mechanics", The Stanford Encyclopedia of Philosophy (Fall 2005 Edition).
Wikimedia Foundation. 2010.
### Look at other dictionaries:
• Quantum indeterminacy — is the apparent necessary incompleteness in the description of a physical system, that has become one of the characteristics of the standard description of quantum physics. Prior to quantum physics, it was thought that (a) a physical system had a … Wikipedia
• Quantum mechanics — For a generally accessible and less technical introduction to the topic, see Introduction to quantum mechanics. Quantum mechanics … Wikipedia
• Quantum mind — theories are based on the premise that quantum mechanics is necessary to fully understand the mind and brain, particularly concerning an explanation of consciousness. This approach is considered a minority opinion in science, although it does… … Wikipedia
• Cat state — In quantum computing, the cat state, named after Schrödinger s cat, [John Gribbin (1984), In Search of Schrödinger s Cat , ISBN 0 552 12555 5, 22nd February 1985, Transworld Publishers, Ltd, 318 pages.] is the special state where the qubits are… … Wikipedia
• De Broglie–Bohm theory — Quantum mechanics Uncertainty principle … Wikipedia
• Theory of everything — A theory of everything (TOE) is a putative theory of theoretical physics that fully explains and links together all known physical phenomena. Initially, the term was used with an ironic connotation to refer to various overgeneralized theories.… … Wikipedia
• Bell's theorem — is a theorem that shows that the predictions of quantum mechanics (QM) are not intuitive, and touches upon fundamental philosophical issues that relate to modern physics. It is the most famous legacy of the late physicist John S. Bell. Bell s… … Wikipedia
• Physical paradox — A physical paradox is an apparent contradiction in physical descriptions of the universe. While many physical paradoxes have accepted resolutions, others defy resolution and may indicate flaws in theory. In physics as in all of science,… … Wikipedia
• Interpretationen der Quantenmechanik — beschreiben die physikalische und metaphysische Bedeutung der Postulate und Begriffe, aus welchen die Quantenmechanik aufgebaut ist. Neben der ersten – und bis heute (2011) dominierenden – Kopenhagener Interpretation wurden seit… … Deutsch Wikipedia
• Bohm interpretation — The Bohm interpretation of quantum mechanics, sometimes called Bohmian mechanics, the ontological interpretation, or the causal interpretation, is an interpretation postulated by David Bohm in 1952 as an extension of Louis de Broglie s pilot wave … Wikipedia
# Sakura
## HITCON CTF 2017
(Okay, this post is backdated.)
Disassembling the executable produces a huge amount of code. There are some basic obfuscations like a lot of trivial identity functions nested in each other, and a few functions that wrap around identity functions but just add some constant multiple of 16. Most of the meat is in one very large function, though. If you disassemble this function with IDA, you see a lot of variable initializations and then a lot of interesting loops that are quite similar:
flag = 1;
sum = 0;
cur_ptr = (__int64 *)identity((__int64)&v904);
end_ptr = plus_16((__int64)&v904);
while ( cur_ptr != (__int64 *)end_ptr )
{
    cval = *cur_ptr;
    *(&big_array[20 * (signed int)cval] + SHIDWORD(cval)) = *(&a1[20 * (signed int)cval] + SHIDWORD(cval));
    digit = *(&a1[20 * (signed int)cval] + SHIDWORD(cval)) - '0';
    if ( digit <= 0 || digit > 9 )
        flag = 0;
    if ( (banned_mask >> digit) & 1 )
        flag = 0;
    sum += digit;
    ++cur_ptr;
}
if ( sum != 17 )
    flag = 0;
So index pairs are read from cur_ptr and indexed into a1, which are copied into big_array. In order to succeed and give us the flag, the variable I’ve labeled flag has to stay 1 throughout the entire program. There are several checks:
• The characters extracted from a1 must be ASCII nonzero digits, as seen from the check of failure if digit <= 0 || digit > 9.
• In each loop, the digits encountered must be distinct: each time through, we check that the digit-th bit of banned_mask is not yet set, after which that bit is set to 1.
• Finally, the sum of the digits encountered must match some specific constant, here 17.
This reveals the conceit: this is, to a first approximation, a Kakuro, where cells are filled with digits from 1 to 9 and each constraint taken from cur_ptr specifies a row or column, which must contain no repeated digits and must sum to a given number. (Given the way the code is written, there is no requirement that the constrained sequences of cells are actually rows or columns, but from the final output they seem to be.) Realizing that this is an established logic puzzle genre is not particularly essential to solving the challenge, although it might help let you guess around some uncertainties about how things work without necessarily rigorously examining all the code. Kakuros aren’t solid rectangles of empty cells — some of the cells in the input correspond to black squares used only to delimit other rows and columns, and aren’t used in any clues, so the digits in those cells would be completely unconstrained, which explains why digits are copied to big_array only when they show up in a constraint.
Now we just feed these constraints into z3 to get the answer. This is a fairly hacky Python script that processes the disassembled IDA code after a bunch of renames (not too many, so the v<number> variables were still in order): each of the obfuscated identity functions was renamed to something containing identity, and each of the obfuscated functions that just adds some number was renamed to plus_<number>.
from __future__ import division, print_function
import re

from z3 import *

vards = dict()

def between(left, right, s):
    a, b = s.split(left)
    return b.split(right)[0]

solver = Solver()
cells = []
for i in range(20):
    row = []
    for j in range(20):
        v = Int('cell_{}_{}'.format(i, j))
        solver.add(v >= 1, v <= 9)  # every cell holds a digit from 1 to 9
        row.append(v)
    cells.append(row)

with open('sakura-huge.txt') as infile:
    for line in infile:
        line = line[:-1]
        set_match = re.match(r' v(\d+) = (\d+);', line)
        if set_match:
            vards[int(set_match.group(1))] = int(set_match.group(2))
        elif 'identity' in line:
            # start of a constraint: remember the base variable index
            v = int(between('&v', ')', line))
            # print('start:', v, vards.get(v))
        elif 'plus' in line:
            # plus_<r> gives the end pointer, so r/8 index pairs follow v
            r = int(between('plus_', '(', line))
            cur_vars = []
            for i in range(r // 8):
                row, col = vards[v + 2*i], vards[v + 2*i + 1]
                cur_vars.append(cells[row][col])
            for i, cv in enumerate(cur_vars):
                for j in range(i):
                    solver.add(cv != cur_vars[j])  # digits in one clue are distinct
        elif 'if ( sum' in line:
            target_sum = int(between('!= ', ' )', line))
            solver.add(Sum(cur_vars) == target_sum)  # clue cells sum to the target

solver.check()
solution = solver.model()
print()
for i in range(20):
    for j in range(1, 20):
        print(solution[cells[i][j]], end="")
    print()
Piping this into the executable gives us our flag:
hitcon{6c0d62189adfd27a12289890d5b89c0dc8098bc976ecc3f6d61ec0429cccae61}
(note: the commenting setup here is experimental and I may not check my comments often; if you want to tell me something instead of the world, email me!)
Unele aspecte privind poluarea atmosferei. Măsurile ce se impun pentru prevenirea prejudicierii mediului (Some aspects of air pollution. The measures required to prevent harm to the environment)
SM ISO690:2012: MIRON, Adriana. Unele aspecte privind poluarea atmosferei. Măsurile ce se impun pentru prevenirea prejudicierii mediului. In: Mediul Ambiant. 2014, nr. 5(77), pp. 1-5. ISSN 1810-9551.
Mediul Ambiant
Numărul 5(77) / 2014 / ISSN 1810-9551
CZU: 504.06
Pages 1-5
Miron Adriana, Institutul de Zoologie al AŞM. Available in IBN: 16 March 2019
Abstract
The results of research and investigations denote a fairly high degree of pollution of the environment. The radical changes that took and take place in all areas, give rise to negative phenomena which seriously affects the environment, imposing the measures to reduce and eradicate these phenomena. At the same time, it is necessary to apply directly the provisions of the organic law of the environment, the environmental protection law, as a condition sine qua non of the materialisation of the objectives, the attributes of the State in pursuit of ecological function. Environmental law is a primordial and relevant tool fully to achieve the optimal ecological function and the rule of law. Conclude that without adequate legislation which takes into account the reality and the internal ecological priorities, as well as the requirements and experience of international organizations, cannot materialize the policies, governmental environmental strategies, as judicious they would be, as experienced and well intentioned would be the competent bodies of the State.
### Cerif XML Export
<?xml version='1.0' encoding='utf-8'?>
<CERIF xmlns='urn:xmlns:org:eurocris:cerif-1.5-1' xsi:schemaLocation='urn:xmlns:org:eurocris:cerif-1.5-1 http://www.eurocris.org/Uploads/Web%20pages/CERIF-1.5/CERIF_1.5_1.xsd' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' release='1.5' date='2012-10-07' sourceDatabase='Output Profile'>
<cfResPubl>
<cfResPublId>ibn-ResPubl-74107</cfResPublId>
<cfResPublDate>2014-10-01</cfResPublDate>
<cfVol>77</cfVol>
<cfIssue>5</cfIssue>
<cfStartPage>1</cfStartPage>
<cfISSN>1810-9551</cfISSN>
<cfURI>https://ibn.idsi.md/ro/vizualizare_articol/74107</cfURI>
<cfTitle cfLangCode='RO' cfTrans='o'><p>Unele aspecte privind poluarea atmosferei. măsurile ce se impun pentru prevenirea prejudicierii mediului</p></cfTitle>
<cfAbstr cfLangCode='EN' cfTrans='o'><p>The results of research and investigations denote a fairly high degree of pollution of the environment. The radical changes that took and take place in all areas, give rise to negative phenomena which seriously affects the environment, imposing the measures to reduce and eradicate these phenomena. At the same time, it is necessary to apply directly the provisions of the organic law of the environment, the environmental protection law, as a condition sine qua non of the materialisation of the objectives, the attributes of the State in pursuit of ecological function. Environmental law is a primordial and relevant tool fully to achieve the optimal ecological function and the rule of law. Conclude that without adequate legislation which takes into account the reality and the internal ecological priorities, as well as the requirements and experience of international organizations, cannot materialize the policies, governmental environmental strategies, as judicious they would be, as experienced and well intentioned would be the competent bodies of the State.</p></cfAbstr>
<cfResPubl_Class>
<cfClassId>eda2d9e9-34c5-11e1-b86c-0800200c9a66</cfClassId>
<cfClassSchemeId>759af938-34ae-11e1-b86c-0800200c9a66</cfClassSchemeId>
<cfStartDate>2014-10-01T24:00:00</cfStartDate>
</cfResPubl_Class>
<cfResPubl_Class>
<cfClassId>e601872f-4b7e-4d88-929f-7df027b226c9</cfClassId>
<cfClassSchemeId>40e90e2f-446d-460a-98e5-5dce57550c48</cfClassSchemeId>
<cfStartDate>2014-10-01T24:00:00</cfStartDate>
</cfResPubl_Class>
<cfPers_ResPubl>
<cfPersId>ibn-person-32858</cfPersId>
<cfClassId>49815870-1cfe-11e1-8bc2-0800200c9a66</cfClassId>
<cfStartDate>2014-10-01T24:00:00</cfStartDate>
</cfPers_ResPubl>
</cfResPubl>
<cfPers>
<cfPersId>ibn-Pers-32858</cfPersId>
<cfPersName_Pers>
<cfPersNameId>ibn-PersName-32858-2</cfPersNameId>
<cfClassId>55f90543-d631-42eb-8d47-d8d9266cbb26</cfClassId>
<cfClassSchemeId>7375609d-cfa6-45ce-a803-75de69abe21f</cfClassSchemeId>
<cfStartDate>2014-10-01T24:00:00</cfStartDate>
<cfFamilyNames>Miron</cfFamilyNames>
</cfPersName_Pers>
</cfPers>
</CERIF>
# Calculus
Using the definition of the derivative, state the function f(x) and the value of a for lim as h->0 of (square root of (121+h) - 11)/h
1. if
f(x) = x^.5
then
f(x+h) = (x + h)^.5
[f(x+h) - f(x)]/h = [(x+h)^.5-x^.5]/h
binomial expansion of
(x+h)^.5 = x^.5 + .5 x^-.5 h ..... higher powers of h
so
[ x^.5 + .5 x^-.5 h ...-x^.5 ]/h
or
.5 x^-.5 + higher powers of h
= .5 x^-.5 as h --->0
if x = 121
f(x) = 121^.5 = 11
f(x+h) = (121+h)^.5
df(121)/dx = .5 (121)^-.5 = .5/11
a = 121
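A quick numeric sanity check of the identification (f(x) = x^.5, a = 121, so the limit is f'(121) = .5/11 = 1/22), sketched in Python:

```python
# The difference quotient (sqrt(121 + h) - 11)/h should approach
# f'(121) = 0.5 * 121**(-0.5) = 1/22 as h -> 0.
def diff_quotient(h):
    return ((121 + h) ** 0.5 - 11) / h

for h in (1e-2, 1e-4, 1e-6):
    assert abs(diff_quotient(h) - 1 / 22) < 1e-3
```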
posted by Damon
# How to make your Facet Wrap / Facet Grid ggplots span across multiple pages in PDF
## while rendering from RMarkdown (Rmd)
This is going to be a very short post. It’s a problem I faced a few days back and after some DDGing, Github Issues scanning and some SOing - I found the answer. So, I thought to share it - for archival and also my future reference.
### Problem
The problem arises when you use facet_grid() or facet_wrap() of ggplot inside an R chunk of an Rmd (RMarkdown) file. The rendered PDF doesn't show those faceted plots on multiple pages; instead they are just embedded one on top of another on the same page. That's kind of annoying.
### Solution
ggforce by Thomas Lin Pedersen. It's quite a shame that I hadn't used this package before I faced this problem, and I hope to explore it further in the future. It seems to be an excellent addon to the ggplot2 ecosystem, filling in the missing holes.
### Sample Code
First, we have to find the required number of pages, using n_pages()
ggplot(df) +
geom_bar(aes(respondent,n, fill = answers),
stat = "identity") +
scale_fill_manual(values = c("#fdb924","#00ccff")) +
coord_flip() +
theme_minimal() +
labs(title = paste0("District-wise Question & Response") ,
subtitle = "To identify Balance between responses") +
facet_wrap_paginate(~district,
nrow = 1,
ncol = 1,
scales = "free") +
theme( strip.text = element_text(size = 30)) -> p
required_n_pages <- n_pages(p)
Once we have the required number of pages, we can use it in a for-loop (or you may use lapply or map) to iterate the plotting with facet_wrap_paginate() instead of facet_wrap(), adding the argument page = i, where i is the index of the loop iteration denoting the current page on which the respective facet is to be plotted.
for(i in 1:required_n_pages){
ggplot(df) +
geom_bar(aes(respondent,n, fill = answers),
stat = "identity") +
scale_fill_manual(values = c("#fdb924","#00ccff")) +
coord_flip() +
theme_minimal() +
labs(title = paste0("District-wise Question & Response") ,
subtitle = "To identify Balance between responses") +
facet_wrap_paginate(~district,
nrow = 1,
ncol = 1,
scales = "free",
page = i) +
theme( strip.text = element_text(size = 30)) -> p
print(p)
}
### Summary
These functions of ggforce should be quite handy if you are someone who prefers documenting the output of your Analysis with Rmarkdown (which is definitely one of the reasons why you should work on R).
# Why does power go up with the cube of the airspeed?
If I have an airplane with over 100 kN of thrust, and I want to accelerate from a velocity at which the drag force is 25 kN to a velocity twice as high, is it possible that I won't be able to do this because my engine doesn't have enough power, even though it has enough thrust to counter the drag force near this higher speed, so that there should be a net forward acceleration on my plane?
W = F*d
Where W is the energy needed to apply a force F over a given distance d.
Power is energy divided by time, which is equivalent to a force applied at a given velocity:
P = F * v = F * d/t = W/t
For an airplane and straight and level flight at constant true airspeed, the force of drag is equal to the force of thrust.
Fd = Ft
Since the force of drag is proportional to the square of the velocity by
Fd = 0.5 * Cd * p * A * v^2
Where p is the density of the air, Cd is a coefficient of drag, A is the flat plate area of the object being pushed through the air, and v is the true airspeed, this gives the power consumed by drag at a given airspeed as
Pd = Pt = Fd * v = 0.5 * Cd * p * A * v^3
Pt is proportional to the cube of v, where Pt is engine power.
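A quick numeric illustration (Cd, p, and A below are made-up placeholder values; only the ratios matter):

```python
# P = 0.5 * Cd * p * A * v**3: doubling v quadruples the drag force and
# multiplies the power consumed by drag by 8.
Cd, p, A = 0.03, 1.225, 16.0   # assumed illustrative values

def drag_force(v):
    return 0.5 * Cd * p * A * v ** 2

def drag_power(v):
    return drag_force(v) * v

v = 60.0                        # m/s, arbitrary
assert abs(drag_force(2 * v) / drag_force(v) - 4.0) < 1e-9   # drag: 4x
assert abs(drag_power(2 * v) / drag_power(v) - 8.0) < 1e-9   # power: 8x
```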
• This answer assumes a constant drag coefficient-- shouldn't that be justified somehow, or at least noted? In the context of flight of any given actual aircraft, where a-o-a must vary with airspeed? Jan 15 at 8:45
• Regardless of whatever changes the drag coefficient has, the force of drag still varies by the square of the airspeed, and thus the power required will vary by the cube of the airspeed Jan 15 at 14:11
• does this imply that an engine of given max power will lose thrust with increasing velocity? Jan 15 at 15:18
Firstly, your hypothetical situation ("is it possible...") would represent an over-constrained problem. It would never be possible that an a/c has enough thrust to overcome drag at speed Y, but doesn't have enough power to overcome drag at the same speed Y. That's a fundamental contradiction, indicating a misunderstanding about the basic relationship between force, work, and power, possibly better suited to exploring on an engineering or physics site.
(This is assuming that by "has enough thrust" and "has enough" power, we mean at whatever particular airspeed we are talking about-- not the max thrust or power that we'd get at the optimum airspeed for maximizing that particular parameter. It's not completely clear from the question which you meant-- if you meant the latter case, then we should note that it certainly does often happen that there are airspeeds we can't reach (in level flight) due to "lack of thrust" even though we can produce that same amount of thrust at some lower airspeed, particularly with piston engines that tend to have a very roughly constant power output, and thus experience a dramatic loss of thrust as airspeed is increased.)
Second, you haven't given enough us information to know or even guess the drag force at your second velocity Y(=2X). (Obviously I'm calling your second velocity "Y" and your first velocity "X".)
Third, you haven't told us whether the 100 kN thrust force is constant independent of airspeed, or what.
The answer to the title itself, "Why does power go up with the cube of the airspeed?", is "It doesn't-- power required is not directly proportional to airspeed cubed across the entire flight envelope, because the drag coefficient is not constant". Of course this is assuming that by "power" in the title, you meant the power required, not the maximum power that the propulsion system could produce at that airspeed. The latter quantity, naturally, also is not directly proportional to airspeed cubed!
There are situations where an aircraft has enough thrust and power to overcome drag in horizontal flight at some given airspeed, but doesn't have enough thrust and power to overcome drag in horizontal flight at some lower airspeed, because that lower airspeed is on the "back side of the thrust-required curve" where drag is very high, so the thrust and power required (for horizontal flight) are also very high. In such a case the aircraft could not accelerate, without giving up altitude, from the lower airspeed to the higher airspeed. Nor could the aircraft maintain altitude at the lower airspeed. But that doesn't seem to be what you are asking about here, as evidenced by your comment "so there should be a net forward acceleration on my plane".
To better understand how the power required for horizontal flight varies with airspeed, in any given actual aircraft (assuming fixed constant weight), where the drag coefficient is not constant because angle-of-attack must vary as airspeed varies, see these sections from John Denker's excellent See How It Flies website-- the graphs included here are much better than any verbal description, and you'll see that the required power does not simply vary according to the cube of airspeed--
Drag and the power curve-- introduction
Related ASE questions that deal with how the thrust and power available from the propulsion system vary with airspeed
Why is thrust said to be constant over speed for a jet engine?
Why is thrust inverse to speed in piston engines?
How (and why) does engine thrust change with airspeed?
How do power and thrust curves compare?
• "It would never be possible that an a/c has enough thrust to overcome drag at speed Y, but doesn't have enough power to overcome drag at the same speed Y" -- this is essentially the answer I wanted. Jan 15 at 15:14
• @FrancisL. -- glad I could help, sorry for so many comments, now have deleted most and incorporated into this answer! Jan 15 at 15:22
• @MichaelHall -- I was only trying to say that if you have enough thrust to maintain a given airspeed, then you also have enough power, and if you have enough power to maintain a given airspeed, then you also have enough thrust. Assuming that we are talking about the thrust and power available at the speed in question, not the max possible thrust or power that would be available if the speed of the a/c were optimized for whichever of these two parameters we are talking about at the moment. If we mean the latter, then it starts getting more complicated to explain what we are really trying to say. Jan 15 at 20:19
• @MichaelHall -- have read it; honestly at the end of the day these kinds of questions (and answers) are much better served by graphs than by verbal descriptions! Had serious second thoughts about answering the question at all due to multiple apparent misconceptions but-- since there was already another answer decided to go ahead-- Jan 15 at 20:35
• @FrancisL. -- hope the links I've recently added to this answer to the See How It Flies website , and to related ASE answers, are helpful-- Jan 16 at 6:47
$$F = ma$$
At constant mass, we only need to consider the difference in thrust and drag forces to determine if the aircraft can accelerate to a higher velocity.
if i have an airplane with over 100 kN of thrust, and i want to accelerate from a velocity at which drag force is 25kN to a velocity twice higher
For incompressible flow, at standard aviation Reynolds numbers, the drag force D is $$D = C_D \cdot \frac{1}{2} \rho V^2 S$$
At constant altitude and attitude, $$D = C V^2$$ with C being a constant, so at twice the speed the drag is 4 times as high. Thrust T must be equal to this higher D; power does not appear in this consideration.
During engine design, the required engine power is determined by the required aerodynamic power $$P = T \cdot V$$ divided by transmission and thrust-mechanism efficiencies. It is the required aerodynamic power that goes up with the cube of velocity; engine power must be able to handle that.
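As a quick numerical sanity check of the square/cube scaling discussed above (assuming, as both answers do, a constant drag coefficient), here is a short sketch; the values for air density, Cd, and flat-plate area are arbitrary placeholders, since only the ratios matter:

```python
def drag_force(v, rho=1.225, cd=0.03, area=20.0):
    """D = 0.5 * Cd * rho * A * v^2 (constant-Cd assumption)."""
    return 0.5 * cd * rho * area * v**2

def drag_power(v, **kw):
    """Power required to overcome drag in level flight: P = D * v."""
    return drag_force(v, **kw) * v

v = 100.0  # m/s
print(drag_force(2 * v) / drag_force(v))  # → 4.0 (drag goes with v^2)
print(drag_power(2 * v) / drag_power(v))  # → 8.0 (power goes with v^3)
```

Doubling the airspeed quadruples the drag force and multiplies the power required by eight, regardless of the placeholder constants chosen.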
• Same comment I left the other answer -- This answer assumes a constant drag coefficient-- shouldn't that be justified somehow, or at least noted? In the context of flight of any given actual aircraft, where a-o-a must vary with airspeed? Jan 16 at 6:01
• @quietflyer “At constant altitude and attitude..” Jan 16 at 6:06
|
{}
|
# 50 tricky questions
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 6681
Location: Pune, India
Followers: 1832
Kudos [?]: 11151 [0], given: 219
### Show Tags
02 Aug 2011, 22:23
Expert's post
mrblack wrote:
How are you supposed to actually learn how to solve these questions? I could spend an hour trying to solve some of the questions and even that I will probably still be stumped....is there a good way to learn the tricks to solve these toughies?
These questions are for practice. You don't have to 'learn' the method for each one. First figure out if you are ready for them i.e. can you solve the easier questions comfortably? Are your basics in place? Let's say, can you solve most OG12 questions in less than a minute? Once you know that you have tamed the easy ones, then go for the difficult ones.
You just have to try and solve them and if you are unable to, then check out the explanations. If you have doubts in a question, post it and people will give their take on it. With every new question, you will learn something new. You will get different ways of figuring out the answer since most people have their favored mechanisms. Finally most questions in GMAT are based on a handful of basics. Once you have seen 20 different applications of the same concept, it doesn't matter how the concept is presented to you. You will know how to deal with it.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199
Veritas Prep Reviews
Manager
Joined: 27 Apr 2008
Posts: 190
### Show Tags
03 Aug 2011, 05:32
Thank you Karishma. Very encouraging advice indeed!
Manager
Joined: 26 Sep 2010
Posts: 114
### Show Tags
07 Aug 2011, 04:15
Thanks for sharing.
Retired Thread Master
Joined: 12 May 2011
Posts: 209
Location: India
### Show Tags
11 Aug 2011, 06:21
To those people who have seen/worked on these questions: can you please share the source of these questions? Are they GMAT-like? Are they worth the time spent in solving these problems?
Manager
Joined: 24 Jul 2009
Posts: 73
Location: United States
### Show Tags
16 Aug 2011, 23:48
LifeChanger wrote:
To those people who have seen/worked on these questions: can you please share the source of these questions? Are they GMAT-like? Are they worth the time spent in solving these problems?
It depends on 2 factors: 1. your interest in solving them; 2. the time left before your actual GMAT exam. If you have ample time, you can go ahead and solve these; else just concentrate on the individual questions being posted in the forum.
Manager
Joined: 31 Jul 2011
Posts: 62
### Show Tags
30 Sep 2011, 14:24
Just started looking and the first one is wrong!
Intern
Joined: 27 Mar 2011
Posts: 4
### Show Tags
06 Oct 2011, 04:47
Thank you for your sharing.
Manager
Joined: 28 Sep 2011
Posts: 206
### Show Tags
11 Oct 2011, 19:00
Wow, a great addition to my resources. Thank you for sharing this with the entire community.
Intern
Joined: 27 Sep 2011
Posts: 2
### Show Tags
13 Oct 2011, 18:44
Has anyone seen similar questions on the GMAT? These are really tough ones...
Intern
Joined: 28 Jul 2011
Posts: 12
### Show Tags
14 Oct 2011, 02:43
AB + CD = AAA, where AB and CD are two-digit numbers and AAA is a three-digit number; A, B, C, and D are distinct positive integers. In the addition problem above, what is the value of C?
In this, how can you say that AAA should be 111? I didn't get it.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 6681
Location: Pune, India
### Show Tags
14 Oct 2011, 07:25
Expert's post
divyaverma wrote:
AB + CD = AAA, where AB and CD are two-digit numbers and AAA is a three-digit number; A, B, C, and D are distinct positive integers. In the addition problem above, what is the value of C? In this, how can you say that AAA should be 111? I didn't get it.
  A B
+ C D
-----
A A A
Notice here that B + D ends with A (there could be a carry over, so I cannot say that B + D = A). Also, A + C ends with A. When can this happen? Think of 2 + x = ...2. What can you say about x? We can say that it must end with 0; only then will you have 2 at the end. Since all letters represent positive integers, C cannot be 0. Then it must be 9, and there must have been a carry over from the previous addition to make 10. Then A + 10 will end with A, and there will be a carry over of 1 which will appear as the hundreds digit. Since the hundreds digit is A, it must be equal to 1.
_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog
Get started with Veritas Prep GMAT On Demand for $199
Veritas Prep Reviews
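Karishma's deduction above (A must be 1 and C must be 9) can also be confirmed by a brute-force search over digit assignments; this short sketch is not from the thread, just a mechanical check:

```python
# Digits A, B, C, D are distinct positive integers (1-9); AB and CD are the
# two-digit numbers 10A+B and 10C+D, and AAA is the three-digit number 111*A.
solutions = set()
for a in range(1, 10):
    for b in range(1, 10):
        for c in range(1, 10):
            for d in range(1, 10):
                if len({a, b, c, d}) != 4:
                    continue
                if (10 * a + b) + (10 * c + d) == 111 * a:
                    solutions.add((a, c))
print(solutions)   # → {(1, 9)}: every solution forces A = 1 and C = 9
```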
Intern
Joined: 14 Oct 2011
Posts: 1
Followers: 0
Kudos [?]: 0 [0], given: 1
### Show Tags
14 Oct 2011, 12:25
Thanks a lot for sharing!
Intern
Joined: 10 Jul 2011
Posts: 3
Followers: 0
Kudos [?]: 0 [0], given: 3
### Show Tags
15 Oct 2011, 10:34
Thanks!....Kudos.
Intern
Joined: 05 Sep 2011
Posts: 44
Followers: 1
Kudos [?]: 4 [0], given: 11
### Show Tags
17 Oct 2011, 05:25
Thanks a lot for sharing with us :D
Intern
Joined: 07 Oct 2011
Posts: 1
Followers: 0
Kudos [?]: 0 [0], given: 0
### Show Tags
20 Oct 2011, 08:59
Good stuff! Thanks!
Intern
Joined: 31 Jul 2011
Posts: 2
GMAT 1: Q V
GPA: 3.99
Followers: 0
Kudos [?]: 5 [0], given: 2
### Show Tags
23 Oct 2011, 21:01
You just took away hours from sleep. I'll go over these tonight. Thanks!
Manager
Joined: 10 Jul 2010
Posts: 196
Followers: 1
Kudos [?]: 19 [0], given: 12
### Show Tags
08 Nov 2011, 19:50
great post, thanks!
Manager
Joined: 11 Nov 2011
Posts: 76
Location: United States
Concentration: Finance, Human Resources
GPA: 3.33
WE: Consulting (Non-Profit and Government)
Followers: 1
Kudos [?]: 6 [0], given: 76
### Show Tags
20 Nov 2011, 17:48
thanks for sharing
Manager
Joined: 16 Dec 2009
Posts: 75
GMAT 1: 680 Q49 V33
WE: Information Technology (Commercial Banking)
Followers: 1
Kudos [?]: 37 [0], given: 11
### Show Tags
24 Nov 2011, 11:27
Thanks a lot. Great compilation..
_________________
If Electricity comes from Electrons , Does Morality come from Morons ??
If you find my post useful ... then please give me kudos ......
h(n) defined as product of even integers from 2 to n
Number N divided by D leaves remainder R
Ultimate list of MBA scholarships for international applicants
Intern
Joined: 15 Nov 2011
Posts: 40
Location: Kenya
Followers: 0
Kudos [?]: 3 [0], given: 245
### Show Tags
01 Dec 2011, 07:18
Thanks for the share...!!
|
{}
|
# Unit circle
In mathematics, a unit circle is a circle with a radius of 1. The equation of the unit circle is ${\displaystyle x^{2}+y^{2}=1}$. The unit circle is centered at the origin, the point with coordinates (0,0). It is often used in trigonometry.
In a unit circle, where ${\displaystyle t}$ is the desired angle, ${\displaystyle x}$ and ${\displaystyle y}$ can be defined as ${\displaystyle \cos(t)=x}$ and ${\displaystyle \sin(t)=y}$ . Using the equation of the unit circle, ${\displaystyle x^{2}+y^{2}=1}$ , another identity is found: ${\displaystyle \cos ^{2}(t)+\sin ^{2}(t)=1}$ . When working with trigonometric functions, it is usually enough to use angles with measures between 0 and ${\displaystyle \pi \over 2}$ radians, or 0 through 90 degrees. It is possible to have larger angles than that, however. Using the unit circle, two identities can be found: ${\displaystyle \cos(t)=\cos(2\cdot \pi k+t)}$ and ${\displaystyle \sin(t)=\sin(2\cdot \pi k+t)}$ for any integer ${\displaystyle k}$ .
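These identities can be spot-checked numerically; the following is a minimal sketch at a few sampled angles, not a proof:

```python
import math

# Pythagorean identity and 2*pi*k periodicity at a few sampled angles.
for t in [0.0, 0.3, math.pi / 4, 1.0, 2.5]:
    assert abs(math.cos(t) ** 2 + math.sin(t) ** 2 - 1) < 1e-12
    for k in range(-2, 3):
        assert abs(math.cos(2 * math.pi * k + t) - math.cos(t)) < 1e-9
        assert abs(math.sin(2 * math.pi * k + t) - math.sin(t)) < 1e-9
print("identities hold at the sampled angles")
```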
|
{}
|
Augmented Dickey-Fuller Test/ Unit Root test on multiple time series dataframe in R
I have a dataset/dataframe in which I have calculated the daily log returns of five thousand companies; the companies are the columns. I want to carry out the ADF test on this dataframe. I have found how to run the ADF test on a vector, but could not find how to run it on a dataframe or matrix structure. Additionally, how can I leave out the date column when running the ADF test on the companies?
The picture illustrates some portion of my dataset. The code I ran and error I received are as follows
library(tseries)
adf.test(logs, alternative = c("stationary", "explosive"), k = trunc((length(1) - 1)^(1/3)))
Error in adf.test(logs, alternative = c("stationary", "explosive"), k = trunc((length(1) - : x is not a vector or univariate time series
• I did not downvote your question but I can understand why somebody did: Please show your code and pinpoint exactly to the parts where you encounter problems. – vonjd Dec 30 '15 at 10:54
• I hope this additions to the question helps to understand what I'm trying to estimate. – Aquarius Dec 30 '15 at 11:07
There are two ways you can perform the ADF test on a data frame: either write a loop applying the test to all the columns, or use the apply function on your data. For leaving out the first column, just create another data frame like this: da = yourDataName[,-1]. The code for the ADF test would be something like apply(da, 2, adfTest, lags=0, type="c"). The 2 says that the function adfTest should be applied to the columns; adfTest is from the package fUnitRoots; lags=0 means it does not lag the series when performing the test; and type="c" means it includes a constant. I don't like the test from the timeSeries package because it will lag the series automatically, so you will "always" get a stationary series.
• I ran the test according to the code you specified, but I received the error Error in res.sum$coefficients[coefNum, 1] : subscript out of bounds – Aquarius Jan 4 '16 at 11:46
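The column-wise pattern the answer describes (apply a test to every column, skipping the date column) is language-independent; here is a Python illustration of the same idea. The data values are hypothetical, and the lag-1 autocorrelation is only a stand-in statistic for the real ADF test (adfTest / adf.test):

```python
# Toy stand-in for the R workflow: a dict of columns, with "date" excluded.
data = {
    "date": ["2015-01-01", "2015-01-02", "2015-01-03", "2015-01-04"],
    "co_a": [0.01, -0.02, 0.03, -0.01],
    "co_b": [0.00, 0.01, -0.01, 0.02],
}

def lag1_autocorr(xs):
    """Stand-in statistic; in practice call an ADF implementation here."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i - 1] - mean) for i in range(1, n))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

# Column-wise application, skipping the date column -- the same idea as
# apply(da, 2, adfTest, ...) after da = yourDataName[,-1] in the answer.
results = {col: lag1_autocorr(vals)
           for col, vals in data.items() if col != "date"}
print(sorted(results))   # → ['co_a', 'co_b']
```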
|
{}
|
# Latex - How to type these symbols over chars in matrix
I want to ask how I can type this matrix, especially the ^ over the alphabetic characters.
I have something like this:
\begin{eqnarray}
\begin{pmatrix}
a+b & \overset{}{\epsilon + \omega} & \overset{}{\pi} \\
\vec{a} & \overleftrightarrow{AC} & \beta
\end{pmatrix}
\end{eqnarray}
• Are you looking for \hat{a}? – Au101 Mar 12 '18 at 21:45
• Don't use eqnarray -- it's badly deprecated. For the equation in your example, just use an equation environment. – Mico Mar 12 '18 at 21:47
• For hat symbols over lowercase letters, use \hat. For hat symbols over uppercase Latin and Greek letters, use \widehat. For really wide hat symbols, see the posting Really wide hat symbol. – Mico Mar 12 '18 at 21:53
• Yes, its \widehat and \hat. Thank you so much guys. – Oggy Mar 12 '18 at 21:56
FWIW, in ConTeXt \widehat always expands to the width of its argument. So, simply using \widehat{ξ + ω} works.
\definemathmatrix[pmatrix][matrix:parentheses][simplecommand=pmatrix]
\starttext
\startformula
\pmatrix{ a + b, \widehat{ξ + ω}, \hat π;
\vec a, \overleftrightarrow{AC}, β}
\stopformula
\stoptext
which gives
One way to produce the really-wide hat symbol is to employ the mtpro2 package. Note that the full mtpro2 package is not free of charge. However, its lite subset -- which is all that's needed to produce the screenshot of interest -- is in fact free of charge.
\documentclass{article}
\usepackage{amsmath} % for 'pmatrix' env.
\usepackage{times} % Times Roman text font
\usepackage[lite]{mtpro2} % Times Roman math font
\begin{document}
$$\begin{pmatrix} a+b & \widehat{\xi + \omega} & \hat{\pi} \\ \vec{a} & \overleftrightarrow{AC} & \beta \end{pmatrix}$$
\end{document}
With the default fonts, you have two possibilities for the widehat: either the extensible character \widehat from yhmath or the character of the same name in mathabx. I don't load here mathabx, but define the latter character as \varwidehat. I also suggest the old-arrows package so that \overleftrightarrow doesn't touch capital letters:
\documentclass[12pt, a4paper]{article}
\usepackage{amsmath}%
\usepackage{yhmath}
\usepackage{old-arrows}
\DeclareFontFamily{U}{mathx}{\hyphenchar\font45}
\DeclareFontShape{U}{mathx}{m}{n}{
<5> <6> <7> <8> <9> <10>
<10.95> <12> <14.4> <17.28> <20.74> <24.88>
mathx10
}{}
\DeclareSymbolFont{mathx}{U}{mathx}{m}{n}
\DeclareFontSubstitution{U}{mathx}{m}{n}
\DeclareMathAccent{\varwidebar}{0}{mathx}{"73}
\DeclareMathAccent{\varwidehat}{0}{mathx}{"70}
\begin{document}
$\begin{pmatrix} a+b & \widehat{\xi + \omega} & \overset{}{\pi} \\ \vec{a} & \overleftrightarrow{AC} & \beta \end{pmatrix}\qquad \begin{pmatrix} a+b & \varwidehat{\xi + \omega} & \overset{}{\pi} \\ \vec{a} & \overleftrightarrow{AC} & \beta \end{pmatrix}$%
\end{document}
|
{}
|
## Cryptology ePrint Archive: Report 2019/538
On Perfect Endomorphic Ciphers
Nikolay Shenets
Abstract: It has been 70 years since the publication of the seminal, outstanding work of Claude Elwood Shannon, in which he first gave a mathematical definition of a cryptosystem and introduced the concept of perfect ciphers. He also examined the conditions under which such ciphers exist. Shannon's results, in one form or another, are presented in almost all books on cryptography. One of his results deals with so-called endomorphic ciphers, in which the cardinalities of the message space $\mathcal{M}$ and the ciphertext space $\mathcal{C}$ are the same. The Vernam cipher (one-time pad) is the most famous representative of such ciphers. Moreover, it's the only one known to be perfect.
Surprisingly, we have found a mistake in Shannon's result. Namely, Shannon stated that an endomorphic cipher, in which the keyspace $\mathcal{K}$ has the same cardinality as the message space, is perfect if and only if two conditions are satisfied. The first says that for any plaintext-ciphertext pair there exists only one key that translates this plaintext into this ciphertext. The second says that the key distribution must be uniform. We show that these two conditions are not really enough. We prove in three different ways that the plaintexts must also be equally probable. Moreover, we study the general endomorphic cipher and get the same result. It follows that, in practice, perfect endomorphic ciphers do not exist.
Category / Keywords: foundations / Perfect security, Endomorphic cipher, Shannon's theory
Date: received 17 May 2019, last revised 17 May 2019
Contact author: shenets at ibks spbstu ru
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2019/538
[ Cryptology ePrint archive ]
|
{}
|
# How to exclude sections of bad data from time-series data before training an LSTM network
I am using an LSTM network for predicting IoT time-series data received from unreliable devices and networks.
This results in multiple bad sections [continuous streaks of bad data lasting several days, until the problem is fixed].
I need to exclude these bad-data sections before feeding the data to model training.
Since I am using an LSTM-RNN network, the data must be unrolled into windows based on the previous records.
How can I properly exclude this bad data?
I thought of an approach of training the model separately on each batch of good data, and using each subsequent good-data batch for fine-tuning the model.
Please let me know if this is a good approach, or whether there is a better method.
example data:
"1-01",266.0
"1-02",145.9
"1-03",183.1
"1-08",224.5
"1-09",192.8
"1-10",122.9
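One common way to handle this (a sketch of one option, not the only approach) is to split the series into contiguous good segments at the gaps, then unroll training windows only within each segment, so no window ever straddles a bad section. Using day-indexed readings shaped like the example data above, where days 4-7 are missing:

```python
# Hypothetical day-indexed readings; days 4-7 are missing (the bad section).
readings = [(1, 266.0), (2, 145.9), (3, 183.1),
            (8, 224.5), (9, 192.8), (10, 122.9)]

def contiguous_segments(rows):
    """Split (day, value) rows into runs wherever the day index jumps."""
    segments, current = [], [rows[0]]
    for prev, cur in zip(rows, rows[1:]):
        if cur[0] == prev[0] + 1:
            current.append(cur)
        else:
            segments.append(current)
            current = [cur]
    segments.append(current)
    return segments

def windows(segment, size):
    """Unrolled training windows drawn from a single segment only."""
    vals = [v for _, v in segment]
    return [vals[i:i + size] for i in range(len(vals) - size + 1)]

segs = contiguous_segments(readings)
print(len(segs))   # → 2: no training window ever straddles the gap
```

Each segment can then feed the LSTM independently, which is essentially the batch-by-batch training idea described in the question.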
|
{}
|
# Recent and upcoming talks by Jacek Tryba
## Jacek Tryba: Homogeneity of ideals
Tuesday, March 6, 2018, 17:15
Wrocław University of Technology, 215 D-1
Speaker: Jacek Tryba (University of Gdansk)
Title: Homogeneity of ideals
Abstract: The homogeneity family of the ideal $\mathcal{I}$ is a family of subsets such that the restriction of $\mathcal{I}$ to this subset is isomorphic to $\mathcal{I}$.
|
{}
|
## MSC coordinate system: Theta and Phi
Chandra uses a number of coordinate systems to describe locations. The MSC coordinate system is defined by the Theta and Phi axes, which give the off-axis angle and azimuth coordinates of the mirror.
|
{}
|
• # question_answer Solve for $x:\frac{1}{(x-1)(x-2)}+\frac{1}{(x-2)(x-3)}=\frac{2}{3},x\ne 1,2,3$
We have, $\frac{1}{(x-1)(x-2)}+\frac{1}{(x-2)(x-3)}=\frac{2}{3},x\ne 1,2,3$ $3(x-3)+3(x-1)=2(x-1)(x-2)(x-3)$ $3x-9+3x-3=2(x-1)(x-2)(x-3)$ $6x-12=2(x-1)(x-2)(x-3)$ $6(x-2)=2(x-1)(x-2)(x-3)$ $3=(x-1)(x-3)$ $3={{x}^{2}}-3x-x+3$ ${{x}^{2}}-4x=0$ $x(x-4)=0$ $\therefore$ $x=0$ or $4$
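Both roots can be verified by substituting back into the original equation; a short check using exact rational arithmetic:

```python
from fractions import Fraction

def lhs(x):
    """Left-hand side 1/((x-1)(x-2)) + 1/((x-2)(x-3)), computed exactly."""
    x = Fraction(x)
    return 1 / ((x - 1) * (x - 2)) + 1 / ((x - 2) * (x - 3))

for root in (0, 4):
    assert lhs(root) == Fraction(2, 3)
print("x = 0 and x = 4 both satisfy the equation")
```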
|
{}
|
# How can you highlight a C compiler directive spanning multiple lines?
In my \lstset{...} I have morecomment=[l][{\color[rgb]{0.1, 0.2, 0.8}}]{\#}, which I've seen used to give preprocessor commands colour. This works just fine but I'd like to get multi-line macros to be highlighted too, for example:
\begin{lstlisting}
#define MAX(a, b) \
((a)>(b)?(a):(b))
\end{lstlisting}
What's a good way to get listings to match multi-line macros?
[EDIT]
very well, here's a complete example with \documentclass etc... :P
\documentclass{article}
\usepackage{color}
\usepackage{listings}
\lstset{language=C,
morecomment=[l][{\color[rgb]{0.1, 0.2, 0.8}}]{\#}
}
\begin{document}
\begin{lstlisting}
#define MAX(a, b) \
((a)>(b)?(a):(b))
\end{lstlisting}
\end{document}
I'd like the second line of the macro to be blue too, based on a rule relating to the \.
• Please add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. Side-question: can you actually do line continuation with a backslash in C? Not to my knowledge... Feb 14 '14 at 9:35
• @Jubobs done. not in C exactly, but most compilers provide a C preprocessor in which macros may escape newlines with a \. It's just a readability thing afaik. Feb 14 '14 at 10:01
• @Jubobs apologies for the delay. I wanted to give code a try first. Thanks for your time! Mar 17 '14 at 9:58
• No bother. Glad I was able to answer. Mar 17 '14 at 10:07
The listings package doesn't directly provide the means of highlighting a compiler directive that is continued over multiple lines. If you think that's a desirable feature, you might want to get in touch with the maintainer.
In the meantime, here is a possible implementation. It uses two switches with self-explanatory names to keep track of the context and patches listings in order to check, right before anything gets printed, whether we're still in a compiler directive; if so, the directive style is applied.
\documentclass{article}
\usepackage{etoolbox}
\usepackage{xcolor}
\usepackage{listings}
% ---------- Beginning of ugly internals ----------
\makeatletter
% switches to keep track of context
\newif\if@LastCharWasBackslash
\newif\if@DirectiveContinued
% --- hooking into listings ---
{%
\ifx\lst@lastother\lstum@backslash% % if the last character in
% \the\lst@token is a backslash...
\global\@LastCharWasBackslashtrue%
\else
\global\@LastCharWasBackslashfalse%
\fi
\@condApplyDirectiveStyle % Apply directive style if needed
}
{%
\global\@LastCharWasBackslashfalse% % Reset switch
\@condApplyDirectiveStyle% % Apply directive style if needed
}
% listings automatically exits CDmode at the EOL hook;
% we patch \lsthk@EOL so that it checks whether a compiler directive
% is continued on the next line and set the relevant switch accordingly.
\patchcmd{\lsthk@EOL}
{\ifnum\lst@mode=\lst@CDmode \lst@LeaveMode \fi}
{%
\global\@DirectiveContinuedfalse%
\ifnum\lst@mode=\lst@CDmode%
\lst@LeaveMode
\else
\if@LastCharWasBackslash%
\global\@DirectiveContinuedtrue%
\fi
\fi
}
{}{\@latex@error{\string\lsthk@EOL\space patch failed!}{}}
% --- two helper macros ---
\newcommand\@condApplyDirectiveStyle
{%
\ifnum\lst@mode=\lst@CDmode%
\@applyDirectiveStyle%
\fi
\if@DirectiveContinued%
\@applyDirectiveStyle%
\fi
}
\newcommand\@applyDirectiveStyle{\let\lst@thestyle\lst@directivestyle}
\makeatother
% ---------- End of ugly internals ----------
\lstset
{
language = C,
directivestyle = \color{blue},
}
\begin{document}
\begin{lstlisting}
#define MAX(a, b) \
((a)>(b)?(a):(b))
#define \
bar \
baz
foo \ bar
baz
\end{lstlisting}
\end{document}
Not an answer, but a workaround...
\documentclass{article}
\usepackage{color}
\usepackage{listings}
\lstset{language=C,
morecomment=[l][{\color[rgb]{0.1, 0.2, 0.8}}]{\#},
moredelim=[il][{\color[rgb]{0.1, 0.2, 0.8}}]{@},
}
\begin{document}
\begin{lstlisting}
#define MAX(a, b) \
@ ((a)>(b)?(a):(b))
\end{lstlisting}
\end{document}
Not sure if this is the right way to colour arbitrary lines, but the moredelim option with i hides the @ character in the macro and colours the rest of the line just like morecomment does.
|
{}
|
Lemniscates
The Lemniscate of Bernoulli
Jacob Bernoulli first described his curve in 1694 as a modification of an ellipse. He named it the “Lemniscus", from the Latin word for “pendant ribbon”, for, as he said, it was “Like a lying eight-like figure, folded in a knot of a bundle, or of a lemniscus, a knot of a French ribbon”. At the time he was unaware of the fact that the lemniscate is a special case of the “Cassinian Oval”, described by Cassini in 1680. The original form that Bernoulli studied was the locus of points satisfying the equation
The Parameterization of the “Lemniscate of Bernoulli”
Cartesian equation:
Using the equations of transformation...
We have,
Thus, the parametric equations are:
theta = 0:.005:2*pi ;
x = cos(theta).*sqrt(cos(2.*theta));
y = sin(theta).*sqrt(cos(2.*theta));
h = plot(x,y); axis equal
set(h,'Color','r','Linewidth',3);
xl = xlabel('0 \leq \theta \leq 2\pi','Color','k');
set(xl,'Fontname','Euclid','Fontsize',18);
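The parametric equations plotted by the MATLAB code above can be cross-checked against the Cartesian form of the unit-scale Lemniscate of Bernoulli, (x^2 + y^2)^2 = x^2 - y^2, which is equivalent to the polar equation r^2 = cos(2θ); a small numerical sketch:

```python
import math

# Check that x = cos(t)*sqrt(cos 2t), y = sin(t)*sqrt(cos 2t) satisfies
# (x^2 + y^2)^2 = x^2 - y^2, sampling only angles where cos(2t) >= 0
# so the square root stays real.
for theta in [0.0, 0.2, 0.5, math.pi / 4, 3.0]:
    r = math.sqrt(math.cos(2 * theta))
    x, y = math.cos(theta) * r, math.sin(theta) * r
    assert abs((x * x + y * y) ** 2 - (x * x - y * y)) < 1e-12
print("sampled parametric points satisfy (x^2 + y^2)^2 = x^2 - y^2")
```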
The Area of the Lemniscate of Bernoulli
Polar equation:
The Lemniscate of Bernoulli is a special case of the “Cassinian Oval”, which is the locus of a point P, the product of whose distances from two foci, 2a units apart, is constant and equal to
[x,y] = meshgrid(-2*pi:.01:2*pi);
a = 5;
z = sqrt((x-a).^2+y.^2).*sqrt((x+a).^2+y.^2);
contour(x,y,z,25); axis('equal','square');
xl = xlabel('-2\pi \leq {\it{x,y}} \leq 2\pi');
set(xl,'Fontname','Euclid','Fontsize',14);
title('The Cassinian Oval','Fontsize',12)
a = 2; b = 2;
[x,y] = meshgrid(-5:.01:5);
colormap('jet');axis equal
z = ((x-a).^2+y.^2).*((x+a).^2+y.^2)-b^4;
contour(x,y,z,0:6:60);
set(gca,'xtick',[],'ytick',[]);
xl = xlabel('-2\pi \leq {\it{x,y}} \leq 2\pi');
set(xl,'Fontname','Euclid','Fontsize',14);
title('The Cassinian Oval','Fontsize',12)
The “Lemniscate of Gerono” is named for the French mathematician Camille-Christophe Gerono (1799–1891). Though it was not discovered by Gerono, he studied it extensively. The name was officially given in 1895 by Aubry.
The Lemniscate of Gerono: Parameterization
Thus, the Parametric equations are,
theta = 0:.001:2*pi ;
r = (sec(theta).^4.*cos(2.* theta)).^(1/2);
x = r.*cos(theta);
y = r.*sin(theta);
plot(x,y,'color',[.782 .12 .22],'Linewidth',3);
set(gca,'Fontsize',10);
xl = xlabel('0 \leq \theta \leq 2\pi');
set(xl,'Fontname','Euclid','Fontsize',18,'Color','k');
Lemniscate of Gerono
Polar Curve
Construction of the Lemniscate of Gerono
Let there be a unit circle centered on the origin. Let P be a point on the circle. Let M be the intersection of x = 1 and a horizontal line passing through P. Let Q be the intersection of the line OM and a vertical line passing through P. The trace of Q as P moves around the circle is the Lemniscate of Gerono.
The “Lemniscate of Booth”
When the curve consists of a single oval, but when it reduces to two tangent circles. When the curve becomes a lemniscate, with the case of producing the “Lemniscate of Bernoulli”.
[x,y] = meshgrid(-pi:.01:pi);
c = (1/4)*((x.^2+y.^2)+(4.*y.^2./(x.^2+y.^2)));
contour(x,y,c,12); axis('equal','square');
set(gca,'xtick',[],'ytick',[]);
xl = xlabel('-\pi \leq {\it{x,y}} \leq \pi');
set(xl,'Fontname','Euclid','Fontsize',9);
|
{}
|
## OG: For all real numbers a, b, c, d, e, and f, the operation @ is defined by the equation
##### This topic has expert replies
Master | Next Rank: 500 Posts
Posts: 394
Joined: 02 Jul 2017
Thanked: 1 times
Followed by:5 members
### OG: For all real numbers a, b, c, d, e, and f, the operation @ is defined by the equation
by AbeNeedsAnswers » Fri Jun 05, 2020 6:23 pm
For all real numbers a, b, c, d, e, and f, the operation @ is defined by the equation (a, b, c) @ (d, e ,f) = ad + be + cf. What is the value of (1, -2, 3) @ (1, -1/2, 1/3)?
A. -1
B. 5/6
C. 1
D. 5/2
E. 3
E
Legendary Member
Posts: 2214
Joined: 02 Mar 2018
Followed by:5 members
### Re: OG: For all real numbers a, b, c, d, e, and f, the operation @ is defined by the equation
by deloitte247 » Sat Jun 20, 2020 2:24 am
$$Given\ that:\ (a,b,c)\ @\ (d,e,f)=(a*d)+(b*e)+(c*f)$$
$$\left(1,-2,3\right)@\left(1,\frac{-1}{2},\frac{1}{3}\right)$$
$$=>\left(1\cdot1\right)+\left(-2\cdot\frac{-1}{2}\right)+\left(3\cdot\frac{1}{3}\right)$$
$$=>\ 1+1+1$$
$$=>\ 3$$
$$Answer\ =\ E$$
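The @ operation is just the dot product of two triples, so the arithmetic above can be checked mechanically; a minimal Python sketch using exact fractions (the helper name is my own):

```python
from fractions import Fraction

def at_op(u, v):
    """(a, b, c) @ (d, e, f) = ad + be + cf."""
    return sum(a * b for a, b in zip(u, v))

result = at_op((1, -2, 3), (1, Fraction(-1, 2), Fraction(1, 3)))
assert result == 3  # answer E
```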
|
{}
|
# Is there a mathematical concept of fractions using transfinite numbers as numerators and denominators?
While looking at Cantor's method of proof, which he used to show that the set of rational numbers is countable and has the same cardinality (aleph-naught) as the set of natural numbers, I noticed that if there were fractions using transfinite numbers as their numerators and denominators, then those infinitely precisely defined fractions could be used within Cantor's zigzag counting grid to address not only all the rational numbers but all the real numbers (of course only in theory, because transfinite numbers usually cannot be written down or spoken out very easily).
So my question is as stated above: Is there a mathematical concept of fractions using transfinite numbers as numerators and denominators? If yes, what is the name for this kind of fraction? Or is there a reason why one shouldn't use something like this?
A simple example of such a fraction would be one whose numerator is an infinite sequence of 1s and whose denominator is an infinite sequence of 2s.
A more complex example would be a fraction whose numerator consists of the decimal places of Pi and whose denominator consists of the decimal places of 2^0.5.
• I have added the English equivalent link since this site's language is English; also added number-theory. You might want to add some background to the question so we can see where you are and where this question comes from. ($\pm 0$) – AlexR Aug 6 '14 at 22:55
• Field of fractions might be of interest to you. – user98602 Aug 6 '14 at 22:56
• The English Wikipedia article "Cantor's diagonal argument" is not equivalent to the German article "Cantors erstes Diagonalargument". Instead, the English article corresponds to the German article "Cantors zweites Diagonalargument", which is about a related but different proof by Cantor. Unfortunately there doesn't seem to be an English version of the German article "Cantors erstes Diagonalargument", which is about Cantor's proof that the rational numbers are countable. – jimmyorpheus Aug 6 '14 at 23:28
• Take a look at the surreal numbers. This a framework which you can add, subtract, multiply and divide all sorts of infinite and infinitesimal numbers. – Jair Taylor Aug 6 '14 at 23:35
• Take a look at the notion of non-Archimedean fields, one of which is the surreal numbers suggested by @Jair Taylor. – Asaf Karagila Aug 6 '14 at 23:40
It's not hard to construct examples.
For example, you could consider the ring of all polynomials in $x$ with real coefficients such that $x$ is greater than every real number, and thus transfinite. Then the fractions -- the rational functions -- would be of the form you ask for.
Similarly, the hyperrational numbers from nonstandard analysis would be another example: each has a numerator and denominator that is a hyperinteger, and those can be transfinite. This is probably closer to what you have in mind.
Guessing at how it applies to your motivation, the problem is that the hyperrational numbers are too precisely defined: to every irrational real number, there are hyperrational numbers that are infinitesimally close, but none of them are actually equal to the real number. However, you can always round one to its 'standard part'.
That said, externally, the hyperintegers (and the hyperrationals) are uncountable too, so you can't have a (countable) list that contains all of them.
Also, it is important to note that the transfinite numbers that appear in examples like the above have absolutely nothing to do with set theory; they have no relation to the sizes of sets.
• Technically "transfinite" does not mean "non-finite." Transfinite numbers generally refer to cardinal and ordinal numbers only. en.wikipedia.org/wiki/Transfinite_number – Thomas Andrews Aug 6 '14 at 23:50
• I find the first example to be lacking. While it's of course true, perhaps it's good to remember from time to time that for most people "numbers come from somewhere" (usually $\Bbb c$ or $\Bbb R$) and to say that $x$ is larger than all the real numbers raises the question "Where did it come from, and how come we didn't know about it before?". – Asaf Karagila Aug 6 '14 at 23:50
• @Hurkyl, thanks for a nice post. You should add that the hyperrationals actually provide an answer to the OP's question: since they surject to the reals, the cardinality of the reals is dominated by that of the hyperrationals. This was I think the main thrust of his question (analogy with proof of countability of the rationals) so he might be interested in this. – Mikhail Katz Aug 7 '14 at 8:25
|
{}
|
Volume
Volume of Pyramids
The volume of a pyramid is given by:
Volume = $\frac{1}{3}$ × (Area of base) × Height, i.e. V = $\frac{1}{3}$AH, where A is the area of the base of the pyramid and H is the height.
The rule for finding the volume of a pyramid is the same no matter what polygon forms the base. Of course, how you find the area of the base will depend on the polygon that forms the base.
Find the volume of the square pyramid:
V = $\frac{1}{3}$BH. The base area is B = $30^2$ = 900 cm$^2$. Using the Pythagorean theorem: $H^2 = 17^2 - 15^2 = 289 - 225 = 64$, so H = 8 cm. Therefore V = $\frac{1}{3}\cdot 900\cdot 8$ = 2400 cm$^3$.
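The worked example above can be checked in a few lines; a minimal Python sketch (the 17 cm value is the slant height read off the figure, and the helper name is my own):

```python
def square_pyramid_volume(base_side, slant_height):
    """V = (1/3) * B * H, with the height H recovered from the slant
    height via the Pythagorean theorem (half the base side is one leg)."""
    B = base_side ** 2
    H = (slant_height ** 2 - (base_side / 2) ** 2) ** 0.5
    return B * H / 3

assert square_pyramid_volume(30, 17) == 2400.0
```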
Find s, given V = 48$\sqrt{3}$ cm$^3$ and H = 8 cm, for a pyramid whose base is an equilateral triangle with side s.
V = $\frac{1}{3}$BH, so 48$\sqrt{3}$ = $\frac{1}{3}$·B·8, which gives B = 18$\sqrt{3}$. Now look at the base: $A=\frac{bh}{2}$, so $18\sqrt{3}=\frac{s\cdot\frac{\sqrt{3}}{2}s}{2}=\frac{s^{2}\sqrt{3}}{4}$. Then 72$\sqrt{3}$ = $s^{2}\sqrt{3}$, so $s^{2}$ = 72 and s = $\sqrt{72}$ = $\sqrt{36}$$\sqrt{2}$ = 6$\sqrt{2}$.
|
{}
|
### CAT 2007 Question 10
Instructions
Directions for the following two questions:
Let S be the set of all pairs (i, j) where 1 <= i < j <= n , and n >= 4 (i and j are natural numbers). Any two distinct members of S are called “friends” if they have one constituent of the pairs in common and “enemies” otherwise.
For example, if n = 4, then S = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)}. Here, (1, 2) and (1, 3) are friends, (1,2) and (2, 3) are also friends, but (1,4) and (2, 3) are enemies.
Question 10
# For general n, consider any two members of S that are friends. How many other members of S will be common friends of both these members?
Solution
For a given n, the number of elements in set S is $$^nC_2$$.
Let's say the two friends are (x, a) and (y, a).
These two friends involve 3 numbers in total and share exactly 1 common element, say a (both elements cannot be the same, or the two pairs would be identical).
They have 2 non-common elements, x and y.
The common friends are: the pair formed by the non-common elements of the friends, (x, y), plus the pairs containing the common element a other than the two friends themselves, i.e. (a, c), (a, d), and so on. That gives 1 + (n - 1 - 2) = n - 2.
For the example in the question, if the friends are (1, 2) and (1, 3), then the common friends are (2, 3) and all other pairs containing 1.
There are n - 1 = 3 pairs containing 1, namely (1, 2), (1, 3) and (1, 4); excluding the friends (1, 2) and (1, 3) themselves, only 1 other common friend remains. Hence the count is 1 + (n - 1) - 2 = n - 2.
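The n − 2 answer can also be brute-forced for small n; a quick Python check (helper names are my own):

```python
from itertools import combinations

def common_friend_counts(n):
    """For every friendly pair in S, count their common friends;
    return the set of counts observed (should be {n - 2})."""
    S = list(combinations(range(1, n + 1), 2))
    friends = lambda p, q: p != q and len(set(p) & set(q)) == 1
    counts = set()
    for p, q in combinations(S, 2):
        if friends(p, q):
            common = sum(1 for r in S if friends(r, p) and friends(r, q))
            counts.add(common)
    return counts

for n in (4, 5, 6, 7):
    assert common_friend_counts(n) == {n - 2}
```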
|
{}
|
## Google Sheets XLOOKUP does not work; the table needs to show the maximum date per person from the member list.
Problem 1:
`Page-2-Seznam članov > Column C`: it must contain the maximum dates.
The function must look up the name from `Page-2-Seznam članov > Column B` in the range `Page-3-Podaljsave` and return, for the same person, the maximum of `Page-3-Podaljsave > Column E`.
In `Page 2 > column C > cell C3` you can use `vlookup`.
``````=VLOOKUP(B3,'Page-3-Podaljsave'!\$B:\$E,4,false)
``````
Wrap the range in a `sort` function to guarantee that the output will be the max availability date.
``````=VLOOKUP(B3,sort('Page-3-Podaljsave'!\$B:\$E,4,false),4,false)
``````
Then use an `arrayformula` to cover the range.
``````=arrayformula(VLOOKUP(B3:B10,sort('Page-3-Podaljsave'!\$B:\$E,4,false),4,false))
``````
Be careful, the larger the range in this search, the slower the sheet will be to open / process new data.
Problem 2:
`Page-2-Seznam članov > Column D`: it must contain the maximum dates.
The function must look up the name from `Page-2-Seznam članov > Column B` in the range `Page-3-Podaljsave` and return, for the same person, the maximum of `Page-3-Podaljsave > Column F`.
Same idea:
``````=arrayformula(VLOOKUP(B3:B10,sort('Page-3-Podaljsave'!\$B:\$F,5,false),5,false))
``````
You can combine the two formulas in one table with that in `C3` :
``````={arrayformula(iferror(VLOOKUP(B3:B10,sort('Page-3-Podaljsave'!\$B:\$E,4,false),4,false))),arrayformula(iferror(VLOOKUP(B3:B10,sort('Page-3-Podaljsave'!\$B:\$F,5,false),5,false)))}
``````
Bonus: you can reduce `'Page-3-Podaljsave'!\$B:\$E` to a table with only two columns: `{'Page-3-Podaljsave'!\$B:\$B,'Page-3-Podaljsave'!\$E:\$E}` then use it in the function like this:
``````={
arrayformula(iferror(
VLOOKUP(B3:B10,
sort({'Page-3-Podaljsave'!\$B:\$B,'Page-3-Podaljsave'!\$E:\$E},2,false),
2,false))),
arrayformula(iferror(
VLOOKUP(B3:B10,
sort({'Page-3-Podaljsave'!\$B:\$B,'Page-3-Podaljsave'!\$F:\$F},2,false),
2,false)))
}
``````
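Conceptually, both problems reduce to "maximum of a date column, grouped by the name column". A minimal Python sketch of that same logic (the sample names and dates below are invented):

```python
from datetime import date

# Stand-in for Page-3-Podaljsave: (name, renewal date) rows,
# possibly several rows per person.
podaljsave = [
    ("Ana", date(2020, 1, 5)),
    ("Ana", date(2021, 3, 2)),
    ("Bor", date(2019, 6, 9)),
]

def max_date_per_person(rows):
    """Return {name: latest date} -- what the sorted VLOOKUP computes."""
    out = {}
    for name, d in rows:
        if name not in out or d > out[name]:
            out[name] = d
    return out

assert max_date_per_person(podaljsave)["Ana"] == date(2021, 3, 2)
```

This is also why the spreadsheet answer sorts descending first: `VLOOKUP` with exact match returns the first matching row, which after the sort is the maximum date.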
## Different maximum server memory on availability replicas
I have a scenario in SQL Server 2014 AlwaysOn High Availability where other services run in one of the secondary read-only replicas.
These services obviously require additional memory. Is there a good practice or a specific reason why SQL Server should have the same memory configuration on different replicas?
## macbook – How to force a Retina display to always keep the maximum resolution scaled?
On previous versions of OS X, and apparently even more so in Catalina, my MacBook changes the resolution of the built-in Retina panel to "Default for Display" instead of the "Scaled: More Space" setting. Is there any way to force the built-in panel to always stay on "Scaled: More Space"?
## dnd 5e – Does a troll die if its maximum hit points are zero?
The Troll has the Regeneration trait, which says:
The troll regains 10 hit points at the start of its turn. If the troll takes acid or fire damage, this trait doesn't function at the start of the troll's next turn. The troll dies only if it starts its turn with 0 hit points and doesn't regenerate.
I wonder what happens when a troll's maximum hit points have been reduced to zero. I don't know whether this method works to kill a troll, because I'm not sure whether it counts as not regenerating. Does the troll die?
## Changing the error message for the maximum upload file size
I would like to change the default error message for the maximum upload file size, but I cannot find the configuration.
I've looked at the general settings of the web form, the individual settings and the media file settings, but I have not found anything about it. The forum threads only discuss configuring the maximum file size, not customizing the error message.
Can anyone help me?
## Is the maximum size of the SD card for the S9 + really limited to 400 GB?
I have a Samsung S9 + and I plan to upgrade to 512GB or even 1 TB, as this would leave room for Kiwix ZIM files.
However, Samsung's marketing materials suggest a limit of 400 GB (400x1000x1000x1000 bytes). This seems arbitrary: it is not the usual power of two, and the difference between GiB and GB (which could be explained by space reserved in case of corruption) does not change that. Maybe there is a technical reason I am not aware of, but on the operating system side (Linux/Android) I do not understand why there would be a limitation to 400 GB. SDXC appears to be the supported standard, and it should support cards up to 2 TB (2x1024x1024x1024x1024 bytes).
Is this limitation simply due to what was available around the release of the S9+, is there an artificial limitation imposed by Samsung's customizations of Android, or is there perhaps no limitation at all?
## optimization to find the maximum sum of sigmoids with some constraints
I have a problem of maximizing a sum of sigmoid functions on different time instants with certain constraints.
Consider the standard sigmoid function $$f(x) = \frac{1}{1 + e^{-\alpha x}}$$ and its derivative $$f'(x) = \alpha f(x)(1 - f(x)).$$
In my case it is slightly different: the sigmoid function at time instant $$n$$ is defined as $$f_n(x) = \frac{1}{1 + e^{-\alpha(\frac{x}{y_n} - z)}},$$ where $$z > 0$$ and $$\alpha > 0$$ are constants and $$y_n > 0$$ is defined for each instant $$n$$.
I need to find $$x$$ which maximizes the following:
$$\sum_{n=1}^N \frac{1}{1 + e^{-\alpha(\frac{x}{y_n} - z)}} - cx \quad \text{such that} \quad x \le X, \qquad (1)$$
where $$c > 0$$ is a constant representing a cost value, $$X$$ is another constant representing the maximum value of $$x$$, and $$N$$ is the total number of time instants.
One of the solutions I thought of was to use the Lagrangian:
$$L(x, \lambda) = \sum_{n=1}^N \frac{1}{1 + e^{-\alpha(\frac{x}{y_n} - z)}} - cx - \lambda(x - X),$$
where $$\lambda$$ is the Lagrange multiplier; we can then take the derivative of equation (1) with respect to $$x$$.
I've tried this method, but after setting $$\frac{\partial L(x, \lambda)}{\partial x} = 0$$ I could not solve the resulting equation.
I am not sure whether this type of optimization problem (1) can be solved in closed form or not, and if it cannot, whether some kind of approximation/relaxation can be used to solve it.
I do not have much experience in the problem of optimization. I therefore hoped to get some help.
-Thank you
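Since the problem is one-dimensional and bounded above by X, a practical fallback when the stationarity condition has no closed form is a numerical search. A minimal Python sketch, with made-up values for alpha, z, c, X and the y_n (none are given in the post), and assuming x >= 0 as a lower bound:

```python
import math

# Assumed illustrative values -- alpha, z, c, X and the y_n are
# placeholders, not values from the post.
alpha, z, c, X = 2.0, 1.0, 0.05, 10.0
y = [0.5, 1.0, 2.0, 4.0]

def objective(x):
    """Sum of shifted/scaled sigmoids minus the linear cost c*x."""
    return sum(1.0 / (1.0 + math.exp(-alpha * (x / yn - z))) for yn in y) - c * x

# Dense grid search over [0, X]: a serviceable baseline even though
# the objective is generally not concave.
xs = [i * X / 10000 for i in range(10001)]
best_x = max(xs, key=objective)
```

A grid search is crude but safe here; with a concavity argument for a particular parameter range, a golden-section or Newton search on the same interval would converge much faster.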
## problem of optimization to find the maximum amount of sigmoids
I need to find the maximum $$x$$ for the following:
$$\sum_{n=1}^N \frac{1}{1 + e^{-\alpha(\frac{x}{y_n} - z)}} - cx \quad \text{such that} \quad x \le X$$
-Thank you
## This is what I came up with:
Feat: Ritual Caster. Use this to obtain Find Familiar. This will allow your familiar to use the Help action to give you advantage on an attack. Choose an owl so it can fly back out without provoking an opportunity attack.
Feat: Polearm Master. You will use it to gain a bonus action attack. In addition, you can keep moving backward so that the target has to move back into your reach and trigger another opportunity attack.
Fighter 20 (Samurai). Choosing Samurai gives you Rapid Strike at level 15. You can convert the advantage conferred by your familiar's Help action into an additional attack.
This gives you 5 attacks from your Attack action plus a bonus action attack and a potential reaction attack each turn, for a total of 6 to 7 melee weapon attacks per turn without expending resources.
## better cryptocurrency to invest for maximum earnings [on hold]
I am new; what is the best cryptocurrency to invest in for maximum earnings?
|
{}
|
# What is the relationship between the energy differences of the orbits in an atom and the light emitted by the atom?
When an electron makes a transition from a level of higher energy ${E}_{h}$ to a level of lower energy ${E}_{l}$, then it emits a single photon of frequency,
$\nu = \frac{{E}_{h} - {E}_{l}}{h}$ where $h$ is Planck's constant.
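As a concrete instance, using the Bohr-model hydrogen levels $E_n = -13.6/n^2$ eV (an assumption beyond the text above), the 3 → 2 transition gives the H-alpha frequency near 4.6 × 10^14 Hz:

```python
h = 6.626e-34        # Planck's constant, J*s
eV = 1.602e-19       # joules per electronvolt

def photon_frequency(E_h, E_l):
    """nu = (E_h - E_l) / h for a transition from E_h down to E_l."""
    return (E_h - E_l) / h

# Hydrogen Balmer H-alpha: n = 3 -> n = 2 (Bohr model energies)
E = lambda n: -13.6 / n**2 * eV
nu = photon_frequency(E(3), E(2))
```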
|
{}
|
1. ## statistics problem help
Consider the Discrete PDF
$\displaystyle f(x;q)=(q/2)^{|x|}(1-q)^{1-|x|}$ for $\displaystyle x=-1,0,1$ and $\displaystyle 0<q<1$
a) Is $\displaystyle f(x;q)$ a complete family? If so, why?
b) Find a complete sufficient statistic for $\displaystyle q$
c)Find the Uniformly Minimum Variance Unbiased Estimator for $\displaystyle q$
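Before tackling (a)-(c), it helps to confirm numerically that f is a valid pmf with P(X = 0) = 1 − q and P(X = ±1) = q/2, so that |X| is Bernoulli(q). A quick Python sanity check (this is only a check of the setup, not a solution):

```python
def f(x, q):
    """The pmf f(x; q) = (q/2)^|x| * (1-q)^(1-|x|) for x in {-1, 0, 1}."""
    return (q / 2) ** abs(x) * (1 - q) ** (1 - abs(x))

for q in (0.1, 0.5, 0.9):
    total = sum(f(x, q) for x in (-1, 0, 1))
    e_abs = sum(abs(x) * f(x, q) for x in (-1, 0, 1))
    assert abs(total - 1) < 1e-12   # valid pmf
    assert abs(e_abs - q) < 1e-12   # E|X| = q, i.e. |X| ~ Bernoulli(q)
```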
|
{}
|
# Is this set of 6 transformations fundamental to geometry?
Is there anything fundamental in geometry about this set of 6 transformations: Reflection, Shear, Rotation, Dilation, Squeeze and Translation?
I am looking for the cognitive or metaphysical foundations for geometry. I am especially interested in the ways we think mathematically in our imagination, implicitly, rather than the axiomatic systems that we define to constrain our work, explicitly.
I have a hypothesis that in our minds we make use of four geometries and six transformations between them. I will be presenting a related art project, a transparent doll house, at the Klaipeda Science and Art Festival on November 16. Which is to say, these are my explorations, and I am looking for connections, if any, with what is known in math.
I postulated this hypothesis after systematizing 24 ways of figuring things out in mathematics, based on a survey of methods described by George Polya, Paul Zeitz and others, as I overview in my talk, Discovery in Mathematics: A System of Deep Structure. There I imagine how four geometries (affine, projective, conformal, symplectic) could be related to how our minds generate (from the concepts of "center" and "totality") four infinite families of polytopes (simplexes, cross polytopes, cubes, demicubes) whose symmetry groups are also the Weyl groups for the root systems of the classical Lie algebras.
In math, the affine, projective, conformal and symplectic geometries have very specific meanings and so I will rename them below, respectively, as I understand them intuitively. I imagine our minds apply them to organize our expectations and evoke corresponding moods, as I note in my talk, A Research Program for a Taxonomy of Moods. And then I will be able to point out the 6 transformations which my question is about.
Vector geometry (affine): What can be constructed and deduced from one-directional vectors. For example, in my talk on moods, I consider the conditional sadness evoked by Li Bai's poem "Quiet Night Thoughts" in that beyond his bed, in the beauty of the moon and the surprise of frost-like ground is also his happy home.
Line geometry (projective): What can be constructed and deduced from two-directional lines. Suppose that the poet can look back and forth at themselves in time or otherwise.
Coordinate geometry (conformal): What can be constructed and deduced from an orthogonal basis. We can consider how people's expectations and moods are "perpendicular" or "parallel" or are given by some angle.
Sweep geometry (symplectic): What can be constructed and deduced from sweeping out an area (or volume etc.) by holding one dimension fixed while varying another. I am trying to imagine area in a dynamic sense, as I suppose happens in multiplying Position x Momentum. In my talk, I discuss the Beatles' song "She Loves You", where one person's mood is fixed while another person's mood changes.
In reading about geometry, it seems to me that, in practice, such different mindsets are not kept separate. For example, I appreciate very much Norman Wildberger's videos on Universal Hyperbolic Geometry, but I imagine that his very use of algebraic coordinates means that, in actuality, his geometric approach is not affine or projective but conformal. That is fine mathematically, but it may obscure what we do cognitively. Similarly, I think that Geometric algebra, Clifford algebra and visual complex analysis are all by nature symplectic. I am simply trying to tease apart the layers of geometry, cognitively.
At this page at Sylvain Poirier's website, I found a list of 6 transformations. I am wondering if they are, in any sense, fundamental. I interpret them below as contributing the precision needed for a geometry that is more vague to be understood as more specific.
• Reflection takes us from Vector geometry to Line geometry. A vector (and the ray it builds) can be flipped back and forth within a two-directional line, that is, the vector is given a precise orientation.
• Shear mapping takes us from Vector geometry to Coordinate geometry. A vague parallelogram can be made precise as a rectangle.
• Rotation around an origin takes us from Line geometry to Coordinate geometry. A location on a line through the origin can be given precisely as projections onto a coordinate grid, thus identifying the line with a rotation.
• Dilation takes us from Coordinate geometry to Sweep geometry. An angular shape is sized as needed so that it has a specific area.
• Squeeze mapping takes us from Line geometry to Sweep geometry. It balances the contributions that different axes will make to an overall area.
• Translation takes us from Vector geometry to Sweep geometry. It sweeps a vector (discretely or continuously) to define a dynamic area (inheriting a well defined geometry).
To be as clear as I can, I will describe these transformations algebraically, although, as I mentioned above, that imposes a coordinate system which I think is not present in the Vector and Line geometries as I understand them.
• Reflection: $(x)\rightarrow(-x)$
• Shear: $(x,y)\rightarrow(x + ay, y)$
• Rotation: $(x,y)\rightarrow(x\cos(a) − y\sin(a), x\sin(a) + y\cos(a))$
• Dilation: $(x,y)\rightarrow(ax, ay)$
• Squeeze: $(x,y)\rightarrow(x/a, ay)$
• Translation: $(x)\rightarrow(x+a)$
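The six maps above are easy to experiment with numerically. A minimal Python sketch (plain floats, with no claim about the geometric interpretations above) that also checks two invariants, rotation preserving length and squeeze preserving the product x·y:

```python
import math

def reflection(x):
    return -x

def shear(x, y, a):
    return (x + a * y, y)

def rotation(x, y, a):
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def dilation(x, y, a):
    return (a * x, a * y)

def squeeze(x, y, a):
    return (x / a, a * y)

def translation(x, a):
    return x + a

# Two sample invariants: rotation preserves length,
# squeeze preserves the product x*y.
x, y, a = 3.0, 4.0, 0.7
rx, ry = rotation(x, y, a)
assert abs(math.hypot(rx, ry) - math.hypot(x, y)) < 1e-12
sx, sy = squeeze(x, y, a)
assert abs(sx * sy - x * y) < 1e-12
```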
I was encouraged to find such well known transformations which I think play the roles that I was looking for. I think they let an object from a less specified geometry be placed within a more specified geometry. They can also help me clarify what I mean by the four geometries.
I am curious if there is any reason, mathematically, to suppose that this is, in some sense, a complete collection or not. For example, any structure of symmetries. Are there any basic transformations which I'm not including?
I wonder if this particular set of transformations, or a related set, appears in mathematics.
I am also wondering, very speculatively, what mathematical functions they might be related to. For example, I think of Alexander Grothendieck's six operations, which I hope to understand some day. I also think of six natural bases of the symmetric functions (elementary, homogeneous, power, monomial, Schur, forgotten) which I wrote my Ph.D. thesis on.
I am somewhat familiar with Klein's Erlangen program and some of John Baez's writings.
Thank you for considering my question.
• I live quite far from Klaipeda, so no question of me coming there. Having said that, I'm interested in your question, so $+1$. – астон вілла олоф мэллбэрг Oct 4 '16 at 11:25
• [+1] There are too few people asking themselves general questions, especially in geometry. It would take time to think about your proposal. I just indicate this book concerning your first operation: "From Summetria to Symmetry: The Making of a Revolutionary Scientific Concept", authors G. Hon, B. R. Goldstein, 2008, Springer (very expensive!) – Jean Marie Oct 4 '16 at 12:24
• I can tell you are very excited about your work and your question, but really this is TMI. I, for one, find the surplus of information distracting and bordering a little bit on the side of excessive self-promotion. The subject matter of the question is interesting to me, but when I see how much time I'll have to spend figuring out what you're saying (figuring out what you mean by "coordinate geometry vs vector geometry vs "sweep(?) geometry," for example) it deters me. – rschwieb Oct 4 '16 at 12:48
• What I'm saying is that if there's anything extra you can do to distill what you've written down to a crisper question, you would get a better response rate. I might take a whack later, if I have an extra hour or two to parse it. – rschwieb Oct 4 '16 at 12:50
• @AndrewD.Hwang that is very helpful, thank you. Does that hold true for other geometries? Composition is surely important to think about. However, rotations and translations may still be fundamental in some qualitative way, cognitively. By comparison, protons are fundamental in chemistry even though they are composed of quarks. Overall, I'm curious where, if anywhere, these 6 transformations show up in mathematics. – Andrius Kulikauskas Oct 4 '16 at 17:12
Well I'll focus on the only tangible question I see:
I am wondering if they are, in any sense, fundamental.
No, it is probably wrong to view any subset of these as "universally fundamental" in geometry, which is what I think you mean.
I think a better answer, via Klein's Erlangen program, is to view a geometry as a space with a group of transformations determining 'the geometry' of the space, and then you could say that the elements of the group are "fundamental for that geometry." The group and the geometric properties preserved by the group mutually determine each other.
## Context is too narrow
For one thing, the posted transformations seem to be ordered-geometry centric. For example, A vector (and the ray it builds) can be flipped back and forth within a two-directional line, that is, the vector is given a precise orientation. This does not really make sense for geometries over finite fields, for example. There are many interesting geometries that do not involve ordered fields, cannot be coordinatized by a commutative field, or cannot even be coordinatized by a ring. The subject is simply much broader than that.
## Some can be derived from the others
In a space with a symmetric bilinear form, the rotations are just products of reflections. In projective space, (if I remember right) translations can be viewed as rotations around ideal points. Even in plain old Euclidean geometry you can make a translation by two appropriate line-reflections.
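The reflection-composition fact is easy to verify numerically: reflecting across a line through the origin at angle phi_1 and then across one at angle phi_2 rotates the plane by 2(phi_2 − phi_1). A quick Python check:

```python
import math

def reflect_across(phi, p):
    """Reflect point p across the line through the origin at angle phi."""
    x, y = p
    c, s = math.cos(2 * phi), math.sin(2 * phi)
    return (c * x + s * y, s * x - c * y)

def rotate(theta, p):
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

p = (1.3, -0.4)
phi1, phi2 = 0.2, 0.9
q = reflect_across(phi2, reflect_across(phi1, p))
r = rotate(2 * (phi2 - phi1), p)
assert all(abs(a - b) < 1e-12 for a, b in zip(q, r))
```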
## More optional than fundamental.
Not every geometry uses those transformations, and some explicitly omit those transformations.
• @rschweib thank you! I will think more about this and perhaps clarify my questions. Also, I will look for others who might have insights. – Andrius Kulikauskas Oct 5 '16 at 16:56
• @AndriusKulikauskas If you do a thorough study of the Erlangen program you will learn a great deal of geometry (but I have seen no evidence of the somewhat mystical ideas you seem to be interested in.) – rschwieb Oct 5 '16 at 17:29
Depending on how far you go, in high school geometry, they really only place emphasis on translations, rotations, reflections, and dilations(scalings). Stretching and squeezing are a bit more advanced. Also, this is Euclidean geometry where these transformations are important. Sure, these may exist in other geometries, but then there could easily be more important transformations.
|
{}
|
I posted an article a while back, entitled Very Fast, High-Precision SIMD/GPU Gradient Noise, where I outlined a technique for achieving double-resolution noise at speeds close to that when using float arithmetic. The key observation was that floor could be used on cell boundaries to mask off the ranges that require double arithmetic, allowing the bulk of the work to use float arithmetic.
The GPU code was initially written in OpenCL and then ported to CUDA using ComputeBridge. Neither was a good platform for releasing a game, and releasing on both at the same time was a recipe for madness, so I ported everything to HLSL. Unfortunately HLSL SM5 doesn't support floor(double).
So I sat down and took some time to cook up a software version. The first task was a function to mask off the fractional part:
double MaskOutFraction(double v)
{
    // Alias double as 2 32-bit integers
    uint d0, d1;
    asuint(v, d0, d1);

    //  0 ... 51  mantissa (low 32 bits in d0, bits 0 ... 19 of d1)
    // 52 ... 62  exponent (bits 20 ... 30 of d1)
    // 63 ... 63  sign
    int exponent = ((d1 >> 20) & 0x7FF) - 1023;
    if (exponent < 0)
        return 0;

    // Calculate how many bits to shift to remove the fraction
    // As there is no check here for mask_bits <= 0, if the input double is large enough
    // such that it can't have any fractional representation, this function will return
    // an incorrect result.
    // As this is the GPU, I've decided against that branch.
    int mask_bits = max(52 - exponent, 0);

    // Calculate low 31 bits of the inverted mantissa mask
    int lo_shift_bits = min(mask_bits, 31);
    uint lo_mask = (1u << lo_shift_bits) - 1;

    // Can't do (1<<32)-1 with a 32-bit integer so OR in the final bit if need be
    lo_mask |= mask_bits >= 32 ? 0x80000000 : 0;

    // Calculate high 20 bits of the inverted mantissa mask
    int hi_shift_bits = max(mask_bits - 32, 0);
    uint hi_mask = (1u << hi_shift_bits) - 1;

    // Mask out the fractional bits and recombine as a double
    d0 &= ~lo_mask;
    d1 &= ~hi_mask;
    v = asdouble(d0, d1);
    return v;
}
With that, `Floor` follows directly (truncate toward zero, then subtract one for negative values that had a fractional part), and the necessary overloads can be provided:
// HLSL(SM5) doesn't support floor(double) so implement it in software
double Floor(double v)
{
    double r = MaskOutFraction(v);
    return v - r < 0 ? r - 1 : r;
}
double2 Floor(double2 v)
{
    v.x = Floor(v.x);
    v.y = Floor(v.y);
    return v;
}
double3 Floor(double3 v)
{
    v.x = Floor(v.x);
    v.y = Floor(v.y);
    v.z = Floor(v.z);
    return v;
}
double4 Floor(double4 v)
{
    v.x = Floor(v.x);
    v.y = Floor(v.y);
    v.z = Floor(v.z);
    v.w = Floor(v.w);
    return v;
}
Performance is admirably close to the CUDA/OpenCL versions (on nVidia/AMD hardware, respectively) and the same mask function can be reused for Round or Ceil functions.
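For reference, the same bit-masking trick ports easily to other languages. A minimal Python sketch (Python's arbitrary-precision integers make the 31-bit shift workaround unnecessary), checked against the built-in floor:

```python
import struct

def mask_out_fraction(v: float) -> float:
    """Truncate toward zero by clearing fractional mantissa bits,
    mirroring the HLSL MaskOutFraction above."""
    bits = struct.unpack("<Q", struct.pack("<d", v))[0]
    d0 = bits & 0xFFFFFFFF      # low 32 mantissa bits
    d1 = bits >> 32             # high mantissa bits, exponent, sign
    exponent = ((d1 >> 20) & 0x7FF) - 1023
    if exponent < 0:
        return 0.0
    mask_bits = max(52 - exponent, 0)
    lo_mask = (1 << min(mask_bits, 32)) - 1
    hi_mask = (1 << max(mask_bits - 32, 0)) - 1
    d0 &= ~lo_mask & 0xFFFFFFFF
    d1 &= ~hi_mask & 0xFFFFFFFF
    return struct.unpack("<d", struct.pack("<Q", (d1 << 32) | d0))[0]

def floor_sw(v: float) -> float:
    """Software floor: truncate, then step down for negative fractions."""
    r = mask_out_fraction(v)
    return r - 1 if v - r < 0 else r

for v in (0.0, 0.5, -0.5, 3.75, -3.75, 123456.789, -123456.789):
    assert floor_sw(v) == __import__("math").floor(v)
```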
|
{}
|
Chapter 16. Electromagnetic Waves
# 16.3 Energy Carried by Electromagnetic Waves
### Learning Objectives
By the end of this section, you will be able to:
• Express the time-averaged energy density of electromagnetic waves in terms of their electric and magnetic field amplitudes
• Calculate the Poynting vector and the energy intensity of electromagnetic waves
• Explain how the energy of an electromagnetic wave depends on its amplitude, whereas the energy of a photon is proportional to its frequency
Anyone who has used a microwave oven knows there is energy in electromagnetic waves. Sometimes this energy is obvious, such as in the warmth of the summer Sun. Other times, it is subtle, such as the unfelt energy of gamma rays, which can destroy living cells.
Electromagnetic waves bring energy into a system by virtue of their electric and magnetic fields. These fields can exert forces and move charges in the system and, thus, do work on them. However, there is energy in an electromagnetic wave itself, whether it is absorbed or not. Once created, the fields carry energy away from a source. If some energy is later absorbed, the field strengths are diminished and anything left travels on.
Clearly, the larger the strength of the electric and magnetic fields, the more work they can do and the greater the energy the electromagnetic wave carries. In electromagnetic waves, the amplitude is the maximum field strength of the electric and magnetic fields (Figure 16.10). The wave energy is determined by the wave amplitude.
For a plane wave traveling in the direction of the positive x-axis with the phase of the wave chosen so that the wave maximum is at the origin at $t=0$, the electric and magnetic fields obey the equations
$\begin{array}{c}{E}_{y}\left(x,t\right)={E}_{0}\phantom{\rule{0.2em}{0ex}}\text{cos}\phantom{\rule{0.2em}{0ex}}\left(kx-\omega t\right)\hfill \\ {B}_{z}\left(x,t\right)={B}_{0}\phantom{\rule{0.2em}{0ex}}\text{cos}\phantom{\rule{0.2em}{0ex}}\left(kx-\omega t\right).\hfill \end{array}$
The energy in any part of the electromagnetic wave is the sum of the energies of the electric and magnetic fields. This energy per unit volume, or energy density u, is the sum of the energy density from the electric field and the energy density from the magnetic field. Expressions for both field energy densities were discussed earlier (${u}_{E}$ in Capacitance and ${u}_{B}$ in Inductance). Combining these contributions, we obtain
$u\left(x,t\right)={u}_{E}+{u}_{B}=\frac{1}{2}{\epsilon }_{0}{E}^{2}+\frac{1}{2{\mu }_{0}}{B}^{2}.$
The expression $E=cB=\frac{1}{\sqrt{{\epsilon }_{0}{\mu }_{0}}}B$ then shows that the magnetic energy density ${u}_{B}$ and electric energy density ${u}_{E}$ are equal, despite the fact that changing electric fields generally produce only small magnetic fields. The equality of the electric and magnetic energy densities leads to
$u\left(x,t\right)={\epsilon }_{0}{E}^{2}=\frac{{B}^{2}}{{\mu }_{0}}.$
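As a numerical sanity check (not part of the derivation, and using assumed standard SI values for the constants), a short Python sketch confirms that the electric and magnetic energy densities come out equal whenever $E=cB$:

```python
import math

# Assumed SI values for the vacuum constants (not given in the text above).
eps0 = 8.854e-12              # permittivity of free space, F/m
mu0 = 4 * math.pi * 1e-7      # permeability of free space, T*m/A
c = 1 / math.sqrt(eps0 * mu0) # speed of light follows from these two

B = 2.0e-8     # an arbitrary magnetic field value, T
E = c * B      # the plane-wave relation E = cB

u_E = 0.5 * eps0 * E**2    # electric energy density
u_B = B**2 / (2 * mu0)     # magnetic energy density

print(u_E, u_B)   # the two densities agree
```

The agreement is exact up to floating-point roundoff, since $\frac{1}{2}\epsilon_0 (cB)^2 = \frac{B^2}{2\mu_0}$ follows from $c^2 = 1/(\epsilon_0\mu_0)$.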
The energy density moves with the electric and magnetic fields in a similar manner to the waves themselves.
We can find the rate of transport of energy by considering a small time interval $\text{Δ}t$. As shown in Figure 16.11, the energy contained in a cylinder of length $c\text{Δ}t$ and cross-sectional area A passes through the cross-sectional plane in the interval $\text{Δ}t.$
The energy passing through area A in time $\text{Δ}t$ is
$u\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\text{volume}=uAc\text{Δ}t.$
The energy per unit area per unit time passing through a plane perpendicular to the wave, called the energy flux and denoted by S, can be calculated by dividing the energy by the area A and the time interval $\text{Δ}t$.
$S=\frac{\text{Energy passing area}\phantom{\rule{0.2em}{0ex}}A\phantom{\rule{0.2em}{0ex}}\text{in time}\phantom{\rule{0.2em}{0ex}}\text{Δ}t}{A\text{Δ}t}=uc={\epsilon }_{0}c{E}^{2}=\frac{1}{{\mu }_{0}}EB.$
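The equality of these three forms of the energy flux can also be checked numerically. The following Python sketch (with assumed SI constant values and an arbitrary field strength) evaluates $uc$, $\epsilon_0 c E^2$, and $EB/\mu_0$:

```python
import math

# Assumed SI constants, as in the standard tables.
eps0 = 8.854e-12
mu0 = 4 * math.pi * 1e-7
c = 1 / math.sqrt(eps0 * mu0)

E = 100.0      # V/m, an arbitrary instantaneous field
B = E / c      # T, from the plane-wave relation

u = 0.5 * eps0 * E**2 + B**2 / (2 * mu0)  # total energy density
S_uc = u * c               # flux as energy density times wave speed
S_E = eps0 * c * E**2      # flux in terms of E alone
S_EB = E * B / mu0         # flux in terms of E and B

print(S_uc, S_E, S_EB)     # all three agree
```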
More generally, the flux of energy through any surface also depends on the orientation of the surface. To take the direction into account, we introduce a vector $\stackrel{\to }{\textbf{S}}$, called the Poynting vector, with the following definition:
$\stackrel{\to }{\textbf{S}}=\frac{1}{{\mu }_{0}}\stackrel{\to }{\textbf{E}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}.$
The cross-product of $\stackrel{\to }{\textbf{E}}$ and $\stackrel{\to }{\textbf{B}}$ points in the direction perpendicular to both vectors. To confirm that the direction of $\stackrel{\to }{\textbf{S}}$ is that of wave propagation, and not its negative, return to Figure 16.7. Note that Lenz’s and Faraday’s laws imply that when the magnetic field shown is increasing in time, the electric field is greater at x than at $x+\text{Δ}x$. The electric field is decreasing with increasing x at the given time and location. The proportionality between electric and magnetic fields requires the electric field to increase in time along with the magnetic field. This is possible only if the wave is propagating to the right in the diagram, in which case, the relative orientations show that $\stackrel{\to }{\textbf{S}}=\frac{1}{{\mu }_{0}}\stackrel{\to }{\textbf{E}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}$ is specifically in the direction of propagation of the electromagnetic wave.
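The right-hand-rule bookkeeping can be mimicked in a few lines of Python (a sketch, with unit field amplitudes chosen purely for illustration): with $\stackrel{\to }{\textbf{E}}$ along $+y$ and $\stackrel{\to }{\textbf{B}}$ along $+z$, the cross product points along $+x$, the propagation direction.

```python
import math

mu0 = 4 * math.pi * 1e-7   # assumed SI value

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E = (0.0, 1.0, 0.0)   # electric field along +y (arbitrary unit amplitude)
B = (0.0, 0.0, 1.0)   # magnetic field along +z

S = tuple(component / mu0 for component in cross(E, B))
print(S)   # nonzero only in the x component: the propagation direction
```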
The energy flux at any place also varies in time, as can be seen by substituting u from Equation 16.23 into Equation 16.27.
$S\left(x,t\right)=c{\epsilon }_{0}{E}_{0}^{2}{\text{cos}}^{2}\left(kx-\omega t\right)$
Because the frequency of visible light is very high, of the order of ${10}^{14}\phantom{\rule{0.2em}{0ex}}\text{Hz,}$ the energy flux for visible light through any area is an extremely rapidly varying quantity. Most measuring devices, including our eyes, detect only an average over many cycles. The time average of the energy flux is the intensity I of the electromagnetic wave and is the power per unit area. It can be expressed by averaging the cosine function in Equation 16.29 over one complete cycle, which is the same as time-averaging over many cycles (here, T is one period):
$I={S}_{\text{avg}}=c{\epsilon }_{0}{E}_{0}^{2}\frac{1}{T}\underset{0}{\overset{T}{\int }}{\text{cos}}^{2}\left(2\pi \frac{t}{T}\right)dt.$
We can either evaluate the integral, or else note that because the sine and cosine differ merely in phase, the average over a complete cycle for ${\text{cos}}^{2}\left(\xi \right)$ is the same as for ${\text{sin}}^{2}\left(\xi \right)$, to obtain
$〈{\text{cos}}^{2}\xi 〉=\frac{1}{2}\phantom{\rule{0.2em}{0ex}}\left[〈{\text{cos}}^{2}\xi 〉+〈{\text{sin}}^{2}\xi 〉\right]=\frac{1}{2}〈1〉=\frac{1}{2},$
where the angle brackets $〈\text{⋯}〉$ stand for the time-averaging operation. The intensity of light moving at speed c in vacuum is then found to be
$I={S}_{\text{avg}}=\frac{1}{2}c{\epsilon }_{0}{E}_{0}^{2}$
in terms of the maximum electric field strength ${E}_{0},$ which is also the electric field amplitude. Algebraic manipulation produces the relationship
$I=\frac{c{B}_{0}^{2}}{2{\mu }_{0}}$
where ${B}_{0}$ is the magnetic field amplitude, which is the same as the maximum magnetic field strength. One more expression for ${I}_{\text{avg}}$ in terms of both electric and magnetic field strengths is useful. Substituting the fact that $c{B}_{0}={E}_{0},$ the previous expression becomes
$I=\frac{{E}_{0}{B}_{0}}{2{\mu }_{0}}.$
We can use whichever of the three preceding equations is most convenient, because the three equations are really just different versions of the same result: The energy in a wave is related to amplitude squared. Furthermore, because these equations are based on the assumption that the electromagnetic waves are sinusoidal, the peak intensity is twice the average intensity; that is, ${I}_{0}=2I.$
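A brief numerical check (assumed SI constants; the amplitude is arbitrary) confirms both the time average $〈{\text{cos}}^{2}〉=\frac{1}{2}$ and the equivalence of the three intensity expressions, along with ${I}_{0}=2I$:

```python
import math

# Discrete time average of cos^2 over one full period.
N = 100_000
avg = sum(math.cos(2 * math.pi * k / N)**2 for k in range(N)) / N

# Assumed SI constants.
eps0 = 8.854e-12
mu0 = 4 * math.pi * 1e-7
c = 1 / math.sqrt(eps0 * mu0)

E0 = 0.87         # V/m, an arbitrary amplitude
B0 = E0 / c

I1 = 0.5 * c * eps0 * E0**2   # intensity in terms of E0
I2 = c * B0**2 / (2 * mu0)    # in terms of B0
I3 = E0 * B0 / (2 * mu0)      # in terms of both
I_peak = c * eps0 * E0**2     # instantaneous maximum of S

print(avg, I1, I2, I3, I_peak / I1)   # avg near 0.5; ratio near 2
```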
### Example
#### A Laser Beam
The beam from a small laboratory laser typically has an intensity of about $1.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-3}{\phantom{\rule{0.2em}{0ex}}\text{W/m}}^{2}$. Assuming that the beam is composed of plane waves, calculate the amplitudes of the electric and magnetic fields in the beam.
#### Strategy
Use the equation expressing intensity in terms of electric field to calculate the electric field from the intensity.
#### Solution
The intensity of the laser beam, expressed in terms of the electric field amplitude, is
$I=\frac{1}{2}c{\epsilon }_{0}{E}_{0}^{2}.$
The amplitude of the electric field is therefore
${E}_{0}=\sqrt{\frac{2}{c{\epsilon }_{0}}I}=\sqrt{\frac{2}{\left(3.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{8}\phantom{\rule{0.2em}{0ex}}\text{m/s}\right)\left(8.85\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-12}\phantom{\rule{0.2em}{0ex}}\text{F/m}\right)}\left(1.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-3}{\phantom{\rule{0.2em}{0ex}}\text{W/m}}^{2}\right)}=0.87\phantom{\rule{0.2em}{0ex}}\text{V/m}.$
The amplitude of the magnetic field can be obtained from Equation 16.20:
${B}_{0}=\frac{{E}_{0}}{c}=2.9\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-9}\phantom{\rule{0.2em}{0ex}}\text{T}.$
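The arithmetic of this example can be replayed in a few lines of Python (a sketch using the same rounded constants as the solution):

```python
import math

eps0 = 8.85e-12   # F/m, as in the solution
c = 3.00e8        # m/s

I = 1.0e-3        # W/m^2, the given beam intensity
E0 = math.sqrt(2 * I / (c * eps0))   # electric field amplitude
B0 = E0 / c                          # magnetic field amplitude

print(round(E0, 2), B0)   # about 0.87 V/m and 2.9e-9 T
```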
### Example
#### Light Bulb Fields
A light bulb emits 5.00 W of power as visible light. What are the average electric and magnetic fields from the light at a distance of 3.0 m?
#### Strategy
Assume the bulb’s power output P is distributed uniformly over a sphere of radius 3.0 m to calculate the intensity, and from it, the electric field.
#### Solution
The intensity at a distance of 3.0 m, and from it the field amplitudes, are then
$\begin{array}{ccc}\hfill I& =\hfill & \frac{P}{4\pi {r}^{2}}=\frac{c{\epsilon }_{0}{E}_{0}^{2}}{2},\hfill \\ \hfill {E}_{0}& =\hfill & \sqrt{2\frac{P}{4\pi {r}^{2}c{\epsilon }_{0}}}=\sqrt{2\frac{5.00\phantom{\rule{0.2em}{0ex}}\text{W}}{4\pi {\left(3.0\phantom{\rule{0.2em}{0ex}}\text{m}\right)}^{2}\left(3.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{8}\phantom{\rule{0.2em}{0ex}}\text{m/s}\right)\left(8.85\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-12}{\phantom{\rule{0.2em}{0ex}}\text{C}}^{2}\text{/N}·{\text{m}}^{2}\right)}}=5.77\phantom{\rule{0.2em}{0ex}}\text{N/C,}\hfill \\ \hfill {B}_{0}& =\hfill & {E}_{0}\text{/}c=1.92\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-8}\phantom{\rule{0.2em}{0ex}}\text{T}.\hfill \end{array}$
#### Significance
The intensity I falls off as the distance squared if the radiation is dispersed uniformly in all directions.
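The same bookkeeping, in a short Python sketch (rounded constants as in the solution):

```python
import math

eps0 = 8.85e-12
c = 3.00e8

P = 5.00   # W emitted as visible light
r = 3.0    # m, distance from the bulb

I = P / (4 * math.pi * r**2)         # intensity over a sphere of radius r
E0 = math.sqrt(2 * I / (c * eps0))   # electric field amplitude
B0 = E0 / c                          # magnetic field amplitude

print(I, E0, B0)   # E0 about 5.77 N/C, B0 about 1.92e-8 T
```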
### Example
A 60-kW radio transmitter on Earth sends its signal to a satellite 100 km away (Figure 16.12). At what distance in the same direction would the signal have the same maximum field strength if the transmitter’s output power were increased to 90 kW?
#### Strategy
The area over which the power in a particular direction is dispersed increases as distance squared, as illustrated in the figure. Change the power output P by a factor of (90 kW/60 kW) and change the area by the same factor to keep $I=\frac{P}{A}=\frac{c{\epsilon }_{0}{E}_{0}^{2}}{2}$ the same. Then use the proportion of area A in the diagram to distance squared to find the distance that produces the calculated change in area.
#### Solution
Using the proportionality of the areas to the squares of the distances, and solving, we obtain from the diagram
$\begin{array}{ccc}\hfill \frac{{r}_{2}^{2}}{{r}_{1}^{2}}& =\hfill & \frac{{A}_{2}}{{A}_{1}}=\frac{90\phantom{\rule{0.2em}{0ex}}\text{kW}}{60\phantom{\rule{0.2em}{0ex}}\text{kW}},\hfill \\ \hfill {r}_{2}& =\hfill & \sqrt{\frac{90}{60}}\left(100\phantom{\rule{0.2em}{0ex}}\text{km}\right)=122\phantom{\rule{0.2em}{0ex}}\text{km}.\hfill \end{array}$
#### Significance
The range of a radio signal is the maximum distance between the transmitter and receiver that allows for normal operation. In the absence of complications such as reflections from obstacles, the intensity follows an inverse square law, and doubling the range would require multiplying the power by four.
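The scaling argument reduces to two lines of Python (a sketch of the proportionality only, not of any wave physics):

```python
import math

# Keeping I = P/A fixed means the area, and hence r^2, scales with P.
P1, P2 = 60e3, 90e3   # transmitter powers, W
r1 = 100.0            # original distance, km

r2 = math.sqrt(P2 / P1) * r1
print(round(r2))   # about 122 km
```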
### Summary
• The energy carried by any wave is proportional to its amplitude squared. For electromagnetic waves, this means intensity can be expressed as
$I=\frac{c{\epsilon }_{0}{E}_{0}^{2}}{2}$
where I is the average intensity in ${\text{W/m}}^{2}$ and ${E}_{0}$ is the maximum electric field strength of a continuous sinusoidal wave. This can also be expressed in terms of the maximum magnetic field strength ${B}_{0}$ as
$I=\frac{c{B}_{0}^{2}}{2{\mu }_{0}}$
and in terms of both electric and magnetic fields as
$I=\frac{{E}_{0}{B}_{0}}{2{\mu }_{0}}.$
The three expressions for ${I}_{\text{avg}}$ are all equivalent.
### Conceptual Questions
When you stand outdoors in the sunlight, why can you feel the energy that the sunlight carries, but not the momentum it carries?
Show Solution
The amount of energy (about ${100\phantom{\rule{0.2em}{0ex}}\text{W/m}}^{2}$) can quickly produce a considerable change in temperature, but the light pressure (about $3.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-7}{\text{N/m}}^{2}$) is much too small to notice.
How does the intensity of an electromagnetic wave depend on its electric field? How does it depend on its magnetic field?
What is the physical significance of the Poynting vector?
Show Solution
It has the magnitude of the energy flux and points in the direction of wave propagation. It gives the direction of energy flow and the amount of energy per area transported per second.
A 2.0-mW helium-neon laser transmits a continuous beam of red light of cross-sectional area $0.25\phantom{\rule{0.2em}{0ex}}{\text{cm}}^{2}$. If the beam does not diverge appreciably, how would its rms electric field vary with distance from the laser? Explain.
### Problems
While outdoors on a sunny day, a student holds a large convex lens of radius 4.0 cm above a sheet of paper to produce a bright spot on the paper that is 1.0 cm in radius, rather than a sharp focus. By what factor is the electric field in the bright spot of light related to the electric field of sunlight leaving the side of the lens facing the paper?
A plane electromagnetic wave travels northward. At one instant, its electric field has a magnitude of 6.0 V/m and points eastward. What are the magnitude and direction of the magnetic field at this instant?
Show Solution
The magnetic field is downward, and it has magnitude $2.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-8}\text{T}$.
The electric field of an electromagnetic wave is given by
$E=\left(6.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-3}\phantom{\rule{0.2em}{0ex}}\text{V/m}\right)\phantom{\rule{0.2em}{0ex}}\text{sin}\phantom{\rule{0.2em}{0ex}}\left[2\pi \left(\frac{x}{18\phantom{\rule{0.2em}{0ex}}\text{m}}-\frac{t}{6.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-8}\phantom{\rule{0.2em}{0ex}}\text{s}}\right)\right]\hat{\textbf{j}}.$
Write the equations for the associated magnetic field and Poynting vector.
A radio station broadcasts at a frequency of 760 kHz. At a receiver some distance from the antenna, the maximum magnetic field of the electromagnetic wave detected is $2.15\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-11}\text{T}$.
(a) What is the maximum electric field? (b) What is the wavelength of the electromagnetic wave?
Show Solution
a. $6.45\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-3}\phantom{\rule{0.2em}{0ex}}\text{V/m;}$ b. 394 m
The filament in a clear incandescent light bulb radiates visible light at a power of 5.00 W. Model the glass part of the bulb as a sphere of radius ${r}_{0}=3.00\phantom{\rule{0.2em}{0ex}}\text{cm}$ and calculate the amount of electromagnetic energy from visible light inside the bulb.
At what distance does a 100-W lightbulb produce the same intensity of light as a 75-W lightbulb produces 10 m away? (Assume both have the same efficiency for converting electrical energy in the circuit into emitted electromagnetic energy.)
Show Solution
11.5 m
An incandescent light bulb emits only 2.6 W of its power as visible light. What is the rms electric field of the emitted light at a distance of 3.0 m from the bulb?
A 150-W lightbulb emits 5% of its energy as electromagnetic radiation. What is the magnitude of the average Poynting vector 10 m from the bulb?
Show Solution
$5.97\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-3}{\phantom{\rule{0.2em}{0ex}}\text{W/m}}^{2}$
A small helium-neon laser has a power output of 2.5 mW. What is the electromagnetic energy in a 1.0-m length of the beam?
At the top of Earth’s atmosphere, the time-averaged Poynting vector associated with sunlight has a magnitude of about $1.4\phantom{\rule{0.2em}{0ex}}{\text{kW/m}}^{2}.$
(a) What are the maximum values of the electric and magnetic fields for a wave of this intensity? (b) What is the total power radiated by the sun? Assume that the Earth is $1.5\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{11}\text{m}$ from the Sun and that sunlight is composed of electromagnetic plane waves.
Show Solution
$\text{a.}\phantom{\rule{0.2em}{0ex}}{E}_{0}=1027\phantom{\rule{0.2em}{0ex}}\text{V/m},{B}_{0}=3.42\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-6}\text{T};\phantom{\rule{0.2em}{0ex}}\text{b.}\phantom{\rule{0.2em}{0ex}}3.96\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{26}\phantom{\rule{0.2em}{0ex}}\text{W}$
The magnetic field of a plane electromagnetic wave moving along the z axis is given by $\stackrel{\to }{\textbf{B}}={B}_{0}\phantom{\rule{0.2em}{0ex}}\text{cos}\phantom{\rule{0.2em}{0ex}}\left(kz+\omega t\right)\hat{\textbf{j}}$, where ${B}_{0}=5.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-10}\phantom{\rule{0.2em}{0ex}}\text{T}$ and $k=3.14\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-2}{\phantom{\rule{0.2em}{0ex}}\text{m}}^{-1}.$
(a) Write an expression for the electric field associated with the wave. (b) What are the frequency and the wavelength of the wave? (c) What is its average Poynting vector?
What is the intensity of an electromagnetic wave with a peak electric field strength of 125 V/m?
Show Solution
$20.8\phantom{\rule{0.2em}{0ex}}{\text{W/m}}^{2}$
Assume the helium-neon lasers commonly used in student physics laboratories have power outputs of 0.500 mW. (a) If such a laser beam is projected onto a circular spot 1.00 mm in diameter, what is its intensity? (b) Find the peak magnetic field strength. (c) Find the peak electric field strength.
An AM radio transmitter broadcasts 50.0 kW of power uniformly in all directions. (a) Assuming all of the radio waves that strike the ground are completely absorbed, and that there is no absorption by the atmosphere or other objects, what is the intensity 30.0 km away? (Hint: Half the power will be spread over the area of a hemisphere.) (b) What is the maximum electric field strength at this distance?
Show Solution
a. $4.42\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-6}{\phantom{\rule{0.2em}{0ex}}\text{W/m}}^{2}$; b. $5.77\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-2}\phantom{\rule{0.2em}{0ex}}\text{V/m}$
Suppose the maximum safe intensity of microwaves for human exposure is taken to be $1.00{\phantom{\rule{0.2em}{0ex}}\text{W/m}}^{2}$. (a) If a radar unit leaks 10.0 W of microwaves (other than those sent by its antenna) uniformly in all directions, how far away must you be to be exposed to an intensity considered to be safe? Assume that the power spreads uniformly over the area of a sphere with no complications from absorption or reflection. (b) What is the maximum electric field strength at the safe intensity? (Note that early radar units leaked more than modern ones do. This caused identifiable health problems, such as cataracts, for people who worked near them.)
A 2.50-m-diameter university communications satellite dish receives TV signals that have a maximum electric field strength (for one channel) of $7.50\phantom{\rule{0.2em}{0ex}}\text{μV/m}$ (see below). (a) What is the intensity of this wave? (b) What is the power received by the antenna? (c) If the orbiting satellite broadcasts uniformly over an area of $1.50\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{13}{\phantom{\rule{0.2em}{0ex}}\text{m}}^{2}$ (a large fraction of North America), how much power does it radiate?
Show Solution
a. $7.47\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}{\phantom{\rule{0.2em}{0ex}}\text{W/m}}^{2}$; b. $3.66\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-13}\phantom{\rule{0.2em}{0ex}}\text{W}$; c. 1.12 W
Lasers can be constructed that produce an extremely high intensity electromagnetic wave for a brief time—called pulsed lasers. They are used to initiate nuclear fusion, for example. Such a laser may produce an electromagnetic wave with a maximum electric field strength of $1.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{11}\phantom{\rule{0.2em}{0ex}}\text{V}\text{/}\text{m}$ for a time of 1.00 ns. (a) What is the maximum magnetic field strength in the wave? (b) What is the intensity of the beam? (c) What energy does it deliver on an $1.00{\text{-mm}}^{2}$ area?
### Glossary
Poynting vector
vector equal to the cross product of the electric and magnetic fields divided by ${\mu }_{0}$, describing the flow of electromagnetic energy through a surface
## Reading the Comics, August 10, 2019: In Security Edition
There were several more comic strips last week worth my attention. One of them, though, offered a lot for me to write about, packed into one panel featuring what comic strip fans call the Wall O’ Text.
Bea R’s In Security for the 9th is part of a storyline about defeating an evil “home assistant”. The choice of weapon is Michaela’s barrage of questions, too fast and too varied to answer. There are some mathematical questions tossed in the mix. The obvious one is “zero divided by two equals zero, but why’z two divided by zero called crazy town?” Like with most “why” mathematics questions there are a range of answers.
The obvious one, I suppose, is to appeal to intuition. Think of dividing one number by another by representing the numbers with things. Start with a pile of the first number of things. Try putting them into the second number of bins. How many times can you do this? And then you can pretty well see that you can fill two bins with zero things zero times. But you can fill zero bins with two things — well, what is filling zero bins supposed to mean? And that warns us that dividing by zero is at least suspicious.
That’s probably enough to convince a three-year-old, and probably most sensible people. If we start getting open-minded about what it means to fill no containers, we might say, well, why not have two things fill the zero containers zero times over, or once over, or whatever convenient answer would work? And here we can appeal to mathematical logic. Start with some ideas that seem straightforward. Like, that division is the inverse of multiplication. That addition and multiplication work like you’d guess from the way integers work. That distribution works. Then you can quickly enough show that if you allow division by zero, this implies that every number equals every other number. Since it would be inconvenient for, say, “six” to also equal “minus 113,847,506 and three-quarters” we say division by zero is the problem.
This is compelling until you ask what’s so great about addition and multiplication as we know them. And here’s a potentially fruitful line of attack. Coming up with alternate ideas for what it means to add or to multiply are fine. We can do this easily with modular arithmetic, that thing where we say, like, 5 + 1 equals 0 all over again, and 5 + 2 is 1 and 5 + 3 is 2. This can create a ring, and it can offer us wild ideas like “3 times 2 equals 0”. This doesn’t get us to where dividing by zero means anything. But it hints that maybe there’s some exotic frontier of mathematics in which dividing by zero is good, or useful. I don’t know of one. But I know very little about topics like non-standard analysis (where mathematicians hypothesize non-negative numbers that are not zero, but are also smaller than any positive number) or structures like surreal numbers. There may be something lurking behind a Quanta Magazine essay I haven’t read even though they tweet about it four times a week. (My twitter account is, for some reason, not loading this week.)
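If you want to poke at that modular arithmetic yourself, here’s a tiny Python sketch of the mod-6 ring, where 5 + 1 wraps back to 0 and 3 times 2 really does come out to 0:

```python
# Arithmetic modulo 6: a small ring with surprises.
def add_mod6(a, b):
    return (a + b) % 6

def mul_mod6(a, b):
    return (a * b) % 6

print(add_mod6(5, 1))   # 5 + 1 wraps around to 0
print(add_mod6(5, 3))   # 5 + 3 is 2 all over again
print(mul_mod6(3, 2))   # two nonzero things multiply to zero
```

That last line is the odd one: in this ring, a product of nonzero elements can be zero, which is exactly the sort of thing that never happens with the integers.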
Michaela’s questions include a couple other mathematically-connected topics. “If infinity is forever, isn’t that crazy, too?” Crazy is a loaded word and probably best avoided. But there are infinitely large sets of things. There are processes that take infinitely many steps to complete. Please be kind to me in my declaration “are”. I spent five hundred words on “two divided by zero”. I can’t get into what it means for a mathematical thing to “exist”. I don’t know. In any event. Infinities are hard and we rely on them. They defy our intuition. Mathematicians over the 19th and 20th centuries worked out fairly good tools for handling these. They rely on several strategies. Most of these amount to: we can prove that the difference between “infinitely many steps” and “very many steps” can be made smaller than any error tolerance we like. And we can say what “very many steps” implies for a thing. Therefore we can say that “infinitely many steps” gives us some specific result. A similar process holds for “infinitely many things” instead of “infinitely many steps”. This does not involve actually dealing with infinity, not directly. It involves dealing with large numbers, which work like small numbers but longer. This has worked quite well. There’s surely some field of mathematics about to break down that happy condition.
And there’s one more mathematical bit. Why is a ball round? This comes around to definitions. Suppose a ball is all the points within a particular radius of a center. What shape that is depends on what you mean by “distance”. The common definition of distance, the “Euclidean norm”, we get from our physical intuition. It implies this shape should be round. But there are other measures of distance, useful for other roles. They can imply “balls” that we’d say were octahedrons, or cubes, or rounded versions of these shapes. We can pick our distance to fit what we want to do, and shapes follow.
I suspect but do not know that it works the other way, that if we want a “ball” to be round, it implies we’re using a distance that’s the Euclidean measure. I defer to people better at normed spaces than I am.
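If you’d like to see those shapes emerge, here’s a small Python sketch testing whether a point lies inside the unit “ball” under three different notions of distance:

```python
# Unit "balls" in the plane under three notions of distance.
# L2 (Euclidean) gives a round ball; L-infinity gives a square;
# L1 gives a diamond, the two-dimensional cousin of the octahedron.
def dist_l1(x, y):
    return abs(x) + abs(y)

def dist_l2(x, y):
    return (x**2 + y**2) ** 0.5

def dist_linf(x, y):
    return max(abs(x), abs(y))

p = (0.9, 0.9)
print(dist_l2(*p) <= 1)     # False: outside the round ball
print(dist_linf(*p) <= 1)   # True: inside the square ball
print(dist_l1(*p) <= 1)     # False: outside the diamond ball
```

The point (0.9, 0.9) lands inside one ball and outside the other two, which is the whole lesson: what counts as “within distance 1 of the center” depends on the distance you pick.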
Mark Anderson’s Andertoons for the 10th is the Mark Anderson’s Andertoons for the week. It’s also a refreshing break from talking so much about In Security. Wavehead is doing the traditional kid-protesting-the-chalkboard-problem. This time with an electronic chalkboard, an innovation that I’ve heard about but never used myself.
Bob Scott’s Bear With Me for the 10th is the Pi Day joke for the week.
And that last one seemed substantial enough to highlight. There were even slighter strips. Among them: Mark Anderson’s Andertoons for the 4th features latitude and longitude, the parts of spherical geometry most of us understand. At least feel we understand. Jim Toomey’s Sherman’s Lagoon for the 8th mentions mathematics as the homework parents most dread helping with. Larry Wright’s Motley rerun for the 10th does a joke about a kid being bad at geography and at mathematics.
And that’s this past week’s mathematics comics. Reading the Comics essays should all be gathered at this link. Thanks for reading this far.
## Reading the Comics, August 3, 2019: Summer Trip Edition
I was away from home most of last week. Comic Strip Master Command was kind and acknowledged this. There wasn’t much for me to discuss. There’s not even many comics too slight to discuss. I thank them for their work in not overloading me. But if you wondered why Sunday’s post was what it was, you now know. I suspect you didn’t wonder.
Mark Anderson’s Andertoons for the 29th of July is a comfortable and familiar face for these Reading the Comics posts. I’m glad to see it. The joke is built on negative numbers, and Wavehead’s right to say this is kind of the reason people hate mathematics. At least, it’s a fair complaint that mathematicians become comfortable with something that has a clear real-world intuitive meaning, such as that adding things together gets you a bigger thing, and then for good reasons of logic get to counter-intuitive things, such as adding things together to get a lesser thing. Negative numbers might be the first of these intuition-breaking things that people encounter. That or fractions. I encounter stories of people who refuse to accept that, say, $\frac16$ is smaller than $\frac13$, although I’ve never seen it myself.
So why do mathematicians take stuff like “adding” and break it? Convenience, I suppose, is the important reason. Having negative numbers lets us treat “having a quantity” and “lacking a quantity” using the same mechanisms. So that’s nice to have. If we have positive and negative numbers, then we can treat “adding” and “subtracting” using the same mechanisms. That’s nice to do. The trouble is then knowing, like, “if -3 times 4 is greater than -16, is -3 times -4 greater than 16? Or less than? Why?”
Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 31st of July uses the blackboard-full-of-mathematics as shorthand for deep thought about topics. The equations don’t mean much of anything, individually or collectively. I’m curious whether Caulfield and Ponshock mean, in the middle there, for that equation to be π times y2 equalling z3, or whether it’s π times x times y2 that is. Doesn’t matter either way. It’s just decoration.
And then there are the most marginal comic strips for the week. And if that first Yaffle didn’t count as too marginal to mention, think what that means for the others. Yaffle on the 28th of July features a mention of sudoku as the sort of thing one struggles to solve. Tony Rubino and Gary Markstein’s Daddy’s Home for the 1st of August mentions mathematics as the sort of homework a parent can’t help with. Jim Toomey’s Sherman’s Lagoon for the 2nd sets up a math contest. It’s mentioned as the sort of things the comic strip’s regular cast can’t hope to do.
And there we go. I’m ready now for August. Around Sunday I should have a fresh Reading the Comics page here. And it does seem like I’m missing my other traditional post here, doesn’t it? Have to work on that.
## Reading the Comics, July 22, 2019: Mathematics Education Edition
There were a decent number of mathematically-themed comic strips this past week. This figures, because I’ve spent this past week doing a lot of things, and look to be busier this coming week. Nothing to do but jump into it, then.
Jason Chatfield’s Ginger Meggs for the 21st is your usual strip about the student resisting the story problem. Story problems are hard to set. Ideally, they present problems like mathematicians actually do, proposing the finding of something it would be interesting to learn. But it’s hard to find different problems like this. You might be fairly interested in how long it takes a tub filling with water to overflow, but the third problem of this kind is going to look a lot like the first two. And it’s also hard to find problems that allow for no confounding alternate interpretations, like this. Have some sympathy and let us sometimes just give you an equation to solve.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 21st is a pun built on two technical definitions for “induction”. The one used in mathematics, and logic, is a powerful tool for certain kinds of proof. It’s hard to teach how to set it up correctly, though. It’s a way to prove an infinitely large number of logical propositions. Let me call those propositions P1, P2, P3, P4, and so on. Pj for every counting number j. The first step of the proof is showing that some base proposition is true. This is usually some case that’s really easy to do. This is the fun part of a proof by induction, because it feels like you’ve done half the work and it amounts to something like, oh, showing that 1 is a triangular number.
The second part is hard. You have to show that whenever Pj is true, this implies that Pj + 1 is also true. This is usually a step full of letters representing numbers rather than anything you can directly visualize with, like, dots on paper. This is usually the hard part. But put those two halves together? And you’ve proven that all your propositions are true. Making things line up like that is so much fun.
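Spot-checking cases isn’t a proof (the induction step is what covers every j at once), but it makes a fine sanity check, and a couple of lines of Python will do it for, say, the triangular-number formula:

```python
# Checking 1 + 2 + ... + n = n(n+1)/2 for many cases.
# This is evidence, not proof; the induction step is what covers all n.
def triangular_sum(n):
    return sum(range(1, n + 1))

for n in range(1, 101):
    assert triangular_sum(n) == n * (n + 1) // 2

print(triangular_sum(10))   # 55, the tenth triangular number
```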
Mark Anderson’s Andertoons for the 22nd is the Mark Anderson’s Andertoons for the week. It’s again your student trying to get out of not really knowing mathematics in class. Longtime readers will know, though, that I’m fond of rough drafts in mathematics. I think most mathematicians are. If you are doing something you don’t quite understand, then you don’t know how to do it well. It’s worth, in that case, doing an approximation of what you truly want to do. This is for the same reason writers are always advised to write something and then edit later. The rough draft will help you find what you truly want. In thinking about the rough draft, you can get closer to the good draft.
Stephen Bentley’s Herb and Jamaal for the 22nd is one lost on me. I grew up when Schoolhouse Rock was a fun and impossible-to-avoid part of watching Saturday Morning cartoons. So there’s a lot of simple mathematics that I learned by having it put to music and played often.
Still, it’s surprising Herb can’t think of why it might be easier to remember something that’s fun, that’s put to a memory-enhancing tool like music, and repeated often, than it is to remember whether 8 times 7 is 54. Arithmetic gets easier to remember when you notice patterns, and find them delightful. Even fun. It’s a lot like everything else humans put any attention to, that way.
This was a busy week for comic strips. I hope to have another Reading the Comics post around Tuesday, and at this link. There might even be another one this week. Please check back in.
## Reading the Comics, June 20, 2019: Old Friends Edition
We continue to be in the summer vacation doldrums for mathematically-themed comic strips. But there’ve been a couple coming out. I could break this week’s crop into two essays, for example. All of today’s strips are comics that turn up in my essays a lot. It’s like hanging out with a couple of old friends.
Samson’s Dark Side of the Horse for the 17th uses the motif of arithmetic expressions as “difficult” things. The expressions Samson quotes seem difficult for being syntactically weird: What does the colon under the radical sign mean in $\sqrt{9:}33$? Or they’re difficult for being indirect, using a phrase like “50%” for “half”. But with some charity we can read this as Horace talking about 3:33 am to about 6:30 am. I agree that those are difficult hours.
It also puts me in mind of a gift from a few years back. An aunt sent me an Irrational Watch, with a dial that didn’t have the usual counting numbers on it. Instead there were various irrational numbers, like the Golden Ratio or the square root of 50 or the like. Also the Euler-Mascheroni Constant, a number that may or may not be irrational. Nobody knows. It’s likely that it is irrational, but it’s not proven. It’s a good bit of fun, although it does make it a bit harder to use the watch for problems like “how long is it until 4:15?” This isn’t quite what’s going on here — the square root of nine is a noticeably rational number — but it seems in that same spirit.
Mark Anderson’s Andertoons for the 18th sees Wavehead react to the terminology of the “improper fraction”. “Proper” and “improper” as words carry a suggestion of … well, decency. Like there’s something faintly immoral about having an improper fraction. “Proper” and “improper”, as words, attach to many mathematical concepts. Several years ago I wrote that “proper” amounted to “it isn’t boring”. This is a fair way to characterize, like, proper subsets or proper factors or the like. It’s less obvious that $\frac{13}{12}$ is a boring fraction.
I may need to rewrite that old essay. An “improper” fraction satisfies all the conditions of the definition. But it misses some of the term’s connotation. It’s true that, say, the new process takes “a fraction of the time” of the old, if the old process took one hour and the new process takes fourteen years. But if you tried telling someone that they would assume you misunderstood something. The ordinary English usage of “fraction” carries the connotation of “a fraction between zero and one”, and that’s what makes a “proper fraction”.
In practical terms, improper fractions are fine. I don’t know of any mathematicians who seriously object to them, or avoid using them. The hedging word “seriously” is in there because of a special need. That need is: how big is, say, $\frac{75}{14}$? Is it bigger than five? Is it smaller than six? An improper fraction depends on you knowing, in this case, your fourteen-times tables to tell. Switching that to a mixed fraction, $5 + \frac{5}{14}$, helps figure out what the number means. That’s as far as we have to worry about the propriety of fractions.
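That improper-to-mixed conversion is nothing more than integer division with remainder. A quick sketch in Python, using the 75/14 from the paragraph above:

```python
from fractions import Fraction

# Convert the improper fraction 75/14 to mixed form with divmod.
whole, remainder = divmod(75, 14)   # 75 = 5 * 14 + 5
print(whole, remainder)             # 5 5

# The mixed form names the same number exactly.
print(Fraction(75, 14) == whole + Fraction(remainder, 14))  # True
```

The whole part, 5, answers the “is it bigger than five, smaller than six?” question at a glance.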
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th uses the form of a Fermi problem for its joke. Fermi problems have a place in mathematical modeling. The idea is to find an estimate for some quantity. We often want to do this. The trick is to build a simple model, and to calculate using a tiny bit of data. The Fermi problem that has reached public consciousness is called the Fermi paradox. The question that paradox addresses is, how many technologically advanced species are there in the galaxy? There’s no way to guess. But we can make models and those give us topics to investigate to better understand the problem. (The paradox is that reasonable guesses about the model suggest there should be so many aliens that they’d be a menace to air traffic. Or that the universe should be empty except for us. Both alternatives seem unrealistic.) Such estimates can be quite wrong, of course. I remember a Robert Heinlein essay in which he explained the Soviets were lying about the size of Moscow, his evidence being he didn’t see the ship traffic he expected when he toured the city. I do not remember that he analyzed what he might have reasoned wrong when he republished this in a collection of essays he didn’t seem to realize were funny.
So the interview question presented is such a Fermi problem. The job applicant, presumably, has not committed to memory the number of employees at the company. But there would be clues. Does the company own the whole building it’s in, or just a floor? Just an office? How large is the building? How large is the parking lot? Are there people walking the hallways? How many desks are in the offices? The question could be answerable. The applicant has a pretty good chain of reasoning too.
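A back-of-the-envelope version of that estimate might look like the sketch below. Every number in it is invented for illustration; the point is the structure, multiplying a few rough, observable factors.

```python
# A toy Fermi estimate of a company's headcount, built from rough
# observable clues. All the numbers here are made up for illustration.
floors = 3             # floors the company seems to occupy
offices_per_floor = 20
desks_per_office = 4
occupancy = 0.8        # fraction of desks that look used

estimate = floors * offices_per_floor * desks_per_office * occupancy
print(round(estimate))  # 192
```

Each factor is easy to be wrong about, but errors tend to partially cancel, which is why Fermi estimates land surprisingly close surprisingly often.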
Bill Amend’s FoxTrot Classics for the 20th has several mathematical jokes in it. One is the use of excessively many decimal points to indicate intelligence. Grant that someone cares about the hyperbolic cosine of 15.2. There is no need to cite its wrong value to nine digits past the decimal. Decimal points are hypnotic, though, and listing many of them has connotations of relentless, robotic intelligence. That is what Amend went for in the characters here. That and showing how terrible nerds are when they find some petty issue to rage over.
Eugene is correct about the hyperbolic cosine being wrong, there, though. He’s not wrong to check that. It’s good form to have some idea what a plausible answer should be. It lets one spot errors, for one. No mathematician is too good to avoid making dumb little mistakes. And computing tools will make mistakes too. Fortunately they don’t often, but this strip originally ran a couple years after the discovery of the Pentium FDIV bug. This was a glitch in the way certain Pentium chips handled floating-point division. It was discovered by Dr Thomas Nicely, at Lynchburg College, who found inconsistencies in some calculations when he added Pentium systems to the computers he was using. This Pentium bug may have been on Amend’s mind.
Eugene would have spotted right away that the hyperbolic cosine was wrong, though, and didn’t need nine digits for it. The hyperbolic cosine is a function. Its domain is the real numbers. Its range is the numbers greater than or equal to one. A 0.9-something just can’t happen, not as the hyperbolic cosine of a real number.
And what is the hyperbolic cosine? It’s one of the hyperbolic trigonometric functions. The other trig functions — sine, tangent, arc-sine, and all that — have their shadows too. You’ll see the hyperbolic sine and hyperbolic tangent some. You will never see the hyperbolic arc-cosecant and anyone trying to tell you that you need it is putting you on. They turn up in introductory calculus classes because you can differentiate them, and integrate them, the way you can ordinary trig functions. They look just different enough from regular trig functions to seem interesting for half a class. By the time you’re doing this, your instructor needs that.
The ordinary trig functions come from the unit circle. You can relate the Cartesian coordinates of a point on the circle described by $x^2 + y^2 = 1$ to the angle made between that point and the center of the circle and the positive x-axis. With hyperbolic trig functions we can relate the Cartesian coordinates of a point on the hyperbola described by $x^2 - y^2 = 1$ to angles instead. The functions … don’t have a lot of use at the intro-to-calculus level. Again, other than that they let you do some quite testable differentiation and integration problems that don’t look exactly like regular trig functions do. They turn up again if you get far enough into mathematical physics. The hyperbolic cosine does well in describing catenaries, that is, the shape of flexible wires under gravity. And the family of functions turn up in statistical mechanics, often, in the mathematics of heat and of magnetism. But overall, these functions aren’t needed a lot. A good scientific calculator will offer them, certainly. But it’ll be harder to get them.
There is another oddity at work here. The cosine of 15.2 degrees is about 0.965, yes. But mathematicians will usually think of trigonometric functions — regular or hyperbolic — in terms of radians. This is just a different measure of angles. A right angle, 90 degrees, is measured as $\frac{1}{2}\pi$ radians. The use of radians makes a good bit of other work easier. Mathematicians get so accustomed to using radians that to use degrees seems slightly alien. The cosine of 15.2 radians, then, would be about -0.874. Eugene has apparently left his calculator in degree mode, rather than radian mode. If he weren’t so worked up about the hyperbolic cosine being wrong he might have noticed. Perhaps that will be another exciting error to discover down the line.
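If you want to check the calculator-mode mix-up yourself, Python’s standard math module reproduces both numbers, and confirms that a hyperbolic cosine below one is impossible:

```python
import math

# Degree mode versus radian mode, for the strip's 15.2.
print(round(math.cos(math.radians(15.2)), 3))  # 0.965, the degree-mode answer
print(round(math.cos(15.2), 3))                # -0.874, the radian-mode answer

# The hyperbolic cosine of any real number is at least 1, so a
# 0.9-something reading was impossible on its face.
print(math.cosh(15.2) >= 1.0)  # True
```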
This strip was part of a several-months-long story Bill Amend did, in which Jason has adventures at Math Camp. I don’t remember the whole story. But I do expect the strip to have several more appearances here this summer.
And that’s about half of last week’s comics. A fresh Reading the Comics post should be at this link later this week. Thank you for reading along.
## Reading the Comics, June 15, 2019: School Is Out? Edition
This has not been the slowest week for mathematically-themed comic strips. The slowest would be the week nothing on topic came up. But this was close. I admit this is fine as I have things disrupting my normal schedule this week. I don’t need to write too many essays too.
On-topic enough to discuss, though, were:
Lalo Alcaraz’s La Cucaracha for the 9th features a teacher trying to get ahead of student boredom. The idea that mathematics is easier to learn if it’s about problems that seem interesting is a durable one. It agrees with my intuition. I’m less sure that just doing arithmetic while surfing is that helpful. My feeling is that a problem being interesting is separate from a problem naming an interesting thing. But making every problem uniquely interesting is probably too much to expect from a teacher. A good pop-mathematics writer can be interesting about any problem. But the pop-mathematics writer has a lot of choice about what she’ll discuss. And doesn’t need to practice examples of a problem until she can feel confident her readers have learned a skill. I don’t know that there is a good answer to this.
Also part of me feels that “eight sick waves times eight sick waves” has to be “sixty-four sick-waves-squared”. This is me worrying about the dimensional analysis of a joke. All right, but if it were “eight inches times eight inches” and you came back with “sixty-four inches” you’d agree something was off, right? But it’s easy to not notice the units. That we do, mechanically, the same thing in multiplying (oh) three times $1.20 or three times 120 miles or three boxes times 120 items per box as we do multiplying three times 120 encourages this. But if we are using numbers to measure things, and if we are doing calculations about things, then the units matter. They carry information about the kinds of things our calculations represent. It’s a bad idea to misuse or ignore those tools.
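Carrying units through a calculation is something a program can do mechanically, too. The Quantity class here is a toy sketch of the idea, not any real units library, but it does make “eight waves times eight waves” come out in waves squared:

```python
from collections import Counter

class Quantity:
    """A toy value-with-units, tracking unit exponents in a Counter."""
    def __init__(self, value, **units):
        self.value = value
        self.units = Counter(units)

    def __mul__(self, other):
        # Multiplying values multiplies units too: exponents add.
        return Quantity(self.value * other.value,
                        **(self.units + other.units))

    def __repr__(self):
        unit_str = " ".join(f"{u}^{p}" for u, p in sorted(self.units.items()))
        return f"{self.value} {unit_str}"

print(Quantity(8, waves=1) * Quantity(8, waves=1))  # 64 waves^2
```

Real unit libraries do much more (division, conversion, dimension checking), but the bookkeeping is the same: units ride along with the numbers.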
Paul Trap’s Thatababy for the 14th is roughly the anthropomorphized geometry cartoon of the week. It does name the three ways to group triangles based on how many sides have the same length. Or if you prefer, how many interior angles have the same measure. So it’s probably a good choice for your geometry tip sheet. “Scalene” as a word seems to have entered English in the 1730s. Its origin traces to Late Latin “scalenus”, from the Greek “skalenos” and meaning “uneven” or “crooked”.
“Isosceles” also goes to Late Latin and, before that, the Greek “isoskeles”, with “iso” the prefix meaning “equal” and “skeles” meaning “legs”. The curious thing to me is “Isosceles”, besides sounding more pleasant, came to English around 1550. Meanwhile, “equilateral” — a simple Late Latin for “equal sides” — appeared around 1570. I don’t know what was going on that it seemed urgent to have a word for triangles with two equal sides first, and a generation later triangles with three equal sides. And then triangles with no two equal sides went nearly two centuries without getting a custom term.
But, then, I’m aware of my bias. There might have been other words for these concepts, recognized by mathematicians of the year 1600, that haven’t come to us. Or it might be that scalene triangles were thought to be so boring there wasn’t any point giving them a special name. It would take deeper mathematics history knowledge than I have to say.
Those are all the mathematically-themed comic strips I can find something to discuss from the past week. There were some others with mentions of mathematics, though. These include:
Tony Rubino and Gary Markstein’s Daddy’s Home for the 9th, in which mathematics is the last class of the school year. Francesco Marciuliano and Jim Keefe’s Sally Forth for the 11th has a study session with “math charades” mentioned. Mark Anderson’s Andertoons for the 11th wants in on some of my sweet Thatababy exposition. Harley Schwadron’s 9 to 5 for the 14th is trying to become the default pie chart joke around here. It won’t beat out Randolph Itch, 2 am without a stronger punch line. And Mark Tatulli’s Heart of the City for the 15th sees Dean mention hiding sleeping in algebra class.
This closes out a week’s worth of comic strips. My next Reading the Comics post should be at this link next Sunday. And now I need to think of something to post for the Thursday and, if I can, Tuesday publication dates.
## Reading the Comics, May 20, 2019: I Guess I Took A Week Off Edition
I’d meant to get back into discussing continuous functions this week, and then didn’t have the time. I hope nobody was too worried.
Bill Amend’s FoxTrot for the 19th is set up as geometry or trigonometry homework. There are a couple of angles that we use all the time, and they do correspond to some common unit fractions of a circle: a quarter, a sixth, an eighth, a twelfth. These map nicely to common cuts of circular pies, at least. Well, it’s a bit of a freak move to cut a pie into twelve pieces, but it’s not totally out there. If someone cuts a pie into 24 pieces, flee.
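The correspondence between pie cuts and angles is easy to tabulate; a quarter, sixth, eighth, or twelfth of a circle gives the angles that turn up constantly in geometry homework:

```python
import math

# Common unit fractions of a circle, as degrees and radians.
for pieces in (4, 6, 8, 12):
    degrees = 360 / pieces
    radians = 2 * math.pi / pieces
    print(pieces, degrees, round(radians, 4))
# 4 90.0 1.5708
# 6 60.0 1.0472
# 8 45.0 0.7854
# 12 30.0 0.5236
```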
Tom Batiuk’s vintage Funky Winkerbean for the 19th of May is a real vintage piece, showing off the days when pocket electronic calculators were new. The sales clerk describes the calculator as having “a floating decimal”. And here I must admit: I’m poorly read on early-70s consumer electronics. So I can’t say that this wasn’t a thing. But I suspect that Batiuk either misunderstood “floating-point decimal”, which would be a selling point, or shortened the phrase in order to make the dialogue less needlessly long. Which is fine, and his right as an author. The technical detail does its work, for the setup, by existing. It does not have to be an actual sales brochure. Reducing “floating point decimal” to “floating decimal” is a useful artistic shorthand. It’s the dialogue equivalent to the implausibly few, but easy to understand, buttons on the calculator in the title panel.
Floating point is one of the ways to represent numbers electronically. The storage scheme is much like scientific notation. That is, rather than think of 2,038, think of $2.038 \times 10^3$. In the computer’s memory are stored the 2.038 and the 3, with the “times ten to the” part implicit in the storage scheme. The advantage of this is the range of numbers one can use now. There are different ways to implement this scheme; a common one will let one represent numbers as tiny as $10^{-308}$ or as large as $10^{308}$, which is enough for most people’s needs.
The disadvantage is that floating point numbers aren’t perfect. They have only around (commonly) sixteen digits of significance. That is, the first sixteen or so significant digits of the number you represent mean anything; everything after that is garbage. Most of the time, that trailing garbage doesn’t hurt. But most is not always. Trying to add, for example, a tiny number, like $10^{-20}$, to a huge number, like $10^{20}$, won’t get the right answer. And there are numbers that can’t be represented correctly anyway, including such exotic and novel numbers as $\frac{1}{3}$. A lot of numerical mathematics is about finding ways to compute that avoid these problems.
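Both limitations are easy to demonstrate. A quick sketch, with the standard fractions module supplying the exact arithmetic for comparison:

```python
from fractions import Fraction

# Roughly sixteen significant digits: a tiny addend vanishes
# entirely against a huge one.
print(1e20 + 1e-20 == 1e20)  # True: the 1e-20 is lost

# And some ordinary numbers have no exact binary representation.
print(0.1 + 0.2 == 0.3)                                      # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True, exact
```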
Back when I was a grad student I did have one casual friend who proclaimed that no real mathematician ever worked with floating point numbers, because of the limitations they impose. I could not get him to accept that no, in fact, mathematicians are fine with these limitations. Every scheme for representing numbers on a computer has limitations, and floating point numbers work quite well. At some point, you have to suspect some people would rather fight for a mistaken idea they already have than accept something new.
Mac King and Bill King’s Magic in a Minute for the 19th does a bit of stage magic supported by arithmetic: forecasting the sum of three numbers. The trick is that all eight possible choices someone would make have the same sum. There’s a nice bit of group theory hidden in the “Howdydoit?” panel, about how to do the trick a second time. Rotating the square of numbers makes what looks, casually, like a different square. It’s hard for a human to memorize a string of digits that don’t have any obvious meaning, and the longer the string the worse people are at it. If you’ve had a person — as directed — black out the rows or columns they didn’t pick, then it’s harder to notice the reused pattern.
The different directions that you could write the digits down in represent symmetries of the square. That is, geometric operations that would replace a square with something that looks like the original. This includes rotations, by 90 or 180 or 270 degrees clockwise. Mac King and Bill King don’t mention it, but reflections would also work: if the top row were 4, 9, 2, for example, and the middle 3, 5, 7, and the bottom 8, 1, 6. Combining rotations and reflections also works.
If you do the trick a second time, your mark might notice it’s odd that the sum came up 15 again. Do it a third time, even with a different rotation or reflection, and they’ll know something’s up. There are things you could do to disguise that further. Just double each number in the square, for example: a square of 4/18/8, 14/10/6, 12/2/16 will have each row or column or diagonal add up to 30. But this loses the beauty of doing this with the digits 1 through 9, and your mark might grow suspicious anyway. The same happens if, say, you add one to each number in the square, and forecast a sum of 18. Even mathematical magic tricks are best not repeated too often, not unless you have good stage patter.
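You can verify the trick’s arithmetic directly. I’m assuming one common arrangement of the digits 1 through 9 here; the strip’s own layout may differ, but any rotation of a magic square works the same way:

```python
# A 3x3 magic square: every row, column, and diagonal sums to 15,
# and rotating it preserves that property.
square = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]

def all_sums(sq):
    rows = [sum(r) for r in sq]
    cols = [sum(c) for c in zip(*sq)]
    diags = [sq[0][0] + sq[1][1] + sq[2][2],
             sq[0][2] + sq[1][1] + sq[2][0]]
    return rows + cols + diags

rotated = [list(r) for r in zip(*square[::-1])]  # 90 degrees clockwise

print(all(s == 15 for s in all_sums(square)))   # True
print(all(s == 15 for s in all_sums(rotated)))  # True
```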
Mark Anderson’s Andertoons for the 20th is the Mark Anderson’s Andertoons for the week. Wavehead’s marveling at what seems at first like an asymmetry, about squares all being rhombuses yet rhombuses not all being squares. There are similar results with squares and rectangles. Still, it makes me notice something. Nobody would write a strip where the kid marvelled that all squares were polygons but not all polygons were squares. It seems that the rhombus connotes something different. This might just be familiarity. Polygons are … well, if not a common term, at least something anyone might feel familiar with. Rhombus is a more technical term. It maybe never quite gets familiar, not in the ways polygons do. And the defining feature of a rhombus — all four sides the same length — seems like the same thing that makes a square a square.
There should be another Reading the Comics post this coming week, and it should appear at this link. I’d like to publish it Tuesday but, really, Wednesday is more probable.
## Reading the Comics, May 8, 2019: Strips With Art I Like Edition
Of course I like all the comics. … Well, that’s not literally true; but I have at least some affection for nearly all of the syndicated comics. This essay I bring up some strips, partly, because I just like them. This is my content hole. If you want a blog not filled with comic strips, go start your own and don’t put these things on it.
Mark Anderson’s Andertoons for the 5th is the Mark Anderson’s Andertoons for the week. Also a bit of a comment on the ability of collective action to change things. Wavehead is … well, he’s just wrong about making the number four plus the number four equal to the number seven. Not based on the numbers we mean by the words “four” and “seven”, and based on the operation we mean by “plus” and the relationship we mean by “equals”. The meaning of those things is set by, ultimately, axioms and definitions and the laws of deductive reasoning, and there’s no changing the results.
But. The thing we’re referring to when we say “seven”? Or when we write the symbol “7”? That is convention. That is a thing we’ve agreed on as a reference for this concept. And that we can change, if we decide we want to. We’ve done this. Look at a thousand-year-old manuscript and the symbol that looks like ‘4’ may represent the number we call five. And the names of numbers are just common words. They’re subject to change the way every other common word is. Which is, admittedly, not very subject. It would be almost as much bother to change the word ‘four’ as it would be to change the word ‘mom’. But that’s not impossible. Just difficult.
Juba’s Viivi and Wagner for the 5th is a bit of a percentage joke. The characters also come to conclude that a thing either happens or it does not; there are no indefinite states. This principle, the “excluded middle”, is often relied upon for deductive logic, and fairly so. It gets less clear that this can be depended on for predictions of the future, or fears for the future. And real-world things come in degrees that a mathematical concept might not. Like, your fear of the home catching fire comes true if the building burns down. But it’s also come true if a quickly-extinguished frying pan fire leaves the wall scorched, embarrassing but harmless. Anyway, relaxing someone else’s anxiety takes more than a quick declaration of statistics. Show sympathy.
Harry Bliss and Steve Martin’s Bliss for the 6th is a cute little classroom strip, with arithmetic appearing as the sort of topic that students feel overwhelmed and baffled by. It could be anything, but mathematics uses the illustration space efficiently. The strip may properly be too marginal to include, but I like Bliss’s art style and want more people to see it.
Will Henry’s Wallace the Brave for the 7th puts up what Spud calls a sadistic math problem. And, well, it is a story problem happening in their real life. You could probably turn this into an actual exam problem without great difficulty.
Rick Detorie’s One Big Happy for the 8th is a bit of wordplay built around geometry, as Ruthie plays teacher. She’s a bit dramatic, but she always has been.
I’ll read some more comics for later in this week. That essay, and all similar comic strip talk, should appear at this link. Thank you.
## Reading the Comics, April 24, 2019: Mic Drop Edition Edition
I can’t tell you how hard it is not to just end this review of last week’s mathematically-themed comic strips after the first panel here. It really feels like the rest is anticlimax. But here goes.
John Deering’s Strange Brew for the 20th is one of those strips that’s not on your mathematics professor’s office door only because she hasn’t seen it yet. The intended joke is obvious, mixing the tropes of the Old West with modern research-laboratory talk. “Theoretical reckoning” is a nice bit of word juxtaposition. “Institoot” is a bit classist in its rendering, but I suppose it’s meant as eye-dialect.
What gets it a place on office doors is the whiteboard, though. They’re writing out mathematics which looks legitimate enough to me. It doesn’t look like mathematics though. What’s being written is something any mathematician would recognize. It’s typesetting instructions. Mathematics requires all sorts of strange symbols and exotic formatting. In the old days, we’d handle this by giving the typesetters hazard pay. Or, if you were a poor grad student and couldn’t afford that, deal with workarounds. Maybe just leave space in your paper and draw symbols in later. If your university library has old enough papers you can see them. Maybe do your best to approximate mathematical symbols using ASCII art. So you get expressions that look something like this:
```
  / 2 pi
 |        2
 |       x  cos(theta) dx - 2 F(theta) == R(theta)
 |
/ 0
```
This gets old real fast. Mercifully, Donald Knuth, decades ago, worked out a great solution. It uses formatting instructions that can all be rendered in standard, ASCII-available text. And then by dark incantations and summoning of Linotype demons, re-renders that as formatted text. It handles all your basic book formatting needs — much the way HTML, used for web pages, will — and does mathematics much more easily. For example, I would enter a line like:
\int_{0}^{2\pi} x^2 \cos(\theta) dx - 2 F(\theta) \equiv R(\theta)
And this would be rendered in print as:
$\int_{0}^{2\pi} x^2 \cos(\theta) dx - 2 F(\theta) \equiv R(\theta)$
There are many, many expansions available to this, to handle specialized needs, hardcore mathematics among them.
Anyway, the point that makes me realize John Deering was aiming at everybody with an advanced degree in mathematics ever with this joke, using a string of typesetting instead of the usual equations here?
The typesetting language is named TeX.
Mark Anderson’s Andertoons for the 21st is the Mark Anderson’s Andertoons for the week. It’s about one of those questions that nags at you as a kid, and again periodically as an adult. The perimeter is the boundary around a shape. The circumference is the boundary around a circle. Why do we have two words for this? And why do we sound all right talking about either the circumference or the perimeter of a circle, while we sound weird talking about the circumference of a rhombus? We sound weird talking about the perimeter of a rhombus too, but that’s the rhombus’s fault.
The easy part is why there’s two words. Perimeter is a word of Greek origin; circumference, of Latin. Perimeter entered the English language in the early 15th century; circumference in the 14th. Why we have both I don’t know; my suspicion is either two groups of people translating different geometry textbooks, or some eager young Scholastic with a nickname like ‘Doctor Magnifico Triangulorum’ thought Latin sounded better. Perimeter stuck with circles early; every etymology I see about why we use the symbol π describes it as shorthand for the perimeter of the circle. Why “circumference” ended up the word for circles or, maybe, ellipses and ovals and such is probably the arbitrariness of language. I suspect that opening “circ” sound cues people to think of it for circles and circle-like shapes, in a way that perimeter doesn’t. But that is my speculation and should not be mistaken for information.
Steve McGarry’s KidTown for the 21st is a kids’ information panel with a bit of talk about representing numbers. And, in discussing things like how long it takes to count to a million or a billion, or how long it would take to type them out, it gets into how big these numbers can be. Les Stewart typed out the English names of numbers, in words, by the way. He’d also broken the Australian record for treading water, and for continuous swimming.
Gary Delainey and Gerry Rasmussen’s Betty for the 24th is a sudoku comic. Betty makes the common, and understandable, conflation of arithmetic with mathematics. But she’s right in identifying sudoku as a logical rather than an arithmetic problem. You can — and sometimes will — see sudoku-type puzzles rendered with icons like stars and circles rather than numerals. That you can make that substitution should clear up whether there’s arithmetic involved. Commenters at GoComics meanwhile show a conflation of mathematics with logic. Certainly every mathematician uses logic, and some of them study logic. But is logic mathematics? I’m not sure it is, and our friends in the philosophy department are certain it isn’t. But then, if something that a recognizable set of mathematicians study as part of their mathematics work isn’t mathematics, then we have a bit of a logic problem, it seems.
Come Sunday I should have a fresh Reading the Comics essay available at this link.
## Reading the Comics, April 10, 2019: Grand Avenue and Luann Want My Attention Edition
So this past week has been a curious blend for the mathematically-themed comics. There were many comics mentioning some mathematical topic. But that’s because Grand Avenue and Luann Againn — reprints of early 90s Luann comics — have been doing a lot of schoolwork. There’s a certain repetitiveness to saying, “and here we get a silly answer to a story problem” four times over. But we’ll see what I do with the work.
Mark Anderson’s Andertoons for the 7th is Mark Anderson’s Andertoons for the week. Very comforting to see. It’s a geometry-vocabulary joke, with Wavehead noticing the similar ends of some terms. I’m disappointed that I can’t offer much etymological insight. “Vertex”, for example, derives from the Latin for “highest point”, and traces back to the Proto-Indo-European root “wer-”, meaning “to turn, to bend”. “Apex” derives from the Latin for “summit” or “extreme”. And that traces back to the Proto-Indo-European “ap”, meaning “to take, to reach”. Which is all fine, but doesn’t offer much about how both words ended up ending in “ex”. This is where my failure to master Latin by reading a teach-yourself book on the bus during my morning commute for three months back in 2002 comes back to haunt me. There’s probably something that might have helped me in there.
Mac King and Bill King’s Magic in a Minute for the 7th is an activity puzzle this time. It’s also a legitimate problem of graph theory. Not a complicated one, but still, one. Graph theory is about sets of points, called vertices, and connections between points, called edges. It gives interesting results for anything that’s networked. That shows up in computers, in roadways, in blood vessels, in the spreads of disease, in maps, in shapes.
One common problem, found early in studying graph theory, is about whether a graph is planar. That is, can you draw the whole graph, all its vertices and edges, without any edges crossing each other? This graph, with six vertices and three edges, is planar. There are graphs that are not. If the challenge were to connect each number to a 1, a 2, and a 3, then it would be nonplanar. That’s a famous non-planar graph, given the obvious name $K_{3,3}$. A fun part of learning graph theory — at least fun for me — is looking through pictures of graphs. The goal is finding $K_{3,3}$ or another one called $K_5$, inside a big messy graph.
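One standard way to see that K3,3 is nonplanar without drawing anything is an edge-counting bound: a bipartite planar graph on v vertices (v at least 3) can have at most 2v - 4 edges, and K3,3 has too many. A sketch of that arithmetic:

```python
# Edge-count check for K_{3,3}: a bipartite planar graph with
# v >= 3 vertices can have at most 2v - 4 edges.
v = 6       # three vertices on each side
e = 3 * 3   # every left vertex joined to every right vertex

print(e, 2 * v - 4)   # 9 8
print(e > 2 * v - 4)  # True: the bound fails, so K_{3,3} is nonplanar
```

The same style of bound, e <= 3v - 6 for general planar graphs, rules out K5 with v = 5 and e = 10.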
Mike Thompson’s Grand Avenue for the 8th has had a week of story problems featuring both of the kid characters. Here’s the start of them. Making an addition or subtraction problem about counting things is probably a good way of making the problem less abstract. I don’t have children, so I don’t know whether they play marbles or care about them. The most recent time I saw any of my niblings I told them about the subtleties of industrial design in the old-fashioned Western Electric Model 2500 touch-tone telephone. They love me. Also I’m not sure that this question actually tests subtraction more than it tests reading comprehension. But there are teachers who like to throw in the occasional surprisingly easy one. Keeps students on their toes.
Greg Evans’s Luann Againn for the 10th is part of a sequence showing Gunther helping Luann with her mathematics homework. The story started the day before, but this was the first time a specific mathematical topic was named. The point-slope form is a conventional way of writing an equation which corresponds to a particular line. There are many ways to write equations for lines. This is one that’s convenient to use if you know coordinates for one point on the line and the slope of the line. Any coordinates which make the equation true are then the coordinates for some point on the line.
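A small check of the point-slope form, with a made-up point and slope (the homework’s actual numbers aren’t shown in the strip): the line through $(x_1, y_1)$ with slope m is the set of points satisfying $y - y_1 = m(x - x_1)$.

```python
# Point-slope form: a line through (x1, y1) with slope m satisfies
# y - y1 == m * (x - x1). Here: the point (2, 3), slope 4.
x1, y1, m = 2, 3, 4

def on_line(x, y):
    return y - y1 == m * (x - x1)

print(on_line(2, 3))  # True: the given point is on the line
print(on_line(3, 7))  # True: one step right, four steps up
print(on_line(3, 8))  # False
```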
Doug Savage’s Savage Chickens for the 10th tosses in a line about logical paradoxes. In this case, using a classic problem, the self-referential statement. Working out whether a statement is true or false — its “truth value” — is one of those things we expect logic to be able to do. Some self-referential statements, logical claims about themselves, are troublesome. “This statement is false” was a good one for baffling kids and would-be world-dominating computers in science fiction television up to about 1978. Some self-referential statements seem harmless, though. Nobody expects even the most timid world-dominating computer to be bothered by “this statement is true”. It takes more than just a statement being about itself to create a paradox.
And a last note. The blog hardly needs my push to help it out, but, sometimes people will miss a good thing. Ben Orlin’s Math With Bad Drawings just ran an essay about some of the many mathematics-themed comics that Hilary Price and Rina Piccolo’s Rhymes With Orange has run. The comic is one of my favorites too. Orlin looks through some of the comic’s twenty-plus year history and discusses the different types of mathematical jokes Price (with, in recent years, Piccolo) makes.
Myself, I keep all my Reading the Comics essays at this link, and those mentioning some aspect of Rhymes With Orange at this link.
## Reading the Comics, March 19, 2019: Average Edition
This time around, averages seem important.
Mark Anderson’s Andertoons for the 18th is the Mark Anderson’s Andertoons for the week. This features the kids learning some of the commonest terms in descriptive statistics. And, as Wavehead says, the similarity of names doesn’t help sorting them out. Each is a kind of average. “Mean” usually is the arithmetic mean, or the thing everyone including statisticians calls “average”. “Median” is the middle-most value, the one that half the data is less than and half the data is greater than. “Mode” is the most common value. In “normally distributed” data, these three quantities are all the same. In data gathered from real-world measurements, these are typically pretty close to one another. It’s very easy for real-world quantities to be normally distributed. The exceptions are usually when there are some weird disparities, like a cluster of abnormally high-valued (or low-valued) results. Or if there are very few data points.
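As a quick sketch of the three terms Wavehead is sorting out, Python's standard statistics module computes each; the data set here is just an illustration:

```python
import statistics

# A small, deliberately skewed data set: one large value drags the mean upward.
data = [2, 3, 3, 4, 5, 5, 5, 20]

print(statistics.mean(data))    # 5.875 -- the arithmetic mean, sum over count
print(statistics.median(data))  # 4.5 -- halfway between the two middle values
print(statistics.mode(data))    # 5 -- the most common value
```

Note how the single outlier of 20 pulls the mean above both the median and the mode, exactly the sort of "weird disparity" that separates the three quantities.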
The word “mean” derives from the Old French “meien”, that is, “middle, means”. And that itself traces to the Late Latin “medianus”, and the Latin “medius”. That traces back to the Proto-Indo-European “medhyo”, meaning “middle”. That’s probably what you might expect, especially considering that the mean of a set of data is, if the data is not doing anything weird, likely close to the middle of the set. The term appeared in English in the middle 15th century.
The word “median”, meanwhile, follows a completely different path. That one traces to the Middle French “médian”, which traces to the Late Latin “medianus” and Latin “medius” and Proto-Indo-European “medhyo”. This appeared as a mathematical term in the late 19th century; Etymology Online claims 1883, but doesn’t give a manuscript citation.
The word “mode”, meanwhile, follows a completely different path. This one traces to the Old French “mode”, itself from the Latin “modus”, meaning the measure or melody or style. We get from music to common values by way of the “style” meaning. Think of something being done “à la mode”, that is, “in the [ fashionable or popular ] style”. I haven’t dug up a citation about when this word entered the mathematical parlance.
So “mean” and “median” don’t have much chance to do anything but alliterate. “Mode” is coincidence here. I agree, it might be nice if we spread out the words a little more.
John Hambrock’s The Brilliant Mind of Edison Lee for the 18th has Edison introduce a sequence to his grandfather. Doubling the number of things for each square of a checkerboard is an ancient thought experiment. The notion, with grains of wheat rather than cookies, seems to be first recorded in 1256 in a book by the scholar Ibn Khallikan. One story has it that the inventor of chess requested this many grains of wheat from the ruler as reward for inventing the game.
If we followed Edison Lee’s doubling through all 64 squares we’d need, in total, $2^{64} - 1$ or 18,446,744,073,709,551,615 cookies. You can see why the inventor of chess didn’t get that reward, however popular the game was. It stands as a good display of how exponential growth eventually gets to be just that intimidatingly big.
Edison, like many a young nerd, is trying to stagger his grandfather with the enormity of this. I don’t know that it would work. Grandpa ponders eating all that many cookies, since he’s a comical glutton. I’d estimate eating all that many cookies, at the rate of one a second, eight hours a day, to take something like eighteen billion centuries. If I’m wrong? It doesn’t matter. It’s a while. But is that any more staggering than imagining a task that takes a mere ten thousand centuries to finish?
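Both figures are easy to check. This sketch sums the doubling across all 64 squares and converts the total into eating time, using the one-cookie-a-second, eight-hours-a-day rate from the paragraph above:

```python
# Total cookies: 1 + 2 + 4 + ... + 2**63, a geometric series summing to 2**64 - 1.
total = sum(2**square for square in range(64))
assert total == 2**64 - 1
print(f"{total:,}")  # 18,446,744,073,709,551,615

# Eating one cookie per second, eight hours a day.
seconds_per_day = 8 * 60 * 60
centuries = total / seconds_per_day / 365.25 / 100
print(f"{centuries:.3g} centuries")  # about 1.75e10: roughly eighteen billion centuries
```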
Greg Cravens’s The Buckets for the 19th sees Toby surprised by his mathematics homework. He’s surprised by how it turned out. I know the feeling. Everyone who does mathematics enough finds that. Surprise is one of the delights of mathematics. I had a great surprise last month, with a triangle theorem. Thomas Hobbes, the philosopher/theologian, entered his frustrating sideline of mathematics when he found the Pythagorean Theorem surprising.
Mathematics is, to an extent, about finding interesting true statements. What makes something interesting? That depends on the person surprised, certainly. A good guideline is probably “something not obvious before you’ve heard it, that looks inevitable after you have”. That is, a surprise. Learning mathematics probably has to be steadily surprising, and that’s good, because this kind of surprise is fun.
If it’s always a surprise there might be trouble. If you’re doing similar kinds of problems you should start to see them as pretty similar, and have a fair idea what the answers should be. So, from what Toby has said so far … I wouldn’t call him stupid. At most, just inexperienced.
Eric the Circle for the 19th, by Janka, is the Venn Diagram joke for the week. Properly any Venn Diagram with two properties has an overlap like this. We’re supposed to place items in both circles, and in the intersection, to reflect how much overlap there is. Using the sizes of each circle to reflect the sizes of both sets, and the size of the overlap to represent the size of the intersection, is probably inevitable. The shorthand calls on our geometric intuition to convey information, anyway.
Tony Murphy’s It’s All About You for the 19th has a bunch of things going on. The punch line calls “algebra” what’s really a statistics problem, calculating the arithmetic mean of four results. The work done is basic arithmetic. But making work seem like a more onerous task is a good bit of comic exaggeration, and algebra connotes something harder than arithmetic. But Murphy exaggerates with restraint: the characters don’t rate this as calculus.
Then there’s what they’re doing at all. Given four clocks, what’s the correct time? The couple tries averaging them. Why should anyone expect that to work?
There’s reason to suppose this might work. We can suppose all the clocks are close to the correct time. If they weren’t, they would get re-set, or not looked at anymore. A clock is probably more likely to be a little wrong than a lot wrong. You’d let a clock that was two minutes off go about its business, in a way you wouldn’t let a clock that was three hours and 42 minutes off. A clock is probably as likely to show a time two minutes too early as it is two minutes too late. This all suggests that the clock errors are normally distributed, or something like that. So the error of the arithmetic mean of a bunch of clock measurements we can expect to be zero. Or close to zero, anyway.
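A small simulation shows why this works when it works. The model here is my assumption, not the strip's: each clock's error is an independent, normally distributed number of minutes.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

clock_count = 4
trials = 10_000

avg_error_total = 0.0
single_error_total = 0.0
for _ in range(trials):
    # Each clock is off by a normal error: mean 0, standard deviation 2 minutes.
    errors = [random.gauss(0, 2) for _ in range(clock_count)]
    avg_error_total += abs(sum(errors) / clock_count)
    single_error_total += abs(errors[0])

print(avg_error_total / trials)     # typical error of the four-clock average
print(single_error_total / trials)  # typical error of trusting one clock
```

The averaged error should come out roughly half the single-clock error, since averaging $n$ independent errors shrinks the standard deviation by a factor of $\sqrt{n}$.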
There’s reasons this might not work. For example, a clock might systematically run late. My mantle clock, for example, usually drifts about a minute slow over the course of the week it takes to wind. Or the clock might be deliberately set wrong: it’s not unusual to set an alarm clock to five or ten or fifteen minutes ahead of the true time, to encourage people to think it’s later than it really is and they should hurry up. Similarly with watches, if their times aren’t set by Internet-connected device. I don’t know whether it’s possible to set a smart watch to be deliberately five minutes fast, or something like that. I’d imagine it should be possible, but also that the people programming watches don’t see why someone might want to set their clock to the wrong time. From January to March 2018, famously, an electrical grid conflict caused certain European clocks to lose around six minutes. The reasons for this are complicated and technical, and anyway The Doctor sorted it out. But that sort of systematic problem, causing all the clocks to be wrong in the same way, will foil this take-the-average scheme.
Murphy’s not thinking of that, not least because this comic’s a rerun from 2009. He was making a joke, going for the funnier-sounding “it’s 8:03 and five-eights” instead of the time implied by the average, 8:04 and a half. That’s all right. It’s a comic strip. Being amusing is what counts.
There were just enough mathematically-themed comic strips this past week for one more post. When that is ready, it should be at this link. I’ll likely post it Tuesday.
## Reading the Comics, March 12, 2019: Back To Sequential Time Edition
Since I took the Pi Day comics ahead of their normal sequence on Sunday, it’s time I got back to the rest of the week. There weren’t any mathematically-themed comics worth mentioning from last Friday or Saturday, so I’m spending the latter part of this week covering stuff published before Pi Day. It’s got me slightly out of joint. It’ll all be better soon.
Mark Anderson’s Andertoons for the 11th is the Mark Anderson’s Andertoons for this week. That’s nice to have. It’s built on the concept of story problems. That there should be “stories” behind a problem makes sense. Most actual mathematics, even among mathematicians, is done because we want to know a thing. Acting on a want is a story. Wanting to know a thing justifies the work of doing this calculation. And real mathematics work involves looking at some thing, full of the messiness of the real world, and extracting from it mathematics. This would be the question to solve, the operations to do, the numbers (or shapes or connections or whatever) to use. We surely learn how to do that by doing simple examples. The kid — not Wavehead, for a change — points out a common problem here. There’s often not much of a story to a story problem. That is, where we don’t just want something, but someone else wants something too.
Parker and Hart’s The Wizard of Id for the 11th is a riff on the “when do you use algebra in real life” snark. Well, no one disputes that there are fields which depend on advanced mathematics. The snark comes in from supposing that a thing is worth learning only if it’s regularly “useful”.
Rick Detorie’s One Big Happy for the 12th has Joe stalling class to speak to “the guy who invented zero”. I really like this strip since it’s one of those cute little wordplay jokes that also raises a legitimate point. Zero is this fantastic idea and it’s hard to imagine mathematics as we know it without the concept. Of course, we could say the same thing about trying to do mathematics without the concept of, say, “twelve”.
We don’t know who’s “the guy” who invented zero. It’s probably not all a single person, though, or even a single group of people. There are several threads of thought which merged together into zero. One is the notion of emptiness, the absence of a measurable thing. That probably occurred to whoever was the first person to notice a thing wasn’t where it was expected. Another part is the notion of zero as a number, something you could add to or subtract from a conventional number. That is, there’s this concept of “having nothing”, yes. But can you add “nothing” to a pile of things? And represent that using the addition we do with numbers? Sure, but that’s because we’re so comfortable with the idea of zero that we don’t ponder whether “2 + 1” and “2 + 0” are expressing similar ideas. You’ll occasionally see people asking on web forums whether zero is really a number, often without getting much sympathy for their confusion. I admit I have to think hard to not let long reflex stop me wondering what I mean by a number and why zero should be one.
And then there’s zero, the symbol. As in having a representation, almost always a circle, to mean “there is a zero here”. We don’t know who wrote the first of that. The oldest instance of it that we know of dates to the year 683, and was written in what’s now Cambodia. It’s in a stone carving that seems to be some kind of bill of sale. I’m not aware whether there’s any indication from that who the zero was written for, or who wrote it, though. And there’s no reason to think that’s the first time zero was represented with a symbol. It’s the earliest we know about.
Darrin Bell’s Candorville for the 12th has some talk about numbers, and favorite numbers. Lemont claims to have had 8 as his favorite number because its shape, rotated, is that of the infinity symbol. C-Dog disputes Lemont’s recollection of his motives. Which is fair enough; it’s hard to remember what motivated you that long ago. What people mostly do is think of a reason that they, today, would have done that, in the past.
The ∞ symbol as we know it is credited to John Wallis, one of that bunch of 17th-century English mathematicians. He did a good bit of substantial work, in fields like conic sections and physics and whatnot. But he was also one of those people good at coming up with notation. He developed what’s now the standard notation for raising a number to a power, that $x^n$ stuff, and showed how to define raising a number to a rational-number power. Bunch of other things. He also seems to be the person who gave the name “continued fraction” to that concept.
Wallis never explained why he picked ∞ as a shape, of all the symbols one could draw, for this concept. There’s speculation he might have been varying the Roman numeral for 1,000, which we’ve simplified to M but which had been rendered as (|) or () and I can see that. (Well, really more of a C and a mirror-reflected C rather than parentheses, but I don’t have the typesetting skills to render that.) Conflating “a thousand” with “many” or “infinitely many” has a good heritage. We do the same thing when we talk about something having millions of parts or costing trillions of dollars or such. But, Wallis never explained (so far as we’re aware), so all this has to be considered speculation, or at best a mnemonic help for remembering the symbol.
Terry LaBan and Patty LaBan’s Edge City for the 12th is another story problem joke. Curiously the joke seems to be simply that the father gets confused following the convolutions of the story. The specific story problem circles around the “participation awards are the WORST” attitude that newspaper comics are surprisingly prone to. I think the LaBans just wanted the story problem to be long and seem tedious enough that our eyes glazed over. Anyway you could not pay me to read whatever the comments on this comic are. Sorry not sorry.
I figure to have one more Reading the Comics post this week. When that’s posted it should be available at this link. Thanks for being here.
## Reading the Comics, February 2, 2019: Not The February 1, 2019 Edition
The last burst of mathematically-themed comic strips last week nearly all came the 1st of the month. But the count fell just short. I can only imagine what machinations at Comic Strip Master Command went wrong, that we couldn’t get a full four comics for the same day. Well, life is messy and things will happen.
Stephen Bentley’s Herb and Jamaal for the 1st is a rerun. I discussed it last time I noticed it too. I’d previously taken Herb to be gloating about not using the calculus he’d studied. I may be reading too much into what seems like a smirk in the final panel, though. Could be he’s thinking of the strangeness that something which, at the time, is challenging and difficult and all-consuming turns out to not be such a big deal. Which could be much of high school.
But my first instinct is still to read this as thinking of the “uselessness” of calculus. It betrays the terrible attitude that education is about job training. It should be about letting people be literate in the world’s great thoughts. Mathematics seems to get this attitude a lot, but I’m aware I may feel a confirmation bias. If I had become a French major perhaps I’d pay attention to all the comic strips where someone giggles about how they never use the foreign languages they learned in high school either.
Jon Rosenberg’s Scenes from a Multiverse for the 1st is set in a “Mathpinion City”, showing people arguing about mathematical truths. It seems to me a political commentary, about the absurdity of rejecting true things over perceived insults. The 1+1=3 partisans aren’t even insisting they’re right, just that the other side is obnoxious. Arithmetic here serves as good source for things that can’t be matters of opinion, at least provided we’ve agreed on what’s meant by ideas like ‘1’ and ‘3’.
Mathematics is a human creation, though. What we decide to study, and which concepts we find interesting, are matters of opinion. It’s difficult to imagine people who think 1+1=2 a statement so unimportant they don’t care whether it’s true or false. At least not ones who reason anything like we do. But that is our difficulty, not a constraint on what life could think.
Neil Kohney’s The Other End for the 1st has a mathematics cameo. It’s the subject of a quiz so difficult that the kid begs for God’s help sorting it out. The problems all seem to be simplifying expressions. It’s a skill worth having. There are infinitely many ways to write the same quantity. Some of them are more convenient than others. Brief expressions, for example, are often easier to understand. But a longer expression might let us tease out relationships that are good to know. Many analysis proofs end up becoming simpler when you multiply by one — that is, multiplying by and dividing by the same quantity, but using the numerator to reduce one part of the expression and the denominator to reduce some other. Or by adding zero, in which you add and subtract a quantity and use either side to simplify other parts of the expression. So, y’know, just do the work. It’s better that way.
Mark Anderson’s Andertoons for the 2nd is the Mark Anderson’s Andertoons for the week. Wavehead’s learning about invertible operations: that a particular division can undo a multiplication. Or, presumably, that a particular multiplication can undo a division. Fair to wonder why you’d want to do that, though. Most of the operations we use in arithmetic have inverses, or come near it. (There’s one thing you can multiply by which you can’t divide out.) The term used in abstract algebra for this is to say the real numbers are a “field”. This is a ring in which not just addition but also multiplication has an inverse. And the operations commute; dividing by four and multiplying by four is as good as multiplying by four and dividing by four. You can build interesting mathematical structures that don’t have some of these properties. Elementary-school division, where you might describe (say) 26 divided by 4 as “6 with a remainder of 2” is one of them.
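A minimal sketch of the lesson in Python, with the elementary-school remainder case alongside:

```python
x = 26

# Over the real numbers, multiplying and dividing by the same nonzero
# number are inverse operations, in either order.
assert (x * 4) / 4 == x
assert (x / 4) * 4 == x

# Elementary-school division keeps a quotient and a remainder instead.
quotient, remainder = divmod(x, 4)
print(quotient, remainder)  # 6 2, i.e. "6 with a remainder of 2"
assert quotient * 4 + remainder == x
```

The one exception mentioned above shows up here too: replace 4 with 0 and the division lines raise an error rather than invert anything.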
And that covers the comic strips. Come Sunday should be the next of this series, and it should be at this link.
## Reading the Comics, January 26, 2019: The Week Ended Early Edition
Last week started out at a good clip: two comics with enough of a mathematical theme I could imagine writing a paragraph about them each day. Then things puttered out. The rest of the week had almost nothing. At least nothing that seemed significant enough. I’ll list those, since that’s become my habit, at the end of the essay.
Jonathan Lemon and Joey Alison Sayers’s Alley Oop for the 20th is my first chance to show off the new artist and writer team. They’ve decided to make Sunday strips a side continuity about a young Alley Oop and his friends. I’m interested. The strip is built on the bit of pop anthropology that tells us “primitive” tribes will have very few counting words. That you can express concepts like one, two, and three, but then have to give up and count “many”.
Perhaps it’s so. Some societies have been found to have, what seem to us, rather few numerals. This doesn’t reflect on anyone’s abilities or intelligence or the like. And it doesn’t mean people who lack a word for, say, “forty-nine” would be unable to compute. It might take longer, but probably just from inexperience. If someone practiced much calculation on “forty-nine” they’d probably have a name for it. And folks raised in the western mathematics tradition use, even enjoy, some vagueness about big numbers too. We might say there are “dozens” of a thing even if there are not precisely 24, 36, or 48 of the thing; “52” is close enough and we probably didn’t even count it up. “Hundred” similarly names a precise number, but it’s often used to mean “really quite a lot of a thing”. The words “thousands”, “millions”, and mock-numbers like “zillions” have a similar role. They suggest different ranges of what might be “many”.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th is a SABRmetrics joke! At least, it’s an optimization joke, built on the idea that you can find an optimum strategy for anything, whether winning baseball games or The War. The principle is hard to argue with. Nobody would doubt that different approaches to a battle affect how likely winning is. We can imagine gathering data on how different tactics affect the outcome. (We can easily imagine combat simulators running these experiments, particularly.)
The catch — well, one catch — is that this tempts one to reward a process. Once it’s taken for granted the process works, then whether it’s actually doing what you want gets forgotten. And once everyone knows what’s being measured it becomes possible to game the system. Famously, in the mid-1960s the United States tried to judge its progress in the Vietnam War by counting the number of enemy soldiers killed. There was then little reason to care about who was killed, or why. And reason to not care whether actual enemy soldiers were being killed. There’s good to be said about testing whether the things you try to do work. There’s great danger in thinking that the thing you can measure guarantees success.
Mark Anderson’s Andertoons for the 21st is a bit of fun with definitions. Mathematicians rely on definitions. It’s hard to imagine a proof about something undefined. But definitions are hard to compose. We usually construct a definition because we want a common term to describe a collection of things, and to exclude another collection of things. And we need people like Wavehead who can find edge cases, things that seem to satisfy a definition while breaking its spirit. This can let us find unstated assumptions that we should pay attention to. Or force us to accept that the definition is so generally useful that we’ll tolerate it having some counter-intuitive implications.
My favorite counter-intuitive implication is in analysis. The field has a definition for what it means that a function is continuous. It’s meant to capture the idea that you could draw a curve representing the function without having to lift the pen that does it. The best definition mathematicians have settled on allows a function to count as continuous at a single point, and nowhere else in all of space. Continuity seems like something that should need an interval to happen. But we haven’t found a better way to define “continuous” that excludes this pathological case. So we embrace the weirdness in exchange for general usefulness.
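A standard example of that pathological case (the particular function here is my illustration, not one the strip mentions) keeps rational inputs and zeroes out the rest:

```latex
f(x) = \begin{cases} x, & x \in \mathbb{Q} \\ 0, & x \notin \mathbb{Q} \end{cases}
```

Near zero, $|f(x)| \le |x|$, so the usual epsilon-delta argument goes through and $f$ is continuous at 0. At any other point $a$, inputs arbitrarily close to $a$ include both rationals and irrationals, so $f$ keeps jumping between values near $a$ and values near 0, and continuity fails.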
Charles Brubaker’s Ask A Cat for the 21st is a guest appearance from Brubaker’s other strip, The Fuzzy Princess. It’s a rerun and I did discuss it earlier. Soap bubbles make for great mathematics. They’re easy to play with, for one thing. That’s good for capturing imagination. And the mathematics behind them is deep, and led to important results analytically and computationally. It happens when this strip first ran I’d encountered a triplet of essays about the mathematics of soap bubbles and wireframe surfaces. My introduction to those essays is here.
Benita Epstein’s Six Chix for the 25th I wasn’t sure I’d include. But Roy Kassinger asked about it, so that tipped the scales. The dog tries to blame his bad behavior on “the algorithm”, bringing up one of the better monsters of the last couple years. An algorithm is just the procedure by which you do something. Mathematically, that’s usually to solve a problem. That might be finding some interesting part of the domain or range of a function. That might be putting a collection of things in order. That might be any of a host of things. And then we go make a decision based on the results of the algorithm.
What earns The Algorithm its deserved bad name is mindlessness. The idea that once you have an algorithm, a problem is solved. Worse, that once an algorithm is in place it would be irrational to challenge it. I have seen the process termed “mathwashing”, by analogy with whitewashing, and it’s a good one. The notion that because something is done by computer it must be done correctly is absurd. We knew it was absurd before there were computers as we knew them, as see anyone for the past century who has spoken of a “Kafkaesque” interaction with a large organization. It’s impossible to foresee all the outcomes of any reasonably complicated process, much less to verify that all the outcomes are handled correctly. This is before we consider that there will always be mistakes made in the handling of data. Or in the carrying out of the process. And that’s before we consider bad actors. I’m sure there must be research into algorithms designed to handle gaming of the system. I don’t know that there are any good results yet, though. We certainly need them.
There were a couple comics that didn’t seem to be substantial enough for me to write at length about. You might like them anyway. Connie Sun’s Connie to the Wonnie for the 21st shows off a Venn Diagram. Hector D Cantú and Carlos Castellanos’s Baldo for the 23rd is a bit of wordplay about what mathematicians do. Jonathan Lemon’s Rabbits Against Magic for the 23rd similarly is a bit of wordplay built around percentages. (Lemon is the new artist for Alley Oop.) And Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips features Albert Einstein, and a joke based on one of the symmetries which make relativity such a useful explanation of the world’s workings.
I don’t plan to have another Reading the Comics post until next Sunday. But when I do, it’ll be here.
## Reading the Comics, January 12, 2019: A Edition
As I said Sunday, last week was a slow one for mathematically-themed comic strips. Here’s the second half of them. They’re not tightly on point. But that’s all right. They all have titles starting with ‘A’. I mean if you ignore the article ‘the’, the way we usually do when alphabetizing titles.
Tony Cochran’s Agnes for the 11th is basically a name-drop of mathematics. The joke would be unchanged if the teacher asked Agnes to circle all the adjectives in a sentence, or something like that. But there are historical links between religious thinking and mathematics. The Pythagoreans, for example (always a great starting point for any mathematical topic, or for preposterous jokes that might have nothing to do with their reality), were at least as much a religious and philosophical cult as a mathematical school. For a long while in the Western tradition, the people with the time and training to do advanced mathematics work were often working for the church. Even as people were more able to specialize, a mystic streak remained. It’s easy to understand why. Mathematics promises to speak about things that are universally true. It encourages thinking about the infinite. It encourages thinking about the infinitely tiny. It courts paradoxes as difficult as any religious Mystery. It’s easy to snark at someone who takes numerology seriously. But I’m not sure the impulse that sees magic in arithmetic is different to the one that sees something supernatural in a “transfinite” item.
Scott Hilburn’s The Argyle Sweater for the 11th is another mistimed Pi Day joke. π is, famously, an irrational number. But so is every number, except for a handful of strange ones that we’ve happened to find interesting. That π should go on and on follows from what an irrational number means. It’s a bit surprising the 4 didn’t know all this before they married.
I appreciate the secondary joke that the marriage counselor is a “Hugh Jripov”, and the counselor’s being a ripoff is signaled by being a ÷ sign. It suggests that maybe successful reconciliation isn’t an option. I’m curious why the letters ‘POV’ are doubled, in the diploma there. In a strip with tighter drafting I’d think it was suggesting the way a glass frame will distort an image. But Hilburn draws much more loosely than that. I don’t know if it means anything.
Mark Anderson’s Andertoons for the 12th is the Mark Anderson’s Andertoons for the essay. I’m so relieved to have a regular stream of these again. The teacher thinks Wavehead doesn’t need to annotate his work. And maybe so. But writing down thoughts about a problem is often good practice. If you don’t know what to do, or you aren’t sure how to do what you want? Absolutely write down notes. List the things you’d want to do. Or things you’d want to know. Ways you could check your answer. Ways that you might work similar problems. Easier problems that resemble the one you want to do. You find answers by thinking about what you know, and the implications of what you know. Writing these thoughts out encourages you to find interesting true things.
And this was too marginal a mention of mathematics even for me, even on a slow week. But Georgia Dunn’s Breaking Cat News for the 12th has a cat having a nightmare about mathematics class. And it’s a fun comic strip that I’d like people to notice more.
And that’s as many comics as I have to talk about from last week. Sunday, I should have another Reading the Comics post and it’ll be at this link.
## Reading the Comics, January 9, 2018: I Go On About Johnny Appleseed Edition
This was a slow week for mathematically-themed comic strips. Such things happen. I put together a half-dozen that seem on-topic enough to talk about, but I stretched to do it. You’ll see.
Mark Anderson’s Andertoons for the 6th mentions addition as one of the things you learn in an average day of elementary school. I can’t help noticing also the mention of Johnny Appleseed, who’s got a weird place in my heart as he and I share a birthday. He got to it first. Although Johnny Appleseed — John Chapman — is legendary for scattering apple seeds, that’s not what he mostly did. He would more often grow apple-tree nurseries, from which settlers could buy plants and demonstrate they were “improving” their plots. He was also committed to spreading the word of Emanuel Swedenborg’s New Church, one of those religious movements that you somehow don’t hear about. But there was this like 200-year-long stretch where a particular kind of idiosyncratic thinker was Swedenborgian, or at least influenced by that. I don’t know offhand of any important Swedenborgian mathematicians, I admit, but I’m glad to hear if someone has news.
Justin Thompson’s MythTickle rerun for the 9th mentions “algebra” as something so dreadful that even being middle-aged is preferable. Everyone has their own tastes, yes, although it would be the same joke if it were “gym class” or something. (I suppose that’s not one word. “Dodgeball” would do, but I never remember playing it. It exists just as a legendarily feared activity, to me.) Granting, though, that I had a terrible time with the introduction to algebra class I had in middle school.
Tom Wilson’s Ziggy for the 9th is a very early Pi Day joke, so, there’s that. There’s not much reason a take-a-number dispenser couldn’t give out π, or other non-integer numbers. What the numbers are doesn’t matter. It’s just that the dispensed numbers need to be in order. It should be helpful if there’s a clear idea how uniformly spaced the numbers are, so there’s some idea how long a wait to expect between the currently-serving number and whatever number you’ve got. But that only helps if you have a fair idea of how long an order should on average take.
I’ll close out last week’s comics soon. The next Reading the Comics post, like all the earlier ones, should be at this link.
## Reading the Comics, January 5, 2019: Start of the Year Edition
With me wrapping up the mathematically-themed comic strips that ran the first of the year, you can see how far behind I’m falling keeping everything current. In my defense, Monday was busier than I hoped it would be, so everything ran late. Next week is looking quite slow for comics, so maybe I can catch up then. I will never catch up on anything the rest of my life, ever.
Scott Hilburn’s The Argyle Sweater for the 2nd is a bit of wordplay about regular and irregular polygons. Many mathematical constructs, in geometry and elsewhere, come in “regular” and “irregular” forms. The regular form usually has symmetries that make it stand out. For polygons, this is each side having the same length, and each interior angle being congruent. Irregular is everything else. The symmetries which constrain the regular version of anything often mean we can prove things we otherwise can’t. But most of anything is the irregular. We might know fewer interesting things about them, or have a harder time proving them.
I’m not sure what the teacher would be asking for in how to “make an irregular polygon regular”. I mean if we pretend that it’s not setting up the laxative joke. I can think of two alternatives that would make sense. One is to draw a polygon with the same number of sides and the same perimeter as the original. The other is to draw a polygon with the same number of sides and the same area as the original. I’m not sure of the point of either. I suppose polygons of the same area have some connection to quadrature, that is, integration. But that seems like it’s higher-level stuff than this class should be doing. I hate to question the reality of a comic strip but that’s what I’m forced to do.
Bud Fisher’s Mutt and Jeff rerun for the 4th is a gambler’s fallacy joke. Superficially the gambler’s fallacy seems to make perfect sense: the chance of twelve bad things in a row has to be less than the chance of eleven bad things in a row. So after eleven bad things, the twelfth has to come up good, right? But there are two ways this can go wrong.
Suppose each attempted thing is independent: each patient is equally likely to live or die, regardless of what’s come before. In that case, the eleven deaths don’t make it any more likely that the twelfth patient will live.
Suppose each attempted thing is not independent, though. This is easy to imagine. Each surgery, for example, is a chance for the surgeon to learn what to do, or not do. He could be getting better, that is, more likely to succeed, each operation. Or the failures could reflect the surgeon’s skills declining, perhaps from overwork or age or a loss of confidence. Impossible to say without more data. Eleven deaths on what context suggests are low-risk operations suggest poor chances of surviving any given surgery, though. I’m on Jeff’s side here.
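A quick simulation makes the independent case concrete. This is a sketch with a made-up 50-50 success chance, nothing from the strip; it checks that the success rate of the twelfth operation, conditioned on eleven straight failures, matches the unconditional rate:

```python
import random

random.seed(42)
P_SUCCESS = 0.5   # assumed probability; purely illustrative
TRIALS = 500_000

overall = 0
streak_runs = 0
streak_successes = 0

for _ in range(TRIALS):
    run = [random.random() < P_SUCCESS for _ in range(12)]
    overall += run[11]
    if not any(run[:11]):           # eleven failures in a row
        streak_runs += 1
        streak_successes += run[11]

print(round(overall / TRIALS, 3))                # about 0.5
print(round(streak_successes / streak_runs, 3))  # also about 0.5
```

Both rates hover around one half: when the trials are independent, the streak carries no information at all.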
Mark Anderson’s Andertoons for the 5th is a welcome return of Wavehead. It’s about ratios. My impression is that ratios don’t get much attention in themselves anymore, except to dunk on stupid Twitter comments. It’s too easy to jump right into fractions, and division. Ratios underlie this, at least historically. It’s even in the name, ‘rational numbers’.
Wavehead’s got a point in literally comparing apples and oranges. It’s at least weird to compare directly different kinds of things. This is one of those conceptual gaps between ancient mathematics and modern mathematics. We’re comfortable stripping the units off of numbers, and working with them as abstract entities. But that does mean we can calculate things that don’t make sense. This produces the occasional bit of fun on social media where we see something like Google trying to estimate a movie’s box office per square inch of land in Australia. Just because numbers can be combined doesn’t mean they should be.
Larry Wright’s Motley rerun for the 5th has the form of a story problem. And one timely to the strip’s original appearance in 1987, during the National Football League players strike. The setup, talking about the difference in weekly pay between the real players and the scabs, seems like it’s about the payroll difference. The punchline jumps to another bit of mathematics, the point spread. Which is an estimate of the expected difference in scoring between teams. I don’t know for a fact, but would imagine the scab teams had nearly meaningless point spreads. The teams were thrown together extremely quickly, without much training time. The tools to forecast what a team might do wouldn’t have the data to rely on.
The at-least-weekly appearances of Reading the Comics in these pages are at this link.
## Reading the Comics, December 19, 2018: Andertoons Is Back Edition
I had not wanted to mention, for fear of setting off a panic. But Mark Anderson’s Andertoons, which I think of as being in every Reading the Comics post, hasn’t been around lately. If I’m not missing something, it hasn’t made an appearance in three months now. I don’t know why, and I’ve been trying not to look too worried by it. Mostly I’ve been forgetting to mention the strange absence. This even though I would think any given Tuesday or Friday that I should talk about the strip not having anything for me to write about. Fretting about it would make a great running theme. But I have never spotted a running theme before it’s finished. In any event the good news is that the long drought has ended, and Andertoons reappears this week. Yes, I’m hoping that it won’t be going too long between appearances this time.
Jef Mallett’s Frazz for the 16th talks about probabilities. This in the context of assessing risks. People are really bad at estimating probabilities. We’re notoriously worse at assessing risks, especially when it’s a matter of balancing a present cost like “fifteen minutes waiting while the pharmacy figures out whether insurance will pay for the flu shot” versus a nebulous benefit like “lessened chance of getting influenza, or at least having a less severe influenza”. And it’s asymmetric, too. We view improbable but potentially enormous losses differently from the way we view improbable but potentially enormous gains. And it’s hard to make the rationally-correct choice reliably, not when there are so many choices of this kind every day.
Tak Bui’s PC and Pixel for the 16th features a wall full of mathematical symbols, used to represent deep thought about a topic. The symbols are gibberish, yes. I’m not sure that an actual “escape probability” could be done in a legible way, though. Or even what precisely Professor Phillip might be calculating. I imagine it would be an estimate of the various ways he might try to escape, and what things might affect that. This might be for the purpose of figuring out what he might do to maximize his chances of a successful escape. Although I wouldn’t put it past the professor to just be quite curious what the odds are. There’s a thrill in having a problem solved, even if you don’t use the answer for anything.
Ruben Bolling’s Super-Fun-Pak Comix for the 18th has a trivia-panel-spoof dubbed Amazing Yet Tautological. One could make an argument that most mathematics trivia fits into this category. At least anything about something that’s been proven. Anyway, whether this is a tautological strip depends on what the strip means by “average” in the phrase “average serving”. There’s about four jillion things dubbed “average” and each of them has a context in which they make sense. The thing intended here, and the thing meant if nobody says anything otherwise, is the “arithmetic mean”. That’s what you get from adding up everything in a sample (here, the amount of egg salad each person in America eats per year) and dividing it by the size of the sample (the number of people in America that year). Another “average” which would make sense, but would break this strip, would be the median. That would be the amount of egg salad that half of all Americans eat more than, and half eat less than. But whether every American could have that big a serving really depends on what that median is. The “mode”, the most common serving, would also be a reasonable “average” to expect someone to talk about.
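Python’s standard library distinguishes all three of these “averages”; the serving sizes below are invented just to show how they can disagree on the same sample:

```python
import statistics

# Hypothetical egg-salad servings (pounds per year) for ten people.
servings = [0, 0, 0, 1, 2, 2, 5, 5, 6, 9]

print(statistics.mean(servings))    # arithmetic mean: 3.0
print(statistics.median(servings))  # median: 2.0
print(statistics.mode(servings))    # mode (most common value): 0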
Mark Anderson’s Andertoons for the 19th is that strip’s much-awaited return to my column here. It features solid geometry, which is both an important part of geometry and also a part that doesn’t get nearly as much attention as plane geometry. It’s reductive to suppose the problem is that it’s harder to draw solids than planar figures. I suspect that’s a fair part of the problem, though. Mathematicians don’t get much art training, not anymore. And while geometry is supposed to be able to rely on pure reasoning, a good picture still helps. And a bad picture will lead us into trouble.
Each of the Reading the Comics posts should all be at this link. And I have finished the alphabet in my Fall 2018 Mathematics A To Z glossary. There should be a few postscript thoughts to come this week, though.
## Reading the Comics, September 17, 2018: Hard To Credit Edition
Two of the four comic strips I mean to feature here have credits that feel unsatisfying to me. One of them is someone’s pseudonym and, yeah, that’s their business. One is Dennis the Menace, for which I find an in-strip signature that doesn’t match the credentials on Comics Kingdom’s web site, never mind Wikipedia. I’ll go with what’s signed in the comic as probably authoritative. But I don’t like it.
R Ferdinand and S Ketcham’s Dennis the Menace for the 16th is about calculation. One eternally surprising little thing about calculators and computers is that they don’t do anything you can’t do by hand. Or, for that matter, in your head. They do it faster, typically, and more reliably. They can seem magical. But the only difference between what they do and what we do is the quantity with which they do this work. You can take this as humbling or as inspirational, as fits your worldview.
Ham’s Life on Earth for the 16th is a joke about the magical powers we attribute to mathematics. It’s also built on one of our underlying assumptions of the world, that it must be logically consistent. If one has an irrefutable logical argument that something isn’t so, then that thing must not be so. It’s hard to imagine how an illogical world would work. But it is hard not to wonder if there’s some arrogance involved in supposing the world has to square with the rules of logic that we find sensible. And to wonder whether we perceive world consistent with that logic because our expectations frame what we’re able to perceive.
In any case, as we frame logic, an argument’s validity shouldn’t depend on the person making the argument. Or even whether the argument has been made. So it’s hard to see how simply voicing the argument that one doesn’t exist could have that effect. Except that mathematics has got magical connotations, and vice-versa. That’ll be good for building jokes for a while yet.
Mark Anderson’s Andertoons for the 17th is the Mark Anderson’s Andertoons for the week. It’s wordplay, built on the connotation that division is a bad thing. It seems less dire if we think of division as learning how to equally share something that’s been held in common, though. Or if we think of it as learning what to multiply a thing by to get a particular value. Most mathematical operations can be taken to mean many things. Surely division has some constructive and happy interpretations.
Paul Gilligan’s Pooch Cafe for the 17th is a variation of the monkeys-on-keyboards joke. If what you need is a string of nonsense characters then … well, a cat on the keys is at least famous for producing some gibberish. It’s likely not going to be truly random, though. If a cat’s paw has stepped on, say, the ‘O’, there’s a good chance the cat is also stepping on ‘P’ or ‘9’. It also suggests that if the cat starts from the right, they’re more likely to have a character like ‘O’ early in the string of characters and less likely at the end. A completely random string would be as likely to have an ‘O’ at the start as at the end of the string.
And even if a cat on the keyboard did produce good-quality randomness, well. How likely a randomly-generated string of characters is to match a thing depends on the length of the thing. If the meaning of the symbols doesn’t matter, then ‘Penny Lane’ is as good as ‘*2ft,2igFIt’. This is not to say you can just use, say, ‘asdfghjkl’ as your password, at least not for anything that would hurt you if it were cracked. If everyone picked passwords with no regard for what the symbols meant, these would be fine. But passwords that seem easy to think of get used more often than they should be. It’s not that they’re intrinsically easier to guess, but that guessing them is more likely to be correct.
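The equal-likelihood point is easy to quantify. Assuming a uniform choice from, say, the 95 printable ASCII characters (an assumption, not anything from the strip), every specific string of a given length has the same probability, memorable or not:

```python
import math

ALPHABET = 95                  # printable ASCII, an assumed character set
length = len('Penny Lane')     # 10 characters, space included

outcomes = ALPHABET ** length  # every specific string is 1 of these
print(math.log10(outcomes))    # about 19.8, i.e. roughly 6 * 10**19
```

So ‘Penny Lane’ and a keyboard-mash of the same length are each one outcome out of roughly six times ten-to-the-nineteenth.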
Later this week I’ll host this month’s Playful Mathematics Blog Carnival! If you know of any mathematics that teaches or delights or both please share it with me, and we’ll let the world know. Also this week I should finally start my 2018 Mathematics A To Z, explaining words from mathematics one at a time.
And there’ll be another Reading the Comics Post before next Sunday. It and all my other Reading the Comics posts should be at this tag. Other appearances of Dennis the Menace should be at this link. This and other essays mentioning Life On Earth are at this link. The many appearances of Andertoons are at this link. And other essays with Pooch Cafe should be at this link. Thanks for reading along.
## Reading the Comics, September 11, 2018: 60% Reruns Edition
Three of the five comic strips I review today are reruns. I think that I’ve only mentioned two of them before, though. But let me preface all this with a plea I’ve posted before: I’m hosting the Playful Mathematics Blog Carnival the last week in September. Have you run across something mathematical that was educational, or informative, or playful, or just made you glad to know about? Please share it with me, and we can share it with the world. It can be for any level of mathematical background knowledge. Thank you.
Tom Batiuk’s Funky Winkerbean vintage rerun for the 10th is part of an early storyline of Funky attempting to tutor football jock Bull Bushka. Mathematics — geometry, particularly — gets called on as a subject Bull struggles to understand. Geometry’s also well-suited for the joke because it has visual appeal, in a way that English or History wouldn’t. And, you know, I’ll take “pretty” as a first impression to geometry. There are a lot of diagrams whose beauty is obvious even if their reasons or points or importance are obscure.
Dan Collins’s Looks Good on Paper for the 10th is about everyone’s favorite non-orientable surface. The first time this strip appeared I noted that the road as presented isn’t a Möbius strip. The opossums and the car are on different surfaces. Unless there’s a very sudden ‘twist’ in the road in the part obscured from the viewer, anyway. If I’d drawn this in class I would try to save face by saying that’s where the ‘twist’ is, but none of my students would be convinced. But we’d like to have it that the car would, if it kept driving, go over all the pavement.
Bud Fisher’s Mutt and Jeff for the 10th is a joke about story problems. The setup suggests that there’s enough information in what Jeff has to say about the cop’s age to work out what it must be. Mutt isn’t crazy to suppose there is some solution possible. The point of this kind of challenge is realizing there are constraints on possible ages which are not explicit in the original statements. But in this case there’s just nothing. We would call the cop’s age “underdetermined”. The information we have allows for many different answers. We’d like to have just enough information to rule out all but one of them.
John Rose’s Barney Google and Snuffy Smith for the 11th is here by popular request. Jughead hopes that a complicated process of dubious relevance will make his report card look not so bad. Loweezey makes a New Math joke about it. This serves as a shocking reminder that, as most comic strip characters are fixed in age, my cohort is now older than Snuffy and Loweezey Smith. At least it is plausibly older than them.
Anyway it’s also a nice example of the lasting cultural reference of the New Math. It might not have lasted long as an attempt to teach mathematics in ways more like mathematicians do. But it’s still, nearly fifty years on, got an unshakable and overblown reputation for turning mathematics into doubletalk and impossibly complicated rules. I imagine it’s the name; “New Math” is a nice, short, punchy name. But the name also looks like what you’d give something that was being ruined, under the guise of improvement. It looks like that terrible moment of something familiar being ruined even if you don’t know that the New Math was an educational reform movement. Common Core’s done well in attracting a reputation for doing problems the complicated way. But I don’t think its name is going to have the cultural legacy of the New Math.
Mark Anderson’s Andertoons for the 11th is another kid-resisting-the-problem joke. Wavehead’s obfuscation does hit on something that I have wondered, though. When we describe things, we aren’t just saying what we think of them. We’re describing what we think our audience should think of them. This struck me back around 1990 when I observed to a friend that then-current jokes about how hard VCRs were to use failed for me. Everyone in my family, after all, had no trouble at all setting the VCR to record something. My friend pointed out that I talked about setting the VCR. Other people talk about programming the VCR. Setting is what you do to clocks and to pots on a stove and little things like that; an obviously easy chore. Programming is what you do to a computer, an arcane process filled with poor documentation and mysterious problems. We framed our thinking about the task as a simple, accessible thing, and we all found it simple and accessible. Mathematics does tend to look at “problems”, and we do, especially in teaching, look at “finding solutions”. Finding solutions sounds nice and positive. But then we just go back to new problems. And the most interesting problems don’t have solutions, at least not ones that we know about. What’s enjoyable about facing these new problems?
One thing that’s not a problem: finding other Reading the Comics posts. They should all appear at this link. Appearances by the current-run and the vintage Funky Winkerbean are at this link. Essays with a mention of Looks Good On Paper are at this link. Meanwhile, essays with Mutt and Jeff in them are at this link. Other appearances by Barney Google and Snuffy Smith — current and vintage, if vintage ever does something on-topic — are at this link. And the many appearances by Andertoons are at this link, or just use any Reading the Comics post, really. Thank you.
## Reading the Comics, August 24, 2018: Delayed But Eventually There Edition
Now I’ve finally had the time to deal with the rest of last week’s comics. I’ve rarely been so glad that Comic Strip Master Command has taken it easy on me for this week.
Tom Toles’s Randolph Itch, 2am for the 20th is about a common daydream, that of soap bubbles of weird shapes. There’s fun mathematics to do with soap bubbles. Most of these fall into the “calculus of variations”, which is good at finding minimums and maximums. The minimum here is a surface with zero mean curvature that satisfies particular boundaries. In soap bubble problems the boundaries have a convenient physical interpretation. They’re the wire frames you dunk into soap film, and pull out again, to see what happens. There’s less that’s proven about soap bubbles than you might think. For example: we know that two bubbles of the same size will join on a flat common surface. Do three bubbles? They seem to, when you try blowing bubbles and fitting them together. But this falls short of mathematical rigor.
Parker and Hart’s Wizard of Id Classics for the 21st is a joke about the ignorance of students. Of course they don’t know basic arithmetic. Curious thing about the strip is that you can read it as an indictment of the school system, failing to help students learn basic stuff. Or you can read it as an indictment of students, refusing the hard work of learning while demanding a place in politics. Given the 1968 publication date I have a suspicion which was more likely intended. But it’s hard to tell; 1968 was a long time ago. And sometimes it’s just so easy to crack an insult there’s no guessing what it’s supposed to mean.
Gene Mora’s Graffiti for the 22nd mentions what’s probably the most famous equation after that thing with two times two in it. It does cry out something which seems true, that $E = mc^2$ was there before Albert Einstein noticed it. It does get at one of those questions that, I say without knowledge, is probably less core to philosophers of mathematics than the non-expert would think. But are mathematical truths discovered or invented? There seems to be a good argument that mathematical truths are discovered. If something follows by deductive logic from the axioms of the field, and the assumptions that go into a question, then … what’s there to invent? Anyone following the same deductive rules, and using the same axioms and assumptions, would agree on the thing discovered. Invention seems like something that reflects an inventor.
But it’s hard to shake the feeling that there is invention going on. Anyone developing new mathematics decides what things seem like useful axioms. She decides that some bundle of properties is interesting enough to have a name. She decides that some consequences of these properties are so interesting as to be named theorems. Maybe even the Fundamental Theorem of the field. And there was the decision that this is a field with a question interesting enough to study. I’m not convinced that isn’t invention.
Mark Anderson’s Andertoons for the 23rd sees Wavehead — waaait a minute. That’s not Wavehead! This throws everything off. Well, it’s using mathematics as the subject that Not-Wavehead is trying to avoid. And it’s not using arithmetic as the subject easiest to draw on the board. It needs some kind of ascending progression to make waiting for some threshold make sense. Numbers rising that way makes sense.
Scott Hilburn’s The Argyle Sweater for the 24th is the Roman numerals joke for this week. Oh, and apparently it’s a rerun; I hadn’t noticed before that the strip was rerunning. This isn’t a complaint. Cartoonists need vacations too.
That birds will fly in V-formation has long captured people’s imaginations. We’re pretty confident we know why they do it. The wake of one bird’s flight can make it easier for another bird to stay aloft. This is especially good for migrating birds. The fluid-dynamic calculations of this are hard to do, but any fluid-dynamic calculations are hard to do. Verifying the work was also hard, but could be done. I found and promptly lost an article about how heartbeat monitors were attached to a particular flock of birds whose migration path was well-known, so the sensors could be checked and data from them gathered several times over. (Birds take turns as the lead bird, the one that gets no lift from anyone else’s efforts.)
So far as I’m aware there’s still some mystery as to how they do it. That is, how they know to form this V-formation. A particularly promising line of study in the 80s and 90s was to look at these as self-organizing structures. This would have each bird just trying to pay attention to what made sense for itself, where to fly relative to its nearest-neighbor birds. And these simple rules created, when applied to the whole flock, that V pattern. I do not know whether this reflects current thinking about bird formations. I do know that the search for simple rules that produce rich, complicated patterns goes on. Centuries of mathematics, physics, and to an extent chemistry have primed us to expect that everything is the well-developed result of simple components.
Dave Whamond’s Reality Check for the 24th is apparently an answer to The Wandering Melon‘s comic earlier this month. So now we know what kind of lead time Dave Whamond is working on.
My next, and past, Reading the Comics posts are available at this link. Other essays with Randolph Itch, 2 a.m., are at this link. Essays that mention The Wizard of Id, classic or modern, are at this link. Essays mentioning Graffiti are at this link. Other appearances by Andertoons are at this link, or just read about half of all Reading the Comics posts. The Argyle Sweater is mentioned in these essays. And other essays with Reality Check are at this link. And what the heck; here’s other essays with The Wandering Melon in them.
# About Constant Bitrate Encoding
What most of us are used to when we encode an MP3 file is that we can set a bitrate – such as 192kbps – and the codec will produce an MP3 file with that bitrate. If that was all there is to it, we’d have Constant Bitrate encoding, aka ‘CBR’.
But in many cases, the actual encoding scheme is Variable Bitrate (‘VBR’), which has been modified to be Adaptive Variable Bitrate (‘AVBR’).
The way AVBR works is that it nests the actual encoding algorithm inside a loop, on the premise that the user has nevertheless set a target bitrate. The loop feeds the actual algorithm several quality-factors, encoding the same granule of sound in multiple attempts, in order to find the maximum quality-factor which does not cause the encoding to exceed the number of bits that have been allocated for the algorithm to take up, in its encoding of one granule of sound.
This quality-factor is then used to produce the output. And, in case the actual number of bits output is less than the allocated number of bits, the difference is added to the number of bits that act as a target, with which the next granule of sound is to be encoded.
Encoding schemes that are truly CBR are often ones which are not compressed, plus perhaps ‘DPCM’. Most of the other schemes, such as ‘MP3’ and ‘OGG’, are really AVBR or VBR.
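The nested loop described above can be sketched as follows. This is a toy model, not any real codec: `encode_granule` is a stand-in whose output grows with the quality-factor, and the surplus carried forward mimics adding the unused bits to the next granule’s target:

```python
def encode_granule(samples, q):
    """Stand-in encoder: a made-up model where higher q costs more bits."""
    return b'\x00' * (len(samples) * q // 4)

def avbr_encode(granules, target_bits_per_granule, q_max=10):
    encoded = []
    carry = 0  # unused bits handed to the next granule's budget
    for samples in granules:
        budget = target_bits_per_granule + carry
        # Try quality-factors from best to worst; keep the highest one
        # whose encoding fits the budget (q = 1 is kept even if it spills).
        for q in range(q_max, 0, -1):
            out = encode_granule(samples, q)
            if len(out) * 8 <= budget:
                break
        encoded.append(out)
        carry = budget - len(out) * 8
    return b''.join(encoded)

stream = avbr_encode([[0] * 576] * 3, 8000)  # three fake 576-sample granules
print(len(stream) * 8)  # 23040: under the 24000-bit overall budget
```

Notice how the second and third granules get a higher quality-factor than the first: the bits the first granule left unused were rolled into their budgets.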
(Updated 03/11/2018 : )
(As of 03/06/2018 : )
I think that in the case of video codecs, the approach is modified slightly, due to the high computational cost of compressing one frame. In this case, if the present frame-set consumed fewer bits than allocated, the quality-factor is increased, with which the next frame-set is to be encoded. Conversely, since it can happen that the present frame-set took up more bits when encoded than the target suggested, the quality-factor can also be decreased for the next frame-set.
That way, with Video codecs, no frame actually needs to be encoded twice.
I guess that this poses the question of how the initial quality-level is to be chosen, so that the first set of frames, including a key-frame, produces approximately the correct number of encoded bits, just so that the loop doesn’t go wild – an unstable feedback-loop – with a great excess or a great shortage to start. And there are two answers that would help:
1. The encoder could apply both a maximum bitrate, and a maximum q-factor, the lower of the two defining the actual quality, and
2. In the case of video, only the q-factor may be adjusted from one frame-set to the next, not the target bit-number, to enhance the stability of the loop. The result would be a slight but consistent undershoot, of the target bitrate.
(Edit 03/11/2018 : )
In fact, to prevent the q-factor from oscillating from one frame-set to the next, the algorithm can wait until the real bitrate differs from the ideal bitrate by a ratio that exceeds the ratio between two q-factors. It’s not perfect, but may prevent oscillation:
// 'a' is a patience threshold, expressed in q-factor steps:
// a > 1.0, and typically a ~= 2.0. Also, Qmin ~= 6.

// How far the real bit-count falls short of the allocation,
// expressed in equivalent q-factor steps:
r = (float(Bitsmax) / Bitsreal - 1.0) * Qcurrent;

if ( r > a && Qcurrent < Qmax ) {
    Qcurrent++;
}
if ( r < 0.0 && Qcurrent > Qmin ) {
    Qcurrent--;
}
Dirk
mathlib documentation
group_theory.schur_zassenhaus
The Schur-Zassenhaus Theorem #
In this file we prove the Schur-Zassenhaus theorem.
Main results #
• exists_right_complement'_of_coprime : The Schur-Zassenhaus theorem: If H : subgroup G is normal and has order coprime to its index, then there exists a subgroup K which is a (right) complement of H.
• exists_left_complement'_of_coprime : The Schur-Zassenhaus theorem: If H : subgroup G is normal and has order coprime to its index, then there exists a subgroup K which is a (left) complement of H.
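As a usage sketch (assuming the mathlib3 names documented in this file), the right-complement version can be invoked like this:

```lean
-- Sketch: `hN` is the coprimality hypothesis; names as in this file.
example {G : Type*} [group G] (N : subgroup G) [N.normal]
    (hN : (nat.card N).coprime N.index) :
    ∃ H : subgroup G, N.is_complement' H :=
subgroup.exists_right_complement'_of_coprime hN
```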
def subgroup.quotient_diff {G : Type u_1} [group G] (H : subgroup G) [H.is_commutative] [fintype (G ⧸ H)] :
Type u_1
The quotient of the transversals of an abelian normal N by the diff relation.
Equations
Instances for subgroup.quotient_diff
@[protected, instance]
def subgroup.quotient_diff.inhabited {G : Type u_1} [group G] (H : subgroup G) [H.is_commutative] [fintype (G ⧸ H)] :
Equations
theorem subgroup.smul_diff_smul' {G : Type u_1} [group G] (H : subgroup G) [H.is_commutative] [fintype (G ⧸ H)] (α β : ) [hH : H.normal] (g : Gᵐᵒᵖ) :
(g β) = , _⟩
@[protected, instance]
def subgroup.quotient_diff.mul_action {G : Type u_1} [group G] {H : subgroup G} [H.is_commutative] [fintype (G ⧸ H)] [H.normal] :
Equations
theorem subgroup.smul_diff' {G : Type u_1} [group G] {H : subgroup G} [H.is_commutative] [fintype (G ⧸ H)] (α β : ) [H.normal] (h : H) :
= * h ^ H.index
theorem subgroup.eq_one_of_smul_eq_one {G : Type u_1} [group G] {H : subgroup G} [H.is_commutative] [fintype (G ⧸ H)] [H.normal] [fintype H] (hH : (fintype.card H).coprime H.index) (α : H.quotient_diff) (h : H) :
h • α = α → h = 1
theorem subgroup.exists_smul_eq {G : Type u_1} [group G] {H : subgroup G} [H.is_commutative] [fintype (G ⧸ H)] [H.normal] [fintype H] (hH : (fintype.card H).coprime H.index) (α β : H.quotient_diff) :
∃ (h : H), h • α = β
theorem subgroup.is_complement'_stabilizer_of_coprime {G : Type u_1} [group G] {H : subgroup G} [H.is_commutative] [fintype (G ⧸ H)] [H.normal] [fintype H] {α : H.quotient_diff} (hH : (fintype.card H).coprime H.index) :
Proof of the Schur-Zassenhaus theorem #
In this section, we prove the Schur-Zassenhaus theorem. The proof is by contradiction. We assume that G is a minimal counterexample to the theorem.
We will arrive at a contradiction via the following steps:
• step 0: N (the normal Hall subgroup) is nontrivial.
• step 1: If K is a subgroup of G with K ⊔ N = ⊤, then K = ⊤.
• step 2: N is a minimal normal subgroup, phrased in terms of subgroups of G.
• step 3: N is a minimal normal subgroup, phrased in terms of subgroups of N.
• step 4: p (min_fac (fintype.card N)) is prime (follows from step 0).
• step 5: P (a Sylow p-subgroup of N) is nontrivial.
• step 6: N is a p-group (applies step 1 to the normalizer of P in G).
• step 7: N is abelian (applies step 3 to the center of N).
theorem subgroup.schur_zassenhaus_induction.step7 {G : Type u} [group G] [fintype G] {N : subgroup G} [N.normal] (h1 : .coprime N.index) (h2 : ∀ (G' : Type u) [_inst_4 : group G'] [_inst_5 : fintype G'], ∀ {N' : subgroup G'} [_inst_6 : N'.normal], (fintype.card N').coprime N'.index(∃ (H' : subgroup G'), N'.is_complement' H')) (h3 : ∀ (H : subgroup G), ¬) :
Do not use this lemma: It is made obsolete by exists_right_complement'_of_coprime
theorem subgroup.exists_right_complement'_of_coprime_of_fintype {G : Type u} [group G] [fintype G] {N : subgroup G} [N.normal] (hN : (fintype.card N).coprime N.index) :
∃ (H : subgroup G), N.is_complement' H
Schur-Zassenhaus for normal subgroups: If H : subgroup G is normal, and has order coprime to its index, then there exists a subgroup K which is a (right) complement of H.
theorem subgroup.exists_right_complement'_of_coprime {G : Type u} [group G] {N : subgroup G} [N.normal] (hN : (nat.card N).coprime N.index) :
∃ (H : subgroup G), N.is_complement' H
Schur-Zassenhaus for normal subgroups: If H : subgroup G is normal, and has order coprime to its index, then there exists a subgroup K which is a (right) complement of H.
theorem subgroup.exists_left_complement'_of_coprime_of_fintype {G : Type u} [group G] [fintype G] {N : subgroup G} [N.normal] (hN : (fintype.card N).coprime N.index) :
∃ (H : subgroup G), H.is_complement' N
Schur-Zassenhaus for normal subgroups: If H : subgroup G is normal, and has order coprime to its index, then there exists a subgroup K which is a (left) complement of H.
theorem subgroup.exists_left_complement'_of_coprime {G : Type u} [group G] {N : subgroup G} [N.normal] (hN : (nat.card N).coprime N.index) :
∃ (H : subgroup G), H.is_complement' N
Schur-Zassenhaus for normal subgroups: If H : subgroup G is normal, and has order coprime to its index, then there exists a subgroup K which is a (left) complement of H.
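For orientation, the theorem these declarations formalise can be stated classically as follows (the standard textbook statement, not text taken from the mathlib source):

```latex
\textbf{Theorem (Schur--Zassenhaus).}
Let $G$ be a finite group and let $N \trianglelefteq G$ be a normal subgroup with
\[
  \gcd\bigl(\lvert N \rvert,\ [G : N]\bigr) = 1 .
\]
Then $N$ has a complement in $G$: there exists a subgroup $H \le G$ with
$N \cap H = \{1\}$ and $NH = G$, so that $G \cong N \rtimes H$.
```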
# NAG Toolbox: nag_correg_coeffs_pearson_subset_miss_pair (g02bj)
## Purpose
nag_correg_coeffs_pearson_subset_miss_pair (g02bj) computes means and standard deviations, sums of squares and cross-products of deviations from means, and Pearson product-moment correlation coefficients for selected variables omitting cases with missing values from only those calculations involving the variables for which the values are missing.
## Syntax
[xbar, std, ssp, r, ncases, cnt, ifail] = g02bj(x, miss, xmiss, kvar, 'n', n, 'm', m, 'nvars', nvars)
[xbar, std, ssp, r, ncases, cnt, ifail] = nag_correg_coeffs_pearson_subset_miss_pair(x, miss, xmiss, kvar, 'n', n, 'm', m, 'nvars', nvars)
Note: the interface to this routine has changed since earlier releases of the toolbox:
Mark 22: n has been made optional.
## Description
The input data consists of $n$ observations for each of $m$ variables, given as an array
$$x_{ij}, \quad i = 1,2,\dots,n \;(n \ge 2),\; j = 1,2,\dots,m \;(m \ge 2),$$
where $x_{ij}$ is the $i$th observation on the $j$th variable, together with the subset of these variables, $v_1,v_2,\dots,v_p$, for which information is required.
In addition, each of the $m$ variables may optionally have associated with it a value which is to be considered as representing a missing observation for that variable; the missing value for the $j$th variable is denoted by $xm_j$. Missing values need not be specified for all variables.
Let $w_{ij} = 0$ if the $i$th observation for the $j$th variable is a missing value, i.e., if a missing value, $xm_j$, has been declared for the $j$th variable and $x_{ij} = xm_j$ (see also Section [Accuracy]); and $w_{ij} = 1$ otherwise, for $i = 1,2,\dots,n$ and $j = 1,2,\dots,m$.
The quantities calculated are:
(a) Means:
$$\bar{x}_j = \frac{\sum_{i=1}^{n} w_{ij} x_{ij}}{\sum_{i=1}^{n} w_{ij}}, \quad j = v_1,v_2,\dots,v_p.$$
(b) Standard deviations:
$$s_j = \sqrt{\frac{\sum_{i=1}^{n} w_{ij}\,(x_{ij}-\bar{x}_j)^2}{\sum_{i=1}^{n} w_{ij} - 1}}, \quad j = v_1,v_2,\dots,v_p.$$
(c) Sums of squares and cross-products of deviations from means:
$$S_{jk} = \sum_{i=1}^{n} w_{ij} w_{ik}\,(x_{ij}-\bar{x}_{j(k)})(x_{ik}-\bar{x}_{k(j)}), \quad j,k = v_1,v_2,\dots,v_p,$$
where
$$\bar{x}_{j(k)} = \frac{\sum_{i=1}^{n} w_{ij} w_{ik} x_{ij}}{\sum_{i=1}^{n} w_{ij} w_{ik}} \quad\text{and}\quad \bar{x}_{k(j)} = \frac{\sum_{i=1}^{n} w_{ik} w_{ij} x_{ik}}{\sum_{i=1}^{n} w_{ik} w_{ij}},$$
(i.e., the means used in the calculation of the sum of squares and cross-products of deviations are based on the same set of observations as are the cross-products).
(d) Pearson product-moment correlation coefficients:
$$R_{jk} = \frac{S_{jk}}{\sqrt{S_{jj(k)} S_{kk(j)}}}, \quad j,k = v_1,v_2,\dots,v_p,$$
where
$$S_{jj(k)} = \sum_{i=1}^{n} w_{ij} w_{ik}\,(x_{ij}-\bar{x}_{j(k)})^2 \quad\text{and}\quad S_{kk(j)} = \sum_{i=1}^{n} w_{ik} w_{ij}\,(x_{ik}-\bar{x}_{k(j)})^2,$$
(i.e., the sums of squares of deviations used in the denominator are based on the same set of observations as are used in the calculation of the numerator).
If $S_{jj(k)}$ or $S_{kk(j)}$ is zero, $R_{jk}$ is set to zero.
(e) The number of cases used in the calculation of each of the correlation coefficients:
$$c_{jk} = \sum_{i=1}^{n} w_{ij} w_{ik}, \quad j,k = v_1,v_2,\dots,v_p.$$
(The diagonal terms, $c_{jj}$, for $j = v_1,v_2,\dots,v_p$, also give the number of cases used in the calculation of the means, $\bar{x}_j$, and the standard deviations, $s_j$.)
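The quantities (a)–(e) above can be cross-checked with a short Python/NumPy sketch. This is an illustration of the pairwise-deletion scheme only, not the NAG implementation; the function and variable names are my own:

```python
import numpy as np

def pairwise_pearson(x, xmiss):
    """Pearson correlations with pairwise deletion of missing values.

    x     : (n, m) data array
    xmiss : length-m list of per-variable missing-value codes (NaN = none)
    """
    x = np.asarray(x, dtype=float)
    n, m = x.shape
    # w[i, j] is False where observation i of variable j equals its missing code
    w = np.ones_like(x, dtype=bool)
    for j, mv in enumerate(xmiss):
        if not np.isnan(mv):
            w[:, j] &= x[:, j] != mv
    r = np.eye(m)
    cnt = np.empty((m, m), dtype=int)
    for j in range(m):
        for k in range(m):
            use = w[:, j] & w[:, k]        # cases valid for BOTH variables
            cnt[j, k] = use.sum()
            if j == k or use.sum() < 2:
                continue
            xj, xk = x[use, j], x[use, k]  # means recomputed per pair, as in (c)
            sjk = np.sum((xj - xj.mean()) * (xk - xk.mean()))
            sjj = np.sum((xj - xj.mean()) ** 2)
            skk = np.sum((xk - xk.mean()) ** 2)
            r[j, k] = 0.0 if sjj == 0 or skk == 0 else sjk / np.sqrt(sjj * skk)
    return r, cnt
```

Note that, as in the definitions above, the means in the numerator and denominator of each $R_{jk}$ use only the cases valid for that particular pair, so different entries of `r` can be based on different case counts `cnt`.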
## References
None.
## Parameters
### Compulsory Input Parameters
1: x(ldx,m) – double array
ldx, the first dimension of the array, must satisfy the constraint $\mathit{ldx} \ge n$.
x(i,j) must be set to $x_{ij}$, the value of the $i$th observation on the $j$th variable, for $i = 1,2,\dots,n$ and $j = 1,2,\dots,m$.
2: miss(m) – int64/int32/nag_int array
m, the dimension of the array, must satisfy the constraint $m \ge 2$.
miss(j) must be set equal to 1 if a missing value, $xm_j$, is to be specified for the $j$th variable in the array x, or set equal to 0 otherwise. Values of miss must be given for all $m$ variables in the array x.
3: xmiss(m) – double array
m, the dimension of the array, must satisfy the constraint $m \ge 2$.
xmiss(j) must be set to the missing value, $xm_j$, to be associated with the $j$th variable in the array x, for those variables for which missing values are specified by means of the array miss (see Section [Accuracy]).
4: kvar(nvars) – int64/int32/nag_int array
nvars, the dimension of the array, must satisfy the constraint $2 \le \mathbf{nvars} \le m$.
kvar(j) must be set to the column number in x of the $j$th variable for which information is required, for $j = 1,2,\dots,p$.
Constraint: $1 \le \mathbf{kvar}(j) \le m$, for $j = 1,2,\dots,p$.
### Optional Input Parameters
1: n – int64/int32/nag_int scalar
Default: the first dimension of the array x.
$n$, the number of observations or cases.
Constraint: $n \ge 2$.
2: m – int64/int32/nag_int scalar
Default: the dimension of the arrays miss, xmiss and the second dimension of the array x. (An error is raised if these dimensions are not equal.)
$m$, the number of variables.
Constraint: $m \ge 2$.
3: nvars – int64/int32/nag_int scalar
Default: the dimension of the array kvar.
$p$, the number of variables for which information is required.
Constraint: $2 \le \mathbf{nvars} \le m$.
### Input Parameters Omitted from the MATLAB Interface
ldx, ldssp, ldr, ldcnt
### Output Parameters
1: xbar(nvars) – double array
The mean value, $\bar{x}_j$, of the variable specified in kvar(j), for $j = 1,2,\dots,p$.
2: std(nvars) – double array
The standard deviation, $s_j$, of the variable specified in kvar(j), for $j = 1,2,\dots,p$.
3: ssp(ldssp,nvars) – double array
$\mathit{ldssp} \ge \mathbf{nvars}$.
ssp(j,k) is the cross-product of deviations, $S_{jk}$, for the variables specified in kvar(j) and kvar(k), for $j = 1,2,\dots,p$ and $k = 1,2,\dots,p$.
4: r(ldr,nvars) – double array
$\mathit{ldr} \ge \mathbf{nvars}$.
r(j,k) is the product-moment correlation coefficient, $R_{jk}$, between the variables specified in kvar(j) and kvar(k), for $j = 1,2,\dots,p$ and $k = 1,2,\dots,p$.
5: ncases – int64/int32/nag_int scalar
The minimum number of cases used in the calculation of any of the sums of squares and cross-products and correlation coefficients (when cases involving missing values have been eliminated).
6: cnt(ldcnt,nvars) – double array
$\mathit{ldcnt} \ge \mathbf{nvars}$.
cnt(j,k) is the number of cases, $c_{jk}$, actually used in the calculation of $S_{jk}$ and $R_{jk}$, the sum of cross-products and correlation coefficient for the variables specified in kvar(j) and kvar(k), for $j = 1,2,\dots,p$ and $k = 1,2,\dots,p$.
7: ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).
## Error Indicators and Warnings
Note: nag_correg_coeffs_pearson_subset_miss_pair (g02bj) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.
ifail = 1
On entry, n < 2.
ifail = 2
On entry, nvars < 2, or nvars > m.
ifail = 3
On entry, ldx < n, or ldssp < nvars, or ldr < nvars, or ldcnt < nvars.
ifail = 4
On entry, kvar(j) < 1, or kvar(j) > m for some j = 1,2,…,nvars.
W ifail = 5
After observations with missing values were omitted, fewer than two cases remained for at least one pair of variables. (The pairs of variables involved can be determined by examination of the contents of the array cnt.) All means, standard deviations, sums of squares and cross-products, and correlation coefficients based on two or more cases are returned by the function even if ifail = 5.
## Accuracy
nag_correg_coeffs_pearson_subset_miss_pair (g02bj) does not use additional precision arithmetic for the accumulation of scalar products, so there may be a loss of significant figures for large $n$.
You are warned of the need to exercise extreme care in your selection of missing values. nag_correg_coeffs_pearson_subset_miss_pair (g02bj) treats all values in the inclusive range $(1 \pm 0.1^{(\mathbf{x02be}-2)}) \times xm_j$, where $xm_j$ is the missing value for variable $j$ specified in xmiss, as missing.
You must therefore ensure that the missing value chosen for each variable is sufficiently different from all valid values for that variable so that none of the valid values fall within the range indicated above.
## Further Comments
The time taken by nag_correg_coeffs_pearson_subset_miss_pair (g02bj) depends on $n$ and $p$, and the occurrence of missing values.
The function uses a two-pass algorithm.
## Example
```function nag_correg_coeffs_pearson_subset_miss_pair_example
x = [3, 3, 1, 2;
6, 4, -1, 4;
9, 0, 5, 9;
12, 2, 0, 0;
-1, 5, 4, 12];
miss = [int64(1);1;0;1];
xmiss = [-1;
0;
0;
0];
kvar = [int64(4);1;2];
[xbar, std, ssp, r, ncases, count, ifail] = ...
nag_correg_coeffs_pearson_subset_miss_pair(x, miss, xmiss, kvar)
```
```
xbar =
6.7500
7.5000
3.5000
std =
4.5735
3.8730
1.2910
ssp =
62.7500 21.0000 10.0000
21.0000 45.0000 -6.0000
10.0000 -6.0000 5.0000
r =
1.0000 0.9707 0.9449
0.9707 1.0000 -0.6547
0.9449 -0.6547 1.0000
ncases =
3
count =
4 3 3
3 4 3
3 3 4
ifail =
0
```
```function g02bj_example
x = [3, 3, 1, 2;
6, 4, -1, 4;
9, 0, 5, 9;
12, 2, 0, 0;
-1, 5, 4, 12];
miss = [int64(1);1;0;1];
xmiss = [-1;
0;
0;
0];
kvar = [int64(4);1;2];
[xbar, std, ssp, r, ncases, count, ifail] = g02bj(x, miss, xmiss, kvar)
```
```
xbar =
6.7500
7.5000
3.5000
std =
4.5735
3.8730
1.2910
ssp =
62.7500 21.0000 10.0000
21.0000 45.0000 -6.0000
10.0000 -6.0000 5.0000
r =
1.0000 0.9707 0.9449
0.9707 1.0000 -0.6547
0.9449 -0.6547 1.0000
ncases =
3
count =
4 3 3
3 4 3
3 3 4
ifail =
0
```
## Friday, 23 December 2016
### Home Heating IoT Project - Software planning
For this project I want to be able to control the room heating in my house either manually or on a timed system.
I decided to plan it using a flowchart, as I couldn't get my head around the programming easily without visualising it.
Here's the first draft and I'm pleased with how it turned out:
This image was made at lucidchart.com - so easy to use!
The earlier posts in this project can be found here: http://raspitech.blogspot.com/2016_10_01_archive.html
### Electracker - More energy consumption analysis
Now that I have a few weeks of data, I wanted to get daily graphs to look for patterns. I amended the original code to create them:
Not as useful as I'd hoped, but the average consumption per day is definitely interesting.
The coding is pretty terrible, sorry. If I was starting from scratch I'd use datetime module functions to do the date work and include some error handling to stop the program crashing when data is missing for an hour. Because I was pushed for time and had a working program already in place to modify, it was the quickest solution.
Code, such as it is, is here:
https://github.com/jcwyatt/electracker/blob/master/elecanalDailyGraphs.py
```
#program to analyse electricity use over time by hour.
#open a file
#for each hour:
#  bucket into hour
#  get total
#get average for each hour
#enhancements to follow: daily charts, date range charts, movie!

import csv
import os
import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt

startDay = 20
startMonth = 12

while True:
    elecByHour = []
    dailyTotal = 0
    for i in range(0, 24): #for each hour
        with open('feeds.csv', 'rb') as csvfile:
            rawelecdata = csv.DictReader(csvfile) #missing from the original listing
            j = 0
            hourelectot = 0
            for row in rawelecdata:
                if int(row['created_at'][11:13]) == i and int(row['created_at'][8:10]) == startDay and int(row['created_at'][5:7]) == startMonth:
                    hourelectot += float(row['field1'])
                    j += 1
            print(i, hourelectot/j, j) #useful for debugging
            elecByHour.append(hourelectot/j) #add average for the current hour to list
            #calculate average for day:
            dailyTotal = dailyTotal + hourelectot

    #plot the graph:
    dailyAverage = dailyTotal/24 #not defined in the original listing; used in the title below
    y = elecByHour
    N = len(y)
    x = range(N)
    width = 1/1.5
    plt.xlabel('Time of Day / hr')
    plt.ylabel('kW')
    plt.ylim((0, 6))
    plt.title('Consumption for ' + str(startDay) + '/' + str(startMonth) + ' Average = ' + str(dailyAverage) + 'kW')
    plt.bar(x, y, width, color="blue")
    plt.savefig('elecByHour' + str(startMonth) + str(startDay) + '.png')
    plt.close()
    #os.system('xviewer elecByHour.png &')
    startDay += 1
```
## Tuesday, 29 November 2016
### Pi Light and Movie Quote Alarm Clock
Sorry, no documentation on this one. Rough and ready test program.
Turns a blue LED on in the morning (and evening..don't ask) to wake me up, but also plays a random selection from a bank of quotes.
```
from gpiozero import LED
from time import sleep
import datetime
import os
import random

led = LED(4)
overRide = 'auto'
amOn = datetime.time(5, 50, 0)
amOff = datetime.time(6, 30, 0)
pmOn = datetime.time(21, 30, 0)
pmOff = datetime.time(22, 45, 0)
print(pmOff)
soundTrigger = 0

while True:
    while overRide == 'auto':
        timeNow = datetime.datetime.now().time()
        soundchoice = str(random.randint(1, 5))
        playsound = 'aplay ' + soundchoice + '.wav'
        if (pmOn < timeNow < pmOff) or (amOn < timeNow < amOff):
            led.on()
            print('led on')
            soundTrigger += 1
            if soundTrigger == 1:
                os.system(playsound)
        else:
            led.off()
            print('led off')
            soundTrigger = 0
        sleep(60)
```
### Live Energy Consumption in a 'Google Gauge'
The author can't find a way to make this visualisation public. Weird user interface at Thingspeak.
## Monday, 28 November 2016
### Electracker - Analysis of 2 weeks of data
Earlier this month I got serious about logging my home electrical energy consumption.
The Pi tasked with this job has run continuously for two weeks, happily logging data:
https://thingspeak.com/channels/182833
And is still doing so.
I wanted to analyse this data to get an average picture of the daily consumption, by hour:
A typical weekday looks like this:
The code is here:
https://raw.githubusercontent.com/jcwyatt/electracker/master/elecanal.py
The only thing I'd do to update this is to plonk the overall average consumption for the time period shown as a data box on the graph, e.g. "Average = 0.871 kW" for the main graph.
## Circles in Minecraft?
Using Minecraft Pi, you can write code to place blocks. This means, in theory, you can code objects you could never build accurately - like giant circles.
I had a go at some maths around this in a spreadsheet using the formula for a circle
$x^{2} + y^{2} = 8$
where x and y are coordinates.
It came out like this:
Which is circular if nothing else.
I don't like the gaps near the axes but with some clever programming (which I'll have to google!) I reckon eventually I could get rid of them.
#### Trig Method
I realised the numbers above follow 2 overlapping sine waves, so a quick search revealed this:
"A circle can be defined as the locus of all points that satisfy the equations
x = r cos(t) y = r sin(t)
where x,y are the coordinates of any point on the circle, r is the radius of the circle and
t is the parameter - the angle subtended by the point at the circle's center."
from http://www.mathopenref.com/coordparamcircle.html (there's an interactive applet)
So by carefully choosing the angular interval 't' you should be able to put blocks precisely where you want them in the circle.
Here's my first go in the spreadsheet:
You can see there are still gaps and the numbers in the columns are all over the place. Think the trig functions are working in radians but I entered angles in degrees.
To plot more points, I'd have to reduce the interval in column 1 to give more points between 0 and 2.
Imagine that in Minecraft!
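Before touching Minecraft, the parametric method can be tried in plain Python. A sketch (the function name, point count, and the rounding/de-duplication step are my own choices, not the spreadsheet's):

```python
import math

def circle_blocks(r, n_points):
    """Integer (x, z) block coordinates for a circle of radius r,
    using the parametric form x = r*cos(t), y = r*sin(t)."""
    pts = []
    for i in range(n_points):
        t = 2 * math.pi * i / n_points  # evenly spaced angles, no axis gaps
        pts.append((round(r * math.cos(t)), round(r * math.sin(t))))
    # drop duplicates where neighbouring angles round to the same block
    return sorted(set(pts))

# e.g. circle_blocks(8, 64) gives the block ring for a radius-8 circle
```

Working in radians and spacing the angles evenly is exactly what fixes the gaps near the axes that the x²+y² scan produced.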
I've a Pi 3 that I've just set up to run headless, so I'll move onto that next.....
The Results!
nested loop of expanding rising circles. Feels like an amphitheatre inside.
Simple cylinder - circles on top of circles
This one was weird - a cylinder with different block types in each layer. Unfortunately 2 of them were lava and water.
One massive circle.
The code for these:
```
from mcpi.minecraft import Minecraft
import math

mc = Minecraft.create()

#get players position
x, y, z = mc.player.getPos()
#move out of the way, there's something coming
mc.player.setPos(x, y + 50, z)

r = 20 #circle radius in blocks; not set in the original listing
for q in range(1, 4): #nested loop for when you want multiple circles
    for i in range(0, r*6): #sets how many blocks comprise the shape
        j = float(i)/(r*3) #gives max j of 2 (to give 2*pi radians for trig)
        blockXPos = r*math.cos(j*math.pi) #sets x coordinate of block
        blockZPos = r*math.sin(j*math.pi) #sets z coordinate of block
        mc.setBlock(blockXPos + x, y + q, blockZPos + z, 1) #places a block in the circle
```
You could use code like this to plot other mathematical functions. Even 3D ones:
It's something called a quadric surface:
Something about that makes me feel like I've earned a glass of wine!
## Tuesday, 15 November 2016
### Electracker - Domestic Energy Consumption Logger
I've written about this project before but I revived it after a visit to Jersey Tech Fair and meeting the nice chaps at Jersey Electricity. They tried to sell me 'Nest', a home heating automation system (they may have succeeded. Watch this space.) But we also discussed electricity consumption.
I've had a Pi attached to my electricity meter for over a year now, but doing nothing.
Now rekindled, it seems to be happily tracking my energy usage and logging the results at Thingspeak.
I'm going to leave it for a bit and then plan to do a frequency analysis on a few days worth of data.
https://thingspeak.com/channels/182833
I also added this widget to my phone:
So now I can see live updates of my energy consumption as long as I have wifi/3G:
The code is here at github:
https://github.com/jcwyatt/electracker/blob/master/electracker_bars.py
Things I've learnt:
1) The wifi connection was really flaky - an Edimax attached to a Pi Zero. I've replaced the Edimax with a bulky Netgear with an aerial, but this seems quite erratic too. Tried a better power supply. Seems slightly better now. The flakiness made it a real pain to SSH into the Pi and also to write to Thingspeak.
2) You can't have it all ways (yet). I was initially recording every interval between flashes, and getting a really accurate reading, but couldn't write this to Thingspeak as it took too long and caused missed readings and errors due to the delay.
Now I'm counting the number of flashes in a 2 minute period and writing a calculated consumption based on this to Thingspeak. Advantages are that it gives a consistent rate of readings, which looks great on Thingspeak, and at high consumption rates it should be very accurate, albeit as an average consumption over 2 minutes. However, at low consumption rates the accuracy drops off; the small number of flashes per 2 minute interval kills the resolution of the measurements.
Solution:
Some kind of hybrid where it records the interval between flashes and averages this out over 2 minutes, and then writes this average data to TS.
Should be possible:
<pseudocode>
while True:
Reset total time to 0
Reset flashes to 0
Wait for a flash
Start the timer
For 2 minutes:
Wait for a flash
stop the timer
record the flash interval
start the timer
flashes +=1
total time = total time + flash interval
wait 0.07s to check the LED is off again
repeat
average = totaltime/flashes
Consumption = 1.125/average
Write Consumption to Thingspeak
</pseudocode>
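Once the flash timestamps are in hand, the pseudocode above reduces to a little arithmetic. A sketch in plain Python, with the GPIO reading replaced by a list of flash times; the function name is my own, and the 1.125 constant follows the pseudocode, not tested hardware:

```python
def average_consumption(flash_times, meter_constant=1.125):
    """Average power over a window, from meter LED flash timestamps (seconds).

    Averages the flash intervals and converts to consumption, as in the
    pseudocode: Consumption = meter_constant / average interval.
    """
    if len(flash_times) < 2:
        return None  # not enough flashes to estimate an interval
    intervals = [b - a for a, b in zip(flash_times, flash_times[1:])]
    avg_interval = sum(intervals) / len(intervals)
    return meter_constant / avg_interval  # kW
```

At high consumption the intervals are short and plentiful; at low consumption the 2 minute window still averages whatever flashes did arrive, which is exactly the resolution gain the hybrid scheme is after.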
## Sunday, 30 October 2016
### Home Heating IoT Project - Interface Designed
The interface is now built:
It uses PHP to write to a CSV; the interface reads the current settings (in CSV format) where present and sets these as the default selections.
Ideally I'd like to use AJAX to update the info live.
The HTML and CSS are here at GitHub.
It's meant to be a mobile ready design but although it picks up the mobile version in a small desktop browser window, my phone displays only the desktop version.
## Saturday, 22 October 2016
### Home Heating IoT Project - Project Planning
Detailed Requirements:
1) each heater should have 2 programmable time slots, for morning and evening heating
2) it should be possible to turn each heater on or off manually from the interface
3) each heater should have a manual override to turn/stay off or on regardless of program.
4) there should be an override to turn all heaters off.
5) it should be possible to set the heat level (using mark space ratio)
6) it could be possible to link the heat level to outside temp.
7) the program data will be stored on a local csv file (or google sheet?)
8) there will be a php-based interface with the csv file to allow changes to be made, and the current status of the heater to be known.
9) a python program will manage the relays, using the csv file to determine required state of the output.
Schedule data (csv):
sleep (all off)
room1, start1, end1, start2, end2, rm1override on, rm1override off, heatlevel1
room2, start1, end1, start2, end2, rm2override on, rm2override off, heatlevel2
room3, start1, end1, start2, end2, rm3override on, rm3override off, heatlevel3
room4, start1, end1, start2, end2, rm4override on, rm4override off, heatlevel4
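Reading rows in that layout back into Python is straightforward. A hypothetical sketch — the field names below are my own labels for the columns listed above, not an existing file format:

```python
import csv
import io

# One label per column of the schedule rows sketched above (hypothetical names)
FIELDS = ['room', 'start1', 'end1', 'start2', 'end2',
          'override_on', 'override_off', 'heatlevel']

def load_schedule(text):
    """Parse schedule CSV text into a list of per-room dicts."""
    rows = csv.reader(io.StringIO(text))
    return [dict(zip(FIELDS, [cell.strip() for cell in row])) for row in rows]

schedule = load_schedule("room1, 06:00, 07:30, 18:00, 22:00, 0, 0, 3\n")
```

The Python control program (requirement 9) can then look up each room's dict once a minute and set the relay state accordingly.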
## Monday, 20 June 2016
### Today's Pi Jobs:
1) Prepare SD card for code club project so webcam can be fitted to the front of CamJam 3 robot. The kids have already built the bot and can remote control it via node-red. I used nginx for the first time with zero problems.
2) Reimage 4 new SD cards with latest Raspbian plus a few tweaks - pibrella and tightvncserver.
3) For home set up a camera on one pi that will send its pictures to another using a bash script and scp. The remote pi is feeding the pictures to the web.
I'll publish more details on these projects later.
## Thursday, 21 April 2016
### Sweet Peas
My wife planted some sweet peas and put them in the conservatory to get started.
I was sorting through my Pi bits and bobs and found the camera module and a project was born.
I made a camera rig from an old bit of wood and a section of bicycle brake cable, and used a pair of PoundWorld specs to get the focal length shorter*:
And found the code for the time lapse online here:
and only made slight amendments: https://github.com/jcwyatt/sweetpea2016/blob/master/sweetpea.py
After 11 days I copied the files across with scp and deleted all the files smaller than 150k (the dark night time pics) with this great command:
``find . -name "*.jpg" -size -150k -delete``
Then it was just a matter of using ffmpeg to make the images into a video. A quick search confirmed that it was possible to do this on filenames organised by dates:
`ffmpeg -framerate 5 -pattern_type glob -i '*.jpg' -c:v libx264 -r 30 sweetpea01.mp4`
5 images per second, 30fps video and it literally took a few seconds to create this:
All done headless and with the command line.
## Saturday, 2 April 2016
### Arduino - Acceleration Due to Gravity Testing Rig
I made the rig in the picture to measure acceleration due to gravity, just for fun.
The hardware is just a home made electromagnet powered through a ULN2003 (linking 2 channels to boost current). This was controlled using an Arduino Nano. I added an LED to indicate when the electromagnet was activated, and a button to activate it. The switch that detected the ball hitting the ground was also home made from a peg, some paperclips and tin foil:
The plan for the program was:
• Push a button on the rig to activate the electromagnet for long enough to attach a bearing but not so long it burns out the hardware (4 seconds seemed about right)
• Switch off the electromagnet automatically and simultaneously start a timer.
• When the pressure switch is activated by the ball hitting the ground, stop the timer.
• Work out the acceleration using a = 2s / t**2
• (s= distance fallen, t is the time)
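The calculation in the last two bullets is easy to sanity-check off the rig, e.g. in Python (the drop height and timing below are made-up illustration values, not my measured data):

```python
def g_from_drop(height_m, time_s):
    """Acceleration from a timed drop: a = 2s / t**2 (s = distance, t = time)."""
    return 2.0 * height_m / time_s ** 2

# e.g. a 1.0 m drop timed at 0.452 s gives roughly 9.79 m/s^2
```

If the switch triggers late (as with my erroneous values), the time grows and the computed acceleration drops, which matches the outliers seen on the rig.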
I started the code in Ardublock because it avoids typing errors, but the calculations refused to give a valid result even when the timing was giving valid readings. After a frustrating hour of trying to fix it within Ardublock, 3 minutes with the actual code in the Arduino IDE had it working well. The erroneous values are when the ball missed the switch and I had to trigger it manually.
The code:
https://github.com/jcwyatt/gravityfalls
## Monday, 28 March 2016
### AM2302 Temperature and Humidity Sensor - Bat Project #01 - Durrell, Jersey
In this post I'll detail how I fared with getting a AM2302 working with the RPi.
This is the first step in a project I'm working on for Durrell Wildlife Conservation Trust. As well as working on a Gorilla feeder with my friend Max, they were also interested in data logging for their bat house:
I bought a sensor from eBay for a few pounds.
From what I can understand, the output from these devices is a digital stream of bits containing the relevant data, but it is not in a regular format like I2C, therefore you have to download a special library. I followed the instructions from Adafruit and it worked really well.
I came across Thingspeak a few days ago, and I've been itching to try it. It's a great solution for the zoo project because the data goes straight to the web, is formatted and graphed automatically, and can be made public.
I found a great tutorial for linking python to thingspeak here:
http://www.australianrobotics.com.au/news/how-to-talk-to-thingspeak-with-python-a-memory-cpu-monitor
I just adapted the code and added it to the Adafruit example code. Resulting code is messy but worked really well.
I developed this using a model B+ but once it was running I shrunk it down onto a headless Pi Zero.
The thingspeak project site for the trial data (from my lounge) is here:
https://thingspeak.com/channels/103719
And you can embed the graphs using an iframe embed code. I presume this will update as data gets added, but I don't yet know for sure.
Hardware wise I set it up exactly as shown in the diagrams of the tutorial, but added a blue LED between the data and ground pins of the AM2302. It was a lucky guess and works nicely, flashing brightly as data is sent to the pi every minute.
(Sidenote: I've been trying out Node Red recently and I love it, but the instructions I could find online for getting the sensor to work in Node Red seem out of date and I ran into errors trying to install the nodes.)
The code:
```
#!/usr/bin/python
# Author: Tony DiCola
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import sys
import Adafruit_DHT  # not shown in the original listing, but needed below
import httplib, urllib
from time import sleep

# Map the command line sensor argument to the Adafruit_DHT sensor constants.
sensor_args = {'11': Adafruit_DHT.DHT11,
               '22': Adafruit_DHT.DHT22,
               '2302': Adafruit_DHT.AM2302}

# Parse command line parameters.
if len(sys.argv) == 3 and sys.argv[1] in sensor_args:
    sensor = sensor_args[sys.argv[1]]
    pin = sys.argv[2]
else:
    print 'usage: sudo ./Adafruit_DHT.py [11|22|2302] GPIOpin#'
    print 'example: sudo ./Adafruit_DHT.py 2302 4 - Read from an AM2302 connected to GPIO #4'
    sys.exit(1)

while True:
    # Try to grab a sensor reading. Use the read_retry method which will retry up
    # to 15 times to get a sensor reading (waiting 2 seconds between each retry).
    humidity, temperature = Adafruit_DHT.read_retry(sensor, pin)

    # Un-comment the line below to convert the temperature to Fahrenheit.
    # temperature = temperature * 9/5.0 + 32

    # Note that sometimes you won't get a reading and
    # the results will be null (because Linux can't
    # guarantee the timing of calls to read the sensor).
    # If this happens try again!
    if humidity is not None and temperature is not None:
        print 'Temp={0:0.1f}* Humidity={1:0.1f}%'.format(temperature, humidity)
    else:
        print 'Failed to get reading. Try again!'
        sys.exit(1)

    # Pushing data to Thingspeak
    params = urllib.urlencode({'field1': temperature, 'field2': humidity, 'key': '92P5L3PGPTT8ZE8N'})
    headers = {'Content-type': 'application/x-www-form-urlencoded', 'Accept': 'text/plain'}
    conn = httplib.HTTPConnection("api.thingspeak.com:80")
    conn.request("POST", "/update", params, headers)  # request call missing from the listing
    response = conn.getresponse()
    print response.status, response.reason
    conn.close()
    sleep(60)
```
## Sunday, 27 March 2016
### Node Red
I've had a play with Node Red for the first time tonight. In a matter of minutes I was able to operate an LED through the Node Red interface, something I'd been wanting to do for a long time.
It couldn't be much simpler to use. I could do quite a lot of things in Node Red after just one evening playing around. The web side of things is still puzzling for now, with websockets and HTTP the next nut to crack.
I reckon I could control my home heating now just using Node Red and the right hardware.
LATER THAT SAME EVENING........
Couldn't let it lie so I had another play. Now the status of the LED can be output to wherever, e.g. twitter:
Output:
Copy and paste the code below if you want to try it (minus the twitter feed) - I used pin 11 (GPIO 17) to drive the LED.
``````
``````
## Saturday, 27 February 2016
### Hacking a ridiculously cheap laser.
I bought a couple of really cheap lasers off of eBay for a project I wanted to try at codeclub (Mission impossible style burglar detector using LDRs and lasers). They were about £1.50 each from China. Like this one but cheaper.
They were not easy to use in the chassis they came in, having a momentary push button switch, and eating batteries, so I set about taking one apart.
The chassis needed a bit of hacksaw work to split, and then the controlling chip and laser came out as one. I jammed on the switch with a cable tie, and taped the +ve wire to the body with insulating tape. The spring on the chip is the -ve:
Not quite sure of the power requirements but I think I would power it through a ULN2003 and an external battery supply if I was to include it in a project. Claims to be 1mW with 3V input. My multimeter read 25mA being supplied from the arduino 3.3V pin.
Not sure what project to include it in.
### Robot Chassis - PoliceBot project
My son and I spent a bit of time this evening trying to get a robot chassis working.
My daughter and I built it a week or so ago:
This evening my son and I had a go at programming it to do something:
We used my new favourite programming tool - ardublock:
My son found some blue and white LEDs in my kit box, so we decided to make it into a police car.
It was really easy to get going.
I had a few problems with dodgy connections so we ended up soldering the power connections from the battery to the motors.
The next step is to activate the ultrasonic sensor on the front of the chassis.
## Tuesday, 23 February 2016
### Dodgy HC-SR04 Ultrasonic Distance Sensors
I've had a few issues trying to get ultrasonic sensors working recently.
I'd had no problems previously when I'd used Flowol software to get one working with the boat project, so initially I suspected it was me trying to use Ardublock that was the problem, or just lack of skilled wiring on my part.
Turns out it was 4 dodgy HC-SR04 I'd bought through Amazon from China.
I built the circuit and tried the NewPing sample sketch on one of the Amazon HC-SR04s. No reading on the serial monitor when using NewPing (well, actually a continuous zero reading).
I tried each of the sensors in turn, and as soon as I put an older SR04 I had lying around into the circuit, I immediately got good numbers in.
Good to isolate the problem, but it sucks to have to either return the parts or write them off and buy more.
At least knowing I had a good sensor let me try out the Ardublock program again, this time with some success, although it wouldn't read above 20cm.
## Tuesday, 16 February 2016
### Arduino with Bluetooth and ArduBlock
My daughter has a holiday project to build a robot, so we are adding Arduino controlled features.
I've tried to get bluetooth working with the arduino before with the boat project. But failed for whatever reason.
I recently discovered Ardublock, which is great way to get a program working without getting bogged down in the typing. It tops S4A because you can upload to the Arduino, and see the actual code it is creating:
I attached the HC-06 bluetooth chip to the arduino and loaded up the Bluetooth Serial Controller App.
Once the HC-06 was paired (easy to do if you're used to pairing speakers and stuff regularly) the app was able to send '5' to the Arduino and get the LED blinking.
Now to do a few more adventurous things with the robot.
## Thursday, 14 January 2016
### Tide Indicator Pi Project #9 - Calculation of Current Tide Completed (No bugs)
The last version I posted, v2.2, turned out not to work for all tide states, due to some maths errors. These have all been fixed and the code seems to work well.
I think I've finally got the hang of updating to GitHub, so here's the latest code:
tideproject/tidenow3.0.py
This is the output:
```
(datetime.datetime(2016, 1, 14, 21, 34, 13, 517280), u'10.5')
(datetime.datetime(2016, 1, 15, 4, 9, 13, 518325), u'1.9')
Tide is currently: falling
Tidal Range = -8.6
Current Tide : 9.55460304533
```
Next job is to have it running continuously and outputting to this webpage.
## Sunday, 3 January 2016
### Tide Indicator Pi Project #8 - Calculation of Current Tide Completed
The program below seems to work!
Output:
```('Next: ', (datetime.datetime(2016, 1, 3, 6, 18, 23, 116073), u'4.3'), ' is ', datetime.timedelta(0, 21180, 2472), ' away. /n Previous: ', (datetime.datetime(2016, 1, 2, 23, 49, 23, 115191), u'8.1'), ' was ', datetime.timedelta(0, 2159, 998410), ' ago.')
('Sum of both gaps is ', datetime.timedelta(0, 23340, 882))
('Tide is Currently: ', 'falling')
('tide difference = ', -3.8)
('lower tide value', 4.299999999999999)
('Normalised Time =', 2159, 23340, 0.29060405051843885)
0.958070971113
('Current tide : ', 7.940669690228617)
```
Code:
``````
#version 1.0
#This program pulls tide data from the ports of Jersey Website
#Under a licence from the UKHO
#
#It then calculates the current tide using a simplified sinusoidal harmonic approximation
#By finding the two tide data points either side of now and working out the current tide height
import urllib2
from bs4 import BeautifulSoup
from time import sleep
import datetime as dt
import math
#open site and grab html
url = 'http://...'  # tide-table URL not included in the post
rawhtml = urllib2.urlopen(url).read()
soup = BeautifulSoup(rawhtml, "html.parser")
#get the tide data (it's all in tags)
rawtidedata = soup.findAll('td')
#parse all data points (date, times, heights) to one big list
#format of the list is [day,tm,ht,tm,ht,tm,lt,tm,lt]
n=0
parsedtidedata=[]
for i in rawtidedata:
    parsedtidedata.append(rawtidedata[n].get_text())
    n += 1
#extract each class of data (day, time, height) to a separate list (there are 10 data items for each day)
tidetimes=[]
tideheights=[]
tideday=[]
lastdayofmonth=int(parsedtidedata[-10])
for n in range(0,lastdayofmonth*10,10):
    tideday.append(parsedtidedata[n])
    tidetimes.extend([parsedtidedata[n+1],parsedtidedata[n+3],parsedtidedata[n+5],parsedtidedata[n+7]])
    tideheights.extend([parsedtidedata[n+2],parsedtidedata[n+4],parsedtidedata[n+6],parsedtidedata[n+8]])
#get time now:
currentTime = dt.datetime.now()
#create a list of all the tide times as datetime objects:
dtTideTimes=[]
tideDataList=[]
for j in range (0,lastdayofmonth*4):
    #print tidetimes[j][0:2], tidetimes[j][3:6]
    if tidetimes[j]=='**':
        dtTideTimes.append('**')
    else:
        dtTideTimes.append(dt.datetime.now().replace(day=int(j/4+1), hour=int(tidetimes[j][0:2]), minute=int(tidetimes[j][3:5])))
    #make a tuple for each data point and add it to a list
    tupleHolder =(dtTideTimes[j], tideheights[j])
    tideDataList.append(tupleHolder)
    #print what we've got so far
    # print tideDataList[j]
#find the two closest times in the list to now:
gap1 = abs(tideDataList[0][0] - currentTime)
gap2 = abs(tideDataList[0][0] - currentTime)
nearest1 = tideDataList[0]
nearest2 = tideDataList[0]  # initialise so nearest2 always exists
#print gap1
for j in range (0,lastdayofmonth*4):
    if (tideDataList[j][0] !="**"):
        gapx = abs(tideDataList[j][0] - currentTime)
        #check if the data point is the first or second nearest to now.
        #Generates the datapoints either side of now
        if (gapx <= gap1):
            nearest1 = tideDataList[j]
            gap1 = gapx
        if (gap1 < gapx and gapx <= gap2):
            nearest2 = tideDataList[j]
            gap2 = gapx
#print (nearest1, gap1)
#print (nearest2, gap2)
#print (gap1+gap2)
#and now the maths begins
#print ('tide height 1 = ', nearest1[1])
#print ('tide height 2 = ', nearest2[1])
#need to get them in order of time: (this works)
if nearest1[0] > nearest2[0]:
    nextDataPoint = nearest1
    prevDataPoint = nearest2
    gapToNext = gap1
    gapToPrev = gap2
else:
    nextDataPoint = nearest2
    prevDataPoint = nearest1
    gapToNext = gap2
    gapToPrev = gap1
gapSum = gapToNext + gapToPrev
print('Next: ', nextDataPoint,' is ',gapToNext, ' away. /n Previous: ', prevDataPoint, ' was ', gapToPrev, ' ago.')
print('Sum of both gaps is ', gapSum) #this works
#is the tide rising or falling?
tideDifference = float(nextDataPoint[1])-float(prevDataPoint[1])
if (tideDifference < 0):
    tideState = "falling"
else:
    tideState = "rising"
print ('Tide is Currently: ', tideState)
print ('tide difference = ', tideDifference) #this works
lowerTide = (float(nearest1[1]) + float(nearest2[1]) - abs(tideDifference))/2
print ('lower tide value', lowerTide)
#normalised time runs from 0 to pi across the gap between the two data points
normalisedTime = math.pi * gapToPrev.seconds / gapSum.seconds
print ('Normalised Time =', gapToPrev.seconds, gapSum.seconds, normalisedTime)
print (math.cos(normalisedTime))
#scale the cosine to get the current tide value
if tideState == "falling":
    currentTide = lowerTide + abs(tideDifference) * math.cos(normalisedTime)
else:  # rising branch reconstructed from context; the original was garbled by the blog's HTML escaping
    currentTide = lowerTide + abs(tideDifference) * math.cos(math.pi - normalisedTime)
print ('Current tide : ', currentTide)
``````
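The "simplified sinusoidal harmonic approximation" the comments describe boils down to a cosine blend between the two extremes. Here is a minimal sketch of the textbook version (function name is mine; note the post's v1.0 code uses a slightly different scaling, which the author says was fixed in a later version):

```python
import math

def interp_tide(prev_height, next_height, elapsed, gap):
    """Cosine interpolation between two tide extremes.

    prev_height/next_height: heights (m) at the previous and next extreme;
    elapsed: seconds since the previous extreme; gap: seconds between extremes.
    """
    mean = (prev_height + next_height) / 2.0
    amplitude = (prev_height - next_height) / 2.0
    # cos goes from 1 to -1 as elapsed goes from 0 to gap,
    # sweeping the height smoothly from prev_height to next_height
    return mean + amplitude * math.cos(math.pi * elapsed / gap)

print(interp_tide(8.1, 4.3, 0, 23340))      # height at the previous extreme
print(interp_tide(8.1, 4.3, 23340, 23340))  # height at the next extreme
```

At the midpoint of the gap this returns the mean of the two extremes, which is what a symmetric tide curve should do.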
## Saturday, 2 January 2016
### Tide Indicator Pi Project #7 - Finding the two tide data points nearest to the current time.
This project is taking ages! I've done a lot since the last post, but documented very little, so I'll do my best to recall how I got from there to here. You can see all the posts so far here.
The problem in a nutshell: The program needs to get the two tide data points either side of the current time, to work out what the tide is doing now.
Since the last post, the code has been modified to create a list of tuples, with each tuple having two data points (tide time, tide height)
It then works out the gap between each data point and the current time, and tries to store the two nearest times as 'nearest1' and 'nearest2'. Sometimes it works:
Time Now:
2016-01-02 16:33
Output:
(datetime.datetime(2016, 1, 2, 17, 52, 40, 854958), u'4.0'),
(datetime.datetime(2016, 1, 2, 11, 9, 40, 854071), u'8.4')
Sometimes it doesn't and misses a point.
``````#
import urllib2
from bs4 import BeautifulSoup
from time import sleep
import datetime as dt
#open site and grab html
url = 'http://...'  # tide-table URL not included in the post
rawhtml = urllib2.urlopen(url).read()
soup = BeautifulSoup(rawhtml, "html.parser")
#get the tide data (it's all in tags)
rawtidedata = soup.findAll('td')
#parse all data points (date, times, heights) to one big list
#format of the list is [day,tm,ht,tm,ht,tm,lt,tm,lt]
n=0
parsedtidedata=[]
for i in rawtidedata:
    parsedtidedata.append(rawtidedata[n].get_text())
    n += 1
#extract each class of data (day, time, height) to a separate list (there are 10 data items for each day):
tidetimes=[]
tideheights=[]
tideday=[]
lastdayofmonth=int(parsedtidedata[-10])
for n in range(0,lastdayofmonth*10,10):
    tideday.append(parsedtidedata[n])
    tidetimes.extend([parsedtidedata[n+1],parsedtidedata[n+3],parsedtidedata[n+5],parsedtidedata[n+7]])
    tideheights.extend([parsedtidedata[n+2],parsedtidedata[n+4],parsedtidedata[n+6],parsedtidedata[n+8]])
#get time now:
currentTime = dt.datetime.now()
#create a list of all the tide times as datetime objects:
dtTideTimes=[]
tideDataList=[]
for j in range (0,lastdayofmonth*4):
    #print tidetimes[j][0:2], tidetimes[j][3:6]
    if tidetimes[j]=='**':
        dtTideTimes.append('**')
    else:
        dtTideTimes.append(dt.datetime.now().replace(day=int(j/4+1), hour=int(tidetimes[j][0:2]), minute=int(tidetimes[j][3:5])))
    #create a tuple of time and height, and add each tuple to a list
    tupleHolder =(dtTideTimes[j], tideheights[j])
    tideDataList.append(tupleHolder)
#print what we've got so far
for j in range (0,lastdayofmonth*4):
    print tideDataList[j]
#find the two closest data points to now in the list:
gap1 = abs(tideDataList[0][0] - currentTime)
nearest1 = tideDataList[0]
nearest2 = tideDataList[0]  # initialise so nearest2 always exists
print gap1
for j in range (0,lastdayofmonth*4):
    if (tideDataList[j][0] !="**"):
        gap2 = abs(tideDataList[j][0] - currentTime)
        print tideDataList[j][0], gap2, nearest1
        if (gap2 < gap1):
            nearest2 = nearest1
            nearest1 = tideDataList[j]
            gap1 = gap2
print (nearest1, nearest2)
#this nearly works!!! Gave the two nearest high tides, not nearest high and low.
``````
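A more robust way to find the two data points either side of now is to keep the list sorted by time and bisect it, rather than scanning for the two smallest gaps (which, as the post notes, can return two points on the same side). A sketch, with illustrative names:

```python
import bisect
import datetime as dt

def find_bracketing(tide_data, now):
    """tide_data: list of (datetime, height) tuples sorted by datetime.
    Returns the (previous, next) data points straddling 'now'."""
    times = [t for t, _ in tide_data]
    i = bisect.bisect_left(times, now)
    if i == 0 or i == len(times):
        raise ValueError("'now' falls outside the tide table")
    return tide_data[i - 1], tide_data[i]

# Sample data matching the post's output for 2 January 2016:
tide_data = [
    (dt.datetime(2016, 1, 2, 11, 9), '8.4'),
    (dt.datetime(2016, 1, 2, 17, 52), '4.0'),
    (dt.datetime(2016, 1, 2, 23, 49), '8.1'),
]
prev_pt, next_pt = find_bracketing(tide_data, dt.datetime(2016, 1, 2, 16, 33))
print(prev_pt[1], next_pt[1])  # heights either side of 16:33: 8.4 and 4.0
```

Because the bracketing points are taken as neighbouring list entries, this always returns one point before and one after the current time.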
### Portable Power
http://uk.rs-online.com/web/p/lithium-rechargeable-battery-packs/7757504/
Powers a Raspberry Pi with 5V for £8 from RS
|
{}
|
# automorphism, endomorphism, isomorphism, homomorphism within $\mathbb{Z}$
From Wikipedia: An invertible endomorphism of $$X$$ is called an automorphism. The set of all automorphisms is a subset of $$\mathrm{End}(X)$$ with a group structure, called the automorphism group of $$X$$ and denoted $$\mathrm{Aut}(X)$$. In the following diagram, the arrows denote implication:
Can we give some examples using the integer $$\mathbb{Z}$$ group (with a closed additive structure, the inverse, the identity 0, and the associative; and also commutative as an abelian group) which satisfy some of the above ---
Please fulfill or correct the following if I am wrong:
1. The map $$\mathbb{Z} \mapsto \mathbb{Z}/2\mathbb{Z}$$ (via $$k \in \mathbb{Z}$$ maps to $$k \bmod 2 \in \mathbb{Z}/2\mathbb{Z}$$) is a homomorphism, but not the others (not an isomorphism, endomorphism, or automorphism).
2. The map $$\mathbb{Z} \mapsto -\mathbb{Z}$$ (via $$k \in \mathbb{Z}$$ maps to $$-k \in \mathbb{Z}$$) is an endomorphism and also an isomorphism (thus also a homomorphism), but not an automorphism.
$$\color{red}{\text{But } k \in \mathbb{Z} \text{ maps to } -k \in \mathbb{Z} \text{ is invertible, so is it also an automorphism?}}$$
1. The map $$\mathbb{Z} \mapsto 2 \mathbb{Z}$$ (via $$k \in \mathbb{Z}$$ maps to $$2 k \in \mathbb{Z}$$) is an isomorphism (thus also homomorphism), but not endomorphism nor automorphism. Am I correct?
2. The map $$\mathbb{Z} \mapsto \mathbb{Z}$$ (via $$k \in \mathbb{Z}$$ maps to $$k \in \mathbb{Z}$$) is an automorphism (thus also endomorphism and also isomorphism, homomorphism). Am I correct?
Last Question:
• Are there examples of homomorphism maps within $$\mathbb{Z}$$ to itself or subgroup such that it is endomorphism but not isomorphism?
p.s. The automorphism of the group $$\mathbb{Z}$$ is Aut = $$\mathbb{Z}$$/2$$\mathbb{Z}$$, I believe.
• Please ask one question at a time. – Shaun May 6 at 21:27
• $-\mathbb{Z}$ and $\mathbb{Z}$ are exactly the same thing. So number $2$ is an automorphism. Number $3$ is an endomorphism which is not an automorphism, because $2\mathbb{Z}$ is a subgroup of $\mathbb{Z}$. – Mark May 6 at 21:27
• @annie marie heart Because number $3$ is invertible as a map $\mathbb{Z}\to 2\mathbb{Z}$, not as a map $\mathbb{Z}\to\mathbb{Z}$. It is indeed an isomorphism between $\mathbb{Z}$ and $2\mathbb{Z}$, but it is not an automorphism. – Mark May 6 at 22:01
• An endomorphism which is also an isomorphism (when considered with the same codomain) is automatically an automorphism. That's the definition of an automorphism. – Arthur May 6 at 22:15
• For number 3, the map ℤ→2ℤ (with a codomain ℤ) is only injective but not surjective; thus not bijective. But for isomorphism, we need to have a bijective homomorphism. So ℤ→2ℤ is NOT isomorphism? (especially to @Torsten Schoeneberg) – annie marie cœur May 6 at 22:36
Some or all of your questions were answered by our astute commentors.
Yes, there is one endomorphism of $$\mathbb Z$$ which is not an isomorphism: the trivial one. Otherwise, depending on where we send a generator, $$\pm1$$, we get an endomorphism and an isomorphism. (Recall $$\mathbb Z$$ is cyclic, and homomorphisms on cyclic groups are determined by where you send a generator.) Thus we see that for any $$n\ne0$$, we have $$\mathbb Z\cong n\mathbb Z$$. If, and only if, we send a generator to a generator, we get an automorphism. Thus there are only two automorphisms. So $$\rm {Aut}(\mathbb Z)\cong\mathbb Z_2$$.
(Mind you we are talking about $$\mathbb Z$$ as a group here, not as a ring. That's a whole different discussion. A quite interesting one at that: $$\mathbb Z$$ is an initial object in the category $$\bf {Ring}$$ of rings, meaning our hand is forced and there's only one homomorphism from $$\mathbb Z$$ to $$\mathcal R$$, for any other ring (with unit). You'll pardon this diversion into Category Theory but, if we relax to the categories of semirings, $$\bf {Rig}$$, or pseudorings, $$\bf {Rng}$$ , then, analogous to the situation in $$\bf {Grp}$$, we no longer have an initial object.)
• +1. Thanks so much, I posted an answer to see whether people also agree. – annie marie cœur May 7 at 4:26
• Is there any nontrivial map example such that for (1) homomorphism, (2) isomorphism, (3) endomorphism, and (4) automorphism, we get (1) O, (2) O, (3) X, (4) X? (see below) – annie marie cœur May 7 at 4:26
• In your setup everything is an endomorphism, that's just a map from the space back into itself. – user403337 May 7 at 4:43
• Can you give another set up (not my ℤ↦ℤ) such that for (1) homomorphism, (2) isomorphism, (3) endomorphism, and (4) automorphism, we get (1) O, (2) O, (3) X, (4) X? (see below) – annie marie cœur May 7 at 4:53
• What about an isomorphism $\mathbb Z\cong2\mathbb Z$? – user403337 May 7 at 4:57
Below in our examples, we consider the group homomorphism between $$\mathbb{Z} \mapsto \mathbb{Z}$$ where the first $$k \in \mathbb{Z}$$ forms the domain, the second $$\mathbb{Z}$$ is the codomain, and $$f(k)$$ is the image.
We ask whether it is (1) homomorphism, (2) isomorphism, (3) endomorphism, and (4) automorphism
We will give "O" if it is true. We will give "X" if it is false.
1. The domain $$k \in \mathbb{Z}$$ maps to the image $$f(k)=0$$. It is not injective nor surjective over codomain.
$$\text{(1) O, (2) X, (3) O, (4) X.}$$
1. The domain $$k \in \mathbb{Z}$$ maps to the image $$f(k)=k \mod N \in \mathbb{Z}/N\mathbb{Z}$$, where $$N$$ can be some integer. In fact, the image $$\mathbb{Z}/N\mathbb{Z}$$ is not a subgroup of codomain. So we cannot consider endomorphism. It is not injective but it is surjective over image.
$$\text{(1) O, (2) X, (3) X, (4) X.}$$
1. The domain $$k \in \mathbb{Z}$$ maps to the image $$f(k)= N k\in N\mathbb{Z}$$, where $$N$$ can be some integer but $$N \neq \pm 1$$. In fact, the image $$N\mathbb{Z}$$ is a subgroup of codomain. So it is an endomorphism. It is injective and also surjective over image. It is injective but not surjective over the codomain.
$$\text{(1) O, (2) O, (3) O, (4) X.}$$
1. The domain $$k \in \mathbb{Z}$$ maps to the image $$f(k)= \pm k\in \mathbb{Z}$$. In fact, the image $$\mathbb{Z}$$ is the codomain. It is injective and also surjective over image. It is injective and surjective over the codomain.
$$\text{(1) O, (2) O, (3) O, (4) O.}$$
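These classifications can be sanity-checked numerically on a sample of integers. A quick sketch (helper names are mine, not from the thread):

```python
def is_homomorphism(f, samples):
    """Check f(a + b) == f(a) + f(b) on all sample pairs (additive groups)."""
    return all(f(a + b) == f(a) + f(b) for a in samples for b in samples)

samples = range(-10, 11)
double = lambda k: 2 * k   # case 3 with N = 2
neg = lambda k: -k         # case 4

print(is_homomorphism(double, samples))             # True
# double misses every odd integer, so it is not surjective onto Z:
print(any(2 * k == 3 for k in range(-1000, 1000)))  # False
# neg is its own inverse, hence a bijection Z -> Z, i.e. an automorphism:
print(all(neg(neg(k)) == k for k in samples))       # True
```

Of course a finite sample cannot prove anything about all of $$\mathbb{Z}$$; it only illustrates the table above.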
|
{}
|
# JEE Main & Advanced Mathematics Matrices Positive Integral Powers of a Matrix
## Positive Integral Powers of a Matrix
Category : JEE Main & Advanced
The positive integral powers of a matrix $A$ are defined only when $A$ is a square matrix.
Then $A^2 = A.A$, $A^3 = A.A.A = A^2 A$, and so on.
Also, for any positive integers $m$ and $n$,
(i) $A^m A^n = A^{m+n}$
(ii) $(A^m)^n = A^{mn} = (A^n)^m$
(iii) $I^n = I$, $I^m = I$
(iv) $A^0 = I_n$, where $A$ is a square matrix of order $n$.
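These rules are easy to check numerically for a sample matrix. A small sketch in plain Python (no library assumed):

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, m):
    """A^m for a non-negative integer m; A^0 is the identity."""
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(m):
        result = matmul(result, A)
    return result

A = [[1, 2], [3, 4]]
print(matpow(A, 5) == matmul(matpow(A, 2), matpow(A, 3)))  # rule (i): True
print(matpow(A, 6) == matpow(matmul(A, A), 3))             # rule (ii): True
print(matpow(A, 0))                                        # rule (iv): [[1, 0], [0, 1]]
```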
|
{}
|
# About Green's function in time dependent schrodinger equation
1. Jul 18, 2012
### wphysics
While I was studying Ch 2.5 of Sakurai, I have a question about Green's function in time dependent schrodinger equation. (Specifically, page 110~111 are relevant to my question)
Eq (2.5.7) and Eq (2.5.12) of Sakurai say
$\psi(x'',t) = \int d^3x' K(x'',t;x',t_0)\psi(x',t_0)$
and
$\left(H-i\hbar\frac{\partial}{\partial t}\right)K(x'',t;x',t_0) = -i\hbar\delta^3(x''-x')\delta(t-t_0)$
We know from the basic Schrodinger equation
$\left(H-i\hbar\frac{\partial}{\partial t}\right)\psi(x,t) = 0$
So, I applied the differential operator to Eq (2.5.7) and use Eq(2.5.12). But, I couldn't get the right Schrodinger equation like this.
$\left ( H - i \hbar \frac{\partial}{\partial t}\right ) \psi (x'',t) = \left ( H - i \hbar \frac{\partial}{\partial t}\right ) \int dx' K(x'',t;x',t_0) \psi(x',t_0) = \int \left [ \left ( H - i \hbar \frac{\partial}{\partial t}\right ) K(x'',t;x',t_0) \right ] \psi(x',t_0) dx' = -i \hbar \int \psi(x', t_0) \delta(x''-x') \delta(t-t_0)dx'$
$=-i \hbar \psi(x'',t_0) \delta(t-t_0)$
which is non zero at t=t_0
What is the point that I am missing?
2. Jul 18, 2012
### TSny
Note the boundary condition on K as given in equation (2.5.13). Thus equation (2.5.7) is only valid for $t > t_0$. For $t < t_0$, the right hand side of equation (2.5.7) will yield 0 because of (2.5.13). To make (2.5.7) valid for both $t > t_0$ and $t < t_0$, you can introduce the step function $\theta(t-t_0)$ and write (2.5.7) as
$\theta(t-t_0)\,\psi(x'', t) =$ [same right hand side as before]
This equation now incorporates the boundary condition on K.
See if everything works out if you apply $\left(H - i\hbar\frac{\partial}{\partial t}\right)$ to both sides of this equation.
Last edited: Jul 18, 2012
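Carrying out that check (a sketch consistent with the equations in the thread, using $\frac{\partial}{\partial t}\theta(t-t_0)=\delta(t-t_0)$), the left hand side gives

```latex
\left(H - i\hbar\frac{\partial}{\partial t}\right)\left[\theta(t-t_0)\,\psi(x'',t)\right]
  = \theta(t-t_0)\,\underbrace{\left(H - i\hbar\frac{\partial}{\partial t}\right)\psi(x'',t)}_{=\,0}
    \;-\; i\hbar\,\delta(t-t_0)\,\psi(x'',t_0)
```

which matches the $-i\hbar\,\psi(x'',t_0)\,\delta(t-t_0)$ obtained above by applying the operator to the right hand side of (2.5.7), so with the step function included the two sides agree.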
|
{}
|
Rearrange the following sentences to form a coherent paragraph.
1. This has huge implications for the health care system as it operates today, where depleted resources and time lead to patients rotating in and out of doctor's offices, oftentimes receiving minimal care or concern (what is commonly referred to as "bed side manner") from doctors.
2. The placebo effect is when an individual's medical condition or pain shows signs of improvement based on a fake intervention that has been presented to them as a real one and used to be regularly dismissed by researchers as a psychological effect.
3. The placebo effect is not solely based on believing in treatment, however, as the clinical setting in which treatments are administered is also paramount.
4. That the mind has the power to trigger biochemical changes because the individual believes that a given drug or intervention will be effective could empower chronic patients through the notion of our bodies capacity for self-healing.
5. Placebo effects are now studied not just as foils for "real" interventions but as a potential portal into the self-healing powers of the body.
1. $25431$
2. $32415$
3. $42351$
4. $54231$
|
{}
|
# Show norms are equiv. on $C^1[a,b]$: $\Vert f\Vert _1=\Vert f \Vert_{\infty}+\Vert f' \Vert_{\infty},\Vert f \Vert_2=|f(a)|+\Vert f' \Vert_{\infty}$
Here is what I got as a proof. My question is at the end. Thanks
On $C^1[a,b]$ we have the norms $$\Vert f\Vert _1 = \Vert f \Vert_{\infty} + \Vert f' \Vert_{\infty},\quad \Vert f \Vert_2 = |f(a)| + \Vert f' \Vert_{\infty}.$$ We will show these are equivalent. Here $\Vert f \Vert_{\infty} = \sup\limits_{x\in [a,b]}|f(x)|$. In particular, by the definition of the supremum, we have $|f(a)| \leq \sup\limits_{x\in [a,b]}|f|$ so that it follows that for $M=1$ we have $$\Vert f \Vert_2 =|f(a)| + \Vert f' \Vert_{\infty} \leq M\left( \Vert f \Vert_{\infty} + \Vert f' \Vert_{\infty} \right) = M\Vert f\Vert _1.$$ Now we want to find a number $m$ such that $m \Vert f\Vert _1 \leq \Vert f\Vert _2$. let $\beta = \inf\limits_{x\in [a,b]}|f(x)|$ and $\alpha =\sup\limits_{x\in [a,b]}|f(x)|$ and set $m= \frac{\beta }{\alpha + 1}\leq 1$. Then we have that $m \Vert f \Vert _{\infty} = m\alpha = \frac{\alpha\beta}{\alpha + 1} < \beta \leq |f(a)|$. Also, as $m\leq 1$, we see that $m\Vert f' \Vert_{\infty}\leq \Vert f' \Vert_{\infty}$ so that $m\Vert f\Vert_{1} = m\Vert f \Vert_{\infty}+m\Vert f' \Vert_{\infty} \leq |f(a)| + \Vert f' \Vert_{\infty} = \Vert f \Vert_2$. So we have found an $M$ and $m$ such that $$m\Vert f \Vert_1 \leq \Vert f \Vert _2 < M \Vert f\Vert _1.$$
My concern with this proof is that for many functions we have $\inf\limits_{x\in[a,b]}|f(x)|=0$ and thus $m=0$. I think this is a problem. Is there any way around this or can anyone see another $m$ that will work? Thanks in advance!
• Do you mean your space is $C^1[a,b]$ ? A continuous function may have no derivative. – Stop hurting Monica Apr 30 '14 at 20:56
• You need an $m$ that is independent of $f$, so its definition cannot really use $f$. Can you bound $\lvert f(x)\rvert$ using only $\lVert f\rVert_2$? – Daniel Fischer Apr 30 '14 at 20:56
• You may use mean value theorem to bound $||f||_\infty$ using $|f(a)|$ and $||f'||_\infty$ ? – Stop hurting Monica Apr 30 '14 at 21:00
• @Jean-ClaudeArbaut Yes, we may use it. Also, you are right, I did mean $C^1[a,b]$ thanks for the spot! – Slugger Apr 30 '14 at 21:08
For any $x\in[a,b]$, the Mean Value Theorem gives you $f(x)=f(a)+f'(c)\,(x-a)$, for some $c\in[a,b]$. Then $$\|f\|_\infty\leq|f(a)|+\|f'\|_\infty\,(b-a)$$ Letting $k=1+b-a$, $$\|f\|_\infty+\|f'\|_\infty\leq k\,(|f(a)|+\|f'\|_\infty).$$ So you can take $m=\frac1{1+b-a}$.
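The Mean Value Theorem bound can be illustrated numerically on a sampled grid (a sketch; the discrete maximum only approximates the true sup, and the names are mine):

```python
import math

a, b = 0.0, 2.0
xs = [a + (b - a) * i / 1000.0 for i in range(1001)]
f, fprime = math.sin, math.cos

sup_f  = max(abs(f(x)) for x in xs)        # approximates ||f||_inf
sup_fp = max(abs(fprime(x)) for x in xs)   # approximates ||f'||_inf

# ||f||_inf <= |f(a)| + ||f'||_inf * (b - a), which is what gives
# ||f||_1 <= (1 + b - a) * ||f||_2, i.e. m = 1/(1 + b - a)
print(sup_f <= abs(f(a)) + sup_fp * (b - a))  # True
```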
|
{}
|
### Home > APCALC > Chapter 5 > Lesson 5.4.1 > Problem5-141
5-141.
The graph below of $y = f ^\prime(x)$, the derivative of some function $f$, is composed of straight lines and a semicircle. Determine the values of $x$ for which $f$ has local minima, maxima, and points of inflection over the interval $[–3, 3]$.
You are looking at the graph of f '(x), but you are being asked to describe the graph of f(x).
A minimum value on f(x) is where the y-values change from decreasing to increasing. How does that show up on the f '(x) graph?
Local minima on f(x) are located anywhere that the graph of f '(x) changes from negative to positive. Local maxima are the reverse.
Points of inflection on f(x) occur where f '(x) changes the sign of its slope. This happens twice.
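The sign-change hints translate directly into a check on sampled values of $f'$. A sketch with hypothetical data (the book's actual graph is not reproduced here):

```python
# Hypothetical samples of f'(x) on a grid; NOT the graph from the problem.
xs = [-3, -2, -1, 0, 1, 2, 3]
fp = [ 2,  1, -1, -2, 1, 2, 1]

# Local minima of f: f' changes from negative to positive.
minima = [xs[i] for i in range(1, len(fp)) if fp[i - 1] < 0 < fp[i]]
# Local maxima of f: f' changes from positive to negative.
maxima = [xs[i] for i in range(1, len(fp)) if fp[i - 1] > 0 > fp[i]]
print(minima, maxima)  # [1] [-1]
```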
|
{}
|
A proposal to simplify the notation of EDOs with bad fifths
Dave Keenan
Posts: 882
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:
Re: A proposal to simplify the notation of EDOs with bad fifths
Here's another way of looking at the small-EDO notation problem. I've added lines of constant fifth-size. A JI-based notation can also be an apotome-fraction notation that's designed for fifths of a specific size (or a small range of sizes), so there's no real distinction between apotome-fraction and JI-based notations. Only the limma-fraction notations (the red region) are fundamentally different.
Small Edos 2.png
While apotome-fraction notations are possible for 6 8 13 18, and a limma-fraction notation is possible for 11, I think the default notations for those should be subset notations (of 12, 24, 26, 36 and 22 respectively).
For 9 16 23, I think sagittal.pdf should give (a) limma-fraction notations, (b) subset notations and (c) mavila-(linear temperament)-based notations, however I think the default notation should be limma-fraction, as it should be for 7 14 21 28 35.
George Secor
Posts: 31
Joined: Tue Sep 01, 2015 11:36 pm
Location: Godfrey, Illinois, US
Re: A proposal to simplify the notation of EDOs with bad fifths
Dave Keenan wrote:Well George, we've slept on it for 9 months now, and there has recently been a request on one of the facebook groups, for someone to add the corresponding Sagittal notation(s) to every EDO entry in the Xenharmonic Wiki. So we really ought to decide whether these will become the new standard notations for these poor-fifth EDOs, and update figures 8 and 9 on pages 16 and 17 of Sagittal.pdf (the updated Xenharmonikon journal article) accordingly.
Since no one else is arguing, I suggest that we both attempt to come up with reasons why the existing notations for these EDOs should not be changed, or should be changed in ways different from this proposal. i.e. play devil's advocate. For this purpose, it is useful to repost this diagram.
There are eleven existing native-fifth notations that would change under this proposal. These can be grouped as follows. You should locate each group on the above diagram.
Near-superpythagorean (amber): 27, 49 (also includes 54 (2x27) and 71, which don't presently have native fifth notations)
Near-meantone (red): 26, 45, 64 (also includes 52 (2x26), which doesn't presently have a native fifth notation)
Narrow fifths with one step per apotome (red): 33, 40, 47
Mavila, -1 step per apotome (red): 9, 16, 23
I note that 27 is not simplified by this proposal, since 1\27 changes from the spartan to the non-spartan . Nor is 26 simplified, as it goes from being notated only with sharps and flats (apotomes), to requiring spartan symbols (for limma fractions).
One could argue that the blue area on the diagram (JI-based notations) should be expanded to include the first two categories above. This would change the boundaries, in fifth sizes, from +-7.5 c of just, to +-10 c of just. We might continue to show how apotome and limma fraction notations can be defined for those with fifth errors between 7.5 c and 10 c, but we need not list them as the standard native-fifth notations for those EDOs (the first two categories above).
This is the first installment of my response, which will cover all of the near-superpythagorean divisions (27, 49, 54, and 71) and some additional adjoining divisions (32, 37, 42, and 59). In evaluating these (one by one), I have examined the fractional apotome bad-5ths notation for each one to determine whether it could be replaced by a simpler JI-based notation. For reference, I have also tabulated the prime 3 error both in absolute (cents) and relative (% of a degree) terms, as well as the prime-limit consistency. In additional, I have listened to some of these in Scala to judge whether a JI-based simplification is worthwhile.
Here goes:
27 is 9-limit consistent (prime 3 has 9.156 cents or 20.60% error) and has a valid 5-comma (and 11M-diesis) of 1 degree and valid 13L-diesis (and 13M-diesis) of 2 degrees (7C vanishes). I recommend a simplification of the bad-5ths notation:
replacing it with the following JI notation:
27: (JI notation)
This is not the JI notation that we formerly agreed on, since I have replaced 13M with 13L , which directly notates 13/8.
49 is 7-limit consistent (prime 3 has 8.249 cents or 33.68% error) and has a valid 5-comma of 2 degrees and valid 11M-diesis of 3 degrees (7C vanishes). I recommend keeping the original 11-limit notation,
49: (JI notation)
which is simpler than the bad-5ths notation:
54 is 5-limit inconsistent (prime 3 has 9.156 cents or 41.20% error); the 7-comma vanishes, and the 5-comma and 11M-diesis are both 3 degrees. It does not appear that the bad-5ths proposal can be simplified.
or as subset of 108
71 is 5-limit consistent (prime 3 has 7.904 cents or 46.77% error) and has a valid 5-comma of 3 degrees, 11M-diesis of 4 degrees, and 13M-diesis of 5 degrees. However, although it has a valid 7-comma of 1 degree, it is severely 7-limit inconsistent, making it impractical to use the 7-comma symbol in the notation. Therefore, I recommend that the bad-5ths proposal be used.
or as subset of 142
32 is 5-limit inconsistent (prime 3 has 10.545 cents or 28.12% error); the 7-comma vanishes, and the 5-comma and 11M-diesis are both 2 degrees. It does not appear that the bad-5ths proposal can be simplified.
or as subset of 96
37 is 7-limit consistent (prime 3 has 11.559 cents or 35.64% error); the 7-comma vanishes, and the 5-comma and 11M-diesis are both 2 degrees. It does not appear that the bad-5ths proposal can be simplified.
42 is 7-limit consistent (prime 3 has 12.331 cents or 43.16% error); the 7-comma vanishes, and the 5-comma and 11M-diesis are both 2 degrees. It does not appear that the bad-5ths proposal can be simplified.
or as subset of 84
59 is 7-limit consistent (prime 3 has 9.909 cents or 48.72% error); the 7-comma vanishes, and the 5-comma and 11M-diesis are both 3 degrees. It does not appear that the bad-5ths proposal can be simplified.
or as subset of 118
Dave Keenan
Posts: 882
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:
Re: A proposal to simplify the notation of EDOs with bad fifths
So, to summarise: Of all those EDOs having fifths more than 7.5c wide, and therefore able to be notated using the above apotome-fraction notation, you find that 27 and 49 are not simplified thereby, and so should retain, as their preferred notation, their existing* JI-based notations.
I have checked your data and considered possible alternatives, and so far I agree with your choices. But it's a shame that, with these choices, there is no longer a simple description for which EDOs have preferred apotome-fraction notations. If we could find simple enough JI-based notations for 54 and 71, then the description would be simply "fifths more than 10c wide".
71-edo is 1:3:5:11:13:19 consistent. It has essentially the same fifth-size as 49-edo, so the size reversal of and is just as appropriate (or inappropriate). So what's wrong with this notation?
71: (JI-based), compared with
71: (apotome-fraction (with 13L instead of 13M))
*I agree with your suggestion that when a 13 diesis symbol is used for the half-apotome, it can be the larger one, symbolised by , not . This is independent of the choice between apotome-fraction and JI-based notations. It affects not only 27-edo, but also the JI-based notations for 51 68 75 (but not 45) and the apotome-fraction notations for 6, 13, 10, 20, 30, 37, 54, 71. My reason for preferring now is that, since we defined it as the symbol for 13 in the one-symbol-per-prime notation, it more strongly suggests 13 than which suggests 35. But flag arithmetic should also be considered.
Dave Keenan
Re: A proposal to simplify the notation of EDOs with bad fifths
54-edo is far more difficult to find a JI-based notation for. It is 1:3:7:11:13:17 consistent, but, as you say, the 7-comma is zero degrees and so is notationally-useless. So are the 17-comma and 17-kleisma, as they are the same size as the 11-diesis (3 degrees). At least the 13L-diesis is 4 degrees (half the apotome). And so the difference between primes 11 and 13 gives us one degree as 143C. But we have nothing consistent for 2 degrees.
The list of pathetic candidates for 2°54 is:
(accents would be dropped)
7:19
5:13
11:49
11:35k
5*5*7
77
The least worst choice is probably as it is a Spartan symbol and is valid as two of its secondary commas, although not its primary.
54: (JI-based)
54: (apotome-fraction (with 13M replaced by 13L))
One would have to define one's criteria for "simplicity" in some detail to argue whether either of the above is simpler than the other. But the JI-based notation does at least have fewer non-spartan symbols.
George Secor
Posts: 31
Joined: Tue Sep 01, 2015 11:36 pm
Location: Godfrey, Illinois, US
Re: A proposal to simplify the notation of EDOs with bad fifths
Dave Keenan wrote:So, to summarise: Of all those EDOs having fifths more than 7.5c wide, and therefore able to be notated using the above apotome-fraction notation, you find that 27 and 49 are not simplified thereby, and so should retain, as their preferred notation, their existing* JI-based notations.
I have checked your data and considered possible alternatives, and so far I agree with your choices. But it's a shame that, with these choices, there is no longer a simple description for which EDOs have preferred apotome-fraction notations. If we could find simple enough JI-based notations for 54 and 71, then the description would be simply "fifths more than 10c wide".
Before I look into examining the merits of JI-based notations for 54 and 71, I observe that these two could be excluded if the criterion for prime-3 error were a combination of absolute (<10 cents) and relative values (<35% or <40% of a degree). I also noted that 27 and 49 are both 7-limit consistent; although that's another thing that sets these apart from most of the other bad-5th divisions, apparently it isn't necessary to make that an additional condition:
27: 9.156 cents, 20.60%, 9-limit consistent
49: 8.249 cents, 33.68%, 7-limit consistent
54: 9.156 cents, 41.20%
71: 7.904 cents, 46.77%
Others that would be excluded:
32: 10.545 cents, 28.12%
37: 11.559 cents, 35.64%
42: 12.331 cents, 43.16%
52: 9.647 cents, 41.81%
59: 9.909 cents, 48.72%
64: 8.205 cents, 43.76%
Two others would then be candidates for JI notation (also 7-limit consistent):
26: 9.647 cents, 20.90%, 13-limit consistent
45: 8.622 cents, 32.33%, 7-limit consistent
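The absolute and relative fifth-error figures quoted throughout this thread can be recomputed with a short script. This is my own sketch, not something from the thread; it takes the best approximation to 3/2 in an EDO and reports its error in cents and as a percentage of one step:

```python
import math

JUST_FIFTH = 1200 * math.log2(3 / 2)  # ~701.955 cents

def fifth_error(edo):
    """Error of the best fifth in `edo`: (cents, percent of one step)."""
    step = 1200 / edo
    best_fifth = round(edo * math.log2(3 / 2)) * step
    error = best_fifth - JUST_FIFTH
    return error, 100 * abs(error) / step

for n in (26, 27, 45, 49, 54, 71):
    cents, rel = fifth_error(n)
    print(f"{n}-edo: {cents:+.3f} cents, {rel:.2f}%")
```

A relative error above 25% is the same thing as 1:3:9 inconsistency, since the error of 9/4 (twice the fifth's error) then exceeds half a step.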
So here's my writeup on those last two:
26 is 13-limit consistent (prime 3 has -9.647 cents or -20.90% error) and has a valid apotome of 1 degree. I recommend the simple notation:
26:
45 is 7-limit consistent (prime 3 has -8.622 cents or -32.33% error) and has a valid apotome of 2 degrees. I recommend that although (as 1deg45) does not give the best 11/8 (due to 11-limit inconsistency), it actually gives the best 11/6, 11/7, and 11/9 and results in good 11-limit (and 13-limit) chords (notably 6:7:9:11:13):
45:
Dave Keenan
Re: A proposal to simplify the notation of EDOs with bad fifths
Thanks George.
I note for other readers, that although we are summarising these notations using multi-shaft sagittals, each such symbol can be replaced by an equivalent combination of conventional sharps and flats with a single-shaft sagittal.
George Secor wrote: Before I look into examining the merits of JI-based notations for 54 and 71, I observe that these two could be excluded if the criterion for prime-3 error were a combination of absolute (<10 cents) and relative values (<35% or <40% of a degree).
Nice try. But you'd also exclude (from JI-based notations) an infinite number of larger EDOs—about 30% of all EDOs—beginning with 66 78 83 90 95 on the wide side and 69 76 81 88 93 100 on the narrow side. We could not make apotome-fraction or limma-fraction notations for EDOs with so many steps to the apotome or limma.
You should get the idea if you look at the green and magenta regions in the diagram in this post
viewtopic.php?p=457#p457
that shows EDOs having relative errors greater than 25%, as this is equivalent to being 1:3:9 inconsistent.
Others that would be excluded:
32: 10.545 cents, 28.12%
37: 11.559 cents, 35.64%
42: 12.331 cents, 43.16%
52: 9.647 cents, 41.81%
59: 9.909 cents, 48.72%
64: 8.205 cents, 43.76%
Right. So we're looking at narrow fifths now.
Two others would then be candidates for JI notation (also 7-limit consistent):
26: 9.647 cents, 20.90%, 13-limit consistent
45: 8.622 cents, 32.33%, 7-limit consistent
So here's my writeup on those last two:
26 is 13-limit consistent (prime 3 has -9.647 cents or -20.90% error) and has a valid apotome of 1 degree. I recommend the simple notation:
26:
I agree. This is of course the existing standard notation for 26-edo.
45 is 7-limit consistent (prime 3 has -8.622 cents or -32.33% error) and has a valid apotome of 2 degrees. I recommend that although (as 1deg45) does not give the best 11/8 (due to 11-limit inconsistency), it actually gives the best 11/6, 11/7, and 11/9 and results in good 11-limit (and 13-limit) chords (notably 6:7:9:11:13):
45:
I note that the existing standard notation for 45 is:
45:
And I note that in its primary role as the 35-diesis (36/35) actually does map to 1°45 whereas as the 11-diesis (33/32) maps to 2°45.
And if we want the simple criterion of prime_3_error<10c to be preserved, in this (narrow-fifths) case we need to also find simple JI-based notations for 52 and 64.
We already have a simple JI-based notation for 64-edo. It's the same as the one for 50 and 57.
64:
And the existing standard notation for 45 turns out also to be valid for 52:
52:
The above set of JI-based notations for 26 45 52 and 64 also constitutes a consistent apotome-fraction notation for all EDOs with fifths between 7.5c and 10c narrow.
1/3 and 1/2 apotome
2/3 apotome
1 apotome
4/3 and 3/2 apotome
5/3 apotome
2 apotomes
I note that implementing JI-based notations for everything with less than a 10c prime-3 error, simplifies the apotome-fraction and limma-fraction notation for those EDOs with worse fifths, because now we only need to cater for a maximum of 9 steps to the apotome instead of 10, and 6 steps to the limma instead of 7.
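The step counts behind that remark follow from the 2,3-monzos of the apotome (3^7/2^11) and the Pythagorean limma (2^8/3^5), given the native-fifth mapping. A quick sketch of my own:

```python
import math

def best_fifth_steps(edo):
    """Steps of the nearest approximation to 3/2 in `edo`."""
    return round(edo * math.log2(3 / 2))

def apotome_steps(edo):
    # apotome = 3^7 / 2^11: seven fifths up, four octaves down
    return 7 * best_fifth_steps(edo) - 4 * edo

def limma_steps(edo):
    # limma = 2^8 / 3^5, with 3/1 mapped to (edo + fifth) steps
    return 8 * edo - 5 * (edo + best_fifth_steps(edo))

print(apotome_steps(54), limma_steps(54))  # 8 2
```

For 54-edo this gives an 8-step apotome, consistent with the 13L-diesis being 4 degrees ("half the apotome") above.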
George Secor
Re: A proposal to simplify the notation of EDOs with bad fifths
Dave Keenan wrote:Thanks George.
I note for other readers, that although we are summarising these notations using multi-shaft sagittals, each such symbol can be replaced by an equivalent combination of conventional sharps and flats with a single-shaft sagittal.
George Secor wrote: Before I look into examining the merits of JI-based notations for 54 and 71, I observe that these two could be excluded if the criterion for prime-3 error were a combination of absolute (<10 cents) and relative values (<35% or <40% of a degree).
Nice try. But you'd also exclude (from JI-based notations) an infinite number of larger EDOs—about 30% of all EDOs—beginning with 66 78 83 90 95 on the wide side and 69 76 81 88 93 100 on the narrow side. We could not make apotome-fraction or limma-fraction notations for EDOs with so many steps to the apotome or limma.
You should get the idea if you look at the green and magenta regions in the diagram in this post
viewtopic.php?p=457#p457
that shows EDOs having relative errors greater than 25%, as this is equivalent to being 1:3:9 inconsistent.
<Groan> Please excuse me if I let off some steam at this point, because I had the impression that we already had native-5th notations for all (or nearly all) of the divisions not identified as "bad-5th". Instead, I see that we have very few (if any) native-5th notations for the (more than 20) 1:3:9-inconsistent divisions in the green and magenta regions in the referenced diagram. It really seems like a waste of time for us to devise native-5th notations for those divisions, most of which nobody will ever use. After all, why would anyone spend their time scavenging through the microtonal garbage heap when there are so many other good divisions to explore? Certainly nobody is going to refret a guitar to one of these.
So when are we going to tackle the garbage heap?
Dave Keenan
Re: A proposal to simplify the notation of EDOs with bad fifths
George Secor wrote: <Groan> Please excuse me if I let off some steam at this point, because I had the impression that we already had native-5th notations for all (or nearly all) of the divisions not identified as "bad-5th". Instead, I see that we have very few (if any) native-5th notations for the (more than 20) 1:3:9-inconsistent divisions in the green and magenta regions in the referenced diagram. It really seems like a waste of time for us to devise native-5th notations for those divisions, most of which nobody will ever use. After all, why would anyone spend their time scavenging through the microtonal garbage heap when there are so many other good divisions to explore? Certainly nobody is going to refret a guitar to one of these.
So when are we going to tackle the garbage heap?
Relax. I don't think we need to come up with native-fifth notations (whether JI-based, apotome-fraction or limma-fraction) for any 1:3:9-inconsistent division bigger than some number. What should that number be?
44-edo is presently the smallest division we have not given any kind of native-fifth notation for. By the way, in sagittal.pdf we say that 44-edo should be notated as a subset of 176-edo. I don't understand why we don't say "as a subset of 132-edo" instead. I think we should correct that.
I'll repeat the diagram we're referencing:
Of the 21 green and magentas, we have native-fifth notations (given in sagittal.pdf) for only 3 of them, namely magentas 57, 62 and 69. If we added native-fifth notations (JI-based) for only 3 more, namely greens 44, 61 and 66, we'd have them for all EDOs up to 72, which is something we have been asked for in the past.
Dave Keenan
Re: A proposal to simplify the notation of EDOs with bad fifths
I agree that 1:3:9 inconsistency (or equivalently: more than 25% relative error in the fifth) is another kind of "bad fifth", and such divisions do not require a native fifth notation. They can make do with a subset notation. But we would like to give, at least the low numbered ones, a native fifth notation of some kind (whether JI-based, ap-frac or lim-frac), when this can be made simple enough. Where "simple enough" means: using only Spartan symbols, plus (in order of decreasing simplicity/increasing prime-limit) kai , slai and rai .
I suggest we tackle this in two stages. The first stage is nearly complete and consists of agreeing, for each division from 5 to 72, what is its best-if-any JI-based notation, its best-if-any apotome-fraction notation, its best-if-any limma-fraction notation and its best-if-any subset notation. I feel we should include all such notations in sagittal.pdf. These four notation types could be abbreviated JB AF LF and SS. And I note that in many cases JB = AF.
The second stage would be to decide which single notation to call the "preferred" or "default" notation for each division. This is the one that we would want to be selected in Scala when the user types SET NOTATION SA<n> where <n> is the number of the division.
I think it is very desirable that the boundaries between the regions where the four different notation types are preferred, should consist of straight lines on the above diagram, and with a good deal of reflective symmetry about the line of pythagoreans (just fifths). The divisions for which JB = AF make this easier since the boundary between these types then becomes somewhat arbitrary. Examples of straight lines are: steps per apotome, steps per limma, steps per octave, absolute fifth-error, relative fifth error.
I have responded to your recent proposals for new JI-based notations for some divisions—in most cases accepting them. I'd appreciate if you would address those I have not (yet) accepted, and address my recent JI notation proposals, including changing to everywhere that it is used as a half-apotome with a 13-limit meaning, including in the AF notation for wide fifths.
I note that 44-edo is 1:3:5:11:13-consistent, and the following 1:3:5:11:13 JI-based notation is valid:
44:
It also happens to be the same as the apotome-fraction notation used for other divisions having 6 steps to the apotome, 30 and 37. Its other neighbouring 6-step-to-the-apotome division is 51-edo. 51's standard notation differs in using the 7-comma symbol for 1 step. This would not be valid in 44-edo as the 7-comma vanishes there.
I will return to 61 and 66 later, but for now I want to flag two possible native-fifth notations for 61 that need to be investigated for validity.
61: possible JB same as 68-edo
61: possible AF or JB
George Secor
Re: A proposal to simplify the notation of EDOs with bad fifths
Dave Keenan wrote:
George Secor wrote: <Groan> Please excuse me if I let off some steam at this point, because I had the impression that we already had native-5th notations for all (or nearly all) of the divisions not identified as "bad-5th". Instead, I see that we have very few (if any) native-5th notations for the (more than 20) 1:3:9-inconsistent divisions in the green and magenta regions in the referenced diagram. It really seems like a waste of time for us to devise native-5th notations for those divisions, most of which nobody will ever use. After all, why would anyone spend their time scavenging through the microtonal garbage heap when there are so many other good divisions to explore? Certainly nobody is going to refret a guitar to one of these.
So when are we going to tackle the garbage heap?
Relax. I don't think we need to come up with native-fifth notations (whether JI-based, apotome-fraction or limma-fraction) for any 1:3:9-inconsistent division bigger than some number. What should that number be?
72 is as good a cutoff as any. As I said before, I don't think anyone is going to refret a guitar to any division this complex.
Dave Keenan wrote:44-edo is presently the smallest division we have not given any kind of native-fifth notation for. By the way, in sagittal.pdf we say that 44-edo should be notated as a subset of 176-edo. I don't understand why we don't say "as a subset of 132-edo" instead. I think we should correct that.
There are two reasons for making it a subset of 176-EDO:
1) 176 is a much better division than 132; and
2) The notation for 88-EDO is as a subset of 176, so the tones common to 44 and 88 would be notated alike.
I saw your other message, and I'll be responding to it soon.
# HDF4 support for GDAL on Arch Linux
I have been having trouble reading HDF files with OTB on Arch Linux for a while. I finally took the time to investigate this problem and came to a solution.
At the beginning I was misled by the fact that I was able to open a HDF5 file with Monteverdi on Arch. But I finally understood that the GDAL pre-compiled package for Arch was only missing HDF4 support.
It is due to the fact that HDF4 depends on libjpeg version 6, and is incompatible with the standard current version of libjpeg on Arch.
So the solution is to install libjpeg6 and HDF4 from the AUR and then rebuild the gdal package, which, during the configuration phase, will automatically add HDF4 support.
Here are the detailed steps I took:
1. Install libjpeg6 from AUR:
1. mkdir ~/local/packages
2. cd ~/local/packages
3. wget http://aur.archlinux.org/packages/libjpeg6/libjpeg6.tar.gz
4. tar xzf libjpeg6.tar.gz
5. cd libjpeg6
6. makepkg -i # will prompt for the root password for installation
2. Install HDF4 from AUR:
1. cd ~/local/packages
2. wget http://aur.archlinux.org/packages/hdf4-nonetcdf/hdf4-nonetcdf.tar.gz
3. tar xzf hdf4-nonetcdf.tar.gz
4. cd hdf4-nonetcdf
5. makepkg -i # will prompt for the root password for installation
3. Setup an Arch Build System build tree
1. sudo pacman -S abs
2. sudo abs
3. mkdir ~/local/abs
4. Compile gdal
1. cp -r /var/abs/community/gdal ~/local/abs
2. makepkg -s # generates the package without installing it
3. sudo pacman -U gdal-1.8.0-2-x86_64.pkg.tar.gz
If you are new to using AUR and makepkg, please note that the AUR package installation needs the sudo package to be installed first (the packages are built as a non-root user and sudo is called by makepkg).
Step 3 above is only needed if you have never set up the Arch Build System on your system.
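To confirm the rebuilt package actually gained the driver, you can scan the output of `gdalinfo --formats`. Here is a small helper of my own for illustration; the sample listing is made up and the real output will differ:

```python
def has_hdf4(formats_output: str) -> bool:
    """True if a `gdalinfo --formats` listing mentions the HDF4 driver."""
    return any(line.strip().lower().startswith("hdf4")
               for line in formats_output.splitlines())

# Hypothetical listing for illustration only
sample = """Supported Formats:
  GTiff (rw+v): GeoTIFF
  HDF4 (ro): Hierarchical Data Format Release 4
  HDF5 (ro): Hierarchical Data Format Release 5
"""
print(has_hdf4(sample))  # True
```

In practice you would feed it the real output, e.g. `has_hdf4(subprocess.run(["gdalinfo", "--formats"], capture_output=True, text=True).stdout)`.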
## 8 thoughts on “HDF4 support for GDAL on Arch Linux”
1. Tovo says:
Hi Jordi,
We’ve tried your procedure in order to get gdal working with hdf4 files on a new ArchLinux computer. Unfortunately, it doesn’t work for us (when we call gdalinfo with an hdf4 file, it says that the format is not supported). We use an i686 platform and I don’t know if that is the reason for the failure. Maybe you have an idea for helping us. Thanks a lot.
Tovo R.
2. jordi says:
Have you checked the log messages from the gdal build?
3. jordi says:
Just a quick update: gdal builds by default with the external libtiff in arch which does not support bigtiff. In order to get support for bigtiff one has to modify the PKGBUILD file and set "--with-libtiff=internal" in the configure options.
4. Tovo says:
Thanks Jordi for your reply (and happy new year !!)
Our problem is that by following your tutorial, we don’t get any hdf4 support.
While doing the part 4, somewhere, we have :
HDF4 support: no
HDF5 support: yes
And obviously, after compiling, hdf4 is not supported by gdal.
We tried to modify the GDAL PKGBUILD by changing hdf5 with hdf4 but, we have the following error :
configure: error: HDF4 support requested with arg “yes”, but neither hdf4 nor mfhdf lib found.
Thanks for your help
Cheers
1. jordi says:
Was step 2 above successful? If you use “yaourt” (the package manager with AUR support), you can check if HDF4 is installed with:
sudo yaourt -Ss hdf4 | grep installed
Try this and tell me what you find.
Good luck!
Jordi
5. Tovo says:
Yes,
Step 2 was successful and
sudo yaourt -Ss hdf4 | grep installed gives :
aur/hdf4-nonetcdf 2.6-2 [installed] (6)
6. jordi says:
Well, I don’t have any other idea than telling the gdal configure where your hdf4 lib is. I can’t check it now, but I guess that it may be possible to give the path to the --with-hdf4 flag.
Is your hdf4 lib installed in a non standard directory?
7. Tovo says:
Thanks Jordi,
After many retries, we succeeded. Finally, we decided to follow the tutorial till step 3, and to compile GDAL independently from Arch packages management. We also added the --with-hdf4 flag for the compiling. The next big problem was to install Qgis and Grass … but it’s another story 😉
View Single Post
Recognitions:
Quote by Mentz114 I get a different result for the Ricci scalar, viz $$\frac{\partial_r f}{r\,e^{f}}$$ where $\partial_r$ is differentiation wrt r.
The Ricci scalar is $R = \frac{2 f^\prime}{r} \, \exp(-2f)$ in the notation I used above. (The Ricci tensor, with components evaluated wrt the coframe field I gave, is diagonal with both diagonal components equal to the Riemann curvature component I computed.)
$$K = \frac{R_{r \phi r \phi}}{\exp(2f) \, r^2} = \frac{r \, f^\prime}{ r^2 \, \exp(2f)} = \frac{f^\prime}{r} \, \exp(-2f) = R_{1212}$$
Exercise: write an orthogonal chart for the general Riemannian two-manifold in the form $ds^2 = A^2 \, du^2 + B^2 \, dv^2$, where A,B can be functions of u,v (although this isn't necessary; without loss of generality we could impose further restrictions), and adopt the coframe field
$\sigma^1 = A \, du, \; \sigma^2 = B \, dv$. Using the method of curvature two-forms, show that
$$-R_{1212} = \frac{ \left( \frac{B_u}{A} \right)_u + \left( \frac{A_v}{B} \right)_v }{AB}$$
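The specific curvature above can be machine-checked. This is my own sympy sketch, assuming the metric implied by the computation, $ds^2 = e^{2f(r)}\,dr^2 + r^2\,d\phi^2$, so that $A = e^{f}$ and $B = r$ with $u=r$, $v=\phi$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)
A, B = sp.exp(f), r  # ds^2 = A^2 dr^2 + B^2 dphi^2

# Gaussian curvature of an orthogonal metric; the (A_v/B)_v term
# vanishes because nothing depends on phi here.
K = -sp.diff(sp.diff(B, r) / A, r) / (A * B)
K = sp.simplify(K)

# Matches K = f'(r) exp(-2f) / r, and hence R = 2K as stated above.
assert sp.simplify(K - sp.diff(f, r) * sp.exp(-2 * f) / r) == 0
print(K)
```

The same few lines generalize to symbolic A(u,v), B(u,v) if you want to verify the exercise formula itself.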
## Algebra: A Combined Approach (4th Edition)
$\dfrac{(2x^{1/5})^{4}}{x^{3/10}}=16x^{1/2}$
$\dfrac{(2x^{1/5})^{4}}{x^{3/10}}$ Evaluate the power indicated in the numerator: $\dfrac{(2x^{1/5})^{4}}{x^{3/10}}=\dfrac{16x^{4/5}}{x^{3/10}}=...$ Evaluate the division: $...=16x^{4/5-3/10}=16x^{1/2}$
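The exponent arithmetic ($4/5 - 3/10 = 8/10 - 3/10 = 5/10 = 1/2$) can be double-checked symbolically; a small sketch of my own using sympy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = (2 * x**sp.Rational(1, 5))**4 / x**sp.Rational(3, 10)

# 2^4 = 16 and x^(4/5 - 3/10) = x^(1/2)
assert sp.simplify(expr - 16 * sp.sqrt(x)) == 0
print(sp.simplify(expr))
```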
## Classification of 6-manifolds with fundamental group $\pi_{1}=\mathbb Z/p$ and second homotopy group $\pi_{2}=0$
We classify the smooth closed 6-manifolds with fundamental group $\pi_{1}\cong\mathbb Z /p$ and second homotopy group $\pi_{2}=0$ in the Top and Diff categories.
# Motor control - 1 phase to 3 phase inverter issue
#### blimp
Joined Feb 9, 2021
6
I have a wood lathe (Ejca TLS30) which runs on 1-phase 230V, but the motor is a 3-phase motor, driven by a Hitachi inverter (HFC-VWE 2.5SBE) with stepless speed control. The lathe is 30 years old, but they still make them more or less unchanged.
The problem is that a few days ago it reduced the speed to less than half. The speed control (a 1K potentiometer connected to the inverter) still works as before, but with much lower speed. I've also tested the potentiometer, and it gives plausible values.
With my limited knowledge on 3 phase motors I thought "hey, maybe the motor has lost a phase and runs on only 2 phases", but the mighty Google told me that in that case the motor wouldn't start at all.
My local supplier wants to sell me the inverter, without much explanation of what might be the problem, but it's about $1400, which is a bit pricey if the problem is the motor. Does this sound like an inverter problem, or may this problem be related to the motor instead?
#### MaxHeadRoom
Joined Jul 18, 2013
23,092
If you are lucky, it may be possible that a parameter or two got corrupted? Do you have the manual and do you have a backup of parameters?
You could set it to front panel run mode and see what the results are. There are a few things you can check first.
Max.
#### Marley
Joined Apr 4, 2016
411
$1400 (is that US dollars?) sounds very expensive for the inverter! Unlikely to be a loss of a phase in the motor or wiring because the motor probably would not start and the inverter would show an error indication.
First, you should get the inverter manual and go through the settings. Possibly after 30 years it's forgotten some of its settings. If you have to replace the inverter, go elsewhere and pay less!
#### blimp
Joined Feb 9, 2021
6
Thank you for the tip!
Unfortunately I don't have anything but the lathe itself. I assume that front panel run mode applies to inverters with a display, which this one doesn't have? This one seems to be set up with the dip switches and potentiometers shown in the attached image. However I've found a manual for a similar inverter, so I guess I'll try figuring out what the dip switches and potentiometers do and try fiddling with them. Maybe I'm lucky and it's just a bad dip switch.
#### Attachments
• 2.1 MB Views: 10
#### MaxHeadRoom
Joined Jul 18, 2013
23,092
As a last resort there is the cheap $100+ Chinese Huanyang or the slightly higher-priced Hitachi; you don't need to pay $1,400 for a VFD now.
Check for proper pole count setting.
Max.
#### blimp
Joined Feb 9, 2021
6
I've found some similar second-hand inverters on ebay for around $300, so I'm not prepared for $1400 just yet. If I can't get anything useful from the manual and the dip switches I'll certainly have a look at the Huanyang.
#### MisterBill2
Joined Jan 23, 2018
8,740
The very first thing I suggest is to carefully check all of the connections of the present installation, and then carefully examine the present converter board. One failed connection could be the problem.
#### MaxHeadRoom
Joined Jul 18, 2013
23,092
Thank you for the tip!
I assume that front panel run mode applies to inverters with a display, which this one doesn't have?
Yes just about all modern VFD's have a readout that can display various conditions, freq., current, and other status.
Max.
#### Marley
Joined Apr 4, 2016
411
Sounds like you need to modernize. First look at the plate on the motor. Note the power (kW), phase current (A) and voltage (which will be close to your single-phase supply voltage). Most motors have two voltage settings depending if you connect it as star (high voltage) or delta (low voltage). This means there are two current values stated on the plate. Low voltage = high current and v/v. Yours should already be wired for the correct voltage. Then go to ebay or similar and buy a simple inverter (VFD). Hitachi, or another reputable make if possible will be good. Chinese if you have to! Make sure it comes with a manual or you can find one online.
The VFD not only allows you to run the lathe from a single-phase supply but you can program in a nice acceleration time for smooth starting.
#### MisterBill2
Joined Jan 23, 2018
8,740
As I examined that photo of the board, a component marked F514, I think, near the blue pot marked Max adj looked like it might have failed. A fuse is simple enough to check; if that is the problem then why? But replacing a fuse might be a cheap fix.
#### strantor
Joined Oct 3, 2010
5,543
If this is the correct lathe, your spindle is 1.5kW. For $188 brand new, and from a reputable manufacturer, this is hard to beat (if you actually need a new VFD).
The speed dropping neatly in half might be a clue. Does it go from 0 to half speed during the first half of the pot turn, and then not go any higher for the 2nd half turn? Or does the complete turn of the pot now change speed from 0 to half speed? Are there any other symptoms? Inconsistent speed? Strange noises? Etc?
#### MaxHeadRoom
Joined Jul 18, 2013
23,092
The other possibility is if some parameters reverted to default, such as motor 2 to 4 pole etc!
Max.
#### GetDeviceInfo
Joined Jun 7, 2009
1,846
Is your pot remote from the drive? I'd be looking to confirm loop voltages back to the input terminal.
#### MaxHeadRoom
Joined Jul 18, 2013
23,092
The OP appears to indicate the pot is remote. A 1k seems a bit low; the most common value is 10k or maybe 5k.
Max.
#### MisterBill2
Joined Jan 23, 2018
8,740
One more thing to investigate would be that adjustment pot, "Max.M". You could mark the present position and turn it a bit in either direction and see if that has an effect. Cheap parts like that sometimes develop strange values. An easy check to do, safe and simple.
After so many years it does not seem like a wiring error is likely. But I did suggest a careful examination of all the connections, that is still a good choice. AND, if there is a mains voltage selector switch, verify that it is still in the right position.
Thread Starter
#### blimp
Joined Feb 9, 2021
6
MisterBill2 wrote: As I examined that photo of the board, a component marked F514, I think, near the blue pot marked Max adj looked like it might have failed. A fuse is simple enough to check, if that is the problem then why? But replacing a fuse might be a cheap fix.
The component is just a resistor, and I think it's better than it looks. I think it's just covered in dust, but I'll give it a closer look tomorrow.
MisterBill2 wrote: One more thing to investigate would be that adjustment pot, "Max.M". You could mark the present position and turn it a bit in either direction and see if that has an effect. Cheap parts like that sometimes develop strange values. An easy check to do, safe and simple. After so many years it does not seem like a wiring error is likely. But I did suggest a careful examination of all the connections, that is still a good choice. AND, if there is a mains voltage selector switch, verify that it is still in the right position.
I've found a more or less compatible manual for the VFD, and according to it the "m" in "m. adj" stands for monitor, and applies only to the optional remote control, which I don't have. It's for adjusting the frequency display on the remote control, I think. I've inspected all the external connections to the VFD and all the switches to the best of my ability, and everything looks fine.
strantor wrote: If this is the correct lathe, your spindle is 1.5kW. For $188 brand new, and from a reputable manufacturer, this is hard to beat (if you actually need a new VFD).
The speed dropping neatly in half might be a clue. Does it go from 0 to half speed during the first half of the pot turn, and then not go any higher for the 2nd half turn? Or does the complete turn of the pot now change speed from 0 to half speed? Are there any other symptoms? Inconsistent speed? Strange noises? Etc?
That's the correct lathe, and the VFD you link is probably a good choice if this troubleshooting turns out to be a dead end. The speed of the lathe follows the complete turn of the pot. At minimum on the pot it goes really slow, and at maximum on the pot it's barely half the speed it should be. No other symptoms. A clue may be that it slowed down briefly one day before it became permanent.
MaxHeadRoom wrote: The other possibility is if some parameters reverted to default, such as motor 2 to 4 pole etc!
Max.
That might be so, but how do I set those parameters? I can't find any settings that apply to 2/4 poles.. There is a way of resetting the VFD, but I assume that brings it back to factory default settings, which isn't necessarily the correct settings for this lathe..
According to the manual this VFD has three different speed settings. The lowest speed is set by connecting CF1 (in the attached image) to L (not really visible in the picture). Medium - CF2 to L and max by connecting both to L. I still haven't tried this, as those settings haven't been used prior to this speed change either, but I guess I'll try it tomorrow, if no other revelation takes place.
#### Attachments
• 2.5 MB Views: 4
#### MaxHeadRoom
Joined Jul 18, 2013
23,092
It appears the only certain way is to try and obtain a copy of the original manual.
It appears to be older than the more modern way of setting all parameters digitally, and is set up with other devices, pots etc.
The other malady that occurs with older VFDs such as this is that the large electrolytics start to fail or lose capacity.
Is that a sign of slight leakage on one in the pic?
Max.
#### blimp
It appears the only certain way is to try and obtain a copy of the original manual.
It appears to be older than the more modern way of setting all parameters digitally, and is set up with other devices, pots etc.
The other malady that occurs with older VFDs such as this is that the large electrolytics start to fail or lose capacity.
Is that a sign of slight leakage on one in the pic?
Max.
I think the attached manual is just about right. I haven't noticed any leak from the capacitors, but I'll give them a closer look tomorrow as well. I have a distinct feeling this is going more and more in the direction of a new VFD...
#### Attachments
• (attached manual, 1.9 MB)
#### strantor
Thanks for posting the manual; that's the way to help us help you. We can give much better advice now.
Is your pot remote from the drive? I'd be looking to confirm the loop voltages back to the input terminals.
Yep. You need to verify that you have 10V between terminals H & L, and that you are getting 0-10V between terminals L and O, as you turn the pot.
If you don't have 10V on H&L then the drive's internal 10V supply is damaged or DS2 is set incorrectly (unlikely, unless you have rodents that like to flip dip switches in the night).
If you have the 10V on H & L but you're getting something other than 0-10V on L & O, then your potentiometer is bad.
If you're getting the 10V and the 0-10V but the drive is still slow, then something more serious is wrong. Check mechanical stuff first. Is the spindle hard to turn (motor overloaded)?
Do you have the full 230V at the input?
If all that stuff is good, then I think you need to replace the drive.
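strantor's checks above amount to a decision tree. Here it is sketched in code purely as a summary (the pass/fail thresholds are my assumptions, not values from the drive's manual):

```python
def diagnose(v_HL: float, v_LO_max: float, spindle_turns_freely: bool,
             input_230V_ok: bool) -> str:
    """Walk the checks in order; each voltage is a multimeter reading."""
    if abs(v_HL - 10.0) > 1.0:   # H-L should carry the drive's internal 10V supply
        return "internal 10V supply damaged (or DS2 set incorrectly)"
    if v_LO_max < 9.0:           # L-O should sweep 0-10V as the pot is turned
        return "potentiometer bad"
    if not spindle_turns_freely:
        return "mechanical problem, motor overloaded"
    if not input_230V_ok:
        return "low input supply voltage"
    return "something more serious: consider replacing the drive"
```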
#### MisterBill2
Given that the drive still provides a variable speed, evidently in a stable manner, a lack of diagnostic insight by those at a distance is not an adequate reason to decide to replace the drive. One more thought is the DC voltage that supplies the three-phase inverter. The TS (thread starter) should check and verify that it is correct. Probably that is fed from a voltage doubler, or maybe a bridge rectifier. If that voltage is much below the expected level then an open rectifier diode would be suspected. Replacing an open diode would be much cheaper than replacing the whole assembly, and probably less work as well.
So measuring that DC voltage is a next step in the diagnostics.
# One way twin
#### PeroK
But 1) and 3) do not take place at the same position relative to A as the acceleration in the simple case, assuming the same total space time intervals for both scenarios.
I really don't see why not. We're assuming that B executes some sort of periodic motion: half a cycle in the simple case and 1.5 cycles in the next case.
The difference in differential ageing between the scenarios must be less than the time for the periodic motion. As with all twin paradox scenarios, there are small variations based on how many acceleration phases B has.
To clarify: Are you assuming that A will age the same amount in B's frame during these two accelerations?:
S1: Single acceleration in the simple scenario
E3: 3rd acceleration in the extended turnaround scenario
Yes, of course, these are physically identical.
#### A.T.
But 1) and 3) do not take place at the same position relative to A as the acceleration in the simple case, assuming the same total space time intervals for both scenarios.
I really don't see why not. We're assuming that B executes some sort of periodic motion: half a cycle in the simple case and 1.5 cycles in the next case.
We also assumed the same total spacetime interval (path length) for all cases, so with more periods the amplitude (maximal separation) has to be less.
To clarify: Are you assuming that A will age the same amount in B's frame during these two accelerations?:
S1: Single acceleration in the simple scenario
E3: 3rd acceleration in the extended turnaround scenario
Yes, of course, these are physically identical.
The aging of A in B's non-inertial frame depends on the spatial separation of A and B. Since the separation is less for E3 than for S1, A will age less in B's frame during E3 than during S1, even if B's proper acceleration and acceleration duration are the same for E3 and S1.
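For concreteness, the quoted 6.4-year figure is what the relativity-of-simultaneity bookkeeping gives for a turnaround at large separation. The numbers below (v = 0.8c, turnaround 4 light years from A) are assumptions on my part, chosen because they reproduce that figure:

```python
# Units with c = 1 (years and light years). When B's velocity flips from +v
# to -v at separation x, the event on A's worldline that B's momentarily
# comoving frame calls "now" jumps forward by the simultaneity shift 2*v*x.
v = 0.8            # B's speed as a fraction of c (assumed)
x = 4.0            # A-B separation at the turnaround, light years (assumed)
delta_t_A = 2 * v * x
print(delta_t_A)   # 6.4 -- years of A's time "skipped" across the turnaround
```

This also makes the point above explicit: the jump scales with x, so a turnaround executed closer to A produces a smaller jump even for the same proper acceleration and duration.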
#### PeroK
The aging of A in B's non-inertial frame depends on the spatial separation of A and B. Since the separation is less for E3 than for S1, A will also age less, even if the proper acceleration and duration are the same.
You keep saying that, but what prevents B from executing the same acceleration when he reaches the same distance from A? Why does B have to be closer? And why does B have to be significantly closer? Suppose I specify the turnaround distance for B as $1m$, and concede that E3 takes place $2m$ closer to Earth than S1. I have no idea why it must, but let's accept it. These distances are negligible in the context of 4 light years. That is going to make a negligible variation to the $6.4$ years.
Please tell me why B cannot execute SHM as many times as he pleases, back and forth about the same mean distance from A. Why is SHM impossible in the twin paradox?
I only posted this idea to highlight an issue with the "acceleration causes ageing" interpretation. I didn't expect an argument on the physical feasibility of B changing direction more than once.
You must be fundamentally misunderstanding what I'm saying.
#### PeroK
The aging of A in B's non-inertial frame depends on the spatial separation of A and B. Since the separation is less for E3 than for S1, A will age less in B's frame during E3 than during S1, even if B's proper acceleration and acceleration duration are the same for E3 and S1.
Can you provide your analysis of the differential ageing assuming that A ages 6.4 years as a result of the first turnaround? What happens quantitatively if B changes direction linearly twice more? Why do subsequent changes of direction have minimal effect on the ageing of A?
Assume that any subsequent changes of direction of B take place in less than 1 day (in A's frame). Please show why no further significant ageing of A takes place, unless the overall journey itself is significantly extended (in A's frame).
#### A.T.
You keep saying that but what prevents B from executing the same acceleration when he reaches the same distance from A? Why does B have to be closer?
I explained that here:
We also assumed the same total spacetime interval (path length) for all cases, so with more periods the amplitude (maximal separation) has to be less.
#### Dale
That can't be right. The differential ageing relative to Terence can't depend on the number of changes of direction.
But it can depend on the distance between them at the turnaround.
#### PeroK
But it can depend on the distance between them at the turnaround.
What stops B making repeated changes of direction (over a relatively short time) in the vicinity of the initial turning point?
#### PeroK
I explained that here:
Obviously it's the same give or take a day or two for the various acceleration phases - as it always is for the twin paradox. It's a proper time of $6$ years (give or take an arbitrary variation for the turnaround(s)).
Obviously, if B does additional accelerations, that will take a small amount of proper time. But that cannot explain additional differential ageing of 6.4 or 6.3 years or whatever.
#### vanhees71
But for the one way example here, there is no way to avoid a synchronization assumption, because that is the sole determinant of what the start event is for the mars clock. There is only one incident of colocation. The interval beginnings are determined solely by a synchronization decision, which can be a physical procedure, thus invariant, but it is still a choice, and effectively defines a frame.
Then the problem is insufficiently defined. You have to clearly define everything in physical terms, i.e., in terms of physically defined events to begin with.
#### A.T.
Obviously it's the same give or take a day or two for the various acceleration phases - as it always is for the twin paradox. It's a proper time of $6$ years (give or take an arbitrary variation for the turnaround(s)).
Obviously, if B does additional accelerations that will take a small amount of proper time. But that cannot explain additional differential ageing of 6.4 or 6.3 years or whatever.
Where does this "additional differential ageing of 6.4 or 6.3 years" come from?
#### PeroK
Where does this "additional differential ageing of 6.4 or 6.3 years" come from?
This is getting just silly now. I've explained a simple scenario in excruciating detail and all you're doing is nitpicking the details.
I don't know what this is all about now.
#### jbriggs444
Where does this "additional differential ageing of 6.4 or 6.3 years" come from?
I've lost track. I think it was part of a reductio ad absurdum argument to the effect that one should not attribute differential aging to acceleration.
The following narrative is how I reconstruct it:
There were two claims. One was that the progress of the stay-at-home twin from the point of view of the travelling twin would always be in the forward direction. The other is that the "point of view" of the travelling twin is always accurately reflected by an instantaneously co-moving inertial frame.
If we accept the former claim then, during periods of forward acceleration (by B away from A), A's clock advances. In effect the former claim acts as a ratchet. [This claim is arguably correct -- in any valid coordinate chart, it will hold].
If we accept the latter claim then, during periods of reverse acceleration (by B toward A), A's clock advances by 6.3 or 6.4 years each time. [This claim is also arguably correct. If we look at the "time now" on A's clock in the after-acceleration frame, it will be 6.3 or 6.4 years advanced from the "time now" on B's clock in the before-acceleration frame]. I think that @PeroK proposed trip details to arrive at those numbers.
If one accepts both claims together, then one might conclude that the stay-at-home twin's elapsed time will have advanced by a total proper time equal to the number of turnarounds multiplied by 6.3 or 6.4. That conclusion is obviously false -- so something has gone wrong.
One way of looking at what went wrong is that the sequence of instantaneous tangent inertial frames do not fit together to create a valid coordinate chart covering A's world line. The first claim only holds for valid coordinate charts. The error in the analysis is pretending that the "traveler's frame" both covers A's world line and uses a synchronization convention that matches B's sequence of tangent inertial frames.
One can build an accelerated frame around B's world line and extend it to encompass A's world line. But the attribution of differential aging based on using that frame will come as much from the details of the frame as from B's acceleration profile.
#### PeterDonis
Ok, so it looks like I'm going to have to point out what @jbriggs444 predicted I would point out.
It seems logical that if the first turnaround caused A to age by 6.4 years, then so must the third change of direction.
It might seem logical, but it's not valid, because the implicit reference frame you are using is not valid. Once you have multiple turnarounds, or orbits, or whatever, the reference frame you are implicitly using to make statements like "A ages 6.4 years during the first turnaround" is not valid for such statements because it no longer validly covers A's worldline: the mapping from the frame's time coordinate to events on A's worldline is no longer one-to-one. (It is for the case of a single turnaround with no orbits, but only for that case.)
The deeper root cause of this problem is being unwilling to give up the intuition that there should be some fact of the matter about A's "rate of aging" relative to B. There isn't. That's what relativity tells us. The only invariant in the problem is the comparison of elapsed times when the twins meet again. There is no invariant that corresponds to A's "rate of aging" relative to B (or B's relative to A, for that matter). So statements like "A ages 6.4 years during the turnaround" aren't statements about physics; they're statements about some human's choice of coordinates. (And if the choice of coordinates isn't a valid coordinate chart, they're not even well-defined statements.) You can do all the physics without ever having to make such statements, so why make them at all?
#### PAllen
Then the problem is insufficiently defined. You have to clearly define everything in physical terms, i.e., in terms of physically defined events to begin with.
Well, you can use a physical procedure to define a frame. Einstein clock synchronization is a physical procedure, and if you specify two bodies with attached clocks performing it, the result of the procedure is frame independent, but at the same time, it effectively defines a frame based on those two bodies. The beginning events in a one way scenario are defined by a choice of bodies to perform this operation.
#### PeterDonis
It's about answering the question: "What does the whole process look like in the rest frame of the traveling twin?"
And if you insist on asking that question, even though, as I pointed out in my previous post just now, you can do all the physics without doing so, then you first have to construct a consistent "rest frame of the traveling twin" that covers all of A's worldline during the trip. And the frame @PeroK is implicitly using when he talks about A "getting younger" in a scenario with multiple turnarounds or orbits does not. There are multiple ways of doing so that do cover A's worldline, but none of them will have the property that "A gets younger" during any part of the trip.
#### PeterDonis
Einstein clock synchronization is a physical procedure
But it only works for a pair of bodies that are (a) in free-fall inertial motion, and (b) at rest relative to each other. That's a severe limitation.
#### metastable
The deeper root cause of this problem is being unwilling to give up the intuition that there should be some fact of the matter about A's "rate of aging" relative to B. There isn't. That's what relativity tells us. The only invariant in the problem is the comparison of elapsed times when the twins meet again. There is no invariant that corresponds to A's "rate of aging" relative to B (or B's relative to A, for that matter). So statements like "A ages 6.4 years during the turnaround" aren't statements about physics; they're statements about some human's choice of coordinates. (And if the choice of coordinates isn't a valid coordinate chart, they're not even well-defined statements.) You can do all the physics without ever having to make such statements, so why make them at all?
I’m confused on this point. If A and B are both radioactive, won’t their relative compositions differ when they meet again? Won’t this problem now affect A & B’s invariant mass in addition to their elapsed time?
#### PeterDonis
If A and B are both radioactive, won’t their relative compositions differ when they meet again?
Yes, that's a consequence of the invariant I described: the comparison of elapsed proper times. But you're adding an element to the problem that nobody in this thread was including. See below.
Doesn’t this now affect A & B’s invariant mass?
This is just quibbling. Nobody has been talking about radioactive objects, or indeed objects undergoing any kind of change. We're just talking about the twin paradox. Throwing in a complication like what will happen to radioactive substances is irrelevant to the topic of the thread. If you want to know what happens to the invariant mass of a radioactive object over time, start a separate thread.
#### PeroK
Ok, so it looks like I'm going to have to point out what @jbriggs444 predicted I would point out.
It might seem logical, but it's not valid, because the implicit reference frame you are using is not valid. Once you have multiple turnarounds, or orbits, or whatever, the reference frame you are implicitly using to make statements like "A ages 6.4 years during the first turnaround" is not valid for such statements because it no longer validly covers A's worldline: the mapping from the frame's time coordinate to events on A's worldline is no longer one-to-one. (It is for the case of a single turnaround with no orbits, but only for that case.)
The deeper root cause of this problem is being unwilling to give up the intuition that there should be some fact of the matter about A's "rate of aging" relative to B. There isn't. That's what relativity tells us. The only invariant in the problem is the comparison of elapsed times when the twins meet again. There is no invariant that corresponds to A's "rate of aging" relative to B (or B's relative to A, for that matter). So statements like "A ages 6.4 years during the turnaround" aren't statements about physics; they're statements about some human's choice of coordinates. (And if the choice of coordinates isn't a valid coordinate chart, they're not even well-defined statements.) You can do all the physics without ever having to make such statements, so why make them at all?
I thought that was my whole point. That A's rate of "ageing" (it was always in quotes in my earlier posts) relative to B is meaningless.
I still think the whole idea that "acceleration of B causes A to age" is not a valid concept. Even if you can justify it with a caveat that "it only works once". It's not an explanation for differential ageing that has any physical significance, as far as I can see.
Perhaps my argument against it overlooked deeper problems with coordinate systems. But, if B makes an elaborate interstellar journey then the differential ageing can still be simply computed by the integral of the speed in A's frame. Attempts to attribute differential ageing to acceleration and time dilation in B's frame are fundamentally flawed.
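The "integral of the speed in A's frame" prescription is easy to sketch. The trip numbers below (0.8c out to 4 light years and straight back) are assumed, matching the 6-year proper time mentioned earlier in the thread:

```python
import math

def proper_time(segments):
    """Proper time along a worldline with piecewise-constant speed.
    segments = [(duration_in_A_years, speed_as_fraction_of_c), ...]"""
    return sum(d * math.sqrt(1.0 - v * v) for d, v in segments)

# Out 4 ly at 0.8c, then back: 5 + 5 years of A's coordinate time.
tau_B = proper_time([(5.0, 0.8), (5.0, 0.8)])
# tau_B is about 6.0 years for B, versus 10.0 years elapsed for A.
# Inserting extra turnarounds changes this only through whatever extra
# duration they add in A's frame, not through the accelerations themselves.
```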
#### PAllen
But it only works for a pair of bodies that are (a) in free-fall inertial motion, and (b) at rest relative to each other. That's a severe limitation.
So what? That is exactly what is needed to specify the OP scenario to make it fully defined.
#### DaveC426913
What I was attempting to analyse was the "acceleration causes ageing" interpretation of the twin paradox. I was trying to highlight an issue with this interpretation.
Ah. Then we are in agreement.
#### Dale
What stops B making repeated changes of direction (over a relatively short time) in the vicincty of the initial turning point?
Nothing
#### Bruce Wallman
It only depends on the velocity of the traveling twin. If that person gets anywhere near the velocity of c (compared to the universe), that person will suffer from time dilation and will lose some heartbeats, etc in aging. So that person will be younger. However, the twin on earth is also traveling at a decent speed within the universe. So it needs to be that the one going to Mars has a much faster velocity relative to the universe and something significant against c. I do not think acceleration has anything to do with time dilation directly.
#### Ibix
If that person gets anywhere near the velocity of c (compared to the universe)
This is not correct. In a standard twin paradox, where one twin is inertial and one twin travels out-and-back then it's the speed of the traveller relative to the inertial observer that matters. In the "one-way" version under discussion here there is no unique answer.
"Speed compared to the universe" is not a well-defined concept.
#### Mister T
If that person gets anywhere near the velocity of c (compared to the universe), that person will suffer from time dilation and will lose some heartbeats, etc in aging.
All that matters is the relative speed of the twins. And the speed need not be anywhere near $c$. Modern clocks are precise enough to see the effect when the speed is a very tiny fraction of $c$.
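To put a rough number on "very tiny fraction of $c$": at an assumed airliner-scale speed of 250 m/s (an example of mine, not from the post), the leading-order kinematic dilation is already comfortably within reach of modern atomic clocks:

```python
c = 299_792_458.0                     # speed of light, m/s
v = 250.0                             # assumed airliner-scale speed, m/s
gamma_minus_1 = 0.5 * (v / c) ** 2    # leading-order expansion of gamma - 1 for v << c
ns_per_day = gamma_minus_1 * 86_400 * 1e9
# roughly 30 ns of accumulated dilation per day, easily resolvable today
```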
# Geodesic for Electromagnetic forces
Electrons tend to take the maximum-conductance path when flowing from A to B; this is justified by saying that $\vec{E}$ is larger in conductors. But gravitation was once thought of similarly: if gravity was stronger in some region, a mass more likely took that path; later it was found that the mass actually follows a geodesic in spacetime, since gravity curves spacetime. So is there some underlying geodesic for motion caused by the electromagnetic force?
• This has the makings of a good question I think, but what does "but once similarly it was though for gravitation" mean? – joshphysics Jan 18 '14 at 0:01
• To have a geodesic, you need to first define a metric (because a geodesic is defined as a critical path on a manifold with a metric). But the metric is defined by the spacetime. The variation of the spacetime metric, a symmetric rank-2 tensor, is the graviton. So how can one talk about only the EM metric or EM geodesic without considering the spacetime metric? – wonderich Jan 18 '14 at 4:10
• Is there a notion of EM metric? – wonderich Jan 18 '14 at 4:12
• since user37569 who offered a link has departed and the answer deleted by Community, I am copying it as a comment: " Estakhr's Material-Geodesic Equation, meetings.aps.org/Meeting/DFD13/Session/R8.4 , which is Unification between Lorentz Force and Einstein Geodesic Equation. – anna v Jan 20 '14 at 6:04
One way to formulate the equations of motion of a charged particle as a geodesic equation is through the Kaluza–Klein theory. In it we add additional dimension (just one, if we are only interested in the electromagnetism) and write the 5D metric $$dS^2 = ds^2 + \epsilon \Phi^{2}(dx^{4} + A_{\mu}dx^{\mu})^2,$$ where $ds^{2} = g_{\mu \nu} dx^{\mu} dx^{\nu}$ is the 4D (curved) metric, $\epsilon=+1$ or $-1$ is a sign choice for either space-like or time-like dimension, $A_\mu$ is identified with the 4-potential of electromagnetic field and $\Phi$ is an additional scalar field . The geodesic equation written in this 5D metric is: \begin{multline} \frac{d^2 x^{\mu}}{d{\cal S}^2}+ {\Gamma}^{\mu}_{\alpha \beta}\frac{dx^{\alpha}}{d{\cal S}}\frac{dx^{\beta}}{d{\cal S}}= n F^{\mu}_{\;\;\nu}\frac{dx^{\nu}}{d{\cal S}}+ \epsilon n^2 \frac{\Phi^{;\mu}}{\Phi^{3}} - A^{\mu}\frac{dn}{d{\cal S}}-\\- g^{\mu\lambda}\frac{dx^4}{d{\cal S}}\left(n \frac{\partial{A_{\lambda}}}{\partial{x^4}}+\frac{\partial{g_{\lambda\nu}}}{\partial{x^4}}\frac{dx^{\nu}}{d{\cal S}}\right), \end{multline} and the same rewritten so that particle motion is parametrized through 4D proper interval $s$, rather than 5D $S$: \begin{multline} \frac{d^2 x^{\mu}}{ds^2}+{\Gamma}^{\mu}_{\alpha \beta}\frac{dx^{\alpha}}{ds}\frac{dx^{\beta}}{ds}=\\= \frac{n}{(1-\epsilon{n^2}/{\Phi^2})^{1/2}}\left[ F^{\mu}_{\;\;\nu}\frac{dx^{\nu}}{ds} - \frac{A^{\mu}}{n}\frac{dn}{ds}- g^{\mu\lambda}\frac{\partial{A_{\lambda}}}{\partial{x^4}}\frac{dx^4}{ds} \right]+ \\ + \frac{\epsilon n^2}{(1-\epsilon n^2/\Phi^2)\Phi^3}\left[\Phi^{;\mu} + \left(\frac{\Phi}{n}\frac{dn}{ds}- \frac{d\Phi}{ds}\right)\frac{dx^{\mu}}{ds}\right]-\\-g^{\mu\lambda}\frac{\partial{g_{\lambda\nu}}}{\partial{x^4}}\frac{dx^{\nu}}{ds}\frac{dx^4}{ds}. 
\end{multline} Here the $F_{\mu\nu}$ tensor is the usual EM strength 4-tensor: $$F_{\mu\nu} = A_{\nu,\mu}-A_{\mu,\nu},$$ and $n$ is the (covariant) 4-speed component along the additional dimension: $$n =u_4 = \epsilon {\Phi}^2\left(\frac{dx^4}{d{\cal S}} + A_{\mu}\frac{dx^{\mu}}{d{\cal S}}\right).$$ These equations are taken from the paper:
Ponce de Leon, J. (2002). Equations of Motion in Kaluza-Klein Gravity Reexamined. Gravitation and Cosmology, 8, 272-284. arXiv:gr-qc/0104008.
which in turn refers to the book:
Wesson, P. S. (2007). Space-time-matter: modern higher-dimensional cosmology (Vol. 3). World Scientific google books.
We see in these equations many new terms absent in the equations of motion of a charge in a 4D curved spacetime. To eliminate these terms we impose constraints on the 5D metric by requiring independence of all metric components of the $x^4$ coordinate, and assuming the scalar $\Phi$ is simply constant. Then the quantity $n$ is an integral of motion and the geodesic equation now looks like this: $$\frac{d^2 x^{\mu}}{ds^2}+{\Gamma}^{\mu}_{\alpha \beta}\frac{dx^{\alpha}}{ds}\frac{dx^{\beta}}{ds}= \frac{n}{\left(1-\epsilon{n^2}/{\Phi^2}\right)^{1/2}}\left[F^{\mu}_{\;\;\nu}\frac{dx^{\nu}}{ds} \right],$$ which is exactly the equation of motion for a charge in curved spacetime in the presence of an EM field, with the (now) constant factor $n(1-\epsilon{n^2}/{\Phi^2})^{-1/2}$ playing the role of the charge-to-mass ratio $e/m$.
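As a numerical sanity check of the reduced equation (my own toy construction, not taken from the cited papers): in flat spacetime with a uniform electric field along $x$, the right-hand side is the usual Lorentz 4-force, and an initially resting charge should follow hyperbolic motion, $u^t = \cosh(k s)$, $u^x = \sinh(k s)$ with $k = eE/m$.

```python
# Flat-space limit of the reduced geodesic equation: du^mu/ds = (e/m) F^mu_nu u^nu.
# With a uniform E field along x and c = 1: du^t/ds = k u^x, du^x/ds = k u^t.
def evolve(k, s_total, steps):
    ds = s_total / steps
    u_t, u_x = 1.0, 0.0                                      # charge initially at rest
    for _ in range(steps):
        u_t, u_x = u_t + k * u_x * ds, u_x + k * u_t * ds    # forward Euler step
    return u_t, u_x

u_t, u_x = evolve(k=1.0, s_total=2.0, steps=200_000)
# should approach the exact hyperbolic motion u^t = cosh(2), u^x = sinh(2)
```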
I have left out numerous questions arising from this simple treatment, for them you should look into the relevant books and papers, but for the purpose of casting equations of motion of a test charge as a geodesic equations the answers to them are not needed.
I believe so, that is, I believe there is a mathematical formulation for electromagnetic geodesics. But to me, it takes a jump all the way back to the Equivalence Principle (EP). In the gravitational EP, F = m*a equals Newton’s F = G*m1*m2/r^2 (inertial forces equal gravitational forces). Bohr used an electronic extension of this principle in his simple nonrelativistic hydrogen model, where he set F = m*a equal to Coulomb’s “electronic universal law” F = (1/4*pi*eps)*e1*e2/r^2. Note how these two universal laws, electronic and gravitational, are of essentially the same form. Many scientists have noted this, but I think it has significance. The ultimate generalization of Newton’s gravity is Einstein’s GR, with its defined geodesics, based on the EP. The limit in this generalized gravitational theory is gravitational Newtonian mechanics. It is therefore possible to construct a type of non-Euclidean field theory which has Coulomb’s “electronic universal law” as its limit, once the EP has been extended to incorporate electronic forces.
Consider (perhaps) the “ultimate” differential-geometry-based mass/charge field theory in GR today, that is, consider the charged Kerr-Newman field theory. The physical characteristics of the central body are mass, charge and spin. Consider a small “test body” coasting along a geodesic in this charged Kerr-Newman field. The proton in hydrogen has a mass, a charge, and a spin (the three physical parameters needed to completely define a charged Kerr-Newman field). The electron in hydrogen, then, according to GR, coasts in a charged Kerr-Newman field (or “jitters about,” according to QM, that’s ok, GR still says the proton “generates” a charged Kerr-Newman field). For the particular values of the proton’s rest mass, charge and spin, its generated charged Kerr-Newman field is pathetically weak in curvature. But here is an important fact: even though weak, it is not zero. According to charged Kerr-Newman GR, the spacetime within which the electron “moves” in hydrogen is not theoretically Minkowski, as assumed in all of QM. It is at least partially “moving” along an electromagnetic charged Kerr-Newman geodesic.
• The equations user23660 presented can be made specific for a central body containing mass, charge and angular momentum (spin). An important set of EM geodesics of interest (to me, and I hope to others) are the geodesics around central bodies possessing mass, charge and spin. The metric structure obtained by imposing static spherical symmetry and asymptotic flatness to the central body's exterior gravitoelectronic field (no magnetism) was first worked out by Reissner and Nordstrom by the 1920s (see p. 158 of Wald’s “General Relativity”). They follow from user23660’s equations for central bodies – user37024 Jan 19 '14 at 5:29
# On a fallacy that people often commit to accuse the police of racism
After the death of Laquan McDonald (which for what it’s worth strikes me as being murder pure and simple, but this has no bearing on what I’m going to be talking about), the Mayor of Chicago created a task force, with the job of scrutinizing the practices of the city’s police department. The task force released its report a few months ago and, brace yourself, it concluded that the Chicago Police Department was plagued with racism. The media, both local and national, from the Chicago Tribune to the New York Times, reported that conclusion uncritically. But I have read the relevant section of the report and, of course, it doesn’t demonstrate anything of the sort. Like every report of that sort, it has absolutely no scientific worth, but is just a political document whose only purpose is to placate the populace and prevent riots.
I want to focus on a particular fallacy that the authors of that report committed, because it’s one of the most common fallacies that people use to conclude that racism is rampant in law enforcement. (For instance, it can also be found in the Department of Justice’s report on Ferguson Police Department, which was released after the death of Michael Brown.) Moreover, I think it’s also interesting from a purely logical/statistical point of view, if you’re into that kind of things. Finally, it illustrates very nicely how the lack of political diversity in social science can result in bad consequences, such as systematic error.
The argument is nicely summarized in this article, which was published in the New York Times in 2001, showing that it’s a fallacy with a long history:
It is no longer news that racial profiling occurs; study after study over the past five years has confirmed that police disproportionately stop and search minorities. What is news, but has received virtually no attention, is that the studies also show that even on its own terms, racial profiling doesn’t work.
Those who defend the police argue that racial and ethnic disparities reflect not discrimination but higher rates of offenses among minorities. Nationwide, blacks are 13 times more likely to be sent to state prisons for drug convictions than are whites, so it would seem rational for police to assume that all other things being equal, a black driver is more likely than a white driver to be carrying drugs.
But the racial profiling studies uniformly show that this widely shared assumption is false. Police stops yield no significant difference in so-called hit rates — percentages of searches that find evidence of lawbreaking — for minorities and whites. If blacks are carrying drugs more often than whites, police should find drugs on the blacks they stop more often than on the whites they stop. But they don’t.
So, to be clear, the argument roughly goes as follows:
(1) When you look at the hit rate (i.e. the proportion of people stopped who carried contraband) for whites, you find that it’s higher than for blacks. (The article I just quoted only says that it was not lower, which is a weaker claim, but the stronger claim is made in the reports I mentioned above.)
(2) Therefore, the offending rate for blacks is lower than for whites or, at least, is not higher.
(3) Yet black people are disproportionately stopped by the police.
(4) So, by inference to the best explanation, the police are racist.
I know that sounds pretty convincing, but the inference from (1) to (2) is actually a statistical fallacy, as I’m now going to explain.
The problem is that, for (2) to follow from (1), a necessary condition is that people are stopped randomly by the police or, more generally, that the sampling methods used by cops to stop blacks and whites, together with facts about the structure of each group, make $D = H_w - H_b$ an unbiased estimator of $d = r_w - r_b$ where $H_w$ is the hit rate for whites, $H_b$ the hit rate for blacks, $r_w$ the offending rate for whites and $r_b$ the offending rate for blacks. Let’s call that hidden premise (1*), so I can refer to it quickly later. What (1*) means, roughly, is that if you looked at $N$ samples of people stopped by the police and recorded the hit rate for blacks and whites in each case, the mean of $D$ over those $N$ samples would tend to $d$ as $N$ increases.
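To make the role of (1*) concrete, here is a small Monte Carlo sketch in Python. All of the numbers in it (the offending rates, the stop counts, the fivefold suspicion "boost" for whites) are made up for illustration; the point is only that the mean of $D$ tracks $d$ when both groups are stopped at random, and overshoots it (here it even flips sign) when whites are stopped selectively.

```python
import random

random.seed(0)

R_W, R_B = 0.05, 0.10  # hypothetical offending rates: whites 5%, blacks 10%
d = R_W - R_B          # true difference r_w - r_b = -0.05

def hit_rate(stops):
    """Proportion of stopped people found carrying contraband."""
    return sum(stops) / len(stops)

def sample_random(rate, n):
    """Stop n people uniformly at random from a group offending at `rate`."""
    return [random.random() < rate for _ in range(n)]

def sample_selective(rate, n, boost):
    """Stop only suspicious-looking people: the stopped subset offends at
    `boost` times the group's base rate (capped at 1)."""
    p = min(1.0, rate * boost)
    return [random.random() < p for _ in range(n)]

trials = 2000
# Scenario A: both groups stopped at random, so D is an unbiased estimator of d.
D_random = [hit_rate(sample_random(R_W, 200)) - hit_rate(sample_random(R_B, 200))
            for _ in range(trials)]
# Scenario B: whites stopped only on strong suspicion (their hit rate is
# inflated 5x), blacks stopped at random, so D systematically overestimates d.
D_selective = [hit_rate(sample_selective(R_W, 200, 5)) - hit_rate(sample_random(R_B, 200))
               for _ in range(trials)]

print(f"true d = {d:+.3f}")                                          # -0.050
print(f"mean D, random stops:    {sum(D_random) / trials:+.3f}")     # close to -0.050
print(f"mean D, selective stops: {sum(D_selective) / trials:+.3f}")  # positive: sign flipped
```

Nothing here depends on the particular numbers: any sampling rule that makes the stopped whites unrepresentative of whites at large breaks the inference from hit rates to offending rates.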
So, in order to be able to infer (2), it is not enough that (1) be true, it must also be the case that (1*) is true. The problem is that, not only do we have no reason to believe that (1*) is true, but on the contrary there is absolutely no doubt that it’s false. Indeed, nobody questions that cops are biased against blacks, in the neutral sense that they stop blacks disproportionately. That’s basically what (3) says in the argument, which everyone agrees is true. The question is whether that bias results from a true belief that the offending rate is higher among blacks than among whites or, as (4) asserts, from prejudice on the part of cops. (I’m using “racist” to mean something like “discriminate against black people because they are prejudiced”. This may not be a good definition of “racism”, but that’s irrelevant to the point I’m trying to make, for the argument I’m attacking clearly purports to show that cops stop black people more often not because they actually commit more crimes but rather because they are prejudiced against them.) But, whether or not (4) is true, as long as (3) is true, (1*) is probably false. Indeed, as long as (3) is true, it’s likely that $D$ overestimates $d$, quite possibly to the point that $D$ comes out as positive even though $d$ is actually negative.
Just think about it for a moment: if cops believe — rightly or wrongly, it doesn’t matter here — that blacks are significantly more likely than whites to carry contraband or engage in illegal activities, they are only going to stop whites that have “I’m a criminal” written on their face, whereas in the case of blacks they’re going to cast a much wider net. What this means is that the sample of blacks is going to be much more random than the sample of whites, which is going to contain a lot more guilty people than in the white population at large. So, as long as (3) is true, it’s entirely unsurprising that the hit rate for whites is higher than the hit rate for blacks, even if the offending rate is actually much higher for blacks than for whites. Which is why (4) doesn’t follow from (1) and (3).
It may be helpful to consider a silly example to illustrate the problem. Suppose that, in some weird country, cops are stopping people in the street to see if they have coins in their pockets, because in that country it’s illegal to own coins. The cops stop people who have a beard more often than people who don’t because, for some reason, they think people who have a beard are more likely to have coins in their pockets than people who don’t have a beard. Suppose that, in fact, the cops are right, for 10% of people who have a beard have coins in their pockets while only 5% of people who don’t have a beard do. Suppose, moreover, that among people who don’t have a beard, they only stop people who have a red shirt, while they stop people who have a beard randomly. Perhaps it’s because, for whatever reason, they are convinced that, among people who don’t have a beard, people with a red shirt are significantly more likely to have coins in their pockets. As it turns out, the cops are right again, for although only 5% of people who don’t have a beard have coins in their pockets, 15% of those who don’t have a beard and wear a red shirt do.
A sociologist observes them and notices that, not only do they stop people with a beard more often, but among the people they stop, those who have a beard are less likely to have coins in their pockets than people who don’t have a beard. Indeed, among the people they stop, only 10% of the people who have a beard have coins in their pockets, while 15% of those who don’t have a beard do. He infers that people who have a beard are less likely to have coins in their pockets and, since the cops nevertheless stop them more often than people who don’t have a beard, he concludes that they are prejudiced against people who have a beard and that’s why they stop them more often than people who don’t. Except that, as we have seen, he is wrong. People who have a beard are more likely to have coins in their pockets and, if cops stop them more often, it’s because they know it. The sociologist just committed the fallacy I explained above.
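The arithmetic of this toy example is easy to check. The short Python snippet below just plugs in the rates stipulated above and compares the true difference in coin-carrying rates with the difference in observed hit rates:

```python
# Rates stipulated in the example above.
coins_bearded = 0.10         # 10% of bearded people carry coins
coins_clean = 0.05           # 5% of non-bearded people carry coins...
coins_clean_redshirt = 0.15  # ...but 15% of non-bearded people in red shirts do

# Stopping policy: bearded people are stopped at random, so their observed hit
# rate equals the group's true rate; non-bearded people are stopped only when
# they wear a red shirt, so their observed hit rate is the red-shirt rate.
hit_bearded = coins_bearded
hit_clean = coins_clean_redshirt

true_diff = coins_bearded - coins_clean  # +0.05: bearded people carry coins MORE often
observed_diff = hit_bearded - hit_clean  # -0.05: the hit rates suggest the opposite

print(f"true difference:     {true_diff:+.2f}")
print(f"observed difference: {observed_diff:+.2f}")
```

The sociologist reads off the sign of `observed_diff` and gets the true sign exactly backwards, which is the fallacy in miniature.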
Now just pause to think about what this means exactly. It’s not just that the argument is fallacious, it’s fallacious in about the worst way an argument can be fallacious. Indeed, (4) only follows from (3) if (2) is true, but (2) only follows from (1) if (1*) is true. However, as we have just seen, if (3) is true then (1*) is probably not true. Thus, two of the argument’s premises — (1*) and (3) — are incompatible, which makes it a particularly defective argument, to say the least… Moreover, since I’m using “racist” in (4) to mean something like “have bias and that bias results from prejudice”, (4) entails (3). So the problem with the argument is not just that it contains a hidden premise that happens to be false, it’s that if the conclusion of the argument is true, then the argument is not valid and cannot possibly establish the conclusion! And that’s what people who use that argument have not realized: even if cops were prejudiced in that sense, you couldn’t show it by using that argument, for the reason I just pointed out.
Before I stop writing, I want to make two remarks. First, how come that argument is so common even on the part of professional social scientists, given that it’s guilty of a relatively basic statistical fallacy? Of the two authors of the NYT article, at least one of them — John Lamberth, who is a psychologist — surely knows enough statistics to understand the problem with that argument, yet he committed that fallacy all the same and, since I don’t think he is dishonest, I think it’s very likely that he really didn’t notice that his argument was fallacious. Moreover, I’m sure that I’m not the first person to notice the problem with that argument, yet I have never seen anyone point it out. Anyone who has taken an introductory class on statistics knows enough to detect that fallacy, but even people who should know better don’t notice it or, if they do, they don’t say anything.
A pretty natural explanation is that liberals are biased in favor of the hypothesis that cops are racist and the vast majority of social scientists are liberal, so either they are blind to the problem with their argument or, if they notice it (as I’m sure that some have), they don’t say anything because they’re afraid that they would be punished for it. And the truth is that they are right to be afraid, because they would be punished. The fact that this terrible argument is used over and over, even by people who should clearly know better, is a very good illustration of the damage that ideological uniformity can do. If there were more conservatives in the social sciences, enough people would have pointed out the fallacious character of that argument for people to stop using it, because they would have noticed the problem and they wouldn’t have been afraid to talk about it.
Of course, this goes beyond the scientific community, it’s a widespread problem in the culture at large. Race is such a taboo in the US, even more so than in Europe, that it’s absolutely impossible to have an honest conversation about any aspect of the racial problem in this country. And liberals are largely to blame for that state of affairs. You sure as hell aren’t going to solve any problem by burying your head in the sand and vilifying anyone who says inconvenient truths, even when by your own admission you would be incapable of showing that they are wrong. Liberals are prone to talk about the dangers of having an honest conversation about race. But they also completely ignore the dangers of not having an honest conversation about race. I have never seen a convincing argument that the former outweigh the latter. But I guess that’s a topic for another post…
The last remark I wanted to make is that, of course, even if blacks are in fact more likely to commit crimes (which they most certainly are, but that’s also a topic for yet another post), it doesn’t follow that racial profiling is morally justified. My own view of the question is that it’s a very complicated issue, much more than liberals generally assume (because they never consider the costs of abandoning every kind of racial profiling), but also one which I think is kind of pointless to discuss. Indeed, the fact of the matter is that, as long as blacks commit significantly more crimes on average than whites, the police will engage in some kind of statistical discrimination against them. So, if you want to insist that racial profiling is unambiguously wrong, I’m happy to grant you that because it has little to no bearing on any practical issue. I think a much more interesting question is: how do we make sure that blacks no longer commit more crime on average than whites? I think progressives have potentially a lot of things to say about that and, to be fair, some of them do. But, in order to ask that question and find a solution, you must first be able to have an honest discussion about the facts, which isn’t going to happen as long as liberals keep treating dissent like the Spanish Inquisition.
EDIT: I wrote a follow-up on this post in which I clarify a few points, so you may want to read it before you comment.
ANOTHER EDIT: It’s interesting that John Lamberth, one of the authors of the article I quote, claimed back in the 1990s that the state police in New Jersey was racist because it was stopping black drivers for speeding at a disproportionate rate. However, when a more rigorous study was conducted a few years later, it showed that black drivers were more likely to speed and that this explained why they were pulled over more often. The study in question was actually commissioned by the state’s Justice Department, which hoped it would bolster its case against the state police. When it turned out that the opposite was true, a lawyer with the Justice Department’s special litigation section tried to block the release of the report, which is another proof that white supremacy is alive and well in the US…
## 19 thoughts
1. newt0311 says:
Say every police officer stops N people/day regardless of their race. Assume further that the police department allocates officers to neighborhoods based on the actual crime-rate in that neighborhood.
This has nearly the same effect as racial profiling because neighborhoods are often segregated by race (for a variety of usually non-nefarious reasons) and race is correlated with crime. The situation gets “worse” once one accounts for the officer’s behavior: if they know they’ve been assigned to a more crime-prone area they’re probably going to be more proactive/vigilant and make more stops. Or at least I personally hope so.
IIRC this is pretty common in quite a few major cities.
I think anybody who claims that racial profiling is “immoral” needs to explain how to distinguish the case above or describe exactly at which point the above goes from being basic common sense to “racist & evil.”
1. Well, it seems likely to me that direct racial profiling is less efficient than neighborhood profiling although, as far as the racial disparity in the probability of being stopped is concerned, they may be roughly equivalent. So, if I’m right about that, someone could use that fact to argue that direct racial profiling is not okay but neighborhood profiling is. Liberals often condemn even neighborhood profiling, on the ground that it’s a kind of racial profiling, which in a way it is. I think it’s that kind of thing they have in mind when they talk about institutional racism in law enforcement.
To be clear, I don’t think it’s obvious that either kind of racial profiling is morally permissible, let alone obligatory. But I definitely agree with you that it’s also not obvious that it’s not morally permissible or even obligatory. In particular, neighborhood profiling is exactly the kind of thing I had in mind when I wrote that it wasn’t clear to me that every kind of racial profiling was unacceptable, because it should be clear to anyone who thinks about it that whether that kind of racial profiling is morally permissible is a non-trivial question.
2. Frog says:
“Liberals are prone to talk about the dangers of having a honest conversation about race. But they also completely ignore the dangers of not having a honest conversation about race. I have never seen a convincing argument that the former outweigh the latter.”
Conservatives never wait for the approval of liberals before doing other things that they want to do. So why do so in this area? If they want to have what they consider to be honest conversations about race, then they can surely do that, regardless of what liberals think of it. If liberals call conservatives racist, and they are not being so, then they can stand up and say that, or yell it, as the Right Wing TV and video pundits do.
1. I think it’s a bit naive to say that. Nowadays, the accusation of racism carries such a stigma that, if people can’t say certain things without being accused of racism, it inevitably stifles debate. Even on conservative networks, there are things people won’t say about race, because it’s dangerous. This is particularly true in some environments, such as academia, where I can guarantee you that saying the kind of things I say puts you in a very tight spot. I also think it’s problematic if conservatives can only talk about that kind of things between themselves. This should be the kind of things people can discuss rationally, especially in academia, where people are supposed to study the facts without having to fear they will face repercussions for saying things they have good reasons to believe to be true.
3. kevin says:
I don’t think point 2 follows from point 1 of your argument.
I wouldn’t expect, “When you look at the hit rate (i.e., the proportion of people stopped who carried contraband) for whites, you find that it’s higher than for blacks.” to imply that “Therefore, the offending rate for blacks is lower than for whites or, at least, is not higher”.
What it implies is that the police force is being inefficient with its stops (and I would call that racism). If pulling over a white person yields a higher chance of finding contraband they should pull over more whites and/or fewer black people until the hit rate is the same. This is completely consistent even if the offending rate for blacks is higher. In that case they should pull over more blacks but not past the point where the marginal black person pulled over yields a lower offending rate than the marginal white person pulled over.
1. Well, this is not my argument and I agree with you that 2 doesn’t follow from 1, since it’s precisely the inference I’m rejecting in my post 🙂 However, despite what you say, it doesn’t even follow from 1 that cops are being inefficient, although depending on the details of the case it could arguably make it likely.
Your argument implicitly assumes that, if the hit rate for whites is higher than the hit rate for blacks, then the offending rate for the whites that were not stopped is higher than the hit rate for blacks would be if cops chose to arrest less blacks and more whites. But this is hardly obvious, because it depends on a lot of factors, so I don’t think we can just assume that.
Indeed, suppose that, as I was suggesting in my post, cops only stop a white person when he is a known criminal, whereas they not only stop every black person who is a known criminal but also randomly stop 100 blacks who aren’t known criminals. Suppose, moreover, that among blacks who aren’t known criminals, the offending rate is 10%, while it’s only 5% among whites who aren’t known criminals.
In that case, if cops reduce the number of blacks while increasing the number of whites they stop by only stopping blacks who are known criminals and, in addition to stopping every white who is a known criminal, randomly stopping 100 whites who aren’t known criminals, they will catch fewer people who have something illegal on them.
Yet this scenario is consistent with the assumption that, given how I have assumed that cops decide to stop people, the hit rate for whites was initially higher than for blacks. It’s even possible that, by changing their procedure to decide who to stop in the manner I have just described, cops will make the hit rate for blacks equal to the hit rate for whites. You just have to make the right assumptions about the number of known criminals that drive in front of them, the race-specific offending rates among them and the racial composition of that group.
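For concreteness, here is that scenario with made-up numbers filled in. Only the 10%/5% offending rates come from the paragraph above; the count of known criminals and their 80% carry rate are my own assumptions, chosen just to make the arithmetic visible.

```python
# Hypothetical parameters (only the 10%/5% rates come from the scenario above).
known_each = 50                # known criminals of each race who pass the cops
p_known = 0.80                 # chance a known criminal carries contraband
p_black, p_white = 0.10, 0.05  # offending rates among non-known-criminals
n_random = 100                 # additional random stops

# Policy 1 (the scenario above): the 100 random stops go to blacks.
catches_1 = 2 * known_each * p_known + n_random * p_black
hit_white_1 = p_known
hit_black_1 = (known_each * p_known + n_random * p_black) / (known_each + n_random)

# Policy 2: the 100 random stops go to whites instead.
catches_2 = 2 * known_each * p_known + n_random * p_white

print(f"expected catches, random stops on blacks: {catches_1:.0f}")  # 90
print(f"expected catches, random stops on whites: {catches_2:.0f}")  # 85
print(f"hit rates under policy 1: whites {hit_white_1:.2f}, blacks {hit_black_1:.2f}")
```

Under policy 1 the hit rate for whites (0.80) far exceeds the hit rate for blacks (0.33), exactly as in premise (1), yet moving the random stops to whites catches fewer people.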
Of course, even in that case, you can imagine cops stopping less blacks and more whites in a way that increases the number of people caught with something illegal on them. (Trivially, if cops had stopped every black person they actually stopped who had something illegal on him but none of the black people they stopped who did not, it would have been the case.) But it’s not obvious that cops could actually do that, because the necessary conditions for this to be possible in practice may not be satisfied.
Again, to be clear, I’m not denying that if the hit rates for whites is higher than for blacks even though cops stop more blacks than whites, it could be that cops are being inefficient. (I’m sure there are even cases in which, from the fact that the hit rates for whites is higher than for blacks even though cops stop more blacks than whites, it’s reasonable to infer that cops probably are inefficient.) But whether that is the case depends on a lot of factors, such as the characteristics of blacks and whites in the area, how the observable characteristics the police can use to decide who to stop correlate with criminal behavior in each racial group, etc.
In a way, that cops are being inefficient is trivial, because nobody can reasonably believe that cops are perfect maximizers. The interesting question is whether they are inefficient in a way that makes them racist, but I don’t think we can infer this from the fact that the hit rate for whites is higher than for blacks, even if cops stop more blacks than whites. (Again, there are probably cases in which it’s possible to make that inference, but we need more information before we can make it.)
1. kevin says:
I don’t disagree with your explanation. However, I find it a bit suspect that to demonstrate you use a step function where a white person is either a known criminal or they’re not, and that cops can somehow tell which group that person falls into before pulling them over. I assume cops judge the likelihood that someone is a criminal along a smooth continuous function between 0 and 100% and pull over anyone they judge above x percent. In that case, the fact that whites have a higher hit rate when pulled over means the cops are judging incorrectly, which I call racism. Granted it may be a poor assumption that a cop’s perceived likelihood of being a criminal is smooth and continuous. If it is instead a step function as you describe I agree.
1. Of course, the model I used to illustrate my point is unrealistic, but I think yours is even less realistic. However, I don’t want to press that point and spend time arguing about it, because the point I was really trying to make is that we just can’t make the inference you were suggesting without making a lot of non-obvious assumptions, since whether that inference is reliable will depend on a lot of things besides the race-specific hit rates and the number of people of each race that are being stopped. And, while I’m sure we can disagree about what assumptions we should make exactly, that much is definitely true.
To tell you the truth, when I started writing this post, I had the same intuition as you and intended to conclude my post by making the point you made in your comment. But then I thought about it for a minute and realized that I just couldn’t make that inference because it rests on a lot of non-obvious assumptions. I suspect that, if we have that intuition, it’s because of a familiarity with maximization problems in economics. But I don’t think we can assume that, in this case, the function we are dealing with has the same properties as the kind of functions economists deal with.
1. candid_observer says:
Racial profiling often presents a moral dilemma.
The underlying problem is that race can, in many epistemic contexts, provide key information as to the level of risk of some outcome. Jesse Jackson, rather famously, made this remark:
“There is nothing more painful to me at this stage in my life than to walk down the street and hear footsteps and start thinking about robbery. Then look around and see somebody white and feel relieved…”
This is a pretty good example of how, given the exact informational context Jackson describes, the predictive value of race is critical as to what one may do (e.g., continue down the street vs. try to find a store one can duck into).
Obviously, if one had perfect information about the person walking behind, race would be irrelevant — but one virtually never has perfect information in making choices.
I don’t think any reasonable person would hold Jackson, or anyone else, at fault if they chose to take evasive behavior if the person behind was a young black male, and not do so if it were a young white male. Yet this certainly would seem to fit any definition of racial profiling.
When it comes to officers of the law, being agents of the government, we have higher expectations as to how and when they are allowed to use race in their decisions. Agents of the government are generally expected to ignore race, in the interests of fairness, even if race provides strong evidence of some outcome.
The trick is to find some balance between the unfairness of treating an individual on the basis of race and the risk of not preventing a negative outcome because one has ignored race.
2. kevin says:
Appreciate the feedback. Perhaps I am overthinking it from an economic perspective. It’s an interesting thought experiment regardless.
Enjoy your blog, love seeing alternative viewpoints that don’t get covered elsewhere!
3. Thanks for the nice words about my blog and also for your interesting comment. It forced me to explain something which I think is important. After I realized my initial intuition was misguided, I actually considered explaining in my post what I said in my reply to you, but I was too lazy to do it :-p
4. candid_observer says:
Because, statistically, blacks offend at a much higher rate than whites, it gives rise to any number of seemingly plausible, but quite fallacious, arguments that racial discrimination must be operating in various criminal justice contexts.
Fairly recently it was claimed that an algorithm designed to predict recidivism, and which did not use race as a predictive variable, must nonetheless be discriminatory, because more blacks than whites were “labeled higher risk, but didn’t reoffend.” But that claim clearly hung on a fallacy, as the following article makes clear:
http://www.theamericanconservative.com/articles/does-pre-crime-have-a-race-problem/
1. Thanks for the link, I’ll read this later. I suspect it has to do with the disparate impact doctrine, which indeed often leads to absurd results.
5. EH says:
You keep saying that it is a fact that search hit rates are higher for Whites than Blacks, but that isn’t what even the politicized report you quoted said, which was: “Police stops yield no significant difference in so-called hit rates — percentages of searches that find evidence of lawbreaking — for minorities and whites.”
You show that even if the White hit rate were higher, that still wouldn’t necessarily mean Blacks were discriminated against, which goes beyond a “steel-man” argument (the opposite of a straw man: arguing against the best form of your opponent’s thesis) to one that shows that even assuming a falsehood that would be most supportive of your opponents’ thesis, it still fails. The fact that the hit rates are indistinguishable even though a higher percentage of Blacks are searched shows directly, without any more complicated argument needed, that Blacks are not being discriminated against in being selected for searches and that they are more likely to carry contraband.
1. The quote is from a NYT article, where it’s true that the authors make a weaker claim, but you can find the stronger claim in the reports I mentioned before that. I added a note in the post to make that clear, thanks for pointing that out.
6. Gabdydancer says:
“But the racial profiling studies uniformly show that this widely shared assumption is false. Police stops yield no significant difference in so-called hit rates — percentages of searches that find evidence of lawbreaking — for minorities and whites. If blacks are carrying drugs more often than whites, police should find drugs on the blacks they stop more often than on the whites they stop. But they don’t.
So, to be clear, the argument roughly goes as follows:
(1) When you look at the hit rate (i.e., the proportion of people stopped who carried contraband) for whites, you find that it’s higher than for blacks.”
The easiest way to demonstrate that the claim is nonsensical is to note that if the black population were oversampled in the way claimed (more blacks were being stopped on less evidence, and assuming the evidence accurately predicts the hit rate) then the hit rate for blacks would be lower than for whites.
If the hit rates for blacks were actually lower than that for whites… well the following is simply nonsense: “if cops believe — rightly or wrongly, it doesn’t matter here — that blacks are significantly more likely than whites to carry contraband or engage in illegal activities, they are only going to stop whites that have “I’m a criminal” written on their face, whereas in the case of blacks they’re going to cast a much wider net.” The “nonsense” part is the claim that it doesn’t matter whether the cops are wrong. If they are wrong then the action of casting a wider net among blacks is racist and cannot be justified.
I’m guessing that you are alluding to the same point that EH made in his comment above. As I explained to him, it’s true that the authors of the article I quote made a weaker claim, but the reports I mentioned before that make the stronger claim.
The easiest way to demonstrate that the claim is nonsensical is to note that if the black population were oversampled in the way claimed (more blacks were being stopped on less evidence, and assuming the evidence accurately predicts the hit rate) then the hit rate for blacks would be lower than for whites.
Sure, but I wanted to show that, even if the hit rate for whites is higher (which it sometimes is), the conclusion still doesn’t follow.
If the hit rates for blacks were actually lower than that for whites… well the following is simply nonsense: “if cops believe — rightly or wrongly, it doesn’t matter here — that blacks are significantly more likely than whites to carry contraband or engage in illegal activities, they are only going to stop whites that have “I’m a criminal” written on their face, whereas in the case of blacks they’re going to cast a much wider net.” The “nonsense” part is the claim that it doesn’t matter whether the cops are wrong. If they are wrong then the action of casting a wider net among blacks is racist and cannot be justified.
Of course, if the cops are wrong then what they’re doing can’t be justified, but that’s not what I meant when I said that “it doesn’t matter here”. What I meant is that, even if the cops are wrong (and therefore are racist), the inference is still fallacious, because what matters from a purely statistical/logical point of view is that cops don’t sample blacks and whites in the same way, not why they don’t. Obviously, from a moral point of view, it does matter.
1. Gandydancer says:
Yes, I was making the same point as EH. There was no clue in your post that the two reports mentioned claimed that the hit rates for whites was higher than the hit rates for blacks, so your “to be clear” statement of the proposition seemed to come from left field. Your in-line clarification does fix this.
If the cops are wrong about the contraband rate then their action in oversampling blacks (relative to evidence other than race) is indeed racist, which is the claimed conclusion from a white hit rate higher than black, so the conclusion that cops are racist can be wrong only if blacks are in fact more likely to be carrying contraband.
So the only interesting case is:
(a) black contraband rate higher than white
(b) white hit rate higher than black
…and I’m not following your claim that this is not evidence of racism, absent special circumstances such as the sampling taking place in neighborhoods where the only whites present have contraband rates higher than the local black population (so that we are not really dealing with the interesting case, though it might be falsely claimed that we were).
Now, cops can’t actually legally stop people merely because they believe they are criminals. And if non-contraband-carrying blacks provide more excuses for stopping (e.g., speeding) than non-contraband-carrying whites that might account for more false positives (non-hits) among blacks than whites, but I’d like to see some argument about the magnitude of that effect, or others that might be suggested before dismissing the racism hypothesis. Of course, if the hit rate ends up nonetheless being equal then such facts may actually point to a Ferguson Effect, i.e. cops being more reluctant to stop blacks than they should be.
1. Yes, I was making the same point as EH. There was no clue in your post that the two reports mentioned claimed that the hit rates for whites was higher than the hit rates for blacks, so your “to be clear” statement of the proposition seemed to come from left field. Your in-line clarification does fix this.
Fair enough.
If the cops are wrong about the contraband rate then their action in oversampling blacks (relative to evidence other than race) is indeed racist, which is the claimed conclusion from a white hit rate higher than black, so the conclusion that cops are racist can be wrong only if blacks are in fact more likely to be carrying contraband.
Again, I completely agree with that, but nothing I say contradicts it. My point is only that, even if cops are racist because they falsely believe that blacks are more likely to carry contraband than whites, the inference from 1 to 2 is fallacious because the very fact that they are racist and oversample blacks makes the estimator biased.
So the only interesting case is:
(a) black contraband rate higher than white
(b) white hit rate higher than black
…and I’m not following your claim that this is not evidence of racism, absent special circumstances such as the sampling taking place in neighborhoods where the only whites present have contraband rates higher than the local black population (so that we are not really dealing with the interesting case, though it might be falsely claimed that we were).
I think you are making the same point as Kevin above. If you read my exchange with him, I explain why, even when a and b are true, whether the cops are racist depends on the assumptions you make about 1) the nature of the relationship between evidence and the probability that someone is carrying contraband in each group and 2) how evidence is distributed among different groups. I don’t have time to figure out what assumptions are necessary and sufficient for the cops to be racist (i.e., for the way in which they sample people in different groups to result in a suboptimal overall hit rate) when a and b are true, though I think it’s an interesting question. But as I argue in my discussion with Kevin, I don’t think we can just assume that sufficient conditions are fulfilled in any realistic case. (I implicitly defined “racism” so that cops are racist if the way in which they sample people in different groups results in a suboptimal overall hit rate, but this definition should probably be strengthened, since it makes it extremely easy for cops to count as racist. Perhaps we should say that the overall hit rate must be suboptimal by more than just a little, or something like that.)
Now, cops can’t actually legally stop people merely because they believe they are criminals. And if non-contraband-carrying blacks provide more excuses for stopping (e.g., speeding) than non-contraband-carrying whites that might account for more false positives (non-hits) among blacks than whites, but I’d like to see some argument about the magnitude of that effect, or others that might be suggested before dismissing the racism hypothesis. Of course, if the hit rate ends up nonetheless being equal then such facts may actually point to a Ferguson Effect, i.e. cops being more reluctant to stop blacks than they should be.
To be clear, the purpose of this post isn’t to dismiss the racism hypothesis, it’s only to point out that a common inference whose conclusion is that cops are racists is fallacious. I’m sure cops are often racist to some extent, though as I have argued in other posts, I don’t think racism in the criminal justice systems explains a large part of the racial disparities observed for various outcomes. As you can see in the links I added to my post, there is evidence that blacks are more likely to speed and the effect seems to be large enough to explain the disparity in stopping rates, but I have no idea if this is true in general since I have never looked at the literature on that. (I added this before you wrote your comment, just because it involves John Lamberth and I remembered that I meant to add this to my post, but it just happens to be relevant to what you say in your comment.) I agree that whether or not the police are racist or, on the contrary, are more reluctant to stop blacks than they should be is going to depend on that sort of questions, among other things. There is also no reason to expect that the answer will be the same in different locations across the US. Not only because police departments are different, but so are the public they deal with.
This whole discussion with you, and the one I had with Kevin previously, makes me think that perhaps I should write another post to sort out these issues. At the very least, I’d like to get clear on how I should define racism for the purposes of this discussion and on what conditions are necessary and sufficient to guarantee that cops are racist in that sense when a and b are true, which I don’t have time to think about right now because I’m very busy. But I’ll definitely keep that in mind for when I have more time.
|
{}
|
# avocado_varianter_yaml_to_mux package¶
## avocado_varianter_yaml_to_mux.mux module¶
This file contains mux-enabled implementations of parts useful for creating a custom Varianter plugin.
class avocado_varianter_yaml_to_mux.mux.Control(code, value=None)
Bases: object
Container used to identify node vs. control sequence
class avocado_varianter_yaml_to_mux.mux.MuxPlugin
Bases: object
Base implementation of Mux-like Varianter plugin. It should be used as a base class in conjunction with avocado.core.plugin_interfaces.Varianter.
debug = None
default_params = None
initialize_mux(root, paths, debug)
Initialize the basic values
Note: We can’t use __init__ as this object is intended to be used via dispatcher with no __init__ arguments.
paths = None
root = None
to_str(summary, variants, **kwargs)
update_defaults(defaults)
variants = None
class avocado_varianter_yaml_to_mux.mux.MuxTree(root)
Bases: object
Object representing part of the tree from the root to leaves or another multiplex domain. Recursively it creates multiplexed variants of the full tree.
Parameters: root – Root of this tree slice
iter_variants()
Iterates through variants without verifying the internal filters.
Yields: all existing variants
class avocado_varianter_yaml_to_mux.mux.MuxTreeNode(name='', value=None, parent=None, children=None)
Class for binding nodes into a tree structure, with support for multiplexation
fingerprint()
Reports string which represents the value of this node.
merge(other)
Merges the other node into this one without checking the other node’s name. New values are appended, existing values are overwritten, and unaffected ones are kept. Then all of the other node’s children are added as children (recursively, they are either appended at the end or merged into an existing node at the previous position).
class avocado_varianter_yaml_to_mux.mux.MuxTreeNodeDebug(name='', value=None, parent=None, children=None, srcyaml=None)
Debug version of MuxTreeNode. :warning: The origin of each value is appended to all values, thus it’s not suitable for running tests.
merge(other)
Merges the other node into this one without checking the other node’s name. New values are appended, existing values are overwritten, and unaffected ones are kept. Then all of the other node’s children are added as children (recursively, they are either appended at the end or merged into an existing node at the previous position).
class avocado_varianter_yaml_to_mux.mux.OutputList(values, nodes, yamls)
Bases: list
List with some debug info
class avocado_varianter_yaml_to_mux.mux.OutputValue(value, node, srcyaml)
Bases: object
Ordinary value with some debug info
class avocado_varianter_yaml_to_mux.mux.TreeNodeDebug(name='', value=None, parent=None, children=None, srcyaml=None)
Debug version of TreeNode. :warning: The origin of each value is appended to all values, thus it’s not suitable for running tests.
merge(other)
Override origin with the one from other tree. Updated/Newly set values are going to use this location as origin.
class avocado_varianter_yaml_to_mux.mux.ValueDict(srcyaml, node, values)
Bases: dict
Dict which stores the origin of the items
items()
Slower implementation with the use of __getitem__
iteritems()
Slower implementation with the use of __getitem__
avocado_varianter_yaml_to_mux.mux.apply_filters(root, filter_only=None, filter_out=None)
Apply a set of filters to the tree.
The basic filtering is filter_only, which includes nodes, and filter_out rules, which exclude nodes.
Note that filter_out is stronger than filter_only: once you filter a node out, you cannot bring it back with a filter_only rule.
Parameters: root – Root node of the multiplex tree. filter_only – the list of paths which will include nodes. filter_out – the list of paths which will exclude nodes. the original tree minus the nodes filtered by the rules.
avocado_varianter_yaml_to_mux.mux.path_parent(path)
From a given path, return its parent path.
Parameters: path – the node path as string. the parent path as string.
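As a rough illustration of the documented contract of path_parent, here is an independent plain-Python sketch (this is not the library's actual implementation, and returning `'/'` for top-level nodes is an assumption on my part):

```python
def path_parent(path):
    """From a given node path, return its parent path (sketch of the
    documented behavior; '/' for top-level nodes is an assumption)."""
    parent = path.rstrip('/').rsplit('/', 1)[0]
    return parent or '/'
```

For example, `path_parent('/hw/cpu')` gives `'/hw'` under this sketch.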
## Module contents¶
Varianter plugin to parse yaml files to params
class avocado_varianter_yaml_to_mux.ListOfNodeObjects
Bases: list
Used to mark a list as a list of objects from which a node is going to be created
class avocado_varianter_yaml_to_mux.YamlToMux
Processes the mux options into varianter plugin
description = 'Multiplexer plugin to parse yaml files to params'
initialize(args)
name = 'yaml_to_mux'
class avocado_varianter_yaml_to_mux.YamlToMuxCLI
Defines arguments for YamlToMux plugin
configure(parser)
Configures “run” and “variants” subparsers
description = "YamlToMux options for the 'run' subcommand"
name = 'yaml_to_mux'
run(args)
The YamlToMux varianter plugin handles these
avocado_varianter_yaml_to_mux.create_from_yaml(paths, debug=False)
Create tree structure from yaml-like file :param fileobj: File object to be processed :raise SyntaxError: When yaml-file is corrupted :return: Root of the created tree structure
avocado_varianter_yaml_to_mux.get_named_tree_cls(path, klass)
Return TreeNodeDebug class with hardcoded yaml path
|
{}
|
# A 1.25 kg weight is hung from a vertical spring. The spring stretches by 3.75 cm from its original, unstretched length. How much mass should you hang from the spring so it will stretch by 8.13 cm?
Mar 13, 2018
Remember Hooke's law.
2.71 kg
#### Explanation:
Hooke's law relates the force a spring exerts on an object attached to it to its displacement:
$F = - k \cdot x$
where $F$ is the force, $k$ is the spring constant, and $x$ is the distance the spring stretches.
In your case the weight $m g$ balances the spring force, so $g$ cancels from the ratio and the effective spring constant (in weight units) evaluates to:
$\frac{1.25}{3.75} = 0.333$ kg/cm
To get an 8.13 cm extension you would need $0.333 \cdot 8.13 \approx 2.71$ kg.
2.71 kg
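The arithmetic is easy to check in a few lines of Python (a sketch of the computation above; the variable names are mine):

```python
# Hooke's law in weight units: m * g = k * x, so m1 / x1 = m2 / x2
# and g cancels out of the ratio.
m1, x1 = 1.25, 3.75   # known mass (kg) and stretch (cm)
x2 = 8.13             # target stretch (cm)

k_eff = m1 / x1       # effective spring constant in kg/cm
m2 = k_eff * x2       # mass needed for the 8.13 cm stretch
```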
|
{}
|
# Permutations and Combinations of ways of dividing into identical and non- identical groups
In how many ways can $8$ people be divided into
1. $4$ groups of $2$ people
2. First pair, second pair. Third pair and the fourth pair
How do you go about doing this question?
-
Are you familiar with binomial coefficients? What have you tried? – Jeremy Nov 2 '12 at 9:21
4 groups of 2 people:
$$\left(\frac{8\cdot 7}{2!}\cdot\frac{6\cdot 5}{2!}\cdot\frac{4\cdot 3}{2!}\cdot\frac{2\cdot 1}{2!}\right)\Big/\,4! = \frac{28\cdot 15\cdot 6\cdot 1}{24} = 105$$
First pair, second pair. Third pair and the fourth pair:
1st pair $\to \frac{8\cdot 7}{2!} = 28$
2nd pair $\to \frac{6\cdot 5}{2!} = 15$
3rd pair $\to \frac{4\cdot 3}{2!} = 6$, 4th pair $\to \frac{2\cdot 1}{2!} = 1$, for a total of $28\cdot 15\cdot 6\cdot 1 = 2520$.
understood?
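Both counts can be verified with `math.factorial` (a quick check of the two formulas, not part of the original answer):

```python
from math import factorial

# Unordered groups: divide out the 2! orderings inside each of the
# four pairs and the 4! orderings of the pairs themselves.
unordered = factorial(8) // (factorial(2) ** 4 * factorial(4))   # 105

# Labelled pairs (first, second, third, fourth): keep the 4! orderings
# of the pairs, only dividing inside each pair.
labelled = factorial(8) // factorial(2) ** 4                     # 2520
```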
-
Use consistent notation, please. – Cameron Buie Nov 2 '12 at 10:53
|
{}
|
# Convert to Regular Notation 29*10^6
Since the exponent of the scientific notation is positive, move the decimal point $6$ places to the right.
$29\times 10^{6} = 29000000$
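The conversion can be checked directly in Python (a trivial sketch):

```python
# 29 * 10^6: moving the decimal point 6 places to the right
value = 29 * 10 ** 6
```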
|
{}
|
# The contrapositive of the statement " If you are born in India, then you are a citizen of India", is : Option 1) If you are not born in India, then you are not a citizen of India. Option 2) If you are not a citizen of India, then you are not born in India. Option 3) If you are a citizen of India, then you are born in India. Option 4) If you are born in India, then you are not a citizen of India.
The statement has the form
$p\rightarrow q$
and its contrapositive is
$\sim q\rightarrow \sim p$
$\Rightarrow$ If you are not a citizen of India, then you are not born in India, which is Option 2.
Option 1)
If you are not born in India, then you are not a citizen of India.
Option 2)
If you are not a citizen of India, then you are not born in India.
Option 3)
If you are a citizen of India, then you are born in India.
Option 4)
If you are born in India, then you are not a citizen of India.
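The equivalence $p\rightarrow q \equiv\ \sim q\rightarrow \sim p$ can be checked mechanically over all truth assignments (a sketch; `implies` is a helper of my own):

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

# A conditional and its contrapositive agree on every truth assignment.
agree = all(implies(p, q) == implies(not q, not p)
            for p, q in product([True, False], repeat=2))
```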
|
{}
|
SCIENTIA SINICA Informationis, Volume 50 , Issue 11 : 1680(2020) https://doi.org/10.1360/SSI-2019-0287
## New versions of Lovász Local Lemma and their applications
• Accepted: Feb 26, 2020
• Published: Oct 19, 2020
### Abstract
Lovász Local Lemma (LLL) is an important tool in combinatorics and probability theory. It can be used to show the existence of combinatorial objects meeting a collection of criteria as long as the criteria are weakly dependent. It was first proposed by Erdős and Lovász in 1975. Since then, many applications of LLL have been found in combinatorics, theoretical computer science, and physics. Recently, several new versions of LLL have been proposed. Constructive LLL is an especially big breakthrough in theoretical computer science that has attracted lots of attention. In this paper, we will review recent progress in LLL research, including new versions of LLL and their applications. We will precisely define and differentiate among abstract LLL, lopsided LLL, variable LLL, and quantum LLL. We will also provide connections between abstract LLL and statistical physics, as well as between quantum LLL and quantum physics. LLL can be used to prove the existence of solutions, find solutions efficiently, count the number of solutions, and sample a solution uniformly at random. We will also illustrate these applications of LLL with the SAT problem and the quantum SAT problem.
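As a concrete illustration of the SAT application mentioned in the abstract, the symmetric form of the LLL gives a simple sufficient condition for satisfiability of a k-CNF formula. The sketch below (my own, using the standard symmetric condition $e\,p\,(d+1)\le 1$, not a formula from this paper) checks it:

```python
import math

def symmetric_lll_satisfiable(k, d):
    """Symmetric LLL applied to k-SAT: each clause is violated with
    probability p = 2**-k under a uniform random assignment; if every
    clause shares variables with at most d other clauses and
    e * p * (d + 1) <= 1, a satisfying assignment exists."""
    p = 2.0 ** (-k)
    return math.e * p * (d + 1) <= 1.0
```

For example, 10-SAT instances where each clause overlaps at most 300 others satisfy the condition, while dense 3-SAT instances do not.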
### References
[1] Erdős P, Lovász L. Problems and results on 3-chromatic hypergraphs and some related questions. Infinite and Finite Sets, 1975, 10: 609-627. Google Scholar
[2] Alon N, Spencer J H. The Probabilistic Method. 4th ed. Hoboken: John Wiley & Sons, 2016. Google Scholar
[3] SZEGEDY M. The lovász local lemma--a survey. In: Proceedings of International Computer Science Symposium in Russia, Berlin, 2013. 1--11. Google Scholar
[4] Spencer J. Asymptotic lower bounds for Ramsey functions. Discrete Math, 1977, 20: 69-76 CrossRef Google Scholar
[5] Shearer J B. On a problem of spencer. Combinatorica, 1985, 5: 241-245 CrossRef Google Scholar
[6] Gebauer H, Szabó T, Tardos G. The local lemma is asymptotically tight for SAT. Journal of the ACM (JACM), 2016, 63: 43. Google Scholar
[7] Mcdiarmid C. Hypergraph colouring and the Lovász local lemma. Discrete Mathematics, 1997, 167: 481-486. Google Scholar
[8] Wood D W. The exact location of partition function zeros, a new method for statistical mechanics. J Phys A-Math Gen, 1985, 18: L917-L921 CrossRef ADS Google Scholar
[9] Guttmann A J. COMMENT: Comment on 'The exact location of partition function zeros, a new method for statistical mechanics'. J Phys A-Math Gen, 1987, 20: 511-512 CrossRef ADS Google Scholar
[10] Todo S. Transfer-Matrix Study of Negative-Fugacity Singularity of Hard-Core Lattice Gas. Int J Mod Phys C, 1999, 10: 517-529 CrossRef ADS Google Scholar
[11] SCOTT A D, SOKAL A D. On dependency graphs and the lattice gas. Combinatorics, Probability & Computing, 2006, 15: 253-279. Google Scholar
[12] Bissacot R, Fernández R, Procacci A. An Improvement of the Lovász Local Lemma via Cluster Expansion. Combinator Probab Comp, 2011, 20: 709-719 CrossRef Google Scholar
[13] HARVEY N J, SRIVASTAVA P, VONDRÁK J. Computing the independence polynomial: from the tree threshold down to the roots. In: Proceedings of the 29th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), New Orleans, 2018. 1557--1576. Google Scholar
[14] BezÁkovÁ I, Galanis A, Goldberg L A, et al. Inapproximability of the independent set polynomial in the complex plane. In: Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC), Los Angeles, 2018. 1234--1240. Google Scholar
[15] Kolipaka K, Szegedy M, Xu Y. A sharper local lemma with improved applications. In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques. Berlin: Springer, 2012. 603--614. Google Scholar
[16] Erdős P, Spencer J. Lopsided Lovász Local Lemma and Latin transversals. Discrete Appl Math, 1991, 30: 151-154 CrossRef Google Scholar
[17] Harris D G, Srinivasan A. A constructive lovász local lemma for permutations. Theory of Computing, 2017, 13: 1-41. Google Scholar
[18] SzabÓ S. Transversals of rectangular arrays. Acta Mathematica Universitatis Comenianae, 2008, 77:. Google Scholar
[19] BÖttcher J, Kohayakawa Y, Procacci A. Properly coloured copies and rainbow copies of large graphs with small maximum degree. Random Structures & Algorithms, 2012, 40: 425-436. Google Scholar
[20] Mohr A T. Applications of the lopsided lovász local lemma regarding hypergraphs. Dissertation for Ph.D. Degree. Carolina: University of South Carolina, 2013. Google Scholar
[21] Keevash P, Ku C Y. A random construction for permutation codes and the covering radius. Des Codes Crypt, 2006, 41: 79-86 CrossRef Google Scholar
[22] Lu L, Mohr A, Székely L. Quest for negative dependency graphs. In: Recent Advances in Harmonic Analysis and Applications. Berlin: Springer, 2012. 243--258. Google Scholar
[23] Gebauer H, Moser R A, Scheder D, et al. The Lovász local lemma and satisfiability. In: Efficient Algorithms. Berlin: Springer, 2009. 30--54. Google Scholar
[24] Moitra A. Approximate counting, the lovasz local lemma, and inference in graphical models. Journal of the ACM (JACM), 2019, 66: 10. Google Scholar
[25] Giotis I, Kirousis L, Psaromiligkos K I. Acyclic edge coloring through the Lovász Local Lemma. Theor Comput Sci, 2017, 665: 40-50 CrossRef Google Scholar
[26] Moser R A. A constructive proof of the lovász local lemma. In: Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), Bethesda, 2009. 343--350. Google Scholar
[27] MOSER R A, TARDOS G. A constructive proof of the general Lovász local lemma. Journal of the ACM (JACM), 2010, 57: 11. Google Scholar
[28] Kolipaka K B R, Szegedy M. Moser and Tardos meet Lovász. In: Proceedings of the 43rd Annual ACM Symposium on Theory of Computing (STOC), California, 2011. 235--244. Google Scholar
[29] He K, Li L, Liu X, et al. Variable-version Lovász local lemma: Beyond shearer's bound. In: Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), Berkeley, 2017. 451--462. Google Scholar
[30] Rokhsar D S, Kivelson S A. Superconductivity and the quantum hard-core dimer gas. Phys Rev Lett, 1988, 61: 2376-2379 CrossRef ADS Google Scholar
[31] Castelnovo C, Chamon C, Mudry C. From quantum mechanics to classical statistical physics: Generalized Rokhsar-Kivelson Hamiltonians and the "Stochastic Matrix Form" decomposition. Ann Phys, 2005, 318: 316-344 CrossRef ADS arXiv Google Scholar
[32] BRAVYI S. Efficient algorithm for a quantum analogue of 2-sat. Contemporary Mathematics, 2011, 536: 33-48. Google Scholar
[33] AMBAINIS A, KEMPE J, SATTATH O. A quantum Lovász local lemma. Journal of the ACM (JACM), 2012, 59: 24. Google Scholar
[34] He K, Li Q, Sun X, et al. Quantum lovász local lemma: Shearer's bound is tight. In: Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC), Phoenix, 2019. 461--472. Google Scholar
[35] Laumann C R, Läuchli A M, Moessner R, et al. On product, generic and random generic quantum satisfiability. Physical Review A, 2010, 81: 359-366. Google Scholar
[36] Sattath O, Morampudi S C, Laumann C R. When a local Hamiltonian must be frustration-free. Proc Natl Acad Sci USA, 2016, 113: 6433-6437 CrossRef ADS arXiv Google Scholar
[37] LAUMANN C, MOESSNER R, SCARDICCHIO A, et al. Phase transitions in random quantum satisfiability. Bulletin of the American Physical Society, 2009, 54. Google Scholar
[38] GILYÉN A, SATTATH O. On preparing ground states of gapped hamiltonians: An efficient quantum Lovász local lemma. In: Proceedings of the 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS), Berkeley, 2017. 439--450. Google Scholar
[39] BECK J. An algorithmic approach to the Lovász local lemma. Random Structures & Algorithms, 1991, 2: 343-365. Google Scholar
[40] CZUMAJ A, SCHEIDELER C. A new algorithm approach to the general Lovász local lemma with applications to scheduling and satisfiability problems. In: Proceedings of the 32nd Annual ACM Symposium on Theory of Computing (STOC), Portland, 2000. 38--47. Google Scholar
[41] MOLLOY M, REED B. Further algorithmic aspects of the local lemma. In: Proceedings of the 30th Annual ACM Symposium on Theory of Computing (STOC), Dallas, 1998. 524--529. Google Scholar
[42] RADHAKRISHNAN J, SRINIVASAN A. Improved bounds and algorithms for hypergraph 2-coloring. Random Structures & Algorithms, 2000, 16: 4-32. Google Scholar
[43] SALAVATIPOUR M R. A $(1+\epsilon)$-approximation algorithm for partitioning hypergraphs using a new algorithmic version of the Lovász local lemma. Random Structures & Algorithms, 2004, 25: 68-90. Google Scholar
[44] Messner J, Thierauf T. A Kolmogorov complexity proof of the Lovász Local Lemma for satisfiability. Theor Comput Sci, 2012, 461: 55-64 CrossRef Google Scholar
[45] CATARATA J D, CORBETT S, STERN H, et al. The Moser-Tardos resample algorithm: Where is the limit? (an experimental inquiry). In: Proceedings of the 19th Workshop on Algorithm Engineering and Experiments (ALENEX), Barcelona, 2017. 159--171. Google Scholar
[46] HARVEY N J, VONDRÁK J. An algorithmic proof of the Lovász local lemma via resampling oracles. In: Proceedings of the 56th Annual Symposium on Foundations of Computer Science (FOCS), Berkeley, 2015. 1327--1346. Google Scholar
[47] ACHLIOPTAS D, ILIOPOULOS F. Random walks that find perfect objects and the Lovász local lemma. Journal of the ACM (JACM), 2016, 63: 22. Google Scholar
[48] ACHLIOPTAS D, ILIOPOULOS F. Focused stochastic local search and the Lovász local lemma. In: Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Arlington, 2016. 2024--2038. Google Scholar
[49] ACHLIOPTAS D, ILIOPOULOS F, KOLMOGOROV V. A local lemma for focused stochastic algorithms,. arXiv Google Scholar
[50] KOLMOGOROV V. Commutativity in the algorithmic lovász local lemma. SIAM Journal on Computing, 2018, 47: 2029-2056. Google Scholar
[51] ACHLIOPTAS D, ILIOPOULOS F, SINCLAIR A. Beyond the lovász local lemma: Point to set correlations and their algorithmic applications. In: Proceedings of IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS), Baltimore, 2019. 725--744. Google Scholar
[52] HARRIS D G. Deterministic parallel algorithms for fooling polylogarithmic juntas and the lovász local lemma. ACM Transactions on Algorithms (TALG), 2018, 14: 47. Google Scholar
[53] Chandrasekaran K, Goyal N, Haeupler B. Deterministic Algorithms for the Lovász Local Lemma. SIAM J Comput, 2013, 42: 2132-2155 CrossRef Google Scholar
[54] HAEUPLER B, HARRIS D G. Parallel algorithms and concentration bounds for the lovász local lemma via witness dags. ACM Transactions on Algorithms (TALG), 2017, 13: 53. Google Scholar
[55] HARRIS D G. Deterministic algorithms for the lovasz local lemma: simpler, more general, and more parallel,. arXiv Google Scholar
[56] GUO H, JERRUM M, LIU J. Uniform sampling through the lovász local lemma. Journal of the ACM (JACM), 2019, 66: 18. Google Scholar
[57] Guo H, Jerrum M. A Polynomial-Time Approximation Algorithm for All-Terminal Network Reliability. SIAM J Comput, 2019, 48: 964-978 CrossRef Google Scholar
[58] GUO H, HE K. Tight bounds for popping algorithms,. arXiv Google Scholar
[59] GUO H, JERRUM M. Approximately counting bases of bicircular matroids,. arXiv Google Scholar
[60] FENG W, GUO H, YIN Y, et al. Fast sampling and counting $k$-sat solutions in the local lemma regime,. arXiv Google Scholar
[61] Guo H, Liao C, Lu P. Counting Hypergraph Colorings in the Local Lemma Regime. SIAM J Comput, 2019, 48: 1397-1424 CrossRef Google Scholar
[62] GALANIS A, GOLDBERG L A, GUO H, et al. Counting solutions to random cnf formulas,. arXiv Google Scholar
[63] Bezáková I, Galanis A, Goldberg L A. Approximation via Correlation Decay When Strong Spatial Mixing Fails. SIAM J Comput, 2019, 48: 279-349 CrossRef Google Scholar
[64] CUBITT T S, SCHWARZ M. A constructive commutative quantum lovász local lemma, and beyond,. arXiv Google Scholar
[65] SCHWARZ M, CUBITT T S, VERSTRAETE F. An information-theoretic proof of the constructive commutative quantum Lovász local lemma,. arXiv Google Scholar
[66] SATTATH O, ARAD I. A constructive quantum Lovász local lemma for commuting projectors. Quantum Information & Computation, 2015, 15: 987-996. Google Scholar
[67] Gaunt D S. Hard-Sphere Lattice Gases. II. Plane-Traingular and Three-Dimensional Lattices. J Chem Phys, 1967, 46: 3237-3259 CrossRef ADS Google Scholar
[68] Baxter R J. Hard hexagons: exact solution. J Phys A-Math Gen, 1980, 13: L61-L70 CrossRef ADS Google Scholar
[69] GAUNT D S, FISHER M E. Hard-sphere lattice gases. i. plane-square lattice. The Journal of Chemical Physics, 1965, 43: 2840-2863. Google Scholar
• Figure 1: Examples of the dependency graph. (a) Three events; (b) four events
• Figure 2: The probability vectors characterized by different LLLs. (a) Theorem 3; (b) Theorem 4; (c) Theorem 9
• Figure 3: Examples of the event-variable graph. (a) Three events; (b) four events
• Algorithm 1: Resample
Sample $X_1,\ldots,X_n$ uniformly at random;
while $\exists~i~\in~[m]$ such that $A_i$ holds do
Choose an arbitrary such $i$ and resample all variables used by $A_i$;
end while
Return the current assignments of all variables.
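Algorithm 1 is the Moser-Tardos resample algorithm. A toy Python rendering on a small CNF instance may help (the instance, representation, and function name are illustrative, not from the paper):

```python
import random

def moser_tardos(n_vars, clauses, rng):
    """Algorithm 1: sample all variables uniformly, then repeatedly
    resample every variable of some violated clause (a 'bad event' A_i)
    until no clause is violated.
    A clause is a list of (variable index, required value) literals;
    it is satisfied when at least one literal matches the assignment."""
    x = [rng.random() < 0.5 for _ in range(n_vars)]
    def violated(c):
        return all(x[v] != want for v, want in c)
    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return x
        for v, _ in bad[0]:              # resample all variables used by A_i
            x[v] = rng.random() < 0.5

# A small satisfiable 2-CNF instance over 4 variables (illustrative).
clauses = [[(0, True), (1, True)], [(1, False), (2, True)],
           [(2, False), (3, True)], [(0, False), (3, False)]]
assignment = moser_tardos(4, clauses, random.Random(1))
```

By construction, the algorithm only returns once every clause is satisfied.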
• Table 1 Relative dimension and independence of vector space

| Probability notion | | Vector-space analogue |
| --- | --- | --- |
| Probability space $\Omega$ | $\rightarrow$ | Vector space $V$ |
| Event $A\subseteq\Omega$ | $\rightarrow$ | Subspace $A\subseteq V$ |
| Probability $\Pr(A)$ | $\rightarrow$ | Relative dimension $R(A):=\frac{\dim(A)}{\dim(V)}$ |
| Conjunction $A\wedge B$ | $\rightarrow$ | $A\cap B$ |
| Disjunction $A\vee B$ | $\rightarrow$ | $A+B=\{a+b \mid a\in A,\ b\in B\}$ |
| Complement $\overline{A}=\Omega\backslash A$ | $\rightarrow$ | Orthogonal subspace $A^{\perp}$ |
| Conditional probability $\Pr(A\mid B):=\frac{\Pr(A\wedge B)}{\Pr(B)}$ | $\rightarrow$ | $R(A\mid B):=\frac{R(A\cap B)}{R(B)}$ |
| Independence $\Pr(A\wedge B)=\Pr(A)\cdot\Pr(B)$ | $\rightarrow$ | $R(A\cap B)=R(A)\cdot R(B)$ |
• Table 2 Summary of the critical thresholds for various lattices

| Lattice | Quantum threshold | Lower bound of the difference (between the classical and quantum thresholds) |
| --- | --- | --- |
| Triangular | $\frac{5\sqrt{5}-11}{2}$ [10,67,68] | $6.199\times 10^{-8}$ |
| Square | 0.1193 [10,69] | $5.943\times 10^{-8}$ |
| Hexagonal | 0.1547 [10] | $1.211\times 10^{-7}$ |
| Simple cubic | 0.0744 [67] | $9.533\times 10^{-10}$ |
|
{}
|
# Test whether data set approximates normal distribution using mean and median
I understand that, to test whether a data set approximates a normal distribution, the median and the mean should be approximately equal. So my question is: to what degree should a difference between the median and the mean be accepted?
• You should first tell us why this matters to you/what you intend to do with this information.
– Gala
Jul 11 '13 at 6:50
• You could work out (via simulation at the very least) a distribution for the difference in sample mean and sample median, which would be symmetric about 0 and whose variance multiplied by $n$ would asymptotically go to some constant. As such you could construct some kind of test for normality, but it would be a pretty poor test for it, since it would have fairly poor power against a host of symmetric alternatives that aren't normal. If you're interested in assessing normality, there are better ways Jul 11 '13 at 6:58
• @Glen_b even understates the problem as there are also asymmetric distributions for which mean = median, such as some Poisson and binomial distributions. Jul 11 '13 at 7:31
• – Gala
Jul 11 '13 at 7:49
As I said in comments, you could work out (via simulation at the very least) a distribution for the difference in sample mean and sample median, which would be symmetric about 0 and whose variance multiplied by n would asymptotically go to some constant. As such you could construct some kind of test for normality, but it would be a pretty poor test for it, since it would have fairly poor power against a host of symmetric alternatives that aren't normal -- nor indeed even against asymmetric alternatives that happen to have mean=median. If you're interested in assessing normality, there are certainly better ways.
To answer the question though, this paper says that, asymptotically, the constant I mentioned is $\pi/2-1$ (that is, the variance of $\bar x - \tilde{x}$ in large samples is about $0.571\sigma^2/n$). In small samples, it's a bit smaller. As a rough rule of thumb, you expect the standard deviation of the difference between mean and median to be about $0.75 \sigma/\sqrt{n}$ (in odd samples; a bit smaller for even $n$).
Simulation of 10000 samples of size 25 gives a constant of $0.7390$ (that is, the s.d. of the difference was about $0.739\sigma/\sqrt{n}$, which is consistent with the results from the paper.
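That simulation is easy to reproduce with the standard library alone (my own sketch, not the code used above; with a few thousand replications the estimated constant for $n=25$ should land near 0.739):

```python
import math
import random
import statistics

def mean_median_constant(n, reps, rng):
    """Estimate c in sd(mean - median) ~ c * sigma / sqrt(n)
    for samples of size n drawn from N(0, 1)."""
    diffs = []
    for _ in range(reps):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        diffs.append(statistics.fmean(xs) - statistics.median(xs))
    # Multiply the sd of the differences by sqrt(n) to recover c.
    return statistics.pstdev(diffs) * math.sqrt(n)

c = mean_median_constant(25, 4000, random.Random(2013))
```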
This boils down to basically using Pearson's second skewness coefficient as a way of assessing normality (I haven't used the factor of 3 here, though - I agree with Nick Cox's comment below that it's more intuitive without it in any case). That's sometimes called the nonparametric skew (though there's nothing that makes it any more nonparametric than any other skewness coefficient).
Now, considering it as a test statistic, since $\sigma$ will generally be unknown, it must usually be estimated; except for large samples (when we may apply Slutsky's theorem), this will lead to a coefficient that's heavier tailed than normal - though not actually t-distributed, it will probably be close to it* - meaning a critical value will tend to be larger for smaller samples. This will somewhat counteract the effect above of the coefficient being smaller with smaller $n$, though not completely; an asymptotic 5% test rejects when $\frac{\mid\bar x - \tilde{x}\mid}{k.s/\sqrt{n}}>1.96$ where $k = \sqrt{\pi/2-1}\approx 0.756$.
Using 0.75 for $k$ is easy to remember and works quite well for odd $n$ down to about 25 or even a bit lower; the actual significance level at $n$ = 25 is close to 4.5% (on normal data, naturally). It's a reasonably easy test to remember even if it's not always useful.
* though we won't know suitable approximate df without further effort
• Thanks for your answer. It is exactly what I was looking for. The publication from Electronic Journal of Applied Stats clarifies the issues that led to my post. Much appreciated. Jul 11 '13 at 13:23
• I think it's simpler and clearer to work with (mean - median) / SD. Multiplying that by 3 does have historic roots in Pearson's system of distributions. But setting that aside this measure then has the property, intermittently rediscovered and often thought contrary to intuition, that it is bounded by -1 and 1. It is clearly 0 for mean = median. Jul 11 '13 at 14:43
• @NickCox I agree, but felt the connection was worth pointing out. Jul 11 '13 at 15:01
There is no hard and fast rule for that; it depends on your data. Please give more details about your data, and consider trying other methods to check the normality of the data.
• I'm actually trying to establish control charts. I have collected the data. So all I needed was a way to determine whether my data is distributed normally, so that I could use appropriate formulas to calculate limits. Hence my post. Jul 11 '13 at 13:30
|
{}
|
# Automatic structures
@article{Blumensath2000AutomaticS,
title={Automatic structures},
author={Achim Blumensath and Erich Gr{\"a}del},
journal={Proceedings Fifteenth Annual IEEE Symposium on Logic in Computer Science (Cat. No.99CB36332)},
year={2000},
pages={51-62}
}
• Published 26 June 2000
• Computer Science
• Proceedings Fifteenth Annual IEEE Symposium on Logic in Computer Science (Cat. No.99CB36332)
We study definability and complexity issues for automatic and ω-automatic structures. These are, in general, infinite structures but they can be finitely presented by a collection of automata. Moreover they admit effective (in fact automatic) evaluation of all first-order queries. Therefore, automatic structures provide an interesting framework for extending many algorithmic and logical methods from finite structures to infinite ones. We explain the notion of (ω-)automatic…
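A minimal illustration of the idea (my own example, not code from the paper): the graph of addition on the naturals is an automatic relation, because a two-state carry automaton can check $x+y=z$ while reading the three binary expansions synchronously, least significant bit first. The sketch below simulates that automaton directly:

```python
def addition_relation(x, y, z):
    """Decide x + y = z with the carry automaton underlying the classic
    automatic presentation of (N, +): scan the padded binary expansions
    of x, y, z in lockstep, tracking only the carry bit (the automaton's
    state)."""
    width = max(x.bit_length(), y.bit_length(), z.bit_length()) + 1
    carry = 0
    for i in range(width):
        a, b, c = (x >> i) & 1, (y >> i) & 1, (z >> i) & 1
        s = a + b + carry
        if (s & 1) != c:
            return False          # reject: the output bit disagrees
        carry = s >> 1
    return carry == 0             # accept iff no carry is left over
```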
334 Citations
Advice Automatic Structures and Uniformly Automatic Classes
• Computer Science
CSL
• 2017
It is proved that the class of all torsion-free Abelian groups of rank one is uniformly ω-automatic and that there is a uniform ω-tree-automatic presentation of the class of all Abelian groups up to elementary equivalence and of the class of all countable divisible Abelian groups.
Finite Presentations of Infinite Structures: Automata and Interpretations
• Mathematics, Computer Science
Theory of Computing Systems
• 2004
The model checking problem for FO(∃ω), first-order logic extended by the quantifier “there are infinitely many”, is proved to be decidable for automatic and ω-automatic structures and appropriate expansions of the real ordered group.
Climbing up the Elementary Complexity Classes with Theories of Automatic Structures
• Computer Science
CSL
• 2018
A positive answer is given to the question of whether there are automatic structures of arbitrarily high elementary complexity, and it is shown that for every h ≥ 0 the forest of all finite trees of height at most h + 2 is automatic and its theory is complete for STA(∗, exp_h(n, poly(n)), poly(n)), an alternating complexity class between h-fold exponential time and space.
The model-theoretic complexity of automatic linear orders
This thesis studies the model-theoretic complexity of automatic linear orders in terms of two complexity measures: the finite-condensation rank and the Ramsey degree.
Invariants of Automatic Presentations and Semi-synchronous Transductions
The main result is that a one-to-one function on words preserves regularity as well as non-regularity of all relations iff it is a semi-synchronous transduction.
AUTOMATIC AND POLYNOMIAL-TIME ALGEBRAIC STRUCTURES
• Computer Science, Mathematics
The Journal of Symbolic Logic
• 2019
This paper shows that the set of Turing machines that represent automata-presentable structures is ${\rm{\Sigma }}_1^1$-complete and uses similar methods to show that there is no reasonable characterisation of the structures with a polynomial-time presentation in the sense of Nerode and Remmel.
Computability and complexity properties of automatic structures and their applications
• Mathematics
• 2008
Finite state automata are Turing machines with fixed finite bounds on resource use. Automata lend themselves well to real-time computations and efficient algorithms. Continuing a tradition of
On automatic partial orders
• Mathematics, Computer Science
18th Annual IEEE Symposium of Logic in Computer Science, 2003. Proceedings.
• 2003
It is shown that every infinite path in an automatic tree with countably many infinite paths is a regular language.
First-order and counting theories of ω-automatic structures
• Mathematics
Journal of Symbolic Logic
• 2008
Abstract The logic extends first-order logic by a generalized form of counting quantifiers (“the number of elements satisfying … belongs to the set C”). This logic is investigated for structures with
Algorithmic Solutions via Model Theoretic Interpretations
• Mathematics
• 2017
Model theoretic interpretations are an important tool in algorithmic model theory. Their applications range from reductions between logical theories to the construction of algorithms for problems,
## References
SHOWING 1-10 OF 46 REFERENCES
Towards a Theory of Recursive Structures
This paper summarizes the recent work on recursive structures and databases, including the high undecidability of many problems on recursive graphs and structures, and a method for deducing results on the descriptive complexity of finitary NP optimization problems from results on the computational complexity of their infinitary analogues.
Word problems requiring exponential time(Preliminary Report)
• Computer Science
STOC
• 1973
A number of similar decidable word problems from automata theory and logic whose inherent computational complexity can be precisely characterized in terms of time or space requirements on deterministic or nondeterministic Turing machines are considered.
On the Equivalence, Containment, and Covering Problems for the Regular and Context-Free Languages
• Computer Science, Mathematics
J. Comput. Syst. Sci.
• 1976
We consider the complexity of the equivalence and containment problems for regular expressions and context-free grammars, concentrating on the relationship between complexity and various language
On Relations Defined by Generalized Finite Automata
• Mathematics, Computer Science
IBM J. Res. Dev.
• 1965
A transduction, in the sense of this paper, is a n-ary word relation (which may be a function) describable by a finite directed labeled graph that constitutes the equilibrium (potential) behavior of 1-dimensional, bilateral iterative networks.
On Finite Model Theory
The subject of this paper is the part of finite model theory intimately related to the classical model theory. In the very beginning of our career in computer science, we attended a few lectures on
More about recursive structures: descriptive complexity and zero-one laws
• Mathematics, Computer Science
Proceedings 11th Annual IEEE Symposium on Logic in Computer Science
• 1996
This paper investigates the descriptive complexity of several logics over recursive structures, including first-order, second- order, and fixpoint logic, and proposes a version that applies to recursive structures and uses it to prove several non-expressibility results.
The theory of functions and sets of natural numbers
Recursiveness and Computability. Induction. Systems of Equations. Arithmetical Formal Systems. Turing Machines. Flowcharts. Functions as Rules. Arithmetization. Church's Thesis. Basic Recursion
SUPER-EXPONENTIAL COMPLEXITY OF PRESBURGER ARITHMETIC
• Mathematics
• 1974
Lower bounds are established on the computational complexity of the decision problem and on the inherent lengths of proofs for two classical decidable theories of logic: the first-order theory of the
The complexity of relational query languages (Extended Abstract)
The pattern which will be shown is that the expression complexity of the investigated languages is one exponential higher than their data complexity, and for both types of complexity the authors show completeness in some complexity class.
Ehrenfeucht Games, the Composition Method, and the Monadic Theory of Ordinal Words
• W. Thomas
• Mathematics, Computer Science
Structures in Logic and Computer Science
• 1997
Shelah's extension of the method, the “composition of monadic theories”, is reviewed, explained in the example of the monadic theory of the ordinal ordering (ω, <), and compared with the automata-theoretic approach due to Büchi.
|
{}
|
## After midnight, part 4: LaTeX
### Tuesday 11. December, 2007
What has happened since the last blog entry? Well, I’m not sick anymore, and I live on my own now (okay, “on my own” is relative).
Unlike the previous After midnight posts, this one isn’t actually about a programming language.
Do you know “Structured text processing” (Rakenteellinen tekstinkäsittely, in Finnish)? It was the fancier name for our university’s LaTeX typesetting course. It’s like text processing, but not at all like it. Today everybody uses “What You See Is What You Get” software (M\$ Word, OpenOffice.org Writer), but hardly anyone has heard that you could type those documents with a TeX system. Using LaTeX is quite different from those WYSIWYG programs, as you don’t see what you get until you have compiled the .tex file into something more readable, such as a DVI or PostScript file.
Personally I’m impressed by the looks of the documents I have made so far.
If you want to try it out, you need a LaTeX environment; see MiKTeX. Then you need the Ghostscript software, and you probably want GSView too. You can write .tex files with Notepad or any text editor, as those files are more or less pure ASCII text. If you really start using LaTeX, I suggest you download Texmaker. It’s a free (open-source) LaTeX editor and is available for Linux systems too. If you are using Linux, you can also try out the Kile editor, and if you are using Ubuntu, you can install all you need with “apt-get install kile”. Besides the editor, this should pull in all required packages, including the LaTeX system.
I suppose our text processing environment is now installed and working (well, you probably don’t know yet whether it is working).
Open up your editor, so we can start writing our first LaTeX document. It all begins with the lines
\documentclass[12pt]{report}
\begin{document}
Hello this is our very first \LaTeX-document.
\end{document}
Save it, and if you are using Texmaker, there should be a button with the text “LaTeX”; push it. If the specified paths are correct (you can check them in the settings), the LaTeX system now compiles our document into a DVI file. The animal button next to the LaTeX button opens up a DVI file viewer, and you should see your document there.
The commands given above first define the type of our document, which is “report” with font size 12pt. All options in [] are optional, while arguments in {} are required. Without the required arguments the compilation will fail. You have probably noticed that all LaTeX commands begin with \. The \begin{document} command opens the document environment and \end{document} ends it. Everything you write and want to be seen in the final document must be written inside the document environment.
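To illustrate the optional-versus-required distinction, here is a slightly bigger example document (the class options and section names are just examples I made up for this post):

```latex
\documentclass[12pt,a4paper]{report} % [...] holds optional options, {...} is required
\begin{document}
\chapter{My first chapter}
\section{A section}
Hello, this is our second \LaTeX\ document, now with some structure.
\end{document}
```

Compile it the same way as before; the report class numbers the chapter and section for you automatically.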
More information on LaTeX resides in the “doc” folder inside your MiKTeX installation (if documentation was installed). There is also a folder named ‘guide’ or ‘guides’ containing a tutorial to the LaTeX system. You can also find a lot of information on LaTeX with the help of Google.
(I can’t write a complete tutorial here, as this is only a blog posting, nor could I do it anyway, as I’m not very familiar with LaTeX myself. If you find these instructions too complicated to understand, leave a comment and I’ll try to help.)
|
{}
|
Hey Tobias!
> We say that one probability space \$$(\Omega',{\mathcal B}', {\bf P}')\$$ *extends* another \$$(\Omega,{\mathcal B}, {\bf P})\$$ if there is a surjective map \$$\pi: \Omega' \rightarrow \Omega\$$ which is measurable (i.e. \$$\pi^{-1}(E) \in {\mathcal B}'\$$ for every \$$E \in {\mathcal B}\$$) and probability preserving (i.e. \$${\bf P}'(\pi^{-1}(E)) = {\bf P}(E)\$$ for every \$$E \in {\mathcal B}\$$).
|
{}
|
# AtomTypes
This page gives hints on how to specify the types of atoms that form the system.
## Introduction¶
ABINIT needs to know the different types of atoms that form the system. The atoms assembled in a molecule or a solid are physically specified by their nuclear charge (and their mass for dynamical properties).
However, in a pseudopotential or PAW approach, the knowledge of the nuclear charge does not define the potential felt by the electrons; only the atomic data file (pseudopotential or PAW) defines it. Thus, in addition to the number of types of atoms ntypat and their nuclear charges znucl, ABINIT needs to know which pseudopotential/PAW file to use for each type of atom. The latter are specified in the “files” file. Unless alchemical potentials are used (see later), the number of pseudopotentials to be read, npsp, is the same as ntypat. Note that one cannot mix norm-conserving pseudopotentials with PAW atomic data files in a single ABINIT run, even for different datasets. One has to stick either to norm-conserving pseudopotentials or to PAW.
More on the pseudos/PAW in topic_PseudosPAW.
ABINIT also has a default table of the atomic masses, but this can be superseded by specifying amu.
## Alchemical potentials¶
For norm-conserving pseudopotentials, ABINIT can mix the pseudopotentials, as described in the ABINIT wiki, to create so-called “alchemical potentials”, see mixalch.
In this case, the number of pseudopotentials to be given, npsp, will usually be larger than the number of types of atoms, ntypat. Using alchemical potentials makes sense to treat alloys in which similar ions are present, and whose specific chemical properties are not crucial for the property of interest. Usually it is done only for isovalent species, and ions of quite similar radii. It is a reasonable interpolation technique for the electronic properties.
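As an illustration of the ordinary, non-alchemical case, the atom-type declaration for a two-species system such as GaAs might look like the following (a sketch of an input fragment, not a complete input file):

```
ntypat 2        # two types of atoms
znucl 31 33     # nuclear charges: Ga, As
natom 2
typat 1 2       # atom 1 is of type 1 (Ga), atom 2 of type 2 (As)
# npsp defaults to ntypat, so two pseudopotential/PAW files are read
```

With alchemical potentials, npsp would exceed ntypat and mixalch would give the mixing coefficients for the alchemical types.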
compulsory:
• ntypat Number of TYPes of AToms
• znucl charge -Z- of the NUCLeus
basic:
• amu Atomic Mass Units
• typat TYPe of AToms
useful:
• mixalch MIXing coefficients for ALCHemical potentials
• npsp Number of PSeudoPotentials
• ntypalch Number of TYPe of atoms that are “ALCHemical”
expert:
• algalch ALGorithm for generating ALCHemical pseudopotentials
internal:
• %npspalch Number of PSeudoPotentials that are “ALCHemical”
• %ntyppure Number of TYPe of atoms that are “PURE”
• %ziontypat Z (charge) of the IONs for the different TYPes of AToms
|
{}
|
# How do you write in simplest radical form the coordinates of point A if A is on the terminal side of an angle in standard position whose degree measure is $\theta$: $OA = 8$, $\theta = 120^\circ$?
##### 1 Answer
Mar 6, 2018
Coordinates of $\textcolor{blue}{A \left(-4, 4 \sqrt{3}\right)}$
#### Explanation:
${120}^{\circ}$ is in second quadrant
${A}_{x} = \overline{O A} \cdot \cos \theta = 8 \cdot \cos 120 = 8 \cdot \left(- \cos \left(60\right)\right) = 8 \cdot - \left(\frac{1}{2}\right) = - 4$
$\textcolor{blue}{{A}_{x} = - 4}$
${A}_{y} = \overline{O A} \cdot \sin \theta = 8 \cdot \sin 120 = 8 \cdot \sin 60 = 8 \cdot \left(\frac{\sqrt{3}}{2}\right)$
$\textcolor{blue}{{A}_{y} = 4 \sqrt{3}}$
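The arithmetic can be double-checked numerically — a small sketch, not part of the original answer:

```python
import math

OA = 8.0
theta = math.radians(120)  # convert degrees to radians

A_x = OA * math.cos(theta)  # 8 * (-1/2) = -4
A_y = OA * math.sin(theta)  # 8 * (sqrt(3)/2) = 4*sqrt(3)

print(round(A_x, 6), round(A_y, 6))  # -4.0 6.928203
```

The value 6.928203… is indeed $4\sqrt{3}$ to six decimal places.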
|
{}
|
Banach Journal of Mathematical Analysis
Poisson semigroup, area function, and the characterization of Hardy space associated to degenerate Schrödinger operators
Abstract
Let
$$Lf(x)=-\frac{1}{\omega(x)}\sum_{i,j}\partial_{i}(a_{ij}(\cdot)\partial_{j}f)(x)+V(x)f(x)$$ be the degenerate Schrödinger operator, where $\omega$ is a weight from the Muckenhoupt class $A_{2}$ and $V$ is a nonnegative potential that belongs to a certain reverse Hölder class with respect to the measure $\omega(x)dx$. Based on some smoothness estimates of the Poisson semigroup $e^{-t\sqrt{L}}$, we introduce the area function $S^{L}_{P}$ associated with $e^{-t\sqrt{L}}$ to characterize the Hardy space associated with $L$.
Article information
Source
Banach J. Math. Anal., Volume 10, Number 4 (2016), 727-749.
Dates
Accepted: 6 January 2016
First available in Project Euclid: 31 August 2016
https://projecteuclid.org/euclid.bjma/1472657854
Digital Object Identifier
doi:10.1215/17358787-3649986
Mathematical Reviews number (MathSciNet)
MR3543909
Zentralblatt MATH identifier
1347.42037
Citation
Huang, Jizheng; Li, Pengtao; Liu, Yu. Poisson semigroup, area function, and the characterization of Hardy space associated to degenerate Schrödinger operators. Banach J. Math. Anal. 10 (2016), no. 4, 727--749. doi:10.1215/17358787-3649986. https://projecteuclid.org/euclid.bjma/1472657854
{}
|
• ### A Multi-Planet System Transiting the $V$ = 9 Rapidly Rotating F-Star HD 106315(1701.03807)
April 21, 2017 astro-ph.SR, astro-ph.EP
We report the discovery of a multi-planet system orbiting HD 106315, a rapidly rotating mid F-type star, using data from the K2 mission. HD 106315 hosts a $2.51\pm0.12\,R_\oplus$ sub-Neptune in a 9.5 day orbit, and a $4.31_{-0.27}^{+0.24}\,R_\oplus$ super-Neptune in a 21 day orbit. The projected rotational velocity of HD 106315 (12.9 km s$^{-1}$) likely precludes precise measurements of the planets' masses, but could enable a measurement of the sky-projected spin-orbit obliquity for the outer planet via Doppler tomography. The eccentricities of both planets were constrained to be consistent with 0, following a global modeling of the system that includes a Gaia distance and dynamical arguments. The HD 106315 system is one of few multi-planet systems hosting a Neptune-sized planet for which orbital obliquity measurements are possible, making it an excellent test-case for formation mechanisms of warm-Neptunian systems. The brightness of the host star also makes HD 106315 c a candidate for future transmission spectroscopic follow-up studies.
• ### Astrophysics Source Code Library: Here we grow again!(1611.06219)
Nov. 18, 2016 astro-ph.IM
The Astrophysics Source Code Library (ASCL) is a free online registry of research codes; it is indexed by ADS and Web of Science and has over 1300 code entries. Its entries are increasingly used to cite software; citations have been doubling each year since 2012 and every major astronomy journal accepts citations to the ASCL. Codes in the resource cover all aspects of astrophysics research and many programming languages are represented. In the past year, the ASCL added dashboards for users and administrators, started minting Digital Object Identifiers (DOIs) for software it houses, and added metadata fields requested by users. This presentation covers the ASCL's growth in the past year and the opportunities afforded it as one of the few domain libraries for science research codes.
• ### Implementing Ideas for Improving Software Citation and Credit(1611.06232)
Nov. 18, 2016 cs.DL, astro-ph.IM
Improving software citation and credit continues to be a topic of interest across and within many disciplines, with numerous efforts underway. In this Birds of a Feather (BoF) session, we started with a list of actionable ideas from last year's BoF and other similar efforts and worked alone or in small groups to begin implementing them. Work was captured in a common Google document; the session organizers will disseminate or otherwise put this information to use in or for the community in collaboration with those who contributed.
• ### The H$\alpha$ emission of nearby M dwarfs and its relation to stellar rotation(1611.03509)
Nov. 10, 2016 astro-ph.SR, astro-ph.EP
The high-energy emission from low-mass stars is mediated by the magnetic dynamo. Although the mechanisms by which fully convective stars generate large-scale magnetic fields are not well understood, it is clear that, as for solar-type stars, stellar rotation plays a pivotal role. We present 270 new optical spectra of low-mass stars in the Solar Neighborhood. Combining our observations with those from the literature, our sample comprises 2202 measurements or non-detections of H$\alpha$ emission in nearby M dwarfs. This includes 466 with photometric rotation periods. Stars with masses between 0.1 and 0.6 solar masses are well-represented in our sample, with fast and slow rotators of all masses. We observe a threshold in the mass-period plane that separates active and inactive M dwarfs. The threshold coincides with the fast-period edge of the slowly rotating population, at approximately the rotation period at which an era of rapid rotational evolution appears to cease. The well-defined active/inactive boundary indicates that H$\alpha$ activity is a useful diagnostic for stellar rotation period, e.g. for target selection for exoplanet surveys, and we present a mass-period relation for inactive M dwarfs. We also find a significant, moderate correlation between $L_{\mathrm{H}\alpha}/L_{\mathrm{bol}}$ and variability amplitude: more active stars display higher levels of photometric variability. Consistent with previous work, our data show that rapid rotators maintain a saturated value of $L_{\mathrm{H}\alpha}/L_{\mathrm{bol}}$. Our data also show a clear power-law decay in $L_{\mathrm{H}\alpha}/L_{\mathrm{bol}}$ with Rossby number for slow rotators, with an index of $-1.7 \pm 0.1$.
• ### Improving Software Citation and Credit(1512.07919)
Dec. 24, 2015 cs.DL, astro-ph.IM
The past year has seen movement on several fronts for improving software citation, including the Center for Open Science's Transparency and Openness Promotion (TOP) Guidelines, the Software Publishing Special Interest Group that was started at January's AAS meeting in Seattle at the request of that organization's Working Group on Astronomical Software, a Sloan-sponsored meeting at GitHub in San Francisco to begin work on a cohesive research software citation-enabling platform, the work of Force11 to "transform and improve" research communication, and WSSSPE's ongoing efforts that include software publication, citation, credit, and sustainability. Brief reports on these efforts were shared at the BoF, after which participants discussed ideas for improving software citation, generating a list of recommendations to the community of software authors, journal publishers, ADS, and research authors. The discussion, recommendations, and feedback will help form recommendations for software citation to those publishers represented in the Software Publishing Special Interest Group and the broader community.
• ### CfAIR2: Near Infrared Light Curves of 94 Type Ia Supernovae(1408.0465)
CfAIR2 is a large homogeneously reduced set of near-infrared (NIR) light curves for Type Ia supernovae (SN Ia) obtained with the 1.3m Peters Automated InfraRed Imaging TELescope (PAIRITEL). This data set includes 4607 measurements of 94 SN Ia and 4 additional SN Iax observed from 2005-2011 at the Fred Lawrence Whipple Observatory on Mount Hopkins, Arizona. CfAIR2 includes JHKs photometric measurements for 88 normal and 6 spectroscopically peculiar SN Ia in the nearby universe, with a median redshift of z~0.021 for the normal SN Ia. CfAIR2 data span the range from -13 days to +127 days from B-band maximum. More than half of the light curves begin before the time of maximum and the coverage typically contains ~13-18 epochs of observation, depending on the filter. We present extensive tests that verify the fidelity of the CfAIR2 data pipeline, including comparison to the excellent data of the Carnegie Supernova Project. CfAIR2 contributes to a firm local anchor for supernova cosmology studies in the NIR. Because SN Ia are more nearly standard candles in the NIR and are less vulnerable to the vexing problems of extinction by dust, CfAIR2 will help the supernova cosmology community develop more precise and accurate extragalactic distance probes to improve our knowledge of cosmological parameters, including dark energy and its potential time variation.
• ### Astrophysics Source Code Library Enhancements(1411.2031)
Nov. 7, 2014 cs.DL, astro-ph.IM
The Astrophysics Source Code Library (ASCL; ascl.net) is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This presentation covers these recent changes to the ASCL.
• ### The Past, Present and Future of Astronomical Data Formats(1411.0996)
Nov. 4, 2014 astro-ph.IM
The future of astronomy is inextricably entwined with the care and feeding of astronomical data products. Community standards such as FITS and NDF have been instrumental in the success of numerous astronomy projects. Their very success challenges us to entertain pragmatic strategies to adapt and evolve the standards to meet the aggressive data-handling requirements of facilities now being designed and built. We discuss characteristics that have made standards successful in the past, as well as desirable features for the future, and an open discussion follows.
• ### Ideas for Advancing Code Sharing (A Different Kind of Hack Day)(1312.7352)
Dec. 27, 2013 cs.DL, astro-ph.IM
How do we as a community encourage the reuse of software for telescope operations, data processing, and calibration? How can we support making codes used in research available for others to examine? Continuing the discussion from last year Bring out your codes! BoF session, participants separated into groups to brainstorm ideas to mitigate factors which inhibit code sharing and nurture those which encourage code sharing. The BoF concluded with the sharing of ideas that arose from the brainstorming sessions and a brief summary by the moderator.
• ### Astrophysics Source Code Library: Incite to Cite!(1312.6693)
Dec. 23, 2013 cs.DL, astro-ph.IM
The Astrophysics Source Code Library (ASCL, http://ascl.net/) is an online registry of over 700 source codes that are of interest to astrophysicists, with more being added regularly. The ASCL actively seeks out codes as well as accepting submissions from the code authors, and all entries are citable and indexed by ADS. All codes have been used to generate results published in or submitted to a refereed journal and are available either via a download site or from an identified source. In addition to being the largest directory of scientist-written astrophysics programs available, the ASCL is also an active participant in the reproducible research movement with presentations at various conferences, numerous blog posts and a journal article. This poster provides a description of the ASCL and the changes that we are starting to see in the astrophysics community as a result of the work we are doing.
• ### The Astrophysics Source Code Library: Where do we go from here?(1312.5334)
Dec. 18, 2013 astro-ph.IM
The Astrophysics Source Code Library, started in 1999, has in the past three years grown from a repository for 40 codes to a registry of over 700 codes that are now indexed by ADS. What comes next? We examine the future of the ASCL, the challenges facing it, the rationale behind its practices, and the need to balance what we might do with what we have the resources to accomplish.
• ### Astrophysics Source Code Library(1212.1916)
Dec. 9, 2012 cs.DL, astro-ph.IM
The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.
• ### Bring out your codes! Bring out your codes! (Increasing Software Visibility and Re-use)(1212.1915)
Dec. 9, 2012 cs.SE, cs.DL, astro-ph.IM
Progress is being made in code discoverability and preservation, but as discussed at ADASS XXI, many codes still remain hidden from public view. With the Astrophysics Source Code Library (ASCL) now indexed by the SAO/NASA Astrophysics Data System (ADS), the introduction of a new journal, Astronomy & Computing, focused on astrophysics software, and the increasing success of education efforts such as Software Carpentry and SciCoder, the community has the opportunity to set a higher standard for its science by encouraging the release of software for examination and possible reuse. We assembled representatives of the community to present issues inhibiting code release and sought suggestions for tackling these factors. The session began with brief statements by panelists; the floor was then opened for discussion and ideas. Comments covered a diverse range of related topics and points of view, with apparent support for the propositions that algorithms should be readily available, code used to produce published scientific results should be made available, and there should be discovery mechanisms to allow these to be found easily. With increased use of resources such as GitHub (for code availability), ASCL (for code discovery), and a stated strong preference from the new journal Astronomy & Computing for code release, we expect to see additional progress over the next few years.
|
{}
|
# Orthonormal basis
• Apr 3rd 2011, 05:18 AM
mechaniac
Orthonormal basis
let $(e_{1},e_{2},e_{3})$ be a positively oriented orthonormal basis.
Define a new basis $(f_{1},f_{2},f_{3})$
* Show that $(f_{1},f_{2},f_{3})$ is also an orthonormal basis.
* determine the coordinates of the vector $u = 7e_{1}+1e_{2}-4e_{3}$ in the basis $(f_{1},f_{2},f_{3})$
and:
$f_{1}= \frac{1}{3}e_{1} - \frac{2}{3}e_{2} + \frac{2}{3}e_{3}$
$f_{2}= \frac{2}{3}e_{1} + \frac{2}{3}e_{2} + \frac{1}{3}e_{3}$
$f_{3}= -\frac{2}{3}e_{1} + \frac{1}{3}e_{2} + \frac{2}{3}e_{3}$
I thought this would give the coordinates of the vector u:
$\frac{1}{3}\left( \begin{array}{ccc} 1 & -2 & 2 \\ 2 & 2 & 1 \\ -2 & 1 & 2 \end{array} \right)\left( \begin{array}{c} 7 \\ 1 \\ -4 \end{array} \right) = \left( \begin{array}{c} -1 \\ 4 \\ -7 \end{array} \right)$
$u=(-1,3,-7)$
I need some advice on the first problem: how to show that $(f_{1},f_{2},f_{3})$ is an orthonormal basis too.
Thanks!
Edit: I don't need help showing that $(f_{1},f_{2},f_{3})$ is an orthonormal basis. Figured it out right after I posted :)
• Apr 3rd 2011, 06:05 AM
Deveno
Your calculations look OK to me, but you have a typo: $u = (-1,4,-7)$, not $(-1,3,-7)$. Verifying that $(f_1,f_2,f_3)$ is orthonormal is just a matter of computing the 9 inner products; or you could notice that the change-of-basis matrix $U$ has the property $U^{-1} = U^T$, and is thus orthogonal and preserves inner products.
• Apr 3rd 2011, 06:30 AM
mechaniac
Oops, missed the typo; guess I need a break :D ... thanks for the help!
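As a quick sanity check (a sketch, not part of the original thread), both facts can be verified numerically; the matrix `F` below has the $f$-vectors from the problem as rows, so orthonormality is equivalent to `F @ F.T` being the identity:

```python
import numpy as np

# Rows of F are f1, f2, f3 expressed in the e-basis.
F = np.array([[ 1, -2,  2],
              [ 2,  2,  1],
              [-2,  1,  2]]) / 3.0

# Orthonormal basis  <=>  F @ F.T is the identity matrix.
assert np.allclose(F @ F.T, np.eye(3))

# Coordinates of u = 7e1 + 1e2 - 4e3 in the f-basis.
u_e = np.array([7.0, 1.0, -4.0])
u_f = F @ u_e
print(u_f)  # [-1.  4. -7.]
```

This confirms the corrected answer $u = (-1, 4, -7)$ in the new basis.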
|
{}
|
# Fractional Calculus Power Rule Derivation
I know the proof for the fractional calculus power rule using the definition of the Riemann-Liouville Fractional Integral, $_{c}D_x^{-\nu}f(x) = \frac{1}{\Gamma({\nu})}\int_{c}^{x}(x-t)^{\nu - 1}f(t)dt$, but I don't understand one part in the process.
$D^{- \nu}x^{\mu} = \frac{1}{\Gamma({\nu})}\int_{0}^{x}(x-t)^{\nu - 1}t^{\mu}dt$
$= \frac{1}{\Gamma({\nu})}\int_{0}^{x}(1-\frac{t}{x})^{\nu - 1}x^{\nu - 1}t^{\mu}dt$
which equals, $\frac{1}{\Gamma({\nu})}\int_{0}^{1}(1-u)^{\nu - 1}x^{\nu -1}{(xu)}^{\mu}xdu$, $(u = \frac{t}{x})$.
At the last step I don't understand why the limits of integration change from $0$ to $x$ into $0$ to $1$; everything else makes sense. I kind of see how it works, but I need an in-depth explanation of why it works.
When making the substitution $t=xu$ we incorporate the integral bounds.
Since $0 \leq t \leq x$ we obtain $0 \leq xu \leq x$. Dividing by $x$ yields $0 \leq u \leq 1$, which gives the new integral bounds.
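Once the limits are $0 \leq u \leq 1$, the $u$-integral is a Beta function, and the derivation ends in the standard power rule $D^{-\nu}x^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu+\nu+1)}x^{\mu+\nu}$. A small numeric sketch (parameter values chosen arbitrarily) confirming this:

```python
import math

# Check:  D^{-nu} x^mu  =  Gamma(mu+1) / Gamma(mu+nu+1) * x^(mu+nu)
# nu, mu, x are arbitrary test values (nu > 1 keeps the integrand bounded).
nu, mu, x = 1.5, 2.0, 2.0

# Midpoint rule for the substituted integral  int_0^1 (1-u)^(nu-1) u^mu du.
n = 200_000
h = 1.0 / n
integral = sum((1 - (i + 0.5) * h) ** (nu - 1) * ((i + 0.5) * h) ** mu
               for i in range(n)) * h

numeric = x ** (mu + nu) / math.gamma(nu) * integral
closed = math.gamma(mu + 1) / math.gamma(mu + nu + 1) * x ** (mu + nu)
print(numeric, closed)  # the two values agree
```

The factor $\Gamma(\nu)$ from the definition cancels against the $\Gamma(\nu)$ inside the Beta function $B(\mu+1,\nu)$, which is why it does not appear in the final power rule.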
|
{}
|
# Thread: Finding x and y
1. ## Finding x and y
Can someone show me how to find x and y in this question?
2. Originally Posted by user_5
Can someone show me how to find x and y in this question?
The triangles are similar, so the corresponding sides are proportional.
i.e. $15 = 10k, 24 = kx, y = 12k$.
Solve for $k$, then you can solve for $x$ and $y$.
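Following the reply's proportions through numerically (a sketch; the original figure is not shown here, so the correspondences $15 = 10k$, $24 = kx$, $y = 12k$ are taken as given):

```python
# Similar triangles: corresponding sides are proportional with ratio k.
k = 15 / 10    # from 15 = 10k
x = 24 / k     # from 24 = kx
y = 12 * k     # from y  = 12k
print(k, x, y)  # 1.5 16.0 18.0
```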
|
{}
|
# Transverse Wave Problem
1. Nov 20, 2005
### Ginny Mac
Here is a transverse wave problem:
Consider transverse waves moving along a stretched spring fixed to an oscillator. Suppose the mass of the spring is 10 grams, its relaxed (not stretched) length is 1.00 meters, and its spring constant (if stretched) is 5.0 N/cm. The spring is tensioned via a pulley wheel and a hanging weight. The final length of the stretched spring is 1.50 meters (from the oscillator to the weight). How fast will transverse waves propagate along the stretched spring?
I am using v= square root of:(tension over mass density)
Initial mass density: 0.01 kg/1.00 m = 0.01 kg/m
Final mass density: 0.01 kg/1.50 m = 0.0067 kg/m
T = mass (weight) * gravity = x kg*9.8 m/s^2
So now to find T....
I have tried F = -kx, where mg= -kx, and T=-F. Therefore, T = (-500 N/m * 1.50 m) = 750 N, or kgm/s^2.
So... v = square root of: (750 N/0.0067 kg/m) = 334.6 m/s
I am just a little unsure. Could somebody please comment, whether right or wrong? If wrong, would you mind pointing me in the right direction? Thank you - I appreciate it very much.
2. Nov 21, 2005
### mezarashi
The tension in the spring would be T = -kx, where k = 5N/cm and x = 50cm, as there is zero tension when the spring is at 100cm.
You should apply the solution to the differential equation, which has
$$\beta = \frac{\omega}{V}$$
$$V = \sqrt{\frac{T}{\rho}}$$
where V is the transverse wave velocity, T is the tension in the string, and rho is the linear mass density (mass/length). Using that formula, my answer differs.
The derivations here are quite lengthy, involving elastic theory + the wave equation, but I'll review if you'd like.
3. Nov 21, 2005
### Ginny Mac
If you have a moment, would you mind reviewing this? I appreciate it. Here is my dilemma this semester : I am confused as to how I ended up in a Calc-based Physics class (eek!) but I am trudging along best I can! Thank you very much for your help -
Ginny
4. Nov 21, 2005
### mezarashi
The first step is to derive the wave equation from the case of the classical string. In the case of transverse waves on the string, we see that the mass particles move up and down in the y direction, thus we will look at the forces involved in the y-direction.
For any small differential element $$\delta x$$ we can resolve the forces acting in the y-direction as the tension acting at both sides at different angles.
$$F_y = T_1 \sin a - T_2 \sin b$$
But we also know that the horizontal component of the tension is the same at both ends: $$T = T_1 \cos a = T_2 \cos b$$. Using this relationship we can simplify the equation to:
$$F_y = T (\tan a - \tan b)$$
We also notice that the tangent of the angle is just the slope of the string: here $$\tan a$$ is the slope $$dy/dx$$ at the point $$x + \delta x$$, and $$\tan b$$ is the slope at $$x$$.
Applying Newton's law $$F = ma$$, i.e. $$F = m\frac{d^2 y}{dt^2}$$ with $$m = \rho \delta x$$:
$$\rho \delta x \frac{d^2 y}{dt^2} = T \left( \left. \frac{dy}{dx} \right|_{x+\delta x} - \left. \frac{dy}{dx} \right|_{x} \right)$$
where rho represents the linear mass density. Dividing by $$\delta x$$ and letting the differential element approach zero, our equation becomes
$$\frac{\rho}{T} \frac{d^2 y}{dt^2} = \frac{d^2 y}{dx^2}$$
which is our wave equation (d'Alembert's wave equation in 1 dimension) if we substitute $$c^2 = \frac{T}{\rho}$$.
Now you can use theories from your study of differential equations to solve, so you have the solution to be in the possible form of:
$$y = A e^{j(\omega t - \beta x)} + B e^{j(\omega t + \beta x)}$$
Properties of the wave equation tells us that $$\beta = \frac{\omega}{v}$$ where $$v = c = \sqrt{\frac{T}{\rho}}$$
I personally believe that the understanding of the solution to the wave problem and the meaning of the terms in the final wave equation solution is the most important part of all. I remember such classes being tough too >.< Good luck!
Last edited: Nov 21, 2005
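Plugging the corrected numbers from this thread into $$v = \sqrt{T/\rho}$$ (a sketch; it assumes, as seems physically sensible, that the 10 g of mass is spread over the stretched 1.50 m length):

```python
import math

# Numbers from the thread, with mezarashi's correction: the spring is
# stretched 50 cm beyond its relaxed length, not the full 1.50 m.
k = 5.0 * 100          # 5 N/cm = 500 N/m
stretch = 0.50         # m beyond the relaxed 1.00 m
T = k * stretch        # tension = 250 N
rho = 0.010 / 1.50     # kg/m, mass over the stretched length

v = math.sqrt(T / rho)
print(v)               # about 193.6 m/s
```

This is noticeably slower than the 334.6 m/s in the original attempt, which used the full stretched length as the spring's extension.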
|
{}
|
# What is the integral `int 1/(e^x + e^(-x)) dx`?
You need to rewrite the denominator of the integrand using the negative exponent property, such that:
`int 1/(e^x + e^(-x))dx = int 1/(e^x + 1/e^x)dx`
`e^x + 1/e^x = (e^(2x) + 1)/e^x`
Replacing the denominator with `(e^(2x) + 1)/e^x` yields:
`int 1/(e^x + e^(-x))dx = int 1/((e^(2x) + 1)/e^x) dx`
`int 1/(e^x + e^(-x))dx = int e^x/(e^(2x) + 1) dx`
You need to come up with the substitution `e^x = y` , such that:
`e^x = y => e^x dx = dy`
`int e^x/(e^(2x) + 1) dx = int 1/(y^2 + 1) dy`
`int 1/(y^2 + 1) dy = tan^(-1) y + c`
Replacing back `e^x` for `y` yields:
`int 1/(e^x + e^(-x))dx = tan^(-1) (e^x) + c `
c represents an arbitrary real constant.
Hence, evaluating the indefinite integral, using suggested substitution, yields `int 1/(e^x + e^(-x))dx = tan^(-1) (e^x) + c .`
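The result is easy to double-check: the derivative of the antiderivative should reproduce the original integrand. A small numeric sketch using a central finite difference:

```python
import math

def integrand(x):
    # The original integrand 1/(e^x + e^(-x)).
    return 1.0 / (math.exp(x) + math.exp(-x))

def antiderivative(x):
    # The claimed antiderivative arctan(e^x) (constant c omitted).
    return math.atan(math.exp(x))

# Central difference should match the integrand at every sample point.
h = 1e-6
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    diff = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(diff - integrand(x)) < 1e-6
print("check passed")
```

The constant of integration drops out of the difference quotient, so it plays no role in the check.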
|
{}
|