Under what circumstances would the large internal input resistance of an actual op amp affect circuit operation?
I was able to find a lot about why the input resistance is high and basically infinite. I understand that the input resistance is high so that it doesn't become a load on the signal. I also know that it makes sense like a voltage divider, the high impedance means that all of the voltage drops on the op amp. But I can't find any research on a case where the high resistance might actually affect the circuit itself (despite how high it is).
• "the high impedance means that all of the voltage drops on the op amp" That's a weird sentence. You probably mean: the opamp's high input impedance means that it does not load the voltage divider, so the value of the voltage is not affected. Oct 9, 2018 at 9:26
• "But I can't find any research on a case where the high resistance might actually affect the circuit itself (despite how high it is)." Why would it be an issue? The high input impedance is there to not affect the circuit, so why do you look for examples where there is an effect? Oct 9, 2018 at 9:31
• @Bimpelrekkie I want to know because despite the fact that it was built to not affect things, I imagine that there are scenarios where it could and want to know what these may be. Because I'm curious and maybe it could be an issue one day and I'd never know. Oct 9, 2018 at 9:34
• If you've designed an op-amp circuit and it is susceptible to problems due to high input impedance then you have designed it wrongly. Oct 9, 2018 at 9:37
• Instead of learning about what could be an issue, learn how to do things right. Suppose I make a list of things not to do and one with good practices. Which list will be longer? If you design an opamp circuit and the feedback resistors have the same value as the opamp's input impedance (assuming that would even be possible with a CMOS opamp with 1 teraohm input resistance) then, as Andy says, you're doing it wrong. So use reasonable resistor values. Oct 9, 2018 at 9:39
Whether it's an inverting or non-inverting circuit, if the feedback resistance $R_F$ and the other resistor connected to the inverting input terminal (usually called $R_1$) get as large as the op-amp's internal input resistance, then that internal input resistance begins to affect the circuit.
• "begins to affect"? According to my experience, a noticeable influence of the finite opamp input impedance can already be observed when the effective external resistance at one of the input nodes is smaller than the opamp's input resistance by only a factor of 5 to 10.
– LvW
Oct 9, 2018 at 10:19
• you're right @LvW. i was understating it. Oct 9, 2018 at 21:05
Every op-amp parameter that is normally neglected during calculation will affect the operation (the gain) of the amplifier: input and output impedances, finite and frequency-dependent open-loop gain, and so on.
However, in most cases we do not care about these effects because they are either not too important (the error is acceptable) or are overshadowed by the tolerances of the external feedback network and/or parasitic influences of the hardware realisation (pin capacitances, ...).
This is a typical example of the fact that in electronics no formula is 100% correct. It is simply not possible, and makes no sense, to include every known physical effect in our expressions, functions and formulas. One of the most challenging engineering tasks is to decide whether, for a specific application, the simplified expression can be applied with sufficient accuracy.
As far as the finite input impedance is concerned, we follow a general rule: each of the external resistors should be small compared with this input resistance. But what does "small" mean? A factor of 10, 100 or 1000? The answer depends on the required accuracy of the gain value. In most cases, though, the resistor tolerances matter more.
As an example, here is the gain expression (non-inverting, ideal value $1+R_2/R_1$) for resistive feedback when the input and output resistances are taken into account:
$$G=\frac{N}{D}$$ with
$$N=E_o R_2 R_{in} + E_o R_1 R_{in} + R_1 R_{out}$$
$$D=E_o R_1 R_{in} + R_{in} R_{out} + R_2 R_{in} + R_1 R_{out} + R_1 R_{in} + R_1 R_2$$
Note that the open-loop gain $E_o$ is set to a fixed value here. You can imagine how the expression would look with a realistic frequency-dependent open-loop gain for a two-pole model:
$$E_o(s)=\frac{E_{oo}}{(1+s/\omega_1)(1+s/\omega_2)}$$
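As a quick numerical sketch of the fixed-gain expression above (component values chosen arbitrarily for illustration, not taken from any datasheet):

```python
# Numerical sketch of the non-ideal non-inverting gain expression above.
# Ideal gain is 1 + R2/R1; default values are arbitrary illustrations.
def noninverting_gain(R1, R2, Eo=1e5, Rin=1e12, Rout=100.0):
    N = Eo * R2 * Rin + Eo * R1 * Rin + R1 * Rout
    D = (Eo * R1 * Rin + Rin * Rout + R2 * Rin
         + R1 * Rout + R1 * Rin + R1 * R2)
    return N / D

ideal = 1 + 9e3 / 1e3                                  # 10x target
good = noninverting_gain(1e3, 9e3, Eo=1e3, Rin=1e12)   # Rin >> resistors
bad = noninverting_gain(1e3, 9e3, Eo=1e3, Rin=10e3)    # Rin near resistor values
# 'bad' lands farther from the ideal gain than 'good' does
```

With a low open-loop gain the effect of a comparable Rin becomes visible; with a very high open-loop gain the feedback hides it almost entirely.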
If you have a low-gain opamp, perhaps a high-frequency part where you are just happy as a clam to have a 1 GHz UGBW (unity-gain bandwidth) and you are achieving it with bipolar SiGe transistors, and you want high gain accuracy out where the gain margin is poor (near the closed-loop F3dB), then the voltage across the "virtual ground" between the (-) and (+) inputs becomes significant, and the additional current through the internal differential-mode resistance must be included in the modeling.
Examine this comparison. The opamp model was edited to make UGBW = 1 GHz, keeping the high-value gain-set resistors, and RIn was then varied: in the left graph RIn is 2 MegOhms; in the middle, 2 KOhms; in the right graph RIn is 2 KOhms and the open-loop gain is reduced from 100 dB to 60 dB.
• Your audience won't know what UGBW means Oct 9, 2018 at 11:04
You have it a little backwards. In an ideal op amp, there is no current entering the amplifier inputs. The behavior deviates from ideal when this is not the case, meaning the equations are no longer accurate.
Thus, manufacturers make op amps with high input impedance so the behavior approaches ideal.
---
Complete metric space
• January 16th 2009, 12:39 PM
aliceinwonderland
Complete metric space
(a)Give an example of two metric spaces $(X_{1}, d_{1})$ and $(X_{2}, d_{2})$ which are topologically equivalent and for which $(X_{1}, d_{1})$ is complete and $(X_{2}, d_{2})$ is not.
(b)Give an example of a set X with two equivalent metrics d and d' for which (X, d) is complete and (X, d') is not.
• January 16th 2009, 11:26 PM
aliceinwonderland
Quote:
Originally Posted by aliceinwonderland
(a)Give an example of two metric spaces $(X_{1}, d_{1})$ and $(X_{2}, d_{2})$ which are topologically equivalent and for which $(X_{1}, d_{1})$ is complete and $(X_{2}, d_{2})$ is not.
(b)Give an example of a set X with two equivalent metrics d and d' for which (X, d) is complete and (X, d') is not.
(a) Consider $f:(- \pi /2, \pi /2) \rightarrow \mathbb{R}$ given by $f(x) = \tan(x)$.
Since $f$ is a homeomorphism, $(- \pi /2, \pi /2)$ is topologically equivalent to $\mathbb{R}$ with the usual metric $d$.
$\mathbb{R}$ is complete with the usual metric, but $(- \pi /2, \pi /2)$ is not: the sequence $x_n = \pi/2 - 1/n$ is Cauchy but has no limit in the interval.
(b) Consider the set $X = \{1/k : k = 1, 2, 3, \dots\}$ with the usual metric $d$ and the discrete metric $d'$ (that is, $d'(x,x)=0$ and $d'(x,y)=1$ for $x \neq y$, $x,y \in X$).
The two metrics are equivalent because every point of $X$ is isolated under $d$ (each $1/k$ has a neighbourhood containing no other point of $X$), so both metrics induce the discrete topology on $X$.
$(X,d)$ is not complete, since $(1/k)$ is a Cauchy sequence in $d$ whose limit $0$ is not in $X$.
$(X, d')$ is complete, since every $d'$-Cauchy sequence is eventually constant and therefore converges in $X$.
---
# How to convert a latex document with Tikz/Pgf images and Latexmk into HTML?
I am hoping this is not a duplicate, but I was googling around so much that I can't remember where I started.
But I have a latex document that has some tikz and pgf images in it. I usually use latexmk to compile the file to pdf. Now, I wanted to see if I could compile this file to HTML, so that I can display the tikz images and such on the web. But I was not sure how to do this or what worked.
I tried to use make4ht, but it was only partially rendering my tikz images. Some of the macros that I had set up were not being rendered properly in the final output. The code I used was:
make4ht -d html -f html5+latexmk_build+mjcli myfile.tex "mathjax"
I also tried pandoc, converting from latex to markdown, and then from markdown to HTML. This also did not render the tikz blocks correctly. I did add the --mathjax flag, but I don't think that mathjax supports tikz right now.
pandoc -s myfile.tex -o test.md
pandoc -s --katex test.md -o test.html
Can anyone suggest the best approach they have found for converting tex documents with tikz diagrams into HTML webpages?
UPDATED
As per the comments by @michal.h21 I tried a different driver. That helped, but I am still getting formatting issues in the output. I am posting a specific example of a simple neural network. Perhaps I need to change the way the code is written to better fit the TikZ driver?
\usepackage{tikz}
\usetikzlibrary{shapes.multipart, positioning, decorations.markings,
arrows.meta, calc, fit}
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=\layersep]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=25pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=red!50];
\tikzstyle{output neuron}=[neuron, fill=orange!50];
\tikzstyle{hidden neuron}=[neuron, fill=green!50];
% Draw the input layer nodes
\foreach \name / \y in {1,...,3}
% This is the same as writing \foreach \name / \y in {1/1,2/2,3/3,4/4}
\node[input neuron] (I-\name) at (0,-\y) {$x_{\y}$};
% Draw the output layer node
\node[output neuron,pin={[pin edge={->}]right:$\hat{y}$}, right of=I-2] (O) {$\sigma$};
% Connect every node in the hidden layer with the output layer
\foreach \source in {1,...,3}
\path (I-\source) edge node[above]{$w_{\source}$} (O) ;
\end{tikzpicture}
Here is the corresponding picture. The text in the nodes, like $x_1$, is missing.
There are some other issues with some tikz macros, but I can work on those in a subsequent post.
• For make4ht, it is important to use the alternative TikZ driver, because the one used by default supports only basic text formatting in text nodes. If you use custom commands in math, then you can try the "mathml,mathjax" option. If that doesn't work well, you can define your commands in the .cfg file; see this section in the documentation. – michal.h21 Mar 27 at 16:20
• @michal.h21 Oh very interesting. Thanks so much, I did not know about these. Let me give it a try. – krishnab Mar 27 at 16:23
• @michal.h21 Okay, I tried the dvisvgm option and it looks much better, but it's not quite there; some details are still missing. Let me update the OP with a specific example. It might be that I need to rewrite the tikz code to better map to the TikZ driver? But I really appreciate your input here. – krishnab Mar 27 at 16:37
• I've posted an answer that shows use of the alternative driver; it seems to work on your MWE. – michal.h21 Mar 27 at 17:04
Here is a modified version of your MWE that uses the alternative TikZ driver for TeX4ht:
\documentclass{article}
\ifdefined\HCode
\def\pgfsysdriver{pgfsys-dvisvgm4ht.def}
\fi
\def\layersep{2cm}
\usepackage{tikz}
\usetikzlibrary{shapes.multipart, positioning, decorations.markings,
arrows.meta, calc, fit}
\begin{document}
\begin{tikzpicture}[shorten >=1pt,->,draw=black!50, node distance=\layersep]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black!25,minimum size=25pt,inner sep=0pt]
\tikzstyle{input neuron}=[neuron, fill=red!50];
\tikzstyle{output neuron}=[neuron, fill=orange!50];
\tikzstyle{hidden neuron}=[neuron, fill=green!50];
% Draw the input layer nodes
\foreach \name / \y in {1,...,3}
% This is the same as writing \foreach \name / \y in {1/1,2/2,3/3,4/4}
\node[input neuron] (I-\name) at (0,-\y) {$x_{\y}$};
% Draw the output layer node
\node[output neuron,pin={[pin edge={->}]right:$\hat{y}$}, right of=I-2] (O) {$\sigma$};
% Connect every node in the hidden layer with the output layer
\foreach \source in {1,...,3}
\path (I-\source) edge node[above]{$w_{\source}$} (O) ;
\end{tikzpicture}
\end{document}
The \ifdefined\HCode condition is used in order to ensure that this driver is used only with TeX4ht. You don't want to use it in the PDF output. I had to define the \layersep macro, as it was undefined in your MWE. It is used for the node distance, so you may want to change it according to your needs.
I've compiled it just using
make4ht -m draft sample.tex
This is the result:
• Ahh perfect. This totally worked for me. Thanks so much for the code to do this, it works now. – krishnab Mar 27 at 17:18
---
# Cambridge A Level - Questions on Combinations
I have learnt combinations and I have been attempting these two questions but couldn't solve them:
1) In a mixed pack of coloured light bulbs there are three red bulbs, one yellow bulb, one blue bulb and one green bulb. Four bulbs are selected at random from the pack. How many different selections are possible?
I did 3C1 * 1C1 * 1C1 * 1C1 = 3
2) There are 20 teachers at a conference. Of these, 8 are maths teachers, 6 are history teachers, 4 are physics teachers and 2 are geography teachers.
Four of the teachers are to be chosen at random to take part in a quiz.
In how many different ways can the teachers be chosen if there are to be at least two maths teachers?
So I did 8C2 * 18C2 = 4284
Which also wasn't the answer. Could someone please tell me what I am doing wrong?
As regards the first problem, divide it into pairwise disjoint cases where you select $k=1,2,3$ red bulbs. Then, by using the "rule of sum", the number of ways is $$\sum_{k=1}^3 \binom{3}{4-k}=1+3+3=7.$$ Here you can also enumerate the seven cases explicitly: RBGY ($k=1$), RRBG, RRBY, RRGY ($k=2$), RRRB, RRRG, RRRY ($k=3$).
For the second one, consider the complement case when you have $0$ math teachers, $m_0=\binom{20-8}{4}$ ways, and $1$ math teacher, $m_1=?$ ways. Then the number of ways such that teachers can be chosen with at least two maths teachers is $$\binom{20}{4}-m_0-m_1=\binom{20}{4}-\binom{20-8}{4}-?.$$ Can you take it from here?
• @kimchiboy03 See my edited answer. Are you able to evaluate $m_0$ and $m_1$? – Robert Z Sep 14 '17 at 7:40
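Both counts can be checked by brute-force enumeration. In this sketch the identical red bulbs are labelled only so that itertools can enumerate them; selections are then compared as colour multisets:

```python
# Brute-force check of both combinatorics answers above.
from itertools import combinations
from math import comb

# Problem 1: distinct selections of 4 bulbs from {R, R, R, Y, B, G}.
bulbs = ["R1", "R2", "R3", "Y", "B", "G"]
colour = {"R1": "R", "R2": "R", "R3": "R", "Y": "Y", "B": "B", "G": "G"}
selections = {tuple(sorted(colour[b] for b in pick))
              for pick in combinations(bulbs, 4)}
print(len(selections))   # 7 distinct selections

# Problem 2: at least two maths teachers, via the complement over 0 and 1.
teachers = comb(20, 4) - comb(12, 4) - comb(8, 1) * comb(12, 3)
print(teachers)          # 2590
```

The second count can also be confirmed directly by summing over exactly 2, 3 and 4 maths teachers: C(8,2)C(12,2) + C(8,3)C(12,1) + C(8,4) = 1848 + 672 + 70 = 2590.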
---
# Choosing Reflection or Refraction in Path Tracing
I am trying to implement refraction and transmission in my path tracer and I'm a bit unsure on how to implement it. First, some background:
When light hits a surface, a portion of it will reflect, and a portion will be refracted:
How much light reflects vs. refracts is given by the Fresnel Equations
In a recursive ray tracer, the simple implementation would be to shoot a ray for reflection and a ray for refraction, then do a weighted sum using the Fresnel. \begin{align*} R &= Fresnel()\\ T &= 1 - R\\ L_{\text{o}} &= R \cdot L_{\text{i,reflection}} + T \cdot L_{\text{i,refraction}} \end{align*}
However, in path tracing, we only choose one path. This is my question:
• How do I choose whether to reflect or refract in a non-biased way
My first guess would be to randomly choose based on the Fresnel. Aka:
float p = randf();
float fresnel = Fresnel();
if (p <= fresnel) {
// Reflect
} else {
// Refract
}
Would this be correct? Or do I need some kind of correction factor, since I'm not taking both paths?
• russian roulette – v.oddou May 30 '16 at 1:46
## TL;DR
Yes, you can do it like that, you just have to divide the result by the probability of choosing the direction.
The topic of sampling in path tracers allowing materials with both reflection and refraction is actually a little bit more complex.
Let's start with some background first. If you allow BSDFs - not just BRDFs - in your path tracer, you have to integrate over the whole sphere instead of just the positive hemisphere. Monte Carlo samples can be generated by various strategies: for the direct illumination you can use BSDF and light sampling, for the indirect illumination the only meaningful strategy usually is the BSDF sampling. The sampling strategies themselves usually contain the decision about which hemisphere to sample (e.g. whether reflection or refraction is computed).
In the simplest version, the light sampling usually doesn't take much care about reflection or refraction. It samples the light sources or the environment map (if present) with respect to the light properties. You can improve sampling of environment maps by picking just the hemisphere in which the material has non-zero contribution, but the rest of the material properties are usually ignored. Note that for an ideally smooth Fresnel material, light sampling doesn't work.
For BSDF sampling, the situation is much more interesting. The case you described deals with an ideal Fresnel surface, where there are only two contributing directions (since a Fresnel BSDF is in fact just a sum of two delta functions). You can easily split the integral into a sum of two parts - one for reflection and one for refraction. Since, as you mentioned, we don't want to go in both directions in a path tracer, we have to pick one. This means that we want to estimate a sum of numbers by picking just one of them. This can be done by discrete Monte Carlo estimation: pick one of the addends randomly and divide it by the probability of it being picked. In an ideal case you want the sampling probability proportional to the addends, but since we don't know their values (we wouldn't have to estimate the sum if we knew them), we approximate them by neglecting some of the factors. In this case, we ignore the incoming light amount and use just the Fresnel reflectance/transmittance as our estimates.
The BSDF sampling routine for the case of a smooth Fresnel surface is, therefore, to pick one of the directions randomly with probability proportional to the Fresnel reflectance and, at some point, divide the result for that direction by the probability of picking it. The estimator will look like:
$$\frac {L_{i}\left(\omega_{i}\right)F\left(\theta_{i}\right)} {P\left(\omega_{i}\right)} = \frac {L_{i}\left(\omega_{i}\right)F\left(\theta_{i}\right)} {F\left(\theta_{i}\right)} = L_{i}\left(\omega_{i}\right)$$
Where $\omega_{i}=\left( \phi_{i}, \theta_{i} \right)$ is the chosen incident light direction, $L_{i}\left(\omega_{i}\right)$ is the amount of incident radiance, $F\left(\theta_{i}\right)$ is either the Fresnel reflectance for the reflection case or 1 - Fresnel reflectance for the refraction case, $P\left(\omega_{i}\right)$ is the discrete probability of picking the direction and is equal to $F\left(\theta_{i}\right)$.
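A small numerical sketch of this estimator (made-up radiance values, plain Python rather than renderer code) shows why the division is exactly the cancellation described: each sampled path simply contributes its incoming radiance, and the average converges to the Fresnel-weighted sum:

```python
import random

# Monte Carlo sketch: pick the reflection branch with probability F and
# divide each sample's contribution by its pick probability. The Fresnel
# weight cancels against the probability, so each path contributes its
# incoming radiance unweighted, yet the estimator stays unbiased.
def estimate(F, L_refl, L_refr, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < F:
            total += F * L_refl / F              # reflection: weight / prob
        else:
            total += (1 - F) * L_refr / (1 - F)  # refraction: weight / prob
    return total / n

F, L_refl, L_refr = 0.3, 5.0, 1.0
exact = F * L_refl + (1 - F) * L_refr            # the full weighted sum
est = estimate(F, L_refl, L_refr)
```

Over many samples `est` approaches `exact`, which is the same value the deterministic weighted-sum ray tracer would compute.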
In case of more sophisticated BSDF models like those based on microfacet theory, the sampling is slightly more complex, but the idea of splitting the whole integral into a finite sum of sub-integrals and using discrete Monte Carlo afterwards can usually be applied too.
• This is interesting but I'm confused by one point. Could you clarify what it means to "divide the result for that direction by probability of picking the direction"? If it is not a binary choice but a direction chosen from a continuous distribution, won't the probability be zero? – trichoplax May 23 '16 at 15:00
• @trichoplax: Yes it would, but in that paragraph I was describing the sampling technique just for a (dielectric) Fresnel BSDF - ideally smooth surface, which is a sum of two Dirac delta functions. In such case you are picking one of the directions with some discrete probability. In case of a non-delta (finite) BSDF, you generate directions according to a probability density function. Unfortunately, delta and non-delta cases have to be handled separately, which makes the code a little messy. More details on sampling microfacet BSDFs can be found, for example in the Walter et. al. [2007] paper. – ivokabel May 23 '16 at 16:37
• @RichieSams: Walter et. al. [2007] is basically still the state-of-the art for dielectric rough surfaces, but to make it work well you need a good sampling which was published just recently by Heitz and D'Eon the 2014 paper "Importance Sampling Microfacet-Based BSDFs using the Distribution of Visible Normals". And note that it is a single-scattering model which neglects inter-reflections between microfacets making it visibly dark for higher roughness values. See my question "Compensation for energy loss in single-scattering microfacet BSDF models" for more details. – ivokabel May 23 '16 at 16:53
• Just wanted to point out that if you choose probability = fresnel() as the question suggested, then when you divide by the probability, you cancel out the Fresnel factor that would normally be multiplied in. So (in the discrete, two-Dirac case) you end up with the ray contribution not including any Fresnel factor at all. It's standard importance-sampling theory, but I thought I'd point that out as a potentially confusing issue. – Nathan Reed May 23 '16 at 18:22
• @Nathan, I incorporated your notice into the answer. – ivokabel May 24 '16 at 11:27
---
# How to set the Y coordinates for a Graph with defined X coordinates to prevent overlap of nodes?
I would like to draw a Graph with defined $x$ coordinates and variable node sizes. If I simply replace the $x$ coordinates of a graph, then nodes can overlap and edges may cross each other. I would appreciate any help pointing me in the right direction on how to go about setting the $Y$ coordinates, or alternative approaches to plotting, such that the nodes are nicely spaced out and, as far as possible, edges don't cross.
(*egdata*)
relationships =
{1 <-> 2, 2 <-> 3, 2 <-> 4, 2 <-> 6, 3 <-> 5, 6 <-> 7,4 <-> 8, 4 <-> 9};
numNodes =
Max[relationships[[All, 2]]];
sizes = RandomReal[{0.25, 1}, numNodes];
xCoords = {0, 1, 2, 2, 3, 3, 4, 5, 5};
I would like to be able to automatically generate a plot similar to this (without having to manually specify $Y$ coordinates.):
Graph[
relationships, VertexSize -> Thread[Range[numNodes] -> sizes],
VertexCoordinates ->
Transpose[{xCoords,(*yCoords*){1, 1, 1, 3, 2, 1, 2, 3, 4}}]]
Using VertexCoordinateRules with GraphPlot is the best I can do, but it doesn't solve the problem:
relationships =
{1 -> 2, 2 -> 3, 2 -> 4, 2 -> 6, 3 -> 5, 6 -> 7, 4 -> 8, 4 -> 9};
xCoords =
{0, 1, 2, 2, 3, 3, 4, 5, 5};
numNodes =
Max[relationships[[All, 2]]];
sizes = RandomReal[{0.25, 1}, numNodes];
GraphPlot[
 relationships,
 VertexCoordinateRules ->
  Thread[Range[numNodes] ->
    Transpose[{xCoords, Table[Automatic, {numNodes}]}]],
 VertexRenderingFunction -> ({White, EdgeForm[Black], Disk[#, .3],
     Black, Text[#2, #1]} &)
 ]
• Although this doesn't solve the stated problem ,TreePlot[relationships, Left] looks like it may be useful to you. Mar 17, 2015 at 14:21
• Ignore explicit values of y coordinate first, and instead just track order of nodes/edges (along y axis) on every value of x with a node. Construct set of simple constraining inequalities (joining edge y values need to be equal, non-joining unequal, and no pair of edges must change order in their x interval intersections) to be reduced (or for decision of graph planarity). After this edges are guaranteed to be conceptually non-intersecting, and values of y can be computed on basis of sizes of nodes. Well, this is easier said than done, but I believe that's the most practical approach. Apr 17, 2015 at 20:38
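Easier said than done indeed, but the ordering idea in the comment above can be prototyped outside Mathematica. This Python sketch (all names illustrative, a heuristic rather than the full inequality system) keeps the given x coordinates, nudges each node toward the mean y of its neighbours, then pushes apart nodes that share an x-layer so they cannot overlap:

```python
# Barycenter-plus-separation heuristic for the y coordinates,
# using the example graph from the question.
from collections import defaultdict

edges = [(1, 2), (2, 3), (2, 4), (2, 6), (3, 5), (6, 7), (4, 8), (4, 9)]
x = dict(enumerate([0, 1, 2, 2, 3, 3, 4, 5, 5], start=1))
size = {i: 0.5 for i in x}          # stand-in for the random vertex sizes

nbrs = defaultdict(set)
for a, b in edges:
    nbrs[a].add(b)
    nbrs[b].add(a)

y = {i: float(i) for i in x}        # arbitrary initial ordering
for _ in range(50):
    for i in x:                     # barycenter sweep
        y[i] = sum(y[j] for j in nbrs[i]) / len(nbrs[i])
    layers = defaultdict(list)      # regroup by x and separate overlaps
    for i in x:
        layers[x[i]].append(i)
    for layer in layers.values():
        layer.sort(key=lambda i: y[i])
        for a, b in zip(layer, layer[1:]):
            gap = (size[a] + size[b]) / 2
            if y[b] - y[a] < gap:
                y[b] = y[a] + gap

coords = {i: (x[i], y[i]) for i in x}   # feed into VertexCoordinates
```

The resulting `coords` can be passed back to Graph via VertexCoordinates; the separation pass guarantees no two nodes in the same x-layer sit closer than the mean of their sizes, though it does not guarantee planarity.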
not quite what you asked for, but possibly useful:
relationships = {1 <-> 2, 2 <-> 3, 2 <-> 4, 3 <-> 5, 2 <-> 6, 6 <-> 7,
4 <-> 8, 4 <-> 9};
numNodes = Max[relationships[[All, 2]]];
sizes = RandomReal[{0.25, 1}, numNodes];
xCoords = {0, 1, 2, 2, 3, 3, 4, 5, 5};
vars0 = Array[a, 9];
Manipulate[
vars = {a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9]};
p = Graph[relationships,
VertexSize -> Thread[Range[numNodes] -> sizes],
VertexCoordinates -> Transpose[{xCoords, vars}],
VertexLabels -> "Name"] ,
Evaluate[Sequence @@
Table[{{vars0[[i]], i}, 1, Length@vars0, .1}, {i, Length@vars0}]],
TrackedSymbols -> All]
or maybe better using Locator:
relationships = {1 <-> 2, 2 <-> 3, 2 <-> 4, 3 <-> 5, 2 <-> 6, 6 <-> 7,
4 <-> 8, 4 <-> 9};
numNodes = Max[relationships[[All, 2]]];
v0 = VertexCoordinateRules /.
First@Cases[GraphPlot[Graph[relationships]], _Rule, Infinity];
Manipulate[Graph[relationships, VertexCoordinates -> vtx],
{{vtx, v0}, Locator, Appearance -> None}]
• Thanks George. Unfortunately, my actual data has hundreds of nodes, so manually setting the coordinates, even in a manipulate, isn't a viable solution. Apr 23, 2015 at 13:19
---
# Normal Shock Table
Hello,
I am wondering, on a Normal Shock Table, what are the two most right columns supposed to mean?
What is the ratio P02/P01 and P1/P02 supposed to be?
#### Attachments
• normal_shock.pdf
$p_{02}/p_{01}$ is how the total pressure changes across the shock and is essentially a measure of how dissipative the shock is since total pressure loss is a result of the entropy gained across the shock.
$p_1/p_{02}$, the ratio of the upstream static pressure to the downstream total pressure, is effectively just a ratio that is useful in some calculations; its inverse is what a Pitot probe in supersonic flow reads relative to the free-stream static pressure (the Rayleigh Pitot formula).
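Both columns can be reproduced from the standard normal-shock relations for a calorically perfect gas. This sketch assumes γ = 1.4 and uses the textbook formulas, not values read off the attached table:

```python
# Normal-shock relations for a calorically perfect gas, gamma = 1.4.
g = 1.4

def p02_over_p01(M1):
    """Total-pressure ratio across the shock: a measure of the loss."""
    a = ((g + 1) * M1**2 / 2) / (1 + (g - 1) / 2 * M1**2)
    b = (2 * g / (g + 1)) * M1**2 - (g - 1) / (g + 1)
    return a ** (g / (g - 1)) * b ** (-1 / (g - 1))

def p1_over_p02(M1):
    """Upstream static over downstream total pressure (inverse Rayleigh Pitot)."""
    M2_sq = (1 + (g - 1) / 2 * M1**2) / (g * M1**2 - (g - 1) / 2)
    p2_p1 = 1 + 2 * g / (g + 1) * (M1**2 - 1)            # static jump
    p02_p2 = (1 + (g - 1) / 2 * M2_sq) ** (g / (g - 1))  # isentropic, at M2
    return 1 / (p2_p1 * p02_p2)
```

At M1 = 2 these give roughly 0.7209 and 0.1773, matching the two right-hand columns of a standard table; at M1 = 1 the total-pressure ratio is exactly 1, since a Mach-1 shock is vanishingly weak.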
---
Is #P contained in PSPACE?
It's obvious that NP $\subseteq$ #P. How about #P $\subseteq$ PSPACE?
It strikes me as semi-obvious, since we can check whether an assignment (e.g. for SAT) is a solution in polynomial time (and hence space), and since each assignment can be checked separately, we can re-use the memory needed for this check, and thus use only a polynomial amount of space to check every assignment (plus a counter to track how many of the assignments were solutions).
• looks like you answered your own question ! – Suresh Venkat Oct 12 '10 at 15:45
• although technically what you just showed is that #P \subseteq FPSPACE (since you're outputting a value) – Suresh Venkat Oct 12 '10 at 15:46
• I was hoping I had, but I haven't worked much with space arguments, and wanted to check whether it really is that simple. This does pave the way for a future question I have. – Evgenij Thorstensen Oct 12 '10 at 15:48
• @Evgenij: You should earn 2 new badges: Self-Answerizer and Fastest-Question-Ever-Made ;-) – Giorgio Camerani Oct 12 '10 at 15:51
• there is an 'answered your own question' badge: I think it's called Self-Learner or something like that – Suresh Venkat Oct 12 '10 at 16:09
For the sake of completeness, let me update this answer with a direct proof. My original answer will remain below in case anyone finds it interesting. The basic idea of the direct proof is exactly as Evgenij suggests: check whether each possible assignment is satisfiable, keep a counter of satisfiable instances, and reuse space wherever possible.
Claim: #$P \subseteq FPSPACE$
Proof: Since #$SAT$ is #$P$-complete, it suffices to give a polynomial-space algorithm for #$SAT$. Here is a linear-space, exponential-time algorithm.
Initialize two counters of $k \le n$ bits each to all 0, where $k$ is the number of variables in the $3CNF$ formula given as input. The first counter will keep track of which assignment we are checking (1 = true, 0 = false, for each variable $x_1$ to $x_k$). The second will keep track of the number of satisfying assignments (there are at most $2^k$ of them).
We will also need enough space ($n$ bits) to write the formula, and enough space to do "scratch work" for one clause (some very small constant number of bits, since every clause will have exactly 3 literals by definition of #$SAT$).
Here is the description of the algorithm:
For all possible assignments
For all clauses in the Boolean formula
Check if the assignment satisfies the given clause
If the assignment satisfied every clause
Increment the assignment counter
Output the value in the assignment counter
Note that once we finish work on one possible assignment, we don't need any information from it again to compute whether other assignments satisfy the formula; the only information we keep is the running count in the second counter. In particular, since we can apply binary addition of 1 to the assignment we are testing in order to get the next assignment, and since every assignment can be checked by considering only one clause at a time (re-using the same space for every clause), the dominating factor in the space requirement is simply writing the formula itself. Alternatively, if we just read from an input tape and never explicitly write the formula, then in the worst case $k = n$ and it still takes $O(n)$ space to write the possible assignments and keep track of the number of satisfying assignments. Q.E.D.
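The pseudocode transcribes directly into code. This sketch trades the explicit bit-counter for Python's assignment iterator, which doesn't change the space argument conceptually; the clause encoding (tuples of signed integers, -2 meaning "not x2") is my own choice, not from the thread:

```python
# Enumerate-and-count #SAT: exponential time, but only a counter of space
# beyond the formula itself.
from itertools import product

def count_sat(num_vars, clauses):
    count = 0                                    # the second counter
    for bits in product([False, True], repeat=num_vars):  # the first counter
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2 or x3) and (not x1 or not x2 or x3): 6 of 8 assignments satisfy it
clauses = [(1, 2, 3), (-1, -2, 3)]
```

Each candidate assignment is checked one clause at a time and then discarded, mirroring the space-reuse in the proof.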
Although this has already essentially been answered, let me give a proof of a different vein (straight out of Arora/Barak):
Let #$SAT_D$ = {($\phi, K$) : $\phi$ is a 3CNF formula and it has exactly $K$ satisfying assignments}
In Chapter 17, there is a proof that #$SAT$ is #$P$-complete. In Chapter 8, there is a proof that #$SAT_D \in IP$.
Since $IP = PSPACE$, #$P \subseteq FPSPACE$.
• Maybe I'm nitpicking, but #P is not a set of languages, and thus cannot be contained in PSPACE. Similarly, #SAT_D is not #P-complete, it the decision version of a problem that is #P-complete. – Robin Kothari Oct 12 '10 at 23:25
• This answer only defers the task of proving #P⊆FPSPACE to another, slightly more difficult, task of proving IP⊆PSPACE, even if we accept the use of the highly nontrivial result #SAT_D∈IP. I would have voted this answer down if I did not encounter a bug/misfeature in the Stack Exchange system. – Tsuyoshi Ito Oct 13 '10 at 14:05
• Perhaps this is worth pointing out to the authors for inclusion in errata? The more I think about it, the more the sentence on page 158 feels a little uncomfortable (i.e. the more I agree with you). – Daniel Apon Oct 13 '10 at 17:49
• Downvoted. Two reasons: 1.) I have to agree with Tsuyoshi. Invoking more complex results to prove easier ones does not seem optimal. 2.) My previous reason: I don't like statements like #P contained in PSPACE. – Robin Kothari Oct 13 '10 at 19:06
• @Robin: This may be a nitpicking, but I have never intended to criticize the use of the highly nontrivial result #SAT_D∈IP to prove the much easier result #P⊆FPSPACE in my comments to this answer (although I do not defend it, either). The argument that I have made is that the answer assumes IP⊆PSPACE, which at least requires the same kind of proof as the proof of #P⊆FPSPACE. If the answer invoked a complex result to give a proof of #P⊆FPSPACE which is essentially different from the easy direct proof, I would not have said “I cannot see the point.” – Tsuyoshi Ito Oct 13 '10 at 20:29
---
# Alkynes
## Alkynes: molecular orbital model
In molecular orbital theory (MO theory), atomic or hybrid orbitals (AOs) are combined to form molecular orbitals (MOs). The $sp$ hybrid orbitals mix to form a bonding σ orbital and an antibonding σ* orbital. The four p atomic orbitals form two π and two π* molecular orbitals. The energetic order from lowest to highest MO is: σ < π(y) = π(z) < π(y)* = π(z)* < σ*. A total of six electrons are available to fill the molecular orbitals, i.e. only the bonding MOs are filled.
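As a toy illustration of that filling order (the labels and the aufbau-style loop are my own, not from the text): six electrons fill the MOs from lowest to highest energy, two per orbital, leaving the antibonding levels empty and giving the C≡C bond order of 3.

```python
# Fill six electrons into the MO ladder described above, lowest first.
mos = ["sigma", "pi_y", "pi_z", "pi_y*", "pi_z*", "sigma*"]
electrons = 6
occupancy = {}
for mo in mos:
    occupancy[mo] = min(2, electrons)   # at most two electrons per MO
    electrons -= occupancy[mo]

bonding = occupancy["sigma"] + occupancy["pi_y"] + occupancy["pi_z"]
antibonding = occupancy["sigma*"] + occupancy["pi_y*"] + occupancy["pi_z*"]
bond_order = (bonding - antibonding) // 2   # 3: the triple bond
```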
---
# The Yu Group in the fight against COVID-19: Curating data, predicting deaths, and building partnerships
The Yu Research Group at UC Berkeley Statistics and EECS, led by PI Bin Yu, has worked to curate a comprehensive COVID-19 data repository and to predict the spread of deaths caused by the virus. You can read our paper here and visit our project website at covidseverity.com.
In late March as the COVID-19 epidemic began to take a firm hold on our lives, the Yu Group began working in collaboration with Response4Life, a non-profit organization striving to deliver critical emergency medical supplies to the hospitals that need them most.
To do so, we sought to forecast the severity of the epidemic for counties and hospitals across the U.S., with three primary goals:
• Provide open access to a large repository of data related to the spread of COVID-19 in the U.S. that other groups may use to better understand the past and future trajectory of the virus.
• Produce short-term predictions of the number of deaths at the county level to identify counties whose health-care systems are likely to face significant stress over the coming week to ten days.
• Develop a hospital-level COVID pandemic severity index (CPSI) to identify hospitals most likely to be facing emergency medical supply shortages.
# Our Data Processing Pipeline
One of our main contributions is the curation of a large COVID-19 data repository, which we have made publicly available on GitHub. This repository is updated daily with new data and information. Currently, it includes data on COVID-19-related cases, deaths, demographics, health resource availability, health risk factors, social vulnerability, and other COVID-19-related information.
Let’s begin first with what our data processing pipeline looks like at a very high level. At the beginning of this pandemic, there was obviously no existing database labeled “Everything you need to understand the COVID-19 pandemic”. Our job, then, was:
• To begin curating this data repository.
• To determine which information and sources are relevant.
• To clean and validate the data continuously.
We’ve collected over a million records from 15+ sources and counting, and we’re using AWS S3 to store some of the larger data sets as well as an EC2 instance running JupyterLab to easily share results, EDA, and collaborate together on the code.
## Other Data Sources
In addition to the USAFacts and NYT data, our repository includes a number of other sources.
Our data pipeline is much more of an iterative, continuous cycle than a static pipeline with a well-defined start and end. We are constantly looking for new data sources and incorporating them into our pipeline and repository. Our data collection and cleaning efforts are ongoing, as promising new data sources appear on our radar every day.
## Key Takeaways
When searching for, cleaning, and combining data from many different sources it can be easy to get lost in the details, but there are three big takeaways we’d like to highlight on the data curation end.
• Know the audience or end user of your data and give them documentation.
We saw our audience as not just being our team, but also other researchers in the broader community, which meant we needed to create very clear and organized documentation so that the data is easily understandable and user-friendly. With that aim, we’ve spent significant time on documentation for each data source. We also created both abridged and unabridged versions of the data to allow newcomers to get started more quickly.
• Don’t ignore naming/coding conventions and organization structure for storing and processing your data.
This goes hand-in-hand with the first takeaway and improves the accessibility of the data for all end users. For example, if other researchers want to see how we cleaned the data, they can easily find the clean.py script in the folder for each dataset. If they want to see the raw data, they can find it in the raw folder. A good organizational structure is crucial for quickly integrating new members and volunteers onto the team, and it's best to set standards at the beginning rather than spend a lot of effort reorganizing once the project is already well-established.
• Good data curation takes a lot of effort, and it's worth it.
Early on, everyone on the team worked on data cleaning in some capacity, but we quickly found that we needed to separate into smaller teams to be more efficient. Two of our team members worked on the data team essentially full-time for almost a month, and the results speak for themselves. We're incredibly happy to see the broader community already using our repository: for example, in the final project for STAT 542, a graduate-level machine learning course at the University of Illinois, and in DATA 100 at our very own UC Berkeley, an undergraduate class with over 1000 students!
# Forecasting County-Level Death Counts
Data curation is an ongoing effort, but once we had the backbone of our data pipeline in place, we turned to the prediction problem.
## Five Statistical Methods
Our predictive approach primarily uses the county-level case and death reports provided by USA Facts, along with some county-level demographics and health data. We use five different statistical methods, each of which captures slightly different data trends.
• Separate-county exponential predictors
The separate predictors aim to model each county independently via a best-fit exponential curve using the most recent 5 days of data.
$E\left({\text{deaths}}_{t}|t\right)={e}^{{\beta }_{0}+{\beta }_{1}t}$
• Separate-county linear predictors
The linear predictor is similar to the separate-county exponential predictor, but uses a simple linear model rather than an exponential one.
$E\left({\text{deaths}}_{t}|t\right)={\beta }_{0}+{\beta }_{1}t$
• Shared-county exponential predictor
The shared predictor fits the data from all of the counties simultaneously and predicts death counts for individual counties.
$E\left({\text{deaths}}_{t}|t\right)={e}^{{\beta }_{0}+{\beta }_{1}\mathrm{log}\left({\text{deaths}}_{t-1}+1\right)}$
• Demographics shared-county exponential predictor
The demographics shared predictor is similar to the shared-county exponential predictor, but also includes various county demographic and health-related predictive features.
$E\left({\text{deaths}}_{t}|t\right)={e}^{{\beta }_{0}+{\beta }_{1}\mathrm{log}\left({\text{deaths}}_{t-1}+1\right)+{\beta }_{{d}_{1}}{d}_{1}^{c}+\dots +{\beta }_{{d}_{m}}{d}_{m}^{c}}$
• Expanded shared-county exponential predictor
The expanded shared predictor is similar to the “shared” predictor, but also includes COVID-19 case numbers and neighboring county cases and deaths as predictive features.
$E\left({\text{deaths}}_{t}|t\right)={e}^{{\beta }_{0}+{\beta }_{1}\mathrm{log}\left({\text{deaths}}_{t-1}+1\right)+{\beta }_{2}\mathrm{log}\left({\text{cases}}_{t-k}+1\right)+{\beta }_{3}\mathrm{log}\left({\text{neigh_deaths}}_{t-k}+1\right)+\dots +{\beta }_{4}\mathrm{log}\left({\text{neigh_cases}}_{t-k}+1\right)}$
• Combined Linear and Exponential Predictors (CLEPs)
Ultimately, we found that the approach that worked best was to use an ensemble of the five models listed above to flexibly fit the COVID-19 trend. We use a weighting scheme previously developed by Dr. Yu and collaborators for lossless audio compression. The exponential weighting term $w_t^m$ for predictor $m$ applied on day $t$ is given by
${w}_{t}^{m}\propto \mathrm{exp}\left(-c\left(1-\mu \right)\sum _{i={t}_{0}}^{t-1}{\mu }^{t-i}\ell \left({\stackrel{^}{y}}_{i}^{m},{y}_{i}\right)\right)$
where $\mu \in (0,1)$ and $c > 0$ are tuning parameters, $t_0$ represents some past time point, and $t$ represents the day on which the prediction is calculated. Since $\mu < 1$, the $\mu^{t-i}$ term gives greater influence to more recent predictive performance.
Note that the loss terms $\ell(\hat{y}_i^m, y_i)$ used in the weights are calculated based on the three-day predictions from seven predictors built over the course of a week. We chose the past week's three-day performance in the weights since it yielded good performance for our ensemble predictor when predicting death counts several days into the future. In practice, our loss function is
$\ell \left({\stackrel{^}{y}}_{i}^{m},{y}_{i}\right)=|\mathrm{log}\left(1+{\stackrel{^}{y}}_{i}^{m}\right)-\mathrm{log}\left(1+{y}_{i}\right)|$
where the log is taken to help prevent vanishing weights due to the heavy-tailed nature of our error distribution.
It’s important to note that our weights are calculated at the county-level, so that each county uses a distinct ensemble of the five methods above. In practice, we find that the best model is a combination of the expanded shared predictor and the linear predictor.
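The weighting scheme above takes only a few lines of code. The sketch below is our own illustration (the function names and toy tuning values `mu=0.5`, `c=1.0` are ours, not from the paper): it turns each predictor's recent per-day losses into normalized ensemble weights, with the most recent days counting most.

```python
import math

def log_loss(yhat, y):
    """The loss used in the weights: |log(1 + yhat) - log(1 + y)|."""
    return abs(math.log1p(yhat) - math.log1p(y))

def clep_weights(losses, mu=0.5, c=1.0):
    """Normalized CLEP weights.

    losses[m] holds past per-day losses for predictor m, ordered from
    t0 up to t-1 (most recent last). Since mu < 1, the mu**(t-i) factor
    gives recent performance the greatest influence.
    """
    raw = []
    for per_day in losses:
        n = len(per_day)
        # sum_{i=t0}^{t-1} mu^(t-i) * loss_i, exponents n, n-1, ..., 1
        s = sum(mu ** (n - i) * loss for i, loss in enumerate(per_day))
        raw.append(math.exp(-c * (1 - mu) * s))
    total = sum(raw)
    return [w / total for w in raw]

# A predictor with smaller recent losses receives more ensemble weight:
w = clep_weights([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
```

With these toy losses the first predictor gets the larger weight; in the paper the losses fed in are the three-day-ahead errors of each of the five predictors over the past week.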
## Prediction Results and Intervals
Our five-day predictions are tracking well with current trends.
However, we weren’t satisfied to stop at point estimates. To get a sense of the variability of our predictions, we use maximum (absolute) error prediction intervals (MEPI):
These intervals are formed in three steps:
1. Find the normalized error of our predictors in the past:
${\mathrm{\Delta }}_{t}:=|{y}_{t}-{\stackrel{^}{y}}_{t}|/|{\stackrel{^}{y}}_{t}|$
1. Find maximum error of past 5 days:
${\mathrm{\Delta }}_{max}:=\underset{0\le j\le 4}{max}{\mathrm{\Delta }}_{t-j}$
1. Form the interval for predictions $k$ days in the future:
$\stackrel{^}{\text{PI}}:=\left[max\left\{{\stackrel{^}{y}}_{t+k}\left(1-{\mathrm{\Delta }}_{max}\right),{y}_{t}\right\},{\stackrel{^}{y}}_{t+k}\left(1+{\mathrm{\Delta }}_{max}\right)\right]$
Using this method, we see good empirical coverage for our historical prediction intervals.
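The three steps above are straightforward to implement. A minimal sketch (our own naming; the lower endpoint is clipped at the last observed count $y_t$, since cumulative deaths cannot decrease):

```python
def mepi(past_preds, past_actuals, future_pred, y_t):
    """Maximum (absolute) error prediction interval.

    past_preds, past_actuals: predictions and observed counts for the
    last five days; future_pred: point prediction for day t + k;
    y_t: last observed count (floor for the lower endpoint).
    """
    # 1. normalized absolute errors over the recent past
    deltas = [abs(y - yhat) / abs(yhat)
              for yhat, y in zip(past_preds, past_actuals)]
    # 2. worst normalized error in the window
    d_max = max(deltas)
    # 3. stretch the point prediction by that worst-case error
    lower = max(future_pred * (1 - d_max), y_t)
    upper = future_pred * (1 + d_max)
    return lower, upper

# Recent predictions were off by at most 20%, so the interval around a
# point prediction of 20 is roughly [16, 24]:
lo, hi = mepi([10.0, 10.0], [9.0, 12.0], 20.0, 12.0)
```

Note the asymmetry this clipping introduces: if the worst recent error is large, the lower endpoint can never fall below what has already been observed.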
# From County-Level Predictions to the Hospital-Level
At this point we have predicted deaths at the county level, but our main deliverable in our partnership with Response4Life was to predict demand for emergency medical supplies at hospitals across the country. This is no small task, made all the more difficult by the fact that hospital-level case and death counts are, to the best of our knowledge, not available.
## Covid Pandemic Severity Index (cPSI)
For this reason, we disaggregate our county-level predictions to the hospital level in proportion to each hospital's share of the hospital employees in its county, and we assign each hospital a three-level covid Pandemic Severity Index (cPSI) based on the number of predicted cumulative deaths over the next five days.
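As an illustration of that disaggregation step (the hospital names and employee counts below are invented):

```python
def disaggregate(county_pred_deaths, hospital_employees):
    """Split a county-level death prediction across its hospitals in
    proportion to each hospital's share of the county's hospital employees."""
    total = sum(hospital_employees.values())
    return {name: county_pred_deaths * n / total
            for name, n in hospital_employees.items()}

# A county predicted to see 40 deaths, with two hospitals:
shares = disaggregate(40.0, {"General": 3000, "Community": 1000})
```

The hospital with three quarters of the county's hospital employees is assigned three quarters of the predicted burden.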
## Covid Pandemic Surge Index (cPSUI)
The cPSI gives more weight to hospitals in counties with the highest numbers of cumulative deaths, so we also developed an index to reflect counties with the highest numbers of new deaths. We know that roughly half of the people on ventilators die, and that a hospital's number of ventilators is roughly equal to its number of ICU beds.
## Informing the Distribution of PPE
Using our indices, our partners at Response4Life can make informed decisions on how to distribute emergency medical supplies, manufactured by a network of small makers, to hospitals around the country. At the time of writing, Response4Life has distributed 65,000 items of personal protective equipment to 25 recipients in 15 states (and another 500,000 items outside the US). We're incredibly proud to be partnered with an organization making such a positive impact.
# What’s next?
This is a question that people across the world are asking every day, and the Yu Group is no different. We intend to stay engaged in the fight to protect our healthcare workers and respond to the spread of COVID-19.
## More (and More!) Collaboration
We're very excited about our collaboration with the Center for Spatial Data Science at the University of Chicago to add our predictions and indices to their excellent U.S. COVID-19 Atlas. We see collaborations like this one, uniting expertise from groups across the country, as crucial to the fight against COVID-19.
A NOTE ON GAUSS'S SECOND SUMMATION THEOREM FOR THE SERIES 2F1(1/2)
Title & Authors
Choi, June-Sang; Rathie, Arjun K.; Purnima, Purnima;
Abstract
We aim at deriving Gauss's second summation theorem for the series $\small{_2F_1(1/2)}$ by using Euler's integral representation for $\small{_2F_1}$. It seems that this method of proof has not been tried before.
Keywords
generalized hypergeometric series $\small{_pF_q}$; Gauss's second summation theorem for $\small{_2F_1(1/2)}$; Beta function
Language
English
Cited by
1. Note on the classical Watson's theorem for the series 3F2, Honam Mathematical Journal, 2013, 35(4), 701–706
2. A generalization of a formula due to Kummer, Integral Transforms and Special Functions, 2011, 22(11), 851
3. Generalizations of classical summation theorems for the series 2F1 and 3F2 with applications, Integral Transforms and Special Functions, 2011, 22(11), 823
# Numerical and Experimental Study of Heat Transfer in a BIPV-Thermal System
Author and Article Information
L. Liao, A. K. Athienitis, L. Candanedo, K.-W. Park
Department of Building, Civil and Environmental Engineering, Concordia University, 1455 de Maisonneuve Blvd. West, Montreal, QC, H3G 1M8
Y. Poissant
CANMET Energy Technology Centre —Varennes, Natural Resources Canada, 1615 Lionel-Boulet Blvd., Varennes, QC, J3X 1S6
M. Collins
Department of Mechanical and Mechatronics Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1
J. Sol. Energy Eng 129(4), 423-430 (May 15, 2007) (8 pages) doi:10.1115/1.2770750 History: Received February 24, 2006; Revised May 15, 2007
## Abstract
This paper presents a computational fluid dynamics (CFD) study of a building-integrated photovoltaic thermal (BIPV∕T) system, which generates both electricity and thermal energy. The heat transfer in the BIPV∕T system cavity is studied with a two-dimensional CFD model. The realizable $k‐ε$ model is used to simulate the turbulent flow and convective heat transfer in the cavity, including the buoyancy effect; long-wave radiation between boundary surfaces is also modeled. A particle image velocimetry (PIV) system is employed to study the fluid flow in the BIPV∕T cavity and to provide partial validation for the CFD model. Average and local convective heat transfer coefficients are generated with the CFD model using measured temperature profiles as boundary conditions. Cavity temperature profiles are calculated and compared to the experimental data for different conditions, and good agreement is obtained. Correlations of convective heat transfer coefficients are generated for the cavity surfaces; these coefficients are necessary for the design and analysis of BIPV∕T systems with lumped parameter models. Local heat transfer coefficients, such as those presented, are necessary for prediction of temperature distributions in BIPV panels.
## Figures
Figure 1: Photograph of the Concordia building-integrated photovoltaic test facility
Figure 2: Geometry of the 2-D CFD model
Figure 3: Grid pattern of the boundary treatment
Figure 4: Measured PV panel temperature used as boundary condition in CFD simulations
Figure 5: (a) CFD model velocity profiles at the outlet of the PV section for different average air velocities. (b) Velocity profile from particle-image velocimetry (PIV) compared with CFD results for two flow rates.
Figure 6: Comparison of the outlet air temperature profile (at top of PV) from the CFD model and experimental measurements
Figure 7: Convective heat transfer coefficient profile at the PV panel interior surface and at the insulation
Figure 8: Modeling (CFD) results around the inlet flow region
Figure 9: Comparison of the predicted PV and insulation temperature profiles with experimental data for March 29, 2004
Figure 10: Regression of the numerical results of convective heat transfer coefficients at the PV panel side
Figure 11: Regression of the numerical results of convective heat transfer coefficients at the insulation side
Figure 12: Correlation of heat transfer coefficients as a function of average air velocity
Figure 13: Correlation (regression) profile of Nu numbers at the PV panel side
Figure 14: Correlation (regression) profile of Nu numbers at the insulation side
Syntax
is >> x
Purpose
Sets x to a parameter whose value is b, corresponding to is >> b where b is a Base object. It is assumed that this Base input operation returns a reference to is.
is
The operand is has prototype std::istream& is
x
The operand x has prototype AD<Base>& x
Result
The result of this operation can be used as a reference to is. For example, if the operand y has prototype AD<Base> y then the syntax is >> x >> y will first read the Base value of x from is, and then read the Base value into y.
Operation Sequence
The result of this operation is not an AD of Base object. Thus it will not be recorded as part of an AD of Base operation sequence.
Example
The file ad_input.cpp contains an example and test of this operation. It returns true if it succeeds and false otherwise.
# Problem: A light rigid rod with masses attached to its ends is pivoted about a horizontal axis as shown above. When released from rest in a horizontal orientation, the rod begins to rotate with an angular acceleration of magnitude
###### Expert Solution
This problem involves net torque and acceleration.
Net torque:
$\Sigma\tau = I\alpha$, where $I$ is the moment of inertia and $\alpha$ is the angular acceleration.
The torque due to Force:
$\tau = rF\sin\theta$
Moment of inertia of a point mass: $I = mr^2$
For a collection of point masses: $I = \sum_i m_i r_i^2$
# Both t-test and F-test are significant, do I report both?
I undertook a large study (N about 200 in both control and treatment) in which one of the user ratings is significantly different (p < 0.0001). When I ran the unpaired t-test, the F-test also returned significant (p = 0.0003). This violates the t-test assumption of equal variances, but I figured this was OK with these large sample sizes (it's still highly significant with Welch's correction anyway). The t-test tells me that the two means are different, and the F-test tells me the standard deviations are different. Should and can I report both? Whenever I see a site talking about reporting the F-test, it's always in the context of ANOVA, never alongside a t-test. This makes me think that reporting it here isn't common practice.
• Why did you run both tests? What questions you were looking answers for? You use those methods to test certain hypotheses and report the results that are consistent with your research questions. There is not point in running any possible test... – Tim Dec 26 '14 at 16:28
• @Tim the F test was for equality of variances and the t test was for equality of means, that's why. – wcampbell Dec 26 '14 at 16:36
• Yes, I know. But why did you use them? What were you interested in? If you want an answer for both questions then you report both. – Tim Dec 26 '14 at 16:41
• "Whenever I see a site talking about reporting the F-test, it's always in the context of ANOVA, never alongside a t-test" - but is that because you're looking at a different type of F-test? In particular, for linear models such as ANOVA or regression, an F-test is to see if the overall model is significant (compared to a null model) whereas you seem to be using an F-test to compare variances? – Silverfish Dec 26 '14 at 23:33
It's not clear what you did your F-test for, but I am assuming for now it was a test for equality of variances. If not I will edit my answer. But if it was a test for equality of variances, it probably wasn't a good choice, because the F-test assumes that your data were drawn from a normally-distributed population and is very sensitive to departures from normality. There are alternative tests which are more robust - see this question for discussion. But before choosing a better test for equality of variances, you should consider whether you really want to test for it at all.
A two-stage approach of using an equality of variances test before deciding on whether to use a t-test or non-parametric test, is strongly advised against because it means the operating characteristics of your test are uncertain - see also this question which is closely related to yours. For a reference, see Zimmerman, D.W. (2004), "A note on preliminary tests of equality of variances", Br. J. Math. Stat. Psychol., May; 57(Pt 1): 173-81.
It's a good idea to use a Welch-corrected t-test if you're in any doubt whether the equality of variances will apply - see this question and accompanying citations. If the variances are similar in your samples, you'll only have a small degrees of freedom penalty applied, so you don't need to worry that you'll lose lots of power by applying the Welch correction in a case where the variances are equal.
If you don't even think that the conditions for the Welch t-test are met, then you should apply a different test such as Wilcoxon-Mann-Whitney. There is no need to worry much that, should conditions for a traditional t-test hold, you have needlessly lost power, since asymptotically the rank test will have $\frac{3}{\pi} \approx 95\%$ of the power (in large samples) anyway. In small samples, simulations show rank tests don't perform much worse than t-tests either. Another alternative might be to apply a Welch t-test on the ranks of your data: it's been suggested this approach is particularly effective when variances are unequal, see Zimmerman DW and Zumbo BN, (1993), "Rank transformations and the power of the Student t-test and Welch t′-test for non-normal populations", Canadian Journal of Experimental Psychology, 47: 523–39.
Is the (in)equality of variances something you are generally interested in, in its own right? If so, then it seems perfectly sensible to state whether the difference in variances was statistically significant by writing up the test results (but preferably not an F-test!). If this is simply a side-issue to deciding which test to apply in the first place, then you needn't bother.
It's not clear why you did both.
Note that the F-test for variances is very sensitive to the assumption of normality; there are better choices for a test of that, if you actually need a formal test.
If you did the test of variances as a check on the assumptions, it's not advisable to test that and choose a procedure based on its outcome. Better to simply not assume constant variance (the Welch test is a good default if you can't reasonably assume equal variance from the outset). [In that case, I'd simply ignore everything you've done as if you hadn't done it and instead do a Welch test and report that.]
If you're actually interested in whether there's a difference in either mean or standard deviation of treatments, then it makes sense to test for (and report) both -- but I still wouldn't do the F-test for variances in that case.
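For concreteness, here is what the Welch statistic and its Welch-Satterthwaite degrees of freedom look like computed by hand (a plain-Python sketch for illustration only; in practice you would just call your statistics package's Welch option):

```python
import math

def welch_t(a, b):
    """Welch's unequal-variances t statistic and approximate df."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4], [2, 4, 6, 8])
```

For these toy samples df comes out near 4.4, versus the pooled test's $n_a + n_b - 2 = 6$: that gap is the "degrees of freedom penalty" discussed above, and it shrinks as the sample variances approach each other.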
• (+1) Good point re F-test. – Silverfish Dec 27 '14 at 1:29
Another approach would be to use the results from the F-test to justify using a test that does not assume equality of variances. You could use a nonparametric approach such as the Wilcoxon rank-sum test instead of the $t$ test.
The nonparametric alternatives are more powerful than the $t$ test (and thus preferred) when assumptions are violated. You have plenty of data to run these and I would expect that you will get the same result as with the $t$ test.
• "Another approach would be to use the results from the F-test to justify using a test that does not assume equality of means" - I don't think this is wise. A major problem with using two stages like this - a first test to decide which test to use in the second test - is that the operating characteristics become uncertain. See this question for some extensive discussion – Silverfish Dec 26 '14 at 23:47
Your question is confusing because you haven't described the experimental design or the tests you used in enough detail for a reader to be sure what you did. What I assume you mean is that you ran Levene's test for equality of variances and found a significant result?
In that case, if you are running a one-way ANOVA, yes, you should run Welch's test and report the F value, p value, and degrees of freedom in the style recommended by whichever group governs the journals you publish in.
The Wilcoxon-Mann-Whitney is also an option, though Welch's is usually more powerful (see Tomarken and Serlin, 1986).
# Hubble Space Telescope Wide Field Camera 3 Identifies an r(p)=1 Kpc Dual Active Galactic Nucleus in the Minor Galaxy Merger SDSS J0924+0510 at z=0.1495
## Author(s): Liu, Xin; Guo, Hengxiao; Shen, Yue; Greene, Jenny E.; Strauss, Michael A.
To refer to this page use: http://arks.princeton.edu/ark:/88435/pr11h82
Abstract: Kiloparsec-scale dual active galactic nuclei (AGNs) are active supermassive black hole pairs co-rotating in galaxies with separations of less than a few kpc. Expected to be a generic outcome of hierarchical galaxy formation, their frequency and demographics remain uncertain. We have carried out an imaging survey with the Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) of AGNs with double-peaked narrow [O III] emission lines. HST/WFC3 offers high image quality in the near-infrared (NIR) to resolve the two stellar nuclei, and in the optical to resolve [O III] from ionized gas in the narrow-line regions. This combination has proven to be key in sorting out alternative scenarios. With HST/WFC3 we are able to explore a new population of close dual AGNs at more advanced merger stages than can be probed from the ground. Here we show that the AGN Sloan Digital Sky Survey (SDSS) J0924+0510, which had previously shown two stellar bulges, contains two spatially distinct [O III] regions consistent with a dual AGN. While we cannot completely exclude cross-ionization from a single central engine, the nearly equal ratios of [O III] strongly suggest a dual AGN with a projected angular separation of 0.″4, corresponding to a projected physical separation of r(p) = 1 kpc at redshift z = 0.1495. This serves as a proof of principle for combining high-resolution NIR and optical imaging to identify close dual AGNs. Our result suggests that studies based on low-resolution and/or low-sensitivity observations may miss close dual AGNs and thereby may underestimate their occurrence rate on ≲ kpc scales.
Publication Date: 20-Jul-2018
Electronic Publication Date: 19-Jul-2018
Citation: Liu, Xin, Guo, Hengxiao, Shen, Yue, Greene, Jenny E., Strauss, Michael A. (2018). Hubble Space Telescope Wide Field Camera 3 Identifies an r(p)=1 Kpc Dual Active Galactic Nucleus in the Minor Galaxy Merger SDSS J0924+0510 at z=0.1495. ASTROPHYSICAL JOURNAL, 862.
DOI: doi:10.3847/1538-4357/aac9cb
ISSN: 0004-637X
EISSN: 1538-4357
Related Item: http://simbad.u-strasbg.fr/simbad/sim-ref?querymethod=bib&simbo=on&submit=submit+bibcode&bibcode=2018ApJ...862...29L ; https://ned.ipac.caltech.edu/cgi-bin/objsearch?search_type=Search&refcode=2018ApJ...862...29L ; https://archive.stsci.edu/mastbibref.php?bibcode=2018ApJ...862...29L
Type of Material: Journal Article
Journal/Proceeding Title: ASTROPHYSICAL JOURNAL
Version: Final published version. Article is made available in OAR by the publisher's permission or policy.
How can a particle accelerator kill you with *neutrinos*?
In this paper:
http://wwwphy.princeton.edu/mumu/target/King/king_WEBR6_pac99.pdf "Potential Hazards from Neutrino Radiation at MUON Colliders"
Abstract
High energy muon colliders, such as the TeV-scale conceptual designs now being considered, are found to produce enough high energy neutrinos to constitute a potentially serious off-site radiation hazard in the neighbourhood of the accelerator site. A general characterization of this radiation hazard is given, followed by an order-of-magnitude calculation for the off-site annual radiation dose and a discussion of accelerator design and site selection strategies to minimize the radiation hazard.
It suggests "muon colliders" could produce neutrino beams powerful enough to pose a potentially dangerous radiation hazard, but I don't recall it going into the details of why the danger existed. How is that possible? Neutrinos are not usually very interactive, and the only other time I heard of "deadly neutrinos" was in regard to a supernova explosion, where if you were within 1 AU, it would be enough to be deadly, but if you were within 1 AU, you would be inside the stellar envelope anyways, and thus the neutrinos would be the last thing you'd be concerned about. I can't possibly imagine that a man-made source could come anywhere close to the power of a supernova, much less in emitted neutrinos.
The only thing I can think of is that, apparently, as the neutrino energy increases, they become more interactive, and the muon collider would be generating neutrinos with per-particle energies far in excess of those produced by a supernova. Thus while the total neutrino areal energy density would be nothing compared to that in a supernova, the much higher per-particle energy would dramatically boost the interactivity. Is this the correct explanation?
• Are you sure the paper did not mention what produced the neutrinos in the first place? – user140606 Jan 9 '17 at 2:28
• @TáMéCeart It was supposed to be produced by the particle accelerator. I don't remember what the details were though. – The_Sympathizer Jan 9 '17 at 2:30
• @TáMéCeart I looked up something similar and it says that muon decays produce neutrinos. So since the produced muons inevitably decay, then they would produce a shower of neutrinos. – The_Sympathizer Jan 9 '17 at 2:32
• Obligatory XKCD Reference – Cort Ammon Jan 9 '17 at 3:30
• I think @CortAmmon is, as usual ahead, of me at least, in his thinking. If the neutrinos don't interact with X km of lead, then why would they interact with whatever is around the facilities to produce, well anything really. It's 4.10 am here, that's my excuse. I think you should always put the citation and extract in your posts, if only to stop me rushing answers telling you things you already know. Sorry about that. – user140606 Jan 9 '17 at 4:11
Something not mentioned in the other answers, but very important to understanding this, is that the neutrino cross-section for interacting with other matter grows with neutrino energy. In the regime between solar energies (~1 MeV) and current accelerator energies (a few to a few tens of GeV) the growth is roughly linear, a trend which continues some way further up the energy scale.
The oft-mentioned notion of a neutrino going through a light-year of lead with only a 50% chance of interacting refers to solar neutrinos. At the current accelerator scale this is down to around 1/1000 of a light-year, and at the TeV scale down to one millionth of a light-year.
And of course, every muon in a storage ring results in both a $\nu_\mu$ and a $\bar{\nu}_e$ both of which will have an appreciable fraction of the muon's kinetic energy.
Then to get an appreciable interaction rate the muon current in the ring will have to be prodigious.
Then, when those (anti-)neutrinos interact with matter, most of the products continue at high enough boost to be ionizing radiation in their own right. Thus the concern about "dirt events" generating a measurable amount of conventional ionizing flux in the plane of the ring.
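A back-of-envelope version of this scaling: the numbers below are our own assumptions (a commonly quoted charged-current cross-section of roughly $0.7\times10^{-38}\ \mathrm{cm^2}$ per GeV per nucleon in the linear regime, and standard rock at 2.65 g/cm³), not figures taken from the answer above.

```python
N_A = 6.022e23   # nucleons per gram (taking ~1 g/mol per nucleon)
RHO = 2.65       # g/cm^3, "standard rock"
N = RHO * N_A    # nucleon number density, per cm^3

def interaction_probability(e_gev, path_cm):
    """Thin-target interaction probability P ~ n * sigma * L,
    assuming the linear cross-section growth sigma ~ 0.7e-38 cm^2 * E[GeV]."""
    sigma = 0.7e-38 * e_gev
    return N * sigma * path_cm

# A 1 TeV neutrino crossing 1 km of rock:
p_tev = interaction_probability(1000.0, 1.0e5)
# The same path at 1 GeV is a thousand times less likely to interact:
p_gev = interaction_probability(1.0, 1.0e5)
```

The per-neutrino probability is still tiny (of order one in a million per kilometer of rock at a TeV), but a muon collider's decay straight-sections pour enormous numbers of collinear neutrinos through the same narrow cone continuously, which is how the dose adds up.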
• Nice -- a (semi-)quantitative description of the actual interaction increase w.r.t. neutrino energy. That's what I was after... But even at 1% of a light year that still seems like it would require an insane amount of neutrinos to deliver enough interaction to make real havoc possible. If that means it's about 100x more interactive, then you should still need something on the order of 1% of a supernova in particle density to get enough to do real damage, no? That's still a heck of a lot of intensity. – The_Sympathizer Jan 9 '17 at 10:05
• Well, to start with I had a brainfart on those numbers. – dmckee --- ex-moderator kitten Jan 9 '17 at 10:10
Edit: This answer is, at best, only partially correct; I leave it here in case anybody else thinks along the same lines. End edit.
My point is that, reading the Wikipedia extract below, since no electric charge is involved with neutrinos, and since we already undergo a good healthy dose of them under normal circumstances, the neutrinos may be taking the rap for damage that could have been caused by other particles that we can produce and control, such as the positively charged proton-proton LHC beams.
Neutrinos can be created in several ways, including in beta decay of atomic nuclei or hadrons, nuclear reactions such as those that take place in the core of a star, and supernova, and when accelerated particle beams or cosmic rays hit atoms. The majority of neutrinos in the vicinity of the Earth are from nuclear reactions in the Sun. About 65 billion $(6.5×10^{10})$ solar neutrinos per second pass through every square centimeter perpendicular to the direction of the Sun in the region of the Earth.
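For scale, the quoted flux implies an enormous count through a human body every second. The ~1 m² effective cross-sectional area below is my own illustrative assumption, not a figure from the extract.

```python
# How many solar neutrinos pass through a person each second, using the
# 6.5e10 per cm^2 per s flux quoted above.
FLUX = 6.5e10          # neutrinos / cm^2 / s (from the Wikipedia extract)
BODY_AREA_CM2 = 1e4    # ~1 m^2 presented to the Sun (rough assumption)

per_second = FLUX * BODY_AREA_CM2
print(f"~{per_second:.1e} solar neutrinos per second")  # prints "~6.5e+14 ..."
```

Roughly 10¹⁴–10¹⁵ neutrinos per second, harmlessly, which is the intuition behind this (admittedly only partially correct) answer.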
Here's a guy who stuck his head into a particle accelerator. Things not to do, #1.
• Just found the paper -- see other post here. – The_Sympathizer Jan 9 '17 at 2:51
I just found the paper: http://wwwphy.princeton.edu/mumu/target/King/king_WEBR6_pac99.pdf
It says:
Most of the ionization energy dose deposited in a person will come from interactions in the soil and other objects in the person’s vicinity rather than from the more direct process of neutrinos interacting inside a person. At TeV energy scales, much less than one percent of the energy flux from the daughters of such interactions will be absorbed in the relatively small amount of matter contained in a person, with the rest passing beyond the person.
This is apparently the mechanism. However, I'm nonetheless still rather surprised that there would be that much interaction overall, given that this is nothing in terms of energy compared to a supernova blast, so there must be something more that I'm missing. The paper doesn't say what the per-neutrino energy is, for example. But I'd suppose that if the muons are coming out with enough kinetic energy, then that energy will be transferred to the neutrinos produced in decay (more specifically: the muon decays isotropically in its rest frame, sending particles in all directions, which, viewed from the ground frame, acquire the relative velocity of the original muon; furthermore, due to relativistic beaming, these particles are concentrated along the direction of motion into a jet, increasing the neutrino density further), which will make them much more energetic and thus much more interactive. However, I'm not 100% sure.
• I agree with you about the mechanism, I sincerely apologise for my incorrect answer, (which I will modify in a min) and I do appreciate the chance to remember to question my assumptions before answering. :) – user140606 Jan 9 '17 at 2:58
• The problem is that the neutrinos exit the collider in a very narrow beam (which of course becomes conical). To make matters worse, this cone narrows with increasing energy. Neutrinos from a supernova on the other hand are spread out more or less uniformly over $4\pi$ steradians. This paper implies that for a 100 TeV collider, the cone would have a half-angle of $10^{-6}$ radians, making for a solid angle of $3\times10^{-12}$ steradians. – David Hammen Jan 9 '17 at 9:33
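The comment's numbers can be sanity-checked with the standard relativistic-beaming estimate that decay products are concentrated within a cone of half-angle ~1/γ. Reading "a 100 TeV collider" as a 100 TeV muon beam energy is my assumption here:

```python
import math

M_MU_GEV = 0.1057   # muon mass in GeV/c^2
E_MU_GEV = 100e3    # assumed muon beam energy: 100 TeV

gamma = E_MU_GEV / M_MU_GEV            # Lorentz factor, ~1e6
half_angle = 1.0 / gamma               # beaming half-angle in radians
solid_angle = math.pi * half_angle**2  # small-angle cone in steradians

print(f"half-angle ~ {half_angle:.1e} rad, solid angle ~ {solid_angle:.1e} sr")
```

This reproduces the ~10⁻⁶ rad half-angle and ~3×10⁻¹² sr solid angle quoted in the comment, roughly twelve orders of magnitude more concentrated than a supernova's 4π emission.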
Please see (for example) R.B. Palmer, "Muon Colliders," Reviews of Accelerator Science and Technology 7 (2014) 137–159. The idea is that a muon collider would be a very interesting particle accelerator to build, would be much more compact, and probably cheaper, than the alternatives to follow up on the LHC, and the world's particle physicists may therefore want to build and operate one some time in the next twenty years or so. If and when that happens, the design will have to be safe for people who might live at locations where the neutrinos from decay of the muon beams intersect the earth's surface. There are ways to ensure that that's the case, and permission to build the accelerator would be granted only after experts review the design and certify it as safe.
In Matplotlib, the whitespace around and between subplots is controlled by plt.subplots_adjust(). Its parameters, with their default values, are: left = 0.125 (the left side of the subplots of the figure), right = 0.9 (the right side), bottom = 0.1 (the bottom), top = 0.9 (the top), wspace = 0.2 (the amount of width reserved for blank space between subplots, expressed as a fraction of the average axis width), and hspace = 0.2 (the amount of height reserved for white space between subplots, expressed as a fraction of the average axis height). The gridspec_kw parameter of pyplot.subplots() controls the grid properties (see also GridSpec), and fig.add_subplot can place subplots at arbitrary locations within the figure (e.g. small inserts in larger plots). For most purposes, plt.subplots() is the easier tool to use (note the s at the end of subplots).
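The spacing parameters described above can be exercised directly. A minimal sketch (the Agg backend is used only so it runs headless; the values shown are matplotlib's documented defaults):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2)
fig.subplots_adjust(
    left=0.125, right=0.9,  # figure-edge margins, fractions of figure width
    bottom=0.1, top=0.9,    # figure-edge margins, fractions of figure height
    wspace=0.4,             # gap between columns, fraction of average axes width
    hspace=0.4,             # gap between rows, fraction of average axes height
)
```

Increasing wspace/hspace above the 0.2 defaults, as here, visibly pushes the four panels apart.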
In Plotly, subplot spacing is set when the grid is created. The vertical_spacing argument of make_subplots (a float between 0 and 1, default 0.3 / rows) controls the space between subplot rows in normalized plot coordinates, and horizontal_spacing (default 0.2 / columns) controls the space between subplot columns. For example, make_subplots(rows=3, cols=1, vertical_spacing=0.05) creates three tightly stacked rows; traces are then added with fig.append_trace(trace, i + 1, 1) and the overall size set with fig['layout'].update(height=N * 200). (A known issue: data in the subplots can get jumbled up if rows is 100 or higher.) In Plotly Express, facet_row_spacing and facet_col_spacing play the same role for facet rows and columns, in paper units.

A quick hack on the Matplotlib side: add an extra empty line to your x label and the vertical interval between subplots will increase. For MATLAB, the File Exchange function tight_subplot does the same job as subplot while giving direct control over the space between axes; for example, ha = tight_subplot(2, 2, 0.025, 0.05, [0.3 0.05]) creates a 2x2 array of axes with a normalized gap of 0.025, an equal height margin (top and bottom) of 0.05, and unequal width margins: 0.3 on the left and 0.05 on the right. You can also always play with the 'position' property of an axis to adjust it as you wish.
To remove the space between stacked subplots entirely, pass gridspec_kw={'hspace': 0} to pyplot.subplots(), or call fig.subplots_adjust(hspace=0) after creating the figure; small non-zero values such as plt.subplots_adjust(hspace=0.05, wspace=0.05) tighten the grid without collapsing it. Giving the figure a tight_layout() also leaves the subplots nicely spaced from one another. Finally, with the cufflinks wrapper a DataFrame can be sent straight into a subplot grid: df.iplot(subplots=True, shape=(4,1), shared_xaxes=True, vertical_spacing=0.02, fill=True).
subplots() is the easier tool to use (note the s at the end of subplots). x label or position, default None. Matplotlib supports all kind of subplots including 2x1 vertical, 2x1 horizontal or a 2x2 grid. METHODS TO_JSON. Plotly add shape to subplot. subplots_adjust() can set better spacing around each subplot in a figure. subplots_adjust() can set better spacing around each subplot in a figure. 02,fill=True) Support for scatter matrix to display the distribution amongst every series in the DataFrame cufflinks. uses language related to) images. With plotly, one can write R code to pose graphical queries that operate entirely client-side in a web browser (i. space Vector with elements: [margin hgap vgap], where: margin Top and right margin at the. 9 # the right side of the subplots of the figure bottom = 0. At the moment, Plotly 3D graphs only support one mode of rotation, as you described. This method should be called only by JSON serializer. 23]m, respectively. scatter_matrix() Support for vline and hline for horizontal and vertical lines. Better still, ask tight_subplot for custom spacing: figure; ha = tight_subplot(2,2,[0. To illustrate:. If False, the parent axes’ anchor will be unchanged Returns (cax, kw), the child axes and the reduced kw dictionary to be passed when creating the colorbar instance. The load is designed for: Active Power = 40000 W Frequency = 50 Hz Inductive Reactive Power = 50. That said, adding a second "simple" mode of rotation is on our short term road map. The shown method is faster than SUBPLOT, which spends a lot of time with searching for existing AXES at the same position considering rounding errors. Plotly With Python: Recently, I stumbled upon Plotly, a beautiful online Data Visualization system by virtue of a MAKE article. It’s easy to add clean, stylish, and flexible dropdowns, buttons, and sliders to Plotly charts. 
tight_subplot is a very good function which does exactly the same and gives us access to control the space between subplots. subplots() is the easier tool to use (note the s at the end of subplots). 9 # the top of the subplots of the figure wspace = 0. jjc12 changed the title Can't add subplot titles to 3d subplots Can't add subplot titles/horizontal spacing to 3d subplots Jul 14, 2016 Copy link Quote reply eulerreich commented Oct 25, 2016 •. The shown method is faster than SUBPLOT, which spends a lot of time with searching for existing AXES at the same position considering rounding. Find examples of combined, stacked, and plots with multiple axis. scatter_3d « plotly. The way it's done is actually very similar to the way done with gridExtra, that is, by drawing a grid and arranging the plots in it. Configuring individual plots. , split() , lapply() , subplot() ) strategy is employed to generate the trellis. Rather than creating a single subplot. 2 R topics documented: Maintainer Carson Sievert Repository CRAN Date/Publication 2020-04-04 19:50:02 UTC R topics documented:. Give the figure a tight_layout so that subplots are nicely spaced between each other. sin (n * np. Still there remains an unused empty space between the subplots. Graphing Library Plotly. 2-py3-none-any. Plotly Subplot Overlay 0 that came out in July 2018, changed the older factor plot to catplot to make it more consistent with terminology in pandas and in seaborn. vertical_spacing (float (default 0. 2 # the amount of height reserved for. 1 Embedding plotly in shiny. How To Customize Plotly 39 s Modebar. import numpy as np import matplotlib. pyplot as plt nrows = 10 fig, axes = plt. jjc12 changed the title Can't add subplot titles to 3d subplots Can't add subplot titles/horizontal spacing to 3d subplots Jul 14, 2016 Copy link Quote reply eulerreich commented Oct 25, 2016 •. This post shows how you can use Playfair’s approach and many more for making. 
The shown method is faster than SUBPLOT, which spends a lot of time with searching for existing AXES at the same position considering rounding. MATLAB selects the second subplot. For example, xlabel( "your xlabel" +newline+ " " ). As I was going through this article, I felt that it was easier to use the plotly graph object since there were a lot more examples of using it than there were the plain dcc. 2 # the amount of height reserved for. pip install pusher Installing Plotly. Plotly Express Add Vertical Line. subplots df. 2 # the amount of width reserved for space between subplots, # expressed as a fraction of the average axis width hspace = 0. 1 # the bottom of the subplots of the figure top = 0. scatter_matrix() Support for vline and hline for horizontal and vertical lines. When you are conducting an exploratory analysis of time-series data, you'll need to identify trends while ignoring random fluctuations in your data. make_subplots( rows=N, cols=1, vertical_spacing=v_space ) for i in range(N): x1 = 1 y1 = 1 trace = go. The axhline and axvline methods allow us to draw horizontal and vertical lines from the axes respectively at the specified value/constant. Functions in this module are interface between your local machine and Plotly. grid(True) plt. For making the webapp, my mentor Alexandros suggested the excellent Dash library by Plotly. As you work with zooms and pans in your project, the position of the playhead is critical to success. Below are 15 charts created by Plotly users in R and Python – each incorporate buttons, dropdowns, and sliders to facilitate data exploration or convey a data narrative. Matplotlib supports all kind of subplots including 2x1 vertical, 2x1 horizontal or a 2x2 grid. Div() named eight columns div-for-charts bg-grey. graph for the simple cases. 3 / rows) Space between subplot rows in normalized plot coordinates. append_trace(trace, i + 1, 1) fig['layout']. 5,horizontal_spacing=0. 
Find examples of combined, stacked, and plots with multiple axis. update(height=N * 200) py. You can change your ad preferences anytime. subplot(421) plt. Increasing vertical spacing between subplots. Vertical Spacing using \subcaption package. from matplotlib import pyplot as plt from datetime import datetime, timedelta values = range(10) dates = [datetime. 3 on the left, and 0. Currently I'm not affiliated in any way with Plotly. 9 # the right side of the subplots of the figure bottom = 0. Updates on Page Load. jjc12 changed the title Can't add subplot titles to 3d subplots Can't add subplot titles/horizontal spacing to 3d subplots Jul 14, 2016 Copy link Quote reply eulerreich commented Oct 25, 2016 •. 2 # the amount of height reserved for. The latent space […]. 9 # the right side of the subplots of the figure bottom = 0. Commented: LEONG SING YEW on 8 Mar 2020 How do i increase the vertical spacing between subplots? 0 Comments. a figure aspect ratio 1. Ctrl Ctrl Space Autocomplete. The robust dispersion measure Gini's mean difference and the SD may optionally be added. x label or position, default None. 50 (black, long), 0. subplots() plt. It can set values for left, right, bottom and top margins, plus the. 01}, # shrink colour bar cmap='OrRd', # use orange/red colour map linewidth=1 # space between cells ) This example is perfectly readable, but by way of an example we'll rotate both the x and y axis labels:. How to create subplots in FSharp. One colorbar corresponds two colormaps, while the other corresponds to one colormap. Plotly add shape to subplot. This section focuses on a particular approach to linking views known as graphical (database) queries using the R package plotly. vertical_spacing (float (default 0. subplot(223) plt. William Playfair (1759 - 1823) was a Scottish economist and pioneer of this approach. 5,horizontal_spacing=0. The time series need to be the same size and they need to be aligned with one another. graph_objects. subplots df. 
How to create subplots in FSharp. js dynamically generate axis tick labels?. You can define callbacks based on user interaction with Graphs such as hovering clicking or selecting dccGraph Graph component in dashCoreComponents Core Interactive UI Components for 39 Dash 39 Aug 28 2020 Plotly Subplot Overlay Aug 29 2020 This means you do not need a Plotly account or an internet. Sometimes you will have a grid of subplots, and you want to have a single legend that describes all the lines for each of the subplots as in the following image. If you are following along closely, you may notice that I am importing the plotly. I'm trying to add a. graph for the simple cases. subplots() is the easier tool to use (note the s at the end of subplots). This post shows five examples of how you can make and style choropleth, subplot, scatter, bubble, and line maps. subplots: helper function for facet_row_spacing If set, a vertical subplot is drawn to the right of the main plot, visualizing the y-distribution. With plotly, one can write R code to pose graphical queries that operate entirely client-side in a web browser (i. When you are conducting an exploratory analysis of time-series data, you'll need to identify trends while ignoring random fluctuations in your data. small inserts in larger plots). It takes in a vector of form c(m, n) which divides the given plot into m*n array of subplots.
|
{}
|
Annotation Based On The Genomic Range
5
0
Entering edit mode
10.1 years ago
Hello all, I have some data like this related to the mouse genome:
chr1 3000000 3000090
chr2 4339993 4389898
chr5 3000330 3003339
chr7 3323233 3390393
I know that by using the UCSC genome browser we can get information about the genes and proteins present at those regions. However, I am more interested in identifying all functional elements (promoters, enhancers, TFs, etc.) within those regions. Is there any way to do that? Within UCSC, is there any option like that?
genome annotation r ucsc • 4.9k views
3
Entering edit mode
10.1 years ago
0
Entering edit mode
2
Entering edit mode
10.1 years ago
You have to define promoters and enhancers yourself; there is no single accepted definition. Get a list of all genes (see Fetching Transcription Start And End For A Custom Gene List From Ucsc (Hg18/Ncbi36) for that, changing the organism and build). If you know R or any other language, add and subtract a number of bases or a region of some kb (e.g. +/-1 kb) from the TSS (labelled txStart in the table) in a strand-specific way; the exact number depends on how you define promoters. Then use the intersectBed tool from Bedtools; check How To Determine Overlaps From Coordinates or the manual for usage.
For enhancers, some people say they lie 5-10 kb away, but one way to find them would be to overlay ChIP-Seq peaks of p300 (a marker for enhancers) on the genome to get a list of enhancers and then intersect it with your own file. If you know Galaxy, then this might be helpful: From BED Coordinates to Genes
Cheers
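The TSS +/- some-kb step can be sketched in a few lines (hypothetical coordinates; adjust flank to your own promoter definition before intersecting with intersectBed):

```python
# Sketch: strand-aware promoter windows around the TSS, e.g. +/- 1 kb.
def promoter_window(chrom, tx_start, tx_end, strand, flank=1000):
    """Return (chrom, start, end) for the promoter region around the TSS."""
    # On the + strand the TSS is txStart; on the - strand it is txEnd.
    tss = tx_start if strand == "+" else tx_end
    return (chrom, max(0, tss - flank), tss + flank)

# Made-up gene records: (chrom, txStart, txEnd, strand)
genes = [("chr1", 3000500, 3010000, "+"),
         ("chr2", 4340000, 4350000, "-")]
promoters = [promoter_window(*g) for g in genes]
```

Writing the resulting intervals out as BED gives you the "promoters" file for intersectBed.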
0
Entering edit mode
Thank you sukhdeep for your reply. I have obtained the ChIP-Seq data from the GEO database. An enhancer might be about 500 bp on average, but the ChIP-Seq data shows only the areas where p300 binding is present. So can I regard +/- 200 bp from the peak start region of the ChIP-Seq as the enhancer?
2
Entering edit mode
10.1 years ago
Irsan ★ 7.6k
If you want information on annotating genomic intervals in general see some similar Biostars-posts:
1
Entering edit mode
9.8 years ago
Emily 23k
No idea about UCSC, but you can do that using the Ensembl Region Report tool: http://www.ensembl.org/tools.html
This allows you to input genomic coordinates, then see everything that's within them. There's a tick box list where you can choose what to see. The options are:
Genes, Transcripts and Proteins
Genomic Sequence
Constrained Elements (Conserved Regions)
Variations (SNPs and InDels)
Structural Variations (CNVs etc)
Regulatory Features
0
Entering edit mode
Thank you for the reply. I am more interested in the Constrained Elements (Conserved Regions) feature. Does this tool support a graphical view? I know the ECR browser does, but I cannot enter each coordinate manually.
0
Entering edit mode
This will just give you a list.
0
Entering edit mode
10.1 years ago
If you are comfortable with a little programming and Unix, you can use the snpEff software and set up databases for different genomic elements like genes, transcription factor binding sites, enhancers, etc.; then it is a pretty simple thing to do. You can get most of the files you need from Ensembl:
http://useast.ensembl.org/info/data/ftp/index.html
The cis-regulatory element information could be derived from the Regulation GFF file and the Regulation data files.
|
{}
|
# Charging 10 1.2V nicd batteries
I have 10 x 1.2V Sanyo NiCd batteries. They are connected in series. I would like to charge them. The thing is, I don't have a charger. I looked up chargers and they are all quite expensive (I live in India). I would like to know what voltage AC-to-DC adapter I should get to charge them.
What are the risks involved in doing so?
Could you also suggest some cheap ways of getting the batteries charged?
-
Something occurred to me. If you can separate the batteries from the battery pack you can charge them in groups. Lower voltage sources like 5 and 12V are much cheaper to find than larger ones. – carveone Jul 30 at 11:52
they are all soldered, I can't remove them – harvey_slash Jul 30 at 11:59
That's unfortunate. 5V adapters are very cheap and easy to find because of USB and a 12V car battery is too low a voltage. You'll have to dig up a larger source I'm afraid. – carveone Jul 30 at 12:01
C is the mA rate corresponding to the cell mAh capacity, so e.g. for a 1200 mAh cell, C = 1200 mA. If they are NiCd you can safely trickle charge them at C/10 - notionally forever, but preferably for about 1 day overall. 24 hrs x C/10 ENSURES all cells are well charged and helps balance the pack. Even about 18 hours at C/10 is enough from fully discharged. Overcharging at > C/10 will damage cells. || NB NiCd and NiMH differ - modern high-capacity NiMH MUST NOT be trickle charged past full capacity. (e.g. an AA NiMH of 1800 mAh or more is best never trickle charged.) – Russell McMahon Jul 30 at 13:20
The fully charged voltage on 10 NiCd batteries is about 1.4V * 10 = 14V. So the first thing you will need is a voltage source higher than that. To keep it safe you really need 1.5 * 10 = 15V minimum.
NiCd batteries are charged with a constant current. You don't say what the rating of your batteries is, but let us assume they are 500 mAh. Trickle charging them is C/10, or 50 mA. That's nice and safe and will fully charge your batteries in 15 hrs (you add 50% to the time because charging is not 100% efficient).
simulate this circuit – Schematic created using CircuitLab
To keep things cheap you can use a resistor to supply this current and calculate the resistor value as:
$$R = \frac{V_{source} - V_{batpack}}{I}$$
Let's say you find an 18V source. Then 18 - 14 = 4V. And you want to charge at 50mA so your resistor value is 80 ohms. 100 is the closest you will get which will charge at: 4/100 = 40mA.
Edit:
I should add that the current will vary as the batteries charge. So a discharged battery at 1.1V will charge at (18 - 11) / 100 = 70mA. This current will fall to 40mA as the battery comes up to full charge.
With a C/10 charge rate, 14 hours is safe even if they are partially charged to begin with. You are "trickle charging" at a "charging rate" and the batteries are designed to fully cope with this rate. Any faster than this and you will have to verify they are charged - the battery pack voltage will be above the nominal 1.25V * 10 = 12.5V when you disconnect the charger and check with a meter. At 1.45V/cell (or anything above 14V for your pack), they are fully charged. Expensive chargers do all this for you.
You can trickle charge at a "top up" rate by adjusting to C/20. So for 500 mAh batteries, trickle charge at 500/20 = 25 mA. You can do this indefinitely without damage, although charging at this rate from discharged will take, well, days!
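As a quick sanity check, the resistor formula and the charge currents above work out like this (numbers taken from the answer):

```python
# Series resistor for a crude trickle charger: R = (Vsource - Vpack) / I
def charge_resistor(v_source, v_pack, i_charge):
    return (v_source - v_pack) / i_charge

def charge_current(v_source, v_pack, r):
    return (v_source - v_pack) / r

r_ideal = charge_resistor(18.0, 14.0, 0.050)  # 80 ohms for 50 mA at full charge
i_full = charge_current(18.0, 14.0, 100.0)    # 0.040 A with the nearest 100 ohm part
i_flat = charge_current(18.0, 11.0, 100.0)    # 0.070 A when the pack is discharged
```

The spread between i_flat and i_full is exactly the varying-current behaviour described in the Edit above.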
-
Thanks to Ricardo for Mathjaxing the equation. Must read the help more ;-) – carveone Jul 30 at 11:33
I'll edit my post in a minute and add some more details on charging rates. – carveone Jul 30 at 11:36
Maybe read this. – Rev1.0 Jul 30 at 11:37
Yes. I've added a circuit. It always turns out huge but you get the idea. – carveone Jul 30 at 11:58
That assumes identical cells. In real life the cells have slightly different capacity and you would have, in the 10 Volt pack for example, 5 cells at ~1.2V, 4 cells at ~1.1 Volt and one (the weakest cell) taking damage at -0.4 Volt. That's why you don't want those packs to discharge too deeply. – Turbo J Jul 30 at 13:09
|
{}
|
# IBDP Physics 4.2 – Travelling waves: IB Style Question Bank SL Paper 2
### Question
Two loudspeakers, A and B, are driven in phase and with the same amplitude at a frequency of 850 Hz. Point P is located 22.5 m from A and 24.3 m from B. The speed of sound is 340 m s–1.
a. Deduce that a minimum intensity of sound is heard at P.
A microphone moves along the line from P to Q. PQ is normal to the line midway between the loudspeakers.
b. The intensity of sound is detected by the microphone. Predict the variation of detected intensity as the microphone moves from P to Q.
c. When both loudspeakers are operating, the intensity of sound recorded at Q is I0. Loudspeaker B is now disconnected. Loudspeaker A continues to emit sound with unchanged amplitude and frequency. The intensity of sound recorded at Q changes to IA. Estimate Io /IA
Answer/Explanation
### Ans:
a.
wavelength = 340/850 = 0.40 m
path difference = 24.3 − 22.5 = 1.8 «m»; 1.8/0.40 = 4.5 λ OR 1.8/0.20 = 9 «half-wavelengths»
waves meet in antiphase «at P» OR destructive interference/superposition «at P»
b. «equally spaced» maxima and minima ✓
a maximum at Q ✓
four «additional» maxima «between P and Q»
c. the amplitude of sound at Q is halved ✓
«intensity is proportional to amplitude squared hence»
IA/Io = 1/4
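The arithmetic in part (a) and the intensity ratio in part (c) can be checked numerically (values from the question):

```python
v, f = 340.0, 850.0              # speed of sound (m/s) and frequency (Hz)
wavelength = v / f               # 0.40 m
path_diff = 24.3 - 22.5          # 1.8 m
half_wavelengths = path_diff / (wavelength / 2)  # 9: odd, so antiphase at P

# Part (c): at a maximum the two equal amplitudes add, so disconnecting B
# halves the amplitude and quarters the intensity (I proportional to A**2).
ratio_Ia_over_I0 = (1 / 2) ** 2  # 0.25
```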
### Question
A beam of coherent monochromatic light from a distant galaxy is used in an optics experiment on Earth.
The beam is incident normally on a double slit. The distance between the slits is 0.300 mm. A screen is at a distance D from the slits. The diffraction angle θ is labelled.
The air between the slits and the screen is replaced with water. The refractive index of water is 1.33.
a.i.
A series of dark and bright fringes appears on the screen. Explain how a dark fringe is formed.[3]
a.ii.
The wavelength of the beam as observed on Earth is 633.0 nm. The separation between a dark and a bright fringe on the screen is 4.50 mm. Calculate D.[2]
b.i.
Calculate the wavelength of the light in water.[1]
b.ii.
State two ways in which the intensity pattern on the screen changes.[2]
Answer/Explanation
## Markscheme
a.i.
superposition of light from each slit / interference of light from both slits
with path/phase difference of any half-odd multiple of wavelength/any odd multiple of $$\pi$$ (in words or symbols)
producing destructive interference
Ignore any reference to crests and troughs.[3 marks]
a.ii.
evidence of solving for D «D = $$\frac{{sd}}{\lambda }$$»
«$$\frac{{4.50 \times {{10}^{ – 3}} \times 0.300 \times {{10}^{ – 3}}}}{{633.0 \times {{10}^{ – 9}}}}$$ × 2» = 4.27 «m»
Award [1] max for 2.13 m.[2 marks]
b.i.
$$\frac{{633.0}}{{1.33}}$$ = 476 «nm»[1 mark]
b.ii.
distance between peaks decreases
intensity decreases[2 marks]
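A numerical check of a.ii and b.i (values from the question; the factor of 2 converts the dark-to-bright separation into a full fringe spacing):

```python
lam = 633.0e-9        # wavelength in air (m)
d = 0.300e-3          # slit separation (m)
s_half = 4.50e-3      # dark-to-bright separation (m), half a fringe spacing
D = (s_half * 2) * d / lam   # screen distance: D = s*d/lambda with s = 2*s_half
lam_water = 633.0 / 1.33     # wavelength in water (nm)
```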
### Question
A longitudinal wave is travelling in a medium from left to right. The graph shows the variation with distance x of the displacement y of the particles in the medium. The solid line and the dotted line show the displacement at t=0 and t=0.882 ms, respectively.
The period of the wave is greater than 0.882 ms. A displacement to the right of the equilibrium position is positive.
a.
State what is meant by a longitudinal travelling wave.[1]
b.
Calculate, for this wave,
(i) the speed.
(ii) the frequency.[4]
c.
The equilibrium position of a particle in the medium is at x=0.80 m. For this particle at t=0, state and explain
(i) the direction of motion.
(ii) whether the particle is at the centre of a compression or a rarefaction.[4]
Answer/Explanation
## Markscheme
a.
a wave where the displacement of particles/oscillations of particles/movement of particles/vibrations of particles is parallel to the direction of energy transfer/wave travel/wave movement
Do not allow “direction of wave”.
b.
(i)
ALTERNATIVE 1
«distance travelled by wave =» 0.30 m
$$v = \ll \frac{{{\rm{distance}}}}{{{\rm{time}}}} = \gg 340{\rm{m}}{{\rm{s}}^{ – 1}}$$
ALTERNATIVE 2
evaluates $$T = \frac{{{\rm{0.882}} \times {{10}^{ – 3}} \times 1.6}}{{{\rm{0.3}}}}$$«=4.7ms» to give f = 210 or 212 Hz
uses λ = 1.6 m with v = fλ to give 340 m s–1
(ii)
ALTERNATIVE 1
λ=1.60m
f=$$\frac{{340}}{{1.60}}$$=212 or 213Hz
ALTERNATIVE 2
$$T = \frac{{0.882 \times {{10}^{ – 3}} \times 1.6}}{{0.3}}$$«=4.7ms»
$$f = \ll \frac{1}{T} = \gg {\rm{ 210Hz}}$$
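Both alternatives in (b) can be reproduced numerically (graph readings from the markscheme):

```python
dx = 0.30          # distance the wave pattern moved (m), read from the graph
dt = 0.882e-3      # elapsed time (s)
v = dx / dt        # ~340 m/s (alternative 1)

lam = 1.60         # wavelength read from the graph (m)
f = v / lam        # ~213 Hz

T = dt * lam / dx  # period via alternative 2, ~4.7 ms
f_alt = 1 / T      # agrees with f above
```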
c.
(i)
the displacement of the particle decreases OR «on the graph» displacement is going in a negative direction OR on the graph the particle goes down
to the left
Do not allow “moving downwards” unless accompanied by reference to graph.
(ii)
molecules to the left of the particle have moved left and those to the right have moved right
«hence» the particle is at the centre of a rarefaction
Related Links
|
{}
|
# boxes spanning several pages
I defined some time ago a box environment with the following features:
• It has a frame-color and a background color
• It stores the marginnotes, if any, and restore them at the end of the box. I use for this the commands \mpgmpar@savemarginpars and \mpgmpar@restoremarginpars from the package minipage-marginpar
It worked "perfectly", but now I need to put a very long text in these boxes, and there is no page break. My actual code, which is clearly far from optimal, is:
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{blindtext}
\usepackage{xcolor}
\usepackage{minipage-marginpar}
\usepackage{fancybox}
\newcommand{\strokecolor}{}
\newcommand{\fillcolor}{}
\newlength{\currentparskip}
\makeatletter
\newenvironment{mabox}[3]{%
\renewcommand{\strokecolor}{#2}
\renewcommand{\fillcolor}{#3}
\begin{Sbox}%
\setlength{\currentparskip}{\parskip}% save the value of paragraph skip
\begin{minipage}{#1}%
\setlength{\parskip}{\currentparskip}% restore the value
\mpgmpar@savemarginpars
}%
{\end{minipage}\end{Sbox}\fcolorbox{\strokecolor}{\fillcolor}{\TheSbox}\mpgmpar@restoremarginpars}
\makeatother
\begin{document}
\begin{mabox}{10cm}{blue}{gray}
0000
\blindtext \blindtext
11111
\blindtext \blindtext
2222
\blindtext \blindtext
3333
\blindtext \blindtext
\end{mabox}
\end{document}
So, how could I modify the definition of mabox so that I keep the present features, and the box spans several pages if the content is very long?
-
mdframed package is good for boxes of this sort (you'll find many examples on this site) – David Carlisle Mar 5 '12 at 12:01
possible duplicate of How to continue the framed text box on multiple pages? – diabonas Mar 5 '12 at 12:19
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{blindtext}
\usepackage{xcolor}
\usepackage[style=1,leftmargin=0pt,rightmargin=0pt]{mdframed}
\makeatletter
\newenvironment{boxtype1}{%
\begin{mdframed}%
[linewidth=.5,margin=8.5,backgroundcolor=gray!20,linecolor=black,fontcolor=black]%
\fontsize{9}{12}\sffamily\selectfont%
}{\end{mdframed}}
\makeatother
\begin{document}
\begin{boxtype1}
0000
\blindtext \blindtext
11111
\blindtext \blindtext
2222
\blindtext \blindtext
3333
\blindtext \blindtext
\end{boxtype1}
\end{document}
-
Welcome to TeX.sx! Could you please add some explanation to your code example to make it easer for others to understand and adapt your solution? – diabonas Mar 5 '12 at 12:17
|
{}
|
# Difference equations and stability
1. Oct 9, 2005
### Benny
Hi, I am unsure about the stability of fixed points; here is an example.
$$x_{n + 1} = x_n$$
There are fixed points at x = 0 and x = 1. In general, when talking about difference equations and whether a fixed point is stable or unstable, does this refer to points in a neighbourhood of those points? For example, suppose that for some difference equation there is a fixed point at x = 0.12345678 and it is unstable. Does this mean that if I repeatedly 'apply' the recurrence relation (the one I provided is an example, although it probably isn't the best one for my question) to a point near that fixed point, for example x = 0.12, then the successive values I obtain will 'diverge' from the initial value of 0.12?
I just wanted to check because I need to know this in order to complete my assignment. The assignment questions I have are completely different to my example. Basically I just need to verify that I have the correct definition for 'stable' and 'unstable' fixed points for a recurrence relation. Any help appreciated.
2. Oct 9, 2005
### HallsofIvy
Staff Emeritus
Are you sure of your example?? x_{n+1} = x_n has every point as a fixed point, not just 1 and 0!
In order to determine the fixed points of a difference equation, replace each x_n, x_{n+1}, etc. by x and solve for x. Your example just gives x = x.
Yes, the concept of "stable" and "unstable" fixed points depends on what happens to points close to the fixed points.
A difference equation that does have 0 and 1 as fixed points is
xn+1= xn2. If x is a fixed point then setting xn= x will give xn+1= x so x= x2 which has solutions x= 0 and x= 1. If we look at points close to 0, we see that repeatedly squaring a number close to 0 gives a sequence that converges to 0: 0 is a stable fixed point. If, however, we repeatedly square a number close to 0 then either: (a) x< 1 so we get a sequence that converges to 0 or:(b) x> 1 so we get a sequence that diverges. Either way, the sequence does not converge to or stay near 1. 1 is an unstable fixed point.
3. Oct 9, 2005
### Benny
Yeah my example should have more than just 1 and zero as fixed points.
I also have the result that if $$x_{n + 1} = f\left( {x_n } \right),\left| {f'\left( p \right)} \right| < 1$$ where p is a fixed point then x = p is a stable point. So if x_(n+1) = 1.5x_(n) then any fixed points (if there are any) will be unstable because (1.5x)' = 1.5 > 1 for all x? Another thing I don't really understand is the concept of unstable. From the material that I've got, there's a mention of intervals being stretched if a point is unstable. But what interval is this?
I just have one more question. What is a periodic point? I've tried looking at a few websites but they don't really give a definition of periodic and non-periodic points.
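The |f'(p)| < 1 test can be tried on HallsofIvy's earlier example x_{n+1} = x_n^2, whose fixed points are 0 and 1 (a quick sketch):

```python
def is_stable(fprime_at_p):
    """Linear stability of a fixed point p of x_{n+1} = f(x_n)."""
    return abs(fprime_at_p) < 1

# f(x) = x**2 has fixed points 0 and 1, and f'(x) = 2x.
stable_at_0 = is_stable(2 * 0)   # |f'(0)| = 0 < 1, so 0 attracts
stable_at_1 = is_stable(2 * 1)   # |f'(1)| = 2 > 1, so 1 repels
```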
4. Oct 9, 2005
### Physics Monkey
Consider the map $$x_{n+1} = |x_n -1|$$. If you start out at x=0 then you jump to x=1 then back to x=0, etc. In this case x=0 is a periodic point of period 2 because if $$x_n = 0$$ then $$x_{n+2} = 0$$. Are there any other periodic points?
The alternating behavior you see here is called a limit cycle. A limit cycle is stable if, as you perturb away from it, the system moves back towards the limit cycle. Is the limit cycle $$0,1,0$$ stable?
Edit: The example I gave is kind of trivial, but you can try out the new definitions on a slightly less trivial map like $$x_{n+1} = (x_n - 1)^2$$.
Last edited: Oct 9, 2005
5. Oct 9, 2005
### HallsofIvy
Staff Emeritus
Yes. And it is easy to see that the only fixed point is x= 0. If x is exactly 0, no matter how many times you multiply it by 1.5, you still get 0! However, if x is not 0, no matter how close it is, multiplying by 1.5 steadily moves it away from 0.
Some small interval around your fixed point. In the example above, if you take $(-\delta, \delta)$, after one iteration of xn+1= 1.5xn applied to every point in that interval, you have $(-1.5\delta, 1.5\delta)$- the interval has been stretched by exactly a factor of 1.5- points are moving away from the fixed point.
Consider this iteration on [0,1]: xn+1= 2xn mod 1. That "mod 1" means "if the result of multiplying by 2 is larger than or equal to 1, drop the integer part". For example, if x0= 2/5, then x1= 2(2/5)= 4/5. To find x2, multiply by 2 again: 8/5= 1.6 so x2= 0.6= 3/5. To find x3, multiply by 2 again: 6/5= 1.2 so x3= 0.2= 1/5. x4= 2(1/5)= 2/5, the number we started with. Since we will now do exactly the same thing again, all numbers past here will be the same:
2/5, 4/5, 3/5, 1/5, 2/5, 4/5, 3/5, 1/5, 2/5, 4/5, 3/5, 1/5, 2/5, ....
That sequence is "periodic with period 4" since every 4th number repeats. We say that the starting point, 2/5, is a "periodic point". Of course, if we had started with 4/5, 3/5, or 1/5, we would have gotten the same sequence. If a sequence is periodic, every point in it is a periodic point.
Last edited: Oct 9, 2005
6. Oct 9, 2005
### Benny
HallsofIvy - Thanks again for your explanations.
Physics Monkey - I'm thinking that one would be another periodic point. The point x = 0.5 seems to just stay there, so I'm not sure about that one. As for the second one, what happens if the starting point is x_0 = 12? Applying the recurrence relation over and over again results in values which get closer to zero and eventually reach zero, if I am reading it correctly. After that, further applications of the recurrence relation result in values which alternate between zero and one like before. Starting at x_0 = 12, the values appear to converge to zero rather than 'move further away' from zero.
7. Oct 9, 2005
### Physics Monkey
You are right, 1 is another periodic point of the first map, in fact it is part of the limit cycle $$0,1,0$$. Of course x=.5 is a fixed point, so you can call that a periodic point of period 1 if you like. There are still more periodic points for the first map.
Also, on the second map, how did you decide that x_0 = 12 heads towards zero? Doesn't the sequence go like 12, (12-1)^2 = 121, (121-1)^2 = 14400, etc. It seems to me that it diverges.
Last edited: Oct 9, 2005
8. Oct 9, 2005
### saltydog
$$x_{n+1}=ax_n(1-x_n)$$
from the perspective of its Feigenbaum plot? You know, a is a parameter you vary from 1 to 4 and then run the iterator for some random starting values between 0 and 1. Sometimes it settles to a single point, sometimes to a set of "periodic points", sometimes it's chaotic.
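The behavior saltydog describes can be sampled directly; this sketch (burn-in length and starting value are arbitrary choices) prints the long-run values of the logistic map for a few parameters:

```python
# Long-run behavior of the logistic map x -> a*x*(1-x) for a few
# parameter values -- a small sample of what a Feigenbaum
# (bifurcation) plot shows for every a between 1 and 4.
def settle(a, x=0.4, burn=1000, keep=4):
    for _ in range(burn):          # discard the transient
        x = a * x * (1 - x)
    out = []
    for _ in range(keep):          # record the settled behavior
        x = a * x * (1 - x)
        out.append(round(x, 4))
    return out

print(settle(2.8))  # settles to the single fixed point 1 - 1/a
print(settle(3.2))  # alternates between two values: a period-2 cycle
print(settle(3.9))  # no apparent pattern: the chaotic regime
```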
9. Oct 9, 2005
### Benny
Physics Monkey - I was a little unclear when I said "the second one"; I replied at around the same time as your edit was made. I was looking at x_(n+1) = |x_n - 1| and the part where you mentioned limit cycles being stable. I figured that 12 is reasonably far away from zero and one, but as I repeatedly apply the recurrence relation to it, the values I obtain decrease towards zero. From there, oscillation between zero and one occurs. I wasn't really sure about that one. It was just in response to the limit cycle question you asked me.
Saltydog - I've mainly been trying to just understand the basics (such as definitions). It is for an assignment where the associated material is covered in only three classes. But even when I look up the most basic things, such as what "chaotic" means I find things on "transitivity, density" and other terms which I've rarely come across or never even heard of before. So while I'm grateful for your help, I don't really understand the last bit.
10. Oct 9, 2005
### Physics Monkey
Benny, glad to hear the mistake was mine. x0 = 12 does indeed end up in the 0,1 limit cycle, but what about x0 = 12.1? The first example I gave is a little pathological in that it has an infinite number of limit cycles (try to find them all) but none of them are stable. In fact when you perturb one limit cycle, you just end up on another limit cycle! I would suggest playing with the second map I gave, which has only one limit cycle and trying to figure out if the cycle is stable.
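The suggestion can be tried numerically; this sketch (starting values chosen just for illustration) probes both maps and shows the contrast Physics Monkey describes:

```python
# Probe the limit cycles of the two maps numerically.
# For x -> |x - 1|, a perturbed start lands on a *different* cycle
# (none of the cycles is stable); for x -> (x - 1)^2, starts near the
# 0,1 cycle are pulled into it, while far-away starts blow up.
def iterate(f, x, n):
    for _ in range(n):
        x = f(x)
    return x

f1 = lambda x: abs(x - 1)
f2 = lambda x: (x - 1) ** 2

print(iterate(f1, 12, 50))    # integer start ends up in the 0,1 cycle
print(iterate(f1, 12.1, 50))  # perturbed start: the 0.1, 0.9 cycle instead
print(iterate(f2, 0.1, 50))   # near the cycle: pulled toward 0 and 1
print(iterate(f2, 12, 5))     # far away: diverges rapidly
```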
11. Oct 9, 2005
### Benny
Ok I'll try the second one at another time and see if I can find some interesting behaviour. But right now I need to get some sleep. :zzz:
# Include links for certain paragraphs
I would like to include links/reference points in my LaTeX document, e.g.
The situation is the following:
There is a certain paragraph on, say page 11, and on page 13 there is a link and when the reader clicks this link (which does not necessarily have to be blue) then the reader is taken to the desired paragraph on page 11.
And I imagined my code to look like this:
Lie groups are smooth manifolds
...
Lie algebra.
Lie groups play an enormous role in modern geometry,
...
and are studied in representation theory.
In the 1940s–1950s, Ellis Kolchin, Armand Borel
...
in number theory.
On a "global" level, whenever a
...
analysis on the manifold. (see \LinkTo{LabelLie})
Is such a construction possible?
-
There are several methods in hyperref:
• \hypertarget and \hyperlink can be used to create an anchor and a link to that anchor.
• \phantomsection\label{foo} can be referenced with arbitrary text as \hyperref[foo]{arbitrary text}.
• The following example uses \refstepcounter that automatically sets an anchor. By redefining \the<counter> the text is set up for later use with \ref:
The example file:
\documentclass{article}
\usepackage{lipsum}
\usepackage{hyperref}
\newcounter{partextdummy}
\newcommand*{\CreateLink}[2]{%
  \begingroup
    \renewcommand*{\thepartextdummy}{#2}%
    \ifhmode
      % raise the anchor above the base line in horizontal mode
      \raisebox{2ex}[0pt][0pt]{%
        \refstepcounter{partextdummy}%
        \label{#1}%
      }%
    \else
      \refstepcounter{partextdummy}%
      \label{#1}%
    \fi
  \endgroup
  \ignorespaces
}
\begin{document}
\lipsum[1]
\refstepcounter{<counter>} advances the counter by one and \label will use \the<counter> for the referenced number that \ref refers to. \renewcommand*{\the<counter>}{#2} replaces that number with #2. – Heiko Oberdiek Feb 23 '13 at 17:29
\CreateLink is defined by \newcommand that complains, if the command already exists. The redefinition of \thepartextdummy is local (because of \begingroup and \endgroup) and the command belongs to the counter partextdummy. – Heiko Oberdiek Feb 23 '13 at 20:14
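For the simplest of the three methods, \hypertarget and \hyperlink, a minimal sketch looks like this (the anchor name LabelLie is taken from the question; the link text is arbitrary):

```latex
\documentclass{article}
\usepackage{hyperref}
\begin{document}

% The paragraph to be linked to (e.g. on page 11):
\hypertarget{LabelLie}{}% invisible anchor placed just before the paragraph
Lie groups are smooth manifolds \dots

% Elsewhere (e.g. on page 13), \hyperlink takes the anchor name
% and the text that should be clickable:
On a ``global'' level \dots\ (see \hyperlink{LabelLie}{the paragraph on Lie groups})

\end{document}
```

Unlike \ref, the link text here is free-form, which matches the question's imagined \LinkTo usage.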
Question:
How fast is the earth traveling?
The earth moves around the sun in an oval (elliptical) track that has an average radius of 93 million miles, at a speed of about 18.5 miles a second.
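The quoted speed follows from the orbit's circumference and the length of a year; a quick check, treating the orbit as a circle of radius 93 million miles:

```python
import math

# Rough check of Earth's orbital speed: circumference / one year
# (circular-orbit approximation using the figures quoted above)
radius_miles = 93e6
year_seconds = 365.25 * 24 * 3600

speed = 2 * math.pi * radius_miles / year_seconds
print(round(speed, 1))  # 18.5 -- miles per second
```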
Surface area: 510,072,000 km²
Mass: 5.97219×10^24 kg
Albedo: 0.367 (geometric)
Location: Earth → Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Virgo Supercluster → Pisces–Cetus Supercluster Complex → Observable universe → Universe
In classical geometry, the radius of a circle or sphere is the length of a line segment from its center to its perimeter. The name comes from Latin radius, meaning "ray" but also the spoke of a chariot wheel. The plural of radius can be either radii (from the Latin plural) or the conventional English plural radiuses. The typical abbreviation and mathematical variable name for "radius" is r. By extension, the diameter d is defined as twice the radius: $d = 2r$. If the object does not have an obvious center, the term may refer to its circumradius, the radius of its circumscribed circle or circumscribed sphere. In either case, the radius may be more than half the diameter, which is usually defined as the maximum distance between any two points of the figure. The inradius of a geometric figure is usually the radius of the largest circle or sphere contained in it. The inner radius of a ring, tube or other hollow object is the radius of its cavity. For regular polygons, the radius is the same as its circumradius. The inradius of a regular polygon is also called its apothem. In graph theory, the radius of a graph is the minimum over all vertices u of the maximum distance from u to any other vertex of the graph.

The radius of the circle with perimeter (circumference) C is $r = \frac{C}{2\pi}$. The radius of a circle with area A is $r = \sqrt{A/\pi}$. The radius is half the diameter. To compute the radius of a circle going through three points P1, P2, P3, the formula $r = \frac{|P_1P_3|}{2\sin\theta}$ can be used, where θ is the angle $\angle P_1 P_2 P_3$. This formula uses the Sine Rule. If the three points are given by their coordinates $(x_1,y_1)$, $(x_2,y_2)$ and $(x_3,y_3)$, one can also compute the radius as $r = \frac{abc}{4K}$, where a, b, c are the pairwise distances between the points and K is the area of the triangle they form. For a regular polygon with n sides, the radius can be computed from the side s by $r = \frac{s}{2\sin(\pi/n)}$. The radius of a d-dimensional hypercube with side s is $r = \frac{s}{2}\sqrt{d}$.
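The radius of the circle through three given points can be computed numerically; this sketch uses the side-length form $r = abc/(4K)$ with Heron's formula for the area:

```python
import math

# Radius of the circle through three points, via r = abc / (4K),
# where a, b, c are the side lengths and K is the triangle's area.
def circumradius(p1, p2, p3):
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = (a + b + c) / 2
    K = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return a * b * c / (4 * K)

# Right triangle with hypotenuse sqrt(2): the circumradius is sqrt(2)/2,
# since the hypotenuse of a right triangle is a diameter.
print(circumradius((0, 0), (1, 0), (0, 1)))
```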
In 1991, Charlotte Motor Speedway created the first notable "Legends" oval course. The existing quad-oval start/finish straight was connected to the pit lane by two 180-degree turns, resulting in a 1/4-mile short track oval. A special exhibition race featuring former NASCAR legends headlined on the course one time. A year later, the same 1/4-mile layout became a popular venue for Legends car racing. The name "Legends oval" was derived from this use. They have also seen use with go-karts, short track stock cars, and other disciplines. The Legends oval concept allows minor league levels of racing to compete in the stadium-style atmosphere of large speedways, when they would normally be confined to small, stand-alone 1/4-mile venues. It also allows them to serve as support races at tracks where they would not normally be able to compete (due to the track lengths and speeds) without track or car modification.

Oval tracks are usually banked on both the straights and the curves, though the banking on the straights is usually smaller; circuits without any banking are rare. Low-banked tracks are usually old or small, while steep banking is more common at newer circuits. Circuits like the Milwaukee Mile and Indianapolis Motor Speedway, with approximately 9° of banking in the curves, are considered low-banked; superspeedways like Talladega (up to 33° in the curves) and Daytona (up to 32°) are considered high-banked. Charlotte and Dover are the intermediate tracks with the highest banking, at 24°, and Bristol is the short oval with the most, up to 30°.

Pack racing is a phenomenon found on fast, high-banked superspeedways. It occurs when the vehicles racing are cornering at their limit of aerodynamic drag, but within their limit of traction. This allows drivers to race around the track constantly at wide open throttle. Since the vehicles are within their limit of traction, drafting through corners will not hinder a vehicle's performance.
As cars running together are faster than cars running individually, all cars in the field will draft each other simultaneously in one large pack. In stock car racing this is often referred to as "restrictor plate racing" because NASCAR mandates that each car on its two longest high-banked ovals, Talladega and Daytona, use an air restrictor to reduce horsepower. The results of pack racing may vary. As drivers are forced to race in a confined space, overtaking is very common as vehicles may travel two and three abreast. This forces drivers to use strong mental discipline in negotiating traffic. There are drawbacks, however. Should an accident occur at the front of the pack, the results could block the track in a short amount of time. This leaves drivers at the back of the pack with little time to react and little room to maneuver. The results are often catastrophic as several cars may be destroyed in a single accident. This type of accident is often called "The Big One". Oval track racing requires different tactics than road racing. While the driver doesn't have to shift gears nearly as frequently, brake as heavily or as often, or deal with turns of various radii in both directions as in road racing, drivers are still challenged by negotiating the track. Both types of racing place physical demands on the driver. A driver in an IndyCar race at Richmond International Raceway may be subject to as many lateral g-forces (albeit in only one direction) as a Formula One driver at Istanbul Park. Weather also plays a different role in each discipline. Road racing offers a variety of fast and slow corners that allow the use of rain tires. Paved ovals cannot support rain tires because the turns are all very fast and the soft rubber compound used in the tread would not survive long against the forces inflicted upon it. Dirt ovals will sometimes support a light rain. Some tracks (e.g., Evergreen Speedway in Monroe, WA) have "rain or shine" rules requiring races to be run in rain. 
Safety has also been a point of difference between the two. While a road course usually has abundant run-off areas, gravel traps, and tire barriers, ovals usually have a concrete retaining wall separating the track from the fans. Innovations have been made to change this, however. The SAFER barrier was created to provide a less dangerous alternative to a traditional concrete wall. The barrier can be retrofit onto an existing wall or may take the place of a concrete wall completely.
In astronomy, a lunar distance (LD) is a measurement of the distance from the Earth to the Moon. The average distance from Earth to the Moon is 384,400 km (238,900 mi). The actual distance varies over the course of the orbit of the moon, from 356,700 km (221,600 mi) at the perigee and 406,300 km (252,500 mi) at apogee. High-precision measurements of the lunar distance are made by measuring the time taken for light to travel between LIDAR stations on Earth and retroreflectors placed on the Moon. The Moon is spiraling away from Earth at an average rate of 3.8 cm (1.5 in) per year, as detected by the Lunar Laser Ranging Experiment. The recession rate is considered anomalously high. By coincidence, the diameter of corner cubes in retroreflectors on the Moon is also 3.8 cm (1.5 in). The tidal dissipation rate varied in the Earth geological history. The first person to measure the distance to the Moon was the 2nd-century-BC astronomer and geographer Hipparchus, who exploited the lunar parallax using simple trigonometry. He was approximately 26,000 km (16,000 mi) off the actual distance, an error of about 6.8%. The NASA Near Earth Object Catalog includes the distances of asteroids and comets measured in Lunar Distances.
Earth radius is the distance from Earth's center to its surface, about 6,371 kilometers (3,959 mi). This length is also used as a unit of distance, especially in astronomy and geology, where it is usually denoted by $R_\oplus$. This article deals primarily with spherical and ellipsoidal models of the Earth. See Figure of the Earth for a more complete discussion of models. The Earth is only approximately spherical, so no single value serves as its natural radius. Distances from points on the surface to the center range from 6,353 km to 6,384 km (3,947–3,968 mi). Several different ways of modeling the Earth as a sphere each yield a mean radius of 6,371 kilometers (3,959 mi). While "radius" normally is a characteristic of perfect spheres, the term as used in this article more generally means the distance from some "center" of the Earth to a point on the surface or on an idealized surface that models the Earth. It can also mean some kind of average of such distances, or of the radius of a sphere whose curvature matches the curvature of the ellipsoidal model of the Earth at a given point. The first scientific estimation of the radius of the earth was given by Eratosthenes about 240 BC. Estimates of the accuracy of Eratosthenes’s measurement range from within 2% to within 15%. Earth's rotation, internal density variations, and external tidal forces cause it to deviate systematically from a perfect sphere. Local topography increases the variance, resulting in a surface of unlimited complexity. Our descriptions of the Earth's surface must be simpler than reality in order to be tractable. Hence we create models to approximate the Earth's surface, generally relying on the simplest model that suits the need. Each of the models in common use come with some notion of "radius". Strictly speaking, spheres are the only solids to have radii, but looser uses of the term "radius" are common in many fields, including those dealing with models of the Earth. 
Viewing models of the Earth from less to more approximate: In the case of the geoid and ellipsoids, the fixed distance from any point on the model to the specified center is called "a radius of the Earth" or "the radius of the Earth at that point". It is also common to refer to any mean radius of a spherical model as "the radius of the earth". On the Earth's real surface, on the other hand, it is uncommon to refer to a "radius", since there is no practical need. Rather, elevation above or below sea level is useful. Regardless of model, any radius falls between the polar minimum of about 6,357 km and the equatorial maximum of about 6,378 km (≈3,950 – 3,963 mi). Hence the Earth deviates from a perfect sphere by only a third of a percent, sufficiently close to treat it as a sphere in many contexts and justifying the term "the radius of the Earth". While specific values differ, the concepts in this article generalize to any major planet. Rotation of a planet causes it to approximate an oblate ellipsoid/spheroid with a bulge at the equator and flattening at the North and South Poles, so that the equatorial radius $a$ is larger than the polar radius $b$ by approximately $a q$, where the oblateness constant $q$ is $$q = \frac{a^3\omega^2}{GM},$$ where $\omega$ is the angular frequency, $G$ is the gravitational constant, and $M$ is the mass of the planet. For the Earth $1/q \approx 289$, which is close to the measured inverse flattening $1/f \approx 298.257$. Additionally, the bulge at the equator shows slow variations. The bulge had been declining, but since 1998 the bulge has increased, possibly due to redistribution of ocean mass via currents. The variation in density and crustal thickness causes gravity to vary on the surface, so that the mean sea level will differ from the ellipsoid. This difference is the geoid height, positive above or outside the ellipsoid, negative below or inside. The geoid height variation is under 110 m on Earth.
The geoid height can change abruptly due to earthquakes (such as the Sumatra-Andaman earthquake) or reduction in ice masses (such as Greenland). Not all deformations originate within the Earth. The gravity of the Moon and Sun cause the Earth's surface at a given point to undulate by tenths of meters over a nearly 12 hour period (see Earth tide). Given local and transient influences on surface height, the values defined below are based on a "general purpose" model, refined as globally precisely as possible within 5 m of reference ellipsoid height, and to within 100 m of mean sea level (neglecting geoid height). Additionally, the radius can be estimated from the curvature of the Earth at a point. Like a torus the curvature at a point will be largest (tightest) in one direction (North-South on Earth) and smallest (flattest) perpendicularly (East-West). The corresponding radius of curvature depends on location and direction of measurement from that point. A consequence is that a distance to the true horizon at the equator is slightly shorter in the north/south direction than in the east-west direction. In summary, local variations in terrain prevent the definition of a single absolutely "precise" radius. One can only adopt an idealized model. Since the estimate by Eratosthenes, many models have been created. Historically these models were based on regional topography, giving the best reference ellipsoid for the area under survey. As satellite remote sensing and especially the Global Positioning System rose in importance, true global models were developed which, while not as accurate for regional work, best approximate the earth as a whole. The following radii are fixed and do not include a variable location dependence. They are derived from the WGS-84 ellipsoid. The value for the equatorial radius is defined to the nearest 0.1 meter in WGS-84. 
The value for the polar radius in this section has been rounded to the nearest 0.1 meter, which is expected to be adequate for most uses. Please refer to the WGS-84 ellipsoid if a more precise value for its polar radius is needed. The radii in this section are for an idealized surface. Even the idealized radii have an uncertainty of ± 2 meters. The discrepancy between the ellipsoid radius and the radius to a physical location may be significant. When identifying the position of an observable location, the use of more precise values for WGS-84 radii may not yield a corresponding improvement in accuracy. The symbol given for the named radius is used in the formulae found in this article. The Earth's equatorial radius $a$, or semi-major axis, is the distance from its center to the equator and equals 6,378.1370 kilometers (3,963.1906 mi). The equatorial radius is often used to compare Earth with other planets. The Earth's polar radius $b$, or semi-minor axis, is the distance from its center to the North and South Poles, and equals 6,356.7523 kilometers (3,949.9028 mi). The distance from the Earth's center to a point on the spheroid surface at geodetic latitude $\varphi\,\!$ is $$R(\varphi)=\sqrt{\frac{(a^2\cos\varphi)^2+(b^2\sin\varphi)^2}{(a\cos\varphi)^2+(b\sin\varphi)^2}},$$ where $a$ and $b$ are the equatorial radius and the polar radius, respectively. These are based on an oblate ellipsoid. Eratosthenes used two points, one almost exactly north of the other. The points are separated by distance $D$, and the vertical directions at the two points are known to differ by an angle of $\theta$, in radians. A formula used in Eratosthenes' method is $$R = \frac{D}{\theta},$$ which gives an estimate of the radius based on the north-south curvature of the Earth. Note that N=R at the equator: at geodetic latitude 48.46791 degrees (e.g., Lèves, Alsace, France), the radius R is 20000/π ≈ 6,366.197 km, namely the radius of a perfect sphere for which the meridian arc length from the equator to the North Pole is exactly 10000 km, the originally proposed definition of the meter.
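The standard geocentric-radius formula at geodetic latitude, $R(\varphi)=\sqrt{\frac{(a^2\cos\varphi)^2+(b^2\sin\varphi)^2}{(a\cos\varphi)^2+(b\sin\varphi)^2}}$, can be evaluated directly from the WGS-84 semi-axes quoted above:

```python
import math

# Distance from Earth's center to the WGS-84 ellipsoid surface at
# geodetic latitude phi (semi-axes a, b in kilometers, from the text)
A, B = 6378.1370, 6356.7523

def geocentric_radius(phi_deg):
    phi = math.radians(phi_deg)
    num = (A**2 * math.cos(phi))**2 + (B**2 * math.sin(phi))**2
    den = (A * math.cos(phi))**2 + (B * math.sin(phi))**2
    return math.sqrt(num / den)

print(geocentric_radius(0))   # equator: the equatorial radius a
print(geocentric_radius(90))  # poles:   the polar radius b
print(geocentric_radius(45))  # somewhere in between
```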
The Earth's mean radius of curvature (averaging over all directions) at latitude $\varphi\,\!$ is: The Earth's radius of curvature along a course at geodetic bearing (measured clockwise from north) $\alpha\,\!$, at $\varphi\,\!$ is derived from Euler's curvature formula as follows: The Earth's meridional radius of curvature at the equator equals the meridian's semi-latus rectum: The Earth's polar radius of curvature is: The Earth can be modeled as a sphere in many ways. This section describes the common ways. The various radii derived here use the notation and dimensions noted above for the Earth as derived from the WGS-84 ellipsoid; namely, A sphere being a gross approximation of the spheroid, which itself is an approximation of the geoid, units are given here in kilometers rather than the millimeter resolution appropriate for geodesy. The International Union of Geodesy and Geophysics (IUGG) defines the mean radius (denoted $R_1$) to be $$R_1 = \frac{2a+b}{3}.$$ For Earth, the mean radius is 6,371.009 kilometers (3,958.761 mi). Earth's authalic ("equal area") radius is the radius of a hypothetical perfect sphere which has the same surface area as the reference ellipsoid. The IUGG denotes the authalic radius as $R_2$. A closed-form solution exists for a spheroid: where $e^2=(a^2-b^2)/a^2$ and $A$ is the surface area of the spheroid. For Earth, the authalic radius is 6,371.0072 kilometers (3,958.7603 mi). Another spherical model is defined by the volumetric radius, which is the radius of a sphere of volume equal to the ellipsoid. The IUGG denotes the volumetric radius as $R_3$: $$R_3 = \sqrt[3]{a^2 b}.$$ For Earth, the volumetric radius equals 6,371.0008 kilometers (3,958.7564 mi). Another mean radius is the rectifying radius, giving a sphere with circumference equal to the perimeter of the ellipse described by any polar cross section of the ellipsoid.
This requires an elliptic integral to find, given the polar and equatorial radii: The rectifying radius is equivalent to the meridional mean, which is defined as the average value of M: For integration limits of [0…π/2], the integrals for rectifying radius and mean radius evaluate to the same result, which, for Earth, amounts to 6,367.4491 kilometers (3,956.5494 mi). The meridional mean is well approximated by the semicubic mean of the two axes: yielding, again, 6,367.4491 km; or less accurately by the quadratic mean of the two axes: about 6,367.454 km; or even just the mean of the two axes: about 6,367.445 kilometers (3,956.547 mi).
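The IUGG mean radius $R_1 = (2a+b)/3$ and the volumetric radius $R_3 = \sqrt[3]{a^2 b}$ follow directly from the WGS-84 semi-axes; a quick check against the values quoted above:

```python
# Mean radii of the Earth from the WGS-84 semi-axes (kilometers)
a, b = 6378.1370, 6356.7523

R1 = (2 * a + b) / 3          # IUGG mean radius
R3 = (a * a * b) ** (1 / 3)   # volumetric radius: sphere of equal volume

print(round(R1, 4))  # 6371.0088 -- the quoted 6,371.009 km
print(round(R3, 4))  # matches the quoted 6,371.0008 km
```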
The orbital speed of a body, generally a planet, a natural satellite, an artificial satellite, or a multiple star, is the speed at which it orbits around the barycenter of a system, usually around a more massive body. It can be used to refer to either the mean orbital speed, i.e., the average speed as it completes an orbit, or the speed at a particular point in its orbit. The orbital speed at any position in the orbit can be computed from the distance to the central body at that position, and the specific orbital energy, which is independent of position: the kinetic energy is the total energy minus the potential energy. In the case of radial motion: The transverse orbital speed is inversely proportional to the distance to the central body because of the law of conservation of angular momentum, or equivalently, Kepler's second law. This states that as a body moves around its orbit during a fixed amount of time, the line from the barycenter to the body sweeps a constant area of the orbital plane, regardless of which part of its orbit the body traces during that period of time. This law is usually stated as "equal areas in equal time." This law implies that the body moves faster near its periapsis than near its apoapsis, because at the smaller distance it needs to trace a greater arc to cover the same area. For orbits with small eccentricity, the length of the orbit is close to that of a circular one, and the mean orbital speed can be approximated either from observations of the orbital period and the semimajor axis of its orbit, or from knowledge of the masses of the two bodies and the semimajor axis: $$v_o \approx \frac{2\pi a}{T} \approx \sqrt{\frac{\mu}{a}},$$ where $v_o\,\!$ is the orbital velocity, $a\,\!$ is the length of the semimajor axis, $T\,\!$ is the orbital period, and $\mu\,\!$ is the standard gravitational parameter. Note that this is only an approximation that holds true when the orbiting body is of considerably lesser mass than the central one, and eccentricity is close to zero.
Taking into account the mass of the orbiting body, $$v_o = \sqrt{\frac{m_2^2 G}{(m_1+m_2)\,r}},$$ where $m_1\,\!$ is now the mass of the body under consideration, $m_2\,\!$ is the mass of the body being orbited, $r\,\!$ is specifically the distance between the two bodies (which is the sum of the distances from each to the center of mass), and $G\,\!$ is the gravitational constant. This is still a simplified version; it doesn't allow for elliptical orbits, but it does at least allow for bodies of similar masses. When one of the masses is almost negligible compared to the other mass, as is the case for Earth and Sun, one can approximate the previous formula to get $$v_o \approx \sqrt{\frac{GM}{r}}$$ or $$v_o \approx \frac{v_e}{\sqrt{2}},$$ where $M$ is the (greater) mass around which this negligible mass or body is orbiting, and $v_e$ is the escape velocity. For an object in an eccentric orbit orbiting a much larger body, the length of the orbit decreases with eccentricity $e\,\!$, and is given by the perimeter of the ellipse. This can be used to obtain a more accurate estimate of the average orbital speed: $$v_o = \frac{2\pi a}{T}\left[1-\frac{e^2}{4}-\frac{3e^4}{64}-\cdots\right].$$ The mean orbital speed decreases with eccentricity.
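The circular-orbit approximation $v_o \approx \sqrt{GM/r}$ can be checked for the Earth-Sun case; the sketch uses the standard solar gravitational parameter and one astronomical unit (SI values):

```python
import math

# Circular-orbit approximation v = sqrt(GM/r) for Earth around the Sun
GM_sun = 1.32712440018e20   # m^3/s^2, standard gravitational parameter
r = 1.495978707e11          # m, one astronomical unit

v = math.sqrt(GM_sun / r)
print(round(v))  # about 29.8 km/s -- Earth's mean orbital speed
```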
Enter the applied force, pin diameter, and plate thickness into the calculator to determine the pin shear stress and bearing area stress.
## Pin Shear Formula
The following formula is used to calculate pin shear.
SS = 4*AF / (pi*D^2)
BS = AF / (t*D)
• Where SS is the pin shear stress (N/mm^2, lbs/in^2)
• BS is the bearing area stress (N/mm^2, lbs/in^2)
• AF is the applied force (N, lbs)
• D is the diameter of the pin (mm, in)
• t is the thickness of the plate (mm, in)
## Pin Shear Definition
What is pin shear?
Pin shear is the average shear stress that a pin will see given an applied force and diameter. Pin shear is used to design safe bearings and other mechanical assemblies.
Typically the pin shear is used with the ultimate strength and factor of safety to determine the proper design setup.
However, since the pins can hold a large amount of shear stress relative to other components, they are rarely the limiting factor.
## Example Problem
How to calculate average pin shear?
First, determine the applied force acting on the pin assembly. In this example, the applied force is measured to be 5000 N.
Next, determine the diameter of the pin. In this example, the pin has a diameter of 35 mm.
Finally, calculate the pin shear using the formula above:
SS = 4*AF / (pi*D^2)
SS = 4*5000 / (3.14159*35^2)
SS = 5.197 N/mm^2
If you want to further calculate the total bearing area stress, determine the thickness of the plate (in this case, 40 mm), and use the formula above:
BS = AF / (t*D)
BS = 5000 / (35*40)
BS = 3.571 N/mm^2
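The worked example can be reproduced with a short script (units follow the example: newtons and millimeters, so stresses come out in N/mm²):

```python
import math

# Pin shear stress and bearing area stress, per the formulas above
def pin_shear(force, diameter):
    return 4 * force / (math.pi * diameter**2)

def bearing_stress(force, diameter, thickness):
    return force / (diameter * thickness)

ss = pin_shear(5000, 35)            # 5000 N on a 35 mm pin
bs = bearing_stress(5000, 35, 40)   # through a 40 mm plate
print(round(ss, 3), round(bs, 3))   # 5.197 3.571
```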
# Exceptional matroids in chain theorems
At the end of November 2017, the Tutte Centenary Retreat was held. 32 researchers gathered in Creswick, Australia to work on problems in three areas where Tutte made seminal contributions. One of those three areas was Matroid Structure Theory: nine of us (Rutger Campbell, Deborah Chun, Tara Fife, Kevin Grace, Dillon Mayhew, James Oxley, Charles Semple, Geoff Whittle, and myself) split into two groups to work on some carefully curated problems in this area. In this post, I'm going to talk about matroids where certain subsets of the ground set appear in circuits and cocircuits of certain sizes — mostly work that originated during this week in Creswick — as well as some related work and open problems in the area.
Rather than getting into any detail of the proofs, my aim with this post is to give an overview of the motivation (from a connectivity-centric point of view), the results, and give some open questions and conjectures on the topic. Essentially, most of the results follow from repeated use of orthogonality: the fact that a circuit and cocircuit of a matroid cannot intersect in a single element.
To start with, let’s consider matroids where every $t$-element subset of the ground set appears in an $\ell$-element circuit and an $\ell$-element cocircuit; for brevity, call these $(t,\ell)$-matroids.
For example, wheels and whirls are (1,3)-matroids: every element in a wheel or whirl appears in a triangle (a 3-element circuit) and a triad (a 3-element cocircuit). Excluding the rank-2 wheel, these matroids are 3-connected, and, due to the triangles and triads, deleting or contracting any single element results in a matroid that is no longer 3-connected. Tutte’s Wheels-and-Whirls Theorem states that these are in fact the only 3-connected matroids with no single-element deletion or contraction preserving 3-connectedness.
More generally, one reason why someone might be interested in $(t,\ell)$-matroids is that they would appear as exceptional matroids in chain theorems (results like the Wheels-and-Whirls theorem). For example, any 4-connected (1,4)-matroid has no single-element deletion or contraction that is 4-connected (due to the 4-element circuits and cocircuits), and any 3-connected (2,4)-matroid has no pair of elements whose deletion or contraction remains 3-connected (here we are allowed only to delete both elements, or contract both elements). These may or may not be the only matroids with this property, but they provide a starting point.
# (2,4)-matroids, a.k.a. spikes
So what can we say about (2,4)-matroids? Joel Miller [Miller2014] showed the following:
Theorem:
Let $M$ be a matroid with $|E(M)| \ge 13$. Then $M$ is a (2,4)-matroid if and only if $M$ is a spike.
One way of defining a spike (useful for the purposes of this post) is as a matroid with a partition into pairs $(X_1,X_2,\ldots,X_t)$, for some $t \ge 3$, such that for all distinct $i,j \in [t]$, $X_i \cup X_j$ is both a circuit and a cocircuit. Note that all “spikes” in this post are what are sometimes referred to as tipless spikes.
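The defining property can be brute-force checked on a small example. The sketch below builds one concrete representable rank-3 tipless spike (columns $e_i$ and $e_i + (1,1,1)$ over the rationals); this particular representation is chosen for illustration only, since spikes in general need not be representable:

```python
import itertools
import numpy as np

# A rank-3 tipless spike represented by the columns e_i and e_i + (1,1,1).
r = 3
c = np.ones(r)
cols, legs = [], []
for i in range(r):
    e = np.zeros(r); e[i] = 1
    cols += [e, e + c]
    legs.append({2 * i, 2 * i + 1})
M = np.column_stack(cols)
E = set(range(2 * r))

def rank(S):
    return np.linalg.matrix_rank(M[:, sorted(S)]) if S else 0

def is_circuit(S):
    # dependent, with every proper subset independent
    return rank(S) == len(S) - 1 and all(rank(S - {e}) == len(S) - 1 for e in S)

def is_cocircuit(S):
    # complement is a hyperplane: a closed set of rank r - 1
    comp = E - S
    return rank(comp) == r - 1 and all(rank(comp | {e}) == r for e in S)

ok = all(is_circuit(legs[i] | legs[j]) and is_cocircuit(legs[i] | legs[j])
         for i, j in itertools.combinations(range(r), 2))
print(ok)  # True: every union of two legs is both a circuit and a cocircuit
```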
Miller also showed that the bound of 13 is tight, and described all matroids with the (2,4)-property when $|E(M)| \le 12$.
As I mentioned earlier, since spikes are (2,4)-matroids, they have no pair of elements whose deletion or contraction remains 3-connected. In fact, Alan Williams [Williams2015] showed that the only 3-connected matroids having this connectivity property, with $|E(M)| \ge 13$, and no triangles or triads, are spikes. So in this case, (2,4)-matroids are the only exceptional matroids appearing in a chain theorem for removing a pair of elements from a 3-connected matroid with no triangles or triads, and retaining 3-connectivity (the caveat being the “no triangles or triads” condition: I’ll touch more on this in the section after next).
# $(t,2t)$-matroids, a.k.a. $t$-spikes
With Rutger Campbell, Deborah Chun, Kevin Grace, and Geoff Whittle [BCCGW2018], we generalised Miller’s result as follows.
Theorem:
For each positive integer $t$, there exists an integer $n_t$ such that if $M$ is a matroid with the $(t,2t)$-property and $|E(M)| \ge n_t$, then $M$ has a partition into pairs such that the union of any $t$ pairs is both a circuit and a cocircuit.
We call a matroid a $t$-spike if it has a partition $\pi$ into pairs such that the union of any $t$ pairs is both a circuit and a cocircuit.
The infinite family of $t$-spikes is a natural class of $(t,\ell)$-matroids to consider: we also showed there are only finitely many $(t,\ell)$-matroids for $\ell < 2t$. Note that spikes are 2-spikes, and it is not hard to show that 1-spikes are matroids obtained by taking direct sums of $U_{1,2}$. $t$-spikes share some well-known properties of spikes: A $t$-spike $M$ with $r$ legs has rank $r$ (where a leg is one of the pairs in the partition $\pi$), and, when $r$ is sufficiently large, $M$ is $(2t-1)$-connected. Moreover, the partition $\pi$ associated with a $t$-spike naturally gives rise to crossing $(2t-1)$-separations (for those familiar with flowers, an appropriate concatenation of $\pi$ is a $(2t-1)$-anemone, following the terminology of [AO2008]).
A $(t+1)$-spike $M_2$ can be obtained from a $t$-spike $M_1$ (with sufficiently many legs), by the following construction. Recall that $M_1'$ is an elementary quotient of $M_1$ if there is some single-element extension $M_1^+$ of $M_1$ by an element $e$ such that $M_1^+/e = M_1'$. First, take an elementary quotient of the $t$-spike $M_1$ such that none of the $2t$-element cocircuits (from the union of $t$ legs) are preserved. That is, extend $M_1$ by an element $e$ in such a way that the extension does not preserve any of the $2t$-element cocircuits, and then contract $e$. We then repeat this process in the dual: this corresponds to taking an elementary lift such that none of the $2t$-element circuits are preserved. The resulting matroid is a $(t+1)$-spike. Note that one option for the quotient is to simply truncate (i.e. take a free extension by $e$, and then contract $e$) but there may be others.
For the purposes of this post, I’ll refer to this operation as an inflation of a $t$-spike. We showed, in [BCCGW2018], that for $t \ge 1$, any $(t+1)$-spike with $r$ legs can be obtained from a $t$-spike with $r$ legs, by an inflation.
Spikes are ubiquitous in matroid theory; perhaps $t$-spikes may also be an interesting family of matroids.
# $(t_1,\ell_1,t_2,\ell_2)$-matroids
Recall that spikes (i.e. (2,4)-matroids) are the only 3-connected triangle-and-triad-free matroids with no pair of elements whose deletion or contraction preserves 3-connectivity, when we restrict our attention to matroids on at least 13 elements. What if we want to remove the “triangle-and-triad-free” condition; what additional structures arise? (*)
Certainly wheels and whirls (i.e. (1,3)-matroids) for one, but this is not all. Another example is any matroid where every pair of elements is in a 4-element circuit, and every element is in a triad.
Say that $M$ is a $(t_1,\ell_1,t_2,\ell_2)$-matroid if every $t_1$-element set is in an $\ell_1$-element circuit, and every $t_2$-element set is in an $\ell_2$-element cocircuit (the $(t,\ell)$-matroids considered earlier have $t_1=t_2$ and $\ell_1=\ell_2$). James Oxley, Simon Pfeil, Charles Semple and Geoff Whittle [OPSW2018] showed the following:
Theorem:
For $k=3$ or $k=4$, a $k$-connected matroid $M$ with $|E(M)| \ge k^2$ is a $(2,4,1,k)$-matroid if and only if $M \cong M(K_{k,n})$ for some $n \ge k$.
So $M(K_{3,n})$ and $M^*(K_{3,n})$ are answers to (*). But there are other structures that arise that don’t fit the $(t_1,\ell_1,t_2,\ell_2)$-matroid framework, which I won’t go into here (for details, see [BWW2018, Conjecture 7.5], a conjectured answer to (*)).
Apart from the [OPSW2018] result and the case where $t_1 = t_2$ and $\ell_1 = \ell_2$, these $(t_1,\ell_1,t_2,\ell_2)$-matroids have had little attention, as far as I know. We conjecture the following in [BCCGW2018]:
Conjecture:
Any sufficiently large $(t_1,2t_1,t_2,2t_2)$-matroid has a partition into pairs such that the union of any $t_1$ pairs is a circuit, and the union of any $t_2$ pairs is a cocircuit.
# $t$-cyclic matroids
If the removing-sets-of-size-$t$-style chain theorems are a bit far-fetched for your taste, I’ll now attempt to return to more traditional single-element deletion/contractions, in a slightly roundabout way.
It seems that obtaining a single-element chain theorem for 4-connectivity in the style of Tutte’s Wheels-and-Whirls Theorem has its difficulties (to put it lightly) — see, for example, [CMO2011] for internally 4-connected binary matroids.
Even if we just consider 4-connected (1,4)-matroids, which we know are matroids with no single-element deletion or contraction that preserves 4-connectedness, this seems like a potentially wild class: it includes $M(K_{4,n})$ and $M^*(K_{4,n})$ for any $n \ge 4$; cycle matroids of grids; and, more generally, the cycle matroid of any 4-connected, 4-regular graph that has no 3-cycles but in which every edge is in a 4-cycle.
Recall the inflation operation, which we used to obtain a $(t+1)$-spike from a $t$-spike. Using essentially the same operation, we see that (1,6)-matroids are at least as wild as (1,4)-matroids. (I say “essentially the same” here because now we require that the elementary quotient/lift does not preserve the $2t$-element circuits/cocircuits corresponding to consecutive elements in the cyclic ordering.) So any horrors from (1,4)-matroids extend to (1,2t)-matroids for integers $t > 2$. I still reserve some small amount of hope for $(1,2t+1)$-matroids, for $t \ge 2$. But, in general, characterising $(1,t)$-matroids seems difficult, so let’s first look at doing something easier.
Wheels and whirls (that is, (1,3)-matroids) also have the property that there is a cyclic ordering on the elements such that every pair of consecutive elements in the ordering is contained in a triangle, and contained in a triad.
We say that a matroid has the cyclic $(t-1,t)$-property if there is a cyclic ordering $\sigma$ of the ground set such that every set of $t-1$ consecutive elements in $\sigma$ is contained in a $t$-element circuit and in a $t$-element cocircuit.
So wheels and whirls have the cyclic (2,3)-property. Note also that swirls and spikes (i.e. 2-spikes) have the cyclic (3,4)-property. In fact, $t$-spikes have the cyclic $(2t-1,2t)$-property.
Together with Deborah Chun, Tara Fife, and Charles Semple, we proved a characterisation of matroids with the cyclic $(t-1,t)$-property [BCFS2018]. Before I state this, let me give some intuition. Essentially, the result shows that one can think of wheels and whirls as the canonical examples of matroids with the cyclic $(t-1,t)$-property when $t$ is odd, and swirls as the canonical examples when $t$ is even, at least with regard to how the 3- or 4-element circuits/cocircuits appear in either case. These matroids have not only an ordering that certifies the cyclic $(t-1,t)$-property, but an ordering with a stronger property: for wheels and whirls, each set of $t$ consecutive elements in the ordering is either a (coindependent) circuit or an (independent) cocircuit, and these alternate; for swirls, the sets of $t$ consecutive elements in the ordering alternate between being both a circuit and a cocircuit, and being independent and coindependent.
We say that a matroid $M$ is $t$-cyclic if there is an ordering $(e_1,e_2,\ldots,e_n)$ of $E(M)$ such that, when $t$ is odd, each set of $t$ consecutive elements $\{e_i,\ldots,e_{i+t-1}\}$ is a (coindependent) circuit when $i$ is odd, and an (independent) cocircuit when $i$ is even; and, when $t$ is even, each set of $t$ consecutive elements $\{e_i,\ldots,e_{i+t-1}\}$ is both a circuit and a cocircuit when $i$ is odd (and is independent and coindependent when $i$ is even). (Indices are interpreted modulo $n$.)
Theorem [BCFS2018]:
Let $M$ be a matroid with the cyclic $(t-1,t)$-property, where $t \ge 3$ and $|E(M)| \ge 6t-10$. Then $M$ is $t$-cyclic.
A $t$-cyclic matroid with rank $r$ has $2r$ elements, and $t$-cyclic matroids have crossing $t$- or $(t-1)$-separations (when $t$ is odd or even respectively) that can be described in terms of flowers. (For those familiar with flowers: when $t$ is odd, these are daisies; when $t$ is even it is possible, depending on the matroid, to have either daisies or anemones.) One interesting thing to observe is the effect of the parity of $t$.
We can use the construction referred to as inflation to obtain $(t+2)$-cyclic matroids from $t$-cyclic matroids. Maybe we can get all $t$-cyclic matroids this way:
Conjecture [BCFS2018]:
Let $M$ be a $t$-cyclic matroid for some $t \ge 2$.
If $t$ is even, then $M$ can be obtained from a spike or a swirl by a sequence of inflations.
If $t$ is odd, then $M$ can be obtained from a wheel or a whirl by a sequence of inflations.
I would be surprised if the odd $t$ case of this conjecture does not hold; I am a bit less confident about the case where $t$ is even.
If you’ve made it this far in the post, the reward is a potentially foolhardy conjecture or two.
As touched on earlier, I think perhaps there is some hope for a “nice” characterisation of $(1,t)$-matroids for odd $t \ge 5$. Here is an optimistic conjecture:
Conjecture:
Let $t$ be an odd integer, with $t \ge 3$. There exists an $n_t$ such that whenever $|E(M)| \ge n_t$, $M$ is $t$-cyclic if and only if $M$ is a $(1,t)$-matroid.
In fact, I’m not even aware of sporadic examples.
Question:
For odd $t$, does there exist a matroid $M$ in which every element is in a $t$-element circuit and a $t$-element cocircuit, but $M$ is not $t$-cyclic?
### Bibliography:
[AO2008] J. Aikin, J. Oxley. The structure of crossing separations in matroids. Adv. in Appl. Math. 41 (2008), 10-26.
[BCCGW2018] N. Brettell, R. Campbell, D. Chun, K. Grace, G. Whittle. On a generalisation of spikes. arXiv:1804.06959.
[BCFS2018] N. Brettell, D. Chun, T. Fife, C. Semple. Matroids with a cyclic arrangement of circuits and cocircuits. arXiv:1806.03625.
[BWW2018] N. Brettell, G. Whittle, A. Williams. N-detachable pairs in 3-connected matroids III: the theorem. arXiv:1804.06588.
[CMO2011] C. Chun, D. Mayhew, J. Oxley. A chain theorem for internally 4-connected binary matroids. J. Combin. Theory Ser. B 101 (2011), 141-189.
[Miller2014] J. Miller. Matroids in which every pair of elements belongs to a 4-circuit and a 4-cocircuit. M.Sc. thesis, Victoria University of Wellington, 2014.
[OPSW2018] J. Oxley, S. Pfeil, C. Semple, G. Whittle. Matroids with many small circuits and cocircuits. Submitted.
[Williams2015] A. Williams. Detachable pairs in 3-connected matroids. Ph.D. thesis, Victoria University of Wellington, 2015.
# The infinite matroid intersection conjecture
Today we’ll return to our examination of infinite matroids. So far we saw why they are defined the way they are and what the known examples look like. Then we examined a very flexible way of building infinite matroids from trees of finite matroids and saw how to use that construction as a tool in topological infinite graph theory.
The aim today is to understand the deepest and most important unproved conjecture about infinite matroids, the infinite matroid intersection conjecture. We won’t be looking at progress towards the conjecture today, just approaching the statement from a number of angles and getting a sense of its connections to various very different-looking problems in infinite combinatorics. I hope that by the end of the post you will be convinced, as I am, that it is a deep and compelling problem. Here it is:
Conjecture (Nash-Williams): Let $M$ and $N$ be (possibly infinite) matroids on the same ground set $E$. Then there are a subset $I$ of $E$ and a partition of $E$ into sets $P$ and $Q$ such that $I$ is independent in both $M$ and $N$, $I \cap P$ spans $P$ in $M$ and $I \cap Q$ spans $Q$ in $N$.
Like a TARDIS, at a first glance this statement seems simple and perhaps a little odd, and its deeper significance is hidden. To get a sense of that significance, we must go on a long journey and see how it materialises within apparently widely separated worlds.
Our journey begins with the observation that finding good infinite versions of theorems about finite combinatorial objects is hard. All too often, the obvious generalisation is either straightforwardly false or else is a simple consequence of the finite version of the theorem, and as such has no new content.
An example of the latter phenomenon is Menger’s Theorem. If $G$ is a graph and $A$ and $B$ are sets then an $A$-$B$ separator in $G$ is defined to be a set $S$ of vertices of $G$ such that there is no path from $A$ to $B$ in $G - S$. Menger’s theorem states that if $G$ is finite then the minimal size of an $A$-$B$ separator in $G$ is the same as the maximal size of a set of disjoint paths from $A$ to $B$ in $G$.
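Before generalising, it may help to see the finite statement verified computationally. The following brute-force sketch (an illustration of mine, not part of the original post) uses a small graph in which every $A$-$B$ path passes through vertex 3 or 4, and checks that the minimum separator size equals the maximum number of disjoint paths.

```python
from itertools import combinations

# A small graph: A = {0,1,2}, B = {5,6,7}; every A-B path goes through 3 or 4.
A, B = {0, 1, 2}, {5, 6, 7}
edges = [(0,3),(1,3),(1,4),(2,4),(3,5),(3,6),(4,6),(4,7)]
V = sorted({v for e in edges for v in e} | A | B)
adj = {v: set() for v in V}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def separates(S):
    """True if S is an A-B separator: no A-B path survives in G - S."""
    frontier = [v for v in A if v not in S]
    seen = set(frontier)
    while frontier:
        v = frontier.pop()
        if v in B:
            return False
        for w in adj[v] - S - seen:
            seen.add(w)
            frontier.append(w)
    return True

min_sep = min(len(S) for k in range(len(V) + 1)
              for S in map(set, combinations(V, k)) if separates(S))

# Enumerate all simple A-B paths, then find a largest vertex-disjoint subfamily.
def paths():
    stack = [[a] for a in A]
    while stack:
        p = stack.pop()
        if p[-1] in B:
            yield p
            continue
        for w in adj[p[-1]]:
            if w not in p and w not in A:
                stack.append(p + [w])

all_paths = list(paths())
max_disjoint = max(
    k for k in range(len(all_paths) + 1)
    for fam in combinations(all_paths, k)
    if all(set(p).isdisjoint(q) for p, q in combinations(fam, 2)))

assert min_sep == max_disjoint == 2   # Menger's theorem on this graph
```

Here $\{3,4\}$ is a minimum separator and $0\text{-}3\text{-}5$, $2\text{-}4\text{-}7$ form a maximum family of disjoint paths, so both quantities are 2.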
The obvious way to generalise this statement to infinite graphs would be to simply replace the word ‘size’ with the word ‘cardinality’ in both places where it appears. However, the statement obtained in this way has no more content than the finite version of the theorem. We can see this by considering an $A$-$B$ separator $S$ of minimal cardinality.
If $S$ is infinite, then any set of fewer than $|S|$ paths from $A$ to $B$ uses fewer than $|S|$ vertices, and so cannot be maximal. So in that case the statement is clear, and we can suppose instead that $|S|$ is some natural number $n$. Now for each $m \leq n$ we can easily build a finite subgraph $G_m$ of $G$ in which any $A$-$B$ separator has size at least $m$: we may take $G_0$ to be empty, and build $G_{m+1}$ from $G_m$ by adding, for each set $X$ of $m$ vertices of $G_m$, a path $P_X$ of $G$ from $A$ to $B$ that avoids $X$ (such a path exists because $X$ is too small to be an $A$-$B$ separator in $G$). Then by Menger’s theorem $G_n$ already contains $n$ disjoint paths from $A$ to $B$.
It was Paul Erdős who saw how to get a much deeper infinite generalisation by first reformulating Menger’s theorem as a structural statement. Suppose that we consider an $A$-$B$ separator $S$ of minimal size and a set $\cal P$ of disjoint paths from $A$ to $B$ of maximal size. Then each path in $\cal P$ contains at least one vertex in $S$, and these vertices must all be distinct since the paths are disjoint. But by Menger’s theorem there can only be as many paths in $\cal P$ as there are vertices in $S$. So $S$ must consist of one vertex on each path in $\cal P$.
So it follows from Menger’s theorem that in a finite graph $G$ we can always find a set $\cal P$ of disjoint $A$-$B$ paths together with an $A$-$B$ separator $S$ consisting of one vertex from each path in $\cal P$. On the other hand, this structural statement also implies Menger’s theorem. After all, if ${\cal P}'$ is a set of disjoint paths from $A$ to $B$ of maximal size and $S'$ is an $A$-$B$ separator of minimal size then $|S'| \leq |S| = |{\cal P}| \leq |{\cal P}'|$. But also $|{\cal P}'| \leq |S'|$ since each path in ${\cal P}'$ must contain a different point of $S'$. So $|{\cal P}'| = |S'|$, as desired.
Erdős’ generalisation of Menger’s theorem is therefore the following structural statement:
Theorem (Aharoni and Berger): Let $G$ be a (possibly infinite) graph and let $A$ and $B$ be sets. Then there is a set ${\cal P}$ of disjoint $A$-$B$ paths together with an $A$-$B$ separator $S$ consisting of one vertex from each path in ${\cal P}$.
This statement contains some serious content about the structure of infinite graphs, and it remained open for almost half a century before finally being proved by Aharoni and Berger in 2009 [AB09]. Their proof remains one of the deepest ever given in infinite combinatorics.
Another example of the difficulties of generalisation from finite to infinite objects is given by the tree packing and covering theorems. The tree covering theorem states that a connected graph $G$ is a union of $k$ spanning trees if and only if for any set $X$ of vertices of $G$ the induced subgraph $G[X]$ has at most $k(|X|-1)$ edges, and the tree packing theorem states that a connected graph $G$ includes $k$ edge-disjoint spanning trees if and only if for any partition $P$ of the vertex set of $G$, the quotient graph $G/P$ has at least $k(|P|-1)$ edges. Here $G/P$ is the graph whose vertices are the partition classes and whose edges are those of $G$ which go between partition classes, with endpoints the partition classes which they join.
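As a quick finite illustration (my own example, taking $G = K_4$ and $k = 2$), both sides of the tree packing theorem can be checked by brute force on a small graph:

```python
from itertools import combinations

# K4: check the tree packing theorem for k = 2 by brute force.
V = [0, 1, 2, 3]
E = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]
k = 2

def components(vs, edges):
    """Number of connected components of the graph (vs, edges)."""
    parent = {v: v for v in vs}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    n = len(vs)
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
            n -= 1
    return n

def is_spanning_tree(T):
    return len(T) == len(V) - 1 and components(V, T) == 1

def partitions(vs):
    """All partitions of the list vs into nonempty blocks."""
    if not vs:
        yield []
        return
    head, rest = vs[0], vs[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [head]] + p[i+1:]
        yield p + [[head]]

# Left side: do k edge-disjoint spanning trees exist?
packing = any(is_spanning_tree(T1) and is_spanning_tree(T2)
              for T1 in combinations(E, 3)
              for T2 in combinations([e for e in E if e not in T1], 3))

# Right side: every partition P leaves G/P with at least k(|P|-1) edges.
def cross_edges(P):
    cls = {v: i for i, blk in enumerate(P) for v in blk}
    return sum(1 for u, w in E if cls[u] != cls[w])

criterion = all(cross_edges(P) >= k * (len(P) - 1) for P in partitions(V))

assert packing == criterion == True
```

For $K_4$ both sides hold: for instance $\{01,12,23\}$ and $\{02,03,13\}$ are edge-disjoint spanning trees, and the partition into singletons leaves exactly $6 = 2 \cdot 3$ cross edges.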
Once more, the obvious generalisation of the tree covering theorem to infinite graphs has no more content than the finite version of the theorem; it can be proved from it by a straightforward compactness argument. On the other hand the obvious generalisation of the tree packing theorem to infinite graphs is false; there is a counterexample due to Aharoni and Thomassen [AT89]. And once more, to find the correct infinite version of the theorems we must begin by finding a structural version in the finite case. Indeed, it turns out that the tree packing and covering theorems have a unifying structural generalisation:
Theorem ([D17, Theorem 2.4.4]): Let $G$ be a connected finite graph and $k$ a natural number. Then there is a partition $P$ of the vertex set of $G$ such that $G/P$ is a union of $k$ spanning trees and $G[X]$ is connected and has $k$ edge-disjoint spanning trees for each partition class $X$ of $P$.
This tree packing/covering theorem implies both the tree packing theorem and the tree covering theorem. For tree packing, the necessity of the condition is clear, so it suffices to prove sufficiency. We can do this by applying the condition to the partition $P$ given by the tree packing/covering theorem. This gives that $G/P$ has at least $k(|P|-1)$ edges. Since it is a union of $k$ spanning trees, those trees must be edge disjoint. Combining these with the edge-disjoint spanning trees in each $G[X]$ gives $k$ edge-disjoint spanning trees in $G$. The derivation of the tree covering theorem from the packing/covering theorem is similar.
This gives us a nontrivial common generalisation of the tree packing and covering theorems to infinite graphs: we can simply omit the word ‘finite’ from the tree packing/covering theorem. The proof of this generalisation, though much simpler than that for the infinite version of Menger’s theorem, goes beyond the scope of this post.
We have now seen two examples where, to find the correct infinite generalisation of a theorem about finite graphs, it was necessary to first reformulate the finite theorem as a structural result. The same is true for theorems about finite matroids, but in this case something remarkable happens. The infinite structural statement you get is usually just the infinite matroid intersection conjecture!
This is not too surprising for the matroid intersection theorem, since Nash-Williams formulated the intersection conjecture to be an infinite structural generalisation of that statement. Recall that the matroid intersection theorem states that the largest size of a common independent set of two matroids $M$ and $N$ on the same ground set $E$ is the same as the minimum value over all partitions of $E$ into sets $P$ and $Q$ of $r_M(P) + r_N(Q)$. The inequality one way around is clear, since if $I$ is independent in both $M$ and $N$ and $\{P, Q\}$ is a partition of $E$ then $|I| = |I \cap P| + |I \cap Q| \leq r_M(P) + r_N(Q)$. For this inequality to be an equality, we must have that $I \cap P$ spans $P$ in $M$ and $I \cap Q$ spans $Q$ in $N$, just as in the conjecture.
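As a toy verification of the finite min-max (again my own illustration, not from the post): take two partition matroids on the edge set of a small bipartite graph, so that common independent sets are exactly matchings. Brute force confirms that the two sides of the matroid intersection theorem agree.

```python
from itertools import combinations

# Ground set: edges of a small bipartite graph, as (left, right) pairs.
E = [(0,0),(0,1),(1,1),(2,1),(2,2)]

# Two partition matroids: a set of edges is independent in M (resp. N)
# if no two edges share a left (resp. right) endpoint.
def r_M(X): return len({l for l, _ in X})   # rank = distinct left endpoints
def r_N(X): return len({r for _, r in X})   # rank = distinct right endpoints

def max_common_independent():
    """Largest set independent in both matroids (i.e. a maximum matching)."""
    return max(len(I) for k in range(len(E) + 1)
               for I in combinations(E, k)
               if r_M(I) == len(I) == r_N(I))

def min_partition_bound():
    """Minimum of r_M(P) + r_N(Q) over all bipartitions (P, Q) of E."""
    best = None
    for mask in range(2 ** len(E)):
        P = [e for i, e in enumerate(E) if mask >> i & 1]
        Q = [e for i, e in enumerate(E) if not mask >> i & 1]
        val = r_M(P) + r_N(Q)
        best = val if best is None else min(best, val)
    return best

assert max_common_independent() == min_partition_bound() == 3
```

The matching $(0,0),(1,1),(2,2)$ is common independent of size 3, and the trivial partition $P = E$, $Q = \emptyset$ gives $r_M(P) + r_N(Q) = 3$, so both sides equal 3.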
There are some less expected cases. Let’s consider Tutte’s linking theorem, the closest matroidal analogue of Menger’s theorem. Let $M$ be a finite matroid with ground set $E$, and let $A$ and $B$ be disjoint subsets of $E$. Let $E’ := E \setminus (A \cup B)$. Then the connectivity $\lambda_M(A, B)$ from $A$ to $B$ in $M$ is defined to be the minimal value of $\kappa_M(A \cup P)$ over all bipartitions of $E’$ into sets $P$ and $Q$. Here $\kappa_M$ is the connectivity function of $M$, given by $\kappa_M(X) := r_M(X) + r_M(E \setminus X) - r(M)$. The linking theorem states that the maximum value of $\kappa_N(A)$ over all minors $N$ of $M$ with ground set $A \cup B$ is $\lambda_M(A,B)$.
It turns out that there is a structural analogue of this statement. Each such minor $N$ must have the form $M/I\backslash J$, where $I$ and $J$ form a partition of $E’$. By moving loops of $M/I$ into $J$ if necessary, we may suppose that $I$ is independent. We may now calculate as follows:
$\kappa_{M/I \setminus J}(A) = (r(A \cup I) - |I|) + (r(B \cup I) - |I|) - (r(M) - |I|) \\= (r(A \cup I) - |Q \cap I|) + (r(B \cup I) - |P \cap I|) - r(M) \\ \leq r(A \cup (I \cap P)) + r(B \cup (I \cap Q)) - r(M) \\ \leq r(A \cup P) + r(B \cup Q) - r(M) \\ = \kappa_M(A \cup P)$
So equality of the left and right sides is equivalent to the statement that each inequality in the above calculation is an equality, giving the following four conditions:
1. $I \cap P$ spans $P$ in $M/A$
2. $I \cap Q$ spans $Q$ in $M/B$
3. $I \cap P$ is independent in $M/(B \cup (I \cap Q))$
4. $I \cap Q$ is independent in $M/(A \cup (I \cap P))$
The outlines of our TARDIS are beginning to materialise. Indeed, consider a minimal set $I$ satisfying these conditions. By minimality, $I \cap P$ will be independent in $M/A$ and $I \cap Q$ will be independent in $M/B$. Thus $I$ itself will be independent in both matroids. To put it another way, $I$, $P$ and $Q$ will witness that $M/A \backslash B$ and $M \backslash A/B$ satisfy the matroid intersection conjecture.
Thus the infinite generalisation of Tutte’s linking theorem is the statement that, for any (possibly infinite) matroid $M$ and any disjoint sets $A$ and $B$ of elements of $M$, the matroids $M/A\backslash B$ and $M \backslash A/B$ satisfy the infinite matroid intersection conjecture. Given this connection, it should not be too surprising that Aharoni and Berger’s infinite generalisation of Menger’s theorem follows from the infinite matroid intersection conjecture. Precise details of the derivation can be found in [ACF18].
What about the tree packing and covering theorems? Their matroidal analogues are the base packing and covering theorems, which in their full generality apply to a list $M_1, M_2, \ldots M_k$ of finite matroids on a common ground set $E$. A base packing for such a list is a collection of disjoint bases, one from each $M_i$. A base covering for such a list is a collection of bases, one from each $M_i$, whose union is the whole of $E$. The base packing theorem states that there is a base packing precisely when for any subset $Q$ of $E$ we have $\sum_{i = 1}^k r(M_i.Q) \leq |Q|$, and the base covering theorem states that there is a base covering precisely when for any subset $P$ of $E$ we have $\sum_{i = 1}^k r(M_i | P) \geq |P|$.
Once more we can combine these statements into a unified structural statement, the base packing/covering theorem, which states that given such a list of finite matroids on $E$ we can find a bipartition of $E$ into sets $P$ and $Q$ such that the matroids $M_1 | P, \ldots M_k | P$ have a packing and the matroids $M_1.Q, \ldots M_k.Q$ have a covering. The derivations of the base packing and covering theorems from this statement are analogous to the derivation of the tree packing theorem from the tree packing/covering theorem above. So the infinite version of the base packing and covering theorems is given by the same statement applied to a family of infinite matroids. We shall call this the base packing/covering conjecture.
Let’s consider the special case $k = 2$ in more detail. The existence of a packing for $M_1 | P$ and $M_2 | P$ is equivalent to the existence of a subset $I_P$ of $P$ such that $I_P$ spans $P$ in $M_1$ and $P \setminus I_P$ spans $P$ in $M_2$. Similarly the existence of a covering for $M_1.Q$ and $M_2.Q$ is equivalent to the existence of a subset $I_Q$ of $Q$ such that $I_Q$ is independent in $M_1/P$ and $Q \setminus I_Q$ is independent in $M_2/P$. Since a set is independent in a matroid precisely when its complement is spanning in the dual matroid, we can rephrase these conditions as follows:
1. $I_P$ spans $P$ in $M_1$
2. $I_Q$ spans $Q$ in $M_2^*$
3. $I_P$ is independent in $M_2^*/Q$
4. $I_Q$ is independent in $M_1/P$
Once again, as if from nowhere, the TARDIS appears. If we choose $I_P$ and $I_Q$ minimal subject to conditions (i) and (ii) then they will still satisfy conditions (iii) and (iv), which will guarantee that $I:=I_P \cup I_Q$ is independent in both $M_1$ and $M_2^*$, meaning that $I$, $P$ and $Q$ witness that $M_1$ and $M_2^*$ satisfy the infinite matroid intersection conjecture.
The TARDIS not only appears in unexpected places, it is also bigger on the inside than it seems. For example, the remarks in the last couple of paragraphs only apply to pairs of matroids, that is, to lists of length 2. But in fact it is possible to derive the full base packing/covering conjecture from the special case of pairs, and hence from the infinite matroid intersection conjecture. We will see the reasons for this when we look at the structure of the conjecture more closely in the next post in the series. For now we just note the consequence that the tree packing/covering theorem mentioned earlier also follows from the infinite matroid intersection conjecture.
We have seen how the infinite matroid intersection conjecture arises naturally as the infinite structural analogue of the matroid intersection theorem, the linking theorem, and the base packing and covering theorems. The same also holds for the matroid union theorem, which we do not have space to discuss here [BC15]. Thus the process of finding an infinite generalisation of all these statements reveals their unified structural heart. In the next post we will examine that structural heart more closely, looking at just what sort of structure the conjecture gives us, and we will survey the special cases for which the conjecture is already known.
Bibliography:
[AB09] R. Aharoni and E. Berger, Menger’s Theorem for Infinite Graphs, Inventiones mathematicae 176(1):1–62 (2009).
[ACF18] E. Aigner-Horev, J. Carmesin and J.-O. Fröhlich, On the Intersection of Infinite Matroids, Discrete Mathematics 341(6):1582-1596 (2018).
[AT89] R. Aharoni and C. Thomassen, Infinite, highly connected digraphs with no two arc-disjoint spanning trees. J. Graph Theory, 13:71–74 (1989).
[BC15] N. Bowler and J. Carmesin, Matroid Intersection, Base Packing and Base Covering for Infinite Matroids, Combinatorica 35(2):153-180 (2015).
[D17] R. Diestel, Graph Theory, 5th edition, Springer-Verlag (2017).
## Abstract
This paper presents a robust method for controlling the terrestrial motion of a bimodal multirotor vehicle that can roll and fly. Factors influencing the mobility and controllability of the vehicle are explored and compared to strictly flying multirotor vehicles; the differences motivate novel control and control allocation strategies that leverage the non-standard configuration of the bimodal design. A fifth-order dynamic model of the vehicle subject to kinematic rolling constraints is the basis for a nonlinear, multi-input, multi-output, sliding mode controller. Constrained optimization techniques are used to develop a novel control allocation strategy that minimizes power consumption while rolling. Simulations of the vehicle under closed-loop control are presented. A functional hardware embodiment of the vehicle is constructed onto which the controllers and control allocation algorithm are deployed. Experimental data of the vehicle under closed-loop control demonstrate good performance and robustness to parameter uncertainty. Data collected also demonstrate that the control allocation algorithm correctly determines a thrust-minimizing solution in real-time.
## 1 Introduction
Bimodal vehicles continue to draw the attention of researchers because of their potential for autonomous operations in unknown environments [1–5]. Bimodal vehicles can adapt to their environment by combining complementary locomotion mechanisms in a single vehicle or by altering the operation of locomotion mechanisms in accordance with the terrain. Additionally, bimodality promises superior efficiency, as different locomotion mechanisms often present a tradeoff in energetic cost. Aerial bimodal vehicles have been the focus of research efforts due to their unmatched mobility in uncertain and changing environments. As a result, several classes of aerial bimodal vehicles have emerged, including rolling–flying [6–10], walking–flying [11,12], and swimming–flying [13–15]. Each of these classes utilizes various mechanisms for propulsion and lift.
Figure 1 illustrates an embodiment of a rolling–flying vehicle (RFV) that utilizes rotary wings for both rolling locomotion and lift generation. This class of vehicles was first conceived by Ref. [6] when they created a Hybrid Terrestrial/Aerial Quadrotor (HyTAQ), consisting of a quadrotor suspended within a single rolling cage that may roll along the ground and fly. A successful commercial embodiment of this vehicle architecture is the Parrot Rolling Spider (Parrot Drones sas). Here, rolling is accomplished via two independent, passive wheels on either side of the vehicle. The pitch angle of the rotor plane is independent of the wheels’ angular orientation so that the wheels are freely towed by the rotors. This configuration combines the efficiency of rolling locomotion with the mobility and maneuverability of rotary-wing flight; the energetic efficiency of the RFV compared to a conventional flying multirotor vehicle (CFMV) has been established by Ref. [16].
Fig. 1
For example, the cost-of-transport (COT) of the RFV is less than 15% of the COT of a CFMV of comparable mass at low velocities (<5 m/s), while the operating times of the RFV are an order of magnitude higher.
As with nearly any exploratory mobile robot, closed-loop control of the RFV’s heading is necessary for basic operation. In contrast to other mobile robots, however, control of the pitch angle is critical to realizing the energetic benefits of the RFV. CFMVs and their control have been studied extensively [17–20], along with effective methods for control allocation [21–23]. However, the methods and assumptions used to develop controllers for CFMVs are not valid for an RFV, despite the RFV utilizing the same multirotor mechanism for both rolling and flying. Atay et al. [24] reveal that the dynamic model, flat outputs, and exogenous forces associated with the RFV’s rolling mode differ substantially from those of a CFMV due to the presence of kinematic constraints on the RFV. Furthermore, the RFV’s pitch angle range is considerably greater than that of a CFMV, invalidating any small-angle assumptions based on near-horizontal nominal operating conditions. Such assumptions are common in the CFMV literature as they permit significant simplification of the dynamic models used to design control systems [17,18].
Furthermore, the ability to roll permits the RFV to operate with small rotor thrusts compared to a CFMV because the RFV’s weight is partially supported by the ground when rolling. While this results in reduced power consumption, this also precludes the use of traditional control allocation methods which typically do not constrain the rotor thrusts to be positive. Because the RFV body forces are so small, synthesizing even modest body torques causes traditional control allocation methods to prescribe negative thrusts. However, negative thrusts cannot be efficiently produced by standard rotors, which are designed to always rotate in a single direction. Producing negative thrusts requires either variable pitch propellers, such as those used by Ref. [10], or symmetric (and therefore less efficient) propellers paired with motor drivers capable of reversing direction. Therefore, to avoid prescribing negative thrusts, the RFV requires control allocation methods that cope with actuator constraints and that can be executed rapidly on a real-time microprocessor. As will be shown, constrained control allocation is critical to realizing the energy efficiency of the RFV’s rolling mode of locomotion due to intermittent coupling that exists between the RFV pitch angle and forward velocity.
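To illustrate the flavour of the problem (a simplified sketch of my own, not the paper's algorithm): commanding roll and pitch torques with four rotors on moment arms of length $d$, while requiring nonnegative thrusts and minimizing total thrust, is a small linear program. Since an optimum lies at a basic feasible solution, a toy allocator can simply enumerate the pairs of potentially active rotors.

```python
from itertools import combinations

# Toy planar allocation: rotors 0..3 in an X layout with moment arm d.
# Roll torque  = d*(T0 - T1 - T2 + T3)
# Pitch torque = d*(T0 + T1 - T2 - T3)
def allocate(roll, pitch, d=0.1):
    """Minimize T0+T1+T2+T3 s.t. both torque equations hold and T >= 0,
    by enumerating the basic feasible solutions of this small LP."""
    A = [[d, -d, -d, d],     # roll row
         [d, d, -d, -d]]     # pitch row
    b = [roll, pitch]
    best = None
    for i, j in combinations(range(4), 2):
        det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
        if abs(det) < 1e-12:
            continue          # parallel columns: no unique 2x2 solution
        ti = (b[0] * A[1][j] - A[0][j] * b[1]) / det
        tj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
        if ti >= -1e-9 and tj >= -1e-9:
            T = [0.0] * 4
            T[i], T[j] = max(ti, 0.0), max(tj, 0.0)
            if best is None or sum(T) < sum(best):
                best = T
    return best               # None if the command is infeasible with T >= 0

T = allocate(roll=0.02, pitch=0.04)
```

For this command the allocator returns $T = (0.3, 0.1, 0, 0)$, whereas an unconstrained pseudoinverse allocation gives $T = (0.15, 0.05, -0.15, -0.05)$, with negative entries that a fixed-pitch rotor cannot produce; this is exactly the failure mode described above. (The paper's actual method handles the full force/torque map and proves minimality; this sketch only conveys the idea.)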
Kalantari and Spenko [6] invented and patented HyTAQ, consisting of a quadrotor suspended within a single rolling cage. The RFV presented here is one of the many embodiments covered by their patent [25]; specifically, the RFV uses two independent, passively rotating wheels rather than a single monolithic cage. In Ref. [6], a model of a single-wheeled vehicle is developed, though the model is not used to design a control system. Takahashi et al. [7] developed the All-Round Two-Wheeled unmanned aerial vehicle (UAV), which contains hemispherical wheels that envelop a UAV suspended along the wheels’ axle. They design a position tracking controller based on a simplified dynamic model wherein the rotational dynamics and nonholonomic constraints are ignored. Mizutani et al. [26] develop a spherical shell that permits a quadrotor placed within it to roll. Because an off-the-shelf quadrotor is used, no control system or control allocation design is attempted. Takahashi et al. [7] do not address control allocation, while Ref. [6] uses an open-source multirotor embedded system (Arduino Arducopter) to control their vehicle, which relies on proportional-integral-derivative (PID) controllers and traditional unconstrained control allocation methods that are suboptimal for the RFV.
This paper builds on the work of Refs. [6] and [7] by developing model-based control systems and a control allocation strategy for the RFV of Fig. 1. The controller design is based upon the dynamic model developed by Ref. [24], which takes into consideration the various constraints imposed upon the vehicle while in contact with the ground, and does not make any small-angle assumptions. The implications of the constraints as they relate to differential flatness, controllability, and control allocation are explored. This reveals limitations in the extent to which the system outputs can be decoupled and independently controlled. RFV and CFMV characteristics that influence control system development are discussed, and key differences are identified. Consideration of these differences motivates the development of novel closed-loop control systems for the RFV. A model-based, nonlinear, multi-input, multi-output (MIMO), sliding mode controller is designed. The controller’s robustness to bounded parameter uncertainties is proven using Lyapunov’s direct method. A novel constrained control allocation method is developed which determines the required rotor thrusts that will synthesize the desired input forces and moments while minimizing the total thrust produced and ensuring that all thrusts are positive. The solution is proved to be minimal. A hardware embodiment of the RFV is developed, including a custom real-time controller, permitting novel control and control allocation methods to be deployed and tested. The mechanical, electrical, and software subsystems of the RFV prototype are briefly discussed, and empirical results from the hardware are presented alongside simulation results.
This paper is organized as follows: First, the dynamic model of an RFV is briefly presented. Second, the RFV and CFMVs are compared in the context of control system development. Third, a procedure for designing closed-loop controllers for the RFV’s flat outputs is presented. Fourth, a method for control allocation is developed. Fifth, a hardware embodiment of the RFV is described. Sixth, simulation and experimental results are presented. Lastly, the results are discussed along with opportunities for future work.
## 2 Methods
### 2.1 Dynamic Model of the Rolling–Flying Vehicle.
Figure 2 shows a schematic illustration of the RFV that identifies the generalized coordinates, constraints, impressed forces and moments, and various frames of reference which define the model. Point b is located at the center of the RFV body B, midway between two wheels W1 and W2, and is the origin of body reference frame $B$, which is rigidly fixed in B. Point i is located at the origin of an inertially fixed reference frame $I$. Referring to Fig. 2(c), an intermediate reference frame $F$ is defined with respect to $I$, the origin of which is collocated with the origin of frame $B$ at point b. As a result, the $\hat{x}$-axes of $F$ and $B$ are parallel, and $F$ is free to rotate about the $\hat{x}$-axis of $B$. Wheels W1 and W2 contact the ground at points c1 and c2, respectively, and are free to rotate about the $\hat{x}$-axis of $B$. Referring to Fig. 2(b), the control inputs to the system are a force $F_z^B$ in the $B$ frame $\hat{z}$-direction and three moments $M_x^B$, $M_y^B$, and $M_z^B$ resolved in the $B$ frame $\hat{x}$-, $\hat{y}$-, and $\hat{z}$-directions, respectively.
Fig. 2
As this paper is concerned with the terrestrial motion of the RFV, the model assumes that the vehicle is always in contact with the ground. Specifically, we assume that the vehicle operates upon a horizontal plane, the wheels do not slip, and the wheels never lose contact with the horizontal plane. The RFV is modeled using the method of Euler–Lagrange subject to nonholonomic constraints, and a complete derivation of the model is provided in Ref. [24]. Although the RFV is characterized by six generalized coordinates, which appear in Fig. 2(a) (x, y, σ, α, θ1, and θ2), the wheel constraints reduce the RFV to a three-degree-of-freedom (DOF) system for which the equations of motion are
$\ddot{y}^F = \dfrac{F_y^F - D_y + \frac{m_B h}{I_x}\left(\frac{1}{2} m_B g h \sin(2\alpha) - (D_\alpha - M_x^F)\cos\alpha - \left(I_x(\dot{\alpha}^2 + \dot{\sigma}^2) + (I_z - I_y)\dot{\sigma}^2\cos^2\alpha\right)\sin\alpha\right)}{\frac{2 I_{xw}}{R^2} + m_B + 2 m_W}$
(1)
$\ddot{\sigma} = \dfrac{M_z^F + \dot{\alpha}\dot{\sigma}(I_z - I_y)\sin(2\alpha) - D_\sigma + m_B h \dot{\sigma}\dot{y}^F\sin\alpha}{2\left(\frac{L^2 I_{xw}}{R^2} + I_{yw} + m_W L^2\right) + I_y\sin^2\alpha + I_z\cos^2\alpha}$
(2)
$\ddot{\alpha} = \dfrac{M_x^F - \frac{1}{2}(I_z - I_y)\dot{\sigma}^2\sin(2\alpha) - D_\alpha + m_B g h\sin\alpha}{I_x} - \dfrac{\frac{m_B h}{I_x}\left(\frac{1}{2} m_B h(\dot{\alpha}^2 + \dot{\sigma}^2)\sin(2\alpha) + (F_y^F - D_y)\cos\alpha\right)}{\frac{2 I_{xw}}{R^2} + m_B + 2 m_W}$
(3)
where $\dot{y}^F$ and $\ddot{y}^F$ are the velocity and acceleration, respectively, of point b resolved in the $F$ frame, σ and α are Euler angles specifying the yaw angle and pitch angle, respectively, of the $B$ frame with respect to the $I$ frame, and $D_y$, $D_\sigma$, and $D_\alpha$ are nonconservative dissipation terms that are functions of rolling resistance, viscous damping, and aerodynamic drag. The input force and moments are rotated from the $B$ frame into the $F$ frame so that
$F_y^F = -F_z^B\sin\alpha,\quad M_x^F = M_x^B,\quad M_y^F = M_y^B\cos\alpha - M_z^B\sin\alpha,\quad M_z^F = M_y^B\sin\alpha + M_z^B\cos\alpha$
(4)
The model parameters are gravitational acceleration g, wheel radius R, distance from point b to the center of each wheel L, body mass $m_B$, body principal moments of inertia $I_x$, $I_y$, and $I_z$, wheel mass $m_W$, wheel principal moments of inertia $I_{xw}$ and $I_{yw}$, and the distance h from point b to the center of mass G (which is fixed in $B$), measured in the $B$ frame $\hat{z}$-direction.
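To make the model concrete, Eqs. (1)–(3) can be coded directly. The sketch below is illustrative only: the parameter values are hypothetical placeholders rather than the prototype’s, and the dissipation terms are passed in as lumped constants.

```python
import math

# Hypothetical parameter values for illustration (not the authors' hardware):
g, R, L, h = 9.81, 0.15, 0.12, 0.03
mB, mW = 1.2, 0.15
Ix, Iy, Iz = 0.010, 0.012, 0.015
Ixw, Iyw = 0.002, 0.004

def rfv_accelerations(state, u, D=(0.0, 0.0, 0.0)):
    """Right-hand sides of Eqs. (1)-(3).

    state = (alpha, alpha_dot, sigma_dot, yF_dot), u = (FyF, MxF, MzF),
    and D = (Dy, Dsigma, Dalpha) lumps the dissipation terms.
    Returns (yF_ddot, sigma_ddot, alpha_ddot).
    """
    a, ad, sd, yd = state
    FyF, MxF, MzF = u
    Dy, Ds, Da = D
    m_eff = 2 * Ixw / R**2 + mB + 2 * mW  # shared denominator of Eqs. (1), (3)
    # Eq. (1): forward acceleration resolved in the F frame
    ydd = (FyF - Dy + (mB * h / Ix) * (0.5 * mB * g * h * math.sin(2 * a)
           - (Da - MxF) * math.cos(a)
           - (Ix * (ad**2 + sd**2) + (Iz - Iy) * sd**2 * math.cos(a)**2)
           * math.sin(a))) / m_eff
    # Eq. (2): yaw acceleration
    sdd = (MzF + ad * sd * (Iz - Iy) * math.sin(2 * a) - Ds
           + mB * h * sd * yd * math.sin(a)) / (
           2 * (L**2 * Ixw / R**2 + Iyw + mW * L**2)
           + Iy * math.sin(a)**2 + Iz * math.cos(a)**2)
    # Eq. (3): pitch acceleration
    add = (MxF - 0.5 * (Iz - Iy) * sd**2 * math.sin(2 * a) - Da
           + mB * g * h * math.sin(a)) / Ix \
          - (mB * h / Ix) * (0.5 * mB * h * (ad**2 + sd**2) * math.sin(2 * a)
                             + (FyF - Dy) * math.cos(a)) / m_eff
    return ydd, sdd, add
```

At rest with zero inputs all accelerations vanish, and at α = π/2 the pitch moment $M_x^F = -m_B g h$ balances the gravity torque, as expected from Eq. (3).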
### 2.2 Comparison of the Rolling–Flying Vehicle and CFMV.
The RFV described in this paper differs substantially from a CFMV, and these differences warrant new approaches to control and control allocation. First, the constraints imposed upon the RFV and CFMV differ. Several constraints arise where the RFV’s wheels contact the ground. This contrasts with a CFMV, which is subject to no constraints; the six degrees-of-freedom that describe a CFMV (three position coordinates and three Euler angles) are independent of one another. Constraints determine which subset of generalized coordinates can be independently prescribed and therefore controlled.
To see this, consider the CFMV shown in Fig. 3(a), which can produce a positive horizontal force $F_y$ by operating at a pitch angle α < 0, where $F_y$ is the horizontal component of the net thrust T. To maintain a constant altitude in the $\hat{z}$-direction, the vertical component of thrust must always exactly counteract the weight of the vehicle W. This has the effect of coupling α and $F_y$. Since $F_y$ is the control for the forward velocity $\dot{y}$, there exists a coupling between α and $\dot{y}$. As a result, one cannot specify α independently of $\dot{y}$.
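The altitude-hold coupling amounts to two lines of trigonometry: if the vertical thrust component must cancel the weight, the achievable horizontal force is fixed by the pitch angle. A minimal sketch, with signs chosen so that α < 0 yields a positive horizontal force as in Fig. 3(a):

```python
import math

def cfmv_horizontal_force(W, alpha):
    """Horizontal force available to a CFMV holding constant altitude.

    Altitude hold requires T*cos(alpha) = W, so T = W / cos(alpha) and the
    horizontal component is Fy = -W*tan(alpha): specifying Fy fixes alpha,
    and vice versa. (Illustrative sketch only.)
    """
    T = W / math.cos(alpha)       # net thrust required to hold altitude
    return -T * math.sin(alpha)   # horizontal component, -W*tan(alpha)
```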
Fig. 3
This intuitive explanation can be rigorously demonstrated using the notion of differential flatness. If a dynamic system’s states and inputs can be written as functions of the system’s outputs and their derivatives, then the system is said to be differentially flat and the outputs are referred to as flat outputs [27]. A consequence of differential flatness is that any trajectory specified in terms of the flat outputs can be directly mapped to the inputs and thus used to specify feasible motion trajectories. Several researchers [28,29] demonstrate that the flat outputs of a CFMV are $\dot{x}$ (velocity normal to the page), $\dot{y}$, $\dot{z}$ (vertical velocity), and the yaw angle σ (not shown); in particular, the pitch angle α is not a flat output. This contrasts with the RFV, whose flat outputs are σ, α, and $\dot{y}^F$ [24]. Referring to Fig. 3(b), the difference in flat outputs is due to the normal constraints on the RFV’s wheels, which result in normal forces that partially support the vehicle’s weight. As a result, the vertical component of the thrust force is not required to support the vehicle’s weight, meaning that T, and therefore $F_y^F$, can be prescribed independently of α. The difference in flat outputs reveals that CFMV control algorithms are not ideal for controlling the RFV, as they will simultaneously inhibit independent control of the flat output α while attempting to control the non-flat outputs $\dot{x}$ and $\dot{y}$ and the non-output $\dot{z}$. Rather, RFV control algorithms should be based on the dynamic model of Eqs. (1)–(3), which permits control of the flat outputs σ, α, and $\dot{y}^F$. In general, the independence of α and $\dot{y}^F$ can be leveraged to optimize the RFV operation for maximum range, while control of α alone enables optimization for minimum power consumption [16]. This occurs because control of α permits control of the wheel normal force, and therefore the wheel rolling resistance, which in turn influences power consumption and range.
The optimal α at which to operate depends on the terrain, highlighting the importance of a broad controllable range of α.
Furthermore, the difference in flat outputs reveals that the RFV is over-actuated while rolling; the RFV’s four rotors, which are capable of controlling the four flat outputs of a CFMV (i.e., $\dot{x}$, $\dot{y}$, $\dot{z}$, and σ), need only control three flat outputs while rolling: σ, α, and $\dot{y}^F$. As a consequence, rolling operation may require fewer than four rotors to operate at a time. This is accomplished via a thrust-minimizing control allocation method described further in Secs. 2.4 and 3.2.2.
Another difference between the RFV and CFMVs is the exogenous dissipative forces acting upon the vehicles. In addition to aerodynamic drag forces, the RFV must overcome rolling resistance due to unmodeled terrain and viscous friction present in the wheel bearings. These dissipative forces and the outputs which they affect are captured by $D_y$, $D_\sigma$, and $D_\alpha$. Additionally, the proximity of the rotors to the ground at low α creates uncertain ground effect disturbances which the α control system must reject.
Finally, CFMVs and the RFV admit different simplifying assumptions. The nominal operating condition of a CFMV is typically horizontal. As a result, many attitude control systems are developed by linearizing the system dynamics about the horizontal configuration and making a small-angle assumption [18,28,30]. Linearization and/or small-angle strategies are unsuitable for the RFV because α is unrestricted and may assume values far from horizontal for extended periods. One consequence of large α values is the inertial cross-coupling that occurs due to the $\sin\alpha$ and $\sin(2\alpha)$ terms appearing in Eqs. (1)–(3). These terms create significant inertial torques that must be overcome by the controller. Large α values also mean that the transformation of forces and moments from the $F$ frame (in which the flat outputs are defined) to the $B$ frame (in which the actuators reside) depends on the value of α. As will be shown in Sec. 2.4, the $B$ frame moments required to achieve a given yawing torque are modulated by $\sin\alpha$ and $\cos\alpha$.
Several researchers relax the small-angle assumption and develop model-based nonlinear controllers for CFMVs [17,18,20]. These controllers regulate orientation by generating moments about an arbitrary axis of a CFMV. This strategy is effective because the CFMV is unconstrained and thus free to rotate about any axis. However, the RFV is constrained in multiple ways and cannot rotate about an arbitrary axis. For example, the RFV cannot undergo lateral rolling (rotation about the $F$ frame $\hat{y}$-axis) without one or both wheels leaving the ground. Were the RFV’s attitude controlled in the same manner as a CFMV’s, then energy could be wasted trying to rotate the RFV against a constraint. Moreover, it is unclear what the commanded lateral roll should be, as the slope of the terrain may change. Instead, α and σ (which are flat outputs) are controlled to track a trajectory, while lateral rolling can either be left uncontrolled or suppressed using active damping (see Sec. 2.4).
### 2.3 Control System Development.
In this section, the model of Eqs. (1)–(3) is used to design a closed-loop control system for the RFV. First, the model is rewritten in a more manageable form by defining a state vector $\vec{x}\in\mathbb{R}^5$, input vector $\vec{u}\in\mathbb{R}^3$, and output function $\vec{z}:\mathbb{R}^5\to\mathbb{R}^3$ such that $\vec{x} = [\sigma\;\dot{\sigma}\;\alpha\;\dot{\alpha}\;\dot{y}^F]^T$, $\vec{u} = [M_z^F\;M_x^F\;F_y^F]^T$, and $\vec{z}(\vec{x}) = [x_1\;x_3\;x_5]^T = [\sigma\;\alpha\;\dot{y}^F]^T$. Then, the equations of motion (1)–(3) can be rewritten as
$\dot{\vec{x}} = \begin{bmatrix} x_2 \\ \frac{1}{c_4}\left(c_1 x_2 x_4\sin(2x_3) + c_2 x_2 x_5\sin x_3 + c_3 + u_1\right) \\ x_4 \\ \frac{1}{c_8}\left(c_5 x_2^2\sin(2x_3) + c_6\sin x_3 + c_7 + u_2\right) + \delta_1 \\ \frac{1}{c_{10}}\left(c_9 + u_3\right) + \delta_2 \end{bmatrix},\quad \vec{z}(\vec{x}) = \begin{bmatrix} x_1 \\ x_3 \\ x_5 \end{bmatrix}$
(5)
where $c_1 = I_z - I_y$, $c_2 = m_B h$, $c_3 = -D_\sigma$, $c_4 = 2(L^2 I_{xw}/R^2 + I_{yw} + m_W L^2) + I_y\sin^2\alpha + I_z\cos^2\alpha$, $c_5 = -(I_z - I_y)/2$, $c_6 = m_B g h$, $c_7 = -D_\alpha$, $c_8 = I_x$, $c_9 = -D_y$, $c_{10} = 2 I_{xw}/R^2 + m_B + 2 m_W$, and $\delta_1$ and $\delta_2$ are the remaining unaccounted-for terms in Eqs. (3) and (1), respectively. Here, we assume that the $\delta_1$ and $\delta_2$ terms are considerably smaller than the other terms in Eqs. (3) and (1) for reasonable values of the states and inputs. Therefore, we treat these terms as modeling uncertainties that can be bounded. This assumption simplifies the subsequent control system development by leading to complete input–output decoupling. A consequence, however, is that the stability properties are only semi-global.
The control objective is to independently control $z_i$ for i = 1, 2 (i.e., σ and α) such that $z_i \to r_i$ as t → ∞, where $r_i$ is the reference trajectory to be tracked by the ith output and t is the time. This is accomplished using feedback linearization augmented with sliding mode control. Sliding mode control ensures closed-loop stability in the presence of bounded modeling uncertainties provided that the uncertainties are matched [31,32]. Formulating the model as in (5), rather than in a more general form, permits the uncertainty in the individual nonlinear terms to be bounded, which results in a less conservative controller; i.e., the control action will be smaller [32].
To feedback linearize the nonlinear MIMO system of (5), an input–output decoupling transformation must be found that decouples each output from all but one input. Fortunately, an inspection of (5) reveals that the system is already decoupled; $u_i$ can be used to linearize and control $z_i$ for i = 1, 2, 3. Moreover, the uncertainties appearing in (5) are matched. With the system input–output decoupled, the control system for each input–output pair can be developed independently [32]. Partitioning (5) into three subsystems results in a second-order subsystem for controlling $z_1$ (i.e., σ), a second-order subsystem for controlling $z_2$ (i.e., α), and a first-order subsystem for controlling $z_3$ (i.e., $\dot{y}^F$). The objective here is to develop controllers for the $z_1$ and $z_2$ subsystems, which have the following form:
$\dot{\vec{\xi}} = \begin{bmatrix} \dot{\xi}_1 \\ \dot{\xi}_2 \end{bmatrix} = \begin{bmatrix} \xi_2 \\ \frac{1}{\kappa_4}\left(\kappa_1\psi_1(\vec{x}) + \kappa_2\psi_2(\vec{x}) + \kappa_3 + u\right) + \delta \end{bmatrix},\quad \rho(\vec{\xi}) = \xi_1$
(6)
where $\vec{\xi}\in\mathbb{R}^2$ represents the two-dimensional state (e.g., $[\sigma\;\dot{\sigma}]^T$), $\rho:\mathbb{R}^2\to\mathbb{R}$ is the scalar output function, $\psi_1:\mathbb{R}^5\to\mathbb{R}$ and $\psi_2:\mathbb{R}^5\to\mathbb{R}$ represent static nonlinearities, u is the scalar input, and κ1, κ2, κ3, κ4, and δ are uncertain parameters; e.g., κ1 = c1, κ2 = c2, κ3 = c3, κ4 = c4, and δ = 0 for the $z_1$ controller. The error associated with the system (6) is
$\vec{e} = \begin{bmatrix} e_0 \\ e_1 \\ e_2 \end{bmatrix} = \begin{bmatrix} \int^t (r - \rho)\,dt \\ r - \rho \\ \dot{r} - \dot{\rho} \end{bmatrix}$
(7)
where r and $\dot{r}$ are the desired output and its time derivative, respectively. Therefore, $e_0$ is the integrated output tracking error, $e_1$ is the output tracking error, and $e_2$ is the output tracking error rate. Differentiating (7) yields the following error system
$\dot{\vec{e}} = \begin{bmatrix} e_1 \\ e_2 \\ \ddot{r} - \ddot{\rho} \end{bmatrix} = \begin{bmatrix} e_1 \\ e_2 \\ \nu \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\vec{e} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\nu$
(8)
where
$\nu \equiv \ddot{r} - \frac{1}{\kappa_4}\left(\kappa_1\psi_1 + \kappa_2\psi_2 + \kappa_3 + u\right) - \delta$
(9)
is a fictitious input that feedback linearizes (6). The objective now is to stabilize the origin of (8) using sliding mode control. To do so, a stabilizing control law ϕ is defined for the two-dimensional subsystem of $\vec{\zeta} \equiv [e_0\;e_1]^T$
$\dot{\vec{\zeta}} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\vec{\zeta} + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\phi$
(10)
such that $\vec{\zeta}\to\vec{0}$ as t → ∞. A state feedback control law is chosen so that $\phi = -\vec{K}^T\vec{\zeta}$, where $\vec{K} = [K_0\;K_1]^T$ are state feedback gains. If pole placement is used, then $K_0 = \lambda_0\lambda_1$ and $K_1 = -(\lambda_0 + \lambda_1)$, where λ0 and λ1 are complex-valued pole locations in the left-half plane. Alternatively, an optimal solution using a linear quadratic regulator (LQR) can be used to determine the state feedback gains $\vec{K}$. If $e_2$ can be controlled such that $e_2 = \phi(\vec{\zeta})$, then $\vec{\zeta}\to\vec{0} \Rightarrow e_2\to 0$, and (8) is stabilized. The error between $e_2$ and ϕ is defined as
$s = e_2 - \phi = e_2 + \vec{K}^T\vec{\zeta} = e_2 + K_0 e_0 + K_1 e_1$
(11)
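The gain selection and sliding variable can be sketched numerically. The pole locations below are illustrative, and the gains follow from matching the characteristic polynomial of the sliding dynamics, $\lambda^2 + K_1\lambda + K_0$, to the chosen roots:

```python
# Sliding-surface gains via pole placement: once s = 0, the error obeys
# e0'' + K1*e0' + K0*e0 = 0, so lambda^2 + K1*lambda + K0 must have roots
# at the chosen poles. Matching coefficients gives K0 = lam0*lam1 and
# K1 = -(lam0 + lam1). The pole values here are illustrative.
lam0, lam1 = -2.0, -5.0
K0 = lam0 * lam1            # 10.0
K1 = -(lam0 + lam1)         # 7.0

def sliding_variable(e0, e1, e2):
    """Eq. (11): s = e2 + K0*e0 + K1*e1."""
    return e2 + K0 * e0 + K1 * e1
```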
The manifold s = 0 is the so-called sliding surface and the dynamics
$e_2 + K_0 e_0 + K_1 e_1 = \ddot{e}_0 + K_1\dot{e}_0 + K_0 e_0 = 0$
(12)
are the stable sliding mode dynamics that determine the behavior of the system once s = 0. Now, the problem of stabilizing the origin of (8) is reduced to stabilizing the origin of s, which is accomplished using Lyapunov’s direct method. A positive-definite Lyapunov function is defined as $V(s) = s^2/2$. If a control law for u can be developed such that $\dot{V}(s)$ is negative definite, then s → 0 as t → ∞. The derivative of V(s) is
$\dot{V}(s) = s\dot{s} = s\left(\dot{e}_2 + K_0\dot{e}_0 + K_1\dot{e}_1\right) = s\left(\nu + K_0 e_1 + K_1 e_2\right) = s\left(\ddot{r} - \frac{1}{\kappa_4}(\kappa_1\psi_1 + \kappa_2\psi_2 + \kappa_3 + u) - \delta + K_0 e_1 + K_1 e_2\right)$
(13)
If u can be chosen such that
$\ddot{r} - \frac{1}{\kappa_4}\left(\kappa_1\psi_1 + \kappa_2\psi_2 + \kappa_3 + u\right) - \delta + K_0 e_1 + K_1 e_2 = -\eta\,\mathrm{sgn}(s)$
(14)
for η > 0, then $\dot{V}(s) = -\eta|s| \le 0$, making s = 0 an invariant set [32]. That is, s will asymptotically approach and remain at the origin. However, a control law based on (14) assumes perfect knowledge of the parameters κ1, κ2, κ3, κ4, and δ. In reality, the parameters’ exact values are uncertain, meaning that a control law based on (14) alone cannot guarantee s → 0. Instead, choose u such that
$\ddot{r} - \frac{1}{\hat{\kappa}_4}\left(\hat{\kappa}_1\psi_1 + \hat{\kappa}_2\psi_2 + \hat{\kappa}_3 + u\right) - \hat{\delta} + K_0 e_1 + K_1 e_2 = -\beta\,\mathrm{sgn}(s)$
(15)
where $\hat{\kappa}_i$ and $\hat{\delta}$ are estimates of $\kappa_i$ and δ, respectively, and β is a to-be-determined switching gain. The goal now is to select a value of β that guarantees that $\dot{V}(s) \le -\eta|s|$ given bounds on the uncertainty in the parameters. Solving (15) for u results in the control law
$u = \hat{\kappa}_4\left(\ddot{r} - \hat{\delta} + K_0 e_1 + K_1 e_2 + \beta\,\mathrm{sgn}(s)\right) - \hat{\kappa}_1\psi_1 - \hat{\kappa}_2\psi_2 - \hat{\kappa}_3$
(16)
Substituting (16) into (13) and collecting terms results in
$\dot{V}(s) = s\left(\left(1 - \frac{\hat{\kappa}_4}{\kappa_4}\right)\left(\ddot{r} - \hat{\delta} + K_0 e_1 + K_1 e_2\right) - \frac{1}{\kappa_4}\left(\Delta\kappa_1\psi_1 + \Delta\kappa_2\psi_2 + \Delta\kappa_3\right) - \Delta\delta\right) - \frac{\hat{\kappa}_4}{\kappa_4}\beta|s| \le |s|\left(\left(1 - \frac{\hat{\kappa}_4}{\kappa_4}\right)\left(\ddot{r} - \hat{\delta} + K_0 e_1 + K_1 e_2\right) - \frac{1}{\kappa_4}\left(\Delta\kappa_1\psi_1 + \Delta\kappa_2\psi_2 + \Delta\kappa_3\right) - \Delta\delta - \frac{\hat{\kappa}_4}{\kappa_4}\beta\right)$
(17)
where $\Delta\kappa_i \equiv \kappa_i - \hat{\kappa}_i$ and $\Delta\delta \equiv \delta - \hat{\delta}$. Enforcing the condition that $\dot{V}(s) \le -\eta|s|$ results in
$\beta \ge \frac{\kappa_4}{\hat{\kappa}_4}(\eta - \Delta\delta) - \frac{1}{\hat{\kappa}_4}\left(\Delta\kappa_1\psi_1 + \Delta\kappa_2\psi_2 + \Delta\kappa_3\right) + \left(\frac{\kappa_4}{\hat{\kappa}_4} - 1\right)\left(\ddot{r} - \hat{\delta} + K_0 e_1 + K_1 e_2\right)$
$\beta \ge \frac{\kappa_4}{\hat{\kappa}_4}(\eta + |\Delta\delta|) + \frac{1}{\hat{\kappa}_4}\left(|\Delta\kappa_1||\psi_1| + |\Delta\kappa_2||\psi_2| + |\Delta\kappa_3|\right) + \left(\frac{\kappa_4}{\hat{\kappa}_4} - 1\right)\left|\ddot{r} - \hat{\delta} + K_0 e_1 + K_1 e_2\right|$
(18)
To ensure the condition of (18), β must be chosen assuming the worst-case uncertainty in the parameters $\kappa_i$ and disturbance δ. Assume the $\kappa_i$ are bounded such that $\kappa_{i,min} \le \kappa_i \le \kappa_{i,max}$. If the estimates of κ1, κ2, and κ3 are the arithmetic means of their respective maximum and minimum values, such that $\hat{\kappa}_j = (\kappa_{j,min} + \kappa_{j,max})/2$ for j = 1, 2, 3, then $|\Delta\kappa_j| \le (\kappa_{j,max} - \kappa_{j,min})/2 \equiv |\Delta\kappa_j|_{max}$. Furthermore, an inspection of (1)–(3) reveals that κ4 is always positive. This permits the use of a technique suggested by Ref. [32] wherein the estimate of κ4 is the geometric mean of its maximum and minimum values, such that $\hat{\kappa}_4 = \sqrt{\kappa_{4,max}\kappa_{4,min}}$. Therefore, $\kappa_4/\hat{\kappa}_4 \le \sqrt{\kappa_{4,max}/\kappa_{4,min}} \equiv (\kappa_4/\hat{\kappa}_4)_{max}$. Additionally, $\hat{\delta}$ is assumed to be zero and $|\Delta\delta|_{max}$ is selected to be large enough to account for the uncertainty in δ. Substituting $|\Delta\kappa_j|_{max}$, $(\kappa_4/\hat{\kappa}_4)_{max}$, and $|\Delta\delta|_{max}$ into (18) results in an expression for β that ensures asymptotic stability of (8)
$\beta = \left(\frac{\kappa_4}{\hat{\kappa}_4}\right)_{max}\left(\eta + |\Delta\delta|_{max}\right) + \frac{1}{\hat{\kappa}_4}\left(|\Delta\kappa_1|_{max}|\psi_1| + |\Delta\kappa_2|_{max}|\psi_2| + |\Delta\kappa_3|_{max}\right) + \left(\left(\frac{\kappa_4}{\hat{\kappa}_4}\right)_{max} - 1\right)\left|\ddot{r} + K_0 e_1 + K_1 e_2\right|$
(19)
Note that for the $z_1$ subsystem, $\kappa_4 = c_4 = 2(L^2 I_{xw}/R^2 + I_{yw} + m_W L^2) + I_y\sin^2\alpha + I_z\cos^2\alpha$, which is not constant but varies with α. However, $c_4$ is still bounded because $\min(I_y, I_z) \le I_y\sin^2\alpha + I_z\cos^2\alpha \le \max(I_y, I_z)$.
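A direct transcription of Eq. (19) is sketched below; the bound intervals are assumed inputs, and the estimates follow the arithmetic-mean/geometric-mean choices described above:

```python
import math

def switching_gain(psi1, psi2, rdd, e1, e2, K0, K1, eta,
                   k1_bnd, k2_bnd, k3_bnd, k4_bnd, dDelta_max):
    """Worst-case switching gain of Eq. (19).

    Each k*_bnd is an assumed (min, max) interval for the uncertain
    kappa_i. Estimates use the arithmetic mean for kappa_1..kappa_3 and
    the geometric mean for the strictly positive kappa_4.
    """
    k4_hat = math.sqrt(k4_bnd[0] * k4_bnd[1])
    ratio_max = math.sqrt(k4_bnd[1] / k4_bnd[0])   # (kappa4 / kappa4_hat)_max
    dk1 = (k1_bnd[1] - k1_bnd[0]) / 2              # |Delta kappa_1|_max
    dk2 = (k2_bnd[1] - k2_bnd[0]) / 2
    dk3 = (k3_bnd[1] - k3_bnd[0]) / 2
    return (ratio_max * (eta + dDelta_max)
            + (dk1 * abs(psi1) + dk2 * abs(psi2) + dk3) / k4_hat
            + (ratio_max - 1.0) * abs(rdd + K0 * e1 + K1 * e2))
```

With zero parameter uncertainty and no disturbance bound, β collapses to η, as expected from Eq. (19).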
To avoid chattering caused by the discontinuous sgn(s) function in (16), the control law is altered to
$u = \hat{\kappa}_4\left(\ddot{r} + K_0 e_1 + K_1 e_2 + \beta\tanh\left(\frac{s}{a}\right)\right) - \hat{\kappa}_1\psi_1 - \hat{\kappa}_2\psi_2 - \hat{\kappa}_3$
(20)
where a is a small number that defines the width of the transition region over which tanh(s/a) changes sign. The stability proof culminating in (19) assumes that sgn(s) is used rather than tanh(s/a). The effect of this substitution, discussed in Ref. [31], is a nonzero steady-state tracking error. This occurs because the transition region (defined by a) softens the influence of β, which would otherwise act as a large gain. Thus, there is a tradeoff between a smooth control signal and achieving zero steady-state error. Fortunately, the error integrator can bring the steady-state error to zero as s → 0, which helps obviate the tradeoff between chattering and steady-state error. Nonetheless, the presence of the boundary layer can still prevent s from reaching the origin, resulting in a nonzero steady-state error-integrator value even when the error is zero. For example, if the model is uncertain, then some nonzero integrator action may be required to achieve $e_1 = 0$ if the β switching term alone cannot eliminate $e_1$ when s is within the transition region. If small errors due to the transition region can be tolerated, or if the error integrator is otherwise not desired, it can be eliminated by assigning λ0 = 0 so that $K_0 = 0$ and $K_1 = -\lambda_1$. Note that the $\vec{\zeta}$ dynamics, which are prescribed via the state feedback control law $\phi = -\vec{K}^T\vec{\zeta}$, are only fully realized when $e_2 = \phi$, or equivalently, when s = 0. Stability notwithstanding, during the period when s is approaching but not yet equal to zero (the so-called “reaching phase”), the integrator may wind up, causing undesirable overshoot. A solution is to activate the integrator only when |s| ≤ a, where a defines the width of the transition region (Eq. (20)). This prevents the integrator from winding up, and the $\vec{\zeta}$ dynamics become $\dot{\zeta} = \lambda_1\zeta$ when |s| > a, where λ1 < 0. Additionally, the controller can pause integration if the control output saturates or if the integrator exceeds a predetermined bound.
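Combining Eqs. (11) and (20) with the conditional integrator just described gives a compact implementation sketch; the parameter packaging, boundary-layer width, and integrator storage are illustrative conveniences, not the authors’ implementation:

```python
import math

def smc_control(r, rdot, rddot, rho, rhodot, psi1, psi2, params,
                a=0.05, dt=0.002, e0=[0.0]):
    """Control law of Eq. (20) with a conditional integrator: the error
    integral e0 is updated only inside the boundary layer |s| <= a to
    suppress reaching-phase windup. params packs the estimates and gains
    (k1h, k2h, k3h, k4h, K0, K1, beta); all values are illustrative.
    """
    k1h, k2h, k3h, k4h, K0, K1, beta = params
    e1 = r - rho                     # output tracking error
    e2 = rdot - rhodot               # tracking error rate
    s = e2 + K0 * e0[0] + K1 * e1    # sliding variable, Eq. (11)
    if abs(s) <= a:                  # integrate only inside the boundary layer
        e0[0] += e1 * dt
    u = (k4h * (rddot + K0 * e1 + K1 * e2 + beta * math.tanh(s / a))
         - k1h * psi1 - k2h * psi2 - k3h)
    return u, s
```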
The control law (20), combined with the gain equation (19) and the definition of the sliding surface (11), defines the feedback-linearized sliding mode controller for the $z_1$ and $z_2$ subsystems. Simulation results of these controllers applied to the RFV are provided in Sec. 4.1.
### 2.4 Control Allocation.
Equations (1)–(3) are affine in the inputs $F_y^F$, $M_z^F$, and $M_x^F$, respectively. Therefore, these system inputs are assigned as the outputs of the motion controllers developed in Sec. 2.3. Figure 4 illustrates how $F_y^F$, $M_x^F$, $M_y^F$, and $M_z^F$ are synthesized by the RFV’s rotors. First, $F_y^F$, $M_x^F$, $M_y^F$, and $M_z^F$ are transformed into the $B$ frame by writing (4) in matrix form and inverting
$\begin{bmatrix} F_z^B \\ M_x^B \\ M_y^B \\ M_z^B \end{bmatrix} = \begin{bmatrix} -\csc\alpha & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\alpha & \sin\alpha \\ 0 & 0 & -\sin\alpha & \cos\alpha \end{bmatrix}\begin{bmatrix} F_y^F \\ M_x^F \\ M_y^F \\ M_z^F \end{bmatrix}$
(21)
Inspection of (21) indicates that an arbitrary $F_y^F$ cannot be specified when α = 0 because the $F$ frame $\hat{y}$- and $B$ frame $\hat{z}$-axes are orthogonal. This affects Eq. (1), wherein the control is $F_y^F$; therefore, $\dot{y}^F$ cannot be controlled when α = nπ, n = 0, 1, 2, …. Moreover, as α approaches zero, the value of $F_z^B$ required to synthesize a given $F_y^F$ increases dramatically. This singularity represents a limitation on the independence of $\dot{y}^F$ and α due to the under-actuation of the system. The consequences of this limitation are discussed later in this section.
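Equation (21) and its singularity can be sketched as a small helper; the eps guard threshold is an illustrative choice:

```python
import math

def f_to_b(FyF, MxF, MyF, MzF, alpha, eps=1e-3):
    """Eq. (21): rotate F-frame force/moments into B-frame commands.

    Near alpha = n*pi the csc(alpha) entry blows up and FyF cannot be
    synthesized; the eps threshold guarding that case is illustrative.
    """
    s, c = math.sin(alpha), math.cos(alpha)
    if abs(s) < eps:
        raise ValueError("singular: FyF cannot be specified near alpha = n*pi")
    FzB = -FyF / s                 # -csc(alpha) * FyF
    MxB = MxF
    MyB = c * MyF + s * MzF
    MzB = -s * MyF + c * MzF
    return FzB, MxB, MyB, MzB
```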
Fig. 4
Equation (21) also reveals that specifying $M_z^F$ (for controlling σ via Eq. (2)) influences both $M_z^B$ and $M_y^B$, and α determines the relative contribution of each in synthesizing $M_z^F$. Also, $M_y^F$ is an additional control that does not appear in the equations of motion. $M_y^F$ can be set to zero or used to suppress lateral rolling about the $F$ frame $\hat{y}$-axis. For example, a lateral roll damping controller of the form
$M_y^F = -K_p\dot{\gamma}$
(22)
can be used, where $\dot{\gamma}$ is the lateral rolling rate. Next, the $B$ frame forces and moments can be expressed in terms of the thrusts and torques generated by the individual actuators
$F_z^B = T_1 + T_2 + T_3 + T_4,\quad M_x^B = L_r(T_1 + T_2 - T_3 - T_4),\quad M_y^B = L_r(-T_1 + T_2 + T_3 - T_4),\quad M_z^B = -Q_1 + Q_2 - Q_3 + Q_4$
(23)
where $T_p$ and $Q_p$ are the thrust generated by and the load torque impressed upon the pth rotor, respectively, and $L_r$ is the perpendicular distance measured along the $B$ frame $\hat{x}$- and $\hat{y}$-axes from point b to the center of each rotor. $Q_p$ results from the aerodynamic drag encountered by the rotor and acts to oppose the rotation of the rotor. A typical assumption is that the rotors operate in a constant state of hover in the sense that the freestream velocity of the air upstream of the rotors is zero. This leads to quadratic relationships between the rotor angular velocity and $T_p$ and $Q_p$
$T_p = k_T\Omega_p^2,\quad Q_p = k_Q\Omega_p^2$
(24)
where $\Omega_p$ is the angular velocity of the pth rotor with respect to the $I$ frame (which is approximately the same as the angular velocity with respect to $B$) about the $B$ frame $\hat{z}$-axis, and $k_T$ and $k_Q$ are appropriately dimensioned rotor thrust and torque constants, respectively. Combining these relationships with (23) yields
$\begin{bmatrix} F_z^B \\ M_x^B \\ M_y^B \\ M_z^B \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ L_r & L_r & -L_r & -L_r \\ -L_r & L_r & L_r & -L_r \\ -k_Q/k_T & k_Q/k_T & -k_Q/k_T & k_Q/k_T \end{bmatrix}\begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{bmatrix}$
(25)
This is a standard model used by many researchers [18,28–30] to map rotor thrusts to vehicle force and moments. Equation (25) can be written in the compact form $\vec{\tau} = A\vec{T}$, where $\vec{\tau} = [F_z^B\;M_x^B\;M_y^B\;M_z^B]^T$ and $\vec{T} = [T_1\;T_2\;T_3\;T_4]^T$. The process of inverting (25) to obtain the rotor thrusts $\vec{T}$ that will synthesize the desired $B$ frame force and moments $\vec{\tau}$ is called control allocation. Control allocation for CFMVs is typically accomplished [18] using matrix inversion so that $\vec{T} = A^{-1}\vec{\tau}$, which has a simple, closed-form solution. However, this solution does not take into consideration any constraints imposed on the solution, e.g., that the thrusts be positive. This is typically not a concern for CFMVs because the nominal positive thrust required to support the weight of a CFMV is considerably greater than the perturbations in thrust required to change its orientation. That is, $F_z^B$ is “large” compared to $|\vec{M}^B|$. However, this is not true for the RFV because $F_z^B$ can be zero; the weight of the RFV can be supported by the ground rather than by the rotor thrusts. If $F_z^B = 0$, inspection of (25) reveals that there is no solution wherein every element of $\vec{T}$ is non-negative other than $\vec{T} = \vec{0}$. Therefore, the standard approach to control allocation for CFMVs cannot be applied to the RFV. Rather, the constraint that every element of $\vec{T}$ be non-negative must be taken into consideration.
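The failure of unconstrained allocation at small $F_z^B$ is easy to demonstrate. The sketch below inverts Eq. (25) in closed form (the rows of the mixer matrix are orthogonal), with illustrative values of $L_r$ and $k_Q/k_T$:

```python
# Standard unconstrained allocation T = A^(-1) * tau for Eq. (25).
# Lr and kQ/kT below are illustrative values.
Lr, kQ_over_kT = 0.1, 0.02

def mixer_inverse(FzB, MxB, MyB, MzB):
    """Closed-form inverse of the mixer matrix in Eq. (25)."""
    nx = MxB / (4 * Lr)
    ny = MyB / (4 * Lr)
    nz = MzB / (4 * kQ_over_kT)
    return (FzB / 4 + nx - ny - nz,
            FzB / 4 + nx + ny + nz,
            FzB / 4 - nx + ny - nz,
            FzB / 4 - nx - ny + nz)
```

With $F_z^B = 0$ (rolling) and any nonzero pitching moment, two of the four thrusts come out negative, which standard rotors cannot produce.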
Control allocation subject to constraints has been studied extensively [33,34]. Both Monteiro et al. [23] and Guillaume et al. [22] demonstrate how constrained optimization techniques can solve the control allocation problem for multirotor vehicles, typically in the context of over-actuated vehicles or to design a fault-tolerant control allocation solution [21]. Analytical techniques include pseudo-inverse methods, which minimize the norm of $\vec{T}$, and weighted least squares methods, which allow individual penalties to be assigned to actuators. Numerical optimization techniques such as linear programming and quadratic programming can also be used to solve the control allocation problem subject to constraints [33]. Reyhanoglu et al. [35] perform control allocation for a three-degree-of-freedom quadrotor hover system. Their method guarantees that all thrusts are greater than a minimum positive value, although they do not demonstrate whether their solution is minimal.
The strategy here is to break the control allocation problem into two parts. First, determine $\vec{T}$ such that the norm of $\vec{T}$ is minimized, the desired moments $M_x^B$, $M_y^B$, and $M_z^B$ are synthesized, and every element of $\vec{T}$ is greater than or equal to a non-negative lower limit $T_{min}$. $T_{min}$ is based on the lowest achievable rotor speed so that $T_{min} = k_T\Omega_{min}^2$. Second, specify a desired force in the body $\hat{z}$-direction, $F_{z,des}^B$, and modify the elements of $\vec{T}$ to realize $F_{z,des}^B$ provided that the constraints are not violated. The minimization is stated formally as
$\text{Minimize } f(\vec{T}) \equiv \frac{1}{2}\vec{T}^T\vec{T} \quad \text{s.t.} \quad \vec{h}(\vec{T}) \equiv \vec{M}^B - \bar{A}\vec{T} = \vec{0} \quad \text{and} \quad \vec{g}(\vec{T}) \equiv \vec{T}_{min} - \vec{T} \le \vec{0}$
(26)
where $\vec{M}^B = [M_x^B\;M_y^B\;M_z^B]^T$ and
$\bar{A} = \begin{bmatrix} L_r & L_r & -L_r & -L_r \\ -L_r & L_r & L_r & -L_r \\ -k_Q/k_T & k_Q/k_T & -k_Q/k_T & k_Q/k_T \end{bmatrix}$
(27)
This minimization problem can be solved analytically, which is computationally faster in a real-time implementation than a numerical method such as linear programming. The method of Lagrange multipliers is used to augment the objective function f so that
$\bar{f}(\vec{T}) \equiv \frac{1}{2}\vec{T}^T\vec{T} + \vec{h}(\vec{T})^T\vec{\lambda}_{eq} + \vec{g}(\vec{T})^T\vec{\lambda}_{ineq}$
(28)
where $\vec{\lambda}_{eq}$ and $\vec{\lambda}_{ineq}$ are Lagrange multipliers associated with the three equality and four inequality constraints, respectively. If the solution to (26) is denoted $\vec{T}^*$, then the necessary conditions that $\bar{f}(\vec{T}^*)$ must satisfy to be a minimum are [36]
$\left.\dfrac{\partial\bar{f}}{\partial\vec{T}}\right|_{\vec{T}^*} = \vec{T}^* + \dfrac{\partial\vec{h}^T(\vec{T}^*)}{\partial\vec{T}}\vec{\lambda}_{eq} + \dfrac{\partial\vec{g}^T(\vec{T}^*)}{\partial\vec{T}}\vec{\lambda}_{ineq} = \vec{T}^* - \bar{A}^T\vec{\lambda}_{eq} - I_{4\times4}\vec{\lambda}_{ineq} = \vec{0}$
(29)
$g_p(\vec{T}^*)\lambda_{ineq,p} = 0,\quad p = 1, 2, \ldots, 4$
(30)
$\vec{h}(\vec{T}^*) = \vec{0}$
(31)
and that $\vec{g}(\vec{T}^*) \le \vec{0}$ and $\vec{\lambda}_{ineq} \ge \vec{0}$, where $I_{4\times4}$ is the identity matrix. The Hessian $\partial^2\bar{f}/\partial\vec{T}^2 = I_{4\times4}$ is positive-definite, indicating that the above conditions are also sufficient conditions for a minimum [36]. Simultaneously solving Eqs. (29), (30), and (31) yields five non-trivial solutions for $\vec{T}^*$, $\vec{\lambda}_{ineq}$, and $\vec{\lambda}_{eq}$. The solutions for $\vec{\lambda}_{ineq}$ reveal that there is only one nonzero value for each element of $\vec{\lambda}_{ineq}$, and each of these corresponds to a different solution. The solutions are
$\lambda_{ineq,1} = \left\{0,\;0,\;0,\;\tfrac{1}{L_r}(-M_x^B + M_y^B) + \tfrac{k_T}{k_Q}M_z^B + 4T_{min},\;0\right\}$
$\lambda_{ineq,2} = \left\{-\tfrac{1}{L_r}(M_x^B + M_y^B) - \tfrac{k_T}{k_Q}M_z^B + 4T_{min},\;0,\;0,\;0,\;0\right\}$
$\lambda_{ineq,3} = \left\{0,\;0,\;0,\;0,\;\tfrac{1}{L_r}(M_x^B - M_y^B) + \tfrac{k_T}{k_Q}M_z^B + 4T_{min}\right\}$
$\lambda_{ineq,4} = \left\{0,\;\tfrac{1}{L_r}(M_x^B + M_y^B) - \tfrac{k_T}{k_Q}M_z^B + 4T_{min},\;0,\;0,\;0\right\}$
(32)
Checking the necessary condition that $\lambda_{ineq,p} \ge 0$ reveals when the pth constraint could be active, which corresponds to the pth rotor producing the minimum thrust $T_{min}$. The five thrust solutions for each rotor are
$T_1^* = \left\{T_{min} - \nu_y - \nu_z,\; T_{min} + \nu_x - \nu_z,\; \tfrac{1}{2}(\nu_x - \nu_y - \nu_z),\; T_{min},\; T_{min} + \nu_x - \nu_y\right\}$
$T_2^* = \left\{T_{min},\; T_{min} + \nu_x + \nu_y,\; \tfrac{1}{2}(\nu_x + \nu_y + \nu_z),\; T_{min} + \nu_y + \nu_z,\; T_{min} + \nu_x + \nu_z\right\}$
$T_3^* = \left\{T_{min} - \nu_x - \nu_z,\; T_{min} + \nu_y - \nu_z,\; \tfrac{1}{2}(-\nu_x + \nu_y - \nu_z),\; T_{min} - \nu_x + \nu_y,\; T_{min}\right\}$
$T_4^* = \left\{T_{min} - \nu_x - \nu_y,\; T_{min},\; \tfrac{1}{2}(-\nu_x - \nu_y + \nu_z),\; T_{min} - \nu_x + \nu_z,\; T_{min} - \nu_y + \nu_z\right\}$
(33)
where $\nu_x = M_x^B/(2L_r)$, $\nu_y = M_y^B/(2L_r)$, and $\nu_z = k_T M_z^B/(2k_Q)$. Each of the five solutions is checked to determine if $\vec{g}(\vec{T}^*) \le \vec{0}$ and $\vec{\lambda}_{ineq} \ge \vec{0}$ are satisfied. Once a feasible solution has been found, the minimum $B$ frame $\hat{z}$-axis control force required to faithfully reproduce the desired $\vec{M}^B$ is computed as
$F_{z,min}^B = \sum_{p=1}^{4} T_p^*$
(34)
If $F_{z,des}^B < F_{z,min}^B$, then the desired $B$ frame forces and moments cannot be realized with positive thrusts. In this case, the desired moments are realized and $F_z^B$ must be limited (increased) to $F_{z,min}^B$. However, if $F_{z,des}^B > F_{z,min}^B$, then none of the constraints are active and the optimal thrusts are computed as
$T_p = T_p^* + \frac{1}{4}\left(F_{z,des}^B - F_{z,min}^B\right)$
(35)
for p = 1, 2, …, 4, which guarantees that $F_z^B = F_{z,des}^B$ and yields the same solution as inverting (25) via matrix inversion.
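The allocation procedure of Eqs. (32)–(35) can be sketched as follows. This is a simplified illustration: it screens the five closed-form candidates of Eq. (33) for feasibility and takes the minimum-norm feasible one, whereas the full method also checks the multiplier conditions of Eq. (32); the default parameter values are hypothetical.

```python
def allocate(MxB, MyB, MzB, Fz_des, Lr=0.1, kT=1e-5, kQ=2e-7, Tmin=0.0):
    """Constrained, thrust-minimizing control allocation sketch.

    Evaluates the five closed-form candidates of Eq. (33), keeps those
    with all thrusts >= Tmin, picks the minimum-norm one, and lifts the
    thrusts per Eq. (35) when Fz_des exceeds the minimum realizable FzB.
    """
    nx = MxB / (2 * Lr)
    ny = MyB / (2 * Lr)
    nz = kT * MzB / (2 * kQ)
    candidates = [
        (Tmin - ny - nz, Tmin, Tmin - nx - nz, Tmin - nx - ny),
        (Tmin + nx - nz, Tmin + nx + ny, Tmin + ny - nz, Tmin),
        (0.5 * (nx - ny - nz), 0.5 * (nx + ny + nz),
         0.5 * (-nx + ny - nz), 0.5 * (-nx - ny + nz)),  # unconstrained case
        (Tmin, Tmin + ny + nz, Tmin - nx + ny, Tmin - nx + nz),
        (Tmin + nx - ny, Tmin + nx + nz, Tmin, Tmin - ny + nz),
    ]
    feasible = [T for T in candidates if all(t >= Tmin - 1e-12 for t in T)]
    if not feasible:
        raise RuntimeError("no feasible candidate")
    T = min(feasible, key=lambda T: sum(t * t for t in T))  # thrust-minimizing
    Fz_min = sum(T)                                         # Eq. (34)
    if Fz_des > Fz_min:                                     # constraints inactive
        T = tuple(t + (Fz_des - Fz_min) / 4 for t in T)     # Eq. (35)
    return T
```

Every candidate synthesizes the commanded moments exactly; the selection only decides which rotors sit at $T_{min}$. With zero moments the unconstrained case reduces to equal thrusts, and with $F_{z,des}^B = 0$ and a nonzero pitching moment the solution keeps all thrusts non-negative while still realizing $M_x^B$.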
While (35) yields the minimum possible positive thrusts required to realize a desired $\vec{M}^B$, an active constraint means that $F_z^B > F_{z,des}^B$. This affects $F_y^F$ because, from (21),
$F_y^F = -F_z^B\sin\alpha$
(36)
As $F_y^F$ is the control for $\dot{y}^F$ (see Eq. (1)), the ability to control $\dot{y}^F$ is affected when the control allocation solution is constrained, which in turn depends on the desired values of $\vec{M}^B$ and $F_{z,des}^B$, with large $\vec{M}^B$ and small $F_{z,des}^B$ being more likely to result in an active constraint. But $\vec{M}^B$ depends strongly on α because $M_x^B$ is the output of the α closed-loop controller. In fact, the equilibrium value of $M_x^B$ is $M_{x,eq}^B(\alpha) \approx -m_B g h\sin\alpha$, indicating that a nonzero $M_x^B$ is required to control α when α ≠ 0 and h ≠ 0. Therefore, $\dot{y}^F$ and α are intermittently coupled due to the positive thrust constraint, particularly when $F_{z,des}^B$ is small. As $F_{z,des}^B$ increases, the constraint activity, and therefore the coupling, decreases. Reducing the value of h (i.e., locating the center of mass nearer to the $B$ frame $\hat{x}$-axis) also reduces the $\dot{y}^F$–α coupling.
In addition to the coupling just described, two other forms of $y˙F−α$ coupling exist: First, a singularity in (36) occurs when α is a multiple of π, and $FyF$ cannot be specified. Second, because the positive thrust constraint always results in $FzB>0$, the sign of $FyF$ is determined exclusively by the sign of α. As a result, $FyF>0$ can only be synthesized if sinα < 0, and vice versa. This coupling between $FyF$ and α is unavoidable if the rotor thrusts are constrained to be positive.
## 3 Hardware Embodiment
A hardware embodiment of the RFV is built to demonstrate the efficacy of the control and control allocation algorithms. This permits the evaluation of the control system performance while ensuring that the algorithms are computationally suitable for implementation on a real-time embedded system. A photograph of the RFV hardware is shown in Fig. 1. A carbon fiber rod serves as the RFV axle, onto which a 3D-printed boom is attached. The boom supports four rotors and all avionics. Also attached to the axle are two quadrature encoder sensors used to resolve the angular position and angular rate of the two wheels. The avionics consist of a battery, a central processing unit (CPU), a power measurement printed circuit board (PCB), two 2.4 GHz radio transceivers, four brushless direct current (BLDC) motor drivers, and an inertial navigation system (INS) for obtaining the orientation of the RFV. The CPU is an ARM Cortex-M7 microcontroller programmed to do the following:
• Communicate serially with both radio transceivers, including receiving commands from a base station and transmitting real-time telemetry data.
• Communicate serially with the INS to obtain orientation and angular rate information.
• Measure the position and speed of the wheels via the quadrature encoders.
• Sample the voltage of and current from the battery via the power measurement PCB.
• Execute the α and σ control algorithms described in Sec. 2.3.
• Execute the control allocation algorithm described in Sec. 2.4.
• Compute the required voltage for each of the four BLDC motors and generate pulse-width modulated (PWM) signals to control the voltage applied by each BLDC motor driver.
All σ and α commands are transmitted wirelessly from a base station to the RFV every 100 ms. The base station consists of a Microsoft Xbox 360 controller, a laptop computer executing a custom WinForms GUI, and two 2.4 GHz radio transceivers. Telemetry data (including $σ,α,y˙F,M⇀B$, and $FzB$) are transmitted wirelessly from the RFV to the base station every 2 ms. The interconnections between the various avionic subsystems of the RFV are illustrated in Fig. 5. The avionics components and RFV dimensions are recorded in Tables 1 and 2, respectively.
Fig. 5
Table 1
RFV components

| Component | Manufacturer and part name |
| --- | --- |
| CPU | ST Microelectronics STM32F767ZIT6U (Nucleo F767ZI) |
| Encoders (2x) | US Digital E3-500-984-NE-H-D-B |
| Radios (2x) | Digi International XBee-PRO S2C (XBP24CDMWIT-001) |
| Inertial sensor | VectorNav VN-100 Rugged |
| BLDC drivers (4x) | Crazepony BL Heli 32 |
| Battery | Turnigy 2200 mAh 2S 25C Lipo Pack |
| Motors (4x) | EMax RS2206 |
| Rotors (4x) | EMax Avan-R |
Table 2
RFV parameters

| Parameter | Actual value | Simulated value |
| --- | --- | --- |
| L (m) | 0.262 | 0.3275 |
| R (m) | 0.3480 | 0.261 |
| mB (kg) | 1.5 | 1.875 |
| mW (kg) | 0.5 | 0.375 |
| Ix (kg-m²) | 0.0181 | 0.0226 |
| Iy (kg-m²) | 0.0181 | 0.0136 |
| Iz (kg-m²) | 0.0217 | 0.0271 |
| Ixw (kg-m²) | 0.0303 | 0.0096 |
| Iyw (kg-m²) | 0.0151 | 0.008 |
| g (m/s²) | 9.81 | 9.81 |
| h (m) | −0.01 | −0.0125 |
| μrr | | 0.05 |
| μvisc (N-m-s) | | 0.005 |
| CD (kg/m) | | 0.01 |
| Lr (m) | 0.14 | 0.14 |
| kT (N/(rad/s)²) | 1.6351 × 10−5 | 1.6351 × 10−5 |
| kQ (N-m/(rad/s)²) | 2.7033 × 10−7 | 2.7033 × 10−7 |
The hardware embodiment of the RFV must account for non-horizontal terrain in order to accurately compute $σ˙$ and $α˙$. To do so, the lateral rolling angle γ about the $F$ frame $y^$-axis is measured, and the Euler angle rates are computed from the angular velocity of the $B$ frame with respect to the $I$ frame $ω⇀IBB$ according to
$$
\begin{bmatrix} \dot{\sigma} \\ \dot{\gamma} \\ \dot{\alpha} \end{bmatrix}
=
\begin{bmatrix}
0 & \sin\alpha\sec\gamma & \cos\alpha\sec\gamma \\
0 & \cos\alpha & -\sin\alpha \\
1 & \sin\alpha\tan\gamma & \cos\alpha\tan\gamma
\end{bmatrix}
\vec{\omega}_{IB}^{B}
$$
(37)
where measurements of σ, γ, α, and $ω⇀IBB$ are obtained from the INS. As described in Ref. [24], $y˙F$ is computed from $θ˙1$ and $θ˙2$ (measured by the encoders) and $ω⇀IBB$ as
$$\dot{y}^F = -R\left(\tfrac{1}{2}\left(\dot{\theta}_1 + \dot{\theta}_2\right) + \omega_{IB,x}^{B}\right)$$
(38)
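The mapping of Eq. (37) can be sketched as follows, with illustrative names; note that the $\sec\gamma$ terms make the map singular at γ = ±π/2.

```cpp
#include <array>
#include <cmath>

// Sketch of Eq. (37): map the body angular velocity w = (wx, wy, wz),
// resolved in the B frame, to the Euler angle rates
// (sigma_dot, gamma_dot, alpha_dot). Singular at gamma = +/- pi/2.
std::array<double, 3> euler_rates(double alpha, double gamma,
                                  const std::array<double, 3>& w) {
    double sa = std::sin(alpha), ca = std::cos(alpha);
    double sec_g = 1.0 / std::cos(gamma), tan_g = std::tan(gamma);
    return {
        sa * sec_g * w[1] + ca * sec_g * w[2],        // sigma_dot
        ca * w[1] - sa * w[2],                        // gamma_dot
        w[0] + sa * tan_g * w[1] + ca * tan_g * w[2]  // alpha_dot
    };
}
```

As a sanity check, at α = γ = 0 the matrix reduces to a permutation: σ̇ = ωz, γ̇ = ωy, and α̇ = ωx.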
The rotor motors are controlled in open loop. The steady-state electrical and mechanical equations governing the motors’ operation are
$$V_p = I_p R_e + k_e \Omega_p, \qquad 0 = k_e\left(I_p - I_{nl}\right) - k_Q \Omega_p^2$$
(39)
where Vp, Ip, Re, ke, and Inl are the applied voltage, applied current, electrical resistance, back-electromotive force (EMF) constant, and no-load current, respectively, of the pth motor. Therefore
$$V_p = \left(\frac{k_Q \Omega_p^2}{k_e} + I_{nl}\right) R_e + k_e \Omega_p$$
(40)
Equation (40) is used to compute the voltage required to rotate the pth motor at an angular velocity of Ωp. The desired Ωp are calculated based on the desired Tp from (24). The rotor parameters kT and kQ are determined empirically by measuring the thrust force and reaction torque on a motor driving a rotor while operating at different angular velocities. The motor parameters Re, ke, and Inl are obtained from the manufacturer’s data.
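Equation (40), combined with the thrust model $T_p = k_T \Omega_p^2$, can be sketched as follows. The rotor constants kT and kQ are the values from Table 2; Re, ke, and Inl below are illustrative placeholders, since the actual motor constants come from the manufacturer's data.

```cpp
#include <cmath>

// Sketch of Eq. (40): compute the open-loop voltage for one motor from its
// desired thrust, using T = kT * Omega^2 to obtain the required speed.
// kT and kQ are from Table 2; Re, ke, Inl are assumed placeholder values.
double motor_voltage(double thrust_n,
                     double kT = 1.6351e-5,   // N/(rad/s)^2, Table 2
                     double kQ = 2.7033e-7,   // N*m/(rad/s)^2, Table 2
                     double Re = 0.1,         // ohm (assumed)
                     double ke = 0.005,       // V*s/rad (assumed)
                     double Inl = 0.5) {      // A (assumed)
    double omega_sq = thrust_n / kT;          // Omega^2 from T = kT*Omega^2
    double omega = std::sqrt(omega_sq);
    // Eq. (40): V = (kQ*Omega^2/ke + Inl)*Re + ke*Omega
    return (kQ * omega_sq / ke + Inl) * Re + ke * omega;
}
```

The required voltage grows with commanded thrust through both the back-EMF term ($k_e\Omega$) and the resistive term, which is why lag in the motor/rotor dynamics limits how aggressively the moments can be synthesized.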
## 4 Results and Discussion
### 4.1 Numerical Simulation Results.
The complete RFV equations of motion from Ref. [24] are simulated with σ and α controlled in closed loop using the control systems developed in Sec. 2.3 and the control allocation algorithm developed in Sec. 2.4. The simulation uses MATLAB’s ode45 function (fourth-order Runge–Kutta with variable time-step) to numerically integrate (1)–(3), along with expressions for $θ˙1$ and $θ˙2$ from Ref. [24] as follows:
$$\dot{\theta}_1 = \frac{1}{R}\left(-\dot{y}^F - L\dot{\sigma} - R\dot{\alpha}\right), \qquad \dot{\theta}_2 = \frac{1}{R}\left(-\dot{y}^F + L\dot{\sigma} - R\dot{\alpha}\right)$$
(41)
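As a consistency check, substituting the wheel rates of Eq. (41) back into the odometry relation of Eq. (38) recovers $y˙F$ exactly when $ωIB,xB$ equals $α˙$. That equality is a flat-ground assumption made for this sketch only, and the function name is illustrative; R and L are the actual-hardware values from Table 2.

```cpp
#include <cmath>

// Consistency sketch of Eqs. (38) and (41): compute the wheel rates from
// the flat-output rates via Eq. (41), then recover y_dot^F via Eq. (38).
// Assumption (for this check only): on flat ground, omega_IB,x = alpha_dot.
double roundtrip_ydot(double ydot, double sigma_dot, double alpha_dot,
                      double R = 0.348, double L = 0.262) {  // Table 2
    double th1d = (-ydot - L * sigma_dot - R * alpha_dot) / R;  // Eq. (41)
    double th2d = (-ydot + L * sigma_dot - R * alpha_dot) / R;  // Eq. (41)
    double wx = alpha_dot;                                      // assumption
    return -R * (0.5 * (th1d + th2d) + wx);                     // Eq. (38)
}
```

The σ̇ terms cancel in the average of the two wheel rates, so only the common-mode wheel rotation and the body roll rate contribute to $y˙F$.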
$Fz,desB=MyF=0$ for these simulations, indicating the net force on the RFV should be minimized, and no moment is applied that would cause the RFV to rotate about the $F$ frame $y^$-axis. Rather, only the outputs of the σ and α controllers ($MzF$ and $MxF$, respectively) are generated. A block diagram of the simulated system appears in Fig. 6.
Fig. 6
Simulation parameters are displayed in Table 2. μrr, μvisc, and CD are coefficients of rolling resistance, viscous damping, and aerodynamic drag, respectively, which are manifested in $Dy,Dσ$, and $Dα$ [24].
To demonstrate the robustness of the sliding mode control system to parameter uncertainty, the model parameters used to simulate the RFV system are intentionally varied from those used to design the α and σ controllers, the latter being based on the actual hardware RFV parameters. Specifically, the simulated values are varied by plus or minus 25%.
The sliding mode controller parameters are listed in Table 3. The state feedback gains are obtained via pole placement, and the uncertainty bounds are based on parameter estimates. a is chosen based on empirical observations of chattering and response time.
Table 3
Control parameters

| Parameter | σ Controller | α Controller |
| --- | --- | --- |
| η (s−2) | 1 | 1 |
| K0 (s−2) | 0 | 5.25 |
| K1 (s−1) | 4 | 5 |
| a | 1.5 | 1 |
| κ1\|max | 0.0054 (kg-m²) | 0.0027 (kg-m²) |
| κ2\|max | 0.05 (kg-m) | 0.3 (N-m) |
| κ3\|max | 0.2 (N-m) | 0.15 (N-m) |
| (κ4/κ̂4)max | 3 | 1.73 |
| δ\|max | 0 | 1 |
Figure 7 shows the simulated response of the α control system to a step command. The desired value of α is αdes = − π/4, $α˙des=0$. Shown also are the switching gain $βα$ and sliding variable $sα$. Figure 7(a) illustrates that α converges to αdes despite the modeling uncertainty. Figure 7(c) shows that the control $MxF$ is initially positive to drive α to αdes, but rapidly decreases while $sα$ approaches zero. Once $sα$ is sufficiently close to zero, $MxF$ begins to increase according to the sliding mode dynamics of (12) and the estimated nonlinear dynamics appearing in the last three terms of (20). The estimated nonlinear dynamics are also primarily responsible for the positive steady-state value of $MxF$ required to support the offset center of mass (because h ≠ 0). However, the uncertainty in the model means that these terms cannot exactly compensate for the offset center of mass, and the difference is made up by the K0e1 term in (20). The net result is a control law that rapidly accelerates and then rapidly decelerates α to achieve the desired orientation. Figure 7(b) shows that $sα$ is initially driven toward zero, at which point the dynamics prescribed by (12) drive the error system of (8) toward zero.
Fig. 7
As discussed in Sec. 2.3, the steady-state value of s is nonzero due to the transition region introduced by the hyperbolic tangent function in (20). Figure 7(d) illustrates how the value of the switching gain $βα$ changes during the step response. The transient behavior of $βα$ is dominated by the magnitude of the velocity error $|e2|=|α˙des−α˙|=|α˙|$.
Figure 8 shows the time responses of $σ,MzF,sσ$, and $βσ$ to a step command with σdes = π/2 and $σ˙des=0$. The σ control system behaves similarly to the α control system, with the exception that no steady-state value of $MzF$ is necessary to maintain σ = σdes. The error integrator is unused (K0 = 0), resulting in no overshoot.
Fig. 8
### 4.2 Empirical Results.
The α and σ controllers and control allocation algorithm described in Secs. 2.3 and 2.4, respectively, are deployed onto the hardware embodiment of the RFV described in Sec. 3. As with the simulations, the block diagram of Fig. 6 illustrates the entire closed-loop system. Experimental data are collected while operating under closed-loop α and σ control. In all cases, $MyF=0$ and $Fz,desB$ is controlled manually.
#### 4.2.1 Empirical Control System Results.
Figure 9(a) shows the α response to a step command αdes = π/4 for both the actual and simulated RFVs, while Fig. 9(b) shows the associated value of $MxF$. Less aggressive control parameters are chosen compared to the simulation from Sec. 4.1 (K0 = 1.75, K1 = 4), because the simulation assumes that $M⇀F$ can be generated instantaneously via the rotor thrusts, but in reality, this is not possible; the rotors produce thrust approximately proportional to Ω², and Ω is governed by the motor/rotor electromechanical dynamics. This introduces lag into the control system, which when combined with an aggressive switching gain β can result in undesirable ringing. Additionally, the dynamics of the motors are uncontrolled; an open-loop voltage is applied to the motors according to (40), rather than controlling Ω in closed loop. Therefore, uncertainty in the motor parameters and kQ also affects the ability to accurately synthesize $M⇀F$. Nonetheless, stability of the actual RFV is achieved with an acceptable tradeoff between response time and ringing. Differences between the actual and simulated plots of α and $MxF$ are attributed to the uncertainty in synthesizing $M⇀F$ and the uncertainty in the model parameters. For example, the difference in the steady-state values of $MxF$ for the actual and simulated cases in Fig. 9(b) could be due to uncertain vehicle mass, uncertain center of mass location, the inability to accurately synthesize $MxF$, or any combination of the three. However, stability and performance are demonstrated despite uncertainty in both the model and parameters.
Fig. 9
Figure 10(a) illustrates the α control system tracking a sine wave (π/4 radians amplitude, 0.4 Hz frequency) with $Fz,desB=1.5N$. The sliding mode controller exhibits good tracking performance despite uncertainty in the RFV parameters and demonstrates close agreement with simulation results.
Fig. 10
Figure 10(b) shows the σ controller’s ability to track a time-varying trajectory on flat ground while the α controller is simultaneously active with αdes = 0 (α data not shown). The control parameters used are those listed in Table 3. Changes in the rolling resistance of each wheel (which are not accounted for in the model) are observed during operation. However, the controller is able to track the commanded σ angle despite the modeling uncertainty.
#### 4.2.2 Empirical Control Allocation Results.
Efficacy of the control allocation algorithm is demonstrated on the prototype RFV by commanding a sinusoidal $MzF$ while controlling α in closed-loop at varying angles.
These configurations force the control allocation solution to be constrained while also varying which rotors are subject to the constraint. For this test, $MxF$ is the output of the α control system, $MyF=0$, and $M_z^F = 0.03\cos(2\pi f_{sine}t)$, where fsine = 0.25 Hz. The desired $M⇀F$ is transformed to $M⇀B$ using (21). Additionally, $Fz,desB=0$, which guarantees that the control allocation solution is always constrained.
$M⇀B$ and $Fz,desB$ are input to the control allocation algorithm with Tmin = 0. The outputs of the control allocation algorithm are the commanded rotor thrusts, each of which is recorded over a complete cycle of $MzF$ for each α value. The results are shown in Fig. 11. The time series plots on the left (Figs. 11(a), 11(c), and 11(e)) show the commanded $MzF$ (dotted line) and the resulting rotor thrusts as a function of time. The graphics to the right of each plot (Figs. 11(b), 11(d), and 11(f)) indicate the physical configuration of the RFV and the relative magnitude of the commanded rotor thrusts at the time denoted by the dashed vertical line on the corresponding plot.
Fig. 11
Figure 11(a) shows the RFV when α = 0. First note that when $MzF=0$ (at approximately t = 1.3 s and 3.3 s), T1 = T2 = 0 while T3 = T4 ≈ 0.2 N. At these points, the only nonzero body moment is $MxB$, which is determined exclusively by the α control system. The values of T3 and T4 are nonzero because the RFV center of mass is slightly behind point b, as indicated in Fig. 11(b). As a result, the α controller determines that a negative $MxB$ must be applied to maintain α = 0, leading to T3 = T4 > 0. This explains why the T3 and T4 curves appear somewhat noisy, while the T1 and T2 curves are smooth; T3 and T4 are responding to the closed-loop α controller, whereas T1 and T2 are only needed to enforce the $MzF$ command, which is prescribed to be a smooth sinusoid. Alternatively, were the center of mass in front of point b, the situation would be reversed; the control allocation algorithm would determine that T1 and T2 should be driven primarily by the α controller, and T3 and T4 would appear smooth.
Second, a large, positive $MzF$ is commanded at the time indicated by the dashed vertical line, producing large T2 and T4 thrusts. This creates a moment about the $B$-frame z-axis, which coincides with the $F$-frame z-axis when α = 0. The relative thrusts at this instant are illustrated in Fig. 11(b). Here, the control allocation algorithm has determined that T1 = 0, as the commanded moments $MxB,MyB$, and $MzB$ can be synthesized using just T2, T3, and T4. In fact, an inspection of Fig. 11(a) reveals that at all times one rotor produces zero thrust. Notice that $T4−T2=T3$ always, indicating that the α controller is still generating the required $MxB$ to maintain α = 0.
In Figs. 11(c) and 11(d), α = −π/4, which moves the center of mass even further behind b, requiring increased thrusts (T3 = T4 ≈ 0.4 N) when $MzF=0$. Additionally, the $B$ frame and $F$ frame are no longer aligned, so $MzF$ is synthesized by generating moments about both the $B$-frame z- and y-axes. At the dashed line (where $MzF$ is large and positive), T4 and T2 increase to generate $MzB$. In contrast to the α = 0 case, here $T4−T2>T3$, indicating that there is a nonzero $MyB$ (see Eq. (25)). This explains the slight dip in T3 at the dashed line; T3 decreases beyond its value at $MzF=0$ to help generate $MyB$.
Lastly, in Figs. 11(e) and 11(f), α = −π/2. As before, the center of mass is moved further from point b so that T3 = T4 ≈ 0.45 N when $MzF=0$. Now, $MzF$ is synthesized solely by generating $MyB$. T1 is nonzero at the time indicated by the dashed line on Fig. 11(e), though its value is relatively small compared to the other thrusts required to generate $MzF$ when α = 0 or −π/4 (compare to Figs. 11(b) and 11(d)). In fact, all the thrusts appear smaller when α = −π/2. This is ultimately because $k_T L_r \gg k_Q$, indicating that moments generated via rotor thrust (i.e., $MxB$ and $MyB$) are much greater than moments generated via rotor drag (i.e., $MzB$) for a given Ω (see Eqs. (24) and (25)). Therefore, the thrusts generated as a byproduct of producing $MxB$ or $MyB$ are much less than those required to produce an identical $MzB$. So, if $MzF$ is synthesized primarily via $MxB$ and $MyB$, as it is when α = −π/2, then the thrusts will be smaller than if $MzF$ were synthesized via $MzB$.
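The claim that $k_T L_r$ dominates $k_Q$ can be checked numerically with the Table 2 values: per unit of $\Omega^2$, a rotor's thrust acting at arm length $L_r$ produces roughly 8.5 times more moment than its drag torque contributes about the z-axis.

```cpp
// Numeric check of k_T * L_r >> k_Q using the Table 2 values. Both sides
// carry units of N*m/(rad/s)^2, so the ratio is dimensionless.
constexpr double kT = 1.6351e-5;              // N/(rad/s)^2
constexpr double kQ = 2.7033e-7;              // N*m/(rad/s)^2
constexpr double Lr = 0.14;                   // m
constexpr double moment_ratio = kT * Lr / kQ; // roughly 8.5
```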
As described in Sec. 2.4, $FzB=Fz,desB$ will only be realized if $Fz,desB>Fz,minB$, indicating that the constraints are inactive. Otherwise, the constraints are active and $FzB$ will be set equal to $Fz,minB$. Figure 12 illustrates the control allocation algorithm transitioning between constrained and unconstrained operation while α = 0. Plotted is $FzB$ for two cases: $Fz,desB=0N$ (solid line) and $Fz,desB=1.57N$ (dash-dotted line). As before, $MxF$ is the output of the α control system, $MyF=0$, and $M_z^F = 0.03\cos(2\pi f_{sine}t)$, where fsine = 0.25 Hz. As $MzF$ changes (dashed line), so does $Fz,minB$, which determines whether the constraint is active.
Fig. 12
For the $Fz,desB=0N$ case, the solution is always constrained, and so $FzB=Fz,minB$ always. Note that $Fz,minB>0$ even when $MzF=0$. This is because $MxB$, which is the output of the α controller, is never zero (due to the offset location of the center of mass), and so while $MzF$ may be zero, $M⇀B$ is not. For the $Fz,desB=1.57N$ case, the solution is constrained approximately half of the time. While constrained, $FzB=Fz,minB$ and the solid and dash-dotted lines lie atop one another. However, as $|MzF|$ decreases, so does $Fz,minB$, to the point where $Fz,minB<Fz,desB$. During these periods the solution is unconstrained and $FzB=Fz,desB$, indicated by the sudden flattening of the dash-dotted line at a value of 1.57 N. The dashed vertical lines indicate where the algorithm transitions between constrained and unconstrained operation for the $Fz,desB=1.57N$ case.
## 5 Conclusion
The RFV’s design merits a controller tailored to its unique nonlinear dynamics and flat outputs. Furthermore, the number of difficult-to-measure parameters governing the RFV’s behavior dictates that such a controller be robust to uncertainty. As such, the controller presented herein is both model-based and robust. The efficacy of the controller has been demonstrated through numerical simulations and empirical experiments using custom hardware and software. The novel control allocation algorithm provides a means for actuating the RFV that respects the vehicle constraints while simultaneously minimizing thrust. Indeed, Figs. 11(a), 11(c), and 11(e) illustrate that at all times at least one rotor is producing zero thrust. Such a thrust-minimizing solution is unique to the RFV.
Additionally, this algorithm is computationally lightweight and thus easily integrated into an embedded system. There are several opportunities to improve the performance of the RFV control systems. First, as mentioned in Sec. 4.2.1, closed-loop rotor velocity controllers could improve the response time and reduce ringing in α. Second, RFV velocity control requires a $y˙F$ controller that respects the intermittent coupling of α and $y˙F$. Based on constraint activity (Sec. 2.4), such a controller would determine when independent control of $y˙F$ and α is possible and adjust the control strategy accordingly. This permits optimizing RFV operation for maximum range [16]. Lastly, $MyF$ could be used for active γ damping as described in (22), which could improve stability on uneven terrain and prevent rolling during aggressive maneuvering.
## Acknowledgment
The authors would like to thank Shaphan Jernigan for his work in supporting the hardware development of the RFV.
## Conflict of Interest
There are no conflicts of interest.
## Data Availability Statement
The data sets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.
## Declarations
Ethical Approval: not applicable.
Consent to Participate: not applicable.
Consent to Publish: The coauthors confirm that the work described has not been published before (except in the form of an abstract or as part of a published lecture, review, or thesis); that it is not under consideration for publication elsewhere; that its publication has been approved by all coauthors.
Authors Contributions: S. Atay modeled the rolling/flying vehicle dynamics, designed and simulated the controllers, and was the primary contributing author. M. Bryant and G. Buckner directed the modeling and simulation tasks, provided funding and resources for the work, and provided secondary editing and co-authorship.
Funding: provided by the U.S. Army Research Office and the U.S. Army Special Operations Command under Contract No. W911-NF-13-C-0045.
Competing Interests: not applicable
Availability of data and materials: all data and materials presented in this paper will be made available upon request.
## Nomenclature
### Variables
• a =
width of sliding mode transition region
•
• h =
location of RFV body center of mass (m)
•
• t =
time
•
• x =
position coordinate (m)
•
• y =
position coordinate (m)
•
• z =
position coordinate (m)
•
• G =
gravitational acceleration (m/s2)
•
• L =
RFV characteristic length (m)
•
• R =
wheel radius (m)
•
• V =
A Lyapunov function
•
• W =
vehicle weight
•
• $g⇀$ =
inequality constraint function
•
• $h⇀$ =
equality constraint function
•
• $u⇀$ =
system input vector
•
• $x^$ =
unit vector in x-direction
•
• $y^$ =
unit vector in y-direction
•
• $z^$ =
unit vector in z-direction
•
• fsine =
frequency of sine wave (Hz)
•
• ke =
motor back-EMF constant
•
• CD =
aerodynamic drag coefficient
•
• Fy =
horizontal force in the $y^$-direction (N)
•
• Inl =
motor no-load current (A)
•
• Ip =
motor electrical current
•
• I4x4 =
identity matrix
•
• Lr =
distance from point b to rotor
•
• Re =
motor winding resistance
•
• Vp =
motor voltage
•
• $FyF$ =
horizontal force in the $F$ frame $y^$-direction (N)
•
• c1, c2, …, c10 =
model parameters
•
• $e⇀,e0,e1,e2$ =
error states
•
• $f,f¯$ =
optimization objective function
•
• kT, kQ =
rotor constants
•
• mB, mW =
masses (kg)
•
• r, ri =
control system reference
•
• $s,sα,sσ$ =
sliding manifold
•
• $x⇀,x1,x2,…,x5$ =
control system state vector
•
• $y˙F,y¨F$ =
velocity and acceleration in the $F$ frame $y^$-direction
•
• $z⇀,z1,z2,z3$ =
flat outputs
•
• $A,A¯$ =
control allocation matrix
•
• $Dy,Dα,Dσ$ =
dissipation terms
•
• $FzB,Fz,minB,Fz,desB$ =
body forces in the $B$ frame z-direction (N)
•
• Ix, Iy, Iz =
RFV body principal moments of inertia (kg · m²)
•
• Ixw, Iyw =
wheel principal moments of inertia (kg · m²)
•
• $K⇀,K0,K1$ =
state feedback gains
•
• $M⇀B,MxB,MyB,MzB$ =
body moments (N · m)
•
• $M⇀F,MxF,Mx,eqF,MyF,MzF$ =
$F$ frame moments (N · m)
•
• Qp, Q1, Q2, Q3, Q4 =
rotor reaction torques (N · m)
•
• $T⇀*,Tp*$ =
minimum rotor thrust vector
•
• $T,Tp,T⇀,T1,T2,T3,T4,Tmin$ =
rotor thrusts
•
• α, αdes =
pitch Euler angle
•
• β, $βα,βσ$ =
sliding mode switching gains
•
• γ =
tilt Euler angle
•
• κj|max =
maximum uncertainty in κj
•
• δ|max =
maximum uncertainty in δ
•
• Ωp =
rotor angular velocities (rad/s)
•
• δ, δ1, δ2 =
model uncertainties
•
• ζ =
generic error vector
•
• η =
a positive constant
•
• θ1, θ2 =
wheel rotation angles (rad)
•
• κi, κ1, κ2, κ3, κ4 =
coefficients of nonlinearities
•
• $(κ4/κ^4)max$ =
maximum uncertainty in κ4
•
• $λ⇀eq,λ⇀ineq$ =
Lagrange multipliers
•
• μvisc =
viscous damping coefficient (N · m · s)
•
• μrr =
rolling resistance coefficient
•
• ν =
a fictitious control input
•
• νx, νy, νz =
variables proportional to the desired body moments
•
• $ξ⇀,ξ1,ξ2$ =
states of a generic second-order mechanical system
•
• ρ =
scalar output function
•
• σ, σdes =
yaw Euler angle
•
• ψ1, ψ2 =
nonlinear terms
•
• ϕ =
a stabilizing control law
### Superscripts
• $B$ =
variable resolved in the $B$ (body) reference frame
•
• $F$ =
variable resolved in the $F$ reference frame
•
• $I$ =
variable resolved in the $I$ (inertial) reference frame
## References
1. Choi, S. H., and Zhu, W. K., 2012, "Performance Optimisation of Mobile Robots for Search-and-Rescue," Appl. Mech. Mater., 232, pp. 403–407.
2. Murphy, R. R., , S., and Kleiner, A., 2016, "Disaster Robotics," Springer Handbook of Robotics, B. Siciliano and O. Khatib, eds., Springer, Cham.
3. Bishop, B., Crabbe, F., and Hudock, B., 2005, "Design of a Low-Cost, Highly Mobile Urban Search and Rescue Robot," 19(8), pp. 1–27.
4. Neumann, M., Predki, T., Heckes, L., and Labenda, P., 2013, "Snake-Like, Tracked, Mobile Robot With Active Flippers for Urban Search-and-Rescue Tasks," Ind. Rob., 40(3), pp. 246–250.
5. Santos, J. M., Krajník, T., and Duckett, T., 2017, "Spatio-temporal Exploration Strategies for Long-Term Autonomy of Mobile Robots," Rob. Auton. Syst., 88, pp. 116–126.
6. Kalantari, A., and Spenko, M., 2014, "Modeling and Performance Assessment of the HyTAQ, a Hybrid Terrestrial/Aerial Quadrotor," IEEE Trans. Rob., 30(5), pp. 1278–1285.
7. Takahashi, N., Yamashita, S., Sato, Y., Kutsuna, Y., and , M., 2015, "All-Round Two-Wheeled Quadrotor Helicopters With Protect-Frames for Air-Land-Sea Vehicle (Controller Design and Automatic Charging Equipment)," 29(1), pp. 69–87.
8. Morton, S., and Papanikolopoulos, N., 2017, "A Small Hybrid Ground-Air Vehicle Concept," IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 5149–5154.
9. Kossett, A., and Papanikolopoulos, N., 2011, "A Robust Miniature Robot Design for Land/Air Hybrid Locomotion," Proceedings of IEEE International Conference on Robotics and Automation, pp. 4595–4600.
10. Kawasaki, K., Zhao, M., , K., and Inaba, M., 2013, "MUWA: Multi-field Universal Wheel for Air-Land Vehicle With Quad Variable-Pitch Propellers," Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1880–1885.
11. Boria, F. J., Bachmann, R. J., Ifju, P. G., Quinn, R. D., Vaidyanathan, R., Perry, C., and Wagener, J., 2005, "A Sensor Platform Capable of Aerial and Terrestrial Locomotion," Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, pp. 3959–3964.
12. Sreevishnu, S., Koshy, M., Krishnan, A., and Das, G. P., 2018, "Kinematic Design, Analysis and Simulation of a Hybrid Robot with Terrain and Aerial Locomotion Capability," Proceedings of 3rd International Conference on Design, Analysis, Manufacturing and Simulation (ICDAMS 2018), Vol. 03008, pp. 1–7.
13. Stewart, W., Weisler, W., MacLeod, M., Powers, T., Defreitas, A., Gritter, R., Anderson, M., Peters, K., Gopalarathnam, A., and Bryant, M., 2018, "Design and Demonstration of a Seabird-Inspired Fixed-Wing Hybrid UAV-UUV System," Bioinspir. Biomim., 13(5), p. 056013.
14. Siddall, R., and Kovač, M., 2014, "Launching the AquaMAV: Bioinspired Design for Aerial-Aquatic Robotic Platforms," Bioinspir. Biomim., 9(3), p. 031001.
15. Maia, M. M., , D. A., and Diez, F. J., 2017, "Design and Implementation of Multirotor Aerial-Underwater Vehicles With Experimental Results," IEEE International Conference on Intelligent Robots and Systems, pp. 961–966.
16. Atay, S., Jenkins, T., Buckner, G., and Bryant, M., 2020, "Energetic Analysis and Optimization of a Bi-modal Rolling-Flying Vehicle," Int. J. Intell. Robot. Appl., 4(1), pp. 3–20.
17. Stebler, S., Mackunis, W., and Reyhanoglu, M., 2016, "Nonlinear Output Feedback Tracking Control of a Quadrotor UAV in the Presence of Uncertainty," Proceedings of 14th International Conference on Control, Automation, Robotics and Vision, ICARCV 2016, Phuket, Thailand, Nov. 13–15, pp. 1–6.
18. Mahony, R., Kumar, V., and Corke, P., 2012, "Multirotor Aerial Vehicles: Modeling, Estimation, and Control of Quadrotor," IEEE Robot. Autom. Mag., 19(3), pp. 20–32.
19. L'Afflitto, A., and , K., 2017, "Equations of Motion of Rotary-Wing Unmanned Aerial System With Time-Varying Inertial Properties," J. Guid. Control Dyn., 41(2), pp. 554–559.
20. Lee, T., Leok, M., and Mcclamroch, N. H., 2013, "Nonlinear Robust Tracking Control of a Quadrotor UAV on SE(3)," Asian J. Control, 15(2), pp. 391–408.
21. Schneider, T., Ducard, G., Rudin, K., and Strupler, P., 2012, "Fault-Tolerant Control Allocation for Multirotor Helicopters Using Parametric Programming," International Micro Air Vehicle Conference and Flight Competition (IMAV), Braunschweig, Germany, July.
22. Ducard, G. J. J., and Hua, M.-D., 2012, "Discussion and Practical Aspects on Control Allocation for a Multi-Rotor Helicopter," ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVIII-1/C22, pp. 95–100.
23. Monteiro, J. C., Lizarralde, F., and Hsu, L., 2016, "Optimal Control Allocation of Quadrotor UAVs Subject to Actuator Constraints," Proceedings of the American Control Conference, Boston, MA, July 6–8, pp. 500–505.
24. Atay, S., Buckner, G., and Bryant, M., 2020, "Dynamic Modeling for Bi-modal, Rotary Wing, Rolling-Flying Vehicles," ASME J. Dyn. Syst. Meas. Contr., 142(11), p. 111003.
25. Kalantari, A., and Spenko, M., 2015, "Hybrid Aerial and Terrestrial Vehicle," US Patent No. 9061558B2.
26. Mizutani, S., , Y., Salaan, C. J., Ishii, T., Ohno, K., and , S., 2015, "Proposal and Experimental Validation of a Design Strategy for a UAV With a Passive Rotating Spherical Shell," Proceedings of IEEE International Conference on Intelligent Robots and Systems.
27. Murray, R. M., Rathinam, M., and Sluis, W., 1995, "Differential Flatness of Mechanical Control Systems: A Catalog of Prototype Systems," Proceedings of ASME International Mechanical Engineering Congress and Exposition.
28. Powers, C., Mellinger, D., and Kumar, V., 2015, Handbook of Unmanned Aerial Vehicles, K. Valavanis and G. Vachtsevanos, eds., Springer, Dordrecht.
29. Lu, H., Liu, C., Coombes, M., Guo, L., and Chen, W.-H., 2016, "Online Optimisation-Based Backstepping Control Design With Application to Quadrotor," IET Control Theory Appl., 10(14), pp. 1601–1611.
30. L'Afflitto, A., Anderson, R. B., and , K., 2018, "An Introduction to Nonlinear Robust Control for Unmanned Quadrotor Aircraft: How to Design Control Algorithms for Quadrotors Using Sliding Mode Control and Adaptive Control Techniques," IEEE Control Syst. Mag., 38(3), pp. 102–121.
31. Khalil, H., 2002, Nonlinear Systems, 3rd ed., Prentice Hall.
32. Slotine, J.-J. E., and Li, W., 1991, Applied Nonlinear Control, Prentice Hall International Inc.
33. Bodson, M., 2008, "Evaluation of Optimization Methods for Control Allocation," J. Guid. Control. Dyn., 25(4), pp. 703–711.
34. Johansen, T. A., and Fossen, T. I., 2013, "Control Allocation—A Survey," Automatica, 49(5), pp. 1087–1103.
35. Reyhanoglu, M., Damen, R., and Mackunis, W., 2017, "Observer-Based Sliding Mode Control of a 3-DOF Hover System," Proceedings of 14th International Conference on Control, Automation, Robotics and Vision, ICARCV 2016, Phuket, Thailand, Nov. 13–15, pp. 1–6.
36. Arora, J. S., 2004, Introduction to Optimum Design, Elsevier, London.
|
{}
|
# help create events w/ boost
## Recommended Posts
I'm new to boost and what I'm doing seems very simple, but I can't get it. I want to store 3 events onto a priority_queue and have them execute n seconds from the time they were created, n is a randomly generated amount of time between 0 and 10 seconds.
// create container that holds function objects
priority_queue<boost::function<void (Event)> > pq;
// create 3 events and place in priority_queue
boost::bind(destroy_target, placeholders::_1, 'pq');
boost::bind(destroy_target, placeholders::_1, 'pq');
boost::bind(destroy_target, placeholders::_1, 'pq');
I think I now have 3 function objects on the queue, but I don't know how to pull the Event object off the queue and check if its timestamp has expired, and if so execute the function object that was stored on the queue.
// get the first event off the priority_queue, and if expired, execute destroy_target().
Event e = pq.top()(Event());
// this is the function I eventually want my event to execute
void destroy_target() { ... }
Any help is greatly appreciated.
##### Share on other sites
Look more closely at your documentation for priority_queue. They know nothing of time.
Elements of a priority_queue are sorted according to std::less<>, which by default uses the < operator.
So, you should create a wrapper around function<void (Event)> that also holds the time of construction and then define an operator< that compares these time stamps.
##### Share on other sites
Also, you're using boost function and boost bind incorrectly. The signature boost::function<void (Event)> means a function which returns void, and takes a single argument of type event.
When you bind your functions, you bind 'pq' to the first parameter for the function destroy_target. This won't work since:
1) 'pq' doesn't make sense, single quotes are for characters (eg. 'a'), double quotes are for strings (eg. "abc")
2) You don't do anything with the returned bound function
3) destroy_target doesn't actually take any arguments, so you can't bind anything to it!
Also, You try to get the first element of the priority_queue and execute the returned boost::function, passing to it a default constructed Event(). But you try to assign the return value of this function to a new Event e, even though the function returns void!
Your comment says that it's supposed to execute destroy_target(), but that's impossible since destroy_target doesn't take any arguments whereas the boost::function in the priority_queue says that the function takes an Event as an argument.
In short, the code doesn't really make sense (or is very horribly broken). I suspect you've tried to type out your code from memory and messed it up - don't do this. Copy and paste the exact code you're trying to compile. We can help, but we need more information first.
##### Share on other sites
Quote:
Original post by Sc4Freak: In short, the code doesn't really make sense (or is very horribly broken). I suspect you've tried to type out your code from memory and messed it up - don't do this. Copy and paste the exact code you're trying to compile. We can help, but we need more information first.
Yeah, it was horribly broken and didn't make sense. I'm having a hard time understanding, but I've gotten a bit further and think this is more correct, but a few questions ...
// Is storing a function object onto a queue like below allowable? I'm getting
// errors I can't even begin to understand (posted at bottom).
class Scheduler
{
private:
    typedef boost::function<void ()> event;
    std::priority_queue<event> pq;
public:
    void at(time_t when, event const& what);
};

// Here I create 3 function objects (events) and put them on a queue.
void MyApp::initTimer()
{
    time_t now = current_time();
    for (int i = 0; i < 3; i++)
    {
        event e = bind(&MyApp::print, this);
        scheduler->at(now + i, e);
    }
}

// But here I'm losing the time when this event is to be executed.
// How do I store the time so that I can check it in my run() and see
// if time has expired so that it can be executed?
void Scheduler::at(time_t when, event const& what)
{
    pq.push(what);
}

// This function gets run in every frame of my game. I was hoping to just
// pull off the top() of the queue, see if time expired, and if so execute the
// function bound to the event. But I think my at() is messed up.
scheduler->run();
And the error I get compiling ...
/bin/sh ../../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I../../../../dnr/src/libs/events -I../../.. -I../../../../dnr/include -I/usr/local/include/OGRE -g -O2 -MT Scheduler.lo -MD -MP -MF .deps/Scheduler.Tpo -c -o Scheduler.lo ../../../../dnr/src/libs/events/Scheduler.cpp
g++ -DHAVE_CONFIG_H -I. -I../../../../dnr/src/libs/events -I../../.. -I../../../../dnr/include -I/usr/local/include/OGRE -g -O2 -MT Scheduler.lo -MD -MP -MF .deps/Scheduler.Tpo -c ../../../../dnr/src/libs/events/Scheduler.cpp -fPIC -DPIC -o .libs/Scheduler.o
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/include/g++-v4/bits/stl_function.h: In member function 'bool std::less<_Tp>::operator()(const _Tp&, const _Tp&) const [with _Tp = boost::function<void ()(), std::allocator<void> >]':
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/include/g++-v4/bits/stl_heap.h:279: instantiated from 'void std::__adjust_heap(_RandomAccessIterator, _Distance, _Distance, _Tp, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<boost::function<void ()(), std::allocator<void> >*, std::vector<boost::function<void ()(), std::allocator<void> >, std::allocator<boost::function<void ()(), std::allocator<void> > > > >, _Distance = long int, _Tp = boost::function<void ()(), std::allocator<void> >, _Compare = std::less<boost::function<void ()(), std::allocator<void> > >]'
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/include/g++-v4/bits/stl_heap.h:404: instantiated from 'void std::make_heap(_RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<boost::function<void ()(), std::allocator<void> >*, std::vector<boost::function<void ()(), std::allocator<void> >, std::allocator<boost::function<void ()(), std::allocator<void> > > > >, _Compare = std::less<boost::function<void ()(), std::allocator<void> > >]'
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/include/g++-v4/bits/stl_queue.h:367: instantiated from 'std::priority_queue<_Tp, _Sequence, _Compare>::priority_queue(const _Compare&, const _Sequence&) [with _Tp = boost::function<void ()(), std::allocator<void> >, _Sequence = std::vector<boost::function<void ()(), std::allocator<void> >, std::allocator<boost::function<void ()(), std::allocator<void> > > >, _Compare = std::less<boost::function<void ()(), std::allocator<void> > >]'
../../../../dnr/src/libs/events/Scheduler.cpp:11: instantiated from here
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/include/g++-v4/bits/stl_function.h:227: error: no match for 'operator<' in '__x < __y'
make[4]: *** [Scheduler.lo] Error 1
##### Share on other sites
The error is saying that there's no less than operator (operator <) for boost::function. This is correct, as it doesn't really make sense for a function to be "less than" another.
the_edd is right here:
Quote:
Original post by the_edd: Elements of a priority_queue are sorted according to std::less<>, which by default uses the < operator.
In other words, you can't have a priority_queue of boost::function's unless you provide a less than comparison operator. In your case, I'm guessing you don't really want a priority_queue here. priority_queue is sorted in that the top of the queue is the one with the largest value. I think you're looking more for std::stack or a regular std::queue, depending on if you want FIFO or LIFO behaviour.
As for storing the time, you can have a struct which stores a time and a function. Like so:
class Scheduler
{
private:
    struct Event
    {
        time_t when;
        boost::function<void ()> what;
    };
    std::stack<Event> events;
public:
    void at(time_t when, const boost::function<void ()>& what);
    void run();
};

void MyApp::initTimer()
{
    time_t now = current_time();
    for (int i = 0; i < 3; i++)
    {
        scheduler->at(now + i, bind(&MyApp::print, this));
    }
}

void Scheduler::at(time_t when, const boost::function<void ()>& what)
{
    Event e;
    e.when = when;
    e.what = what;
    events.push(e);
}

void Scheduler::run()
{
    if (events.empty())
        return;

    Event e = events.top();
    if (e.when <= current_time()) // If it's time to run the event
    {
        e.what();       // Execute the event function
        events.pop();   // Remove the event we just executed
    }
}
The code is untested, but you get the idea.
##### Share on other sites
Quote:
Original post by Sc4Freak: In your case, I'm guessing you don't really want a priority_queue here. priority_queue is sorted in that the top of the queue is the one with the largest value. I think you're looking more for std::stack or a regular std::queue, depending on if you want FIFO or LIFO behaviour.
Thanks, this is very helpful, but I'm thinking I need a priority_queue and prioritize from the event w/ the least amount of time, and once the current_time is > than the item on the priority_queue w/ the least amount of seconds, then it gets executed (time's expired, exec event). I will always be looking for the event w/ the lowest amount of time in the queue because only that one can be executed next. Is my thinking correct there?
##### Share on other sites
Quote:
Original post by Wizumwalt
Quote:
Original post by Sc4Freak: In your case, I'm guessing you don't really want a priority_queue here. priority_queue is sorted in that the top of the queue is the one with the largest value. I think you're looking more for std::stack or a regular std::queue, depending on if you want FIFO or LIFO behaviour.
Thanks, this is very helpful, but I'm thinking I need a priority_queue and prioritize from the event w/ the least amount of time, and once the current_time is > than the item on the priority_queue w/ the least amount of seconds, then it gets executed (time's expired, exec event). I will always be looking for the event w/ the lowest amount of time in the queue because only that one can be executed next. Is my thinking correct there?
OK, if that's what you want to do then all you have to do is provide a less than comparison operator in the Event structure. One catch: std::priority_queue puts the "largest" element on top, so to get the event with the earliest time at top() the comparison has to be reversed:
class Scheduler
{
private:
    struct Event
    {
        time_t when;
        boost::function<void ()> what;

        // Reversed so that top() is the event with the smallest 'when'.
        bool operator< (const Event& rhs) const
        {
            return when > rhs.when;
        }
    };
    std::priority_queue<Event> events;
public:
    void at(time_t when, const boost::function<void ()>& what);
    void run();
};
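For anyone following along, here is a self-contained sketch of the same idea in modern C++. It is illustrative, not the thread's exact code: std::function stands in for boost::function, the timestamp is a plain integer, and the Scheduler/at/run names just mirror the thread.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// A scheduled event: fires once its timestamp has been reached.
struct Event {
    long when;                    // timestamp (seconds, or any monotonic unit)
    std::function<void()> what;   // callback to execute

    // Reversed comparison: std::priority_queue puts the "largest" element
    // on top, so this makes top() the event with the EARLIEST time.
    bool operator<(const Event& rhs) const { return when > rhs.when; }
};

class Scheduler {
    std::priority_queue<Event> events;
public:
    void at(long when, std::function<void()> what) {
        events.push(Event{when, std::move(what)});
    }
    // Call once per frame with the current time; runs every due event,
    // earliest first.
    void run(long now) {
        while (!events.empty() && events.top().when <= now) {
            Event e = events.top();
            events.pop();
            e.what();  // execute after pop, in case the callback schedules more
        }
    }
};
```

For example, scheduling callbacks at times 5, 1 and 3 and then calling run(3) fires the events stamped 1 and 3 (in that order), while the one stamped 5 stays queued until run is called with a time of 5 or later.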
# Density function and expectation of a random variable
Random variable $X$ has exponential distribution $\mathcal E(\lambda)$ with probability $0.3$, and distribution given by density function $f_2(x)=\frac{1}{2}e^{-|x+1|},\forall x\in\mathbb R$ with probability $0.7$. Find the density function and the expectation of the random variable $X$.
First, I don't understand why we are given the probabilities $0.3$ and $0.7$.
Density function of an exponential distribution is given by $f(x)=\lambda e^{-\lambda x},x\ge 0,\lambda>0$. This is the density function for the first case. Expectation of $X$ for the first case is $$E(X)=\int_0^{+\infty}x\lambda e^{-\lambda x}\,dx=\frac{1}{\lambda}$$
In the second case, we are already given the density function $f_2(x)$. $$|x+1| = \begin{cases} x+1, & x+1\ge 0, & x\ge -1\\ -(x+1), & x+1< 0, & x<-1 \end{cases}$$ $$f_2(x) = \begin{cases} \frac{1}{2}e^{x+1}, & x<-1 \\ \frac{1}{2}e^{-(x+1)}, & x\ge -1 \end{cases}$$
Now, I am not sure how to set the limits of integration for evaluation of the expectation. $$E(X)=\int_{-\infty}^{-1}\frac{1}{2}xe^{x+1}dx+\int_{-1}^{+\infty}\frac{1}{2}xe^{-(x+1)}dx=-1+0=-1$$
Is this correct?
• This is a mixed random variable...
– PiE
May 12 '17 at 23:12
• @PMF, Could you elaborate on that? May 12 '17 at 23:13
• It looks like you have the right work for the expectation in the $f_2$ case. Now you just add $0.3\cdot\frac{1}{\lambda} + 0.7\cdot(-1)$. May 12 '17 at 23:16
• I mean a "mixture" distribution...
– PiE
May 12 '17 at 23:24
• @PMF, Do you mean that my solution is not correct? May 12 '17 at 23:25
This is a mixture distribution (if I'm understanding your question correctly).
$$f_1(x) = \lambda e^{-\lambda x}$$
and
$$f_2(x) = \frac{e^{-\left| x+1\right| }}{2}$$.
Then, the density function is
$$g(x) = 0.3 f_1(x) + 0.7 f_2(x) = 0.3 \lambda e^{-\lambda x} + 0.7\frac{e^{-\left| x+1\right| }}{2}$$.
To calculate $E(X)$, we follow first principles noting the respective domains of support, so we get:
$$E(X) = 0.3 \int_0^{\infty}x\lambda e^{-\lambda x}\,dx + 0.7 \int_{-\infty}^{\infty} x \,\frac{e^{-\left| x+1\right| }}{2}\,dx=\frac{0.3}{\lambda }-0.7$$
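As a check, the two integrals can be evaluated separately (the second via the substitution $u = x+1$, since $f_2$ is a Laplace density centred at $-1$, so the $u$-integral splits into an odd part and a normalized density):

$$\int_0^{\infty}x\lambda e^{-\lambda x}\,dx=\frac{1}{\lambda},\qquad \int_{-\infty}^{\infty}x\,\frac{e^{-|x+1|}}{2}\,dx=\int_{-\infty}^{\infty}(u-1)\,\frac{e^{-|u|}}{2}\,du=0-1=-1,$$

which gives $E(X)=0.3\cdot\frac{1}{\lambda}+0.7\cdot(-1)=\frac{0.3}{\lambda}-0.7$.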
• Thanks, everything is clear. May 13 '17 at 0:15
• In the last integral, you have forgotten to multiply the function with $x$. May 13 '17 at 0:22
• Thanks for catching the typo.
– PiE
May 13 '17 at 0:25
# Math Help - need help to sketch the curve
1. ## need help to sketch the curve
Would you sketch the curve with the polar equation r = 2cos(2theta) and express the equation in rectangular coordinates? That is what must be represented.
2. Originally Posted by rcmango
Would you sketch the curve with the polar equation r = 2cos(2theta) and express the equation in rectangular coordinates? That is what must be represented.
I've attached the graph of the curve.
As you easily can see you'll need 4 equations to describe this curve by the usual x,y-coordinates.
3. $r = 2 \cos2\theta$
$r^2 = x^2 + y^2$
...for $r>0~,~~r = + \sqrt{x^2+y^2}$
...for $r<0~,~~r = - \sqrt{x^2+y^2}$
$\tan\theta = \frac{y}{x}$
...for $-\frac{\pi}{2}<\theta<\frac{\pi}{2}~,~~\theta = \arctan(\frac{y}{x})$
...for $\frac{\pi}{2}<\theta<\frac{3\pi}{2}~,~~\theta = \arctan(\frac{y}{x})+\pi$
Well, if we look carefully at $\theta$..
$\theta = \arctan(\frac{y}{x})$
$\theta = \arctan(\frac{y}{x})+\pi$
$r = 2 \cos(2\theta)$
$r = 2 \cos(2(\arctan(\frac{y}{x})+\pi))$
$r = 2 \cos(2\arctan(\frac{y}{x})+2\pi))$
which is equal to $r = 2 \cos2(\arctan(\frac{y}{x}))$
That means, $\theta = \arctan(\frac{y}{x})$ and $\theta = \arctan(\frac{y}{x})+\pi$ are the same. Then, using one of them is enough.
So, we have
$r = 2 \cos2\theta$
...for $r>0~,~~r = + \sqrt{x^2+y^2}$
...for $r<0~,~~r = - \sqrt{x^2+y^2}$
$\theta = \arctan(\frac{y}{x})$
Placing them will give us two functions:
$\sqrt{x^2+y^2} = 2\cos(2\arctan(\frac{y}{x}))$
$-\sqrt{x^2+y^2} = 2\cos(2\arctan(\frac{y}{x}))$
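As an aside (not from the original thread): the two branches can be merged into a single implicit equation. Using the double-angle identity $\cos 2\theta = \frac{x^2-y^2}{r^2}$ (valid for $r \neq 0$) and multiplying $r = 2\cos 2\theta$ through by $r^2$ gives

$$r^3 = 2(x^2-y^2) \;\Longrightarrow\; (x^2+y^2)^{3/2} = 2(x^2-y^2),$$

and squaring both sides removes the sign distinction between the $r>0$ and $r<0$ cases, giving the rectangular form of the full four-petaled rose:

$$(x^2+y^2)^3 = 4(x^2-y^2)^2.$$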
# Flow equation for system of coupled tanks
Consider the following system of interconnected tanks:
The two tanks have cross-section areas $$A_1$$ and $$A_2$$ respectively and the levels of the liquid are denoted by $$h_1$$ and $$h_2$$ respectively.
The liquid can run from one vessel into the other through a tube of cross-section area $$a$$, which is significantly smaller than $$A_1$$ and $$A_2$$.
Let us denote the density of the liquid by $$\rho$$ and the gravitational acceleration by $$g$$.
I need to derive a dynamical model of this system that describes the evolution of $$h_1$$ and $$h_2$$ and I see in various publications (example) that
$$F_1 = \rho a \sqrt{2g(h_1-h_2)}.\tag{*}$$
I would like to use Bernoulli's principle to derive this formula. I will assume that the head loss in the pipe due to friction is negligible.
From Bernoulli's principle between a point $$x$$ on the surface of the first tank and a point $$y$$ at its exit
we have
$$P_{atm} + \rho g h_1 = P_y + \frac{1}{2}\rho v_y^2\tag{1a}$$
Similarly, in Bernoulli's principle in the second tank gives
$$P_{atm} + \rho g h_2 = P_{y'} - \frac{1}{2}\rho v_{y'}^2\label{1b}\tag{1b}$$
I am not sure about Equation \eqref{1b} - I used the negative sign because I used the convention that a positive $$v_y$$ means that the liquid flows from the first tank into the second one, so the second tank gains kinetic energy.
Question 1. Can you please clarify whether this is correct?
From point $$y$$ to $$y'$$, Bernoulli's equation gives
$$P_{y} + \frac{1}{2}\rho v_{y}^2 = P_{y'} + \frac{1}{2}\rho v_{y'}^2\tag{1c}$$
Because of the mass balance equation between the two ends of the tube (assuming incompressible flow), it is $$v_y = v_{y'}$$.
The mass balance between the two tanks is
$$\rho A_1 \dot{h}_1 + \rho A_2 \dot{h}_2 = 0\tag{2}.$$
Lastly, we know that
$$F_1 = \rho a v_y,$$
so it suffices to show that $$v_y = \sqrt{2g(h_1-h_2)}$$.
Question 2. I have tried to combine Equations (1a-1c) and (2) to derive Equation (*), but did not succeed.
• It's correct but needlessly long-winded. Simply use Torricelli's Law but slightly modified and immediately get $v = \sqrt{2g(h_1-h_2)}$ for the flow speed between the tanks. This is essentially a conversion of potential-energy-to-kinetic-energy-problem. – Gert Apr 24 '19 at 22:18
• @Gert thank you for your comment. It sounds reasonable, but could you post a complete answer? By the way, I am also concerned about the fact that Bernoulli's principle assumes that the flow is steady, but this one is not (the speed changes with time as the levels change). – Pantelis Sopasakis Apr 24 '19 at 22:39
• "the speed changes with time as the levels change" Ah, I was wondering about that. In that case the system behaves like an oscillator. Would you like the solution to that? – Gert Apr 24 '19 at 22:50
• @Gert Yes, I'd like to see the solution. I suspect that if the tanks are large, the deceleration of the flow will be negligible, so we'll be able to use Toricelli's law and the level dynamics will be described by the ODE $A_1\dot{h}_1 = -a\sqrt{2g(h_1-h_2)}$ and $A_2\dot{h}_2 = a\sqrt{2g(h_1-h_2)}$. – Pantelis Sopasakis Apr 24 '19 at 22:56
• It's a bit late here now to formulate an answer immediately. For now just look at this very similar problem: physics.stackexchange.com/questions/222179/… – Gert Apr 24 '19 at 23:01
the speed changes with time as the levels change
This is a crucial bit of information because it may be tempting to believe the flow will stop when the tank levels become equal but due to inertia this is not the case. Instead the system will enter a simple harmonic oscillation.
Firstly a bit of tedious algebra. Determine the equilibrium level $$h_0$$ (both tanks at same level), from total volume considerations.
$$h_1A_1+h_2A_2=h_0(A_1+A_2)$$
$$h_0=\frac{h_1A_1+h_2A_2}{A_1+A_2}$$
Now find the volume $$V$$ above the equilibrium level $$h_0$$ for any $$h_1$$, $$h_2$$:
$$V=(h_1-h_0)A_1+(h_0-h_2)A_2$$
with:
$$h_2=\frac{h_0(A_1+A_2)-h_1A_1}{A_2}$$
$$V=(h_1-h_0)A_1+\left(h_0-\frac{h_0(A_1+A_2)-h_1A_1}{A_2}\right)A_2$$
$$V=(h_1-h_0)A_1+h_0A_2-h_0(A_1+A_2)+h_1A_1$$
$$V=2h_1A_1-2h_0A_1$$
Its mass $$m$$ is, with density $$\rho$$:
$$m=\rho(2h_1A_1-2h_0A_1)$$
So the net force acting on the total mass $$M$$ is:
$$mg=\rho g(2h_1A_1-2h_0A_1)$$
The equation of motion, with $$M$$ the total mass, is:
$$Ma+\rho g(2h_1A_1-2h_0A_1)=0$$
where:
$$a=\frac{\mathrm{d}^2h_1}{\mathrm{d}t^2}=\ddot{h}_1$$
Use the following substitution:
$$y(t)=\rho g(2h_1A_1-2h_0A_1)$$
$$\dot{y}=2\rho gA_1\dot{h}_1$$
$$\ddot{y}=2\rho gA_1\ddot{h}_1$$
$$\ddot{y}+\frac{2\rho gA_1}{M}y=0$$
Set:
$$\omega^2=\frac{2\rho gA_1}{M}$$
$$\Rightarrow y=y_0\cos(\omega t+\phi)$$
where at $$t=0$$:
$$y_0=\rho g(2h_1(0)A_1-2h_0A_1)$$
and if $$\dot{y}(0)=0$$, then $$\phi=0$$.
Note that this derivation only works for completely inviscid liquids. Where there's viscosity, there are friction losses and thus damping.
• Thank you for the answer. I wonder why the cross-section area of the hole does not appear in your solution? For example, if the second vessel were empty, we would have $F = \rho A_{\mathrm{hole}} \sqrt{2gh}$. It's also not clear to me how you used Newton's second law of motion. We have a system of changing mass, so there should be a term of the form $\dot{m}$. – Pantelis Sopasakis Apr 25 '19 at 17:29
• Let me rephrase my questions: (i) Under what assumptions can we derive Equation (*) (if it's at all correct), (ii) Can we use Bernoulli's equations? (iii) Should we resort to Euler's equation for non-steady flows? – Pantelis Sopasakis Apr 25 '19 at 17:44
• $F_1 = \rho a \sqrt{2g(h_1-h_2)}.\tag{*}$ is only true if $h_1-h_2=\text{Constant}$ OR if $h_2=0$. But it must be possible to develop a 'dynamic' version of it. In dynamic conditions, you need to apply Euler's version of Bernoulli. It also bothers me that in my derivation $a$ (the hole or pipe diameter) doesn't feature, I'm looking into that now. I will also clarify the EoM a little. – Gert Apr 25 '19 at 19:56
• I'm not able to derive (*) using Bernoulli's principle (unless I am justified to assume that $P_{y'}\approx \rho g h_2 + P_{atm}$). By the way, I found this article scielo.br/… - the authors propose the use of Euler's equation and the mass balance equation for each tank. I feel however that they don't justify their assumptions much. – Pantelis Sopasakis Apr 25 '19 at 20:04
• That's a great reference and confirms some of my derivation (although mine is simpler). They too find a SHO where $a$ doesn't feature. It doesn't feature because we assume a perfectly inviscid liquid ('ideal liquid'). Such a liquid experiences no resistance to flow at all: pipe diameter or length have no handle on it whatsoever. This is of course unrealistic. A real system would be damped due to viscous friction. If you're interested in 'real world' system of communicating vessels (the reference doesn't cover it either) then I suggest you pose another question, with specifics re. the pipe. – Gert Apr 25 '19 at 20:44
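To complement the discussion in the comments, here is a small numerical sketch of the quasi-static model $A_1\dot{h}_1 = -a\sqrt{2g(h_1-h_2)}$, $A_2\dot{h}_2 = a\sqrt{2g(h_1-h_2)}$ (flow inertia neglected, so no oscillation, only a monotonic approach to equilibrium). All numeric parameter values below are made up for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Made-up test parameters (not from the post); SI units throughout.
const double g  = 9.81;          // gravitational acceleration [m/s^2]
const double A1 = 1.0, A2 = 2.0; // tank cross-sections [m^2]
const double a  = 0.01;          // tube cross-section [m^2], a << A1, A2

// Forward-Euler integration of the quasi-static model
//   A1*dh1/dt = -q,  A2*dh2/dt = +q,  q = a*sqrt(2g(h1-h2)),
// run until the levels (nearly) equalise. Assumes h1 > h2 initially;
// returns the final pair (h1, h2).
std::pair<double, double> settle(double h1, double h2, double dt = 0.01) {
    for (long i = 0; i < 10000000 && h1 - h2 > 1e-6; ++i) {
        double q = a * std::sqrt(2.0 * g * (h1 - h2)); // volumetric flow [m^3/s]
        h1 -= q * dt / A1;  // tank 1 drains...
        h2 += q * dt / A2;  // ...into tank 2, so total volume is conserved
    }
    return std::make_pair(h1, h2);
}
```

With these parameters and initial levels $h_1 = 2\,\mathrm{m}$, $h_2 = 0.5\,\mathrm{m}$, both levels settle toward the common equilibrium $h_0 = (A_1 h_1 + A_2 h_2)/(A_1+A_2) = 1\,\mathrm{m}$, and the update conserves total volume at every step.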
Question 4
# Numbers of boys and girls are 'x' and 'y' respectively; average ages of a girl and a boy are 'a' years and 'b' years respectively. Find the average age (in years) of all the boys and girls.
Solution
Number of boys = $$x$$
Average age of boys = $$b$$
=> Total age of boys = $$bx$$
Similarly, total age of girls = $$ay$$
$$\therefore$$ Average age of all boys and girls = Total age of boys and girls / Number of boys and girls
= $$\frac{bx+ay}{x+y}$$
=> Ans - (B)
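As a quick sanity check with made-up numbers: take $x = 10$ boys of average age $b = 12$ and $y = 20$ girls of average age $a = 9$. Then

$$\frac{bx+ay}{x+y}=\frac{12\cdot10+9\cdot20}{10+20}=\frac{300}{30}=10\text{ years},$$

which lies between the two group averages, as a weighted average must.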
## How to type Raku unicode characters in Emacs
Raku programming language uses some unicode characters as operators, quotation marks, etc. In this post I'm going to explain how to type those characters in Emacs using input methods.
First, you might want to see a list of those characters and their ASCII equivalents here. There is also a doc for entering unicode characters. You may specifically want to look at XCompose for a system-wide solution.
There are at least two input methods you can use to enter the unicode characters used in Raku: rfc1345 and TeX.
To select an input method type C-x RET C-\ and to switch to an input method use C-u C-\. C-\ can be used to toggle input method.
After you select an input method, you have to use the prefix character it provides for typing special characters. & is the prefix used for rfc1345, and \, ^ and some other characters are used for TeX.
Example for typing λ:
rfc1345: &l*
TeX: \lambda
To see a list of character sequences for an input method, type C-h I.
You can change the default input method by setting the default-input-method variable:
(setq default-input-method "TeX")
To add characters which are not available in an input method:
(eval-after-load "quail/latin-ltx"
  '(let ((quail-current-package (assoc "TeX" quail-package-alist)))
     (quail-define-rules ((append . t))
                         ("\\lcb" ?「)
                         ("\\rcb" ?」))))
Now with TeX method enabled, \lcb types '「' and \rcb types '」'.
Same thing for the rfc1345 input method:
(eval-after-load "quail/rfc1345"
  '(let ((quail-current-package (assoc "rfc1345" quail-package-alist)))
     (quail-define-rules ((append . t))
                         ("&[" "「")
                         ("&]" "」"))))
Now with the rfc1345 method you can type '「' with &[ and type '」' with &].
Another way of entering unicode characters is using C-x 8 RET which runs insert-char command. C-x 8 prefix key has shortcuts for some characters. For example, C-x 8 / / inserts ÷. To add your own characters:
(global-set-key (kbd "C-x 8 l") "λ")
or:
(global-set-key (kbd "C-x 8 l") (lambda () (interactive) (insert "λ")))
Now C-x 8 l inserts λ.
Below is a list of unicode characters used in Raku and their character sequences in rfc1345 and TeX.
Note: rfc1345 character mnemonics work in Vim too. You only need to replace & with Ctrl-K.
| Character | C-x 8 | rfc1345 | TeX |
|---|---|---|---|
| « | < | &<< | \flqq |
| » | > | &>> | \frqq |
| × | x | &*X | \times |
| ÷ | / / | &-: | \div |
| − | _ - | &-2 | \minus |
| ≤ | _ < | &=< | \le |
| ≥ | _ > | &>= | \ge |
| ≠ | / = | &!= | \ne |
| ∘ | | &Ob | \circ |
| ≅ | | &?= | \cong |
| π | | &p* | \pi |
| τ | | &t* | \tau |
| ∞ | | &00 | \infty |
| … | | &.3 | \ldots |
| ‘ | [ | &'6 | \lq |
| ’ | ] | &'9 | \rq |
| ‚ | | &.9 | \glq |
| “ | { | &"6 | \ldq |
| ” | } | &"9 | \rdq |
| „ | | &:9 | \glqq |
| ¯ | = | &'m | \={} |
| ⁻ | | &-S | ^- |
| ⁺ | | &+S | ^+ |
| ⁰ - ⁹ | ^ 1 - ^ 3 | &0S - &9S | ^0 - ^9 |
| ½ | 1 / 2 | &12 | \frac12 |
| ∅ | | &/0 | \emptyset |
| ∈ | | &(- | \in |
| ∉ | | | \notin |
| ∋ | | &-) | \ni |
| ⊆ | | &(_ | \subseteq |
| ⊈ | | | \nsubseteq |
| ⊂ | | &(C | \subset |
| ⊄ | | | \nsubset |
| ⊇ | | &)_ | \supseteq |
| ⊉ | | | \nsupseteq |
| ⊃ | | &)C | \supset |
| ⊅ | | | \nsupset |
| ∪ | | &)U | \cup |
| ∩ | | &(U | \cap |
| ∖ | | | \setminus |
| ⊖ | | | \ominus |
| ⊎ | | | \uplus |
| ≡ | | &=3 | \equiv |
| ≢ | | | \nequiv |
# Gas station without pumps
## 2014 July 14
### Need new mesh seat for recumbent
Filed under: Uncategorized — gasstationwithoutpumps @ 21:28
I need to replace the mesh seat on my recumbent bicycle, because one of the buckles snapped yesterday. The mesh itself is badly stretched and abraded, and a few of the webbing straps are badly worn, so it is not worth repairing the seat—it’s replacement time. I can still ride the bike, but it isn’t as comfortable with the front strap no longer functional.
Now I’m trying to figure out exactly what fabric and parts to get. One person on the Ryan owner’s club mailing list conveniently provided a parts list recently, though the seat I have currently does not exactly match his list (for example, I have all 1″ webbing, no 3/4″ webbing). Here are some things I’m trying to decide:
• What type of mesh should I get? He recommended black Leno Lock mesh from Outdoor Wilderness Fabrics MESHBLK at $14.03/yard, but I’m also considering Phifertex Vinyl Mesh at $12.95/yard, which is available in many colors, or Phifertex Plus at $17.95/yard, which would provide less stretch, but also less ventilation. The Phifertex Plus is sold as a sling mesh (capable of supporting a person’s weight), but the others are not. I suspect that any fabric rated for seats will have too little ventilation for the recumbent. The leno weave fabrics are likely to provide more stability in an open mesh, because the warp threads twist around each other, rather than running straight, locking the weft threads in place. The bentrideronline forum posts generally recommend the Leno lock mesh from Outdoor Wilderness Fabrics, so I’ll probably go with that, even though it is a bit too stretchy.
• What sort of webbing should I get? The edges of the seat use 2″ webbing to stabilize the seat and attach the straps, plus a couple of diagonals from the center front to part way up the sides, to support the weight of the rider. The rest of the straps are 1″ wide. But should they be nylon, polypropylene, or polyester straps? Nylon has high strength, but is rather stretchy. Polypropylene has less stretch, but poor abrasion resistance and UV resistance, and polyester has the best UV resistance and the least stretch (about half as much as nylon webbing of the same weight under the same stress). It also doesn’t absorb water, and is more resistant to mildew and rot. I can get black polyester 1″ webbing for about 35¢/foot, and 2″ black polyester webbing for about 75¢/foot, but colors are a little more expensive: I can get 10 yards of red 1″ with reflective stripes for $18.90, or plain red for $1.48/yard. For a bicycle application, the reflective stripes may be a useful safety feature. Red 2″ seatbelt webbing would be about $10 for 5 yards.
• I also need to get buckles for the 7 cross straps and the two straps that go over the top of the seat. I’m undecided between simple side-release buckles (Fastex FSR1 59¢), and dual-pull side release buckles (generic GTSRD1 47¢) from Outdoor Wilderness Fabrics. Cam lock buckles (generic GCB1 46¢) are also a possibility. I’ll also want a a tri-glide for each loose strap end (generic GTG1 12¢).
So, unless I can get a new seat from the manufacturer of my bike (Longbikes in Colorado), even though they discontinued this model about 10 years ago, I’ll probably be making my own seat soon. It’ll cost me about $50–60 for materials, but I suspect that an already sewn seat would cost more like $150, and I wouldn’t have the option of red straps with reflective stripes.
### End of an era
My son has his last performances at West End Studio Theatre this summer—his last summer before college. He has had theater classes with Terri Steinmann and various of her staff members since the Wizard of Oz class in July 2004, and he has been performing on the WEST stage since they opened in 2007. Between Pisces Moon (where Terri taught before founding WEST) and West Performing Arts, he has done at least 42 classes with them (I’m not sure how to count the Dinosaur Prom Improv troupe, which he performed with for two years—I counted that as only one class, though it probably should count as more, as there were weekly practice sessions for the two years). Adding up all the course tuition over the 10 years he’s worked with them, I think we’ve paid around $20,000, averaging $2k a year—well worth it for the pleasure and the learning he has gotten from it.
This past weekend he performed as Otho (the interior designer) in Betelgeuse. After seeing the movie, I did not know how they would pull it off as a stage play, but they did quite a good job of it—particularly since they did not have the complete script until a few days before they performed (a long-standing WEST tradition of writing the script after rehearsals have started). There were two casts (the morning class and the afternoon class), but I only saw the afternoon cast’s production—I understand that the interpretations of essentially the same script and set were quite different for the two casts (costumes had to be different, because the actors were very different sizes).
He has one more class with them this summer—the summer teen conservatory with Santa Cruz Shakespeare, which I believe still has room for another student or two (the conservatory is limited to about 12 students). He’s done their Shakespeare teen conservatory for the past four years—it is quite different each time. The conservatory is probably West Performing Arts’ most advanced theater class.
After this summer, not only will he be finished with West Performing Arts, but the West End Studio Theatre, where about half his performances have been, will be closed. We joke that they can’t go on without him, but the truth is that they are losing their lease. They’ve been renting on a year-to-year contract for eight years, and the landlord has found a tenant (a beer brewer) willing to lease the space on a longer term lease. The parting is amicable, but everyone will miss the W.E.S.T. space, which has been much more flexible and functional than any of the other spaces children’s theater has used around the city.
West Performing Arts will continue classes at the Broadway Playhouse and at schools, but they’ll need more space for classes than Broadway Playhouse can provide, especially for their popular summer classes, so they are looking for a new home: ideally, two large adjacent spaces that can be used for classes, one of which can be a flexible performance space, totaling about 10,000 sq ft, with storage, office space, and nearby parking, and not needing a lot of renovation. They don’t have a lot of money (they’ve been keeping the classes affordable), so the typical $15–20/sq.ft./year leases locally are probably beyond their means. If anyone has any leads for them, their contact information is on their web site.

## 2014 July 12

### Impostor syndrome

Filed under: Uncategorized — gasstationwithoutpumps @ 10:56

I know that many students feel at times like they aren’t capable of doing what they need to do to ace their classes, to graduate, to move on into the “real world” or higher up in academia. Sometimes they feel like they are just “faking” being smart, and that someone will catch them at it. This is known as “impostor syndrome” and is quite common—Wikipedia even has a page explaining it. People from underprivileged backgrounds or who have been socialized to think of themselves as somehow inferior suffer from it more than those who have been taught to be confident in what they do. For example, women in physical and computational sciences often doubt themselves, even when the objective evidence is that they are quite capable. Even tenured professors, who have passed through many tests of their resolve and ability, often suffer from impostor syndrome.
I suggest the following reading (all from a single author) for those who are wrestling with this problem (the author selected these posts herself from her larger body of work):

- http://academic-jungle.blogspot.com/2013/01/underachieving.html
- http://academic-jungle.blogspot.com/2013/11/beer-fries-and-impostors.html
- http://academic-jungle.blogspot.com/2013/08/the-sucky-and-awesome-of-academia.html
- http://xykademiqz.wordpress.com/2014/04/27/potential-and-ambition/
- http://xykademiqz.wordpress.com/2014/03/13/tenure-denials/
- http://xykademiqz.wordpress.com/2014/02/11/you-got-tenure-now-what/
- http://xykademiqz.wordpress.com/2014/02/08/tenure-track-illustrated/

Maria Klawe, president of Harvey Mudd College, has a good, short article on her own experiences with impostor syndrome in Slate. For a somewhat younger perspective, Alicia Liu’s article Overcoming Imposter Syndrome is worth reading.

Incidentally, there is a flip side to the problem: students (often, but not exclusively, male students from privileged backgrounds) having too much confidence and not being aware when they are out of their depth, failing to ask for help when they need it. Both problems can be tackled with the same approach: seeking outside verification of your abilities and paying attention to the feedback. This is easiest while being a student, as there are many formal mechanisms in place for honest feedback—it gets harder when you have to rely on the more random mechanisms of journal paper reviews and grant proposals or pats on the back from co-workers.

As a community, we can all help with both problems by providing honest feedback (neither ego strokes nor unwarranted criticism) when asked for it, and by asking for feedback ourselves. For my part, I tend to see the negative both in my own work and in others’ work, and I am working on trying to increase the amount of positive feedback I give people.
## 2014 July 7

### Crowdfunding for UCSC iGEM project

Filed under: Uncategorized — gasstationwithoutpumps @ 19:34

The UCSC undergraduate team for the iGEM synthetic biology competition have put up a crowd-funding web site to try to raise the money they need for their contest entry:

https://experiment.com/projects/sustainable-next-generation-biofuel-production

Their design project is to engineer a bacterial strain for cellulosic alcohol production—not ethanol, but butanol, whose energy density is more compatible with the existing gasoline infrastructure and which does not absorb so much water. Conventional ways of creating butanol are too expensive, so recombinant bacteria are a promising approach. Using cellulose as a feedstock avoids competing with food production, as waste paper and other non-food sources can be used.

They are not trying to do everything at once—they are working this summer on getting butanol production from glucose engineered into Haloferax volcanii, a halophile that their mentor has worked with a fair amount. I’m not sure what their reasoning is for using a halophile—perhaps they just wanted to work in an archaeon, and H. volcanii is one of the best-established model organisms for Archaea. Their mentor for the project is donating his time, so all the costs are unavoidable reagent, equipment-time, or registration-fee costs.

The team description (including membership) is at http://igem.org/Team.cgi?id=1560, and the wiki where you can follow their progress is at http://2014.igem.org/Team:UCSC (though they’ve nothing there yet but an introduction to the project). I gave a token amount, and I urge others to do so also (or more if you are feeling generous). They’re currently about halfway to their goal.
https://experiment.com/projects/sustainable-next-generation-biofuel-production

## 2014 July 6

### Battery connectors

Filed under: Uncategorized — gasstationwithoutpumps @ 02:32

I spent a little time today working on my book, but I got sidetracked into a different project for the day: designing a super-cheap coin-cell battery connector. I’ve used coin-cell battery holders before, like on the blinky EKG board, where I used a BH800S for two 20mm CR2032 lithium cells. That battery holder is fairly large and costs over $1—even in 1000s it costs 70¢ a piece. So I was trying to come up with a way to make a dirt-cheap coin-cell holder.
The inspiration came from the little LED lights that “glovers” use inside their gloves. They are powered by two CR1620 batteries (that means a 16mm diameter and 2.0mm thickness for the battery). Because the lights have to be made very cheaply, they don’t use an expensive holder, but put the negative side of the batteries directly against a large copper pad on the PC board. The batteries are held in place by the positive contact, which is a piece of springy metal pressing the battery against the board—and each manufacturer seems to have a slightly different variant on how the clip is made.
Unfortunately, I was unable to find any suppliers who sold the little clips—though I found several companies that make battery contacts, it seems that most are custom orders.
My first thought was to bend a little clip out of some stainless steel wire I have sitting around (not the 1/8″ welding rod, but 18-gauge, 1.024mm wire). That’s about the same thickness as a paperclip (which is made out of either 18-gauge or 19-gauge wire), but the stainless steel is stiffer and less fatigue-prone than paperclips. I was a little worried about whether stainless steel was solderable, so I looked it up on Wikipedia, which has an article on solderability. Sure enough, stainless steel is very hard to solder (the chromium oxides have to be removed, and that takes some really nasty fluxes that you don’t want near your electronics). So scratch that idea.
I spent some time looking around the web at what materials do get used for battery contacts—it seems there are three main ones: music wire, phosphor bronze, and beryllium copper, roughly in order of price. Music wire is steel wire, which gets nickel plated for making electrical connections. It is cheap, stiff, and easily formed, but its conductivity is not so great, though the nickel plating helps with that. The nickel oxides that form require a sliding contact to scrape off to make good electrical connection. Phosphor bronze is a better conductor, but may need plating to avoid galvanic corrosion with the nickel-plated battery surfaces. Most of the contacts I saw on the glover lights seemed to have been stamped out of phosphor bronze. Beryllium copper is a premium material (used in military and medical devices), as it has a really good ratio of yield strength to Young’s modulus, so it can be cycled many times without failing, but also has good conductivity.
Since I don’t have metal stamping machinery in my house, but I do have pliers and vise-grips, I decided to see if I could design a clip out of wire. It is possible to order small quantities of nickel-plated music wire on the web. For example, pianoparts.com sells several different sizes, from 0.1524mm diameter to 0.6604mm diameter. I may even be able to get some locally at a music store.
My first design was entirely seat-of-the-pants guessing:
First clip design, using 19-gauge wire, with two 1mm holes in PC board to accept the wire. This design is intended for two CR1620 batteries.
The idea was to have a large sliding contact that made it fairly easy to slide the batteries in, but then held them snugly. Having a rounded contact on the clip avoids scratching the batteries but can (I hope) provide a fair amount of normal force to hold the batteries in place. But how much force is needed?
I had a very hard time finding specifications on how hard batteries should be held by their contacts. Eventually I found a data sheet for a coin battery holder that specified “Spring pressure: 50g min. initial contact force at positive and negative terminals”. Aside from referring to force as pressure and then using units of mass, this data sheet gave me a clear indication that I wanted at least 0.5N of force on my contacts.
I found another battery holder manufacturer that gave a tiny graph in one of their advertising blurbs that showed a range of 100g–250g (again using units of mass). This suggests 1N-2.5N of contact force.
Another way of getting at the force needed is to look at how much friction is needed to hold the batteries in place and what the coefficient of friction is for nickel-on-nickel sliding. The most violent shaking I’m likely to subject something to is shaking it with my fingertips with a loose wrist—about 4Hz with a peak-to-peak amplitude of 22cm, which gives a peak acceleration of about 70 m/s^2. Two CR1620 cells weigh about 2.5±0.1g (based on different estimates from the web), so the force they need to resist is only about 0.2N. Nickel-on-nickel friction can have a coefficient as low as 0.53 (from the Engineering Toolbox), so I’d want a normal force of at least 0.4N. That’s in the same ballpark as the information I got from the battery-holder specs.
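That arithmetic is easy to sanity-check with a quick script, using the same 4Hz, 22cm, and 2.5g figures as above:

```python
import math

# Peak acceleration of sinusoidal shaking x(t) = A*sin(2*pi*f*t) is (2*pi*f)^2 * A,
# where A is the amplitude (half the 22 cm peak-to-peak swing) and f = 4 Hz.
freq_hz = 4.0
amplitude_m = 0.22 / 2
peak_accel = (2 * math.pi * freq_hz) ** 2 * amplitude_m  # ~70 m/s^2

# Inertial force on two CR1620 cells (~2.5 g total) at that acceleration:
battery_mass_kg = 2.5e-3
inertial_force = battery_mass_kg * peak_accel  # ~0.17 N

# Normal force needed if the nickel-on-nickel friction coefficient is only 0.53:
normal_force = inertial_force / 0.53  # ~0.33 N

print(peak_accel, inertial_force, normal_force)
```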
So how stiff does the wire have to be? I specified a 0.2mm deflection, so I’d need at least 2N/mm as the spring constant for the contact, and I might want as high as 10N/mm for a really firm hold on the batteries.
So how should I compute the stiffness of the contact? I’ve never done mechanical engineering, and never had a statics class, but I can Google formulas like anyone else—I found a formula for the bending of a cantilever loaded at the end:
$\frac{F}{d} = \frac{3 E I}{L^{3}}$, where F is force, d is deflection, E is Young’s modulus, I is “area moment of inertia”, and L is the length of the beam. More Googling got me the area moment of inertia of a circular beam of radius r as $\frac{\pi}{4} r^{4}$. So if I use the 0.912mm wire with an 8mm beam I have
$F/d = 2.0\times 10^{-4}\,\mathrm{mm} \times E$.
More Googling got me some typical values of Young’s modulus:
| material | E [MPa = N/mm²] |
|---|---|
| phosphor bronze | 120E3 |
| beryllium copper | 135E3 |
| music wire | 207E3 |
If I used 19-gauge phosphor bronze, I’d have about 24N/mm, which is way more than my highest desired value of 10N/mm. Working backwards from 2–10N/mm, what wire gauge would I need? I get a diameter of 0.403mm to 0.603mm, which would be #6 (0.4064mm), #7 (0.4572mm), #8 (0.5080mm), #9 (0.5588mm), or #10 (0.6096mm), on the pianoparts.com site. I noticed that a battery contact maker in Georgia claims to stock 0.5mm and 0.6mm music wire for making battery contacts, though they first give the sizes as 0.020″ and 0.024″, so I think that these are actually 0.5080mm and 0.6096mm (#8 and #10) music wire.
It seems that using #8 (0.020″, 0.5080mm) nickel-plated music wire would be an appropriate material for making the contacts. Note that the loop design actually results in two cantilevers, each with a stiffness of about 4N/mm, resulting in a retention force of about 1.6N. The design could be tweaked to get different contact forces, by changing how much deflection is needed to accommodate the batteries.
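The stiffness numbers above can be reproduced by evaluating $F/d = 3EI/L^3$ with $I = \frac{\pi}{4}r^4$, using the same 8mm beam length and the Young’s modulus values from the table:

```python
import math

def cantilever_stiffness(E_MPa, diameter_mm, length_mm):
    """Spring constant F/d = 3*E*I/L^3 (N/mm) of a round cantilever loaded at the end."""
    I = math.pi / 4 * (diameter_mm / 2) ** 4  # area moment of inertia, mm^4
    return 3 * E_MPa * I / length_mm ** 3

# 19-gauge (0.912 mm) phosphor bronze, 8 mm beam: ~24 N/mm (too stiff)
k_bronze = cantilever_stiffness(120e3, 0.912, 8)

# #8 (0.508 mm) music wire: ~4 N/mm per side of the loop, so the two
# cantilevers together give ~1.6 N retention force at 0.2 mm deflection
k_music = cantilever_stiffness(207e3, 0.508, 8)
retention_N = 2 * k_music * 0.2

print(k_bronze, k_music, retention_N)
```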
How much tweaking might be needed? I found the official specs for battery sizes (with tolerances) in IEC standard 60086 part 2: the thickness for a 1620 is 1.8mm–2mm, the diameter is 15.7mm–16mm, and the negative contact must be at least 5mm in diameter. The standard also calls for them to take an average of 675 hours to discharge down to 2V through a 30kΩ resistor (that’s about 56mAh if the voltage drops linearly, 67mAh if the voltage drops suddenly at the end of the discharge time). If the batteries can legally be as thin as 1.8mm, then to get a displacement of 0.2mm, I’d need the zero-point for the contacts to be only 3.4mm from the PC board, not 3.8mm, and full-thickness batteries would provide a displacement of 0.6mm and a retention force of about 4.8N.
If I were to do a clip for a single CR2032 battery, I’d need to have a zero-point 2.8mm from the board, to provide 0.2mm of displacement for the minimum 3.0mm battery thickness.
So now all I need to do is get some music wire and see if I can bend it by hand precisely enough to make prototype clips. I’d probably change the spacing between the holes to be 0.3″ (7.62mm), so that I could test the clip on one of my existing PC boards.
Update 2014 July 6: I need to put an insulator on the verticals (heat shrink tubing?), or the top battery will be shorted out, since the side of the lower battery is exposed.
# Vertically align individual cells in a table
I want to be able to vertically align any individual cell within a table. This appears to be possible, but only on a per-column or per-row basis. At the moment I can horizontally align the contents of individual cells with the help of the \raggedleft and \centering commands. However, \multicolumn applies vertical alignment for the whole row, and the longtabu environment can only specify the alignment type per column.
The following is an example of the table I want to create where both horizontal and vertical alignments need to be set for each cell instead of per row or column:
\documentclass[11pt,a4paper]{article}
\usepackage{longtable}
\usepackage{tabu}
\usepackage{lipsum}
\usepackage{multirow}
\newcolumntype{M}{m{\dimexpr 1\tabucolX+1\tabcolsep+\arrayrulewidth\relax}}
\newcolumntype{P}{p{\dimexpr 1\tabucolX+1\tabcolsep+\arrayrulewidth\relax}}
\newcolumntype{B}{b{\dimexpr 1\tabucolX+1\tabcolsep+\arrayrulewidth\relax}}
\newcolumntype{H}{>{\begin{minipage}[b]{\hsize}}B<{\end{minipage}}}
\begin{document}
\begin{longtabu} to 150mm [l] {|X[l]|X[l]|X[l]|}
\hline
\multicolumn{1}{|P|}{ \lipsum[1]} & \multicolumn{1}{H|}{\centering H: Center, V:Center} & \multicolumn{1}{B|}{\raggedleft \multirow{1}{*}[-2\baselineskip]{H:Right, V:Bottom}} \tabularnewline
\hline
\multicolumn{1}{|P|}{\lipsum[1]} & \multicolumn{1}{P|}{H: Left, V:Top} & \multicolumn{1}{M|}{\centering H:Center, V:Center} \tabularnewline
\hline
\multicolumn{1}{|P|}{\lipsum[1]} & \multicolumn{1}{M|}{\raggedleft H: Right, V:Center} & \multicolumn{1}{B|}{H:Left, V:Bottom} \tabularnewline
\hline
\tabuphantomline
\end{longtabu}
\end{document}
Update:
The solutions presented in "Move tabular entry to bottom of row" don't work for me.
The first solution, I believe, simply adjusts the entire column by creating a new column type:
\newcolumntype{B}{>{\begin{minipage}[b]{\hsize}}X<{\end{minipage}}}
This doesn't help my problem where I need the vertical alignment applied to individual cells, not a whole column. Applying this column type to \multicolumn doesn't appear to work.
The second solution places a \baselineskip into the optional parameter of \multirow, which works initially but seems to become vertically centred as soon as the cell reaches a certain height.
I have updated my example above to show how it breaks.
Thanks
• Sorry when I say "vertically align" I am not referring to the state of being vertically centred in the middle but the ability to be positioned vertically e.g. Top, Middle or Bottom. – A Dark Divided Gem Feb 16 '14 at 23:47
• You can use m or b columns from array package. – user11232 Feb 16 '14 at 23:56
• As I mention in my question I am looking to apply vertical positioning at the cell level instead of per column. For \multicolumn only one column-spec is allowed. – A Dark Divided Gem Feb 17 '14 at 1:12
• My recommendation: stay away from longtabu. The developer has already said that he is going to make some major changes and it is not going to be backward compatible. You can find more about this in this question: tex.stackexchange.com/questions/106452/… – Mario S. E. Mar 6 '14 at 11:06
• David Carlisle's answer at tex.stackexchange.com/questions/166808/… can address this problem. – Steven B. Segletes Apr 2 '14 at 2:19
David Carlisle's answer doesn't appear to be working for me.
hmph:-)
Works for me:
\documentclass[11pt,a4paper]{article}
\usepackage{longtable}
\usepackage{array}
\usepackage{lipsum}
\makeatletter
\newcolumntype{P}{p{.3\textwidth}}
\newcommand\m[1]{\multicolumn{1}{#1}}
\newcommand\zc[2]{%
\setbox0\hbox{\parbox[c]{.3\textwidth}{#2}}%
\smash{\raisebox{\dimexpr(\csname PDFSAVEe#1\endcsname sp-
\csname PDFSAVEb#1\endcsname sp)/2\relax}{\box0}}}
\newcommand\zb[2]{%
\setbox0\hbox{\parbox[t]{.3\textwidth}{#2}}%
\smash{\raisebox{\dimexpr\csname PDFSAVEe#1\endcsname sp-
\csname PDFSAVEb#1\endcsname sp\relax}{\box0}}}
\def\foo#1{\leavevmode
\expandafter\ifx\csname PDFSAVE#1\endcsname\relax
\expandafter\gdef\csname PDFSAVE#1\endcsname{0}%
\fi
\pdfsavepos\write\@auxout{\gdef\string\PDFSAVE#1{\the\pdflastypos}}}
\makeatother
\begin{document}
\begin{longtable}{|P|P|P|}
\hline
\foo{ba}\lipsum*[1]\foo{ea} &
\zc{a}{\centering H: Center, V:Center} &
\zb{a}{\raggedleft H:Right, V:Bottom} \tabularnewline
\hline
\foo{bb}\lipsum*[1]\foo{eb} &
{H: Left, V:Top} &
\zc{b}{\centering H:Center, V:Center} \tabularnewline
\hline
\foo{bc}\lipsum*[1]\foo{ec} &
\zc{c}{\raggedleft H: Right, V:Center} &
\zb{c}{H:Left, V:Bottom} \tabularnewline
\hline
\end{longtable}
\end{document}
• That quote... :) – Paulo Cereda Jun 7 '14 at 10:04
# First-hitting-time model
In statistics, first-hitting-time models are a sub-class of survival models. The first hitting time, also called first passage time, of a set $A$ with respect to an instance of a stochastic process is the time until the stochastic process first enters $A$.
More colloquially, a first passage time in a stochastic system is the time taken for a state variable to reach a certain value. Understanding this metric allows one to further understand the physical system under observation, and as such it has been the topic of research in very diverse fields, from economics to ecology.[1]
## Examples
A common example of a first-hitting-time model is a ruin problem, such as Gambler's ruin. In this example, an entity (often described as a gambler or an insurance company) has an amount of money which varies randomly with time, possibly with some drift. The model considers the event that the amount of money reaches 0, representing bankruptcy. The model can answer questions such as the probability that this occurs within finite time, or the mean time until which it occurs.
First-hitting-time models can be applied to expected lifetimes, of patients or mechanical devices. When the process reaches an adverse threshold state for the first time, the patient dies, or the device breaks down.
## First passage time of a 1D Brownian Particle
One of the simplest and omnipresent stochastic systems is that of the Brownian particle in one dimension. This system describes the motion of a particle which moves stochastically in one dimensional space, with equal probability of moving to the left or to the right. Given that Brownian motion is used often as a tool to understand more complex phenomena, it is important to understand the probability of a first passage time of the Brownian particle of reaching some position distant from its start location. This is done through the following means.
The probability density function (PDF) for a particle in one dimension is found by solving the one-dimensional diffusion equation. (This equation states that the position probability density diffuses out over time. It is analogous to, say, cream in a cup of coffee, if the cream was all contained within some small location initially. In the long-time limit the cream has diffused throughout the entire drink evenly.) Namely,
$\frac{\partial p(x,t \mid x_{0})}{\partial t}=D\frac{\partial^2p(x,t \mid x_{0})}{\partial x^2},$
given the initial condition $p(x,t={0} \mid x_{0})=\delta(x-x_{0})$, where $x(t)$ is the position of the particle at some given time, $x_0$ is the tagged particle's initial position, and $D$ is the diffusion constant with the S.I. units $m^2s^{-1}$ (an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the rate of change in time of the probability of finding the particle at $x(t)$ is position dependent (it is proportional to the second spatial derivative of the density).
It can be shown that the one-dimensional PDF is
$p(x,t; x_0)=\frac{1}{\sqrt{4\pi Dt}}\exp\left(-\frac{(x-x_0)^2}{4Dt}\right).$
This states that the probability of finding the particle at $x(t)$ is Gaussian, and the width of the Gaussian is time dependent. More specifically, the Full Width at Half Maximum (FWHM) (technically, this is actually the Full Duration at Half Maximum, as the independent variable is time) scales like
$\rm{FWHM}\sim\sqrt{t}.$
Using the PDF one is able to derive the average of a given function, $L$, at time $t$:
$\langle L(t) \rangle\equiv \int^{\infty}_{-\infty} L(x,t) p(x,t) dx,$
where the average is taken over all space (or any applicable variable).
The First Passage Time Density (FPTD) is the probability that a particle has first reached a point $x_c$ at time $t$. This probability density is calculable from the Survival probability (a more common probability measure in statistics). Consider the absorbing boundary condition $p(x_c,t)=0$ (The subscript c for the absorption point $x_c$ is an abbreviation for cliff used in many texts as an analogy to an absorption point). The PDF satisfying this boundary condition is given by
$p(x,t; x_0, x_c) = \frac{1}{\sqrt{4\pi Dt}} \left( \exp\left(-\frac{(x-x_0)^2}{4Dt}\right) - \exp\left(-\frac{(x-(2x_c-x_0))^2}{4Dt}\right) \right),$
for $x < x_c$. The survival probability, the probability that the particle has remained at a position $x < x_c$ for all times up to $t$, is given by
$S(t)\equiv\int_{-\infty}^{x_c} p(x,t; x_{0}, x_c) dx = \operatorname{erf}\left(\frac{x_c-x_{0}}{2\sqrt{D t}}\right),$
where $\operatorname{erf}$ is the error function. The relation between the survival probability and the FPTD is as follows: the probability that a particle has reached the absorption point between times $t$ and $t+dt$ is $f(t)dt=S(t)-S(t+dt)$, so taking a first-order Taylor approximation gives the definition of the FPTD:
$f(t)=-\frac{\partial S(t)}{\partial t}.$
By using the diffusion equation and integrating by parts, the explicit FPTD is
$f(t)\equiv\frac{|x_c-x_{0}|}{\sqrt{4\pi Dt^3}} \exp\left(- \frac{(x_c-x_{0})^2}{4Dt}\right).$
The first-passage time for a Brownian particle therefore follows a Lévy distribution.
For $t\gg\frac{(x_c-x_{0})^2}{4D}$, it follows from above that
$f(t)=\frac{\Delta x}{\sqrt{4\pi Dt^3}}\sim t^{-3/2},$
where $\Delta x\equiv |x_c-x_{0}|$. This equation states that the probability for a Brownian particle achieving a first passage at some long time (defined in the paragraph above) becomes increasingly small, but always finite.
The first moment of the FPTD diverges (it is a so-called heavy-tailed distribution), so one cannot calculate the average FPT. Instead, one can calculate the typical time, the time at which the FPTD is at a maximum ($\partial f/\partial t=0$), i.e.,
$\tau_{\rm{ty}}=\frac{\Delta x^2}{6D}.$
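These formulas can be cross-checked numerically: $f(t)$ should match $-\partial S/\partial t$, and the density should peak at $\tau_{\rm{ty}}=\Delta x^2/6D$. An illustrative check with $\Delta x = D = 1$:

```python
import math

DX, D = 1.0, 1.0  # |x_c - x_0| and the diffusion constant

def survival(t):
    """S(t) = erf(dx / (2*sqrt(D*t)))"""
    return math.erf(DX / (2 * math.sqrt(D * t)))

def fptd(t):
    """f(t) = dx / sqrt(4*pi*D*t^3) * exp(-dx^2 / (4*D*t))"""
    return DX / math.sqrt(4 * math.pi * D * t**3) * math.exp(-DX**2 / (4 * D * t))

# f(t) = -dS/dt, checked with a central finite difference:
t, h = 0.5, 1e-6
dSdt = (survival(t + h) - survival(t - h)) / (2 * h)
print(fptd(t), -dSdt)  # the two agree

# The density is maximal at the typical time tau = dx^2 / (6*D):
tau = DX**2 / (6 * D)
print(fptd(0.9 * tau) < fptd(tau) > fptd(1.1 * tau))  # True
```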
## Latent vs observable
In many real-world applications, the process is latent, or unobservable. When first-hitting-time models are equipped with regression structures accommodating covariate data, such a regression structure is called threshold regression. The threshold state, parameters of the process, and even the time scale may depend on corresponding covariates.
A first-hitting-time (FHT) model has two underlying components: (1) a parent stochastic process $\{X(t)\}\,\,$, and (2) a threshold. The first hitting time is defined as the time when the stochastic process first reaches the threshold. It is very important to distinguish whether the sample path of the parent process is latent (i.e., unobservable) or observable, and such distinction is a characteristic of the FHT model. By far, latent processes are most common. To give an example, we can use a Wiener process $\{X(t), t\ge0\,\}\,$ as the parent stochastic process. Such Wiener process can be defined with the mean parameter ${\mu}\,\,$, the variance parameter ${\sigma^2}\,\,$, and the initial value $X(0)=x_0>0\,$.
# Cormen Edition 3 Exercise 8.1 Question 3 (Page No. 194)
Show that there is no comparison sort whose running time is linear for at least half of the $n!$ inputs of length $n$.What about a fraction of $1/n$ inputs of length $n$? What about a fraction $1/2^n$?
## Related questions
Suppose that you are given a sequence of $n$ elements to sort.The input sequence consists of $n/k$ subsequences, each containing $k$ elements.The elements in a given subsequence are all smaller than the elements in the succeeding subsequence and larger than the ... this variant of the sorting problem. (Hint: It is not rigorous to simply combine the lower bounds for the individual subsequences.)
Obtain asymptotically tight bounds on $lg\ (n!)$ without using Stirling’s approximation. Instead, evaluate the summation $\sum_{k=1}^{n} lg\ k$.
BUCKET-SORT(A)
1  let B[0...n-1] be a new array
2  n = A.length
3  for i = 0 to n - 1
4      make B[i] an empty list
5  for i = 1 to n
6      insert A[i] into list B[⌊n·A[i]⌋]
7  for i = 0 to n - 1
8      sort list B[i] with insertion sort
9  concatenate the lists B[0], B[1], ..., B[n-1] together in order

Illustrate the operation of BUCKET-SORT on the array $A=\langle .79,.13,.16,.64,.39,.20,.89,.53,.71,.42\rangle$
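A direct Python transcription of the pseudocode (assuming, as CLRS does, inputs uniformly distributed in [0, 1); Python's built-in sort stands in for the per-bucket insertion sort):

```python
def bucket_sort(a):
    """Distribute values in [0, 1) into n buckets, sort each, and concatenate."""
    n = len(a)
    buckets = [[] for _ in range(n)]          # lines 1-4
    for x in a:
        buckets[int(n * x)].append(x)         # lines 5-6: B[floor(n * A[i])]
    for b in buckets:
        b.sort()                              # lines 7-8
    return [x for b in buckets for x in b]    # line 9

A = [.79, .13, .16, .64, .39, .20, .89, .53, .71, .42]
print(bucket_sort(A))  # [0.13, 0.16, 0.2, 0.39, 0.42, 0.53, 0.64, 0.71, 0.79, 0.89]
```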
Set up power-law or nonparametric weights for the neighbourhood component of hhh4-models as proposed by Meyer and Held (2014). Without normalization, power-law weights are $$w_{ji} = o_{ji}^{-d}$$ (if $$o_{ji} > 0$$, otherwise $$w_{ji} = 0$$), where $$o_{ji}$$ ($$=o_{ij}$$) is the adjacency order between regions $$i$$ and $$j$$, and the decay parameter $$d$$ is to be estimated. In the nonparametric formulation, unconstrained log-weights will be estimated for each of the adjacency orders 2:maxlag (the first-order weight is fixed to 1 for identifiability). Both weight functions can be modified to include a 0-distance weight, which enables hhh4 models without a separate autoregressive component.
W_powerlaw(maxlag, normalize = TRUE, log = FALSE,
initial = if (log) 0 else 1, from0 = FALSE)
W_np(maxlag, truncate = TRUE, normalize = TRUE,
initial = log(zetaweights(2:(maxlag+from0))),
from0 = FALSE, to0 = truncate)
## Arguments
maxlag
a single integer specifying a limiting order of adjacency. If spatial dependence is not to be truncated at some high order, maxlag should be set to the maximum adjacency order in the network of regions. The smallest possible value for maxlag is 2 if from0=FALSE and 1 otherwise.
truncate,to0
W_np represents order-specific log-weights up to order maxlag. Higher orders are by default (truncate=TRUE) assumed to have zero weight (similar to W_powerlaw). Alternatively, truncate=FALSE requests that the weight at order maxlag should be carried forward to higher orders. truncate has previously been called to0 (deprecated).
normalize
logical indicating if the weights should be normalized such that the rows of the weight matrix sum to 1 (default). Note that normalization does not work with islands, i.e., regions without neighbours.
log
logical indicating if the decay parameter $$d$$ should be estimated on the log-scale to ensure positivity.
initial
initial value of the parameter vector.
from0
logical indicating if these parametric weights should include the 0-distance (autoregressive) case. In the default setting (from0 = FALSE), adjacency order 0 has zero weight, which is suitable for hhh4 models with a separate autoregressive component. With from0 = TRUE (Meyer and Held, 2017), the power law is based on $$(o_{ji} + 1)$$, and nonparametric weights are estimated for adjacency orders 1:maxlag, respectively, where the 0-distance weight is $$w_{jj} = 1$$ (without normalization). Note that the corresponding hhh4 model should then exclude a separate autoregressive component (control$ar$f = ~ -1).
## Value
a list which can be passed as a specification of parametric neighbourhood weights in the control$ne$weights argument of hhh4.
## Details
hhh4 will take adjacency orders from the neighbourhood slot of the "sts" object, so these must be prepared before fitting a model with parametric neighbourhood weights. The function nbOrder can be used to derive adjacency orders from a binary adjacency matrix.
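To make the weight construction concrete, here is an illustrative sketch in plain Python (not the package's R implementation) of unnormalized and row-normalized power-law weights computed from a matrix of adjacency orders; the small example matrix is hypothetical:

```python
def powerlaw_weights(order, d, normalize=True, from0=False):
    """Power-law weights from a matrix of adjacency orders o_ji.

    w_ji = o_ji^-d (zero when o_ji == 0); with from0=True the power law
    is based on (o_ji + 1), so the 0-distance weight is 1 before normalization.
    """
    if from0:
        w = [[(o + 1) ** -d for o in row] for row in order]
    else:
        w = [[o ** -d if o > 0 else 0.0 for o in row] for row in order]
    if normalize:  # rows sum to 1; undefined for islands (all-zero rows)
        w = [[x / sum(row) for x in row] for row in w]
    return w

# three regions in a row: regions 0 and 2 are second-order neighbours
order = [[0, 1, 2],
         [1, 0, 1],
         [2, 1, 0]]
W = powerlaw_weights(order, d=2.0)
print(W[0])  # [0.0, 0.8, 0.2]
```

With d = 2, a second-order neighbour gets a quarter of the raw weight of a first-order one, and normalization rescales each row to sum to 1, as the `normalize` argument documented above does.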
## References
Meyer, S. and Held, L. (2014): Power-law models for infectious disease spread. The Annals of Applied Statistics, 8 (3), 1612-1639. doi: 10.1214/14-AOAS743
Meyer, S. and Held, L. (2017): Incorporating social contact data in spatio-temporal models for infectious disease spread. Biostatistics, 18 (2), 338-351. doi: 10.1093/biostatistics/kxw051
## Author
Sebastian Meyer
nbOrder to determine adjacency orders from a binary adjacency matrix.
getNEweights and coefW to extract the estimated neighbourhood weight matrix and coefficients from an hhh4 model.
## Examples
data("measlesWeserEms")
## data contains adjacency orders as required for parametric weights
plot(measlesWeserEms, type = observed ~ unit, labels = TRUE)
neighbourhood(measlesWeserEms)[1:6,1:6]
max(neighbourhood(measlesWeserEms)) # max order is 5
## fit a power-law decay of spatial interaction
## in a hhh4 model with seasonality and random intercepts in the endemic part
measlesModel <- list(
ar = list(f = ~ 1),
ne = list(f = ~ 1, weights = W_powerlaw(maxlag=5)),
end = list(f = addSeason2formula(~-1 + ri(), S=1, period=52)),
family = "NegBin1")
## fit the model
set.seed(1) # random intercepts are initialized randomly
measlesFit <- hhh4(measlesWeserEms, measlesModel)
summary(measlesFit) # "neweights.d" is the decay parameter d
coefW(measlesFit)
## plot the spatio-temporal weights o_ji^-d / sum_k o_jk^-d
## as a function of adjacency order
plot(measlesFit, type = "neweights", xlab = "adjacency order")
## normalization => same distance does not necessarily mean same weight.
## to extract the whole weight matrix W: getNEweights(measlesFit)
## visualize contributions of the three model components
## to the overall number of infections (aggregated over all districts)
plot(measlesFit, total = TRUE)
## little contribution from neighbouring districts
if (surveillance.options("allExamples")) {
## simpler model with autoregressive effects captured by the ne component
measlesModel2 <- list(
ne = list(f = ~ 1, weights = W_powerlaw(maxlag=5, from0=TRUE)),
end = list(f = addSeason2formula(~-1 + ri(), S=1, period=52)),
family = "NegBin1")
measlesFit2 <- hhh4(measlesWeserEms, measlesModel2)
## omitting the separate AR component simplifies model extensions/selection
## and interpretation of covariate effects (only two predictors left)
plot(measlesFit2, type = "neweights", exclude = NULL, xlab = "adjacency order")
## strong decay, again mostly within-district transmission
## (one could also try a purely autoregressive model)
plot(measlesFit2, total = TRUE,
legend.args = list(legend = c("epidemic", "endemic")))
## almost the same RMSE as with separate AR and NE effects
c(rmse1 = sqrt(mean(residuals(measlesFit, "response")^2)),
rmse2 = sqrt(mean(residuals(measlesFit2, "response")^2)))
}
# Relationship between ampere turns and flux density calculator
### Converting Magnetic Field Strengths - Ampere-Turns to Tesla : askscience
Calculate the amount of magnetic flux (Φ) in a piece of iron with a reluctance (ℜ) of 55 amp-turns per weber and an applied MMF (F) of amp-turns. Explain the difference between relative permeability (µr) and absolute permeability (µ). How do density (B) and field intensity (H, otherwise known as magnetizing force).

Magnetic Field Unit conversion Tool: Unit conversion of magnetic flux density. The relation between magnetic strength H and flux density B can be defined by B = μH. Field strength is defined by the unit of Oe・A/m (Oersted・Ampere/meter).

It is measured in ampere-turns, the electrical current in the coil, measured in we had 5, turns of wire in our coil, we would have 5, times the flux density, .
From A. Messiah’s Quantum Mechanics, p. 321 and 722 ff., Dover (1999)
Any unitary transformation of the vectors and the observables of the Schrödinger (or Heisenberg) representations defines a new representation.
Any problem of quantum mechanics essentially consists of the determination of the properties of the unitary operator $U(t,t_0)$. *
In the intermediate representation, we suppose that the Hamiltonian $H$ can be put in the form:
$\displaystyle H(t)=H_0(t)+V(t)$
where $H_0(t)$ is the Hamiltonian of a Schrödinger equation whose solution is known.
Let $U_0(t,t_0)$ be the evolution operator corresponding to $H_0$. We obtain vectors and observables in the interaction representation by applying the unitary transformation $U_0^{\dagger}(t,t_0)$ to the Schrödinger ones.
$\displaystyle \left|\psi_{\mathrm{I}}(t)\right>=U_0^{\dagger}(t,t_0)\left|\psi_S(t)\right>$
$\displaystyle A_{\mathrm{I}}(t)=U_0^{\dagger}(t,t_0)A_S U_0(t,t_0)$
The evolution operator for the states in the intermediate representation is:
$\displaystyle U_{\mathrm{I}}(t, t_0)=U_0^{\dagger}(t,t_0)U(t,t_0)$
The time dependence of $U_{\mathrm{I}}$ is determined by:
$\displaystyle V_{\mathrm{I}}(t)=U_0^{\dagger}(t,t_0) V(t) U_0(t,t_0)$
The vectors in the intermediate representation move in time satisfying the Schrödinger equation:
$\displaystyle i\hbar\frac{\mathrm{d}}{\mathrm{d}t}\left|\psi_{\mathrm{I}}(t)\right>=V_{\mathrm{I}}(t)\left|\psi_{\mathrm{I}}(t)\right>$
On the other hand, the physical quantities are represented by moving observables that are subject to the Heisenberg equation of motion written with the “unperturbed” Hamiltonian $H^{0}_{\mathrm{I}}(t)$:
$\displaystyle i\hbar\frac{\mathrm{d}}{\mathrm{d}t}A_{\mathrm{I}}(t)=\left[A_{\mathrm{I}}(t),H^{0}_{\mathrm{I}}(t)\right]$
The evolution operator in the interaction representation is:
$\displaystyle U_{\mathrm{I}}(t,t_0)=1-i \hbar^{-1} \int_{t_0}^t V_{\mathrm{I}}(t') U_{\mathrm{I}}(t',t_0) dt'$
This integral equation can be solved by iteration.
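Substituting the right-hand side back into itself for $U_{\mathrm{I}}(t',t_0)$ and repeating generates the standard perturbative (Dyson) expansion:
$\displaystyle U_{\mathrm{I}}(t,t_0)=1+\sum_{n=1}^{\infty}\left(\frac{-i}{\hbar}\right)^{n}\int_{t_0}^{t}dt_1\int_{t_0}^{t_1}dt_2\cdots\int_{t_0}^{t_{n-1}}dt_n\,V_{\mathrm{I}}(t_1)V_{\mathrm{I}}(t_2)\cdots V_{\mathrm{I}}(t_n)$
where the nested integration limits keep the operators time-ordered, $t_1 \ge t_2 \ge \cdots \ge t_n$.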
To fit with the introductions of C. Kittel’s book Quantum Theory of Solids and Abrikosov’s Methods of Quantum Field Theory in Statistical Physics, we note that, if $H_0$ is time independent, then putting $t_0=0$ we have:
$\displaystyle U_0(t,0)=e^{-iH_0 t/\hbar}$
# System solution
• Feb 25th 2014, 11:58 AM
polarbear73
System solution
$x=zx+zy$
$y=zx+2zy$
$x^2+2xy+y^2=100$
Is this system solvable? I can solve for z in terms of x and y, but I can't separate x and y to get a solution. It's three unknowns and three equations, but I don't know if all three unknowns have to be in all three equations. Anyway, if I solve for z I get $x^2+xy=y^2$, but that's as far as I can get. Any suggestions?
Thanks
• Feb 25th 2014, 12:13 PM
romsek
Re: System solution
I'm showing no solutions.
The 3rd equation gets you
$$(x+y)^2=100 \Rightarrow \pm (x+y)=10$$
$$(x=10-y) \vee (x=-(10+y))$$
when you try substituting those into equations 1 and 2 the result is no solution.
• Feb 25th 2014, 12:23 PM
polarbear73
Re: System solution
Hey, thanks for your time. I wrote it down wrong. It should be
$x=zx+zy$
$y=zx+2zy$
$x^2+2xy+2y^2=100$
• Feb 25th 2014, 12:51 PM
JeffM
Re: System solution
Quote:
Originally Posted by polarbear73
$x=zx+zy$
$y=zx+2zy$
$x^2+2xy+y^2=100$
Is this system solvable? I can solve for z in terms of x and y, but I can't separate x and y to get a solution. It's three unknowns and three equations, but I don't know if all three unknowns have to be in all three equations. Anyway, if I solve for z I get $x^2+xy=y^2$, but that's as far as I can get. Any suggestions?
Thanks
$Case\ I:\ x \ne - y\ and\ x \ne - 2y$
$x = zx + zy = z(x + y) \implies z = \dfrac{x}{x + y}.$
$y = zx + 2zy = z(x + 2y) \implies z = \dfrac{y}{x + 2y}.$
So $\dfrac{x}{x + y} = \dfrac{y}{x + 2y} \implies x^2 + 2xy = xy + y^2 \implies x^2 + xy - y^2 = 0 \implies$
$x = \dfrac{- y \pm \sqrt{y^2 + 4 * 1 * y^2}}{2} = \dfrac{- y \pm y\sqrt{5}}{2}.$ Real roots exist for any $y \ne 0$.
$Case\ II:\ x = - y.$
$x = - y \implies x^2 + 2xy + y^2 = x^2 + 2x(-x) + (-x)^2 = x^2 - 2x^2 + x^2 = 0 \ne 100 \implies x \ne - y.$ No solution.
$Case\ III:\ x = - 2y.$
$x = - 2y \implies y = zx + 2zy = z(x + 2y) = z * 0 = 0 \implies x = - 2 * 0 = 0 = y = - y,\ which\ is\ case\ II.$ No solution.
So any solution must come from Case I: substituting $x = \dfrac{(-1 \pm \sqrt{5})y}{2}$ into $x^2 + 2xy + y^2 = (x + y)^2 = 100$ gives $y^2 = \dfrac{200}{3 \pm \sqrt{5}} > 0$, so the system is in fact consistent in the reals.
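As a numerical cross-check (not part of the thread): the quadratic $x^2 + xy - y^2 = 0$ from Case I has real roots, and they satisfy all three original equations:

```python
import math

# Case I root x = (-1 + sqrt(5))/2 * y, with y fixed by (x + y)^2 = 100
s5 = math.sqrt(5)
y = math.sqrt(200 / (3 + s5))   # from y^2 = 200 / (3 + sqrt(5))
x = (-1 + s5) / 2 * y
z = x / (x + y)                 # z from the first equation

# Residuals of the three equations of the system (all ~0):
eq1 = x - (z * x + z * y)             # x = zx + zy
eq2 = y - (z * x + 2 * z * y)         # y = zx + 2zy
eq3 = x**2 + 2 * x * y + y**2 - 100   # (x + y)^2 = 100
```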
40m Log, Page 65 of 344
ID | Date | Author | Type | Category | Subject
17017 | Tue Jul 19 07:34:46 2022 | Anchal | Update | Calibration | Error propagation to astrophysical parameters from detector calibration uncertainty
1. Yeah, that's correct: that equation is normally $\Delta \Theta = -\mathbf{H}^{-1} \mathbf{M} \Delta \Lambda$, but it is different if I define $\Gamma$ a bit differently than I did in the code. Correcting my definition of $\Gamma$ to:
$\Gamma_{ij} = \mu_i \mu_j \left( \frac{\partial g}{\partial \mu_i} | \frac{\partial g}{\partial \mu_j} \right )$
then the relation between fractional errors of detector parameter and astrophysical parameters is:
$\frac{\Delta \Theta}{\Theta} = - \mathbf{H}^{-1} \mathbf{M} \frac{\Delta \Lambda}{\Lambda}$
I prefer this as the relation between fractional errors is a dimensionless way to see it.
2. Thanks for pointing this out. I didn't see these parameters used anywhere in the examples (in fact there is no t_c in the documentation even though it works). Using these did not affect the shape of the error propagation slope function vs frequency, but it reduced the slope for the chirp mass $M_c$ by a couple of orders of magnitude.
1. I used the get_t_merger(f_gw, M1, M2) function from Hang's work to calculate t_c by assuming $f_{gw}$ must be the lowest frequency that comes within the detection band during inspiral. This function is:
$t_c = \frac{5}{256 \pi^{8/3}} \left(\frac{c^3}{G M_c}\right)^{5/3} f_{gw}^{-8/3}$
For my calculations, I've taken $f_{gw}$ as 20 Hz.
2. I used the get_f_gw_2(f_gw_1, M1, M2, t) function from Hang's work to calculate the evolution of the frequency of the IMR defined as:
$f_{gw}(t) = \left( f_{gw0}^{-8/3} - \frac{768}{15} \pi^{8/3} \left(\frac{G M_c}{c^3}\right)^{5/3} t \right)^{-3/8}$
where $f_{gw0}$ is the frequency at t=0. I integrated this frequency evolution over the coalescence time $t_c$ to get the coalescence phase $\phi_c$:
$\phi_c = \int^{t_c}_0 2 \pi f_{gw}(t) dt$
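The two formulas above can be combined into a short script. This is a paraphrase: get_t_merger and get_f_gw_2 are the function names quoted from Hang's work, but their bodies here are reconstructed from the equations, and the BNS masses are example values, not the ones used in the entry.

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
MSUN = 1.989e30  # solar mass, kg

def chirp_mass(m1, m2):
    """Chirp mass M_c in kg."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def get_t_merger(f_gw, m1, m2):
    """Time to coalescence t_c from the lowest in-band GW frequency (formula above)."""
    Mc = chirp_mass(m1, m2)
    return 5 / (256 * math.pi ** (8 / 3)) * (c ** 3 / (G * Mc)) ** (5 / 3) * f_gw ** (-8 / 3)

def get_f_gw_2(f_gw_1, m1, m2, t):
    """GW frequency evolution f_gw(t), starting from f_gw_1 at t = 0 (formula above)."""
    Mc = chirp_mass(m1, m2)
    arg = f_gw_1 ** (-8 / 3) - (768 / 15) * math.pi ** (8 / 3) * (G * Mc / c ** 3) ** (5 / 3) * t
    return arg ** (-3 / 8)

# Example: a 1.4 + 1.4 Msun BNS entering the band at 20 Hz (assumed values)
m1 = m2 = 1.4 * MSUN
f0 = 20.0
t_c = get_t_merger(f0, m1, m2)

# phi_c = integral_0^{t_c} 2 pi f_gw(t) dt; with the f_gw(t) above this
# integral evaluates in closed form to (16 pi / 5) * f_gw0 * t_c
phi_c = 16 * math.pi / 5 * f0 * t_c
```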
3. In Fig 1, which representation makes more sense, loglog or linear axes? Regarding the effect of uncertainties on tidal amplitude below 500 Hz, I agree that I was also expecting more contribution from higher frequencies. I did find one bug in my code that I corrected, but it did not affect this point. Maybe the SNR of the chosen BNS parameters (which is ~28) is too low for tidal information to come through reliably anyway, and the curve is just an inverse of the strain noise PSD, i.e. all the information is dumped below statistical noise. Maybe someone else can also take a look at the get_fisher2() function that I wrote to do this calculation.
4. Now I have made the BBH parameters such that the spins of the two black holes are assumed the same along z. You were right, the gamma matrix was degenerate before. To your second point, I think the curve also shows that above ~200 Hz there is not much contribution to the uncertainty of any parameter, and it rolls off very steeply. I've reduced the y-span of the plot to see the details of the curve in the relevant region.
Quote:
1. In the error propagation equation, it should be \Delta \Theta = -H^{-1} M \Delta \Lambda, instead of the fractional error.
2. For the astro parameters, in general you would need t_c for the time of coalescence and \phi_c for the phase. See, e.g., https://ui.adsabs.harvard.edu/abs/1994PhRvD..49.2658C/abstract.
3. Fig. 1 looks very nice to me, yet I don't understand Fig. 3... Why would phase or amplitude uncertainties at 30 Hz affect the tidal deformability? The tide should be visible only > 500 Hz.
4. For BBH, we don't measure individual spin well but only their mass-weighted sum, \chi_eff = (m_1*a_1 + m_2*a_2)/(m_1 + m_2). If you treat S1z and S2z as free parameters, your matrix is likely degenerate. Might want to double-check. Also, for a BBH, you don't need to extend the signal much higher than \omega ~ 0.4/M_tot ~ 10^4 Hz * (Ms/M_tot). So if the total mass is ~ 100 Ms, then the highest frequency should be ~ 100 Hz. Above this number there is no signal.
17029 | Sun Jul 24 08:56:01 2022 | Hang | Update | Calibration | Error propagation to astrophysical parameters from detector calibration uncertainty
Sorry I forgot to put tc & phic in the example.
I modified astroFisherLib.py to include these parameters. Please note that their meaning is that we don't know when the signal happens and at which phase it merges.
It does not mean the time & phase from a reference frequency to the merger. This part is not free to vary because it is fixed by the intrinsic parameters.
It might be good to have a quick scan through the Cutler & Flanagan 94 paper to better understand their physical meanings.
17221 | Wed Nov 2 16:34:56 2022 | Paco | Summary | Calibration | Single arm calibration run
[Anchal, Paco]
We added a notch filter on ETMY (the actuation point of the YARM control loop) to inject our calibration line at 575.170 Hz. The excitation is injected using the DARM Oscillator, with an exc. gain of ~ 500 (this gets us a decent > 10 SNR line in the ALS Y beat). With the arm cavity locked to the PSL (~150 Hz control bandwidth), and the aux laser locked to the cavity (~10 kHz control bandwidth) the goal of this run is to calibrate our actuator strength and more importantly to budget its uncertainty. For this we have looked at the ALS beat stability using Allan statistics; we noticed the ALS beatnote frequency fluctuations start to become dominated by 1/f (or divergent noise due to systematic drifts in the YARM loop) after 10 seconds (see Attachment #1) (we have managed to see 30 seconds stability with the HEPAs off and without locking to IMC).
Our prediction is that our demodulated calibration lines will display the least residual rms noise when averaging down to around this time. This is the main reason one would use Allan statistics: to quantify the separation between statistical and systematic effects in a frequency measurement. To be continued...
gpstimes:
• 1351464997 to 1351467139
• 1351467259 to 1351468221
• 1351468318 to ...
17253 | Thu Nov 10 17:40:31 2022 | Paco | Summary | Calibration | Calibration Plan
Plan to calibrate single arm actuation strength
1. Lock single arm cavity (e.g. YARM)
2. Lock YAUX laser to arm cavity (actuation point is ETMY)
3. With the notch on the YARM loop filter (actually on ETMY),
4. Turn on cal line (e.g. DARM osc) to move ITMY; here the frequency is chosen to be away from 600 Hz (line harmonic) and from violin modes for ITMY (642 Hz). The lower value of 575.17 Hz was chosen to avoid demodulating noise peaks at 455 Hz and 700 Hz.
5. Get raw YALS beatnote (we chose the demod angle of -35 deg to minimize Q).
The analysis is as follows:
1. Get demodulated IQ timeseries for the duration of the locks before lowpass filter (C1:CAL-SENSMAT_DARM_BEATYF_I_DEMOD_I_IN1); we are also storing the raw beatnote if we want to do software demodulation.
2. Look at the Allan deviation of I and Q to establish the timescale over which our measurement is dominated by statistical uncertainty -- after this time, the uncertainty is expected to be due to systematic error / drift. In this case, as shown by Attachment #1, the time is around 60.6 seconds.
3. At this frequency and with 500 gain the ITMY coils should be actuating 7.32 pm of amplitude displacement.
4. The minimum Allan deviation does indeed predict the statistical-uncertainty-limited rms if we look at the power spectra of the demodulated cal line over different time periods (Attachment #2); notice I lowpassed the raw timeseries.
5. I think the next step is to get the nominal calibration value and repeat the measurement for more than a single cal line.
6. Roughly from the deviation plot, our fractional beatnote deviation is a proxy for the calibration uncertainty. 1.15e-16 of beatnote stability should translate to a fractional displacement stability of ~4.57e-15 at 60 seconds, giving an ultimate statistical calibration uncertainty of 0.06% at this particular frequency when averaging for this long. It might be interesting to see a calibration-frequency-dependent Allan deviation plot.
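For reference, a minimal sketch of the non-overlapping Allan deviation computation described above; the function and demo values are mine, not the actual analysis code:

```python
import math

def allan_deviation(samples, dt, m):
    """Non-overlapping Allan deviation at averaging time tau = m * dt.

    samples: timeseries values; dt: sample period (s); m: samples per block.
    """
    nblocks = len(samples) // m
    # average the data in consecutive blocks of length m
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(nblocks)]
    # Allan variance: half the mean squared difference of adjacent block means
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(nblocks - 1)]
    return math.sqrt(0.5 * sum(diffs) / len(diffs))

# Sanity check: a pure linear drift y = k*t gives adev(tau) = k*tau/sqrt(2),
# i.e. the systematic-drift-dominated regime where the deviation grows with tau.
dt, k = 1.0, 2.0
drift = [k * i * dt for i in range(1000)]
adev = allan_deviation(drift, dt, 10)   # tau = 10 s
```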
17257 | Fri Nov 11 14:15:45 2022 | Paco | Summary | Calibration | Single arm cal with 5 lines
I turned all the LSC oscillators on and used the digital demod for BEATYF (fine y als beat) to grab the data. For this I added notches onto SCY ETMY LSC filter banks FM6-10 to account for these lines at 30.92, 211.10, 313.31, 315.17 Hz (basically just reusing the osc models) and adjusted the sensing matrix to actuate on ITMY.
I aligned and locked YARM, and then I aligned and locked the YAUX. The lock seems pretty robust with an avg green transmission GTRY ~ 0.185 counts for TEM00.
Trying to see other lines appear on the BEATYF demod channels, but so far no luck. I scaled down the exc gain from 500 counts (snr ~ 20 at 575 Hz) and verified the notches are working. Since I am unsure of the issue here and WFS tests are happening at 4 today, I decided to take some beat data under different conditions -->
• HEPA OFF and PSL Shutter Open: gpstime start = 1352244763, gpstime end = 1352245405
• HEPA OFF and PSL Shutter Closed: gpstime start = 1352245574, gpstime end = 1352246216
• HEPA ON and PSL Shutter Closed: gpstime start = 1352246240, gpstime end = 1352246882
17266 | Tue Nov 15 11:04:01 2022 | Paco | Summary | Calibration | Single arm cal with 5 lines
## YALS Hardware inspection
The ALS Cal for ITMY actuation was off by ~ 1000, so I decided I don't trust / understand what this beatnote is seeing. Then, I went in the lab to inspect things;
1. ALS DFD and DEMOD: The demod box was off perhaps on purpose, more likely by accident... the switch was off in the rear of the 1U box in the LSC rack... Also, the DB9 cable labeled ALS was disconnected. Fixed both these bugs and verified it worked all the way into cds.
2. YARM Green injection: Did some re-alignment, and noted the mode was hopping a bit too much (once every couple of seconds) so I rotated the half waveplate before the green PZT steering mirrors by ~ 8 deg and used the latter to get a GTRY transmission of > 0.320 (counts), about 75% more than last time. Finally I made sure the mode is robustly locked for several minutes.
## Beatnote calibration
The factor above may be explained by the bogus signals coming into the beat fine phase channels on the ALS model. After locking the YARM with POY11, and locking the YAUX to the YARM cavity, I turned on the LSC oscillators -- all five of them see Attachment #1 for the screenshot -- and looked for the lines in the C1:ALS-BEATY_FINE_PHASE_OUT channel. Here, again the sensing output matrix was set up to actuate on ITMY, while the ETMY (control point of YARM loop) had all the output notches on. Once all lines were visible in the YAUX beatnote, I had to reduce the LSC filter gain from 0.012 to 0.011 to prevent loop oscillations... Then I recorded the gpstimes below with different conditions.
• HEPA ON (JC Inside lab)
• gpstime = 1352575798 to 1352576046
• HEPA OFF (JC Inside lab)
• gpstime = 1352576125 to 1352576216
• HEPA OFF (JC in the control room)
• gpstime = 1352576641 to 1352577591
## Analysis
Basically, only the DARM line was recorded (DQ channels), so I modified the c1cal model to store the SIG_OUT and DEMOD_I_IN1 channels for both BEATX and BEATY cal signals. This means I need to repeat this measurement. In the meantime I am also going to try to re-run the calibration of the BEAT Hz transfer function.
17291 | Mon Nov 21 11:52:50 2022 | Radhika | Summary | Calibration | Single arm cal with 5 lines
We set out to realign the YARM AUX laser input into the arm cavity.
- We noticed that the GTRY beam was way off the center of the screen, so we went to the vertex table to align the camera.
- The beam spot at GTRY PD was large/divergent, so we shifted the PD closer to the penultimate mirror. We also doubled the PD gain. Transmission went from ~0.3 to ~0.7 (with gain doubled).
- We returned to the YARM end table to finalize alignment with the green PZT steering mirrors. GTRY was maximized to ~0.77.
17328 | Wed Nov 30 20:01:08 2022 | rana | Summary | Calibration | Single arm cal with 5 lines
I don't think you need to record the excitations. They are just sine waves. The amplitude you can read off from the OSC screen. You just have to have the BEAT channel recorded and you can demod it to get the calibration. If the BEAT channel is calibrated in Hz, and you know the 40m arm length, then you're all done.
Quote:
## Analysis
Basically, only the DARM line was recorded (DQ channels), so I modified the c1cal model to store the SIG_OUT and DEMOD_I_IN1 channels for both BEATX and BEATY cal signals. This means I need to repeat this measurement. In the meantime I am also going to try to re-run the calibration of the BEAT Hz transfer function.
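Rana's suggestion amounts to a few lines of offline processing; here is a sketch with made-up numbers (a 16384 Hz sample rate and a synthetic 575.17 Hz line) rather than the real channel data:

```python
import math

def demod_amplitude(x, fs, f_line):
    """Recover the amplitude of a sine at f_line from timeseries x via offline I/Q demod.

    Multiplies by cos/sin at the line frequency and averages (the average acts
    as the lowpass filter); amplitude = 2 * sqrt(I^2 + Q^2).
    """
    n = len(x)
    i_avg = sum(x[k] * math.cos(2 * math.pi * f_line * k / fs) for k in range(n)) / n
    q_avg = sum(x[k] * math.sin(2 * math.pi * f_line * k / fs) for k in range(n)) / n
    return 2 * math.hypot(i_avg, q_avg)

# Synthetic beat channel: a 575.17 Hz line of amplitude 7.32 (arbitrary units)
fs, f_line, amp = 16384.0, 575.17, 7.32
x = [amp * math.sin(2 * math.pi * f_line * k / fs + 0.3) for k in range(16384)]
a_est = demod_amplitude(x, fs, f_line)  # recovers ~7.32
```

With the BEAT channel calibrated in Hz, the recovered line amplitude converts to displacement through the arm length, as rana notes above.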
17329 | Thu Dec 1 20:43:25 2022 | Anchal | Summary | Calibration | Single arm cal with 5 lines
[Anchal, Paco]
We are doing this attempt again in the following configuration:
• PSL shutter is closed. (So IR laser is free running)
• Beatnote frequency between the Y arm and the main laser is about 45 MHz.
• Green laser on Y end is locked. Transmission is above 1.1 (C1:ALS-TRY_OUT)
• All calibration oscillators are turned on and set to actuate ITMY. See screenshot attached.
• The calibration model was changed to demodulate the C1:ALS-BEATY_FINE_PHASE_OUT channel instead. We'll have DQ channels before mixing with the oscillator, after mixing, and also after applying a 4th-order 30 Hz Butterworth filter.
Start time:
PST: 2022-12-01 20:44:23.982114 PST
UTC: 2022-12-02 04:44:23.982114 UTC
GPS: 1353991481.982114
Stop time:
PST: 2022-12-02 14:32:29.547603 PST
UTC: 2022-12-02 22:32:29.547603 UTC
GPS: 1354055567.547603
17346 | Thu Dec 8 16:21:40 2022 | Paco | Summary | Calibration | ITMY actuation strength cal with 5 lines
[Anchal, Paco]
After debugging the hardware, on gpstime 1354422834 we turned on 5 cal lines on ITMY to test the ALS calibration for the single arm along with our error estimates.
Note: the YARM IR lock lasted > 8 hours, but the GRY transmission dropped twice during the evening and hopped back up, so the phase tracker jumped a couple of times.
### Configuration
YAUX laser was locked to the YARM through the analog PDH servo (UGF ~ 2 kHz), YARM was locked to the PSL with POY11 (UGF ~ 200 Hz), and the ALS phase tracker was set to output the beat frequency noise in Hz. HEPA was left on during this measurement. The oscillators were similar to previous instances: gains of 70@211.1Hz, 100@313.31Hz, 100@315.17Hz, 300@575.17Hz and 15@30.92Hz, with appropriate notches on ETMY to avoid POY11 loop suppression.
### Analysis
For YARM, the high bandwidth YAUX laser loop with transfer function G ensures that the relative laser frequency fluctuations correlate with the relative length fluctuations as:
$\frac{\delta \nu}{\nu} = \frac{G}{1 - G} {\frac{\delta L}{L}}$
Then, getting the magnitude of the YARM displacement at calibration frequencies is possible by knowing the arm cavity length, open loop gain, and absolute frequency (wavelength). The relative calibration error on the magnitude of the displacement is
$\frac{\Delta \lvert \delta L\lvert}{\lvert \delta L \lvert} = \left[ \left(\frac{\Delta {L}}{L}\right)^2+ \left(\frac{\Delta {\lambda}}{\lambda}\right)^2 + \left(\frac{\Delta {{\delta \nu}}}{{\delta \nu}}\right)^2 + \left(\frac{\Delta \lvert G \lvert}{\lvert G \lvert(\lvert G \lvert - 1)}\right)^2 \right]^{1/2}$
including the relative uncertainties in the YARM length, wavelength, and open loop gain. Interestingly, the loop gain term weighs proportionally less as G increases, so even if G = 100 (10), its relative error contribution would be < 1%. To estimate our total error, we assume the wavelength and YARM length are 1064.1(5) nm and 37.79(1) m, and add the frequency dependent values for G with 10% error. Finally, we use the rms ASD to estimate the relative error from the beatnote fluctuation measurement.
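Plugging the stated numbers into the quadrature-sum formula above gives an idea of the scale; the wavelength, arm length, and 10% loop gain error are the entry's values, but the beatnote term here is a placeholder for illustration:

```python
import math

def rel_cal_error(dL_over_L, dlam_over_lam, ddnu_over_dnu, G, dG_over_G):
    """Relative calibration error on |dL|: quadrature sum per the formula above."""
    # loop gain term: Delta|G| / (|G| (|G| - 1)), with Delta|G| = (dG/G) * |G|
    loop_term = dG_over_G * abs(G) / (abs(G) * (abs(G) - 1))
    return math.sqrt(dL_over_L ** 2 + dlam_over_lam ** 2
                     + ddnu_over_dnu ** 2 + loop_term ** 2)

# Entry's numbers: L = 37.79(1) m, lambda = 1064.1(5) nm, |G| = 100 with 10% error;
# the beatnote fractional error (1e-3) is an assumed placeholder.
err = rel_cal_error(0.01 / 37.79, 0.5 / 1064.1, 1e-3, 100.0, 0.10)
```

Note how the loop gain term contributes only ~0.1% here even with a 10% error on G, consistent with the remark above that it weighs less as G grows.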
The measurement was done similar to other instances, taking the 'C1:ALS-BEATY_FINE_PHASE_OUT_HZ_DQ' timeseries (sampled at 16 kHz) and demodulating at the calibration frequencies above to get the mean YAUX laser frequency fluctuation and its uncertainty from the demodulated rms ASD.
### Results
Attachment #1 shows the raw timeseries, Attachment #2 shows the spectra around the cal lines, Attachment #3 shows the demodulated timeseries, Attachment #4 shows the final result for the 5 lines, including the tallied errors as detailed above.
ITMY actuation = 4.92(11) nm / count / f^2
### Discussion
We compared our results from Attachment #4 against a MICH-referenced ITMY actuation calibration found here, for which Yuta guesstimated a 10% uncertainty (gray shaded band in Attachment #4). An important correction came for the 575 Hz line, not just because the YAUX OLG is small there but because a violin filter on the ITMY LSC output has a 1.4475 gain bump. In fact we collected all the additional digital gains from the ITMY output filters:
| Line frequency | 30.92 Hz | 211.1 Hz | 313.31 Hz | 315.31 Hz | 575.17 Hz |
|---|---|---|---|---|---|
| Digital gain | 1.00007 | 1.0034 | 1.0101 | 1.01017 | 1.4475 |
### Next
• Consider moving the 575 Hz line to avoid additional digital gain, but try to remain at high frequency.
• Maybe here we can use the resonant gain MOKU filters that Radhika is designing.
• Setup a live loop gain calibration to reduce the uncertainty for the high frequency cal line.
• We can also just grab GTRY (transmission) as a proxy for optical gain and use for budgeting.
• Work on setting up constraints for error mitigation based on allan deviation of the beatnote and PDH nonlinearity.
• Move this to FPMI or some other lock configuration
10436 | Thu Aug 28 11:02:53 2014 | Steve | Update | Calibration-Repair | SR785 repair
SN 46,795 of 2003 is back.
11641 | Thu Sep 24 17:06:14 2015 | ericq | Update | Calibration-Repair | C1CAL Lockins
Just a quick note for now: I've repopulated C1CAL with a limited set of lockin oscillators/demodulators, informed by the aLIGO common LSC model. Screens are updated too.
Rather than trying to do the whole magnitude/phase decomposition, it just does the demodulation of the RFPD signals online; everything beyond that is up to the user to do offline.
Briefly testing with PRMI, it seems to work as expected. There is some beating evident from the fact that the MICH and PRCL oscillation frequencies are only 2Hz apart; the demod low pass is currently at an arbitrary 1Hz, so it doesn't filter the beat much.
Screens, models, etc. all svn'd.
12040 | Mon Mar 21 14:29:32 2016 | Steve | Update | Calibration-Repair | 1W Innolight laser repair diagnoses
Quote:
Quote: After adjusting the alignment of the two beams onto the PD, I managed to recover a stronger beatnote of ~ -10 dBm. I managed to take some measurements with the PLL locked, and will put up a more detailed post later in the evening. I turned the IMC autolocker off, turned the 11 MHz Marconi output off, and closed the PSL shutter for the duration of my work, but have reverted these to their nominal state now. There are a few extra cables running from the PSL table to the area near the IOO rack where I was doing the measurements from; I've left these as is for now in case I need to take some more data later in the evening...
Innolight 1W 1064 nm, sn 1634, was purchased on 9-18-2006 at CIT. It came to the 40m around 2010.
Its diodes should be replaced, based on its age and performance.
RIN and noise eater are bad. I will get a quote on this job.
The Innolight manual frequency noise plot is the same as Lightwave's (elog 11956).
Diagnosis from Glasgow:
“So far we have analyzed the laser. The pump diode is degraded. Next we would replace it with a new diode. We would realign the diode output beam into the laser crystal. We check all the relevant laser parameters over the whole tuning range. Parameters include single direction operation of the ring resonator, single frequency operation, beam profile and others. If one of them is out of spec, then we would take actions accordingly. We would also monitor the output power stability over one night. Then we repackage and ship the laser.”
12045 | Thu Mar 24 07:56:09 2016 | Steve | Update | Calibration-Repair | NO Noise Eater for 1W Innolight
The 1W Innolight is NOT getting a Noise Eater, as was decided yesterday at the 40m meeting. Corrected 3-25-2016.
The repair quote, with the noise eater option added, is in the 40m wiki.
Quote: (see the diagnosis quoted in full in entry 12040 above)
12070 | Mon Apr 11 17:03:41 2016 | Steve | Update | Calibration-Repair | 1W Innolight repair completed
The laser is back. Test report is in the 40m wiki as New Pump Diode Mephisto 1000
It will go on the PSL table.
13456 | Tue Nov 28 17:27:57 2017 | awade | Bureaucracy | Calibration-Repair | SR560 return, still not charging
I brought a bunch of SR560s over for repair from Bridge labs. This unit, picture attached (SN 49698), appears to still not be retaining charge. I’ve brought it back.
14759 | Mon Jul 15 03:30:47 2019 | Kruthi | Update | Calibration-Repair | White paper as a Lambertian scatterer
I made some rough measurements, using the setup I had used for CCD calibration, to get an idea of how good of a Lambertian scatterer the white paper is. Following are the values I got:
| Angle (degrees) | Photodiode reading (V) | Ps (W) | BRDF (per sr) | % error |
|---|---|---|---|---|
| 12 | 0.864 | 2.54E-06 | 0.334 | 20.5 |
| 24 | 0.926 | 2.72E-06 | 0.439 | 19.0 |
| 30 | 1.581 | 4.65E-06 | 0.528 | 19.0 |
| 41 | 0.94 | 2.76E-06 | 0.473 | 19.8 |
| 49 | 0.545 | 1.60E-06 | 0.423 | 22.5 |
| 63 | 0.371 | 1.09E-06 | 0.475 | 28 |
Note: All the measurements are just rough ones and are prone to larger errors than estimated.
I also measured the transmittance of the white paper sample being used (it consists of 2 white papers wrapped together). It was around 0.002
14804 | Wed Jul 24 04:20:35 2019 | Kruthi | Update | Calibration-Repair | MC2 pitch and yaw calibration
Summary: I calibrated MC2 pitch and yaw offsets to spot position in mm. Here's what I did:
1. Changed the MC2 pitch and yaw offset values using ezca.Ezca().write('IOO-MC2_TRANS_PIT_OFFSET', <pitch offset value> ) and ezca.Ezca().write('IOO-MC2_TRANS_YAW_OFFSET', <yaw offset value> )
2. Waited for ~ 700-800 sec for system to adjust to the assigned values
3. Took snapshots with the 2 GigEs I had installed - zoomed in and zoomed out. (I'll be using these to make a scatter loss map, verify the calibration results, etc)
4. Ran the mcassDecenter script, which can be found in /scripts/ASS/MC. This enters the spot position in mm in the specified text file.
Results: In the pitch/yaw vs pitch_offset/yaw_offset graph attached,
• intercept_pitch = 6.63 (in mm) , slope_pitch = -0.6055 (mm/counts)
• intercept_yaw = -4.12 (in mm) , slope_yaw = 4.958 (mm/counts)
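Inverting the linear fit gives the offset that would center the spot (spot position = 0 mm); a quick sketch using the fit numbers above, not part of the original entry:

```python
def centering_offset(intercept_mm, slope_mm_per_count):
    """Offset (counts) at which the fitted spot position crosses 0 mm.

    The fit is spot_mm = slope * offset + intercept, so the zero crossing
    is at offset = -intercept / slope.
    """
    return -intercept_mm / slope_mm_per_count

# Fit values from the entry above
pit_offset = centering_offset(6.63, -0.6055)  # counts to center the spot in pitch
yaw_offset = centering_offset(-4.12, 4.958)   # counts to center the spot in yaw
```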
15510 | Sat Aug 8 07:36:52 2020 | Sanika Khadkikar | Configuration | Calibration-Repair | BS Seismometer - Multi-channel calibration
Summary :
I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. I noticed while looking at the combined time series and the gain plots of the 3 seismometers that there is some error in the calibration of the BS seismometer. The EX and the EY seismometers seem to be well-calibrated as opposed to the BS seismometer.
The calibration factors have been determined to be :
BS-X Channel: $2.030 \pm 0.079$
BS-Y Channel: $2.840 \pm 0.177$
BS-Z Channel: $1.397 \pm 0.182$
Details :
The seismometers each have 3 channels, i.e. X, Y, and Z, for measuring the displacements in all 3 directions. The X channels of the three seismometers should more or less be coherent in the absence of any seismic excitation, with the gain amongst all the similar channels being 1; so is the case with the Y and Z channels. After analyzing multiple datasets, it was observed that the values of all three channels of the BS seismometer differed very significantly from their corresponding channels in the EX and the EY seismometers, and they were not calibrated even in the region where they were found to be coherent.
Method :
Note: All the frequency domain plots that have been calculated are for a sampling rate of 32 Hz. The plots were found to be extremely coherent in a certain frequency range i.e ~0.1 Hz to 2 Hz so this frequency range is used to understand the relative calibration errors. The spread around the function is because of the error caused by coherence values differing from unity and the averages performed for the Welch function. 9 averages have been performed for the following analysis keeping in mind the needed frequency resolution(~0.01Hz) and the accuracy of the power calculated at every frequency.
1. I first analyzed the regions in which the similar channels were found to be coherent to have a proper gain analysis. The EY seismometer was found to be the most stable one so it has been used as a reference. I saw the coherence between similar channels of the 2 seismometers and the bode plots together. A transfer function estimator was used to analyze the relative calibration in between all 3 pairs of seismometers. In the given frequency range EX and EY have a gain of 1 so their relative calibration is proper. The relative calibration in between the BS and the EY seismometers is not proper as the resultant gain is not 1. The attached plots show the discrepancies clearly :
• BS-X & EY-X Transfer Function : Attachment #1
• BS-Y & EY-Y Transfer Function : Attachment #2
The gain in the given frequency range is ~3. The phase plot also shows a 180-degree phase as opposed to 0, so a negative sign would also be required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around ~3.
• BS-Z & EY-Z Transfer Function : Attachment #3
The mean value of the gain in the given frequency range is the desired calibration factor and the error would be the mean of the error for the gain dataset chosen which is caused due to factors mentioned above.
Note: The standard error envelope plotted in the attached graphs is calculated as follows :
1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later.
2. Calculate PSD for every segment (no averaging).
3. Calculate the standard error for every value in the data segment by looking at distribution formed by the n number values we obtain by taking that respective value from every segment.
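The three steps above can be sketched as follows; this is my paraphrase of the recipe (segmenting, per-segment periodograms, standard error across segments), not the analysis code itself:

```python
import cmath
import math

def segment_psd_stderr(x, nseg):
    """Per-frequency mean PSD and standard error across nseg equal segments."""
    seglen = len(x) // nseg
    segs = [x[i * seglen:(i + 1) * seglen] for i in range(nseg)]
    psds = []
    for s in segs:
        # periodogram of one segment (no averaging): |DFT|^2 / N per bin
        dft = [sum(s[n] * cmath.exp(-2j * math.pi * k * n / seglen)
                   for n in range(seglen)) for k in range(seglen // 2 + 1)]
        psds.append([abs(v) ** 2 / seglen for v in dft])
    nfreq = seglen // 2 + 1
    mean_psd = [sum(p[k] for p in psds) / nseg for k in range(nfreq)]
    # standard error of the mean across the nseg per-segment values
    stderr = [math.sqrt(sum((p[k] - mean_psd[k]) ** 2 for p in psds) / (nseg - 1))
              / math.sqrt(nseg) for k in range(nfreq)]
    return mean_psd, stderr

# Demo with 9 segments (matching the 9 averages above): a signal whose period
# equals the segment length gives identical segments, hence ~zero standard error.
x = [math.sin(2 * math.pi * n / 16) for n in range(144)]
mean_psd, stderr = segment_psd_stderr(x, 9)
```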
Discussions :
The BS seismometer is a different model than the EX and the EY seismometers, which might be a major reason why we need a special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisitions. The time series plots in Attachment #4 show an evident DC offset present in the data. All of the information mentioned above indicates that there is some electrical or mechanical defect present in the seismometer, and it may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.
227 Tue Jan 8 15:20:17 2008 PkpUpdateCamerasGigE update
[Tobin , Pinkesh]
Finally we got the camera doing something (other than giving out its attributes). The only thing that seems to work so far is a program called AAviewer, which converts the image into an ASCII format and displays it on the screen. If you want to play around with it, log into mafalda (131.215.113.23) via rana.ligo.caltech.edu. Go to /cvs/cds/caltech/target/Prosilica/bin-pc/x86/ and there should be a few programs in there, one of which is AAviewer; it requires the camera's IP address (currently 131.215.113.103). (You can also get the IP information via the ListCameras program.) The camera is physically in the 40m near the network rack.
Other programs don't seem to be working, probably due to the network/packet-size issues. Since linux2 can change its packet size to a higher number, I will get the code to compile on linux2 for now and then give it a shot.
234 Thu Jan 10 13:45:52 2008 PkpUpdateCamerasGLIBC Error
So, I have tried to compile the camera files in /cvs/cds/caltech/target/Prosilica/examples for the past 2 days now and have been unable to get rid of the following error (specifically for ListCameras.cpp, as it doesn't require any other libraries, which would unnecessarily complicate things):
../../bin-pc/x86/libPvAPI.so: undefined reference to `__stack_chk_fail@GLIBC_2.4'
collect2: ld returned 1 exit status
make: *** [sample] Error 1
I used to get this error on mafalda too, but I had fixed it by installing the latest version of the glibc libraries. In spite of doing the same on linux2, the error still persists. I suspect it has something to do with linux2 being an FC3 machine. My own laptop, which also runs Ubuntu, works fine too. The problem with these Ubuntu machines is that they don't let me set the packet size to the 9 kB required by the camera; linux2 does.
If anyone has any idea how to resolve this issue, please let me know.
Thanks
Pinkesh.
236 Fri Jan 11 17:01:51 2008 pkpUpdateCamerasGigE again
So, here I detail all the efforts on the GigE so far
(1) The GigE camera requires a minimum packet size of 9 kB, which is not available on mafalda or on my laptop (both of which run Ubuntu, and the camera programs compile there). The programs which require smaller sizes work perfectly fine on these machines. I tried to statically compile the files on these machines so that I could then port them to the other machines, but that fails because the static libraries provided by the company don't work.
(2) On linux2, which lets me set a packet size as high as 9 kB, the code doesn't compile because of a GLIBC error. I tried updating glibc and it tells me that the existing version is already the latest (which it clearly is not). So I tried to uninstall glibc and reinstall it, but rpm won't let me uninstall glibc, since there are a lot of dependencies. A dead end, in essence.
Steps being taken
(1) Locally installing the whole library suites on linux2. Essentially install another version of gcc and g++ and see if that helps.
(2) If this doesn't work, then the only course of action I can take is to cannibalize linux2's GigE card and put it on mafalda. (I need permission for this.)
Once again any suggestions welcome.
245 Thu Jan 17 15:11:13 2008 josephbUpdateCamerasWorking on Mafalda
1) I can statically compile the ListCameras code (which basically just goes out and finds what cameras are connected to the network) on Mafalda and use that compiled code to run on linux2 without a problem. I simply needed to add explicit links to libpthread.a and librt.a.
(i.e. -Bstatic -L /usr/lib/ -lpthread -Bstatic -L /usr/lib -lrt)
With appropriate static libraries, it should be possible to port this code to other linux machines even if we can't get it to compile on the target machine itself.
2) I've modified the Snap.cpp file so that it uses a packet size of 1000 or less. This simply involves setting the "PacketSize" attribute with the built-in functions provided in their library. After un-commenting some lines in that code, I was able to save tiff-type images from the camera of up to 400x240 pixels on Mafalda. The claimed maximum resolution for the camera is 752x480, but it doesn't seem to work with the current setup. The maximum number of pixels seems to be about 100 times the packet size, i.e. a packet size of 1000 will allow up to 400x240 (96,000 pixels) but not 500x240 (120,000). Not sure if this is an issue just with the Snap code or with the general libraries used.
3)Will be working towards getting video running over the next day or so.
266 Fri Jan 25 11:38:16 2008 josephbConfigurationCamerasWorking GiGE video on Linux - sort of
1)I have been able to compile the SampleViewer program which can stream the video from the Prosilica 750C camera. This was accomplished on my 64-bit laptop running Ubuntu, after about 3 hours of explicitly converting strings to wxStrings and back again within the C++ code. (There was probably an easier way to simply overload the functions that were being called, but I wasn't sure how to go about doing so). By connecting it to the CDS network, I was able to immediately detect the camera and display the images.
Unfortunately, I have not yet been able to get it to compile on Mafalda with the x86 architecture. This may be due to the fact that it has wxWidgets version 2.8.7 while my laptop has 2.8.4. Certainly the failure at compile time looks different from the earlier errors, and seems to be within the wxWidgets code rather than the SampleViewer code. I may simply need to uninstall version 2.8.7 of wxWidgets and install 2.8.4.
The modified code that will compile on my machine has been copied to /cvs/cds/caltech/target/Prosilica/examples/SampleViewer2b.
2)The Snap program (under /cvs/cds/caltech/target/Prosilica/examples/Snap) also will now take full resolution images even on Mafalda. This was achieved by reducing the packet size to 1000 and also increasing the wait until timeout time up to 400 ms, which originally was at 100. Apparently, it takes on the order of 1 ms per packet as far as I can tell. So full resolution at 752x480 required something of order 360 packets.
To Do:
1) Get sample viewer to compile on Mafalda, and then statically compile it so it can be run from any Linux based machine.
2) Get a user friendly version of Snap up and running, statically compiled, with options for a continuous loop every X seconds and also to set desired parameters (such as height, width, file name to save to, save format, etc).
3) Figure out data analysis with the images in Matlab and an after the fact image viewer.
Attached is an example .tiff image from the Snap program.
267 Fri Jan 25 13:36:13 2008 josephbConfigurationCamerasWorking GiGE video on Mafalda
Finally got the GiGE camera sample viewer video running on Mafalda by updating to the latest API (version 1.16 from Dec 16, 2007) from Prosilica and then using the modified Sample Viewer code I had written. The API version previously in cvs was 1.14.
It can currently be run by ssh -X into Mafalda and going to /cvs/cds/caltech/target/Prosilica/bin-pc/x86 and running the SampleViewer executable found there.
289 Thu Jan 31 16:53:41 2008 josephbConfigurationCamerasImproving camera user interface
There's a new and improved version of Snap program at the moment people are free to play with.
Located in /cvs/cds/caltech/target/Prosilica/40mCode/
The program Snap now has a -h or --help option which describes some basic command line arguments. The height (in pixels), width (in pixels), exposure time (in microseconds), file name to be saved to (in .tiff format), and packet size can all be set. The format type (i.e. pixel format such as Mono8 or Mono16) doesn't work at the moment.
At the moment, it only runs on mafalda.
Currently in the process of adding a loop option which will take images every X seconds, saving them to a given file name and appending the time of capture to the file name.
After that I need to add the ability to identify and choose the camera you want (as opposed to the first one it finds).
Lastly, I've been finding that on occasion the frame fails to save. However, if you try again a few seconds later with the exact same parameters, it generally does save the second time. Not sure what's causing this, whether it is on the camera or the network side of things.
I've attached two images, the first at default exposure time (15,000 microseconds) and the second at 1/5th that time (3,000 microseconds).
The command line used was "./Snap -E 3000 -F 'Camera_exp_3000.tiff' "
292 Fri Feb 1 15:04:54 2008 josephbConfigurationCamerasSnap with looping functionality available
New GiGE camera code is available in /cvs/cds/caltech/target/Prosilica/40mCode/. Currently only runs on Mafalda.
Snap has expanded functionality to loop indefinitely or for a maximum number of images set by the user. File names generated with the loop option have the current Unix time and .tiff appended to them, so -f './test' will produce tiff files with the format "test1234567.tiff". The -l option sets the number of seconds between images.
"./Snap -l 5 -i -f './test' " will cause the program to infinitely loop, saving images every 5 seconds. Using "-m 10" instead of "-i" will take a series of 10 images every 5 seconds (so taking a total of 50 seconds to run).
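The looping and file-naming behavior above can be sketched as follows. This is illustrative only: the real Snap program is C code built on the Prosilica API, and the injectable `now`/`sleep` hooks here exist purely so the sketch can be demonstrated without waiting:

```python
import time

def loop_filenames(prefix, interval_s, max_images,
                   now=time.time, sleep=time.sleep):
    """Mimic Snap's loop naming: '<prefix><unixtime>.tiff' per capture.
    `now` and `sleep` are injectable only to make the sketch testable."""
    names = []
    for i in range(max_images):
        names.append(f"{prefix}{int(now())}.tiff")   # capture would happen here
        if i < max_images - 1:
            sleep(interval_s)
    return names

# Mirrors "./Snap -l 5 -m 2 -f './test'" with a frozen clock:
demo = loop_filenames('./test', 5, 2, now=lambda: 1234567, sleep=lambda s: None)
```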
It also now defaults to 16-bit (in reality only 10 bit) output instead of 8 bit output. You can select between the two with -F 'Mono8' or -F 'Mono16'.
Use --help for a full list of options.
Note that if you ctrl-c out of the loop, you may need to run ./ResetCamera 131.215.113.104 (or whatever the IP is - use ./ListCameras to determine IP if necessary) in order to reset the camera because it doesn't close out elegantly at the moment.
297 Tue Feb 5 15:32:29 2008 josephbConfigurationCamerasPMC and the GigE Camera
The PMC transmission video camera has been removed and replaced with the GigE GC750 camera for the moment.
A ND4.0 filter has been added in the path to that camera to reduce saturation for the moment.
The old camera has been placed on the elevated section inside the enclosure, and the cable for it is still on the table proper.
The Gige camera is currently running the Snap code on Linux3 with the following command line:
./Snap -E 2000 -l 60 -m 1440 -f './pmc_trans/pmc_trans'
So it's going to take tiff images every minute for the next 24 hours into the /cvs/cds/caltech/target/Prosilica/40mCode/pmc_trans/ directory.
Attached is an example image with exposure set to 2000, loaded into matlab and plotted with the surf command. 2500 microseconds looked like it was still saturating, but this seems to be a good level (with a max of 58560 out of 65535).
300 Wed Feb 6 16:50:47 2008 josephbConfigurationCamerasRegions of Interest and max frame rate
The Snap code has once again been modified such that setting the -l option to 0 will take images as fast as possible. Also, the -H and -W options set the height and width, while in principle the -Y and -X options set the position in pixels of the top edge and left edge of the image. It also seems possible to set these values such that the saved image wraps around; I'll be adding some input checking in the near future so that the user can't do this.
Doing some timed runs, using a -H 350 and -W 350 (as opposed to the full 752x480), 100 images can be saved in roughly 8 seconds, and 1000 images took about 73 seconds. This corresponds to a frame rate of about 12-13 frames per second (or a 12-13 Hz display). The size of this area was sufficient to cover the current PMC transmission beam.
The command line I used was
time ./Snap -l 0 -m 1000 -f 'test' -W 350 -H 350 -Y 50 -X 350 -E 2000
Interestingly enough, there would be bursts of failed frame saves if I executed commands in another terminal (such as using ls on the directory where the files were being stored).
As always, this code is available in /cvs/cds/caltech/target/Prosilica/40mCode/.
301 Wed Feb 6 19:39:11 2008 ranaConfigurationCamerasRegions of Interest and max frame rate
We really need to look into making the 40m CDS network have an all GigE backbone so that we can have cooler cameras as well as collect multiple datastreams...
378 Fri Mar 14 12:06:29 2008 josephbConfigurationCamerasGC750 looking at ETMX while locked
The GC750 (CMOS) is currently looking at the front of ETMX. Unfortunately, it's being routed through a 10 Mbit connection (which I will be purchasing a replacement for today), so getting it to send images to Mafalda/Linux 2 or 3 isn't working well, but by using a local gigabit switch and a laptop I can get sufficient speed for full images with the sample viewer.
The attached image is at the full 752x480 resolution with a 10,000 microsecond exposure, taken with the X-arm locked, although it looks like I still need to work on the focusing. I will be switching the GC750 with the GC650 (CCD) later today and comparing the resulting images.
379 Fri Mar 14 14:59:51 2008 josephbConfigurationCamerasComparison between GC650 (CCD) and GC750 (CMOS) looking at ETMX
Attached are images taken of ETMX while locked.
The first two are 300,000 microsecond exposures, with approximately the same focusing/zoom. (The GC750 is slightly more zoomed in than the GC650 in these images.) The second two are 30,000 microsecond exposures.
The CMOS appears to be more sensitive to the 1064 nm reflected light (resulting in brighter images for the same exposure time). This may make a difference in applications where images need to be taken quickly and repeatedly.
Both seem to be resolving individual specks on the optic reasonably well.
Next test is to place both camera on a Gaussian beam (in a couple different modes say 00, 11, and so forth), probably using the PMC.
434 Tue Apr 22 08:34:22 2008 josephbConfigurationCamerasCurrent Network Diagram
The attached network diagram has also been added to the 40m Wiki at http://lhocds.ligo-wa.caltech.edu:8000/40m/Image_Processing_with_GigE_Cameras
471 Thu May 8 16:40:36 2008 josephbConfigurationCamerasGige Camera currently on PSL table
Andrey and I were working on the PSL table today, using a pickoff of a pickoff of the main beam (adding a microscope slide to pick off ~4% of the original pickoff) directed to the GC750 GigE camera.
At the time we left, we scanned the area with a beam scan and didn't see any new stray beams, and nothing in any useful beam paths should have changed. We also strung a Cat 6 cable from the control room switch out to the PSL table in the cable trays, and then above the PSL table.
Currently, it's not as well aligned as it could be, and it also requires a very low exposure setting, of -E 50 or so, to avoid saturation.
481 Thu May 15 16:24:18 2008 josephbSummaryCameras
The GC750 camera is currently looking at a very small pickoff of the PSL output (transmission of a Y1-1037-45-S mirror). The plan is to take images tomorrow with it and the GC650 from the same spot and do comparisons.
For those interested, the camera can be run with two codes, from mafalda. Use ssh -X mafalda to login, to allow the live stream to work with the SampleViewer code. The codes can be found in:
/cvs/cds/caltech/target/Prosilica/40mCode/Snap
and
/cvs/cds/caltech/target/Prosilica/bin-pc/x86/SampleViewer
Type Snap --help for a list of options for that program. Click the circle-looking icon in SampleViewer to start the live stream. Note that only one of the two programs can run at a time, and the only way to change settings (such as exposure length) is with Snap at the moment.
482 Fri May 16 14:38:50 2008 josephbSummaryCamerasTwo cameras setup
I've changed the pickoff setup from yesterday for the GigE cameras to include a 33% beam splitter (the first one I could find). The reflection is going to the GC650 (CCD) camera while the transmission is going to the GC750 (CMOS) camera. This means the CMOS camera has roughly twice the light incident on it as the GC650, which should be kept in mind in all comparisons. The distances from the beam splitter are approximately the same for both cameras, but some more accurate positioning might be useful.
It's very easy to get the GC650 camera into a bad state where you need to go out and cycle the power (simply unplug and re-plug the power supply either at the camera or at the outlet). If the ListCameras program doesn't see it, this is probably necessary.
Andrey added at 6.30PM: Actually the 650 camera keeps crashing constantly. Every time I attempt to capture an image, the camera fails.
506 Fri May 30 12:03:08 2008 josephb, AndreyConfigurationCamerasHead to head comparison of cameras
Andrey and I (Joseph B.) have examined the output of the GC650 (CCD) and GC750 (CMOS) Prosilica cameras. We did several live motion tests (i.e. rotating the turning mirror, moving and rotating the camera, etc.) and also used a microscope slide to try to eliminate back reflections and interference.
Both the GC650 and GC750 produce dark lines in the images, some of which look parallel, while others are in much stranger shapes, such as circles and arcs.
Moving the GC750 camera physically, we see the spot moving around, with the dark lines appearing to be fixed to the camera itself, remaining in the same location on the detector; i.e., coming back to the same spot keeps showing a circle. In reasonably well-behaved sections, these lines are about 10% dips in power, and could in principle be subtracted out. It's possible that the camera was damaged by too much incident light in the past, although going back to the pmc_trans images that were taken, similar lines are already visible.
Moving the GC650 camera physically seems to change the position of the lines (if one also rotates the turning mirror to get to the same spot on the CCD). It seems as if a slight change in angle has a large effect on these dark bands, which can either be thin, or very large, bordering on the size of the spot size. My guess is (as the vendor suggested) the light is interacting with the electronics behind the surface layer rather than a surface defect producing these lines. Using a microscope slide in between the turning mirror and the GC650, we were able to produce new fringes, but didn't affect the underlying ones.
Placing a microscope slide in between the last turning mirror and the GC750 does not affect the dark lines (although it does seem to add some), nor does turning the final turning mirror, so it seems unlikely to be caused by back reflection in this case.
So it seems the CMOS may be more consistent, although we need to determine if the current line problems are due to exposure to too much light at some point in the past (i.e. I broke it) or they come that way from the factory.
Attached are the results of image processing of the images from our two cameras using Andrey's new Matlab script.
511 Mon Jun 2 12:20:35 2008 josephbBureaucracyCamerasBeam scan has moved
The beamscan has been moved from the Rana lab back over to the 40m, to be used to calibrate the Prosilica cameras.
512 Tue Jun 3 02:15:29 2008 AndreySummaryCamerasFitting results
There has been a lot of work going on recently related to the processing of images captured by the GC-650 and GC-750 cameras.
At the end of the week of May 30, Joseph and I (Andrey) installed the two cameras to capture images of the pick-off of the main beam on the PSL optical table. The cameras are located in the picked-off beam going towards the "PSL position QPD", after the 33-66 beamsplitter (33% reflection and 66% transmission).
Initially (on May 30) the GC-650 camera was taking images of the reflected beam, while the GC-750 camera was taking images of the transmitted beam. On Monday June 2 we switched the positions of the cameras, so the GC-650 is now in the path of the transmitted beam and the GC-750 in the path of the reflected beam.
Over the weekend I (Andrey Rodionov) succeeded in writing a Matlab program that performs two-dimensional Gaussian fitting of the captured images, and I used that program to fit the images from the cameras.
The program fits the camera data by a two-dimensional Gaussian surface:
Z = A * exp[ - 2 * (X - X_Shift)^2 / (Waist_X)^2 ] * exp[ - 2 * (Y - Y_Shift)^2 / (Waist_Y)^2 ] + CONST_Shift,
where A, X_Shift, Waist_X, Y_Shift, Waist_Y, CONST_Shift are 6 parameters of the fit.
Attached are pdf files showing the results: images taken with our cameras, the 2-dimensional Gaussian fits for these images, and the surfaces of residuals. Residuals are the differences between the measured beam profile and the result of the fitting. In the normalized version of the residual graph, the residual is normalized by the first fit coefficient A, the factor in front of the exponentials.
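A minimal sketch of this six-parameter fit, using synthetic data and SciPy's `curve_fit` rather than Andrey's actual Matlab implementation (all image dimensions and parameter values below are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, A, x0, wx, y0, wy, c):
    """Z = A*exp(-2(x-x0)^2/wx^2) * exp(-2(y-y0)^2/wy^2) + c."""
    x, y = xy
    z = A * np.exp(-2 * (x - x0)**2 / wx**2) * np.exp(-2 * (y - y0)**2 / wy**2) + c
    return z.ravel()

# Synthetic "camera image" with known parameters (A, x0, wx, y0, wy, c)
x, y = np.meshgrid(np.arange(64), np.arange(48))
true_params = (5000.0, 30.0, 8.0, 20.0, 6.0, 100.0)
img = gauss2d((x, y), *true_params).reshape(48, 64)
img = img + np.random.default_rng(2).normal(0, 20, img.shape)  # camera noise

p0 = (img.max(), 32, 10, 24, 10, img.min())      # rough initial guess
popt, _ = curve_fit(gauss2d, (x, y), img.ravel(), p0=p0)
# popt recovers (A, X_Shift, Waist_X, Y_Shift, Waist_Y, CONST_Shift)
```

Note that only the squares of the waists enter the model, so the fitted waists are determined up to sign.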
515 Tue Jun 3 12:33:36 2008 AndreyUpdateCamerasAndrey, Josephb
Continuing our work with cameras,
1) we removed both cameras from their places on Monday afternoon and took beam scans with the special equipment (see elog entry 511) from the Bridge building,
2) and on Tuesday morning we put the GC-750 camera back into the transmitted beam path and the GC-650 camera into the reflected beam path. We plan to compare the images from the "reflection camera" for several different tilt angles of the camera.
517 Wed Jun 4 13:46:42 2008 josephbConfigurationCamerasChanging incident angle images
Attached are images from the GC650 and GC750 when the incident angle was varied from 0 tilt (normal incidence) to 5,10, and 20 degrees. Each time the beam was realigned via the last turning mirror to be on roughly the same spot. This light was a pickoff of the PSL table light just before it leaves the table.
Images include the raw data, fit to the data, residual normalized by peak power "w(1)", and normalized by the individual bin power.
The first pdf includes 0 degrees (normal) and ~5 degrees of tilt for the GC650 (CCD) camera.
The second pdf includes ~10 and ~20 degrees of tilt images for the GC650 (CCD) camera.
The third pdf includes 0 and ~5 degrees of tilt for the GC750 (CMOS) camera.
The fourth pdf includes ~10 and ~20 degrees of tilt for the GC750 (CMOS) camera.
Things to note:
1) The GC750 camera seems to have a structure on the camera itself, somewhat circular in nature. One possible explanation is that the camera was damaged at a previous juncture due to too much light. Need to check earlier images for this problem.
2) The GC650 has "bands" which change direction and thickness with angle. Also, at higher incidence angles the sensitivity seems to drop (unlike the GC750, where the overall power level seems to stay constant with increasing angle of incidence).
3) The GC650 seems to have a higher noise floor, as seen from the last plot of each pdf (where each pixel of the residual is normalized by the power in the corresponding pixel of the fit).
519 Wed Jun 4 16:57:12 2008 josephbConfigurationCamerasDark images from cameras (electronics noise measurement)
The attached pdfs are 1 second and 1 millisecond long integrations from the GC650 and GC750 cameras with a cap in place - i.e. no light.
They include the mean and standard deviation values.
The single bright pixel in the 1 second long exposure image for the GC650 seems to be a real effect. Multiple images taken show the same bright pixel (although with slightly varying amplitudes).
The last pdf is a zoom in on the z-axis of the first pdf (i.e. GC650 /w 1 sec exposure time).
I'm not really sure what to make of the mean remaining virtually fixed for the different integration times for both cameras. I guess the zero level is simply an offset, and doesn't result in any runaway integrations in general, although there are certainly some stronger pixels in the long exposures when compared to the short exposures.
It's interesting to note that the standard deviation actually drops from the long exposure to the short exposure, possibly influenced by certain pixels which seem to grow with time.
The one with the least variation from its "zero" was the 1 millisecond GC750 dark image.
520 Thu Jun 5 10:46:26 2008 josephbConfigurationCamerasApproximately uniform reflected white light
In an attempt to investigate the structures seen in previous images for the GC750, I aimed it at a relatively clean section of gray table top roughly a cm or two from the surface and took images (without a lens). As I was holding this with my hand, the angle wasn't completely even with the table, and thus there's a gradient of light in the pictures. However, one should in principle be able to pick out features (such as a circular spot with less sensitivity), but these do not show up.
In my mind, these images seem to indicate the electronics are fine, and suggest that the CMOS or CCD detectors themselves are undamaged (at least with regard to white light, as opposed to 1064 nm). An issue with the plastic cap (protective piece) may be the culprit, or perhaps a tiny bit of dust, which the incoherent light from all angles passes around efficiently?
Will try blowing the cameras with clean nitrogen today and see if that removes or changes the circular structure we have seen.
521 Thu Jun 5 13:35:23 2008 josephbConfigurationCamerasGC750 looking at 1064nm scattered light
I've taken 200 images of the GC750 (CMOS) camera while holding it by hand up to a beam card (also held by hand) in the path of ~5mW of beam power. I then averaged the images to produce the fourth attached plot.
Rob has pointed out the image looks a lot like PCB traces. So perhaps we're seeing the electronics behind the CMOS sensor?
I repeated the same experiment with HeNe laser light (again scattered off a card). These show none of the detailed structure (just what looks to be a large reflection from the card moving around depending on how steady my hand was). These are the first 3 attached plots. So only 1064nm light so far sees these features.
As a possible solution, I did a quick and dirty calibration by dividing a previous PSL output beam by the 1064 average scatter light values. These produce the last attached pdf (with multiple images). The original uncalibrated image is on top, while the very simply calibrated image is on the bottom of each plot.
It seems the effect may be power dependent (which could still be calibrated properly, but would take a bit more effort than simply dividing), as determined by looking at the edges of the calibrated plot.
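The quick-and-dirty calibration by division might look like the following sketch. The data and normalization here are hypothetical stand-ins; the actual calibration divided a PSL output image by the averaged 1064 nm scatter image:

```python
import numpy as np

def flat_field(raw, scatter_avg, eps=1e-6):
    """Divide a raw frame by the normalized average scatter image to remove
    a fixed pattern -- a crude sketch of the 'simple division' calibration."""
    gain = scatter_avg / scatter_avg.mean()
    return raw / np.maximum(gain, eps)

rng = np.random.default_rng(3)
pattern = 1.0 + 0.1 * rng.standard_normal((48, 64))   # fixed ~10% pattern
flat_beam = np.full((48, 64), 1000.0)                 # idealized uniform beam
raw = flat_beam * pattern                             # what the camera records
corrected = flat_field(raw, pattern)                  # 'pattern' plays the role
                                                      # of the averaged scatter image
```

If the effect is power dependent, a single multiplicative gain map like this is no longer sufficient, which matches the observation above.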
525 Fri Jun 6 16:47:04 2008 josephbConfigurationCameras GC650 scatter images of 1064nm light
Took images similar to the scattered light images from earlier, except with the CCD GC650 camera. The first three attached plots are an average of all 200 images, an average of the first 100 and then an average of the last 100 images.
They show no definite structure. The big red blob which changes with time may be a brighter reflection, although it is virtually the same type of setup as for the GC750 images.
To do this properly, I should grab a short focal length lens and simply blow up the beam to a size greater than the detector area, and fix both cameras looking into it.
The last set of plots are mean and standard deviation plots from a previous set of runs on 5/29/08 with the GC750 and GC650 running at the same time. The GC650 was receiving approximately 33% of the total power and the GC750 was receiving 66% (in other words a factor of 2 more).
530 Wed Jun 11 15:30:55 2008 josephbConfigurationCamerasGC1280
The trial use GC1280 has arrived. This is a higher resolution CMOS camera (similar to the GC750). Other than higher resolution, it has a piece of glass covering and protecting the sensor as opposed to a plastic piece as used in the GC750. This may explain the reduced sensitivity to 1064nm light that the camera seems to exhibit. For example, the image averages presented here required a 60,000 microsecond exposure time, compared to 1000-3000 microseconds for similar images from the GC750. This is an inexact comparison, and the actual sensitivity difference will be determined once we have identical beams on both cameras.
The attached pdfs (same image, different angles of view) are from 200 averaged images looking at 1064nm laser light scattering from a piece of paper. The important thing to note is there doesn't seem to be any definite structure, as was seen in the GC750 scatter images.
One possibility is that too much power is reaching the CMOS detector, penetrating, and then reflecting back to the back side of the detector. Lower power and higher exposure times may avoid this problem, and the glass of the GC1280 is simply cutting down on the amount passing through.
This theory will be tested either this evening or tomorrow morning, by reducing the power on the GC750 to the point at which it needs to be exposed for 60,000 microseconds to get a decent image.
The other possibility is that the GC750 was damaged at some point by too much incident power, although its unclear what kind of failure mode would generate the images we have seen recently from the GC750.
558 Tue Jun 24 17:12:10 2008 josephb, EricConfigurationCamerasGC750 setup, 1X4 Hub connected, ETMX images
The GC750 camera has been setup to look at ETMX. In addition, the new 1X4 rack mounted switch (131.215.113.200) has been connected via new cat6 cable to the control room hub (131.215.113.1?), thanks to Eric. The camera is now plugged into 1X4 rack switch and now has a gigabit connection to the control room computers as well as Mafalda (131.215.113.23).
By using ssh -X mafalda or ssh -X 131.215.113.23, then typing:
target
cd Prosilica/bin-pc/x86/
./Sampleviewer
A viewer will be brought up. Clicking on the 3rd icon from the left (it looks like an eye) will bring up a live view.
Close the viewer, then cd ../../40mCode; running ./Snap --help will tell you how to use a simple code for taking .tiff images as well as setting things such as exposure length and the size of image (in pixels) to send.
When the interferometer was set to an X-arm only configuration, we took two series of 200 images each, with two different exposure lengths.
Attached are three pdf images. The first is just a black and white single image, the second is an average of 100 images, and the third is the standard deviation of the 100 images.
566 Wed Jun 25 12:25:28 2008 EricSummaryCameras2D Gaussian Fitting Code
I initially wrote a script in MATLAB that takes pictures of the laser beam's profile and fits them to a two dimensional Gaussian in order to determine the position and width of the beam. This code is now (mostly) ported to C so that it can be embedded in the camera software package that Joe is writing. The fitting works fairly well for pictures with the beam directly incident on the camera, and less well for pictures of scatter off the end mirrors of the arms, since scatter from defects in the mirror has intensities much greater than the intensity of the beam's Gaussian profile.
The next steps are to finish up porting the fitting code to C, and then modify it so it can better handle the images off the end mirror. Some thoughts on how to do this are to use a fourier transform and a low pass filter, or to simply use a center-of-mass calculation (with the defect peaks reduced in intensity), since position is more important than beam width in this calculation. The eventual goal is to include the edge of the optic in the picture and use the fit of the beam position in comparison to the optic's position to find the beam's location on the mirror.
622 Wed Jul 2 10:35:02 2008 EricSummaryCamerasGeneral Summary
I finished up the 2D Gaussian fitting code and, along with Joe, integrated it into the Snap software so that it automatically does a fit to every 100th image. While the fitting works, it is too slow for use in any feedback to the servos. I put together a center of mass calculation to use instead that is somewhat less accurate but much faster (almost instantaneous versus 5-10 seconds). This has yet to be added to the Snap software, but doing so would not be difficult.
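A hypothetical sketch of such a center of mass (centroid) calculation on an image, shown here in Python rather than the C used in the Snap software:

```python
import numpy as np

def beam_centroid(img, background=0.0):
    """Intensity-weighted center of mass of an image (fast, fit-free)."""
    z = np.clip(np.asarray(img, dtype=float) - background, 0.0, None)
    ys, xs = np.indices(z.shape)
    total = z.sum()
    return (xs * z).sum() / total, (ys * z).sum() / total

# Synthetic spot centered at column 30, row 20
x, y = np.meshgrid(np.arange(64), np.arange(48))
spot = np.exp(-((x - 30)**2 + (y - 20)**2) / (2 * 5.0**2))
cx, cy = beam_centroid(spot)
```

The centroid gives position but not width, which is the trade-off noted above.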
I put together a different fitting function for fitting the multiple Lorentzian resonance peaks in a power spectrum that would result from sweeping the length of any of the mode cleaners. This simply doesn't work. I tested it on some of Josh Weiner's data collected on the OMC last year, and the data fits poorly. Attempting to fit it all at once requires fitting 80,000 data points with 37 free parameters (12 peaks at 3 parameters per peak and 1 offset parameter), which cannot be done in any reasonable time period. Attempting to fit one specific peak doesn't work either, due to corruption from the other nearby peaks, even though they are comparatively small. The fit places the offset incorrectly if given the opportunity (green line in attemptedSinglePeakFitWithoutOffset.tiff and attemptedSinglePeakFitWithoutOffsetZoomed.tiff). Removing this as a parameter causes the fit to do a much better job (red line in these two graphs). The fit still places the peak 0.01 to the right of the actual peak, which is worse than what could be obtained simply by taking the maximum point value. Additionally, this slight shift means that attempting to subtract out the peak so that the other peaks are accessible doesn't work -- the peaks are so steep that an error of 0.01 is enough to cause significant problems (red in attemptedPeakSubtraction.tiff is the attempted subtraction). Part of the problem is that the peaks are far from perfect Lorentzians, as seen by cropping to any particular peak (OMCSweepSinglePeak.tiff). This might be corrected in part by correcting for the conversion from PZT voltage to position, which isn't perfectly linear, though I doubt it would remove all the irregularities. At the moment, the best approach seems to be simply using a center of mass calculation cropped to the particular peak, though I have yet to try this.
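The "center of mass cropped to the particular peak" idea can be sketched on a synthetic single Lorentzian (hypothetical, idealized data; the real sweep data is far less clean):

```python
import numpy as np

def lorentzian(x, a, x0, gamma):
    """Single Lorentzian peak: a / (1 + ((x - x0)/gamma)^2)."""
    return a / (1.0 + ((x - x0) / gamma)**2)

# A synthetic sweep: one tall peak plus a small neighbor
x = np.linspace(0.0, 1.0, 4001)
y = lorentzian(x, 1.0, 0.42, 0.004) + 0.02 * lorentzian(x, 1.0, 0.55, 0.004)

# Crop to the neighborhood of the tallest peak, then take its center of mass
i = int(np.argmax(y))
w = 40                                    # crop half-width, in samples
s = slice(max(i - w, 0), i + w + 1)
x0_est = float(np.sum(x[s] * y[s]) / np.sum(y[s]))
```

Cropping keeps the other peaks from pulling the centroid, at the cost of choosing a crop width by hand.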
Changing Josh's code to work for the digital cameras and the PMC or MC shouldn't be difficult. Changing to the MC or PMC should simply involve changing the EPICs tags for the OMC photodiodes and PZTs to those of the PMC or MC. Making the code work for the digital cameras should be as simple as redirecting the call to the framegrabber software to the Snap software.
657  Thu Jul 10 23:27:57 2008  John  Metaphysics  Cameras  Secret handshakes
Rob and I have joined the ranks of the illuminati and exercised our power.
Quote: Osamu showed me the secret way to change the video labels for the quads and so we fixed them. He made me swear not to divulge this art. - Rana Adhikari
# ⚖️ Scale Schedule
Tags: Best Practice, Speedup
## TL;DR
Scale Schedule multiplies the number of training steps by a dilation factor and dilates the learning rate schedule accordingly. Doing so varies the training budget, making it possible to explore tradeoffs between cost (measured in time or money) and the quality of the final model.
The number of training steps to perform is an important hyperparameter to tune when developing a model. This technique appears implicitly throughout the deep learning literature. One example of a systematic study of this approach is the scan-SGD technique in How Important is Importance Sampling for Deep Budgeted Training by Eric Arazo, Diego Ortega, Paul Albert, Noel O’Connor, and Kevin McGuinness. Posted to OpenReview in 2020.
## Hyperparameters
• ratio - The ratio of the scaled learning rate schedule to the full learning rate schedule. For example, a ratio of 0.8 would train for 80% as many steps as the original schedule.
## Example Effects
Changing the length of training will affect the final accuracy of the model. For example, training ResNet-50 on ImageNet for the standard schedule in the composer library leads to final validation accuracy of 76.6%, while using scale schedule with a ratio of 0.5 leads to final validation accuracy of 75.6%. Training for longer can lead to diminishing returns or even overfitting and worse validation accuracy. In general, the cost of training is proportional to the length of training when using scale schedule (assuming all other techniques, such as progressive resizing, have their schedules scaled accordingly).
Note
The warmup periods of schedulers are not scaled by the scale schedule ratio.
## Implementation Details
Scale schedule is implemented as part of the Trainer via the scale_schedule_ratio argument. The trainer will scale the max_duration by the scale_schedule_ratio, and also adjust non-warmup milestones for the learning rate schedulers.
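The scaling arithmetic can be illustrated with a small sketch. This is my own illustration of the behaviour described above (scale the total duration and the non-warmup milestones, leave warmup alone), not Composer's actual implementation:

```python
def scale_schedule(max_epochs, milestones, ratio, warmup_epochs=0):
    """Scale the total duration and the non-warmup milestones by `ratio`,
    leaving any warmup period untouched (a sketch, not Composer's code)."""
    scaled_max = round(max_epochs * ratio)
    scaled_milestones = [
        warmup_epochs + round((m - warmup_epochs) * ratio)
        for m in milestones
    ]
    return scaled_max, scaled_milestones

# A 20-epoch run with milestones at 10 and 16 epochs, scaled by 0.5:
print(scale_schedule(20, [10, 16], 0.5))   # (10, [5, 8])
```

With a ratio of 0.5 this reproduces the 20-epoch example below: the run shrinks to 10 epochs and the milestones move to epochs 5 and 8.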
Scale schedule supports all Composer Schedulers:
• StepScheduler - Decays the learning rate discretely at fixed intervals.
• MultiStepScheduler - Decays the learning rate discretely at fixed milestones.
• MultiStepWithWarmupScheduler - Decays the learning rate discretely at fixed milestones, with an initial warmup.
• ConstantScheduler - Maintains a fixed learning rate.
• LinearScheduler - Adjusts the learning rate linearly.
• LinearWithWarmupScheduler - Adjusts the learning rate linearly, with an initial warmup.
• ExponentialScheduler - Decays the learning rate exponentially.
• CosineAnnealingScheduler - Decays the learning rate according to the decreasing part of a cosine curve.
• CosineAnnealingWithWarmupScheduler - Decays the learning rate according to the decreasing part of a cosine curve, with an initial warmup.
• CosineAnnealingWarmRestartsScheduler - Cyclically decays the learning rate according to the decreasing part of a cosine curve.
• PolynomialScheduler - Sets the learning rate to be proportional to a power of the fraction of training time left.
• PolynomialWithWarmupScheduler - Decays the learning rate according to a power of the fraction of training time left, with an initial warmup.
Scale schedule also supports the following PyTorch Schedulers:
For example, the code below will scale the training time by half (to 10 epochs) and also scale the learning rate schedule.
    from composer import Trainer
    from composer.optim.scheduler import MultiStepScheduler

    trainer = Trainer(
        ...,
        max_duration="20ep",
        schedulers=MultiStepScheduler(milestones=["10ep", "16ep"]),
        scale_schedule_ratio=0.5,
    )

    # or equivalently, with the default scale_schedule_ratio of 1.0:
    trainer = Trainer(
        ...,
        max_duration="10ep",
        schedulers=MultiStepScheduler(milestones=["5ep", "8ep"]),
    )
For additional details on using the scale schedule ratio, see the Scale Schedule Ratio section in the schedulers guide.
## Suggested Hyperparameters
The default scale schedule ratio is 1.0. For a standard maximum number of epochs (these will differ depending on the task), scaling down the learning rate schedule will lead to a monotonic decrease in accuracy. Increasing the scale schedule ratio will often improve the accuracy up to a plateau, although this leads to longer training time and added cost.
## Composability
As a general rule, scale schedule can be applied in conjunction with any other method. If other methods also perform actions according to a schedule, it is important to modify their schedules to coincide with the altered number of epochs.
# real analysis – Are these variations of complex multiplication studied subjects?
Complex multiplication is very well understood geometrically and algebraically, but I wonder about the following operators (angles assumed to be radians in $$[0, 2\pi)$$):
1. Complex multiplication (muladd): $$x_1 \cdot x_2 = |x_1||x_2|e^{(\arg(x_1) + \arg(x_2))i}$$
2. Complex mulmul: $$x_1 \bigodot x_2 = |x_1||x_2|e^{(\arg(x_1) \cdot \arg(x_2))i}$$
3. Complex addadd: $$x_1 \bigoplus x_2 = (|x_1|+|x_2|)e^{(\arg(x_1) + \arg(x_2))i}$$
4. Complex addmul: $$x_1 \bigotimes x_2 = (|x_1|+|x_2|)e^{(\arg(x_1) \cdot \arg(x_2))i}$$
Have these operators been studied? Are there any books or papers on their properties? Do they have names?
Note: I couldn't find a complex-analysis tag.
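For experimentation, the four operators are easy to implement numerically. The sketch below is my own (using Python's cmath), normalising arguments into $$[0, 2\pi)$$ as the question assumes; note that mulmul and addmul then depend on that branch choice, which is one reason they behave less tamely than ordinary multiplication:

```python
import cmath

TWO_PI = 2 * cmath.pi

def arg(x):
    """Argument of x mapped into [0, 2*pi), as assumed in the question."""
    return cmath.phase(x) % TWO_PI

def muladd(x1, x2):   # ordinary complex multiplication
    return abs(x1) * abs(x2) * cmath.exp(1j * (arg(x1) + arg(x2)))

def mulmul(x1, x2):   # multiply moduli, multiply arguments
    return abs(x1) * abs(x2) * cmath.exp(1j * (arg(x1) * arg(x2)))

def addadd(x1, x2):   # add moduli, add arguments
    return (abs(x1) + abs(x2)) * cmath.exp(1j * (arg(x1) + arg(x2)))

def addmul(x1, x2):   # add moduli, multiply arguments
    return (abs(x1) + abs(x2)) * cmath.exp(1j * (arg(x1) * arg(x2)))

# muladd reproduces ordinary complex multiplication (up to float error):
z1, z2 = 1 + 1j, 2 - 1j
print(abs(muladd(z1, z2) - z1 * z2) < 1e-12)   # True
```

The shift of the argument by multiples of $$2\pi$$ leaves muladd and addadd unchanged, but changes mulmul and addmul, so the latter two are not well defined on the complex numbers themselves, only on chosen polar representatives.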
# dirac delta – Derivative of the probability transformation formula
Let $$f: \mathbb{R}^n \rightarrow \mathbb{R}$$ be a Borel measurable function. (It can further be assumed that $$|\frac{\partial f}{\partial z_i}| > 0$$.)
$$\overrightarrow{Z} = (Z_1, Z_2, \dots, Z_n)$$ is a random vector with probability density function $$p_{\overrightarrow{Z}}$$. Let $$Y := f(\overrightarrow{Z})$$ be a random variable with probability density function $$p_Y$$.
By the probability transformation formula, we have
$$p_Y(y) = \int_{\mathbb{R}^n} p_{\overrightarrow{Z}}(\overrightarrow{z})\, \delta(y - f(\overrightarrow{z}))\, d\overrightarrow{z}$$.
We want to know $$\frac{dp_Y}{dy}$$. I do not really know how to take the derivative of $$\delta(y - f(\overrightarrow{z}))$$, as $$y$$ and $$\overrightarrow{z}$$ are coupled inside the delta.
Moreover, we can assume that the $$Z_i$$ are i.i.d., so we could also write
$$p_Y(y) = \int_{\mathbb{R}} p_Z(z_n)\, dz_n \dots \int_{\mathbb{R}} p_Z(z_1)\, \delta(y - f(\overrightarrow{z}))\, dz_1$$.
Does anyone have any ideas?
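As a numerical aside, the transformation formula itself can be sanity-checked by Monte Carlo. The sketch below is my own, with the hypothetical choice $$f(z_1, z_2) = z_1 + z_2$$ and standard normal $$Z_i$$, so that $$Y \sim N(0, 2)$$; it compares a histogram-bin estimate of $$p_Y(0)$$ with the exact value $$1/\sqrt{4\pi}$$:

```python
import math
import random

# Monte Carlo check of p_Y(y) = ∫ p_Z(z) δ(y − f(z)) dz for the
# illustrative choice f(z1, z2) = z1 + z2 with Z_i i.i.d. N(0, 1),
# so Y ~ N(0, 2) and p_Y(0) = 1 / sqrt(4π).
random.seed(0)
n, half_width = 200_000, 0.1
samples = (random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n))
hits = sum(1 for y in samples if abs(y) < half_width)
estimate = hits / (n * 2 * half_width)   # histogram-bin density at y = 0
exact = 1 / math.sqrt(4 * math.pi)       # ≈ 0.2821
print(abs(estimate - exact) < 0.02)      # True (up to Monte Carlo noise)
```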
They also investigated the specific heat and latent heat of a number of substances, and the amounts of heat given out in combustion. In a similar manner, in 1840 the Swiss chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in a one-step process or in a number of stages. This is known as Hess's law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy. Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., approximately, dW ∝ dQ.
Not a lot to be gained there. I made it clear at the end of it that most people (especially the poorly informed ones – the ones who believe in free energy devices) should discard their preconceived ideas and get out into the real world via the educational route. "It blows my mind to read how so-called educated people claim that a magnet generator/motor/free energy device is not possible, as it would violate the so-called laws of thermodynamics or the conservation of energy or another model of man's perception of law – what a misinformed statement to make; the magnet is full of energy, all matter is, like atoms!!"
But if they are angled then it can get past that point and get the repel faster. My mags are angled, but neither the rotor nor the stator ever point right at each other, and my stator mags are not evenly spaced. Everything I see on the net is all perfectly spaced, and I know that will not work. I do not know why a lot of people even put theirs on the net; they are so stupid. That's why I do not; I want it to run perfect before I do. On the subject of shielding, I know that all it will do is redirect the fields. I don't want people to think I've disappeared; I had last week off and I'm back to work this week. I'm stealing a little time during my break to post this. Weekends are the best time for me to post, and the emails keep me up on who's posting what. I currently work long days, and with everything I need to do outside with spring rolling around, having time to post here is very limited, but I will post on the weekends.
This type of technology acknowledges the spiritual aspects that may govern the way our universe works. These spiritual aspects, and other phenomena like telepathy, mind/matter influence and more, are now at the forefront of a second scientific revolution: the acknowledgement of the non-material and the role it plays in what we perceive as our physical material world.
It all smells of scam. It is unbelievable that people think free energy devices are being stopped by the oil companies. Let's assume you worked for an oil company and you held the patent for a free energy machine. You could charge the same for energy from that machine as what people pay for oil, and you wouldn't have to buy oil off the Arabs. Thus your profit margin would go through the roof. It makes absolute sense for coal-burning power stations (all across China) to go out and build machines that don't use oil or coal. Wow, if the great scientists and mathematicians of history had thought the way you do, the world would still be in the stone age. Are you sure you don't work for the government and are trying to discourage people from spending their time and energy to make the world a better place, where we are not milked for our hard-earned dollars by being forced to buy fossil fuels and remain slaves to the energy, fuel and pharmaceutical industries?
"What is the reality of the universe? This question should be answered first, before the concept of God can be analyzed. Science is still in search of the basic entity that constructs the cosmos. God, therefore, would be a system too complex for science to discover. Unless the basic reality of aakaash (space) is recognized, neither science nor spirituality can have a grasp of the Creator, Sustainer and Destroyer of this gigantic Phenomenon that the Vedas named as Brahman." – Tewari, from his book "Spiritual Foundations."
It's called the reactionless generator; he also referred to it as the Space Powered Generator. It allows for the production of power with improved efficiency. A prototype has been tested, the test repeated, and the concept proven in India, as shown above. It's billed as the answer to cheap electricity anywhere, meeting the green standard of no fossil fuel usage or emissions.
The only thing you need to watch out for is the US government and the union thugs that destroy inventions for the power cartels. Both will try to destroy your ingenuity! Both are criminal elements! kimseymd1: Why would you spam this message repeatedly through this entire message board when no one has built a single successful motor that anyone can operate from these books? The first book has been out for years, and no one has built a magical magnetic (or magical vacuum) motor with it. The second book has been out about as long, and no one has built a motor with it either. How much do you get paid? Are you involved in the selling and publishing of these books in any way? Why are you doing this? Are you writing this from inside a mental institution? bnjroo: Why is it that you, and the rest of the Over Unity (OU) community, continue to ignore all of those people that try to build one and it NEVER WORKS? Years ago I thought of building a permanent magnet motor of my own design. It looked just like what I see on the phoney internet videos. It didn't work. I tried all kinds of clever arrangements and angles but alas – no luck.
Or, you could say, "That's a positive ΔG; that's not going to be spontaneous." The Gibbs free energy of a system is a state function because it is defined in terms of thermodynamic properties that are state functions. The change in the free energy of the system that occurs during a reaction is therefore equal to the change in the enthalpy of the system minus the change in the product of the temperature times the entropy of the system. The beauty of the equation defining the free energy of a system is its ability to determine the relative importance of the enthalpy and entropy terms as driving forces behind a particular reaction. The change in the free energy of the system that occurs during a reaction measures the balance between the two driving forces that determine whether a reaction is spontaneous. As we have seen, the enthalpy and entropy terms have different sign conventions. When a reaction is favored by both enthalpy (ΔH° < 0) and entropy (ΔS° > 0), there is no need to calculate the value of ΔG° to decide whether the reaction should proceed. The same can be said for reactions favored by neither enthalpy (ΔH° > 0) nor entropy (ΔS° < 0). Free energy calculations become important for reactions favored by only one of these factors. ΔG° for a reaction can be calculated from tabulated standard-state free energy data. Since there is no absolute zero on the free-energy scale, the easiest way to tabulate such data is in terms of standard-state free energies of formation, ΔGf°. As might be expected, the standard-state free energy of formation of a substance is the difference between the free energy of the substance and the free energies of its elements in their thermodynamically most stable states at 1 atm, all measurements being made under standard-state conditions.
The sign of ΔG° tells us the direction in which the reaction has to shift to come to equilibrium. The fact that ΔG° is negative for this reaction at 25 °C means that a system under standard-state conditions at this temperature would have to shift to the right, converting some of the reactants into products, before it can reach equilibrium. The magnitude of ΔG° for a reaction tells us how far the standard state is from equilibrium. The larger the value of ΔG°, the further the reaction has to go to get from the standard-state conditions to equilibrium. As the reaction gradually shifts to the right, converting N2 and H2 into NH3, the value of ΔG for the reaction will decrease. If we could find some way to harness the tendency of this reaction to come to equilibrium, we could get the reaction to do work. The free energy of a reaction at any moment in time is therefore said to be a measure of the energy available to do work. When a reaction leaves the standard state because of a change in the ratio of the concentrations of the products to the reactants, we have to describe the system in terms of non-standard-state free energies of reaction. The difference between ΔG° and ΔG for a reaction is important. There is only one value of ΔG° for a reaction at a given temperature, but there are an infinite number of possible values of ΔG. Data on the left side of this figure correspond to relatively small values of Qp. They therefore describe systems in which there is far more reactant than product. The sign of ΔG for these systems is negative and the magnitude of ΔG is large. The system is therefore relatively far from equilibrium and the reaction must shift to the right to reach equilibrium. Data on the far right side of this figure describe systems in which there is more product than reactant. The sign of ΔG is now positive and the magnitude of ΔG is moderately large.
The sign of ΔG tells us that the reaction would have to shift to the left to reach equilibrium.
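A minimal numerical sketch of the ΔG° = ΔH° − TΔS° bookkeeping for the N2 + 3 H2 → 2 NH3 reaction discussed above, using commonly tabulated standard-state values (treat the exact numbers as assumptions to be checked against a data table):

```python
# Standard-state values for N2 + 3 H2 -> 2 NH3 (assumed, roughly the
# commonly tabulated figures; verify against a thermodynamic data table):
dH = -92.2        # ΔH° in kJ/mol
dS = -198.7e-3    # ΔS° in kJ/(mol·K)
T = 298.15        # temperature in K

dG = dH - T * dS                 # ΔG° = ΔH° − TΔS°
print(round(dG, 1))              # -33.0 kJ/mol: spontaneous at 25 °C

# Both ΔH° and ΔS° are negative, so the reaction is enthalpy-driven;
# above roughly dH/dS the TΔS° term wins and ΔG° turns positive:
T_crossover = dH / dS
print(round(T_crossover))        # about 464 K
```

This makes the sign conventions in the passage concrete: the reaction is favored by enthalpy but opposed by entropy, so a free energy calculation is needed, and it shows spontaneity only below the crossover temperature.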
Choosily Chomping Chocolate
MATHEMATICAL RECREATIONS by Ian Stewart
Scientific American, October 1998
Just because a game has simple rules, that doesn't imply that there must be a simple strategy for winning it. Sometimes there is — tic-tac-toe is a good example. But sometimes there isn't — another childhood game, "boxes," in which players take turns to fill in edges on a grid of dots and capture any square they complete, is a case in point. I call the first kind "dream games" and the others "nightmare games," for fairly obvious reasons. Games with very similar rules can be surprisingly different when it comes to their dream or nightmare status. And, of course, the nightmare games are often the most interesting, because you can play them without knowing in advance who ought to win — or, in some cases, knowing who ought to win but not knowing how they can do it.
As an illustration of these surprising facts, I'm going to discuss two games based around chocolate bars. One, "yucky choccy," is a dream game. The other, "chomp," has very similar rules, but it's a nightmare game — with the startling extra ingredient that with optimal play the first player should always win, but nobody knows how.
I have no idea who invented yucky choccy: it was explained to me by Keith Austin, a British mathematician at Sheffield University. It takes place on an idealised chocolate bar, a rectangle divided into smaller squares. Players — I'll name them "Wun" and "Too" after the order in which they play — take turns to break off a lump of chocolate, which they must eat. Call this action a move in the game. The break must be a single straight line cutting all the way across the rectangle along the lines between the squares. The square in one corner contains a lump of soap, and the player who has to eat this square loses. The black arrows in Figure 1 show the moves in the game played with a 4 \times 4 bar, and the grey arrows show all the other moves that could have been made instead.
Figure 1: Game tree for 4 \times 4 yucky choccy. Arrows indicate legal moves: the piece removed is eaten. The green square is the soapy one. Black arrows indicate an actual game, grey arrows alternative moves that could have been made instead.
This entire diagram constitutes the game tree for 4 \times 4 yucky choccy. As we'll shortly see, Too made a bad mistake and lost a game that should have been won.
A winning strategy is a sequence of moves that forces a win, no matter what moves the opponent makes. The concept of a strategy involves not just one game, but all possible games. When you play chess, most of your planning centres on "what if" questions. "If I advance my pawn, what could his queen do then?" Tactics and strategy centre around what moves you or your opponent could make in future, not just the moves that they do make.
There is a neat theory of strategies for "finite" games — ones that can't continue forever and in which draws are impossible. It relies on two simple principles:
1 A position is a winning one if you can make some move that places your opponent in a losing position. 2 A position is a losing one if every move that you can make places your opponent in a winning position.
The logic here may seem circular, but it's not: it's recursive. The difference is that with recursive reasoning you have a place to start. To see how, I'll use the above two principles to find a winning strategy for 4 \times 4 yucky choccy. The trick is to start from the end and work backwards, a process called "pruning the game tree."
The single soapy piece is a losing position. I'll symbolise that fact by the diagram
* * * *
* * * *
* * * *
* * * L
whose entries refer not to a chocolate bar, but to the various positions marked in Figure 1. Here L means "losing position," * means "don't know yet," and "W" will mean "winning position" once I've found some. In fact, the 1 \times 2, 1 \times 3 and 1 \times 4 bars are all winning positions, because you can break off all the white squares in one move to leave your opponent with the single-piece losing position. Equivalently, there are arrows in the game tree that lead from those positions directly to the single soapy square, and by principle 1 all such positions are winners. For similar reasons the same positions rotated through a right angle are also winners, so now we have pruned away all branches of the game tree that lead in one step to the single soapy square, which tells us the status of those positions:
* * * W
* * * W
* * * W
W W W L
What about the 2 \times 2 square? Well, the only moves you can make are to a 2 \times 1 or 1 \times 2 bar, and when you remove the all-white piece you leave a winning position for your opponent. Principle 2 now tells us that the 2 \times 2 square is a loser, so we can prune one more branch to get
* * * W
* * * W
* * L W
W W W L
This in turn implies that the 2 \times 3 and 2 \times 4 bars, their rotations, and so on are winners (break off a chunk to leave the 2 \times 2 square), leading to
* * W W
* * W W
W W L W
W W W L
Working backwards in this manner you can eventually deduce the win/lose status of any position. The logic runs not in circles, but in interlocking spirals, climbing down the game tree from leaf to twig, from twig to branch, from branch to limb... Hence the "pruning" image. We have to start from the end, though, which is a nuisance. What we really want to do is chop down the entire game tree in one blow, George Washington fashion, to find the status of the opening position — and if it's a winner, to find what move to play. For games with a small tree there's no difficulty: repeated pruning yields the status of all positions. In Figure 1 we can carry this out, to get
L W W W
W L W W
W W L W
W W W L
So the 4 \times 4 position, for instance, is a loser.
If you try larger bars of chocolate, square or rectangular, you'll quickly find that the same pattern emerges: losers live along the diagonal line, all other bars are winners. Now the bars on that diagonal are the square ones: 1 \times 1, 2 \times 2, 3 \times 3, 4 \times 4. This suggests a simple strategy that should apply to bars of any size: squares are losers, rectangles are winners. Having noticed this apparent pattern, we can check its validity without working through the entire game tree by verifying properties 1 and 2. Here's the reasoning. Clearly any non-square rectangle (winner) can always be converted to a square (loser) in one move. In contrast, whatever move you make starting with a square (loser), you cannot avoid leaving your opponent a non-square rectangle (winner). Moreover, the 1 \times 1 bar is square, and we know it is a losing position. All this is consistent with principles 1 and 2, so working backwards we deduce (recursively) that every square is a loser and every rectangle a winner. We now see that Too's first move in Figure 1 was a mistake. And we see that yucky choccy is a dream game no matter what size the bar is.
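The backward pruning described by principles 1 and 2 is easy to mechanize. The following sketch (mine, not from the article) memoizes the recursion and recovers the squares-lose pattern for yucky choccy:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(w, h):
    """True if the player to move wins w-by-h yucky choccy.
    A move breaks straight across, shrinking one dimension; the
    1-by-1 soapy square is a loss for the player who must eat it."""
    if (w, h) == (1, 1):
        return False                       # forced to eat the soap
    # Principle 1: a position wins if some move leaves a loser.
    return (any(not wins(nw, h) for nw in range(1, w)) or
            any(not wins(w, nh) for nh in range(1, h)))

# Squares are losers, rectangles are winners:
print([(w, h) for w in range(1, 6) for h in range(1, 6) if not wins(w, h)])
# -> [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
```

Principle 2 is implicit: a position where every move leaves a winner falls through both `any` checks and returns False.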
In principle the same procedure applies to any finite game. The opening position is the "root" of the game tree. At the other extreme are the tips of the outermost twigs, which terminate at positions where one or other player has won. Since we know the win/lose status of these terminal positions, we can work backwards along the branches of the game tree using principles 1 and 2, labelling positions "win" or "lose" as we proceed. The first time, we determine the status of all positions that are one move away from the end of the game. The next time, we determine the status of all positions that are two moves away from the end of the game, and so on. Since, by assumption, the game tree is finite, eventually we reach the root of the tree — the opening position. If this gets the label "win" then Wun has a winning strategy; if not, Too has.
We can even say, again in principle, what the winning strategy is. If the opening position is "win" then Wun should always move to a position labelled "lose" — which Too will then face. Because this is a losing position, any move Too makes presents Wun with a "win" position. So Wun can repeat the same strategy until the game ends. Similarly, if the opening position is labelled "lose," then Too has a winning strategy — with the same description. So in finite, drawless games, working backwards through the game tree in principle decides the status of all positions, including the opening one. I say "in principle" because the calculations become intractable if the game tree is large. And even simple games can have huge game trees, because the game tree involves all possible positions and all possible lines of play. This opens the door to nightmare games.
We now contrast yucky choccy with a game whose rules are almost the same, but where pruning the game tree rapidly becomes impossible — and where pruning is possible, it does not reveal any pattern that could lead to a simple strategic recipe. That game is chomp, invented many years ago by David Gale (U of California at Berkeley) and described in his marvellous new book on recreational mathematics, Tracking the Automatic Ant (Springer-Verlag, New York, 1998). Gale describes chomp using a rectangular array of cookies, but I'll stick to chocolate. (It is best played with an array of buttons or the like.) Chomp is just like yucky choccy, with the sole difference that a legitimate move consists of removing a rectangular chunk of chocolate, as in Figure 2.
Figure 2: A typical move in chomp.
Specifically, a player chooses a component square and then removes all squares above it in that column and all squares to the right of it in that row, together with all squares above and to the right of these.
There is a neat proof that for any size of bar (Figure 3a) other than 1 \times 1, chomp is a win for Wun. Suppose, to the contrary, that Too has a winning strategy. Wun then proceeds by removing the upper right square (Figure 3b). This cannot leave Too facing a losing position, since we are assuming the opening position is a loser for Wun. So Too can play a winning move, something like Figure 3c, to leave Wun facing a loser. But then Wun could have played Figure 3d, leaving Too facing the same loser. This contradicts the assumption that Too has a winning strategy, so that assumption must be false. Therefore, Wun has a winning strategy.
Figure 3: (a) Chomp bar ready for strategy stealing. (b) If Wun does this... (c) ... and Too makes a supposed winning move... (d) ... then Wun could have played Too's move in the first place.
Proofs of this kind are called "strategy stealing." If Wun can make a "dummy" move, pretend to be the second player, and win by following what ought to be a winning strategy for Too, then Too could not have had such a strategy to begin with — implying that Wun must have a winning strategy. The irony of this method of proof, when it works, is that it offers no clue to what Wun's winning strategy should be!
For chomp, detailed winning strategies are unknown, except in a few simple cases. In the 2 \times n (or n \times 2) case, Wun can always ensure that Too faces a position that is a rectangle minus a single corner square (Figure 4a).
In the n \times n case, Wun removes everything except an L-shaped edge (Figure 4b), and after that copies whatever move Too makes, but reflected in the diagonal. A few other small cases are known: for example in 3 \times 5 chomp the sole winning move for Wun is Figure 4c. "The" winning move need not be unique: in the 6 \times 13 game there are two different winning moves.
Figure 4: Winning moves in 2 \times n chomp; (b) n \times n chomp; and (c) 3 \times 5 chomp.
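These small cases can be confirmed by brute force. The sketch below (mine, not Gale's) encodes a chomp position as a non-increasing tuple of row lengths with the poisoned square at the lower left, and checks a few of the positions mentioned above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(rows):
    """True if the player to move wins chomp from `rows`, a non-increasing
    tuple of row lengths; the poisoned square is row 0, column 0."""
    if rows == (1,):                  # only the poison left: forced to eat it
        return False
    for r in range((len(rows))):
        for c in range(rows[r]):
            if (r, c) == (0, 0):
                continue              # taking the poison is an immediate loss
            # Remove square (r, c) and everything above and to its right:
            new = tuple(n if i < r else min(n, c)
                        for i, n in enumerate(rows))
            new = tuple(n for n in new if n > 0)
            if not wins(new):
                return True
    return False

print(wins((5, 5, 5)))   # True  -- 3 x 5 chomp is a first-player win
print(wins((3, 2)))      # False -- a rectangle minus its corner loses
print(wins((3, 1, 1)))   # False -- the L-shape left by the n x n strategy
```

The search confirms the strategy-stealing result for small rectangles and the two explicit strategies: rectangle-minus-corner positions and symmetric L-shapes are losses for the player to move.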
Other information about chomp positions can be found on page 598 of Winning Ways by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy (Academic Press, New York 1982). Chomp can also be played with an infinite chocolate bar — in which case, paradoxically, it remains a finite game because after finitely many moves only a finite portion of bar remains. But there is a change: Too can sometimes win. This happens, for example, with the 2 \times \infty bar. Figure 5 shows that whatever Wun does, Too can choose a reply that leads to Figure 4a, which we already know is a loser.
Figure 5: How Too wins 2 \times \infty chomp; (a) Start; (b) One type of possible play for Wun and its reply; (c) The other type of possible play for Wun and its reply.
Strictly speaking, I should be more careful here. By "\infty" I really mean the set of positive integers in their usual order, which set theorists symbolise as \omega ("omega") and refer to as "the first infinite ordinal." There are many other infinite ordinals, but their properties are too technical to describe here: see Gale's book for further details. Chomp can be played on doubly infinite arrays of ordinals, or in three or more dimensions: on the whole, little is known about winning strategies for these generalisations.
# Discrete Mathematics Questions and Answers – Counting – Terms in Binomial Expansion
This set of Discrete Mathematics Multiple Choice Questions & Answers (MCQs) focuses on “Counting – Terms in Binomial Expansion”.
1. In a blindfolded game, a boy can hit the target 3 times out of 5. If he fired 8 shots, find the probability of more than 4 hits.
a) 2.530
b) 0.1369
c) 0.5941
d) 3.998
Explanation: Here, n = 8, p = 3/5 = 0.6, q = 0.4. Let X = the number of hits; x0 = 0 hits, x1 = 1 hit, x2 = 2 hits, and so on.
So, P(X > 4) = P(x5) + P(x6) + P(x7) + P(x8) = 8C5(0.6)^5(0.4)^3 + 8C6(0.6)^6(0.4)^2 + 8C7(0.6)^7(0.4)^1 + 8C8(0.6)^8(0.4)^0 ≈ 0.5941.
2. A fair coin is tossed 15 times. Determine the probability in which no heads turned up.
a) 2.549 * 10-3
b) 0.976
c) 3.051 * 10-5
d) 5.471
Explanation: Since the coin is fair, the probability of no heads in 15 tosses is 15C0(0.5)^0(0.5)^15 = (0.5)^15 ≈ 3.051 * 10^-5.
3. When a programmer compiles her code there is a 95% chance of finding a bug every time. It takes three hours to rewrite her code when she finds out a bug. Determine the probability such that she will finish her coding by the end of her workday. (Assume, a workday is 7 hours)
a) 0.065
b) 0.344
c) 0.2
d) 3.13
Explanation: A success is a bug-free compilation, and a failure is the finding out of a bug. The programmer has 0, 1, 2, or 3 failures and so her probability of finishing the program is: Pr(X=0) + Pr(X=1) + Pr(X=2) + Pr(X=3) = (0.95)^0(0.05) + (0.95)^1(0.05) + (0.95)^2(0.05) + (0.95)^3(0.05) ≈ 0.186 ≈ 0.2.
4. Determine the probability when a die is thrown 2 times such that there are no fours and no fives occur?
a) $$\frac{4}{9}$$
b) $$\frac{56}{89}$$
c) $$\frac{13}{46}$$
d) $$\frac{3}{97}$$
Explanation: In this experiment, rolling anything other than a 4 or a 5 is a success and rolling a 4 or a 5 is a failure, so the probability of success in a single trial is $$\frac{4}{6} = \frac{2}{3}$$. Since there are two trials, the required probability is
b(2; 2, $$\frac{2}{3}$$) = 2C2 * ($$\frac{4}{6}$$)^2 * ($$\frac{2}{6}$$)^0 = $$\frac{4}{9}$$.
5. In earlier days, the probability of successfully making a telephone call was 0.6. Determine the probability of making exactly 11 successful calls in 20 attempts.
a) 0.2783
b) 0.2013
c) 0.1597
d) 3.8561
Explanation: Probability of success p = 0.6 and q = 0.4. Let X be the number of successful calls. Hence, the probability of 11 successes in 20 attempts = P(X=11) = 20C11(0.6)^11(0.4)^(20−11) = 0.1597.
6. By the expression $$\left(\frac{x}{3} + \frac{1}{x}\right)^5$$, evaluate the middle term in the expression.
a) 10*(x^5)
b) $$\frac{1}{5}*(\frac{x}{4})$$
c) 10*($$\frac{x}{3}$$)
d) 6*(x^3)
Explanation: By using the Binomial theorem, the expression $$\left(\frac{x}{3} + \frac{1}{x}\right)^5$$ can be expanded as $$\left(\frac{x}{3} + \frac{1}{x}\right)^5 = ^5C_0(\frac{x}{3})^5 + ^5C_1(\frac{x}{3})^4(\frac{1}{x})^1 + ^5C_2(\frac{x}{3})^3(\frac{1}{x})^2$$
$$+ ^5C_3(\frac{x}{3})^2(\frac{1}{x})^3 + ^5C_4(\frac{x}{3})^1(\frac{1}{x})^4 + ^5C_5(\frac{1}{x})^5$$ = $$\frac{x^5}{243} + \frac{5x^3}{81} + \frac{10x}{27} + \frac{10}{9x} + \frac{5}{3x^3} + \frac{1}{x^5}$$. Hence, the middle term is $$10\left(\frac{x}{3}\right)^3\left(\frac{1}{x}\right)^2 = \frac{10x}{27}$$.
7. Evaluate the expression (y+1)4 – (y-1)4.
a) 3y2 + 2y5
b) 7(y4 + y2 + y)
c) 8(y3 + y1)
d) y + y2 + y3
Explanation: By using the Binomial theorem, the expressions can be expanded as (y+1)^4 = 4C0y^4 + 4C1y^3 + 4C2y^2 + 4C3y + 4C4 and (y−1)^4 = 4C0y^4 − 4C1y^3 + 4C2y^2 − 4C3y + 4C4. Now, (y+1)^4 − (y−1)^4 = 2(4C1y^3 + 4C3y) = 8(y^3 + y).
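The cancellation of the even-power terms can be confirmed symbolically (a short sympy sketch):

```python
from sympy import symbols, expand

y = symbols('y')
# subtracting (y-1)^4 from (y+1)^4 leaves only the odd-power terms, doubled
assert expand((y + 1)**4 - (y - 1)**4) == 8*y**3 + 8*y
```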
8. Find the coefficient of x^4 in (x+4)^9.
a) 523001
b) 428700
c) 327640
d) 129024
Explanation: It is known that the (r+1)th term in the binomial expansion of (a+b)^n is given by Tr+1 = nCr a^(n−r) b^r. For x^4 to occur in the (r+1)th term of the expansion (x+4)^9 we need 9 − r = 4, i.e., r = 5, so T6 = 9C5 x^4 4^5 = 126 × 1024 x^4 = 129024x^4.
9. Determine the 7th term in the expansion of (x-2y)12.
a) 6128y^7
b) 59136x^6y^6
c) 52632x^6
d) 39861y^5
Explanation: The 7th term corresponds to r = 6, so Tr+1 = nCr a^(n−r) b^r gives T7 = 12C6 x^(12−6) (−2y)^6 = 924 × 64 x^6 y^6 = 59136x^6y^6.
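Terms like this are easy to double-check symbolically; a sympy sketch for the 7th term of (x − 2y)^12:

```python
from sympy import symbols, expand

x, y = symbols('x y')
# T7 = 12C6 * x^6 * (-2y)^6, so the x^6 y^6 coefficient should be 924 * 64 = 59136
coeff = expand((x - 2*y)**12).coeff(x, 6).coeff(y, 6)
print(coeff)  # 59136
```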
10. What is the middle term in the expansion of (x/2 + 6y)^8?
a) 45360x^4
b) 34210x^3
c) 1207x^4
d) 3250x^5
Simple Indices Question
1. Dec 3, 2011
lloydowen
Simple Indices Question [SOLVED]
1. The problem statement, all variables and given/known data
I'm having a little problem with indices. I know it's simple for someone with a lot of experience.
So I'm wondering what to do first in this question, brackets or should I multiply out the fractions inside the brackets?
I have to simplify it that's all :)
2. Relevant equations
3. The attempt at a solution
I don't have an attempt yet sorry :(
Last edited: Dec 3, 2011
2. Dec 3, 2011
micromass
Staff Emeritus
I'd start by working out the inside of the brackets first. I think that'll be the easiest.
3. Dec 3, 2011
lloydowen
Thanks for the reply, so when I worked it out and simplified it first I got (x^2.5 x^2 x^-3)^2 which would equal to something like x^3, but in Derive the answer is x^17..
What am I doing wrong :(?
4. Dec 3, 2011
micromass
Staff Emeritus
That X-3 isn't correct, is it??
5. Dec 3, 2011
lloydowen
Well I thought it was... What else could it be? I wish I had a good course tutor in College, I literally teach myself almost everything! *try*
6. Dec 3, 2011
Ray Vickson
You have $$F = \left( \frac{X^4 X^5 X}{X^{1.5} X^3 X^{-3}}\right)^2 .$$ The first step is to simplify the quantity inside the bracket, to obtain $F = (X^a)^2.$ So, the first order of business is to figure out what is 'a' in the following:
$$\frac{X^4 X^5 X}{X^{1.5} X^3 X^{-3}} = X^a.$$ After that, the rest is easy: $(X^a)^2 = X^{2a} .$
RGV
7. Dec 3, 2011
lloydowen
Sorry, common mistake, so it would be X^3?
8. Dec 3, 2011
lloydowen
Here's what I got... That previous post is very complicated
9. Dec 3, 2011
Mentallic
You need to slowly apply these rules, you keep making mistakes.
$$a^b\cdot a^c=a^{b+c}$$
$$\frac{a^b}{a^c}=a^{b-c}$$
$$\left(a^b\right)^c=a^{bc}$$
10. Dec 3, 2011
micromass
Staff Emeritus
How is $X^{-3}$ defined?? What is $\frac{X}{X^{-3}}$??
Are you aware of the identity $\frac{a^n}{a^m}=a^{n-m}$??
11. Dec 3, 2011
lloydowen
Thanks guys I have solved this problem now :) I will keep going over and over until I get it perfect.
12. Dec 3, 2011
Mentallic
Could you show us just to be sure? Because two wrongs can sometimes accidentally make a right :tongue:
And assuming you used the formulae correctly, just a tip, it'll probably be easier if you simplify the numerator first, then the denominator, then apply the quotient rule.
13. Dec 3, 2011
lloydowen
What I did first was simplify the insides of the brackets. To do this I applied the 2nd law of indicies and take away the denominator from the numerator for example, first of all I got x^2.5 because 4-1.5 = Positive 2.5... Then the same for the next one in the brackets.
Now the last fraction in the equation at first I forgot the rule of two the same signs make positive and the opposite signs make a negative. So x-(-3) would be equal to x^3.
Then once I got all of them, I added them up to form (x^7.5)^2
(x^7.5)^2
=x^17
14. Dec 3, 2011
Ray Vickson
OK, that works, but you still have made some errors. However, what people are suggesting is that you do it more systematically, by simplifying the numerator and denominator separately:
$$\mbox{numerator} = X^4 X^5 X = X^{4+5+1} = X^{10}$$ and
$$\mbox{denominator} = X^{1.5} X^3 X^{-3} = X^{1.5 + 3 - 3} = X^{1.5},$$ to get $$\mbox{ratio} = \frac{\mbox{numerator}}{\mbox{denominator}} = \frac{X^{10}}{X^{1.5}} = X^{10 - 1.5} = X^{8.5}.$$ There is less chance of making an error when you do it this way.
RGV
15. Dec 3, 2011
lloydowen
Oh right I see what you mean! I told you my Tutor was rubish :P I'll get into that routine then, Thank you! :)
16. Dec 4, 2011
Mentallic
How did you get from (x7.5)2=x17?
17. Dec 4, 2011
lloydowen
Ah sorry, I must have confused myself somewhere... I meant x^8.5, at least that's what I have on paper..
18. Dec 4, 2011
Mentallic
Ahh ok just a typo then, because you did it twice
19. Dec 4, 2011
lloydowen
Lmao not sure why I did it twice, I was very tired that night :P
# Am I differentiating this right
A0, A1, A2, b0, b1, b2 are all constants.
I would appreciate if one Math expert could confirm this is done right. I am developing an algorithm and this is the foundation.
-
Yeah, that looks good. And "differentiaing" is the word you are looking for, not "derivated". – Ravi Donepudi Sep 3 '12 at 9:18
I meant to type "Differentiating", not "differentiaing". – Ravi Donepudi Sep 3 '12 at 9:24
Thank you! (and I have fixed the title :)) – Primož Kralj Sep 3 '12 at 9:29
## Archive for the 'Shameless self promotion' Category
### Idea Mine
Over at Science After Sunclipse, Blake has post discussing some Star Trek: TNG history, in which I happen to have some involvement.
Reverse the Baryon Flux Polarity!
The details involve the episode Starship Mine
In the annals of nitpickery, “Starship Mine” has a certain infamy. The “baryon sweep” which causes the evacuation of the ship is, we are told, a periodic maintenance procedure which must be performed in order to clear away “baryon particles” which build up when a starship travels using its warp drive. Any stickler for jargon accuracy will happily tell you that baryons are a class of subatomic particles which includes protons and neutrons, so that sweeping away the baryons would rip apart every atom in the Enterprise.
Here’s the backstory: I went to high school with one of the members of the Star Trek staff, Naren Shankar, and we kept in much better touch in those days — we still went home for the holidays and got together. He was the science consultant at the time this episode was written (he later joined the writing staff), and was looking for an excuse for the Enterprise to be in spacedock, devoid of personnel — he had in mind some kind of procedure analogous to degaussing a submarine, and bounced the idea off of me. Rather than suggest some new, made-up particle, I suggested a more generic “exotic-antibaryon sweep;” the idea being that there were some long-lived particles, unknown to us in the 20th century, that could be picked up by the spaceship. However, that was shortened to “Baryon sweep” at some point in the script-polishing process.
Blake considers this a possibility.
However! We are told that the “baryons” which must be removed build up when a starship is travelling at warp speed. When you move through warp space, you travel at the speed of plot: the laws of physics are those which make for convenient storytelling. Who’s to say that quark combinations which fall apart in ordinary space can’t endure in warp or subspace? As it happens, in the sixth-season episode “Schisms”, a substance called “sonalagen” is trotted out which is said to be stable only in subspace, so within the framework of the show there’s precedent for this kind of dodge. The name of the “baryon sweep” would then be understood as a shortened form of, say, “residual exotic baryon sweep”, said elliptically for convenience’s sake even though the short version carries an unfortunate connotation if read naïvely. Inconvenient notations and awkward jargon held onto for “historical reasons” are common enough that this could well count as unexpected realism!
And what Blake figured out, a lot of Trek fans didn’t. As I recall, the discussion following the show on the USENET Star Trek board was pretty damning, along the lines of OMG, they’d destroy all the neutrons and protons! What idiots!, except that while all neutrons and protons are baryons, not all baryons are protons or neutrons, so even in the abbreviated form, the phrase isn’t wrong from a physics point of view, just easily misinterpreted. Of course, had I or someone else suggested yet another new particle, there would have been fans that complained about that.
Starship Mine isn’t the only episode on which I had some influence. I tried to kill Wesley Crusher once (unsuccessfully, obviously), and there are a lot of names in shows that are references to people I know or have met. In fact, in the third-season episode “Yesterday’s Enterprise,” the Klingon outpost planet’s name, “Narendra III,” is a reference to Naren, from someone he knew on the staff.
### If We Built This Large Wooden Badger . . .
Despite getting about $105,000 from Quebec and federal art-funding agencies, Canadian artist Cesar Saez’s flying-banana project appears to be meeting turbulence. According to his project’s webpage, the Geostationary Banana Over Texas has failed to get enough grassroots funding to ensure its planned launch date in August. The project’s Web-based fundraising drive says it needs $1.5-million. […] “People can think it’s a hoax,” Mr. Arpin added, “but artists have been doing a lot of interesting things that a lot of people haven’t been able to follow. He [Mr. Saez] is pushing the boundaries and letting people think outside the box – or the fruit basket.”

Maybe some people thought it was a hoax because you can’t get a helium balloon high enough to be in a geostationary orbit, and a geostationary orbit can’t exist over Texas. Geostationary is a scientific/technical term. It has a specific meaning. If you just make crap up, some people won’t take you seriously.
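The physics snark is easy to quantify. A minimal Python sketch of the geostationary altitude from Kepler's third law, using standard constants (nothing project-specific assumed):

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86164.1           # sidereal day, s
R_EARTH = 6.378137e6  # Earth's equatorial radius, m

# circular orbit with period T: r^3 = GM * T^2 / (4 * pi^2)
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
print(round(altitude_km))  # ≈ 35786 km
```

A helium balloon tops out around 40 km, three orders of magnitude short; and a geostationary orbit lies in the equatorial plane, so "over Texas" is out regardless of altitude.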
# World's hardest easy geometry problem
1. Dec 23, 2015
### livinonaprayer
have you encountered it and/or solved it? what did you think?
2. Dec 23, 2015
### Greg Bernhardt
3. Dec 23, 2015
### livinonaprayer
4. Dec 23, 2015
### JonnyG
I got infinitely many solutions...anyone else get that too?
5. Dec 23, 2015
### Hornbein
I figured all the easy angles, then was left with three equations and four unknowns. So I manfully gave up. I'm too dumb to do it.
6. Dec 23, 2015
### livinonaprayer
nooo don't give up! it took me 5 hours to find the solution. and you don't need complicated equations, you're only allowed to use simple geometry.
Last edited: Dec 23, 2015
7. Dec 23, 2015
### livinonaprayer
i think there's only one solution? i got only one solution and after going over it multiple times i even checked online to confirm that my way of getting there was correct. remember, you're only supposed to use basic geometry, no trig or some sort of super advanced math.
8. Dec 24, 2015
### zoobyshoe
I spent about an hour on it yesterday, and an hour today, paying careful attention to looking for a solution by means of the specific elementary geometry principles they list at the bottom. I have no idea where two parallel lines being cut by a third fits in, nor where the equilateral triangle might come into play. But, they wouldn't have presented them if they weren't relevant, so I figure once I realize in what way they can be employed here, the solution should be imminent.
9. Dec 25, 2015
### livinonaprayer
they are important, just like the rest of the information given.
you will have to add more lines inside the triangle to reach the solution
10. Dec 25, 2015
### zoobyshoe
I'm still too stoic to check out the hint. However, I will ask you, was the hint necessary in your case?
11. Dec 25, 2015
### livinonaprayer
nope, but that's because i always solve geometry problems like that. the hint isn't an actual step closer to the solution, it kinda points to a 'technique' you MUST use to find it (you can't solve it otherwise, it's kinda like trying to write a computer program without knowing part of the code language or writing style, impossible). i would check out the hint if you're not familiar with high school geometry.
12. Dec 25, 2015
### zoobyshoe
OK, I checked out the hint and it didn't help at all, since I'd already been doing that.
13. Dec 25, 2015
### livinonaprayer
ohh... so keep trying! try looking at the problem in different ways, exploring new directions. see what the already existing information has to offer. make a list of moves that you've already figured out that might be useful.
14. Dec 26, 2015
### micromass
Pretty obvious if you use trig. My point of view is: any proof is a good proof, so I don't bother finding it without trig haha
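To illustrate micromass's point, a numeric coordinate check is straightforward. The thread never reproduces the figure, so the sketch below assumes the classical Langley configuration (isosceles 80-80-20 triangle with cevians at 60° and 50° to the base, whose well-known answer is 30°); the site's variant uses different given angles, but the same approach works after editing the ray angles:

```python
import math

def ray_intersect(p, ang_p, q, ang_q):
    """Intersection of the rays leaving p and q at the given angles (degrees)."""
    dp = (math.cos(math.radians(ang_p)), math.sin(math.radians(ang_p)))
    dq = (math.cos(math.radians(ang_q)), math.sin(math.radians(ang_q)))
    det = dp[0] * (-dq[1]) + dq[0] * dp[1]
    t = ((q[0] - p[0]) * (-dq[1]) + dq[0] * (q[1] - p[1])) / det
    return (p[0] + t * dp[0], p[1] + t * dp[1])

def angle_at(v, a, b):
    """Angle (degrees) at vertex v in the path a-v-b."""
    ua = (a[0] - v[0], a[1] - v[1])
    ub = (b[0] - v[0], b[1] - v[1])
    dot = ua[0] * ub[0] + ua[1] * ub[1]
    return math.degrees(math.acos(dot / (math.hypot(*ua) * math.hypot(*ub))))

B, C = (0.0, 0.0), (1.0, 0.0)     # base; both base angles are 80 degrees
D = ray_intersect(B, 60, C, 100)  # cevian from B at 60 deg meets side CA
E = ray_intersect(C, 130, B, 80)  # cevian from C at 50 deg meets side AB
x = angle_at(D, E, B)             # the sought angle BDE
print(round(x, 6))  # -> 30.0
```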
15. Dec 26, 2015
### zoobyshoe
Oh, sheez, you know better than that, micro.
16. Dec 26, 2015
### livinonaprayer
it's a brain teaser with rules! using trig is forbidden.
17. Dec 27, 2015
### collinsmark
I just solved it! Finally! (geometrically, without trigonometric functions)
My goodness, it was not without effort though. There were too many false starts to recount. I honestly went through about half a notebook of paper on this problem. The effort as a whole did not take merely hours, but rather days of my life. (Of course, most of that was a nest of dead ends and trudging down the wrong paths.) [Edit: Although in retrospect, the solution isn't very difficult at all. The really difficult part is figuring out what to look for, such that you can form a solution -- once you know the solution, it seems almost easy. It's getting to that point that is certainly not (at least it was certainly not for me).]
Henceforth, the Christmas season of 2015 shall forever be known to me as "The Christmas of geometry" or perhaps better, "The Christmas where collinsmark spent several days with his nose buried in a self-made, eternally expanding, degenerate* army of occultish pictographs littered with mysterious hieroglyphs, while he was oblivious to all else around him, even in the presence of family, friends, coffee shop patrons and pub-goers."
I say this because looking back at some of my diagrams, which by some accounts might have qualities of Christmas symbols (there's the shape of a Christmas tree and some stars in there somewhere), it really looks more like a guy with a horned goat-head -- or perhaps like something straight out of the Necronomicon.
Which makes me wonder how many times throughout history some poor sap was accused of witchcraft and burned at the stake, when he or she might have just been doing a geometry problem.
Anyway, I think I am prepared to post a solution (within spoiler tags of course), after organizing my notes a bit, if @livinonaprayer (the OP) requests it. I won't post it unless requested though, since it sounds like the OP might prefer people to work on this independently* without too many hints or somebody else's solution.
If you are reading this and working on the problem on your own, don't become too dependent* on a single combination of shapes. And if you find yourself frustrated, don't become a mad degenerate*. But rather try to stay congruent* with an open minded outlook on all the shapes and shapes within shapes.
*(These terms were not used accidentally. )
Last edited: Dec 27, 2015
18. Dec 27, 2015
### zoobyshoe
Congratulations collinsmark, but please don't post any more hints! Why don't you just PM livinonaprayer with your proof.
19. Dec 27, 2015
### livinonaprayer
@collinsmark congrats!!!!!!!!!!!1 the satisfaction after solving it is amazing. regarding the solution, i think if it were posted here it might tempt people to peek so you can indeed pm me if you want. the whole solution is out there on the internet anyways so if someone is really desperate they could find it if they wanted to.
20. Dec 28, 2015
### vin300
I have created a solution in pictures but it seems to be impossible to upload. If the central intersection is called O, then various angles are obtained as follows: C=20, AOB=70, DOE=70, DOA=110, ADO=50, EOB=110, OEB=40, CDO=130, CEO=140. Thus far, almost every angle except x and its complement have been derived. To find x, the only possible way seems to be to find every elemental length exactly. Use the sine rule. AC = CB = 100. By the sine rule, AB = 34.729; from AB, DB = 44.6475; from AB again, EB = 46.79; AB = AD. From EB, CE = 53.21. Use the cosine rule to find DE, with CD = 100 − AD = 65.271; DE is found to be 23.756. Use the sine rule once again with DE to obtain angle EDC of nearly 50 degrees. We know the rest already, hence x = 30 degrees.
21. Dec 28, 2015
### vin300
22. Dec 28, 2015
### collinsmark
And by the way, we're not allowed to use trigonometric functions or rules as indicated in the link containing the problem, "You may not use trigonometry, such as sines and cosines, the law of sines, the law of cosines, etc."
[Edit: Although to be completely forthcoming, there was a point where I found $x = 30^\circ$, but that was before I realized that I made a mistake. After fixing the mistake and rethinking the problem, I came up with a different answer. Like I mentioned, I went down many, many wrong paths before finally reaching a good solution. By the way, are you referring to the first problem in the link? The link has two problems, the "hardest" and "second hardest." I haven't attempted the second one yet.]
Last edited: Dec 28, 2015
23. Dec 28, 2015
### livinonaprayer
are you talking about the second one? i haven't tried to solve it yet and i'm too lazy to look up high school trig right now so... as for problem 1 (the one i was talking about), x isn't 30 degrees. for both problems trig isn't allowed and you can definitely find the solution by just using simple geometry. (and please post this under a spoiler warning next time). keep trying!!
24. Dec 28, 2015
### zoobyshoe
According to the site where the problem is posted:
Notice in his commentary he mentions hours of frustration and even the possibility of being driven insane for those who tackle this. But, apparently, angle x can be found with just the above, very elementary, geometric principles.
25. Dec 28, 2015
### vin300
The answer was posted for the second hardest problem in #3 with base angles 30, 50, 60, 20. calcs seem to have made the result slightly erroneous, because the angle is not 30 degrees, but 25. Another way must be found out.
Last edited: Dec 28, 2015
# The Value of $[\vec{a} - \vec{b}, \vec{b} - \vec{c}, \vec{c} - \vec{a}]$, Where $|\vec{a}| = 1$, $|\vec{b}| = 5$, $|\vec{c}| = 3$ - Mathematics
MCQ
Sum
The value of $\left[ \vec{a} - \vec{b} , \vec{b} - \vec{c} , \vec{c} - \vec{a} \right], \text { where } \left| \vec{a} \right| = 1, \left| \vec{b} \right| = 5, \left| \vec{c} \right| = 3, \text { is }$
#### Options
• 0
• 1
• 6
• none of these
#### Solution
We have
$\left[ \vec{a} - \vec{b} , \vec{b} - \vec{c} , \vec{c} - \vec{a} \right]$
$= \left( \left( \vec{a} - \vec{b} \right) \times \left( \vec{b} - \vec{c} \right) \right) \cdot \left( \vec{c} - \vec{a} \right) \quad \left( \text{by definition of the scalar triple product} \right)$
$= \left( \left( \vec{a} - \vec{b} \right) \times \vec{b} - \left( \vec{a} - \vec{b} \right) \times \vec{c} \right) \cdot \left( \vec{c} - \vec{a} \right)$
$= \left( \vec{a} \times \vec{b} - \vec{b} \times \vec{b} - \vec{a} \times \vec{c} + \vec{b} \times \vec{c} \right) \cdot \left( \vec{c} - \vec{a} \right)$
$= \left( \vec{a} \times \vec{b} - \vec{0} - \vec{a} \times \vec{c} + \vec{b} \times \vec{c} \right) \cdot \left( \vec{c} - \vec{a} \right)$
$= \left( \vec{a} \times \vec{b} \right) \cdot \left( \vec{c} - \vec{a} \right) - \left( \vec{a} \times \vec{c} \right) \cdot \left( \vec{c} - \vec{a} \right) + \left( \vec{b} \times \vec{c} \right) \cdot \left( \vec{c} - \vec{a} \right)$
$= \left( \vec{a} \times \vec{b} \right) \cdot \vec{c} - \left( \vec{a} \times \vec{b} \right) \cdot \vec{a} - \left( \vec{a} \times \vec{c} \right) \cdot \vec{c} + \left( \vec{a} \times \vec{c} \right) \cdot \vec{a} + \left( \vec{b} \times \vec{c} \right) \cdot \vec{c} - \left( \vec{b} \times \vec{c} \right) \cdot \vec{a}$
$= \left[ \vec{a}\, \vec{b}\, \vec{c} \right] - \left[ \vec{a}\, \vec{b}\, \vec{a} \right] - \left[ \vec{a}\, \vec{c}\, \vec{c} \right] + \left[ \vec{a}\, \vec{c}\, \vec{a} \right] + \left[ \vec{b}\, \vec{c}\, \vec{c} \right] - \left[ \vec{b}\, \vec{c}\, \vec{a} \right]$
$= \left[ \vec{a}\, \vec{b}\, \vec{c} \right] - 0 - 0 + 0 + 0 - \left[ \vec{a}\, \vec{b}\, \vec{c} \right] \quad \left( \because \left[ \vec{a}\, \vec{b}\, \vec{c} \right] = \left[ \vec{b}\, \vec{c}\, \vec{a} \right] = \left[ \vec{c}\, \vec{a}\, \vec{b} \right] \right)$
$= 0$
Note that the given magnitudes $|\vec{a}| = 1$, $|\vec{b}| = 5$, $|\vec{c}| = 3$ play no role: the scalar triple product of $\vec{a} - \vec{b}$, $\vec{b} - \vec{c}$, $\vec{c} - \vec{a}$ vanishes for any three vectors, since the three differences sum to $\vec{0}$ and are therefore coplanar. Hence the correct option is 0.
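The result can also be sanity-checked numerically, since the identity holds for arbitrary vectors (a short Python sketch):

```python
import random

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def sub(u, v):
    return tuple(ui - vi for ui, vi in zip(u, v))

random.seed(0)
for _ in range(100):
    a, b, c = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(3)]
    # [a-b, b-c, c-a] = ((a-b) x (b-c)) . (c-a) should vanish for any a, b, c
    triple = dot(cross(sub(a, b), sub(b, c)), sub(c, a))
    assert abs(triple) < 1e-9
```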
#### APPEARS IN
RD Sharma Class 12 Maths
Chapter 26 Scalar Triple Product
MCQ | Q 2 | Page 18
# Corrigenda: Numerical Approximation of PDEs
The following list of corrections is updated occasionally and ordered chronologically. Important changes are marked by an exclamation mark. Letters t,m,b refer top, middle, and bottom parts of a page.
Last update: February 7, 2019
# Chapter 1: Finite difference method
Nr. Imp. Page(s) Correction
1 6m Remove subscript $x$ in identities for difference quotients (3 times); replace $x$ by $x_j$ (3 times) in formula following This implies that
2 ! 14m Remove factor $\Delta x$ in denominator of identity for $\norm{V}$
3 24t Change: ... is regular with eigenvalues $\lambda_j \ge 1$, $j=1,2,\dots,J-1$, and ...
4 29t Change: ... that $f(z) \ge -1$ to deduce $\abs{f(z)} \le 1$, i.e., ...
5 37m Replace plus by minus sign in formula $3/4+\abs{x}/2$
6 ! 56m Remove minus sign in identity $Lu = -\Delta ...$
7 ! 61b Replace $A$ by $-A$ in the equation of Example 1.14
8 34-35 Improved derivation of the wave equation: wave_equation.pdf (V. 7.11.2018)
9 22t Replace U(k+1,J+1) by U(k+2,J+1) in line 17 of code explicit_euler.m
10 ! 29m Remove factor $\Delta x$ in sum.
11 32b According to the Gerschgorin theorem the eigenvalues of $\widetilde{A}$ are bounded from below by one.
12 36m Add remark: We will see that the wave speed is given by $c = (\sigma/\varrho)^{1/2}$ so that to double the frequency of a vibrating string we have to divide its weight or increase its tension by 4.
13 ! 42t,m Correct: ... if the complex eigenvalues of the matrices $A_p$ are distinct and bounded by one and Since $z_1z_2 =1$ a sufficient condition for $z_1\neq z_2$ and $\abs{z_{1,2}}\le 1$ is that $\abs{\gamma_p}<1$. Correct also: $-1 < \cos(\pi p \Delta x) < 1$ and $\abs{z_{1,2}}\le 1$
14 51 Add: If $u\in C^4(\overline{\O})$ then we have the estimate ... in Proposition 1.17
# Chapter 2: Elliptic partial differential equations
Nr. Imp. Page(s) Correction
1 67b Replace $v\in C(\overline{\O})$ by $v\in C^1(\overline{\O})$
2 81b Change: Let $\O\subset \R^d$ be bounded, $u\in L^p(\O)$, and ...
3 93m Add: If $a$ is symmetric then the factor $k_a/\alpha$ can be replaced by its square root.
4 ! 95t Remove minus sign in front of boundary integral; replace Proof by Proof (sketched)
5 ! 95m Add We incorporate the result that because of the convexity of $\O$ smooth functions are dense in the set $V$ of all functions $v \in H^1_0(\O)$ sucht that $-\Delta v\in L^2(\O)$ with respect to the norm $v \mapsto \norm{\nabla v} + \norm{\Delta v}$, cf. [5].
6 70 Add ... called a scalar or inner product ... in Definition 2.3 (i).
7 71m Add: By the parallelogram identity $2 \norm{x}^2 + 2 \norm{y}^2 = \norm{x-y}^2 + \norm{x+y}^2$ we have that ...
8 72t Replace $r$ by $z$ and use that $u-z\in U$
9 95-96 Replace $x'$ by $z'$ (five times)
# Chapter 3: Finite element method
Nr. Imp. Page(s) Correction
1 ! 106b Replace factor $h_T^{-k}$ by $h_T^k$ in Proposition 3.3
2 108m Item (v): Replace first integration domain by $T$ and second by $\hT$ in first formula and first integration domain by $\hT$ in second formula
3 108b Change $\chi$ to $\widehat{\chi}$
4 ! 111t Replace $C^{m-1}$ by $C^r$ and correct $0\le k\le \min\{r+1,m\}$
5 116b Change $A^{-1} = (I-M)^{-1} D^{-1}$
6 118m Add: Assume $u_h\in V_h$ is such that $a_h(u_h,v_h) = \ell(v_h)$ for all $v_h\in V_h$.
7 131 Replace variables (nS,idx_Bdy,newCoord) by (nEdges,idxBdy,newCoords)
8 151m Use command "m_lumped = spdiags(m_lumped_diag,0,nC,nC);"
9 103b Add: By integrating the gradient field $w=(w_1,w_2,\dots,w_d)$ along appropriate paths ...
10 117 Add: , i.e., the minimum is attained on the boundary. at end of Proposition 3.5
11 118 Replace $c>0$ by $c_{SL}>0$ in Proposition 3.6
12 ! 118b The estimate for $\abs{\ell(v_h)-\ell_h(v_h)}$ is suboptimal; a better result is provided in the pdf file quadrature.pdf
13 ! 122b Remove squares of first two norms
14 ! 144 Add prefactor $\tau$ to the sum in Proposition 3.13, replace factors $ch^2$ and $c\tau$ by $c\tau^{-1/2}h^2$ and $\tau^{1/2}$, respectively, and terms $u_t$ and $u_{tt}$ by $\partial_t u$ and $\partial_t^2 u$, respectively, in the proof
15 140,145 The solutions should be sought on the closed time interval $[0,T]$ in Definitions 3.13 and 3.15
# Chapter 4: Local resolution techniques
Nr. Imp. Page(s) Correction
1 206 Correct author name of Reference 22: Praetorius, D.
# Chapter 5: Iterative solution methods
Nr. Imp. Page(s) Correction
1 210 Add in Remark 5.1: This follows from the explicit characterization of eigenfunctions of the discretized Laplace operator on uniform grids, cf. Lemma 1.1 in Chapter 1.
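The explicit characterization referred to in this correction can be checked numerically; a numpy sketch for the unscaled 1D matrix $\mathrm{tridiag}(-1,2,-1)$ (the factor $1/\Delta x^2$ is omitted):

```python
import numpy as np

# 1D discrete Laplacian on a uniform grid with homogeneous Dirichlet
# boundary conditions, scaling by 1/dx^2 omitted: tridiag(-1, 2, -1)
n = 50
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# known eigenvalues: lambda_k = 4 sin^2(k pi / (2(n+1))), k = 1, ..., n
k = np.arange(1, n + 1)
exact = 4 * np.sin(k * np.pi / (2 * (n + 1)))**2

computed = np.sort(np.linalg.eigvalsh(A))
assert np.allclose(computed, np.sort(exact))
```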
# Chapter 6: Saddle-point problems
Nr. Imp. Page(s) Correction
1 251m Add in Remark 6.2 (ii): ... $\norm{Mx} \ge \gamma \norm{x}_\ell$ for all $x\in \R^n$, i.e., $M$ is injective, hence regular, and to ...
2 253m Add: and analogously, noting $B_K=0$,... and $B^Ts = [0,B_I^Ts]^T\equiv B_I^Ts \in \text{Im} B^T$
3 ! 253b Exchange first and second conditions and enumerate them.
4 265b Add in Theorem 6.4: ... if and only if $a$ is coercive on $\text{ker}B= \{v\in V: b(v,q) = 0 \text{ f.a. }q\in Q\}$
5 267 Correct: ... let $u\in \text{ker}B \setminus \{0\}$; change: With $p=0$ we have that $(u,p)=L^{-1}(\ell,0)$, i.e., ...; correct $\sup_{v\in V\setminus \{0\}}$ after show that
6 274m Change: In fact, a simpler error estimate that resembles the classical Cea lemma can be proved if ...
7 ! 274b Insert condition ${q_h}\rvert_{\partial \Omega} = 0$ in definition of $Q_h$
8 ! 276m Delete such that there exist $v\in V$ and $q_h\in Q_h$ with $b(v,q_h)\neq0$ and and nontrivial
9 279t Add remark: Note that $v_h \mapsto \norm{Bv_h}_Q$ defines a norm on $V_h$ so that $\norm{B v_h}_Q \le c_h \norm{v_h}$, where $c_h$ depends on the dimension of $V_h$
10 279m Add: ... we expect $\sigma = 1$ since $\norm{\text{div} I_h v} \le c h \norm{D^2 v}$ for every smooth divergence-free vector field $v$ on $\Omega$
# Chapter 7: Mixed and nonstandard methods
Nr. Imp. Page(s) Correction
1 ! 317 Corrections in lines 21, 36, 45 of code "stokes_cr", see file stokes_cr_corr.m
2 284m Correct Remarks 7.1 (ii): The set $C^\infty(\overline{\Omega};\R^d)$ is dense in $H(\text{div};\Omega)$.
3 286b Replace factor $(1+c_P^2)$ by $(1+c_P^2)^{1/2}$ twice
4 288b Insert: But then $u_h(x) \cdot n_{S_{d+1}} = d_T (x-z_{d+1}) \cdot n_{S_{d+1}} = 0$ for every ...
5 289m Replace $x\in T_+\cup T_-$ by $x\in \Omega$
6 290m Add: ... has to be used which maps normals to normals.
7 292m Replace ... allows us to prove ... by ... allows us to deduce ...
8 293m Add comment: The smallness condition on $h$ cannot be avoided in general; for $h$ sufficiently small we have that the number of sides dominates the number of elements in $\mathcal{T}_h$.
9 293m Replace $v_h$ by $-v_h$ in Rem. 7.6
10 294t Replace $H^1(T)$ by $H^1(T;\R^d)$ twice
11 295b Add in Rem. 7.2 (ii): ... provided that the additional regularity condition $g\in H^1(\Omega)$ is satisfied.
12 305t Proof of the discrete inf-sup condition for the Taylor-Hood element: taylor_hood.pdf
13 305m Lemma 7.3: Add There exists $\beta'>0$ such that for all $q_h \in Q_h$ we have ...; define $w_z = 0$ for $z\in \Gamma_D = \partial \Omega$
14 307b Delete unnecessary brackets $(\beta'...)$ and incorrect term $-\mu \norm{\nabla u_h} \norm{p_h}_{L^2(\Omega)}$
15 309t Change second sentence in Rem. 7.10 (ii): Here, we use a special bilinear form $c_h$ and are able to avoid an inf-sup condition for $b$.
16 311b Write $b_h$ instead of $b$ for discretized bilinear form
17 312b Correct: Since $I_F$ reproduces constants we may replace $v$ by $v-\overline{v}_T$ ...
18 324m Insert: ... and in case that $\Gamma_N$ is empty in fact ...
19 328b Correct: Applying Gauss's theorem on inner sets $\Omega_j$ or imposing $q(u)\cdot n = 0$ on $\partial \Omega \cap \partial \Omega_j$ yields that ...
20 334m Insert dots for scalar products with vector $n_S$ twice
21 343b Move factor $d$ inside brackets in front of Kronecker symbol $\delta_{mn}$
# Chapter 8: Applications
Nr. Imp. Page(s) Correction
1 403 Add reference M. Costabel, A coercive bilinear form for Maxwell’s equations, Journal of Mathematical Analysis and Applications, 157 (1991), 527–541.
2 349m Correct: ... two nearby material points ...
3 354t Modify: then we obtain robust approximation results in the relative error for the limit $\lambda \to \infty$
4 356m Add in Rem. 8.4: which is equivalent to $\int_\Omega \mathbb{C} \veps_{\mathcal{T}} (u_h) : \veps_{\mathcal{T}} (v_h) \, dx = \ell(v_h)$
5 358t Correct: $a_h(u,v_h) = \sum_{T\in \mathcal{T}_h} \int_T \mathbb{C} \veps(u): \nabla v_h \, dx$
6 358t Correct two signs: $... = \int_\Omega f\cdot v_h \, dx + \int_{\Gamma_N} g \cdot v_h \, ds - \sum ...$
# Appendix A: Problems and projects
Nr. Imp. Page(s) Correction
1 ! 406 A.1.6: add factor $1/2$ to right-hand side
2 ! 407 A.1.8: correct $\phi_\ell(x) = e^{i\ell x}$
3 ! 410 A.1.21: replace $a_n$ and $b_n$ by $\alpha_n$ and $\beta_n$
4 ! 412 A.1.28: correct right-hand side $\frac12 \partial_t^-(\partial_t^+ U_j^k)^2$
5 ! 422 A.2.1: replace term $r^{-1} \widetilde{u}(r,\phi)$ by $r^{-1} \partial_r \widetilde{u}(r,\phi)$ in formula for $\Delta u(r,\phi)$
6 ! 429 A.2.33: remove factors $1/N$ (twice)
7 ! 438 A.3.26: where $\varrho_j>0$ is the inverse of the height of $T$ ...
8 ! 407 A.1.8.: Assume that $f=\sum_{\ell\in \mathbb{Z}} f_\ell \phi_\ell$ in second part of (ii)
9 411 A.1.26: Hint: Use that the spectral radius is a sharp lower bound for operator norms
10 ! 420 A.1.5: Correct $c = (\sigma/\varrho)^{1/2}$ and use the cosine transform to compute the coefficients $\alpha_m$
11 ! 421 A.1.8: Correct density $\varrho = 1.2041\, {\rm kg/m}^3$
# Appendix B: Implementation aspects
Nr. Imp. Page(s) Correction
1 509 Add command "d_fac = factorial(d)" in function "nodal_basis" and correct identity for "Vol_T(j)" to improve performance
# Other corrections
Nr. Imp. Page(s) Correction
1 iv Correct Universität
# Acknowledgment
Thanks for valuable hints to: D. Gallistl
|
{}
|
# Open problem in geometric group theory
16 Nov
Update: Per Jeremy’s comments, I’ve updated the end of this post to be more clear and less inaccurate/blatantly untrue. Thanks Jkun!
The other day in my topics in geometric group theory course, my professor described an open problem which I thought was pretty cool and relatively easy to explain.
First, if you haven’t (which you probably haven’t), check out the “groups” section in my post about Cayley graphs, which was really just about groups. I just need you to know what the group of integers, $\mathbb{Z}$, is.
Now we’re going to talk about generators. Again, mathematicians do a pretty good job of using reasonable words to name mathematical terms: generators of a group are elements of the group that “generate” that group- just like our parents created our generation (aww yeah), the integers “1” generates all of the integers. So 2 = 1+1, 17 = 1+ 1+ … +1, -23 = (-1)+ (-1)+ … + (-1). There’s a bit of terminology that I slipped in there without pointing it out: I used -1 as well as +1 to build all the integers, but I only said that 1 generated everything. Technically it’s that 1 and -1 generate all the integers, but since -1 is “negative one,” or the inverse of 1, I’ll just say that 1 generates the integers, leaving it implicit that we also include the inverse.
I could also choose more generators. So, for instance, “1” and “2”. Why would I do this when I can already write everything in terms of 1? Because it’s faster sometimes to write things in terms of 2. For instance, 6 = 1+ 1+ 1+ 1+ 1+ 1, but 6= 2+2+2. I only need three generators to express 6 if I use 2 as a generator, but I need six if I only have 1. One thing that geometers care about is distance. So if I just use “1” as my generator, the distance from 0 to 6 is 6. But if I use “2” as a generator, the distance from 0 to 6 is 3, because I only needed to use three generators to get to 6.
Six blue arrows, only three red arrows
OK so that’s generators, and why we care about them (distances, a.k.a. metrics). We’re going to switch gears for a minute and then I’ll describe the open problem (it hinges on using different generators for the integers).
Next topic, much scarier sounding word that isn’t actually that scary: quasi-isometry. Let’s deal with “isometry” first, and “quasi” later. An isometry is a function from one space to another that preserves distances. So if x and y are distance 3 apart in the first space, then f(x) and f(y) are still distance 3 apart in the second space. A few examples: translating $x\mapsto x+1$ on the real number line is an isometry, since if $x-y = d$, then $f(x) - f(y) = (x+1)-(y+1) = x-y = d$. Multiplying $x\mapsto 3x$ isn’t an isometry: for instance, 1 and 4 are distance 3 apart on the real line, but 3*1 =3 and 3*4=12 are distance 12-3=9 apart. So this map didn’t preserve distance, and so it isn’t an isometry. In symbols, the equation for an isometry is $d(x,y) = d(f(x),f(y))$, and we want this to be true for all possible choices of x and y.
Isometries are great, but sometimes life is a little bit fuzzier than perfectly preserving distances. We wanted a word to describe functions that were almost isometries, and in math land, that would be quasi-isometry. Using an equation, we have that f is a quasi-isometry if there are constant numbers k and C so that $\frac{1}{k}d(f(x),f(y))-C \leq d(x,y) \leq k d(f(x),f(y))+C$. So it’s an isometry up to some multiplicative and additive constants. Examples and non examples: all isometries are quasi-isometries, we just let k=1 and C=0. That multiplication map from earlier is a quasi-isometry with k=3 and C=0: $d(f(x),f(y)) = d(3x,3y) = |3x-3y| = 3|x-y| = 3d(x,y)$. Here’s something that’s not a quasi-isometry: $x\mapsto e^x$. The exponential map grows faster than the multiplicative constant can hold it down.
And now that you know what the integers, generators, and quasi-isometries are, I can describe this open problem.
Remember that $2^0 =1, 2^1=2, 2^2= 4, 2^3 = 8,\ldots$ and $3^0 =1, 3^1=3, 3^2=9, 3^3=27,\ldots$. This is the question: are the integers generated by $\{2^n\}$ quasi-isometric to the integers generated by $\{3^n\}$? For instance, we know that $9=3^2$, so 9 has length 1 in the second set (since $3^2$ is a generator), but $9=2^3 + 2^0$, so 9 has length 2 in the first set. You can play around with any integer and write it as a sum of powers of 2 and of 3, and try to see if there’s a relationship between the lengths (e.g. the length of 45 is 4 in the 2-set, since $45 = 2^5+2^3+2^2+2^0$, but only 3 in the 3-set, since $45 = 3^3+3^2+3^2$). To reiterate, the length of a number in the first set is the minimum number of powers of two you need to add up to it- so although $9 = 2 ^1 + 2^2 + 2^0+2^1$, the length of 9 in the first set isn’t 4, but just 2, which is the minimum number. Similarly, in the second set it’s the minimum number of powers of three you need.
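If you want to play with these word lengths yourself, here is a small sketch (my own illustration, not from the course): a breadth-first search for the minimum number of generators, inverses allowed, needed to reach a given integer. The `bound` cutoff is a heuristic so the search terminates; it is fine for small examples.

```python
from collections import deque

def word_length(n, gens):
    """Minimum number of generators (or their inverses) summing to n."""
    if n == 0:
        return 0
    steps = list(gens) + [-g for g in gens]
    bound = 4 * abs(n)  # heuristic cap on intermediate values
    seen = {0}
    frontier = deque([(0, 0)])
    while frontier:
        value, dist = frontier.popleft()
        for s in steps:
            v = value + s
            if v == n:
                return dist + 1
            if abs(v) <= bound and v not in seen:
                seen.add(v)
                frontier.append((v, dist + 1))
    return None

powers_of_2 = [2 ** k for k in range(7)]  # 1, 2, 4, ..., 64
powers_of_3 = [3 ** k for k in range(5)]  # 1, 3, 9, 27, 81

print(word_length(9, powers_of_2))  # 2, since 9 = 8 + 1
print(word_length(9, powers_of_3))  # 1, since 9 = 3^2 is itself a generator
```

Of course, this only measures lengths for one fixed finite batch of generators; the open problem is about the metrics coming from *all* the powers at once.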
Fact: If we only look at a finite set of generators, the sets are quasi-isometric. The tricky part is when we have infinitely many generators, as they get farther and farther apart.
Now you understand something that no one knows the answer to yet! Isn’t that cool??? This is math!
### 14 Responses to “Open problem in geometric group theory”
1. j2kun November 16, 2013 at 10:16 pm #
I’m confused. Is the metric on integers generated by powers of two different from the metric on integers generated by powers of three? Surely it must be, or else the identity map would be a quasi-isometry.
• j2kun November 16, 2013 at 10:17 pm #
(also, I think 7 has length 3 as well: 7 = 111 in binary)
• yenergy November 17, 2013 at 11:52 am #
Ha thanks Jeremy! I switched it to 9. I know binary…
This is the word metric, so in the powers of two, the powers of two are our alphabet. If you stop and say generated by 1, 2, 4 vs. 1, 3, 9, you’ll get a quasi-isometry, but the open problem is if you use all the powers- unclear if there’s a universal (k,C). Kevin mentioned this off-hand during class and I thought it’d be a cool little post.
2. Stephen Bigelow (@stephenpi) December 8, 2013 at 5:20 pm #
For fixed N, the powers of 3 modulo 2^N can be anything that is 1 or 3 modulo 8 (which I found in the Wikipedia article on Multiplicative group of integers modulo n). Thus powers of 3 can be arbitrarily large in the powers-of-2 metric. Why doesn’t that answer the question?
• yenergy December 9, 2013 at 7:07 pm #
It might be this ridiculous cold but it’s probably just me- I don’t quite understand what you’re saying in your first sentence, Stephen. Can you explain further? Also, thanks for stopping by and commenting!
• Stephen Bigelow (@stephenpi) December 16, 2013 at 12:21 pm #
[Oops. I thought I typed a reply but it looks like it hasn’t shown up. Let me try again…]
If you write the powers of three in binary (1, 11, 1001, 11011, …) then you will eventually see any given sequence of 1s and 0s as a substring – at least, if I understand Wikipedia correctly. So there’s a power of three that goes blahblah1010101010101blahblah. Surely that could be made arbitrarily large in the powers-of-two metric, although it is length one in the powers-of-three metric.
I think that proves the identity map is not a quasi-isometry between the two metrics, but I guess that leaves open the question of whether they are quasi-isometric by some contrived function from Z to Z.
I trust your cold is better!
• yenergy December 19, 2013 at 9:45 am #
Fantastic; I love it when people answer their own questions. But now I have questions for you. I’m not seeing the connection between the integers mod 3 being generated by 2 (I’m guessing that’s what you’re referring to, based on the wikipedia entry?) and this writing powers of three in binary=getting any sequence of 1s and 0s as a substring. Can you clarify more?
It’s possibly whooping cough! I’ve got antibiotics and an inhaler and I feel much less bad about neglecting my blog.
3. Stephen Bigelow (@stephenpi) January 6, 2014 at 12:01 pm #
Whooping cough?! My mother had that as a young child, and has a very early memory of someone saying to her mother “Well, you can’t expect to keep them all.” Perhaps not the best anecdote to tell you – I’m sure you’ll be fine.
Now, about the powers of 3, let’s just do it modulo 128 instead of 2^N.
Let U be the group of units modulo 128. It’s just the odd numbers, so it has order 64.
Let G be the numbers in U that are 1 or 3 modulo 8. It’s easy to check that G is a subgroup of U with order 32.
It turns out G is cyclic, generated by 3. Thus, for any bits a,b,c,d, the numbers abcd001 and abcd011 are both powers of 3 modulo 128. That is, there are powers of 3 that are [junk]abcd001 and [junk]abcd011.
The same goes for “arbitrarily large values of 128”. So there are powers of 3 that are [junk][anything-you-want]001 and [junk][anything-you-want]011 in binary.
The fact that G is cyclic and generated by 3 is apparently due to Gauss. I couldn’t find a proof online, though there’s lots of interesting stuff about powers of 3 in binary. It shouldn’t be too hard: you need to show 3^16 is not 1 modulo 128 (for arbitrary values of 128).
Anyway, thanks for the food for thought, and get well soon. Sorry I’m slow because I get no sort of notification when you reply to a comment…
### Trackbacks/Pingbacks
1. - November 27, 2013
[…] Quasi-isometries and an open problem in geometric group theory […]
2. - March 2, 2015
[…] and topology- “group theory” is the study of groups, which we’ve seen a few times before, and “geometric” means that we’ll be looking at shapes. Geometric group […]
3. - April 26, 2015
[…] describe our groups- every element will be written as a word in the alphabet of generators. So we’ll use letters like a,b,c… to represent generators, and we’ll put them […]
4. - September 24, 2015
[…] background on geometric group theory: you’ll want to know what a group action is and what a quasi-isometry is. (Refresher: a group G acts on a space X if each group element g gives a homomorphism of the […]
5. - November 14, 2015
[…] I’m obviously referring to a space, and then I mean the Cayley graph of that group (which changes depending on generating set, but if it’s a finite generating set then all Cayley graphs are […]
6. - November 26, 2015
[…] also like the open problems series: geometric group theory, combinatorics, and group […]
|
{}
|
# What Are The Factors Affecting The Efficiency Of Electrostatic Precipitators In Operation?
Aug. 30, 2021
Electrostatic Smoke Precipitator Dust Collector
In service, several factors can reduce the working efficiency of an electrostatic precipitator, which is not what you want to see. This article therefore summarizes six factors that affect its efficiency. We hope it is helpful to you.
## 1. Flue gas flow rate
An excessive flue gas flow rate not only shortens the residence time of the dust in the electric field, but also blows collected dust back into the gas stream, either directly off the dust layer or during rapping. The resulting re-entrainment of dust particles and dust clumps reduces dust removal efficiency.
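The flow-rate effect above can be made quantitative with the classical Deutsch-Anderson model, which is not part of this article; the migration velocity, plate area, and flow rates below are made-up illustrative values.

```python
import math

def collection_efficiency(w, A, Q):
    """Deutsch-Anderson estimate: eta = 1 - exp(-w * A / Q),
    with migration velocity w (m/s), plate area A (m^2), gas flow Q (m^3/s)."""
    return 1.0 - math.exp(-w * A / Q)

w, A = 0.1, 5000.0  # assumed values, for illustration only
for Q in (50.0, 100.0, 200.0):
    # Doubling the gas flow (hence the gas velocity) cuts the residence
    # time in the electric field and lowers the collection efficiency.
    print(f"Q = {Q:6.1f} m^3/s -> eta = {collection_efficiency(w, A, Q):.4f}")
```

The model ignores re-entrainment, but it captures the monotone trend: higher flow, lower efficiency.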
## 2. Airflow distribution
If the airflow distribution is uneven, dust removal efficiency is high in places where the airflow speed is low, but where the local airflow speed is high, scouring occurs, causing secondary re-entrainment and reducing removal efficiency. The overall efficiency is reduced because the harmful effect of high gas speed outweighs the beneficial effect of low gas speed.
## 3. Dust specific resistance
The best dust collection is achieved when the specific resistance (resistivity) of the dust is roughly 10^4 to 10^10 Ω·cm. If the resistivity is too low, the dust does not adhere well to the collection plate and is easily carried away by the flue gas, which reduces collection efficiency. If the resistivity is too high, the charge on the dust cannot easily escape and the adhesion force increases; removing the dust from the plate then requires a stronger rapping force, which easily causes secondary re-entrainment and again lowers removal efficiency.
Electrostatic Dust Collector
## 4. Air leakage
When an electrostatic precipitator operating under negative pressure leaks air, secondary re-entrainment of dust occurs: the gas velocity increases, which shortens the residence time of the flue gas in the electric field. Air leakage also lowers the flue gas temperature, which may lead to condensation or even corrosion.
## 5. Flue gas temperature
When the flue gas temperature is within the range of 110~130˚C, the dust removal efficiency is good. If the temperature is too high, the specific resistance of the dust falls and its viscosity is low, so collected dust is re-entrained more quickly and removal efficiency decreases. If the temperature is too low, humidity increases, ionization is weakened, the corona current is reduced, and removal efficiency drops. A typical flue gas velocity is 0.8~1.2 m/s.
## 6. Soot concentration
As the soot concentration increases, so does the number of dust particles in the electric field. This suppresses the corona current and reduces dust removal efficiency; in severe cases the current tends to zero, i.e. corona closure occurs. Therefore, the dust content of the flue gas at the precipitator inlet must be kept below 5 g/m³, and flue gas with a higher dust concentration should be pretreated first.
|
{}
|
# How do I use something related to mobius inversion to solve this problem?
The problem is given below: For two sequences of complex numbers $\{a_0, a_1, \cdots, a_n, \cdots\}$ and $\{b_0, b_1, \cdots, b_n, \cdots\}$ show that the following relations are equivalent: $$a_n = \sum_{k = 0}^n b_k \text{ for all } n \quad\Longleftrightarrow\quad b_n = \sum_{k = 0}^n (-1)^{k + n} a_k \text{ for all } n.$$
I was trying to learn Mobius Inversion Formula and Multiplicative functions and I found this problem. I understand how to use the formula to deal with something like $\sum_{d|n} f(d)$, but I didn't get how to solve things like $\sum_{d=0}^n f(d)$. What's the strategy when we face situations like this?
• Are you sure that your formulas are correct? If $b_k:=k$, then $a_0=0,a_1=1,a_2=3$, but for $c_n:=\sum_{k=0}^n(-1)^{k+n}a_k$ we have $c_0=0,c_1=1,c_2=2$, so $c_n\neq b_n$. – sranthrop Mar 6 '16 at 15:54
• $b_0=0,b_1=1,b_2=2$, and $c_0=0,c_1=1,c_2=2$ ??? Did I get it right? – SCaffrey Mar 7 '16 at 14:36
• I'm sorry, you need to go to $n=3$: $a_3=6$, $b_3=3$, but $c_3=4$. – sranthrop Mar 7 '16 at 16:19
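sranthrop's counterexample in the comments is easy to verify with a short script (my own sketch, not part of the original exchange):

```python
def forward(b):
    """a_n = sum_{k=0}^{n} b_k (running sums)."""
    out, s = [], 0
    for x in b:
        s += x
        out.append(s)
    return out

def alleged_inverse(a):
    """c_n = sum_{k=0}^{n} (-1)^(k+n) a_k, as in the problem statement."""
    return [sum((-1) ** (k + n) * a[k] for k in range(n + 1))
            for n in range(len(a))]

b = [0, 1, 2, 3]
a = forward(b)           # [0, 1, 3, 6]
c = alleged_inverse(a)   # [0, 1, 2, 4] -- c_3 = 4 differs from b_3 = 3
print(a, c)
```

So the two relations, exactly as written, are not equivalent; the counterexample fails first at n = 3, just as the comments say.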
|
{}
|
### Home > PC3 > Chapter 8 > Lesson 8.1.3 > Problem8-51
8-51.
Calculate the area of the triangle below.
Use the Law of Cosines to determine the measure of one of the angles.
Use the formula below to calculate the area.
$A=\frac{1}{2}ab\sin(\theta)$
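Since the original figure with the side lengths is not reproduced, here is a sketch of the suggested two-step method with made-up sides (3, 4, 5):

```python
import math

def triangle_area(a, b, c):
    """Law of Cosines gives the angle opposite side c; then A = (1/2) a b sin(theta)."""
    cos_theta = (a * a + b * b - c * c) / (2 * a * b)
    theta = math.acos(cos_theta)
    return 0.5 * a * b * math.sin(theta)

print(triangle_area(3, 4, 5))  # 6.0 for the 3-4-5 right triangle
```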
|
{}
|
# Converting a BAM file into VCF
I have NGS illumina RNA-seq reads from M. musculus (mm10). I am trying to find variants along the strand portion of the reads in the refseq (mm10).
I mapped a pair of sequence files and generated a BAM file. Now I need to generate a VCF file from this BAM file. I’ve checked Galaxy for direct tools, but I couldn’t identify one. Any suggestions on how to turn a BAM file into VCF? Thank you in advance.
• – gringer
Apr 7 '18 at 23:48
• The approach works with RNASeq reads as well. All it needs is sufficient coverage across mapped reads. Where coverage is too high, reads can be sub-sampled via digital normalisation to reduce high-coverage regions without compromising coverage for low-coverage regions.
– gringer
Oct 11 '18 at 22:02
It's not really possible to convert BAM to VCF. A BAM is a mapping file; it does not contain information about variants. That information needs to be inferred in a process called variant calling. I find it important to mention that a VCF is not just a different format of the same thing.
How to call variants (a VCF file) from mapped reads (a BAM file) is a very broad question: it depends on the sequencing technology (Illumina, PacBio, ...), the type of sequencing (whole-genome sequencing, RAD, RNA-seq, ...), the sampling (calling individuals vs. population variability), the quality of the reference (human vs. a de novo assembly made by a postdoc in your lab)... For a lot of cases the answer is simply GATK, partially because it has a well-written manual, but mainly because many people use it.
Some more detailed cases are well answered in more specific questions :
• Indeed, relevant to this question, GATK have a Variants from RNA-seq best practices. Or they used to, and now it seems to have disappeared. Oct 11 '18 at 10:37
• Honestly, I think good practices do not involve GATK. But it's true that GATK is what makes a review process much smoother (reviewers want GATK). Oct 11 '18 at 10:56
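To make the variant-calling process described above concrete, here is one commonly used command-line pipeline with bcftools (a sketch only, not the author's recommendation; `ref.fa` and `aln.sorted.bam` are placeholder file names, and RNA-seq data typically needs extra care, e.g. a splice-aware aligner):

```shell
# Index the reference and the coordinate-sorted BAM
samtools faidx ref.fa
samtools index aln.sorted.bam

# Pile up reads against the reference, then call SNVs/indels
bcftools mpileup -f ref.fa aln.sorted.bam | bcftools call -mv -Oz -o var.vcf.gz
bcftools index var.vcf.gz
```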
You can use Freebayes, provided you have your BAM file and the reference genome.
Example:
freebayes -f genome.fa aln.bam > var.vcf
I'd suggest you go through the whole github documentation, and choose the option(s) more suitable for your needs (ploidy, read support, etc.)
• Thanks. Unfortunately when I tried this approach I didn’t get any variant readings along the mapped portion of the refseq. Not sure why. I know there are a few variants. Apr 13 '18 at 19:58
• This answer would be more useful if it included a cautionary comment about how what the OP is asking about is not a format conversion, but an analytical process, ie variant calling, like Kamil S Jaron provided. In addition, the recommendation to "go through the whole github documentation" is unlikely to be satisfying given the OP is probably new to the field. Kamil's pointers to specific resources are much more helpful. Apr 23 '20 at 15:46
|
{}
|
Math Help - Finding the exact value of sin 3
1. Finding the exact value of sin 3
Question says if...
$cos 72 = \frac{\sqrt5 - 1}{4}$
then find the exact value of sin 3
How would one go about doing this?
2. Originally Posted by username11111
Question says if...
$cos 72 = \frac{\sqrt5 - 1}{4}$
then find the exact value of sin 3
How would one go about doing this?
Do you mean sin 72?
And I assume angles are being measured in degrees?
3. Originally Posted by mr fantastic
Do you mean sin 72?
And I assume angles are being measured in degrees?
No, it's cos 72.
...and yes, it's degrees.
4. Originally Posted by username11111
No, it's cos 72.
...and yes, it's degrees.
*Sigh* Didn't you notice what I highlighted in red? It means that I asked do you mean sin 72 rather than sin 3.
5. Originally Posted by mr fantastic
*Sigh* Didn't you notice what I highlighted in red? It means that I asked do you mean sin 72 rather than sin 3.
No. You're given the exact value of cos 72 and asked to find the exact value of sin 3.
6. Re: Finding the exact value of sin 3
Originally Posted by mr fantastic
Do you mean sin 72?
And I assume angles are being measured in degrees?
It realy is cos 72.
$\cos 72 = \sin 18$, and $\sin 18 = (5^{1/2}-1)/4$
7. Re: Finding the exact value of sin 3
$\sin 3 = \sin(18 - 15) = \sin 18 \cos 15 - \cos 18 \sin 15$
8. Re: Finding the exact value of sin 3
Start with \displaystyle \begin{align*} \cos{ \left( 72^{\circ} \right) } = \sin{ \left( 90^{\circ} - 72^{\circ} \right) } = \sin{\left(18^{\circ}\right)} \end{align*}, so \displaystyle \begin{align*} \sin{\left(18^{\circ}\right)} = \frac{\sqrt{5} - 1}{4} \end{align*}. We then have
\displaystyle \begin{align*} \cos{\left(18^{\circ}\right)} &= \sqrt{ 1 - \left[ \sin{ \left( 18^{\circ} \right) } \right]^2} \\ &= \sqrt{1 - \left(\frac{\sqrt{5} - 1}{4}\right)^2} \\ &= \sqrt{ 1 - \frac{5 - 2\sqrt{5} + 1}{16} } \\ &= \sqrt{ 1 - \frac{6 - 2\sqrt{5}}{16} } \\ &= \sqrt{ \frac{16 - \left(6-2\sqrt{5}\right)}{16} } \\ &= \frac{\sqrt{10 + 2\sqrt{5}}}{4} \end{align*}.
Now note that
\displaystyle \begin{align*} \sin{\left(3^{\circ}\right)} &= \sin{\left(18^{\circ} - 15^{\circ}\right)} \\ &= \sin{\left(18^{\circ}\right)}\cos{\left(15^{\circ} \right)} - \cos{\left(18^{\circ}\right)}\sin{\left(15^{\circ} \right)} \end{align*}
To evaluate \displaystyle \begin{align*} \sin{\left(15^{\circ}\right)} \end{align*} and \displaystyle \begin{align*} \cos{\left(15^{\circ}\right)} \end{align*}, apply the half angle formula to \displaystyle \begin{align*} \sin{\left(30^{\circ}\right)} \end{align*} and \displaystyle \begin{align*} \cos{\left(30^{\circ}\right)} \end{align*}. Good luck
9. Re: Finding the exact value of sin 3
$\cos 15 = \cos(60-45) = \cos 60 \cos 45 + \sin 60 \sin 45 = \left(\tfrac{1}{2}\right)\left(\tfrac{1}{\sqrt{2}}\right) + \left(\tfrac{\sqrt{3}}{2}\right)\left(\tfrac{1}{\sqrt{2}}\right) = \tfrac{1+\sqrt{3}}{2\sqrt{2}}$, which simplifies to $\tfrac{\sqrt{2}+\sqrt{6}}{4}$.
You can do the same for $\sin 15$.
10. Re: Finding the exact value of sin 3
$\sin 18 = (5^{1/2}-1)/4$
$\sin 15 = \sin(45-30) = \sin 45 \cos 30 - \cos 45 \sin 30$
$\cos 18$ ...
$\cos 15$ ...
11. Re: Finding the exact value of sin 3
The final answer looks like this:
sin 3 degrees =
( sqrt( 3 - sqrt(5) + ( 3sqrt(3) - sqrt(15) )/2 ) - sqrt( 5 + sqrt(5) - ( 5sqrt(3) + sqrt(15) )/2 ) )/4
12. Re: Finding the exact value of sin 3
Originally Posted by DanvanVuu
The final answer looks like this:
sin 3 degrees =
( sqrt( 3 - sqrt(5) + ( 3sqrt(3) - sqrt(15) )/2 ) - sqrt( 5 + sqrt(5) - ( 5sqrt(3) + sqrt(15) )/2 ) )/4
Here it is from latex.codecogs:
$$\frac{\sqrt{3 - \sqrt{5} + \frac{3\sqrt{3} - \sqrt{15}}{2}} - \sqrt{5 + \sqrt{5} - \frac{5\sqrt{3} + \sqrt{15}}{2}}}{4}$$
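A quick numerical check of this closed form (a Python sketch; it agrees with sin 3° to machine precision):

```python
import math

# The nested-radical expression for sin(3 degrees) from the thread
s5, s3, s15 = math.sqrt(5), math.sqrt(3), math.sqrt(15)
inner1 = 3 - s5 + (3 * s3 - s15) / 2
inner2 = 5 + s5 - (5 * s3 + s15) / 2
sin3_closed = (math.sqrt(inner1) - math.sqrt(inner2)) / 4

print(sin3_closed)                 # approximately 0.05233595624...
print(math.sin(math.radians(3)))   # the same value
```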
13. Re: Finding the exact value of sin 3
Write 72=45+(30-3) and apply the formulas for the cosine and the sine of a sum of two angles. This will give you an equation involving sin(3) and cos(3), to which you apply the Pythagorean theorem to express cos(3) in terms of sin(3); this will result in a second-order equation in sin(3), whose positive solution you seek.
|
{}
|
# Problems 5-1 through 5-2
5-1 Earned Value Calculation

You are 4 months into a 6-month project. The project is linear, which means that progress and spending occur at a constant rate. Our crack project team of highly skilled associates has worked diligently and put in extra hours to keep the project going. Our accounting department has provided the following data at the end of month 4:

- Actual cost to date = $88,800
- Planned expenditures to date = $101,000

The CFO is excited and has sent you an email congratulating you for being 12.07% under budget. However, is it really time to hold a team celebration? That would be fun, but your project manager mentality kicks in. Those numbers look good, but how are we ‘really’ doing? To understand the true project performance, we need to apply earned value techniques. The missing piece we need is Earned Value (i.e. what we have actually accomplished so far). You meet with your team and find that only 6 of the 7 tasks scheduled to be complete by the end of month 4 have actually been completed. Task 7 isn’t even started! This information gives you the final data you need to apply ‘Earned Value’ and develop an objective analysis.

1. What are the PV, EV, and AC for the project at the end of month 4?
2. What are the SV, CV, SPI, and CPI for the project?
3. Assess the project performance to date. Do you get to have the celebration?

5-2 Earned Value Calculation

Project schedule and budget data provided:

| Task | Duration (days) | Predecessor(s) | Budget |
|------|-----------------|----------------|---------|
| A | 2 | --- | $1,600 |
| B | 5 | --- | $4,000 |
| C | 3 | A | $14,050 |
| D | 4 | B | $5,800 |
| E | 6 | B | $12,000 |
| F | 4 | C, D | $5,200 |
| G | 3 | E | $3,900 |

Project Totals: $46,550

Status of the project at the end of day 6:

| Task | % Complete | Actual Cost of Work Performed |
|------|------------|-------------------------------|
| A | 100% | $1,800 |
| B | 100% | $4,500 |
| C | 90% | $13,500 |
| D | 10% | $500 |
| E | 0% | $0 |
| F | 0% | $0 |
| G | 0% | $0 |

Project network diagram (Gantt chart not reproduced here). Critical Path (CP) = B – E – G = 14 days.

1. Calculate the schedule variance for each task and the total project.
2. Calculate the cost variance for each task and the total project.
3. What is your assessment of the project at this time?
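For problem 5-1, the standard earned value formulas can be sketched in a few lines. Note one assumption the problem does not state explicitly: with a linear plan and 7 equally weighted tasks scheduled through month 4, EV is taken as (6/7) of PV.

```python
def earned_value_metrics(pv, ev, ac):
    """Standard earned value formulas: variances and performance indices."""
    sv = ev - pv          # schedule variance
    cv = ev - ac          # cost variance
    spi = ev / pv         # schedule performance index
    cpi = ev / ac         # cost performance index
    return sv, cv, spi, cpi

pv = 101_000.0            # planned expenditures at end of month 4
ac = 88_800.0             # actual cost to date
ev = pv * 6 / 7           # assumed: 6 of 7 equally weighted scheduled tasks complete

sv, cv, spi, cpi = earned_value_metrics(pv, ev, ac)
print(f"EV={ev:,.2f}  SV={sv:,.2f}  CV={cv:,.2f}  SPI={spi:.3f}  CPI={cpi:.3f}")
```

Under this reading the project is behind schedule (SPI < 1) and, despite the CFO's email, slightly over cost for the work actually performed (CPI < 1).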
|
{}
|
1. ## Notation again again
I do not know how to search this but does
$C^{\infty}$ mean that is differentiable any number of times, or something?
Any help appreciated.
2. Originally Posted by Mathstud28
I do not know how to search this but does
$C^{\infty}$ mean that is differentiable any number of times, or something?
Any help appreciated.
Let $I$ be an open interval. The notation $\mathcal{C}^k(I)$ means a function differentiable $k$ times on $I$ such that $f^{(k)}$ is a continuous function on $I$. However, if the function is infinitely differentiable, i.e. it is an element of $\mathcal{C}^{k}$ for any $k\geq 1$, then we say it is $\mathcal{C}^{\infty}(I)$. For example, $\exp ,\sin , \cos \in \mathcal{C}^{\infty}(\mathbb{R})$. While $\log \in \mathcal{C}^{\infty}(0,\infty)$.
3. Originally Posted by ThePerfectHacker
Let $I$ be an open interval. The notation $\mathcal{C}^k(I)$ means a function differentiable $k$ times on $I$ such that $f^{(k)}$ is a continuous function on $I$. However, if the function is infinitely differentiable, i.e. it is an element of $\mathcal{C}^{k}$ for any $k\geq 1$, then we say it is $\mathcal{C}^{\infty}(I)$. For example, $\exp ,\sin , \cos \in \mathcal{C}^{\infty}(\mathbb{R})$. While $\log \in \mathcal{C}^{\infty}(0,\infty)$.
Thanks TPH! Always answering my notation questions .
One question though, I understand that $e^x,\cos(x),\text{etc.}$ are $C^{\infty}\left(\mathbb{R}\right)$
But since $C^{\text{whatever}}$ seems to only be talking about at least the first derivative, wouldn't that mean that the $I$ specified for $\ln(x)$ should be $\mathbb{R}\setminus\left\{0\right\}$ since $\ln^{\left(n\in\mathbb{Z^+}\right)}(x)$ is continuous on that set?
Oh wait, I looked back and the interval $I$ is specified in relation to the original function, not the derivatives. Ok so I get it.
Thank you.
EDIT: Actually two more things. The first is obviously that
If $p(x)$ is a non-descript polynomial then it is $C^{\infty}$ Right?
Also how would you say that some function is this on some interval, in other words how would you put this notation down so it makes sense.
Like this?
" $f(x)$ is $C^{\infty}(a,b)$"?
4. Originally Posted by Mathstud28
But since $C^{\text{whatever}}$ seems to only be talking about at least the first derivative, wouldn't that mean that the $I$ specified for $\ln(x)$ should be $\mathbb{R}\setminus\left\{0\right\}$ since $\ln^{\left(n\in\mathbb{Z^+}\right)}(x)$ is continuous on that set?
It does not need to be an open interval it can be an open set. But since open sets are concepts from topology I did not want to mention them. An open set is a generalization of an open interval. For example, $\mathbb{R} - \{ 0 \} = (-\infty,0)\cup (0,\infty)$ is an open set. Basically an open set is such a set that has no boundary. More formally $S$ is open iff for any $x\in S$ there is $\epsilon > 0$ such that $(x-\epsilon,x+\epsilon) \subset S$. With open sets we can say that $\ln |x|$ is $\mathcal{C}^{\infty}(\mathbb{R} - \{ 0\})$.
If $p(x)$ is a non-descript polynomial then it is $C^{\infty}$ Right?
A polynomial is infinitely differentiable so it is $\mathcal{C}^{\infty}$.
Also how would you say that some function is this on some interval, in other words how would you put this notation down so it makes sense.
Like this?
" $f(x)$ is $C^{\infty}(a,b)$"?
That is exactly how we say it.
As an illustration of this notation we can state a stronger version of the Fundamental Theorem of Calculus. Let $f$ be continuous on $[a,b]$ and $\mathcal{C}^k(a,b)$. Define $F(x) = \smallint_a^x f$. Then $F$ is $\mathcal{C}^{k+1}(a,b)$.
Analysts like to say that integration "smoothens out a function". The theorem above is what this is all about.
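A small example of this gain in smoothness (my own illustration, not part of the original post): take $f(x)=|x|$ on $[-1,1]$.

```latex
% f(x) = |x| is continuous (C^0) on [-1,1] but not differentiable at 0.
% Its integral gains exactly one degree of smoothness:
\[
F(x) = \int_{-1}^{x} |t|\,dt = \frac{x\,|x| + 1}{2},
\qquad
F'(x) = |x| \ \text{is continuous},
\]
% so F is C^1(-1,1), yet F''(0) does not exist, hence F is not C^2.
```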
5. Originally Posted by ThePerfectHacker
It does not need to be an open interval; it can be an open set. But since open sets are concepts from topology I did not want to mention them. An open set is a generalization of an open interval. For example, $\mathbb{R} - \{ 0 \} = (-\infty,0)\cup (0,\infty)$ is an open set. Basically an open set is a set that contains none of its boundary points. More formally $S$ is open iff for any $x\in S$ there is $\epsilon > 0$ such that $(x-\epsilon,x+\epsilon) \subset S$. With open sets we can say that $\ln |x|$ is $\mathcal{C}^{\infty}(\mathbb{R} - \{ 0\})$.
A polynomial is infinitely differentiable so it is $\mathcal{C}^{\infty}$.
That is exactly how we say it.
As an illustration of this notation we can state a stronger version of the Fundamental Theorem of Calculus. Let $f$ be continuous on $[a,b]$ and $\mathcal{C}^k(a,b)$. Define $F(x) = \smallint_a^x f$. Then $F$ is $\mathcal{C}^{k+1}(a,b)$.
Analysts like to say that integration "smoothens out a function". The theorem above is what this is all about.
Ok, amazing, I completely understand that.
Like the last part: since $F'(x)=\frac{d}{dx}\int_a^{x}f(t)dt=f(x)$ by the fundamental theorem of calculus, and we have stated that $f(x)\quad\text{is}\quad{C^{k}}$,
$F(x)$ inherits all $k$ orders of differentiability of $f(x)$ and gains one more, since differentiating the integral takes us back to $f(x)$!
Thanks TPH
6. Originally Posted by Mathstud28
Also how would you say that some function is this on some interval, in other words how would you put this notation down so it makes sense.
Like this?
" $f(x)$ is $C^{\infty}(a,b)$"?
Yes, but also:
$f \in C^{\infty}(a,b)$
RonL
7. Originally Posted by CaptainBlack
Yes, but also:
$f \in C^{\infty}(a,b)$
RonL
Thanks captain black, I want to be able to write stuff with as few words as possible (no sarcasm).
But by the way, this got me thinking, this notating would be nice to say things like,
"Let f be a eight times differentiable function" So instead of that could you say
"Let $f(x)\in{C^{k\geq{8}}}$ "
Also, can it apply to functions where $\mathbb{R}^{n>1}\to\mathbb{R}$
Can I say
$C_x^{\infty}\left(\mathbb{R}\right)$?
Or would no one know what that means, haha, I think I am inventing notation here.
8. Originally Posted by Mathstud28
Thanks captain black, I want to be able to write stuff with as few words as possible (no sarcasm).
But by the way, this got me thinking, this notating would be nice to say things like,
"Let f be a eight times differentiable function" So instead of that could you say
"Let $f(x)\in{C^{k\geq{8}}}$ "
Since:
$C^8 \supset C^9 \supset C^{10} \supset ...$
there is no need for this notation.
Also $C^8$ and the set of all functions eight times differentiable (on $(a,b)$ or whatever) are not the same thing since $C^8$ is the set of all $8$ times differentiable functions with continuous $8$-th derivative.
RonL
9. Originally Posted by Mathstud28
...
But by the way, this got me thinking, this notating would be nice to say things like,
"Let f be a eight times differentiable function" So instead of that could you say
"Let $f(x)\in{C^{k\geq{8}}}$ "
not really, because there is another condition: $f^{(8)}(x)$ must also be continuous.
so i think, $f(x)\in{C^{k\geq{8}}}$ would mean $f^{(k\geq8)}(x)$ must also be continuous. so why don't you take the biggest $k$ such that $f^{(k)}(x)$ is continuous and $f$ is $k$ times differentiable..
Originally Posted by Mathstud28
Also, can it apply to functions where $\mathbb{R}^{n>1}\to\mathbb{R}$
yes, like what TPH said, it can be defined in any open set, just define the open set in $\mathbb{R}^{n}$
Originally Posted by Mathstud28
Can I say
$C_x^{\infty}\left(\mathbb{R}\right)$?
Or would no one know what that means, haha, I think I am inventing notation here.
haven't seen this notation before..
10. Originally Posted by CaptainBlack
Since:
$C^8 \supset C^9 \supset C^{10} \supset ...$
there is no need for this notation.
Also $C^8$ and the set of all functions eight times differentiable (on $(a,b)$ or whatever) are not the same thing since $C^8$ is the set of all $8$ times differentiable functions with continuous $8$-th derivative.
RonL
So what you're saying is that since for a function to be $f(x)\in{C^{k\geq{8}}}$ it must be $f(x)\in{C^8}$; in other words all functions of tenth-order differentiability are a subset of eighth order or whatever.
So you're saying that $f(x)\in{C^{n}}$ describes all functions which are differentiable at least $n$ times, so this would include all functions that are differentiable $n+1,n+2,n+3,\cdots$ times?
11. if $f \in C^{n}$, then $f, f', f'', ..., f^{(n)}$ exists and $f^{(n)}$ is continuous..
$f \in C^{n+1}$ means $f, f', f'', ..., f^{(n)}, f^{(n+1)}$ exists and $f^{(n+1)}$ is continuous..
now, let $g \in C^{n+1}$. therefore, $g^{(n)}$ exists. $g^{(n)}$ must be continuous otherwise, $g^{(n)}$ will not be differentiable, hence $g^{(n+1)}$ will not exist.
therefore, $g \in C^{n}$, that is $C^{n+1} \subset C^{n}$
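To see that each inclusion is strict, a standard example (my addition, not from the thread) is $f_n(x) = x^n\,|x|$:

```latex
% f_n(x) = x^n |x| = sign(x) x^{n+1}; repeated differentiation gives
% f_n^{(n)}(x) = (n+1)! |x|, which is continuous, so f_n \in C^n(\mathbb{R});
% but |x| has no derivative at 0, so f_n \notin C^{n+1}(\mathbb{R}).
\[
f_n(x) = x^{n}\,|x|, \qquad f_n^{(n)}(x) = (n+1)!\,|x|,
\qquad f_n \in C^{n}(\mathbb{R}) \setminus C^{n+1}(\mathbb{R}).
\]
```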
12. Originally Posted by kalagota
if $f \in C^{n}$, then $f, f', f'', ..., f^{(n)}$ exists and $f^{(n)}$ is continuous..
$f \in C^{n+1}$ means $f, f', f'', ..., f^{(n)}, f^{(n+1)}$ exists and $f^{(n+1)}$ is continuous..
now, let $g \in C^{n+1}$. therefore, $g^{(n)}$ exists. $g^{(n)}$ must be continuous otherwise, $g^{(n)}$ will not be differentiable, hence $g^{(n+1)}$ will not exist.
therefore, $g \in C^{n}$, that is $C^{n+1} \subset C^{n}$
So the operative phrasing here would be that $f(x)\in{C^{n}}$ implies that $f(x)$ is differentiable at least $n$ times, not at most.
13. yes.
14. Originally Posted by Mathstud28
So what you're saying is that since for a function to be $f(x)\in{C^{k\geq{8}}}$ it must be $f(x)\in{C^8}$; in other words all functions of tenth-order differentiability are a subset of eighth order or whatever.
So you're saying that $f(x)\in{C^{n}}$ describes all functions which are differentiable at least $n$ times, so this would include all functions that are differentiable $n+1,n+2,n+3,\cdots$ times?
Something differentiable $m$ times with continuous $m$-th derivative is obviously continuously differentiable $n$ times for all $n \le m$, so $C^{m} \subset C^n, \ \forall n \le m.$
RonL
15. I will give two examples at my attempt to use this notation and please tell me if it is correct.
Say we are talking about L'hopital's and we are saying
" Let $\lim_{x\to{c}}\frac{f(x)}{g(x)}$ be indeterminate because either $\lim_{x\to{c}}f(x)=\lim_{x\to{c}}g(x)=0$ or $\lim_{x\to{c}}f(x)=\lim_{x\to{c}}g(x)=\infty$
Furthermore let $f(x)\in\mathcal{C}^1(c)$ and $g(x)\in\mathcal{C}^1(c)$ and then blah blah blah"
And in Rolle's Theorem
"Let $f(x)$ be continuous on $[a,b]$ and $f(x)\in\mathcal{C}^1(a,b)$ then ...."
Did I use it right?
# Monotonic Cubic Spline interpolation (with some Rust)
Monotonic Cubic Spline interpolation (MCSI) is a popular and useful method which fits a smooth, continuous function through discrete data. MCSI has several applications in the fields of computer vision and trajectory fitting. MCSI further guarantees monotonicity of the smoothed approximation, something which a cubic spline approximation alone cannot guarantee. In this post I’ll show how to implement the method developed by F. N. Fritsch and R. E. Carlson [Fritsch2005] in the Rust programming language.
## Rust
Why Rust? Admittedly, this is the kind of solution simple enough to be implemented in practically any programming language we can think of. However, I do find that the best way to get acquainted with a new language and its concepts is precisely to implement a simple and well-known solution. Although this post does not intend to be an introduction to the Rust language, some of the fundamentals will be presented as we go along.
Idiomatic Rust Object-Oriented Programming (OOP) has several characteristics which differ significantly from “traditional” OOP languages. Rust achieves data and behaviour encapsulation by means of defining data structure blueprints (called struct) and then defining their behaviour through a concrete implementation (with impl). As an example, a simple “class” Foo would consist of:
struct Foo {
}
impl Foo {
fn new() -> Foo {
return Foo {};
}
fn method(&mut self) {}
fn static_method() {}
}
pub fn main() {
let mut f = Foo::new();
f.method();
Foo::static_method();
}
The “constructor” is typically defined as new(), but any “static” method which returns an initialised struct can act as a constructor, and “object” methods include the passing of the self instance, not unlike languages such as Python. The &mut self refers to exclusive access to self and is not directly related to the mut mutability keyword. These concepts touch on Rust's borrowing and ownership model which, unfortunately, is way beyond the scope of this blog post. A nice introduction is provided by the "Rust programming book" available here.
Our implementation aims at building a MCSI class MonotonicCubicSpline by splitting the algorithm into the slope calculation at construction time, a Hermite interpolation function and a partial application function generator. This will follow the general structure
pub struct MonotonicCubicSpline {
m_x: Vec<f64>,
m_y: Vec<f64>,
m_m: Vec<f64>
}
impl MonotonicCubicSpline {
pub fn new(x : &Vec<f64>, y : &Vec<f64>) -> MonotonicCubicSpline {
// …
}
pub fn hermite(point: f64, x : (f64, f64), y: (f64, f64), m: (f64, f64)) -> f64 {
// …
}
pub fn interpolate(&mut self, point : f64) -> f64 {
// …
}
fn partial(x: Vec<f64>, y: Vec<f64>) -> impl Fn(f64) -> f64 {
// …
}
}
Vec is a vector, a typed growable collection available in Rust's standard library with documentation available here.
## Monotonic Cubic Splines
MCSI hinges on the concept of cubic Hermite interpolators. The cubic Hermite interpolation on a generic interval $(x_k,x_{k+1})$, written in terms of the unit-interval parameter $t$, is
$$p(x)=p_k h_{00}(t)+ h_{10}(t)(x_{k+1}-x_k)m_k + h_{01}(t)p_{k+1} + h_{11}(t)(x_{k+1}-x_{k})m_{k+1}.$$
The $h_{\star}$ functions are usually called the Hermite basis functions in the literature and here we will use the factorised forms of:
\begin{aligned} h_{00}(t) &= (1+2t)(1-t)^2 \\ h_{10}(t) &= t(1-t)^2 \\ h_{01}(t) &= t^2 (3-2t) \\ h_{11}(t) &= t^2 (t-1). \end{aligned}
This can be rewritten as
$$p(x) = (p_k(1 + 2t) + \Delta x_k m_k t)(1-t)(1-t) + (p_{k+1} (3 -2t) + \Delta x_k m_{k+1} (t-1))t^2$$
where
\begin{aligned} h = \Delta x_k &= x_{k+1} - x_k \\ t &= \frac{x-x_k}{h}. \end{aligned}
This associated Rust method is the above mentioned “static” MonotonicCubicSpline::hermite():
pub fn hermite(point: f64, x : (f64, f64), y: (f64, f64), m: (f64, f64)) -> f64 {
let h = x.1 - x.0;
let t = (point - x.0) / h;
return (y.0 * (1.0 + 2.0 * t) + h * m.0 * t) * (1.0 - t) * (1.0 - t)
+ (y.1 * (3.0 - 2.0 * t) + h * m.1 * (t - 1.0)) * t * t;
}
where the tuples correspond to $x \to (x_k, x_{k+1})$, $y \to (y_k, y_{k+1})$ and $m \to (m_k, m_{k+1})$.
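As a quick sanity check (my own, with arbitrary sample values, not part of the original post), the interpolant must reproduce the endpoints: at $t=0$ only $h_{00}$ is non-zero, and at $t=1$ only $h_{01}$ is:

```rust
fn hermite(point: f64, x: (f64, f64), y: (f64, f64), m: (f64, f64)) -> f64 {
    let h = x.1 - x.0;
    let t = (point - x.0) / h;
    (y.0 * (1.0 + 2.0 * t) + h * m.0 * t) * (1.0 - t) * (1.0 - t)
        + (y.1 * (3.0 - 2.0 * t) + h * m.1 * (t - 1.0)) * t * t
}

fn main() {
    let (x, y, m) = ((0.0, 2.0), (1.0, 4.0), (0.5, 0.5));
    // At the left endpoint (t = 0) the interpolant equals y_k ...
    assert!((hermite(0.0, x, y, m) - 1.0).abs() < 1e-12);
    // ... and at the right endpoint (t = 1) it equals y_{k+1}.
    assert!((hermite(2.0, x, y, m) - 4.0).abs() < 1e-12);
}
```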
For a series of data points $(x_k, y_k)$ with $k=1,\dots,n$ we then calculate the slopes of the secant lines between consecutive points, that is:
$$\Delta_k = \frac{\Delta y_{k}}{\Delta x_k}\qquad, \text{for}\ k=1,\dots,n-1$$
with $\Delta y_k = y_{k+1}-y_k$ and $\Delta x_k$ as defined previously.
Since the data is represented by the vectors x : Vec<f64> and y : Vec<f64> we implement this in the “constructor”:
let mut secants = vec![0.0 ; n - 1];
let mut slopes = vec![0.0 ; n];
for i in 0..(n-1) {
let dx = x[i + 1] - x[i];
let dy = y[i + 1] - y[i];
secants[i] = dy / dx;
}
The next step is to average the secants in order to get the tangents, such that
$$m_k = \frac{\Delta_{k-1}+\Delta_k}{2}\qquad \text{for}\ k=2,\dots,n-1.$$
This is achieved by the code:
slopes[0] = secants[0];
for i in 1..(n-1) {
slopes[i] = (secants[i - 1] + secants[i]) * 0.5;
}
slopes[n - 1] = secants[n - 2];
By definition, we want to ensure monotonicity of the interpolated points, but to guarantee this we must prevent the interpolation spline from straying too far from the control points. If we define $\alpha_k$ and $\beta_k$ as
\begin{aligned} \alpha_k &= \frac{m_k}{\Delta_k} \\ \beta_k &= \frac{m_{k+1}}{\Delta_k}, \end{aligned}
to ensure the monotonicity of the interpolation we can impose the following constraint on the above quantities:
$$\phi(\alpha, \beta) = \alpha - \frac{(2\alpha+\beta-3)^2}{3(\alpha+\beta-2)}\geq 0,$$
which holds, in particular, whenever
$$\alpha + 2\beta - 3 \leq 0 \quad\text{or}\quad 2\alpha+\beta-3 \leq 0.$$
Typically the vector $(\alpha_k, \beta_k)$ is restricted to a circle of radius 3; whenever
$$\alpha^2_k + \beta_k^2>9,$$
both tangents are rescaled by setting
$$m_{k} = t\alpha_k\Delta_k \qquad\text{and}\qquad m_{k+1} = t\beta_k\Delta_k,$$
where
\begin{aligned} h &= \sqrt{\alpha^2_k + \beta^2_k} \\ t &= \frac{3}{h}. \end{aligned}
One of the ways in which Rust implements polymorphism is through method dispatch. The f64 primitive provides a shorthand for the quantity $\sqrt{\alpha^2_k + \beta^2_k}$ as $\alpha.\text{hypot}(\beta)$. The relevant Rust code will then be:
for i in 0..(n-1) {
if secants[i] == 0.0 {
slopes[i] = 0.0;
slopes[i + 1] = 0.0;
} else {
let alpha = slopes[i] / secants[i];
let beta = slopes[i + 1] / secants[i];
let h = alpha.hypot(beta);
if h > 3.0 {
let t = 3.0 / h;
slopes[i] = t * alpha * secants[i];
slopes[i + 1] = t * beta * secants[i];
}
}
}
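With the slopes computed, the interpolate method from the skeleton can be filled in by locating the bracketing interval and delegating to hermite. The following is my own sketch in free-function form (the post links to the full code; clamping out-of-range queries to the boundary values is one possible choice):

```rust
fn hermite(point: f64, x: (f64, f64), y: (f64, f64), m: (f64, f64)) -> f64 {
    let h = x.1 - x.0;
    let t = (point - x.0) / h;
    (y.0 * (1.0 + 2.0 * t) + h * m.0 * t) * (1.0 - t) * (1.0 - t)
        + (y.1 * (3.0 - 2.0 * t) + h * m.1 * (t - 1.0)) * t * t
}

// Clamp out-of-range queries, find the bracketing interval,
// then evaluate the Hermite interpolant on that interval.
fn interpolate(x: &[f64], y: &[f64], m: &[f64], point: f64) -> f64 {
    let n = x.len();
    if point <= x[0] { return y[0]; }
    if point >= x[n - 1] { return y[n - 1]; }
    // Linear scan; a binary search would also work for many control points.
    let mut i = 0;
    while x[i + 1] < point { i += 1; }
    hermite(point, (x[i], x[i + 1]), (y[i], y[i + 1]), (m[i], m[i + 1]))
}

fn main() {
    // With unit slopes on y = x the spline reproduces the line exactly.
    let (x, y, m) = ([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], [1.0, 1.0, 1.0]);
    assert!((interpolate(&x, &y, &m, 0.5) - 0.5).abs() < 1e-12);
    assert!((interpolate(&x, &y, &m, 1.5) - 1.5).abs() < 1e-12);
    assert_eq!(interpolate(&x, &y, &m, -3.0), 0.0); // clamped to the left
}
```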
We are now able to define a “smooth function” generator using MCSI. We generate a smooth function $g(\cdot)$ given a set of $(x_k, y_k)$ points, such that
$$f(x_k, y_k, p) \to g(p).$$
## Partial application
Before anything, it is important to recall the difference between partial application and currying, since the two are (incorrectly) used interchangeably quite often. Function currying allows us to factor functions with multiple arguments into a chain of single-argument functions, that is
$$f(x, y, z) = h(x)(y)(z)$$
The concept is prevalent in functional programming, since its initial formalisation [Curry1958]. Partial application, however, generally aims at using an existing function, conditioned on some of its arguments, as a basis to build functions of reduced arity. This is useful here since ultimately we want to create a smooth, continuous function based on the control points $(x_k, y_k)$. The partial application implementation is done in Rust as
pub fn partial(x: Vec<f64>, y: Vec<f64>) -> impl Fn(f64) -> f64 {
move |p| {
let mut spline = MonotonicCubicSpline::new(&x, &y);
spline.interpolate(p)
}
}
An example of how to generate a concrete smoothed continuous function from a set of control points can be:
let x = vec![0.0, 2.0, 3.0, 10.0];
let y = vec![1.0, 4.0, 8.0, 10.5];
let g = partial(x, y);
// calculate an interpolated point
let point = g(0.39);
The full code can be found here.
## References
[Fritsch2005] Fritsch, F. N., & Carlson, R. E. (2005). Monotone Piecewise Cubic Interpolation. SIAM Journal on Numerical Analysis. https://doi.org/10.1137/0717021 🔝
[Curry1958] Curry, Haskell; Feys, Robert (1958). Combinatory logic. I (2 ed.). Amsterdam, Netherlands: North-Holland Publishing Company. 🔝
## anonymous 3 years ago .
1. anonymous
b
2. Hero
@Luis_Rivera is very good at not giving answers
3. Hero
4. Hero
I'm not tellin :P
5. anonymous
yep cause you dont know
6. Hero
Actually, I do
7. anonymous
"ya sure you do"
8. Hero
By the way, the expression can also be written as $\frac{d}{2} - 10$
9. anonymous
@hero i know
10. anonymous
lol @Luis_Rivera
11. Hero
I wonder if you could come back here later when your teacher tells you b is incorrect.
12. anonymous
FYI i dont think its b i think its something else
13. Hero
Actually, I'll give you the correct expression for b: $\frac{d - 10}{2}$
14. Hero
So there's no way b is the correct answer choice.
15. Hero
@Luis_Rivera, she already stated that she doesn't believe the answer is b.
16. anonymous
I think its A
17. Hero
That's correct.
18. anonymous
I Knew it!! Thanks..
# Upgrades to PA, speakers specifically
#### curtis73
##### Well-Known Member
Hey, all. Just had something kinda sprung on me. Skip to the bottom if you don't want the details. When I took over as TD here 7 years ago, we had very little in the way of sound. Over the years I have taken donations, traded stuff, and I've been able to twist wires together to make random things happen with a PA of sorts. It's a black box, 50x60. I currently have a rack with three amps, one of which is a pretty nice Yamaha 700w x 2. That amp is currently driving 4 Yamaha 10" floor wedges that I have totally sketch hung in a cluster in the center of the grid on swivels which can be spun depending on the seating configuration. Two of them have blown horns which for now I'm just EQing out. I also have a Yorkville 300w sub to round out lows. The quick learning part is that the boss just asked me to spec out what I want for a grant, and speakers are one of the things. Since I'm headed into tech for two shows, I don't have a ton of time to sit and research the latest and greatest.
TL;DR... looking for recommendations for 4 passive PA speakers (10" should be plenty, 8" if they reach low, 12" considered) to be hung in a 50x60 blackbox for main program PA. Wide (left/right) waveguides important and short-ish (up/down) waveguides are a plus since they hang directly over the center of the stage. Will be driven 2 parallel x 2 from a 700w x 2 Yamaha amp. Emphasis should be on bulletproof to last a long time and modest expense. Sound quality is important, but the black box is acoustically strange, so fortunately I'm a whiz with EQs. Thoughts on good choices?
Pic of the space for reference:
#### Attachments
• speaker cluster.jpg
#### microstar
##### Well-Known Member
Having worked for long time in nearly identically-sized black box, and assuming you do plays and musicals, have you considered making the 4 speakers portable with a yoke for tilting and a C-clamp or similar mounting device so you could place them within the grid where needed for each production? We had 4 amp channels with a separate feed for each speaker. The space was configured at times as an end stage, thrust, arena, tennis-court, and a few other configurations and the speakers moved to where needed for SFX and music. You could still do a cluster in the center of the grid if desired. We never used wireless mics there, but could have with proper speaker placement. Cabling wasn't really an issue as you are not talking about huge cable runs. Just a thought.
#### RonHebbard
##### Well-Known Member
Hey, all. Just had something kinda sprung on me. Skip to the bottom if you don't want the details. When I took over as TD here 7 years ago, we had very little in the way of sound. Over the years I have taken donations, traded stuff, and I've been able to twist wires together to make random things happen with a PA of sorts. It's a black box, 50x60. I currently have a rack with three amps, one of which is a pretty nice Yamaha 700w x 2. That amp is currently driving 4 Yamaha 10" floor wedges that I have totally sketch hung in a cluster in the center of the grid on swivels which can be spun depending on the seating configuration. Two of them have blown horns which for now I'm just EQing out. I also have a Yorkville 300w sub to round out lows. The quick learning part is that the boss just asked me to spec out what I want for a grant, and speakers are one of the things. Since I'm headed into tech for two shows, I don't have a ton of time to sit and research the latest and greatest.
TL;DR... looking for recommendations for 4 passive PA speakers (10" should be plenty, 8" if they reach low, 12" considered) to be hung in a 50x60 blackbox for main program PA. Wide (left/right) waveguides important and short-ish (up/down) waveguides are a plus since they hang directly over the center of the stage. Will be driven 2 parallel x 2 from a 700w x 2 Yamaha amp. Emphasis should be on bulletproof to last a long time and modest expense. Sound quality is important, but the black box is acoustically strange, so fortunately I'm a whiz with EQs. Thoughts on good choices?
Pic of the space for reference:
@curtis73 Have you given any serious consideration to powered speakers?
A disadvantage is having to run power to them as well as signal.
A major plus is the manufacturer has precision matched the amps, to the crossovers, to the drivers and included as much protection for their drivers as practical. You should be able to find models which are designed / approved for overhead rigging with properly rated rigging points built in. There are models which accept balanced line levels only as well as models with built in mic preamps.
You could keep your previous system at stage level and / or use it for limited outdoor use.
Toodleoo!
Ron Hebbard
#### macsound
##### Well-Known Member
First - choosing speakers is somewhat like choosing a car. It's probably impossible to find the 100% "right" choice. Do you have a small garage? Probably shouldn't get a Hummer. 6 kids, don't get a compact car. Otherwise, it's nuance, features, warranty, industry standing...
Anyway, my first suggestion is: don't discount powered speakers. I assume you're doing so because there isn't power up there.
If you're getting a grant, including infrastructure like a plug and some line cables to feed the cabinet isn't outside the price envelope of a grant, especially if you're not planning on specing something expensive like Meyer or D&B.
#### curtis73
##### Well-Known Member
If I had clean power up there, I'd consider powered speakers. I also only have two legs at 100A each to play with, which is modestly enough power since all the lighting fixtures are LED, but everything shares those two legs - everything. Audio, theatrical lighting, work lights, outlets, pit instruments, house lights, lobby lights, the fridge for concessions, the fridge for the green room, the air compressor in the shop... everything. I'm constantly playing with ground lifts depending on the configuration.
The cluster also hangs in its own "mini" grid above the lighting grid and the only reason I was able to get them there in the first place was with a borrowed scissor lift. They kinda need to be passive because they are so inaccessible. They are permanently there unless I rent equipment to reach them. The only access I have is setting up my tallest ladder at which point the only thing I can do is reach the bottom to rotate them on their hanging swivels. If I had access to the whole thing, no worries. But I don't. I would rather keep them passive and do all my config on the deck.
The grant will be written for $15k, which means (per our history from this grantor) we'll get $7500 if it's awarded at all. Infrastructure of this magnitude is capital campaign territory. That money needs to be spent on actual equipment. Infrastructure stuff at this point would be blowing the wad on a Mercedes and then not having money for gas to drive it. Buying speakers at this point is like buying gasoline for the trusty Camry. The grant bucket list includes a digital desk and iPad, 5 source4s, 8 colorsource pars, a real lighting grid instead of the mix of truss and sched 40 hanging from 3/16" wire rope, 8 nodes of cat6 with a patch panel, a working Mac with Qlab instead of an ancient windows laptop for sound cues.... ad infinitum. Long story short, I have $75,000 worth of wants and I have to figure out what I NEED right now with whatever we're awarded. Speakers are a top priority because what I have is crap.
TL;DR - Capital campaign is in committee for some time in the next 3-4 years, at which point I'll have speakers which I can continue to use, or liquidate in favor of powered speakers, but for now, no powered speakers. Passive only.
#### Lextech
##### Well-Known Member
My favorite utility passive speaker is the JBL SRX712M. You can get a yoke for it so hanging is safe and easy. It is wonderful as a wedge, works great with a sub, and is rather light, weight-wise. At 700 watts you are a tad light power-wise, but should be fine as long as you remember that you have smaller amps. For a black box they would be my first choice if I had to go passive.
#### curtis73
##### Well-Known Member
I like the numbers on those JBLs. I'll look and see if I can find any NOS. Looks like they're out of production. Fortunately I never get into SPLs where under-watting shouldn't be an issue.
#### JohnD
##### Well-Known Member
As far as that JBL monitor speaker, yes, it was quite good. After it was discontinued more units "Magically" re-appeared in the warehouse. This went on for quite a while. @TimMc mentioned on another forum that that stopped about 6 years ago. Do be careful since that JBL range is often available as bootlegs from China.
You might take a look at the lower end of both JBL and Yamaha. There is the JRX series from JBL and Yamaha has the BR and ClubV series. Often the tradeoff is that they are made from MDF rather than plywood so they might not hold up as well with rough handling.
#### curtis73
##### Well-Known Member
Fortunately these will hang in the grid until they rot, so they won't get beat up. The Yamahas I have up there now are particle board and they've been... um... reinforced significantly and double safety-ed before suspending them over humans.
#### mbrown3039
##### Well-Known Member
The Renkus-Heinz C-series is another good choice for your needs (both spec-and money-wise). Made in SoCal and currently shipping.
#### Ken Summerall Jr
##### Member
I would also recommend taking a look at what EAW has to offer. They may be a little more expensive than JBL and Yamaha (or maybe not) but they sound great. They will also help you determine exactly what you need for your space. You can go to their website and submit drawings and specs and your expectations and they can send you a recommendation along with EASE drawings.
#### Ben Stiegler
##### Well-Known Member
ummm ... you can't EQ your way around blown horns without significantly stressing other components, or still losing a lot of fidelity.
QSC makes excellent affordable passive cabinet speakers as well
and just reading between the lines above, please please don't hang any speaker cabinet that wasn't designed at the factory for being rigged, and use only drop-forged hardware for rigging points on the cabinets. I say this from having known none of this when i started decades ago, and having done some installations which today make me cringe. there's a lot to learn about rigging - dynamic loads vs. static weight, etc. and to avoid the sort of lawsuit that puts your theater out of business - please don't skimp on safety nor having someone involved who is competent to rig speakers. That may be you!
#### curtis73
##### Well-Known Member
ummm ... you can't EQ your way around blown horns without significantly stressing other components, or still losing a lot of fidelity.
QSC makes excellent affordable passive cabinet speakers as well
and just reading between the lines above, please please don't hang any speaker cabinet that wasn't designed at the factory for being rigged, and use only drop-forged hardware for rigging points on the cabinets. I say this from having known none of this when i started decades ago, and having done some installations which today make me cringe. there's a lot to learn about rigging - dynamic loads vs. static weight, etc. and to avoid the sort of lawsuit that puts your theater out of business - please don't skimp on safety nor having someone involved who is competent to rig speakers. That may be you!
I hear you and don't disagree. I'm a proficient rigger and (while not specifically rated), they are safe. They hang from 3/8" forged shackles with double 1/8" 7x19 safeties. The speakers themselves have been internally trussed and epoxied, so while they aren't legal, I'm quite confident they won't come down. They're a temporary solution to get noises in the air.
And while I can't EQ for bad horns, what I did for now was EQ out the buzzing 2000 hz in the two with blown horns and boosted it a bit in the other two so it doesn't sound awful. Only works when I have one-sided seating, though. Just don't move your head or you get really fun phasing
#### Lextech
##### Well-Known Member
As my local dealer still has stock, I did not realize they were discontinued again. If you need a phone number just PM me.
#### almorton
##### Well-Known Member
while they aren't legal, I'm quite confident they won't come down
That's a "courageous" thing to say in a public forum.
# Math Help - what am i doing wrong?
1. ## what am i doing wrong?
Find the present value of an annuity that pays $6000 at the end of each 6-month period for 8 years if the interest rate is 8% compounded semiannually. According to my book, the answer is $69,913.77
I don't understand why?
according to my method i get another answer..
$A_n = 6000\left[\frac{1-(1+0.01)^{-16}}{0.01}\right] = 90,\!000$
what am i doing wrong?
thanks.
2. Hello, jhonwashington!
Two errors . . .
Find the present value of an annuity that pays $6000 at the end of each 6-month period for 8 years if the interest rate is 8% compounded semiannually. According to my book, the answer is $69,913.77.
. . . Right!
I don't understand why?
according to my method i get another answer..
$A_n = 6000\left[\frac{1-(1+0.01)^{-16}}{0.01}\right] = 90,\!000$ .
. . . Incorrect formula
. . and incorrect interest rate.
The rate is 8% compounded semiannually: . $i \:=\:\frac{8\%}{2} \:=\:{\color{blue}0.04}$
The formula is: . ${\color{blue}P \;=\;A\,\frac{(1+i)^n - 1}{i(1+i)^n}}$
And we have: . $P \;=\;6000\,\frac{1.04^{16} - 1}{(0.04)(1.04^{16})} \;=\;69,\!913.77365...$
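The arithmetic can be verified numerically; a quick sketch of my own (assuming 64-bit floats):

```rust
fn main() {
    let i = 0.08 / 2.0;                 // 8% nominal, compounded semiannually
    let n = 2 * 8;                      // 16 half-year periods
    let growth = (1.0_f64 + i).powi(n); // (1 + i)^n
    // Present value of an ordinary annuity: P = A * ((1+i)^n - 1) / (i (1+i)^n)
    let p = 6000.0 * (growth - 1.0) / (i * growth);
    assert!((p - 69_913.77).abs() < 0.01);
    println!("present value = {:.2}", p); // prints 69913.77
}
```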
# How can I test the quality of an S-box?
I have 11 S-boxes and I want to test them to find the best one. How can I do that? I found several criteria for this, but I could not understand them.
• Statistical testing can only show something is really bad. At best the result is "not trivially broken". But it can not prove security, and even less is it able to indicate what is the best. I am not sure, the question can be answered objectively. – tylo Jun 28 '19 at 20:03
I'd advise looking at the result of Daemen and Rijmen on the matter, called the wide-trail design strategy, which has been used to construct the current AES. In short, you want S-boxes that have:
# High algebraic degree
If you have the ANF of the Boolean function induced by your permutation, which is a polynomial in $$\mathbb{F}_2[x_0,...,x_{n-1}]/(x_0^2 - x_0,..., x_{n-1}^2 - x_{n-1}),$$ then the algebraic degree is the number of variables in the largest product term of the function’s ANF having a non-zero coefficient.
# Balancedness
Let $$F$$ be a function from $$\mathbb{F}_2^n$$ into $$\mathbb{F}_2^n$$. $$F$$ is balanced if it takes every value of the range exactly once.
# High Nonlinearity
The aggregated nonlinearity of your S-box is the minimum nonlinearity over all of its component functions, which you can get with the Walsh-Hadamard transform.
# Low Differential uniformity
Define the difference distribution of a function with respect to elements $$a$$ and $$b$$ of $$\mathbb{F}_2^n$$ as $$DF(a,b) = \#\{x∈\mathbb{F}_2^n:F(x)⊕F(x⊕a) =b\}.$$ Then the differential uniformity is the maximum value of this quantity over all pairs of $$a$$ and $$b$$ with $$a \neq 0$$.
# High differential branch number
This is calculated by $$\min_{x\neq y}\left\{wt(x⊕y) +wt(F(x)⊕F(y))\right\}$$ where $$wt$$ is the Hamming weight.
Note that these are still only a subset of the tools used in the literature to argue about S-boxes, but the wide-trail strategy is currently still a good pivot point. There are very useful tools in the SageMath library to easily check these properties: http://doc.sagemath.org/html/en/reference/cryptography/sage/crypto/sbox.html
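As a concrete illustration (my own sketch, not from the answer), the differential uniformity can be computed by exhaustively filling the difference distribution table. The example uses the well-known 4-bit PRESENT S-box, whose differential uniformity is 4 (the best achievable for a 4-bit bijection):

```rust
// Differential uniformity: max over nonzero input differences a and all
// output differences b of #{ x : S(x) ^ S(x ^ a) == b }.
fn differential_uniformity(s: &[usize]) -> usize {
    let n = s.len();
    let mut best = 0;
    for a in 1..n {
        for b in 0..n {
            let count = (0..n).filter(|&x| s[x] ^ s[x ^ a] == b).count();
            best = best.max(count);
        }
    }
    best
}

fn main() {
    // 4-bit PRESENT S-box.
    let present = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                   0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2];
    assert_eq!(differential_uniformity(&present), 4);
    // The identity permutation is maximally bad: every input difference a
    // maps to output difference a with probability 1.
    let identity: Vec<usize> = (0..16).collect();
    assert_eq!(differential_uniformity(&identity), 16);
}
```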
• I actually want to know about: Avalanche Criterion (AVA), StrictAvalancheCriterion(SAC), Bit Independence Criterion (BIC) and Non-linearity Measure () – Mehmet Özcan Jun 27 '19 at 12:56
• Wide trail is (primarily) not about the S-boxes, see here. Also, some of the other criteria you mention such as the differential branch number of the S-box are not always relevant. – Aleph Jun 27 '19 at 12:56
• All criteria exist in their own bubble, and it is not objectively possible to put them in relation. Some criteria are older, some imply certain qualities in other scores. If the goal is to get a comparison, the only way is to learn and understand them all in depth - arguing one criteria over the other is much less a science but a form of art. This has to take all efforts in cryptanalysis into account, and I would guess only an expert in the field can actually produce a meaningful result. – tylo Jun 28 '19 at 19:50
The selection criterion for the best S-box depends on what you are focusing on: security, implementation, etc. In your comment you focused only on BIC, SAC and nonlinearity, but along with those there are further security criteria: the Difference Distribution Table (DDT), the Linear Approximation Table (LAT), the Algebraic Normal Form (ANF) and algebraic immunity.
In terms of implementation:
• Multiplicative complexity: the smallest number of nonlinear gates
• Bitslice gate complexity: the smallest number of operations in {AND, OR, XOR, NOT} required to compute this function
• Gate complexity: the smallest number of logic gates required to compute this function
• Circuit depth complexity: the length of the longest paths from an input gate to an output gate
At FSE 2019, PEIGEN, a platform for evaluation, implementation, and generation of S-boxes, was presented with open-source code; this tool will be very helpful for your analysis.
|
{}
|
# Accurate fundamental parameters for lower main‐sequence stars
@article{Casagrande2006AccurateFP,
title={Accurate fundamental parameters for lower main‐sequence stars},
author={Luca Casagrande and Laura Portinari and Chris Flynn},
journal={Monthly Notices of the Royal Astronomical Society},
year={2006},
volume={373},
pages={13-44}
}
• Published 2006
• Physics
• Monthly Notices of the Royal Astronomical Society
We derive an empirical effective temperature and bolometric luminosity calibration for G and K dwarfs, by applying our own implementation of the Infrared Flux Method to multiband photometry. Our study is based on 104 stars for which we have excellent BV(RI)CJHKS photometry, excellent parallaxes and good metallicities. Colours computed from the most recent synthetic libraries (ATLAS9 and MARCS) are found to be in good agreement with the empirical colours in the optical bands, but some…
149 Citations
M dwarfs: effective temperatures, radii and metallicities
• Physics
• 2008
We empirically determine effective temperatures and bolometric luminosities for a large sample of nearby M dwarfs, for which high accuracy optical and infrared photometry is available. We introduce a…
Metallicity of M dwarfs II. A comparative study of photometric metallicity scales
Stellar parameters are not easily derived from M dwarf spectra, which are dominated by complex bands of diatomic and triatomic molecules and not well described at the line by line level by…
Towards stellar effective temperatures and diameters at 1 per cent accuracy for future surveys
The apparent size of stars is a crucial benchmark for fundamental stellar properties such as effective temperatures, radii and surface gravities. While interferometric measurements of stellar angular…
Infrared flux method and colour calibrations
The Infrared Flux Method (IRFM) is one of the most accurate techniques to derive fundamental stellar parameters – namely effective temperatures, bolometric luminosities and angular diameters – in an…
Calibration of Strömgren uvby-H$\beta$ photometry for late-type stars – a model atmosphere approach
• Physics
• 2009
Context. The use of model atmospheres for deriving stellar fundamental parameters, such as Teff, log g, and [Fe/H], will increase as we find and explore extreme stellar populations where empirical…
Synthetic stellar photometry – I. General considerations and new transformations for broad-band systems
• Physics
• 2014
After a pedagogical introduction to the main concepts of synthetic photometry, colours and bolometric corrections in the Johnson-Cousins, 2MASS, and HST-ACS/WFC3 photometric systems are generated…
Absolute physical calibration in the infrared
We determine an absolute calibration for the Multiband Imaging Photometer for Spitzer 24 μm band and recommend adjustments to the published calibrations for Two Micron All Sky Survey (2MASS),…
New constraints on the chemical evolution of the solar neighbourhood and galactic disc(s) - improved astrophysical parameters for the Geneva-Copenhagen Survey
We present a re-analysis of the Geneva-Copenhagen survey, which benefits from the infrared flux method to improve the accuracy of the derived stellar effective temperatures and uses the latter to…
Deriving precise parameters for cool solar-type stars: optimizing the iron line list
• Physics
• 2013
Context. Temperature, surface gravity, and metallicity are basic stellar atmospheric parameters necessary to characterize a star. There are several methods to derive these parameters and a…
The helium abundance and ΔY/ΔZ in lower main-sequence stars
We use nearby K dwarf stars to measure the helium-to-metal enrichment ratio ΔY/ΔZ, a diagnostic of the chemical history of the solar neighbourhood. Our sample of K dwarfs has homogeneously determined…
#### References
(Showing 1-10 of 140 references)
Effective temperature scale and bolometric corrections from 2MASS photometry
We present a method to determine effective temperatures, angular semi-diameters and bolometric corrections for population I and II FGK type stars based on V and 2MASS IR photometry. Accurate…
Determination of effective temperatures for an extended sample of dwarfs and subdwarfs (F0-K5)
• Physics
• 1996
We have applied the InfraRed Flux Method (IRFM) to a sample of 475 dwarfs and subdwarfs in order to derive their effective temperatures with a mean accuracy of about 1.5%. We have used the new…
The effective temperature scale of FGK stars. I. Determination of temperatures and angular diameters with the infrared flux method
• Physics
• 2005
The infrared flux method (IRFM) has been applied to a sample of 135 dwarf and 36 giant stars covering the following regions of the atmospheric parameter space: (1) the metal-rich ([Fe/H] 0) end…
The Effective Temperature Scale of FGK Stars. II. Teff:Color:[Fe/H] Calibrations
• Physics
• 2005
We present up-to-date metallicity-dependent temperature versus color calibrations for main-sequence and giant stars based on temperatures derived with the infrared flux method (IRFM). Seventeen…
Comparison of White Dwarf Models with STIS Spectrophotometry
Computed flux distributions for four white dwarf (WD) stars with nearly pure hydrogen atmospheres are tied to the Vega flux scale with V-band Landolt photometry. With broadband photometric precision…
Model atmospheres for G, F, A, B, and O stars
A grid of LTE model atmospheres is presented for effective temperatures ranging from 5500 to 50,000 K, for gravities from the main sequence down to the radiation pressure limit, for abundances solar,…
SPECTRAL LINE-DEPTH RATIOS AS TEMPERATURE INDICATORS FOR COOL STARS
The use of spectral line-depth ratios as a stellar thermometer in G and K dwarfs is developed and refined beyond an earlier study (Gray and Johnson 1991). Ratios incorporating a line with any degree…
Determination of the temperatures of selected ISO flux calibration stars using the Infrared Flux Method
• Physics
• 1997
Effective temperatures for 420 stars with spectral types between A0 and K3, and luminosity classes between II and V, selected for a flux calibration of the Infrared Space Observatory, ISO, have been…
Homogeneous photometry and metal abundances for a large sample of Hipparcos metal-poor stars
• Physics
• 1999
Homogeneous photometric data (Johnson V, B-V, V-K, Cousins V-I and Strömgren b-y), radial velocities, and abundances of Fe, O, Mg, Si, Ca, Ti, Cr and Ni are presented for 99 stars with high-precision…
Abundances for metal-poor stars with accurate parallaxes, I. Basic data
We present element-to-element abundance ratios measured from high dispersion spectra for 150 field subdwarfs and early subgiants with accurate Hipparcos parallaxes (errors <20%). For 50 stars new…
|
{}
|
# Convolution of signals sampled on a logarithmic grid
Is there a practical accelerated algorithm or a theoretical discrete (Fourier) transform based method to convolve discrete-time signals sampled on a logarithmic grid? What I mean is representing a discrete-time signal by the sequence:
$$[1, 1, 1],$$
as a shorthand notation for samples:
$$\big[\big(\log(1), 1\big), \big(\log(2), 1\big), \big(\log(3), 1\big)\big],$$
with autoconvolution:
$$[1, 1, 1]*[1, 1, 1]$$
$$=\big[\big(\log(1)+\log(1), 1\big), \big(\log(1)+\log(2), 1\big), \big(\log(1)+\log(3), 1\big)\big]+\\ \big[\big(\log(2)+\log(1), 1\big), \big(\log(2)+\log(2), 1\big), \big(\log(2)+\log(3), 1\big)\big]+\\ \big[\big(\log(3)+\log(1), 1\big), \big(\log(3)+\log(2), 1\big), \big(\log(3)+\log(3), 1\big)\big]$$
\begin{align}=&\big[\big(\log(1\times1), 1\big), \big(\log(1\times2), 1\big), \big(\log(1\times3), 1\big)\big]\\ +&\big[\big(\log(2\times1), 1\big), \big(\log(2\times2), 1\big), \big(\log(2\times3), 1\big)\big]\\ +&\big[\big(\log(3\times1), 1\big), \big(\log(3\times2), 1\big), \big(\log(3\times3), 1\big)\big]\end{align}
\begin{align}=&\big[\big(\log(1), 1\big), \big(\log(2), 1\big), \big(\log(3), 1\big)\big]\\ +&\big[\big(\log(2), 1\big), \big(\log(4), 1\big), \big(\log(6), 1\big)\big]\\ +&\big[\big(\log(3), 1\big), \big(\log(6), 1\big), \big(\log(9), 1\big)\big]\end{align}
\begin{align}=&[1, 1, 1, 0, 0, 0, 0, 0, 0]\\ +&[0, 1, 0, 1, 0, 1, 0, 0, 0]\\ +&[0, 0, 1, 0, 0, 1, 0, 0, 1]\\ \rule{0pt}{4ex}=&[1, 2, 2, 1, 0, 2, 0, 0, 1]\end{align}
The sequence notation is not affected by the choice of base of the logarithm. In an alternative representation, a sample $$(\log(0),1)$$ $$=$$ $$(-\infty,1)$$ could be added to the signal (appended to the beginning of the sequence), giving the pleasures of zero-based indexing without affecting the rest of the convolution result:
$$[1,1,1,1]*[1,1,1,1]$$ \begin{align}=&[4, 0, 0, 0, 0, 0, 0, 0, 0, 0]\\ +&[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]\\ +&[1, 0, 1, 0, 1, 0, 1, 0, 0, 0]\\ +&[1, 0, 0, 1, 0, 0, 1, 0, 0, 1]\\ \rule{0pt}{4ex}=&[7, 1, 2, 2, 1, 0, 2, 0, 0, 1]\end{align}
I don't have a real application in mind, but I encountered such a convolution in the context of multiplication of discrete random variables. It is also interesting that if an infinite sequence of 1's were log-sampling-convolved with itself, the result would be the sequence OEIS A000005, the number of divisors of the positive integers.
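Since adding positions $\log(i)+\log(j)$ multiplies the indices $i\cdot j$, the operation above is exactly a Dirichlet-style convolution on the integer indices. Here is a direct $O(N^2)$ Python sketch (the function name is mine):

```python
def log_conv(f, g):
    """Convolve sequences sampled at log(1), log(2), ...

    f[i-1] is the sample at position log(i); output slot k-1 collects
    f[i-1] * g[j-1] over all index products i * j == k.
    """
    out = [0] * (len(f) * len(g))
    for i in range(1, len(f) + 1):
        for j in range(1, len(g) + 1):
            out[i * j - 1] += f[i - 1] * g[j - 1]
    return out

print(log_conv([1, 1, 1], [1, 1, 1]))  # [1, 2, 2, 1, 0, 2, 0, 0, 1]
```

Consistent with the remark about OEIS A000005, the first $N$ entries of the autoconvolution of $N$ ones count the divisors of $1,\dots,N$. A faster algorithm would presumably exploit this multiplicative structure, e.g. by mapping prime-exponent vectors to an ordinary multidimensional convolution.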
• If I am getting it correctly, the tuple describes $(t, x(t))$. If that is the case, then a $\log(t)$ would imply an increasing $Ts$ and consequently a decreasing $Fs$. Unless $x(t)$'s bandwidth decreases slower than that with time, then maybe you can adapt an existing method. Otherwise, you very quickly run into aliasing territory. That is not to say that this is not interesting. Just pointing out that it might be approachable from a different route than DSP. – A_A Jan 13 '19 at 10:41
|
{}
|
Enter your weight in pounds and your weight while gripping the bar into the calculator to determine your grip strength.
## Grip Strength Formula
The following equation is used to calculate Grip Strength.
GS = W - WHB
• Where GS is your grip strength (lbs)
• W is your body weight (lbs)
• WHB is your body weight while gripping the bar (lbs)
To calculate grip strength, subtract your weight while gripping the bar from your body weight.
## What is Grip Strength?
Definition:
Grip strength is a measure of the amount of force a person can generate through their forearm-hand axis.
## How to Calculate Grip Strength?
Example Problem:
The following example outlines the steps and information needed to calculate Grip Strength.
First, determine your body weight. In this example, the person weighs 180 lbs.
Next, determine your body weight while gripping the bar. For this problem, the body weight while gripping is 100 lbs.
Finally, calculate the grip strength using the formula above:
GS = W – WHB
GS = 180-100
GS = 80 lbs
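The arithmetic above can be wrapped in a small helper (a sketch; the function name is my own):

```python
def grip_strength(body_weight, weight_while_gripping):
    """GS = W - WHB: the weight (lbs) taken off the scale by the grip."""
    return body_weight - weight_while_gripping

# The worked example: 180 lbs body weight, 100 lbs while gripping the bar.
print(grip_strength(180, 100))  # 80
```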
|
{}
|
In Fig. 6.42, if lines PQ and RS intersect at point T, such that $\angle$ PRT = 40°, $\angle$ RPT = 95° and $\angle$ TSQ = 75°, find $\angle$ SQT.
We have,
lines PQ and RS intersect at point T, such that $\angle$ PRT = 40°, $\angle$ RPT = 95° and $\angle$ TSQ = 75°
In $\Delta$PRT, by using angle sum property
$\angle$PRT + $\angle$PTR + $\angle$TPR = $180^0$
So, $\angle$PTR = $180^0 -95^0-40^0$
$\Rightarrow \angle PTR = 45^0$
Since lines, PQ and RS intersect at point T
therefore, $\angle$PTR = $\angle$QTS (Vertically opposite angles)
$\angle$QTS = $45^0$
Now, in $\Delta$QTS,
By using angle sum property
$\angle$TSQ + $\angle$STQ + $\angle$SQT = $180^0$
So, $\angle$SQT = $180^0-45^0-75^0$
$\therefore \angle SQT = 60^0$
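The two angle-sum steps above can be checked numerically with a trivial sketch:

```python
# Given angles (degrees) from the problem statement.
PRT, RPT, TSQ = 40, 95, 75

PTR = 180 - PRT - RPT   # angle sum in triangle PRT
QTS = PTR               # vertically opposite angles at T
SQT = 180 - QTS - TSQ   # angle sum in triangle QTS

print(PTR, SQT)  # 45 60
```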
|
{}
|
# Find the cardinality of the set of all norms on R^n (hint: show that every norm || || : R^n → R is continuous).
Answers can be viewed only if
1. The questioner was satisfied and accepted the answer, or
2. The answer was disputed, but the judge evaluated it as 100% correct.
1 Attachment
• Do you also need the proofs for your other question or just the answer? I would say your offered amount is a bit low.
• Proofs, how much more do you think i should add?
• Well, it indeed needs three proofs (one for each case ) and will take about an hour to write. I think the offer should be at least tripled.
• Okay i'll do that
• I suggest you to also set a later deadline, if possible. 4 hours is too early for this kind of high level questions.
• Hey Phillip, i just realized, don't i need a lower bound for the Cardinality for the set of all norms
• Cool. So you do not need the second part of the argument.
• im confused. I still need a lower bound for the cardinality for the set of all norms
• Suppose you have a norm || . ||. Then for any real number a > 0, a*|| . || is also a norm, and different positive values of a give different norms. So the cardinality of the set of norms is at least that of the continuum. I hope this makes it clear.
|
{}
|
Tuesday, March 30, 2021
Agreement Among 3 Raters or More When A Subject Can be Rated by No More than 2 Raters
Most methods proposed in the literature for evaluating the extent of agreement among 3 raters assume that each rater is expected to rate all subjects. In some inter-rater reliability applications, however, this requirement cannot be satisfied, either because of the prohibitive costs associated with the rating process or because the rating process is too demanding for a human subject. For example, scientific laboratories are often rated by accrediting agencies to have their work quality officially certified. These accrediting agencies themselves need to conduct inter-rater reliability studies to demonstrate the high quality of their accreditation process. Given the high costs associated with accrediting a laboratory (a staggering number of lab procedures must be verified and documentation reviewed), agencies are willing to fund a single round of rating for each laboratory with one rater, and use another rater to provide the ratings during the regular accreditation process, which is funded by each lab.
The question now becomes: "Is it possible to evaluate the extent of agreement among 3 raters or more, given that a maximum of 2 raters are allowed to rate the same subject?" The good news is that it is indeed possible to design an experiment that achieves that goal. However, a price must be paid to make this happen: the agreement coefficient based on such a design will have a higher variance than the traditional coefficient based on the fully-crossed design, where each rater must rate all subjects. The general approach is as follows:
• Suppose your problem is to quantify the extent of agreement among the group of 5 raters $${\cal R}=\{Rater1, Rater2, Rater3, Rater4, Rater5 \}$$
• Out of the roster of 5 raters $$\cal R$$, one can form the following 10 different pairs of raters (Note that if $$r$$ is the number of raters, then the associated number of pairs that can be formed is $$r(r-1)/2=5\times4/2=10$$):
• Suppose that a total of $$n=15$$ subjects will participate in your experiment. The procedure consists of selecting 15 pairs of raters randomly and with replacement (i.e. one pair of raters could be selected more than once) from the above 10 pairs. The 15 selected pairs of raters will be assigned to the 15 subjects on a flow basis (i.e. sequentially as they are selected).
• Select with replacement 15 random integers between 1 and 10. Suppose the 15 random integers are $$\{2, 6, 2, 5, 4, 1, 8, 1, 3, 3, 5, 4, 2, 5, 9\}$$. That is, the $$2^{nd}$$ pair (Rater1, Rater3) will be assigned to subjects 1, 3 and 13. The $$6^{th}$$ pair (Rater2, Rater4) will be assigned to subject 2, and so on. The experimental design will look like this:
• Once all 15 subjects are rated, the dataset of ratings will have 3 columns. The first Subject column will identify subjects; the remaining 2 columns will contain the ratings from the different pairs of raters assigned to the subjects. The agreement coefficient will then be calculated as if the same 2 raters produced all the ratings. What will be different is the variance associated with the agreement coefficient.
• What is described here is referred to as a Partially Crossed design with 2 raters per subject, or $$\textsf{PC}_2$$ design, and is discussed in detail in the $$5^{th}$$ edition of the Handbook of Inter-Rater Reliability to be released in July of 2021.
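The selection procedure described above can be sketched in a few lines of Python (the rater names match the example; the seed is illustrative only, not part of the handbook's method):

```python
import random
from itertools import combinations

raters = ["Rater1", "Rater2", "Rater3", "Rater4", "Rater5"]
pairs = list(combinations(raters, 2))        # r(r-1)/2 = 10 possible pairs

n_subjects = 15
random.seed(0)                               # reproducible illustration only
# Select 15 pairs randomly and with replacement, assigned to subjects
# sequentially as they are drawn.
assignment = {s: random.choice(pairs) for s in range(1, n_subjects + 1)}

for subject, (r1, r2) in assignment.items():
    print(subject, r1, r2)
```

Each subject's row then yields one pair of ratings, and the agreement coefficient is computed on the two rating columns as described.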
|
{}
|
# Indexical uncertainty and subjective randomness
+ 2 like - 0 dislike
585 views
@user1247 wrote in the context of a many worlds discussion
given that indexical uncertainty in the space of all worlds leads to subjective randomness.
I'd like to have an explanation of what this means and in which sense the first leads to the other.
asked Apr 7, 2015 in Chat
recategorized Apr 8, 2015
@dimension10: But only if they are on a graduate+ level, which means, argued mathematically or experimentally. For example, Everett's paper introducing the MWI (and the Ph.D. thesis underlying it) is physics, published in a physics journal, and one can argue about its deductions in a logical way. Thus this question is fine for Q&A. But indexical uncertainty is a purely philosophical concept, and the conclusions have nothing to do with physics as a science.
+ 4 like - 0 dislike
Indexical uncertainties arise in philosophical discussions about Boltzmann brains, multiverse scenarios, cloning thought experiments, and Doomsday arguments, and really in any case where you consider an observer trying to use Bayesian reasoning to determine his or her reference class. Other keywords are "self-locating uncertainty", "anthropic self-selection", and "self-sampling", and here and here are a couple of Wikipedia articles that touch on the subject. A typical example of how it leads to subjective randomness in a deterministic scenario:
Suppose you enroll in an experiment in which you are blindfolded, taken into a room, and cloned. The clone always ends up in a red room, the original in a blue room. After the cloning operation you are then asked to remove your blindfold. At this point you have "indexical uncertainty" about whether you are the clone or the original, and so you predict a 50% chance of being in a red or blue room. Indeed, if you repeat the experiment many times, you will find yourself in a red or blue room with a purely random 50% probability.
Now there are many puzzles and questions and debates about all of this, but as far as I know there is little debate about the basic concept, that subjectively pure randomness can arise from purely deterministic cloning scenarios. And of course in the MWI the "clones" are just branches of a universal wave function.
answered Apr 7, 2015 by (540 points)
edited Apr 8, 2015 by user1247
Thanks for the answer. Let me challenge the basic concept, that subjectively pure randomness can arise from purely deterministic cloning scenarios - if there hasn't been a debate about it so far (the concept is very young!) then there is one now.
After the cloning operation, there will be two selves, not one, since the self is of course being cloned, too. The original self will thus find itself with 100% probability in the blue room, the cloned self will find itself with 100% probability in the red room. Repeating the experiment $n$ times with some of the intermediate selves will produce up to $2^n$ different selves, and the history of each one will be determined with 100% probability by the binary sequence indicating at which experiment it was the original or the cloned self. Each self will get different probability statistics (the only statistics available to the self), and it depends on the process selecting for cloning whether in the long run these statistics converge, and to which distribution.
Now you might understand better my answer in the other thread - you may interpret each 1 as being cloned, and each 0 as being original after the corresponding cloning experiment.
In terms of MWI, to turn the philosophy into physics you need to answer my question there:
What is the probability distribution on the space of all worlds underlying Everett's interpretation? Without an assumed distribution, nothing definite follows.
More precisely, you'd have to specify a stochastic process that specifies the branching behavior at every time $t$. Only then one can begin to consider the physics behind the philosophical ''indexical uncertainty" argument.
Maybe it will help to consider a concrete example. The experiment consists of three cloning repetitions, with each participant (both the clone and the original) re-enrolling at each step. There are eight possible histories:
[111],[110],[011],[101],[100],[010],[001],[000], where '1' stands for red room, and '0' for blue room. Note that in 25% of histories the participant finds himself in a room of constant color, while in 75% of histories, the color ratio is 2:1.
From a "global perspective", of course the process is entirely deterministic. That is the point. But from the perspective of a participant in the experiment the situation is entirely different. At the end of the experiment 25% of participants will report that the room color was constant after each step, while 75% will report the color changing at least once.
Now if you are a scientist who participates in this experiment you might try to tell for yourself whether your history of "red room, blue room" is deterministic or random. So you hypothesize that if you do the experiment many times you can do a statistical test to check whether the room color changes are consistent with a binomial distribution and if so with what p. Well if you enroll in the experiment many times and do the counting it is clear that the majority of scientist participants will find themselves with histories statistically consistent with a binomial distribution with p=0.5. So for the majority of scientists who are actually subjectively experiencing this cloning scenario, they will find that their experiences are described by a purely random probability distribution.
Of course, like any probability distribution (deterministic or not) there will be outliers. Some scientists will see X$\sigma$ fluctuations that later regress to the mean, and a very very few will see a deterministic world in which the room is always one color. But the treatment here is no different from any other statistical assessment (for example ordinary QM a la Copenhagen) -- some scientists will be very "lucky" but the vast majority will have experiences that are consistent with the "purely random" hypothesis. This is borne out by simple counting of histories or by knowing about binomial distributions.
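The history counting in this exchange is easy to verify directly; the sketch below enumerates the $2^n$ equally weighted histories for the $n = 3$ example and the fraction that see a constant room color:

```python
from itertools import product

# Enumerate all 2**n cloning histories for n = 3 rounds.
# Each history is a tuple of room colors per round: 1 = red, 0 = blue.
n = 3
histories = list(product([0, 1], repeat=n))
constant = [h for h in histories if len(set(h)) == 1]

print(len(histories))                  # 8 possible histories
print(len(constant) / len(histories))  # 0.25 see a constant color
```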
It would be helpful if you clarified where, exactly, in all of this, you disagree. Do you disagree that a scientist enrolling in such an experiment will on average have a history that leads him to conclude the room changes color purely randomly with binomial p=0.5? Or do you have a philosophical objection that the clones are not "conscious"? You need to explain clearly exactly where your confusion lies, because currently you have not given me enough to be able to say.
@user1247:
At the end of the experiment 25% of participants will report that the room color was constant after each step, while 75% will report the color changing at least once.
Yes. A scientist observing the whole experiment will see the uniform distribution of the experimental result reproduced by the reports of most participants, so what? There is nothing subjective about it. If the experimenters would have chosen a different experimental distribution, this different distribution would have been observed, equally objective. The experimental setup completely determines the resulting most typical probabilities, and any set of probabilities can be obtained by picking appropriate rules for whom to clone when. This is what I meant in my answer with
What is the probability distribution on the space of all worlds underlying Everett's interpretation? Without an assumed distribution, nothing definite follows.
Note that you silently assumed the uniform distribution of the experiment, but you concluded the subjective 50% probability as a nontrivial result of your argumentation. It is this kind of fallacy that becomes apparent once one turns from mere philosophy to physics.
Note also that the cloning scenario is very different from the MWI setting, where only a single trajectory of cloned universes can be observed by mankind as a whole. I was talking about this single trajectory - it can have an arbitrary distribution completely unrelated to the distribution of the experiment, any distribution is as permissible as any other, and since one cannot observe any other trajectory, there is no way of assigning a meaningful experimental probability to which splits are chosen with which frequency by the assumed experimenter external to the universe.
Taking your silently assumed uniform prior as an admissible a priori hypothesis, I should conclude that we live in an extraordinarily exceptional trajectory since rather than seeing everything happen at random we observe marvellous regularities that keeps busy generations of physicists. Knowing that our trajectory is so extraordinarily exceptional, it is clear that we cannot deduce the slightest thing from presumed indexical uncertainties, as these can only cover the most likely cases, and we have proof that our universe is far too regular for this.
@Arnold, first of all, you continue to obfuscate the discussion by bringing in your objections to the MWI, which is putting the cart before the horse. Please stop until we clarify the more fundamental issue. You are already causing this discussion to nearly derail, as I am losing patience (you did not even answer my very direct questions).
Yes. A scientist observing the whole experiment will see the uniform distribution of the experimental result reproduced by the reports of most participants, so what? There is nothing subjective about it.
You are continuing to misunderstand. The point is not only that a scientist observing the whole experiment from the outside will see the binomial distribution of histories reported, but more to the point is that the experimental participants will report themselves experiencing those histories. What the scientist from the outside can conclude is that the participants, as reported by them, are subjectively experiencing pure randomness with statistics exactly consistent with a binomial distribution with p=0.5. The situation is no different, scientifically, from if those participants had instead measured whether a given radioactive atom had decayed during one interval of its half life, or any other quantum or assumed purely random process with p=0.5. In such a case they will also report experiences with the same statistical distribution. Therefore the two situations are empirically indistinguishable.
Note that you silently assumed the uniform distribution of the experiment, but you concluded the subjective 50% probability as a nontrivial result of your argumentation. It is this kind of fallacy that becomes apparent once one turns from mere philosophy to physics.
This is simply wrong. We apparently agree (see above) on what a scientist will see being reported to him. Such reports are indistinguishable from the uniform distribution you say I assumed. But this is where it would be helpful for you to directly answer my questions (such as "Do you disagree that a scientist enrolling in such an experiment will on average have a history that leads him to conclude the room changes color purely randomly with binomial p=0.5?").
Taking your silently assumed uniform prior as an admissible a priori hypothesis, I should conclude that we live in an extraordinarily exceptional trajectory since rather than seeing everything happen at random we observe marvellous regularities that keeps busy generations of physicists. Knowing that our trajectory is so extraordinarily exceptional, it is clear that we cannot deduce the slightest thing from presumed indexical uncertainties, as these can only cover the most likely cases, and we have proof that our universe is far too regular for this.
This is so wrong and confused and misinformed that I think it best to wait until clarifying the above.
"Do you disagree that a scientist enrolling in such an experiment will on average have a history that leads him to conclude the room changes color purely randomly with binomial p=0.5?"
Ah, the test persons are supposed to be the scientists. In this case, the (for definiteness male) scientist can perform an average only on the frequency of the colors he sees. Again, there is nothing subjective about it. The uncloned scientist will report only blue walls, and the scientists that have been subjected to k out of n cloning operations will report, objectively, a probability of $k/n$. If you ask only one scientist you'll get exactly one answer, which might be anything between 0 and 100%, but if n is odd, never exactly 50%.
This answer is an objective fact; in the scientist's report there are no subjective probabilities anywhere. Of course it depends on whom you ask what the resulting statistics will be. But this has nothing to do with subjective probability; it is rather standard conditional probability. Experiments under different conditions will give different results if the conditions affect the experiment.
You argue that it is most likely 50% since this is the peak of the binomial distribution with $p=0.5$ and a large number of trials. But this kind of likelihood has nothing to do with the probabilities the scientist can objectively record without having access to his clones.
These 50% are therefore only your subjective probability that you assign knowing the cloning procedure. It is nothing the scientist can deduce from his color measurements. He measures a fact - an observed, objective probability.
Only you, who embed his observations into the cloning context, need subjective probabilities, since you argue from an assumed process to the outcome of a single trajectory of this process, and from a single observed case to a probability law. Neither can be done in a rational manner; hence subjectivity enters. Rational probability statements never apply to a single case. The reason is that the meaning of probability is intrinsically tied to the possibility of frequently repeating the case under similar conditions.
Moreover, and that was my whole point, this number 50% depends completely on the details of how the experiments are planned. If you modify the experiment so that the person in the room is cloned only with a probability p before being released, you don't get a symmetric binomial distribution, and the majority of scientists will report a probability different from 50%; the same happens if you change the rule of how to color the walls. Thus which probability is most likely observed by a single scientist will depend completely upon the stochastic process used to generate the colors.
In your case, the cloning experiment is deterministic, it produces $2^n$ trajectories on equal footing. The randomness comes in through which person you ask. If you ask only one, you cannot attach rationally a probability to it. So you retreat to the uninformative prior, claiming that you chose this person uniformly from the set of all persons. A bold claim for a single choice, not supported by any underlying mathematics.
Let us return to the perspective of a single scientist. No matter which probability the scientist finds when he compiles statistics of the colors in his rooms, it is an objective probability in the traditional sense of the word: number of good cases divided by number of all cases. Moreover, if, as a scientist, he makes a theory about how the colors change in his room, the best theory he can actually test is to set up a stochastic or deterministic process for his changing walls. If he in addition speculates about any possible cloning that might have been going on, this is not science but pure speculation. For to make a prediction that explains his observed deterministic or stochastic trajectory (the only objective information about his current field of study), he must postulate [without the slightest evidence for it]
1. a mysterious cloning process that copies every detail about the past (in his memory, his notebook, his friends, etc.) except for changing the next moment.
2. a second deterministic or stochastic process for cloning - which will surely be at least as complex as the stochastic model he'd use to describe his single time series.
3. The assumption that his cloning observation is typical for cloning, i.e., that the most likely family of trajectories contains his own trajectory (whatever it is),
4. the mystery that all the generated alternatives in the cloning process are completely unobservable.
And as the scientific fruit of these assumptions he gets an ''explanation'' for his stochastic trajectory that carries no additional information beyond the stochastic model used for explaining his single time series without cloning. Thus he will conclude that it is most rational to drop the mysteries invoking cloning, following Ockham's famous principle.
Again, your position can immediately be seen to be wrong, since you can apply the exact same logic to taking a census of scientists doing any quantum experiment in the real world. Please think about this carefully.
The fact that I am not at all understanding you continues to indicate that you simply do not understand the thought experiment itself. Maybe I have not explained it well enough, but there is not much to explain. One thing in the above you seem confused about, and which I thought I made clear given the wording I used, is that the scientist is well aware that he is being cloned. There is no reason whatsoever for him to be in the dark about this. It doesn't matter to the thought experiment.
If you want specific questions answered, please post each question separately as a comment (preferably one by one, so that I know which one is the most important one for you). In my previous (not the last) mail I had answered your question
# Preconditioner for nonlinear optimization
Hello,
I am working on the following nonlinear minimization problem:
$$\min_{\boldsymbol{c} \in \mathbb{R}^n} \ \frac{1}{M} \sum_{i=1}^{M} \left(\frac{1}{2} \left[\boldsymbol{c}^T \boldsymbol{f}_0 + \int_0^{x^i} \sigma(\boldsymbol{c}^T\boldsymbol{g}(t))\, dt \right]^2 - \log\left(\sigma(\boldsymbol{c}^T \boldsymbol{g}(x^i))\right) \right),$$
where $\sigma(x) = \log(1 + \exp(x))$ is the softplus function, $x^1,x^2,\ldots, x^M$ are parameters with $M \sim 200$-$300$, and the coefficient vector $\boldsymbol{c}$ has dimension $\sim 20$. This optimization is a critical part of my code.
For now, I am using Optim.jl with L-BFGS.
Do you have some advice to speed up the optimization?
Are there some rules to design a pre-conditioner?
Should I supply the Hessian as well and move to a Newton method?
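For context, the structure of this objective is easy to prototype. Below is a Python sketch (the question uses Julia/Optim.jl; SciPy's L-BFGS-B plays the same role here), with a *hypothetical* polynomial basis for $\boldsymbol{g}$, synthetic $x^i$ and $\boldsymbol{f}_0$, and midpoint quadrature for the integral — all of these are illustrative assumptions, not details from the question:

```python
import numpy as np
from scipy.optimize import minimize

def softplus(z):
    # numerically stable log(1 + exp(z))
    return np.logaddexp(0.0, z)

def g(t, n=20):
    # hypothetical basis: monomials t^0 .. t^(n-1); the real g is problem-specific
    return np.vander(np.atleast_1d(t), n, increasing=True)  # shape (len(t), n)

def objective(c, xs, f0, n_quad=64):
    total = 0.0
    for xi in xs:
        # midpoint quadrature for the integral over [0, xi]
        h = xi / n_quad
        t = (np.arange(n_quad) + 0.5) * h
        integral = h * softplus(g(t) @ c).sum()
        total += 0.5 * (c @ f0 + integral) ** 2 - np.log(softplus(g(xi) @ c))[0]
    return total / len(xs)

rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=40)   # stand-in for the x^i (M would be 200-300)
f0 = rng.normal(size=20)
c0 = np.zeros(20)

res = minimize(objective, c0, args=(xs, f0), method="L-BFGS-B",
               options={"maxiter": 60})
print(res.fun <= objective(c0, xs, f0))  # True: the optimizer never returns a worse point
```

On the preconditioner question: a common starting point is a diagonal approximation of the Hessian at a reference iterate (so the quasi-Newton updates act on roughly isotropic variables); whether that pays off here depends on how ill-conditioned the basis $\boldsymbol{g}$ makes the problem.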
# Check my solution to system of equations?
I have the following system of equations that I wanted to solve:
$$2x_1+12x_2+16x_3=24\\ 7x_1+6x_2+4x_3=18\\ 3x_1+2x_2+8x_3=32\\ 9x_1+5x_2+10x_3=14$$
I tried arranging into matrix form:
$$AX=b$$
where
$$A= \left[ \begin{array}{ccc} 2 & 12 & 16\\ 7 & 6 & 4\\ 3 & 2 & 8\\ 9 & 5 & 10 \end{array} \right] ,X = \left[ \begin{array}{c} x_1\\ x_2\\ x_3\\ \end{array} \right] ,b= \left[ \begin{array}{c} 24\\ 18\\ 32\\ 14 \end{array} \right]$$
I then tried to solve for $X$ using:
$$X=A^+b$$
where $A^+$ is the pseudoinverse of $A$, and obtained
$$X= \left[ \begin{array}{c} \frac{43146}{52789}\\ \frac{-49025}{52789}\\ \frac{235293}{105578}\\ \end{array} \right] = \left[ \begin{array}{c} 0.817329368\\ -0.928697267\\ 2.228617704\\ \end{array} \right]$$
Just to check my solution, I tried solving the system in another way using an iterative method on the computer and obtained the solution:
$$X^*= \left[ \begin{array}{c} 0.8173293681\\ -0.9286972669\\ 2.228617705\\ \end{array} \right]$$
I thought that since both methods gave pretty much the same answer for the unknown matrix, it would be correct, however, $AX^*$ (or $AX$) is not even close to b:
$$AX^*= \left[ \begin{array}{c} 26.1481747999999996\\ 9.06359279399999984\\ 18.4235351999999999\\ 24.9986550200000010 \end{array} \right]$$
Normally, any of these methods work for me but in this case, I'm not sure. I always thought that iterative methods would yield the best possible solution while the analytic solution for $X$ in $AX=b$ would give a more-or-less "exact" solution. So I would understand if $X^*$ is as close to the proper solution as an iterative method can get but I don't understand why the analytic solution for $AX=b$ would be as equally off the mark.
I'm not good at being able to just "look" at equations and tell if or what kind of solution exists, so I was hoping if someone can tell me if I'm doing something wrong and/or if there is a more proper solution. Like if there is actually no solution, is this the sort of thing that happens when you try to solve $AX=b$ either analytically or numerically?
Thanks!
Um, what's wrong with simply applying Gauss elimination? Is there any reason for using such heavy machinery? – Johannes Kloos Mar 12 '12 at 19:28
You have an over-determined system, i.e. $4$ equations with $3$ unknowns. Hence, you cannot hope to get an exact solution unless $b \in \text{columnspace}(A)$.
In your case, $b$ doesn't belong to the column space of $A$. The solution you get, $x^* = A^{+}b$, is the least-squares solution, i.e. $x^{*} = \arg \min_{x} \|Ax-b\|_2$.
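This is easy to check numerically. The NumPy sketch below (using the $A$ and $b$ from the question) reproduces the least-squares solution and confirms it satisfies the normal equations $A^T(Ax - b) = 0$ rather than $Ax = b$:

```python
import numpy as np

A = np.array([[2., 12, 16],
              [7.,  6,  4],
              [3.,  2,  8],
              [9.,  5, 10]])
b = np.array([24., 18, 32, 14])

# least-squares solution x* = A+ b
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)                  # ≈ [0.8173, -0.9287, 2.2286]
print(A.T @ (A @ x - b))  # ≈ 0: the normal equations hold
print(A @ x)              # far from b: b is outside the column space of A
```

Both the pseudoinverse and any convergent iterative method minimize $\|Ax-b\|_2$, which is why they agree with each other while $Ax^*$ stays far from $b$.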
9.3 Simplification of denominate numbers (Page 2/2)
Practice set b
Perform each operation. Simplify when possible.
Add 4 gal 3 qt to 1 gal 2 qt.
6 gal 1 qt
Add 9 hr 48 min to 4 hr 26 min.
14 hr 14 min
Subtract 2 ft 5 in. from 8 ft 7 in.
6 ft 2 in.
Subtract 15 km 460 m from 27 km 800 m.
12 km 340 m
Subtract 8 min 35 sec from 12 min 10 sec.
3 min 35 sec
Add 4 yd 2 ft 7 in. to 9 yd 2 ft 8 in.
14 yd 2 ft 3 in
Subtract 11 min 55 sec from 25 min 8 sec.
13 min 13 sec
Multiplying a denominate number by a whole number
Let's examine the repeated sum
$\underset{\text{3 times}}{\underbrace{\text{4 ft 9 in.}+\text{4 ft 9 in.}+\text{4 ft 9 in.}}}=\text{12 ft 27 in.}$
Recalling that multiplication is a description of repeated addition, by the distributive property we have
$\begin{array}{cccc}\hfill 3\left(\text{4 ft 9 in}\text{.}\right)& =& 3\left(\text{4 ft}+\text{9 in.}\right)\hfill & \\ & =& 3\cdot \text{4 ft}+3\cdot \text{9 in.}\hfill & \\ & =& \text{12 ft}+\text{27 in.}\hfill & \text{Now, 27 in.}=\text{2 ft 3 in.}\hfill \\ & =& \text{12 ft}+\text{2 ft}+\text{3 in.}\hfill & \\ & =& \text{14 ft}+\text{3 in.}\hfill & \\ & =& \text{14 ft 3 in.}\hfill & \end{array}$
From these observations, we can suggest the following rule.
Multiplying a denominate number by a whole number
To multiply a denominate number by a whole number, multiply the number part of each unit by the whole number and affix the unit to this product.
Sample set c
Perform the following multiplications. Simplify if necessary.
$\begin{array}{ccc}\hfill 6\cdot \left(\text{2 ft 4 in.}\right)& =& 6\cdot \text{2 ft}+6\cdot \text{4 in.}\hfill \\ & =& \text{12 ft}+\text{24 in.}\hfill \end{array}$
Since $\text{3 ft}=\text{1 yd}$ and $\text{12 in.}=\text{1 ft}$,
$\begin{array}{ccc}\hfill \text{12 ft}+\text{24 in.}& =& \text{4 yd}+\text{2 ft}\hfill \\ & =& \text{4 yd 2 ft}\hfill \end{array}$
$\begin{array}{ccc}\hfill 8\cdot \left(\text{5 hr 21 min 55 sec}\right)& =& 8\cdot 5\text{hr}+8\cdot \text{21 min}+8\cdot \text{55 sec}\hfill \\ & =& \text{40 hr}+\text{168 min}+\text{440sec}\hfill \\ & =& \text{40 hr}+\text{168 min}+\text{7 min}+\text{20 sec}\hfill \\ & =& \text{40 hr}+\text{175 min}+\text{20 sec}\hfill \\ & =& \text{40 hr}+\text{2 hr}+\text{55 min}+\text{20 sec}\hfill \\ & =& \text{42 hr}+\text{55 min}+\text{20 sec}\hfill \\ & =& \text{24hr}+\text{18hr}+\text{55 min}+\text{20 sec}\hfill \\ & =& \text{1 da}+\text{18 hr}+\text{55 min}+\text{20 sec}\hfill \\ & =& \text{1 da 18 hr 55 min 20 sec}\hfill \end{array}$
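The carrying in Sample Set C can also be done mechanically: convert everything to the smallest unit, multiply, then convert back with repeated division. A short Python sketch of the second example (the helper names are illustrative, not from the text):

```python
def simplify_time(seconds):
    """Convert a number of seconds to (days, hours, minutes, seconds)."""
    minutes, sec = divmod(seconds, 60)
    hours, minute = divmod(minutes, 60)
    days, hour = divmod(hours, 24)
    return days, hour, minute, sec

# 8 * (5 hr 21 min 55 sec): convert to seconds, multiply, simplify
total = 8 * (5 * 3600 + 21 * 60 + 55)
print(simplify_time(total))  # (1, 18, 55, 20)  ->  1 da 18 hr 55 min 20 sec
```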
Practice set c
Perform the following multiplications. Simplify.
$2\cdot \left(\text{10 min}\right)$
20 min
$5\cdot \left(\text{3 qt}\right)$
$\text{15 qt}=\text{3 gal 3 qt}$
$4\cdot \left(5\text{ft 8 in}\text{.}\right)$
$\text{20 ft 32 in}\text{.}=\text{7 yd 1 ft 8 in}\text{.}$
$\text{10}\cdot \left(2\text{hr 15 min 40 sec}\right)$
$\text{20 hr 150 min 400 sec}=\text{22 hr 36 min 40 sec}$
Dividing a denominate number by a whole number
To divide a denominate number by a whole number, divide the number part of each unit by the whole number beginning with the largest unit. Affix the unit to this quotient. Carry any remainder to the next unit.
Sample set d
Perform the following divisions. Simplify if necessary.
$\left(\text{12 min 40 sec}\right)÷4$
Thus $\left(\text{12 min 40 sec}\right)÷4=3\text{min 10 sec}$
$\left(\text{5 yd 2 ft 9 in}\text{.}\right)÷3$
$\begin{array}{}\text{Convert to feet: 2 yd 2 ft}=\text{8 ft}\text{.}\\ \text{Convert to inches: 2 ft 9 in}\text{.}=\text{33 in}\text{.}\end{array}$
Thus $\left(\text{5 yd 2 ft 9 in}\text{.}\right)÷3=1\text{yd 2 ft 11 in}\text{.}$
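The same convert-then-divide idea handles Sample Set D: the repeated conversions to the next smaller unit are just integer division with a carried remainder. A sketch for the second example (helper names are mine, not from the text):

```python
def to_inches(yd, ft, inch):
    # 1 yd = 3 ft = 36 in., 1 ft = 12 in.
    return yd * 36 + ft * 12 + inch

def simplify_length(inches):
    """Convert a number of inches to (yards, feet, inches)."""
    yd, rem = divmod(inches, 36)
    ft, inch = divmod(rem, 12)
    return yd, ft, inch

# (5 yd 2 ft 9 in.) / 3: convert to inches, divide, simplify
print(simplify_length(to_inches(5, 2, 9) // 3))  # (1, 2, 11)  ->  1 yd 2 ft 11 in.
```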
Practice set d
Perform the following divisions. Simplify if necessary.
$\left(\text{18 hr 36 min}\right)÷9$
2 hr 4 min
$\left(\text{34 hr 8 min}\right)÷8$
4 hr 16 min
$\left(\text{13 yd 7 in}\text{.}\right)÷5$
2 yd 1 ft 11 in
$\left(\text{47 gal 2 qt 1 pt}\right)÷3$
15 gal 3 qt 1 pt
Exercises
For the following 15 problems, simplify the denominate numbers.
16 in.
1 foot 4 inches
19 ft
85 min
1 hour 25 minutes
90 min
17 da
2 weeks 3 days
25 oz
240 oz
15 pounds
3,500 lb
26 qt
6 gallons 2 quarts
300 sec
135 oz
8 pounds 7 ounces
14 tsp
18 pt
2 gallons 1 quart
3,500 m
16,300 mL
16 liters 300 milliliters (or 1 daL 6 L 3 dL)
For the following 15 problems, perform the indicated operations and simplify the answers if possible.
Add 6 min 12 sec to 5 min 15 sec.
Add 14 da 6 hr to 1 da 5 hr.
15 days 11 hours
Add 9 gal 3 qt to 2 gal 3 qt.
Add 16 lb 10 oz to 42 lb 15 oz.
59 pounds 9 ounces
Subtract 3 gal 1 qt from 8 gal 3 qt.
Subtract 3 ft 10 in. from 5 ft 8 in.
1 foot 10 inches
Subtract 5 lb 9 oz from 12 lb 5 oz.
Subtract 10 hr 10 min from 11 hr 28 min.
1 hour 18 minutes
Add 3 fl oz 1 tbsp 2 tsp to 5 fl oz 1 tbsp 2 tsp.
Add 4 da 7 hr 12 min to 1 da 8 hr 53 min.
5 days 16 hours 5 minutes
Subtract 5 hr 21 sec from 11 hr 2 min 14 sec.
Subtract 6 T 1,300 lb 10 oz from 8 T 400 lb 10 oz.
1 ton 1,100 pounds (or 1T 1,100 lb)
Subtract 15 mi 10 in. from 27 mi 800 ft 7 in.
Subtract 3 wk 5 da 50 min 12 sec from 5 wk 6 da 20 min 5 sec.
2 weeks 23 hours 29 minutes 53 seconds
Subtract 3 gal 3 qt 1 pt 1 oz from 10 gal 2 qt 2 oz.
Exercises for review
( [link] ) Find the value: ${\left(\frac{5}{8}\right)}^{2}+\frac{\text{39}}{\text{64}}$ .
1
( [link] ) Find the sum: $8+6\frac{3}{5}$ .
( [link] ) Convert $2\text{.}\text{05}\frac{1}{\text{11}}$ to a fraction.
$2\frac{\text{14}}{\text{275}}$
( [link] ) An acid solution is composed of 3 parts acid to 7 parts water. How many parts of acid are there in a solution that contains 126 parts water?
( [link] ) Convert 126 kg to grams.
126,000 g
# A big sum
1. Feb 27, 2005
### damoclark
Can anyone help me with this sum
1^10 + 2^10 + 3^10 ......... +998^10 + 999^10 + 1000^10 = ?
I read that when Gauss was a kid at school he solved the simpler problem of summing all the numbers between 1 and 100 in his head, before the teacher and all the other kids, by observing that 100+1=101, 99+2=101, 98+3=101, etc., and that there are 50 such sums, which gives an answer of 50*101=5050 for the problem.
The above sum, with powers of 10, I heard that one of the Bernoulli brothers could solve within 10 minutes with just pen and paper.
Any ideas on how he did it? I've been trying to work out a fast way to do this sum, with integrals but am always getting stuck.
Last edited: Feb 28, 2005
2. Feb 27, 2005
### Gokul43201
Staff Emeritus
It shouldn't be terribly hard (cumbersome, yes) to derive a formula for the sum of n-th powers. I think the easiest way to do this might be by employing a telescoping sum. I recall a method involving Stirling Numbers and permutations...perhaps you can even Google it ?
3. Feb 27, 2005
### Hurkyl
Staff Emeritus
It's known that the sum of the first n m-th powers will be a degree (m+1) polynomial in n. (In fact, a few of the terms are known)
I don't know how fast you could solve a 10x10 system of equations, though, to determine the coefficients. I know there are easier ways of deriving this, but I don't remember them.
4. Feb 27, 2005
### shmoe
Try a search for the "Euler-Maclaurin summation formula", it's possible it could handle that sum by hand in 15 minutes, but I haven't tried. It involves the Bernoulli numbers and functions, so perhaps it was known to (one of) them. I confess ignorance on the history of the method though.
5. Feb 28, 2005
### robert Ihnot
Jacques Bernoulli, around 1690, was evidently the first person to publish on this subject and says:
"With the help of [these formulas] it took me less than half of
a quarter of an hour to find that the 10th powers of the first
1000 numbers being added together will yield the sum
91409924241424243424241924242500"
(Hey damoclark! Is that the exact sum you are looking for?)
The reference is: http://www.mathpages.com/home/kmath279.htm.
If I remember right, this was discussed in Newman's World of Mathematics.
It is also known that the early terms, providing it goes that far, for nth power sums from 1 to X are $$\frac{X^{n+1}}{n+1}+\frac{X^{n}}{2}+\frac{nX^{n-1}}{12}+\cdots,$$ and all the terms can be constructed using Bernoulli numbers, see: http://numbers.computation.free.fr/Constants/Miscellaneous/bernoulli.html
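Bernoulli's quarter-hour claim is easy to verify today: Python's arbitrary-precision integers evaluate the sum directly, and the three leading terms quoted above already land within a relative error of about $10^{-11}$ of his published value:

```python
from fractions import Fraction

# direct evaluation with arbitrary-precision integers
direct = sum(k**10 for k in range(1, 1001))
print(direct)  # 91409924241424243424241924242500, Bernoulli's published value

# leading Faulhaber terms: X^(n+1)/(n+1) + X^n/2 + n*X^(n-1)/12 + ...
X, n = 1000, 10
leading = (Fraction(X**(n + 1), n + 1)
           + Fraction(X**n, 2)
           + Fraction(n * X**(n - 1), 12))
print(float(leading / direct))  # very close to 1.0
```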
Last edited: Feb 28, 2005
6. Feb 28, 2005
### damoclark
Thanks Shmoe, I'll check out the "Euler-Maclaurin summation formula".
So the Sum could be calculated with a polynomial of order 11, which would mean I would have to substitute the value of 10^3, for the polynomial variable, to calculate the answer.
I've been looking at the function $$y = x^{10}$$ and integrating under it to get an approximation of the answer, i.e. $$\frac{x^{11}}{11}$$. Setting $x=10^3$ in this estimate of the sum, I guess the answer has under 33 decimal digits.
The problem I'm having now is finding a simple way to calculate the other coefficients without a calculator.
Last edited: Feb 28, 2005
7. Feb 28, 2005
English Issue, December 2002
Interesting new characteristics of the electrostatic field
Balázs Pozsgay
There are many important and interesting characteristics of the electrostatic field, such as the spherical symmetry of the field of a point charge or the theorem which states that outside a homogeneously charged spherical shell an electrostatic field generated by it is the same as that of a point charge having the same overall electric charge, and the electric field strength inside the shell is zero. Beyond these well-known features there arose a question in my mind of which I have not heard or read in any book, neither of the proposal nor of the solution of the problem. This problem - mainly after having been considered thoroughly - cast a new light on the characteristics of electrostatic fields (and also of those fields that can be described with similar equations, such as magnetostatic and gravitational fields) for me.
A problem of averaging
The question I was concerned with was: what result do we get in an arbitrary electrostatic field (not necessarily having a spherical symmetry) if we average the electric field strength vector and the electric potential on the surface of an imaginary sphere?
My guess was that the average electric field vector would be the same as the electric field vector in the centre of the sphere and the average potential would be the same as the electric potential in the centre of the sphere. As we will see, the guess is not correct in general, but is not too far from the truth and with a slight modification a true statement can be given.
The `brute force' method
Let us calculate the average of the potential mechanically, that is, let us call upon the help of the integral calculus. (If the Reader is inexperienced in this chapter of mathematics, do not put away this paper, just skip this section, because in the later sections there is also an elementary solution offered to this problem!)
Let us first take a field where there is only one point charge and it is in distance of the centre of the imaginary sphere of radius R. Under the `average' of the potential U we mean the quantity of:
Interpretation: the surface of the sphere is divided into $df_i$ surface elements ($i$ is an appropriately chosen index of the elements) and the average of the electric potentials detectable on the surface, weighted with the size of the surface elements, is calculated. (We must emphasize here that $df_i$ means the size of a surface element, a scalar quantity, and it should not be mixed up with the `directional surface element', a vector used in calculating fluxes.)
Let us get down to calculation. Take the co-ordinate frame as seen in the figure. The point charge resides on axis x, and let us take the centre of the sphere as the origin. Let us divide the surface of the sphere into belts of width dx. The first step is to determine the size of these surface belts.
That is, we got that the size of the spherical belts depend only on dx and that makes our further task quite simple. With the above equation the whole surface of the sphere can easily be calculated:
In the next step let us calculate the potential in the surface points with x co-ordinates.
where
Now, we can do the integration itself. Since the surface elements with the same distance from the point charge are on a thin spherical belt, it is sensible to divide the sphere into such belts or zones and make the averaging according to these.
We have arrived at a strange formula. Firstly, its form is peculiar, since in the final formula only one is present of the two quantities, R and , and in a symmetric way. Secondly, the physical interpretation is also interesting, because the same formula describes the potential derived from a homogeneously charged spherical shell of charge Q in distance . The present problem under investigation is quite different to that, as it appears, but we will see that there is a close connection between the two (and that can be well exploited).
An elementary solution resulting from energetic considerations
Now let us think that the so far imaginary (or `virtual') spherical shell is an existing (insulator) body which is charged homogeneously with q=1 total charge. What is the interaction energy of this homogeneously charged spherical shell and a point charge of magnitude Q residing in distance from its centre?
This energy can be determined in two ways. On the one hand, we can calculate the potential derived from the spherical shell in the place where the point charge is located (this can easily be done since the charge distribution has a spherical symmetry) and multiply it with the magnitude of the point charge (Q). The result is well-known, it is exactly the same as the final form of equation (3).
But there is another way. In the spherical symmetric (Coulomb) field of the point charge Q we can calculate the potential energy of the homogeneously distributed charge of the spherical shell. In theory it can be done by dividing the spherical shell into small parts in thought, multiplying the quantities of charge of the surface elements (df), that is df/(4R2), by the potentials resulting from the Q point charge at the place of the given surface elements, and summing up these energies.
The second method is technically much more difficult than the first one. But fortunately we do not have to go through the complicated summing procedure since its result is obviously the same as that of the first calculation.
N.B. the quantity resulting from the second method is the average potential on the sphere of radius R originating from charge Q. Benefiting from the fact that the two calculations have identical results, the average value in question can be determined with an elementary method, leaving out the integral calculus:
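The averaging result can be checked numerically (my own illustration, not from the article). It exploits the article's earlier observation that belts of equal width $dx$ carry equal area, so the surface average reduces to a one-dimensional average over $x$; in units where $kQ = 1$, the average comes out $1/d$ for a charge outside the sphere and $1/R$ for a charge inside:

```python
import numpy as np

def average_potential(R, d, n=100_000):
    """Average of 1/|r - r_q| over a sphere of radius R, with a unit
    charge at distance d from the centre (units with kQ = 1).
    Equal-width belts in x carry equal area, so averaging over x suffices."""
    x = -R + (np.arange(n) + 0.5) * (2 * R / n)   # belt midpoints in (-R, R)
    return np.mean(1.0 / np.sqrt(R**2 + d**2 - 2 * d * x))

print(average_potential(1.0, 3.0))  # ≈ 1/3 = 1/d  (charge outside the sphere)
print(average_potential(1.0, 0.4))  # ≈ 1.0 = 1/R  (charge inside the sphere)
```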
The average of the electric field strength vector
The next problem is the averaging of the electric field vector. On the basis of our foregoing results this can be done in two ways. The first method is based on the idea that the field vector is in close connection with the potential, or more exactly with the rate of its spatial variation. This connection can be described in a mathematical form: if from a location where the electric field vector is E we move on with a small x displacement vector, the change in the potential will be:
(In the above expression the dot between the two vector quantities means the dot product of the two vectors, and the negative sign expresses that, moving in the direction of the field, the potential decreases.)
From all these it follows that if we know the potential in every location in an electrostatic field, we can calculate the field vector with (5) as well. In a given place the components of the field vector can be calculated by moving out from the given location in the direction of all three co-ordinate axes by s displacement and divide the change in the potential with s. The resulting three quantities are just the field vector components (except for the sense).
It is not the field vector at a point we would like to determine now but rather the average field vector. Since the components of the average field vector are equal to the averages of the corresponding components, and these can be expressed by the energy variation of the whole sphere due to displacement, we can achieve our aim with equations (4) and (5). The x component of the average field vector can be calculated by displacing the sphere along the axis x by s, calculating the variation of the average potential in this position and dividing it by s. (We exploit here the fact that the change in the average and the average change in the potential are identical.)
Let us see what equation (4) and the above procedure tell us of the average field vector. If the point charge is inside the sphere, we can move the sphere in any direction but the average potential will not change, since it depends only on R and is independent of . For this reason, the average field vector on the surface of the sphere is nil. If the charge Q is outside the sphere, the situation is a bit more difficult. However, in this case we can exploit the fact that the average potential is independent of R, so the sphere can be point-like as well. But in this case, the potential energy would be the well-known interaction energy of two point charges, and the field vector derived from its variation would be equal to the Coulomb field in the centre of the sphere. Summarizing briefly: the average of the field vector of a point charge on a sphere can be calculated as:
Superposition
So far we have dealt with one point charge and the average of its electric field on an imaginary sphere. What is the case if the electrostatic field is generated by several point charges (or a continuous distribution of electric charge)? The resultant electric field is the vector sum (superposition) of the fields of the single point-like charges, and the average of the sum is the sum of the averages of the single constituent fields.
In the previous section we saw that - in the case of a point charge - the average field vector is equal to the field vector detectable in the centre of the sphere. From the superposition principle it follows that the average of the resultant field vector of an arbitrarily complex electrostatic field is equal to the electric field vector in the centre of the sphere generated by the charges residing outside of the sphere. The charges residing inside the sphere do not add to the average. This is the small modification that was not included in the guess-based first form of our theorem. It is interesting that when summing over directed surface elements (treated as vector quantities), the situation is just the opposite: the contribution of the charges outside of the sphere is nil and the result depends only on the charges inside the sphere; only these determine the electric flux going through the surface.
Action-reaction
As a conclusion we show that the result obtained for the average of the electric field vector could have been determined more simply without any reference of the potential, with a straight method valid for the most general fields without any symmetry.
Let us suppose that there is an actually existing, evenly charged spherical surface and its overall charge equals the unit charge. In this case, the average of the field vector just equals the force exerted on the sphere by the whole charge system. This force is - according to the action-reaction law - the same as the force exerted on the charge system by the electrostatic field of the sphere with opposite direction. As the sphere is evenly charged, its field corresponds to the field of a point charge (outside of the sphere). So the force is that of a point charge of unit charge exerted on the charges outside of the sphere.
Let us apply the action-reaction principle again: the resultant force exerted on the charges outside the sphere is equal (apart from its direction) to the force exerted on a point-like unit charge at the centre of the sphere, and this equals the electric field vector produced by the outer charges at the centre of the sphere. What about the inner charges? We leave this question to the reader.
Throughout these considerations we exploited only the inverse-square law of the field of a point charge and the superposition principle. The Newtonian gravitational field also possesses these characteristics, and with some further considerations the results are valid for a magnetic field as well. (Although there are no isolated magnetic poles in Nature, the magnetic field can be described as if there were separate magnetic poles, obeying the same kind of laws as in electrostatics.) Therefore all of our statements hold in the same form for gravitational and magnetostatic fields, and they are applicable to any `Coulomb-like' vector field that may be discovered in the future.
# Representing Rational Numbers With Python Fractions
by Bartosz Zaczyński Oct 11, 2021 basics python
The fractions module in Python is arguably one of the most underused elements of the standard library. Even though it may not be well-known, it’s a useful tool to have under your belt because it can help address the shortcomings of floating-point arithmetic in binary. That’s essential if you plan to work with financial data or if you require infinite precision for your calculations.
Towards the end of this tutorial, you’ll see a few hands-on examples where fractions are the most suitable and elegant choice. You’ll also learn about their weaknesses and how to make the best use of them along the way.
In this tutorial, you’ll learn how to:
• Convert between decimal and fractional notation
• Perform rational number arithmetic
• Approximate irrational numbers
• Represent fractions exactly with infinite precision
• Know when to choose Fraction over Decimal or float
The majority of this tutorial goes over the fractions module, which in itself doesn’t require in-depth Python knowledge other than an understanding of its numeric types. However, you’ll be in a good place to work through all the code examples that follow if you’re familiar with more advanced concepts such as Python’s built-in collections module, itertools module, and generators. You should already be comfortable with these topics if you want to make the most out of this tutorial.
## Decimal vs Fractional Notation
Let’s take a walk down memory lane to bring back your school knowledge of numbers and avoid possible confusion. There are four concepts at play here:
1. Types of numbers in mathematics
2. Numeral systems
3. Notations of numbers
4. Numeric data types in Python
You’ll get a quick overview of each of these now to better understand the purpose of the Fraction data type in Python.
### Classification of Numbers
If you don’t remember the classification of numbers, here’s a quick refresher:
There are many more types of numbers in mathematics, but these are the most relevant in day-to-day life. At the very top, you’ll find complex numbers that include imaginary and real numbers. Real numbers are, in turn, comprised of rational and irrational numbers. Finally, rational numbers contain integers and natural numbers.
### Numeral Systems and Notations
There have been various systems of expressing numbers visually over the centuries. Today, most people use a positional numeral system based on Hindu-Arabic symbols. You can choose any base or radix for such a system. However, while people prefer the decimal system (base-10), computers work best in the binary system (base-2).
Within the decimal system itself, you can represent some numbers using alternative notations:
• Decimal: 0.75
• Fractional: ¾
Neither of these is better or more precise than the other. Expressing a number in decimal notation is perhaps more intuitive because it resembles a percentage. Comparing decimals is also more straightforward since they already have a common denominator—the base of the system. Finally, decimal numbers can communicate precision by keeping the trailing and leading zeros.
On the other hand, fractions are more convenient in performing symbolic algebra by hand, which is why they’re mainly used in school. But can you recall the last time you used fractions? If you can’t, then that’s because decimal notation is central in calculators and computers nowadays.
The fractional notation is typically associated with rational numbers only. After all, the very definition of a rational number states that you can express it as a quotient, or a fraction, of two integers as long as the denominator is nonzero. However, that’s not the whole story when you factor in infinite continued fractions that can approximate irrational numbers:
Irrational numbers always have a non-terminating and non-repeating decimal expansion. For example, the decimal expansion of pi (π) never runs out of digits that seem to have a random distribution. If you were to plot their histogram, then each digit would have a roughly similar frequency.
On the other hand, most rational numbers have a terminating decimal expansion. However, some can have an infinite recurring decimal expansion with one or more digits repeated over a period. The repeated digits are commonly denoted with an ellipsis (0.33333…) in the decimal notation. Regardless of their decimal expansion, rational numbers such as the number representing one-third always look elegant and compact in the fractional notation.
### Numeric Data Types in Python
Numbers with infinite decimal expansions cause rounding errors when stored as a floating-point data type in computer memory, which itself is finite. To make matters worse, it’s often impossible to exactly represent numbers with terminating decimal expansion in binary!
That’s known as the floating-point representation error, which affects all programming languages, including Python. Every programmer faces this problem sooner or later. For example, you can’t use float in applications like banking, where numbers must be stored and acted on without any loss of precision.
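You can see the representation error directly in the REPL (standard Python float behavior, no extra libraries needed):

```python
# The float 0.1 is stored as the nearest binary fraction, so
# arithmetic drifts away from the exact decimal result:
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False
```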
Python’s Fraction is one of the solutions to these obstacles. While it represents a rational number, the name Rational already represents an abstract base class in the numbers module. The numbers module defines a hierarchy of abstract numeric data types to model the classification of numbers in mathematics:
Fraction is a direct and concrete subclass of Rational, which provides the complete implementation for rational numbers in Python. Integral types like int and bool also derive from Rational, but those are more specific.
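You can verify where Fraction sits in the numeric tower yourself with a few issubclass() checks against the abstract base classes from the numbers module:

```python
from fractions import Fraction
from numbers import Rational, Real, Integral

# Fraction slots into the tower just below Integral.
print(issubclass(Fraction, Rational))  # True
print(issubclass(Fraction, Real))      # True (every rational is real)
print(issubclass(Fraction, Integral))  # False
print(issubclass(int, Rational))       # True (ints are rationals too)
```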
With this theoretical background out of the way, it’s time to create your first fraction!
## Creating a Python Fraction From Different Data Types
Unlike int or float, fractions aren’t a built-in data type in Python, which means you have to import a corresponding module from the standard library to use them. However, once you get past this extra step, you’ll find that fractions just represent another numeric type that you can freely mix with other numbers and mathematical operators in arithmetic expressions.
There are a few ways to create a fraction in Python, and they all involve using the Fraction class. It’s the only thing you ever need to import from the fractions module. The class constructor accepts zero, one, or two arguments of various types:
>>> from fractions import Fraction
>>> print(Fraction())
0
>>> print(Fraction(0.75))
3/4
>>> print(Fraction(3, 4))
3/4
When you call the class constructor without arguments, it creates a new fraction representing the number zero. A single-argument flavor attempts to convert the value of another data type to a fraction. Passing in a second argument makes the constructor expect a numerator and a denominator, which must be instances of the Rational class or its descendants.
Note that you must print() a fraction to reveal its human-friendly textual representation with the slash character (/) between the numerator and the denominator. If you don’t, it will fall back to a slightly more explicit string representation made up of a piece of Python code. You’ll learn how to convert fractions to strings later in this tutorial.
### Rational Numbers
When you call the Fraction() constructor with two arguments, they must both be rational numbers such as integers or other fractions. If either the numerator or denominator isn’t a rational number, then you won’t be able to create a new fraction:
>>> Fraction(3, 4.0)
Traceback (most recent call last):
...
raise TypeError("both arguments should be "
TypeError: both arguments should be Rational instances
You get a TypeError instead. Although 4.0 is a rational number in mathematics, it isn’t considered as such by Python. That’s because the value is stored as a floating-point data type, which models real numbers in general rather than rational numbers specifically.
Similarly, you can’t create a fraction whose denominator is zero because that would lead to a division by zero, which is undefined and has no meaning in mathematics:
>>> Fraction(3, 0)
Traceback (most recent call last):
...
raise ZeroDivisionError('Fraction(%s, 0)' % numerator)
ZeroDivisionError: Fraction(3, 0)
Python raises the ZeroDivisionError. However, when you specify a valid numerator and a valid denominator, they’ll be automatically normalized for you as long as they have a common divisor:
>>> Fraction(9, 12) # GCD(9, 12) = 3
Fraction(3, 4)
>>> Fraction(0, 12) # GCD(0, 12) = 12
Fraction(0, 1)
Both magnitudes get simplified by their greatest common divisor (GCD), which happens to be three and twelve, respectively. The normalization also takes the minus sign into account when you define negative fractions:
>>> -Fraction(9, 12)
Fraction(-3, 4)
>>> Fraction(-9, 12)
Fraction(-3, 4)
>>> Fraction(9, -12)
Fraction(-3, 4)
Whether you put the minus sign before the constructor or before either of the arguments, for consistency, Python will always associate the sign of a fraction with its numerator. There’s currently a way of disabling this behavior, but it’s undocumented and could get removed in the future.
You’ll typically define fractions as a quotient of two integers. Whenever you provide only one integer, Python will turn that number into an improper fraction by assuming the denominator is 1:
>>> Fraction(3)
Fraction(3, 1)
Conversely, if you skip both arguments, the numerator will be 0:
>>> Fraction()
Fraction(0, 1)
You don’t always have to provide integers for the numerator and denominator, though. The documentation states that they can be any rational numbers, including other fractions:
>>> one_third = Fraction(1, 3)
>>> Fraction(one_third, 3)
Fraction(1, 9)
>>> Fraction(3, one_third)
Fraction(9, 1)
>>> Fraction(one_third, one_third)
Fraction(1, 1)
In each case, you’ll get a fraction as a result, even though they sometimes represent integer values like 9 and 1. Later, you’ll see how to convert fractions to other data types.
What happens if you give the Fraction constructor a single argument that also happens to be a fraction? Try this code to find out:
>>> Fraction(one_third) == one_third
True
>>> Fraction(one_third) is one_third
False
You’re getting the same value, but it’s a distinct copy of the input fraction. That’s because calling the constructor always produces a new instance, which coincides with the fact that fractions are immutable, just like other numeric types in Python.
### Floating-Point and Decimal Numbers
So far, you’ve only used rational numbers to create fractions. After all, the two-argument version of the Fraction constructor requires that both numbers are Rational instances. However, that’s not the case with the single-argument constructor, which will happily accept any real number and even a non-numeric value such as a string.
Two prime examples of real number data types in Python are float and decimal.Decimal. While only the latter can represent numbers like 0.1 exactly, neither can store a number like one-third without rounding, and both can approximate irrational numbers just fine. If you were wondering, Fraction fits alongside them in the numeric tower since it’s also a descendant of Real.
Unlike float or Fraction, the Decimal class isn’t formally registered as a subclass of numbers.Real despite implementing its methods:
>>> from numbers import Real
>>> issubclass(float, Real)
True
>>> from fractions import Fraction
>>> issubclass(Fraction, Real)
True
>>> from decimal import Decimal
>>> issubclass(Decimal, Real)
False
That’s intentional since decimal floating-point numbers don’t play well with their binary counterparts:
>>> from decimal import Decimal
>>> Decimal("0.75") - 0.25
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'decimal.Decimal' and 'float'
On the other hand, replacing Decimal with an equivalent Fraction would yield a float result in the example above.
Before Python 3.2, you could only create fractions from real numbers using the .from_float() and .from_decimal() class methods. While not deprecated, they’re redundant today because the Fraction constructor can take both data types directly as an argument:
>>> from decimal import Decimal
>>> Fraction(0.75) == Fraction(Decimal("0.75"))
True
Whether you make Fraction objects from float or Decimal objects, their values are the same. You’ve seen a fraction created from a floating-point value before:
>>> print(Fraction(0.75))
3/4
The result is the same number expressed in fractional notation. However, this code works as expected only by coincidence. In most cases, you won’t get the intended value due to the representation error that affects float numbers, whether they’re rational or not:
>>> print(Fraction(0.1))
3602879701896397/36028797018963968
Whoa! What happened here?
Let’s break it down in slow motion. The previous number, which can be represented as either 0.75 or ¾, can also be expressed as the sum of ½ and ¼, which are negative powers of 2 that have exact binary representations. On the other hand, the number ⅒ can only be approximated with a non-terminating repeating expansion of binary digits:
Because the binary string must eventually end due to finite memory, its tail gets rounded. By default, Python displays only the shortest decimal string that round-trips to the same float value, but you can format a floating-point number with an arbitrary number of digits if you want to:
>>> str(0.1)
'0.1'
>>> format(0.1, ".17f")
'0.10000000000000001'
>>> format(0.1, ".18f")
'0.100000000000000006'
>>> format(0.1, ".19f")
'0.1000000000000000056'
>>> format(0.1, ".55f")
'0.1000000000000000055511151231257827021181583404541015625'
When you pass a float or a Decimal number to the Fraction constructor, it calls their .as_integer_ratio() method to obtain a tuple of two coprime integers whose ratio is exactly equal to the input argument. These two numbers are then assigned to the numerator and denominator of your new fraction.
Now, you can piece together where these two big numbers came from:
>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)
If you pull out your pocket calculator and punch these numbers in, then you’ll get back 0.1 as a result of the division. However, if you were to divide them by hand or use a tool like WolframAlpha, then you’d end up with those fifty-five decimal places you saw earlier.
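You can reproduce those fifty-five decimal places without leaving Python by doing the division in Decimal arithmetic with enough precision (a sketch using the standard decimal module):

```python
from decimal import Decimal, localcontext

# The exact integer ratio stored for the float 0.1:
num, den = (0.1).as_integer_ratio()

with localcontext() as ctx:
    ctx.prec = 60  # enough significant digits to hold the exact quotient
    exact = Decimal(num) / Decimal(den)

print(exact)
# 0.1000000000000000055511151231257827021181583404541015625
```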
There is a way to find close approximations of your fraction that have more down-to-earth values. You can use .limit_denominator(), for example, which you’ll learn more about later in this tutorial:
>>> one_tenth = Fraction(0.1)
>>> one_tenth
Fraction(3602879701896397, 36028797018963968)
>>> one_tenth.limit_denominator()
Fraction(1, 10)
>>> one_tenth.limit_denominator(max_denominator=int(1e16))
Fraction(1000000000000000, 9999999999999999)
This might not always give you the best approximation, though. The bottom line is that you should never try to create fractions straight from real numbers such as float if you want to avoid the rounding errors that will likely come up. Even the Decimal class might be susceptible to that if you’re not careful enough.
Anyway, fractions let you communicate the decimal notation most accurately with a string in their constructor.
### Strings
There are two string formats that the Fraction constructor accepts, which correspond to decimal and fractional notation:
>>> Fraction("0.1")
Fraction(1, 10)
>>> Fraction("1/10")
Fraction(1, 10)
Both notations can optionally have a plus sign (+) or a minus sign (-), while the decimal one can additionally include the exponent in case you want to use the scientific notation:
>>> Fraction("-2e-3")
Fraction(-1, 500)
>>> Fraction("+2/1000")
Fraction(1, 500)
The only difference between the two results is that one is negative and one is positive.
When you use the fractional notation, you can’t use whitespace characters around the slash character (/):
>>> Fraction("1 / 10")
Traceback (most recent call last):
...
raise ValueError('Invalid literal for Fraction: %r' %
ValueError: Invalid literal for Fraction: '1 / 10'
To find out exactly which strings are valid or not, you can explore the regular expression in the module’s source code. Remember to create fractions from a string or a correctly instantiated Decimal object rather than a float value so that you can retain maximum precision.
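For example, if you’re parsing user input, you might wrap the constructor in a small helper that turns invalid literals into None instead of raising (parse_fraction is a hypothetical name, not part of the fractions module):

```python
from fractions import Fraction

def parse_fraction(text):
    """Return a Fraction for a valid literal, or None (hypothetical helper)."""
    try:
        # .strip() removes outer whitespace, but inner spaces around
        # the slash are still rejected by the Fraction constructor.
        return Fraction(text.strip())
    except (ValueError, ZeroDivisionError):
        return None

print(parse_fraction(" 1/10 "))   # 1/10
print(parse_fraction("1 / 10"))   # None
print(parse_fraction("3/0"))      # None
```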
Now that you’ve created a few fractions, you might be wondering what they can do for you other than group two numbers. That’s a great question!
## Inspecting a Python Fraction
The Rational abstract base class defines two read-only attributes for accessing a fraction’s numerator and denominator:
>>> from fractions import Fraction
>>> half = Fraction(1, 2)
>>> half.numerator
1
>>> half.denominator
2
Since fractions are immutable, you can’t change their internal state:
>>> half.numerator = 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: can't set attribute
If you try assigning a new value to one of the fraction’s attributes, then you’ll get an error. In fact, you have to create a new fraction whenever you’d like to modify one. For example, to invert your fraction, you could call .as_integer_ratio() to get a tuple and then use the slicing syntax to reverse its elements:
>>> Fraction(*half.as_integer_ratio()[::-1])
Fraction(2, 1)
The unary star operator (*) unpacks your reversed tuple and relays its elements to the Fraction constructor.
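A simpler route to the reciprocal relies on ordinary division, since dividing 1 by a fraction flips it:

```python
from fractions import Fraction

half = Fraction(1, 2)

# True division of an int by a Fraction yields a new Fraction.
print(1 / half)                  # 2
print(repr(1 / half))            # Fraction(2, 1)
print(repr(1 / Fraction(3, 4)))  # Fraction(4, 3)
```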
Another useful method that comes with every fraction lets you find the closest rational approximation to a number given in the decimal notation. It’s the .limit_denominator() method, which you’ve already touched on earlier in this tutorial. You can optionally request the maximum denominator for your approximation:
>>> pi = Fraction("3.141592653589793")
>>> pi
Fraction(3141592653589793, 1000000000000000)
>>> pi.limit_denominator(20_000)
Fraction(62813, 19994)
>>> pi.limit_denominator(100)
Fraction(311, 99)
>>> pi.limit_denominator(10)
Fraction(22, 7)
The initial approximation might not be the most convenient to use, but it is the most faithful. This method can also help you recover a rational number stored as a floating-point data type. Remember that float may not represent all rational numbers exactly, even when they have terminating decimal expansions:
>>> pi = Fraction(3.141592653589793)
>>> pi
Fraction(884279719003555, 281474976710656)
>>> pi.limit_denominator()
Fraction(3126535, 995207)
>>> pi.limit_denominator(10)
Fraction(22, 7)
You’ll notice that .limit_denominator() returns a different result here than in the previous code block, even though the float instance looks the same as the string literal that you passed to the constructor before! Later, you’ll explore an example of using .limit_denominator() to find approximations of irrational numbers.
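Here’s a preview of that idea, tightening rational approximations of π (the float math.pi) by gradually allowing bigger denominators:

```python
import math
from fractions import Fraction

pi = Fraction(math.pi)  # the exact integer ratio behind the float value

for max_den in (10, 100, 1_000, 10_000):
    approx = pi.limit_denominator(max_den)
    error = abs(float(approx) - math.pi)
    print(f"{str(approx):>8}  error < {error:.2e}")
```

The denominator 10 gives the famous 22/7, and by 10,000 you reach 355/113, which is accurate to six decimal places.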
## Converting a Python Fraction to Other Data Types
You’ve learned how to create fractions from the following data types:
• str
• int
• float
• decimal.Decimal
• fractions.Fraction
What about the opposite? How do you convert a Fraction instance back to these types? You’ll find out in this section.
### Floating-Point and Integer Numbers
Converting between native data types in Python usually involves calling one of the built-in functions such as int() or float() on an object. These conversions work as long as the object implements the corresponding special methods such as .__int__() or .__float__(). Fractions happen to inherit only the latter from the Rational abstract base class:
>>> from fractions import Fraction
>>> three_quarters = Fraction(3, 4)
>>> float(three_quarters)
0.75
>>> three_quarters.__float__() # Don't call special methods directly
0.75
>>> three_quarters.__int__()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Fraction' object has no attribute '__int__'
You’re not supposed to call special methods on objects directly, but it’s helpful for demonstration purposes. Here, you’ll notice that fractions implement only .__float__() and not .__int__().
When you investigate the source code, you’ll notice that the .__float__() method conveniently divides a fraction’s numerator by its denominator to get a floating-point number:
>>> three_quarters.numerator / three_quarters.denominator
0.75
Bear in mind that turning a Fraction instance into a float instance will likely result in a lossy conversion, meaning you might end up with a number that’s slightly off:
>>> float(Fraction(3, 4)) == Fraction(3, 4)
True
>>> float(Fraction(1, 3)) == Fraction(1, 3)
False
>>> float(Fraction(1, 10)) == Fraction(1, 10)
False
Although fractions don’t provide the implementation for the integer conversion, all real numbers can be truncated, which is a fall-back for the int() function:
>>> fraction = Fraction(14, 5)
>>> int(fraction)
2
>>> import math
>>> math.trunc(fraction)
2
>>> fraction.__trunc__() # Don't call special methods directly
2
You’ll discover a few other related methods in a section about rounding fractions later on.
### Decimal Numbers
If you try creating a Decimal number from a Fraction instance, then you’ll quickly find out that such a direct conversion isn’t possible:
>>> from decimal import Decimal
>>> Decimal(Fraction(3, 4))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: conversion from Fraction to Decimal is not supported
When you try, you get a TypeError. Since a fraction represents a division, however, you can bypass that limitation by wrapping only one of the numbers with Decimal and dividing them manually:
>>> fraction = Fraction(3, 4)
>>> fraction.numerator / Decimal(fraction.denominator)
Decimal('0.75')
Unlike float but similar to Fraction, the Decimal data type is free from the floating-point representation error. So, when you convert a rational number that can’t be represented exactly in binary floating-point, you’ll retain the number’s precision:
>>> fraction = Fraction(1, 10)
>>> decimal = fraction.numerator / Decimal(fraction.denominator)
>>> fraction == decimal
True
>>> fraction == 0.1
False
>>> decimal == 0.1
False
At the same time, rational numbers with non-terminating repeating decimal expansion will lead to precision loss when converted from fractional to decimal notation:
>>> fraction = Fraction(1, 3)
>>> decimal = fraction.numerator / Decimal(fraction.denominator)
>>> fraction == decimal
False
>>> decimal
Decimal('0.3333333333333333333333333333')
That’s because there’s an infinite number of threes in the decimal expansion of one-third, or Fraction(1, 3), while the Decimal type has a fixed precision. By default, it stores only twenty-eight decimal places. You can adjust it if you want, but it’s going to be finite nevertheless.
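You can bundle this conversion into a small helper and control the precision locally instead of changing the global context (fraction_to_decimal is a hypothetical name for illustration):

```python
from decimal import Decimal, localcontext
from fractions import Fraction

def fraction_to_decimal(fraction, precision=28):
    """Convert a Fraction to Decimal with the given precision (hypothetical helper)."""
    with localcontext() as ctx:
        ctx.prec = precision  # significant digits, not decimal places
        return fraction.numerator / Decimal(fraction.denominator)

print(fraction_to_decimal(Fraction(3, 4)))     # 0.75
print(fraction_to_decimal(Fraction(1, 3), 5))  # 0.33333
```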
### Strings
The string representation of fractions reveals their values using the familiar fractional notation, while their canonical representation outputs a piece of Python code comprised of a call to the Fraction constructor:
>>> one_third = Fraction(1, 3)
>>> str(one_third)
'1/3'
>>> repr(one_third)
'Fraction(1, 3)'
Whether you use str() or repr(), the result is a string, but their contents are different.
Unlike other numeric types, fractions don’t support string formatting in Python:
>>> from decimal import Decimal
>>> format(Decimal("0.3333333333333333333333333333"), ".2f")
'0.33'
>>> format(Fraction(1, 3), ".2f")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported format string passed to Fraction.__format__
If you try, then you get a TypeError. That might be an issue if you’d like to refer to a Fraction instance in a string template to fill out the placeholders, for example. On the other hand, you can quickly fix this by converting fractions to floating-point numbers, especially as you don’t need to care about precision in such a scenario.
If you’re working in a Jupyter Notebook, then you might want to render LaTeX formulas based on your fractions instead of their regular textual representation. To do that, you must monkey patch the Fraction data type by adding a new method, ._repr_pretty_(), which Jupyter Notebook recognizes:
from fractions import Fraction
from IPython.display import display, Math
Fraction._repr_pretty_ = lambda self, *args: \
display(Math(rf"\frac{{{self.numerator}}}{{{self.denominator}}}"))
It wraps a piece of LaTeX markup in a Math object and sends it to your notebook’s rich display, which can render the markup using the MathJax library:
The next time you evaluate a notebook cell that contains a Fraction instance, it’ll draw a beautiful math formula instead of printing text.
## Performing Rational Number Arithmetic on Fractions
As mentioned before, you can use fractions in arithmetic expressions consisting of other numeric types. Fractions will interoperate with most numeric types except for decimal.Decimal, which has its own set of rules. Moreover, the data type of the other operand, regardless of whether it lies to the left or the right of your fraction, will determine the type of your arithmetic operation’s result.
You can add two or more fractions without having to think about reducing them to their common denominator:
>>> from fractions import Fraction
>>> Fraction(1, 2) + Fraction(2, 3) + Fraction(3, 4)
Fraction(23, 12)
The result is a new fraction that’s the sum of all the input fractions. The same will happen when you add up integers and fractions:
>>> Fraction(1, 2) + 3
Fraction(7, 2)
However, as soon as you start mixing fractions with non-rational numbers—that is, numbers that aren’t subclasses of numbers.Rational—then your fraction will be converted to that type first before being added:
>>> Fraction(3, 10) + 0.1
0.4
>>> float(Fraction(3, 10)) + 0.1
0.4
You get the same result whether or not you explicitly use float(). That conversion may result in some loss of precision since your fraction as well as the outcome are now stored in floating-point representation. Even though the number 0.4 seems right, it’s not exactly equal to the fraction 4/10.
### Subtraction
Subtracting fractions is no different than adding them. Python will find the common denominator for you:
>>> Fraction(3, 4) - Fraction(2, 3) - Fraction(1, 2)
Fraction(-5, 12)
>>> Fraction(4, 10) - 0.1
0.30000000000000004
This time, the precision loss is so significant that it’s visible at a glance. Notice the long stream of zeros followed by a digit 4 at the end of the decimal expansion. It’s the result of rounding a value that would otherwise require an infinite number of binary digits.
### Multiplication
When you multiply two fractions, their numerators and denominators get multiplied element-wise, and the resulting fraction gets automatically reduced if necessary:
>>> Fraction(1, 4) * Fraction(3, 2)
Fraction(3, 8)
>>> Fraction(1, 4) * Fraction(4, 5) # The result is 4/20
Fraction(1, 5)
>>> Fraction(1, 4) * 3
Fraction(3, 4)
>>> Fraction(1, 4) * 3.0
0.75
Again, depending on the type of the other operand, you’ll end up with a different data type in the result.
### Division
There are two division operators in Python, and fractions support both of them:
1. True division: /
2. Floor division: //
The true division results in another fraction, while a floor division always returns a whole number with the fractional part truncated:
>>> Fraction(7, 2) / Fraction(2, 3)
Fraction(21, 4)
>>> Fraction(7, 2) // Fraction(2, 3)
5
>>> Fraction(7, 2) / 2
Fraction(7, 4)
>>> Fraction(7, 2) // 2
1
>>> Fraction(7, 2) / 2.0
1.75
>>> Fraction(7, 2) // 2.0
1.0
Note that the floor division’s result isn’t always an integer! The result may end up a float depending on what data type you use together with the fraction. Fractions also support the modulo operator (%) as well as the divmod() function, which might help in creating mixed fractions from improper ones:
>>> def mixed(fraction):
... floor, rest = divmod(fraction.numerator, fraction.denominator)
... return f"{floor} and {Fraction(rest, fraction.denominator)}"
...
>>> mixed(Fraction(22, 7))
'3 and 1/7'
Instead of generating a string like in the output above, you could update the function to return a tuple comprised of the whole part and the fractional remainder. Go ahead and try modifying the return value of the function to see the difference.
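A tuple-returning variant might look like this (mixed_tuple is a hypothetical name, not part of the fractions module):

```python
from fractions import Fraction

def mixed_tuple(fraction):
    """Split a fraction into (whole, remainder) - hypothetical helper."""
    whole, rest = divmod(fraction.numerator, fraction.denominator)
    return whole, Fraction(rest, fraction.denominator)

print(mixed_tuple(Fraction(22, 7)))   # (3, Fraction(1, 7))
# divmod() uses floor semantics, so negative fractions round down:
print(mixed_tuple(Fraction(-22, 7)))  # (-4, Fraction(6, 7))
```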
### Exponentiation
You can raise fractions to a power with the binary exponentiation operator (**) or the built-in pow() function. You can also use fractions themselves as exponents. Go back to your Python interpreter now and start exploring how to raise fractions to a power:
>>> Fraction(3, 4) ** 2
Fraction(9, 16)
>>> Fraction(3, 4) ** (-2)
Fraction(16, 9)
>>> Fraction(3, 4) ** 2.0
0.5625
You’ll notice that you can use both positive and negative exponent values. When the exponent isn’t a Rational number, your fraction is automatically converted to float before proceeding.
Things get more complicated when the exponent is a Fraction instance. Since fractional powers typically produce irrational numbers, both operands are converted to float unless the base and the exponent are whole numbers:
>>> 2 ** Fraction(2, 1)
4
>>> 2.0 ** Fraction(2, 1)
4.0
>>> Fraction(3, 4) ** Fraction(1, 2)
0.8660254037844386
>>> Fraction(3, 4) ** Fraction(2, 1)
Fraction(9, 16)
The only time you get a fraction as a result is when the denominator of the exponent is equal to one and you’re raising a Fraction instance.
## Rounding a Python Fraction
There are many strategies for rounding numbers in Python and even more in mathematics. You can use the same set of built-in and module-level functions on fractions as on decimal numbers. They let you round a fraction to a nearby integer or make a new fraction rounded to fewer decimal places.
You already learned about a crude rounding method when you converted fractions to an int, which truncated the fractional part leaving only the whole part, if any:
>>> from fractions import Fraction
>>> int(Fraction(22, 7))
3
>>> import math
>>> math.trunc(Fraction(22, 7))
3
>>> math.trunc(-Fraction(22, 7))
-3
In this case, calling int() is equivalent to calling math.trunc(), which rounds positive fractions down and negative fractions up. These two operations are known as floor and ceiling, respectively. You can use both directly if you want to:
>>> math.floor(-Fraction(22, 7))
-4
>>> math.floor(Fraction(22, 7))
3
>>> math.ceil(-Fraction(22, 7))
-3
>>> math.ceil(Fraction(22, 7))
4
Compare the results of math.floor() and math.ceil() with your earlier calls to math.trunc(). Each function has a different rounding bias, which may affect the statistical properties of your rounded data set. Fortunately, there’s a strategy known as rounding half to even, which is less biased than truncation, floor, or ceiling.
Essentially, it rounds your fraction to the nearest whole number while preferring the closest even number for the equidistant halves. You can call round() to take advantage of this strategy:
>>>
>>> round(Fraction(3, 2)) # 1.5
2
>>> round(Fraction(5, 2)) # 2.5
2
>>> round(Fraction(7, 2)) # 3.5
4
Notice how those fractions get rounded up or down depending on where the closest even number is? Naturally, this rule only applies to ties when the distance to the nearest whole number on the left is the same as the one on the right. Otherwise, the rounding direction is based on the shortest distance to a whole number regardless of whether it’s even or not.
You can optionally provide the round() function with the second parameter, which indicates how many decimal places you want to retain. When you do, you’ll always get a Fraction rather than an integer, even when you request zero digits:
>>>
>>> fraction = Fraction(22, 7) # 3.142857142857143
>>> round(fraction, 0)
Fraction(3, 1)
>>> round(fraction, 1) # 3.1
Fraction(31, 10)
>>> round(fraction, 2) # 3.14
Fraction(157, 50)
>>> round(fraction, 3) # 3.143
Fraction(3143, 1000)
However, notice the difference between calling round(fraction) and round(fraction, 0), which yields the same value but uses a different data type:
>>>
>>> round(fraction)
3
>>> round(fraction, 0)
Fraction(3, 1)
When you omit the second argument, round() will return the nearest integer. Otherwise, you’ll get a reduced fraction whose denominator was originally a power of ten corresponding to the number of decimal digits you requested.
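To make that mechanic concrete, here’s a sketch that rebuilds the two-argument round() by hand; the manual_round() helper is a name made up for this illustration:

```python
from fractions import Fraction

def manual_round(fraction, ndigits):
    # Scale by 10**ndigits, round to the nearest integer (ties to even),
    # then put the result back over the same power of ten:
    scale = 10 ** ndigits
    return Fraction(round(fraction * scale), scale)

fraction = Fraction(22, 7)
print(manual_round(fraction, 2))  # 157/50
print(manual_round(fraction, 2) == round(fraction, 2))  # True
```

The Fraction constructor reduces 314/100 to 157/50 automatically, which is why the denominator is only *originally* a power of ten.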
## Comparing Fractions in Python
In real life, comparing numbers written in fractional notation can be more difficult than comparing numbers written in decimal notation because fractional notation consists of two values instead of just one. To make sense of those numbers, you typically reduce them to a common denominator and compare only their numerators. For example, try arranging the following fractions in ascending order according to their value:
• 2/3
• 5/8
• 8/13
It’s not as convenient as with decimal notation. Things get even worse with mixed notations. However, when you rewrite those fractions with a common denominator, sorting them becomes straightforward:
• 208/312
• 195/312
• 192/312
Because the denominators 3, 8, and 13 are pairwise coprime (no two of them share a factor greater than one), the smallest common denominator for all three fractions is their product, 312. Once you’ve converted all fractions to use their smallest common denominator, you can ignore the denominator and focus on comparing the numerators.
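You can reproduce those numbers in a few lines. This sketch uses math.lcm, which is available in Python 3.9 and later:

```python
from math import lcm
from fractions import Fraction

fractions = [Fraction(2, 3), Fraction(5, 8), Fraction(8, 13)]

# The least common multiple of the denominators is the smallest
# common denominator for all three fractions:
common = lcm(*(f.denominator for f in fractions))

# Scale each numerator up to that common denominator:
numerators = [f.numerator * (common // f.denominator) for f in fractions]

print(common)      # 312
print(numerators)  # [208, 195, 192]
```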
In Python, this works behind the scenes when you compare and sort Fraction objects:
>>>
>>> from fractions import Fraction
>>> Fraction(8, 13) < Fraction(5, 8)
True
>>> sorted([Fraction(2, 3), Fraction(5, 8), Fraction(8, 13)])
[Fraction(8, 13), Fraction(5, 8), Fraction(2, 3)]
Python can quickly sort the Fraction objects using the built-in sorted() function. Helpfully, all the comparison operators work as intended. You can even use them against other numeric types, except for complex numbers:
>>>
>>> Fraction(2, 3) < 0.625
False
>>> from decimal import Decimal
>>> Fraction(2, 3) < Decimal("0.625")
False
>>> Fraction(2, 3) < 3 + 2j
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'Fraction' and 'complex'
The comparison operators worked with floats and decimals, but you get an error when you try with the complex number 3 + 2j. This has to do with the fact that complex numbers don’t define a natural ordering relation, so you can’t compare them to anything—including fractions.
## Choosing Between Fraction, Decimal, and Float
If you need to pick just one thing to remember from reading this tutorial, then it should be when to choose Fraction over Decimal and float. All these numeric types have their use cases, so it’s good to understand their strengths and weaknesses. In this section, you’ll have a brief look at how numbers are represented in each of these three data types.
### Binary Floating-Point: float
The float data type should be your default choice for representing real numbers in most situations. For example, it’s suitable in science, engineering, and computer graphics, where execution speed is more important than precision. Hardly any program requires higher precision than you can get with a floating-point anyway.
The unparalleled speed of floating-point arithmetic stems from its implementation in hardware rather than software. Virtually all math coprocessors conform to the IEEE 754 standard, which describes how to represent numbers in binary floating-point. The downside of using the binary system is, as you guessed, the infamous representation error.
However, unless you have a specific reason to use a different numeric type, you should just stick to float or int if possible.
### Decimal Floating-Point and Fixed-Point: Decimal
There are times when using the binary system doesn’t provide enough precision for real numbers. One notable example is financial calculations, which involve dealing with very large and very small numbers at the same time. They also tend to repeat the same arithmetic operation over and over again, which could accumulate a significant rounding error.
You can store real numbers using decimal floating-point arithmetic to mitigate these problems and eliminate the binary representation error. It’s similar to float as it moves the decimal point around to accommodate larger or smaller magnitudes. However, it operates in the decimal system instead of in the binary.
Another strategy to increase numerical precision is fixed-point arithmetic, which allocates a specific number of digits for the decimal expansion. For example, a precision of up to four decimal places would require storing fractions as integers scaled up by a factor of 10,000. To recover the original fractions, they would be scaled down accordingly.
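The scaled-integer idea can be sketched in a few lines. This is an illustration of the fixed-point strategy described above, not how Python’s Decimal actually stores numbers, and the values 3.1416 and 2.7183 are arbitrary examples:

```python
SCALE = 10_000  # fixed precision of four decimal places

# Store 3.1416 and 2.7183 as integers scaled up by 10,000:
a = 31_416
b = 27_183

# Addition and subtraction stay exact in integer arithmetic:
total = a + b
print(total)          # 58599
print(total / SCALE)  # 5.8599

# Multiplication needs one rescaling step to stay at the same precision:
product = a * b // SCALE
print(product / SCALE)  # 8.5398
```

Note the truncation in the multiplication step: fixed-point arithmetic trades rounding control for simplicity.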
Python’s decimal.Decimal data type is a hybrid of decimal floating-point and fixed-point representations under the hood. It also follows these two standards:
1. General Decimal Arithmetic Specification (IBM)
2. Radix-Independent Floating-Point Arithmetic (IEEE 854-1987)
They’re emulated in software rather than hardware, making this data type much less efficient in terms of time and space than float. On the other hand, it can represent numbers with arbitrary yet finite precision, which you’re free to adjust. Note that you may still face round-off errors if an arithmetic operation exceeds the maximum number of decimal places.
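Adjusting that precision is a one-liner on the current context. A minimal sketch using the standard decimal module:

```python
from decimal import Decimal, getcontext

# Bump the default precision of 28 significant digits up to 50:
getcontext().prec = 50

one_third = Decimal(1) / Decimal(3)
print(one_third)  # 0.33333333333333333333333333333333333333333333333333
```

The division now carries fifty significant digits, but it’s still rounded at the end, so the representation remains finite.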
However, the safety buffer provided by the fixed precision today might become insufficient tomorrow. Consider hyperinflation or dealing with multiple currencies having vastly different rates, such as Bitcoin (0.000029 BTC) and Iranian Rial (42,105.00 IRR). If you want infinite precision, then use Fraction.
### Infinite Precision Rational Number: Fraction
Both the Fraction and Decimal types share a few similarities. They address the binary representation error, they’re implemented in software, and you can use them for monetary applications. Nevertheless, the primary use for fractions is to represent rational numbers, so they might be less convenient for storing money than decimals.
There are two advantages to using Fraction over Decimal. The first one is infinite precision bounded only by your available memory. This lets you represent rational numbers with non-terminating and recurring decimal expansion without any loss of information:
>>>
>>> from fractions import Fraction
>>> one_third = Fraction(1, 3)
>>> print(3 * one_third)
1
>>> from decimal import Decimal
>>> one_third = 1 / Decimal(3)
>>> print(3 * one_third)
0.9999999999999999999999999999
Multiplying 1/3 by 3 gives you exactly 1 in the fractional notation, but the result is rounded in the decimal notation. It carries twenty-eight significant digits, which is the default precision of the Decimal type.
Take a second look at another benefit of fractions, one which you already started learning about earlier. Unlike Decimal, fractions can interoperate with binary floating-point numbers:
>>>
>>> Fraction("0.75") - 0.25
0.5
>>> Decimal("0.75") - 0.25
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for -: 'decimal.Decimal' and 'float'
When you mix fractions with floats, you get a floating-point number as a result. On the other hand, if you try to mix fractions with a Decimal data type, then you’ll run into a TypeError.
## Studying a Python Fraction in Action
In this section, you’ll go through a few fun and practical examples of using the Fraction data type in Python. You might be surprised how handy fractions can be and simultaneously how undervalued they are. Get ready to dive right in!
### Approximating Irrational Numbers
Irrational numbers play an important role in mathematics, which is why they occur tangentially to many subfields such as arithmetic, calculus, and geometry. Some of the most famous ones that you might have heard of before are:
• The square root of two (√2)
• Archimedes’ constant (π)
• The golden ratio (φ)
• Euler’s number (e)
In the history of mathematics, pi (π) has been particularly interesting, which resulted in many attempts at finding accurate approximations for it.
While ancient philosophers had to go to great lengths, today you can use Python to find pretty good estimates of pi using Monte Carlo methods, such as the Buffon’s needle or similar. However, having only a rough approximation in the form of a convenient fraction should suffice in most day-to-day problems. Here’s how you can determine a quotient of two integers that give gradually better approximations of an irrational number:
from fractions import Fraction
from itertools import count

def approximate(number):
    history = set()
    for max_denominator in count(1):
        fraction = Fraction(number).limit_denominator(max_denominator)
        if fraction not in history:
            yield fraction
            history.add(fraction)
The function accepts an irrational number, converts it to a fraction, and finds a different fraction with fewer decimal digits. The Python set prevents yielding duplicate values by keeping historical data, and the itertools module’s count() iterator counts to infinity.
You can now use this function to find the first ten fractional approximations of pi:
>>>
>>> from itertools import islice
>>> import math
>>> for fraction in islice(approximate(math.pi), 10):
... print(f"{str(fraction):>7}", "→", float(fraction))
...
3 → 3.0
13/4 → 3.25
16/5 → 3.2
19/6 → 3.1666666666666665
22/7 → 3.142857142857143
179/57 → 3.1403508771929824
201/64 → 3.140625
223/71 → 3.140845070422535
245/78 → 3.141025641025641
267/85 → 3.1411764705882352
Nice! The rational number 22/7 is already quite close, which shows that pi can be approximated early on and isn’t particularly irrational after all. The islice() iterator stops the infinite iteration after receiving the requested ten values. Go ahead and play with this example by bumping up the number of results or finding approximations of other irrational numbers.
### Getting a Display’s Aspect Ratio
The aspect ratio of an image or a display is a quotient of its width to height that conveniently expresses proportions. It’s commonly used in film and digital media, while movie directors like to take advantage of the aspect ratio as an artistic measure. If you’ve ever been on a hunt for a new smartphone, then the specs might have mentioned a screen ratio such as 16:9, for example.
You can find out your computer monitor’s aspect ratio by measuring its width and height with Tkinter, which comes with the official Python distribution:
>>>
>>> import tkinter as tk
>>> window = tk.Tk()
>>> window.winfo_screenwidth()
2560
>>> window.winfo_screenheight()
1440
Note that if you have multiple monitors connected, then this code might not work as expected.
Calculating the aspect ratio is a matter of creating a fraction that will reduce itself:
>>>
>>> from fractions import Fraction
>>> Fraction(2560, 1440)
Fraction(16, 9)
There you go. The monitor has a 16:9 aspect ratio. However, if you’re on a laptop that has a smaller screen size, then your fraction might not work out at first, and you’ll need to limit its denominator accordingly:
>>>
>>> Fraction(1360, 768)
Fraction(85, 48)
>>> Fraction(1360, 768).limit_denominator(10)
Fraction(16, 9)
Keep in mind that if you’re dealing with a mobile device’s vertical screen, you should swap the dimensions so that the first one is greater than the following one. You can encapsulate this logic in a reusable function:
from fractions import Fraction

def aspect_ratio(width, height, max_denominator=10):
    if height > width:
        width, height = height, width
    ratio = Fraction(width, height).limit_denominator(max_denominator)
    return f"{ratio.numerator}:{ratio.denominator}"
This will ensure consistent aspect ratios regardless of the order of arguments:
>>>
>>> aspect_ratio(1080, 2400)
'20:9'
>>> aspect_ratio(2400, 1080)
'20:9'
Whether you’re looking at the measurements of a horizontal or a vertical screen, the aspect ratios are the same.
So far, width and height have been integers, but what about fractional values? For example, some Canon cameras have an APS-C crop sensor, whose dimensions are 22.2 mm by 14.8 mm. Fractions choke on floating-point and decimal numbers, but you can turn them into rational approximations:
>>>
>>> aspect_ratio(22.2, 14.8)
Traceback (most recent call last):
...
raise TypeError("both arguments should be "
TypeError: both arguments should be Rational instances
>>> aspect_ratio(Fraction("22.2"), Fraction("14.8"))
'3:2'
In this case, the aspect ratio turns out to be exactly 1.5 or 3:2, but many cameras use a slightly longer width for their sensors, which gives a ratio of 1.555… or 14:9. When you do the math, you’ll find out that it’s the arithmetic mean of the wide-format (16:9) and the four-thirds system (4:3), which is a compromise to let you display pictures acceptably well in both of these popular formats.
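You can verify that arithmetic-mean claim with exact fractions:

```python
from fractions import Fraction

wide = Fraction(16, 9)        # wide format
four_thirds = Fraction(4, 3)  # four-thirds system

# The arithmetic mean of the two ratios lands exactly on 14:9:
mean = (wide + four_thirds) / 2
print(mean)  # 14/9
```

Doing this with floats would produce 1.5555555555555556 and hide the clean 14:9 relationship that the exact arithmetic reveals.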
### Calculating the Exposure Value of a Photo
The standard format for embedding metadata in digital images, Exif (Exchangeable Image File Format), uses ratios to store multiple values. Some of the most important ratios describe the exposure of your photo:
• Aperture Value
• Exposure Time
• Exposure Bias
• Focal Length
• F-Stop
• Shutter Speed
The shutter speed is colloquially synonymous with the exposure time, but it’s stored as a fraction in the metadata using the APEX system based on a logarithmic scale. It means that a camera would take the reciprocal of your exposure time and then calculate the logarithm base 2 of it. So, for example, 1/200 of a second exposure time would be written as 7643856/1000000 to the file. Here’s how you can calculate it:
>>>
>>> from fractions import Fraction
>>> exposure_time = Fraction(1, 200)
>>> from math import log2, trunc
>>> precision = 1_000_000
>>> trunc(log2(Fraction(1, exposure_time)) * precision)
7643856
You could use Python fractions to recover the original exposure time if you manually read this metadata without the help of any external libraries:
>>>
>>> shutter_speed = Fraction(7643856, 1_000_000)
>>> Fraction(1, round(2 ** shutter_speed))
Fraction(1, 200)
When you combine the individual pieces of the puzzle—that is, the aperture, the shutter speed, and the ISO speed—you’ll be able to calculate a single exposure value (EV), which describes the average amount of captured light. You can then use it to derive a log mean of the luminance in the photographed scene, which is invaluable in post-processing and applying special effects.
The formula for calculating the exposure value is as follows:
from math import log2
def exposure_value(f_stop, exposure_time, iso_speed):
return log2(f_stop ** 2 / exposure_time) - log2(iso_speed / 100)
Keep in mind that it doesn’t take into account other factors such as the exposure bias or the flash-lamp that your camera might apply. Anyway, give it a try against some sample values:
>>>
>>> exposure_value(
... f_stop=Fraction(28, 5),
... exposure_time=Fraction(1, 750),
... iso_speed=400
... )
12.521600439723727
>>> exposure_value(f_stop=5.6, exposure_time=1/750, iso_speed=400)
12.521600439723727
You can use fractions or other numeric types for the input values. In this case, the exposure value is around +13, which is relatively bright. The picture was taken outside on a sunny day, albeit in the shade.
### Solving the Change-Making Problem
You can use fractions to tackle computer science’s classic change-making problem, which you might encounter on a job interview. It asks for the minimum number of coins to get a certain amount of money. For example, if you consider the most popular coins of the US dollar, then you could represent $2.67 as ten quarters (10 × $0.25), one dime (1 × $0.10), one nickel (1 × $0.05), and two pennies (2 × $0.01).
Fractions can be a convenient tool to represent coins in a wallet or a cash register. You could define the coins of the US dollar in the following way:
from fractions import Fraction

penny = Fraction(1, 100)
nickel = Fraction(5, 100)
dime = Fraction(10, 100)
quarter = Fraction(25, 100)
Some of them will automatically reduce themselves, but that’s okay because you’ll format them using decimal notation. You can use these coins to calculate the total value of your wallet:
>>>
>>> wallet = [8 * quarter, 5 * dime, 3 * nickel, 2 * penny]
>>> print(f"${float(sum(wallet)):.2f}")
$2.67
Your wallet amounts to $2.67, but it has as many as eighteen coins. It’s possible to use fewer coins for the same amount. One way of approaching the change-making problem is by using a greedy algorithm, such as this one:
def change(amount, coins):
    while amount > 0:
        for coin in sorted(coins, reverse=True):
            if coin <= amount:
                amount -= coin
                yield coin
                break
        else:
            raise Exception("There's no solution")
The algorithm tries to find a coin with the highest denomination that’s no greater than the remaining amount. While it’s relatively straightforward to implement, it might not give an optimal solution in all coin systems. Here’s an example for the coins of the US dollar:
>>>
>>> from collections import Counter
>>> amount = Fraction("2.67")
>>> usd = [penny, nickel, dime, quarter]
>>> for coin, count in Counter(change(amount, usd)).items():
...     print(f"{count:>2} × ${float(coin):.2f}")
...
10 × $0.25
 1 × $0.10
 1 × $0.05
 2 × $0.01
Using rational numbers is mandatory to find a solution because the floating-point values won’t cut it. Since change() is a generator function yielding coins that might repeat, you can use Counter to group them.
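To see why greedy can fail, here’s a counterexample in a hypothetical non-canonical coin system with denominations 1, 3, and 4; the change() generator is repeated so that the sketch runs on its own:

```python
from fractions import Fraction

def change(amount, coins):
    while amount > 0:
        for coin in sorted(coins, reverse=True):
            if coin <= amount:
                amount -= coin
                yield coin
                break
        else:
            raise Exception("There's no solution")

coins = [Fraction(1), Fraction(3), Fraction(4)]
greedy = list(change(Fraction(6), coins))

# Greedy grabs the 4 first and then needs two 1s (three coins),
# even though two 3s (only two coins) would be optimal:
print(greedy)  # [Fraction(4, 1), Fraction(1, 1), Fraction(1, 1)]
```

Coin systems like the US dollar’s are canonical, meaning the greedy choice always happens to be optimal, which is why the earlier example worked out.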
You might modify this problem by asking a slightly different question. For instance, what would be the optimal set of coins given the total price, customer’s coins, and seller’s coins available in the cash register?
### Producing and Expanding Continued Fractions
At the beginning of this tutorial, you learned that irrational numbers could be represented as infinite continued fractions. Such fractions would require an infinite amount of memory to exist, but you can choose when to stop producing their coefficients to get a reasonable approximation.
The following generator function will yield the given number’s coefficients endlessly in a lazy-evaluated fashion:
1 def continued_fraction(number):
2     while True:
3         yield (whole_part := int(number))
4         fractional_part = number - whole_part
5         try:
6             number = 1 / fractional_part
7         except ZeroDivisionError:
8             break
The function truncates the number and keeps expressing the remaining fraction as a reciprocal that’s fed back as the input. To eliminate code duplication, it uses an assignment expression on line 3, more commonly known as the walrus operator introduced in Python 3.8.
Interestingly enough, you can create continued fractions for rational numbers too:
>>>
>>> list(continued_fraction(42))
[42]
>>> from fractions import Fraction
>>> list(continued_fraction(Fraction(3, 4)))
[0, 1, 3]
The number 42 has just one coefficient and no fractional part. Conversely, 3/4 has no whole part, and its coefficients [0, 1, 3] expand to 0 + 1/(1 + 1/3), which equals 3/4.
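A quick sanity check confirms that expansion:

```python
from fractions import Fraction

# Expand the coefficients [0, 1, 3] by hand: 0 + 1/(1 + 1/3)
expanded = Fraction(0) + 1 / (1 + Fraction(1, 3))
print(expanded)  # 3/4
```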
As usual, you should watch out for the floating-point representation error that may creep in when you switch over to float:
>>>
>>> list(continued_fraction(0.75))
[0, 1, 3, 1125899906842624]
While you can represent 0.75 in binary faithfully, its reciprocal has a non-terminating decimal expansion despite being a rational number. As you go through the rest of the coefficients, you’ll eventually hit this huge magnitude in the denominator representing a negligibly small value. That’s your approximation error.
You can get rid of this error by replacing real numbers with Python fractions:
from fractions import Fraction

def continued_fraction(number):
    while True:
        yield (whole_part := int(number))
        fractional_part = Fraction(number) - whole_part
        try:
            number = Fraction(1, fractional_part)
        except ZeroDivisionError:
            break
This small change lets you reliably generate the coefficients of continued fractions corresponding to decimal numbers. Otherwise, you could end up in an infinite loop even for terminating decimal expansions.
Okay, let’s do something more fun and generate the coefficients of irrational numbers with their decimal expansions chopped off at the fiftieth decimal place. For the sake of precision, define them as Decimal instances:
>>>
>>> from decimal import Decimal
>>> pi = Decimal("3.14159265358979323846264338327950288419716939937510")
>>> sqrt2 = Decimal("1.41421356237309504880168872420969807856967187537694")
>>> phi = Decimal("1.61803398874989484820458683436563811772030917980576")
Now, you can check the first few coefficients of their continued fractions using the familiar islice() iterator:
>>>
>>> from itertools import islice
>>> numbers = {
... " π": pi,
... "√2": sqrt2,
... " φ": phi
... }
>>> for label, number in numbers.items():
... print(label, list(islice(continued_fraction(number), 20)))
...
π [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2]
√2 [1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]
φ [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
The first four coefficients of pi give a surprisingly good approximation followed by an insignificant remainder. However, the continued fractions of the other two constants look very peculiar. They repeat the same number over and over again till infinity. Knowing this, you could approximate them by expanding those coefficients back to their decimal form:
def expand(coefficients):
    if len(coefficients) > 1:
        return coefficients[0] + Fraction(1, expand(coefficients[1:]))
    else:
        return Fraction(coefficients[0])
It’s convenient to define this function recursively so that it can call itself on successively smaller lists of coefficients. In the base case, there’s only one whole number, which is the roughest approximation possible. If there are two or more, then the result is the sum of the first coefficient followed by a reciprocal of the remaining coefficients expanded.
You can verify if both functions work as expected by calling them on their opposite return values:
>>>
>>> list(continued_fraction(3.14159))
[3, 7, 15, 1, 25, 1, 7, 4, 851921, 1, 1, 2, 880, 1, 2]
>>> float(expand([3, 7, 15, 1, 25, 1, 7, 4, 851921, 1, 1, 2, 880, 1, 2]))
3.14159
Perfect! If you feed the result of continued_fraction() to expand(), then you get back the initial value you had at the start. In some cases, though, you might need to convert the expanded fraction to the Decimal type instead of float for greater precision.
## Conclusion
You might have never thought about how computers store fractional numbers before reading this tutorial. After all, maybe it seemed that your good old friend float could handle them just fine. However, history has shown that this misconception may eventually lead to catastrophic failures that can cost big money.
Using Python’s Fraction is one way to avoid such catastrophes. You’ve seen the pros and cons of fractional notation, its practical applications, and methods for using it in Python. Now, you can make an informed choice about which numeric type is the most appropriate in your use case.
In this tutorial, you learned how to:
• Convert between decimal and fractional notation
• Perform rational number arithmetic
• Approximate irrational numbers
• Represent fractions exactly with infinite precision
• Know when to choose Fraction over Decimal or float
Bartosz is a bootcamp instructor, author, and polyglot programmer in love with Python. He helps his students get into software engineering by sharing over a decade of commercial experience in the IT industry.
Each tutorial at Real Python is created by a team of developers so that it meets our high quality standards.
# Climate Science Reports (other than IPCC)
Climate Science Reports 2009-2014
------------
US Climate Action 2014 Report
As reported HERE, the U.S. Department of State submitted the 2014 U.S. Climate Action Report on Thursday to the United Nations Framework Convention on Climate Change (UNFCCC).
The report details actions the United States will take domestically and internationally to address climate change and reduce greenhouse gas emissions by 17 percent below 2005 levels by 2020. It also outlines President Obama's "Climate Action Plan" or CAP, which includes: cutting carbon pollution from power plants, doubling renewable electricity generated from wind and solar by 2020, and increasing clean energy research funding by $7.9 billion.
Here is a graphic from the report showing the scenario of greenhouse gas emissions "to reach our goal of reducing U.S greenhouse emissions in the range of 17 percent below 2005 levels by 2020" including the CAP, in green.
Go to top.
---------------
Abrupt Impacts of Climate Change
Both abrupt changes in the physical climate system and steady changes in climate that can trigger abrupt changes in other physical, biological, and human systems present possible threats to nature and society. This report by the US National Academy of Sciences summarizes the current state of knowledge on potential abrupt changes to the ocean, atmosphere, ecosystems, and high latitude areas, and identifies key research and monitoring needs.
Click HERE for the summary, HERE for the abrupt-impacts table, and HERE for pictures: Left: Larsen B ice shelf collapse. Right: Drought.
## Credit: Left, Larsen B ice shelf collapse, Landsat 7 Science Team and NASA GSFC; Right, drought in Gujarat, Ahmad Masood/Reuters
Go to top.
---------------
Vatican Climate Report
A working group of the Pontifical Academy of Sciences, one of the oldest scientific institutes in the world, issued a sobering report on the impacts for humankind of the global retreat of mountain glaciers caused by human activity leading to climate change.
In their declaration, the working group calls "on all people and nations to recognize the serious and potentially irreversible impacts of global warming caused by the anthropogenic emissions of greenhouse gases and other pollutants, and by changes in forests, wetlands, grasslands, and other land uses." Read summary here. View entire report here. Hear the Working Group co-chair, Veerabhadran Ramanathan, interviewed by Coalition director, Dan Misleh.
Go to top.
---------------
Regional US Reports (New York, New Jersey)
Here are some regional US reports on climate.
NEW YORK (ClimAID)
HERE is a news article on the New York report.
Executive Summary
This study provides an overview assessment of the potential economic costs of climate change impacts and adaptations to climate change in eight major economic sectors in New York State. These sectors, all of which are included in the ClimAID report, are: water resources, ocean and coastal zones, ecosystems, agriculture, energy, transportation, communications, and public health. Without adaptation, climate change costs in New York State for the sectors analyzed in this report may approach $10 billion annually by midcentury. However, there is also a wide range of adaptations that, if skillfully chosen and scheduled, can markedly reduce the impacts of climate change by amounts in excess of their costs. This is likely to be even more true when non-economic objectives such as environment and equity are taken into account. New York State as a whole has significant resources and capacity for effective adaptation responses; however, given the costs of climate impacts and adaptations, it is important that the adaptation planning efforts that are now underway are continued and expanded.
NEW JERSEY - Preparing NJ for Climate Change
HERE are presentations from a workshop on climate held at Rutgers University. The keynote speaker was from the U.S. Navy Task Force on Climate Change:
“This workshop takes the first step in the development of a common information base and the creation of a network of New Jersey leaders and practitioners who will be better prepared to serve New Jersey communities, businesses, and other stakeholders as we begin to address the challenges of climate preparedness,” said Dr. Anthony Broccoli, director of the Climate and Environmental Change Initiative.
The initiative is a university-wide multidisciplinary research, education, and outreach effort focused on understanding the mechanisms that drive global and regional climate change; predicting the future of the climate system and the impacts of change, including those on a densely populated, coastal society; and informing society about the causes and consequences of climate change.
Go to top.
---------------
NOAA - State of the Climate 2010
As published in the June 2011 issue (Vol. 92) of the Bulletin of the American Meteorological Society (BAMS).
##### Report at a Glance: Highlights (8 pages)
This supplement to the State of the Climate in 2010 draws upon materials from the larger report to provide a summary of key findings.
High Resolution (22.7 MB) | Low Resolution (9.8 MB)
Webinar Slides (12 slides)
1. Front Matter and Abstract (688 KB) [ Hi Rez (4.3 MB) ]
2. Introduction (845 KB) [ Hi Rez (6.8 MB) ] | Figures (coming soon)
3. Global Climate (2.1 MB) [ Hi Rez (38.7 MB) ] | Figures (coming soon)
4. Global Oceans (1.6 MB) [ Hi Rez (27.8 MB) ] | Figures (coming soon)
5. The Tropics (1.8 MB) [ Hi Rez (32.6 MB) ] | Figures (coming soon)
6. The Arctic (1.1 MB) [ Hi Rez (14.5 MB) ] | Figures (coming soon)
7. Antarctica (940 KB) [ Hi Rez (12.6 MB) ] | Figures (coming soon)
8. Regional Climates (1.9 MB) [ Hi Rez (32.8 MB) ] | Figures (coming soon)
9. Seasonal Global Summaries (798 KB) [ Hi Rez (9.0 MB) ] | Figures (coming soon)
10. References (982 KB) [ Hi Rez (7.3 MB) ]
Go to top.
---------------
America's Climate Choices project
The America's Climate Choices project issued the final 2011 volume of the US National Research Council's most comprehensive study of climate change to date.
Here is the Press Release:
WASHINGTON — Warning that the risk of dangerous climate change impacts is growing with every ton of greenhouse gases emitted into the atmosphere, a National Research Council committee today reiterated the pressing need for substantial action to limit the magnitude of climate change and to prepare to adapt to its impacts. The nation's options for responding to the risks posed by climate change are analyzed in a new report and the final volume in America's Climate Choices, a series of studies requested by Congress. The committee that authored the report included not only renowned scientists and engineers but also economists, business leaders, an ex-governor, a former congressman, and other policy experts.
You can read the book online, free. You can also download it:
PDF Summary
Report In Brief
Previously, (May 2010) three reports were issued. These reports can be read free online. Here are the links along with a summary description for each volume:
1. Advancing the Science of Climate Change
The compelling case that climate change is occurring and is caused in large part by human activities is based on a strong, credible body of evidence, says Advancing the Science of Climate Change, one of the new books in the America's Climate Choices series. While noting that there is always more to learn and that the scientific process is never "closed," the book emphasizes that multiple lines of evidence support scientific understanding of climate change.
The core phenomenon, scientific questions, and hypotheses have been examined thoroughly and have stood firm in the face of serious debate and careful evaluation of alternative explanations, the book says.
2. Adapting to the Impacts of Climate Change
Reducing vulnerabilities to impacts of climate change that the nation cannot, or does not, avoid is a highly desirable strategy to manage and minimize the risks, says the book, Adapting to the Impacts of Climate Change. Some impacts--such as rising sea levels, disappearing sea ice, and the frequency and intensity of some extreme weather events like heavy precipitation and heat waves--are already being observed across the country.
The book notes that policymakers need to anticipate a range of possible climate conditions and that uncertainty about the exact timing and magnitude of impacts is not a reason to wait to act. In fact, it says boosting U.S. adaptive capacity now can be viewed as "an insurance policy against an uncertain future," while inaction could increase risks, especially if the rate of climate change is particularly large.
3. Limiting the Magnitude of Future Climate Change
Substantially reducing greenhouse gas emissions will require prompt and sustained efforts to promote major technological and behavioral changes, says Limiting the Magnitude of Future Climate Change, a new book from the America's Climate Choices study. Although limiting emissions must be a global effort to be effective, strong U.S. actions to reduce emissions will help encourage other countries to do the same.
In addition, the U.S. could establish itself as a leader in developing and deploying the technologies necessary to limit and adapt to climate change.
---------------
US Regional Climate Change Impact reports
HERE from the US Global Research Program are a number of reports, including US Regional Climate Change Impact reports:
-----------------
United States' Fifth Climate Action Report
HERE is the US Fifth Climate Action Report to the UN Framework Convention on Climate Change, 2010. The Fifth U.S. Climate Action Report presents a detailed outline of the actions the U.S. is taking to address climate change, contains updated projections on U.S. greenhouse gas emissions, and underscores the United States' commitment to address climate change. Here are the links for downloading the report:
The draft Sixth Climate Action Report has appeared. It provides a detailed report on U.S. actions to address climate change. This report contains descriptions of specific measured and verified actions, outlines of broad policy initiatives, and summaries of activities conducted by the United States since the Fifth CAR, principally at the federal level. Further information is available on the Federal Register notice: https://federalregister.gov/a/2013-23475
---------------
Global Climate Change Impacts in the United States
Global Climate Change Impacts in the United States (U. S. Global Change Research Program). The most comprehensive, authoritative report on Global Climate Change Impacts in the United States was released on Tuesday June 16th, 2009. This report presents, in plain language, the science and impacts of climate change on the United States, now and in the future. It focuses on climate change impacts on U.S. regions and various aspects of society and the economy such as energy, water, agriculture, and health. A comprehensive series of web-pages were developed that highlight the findings and major conclusions of the report and contain complete downloadable files of the report, as well as a host of additional content on climate change impacts on the U.S.
---------------
State of the Climate in 2009 (NOAA, UK MET)
### Unmistakable signs of a warming world
Now for the first time, a report has brought together all the different ways of measuring changes in the climate. The ten indicators of climate change include measurements of sea level rise taken from ships, the temperature of the upper atmosphere taken from weather balloons and field surveys of melting glaciers.
New technology also means it is possible to measure the temperature of the oceans, which absorb 90 per cent of the world's heat.
The State of the Climate report shows “unequivocally that the world is warming and has been for more than three decades”.
And despite the cold winter in Europe and north east America, this year is set to be the hottest on record.
HERE is the report which can be downloaded free.
-----------------------
The Copenhagen Diagnosis
The Copenhagen Diagnosis is an important new report that documents the key findings in climate change science since the 2007 IPCC Science report. The Executive Summary is here, the press report is here, and the full report in high resolution is here. There is also text dealing with some common misconceptions. See HERE for more information.
---------------
Royal Society Report Climate Change - Summary of Science (2010)
HERE is a Guardian article on the report. It says:
The document entirely supports the mainstream scientific view of man-made climate change as summarised by the UN's climate science body, the Intergovernmental Panel on Climate Change.
---------------
### PBL (Netherlands) investigation of IPCC Regional Climate-Change Impacts
Key findings of IPCC on regional climate-change impacts overall considered well founded
"The PBL Netherlands Environmental Assessment Agency has investigated the scientific foundations for the IPCC summary conclusions of the Fourth Assessment Report of 2007 on projected regional climate-change impacts, at the request of the Dutch Minister for the Environment. Overall the summary conclusions are considered well founded, none have been found to contain any significant errors. The Working Group II contribution to the Fourth Assessment Report shows ample observational evidence of regional climate change impacts, which have been projected to pose substantial risks to most parts of the world, under increasing temperatures." HERE is the press release, HERE is the press conference presentation and HERE is the report.
-----------------------
### UNEP Climate Science Reports
### Climate Change Science Compendium 2009
Download the full report [low res | high res]
The Climate Change Science Compendium is a review of some 400 major scientific contributions to our understanding of Earth Systems and climate that have been released through peer-reviewed literature or from research institutions over the last three years, since the close of research for consideration by the IPCC Fourth Assessment Report.
The Compendium is not a consensus document or an update of any other process. Instead, it is a presentation of some exciting scientific findings, interpretations, ideas, and conclusions that have emerged among scientists.
Focusing on work that brings new insights to aspects of Earth System Science at various scales, it discusses findings from the International Polar Year and from new technologies that enhance our abilities to see the Earth’s Systems in new ways. Evidence of unexpected rates of change in Arctic sea ice extent, ocean acidification, and species loss emphasizes the urgency needed to develop management strategies for addressing climate change.
HERE is the link for the UNEP Science Reports.
-----------------------
Last edit 15Nov2017
• Featured News Article Cancun Agreements adopted at COP16
The Cancun Climate Conference COP16 (10 Dec 2010) adopted the "Cancun Agreements". The vote was unanimous with the sole exception of Bolivia. While not a binding... More »
• Featured News Article 2010 Ties for Warmest Year (NASA)
NASA Research - 2010 Tied for Warmest Year on Record January 12, 2011 Global surface temperatures in 2010 tied 2005 as the warmest on record, according to an analysis released... More »
• Featured Article Monckton Myths
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new... More »
Recently Updated
RealClimate Rebuts the Contrarians Last Updated on 2019-01-02 14:46:00 RealClimate.org is the go-to climate science website run by professional climatologists. RealClimate has excellent responses to many fallacious contrarian arguments. There are also inline responses to comments made by contrarians in discussions. N.B: The GRAPHIC provides a very powerful argument against the contrarians; see below for discussion. RealClimate responses to common contrarian arguments Michaels misquotes Hansen Antarctic cooling, global warming? What does the lag... More »
Monckton Myths Last Updated on 2019-01-02 14:43:51 John Cook's Skeptical Science website has done it again with a great compilation of Monckton Myths. Christopher Monckton, trained in classics and journalism with no scientific credentials, is influential among the right-wing libertarian and fossil-fuel supported contrarians/deniers/faux skeptics. Monckton falsely claims that he is a member of the British House of Lords and claims to have invented a cure for AIDS and the common cold. He was invited by Republicans to testify on climate science before the... More »
Mother Earth vs. World's People - a play about climate change Last Updated on 2019-01-02 14:37:51 This is a great play about climate change by Doug Stewart, an author, playwright, director and performer, as well as a business team and management consultant. Doug is a board member of Senior Theatre USA, and an Honorary Board Member of the Santa Fe (New Mexico) Playhouse. He is also a member of the UU-UNO Climate Advisory Group. The play is a 350.org recommended resource. HERE is advice on staging the play, as well as a prologue from Bill McKibben. And here is... More »
Heartland "Absurdities" - Nature journal editorial Last Updated on 2019-01-02 14:33:21 The recent contrarian/denier/faux-skeptic Heartland climate conference is analyzed HERE in a recent Nature editorial entitled "Heart of the Matter". The appropriate summary word in the editorial is "Absurdities". Here it is: Heartland Nature Editorial July2011 More »
MIT Climatologist Kerry Emanuel castigates "Climategate" attacks Last Updated on 2017-03-29 21:41:30 The distinguished MIT climatologist Prof. Kerry Emanuel wrote a very hard-hitting essay: "Climategate": A Different Perspective, posted on the politically conservative National Association of Scholars NAS website on 7/19/10. Emanuel castigates the vicious right-wing attack on legitimate climate science that uses the pretext of the hacked emails and the minor IPCC report errors. This essay is extremely important. Click here for quotes. HERE is a video of Emanuel testifying recently before the Science and Technology Committee of the US House of Representatives. QUOTES from Emanuel's essay on climate: "...the scandal I see is very different from the one that has been presented to NAS members. Climategate is merely the latest in a series of coordinated, politically motivated attacks that represent an... More »
|
{}
|
Partially oriented yarn
fibre manufacturing
Alternative Title: POY
...a temperature no greater than about 70 °C [160 °F])—a process referred to as cold drawing. Other fibres, such as polyester, that are spun at extremely high rates yield what is known as partially oriented yarns (POY)—i.e., filaments that are partially drawn and partially crystallized and that can be drawn at a later time during textile operations. Many fibres, such as PET,...
|
{}
|
What is the first 21-digit prime number after the decimal point of Pi
We know that recent computers are 64-bit, so the largest native unsigned integer is 18446744073709551615. Given that, can you find the first 21-digit prime number after the decimal point of $\pi$?
You mean if we can find the first 21-digit prime in the decimal representation of $\pi$? What is that supposed to have to do with 64bit computers? – CBenni Jan 24 '13 at 17:58
Notice that a number of this size is out of the range of the machine's native integers. – hjin15 Jan 24 '13 at 18:02
There are many ways to represent numbers in computers. You are right you can't put a 21 decimal digit number into a 64 bit register, but Alpha, Maple, Python, and many others have the ability to handle larger integers. – Ross Millikan Jan 24 '13 at 18:13
What you are looking for is sometimes called an infinite precision math library that allows you to extend the size of your numbers beyond the maximum representable by the word size of your CPU.
Sometimes this is called an arbitrary precision library.
Languages like Java, and say Python provide such capabilities. Computer Algebra Systems like Mathematica (as does WolframAlpha), Maple, Maxima and Gp/PARI have this capability built in.
This and the answer by Ross should get you moving forward.
You also need to decide on a primality test to use against your numbers and that should be something like Miller-Rabin followed by Lucas or other choices too depending on your needs.
Regards.
Just wanted to reinforce this by pointing out that $2^{2281}-1$, a 687-digit prime, was already found 60 years ago by a computer having less than 10 kilobits of RAM (in the form of vacuum tubes). Today, there's probably more computing power inside a $5 pedometer, let alone a 64-bit computer. – Erick Wong Jan 24 '13 at 18:26
@ErickWong: Eric, thanks for the tidbit, I actually wasn't aware that they had done such a large prime at that time. Do you have a reference for this? I recall the good old days when we had to be so careful with using the very limited resources we had and were forced to do efficient things. Regards – Amzoti Jan 24 '13 at 19:00
Apparently the specific announcement was published by Lehmer as a follow-up note in Mathematics of Computation (AMS link). For the capacity of the computer (named SWAC) I just looked up Wikipedia. – Erick Wong Jan 24 '13 at 19:58
Nice, Amzoti! +1 – amWhy May 6 '13 at 2:03
You can certainly start taking the batches of $21$ decimals and check them for primality. You start with $141592653589793238462$, which is clearly not prime as it is even. The first odd batch is $592653589793238462643=7^2 \times 11 \times 17203 \times 624683 \times 102317113$. One would expect about one in $45$ numbers of this size to be prime, so you shouldn't have to look far. You can check your favorite with Alpha. The first one I find is $338327950288419716939$, checked with Alpha. Alpha is able to fully factor numbers of this size quickly.
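To make the approach above concrete: the sliding-window search plus a Miller-Rabin primality test fits in a few lines of Python, where arbitrary-precision integers are built in. The digit string, function names, and witness bases below are our own illustrative choices, not from the posts above; with the bases shown, Miller-Rabin is deterministic for n < 3.3×10^24, which covers all 21-digit numbers.

```python
# First 60 decimal digits of pi after the point (hardcoded for the sketch).
PI_DIGITS = "141592653589793238462643383279502884197169399375105820974944"

def is_probable_prime(n: int) -> bool:
    """Miller-Rabin test; deterministic for n < 3.3e24 with these bases."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False       # a witnesses that n is composite
    return True

def first_prime_window(digits: str, width: int = 21):
    """Slide a width-digit window over the digits; return the first window
    that passes the primality test, or None if none does."""
    for i in range(len(digits) - width + 1):
        n = int(digits[i:i + width])
        if is_probable_prime(n):
            return n
    return None

print(first_prime_window(PI_DIGITS))
```

If the answer's claim about $338327950288419716939$ holds, that is the value the scan prints. The same `is_probable_prime` also confirms Lehmer's Mersenne prime $2^{2281}-1$ in milliseconds.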
|
{}
|
# Math Help - change of base formula?
1. ## change of base formula?
Use the change of base formula then a calculator to evaluate
so i did change of base I think>? 9 ^42.11 then used the calculator and got 1.52?
2. Originally Posted by rj2001
Use the change of base formula then a calculator to evaluate
so i did change of base I think>? 9 ^42.11 then used the calculator and got 1.52?
What are you asking? How to use the change of base formula to calculate $\log_9(42.11)$?
By the change of base theorem this is equal to
$\frac{\log(42.11)}{\log(9)}=\frac{\ln(42.11)}{\ln(9)}=\frac{\lg(42.11)}{\lg(9)}$
3. It says Use change of base formula and a calculator to evaluate the following Log round your answer to 3 dec places
1)Log base 9 42.11
2)Log base5 3
4. Originally Posted by rj2001
It says Use change of base formula and a calculator to evaluate the following Log round your answer to 3 dec places
1)Log base 9 42.11
2)Log base5 3
So do exactly what I showed you. And do the same for number two except replace 42.11 with 3 and 9 with 5
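For a quick numeric check of the formula above (rounded to 3 decimal places, as the exercise asks), any language with a log function works; a short Python sketch:

```python
import math

# log_9(42.11) via the change of base formula: log(x)/log(b)
x, b = 42.11, 9
print(round(math.log(x) / math.log(b), 3))   # -> 1.702 (same as math.log(x, b))

# log_5(3)
print(round(math.log(3) / math.log(5), 3))   # -> 0.683
```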
|
{}
|
# Items tagged with differential (Tagged Items Feed)
### DEs and Mathematical Functions Updates...
May 06 2014
2 0
This is the first presentation of updates for the DE and Mathematical Functions programs of Maple 18. It includes several improvements, all in the Mathematical Functions sector, as well as some fixes. The update and instructions for its installation are available on the Maplesoft R&D webpage for DEs and mathematical functions. Some of the items below were mentioned here in Mapleprimes - you are welcome to present suggestions or issues; if possible they will be addressed right away in the next update.
• Filling gaps in the FunctionAdvisor regarding all the 6 complex components: abs, argument, conjugate, Im, Re, signum, as well as regarding Heaviside (step function), Dirac, min and max.
• Fix the simplification and differentiation rule for doublefactorial
• Make convert(..., hypergeometric) work the same way as convert(blabla, hypergeom)
• Implement integral forms for Heaviside(z) and JacobiAM(z, k) via convert(..., Int)
• Implement appropriate display for the inert %intat function as well as its conversion to the inert Int
• Make the FunctionAdvisor/DE return not just the PDE system satisfied by f(z, k) = JacobiAM(z, k) but also (new) the ODE satisfied by f(z) = JacobiAM(z, k)
• Fix conversion rule from Heaviside(z) to Sum
• Fix unexpected error interruption when differentiating min(...) and max(...) containing more than three arguments
• Fix issue in simplify/conjugate
• Improvement in expand/int: factors in disguise are put outside the integration sign
• Various improvements in the case of multiple integrals involving the Dirac function
• Make Intat fully inert (before it was evaluating its arguments)
• Make value of inert indexed objects work
Edgardo S. Cheb-Terrab
Physics, Differential Equations and Mathematical Functions, Maplesoft
### Error / differential equation system...
March 29 2014
0 1
Hi,
What is the reason/Why:
Error, (in dsolve/numeric/bvp) unable to achieve requested accuracy of 0.1e-5 with maximum 128 point mesh (was able to get 0.66e-1), consider increasing maxmesh or using larger abserr
Thanks for the help :)
### Differential Equations - Quasi Period...
March 20 2014
1 3
Hi, can I get some help with this?
The question is:
Consider the following IVP for a mass of m = 2 kg attached to a spring with a spring constant k = 9 N/m. The spring mass system is in a medium with damping constant b.
2y" + by' + 9y = 0
y(0) = 0, D(y)(0) = -3
It then asks find three values b1, b2, b3 where b1 is underdamped, b2 is critical, b3 is over.
I set b1 as 1, b2 as sqrt(72), b3 as 9.
Then it asks to find the quasi period.
I can't get my quasi period right. My answer is 2pi/ sqrt (4.5).
Any help?
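For what it's worth: for an underdamped m*y'' + b*y' + k*y = 0, the quasi frequency is the imaginary part of the characteristic roots, mu = sqrt(4*m*k - b^2)/(2*m), and the quasi period is T = 2*pi/mu; the expression 2*pi/sqrt(k/m) is the undamped period and ignores b entirely. A small Python check, using the underdamped choice b = 1 from the post:

```python
import math

m, k = 2.0, 9.0        # mass and spring constant from the problem
b = 1.0                # an underdamped choice (b^2 < 4*m*k = 72)

mu = math.sqrt(4 * m * k - b**2) / (2 * m)    # quasi frequency = sqrt(71)/4
T_quasi = 2 * math.pi / mu                     # quasi period = 8*pi/sqrt(71)

T_undamped = 2 * math.pi / math.sqrt(k / m)    # 2*pi/sqrt(4.5), b ignored

print(T_quasi, T_undamped)
```

The quasi period comes out slightly longer than the undamped period, which is exactly the correction the exercise is after.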
### problem with diff and D...
March 09 2014
0 0
Hi, I want to take a derivative with respect to another derivative using the Physics package, but using D instead of diff. Could anyone help me do that? For example:
restart; with(Physics):
A1 := -(1/24)*1*rho*((diff(phi[1](x, t), t))^2)*(h^3)-(1/2)*1*rho*((diff(u[ref](x, t), t))^2)*h-(1/2)*rho*((diff(w(x, t), t))^2)*h+(1/24)*1*1*((diff(phi[1](x, t), x))^2)*(h^3)+(1/2)*1*(1*((diff(u[ref](x, t), x)+(1/2)*(diff(w(x, t), x))^2)^2)+K*1*((diff(w(x, t), x)+phi[1](x, t))^2))*h-1*q*w(x, t):
A2:=-diff(diff(A1,diff(u[ref](x,t),x)),x);
Here I want to compute A2 using the D command, not diff, and I do not want to use the convert command! I just need to calculate A2 directly using D. Thanks for your help.
### Nonlinear DE - Need hints on how to solve...
March 03 2014
1 1
I have the following nonlinear differential equation and don't know how to solve it. Can anyone give me any hints on how to solve for E__fd(t)? I don't even know the specific classification (other than nonlinear) of this DE; can someone at least give me a hint on that? Thanks.
.5*(diff(E__fd(t), t)) = -(-.132+.1*e^(.6*E__fd(t)))*E__fd(t)+0.5e-1
Thanks,
Melvin
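As a hint on classification: this is a first-order, autonomous, nonlinear ODE (E__fd appears both linearly and inside an exponential, so a closed form is unlikely), but it integrates easily numerically. A self-contained Python RK4 sketch of the same equation, with the initial condition E(0) = 0 assumed since the post gives none:

```python
import math

def f(E):
    # 0.5*E' = -(-0.132 + 0.1*e^(0.6*E))*E + 0.05, rearranged for E'
    return 2.0 * (-(-0.132 + 0.1 * math.exp(0.6 * E)) * E + 0.05)

def rk4(E, t_end, h=0.01):
    """Classic 4th-order Runge-Kutta for the autonomous ODE E' = f(E)."""
    t = 0.0
    while t < t_end:
        k1 = f(E)
        k2 = f(E + 0.5 * h * k1)
        k3 = f(E + 0.5 * h * k2)
        k4 = f(E + h * k3)
        E += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += h
    return E

# Assumed initial condition E(0) = 0 (the post does not give one); the
# solution creeps up to the stable equilibrium near E = 1, where
# (0.1*e^(0.6*E) - 0.132)*E = 0.05.
print(rk4(0.0, 50.0))
```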
### How do I solve this system of PDEs in Maple?...
February 13 2014
1 3
Hi there,
I'm quite new to Maple so please forgive me! I have a system of partial differential equations I'm trying to solve in Maple as such below
df/dt = f(1-f) - f * h
dh/dt = (g - h) + Laplacian(h),
where f,g,h are functions of space and time (i.e. f(x,y,z,t)). I guess my first question is - is this possible in Maple to evaluate? (I'm currently unsure on ICs as I'm figuring it out from the model - it's a model for cancer growth I'm trying to evaluate but have a rough idea of what I'd use).
If it is possible, can you please share how I'd write this? Everytime I've tried I seem to be failing to define anything properly, so your expertise would be greatly appreciated!
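Yes, this is possible; Maple's pdsolve with the numeric option can handle time-dependent PDEs in one space dimension. Independently of Maple, the structure of the model can be sanity-checked with a simple explicit finite-difference scheme. The Python sketch below uses a 1D Laplacian, takes g constant, and invents all parameters, grid sizes and initial data purely for illustration (none are from the post):

```python
import math

# Minimal 1D explicit finite-difference sketch of
#   df/dt = f*(1 - f) - f*h
#   dh/dt = (g - h) + d2h/dx2
N, L = 50, 10.0
dx = L / (N - 1)
dt = 0.4 * dx * dx / 2.0       # under the explicit stability limit dx^2/2
g = 1.0                         # source field, taken constant here

# assumed initial data: a small tumour seed in the middle, no inhibitor
f = [0.0] * N
f[N // 2] = 0.1
h = [0.0] * N

steps = int(5.0 / dt)           # integrate to t = 5
for _ in range(steps):
    lap = [0.0] * N
    for i in range(1, N - 1):
        lap[i] = (h[i - 1] - 2 * h[i] + h[i + 1]) / (dx * dx)
    lap[0], lap[-1] = lap[1], lap[-2]   # crude zero-flux ends
    f = [fi + dt * (fi * (1 - fi) - fi * hi) for fi, hi in zip(f, h)]
    h = [hi + dt * ((g - hi) + li) for hi, li in zip(h, lap)]

print(max(f), max(h))
```

With the step sizes chosen inside the stability limit, both fields stay bounded in [0, 1], which is a useful check before trusting any fancier solver on the same system.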
### Incorrect first argument error...
January 21 2014
1 4
I'm attempting to plot several solutions of this differential equation (I have uploaded my worksheet). I have used this series of commands before without issue, but for some reason I keep getting the error message: "Error, (in plot) incorrect first argument" ect.. Does anyone have any insight into what might be going wrong? Thank you.
ass_1_#9.mw
### How to solve nonlinear equation numerically in Map...
December 20 2013
1 3
restart;
diffeq := diff(w(r), $(r, 1))+2*beta*(diff(w(r), $(r, 1)))^3-(1/2)*S*(r-m^2/r) = 0;
con := w(1) = 1;
ODE := {con, diffeq};
sol := dsolve(ODE, w(r), type = numeric);
How can I obtain a numerical solution of the above differential equation with the corresponding boundary condition?
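One route worth noting, independent of Maple's dsolve internals: the equation is a cubic in w'(r), and since p + 2*beta*p^3 is strictly increasing in p, there is exactly one real slope for each r. So w'(r) can be solved pointwise and then integrated from w(1) = 1. A Python sketch with the assumed values beta = S = m = 1 (the post fixes none of them):

```python
import math

beta, S, m = 1.0, 1.0, 1.0     # assumed parameter values

def wprime(r):
    # solve p + 2*beta*p^3 = (S/2)*(r - m^2/r) for p = w'(r) by bisection;
    # the left side is strictly increasing, so the real root is unique
    c = 0.5 * S * (r - m**2 / r)
    lo, hi = -abs(c) - 1.0, abs(c) + 1.0   # brackets the root
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid + 2 * beta * mid**3 < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# integrate from the boundary condition w(1) = 1 out to r = 2; since the
# slope depends on r only, RK4 reduces to Simpson's rule per step
w, h = 1.0, 0.001
for i in range(1000):
    r = 1.0 + i * h
    k1 = wprime(r)
    k2 = wprime(r + 0.5 * h)
    k4 = wprime(r + h)
    w += h * (k1 + 4 * k2 + k4) / 6.0
print(w)
```

With these parameters the right-hand side is positive for r > 1, so w increases monotonically away from the boundary value.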
### how i can get a single associated Legendre functi...
December 19 2013
1 0
Hi,
There is a common differential equation in my Maple worksheet. The solution of the equation can be expressed by associated Legendre function(s), but I get a result in a hypergeometric representation. How can I translate the latter into a single Legendre function?
### Events in dsolve...
December 16 2013
1 5
Hi
I have a second-order differential equation to be solved numerically. I would like to set up events to halt integration to find the values of phi when r(phi)=2/3. Here is my code:
DE:=diff(1/r(phi),phi,phi)+(1/r(phi))=(G*M/h^2)+(3*G*M/c^2*r(phi)^2);
ics:=r(0)=2/3,D(r)(0)=0;G:=1;M:=1;h:=1;c:=1;
p:=dsolve({DE,ics},numeric,events=[[r(phi),r(phi),halt]],[diff(r(phi),phi)=0,halt]]);
The code only works for the second event so it halts for r(phi)=1.63... etc
How do i stop this?
Thanks for any help.
### How do I format the following dsolve function from...
December 05 2013
1 2
The differential equation I'm solving for is:
### Solving of 4 differential equations simulateously...
November 18 2013
0 2
Hi everyone,
I am a new user of Maple and I want to know the procedure to follow when solving 4 differential equations simultaneously, e.g.:
dS/dt=Λ0-βcSI/N-μS
dL/dt=Λ1+βcSI/N-μ1L+ΑcIT/N
dI/dt=kL-μ2I
dT/dt=r1L+r2I-ΑcIT/N-μT
Any help will be highly appreciated. Regards
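In Maple the whole system goes to dsolve in one call together with the initial conditions, with numeric output requested. As a language-neutral illustration of setting such a system up, here is a Python RK4 sketch; every parameter value and initial condition below is an assumption made for illustration (the post specifies none), and N is taken as the total population S+L+I+T:

```python
import math

# assumed parameter values (illustrative only)
Lam0, Lam1 = 2.0, 0.0
beta, c = 0.5, 1.0
mu, mu1, mu2 = 0.1, 0.1, 0.1
k, A, r1, r2 = 0.05, 0.4, 0.05, 0.05

def deriv(y):
    # the four right-hand sides, as written in the post
    S, L, I, T = y
    N = S + L + I + T
    dS = Lam0 - beta * c * S * I / N - mu * S
    dL = Lam1 + beta * c * S * I / N - mu1 * L + A * c * I * T / N
    dI = k * L - mu2 * I
    dT = r1 * L + r2 * I - A * c * I * T / N - mu * T
    return (dS, dL, dI, dT)

def step(y, h):
    # one classic RK4 step for the whole 4-component state
    k1 = deriv(y)
    k2 = deriv(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
    k3 = deriv(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
    k4 = deriv(tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h * (a + 2 * b + 2 * cc + d) / 6.0
                 for yi, a, b, cc, d in zip(y, k1, k2, k3, k4))

y = (99.0, 0.0, 1.0, 0.0)        # assumed S(0), L(0), I(0), T(0)
h = 0.05
for _ in range(int(100.0 / h)):  # integrate to t = 100
    y = step(y, h)
print(y)
```

All four compartments stay non-negative with these (assumed) rates, which is a quick sanity check that the system was transcribed correctly.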
### Solving ODE system with initial conditions...
November 10 2013
1 4
Hello all,
During my last attempt to solve an ODE system (an autonomous system of 3 first-order differential equations) with initial conditions, Maple produced a solution which includes a _z1 parameter, as follows below (I present the solution of one of the equations):
S(t)=S(0)∫(QN_z1+A)d_z1, where ∫ is a definite integral from 0 to t, S(0) is the initial value of S, and Q, N and A are parameters. I would like to ask: what do _z1 and d_z1 mean? If the ODE system is only time dependent, why did I receive an integral over another variable, d_z1? Does it mean that the integral can't be evaluated, or maybe something else?
Dmitry
### 2nd Order differential equation...
November 09 2013
1 4
Hey all new to Maplesoft my question is this;
### how to do diff commands in Maple?...
November 02 2013
0 1
2. Give the Maple command(s) to compute $\frac{\partial^8 f}{\partial x^5\,\partial y^3}$ for $f(x, y) = e^{2x+\cos(y)}$.
|
{}
|
# How do I fix this? Freshly upgraded F21 clears logo and leaves screen blank.
Yes, I know F21 is 'end of life', but the author of Fedup always recommended upgrading one level at a time, so I used Fedup to go from F19 to F20, which worked well, then from F20 to F21, which does not. If I let it boot by itself, I see the boot screen with the Fedora logo, the curly 'F', printed then cleared (so far, in the normal manner). But then instead of seeing the graphical login manager, I see the screen go blank and stay there.
Interestingly enough, if I hit Esc during the boot, I can get a console and login there with no obvious problems. Likewise if I edit the Grub line starting with 'linux' to include the runlevel '3' at the end. But then I have no network, which may be an unrelated problem (I suspect SELinux here, since the journal shows permission denied on reaching the socket; but I have tried both disabling SELinux and setting to permissive mode only; neither solved it though the former at least made SELinux errors go away).
Speaking of the journal, I did use "journalctl --no-full --reverse" to try to figure out what was going on, but I am not that sure how to interpret what I am seeing there: I see, for example, errors from X including "(EE) Fatal server error" and "(EE) no driver available", but I realize X has long had a lot of unfixed error messages, so I am not sure how seriously to take this one. Nor do I know what to look for to see why the driver is not available; it looked to me like it was available.
I have seen Ask Fedora questions about similar sounding failures, but those were because of Nvidia cards. Mine is Radeon. (In a Lenovo E525).
So what should I be looking for in the journal or in which logs?
Honestly, if you are coming from Fedora 19 and you want to upgrade until you reach a supported version (currently 22 /23), I would recommend you to
1. Back up your system (entire home folder with all .hidden files and folders, and maybe /etc if you think you need it)
2. Do a clean install of Fedora 22
The likelihood that you end up with a proper and fast system is higher than after upgrading 3 or 4 times.
( 2016-01-11 12:30:04 -0600 )
If even the non graphical console failed, I would do as you say, going ahead with a fresh install. I did a pretty comprehensive backup before the F19 -> F20 upgrade, though I will have to double check about the hidden files. But those are probably still there, since I can access the file system through the console created by Esc during bootup.
But since I can still run Fedora in a console, I am reluctant to give up when I seem so close.
( 2016-01-11 12:37:11 -0600 )
@mejohnsn: I understand.
An easy way to access your files in a graphical environment is through a live cd/dvd/usb. Just boot Fedora from there, mount your existing /home and copy all you need to an external drive. Or if your /home parition is already a separate partition and you are happy with the rest of the partition setup, you should be able to freshly install Fedora 22 and using the existing /home with its content. Just make sure you don't format it during partition setup.
( 2016-01-11 14:55:28 -0600 )
Have you tried running package-cleanup problems and package-cleanup --cleandupes yet? If nothing else, this will make sure that you don't have any package conflicts causing problems.
( 2016-01-11 23:37:50 -0600 )
@sideburns: "package-cleanup problems" is an error; "package-cleanup --problems" gives a list without modifying anything. Sure, it is suspicious how many packages came up in said list as 'missing requirements'; most of them are packages I never use, but a few such as "libLLVM-3.4()so(64bit)" look suspicious.
Interestingly enough, after executing "package-cleanup", though this failed when it tried to access the network (please recall I did say that was not working), the next time I booted the system with runlevel=3 (i.e., ' 3', at end of "linux...), it did get to network. So some progress.
( 2016-01-13 01:26:03 -0600 )
|
{}
|
## Background & Summary
Tropical forests provide vitally important ecosystem services and support valued biodiversity13. As a result there are many national and global policies seeking to reduce deforestation4,5. However, forest conservation can result in real costs to forest frontier communities, many of whom are poor and marginalised, by preventing agricultural expansion and restricting access to valued wild-harvested products6,7. International conservation policies recognise that conservation should not make local people worse off, and conservation funded by multilateral donors is also subject to stringent standards to ensure poor people are not made worse off by such investments8. However, there is relatively little empirical data allowing a robust evaluation of the costs of conservation for comparison with any benefits distributed.
While forest-dependent people are difficult to define and therefore to count9, many millions of people living on the forest frontier in tropical countries make their living from small-scale swidden agriculture and harvesting products from the wild.10,11 In many areas these livelihoods face rapid change because of increasing land constraints and reductions in the availability of wild-harvested products due to increases in human populations and over exploitation of previously-abundant species1215 as well as conservation restrictions. In other areas, agricultural intensification and increased salaries for off-farm jobs is resulting in a shift away from forest-dependant livelihoods16. Collecting data on the household economy of forest frontier residents in low income countries can be difficult due to poor market integration and highly diversified income sources17. However detailed data on the economy of such households is needed to understand the likely impact of potential policies, including protected areas and REDD+ (Reducing Emissions from Deforestation and forest Degradation).
The datasets and surveys described in this paper were part of the project Can Paying 4 Global Ecosystem Services Reduce Poverty? (p4ges) (http://p4ges.org/). The p4ges project ran from 2013 to 2018 and was funded under the Ecosystem Services for Poverty Alleviation (ESPA) programme (www.espa.ac.uk). P4ges aimed to influence the development and implementation of international ecosystem service payment schemes in the interests of poverty alleviation. The project focused on the case study of a new protected area and REDD+ pilot project in eastern Madagascar: the Corridor Ankeniheny Zahamena (CAZ). The World Bank funded the establishment of the CAZ protected area [18]. This meant that households identified as Project Affected Persons (PAPs) were eligible for compensation under the World Bank’s social safeguard scheme for economic displacement caused by the conservation of the area [19].
This paper presents the datasets created for estimating the magnitude and distribution of net local welfare impacts from the conservation approaches taken in and around CAZ. The data was collected in two phases. In the first phase we conducted household surveys with a stratified random sample of 603 households across five sites. We collected information on demographic and basic socio-economic characteristics of the households, including land holdings, general information on the use of wild-harvested products, whether the household had received compensation under the World Bank social safeguard scheme, assets and wealth indicators, and information on social and human capital. Alongside these interviews we also conducted a choice experiment, designed to estimate the opportunity cost of conservation restrictions which prevent households from clearing further agricultural land from forests [20,21]. The second phase collected much more detailed information on land use, agricultural practices, off-farm income and wild-harvested products with a stratified random sub-sample of 171 households across four sites.
The datasets described here are already providing rich information for the analysis of the opportunity costs of forest use restriction [20–22], evaluating the implementation of the World Bank social safeguards [22,23], and patterns of migration and the role of migration in land use change [24]. They will also allow opportunity cost estimates using the household production function approach, and in-depth analysis of forest dependency and the swidden agricultural system practiced in this area, including how crop production varies with the land use history of a plot.
## Methods
This section introduces the case study, the rationale for the selection of study sites, the sampling strategy, the sampling frame, ethical considerations and the implementation of both phases of data collection. Figure 1 provides a map of the study area with pilot and study sites (a), and an overview of the site selection and sampling strategy (b) while Table 1 summarises the study sites selected.
### The case study
The Corridor Ankeniheny Zahamena (CAZ) is a 382,000 ha belt of rainforest linking a number of existing protected areas (most notably Zahamena and Mantadia National Parks) in eastern Madagascar. The CAZ, which is managed by Conservation International on behalf of the Malagasy government, was initially granted temporary protected status in 2006. Its status as an IUCN category VI protected area was confirmed in April 2015. More than 60,000 people live in more than 450 villages in and around this protected area and rely primarily on swidden agriculture, and on collecting wild-harvested products for their livelihoods [18]. As an IUCN category VI protected area, local residents are allowed to collect some forest products for personal use from parts of the protected area. However, swidden agriculture is completely prohibited meaning that the majority of households will be affected by conservation restrictions imposed due to the new protected status [22,23]. The CAZ is considered a REDD+ pilot project as carbon finance is part of the long-term funding plan for CAZ [25]. Because the establishment of the CAZ was funded by the World Bank, the CAZ environmental and social safeguards plan [18] follows the World Bank guidelines in identifying and compensating households considered as Project Affected Persons (commonly known as PAPs). This mandates that anyone who will be economically displaced by the project through restriction of access to the natural resources should be compensated. Compensation took the form of micro-development projects such as improved agriculture, small-scale livestock and beekeeping projects [26]. These were distributed to households identified as PAPs in 2014 (soon after the first phase of our field work).
### Selection of study sites
Five study sites (two adjacent to the new CAZ protected area, two adjacent to long-established protected areas, and one away from the forest frontier; Fig. 1a) were purposively selected for this study following an extensive reconnaissance during January to March 2014. The reconnaissance included collecting site-level information on which to base the selection of the study sites, and extensive key informant interviews (with local leaders, elders and school teachers) to ensure we understood the local context and to explore the availability of appropriate sampling frames at each site.
### Developing the sampling frame
The sampling unit in this work was the household, defined as one person living alone or a group of people living together who pool some or all of their resources (labour, income and wealth) and who make common provision for food and other essentials for living [28]. Information on the location of rural communities in Madagascar and the number of households in each is not easily available, and where the data exist, they tend to be poor and not up-to-date [23]. In order to build a robust sampling frame for this study, most of our study sites required a thorough census of the households within them. As outlined in Fig. 1b, once the study sites were selected, we worked through the various levels of local communities, starting from the fokontany down to hamlets, to conduct a census of the households in each site for our sampling frame. This process involved starting with available maps, drawing sketch maps with local informants (including the fokontany president, village chiefs, elders and school teachers) and then visiting villages and hamlets to record the location of households with a handheld GPS. Given the difficulty of accessing many of our sites, and the location of individual households in difficult, hilly terrain, this process took a lot of time (approximately fifty person-days per site).
### Stratified random sampling
Once we established the sampling frame at each site, stratified by village (settlements within each fokontany were grouped in small villages), the households for the initial household survey and choice experiment (phase one) were identified through random sampling from each village stratum, in proportion to stratum size, to reach a set sample size for each site. We did not stratify by wealth status as it was not locally appropriate to discuss the wealth status of households in focus groups for us to be able to collect this information. The sample size for each site was based on the minimum number of households required for the choice experiment analysis [29], as this exceeded the minimum number of households required for the other types of data analysis planned. As a rule of thumb, surveying 50 individuals per experimentally designed alternative is acceptable [30] in choice experiment surveys. This implies that a minimum of 150 respondents was required for our choice experiment design of three alternatives per choice set. This is the lowest limit which provides adequate variation in the variables of interest for which robust models may be fitted. However, other aspects also determined the sample size: the number of choice sets per respondent, the number of attributes and levels, the task complexity, and the field conditions. We administered six choice sets per household in total (with four attributes of varying levels per choice set) and used a minimum sample of around 100 households per site where the choice experiment was conducted. We needed a particularly large sample for the choice experiment in Ampahitra as we were testing willingness-to-pay and willingness-to-accept formulations [20] before settling on the best performing approach for roll-out across sites.
As choice experiments were not being conducted in Amporoforo (there is no forest easily accessible from this site meaning it did not make sense to explore the opportunity cost of conservation in this way), the sample size in this site reflects the number of households needed in phase two of the survey. We randomly selected 10–15% more households than needed for our survey in each site to allow for refusals. If a household did not want to be interviewed, we turned to the next household from our sample. Refusals and dropout rates were very low (less than 4% across all sites).
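The proportional allocation and oversampling logic described above can be sketched as follows. This is an illustrative Python sketch, not the project's actual sampling code; the function name and the example frame are hypothetical.

```python
import random

def stratified_sample(villages, site_n, oversample_pct=12, seed=42):
    """Draw a proportional stratified sample of households.

    villages: dict mapping village (stratum) name -> list of household
    IDs from the census sampling frame.  site_n: target sample size for
    the site.  An extra 10-15% (here 12%) is drawn to allow for
    refusals, as in the survey protocol.
    """
    rng = random.Random(seed)
    total = sum(len(h) for h in villages.values())
    # integer ceiling of site_n * (1 + oversample_pct/100)
    target = -(-site_n * (100 + oversample_pct) // 100)
    sample = {}
    for name, households in villages.items():
        # allocate draws in proportion to the stratum (village) size;
        # per-stratum rounding can shift the overall total slightly
        n = min(round(target * len(households) / total), len(households))
        sample[name] = rng.sample(households, n)
    return sample

# hypothetical frame: two villages of 300 and 200 households
frame = {"V1": [f"V1-{i}" for i in range(300)],
         "V2": [f"V2-{i}" for i in range(200)]}
sample = stratified_sample(frame, site_n=100)
```

With a target of 100 households and 12% oversampling, the two strata receive 67 and 45 draws respectively, in proportion to their sizes.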
### Ethical considerations and procedures
The surveys covered potentially sensitive subjects such as wealth indicators and land holdings. As clearing forest for swidden agriculture is forbidden in the study area, questions about land use and the history of swidden agricultural plots are particularly sensitive. Given we needed to record individual identifying information (to allow us to relocate households and tie their information together for the in-depth phase two interviews with a sub-sample of those initially interviewed), we had to be particularly careful to ensure that informants’ data was protected. Bangor University's College of Natural Sciences ethics committee approved this study, including the information sheet, consent protocol and the data collection protocol. All members of the survey team received ethics training before carrying out fieldwork, which covered topics such as the principle of voluntary participation, informed consent, and anonymity and data protection. At the beginning of each survey, we introduced the research team and the project objectives to the respondents, why they had been selected for interview, and explained that their participation in the research was voluntary and they could leave at any time. We explained that no information that could identify households or individuals would be shared outside the research teams at Bangor University and the University of Antananarivo and that the research would be used to help others understand about life in their village and how decisions made about conservation might influence them. We gave each household a leaflet to keep which explained the aims of the research in the local language and contained contact details for those responsible for the project with photos and names of the research team overleaf. Details of the consent forms and the information sheet are provided with the archived data. Both archived datasets described in this paper are completely anonymised. 
We collected some qualitative information alongside the quantitative data recorded in the archive. Given the sensitive nature of the issues discussed and the difficulty of anonymising this qualitative information, it has not been archived alongside the quantitative datasets.
Participants in the household survey and choice experiment (phase one) were given a small gift of useful items to a total value of 3,000 ariary (approximately $1) as a gesture of appreciation at the end of the survey. Before giving this gift we asked households if they would be willing to be included in our sample for follow-up interviews. The detailed agricultural surveys (phase two) took a full day and required the household head to work as a guide for the day, taking us to their fields, so we paid the daily wage rate of 5,000 ariary (approximately $1.85). The follow-up wild-harvested product survey was compensated in the same way as the initial household survey and choice experiment.
### Survey implementation
Figure 2 shows images of the data collection. These are useful for understanding the context.
Phase one (household survey and choice experiment) and the first round of phase two (agricultural surveys) were implemented collaboratively by Bangor University (School of Environment, Natural Resources and Geography) and the University of Antananarivo (Ecole Supérieure des Sciences Agronomiques). The follow up wild-harvested product survey was implemented by Madagasikara Voakajy in close collaboration with the Bangor University and University of Antananarivo teams. Our team spent more than 150 person days in each site, usually with two or three people in the field together. We stayed with members of the community rather than camping separately or travelling in every day. This helped greatly in building relationships and trust.
Extensive field testing of the survey instruments was carried out in order to refine and polish the questions and to ensure they could be well understood by our target population and that respondents were comfortable answering them. The pilot sites for testing the survey instruments were chosen to resemble the actual study sites in their characteristics (see Fig. 1a, indicated by orange-coloured squares). Initial survey instruments for all phases were first piloted during February and March 2014 in the pilot study sites. The survey questionnaires were tested on individual respondents (over 20 in total) and the relevance of the questions was discussed with focus groups (3–4 in each pilot site). Based on these pilots, the survey questionnaires were modified, particularly the sections relating to wealth indicators, land use and types of crops farmed, collection of wild-harvested products, and social capital. For the discrete choice experiment (DCE) survey, initial piloting helped decide on the key attributes and their levels for the final design of the choice sets. The choice experiment presentation and framing (lengthy warm-up steps and the use of dolls and large images to desensitise forest clearance) were also significantly informed by the piloting [20]. The final survey instruments were tested again in the pilot sites during June 2014 before the main surveys. The final testing was also important for the training of the research assistants.
The bulk of the fieldwork was carried out by a relatively small team (see acknowledgements) of highly trained research assistants who remained involved in the project from the development of the survey instruments, through data collection, cleaning, archiving and analysis. Three additional short-term assistants helped with the initial household survey, and we had one additional assistant helping with the agricultural survey in one site. Interviews were conducted in person with the household heads (or another adult member of the household where household heads were not available). Interviews were conducted in Malagasy by native speakers comfortable with the dialect of the region.
The surveys were implemented between July 2014 and November 2015 (Table 2). We collected the data on paper forms in both phases and for all sites. While we experimented with data entry onto tablet computers in the field, problems with keeping them charged and the challenges of working on small screens meant we entered the majority of data collected after returning from the field. Data was entered by the research assistants themselves after they returned to the capital after each round of interviews in each site. Survey data and accompanying documents including all survey instruments in English and Malagasy are available on the ReShare repository (Data Citation 1; Data Citation 2).
### Phase one (household survey and choice experiment)
The purpose of the phase one interviews was to collect demographic and basic socio-economic characteristics of the households, including migration status, land holdings, wealth indicators and general information on the use of wild-harvested products, and to estimate the opportunity cost of conservation from a relatively large sample using a discrete choice experiment, a stated preference technique (further details below). We also recorded whether the household had received compensation under the World Bank social safeguard scheme. A secondary purpose was to provide information from which to identify a representative sample for the more in-depth interviews in phase two. Interviews in phase one took between one and two hours each.
The socio-demographic and wealth indicators in the household survey were developed from a combination of questions from the Poverty and Environment Network (PEN) household surveys [31] and the World Bank’s Living Standards Measurement Surveys (LSMS). Many of the assets and wealth questions had to be adapted by the team to the rural Malagasy context, as the assets owned by many households are so limited that the standard LSMS items would not separate our households sufficiently. This was based on the expert judgement of our team (among our wider team we had people with many decades of experience of field research in rural Madagascar), as well as on the information gathered during pilot surveys. Project-specific questions were added about land use and access to and use of forests. The key variables covered in the survey are shown in Table 3.
The choice experiment aimed to assess the opportunity costs experienced by households prevented from clearing forest for swidden agriculture due to the introduction of conservation restrictions. The choice experiment used the willingness to accept (WTA) format in all sites, with the willingness to pay (WTP) format also being conducted in one site to compare the two formats [20]. The choice experiment method is rooted in Lancaster’s model of consumer choice, which proposed that consumers derive satisfaction not from goods themselves but from the attributes they provide [32]. For example, a lake can be decomposed in terms of its turbidity, its recreational facilities (fishing, swimming), and its ecological quality. Changing attribute levels will essentially result in a different good being valued; choice experiments therefore focus on changes in these attributes. The attributes and levels in our surveys were informed by three focus group discussions and pilot testing of the design with 50 respondents. Based on the focus groups and pilot tests, four attributes with varying levels were included in the final choice tasks: i) Total cash donations (framed as development assistance) or total payments made to the government (in WTP format); ii) Number of annual instalments over which the household would receive/pay the cash; iii) Support for improved rice farming; and iv) Clearance of new forestlands for agriculture. The forest clearance attribute had three levels: free clearance (forest protection is lifted), permit for one hectare of clearance, and no clearance (strict enforcement of forest conservation). We combined alternative levels of the four attributes in choice tasks using an efficient design that seeks to minimize the standard error of the coefficients to be estimated [33], and optimised the fractional factorial design for d-efficiency for the multinomial logit model based on information on the signs of the parameters obtained from the piloting [34].
The design generated 12 choice tasks which were divided into two blocks of six choice tasks each, with each respondent randomly assigned one of the two blocks in the experiment. An example of a choice task is presented in Fig. 3, and more detail of the choice experiment is available in the documentation included with Data Citation 1 and in references 20, 21 and 29.
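The blocking and random-assignment step can be illustrated as below. This Python sketch does not reproduce the d-efficient design itself (which was generated with specialised design software); the task labels are hypothetical stand-ins for the 12 designed choice tasks.

```python
import random

def make_blocks(tasks, n_blocks=2):
    """Split the designed choice tasks into equal blocks (here 12 -> 2 x 6)."""
    assert len(tasks) % n_blocks == 0, "tasks must divide evenly into blocks"
    size = len(tasks) // n_blocks
    return [tasks[i * size:(i + 1) * size] for i in range(n_blocks)]

tasks = [f"task{i:02d}" for i in range(1, 13)]   # 12 tasks from the design
blocks = make_blocks(tasks)

# each respondent is randomly assigned one of the two blocks of six tasks
rng = random.Random(0)
respondent_block = rng.choice(blocks)
```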
At the end of the survey respondents were asked if they would be willing to be interviewed in further surveys. 94% agreed to be included in our sampling frame for phase two.
### Phase two (in-depth agricultural survey and wild-harvested product survey)
Phase two interviews were carried out with a stratified random sample of the phase one households. We stratified households according to household size and land-holding and randomly sampled 40–50 households to represent these strata. The agricultural survey often took a whole day to complete while the wild-harvested product follow-up survey took between 30 minutes and 1.5 hours.
#### Agricultural survey
The agricultural survey comprised two major sections: (i) general household-level information, including land ownership, land access, land use, livestock inputs and outputs, off-farm income, and information on clearance of primary forest; and (ii) field (plot) level data on the history of each individual plot (when it was cleared from forest, crop cycles) and inputs and outputs over the last complete agricultural year. We visited all accessible plots and mapped the boundaries of over 80% of them with a handheld GPS in order to understand the distribution of land holdings relative to the forest frontier and to calculate the size of each plot (649 visited, 520 mapped, out of 898 plots recorded). Field-level data for all the accessible plots was recorded at the plot location to aid recall, and in-depth agricultural inputs-outputs data was collected for all the plots cultivated by the households during the previous full farming year (721 in total). Overall, we collected plot-level data for 898 plots belonging to the 171 households.
Cognisant of the potential limitations of recall surveys, we identified a number of households in the Zahamena site who agreed to keep agricultural logs (daily or weekly) for the ongoing agricultural year, to allow us to investigate the reliability of recall in the agricultural surveys. These logs covered the same information as the agricultural inputs-outputs and off-farm income surveys, but recorded either daily or weekly. The formats used by the households to keep these logs and the data produced are archived alongside the main data. Where inputs and outputs were recorded in local units, these have also been converted to standard units in the dataset. Information on specific conversion factors was gathered at the community level from local markets and through measurement by researchers in the field as required.
Finally, we carried out yield measurements for hill rice on a selected number of plots during the harvest period in Zahamena. The selection of these plots was opportunistic rather than random, based on which plots were ready for harvesting during our team’s visit to the site at harvest period. A total of 11 hill rice plots were selected for yield measurement. To estimate the plot yield, we randomly selected ten 1 m2 quadrats in each plot, harvested the rice from these quadrats with the help of members of the households farming the plot, and measured the harvest in both standard and local units. The average yield from the 10 sample quadrats was used to extrapolate to the whole plot using the plot area.
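The quadrat-to-plot extrapolation described above amounts to a simple area scaling. A minimal sketch (with illustrative numbers, not actual survey values):

```python
def plot_yield(quadrat_yields_kg, plot_area_m2):
    """Extrapolate whole-plot hill-rice yield from 1 m2 quadrat harvests.

    quadrat_yields_kg: harvests (kg) from the ten randomly placed 1 m2
    quadrats; plot_area_m2: plot area from the GPS boundary mapping.
    Returns (mean yield in kg per m2, estimated whole-plot yield in kg).
    """
    mean_kg_per_m2 = sum(quadrat_yields_kg) / len(quadrat_yields_kg)
    return mean_kg_per_m2, mean_kg_per_m2 * plot_area_m2

# illustrative: ten quadrats each yielding 0.25 kg on a 2,500 m2 plot
mean_y, total_y = plot_yield([0.25] * 10, plot_area_m2=2500)
# mean_y -> 0.25 kg/m2, total_y -> 625.0 kg
```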
#### Wild-harvested product survey
In the wild-harvested product survey (carried out a few months later in each site), we attempted to reach all households who had been interviewed in the agricultural survey. We managed to reach all but two households, meaning that these follow-up surveys covered 169 of the 171 households targeted.
Before conducting the wild-harvested products survey in each site, we carried out focus group discussions to identify a prioritised list of the most important wild-harvested products at each site. The survey then gathered data on each of the wild-harvested products on the prioritised list, plus any additional products that a household considered important. The data collected included the seasonal collection of wild-harvested products, their use and sale, their importance to the livelihood of the household, their historical abundance, and perceptions regarding their likely future availability.
### Code availability
This study did not use any computer code to generate the dataset. Microsoft Excel was used to enter, store and quality-check the collected data. Where new variables are calculated or derived from other survey variables, these are detailed in the metadata along with the functions used to derive them.
## Data Records
Two archived datasets, both containing primary data and accompanying documentation, relate to this data paper (see Table 2). The first covers phase one (the household survey and choice experiment) (P4GES_HHS_CE.zip, Data Citation 1). The data was collected between June 2014 and June 2015 and comprises 603 households. The second covers phase two (the agricultural survey and wild-harvested product survey) (P4GES_AGR_WHP.zip, Data Citation 2).
## Technical Validation
A number of steps were employed from the beginning of the study to ensure the quality of the data collected and of the datasets deposited in the archive. First, we conducted extensive reconnaissance, including key informant interviews, to ensure our research team (many of whom had experience of the study region prior to this project) had a good understanding of the local context to inform the development of the survey instruments. We also conducted extensive pre-testing of questions to ensure they were relevant and well understood locally.
Secondly, our research team included native Malagasy speakers, and only one of the non-Malagasy researchers involved in designing the survey and developing protocols in the field does not speak Malagasy. The Malagasy version of the survey instruments was therefore developed alongside the English version and we constantly discussed how particular concepts would be communicated in Malagasy. The lead researcher from Bangor University who developed the choice experiment (OSR) is a native Malagasy speaker, which was vitally important in ensuring that this challenging survey was developed in a way which could be effectively understood by respondents. The validity of the choice experiment was explicitly tested before being rolled out across study sites [20,21]. The archived English version of the instruments was reverse-translated from the Malagasy version used in the field.
Third, we made a conscious decision not to depend on a large team of enumerators hired after the surveys were developed, trained and then deployed to collect data. Instead we worked closely with a relatively small group of research assistants, the majority of whom already had experience in the study area and of social research, and most of whom stayed with the project for at least three years (through reconnaissance, development of the survey instruments, piloting, data collection, cleaning and archiving). Many of these research assistants are co-authors on the papers coming out of this dataset. In the phase one surveys, this core team was helped by between one and three assistants (depending on the site). We provided comprehensive training to everyone involved in data collection. The acknowledgements section provides the names of everyone involved in data collection.
Fourth, the senior researchers (the authors of this paper) closely supervised data collection in the field, particularly at the beginning of the work in each phase at each site. The lead researcher who developed the choice experiment (OSR) was in the field for the whole of phase one (household survey and choice experiment) at two of the sites (Ampahitra and Mantadia; three months in total). MP was also regularly in the field for extended periods, spending eight weeks in the field in total, while JPGJ, NH and JHR made shorter visits.
Fifth, at the end of each survey our team recorded answers to questions relating to their perception of how well the interview had gone including how open and engaged the respondent seemed and how well they had understood the process. They also kept daily logs based on their interviews and interactions with the respondents, and other general observations in the field, and compared notes with each other on a daily basis. These were helpful in learning from each other, and in resolving any minor issues during the surveys.
Finally, a great deal of time and effort was put into data entry, data checks and validation. In the majority of cases, the research assistants who carried out the surveys entered the data for the households that they had interviewed. This raw data was then checked systematically by another member of the team for any errors or inconsistencies. The data was then coded for clarity and consistency. The coded data was checked further by senior members of the research team, checking especially for potential typing errors and extreme values. Where inconsistencies and potential errors were located, these were checked against the original raw data as well as the original paper survey sheets to reach a resolution.
## Usage Notes
### Data access conditions
Individual household-level data was anonymised before being submitted to the archives and we have not included GPS locations of the households (converting this data to ‘distance to the forest frontier’). However, given the detailed nature of the data collected at the household level, particularly regarding household demography and on- and off-farm income activities, there is a certain level of sensitivity surrounding the data. This potentially creates a situation where households within each survey site, particularly those with small sampling frames, could be identified from the survey data. For this reason, both datasets described in this paper have been made available as safeguarded data on the UK Data Archive’s data repository ReShare. Anyone wishing to download and use these data must register with the UKDA and agree to the conditions of their End User Licence, outlined at https://www.ukdataservice.ac.uk/get-data/how-to-access/conditions#/tab-end-user-licence. For commercial use, please contact the UK Data Service at help@ukdataservice.ac.uk.
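The conversion of household GPS fixes into a 'distance to the forest frontier' variable could look like the following sketch. The coordinates, field names and frontier points here are hypothetical illustrations, not values from the archived data, which contains only the derived distances.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def anonymise(household, frontier_points):
    """Replace a household's GPS fix with its distance (km) to the
    nearest digitised point on the forest frontier, dropping the raw
    coordinates from the record."""
    lat, lon = household.pop("lat"), household.pop("lon")
    household["dist_to_frontier_km"] = min(
        haversine_km(lat, lon, flat, flon) for flat, flon in frontier_points)
    return household

record = anonymise({"hh_id": 1, "lat": -18.90, "lon": 48.40},
                   frontier_points=[(-18.90, 48.40), (-19.00, 48.50)])
```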
### Variables included in both phase one and two
Some questions relating to land use and land holdings and the collection and use of wild-harvested products were asked in both phase one and phase two, so for the subsample interviewed in both phases some of this information is repeated in the two datasets. For a general analysis using a large sample size, phase one data is most appropriate, but phase two data allows more in-depth analysis.
|
{}
|
## careless850: Solve for x in the proportion
1. careless850
$\frac {4x^{2}}{8x^{2}-8x} = \frac {2}{x}$
2. careless850
Last question for my assignment.
3. redshift
Have you learned cross-multiplication? If so:
(x)(4x^2) = (2)(8x^2 - 8x)
- expand
- collect like terms
- isolate for x
Does that help?
4. careless850
No.
5. careless850
x = 16 and x = −16
6. careless850
x = 2!
7. redshift
(x)(4x^2) = (2)(8x^2 - 8x)
4x^3 = 16x^2 - 16x
4x^3 - 16x^2 + 16x = 0
4x(x^2 - 4x + 4) = 0
For 4x = 0: x = 0
For x^2 - 4x + 4 = 0: what multiplies to 4 and adds to -4? -2 and -2
x^2 - 4x + 4 = (x - 2)^2
(x - 2)^2 = 0
x - 2 = 0
x = 2
8. careless850
Yes!
9. redshift
and x = 0! (although x = 0 must be rejected: it makes both denominators in the original proportion zero, so x = 2 is the only solution)
10. careless850
:]
11. redshift
But do you understand!?!?!?!
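The algebra in the thread can be checked with exact rational arithmetic; note that x = 0, which falls out of the factored cubic, does not satisfy the original proportion because it makes both denominators zero. A small Python sketch:

```python
from fractions import Fraction

def satisfies(x):
    """True if x solves 4x^2 / (8x^2 - 8x) = 2/x, checked exactly."""
    x = Fraction(x)
    denom = 8 * x ** 2 - 8 * x
    if x == 0 or denom == 0:
        return False  # equation undefined here: any such root is extraneous
    return 4 * x ** 2 / denom == Fraction(2) / x

print(satisfies(2))    # True: the valid solution
print(satisfies(0))    # False: extraneous root from cross-multiplying
print(satisfies(16))   # False: the earlier guess does not work
```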
|
{}
|
# True or False: In an LPP, the minimum value of the objective function $Z=ax+by$ is always 0 if the origin is one of the corner points of the feasible region.
Toolbox:
• The objective function $Z=ax+by$, where $a$ and $b$ are constants, is to be minimized or maximized.
Step 1:
$m$ is the minimum value of $Z$ if the open half plane determined by $ax+by<m$ has no point in common with the feasible region.
Step 2:
In an LPP, the minimum value of the objective function $Z=ax+by$ is always 0 if the origin is one of the corner points of the feasible region.
It is a false statement: if $a$ or $b$ is negative, $Z$ can take a negative value at another corner point, so the minimum need not be $0$ even when the origin is a corner point.
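A concrete counterexample (with an assumed feasible region) makes this explicit: take the triangle with corners $(0,0)$, $(4,0)$ and $(0,4)$ and the objective $Z=-x+2y$. The origin is a corner point, yet the minimum of $Z$ over the corners is $-4$, not $0$.

```python
# Feasible region: x >= 0, y >= 0, x + y <= 4 (origin is a corner point)
corners = [(0, 0), (4, 0), (0, 4)]

def z(x, y, a=-1, b=2):
    # objective Z = ax + by with a negative coefficient a
    return a * x + b * y

values = [z(x, y) for x, y in corners]
print(values)       # [0, -4, 8]
print(min(values))  # -4: the minimum is not 0 despite the origin being a corner
```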
|
{}
|
## Solving piecewise set of 4th order homogeneous ODE...
Hi Maplers,
I'm struggling to write the correct code for this structural problem below.
The goal is to find the piecewise solution for the horizontal deflection of a vertical beam (representing an offshore wind turbine) consisting of 5 segments. Below are the equations of motion for the 5 segments, where segment 1 is in the soil, hence the ksoil*u1(z) term.
sysODE := {
EI2*diff(u2(z), z, z, z, z) + m2*g*diff(u2(z), z, z) = 0,
EI3*diff(u3(z), z, z, z, z) + m3*g*diff(u3(z), z, z) = 0,
EI4*diff(u4(z), z, z, z, z) + m4*g*diff(u4(z), z, z) = 0,
EI5*diff(u5(z), z, z, z, z) + m5*g*diff(u5(z), z, z) = 0,
EI1*diff(u1(z), z, z, z, z) + m1*g*diff(u1(z), z, z) + ksoil*u1(z) = 0}
To find the general solution the dsolve function is used. This gives 5 eqs describing u1(z).. u5(z), with in total 20 unknown constants depicted as: _C1.._C20.
Now I have 20 BCs, shown below:
bcs := {
eval(subs(z = z2, EI2*diff(u2, z, z, z)) + Fwave = subs(z = z2, EI3*diff(u3, z, z, z))),
eval(subs(z = z5, EI5*diff(u5, z, z, z)) + Fwind = 0),
eval(subs(z = 0, EI1*diff(u1, z, z)) = 0),
eval(subs(z = 0, EI1*diff(u1, z, z, z)) = 0),
eval(subs(z = z1, u1) = subs(z = z1, u2)),
eval(subs(z = z1, diff(u1, z)) = subs(z = z2, diff(u2, z))),
eval(subs(z = z1, diff(u1, z, z)) = subs(z = z2, diff(u2, z, z))),
eval(subs(z = z1, diff(u1, z, z, z)) = subs(z = z1, diff(u1, z, z, z))),
eval(subs(z = z2, u2) = subs(z = z3, u3)),
eval(subs(z = z2, diff(u2, z)) = subs(z = z2, diff(u3, z))),
eval(subs(z = z3, u3) = subs(z = z3, u4)),
eval(subs(z = z3, diff(u3, z)) = subs(z = z3, diff(u4, z))),
eval(subs(z = z3, diff(u3, z, z)) = subs(z = z3, diff(u4, z, z))),
eval(subs(z = z3, diff(u3, z, z, z)) = subs(z = z3, diff(u4, z, z, z))),
eval(subs(z = z4, u4) = subs(z = z5, u5)),
eval(subs(z = z4, diff(u4, z)) = subs(z = z4, diff(u5, z))),
eval(subs(z = z4, diff(u4, z, z)) = subs(z = z4, diff(u5, z, z))),
eval(subs(z = z4, diff(u4, z, z, z)) = subs(z = z4, diff(u5, z, z, z))),
eval(subs(z = z5, EI5*diff(u5, z, z)) = 0)};
How do I write the correct Maple-code to find C1..C20.
krgs,
Floris
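Not Maple syntax, but the workflow in question (get the general solution from dsolve, then hand the boundary-condition equations and the set of constants to solve) can be sketched on a single clamped-free segment in Python/SymPy; the load case and parameters below are hypothetical, and in Maple the last step would be `solve(bcs, {_C1 .. _C20})` applied to your 5-segment system:

```python
import sympy as sp

z, L, EI, w = sp.symbols('z L EI w', positive=True)
u = sp.Function('u')

# Single-segment sketch: EI*u'''' = w (uniform load),
# clamped at z = 0, free at z = L.
ode = sp.Eq(EI * u(z).diff(z, 4), w)
gen = sp.dsolve(ode, u(z))   # general solution with constants C1..C4
expr = gen.rhs

# Boundary conditions written as expressions that must equal zero.
bcs = [
    expr.subs(z, 0),                 # u(0) = 0
    expr.diff(z).subs(z, 0),         # u'(0) = 0
    expr.diff(z, 2).subs(z, L),      # u''(L) = 0  (no moment at free end)
    expr.diff(z, 3).subs(z, L),      # u'''(L) = 0 (no shear at free end)
]
consts = sorted(expr.free_symbols - {z, L, EI, w}, key=lambda s: s.name)
sol = sp.solve(bcs, consts)          # solve for C1..C4
u_sol = sp.simplify(expr.subs(sol))

# Tip deflection reproduces the classic cantilever result w*L^4/(8*EI).
print(sp.simplify(u_sol.subs(z, L)))
```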
## Windows Maple 2023 crashes if Maplets will be used...
Hi,
I use an OpenMaple environment for my academic activity, especially for plotting results outside of the Maple GUI. Until version 2023 everything was working nicely, but in 2023 calling Maplets kills Maple 2023 and, of course, my jobs. Example to try:
#include <stdio.h>
#include <stdlib.h>
#include "maplec.h"
/* callback used for directing result output */
static void M_DECL textCallBack(void *data, int tag, const char *output)
{
printf("%s\n", output);
}
int main(int argc, char *argv[])
{
char err[2048]; /* command input and error string buffers */
MKernelVector kv; /* Maple kernel handle */
MCallBackVectorDesc cb ={textCallBack,
0, /* errorCallBack not used */
0, /* statusCallBack not used */
0, /* readLineCallBack not used */
0, /* redirectCallBack not used */
0, /* streamCallBack not used */
0, /* queryInterrupt not used */
0 /* callBackCallBack not used */
};
ALGEB r, l; /* Maple data-structures */
/* initialize Maple */
if((kv=StartMaple(argc, argv, &cb, NULL, NULL, err))==NULL) {
printf("Fatal error, %s\n", err);
return(1);
}
r=EvalMapleStatement(kv, "with(Maplets[Elements]): SimpleMaplet:=Maplet([Button('Close', Shutdown())]): Maplets[Display](SimpleMaplet):");
StopMaple(kv);
return(0);
}
Working in Maple 2020-2022, crashes in 2023
After a long investigation I found that the problem lies in maple.dll.
Is it only for me, or is it a general bug?
## Is there any bug in the new Quantifier Elimination...
The QuantifierElimination package has been added in Maple 2023.
However, in the following example, the old (yet not obsolete) RegularChains:-SemiAlgebraicSetTools:-QuantifierElimination command can solve the problem, but strangely, the new QuantifierElimination:-QuantifierEliminate command simply returns an error.
So, is there any bug in Maple's QuantifierElimination package?
QuantifierElimination[QuantifierEliminate](forall([x, y, t], Implies(And(x^3 + y^2 - x = t, And(t^2 = 4/27, t < 0)), x^2 + y^2 >= rho)));
(By the way, it appears that doesn't work....)
## How can I set the attribute "Parameters" in a use...
Here is a notional example of a user defined Probability Distribution:
• One of the "attribute" or "property" (sorry but I don't know the correct word to use) in any known distribution (that is already implemented in Maple) is Parameters: an example is provided for a Gaussian RV.
• In a first stage I define a simple Distribution named MyDist and declare X as a random variable of distribution MyDist.
• Then I decide to complete the definition of MyDist by adding the "property" Parameters.
I get an error I don't know how to get rid of.
Does anyone have any idea how to fix that?
PS: I browsed the Statistics library to try to understand how this "property" is defined for a known distribution, but I couldn't find any clue.
> restart
> with(Statistics):
> G := RandomVariable(Normal(a, b)): attributes(G)[3]: exports(%); (attributes(G)[3]):-Parameters;
> MyDist := (mu, Sigma) -> Distribution(Mean=mu, Variance=Sigma): X := RandomVariable(MyDist(m, V)); (Mean, Variance)(X);
> MyDist := (mu, Sigma) -> Distribution(Mean=mu, Variance=Sigma, Parameters=[mu, Sigma]): Y := RandomVariable(MyDist(m, V));
## How do I solve this ODE?...
I need help to solve this ODE,
I didn't get the series values F(k+3), Theta(k+2), phi(k+2); an error comes up in the summation values.
Also, how do I find the unknown parameters A, B, C?
## I need help to solve ODE using forward-backward sw...
I need Maple code for solving a system of ODEs using the forward-backward sweep method.
## Maple 2023 workbook error...
While working in a test workbook in Maple 2023, I added a file, went away for a while, and came back and started adding code to the workbook. At some point it did an autosave, I suppose, and this error came up four times in a row.
## Difficulty with pdsolve and electromagnetics...
perturb_mag_current_density_2.mw
I am trying to calculate the electric field E induced in a vibrating cantilever of conductive material, oscillating in the field of a permanent magnet. However, I am having some difficulty getting pdsolve to work the way I want it to. I'm also not sure if the partial differential equations I derived from Maxwell's equations are correct, or if the boundary conditions for the electric field in the cantilever are correct. Currently pdsolve gives me no solutions, which makes me think that either my PDEs or my BCs are not correct. It may be that I need to try some sort of numerical method as well. I am assuming that the z component of the electric field is just 0. My third PDE comes from setting the divergence of the electric field to 0. My first two PDEs come from the vector laplacian and its relation to the divergence and curl:
Laplacian(E) = Grad(Div(E)) - Curl(Curl(E))
The x and y components of this should be my first and second PDE, respectively. Note that in this equation the divergence of E is 0, and the curl of E is -dB/dt, where B is the magnetic field.
My boundary conditions are simply that the components of the electric field at the surface of the cantilever is always tangent to the surface.
I have tried various simplifications, such as setting the right hand side of the PDEs to 0, and still I don't get any solution.
My question: Are my PDEs and BCs sensible? And if so, what do I need to do with pdsolve to get a proper solution?
## How can I implement a simple code to obtain the di...
Let us consider the following assumptions:
Any set of binomials $B \in R=K[x_1, \cdots, x_n]$ induces an equivalence relation on the set of monomials in $R$ under which $m_1∼m_2$ if and only if $m_1−tm_2\in \langle B \rangle$ for some non-zero $t\in K$. As a $K$-vector space, the quotient ring $R/\langle B \rangle$ is spanned by the equivalence classes of monomials. Now let $f =x^2−y^2$ be a binomial in $K[x, y]$. Among monomials of total degree three, $x^3$ and $xy^2$, as well as $x^2y$ and $y^3$ become equal in $K[x, y] / \langle f\rangle$.
Why is the degree three part of the quotient two-dimensional, with one basis vector per equivalence class?
Also, why does the polynomial $f=x^3+xy^2+y^3$ map to a binomial with a coefficient matrix [2, 1]? I think this matrix arises from the matrix [1, 1, 1, 0] by summing the columns corresponding to $x^3$ and $xy^2$ and those for $x^2y$ and $y^3$.
How can I implement simple code to obtain these results in Maple?
I am looking forward to hearing any help and guidance.
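Although the question asks for Maple, the computation itself is just normal-form reduction modulo the ideal $\langle f\rangle$, which any CAS can do (in Maple you would use `Groebner:-NormalForm`). A sketch in Python/SymPy; since $\langle f\rangle$ is principal, $\{f\}$ is already a Groebner basis:

```python
from sympy import symbols, reduced

x, y = symbols('x y')
f = x**2 - y**2   # the binomial; {f} is already a Groebner basis of <f>

# Degree-3 monomials collapse into two equivalence classes:
# x^3 ~ x*y^2 and x^2*y ~ y^3, so the degree-3 part of K[x,y]/<f>
# is 2-dimensional with basis {x*y^2, y^3} (the normal forms).
_, r1 = reduced(x**3, [f], x, y, order='lex')
_, r2 = reduced(x**2 * y, [f], x, y, order='lex')
print(r1, r2)     # x*y**2  y**3

# f3 = x^3 + x*y^2 + y^3 has coefficients [1, 1, 0, 1] on
# (x^3, x*y^2, x^2*y, y^3); summing within each class gives [2, 1]:
_, r = reduced(x**3 + x*y**2 + y**3, [f], x, y, order='lex')
print(r)          # 2*x*y**2 + y**3
```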
## Create excel file with sheets of independent varia...
Given an Excel sheet with data for regression as input, column Y will be the dependent variable
Say a maximum of 120 independent variables are there
A function that takes the Excel file as input, with headers as in the file
Pick all possible subsets of maximum size 5, that is, sizes 2, 3, 4, 5 only
Pick each and eliminate it; do not store them all together, as otherwise storage will be a problem with more data
Copy only those subsets of independent variables to separate sheets
Only those subsets
where the variables are independent pairwise between them
Step 2 function
Then I need to run a multiple linear regression on these subsets, with training and test sets
And choose the best model in each case of number of variables
That is best with 2 variables
Again with 3 variables
Similarly 4 and 5
At least up to 2 to 3 variables, if possible, would be good; kindly help with your guidance on what is possible
If you are not able to give the program for train/test regression with charts, no issues
Kindly help with at least storing the pairwise-independent subsets of size 2 and 3 into separate sheets in an Excel file
I will do the regression at my end
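Not Maple, but the subset-generation step being asked for can be sketched in Python with hypothetical column names; writing each subset's columns to its own sheet would then be one `DataFrame.to_excel` call per subset via `pandas.ExcelWriter`:

```python
from itertools import combinations

# Hypothetical header row: one dependent column Y plus independent columns.
headers = ["Y", "X1", "X2", "X3", "X4", "X5"]
independent = [h for h in headers if h != "Y"]

# All subsets of the independent variables of sizes 2..5. They are
# generated one at a time; collected in a list here only to count them.
# In practice, test pairwise independence of the columns in `combo`
# (e.g. via a correlation threshold) and, if it passes, copy just those
# columns to a separate sheet with pandas.ExcelWriter, then discard it.
subsets = []
for k in range(2, 6):
    for combo in combinations(independent, k):
        subsets.append(combo)

# With 5 independent columns: C(5,2)+C(5,3)+C(5,4)+C(5,5) = 10+10+5+1
print(len(subsets))  # 26
```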
## Taking an excel sheet perform operation create new...
I will have an Excel sheet with a minimum of 500 columns and 1000 rows, say
As a sample to explain my question, I attach a demo Excel file
I am looking to form all 2-way multiplications and add them as columns to my Excel sheet, returning it as a new Excel sheet
The column name for a new 2-way column should be the
header name of the column you are multiplying * the name of the other column
Now, in the sample file, if I multiply the column named A with the column named B, I get a new column with header A*B; the header name should be inserted, and
below it all the elements of column A multiplied with those of B should come
I am looking to form columns for all possible 2-way multiplications for the Excel file I will give.
As you can see the demo file
Excel_to_explain.xlsx
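A sketch of this in Python/pandas (stand-in data, since I don't have the attached demo file; the commented `to_excel` line shows where the new sheet would be written):

```python
from itertools import combinations

import pandas as pd

# Small stand-in for the attached demo sheet (hypothetical values).
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})

# Add one product column per unordered pair, named "A*B" etc.
cols = list(df.columns)
for c1, c2 in combinations(cols, 2):
    df[f"{c1}*{c2}"] = df[c1] * df[c2]

print(list(df.columns))    # ['A', 'B', 'C', 'A*B', 'A*C', 'B*C']
print(df["A*B"].tolist())  # [4, 10, 18]
# df.to_excel("with_products.xlsx", index=False)  # write the new sheet
```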
## Global setting of character size in maple 2022...
Hello!
I have changed the global character size from 12 to 14, but it is not stable. Suddenly size 12 comes up when I try to copy a row to another row in the same worksheet.
It is very annoying!!
Any tips????
Regards
Kjell
## limit at infinity ...
Dear all
I have an equation obtained from partial derivatives of some functions, and I would like to compute the limit as my variable, named Pe, goes to infinity.
I hope to get a more appropriate presentation of my code to obtain the limit (Pe -> +infinity).
limit_infinity.mw
All derivatives are computed correctly, but how can I add the limit as Pe goes to infinity?
Thank you
## Maple on-line help...
Could someone explain to me why the Maple online help pages (see here) have apparently been matching the Maple 2021 release for the past few years? Does this mean that all the information on the Maple online help pages is out of date, or is just this one specific page terribly out of date?
|
{}
|
Calculating the error of Bayes classifier analytically
If two classes $w_1$ and $w_2$ have normal distributions with known parameters ($M_1$, $M_2$ as their means and $\Sigma_1$,$\Sigma_2$ as their covariances), how can we calculate the error of the Bayes classifier for them theoretically?
Also suppose the variables are in N-dimensional space.
Note: A copy of this question is also available at https://math.stackexchange.com/q/11891/4051 that is still unanswered. If any of these question get answered, the other one will be deleted.
• Is this question the same as stats.stackexchange.com/q/4942/919 ? – whuber Nov 26 '10 at 20:40
• @whuber Your answer suggests it is the case indeed. – chl Nov 26 '10 at 20:47
• @whuber: Yes. i don't know this question suited to which one. I am waiting for a response for one to remove the other one. Is it against the rules? – Isaac Nov 26 '10 at 20:49
• It might be easier, and surely would be cleaner, to edit the original question. However, sometimes a question is restarted as a new one when the earlier version collects too many comments that are made irrelevant by the edits, so it's a judgment call. In any event it's helpful to place cross-references between closely related questions to help people connect them easily. – whuber Nov 26 '10 at 20:52
There's no closed form, but you could do it numerically.
As a concrete example, consider two Gaussians with following parameters
$$\mu_1=\left(\begin{matrix} -1\\\\ -1 \end{matrix}\right), \mu_2=\left(\begin{matrix} 1\\\\ 1 \end{matrix}\right)$$
$$\Sigma_1=\left(\begin{matrix} 2&1/2\\\\ 1/2&2 \end{matrix}\right),\ \Sigma_2=\left(\begin{matrix} 1&0\\\\ 0&1 \end{matrix}\right)$$
The Bayes-optimal classifier boundary corresponds to the set of points where the two densities are equal.
Since your classifier will pick the most likely class at every point, you need to integrate, at each point, the density that is not the highest one. For the problem above, this corresponds to the volumes of the following regions
You can integrate the two pieces separately using some numerical integration package. For the problem above I get 0.253579 using the following Mathematica code
dens1[x_, y_] = PDF[MultinormalDistribution[{-1, -1}, {{2, 1/2}, {1/2, 2}}], {x, y}];
dens2[x_, y_] = PDF[MultinormalDistribution[{1, 1}, {{1, 0}, {0, 1}}], {x, y}];
piece1 = NIntegrate[dens2[x, y] Boole[dens1[x, y] > dens2[x, y]], {x, -Infinity, Infinity}, {y, -Infinity, Infinity}];
piece2 = NIntegrate[dens1[x, y] Boole[dens2[x, y] > dens1[x, y]], {x, -Infinity, Infinity}, {y, -Infinity, Infinity}];
piece1 + piece2
• Nice answer. Could you please provide commands to reproduce your beautiful figures? – Andrej Oct 5 '12 at 13:42
• (+1) These graphics are beautiful. – COOLSerdash Jun 25 '13 at 7:05
It seems that you can go about this in two ways, depending on what model assumptions you are happy to make.
Generative Approach
Assuming a generative model for the data, you also need to know the prior probabilities of each class for an analytic statement of the classification error. Look up Discriminant Analysis to get the optimal decision boundary in closed form, then compute the areas on the wrong sides of it for each class to get the error rates.
I assume this is the approach intended by your invocation of the Bayes classifier, which is defined only when everything about the data generating process is specified. Since this is seldom possible it is always also worth considering the
Discrimination Approach
If you don't want to or cannot specify the prior class probabilities, you can take advantage of the fact that the discriminant function can under many circumstances (roughly, exponential family class conditional distributions) be modelled directly by a logistic regression model. The error rate calculation is then the one for the relevant logistic regression model.
For a comparison of approaches and a discussion of error rates, Jordan 1995 and Jordan 2001 and references may be of interest.
Here you might find several clues for your question; maybe the full response is not there, but certainly very valuable parts of it are. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2766788/
In classification with balanced classes, the Bayes error rate (BER) is exactly equal to $$(1 - TV) / 2$$, where $$TV$$ is the total variation distance between the +ve and -ve conditional distributions of the features. See Theorem 1 of this paper.
To complete the answer: it is not hard to find good references on computing the TV distance between multivariate Gaussian distributions.
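The identity behind this is $\int \min(p,q) = 1 - TV(p,q)$, so with balanced classes BER $= \frac{1}{2}\int \min(p,q) = (1-TV)/2$. A quick numerical sanity check in Python for two 1-D unit-variance Gaussians, for which the overlap has the closed form $2\Phi(-\Delta/2\sigma)$:

```python
import math

import numpy as np

# Class-conditional densities: N(0,1) and N(2,1), balanced priors.
def p(x):
    return np.exp(-0.5 * x**2) / math.sqrt(2 * math.pi)

def q(x):
    return np.exp(-0.5 * (x - 2.0) ** 2) / math.sqrt(2 * math.pi)

step = 1e-3
xs = np.arange(-10, 12, step)
overlap = np.minimum(p(xs), q(xs)).sum() * step  # ∫ min(p, q)
tv = 0.5 * np.abs(p(xs) - q(xs)).sum() * step    # total variation distance
ber = 0.5 * overlap                              # Bayes error, balanced classes

print(round(ber, 4), round((1 - tv) / 2, 4))  # equal: ~0.1587
# Closed form for this case: Phi(-1) = 0.158655...
```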
|
{}
|
# Tag Info
## New answers tagged font-encodings
5
You can use the --no-fonts option to dvisvgm; then it looks OK, although all the text is SVG paths rather than real text.
2
I am not sure what your problem is. But I think if you just want a quick and dirty solution, you can just put a {} between f and i, i.e., f{}irst.
9
Testing with Firefox 38.0.5 gives indeed a strange result, a test file with different ligatures and characters with smaller codes: \documentclass[preview]{standalone} \begin{document} first flavored coffee --- in the office \end{document} The generated SVG file contains the odd characters: <?xml version='1.0'?> <!-- This file was generated by ...
3
If you have Charis SIL in your computer (or any font that supports IPA characters), then you can use Xelatex or Lualatex. Here's an example with Xelatex. Alternatively you can also use the tipa package with pdflatex, see also the relevant Wikipedia page, but you would not be entering IPA directly, rather you'd use regular letters and Latex will interpret ...
1
You might want to switch to the more modern unicode engines, xelatex or lualatex. Just use the fontspec package (Latin Modern fonts are the default): \documentclass{article} \usepackage{fontspec} \begin{document} £ % lmodern character, in T1 encoding ỉ % lmodern character (ihookabove), in T5 encoding € % lmodern character, in TS1 encoding ...
0
Except \$ (for some reason…) you can typeset them directly if you load fontenc with options [TS1,T1], and if you use lmodern, load textcompanion. You also can use \pounds and \texteuro \documentclass[12pt]{article} \usepackage[utf8]{inputenc} \usepackage[TS1, T1]{fontenc} \usepackage{erewhon} \begin{document} \pounds\enspace £\enspace €\enspace ...
3
You should just change the single symbol which was disturbing you. This could look like: % arara: pdflatex \documentclass{article} \usepackage{libertine} \usepackage[libertine]{newtxmath} \DeclareSymbolFont{largesymbolsCM}{OMX}{cmex}{m}{n} % note: you have to take another name than "largesymbols" here, as we do not want to redefine the whole thing. ...
|
{}
|
# 3.5 Addition of velocities (Page 2/12)
## Take-home experiment: relative velocity of a boat
Fill a bathtub half-full of water. Take a toy boat or some other object that floats in water. Unplug the drain so water starts to drain. Try pushing the boat from one side of the tub to the other and perpendicular to the flow of water. Which way do you need to push the boat so that it ends up immediately opposite? Compare the directions of the flow of water, heading of the boat, and actual velocity of the boat.
## Adding velocities: a boat on a river
Refer to [link] , which shows a boat trying to go straight across the river. Let us calculate the magnitude and direction of the boat’s velocity relative to an observer on the shore, ${\mathbf{\text{v}}}_{\text{tot}}$ . The velocity of the boat, ${\mathbf{\text{v}}}_{\text{boat}}$ , is 0.75 m/s in the $y$ -direction relative to the river and the velocity of the river, ${\mathbf{\text{v}}}_{\text{river}}$ , is 1.20 m/s to the right.
Strategy
We start by choosing a coordinate system with its $x$ -axis parallel to the velocity of the river, as shown in [link] . Because the boat is directed straight toward the other shore, its velocity relative to the water is parallel to the $y$ -axis and perpendicular to the velocity of the river. Thus, we can add the two velocities by using the equations ${v}_{\text{tot}}=\sqrt{{v}_{x}^{2}+{v}_{y}^{2}}$ and $\theta ={\text{tan}}^{-1}\left({v}_{y}/{v}_{x}\right)$ directly.
Solution
The magnitude of the total velocity is
$v_\text{tot}=\sqrt{v_x^2+v_y^2},$
where
$v_x=v_\text{river}=1.20\text{ m/s}$
and
$v_y=v_\text{boat}=0.750\text{ m/s.}$
Thus,
$v_\text{tot}=\sqrt{(1.20\text{ m/s})^2+(0.750\text{ m/s})^2},$
yielding
$v_\text{tot}=1.42\text{ m/s.}$
The direction of the total velocity $\theta$ is given by:
$\theta=\tan^{-1}(v_y/v_x)=\tan^{-1}(0.750/1.20).$
This equation gives
$\theta=32.0º.$
Discussion
Both the magnitude $v$ and the direction $\theta$ of the total velocity are consistent with [link] . Note that because the velocity of the river is large compared with the velocity of the boat, it is swept rapidly downstream. This result is evidenced by the small angle (only $32.0º$ ) the total velocity has relative to the riverbank.
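The arithmetic in this example is easy to verify programmatically (a quick check, not part of the original text):

```python
import math

v_river = 1.20  # m/s, x-direction (velocity of the river)
v_boat = 0.750  # m/s, y-direction (boat relative to the water)

v_tot = math.hypot(v_river, v_boat)                # sqrt(vx^2 + vy^2)
theta = math.degrees(math.atan2(v_boat, v_river))  # tan^-1(vy/vx)

print(round(v_tot, 2), round(theta, 1))  # 1.42 32.0
```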
## Calculating velocity: wind velocity causes an airplane to drift
Calculate the wind velocity for the situation shown in [link] . The plane is known to be moving at 45.0 m/s due north relative to the air mass, while its velocity relative to the ground (its total velocity) is 38.0 m/s in a direction $\text{20}\text{.0º}$ west of north.
Strategy
In this problem, somewhat different from the previous example, we know the total velocity ${\mathbf{\text{v}}}_{\text{tot}}$ and that it is the sum of two other velocities, ${\mathbf{\text{v}}}_{\text{w}}$ (the wind) and ${\mathbf{\text{v}}}_{\text{p}}$ (the plane relative to the air mass). The quantity ${\mathbf{\text{v}}}_{\text{p}}$ is known, and we are asked to find ${\mathbf{\text{v}}}_{\text{w}}$ . None of the velocities are perpendicular, but it is possible to find their components along a common set of perpendicular axes. If we can find the components of ${\mathbf{\text{v}}}_{\text{w}}$ , then we can combine them to solve for its magnitude and direction. As shown in [link] , we choose a coordinate system with its x -axis due east and its y -axis due north (parallel to ${\mathbf{\text{v}}}_{\text{p}}$ ). (You may wish to look back at the discussion of the addition of vectors using perpendicular components in Vector Addition and Subtraction: Analytical Methods .)
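Carrying the strategy through numerically gives a sketch of the component bookkeeping (the worked solution itself continues on the next page of the original text):

```python
import math

# Total (ground) velocity: 38.0 m/s, 20.0 degrees west of north.
v_tot, angle = 38.0, math.radians(20.0)
vtot_x = -v_tot * math.sin(angle)  # east positive, so west is negative x
vtot_y = v_tot * math.cos(angle)   # north is positive y

# Plane relative to the air mass: 45.0 m/s due north.
vp_x, vp_y = 0.0, 45.0

# v_tot = v_p + v_w, so the wind components are the difference.
vw_x = vtot_x - vp_x
vw_y = vtot_y - vp_y

speed = math.hypot(vw_x, vw_y)
print(round(vw_x, 1), round(vw_y, 1))  # -13.0 -9.3
print(round(speed, 1))                 # ~16.0 m/s, toward the southwest
```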
how lasers can transmit information
griffts bridge derivative
below me
please explain; when a glass rod is rubbed with silk, it becomes positive and the silk becomes negative- yet both attracts dust. does dust have third types of charge that is attracted to both positive and negative
what is a conductor
Timothy
hello
Timothy
below me
why below you
Timothy
no....I said below me ...... nothing below .....ok?
dust particles contains both positive and negative charge particles
Mbutene
corona charge can verify
Stephen
when pressure increases the temperature remain what?
what is frequency
define precision briefly
CT scanners do not detect details smaller than about 0.5 mm. Is this limitation due to the wavelength of x rays? Explain.
hope this helps
what's critical angle
The Critical Angle Derivation So the critical angle is defined as the angle of incidence that provides an angle of refraction of 90-degrees. Make particular note that the critical angle is an angle of incidence value. For the water-air boundary, the critical angle is 48.6-degrees.
okay whatever
Chidalu
pls who can give the definition of relative density?
Temiloluwa
the ratio of the density of a substance to the density of a standard, usually water for a liquid or solid, and air for a gas.
Chidalu
What is momentum
mass ×velocity
Chidalu
it is the product of mass ×velocity of an object
Chidalu
how do I highlight a sentence]p? I select the sentence but get options like copy or web search but no highlight. tks. src
then you can edit your work anyway you want
Wat is the relationship between Instataneous velocity
Instantaneous velocity is defined as the rate of change of position for a time interval which is almost equal to zero
Astronomy
The potential in a region between x= 0 and x = 6.00 m lis V= a+ bx, where a = 10.0 V and b = -7.00 V/m. Determine (a) the potential atx=0, 3.00 m, and 6.00 m and (b) the magnitude and direction of the electric ficld at x =0, 3.00 m, and 6.00 m.
what is energy
hi all?
GIDEON
hey
Bitrus
energy is when you finally get up of your lazy azz and do some real work 😁
what is physics
what are the basic of physics
faith
base itself is physics
Vishlawath
tree physical properties of heat
tree is a type of organism that grows very tall and have a wood trunk and branches with leaves... how is that related to heat? what did you smoke man?
Does any teacher know... Progressive or retrograde, and accelerated or decelerated? V= +23 m/s V= +5 m/s 0__> 0__> __________________________> T= 0 T=6s
Claudia
|
{}
|
3 years ago
# Density and $T_1$ of surface and bulk spins in diamond in high magnetic field gradients.
Martin de Wit, Tjerk Oosterkamp, Marc de Voogd, Gesa Welker
We report on surface and bulk spin density measurements of diamond, using ultra-sensitive magnetic force microscopy with magnetic field gradients up to 0.5 T/$\mu$m. At temperatures between 25 and 800 mK, we measure the shifts in the resonance frequency and quality factor of a cantilever with a micromagnet attached to it. A recently developed theoretical analysis allows us to extract a surface spin density of 0.072 spins/nm$^2$ and a bulk spin density of 0.4 ppm from this data. In addition, we find an increase of the $T_1$ time of the surface spins in high magnetic field gradients due to the suppression of spin diffusion. Our technique is applicable to a variety of samples other than diamond, and could be of interest for several research fields where surface, interface or impurity bulk spin densities are an important factor.
Publisher URL: http://arxiv.org/abs/1801.07535
DOI: arXiv:1801.07535v1
You might also like
Discover & Discuss Important Research
Keeping up-to-date with research can feel impossible, with papers being published faster than you'll ever be able to read them. That's where Researcher comes in: we're simplifying discovery and making important discussions happen. With over 19,000 sources, including peer-reviewed journals, preprints, blogs, universities, podcasts and Live events across 10 research areas, you'll never miss what's important to you. It's like social media, but better. Oh, and we should mention - it's free.
Researcher displays publicly available abstracts and doesn’t host any full article content. If the content is open access, we will direct clicks from the abstracts to the publisher website and display the PDF copy on our platform. Clicks to view the full text will be directed to the publisher website, where only users with subscriptions or access through their institution are able to view the full article.
|
{}
|
# Memory Management in R, and SOAR
May 8, 2012
(This article was first published on Data and Analysis with R, at Work, and kindly contributed to R-bloggers)
The more I’ve worked with my really large data set, the more cumbersome the work has become to my work computer. Keep in mind I’ve got a quad core with 8 gigs of RAM. With growing irritation at how slow my work computer becomes at times while working with these data, I took to finding better ways of managing my memory in R.
The best/easiest solution I’ve found so far is in a package called SOAR. To put it simply, it allows you to store specific objects in R (data frames being the most important, for me) as RData files on your hard drive, and gives you the ability to analyze them in R without having them loaded into your RAM. I emphasized the term analyze because every time I try to add variables to the data frames that I store, the data frame comes back into RAM and once again slows me down.
An example might suffice:
> r = data.frame(a=rnorm(10,2,.5),b=rnorm(10,3,.5))
> r
a b
1 1.914092 3.074571
2 2.694049 3.479486
3 1.684653 3.491395
4 1.318480 3.816738
5 2.025016 3.107468
6 1.851811 3.708318
7 2.767788 2.636712
8 1.952930 3.164896
9 2.658366 3.973425
10 1.809752 2.599830
> library(SOAR)
> Sys.setenv(R_LOCAL_CACHE="testsession")
> ls()
[1] “r”
> Store(r)
> ls()
character(0)
> mean(r[,1])
[1] 2.067694
> r$c = rnorm(10,4,.5)
> ls()
[1] “r”
So, the first thing I did was to make a data frame with some columns, which got stored in my workspace, and thus loaded into RAM. Then, I initialized the SOAR library, and set my local cache to “testsession”. The practical implication of that is that a directory gets created within the current directory that R is working out of (in my case, “/home/inkhorn/testsession”), and that any objects passed to the Store command get saved as RData files in that directory.
Sure enough, you see my workspace before and after I store the r object. Now you see the object, now you don’t! But then, as I show, even though the object is not in the workspace, you can still analyze it (in my case, calculate a mean from one of the columns). However, as soon as I try to make a new column in the data frame… voila … it’s back in my workspace, and thus RAM!
So, unless I’m missing something about how the package is used, it doesn’t function exactly as I would like, but it’s still an improvement. Every time I’m done making new columns in the data frame, I just have to pass the object to the Store command, and away to the hard disk it goes, and out of my RAM. It’s quite liberating not having a stupendously heavy workspace, as when I’m trying to leave or enter R, it takes forever to save/load the workspace. With the heavy stuff sitting on the hard disk, leaving and entering R go by a lot faster.
Another thing I noticed is that if I keep the GLMs that I’ve generated in my workspace, that seems to take up a lot of RAM as well and slow things down. So, with writing the main dataframe to disk, and keeping GLMs out of memory, R is flying again!
|
{}
|
# Infinite series question with z-transform addendum
1. Dec 27, 2016
### kostoglotov
1. The problem statement, all variables and given/known data
Hello,
I am currently doing some holiday pre-study for signals analysis coming up next semester. I'm mainly using MIT OCW 6.003 from 2011 with some other web resources (youtube, etc).
The initial stuff is heavy on the old infinite series stuff, that seems often skimmed over in previous calculus study, and was for me.
Working from a transfer function as an infinite series, I was curious if I could figure out a closed-form of the infinite right hand sum (0 to inf) of $nR^{n-1}$
$$0+1+2R+3R^2+4R^3+5R^4+...$$
by working in reverse through parts of synthetic division I found that $\frac{1}{(1-R)^2}$ works for a closed form of the previous infinite series.
Then by the same technique found that an infinite right hand sum for $n^2a^n$ has the closed form $\frac{a(1+a)}{(1-a)^3}$.
I thought I saw a pattern ($\frac{a(1+a)^k}{(1-a)^{k+1}}, from \ n^ka^n$), but for the infinite right hand sum of $n^3a^n$ reversing through synthetic division reveals a closed form $\frac{a(1+4a+a^2)}{(1-a)^4}$...so I thought maybe the coefficients of the numerators polynomial are simply getting squared...but going through this again for $n^4a^n$ shows that this is not the case.
What is the general closed form formula for an infinite right hand series of $n^ka^n$?
Also, I am aware of the property of Z-Transform Differentiation, $nx[n] \leftrightarrow -z\frac{dX(z)}{dz}$. Does this relate to what I'm exploring above? If so, how?
PS. I'd love to know of any other good resources for self-teaching signals and systems stuff :)
2. Relevant equations
3. The attempt at a solution
2. Dec 27, 2016
### MisterX
I don't think there is a general closed form in terms of elementary functions. You seem to be asking about the polylogarithm function, just shifted by 1.
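For what it's worth, for positive integer k there is a closed rational form: the numerator coefficients the original poster found (1; 1,1; 1,4,1; 1,11,11,1; ...) are the Eulerian numbers, giving Sum_{n>=0} n^k a^n = (Sum_j A(k,j) a^(j+1)) / (1-a)^(k+1) for |a| < 1, which is exactly what the polylogarithm Li_{-k}(a) reduces to. A quick SymPy check against the series coefficients (Eulerian rows hard-coded for k = 3, 4):

```python
import sympy as sp

a = sp.symbols('a')

# Eulerian number rows A(k, 0..k-1) for k = 3 and k = 4.
eulerian = {3: [1, 4, 1], 4: [1, 11, 11, 1]}

for k, row in eulerian.items():
    numer = sum(c * a**(j + 1) for j, c in enumerate(row))
    closed = numer / (1 - a)**(k + 1)
    # Taylor-expand and compare the coefficient of a^n with n^k.
    series = sp.series(closed, a, 0, 9).removeO()
    for n in range(9):
        assert series.coeff(a, n) == n**k
print("coefficients match n^k for k = 3, 4")
```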
|
{}
|
I was wondering what would be the best waveform to generate for a phased array short range imaging radar? Square, sine, etc. Also what frequency would be best to operate at for the carrier wave given that this radar wants to point out people and dimensions of say a building or forest. So seeing through walls is an objective as well. I assume higher frequency for close range is best. Any info will help, thanks
• This is a really open question. You may need to focus it some more. But I do know that your frequency choices are limited to what the regulations allow. – Jeroen3 Jul 13 at 5:40
There are many things you need to consider when designing a radar system. In your case it seems that you have some top-level requirements in mind!
You're seeking a solution for a short range radar using a phased array for the purpose of imaging. The words are in bold because we are considering these to be the main design considerations. We'll go over what these words translate to in terms of overall radar architecture, receiver, waveforms, and some common trades associated with them.
## Short Range Considerations
"Classic" pulse radars are the ones we think of that transmit a simple rectangular pulse of length $$\\tau\$$ at some carrier frequency $$\f_c\$$. During this time, the receiver must be off in order to avoid damaging the receiver and/or to avoid self-interference.
Since the receiver is off during this time, we must wait until our pulse is fully transmitted before we can turn on the receiver and begin receiving target returns. This minimum range is called the blind range and is given by
$$R_{blind} = \frac{c\tau}{2}$$
A 100 ns pulse would yield a blind range of 15 m. In other words, the pulse must travel at least 15 m before you can try and receive the return signal.
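As a sanity check on that number, the blind range is a one-line calculation (a throwaway sketch, not from the answer):

```python
C = 299_792_458.0  # speed of light, m/s

def blind_range(pulse_width_s):
    # R_blind = c * tau / 2: the receiver stays off for the full pulse,
    # so returns from closer than this distance are lost.
    return C * pulse_width_s / 2.0
```

`blind_range(100e-9)` comes out at roughly 15 m, matching the figure above.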
Keep in mind that "short range" means something very different to an automotive radar than it would to a traffic control radar.
We can avoid the blind range problem by considering a different kind of radar system: Frequency-Modulated Continuous Wave (FMCW). This type of system is continuously radiating a frequency-modulated wave and the receiver is always on. Below is an example of a linear up-chirp
This is the type of radar you see a lot in automotive applications (e.g. 77 GHz carrier) where we need to eliminate blind ranges for safety. In addition, we can also get the benefit of increased range resolution that we will talk more about below. This system requires a different kind of receiver that has its own challenges.
## Imaging Considerations
Whether "imaging" here means generic target separation in multiple measurement dimensions or actually forming a picture, we need good range, Doppler, and angle resolution to do so. We'll go over range resolution because it's the more straight-forward of the three in my opinion.
Range resolution is a measure of how closely two targets can be spaced and still be told apart. The coarser a system's range resolution, the farther apart targets must be so that they do not meld into one during processing. For imaging, you generally want fine resolution so you can differentiate between closely spaced targets.
Range resolution comes down to the bandwidth of the signal you transmit. For the case of a simple pulse, the range resolution is approximated by
$$\Delta R = \frac{c}{2B} = \frac{c\tau}{2}$$
Here we've made the approximation that the bandwidth of a rectangular pulse is the inverse of its pulse width, so that $B \approx 1/\tau$. You can immediately see the trade-off: shorter pulse widths yield better range resolution, but we suffer decreased energy on the target and thus decreased detection performance.
We can decouple the pulse width from the bandwidth. In order to do this, we introduce some kind of modulation to the rectangular pulse to increase its bandwidth. We've already gone over one type: frequency modulation. Specifically, we looked at a linear frequency modulation (LFM) signal where we linearly increase the frequency during the pulse.
Consider two 100 ns pulses:
1. Rectangular pulse
2. LFM pulse with 100 MHz linear chirp
Using the range resolution equations
$$\Delta R_{Rect} = \frac{c}{2B} = \frac{c\tau}{2} = \frac{c}{2(10\ \text{MHz})} = 15\ \text{m}$$
$$\Delta R_{LFM} = \frac{c}{2B} = \frac{c}{2(100\ \text{MHz})} = 1.5\ \text{m}$$
You can see that using the LFM pulse gives us an order of magnitude improvement in range resolution, and we get to keep the same pulse width! Visually, we can see the range resolution performance from a nominal target return (zero delay) at the output of a matched filter, which is what is usually used in pulsed systems.
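The two resolution numbers above reduce to the same one-liner (again just a sketch):

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    # Delta R = c / (2 B): finer resolution needs more bandwidth.
    return C / (2.0 * bandwidth_hz)
```

`range_resolution(10e6)` is about 15 m and `range_resolution(100e6)` about 1.5 m, as computed above.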
The rectangular matched filter output is very wide, so a second target must be spaced further in order to differentiate between the two, as expected. With the LFM pulse, the targets can get much closer as can be seen by how much narrower the main lobe is. There's no free lunch here: we increased our range resolution and maintained the same pulse width, but now our receiver bandwidth requirements have increased.
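To see the mainlobe narrowing concretely, here is a small pure-Python sketch (my own illustration, not from the answer) that compares the -3 dB mainlobe width of the matched-filter output for a plain rectangular pulse and an LFM pulse of the same length:

```python
import cmath

def lfm_pulse(n_samples, bw_frac):
    # Complex-baseband pulse of n_samples samples. bw_frac is the swept
    # bandwidth as a fraction of the sample rate; bw_frac = 0 degenerates
    # to a plain rectangular pulse.
    return [cmath.exp(1j * cmath.pi * bw_frac * (n - n_samples / 2) ** 2 / n_samples)
            for n in range(n_samples)]

def mainlobe_width(pulse):
    # Magnitude of the autocorrelation = matched-filter response to a
    # point target at zero delay; count the lags within -3 dB of the peak.
    n = len(pulse)
    mags = []
    for lag in range(-n + 1, n):
        acc = 0j
        for i in range(n):
            if 0 <= i + lag < n:
                acc += pulse[i] * pulse[i + lag].conjugate()
        mags.append(abs(acc))
    peak = max(mags)
    return sum(1 for m in mags if m >= peak / 2 ** 0.5)
```

Running this with, say, `lfm_pulse(64, 0.0)` versus `lfm_pulse(64, 0.8)` shows the chirp's mainlobe collapsing to a few lags while the rectangular pulse's spans dozens: the pulse-compression gain described above.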
This was a rather high-level dump of some of the major aspects that must be considered. To summarize
1. Short range considerations - Find what your definition of "short" is and determine whether a traditional pulsed radar can be used or something like FMCW to eliminate blind ranges entirely.
2. Imaging considerations - Determine how close targets can be to each other when performing detection. Use this information to determine the range resolution you need which will then aid in waveform selection and bandwidth requirements.
3. Trade-offs - All of these benefits come with downsides. You will have to explore what effects these choices will have on your overall system design and how much it will cost.
This is not even close to being exhaustive but hopefully it will give you some orientation on the approach you want to take.
• this was a fantastic read, I really appreciate it. I am definitely left with, I love the idea of the FMCW for eliminating blind spots and gaining resolution with very short pulse width with the same pulse length (if I said that right). Having that pinpointing precision to basically stay out of your own way and to better separate objects is the way I'd choose to go, so I will have to be prepared for some heavy calculating on the receiver end. Will give this more rereads, thanks again – Af91 Jul 13 at 23:50
• @Af91 This is exactly why close-range high resolution radar systems go with FMCW. You eliminate the blind ranges, get the resolution you required, and despite higher bandwidth requirements is actually quite simple to implement. – Envidia Jul 14 at 0:26
• definitely makes sense. Will probably look into how to generate these waves, I'm guessing with MatLab or something. Really cool technology 👍👍 – Af91 Jul 14 at 3:26
• @Af91 If this answer sufficed please mark it as accepted to help the site out :) – Envidia Jul 14 at 16:24
• do you know how I do that? I tried pushing the up arrow useful button but don't have enough reputation. – Af91 Jul 15 at 0:07
Try 77 GHz sine waves. That's what automotive radars use and it works very well.
|
{}
|
## Problem on vector length
Find the length of the vector from $(2,4,5)$ to $(3, -1, -2)$.
• ## Solution
First we find the vector from $(2,4,5)$ to $(3,-1,-2)$. Then we compute its length.
Recall that the vector from a point $P$ to a point $Q$ is the difference $Q - P$.
The vector from $(2,4,5)$ to $(3, -1, -2)$ is thus $$(3,-1,-2) -(2,4,5) = (1, -5, -7).$$
Recall that the length of a vector $(a, b, c)$ is $\sqrt{a^2 + b^2 + c^2}$.
The length of the vector from $(2,4,5)$ to $(3, -1, -2)$ is thus $$\sqrt{1^2 + (-5)^2 + (-7)^2} = \sqrt{75} = 5\sqrt{3}.$$
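The same computation can be checked mechanically (a quick sketch in Python, not part of the original solution):

```python
import math

p = (2, 4, 5)
q = (3, -1, -2)

# The vector from p to q is the component-wise difference q - p.
v = tuple(qi - pi for pi, qi in zip(p, q))

# Its length is the square root of the sum of squared components.
length = math.sqrt(sum(c * c for c in v))
```

Here `v` comes out as `(1, -5, -7)` and `length` as `math.sqrt(75)`, about 8.66.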
|
{}
|
Archive | Bacchus-Bosch
# An Adventure Game – Part 5
We’re rapidly approaching the end. This time we’ll implement a meta-language that makes it easier to create new games with the existing engine. Conceptually speaking we’re not adding anything new to the table, but the example game from the previous post was created in an ad-hoc manner that demanded knowledge of both Prolog/Logtalk and the engine in question. What we want is a declarative language in which it’s possible to define rooms, entities and how they should be connected. Exactly how this language is interpreted should not be the game designer’s concern. This is a scenario in which it’s best to start with the language itself, since it’s pretty much impossible to write an interpreter otherwise.
### The language
We want the language to be capable of:
• Creating entities.
• Adding properties to existing entities.
• Setting new values for entities. For instance, if we first create a door with the default lock we’ll probably want to change this later on.
• Adding entities to rooms.
• Connect two rooms with an entrance.
All these steps should be expressible within a single file. The first part might look something like (as always, I’m just making stuff up on the fly, it’s quite possible that there are simpler/better ways to accomplish this!):
```begin entities.
build(room, room1, "A rather unremarkable room.\n").
build(room, room2, "...").
build(door, door, "...").
build(lock, lock).
build(key, key, "A slightly bent key.\n").
build(player, player).
end entities.
```
Where $build(Type, Id)$ should be read as: build an entity of type $Type$ and name it $Id$. $build/3$ allows us to create printable entities, where the string is the description. This suggests that we’ll need a builder-object that’s capable of constructing some default entities. Since entities consist of properties, it would be possible to only build generic entities and manually add all the interesting properties in question, but we make the job easier for the game designer if he/she can assume that some primitive objects are already available. Of course, it wouldn’t be practical to demand that all entities are created in this manner. If two entities are identical except for a few differing properties then it would be simpler to create a generic entity and add the properties manually instead of defining a new build-predicate. For example, say that our game will consist of two different kinds of fruits: magical and non-magical. If we eat the former we finish the game, if we eat the latter we increase the health of the player. This is naturally implemented by creating two different properties: $fruit\_property$ and $magic\_fruit\_property$. Hence, to create two fruits – one magical and one non-magical – we first create two generic instances and then add the defining properties.
```begin entities.
build(generic_entity, apple, "An apple! \n").
build(generic_entity, banana, "A yellow banana. Hot diggity dog!\n").
end entities.
begin properties.
add_property(fruit_property, apple).
add_property(carriable_property, apple).
add_property(magic_fruit_property, banana).
add_property(carriable_property, banana).
end properties.
```
The Argus-eyed reader will probably realize that it would be even better to factor the common properties ($carriable\_property$) of the two fruits into a prototypical base object, then clone this object and add the unique properties ($magic\_fruit\_property$ and $fruit\_property$).
Since the entities that have been created thus far only have the default values, we now turn to the problem of sending messages, so that we're able to change these at whim. Say that we want to tell the lock that it should use the key that we just created, and the door that it should use the lock. A first attempt might look like this:
```begin relations.
action(set_key, lock, key).
action(set_lock, door, lock).
end relations.
```
All identifiers here refer to the entities that have already been created. This won’t work however, due to a subtle semantic difference between how locks and doors work. A door has a lock, it consists of a lock. Therefore it’s correct to send the lock entity as an argument to $set\_lock$. A lock on the other hand doesn’t consist of a key. It only needs to know what key will unlock it, hence it’s not correct to send the whole key entity as argument. We only need one part of the key entity, its identity, in this case. To be able to differentiate between these cases we’ll introduce the notation that a term preceded by a dollar sign (\$) will be sent as-is, instead of sending the entity corresponding to the identity. The previous attempt should hence be rewritten as:
```begin relations.
action(set_key, lock, \$ key).
action(set_lock, door, lock).
end relations.
```
The file will be interpreted from top to bottom, so if we added a property in the preceding block we’re able to change it here. The next step is to connect rooms. Strictly speaking this relation is not necessary since we’re already able to send messages to entities, but including it as a primitive in the language will make it much easier to use. The syntax is:
```begin relations.
.
.
.
connect(room1, room2, door).
end relations.
```
The full example game in all its glory would be written as:
```begin entities.
build(room, room1, "A rather unremarkable room.\n").
build(room, room2, "A room almost identical to the previous one. What on earth is going on!?\n").
build(door, door, "A wooden door with a small and rusty lock.\n").
build(lock, lock).
build(key, key, "A slightly bent key.\n").
build(generic_entity, apple, "An apple! \n").
build(generic_entity, banana, "A yellow banana. Hot diggity dog!\n").
build(player, player).
end entities.
begin properties.
add_property(fruit_property, apple).
add_property(carriable_property, apple).
add_property(magic_fruit_property, banana).
add_property(carriable_property, banana).
end properties.
begin relations.
action(set_key, lock, \$ key).
action(set_lock, door, lock).
action(set_state, door, \$ closed).
action(add_item, room1, apple).
action(add_item, room1, key).
action(add_item, room2, banana).
action(set_location, player, \$ room1).
connect(room1, room2, door).
end relations.
```
A substantial improvement in readability compared to the previous efforts!
### Parsing
Another boring, dry entry on parsing? Fear not, because I have a trick up my sleeve: there was a reason why the syntax of the meta-language was a spitting image of Prolog's all along! One way to interpret the file is to say that begin/end and the dollar sign are all operators with a single argument. Then the file is nothing but a collection of facts that can be accessed as normal and we won't have to worry about parsing at all. A slightly more contrived but more general approach is to use what in Prolog nomenclature is known as term expansion. This is usually the preferred approach to handling embedded languages and is somewhat similar to macros in Lisp. The basic idea is simple: instead of taking a term at face value we expand it according to a user-defined rule. What's the point? Basically we don't have to type as much. For example, let's say that we have a database consisting of $rule/3$ facts, where the first argument is an atom, the second a list and the third an integer denoting the length of the list.
```
rule(a, [], 0).
rule(b, [a], 1).
```
Furthermore assume that we don’t want to calculate the length of the second argument at run-time. There’s nothing inherently wrong with this approach, but manually entering the length of the list is a drag and quite error-prone. It would be better if we could simply write:
```
rule(a, []).
rule(b, [a]).
```
And tell the Prolog system that these facts should be construed as $rule/3$ facts with an additional third argument which contains the length of the list. Fortunately this can easily be realized with the built-in predicate $term\_expansion/2$. The first argument of $term\_expansion$ is the term that shall be expanded. The second argument is the expanded term. A suitable definition for the previous example is:
```term_expansion((rule(Head, Body)), rule(Head, Body, Length)) :-
length(Body, Length).
```
Great success! We don't have to concern ourselves with how or when this predicate is called, just that it will eventually be called when the file is loaded. Like all powerful language constructs, it's easy to abuse the term expansion mechanism and create unreadable code. We could for instance expand $A :- B$ to $B :- A$ if we were in a facetious mood (don't do it, OK?). Fortunately the Logtalk support is a bit more sane than in most Prolog systems. Instead of defining $term\_expansion/2$ rules in the same file as the rules that shall be expanded, they're encapsulated in a special expansion object. This object is later used as a hook in $logtalk\_load/2$ to make sure that the effect is localized to a given file. In summary, there are two steps involved:
• Define the $term\_expansion/2$ rules in an expansion object (which must implement the expanding protocol).
• Load the file (the script file in our case) with the expansion object.
I shan’t spell out the full details of the expansion object, but what it does is to remove the begin/end directives and create a set of $entity/1$ facts with the initial entities. Also, to make things easier in the script interpreter, it removes $add\_property/2$, $action/3$, $connect/3$ and replaces them with unary facts instead.
### The builder
The builder is in charge of building game entities. As mentioned earlier it’s not strictly needed since it’s always possible to add properties manually, but it does simplify things. Here’s how a map and a room could be created:
``` build(world, Id, Rooms, Player, World) :-
Ps = [map_property-State],
build(final_state, F),
map_property::new([[F|Rooms], Player], State),
entity::new(Ps, Id, World).
build(room, Id, Description, Room) :-
Ps = [container_property - State1,
printable_property - State2],
container_property::new([], State1),
printable_property::new([Description], State2),
entity::new(Ps, Id, Room).
```
### Interpreting
We have a description of the game and want to transform it into an entity that has the $map\_property$. Strictly speaking this is not an interpreter, but rather a compiler from the script language to the entity language. The most important predicate is $interpret\_game/2$ that takes an expanded script file as argument and produces a game world. It works by extracting the entity directives, the property directives, the relations directives and then separately interprets each one of these. Finally it extracts the rooms and the player and asks the builder to build a game world.
``` interpret_game(DB, World) :-
findall(E, DB::entity(E), Es0),
findall(P, DB::add_property(P), Ps),
findall(A, DB::action(A), As),
findall(C, DB::connect(C), Cs),
interpret_properties(Ps, Es0, Es1),
interpret_actions(As, Es1, Es2),
interpret_connectors(Cs, Es2, Es),
get_rooms(Es, Rooms),
get_player(Es, Player),
builder::build(world, world1, Rooms, Player, World).
```
The $interpret\_x$ predicates are all rather similar. They iterate through the list of commands and change the set of entities accordingly. For brevity, let's concentrate on $interpret\_actions/3$.
``` interpret_actions([], Es, Es).
interpret_actions([t(M, Id1, Id2)|As], Es0, Es) :-
select_entity(Id1, Es0, E0, Es1),
lookup_argument(Id2, Es0, Arg),
entity::action(M, [Arg], E0, E),
interpret_actions(As, [E|Es1], Es).
lookup_argument(Id, Es, Arg) :-
( Id = \$(Symbol) ->
Arg = Symbol
; lookup_entity(Id, Es, Arg)
).
```
The body of $interpret\_actions/3$ should be read as: execute action $M$ on the entity corresponding to $Id1$ with the argument $Id2$ (remember that arguments preceded by a dollar-mark are left unchanged). Since we might need to update an entity several times, it’s re-added to the list of entities whenever it’s updated.
### Putting everything together
We need to make a small change to $init/0$ in the $game$ object. Instead of building a world manually it’ll take a script file as argument and ask the interpreter to interpret (compile!) it.
``` init(Game) :-
write('Welcome to Bacchus-Bosch!'), nl,
current_input(S),
script_interpreter::interpret_game(Game, World),
repl(S, [], World).
```
That’s pretty much it – the end of Bacchus-Bosch. As I suspected when I started the project the final game isn’t very fun. Wait, scratch that, it might be the worst game I’ve ever played, and that includes the infamous yellow car game. But it does have a magic banana, that ought to count for something. In any case it wouldn’t be hard to create more engrossing games since all the building blocks are in place. It should also be noted that the engine is hardly limited to text-based or turn-based games. In a real-time game we could for instance run the update method a fixed amount of times per second instead of waiting for player input. We could also add e.g. role playing elements by defining new properties.
I hope that this pentalogy has been at least somewhat comprehensible and coherent. What the future holds for the blog I dare not promise. Hopefully we’ll someday see the return of the magic banana!
### Source code
The source code is available at https://gist.github.com/924052.
# An Adventure Game – Part 4
And yet again I display exceptional prowess in the (un)holy art of procrastination. This time I blame lack of coffee, Sweden’s gloomy weather and the Illuminati. This post will concern two of the three remaining obstacles. First we’re going to add enough properties so that we can create a map consisting of a few rooms together with some entities. Second, we’re going to translate the parsed commands from the user into a suitable set of action commands and execute them with respect to the game world and its entities. As always it’s best to start with a use-case scenario and extrapolate some requirements.
> look
You see a small, rusty key and a door.
> pick up the key and open the door.
> go through the door.
This room is almost identical to the previous one. How eerie. You see a banana and a door.
> eat the door.
no.
> eat the banana
no.
> take the banana and eat it.
Congratulations! You solved the mystery of the missing banana!
Let’s don our declarative thinking cap. This particular game consists of two rooms and a player. The player starts in the first room, picks up the key, unlocks the door and enters the second room where he/she in a moment of absolute intellectual clarity manages to deduce that the only way to beat the game is to eat the banana. Just like in real life. We can summarize the requirements as:
• The player must be able to move from one location to another.
• The player must be able to reference entities by their names.
• The door must be linked to the second room.
• The key must be able to lock/unlock the door.
• The banana must have the property that the game stops when the player eats it.
Fortunately we can already handle some of these. The most fundamental requirement is that of referencing entities. Since each entity is just a list of properties we’re currently unable to distinguish them in any sensible way. The most obvious solution is to add another property, $identity\_property$, which describes the property of having an identity. For example, the door in the scenario would be represented by the list:
$[openable\_property-State_1, printable\_property-State_2, ..., identity\_property-State_n]$
For simplicity the state of $identity\_property$ is just going to be an atom, e.g. $door$. In a real implementation it would of course be preferable to use a more complex data structure so that entities can be referenced by attributes as well, e.g. “wooden door”, but the basic idea is the same. Of course, just because this is a nice, declarative solution doesn’t mean that it’s good. To quote Richard O’Keefe from The Craft of Prolog:
The nasty thing about declarative programming is that some clear specifications make incredibly bad programs.
It’s not to hard to see that storing the identity of an entity as a property is hopelessly inefficient. To find an entity in a container we have to iterate through the whole container and check whether or not every element satisfies the identity property in question. Ouch. It would be a much better idea to demand that all entities have an identity and then store the container as a tree where nodes are entities ordered by their identities. Of course, it doesn’t mean that I’m going to use this solution just because it’s better! Sometimes it’s enough to be aware that something is a potential bottleneck and fix it if/when it becomes a real problem. Or buy a faster computer.
Next up is the problem of the game world. We shall introduce a new property by the name $map\_property$ and stipulate that it consists of a list of rooms and a player. Why not add the player as an item in the current room instead? Just for simplicity; it’s slightly easier to move the player from room to room if we don’t have to explicitly remove him/her from the first room and add him/her to the new one. Since we have removed the player from the rooms we’re going to need another property, that of having a position/being movable, so that it’s always possible to find the current room.
```:- object(movable_property,
extends(property)).
new(identity_property-_).
action(move, [New], Owner, _, Owner, New).
action(get_location, [Room], Owner, Room, Owner, Room).
:- end_object.
```
The state of $movable\_property$ is simply an identity property. When someone issues the move command the current location is changed. Exactly how this works should become clearer later on. For now, let’s concentrate on implementing $map\_property$. Its state will be a tuple of a list of rooms and the player, and it’ll have commands to add/remove rooms, get the current room, update the rooms and so on.
```:- object(map_property,
extends(property)).
new([]-[]).
update(E0, E) :-
entity::update_property(map_property, E0, Rooms0-P0, Rooms-P, E),
entity::update(P0, P),
update_rooms(Rooms0, Rooms).
update_rooms([], []).
update_rooms([R0|R0s], [R|Rs]) :-
entity::update(R0, R),
update_rooms(R0s, Rs).
action(add_rooms, [Rooms1], Owner, Rooms2-P, Owner, Rooms-P) :-
...
action(get_room, [Property, R], Owner, Rooms-P, Owner, Rooms-P) :-
...
action(select_room, [Property, R], Owner, Rooms-P, Owner, Rooms1-P) :-
...
action(print, [], Owner, Rooms-P, Owner, Rooms-P) :-
action(current_room, [Room], Owner, Rooms-P, _, _),
entity::action(print, [], Room, _).
action(current_room, [Current], Owner, Rooms-P, Owner, Rooms-P) :-
entity::action(get_location, [Id], P, _),
list::member(Current, Rooms),
entity::get_property(Current, Id).
action(get_player, [P], Owner, Rooms-P, Owner, Rooms-P).
:- end_object.
```
It’s not necessary to study the details of this particular implementation, but some of the predicates demand an explanation. $update/2$ uses $update\_property/5$ in $entity$ to update $map\_property$ with the state obtained by updating the list of rooms and the player. To put it more simply: it just calls $update/2$ on the rooms and the player. The $print$ command extracts the current room and prints it (it would not be very interesting to print anything else). The $current\_room$ command gets the current location of the player and then uses $member/2$ to find the room with that particular identity.
Then there are doors. To have a text-game without lots of doors would simply be madness. Given the representation of the game world, each door must contain the identity of the area which it leads to. It just stores the identity, not the area itself (since all areas are stored in the $map\_property$). We shall store this identity in $entrance\_property$:
```:- object(entrance_property,
extends(property)).
new(identity_property-_).
action(get_location, [Location], Owner, Location, Owner, Location).
:- end_object.
```
So when we create a door we plug in an $entrance\_property$ with a suitable identity of a room.
### Translating commands to entity actions
Before we begin the translation process we're going to define some useful action commands in $container\_property$ and $map\_property$. The most frequently used action will be that of extracting an entity according to some property (e.g. the identity), performing some state-changing operation and then adding it anew. This procedure is necessary since we don't have explicit state: whenever we extract an entity we get a copy of it, and changes to this copy won't affect the original. For this purpose we're going to define two additional action commands in $container\_property$ and $map\_property$ that take three arguments:
• $P$ – the property of the entity that shall be updated.
• $Old$ – will be unified with the old entity.
• $New$ – the new entity.
```:- object(container_property,
extends(property)).
.
.
. % As before.
action(update_item, [P, Old, New], Owner, Items, Owner, [New|Items1]) :-
list::select(Old, Items, Items1),
entity::get_property(Old, P).
.
.
. % As before.
:- end_object.
```
The neat thing about this definition is that we can extract an entity with $Old$ and simply pass a variable as $New$, and unify this variable to the updated entity later on. The definition in $map\_property$ is similar, but works for the player and the current room.
```:- object(map_property,
extends(property)).
.
.
. % As before.
action(update_current_room, [Current0, Current],
Owner, Rooms-P, Owner, [Current|Rooms1]-P) :-
entity::action(get_location, [Id], P, _),
list::select(Current0, Rooms, Rooms1),
entity::get_property(Current0, Id).
action(update_player, [P0, P], Owner, Rooms-P0, Owner, Rooms-P).
.
.
. % As before.
:- end_object.
```
Now we can finally begin with the translation process. Since we know that the input from the user will be a list of commands (remember that conjunctions are allowed) we will execute them one by one and thread the state of the game world.
```eval_commands([], World, World).
eval_commands([C|Cs], World0, World) :-
write('The command is: '), write(C), nl,
eval_command(C, World0, World1),
eval_commands(Cs, World1, World).
```
Each $eval\_command/3$ rule changes the state of $World0$ to $World1$ according to the command in question. The simplest one is the look-command with no arguments, that just prints the current room:
``` eval_command(look-[], World, World) :-
entity::action(current_room, [Room], World, _),
write('You see: '), nl,
entity::action(print_children, [], Room, _).
```
It asks the game world for its current room and then issues the $print\_children$ command. The take-command is slightly more convoluted. It takes an identity as argument, tries to find the entity in question and asks the player to pick it up.
```eval_command(take-[Id], World0, World) :-
entity::action(update_current_room, [R0, R], World0, World1),
entity::action(update_player, [P0, P], World1, World),
entity::action(select_item, [identity_property-Id, Item], R0, R),
entity::action(add_item, [Item], P0, P).
```
The move-command is perhaps the most complex of the bunch, but follows the same basic structure:
``` eval_command(move-[Id], World0, World) :-
entity::action(current_room, [Room], World0, _),
entity::action(update_player, [P0, P], World0, World),
entity::action(get_item, [identity_property-Id, Entrance],
Room, _),
entity::action(open, [], Entrance, _),
entity::action(get_location, [Location], Entrance, _),
entity::action(move, [Location], P0, P).
```
It tries to find and open the entrance in the current room, asks it where it leads and finally asks the player to move to that location. The lock/unlock and open/close commands are implemented in the same way. One problem remains though: it's possible to take the key, unlock the door, open it and go through it, but there is no way to actually finish the game. Just like everything else this functionality can be implemented in a number of ways. It might be tempting to somehow augment the top-loop and in every iteration check whether or not the final state has been reached, but this is needlessly complicated. Instead we're going to introduce a special entity with only two properties, that of having the identity $final\_state$ and that of being printable. It's constructed as:
``` build_win_screen(Screen) :-
Screen = [printable_property - State, identity_property-final_state],
State = "Congratulations! A winner is you!\n (No, you can't quit. Stop trying.)\n".
```
Then we need an object that has the effect that it asks the player to move itself to the final state whenever it is used, for instance a banana.
```:- object(fruit_property,
extends(property)).
action(dissolve, [E0, E], Owner, State, Owner, State) :-
entity::action(move, [identity_property-final_state], E0, E).
:- end_object.
```
The banana entity is created by combining an identity, a printable, a carriable and a fruit property:
``` build_test_banana(Banana) :-
Banana = [fruit_property - State1, printable_property-State2,
identity_property-banana, carriable_property-State3],
fruit_property::new(State1),
banana_description(State2),
carriable_property::new(State3).
```
Then we of course need an eat-command, but this is straightforward to implement. So what happens when the player eats the banana is that the current location changes to $final\_state$. This room doesn't have any entities and doesn't support any operations besides being printed, which means that the player can't return to the rest of the game world and has completed the game.
### Putting everything together
We shall use the top-loop from part 2, but with some modifications. The input will be parsed into commands that are executed with respect to the current game world. The loop then calls itself recursively with the new state and asks for new input.
``` init :-
write('Welcome to Bacchus-Bosch!'), nl,
current_input(S),
build_test_world(World),
repl(S, [], World).
repl(S, History, World0) :-
entity::update(World0, World1),
entity::action(print, [], World1, _),
write('> '),
nlp::parse_line(S, Atoms),
write('The input is: '),
meta::map([X] >> (write(X), write(' ')), Atoms), nl,
nlp::tag_atoms(Atoms, AtomTags),
write('The tagged input is: '),
meta::map([X] >> (write(X), write(' ')), AtomTags), nl,
( eval(History, AtomTags, World1, World) ->
true
; write('no.'), % This is Prolog after all.
nl,
World = World1
),
write('-------------------'), nl,
repl(S, AtomTags, World).
eval(History, AtomTags, World, World1) :-
nlp::resolve_pronouns(History, AtomTags, AtomTags1),
nlp::parse_atoms(AtomTags1, _, Commands),
eval_commands(Commands, World, World1).
```
Like a professional TV-chef I prepared a small test world and got the following result (with some of the debug output omitted):
Welcome to Bacchus-Bosch!
A rather unremarkable room.
> look
You see:
A slightly bent key.
A wooden door with a small and rusty lock.
-------------------
A rather unremarkable room.
> open the door
no.
-------------------
A rather unremarkable room.
> take the key and unlock the door with it
-------------------
A rather unremarkable room.
> go through the door
-------------------
A room almost identical to the previous one. What on earth is going on!?
> look
You see:
A yellow banana. Hot diggity dog!
A wooden door with a small and rusty lock.
-------------------
A room almost identical to the previous one. What on earth is going on!?
> take the banana
-------------------
A room almost identical to the previous one. What on earth is going on!?
> eat it
-------------------
Congratulations! A winner is you!
(No, you can’t quit. Stop trying.)
The final herculean task, the creation of a script-language, is saved for the next entry!
### Source code
The source code is available at https://gist.github.com/900462.
# An Adventure Game – Part 3
I’ve been procrastinating too much. It’s time to roll up our sleeves and get dirty with the nitty-gritty core of Bacchus-Bosch. We’re currently able to read input from the player and parse commands. Before even considering how commands should be executed we have to take a step back and make some ontological commitments. We know that the game will consist of interaction between game objects, the entities, but haven’t specified how these should be represented or how they are created. It might be tempting to simply use a set of unit clauses and statically recite everything of interest:
```entity(key).
entity(room_1).
entity(room_2).
entity(door).
contains(room_1, key).
contains(room_1, door).
connected(room_1, door, room_2).
.
.
.
%And so on for the whole level.
```
At a first glance this might look like an acceptable solution. It wouldn’t be very hard to add e.g. properties to the entities by adding more facts, but everything breaks down the moment the player changes the state of the world by picking up the key. Then the key is no longer directly contained by the room and should instead be added to the inventory of the player. The quick and easy hack is to use assert/retract in order to add and remove facts according to the fluctuating state of the game world, but in the long run that’s a terrible idea. To be forced to use impure features for such an integral part of the game is either a sign that we’re using the wrong data structures or the wrong language. I’ll go with the former theory!
Let’s begin by considering a simplified problem where the game only consists of a player with the property of having health in the form of an integer: “hit points”. It should be possible to both increase and decrease this value according to events from other entities. What’s an entity? A common and simple solution is to create a class hierarchy with an abstract class in the top and concrete classes such as items and monsters in the lower layers. While there’s nothing inherently wrong with this I think we can do better, for two reasons:
• The class hierarchy will be huge and hard to make sense of.
• It’s not (at least not in most languages, Logtalk happens to be an exception) possible to create new classes during runtime, with the effect that behaviour can’t be modified or extended once the game has started.
An often cited design principle is to favor composition over inheritance. We want the entities to be entirely composed of smaller objects, of properties. The sum of all the properties constitutes the entity – nothing more, nothing less (followers of holism should stop reading here!). It should of course be possible to add, remove and update properties during runtime. It’s easiest to start with a basic definition of a property, which we’ll do with a prototypical object. A property should be able to:
• Give the initial state of the property. For the health property, this might simply be a positive integer.
• Update the entity to which the property belongs. If the property doesn’t directly influence the entity, it’s left unchanged.
• Execute an action that changes the state of the property, e.g. increasing or decreasing the health.
With these considerations the object becomes:
```:- object(property).
:- info([
version is 1.0,
author is 'Victor Lagerkvist',
date is 2011/03/19,
comment is 'A property constitutes the basic behaviours of the objects in Bacchus-Bosch.']).
:- public(update/2).
:- mode(update(+entity, -entity), zero_or_more).
:- info(update/2, [
comment is 'Update the entity to which the property belongs.',
argnames is ['Entity', 'Entity1']]).
:- public(action/6).
:- mode(action(+atom, +list, +entity, +state, -entity, -state), zero_or_more).
:- info(action/6, [
comment is 'Execute the action Name with respect to Args, State and Entity, and store the resulting new state and entity in State1 and Entity1.',
argnames is ['Name', 'Args', 'Entity', 'State', 'Entity1', 'State1']]).
:- public(new/1).
:- mode(new(-term), zero_or_more).
:- info(new/1, [
comment is 'Unify State with the initial state of the property.',
argnames is ['State']]).
%% Basic definition: do nothing!
update(E, E).
%% Basic definition, overloaded in almost every descendant prototype.
new(void).
:- end_object.
```
Exactly how these predicates should be implemented will be clearer with an example.
```:- object(health_property,
extends(property)).
new(10).
action(decrease_health, [], Owner, H0, Owner, H) :-
H is H0 - 1.
action(increase_health, [], Owner, H0, Owner, H) :-
H is H0 + 1.
:- end_object.
```
Hence, the initial state for the health property is 10. To decrease the value one calls $action/6$ with the correct action name, an empty argument list, the owner of the property and the old state, and in return obtains the updated owner and new state with the fifth and sixth parameters. If a property doesn’t support a specific action it’ll simply fail. Let’s see how $update/2$ can be used together with the health property. It should be called in each tick of the game. For example: the health should be decreased if the player is poisoned or has somehow been ignited. Such a property won’t support any actions and only overloads the definition of $update/2$ from $property$.
```:- object(on_fire_property,
extends(property)).
update(E0, E) :-
entity::action(decrease_health, [], E0, E).
:- end_object.
```
Since $update/2$ takes an entity as argument, the on_fire property simply asks this entity to decrease its health. Now we have to decide how to represent entities and how to delegate action messages. As mentioned earlier an entity is simply the sum of its properties, hence we can represent it as a list that contains properties and their states. The formal definition reads:
• $[]$ is an entity.
• $[P|Ps]$ is an entity if $Ps$ is an entity and $P$ is a tuple of the form: $N - S$, where $N$ is the name of a property and $S$ is a state of that property.
Continuing on our example, $[health\_property - 10]$ is an entity. If we decide to have some fun and for a moment indulge ourselves with arson, $[health\_property - 10, on\_fire\_property - void]$ is also an entity. A few basic definitions remain before we can actually do anything with these entities. First we need a predicate $update/2$ that iterates through the list of properties that constitute the entity and calls $update/2$ for every property.
```update(E0, E) :-
update(E0, E0, E).
update(E, [], E).
update(E0, [P|Ps], E) :-
P = Name - _,
Name::update(E0, E1),
update(E1, Ps, E).
```
The effect is that each property is updated with respect to the current state of the entity. Next up is $action/4$. It takes the name of the action that shall be performed, its arguments, the entity in question and returns the updated entity if the action could be performed or simply fails otherwise. Since an entity can’t perform any actions it has no choice but to try to delegate the action to its properties.
```action(A, Args, E0, E) :-
%% Select a property from the list such that the action A can
%% be performed with the arguments Args.
list::select(P, E0, E1),
P = PropertyName - State,
PropertyName::action(A, Args, E1, State, E2, State1),
%% Add the property with the updated state to E2.
P1 = PropertyName - State1,
E = [P1|E2].
```
As seen from the code $action/4$ uses $select/3$ to find the first property that supports the operation. If there’s more than one potential property it will simply choose the first one but leave choice points for the others. If we put it together into one massive object we get:
```:- object(entity).
:- info([
version is 1.0,
author is 'Victor Lagerkvist',
date is 2011/03/20,
comment is 'The entity operations of Bacchus-Bosch.']).
:- public(update/2).
:- public(action/4).
:- public(get_property/2).
:- public(select_property/3).
update(E0, E) :-
%% As before.
action(A, Args, E0, E) :-
%% As before.
get_property(E, P) :-
list::member(P, E).
select_property(P, E, E1) :-
list::select(P, E, E1).
:- end_object.
```
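Outside Logtalk, the core dispatch idea can be exercised with a small plain-Prolog sketch. The property table below is illustrative and not part of the actual objects; it just shows how $select/3$ threads an updated property back into the entity.

```prolog
:- use_module(library(lists)).

% Plain-Prolog sketch of property dispatch: an entity is a list of
% Name-State pairs, and an action is delegated to the first property
% whose table entry accepts it. select/3 leaves choice points, so
% other supporting properties are tried on backtracking.
property_action(health_property, decrease_health, H0, H) :- H is H0 - 1.
property_action(health_property, increase_health, H0, H) :- H is H0 + 1.

entity_action(A, E0, E) :-
    select(Name-S0, E0, E1),         % pick out one property...
    property_action(Name, A, S0, S), % ...that supports the action
    E = [Name-S|E1].                 % put the updated property back

% ?- entity_action(decrease_health, [health_property-10, on_fire_property-void], E).
% E = [health_property-9, on_fire_property-void].
```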
These are more or less the basic building blocks, the engine, but of course quite a lot of work remains before we have an actual game in our hands. For the remainder of the post we shall implement enough properties so that it’s possible to construct a room with some entities in it. A room is an entity that contains other entities. We can model this as a property, namely the property of being a container. This property at least has to support the following operations:
• Being able to update the state of its children, i.e. overload $update/2$ from its parent.
• Add and remove items with actions.
For simplicity the data structure is going to be a list of entities. This makes it almost trivial to write the actions.
```:- object(container_property,
extends(property)).
new([]).
update(E0, E) :-
entity::select_property(container_property-Items, E0, E1),
update_children(Items, Items1),
E = [container_property-Items1|E1].
update_children([], []).
update_children([E|Es], [E1|E1s]) :-
entity::update(E, E1),
update_children(Es, E1s).
action(add_item, [E], Owner, Items, Owner, [E|Items]).
action(remove_item, [E], Owner, Items, Owner, Items1) :-
list::select(E, Items, Items1).
:- end_object.
```
Then we of course need a door. Doors are awesome: you can walk through them, open them, close them, lock them – the only limit is the imagination! Our particular door will only have one property though, that of being openable. But it’s only possible to open a door if it happens to be unlocked, hence the openable property in turn depends on whether or not the lock can be unlocked with the key in question. For simplicity we’re going to assume that the player uses the key every time he/she wishes to open the door, but extending these properties so that the lock has an internal state would not be terribly difficult.
```:- object(openable_property,
extends(property)).
new(closed-[lock_property-Lock]) :-
lock_property::new(Lock).
action(open, Key, Owner, closed-Lock, Owner, open-Lock) :-
entity::action(unlock, Key, Lock, _).
action(close, [], Owner, _-Lock, Owner, closed-Lock).
:- end_object.
```
The $action/6$ command should be read as: change the state of the door from $closed$ to $open$ if the key can unlock the lock. To complete the door we of course need to define the lock and key properties.
``` :- object(key_property,
extends(property)).
%% The default key.
new(key).
:- end_object.
:- object(lock_property,
extends(property)).
%% The (default) key that opens the lock.
new(key).
valid_key(E, Key) :-
entity::get_property(E, key_property-Key).
action(lock, [Entity], Owner, Key, Owner, Key) :-
valid_key(Entity, Key).
action(unlock, [Entity], Owner, Key, Owner, Key) :-
valid_key(Entity, Key).
:- end_object.
```
The two actions specify that the entity can lock or unlock the lock if it has the property of being a key with the correct state. Note that the entity in question doesn’t just have to be a key, it can still support other properties as long as the key property is fulfilled.
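Stripped of the Logtalk plumbing, the key check is just a membership test over the entity’s property list. A standalone sketch, with hypothetical entity data:

```prolog
:- use_module(library(lists)).

% An entity counts as a key for a lock if one of its properties is a
% key_property whose state equals the key the lock expects.
get_property(E, P) :- member(P, E).

valid_key(Entity, Key) :- get_property(Entity, key_property-Key).

% ?- valid_key([key_property-key, printable_property-desc], key).
% true.
% An entity carrying the wrong key fails:
% ?- valid_key([key_property-skeleton_key], key).
% false.
```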
We’re now able to create a room with a door and a key. What remains is the player. For the moment that’s just an entity that has the property of having health, which we defined earlier, and that of having an inventory. The Argus-eyed reader will probably realize that having an inventory is quite similar to being a container. The only difference is that the player can only pick up items that are carriable, so instead of defining everything from the ground up we’re going to inherit some definitions from $container\_property$. What happened to the principle of favoring composition over inheritance? Bah!
```:- object(inventory_property,
extends(container_property)).
valid_item(E) :-
entity::get_property(E, carriable_property-_).
action(add_item, [E], Owner, Items, Owner, [E|Items]) :-
valid_item(E).
action(drop_item, [E], Owner, Items, Owner, Items1) :-
valid_item(E), % This shouldn't be necessary since only valid items are added in the first place.
list::select(E, Items, Items1).
:- end_object.
:- object(carriable_property,
extends(property)).
%% Perhaps not the most interesting property in the game.
:- end_object.
```
It should be noted that I haven’t actually tested most of these properties, but it should/could work! So do we now have a functional but simplistic game? Not really. We have the basic entities, but there’s a quite glaring omission: everything is invisible since nothing can be printed. For a text game this is a distinct disadvantage! The simplest solution is to add another property, that of being printable, where the state is the string that shall be displayed.
```:- object(printable_property,
extends(property)).
new(Description) :-
list::valid(Description).
action(print, [], Owner, Description, Owner, Description) :-
format(Description).
:- end_object.
```
But since we also want the possibility to print items in a container, e.g. a room, we’re going to define an additional action in $container\_property$.
```:- object(container_property,
extends(property)).
new([]).
.
. % As before.
.
action(print_children, Args, Owner, Items, Owner, Items) :-
% Print all children that are printable.
meta::include([E] >>
(entity::action(print, Args, E, _)),
Items,
_).
:- end_object.
```
### Putting everything together
Let’s use the properties and construct some entities. The test program is going to build a room and then print its description together with its children.
``` init :-
write('Welcome to Bacchus-Bosch!'), nl,
build_test_room(Room),
entity::action(print, [], Room, Room1),
write('You see: '), nl,
entity::action(print_children, [], Room1, _Room).
```
Building the room and its components is not hard, just a bit tedious.
``` build_test_room(Room) :-
build_test_door(Door),
build_test_player(Player),
build_test_key(Key),
Room0 = [container_property - State1, printable_property - State2],
container_property::new(State1),
room_description(State2),
entity::action(add_item, [Door], Room0, Room1),
entity::action(add_item, [Player], Room1, Room2),
entity::action(add_item, [Key], Room2, Room).
```
``` build_test_door(Door) :-
Door = [openable_property-State1, printable_property-State2],
openable_property::new(State1),
door_description(State2).
build_test_key(Key) :-
Key = [key_property-State1, printable_property-State2],
key_property::new(State1),
key_description(State2).
build_test_player(Player) :-
Player = [inventory_property-State1, health_property-State2],
inventory_property::new(State1),
health_property::new(State2).
```
The string descriptions are simply given as facts.
``` room_description("A rather unremarkable room.\n").
door_description("A wooden door with a small and rusty lock.\n").
key_description("A slightly bent key.\n").
```
When running the test program we get the following output:
Welcome to Bacchus-Bosch!
A rather unremarkable room.
You see:
A slightly bent key.
A wooden door with a small and rusty lock.
true
Only three major obstacles remain before we have a game in our hands:
• More properties.
• Conjoin the parsed commands with the game world so that it’s possible for the user to interact with entities. It shouldn’t be too hard to see that commands will be executed by sending action commands to the entities in question.
• Abstract the creation of game objects, e.g. with the help of an embedded language.
Which will be the topics of the next post. Stay tuned!
### Source code
The source code is available at https://gist.github.com/878277.
# An Adventure Game – Part 2
When we left off last time we were able to read input from the player and tag the atoms with their word classes. This time we’ll use this information to analyze what the input means with two different techniques: pronominal anaphora resolution and chunk-based parsing. Don’t worry about the technical names – I’m way too lazy to do something that would actually be strenuous, so the presented solutions won’t be that hard to grasp. Let’s have a look at some typical user input and some potential answers:
> look
You are in a dimly lit room. A banana hangs from the ceiling in a thin, plastic wire.
> look at the banana
It’s a typical banana. Yellow, crooked and utterly delicious. Mmm.
> eat it
The banana is out of reach.
> grab the banana and eat it
Not even your long, hairy ape-arms are enough to bridge the vast gap.
> fling feces around the room
I’m sorry. I don’t know what ‘fling’ means.
All these sentences have something in common: they are imperative in nature and somehow either change the state of the game world or inspect it. We’ll make the basic assumption that every command can be described as a tuple, $C-Args$, where $C$ is an atom and $Args$ is a list of tags. The “look”-command from the example can be represented as:
```command(look, [entity]).
command(look, []).
```
Since a banana is an entity, the sentence “look at the banana” can be parsed as a look-command with “banana” as argument. It can also be parsed simply as “look”, with no argument at all, but the first interpretation is preferable. There is however one problem that we must tackle before the parsing can take place: what does “it” mean? That depends on the context and is the subject of pronominal resolution.
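The preference for the first interpretation falls out of clause order; a minimal illustration:

```prolog
% Prolog tries clauses top to bottom, so the ambiguity of "look"
% resolves in favour of the entity-taking variant, with the bare
% variant only reached on backtracking.
command(look, [entity]).
command(look, []).

% ?- command(look, Args).
% Args = [entity] ;
% Args = [].
```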
### Pronominal resolution
Just like everything else in natural language processing, resolving pronouns is quite challenging in the general case. Fortunately we’re dealing with a rather restricted subset of English. When a pronoun such as “it” is used we can make the assumption that it’s just a shorthand for a previously identified entity. The task then becomes to replace all occurrences of a pronoun with the corresponding entity before the parsing. Let’s have a look at a few examples:
> take the banana and eat it
Here the pronoun refers to the banana, i.e. the sentence should be replaced by “take the banana and eat the banana”.
> look at the ceiling and take the banana and eat it
Here the pronoun probably refers to the banana, but it could possibly also refer to the first entity: the ceiling. Since we always want the possibility to backtrack and try alternative solutions in case the first one fails later on, it’s a good idea to make the predicate non-deterministic. Assuming that we’re given a list of tagged atoms, a simple algorithm may look as follows:
• If the list of tagged atoms is non-empty, inspect the first element.
• If it’s not a pronoun, ignore it and continue with the rest of the list.
• If it’s a pronoun, replace the pronoun with the corresponding entity, where the candidates of possible entities are the tagged atoms that occur to the left of the pronoun in the sentence.
Putting this into code is straightforward. The predicate will have three arguments: the list of tagged atoms, the tagged atoms that have been processed so far and the result.
``` resolve_pronouns(ATs, Xs) :-
resolve_pronouns(ATs, [], Xs).
resolve_pronouns([], _, []).
resolve_pronouns([A-T|Xs], Pre, [Y|Ys]) :-
( T = pronoun ->
resolve_pronoun(A, Pre, Y),
resolve_pronouns(Xs, Pre, Ys)
; Y = A-T,
resolve_pronouns(Xs, [Y|Pre], Ys)
).
```
$resolve\_pronoun/3$ is also a walk in the park. For now, we’re only going to be dealing with “it”. Other pronouns can be added in a similar manner.
``` %% NAME:
%% resolve_pronoun(Pronoun, AtomTags, X)
%% DESCRIPTION:
%% True if X is an entity corresponding to the pronoun resolved
%% with the help of AtomTags. It is the caller's responsibility to
%% make sure that AtomTags is in the correct (possibly reversed)
%% order.
resolve_pronoun(it, Xs, X) :-
X = A-entity,
list::member(X, Xs).
```
Nota bene: since the words are pushed onto $Pre$ in LIFO order, a pronoun will be resolved from right-to-left. While this algorithm is indeed very simple, it’s not particularly efficient. Due to the call to $member/2$ in $resolve\_pronoun/3$, the worst case execution time to find one solution for $resolve\_pronouns/2$ is quadratic in the sentence length, and enumerating all resolutions can be exponential in the number of pronouns. But considering the fact that a typical sentence is just a few words long and rarely if ever contains more than one or two pronouns, I think we’ll survive.
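For experimenting at the top level, the resolver can be transcribed as plain Prolog, with $list::member$ swapped for the library $member/2$:

```prolog
:- use_module(library(lists)).

% Plain-Prolog version of the pronoun resolver.
resolve_pronouns(ATs, Xs) :-
    resolve_pronouns(ATs, [], Xs).

resolve_pronouns([], _, []).
resolve_pronouns([A-T|Xs], Pre, [Y|Ys]) :-
    (   T = pronoun
    ->  resolve_pronoun(A, Pre, Y),
        resolve_pronouns(Xs, Pre, Ys)
    ;   Y = A-T,
        resolve_pronouns(Xs, [Y|Pre], Ys)
    ).

% Pre is in LIFO order, so the rightmost preceding entity is tried first.
resolve_pronoun(it, Pre, A-entity) :-
    member(A-entity, Pre).

% ?- resolve_pronouns([take-verb, banana-entity, and-conjunction,
%                      eat-verb, it-pronoun], R).
% R = [take-verb, banana-entity, and-conjunction, eat-verb, banana-entity].
```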
One step remains. Note that “it” is used in one of the example sentences without referring to anything in that particular sentence. Instead it refers to the entity that was used in the previous sentence. The obvious yet quite flexible solution is to augment $resolve\_pronouns/2$ with an additional argument that holds the previous sentence. Then a pronoun can first be resolved with respect to the current sentence, but if that fails the previous sentence is tried instead.
### Parsing
Natural language contains a lot of noise and variations. This makes formal methods such as context-free grammars somewhat ill-suited and cumbersome to use. One deceptively simple solution is to ignore everything that we’re not interested in and group the good stuff into chunks. A command, e.g. “eat”, can be considered a chunk. We know that it takes one entity-argument, so after encountering the command chunk we scan the rest of the sentence for an argument chunk of the correct type. This might sound like cheating – and it kind of is – but as long as the reduced, simpler problem still yields the correct output that’s hardly of any importance.
The goal is to produce one or more commands given a list of tagged atoms (where the pronouns are resolved). Remember that commands are defined as:
``` command(look, [entity]).
command(look, []).
command(eat, [entity]).
```
To allow some flexibility in the input, we’re also going to allow synonyms. These can be defined as additional facts.
``` variants(look, [look, inspect]).
variants(eat, [eat, munch, digest]).
```
For every chunk of type $X$ we’re going to write a corresponding $X\_chunk$ predicate that has 3 arguments: the list of tagged atoms, the list that remains after this chunk is parsed, and the resulting chunk. First off is the sentence chunk.
``` %% NAME:
%% parse_atoms(+AtomTags, -Remainder, -Commands)
%% DESCRIPTION:
%%   True if Commands is a list of commands parsed from the atoms and their
%% tags in AtomTags.
parse_atoms(ATs, Remainder, Commands) :-
sentence_chunk(ATs, Remainder, Commands).
%% NAME:
%% sentence_chunk(+AtomTags, -Remainder, -Commands)
%% DESCRIPTION:
%%   True if Commands is a list of commands parsed from the atoms and their
%% tags in AtomTags. The remainder, i.e. the chunk after the
%% sentence, is stored in Remainder (most likely just the empty list).
sentence_chunk(ATs, Remainder, [C-Args|Cs]) :-
command_chunk(ATs, ATs1, C),
argument_chunk(ATs1, C, ATs2, Args),
( conjunction_chunk(ATs2, Remainder0, _) ->
sentence_chunk(Remainder0, Remainder, Cs)
; Cs = []
).
conjunction_chunk([Conj-conjunction|ATs], ATs, Conj-conjunction).
```
Which shall be read as “first parse a command followed by its argument, then if the arguments are followed by a conjunction, recursively parse the rest of the list”. The command chunk is slightly more involved since we have to deal with variants of commands.
``` %% NAME:
%% command_chunk(+AtomTags, -Remainder, -Command)
%% DESCRIPTION:
%% True if Command is the command parsed from the atoms and their
%% tags in AtomTags. The remainder, i.e. the chunk after the
%% command, is stored in Remainder.
command_chunk(ATs, ATs1, C) :-
%% First find a verb, i.e. a potential command.
list::append(_, [C0-verb|ATs1], ATs),
%% Then check whether or not the verb is a variant of a known
%% command.
game_logic::variants(C, Cs),
list::member(C0, Cs).
```
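A plain-Prolog rendition of the chunk (with $game\_logic::variants$ inlined as facts) shows the synonym lookup in action:

```prolog
:- use_module(library(lists)).

% Standalone sketch of the command chunk: skip everything up to the
% first verb, then map the verb through the synonym table. Backtracking
% into append/3 tries later verbs if the first one is unknown.
variants(look, [look, inspect]).
variants(eat,  [eat, munch, digest]).

command_chunk(ATs, Rest, C) :-
    append(_, [C0-verb|Rest], ATs),
    variants(C, Cs),
    member(C0, Cs).

% ?- command_chunk([please-unknown, munch-verb, the-article, banana-entity], R, C).
% R = [the-article, banana-entity],
% C = eat.
```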
Note the use of $append/3$ to ignore everything in the list up to a verb. When parsing the arguments we scan the input and look for atoms of the correct type.
``` %% NAME:
%% argument_chunk(+AtomTags, +Command, -Remainder, -Args)
%% DESCRIPTION:
%% True if Args are the arguments corresponding to the arity of
%% Command with respect to the atoms and their tags in
%% AtomTags. The remainder, i.e. the chunk after the last
%% argument, is stored in Remainder.
argument_chunk(ATs, C, ATs1, Args) :-
game_logic::command(C, Tags),
matches(ATs, Tags, ATs1, Args).
%% NAME:
%% matches(+AtomTags, +Tags, -Remainder, -Args).
%% DESCRIPTION:
%% True if the list of atoms and their corresponding tags matches
%% Tags, i.e. there exists a sequence of atoms, not necessarily
%%   in a direct linear sequence, such that their tags can be
%% mapped to the tags in Tags.
matches(ATs, [], ATs, []).
matches(ATs, [T|Ts], Remainder, [A|As]) :-
list::append(_, [A-T|ATs1], ATs),
matches(ATs1, Ts, Remainder, As).
```
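$matches/4$ is self-contained enough to try directly, using the library $append/3$:

```prolog
:- use_module(library(lists)).

% matches/4 as in the post: append/3 skips over articles, prepositions
% and anything else that doesn't carry the tag we're looking for.
matches(ATs, [], ATs, []).
matches(ATs, [T|Ts], Remainder, [A|As]) :-
    append(_, [A-T|ATs1], ATs),
    matches(ATs1, Ts, Remainder, As).

% ?- matches([at-preposition, the-article, banana-entity], [entity], R, Args).
% R = [],
% Args = [banana].
```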
That’s all the constructs that we’re going to support at the moment. Two remarks should be made before we move on:
• The chunk predicates can be rewritten as DCGs instead since they’re essentially just threading state. I’ll leave this as an exercise to the reader!
• $argument\_chunk/4$ is given the ability to ignore everything that it can’t make sense of. While this is good in many circumstances, it also has the effect that some ill-formed sentences will be incorrectly parsed. For example, consider the sentence “eat the bnana and look at the room”. Assuming that $bnana$ is tagged as $unknown$, the sentence will be parsed as $eat-room$ since $room$ is the only entity that $argument\_chunk/4$ discovers. One possible solution is to restrict the argument parsing to the current clause and disallow it from continuing past conjunctions.
### Putting everything together
Now we’re able to dissect the meaning of the user input and extract commands from it. The next task is to actually carry out the commands and update the state of the game world, but for the moment we’re just going to print the parsed commands to verify that everything works. The steps are the following:
• Read a line and convert it to atoms.
• Tag the atom list.
• Resolve the pronouns.
• Parse the resulting list of tagged atoms.
And it’s rather easy to cobble together.
``` init :-
write('Welcome to Bacchus-Bosch!'), nl,
current_input(S),
repl(S, []).
repl(S, History) :-
write('> '),
nlp::parse_line(S, Atoms),
write('The input is: '),
meta::map([X] >> (write(X), write(' ')), Atoms), nl,
nlp::tag_atoms(Atoms, AtomTags),
write('The tagged input is: '),
meta::map([X] >> (write(X), write(' ')), AtomTags), nl,
( eval(History, AtomTags) ->
true
; write('I''m sorry, Dave. I''m afraid I can''t do that'),
nl
),
repl(S, AtomTags).
eval(History, AtomTags) :-
nlp::resolve_pronouns(History, AtomTags, AtomTags1),
eval_commands(AtomTags1).
eval_commands(AtomTags) :-
nlp::parse_atoms(AtomTags, _, Commands),
write('The parsed commands are: '), nl,
meta::map([X] >> (write(X), nl), Commands).
```
If we feed this program with the example sentences we end up with the following dialogue:
Welcome to Bacchus-Bosch!
> look
The input is: look
The tagged input is: look-verb
The parsed commands are:
look-[]
> look at the banana
The input is: look at the banana
The tagged input is: look-verb at-preposition the-article banana-entity
The parsed commands are:
look-[banana]
> eat it
The input is: eat it
The tagged input is: eat-verb it-pronoun
The parsed commands are:
eat-[banana]
> grab the banana and eat it
The input is: grab the banana and eat it
The tagged input is: grab-verb the-article banana-entity and-conjunction eat-verb it-pronoun
The parsed commands are:
take-[banana]
eat-[banana]
> fling feces around the room
The input is: fling feces around the room
The tagged input is: fling-unknown feces-unknown around-preposition the-article room-entity
I’m sorry, Dave. I’m afraid I can’t do that
### Source code
The source code is available at https://gist.github.com/862464.
Due to a hefty amount of school-work (I should actually be crunching theorems in number theory right now!) part 3 will probably be delayed one or two weeks.
# An Adventure Game – Part 1
A Prolog programmer walks into a bar:
– Can I have beer?
– No.
As promised, in this and the following posts I’ll design and implement a simple, text-driven adventure game in the spirit of Zork. I shan’t be too concerned with superfluous features and instead direct most energy towards nifty algorithms and reasonably well-designed data structures. No guarantees whatsoever will be made regarding the actual playability of the final product! Since they say that a picture is worth more than a thousand words:
Wait. That’s probably not true in the case of a text-adventure game. Crap. In any case the basic ideas involved are not too hard to grasp. The only interface against the user is text in the form of output and input. Output is as one might expect one of the easiest parts to implement since all game objects will have textual descriptions. Input is far, far worse since we’ll be dealing with natural instead of formal languages. In the general case perfect natural language processing is more or less impossible. This is why almost all successful services that make use of natural languages are based on statistical/probabilistic methods instead of strictly formal models such as context-free grammars (hybrids also exist). But of course, just because something is very hard in the general case doesn’t necessarily imply that it’s just as hard in every case. We will however be forced to make some severe restrictions regarding what kind of input we want to accept and focus on these areas.
Where to start? UML-diagrams? Activity diagrams? Of course not — we’ve already put several minutes into the project without savoring a single cup of the brown, precious liquid known to man as coffee. That’s the first step. The second step is to blankly stare into the monitor for at least a few minutes and realize that it would probably be a good idea to put on some music before the mind-numbing silence quells all remaining sanity. Good choices are Einstürzende Neubauten, Melt Banana or Programmer Art, by Yellow Bear. The third step is to automatically generate a nonsensical name for the project. I used a soundex-algorithm with random names as input, and after some tinkering I ended up with:
Bacchus-Bosch
No, I’ve got absolutely no idea what it means, but it rolls off the tongue nicely. Now we’re finally in a position where we might be inclined to do some real work! This is an instance where the bottom-up methodology will work nicely: start with something simple that we know that we’ll need later on and expand from there. For no particular reason I’ll start with the most fundamental parts of user input: tokenizing and part-of-speech tagging. A typical example of user input is:
open the door
go through the door
How should these sentences be construed? Well, no matter what we’ll have to start by reading them character by character from an input stream. This will result in a list of characters. We can code this in more or less the same way as in an imperative language: continue reading characters until the end of line is reached.
```
%% read_line(+Stream, -Chars): read characters from Stream until
%% the end of the line (or end of file) is reached.
read_line(S, Chars) :-
    get_char(S, C),
    read_line(C, S, Chars).

read_line('\n', _, []) :- !.
read_line(end_of_file, _, []) :- !.
read_line(C, S, [C|Chars]) :-
    read_line(S, Chars).
```
Where $get\_char/2$ takes a stream as input and returns the next character. We could in theory analyze this character list already at this stage, but for simplicity it’s better to introduce some additional pre-processing steps. Therefore we’ll take our character list and group together the characters into words instead. For example, the character list from the first example sentence would be $[o,p,e,n,'\,',t,h,e,'\,',d,o,o,r]$, and would result in the word list $[open, the, door]$. There already exists a primitive for this purpose: $atom\_chars/2$, so what we need to do is to go through the list and look for groups separated by whitespace and convert each group to an atom. One simple solution is to use a set of DCG-rules (for brevity some basic rules are omitted):
```
chars_to_list([A|As]) -->
    chars_to_atom(A),
    whitespace,
    chars_to_list(As).
chars_to_list([A]) -->
    chars_to_atom(A),
    opt_whitespace.

chars_to_atom(A) -->
    chars(Cs),
    {atom_chars(A, Cs)}.
```
The goal in the curly braces, $atom\_chars(A, Cs)$, makes sure that each group is converted appropriately. This is certainly not the most efficient solution. For example: the last atom always needs to be parsed twice in $chars\_to\_list//1$, once in the first rule and once in the second. In a real compiler this might not be acceptable, but since we’re dealing with sentences of just a few words it’s A-OK.
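For completeness, here is one possible definition of the helper rules that were omitted above (my sketch, not the author’s code; it only treats the space character as whitespace):

```
%% chars//1: one or more non-whitespace characters.
chars([C|Cs]) --> char(C), chars(Cs).
chars([C])    --> char(C).

char(C) --> [C], { C \== ' ' }.

%% whitespace//0: one or more spaces.
whitespace --> [' '], opt_whitespace.

%% opt_whitespace//0: zero or more spaces.
opt_whitespace --> [' '], opt_whitespace.
opt_whitespace --> [].
```

With these in place, `phrase(chars_to_list(As), Chars)` turns the character list into a list of atoms.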
Now we can recognize individual words, but we still don’t know what they mean, i.e. what semantic class they belong to. Are they imperative commands? Directions? Physical objects? For that we’ll use a technique that in natural language processing is known as tagging: each word will be assigned a tag corresponding to its lexical category. For example, the sentences above could be manually tagged as:
open/verb the/article door/noun
go/verb through/preposition the/article door/noun
Don’t fret if the part-of-speech lectures from middle school are nothing but a distant, hazy memory: it won’t become much more complicated than this. Like all other domains in natural language processing, tagging is very hard to do perfectly. Therefore we shall use a simple tagger that either looks up the tag for a word directly in a dictionary, or makes a decision based on the previously tagged words. First we need a small dictionary that contains words and their lexical categories. The simplest solution is to implement it as a set of $word/2$ facts:
```
:- object(game_logic).

    :- public(word/2).

    word(entity, door).
    word(entity, key).
    word(entity, banana).

    word(direction, north).
    word(direction, east).
    word(direction, south).
    word(direction, west).

    word(preposition, on).
    word(preposition, through).

    word(pronoun, he).
    word(pronoun, it).
    word(pronoun, she).

    word(verb, go).
    word(verb, take).
    word(verb, open).
    word(verb, eat).
    word(verb, use).

    word(article, a).
    word(article, an).
    word(article, the).

:- end_object.
```
A remark: it would be preferable to store the facts with the arguments swapped in order to exploit first-argument indexing, since lookups by word are far more common than lookups by lexical category. Obviously this is just a sample dictionary, but it can easily be extended once we begin working on the rest of the game. Next we’ll need a predicate that, given a word, returns its lexical category as a tag. However, since we also want to be able to construct rules that take the history into account, it will, in addition to the word, also need a list of the preceding words and their tags. For efficiency reasons this list will be stored in reversed order. An example of such a rule is: “if the previous tag was an article, then the word must be an entity”.
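As a sketch, the indexed variant hinted at in the remark could look like this (the names `game_logic_indexed` and `word_tag/2` are mine, not from the original):

```
:- object(game_logic_indexed).

    :- public(word_tag/2).

    %% Word in the first argument, so that first-argument indexing
    %% lets a query such as word_tag(door, T) jump straight to the
    %% matching clauses instead of scanning the whole dictionary.
    word_tag(door, entity).
    word_tag(key,  entity).
    word_tag(go,   verb).
    word_tag(the,  article).
    %% ...remaining entries analogous to word/2 above.

:- end_object.
```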
```
%% NAME:
%%   tag(+Preceding, +Word, -Tag).
%% DESCRIPTION:
%%   True if Tag is the tag corresponding to Word. Preceding is used
%%   whenever a rule needs to look at the history in order to tag the
%%   word.
tag([_-article|_], _, entity).
tag(_, A, T) :-
    game_logic::word(T, A).
```
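To get a feel for the clause order, here are a couple of sample queries (my sketch, assuming the `game_logic` dictionary above is loaded and $tag/3$ is visible at the top level):

```
%% With an empty history the dictionary decides the tag:
%% ?- tag([], door, T).
%% T = entity.
%%
%% With an article as the previous tag, the first clause applies,
%% so even a word missing from the dictionary is tagged as an entity:
%% ?- tag([the-article], grail, T).
%% T = entity.
```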
It’s easy to extend $tag/3$ with additional rules, but let’s leave it as it is for now. The actual tagger, $tag\_atoms/2$, will just go through the list of atoms and call $tag/3$ for each of them.
```
%% NAME:
%%   tag_atoms(+Atoms, -AtomTags).
%% DESCRIPTION:
%%   True if AtomTags is a list of pairs of the form Atom-Tag,
%%   where Tag is the tag corresponding to Atom.
tag_atoms(As, Ts) :-
    tag_atoms(As, [], Ts).

tag_atoms([], _, []).
tag_atoms([A|As], Pre, [A-T|Ts]) :-
    tag(Pre, A, T),
    tag_atoms(As, [A-T|Pre], Ts).
tag_atoms([A|As], Pre, [A-unknown|Ts]) :-
    %% We don't use a cut since we want the ability to try several
    %% different tags if necessary.
    \+ tag(Pre, A, _),
    tag_atoms(As, [A-unknown|Pre], Ts).
```
Note something about $tag\_atoms/3$: it can return several solutions if $tag/3$ is non-deterministic. This is actually a good thing, since natural language is inherently imprecise and ambiguous. If some other part of the game fails due to an erroneous tagging it’s always possible to backtrack and try again.
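To illustrate that non-determinism: if the dictionary hypothetically contained both `word(verb, open)` and `word(adjective, open)`, backtracking over $tag\_atoms/3$ could enumerate alternative taggings (a sketch, not output from the actual program):

```
%% ?- tag_atoms([open, the, door], Ts).
%% Ts = [open-verb, the-article, door-entity] ;
%% Ts = [open-adjective, the-article, door-entity] ;
%% ...
```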
Here’s how the tagger looks in action:
```
-> eat the key
The input is: eat the key
The tagged input is: eat-verb the-article key-entity

-> open the door with the banana
The input is: open the door with the banana
The tagged input is: open-verb the-article door-entity with-preposition the-article banana-entity
```
Whether those specific commands make any sense in the finished game remains to be seen!
### Summary
We’ve implemented some fundamental parts of Bacchus-Bosch and are now able to get input from the player and make sense of it by tagging it. Next we’ll concern ourselves with parsing. The goal is to construct a parse tree from the tagged input that reflects the structure of the command that the player had in mind. Stay tuned!
### Source code
The source code is available at https://gist.github.com/847541.