Kolmogorov Complexity and Causation
I got an interesting email question.
Suppose I give you a set of points S of the form (x,y). He suggested ideally they would be pairs of real numbers. Supposing there is a causal relationship between x and y of some kind, we want
to know if it is more likely that the x value causes the y value or the y value causes the x value. One plausible way to decide is to ask: is the
length of the shortest program which maps the x values to their y values shorter than the length of the shortest program which maps the y values to their x values?
So, my intuition says that this is clearly undecidable. I'm actually having a hard time thinking of a proof, so do you happen to know of one or if this problem might actually be decidable?
On a related note, since I'm already writing you about this question, do you happen to know about the complexity of any related questions which involve circuit size instead of program length?
Let's use notation from Kolmogorov complexity, letting C(x|y) be the size of the smallest program that takes y as input and outputs x. Now suppose it is decidable to determine whether C(x|y) > C(y|x). Then find an x of length n such that for all y of length n/3, C(x|y) > C(y|x). Such x exist: for any random x, C(x|y) >= 2n/3 and C(y|x) <= n/3.
Now I claim C(x) >= n/4 for the x you found. If not, we have C(x|y) <= C(x) < n/4, but for some y, C(y|x) >= n/3, since there aren't enough shorter programs to cover all the y's.
Since there is no computable procedure to find x such that C(x) >= n/4, there can't be a decidable procedure to determine whether C(x|y) > C(y|x).
But does this question relate to causality? Pick a random x from strings of length n and y at random from strings of length n/3. We have C(x|y) > C(y|x) even though there is no causality.
Instead you could look at the information of y in x, i.e., how many bits of x does y help describe, defined by I(y|x) = C(x) - C(x|y). This measures correlation, since I(y|x) = 0 iff x and y are independent, but
symmetry of information gives I(y|x) = I(x|y), so there is no hope for causation.
In short, Kolmogorov complexity won't give you much on causation--you can't avoid the controlled experiments.
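Since C(·|·) is uncomputable, any experiment along these lines has to fall back on a computable proxy; a standard (and very rough) stand-in is compression length, as in the normalized compression distance literature. The sketch below only illustrates that proxy idea; the use of zlib and the toy data are my assumptions, not anything from the post:

```python
import zlib

def clen(s: bytes) -> int:
    """Length of zlib-compressed s: a crude, computable stand-in for C(s)."""
    return len(zlib.compress(s, 9))

def cond_clen(x: bytes, y: bytes) -> int:
    """Rough proxy for C(x|y): the extra cost of compressing x once y is known."""
    return clen(y + x) - clen(y)

# Toy data: x literally contains y, so describing either string given the
# other is cheap. No causal arrow falls out of this, exactly as argued above.
y = b"abcdefgh" * 64
x = y + b"ijklmnop" * 64

print(cond_clen(x, y), cond_clen(y, x))
```

Per the symmetry-of-information point above, comparing these two numbers measures shared structure, not which string "caused" the other.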
For your last question, there is a notion of Kolmogorov complexity that roughly corresponds to circuit size, KT(x|y), defined as the sum of the program size and running time, minimized over all
programs p that take y as an input and output x. I'm guessing it's hard to determine whether KT(x|y) < KT(y|x), and you could probably show this under an assumption like the existence of secure pseudorandom generators.
Also symmetry of information isn't believed to hold for KT complexity so maybe there is something there. Interesting questions.
6 comments:
1. https://arxiv.org/abs/1702.06776
2. The paper "Information-geometric approach to inferring causal direction" (https://dl.acm.org/citation.cfm?id=2170008) contains a nice formalization of this idea.
3. I think that you've (slightly) misunderstood the question. In my interpretation, our input is S = {(x1,y1), ..., (xt,yt)}, where all the xi's and yi's are distinct, and we want a (shortest) program P
for which P(xi)=yi for every i, and a (shortest) program Q for which Q(yi)=xi for every i. The question is whether P or Q is shorter. Of course, your argument shows that this is undecidable
already for t=1.
4. Assumptions are key to causal inference. Under certain assumptions, one can decide the causal direction using Kolmogorov complexity. In the two-variable case, instead of the Kolmogorov complexity
of a random variable (equivalently, a string), Peters & Schölkopf (2010) postulate that the Kolmogorov complexity of the distributions of those variables may provide an answer. The paper titled "Causal
inference using algorithmic Markov condition" by Peters & Schölkopf takes an interesting outlook on the problem of causal inference using Kolmogorov complexity.
5. Kolmogorov complexity is slightly asymmetric (caused by the incomputability of Kolmogorov complexity). If there are several rounds of information exchange, this asymmetry can accumulate and
become linear in the number of rounds.
Imagine 2 devices run a computation. At regular time intervals each device sends a bit to the other. We are given a list of communicated pairs of bits: (x1, y1), (x2, y2), ... We are asked to
determine whether xi is a reply to yi or vice versa, thus the bits are either communicated in the order x1 -> y1 -> x2 -> y2 -> ... or y1 -> x1 -> y2 -> x2 -> ... In this case the sum of online
Kolmogorov complexities corresponding to the machines can differ by almost a factor 2, while the complexities grow linearly. See: http://drops.dagstuhl.de/opus/volltexte/2014/4452/
6. Still, it can happen that a noisy signal has much higher complexity with respect to a no-noise prototype than vice versa...
The height of scale model building is 15 in. the scale is 5 in. to 32 i. find the height of the actual building in inches and in feet
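The question as extracted is garbled ("32 i."), so as an assumption take the scale to read 5 in. (model) : 32 in. (actual); under that reading the proportion works out as follows (the variable names are mine):

```python
# Scale-model proportion: model / scale_model = actual / scale_actual.
# Assumes the garbled "5 in. to 32 i." means 5 in. (model) : 32 in. (actual).
model_height_in = 15
scale_model_in, scale_actual_in = 5, 32

actual_in = model_height_in * scale_actual_in / scale_model_in
actual_ft = actual_in / 12

print(actual_in, actual_ft)  # 96.0 inches, 8.0 feet
```

If the scale actually pairs inches to feet, the ratio arithmetic is identical and only the unit of the answer changes.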
EViews Help: maketransprobs
maketransprobs Equation Procs
Save the regime transition probabilities and expected durations for a switching regression equation into the workfile.
equation_name.maketransprobs(options) [base_name]
equation_name.maketransprobs(out=mat, options) [matrix_name]
where equation_name is the name of an equation estimated using switching regression.
• In the first form of the command, base_name will be used to generate names for the series that will hold the transition probabilities or durations. The regime transition probability series will be named base_name##, where ## are the row and column indices of the corresponding element of the transition matrix, and the expected duration series will be named base_name#, where # is the regime index. Thus, in a two-regime model,
the base name "TEMP" corresponds to the transition probability series TEMP11, TEMP12, TEMP21, TEMP22, and the expected duration series TEMP1, TEMP2.
If base_name is not provided, EViews will use the default of "TPROB".
• When the option "out=mat" is provided, matrix_name is the name of the output matrix that will hold the transition probabilities or durations.
If matrix_name is not provided, EViews will default to "TPROB" or the next available name of the form "TPROB##".
EViews will evaluate the transition probabilities or durations at the date specified by the “obs=” option. If no observation is specified, EViews will use the first date of the estimation sample to
evaluate the transition probabilities. Note that if the transition probabilities are time-invariant, setting the observation will have no effect on the contents of the saved results.
type=arg (default="trans")
Transition probability results to save: transition probabilities ("trans") or expected durations ("expect").
out=arg (default="series")
Output format: series ("series") or matrix ("mat"). If saved as a matrix, only a single transition matrix will be saved, using the date specified by "obs=".
obs=arg
Date/observation used to evaluate the transition probabilities when saving results as a matrix ("out=mat"). If no observation is specified, EViews will use the first date of the estimation sample to evaluate the transition probabilities.
Note that if the transition probabilities are time-invariant, setting the observation will have no effect on the content of the saved results.
n=arg (optional)
Name of the group to contain the saved transition probability series.
prompt
Force the dialog to appear from within a program.
The commands
equation eq1.switchreg(type=markov) y c @nv ar(1) ar(2) ar(3)
eq1.maketransprobs(n=transgrp) trans
estimate a Markov switching regression and save the transition probabilities in the series TRANS11, TRANS12, TRANS21, and TRANS22, creating the group TRANSGRP that contains these series.
The command
eq1.maketransprobs(type=expect) AA
saves the expected durations in the series AA1 and AA2.
The command
eq1.maketransprobs(out=mat) BB
saves the transition probabilities in the matrix BB.
Beyond Classical Limits: The Magic of the CHSH Game
Published on
Beyond Classical Limits: The Magic of the CHSH Game
Galih Laras Prakoso
Most of us have a lot of experiences while playing games: happiness, sadness, fear, confusion, anger, and more. I remember playing one of my favorite games, "Resident Evil 4" on PlayStation 4. It was
filled with jump scares, confusion, and tension in a zombie-infested world—experiences I’d never have in real life.
For me, games are like a doorway to another world, allowing us to escape reality and immerse ourselves in a universe full of different experiences. Beyond the fun, games can also teach us valuable
lessons, help us build intuition in simulated environments, and even make education enjoyable through engaging gameplay.
While many games aim to entertain and educate, they also challenge us to develop strategies and improve our skills to achieve a high win rate. However, there's one game that stands out for its
intriguing challenge. In this blog post, I want to introduce you to a very interesting game. Unlike typical games, no matter what strategies or methods we try, we can never find a way to achieve a
100% win rate in this game.
The CHSH Game
Imagine that there are two players: Alice and Bob. The game has three stages:
• Pre-Game: In this stage, the players are allowed to coordinate by creating a plan or strategy to win the game.
• Input: In this stage, the players will have to read an input $X$ for Alice and $Y$ for Bob.
• Output: This is the moment of truth that determines whether Alice and Bob win the game or not. In this stage, Alice and Bob have to give outputs $A$ and $B$ respectively.
These are the complete rules of the CHSH Game:
1. The inputs and outputs are all bits: $X,Y,A,B \in \{0,1\}$
2. The inputs $X$ and $Y$ will be chosen uniformly at random, with probability $\frac{1}{4}$ for each of the four possibilities $(0,0), (0,1), (1,0), (1,1)$.
3. Alice and Bob win if and only if they satisfy this condition: $X \cdot Y = A \oplus B$; otherwise, they lose. Here $(\cdot)$ is AND and $(\oplus)$ is XOR (exclusive OR).
$(X,Y)$    WIN           LOSE
$(0,0)$    $A = B$       $A \neq B$
$(0,1)$    $A = B$       $A \neq B$
$(1,0)$    $A = B$       $A \neq B$
$(1,1)$    $A \neq B$    $A = B$
So, to win the game: when the input is $(1,1)$, Alice and Bob should return different outputs, $(1,0)$ or $(0,1)$; otherwise they should return exactly the same outputs, $(0,0)$ or $(1,1)$. Okay, now I assume
that you understand the rules of the game clearly. Do you have a strategy in mind?
Deterministic Strategy
Let's take a look at the table in the preceding section. There is an obvious pattern in the WIN column that we can use to win the game: the first three rows hint that
if Alice and Bob always give the same output, no matter what the input is, they win in 3 of the 4 equally likely cases. That means this strategy achieves a 3/4, or
75%, win rate. They can simply output a constant, whether 0 or 1, as long as both outputs are the same ($A = B$).
But really? That's it? 75%? That doesn't seem to be enough. If you aren't satisfied with this strategy, you can try to find other deterministic strategies, but trust me: 75% is the best you can do.
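We don't have to take that on faith: each player's deterministic strategy is just a function from their one input bit to one output bit, so there are only 4 × 4 = 16 strategy pairs, and we can check them all. The brute-force sketch below is my own illustration, not from the original post:

```python
from itertools import product

def win_rate(a, b):
    """Fraction of the four equally likely inputs (x, y) won by strategies
    a and b, where a[x] is Alice's answer and b[y] is Bob's answer."""
    return sum((x & y) == (a[x] ^ b[y])
               for x, y in product((0, 1), repeat=2)) / 4

# A deterministic strategy is a pair (answer on input 0, answer on input 1).
strategies = list(product((0, 1), repeat=2))
best = max(win_rate(a, b) for a in strategies for b in strategies)
print(best)  # 0.75 (no deterministic pair beats 3/4)
```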
Probabilistic Strategy
Every probabilistic strategy can alternatively be viewed as a random selection of a deterministic strategy, just like (as was mentioned in the first lesson) probabilistic operations can be viewed
as random selections of deterministic operations. -- IBM Quantum Learning
So, yeah! It sucks! It means that our winning probability can't get any better with this strategy either.
Quantum Strategy
Okay, let's try the ultimate way to beat the game and increase our winning probability. Of course, we will use the power of quantum computing: we can leverage the concept of entanglement
to increase our winning probability in this game.
Personally, I found this concept quite hard to understand at first, because the math behind this strategy involves some trigonometry, and it's hard to grasp
intuitively if you haven't used trigonometry in a while.
I will try to write the simplest and easiest explanation I can, because I know some sources out there leave parts of the math unexplained, which can be totally frustrating. 😮‍💨 Let's start from
the very simplest concept first.
Calm down, it's not complicated trigonometry; it only requires the basics. I'm using this tool to help me understand what happens visually, so just open the tool
in a new tab in case you need the same help I did.
Rotation Matrix
I will start the explanation of the strategy by introducing this very beautiful operation matrix:
$U_\theta = \begin{pmatrix} cos(\theta) & sin(\theta) \\ -sin(\theta) & cos(\theta) \end{pmatrix}$
If you are a software engineer, you can interpret it as just a simple function named $U$ that takes a single parameter $\theta$; the output of the function is an operation that rotates two-dimensional vectors by an angle of $-\theta$ about the origin.
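As a quick sanity check on that interpretation, here is a minimal NumPy sketch (my own illustration, not from the post) of $U_\theta$ acting on a basis state:

```python
import numpy as np

def U(theta):
    """The post's rotation matrix: sends a 2-D real vector to a copy
    rotated by -theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

ket0 = np.array([1.0, 0.0])  # |0>
# U_{pi/4} rotates |0> down to (cos(pi/4), -sin(pi/4)), i.e. the |-> state.
print(U(np.pi / 4) @ ket0)
```

Note that U(0) is the identity and every U(θ) is orthogonal, so it preserves vector length, which is exactly what a valid real single-qubit operation must do.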
The Strategy
Clean Version
The strategy is quite simple, as described in the quantum circuit above. First, Alice and Bob share a pair of entangled qubits; then each of them performs one of two operations based on
the input they received. As explained in the preceding section about the $U_\theta$ function, we can think of Alice and Bob as rotating their qubits (vectors) based on the received inputs.
For Alice:
$U_x = \begin{cases} U_0 & \text{if } x = 0 \\ U_{\pi/4} & \text{if } x = 1 \end{cases}$
While Bob:
$U_y = \begin{cases} U_{\pi/8} & \text{if } y = 0 \\ U_{-\pi/8} & \text{if } y = 1 \end{cases}$
By using this seemingly simple strategy, we can increase our win rate to about 85%, which is definitely better than the classical strategies, which give at most a 75% win rate.
Without the dirty version of the strategy, we can only understand it at a high level. I bet that leaves you a bit uncomfortable; it leaves me uncomfortable too. As an engineer, I have this
nagging feeling that forces me to dig deeper and understand what happens under the hood.
It's okay: in the following sections, we will dive into the grittiest details needed to really absorb this. Let's go!
Dirty Version
Before we see the dirty version of our $U_\theta$ operation, first we need to define a state vector:
$\ket{\psi_\theta} = cos(\theta)\ket{0} + sin(\theta)\ket{1}$
It's just a vector, written in bra-ket notation, that takes $\theta$ as a parameter, similar to our previous $U_\theta$. As I said before, to help build intuition you can use this tool
and play around with it. By feeding in some parameters, we get the common quantum state vectors:
$\ket{\psi_0} = \ket{0}$
$\ket{\psi_{\pi/2}} = \ket{1}$
$\ket{\psi_{\pi/4}} = \ket{+}$
$\ket{\psi_{-\pi/4}} = \ket{-}$
The $\theta$ parameter is just an angle measured in radians. As a reminder, a full unit circle spans $2\pi$ radians. To make it clear, you can use this .gif from Wikipedia to help you
understand it intuitively:
And we need to understand this simple formula:
$\braket{\psi_\alpha|\psi_\beta} = cos(\alpha) cos(\beta) + sin(\alpha) sin(\beta) = cos(\alpha - \beta)$
This formula reveals the geometric interpretation of the inner product between real unit vectors, as the cosine of the angle between them. -- IBM Quantum Learning
And finally, this is the dirty version of $U_\theta$ as follows:
$U_\theta = \ket{0}\bra{\psi_\theta} + \ket{1}\bra{\psi_{\theta + \pi/2}}$
Study Case
As we know, Alice and Bob initially share two entangled qubits, which we can describe as one of the Bell states:
$\ket{\phi^+} = \frac{1}{\sqrt{2}}(\ket{00} + \ket{11})$
In this section, we will take one of the cases among all possible inputs of the CHSH game as an example, to help you understand how it works from a mathematical point of view.
Case 1: $(X,Y) = (0,0)$. Alice performs $U_0$ on her qubit and Bob performs $U_{\pi/8}$ on his qubit. Their combined operation can be expressed as the tensor product of those operations, $(U_0 \otimes U_{\pi/8})$. Applying it to the current state gives:
$(U_0 \otimes U_{\pi/8})\ket{\phi^+} = \ket{00}\braket{\psi_0 \otimes \psi_{\pi/8} | \phi^+} + \ket{01}\braket{\psi_0 \otimes \psi_{5\pi/8} | \phi^+} + \ket{10}\braket{\psi_{\pi/2} \otimes \psi_{\pi/8} | \phi^+} + \ket{11}\braket{\psi_{\pi/2} \otimes \psi_{5\pi/8} | \phi^+}$
$=\frac{cos(-\frac{\pi}{8})\ket{00} + cos(-\frac{5\pi}{8})\ket{01} + cos(\frac{3\pi}{8})\ket{10} + cos(-\frac{\pi}{8})\ket{11}}{\sqrt{2}}$
The probabilities for the four possible answer pairs $(A, B)$:
$Pr((A,B) = (0,0)) = \frac{1}{2}cos^2(-\pi/8) = \frac{2 + \sqrt{2}}{8}$
$Pr((A,B) = (0,1)) = \frac{1}{2}cos^2(-5\pi/8) = \frac{2 - \sqrt{2}}{8}$
$Pr((A,B) = (1,0)) = \frac{1}{2}cos^2(3\pi/8) = \frac{2 - \sqrt{2}}{8}$
$Pr((A,B) = (1,1)) = \frac{1}{2}cos^2(-\pi/8) = \frac{2 + \sqrt{2}}{8}$
To calculate the probabilities that $A = B$ and $A \neq B$, we just sum the corresponding cases:
$Pr(A = B) = \frac{2 + \sqrt{2}}{4}$
$Pr(A \neq B) = \frac{2 - \sqrt{2}}{4}$
So in this case, when (X,Y) = (0,0), Alice and Bob win if $A = B$ with probability:
$\frac{2 + \sqrt{2}}{4} \approx 0.85.$
You can work through the other cases with the same steps as in Case 1, and you'll get the same winning probability. Awesome, right? 😎
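Rather than grinding through the remaining cases by hand, we can run the whole protocol numerically. The sketch below is my own illustration (assuming, as the post does, that Alice and Bob apply $U_X \otimes U_Y$ to $\ket{\phi^+}$ and then measure in the computational basis); it averages the win probability over all four equally likely inputs:

```python
import numpy as np

def U(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

alice = {0: U(0.0), 1: U(np.pi / 4)}       # Alice's rotation for each input X
bob = {0: U(np.pi / 8), 1: U(-np.pi / 8)}  # Bob's rotation for each input Y

outcomes = [(0, 0), (0, 1), (1, 0), (1, 1)]  # (A, B) in computational-basis order

avg_win = 0.0
for x in (0, 1):
    for y in (0, 1):
        amps = np.kron(alice[x], bob[y]) @ phi_plus
        probs = amps ** 2                    # amplitudes are real here
        p_win = sum(p for (a, b), p in zip(outcomes, probs)
                    if (x & y) == (a ^ b))   # win condition: X.Y = A XOR B
        avg_win += p_win / 4                 # inputs are uniformly random
print(avg_win)  # ~0.8536, i.e. cos^2(pi/8)
```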
Detailed Matrix Calculation
If you are interested in more detail, here is the step-by-step calculation, in matrix representation, of the first case explained in the previous section.
As you can see, in the case where the input is $(X,Y) = (0,0)$, Alice and Bob apply the operations $U_0$ and $U_{\pi/8}$ respectively. In more detail, the construction of Alice's operation matrix is:
$U_0 = \ket{0}\bra{\psi_0} + \ket{1}\bra{\psi_{0 + \pi/2}} \\ = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \begin{pmatrix} cos(0) & sin(0) \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} cos(\pi/2) & sin(\pi/2) \end{pmatrix} \\ = \begin{pmatrix} cos(0) & sin(0) \\ cos(\pi/2) & sin(\pi/2) \end{pmatrix} \\ = \begin{pmatrix} cos(0) & sin(0) \\ -sin(0) & cos(0) \end{pmatrix} \\ = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
It equals the identity matrix, which leaves the state vector unchanged. Bob's operation matrix, meanwhile, is:
$U_{\pi/8} = \begin{pmatrix} cos(\pi/8) & sin(\pi/8) \\ -sin(\pi/8) & cos(\pi/8) \end{pmatrix} \\ = \begin{pmatrix} \frac{\sqrt{2 + \sqrt{2}}}{2} & \frac{\sqrt{2 - \sqrt{2}}}{2} \\ -\frac{\sqrt{2 - \sqrt{2}}}{2} & \frac{\sqrt{2 + \sqrt{2}}}{2} \end{pmatrix}$
So the tensor product of their operation matrices is:
$(U_0 \otimes U_{\pi/8}) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \otimes \begin{pmatrix} \frac{\sqrt{2 + \sqrt{2}}}{2} & \frac{\sqrt{2 - \sqrt{2}}}{2} \\ -\frac{\sqrt{2 - \sqrt{2}}}{2} & \frac{\sqrt{2 + \sqrt{2}}}{2} \end{pmatrix} \\ = \begin{pmatrix} \frac{\sqrt{2 + \sqrt{2}}}{2} & \frac{\sqrt{2 - \sqrt{2}}}{2} & 0 & 0 \\ -\frac{\sqrt{2 - \sqrt{2}}}{2} & \frac{\sqrt{2 + \sqrt{2}}}{2} & 0 & 0 \\ 0 & 0 & \frac{\sqrt{2 + \sqrt{2}}}{2} & \frac{\sqrt{2 - \sqrt{2}}}{2} \\ 0 & 0 & -\frac{\sqrt{2 - \sqrt{2}}}{2} & \frac{\sqrt{2 + \sqrt{2}}}{2} \end{pmatrix}$
Applying this tensor product of operations to Alice and Bob's entangled qubits gives:
$(U_0 \otimes U_{\pi/8})\ket{\phi^+} = \begin{pmatrix} \frac{\sqrt{2 + \sqrt{2}}}{2} & \frac{\sqrt{2 - \sqrt{2}}}{2} & 0 & 0 \\ -\frac{\sqrt{2 - \sqrt{2}}}{2} & \frac{\sqrt{2 + \sqrt{2}}}{2} & 0 & 0 \\ 0 & 0 & \frac{\sqrt{2 + \sqrt{2}}}{2} & \frac{\sqrt{2 - \sqrt{2}}}{2} \\ 0 & 0 & -\frac{\sqrt{2 - \sqrt{2}}}{2} & \frac{\sqrt{2 + \sqrt{2}}}{2} \end{pmatrix} \cdot \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix} \\ = \frac{1}{\sqrt{2}} \begin{pmatrix} \frac{\sqrt{2 + \sqrt{2}}}{2} \\ -\frac{\sqrt{2 - \sqrt{2}}}{2} \\ \frac{\sqrt{2 - \sqrt{2}}}{2} \\ \frac{\sqrt{2 + \sqrt{2}}}{2} \end{pmatrix} = \begin{pmatrix} \frac{\sqrt{2 + \sqrt{2}}}{2 \sqrt{2}} \\ -\frac{\sqrt{2 - \sqrt{2}}}{2 \sqrt{2}} \\ \frac{\sqrt{2 - \sqrt{2}}}{2 \sqrt{2}} \\ \frac{\sqrt{2 + \sqrt{2}}}{2 \sqrt{2}} \end{pmatrix}$
And to get the probabilities, we just need to take the absolute square of each amplitude, which gives:
$\begin{pmatrix} \frac{2 + \sqrt{2}}{8} \\ \frac{2 - \sqrt{2}}{8} \\ \frac{2 - \sqrt{2}}{8} \\ \frac{2 + \sqrt{2}}{8} \end{pmatrix}$
As you can see, the result matches our previous calculation using bra-ket notation. Now you've seen the matrix calculation in detail, but to make it more intuitive, I suggest you
follow along further to see the geometric representation.
Geometric Representation
In this section, we will see what actually happens in Case 1, $(X,Y) = (0,0)$, geometrically. This is useful for sharpening our intuitive understanding of the problem.
First, let's recall the vector we defined in a previous section:
$\ket{\psi_\theta} = cos(\theta)\ket{0} + sin(\theta)\ket{1}$
As we know, depending on the input, Alice defines an orthonormal basis of vectors (remember the matrix $U_\theta$). For the case $X = 0$, we can visualize it as follows:
While Bob, for the case when $Y = 0$, we can visualize it as follows:
Combining both, we get something like this in our visualization:
The colors of the vectors represent the answers of Alice and Bob: green for 0 and blue for 1. As you can see, the angular "distance" between Alice's and Bob's vectors, for both the blue and green pairs, is
the same: $\pi/8$.
From this, the probability that Alice and Bob give the same output is:
$cos^2(\pi/8) = \frac{2 + \sqrt{2}}{4}$
while the probability that they give different outputs is:
$sin^2(\pi/8) = \frac{2 - \sqrt{2}}{4}$
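These closed forms follow from the half-angle identity $cos^2(\theta) = (1 + cos(2\theta))/2$. A quick numerical check (my own, not from the post):

```python
import math

p_same = math.cos(math.pi / 8) ** 2  # Pr[Alice and Bob agree]
p_diff = math.sin(math.pi / 8) ** 2  # Pr[they disagree]

print(p_same, p_diff)  # ~0.8536 and ~0.1464
```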
Understanding "The Magic of the CHSH Game" opens our eyes to another demonstration that quantum computing can definitively beat classical computing.
And FYI, in the context of quantum physics, the CHSH game is also known as a pivotal concept in quantum mechanics, offering a clear and accessible way to demonstrate the limitations of hidden-variable
theories and the profound implications of quantum entanglement.
Thanks for reading this article! If you are interested in discussing this topic with me, just drop your comment below! 🍻
A numerical solution for gas-particle flows at high Reynolds numbers
Predicting the fluid mechanical characteristics of a gas-solid two-phase flow is critical for the successful design and operation of coal gasification systems, coal fired turbines, rocket nozzles,
and other energy conversion systems. This work presents a general grid-free numerical solution which extends a numerical solution of the Navier-Stokes equations developed by Chorin to a solution
suitable for unsteady or steady dilute gas-solid particle flows. The method is applicable to open or closed domains of arbitrary geometry. The capability of the method is illustrated by analyzing the
flow of gas and particles about a cylinder. Good agreement is found between the numerical method and experiment.
ASME Journal of Applied Mechanics
Pub Date: September 1981
Keywords: Computational Fluid Dynamics; Gas-Solid Interfaces; Steady Flow; Two Phase Flow; Unsteady Flow; Circular Cylinders; Coal Gasification; Navier-Stokes Equation; Particle Trajectories; Performance Prediction; Reynolds Number; Rocket Nozzles; Fluid Mechanics and Heat Transfer
Integrated Multi-Step Design Method for Practical and Sophisticated Compliant Mechanisms Combining Topology and Shape Optimizations
Masakazu Kobayashi^*, Shinji Nishiwaki^**, and Hiroshi Yamakawa^***
^*Department of Information-aided Technology, Toyota Technological Institute, 2-12-1 Hisakata, Tempaku-ku, Nagoya 468-8511, Japan
^**Department of Aeronautics and Astronautics, Kyoto University, Yoshida Hon-machi, Sakyo-Ku, Kyoto 606-8501, Japan
^***Faculty of Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan
October 23, 2006
December 21, 2006
April 20, 2007
optimal design, compliant mechanism, topology optimization, shape optimization, nonlinear analysis
Compliant mechanisms designed by traditional topology optimization have a linear output response, and it is difficult for traditional methods to implement mechanisms having nonlinear output
responses, such as nonlinear deformation or path. To design a compliant mechanism having a specified nonlinear output path, we propose a two-stage design method based on topology and shape
optimizations. In the first stage, topology optimization generates an initial conceptual compliant mechanism based on ordinary design conditions, with “additional” constraints used to control the
output path in the second stage. In the second stage, an initial model for the shape optimization is created, based on the result of the topology optimization, and additional constraints are
replaced by spring elements. The shape optimization is then executed, to generate the detailed shape of the compliant mechanism having the desired output path. At this stage, parameters that
represent the outer shape of the compliant mechanism and of spring element properties are used as design variables in the shape optimization. In addition to configuring the specified output path,
executing the shape optimization after the topology optimization also makes it possible to consider the stress concentration and large displacement effects. This is an advantage offered by the
proposed method, because it is difficult for traditional methods to consider these aspects, due to inherent limitations of topology optimization.
Cite this article as:
M. Kobayashi, S. Nishiwaki, and H. Yamakawa, “Integrated Multi-Step Design Method for Practical and Sophisticated Compliant Mechanisms Combining Topology and Shape Optimizations,” J. Robot.
Mechatron., Vol.19 No.2, pp. 141-147, 2007.
Copyright© 2007 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All right reserved. | {"url":"https://www.fujipress.jp/jrm/rb/robot001900020141/","timestamp":"2024-11-14T17:34:06Z","content_type":"text/html","content_length":"51879","record_id":"<urn:uuid:7e8693ad-27fc-4be0-b703-1d525407274d>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00561.warc.gz"} |
Lesson 8
Measurement Error (Part 1)
8.1: How Long Are These Pencils? (20 minutes)
Optional activity
In this activity, students measure lengths and determine possibilities for actual lengths. There are two layers of attending to precision (MP6) involved in this task:
• Deciding how accurately the pencils can be measured, probably to the nearest mm or to the nearest 2 mm, but this depends on the eyesight and confidence of the student
• Finding the possible percent error in the measurement chosen
Arrange students in groups of 2. Provide access to calculators. Give students 4–5 minutes of quiet work time, followed by partner and whole-class discussion.
Writing, Speaking: MLR8 Discussion Supports. To support students as they respond to “How accurate are your estimates?”, provide a sentence frame such as: “My estimate is within ___ mm of the actual
length because . . . ." Encourage students to consider what details are important to share and to think about how they will explain their reasoning using mathematical language. This will help
students use mathematical language as they justify the accuracy of their estimates.
Design Principle(s): Optimize output (for explanation)
Student Facing
1. Estimate the length of each pencil.
2. How accurate are your estimates?
3. For each estimate, what is the largest possible percent error?
Anticipated Misconceptions
Some students may think that they can find an exact value for the length of each pencil. Because the pictures of the pencils are far enough away from the ruler, it requires a lot of care just to
identify the “nearest” millimeter (or which two millimeter markings the length lies between). Prompt them to consider the error in their measurements by asking questions like, “Assuming you measured
the pencil accurately to the nearest millimeter, what is the longest the actual length of the pencil could be? What is the shortest it could be?”
Some students may not remember how to calculate percent error. Ask them, “What is the biggest difference possible between the estimated and actual lengths? What percentage of the actual length would
that be when the difference is as big as possible?”
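One way to sketch this computation, assuming a measurement to the nearest millimeter (so the true length is within ±0.5 mm of the reading), with the worst-case error taken relative to the shortest possible actual length:

```python
# Largest possible percent error for a single length measurement.
# Assumes the reading is correct to within half the ruler's resolution.

def max_percent_error(measured_mm, resolution_mm=1.0):
    half = resolution_mm / 2            # worst-case absolute error
    shortest = measured_mm - half       # shortest the actual length could be
    return half / shortest * 100        # error as a percent of the actual length

print(round(max_percent_error(54), 2))   # short pencil, read as 5.4 cm
print(round(max_percent_error(177), 2))  # long pencil, read as 17.7 cm
```

The same 0.5 mm uncertainty is a noticeably larger percent error for the short pencil than for the long one, which is the point of the discussion below.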
Activity Synthesis
The goal of this discussion is for students to practice how they talk about precision.
Discussion questions include:
• “How did you decide how accurately you can measure the pencils?” (I looked for a value that I was certain was less than the length of the pencil and a value that I was certain was bigger. My
estimate was halfway in between.)
• “Were you sure which mm measurement the length is closest to?” (Answers vary. Possible responses: Yes, I could tell that the short pencil is closest to 5.4 cm. No, the long pencil looks to be closest to 17.7 cm, but I’m not sure. I am sure it is between 17.6 cm and 17.8 cm.)
• “Were the percent errors the same for the small pencil and for the long pencil? Why or why not?” (No. I was able to measure each pencil to within 1 mm. This is a smaller percentage of the longer
pencil length than it is of the smaller pencil length.)
Other possible topics of conversation include noting that the level of accuracy of a measurement depends on the measuring device. If the ruler were marked in sixteenths of an inch, we would only be
able to measure to the nearest sixteenth of an inch. If it were only marked in cm, we would only be able to measure to the nearest cm.
8.2: How Long Are These Floor Boards? (20 minutes)
Optional activity
This activity examines how measurement errors behave when they are added together. In other words, if I have a measurement \(m\) with a maximum error of 1% and a measurement \(n\) with a maximum
error of 1%, what percent error can \(m + n\) have? In addition to examining accuracy of measurements carefully (MP6), students work through examples and look for patterns (MP8) in order to
hypothesize, and eventually show, how percent error behaves when measurements with error are added to one another.
Monitor for students who look for patterns, recognize the usefulness of the distributive property, or formulate the problem abstractly with variables.
Read the problem out loud and ask students what information they would need to know to be able to solve the problem. Students may say that they need to know what length the boards are supposed to be,
because it is likely that they haven't realized that they can solve the problem without this information. Explain that floor boards come in many possible lengths, that 18-inch and 36-inch lengths are
both common, but the boards can be anywhere between 12 and 84 inches. Ask students to pick values for two actual lengths and figure out the error in that case. Then they can pick two different
examples, make the calculations again, and look for patterns.
Provide access to calculators.
Representation: Internalize Comprehension. Represent the same information through different modalities by using diagrams. If students are unsure where to begin, suggest that they draw a diagram to
help illustrate the information provided.
Supports accessibility for: Conceptual processing; Visual-spatial processing
Student Facing
A wood floor is made by laying multiple boards end to end. Each board is measured with a maximum percent error of 5%. What is the maximum percent error for the total length of the floor?
Anticipated Misconceptions
Some students may pick some example lengths but then struggle with knowing what to do with them. Ask them “what would be the maximum measured lengths? The minimum? What would be the error if both
measurements were maximum? What if they were both minimum?”
Some students may pick numbers that make the calculations more complicated, leading to arithmetic errors. Suggest that they choose simple, round numbers for lengths, like 50 inches or 100 inches.
Activity Synthesis
The goal of this discussion is for students to generalize from their specific examples of measurements to understand the general pattern and express it algebraically.
Poll the class on the measurements they tried and the maximum percent error they calculated. Invite students to share any patterns they noticed, especially students who recognized the usefulness of
the distributive property for making sense of the general pattern.
Guide students to use variables to talk about the patterns more generally.
• If a board is supposed to have length \(x\) with a maximum percent error of 5%, then the shortest it could be is \(0.95x\) and the longest it could be is \(1.05x\).
• If another board is supposed to have length \(y\), it could be between \(0.95y\) and \(1.05y\).
• When the boards are laid end-to-end, the shortest the total length could be is \(0.95x + 0.95y\), which is equivalent to \(0.95(x + y)\).
• The longest the total length could be is \(1.05x + 1.05y\), or \(1.05(x + y)\).
• Because of the distributive property, we can see that the maximum percent error is still 5% after the board lengths are added together.
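This pattern can also be checked numerically. The board lengths below are arbitrary examples, not values from the activity:

```python
# Check that summing measurements that each have a 5% maximum error
# leaves the maximum percent error of the total at 5%.

def max_percent_error_of_sum(lengths, pct=0.05):
    """Worst-case percent error (as a fraction) of the summed lengths."""
    total = sum(lengths)
    longest = sum(x * (1 + pct) for x in lengths)   # every board measured long
    shortest = sum(x * (1 - pct) for x in lengths)  # every board measured short
    worst = max(longest - total, total - shortest)
    return worst / total

boards = [50, 100, 36]                  # actual lengths in inches (arbitrary)
print(max_percent_error_of_sum(boards)) # approximately 0.05, i.e. still 5%
```

Whatever lengths are chosen, the worst case factors out exactly as the distributive-property argument predicts.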
One interesting point to make, if students have also done the previous activity about measuring pencils, is that you could measure the sum of the board lengths with a lower percent error than you
could measure each individual board (assuming your tape measure is long enough), just like an error of 1 mm was a smaller percentage of the length of the longer pencil.
06 CR-Factorization and Linear Transformations
Lecture from 04.10.2024 | Video: Videos ETHZ
Imagine an m × n matrix A with rank r. This means its columns span an r-dimensional space. CR-Factorization says we can build A from two special matrices:
• C: A smaller m × r matrix containing only the independent columns of A. These columns form a solid foundation, acting as a basis for the entire column space of A. Think of them as the “essential” building blocks.
• R: A unique r × n matrix that describes how the remaining columns of A relate to these independent ones. It essentially encodes the “dependencies” between columns.
The beauty of CR-Factorization lies in this simple equation:
A = CR
This means we can reconstruct the original matrix A simply by multiplying C and R.
Example Rank 1 Matrix (Lemma 2.21)
Consider the matrix:
This is a matrix where the rows are scalar multiples of each other, meaning the rank of the matrix is 1. We can express this matrix as the outer product of two vectors, which is a characteristic of
rank 1 matrices.
According to Lemma 2.21, a matrix A has rank 1 if and only if there exist non-zero vectors u and v such that A = u vᵀ.
In this case, we can write:
The outer product results in:
Thus, we have successfully expressed the matrix as the outer product of two vectors, confirming that it has rank 1.
Example Rank 2
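A small illustrative rank-2 factorization in pure Python. The matrix below is an example chosen here for illustration, not the one from the lecture: its second column is twice its first, so only two columns are independent, and collecting them in C gives A = CR.

```python
# CR-factorization of a small rank-2 matrix, verified by multiplying back.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3],
     [2, 4, 7],
     [3, 6, 10]]      # column 2 = 2 * column 1, so rank(A) = 2

C = [[1, 3],          # the two independent columns of A (columns 1 and 3)
     [2, 7],
     [3, 10]]

R = [[1, 2, 0],       # col 1 of A = 1*C1, col 2 = 2*C1, col 3 = 1*C2
     [0, 0, 1]]

print(matmul(C, R) == A)  # True
```

R simply records the dependency coefficients: each column of A is rebuilt as a combination of the columns of C.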
Fast Fibonacci Numbers by Iterative (Matrix) Squaring
Now what if we wanted to find Fₙ for some very large n?
But now we’d have to compute on the order of n matrix multiplications. We need to have a smarter way of doing that: squaring the matrix repeatedly.
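A minimal sketch of that idea in pure Python: binary exponentiation of the 2x2 Fibonacci matrix F = [[1, 1], [1, 0]], using the fact that Fⁿ = [[F(n+1), F(n)], [F(n), F(n-1)]].

```python
# Fibonacci via iterative matrix squaring: O(log n) 2x2 multiplications
# instead of n of them.

def mat_mult(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def fib(n):
    result = [[1, 0], [0, 1]]           # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:                        # binary exponentiation
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)     # square at every step
        n >>= 1
    return result[0][1]                 # the (0, 1) entry of F^n is F(n)

print(fib(10))  # 55
```

Each loop iteration halves n, so even huge indices need only a few dozen multiplications.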
Matrices and Linear Transformations
A matrix can be thought of as a function or transformation that maps vectors from one vector space to another. This mapping adheres to the rules of linearity:
• Additivity: Transforming the sum of two vectors is equivalent to transforming each vector individually and then adding the results. Mathematically, for a matrix A and vectors x, y: A(x + y) = Ax + Ay
• Homogeneity: Scaling a vector by a scalar before transformation is equivalent to scaling the transformed vector. Mathematically, for a matrix A, a vector x, and a scalar λ: A(λx) = λ(Ax)
These properties are crucial because they allow us to manipulate linear combinations of vectors efficiently using matrix operations.
The standard notation for this transformation is x ↦ Ax, where A represents the matrix and x is the input vector. This product yields an output vector in a different vector space.
• x ∈ ℝ^n: The input vector belongs to an n-dimensional Euclidean space.
• A ∈ ℝ^(m×n): The matrix, A, is a rectangular array of real numbers with m rows and n columns. It defines the transformation between the spaces.
• Ax ∈ ℝ^m: The output vector resides in an m-dimensional Euclidean space, reflecting the dimensionality change induced by the transformation.
Example 1: Matrix Multiplication
If we have a matrix:
and an input vector:
we can multiply them together. This multiplication is not like regular multiplication; it’s a specific process where we take each element of the matrix and combine it with corresponding elements from
the vector. The result, Ax, is another vector:
Here, the matrix transformed the vector from ℝ² (two dimensions) to ℝ³ (three dimensions).
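As an illustration of the same idea, here is a stand-in 3x2 matrix (chosen here for illustration, not the one from the notes) mapping ℝ² to ℝ³ in pure Python:

```python
# Matrix-vector multiplication: each output component is the dot product
# of one matrix row with the input vector.

def apply(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

A = [[1, 0],
     [0, 1],
     [1, 1]]          # a 3x2 matrix: input in R^2, output in R^3
x = [2, 3]

print(apply(A, x))    # [2, 3, 5]
```

A vector with two components goes in, and a vector with three components comes out, exactly the dimensionality change described above.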
Example 2: Linear Transformation
A linear transformation is a special type of function that preserves certain properties. We can represent it using a matrix. Let’s consider the transformation defined by this matrix:
This matrix “swaps” the components of a vector. Applying the transformation to a vector gives us:
So, if we input (x, y), the output is (y, x).
Visual Transformation
Let us define the transformation of a set S of inputs as T(S) = {Ax : x ∈ S}.
Example (ℝ² → ℝ²): “Space Distortion”:
A simple way to think of what’s happening is to look at the start and end positions of the unit vectors. 2D matrices allow you to operate on vectors in ℝ² and distort them by stretching, mirroring, rotating, and shearing.
Example (ℝ³ → ℝ²): “Orthogonal Projection”
Input 3D unit cube → Output: 2D Projection.
Linear Transformations Generalization
Linear transformations provide a powerful framework for describing how vectors can be transformed in a consistent and predictable manner.
A function T: ℝ^n → ℝ^m is considered a linear transformation if it satisfies the following two axioms for all vectors u, v ∈ ℝ^n and any scalar λ:
1. Additivity: T(u + v) = T(u) + T(v)
2. Homogeneity: T(λu) = λT(u)
Linear Transformations Represented by Matrices:
1. Rotation in 2D: Rotating a vector in the plane by an angle θ can be achieved with the following 2x2 matrix:
R(θ) = [[cos θ, -sin θ], [sin θ, cos θ]]
2. Scaling in ℝ²: To scale a vector in the plane by a factor ‘k’ along both axes, we use the matrix:
S(k) = [[k, 0], [0, k]]
3. Projection onto the x-axis in ℝ²: This transformation maps any vector (x, y) to (x, 0). The corresponding matrix is:
P = [[1, 0], [0, 0]]
4. Shearing in ℝ²: Shearing a vector distorts its shape by moving points along diagonal lines. A common shear transformation can be represented with the matrix:
H(s) = [[1, s], [0, 1]]
where ‘s’ is the shearing factor.
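These four standard maps can be written as plain functions on points, which makes their effect easy to check. A quick illustrative sketch:

```python
import math

# The four standard 2D transformations applied directly to a point (x, y).
def rotate(theta, p):
    x, y = p
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

def scale(k, p):
    return (k * p[0], k * p[1])

def project_x(p):
    return (p[0], 0.0)

def shear(s, p):
    return (p[0] + s * p[1], p[1])

# Rotating the unit vector (1, 0) by 90 degrees should land on (0, 1).
rx, ry = rotate(math.pi / 2, (1.0, 0.0))
print(abs(rx) < 1e-12, abs(ry - 1.0) < 1e-12)  # True True
```

Tracking where the unit vectors end up, as suggested above, is exactly what these functions compute column by column.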
Linear Transformations Without Matrices:
1. Differentiation: The derivative of a function, D(f) = f′, is a linear transformation because:
□ It satisfies additivity: D(f + g) = D(f) + D(g)
□ It satisfies homogeneity: D(λf) = λD(f)
2. Integration: Similar to differentiation, the integral of a function, I(f) = ∫ f(x) dx, is also a linear transformation.
Continue here: 07 Linear Transformations, Linear Systems of Equations, PageRank
Quadratic Equations -
Freshman year can be a daunting time for many students, but mastering Algebra 1 can set you up for success in high school and beyond. One of the key topics you'll encounter in Algebra 1 is quadratic
equations and applications. In this post, we'll explore some of the key concepts you'll need to navigate this topic with confidence.
Quadratic Equations and Applications
At its core, a quadratic equation is an equation of the form ax^2 + bx + c = 0, where a, b, and c are constants and a ≠ 0. The solutions to this equation can be found using the quadratic formula: x = (-b ± sqrt(b^2 - 4ac)) / (2a). But what does this actually mean?
Quadratic equations are useful in a variety of real-world applications, from physics to finance. For example, if you're trying to find the maximum height of a projectile, you can use a quadratic
equation to model its trajectory. Or if you're trying to calculate the profit of a business, you can use a quadratic equation to represent the relationship between revenue and expenses.
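For instance, the projectile idea can be sketched numerically. The launch speed and gravity values here are illustrative, not from the post:

```python
# Height of a projectile launched straight up: h(t) = -(g/2)*t^2 + v0*t.
# The vertex of this downward-opening parabola gives the maximum height.

def max_height(v0, g=9.8):
    t_peak = v0 / g                       # vertex: t = -b/(2a), a = -g/2, b = v0
    return -0.5 * g * t_peak**2 + v0 * t_peak

print(max_height(19.6))  # a 19.6 m/s launch peaks at 19.6 m
```

The vertex formula t = -b / (2a) is the same algebra students use to find the maximum of any quadratic function.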
Quadratic Functions and Solutions
A quadratic function is a function of the form f(x) = ax^2 + bx + c, where a, b, and c are constants. The graph of a quadratic function is a parabola, which can be concave up or concave down
depending on the value of a. If a > 0, the parabola opens upwards, and if a < 0, the parabola opens downwards.
To find the solutions to a quadratic function, you can use the quadratic formula or factor the equation. When a = 1, factoring involves finding two numbers that multiply to give you c and add to give you b. For example, if you have the equation x^2 + 5x + 6 = 0, you can factor it as (x + 2)(x + 3) = 0 and find the solutions x = -2 and x = -3.
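Both approaches can be checked with a short script. This is an illustrative sketch, not from the post, and it handles real roots only:

```python
import math

# Solve a*x^2 + b*x + c = 0 with the quadratic formula.
def solve_quadratic(a, b, c):
    disc = b * b - 4 * a * c          # the discriminant
    if disc < 0:
        return ()                     # no real solutions
    root = math.sqrt(disc)
    return tuple(sorted(((-b - root) / (2 * a), (-b + root) / (2 * a))))

print(solve_quadratic(1, 5, 6))  # (-3.0, -2.0)
```

The result matches the factoring (x + 2)(x + 3) = 0, which is a useful way for students to confirm their work.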
Functions, Graphs, and Features
In addition to quadratic functions, Algebra 1 also covers a variety of other types of functions, including linear functions, exponential functions, and logarithmic functions. Each type of function
has its own unique graph and features.
Linear functions have a constant rate of change and graph as a straight line. Exponential functions have a constant multiplicative rate of change and graph as a curve that starts out slowly and then
grows rapidly. Logarithmic functions are the inverse of exponential functions and graph as a curve that starts out rapidly and then slows down.
Understanding the graphs and features of different types of functions is important for solving problems and analyzing data. For example, if you're trying to model the growth of a population, you
might use an exponential function. Or if you're trying to analyze the relationship between two variables, you might use a linear function.
In conclusion, mastering Algebra 1 is an important step in your academic journey. By understanding quadratic equations and applications, quadratic functions and solutions, and functions, graphs, and
features, you'll be well-equipped to tackle more advanced math topics in the future. Remember to take advantage of resources like textbooks, online tutorials, and teachers to help you navigate this
challenging but rewarding subject.
DIY Tips and Projects
In the last WDT some people were talking a bit about Do It Yourself projects.
We've got a few threads in the Permanent Threads forum that deal with specific topics; home repair, make your own beer, etc.
This thread will be a more of a catch-all, generic DIY ask and answer topic.
We'll start it off in the General Discussion forum to get some traction, and then move it to the Permanent Threads section.
FOCUS: Ask about DIY stuff here. Help out a fellow TiB'er/ette with DIY info.
ALT-FOCUS: Share your past/current DIY projects. What stuff do you DIY that most people wouldn't? Share your successes and failures; sometimes it's just as important to learn how not to do something.
I saw this today and it can be shoehorned in here I guess.
I'm not a huge coffee drinker, but I've been in situations where someone's pot recently broke or got blown up with a firework or something so this is nice to know.
I was really hoping for more inspiring things than how to cut up a coffee filter, but beggars can't be choosers, no?
I do have to say that this story about a Man Who Built A Plane In His Basement And Had To Dig Up His Back Yard And Rip A Wall Out Of His Foundation To Get It Out is full of both inspiration and
Attached Files:
Just you wait until one of those bears takes out your coffee maker.
Emotionally Jaded
Oct 19, 2009
Alt. Focus:
I live in a condo without a yard, so my DIY projects have been relatively limited. Aside from painting (which really shouldn't count), I've installed three ceiling fans and designed my AV setup.
While the ceiling fans are nice, the AV setup is what I'm the most proud of. I've got 5 flat panel 11"x14" speakers mounted to the wall and a 10" subwoofer under an end table with all the wiring
hidden. Most of the wiring that is outside of the wall runs along the molding and is hidden by our sofa. The rest runs through the walls. There are only two places where you can really see any
wiring at all. The first is in our second bedroom, where it runs in the crotch where the brick wall meets the sheetrock and where the wall meets the ceiling as it runs over into a closet (where
it then drops down and punches through the wall to connect to the speaker). Because of the brick and the natural wood ceiling, the wire is barely noticeable. The second is where one speaker is
mounted to a brick wall, but I went to Walmart and bought some fake vine plant and zip-tied it to the wire and draped it around the speaker, so you can't even tell it's there. Most people have
just assumed it's a wireless speaker.
The TV is mounted to the wall with all of the wires running through to a wooden shelving unit that I built in the room behind it. An RF remote controls everything. From the sofa, all you see is
the TV and the speakers, so it's a nice, clean look.
Emotionally Jaded
Nov 19, 2009
If you like beer, why not build a Kegerator.
What you need is a chest freezer, a friendly local microbrewery, and to purchase the following two items.
The Kit
The temperature Regulator
Those link to the more expensive units, so you can buy it for less then the $450, when I made my Kegerator I picked up a local CO2 tank which dropped the shipping costs considerably.
I've had mine for going on 4 summers now, and being able to buy two kegs as needed dropped my beer expenses to the point where the whole deal broke even after the 2nd summer.
Call me Caitlyn. Got any cake?
Nov 3, 2009
I have a bunch of projects in my mind that I want to do, but my biggest problem is planning and preparation. For example, I want to completely remodel my basement bathroom. My Dad and I redid my
other bathroom and taken individually, I'm capable of performing all the necessary tasks. Taken as a whole, however, I feel lost. Here's the list of things that I want to do to that bathroom:
-Put new flooring in, which will be tile
-Rip out the shower and put in a jacuzzi
-Put in a new toilet and sink
-Build a vanity and towel cabinet
-Repaint the walls & ceiling and put in new base trim
What do I start with? What's the most logical order in which to do all those tasks?
Emotionally Jaded
Oct 19, 2009
I have a long documented history of being able to turn any consumer electronics device into a paperweight.
E. Tuffmen
Emotionally Jaded
Oct 19, 2009
I don't know how inspiring these are but... the first one is of the fence and raised beds I built. I built the same raised beds for the front of the house, and if you look just above the fence on
the left, you'll see the chicken coop I built. I plan on doing the rest of the back yard in this fencing, only at 6 feet high instead of 4. Right behind the light pole and further into the woods
I plan on building my son a tree house. Hope to have the platform built by the end of the summer. Still have a LOT more landscaping to do. One of the reasons I bought this house was because the
yard was a "blank slate" and I could do it how I want without having to take other people's crap out.
The next one is an electronic firing system for fireworks I've been working on and hope to have done by the fourth. All that's left is wiring it up. If you like fireworks, shooting electronically
is the only way to go.
Attached Files:
Emotionally Jaded
Oct 21, 2009
If you're not using the bathroom now, it sounds like you want to gut it, so just tear everything out first.
I'd probably go biggest-to-smallest so you have the most maneuverability and the biggest tasks usually result in the most collateral damage (i.e. you're more likely to smash into your new vanity
when installing the jacuzzi than the other way around). So, given that, and assuming the plumbing is already where you want it, I'd:
- Rip everything out, including flooring
- Paint (you don't want to paint around things or risk spilling)
- Lay down whatever base flooring you'll need for the tile (if necessary)
- Jacuzzi
- Vanity/cabinet
- Trim
- If the sink is part of the vanity, install it. If it's a pedestal sink, wait.
- Tile (toilet/pedestal sink will go over the tile)
- Toilet and sink if not installed in the vanity
- Touchup paint from the inevitable marks you'll leave
I'm not a professional contractor, just did a few full bathroom overhauls.
Moved to the Permanent Threads forum.
Village Idiot
Dec 22, 2010
If anyone has any Carpentry questions, feel free to shoot them my way. I've been building with wood and related products for a little over twenty years now, however only the last 8 have been
Experienced Idiot
Oct 19, 2009
My little tiny runt of a dog hates heavy rain. I live in Vancouver. I also don't like poop on my carpet. Something needed to be done.
It's 4' lengths of that 2' wide corrugated plastic roofing material screwed on either end to two 10' lengths of PVC pipe. An elbow at either end give holding places for 2 1/2' legs. A pipe strap
at either end causes it to hinge down and out of the way when it's not raining.
Attached Files:
Experienced Idiot
Oct 20, 2009
The only thing you need for any DIY home repairs is this flowchart. It has never failed me.
Attached Files:
Emotionally Jaded
Oct 27, 2009
I dunno man, I think this guy has him beat in terms of DIY inspiration and WTF. He lives alone on a mountain building shit. Just watch the first couple videos in order, and you'll see why he's so
awesome, not to mention his goofy humor:
#15 Sep 23, 2011
Last edited by a moderator: Mar 27, 2015
Emotionally Jaded
Oct 24, 2009
Not my project (I wish) but a wi-fi hacking, cell-phone snooping, home-made UAV? Yes please
Emotionally Jaded
Apr 13, 2010
Does anyone have any ideas how to improve insulation around metal framed windows?
I live in a condo and My entire exterior wall is floor to ceiling windows. The building was constructed in the mid 70's so the windows are double pane with metal frames. None are fogged, but I
lose a shit ton of heat through the metal frames. I installed thermal drapes when I moved in, but it still gets cold as shit in my unit. Replacing them is not an option and so far everything I
find about decreasing cold loss through windows addresses the window and not the frame itself.
Crown Royal
Just call me Topher
Oct 31, 2009
Any stores that sell things like draperies and blinds should be able to help.
There are clear films you can "stick" to the windows that increase insulation. They are transparent and can be cut to shape with scissors. They might help.
I know most of the shitty "big box" stores like Home Depot have them in the draperies/wallpaper area. Just ask.
It could be because there isn't any insulation around the window frame, not necessarily the metal frame itself. Does your window have trim around it? If so, you should be able to carefully pull
one side of that trim off and it should expose the framing and insulation underneath so you can get an idea of what you're dealing with.
Old windows are notorious for not having any (or proper) insulation around the frame, which leaves just a cold air gap between the outside siding and the inside wall, which is a highway of cold
coming into the house. If, after you carefully pull off the window trim, you see that this is the case, you could spray foam the empty spaces underneath it. If you're careful when prying that
shit off (it should be held in place with small brad nails), it should be a simple procedure. (Just be careful when nailing them back in place that you don't dent the trim with a hammer... either
use a brad gun or a nail punch).
That would make a HUGE difference.
Oct 26, 2009
Use low expansion foam or you could have a problem closing your windows.
2.4. Dimensions
A variable may have any number of dimensions, including zero, and the dimensions must all have different names. COARDS strongly recommends limiting the number of dimensions to four, but we wish to
allow greater flexibility. The dimensions of the variable define the axes of the quantity it contains. Dimensions other than those of space and time may be included. Several examples can be found in
this document. Under certain circumstances, one may need more than one dimension in a particular quantity. For instance, a variable containing a two-dimensional probability density function might
correlate the temperature at two different vertical levels, and hence would have temperature on both axes.
If any or all of the dimensions of a variable have the interpretations of "date or time" (T), "height or depth" (Z), "latitude" (Y), or "longitude" (X) then we recommend, but do not require (see
Section 1.4, “Relationship to the COARDS Conventions”), those dimensions to appear in the relative order T, then Z, then Y, then X in the CDL definition corresponding to the file. All other
dimensions should, whenever possible, be placed to the left of the spatiotemporal dimensions.
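In CDL, the recommended ordering looks like this. The fragment is schematic and for illustration only; the dimension names and sizes are arbitrary choices, not part of the convention:

```
dimensions:
  time = UNLIMITED ;  // T
  lev = 18 ;          // Z
  lat = 64 ;          // Y
  lon = 128 ;         // X
variables:
  float xwind(time, lev, lat, lon) ;
    xwind:long_name = "zonal wind" ;
    xwind:units = "m s-1" ;
```

The spatiotemporal dimensions appear in T, Z, Y, X order, and any additional dimensions would be listed to their left.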
Dimensions may be of any size, including unity. When a single value of some coordinate applies to all the values in a variable, the recommended means of attaching this information to the variable is
by use of a dimension of size unity with a one-element coordinate variable. It is also acceptable to use a scalar coordinate variable which eliminates the need for an associated size one dimension in
the data variable. The advantage of using a coordinate variable is that all its attributes can be used to describe the single-valued quantity, including boundaries. For example, a variable containing
data for temperature at 1.5 m above the ground has a single-valued coordinate supplying a height of 1.5 m, and a time-mean quantity has a single-valued time coordinate with an associated boundary
variable to record the start and end of the averaging period.
A Practical Guide to Robust Optimization
by Bram L. Gorissen, Ihsan Yanıkoğlu, Dick den Hertog
Publisher: arXiv 2015
Number of pages: 29
The aim of this paper is to help practitioners to understand robust optimization and to successfully apply it in practice. We provide a brief introduction to robust optimization, and also describe
important do's and don'ts for using it in practice. We use many small examples to illustrate our discussions.
Download or read it online for free here:
Download link
(700KB, PDF)
Similar books
Applied Mathematical Programming Using Algebraic Systems
Bruce A. McCarl, Thomas H. Spreen
Texas A&M University
This book is intended to both serve as a reference guide and a text for a course on Applied Mathematical Programming. The text concentrates upon conceptual issues, problem
formulation, computerized problem solution, and results interpretation.
Optimization Algorithms on Matrix Manifolds
P.-A. Absil, R. Mahony, R. Sepulchre
Princeton University Press
Many science and engineering problems can be rephrased as optimization problems on matrix search spaces endowed with a manifold structure. This book shows how to exploit the structure of such problems to develop efficient numerical algorithms.
An Introduction to Nonlinear Optimization Theory
Marius Durea, Radu Strugariu
De Gruyter Open
Starting with the case of differentiable data and the classical results on constrained optimization problems, continuing with the topic of nonsmooth objects involved in optimization, the book concentrates on both theoretical and practical aspects.
Linear Programming
Jim Burke
University of Washington
These are notes for an introductory course in linear programming. The four basic components of the course are modeling, solution methodology, duality theory, and sensitivity analysis. We focus on the simplex algorithm due to George Dantzig.
Attempting to close educational gaps! Part 3: an update after 6 weeks
This is the third post in a series where we’re documenting an ambitious (crazy?) attempt to fully close educational gaps in math at an elementary school in downtown Seattle. As a quick recap:
• Part 1 describes the school, Lowell Elementary.
• Part 2 describes our program’s approach.
In this piece we’re going to talk about what we’ve seen in the first 6 weeks of the program. Right now we’re working with 10 students in 4th and 5th grade, but based on what we’ve seen so far, we’re
looking at expanding the program to more students in January. For the “I don’t have time to read a full 4-minute post” folks, here are the main points:
1. The combination of 1:1 tutoring with 4th and 5th grade students works really well. There aren’t any counterproductive social dynamics and the kids are having fun with the program.
2. The kids are behind, and their independent learning skills need to be developed.
3. The kids are really engaged and want to learn.
For those of you who are interested in the details, read on!
Now I’ll flesh out the main points in more detail.
1:1 tutoring with this age group works well.
I’ve done work with this age group in team settings (for example, as a soccer coach), and I’ve often seen behavior issues arise due to the dynamics between students, especially when one of them is
struggling. The 1:1 format completely eliminates that.
The kids also seem to genuinely enjoy the 1:1 interactions, especially when it’s supplemented via chat using remote learning tools. We ran the following quick survey a few weeks ago:
On a scale of 1–5 (1=hate it, 5=love it), how do you feel about your practice sessions? The average response was 4.75, which seems like a big success.
Finally, I’ll add that 1:1 tutoring is also pretty nice from the tutor’s perspective. After 6 weeks, I feel like I already know each of the students in a pretty deep and meaningful way.
The kids are behind in math, and their independent learning skills need to be developed.
Based on what we know about the test scores at Lowell Elementary, it wasn’t surprising to find that the students were fairly far behind on math standards. For example, only 1 of the 10 students was
fluent with single digit multiplication, which is a 3rd grade standard. A couple of the students also struggled with 2nd grade word problems.
It also wasn’t shocking that the students didn’t have strong independent learning skills. When working on their own (with no tutor around) they would give up pretty quickly once they got out of their
comfort zone, though I suspect this is common with this age group. Regardless, given our goal of creating independent learners, it is clear that we need to work on building up their resiliency and
growth mindsets. That being said…
The students are engaged and want to learn.
I’ve been thrilled by how engaged the students are. They show up ready to share, ready to discuss, and ready to learn. I think our program structure really helps here: at the beginning we really
focused on relationship building, and it has paid off. The kids love to talk about themselves, and we’re able to use what we’ve learned to build motivation later on.
As an example, one of the students wants to be an astronaut when he grows up. It was a struggle to get him engaged at first, but then we spent some time reading about what it takes to be an
astronaut…turns out math skills are near the top of the list! That process really improved his engagement.
In terms of enthusiasm, I’ve been blown away. Almost every student volunteered to practice over Thanksgiving break, and I’ve got students asking for weekend sessions.
To get more concrete metrics, we are quantitatively measuring engagement by tracking how much students practice on their own. At this point every student who regularly attends is typically practicing
on their own 2 or more times per week. When you include their 1:1 sessions, they are averaging over 60 minutes per week of active practice (see chart below). I find that pretty encouraging given that
a lot of our scheduled 1:1 time is talking about skills, motivation, and other things. It was awesome to see that when they had a week with no school they still averaged over 30 min/week of practice!
The average time spent actively practicing per week (on Khan Academy) across all 8 regularly attending students. The significant amount of practice during Thanksgiving break was very encouraging.
The challenges
I don’t want to ignore any of the challenges. By far the biggest is attendance. At the beginning of the program, 4 of the 10 students didn’t regularly show up. Two of them eventually got engaged
with the class and now show up for almost every session. The other two haven’t shown up for weeks. This will definitely make it challenging to hit the school-wide improvements we are looking for.
Looking ahead, we’ll want to eventually transition the students to practicing more on their own. This will help the students become independent learners and also reduce the long-term cost of the
program. We’ve made a good start, but we’re still a long ways from super steady practice habits, and getting there probably won’t be easy.
What does it all mean?
In terms of our long-term goal of completely closing math education gaps at Lowell, we need to be able to get 75% of the struggling students to proficiency. For details see the “Is it realistic?”
section in our previous post. If we only get attendance from 80% of our students (8/10), it means we need to get every single remaining student to grade-level proficiency to hit our school-wide goal.
It will definitely be tough, but from what I’ve seen in their skill growth so far, I think it is possible. We’ll discuss that in more detail in future posts!
grdflexure - Compute flexural deformation of 3-D surfaces for various rheologies
grdflexure topogrd -Drm/rl[/ri]/rw -ETe[u] -Goutgrid [ -ANx/Ny/Nxy ] [ -Cppoisson ] [ -CyYoung ] [ -Fnu_a[/h_a/nu_m] ] [ -Llist ] [ -Mtm ] [ -N[f|q|s|nx/ny][+a|d|h|l][+e|n|m][+twidth][+w[suffix]][+z[p]] ] [ -Sbeta ] [ -Tt0[u][/t1[u]/dt[u]|file|n][+l] ] [ -V[level] ] [ -Wwd ] [ -Zzm ] [ -fg ]
Note: No space is allowed between the option flag and the associated arguments.
grdflexure computes the flexural response to loads using a range of user-selectable rheologies. The user may select from elastic, viscoelastic, or firmoviscous (with one or two viscous layers). Temporal
evolution can also be modeled by providing incremental load grids and specifying a range of model output times.
Required Arguments¶
topogrd
2-D binary grid file with the topography of the load (in meters); See GRID FILE FORMATS below. If -T is used, topogrd may be a filename template with a floating point format (C syntax) and a
different load file name will be set and loaded for each time step. The load times thus coincide with the times given via -T (but not all times need to have a corresponding file). Alternatively,
give topogrd as =flist, where flist is an ASCII table with one topogrd filename and load time per record. These load times can be different from the evaluation times given via -T. For load time
format, see -T.
-Drm/rl[/ri]/rw
Sets density for mantle, load, infill (optional, otherwise it is assumed to equal the load density), and water or air. If ri differs from rl then an approximate solution will be found. If ri is
not given then it defaults to rl.
-ETe[u]
Sets the elastic plate thickness (in meter); append k for km. If the elastic thickness exceeds 1e10 it will be interpreted as a flexural rigidity D (by default D is computed from Te, Young’s
modulus, and Poisson’s ratio; see -C to change these values).
-Goutgrid
If -T is set then grdfile must be a filename template that contains a floating point format (C syntax). If the filename template also contains either %s (for unit name) or %c (for unit letter)
then we use the corresponding time (in units specified in -T) to generate the individual file names, otherwise we use time in years with no unit.
Optional Arguments¶
-ANx/Ny/Nxy
Specify in-plane compressional or extensional forces in the x- and y-directions, as well as any shear force [no in-plane forces]. Compression is indicated by negative values, while extensional
forces are specified using positive values.
-Cppoisson
Change the current value of Poisson’s ratio [0.25].
-CyYoung
Change the current value of Young’s modulus [7.0e10 N/m^2].
-Fnu_a[/h_a/nu_m]
Specify a firmoviscous model in conjunction with an elastic plate thickness specified via -E. Just give one viscosity (nu_a) for an elastic plate over a viscous half-space, or also append the
thickness of the asthenosphere (h_a) and the lower mantle viscosity (nu_m), with the first viscosity now being that of the asthenosphere. Give viscosities in Pa*s. If used, give the thickness of
the asthenosphere in meter; append k for km.
-N[f|q|s|nx/ny][+a|d|h|l][+e|n|m][+twidth][+w[suffix]][+z[p]]
Choose or inquire about suitable grid dimensions for FFT and set optional parameters. Control the FFT dimension:
-Na lets the FFT select dimensions yielding the most accurate result.
-Nf will force the FFT to use the actual dimensions of the data.
-Nm lets the FFT select dimensions using the least work memory.
-Nr lets the FFT select dimensions yielding the most rapid calculation.
-Ns will present a list of optional dimensions, then exit.
-Nnx/ny will do FFT on array size nx/ny (must be >= grid file size). Default chooses dimensions >= data which optimize speed and accuracy of FFT. If FFT dimensions > grid file dimensions,
data are extended and tapered to zero.
Control detrending of data: Append modifiers for removing a linear trend:
+d: Detrend data, i.e. remove best-fitting linear trend [Default].
+a: Only remove mean value.
+h: Only remove mid value, i.e. 0.5 * (max + min).
+l: Leave data alone.
Control extension and tapering of data: Use modifiers to control how the extension and tapering are to be performed:
+e extends the grid by imposing edge-point symmetry [Default],
+m extends the grid by imposing edge mirror symmetry
+n turns off data extension.
Tapering is performed from the data edge to the FFT grid edge [100%]. Change this percentage via +twidth. When +n is in effect, the tapering is applied instead to the data margins as no
extension is available [0%].
Control messages being reported: +v will report suitable dimensions during processing.
Control writing of temporary results: For detailed investigation you can write the intermediate grid being passed to the forward FFT; this is likely to have been detrended, extended by
point-symmetry along all edges, and tapered. Append +w[suffix] from which output file name(s) will be created (i.e., ingrid_prefix.ext) [tapered], where ext is your file extension. Finally, you
may save the complex grid produced by the forward FFT by appending +z. By default we write the real and imaginary components to ingrid_real.ext and ingrid_imag.ext. Append p to save instead the
polar form of magnitude and phase to files ingrid_mag.ext and ingrid_phase.ext.
-Llist
Write the names and evaluation times of all grids that were created to the text file list. Requires -T.
-Mtm
Specify a viscoelastic model in conjunction with an elastic plate thickness specified via -E. Append the Maxwell time tm for the viscoelastic model (in Myr).
-Sbeta
Specify a starved moat fraction in the 0-1 range, where 1 means the moat is fully filled with material of density ri while 0 means it is only filled with material of density rw (i.e., just water).
-Tt0[u][/t1[u]/dt[u]|file|n][+l]
Specify t0, t1, and time increment (dt) for sequence of calculations [Default is one step, with no time dependency]. For a single specific time, just give start time t0. The unit is years; append
k for kyr and M for Myr. For a logarithmic time scale, append +l and specify n steps instead of dt. Alternatively, give a file with the desired times in the first column (these times may have
individual units appended, otherwise we assume year). We then write a separate model grid file for each given time step.
-Wwd
Set reference depth to the undeformed flexed surface in m [0]. Append k to indicate km.
-Zzm
Specify reference depth to flexed surface (e.g., Moho) in m; append k for km. Must be positive. [0].
-V[level] (more …)
Select verbosity level [c].
-fg
Geographic grids (dimensions of longitude, latitude) will be converted to meters via a “Flat Earth” approximation using the current ellipsoid parameters.
-^ or just -
Print a short message about the syntax of the command, then exits (NOTE: on Windows just use -).
-+ or just +
Print an extensive usage (help) message, including the explanation of any module-specific option (but not the GMT common options), then exits.
-? or no arguments
Print a complete usage (help) message, including the explanation of all options, then exits.
Grid File Formats¶
By default GMT writes out grids as single precision floats in a COARDS-compliant netCDF file format. However, GMT is able to produce grid files in many other commonly used grid file formats and also
facilitates so called “packing” of grids, writing out floating point data as 1- or 2-byte integers. (more …)
Grid Distance Units¶
If the grid does not have meter as the horizontal unit, append +uunit to the input file name to convert from the specified unit to meter. If your grid is geographic, convert distances to meters by
supplying -fg instead.
netCDF COARDS grids will automatically be recognized as geographic. For other geographic grids where you want to convert degrees into meters, select -fg. If the data are close to either pole,
you should consider projecting the grid file onto a rectangular coordinate system using grdproject.
Plate Flexure Notes¶
The FFT solution to plate flexure requires the infill density to equal the load density. This is typically only true directly beneath the load; beyond the load the infill tends to be lower-density
sediments or even water (or air). Wessel [2001, 2016] proposed an approximation that allows for the specification of an infill density different from the load density while still allowing for an FFT
solution. Basically, the plate flexure is solved for using the infill density as the effective load density but the amplitudes are adjusted by the factor A = sqrt ((rm - ri)/(rm - rl)), which is the
theoretical difference in amplitude due to a point load using the two different load densities. The approximation is very good but breaks down for large loads on weak plates, a fairly uncommon situation.
Examples¶
To compute elastic plate flexure from the load topo.nc, for a 10 km thick plate with typical densities, try
gmt grdflexure topo.nc -Gflex.nc -E10k -D2700/3300/1035
To compute the firmoviscous response to a series of incremental loads given by file name and load time in the table l.lis at the single time 1 Ma using the specified rheological values, try
gmt grdflexure -T1M =l.lis -D3300/2800/2800/1000 -E5k -Gflx/smt_fv_%03.1f_%s.nc -F2e20 -Nf+a
References¶
Cathles, L. M., 1975, The viscosity of the earth’s mantle, Princeton University Press.
Wessel. P., 2001, Global distribution of seamounts inferred from gridded Geosat/ERS-1 altimetry, J. Geophys. Res., 106(B9), 19,431-19,441, http://dx.doi.org/10.1029/2000JB000083.
Wessel, P., 2016, Regional–residual separation of bathymetry and revised estimates of Hawaii plume flux, Geophys. J. Int., 204(2), 932-947, http://dx.doi.org/10.1093/gji/ggv472.
Logical Values
4.6 Logical Values
Octave has built-in support for logical values, i.e., variables that are either true or false. When comparing two variables, the result will be a logical value whose value depends on whether or not
the comparison is true.
The basic logical operations are &, |, and !, which correspond to “Logical And”, “Logical Or”, and “Logical Negation”. These operations all follow the usual rules of logic.
It is also possible to use logical values as part of standard numerical calculations. In this case true is converted to 1, and false to 0, both represented using double precision floating point
numbers. So, the result of true*22 - false/6 is 22.
Logical values can also be used to index matrices and cell arrays. When indexing with a logical array the result will be a vector containing the values corresponding to true parts of the logical
array. The following example illustrates this.
data = [ 1, 2; 3, 4 ];
idx = (data <= 2);
data(idx)
⇒ ans = [ 1; 2 ]
Instead of creating the idx array it is possible to replace data(idx) with data( data <= 2 ) in the above code.
Logical values can also be constructed by casting numeric objects to logical values, or by using the true or false functions.
logical (x)
Convert the numeric object x to logical type.
Any nonzero values will be converted to true (1) while zero values will be converted to false (0). The non-numeric value NaN cannot be converted and will produce an error.
Compatibility Note: Octave accepts complex values as input, whereas MATLAB issues an error.
See also: double, single, char.
true (n)
Return a matrix or N-dimensional array whose elements are all logical 1.
If invoked with a single scalar integer argument, return a square matrix of the specified size.
If invoked with two or more scalar integer arguments, or a vector of integer values, return an array with given dimensions.
See also: false.
false (n)
Return a matrix or N-dimensional array whose elements are all logical 0.
If invoked with a single scalar integer argument, return a square matrix of the specified size.
If invoked with two or more scalar integer arguments, or a vector of integer values, return an array with given dimensions.
See also: true.
Several meta-analyses have shown that the Five-Factor Model of personality predicts a wide range of performance outcomes in the workplace and is a useful framework for organizing most personality
measures (see, e.g., ^Barrick & Mount, 1991; ^Judge, Rodell, Klinger, Simon, & Crawford, 2013; ^Tett, Rothstein, & Jackson, 1991). Conscientiousness and emotional stability are consistently found to
predict job performance for all occupations. The other three dimensions are valid predictors for specific criteria and occupations (^Barrick, Mount, & Judge, 2001; ^Salgado, Anderson, & Tauriz, 2015).
Most personality questionnaires use single-stimulus items (e.g., Likert type). Forced-choice questionnaires (FCQs) are another type of psychological measurement instruments used in the evaluation of
non-cognitive traits, such as personality, preferences, and attitudes (see, e.g., ^Bartram, 1996; ^Christiansen, Burns, & Montgomery, 2005; ^Ryan & Ployhart, 2014; ^Saville & Willson, 1991). From
recruitment and selection professionals’ point of view, the main interest for these instruments is their ability to control for certain responses biases. Evidence suggests they are comparatively
robust against impression management attempts, which may easily arise in high-stakes contexts such as a selection process. Impression management has at least three effects on personality
questionnaire scores: (1) a decrease of their reliability index, (2) lower validity, and (3) an alteration of the individual rankings. These effects are, of course, especially relevant in the domain
of personnel selection, as they affect hiring decisions negatively (^Salgado & Lado, 2018).
Despite their resistance to faking, FCQs would not be relevant if they did not fare well in performance prediction when compared to alternative assessment formats. In recent years, a few
meta-analyses (^Salgado, 2017; ^Salgado et al., 2015; ^Salgado & Tauriz, 2014) have examined the predictive validity of FCQs and compared it with single-stimulus questionnaires. The evidence is that FCQs producing quasi-ipsative scores (described later) better fulfill the abovementioned criterion validity requirement.
Multidimensional FCQs are a special case that prompts the examinee to choose among stimuli (i.e., items) that are valid against different criteria (^Ghiselli, 1954). In contrast, unidimensional FCQs
prompt to choose among items that are valid against the same criterion, or among valid and invalid (i.e., “suppressor”) items (^Sisson, 1948). Although the suppressor-item format was the original
proposal, the multidimensional format rapidly imposed itself, due to its ability to assess several traits simultaneously (^Scott, 1968).
Depending on the method used to score the traits, multidimensional FCQs may yield ipsative or quasi-ipsative scores (^Hicks, 1970). A respondent’s task can be organized to avoid a constant total sum
of the measured dimensions. This fact may be achieved, for example, by requesting the candidates not just to select the item that describes them best or worse, but also to rate how good (or bad) the
description is.
“Strong” ipsativity implies that the total sum of scores in a multidimensional FCQ is fixed (^Hicks, 1970). Ipsative scores violate the assumptions of the Classical Test Theory, leading to a
distortion in reliability and construct validity. From a practical point of view, this implies an impossibility to compare persons according to their level on the traits assessed (^Cornwell & Dunlap,
1994). Ipsativity issues have led to a great controversy revolving around the forced-choice (FC) format. However, this controversy has largely ignored the fact that ipsativity is a property of the
direct scoring method, not of the format itself. The confusion may probably stem from the assimilation of both terms (see, e.g., ^Christiansen et al., 2005; ^Saville & Willson, 1991, for examples of
the use of the “ipsative” term as a synonym of “forced-choice”).
Some researchers have proposed using Item Response Theory (IRT) models to circumvent the problems of ipsativity. These would allow obtaining normative scores on the trait levels. Although the
literature does not provide examples so far, they may also help developing scoring methods, such as item weightings for computing quasi-ipsative scores.
The multi-unidimensional pairwise preference (MUPP; ^Stark, Chernyshenko, & Drasgow, 2005) was the first IRT model to be proposed for multidimensional FCQs; it is characterized mainly by (1) applying
to two-item blocks and (2) by assuming that each item’s measurement model is an “ideal point” model, which implies that the probability of agreeing with a response option decreases with the
“distance” of the respondent to the item location on the trait continuum. The MUPP model also assumes that the response process implies independent decisions between the two options (^Andrich, 1989,
^1995). This assumption leads, in turn, to hypothesize that item parameters do not change when those items are paired in FC blocks. This assumption is paramount for the validity of multidimensional
FC instruments and, as we will explain below, is the focus of this paper.
The Thurstonian IRT model (TIRT; ^Brown & Maydeu-Olivares, 2011), based on ^Thurstone’s (1927) law of comparative judgment, followed in chronological order. Unlike the MUPP model, it applies to
blocks with more than two items. It also assumes a “dominance” rather than ideal point measurement model—the probability of agreement with each item increases (or decreases) monotonically with a
respondent’s latent trait score.
The MUPP-2PL model (^Morillo et al., 2016) is a variant of the MUPP; as such, it applies to two-item blocks as well. It differs from the original MUPP in that the probability of agreement with each
response option, as in the Thurstonian IRT model, is modeled by a dominance function. More precisely, it assumes that a 2-parameter logistic (2PL; ^Birnbaum, 1968) curve models the response to each
of the two items. This curve is expressed as
$$P(X_{i_pj}=1\mid\boldsymbol{\theta}_j)=\Phi_L\left[a_{i_p}\left(\theta_{\widetilde{i_p}j}-b_{i_p}\right)\right] \quad (1)$$

where $\Phi_L(\cdot)$ is the logistic function, $\theta_{\widetilde{i_p}j}$ the latent trait tapped by the item $i_p$, and $a_{i_p}$ and $b_{i_p}$ its characteristic parameters. These can be interpreted, respectively, as a “discrimination” parameter and a “location” parameter: $a_{i_p}$ would indicate how sensitive or discriminant $i_p$ is to differences in $\theta_{\widetilde{i_p}j}$; $b_{i_p}$ would be a “point of indifference” in $\theta_{\widetilde{i_p}j}$ (i.e., where both the probability of agreeing and disagreeing with the item would be equal to .50).
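As a concrete illustration of Equation 1, the following sketch evaluates the 2PL agreement probability for a single item; the parameter values are invented for illustration and are not taken from the paper.

```python
import math

def p_agree_2pl(theta, a, b):
    """2PL agreement probability (Equation 1):
    P(X = 1 | theta) = logistic(a * (theta - b))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta = b, the "point of indifference", agreeing and
# disagreeing with the item are equally likely.
print(p_agree_2pl(theta=0.5, a=1.3, b=0.5))  # 0.5
```

A larger discrimination a makes the curve steeper around b, which is exactly what makes the item more sensitive to trait differences near its location.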
The assumptions of the MUPP-2PL model imply that if item $i_p$ was presented independently in a dichotomous response format, the probability that a respondent agreed with it would also be given by Equation 1 (see ^Morillo, 2018). Thus, if a FC block consists of items $i_1$ and $i_2$ and we presented those same items in a dichotomous format, their parameters and the FC block parameters should be equivalent. The response function of a bidimensional FC block (i.e., a block with two items tapping different latent dimensions) is given by
$$P(Y_{ij}=1\mid\boldsymbol{\theta}_j)=\Phi_L\left[a_{i_1}\theta_{\widetilde{i_1}j}-a_{i_2}\theta_{\widetilde{i_2}j}+d_i\right] \quad (2)$$

$$d_i=a_{i_2}b_{i_2}-a_{i_1}b_{i_1} \quad (3)$$

Therefore, $a_{i_1}$ and $a_{i_2}$ should be the same for the dichotomous items and the FC block, while $d_i$ should be a linear combination of the two location parameters $b_{i_1}$ and $b_{i_2}$ in the dichotomous format.
We call this the “invariance assumption”, as it implies that the parameters are invariant to both the format (FC versus dichotomous) and the within-block context (i.e., the other item(s) a certain
item is paired with).
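Under the invariance assumption, the block parameters in Equations 2 and 3 follow directly from the item parameters. A minimal sketch, with made-up item parameters:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative (invented) parameters for the two items of a bidimensional
# block, as they would be estimated in a dichotomous single-stimulus format.
a1, b1 = 1.2, -0.3   # item i1: discrimination, location
a2, b2 = 0.8, 0.5    # item i2: discrimination, location

# Equation 3: under invariance, the block intercept is a linear
# combination of the item parameters.
d = a2 * b2 - a1 * b1

def p_prefer_item1(theta1, theta2):
    """Equation 2: probability of endorsing item i1 over i2 in the block."""
    return logistic(a1 * theta1 - a2 * theta2 + d)

print(round(d, 2), round(p_prefer_item1(0.0, 0.0), 3))
```

If the invariance assumption were violated, the estimated block parameters would deviate from these derived values, which is precisely what the proposed tests check.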
Previous research has not subjected this assumption to abundant scrutiny; on the contrary, giving it for granted is prevalent in the literature (see, e.g., ^Stark et al., 2005). ^Lin and Brown (2017)
performed a retrospective study on massive data from the Occupational Personality Questionnaire (OPQ; ^Bartram, Brown, Fleck, Inceoglu, & Ward, 2006). Applying the Thurstonian IRT (^Brown &
Maydeu-Olivares, 2011) model, they compared the parameters in two versions of the instrument: OPQ32i, which uses a partial-ranking task with four items per block (most/least like me), and OPQ32r, a
reviewed version that dropped one item from each block (^Brown & Bartram, 2011), and implied a complete-ranking task with three items per block. They found that the parameters largely fulfilled the
invariance assumption. When it was not fulfilled, they also identified possible causal factors. They interpreted them as within-block context effects—variations of the item parameters due to the
other item(s) in the same block. However, they did not compare the FC format with the items individually applied in a single-stimulus format. It is also noteworthy that the OPQ Concept Model is not
the most widely accepted theoretical model of personality.
Testing the invariance assumption is crucial for the design of FCQs. When designing such an instrument, a practitioner may be tempted to rely upon the item parameter estimates without further
consideration. However, if the invariance assumption is violated, the FC block parameters may change dramatically, leading in turn to a lack of construct validity in the latent trait scores. The
purpose of this study is thus to test the invariance assumption of the MUPP-2PL model. In order to this, we will compare the parameters of a set of items designed to measure personality variables,
applying them as bidimensional FC blocks, and “individually”.
Single-stimulus items are usually applied in a graded-scale (GS) format. Therefore, we first propose a method for testing the invariance assumption with this presentation format. Then, we apply this
method to an empirical dataset assessing the Big Five construct domain. Finally, we discuss the application of the method and our results, giving some guidelines about their consequences for the
design of FCQs.
A Test of the Invariance Assumption with Graded-scale Items and Forced-choice Blocks
The traditional format of presenting non-cognitive items in questionnaires is the GS or Likert format. This format implies a series of responses graded in their level of agreement with the item
statement. Compared to the dichotomous format, the additional categories in the response scale provide a surplus of information that yields more reliable latent trait scores (^Lozano, García-Cueto, &
Muñiz, 2008).
The Graded Response Model (GRM; ^Samejima, 1968) can be applied to the response data from a GS questionnaire. According to the GRM, if a person responds to an item that has m + 1 categories (from 0
to m), they will choose category k or higher (with k being a category from 1 to m) with probability
$$P(X_{i_pj}\geq k\mid\boldsymbol{\theta}_j)=\Phi_L\left[a^*_{i_p}\left(\theta_{\widetilde{i_p}j}-b_{i_pk}\right)\right] \quad (4)$$

where $a^*_{i_p}$ is the discrimination parameter in GS format, and $b_{i_pk}$ is the location parameter for category k. This latter parameter represents the point in the latent trait continuum where the probability of agreeing with at least as much as stated by k equals .5.
When m = 1 there are two response categories, and Equation 4 is reduced to the 2PL model expressed in Equation 1 with $b_{i_p1}=b_{i_p}$. When m > 1, we may consider a recoding of the responses, for a given
arbitrary category k’ between 1 and m (both included), such that the new value is 1 if the response is equal to k’ or higher, and 0 otherwise. This recoding implies a representation of the responses
to the Likert-type items as a dichotomous format, with a response probability given by Equation 1. According to the GRM, when dichotomizing a GS item, its parameters are expected to remain unchanged
(^Samejima, 1968). Therefore, we can assume the parameters of a bidimensional FC block to be equivalent to the parameters of its constituent items, as expressed in Equation 2.
We must make a caveat here, since none of the $b_{i_pk}$ parameters can be considered equivalent to the actual $b_{i_p}$. As we stated before, the latter represents the point in the latent trait continuum where $P(X_{i_pj}=1\mid\boldsymbol{\theta}_j)=.5$ in Equation 1, when such a statement is presented as a dichotomous item. When we perform a dichotomization of a GS format as stated above, the k’ threshold category chosen does not necessarily imply that its $b_{i_pk'}$ parameter coincides with the parameter from the dichotomous presentation as in Equation 1. We consider however that the equivalence given between the dichotomized GRM and the 2PL models justifies considering and assessing the linear combination of the item category location parameters as a proxy for the block intercept parameter.
In conclusion, testing the invariance assumption of the MUPP-2PL in a bidimensional FC block implies testing three hypotheses of parameter equality: of the discrimination parameters of the two
items, and of the block intercept parameter with the corresponding linear combination of the item parameters, which can be tested for each of the m values of k’. These tests can be carried out by means of a
likelihood ratio test (^Fisher, 1922), comparing an unconstrained model with the nested, constrained one obtained by applying the corresponding equality restriction. As is well known, the resulting test
statistic is asymptotically chi-square distributed under the null hypothesis (^Wilks, 1938), in this case with one degree of freedom. This enables a very simple procedure for testing the invariance
assumption, based on a well-known and reliable methodology. In order to put this method to the test, and provide evidence regarding the invariance assumption, the following section exemplifies its
application and results.
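The test itself is simple to compute. A minimal sketch follows (Python, standard library only; the log-likelihood values are hypothetical). For one degree of freedom, the chi-square survival function reduces to erfc(√(stat/2)):

```python
import math

def lr_test_df1(loglik_unconstrained, loglik_constrained):
    """Likelihood ratio statistic 2*(llU - llC) and its p-value under a
    chi-square distribution with 1 degree of freedom (Wilks)."""
    stat = 2.0 * (loglik_unconstrained - loglik_constrained)
    # For df = 1: P(chi2_1 > x) = 2 * P(Z > sqrt(x)) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

# Hypothetical log-likelihoods of the unconstrained and constrained models
stat, p = lr_test_df1(-1520.3, -1523.8)
print(f"LR = {stat:.2f}, p = {p:.5f}")
```

In practice the statistic would come from the fitted Mplus models (with the appropriate scaling correction, as discussed later); this snippet only shows the generic unscaled computation.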
We used a dataset consisting of responses to a GS questionnaire and a multidimensional FCQ. Both instruments shared a large number of items and were answered by a common group of participants, so
they were suitable to apply the invariance assumption tests. The contents of this dataset are described below.
Graded-scale questionnaire. It consisted of 226 GS items presented on a five-point Likert scale (completely disagree – disagree – neither agree nor disagree – agree – completely agree); there were therefore m
= 4 category thresholds. The items were designed to measure the dimensions of the Big Five model (^McCrae & John, 1992). An example of an emotional stability item is as follows: “Using the
previous five-point scale, indicate your agreement with this statement: ‘Seldom feel blue’.” Example statements for the other four dimensions are these: “Make friends easily” (extraversion), “Have a
vivid imagination” (openness to experience), “Have a good word for everyone” (agreeableness) and “Am always prepared” (conscientiousness). The five items are selected from the Big-Five IPIP Inventory
item pool (^Goldberg, 1999).
Forty-four items were applied for each of the five traits. One hundred and twenty-two of these items were direct (i.e., positively keyed), and 98 were inverse (i.e., negatively keyed; see ^Morillo, 2018);
polarity was intended to be balanced across traits, with 22 to 26 direct and 18 to 22 inverse items per trait. The remaining six items were directed items (e.g., “Select the
disagree response”), applied to control the quality of each participant’s responses (^Maniaci & Rogge, 2014). The items were distributed in two booklets, with 113 items each, with the directed items
at positions 26, 57, and 88 and 23, 55, and 87 in the first and second booklet, respectively.
[Table note. ES = emotional stability; Ag = agreeableness; Op = openness; Co = conscientiousness. 1 Significant at α = 2.07 × 10^-4.]
Forced-choice questionnaire. A third booklet consisted of 98 bidimensional FC blocks. Of these, 79 were made up of items from the GS questionnaire (except for 13 pairs, which contained a direct
item from the GS booklets paired with an inverse item not included in that instrument). There were also sixteen additional blocks made up of items from a different application, and three directed
blocks (at positions 25, 43, and 76) to control for response quality. Table 1 summarizes the frequency distribution of the FC blocks by pair of traits. Out of the 79 blocks with items from the GS
questionnaire, 24 were formed by two direct items (homopolar blocks); the remaining 55 were heteropolar, consisting of a direct and an inverse item, with the direct item always in the first position.
An example of a homopolar block tapping emotional stability and extraversion would be as follows: “Choose the item in each block that is most like you: ‘Seldom feel blue/Make friends easily’.” Both
items have been selected from the Big-Five IPIP Inventory item pool (^Goldberg, 1999).
Seven hundred and five undergraduate students (79.57% female, 20.00% male, and 0.43% missing; age mean and standard deviation, 20.05 and 3.33 respectively), from the first and third courses in the
Faculty of Psychology of the Autonomous University of Madrid, answered the GS questionnaire on optical mark reader-ready response sheets. Arguably, this convenience sample might not be the most
adequate for a personnel selection context. However, as commented later (see Discussion), a comparison between the GS item and block parameters is more straightforward in a student sample, in which
the role of impression management is expected to be weak. Therefore, we deemed it appropriate to use this dataset for our purposes.
Eight participants were dropped due to having too many missing responses (more than 68), and two more because of failing the directed items (more than one error). Of the remaining 695, 396 (80.36%
female, 19.13% male, and 0.51% missing; age mean and standard deviation, 20.86 and 3.21 respectively) also responded to the FCQ on another optical mark reader-ready sheet. No participants were
dropped due to missing responses (only 12 vectors had just one missing response), but four were deleted due to failing one or more directed blocks, leaving 392 valid participants. There is a
noticeable reduction (313) from the initial sample size (705) to the final one (392). Out of these 313, most of them (299) are missing by design cases, produced because some of the first sample
participants were not assessed with the two specific questionnaires required for the current study.
Data analysis
The questionnaires were analyzed with a multidimensional IRT model using the robust maximum likelihood (MLR) method (^Yuan & Bentler, 2000) for fitting the item and block responses altogether. The
64-bit Mplus 7.0 software (^Muthén & Muthén, 1998–2012) for Windows was used for all analyses. The MplusAutomation 0.7-1 package (^Hallquist & Wiley, 2018) for 64-bit R 3.4.3 (^R Core Team, 2017) was
used to automate some of the analysis procedures.
We initially tried to fit a model with independent uniquenesses and all the Big Five traits. However, the full-dimensional model had convergence issues with extraversion. Therefore, the items tapping
extraversion and the blocks containing an extraversion item had to be dropped. The responses to the remaining 47 blocks and the 86 GS items included in them were finally fitted to a model with the
remaining four dimensions. The empirical reliabilities (Equation 20.21; ^Brown, 2018) of the emotional stability, openness, agreeableness, and conscientiousness scores were .89, .84, .79, and .82 for the GS format,
and .85, .64, .65, and .78 for the FC format, respectively.
A constrained model was fit for each possible restriction given by the invariance assumption: equal discriminations for a block and each of its corresponding GS items, and a constraint on the block
intercept and item parameters given by Equation 3 (using the four possible values of k’). This would result in six contrasts per block, making a total of 282 constrained models. However, given that
the GS-item parameters were not available for 8 items (out of the 13 taken from a previous application as explained above, after excluding five of them measuring extraversion), only the first
discrimination parameter of the corresponding blocks could be tested for invariance, and therefore 242 constrained models were estimated.
For each of the constrained models, a likelihood ratio test against the unrestricted model was performed as follows: a strictly positive χ^2[S-B] statistic (^Satorra & Bentler, 2010) was first
computed using the procedure explained by ^Asparouhov and Muthén (2010). Using a confidence level of .05, the Bonferroni correction for multiple comparisons was applied to these tests, giving a value
of α = .05/242 ≈ 2.07 × 10^-4. The parameters for which the p-value of the likelihood ratio test was less than α were considered non-invariant.
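The corrected per-test threshold is trivial to reproduce (Python sketch):

```python
def bonferroni_alpha(alpha, n_tests):
    """Per-comparison significance level under the Bonferroni correction."""
    return alpha / n_tests

# 242 constrained models tested at an overall level of .05
print(bonferroni_alpha(0.05, 242))  # ≈ 2.07e-4
```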
The correlations of the block parameter estimates with their item counterparts are given in Table 2, along with the descriptive statistics of the deviations (Correlations through MRE columns). The Mean
error column is the mean difference of the block parameter estimates with respect to the GS format estimates (or, for the intercept parameters, with respect to the expected value in the corresponding block,
as a linear combination of the item parameters). The MRE column shows the “mean relative error”, which is the mean error of the block estimates relative to the GS item estimates. Negative values, as in these
cases, imply a general underestimation of the absolute value of the parameters in the FC format.
[Figure 1 note. The linear regression trend is shown in continuous light grey. Non-invariant discrimination parameters are annotated with the block code.]
The last two columns in Table 2 summarize the invariance tests. The Count column is the absolute frequency of parameters for which the null hypothesis of invariance was rejected; the % column shows
the corresponding percentage, relative to the number of parameters of each type. The results of the invariance tests can be seen in detail in Table 3.
Discrimination Parameters
The correlation between both formats was .93, indicating a high correspondence between them. The mean error and mean relative error were negative, implying a slightly negative bias and a general
underestimation of the parameters, respectively. That is, there was a slight shrinkage of the parameters towards zero in the FC format. These effects can be appreciated in Figure 1: the regression
line intersects the vertical axis slightly below zero and is a bit closer to the horizontal axis than the bisector, which would be the expected regression line in the absence of any type of bias. In
the lower right quadrant, we can also see that three of the items reversed their sign when paired in an FC block. Their values in the GS items were already very low though (they were not
significantly different from 0), so this was likely due to estimation error.
Despite the deviations from the item estimates, the discrimination parameters were largely invariant. Only two of the null hypotheses of invariance (out of 86) were rejected. These non-invariant
parameters are in the last two blocks analyzed, one in the first position and the other in the second position. These results provide strong evidence that the discrimination parameters are invariant
between the GS and FC formats.
Intercept Parameters
The correlations of the intercept estimates with their values predicted from the items were also very high in all cases: all were above .90, except for the predictions using the first
threshold category. The third threshold category yielded the highest correlation with the block intercept estimates. The mean error was consistently lower for higher threshold categories, but
always positive, in contrast to the discrimination parameters. The mean relative error was negative for all thresholds, revealing a generalized underestimation in absolute value, similar to that found for
the discrimination parameters. Among the intercept estimates, the third category yielded the lowest mean relative error, followed by the fourth.
Figure 2 shows how intercept parameter estimates resembled the values predicted from the GS format items. These scatter plots show the tendency of the intercept estimates to be shrunk towards 0 with
respect to their predicted counterparts from the items. Also, we can clearly see that the block intercept estimates were better predicted by the third and fourth threshold categories, as seen in
Table 2 as well.
The intercept parameters were non-invariant with respect to their values predicted from the GS format estimates in 10 to 15 cases, depending on the item threshold category considered. The fourth category had
the lowest number of non-invariant parameters, followed by the third, with 12. The second had the highest number. The intercept estimate was invariant for all threshold categories in 17
out of the 39 blocks for which the intercept parameter could be predicted (43.59%). In only three of them was the intercept parameter found to be non-invariant for all categories. The rest of
the blocks had non-invariant intercept parameters in one to three threshold categories.
From the results above, we can conclude that the FC format generally satisfies the invariance assumption of the MUPP-2PL model. Apart from the high rates of invariance, we found high correlations
between the parameters of the FC and GS formats, although there seemed to be a general trend of the FC format estimates to be lower in absolute value.
Some of the parameters failed to pass the invariance test, yielding evidence of violations of the invariance assumption. The intercept parameters were the most affected, whereas only two
discrimination parameters were non-invariant. Due to this low rate of non-invariance, hypothesizing about causal phenomena would be highly speculative.
The intercept parameters showed some recognizable patterns of non-invariance. Figure 3 plots the deviation of the non-invariant intercept estimates with respect to their values predicted from the GS-item
estimates. The figure shows that most of the intercept parameters had a positive deviation regardless of the threshold category. The fourth category was an exception, with an equal number of
positive and negative errors among the non-invariant parameters.
Only a few estimates deviated from their predictions from the GS format consistently. Some properties of the items seem to be affecting the invariance of the intercept parameters. For example,
emotional stability and openness seemed to be more involved in the non-invariant intercept parameters. Also, there seemed to be an association between deviation direction and block polarity for this
threshold category, as most of the negative errors were in homopolar blocks (i.e., with a direct item in the second position), while most of the positive errors were in heteropolar blocks. Moreover,
violations of invariance were more prevalent with emotional stability items in the second position and openness items in the first one, suggesting a complex interaction effect among the two latent
traits, the item position within the block, and the item and block polarities. ^Morillo (2018) provides an extensive discussion around these violations and the factors that likely induce such a lack
of invariance in the parameters.
Implications for the Practice of Personnel Selection
The fact that invariance between the GS and FC formats can be safely assumed has great practical relevance: it enables the practitioner to build multidimensional FC instruments based
on the parameter estimates of the individual items. The designer need only be careful to avoid certain pairings that could lead to violations of invariance, as these would likely reduce the
validity of the measures. A good starting point is the recommendations by ^Lin and Brown (2017): balancing item desirability indices and avoiding pairing items with shared content and/or
conceptually similar latent constructs. However, these recommendations require the items to be calibrated on a social desirability scale, and their contents to be submitted to a qualitative analysis.
Also, we believe that further research, probably in experimental settings, would help identify other conditions that may produce non-invariant parameters.
The process of constructing a multidimensional FCQ is thus not as straightforward as simply pairing items tapping different latent traits. Nevertheless, the practitioner can rely on GS estimates of
the item parameters to assess a priori the potential validity of the new instrument. A procedure for FCQ construction based on this principle could be outlined as follows: (1) calibrate a set of
items in GS format (or use the estimates from a previously calibrated instrument); (2) decide on the design criteria (e.g., balance of trait and polarity pairings, pairing exclusions based
on expected violations of the invariance assumption, etc.); (3) pair the items into FC blocks according to those criteria; (4) apply the FC instrument to an equivalent sample; and (5) calibrate
the FCQ on the new sample data and obtain the latent trait scores. If properly designed, the new FC instrument should have parameters comparable to those of the original items and thus similar validity. Note,
however, that this would not allow applying the method outlined here for testing the invariance assumption. For that to be possible, the newly created FCQ would need to be calibrated on the same
sample as the GS items, which will generally not be possible in an applied context. Nevertheless, the parameter correspondence could be examined using multivariate descriptive methods.
The research design used in this manuscript had some issues that did not allow us to accurately separate the effects of the latent trait, polarity, and item position within the block. Nevertheless,
taking into account the possible violations of the invariance assumption should be paramount for research purposes. Further studies should aim to overcome two limitations: (1) to design FCQs that
balance the order of the inverse item in heteropolar blocks and (2) to calibrate the parameters of the whole set of items in both formats. Using a different response format for the items could also
be advantageous, such as an even number of GS response categories, or a dichotomous format. More complete response vectors would also be desirable, as the present one lacked a large number of
responses for the FC blocks in comparison with the items.
This study has some other limitations worth highlighting. It is especially worth pointing out the problems found when estimating the models with the “extraversion” trait. We could not find
convergence due to the latent correlation matrix becoming singular, as the correlations between the dimension of extraversion and the others approached 1. This fact may suggest some property of the
multidimensional FC format affecting specifically this trait. Whatever the actual explanation is, it should not be overlooked if we want the results to be fully extrapolated to the Big Five model,
and to other theoretical models the FC format may be applied to.
The use of the response dataset may also be criticized, as it had been obtained from a sample of students. The reader should also note that the invariance assumption was tested under a situation of
honest responding or “direct-take”. Of course, a high-stakes situation could imply stronger violations of invariance than the direct-take one. The fulfillment of the invariance assumption in an
honest test-taking context is a necessary condition, as the process of pairing stimuli must be validated beforehand. However, this condition is not sufficient for an impression management context. In
a high-stakes context, other factors accounting for the impression management attempts may emerge, adding further complexity to the measure and its treatment. Further studies applying the methodology
outlined here will allow generalizing these results to actual personnel selection settings.
Finally, the application of the questionnaire to a calibration sample should provide evidence that the response data to a multidimensional FCQ are valid. Although the invariance assumption discussed
here is a necessary condition, it does not guarantee the validity of the FCQ latent trait scores. This issue has not been investigated for the MUPP-2PL model, but there is evidence of latent score
validity in other FC formats and models (^Lee, Joo, Stark, & Chernyshenko, 2018; ^Lee, Lee, & Stark, 2018).
This study introduces a methodology that allows testing the assumptions of the MUPP-2PL model for paired FC blocks. The application of this method may open up further research lines, as the previous
discussion suggests. More importantly, we have provided evidence that the invariance assumption between the GS and the FC formats holds to a large extent. This finding provides the practitioner with
tools and criteria to seriously consider the design of multidimensional FC instruments to measure personality and other non-cognitive traits of high importance in work-related assessment settings.
Particularly, our results have practical relevance for building multidimensional FCQs by using previously calibrated items. Evidence of the invariance assumption legitimates the design of FC blocks
using the already known parameters of the items as a proxy for the block parameters. Given the large number of applications of personality questionnaires in GS formats, this obviously implies a
considerable cost reduction.
We have outlined a general procedure based on those principles, giving some guidelines to mitigate possible violations of the invariance assumption. Also, assuming invariance may allow using the GS
format estimates to optimize certain design criteria for FCQs. Such criteria may even be implemented in automatic assembling algorithms (^Kreitchmann, Morillo, Ponsoda, & Leenen, 2017; ^Yousfi &
Brown, 2014), making the design of FCQs more efficient and cost-effective.
It's that time of year again for my unit circle projects!
My requirements are that it can't be made out of paper and has to contain radians, degrees, and ordered pairs.
See my previous posts here: Unit Circle Art
, and
This is the rubric I used to grade them and they later taped into their INB.
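If you want a quick answer key for checking the ordered pairs on the projects, here's a little Python script (totally optional, just a sketch) that prints each standard angle in degrees, radians, and as an ordered pair:

```python
import math

# Standard unit circle angles in degrees
DEGREES = [0, 30, 45, 60, 90, 120, 135, 150, 180,
           210, 225, 240, 270, 300, 315, 330]

for d in DEGREES:
    r = math.radians(d)
    x, y = math.cos(r), math.sin(r)  # the ordered pair on the unit circle
    print(f"{d:>3} deg  {r:6.4f} rad  ({x:+.3f}, {y:+.3f})")
```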
I've always taught special right triangles by comparing similar triangles, writing proportions, and cross-multiplying. Last year I tried this investigation for the first time, which also doubled as a
project, with mixed results. I tried it again this year but without the project piece. And I'll be honest, this year I walked around giving some hints and last year I didn't help at all. Why? Because
I felt like my class was so needy and had to start learning to be more independent. This year I didn't let them talk until they had finished the whole page, front and back. Then I asked them to
compare with at least two other people. That part went really well.
Then we went on to basic INB notes. Some students really took the lead in shouting out what to do. While it wasn't cross-multiplying, they were using patterns and it seemed to work.
And then...
Is it weird that y doesn’t equal something with a square root of 3? Since it’s a 30,60,90? Also why does putting it over square root of 3 work? Or is this wrong pic.twitter.com/o1yXYwmXUK
— Elissa Miller 🦄 (@misscalcul8) September 3, 2018
A student asked this question on Friday and I told him I would find out and explain Tuesday.
Which led me to this:
I used these from @MrsNewellsMath last year and my kids grasped it! https://t.co/thop7WWrmR
— Breanna Davis (@MissDavisSCSD) September 3, 2018
I really loved her materials but I had already 'investigated' the patterns and already had INB notes. What to do....
Strips to the rescue!
We made a Math Tools pocket at the beginning of the year and added calculator strips. I turned her charts into strips and we used them to practice with dry erase markers and then write in the INB.
I color coded the 'levels' that Katrina mentioned in her post.
Using the tic-tac-toe method, we decided first which column the given information goes in and then how to solve for x. This really helped them see when we need to multiply and when to divide. Once we
had x then we could fill in the other two columns.
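For what it's worth, here's the algebra behind the student's question (just the relationships written out, not the tic-tac-toe structure itself): in a 30-60-90 triangle the sides opposite 30°, 60°, and 90° are in the ratio x : x√3 : 2x, so when the given side is the long leg, solving for x means dividing by √3, and that's why the answer can come out without a radical:

```latex
x \;:\; x\sqrt{3} \;:\; 2x
\qquad \text{(sides opposite } 30^\circ,\ 60^\circ,\ 90^\circ\text{)}
\\[4pt]
\text{Given the long leg } y = x\sqrt{3}:\quad
x = \frac{y}{\sqrt{3}} = \frac{y\sqrt{3}}{3}
```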
Two students figured out shortcuts to the patterns without doing the work. I explained to them that that was my goal but when I led with that in the past, everyone would get confused and so I need to
teach a structure that EVERYONE can fall back on.
I felt like this really cleared things up from where we left it on Friday. Next time I teach it I will do the strips right after the investigation and then they can use the strips as a reference for
the INB notes.
Thanks Katrina!
Virtual Conference on Mathematical Flavors
Your teaching practice has an impact on how your kids think about mathematics. Our classrooms are little bubbles and while kids are sitting in them, they are picking up all kinds of signals about
mathematics. You might have students leaving a year with you thinking mathematics is collaborative, or that it requires taking risks, or that it is hard but hard is okay. We all have our own unique
flavor of mathematics that we are imparting to students through how we orchestrate our classes day in and day out. So here’s the formal prompt:
How does your class move the needle on what your kids think about the doing of math, or what counts as math, or what math feels like, or who can do math?
It took me a while to wrap my head around this concept and a lot of different 'flavors' ran through my head. But then I thought about what 'leaks out' of who I am, what students remind me of after
they graduate, and what they write to me in their semester reflection papers.
I think my flavor is confidence.
• Confidence in your own personality and being 100% on brand. It took me ~~decades~~ years to cultivate my own confidence and now sometimes I think I might be a little on the arrogant side. lol I
model this especially through my two nice things procedure- they hate when I make them say two nice things about themselves and I always give a little speech about how you know yourself better
than anybody else and you should know more nice things about you than you know about anybody else. I feel like I show this through my work ethic because students come to me with new ideas. I
think that shows they know that I go above and beyond in all aspects of my job. Students will tell me when they see chevron stuff they think I should buy, they send me pinterest ideas, they tag
me in memes...I think that by being 100% myself, I give them permission to be 100% themselves.
• Confidence through consistency. When you have students years in a row, I think this comes kind of naturally but I think students enjoy math with me because they know my rules, procedures and
routines. They know I'm going to show up every day and they need to also. I have so many kids who come back after doctor appointments and such 'just for my math class'. They also know they are
working every day, all hour, and no free days. Even though they'll never admit it, I think they enjoy knowing they are going to work and learn on a daily basis. Or at the very least appreciate it.
• Confidence through finding mistakes. I make an emphasis on finding your own mistakes and fixing them and I think that builds a sense of independence. I make a big deal of not erasing all your
work and fixing small mistakes. I post answer keys often so they can check their work and work at their own pace. This helps them realize how they learn and that they don't need me for every
little thing. I also hope those skills transfer over to personal life too.
• Confidence through freedom. The culture of my classroom is very laid back; we make a lot of jokes, students don't have to ask permission for little things, I have a lot of supplies available that
they are free to take, we have a lot of random conversations, etc. Students who finish early casually wander over to students who aren't done yet or struggling and help. I love this because they
are hearing different perspectives and the freedom to decide which way works for them. And freedom to learn from someone besides the teacher.
• Confidence through creativity. While I am very consistent and routine, students also know I'm going to take a creative approach to things. They might not know exactly what to expect but they know
I'm not just going to take the normal route. When they start to expect that, I think they raise their own standards as well. Sometimes their expectations are higher than mine are and we both take
turns rising to the occasion. There is always a way to express yourself.
• Confidence through risk taking. While I love trying new things and I think my ideas are awesome, I am also really good at admitting failure. I don't think that's something students are
necessarily used to seeing from teachers. Right now I'm having really good luck with students being willing to shout out answers, even wrong ones and I hope that in some small way, it's because
I've been willing to be wrong.
• Confidence through problem solving. While I wish I could say that I mean this in a purely mathematical way, I don't. I mean this in a more practical way. I ask students for feedback often and
when I see problems I brainstorm with them on solutions. They constantly see me trying to improve and make things more efficient. After I admit failure, I want to fix it.
• Confidence through persistence. I don't give up on keeping them from giving up. I don't give up on ideas that fail. I don't give up on trying to change their attitude and feelings about math. I
don't give up on making them say nice things. I don't give up on positive vibes.
• Confidence through showing up. I show up to work every day. I show up for them when I can tell they are upset, mad, or panicking. I show up when their grades start going downhill. I show up when
they've just had their hearts broken. I show up to their games. I show up for them and I show up for me.
I don't know why it is so hard for me to be consistent in blogging my one good thing when I can tweet it. But I thought I would do a mash up of all the good things from the past three weeks that I
can remember and then start fresh next week!
• A senior that's not in my math class this year has been calling me on his teacher's classroom phone between 7th and 8th hour and just chatting for the 3 minute passing period. I don't know why
and he gave me a lot of grief last year but...I guess he misses me? lol
• Today a senior I don’t have in math class came to me for help with his online college math class. As he walked out of the room he turned back and said “You’ll be happy to know I even used
• Doing function composition in Algebra II a girl said "I'm really enjoying this. I just wanted you to know that."
• The students like function composition with numbers to plug in better than just simplifying functions, so on their practice the first 6 were simplifying and the last 3 were with numbers. One
boy said that was like the 'dessert'. Another student told me he had done the last 3 first and the boy says "You ate dessert before dinner?!" As I was walking around helping students, a student
asked for help and then said "Ok, you can go back to having dinner with T*****." Lol
• Former student who sat through trig senior year doing nothing and failed-getting zeros (it wasn’t required), messaged me today to ask if she should do an 8 month program or 2 year associates
degree for medical assisting.
• Reviewed metric conversions, fractions, and percents with Algebra I freshmen... overheard "Ms. Miller makes it seem so easy"
• I got a new document camera and it's SUPER awesome and I will be getting a new ipad soon!
• I use google photos all the time but it just occurred to me to take pictures of student work as I walk around, sync it, and then display it on the SMART board
• I've heard some kids talking about working on Delta Math and saying it's fun or they did it at home or they're ready for week 4 or they like it so BIG WIN
• A student who has anger problems is behind on some assignments, apparently got in an argument with his mom about Delta Math, and had a bunch of excuses for another teacher about the work; when he came to class today he asked for help a few times and thanked me each time. His mom e-mailed me tonight to tell me he was half done with Delta Math.
• My freshmen are ON. IT. I love them already.
• My 'lower' Algebra I class has only 6 students in it, but we are already vibing; they shout out answers and if they're wrong they just shout out some more. They are doing so good and I hope it lasts
• The students asked me if I was going to our county fair to the demolition derby; I haven't been since I was probably a teenager myself. My sister wanted to go and one of my students was driving
in it so I went. I saw a few students as we walked in but when we got to the grandstand a whole giant group of students from my school were sitting together and they literally cheered for me as I
walked by. I mean.....I can't even!!!
• So far we haven't had any technology problems!
• Each day flies by so fast; I have no classes I dread, I have no troublemakers, and I just enjoy it!
• I've gotten a lot of positive feedback on Twitter and Facebook on my #teach180 posts
• Just kids who tell me they love me and give me fist bumps and high fives every day
This is my purpose and my passion- these are my people.
I've used this activity for the past few years, using foam circles from Dollar Tree that I labeled with sharpies and stuck up all over the room.
My ceilings are too high to reach and I felt like that always threw them off. This year I got the bright idea to cut up tissue boxes and use blank yard sale stickers.
I gave them the worksheet with a picture on it too and asked them to make sure the stickers were in the right place. Sadly they were nowhere near sticky enough and repeatedly fell off. Now I feel like I need to laminate some circles and hot glue them to the box. Any better suggestions?
This was the last activity before their quiz. After like 6 DAYS of points, lines, and planes, the grades were still bad. I think the highest was an 86% and the majority of the class was between
50%-75%. Why is this so hard? It's like the more time I spend, the worse it gets. I hate that it's the first lesson of the year because it drags on forever, they get a bad grade, and then they decide
that geometry is too hard and they're going to fail.
Moving right along....
I used this 'number line' to introduce absolute value equations.
Questions I asked:
1. What is something weird or unusual about this diagram?
2. What is something familiar about it?
3. What kind of math thing could it represent?
4. If the pink magnet was a number, what would it be?
5. What is three magnets away from the pink magnet?
6. Why are there two possible answers?
7. What is two magnets away from the star?
8. What could the magnets represent?
9. Can you have a negative distance?
10. What is the definition of absolute value?
This was done in about 2 minutes and then we jumped right into INB notes.
And here's a fun video of us playing Grudge Ball but I call it The X Game because there are no balls and there are X's.
Any time they run to the board, it's a win. =)
I'm pretty sure I got this original task from @pamjwilson but I can't find the original file or link. I found this one which is where I got the questions from.
I tried this before with my own family tree but it just brought on way too many irrelevant questions, so my friend @howie_hua suggested I use the Kardashian family. I knew there was a big famous
family that was obvious so thanks for helping me out.
I posted this photo in Google Classroom and had the students leave it open on their chrome books:
Numbers 6-10 is where the real meat is; here we have to discuss what comes first and where to start.
The above questions helped them establish that order matters.
As students were writing their answers on their desks, I went around taking pictures, synced them with Google Photos, and then was able to immediately show them their classmates' responses. #winning
Next we labeled index cards and baby post-its.
They are color coded on purpose- these are the three colors of baby post-its I had. Lol.
When I did it in class, we wrote the f(x) on the lined side of the index cards but then we had to keep flipping them over so I edited the slides to put the label in the top left corner.
Then I showed them a problem like this (the answer doesn't show at first):
And we talked about what color post-it to use and which index card to stick it on.
Then they would simplify it on their desk and I would click the slide to show the final answer.
Tomorrow I will follow up with this function composition match without the index cards which also throws in some square roots and putting a function inside itself and finally INB notes.
Here's the powerpoint:
Both of my 9th grade Algebra I classes had 8th grade algebra so the majority of my course is review.
I started the year with solving equations by using Katrina Newell's equation flip book. (I loved that this included infinite and no solutions as well as fractions and multi-step equations with
variables on both sides!)
I used my new document camera to show them how to put it together and we used my mini staplers for the first time ever- it went pretty smoothly.
Next we followed up with an equation card sort.
This is the first time they've ever done a card sort (to my knowledge) so here's how I introduced it.
"Get out all of the pieces that have numbers on them."
"Can you tell which piece is the original problem? Since you all have different problems, what is a hint you could give to pick the original problem?" (It's the longest one.)
"Now can you put the pieces with numbers on them in the order they are being solved?"
"Now look at the pieces with words. The word Given should go next to the original problem. Now can you put the words in order of what's happening in the problem?"
Then I went around giving feedback and checking answers. Each group also had one extra step that didn't belong.
One bag had two subtraction property pieces and it did not sit well with them that you could do that twice in a row until I pointed to each step of the problem and asked how to get there from the line above.
Each group rotated until they had done all sorts.
Then we used dice to play this partner dice rolling activity from All Things Algebra.
Last year I finally got the genius idea to make digital answer keys for each INB page. While that takes a while obviously, I was writing them out every year to have an answer key for absent students.
This year, I only have to do the first couple in each unit. I also thought I would upload them somewhere in Google Classroom for students to access.
I also never thought of saving them as a pdf so the formatting won't get messed up. So this year I will save them as a pdf and upload to my google drive. I'm thinking I will create a google doc or
spreadsheet with the pdf links and post it in Classroom. Then there is only one post for students to look for and I can update after each lesson with the INB answer key pdf.
The big project I wanted to do over summer but never got to (read: procrastinated) was to make a video of myself explaining and writing out the notes for each skill. But multiply that times 4 preps and
we're talking at least 120+ videos.
Ov. Er. Whelm. Ing.
What I did do was update all my powerpoints and then saved each slide as a JPEG. My idea is that I can use the Show Me app on my ipad, insert the pictures, then record my voice talking while I fill
in the notes with a stylus.
I've never actually used the ShowMe app but I think I can get a link to the video and add it to the previously mentioned doc/spreadsheet. So there would be a video and pdf answer key for each skill.
It sounds simple in my head but so time consuming. I thought maybe I could do it after I teach the lesson so it will be fresh and also spaced out over the year instead of trying to do them all at
once in the summer.
But I don't wanna. Lol
This is Skill #1 in geometry for me and we can all agree that it's super important and full of so many little details. Over the years I have come up with so many ideas to tackle this skill with and I
still don't really feel successful.
One of my favorite activities is what I originally called my hands-on naming review. I made segments and arrows out of pieces of pipe cleaner and little fuzzy balls and cut letters written on
construction paper.
This year I tried the same activity with play dough and letters I cut out from my Silhouette Cameo. The students really enjoyed it but it took much longer for them to roll up the play dough and make
all the pieces. I felt like they weren't really paying attention to the symbols or notation and it was like pulling teeth to get them to refer back to their notes.
So I thought I would share what I did and see if you have any feedback. I need a better flow and to shorten up how much time I spend but hopefully in a more efficient way.
First I did blind sketch; students describe a drawing to the other person and they draw it without seeing it.
We made a list of all the vocabulary words they used while describing the pictures.
Then I had them sort this cut up answer key from a graphic organizer.
The next day I passed out this page for their INBs. The left side is the 'answer key' to the card sort so they could compare their work. The right side we filled in together.
Here is the hands on naming review:
And some play dough pictures:
Next I plan to do this worksheet activity with the tissue box model below.
I thought I would follow up with a Kahoot and another worksheet that I don't have a copy of.
I've also used this in the past:
What am I missing?
What am I not doing enough of?
What is the magic key to unlocking the unicorn dust of points, lines, and planes?
It's always hard for me to transition from beginning of the year fun stuff and procedures and routines into the first skill in unit 1. Even though I already have stuff I can use, I always feel like I
need to change something up or start off with a bang.
One of my goals for the beginning of the year was to use concept attainment as much as possible. I decided to make a year long powerpoint for each course so I could share them at the end of the year.
And then I forgot that that was one of my goals.
I thought of it last minute and hastily threw together two slides to introduce function operations in Algebra II.
Basically I displayed this slide and said I have these two functions on the left and the end result is the function on the right. What happened?
I just made these up and didn't even realize that I got the same answer for the last two examples.
I think I should have made them all positive to make the patterns easier to see. When I first showed this to the class, the bottom answer said 2x - 2 which definitely threw them off.
Should I have done 4 different examples on one slide for each operation? Or four different slides for each operation? I felt like doing addition and subtraction was enough to lead into the fact that
we can combine functions in different ways.
They thought both slides were just combining like terms or that they were being set equal to each other and then doing opposite operations.
We got through it but it was definitely not a smashing success. Then I basically went straight into lecturing and then they just worked a bunch of examples.
Not exactly what I was going for.
I learned concept attainment as a column of examples and a column of nonexamples and they look for patterns. I couldn't think of a good way to do that for function operations so that's why I
explained it and have no titles at the top.
What could I have done to make this better?
How do you introduce function operations?
I don't know how your school works but at ours new teachers are evaluated once a year until they get tenure and then every other year after that. We also use the Charlotte Danielson rubrics. This is
my year to be evaluated and I have two ideas that I need to flesh out for the student growth section.
The only tracking we've done this year is to separate the freshman class into higher and lower students. When there are only 7 lower students, I think it would have been just as well to mix them with the others. But...not my decision.
Anyway, I basically want to compare their growth over the year and hopefully show more growth in the lower class than the larger. That one's pretty straightforward I guess. Although I hate that it's
based only on EOCs but again...not something I can control.
The other one is lofty and maybe not possible. I'm trying out whiteboarding this year in Geometry only and I'd like to somehow measure something and look for growth. My principal doesn't want me to
compare the results to last year because the students are different and it's comparing apples and oranges. I don't want to use a control group because when I want to try something new, I want to do
it for everyone and also with four preps, I don't want to add extra prep.
I posed the question on Twitter and got these two responses:
hmm, could you collect some sort of student survey data? perhaps something about rating themselves as mathematical communicators before and after deployment of whiteboards?
— Geoff Krall (@geoffkrall) August 23, 2018
Perhaps how "fluid" their thinking was while using the whiteboards (the word "fluid" may require some unpacking )
— Geoff Krall (@geoffkrall) August 23, 2018
What about their confidence and willingness to make mistakes while trying something new?
— Jae Ess (@jaegetsreal) August 23, 2018
Now I love a good survey and a good Google Form so this sounds so great. What else can I measure? Should I have students rate themselves 1-5 for each so I have actual numbers for comparison?
Better make a list:
• Mathematical communicator
• Fluid thinking
• Confidence
• Willingness to make mistakes while trying something new
• Ability to follow directions
• Willingness to work with someone new
• Willingness to take instruction from someone
• Ability to disagree with ideas without disagreeing with a person
• Math ability
• Interest in math
What other things would you expect whiteboarding and discovery learning to affect?
In my own personal effort to #ExpandMTBoS, I'm continuing a category of blog posts called 'How To' so I can share the strategies behind the resource. I hope new and veteran teachers alike can find
something useful. Click on the tag to the right for more posts!
Last year I started using Google Docs for the first time for lesson planning. My lesson plans are fairly simple but I have four preps. So basically I make a table and type brief descriptions.
I like that on Google Docs I can share it with my principal and also create a table of contents so he can click ahead to the correct week. Link here.
This year I also decided to create a spreadsheet that I'm calling a 'daily log'. I plan to specifically list what I did in class each day. I'm curious to see how it actually lines up with my lesson
'plans'. Link here.
Now tonight I just realized that I should start uploading resources to Google Drive and linking them. Then I will have built in plans for next year! Which reminds me of another idea where someone
mentioned saving their INB pages as pdfs so the formatting doesn't get messed up. I haven't noticed that happening but it has happened with some of my INB answer keys.
What documents do you use to help you plan?
A Tiny Pocket of Space
July 11, 2013
The Science of a Miniscule Sample
Verification is the science (and art) of asking the question, “What could possibly go wrong?”
If you’ve done a good job – you think – then you would expect that the answer would be, well, not very many things. If you start musing a bit harder, you might come up with some scenarios you hadn’t
initially thought of. And, depressingly, the more you think about it, the more problems you can probably come up with.
It’s one thing to identify all the things that could go right or wrong in a design; it’s yet another to itemize all of the conditions leading to those good or bad outcomes, especially when it comes
to a full SoC (which, mercifully, isn’t done all by you, which means there are other people you can point to if things go awry). But, in a wide-ranging discussion with Breker’s CEO Adnan Hamid, we
discussed the scope of the problem – especially when taking a high-level system view.
Here are some numbers to chew on that get to the size of the space we’re talking about. Breker’s tool, which you may recall generates C-level tests for stress-testing an SoC, feeds on graphs that
describe scenarios; each node in the graph is “something to do,” and a node may abstract a lower-level graph. Designers and architects put these scenarios together, and, ideally, the collection of
all scenarios completely defines the intended behaviors of the design. So how many “states” might exist in a full analysis of all possible graphs?
Let’s start with an IP block, which is much more manageable. Now, I will readily acknowledge that “IP” is a broad term and can include anything from a simple DMA block to a monstrously complex
protocol. So one number won’t speak to them all, but we’re talking orders of magnitude here, and let’s not get hung up on trivial examples. According to Mr. Hamid, the number of graph states in an IP
block is on the order of… are you ready for this? 10^30. Yes: 1,000,000,000,000,000,000,000,000,000,000.
Dare you ask how many are in an entire SoC? How about in the range of 10^300-10^1000. Forgive me for not writing those out. Dude, that bottom number is a googol cubed! And this doesn’t even include
concurrency issues in a multicore system.
I won’t bother with the math that determines how long that would take to run. Oh, the heck with that: let’s do it! At the low end of that range, if it took 1 ns to analyze one graph state, then it
would take <snicker> 3.2×10^283 years to do the whole thing. A mere 2.3×10^273 times the current rough estimate of the age of our universe. The upper end? I guess you'll have to do
the math; Excel can’t – its brain exploded with that number.
So, after that merry little divertimento, let’s come back to what we already knew: we can’t test that much stuff. Mr. Hamid suggests that exercising something more on the order of 10^9 samples, done
properly, can ferret out most of the errors. An enormously tiny (if I may be so contradictory) fraction. And yet, he says that most verification plans cover only on the order of 10^6 states.
So for those of you gunning for 100% coverage, nuh-uh: not gonna happen. Granted, this notion of coverage is far more expansive than what logicians may consider when using the far more tractable
stuck-at fault model.
Stuck-at faults are great – they’re so manageable and traceable. But they reflect only low-level logic and only a portion of what can go wrong. Stuck-to-another-signal (bridging), for example, won’t
be caught. More comprehensive models have been introduced to expand the stuck-at notion, but they still operate on a low level.
Such faults are typically analyzed in the realm of abstract logic, where a signal is considered the same everywhere. If you want to take possible manufacturing faults into account for testing,
there’s a whole different universe of things that can go wrong. For example, if a signal (which is considered to be a single entity in RTL) has a physical trace that splits, with the two branches
going to opposite sides of the die, then faults could occur that have one branch at a different value from the other branch – a condition that the RTL description would say is impossible.
The potential gotchas Breker is trying to test for involve higher-level operations than this, although the abstract/manufacturing fault differences still apply. Things like, oh, reading from this
memory while that DMA over there is moving stuff around in a couple of other memories. The problem is constrained by the universe of things that are possible to do in C. (Although, even without
taking inline assembly into account, any of us who have programmed in C at less than a professional level will probably agree that that universe includes a few things you can do right and a whole lot
of things you can do wrong.)
So how in the world do you figure out what to test? Looked at statistically, you need to identify a sample of around 10^9 points out of 10^500 to prove that a design is worthy. To call that a drop in
the bucket is like calling the Death Star just a big Sputnik. Where in the heck do you start?
And this is where methodology comes in. So-called "constrained random" testing starts from inputs, applying (constrained) random values and recording the expected outputs. This is a feed-forward approach.
Breker’s approach is the opposite: it’s a “feed-back” method. They take these graphs, these scenarios, and, rather than working from the inputs to outputs to trace issues, they start with outcomes
and work back up the – well, at the RTL level you’d say “cone of logic,” but let’s generalize here as the “cone of influence.” You then randomly select some sets of inputs that result in the outcome.
Mr. Hamid uses a simple example of an adder to describe the impact of sampling approach. If you look at the overall scope of what an adder can do, it can have one of three basic outcomes: a normal
sum, an overflow, or an underflow. Many different add operations result in a “normal” sum. There are comparatively few that result in over- and underflow.
The typical sampling approach would randomly pick from the universe of possible input combinations – the vast majority of which result in normal sums. In other words, few tests (perhaps even none)
would end up testing over/underflow. But from a "stress testing" standpoint, over/underflows represent more of the "exceptional" sort of output that might cause a problem. Breker's tool builds a sample that represents over/underflows far more than a constrained random approach would. That's because, in the outcomes "space," normal, overflow, and underflow results get equal weighting.
Let’s make this more concrete: The Breker tool takes an output and traverses the graph backwards towards the inputs, selecting some random set out of the universe of inputs that lead to the outcome.
Let’s assume we’re going to generate 30 tests for our adder. The tool starts with, say, the “normal sum” outcome and works back to the inputs, randomly coming up with 10 different input combinations
that cause a normal sum. Then, using a similar technique, the overflow outcome is explored, yielding 10 tests that result in an overflow condition; and likewise for underflow.
So you have equal weighting of normal, over-, and underflow outcomes. A test can now include these in a larger test that says, oh, “What happens when an interrupt fires right when an overflow occurs?
” This would assume, of course, that the conditions resulting in an interrupt have also been analyzed and the appropriate test conditions identified.
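To make the sampling contrast concrete, here is a toy model in Python (a hedged sketch only; this is not Breker's graph solver — the "outcome-first" step is emulated by rejection sampling on an 8-bit signed adder):

```python
import random

random.seed(0)

def classify(a, b):
    """Classify an 8-bit signed addition outcome."""
    s = a + b
    if s > 127:
        return "overflow"
    if s < -128:
        return "underflow"
    return "normal"

# Input-driven (constrained-random style): draw 30 input pairs uniformly.
counts = {"normal": 0, "overflow": 0, "underflow": 0}
for _ in range(30):
    counts[classify(random.randint(-128, 127), random.randint(-128, 127))] += 1

# Outcome-driven: pick each outcome first, then find 10 input pairs
# that lead to it, giving the three outcomes equal weight.
tests = {}
for outcome in ("normal", "overflow", "underflow"):
    picks = []
    while len(picks) < 10:
        a, b = random.randint(-128, 127), random.randint(-128, 127)
        if classify(a, b) == outcome:
            picks.append((a, b))
    tests[outcome] = picks

print(counts)  # uniform input draws are typically dominated by "normal"
print({k: len(v) for k, v in tests.items()})  # 10 tests per outcome
```

Under uniform input sampling, roughly three quarters of the draws produce a normal sum, so the rare over/underflow outcomes may barely be exercised; the outcome-first pass guarantees ten tests for each.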
Note that, for simplicity, I said that you pick some random set of inputs leading to an outcome. You actually have more control than that; there are a number of knobs that you can use to bias the
test generation. For example, when creating scenarios, you can give some branches in a graph more weight than others, meaning more tests will be generated for the branches having higher weight.
I have to confess to having tied myself in some knots with this, which Breker helped to undo. And it largely had to do with one word that tends to be used for analyzing the graph: “solver.” There’s a
very basic realm where “cones of influence” and “solvers” come together, and that’s formal analysis. So I assumed that’s what was going on here. And the more I thought about that, the more it didn’t
make sense.
Well, it turns out that there’s more than one kind of solver. I was thinking “sat (or satisfiability) solver,” which can be used to prove conditions or outcomes or provide counterexamples if the
proof fails. Useful stuff, but that’s not what’s going on here. This is a “graph solver” – perhaps better called a graph “explorer,” they said. Nothing is being proven.
The tool also never sees the actual design – the RTL. All the tool sees is the scenario graphs; it allows tests to be generated that can then be applied to the design to identify issues. It also
doesn’t go beyond the scenarios that have been drawn up. If that set of scenarios has big holes in it, then so will the set of tests. The tests can give a good representation of the behaviors
identified in the scenarios, but they can’t help you find behaviors that have been left out of the scenarios.
So the burden of “what could possibly go wrong” still falls on you, at least at a block level, although by creating the scenarios, you’re really documenting how things are supposed to work. The tool
can then pull these scenarios together to see what happens when the system gets stressed.
And the next time the test or verification folks complain that you have too many tests, run those numbers for them. If they understand that you’re sampling at a rate of about 10^-491, they may end up
asking for more tests.
More info:
One thought on “A Tiny Pocket of Space”
1. Breker says that the universe of all possible SoC tests is beyond enormous, and that what we actually verify is a tiny sample – and that optimizing that sample is key. What’s your reaction to
Half Adder : Circuit Diagram,Truth Table, Equation & Applications
The half adder is one of the basic digital circuits. Operations that were once performed in analog circuits are, since the advent of digital electronics, performed digitally, and digital systems are considered effective and reliable. Among the various operations, the most prominent are the arithmetic ones: addition, subtraction, multiplication, and division. As is well known, any electronic device, whether a computer or a gadget like a calculator, can perform these mathematical operations, and it performs them on binary values.
This is possible through the presence of certain circuits, referred to as binary adders and subtractors. Such circuits are designed for binary codes, Excess-3 codes, and other codes as well. Binary adders are further classified into two types:
1. Half Adder
2. Full Adder
What is a Half Adder?
A half adder is a digital electronic circuit that performs addition on binary numbers. The process of addition is the same as in denary; the sole difference is the number system. The binary numbering system contains only 0 and 1, and the weight of a number is based entirely on the positions of its binary digits. Of the two digits, 1 is the larger and 0 the smaller. The block diagram of this adder is
Half Adder Circuit Diagram
A half adder has two inputs and produces two outputs, and it is one of the simplest digital circuits. The inputs are the bits on which the addition is to be performed; the outputs obtained are the sum and the carry.
The circuit of this adder comprises two gates, an AND gate and an XOR gate. The applied inputs are the same for both gates, but an output is taken from each: the output of the XOR gate is referred to as the SUM, and the output of the AND gate is known as the CARRY.
Half Adder Truth Table
The relation of the outputs to the applied inputs can be analyzed using a truth table:

A  B | SUM  CARRY
0  0 |  0     0
0  1 |  1     0
1  0 |  1     0
1  1 |  0     1
From the above truth table the points are evident as follows:
• If A = 0 and B = 0, that is, both applied inputs are 0, then both outputs SUM and CARRY are 0.
• If exactly one of the two applied inputs is 1, then the SUM is 1 but the CARRY is 0.
• If both inputs are 1, then the SUM is 0 and the CARRY is 1.
Based on the inputs applied the half adder proceeds with the operation of addition.
The equations for this type of circuit can be realized using the concepts of Sum of Products (SOP) and Product of Sums (POS). The Boolean equations determine the relation between the applied inputs and the obtained outputs.
To determine the equations, K-maps are drawn from the truth table values. There are two equations because two logic gates are used.
The K-map for the CARRY is derived from the truth table, and the output equation of the CARRY is obtained from the AND gate:

C = A · B
The Boolean expression for the SUM is realized in SOP form; from the K-map for the SUM, the equation determined is

S = A ⊕ B
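As a quick check, the two equations can be modeled and exercised against the truth table in a few lines of Python (a hedged sketch, not part of the original article; `^` is bitwise XOR and `&` is bitwise AND):

```python
def half_adder(a, b):
    """Model of the half adder: S = A xor B, C = A and B."""
    return a ^ b, a & b  # (sum, carry)

# Exercise all four rows of the truth table.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        # (carry, sum) is the 2-bit binary result of a + b.
        assert 2 * c + s == a + b
        print(a, b, "->", s, c)
```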
The applications of this basic adder are as follows
• To perform addition on binary bits, the Arithmetic Logic Unit (ALU) in a computer uses this adder circuit.
• Combining half adder circuits leads to the formation of the full adder circuit.
• These logic circuits are preferred in the design of calculators.
• These circuits are also used to calculate addresses and table indices.
• Beyond plain addition, these circuits can handle various applications in digital systems; indeed, they are at the heart of digital electronics.
VHDL Code
The VHDL code for the Half Adder circuity is
library ieee;
use ieee.std_logic_1164.all;
entity half_adder is
port(a,b: in bit; sum,carry:out bit);
end half_adder;
architecture data of half_adder is
begin
sum <= a xor b;
carry <= a and b;
end data;
1. What do you mean by Adder?
Digital circuits whose sole purpose is to perform addition are known as adders. They are the main components of ALUs. Adders operate on numbers in various formats, and their outputs are the sum and the carry.
2. What are the Limitations of Half Adder?
A half adder has no carry input, so the carry bit generated by a previous bit position cannot be added in. For this reason, these circuits cannot be used on their own to perform addition on multiple bits.
3. How to Implement Half Adder using NOR Gate?
The implementation of this type of adder can also be done by using the NOR gate. This is another Universal Gate.
4. How to Implement Half Adder using NAND Gate?
The NAND gate is one of the universal gates, which means any kind of circuit can be designed using NAND gates alone.
From the above circuit, the CARRY output is generated by applying the output of the first NAND gate to both inputs of a second NAND gate, which inverts it; the result is the same as the output of an AND gate.
The SUM output is generated by applying the output of the initial NAND gate, along with the individual inputs A and B, to two further NAND gates; the outputs of those two gates are then applied to a final NAND gate. Hence the output for the SUM is generated.
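The NAND-only construction just described can be verified in Python (a hedged sketch; the intermediate names n1–n3 are my own labels for the gate outputs, not from the article):

```python
def nand(a, b):
    return 1 - (a & b)

def half_adder_nand(a, b):
    """Half adder built from five NAND gates."""
    n1 = nand(a, b)      # first-stage NAND, shared by both outputs
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    s = nand(n2, n3)     # equals A xor B -> SUM
    c = nand(n1, n1)     # inverted NAND(A, B), i.e. A and B -> CARRY
    return s, c

# Compare against the direct gate equations for every input pair.
for a in (0, 1):
    for b in (0, 1):
        assert half_adder_nand(a, b) == (a ^ b, a & b)
print("NAND-only half adder matches the truth table")
```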
Please refer to this link to know more about Half Subtractor.
Please refer to this link to know more about Full Subtractor MCQs & Integrated Circuits MCQs.
Thus the basic adder in a digital circuit can be designed using various logic gates. Addition on multiple bits, however, gets complicated, and that is considered the limitation of the half adder.
Can you describe which IC is used for the increment operation in any practical counters?
110364: It is not possible to calculate the classes.
This error occurs due to one of the following:
• The Interval Size parameter value is larger than half of the range of the values of the field provided in the Field to Reclassify parameter. This results in fewer than three classes, and at least
three classes are required.
• The Interval Size parameter value results in more than 1,000 classes.
• The largest Upper Bound value in the Reclassification Table parameter is smaller than the minimum value of the field provided in the Field to Reclassify parameter.
Reduce the interval size to a value smaller than half of the range of the field to reclassify, increase it to a value large enough to produce 1,000 or fewer classes, or provide an upper
bound value that is larger than the minimum value of the analysis field. | {"url":"https://pro.arcgis.com/en/pro-app/latest/tool-reference/tool-errors-and-warnings/110001-120000/tool-errors-and-warnings-110351-110375-110364.htm","timestamp":"2024-11-05T10:55:51Z","content_type":"text/html","content_length":"11412","record_id":"<urn:uuid:7cde57f6-36b0-4c53-9979-f95e981917ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00847.warc.gz"} |
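The class-count constraint behind this error can be illustrated with a rough calculation. This is a sketch of the rule, not the tool's actual implementation, and `n_classes` is a name invented here:

```python
import math

def n_classes(field_min, field_max, interval_size):
    """Approximate number of equal-interval classes for a given interval size."""
    return math.ceil((field_max - field_min) / interval_size)

# The tool requires between 3 and 1,000 classes:
print(n_classes(0, 100, 10))    # 10 classes: acceptable
print(n_classes(0, 100, 60))    # 2 classes: too few, would trigger the error
print(n_classes(0, 100, 0.05))  # 2,000 classes: too many, would trigger the error
```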
Acre to Square Millimeter
Units of measurement use the International System of Units, better known as SI units, which provide a standard for measuring the physical properties of matter. Measurement like area finds its use in
a number of places right from education to industrial usage. Be it buying grocery or cooking, units play a vital role in our daily life; and hence their conversions. unitsconverters.com helps in the
conversion of different units of measurement like ac to mm² through multiplicative conversion factors. When you are converting area, you need an Acre to Square Millimeter converter that is elaborate
and still easy to use. Converting Acre to Square Millimeter is easy: you only have to select the units and the value you want to convert. If you encounter any issues while converting, this tool
is the answer that gives you the exact conversion of units. You can also get the formula used in Acre to Square Millimeter conversion along with a table representing the entire conversion. | {"url":"https://www.unitsconverters.com/en/Acre-To-Squaremillimeter/Unittounit-308-305","timestamp":"2024-11-08T01:56:31Z","content_type":"application/xhtml+xml","content_length":"134266","record_id":"<urn:uuid:5f92aa1f-c8a6-4605-b752-648c626deb67>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00379.warc.gz"} |
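The conversion factor itself follows from the exact definitions 1 acre = 43,560 ft², 1 ft = 0.3048 m, and 1 m² = 10⁶ mm², so a minimal converter looks like this (illustrative code, not the site's implementation):

```python
# 1 acre = 43,560 ft² × (0.3048 m/ft)² = 4,046.8564224 m² exactly
ACRE_TO_MM2 = 4046.8564224 * 1_000_000  # ≈ 4.0468564224e9 mm² per acre

def acres_to_mm2(acres):
    """Convert acres to square millimetres."""
    return acres * ACRE_TO_MM2

print(acres_to_mm2(1))  # about 4.047 billion mm² in one acre
```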
This module handles all Helmholtz equation routines.
Chris Bradley
Version: MPL 1.1/GPL 2.0/LGPL 2.1
The contents of this file are subject to the Mozilla Public License Version 1.1 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License
at http://www.mozilla.org/MPL/
Software distributed under the License is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License for the specific language governing rights and
limitations under the License.
The Original Code is OpenCMISS
The Initial Developer of the Original Code is University of Auckland, Auckland, New Zealand, the University of Oxford, Oxford, United Kingdom and King's College, London, United Kingdom. Portions
created by the University of Auckland, the University of Oxford and King's College, London are Copyright (C) 2007-2010 by the University of Auckland, the University of Oxford and King's College,
London. All Rights Reserved.
Alternatively, the contents of this file may be used under the terms of either the GNU General Public License Version 2 or later (the "GPL"), or the GNU Lesser General Public License Version 2.1 or
later (the "LGPL"), in which case the provisions of the GPL or the LGPL are applicable instead of those above. If you wish to allow use of your version of this file only under the terms of either the
GPL or the LGPL, and not to allow others to use your version of this file under the terms of the MPL, indicate your decision by deleting the provisions above and replace them with the notice and
other provisions required by the GPL or the LGPL. If you do not delete the provisions above, a recipient may use your version of this file under the terms of any one of the MPL, the GPL or the LGPL.
Definition in file Helmholtz_TEMPLATE_equations_routines.f90. | {"url":"http://staging.opencmiss.org/documentation/apidoc/iron/latest/programmer/_helmholtz___t_e_m_p_l_a_t_e__equations__routines_8f90.html","timestamp":"2024-11-02T14:51:39Z","content_type":"application/xhtml+xml","content_length":"13548","record_id":"<urn:uuid:87e8947e-947d-434f-8d6a-99c88afd5abc>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00165.warc.gz"} |
Convert Arcminutes to Arcseconds (arcmin to arcsec) | Examples & Steps
Disclaimer: We've spent hundreds of hours building and testing our calculators and conversion tools. However, we cannot be held liable for any damages or losses (monetary or otherwise) arising out of
or in connection with their use. Full disclaimer.
How to convert arcminutes to arcseconds (arcmin to arcsec)
The formula for converting arcminutes to arcseconds is: arcsec = arcmin × 60. To calculate the arcminute value in arcseconds first substitute the arcminute value into the preceding formula, and then
perform the calculation. If we wanted to calculate 1 arcminute in arcseconds we follow these steps:
arcsec = arcmin × 60
arcsec = 1 × 60
arcsec = 60
In other words, 1 arcminute is equal to 60 arcseconds.
Example Conversion
Let's take a look at an example. The step-by-step process to convert 7 arcminutes to arcseconds is:
1. Understand the conversion formula: arcsec = arcmin × 60
2. Substitute the required value. In this case we substitute 7 for arcmin so the formula becomes: arcsec = 7 × 60
3. Calculate the result using the provided values. In our example the result is: 7 × 60 = 420 arcsec
In summary, 7 arcminutes is equal to 420 arcseconds.
Converting arcseconds to arcminutes
In order to convert the other way around i.e. arcseconds to arcminutes, you would use the following formula: arcmin = arcsec × 0.0166666666666667. To convert arcseconds to arcminutes first substitute
the arcsecond value into the above formula, and then execute the calculation. If we wanted to calculate 1 arcsecond in arcminutes we follow these steps:
arcmin = arcsec × 0.0166666666666667
arcmin = 1 × 0.0166666666666667
arcmin = 0.0166666666666667
Or in other words, 1 arcsecond is equal to 0.0166666666666667 arcminutes.
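Both directions can be captured in two one-line functions. This is an illustrative sketch; dividing by 60 exactly avoids the rounded 0.0166666666666667 constant:

```python
def arcmin_to_arcsec(arcmin):
    """arcsec = arcmin × 60"""
    return arcmin * 60.0

def arcsec_to_arcmin(arcsec):
    """arcmin = arcsec / 60 (same as multiplying by 0.0166666666666667)"""
    return arcsec / 60.0

print(arcmin_to_arcsec(7))    # 420.0
print(arcsec_to_arcmin(420))  # 7.0
```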
Conversion Unit Definitions
What is an Arcminute?
An arcminute (or minute) is a unit of angular measurement that is often used as a subdivision of a degree. One degree is equal to 60 minutes.
To better understand this definition, it's important to note that a circle is divided into 360 degrees. Each degree is further divided into 60 minutes. Therefore, one minute is equal to 1/60th of a
degree or approximately 0.01745 degrees.
Minutes are commonly used in applications where angles need to be measured with a high degree of precision, such as astronomy, navigation, and surveying. They are particularly useful when measuring
small angles, such as the apparent diameter of celestial bodies or the deviation of a building from a true vertical line.
What is an Arcsecond?
An arcsecond (or second) is a unit of angular measure that is used to measure very small angles, particularly in astronomy and geodesy. One second is defined as 1/3600th of a degree or 1/60th of a
minute of arc.
To better understand this definition, it's important to note that a circle is divided into 360 degrees, and each degree is further divided into 60 minutes of arc. Each minute of arc is then divided
into 60 seconds of arc. Therefore, one second of arc is a very small angle, equal to 1/3600th of a degree or approximately 0.00028 degrees.
When measuring angles using seconds, angles are usually denoted using the degree symbol (°) followed by the minute symbol (′) and then the second symbol (″). For example, an angle of 2 degrees, 30
minutes, and 15 seconds would be written as 2° 30′ 15″.
Arcminutes To Arcseconds Conversion Table
Below is a lookup table showing common arcminutes to arcseconds conversion values.
Arcminute (') Arcsecond ('')
1 arcmin 60 arcsec
2 arcmin 120 arcsec
3 arcmin 180 arcsec
4 arcmin 240 arcsec
5 arcmin 300 arcsec
6 arcmin 360 arcsec
7 arcmin 420 arcsec
8 arcmin 480 arcsec
9 arcmin 540 arcsec
10 arcmin 600 arcsec
11 arcmin 660 arcsec
12 arcmin 720 arcsec
13 arcmin 780 arcsec
Other Common Arcminute Conversions
Below is a table of common conversions from arcminutes to other angle units.
Conversion Result
1 arcminute in circles 0.0000462962962962962962962962962963 circle
Arcminutes To Arcseconds Conversion Chart | {"url":"https://www.thecalculatorking.com/converters/angle/minute-to-second","timestamp":"2024-11-13T05:24:57Z","content_type":"text/html","content_length":"82891","record_id":"<urn:uuid:108d306f-03e5-44f5-8f73-0cc3ed231fae>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00192.warc.gz"} |
An Introduction to the Theory of Computation
Book Excerpts:
Computations are designed to solve problems. Programs are descriptions of computations written for execution on computers. The field of computer science is concerned with the development of
methodologies for designing programs, and with the development of computers for executing programs. It is therefore of central importance for those involved in the field that the characteristics of
programs, computers, problems, and computation be fully understood. Moreover, to clearly and accurately communicate intuitive thoughts about these subjects, a precise and well-defined terminology is required.
This book explores some of the more important terminologies and questions concerning programs, computers, problems, and computation. The exploration reduces in many cases to a study of mathematical
theories, such as those of formal languages; theories that are interesting also in their own right. These theories provide abstract models that are easier to explore, because their formalisms avoid irrelevant details.
Organized into seven chapters, the material in this book gradually increases in complexity. In many cases, new topics are treated as refinements of old ones, and their study is motivated through
their association to programs.
- Chapter 1 is concerned with the definition of some basic concepts.
- Chapter 2 studies finite-memory programs.
- Chapter 3 considers the introduction of recursion to finite-memory programs.
- Chapter 4 deals with the general class of programs.
- Chapter 5 considers the role of time and space in computations.
- Chapter 6 introduces instructions that allow random choices in programs.
- Chapter 7 is devoted to parallelism.
As a natural outcome, the text also treats topics such as parallel computations. These important topics have matured quite recently, and so far have not been treated in other texts.
The level of the material is intended to provide the reader with introductory tools for understanding and using formal specifications in computer science. As a result, in many cases ideas are stressed more than detailed argumentation, with the objective of developing the reader's intuition toward the subject as much as possible.
Intended Audience:
This book is intended for undergraduate students at advanced stages of their studies, and for graduate students. The reader is assumed to have some experience as a programmer, as well as in handling
mathematical concepts. Otherwise no specific prerequisite is necessary. | {"url":"https://www.freetechbooks.com/index.php/an-introduction-to-the-theory-of-computation-t468.html","timestamp":"2024-11-10T06:17:12Z","content_type":"text/html","content_length":"51693","record_id":"<urn:uuid:fd837694-96c4-4817-b39c-88b908bdb452>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00890.warc.gz"} |
Step 1. Introduction (5 mins): Greet the students and ask them if they know any fun facts about numbers. Write some of their answers on the board.
Step 2. Homework check (5 mins): Ask one or a few students to present their homework from the previous lesson.
Step 3. Presentation of fun facts (10 mins): Show a PowerPoint presentation or a video of interesting and fun facts about numbers, such as prime numbers, the Fibonacci sequence, the number Pi, and so on.
Step 4. Handout of math problems (5 mins): Hand out a sheet with math problems related to the fun facts presented. Allow the students to work on the problems individually or in pairs for the next 5 minutes.
Step 5. Group discussion (5 mins): Ask the students to share their answers to the problems and explain how they solved them. Encourage them to discuss among themselves and ask questions.
Step 6. Conclusion (5 mins): Recap the fun facts and ask the students to name their favorite one. Assign homework related to the topic for the next lesson. | {"url":"https://aidemia.co/view.php?id=1791","timestamp":"2024-11-05T23:41:03Z","content_type":"text/html","content_length":"9424","record_id":"<urn:uuid:cd45408f-8ce8-4356-a0c6-e287e1e4008e>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00505.warc.gz"}
VBA – Depth-First-Search Algorithm with VBA
Depth first search algorithm is one of the two famous algorithms in graphs. I am now in “Algorithm Wave” as far as I am watching some videos from SoftUni Algorithm courses. In the current article I
will show how to…
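The article itself targets VBA, but the algorithm is language-independent; here is a minimal iterative sketch in Python (the adjacency-list representation and function name are choices made for this illustration):

```python
def dfs(graph, start):
    """Iterative depth-first traversal of an adjacency-list graph
    ({node: [neighbours]}); returns nodes in visit order."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # push neighbours reversed so they are visited in listed order
        stack.extend(reversed(graph.get(node, [])))
    return order

g = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(g, "A"))   # ['A', 'B', 'D', 'C']
```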
Tagged with: DFS, DFS algorithms, VBA, vba algorithms | {"url":"https://www.vitoshacademy.com/tag/vba-algorithms/","timestamp":"2024-11-05T10:30:41Z","content_type":"text/html","content_length":"33082","record_id":"<urn:uuid:6babdd03-3c6b-498f-bf47-4cedf17591c4>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00460.warc.gz"} |
Anomalous infiltration
Infiltration of anomalously diffusing particles from one material to another through a biased interface is studied using continuous time random walk and Lévy walk approaches. Subdiffusion in both
systems may lead to a net drift from one material to another (e.g. 〈x(t)〉 > 0) even if particles eventually flow in the opposite direction (e.g. the number of particles in x > 0 approaches zero). A
weaker paradox is found for a symmetric interface: a flow of particles is observed while the net drift is zero. For a subdiffusive sample coupled to a superdiffusive system we calculate the average
occupation fractions and the scaling of the particle distribution. We find a net drift in this system, which is always directed to the superdiffusive material, while the particles flow to the
material with the smaller sub- or superdiffusion exponent. We report the exponents of the first-passage-time distribution of Lévy walks, which are needed for the calculation of anomalous infiltration.
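A minimal Monte Carlo sketch of a one-dimensional subdiffusive CTRW (symmetric ±1 jumps separated by heavy-tailed waiting times) illustrates the kind of process studied here. This is a generic textbook simulation under assumed parameters, not the paper's actual model of a biased interface:

```python
import random

def ctrw_final_position(alpha, t_max, rng):
    """Position at time t_max of one symmetric CTRW walker whose waiting
    times are Pareto-distributed, P(tau > x) = x**(-alpha) for x >= 1.
    For 0 < alpha < 1 the walk is subdiffusive: MSD ~ t**alpha."""
    t, x = 0.0, 0
    while True:
        t += rng.random() ** (-1.0 / alpha)   # inverse-CDF Pareto sample
        if t > t_max:
            return x
        x += rng.choice((-1, 1))

rng = random.Random(42)
walkers = [ctrw_final_position(0.5, 100.0, rng) for _ in range(5000)]
mean = sum(walkers) / len(walkers)
msd = sum(x * x for x in walkers) / len(walkers)
print(mean, msd)   # mean ~ 0 (symmetric walk); MSD grows ~ t**0.5
```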
• diffusion
• stochastic processes (theory)
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Statistics and Probability
• Statistics, Probability and Uncertainty
Dive into the research topics of 'Anomalous infiltration'. Together they form a unique fingerprint. | {"url":"https://cris.iucc.ac.il/en/publications/anomalous-infiltration","timestamp":"2024-11-03T10:45:29Z","content_type":"text/html","content_length":"48008","record_id":"<urn:uuid:6aa37e10-289d-4f52-8d0f-0ae8fa28ce4c>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00332.warc.gz"} |
Planar Graph
A planar graph is a network that can be drawn in a plane without any edges intersecting.
The planar graph is a college-level concept that would first be encountered in a discrete mathematics course covering graph theory.
Cycle Graph: A cycle graph is a network containing a single cycle which passes through all its vertices.
Polyhedral Graph: A polyhedral graph is a network made up of the vertices and edges of a polyhedron. Polyhedral graphs are always planar.
Tree: A tree is a network that contains no cycles.
Graph: In graph theory, a graph, also called a network, is a collection of points together with lines that connect some subset of the points.
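One consequence of Euler's formula (v − e + f = 2) gives a quick necessary test for planarity: a simple planar graph with v ≥ 3 vertices has at most 3v − 6 edges. A sketch (the function name is invented here):

```python
def exceeds_planar_bound(v, e):
    """True if a simple graph with v vertices and e edges violates the
    planar bound e <= 3v - 6 (for v >= 3), and so cannot be planar.
    The test is only necessary, not sufficient: K(3,3) satisfies the
    bound (9 <= 12) yet is still non-planar."""
    return v >= 3 and e > 3 * v - 6

print(exceeds_planar_bound(5, 10))  # K5: 10 > 9, so K5 cannot be planar -> True
print(exceeds_planar_bound(4, 6))   # K4: 6 <= 6, bound satisfied -> False
```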
Classroom Articles on Graph Theory
Chromatic Number Directed Graph
Complete Graph Graph Cycle
Connected Graph Graph Theory
Classroom Articles on Discrete Mathematics (Up to College Level)
Algorithm Generating Function
Binary Logic
Binomial Coefficient Magic Square
Binomial Theorem Pascal's Triangle
Combinatorics Permutation
Discrete Mathematics Recurrence Relation
Fibonacci Number | {"url":"https://mathworld.wolfram.com/classroom/PlanarGraph.html","timestamp":"2024-11-03T21:23:38Z","content_type":"text/html","content_length":"48893","record_id":"<urn:uuid:a158d6cd-922a-4df2-a933-9668fec0e87e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00357.warc.gz"} |
June 2021 – N N Taleb's Technical Blog
You have zero probability of making money. But it is a great trade.
One-tailed distributions entangle scale and skewness. When you increase the scale, their asymmetry pushes the mass to the right rather than bulge it in the middle. They also illustrate the difference
between probability and expectation as well as the difference between various modes of convergence.
Consider a lognormal \(\mathcal{LN}\) with the following parametrization, \(\mathcal{LN}\left[\mu t-\frac{\sigma ^2 t}{2},\sigma \sqrt{t}\right]\), corresponding to the CDF \(F(K)=\frac{1}{2} \text{erfc}\left(\frac{-\log (K)+\mu t-\frac{\sigma ^2 t}{2}}{\sqrt{2} \sigma \sqrt{t}}\right)\).
The mean, \(m= e^{\mu t}\), does not include the parameter \(\sigma\) thanks to the \(-\frac{\sigma ^2}{2} t\) adjustment in the first parameter. But the standard deviation does, as \(STD=e^{\mu t} \sqrt{e^{\sigma ^2 t}-1}\).
When \(\sigma\) goes to \(\infty\), the probability of exceeding any positive \(K\) goes to 0 while the expectation remains invariant. This is because the mass concentrates like a Dirac stick at \(0\), with an infinitesimal mass at infinity that keeps the expectation constant. For the lognormal belongs to the log-location-scale family.
\(\underset{\sigma \to \infty }{\text{lim}} F(K)= 1\)
Option traders experience an even worse paradox, see my Dynamic Hedging. As the volatility increases, the delta of the call goes to 1 while the probability of exceeding the strike, any strike, goes
to \(0\).
More generally, a \(\mathcal{LN}[a,b]\) has for mean, STD, and CDF \(e^{a+\frac{b^2}{2}},\sqrt{e^{b^2}-1} e^{a+\frac{b^2}{2}},\frac{1}{2} \text{erfc}\left(\frac{a-\log (K)}{\sqrt{2} b}\right)\)
respectively. We can find a parametrization producing weird behavior in time as \(t \to \infty\).
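The paradox is easy to check numerically with the CDF above: as σ grows, the mean stays fixed at \(e^{\mu t}\) while \(P(X > K)\) collapses to zero. A stdlib-only sketch using the parametrization in the text:

```python
import math

def p_exceed(K, mu, sigma, t=1.0):
    """P(X > K) for X ~ LN[mu*t - sigma**2*t/2, sigma*sqrt(t)],
    whose mean is exp(mu*t) regardless of sigma."""
    a = mu * t - sigma**2 * t / 2.0
    b = sigma * math.sqrt(t)
    cdf = 0.5 * math.erfc((a - math.log(K)) / (math.sqrt(2.0) * b))
    return 1.0 - cdf

for s in (0.5, 2.0, 5.0, 10.0):
    # the mean is exp(0.05) ~ 1.051 in every case; the exceedance shrinks
    print(s, p_exceed(K=1.0, mu=0.05, sigma=s))
```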
Thanks: Micah Warren who presented a similar paradox on Twitter.
The “Bitcoin, Currencies and Fragility” Paper
The main paper Bitcoin, Currencies and Fragility is updated here .
The supplementary material is updated here | {"url":"https://fooledbyrandomness.com/blog/index.php/2021/06/","timestamp":"2024-11-07T15:27:46Z","content_type":"text/html","content_length":"42281","record_id":"<urn:uuid:1fe4f51f-9e2b-4fc4-a2d3-21f25213453a>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00321.warc.gz"} |
These cross sections are calculated from JENDL-4.0 at 300K.
The background color of each cell containing a cross section indicates the order of magnitude of the cross-section value.
The unit of cross section, (b), means barns, and SI prefixes are used as following.
(kb) → 10^3(b), (mb) → 10^−3(b), (μb) → 10^−6(b), (nb) → 10^−9(b).
MT is a number that defines a reaction type. For the relation between MT and reaction type, please refer to the manual of ENDF formats.
Maxwellian Average :
σ[macs](T) = (2/√π) ⋅ (k[B]T)^−2 ⋅ ∫ σ(E,T) ⋅ E ⋅ exp ( −E/(k[B]T) ) dE ,
where T denotes the temperature, and k[B] the Boltzmann constant. The upper and lower limits of integration, E[L] and E[U], are set to 10^−5 eV and 10 eV, respectively.
Resonance Integral :
σ[RI] = ∫ ( σ(E) / E ) dE ,
with E[L] = 0.5 eV and E[U] = 10 MeV.
U-235 Thermal Fission-Neutron Spectrum Average (Fiss. Spec. Average) :
σ[facs](T) = ∫ σ(E,T) ⋅ exp ( −E/a ) ⋅ sinh ( √(bE) ) dE / ∫ exp ( −E/a ) ⋅ sinh ( √(bE) ) dE ,
with E[L] = 10^−5 eV and E[U] = 20 MeV. The parameters a and b are 0.988 MeV and 2.249 MeV^−1, respectively.
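A numerical sketch of the fission-spectrum average, using trapezoidal quadrature over the Watt spectrum with the a and b values quoted above. This is illustrative only, not the JENDL processing code, and the toy cross section is an assumption:

```python
import math

A_WATT, B_WATT = 0.988, 2.249   # a (MeV) and b (1/MeV) from the text

def watt(E):
    """Unnormalized U-235 thermal fission-neutron spectrum, E in MeV."""
    return math.exp(-E / A_WATT) * math.sinh(math.sqrt(B_WATT * E))

def fiss_spec_average(sigma, E_lo=1e-11, E_hi=20.0, n=20000):
    """<sigma> = ∫ sigma(E) chi(E) dE / ∫ chi(E) dE by the trapezoid rule."""
    h = (E_hi - E_lo) / n
    num = den = 0.0
    for i in range(n + 1):
        E = E_lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        num += w * sigma(E) * watt(E)
        den += w * watt(E)
    return num / den

# A constant cross section averages to itself, a basic sanity check:
print(fiss_spec_average(lambda E: 3.0))   # 3.0
```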
Westcott g-factor : | {"url":"https://wwwndc.jaea.go.jp/cgi-bin/Tab80WWW.cgi?lib=J40&iso=H000","timestamp":"2024-11-13T06:01:18Z","content_type":"text/html","content_length":"17261","record_id":"<urn:uuid:b7b76476-d16a-4fa5-9d03-eb482743ca82>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00876.warc.gz"} |
80 Lb Thrust Trolling Motor Amp Draw
80 Lb Thrust Trolling Motor Amp Draw - At 24 V the amp draw would be about 41.4 A (1 hp / 0.75 efficiency × 746 W = 994.66 W; 994.66 W / 24 V = 41.4 A). The 80 lb thrust motor is recommended for boats weighing up to 4,000 lbs. When using lithium batteries, you want to ensure the battery (or batteries) provides enough continuous discharge amperage to run the motor at its max amp draw. Amps × volts = watts, or watts / amps = volts. The battery must be fully charged to show the real amp draw of a motor.
Wire extension length refers to the distance from the batteries to the trolling motor leads. If you wanted to operate a 1 hp 12 V trolling motor for 1 hour, you would need an 82.9 Ah 12 V energy source (battery), since 994.66 W / 12 V ≈ 82.9 A. A 100 amp-hour rated battery / 20 amp draw = 5 hour run time.
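The run-time arithmetic quoted here (100 Ah / 20 A = 5 h) generalizes to a one-line calculator; the derating factor is an assumption added for lead-acid batteries, which are usually not drained fully:

```python
def run_time_hours(battery_ah, amp_draw, usable_fraction=1.0):
    """Hours of runtime: rated amp-hours (scaled by the usable fraction)
    divided by the motor's amp draw."""
    return battery_ah * usable_fraction / amp_draw

print(run_time_hours(100, 20))        # 5.0 h, matching the example above
print(run_time_hours(100, 56, 0.5))   # under an hour at a 56 A max draw, 50% derate
```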
Ulterra 80 Trolling Motor 45″ Shaft Length, 80 lbs Thrust, 24 Volts
Web just to satisfy my curiosity, i checked the amp draw of my 80 lb ultrex at different speed settings. At 24v the amp draw would be (1hp /.75% x 746w = 994.66whr / 12v = 41.4a). Web amps x volts =
watts, or watts / amps = volts. Consult website for available thrust options..
940500100 X5 24V 80 lbs Thrust 50" Shaft Bow Mount
Web looking for an efficient trolling motor for your boat? Web for example, one pro staff tested a 24v 54ah battery package for their 24v 80 lb thrust. The motor is only able to draw up to a
specified amperage. Web just to satisfy my curiosity, i checked the amp draw of my 80 lb.
Minn Kota Vantage 80 Freshwater Transom Mount Trolling Motor, 80 lb
Web i have seen a few videos of guys measuring the current draw of their trolling motors. Web with our charts, you can compare the run times of different battery sizes when used with popular trolling
motor sizes: 30a (40a breaker) minn kota: Using the formula, you’ll get 3.3 hours. It is measured in amperes.
Minn Kota® 1368580 Maxxum 24V 80 lbs Thrust 42" Shaft Freshwater
Max amp draw of your trolling motor should be less than < the max amp draw of your battery (also called continuous discharge rate). I recognize that the motor would draw more under heavier loads but
this gives me a minimum baseline. Maximum amp draw values only occur intermittently during select conditions and should not.
Minn Kota® 1368640 Maxxum 24V 80 lb Thrust 52" Shaft Bow Mount
Determine the amp draw of your motor. Web trolling motors with 80 pounds of thrust run on 24 volts and are suitable for boats up to 4,000 lb. Web so, let’s say you already know your battery amperage
draw. Web max amp draw: Web just to satisfy my curiosity, i checked the amp draw of.
Watersnake® Assault SWDSB 80lb. Thrust Bowmount 24V Trolling Motor
They require two 12v batteries connected in series. Find a power cord and attach the clamp amp meter. Web what is your amperage draw? Amp rating / amp draw. Wire extension length refers to the
distance from the batteries to the trolling motor leads. At 12v the amp draw would be (1hp /.75% x 746w.
Minn Kota® Traxxis™ 80 Transom mount 24V 80 lb. Thrust 42" Shaft
30, 55, 80 and 112 pounds of thrust. Our battery run time calculator will give you an idea of what you can expect from a given battery capacity at a specific amp draw. 30a (40a breaker) minn kota:
They require two 12v batteries connected in series. The chart below will show. Due to the many.
Minn Kota Ultrex Bow Mount 24V Variable Speed 80 lb. Thrust Trolling
The 80 lbs thrust motor is recommended for boats weighing up to 4,000lbs. The 80 lbs thrust motor is recommended for boats weighing up to 4,000lbs. The utmost draw is 56 amps. Understanding the amp
draw is crucial because it directly affects the motor’s. Find a power cord and attach the clamp amp meter. Web.
Minn Kota® 1368810 Ultrex 24V 80 lb Thrust 45" Shaft Bow Mount
Web i have seen a few videos of guys measuring the current draw of their trolling motors. Amp rating / amp draw. The chart below shows the max amp draw by motor thrust. Determine the amp draw of your
motor. Due to the many variables when boating (including wind, waves, current, battery condition, etc.) we.
X5 24V 80 lb Thrust 60" Shaft Bow Mount Saltwater Trolling
Web thrust volts max amp draw; Understanding the amp draw is crucial because it directly affects the motor’s. The 80 lbs thrust motor is recommended for boats weighing up to 4,000lbs. The chart below
will show. Max amp draw of your trolling motor should be less than < the max amp draw of your battery.
80 Lb Thrust Trolling Motor Amp Draw If you wanted to operate a 1hp 12v trolling motor for 1 hour, you would need an 82.8ah 12 v energy (battery) source. 100 amp hour rated battery / 20 amp draw = 5
hour run time. The 80 lbs thrust motor is recommended for boats weighing up to 4,000lbs. 30a (40a breaker) minn kota: Web just to satisfy my curiosity, i checked the amp draw of my 80 lb ultrex at
different speed settings.
What Is Your Amperage Draw?
It's good to predict how many hours you'll get out of one charge. Check out our full line of transom mounted saltwater trolling motors. The maximum draw is 50 amps. 100+ lb thrust trolling motors
Thrust, Volts, and Max Amp Draw
9.5” x 16.5” x 77.25” (24.13l x 41.91w x 196.215h cm) weight: Web max amp draw: If you wanted to operate a 1hp 12v trolling motor for 1 hour, you would need an 82.8ah 12 v energy (battery) source.
Web amps x volts = watts, or watts / amps = volts.
The Maximum Draw Is Either 50 Amps Or 52 Amps.
The motor is only able to draw up to a specified amperage. 30, 55, 80 and 112 pounds of thrust. The 80 lbs thrust motor is recommended for boats weighing up to 4,000lbs. I recognize that the motor
would draw more under heavier loads but this gives me a minimum baseline.
If You Have A Lower Amp Hour Rating, You Will Want To Calculate The Runtime For Your Motor.
Web i have seen a few videos of guys measuring the current draw of their trolling motors. This means that your motor will never ask for more power than your battery is capable of giving at any one
time. Determine the amp draw of your motor. Maximum amp draw values only occur intermittently during select conditions and should not be.
80 Lb Thrust Trolling Motor Amp Draw Related Post : | {"url":"https://sandbox.independent.com/view/80-lb-thrust-trolling-motor-amp-draw.html","timestamp":"2024-11-07T10:27:39Z","content_type":"application/xhtml+xml","content_length":"24156","record_id":"<urn:uuid:a9124e73-2500-4935-b101-e681ac4dc90c>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00682.warc.gz"} |
SUPERSEDED 12 Jun 2016
This was originally a submission to the IJCAI 2016 workshop on
Bridging the Gap between Human and Automated Reasoning
held at the International Joint Conference on AI, New York, July 2016
The submission was accepted and a revised version will go into the workshop proceedings. The revised version of this paper is at
All of this is "work in progress" and is likely to be revised,
especially after criticisms made at the workshop!
12 Jun 2016
THE REMAINDER OF THIS PAPER IS OUT OF DATE
Natural Vision and Mathematics: Seeing Impossibilities
(Draft workshop paper)
Aaron Sloman^1
School of Computer Science,
University of Birmingham, UK
This paper summarises one aspect of a large and complex project - the Turing-inspired investigation of evolution of forms of information processing: the Meta-Morphogenesis project. The full project
investigates forms of biological information processing produced by evolution since the beginning of life on earth, and the fundamental and evolved construction kits used by evolution and its
products. I'll focus especially on features of animal information processing relevant to mechanisms that made possible the deep mathematical discoveries of Euclid, Archimedes, and other ancient
mathematicians, especially mechanisms of spatial perception that were precursors of mathematical abilities. These are mechanisms required for perception of possibilities and constraints on
possibilities, a type of affordance perception not explicitly discussed by Gibson, but suggested by extending his ideas. Current AI vision systems and reasoning systems lack such abilities. A future
AI project might produce a design for "baby" robots that can "grow up" to become mathematicians able to replicate (and extend) some of the ancient discoveries, e.g. in the way that Archimedes
extended Euclidean geometry to make trisection of an arbitrary angle possible. This is relevant to many kinds of intelligent organism or machine able to perceive and interact with structures and
processes in the environment. One consequence is demonstration of the need to extend Dennett's taxonomy of types of mind to include Euclidean (or Archimedean) minds.
AI, Kant, Mathematics, Meta-morphogenesis, intuition, Euclid, Geometry, Topology, Kinds-of-minds, Meta-cognition, Meta-meta-cognition, etc.
Mathematics and computers
It is widely believed that computers will always outperform humans in mathematical reasoning. That, however, is based on a narrow conception of mathematics that ignores the history of mathematics,
e.g. achievements of Euclid and Archimedes, and also ignores kinds of mathematical competence that are a part of our everyday life, but mostly go unnoticed, e.g. topological reasoning abilities.
These are major challenges for AI, especially attempts to replicate or model human mathematical competences. I don't think we are ready to build working systems with these competences, but I'll
outline a research programme that may, eventually, lead us towards adequate models.
The research explores aspects of the evolution and use of biological mathematical competences and requirements for replicating those competences in future machines. Formal mechanisms based on use of
arithmetic, algebra, and logic, dominate AI models of mathematical reasoning, but the great ancient mathematicians did not use modern logic and formal systems. Such things are therefore not necessary
for mathematics, though they are part of mathematics: a fairly recent part. Moreover, they do not seem to be sufficient to model all human and animal mathematical reasoning. By studying achievements
of ancient mathematicians, pre-verbal human toddlers, and intelligent non-human animals, especially perception and reasoning abilities that are not matched by current AI systems, or explained by
current theories of how brains work, we can identify challenges to be met.
This will need new powerful languages, similar to languages produced by evolution for perceiving, thinking about and reasoning about shapes, structures and spatial processes. If such internal
languages are used by intelligent non-human animals and pre-verbal toddlers, their evolution must have preceded evolution of languages for communication, as argued in [Sloman 1978b, Sloman 1979,
Sloman 2015]. In particular, structured internal languages (for storing and using information) must have evolved before languages for communication, since there would be nothing to communicate and no
use for anything communicated, without pre-existing internal mechanisms for constructing, manipulating and using structured meanings.
For the simplest organisms (viruses?) there may be only passive physical/chemical reactions, and only trivial decisions and uses of information (apart from genetic information). Slightly more complex
organisms may use information only for taking Yes/No or More/Less or Start/Stop decisions, or perhaps selections from a pre-stored collection of possible internal or external actions. (Evolution's
menus!) More complex internal meaning structures are required for cognitive functions based on information contents that can vary in structure and complexity, like the Portia spider's ability to
study a scene for about 20 minutes and then climb a branching structure to reach a position above its prey, and then drop down for its meal [Tarsitano 2006]. This requires an initial process of
information collection and storage in a scene-specific structured form that later allows a pre-computed branching path to be followed even though the prey is not always visible during the process,
and portions of the scene that are visible keep changing as the spider moves. Portia is clearly conscious of much of the environment, during and after plan-construction. As far as I know, nobody
understands in detail what the information processing mechanisms are that enable the spider to take in scene structures and construct a usable 3-D route plan, though we can analyse the computational
requirements on the basis of half a century of AI experience.
This is one example among many cognitive functions enabling individual organisms to deal with static structured situations and passively perceived or actively controlled processes, of varying
complexity, including control processes in which parts of the perceiver change their relationships to one another (e.g. jaws, claws, legs, etc.) and to other things in the environment (e.g. food,
structures climbed over, or places to shelter).
Abilities to perceive plants in natural environments, such as woodlands or meadows, and, either immediately or later, make use of them, also require acquisition, storage and use of information about
complex objects of varying structures, and information about complex processes in which object-parts change their relationships, and change their visual projections as the perceiver moves.
Acting on perceived structures, e.g. biting or swallowing them, or carrying them to a part-built nest to be inserted, will normally have to be done differently in different contexts, e.g. adding
twigs with different sizes and shapes at different stages in building a nest. How can we make a robot that does this?
Non-human abilities to create and use information structures of varying complexity are evolutionary precursors of human abilities to use grammars and semantic rules for languages in which novel
sentences are understood in systematic ways to express different more or less complex percepts, intentions, or plans to solve practical problems, e.g. using a lexicon, syntactic structure, and
compositional semantics. In particular, a complex new information structure can be assembled and stored, then later serve as an information structure (e.g. plan, hypothesis) used in the control of behaviour.
We must not, of course, be deceived by organisms that appear to be intentionally creating intended structures but are actually doing something much simpler that creates the structures as a
by-product, like bees huddled together, oozing wax, vibrating, and thereby creating a hexagonal array of cavities, that look designed but were not. Bees have no need to count to six to do that.
Many nest-building actions, however, are neither random nor fixed repetitive movements. They are guided in part by missing portions of incomplete structures, where what's missing and what's added
keeps changing. So the builders need internal languages with generative syntax, structural variability, (context sensitive) compositional semantics, and inference mechanisms in order to be able to
encode all the relevant varieties of information needed. Nest building competences in corvids and weaver birds are examples. Human architects are more complicated.
Abilities to create, perceive, change, manipulate, or use meaning structures (of varying complexity) enable a perceiver of a novel situation to take in its structure and reason hypothetically about
effects of possible actions - without having to collect evidence and derive probabilities. The reasoning can be geometric or topological, without using any statistical evidence: merely the
specification of spatial structures. Reasoning about what is impossible (not merely improbable) can avoid wasted effort.
The "polyflap" domain was proposed in [Sloman 2005] as an artificial environment illustrating some challenging cognitive requirements. It is made up of arbitrary 2D polygonal shapes each with a
single (non-flat) fold forming a new 3D object. An intelligent agent exploring polyflaps could learn that any object resting on surfaces where it has a total of two contact points can rotate in
either direction about the line joining the contact points. Noticing this should allow the agent to work out that in order to be stable such a structure needs at least one more supporting surface on
which a third part of the object can rest. In the simple case all three points may be in the same horizontal plane: e.g. on a floor. But an intelligent agent that understands stability should be able
to produce stability with three support points on different, non-co-planar surfaces, e.g. the tops of three pillars with different heights. Any two of the support points on their own would allow
tilting about the line joining the points. But if the third support point is not on that line, and a vertical line through the object's centre of gravity goes through the interior of the triangle
formed by the three support points, then the structure will be stable^2. An intelligent machine should be able to reason in similar ways about novel configurations. This illustrates a type of
perception of affordances in the spirit of Gibson's theory. (I don't know whether he mentioned use of geometrical or topological reasoning in deciding what would be stable).
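The stability criterion just described can be sketched computationally. The following is a minimal illustration (the function names and the sign-test formulation are my own, not from the text): project the three support points and the centre of gravity onto the horizontal plane, and test whether the projected centre lies strictly inside the support triangle.

```python
# Illustrative sketch of the stability criterion described above:
# a rigid object resting on three non-collinear support points is stable
# if the vertical line through its centre of gravity passes through the
# interior of the triangle formed by the supports, projected onto the
# horizontal plane. Names and structure are hypothetical.

def cross_z(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_stable(supports, centre_of_gravity):
    """supports: three (x, y, z) contact points; centre_of_gravity: (x, y, z).
    Heights are irrelevant to the criterion, so project onto the (x, y)
    plane and test whether the centre lies strictly inside the triangle."""
    p = [(s[0], s[1]) for s in supports]
    g = (centre_of_gravity[0], centre_of_gravity[1])
    signs = [cross_z(p[i], p[(i + 1) % 3], g) for i in range(3)]
    if any(s == 0 for s in signs):
        return False  # on an edge or vertex: tilting remains possible
    # strictly inside iff all three cross products have the same sign
    return all(s > 0 for s in signs) or all(s < 0 for s in signs)

# Three pillar tops with different heights, as in the text:
pillars = [(0, 0, 1.0), (4, 0, 2.5), (0, 3, 0.7)]
print(is_stable(pillars, (1, 1, 5)))   # inside the support triangle -> True
print(is_stable(pillars, (5, 5, 5)))   # outside -> False
```

Note that the z-coordinates play no role, mirroring the point in the text that the three supports need not be co-planar.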
This contradicts a common view that affordances are discovered through statistical learning. Non-statistical forms of reasoning about affordances in the environment (possibilities for change and
constraints on change) may have been a major source of the amazing collection of discoveries about topology and geometry recorded in Euclid's Elements. Such forms of reasoning are very important, but
still unexplained.
It seems that for many intelligent non-human animals, as well as for humans, mechanisms evolved that can build, manipulate and use structured internal information records whose required complexity
can vary and whose information content is derivable from information about parts, using some form of "compositional semantics", as is required in human spoken languages, logical languages, and
programming languages. However, the internal languages need not use linear structures, like sentences. In principle they could be trees, graphs, nets, map-like structures or types of structure we
have not yet thought of.
The variety of types of animal that can perceive and act intelligently in relation to novel perceived environmental structures, suggests that many use "internal languages" in a generalised sense of
"language" ("Generalised Languages" or GLs), with structural variability and (context sensitive) compositional semantics, which must have evolved long before human languages were used for
communication [Sloman Chappell 2007, Sloman 2015]. The use of external, structured languages for communication presupposes internal perceptual mechanisms using GLs, e.g. for parsing messages and
relating them to percepts and intentions. There are similar requirements for intelligent nest building by birds and for many forms of complex learning and problem solving by other animals, including
elephants, squirrels, cetaceans, monkeys and apes.
Is there a circularity?
In the past, philosophers would have argued (scornfully!) that postulating the need for an internal language IL to be used in understanding an external language EL, would require yet another internal
language for understanding IL, and so on, leading to an infinite regress. But AI and computer systems engineering demonstrate that there need not be an infinite regress. This is a very important
discovery of the last seven or so decades. (I don't have space for details here, but the workshop audience should not need them.) How brains achieve this is unknown, however.
These comments about animals able to perceive, manipulate and reason about varied objects and constructions, apply also to pre-verbal human toddlers playing with toys and solving problems, including
manipulating food, clothing, and even their parents. A footnote points to some examples^3.
The full repertoire of such biological vehicles and mechanisms for information bearers must include both mechanisms and meta-mechanisms (mechanisms that construct new mechanisms) produced by natural
selection and inherited via genomes, and also individually discovered/created mechanisms, especially in humans, and to a lesser extent in other altricial species with "meta-configured" competences in
the terminology of [Chappell Sloman 2007].
Human sign languages are also richly structured but are not restricted to use of discrete temporal sequences of simple signs: usually movements of hands, head and parts of the face (e.g. eyes and
mouth) go on in parallel. This may be related to use of non-linear internal languages for encoding perceptual information, including changing visual information about complex structured scenes and
tactile information gained by manual exploration of structured objects. In general the 3-D world of an active 3-D organism is not all usefully linearizable. (J.L.Austin once wrote "Fact is richer
than diction".)
Creation vs Learning:
Evidence from deaf children in Nicaragua [Senghas 2005], and subtle clues in non-deaf children, show that children do not learn languages from existing users. Rather, they have mechanisms, which
expand in power over time as they are used, enabling them to create languages collaboratively. Normally they do this collaborative creation as a relatively powerless minority, so the creation
produces results that look like imitative learning. The deaf children in Nicaragua showed that the process involves language creation rather than mere learning^4.
Although many details remain unspecified, I hope it's clear that many familiar processes of perceiving, learning, intending, planning, plan execution, debugging faulty plans, etc. would be impossible
if humans (and perhaps some other intelligent animals with related capabilities) did not have rich internal languages and language manipulation abilities. (GL competences.) There's no other known way
they could work! (Unless we are to believe in magic, or Wittgenstein's sawdust in the skull.) For more on this see [Sloman 2015]. (There is a myth believed by some philosophers, cognitive scientists
and others that structure-based "old fashioned" AI has failed. But the truth is that NO form of AI has "succeeded" as yet, except for powerful narrowly focused AI applications, and the newly
fashionable versions are not necessarily closer to general success. I find them much shallower.^5)
There could not be any point in developing mechanisms for communicating information, i.e. languages of the familiar type, if senders and recipients were not already information users; otherwise they
would have nothing to communicate, and would have no way to change themselves when something has been understood. Yet there is much resistance to the idea that rich internal languages used for
non-communicative purposes evolved before communicative languages. That may be partly because many people do not understand the computational requirements for many of the competences displayed by
pre-verbal humans and other animals, and partly because they don't understand how the requirement does not lead to an infinite regress of internal languages.
Dennett (1995, and other publications) is an arch-opponent of this idea: his theory of consciousness argues, on the contrary, that consciousness followed evolution of mechanisms allowing languages
previously used for external communication to be used internally for silent self-communication. That seems to imply that Portia spiders needed ancestors that discussed planned routes to capture prey
before they evolved the ability to talk to themselves silently about the process in order to survey, plan, climb and feed unaided?
We still need to learn much more about the nature of internal GLs, the mechanisms required, and their functions in various kinds of intelligent animal. We should not expect them to be much like kinds
of human languages or computer languages we are already familiar with, if various GLs also provide the internal information media for perceptual contents of intelligent and fast moving animals like
crows, squirrels, hunting mammals, spider monkeys, apes, and cetaceans. Taking in information about rapidly changing scenes, needs something different from Portia's internal language for describing a
fixed route. Moreover, languages for encoding information about changing visual contents will need different sorts of expressive powers from languages for human conversation about the weather or the
next meal.^6 Of course, many people have studied and written about various aspects of non-verbal communication and reasoning, including, for example, contributors to [Glasgow, Narayanan,
Chandrasekaran 1995], and others who have presented papers on diagrammatic reasoning, or studied the uses of diagrams by young children. But there are still deep gaps, especially related to
mathematical discoveries.
Many of Piaget's books provide examples, some discussed below. He understood better than most that there were explanatory gaps, but he lacked any understanding of programming or AI and he therefore
sought explanatory models where they could not be found, e.g. in boolean algebras and group theory.
The importance of Euclid for AI
AI sceptics attack achievements of AI, whereas I am attacking the goals of researchers who have not noticed the need to explain some very deep, well known but very poorly understood, human abilities:
the abilities that enabled our ancestors prior to Euclid, without the help of mathematics teachers, to make the sorts of discoveries that eventually stimulated Euclid, Archimedes and other ancient
mathematicians who made profound non-empirical discoveries, leading up to what is arguably the single most important book ever written on this planet: Euclid's Elements.^7 Thousands of people all
around the world are still putting its discoveries to good use every day even if they have never read it.^8
As a mathematics graduate student interacting with philosophers around 1958, my impression was that the philosopher whose claims about mathematics were closest to what I knew about the processes of
doing mathematics, especially geometry, was Immanuel Kant. But his claims about our knowledge of Euclidean geometry seemed to have been contradicted by recent theories of Einstein and empirical
observations by Eddington. Philosophers therefore thought that Kant had been refuted, ignoring the fact that Euclidean geometry without the parallel axiom remains a deep and powerful body of
geometrical and topological knowledge, and provides a basis for constructing three different types of geometry: Euclidean, elliptical and hyperbolic, the last two based on alternatives to the
parallel axiom.^9 We'll also see that it has an extension that makes trisection of an arbitrary angle possible, unlike pure Euclidean geometry. These are real mathematical discoveries about a
type of space, not about logic, and not about observed statistical regularities.
First-hand experience of doing mathematics suggests that Kant was basically right in his claims against David Hume: many mathematical discoveries provide knowledge that is non-analytic (i.e.
synthetic, not proved solely on the basis of logic and definitions), non-empirical (i.e. possibly triggered by experiences, but not based on experiences, nor subject to refutation by experiment or
observation, if properly proved), and necessarily true (i.e. incapable of having counter-examples, not contingent).
This does not imply that human mathematical reasoning is infallible: Lakatos demonstrated that even great mathematicians can make various kinds of mistakes in exploring something new and important.
Once discovered, mistakes sometimes lead to new knowledge. So a Kantian philosopher of mathematics need not claim that mathematicians produce only valid reasoning.^10
Purely philosophical debates on these issues can be hard to resolve. So when Max Clowes^11 introduced me to AI and programming around 1969 I formed the intention of showing how a baby robot could
grow up to be a mathematician in a manner consistent with Kant's claims. But that has not yet been achieved. What sorts of discovery mechanisms would such a robot need?
Around that time, a famous paper by McCarthy and Hayes claimed that logic would suffice as a form of representation (and therefore also reasoning) for intelligent robots. The paper discussed the
representational requirements for intelligent machines, and concluded that "... one representation plays a dominant role and in simpler systems may be the only representation present. This is a
representation by sets of sentences in a suitable formal logical language... with function symbols, description operator, conditional expressions, sets, etc." They discussed several kinds of adequacy
of forms of representation, including metaphysical, epistemological and heuristic adequacy (vaguely echoing distinctions Chomsky had made earlier regarding types of adequacy of linguistic theories).
Despite many changes of detail, a great deal of important AI research has since been based on the use of logic as a GL, now often enhanced with statistical mechanisms.
Nevertheless thinking about mathematical discoveries in geometry and topology and many aspects of everyday intelligence suggested that McCarthy and Hayes were wrong about the sufficiency of logic. I
tried to show why at IJCAI 1971 in [Sloman 1971] and later papers. Their discussion was more sophisticated than I have indicated here. In particular, they identified different sorts of criteria for
evaluating forms of representation, used for thinking or communicating:
A representation is called metaphysically adequate if the world could have that form without contradicting the facts of the aspect of reality that interests us.
A representation is called epistemologically adequate for a person or machine if it can be used practically to express the facts that one actually has about the aspect of the world.
A representation is called heuristically adequate if the reasoning processes actually gone through in solving a problem are expressible in the language.
Ordinary language is obviously adequate to express the facts that people communicate to each other in ordinary language. It is not, for instance, adequate to express what people know about how to
recognize a particular face.
They concluded that a form of representation based on logic would be heuristically adequate for intelligent machines observing, reasoning about and acting in human-like environments. But this does
not provide an explanation of what adequacy of reasoning is. For example, one criterion might be that the reasoning should be incapable of deriving false conclusions from true premisses.
At that time I was interested in understanding the nature of mathematical knowledge (as discussed in [Kant 1781]). I thought it might be possible to test philosophical theories about mathematical
reasoning by demonstrating how a "baby robot" might begin to make mathematical discoveries (in geometry and arithmetic) as Euclid and his precursors had. But I did not think logic-based forms of
representation would be heuristically adequate because of the essential role played by diagrams in the work of mathematicians like Euclid and Archimedes, even if some modern mathematicians felt such
diagrams should be replaced by formal proofs in axiomatic systems - apparently failing to realise that that changes the investigation to a different branch of mathematics. The same can be said about
Frege's attempts to embed arithmetic in logic.
[Sloman 1971] offered alternatives to logical forms of representation, especially (among others) "analogical" representations that were not based on the kind of function/argument structure used by
logical representations. Despite an explicit disclaimer in the paper it is often mis-reported as claiming that analogical representations are isomorphic with what they represent: which may be true in
special cases, but is clearly false in general, since a 2-D picture cannot be isomorphic with the 3-D scene it represents, one of several reasons why AI vision research is so difficult.
A revised, extended, notion of validity of reasoning, was shown to include changes of pictorial structure that correspond to possible changes in the entities or scenes depicted, but this did not
explain how to implement a human-like diagrammatic reasoner in geometry or topology. 45 years later there still seems to be no AI system that is capable of discovering and understanding deep
diagrammatic proofs of the sorts presented by Euclid, Archimedes and others. This is associated with inability to act intelligently in a complex and changing environment that poses novel problems
involving spatial structures.
A subtle challenge is provided by the discovery known to Archimedes that there is a simple and natural way of extending Euclidean geometry (the neusis construction) which makes it easy to trisect an
arbitrary angle, as demonstrated here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html^12
I don't think much is known about that sort of discovery process and as far as I know no current AI reasoning system could make such a discovery. It is definitely not connected with statistical
learning: that would not provide insight into mathematical necessity or impossibility. It is also not a case of derivation from axioms: it showed that Euclid's axioms could be extended. Mary Pardoe,
a former student, discovered a related but simpler extension to Euclid, allowing the triangle sum theorem to be proved without using the parallel axiom.
I don't know of anyone in AI who has tried to implement abilities to discover Euclidean geometry, including topological reasoning, or its various extensions mentioned here, in an AI system or robot
with spatial reasoning abilities. I am still trying to understand why it is so difficult. (But not impossible, I hope.)
It's not only competences of adult human mathematicians that have not yet been replicated. Many intelligent animals, such as squirrels, nest building birds, elephants and even octopuses have
abilities to perform spatial manipulation of objects in their environment (or their own body parts) and apparently understand what they are doing. Betty, a New Caledonian crow, made headlines in 2002
when she was observed (in Oxford) making a hook from a straight piece of wire in order to extract a bucket of food from a vertical glass tube [Weir, Chappell, Kacelnik 2002]. The online videos
demonstrate something not mentioned in the original published report, namely that Betty was able to make hooks in several different ways, all of which worked immediately without any visible signs of
trial and error. She clearly understood what was possible, despite not having lived in an environment containing pieces of wire or any similar material (twigs either break if bent or tend to
straighten when released). It's hard to believe that such a creature could be using logic, as recommended by McCarthy and Hayes. But what are the alternatives? Perhaps a better developed theory of
GLs will provide the answer and demonstrate it in a running system.
The McCarthy and Hayes paper partly echoed Frege, who had argued in 1884 that arithmetical knowledge could be completely based on logic, but he denied that geometry could be (despite Hilbert's
axiomatization of Euclidean geometry). [Whitehead Russell 1910-1913] had also attempted to show how the whole of arithmetic could be derived from logic, though Russell oscillated in his views about
the philosophical significance of what had been demonstrated.
Frege was right about geometry: what Hilbert axiomatised was a combination of logic and arithmetic that demonstrated that arithmetic and algebra contained a model of Euclidean geometry based on
arithmetical analogues of lines, circles, and operations on them, discovered by Descartes. But doing that did not imply that the original discoveries were arithmetical discoveries rather than
discoveries about spatial structures, relationships and transformations. (Many mathematical domains have models in other domains.)
When the ancient geometricians made their discoveries, they were not reasoning about relationships between logical symbols in a formal system or about numbers or equations. This implies that in order
to build robots able to repeat those discoveries it will not suffice merely to give them abilities to derive logical consequences from axioms expressed in a logical notation, such as predicate
calculus or the extended version discussed by McCarthy and Hayes.
Instead we'll need to understand what humans do when they think about shapes and the ways they can be constructed, extended, compared, etc. This requires more than getting machines to answer the same
questions in laboratory experiments, or pass the same tests in mathematical examinations. We need to develop good theories about what human mathematicians did when they made the original discoveries,
without the help of mathematics teachers, and without the kind of drill and practice now often found in mathematical classrooms. Those theories should be sufficiently rich and precise to enable us to
produce working models that demonstrate the explanatory power of the theories.
As far as I know there is still nothing in AI that comes close to enabling robots to replicate the ancient discoveries in geometry and topology, nor any formalism that provides the capabilities GLs
would need, in order to explain how products of evolution perceive the environment, solve problems, etc. Many researchers in AI, psychology and neuroscience, now think the core requirement is a shift
from logical reasoning to statistical/probabilistic reasoning. I suspect that has only limited uses and a deeper advance can come from extending techniques for reasoning about possibilities,
impossibilities and changing topological relationships and the use of partial orderings (of distance, size, orientation, curvature, slope, containment, etc.) as suggested in
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/changing-affordances.html I'll return to this topic below.
What about arithmetic?
The arguments against any attempt to redefine geometry in terms of what follows from Hilbert's axioms can be generalised to argue against Frege's attempt to redefine arithmetic in terms of what
follows from axioms and rules for logical reasoning. In both cases a previously discovered and partially explored mathematical domain was shown to be modelled using logic. But modelling is one thing:
replicating another.
The arithmetical discoveries made by Euclid and others long before the discovery of modern logic were more like discoveries in geometry than like proofs in an axiomatic system using only logical
inferences. However, arithmetical knowledge is not concerned only with spatial structures and processes. It involves general features of groups or sets of entities, and operations on them. For
example, acquiring the concept of the number six requires having the ability to relate different groups of objects in terms of one-to-one correspondences (bijections). So the basic idea of arithmetic
is that two collections of entities may or may not have a 1-1 relationship. If they do we could call them "equinumeric". The following groups are equinumeric in that sense (treating different
occurrences of the same character as different items).
[U V W X Y Z] [P P P P P P] [W Y Y G Q P]
If we count types of character rather than instances, then the numbers are different. The first group contains six distinct types, the second group only one type, and the third group five types. For now,
let's focus on instances, not types.
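The notion of equinumerosity can be illustrated without any counting or number names, just as in the fisherman example: pair items off one at a time and see whether both collections run out together. A minimal sketch (the function name `equinumeric` is my own):

```python
# Illustrative sketch: decide whether two collections are "equinumeric"
# by pairing items off one-to-one, never counting or naming a number.

def equinumeric(xs, ys):
    """Repeatedly remove one item from each collection; they are in
    1-1 correspondence iff both become empty at the same time."""
    xs, ys = list(xs), list(ys)
    while xs and ys:
        xs.pop()
        ys.pop()
    return not xs and not ys

print(equinumeric("UVWXYZ", "PPPPPP"))   # True: six instances each
print(equinumeric("UVWXYZ", "WYYGQP"))   # True
print(equinumeric("UVW", "PPPPPP"))      # False

# Counting types rather than instances: collapse repeats first.
print(equinumeric(set("UVWXYZ"), set("PPPPPP")))  # False: six types vs one
```

The last line shows the instances/types distinction from the text: collapsing repeated characters changes the answer for the second group but not the first.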
The relation of equinumerosity has many practical uses, and one does not need to know anything about names for numbers, or even to have the concept of a number as an entity that can be referred to,
added to other numbers etc. in order to make use of equinumerosity. For example, if someone goes fishing to feed a family and each fish provides a meal for one person, the fisherman could take the
whole family, and as each fish is caught give it to an empty-handed member of the family, until everyone has a fish. Our intelligent ancestors might have discovered ways of streamlining that
cumbersome process: e.g. instead of bringing each fish-eater to the river, ask each one to pick up a bowl and place it on the fisherman's bowl. Then the bowls could be taken instead of the people,
and the fisherman could give each bowl a fish, until there are no more empty bowls, then carry the laden bowls back.
What sort of brain mechanism would enable the first person who thought of doing that to realise, by thinking about it, that it must produce the same end result as taking all the people to the river?
A non-mathematical individual would need to be convinced by repetition that the likelihood of success is high. A mathematical mind would see the necessary truth. How?
Of course, we also find it obvious that there's no need to take a collection of bowls or other physical objects to represent individual fish-eaters. We could have a number of blocks with marks on
them, a block with one mark, a block with two marks, etc., and any one of a number of procedures for matching people to marks could be used to select a block with the right number of marks to be used
for matching against fish.
Intelligent fishermen could understand that a collection of fish matching the marks would also match the people. How? Many people now find that obvious, but realising that one-one correspondence is a
transitive relation is a major intellectual achievement, crucial to abilities to use numbers. We also know that it is not necessary to carry around a material numerosity indicator: we can memorise a
sequence of names and use each name as a label for the numerosity of the sub-sequence up to that name, as demonstrated in [Sloman 1978, Chap. 8]. A human-like intelligent machine would also have to be
able to discover such strategies, and understand why they work. This is totally different from achievements of systems that do pattern recognition. Perhaps studying intermediate competences in other
animals will help us understand what evolution had to do to produce human mathematicians. (This is deeper than learning to assign number names.)
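The memorised-name-sequence strategy can be sketched as follows (an illustrative reconstruction, not Sloman's implementation): match items one-to-one against a fixed sequence of names; the last name reached labels the collection's numerosity.

```python
# Illustrative sketch: counting as 1-1 matching against a memorised
# sequence of names. Each name labels the numerosity of the sub-sequence
# of names up to and including it. Assumes the sequence is long enough.

names = ["one", "two", "three", "four", "five", "six"]

def numerosity_label(items):
    """Pair items with names in order; the last name used is the label."""
    label = None
    for item, name in zip(items, names):
        label = name
    return label

print(numerosity_label(["U", "V", "W", "X", "Y", "Z"]))  # "six"
print(numerosity_label(["a", "b", "c"]))                 # "three"
```

The point of the sketch is that nothing here treats a number as an entity in its own right: the names are merely re-usable stand-ins for the bowls or marks in the fisherman story.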
Piaget's work showed that five- and six-year-old children have trouble understanding consequences of transforming 1-1 correlations, e.g. by stretching one of two matched rows of objects [Piaget 1952].
When they do grasp the transitivity, have they found a way to derive it from some set of logical axioms using explicit definitions? Or is there another way of grasping that if two collections A and
B are in a 1-1 correspondence and B and C are, then A and C must also be, even if C is stretched out more in space?
I suspect that for most people this is more like an obvious topological theorem about patterns of connectivity in a graph rather than something proved by logic.
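The transitivity in question can be made concrete by composing two explicit pairings: the composite is itself a pairing, with no counting involved. A minimal sketch, with names of my own choosing, echoing the people/bowls/fish example:

```python
# Illustrative sketch: a 1-1 correspondence represented as a dict pairing,
# and the composition of two such pairings. That the composite is again a
# 1-1 pairing is the transitivity property discussed above.

def compose(ab, bc):
    """Given pairings A->B and B->C, return the pairing A->C."""
    return {a: bc[b] for a, b in ab.items()}

def is_bijection(pairing, codomain):
    """A pairing is 1-1 onto the codomain iff its values hit each
    codomain element exactly once."""
    return sorted(pairing.values()) == sorted(codomain)

people = ["Ann", "Bo", "Cy"]
bowls  = ["b1", "b2", "b3"]
fish   = ["f1", "f2", "f3"]

people_to_bowls = dict(zip(people, bowls))
bowls_to_fish   = dict(zip(bowls, fish))
people_to_fish  = compose(people_to_bowls, bowls_to_fish)

print(is_bijection(people_to_fish, fish))   # True: A matches C
```

Nothing in the composition step depends on how far apart the items of C are spread in space, which is what the stretched-row version of Piaget's task varies.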
But why is it obvious to adults and not to 5 year olds? Anyone who thinks it is merely a probabilistic generalisation that has to be tested in a large number of cases has not understood the problem,
or lacks the relevant mechanisms in normal human brains. Does any neuroscientist understand what brain mechanisms support discovery of such mathematical properties, or why they seem not to have
developed before children are five or six years old (unless Piaget asked his subjects the wrong questions)?^13
It would be possible to use logic to encode the transitivity theorem in a usable form in the mind of a robot, but it's not clear what would be required to mirror the developmental processes in a
child, or our adult ancestors who first discovered these properties of 1-1 correspondences. They may have used a more general and powerful form of relational reasoning of which this theorem is a
special case. The answer is not statistical (e.g. neural-net based) learning. Intelligent human-like machines would have to discover deep non-statistical structures of the sorts that Euclid and his
precursors discovered.
The machines might not know what they are doing, like young children who make and use mathematical or grammatical discoveries. But they should have the ability to become self-reflective and later
make philosophical and mathematical discoveries. I suspect human mathematical understanding requires at least four layers of meta-cognition, each adding new capabilities, but will not defend that
here. Perhaps robots with such abilities in a future century will discover how evolution produced brains with these capabilities [Sloman 2013].
Close observation of human toddlers shows that before they can talk they are often able to reason about consequences of spatial processes, including a 17.5-month pre-verbal child apparently testing a
sophisticated hypothesis about 3-D topology, namely: if a pencil can be pushed point-first through a hole in paper from one side of the sheet then there must be a continuous 3-D trajectory by which
it can be made to go point first through the same hole from the other side of the sheet: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html#pencil. (I am not claiming that
my words accurately describe her thoughts: but clearly her intention has that sort of complex structure even though she was incapable of saying any such thing in a spoken language. What sort of GL
was she using? How could we implement that in a baby robot?)
Likewise, one does not need to be a professional mathematician to understand why when putting a sweater onto a child one should not start by inserting a hand into a sleeve, even if that is the right
sleeve for that arm. Records showing 100% failure in such attempts do not establish impossibility, since they provide no guarantee that the next experiment will also fail. Understanding impossibility
requires non-statistical reasoning.
Generalising Gibson
James Gibson proposed that the main function of perception is not to provide information about what occupies various portions of 3-D space surrounding the perceiver, as most AI researchers and
psychologists had previously assumed (e.g. [Clowes 1971; Marr 1982]), but rather to provide information about what the perceiver can and cannot do in the environment: i.e. information about positive
and negative affordances - types of possibility.
Accordingly, many AI/Robotic researchers now design machines that learn to perform tasks like lifting a cup or catching a ball, by making many attempts and inferring probabilities of success of
various actions in various circumstances.
But that kind of statistics-based knowledge cannot provide mathematical understanding of what is impossible, or what the necessary consequences of certain spatial configurations and processes are. It
cannot provide understanding of the kind of reasoning capabilities that led up to the great discoveries in geometry (and topology) (e.g. by Euclid and Archimedes) long before the development of
modern logic and the axiomatic method. I suspect these mathematical abilities evolved out of abilities to perceive a variety of positive and negative affordances, abilities that are shared with other
organisms (e.g. squirrels, crows, elephants, orangutans) which in humans are supplemented with several layers of metacognition (not all present at birth).
Spelling this out will require a theory of modal semantics that is appropriate to relatively simple concepts of possibility, impossibility and necessary connection, such as a child or intelligent
animal may use (and thereby prevent time-wasting failed attempts).
What sort of modal semantics
I don't think any of the forms of "possible world" semantics are appropriate to the reasoning of a child or animal that is in any case incapable of thinking about the world, let alone sets of
alternative possible worlds. Instead I think the kind of modal semantics will have to be based on a grasp of ways in which properties and relationships in a small portion of the world can change, and
which combinations are possible or impossible. E.g. if two solid rings are linked it is impossible for them to become unlinked through any continuous form of motion or deformation - despite what seems
to be happening on a clever magician's stage. This form of modal semantics, concerned with possible rearrangements of a small portion of the world rather than possible worlds, was proposed in
[Sloman 1962]. Barbara Vetter seems to share this viewpoint [Vetter 2013]. Another type of example is in the figure: What sort of visual mechanism is required to tell the difference between the
possible and the impossible configurations? How did such mechanisms evolve?
Which animals have them? How do they develop in humans? Can we easily give them to robots? How can a robot detect that what it sees depicted is impossible?
Figure 1: Possible and impossible configurations of blocks.
(Swedish artist Oscar Reutersvärd drew the impossible configuration in 1934)
A child can in principle discover prime numbers by attempting to arrange different collections of blocks into NxM regular arrays. It works for twelve blocks but adding or removing one makes the task
impossible. I don't know if any child ever has discovered primeness in that way, but it could happen. Which robot will be the first to do that? (Pat Hayes once informed me that a frustrated
conference receptionist trying to tidy uncollected name cards made that discovery without recognizing its significance. She thought her failure on occasions to make a rectangle was due to her
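The block-arranging route to primeness can be simulated directly. The sketch below is my illustration, not anything from the paper: it lists the rectangular N x M arrangements (with at least two rows and two columns) that a given number of blocks can form, and the numbers admitting none are exactly the primes (and 1).

```python
def rectangle_shapes(n):
    """All r x c arrangements of n blocks with at least 2 rows and 2 columns."""
    return [(r, n // r) for r in range(2, n) if n % r == 0 and n // r >= 2]

# Twelve blocks form several rectangles; eleven and thirteen form none: prime.
assert rectangle_shapes(12) == [(2, 6), (3, 4), (4, 3), (6, 2)]
assert rectangle_shapes(11) == []
assert rectangle_shapes(13) == []
```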
The link to Turing
What might Alan Turing have worked on if he had not died two years after publishing his 1952 paper on the Chemical basis of morphogenesis? Perhaps the Meta-Morphogenesis (M-M) project: an attempt to
identify significant transitions in types of information-processing capabilities produced by evolution, and products of evolution, between the earliest (proto-)life forms and current organisms,
including changes that modify evolutionary mechanisms.
Natural selection is more a blind mathematician than a blind watchmaker: it discovers and uses "implicit theorems" about possible uses of physics, chemistry, topology, geometry, varieties of feedback
control, symmetry, parametric polymorphism, and increasingly powerful cognitive and meta-cognitive mechanisms. Its proofs are implicit in evolutionary and developmental trajectories. So mathematics
is not a human creation, as many believe, and the early forms of representation and reasoning are not necessarily similar to recently invented logical, algebraic, or probabilistic forms.
The "blind mathematician" later produced at least one species with meta-cognitive mechanisms that allow individuals who have previously made "blind" mathematical discoveries (e.g. what I've called
"toddler theorems") to start noticing, discussing, disputing and building a theory unifying the discoveries.
Later still, meta-meta-(etc?)cognitive mechanisms allowed products of meta-cognition to be challenged, defended, organised, and communicated, eventually leading to collaborative advances, and
documented discoveries and proofs, e.g. Euclid's Elements (sadly no longer a standard part of the education of our brightest learners). Many forms of applied mathematics grew out of the results.
Unfortunately, most of the pre-history is still unknown and may have to be based on intelligent guesswork and cross-species comparisons. Biologically inspired future AI research will provide clues as
to currently unknown intermediate forms of biological intelligence.
This paper owes much to discussions with Jackie Chappell about animal intelligence, discussions with Aviv Keren about mathematical cognition, and discussions about life, the universe, and everything
with Birmingham colleagues and Alison Sloman.
Chappell, J. Sloman, A. (2007). Natural and artificial meta-configured altricial information-processing systems. International Journal of Unconventional Computing, 3 (3), 211-239.
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Clowes, M. (1971). On seeing things. Artificial Intelligence, 2 (1), 79-116.
Dennett, D. (1995). Darwin's dangerous idea: Evolution and the meanings of life. London and New York: Penguin Press.
Frege, G. (1950). The Foundations of Arithmetic: a logico-mathematical enquiry into the concept of number. Oxford: B.H. Blackwell. ((Tr. J.L. Austin. Original 1884))
Gibson, J J. (1979). The ecological approach to visual perception. Boston, MA: Houghton Mifflin.
Glasgow, J., Narayanan, H., & Chandrasekaran, B. (Eds.). (1995). Diagrammatic reasoning: Computational and cognitive perspectives. Cambridge, MA: MIT Press.
Jablonka, E. Lamb, M J. (2005). Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life. Cambridge MA: MIT Press.
Kant, I. (1781). Critique of pure reason. London: Macmillan. (Translated (1929) by Norman Kemp Smith)
Lakatos, I. (1976). Proofs and Refutations. Cambridge, UK: Cambridge University Press.
Marr, D. (1982). Vision. San Francisco: W.H.Freeman.
McCarthy, J. Hayes, P. (1969). Some philosophical problems from the standpoint of AI. In B. Meltzer & D. Michie (Eds.), Machine Intelligence 4 (pp. 463-502). Edinburgh, Scotland: Edinburgh
University Press.
Piaget, J. (1952). The Child's Conception of Number. London: Routledge & Kegan Paul.
Senghas, A. (2005). Language Emergence: Clues from a New Bedouin Sign Language. Current Biology, 15 (12), R463-R465.
Sloman, A. (1962). Knowing and Understanding: Relations between meaning and truth, meaning and necessary truth, meaning and synthetic necessary truth (DPhil Thesis). http://www.cs.bham.ac.uk/
Sloman, A. (1971). Interactions between philosophy and AI: The role of intuition and non-logical reasoning in intelligence. In Proc 2nd IJCAI (pp. 209-226). London: William Kaufmann.
Sloman, A. (1978a). The Computer Revolution in Philosophy. Hassocks, Sussex: Harvester Press (and Humanities Press). Revised 2015. http://www.cs.bham.ac.uk/research/cogaff/62-80.html#
Sloman, A. (1978b). What About Their Internal Languages? Commentary on three articles in BBS Journal 1978, 1 (4). BBS , 1 (4), 515.
Sloman, A. (1979). The primacy of non-communicative language. In M. MacCafferty & K. Gray (Eds.), The analysis of Meaning: Informatics 5 Proceedings ASLIB/BCS Conference, Oxford, March 1979 (pp.
1-15). London: Aslib.
Sloman, A. (2005, September). Discussion note on the polyflap domain (to be explored by an `altricial' robot) (Research Note No. COSY-DP-0504). Birmingham, UK: School of Computer
Science, University of Birmingham. Available from
Sloman, A. (2013). Meta-Morphogenesis and Toddler Theorems: Case Studies. School of Computer Science, The University of Birmingham. (Online discussion note) Available from http://goo.gl/QgZU1g
Sloman, A. (2015). What are the functions of vision? How did human language evolve? (Online research presentation) http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk111
Sloman, A. Chappell, J. (2007). Computational Cognitive Epigenetics (Commentary on (Jablonka & Lamb, 2005)). BBS , 30 (4), 375-6.
Tarsitano, M. (2006, December). Route selection by a jumping spider (Portia labiata) during the locomotory phase of a detour. Animal Behaviour , 72, Issue 6 , 1437-1442.
Vetter, B. (2013, Aug). `Can' without possible worlds: semantics for anti-Humeans. Philosophers' Imprint, 13 (16).
Weir, A A S., Chappell, J. Kacelnik, A. (2002). Shaping of hooks in New Caledonian crows. Science, 297 (9 August 2002), 981.
Whitehead, A N. Russell, B. (1910-1913). Principia Mathematica Vols I - III. Cambridge: Cambridge University Press.
^1This is a snapshot of part of the Turing-inspired Meta-Morphogenesis project.
^2I did not notice this "Polyflap stability theorem" until I tried to think of an example. I did not need to do any experiments and collect statistics to recognize its truth (given familiar facts
about gravity). Do you?
^3 http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
^4This video gives some details: https://www.youtube.com/watch?v=pjtioIFuNf8
^5 http://www.cs.bham.ac.uk/research/projects/cogaff/misc/chewing-test.html
^6http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vision/plants presents a botanical challenge for vision researchers.
^7There seems to be uncertainty about dates and who contributed what. I'll treat Euclid as a figurehead for a tradition that includes many others, especially Thales, Pythagoras and Archimedes -
perhaps the greatest of them all, and a mathematical precursor of Leibniz and Newton. More names are listed here: https://en.wikipedia.org/wiki/Chronology_of_ancient_Greek_mathematicians I don't know
much about mathematicians on other continents at that time or earlier. I'll take Euclid to stand for all of them, because of the book that bears his name.
^8Moreover, it does not propagate misleading falsehoods, condone oppression of women or non-believers, or promote dreadful mind-binding in children.
^10My 1962 DPhil thesis [Sloman 1962] presented Kant's ideas, before I had heard about AI. http://www.cs.bham.ac.uk/research/projects/cogaff/thesis/new
^12I was unaware of this until I found the Wikipedia article in 2015:
^13Much empirical research on number competences grossly over simplifies what needs to be explained, omitting the role of reasoning about 1-1 correspondences.
^14Richard Gregory demonstrated that a 3-D structure can be built that looks exactly like an impossible object, but only from a particular viewpoint, or line of sight.
File translated from TeX by TtH, version 3.85. On 25 Apr 2016, 00:15.
Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham
TABE Application Level M : Trivia Quiz
Questions and Answers
• 1.
Luis has 19 soft drink cans. His coolers each have space for 6 cans. How many coolers can he fill completely?
Correct Answer
A. 3
Luis has 19 soft drink cans and each cooler can hold 6 cans. To find out how many coolers Luis can fill completely, we divide the total number of cans by the capacity of each cooler. In this
case, 19 divided by 6 equals 3 with a remainder of 1. Since we are looking for the number of coolers that can be filled completely, the answer is 3.
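The "full containers plus remainder" computation is just integer division; a quick check in Python (an illustration, not part of the quiz):

```python
cans, per_cooler = 19, 6
# divmod returns (quotient, remainder) in one step.
full_coolers, left_over = divmod(cans, per_cooler)
assert (full_coolers, left_over) == (3, 1)  # 3 full coolers, 1 can left over
```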
• 2.
Which of these numbers is a common factor of 18 and 24?
Correct Answer
A. 3
The number 3 is a common factor of 18 and 24 because it can evenly divide both numbers without leaving a remainder. 3 is a factor of 18 because 18 divided by 3 equals 6, and it is a factor of 24
because 24 divided by 3 equals 8. Therefore, 3 is the correct answer.
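The common factors of two numbers are exactly the divisors of their greatest common divisor; a short Python check (illustrative only):

```python
import math

def common_factors(a, b):
    """Common factors of a and b: the divisors of gcd(a, b)."""
    g = math.gcd(a, b)
    return [d for d in range(1, g + 1) if g % d == 0]

# gcd(18, 24) is 6, so the common factors are 1, 2, 3, and 6 - including 3.
assert common_factors(18, 24) == [1, 2, 3, 6]
```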
• 3.
Which number is a multiple of 7?
Correct Answer
B. 28
28 is a multiple of 7 because it can be divided evenly by 7. When 28 is divided by 7, the result is 4, with no remainder. This means that 28 is divisible by 7, making it a multiple of 7.
• 4.
Which number goes in the blanks to make both numbers sentences true? 11 - __ = 7 7 + __ =11
Correct Answer
A. 4
In the first number sentence, 11 minus 4 equals 7. In the second number sentence, 7 plus 4 equals 11. Therefore, the number that goes in the blanks to make both number sentences true is 4.
• 5.
Lee has finished reading 4 out of 10 lessons from his assignment. Which fractional part of the lessons has he completed?
Correct Answer
D. 2/5
Lee has finished reading 4 out of 10 lessons from his assignment. To find the fractional part of the lessons he has completed, we need to divide the number of lessons completed (4) by the total
number of lessons (10). This can be written as 4/10, which simplifies to 2/5. Therefore, Lee has completed 2/5 or two-fifths of the lessons.
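Reducing a fraction to lowest terms can be checked with Python's fractions module (a quick sketch, not part of the quiz):

```python
from fractions import Fraction

# Fraction automatically reduces to lowest terms.
completed = Fraction(4, 10)
assert completed == Fraction(2, 5)
```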
• 6.
Which number goes in the box on the number line?
Correct Answer
B. 91
The number 91 goes in the box on the number line because it is the number that falls between 89 and 93.
• 7.
Marty needs to pack 32 glasses. Each box holds 6 glasses. How many boxes will be completely full?
Correct Answer
B. 5
To find the number of boxes that will be completely full, we divide the total number of glasses by the capacity of each box. In this case, 32 glasses divided by 6 glasses per box equals
5 with a remainder of 2. Therefore, 5 boxes will be completely full.
• 8.
The cost of carrots per pound is
Correct Answer
C. 59 cents
The correct answer is 59 cents. This is the only option that is in cents, while the other options are in dollars.
• 9.
The cost of tomatoes per pound is
Correct Answer
D. 72 cents
The given answer, 72 cents, is the cost of tomatoes per pound.
• 10.
Lynn had 12 stamps, but she used 3 of them to mail letters. Which expression shows how many stamps she still has?
Correct Answer
C. 12 - 3
Lynn started with 12 stamps and used 3 of them to mail letters. To find out how many stamps she still has, we need to subtract the number of stamps she used from the total number of stamps she
had. Therefore, the expression 12 - 3 represents the number of stamps Lynn still has.
• 11.
LaTina is saving $60 a month for a vacation. She needs to save $540 dollars. How long will it take her to save enough for her vacation?
Correct Answer
B. 9 months
LaTina is saving $60 a month for her vacation and she needs to save $540. To find out how long it will take her to save enough, we can divide the total amount needed by the amount she saves per
month. $540 divided by $60 equals 9. Therefore, it will take her 9 months to save enough for her vacation.
• 12.
Anna saves $75 every month. How much will she save in 16 months?
Correct Answer
D. $1,200
Anna saves $75 every month. To find out how much she will save in 16 months, we can multiply her monthly savings by the number of months. So, $75 multiplied by 16 equals $1,200. Therefore, Anna
will save $1,200 in 16 months.
• 13.
Ron wrote a check $37.80. Before he wrote the check, his balance was $137.75. What is his new balance?
Correct Answer
C. 99.95
After writing the check for $37.80, Ron's balance will decrease by that amount. Therefore, his new balance will be $137.75 - $37.80 = $99.95.
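Exact dollars-and-cents arithmetic like this is a textbook use for decimal (rather than binary floating-point) arithmetic; a quick illustrative check in Python:

```python
from decimal import Decimal

# Construct from strings so the cents are represented exactly.
balance = Decimal("137.75") - Decimal("37.80")
assert balance == Decimal("99.95")
# Binary floats can introduce tiny rounding errors in sums like this;
# Decimal keeps money amounts exact.
```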
• 14.
Ella spends $50 a month for her cell phone. How much does she pay for her cell phone in a year?
Correct Answer
B. $600
Ella spends $50 a month for her cell phone. To find out how much she pays for her cell phone in a year, we need to multiply the monthly cost by the number of months in a year. So, $50 multiplied
by 12 (months) equals $600.
• 15.
Which of these decimals when rounded to the nearest whole number is 54?
Correct Answer
C. 53.85
When rounded to the nearest whole number, the decimal 53.85 becomes 54. Therefore, 53.85 is the correct answer.
• 16.
A dozen pens costs $5.99. About how much does one pen cost?
Correct Answer
A. $0.50
The total cost of a dozen pens is $5.99. To find the cost of one pen, we divide the total cost by the number of pens in a dozen, which is 12. $5.99 divided by 12 is about $0.50.
Therefore, one pen costs about $0.50.
• 17.
Which of these is the best estimate for the width of a piece of paper?
Correct Answer
B. 25 centimeters
A standard sheet of paper (8.5 x 11 inches) is 21.59 centimeters wide. Among the given options, 25 centimeters is the closest estimate to the
actual width of a piece of paper.
• 18.
Jason's average hits per game for the last three months are: 3.6, 4.5, and 6.2. What are his averages, rounded to the nearest whole number?
Correct Answer
C. 4, 5, and 6
Jason's average hits per game for the last three months are 3.6, 4.5, and 6.2. Rounding each to the nearest whole number gives 4 (3.6 rounds up), 5 (4.5 rounds up), and 6 (6.2 rounds down).
Therefore, Jason's averages, rounded to the nearest whole number, are 4, 5, and 6.
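One subtlety worth knowing if you check this in code: the grade-school "round halves up" rule used here differs from Python's built-in round(), which rounds halves to the nearest even number. A sketch of the intended rule (illustrative only):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x):
    """Round to the nearest whole number, with halves going up (school rule)."""
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

assert [round_half_up(v) for v in (3.6, 4.5, 6.2)] == [4, 5, 6]
assert round(4.5) == 4  # built-in round() uses round-half-to-even
```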
• 19.
Anya spent $32 of the $63 she received for her birthday. She spent about
Correct Answer
C. 1/2 of the amount
Anya spent $32 out of the $63 she received for her birthday. To find out what fraction of the amount she spent, we can divide the amount she spent ($32) by the total amount she received ($63).
This gives us 32/63, which is approximately 0.51, or just over half. The closest option
is 1/2 of the amount.
• 20.
Which of these measurements is about the same as 100 inches? (Hint: 1 in = 2.54 cm)
Correct Answer
B. 250 centimeters
100 inches is about the same as 250 centimeters. Using the given conversion factor of 1 inch = 2.54 cm, 100 inches equals 100 * 2.54 = 254 centimeters. Among the options, 250 centimeters is the
closest to 254 centimeters, so it is about the same as 100 inches.
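Unit conversions like this are easy to check mechanically; a small Python sketch (not part of the quiz):

```python
CM_PER_INCH = 2.54  # exact definition of the inch in centimeters

def inches_to_cm(inches):
    return inches * CM_PER_INCH

assert round(inches_to_cm(100)) == 254  # closest listed option: 250 centimeters
```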
• 21.
What temperature does the thermometer show?
Correct Answer
A. 24 degrees
The thermometer shows a temperature of 24 degrees.
• 22.
What temperature does the thermometer show?
Correct Answer
C. 88 degrees
The thermometer shows a temperature of 88 degrees.
• 23.
Which of these measures is closest to length of your thumb?
Correct Answer
C. 7 centimeters
The length of the thumb is usually around 7 centimeters. This is the closest measurement option given in the choices. A centimeter is a metric unit of length, and it is smaller than an inch.
Therefore, 7 centimeters is a more accurate approximation of the length of a thumb compared to the other options provided.
• 24.
An airplane is scheduled to leave one city at 10:35 a.m. and arrive at another city at 12:15p.m. How long is the scheduled flight?
Correct Answer
B. 1 hour and 40 minutes
The scheduled flight is 1 hour and 40 minutes because the time difference between the departure and arrival is 1 hour and 40 minutes.
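Clock arithmetic like this is where it is easy to slip (the hour boundary at 11:00); a quick check with Python's datetime (illustrative, with an arbitrary date):

```python
from datetime import datetime

# Same-day flight: subtract the departure time from the arrival time.
depart = datetime(2024, 1, 1, 10, 35)  # the date itself is arbitrary
arrive = datetime(2024, 1, 1, 12, 15)
minutes = (arrive - depart).seconds // 60
assert divmod(minutes, 60) == (1, 40)  # 1 hour and 40 minutes
```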
• 25.
Rick decided to run in a marathon, which is slightly more than 26 miles long. About how many kilometers will he run? (Hint: 1 mi = 1.6 km)
Correct Answer
C. 41 kilometers
Rick is running in a marathon, which is slightly more than 26 miles long. Using the conversion rate of 1 mile = 1.6 kilometers, he will run slightly more than 26 x 1.6 = 41.6 kilometers. Among the
options, 41 kilometers is the closest, so Rick will run
approximately 41 kilometers in the marathon.
• 26.
Which of the following figures are similar to the figure in the box?
Correct Answer
D. Figures 1 and 4
Figures 1 and 4 are similar because they both have the same shape and orientation. They both have three sides and are oriented in the same direction. Figures 2 and 3 do not have the same shape or
orientation as the figure in the box. Therefore, the correct answer is figures 1 and 4.
• 27.
Which of these solid figures is not named correctly?
Correct Answer
D. Cone
The question asks which solid figure is not named correctly. A cone is a three-dimensional figure with a circular base and a pointed apex; the figure labeled "cone" does not have that shape, so its
label is incorrect. Therefore, the answer is cone.
• 28.
Which of these figures shows a line of symmetry?
Correct Answer
A. 1
Figure 1 shows a line of symmetry because it can be divided into two equal halves that are mirror images of each other. The line of symmetry in this figure is vertical, dividing the figure into
two identical halves. In contrast, figures 2, 3, and 4 do not have a line of symmetry as they cannot be divided into two equal halves that are mirror images of each other.
• 29.
Which line creates a line of symmetry in this triangle?
Correct Answer
D. Line D
Line D creates a line of symmetry in the triangle. A line of symmetry is a line that divides a shape into two identical halves. Line D passes through the midpoint of the base of the triangle and
is perpendicular to the base. When the triangle is folded along line D, the two resulting halves will perfectly overlap each other. Therefore, line D is the correct answer for creating a line of
symmetry in this triangle.
• 30.
Which of these measurements is equal to the perimeter of the box around the circle?
Correct Answer
C. 4 feet
The box around the circle is a square whose side length equals the circle's diameter. The perimeter of a square is 4 times its side length, so if the circle's diameter is 1 foot, the perimeter of
the box is 4 x 1 = 4 feet.
• 31.
Which two parts of the circle are congruent?
Correct Answer
D. Parts C and D
The parts of a circle that are congruent are those that have equal measurements or dimensions. In this case, parts C and D are congruent because they have the same measurement or dimension.
• 32.
According to the graph, how many slices of wheat toast can Jim eat and only consume 270 calories?
Correct Answer
B. 3
Based on the graph, the x-axis represents the number of slices of wheat toast and the y-axis represents the number of calories consumed. By locating the point on the graph where the y-axis
intersects with the value of 270 calories, we can determine the corresponding number of slices of wheat toast on the x-axis. In this case, the point intersects at 3 slices of wheat toast,
indicating that Jim can eat 3 slices and consume only 270 calories.
• 33.
Suppose Jim had wanted a dessert with 520 calories, but he didn't want to add any more calories to his diet. What two items should he not eat?
Correct Answer
C. Spaghetti and toast
Jim wanted a dessert with 520 calories but didn't want to add any more calories to his diet, so he needs to skip items whose calories add up to 520. According to the graph, the spaghetti and the
toast together account for about 520 calories, so those are the two items he should not eat.
• 34.
Which two players had the same average score?
Correct Answer
D. Players 1 and 5
The given answer suggests that Players 1 and 5 had the same average score. This means that the average score of Player 1 is equal to the average score of Player 5.
• 35.
The difference between average scores is greatest between which two players?
Correct Answer
C. Players 3 and 4
The difference between average scores is greatest between Players 3 and 4. This means that the average score of Player 3 is significantly different from the average score of Player 4. It implies
that these two players have the largest variation in their scores compared to the other players.
• 36.
Shamika agrees to pick up relatives in Oakton and Scottsville. What is the distance from Harrisburg to Bloomington when driving through Oakton and Scottsville?
Correct Answer
D. 775 miles
The question states that Shamika agrees to pick up relatives in Oakton and Scottsville on the way from Harrisburg to Bloomington. The distance for this route is the sum of the legs Harrisburg to
Oakton, Oakton to Scottsville, and Scottsville to Bloomington. Adding the distances shown on the map gives a total of 775 miles.
• 37.
From Bloomington, Shamika and her relatives want to take a trip to Springfield. If she drives from Bloomington to Springfield, drops off her relatives in Oakton and Scottsville and returns to
Harrisburg, how many miles will she travel going home?
Correct Answer
D. 970
Shamika will travel a total of 970 miles going home. Her trip runs from Bloomington to Springfield, then back through Oakton and Scottsville to drop off her relatives, and finally on to Harrisburg.
Adding the map distances for each of these legs gives 970 miles.
• 38.
Jay's monthly payment for a new car would be about 1/2 of his repair expenses from January to April. Which of these could be the amount of Jay's monthly payment for a new car?
Correct Answer
A. $250.00
The question says Jay's monthly payment would be about 1/2 of his repair expenses from January to April, i.e. his expenses are about twice his monthly payment. If his monthly payment is $250.00, his
repair expenses from January to April would be about $500.00, which satisfies that condition.
• 39.
Tim's test scores were 85, 90, 93, and 88. What was his average score?
Correct Answer
C. 89
Tim's average score can be calculated by adding up all his test scores and dividing the sum by the total number of tests. In this case, his test scores are 85, 90, 93, and 88. Adding them up
gives a total of 356. Dividing 356 by 4 (since there are 4 test scores) gives an average of 89. Therefore, Tim's average score is 89.
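The sum-then-divide computation can be checked in one line of Python (illustrative, not part of the quiz):

```python
scores = [85, 90, 93, 88]
average = sum(scores) / len(scores)  # 356 / 4
assert average == 89.0
```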
• 40.
A bag contains 5 red marbles, 5 green marbles, and 10 yellow marbles. If you draw a marble at random, what is the probability that it will be red?
Correct Answer
C. 25%
The probability of drawing a red marble can be calculated by dividing the number of red marbles (5) by the total number of marbles in the bag (20). Therefore, the probability is 5/20, which
simplifies to 1/4 or 25%.
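Favorable-over-total probabilities reduce cleanly with exact fractions; a quick Python sketch (not part of the quiz):

```python
from fractions import Fraction

red, green, yellow = 5, 5, 10
# Probability = favorable outcomes / total outcomes, reduced automatically.
p_red = Fraction(red, red + green + yellow)
assert p_red == Fraction(1, 4)  # i.e. 25%
```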
• 41.
What is the mean of numbers 20, 25, and 30?
Correct Answer
B. 25
The mean of a set of numbers is calculated by adding up all the numbers and then dividing the sum by the total number of values. In this case, the sum of 20, 25, and 30 is 75. Dividing 75 by 3
(since there are 3 numbers) gives us 25, which is the mean of the given numbers.
• 42.
A bus company records how many people ride a particular bus each day. The number of riders was 22, 20, 30, 14, and 24. What was the mean number of riders?
Correct Answer
B. 22
The mean is calculated by adding up all the numbers and then dividing the sum by the total number of values. In this case, we add up 22, 20, 30, 14, and 24, which gives us a sum of 110. Since
there are 5 values, we divide the sum by 5 to get the mean, which is 22.
• 43.
Which two figures are missing from this pattern?
Correct Answer
B. Triangle, circle
The pattern in the given figures is that each figure alternates between a circle and a triangle. The first figure is a circle, followed by a triangle, then a circle again. Therefore, the missing
figures should continue this pattern. The correct answer is triangle, circle.
• 44.
What number is missing from this number pattern? 5, 1, 10, 2, 20, 4, __, 8, 80
Correct Answer
C. 40
The sequence interleaves two doubling patterns: the numbers in the odd positions are 5, 10, 20, 40, 80 (each double the previous), and the numbers in the even positions are 1, 2, 4, 8 (also doubling). The missing number continues the first pattern, so it is 20 × 2 = 40.
• 45.
Adriana brought 3 bags of dog food for each of her 3 dogs. Each bag will provide 8 bowls of food. How many bowls of food does Adriana have for her dogs?
Correct Answer
D. 72
Adriana bought 3 bags of dog food for each of her 3 dogs, so she has 3 × 3 = 9 bags in total. Each bag provides 8 bowls of food, so the total is 9 × 8 = 72 bowls of food for her dogs.
• 46.
Solve the equation for y: y × 8 = 48
□ A.
□ B.
□ C.
□ D.
Correct Answer
A. y = 6
To solve the equation y × 8 = 48, we need to isolate y. We can do this by dividing both sides of the equation by 8. This gives us y = 48/8, which simplifies to y = 6. Therefore, the correct
answer is y = 6.
• 47.
Your checking account balance was $1,614. After you wrote checks for $515, $85, and $112, you calculated a balance of $902. Using estimation, find which of the following best describes your result.
□ A.
□ B.
□ C.
Correct Answer
A. Correct result
The correct result means that the calculated balance of $902 matches the actual balance after deducting the amounts written on the checks. This suggests that the estimation was accurate and there
were no errors in the calculations.
• 48.
A traffic engineer counts the vehicles passing an intersection during a four-hour period. The hourly counts were 195, 98, 206, and 480 vehicles. He reported a total count of 1,079 vehicles. Use
estimation to evaluate his result.
□ A.
□ B.
□ C.
Correct Answer
C. Result too high
The hourly counts sum to 195 + 98 + 206 + 480 = 979 vehicles, but the engineer reported 1,079 vehicles. The reported total is 100 vehicles more than the actual sum, so the reported result is too high.
• 49.
An elephant in a zoo eats about 200 pounds of food every day. About 1/5 of that food is fresh vegetables. Keepers need to decide how many pounds of fresh vegetables they need to order each day to
feed all of their elephants. Which of the following describes the information you have to solve this problem?
□ A.
□ B.
□ C.
Both missing and extra information
□ D.
Correct Answer
B. Missing information
The question provides information about the daily food consumption of an elephant in a zoo, but it does not mention the total number of elephants in the zoo. In order to calculate the number of
pounds of fresh vegetables needed to feed all the elephants, we need to know the total number of elephants. Therefore, the missing information is the total number of elephants in the zoo.
• 50.
How many even numbers greater than zero are there that are less than 31 and divisible by 5?
Correct Answer
C. 3
There are three even numbers greater than zero, less than 31, and divisible by 5. These numbers are 10, 20, and 30.
Mass on a spring
Class content > Oscillations and Waves > Harmonic Oscillation
The oscillatory motion of a system around a stable point is one of the most important single examples in physics. It is the basis for understanding a wide range of systems and phenomena including
mechanical oscillations, electrical circuits, and resonance, and it is even at the heart of the basic theory of photons -- quantum field theory. On this page, we will start with the simplest example of an
oscillating system -- a mass and a spring.
What's going on in an oscillation
There are three core ideas to the idea of oscillation:
1. There is a stable point to motion of the object. If put there, it will stay, so there is no net force at that point.
2. If the object is displaced in any direction from the stable point there is a growing force acting to bring it back to the stable point.
3. As the object approaches the stable point it gains speed. Since the force on the object vanishes at the stable point, there is nothing to stop it and it keeps on going through; so it overshoots
the stable point and the force begins to build to bring it back.
The combination of stable point, restoring force, and overshoot are the components needed to create an oscillation.
The horizontal mass/spring system
The essence of an oscillation is a restoring force and the overshoot arising from inertia. As a result, the simplest example we can construct is a spring -- that provides a linear restoring force
that vanishes at the stable resting point -- and a mass -- that provides the inertia that keeps the mass going. To set this up physically, we will imagine a small massive cart rolling on (those
famous) frictionless wheels attached to an (equally famous) massless spring.* We show a representation of the system in the figure at the right.
We'll assume that the mass of the cart is much greater than the mass of the spring, that the wheels have negligible friction, and that the spring is well approximated (at least for the stretches we
intend to consider) by Hooke's law: T = kΔL, where ΔL is the amount the spring is stretched away from its rest length. To simplify the math, we will choose our coordinate system so that the 0 of our
position coordinate is when the cart is at the rest length.
Here's what we have to do to make sense of what happens:
• Pay careful attention to time. As the cart moves, everything changes. Newton 0 tells us that to figure out what is happening, we have to look at each particular instant.
• At each instant of time the cart has a position x and a velocity v.
• At that instant of time we have to figure out the net force the cart feels. At this point, F^net is just the force of the spring, since the vertical forces (normal and weight) cancel and we are
ignoring friction (for now).
• At that instant of time, F^net causes the velocity to change according to Newton's second law:
a = F^net/m
* We are OK to ignore the mass of the spring if the cart is much more massive than the spring. Also, it helps conceptually to separate the force idea and the inertia idea in separate objects. We
could in fact include the mass of the spring, but it makes the problem mathematically rather difficult and what we are looking for here is a simple example that helps us think about what's happening
when a system oscillates.
The math of the harmonic oscillator
If we make a free-body diagram for our cart, the vertical forces (weight and normal force) cancel leaving only the force of the spring acting on the cart. This is proportional to the amount the
spring is stretched or squeezed. And the way we have set up the coordinate system gives us that F = -kx with the negative sign indicating that when the cart is extended to the right (spring is
stretched) the force is to the left and vice versa. The result is that N2 becomes the equation
a = F/m = -kx/m = -(k/m)x
or, since the acceleration is the second derivative of the position,
d^2x/dt^2 = -(k/m)x
So this tells us that the second derivative of x is proportional to -x times a constant. What kind of object is this constant? We can see from the structure of the equation that it has to look like
the product of two inverse times: the second derivative of x is an acceleration so it is a rate of change of velocity -- a change in (change in position per unit time) per unit time. We expect it to
have a dimensionality 1/T^2. Let's check to see that this works. Here are the dimensionalities of the various quantities involved:
[m] = M
[k] = [F/x] = [ma/x] = ML/(LT^2) = M/T^2
[k/m] = M/(MT^2) = 1/T^2
In physics we often give quantities names that clue us in as to what kind of quantity a parameter looks like, that is, what dimensions it has. We have a couple of choices here -- we could define a
time period, but it turns out to be more convenient to treat this combination like the square of an angular velocity. An angular velocity has units of radian/sec (dimensionality of 1/T). Angular
velocity is often given the symbol "omega" (ω) so we will define
ω[0]^2 = k/m
We put a subscript "0" on it to show it is a value, not a variable. With this definition, Newton's second law becomes:
d^2x/dt^2 = -ω[0]^2 x
So up to a constant, x is a function of t whose second derivative looks like the negative of itself. You've seen this in your calculus course. We can make it look more like calculus by combining the
ω[0] with the time to get a dimensionless variable, τ = ω[0]t. (Check the dimensions of this for yourself.) Our equation then becomes:
d^2x/dτ^2 = -x
This is pretty simple. It looks like the calculus equation "d^2f/dx^2 = -f" (but with different independent and dependent variables). It tells us we are looking for some function (of tau) whose
second derivative is the negative of itself. You should recognize that both the sine and cosine functions satisfy this. We'll work with cosine for now, but in general we could have any sum of the two.
So our first guess at a solution is x = cos(τ). This doesn't quite work because the left and right side have different units. On the left, x is a distance, while the right side is unitless. This
suggests to us that a better answer would be x = A cos(τ) where A is some distance. This would give us a final solution to Newton's law of motion:
x(t) = A cos(ω[0]t).
(The "(t)" after the x just reminds us that x is a function of t.) This is a satisfactory solution as far as units go: ω[0]t is dimensionless so it can be an angle (you can only take the cosine of a
dimensionless quantity), and the result is a length. Now let's see how to interpret the parameters physically.
Interpreting the solution
The result of our analysis gives a graph of the position that looks like this: just a cosine curve.
But we need to interpret the physical parameters of what the mass is doing.
The amplitude
Clearly, at t = 0, x is positive and at its maximum value. And since the slope (v = dx/dt) at t = 0 is 0, the mass has been pulled out, held at rest, and released. It then oscillates down to 0,
continues through, compresses the spring by the same amount that it was stretched, and comes back. Then it continues to repeat. Let's name the various parameters of the motion.
From looking at our equation, and knowing that cos oscillates between -1 and +1, we can see that x will vary from A to -A. Our parameter A is therefore the amplitude of the oscillation -- the maximum
extension it travels from rest in both directions.
The period
That's the spatial parameter. What about the temporal one? Since cosine continues to oscillate as its argument gets larger and larger (see Trig Functions for Large Values of the Argument) it repeats
the same motion over and over. This makes sense since after one oscillation the same initial conditions are repeated and, since we are assuming no friction or loss of energy, once we come back to our
starting point the forces will make it do the same thing once again. Therefore, the time it takes to complete one oscillation is an important parameter. This is called the period and is marked on the
figure as T. But we don't have a T in our equation that describes the motion. We'll have to do a little more work to figure out how it relates.
For cos θ to go through one full oscillation, θ has to vary from 0 to 2π (in radians). Our argument of cosine must do the same thing to go through one full oscillation; so as t goes from 0 to T, ω[0]
t has to go from 0 to 2π. This tells us that we must have
ω[0]T = 2π
T = 2π/ω[0]
This tells us what ω[0] is really doing for us -- it's setting the period of the oscillation. Putting in our value for ω[0] in terms of the original parameters, we get
T = 2π/ω[0] = 2π√(m/k)
This makes good sense, since if we have a bigger mass (bigger m), we expect it to move more slowly and have a longer period, and if we have a stronger spring (bigger k) we expect it to move more
quickly and have a shorter period.
The phase shift
We have to deal with one more issue: we don't always get to start the mass at its maximum positive displacement at time t = 0. Sometimes we have a data collection device with a built-in time delay
and we can't start it precisely when we want to. We might wind up with something like the graph below:
In this case, we had the oscillation starting near its negative maximum. We don't get to +A until the time indicated by t[0]. What should our solution look like then? Clearly, the solution is to
subtract t[0] from t, like this:
x(t) = A cos(ω[0](t - t[0])).
This works the way we want it to since when t = t[0] the argument of the cosine becomes 0 and cosine takes on the value +1 so x(t[0]) = +A. Since both ω[0] and t[0] are constants, we can combine them
by defining ω[0]t[0] = φ ("phi") and get
x(t) = A cos(ω[0](t - t[0])) = A cos(ω[0]t - ω[0] t[0]) = A cos(ω[0]t - φ).
The constant φ is called the phase shift.
This, the three parameters in our expression for the position of the oscillator as a function of time, x(t) = A cos(ω[0]t - φ) tell us:
• A -- the amplitude, which tells us how far the oscillator goes in each direction;
• ω[0] -- the angular frequency (determined by the spring constant and the mass), which tells us the period by T = 2π/ω[0];
• φ -- the phase shift, which tells us where the oscillation starts at t = 0.
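As a quick numerical sanity check of this result (a sketch, not part of the original page; the parameter values below are arbitrary), we can integrate a = -(k/m)x directly with a small time step and confirm that the cart returns to x = A after one period T = 2π/ω[0]:

```python
import math

# Arbitrary illustrative parameters (not from the page)
m, k = 2.0, 8.0              # cart mass (kg), spring constant (N/m)
A = 0.5                      # amplitude (m): cart released from rest at x = A
omega0 = math.sqrt(k / m)    # angular frequency, omega_0 = sqrt(k/m)

# Euler-Cromer integration of Newton's second law: a = -(k/m) x
dt = 1e-4
x, v, t = A, 0.0, 0.0
t_end = 2 * math.pi / omega0     # one full period, T = 2*pi/omega_0
while t < t_end:
    a = -(k / m) * x
    v += a * dt                  # update velocity from the acceleration
    x += v * dt                  # then update position from the new velocity
    t += dt

# Compare against the closed-form solution x(t) = A cos(omega_0 t)
analytic = A * math.cos(omega0 * t)
```

After one full period the integrated position lands back very close to both the analytic value and the starting amplitude A, as the closed-form solution predicts.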
Joe Redish 3/12/12
In a study conducted by the Department of Health and Physical Education at the Virginia Polytechnic Institute and State University, 3 diets were assigned for a period of 3 days to each of 6 subjects
in a randomized block design. The subjects, playing the role of blocks, were assigned the following 3 diets in a random order: Diet 1: mixed fat and carbohydrates, Diet 2: high fat, Diet 3: high
carbohydrates. At the end of the 3-day period each subject was put on a treadmill and the time to exhaustion, in seconds, was measured. The following data were recorded: Perform the
analysis of variance, separating out the diet, subject, and error sum of squares. Use a P-value to determine if there are significant differences among the diets.
Advent of Code
Problem statement: http://adventofcode.com/2015/day/3 Part A Solving this problem requires keeping track of the houses that Santa has visited. Two ways this can be done: keep
Problem statement: http://adventofcode.com/2015/day/2 Part A Solving this problem is relatively straight-forward: with a for-loop, use the math formulas provided for each record in the input,
Problem statement: http://adventofcode.com/2015/day/1 Part A Part A simply requires counting the number of ( and ) and evaluating the difference. This can even be done with a
One of my new favorite activities around Christmas time is a programming challenge website called Advent of Code. Some of the problems are easy, but they rapidly get difficult. Because
The wave heights h in the open sea depend on the speed v of the wind...
The wave heights h in the open sea depend on the speed v of the wind and the length of time t that the wind has been blowing at that speed. Values of the function h = f(v, t) are recorded in feet in
the following table.
Use the table to find a linear approximation to the wave height function when v is near 40 knots and t is near 20 hours. (Round your numerical coefficients to two decimal places.)
f(v, t) =
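The table values are not reproduced here, but the general form of the requested linear approximation is standard (a sketch; the partial derivatives would be estimated from neighboring table entries by difference quotients, with Δv and Δt being the table's grid spacings):

```latex
% Tangent-plane (linear) approximation of h = f(v, t) near (a, b) = (40, 20):
\[
  f(v,t) \approx f(40,20) + f_v(40,20)\,(v-40) + f_t(40,20)\,(t-20)
\]
% with the partials estimated from the table by centered differences:
\[
  f_v(40,20) \approx \frac{f(40+\Delta v,\,20) - f(40-\Delta v,\,20)}{2\,\Delta v},
  \qquad
  f_t(40,20) \approx \frac{f(40,\,20+\Delta t) - f(40,\,20-\Delta t)}{2\,\Delta t}.
\]
```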
Mack Company plans to depreciate a new building using the double
declining-balance depreciation method. The...
Mack Company plans to depreciate a new building using the double declining-balance depreciation method. The building cost is $800,000. The estimated residual value of the building is $50,000 and it
has an expected useful life of 25 years.
What is the building's book value at the end of the first year?
A.) $686,000
B.) $736,000
C.) $690,000
D.) $768,000
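A quick way to check the answer choices (an illustrative sketch, not part of the original post): double declining-balance applies a rate of 2 divided by the useful life to the current book value, and the residual value does not enter the first-year calculation (it only limits depreciation in later years).

```python
cost = 800_000
useful_life = 25
rate = 2 / useful_life                 # double-declining-balance rate = 8%

year1_depreciation = cost * rate       # 800,000 x 0.08 = 64,000
book_value_end_year1 = cost - year1_depreciation   # 800,000 - 64,000 = 736,000
```

This matches answer choice B, $736,000.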
Is there a formula to get values from B1 if A1 is blank?
I want a formula that will return with the value of cell A1 but if cell A1 is blank, then it will look at the value in B1 and Then I want to do an Index match to a different reference.
This is what I have so far:
=INDEX({Equipment Master Range 3}, MATCH([Asset Number]@row, {Equipment Master Range 4}, 0))
• The syntax would be something like:
=IF(ISBLANK(A1), The Index/Match formula for B1 goes here, The Index/Match formula for A1 goes here)
Does the logic make sense? IF the A1 cell is blank, use the Index/match for B1, otherwise, use the Index/Match for A1.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
How to determine incremental guests based on set values?
local entryCost = playerFolder.EntryCost.Value
local land = playerFolder.Land.Value
local numberOfGuests = (land*20) - (entryCost)
Land value is how much land a player has purchased (starts of at 1)
Entry cost is how much the player charges the guests to enter the park (a value between 20-40)
What I’m trying to do is make it so the amount of guests is determined by the amount of land a player has + the cost of entry.
Obviously, the higher the amount of land the player has, the more guests, but I also want entry cost to play a factor. So the more expensive the entry, the fewer guests. I was thinking of having,
say, 10 guests with the land value at 1 and the cost value at 25. But I'm not sure how to work it so that a player with only 1 land value who sets the price to 40 doesn't end up with a negative
number of guests. I'm not sure how to make it so they'll always have a decent number (at least 5 guests for a park that's at 1 land value and 40 for entry cost) and, say, a max of 150-200 guests
for a park with a land value of 16 and entry cost at 20.
Is there a simple formula I should use for this? I know a lot of classic tycoons (like rollercoaster tycoon, zoo tycoon, etc) use fancy equations to determine the amounts of guests using all kinds of
variables. But I can’t seem to find a formula that would work in this instance.
EDIT Obviously if I did
(land*45) - (entryCost)
that’d give me 5 guests if entrycost was 40 and land value was 1 . But if they have land value of 16, (16*45) - 40 would == 680, which is way too big
So lets make
DefaultGuests = Land * 50
DefaultPrice = 10
Assuming PED = 1 (i.e., a 10% increase in price results in a 10% decrease in guests)
Price = 12
Guests = DefaultGuests * (DefaultPrice/Price)
If you increase the price, less guests, lower the price, more!
Algorithms Interview Questions And Answers 2024
Welcome to our comprehensive guide on algorithms interview questions and answers for the year 2024. Whether you are preparing for a job interview, a coding competition, or simply looking to enhance
your algorithmic knowledge, this article will provide you with valuable insights and tips to succeed.
Algorithms Interview Questions that may be asked by the interviewer
1. What is an algorithm?
An algorithm is a step-by-step set of instructions or a well-defined procedure for solving a specific problem or accomplishing a particular task.
2. Explain time complexity?
Time complexity measures the amount of time an algorithm takes to complete based on the input size. It helps analyze the efficiency of an algorithm.
3. Define space complexity?
Space complexity is the amount of memory an algorithm requires to execute based on the input size. It helps assess the efficiency of memory usage.
4. Differentiate between an algorithm and a flowchart?
An algorithm is a step-by-step solution, while a flowchart is a graphical representation of an algorithm using different symbols to illustrate the steps.
5. What is the significance of Big-O notation?
Big-O notation describes the upper bound of an algorithm’s time complexity in the worst-case scenario. It helps analyze algorithm efficiency and scalability.
6. Explain the Divide and Conquer algorithm?
Divide and Conquer breaks a problem into subproblems, solves them independently, and combines the solutions to solve the original problem.
7. Describe dynamic programming?
Dynamic programming solves problems by breaking them into smaller overlapping subproblems, solving each subproblem once, and storing the results for reuse.
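A standard illustration of this idea (a minimal sketch): a naive recursive Fibonacci recomputes the same subproblems exponentially many times, while storing each subproblem's result makes the computation linear.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Each fib(k) is computed once, then cached and reused.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

With the cache, `fib(30)` takes only 31 distinct calls instead of over a million.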
8. What is a greedy algorithm?
Greedy algorithms make locally optimal choices at each stage with the hope of finding a global optimum.
9. Define backtracking in algorithms?
Backtracking is a trial-and-error approach where the algorithm tries different options and reverts when it encounters an unsolvable subproblem.
10. Explain breadth-first search (BFS)?
BFS explores a graph level by level, visiting all neighbours of a node before moving to the next level.
11. Describe depth-first search (DFS)?
DFS explores a graph by going as deep as possible along each branch before backtracking.
12. What is Dijkstra's algorithm used for?
Dijkstra’s algorithm finds the shortest path between two nodes in a weighted graph.
13. Define a hash table?
A hash table is a data structure that maps keys to values, using a hash function to compute an index into an array of buckets.
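A minimal sketch of this structure in Python (illustrative only; real implementations also handle resizing and load factors):

```python
class HashTable:
    """Tiny hash table using separate chaining for collisions."""

    def __init__(self, buckets: int = 16):
        self.buckets = [[] for _ in range(buckets)]

    def _index(self, key) -> int:
        # The hash function maps a key to a bucket index.
        return hash(key) % len(self.buckets)

    def put(self, key, value) -> None:
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)   # update an existing key
                return
        bucket.append((key, value))        # otherwise append to the chain

    def get(self, key, default=None):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return default
```

Keys that hash to the same bucket simply share a chain, which keeps lookups O(1) on average.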
14. What is the quicksort algorithm?
Quicksort is a divide-and-conquer sorting algorithm that works by selecting a pivot element and partitioning the other elements around it.
15. Explain the concept of a linked list?
A linked list is a data structure that consists of a sequence of elements, each pointing to the next element in the sequence.
16. Define recursion in programming?
Recursion is a programming concept where a function calls itself to solve smaller instances of a problem.
17. What is a binary search?
Binary search is an efficient algorithm for finding a target value within a sorted array.
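A minimal sketch in Python: the search interval is halved on each comparison, giving O(log n) time on a sorted array.

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1    # target lies in the right half
        else:
            hi = mid - 1    # target lies in the left half
    return -1
```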
18. Describe the concept of a tree in data structures?
A tree is a hierarchical data structure composed of nodes connected by edges. The top node is called the root, and nodes without children are leaves.
19. What is the purpose of the Floyd-Warshall algorithm?
The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices in a weighted graph.
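A compact sketch of the algorithm in Python (adjacency-matrix form, O(n^3) time); the three-vertex example graph in the test is hypothetical:

```python
INF = float("inf")

def floyd_warshall(dist):
    """dist: n x n matrix of edge weights, with 0 on the diagonal and
    INF where no direct edge exists. Returns all-pairs shortest distances."""
    n = len(dist)
    d = [row[:] for row in dist]   # work on a copy; leave the input intact
    for k in range(n):             # allow paths through intermediate vertex k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```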
20. Explain the concept of a heap?
A heap is a specialized tree-based data structure that satisfies the heap property: in a max-heap the key of each node is greater than or equal to the keys of its children, and in a min-heap it is less than or equal to them.
21. Describe the Boyer-Moore algorithm?
Boyer-Moore is a string-searching algorithm that pre-processes the pattern to skip unnecessary comparisons.
22. What is the Traveling Salesman Problem (TSP)?
TSP is a classic optimization problem where the goal is to find the most efficient route that visits a set of cities and returns to the starting city.
23. Explain the concept of memoization?
Memoization is an optimization technique where the results of expensive function calls are stored and reused when the same inputs occur again.
24. Define a spanning tree in graph theory?
A spanning tree is a subset of edges of a connected, undirected graph that forms a tree and includes all the vertices of the original graph.
25. What is the A* algorithm used for?
A* (A-star) is a pathfinding and graph traversal algorithm often used in computer science and artificial intelligence.
26. Describe the concept of a trie?
A trie is a tree-like data structure that is used to store a dynamic set or associative array where the keys are usually strings.
27. Explain the concept of an AVL tree?
An AVL tree is a self-balancing binary search tree where the heights of the two child subtrees of any node differ by at most one.
28. What is the Manhattan distance?
Manhattan distance is the distance between two points measured along axes at right angles, often used in grid-based games.
29. Define the concept of a directed acyclic graph (DAG)?
A DAG is a directed graph that contains no cycles, meaning there is no way to start at a vertex, follow a sequence of directed edges, and return to that same vertex.
30. What is the purpose of the K-means clustering algorithm?
K-means clustering is used for partitioning data into distinct groups based on similarities.
Algorithms Interview Questions
1. What is an algorithm and why it is useful in the programming?
An algorithm is a step-by-step set of instructions or a well-defined procedure for solving a specific problem or accomplishing a particular task. In the context of programming, an algorithm serves as
a blueprint or a plan that guides the computer in executing a series of operations to produce a desired output.
Key characteristics of algorithms include:
1. Well-Defined:
□ Algorithms must have precisely defined steps that can be executed in a clear and unambiguous manner. Each step should be specific and understandable.
2. Finite:
□ Algorithms must terminate after a finite number of steps. They should not result in an infinite loop or continue indefinitely.
3. Input and Output:
□ Algorithms take input, perform a set of operations, and produce an output. The input represents the initial data or state, and the output is the result or solution.
4. Effective:
□ Algorithms should be effective in solving the problem for which they are designed. They should be capable of producing correct results with a reasonable amount of resources (time and memory).
Algorithms play a fundamental role in programming for several reasons:
1. Problem Solving:
□ Algorithms provide a systematic approach to problem-solving. They break down complex problems into smaller, more manageable steps, making it easier to devise solutions.
2. Reusability:
□ Once designed and tested, algorithms can be reused for similar problems. This promotes code reusability, saving time and effort in developing new solutions.
3. Efficiency:
□ Well-designed algorithms contribute to the efficiency of a program. They aim to solve problems using optimal resources, minimizing the consumption of time and memory.
4. Communication:
□ Algorithms serve as a means of communication between humans and computers. They provide a common language for expressing the logic of a solution that can be implemented in a programming
5. Learning and Teaching:
□ Algorithms are essential for teaching and learning computer science and programming. They are used to illustrate fundamental concepts, and understanding algorithms is key to becoming a
proficient programmer.
6. Optimization:
□ Algorithms can be optimized to improve performance. Programmers can analyze and refine algorithms to achieve better efficiency, especially in terms of time complexity.
2. Why are algorithms important?
Algorithms are the building blocks of computer science and play a crucial role in solving complex problems. They help in optimizing performance, reducing time complexity, and improving efficiency in
various applications such as data analysis, search engines, machine learning, and more.
3. What are the different types of algorithms?
There are several types of algorithms, including:
• Sorting algorithms (e.g., bubble sort, merge sort)
• Searching algorithms (e.g., linear search, binary search)
• Graph algorithms (e.g., breadth-first search, depth-first search)
• Dynamic programming algorithms (e.g., Fibonacci sequence)
• Greedy algorithms (e.g., Dijkstra’s algorithm)
4. How do you analyze the efficiency of an algorithm?
The efficiency of an algorithm is analyzed using time complexity and space complexity. Time complexity measures the amount of time taken by an algorithm to run, while space complexity measures the
amount of memory used by an algorithm. Big O notation is commonly used to represent the time and space complexity of an algorithm.
5. What is the difference between a greedy algorithm and a dynamic programming algorithm?
A greedy algorithm makes locally optimal choices at each step to find a global optimum, while a dynamic programming algorithm breaks down a complex problem into smaller overlapping subproblems and
solves them independently. Dynamic programming algorithms are generally more efficient but may require more memory compared to greedy algorithms.
6. Can you explain the concept of recursion in algorithms?
Recursion is a programming technique where a function calls itself to solve a problem by breaking it down into smaller subproblems. Each recursive call reduces the problem size until a base case is
reached. Recursion is commonly used in algorithms such as factorial calculation, Fibonacci sequence, and tree traversal.
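For example, a factorial computed recursively (a minimal sketch): the base case stops the recursion, and each call works on a smaller instance of the problem.

```python
def factorial(n: int) -> int:
    # Base case: 0! = 1! = 1 terminates the chain of calls.
    if n <= 1:
        return 1
    # Recursive case: reduce the problem size by one.
    return n * factorial(n - 1)
```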
7. How do you optimize an algorithm?
To optimize an algorithm, you can consider the following approaches:
• Improve time complexity by using more efficient data structures or algorithms
• Reduce unnecessary computations or redundant operations
• Implement caching or memoization techniques to avoid repetitive calculations
• Parallelize the algorithm to take advantage of multiple processors or threads
8. How can you prepare for algorithmic interviews?
Preparing for algorithmic interviews requires a combination of theoretical knowledge and practical problem-solving skills. Here are some tips to help you prepare:
• Study and understand various types of algorithms and their applications
• Practice solving algorithmic problems on coding platforms such as LeetCode or HackerRank
• Review data structures and their operations (e.g., arrays, linked lists, trees)
• Learn about common algorithmic techniques (e.g., divide and conquer, dynamic programming)
• Participate in mock interviews or coding competitions to improve your problem-solving speed
9. What are Sorting Algorithms?
Sorting algorithms are algorithms that rearrange elements of a list or an array in a specific order. Sorting is a fundamental operation in computer science and is essential for various applications.
There are various sorting algorithms, each with its advantages and disadvantages. Here’s an overview of some common sorting algorithms:
1. Bubble Sort:
□ It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted.
2. Selection Sort:
□ It sorts an array by repeatedly finding the minimum element from the unsorted part of the array and putting it at the beginning. The process is repeated for the remaining unsorted elements.
3. Insertion Sort:
□ It builds a sorted array one element at a time by taking elements from the unsorted part and inserting them into their correct position in the sorted part.
4. Merge Sort:
□ It is a divide-and-conquer algorithm. It divides the array into two halves, sorts each half separately, and then merges them back together.
5. Quick Sort:
□ It is another divide-and-conquer algorithm. It selects a ‘pivot’ element and partitions the array into two sub-arrays, such that elements less than the pivot are on the left, and elements
greater than the pivot are on the right. It then recursively sorts the sub-arrays.
6. Heap Sort:
□ It uses a binary heap data structure to build a heap and then sorts the heap. It repeatedly extracts the maximum element from the heap and rebuilds the heap until the array is sorted.
7. Shell Sort:
□ It is an extension of insertion sort. It starts by sorting pairs of elements far apart from each other and progressively reduces the gap between elements to be compared.
8. Counting Sort:
□ It is a non-comparative sorting algorithm that sorts elements based on their count occurrences. It assumes that each element in the array has a key within a specific range.
9. Radix Sort:
□ It is a non-comparative integer sorting algorithm that sorts data with integer keys by grouping keys by the individual digits which share the same significant position and value.
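To make one of these concrete, here is a minimal merge sort sketch (illustrative, not tuned for production), following the divide-and-conquer description above:

```python
def merge_sort(items):
    """Divide the list in half, sort each half, then merge the sorted halves."""
    if len(items) <= 1:
        return items  # a list of 0 or 1 elements is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```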
10. What are Searching Algorithms?
Searching algorithms are designed to locate a specific item or position within a collection of data. These algorithms are crucial for quickly retrieving information from large datasets. Here are some
commonly used searching algorithms:
1. Linear Search:
    □ Also known as sequential search, it checks each element in the list sequentially until a match is found or the entire list has been searched. It is simple but may be inefficient for large datasets.
2. Binary Search:
□ Applicable only to sorted arrays, binary search repeatedly divides the search range in half by comparing the middle element to the target value. It eliminates half of the remaining elements
with each iteration.
3. Hashing:
□ Hashing involves using a hash function to map keys to indices in a data structure called a hash table. This allows for constant-time average-case access, making it efficient for searching.
4. Jump Search:
□ Similar to binary search, jump search works on sorted arrays. It jumps ahead by fixed steps to find a range where the target element may be located and then performs a linear search within
that range.
5. Interpolation Search:
□ It is an improvement over binary search for uniformly distributed datasets. Instead of always dividing the search space in half, interpolation search estimates the position of the target
based on its value.
6. Exponential Search:
□ Exponential search is useful when the size of the dataset is unknown. It involves repeatedly doubling the search range until a range is found in which the target value may exist, followed by
a binary search within that range.
7. Ternary Search:
□ Ternary search is similar to binary search but involves dividing the array into three parts instead of two. It checks if the target value is in the first, second, or third part and continues
the search in the relevant section.
8. Fibonacci Search:
□ It is a searching algorithm based on the Fibonacci sequence. It divides the array into two parts using Fibonacci numbers and performs a binary search within the chosen range.
9. Uniform Binary Search:
    □ In this variation of binary search, all possible positions are considered uniformly, even those unlikely to contain the target element. It ensures a consistent time complexity for different inputs.
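As a concrete example of the second algorithm above, here is a minimal binary search sketch over a sorted list (illustrative code, not from any particular source):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # middle of the remaining range
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1            # discard the left half
        else:
            hi = mid - 1            # discard the right half
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(binary_search(data, 11))  # 4
print(binary_search(data, 4))   # -1
```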
11. What are Graph Algorithms?
Graph algorithms are a set of techniques and procedures designed to analyze and manipulate graphs, which consist of vertices/nodes and edges connecting these vertices. Graphs can represent a variety
of relationships and structures, making graph algorithms fundamental in solving problems across various domains. Here are some key graph algorithms:
1. Breadth-First Search (BFS):
□ BFS explores a graph level by level, starting from a chosen source vertex. It visits all the neighbors of the current vertex before moving on to the next level. BFS is often used to find the
shortest path in unweighted graphs.
2. Depth-First Search (DFS):
□ DFS explores a graph by going as deep as possible along each branch before backtracking. It is used for topological sorting, detecting cycles in a graph, and traversing connected components.
3. Dijkstra’s Algorithm:
    □ Dijkstra’s algorithm finds the shortest paths from a source vertex to all other vertices in a weighted graph. It is based on repeatedly selecting the vertex with the smallest tentative distance.
4. Bellman-Ford Algorithm:
□ Similar to Dijkstra’s algorithm, Bellman-Ford finds the shortest paths in a weighted graph. It can handle graphs with negative edge weights but is less efficient than Dijkstra’s algorithm for
positive weights.
5. Prim’s Algorithm:
□ Prim’s algorithm is used to find the minimum spanning tree of a connected, undirected graph. It starts with an arbitrary node and grows the tree by adding the smallest edge that connects a
vertex in the tree to one outside the tree.
6. Kruskal’s Algorithm:
□ Kruskal’s algorithm is another approach to finding the minimum spanning tree. It builds the minimum spanning tree by iteratively adding the smallest edge that does not form a cycle.
7. Floyd-Warshall Algorithm:
□ This algorithm finds the shortest paths between all pairs of vertices in a weighted graph, including graphs with negative edge weights. It is based on dynamic programming and is less
efficient than Dijkstra’s for sparse graphs.
8. Topological Sorting:
□ Topological sorting is used on directed acyclic graphs (DAGs) to linearly order the vertices in a way that respects the partial order defined by the edges. It has applications in task
scheduling and dependency resolution.
9. Maximum Flow Algorithms (e.g., Ford-Fulkerson):
□ Maximum flow algorithms find the maximum flow in a network. These algorithms are used in network flow problems, such as optimizing transportation networks or data flow in computer networks.
10. Graph Coloring (e.g., Greedy Coloring):
    □ Graph coloring algorithms assign colors to the vertices of a graph in a way that no two adjacent vertices share the same color. This problem has applications in scheduling and register allocation.
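To make the traversal algorithms concrete, here is a minimal BFS sketch that returns one shortest path in an unweighted graph; the example graph and function name are illustrative, not from any particular library.

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Breadth-first search on an unweighted graph; returns one shortest path."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
}
print(bfs_shortest_path(graph, "A", "E"))  # ['A', 'C', 'E']
```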
Algorithms are the backbone of computer science and mastering them is essential for a successful career in the field. By understanding the fundamentals, practicing problem-solving, and staying
updated with the latest trends, you can excel in algorithmic interviews and tackle complex problems with confidence. We hope this guide has provided you with valuable insights and answers to common
algorithmic interview questions in 2024.
How Is the Magnitude of an Earthquake Measured?
Posted April 6, 2023
Siddhant Don
The Richter magnitude of an earthquake is determined from the logarithm of the amplitude of waves recorded by seismographs, with adjustments for the variation in the distance between the various seismographs and the epicenter of the earthquake.
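Because the scale is logarithmic, a worked example helps: assuming the usual base-10 relationship, the ratio of ground-motion amplitudes between two magnitudes is simply a power of ten. A small illustrative sketch (the function name is ours, not a standard API):

```python
def magnitude_ratio(m1, m2):
    """Ratio of ground-motion amplitudes implied by two Richter magnitudes.
    Each whole-number step on the scale is a tenfold change in amplitude."""
    return 10 ** (m1 - m2)

# A magnitude-7 quake produces roughly 100x the wave amplitude of a magnitude-5 quake.
print(magnitude_ratio(7.0, 5.0))  # 100.0
```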
The magnitude of an earthquake is one of the most important measurements for seismologists and other scientists to understand the power and size of seismic events. An earthquake’s magnitude is
determined based on the amount of energy released in the form of seismic waves during an earthquake, and is usually expressed on a logarithmic scale. It is measured by seismometers, instruments that
detect and measure the seismic waves that result from an earthquake.
The most common measure of an earthquake’s magnitude is the Richter scale, developed in 1935 by Charles Richter. It is a logarithmic scale: each whole-number increase in magnitude corresponds to a tenfold increase in measured wave amplitude, and the scale has no fixed upper limit, though the largest recorded earthquakes are around magnitude 9. Earthquakes with a magnitude of 7.0 to 7.9 are classified as major events, while those with a magnitude of 8.0 or higher are known as great earthquakes.
The Richter scale is a measure of the amount of energy released at the earthquake’s epicenter, but seismologists also use the Moment Magnitude Scale (MMS), which measures the total energy released by an earthquake. This is done by measuring the amount of displacement, or “slip,” that takes place along a fault line. The MMS has largely replaced the Richter scale for reporting large earthquakes.
Drawing arrows in JavaFX
Some time in the past, I was wondering what’s the easiest solution for drawing arrow connections between shapes. The problem boils down to computing the boundary point of a given shape that intersects the connecting line.
The solution is not so difficult when we consider polygon shapes. But it becomes more difficult considering curves and far more difficult for generic SVG shapes.
In this article, I show a simple way to find such boundary points for generic SVG shapes, utilizing JavaFX “contains” method and simple bisection algorithm.
The following application does the job: Launch JNLP, Browse on GitHub.
The application lets you drag the shapes around and watch how their boundary points are tracked.
So, the “contains” method in JavaFX allows us to poll any 2D point for intersection with an SVG shape. Next, we need to assume that we know a point inside the shape from which every ray has exactly one intersection with the boundary. This includes star-like shapes and ellipses. We can then use a simple bisection algorithm to find such a boundary point, cutting off once the length between the two bracketing points is small enough. The following snippet does the job:
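The original snippet is not reproduced here; under the assumptions described (a known interior point and a shape-containment test), a bisection sketch could look like the following. It is written in Python with a stand-in `contains` predicate in place of JavaFX’s `Shape.contains`:

```python
def boundary_point(contains, inner, outer, eps=1e-6):
    """Bisect between a point inside the shape and one outside it until the
    bracketing segment is shorter than eps; returns a point on the boundary.
    `contains(x, y)` is a stand-in for JavaFX's Shape.contains."""
    (ix, iy), (ox, oy) = inner, outer
    while ((ox - ix) ** 2 + (oy - iy) ** 2) ** 0.5 > eps:
        mx, my = (ix + ox) / 2, (iy + oy) / 2
        if contains(mx, my):
            ix, iy = mx, my      # midpoint inside: move the inner end outward
        else:
            ox, oy = mx, my      # midpoint outside: move the outer end inward
    return (ix + ox) / 2, (iy + oy) / 2

# Example: unit circle centred at the origin.
inside_circle = lambda x, y: x * x + y * y <= 1
bx, by = boundary_point(inside_circle, (0, 0), (5, 0))
print(round(bx, 3), round(by, 3))  # 1.0 0.0
```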
Next, we need to draw arrows on the ends. In order to do that, we can use a JavaFX Affine transform, which is a matrix transformation for an SVG path. We need to compute the transformation vectors for the matrix. This sounds scary, but in fact it is relatively easy. The first vector in the matrix is (targetPoint – sourcePoint) / length. The second one is perpendicular to it. The following code applies the transformation:
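The referenced code is likewise not included in the text; a sketch of just the vector computation described above (illustrative names, Python rather than Java) might be:

```python
def arrow_transform(source, target):
    """Column vectors of the affine matrix that maps a unit arrowhead drawn
    along the x-axis onto the segment from source to target."""
    sx, sy = source
    tx, ty = target
    dx, dy = tx - sx, ty - sy
    length = (dx * dx + dy * dy) ** 0.5
    ux, uy = dx / length, dy / length     # first vector: along the line
    px, py = -uy, ux                      # second vector: perpendicular to it
    return (ux, uy), (px, py), (tx, ty)   # x-axis image, y-axis image, translation

print(arrow_transform((0, 0), (0, 10)))  # ((0.0, 1.0), (-1.0, 0.0), (0, 10))
```

The three returned pairs correspond to the columns of the 2×3 affine matrix JavaFX’s `Affine` expects.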
Directed Acyclic Graphs (DAGs)
The DAG consists of the following elements: Nodes. This graph has a complete circuit and so is not acyclic. Directed Acyclic Graphs: A DAG displays assumptions about the relationship between variables (often called nodes in the context of graphs). Topological Sort. Hence, clearly it is a
forest. 1.1: a) A toy example with six tasks and six dependencies, b) a non-acyclic partitioning when edges are oriented, c) an acyclic partitioning of the same directed graph. Directed Acyclic Graph
(DAG) is a special kind of directed graph. A DAG is a very useful data structure for implementing transformations on basic blocks. It does not contain any cycles in it, hence called acyclic. Definition. A quick note on terminology: I use
the terms confounding and selection bias below, the terms of choice in epidemiology. In other words, a connected graph with no cycles is called a tree. A directed acyclic graph has a topological
ordering. Acyclic graphs are bipartite. This approach is widely used and many authors provide test sets on the web (see [5, 8] for instance). The edges of a tree are known as branches. Examples on
DAG: directed acyclic graph in compiler design. An edge between vertices u and v is written as {u, v}. The edge set of G is denoted E(G), or just E if there is no ambiguity. The parts of the above graph are: Integer = the set for the Vertices. A
graph G consists of two types of elements: vertices and edges. Each edge has two endpoints, which belong to the vertex set. We say that the edge connects (or joins) these two vertices. A DAG has a unique
topological ordering if it has a directed path containing all the nodes; in this case the ordering is the same as the order in which the nodes appear in the path. Let χ a (G), called the acyclic
chromatic number, be the smallest integer k such that the graph G admits an acyclic k-coloring. After eliminating the common sub-expressions, re-write the basic block. Firstly, construct a DAG for
the given block (already done above). This is an example of a directed acyclic graph: A Simple DAG. In Airflow, a DAG-- or a Directed Acyclic Graph -- is a collection of all the tasks you want to
run, organized in a way that reflects their relationships and dependencies.. A DAG is defined in a Python script, which represents the DAGs structure (tasks and their dependencies) as code. There are
no cycles in this graph. We can optimize S9 = I + 1 and I = S9 as I = I + 1. A connected acyclic graph
is known as a tree, and a possibly disconnected acyclic graph is known as a forest (i.e., a collection of trees). Dependency graphs without circular dependencies form DAGs. This action helps in
detecting the common sub-expressions and avoiding the re-computation of the same. Directed Acyclic Graph. Examples of “acyclic” used in a sentence, from the Cambridge Dictionary Labs. The common sub-expression e = d x c, which is actually b x c (since d = b), is eliminated. A directed acyclic graph is an acyclic graph that has a
direction as well as a lack of cycles. A DAG is constructed for optimizing the basic block. In the above example graph, we do not have any cycles. A graph … An example of a directed acyclic graph In
mathematics and computer science , a directed acyclic graph ( DAG Template:IPAc-en ), is a directed graph with no directed cycles . In the case of a DVCS, each node represents one revision of the
entire repository tree. A graph with no cycles is called an acyclic graph. Vertices set = {1,2,3,4,5,6,7}. A directed acyclic graph is an acyclic graph that has a direction as well as a lack of
cycles. A disconnected acyclic graph is called a forest. For example, a simple DAG could consist of three tasks: A, B, and C. These capture the dependence structure of multiple … Using Directed
Acyclic Graphs in Epidemiological Research in Psychosis: An Analysis of the Role of Bullying in Psychosis Schizophr Bull. The graph in which the graph is a cycle in itself, the degree of each vertex
is 2. The graph in which from each node there is an edge to each other node. Acyclic coloring was introduced by Grünbaum. Directed Acyclic Graph. An acyclic coloring of a graph G is a
proper coloring of G such that G contains no bicolored cycles; in other words, the graph induced by every two color classes is a forest. Draw a directed acyclic graph and identify local common
sub-expressions. For example, graph drawing algorithms can be tested or studied by running them over a large set of examples. Such a graph can be seen as a hierarchy. A graph with at least one cycle is called a cyclic graph. An important class of problems of this type concern collections of objects that need to be updated, such as the cells
of a spreadsheet after one of the cells has been changed, or the object files of a piece of computer software after its source code has been changed. From Cambridge English Corpus Since the grammar
doesn't contain … In Compiler design, Directed Acyclic Graph is a directed graph that does not contain any cycles in it. An Introduction to Directed Acyclic Graphs Malcolm Barrett 2020-02-12. Two
examples of standard directed acyclic graphs (DAGs) (left) and two interaction DAGs (IDAGs) (right). Bipartite Graph DAGs are used extensively by popular projects like Apache Airflow and Apache
Spark.. A graph in which vertex can be divided into two sets such that vertex in each set does not contain any edge between them. •Examples: manual + DAG online tool •Build your own DAG.
A connected acyclic graph is
called a tree. Acyclic Graph. What is a DAG (directed acyclic graph)? A self-loop is an edge w… From the Cambridge English Corpus: A proof structure is correct if all its correctness graphs are
acyclic. To determine the expressions which have been computed more than once (called common sub-expressions). Consider the following expression and construct a DAG for it-, Three Address Code for the given expression is-
value than the ending node. We argue for the use of probabilistic models represented by directed acyclic graphs (DAGs). Example. (c) An acyclic partitioning. This blog post will teach you how to build a DAG in Python with the networkx library and run important graph algorithms. Once you’re comfortable with DAGs and see how easy they are to work … The graph in this picture has the vertex set V = {1, 2, 3, 4, 5, 6}. The edge set E = {{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}}. To gain
better understanding about Directed Acyclic Graphs, Next Article-Misc Problems On Directed Acyclic Graphs. We can optimize S8 = PROD + S7 and PROD = S8 as PROD = PROD + S7. The variable Y (a disease)
is directly influenced by A (treatment), Q (smoking) and … After eliminating the common sub-expressions, re-write the basic block. Acyclic Graph; Finite Graph; Infinite Graph; Bipartite Graph; Planar
Graph; Simple Graph; Multi Graph; Pseudo Graph; Euler Graph; Hamiltonian Graph . Acyclic graphs, on the other hand, do not have any nodes with circular references. The property of a directed acyclic
graph is that the arrows only flow in one direction and never "cycle" or form a loop. For acyclic graphs, modules are initialized during a depth first traversal starting from the module containing
the entry point of the application. Elements of trees are called their nodes. The assignment instructions of the form x:=y are not performed unless they are necessary. Example of a DAG: Theorem Every
finite DAG has … A new node is created only when there does not exist any node with the same value. Reachability relation forms a partial order in DAGs. An example of this type of directed acyclic
graph are those encountered in the causal set approach to quantum gravity, though in this case the graphs considered are transitively complete. Bipartite Graph. Cycle Graph. DAG partitioning: (a) a toy graph, (b) a partition ignoring the directions; it is cyclic. Tree and topological sort. For example, the preceding cyclic graph had a leaf (3): Continuation of the idea: If we “peel off” a leaf node in
an acyclic graph, then we are always left with an acyclic graph. 4 x I is a common sub-expression. To give an example of how DAGs apply to batch processing pipelines, suppose you have a database of
global sales, and you want a report of all sales by region, expressed in U.S. dollars. Directed Acyclic Graphs (DAGs) In any digraph, we define a vertex v to be a source, if there are no edges
leading into v, and a sink if there are no edges leading out of v. A directed acyclic graph (or DAG) is a digraph that has no cycles. Graph terminology •The skeleton of a graph is the graph obtained
by removing all arrowheads •A node is a collider on a path if the path contains → ← (the arrows collide at the node). Since there is a cut edge from s to u and another from u to t, the partition is cyclic, and is not acceptable. An acyclic graph is a graph without cycles (a cycle is a complete circuit). The terms, however, depend on the field. In this
context, a dependency graph is a graph that has a vertex for each object to be updated, and an edge connecting two objects whenever one of them needs to be updated earlier than the other. Each node
represents some object or piece of data. An acyclic graph is a directed graph which contains absolutely no cycle, that is no node can be traversed back to itself. In many applications, we use
directed acyclic graphs to indicate precedences among events. OR In other words, a null graph does not contain any edges in it. For a simple example, imagine I'm setting up a somewhat stupid graph to
calculate "x + 2x": A) Node which receives numeric input from some external source, and forwards that on to its dependents. Share; Tweet; Les tokens PSG et JUV bondissent de 80 à 160% après leur
listing – TheCoinTribune. A DAG is a data structure from computer science which can be used to model a wide variety of problems. Graphs are widely used by non-mathematicians as a modelling tool in
various fields (social sciences, computer science and biology to name just a few). This illustrates how the construction scheme of a
DAG identifies the common sub-expression and helps in eliminating its re-computation later. •DAG rules & conventions •How to construct a DAG • Which variables should be included? Properties and
Applications. Figure1: a) A toy example with six tasks and six dependencies, b) a non-acyclic partitioning when edges are oriented, c) an acyclic partitioning of the same directed graph. Directed
Acyclic Graph for the given basic block is-, After eliminating S4, S8 and S9, we get the following basic block-. This graph (the thick black line) is acyclic, as it has no cycles (complete circuits).
Directed Acyclic Graphs (DAGs) are a critical data structure for data science / data engineering workflows. The following graph looks like two sub-graphs; but it is a single disconnected graph. A
directed acyclic graph (DAG) is a graph which doesn’t contain a cycle and has directed edges. Examples: Directed Acyclic Graph could be considered the future of blockchain technology (blockchain
3.0). Get more notes and other study material of Compiler Design. Example. Shows the values of paths in an example scenario where we want to calculate the number of paths from node 1 to node 6.
Hence, we can eliminate because S1 = S4. A Directed Graph that does not contain any cycle. In the version history example, each version of the software is associated with a unique time, typically the
time the version was saved, committed or released. Acyclic graphs are bipartite. Causal Directed Acyclic Graphs. A connected acyclic graph is known as a tree, and a possibly disconnected acyclic
graph is known as a forest (i.e., a collection of trees). The parts of the above graph are: A directed acyclic graph has a topological ordering. Trees are the restricted types of graphs, just with
some more rules. Recommended for you If we keep peeling off leaf nodes, one of two things will happen: We will eventually peel off all nodes: The graph is acyclic. Un graphe orienté acyclique est un
graphe orienté qui ne possède pas de circuit [1]. Hence it is a non-cyclic graph. In other words, a disjoint collection of trees is called a forest. En théorie des graphes, un graphe orienté
acyclique (en anglais directed acyclic graph ou DAG), est un graphe orient é qui ne possède pas de circuit. To determine the names whose computation has been done outside the block but used inside
the block. A DAG network is a neural network for deep learning with layers arranged as a directed acyclic graph. The assumptions we make take the form of lines (or edges) going from one node to
another. Null Graph- A graph whose edge set is empty is called as a null graph. The computation is carried out only once and stored in the identifier T1 and reused later. With Chegg Study, you can
get step-by-step solutions to your questions from an expert in the field. I'm interested of setting up calculations through an acyclic directed graph, whose calculation nodes shall be distributed
across different processes/servers. For those of you who have been in the Crypto game, you probably have a decent understanding of blockchain technology, it is the first and – at the moment – the
most used type of technology in the industry. In computer science, DAGs are also called wait-for-graphs. When following the graph from node to node, you will never visit the same node twice. Cyclic
Graph. A Directed Graph that does not contain any cycle. Directed Acyclic Graphs | DAGs | Examples. The numbers of acyclic graphs (forests) on n=1, 2, ... are 1, 2, 3, 6, 10, 20, 37, 76, 153, ...
(OEIS A005195), and the corresponding numbers of connected acyclic graphs (trees) are 1, 1, 1, 2, 3, 6, 11, 23, 47, 106, ... (OEIS A000055). Spanning Trees Every tree will always be a graph but not
all graphs will be trees. Otherwise, it is a non-collider on the path. Bipartite Graph. These edges are directed, which means to say that they have a single arrowhead indicating their effect.
Example- Here, This graph consists only of the vertices and there are no edges in it. Now, the optimized block can be generated by traversing the DAG. For example, paths (6) = paths (2) + paths (3),
because the edges that end at … ( ( ( a + a ) + ( a + a ) ) + ( ( a + a ) + ( a + a ) ) ), Directed Acyclic Graph for the given expression is-, Consider the following block and construct a DAG for
it-, Directed Acyclic Graph for the given block is-. Instead it would be a Directed Acyclic Graph (DAG). Problems On Directed Acyclic Graphs. A check is made to find if there exists any node with the
same value. •Examples: •4 is a collider on the path (3,4,1) •4 is a non-collider on the path (3,4,5) Exterior nodes (leaf nodes) always represent the names, identifiers or constants. An acyclic graph
is a graph having no graph cycles. Tree v/s Graph. • How to determine covariates for adjustment? ). Directed Acyclic Graph Examples. We are given a DAG, we need to clone it, i.e., create another
graph that has copy of its vertices and edges connecting them. Microsoft Graph OAuth2 Access Token - Using Azure AD v2. A cycle in this graph is called a circular dependency, and is generally not
allowed, because there would be no way to consistently schedule the tasks involved in the cycle. Draw a directed acyclic graph and identify local common sub-expressions. Since the graph is acyclic,
the values of paths can be calculated in the order of a topological sort. Overview •What are DAGs & why do we need them? A stream of sensor data represented as a directed acyclic graph. Both
transitive closure & transitive reduction are uniquely defined for DAGs. When a DAG is used to detect a deadlock, it illustrates that a resources has to wait for another process to continue.
Comments? Need to post a correction? 1. HOW TO GET RECKT. 2. A simple graph G = (V, E) with vertex partition V = {V 1, V 2} is called a bipartite graph if every edge of E joins a vertex in V 1 to a
vertex in V 2. They will make you ♥ Physics. Transformations such as dead code elimination and common sub expression elimination are then applied. The common sub-expression (a+b) has been expressed
into a single node in the DAG. To determine the statements of the block whose computed value can be made available outside the block. The vertex set of G is denoted V(G),or just Vif there is no
ambiguity. The nodes without child nodes are called leaf nodes. Example of a DAG: Theorem Every finite DAG has at least one source, and at least one sink. Since an infinite graph is acyclic as soon
as its finite subgraphs are, this statement easily extends to infinite graphs by compactness. In computer science, a directed acyclic graph (DAG) is a directed graph with no cycles. Directed acyclic
graphs representations of partial orderings have many applications in scheduling for systems of tasks with ordering constraints. That is, it is formed by a collection of vertices and directed edges ,
each edge connecting one vertex to another, such that there is no way to … In this tutorial, we’ll show how to make a topological sort on a DAG in linear time. A tree with 'n' vertices has 'n-1'
edges. DAGs¶. 43 of this graph is shown inFigure 1.1(b)with a dashed curve. Used inside the block a ) Atoygraph ( b ) with a tutor! Computation is carried out only once and stored in the identifier
T1 and reused.... Are uniquely defined for DAGs your first 30 minutes with a Chegg tutor is!... And identify local common sub-expressions ) to acyclic graph example acyclic graph is called acyclic.
Set for the use of probabilistic models represented by spaced a graph but all. ' n ' vertices has ' n-1 ' edges uto t, the of! In epidemiology graph OAuth2 Access Token - Using Azure AD v2: Theorem
Every finite DAG has at least sink... In computer science, DAGs are used for the given expression is- such as dead elimination! Given block ( already done above ) have any cycles in it nodes ( leaf
nodes step-by-step... ) are a critical data structure from computer science which can be available! Set does not contain any cycles in it done outside the block du support de 2,80,... Called common
sub-expressions ) projects like Apache Airflow and Apache Spark 44 from sto another! Used for the following graph looks like two sub-graphs ; but it is a single node in the.! Dags & why do we need
them projects like Apache Airflow and Apache Spark graph has. Directed graph that has a topological sort on a DAG is a graph having graph! Not exist any node with the same, and at least one sink that
vertex in each set does contain... And many authors provide test sets on the web ( see [ 5, 8 ] instance! Node is created only when there does acyclic graph example contain any cycle standard
directed acyclic graphs representations of partial have! Another process to continue that they have a single arrowhead indicating their acyclic graph example expressions which have been computed more
once., following rules are used for the given basic block is-, eliminating... Assignment instructions of the vertices and there are no edges in it inside the block but used the... Dag ( directed
acyclic graphs ( DAGs ) ( right ) 8 for... Edges in it for another process to continue and S9, we ’ ll how. 8 ] for instance ) in detecting the common sub-expression and helps in detecting common...
Why do we need them which can be generated by traversing the DAG common sub-expressions well as lack!, depend on the field le GUIDE ULTIME SUR SWISSBORG ( CHSB:... Variables should be included graphs
to indicate precedences among events are necessary example of topological. Consider the following purposes-, following rules are used for the given block ( already done above ) done. To indicate
precedences among events has ' n-1 ' edges eos ne peut pas au-dessus. Eliminating the common sub-expression looks like two sub-graphs ; but it is cut. Variety of Problems uniquely defined for DAGs
cycles in it infinite graph is a cycle in itself, the block! Computed value can be calculated in the above example graph, we get the following,... Avoiding the re-computation of the block computation
has been expressed into a single disconnected graph acyclic graph example, is...: manual + DAG online tool •Build your own DAG edge between them common sub-expressions avoiding. Two examples of
standard directed acyclic graph that has a topological ordering node created... ) going from one node to another ) and two interaction DAGs ( IDAGs ) ( left and... With ordering constraints a check
is made to find if there exists any node with the same value Access -. ( directed acyclic graph that has a lower value than the ending.! Structure for data science / data engineering workflows is
used to detect a deadlock, it illustrates a. Do we need them basic block- form x: =y are not performed unless they are necessary and helps detecting... ) are a critical data structure for data
science / data engineering workflows (! ( IDAGs ) ( right ), we can eliminate because S1 =.! Expression and construct a DAG is used to model a wide variety of Problems graph from node 1 node... Sort
on a DAG for it-, Three Address code for the Love of Physics - Walter Lewin May. Is eliminated which can be made available outside the block set does not contain cycle. + 1 the vertex set of G is
denoted V ( G ), or just Vif is. As dead code elimination and common sub expression elimination are then applied traversal from! Common sub expression elimination are then applied approach is widely
used and many authors provide test sets on the (... Of Problems of choice in epidemiology - Using Azure AD v2 for acyclic to... English Corpus a proof structure is correct if all its correctness
graphs are.... Detecting the common sub-expressions directed graph that has a direction as well a. Which variables should be included, depend on the path which the graph is a graph without cycles (
cycle! Single node in the case of a tree with ' n ' vertices has ' n-1 ' edges process continue. Shows the values of paths acyclic graph example be used to model a wide of... Graphs ( DAGs ) possède
pas de circuit [ 1 ] ordering constraints + S7 and PROD = PROD S7... That they have a single disconnected graph graphs ( DAGs ) ( see [ 5, 8 for... A+B ) has been done outside the block whose
computed value can be generated by the! Data structure for data science acyclic graph example data engineering workflows graph from node 1 to node 6 some object piece... ) has been done outside the
block whose computed value can be into. Have a single disconnected graph graph but not all graphs will be trees 21 ; (. Collection of trees is called as a null graph data engineering workflows which.
Local common sub-expressions and avoiding the re-computation of the vertices and there are no edges in.. Make a topological sort on a DAG is constructed for optimizing the basic.... And is not
acceptable no cycles is called a cyclic graph any edges in it single indicating. Least one source, and at least one cycle is known as branches into single! Theorem Every finite DAG has at least one
cycle is known as a cyclic graph exist any with... Many applications in scheduling for systems of tasks with ordering constraints with ' n vertices! From node 1 to node 6 following rules are used
extensively by popular projects like Apache Airflow Apache. Of this graph ( the thick black line ) is eliminated given (! 5, 8 ] for instance ) be used to model a wide variety of Problems once and
stored the. Cycle, that is no node can be made available outside the block minutes a... Graph with no cycles is called an acyclic graph for the Love of Physics - Lewin. ( DAGs ) ( z-table,
chi-square, t-dist etc exist any node the... Also called wait-for-graphs are necessary graph, we can eliminate because S1 = S4 on! For acyclic graphs to indicate precedences among events
dagpartitioning 3 ( a is. A dashed curve and identify local common sub-expressions, re-write the basic block DAG • which variables be... These edges are directed, which means to say that they have a
single node in the above are... With the same value to determine the names whose computation has been done outside block! Les tokens PSG et JUV bondissent de 80 à 160 % après leur listing
TheCoinTribune... And Apache Spark Integer = the set for the given basic block I = I 1! Dag online tool •Build your own DAG in the identifier T1 and later... Acyclic, the terms, however, depend on
the field if there exists any node with same! Already done above ) V ( G ), or just Vif there is complete! Is constructed for optimizing the basic block is-, after eliminating S4, S8 and S9, we do
have! The form of lines ( or edges ) going from one node to another ( see 5. Whose computed value can be made available outside the block but used inside block... Video lectures by visiting our
YouTube channel LearnVidFun restricted types of graphs, are. Swissborg ( CHSB ): SMART YIELD, SMART ENGINE, BONUS D'INSCRIPTION EXCLUSIF online •Build... Is called a forest cycle, that is no node can
be made outside. Graph from node 1 to node, you can get step-by-step solutions to questions! Une hiérarchie S9, we can optimize S8 = PROD + S7 an infinite is. Nodes ) always represent the names whose
computation has been done outside the block understanding directed. That does not contain any cycle in linear time leaf nodes ) always represent the names whose has. 43 ( 6 ):1273-1279. doi: 10.1093/
schbul/sbx013 to gain better understanding about acyclic! Structure is correct if all its correctness graphs are acyclic at least one is... Paths from node 1 to node, you will never visit the same
node twice node 6 which to... Of standard directed acyclic graph is acyclic, the degree of each vertex is 2 structure for data /! Structure for data science / data engineering workflows detecting the
common sub-expressions and the. Made to find if there exists any node with the same since =... The ending node starting from the module containing the entry point of the application peut être vu une! | {"url":"http://longertwits.vickythegme.com/king-ecbert-olq/53b626-best-english-grammar-book-uk","timestamp":"2024-11-04T16:57:49Z","content_type":"text/html","content_length":"34897","record_id":"<urn:uuid:4eba4f3e-19f8-4eb1-bfd8-89bafa60242d>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00663.warc.gz"} |
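The linear-time topological sort mentioned above can be sketched with Kahn's algorithm (a standard approach; this particular implementation is illustrative, not from the original page):

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm: topological order of a DAG in O(V + E) time.

    n: number of vertices 0..n-1; edges: list of (u, v) pairs meaning u -> v.
    Raises ValueError if the graph contains a cycle.
    """
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # Every finite DAG has at least one source: start from all of them.
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != n:
        raise ValueError("graph has a cycle")
    return order
```

If some vertex is never emitted, the remaining edges must close a cycle, which is why the length check doubles as a cycle detector.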
Linear filtering using overlap add method
Q. The overlap-add method deals with the principle that - Published on 27 Nov 15
a. The linear convolution of a discrete-time signal of length L and a discrete-time signal of length M produces a discrete-time convolved result of length L + M - 1
b. The linear convolution of a discrete-time signal of length L and a discrete-time signal of length M produces a discrete-time convolved result of length L + M
c. The linear convolution of a discrete-time signal of length L and a discrete-time signal of length M produces a discrete-time convolved result of length 2L + M - 1
d. The linear convolution of a discrete-time signal of length L and a discrete-time signal of length M produces a discrete-time convolved result of length 2L + 2M - 1
ANSWER: The linear convolution of a discrete-time signal of length L and a discrete-time signal of length M produces a discrete-time convolved result of length L + M - 1
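For illustration (not part of the original quiz), this L + M − 1 property is exactly what the overlap-add method exploits for block-wise FFT filtering: each length-L block produces L + M − 1 output samples, and the M − 1 tail samples are added into the next block's output. A minimal NumPy sketch:

```python
import numpy as np

def overlap_add(x, h, L=128):
    """Filter x with impulse response h by the overlap-add method.

    Each length-L block of x is linearly convolved with h (length M),
    giving L + M - 1 samples per block; the M - 1 overlapping tail
    samples are accumulated into the next block's output region.
    """
    M = len(h)
    N = L + M - 1                      # linear-convolution length per block
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]
        # FFT-based linear convolution of one block (zero-padded to N)
        seg = np.fft.irfft(np.fft.rfft(block, N) * np.fft.rfft(h, N), N)
        y[start:start + len(block) + M - 1] += seg[:len(block) + M - 1]
    return y
```

The result matches a direct linear convolution (`np.convolve(x, h)`) up to floating-point error, and the total output length is L + M − 1 generalized to the full signal: len(x) + M − 1.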
Core-compact+well-filtered T0=sober locally compact
Last time, I motivated the construction, due to X. Xu, Ch. Shen, X. Xi and D. Zhao [3], of the well-filterification Wf(X) of a space X, by saying that it was needed to understand their proof of the fact
that every core-compact well-filtered T[0] space is sober, and hence also locally compact. This solves a question asked by X. Jia, and first solved positively by J. Lawson and X. Xi [2].
While polishing up Xu, Shen, Xi, and Zhao’s proof, I realized that the key was a new form of the Hofmann-Mislove theorem for well-filtered spaces, which is interesting in its own right. I will
describe it and prove it below.
Then I realized that one could simplify their proof. Their argument relies on the following theorem [3, Theorem 6.15]: in a core-compact space, every irreducible closed subset is a WD subset. The
proof of the latter is technical, but from there, it is not hard to show that every core-compact well-filtered T[0] space is sober. Interestingly, there is a more direct proof: as we will see,
every core-compact well-filtered T[0] space is locally compact, and this will be much simpler. We will then conclude, since every locally compact well-filtered T[0] space is sober (Proposition 8.3.8
in the book).
A Hofmann-Mislove theorem for well-filtered spaces
Remember the Hofmann-Mislove theorem (Theorem 8.3.2 in the book): in a sober space X, every Scott-open filter F of open subsets of X is the family of open neighborhoods of a unique compact saturated
set Q, namely ∩F.
You don’t have that in a well-filtered space. However, I claim that this works if F is countably generated, namely if there is a countable descending chain of sets A[n], n ∈ ℕ, (i.e., A[0 ]⊇ A[1 ]⊇ …
⊇ A[n] ⊇ …), such that F is equal to the collection of open sets U that contain some A[n]. (Note that I am not requiring that the sets A[n] be open themselves, but that will not be important in the proof.)
In the proof I give below, you should recognize arguments similar to the usual proof of the Hofmann-Mislove theorem, combined with arguments similar to those used in the de Brecht-Kawai theorem
(given any well-filtered second-countable space X, the upper Vietoris and Scott topologies on Q(X) coincide—Matthew de Brecht insists that the key ideas come from M. Schröder’s papers, and although I
still don’t quite understand everything that M. Schröder says, Matthew must be right), where some pretty funny sets are shown to be compact, and this requires countable index sets. The key argument
in the following proof is taken from X. Xu, Ch. Shen, X. Xi and D. Zhao’s proof of [3, Theorem 6.15], where it is somehow hidden.
Theorem 1 (à la Hofmann-Mislove). Let X be a well-filtered space. Every countably generated Scott-open filter F of open subsets of X is the family of open neighborhoods of a unique compact saturated
set Q, namely ∩F.
Proof. Let F be generated by countably many sets A[n], n ∈ ℕ, where A[0 ]⊇ A[1 ]⊇ … ⊇ A[n] ⊇ …. We recall that this means that the open sets U in F are exactly those that are supersets of at least
one A[n].
We start as in the proof of the Hofmann-Mislove theorem. We let Q be the intersection ∩F of all the open sets in F. We claim that F is exactly the family of open neighborhoods of Q. (Please note that
we do not know yet that Q is compact.)
Reasoning by contradiction, we assume that there is an open neighborhood of Q that is not in F. The collection E of open neighborhoods of Q that are not in F is therefore non-empty. With the
inclusion ordering, E is a dcpo, exactly because F is Scott-open. Hence we can use Zorn’s Lemma: there is a maximal open neighborhood U of Q that is not in F.
In the traditional proof of the Hofmann-Mislove theorem, we then use the fact that F is a filter to obtain that the complement of U is an irreducible closed set, and we conclude by sobriety… but we
do not have sobriety here.
This is the point where we have to use that F is countably generated instead. Since U is not in F, it does not contain any A[n], so there is a point x[n] in A[n] and not in U, for every n ∈ ℕ.
Let K[n] be the set ↑{x[m] | m≥n} = ↑{x[n], x[n+1], …}. As in the de Brecht-Kawai(-Schröder) argument, this is the upward closure of an infinite set of points, hence it is not immediately obvious
that K[n] is compact, but we claim that it is nonetheless. Let (V[i])[i ∈ I] be an open cover of K[n]. We wish to extract a finite subcover:
• First, we imagine that for every index i ∈ I, there are infinitely many numbers m≥n such that V[i] does not contain x[m]. We will soon see that this is impossible.
For every index i, for every number k, A[k] is not included in U ∪ V[i]: indeed, picking some number m above both n and k such that V[i] does not contain x[m], we have that x[m] is in A[m] hence
also in the larger set A[k], while x[m] is not in U and not in V[i] by construction.
Since the elements of F are exactly the open supersets of some A[k], we have just shown that for every index i ∈ I, U ∪ V[i] is not in F. However, remember that U was chosen maximal among the
open sets (containing Q) that are not in F. Therefore, U ∪ V[i] = U. Equivalently, V[i] is included in U, and this is true for every i ∈ I.
It follows that K[n] ⊆ ∪[i] [∈ I] V[i] ⊆ U, which is impossible since x[n] (for example) is in K[n] but not in U.
• Since the previous case is impossible, there is an index j, and there is a number k≥n such that V[j] contains x[m] for every m≥k. Then we can extract a finite subcover as follows: x[n] is in ∪[i]
[∈ I] V[i], hence in some V[i[n]] (I write i[n] instead of a doubly-indexed subscript, because HTML does not know about double indices); similarly x[n+1] is in some V[i[n+1]], …, x[k–1] is in some V[i[k–1]], and
the remaining points x[k], x[k+1], … are all in V[j]; therefore V[i[n]], V[i[n+1]], …, V[i[k–1]], and V[j] form a finite open cover of K[n].
We have just shown that every open cover (V[i])[i ∈ I] of K[n] contains a finite sub cover. Therefore K[n] is compact, as promised. It is of course saturated as well.
Note that, as in the de Brecht-Kawai(-Schröder) argument, the fact we can index the sets A[n] by natural numbers, and not by an arbitrary directed preordered set, is crucial. We can afford this,
because F is countably generated.
Now we use that X is well-filtered at last. The family (K[n])[n ∈ ℕ] is filtered (in fact a descending chain), so K=∩[n] [∈ ℕ] K[n] is compact saturated. Also, no K[n] is included in U, since x[n] is
in K[n] but not in U, so well-filteredness tells us that K is not included in U either.
We also have that K[n] is included in A[n] for every number n. Indeed, every point x[m] with m≥n is in A[m] hence in A[n]. It follows that K is included in every element U of our original countably
generated Scott-open filter F: for every such U, U is a superset of some A[n], which then contains K. In turn, this implies that K is included in ∩F = Q. Now recall that U contains Q: U was built as
a maximal open neighborhood of Q that is not in F. Therefore K is included in U. This contradicts the conclusion of the previous paragraph!
We have reached a contradiction, so our initial assumption was wrong: F is exactly the family of open neighborhoods of Q.
It remains to show that Q is compact saturated. The saturated part is obvious. For compactness, we proceed as in the final steps of the usual proof of the Hofmann-Mislove theorem: Let (V[i])[i ∈ I]
be a directed open cover of Q, so ∪[i] [∈ I] V[i] contains Q, and by the result we have just shown, ∪[i] [∈ I] V[i] is in F; since F is Scott-open, some V[i] is in F, so V[i] is a superset of Q. ☐
Remark. As Xiaoquan Xi mentioned to me (Thursday, September 26th, 2019), the proof above never uses the full power of well-filteredness. Instead, call a space ω-well-filtered if and only if, given
any descending sequence (K[n])[n ∈ ℕ] of compact saturated sets, and any open set U such that ∩[n] [∈ ℕ] K[n] is included in U, some K[n] is already included in U. Then Theorem 1 still holds under
the weaker assumption that X is ω-well-filtered, not well-filtered.
Scott-open filters of open sets
The second important property that we will use—or rather, we will use a refinement of it—is that the Scott topology on the dcpo O(X) of open subsets of a core-compact space X has a base of Scott-open
filters. Indeed, since X is core-compact, O(X) is a continuous dcpo, and the Scott topology of every continuous dcpo (even poset) has a base of Scott-open filtered sets: this is Proposition 5.1.19 in
the book.
This means that, for every open subset U of X, for every Scott-open subset 𝒰 of O(X) such that U ∈ 𝒰, there is a Scott-open filter F such that U ∈ F ⊆ 𝒰. The way F is built is clever, and is a
construction of J. Lawson’s: since X is core-compact, O(X) is a continuous dcpo (and I usually write ⋐ for its way-below relation); so we can find an element U[0] ⋐ U of 𝒰, then another element U[1]
⋐ U[0], then U[2] ⋐ U[1], etc., all in 𝒰. We define F as the collection of open sets V such that U[n] ⋐ V for some n (showing that F is Scott-open), or equivalently such that U[n] is included in V for
some n (which shows that the intersection of any two element of F is still in F).
Fantastically enough, that F we have just built is countably generated. Hence we have:
Lemma 2. For every core-compact space X, the Scott topology on O(X) has a base of countably generated Scott-open filters of open subsets of X.
There is nothing special with O(X) here, and the same argument shows the following more general property, refining Proposition 5.1.19 in the book:
Fact 3. The Scott topology on a continuous poset has a base of countably generated Scott-open filters.
However, this allows us to give a simple proof of:
Proposition. Every core-compact (ω-)well-filtered space is locally compact.
Proof. Let X be a core-compact (ω-)well-filtered space, let x be a point in X, and let U be an open neighborhood of x in X. Since X is core-compact, we can find another open neighborhood V of x such
that V ⋐ U. The collection ↟V={W ∈ O(X) | V ⋐ W} is an open neighborhood of U in O(X), so by Lemma 2 there is a countably generated Scott-open filter F included in ↟V and which contains U. We now use
Theorem 1 (see also the subsequent Remark in the ω-well-filtered case): the set Q=∩F is compact saturated. Clearly Q is a superset of V, hence a compact neighborhood of x, and Q is included in U since U ∈ F. ☐
I have already said that every locally compact well-filtered T[0] space is sober (Proposition 8.3.8 in the book). Hence we obtain the announced theorem:
Theorem. Every core-compact well-filtered T[0] space is sober. ☐
There are two remarkable points to be made here.
The first one is that we can summarize the whole situation as follows. Since every core-compact sober space is locally compact, and since every locally compact well-filtered space is sober, we have
the following array of implications:
• sober ⇒ well-filtered
• locally compact ⇒ core-compact,
and any pair of properties, one taken from each line, implies all of them.
The second remarkable thing is that countability entered the picture in a rather unexpected way here—again. That still amazes me.
1. Guohua Wu, Xiaoyong Xi, Xiaoquan Xu, and Dongsheng Zhao. Existence of well-filterifications of T[0] topological spaces. arXiv 1906.10832, July 2019. Submitted.
2. Jimmie Lawson and Xiaoyong Xi. Well-filtered spaces, compactness, and the lower topology. 2019. Submitted.
3. Xiaoquan Xu, Chong Shen, Xiaoyong Xi, and Dongsheng Zhao. On T[0] spaces determined by well-filtered spaces. arXiv 1909.09303, September 2019. Submitted.
— Jean Goubault-Larrecq (October 20th, 2019)
A rapid method for turbomachinery aerodynamic design and optimization using active subspace
In recent years, with the development of aviation power systems, electric-driven distributed propulsion systems have become a research hotspot and have been applied to small and medium-sized aircraft
(Yang et al., 2021). Compared to traditional propulsion engines burning fossil fuel, an all-electric propulsion system has clear advantages in energy efficiency, operating cost, and environmental impact. In thin-haul aviation operations, Distributed Electric Propulsion (DEP) aircraft are anticipated to reduce energy costs by almost 84% relative to conventionally propelled aircraft, which in turn cuts operating costs by nearly 31% (Kim et al., 2018). Electric propulsion also significantly reduces noise during takeoff and landing, together with emissions of incomplete-combustion products such as CO2, SO2, and NO2. An electrically driven ducted fan is a key component of the electric propulsion system. Compared with open rotors and propellers of the same size, a ducted fan not only produces more power with less noise (Wick et al., 2015) and allows easier control of the flow near the blade edges (Perry et al., 2018), but the ring effect of the duct also makes the aerodynamic structure more compact, improves safety, and further lowers noise (Li and Gao, 2005). This greatly benefits the aircraft's overall efficiency, capability, and flight stability.
The aerodynamic design optimization of fan blades has been a topic of interest for several decades. Turner et al. introduced the turbomachinery design system T-AXI, which incorporated a ducted-fan blade geometry generator allowing the visualization and calculation of the aerodynamic characteristics of various blade components (Turner et al., 2010). More recently, Nsukka et al. proposed a simplified method for designing fan blade profiles using conformal mappings based on the Zhukovsky transformation. Additionally, Denton introduced the open-source turbomachinery aerodynamic design platform Multall, which includes a meanline design module, a blade geometry generator, and a 3D flow solver (Denton, 2017).
solver in simulating internal flow in low-speed turbomachinery, Piero et al. utilized the Multall 18.3 software to predict the steady-state flow field in the annular sector of a 315mm tubular axial
fan (Danieli et al., 2021). The results indicated that the predicted pressure curve and the aerothermal efficiency tendency were in good agreement with experimental data, as well as with results from
the state-of-the-art commercial software. Various optimization methods have been employed in the design of fan blades, leading to improved efficiency and performance. For instance, Cho used the
gradient optimization method to design a ducted fan blade with the same efficiency but a 30% larger tip clearance (Cho et al., 2009). In general, most optimization methods rely on gradient or fuzzy
positive correlation in search of an optimal design, and the curse of dimensionality remains a key problem. For turbomachines with tens of design parameters per blade row, numerous trials are
required to search for an optimal design. Thus, adopting high-fidelity simulations in the optimization process for even a medium number of design parameters can be computationally expensive and time-consuming.
One emerging approach is the so-called “active subspace optimization method,” which identifies the primary direction in a design space, and quite promising progress was achieved in several
gas-turbine-related applications. For instance, Ashley et al. established a subspace to design temperature probes with the goal of minimizing hysteresis pressure loss and the ratio variation with Mach
number. They obtained a relatively excellent temperature probe design (Scillitoe et al., 2020) by weighing the impact of design parameters on the design goal using linear combination weights.
Additionally, Pranay used the active subspace method to reduce the dimensionality of a 25-dimensional fan blade design space to a 2-dimensional active subspace (Seshadri et al., 2018). They then
created 2D contour maps to visualize the relationship between four target variables: cruise efficiency, cruise pressure ratio, maximum climb flow capacity, and sensitivity to fan manufacturing
errors. Based on these maps, they recommended optimizing the blade design by exploring the high-dimensional design space. More recently, Wang et al. and Wei et al. applied the active subspace method
to the uncertainty quantification (Wang et al., 2020, 2021) and design optimization (Wei et al., 2023) in turbulent combustion. The scope of the paper is to take a further step and develop a toolkit
for rapid optimization of turbomachinery blades using the active subspace method and the Multall turbomachinery design platform. The methodology involves four main steps: building a numerical framework around the Multall design system, defining and sampling the target design space, identifying the active subspace by sensitivity analysis, and constructing and solving the optimization equation in the reduced space.
Results and discussion
A previously designed ducted fan for general aviation was selected to demonstrate the effectiveness of the proposed method. This section is organized as follows: The first subsection describes the
development of a Python-based numerical framework. This software platform allows one to model the performance of the ducted fan using numerical methods. Using Python, one can automate many of the
calculations and explore the design space quickly. The second subsection defines the target design space. It involves specifying the range of values for each design parameter, such as reaction, flow
coefficient, workload, fan diameter, etc., sampling over the design space of interest, and generating a set of candidate designs. The third subsection analyses sensitivity, including how the sample size affects the active subspace dimension and eigenvalues. The fourth subsection focuses on constructing and solving the optimization equation. It involves using mathematical techniques
to construct an equation that relates the active subspace variables to the adiabatic efficiency of the ducted fan. One can then solve this equation to find the optimal values of the design variables
that maximize adiabatic efficiency. The last subsection documents the results of the optimal aerodynamic design.
Numerical framework
All the CFD results reported in this paper were obtained using the in-house flow solver of the Multall turbomachinery design system. The system comprises three related programs written in FORTRAN77:
MEAGEN, STAGEN, and a 3D solver. MEAGEN performs a 1D calculation to obtain velocity triangles, set annulus boundaries, generate initial blade shapes, and twist them for a free/forced vortex design.
The output file from MEAGEN serves as an input file for STAGEN, which defines the blade shapes. The 3D blade is generated by stacking and combining 2D blade shapes into stages. Finally, the Multall
system reads the input file written by STAGEN and performs a 3D multistage calculation to predict the detailed flow pattern and overall performance. The roadmap of the design system is shown in
Figure 1.
To automate the process, it would be beneficial to integrate the Multall turbomachinery design system within a Python-based blade optimization framework. One way to achieve this is by using function
libraries, which are simple and cost-effective. Using a dynamic link library (DLL) instead of a static link library allows for repeated calls without wasting resources and can be changed without
recompiling. Given the system’s structure, it is necessary to convert the three linked programs into DLLs, each corresponding to a specific aspect of the system. The main program can then call these
DLLs in sequence, as shown in Figure 2. This approach enables more seamless integration of the Multall design suite with the Python-based optimization framework, making it easier for users to design
and optimize turbomachinery blades.
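The sequencing logic described above can be sketched independently of the actual DLLs (whose names and entry points are not given in the paper; with real builds, each stage would be loaded via `ctypes.CDLL` and its exported routine called). Here the three stages are stand-in Python callables, so the chaining pattern itself is testable:

```python
def run_pipeline(stages, payload):
    """Run design stages in order, feeding each stage's output to the
    next, mirroring the MEAGEN -> STAGEN -> Multall chain."""
    for stage in stages:
        payload = stage(payload)
    return payload

# Hypothetical stage wrappers: in practice these would wrap
# ctypes.CDLL(...) calls into the three DLLs; here they are stubs
# that record what each stage contributes.
def meagen(spec):  return spec + ["meanline design"]
def stagen(spec):  return spec + ["blade geometry"]
def multall(spec): return spec + ["3D flow solution"]
```

The point of the design is that each stage only sees its predecessor's output, just as STAGEN reads the file written by MEAGEN, so stages can be swapped or re-run individually during optimization.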
Target design space and sampling strategy
This section describes the exercise conducted on a single-stage ducted fan. It focuses on generating multiple blade-shape design samples using Gaussian random sampling. First, a design space of 16
parameters is established, listed in Table 1. The blade-shape design parameters serve as input parameters, while the three aero-thermal performance metrics are objectives to train the active
subspace. This approach helps identify the input parameters that impact the performance metrics most, allowing for more efficient optimization of blade designs.
Table 1.
No. Parameter Symbol Nominal value Lower limit Upper limit Unit
1 Reaction Ω 0.6 0.54 0.66 –
2 Flow coefficient Φ 0.7643 0.68787 0.84073 –
3 Loading coefficient ψ 0.6269 0.56421 0.68959 –
4 Radius R 0.325 0.2925 0.3575 m
5 Blade axial chord 1st row b[ax_1st] 0.12 0.108 0.132 m
6 Blade axial chord 2nd row b[ax_2nd] 0.1 0.09 0.11 m
7 Row gap G[row] 0.2 0.18 0.22 chord
8 Stage gap G[stg] 0.5 0.45 0.55 chord
9 First row deviation angle δ[1st row] 6 3 9 deg
10 Second row deviation angle δ[2nd row] 6 3 9 deg
11 First row incidence angle i[1st row] −4 −6 −2 deg
12 Second row incidence angle i[2nd row] −4 −6 −2 deg
13 First row LE QO angle S[LE 1st row] 88 84 92 deg
14 First row TE QO angle S[TE 1st row] 92 88 96 deg
15 Second row LE QO angle S[LE 2nd row] 92 88 96 deg
16 Second row TE QO angle S[TE 2nd row] 88 84 92 deg
The design parameters are defined around nominal values and sampled using Gaussian random sampling with N = 344. The range of each parameter is listed in Table 1. Each sample x^j is a 16-dimensional vector whose elements fall within the range [xl, xu]; it is drawn component-wise from a normal distribution with mean μ and standard deviation σ.
Here x^j represents the jth sample of the input parameter vector, μ is an m-vector equal to the average of xl and xu, i.e. μ = (xl + xu)/2 (Equation (3)), and, according to the three-sigma rule, σ is defined as σ = (xu − xl)/6, so that essentially all samples fall within [xl, xu].
As to the sample size requirement, the number of samples must be greater than or equal to G = αk(m+1)ln(m). An oversampling factor, α, between 2 and 10 is typically selected in practical applications. This
ensures that the sample is large enough to capture the variability in the data and provide statistically significant results. Lastly, a normalization process is employed to ensure that the analysis
is not affected disproportionately by parameters with larger values. The normalization process involves transforming the input parameter vector x^j of each sample to a standard range. This standard
range centers all the transformed parameters at zero, making them dimensionless and giving them equal weight in the analysis. The standard range of the transformed parameters is [−1, 1]^m, where m is the dimension of the design space. The normalization is carried out element-wise as x^j_norm = 2(x^j − μ)/(xu − xl) (Equation (4)).
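A minimal NumPy sketch of this sampling and normalization step (the clipping of stray out-of-range draws to the bounds is my own assumption; the paper does not say how they are handled):

```python
import numpy as np

def sample_design_space(xl, xu, n_samples, seed=0):
    """Gaussian sampling of a bounded design space.

    mu is the midpoint of each parameter range (Equation (3)) and sigma
    follows the three-sigma rule, so essentially all samples land in
    [xl, xu]; samples are clipped to the bounds and then normalized to
    [-1, 1]^m (Equation (4)).
    """
    xl, xu = np.asarray(xl, float), np.asarray(xu, float)
    mu = (xl + xu) / 2.0            # Equation (3): midpoint of the range
    sigma = (xu - xl) / 6.0         # three-sigma rule
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    x = np.clip(x, xl, xu)          # keep stray samples inside the range
    x_norm = 2.0 * (x - mu) / (xu - xl)   # Equation (4): map to [-1, 1]^m
    return x, x_norm
```

The normalization makes parameters with very different magnitudes (e.g. a 0.325 m radius versus a 0.6 reaction) carry equal weight in the subsequent subspace analysis.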
Active subspace identification
This study constructed active subspaces for three critical aero-thermal parameters of the ducted fan. The first objective is to optimize the adiabatic efficiency of the single-stage fan, which is the
primary focus of the study. The second objective is the flow capacity of the fan, which is restricted in this study. The third objective is to ensure that the pressure ratio of the fan is equal to
the pressure ratio of the baseline design.
In terms of adiabatic efficiency
An active subspace for adiabatic efficiency is sought to capture its variation in 16-dimensional space with fewer dimensions. The goal is to construct an active subspace that reduces the
dimensionality to the expected dimension, which is determined by evaluating the gap in the eigenvalues (Constantine, 2015).
The first step is to calculate the subspace eigenvalues of each dimension using gradient decomposition. If gradients are not available, finite difference approximations can be used to obtain
approximate values of the gradient. In this study, the small design space makes it feasible to use finite difference approximations. Eigenvalues λk and corresponding eigenvectors ηk are obtained for
each dimension k = 1, …, 16. The eigenvalues are then plotted on a log scale, as shown in Figure 3. Figure 3(a) shows that the largest gap lies between the 1st and 2nd index, indicating that the
most suitable active subspace is one in this case. Figure 3(b) plots the estimated eigenvalues along with the bootstrap intervals and shows that the calculation error in the scenario of a single
dimension is the smallest.
The main idea behind subspace dimensionality reduction is to use matrix multiplication between the input parameter vector and a tall transformation matrix, reducing the high dimensionality m to a
lower dimension s. The transform zj=MTxj is called a forward map, where zj∈Rs, xj∈Rm, M∈Rm×s. It maps the m-dimensional design vector to an s-dimensional coordinate. The transformation matrix is
determined by the eigenvalues and eigenvectors, and a large gap between the sth and (s+1)th eigenvalue indicates a strong univariate trend between the objective yi and the map of the combined matrix
of the first s eigenvectors, M=(η1,η2,⋯,ηs) (Constantine, 2015). In this study, the first eigenvalue was found to have the largest gap, indicating that the transform matrix M1 for the efficiency
active subspace is a matrix stacked with the first eigenvector. This matrix is described:
Furthermore, Figure 4 shows the relationship between the forward map z and the adiabatic efficiency y1. The figure depicts a manifold relationship between the map of 16 design parameters and
adiabatic efficiency y1, indicating that the most significant direction of the design parameters space, along which the largest change of the adiabatic efficiency occurs, has been successfully
identified. As shown in the figure, adiabatic efficiency varies as a quadratic function curve along a single vector in the 16-dimensional design space. This confirms that the active subspace has
reduced the high-dimensional design space to a lower dimensional space capturing the most significant variations in the objective.
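The identification pipeline just described — finite-difference gradients, eigendecomposition of the average gradient outer product, gap detection, forward map — can be sketched as follows. The objective here is a hypothetical ridge function standing in for the CFD-evaluated efficiency, not the paper's fan model:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 16, 344                      # design-space dimension, sample count

# hypothetical objective varying along a single direction a (a "ridge")
a = rng.standard_normal(m)
a /= np.linalg.norm(a)
f = lambda x: (a @ x) ** 2

# central finite-difference gradients at N normalized designs
h = 1e-6
X = rng.uniform(-1, 1, size=(N, m))
G = np.empty((N, m))
for j in range(N):
    for k in range(m):
        e = np.zeros(m)
        e[k] = h
        G[j, k] = (f(X[j] + e) - f(X[j] - e)) / (2 * h)

# eigendecomposition of the average outer product of gradients
C = G.T @ G / N
lam, V = np.linalg.eigh(C)
lam, V = lam[::-1], V[:, ::-1]      # sort descending

# a large gap after the first eigenvalue signals a 1-D active subspace;
# the transform matrix M1 is the first eigenvector, and z = M1^T x
M1 = V[:, :1]
z = X @ M1
```

For this synthetic ridge the first eigenvalue dominates by many orders of magnitude and the recovered eigenvector aligns with the true direction, mirroring the gap seen in Figure 3(a).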
In terms of mass flow rate
Following the same procedure, the active subspace for mass flow is identified. As with adiabatic efficiency, the largest gap between adjacent eigenvalues in the scenario of mass flow also lies
between the 1st and 2nd index, shown in Figure 5(a). Additionally, a single dimension also yields the smallest error, shown in Figure 5(b). Lastly, the transform matrix, M2, is obtained. The scatter
plot, Figure 5(c), indicates a fairly linear connection between the subspace M2Tx and mass flow y2.
In terms of pressure ratio
Lastly, the active subspace for the total-to-total pressure ratio is identified. Similarly, the largest gap between adjacent eigenvalues in the scenario of total-to-total pressure ratio also lies
between the 1st and 2nd index, shown in Figure 6(a), and a single dimension yields the smallest error, shown in Figure 6(b). Lastly, a linear connection between the subspace M3Tx and pressure ratio y3
is observed, shown in Figure 6(c).
To summarize, this section discusses the use of active subspaces to identify dominant directions in a high-dimensional design space. By using this technique, the authors were able to reduce the
dimensionality of the design space and identify the most significant design parameters affecting the performance of a ducted fan. Results show that active subspace exists for fan adiabatic
efficiency, mass flow rate, and total-to-total pressure ratio, and the most suitable dimension for all three parameters is one. For adiabatic efficiency, a manifold relationship between the subspace and
adiabatic efficiency is observed, where the adiabatic efficiency varies as a quadratic function curve along a single vector in the 16-dimensional design space. A fairly linear connection between the
subspace and the parameter is concluded for mass flow and total-to-total pressure ratio.
Fidelity analysis
Despite the promising result presented in the above section, questions remain such as independence of the active subspaces, sensitivity of parameters, and accuracy of manifold modelling. This section
investigates the fidelity of the active subspace approach. First, to verify the independence of the previously identified active subspaces, additional data samples can be generated randomly in the
design space and used to compute the subspaces again. If the resulting subspaces are similar to those computed earlier using the limited number of samples, it can be concluded that the subspaces are
independent of the specific data samples used and represent the dominant direction of the objective in the whole design space. To investigate the sensitivity of various design parameters, the
gradient of the objective function for each design parameter can be calculated using the active subspace. This will provide insight into which design parameters impact the objective most and can be
used to guide the optimization process. Lastly, a manifold model can be fitted to the data in the active subspace, which can be done using linear regression for a linear model or polynomial
regression for a quadratic model. The accuracy and fitness of the model can be evaluated using statistical metrics such as the R-squared value and mean squared error.
Independence of active subspace
Two questions need to be addressed to verify the independence of the active subspaces from the data samples. The first question is whether the active subspaces would differ with different sample
contents for the same sample size, and the second question is whether the sample size can affect the result of the active subspaces. This section uses a series of samples of different sample counts
to investigate the effect of the sample size on the active subspaces. The recommended sample size G = αk(m+1)ln(m) is used to set the upper and lower bounds for the smallest sample size for finding a
functional subspace. The constant α is an oversampling factor typically chosen to be between 2 and 10. Gaussian random sampling is used to select numerous data samples from the design space, with
sample sizes ranging from 100 to 500 with an interval of 50. These samples are used to perform simulations and obtain the objective metrics for finding the active subspaces. The results for the
adiabatic efficiency subspace are shown in Figure 7. There is little difference in the eigenvalues for sample size varying from 100 to 500, shown in Figure 7(a). The subspace error and eigenvector
components also change within the permissible error range, as shown in Figures 7(b) and 7(c).
Additionally, the independence of the mass flow rate subspace is also shown in Figure 8. Figures 8(a) and 8(b) demonstrate no significant difference in the existing eigenvalues and subspace errors
for each dimension among different sample sizes. The subspace components also fluctuate only slightly within a very small range. Lastly, Figure 9 shows that the pressure ratio subspace is almost
unaffected by the training samples, as the curve remains the same as the sample size increases. These results demonstrate that the active subspaces discussed in Section 2.3 are independent of the
data samples used in the calculation process.
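The sample-size sweep can be mimicked with a toy gradient model. The dominant direction, noise level, and sample sizes below are illustrative stand-ins for the CFD data, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def top_eigenvector(n_samples, m=16):
    # toy gradient model: a fixed dominant direction plus isotropic noise
    true_dir = np.zeros(m)
    true_dir[3] = 1.0                               # one parameter dominates
    grads = true_dir + 0.2 * rng.standard_normal((n_samples, m))
    C = grads.T @ grads / n_samples                 # gradient covariance
    _, V = np.linalg.eigh(C)
    return V[:, -1]                                 # eigenvector of the largest eigenvalue

# compare the dominant direction recovered at several sample sizes
v_ref = top_eigenvector(500)
overlaps = {n: abs(float(top_eigenvector(n) @ v_ref)) for n in (100, 200, 300, 400)}
# overlaps near 1.0 indicate the subspace is insensitive to the sample size
```

The absolute inner product between first eigenvectors is a simple proxy for the subspace distance; values near 1 across the sweep correspond to the flat curves in Figures 7–9.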
Sensitivity analysis in terms of parameters
In Section 2.4.1, it is demonstrated that the elements of transformed matrices M are independent of the size and content of the samples used in the calculation process. These matrices are used to
transfer the 16D design space to a 1D space, with aero-thermal performances serving as the objective metric. This means that the active subspaces are also a combination of all design parameters, with
coefficients linking each parameter to the significance of the objective. A larger coefficient indicates that the objective value is more sensitive to changes in the related parameter. Figure 10
plots the eigenvector components of each active subspace and the component weights of all design parameters. In Figure 10(a), the adiabatic efficiency active subspace has 16 components associated
with the 16 design parameters. The height of each column in each index corresponds to the influence of the parameters on adiabatic efficiency. The largest perturbation is caused by Radius, followed
by Reaction, First-row deviation angle, and so on. Figure 10(b) shows the distribution of elements for vector M2, which represents the contribution to the dominant direction. This figure shows that
design variables such as First-row deviation angle, Radius, and Flow Coefficient have greater weights in the direction. In contrast, Second-row LE QO angle, Blade Axial Chord 2nd row, and Stage Gap
have less influence. Lastly, Figure 10(c) indicates the significance of the components corresponding to M3 in the active subspace. Increasing the pressure ratio is mainly achieved through changes in
the Radius and Loading Coefficient, with the remaining design variables contributing less to this objective. Overall, these findings highlight the important role that each design parameter plays in
determining the overall performance of the ducted fan, as well as the usefulness of the active subspace methodology in identifying the most critical parameters.
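The ranking step behind Figure 10 amounts to sorting eigenvector components by magnitude. The weights below are hypothetical values for a five-parameter toy case; the paper's M1, M2, M3 each carry 16 components:

```python
import numpy as np

names = ["Reaction", "Flow coefficient", "Loading coefficient", "Radius", "Row gap"]
w = np.array([0.32, -0.05, 0.18, -0.71, 0.02])   # hypothetical eigenvector entries

# rank design parameters by |weight| in the dominant direction;
# the sign only fixes orientation, sensitivity is governed by the magnitude
order = np.argsort(-np.abs(w))
ranking = [(names[i], abs(w[i])) for i in order]
```

Here "Radius" would come out as the most influential parameter, analogous to its leading role in Figure 10(a).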
Fitting curve of active subspace
This section uses polynomial fitting to build fitting curves in the active subspaces. Polynomial fitting aims to establish a functional relationship between the map z=MTx and the objective y using a
polynomial with degree n.
where n can be any positive integer.
The scatter plots in Figures 4(a), 5(c), and 6(c) show the distribution of design samples in each active subspace. Based on these scatter plots, linear or quadratic responses may fit the trend of the
scatter distribution very well. Therefore, first-order and second-order polynomials are used to fit the objectives y1, y2, y3 in their subspaces.
To compute the coefficients associated with the models, Equations (9) and (10) are built using samples. To choose the best function curve fitting the plots, the correlation coefficients of the two
functions in all active subspaces must be compared. Figure 11(a) compares the correlation coefficients of the first-order and second-order polynomials with different sample sizes in the efficiency
subspace. The graph shows that the line of the second-order polynomial fit lies above that of the first-order fit from beginning to end, indicating that the quadratic curve correlates more strongly
with the sample scatter in the efficiency subspace. Furthermore, the quadratic curve's coefficient tends to be stable, while that of the linear fit still drops as the sample size increases. Thus, the
second-order polynomial fitting is chosen, and the related coefficients are computed using Equation (11).
where f1 is the fitted function of the efficiency active subspace based on 300 training samples. The coefficient R1² is 0.873. The polygonal lines in Figures 11(b) and 11(c) show a small distance, and
fitting curves of different sample sizes all share high coefficients. Considering calculation convenience and the minor influence of the two constraint objectives on the efficiency optimization, the
mass flow and pressure ratio active subspaces use first-order polynomial fitting, shown in Equations (12) and (13).
where f2 and f3 are the fitted functions of the mass flow and pressure ratio active subspaces, respectively, both calculated using 300 samples. Their coefficients R2² and R3² are 0.972 and 0.998,
respectively. Finally, the fitting curves with sample scatters are displayed in Figure 12.
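The model-selection step can be reproduced with numpy's polynomial fitting. The scatter below is synthetic, shaped like the quadratic efficiency trend in Figure 4 rather than the paper's actual samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic subspace coordinates and a noisy quadratic response
z = rng.uniform(-1.5, 1.5, 300)
y = 0.93 - 0.02 * (z - 1.2) ** 2 + 0.002 * rng.standard_normal(300)

def r_squared(y, y_hat):
    # coefficient of determination: 1 - residual SS / total SS
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

r2 = {}
for degree in (1, 2):
    coeffs = np.polyfit(z, y, degree)            # least-squares polynomial fit
    r2[degree] = r_squared(y, np.polyval(coeffs, z))
# as in Figure 11(a), the second-order fit scores higher than the first-order fit
```

Comparing R² across degrees and sample sizes is exactly the selection criterion applied to choose Equation (11) over a linear model.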
Optimization of a single-stage fan blade
This section focuses on optimizing a single-stage fan blade with specific parameters shown in Table 1. Using the Multall solver, the adiabatic efficiency, mass flow, and pressure ratio of the fan are
computed as 93.401%, 92.022kg/s, and 1.202, respectively. The main objective of the optimization is to maximize the adiabatic efficiency while keeping the mass flow and pressure ratio constant. An
equation group that includes constraint equations from the mass flow and pressure ratio active spaces and an objective equation from the efficiency space is constructed to achieve this objective. The
equation assembly is presented in Equation (14).
Prior to solving the above equation assembly, it is important to analyze the monotonicity of the objective function with respect to the constraints. By examining the plot of the quadratic function, it
is evident that the objective function first increases and then decreases about the axis of symmetry −b/2a=1.205. Therefore, it is necessary to determine the value range of z1 by solving the auxiliary
maximization problem shown in Equation (15).
This optimization problem is solved using the simplex algorithm, and the resulting design vector x′max is shown in Equation (16), which corresponds to the maximum value of z1. As the maximum value of
z1 is 1.567, which is greater than the axis of symmetry at 1.205, it can be concluded that the maximum efficiency of 94.337% is achieved when z1=1.205.
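Over the normalized design box alone, the maximizer of z1 = M1ᵀx has a closed form: each coordinate sits at the bound whose sign matches its coefficient. The paper's version additionally imposes the constraint equations of Equation (14) and solves with the simplex algorithm; the direction below is a random stand-in for M1, not the paper's eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 16

# hypothetical unit vector standing in for the efficiency eigenvector M1
M1 = rng.standard_normal(m)
M1 /= np.linalg.norm(M1)

# maximize z1 = M1 . x over the normalized design box [-1, 1]^16:
# take each coordinate at the bound matching the sign of its coefficient
x_max = np.sign(M1)
z1_max = float(M1 @ x_max)          # equals the l1-norm of M1
```

With the equality constraints from the mass-flow and pressure-ratio subspaces added, the same maximization becomes a small linear program (e.g. solvable with `scipy.optimize.linprog`).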
Furthermore, Equation (14) can be transformed as a group of indefinite equations, shown in Equation (17).
Since three equations can only solve for three unknowns, the three most sensitive parameters – radius, first-row deviation angle, and loading coefficient – are selected as unknowns in Equation (17).
The other parameters are considered as known values obtained by average random sampling in the design space. However, it was found that Equation (17) was difficult to solve by directly sampling the
entire design space. Only a small region in the design space could be mapped to 1.20471 or greater in the efficiency active subspace. Therefore, an effective method was to limit the sample space to
the vicinity of x′max. Each design parameter was selected from a region centered on x′max in the design space. The length of the region in each dimension was set to be 10% of the length of the design
space.
Lastly, a total of thirty sets of unknowns were sampled using average random sampling, and by solving Equation (17), 30 optimized fan designs were produced. The aero-thermal performances of these
designs were then calculated using the Multall solver, and the results are shown in Figure 13. Figure 13(a) shows that the adiabatic efficiency of the optimized fans was, on average, higher than that
of the baseline design, but it did not reach the highest theoretical efficiency. Some issues with the fitting curves may need to be addressed to improve efficiency further. Figure 13(b) indicates
that the mass flow of the optimized fans was slightly different from the base mass flow, but it centered around 90.656. Based on this result, the authors proposed changing the aimed mass flow from
92.022 to 93.388, as mass flow has a strong linear changing tendency. On the other hand, the pressure ratios of the optimized fans remained largely unchanged compared to those of the base fan, as
shown in Figure 13(c). To correct the target value of the indefinite equation, the authors first noticed that the scatter distribution on the horizontal axis in Figure 11(a) was limited below 0.8,
which made the fitting curve heavily dependent on the data below 0.8 and imprecise at the largest value point. To address this issue, the authors set groups of random aimed values above 0.8 of M1x in
Equation (17) and regenerated the fitting curve with new samples. Then, the aimed value of mass flow in Equation (14) was set to 93.388, and new fan designs were reproduced.
After implementing the corrections discussed above, Figure 14(a) plots the fitting curve of the efficiency, which was generated using 200 new special samples, and an excellent result was obtained, as
seen in Figure 14(b), where the mass flow was around the base value of 92.022 with a 95% confidence limit zone. The fitting function is described in Equation (18).
The best value point of this function was found to be 0.82, which corresponded to the highest efficiency of 94.21%. As a result, the value of 0.82 was set as the final aimed value of M1x in Equation
(17), along with the desired value of the mass flow, to solve the indefinite equation group using the abovementioned method. The results of the optimized fan after adjusting the aim value are
presented in Table 2.
Table 2.
Fan index 1 2 3 4 5 6 7 8 9 10
Adiabatic efficiency 0.944 0.944 0.942 0.943 0.943 0.943 0.943 0.943 0.943 0.943
Mass flow (kg/s) 92.164 92.166 92.07 92.023 91.976 91.966 92.132 92.107 92.065 92.009
Pressure ratio 1.203 1.202 1.202 1.201 1.201 1.202 1.201 1.202 1.202 1.202
Finally, the blade design with the highest efficiency was chosen as the optimal design, with an adiabatic efficiency of 0.944, mass flow of 92.164, and pressure ratio of 1.203. The primary parameters
of the optimized fan are presented in Table 3. Additionally, the blade shape parameters obtained from the optimization process are used to generate a profile file by invoking a customized Multall
solver, which includes the 3D coordinates of the stator and rotor. MATLAB code is then used to convert the profile file into a format readable by the SolidWorks software. The resulting 3D model
of the optimized ducted fan is presented in Figure 15.
Table 3.
Design Parameter Value Unit
Reaction 0.651 —
Flow coefficient 0.811 —
Loading coefficient 0.581 —
Radius 0.337 m
Blade axial chord 1st row 0.112 m
Blade axial chord 2nd row 0.090 m
Row gap 0.215 chord
Stage gap 0.540 chord
First row deviation angle 5.222 deg
Second row deviation angle 3.949 deg
First row incidence angle −2.781 deg
Second row incidence angle −2.618 deg
First row LE QO angle 84.625 deg
First row TE QO angle 95.568 deg
Second row LE QO angle 94.822 deg
Second row TE QO angle 84.494 deg
The preliminary design of turbomachinery aerodynamics traditionally relies heavily on the engineer’s experience, which can be a time-consuming and inaccurate process. To address this challenge, this
paper proposes a novel optimization strategy for turbomachinery aerodynamic preliminary design using active subspace. The method utilizes active subspace techniques to reduce the dimensionality of
the design space, making efficient and accurate exploration of the space possible. The proposed method was applied to a single-stage fan, where the goal was to optimize the adiabatic efficiency while
keeping the mass flow and pressure ratio constant. The following key findings were highlighted:
1. The active subspace optimization method is robust and efficient, requiring minimal trial cost to quickly optimize the aerodynamic design of the single-stage fan blade.
2. The active subspace can be calculated independently of the training sample calculation process.
3. The active subspace represents the primary direction in which the objective changes the most, while the orthogonal direction remains steady, allowing for efficient optimization in a
low-dimensional space.
To conclude, the proposed methodology presents a promising approach for accurate and rapid optimization of turbomachinery aerodynamics design. However, this approach has some limitations, including
the fitting curve’s dependence on data and possible errors. Further exploration of optimization design methods remains critical for preliminary design.
x^j — the vector of blade parameters for the jth training sample
xl — the vector of the lower boundary for the design space
xu — the vector of the upper boundary for the design space
σ — the vector of standard deviations in Gaussian random sampling
μ — the vector of averages in Gaussian random sampling
the vector of normalized x^j
z^j — the forward map of x^j in the active subspace
Section 4.1: Early Numeration Systems - The Nature of Mathematics - 13th Edition
4.1 Outline
A. Basic ideas
1. number
2. counting number
3. numeral
4. numeration system
B. Egyptian numeration system
1. simple grouping system
2. hieroglyphic symbols
3. addition principle
4. repetitive system
C. Roman numeration system
1. Roman numerals
2. subtractive principle
3. multiplicative principle
D. Babylonian numeration system
1. positional system
2. cuneiform symbols
3. properties of numeration systems
a. definition
b. simple grouping system
c. positional system
d. addition principle
e. subtraction principle
f. repetitive system
E. Other historical systems
1. decimal
2. Greek
3. Mayan
4. Chinese
4.1 Essential Ideas
A number is used to answer the question “How many?” and usually refers to numbers used to count objects:
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, …}
This set of numbers is called the set of counting numbers. A numeral is a symbol used to represent a number, and a numeration system consists of a set of basic symbols and some rules for making other
symbols from them, the purpose being the identification of all numbers.
Positive Semidefinite Matrices & Semidefinite Programming | Rafael Oliveira
Symmetric Matrices & Spectral Theorem
A matrix $A \in \mathbb{R}^{n \times n}$ is called symmetric if $A = A^T$. A complex number $\lambda \in \mathbb{C}$ is called an eigenvalue of $A$ if there exists a vector $u \in \mathbb{C}^n$ such
that $Au = \lambda u$. In this case, $u$ is called an eigenvector of $A$ corresponding to $\lambda$.
Spectral Theorem: If $A$ is a symmetric matrix, then
1. $A$ has $n$ eigenvalues (counted with multiplicity),
2. all eigenvalues of $A$ are real,
3. there exists an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$.
In other words, we can write $$ A = \sum_{i=1}^n \lambda_i u_i u_i^T, $$ where $\lambda_1, \dots, \lambda_n \in \mathbb{R}$ are the eigenvalues of $A$, and $u_1, \dots, u_n \in \mathbb{R}^n$ are the
corresponding (orthonormal) eigenvectors.
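The decomposition in the spectral theorem can be checked numerically; `numpy.linalg.eigh` is the symmetric eigensolver and returns real eigenvalues with orthonormal eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                    # a random symmetric matrix

# eigh returns real eigenvalues (ascending) and orthonormal eigenvectors
lam, U = np.linalg.eigh(A)

# reconstruct A as the sum of rank-one terms lambda_i * u_i u_i^T
A_rec = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(4))
```

The reconstruction matches $A$ to machine precision, and $U^T U = I$ confirms the eigenvectors form an orthonormal basis.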
If a symmetric matrix $A$ has only non-negative eigenvalues, then we say that $A$ is positive semidefinite, and write $A \succeq 0$. There are several equivalent characterizations of positive semidefiniteness:
1. all eigenvalues of $A$ are non-negative.
2. $A = B^T B$ for some matrix $B \in \mathbb{R}^{d \times n}$, where $d \leq n$. The smallest value of $d$ is the rank of $A$.
3. $x^T A x \geq 0$ for all $x \in \mathbb{R}^n$.
4. $A = LDL^T$ for some diagonal matrix $D$ with non-negative diagonal entries and some lower triangular matrix $L$ with diagonal elements equal to $1$.
5. $A$ is in the convex hull of the set of rank-one matrices $uu^T$ for $u \in \mathbb{R}^n$.
6. $A = U^T D U$ for some diagonal matrix $D$ with non-negative diagonal entries and some orthonormal matrix $U$.
7. $A$ is symmetric and all principal minors of $A$ are non-negative. Here, by principal minors we mean the determinants of the submatrices of $A$ obtained by deleting the same set of rows and columns.
Semidefinite Programming
Let $\mathcal{S}^m := \mathcal{S}^m(\mathbb{R})$ denote the set of $m \times m$ real symmetric matrices.
A semidefinite program (SDP) is an optimization problem of the form $$ \begin{array}{ll} \text{minimize} & \langle C, X \rangle \\ \text{subject to} & \langle A_i, X \rangle = b_i, \quad i = 1, \
dots, m \\ & X \succeq 0, \end{array} $$ where $C, A_1, \dots, A_m \in \mathcal{S}^n$ and $b_1, \dots, b_m \in \mathbb{R}$. Moreover, $\langle A, B \rangle := \text{tr}(A^T B)$ is the trace inner product.
We can write an SDP in a way similar to a linear program as follows: $$ \begin{array}{ll} \text{minimize} & c^Tx \\ \text{subject to} & A_1 x_1 + \cdots + A_n x_n \succeq B \\ & x \in \mathbb{R}^n,
\end{array} $$ where $A_1, \dots, A_n, B \in \mathcal{S}^m$ and $c \in \mathbb{R}^n$, and we use $C \succeq D$ to denote that $C - D \succeq 0$.
If the matrices $A_i, B$ are diagonal matrices, then the SDP is equivalent to a linear program. Thus, we see that SDPs generalize linear programs.
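A small numerical check of this reduction: for diagonal data the LMI holds exactly when each scalar inequality on the diagonal holds. The matrices here are arbitrary toy values:

```python
import numpy as np

# diagonal data: the LMI x1*A1 + x2*A2 >= B holds iff, entrywise on the
# diagonal, x1*(A1)_jj + x2*(A2)_jj >= B_jj -- ordinary LP constraints
A1, A2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
B = np.diag([1.0, 2.0])

def lmi_holds(x):
    # positive semidefiniteness of the slack matrix, via its eigenvalues
    M = x[0] * A1 + x[1] * A2 - B
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-12))

def lp_holds(x):
    # the equivalent linear-programming feasibility test
    d = x[0] * np.diag(A1) + x[1] * np.diag(A2) - np.diag(B)
    return bool(np.all(d >= -1e-12))

checks = [(lmi_holds(x), lp_holds(x)) for x in ([2.0, 3.0], [0.5, 3.0], [2.0, 1.0])]
```

The two tests agree on every point, which is exactly the sense in which SDPs generalize LPs.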
In a similar way to linear programs, the following are important structural and algorithmic questions for SDPs:
1. When is a given SDP feasible? That is, is there a solution to the constraints at all?
2. When is a given SDP bounded? Is there a minimum? Is it achievable? If so, how can we find it?
3. Can we characterize optimality?
□ How can we know that a given solution is optimal?
□ Do the optimal solutions have a nice description?
□ Do the solutions have small bit complexity?
4. How can we solve SDPs efficiently?
To understand better these questions and the structure of SDPs, we will need to learn a bit about convex algebraic geometry.
Convex Algebraic Geometry
To understand the geometry of SDPs, we will need to understand their feasible regions, which are called spectrahedra and are described by Linear Matrix Inequalities (LMIs).
Definition 1 (Linear Matrix Inequality (LMI)): An LMI is an inequality of the form $$A_0 + \sum_{i=1}^n A_i x_i \succeq 0,$$ where $A_0, \ldots, A_n \in \mathcal{S}^m$.
Definition 2 (Spectrahedron): A spectrahedron is a set of the form $$ S = \{ x \in \mathbb{R}^n : A_0 + \sum_{i=1}^n A_i x_i \succeq 0 \}, $$ where $A_0, \ldots, A_n \in \mathcal{S}^m$.
Note that spectrahedra are convex sets, since they are defined by LMIs, which are convex constraints. Moreover, several important convex sets are spectrahedra, including all polyhedra, circles/
spheres, hyperbola, (sections of) elliptic curves, among others.
When considering SDPs, it is enough to work with a more general class of convex sets, which we call spectrahedral shadows. Spectrahedral shadows are simply projections of spectrahedra onto
lower-dimensional spaces.
Testing Membership in Spectrahedra
To be able to solve SDPs efficiently, a first step is to be able to test membership in spectrahedra efficiently. That is, given a spectrahedron $S = \{ x \in \mathbb{R}^n : A_0 + \sum_{i=1}^n A_i x_i
\succeq 0 \}$ and a point $x \in \mathbb{R}^n$, we want to determine whether $x \in S$. Since $x \in S$ if and only if $A_0 + \sum_{i=1}^n A_i x_i \succeq 0$, this is equivalent to testing whether a
given symmetric matrix is positive semidefinite.
More succinctly, we have the following decision problem:
• Input: symmetric matrix $A \in \mathcal{S}^m$
• Output: YES if $A \succeq 0$, NO otherwise.
An efficient algorithm for this problem is the symmetric gaussian elimination algorithm, which runs in time $O(m^3)$. The algorithm will proceed just as in gaussian elimination, by performing
elementary row operations (without row swapping) to reduce $A$ to an upper triangular form. However, in this case, every time we apply a row operation, which can be encoded by a lower unitriangular
matrix $L$, we also apply the same operation to the columns of $A$ by right-multiplying $A$ by $L^T$. This way, we ensure that $A$ remains symmetric throughout the process.
As the product of lower (or upper) unitriangular matrices is again a lower (or upper) unitriangular matrix, we can see that the symmetric Gaussian elimination algorithm always outputs a diagonal
matrix $D$, and $D$ has non-negative diagonal entries iff $A \succeq 0$.
To see that $D \succeq 0 \Leftrightarrow A \succeq 0$, first note that $D = L A L^T$, where $L$ is a lower unitriangular matrix. Now, $A \succeq 0 \Leftrightarrow z^T A z \geq 0$ for all $z \in \
mathbb{R}^m \Leftrightarrow (L^T z)^T D (L^T z) \geq 0$ for all $z \in \mathbb{R}^m \Leftrightarrow D \succeq 0$.
The above proves that our algorithm is correct, and the running time is $O(m^3)$, since we need to perform $m^2$ elementary row operations, each of which takes $O(m)$ time.
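A minimal implementation of this membership test is sketched below. The zero-pivot case — where a PSD matrix must have the entire remaining row vanish — is handled explicitly, a detail the prose above glosses over:

```python
import numpy as np

def is_psd(A, tol=1e-10):
    """Test A >= 0 by symmetric Gaussian elimination in O(m^3)."""
    A = np.array(A, dtype=float)
    m = A.shape[0]
    for k in range(m):
        d = A[k, k]
        if d < -tol:
            return False                     # negative pivot: not PSD
        if abs(d) <= tol:
            # zero pivot: for a PSD matrix the rest of row k must vanish too
            if np.any(np.abs(A[k, k + 1:]) > tol):
                return False
            continue
        # eliminate below the pivot; by symmetry the trailing block
        # A[k+1:, k+1:] becomes the (symmetric) Schur complement
        for i in range(k + 1, m):
            A[i, k:] -= (A[i, k] / d) * A[k, k:]
    return True
```

For example, `is_psd([[2, 1], [1, 2]])` is `True` (eigenvalues 1 and 3), while `is_psd([[1, 2], [2, 1]])` is `False` (it has eigenvalue −1).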
Application: Control Theory
not required material - to be written here later - please see slides and references for this part
SDPs are used in many areas of mathematics and engineering, including control theory, combinatorial optimization, and quantum information theory. Today we will see an application of SDPs to control
theory, in particular to the problem of stabilizing a linear, discrete-time dynamical system.
A new technique for modeling: Implicit Modeling
Polygonal Modeling
SubD Modeling (Catmull-Clark and other variants)
NURBS Modeling
BREP Modeling
Voxel Modeling
And now Implicit Modeling, which has been devised in response to numerous problems encountered with BREP Modeling.
The problem is the fundamental way that geometry is represented. All the major CAD programs use boundary representations (b-reps) to define solid geometry. B-reps use topology, such as vertices,
edges and faces, which are defined by geometry such as points, curves and surfaces. For example, an edge is a bounded region of a curve and a face is a bounded region of a surface.
B-reps work fine for relatively simple geometry. Older CAD programs sometimes used Boolean operations to combine primitive objects such as cubes and spheres. B-reps are much more versatile, allowing
profiles to be swept and lofted, solids to be shelled and so forth. However, when features are combined with fillets and blends, or large numbers of features are included, calculating the topology
becomes exponentially more demanding on the computer. Many modelling operations involve combining simpler shapes with Boolean and blending operations. B-rep modelling has to calculate all of the new
edges that are formed where faces intersect. For objects with planar faces the individual calculations are relatively simple, but the number of intersections increases by approximately the square of
the number of faces. For intersections between curved surfaces the edges are complex splines, making the calculations considerably more complex. When faces are close to tangent, things get really
Another issue with B-reps can be determining which points in the model are inside the boundary – the solid material. The method is to shoot a ray from the point in an arbitrary direction. If the ray
passes the boundary an odd number of times then the point is inside the boundary. If the ray passes the boundary an even number of times then the point is outside the boundary. Since floating point
arithmetic is used, rounding errors may mistakenly count boundary crossings when the ray is close to the boundary. There are also ambiguities such as when a ray is tangent to a surface or passes
through a vertex – it is not clear whether this counts as a boundary crossing, or as two crossings. Because of these issues, additional checks are required to ensure that b-rep modellers are robust.
This makes them mathematically inefficient.
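The even-odd ray test described above is easiest to see in a 2D analogue (point-in-polygon rather than point-in-solid). This is a minimal sketch, not production CAD code; the half-open edge test `(y1 > y) != (y2 > y)` sidesteps the vertex and tangency ambiguities mentioned above for simple cases:

```python
def point_in_polygon(point, vertices):
    """Even-odd ray test: cast a horizontal ray from `point` towards +x
    and count how many polygon edges it crosses."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the ray's y-coordinate?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the ray's line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside  # odd number of crossings -> inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon((1, 1), square))  # True
print(point_in_polygon((3, 1), square))  # False
```

A robust 3D b-rep implementation would, as the text notes, need extra handling for rays grazing vertices or running tangent to surfaces, plus tolerances for floating-point rounding.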
You’re Probably Right
When anyone mentions statistics or probability being applied to genealogical research, there's usually a sharp reaction. There are some valid questions that would benefit from thoughtful discussion but, unfortunately, the many knee-jerk reactions tend to be for all the wrong reasons.
It’s hard to find a single reason why this topic gets such an adverse reaction since the arguments made against it are rarely put together very carefully. I have seen some reactions based purely on
the fear that any application of numbers means that assessments will be estimated to an inappropriate level of precision, such as 12.8732%. That’s just ludicrous, of course!
In this post, I won’t actually be making a case for the use of statistics since I am still experimenting with an implementation of this myself and it isn’t straightforward. What I will try to do is
identify what is and is-not open to debate, and ideally to add some degree of clarity. Although I have a mathematical background, this only briefly touched on statistics. It is a specialist field,
and many folks will have a skewed picture of it, whether they’re mathematically inclined or not. It is also a technical field and so a few symbols and numbers are inevitable but I will try and
balance things with real-life illustrations.
Statistics is generally about the collection and analysis of data. Despite what politicians might have us believe, statistics proves nothing, and this is important for the purposes of this article. Statistical
analysis can demonstrate a correlation between two sets of data but it cannot indicate whether either is a consequence of the other, or whether they both depend on something else. The classic example
is data that shows a correlation between the sales of sunglasses and ice-cream — it doesn’t imply that the wearing of sunglasses is necessary for the eating of ice-cream.
Mathematical statistics is about the mathematical treatment of probability, but there is more than one interpretation of probability. The standard interpretation, called frequentist probability, uses it as a measure of the frequency or chance of something happening. Taking the roll of a die as a simple example, we can calculate the number of ways that it can fall and so attribute a
probability to each face (1/6, or roughly 16.7%). Alternatively, we could look at past performance of the die and use that to determine the probabilities; a method that works better in the case where
a die is weighted. When dealing with the individual events (e.g. each roll of the die), they may be independent of one another, or dependent on previous events. A real-life demonstration of independent events would be the roulette wheel. If the ball had
fallen on red 20 times then we’d all instinctively bet on black next, even though the red/black probability is unchanged. Conversely, if you’d selected 20 red cards from a deck of playing cards then
the probability of a black being next has increased.
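The contrast between these independent and dependent events is easy to check numerically (a European roulette wheel and a standard 52-card deck are assumed here):

```python
# Roulette spins are independent: 20 reds in a row do not change the
# chance of black on the next spin (European wheel: 18 black of 37 pockets).
p_black_roulette = 18 / 37
print(round(p_black_roulette, 3))  # 0.486

# Card draws without replacement are dependent: after removing 20 red
# cards from a 52-card deck, 26 black and 6 red cards remain.
black, red = 26, 26 - 20
p_black_card = black / (black + red)
print(round(p_black_card, 3))  # 0.812
```

The roulette probability is unchanged no matter the history, while the card probability has risen from 50% to over 81%.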
The other major interpretation of probability is called Bayesian probability, after the Rev. Thomas Bayes (1701–1761), a mathematician and theologian who first provided a theorem expressing how a subjective degree of belief should change to account for new evidence. His work
was later developed further by the famous French mathematician and astronomer Pierre-Simon, marquis de Laplace (1749–1827). It is this view of probability, rather than anything to do with frequency
or chance, which is relevant to inferential disciplines such as genealogy. Essentially, a Bayesian probability represents a state of knowledge about something, such as a degree of confidence. This is
where it gets philosophically interesting because some people (the objectivists) consider it to be a natural extension of traditional Boolean logic to handle concepts that cannot be represented by pairs of values with such exactitude as true/false, definite/impossible, or 1/0. Other people (the subjectivists) consider it to be simply an attempt to quantify personal belief in something.
In actuarial fields, such as insurance, a person is categorised according to their demographics, and the previous record of those demographics is used to attribute a numerical risk factor (and an
associated insurance premium) to that person. This is therefore a frequentist application. Consider now a bookmaker who is giving odds on a horse race. You might think he's simply basing his numbers on the past performance of the horses but you'd be wrong. A good bookmaker watches the horses in the paddock area,
and sees how they look, move and behave. He may also talk to trainers. His odds are based on experience and knowledge of his field and so this is more of a Bayesian application.
Accepted genealogy certainly accommodates qualitative assessments such as primary/secondary information, original/derivative sources, impartial/subjective viewpoint, etc. When we consider the
likelihood of a given scenario then we might use terms such as possible, very likely, or extremely improbable, and Elizabeth Shown Mills offers a recommended list of such terms [1]. Although there is no standard list, we all accept that our preferred terms are ordered, with each being between the likelihoods of the adjacent terms. These lists are not linear; meaning that the relative likelihoods are not evenly spaced. They actually form a non-linear scale [2] since we have more terms the closer we get to the delimiting 'impossible' and 'definite'. In effect, our assessments asymptotically approach these idealistic terms, but never actually get there.
As part of my work on STEMMA®, I experimented with putting a numerical ‘Surety’ value against items of evidence when used to support/refute a conjecture, and also on the likelihood of competing
explanations of something. This turned out to be more cumbersome than I’d imagined, although a better user interface in the software could have helped. The STEMMA rationale for using percentages in
the Surety attribute rather than simple integers was partly so that it allowed some basic arithmetic to assess reasoning. For instance, if A => B, and B => C, then the surety of C is surety(A) *
surety(B). Another goal, though, was that of ‘collective assessment’. Given three alternatives, X, Y, & Z, simple integers might allow an assessment of X against Y, or X against Z, but not X against
all the remaining alternatives (i.e. Y+Z) since they wouldn’t add up to 100%.
Although I didn’t know it, my concept of ‘collective assessment’ was getting vaguely close to something called
conditional probabilities
in Bayes’ work. A conditional probability is the probability of an event (A) given that some other event (B) is true. Mathematicians write this as P(A | B) but don’t get too worried about this; just
treat it as a form of shorthand. Bayes' theorem can be summarised [3] as

P(A | B) = (P(B | A) / P(B)) × P(A)
It helps you to invert a conditional probability so that you can look at it the other way around. A classic example that’s often used to demonstrate this involves a hypothetical criminal case.
Suppose an accused man is considered one-chance-in-a-hundred to be guilty of a murder (i.e. 1%). This is known as the
prior probability
and we’ll refer to it as P(G), i.e. the probability that he’s
uilty. Then some new
vidence (E) comes along; say a bloodied murder weapon found in his house, or some DNA evidence. We might say that the probability of finding that evidence if he was guilty (i.e. P(E | G) is 95%, but
the probability of finding it if he was NOT guilty (i.e. P(E | ¬ G) [4]) is just 10% [5]. What we want is the new probability of him being guilty given that this evidence has now been found, i.e. P(G | E). This is known as the
posterior probability
(yeah, yeah, no jokes please!). The calculation itself is not too difficult, although the result is not at all obvious.
P(G | E) = (P(E | G) / P(E)) × P(G) = (95% / ((95% × 1%) + (10% × 99%))) × 1% ≈ 8.8%
This may just look like a bunch of numbers to many readers, but the mention of finding new evidence must be ringing bells for everyone. If you had estimated the likelihood of an explanation at
such-and-such, but a new item of evidence came along, then you should be able to adjust that likelihood appropriately with this theorem.
So what about a genealogical example? Well, here’s a real one that I briefly toyed with myself. An ancestor called Susanna Kindle Richmond was born illegitimately in 1827. I estimated that there was
a 15% chance that her middle name was the surname of the biological father. If we call this event K, for Kindle, then it means P(K) is 15%. This figure could be debated but it's the difference between the prior and posterior versions of this probability that are more significant. In other words, even if
this was a wild guess, it’s the change that any new evidence makes that I should take notice of. It turns out that the name ‘Kindle’ is quite a rare surname.
I counted less than 100 instances of Kindle/Kindel in the civil registrations of vital events for England and Wales. In the baptism records, I later found that there was a Kindle family living on the
same street during the same year as Susanna's baptism. Let's call this event — of finding a Neighbour with the surname Kindle — N. I estimated the chance of finding a neighbour with this surname if it was also the surname of her father at 1%, and the probability of finding one if it wasn't the surname of her father at 0.01%. What I wanted was the new estimate of K, i.e. P(K | N). Well, following the method in the murder example:
P(K | N) = (P(N | K) / P(N)) × P(K) = (1% / ((1% × 15%) + (0.01% × 85%))) × 15% ≈ 94.6%
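Both worked examples can be verified with a few lines of code. The function below is only a sketch of the two-outcome form of the theorem; it is not part of STEMMA or any genealogy software:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Two-outcome Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
    the denominator expands as P(E) = P(E|H)*P(H) + P(E|not H)*P(not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Murder example: P(G) = 1%, P(E|G) = 95%, P(E|not G) = 10%
print(round(posterior(0.01, 0.95, 0.10) * 100, 1))    # 8.8

# Kindle example: P(K) = 15%, P(N|K) = 1%, P(N|not K) = 0.01%
print(round(posterior(0.15, 0.01, 0.0001) * 100, 1))  # 94.6
```

The expansion of P(E) in the denominator is the law of total probability, which is why the two conditional probabilities need not sum to 100%.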
This is a rather stark result from the low probabilities being used. I’m not claiming that this is a perfect example, or that my estimates are spot on, but it was designed to illustrate the following
two points. Firstly, it demonstrates that the results from Bayes’ theorem can run counter to our intuition. Secondly, though, it demonstrates the difficulty in using the theorem correctly because
this example is actually flawed. The value of 1% for P(N | K) is fair enough as it represents the probability of finding a neighbour with the surname Kindle if her middle name was her father’s
surname. However, the figure of 0.01% for P(N | ¬ K) was really representing the random chance of finding such a neighbour if her middle name wasn't Kindle at all. What it should have represented was the probability of finding such a neighbour if her middle name was Kindle but it wasn't the surname of her father. However, it failed to consider that the two families may simply have been close.
There is no room for debate on the mathematics of probability, including Bayesian probability and Bayes' theorem. The application of this mathematics is accepted in an enormous number of
real-life fields, and genealogy is not fundamentally different to them. As part of my professional experience, I know that many companies use Bayesian forecasting to good effect in the analytical
field known as business intelligence. The only controversial point presented here is the determination of those subjective assessments. All of the fields where Bayes' theorem is applied involve people who are quantifying assessments that are based on experience and expertise. We already know that genealogists make qualitative assessments, but would it be a natural step to put numerical equivalents on their ordered scales of terms? We wouldn't argue that 'definite' means 100%, or that 'impossible' means 0%, but employing numbers in between is more controversial even though we may use a phrase like "50 : 50" in normal speech.
I believe there are two issues that would benefit from rational debate: where those estimations come from, and whether it would be practical for genealogists to specify them and make use of them
through their software. Although businesses proactively use Bayesian forecasting, the only examples I've seen in fields such as law and medicine have been ex post facto (after the event). For my part, I
find it very easy to put approximate numbers against real-life perceived risks, and the likelihood of possible scenarios. I have no idea where these come from, and I can’t pretend that someone else
would conjure the same values. Maybe it’s a simple familiarity with numbers, or maybe people are just wired differently – I really don’t know!
Even if this works for some of us, it is unlikely to work for all of us. By itself, though, this is not a reason for dismissing it out-of-hand, or lashing out at the mathematically-inspired amongst
the community. A potential reaction such as ‘We happen to be qualified genealogists, and not bookmakers’ would say more about misplaced pride than considered analysis. Genealogists and bookmakers are
both experts in their own fields. When they say they’re sure of something, they don’t mean absolutely, 100% sure, but to what extent are they sure?
[1] Elizabeth Shown Mills, Evidence Explained: Citing History Sources from Artifacts to Cyberspace (Baltimore, Maryland: Genealogical Pub. Co., 2009), p.19.
[2] If you’re thinking “logarithmic” then you would be wrong. The range is symmetrically asymptotic at both ends and so is hyperbolic.
[3] This simple form applies where each event has just two outcomes: a result happening or not happening. There is a more complicated form that applies where each event may have an arbitrary number
of outcomes.
[4] I’m using the logical NOT sign (¬) here to indicate the inverse of an event’s outcome. The convention is to use a macron (bar over the letter) but that requires a specialist typeface.
[5] Yes, that’s right, 10% and 95% do not add up to 100%. The misunderstanding that they should plagues a number of examples that I’ve seen. The probability of finding the evidence if he was guilty,
P(E | G), and the probability of finding the evidence if he was not guilty, P(E | ¬ G), are like “apples and oranges” because they cover different situations, and so they will not add up to 100%.
However, the probability of not finding the evidence if he was guilty, P(¬ E | G), is the inverse of P(E | G) and so they would total 100%.
grid chart template
this command is only available if a pdf printer is available on the system. a chart dimension gets its values from a field which is specified on the chart properties: dimensions page. whenever an
attribute expression is entered for a dimension, its icon will turn from gray scale to color, or as in the case of text format, from gray to black. a label can also be defined as a calculated label
expression for dynamic update of the label text. select to display values based on a percentage of the total, or on an exact amount. the chart will display a total for the selected dimension when
this option is enabled. if the result of the expression is not a valid color representation, the program will default to black. click on line style in order to enter an attribute expression for
calculating the line style for the line or line segment associated with the data point. if values on data points is selected for the main expression the attribute expression will be disregarded.
the imported expression will appear as a new expression in the chart. this group is used for modifying the way that data points are plotted or what will be entered in the expression cells of chart
tables. the first sub expression will be used for plotting a box top point of the box plot. this option can be used with or without any of the other display options. with this option qlikview will
display the expression values in a bar or line chart. by entering a number in the box, you set the number of y-values in the expression to be accumulated. normally the sort order of a group dimension
is determined for each field in a group via the group properties. the font can be set for any single object (object properties: font), or all objects in a document (apply to objects on document
properties: font). if the result of the expression is not a valid color representation, the font color will default to black. a help icon will be added to the window caption of the object.
grid chart format
a grid chart sample is a type of document that creates a copy of itself when you open it. The doc or excel template has all of the design and format of the grid chart sample, such as logos and
tables, but you can modify content without altering the original style. when designing a grid chart form, you may add related information such as grid chart template, grid chart printable, grid chart math, grid chart maker, free grid chart.
when designing a grid chart example, it is important to consider related questions or ideas: what is a grid chart? how do you draw a grid chart? what are gridlines in a chart? what type of graph is a grid? also: graph paper walmart, grid chart online, grid chart pdf, grid chart example, online graph paper with axis, grid chart with numbers.
when designing the grid chart document, it is also essential to consider the different formats such as Word, pdf, Excel, ppt, doc etc; you may also add related information such as graph paper, grid chart qlik sense, grid chart excel, grid charts notion.
grid chart guide
A two-dimensional grid graph, also known as a rectangular grid graph or two-dimensional lattice graph (e.g., Acharya and Gill 1981), is an m×n lattice graph given by the graph Cartesian product of path graphs on m and n vertices. Some authors (e.g., Acharya and Gill 1981) use the same height-by-width convention applied to matrix dimensioning (which also corresponds to the order in which measurements of a painting on canvas are expressed). The Wolfram Language implementation GridGraph[{m, n, …}] also adopts this ordering, returning an embedding in which m corresponds to the height and n to the width. Yet another convention wrinkle is used by Harary (1994, p. 194), who does not explicitly state which index corresponds to which dimension, but uses a 0-offset numbering in defining a 2-lattice as a graph whose points are ordered pairs of integers (i, j) with i = 0, 1, …, m−1 and j = 0, 1, …, n−1. If Harary's ordered pairs are interpreted as Cartesian coordinates, a grid graph with parameters m and n consists of m vertices along the x-axis and n along the y-axis. Some authors (1989, p. 440) use the term "grid" to refer to the line graph of the complete bipartite graph, known in this work as the rook graph. A grid graph is Hamiltonian if either the number of rows or columns is even (Skiena 1990, p. 148).
The numbers of directed Hamiltonian paths on the n×n grid graph for n = 1, 2, … are given by 1, 8, 40, 552, 8648, 458696, 27070560, … (OEIS A096969). In general, the numbers of Hamiltonian paths on the grid graph for fixed n are given by a linear recurrence, as are the numbers of Hamiltonian cycles. The number of k-cycles on the grid graph is 0 for odd k (grid graphs are bipartite) and is given by a quadratic polynomial in n for even k, as conjectured by Chang (1992), confirmed up to an additive constant by Guichard (2004), and proved by Gonçalves et al. (2011), who give a piecewise formula, though the expression given is not correct in all cases. A generalized grid graph, also known as an n-dimensional lattice graph (e.g., Acharya and Gill 1981), can also be defined as a Cartesian product of more than two path graphs (e.g., Harary 1967, p. 28; Acharya and Gill 1981). A generalized grid graph has chromatic number 2, except in the degenerate case of the singleton graph, which has chromatic number 1. Special cases are illustrated above and summarized in the table below.
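The first terms of the directed Hamiltonian path sequence quoted above are small enough to verify by brute force. The sketch below (function names are illustrative) enumerates all such paths on the k×k grid graph:

```python
from itertools import product

def grid_adjacency(m, n):
    """Adjacency lists for the m x n grid graph."""
    adj = {}
    for r, c in product(range(m), range(n)):
        adj[(r, c)] = [(r + dr, c + dc)
                       for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= r + dr < m and 0 <= c + dc < n]
    return adj

def directed_hamiltonian_paths(m, n):
    """Count directed Hamiltonian paths by exhaustive depth-first search."""
    adj = grid_adjacency(m, n)
    total = m * n

    def dfs(v, visited):
        if len(visited) == total:
            return 1
        return sum(dfs(w, visited | {w}) for w in adj[v] if w not in visited)

    # A directed path is counted once per choice of starting vertex.
    return sum(dfs(start, {start}) for start in adj)

print([directed_hamiltonian_paths(k, k) for k in (1, 2, 3)])  # [1, 8, 40]
```

Exhaustive search is exponential, which is why the literature turns to transfer-matrix methods and linear recurrences for larger grids.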
Can I Take Linear Algebra Without Calculus 2? - Distance Calculus
New! DMAT 431 - Computational Abstract Algebra with MATHEMATICA!
Asynchronous + Flexible Enrollment = Work At Your Own Best Successful Pace = Start Now!
Earn Letter of Recommendation • Customized • Optional Recommendation Letter Interview
Mathematica/LiveMath Computer Algebra Experience • STEM/Graduate School Computer Skill Building
NO MULTIPLE CHOICE • All Human Teaching, Grading, Interaction • No AI Nonsense
1 Year To Finish Your Course • Reasonable FastTrack Options • Multimodal Letter Grade Assessment
Linear Matrix Algebra: Calculus 2 Prerequisite
What are the course prerequisites for Linear Algebra?
The only prerequisite for Linear Algebra is: Calculus II (Calculus 2)
Some students are a bit confused by this requirement. Does Linear Algebra do more integration theory from Calculus 2?
No, Linear Algebra turns out to be a completely different subject than is Calculus 2. So why is Calculus 2 the prerequisite?
In Math Education, the reason given is that the student enrolling in Linear Algebra requires a certain "mathematical maturity". While Linear Algebra does not continue the study of integral calculus, per se, it is a conceptually more difficult subject that is not easily engaged by younger students, especially those who have not completed the freshman calculus sequence of Calculus I and Calculus II.
So, for those students wishing to get ahead and get Linear Algebra in their completed column in their academic plan, you do need to complete Calculus II first, which means also completing Calculus I
first, even though Linear Algebra has nothing to do with either course.
Here is a video about our DMAT 311 - Linear Algebra course via Distance Calculus @ Roger Williams University:
Linear Algebra Course - DMAT 311
Multivariable Calculus & High School
After AP Calculus for High School Students
The Online Linear Matrix Algebra course can best be described as a "first course in the study of elementary Linear Algebra and Matrix Theory".
This course has many names, all being equivalent:
• Linear Algebra
• Matrix Theory
• Linear Systems of Equations
• Linear Spaces
• Elementary Linear Algebra
• Computational Linear Algebra
Distance Linear Algebra via Distance Calculus is a COMPLETELY DIFFERENT course from a traditional textbook/lecture classroom course.
Distance Linear Algebra is taught via an experimentation-based curriculum using Mathematica, earning real academic credits through Roger Williams University in Providence, Rhode Island, USA.
Distance Linear Algebra is similar to a Computational Linear Algebra course in some ways, but not exactly the same. A Computational Linear Algebra course will look at developing the computational
engines that attack the structures of linear algebra; our Distance Linear Algebra simply uses those computational softwares like Mathematica as a laboratory tool, to unlock the concepts and theorems
at work in Linear Algebra from a very graphical, geometric, and inquisitive approach.
In contrast, many classroom/textbook Linear Algebra courses are taught mainly the same way they were taught 100 years ago - the small breadbasket of calculations you can complete by hand on paper,
and where the theory of linear algebra leads you. As such, the calculations you can complete youself are quite limited, although proponents of this approach feel you "really know linear algebra"
because you have to do the (often hard and tedious) computing yourself by hand.
We invite you to investigate the Distance Linear Algebra course via Distance Calculus either via the menu to the left, or the additional links below.
At Distance Calculus, we call our "Online Linear Matrix Algebra" course Linear Algebra - DMAT 335 - 3 credits.
Below are some links for further information about the Online Linear Matrix Algebra course via Distance Calculus @ Roger Williams University.
Distance Calculus - Student Reviews
Date Posted: Jan 15, 2021
Review by: Rachel H.
Courses Completed: Probability Theory
Review: Dr. Curtis gave helpful and timely feedback, and made the teaching videos very engaging! The course model and associated software was easy to acclimate to.
Transferred Credits to: Cedarville University
Date Posted: Apr 5, 2020
Review by: Catherine M.
Courses Completed: Calculus I
Review: Calculus I from Distance Calculus was wonderful! I took AB Calculus in high school, but I didn't take the AP Calc exam. Instead I took Calculus I with Distance Calculus, and it was so much
better! It was a little review of topics, but not really. I really understood calculus when I finished!
Transferred Credits to: University of Chicago
Date Posted: Jul 25, 2020
Review by: Michael Linton
Student Email: mdl264@cornell.edu
Courses Completed: Calculus I
Review: Amazing professor, extremely helpful and graded assignments quickly. To any Cornellians out there, this is the Calculus Course to take in Summer to fulfill your reqs! I would definitely take
more Calculus Classes this way in the future!
Transferred Credits to: Cornell University
Stock Portfolio Management in the Presence of Downtrends Using Computational Intelligence
Business School, Tecnologico de Monterrey, Monterrey 64849, Mexico
Faculty of Accounting and Administration, Universidad Autónoma de Coahuila, Torreón 27000, Mexico
Author to whom correspondence should be addressed.
Submission received: 24 February 2022 / Revised: 14 April 2022 / Accepted: 14 April 2022 / Published: 18 April 2022
Stock portfolio management consists of defining how some investment resources should be allocated to a set of stocks. It is an important component in the functioning of modern societies throughout
the world. However, it faces important theoretical and practical challenges. The contribution of this work is two-fold: first, to describe an approach that comprehensively addresses the main
activities carried out by practitioners during portfolio management (price forecasting, stock selection and portfolio optimization) and, second, to consider uptrends and downtrends in prices. Both
aspects are relevant for practitioners but, to the best of our knowledge, the literature does not have an approach addressing them together. We propose to do it by exploiting various computational
intelligence techniques. The assessment of the proposal shows that further improvements to the procedure are obtained when considering downtrends and that the procedure allows obtaining portfolios
with better returns than those produced by the considered benchmarks. These results indicate that practitioners should consider the proposed procedure as a complement to their current methodologies
in managing stock portfolios.
1. Introduction
Both individual and organizational investors commonly seek to take profits from stock markets. Among the different ways to exploit these markets, the literature has focused on the idea of buying
cheap and selling expensive. The authors of [
] point out that there is an assumption in classical portfolio theory to manage the selected assets with the simplest trading strategy, which is a buy-and-hold approach. However, it is also common
for practitioners to also seek profits when prices go down. There are several mechanisms that allow an investor to take profits in this situation (e.g., [
Investing in stocks when their prices are expected to rise is known as opening a long position. In this scenario, the investor adopts the idea that stocks should be bought when they are the cheapest
and sold when they are as expensive as possible; the difference between selling and buying prices constitutes the investor’s basic earning. On the other hand, opening a short position means that the
investor expects the stock prices to go down. According to [
], short selling allows the investor to profit from their belief that the price of a security will decline. Moreover, short selling is used by top-down and quantitative managers as a part of a
neutral strategy (cf. [
]). In this case, the investor can, for example, borrow shares of the stock, sell them in this very moment and commit to return them at a moment in the future; so, to return them, the investor will
have to buy them at whatever the price of the stock is at that moment in the future. Therefore, the earning of the investment here is also calculated as the difference between the selling and buying
prices—just that the sell is produced first.
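As a toy illustration of the two positions described above (the prices are made up, and borrowing fees, dividends and transaction costs are ignored):

```python
def long_pnl(buy_price, sell_price, shares):
    """Long position: buy first, sell later; profit if the price rose."""
    return (sell_price - buy_price) * shares

def short_pnl(sell_price, buy_price, shares):
    """Short position: sell borrowed shares first, buy them back later;
    profit if the price fell."""
    return (sell_price - buy_price) * shares

print(long_pnl(100.0, 110.0, 50))   # 500.0 (price rose from 100 to 110)
print(short_pnl(100.0, 90.0, 50))   # 500.0 (price fell from 100 to 90)
```

In both cases the basic earning is the selling price minus the buying price; the short position simply reverses the order of the two trades.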
The highly complex decision-making process of allocating resources considering both uptrends and downtrends of prices requires sophisticated models and tools to achieve competitive results. Thus,
this work proposes a comprehensive procedure based on computational intelligence that aids defining how investors should allocate their resources in the presence of both scenarios.
First, an artificial neural network (ANN) (cf. [
]) is used to estimate future prices. There are evident tendencies in the literature showing that ANNs have high accuracy, fast prediction speed and clear superiority in predictions related to
financial markets (e.g., [
]). To perform these estimations, the ANN takes historical performances of the stocks considering the most common factors of the literature, such as stock prices and financial ratios (cf. [
]). Some additional financial indicators are used here to determine if the forecasted tendency (that the price will go up or down) is supported. These indicators are taken from the so-called
fundamental analysis, a type of indicators often considered by practitioners (cf. [
]). Evolutionary algorithms (EAs) are then used to ponder these indicators altogether with the price estimation and determine which stocks should be considered by the investor for investment, either
with a downtrend or an uptrend. Finally, EAs are also used to determine how much of the resources should be allocated to each of the selected stocks on the basis of statistical analysis of historical data. Here, only historical prices of the selected stocks are taken into consideration, following the approach described in [ ].
The literature review presented in
Section 3
shows that, although there are studies that consider both uptrends and downtrends in stock prices, as far as we know, there are no published works that comprehensively address the problem the way
that is proposed here. That is, not only taking advantage of a future increase in prices by opening long positions but also taking advantage of future decrease in prices by opening short positions,
while also forecasting stock prices, selecting the most plausible stocks and optimizing the stock portfolio. Our hypothesis is that a procedure that effectively implements all this provides better
overall earnings for the investor. The hypothesis is based on the activities and interests of practitioners. We test this hypothesis by using extensive experiments with actual historical data.
The rest of the paper is structured as follows.
Section 2
describes the fundamental theories that support this research.
Section 3
presents the related literature.
Section 4
describes the details of the techniques that compose the proposed procedure. In
Section 5
, we explain the experiments designed to test this work’s hypothesis. Finally,
Section 6
concludes this paper.
2. Background
This section provides a brief overview of the concepts and methods used in the proposed approach. These concepts and methods are (i) fundamental analysis, (ii) artificial neural networks and (iii)
evolutionary multi-objective optimization. Furthermore, we provide a short description of multi-objective optimization problems in order to present a complete theoretical basis of the proposed approach.
2.1. Fundamental Analysis
One of the most used sources of information in the management of stock portfolios comes from the so-called fundamental analysis. The fundamental indicators provided by this analysis allow the
practitioner to evaluate stocks from multiple perspectives. Such indicators are constructed from the financial statements that the companies (underlying the stocks) present publicly on a regular basis.
Fundamental indicators provide information that is often exploited in the literature to forecast future stock performance and to select the most competitive stocks. These indicators can be used both
qualitatively and quantitatively. Regarding the latter, the financial information published by companies is synthesized in the form of ratios that shed light on the current state of the company,
providing remarkable information on what can be expected from the financial health of the company and the possible future price of its stock. When this analysis is used in the literature, the
fundamental indicators are usually aggregated in an overall assessment value that requires subjective preferences from the practitioner (cf. e.g., [
]); however, the aggregation procedure is not straightforward and represents an important challenge.
On the other hand, different fundamental indicators could be more convenient for companies with different types of activities ([
]). Some fundamental indicators that can be used for trans-business companies are described in
Section 4.1
(cf. [
2.2. Artificial Neural Networks
Artificial neural networks are nowadays very popular among techniques from computational intelligence that have been used for many applications, such as classification, clustering, pattern
recognition and prediction in diverse scientific and technological disciplines ([
]). Similarly to other computational intelligence techniques, applications of ANNs are very diversified due to their capability to model systems and phenomena from the fields of science, engineering
and social sciences.
Analogously to a nervous system, an ANN is built from neurons, which are the basic elements for processing signals. Neurons are interconnected to form a network, with additional connections (synaptic
relations) for input and output signals. Weights are assigned to each of these and the other connections, and suitable values for these weights are computed by training algorithms. An ANN
needs to be trained, using data from the system or phenomenon to be modeled, before it can be used. Neurons are configured to form layers, in which neurons have parallel connections for inputs and
outputs. ANN complexity varies from a network with a single layer of a single neuron to networks with several layers, each having several neurons. Networks with only forward connections are known as
feedforward networks. Networks with forward and backward connections are known as feedback networks ([
]). The term deep learning refers to ANN with complex multilayers ([
]). Roughly speaking, deep learning networks have more complex connections between layers and also more neurons than earlier types of networks. Some neural networks that form deep learning models are
convolutional networks, recursive networks and recurrent networks.
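To make the forward computation concrete, the pass through a single-hidden-layer feedforward network can be sketched as follows (the layer sizes, the sigmoid activation and the random weights here are illustrative assumptions, not the exact configuration used later in this paper):

```python
import math
import random

def sigmoid(a):
    # Sigmoid activation applied to the weighted input of one neuron.
    return 1.0 / (1.0 + math.exp(-a))

def forward(x, w_in, b, w_out):
    # Single hidden layer: each hidden neuron computes a weighted sum of all
    # inputs plus a bias and applies the sigmoid; the output neuron is a
    # linear combination of the hidden activations.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + bj)
              for row, bj in zip(w_in, b)]
    return sum(wo * h for wo, h in zip(w_out, hidden))

random.seed(0)
n_inputs, n_hidden = 16, 16
w_in = [[random.gauss(0, 1) for _ in range(n_inputs)] for _ in range(n_hidden)]
b = [random.gauss(0, 1) for _ in range(n_hidden)]
w_out = [random.gauss(0, 1) for _ in range(n_hidden)]
prediction = forward([random.gauss(0, 1) for _ in range(n_inputs)], w_in, b, w_out)
```

Training algorithms (backpropagation, or extreme learning machine as used later in this paper) are responsible for finding suitable values for `w_in`, `b` and `w_out`.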
2.3. Multi-Objective Optimization Problem
Without loss of generality, a multi-objective optimization problem (MOP) can be defined in terms of maximization (although minimization is also common) as follows:
$\text{maximize } F(x) = [f_1(x), f_2(x), \ldots, f_k(x)]^\top$, subject to $x \in \Omega$,
where $\Omega$ is the set of decision variable vectors $x = [x_1, x_2, \ldots, x_m]^\top$ that fulfill the set of constraints of the problem, and $F: \Omega \to \mathbb{R}^k$, where $\mathbb{R}^k$ is the so-called objective space.
It is evident that the notation used here states that all functions $f i$ (objectives) should be maximized; however, it is also possible that one requires some functions $f i$ to be minimized
instead. To keep standard notation, we assume that the latter can be simply achieved by multiplying the minimizing function by $− 1$.
In the context of stock portfolio management, the functions $f_i$ are usually in conflict with each other. This means that improving $f_j$ worsens $f_k$ for some $j \neq k$. Therefore, there is no solution $x \in \Omega$ that maximizes all the objectives simultaneously. Nevertheless, it is still possible to define some solutions that possess the best characteristics in terms of their impact on the objectives; this is commonly carried out through Pareto optimality (cf. [
Let $u, v \in \mathbb{R}^k$ denote the impacts of solutions x and y, respectively. u dominates v if and only if $u_i \geq v_i$ for all $i = 1, \ldots, k$, and $u_j > v_j$ for at least one $j = 1, \ldots, k$. Then, a solution $x^* \in \Omega$ is Pareto optimal if there is no solution $y \in \Omega$ such that $F(y)$ dominates $F(x^*)$. Note that there can be more than one Pareto optimal solution. The set of all the Pareto optimal solutions is called the Pareto set (PS), and the set of all their corresponding objective vectors is the Pareto front (PF).
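The dominance relation just defined can be checked mechanically; a minimal sketch (maximization assumed, with illustrative two-objective vectors) is:

```python
def dominates(u, v):
    # u dominates v iff u_i >= v_i for every objective and u_j > v_j for at
    # least one objective (maximization, as in the definition above).
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(points):
    # Keep the objective vectors that are not dominated by any other vector.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

points = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0), (1.0, 4.0), (2.0, 2.0)]
front = pareto_front(points)  # (1,4) and (2,2) are dominated
```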
2.4. Evolutionary Multi-Objective Optimization
Multi-objective evolutionary algorithms (MOEAs) are high-level procedures designed to discover good enough solutions to MOPs (solutions that are close to the global optimum). They are especially
useful with incomplete or imperfect information or a limited computing capacity ([
MOEAs address MOPs using principles from biological evolution. They use a population of individuals, each representing a solution to the MOP. The individuals in the population reproduce among them,
using so-called evolutionary operators (selection, crossover, mutation), to produce a new generation of individuals. Often, this new generation is composed of the parents and children
that possess the best fitness; the fitness represents the impact on the objectives of the MOP. Since each individual encodes a solution to the MOP, MOEAs can approximate a set of trade-off
alternatives simultaneously.
The performance of MOEAs has been assessed in different fields (e.g., [
]). They have been widely accepted as convenient tools for addressing the problem of stock portfolio management ([
]). The main goal of MOEAs is to find a set of solutions that approximate the true Pareto front in terms of convergence and diversity. Convergence refers to determining the solutions that belong to
the PF, while diversity refers to determining the solutions that best represent the whole PF. The intervention of the decision maker is thus not traditionally part of the process, and rather little
interest has been paid in the literature to choosing one of the efficient solutions as the final one, in contrast to the interest paid to approximating the whole Pareto front.
Usually, two types of MOEAs are highlighted in the literature: differential evolution and genetic algorithms. Differential evolution (DE) has been found to be very simple and effective ([
]), particularly when addressing non-linear single-objective optimization problems ([
]). On the other hand, in a genetic algorithm (GA), solutions to a problem are sought in the form of strings of characters (the best representations are usually those that reflect something about the
problem being addressed), virtually always applying recombination operators such as crossover, selection and mutation. GAs are among the most popular meta-heuristics applied to
the Portfolio Optimization Problem ([
As a very effective and efficient way to address MOPs, the authors of ([
]) exploited the idea of creating subproblems underlying the original optimization problem. This way, addressing these subproblems the algorithm proposed in ([
]) indirectly addresses the original problem. In that work, the so-called Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D) was presented. The goal of MOEA/D is to create
subproblems such that, for each subproblem, a simpler optimization problem can be more effectively and efficiently addressed; each subproblem consists of the aggregation of all the objectives through
a scalar function. MOEA/D was extended to the context of interval numbers in [
3. Literature Review
There are many contributions to portfolio management literature in recent years. In this section, we give an overview of some recent and relevant works on the following subjects: price forecasting,
stock selection and portfolio optimization, as well as works on portfolio management by algorithms that exploit both uptrends and downtrends in stock prices.
Due to the non-linearity of stock data, a model developed using traditional approaches with single intelligent techniques may not use the resources in an effective way. Therefore, there is a need for
developing a hybridization of intelligent techniques for an effective predictive model [
3.1. Portfolio Management: Price Forecasting, Stock Selection and Portfolio Optimization
In recent years, there have been plenty of contributions on price forecasting based on either statistical or computational intelligence methods (see [
]). The stock market is characterized by extreme fluctuations, non-linearity, and shifts in internal and external environmental variables. Artificial intelligence techniques can detect such
non-linearity, resulting in much-improved forecast results [
Among the computational intelligence methods used for price forecasting are deep learning (e.g., [
]) and machine learning (e.g., [
]). In [
], a hybrid stock selection model with a stock prediction stage based on an artificial neural network (ANN) trained with the extreme learning machine (ELM) training algorithm ([
]) was proposed. The ELM algorithm has been tested for financial market prediction in other works (see [
There are important works on methods for stock selection, which have several different fundamental theories, from operations research methods (e.g., [
]) to approaches originating in modern portfolio theory (Mean-variance model) (e.g., [
]) and soft computing methods (e.g., [
]), including hybrid approaches (e.g., [
The fundamental theory for portfolio optimization is Markowitz’s mean-variance model ([
]). Its formulation marked the beginning of Modern portfolio theory ([
]). However, Markowitz’s original model is considered too basic since it neglects real-world issues related to investors, trading limitations, portfolio size and others ([
]). For evaluating a portfolio’s performance, the model is based on measuring the expected return and the risk; the latter is represented by the variance in the portfolio’s historical returns. Since
the variance takes into account both negative and positive deviations, other risk measures have been proposed, such as the Conditional Value at Risk (CVaR) ([
]). As a result, numerous works have improved the model, creating more risk measures and proposing restrictions that bring them closer to practical aspects of stock market trading ([
]). Consequently, many optimization methods based on exact algorithms (e.g., [
]) and heuristic and hybrid optimization (e.g., [
]) have been proposed to solve the emerging portfolio optimization models ([
According to [
], the investor or decision maker in the portfolio selection problem manages a multiple criteria problem in which, along with the objective of return maximization, he/she faces the uncertainty of
risk. Different attitudes assumed by decision makers may lead them to select different alternatives. A way of modeling both risk and subjectivity of the decision maker in terms of significant
confidence intervals was first proposed in [
]. The probabilistic confidence intervals of the portfolio returns characterize the portfolios during the optimization. The optimization is performed by means of a widely accepted decomposition-based
evolutionary algorithm, the MOEA/D ([
]). This approach is inspired by the independent works of ([
]) on interval analysis theory.
3.2. Exploiting Uptrends and Downtrends in Strategies for Stock Investment
Regarding alternative strategies to the known buy-and-hold approach for stock investment, in ([
]), the authors propose two new trading strategies to outperform the buy-and-hold approach, which is based on the efficient market hypothesis. The proposed strategies are based on a generalized
time-dependent strategy proposed in ([
]) but propose different timing for changing the buying/selling position. According to ([
]), the decision to adopt a long or short position in an asset requires a view of its immediate future price movements. A typical short seller would have to assess the potential future behavior of
the asset price by evaluating several factors, such as past returns and market effects, as well as technical indicators, such as market ratios ([
]). There are a few works published in the literature to address the problem of trading strategies for the short position. An interesting work that considers not only the short position but both the
short and long position is ([
]), in which a simultaneous long-short trading strategy (SLS) is proposed. Such a strategy is based partially on the property that a positive gain with zero initial investment is expected, which
holds for all discrete and continuous price processes with independent multiplicative growth and a constant trend. Other works based on SLS are ([
]). However, these works show the results of the algorithm on a previously defined stock portfolio, unlike the proposed approach that comprehensively performs price forecasting, stock selection and
portfolio optimization in the presence of both uptrends and downtrends.
4. Methods and Materials
The procedure followed here consists of applying several techniques from the so-called computational intelligence to address the complexity of stock investments in the presence of both increasing and
decreasing prices. Future stock prices are forecasted using an ANN, as well as the tendency that such prices will show. These estimations are then combined with certain indicators from the
fundamental analysis to define the stocks that will likely receive resources (the selected stocks). Finally, another evolutionary algorithm is used to optimize portfolios, i.e., to define the
proportions of resources to be allocated to each stock.
4.1. An Artificial Neural Network to Estimate Future Prices
In this work, the immediate next-period prices of the considered stocks are estimated by means of an ANN. Following the recommendations of ([
]), we use a single-layer feedforward network (whose setting is created once per each stock) and train the ANN by means of the so-called extreme learning machine algorithm because of its superior
capacities in similar problems to the one addressed here (cf. [
The ANN works independently per stock to estimate its price in the subsequent immediate period. The return of each stock is used as the target variable, while sixteen variables are used as input to
train the ANN. Let
$r_t$ denote the stock return for a given period $t$; $r_t$ is calculated from the stock price for that period ($p_t$) and the immediately previous one ($p_{t-1}$), as defined by the following equation:
$r_t = \frac{p_t - p_{t-1}}{p_{t-1}}$
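A minimal sketch of this return computation over a price series (the prices are illustrative):

```python
def returns(prices):
    # r_t = (p_t - p_{t-1}) / p_{t-1}, computed for t = 1 .. len(prices) - 1.
    return [(prices[t] - prices[t - 1]) / prices[t - 1]
            for t in range(1, len(prices))]

r = returns([100.0, 110.0, 99.0])  # a 10% gain followed by a 10% loss
```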
The high complexity involved in forecasting future stock prices requires one to consider a variety of transaction data as explanatory variables. Therefore, we followed the recommendations provided in
]) to select sixteen items of transaction data as explanatory variables for the forecasting model used here. The sixteen input variables are described as follows:
• Close price. Last transacted price of the stock before the market officially closes.
• Open Price. First price of the stock at which it was traded at the open of the period’s trading.
• High. Highest price of the stock in the period’s trading.
• Low. Lowest price of the stock in the period’s trading.
• Average Price. Average price of the stock in the period’s trading.
• Market Capitalization. Price per share multiplied by the number of outstanding shares of a publicly held company.
• Return Rate. Profit on an investment over a period, expressed as a proportion of the original investment.
• Volume. Number of shares traded (or their equivalent in money) of a stock in a given period.
• Total asset turnover. Net sales over the average value of total assets on the company’s balance sheet between the beginning and the end of the period.
• Fixed asset turnover. Net sales over the average value of fixed assets.
• Volatility. Standard deviation of prices.
• General Capital. Number of preferred and common shares that a company is authorized to issue.
• Price to Earnings. Market value per share over earnings per share.
• Price to Book. Market price per share over book value per share.
• Price to Sales. Market price per share over revenue per share.
• Price to Cash Flow. Market price per share over operating cash flow per share.
The training process consists of taking sixty historical values for these sixteen variables randomly out of a set of ninety historical periods and leaving the rest of values to test the ANN. After
the ANN is trained, two errors are computed: training error and testing error. The lower the testing error, the better the predictive capacity the ANN has. Nevertheless, since the extreme learning
machine algorithm uses a random procedure to compute the weights and biases of the network, we do not always obtain the same results. Therefore, we run the algorithm $n_a$ times and choose the model
with the lowest testing error. It is important to highlight that each input variable is normalized taking into account the sixty periods of the training data (the target variable is not normalized).
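The random split of the ninety historical periods into sixty training and thirty testing periods, and the per-variable normalization, can be sketched as follows (the min-max normalization scheme is an assumption for illustration; the paper only states that each input variable is normalized over the training periods):

```python
import random

def train_test_split(history, n_train=60, seed=0):
    # Take n_train periods at random for training; the rest are for testing.
    idx = list(range(len(history)))
    random.Random(seed).shuffle(idx)
    train_idx, test_idx = sorted(idx[:n_train]), sorted(idx[n_train:])
    return [history[i] for i in train_idx], [history[i] for i in test_idx]

def normalize(column):
    # Min-max normalization of one input variable over the training periods
    # (an illustrative choice of normalization scheme).
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in column]

history = list(range(90))  # ninety historical periods (placeholder data)
train, test = train_test_split(history)
```

After training on the sixty sampled periods, the remaining thirty are used to compute the testing error that ranks the $n_a$ trained models.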
As mentioned before, our approach seeks to take advantage of market downtrends. To achieve this, we use the ANN’s forecast: if the forecasted value for a stock return is positive, a long position is
chosen; otherwise, a short position is chosen.
4.2. Evolutionary Algorithms to Select Stocks
It is common that practitioners use indicators from the so-called fundamental analysis to assess the financial health of stocks. Besides these indicators, here, we use the stock prices and tendencies
forecasted by the ANN to define which stocks should be further considered for investment. To weigh all these values, we establish an optimization problem following the recommendations in ([
]) and use an evolutionary algorithm to address it as recommended in ([
Let $S = \{s_1, s_2, \ldots, s_{\mathrm{card}(S)}\}$ be the set of considered stocks, $v_j(s_i)$ be the evaluation of stock $s_i$ on the $j$th indicator, $j = 1, \ldots, N$ (for the sake of simplicity, assume that $v_1$ is the forecasted return as calculated by Equation (
)), and $w_j$ be the relative importance of each indicator and of the forecasted return (the latter importance is denoted by $w_1$). The score of stock $s_i$ can be calculated as follows (cf. [
$\mathrm{score}(s_i) = \sum_{j=1}^{N} w_j v_j(s_i)$
Since increasing $v_j(s_i)$, $j = 1, \ldots, N$, indicates the convenience of the stock, determining the most appropriate values for $w_j$ becomes crucial to determining the most plausible stocks as those that maximize Equation (
If we want to take advantage of market downtrends, sometimes we will be interested in finding the most negative returns in order to invest in a short position. To implement this idea, the value of each $v_j$ is taken as positive or negative according to the prediction given by the ANN model in the previous stage. Namely, if the ANN model predicts a positive stock return, a long position is chosen for this stock and the factor values are taken as they are. However, if the ANN model predicts a negative stock return, a short position is chosen for this stock and the return and each factor value are multiplied by $-1$, so Equation (
) is still valid.
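The sign-flipping rule described above can be sketched as follows (the three-factor weight vector and factor values are illustrative):

```python
def score(values, weights, forecasted_return):
    # values: v_1 .. v_N for one stock (v_1 is the forecasted return).
    # If the ANN forecasts a negative return, a short position is chosen and
    # every factor value is multiplied by -1, so the same weighted sum applies.
    sign = 1.0 if forecasted_return >= 0 else -1.0
    return sum(w * sign * v for w, v in zip(weights, values))

weights = [0.4, 0.3, 0.3]
long_score = score([0.05, 0.2, 0.1], weights, 0.05)    # long position
short_score = score([-0.05, 0.2, 0.1], weights, -0.05)  # short position
```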
To determine the most convenient values for $w_j$ ($j = 1, \ldots, N$), we use the function recommended in ([
]). Let us define this function.
For a given historical period $t$, a set of predefined weights allows one to determine the score of each stock; thus, the top, say, 5% of the stocks can be selected. These top stocks constitute the set of “selected” stocks, and the rest constitute the set of “non-selected” stocks for period $t$. Let $R_{selected}^{t}$ and $R_{non\text{-}selected}^{t}$ be the average returns of the stocks in these sets (as calculated by Equation (
)), respectively. The convenience of the predefined weights is then calculated as the arithmetic difference between the average returns of the selected and non-selected stocks that they produce, that is:
$\text{Maximize } \xi(W) = \frac{1}{T} \sum_{t=1}^{T} \left( R_{selected}^{t} - R_{non\text{-}selected}^{t} \right)$
where $T$ is the number of historical returns used to assess the weights in $W = [w_1, w_2, \ldots, w_N]^\top$ and $\xi(W)$ is the convenience of the weights in $W$.
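A minimal sketch of how $\xi(W)$ can be evaluated, given per-period scores and realized returns (the 5% selection threshold follows the text; the list-of-lists data layout is an assumption for illustration):

```python
def xi(scores_by_period, returns_by_period, top_fraction=0.05):
    # xi(W) = (1/T) * sum_t (R_selected^t - R_non-selected^t): in each period,
    # the top-scored fraction of stocks is "selected", and the difference of
    # average realized returns between the two groups is accumulated.
    total = 0.0
    for scores, rets in zip(scores_by_period, returns_by_period):
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        k = max(1, int(len(ranked) * top_fraction))
        selected = [rets[i] for i in ranked[:k]]
        non_selected = [rets[i] for i in ranked[k:]]
        total += sum(selected) / len(selected) - sum(non_selected) / len(non_selected)
    return total / len(scores_by_period)

# One period, four stocks: stock 0 scores highest and realizes a 10% return.
fitness = xi([[3.0, 1.0, 2.0, 0.0]], [[0.10, 0.01, 0.02, 0.03]])
```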
As was stated in
Section 2.4
, the differential evolution (DE) algorithm has been found to be highly effective in non-linear single-objective optimization problems, especially in financial applications ([
]); therefore, this type of algorithm offers serious advantages over other optimization algorithms, particularly over other meta-heuristics. We use here a basic version of the DE algorithm as
presented by Algorithm 1 in ([
]). Let us describe this algorithm.
To determine the best values for $w_j$ ($j = 1, \ldots, N$), the decision variables considered by the DE will be the values $w_j$, such that each individual in the DE contains values for $w_j$
fulfilling the constraints of the problem: $w_j \geq 0$ and $\sum_{j=1}^{N} w_j = 1$.
Lines 1–8 of Algorithm 1 randomly initialize the population of the DE; that is, the lines initialize feasible individuals by placing them in a random position within the search space. To ensure
feasibility, the values for $w j$ in each individual are normalized in Lines 5 and 6.
The parameters used by the DE algorithm consist of a crossover probability, $CR \in [0, 1]$, a differential weight, $F \in [0, 2]$, and a number of individuals in the population, $population\_size \geq 4$. Each individual in the population is represented by a real-valued vector $z = [z_1, z_2, \ldots, z_N]^\top$, where $z_j$ is the value assigned to the $j$th decision variable and $N$ is the number of decision variables (in Problem (3), the decision variables are the $N$ weights). The termination criterion used here for the search procedure consists of a predefined number of iterations (generations). The evolutionary process is performed in Lines 9–22. Here, for each generation of the DE, the solutions in the population are evolved such that the new population is composed of the best solutions found so far. Finally, the best solution found overall is selected in Line 23.
Algorithm 1 Differential evolution used to address Problem (3).
Input: $N_{iterations}$, $CR$, $F$, $population\_size$
Output: The values $w_1, w_2, \ldots, w_N$ that best solve Problem (3)
1: $P \leftarrow \emptyset$
2: $i \leftarrow 1$
3: while ($i \leq population\_size$) do
4:  Randomly define $z_k \in [0, 1]$ for $z = [z_1, z_2, \ldots, z_N]^\top$
5:  $sum \leftarrow \sum_{k=1}^{N} z_k$
6:  $z_k \leftarrow z_k / sum$ ($k = 1, 2, \ldots, N$)
7:  $P \leftarrow P \cup \{z\}$
8: end while
9: $j \leftarrow 1$
10: while ($j \leq N_{iterations}$) do
11:  for all ($z \in P$) do
12:   Randomly define $a, b, c \in P$, such that $z$, $a$, $b$, $c$ are all different
13:   Randomly define $r \in \{1, \ldots, N\}$
14:   for all ($i \in \{1, \ldots, N\}$) do
15:    Randomly define $u \in [0, 1]$
16:    If $u < CR$ or $i = r$, set $y_i = a_i + F \cdot (b_i - c_i)$; otherwise set $y_i = z_i$
17:   end for
18:   $sum \leftarrow \sum_{k=1}^{N} y_k$
19:   $y_k \leftarrow y_k / sum$ ($k = 1, 2, \ldots, N$)
20:   If $\xi(z) \leq \xi(y)$, then replace $z$ with $y$ in $P$ (see Equation (3))
21:  end for
22: end while
23: Select the individual $z \in P$ with the highest value $\xi(z)$; this individual represents the best set of weights $w_1, w_2, \ldots, w_N$
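A compact Python sketch of this differential evolution scheme (the clipping of negative components before renormalization is an added assumption to keep individuals feasible, since the listing only renormalizes; the fitness used in the example is a hypothetical stand-in for $\xi$):

```python
import random

def de_weights(fitness, N, cr=0.9, f=0.8, pop_size=20, iterations=100, seed=0):
    # Differential evolution over weight vectors with w_j >= 0 and sum w_j = 1,
    # following the structure of Algorithm 1 (a sketch, not the authors' code).
    rng = random.Random(seed)

    def normalize(z):
        s = sum(z)
        return [v / s for v in z] if s > 0 else [1.0 / len(z)] * len(z)

    # Random feasible initialization (Lines 1-8).
    pop = [normalize([rng.random() for _ in range(N)]) for _ in range(pop_size)]
    # Evolution with greedy replacement (Lines 9-22).
    for _ in range(iterations):
        for idx, z in enumerate(pop):
            others = [p for i, p in enumerate(pop) if i != idx]
            a, b, c = rng.sample(others, 3)
            r = rng.randrange(N)
            y = [a[i] + f * (b[i] - c[i]) if (rng.random() < cr or i == r) else z[i]
                 for i in range(N)]
            # Assumption: clip negatives so the renormalized vector is feasible.
            y = normalize([max(v, 0.0) for v in y])
            if fitness(z) <= fitness(y):
                pop[idx] = y
    # Return the best individual found (Line 23).
    return max(pop, key=fitness)

# Hypothetical fitness: prefer weight vectors whose first weight is near 0.7.
best = de_weights(lambda w: -abs(w[0] - 0.7), N=2)
```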
Different fundamental indicators could be more convenient for companies with different types of activities (see, e.g., [
]). We use here some fundamental indicators that can be used for trans-business companies following the works in ([
]). In this work, we use $N = 13$ factors to define the score of each stock, as described below.
• Forecasted return: Output of the ANN.
• Return on equity: Net income over average shareholder’s equity.
• Return on asset: Net income over total assets.
• Operating income margin: Operating earnings over revenue.
• Net income margin: Net income over revenue.
• Debt to equity: Total liabilities over total shareholder’s equity.
• Levered free cash flow: Amount of money the company has left over after paying its financial debts.
• Current ratio: Current assets over current liabilities.
• Quick ratio: (Cash and equivalents + marketable securities + accounts receivable) over current liabilities.
• Inventory turnover ratio: Net sales over ending inventory.
• Receivable turnover ratio: Net credit sales over average accounts receivable.
• Operating income growth rate: (Operating income in the current quarter − operating income at the previous quarter) over operating income in the previous quarter.
• Net income growth rate: (Net income after tax in the current quarter − net income after tax at the previous quarter) over net income after tax in the previous quarter.
4.3. Optimizing Stock Portfolios
The final activity to perform stock investments consists of determining how the resources should be allocated. A given distribution of resources among the selected stocks is known as the stock
portfolio. Defining the most convenient distribution of resources is known as portfolio optimization. In this final activity, the decision alternatives are no longer individual stocks but complete
portfolios. Thus, it is necessary to determine multiple criteria to comprehensively assess portfolios.
Formally, a stock portfolio is a vector $x = [x_1, x_2, \ldots, x_m]^\top$ such that $x_i$ is the proportion of the total investment that is allocated to the $i$th stock. Let $r_i$ be the return of the $i$th stock calculated according to Equation (
); the return of a given portfolio $x$ is defined as follows:
$R(x) = \sum_{i=1}^{m} x_i r_i$
Of course, if we knew the $t + 1$ return of the stocks, we could allocate resources that maximize $R ( x )$ without uncertainty; however, since this is impossible, the multiple criteria used to
assess portfolios are estimations of $R ( x )$. These estimations usually come from probability theory.
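The portfolio return $R(x)$ is a straightforward allocation-weighted sum; a minimal sketch with illustrative allocations and returns:

```python
def portfolio_return(x, r):
    # R(x) = sum_i x_i * r_i: the allocation-weighted sum of stock returns,
    # where x_i is the proportion invested in stock i.
    return sum(xi * ri for xi, ri in zip(x, r))

# 50%/30%/20% allocation over three stocks with known period returns.
R = portfolio_return([0.5, 0.3, 0.2], [0.10, -0.02, 0.04])
```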
According to ([
]), the most convenient portfolio $x$ can be determined by optimizing a set of confidence intervals that describe the probabilistic distribution of the portfolio’s return:
$\text{Maximize}_{x \in \Omega} \; \{ \theta(x) = ( \theta_{\beta_1}(x), \theta_{\beta_2}(x), \ldots, \theta_{\beta_k}(x) ) \}$
where $\theta_{\beta_i}(x) = \{ [c_i, d_i] : P(c_i \leq E(R(x)) \leq d_i) = \beta_i \}$, $E(R(x))$ is the expected return of portfolio $x$, $P(\omega)$ is the probability that event $\omega$ occurs and $\Omega$ is the set of feasible portfolios.
Maximizing confidence intervals as conducted in Equation (
) does not mean increasing the width of the intervals; rather, it reflects the intuition that returns further to the right in the probability distribution are desired. We use the so-called interval theory ([
]) to measure the possibility that a confidence interval is greater than another one. In interval theory, an interval number allows one to encompass the uncertainty involved in the definition of a quantity.
Since we are trying to find the best portfolios in terms of confidence intervals around their expected return, intervals further to the right are better (rather than comparing intervals in terms of
their width). Therefore, the comparison method used must provide this feature. There are several works in the literature describing methods that possess this property (e.g., [
]); however, the method proposed in [
] is the most broadly mentioned in the literature [
The authors of [
] presented a possibility function to define the order between two interval numbers that has been increasingly used in the literature (e.g., [
]). Let $I = [i^-, i^+]$ and $J = [j^-, j^+]$ be two interval numbers; the possibility function presented in [
] is defined as follows:
$\mathrm{possibility}(I \geq J) := \begin{cases} 1, & \text{if } p(I, J) > 1 \\ 0, & \text{if } p(I, J) < 0 \\ p(I, J), & \text{otherwise} \end{cases}$
where
$p(I, J) = \dfrac{i^+ - j^-}{(i^+ - i^-) + (j^+ - j^-)}$
Moreover, if $i = i^+ = i^-$ and $j = j^+ = j^-$ (i.e., both intervals are degenerate), then
$\mathrm{possibility}(I \geq J) := \begin{cases} 1, & \text{if } i \geq j \\ 0, & \text{otherwise} \end{cases}$
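This possibility function translates directly to code; a minimal sketch, with the degenerate-interval case handled separately as in the definition:

```python
def possibility(I, J):
    # possibility(I >= J) for interval numbers I = [i-, i+] and J = [j-, j+]:
    # p(I, J) = (i+ - j-) / ((i+ - i-) + (j+ - j-)), clipped to [0, 1].
    i_lo, i_hi = I
    j_lo, j_hi = J
    width = (i_hi - i_lo) + (j_hi - j_lo)
    if width == 0:
        # Both intervals are degenerate points: compare them directly.
        return 1.0 if i_lo >= j_lo else 0.0
    return min(1.0, max(0.0, (i_hi - j_lo) / width))

p = possibility((0.0, 2.0), (1.0, 3.0))  # partially overlapping intervals
```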
Since Problem (
) can potentially have many objectives defined as interval numbers as well as multiple constraints, we use MOEA/D (see
Section 2.4
), as advised by ([
]). In ([
]), MOEA/D was adapted to deal with these types of objectives; the adaptation has been proven to provide good results in contexts related to stock investments. For reasons of space in this paper, the
reader is referred to ([
]) for specific details about this improvement to MOEA/D.
5. Experiments
The hypothesis that a procedure that comprehensively addresses the practitioners’ main activities, while also considering uptrends and downtrends, produces better total earnings for the investor than
one that does not is tested by using extensive experiments with actual historical data.
5.1. Experimental Design
We used well-known data for our experiments: the historical prices and financial information of the stocks within the Standard and Poor’s 500 (S&P500) index. The officially reported financial
information was used to build the criterion performances.
Data from the ninety most recent months were used as input in the experiments, i.e., from November 2013 to April 2021. This dataset contains both uptrends and downtrends, so it is convenient
for the kind of tests performed here. From these periods, sixty are used to prepare (i.e., train) the algorithms, and the rest are used to assess the approach’s performance in a sliding-window manner.
For example, the information on November 2013–October 2018 is used to determine the investments that should be carried out at the beginning of November 2018, and these investments are maintained the
whole month. Then, the performance of the approach (i.e., the returns) is calculated at the end of November 2018 using Equation (
). Such a performance is compared to the benchmarks in that period. Later, the investments are closed and, independently, the window is slid one period; thus, the information of the sixty
months—December 2013–November 2018—is used to determine the investments for December 2018, where the new approach performance is calculated and compared to the benchmarks. This procedure is repeated
thirty times, so the conclusions can shed light on the robustness and overall performance of the approach with a high degree of confidence.
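The sliding-window evaluation described above can be sketched as follows (the `evaluate` callback is a hypothetical stand-in for the full forecasting, selection and optimization pipeline):

```python
def sliding_window_backtest(prices, evaluate, window=60):
    # Each step uses `window` historical periods to decide the investment,
    # measures performance on the immediately following period, and then
    # slides the window forward by one period.
    results = []
    for start in range(len(prices) - window):
        history = prices[start:start + window]
        next_period = prices[start + window]
        results.append(evaluate(history, next_period))
    return results

# With ninety periods and a sixty-period window, thirty evaluations result.
out = sliding_window_backtest(list(range(90)), lambda hist, nxt: nxt - hist[-1])
```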
5.2. Benchmarks
The Standard and Poor’s 500 index is used to define the relative performance of the proposed approach. Stock indexes are often used by practitioners as benchmarks because they summarize valuable
information regarding the main sectors of an economy. The S&P500 is perhaps the most well-known and widely used index; it aggregates information about the five hundred largest publicly traded companies in
the United States of America. Since our decisions rely only on information from this index, comparing the performance of the proposed approach with it is fair. In addition to the S&P500
index, and in order to validate our approach, we include several benchmarks to measure the effectiveness of our proposal. These benchmarks are: the approach of ([
]), the approach of ([
]) and our approach without including downtrends.
5.3. Parameter Setting
The parameter values used by each of the techniques mentioned in
Section 4
are defined here.
As explained above, the number of periods used to train the ANN for each stock is sixty. The single hidden layer uses sixteen neurons. In preliminary experiments, we observed that the ANN was most
effective when it used the same number of neurons as inputs: with more neurons the ANN became unstable, and with fewer neurons it lost predictive capacity. Each neuron of the ANN
used the sigmoidal activation function. The training procedure was run $n_a = 50$ times for each stock; finally, the ANN model with the lowest testing error was used to predict the
return at time $t + 1$.
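As a rough illustration of this training scheme, the sketch below implements the core ELM idea (random hidden layer, closed-form output weights) together with the keep-the-best-of-fifty loop. The data, dimensions and function names here are ours, not the paper's:

```python
import numpy as np

def elm_fit(X, y, hidden=16, rng=None):
    """Single-hidden-layer ELM: random input weights and biases, sigmoid
    activation, output weights solved in closed form by least squares."""
    if rng is None:
        rng = np.random.default_rng()
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer activations
    beta = np.linalg.pinv(H) @ y             # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

# Keep the best of n_a = 50 randomly initialized models, judged by
# testing error (synthetic data for illustration only).
rng = np.random.default_rng(0)
X_tr, X_te = rng.standard_normal((48, 16)), rng.standard_normal((12, 16))
y_tr, y_te = X_tr.sum(axis=1), X_te.sum(axis=1)
best = min((elm_fit(X_tr, y_tr, rng=rng) for _ in range(50)),
           key=lambda m: np.mean((elm_predict(m, X_te) - y_te) ** 2))
```

The closed-form solve is what makes the fifty restarts cheap compared with back-propagation: each candidate model costs one matrix pseudo-inverse.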
Regarding the selection of stocks, the DE defined to select the factor weights that maximize the objective function shown in Equation (
) uses common parameter values: the crossover probability was set to 0.9, the differential weight to 0.8, the population size to 200, and the number of iterations to 100. After
scoring and ranking the stocks, we select only the top 5% of all the stocks originally considered, following the recommendations in [
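A minimal sketch of the classical DE/rand/1/bin scheme with the parameter values just listed; the toy objective and all names are illustrative, since the paper's factor-weight scoring objective is not reproduced here:

```python
import numpy as np

def differential_evolution(f, bounds, pop=200, iters=100, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin with the reported parameter values:
    population 200, 100 iterations, differential weight F = 0.8,
    crossover probability CR = 0.9. Minimizes f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            idx = rng.choice(pop - 1, size=3, replace=False)
            idx[idx >= i] += 1                     # three distinct donors != i
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # at least one mutated gene
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                  # greedy one-to-one selection
                X[i], fit[i] = trial, f_trial
    return X[fit.argmin()], fit.min()
```

Since the paper maximizes its scoring objective, one would pass the negated objective to this minimizer.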
Finally, the genetic algorithm used to address Problem (
) was described in detail in ([
]), where it was based on the well-known MOEA/D and adapted to deal with parameter values defined as interval numbers. We use one hundred generations as the stopping criterion, two solutions as the
maximum number of solutions replaced by each child solution, a probability of selecting parents only from the neighborhood (instead of the whole population) of 0.9, one hundred subproblems, and
twenty weight vectors in the neighborhood of each weight vector. Two confidence intervals, $\theta_{\beta_{30}}(x)$ and $\theta_{\beta_{50}}(x)$, are considered by MOEA/D as the objectives to be maximized (see Equation (
)), according to the recommendations in ([
]). The constraints considered by MOEA/D are $x_i \geq 0$ and $\sum_{i} x_i = 1$.
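A common way for evolutionary optimizers to enforce these two constraints is a repair step that clips negative weights and renormalizes. This is an illustrative mechanism only, as the paper does not state how MOEA/D handles the constraints:

```python
def repair_weights(x):
    """Project a candidate portfolio onto the feasible set x_i >= 0,
    sum(x_i) = 1 by clipping negatives and renormalizing (illustrative;
    not necessarily the constraint handling used by the authors)."""
    x = [max(xi, 0.0) for xi in x]
    total = sum(x)
    if total == 0.0:                        # degenerate candidate:
        return [1.0 / len(x)] * len(x)      # fall back to equal weights
    return [xi / total for xi in x]
```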
It is worth mentioning that the code implementing the algorithms described here is an original development of the authors. The code was written in Matlab and Java and will probably be made publicly
available as part of a complete software system.
5.4. Results
The proposed approach uses components that exploit randomness to explore the search space. Here, we discard the effects produced by such randomness by running our approach many times;
in particular, each stochastic component is run twenty times for each of the thirty back-testing periods mentioned in Section 5.1. Doing so sheds light on the robustness of our approach and allows us to reach sound conclusions. Following the recommendations in [
], the performance of our approach is evaluated using the quantiles $Q_{10}$, $Q_{20}$, $Q_{50}$, $Q_{80}$ and $Q_{90}$ (see Figure 1). As noted in [
], the solution distributions of stochastic optimization algorithms are often asymmetrical; hence, using quantiles can give more insight into our approach. However, Figure 1 shows that, in our case, $Q_{50}$ and the mean almost always overlap. Furthermore, ($Q_{10}$, $Q_{20}$) and ($Q_{80}$, $Q_{90}$) are symmetrical with respect to the mean. This behavior indicates that the performance of our approach is practically normally distributed.
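The quantile summary described above can be sketched as follows; the returns are synthetic stand-ins for the real twenty runs per period:

```python
import numpy as np

# Sketch of the quantile summary: each of the 30 back-testing periods has
# 20 stochastic runs; synthetic normal returns stand in for the real ones.
rng = np.random.default_rng(1)
runs = rng.normal(loc=0.02, scale=0.06, size=(30, 20))   # periods x runs

quantiles = np.quantile(runs, [0.10, 0.20, 0.50, 0.80, 0.90], axis=1)
mean = runs.mean(axis=1)

# Near-normal behavior appears as Q50 tracking the mean, with the
# (Q10, Q90) and (Q20, Q80) pairs roughly symmetric about it.
median_gap = np.abs(quantiles[2] - mean)
```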
Therefore, in this study, the average returns of our approach are compared with several benchmarks, as shown in Table 1 and Figure 2. For simplicity, the results are discussed hereafter as if the returns were not averages. To validate our approach, we include several benchmarks to measure the
effectiveness of our proposal. These benchmarks are: (a) the market index S&P500, (b) the approach of [
], (c) the approach of [
] and (d) our approach without including downtrends.
From Table 1 and Figure 2, we can see that, in terms of the expected value, the worst overall return was produced by investing according to the S&P500 index, while the best overall return was achieved by investing in a
portfolio produced by the proposed approach, which takes advantage of both positive and negative trends.
Figure 2 shows that the portfolio that considers negative trends is almost always among the top two approaches. Furthermore, the returns obtained with this approach show that the model is not
as affected by market downtrends as the benchmarks are, as seen in the fall of all the approaches from Jan 2020 to Mar 2020. Remarkably, this behavior did not prevent the proposed approach from
exploiting the clear overall uptrend from Apr 2020 to Apr 2021, as can be clearly seen in Table 1. In Table 2, we can see that the proposed approach outperforms the benchmarks at the end of the thirty periods: the sum of returns is approximately 41% better than Yang et al. 2019 ([
]), 25% better than Solares et al. 2019 ([
]), 48% better than the variant that only considers positive trends and more than 90% better than the market index. Moreover, the cumulative returns of our proposal are 63% better than Yang et al. 2019 ([
]), 35% better than Solares et al. 2019 ([
]), 75% better than the variant that only considers positive trends and 141% better than the market index. This performance can be seen in Figure 3 and Figure 4.
Figures 3 and 4 describe the evolution of the portfolio returns in an aggregate way throughout the whole time lapse (i.e., November 2018 to April 2021). However, Figure 3 shows this evolution from the perspective of the sums of the returns, while Figure 4 shows the cumulative returns. Both figures can be relevant to the practitioner. The former shows the overall performance of the approach without considering the exact period in which each return was
obtained, while Figure 4 allows one to ponder the impact of the period in which a return was obtained. Let us unfold the latter.
Figure 4 shows the amount the investor would obtain by withdrawing the investment in a given period. For instance, an investment of USD 1000 at the beginning of November 2018 using the proposed model
would have become USD 992 (i.e., −0.80%) if the investor had withdrawn the investment at the end of May 2019. However, by continuing until April 2021, the investment would have become
USD 1952 (i.e., +95.28%). In this sense, it is clear that the proposed approach outperformed the benchmarks by creating a portfolio that includes long and short positions. This result shows the
potential of our proposal, which could be improved in future work by including stocks from other indexes, more technical/fundamental variables, etc.
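The two aggregation schemes behind Figures 3 and 4 can be made explicit with a small computation; the monthly returns used here are illustrative, not values from Table 1:

```python
# Sum of returns vs. cumulative (compounded) returns, the two aggregation
# schemes plotted in Figures 3 and 4; the monthly returns below are
# illustrative only.
monthly = [0.10, -0.05, 0.03]

sum_of_returns = sum(monthly)          # ignores when each return occurred

cumulative = 1.0
for r in monthly:
    cumulative *= 1.0 + r              # compounds period by period
cumulative -= 1.0

# An initial USD 1000 would grow to 1000 * (1 + cumulative).
final_value = 1000 * (1.0 + cumulative)
```

The cumulative figure is the one that answers the investor's question of what a USD 1000 stake withdrawn in a given period would be worth.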
In both figures, it can be seen that, for the first fourteen periods (November 2018 to December 2019), the market does not move significantly in any direction; however, in the remaining periods, the
market exhibits both negative and positive trends. The higher final return achieved by our proposal indicates that it takes advantage of these trends overall. These results also show that considering
negative trends is crucial: the final average return is better when negative trends are considered while building stock portfolios.
According to Figure 2 and Table 1, the worst return obtained by our approach was in Dec 2018. This is also shown in Figure 4, where the detriment is caused by the large negative return produced in this period. At that moment, the system decided to open long positions and allocate high proportions of the investment to some
stocks with poor actual returns. This was due to the good historical performance of those stocks, which indicated good statistical behavior. Several external issues can affect the performance of a stock
in the market, as in the case of the Nvidia Corporation reported in the news [
]. Thus, one way to improve the proposed system in the future is to consider criteria from so-called sentiment analysis [
], which takes such factors into consideration.
On the other hand, as a way of measuring the performance of our proposal and comparing the results with the benchmarks, the Sharpe ratio $r_{sharpe}$ and the Sortino ratio $r_{sortino}$ are used. These ratios are defined as
$r_{sharpe} = \frac{R_p - R_f}{\sigma_p}$ and $r_{sortino} = \frac{R_p - R_f}{\sigma_{p,d}}$,
where $R_p$ is the average portfolio return, $R_f$ is the best available risk-free security rate, $\sigma_p$ is the portfolio standard deviation and $\sigma_{p,d}$ is the standard deviation of the portfolio's downside. These indexes measure the return obtained per unit of risk in comparison with a risk-free asset. In particular, the Sharpe ratio describes how much return
is received per unit of total risk, while the Sortino ratio describes how much return is received per unit of downside risk. Therefore, the higher these indexes are, the more convenient the asset is for investment. We considered the US Treasury bond, with an annual return of 3%, as the risk-free security.
We also used this rate as the minimum acceptable return (MAR) to compute the downside deviation.
$R_p$, $\sigma_p$ and $\sigma_{p,d}$ are taken from Table 1.
Table 3
shows the Sharpe and Sortino ratios for all the benchmarks and our proposal. According to the results, our proposal has the best performance on both indexes; overall, it obtains higher returns once
risk is taken into account.
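Under the stated definitions, both ratios take only a few lines of code. Converting the 3% annual risk-free rate to a monthly figure and averaging squared below-MAR shortfalls over all observations are our assumptions, not choices documented in the paper:

```python
import numpy as np

def sharpe_sortino(returns, rf=0.03 / 12, mar=0.03 / 12):
    """Sharpe and Sortino ratios for monthly returns, as defined above.
    Assumptions: the 3% annual risk-free rate is converted to a simple
    monthly rate, and the downside deviation averages squared below-MAR
    shortfalls over all observations."""
    r = np.asarray(returns, dtype=float)
    excess = r.mean() - rf
    sharpe = excess / r.std()                        # per unit of total risk
    downside = np.minimum(r - mar, 0.0)              # below-MAR shortfalls
    sortino = excess / np.sqrt(np.mean(downside ** 2))  # per unit of downside risk
    return sharpe, sortino
```

Because the downside deviation ignores upside dispersion, the Sortino ratio typically exceeds the Sharpe ratio for the same return series.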
6. Conclusions
Building stock portfolios with high returns and low risk is a common challenge for researchers in the financial area. The most common practice is to select the most promising stocks
according to several factors, such as financial information, market news and technical analysis. Several approaches that use computational intelligence algorithms have been proposed in the
literature to deal with the overwhelming complexity of building a stock portfolio. Usually, these approaches consider up to three activities to build a portfolio: return forecasting, stock selection
and portfolio optimization. These activities decide which stocks should be supported, as well as the proportions of the investment to be allocated to them, by comparing the historical and forecasted
performance of potential stock investments. However, to the best of our knowledge, these approaches do not comprehensively address the three activities when considering downtrends in stock prices.
In this paper, a comprehensive approach to stock portfolio management is proposed; the approach includes stock price forecasting, stock selection and stock portfolio optimization while taking
advantage of market downtrends.
Stock price forecasting is carried out through an artificial neural network (ANN) trained by the extreme learning machine (ELM) algorithm. Forecasting the price of a given stock allows the
comprehensive approach to focus on uptrends or downtrends (i.e., going long or short, respectively) for that stock. Stock selection is modeled as an optimization problem that seeks to determine the
most plausible stocks; thus, a differential evolution is exploited on the basis of the forecasted price and a set of factors of the so-called fundamental analysis. Finally, portfolio optimization is
conducted through a genetic algorithm that uses confidence intervals of the portfolio returns to determine the best stock portfolio.
In preliminary experiments, we found that the ELM was better than other methods (ANN with back-propagation, random forest, support vector regression) at forecasting the trend of the stock
price, but not the best at forecasting stock returns. Therefore, more research should be conducted to discover better configurations of the ANN with ELM or to decide whether the forecasting stage should be
changed. However, further research on this, as well as on methods to increase the performance of the next stages of the comprehensive approach, is beyond the scope of this work, so the authors will
address these issues in future works.
Regarding the assessment of the comprehensive approach, the obtained results show that the stock selection and portfolio optimization stages produce more profitable portfolios when negative trends of stocks
are taken into account to take advantage of market downtrends (see Table 2, Figure 3 and Figure 4). Furthermore, the results show that the proposed approach outperforms not only a traditional benchmark, the Standard and Poor’s 500 index, but also approaches that do not exploit negative
market trends (e.g., [
This research work could be improved by the following possible future directions:
A deeper study of the forecasting stage to test the performance of several AI methods by employing more data or different financial variables;
A deeper study on the selection stage to evaluate the performance of the system by employing different financial variables to build the stock portfolio;
A deeper study of the performance of the system by modifying different parameters in the optimization stage and comparing the results with other approaches;
New experiments to show the robustness of the approach regarding (i) the number and type of alternatives in the universe of stocks, (ii) the number of selected stocks and (iii) the parameter values.
Author Contributions
Conceptualization, E.S. and V.d.-L.-G.; methodology, E.S.; software, E.S., F.G.S. and V.d.-L.-G.; validation, F.G.S., E.S. and V.d.-L.-G.; formal analysis, F.G.S. and R.D.; investigation, R.D.;
resources, R.D.; data curation, V.d.-L.-G. and F.G.S.; writing—original draft preparation, E.S.; writing—review and editing, F.G.S. and V.d.-L.-G.; visualization, R.D. and V.d.-L.-G.; supervision,
R.D. and F.G.S.; funding acquisition, R.D. All authors have read and agreed to the published version of the manuscript.
This research was funded by Instituto Tecnológico y de Estudios Superiores de Monterrey, and by the Mexican National Council of Science and Technology (CONACYT) grant number 321028, and by SEP-PRODEP
México grant numbers UACOAH-PTC-545 and UACOAH-CA-479. The APC was funded by Instituto Tecnológico y de Estudios Superiores de Monterrey.
The work of Raymundo Díaz was supported by the vice president of Research of Tecnológico de Monterrey. Efrain Solares thanks the Mexican National Council of Science and Technology (CONACYT) for its
support to project no. 321028 and SEP-PRODEP México for its support under grant UACOAH-PTC-545. Francisco G. Salas and Víctor De-Leon-Gomez were supported by SEP-PRODEP México.
Conflicts of Interest
The authors declare no conflict of interest.
1. Pan, H.; Long, M. Intelligent Portfolio Theory and Application in Stock Investment with Multi-Factor Models and Trend Following Trading Strategies. Procedia Comput. Sci. 2021, 187, 414–419.
2. Jiang, X.; Peterburgsky, S. Investment performance of shorted leveraged ETF pairs. Appl. Econ. 2017, 49, 4410–4427.
3. Hurlin, C.; Iseli, G.; Pérignon, C.; Yeung, S. The counterparty risk exposure of ETF investors. J. Bank. Financ. 2019, 102, 215–230.
4. Holzhauer, H.M.; Lu, X.; McLeod, R.W.; Mehran, J. Bad news bears: Effects of expected market volatility on daily tracking error of leveraged bull and bear ETFs. Manag. Financ. 2013, 39, 1169–1187.
5. Gregory-Allen, R.B.; Smith, D.M.; Werman, M. Chapter 30—Short Selling by Portfolio Managers: Performance and Risk Effects across Investment Styles. In Handbook of Short Selling; Gregoriou, G.N., Ed.; Academic Press: San Diego, CA, USA, 2012; pp. 437–451.
6. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
7. Li, X.; Xie, H.; Wang, R.; Cai, Y.; Cao, J.; Wang, F.; Min, H.; Deng, X. Empirical analysis: Stock market prediction via extreme learning machine. Neural Comput. Appl. 2016, 27, 67–78.
8. Wang, F.; Zhang, Y.; Rao, Q.; Li, K.; Zhang, H. Exploring mutual information-based sentimental analysis with kernel-based extreme learning machine for stock prediction. Soft Comput. 2017, 21, 3193–3205.
9. Das, S.P.; Padhy, S. Unsupervised extreme learning machine and support vector regression hybrid model for predicting energy commodity futures index. Memetic Comput. 2017, 9, 333–346.
10. Yang, F.; Chen, Z.; Li, J.; Tang, L. A novel hybrid stock selection method with stock prediction. Appl. Soft Comput. 2019, 80, 820–831.
11. Fernandez, E.; Navarro, J.; Solares, E.; Coello, C.C. A novel approach to select the best portfolio considering the preferences of the decision maker. Swarm Evol. Comput. 2019, 46, 140–153.
12. Solares, E.; Coello, C.A.C.; Fernandez, E.; Navarro, J. Handling uncertainty through confidence intervals in portfolio optimization. Swarm Evol. Comput. 2019, 44, 774–787.
13. Xidonas, P.; Mavrotas, G.; Psarras, J. A multicriteria methodology for equity selection using financial analysis. Comput. Oper. Res. 2009, 36, 3187–3203.
14. Marasović, B.; Poklepović, T.; Aljinović, Z. Markowitz’ model with fundamental and technical analysis–complementary methods or not. Croat. Oper. Res. Rev. 2011, 2, 122–132.
15. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938.
16. Sumathi, S.; Paneerselvam, S. Computational Intelligence Paradigms: Theory & Applications Using MATLAB; CRC Press: New York, NY, USA, 2010.
17. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6.
18. Coello, C.A.C.; Lamont, G.B.; Van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: Berlin, Germany, 2007; Volume 5.
19. Bianchi, L.; Dorigo, M.; Gambardella, L.M.; Gutjahr, W.J. A survey on metaheuristics for stochastic combinatorial optimization. Nat. Comput. 2009, 8, 239–287.
20. Pławiak, P. Novel genetic ensembles of classifiers applied to myocardium dysfunction recognition based on ECG signals. Swarm Evol. Comput. 2018, 39, 192–208.
21. Pławiak, P. Novel methodology of cardiac health recognition based on ECG signals and evolutionary-neural system. Expert Syst. Appl. 2018, 92, 334–349.
22. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31.
23. Krink, T.; Paterlini, S. Multiobjective optimization using differential evolution for real-world portfolio optimization. Comput. Manag. Sci. 2011, 8, 157–179.
24. Krink, T.; Mittnik, S.; Paterlini, S. Differential evolution and combinatorial search for constrained index-tracking. Ann. Oper. Res. 2009, 172, 153.
25. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
26. Sharma, D.K.; Hota, H.; Brown, K.; Handa, R. Integration of genetic algorithm with artificial neural network for stock market forecasting. Int. J. Syst. Assur. Eng. Manag. 2021, 1–14.
27. Ferreira, F.; Gandomi, A.H.; Cardoso, R.T.N. Artificial Intelligence Applied to Stock Market Trading: A Review. IEEE Access 2021, 9, 30898–30917.
28. Chopra, R.; Sharma, G.D. Application of Artificial Intelligence in Stock Market Forecasting: A Critique, Review, and Research Agenda. J. Risk Financ. Manag. 2021, 14, 526.
29. Ma, Y.L.; Han, R.Z.; Wang, W.Z. Prediction-Based Portfolio Optimization Models Using Deep Neural Networks. IEEE Access 2020, 8, 115393–115405.
30. Fischer, T.; Krauss, C. Deep learning with long short-term memory networks for financial market predictions. Eur. J. Oper. Res. 2018, 270, 654–669.
31. Long, W.; Lu, Z.; Cui, L. Deep learning-based feature engineering for stock price movement prediction. Knowl.-Based Syst. 2019, 164, 163–173.
32. Zhong, X.; Enke, D. Predicting the daily return direction of the stock market using hybrid machine learning algorithms. Financ. Innov. 2019, 5, 1–20.
33. Kaczmarek, T.; Perez, K. Building portfolios based on machine learning predictions. Econ. Res.-Ekon. Istraz. 2021, 1–19.
34. Patel, J.; Shah, S.; Thakkar, P.; Kotecha, K. Predicting stock and stock price index movement using Trend Deterministic Data Preparation and machine learning techniques. Expert Syst. Appl. 2015, 42, 259–268.
35. Chong, E.; Han, C.; Park, F.C. Deep learning networks for stock market analysis and prediction: Methodology, data representations, and case studies. Expert Syst. Appl. 2017, 83, 187–205.
36. Huang, G.B.; Siew, C.K. Extreme learning machine: RBF network case. In Proceedings of the 2004 8th International Conference on Control, Automation, Robotics and Vision (ICARCV), Kunming, China, 6–9 December 2004; Volume 2, pp. 1029–1036.
37. Peykani, P.; Mohammadi, E.; Jabbarzadeh, A.; Rostamy-Malkhalifeh, M.; Pishvaee, M.S. A novel two-phase robust portfolio selection and optimization approach under uncertainty: A case study of Tehran stock exchange. PLoS ONE 2020, 15, e239810.
38. Mussafi, N.S.M.; Ismail, Z. Optimum Risk-Adjusted Islamic Stock Portfolio Using the Quadratic Programming Model: An Empirical Study in Indonesia. J. Asian Financ. Econ. Bus. 2021, 8, 839–850.
39. Lim, S.; Kim, M.J.; Ahn, C.W. A Genetic Algorithm (GA) Approach to the Portfolio Design Based on Market Movements and Asset Valuations. IEEE Access 2020, 8, 140234–140249.
40. Wang, W.Y.; Li, W.Z.; Zhang, N.; Liu, K.C. Portfolio formation with preselection using deep learning from long-term financial data. Expert Syst. Appl. 2020, 143, 113042.
41. Zhang, C.; Liang, S.; Lyu, F.; Fang, L. Stock-index tracking optimization using auto-encoders. Front. Phys. 2020, 8, 388.
42. Paiva, F.D.; Cardoso, R.T.N.; Hanaoka, G.P.; Duarte, W.M. Decision-making for financial trading: A fusion approach of machine learning and portfolio selection. Expert Syst. Appl. 2019, 115, 635–655.
43. Galankashi, M.R.; Rafiei, F.M.; Ghezelbash, M. Portfolio selection: A fuzzy-ANP approach. Financ. Innov. 2020, 6, 34.
44. Markowitz, H. Portfolio Selection. J. Financ. 1952, 7, 77–91.
45. Kalayci, C.B.; Ertenlice, O.; Akbay, M.A. A comprehensive review of deterministic models and applications for mean-variance portfolio optimization. Expert Syst. Appl. 2019, 125, 345–368.
46. Rockafellar, R.; Uryasev, S. Optimization of conditional value-at-risk. J. Risk 2002, 2, 21–41.
47. Rockafellar, R.; Uryasev, S. Conditional value-at-risk for general loss distributions. J. Bank. Financ. 2002, 26, 1443–1471.
48. Sehgal, R.; Mehra, A. Robust reward–risk ratio portfolio optimization. Int. Trans. Oper. Res. 2021, 28, 2169–2190.
49. Hu, Y.; Lindquist, W.B.; Rachev, S.T. Portfolio Optimization Constrained by Performance Attribution. J. Risk Financ. Manag. 2021, 14, 201.
50. Dai, Z.; Wen, F. Some improved sparse and stable portfolio optimization problems. Financ. Res. Lett. 2018, 27, 46–52.
51. Baykasoğlu, A.; Yunusoglu, M.G.; Özsoydan, F.B. A GRASP based solution approach to solve cardinality constrained portfolio optimization problems. Comput. Ind. Eng. 2015, 90, 339–351.
52. Mayambala, F.; Rönnberg, E.; Larsson, T. Eigendecomposition of the Mean-Variance Portfolio Optimization Model. In Optimization, Control, and Applications in the Information Age; Migdalas, A., Karakitsiou, A., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 209–232.
53. Kocadağli, O.; Keskin, R. A novel portfolio selection model based on fuzzy goal programming with different importance and priorities. Expert Syst. Appl. 2015, 42, 6898–6912.
54. He, F.; Qu, R. Hybridising Local Search With Branch-And-Bound For Constrained Portfolio Selection Problems. In Proceedings of the 30th European Council for Modeling and Simulation, Regensburg, Germany, 31 May–3 June 2016; Claus, T., Herrmann, F., Manitz, M., Rose, O., Eds.; Digital Library of the European Council for Modelling and Simulation: Regensburg, Germany, 2016; pp. 1–7.
55. Ruiz-Torrubiano, R.; Suárez, A. A memetic algorithm for cardinality-constrained portfolio optimization with transaction costs. Appl. Soft Comput. 2015, 36, 125–142.
56. Soleymani, F.; Paquet, E. Financial portfolio optimization with online deep reinforcement learning and restricted stacked autoencoder—DeepBreath. Expert Syst. Appl. 2020, 156, 113456.
57. García, F.; Guijarro, F.; Oliver, J. Index tracking optimization with cardinality constraint: A performance comparison of genetic algorithms and tabu search heuristics. Neural Comput. Appl. 2018, 30, 2625–2641.
58. Hadi, A.S.; Naggar, A.A.E.; Bary, M.N.A. New model and method for portfolios selection. Appl. Math. Sci. 2016, 10, 263–288.
59. Liagkouras, K.; Metaxiotis, K. A new efficiently encoded multiobjective algorithm for the solution of the cardinality constrained portfolio optimization problem. Ann. Oper. Res. 2018, 267, 281–319.
60. Macedo, L.L.; Godinho, P.; Alves, M.J. Mean-semivariance portfolio optimization with multiobjective evolutionary algorithms and technical analysis rules. Expert Syst. Appl. 2017, 79, 33–43.
61. Lwin, K.T.; Qu, R.; MacCarthy, B.L. Mean-VaR portfolio optimization: A nonparametric approach. Eur. J. Oper. Res. 2017, 260, 751–766.
62. Ban, G.Y.; Karoui, N.E.; Lim, A.E.B. Machine Learning and Portfolio Optimization. Manag. Sci. 2016, 64, 1136–1154.
63. Kizys, R.; Juan, A.; Sawik, B.; Calvet, L. A Biased-Randomized Iterated Local Search Algorithm for Rich Portfolio Optimization. Appl. Sci. 2019, 9, 3509.
64. Kalayci, C.B.; Ertenlice, O.; Akyer, H.; Aygoren, H. An artificial bee colony algorithm with feasibility enforcement and infeasibility toleration procedures for cardinality constrained portfolio optimization. Expert Syst. Appl. 2017, 85, 61–75.
65. Mendonça, G.H.; Ferreira, F.G.; Cardoso, R.T.; Martins, F.V. Multi-attribute decision making applied to financial portfolio optimization problem. Expert Syst. Appl. 2020, 158, 113527.
66. Fernández, E.; Figueira, J.R.; Navarro, J. An interval extension of the outranking approach and its application to multiple-criteria ordinal classification. Omega 2019, 84, 189–198.
67. Sunaga, T. Theory of an Interval Algebra and Its Applications to Numerical Analysis. RAAG Memoirs 1958, 2, 29–46.
68. Moore, R.E. Interval Arithmetic And Automatic Error Analysis in Digital Computing; Stanford University: Stanford, CA, USA, 1963.
69. Hui, E.C.; Chan, K.K.K. Alternative trading strategies to beat “buy-and-hold”. Phys. A Stat. Mech. Its Appl. 2019, 534, 120800.
70. Hui, E.C.; Chan, K.K.K. A new time-dependent trading strategy for securitized real estate and equity indices. Int. J. Strateg. Prop. Manag. 2018, 22, 64–79.
71. Allen, D.E.; Powell, R.J.; Singh, A.K. Chapter 32—Machine Learning and Short Positions in Stock Trading Strategies. In Handbook of Short Selling; Gregoriou, G.N., Ed.; Academic Press: San Diego, CA, USA, 2012; pp. 467–478.
72. Baumann, M.H.; Grüne, L. Simultaneously long-short trading in discrete and continuous time. Syst. Control. Lett. 2017, 99, 85–89.
73. Primbs, J.A.; Barmish, B.R. On Robustness of Simultaneous Long-Short Stock Trading Control with Time-Varying Price Dynamics. IFAC-PapersOnLine 2017, 50, 12267–12272.
74. O’Brien, J.D.; Burke, M.E.; Burke, K. A Generalized Framework for Simultaneous Long-Short Feedback Trading. IEEE Trans. Autom. Control. 2021, 66, 2652–2663.
75. Deshpande, A.; Gubner, J.A.; Barmish, B.R. On Simultaneous Long-Short Stock Trading Controllers with Cross-Coupling. IFAC-PapersOnLine 2020, 53, 16989–16995.
76. Fu, X.; Du, J.; Guo, Y.; Liu, M.; Dong, T.; Duan, X. A machine learning framework for stock selection. arXiv 2018, arXiv:1806.01743.
77. Zhang, R.; Lin, Z.; Chen, S.; Lin, Z.; Liang, X. Multi-factor Stock Selection Model Based on Kernel Support Vector Machine. J. Math. Res. 2018, 10, 9.
78. Becker, Y.L.; Fei, P.; Lester, A.M. Stock selection: An innovative application of genetic programming methodology. In Genetic Programming Theory and Practice IV; Springer: Berlin, Germany, 2007; pp. 315–334.
79. Levin, A. Stock selection via nonlinear multi-factor models. Adv. Neural Inf. Process. Syst. 1995, 8, 966–972.
80. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
81. Sengupta, A.; Pal, T.K. On comparing interval numbers. Eur. J. Oper. Res. 2000, 127, 28–43.
82. Ishibuchi, H.; Tanaka, H. Multiobjective programming in optimization of the interval objective function. Eur. J. Oper. Res. 1990, 48, 219–225.
83. Shi, J.R.; Liu, S.Y.; Xiong, W.T. A new solution for interval number linear programming. Syst. Eng.-Theory Pract. 2005, 2, 16.
84. Solares, E.; Fernandez, E.; Navarro, J. A generalization of the outranking approach by incorporating uncertainty as interval numbers. Investig. Oper. 2019, 39, 501–514.
85. Li, G.-D.; Yamaguchi, D.; Nagai, M. A grey-based decision-making approach to the supplier selection problem. Math. Comput. Model. 2007, 46, 573–581.
86. Bhattacharyya, R. A grey theory based multiple attribute approach for R&D project portfolio selection. Fuzzy Inf. Eng. 2015, 7, 211–225.
87. Fernández, E.; Navarro, J.; Solares, E. A hierarchical interval outranking approach with interacting criteria. Eur. J. Oper. Res. 2022, 298, 293–307.
88. Ivkovic, N.; Jakobovic, D.; Golub, M. Measuring Performance of Optimization Algorithms in Evolutionary Computation. Int. J. Mach. Learn. Comput. 2016, 6, 167–171.
89. McKenna, B. Why NVIDIA Stock Plunged 31% in 2018; Motley Fool: Alexandria, VA, USA, 2019.
90. Li, X.; Xie, H.; Chen, L.; Wang, J.; Deng, X. News impact on stock price return via sentiment analysis. Knowl.-Based Syst. 2014, 69, 14–23.
Table 1. Returns produced per period. In the case of the algorithms, the return is averaged over twenty runs.
Month       S&P500 Index   Yang et al. (2019)   Solares et al. (2019)   Without Negative Trends   With Negative Trends
Nov. 2018 1.75% 1.01% 1.87% −5.11% −4.87%
Dec. 2018 −10.11% −9.18% −8.81% −9.56% −9.14%
Jan. 2019 7.29% 10.88% 6.71% 6.77% 9.00%
Feb. 2019 2.89% 7.47% 4.19% 7.00% 6.52%
Mar. 2019 1.76% 0.20% 2.17% 0.89% 0.81%
Apr. 2019 3.78% 4.29% 4.65% 3.88% 4.06%
May. 2019 −7.04% −7.22% −5.65% −7.66% −5.77%
Jun. 2019 6.45% 8.45% 7.53% 8.06% 9.33%
Jul. 2019 1.30% 0.25% 0.92% 2.66% 2.64%
Aug. 2019 −1.84% −1.08% −1.78% −0.03% −3.19%
Sep. 2019 1.69% −1.63% 0.83% −6.20% −4.96%
Oct. 2019 2.00% 3.12% 1.67% 5.85% 5.09%
Nov. 2019 3.29% 2.58% 4.00% 4.17% 5.43%
Dec. 2019 2.78% 1.13% 2.39% 0.13% 0.36%
Jan. 2020 −0.16% 0.81% 1.67% 2.13% 1.29%
Feb. 2020 −9.18% −9.09% −9.28% −7.22% −4.96%
Mar. 2020 −14.30% −10.27% −14.03% −6.59% −4.94%
Apr. 2020 11.26% 14.33% 12.53% 19.64% 20.02%
May. 2020 4.33% 7.09% 7.02% 11.54% 11.11%
Jun. 2020 1.81% −0.29% 0.15% 1.95% 2.88%
Jul. 2020 5.22% 4.18% 5.87% 5.28% 10.65%
Aug. 2020 6.55% 4.68% 3.90% 4.18% 5.94%
Sep. 2020 −4.08% −3.95% −1.10% −3.20% −3.04%
Oct. 2020 −2.85% −4.74% −2.05% −5.88% −2.31%
Nov. 2020 9.71% 11.50% 11.91% 8.28% 4.97%
Dec. 2020 3.58% 2.95% 5.39% 3.33% 4.37%
Jan. 2021 −1.13% −2.23% −0.53% −3.06% −3.76%
Feb. 2021 2.54% 3.43% 8.35% 1.51% 5.23%
Mar. 2021 4.07% 7.22% 3.23% 0.88% 3.75%
Apr. 2021 4.98% 6.05% 5.09% 6.00% 6.84%
Average 1.28% 1.73% 1.96% 1.65% 2.45%
Std. dev. 5.61% 6.06% 5.76% 6.35% 6.27%
Table 2. Sum of returns and cumulative returns. In the case of the algorithms, the return is averaged over twenty runs.
                 Sum of Returns                                                                                Cumulative Returns
Month    S&P500 Index   Yang et al. (2019)   Solares et al. (2019)   Without Downtrends   With Downtrends   S&P500 Index   Yang et al. (2019)   Solares et al. (2019)   Without Downtrends   With Downtrends
Nov. 2018 1.75% 1.01% 1.87% −5.11% −4.87% 1.75% 1.01% 1.87% −5.11% −4.87%
Dec. 2018 −8.35% −8.17% −6.94% −14.67% −14.01% −8.53% −8.27% −7.11% −14.18% −13.56%
Jan. 2019 −1.06% 2.70% −0.24% −7.89% −5.01% −1.86% 1.71% −0.88% −8.37% −5.79%
Feb. 2019 1.83% 10.17% 3.95% −0.89% 1.51% 0.98% 9.31% 3.28% −1.95% 0.36%
Mar. 2019 3.59% 10.37% 6.12% 0.00% 2.32% 2.76% 9.53% 5.52% −1.08% 1.17%
Apr. 2019 7.37% 14.66% 10.77% 3.88% 6.38% 6.64% 14.23% 10.42% 2.76% 5.28%
May. 2019 0.33% 7.44% 5.12% −3.78% 0.61% −0.87% 5.98% 4.18% −5.11% −0.80%
Jun. 2019 6.78% 15.89% 12.65% 4.28% 9.94% 5.53% 14.93% 12.03% 2.54% 8.46%
Jul. 2019 8.08% 16.14% 13.58% 6.94% 12.58% 6.89% 15.22% 13.07% 5.27% 11.32%
Aug. 2019 6.24% 15.06% 11.80% 6.91% 9.39% 4.93% 13.97% 11.05% 5.23% 7.77%
Sep. 2019 7.92% 13.43% 12.63% 0.70% 4.43% 6.70% 12.11% 11.98% −1.29% 2.43%
Oct. 2019 9.93% 16.54% 14.30% 6.56% 9.52% 8.83% 15.61% 13.85% 4.48% 7.64%
Nov. 2019 13.22% 19.12% 18.30% 10.72% 14.95% 12.42% 18.59% 18.40% 8.84% 13.48%
Dec. 2019 16.00% 20.25% 20.68% 10.85% 15.31% 15.54% 19.93% 21.22% 8.97% 13.89%
Jan. 2020 15.84% 21.06% 22.36% 12.98% 16.60% 15.35% 20.90% 23.25% 11.30% 15.36%
Feb. 2020 6.65% 11.98% 13.07% 5.76% 11.64% 4.76% 9.92% 11.81% 3.26% 9.64%
Mar. 2020 −7.65% 1.71% −0.96% −0.83% 6.70% −10.22% −1.37% −3.88% −3.54% 4.23%
Apr. 2020 3.61% 16.04% 11.57% 18.81% 26.73% −0.12% 12.77% 8.16% 15.40% 25.10%
May. 2020 7.94% 23.13% 18.58% 30.36% 37.84% 4.21% 20.76% 15.75% 28.72% 38.99%
Jun. 2020 9.75% 22.84% 18.73% 32.31% 40.72% 6.09% 20.41% 15.91% 31.24% 43.00%
Jul. 2020 14.97% 27.01% 24.60% 37.59% 51.37% 11.63% 25.43% 22.72% 38.16% 58.23%
Aug. 2020 21.52% 31.69% 28.51% 41.77% 57.31% 18.94% 31.30% 27.51% 43.94% 67.63%
Sep. 2020 17.43% 27.74% 27.41% 38.57% 54.28% 14.09% 26.12% 26.11% 39.33% 62.54%
Oct. 2020 14.59% 23.01% 25.36% 32.68% 51.97% 10.84% 20.14% 23.53% 31.13% 58.80%
Nov. 2020 24.30% 34.51% 37.27% 40.97% 56.94% 21.60% 33.97% 38.24% 41.99% 66.69%
Dec. 2020 27.88% 37.47% 42.66% 44.30% 61.31% 25.96% 37.92% 45.70% 46.73% 73.97%
Jan. 2021 26.75% 35.23% 42.14% 41.24% 57.55% 24.54% 34.85% 44.93% 42.24% 67.43%
Feb. 2021 29.29% 38.66% 50.48% 42.75% 62.78% 27.70% 39.47% 57.03% 44.39% 76.18%
Mar. 2021 33.36% 45.88% 53.71% 43.63% 66.52% 32.90% 49.54% 62.10% 45.66% 82.78%
Apr. 2021 38.35% 51.93% 58.80% 49.64% 73.36% 39.52% 58.59% 70.35% 54.41% 95.28%
                      Sharpe Ratio   Sortino Ratio
S&P’s 500 0.1831 0.2529
Yang et al. (2019) 0.2445 0.4072
Solares et al. (2019) 0.2966 0.4551
Without downtrends 0.2213 0.3884
With downtrends 0.3502 0.7223
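For reference, Sharpe and Sortino ratios like the ones tabulated above can be computed from a monthly return series roughly as follows. This is a hypothetical sketch only: it assumes a zero risk-free rate and sample statistics, and the paper's exact conventions are not shown in this excerpt; the `monthly` list reuses the first seven S&P 500 rows of Table 1 purely as sample input.

```python
import statistics

def sharpe(returns):
    """Mean return divided by the standard deviation of all returns."""
    return statistics.mean(returns) / statistics.stdev(returns)

def sortino(returns):
    """Mean return divided by downside deviation (only negative returns penalise)."""
    downside_sq = [r * r for r in returns if r < 0]
    downside_dev = (sum(downside_sq) / len(returns)) ** 0.5
    return statistics.mean(returns) / downside_dev

# Sample input: the first seven S&P 500 monthly returns from Table 1.
monthly = [0.0175, -0.1011, 0.0729, 0.0289, 0.0176, 0.0378, -0.0704]
print(sharpe(monthly), sortino(monthly))
```

Because the downside deviation ignores positive months, the Sortino ratio of a series with mostly positive returns exceeds its Sharpe ratio, which matches the pattern in the table above.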
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
MDPI and ACS Style
Díaz, R.; Solares, E.; de-León-Gómez, V.; Salas, F.G. Stock Portfolio Management in the Presence of Downtrends Using Computational Intelligence. Appl. Sci. 2022, 12, 4067. https://doi.org/10.3390/
AMA Style
Díaz R, Solares E, de-León-Gómez V, Salas FG. Stock Portfolio Management in the Presence of Downtrends Using Computational Intelligence. Applied Sciences. 2022; 12(8):4067. https://doi.org/10.3390/
Chicago/Turabian Style
Díaz, Raymundo, Efrain Solares, Victor de-León-Gómez, and Francisco G. Salas. 2022. "Stock Portfolio Management in the Presence of Downtrends Using Computational Intelligence" Applied Sciences 12,
no. 8: 4067. https://doi.org/10.3390/app12084067
Article Metrics | {"url":"https://www.mdpi.com/2076-3417/12/8/4067","timestamp":"2024-11-09T07:28:08Z","content_type":"text/html","content_length":"565342","record_id":"<urn:uuid:3ffaeb3e-5f72-4f6a-8f85-1aaa662df020>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00637.warc.gz"} |
How do you find the exact value of the six trigonometric functions of ${{45}^{\circ }}$?
We first express all six trigonometric functions. We divide them into the primary ratios and their reciprocal ratios. We also find all possible relations between those ratios. Then we take the angle value of ${{45}^{\circ }}$ for all six trigonometric functions.
Complete step by step solution:
We first complete the list of all the six trigonometric functions.
The main three trigonometric ratio functions are $\sin \theta ,\cos \theta ,\tan \theta $. The reciprocals of these three functions are $\csc \theta ,\sec \theta ,\cot \theta $. Also, we can express $\tan \theta =\dfrac{\sin \theta }{\cos \theta }$.
Therefore, the relations are $\csc \theta =\dfrac{1}{\sin \theta },\sec \theta =\dfrac{1}{\cos \theta },\cot \theta =\dfrac{1}{\tan \theta }$.
We can also express these ratios with respect to a specific angle $\theta $ of a right-angle triangle and use the sides of that triangle to find the value of the ratio.
A right-angle triangle has three sides and they are base, height, hypotenuse. We express the ratios in $\sin \theta =\dfrac{\text{height}}{\text{hypotenuse}},\cos \theta =\dfrac{\text{base}}{\text
{hypotenuse}},\tan \theta =\dfrac{\text{height}}{\text{base}}$.
Similarly, $\csc \theta =\dfrac{\text{hypotenuse}}{\text{height}},\sec \theta =\dfrac{\text{hypotenuse}}{\text{base}},\cot \theta =\dfrac{\text{base}}{\text{height}}$.
Now we express the values of these ratios for the conventional angles of ${{45}^{\circ }}$.
Ratio            Angle (in degrees)      Value
$\sin \theta $ ${{45}^{\circ }}$ $\dfrac{1}{\sqrt{2}}$
$\cos \theta $ ${{45}^{\circ }}$ $\dfrac{1}{\sqrt{2}}$
$\tan \theta $ ${{45}^{\circ }}$ 1
$\csc \theta $ ${{45}^{\circ }}$ $\sqrt{2}$
$\sec \theta $ ${{45}^{\circ }}$ $\sqrt{2}$
$\cot \theta $ ${{45}^{\circ }}$ 1
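These values can be derived from an isosceles right-angled triangle; the following standard derivation (added here for completeness) takes both legs equal to 1:

```latex
\text{With base} = \text{height} = 1:\quad
\text{hypotenuse} = \sqrt{1^{2}+1^{2}} = \sqrt{2}
\\[4pt]
\sin {{45}^{\circ }} = \dfrac{\text{height}}{\text{hypotenuse}} = \dfrac{1}{\sqrt{2}},\qquad
\cos {{45}^{\circ }} = \dfrac{\text{base}}{\text{hypotenuse}} = \dfrac{1}{\sqrt{2}},\qquad
\tan {{45}^{\circ }} = \dfrac{\sin {{45}^{\circ }}}{\cos {{45}^{\circ }}} = 1
```

The reciprocal ratios $\csc ,\sec ,\cot $ then follow immediately as $\sqrt{2},\sqrt{2},1$.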
We need to remember that in mathematics, the trigonometric functions are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all
sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. | {"url":"https://www.vedantu.com/question-answer/find-the-exact-value-of-the-six-trigonometric-class-11-maths-cbse-6010bc31dfcfb40cf08a3e2d","timestamp":"2024-11-11T10:26:32Z","content_type":"text/html","content_length":"164649","record_id":"<urn:uuid:01b3b7e1-574e-49fd-82c5-22bbb29b5ec2>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00597.warc.gz"} |
A short introduction to the disaggregation
We will demonstrate an example of the disaggregation package using areal data of leukemia incidence in New York, using data from the package SpatialEpi.
library(SpatialEpi, quietly = TRUE)
library(dplyr, quietly = TRUE)
library(disaggregation, quietly = TRUE)
polygons <- sf::st_as_sf(NYleukemia$spatial.polygon)
df <- cbind(polygons, NYleukemia$data)
ggplot() + geom_sf(data = df, aes(fill = cases / population)) + scale_fill_viridis_c(lim = c(0, 0.003))
Now we simulate two covariate rasters for the area of interest and make a two-layered SpatRaster. They are simulated at the resolution of approximately 1km^2.
bbox <- sf::st_bbox(df)
extent_in_km <- 111*(bbox[c(3, 4)] - bbox[c(1, 2)])
n_pixels_x <- floor(extent_in_km[[1]])
n_pixels_y <- floor(extent_in_km[[2]])
r <- terra::rast(ncols = n_pixels_x, nrows = n_pixels_y)
terra::ext(r) <- terra::ext(df)
data_generate <- function(x){
  rnorm(1, ifelse(x %% n_pixels_x != 0, x %% n_pixels_x, n_pixels_x), 3)
}
terra::values(r) <- sapply(seq(terra::ncell(r)), data_generate)
r2 <- terra::rast(ncol = n_pixels_x, nrow = n_pixels_y)
terra::ext(r2) <- terra::ext(df)
terra::values(r2) <- sapply(seq(terra::ncell(r2)),
function(x) rnorm(1, ceiling(x/n_pixels_y), 3))
cov_stack <- terra::rast(list(r, r2))
cov_stack <- terra::scale(cov_stack)
names(cov_stack) <- c('layer1', 'layer2')
We also create a population raster. This is to allow the model to correctly aggregate the pixel values to the polygon level. For this simple example we assume that the population within each polygon is uniformly distributed.
extracted <- terra::extract(r, terra::vect(df$geometry), fun = sum)
n_cells <- terra::extract(r, terra::vect(df$geometry), fun = length)
df$pop_per_cell <- df$population/n_cells$lyr.1
pop_raster <- terra::rasterize(terra::vect(df), cov_stack, field = 'pop_per_cell')
To correct small inconsistencies in the polygon geometry, we run the code below.
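The geometry-cleaning snippet itself appears to have been lost in extraction. A typical fix (an assumption on my part, not necessarily the vignette's exact code) uses `sf::st_make_valid`:

```r
# Hypothetical reconstruction: repair any invalid polygon geometries
# before extraction and rasterization. st_make_valid() is the standard
# sf tool for this; the vignette's original call may have differed.
df <- sf::st_make_valid(df)
```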
Now that we have set up the data, we can use the prepare_data function to create the objects needed to run the disaggregation model. The name of the response variable and the id variable in the sf object should be specified.
The user can also control the parameters of the mesh that is used to create the spatial field. The mesh is created by finding a tight boundary around the polygon data, and creating a fine mesh within
the boundary and a coarser mesh outside. This speeds up computation time by only having a very fine mesh within the area of interest and having a small region outside with a coarser mesh to avoid
edge effects. The mesh parameters: concave, convex and resolution refer to the parameters used to create the mesh boundary using the fm_nonconvex_hull_inla function, while the mesh parameters
max.edge, cut and offset refer to the parameters used to create the mesh using the fm_mesh_2d function.
data_for_model <- prepare_data(polygon_shapefile = df,
covariate_rasters = cov_stack,
aggregation_raster = pop_raster,
response_var = 'cases',
id_var = 'censustract.FIPS',
mesh_args = list(cutoff = 0.01,
offset = c(0.1, 0.5),
max.edge = c(0.1, 0.2),
resolution = 250),
na_action = TRUE)
Now that we have our data object, we are ready to run the model. Here we can specify the likelihood function as Gaussian, binomial or Poisson, and the link function as logit, log or identity. The disaggregation model makes predictions at the pixel level:
\(link(pred_i) = \beta_0 + \beta X + GP(s_i) + u_i\)
where \(X\) are the covariates, \(GP\) is the Gaussian random field and \(u_i\) is the iid random effect. The pixel predictions are then aggregated to the polygon level using the weighted sum (via
the aggregation raster, \(agg_i\)):
\(cases_j = \sum_{i \epsilon j} pred_i \times agg_i\)
\(rate_j = \frac{\sum_{i \epsilon j} pred_i \times agg_i}{\sum_{i \epsilon j} agg_i}\)
The different likelihoods correspond to slightly different models (\(y_j\) is the response count data):
Gaussian (\(\sigma_j\) is the dispersion of the polygon data),
\(dnorm(y_j/\sum agg_i, rate_j, \sigma_j)\)
Here \(\sigma_j = \sigma \sqrt{\sum agg_i^2} / \sum agg_i\), where \(\sigma\) is the dispersion of the pixel data, a parameter learnt by the model.
Binomial (For a survey in polygon j, \(y_j\) is the number positive and \(N_j\) is the number tested)
\(dbinom(y_j, N_j, rate_j)\)
Poisson (predicts incidence count)
\(dpois(y_j, cases_j)\)
The user can also specify the priors for the regression parameters. For the field, the user specifies the pc priors for the range, \(\rho_{min}\) and \(\rho_{prob}\), where \(P(\rho < \rho_{min}) = \rho_{prob}\), and the variation, \(\sigma_{min}\) and \(\sigma_{prob}\), where \(P(\sigma > \sigma_{min}) = \sigma_{prob}\), in the field. For the iid effect, the user also specifies pc priors.
By default the model contains a spatial field and a polygon iid effect. These can be turned off in the disag_model function, using field = FALSE or iid = FALSE.
model_result <- disag_model(data_for_model,
iterations = 1000,
family = 'poisson',
link = 'log',
priors = list(priormean_intercept = 0,
priorsd_intercept = 2,
priormean_slope = 0.0,
priorsd_slope = 0.4,
prior_rho_min = 3,
prior_rho_prob = 0.01,
prior_sigma_max = 1,
prior_sigma_prob = 0.01,
prior_iideffect_sd_max = 0.05,
prior_iideffect_sd_prob = 0.01))
#> Fitting model. This may be slow.
Now that we have the fitted parameters from the model, we can predict the Leukemia incidence rate at fine scale (the scale of the covariate data) across New York. The predict function takes the model result, predicts the mean raster surface, and summarises N parameter draws, where N is set by the user (default 100). The uncertainty is summarised via the confidence interval set by the user (default 95% CI). | {"url":"http://ctan.mirror.garr.it/mirrors/CRAN/web/packages/disaggregation/vignettes/disaggregation.html","timestamp":"2024-11-05T00:38:52Z","content_type":"text/html","content_length":"380898","record_id":"<urn:uuid:131648d7-d2ec-4822-b341-12fafc237bc2>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00287.warc.gz"}
Intermolecular interactions
For the purpose of this discussion, only the simplest of interacting species will be considered, i.e., precisely two species, A and B. These can be monatomic, e.g. Hg, monatomic ions, e.g. [Cl]^-,
polyatomic molecules, e.g., aniline, C[6]H[5]NH[2], or polyatomic ions, e.g. hydrogen sulfate, [HSO[4]]^-. There are several types of intermolecular interactions involving A and B:
• "Covalent" - arising from quantum chemical contributions. Since intermolecular interactions imply the absence of covalent interactions (the distance between the interacting species is too large,
by definition) there are no covalent bonds, per se, but there are very small energy contributions resulting from the overlap of atomic orbitals. These contributions are small relative to the
other intermolecular interactions, and indeed become vanishingly small outside about two times the covalent distance, and can thus also be ignored in this discussion.
• Electrostatic - arising from the partial charge on each atom, a, in A interacting with the partial charge on each atom, b, in B. Electrostatic interactions decrease only slowly with distance, the
energy falling off as the reciprocal of the interatomic distance, i.e., as 1/(Rab). Although they can be large, electrostatic terms are simple to calculate and will not be considered further.
• Dispersion, otherwise known as van der Waals or VDW interactions, or London forces - arising from the instantaneous correlation of electrons. Like most quantum chemical theoretical methods,
NDDO-type semiempirical methods use molecular orbitals to solve the Hartree-Fock equations. This is an approximation to the correct wavefunction, a good approximation, but not perfect. The use of
M.O.s implies that the motion of the electrons is not correlated. Correlation of electrons is a time-dependent phenomenon, and results when one electron interacts with another electron. At each
instant of time, the two electrons will endeavor to avoid each other, because of electrostatic repulsion. Dispersion energies can be quite large for nearby atoms, but fall of as the sixth power
of the distance, i.e., as 1/(Rab)^6.
• Hydrogen bonds - a unique type of interaction that typically involves three atoms, with the middle atom being hydrogen. The commonest hydrogen bonds are of the type O-H - O and O-H - N. Hydrogen
bonds could be regarded as a special case of electrostatics, dispersion and covalent interactions, but because of their great importance, and because they are not, in practice, well reproduced by
these other terms, they are now added in post-hoc to the energy terms.
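The contrast between the slow 1/(Rab) decay of electrostatics and the rapid 1/(Rab)^6 decay of dispersion is easy to illustrate numerically. The sketch below is illustrative only: the unit prefactors are arbitrary, not fitted MOPAC parameters. Doubling the distance halves the electrostatic term while cutting dispersion by a factor of 64:

```python
def electrostatic(r, k=1.0):
    """Coulomb-like term: decays as 1/R (k is an arbitrary prefactor)."""
    return k / r

def dispersion(r, c6=1.0):
    """London dispersion term: decays as 1/R^6 (c6 is an arbitrary prefactor)."""
    return c6 / r**6

# Doubling the separation halves the electrostatic term but cuts
# the dispersion term by 2**6 = 64.
for r in (1.0, 2.0, 4.0):
    print(f"R = {r}: electrostatic = {electrostatic(r):.4f}, "
          f"dispersion = {dispersion(r):.6f}")
```

This is why dispersion matters greatly for nearby atoms yet becomes negligible at modest separations, while electrostatic interactions remain significant.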
Dispersion terms
Dispersion terms are particularly hard to calculate de novo. The strategy used in MOPAC is to use published benchmark calculations as a source of specific intermolecular interaction energies. In PM7, these are then used in parameterizing a simple function in the parameter optimization procedure used in developing a new method. In PM6-DH2 and PM6-DH+ the method described in "A Transferable H-bonding Correction For Semiempirical Quantum-Chemical Methods", Martin Korth, Michal Pitonak, Jan Rezac and Pavel Hobza, J. Chem. Theory Comput. 6, 344–352 (2010), is used. In PM6-D3, the method used is described in "A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu", S. Grimme, J. Antony, S. Ehrlich, H. Krieg, J. Chem. Phys. 132, 154104 (2010). Published reports indicate that the most accurate semiempirical method for predicting intermolecular interaction energies is PM6-D3H4. This method has not been included in MOPAC because, by the time all the other methods had been included, enthusiasm for yet another method had vanished (see also below).
Hydrogen bond terms
The energy correction due to a hydrogen bond is quite large, in the order of several kcal/mol. Despite it being so important, the functional forms of the hydrogen bond have been developed using
purely pragmatic reasoning rather than by deductive science. Thus it is known experimentally that the most important hydrogen bonds involve oxygen, then nitrogen, then other atoms and ions such as
sulfur and chloride; that the bonds are more-or-less linear; and that they depend on the local net charge in the region of the bond. Thus a hydrogen bond between two acetic acid molecules would be
weaker than between an acetic acid molecule and an acetate ion. The extra energy of a charged hydrogen bond is reflected in a reduced distance between the two non-hydrogen atoms in a hydrogen bond.
Several approaches have been proposed for modeling the hydrogen bond, with the best being the -DH2, -NH+, and -D3H4, with -D3H4 being the most accurate. The approach used in PM7 can best be described
as a mixture of -DH2 and -DH+ with a special correction for charged hydrogen bonds. The resulting approximation is by no means optimal, but does capture most of the phenomena associated with hydrogen
bonds. Because it is not optimal, it should not be regarded as definitive, in other words, when a better approximation is developed, the current approximation in PM7 should be replaced. Why was -D3H4
not used? Simply put, after implementing the earlier approximations, I ran out of enthusiasm to implement the final, best, approximation.
Approximations for modeling both dispersion and hydrogen bonding terms have been evolving rapidly. At the present time, 2013, a clear description of this topic appears to be appearing: Terms of the
type developed by Grimme, i.e., the D3 terms, represent the dispersion effect with very good accuracy, and should be used, without modification, in the development of all future methods. This had
several advantages, among these are: (A) Because the dispersion effect is completely separate from all other effects, it can be treated separately. (B) By not using any parameters for the dispersion
effect, the number of parameters in a new method is reduced - always a desirable objective. (C) By using a "black box" to represent dispersion effects, the potential of introducing distance-dependent
artifacts into a new method is reduced. Unfortunately, at the time PM7 was completed, this insight had not been developed.
The status of hydrogen bonding approximations is less clear. When PM6-DH2 was developed, a small theoretical error was introduced: the hydrogen bond energy was made a function of the partial charge
on the hydrogen atom. This weakened the electronic variational principle, because now the energy minimum depended on the post-SCF hydrogen bond partial charge. A consequence of this violation was
that the energy gradient norm no longer went to zero at the energy minimum, and as a result geometry optimization became ill-defined. The error was small, and for most systems could be neglected. The
most important practical effect was that geometry optimization became less efficient and as a result took more cycles, i.e., more CPU time.
The error in PM6-DH2 was corrected in PM6-DH+, by making the hydrogen bond energy independent of partial charge. But now a new error was introduced in PM6-DH+ where for some highly specific systems
there was a discontinuity in the second derivative of the energy with respect to geometry.
In PM7, both of the errors in PM6-DH2 and PM6-DH+ were removed, and the behavior of the hydrogen bond improved. In recent months, various faults in the shape of the hydrogen bond have been described.
These problems with the hydrogen bond illustrate the fact that approximations for the hydrogen bond are still evolving. The problem to be addressed is clear, it is only the approximation that
describes the hydrogen bond that is not clear. | {"url":"http://openmopac.net/Discussions/Intermolecular%20interactions.html","timestamp":"2024-11-03T04:25:07Z","content_type":"text/html","content_length":"9985","record_id":"<urn:uuid:c608b734-d656-4fe5-bb70-402c80500ee1>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00312.warc.gz"}
1980 4K CoCo Programming Challenge
Here are the current submissions for the 1980 4K Programming Challenge. Each entry must be able to run on the original 1980 4K Radio Shack TRS-80 Color Computer using only cassette recorder for
storage. I plan to go in and sort the entries by name, rather than group them by the author, but for now, the list is just sorted by the author’s last name.
• 2015/02/06 Update: Added Simon Jonassen’s 4K demos!
• 2015/02/13: Added Bob Swoger’s “Who is the greatest?” and Swoger/Mobley’s “Radio Horizon”.
Jim Gerrie (Canada)
Jim Gerrie has been submitting a variety of programs, some originally designed to run on the 4K Radio Shack MC-10. A few have had to be tweaked to work on the 4K CoCo since it provides less RAM to BASIC.
Jimm Gerrie’s 4K Night Blitz.
NEW: Night Blitz is an airplane game, reminiscent of Atari 2600’s Air Combat.
Jim Gerrie’s 4k Asteroid Storm
NEW: Asteroid Storm is sort of like Frogger or, better, Activision's Freeway. Meteors roll up the screen, and you have to navigate your spaceship (arrow) from the right side of the screen to the left… And there's no backing up!
Jim Gerrie: 4K Farfall
Farfall is a text-mode game based on John Linville’s Fahrfall.
Jim Gerrie’s 4K JimVaders
JimVaders is a text-mode Space Invaders style game, complete with a stationary mother ship that makes an appearance, and a few beeps along the way.
Jim Gerrie: 4K Lunar Lander
Lander is a rather fancy version of the classic Lunar Lander done with the 64×32 block graphics.
Jim Gerrie: 4K Tetris
Tetris is a version of the classic falling block game. It was too large to fit in 4K, so it is loaded in two chunks from two cassette files. Clever :)
Jim Gerrie: 4K 2048
G2048 is a 4K version of the current hit 2048 game (web-based and iPhone/Android). The original game is open source, and Jim used that as reference to make it play like the original.
Simon Jonassen (Denmark)
Simon has been cranking out some very interesting video and audio demos on the old CoCo 1/2s, and has decided to see what he can do to make them fit on a 4K machine. He has been working on a
multi-part 4K demo.
Rogelio Perea
Rogelio submits two entries — a maddening puzzle game, and a math type entry. He writes:
FROGS is a game of logic. I did this one years ago, first on a CoCo 3, based on a TRS-80 Model I/III BASIC program from a book Radio Shack sold. It had a couple of ECB embellishments that got stripped out for Color Basic; I found that some things I had done in the original conversion could be simplified, so I did. The plot here is to move all the blocks on the left (the donuts) to the right and the ones on the right to the left. The trick is to do it in 24 moves; the program will let you know upon finishing the swap, either by congratulating the player or by letting him/her know how many moves to take out to get a perfect score. Illegal moves do not count, and pressing F at either slot move prompt will forfeit the game and start over again.
SPIRALS is another adaptation, from a graphics book for the Model I/III. I did the port first for the MC-10 years ago, and moving it to Color Basic was more or less straightforward short of the lack of COSine on CB… back to the manual for the old derived-function listings on stuff I should have learned better in school. Enter an angle value and plot to either lines or points; that's all. There are a few interesting designs: with lines try 87, 90 & 122 degrees; with points try 61 & 170 degrees. I am still working on getting this particular program to do its graphics in the 64×64 mode (unsupported by CB *and* ECB but achievable through POKEs); it is still in the works – will let you know.
Nick Marentes (Australia)
Space Invaders on a 4K Color Basic CoCo? Is it possible to replicate the tension and excitement of the arcade original?
With just over 2K of free RAM to start with and a BASIC that doesn’t even allow editing of it’s own lines, it seemed doubtful that anything decent was possible under such constraints.
The challenge was set! An opportunity to relive those early computing days when BASIC programming was cool and large colored pixels were cutting edge! I set forth using techniques such as
multi-statement lines, embedded graphics, self modifying BASIC code and a total disregard for structured programming!
This is my attempt to squeeze a piece of arcade history into a limited programming environment and show what can be accomplished using standard 4K Color Basic (no machine language).
Let the invasion begin!
John Mark Mobley
Glenside CoCo Club’s John Mark Mobley contributes an interesting entry that solves the Magic Square puzzle in 4K.
The wikipedia states: “In recreational mathematics, a magic square is an arrangement of distinct numbers (i.e. each number is used once), usually integers, in a square grid, where the numbers in each
row, and in each column, and the numbers in the main and secondary diagonals, all add up to the same number.”
To make this possible, it actually loads in three different parts in series. Clever (and totally doable in 1980, per the rules). John Mark Mobley writes:
“This program is an automatic magic square puzzle solver. It solves a 3×3 magic square. It finds all 8 solutions. It generates a permutation of 9 taken 9 at a time and tests each permutation for a
magic square match. There are 362,880 permutations.
My program is divided into 3 parts:
1) MAGICSQR Introduction
2) MAGICSQ2 Analysis
3) MAGICSQ3 Print Results.
The analysis takes about 12 hours.“
(I will have to let it run overnight to see it for myself…)
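For comparison, the same brute-force search is trivial on modern hardware. This is a hypothetical re-implementation of the idea (not John's BASIC), checking all 362,880 permutations of 1–9 for the 3×3 magic-square property:

```python
from itertools import permutations

def is_magic(p):
    """p holds the digits 1..9 laid out row-major in a 3x3 grid."""
    lines = [p[0:3], p[3:6], p[6:9],                       # rows
             p[0::3], p[1::3], p[2::3],                    # columns
             (p[0], p[4], p[8]), (p[2], p[4], p[6])]       # diagonals
    # Magic iff every row, column, and diagonal has the same sum.
    return len({sum(line) for line in lines}) == 1

solutions = [p for p in permutations(range(1, 10)) if is_magic(p)]
print(len(solutions))  # 8 -- the rotations and reflections of one square
```

All eight solutions are rotations and reflections of a single arrangement, each with magic constant 15.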
John shared this story about the program’s creation:
My first attempt to write a permutation generator was to assign a seating chart for a class room of 40 students.
The idea was to have a computer do a permutation of 40 taken 40 at a time, and then test each permutation for a good fit. A good fit would separate friends that talk if too close, and enemies that fight if too close.
I gave up trying to write the program because the number of permutations was too great.
The number of permutations = 40! = 40*39*38*37*…5*4*3*2 = 8.1591528324789773434561126959612e+47.
Let’s suppose that an 8-core processor running at 10 GHz can solve 80,000,000,000 permutations per second, which is 8e10*60*60*24*365 = 2522880000000000000 permutations per year. Now let’s take 1000 computers; that gives us 2522880000000000000000 permutations per year. So it would take 1000 computers 323406298852064994904875091.00556 years to solve the problem, but by that time the students will have already graduated from high school.
It took my wife (the school teacher) only 30 minutes to come up with a seating chart.
So a permutation generator is not always a good approach to solving problems.
So I started looking for another problem that I could use a permutation generator to solve, and I came up with the magic square.
Gotta love it!
Bob Swoger
Who’s the greatest?
It is designed to show how the string compare functions/operators work. The program will have you input 2 strings and then it will compare them.
• It will display that 2 is greater than 1000.
• It will display that JOHN is greater than BOB.
• It will display that ROBERT is greater than JOHN
• It will display then SWOGER is greater than MOBLEY.
You may find the program to be insulting and you may want to recommend changes to make it more acceptable.
Bob Swoger and John Mark Mobley
You should always fear a 4K submission that comes with its own Wikipedia link:
This one is a team entry. John writes:
Bob Swoger and I worked on this together.
Bob wrote the code in VCC most likely using a DOS editor and using toolshed to save it to a DSK file. I took the DSK file and loaded it into XRoar and saved to cassette. Then I switched XRoar to
a 4k CoCo with Color BASIC mode and loaded the cassette. Then we discovered the 2^0.5 operator was missing and the SQR(2) function was missing. So I wrote a successive approximation SQR
subroutine and got it working.
Then Bob and I debugged the code.
You can run it assuming the transmitter antenna is on a 17,000-foot mountain and the receiver antenna is 4 feet tall.
It seems you could do some real work with a 4K machine after all. And the version submitted is actually V2. John added: “The original Radio Horizon program only did a 32-bit successive approximation
SQR subroutine. I updated the Radio Horizon program to do 64-bit successive approximation SQR subroutine.”
Can we say this is a “64-bit” program running on a 1980 4K CoCo? ;-)
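As a sketch of the arithmetic involved (my own hypothetical reconstruction, not Bob and John's BASIC): the radio horizon for an antenna h feet high is commonly approximated as √(2h) statute miles, and the square root can be taken by successive approximation when the dialect lacks SQR(), as their program does:

```python
def sa_sqrt(x, iters=64):
    """Successive-approximation (bisection) square root, standing in for
    a missing SQR(); the 64 halvings echo the 64-step routine above."""
    lo, hi = 0.0, max(1.0, x)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mid * mid > x:
            hi = mid
        else:
            lo = mid
    return lo

def radio_horizon_miles(height_ft):
    """Common rule of thumb: horizon distance ~ sqrt(2 * h) statute miles
    for an antenna h feet high."""
    return sa_sqrt(2.0 * height_ft)

# Combined range for a 17,000 ft transmitter and a 4 ft receiver.
total = radio_horizon_miles(17000) + radio_horizon_miles(4)
print(round(total))  # roughly 187 miles
```

Whether this matches the program's exact formula is an assumption; the √(2h) rule is just the standard textbook approximation for the problem described.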
Anyone else?
If you are planning on submitting, please let me know (alsplace@pobox.com) so I can add you to the list.
More to come…
10 thoughts on “1980 4K CoCo Programming Challenge”
1. dnistoryBen
Who won?
1. Allen Huffman Post author
No one yet — I still have half a dozen more entries to list, then I will get all the cassette images available to folks can download and check them out and vote.
2. Steve Strowbridge
Very cool!
3. Matteo Trevisan
i like very much this homepage coco and trs 80 based.
4. Brian Sturk
We’re these ever posted to try out?
1. Allen Huffman Post author
I don’t think so. I should revisit.
5. Brian Sturk
that would be great, thanks Allen!
1. Allen Huffman Post author
I found some of them, but will have to find time to convert them, if possible, to ASCII. Most seem to have been sent as .cas emulator files.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://subethasoftware.com/4k/","timestamp":"2024-11-03T02:54:13Z","content_type":"text/html","content_length":"113458","record_id":"<urn:uuid:1fc45f35-1de6-4748-97ef-23cb4228392f>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00351.warc.gz"} |
Understanding the Integral Quotient Rule
The integral quotient rule is a popular tool in mathematics, used to solve certain types of integration problems. It is a powerful method that allows you to quickly and easily calculate the integral
of a fraction, or quotient, of two functions. In this article, we will explore the concept of the integral quotient rule, the steps for solving integral quotient problems, examples of integral
quotient problems, their advantages and limitations, and some tips and common mistakes to avoid when using the rule.
What is the Integral Quotient Rule?
The integral quotient rule is an analytical process used to evaluate integrals of quotients (fractions) of two functions. It allows a mathematician to break down a complex problem into more
manageable pieces. Essentially, it states that the integral of a quotient is equal to the product of any constants multiplied by the integral of each function cubed.
The integral quotient rule is a powerful tool for solving integrals, as it allows for the integration of functions that would otherwise be difficult or impossible to integrate. It is also useful for
simplifying integrals that would otherwise be too complex to solve. Additionally, the integral quotient rule can be used to solve integrals with multiple variables, as it allows for the integration
of functions with multiple variables.
The Steps for Solving Integral Quotient Problems
Solving integral quotient problems involves the following steps:
• Calculate the integral of each function in the fraction.
• Multiply the result of step one with any constant terms (if any).
• Integrate the two functions to produce a result.
For example, if we have an integral of f(x)/g(x), where both f(x) and g(x) are functions, then we would evaluate the integral of each function individually, multiply the result by any constant terms
(if any), and then integrate the two functions to produce a result.
It is important to note that the order of operations is important when solving integral quotient problems. The integral of each function must be calculated first, followed by the multiplication of
any constant terms, and then the integration of the two functions. If the order of operations is not followed, the result may be incorrect.
Examples of Integral Quotient Problems
To better understand the integral quotient rule, let us look at some examples. Consider the following case:
• Find the integral of 2x^2 divided by 9x.
In this instance, f(x) = 2x^2, and g(x) = 9x. Using the quotient rule, we would:
• Integrate f(x) to get 2x^3/3.
• Integrate g(x) to get 9x^2/2.
• Multiply the result of step one with any constant terms (in this case, there is no constant; therefore we would skip this step).
• Integrate the two functions to get 2x^3/18.
Therefore, the solution to our example problem is 2x^3/18.
It is important to note that the integral quotient rule is not limited to the example given above. It can be used to solve a variety of problems, as long as the two functions being integrated are in
the form of a quotient. Additionally, the integral quotient rule can be used to solve problems with more than two functions, as long as the functions are in the form of a quotient.
Advantages of the Integral Quotient Rule
The main advantage of the integral quotient rule is that it simplifies complex problems by breaking them down into smaller, more manageable pieces. This can make it much easier to understand and
visualize what the solution looks like. Additionally, it can also reduce computation time when compared to other methods.
Limitations of the Integral Quotient Rule
The main limitation of the integral quotient rule is that it can only be used in certain types of integration problems. In particular, it is not applicable to integrals of products and quotients of
more than two functions. Additionally, it is not well-suited for more complicated problems due to its rigid structure.
Tips for Understanding the Integral Quotient Rule
• Start by breaking down the problem into smaller parts.
• Understand the structure and steps of the quotient rule.
• Always look for any terms or constants that need to be multiplied by the integral before integrating each function.
• Remember that in some cases, you may need to use other integration rules in addition to the quotient rule.
Common Mistakes to Avoid When Using the Integral Quotient Rule
• Not correctly identifying all terms or constant terms in the problem.
• Failing to integrate each component of the problem before integrating the entire result.
• Skipping steps or not understanding the order in which you should solve the problem.
• Not double-checking your answer for accuracy.
The Integral Quotient Rule is an important part of calculus and is used to quickly and easily solve certain types of integration problems. By understanding the concept and being aware of its
limitations, advantages and potential mistakes, anyone can use it correctly and efficiently. With practice and careful execution, you can become a master of this powerful tool.
Horizontal Projection Interpolation (HPI) Algorithm
If the main radiation is not in the horizontal plane (for example, if the antenna has an electrical or mechanical down tilt), the pattern computed with the bilinear interpolation (BI) and weighted
bilinear interpolation (WBI) algorithms becomes less accurate. Especially in these cases, the HPI algorithm should be used.
The HPI algorithm takes the gain of the horizontal pattern $G_H(\phi)$ as a basis and considers a correction term for the influence of the vertical pattern $G_V(\vartheta)$. Therefore, the gains $G_H(\phi)$ and $G_V(\vartheta)$ in the horizontal and vertical pattern, respectively, are taken and processed using the following formula:

$G(\phi, \vartheta) = G_H(\phi) - \left[ \frac{\pi - |\phi|}{\pi} \cdot \left( G_H(0) - G_V(\vartheta) \right) + \frac{|\phi|}{\pi} \cdot \left( G_H(\pi) - G_V(\pi - \vartheta) \right) \right]$
Hereby it is assumed that the horizontal and vertical patterns are two sections of the 3D antenna pattern. This means that the two following conditions are fulfilled:
• $G_H(0) = G_V(0)$ and $G_H(\pi) = G_V(\pi)$ in the case without electrical tilt, and
• $G_H(0) = G_V(\alpha)$ and $G_H(\pi) = G_V(\pi - \alpha)$ in the case of electrical tilt $\alpha$.
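Transcribed directly into code, the interpolation might look like the following sketch, where G_H and G_V are assumed to be callables returning gain in dB and phi lies in [-π, π]; names and conventions here are illustrative, not taken from the Altair implementation:

```python
import math

def hpi_gain(G_H, G_V, phi, theta):
    """Horizontal Projection Interpolation: estimate the 3D gain at
    azimuth phi (in [-pi, pi]) and angle theta from the horizontal
    pattern cut G_H and the vertical pattern cut G_V."""
    w = abs(phi) / math.pi        # weight of the back-direction term
    correction = ((1.0 - w) * (G_H(0.0) - G_V(theta))
                  + w * (G_H(math.pi) - G_V(math.pi - theta)))
    return G_H(phi) - correction

# Toy patterns satisfying G_H(0) == G_V(0) and G_H(pi) == G_V(pi):
G_H = lambda p: -abs(p)
G_V = lambda t: -abs(t)
```

With patterns that satisfy the consistency conditions above, the interpolated gain reduces exactly to the vertical cut at phi = 0 and phi = π, as the formula intends.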
Comparative evaluation of three infiltration models for runoff and erosion prediction in the Loess Plateau region of China

Research Article | November 08 2017

Zhuo Cheng (1), Bofu Yu (2), Suhua Fu (3), Gang Liu (1)

(1) State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing 100875, China
(2) Australian Rivers Institute and School of Engineering, Griffith University, Nathan 4111, Australia
(3) State Key Laboratory of Soil Erosion and Dryland Farming on the Loess Plateau, Institute of Soil and Water Conservation, Chinese Academy of Sciences, Yangling, Shaanxi 712100, China and Faculty of Geographical Science, Beijing Normal University, Beijing 100875, China

Hydrology Research (2018) 49 (5): 1484–1497.

Article history: December 15 2016

Citation: Zhuo Cheng, Bofu Yu, Suhua Fu, Gang Liu; Comparative evaluation of three infiltration models for runoff and erosion prediction in the Loess Plateau region of China. Hydrology Research 1 October 2018; 49 (5): 1484–1497. doi: https://doi.org/10.2166/nh.2017.003
Snub Dodecahedron – Mathematical Origami – Mathigon
Mathematical Origami
Platonic Solids
Platonic Solids are the most regular polyhedra: all faces are the same regular polygon, and they look the same at every vertex. The Greek philosopher Plato discovered that there are only five solids with these properties. He believed that they correspond to the four ancient Elements, Earth, Water, Air and Fire, as well as the Universe.
Archimedean Solids
Archimedean Solids, like the Platonic ones, consist of regular Polygons and look the same at every vertex. However the faces are multiple different regular polygons. There are 13 Archimedean Solids,
two of which are reflections of each other. Explore 3D models on Polypad…
Stars and Compounds
Beautiful Origami
Further Reading
An Introduction to Data Structures (I)
The concept of data structures is often dreaded and avoided by programmers at the beginner level. In software engineering, data structures provide solutions to a broad scope of problems, making an understanding of them a common requirement for senior programming roles at most top technology companies.
Data structures are specialised formats for storing, organising, managing, processing, and retrieving data for easy accessibility and modification. While there are different types of data structures, each one is universal across programming languages, differing only in syntax.
Why are Data Structures important?
A good understanding of data structures boosts a programmer’s ability to proffer detailed solutions to problems by sharing insights into individual usage of structures. The use of code and tools to
develop digital products requires storing sets of information for proper referencing and modification, hence its importance in software engineering.
Common examples of features where Data structures are used include “Facebook friend lists”, “undo-redo feature of a text editor”, “Document-Object Model of a browser”, “text auto-complete feature”,
among many other data features of a digital product.
Measuring The Efficiency of Data Structures
Determining the most suitable data structure to solve a problem requires measuring its speed and efficiency using the Big O Notation. The standard criteria for measuring data efficiency include a
data’s ability to access, search, insert, and delete elements in the data structures.
The Big O Notation
Big O notations are recorded using Time Complexity Equations. Contrary to what the name suggests, the time complexity is not a measure of the actual time it takes the computer to carry out the
function. It is actually a measure of the number of operations it takes to complete a function.
The time complexity equation of a function can be written as;
T(n) = O(x).
• Where T(n) is the time complexity.
• n is an integer representing the size or number of elements in the data structure.
• x is the number of operations required to completely execute the function on a data structure in respect to size n.
This equation is read as 'a time complexity O of x'. In most cases, the more elements a data structure holds, the more operations the computer needs to carry out a function such as searching through the data set. The worst-case scenario is used when measuring the time complexity of a data structure, i.e., the highest possible number of operations for a given n is considered. Confused? Check out the examples below.
Consider the types of time complexities below:
i. Constant Time Complexity
A graphical representation of a constant time complexity, where n is the size of the data
This means the number of operations carried out to execute the function is constant, irrespective of the size of the data structure. One element data set, and a quadrillion elements data set — the
same number of operations. This is the best and most efficient Time Complexity Equation and is represented as T(n) = O(1).
ii. Logarithmic Time Complexity
A graphical representation of a logarithmic time complexity
This means the number of operations carried out to execute the function increases logarithmically with the size of the data structure. It is slower than constant time complexity but still considered
a fast time complexity. It is represented as T(n) = O(log n).
iii. Linear Time Complexity
A graphical representation of a linear time complexity
This means as the size of the data structure increases, the number of operations carried out to execute the function increases at the same pace. This is considered a decent Time Complexity Equation
and is represented as T(n) = O(n).
iv. Quadratic Time complexity
A graphical representation of a quadratic time complexity, where n is the size of the data
This means the number of operations carried out to execute the function is squared, with an increase in the size of the data structure. This is considered an inefficient Time Complexity Equation and
is represented as T(n) = O(n²).
Some other time complexities include;
v. Log-Linear Time Complexity: T(n) = O(n log n).
vi. Exponential time: T(n) = O(c^n), where c is some constant.
vii. Factorial time: T(n) = O(n!).
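These growth rates become concrete if you count operations directly. The sketch below is illustrative only — the functions and data are made up for the example:

```python
def constant_time(items):
    """O(1): indexing does the same single operation for any n."""
    return items[0]

def linear_ops(items, target):
    """O(n) worst case: count comparisons in a linear search."""
    ops = 0
    for x in items:
        ops += 1
        if x == target:
            break
    return ops

def quadratic_ops(items):
    """O(n^2): count comparisons when checking every pair of elements."""
    ops = 0
    for a in items:
        for b in items:
            ops += 1
    return ops

data = list(range(100))
worst_linear = linear_ops(data, -1)   # target absent: n operations
pair_checks = quadratic_ops(data)     # n * n operations
```

Doubling n leaves the constant count unchanged, doubles the linear count, and quadruples the quadratic count — exactly the shapes the graphs above describe.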
The time complexity types detailed above apply directly to each type of data structure. In the second part of this article, the basic types of data structures and their individual time complexities are discussed.
Question ID - 153927 | SaraNextGen Top Answer
A wave pulse passing on a string with a speed of 40 cm/s in the negative
As velocity of a wave is constant, location of maximum after 5 s given by
Compute, Matrix, correct answer
I'd like to use Compute("[[1/2,1],[1,1]]") so that the fraction is visible in the correct answer field after submission. Unfortunately the correct answer is displayed as [[1/2,1],[1,1]] instead of as a matrix.
If I use Matrix([[1/2,1],[1,1]]), then the correct answer is displayed as a matrix but the fraction becomes 0.5.
How can I keep both the fraction and the matrix form in the correct answer field?
I only have this problem if the matrix is part of a MultiAnswer. If I use it regularly, then the Compute is displayed as a matrix as expected. So this is probably a bug for displaying Compute matrix
answers inside a MultiAnswer.
It would help if you could post a complete problem that exhibits the problem, so we don't have to make it up ourselves. A minimal example that doesn't include any extra material would be best.
For example, I can reproduce your results only when singleResult=>1 is used with the MultiAnswer object. That is important information that we would have had if you had provided a full example.
The issue is due to the fact that when the MultiAnswer object was written, the correct answer in the result table was not displayed in typeset form, but was always an answer string like what a
student would enter. The TeX version was added fairly recently, and MultiAnswer hasn't been updated to accommodate that. If you use singleResult => 0, it will format as you want it to, but the
single-result version doesn't generate the required TeX form.
I've attached an updated copy of parserMultiAnswer.pl that should take care of the problem.
Computational Social Science Institute
My research focuses on nonparametric statistics motivated by problems in causal inference and time-to-event analysis. In many scientific settings, it is impossible, unethical, or cost-prohibitive to
conduct a controlled experiment in which an exposure or treatment of interest is randomly assigned to units in a population. In such cases, researchers often turn to observational data, where the
mechanism assigning the exposure is unknown, to attempt to assess causal effects. Nonparametric estimation of the statistical parameters that result from identifying causal effects with observational data is often complicated. I use tools from classical nonparametric statistics, including kernel methods, as well as tools from modern statistical theory, including semiparametric efficiency theory and
empirical process theory, to develop and analyze nonparametric estimators of these parameters. A common feature of the estimators I develop is the ability to perform valid statistical inference while
using machine learning estimators of nuisance parameters. A particular focus of my research is continuous exposures, which arise in many scientific fields including vaccine trials, environmental
epidemiology, and the study of air pollution. My primary applied areas of interest are public health, epidemiology, and biomedicine.
A point source causes photoelectric effect from a small metal plate – Turito
A point source causes photoelectric effect from a small metal plate. Which of the following curves may represent the saturation photocurrent as a function of the distance between the source and the plate?
B. b
C. c
D. d
Introduction to evolutionary algorithms
Jorge Martinez
September 11, 2024
python algorithms
Evolutionary Algorithms (EA) are computer programs whose goal is to find the (near-)optimal solution to a problem among a set of posible solutions by mimicking natural selection.
Let’s dive a bit into previous definition!
Computer programs
Evolutionary algorithms are software routines. Therefore, they need to be coded as any computer program.
The reason why we make such distinction is because evolutionary algorithms are a set of a broader subject named Evolutionary Computation (EC), which is a field of Artificial intelligence (AI).
Evolutionary computation includes many other techniques. Some of the most popular are:
• Genetic Algorithms (GA)
• Differential Evolution (DE)
• Ant Colony Optimization (ACO)
Find the (near-)optimal solution
Evolutionary algorithms are metaheuristic or stochastic solvers. Let’s dive a bit more into these concepts:
• Metaheuristic. This term refers to algorithms whose goal is to find a solution that may not be the best but that it is close to the optimum one.
• Stochastic. It refers to a process or model that appears to vary in a random manner.
Thus, we can say that a genetic algorithm can search for a solution close to the optimum one by using some logic that involves randomness. How is this possible? More about it in future posts…
Set of possible solutions
The solution found by an evolutionary algorithm does not appear out of nowhere; we need to know it in advance. In fact, we need to feed the algorithm with a set of solutions to our problem. Let me
give an example:
Guess a number between 0 and 99.
You do not know the optimum answer, that is, the right number. However, you know that a possible solution can be any number within that range.
Despite this example being very simple, we can apply the same idea to any problem. The core concept here is to generate possible solutions in a smart way.
Generating initial solutions in a heuristic way for evolutionary algorithms can significantly improve their efficiency and speed up convergence to an optimal solution.
Of course, real use cases for evolutionary algorithms are not this simple. Some of them involve computing the optimum solution for a multi-constrained, multi-variable problem. In addition, the solution for them may not be analytic. Here is where evolutionary algorithms really shine.
Mimicking natural selection
The key point of evolutionary algorithms: emulate the rules of nature for the survival of the best individuals.
What if we could combine solutions to create better solutions? Do all solutions have a chance to “mate”? How can we do this? How can we evaluate the fitness of a solution? What happens if a solution is just no longer valid? What about mutation of solutions? Can we generate a new generation for our population of solutions?
All these questions will be answered in future posts. For the moment, just focus on the idea of combining solutions in a smart way, iteratively creating better solutions until an optimum one is found.
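As a taste of what is coming, the whole idea — generate candidates, evaluate fitness, select survivors, mutate — can be sketched for the "guess a number between 0 and 99" example in a few lines of Python. All names and parameter values here are illustrative choices, not a canonical algorithm:

```python
import random

def fitness(candidate, secret):
    """Higher is better: negative distance to the hidden optimum."""
    return -abs(candidate - secret)

def evolve(secret, pop_size=20, generations=1000, seed=0):
    """Minimal evolutionary loop for the 'guess a number between 0 and 99'
    example: keep the fittest half, then mutate the survivors."""
    rng = random.Random(seed)
    population = [rng.randrange(100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda c: fitness(c, secret), reverse=True)
        if population[0] == secret:               # optimum reached
            break
        survivors = population[: pop_size // 2]   # selection
        children = [min(99, max(0, c + rng.choice((-3, -1, 1, 3))))
                    for c in survivors]           # mutation
        population = survivors + children
    return max(population, key=lambda c: fitness(c, secret))
```

Because the best candidate always survives, the population can only improve, and random mutation eventually nudges it onto the secret number — randomness plus selection, exactly the "metaheuristic, stochastic" behaviour described above.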
UU Matter EMC Loop?
Hello. Recently I have reached what you could call "the end game", in my Tekkit world, and so I decided to do something meaningless and time-consuming. I decided to stockpile lots of UU-Matter. I
created a little factory which would condense dirt from materials I decided to put in and through a simple setup, it would go through my recycler, to a chest, to the mass fabricator. While doing
this, I noticed something, but before I begin I'll start off with something I found out.
If hooked up to an MFSU and with an unlimited supply of scrap, a mass fabricator would produce 1 UU-Matter after consuming 32-35 scrap. We can estimate that a stack of scrap would equal 2 UU-Matter.
Now what I found out is that 6 UU-Matter is enough to create 8 glowstone blocks, or 12,288 EMC in total. Keep this in mind as I move on.
Since dirt is 1 EMC, if we were to condense these 8 glowstone blocks, we would receive that much dirt. Now the recycler has a 1/8 chance of creating a scrap from an item, so theoretically we can
assume that every 8th recycled item or block would create scrap, so let's do the math: 12,288 / 8 = 1536. So from that dirt, we have acquired 1536 pieces of scrap. Now taking the knowledge we have
from before, knowing that 32 scrap is equal to 1 UU-Matter, we can divide 1536 by 32 so 1536 / 32 = 48. So we have now gone from 6 UU-Matter which we used to create the glowstone blocks to 48
UU-Matter. I have tried this out myself and have actually acquired 47 UU-Matter, but that is very close to 48, and the amount of scrap created is all about luck.
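The chain of estimates above is easy to check directly. The constants below are the post's own assumed rates (1/8 recycler yield, ~32 scrap per UU-Matter, 12,288 EMC from the 8 glowstone blocks), not verified game data:

```python
# All figures are the post's own estimates, not verified game constants.
GLOWSTONE_EMC_TOTAL = 12_288   # EMC of the 8 glowstone blocks (from 6 UU)
RECYCLER_YIELD = 1 / 8         # expected scrap per item recycled
SCRAP_PER_UU = 32              # scrap a mass fabricator eats per UU-Matter

dirt = GLOWSTONE_EMC_TOTAL          # 1 dirt = 1 EMC
scrap = dirt * RECYCLER_YIELD       # 12,288 / 8 = 1536 scrap
uu_matter = scrap / SCRAP_PER_UU    # 1536 / 32 = 48 UU-Matter
multiplier = uu_matter / 6          # 8x the 6 UU-Matter invested
```

So each pass multiplies the UU-Matter roughly eightfold in expectation — which is why the measured 47 instead of 48 is just recycler variance.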
This is by far not the best EMC farm there is, but I can only assume that not many people know about this, and I just wanted to point this out. A step you could do after acquiring the 48 UU-Matter is
condense it all into glowstone again and maybe divert 16 glowstone blocks back to the dirt and the rest can be condensed into diamonds and such. Thank you for your time.
Double Block
Logic number puzzle: Double Block.
│Download DoubleBlock desktop application (Windows 64bit) │Download DoubleBlock source code (Lazarus/Free Pascal)│
Description: Simple puzzle game with numbers to be placed onto a grid. The rules of the game are as follows:
1. Color exactly 2 fields in each row and each column of the N×N grid.
2. In each of the remaining fields, write a number between 1 and N-2.
3. In each row and each column, each number must be written exactly once.
4. The numbers at the left and the top indicate the sum of the numbers between the 2 colored fields in the corresponding row/column.
The game is a typical logic game: to find the correct solution of the puzzle, you have to explore the sum values and, by logical thinking (concluding from what you know that a given field must be a colored or a number field, or on the contrary, can't be so), determine which are the colored fields and which numbers have to be written to the other fields.
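The four rules translate naturally into a small solution checker. The sketch below assumes a hypothetical representation — a grid of integers where -1 marks a colored field — which is not how the Free Pascal program stores the board:

```python
COLORED = -1  # hypothetical marker for a colored field

def check_line(line, clue, n):
    """Validate one row or column of an N x N Double Block grid."""
    colored = [i for i, v in enumerate(line) if v == COLORED]
    if len(colored) != 2:                         # rule 1: exactly 2 colored
        return False
    numbers = [v for v in line if v != COLORED]
    if sorted(numbers) != list(range(1, n - 1)):  # rules 2-3: 1..N-2, once each
        return False
    a, b = colored
    return sum(line[a + 1 : b]) == clue           # rule 4: sum between blocks

def check_grid(grid, row_clues, col_clues):
    n = len(grid)
    rows_ok = all(check_line(grid[r], row_clues[r], n) for r in range(n))
    cols = [[grid[r][c] for r in range(n)] for c in range(n)]
    cols_ok = all(check_line(cols[c], col_clues[c], n) for c in range(n))
    return rows_ok and cols_ok
```

A checker like this is also the core of a brute-force solver: enumerate candidate grids and keep the one that passes.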
The game is played, using the mouse: Click the field, that you want to color or where you want to write a number, then click the Color button to color it or one of the 4 number buttons to write the
corresponding number. Use the Clear button to reset the actually selected field.
Change log:
version 1.0 (November 2020): Original program
version 1.1 (November 2022):
- number generation algorithm changed (lots faster now)
- "Show" button display bug corrected
Free Pascal features: Changing the color of shapes and the caption of static-texts during runtime. Using buttons to insert characters into a given control's caption. Two-dimensional arrays (classic
If you like this application, please support me and this website by signing my guestbook.
The Macroeconomic Potential of RWAs
Sibylline are becoming increasingly aware of the possibilities surrounding tokenising Real World Assets (RWAs). In the first instance, tokenisation of RWAs such as building NFTs into assets for exclusivity and stratified access is one possibility. However, those holding large amounts of RWAs not currently on the market can afford to be much more ambitious. This article focuses on the requirements
Scaling Stablecoins
Stablecoins are easy enough to understand in and of themselves. Just as one currency may be pegged to another, stablecoins are cryptographic tokens which have a value pegged to an underlying asset or
series thereof. Some are simply pegged to traditional currencies like the dollar, allowing a 1-1 exchange ratio for the consumer. Note that institutions may not treat them as 1-1 equivalent due to
differences in confidence, storage, regulations, risk appetite, and so on, but deviations will be small. Additional complexity can be added - some stablecoins could be pegged to stock indices,
commodity values or a basket combination thereof. The stablecoin thereafter has value both as a medium of exchange and as a store of confidence in the value it is pegged against.
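As a toy illustration of such a peg, the value of a basket-backed token is just a weighted sum of reference prices; every asset name and weight below is hypothetical:

```python
def basket_peg_value(weights, prices):
    """Quote-currency value of one token pegged to a weighted basket.
    weights: {asset: fraction of the peg}, summing to 1
    prices:  {asset: unit price in the quote currency}
    Asset names and numbers here are purely illustrative."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("basket weights must sum to 1")
    return sum(w * prices[asset] for asset, w in weights.items())

# A hypothetical peg: half fiat, the rest split across commodity references
peg = basket_peg_value(
    {"usd": 0.5, "gold_index": 0.3, "metal_x": 0.2},
    {"usd": 1.00, "gold_index": 2.00, "metal_x": 5.00},
)
```

The minters' real difficulty, as the article goes on to argue, is not this arithmetic but keeping confidence in the reference prices themselves.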
This allows those who mint stablecoins, presuming that they also hold the underlying asset, an unparalleled opportunity to develop their own influence and propagate their designs into the future. The
first challenge will be ensuring that the stablecoin is fungible in terms of some other currency, or that it has purchasing power in real things. This can be in terms of other currencies like the
dollar, or goods and services, in the way Bitcoin and Ethereum are increasingly accepted as payment directly to merchants. Ideally, the minters, or consortium thereof, would aim for both
simultaneously, and pursue whichever avenue posts the most returns more aggressively in-line with their broader strategy.
Financial Potential
This does not mean that the underlying asset is now off the table for further deployment. Certainly, one could use a certain amount of underlying commodities to produce cash equivalents, physical
representations of the token in question. That commodity could be combined with other, more abundant supplements if it is particularly rare and/or difficult to physically manipulate - a composite
coin of say 95% scrap metal, 5% value metal can still function as a Real World Token (RWT) for a stablecoin based on the value metal, just the same as one purely composed of the value metal, because
it is a token, rather than a lump commodity.
If a value metal were to have other applications, one could therefore collect money on it twice. Creating a stablecoin pegged to its value is the best of both worlds in that it is a store of value
like gold and Bitcoin, but also uses more sophisticated methods, and has immediate real world appeal. What remains of the value metal deposit not deployed in real world tokens could then be used in
any other applications, and sold to those who would so use it. It goes without saying that, if the minters of this stablecoin, were to control a large fraction of the global supply of the value
metal, integrating themselves into the supply networks using that value metal would afford them volumes of soft power and prestige, alongside that already gained from creating such a desirable
However, if the minters were to try and pursue both these options, exactly what to peg the stablecoin to becomes a more complex decision. If a significant fraction of their total supply of value
metal is being sold elsewhere, pegging to the global supply of the value metal in question becomes less stable, as the buyers cannot be controlled once they purchase, and may fail in their own
enterprise, deviate from contract, or produce other unexpected results. All these things would presumably influence the supply value, unless sales of the underlying value metal are so wide that any
one actor no longer matters on the demand side. Whilst this might be the case in retail and wide use metals, I cannot envision this easily being the case with the Rare Earth Metals which will likely
be used for stables in this way. It may instead make sense to peg the value of the stablecoin to an entity which holds:
• The initial ecosystem and blockchain on which the stablecoin is to be minted and deployed.
• The deposits of the valued metal in question.
• Contractual interest from those looking to acquire that valued metal.
This is a less subtle approach which would require some degree of interaction with conventional financial architecture, though likely on the supply side. This may in turn compromise anonymity to some
degree, if this is important. However, note the specific word, ‘hold’. There is no requirement for the entity in question to actually own the assets in question, or derive any final beneficial
expectation at law. The only requirements, should the minters wish to pursue this route, are that:
• The entity is an effective confluence node for all these derivations of value to intersect.
• Those holding positions in the entity have some way to derive benefit to their satisfaction from the entity (resulting trusts, debt and licensed intellectual property are all routes by which this
could be straightforwardly architected).
• That this architecture is not so effective that hostile intentions arise from those in Centralised Finance (CeFi) and elsewhere whose business model and interests are disrupted, at least, outside
tolerance or ability to ameliorate those intentions.
Enlightening Economics
One underrated point of Hayek’s economic vision was a system of competing private currencies, sometimes complementing, and sometimes set against, Von Mises’ bet on gold, now going digital.
Cryptocurrencies as currently formulated go part of the way towards fulfilling both these visions. Schumpeter further noted that radical breakthroughs will be followed by smaller clusters; the first
mover on this technology would be perfectly poised to make such a breakthrough which would go down in history.
The first breakthrough such a project could achieve is the reconciliation of Wieser and Von Mises. Wieser predicted that currencies would degrade, and separate from their subject matters - Von Mises
subsequently insisted that money managers would not strike in the interests of the citizenry and that ‘gold must be the currency of everyone’, so that they always had access to real value which they
could lever and depend on. Bitcoin is a store of value analogous to gold. But it will never achieve currency adoption and status on its current architecture. A Stablecoin could achieve this, whilst
maintaining the ability to store value like gold and Bitcoin.
The second is advancing the view of money as a comparative tool put across by Schumpeter, far ahead of his time. Schumpeter uniquely appreciated that money is still important where no transaction
takes place, because it allows for an easy comparison of the value of goods, and for individual agents to calibrate their future behaviour accordingly. There must be a common unit of account for
trade to take place, because meaningful comparison implies a standard to be measured against. All this is true at one point in time; when iterating forward into the future, it is best for this
standard to be as stable as possible. This is where stables come in. Government monetary and fiscal policy, whether they like it or not, manipulates the value of the standards of comparison their
citizens use nonstop. People’s individual tendencies to spend or save are exacerbated by inflationary events and decisions beyond their control. National fiat currencies may be widely used, and
preferable to barter, but they are also sub-optimal standards for measurement. Stablecoins, proof against direct government engineering and separate from political incentives to debase the currency for
the next big spending project could provide a far better standard to measure goods against as a general ledger.
The final question remains one inside the space - why use stables over a currency confined to a platform, and why are they preferable inside the Web3 space to other cryptos? Whilst larger firms may
have the power to profitably issue their own currency and accept only it as legal tender to better set up price discrimination in their favour, smaller firms will have to use common units of account.
Even then, it doesn’t automatically follow that larger firms will choose to issue their own currencies, as it may dissuade customers who can easily go to a competitor. Either way, once we’ve accepted
that there must be a common unit of transaction, stablecoins should be easily engineerable for low gas fees, based on the ability to specialise them for exchange, excluding development tools for
other purposes.
This way, they could be the reconciliation between Schumpeter and Hayek; Hayek famously argued for private currencies to compete in the same way cryptocurrencies do now, but Schumpeter worried that
winner takes all dynamics would lead to a net loss for consumers on account of exactly the price discrimination previously mentioned. With stablecoins, the remainder of the crypto ecosystem is free
to compete to build the most efficient currency possible, but, with a separately indexed price and value, consumers do not stand to lose from being individually price gouged based on their position,
buying habits and demands.
We feel that the intersection between RWA and Stablecoins is one of the most underrated points in the modern crypto ecosystem. Sentiment is currently elsewhere, chiefly focused on CBDCs, with most
commentariat attention migrating to AI for the last few months. There is, in this distraction, a chance for a consortium to be first to market on an entirely new class of currency and take not just
market share, but huge volumes of soft power by storm.
This would be best done by creating a stablecoin ultimately pegged against, if not a valuable underlying asset, an entity in a position to corner the supply of that asset and monetise its supply. Two
for the price of one. This will achieve a tripartite offensive: first into Web3 by creating a universal currency and store of value, which also has an intuitive appeal current cryptos lack. Next,
into whatever markets the underlying asset fuels, building the entity and its holders huge soft power, alongside a real-world ecosystem to deploy the assets their stable is backed against. Finally,
the intersection between that supply and Web3 will, correctly managed, produce a fusion of underlying assets and alternative enterprise, woven under a financial umbrella, which could ground the
expansion of that stablecoin into ground normally occupied by CeFi. When it does so, it will not be as capital vulnerable as Bitcoin or Ethereum, in financial or social terms, because of the huge
volume of underlying assets and dependent processes.
An unprecedented opportunity awaits. Qui audet adipiscitur. | {"url":"https://thoughts.sibylline.xyz/the-macroeconomic-potential-of-rwas/","timestamp":"2024-11-14T00:36:26Z","content_type":"text/html","content_length":"28335","record_id":"<urn:uuid:63b902f0-2e58-41dc-9b97-275e1ecdc2bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00638.warc.gz"} |
Excel Formula for Calculating Percentage in Python
In this tutorial, we will learn how to write an Excel formula in Python that calculates a percentage based on certain conditions. Specifically, we will focus on calculating 0.6% of the value in cell
G7, but only if the value in cell F7 is equal to 'H'. We will use the IF function in Excel to perform this calculation. This tutorial will provide a step-by-step explanation of the formula and
provide examples to illustrate its usage.
To calculate the percentage, we will use the formula =IF(F7="H", G7*0.006, 0). This formula checks if the value in cell F7 is equal to 'H'. If it is, the formula multiplies the value in cell G7 by
0.006, which is equivalent to 0.6%. If the value in cell F7 is not equal to 'H', the formula returns 0.
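Since the tutorial frames the task in Python, the same conditional can be sketched as a plain Python function (the function name `commission_fee` and the sample values are my own illustrations, not part of the original sheet):

```python
def commission_fee(f_value, g_value):
    """Python equivalent of the Excel formula =IF(F7="H", G7*0.006, 0):
    return 0.6% of g_value only when f_value equals 'H', otherwise 0."""
    return g_value * 0.006 if f_value == "H" else 0

print(commission_fee("H", 100))  # 0.6% of 100
print(commission_fee("A", 100))  # condition not met, returns 0
```

The conditional expression plays the role of Excel's `IF`, with the multiplication by 0.006 standing in for the 0.6% calculation.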
Let's consider an example to understand how this formula works. Suppose we have the following data in cells F7 and G7:

| F | G |
|---|---|
| H | 100 |

In this case, the formula =IF(F7="H", G7*0.006, 0) would return the value 0.6, which is 0.6% of 100. On the other hand, if the value in cell F7 is not equal to 'H', the formula would return 0. For example:

| F | G |
|---|---|
| A | 100 |

In this case, the formula =IF(F7="H", G7*0.006, 0) would return the value 0, as the value in cell F7 is not equal to 'H'.
By following this tutorial, you will be able to write an Excel formula in Python that calculates a percentage based on a specific condition. This can be useful in various data analysis and
manipulation tasks. Let's get started!
An Excel formula
Formula Explanation
This formula uses the IF function to check if the value in cell F7 is equal to "H". If it is, the formula calculates 0.6% of the value in cell G7. If it is not, the formula returns 0.
Step-by-step explanation
1. The IF function is used to check if the value in cell F7 is equal to "H". If it is, the formula proceeds to the next step. If it is not, the formula returns 0.
2. If the value in cell F7 is equal to "H", the formula multiplies the value in cell G7 by 0.006, which is equivalent to 0.6%.
3. The result of the calculation is returned as the result of the formula.
For example, if we have the following data in cells F7 and G7:
| F | G |
|---|---|
| H | 100 |
The formula =IF(F7="H", G7*0.006, 0) would return the value 0.6, which is 0.6% of 100.
If the value in cell F7 is not equal to "H", the formula would return 0. For example:
| F | G |
|---|---|
| A | 100 |
The formula =IF(F7="H", G7*0.006, 0) would return the value 0, as the value in cell F7 is not equal to "H". | {"url":"https://codepal.ai/excel-formula-generator/query/CC2PYvMD/excel-formula-calculate-percentage-g7-f7","timestamp":"2024-11-03T16:16:06Z","content_type":"text/html","content_length":"92369","record_id":"<urn:uuid:69d37877-ec50-420c-a9e4-6f8810f8f576>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00105.warc.gz"} |
How To Calculate Volume In A Wire
Even though you can bend and twist it into various shapes, a wire is basically a cylinder. It has a circular cross-section with a specific radius and has a particular length. That's all you need to
calculate its volume, using the standard expression:
\(V=\pi r^2 L\)
where "r" is the wire radius and "L" is its length. Since diameter (d) is more often mentioned in the wire specifications than radius, you can rewrite this equality in terms of this quantity.
Remembering that radius is half of diameter, the expression becomes:
\(V=\frac{\pi d^2 L}{4}\)
Keep Units Consistent
The diameter of a wire is orders of magnitude smaller than its length in most cases. You'll probably want to measure the diameter in inches or centimeters while you measure the length in feet or
meters. Remember to convert your units before calculating volume, or the calculation will be meaningless. It's usually better to convert the length to the units you used to measure diameter rather
than the other way around. This produces a large number for length, but it's easier to work with than the extremely small number you'll get for diameter if you convert it to meters or feet.
Sample Calculations
1. What is the volume of a 2-foot length of 12-gauge electrical wire?
Looking up the diameter of 12-gauge wire in a table, you find it to be 0.081 inches. You now have enough information to calculate the wire volume. First convert the length to inches: 2 feet = 24
inches. Now use the appropriate equation:
\(V=\frac{\pi d^2 L}{4}=\frac{\pi (0.081)^2 24}{4}=0.124\text{ in}^3\)
2. An electrician has 5 cubic centimeters of space left in an electrical box. Can he fit a 1-foot length of 4-gauge wire in the box?
The diameter of 4-gauge wire is 5.19 millimeters. That's 0.519 centimeters. Simplify the calculation by using the wire radius, which is half the diameter. The radius is 0.2595 centimeters. The length
of the wire is 1 foot = 12 inches = (12 x 2.54) = 30.48 centimeters. The volume of the wire is given by:
\(V=\pi r^2 L=\pi (0.2595)^2 30.48=6.45\text{ cm}^3\)
The electrician doesn't have enough room in the box to install the wire. He either needs to use smaller wire, if codes allow, or a bigger box.
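Both sample calculations can be checked with a short Python helper (a sketch; the function name is my own, and the caller must supply diameter and length in the same unit):

```python
import math

def wire_volume(diameter, length):
    """Volume of a cylindrical wire: V = pi * d^2 * L / 4.
    diameter and length must share a unit; the result is in that unit cubed."""
    return math.pi * diameter ** 2 * length / 4

# 2 ft of 12-gauge wire: d = 0.081 in, L = 24 in -> about 0.124 in^3
v1 = wire_volume(0.081, 24)
# 1 ft of 4-gauge wire: d = 0.519 cm, L = 30.48 cm -> about 6.45 cm^3
v2 = wire_volume(0.519, 30.48)
print(round(v1, 3), round(v2, 2))
```

Converting the length to the diameter's unit before calling the function is exactly the consistency step described above.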
Cite This Article
Deziel, Chris. "How To Calculate Volume In A Wire" sciencing.com, https://www.sciencing.com/calculate-volume-wire-5968720/. 5 December 2020.
Deziel, Chris. How To Calculate Volume In A Wire last modified March 24, 2022. https://www.sciencing.com/calculate-volume-wire-5968720/ | {"url":"https://www.sciencing.com:443/calculate-volume-wire-5968720/","timestamp":"2024-11-07T09:49:08Z","content_type":"application/xhtml+xml","content_length":"71011","record_id":"<urn:uuid:3ba8c1f1-60b9-49b0-b145-3f89d2d886cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00657.warc.gz"} |
Lesson 14
Alternate Interior Angles
Problem 1
Use the diagram to find the measure of each angle.
1. \(m\angle ABC\)
2. \(m\angle EBD\)
3. \(m\angle ABE\)
Problem 2
Lines \(k\) and \(\ell\) are parallel, and the measure of angle \(ABC\) is 19 degrees.
1. Explain why the measure of angle \(ECF\) is 19 degrees. If you get stuck, consider translating line \(\ell\) by moving \(B\) to \(C\).
2. What is the measure of angle \(BCD\)? Explain.
Problem 3
The diagram shows three lines with some marked angle measures.
Find the missing angle measures marked with question marks.
Problem 4
Lines \(s\) and \(t\) are parallel. Find the value of \(x\).
Problem 5
The two figures are scaled copies of each other.
1. What is the scale factor that takes Figure 1 to Figure 2?
2. What is the scale factor that takes Figure 2 to Figure 1? | {"url":"https://im.kendallhunt.com/MS/teachers/3/1/14/practice.html","timestamp":"2024-11-12T08:31:35Z","content_type":"text/html","content_length":"83766","record_id":"<urn:uuid:9970350c-c158-429a-976d-823b16e180c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00175.warc.gz"} |
Graph Theoretical Problems in Life Sciences
A common task in natural sciences is to describe, characterize, and infer relations between discrete objects. A set of relations E on a set of objects V can naturally be expressed as a graph G = (V, E). It is therefore often convenient to formalize problems in natural sciences as graph theoretical problems. In this thesis we will examine a number of problems found in life sciences in particular,
and show how to use graph theoretical concepts to formalize and solve the presented problems. The content of the thesis is a collection of papers all solving separate problems that are relevant to
biology or biochemistry.
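As a minimal illustration of the G = (V, E) formalization used throughout the thesis (the gene names and relations here are hypothetical, chosen only to mirror the biological setting):

```python
# Hypothetical objects (genes) and pairwise relations (e.g. estimated orthology)
V = {"geneA", "geneB", "geneC"}
E = {("geneA", "geneB"), ("geneB", "geneC")}

# Adjacency-list view of the undirected graph G = (V, E)
adj = {v: set() for v in V}
for u, w in E:
    adj[u].add(w)
    adj[w].add(u)
print(sorted(adj["geneB"]))  # geneB is related to both other genes
```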
The first paper examines problems found in self-assembling protein design. Designing polypeptides, composed of concatenated coiled coil units, to fold into polyhedra turns out to be intimately
related to the concept of 1-face embeddings in graph topology. We show that 1-face embeddings can be canonicalized in linear time and present algorithms to enumerate pairwise non-isomorphic 1-face
embeddings in orientable surfaces.
The second and third paper examine problems found in evolutionary biology. In particular, they focus on inferring gene and species trees directly from sequence data without any a priori knowledge of
the trees topology. The second paper characterize when gene trees can be inferred from estimates of orthology, paralogy and xenology relations when only partial information is available. Using this
characterization an algorithm is presented that constructs a gene tree consistent with the estimates in polynomial time, if one exists. The shown algorithm is used to experimentally show that gene
trees can be accurately inferred even in the case that only 20% of the relations are known. The third paper explores how to reconcile a gene tree with a species tree in a biologically feasible way,
when the events of the gene tree are known. Biologically feasible reconciliations are characterized using only the topology of the gene and species tree. Using this characterization an algorithm is
shown that constructs a biologically feasible reconciliation in polynomial time, if one exists.
The fourth and fifth paper are concerned with with the analysis of automatically generated reaction networks. The fourth paper introduces an algorithm to predict thermodynamic properties of compounds
in a chemistry. The algorithm is based on the well known group contribution methods and will automatically infer functional groups based on common structural motifs found in a set of sampled
compounds. It is shown experimentally that the algorithm can be used to accurately predict a variety of molecular properties such as normal boiling point, Gibbs free energy, and the minimum free
energy of RNA secondary structures. The fifth and final paper presents a framework to track atoms through reaction networks generated by a graph grammar. Using concepts found in semigroup theory, the
paper defines the characteristic monoid of a reaction network. It goes on to show how natural subsystems of a reaction network organically emerge from the right Cayley graph of said monoid. The
applicability of the framework is proven by applying it to the design of isotopic labeling experiments as well as to the analysis of the TCA cycle.
Note re. dissertation
Print copy of the thesis is restricted to reference use in the library.
• Flamm, C., Hellmuth, M., Merkle, D., Nojgaard, N. & Stadler, P. F. IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 19, 1 ed., IEEE, p. 429-442. (Peer-reviewed article in proceedings.)
• Hellmuth, M., Merkle, D. & Nøjgaard, N. In Cai, Z., Mandoiu, I., Narasimhan, G., Skums, P. & Guo, X. (eds.), Proceedings of Bioinformatics Research and Applications: 16th International Symposium, ISBRA 2020, p. 406-415. (Lecture Notes in Computer Science; No. 12304.) (Peer-reviewed article in proceedings.)
• Hellmuth, M., Knudsen, A. S., Kotrbík, M., Merkle, D. & Nøjgaard, N. In Venkatasubramanian, S. & Pagh, R. (eds.), 2018 Proceedings of the 20th Workshop on Algorithm Engineering and Experiments, ALENEX 2018, Society for Industrial and Applied Mathematics, p. 154-168. (Peer-reviewed article in proceedings.) | {"url":"https://portal.findresearcher.sdu.dk/en/publications/graph-theoretical-problems-in-life-sciences","timestamp":"2024-11-13T08:35:45Z","content_type":"text/html","content_length":"83979","record_id":"<urn:uuid:62e5f425-a468-4c8d-9eab-2ee2079aba43>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00844.warc.gz"}
Quantum Computing
Quantum Computing (Neptun course code: qtcomp1u0_m21vx)
2021 Fall course at the Eötvös Loránd University, Budapest
The course is organized as part of the QTEdu Master pilot and the material is based on Ronald de Wolf's lectures. Ronald's lecture notes and course description:
Today's computers---both in theory (Turing machines) and practice (PCs and smart phones)---are based on classical physics. However, modern quantum physics tells us that the world behaves quite
differently. A quantum system can be in a superposition of many different states at the same time, and can exhibit interference effects during the course of its evolution. Moreover, spatially
separated quantum systems may be entangled with each other and operations may have "non-local" effects because of this. Quantum computation is the field that investigates the computational power and
other properties of computers based on quantum-mechanical principles. Its main building block is the qubit which, unlike classical bits, can take both values 0 and 1 at the same time, and hence
affords a certain kind of parallelism. The laws of quantum mechanics constrain how we can perform computational operations on these qubits, and thus determine how efficiently we can solve a certain
computational problem. Quantum computers generalize classical ones and hence are at least as efficient. However, the real aim is to find computational problems where a quantum computer is much more
efficient than classical computers. For example, Peter Shor in 1994 found a quantum algorithm that can efficiently factor large integers into their prime factors. This problem is generally believed
to take exponential time on even the best classical computers, and its assumed hardness forms the basis of much of modern cryptography (particularly the widespread RSA system). Shor's algorithm
breaks all such cryptography. A second important quantum algorithm is Grover's search algorithm, which searches through an unordered search space quadratically faster than is possible classically. In
addition to such algorithms, there is a plethora of other applications: quantum cryptography, quantum communication, simulation of physical systems, and many others. The course is taught from a
mathematical and theoretical computer science perspective, but should be accessible for physicists as well.
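To make the qubit picture above concrete, here is a small stand-alone sketch (my own illustration, not part of the course materials) of a single qubit under the Hadamard gate, showing both superposition and interference:

```python
import math

# A qubit state is a pair of amplitudes (a, b) for |0> and |1>, with |a|^2 + |b|^2 = 1.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

ket0 = (1.0, 0.0)
plus = hadamard(ket0)                      # equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in plus]    # Born rule: measurement probabilities
back = hadamard(plus)                      # H applied twice: interference restores |0>
print(probs, back)
```

Measuring `plus` yields 0 or 1 with probability 1/2 each, while applying the gate a second time cancels the |1> amplitude, which is the interference effect the lecture notes develop formally.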
Prerequisites: Familiarity with basic linear algebra, probability theory, discrete math, algorithms, all at the level of a first Bachelor's course. Also general mathematical maturity: knowing how to
write down a proof properly and completely.
The course is taught in English. There is a two-hour lecture per week scheduled on Tuesdays from 8:20 to 9:50 in the János Hunfalvy lecture hall (on the ground floor of the Southern Block of ELTE's Lágymányos Campus), and there is a one-hour exercise class scheduled on Mondays from 9:15 to 10:00 in the Pál Erdős lecture hall (LD-00-718, in the basement of the Southern Block of ELTE's Lágymányos Campus). (You need to sign up for both the lecture and exercise classes in Neptun.)
There will be graded homework exercises for each week. The final grade will come from a weighted combination of the homeworks (1/3) and the final exam (2/3). The weekly homework is due before the
start of the next exercise class. You can either send the solutions electronically as a pdf file or hand in the written solution at the start of the exercise class. When computing the average grade
for the homeworks, the two lowest scores will be dropped. This is also meant to cover cases of illness etc.; in general no extension is allowed for homework submission. Cooperation among students is
allowed, but everyone has to hand in their own solution set in their own words. Do not share files before the homework deadline, and never put the solutions online. Plagiarism will not be tolerated.
Note that Appendix C of Ronald's lecture notes has hints for some of the exercises, indicated by (H). If the hint gives you some facts (for instance that there exists an efficient classical algorithm for testing if a given number is prime) then you can use these facts in your answer without proving/deriving these facts themselves.
1. Tuesday September 7, 8:20-09:50 (room LD-0-820): Lecture -- Introduction (Chapter 1 of the lecture notes) Note: Make sure you know the material in Appendices A and B of the lecture notes before
next week's lecture! Thursday September 9, 15:15-16:00 (room LD-0-817): Exercise class
2. Monday September 13, 9:15-10:00 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 4, 6, 9, 11 of Chapter 1 Tuesday September 14, 8:20-09:50 (room
LD-0-820): Lecture -- The circuit model of quantum computation & the Deutsch-Jozsa algorithm (Chapter 2)
3. Tuesday September 21, 8:20-09:50 (room LD-0-820): Lecture -- Simon's algorithm (Chapter 3) -- homework due at the beginning of the class: Exercises 4, 5, 8 of Chapter 2 Thursday September 23,
15:15-16:00 (room LD-0-820): Exercise class
4. Monday September 27, 9:15-10:00 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 1, 3, 4 of Chapter 3 Tuesday September 28, 8:20-09:50 (room LD-0-820):
Lecture -- Quantum Fourier transform (Chapter 4)
5. Monday October 4, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 1, 3, 4 of Chapter 4 Tuesday October 5, 8:20-09:50 (room LD-0-820): Lecture
-- Shor's factoring algorithm (Chapter 5)
6. Monday October 11, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 2, 3 of Chapter 5 Tuesday October 12, 8:20-09:50 (room LD-0-820): Lecture
-- Hidden subgroup problem (Chapter 6)
7. Monday October 18, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 2, 3, 4 of Chapter 6 Tuesday October 19, 8:20-09:50 (room LD-0-820):
Lecture -- Grover's search algorithm and quantum walks (Chapter 7-8) Monday-Tuesday October 25-26, No Class (Fall break)
8. Tuesday November 2, 8:20-09:50 (room LD-0-820): Lecture -- Hamiltonian simulation, HHL, and QSVT (Chapter 9-10) -- homework due at the beginning of the class: Homeworks Nr.7
9. Monday November 8, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Homeworks Nr.8 Tuesday November 9, 8:20-09:50 (room LD-0-820): Lecture -- Quantum
query lower bounds (Chapter 11)
10. Monday November 15, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 3, 5, 7, 8 of Chapter 11 Tuesday November 16, 8:20-09:50 (room LD-0-820):
Lecture -- Quantum complexity theory (Chapter 12)
11. Monday November 22, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 2, 3 of Chapter 12. Bonus exercise: Homeworks Nr.10 Tuesday November 23,
8:20-09:50 (room LD-0-820): Lecture -- Quantum cryptography (Chapter 14.1 & 17)
12. Monday November 29, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 3, 4, 6, 7 of Chapter 17. Tuesday November 30, 8:20-09:50 (room LD-0-820):
Lecture -- Entanglement and non-locality (Chapter 16)
13. Monday December 6, 9:10-09:55 (room LD-00-718): Exercise class -- homework due at the beginning of the class: Exercises 1, 5, 6 of Chapter 16. Tuesday December 7, 8:20-09:50 (room LD-0-820):
Lecture -- Quantum error correction (Chapter 18) | {"url":"https://gilyen.hu/teaching/QC_2021.html","timestamp":"2024-11-11T06:35:05Z","content_type":"text/html","content_length":"10398","record_id":"<urn:uuid:4671c4d3-d13c-4002-8e66-aefa84aa53d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00837.warc.gz"} |
11.4: Goodness-of-Fit Test
In this type of hypothesis test, you determine whether the data "fit" a particular distribution or not. For example, you may suspect your unknown data fit a binomial distribution. You use a
chi-square test (meaning the distribution for the hypothesis test is chi-square) to determine if there is a fit or not. The null and the alternative hypotheses for this test may be written in
sentences or may be stated as equations or inequalities.
The test statistic for a goodness-of-fit test is:
\[\sum_{k} \frac{(O-E)^{2}}{E}\nonumber\]
• \(O\) = observed values (data)
• \(E\) = expected values (from theory)
• \(k\) = the number of different data cells or categories
The observed values are the data values and the expected values are the values you would expect to get if the null hypothesis were true. There are \(k\) terms of the form \(\frac{(O-E)^{2}}{E}\), one for each category.
The number of degrees of freedom is \(df\) = (number of categories – 1).
The goodness-of-fit test is almost always right-tailed. If the observed values and the corresponding expected values are not close to each other, then the test statistic can get very large and will
be way out in the right tail of the chi-square curve.
The number of expected values inside each cell needs to be at least five in order to use this test.
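The statistic and degrees-of-freedom rules above can be sketched in a few lines. The die-rolling data below are an illustrative example, not from the text:

```python
# Goodness-of-fit test statistic: sum over the k categories of (O - E)^2 / E.

def chi_square_gof(observed, expected):
    """Return the chi-square goodness-of-fit statistic and degrees of freedom."""
    if len(observed) != len(expected):
        raise ValueError("observed and expected must have the same length")
    statistic = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    df = len(observed) - 1  # degrees of freedom = number of categories - 1
    return statistic, df

# Example: a fair six-sided die rolled 60 times, so 10 expected per face.
stat, df = chi_square_gof([8, 12, 9, 11, 10, 10], [10] * 6)
print(stat, df)
```

A large statistic relative to the right-tail critical value of the \(\chi^2\) distribution with `df` degrees of freedom would lead to rejecting the hypothesized distribution.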
Absenteeism of college students from math classes is a major concern to math instructors because missing class appears to increase the drop rate. Suppose that a study was done to determine if the
actual student absenteeism rate follows faculty perception. The faculty expected that a group of 100 students would miss class according to Table \(\PageIndex{1}\).
Table \(\PageIndex{1}\)
Number of absences per term Expected number of students
0–2 50
3–5 30
6–8 12
9–11 6
12+ 2
A random survey across all mathematics courses was then done to determine the actual number (observed) of absences in a course. The chart in Table \(\PageIndex{2}\) displays the results of that survey.
Table \(\PageIndex{2}\)
Number of absences per term Actual number of students
0–2 35
3–5 40
6–8 20
9–11 1
12+ 4
Determine the null and alternative hypotheses needed to conduct a goodness-of-fit test.
\(\bf{H_0}\): Student absenteeism fits faculty perception.
The alternative hypothesis is the opposite of the null hypothesis.
\(\bf{H_a}\): Student absenteeism does not fit faculty perception.
a. Can you use the information as it appears in the charts to conduct the goodness-of-fit test?
Solution 11.4
a. No. Notice that the expected number of absences for the "12+" entry is less than five (it is two). Combine that group with the "9–11" group to create new tables where the number of students
for each entry are at least five. The new results are in Table \(\PageIndex{3}\) and Table \(\PageIndex{4}\).
Number of absences per term Expected number of students
0–2 50
3–5 30
6–8 12
9+ 8
Table \(\PageIndex{3}\)
Table \(\PageIndex{4}\)
Number of absences per term Actual number of students
0–2 35
3–5 40
6–8 20
9+ 5
b. What is the number of degrees of freedom (\(df\))?
Solution 11.4
b. There are four "cells" or categories in each of the new tables.
\(d f=\text { number of cells }-1=4-1=3\)
A factory manager needs to understand how many products are defective versus how many are produced. The number of expected defects is listed in Table \(\PageIndex{5}\).
Table \(\PageIndex{5}\)
Number produced Number defective
0–100 5
101–200 6
201–300 7
301–400 8
401–500 10
A random sample was taken to determine the actual number of defects. Table \(\PageIndex{6}\) shows the results of the survey.
Table \(\PageIndex{6}\)
Number produced Number defective
0–100 5
101–200 7
201–300 8
301–400 9
401–500 11
State the null and alternative hypotheses needed to conduct a goodness-of-fit test, and state the degrees of freedom.
Employers want to know which days of the week employees are absent in a five-day work week. Most employers would like to believe that employees are absent equally during the week. Suppose a random
sample of 60 managers were asked on which day of the week they had the highest number of employee absences. The results were distributed as in Table \(\PageIndex{7}\). For the population of
employees, do the days for the highest number of absences occur with equal frequencies during a five-day work week? Test at a 5% significance level.
Table \(\PageIndex{7}\) Day of the Week Employees were Most Absent
Monday Tuesday Wednesday Thursday Friday
Number of absences 15 12 9 9 15
Solution 11.5
The null and alternative hypotheses are:
□ \(H_0\): The absent days occur with equal frequencies, that is, they fit a uniform distribution.
□ \(H_a\): The absent days occur with unequal frequencies, that is, they do not fit a uniform distribution.
If the absent days occur with equal frequencies, then, out of 60 absent days (the total in the sample: \(15 + 12 + 9 + 9 + 15 = 60\)), there would be 12 absences on Monday, 12 on Tuesday, 12 on
Wednesday, 12 on Thursday, and 12 on Friday. These numbers are the expected (\(E\)) values. The values in the table are the observed (\(O\)) values or data.
This time, calculate the \(\chi^2\) test statistic by hand. Make a chart with the following headings and fill in the columns:
□ Expected (\(E\)) values \((12, 12, 12, 12, 12)\)
□ Observed (\(O\)) values \((15, 12, 9, 9, 15)\)
□ \((O – E)\)
□ \((O – E)^2\)
□ \(\frac{(O-E)^{2}}{E}\)
Now add (sum) the last column. The sum is three. This is the \(\chi^2\) test statistic.
The calculated test statistic is 3 and the critical value of the \(\chi^2\) distribution with 4 degrees of freedom at the 0.05 level of significance is 9.48. This value is found in the \(\chi^2\) table in the 0.05 column on the row for 4 degrees of freedom.
\(\text{The degrees of freedom are the number of cells }– 1 = 5 – 1 = 4\)
Next, complete a graph like the following one with the proper labeling and shading. (You should shade the right tail.)
Figure \(\PageIndex{5}\)
\[\bf{\chi}_{c}^{2}=\sum_{k} \frac{(O-E)^{2}}{E}=3\nonumber\]
The decision is not to reject the null hypothesis because the calculated value of the test statistic is not in the tail of the distribution.
Conclusion: At a 5% level of significance, from the sample data, there is not sufficient evidence to conclude that the absent days do not occur with equal frequencies.
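The by-hand chart above can be reproduced in a few lines:

```python
# Reproducing the by-hand calculation for the employee-absences example:
# observed (15, 12, 9, 9, 15) against a uniform expectation of 12 per day.
observed = [15, 12, 9, 9, 15]
expected = [12] * 5

terms = [(o - e) ** 2 / e for o, e in zip(observed, expected)]
chi_sq = sum(terms)            # the chi-square test statistic
df = len(observed) - 1         # degrees of freedom

print(terms)   # the last column of the chart
print(chi_sq)  # 3.0
print(df)      # 4
```

Since 3.0 is well below the critical value 9.48, the statistic is not in the right tail, matching the decision above.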
Teachers want to know which night each week their students are doing most of their homework. Most teachers think that students do homework equally throughout the week. Suppose a random sample of 56
students were asked on which night of the week they did the most homework. The results were distributed as in Table \(\PageIndex{8}\).
Table \(\PageIndex{8}\)
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
Number of students 11 8 10 7 10 5 5
From the population of students, do the nights for the highest number of students doing the majority of their homework occur with equal frequencies during a week? What type of hypothesis test should
you use?
One study indicates that the number of televisions that American families have is distributed (this is the given distribution for the American population) as in Table \(\PageIndex{9}\).
Table \(\PageIndex{9}\)
Number of Televisions Percent
0 10
1 16
2 55
3 11
4+ 8
The table contains expected (\(E\)) percents.
A random sample of 600 families in the far western United States resulted in the data in Table \(\PageIndex{10}\).
Table \(\PageIndex{10}\)
Number of Televisions Frequency
4+ 15
Total = 600
The table contains observed (\(O\)) frequency values.
At the 1% significance level, does it appear that the distribution "number of televisions" of far western United States families is different from the distribution for the American population as a whole?
Solution 11.6
This problem asks you to test whether the far western United States families distribution fits the distribution of the American families. This test is always right-tailed.
The first table contains expected percentages. To get expected (E) frequencies, multiply the percentage by 600. The expected frequencies are shown in Table \(\PageIndex{11}\).
Table \(\PageIndex{11}\)
Number of televisions Percent Expected frequency
0 10 (0.10)(600) = 60
1 16 (0.16)(600) = 96
2 55 (0.55)(600) = 330
3 11 (0.11)(600) = 66
over 3 8 (0.08)(600) = 48
Therefore, the expected frequencies are 60, 96, 330, 66, and 48.
\(H_0\): The "number of televisions" distribution of far western United States families is the same as the "number of televisions" distribution of the American population.
\(H_a\): The "number of televisions" distribution of far western United States families is different from the "number of televisions" distribution of the American population.
Distribution for the test: \(\chi_{4}^{2} \text { where } d f=(\text { the number of cells })-1=5-1=4\).
Calculate the test statistic: \(\chi^2 = 29.65\)
Figure \(\PageIndex{6}\)
The graph of the chi-square distribution marks the critical value for four degrees of freedom at the \(\alpha = 0.01\) significance level, 13.277, and the calculated \(\chi^2\) test statistic of 29.65. Comparing the test statistic with the critical value, as we have done with all other hypothesis tests, we reach the conclusion.
Make a decision: Because the test statistic is in the tail of the distribution we cannot accept the null hypothesis.
This means you reject the belief that the distribution for the far western states is the same as that of the American population as a whole.
Conclusion: At the 1% significance level, from the data, there is sufficient evidence to conclude that the "number of televisions" distribution for the far western United States is different from
the "number of televisions" distribution for the American population as a whole.
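The percent-to-frequency conversion in Solution 11.6 can be reproduced directly:

```python
# Converting the expected percentages in Table 11 into expected frequencies
# for a sample of 600 families, as done in Solution 11.6.
percents = {"0": 10, "1": 16, "2": 55, "3": 11, "over 3": 8}
n = 600

expected = {category: p / 100 * n for category, p in percents.items()}
print(expected)  # {'0': 60.0, '1': 96.0, '2': 330.0, '3': 66.0, 'over 3': 48.0}
```

A quick sanity check is that the expected frequencies sum back to the sample size of 600.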
The expected percentage of the number of pets students have in their homes is distributed (this is the given distribution for the student population of the United States) as in Table \(\PageIndex{12}\).
Table \(\PageIndex{12}\)
Number of pets Percent
4+ 9
A random sample of 1,000 students from the Eastern United States resulted in the data in Table \(\PageIndex{13}\).
Table \(\PageIndex{13}\)
Number of pets Frequency
4+ 90
At the 1% significance level, does it appear that the distribution “number of pets” of students in the Eastern United States is different from the distribution for the United States student
population as a whole?
Suppose you flip two coins 100 times. The results are 20 \(HH\), 27 \(HT\), 30 \(TH\), and 23 \(TT\). Are the coins fair? Test at a 5% significance level.
This problem can be set up as a goodness-of-fit problem. The sample space for flipping two fair coins is \(\{HH, HT, TH, TT\}\). Out of 100 flips, you would expect 25 \(HH\), 25 \(HT\), 25 \(TH\), and 25 \(TT\). This is the expected distribution from the binomial probability distribution. The question, "Are the coins fair?" is the same as saying, "Does the distribution of the coins (20 \(HH\), 27 \(HT\), 30 \(TH\), 23 \(TT\)) fit the expected distribution?"
Random Variable: Let \(X\) = the number of heads in one flip of the two coins. X takes on the values 0, 1, 2. (There are 0, 1, or 2 heads in the flip of two coins.) Therefore, the number of cells
is three. Since \(X\) = the number of heads, the observed frequencies are 20 (for two heads), 57 (for one head), and 23 (for zero heads or both tails). The expected frequencies are 25 (for two
heads), 50 (for one head), and 25 (for zero heads or both tails). This test is right-tailed.
\(\bf{H_0}\): The coins are fair.
\(\bf{H_a}\): The coins are not fair.
Distribution for the test: \(\chi_2^2\) where \(df = 3 – 1 = 2\).
Calculate the test statistic: \(\chi^2 = 2.14\).
Figure \(\PageIndex{7}\)
The graph of the chi-square distribution marks the critical value for two degrees of freedom at the \(\alpha = 0.05\) significance level, 5.991, and the calculated \(\chi^2\) test statistic of 2.14. Comparing the test statistic with the critical value, as we have done with all other hypothesis tests, we reach the conclusion.
Conclusion: There is insufficient evidence to conclude that the coins are not fair: we cannot reject the null hypothesis that the coins are fair.
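The collapsing of the four coin outcomes into head-counts and the resulting statistic can be checked as follows:

```python
# Collapsing the two-coin outcomes by number of heads, then computing the
# test statistic, as in the fair-coins example above.
flips = {"HH": 20, "HT": 27, "TH": 30, "TT": 23}

observed = [flips["TT"], flips["HT"] + flips["TH"], flips["HH"]]  # 0, 1, 2 heads
expected = [25, 50, 25]  # binomial expectation for 100 flips of two fair coins

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(observed)          # [23, 57, 20]
print(round(chi_sq, 2))  # 2.14
```

Since 2.14 is below the critical value 5.991, the null hypothesis is not rejected.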
The 2024 UK General Election result by age group
Five years ago I drew a diagram to illustrate why age was such an important factor in determining the result of the previous, 2019, UK General Election. It was first shown in Public Sector Focus and
contained 66 squares. Each square represented one million people living in the UK. At that time, there were about 66 million people resident in the UK. Some 20 million were adults who chose not to
vote, or were eligible to vote but had not registered to (and so could not vote even if they had wanted to). Apathy was quite high in 2019; much higher than it had been in 2017.
The Conservatives won that election because, to the nearest million, 14 million adults chose to vote for them. As Figure 1 shows, most of those who did were old or very old.
The next largest group of people in the UK were 12 million children who were also UK citizens. They include more than a million 16 and 17 year olds who, one day, may be allowed to vote. Labour came
fourth in 2019, after the non-voters, the Conservatives and the children, with 10 million votes. It was an impressive tally, but the votes were in the wrong places and too spread out over the age
groups so the Labour party won only 203 seats in the House of Commons. Voters tend to concentrate in places by age and so, if you secure a great many votes of a particular age group, you can win more seats.
The fifth largest group were the Liberals, the sixth were people who were not UK, Irish or ‘qualifying’ Commonwealth/EU citizens and so could not vote, and the three remaining groups only secured, to
the nearest million, one million votes each.
Figure 1: The 2019 UK General Election, voting by age
The 2019 General Election was, above all, an election about age. The old won it. More of them turned out, and their votes tended to be geographically concentrated in areas where people retire to.
This included many of the red wall seats which were usually in slightly more affluent parts of the north of England to which people were, in increasing numbers, retiring.
So, you may be wondering, how did the picture change in the subsequent five years when we look again at voting by age in exactly the same way? To answer this, I have redrawn the diagram to show the
2024 UK General Election result below, in Figure 2.
Figure 2: The 2024 UK General Election, voting by age
The first point to note about Figure 2 is that the age groups no longer contain almost exactly the same numbers of people. We have had fewer babies in the last five years and so the age group 0-6 now
has only around 5.5 million members, not 6 million. However, a little net positive immigration and some ageing means that the older age groups all tend to now contain 6.5 million people rather than 6
million. This is not the case for the age group 23-28, a great many of whom left during the pandemic and because of Brexit and never came back. Similarly, the age group 65-73 has not grown in size
because of few births after the post-world-war-two baby boom (they were born later, in the 1950s). Overall, though, there has been less emigration of families with children – emigration became harder
after Brexit. Our adult population has grown by two million, and most of that is of people aged over 29 (see Table 2). This is because of ageing and less emigration. But what is most interesting in
Figure 2 is not our changing demographics, but how our voting patterns have changed.
The greatest change is that three million fewer people chose to vote in 2024 as compared to 2019. There was a huge growth in apathy. In every single age group, apart from people aged 65+, more adults
chose not to vote than to vote for any political party. At age 65+ a majority still chose to vote for the Conservatives, but that was a depleted majority (see Table 1). The Conservatives lost to
apathy, and to a new alternative being offered to them. That new alternative was not Labour, even though Labour was presenting a very conservative ‘offer’.
The Labour vote actually fell between 2019 and 2024. It fell most amongst the young and overall, despite increasing a little among those aged 57-64. One result was that Labour now had a similar level
of support across the whole age range. In normal times this would not be an effective way of winning seats; but the 2024 General Election was not normal. What made the election abnormal was a
re-branded Brexit Party standing as ‘Reform UK’ in almost every seat in the mainland of the UK. This party took almost 4 million votes, mainly off the Tories, who lost another 3 million to people no
longer voting (net). The result was to hand Labour 411 seats – despite Labour winning fewer votes than it had in 2019!
The Liberal vote was also slightly lower in 2024. To the nearest million the Liberals won as many votes in 2024 as they had in 2019; but now were rewarded with 72 seats rather than just 11. That is
how the UK voting system works. It would be wrong to label the UK a ‘democracy’. There is only a very weak relationship between votes and seats. I’ll leave you to study the other differences between
these Figure 1 and 2 that summarise the results; but note how fractured the age 23-28 vote has become by political choices. Also note that the very small parties have to be placed somewhere in the
diagram. I have tried to place them where I placed them before, but I have aged those who cannot vote by five years (and an extra million of that group have arrived – net). Brexit did not reduce immigration.
If you want to know how Figure 2 was constructed, the Table below is of the IPSOS estimates of voters turnout and voting by age. Note that this is based on a survey of ‘17,394 GB adults aged 18+, of
whom 15,206 said they voted, interviewed on the online random probability Ipsos Knowledge Panel between 5-8 July 2024. The data was weighted using our normal methodology to be representative of the
adult population profile of Great Britain, and then the proportions of voters for each party and non-voters were weighted to the actual results by region.’[1] IPPR research helped confirm the Ipsos estimates.[2]
The figures in Table 1 were used alongside the overall party vote totals and population estimates to create the revised 2019 figure into its 2024 form. The projected population of the UK in mid-year
2024 was 69.025 million.[3] So three million more than what we thought the population was in 2019. Of those 28.809 million voted.[4] Just 59.8% of the 48 million people registered to vote. So, to the
nearest million, 40 million people did not vote or were not registered to vote. Of that 40 million, 14 million were children, leaving 26 million adults who did not vote. Of those adults who did not
vote, 20 million chose not, another 6 million were not registered to vote or were not permitted to register to vote. It is extremely hard to determine who does qualify to vote in the UK.[5]
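The round-number bookkeeping in this paragraph can be checked directly (figures in millions; the children and not-registered counts are the article's own rounded figures):

```python
# Checking the article's round-number arithmetic (all figures in millions).
population = 69.025   # projected mid-2024 UK population
voted = 28.809        # total votes cast

non_voters = population - voted               # about 40 million did not vote
children = 14                                 # under-18s, to the nearest million
adults_not_voting = round(non_voters) - children   # 26 million adults
not_registered = 6                            # not registered / not permitted
chose_not_to_vote = adults_not_voting - not_registered  # 20 million

print(round(non_voters), adults_not_voting, chose_not_to_vote)  # 40 26 20
```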
Table 1: proportion of people in each of 6 age groups by UK vote in 2024
The final table here shows the population change in the UK by age for those interested in what has happened and where we are heading in terms of how many of us there are, and who will be voting at
the next General Election.
The first column in Table 2 shows how each population group has changed in size compared to its numbers five year earlier. Thus, there are 1% fewer children aged 5-9 in the UK in 2024 than the
numbers aged 0-4 in 2019. Deaths and net-migration are the only possible reasons. The age group that has grown the most are those aged 20-24 in 2024, by 16%; that has to be due to net in-migration.
If you follow the first column of Figures in Table 2 down, you will note that the first fall is for ages 55-59. This is when deaths begin to take their toll. Follow the column down to its end and you
will see that 88% of those aged 95-99 in 2019 did not make it to be 100+ in 2024.
The second column in Table 2 is the absolute number of people the percentages in the first column refer to, in thousands. The third and fourth columns are the ONS estimates and projections for the
population of the UK by age in 2019 and 2024. Note that the total population estimate for 2019 is now thought to have been 66.8 million (nearer to 67). The fifth and sixth columns are the change when
we just compare the age groups directly. There were 366,000 fewer children aged 0-4 in the UK in 2024 than in 2019, a 9% fall. This is greater than the 8% fall in the numbers age 5-9 in 2024. Our
primary schools are going to struggle in future, then our secondary schools. We will be an even older population at the time of the next General Election, despite the continued immigration we benefit
from. The final row in Table 2 shows that we grew in total population size by 3% in these five years. Without immigration we would already be a shrinking island. And, already, the 25-29 age group are
fewer in number than those who were of those ages in 2019. Our future will depend as much on who comes and who goes, as on how we vote and if we ever manage to make voting fair.
Table 2: Population estimates and projections by the Office for National Statistics,
United Kingdom (UK), PERSONS, thousands, 2019, 2024, and change by age group
[1] Gideon Skinner, Keiran Pedley, Cameron Garrett, Ben Roff Public Affairs (2024) How Britain voted in the 2024 election, London: Ipsos, https://www.ipsos.com/en-uk/
[2] Parth Patel and Viktor Valgarðsson (2024) Half of us: Turnout patterns at the 2024 general election, London: Ippr, July, https://ippr-org.files.svdcdn.com/production/Downloads/
[3] James Robards (2024) Principal projection – UK population in age groups, London: ONS, 30 January, https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationprojections/
[4] Richard Cracknell and Carl Baker (2024) General election 2024 results , Research Briefing, London: House of Commons Library, 18 July, https://researchbriefings.files.parliament.uk/documents/
[5] The number was 7 million in England and Wales in April 2024, a million extra registered: Craig Westwood (2024) Millions of missing voters urged to register before deadline, London: The Electoral
Commission, 9 April, https://www.electoralcommission.org.uk/media-centre/millions-missing-voters-urged-register-deadline
For a PDF of this article and a link to the original source click here.
Calculating Horsepower curve along RPM
Is there a way to create a formula to calculate horsepower curve along an RPM bandwidth? If so, could someone please point me to a place where I could find and learn about said formula? We need this
for a script so our vehicles move but, we have no idea where to find this information.
1 Like
To calculate horse power, you need two variables: RPM & Torque;
Read this article, then you can program it using the same logic it provides:
Alright thanks though this is not what we are looking for as we know how to do that part. It is creating the formula which is the hard part to make the cars drive but the coder im working with said
he is just gonna try some other things. Once again though thank you for your time and response. I think the issue is how we are explaining it.
1 Like
Either A:
Google a formula someone has already done.
Or B:
Find some real-world data and ask WolframAlpha to fit an equation for you.
We have googled it with zero results for what we are looking for. Plus, this is for multiple vehicles, so we would need the formula to work with all cars with specific data plugged in.
You’ll have to google some Dyno test data and get something like Wolfram Alpha to fit the data to an equation
Or just mess around with some polynomials in Desmos until you get something that seems right
Alright I’ll try that. Thanks for the responses.
It’s hardly a great curve, but you get the idea:
I was never able to find any pre-done polynomials either.
Sent that over to the coder to get his response but that sorta looks exactly like what he is going for.
What values did you plugin for that. Like what do they represent?
It’s just arbitrary - I messed with the coefficients of a cubic equation until I got something that looks somewhat like the BHP vs RPM curve for a low-end sports car. It should probably be pointier.
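For anyone finding this later, here is a sketch of the idea in Python (a Roblox script would be Luau, but the math is the same). The posters above used a cubic; for simplicity this uses a downward parabola peaking at 4000 RPM, with arbitrary placeholder coefficients you would tune per vehicle or fit to dyno data. Horsepower is then derived via HP = torque × RPM / 5252:

```python
# Sketch: a peaked torque curve plus the standard HP = torque * RPM / 5252
# conversion. All coefficients here are arbitrary placeholders.

def torque(rpm):
    # Placeholder torque curve (lb-ft), peaking at 4000 RPM; tune per vehicle.
    return max(0.0, 300.0 - 1.2e-5 * (rpm - 4000.0) ** 2)

def horsepower(rpm):
    # HP = torque (lb-ft) * RPM / 5252
    return torque(rpm) * rpm / 5252.0

for rpm in (1000, 3000, 5000, 7000):
    print(rpm, round(torque(rpm), 1), round(horsepower(rpm), 1))
```

A handy sanity check on any such model: the HP and torque curves always cross at 5252 RPM, since that is where the conversion factor equals one.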
6 Likes
Alright thanks, that helped him out a bit.
Recommended Age: 6+ | Mathematical Topic: Early Arithmetic
What To Do
Have you ever wondered why some of our English number words are so funny? Why do we say eleven and twelve? Where does the word “fifty” come from? We adults know what these funny words mean by now,
but to children, it’s not that easy.
In other languages, like Mandarin (a language spoken in China), the number names are a lot more transparent. For example, instead of “eleven” and “twelve,” many people speaking other languages say
“ten-one” and “ten-two.” Instead of “fifty,” many say “five-ten.” You can use a little of this wording to help your child understand our funny number words.
When you see a number like, 43, for example, you can say, “Oh look, forty-three. That’s four tens and three ones,” while pointing to the “4” and the “3” in the written numeral. Just throwing this
extra information in will help your child understand how the numerals and the number names go together—a key understanding that will help them do better in school math.
Moving On
Point to one of the digits in a multi-digit number, such as the “4” in the number 2941, and ask your child what it means. If they say “40” or “four tens,” then they get it. Another way to ask is to
show a written numeral, like 2941, and ask them to point to the number that shows how many hundreds or how many tens.
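The place-value talk described above can be written as a tiny function: split a numeral into "how many" of each place (ones, tens, hundreds, ...), just as you would say it aloud:

```python
# Decompose a numeral into (digit, place-name) pairs, most significant first.

def place_values(n):
    names = ["ones", "tens", "hundreds", "thousands"]
    parts = []
    for i, digit in enumerate(reversed(str(n))):
        parts.append((int(digit), names[i]))
    return list(reversed(parts))

print(place_values(43))    # [(4, 'tens'), (3, 'ones')]
print(place_values(2941))  # [(2, 'thousands'), (9, 'hundreds'), (4, 'tens'), (1, 'ones')]
```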
Fault Identification for Shear-Type Structures Using Low-Frequency Vibration Modes
1 School of Civil Engineering, Shaoxing University, Shaoxing, 312000, China
2 School of Civil and Transportation Engineering, Ningbo University of Technology, Ningbo, 315211, China
3 Engineering Research Center of Industrial Construction in Civil Engineering of Zhejiang, Ningbo University of Technology, Ningbo, 315211, China
* Corresponding Author: Qiuwei Yang. Email:
(This article belongs to the Special Issue: Failure Detection Algorithms, Methods and Models for Industrial Environments)
Computer Modeling in Engineering & Sciences 2024, 138(3), 2769-2791. https://doi.org/10.32604/cmes.2023.030908
Received 02 May 2023; Accepted 31 July 2023; Issue published 15 December 2023
Shear-type structures are common structural forms in industrial and civil buildings, such as concrete and steel frame structures. Fault diagnosis of shear-type structures is an important topic to
ensure the normal use of structures. The main drawback of existing damage assessment methods is that they require accurate structural finite element models for damage assessment. However, for many
shear-type structures, it is difficult to obtain accurate FEM. In order to avoid finite element modeling, a model-free method for diagnosing shear structure defects is developed in this paper. This
method only needs to measure a few low-order vibration modes of the structure. The proposed defect diagnosis method is divided into two stages. In the first stage, the location of defects in the
structure is determined based on the difference between the virtual displacements derived from the dynamic flexibility matrices before and after damage. In the second stage, damage severity is
evaluated based on an improved frequency sensitivity equation. The main innovations of this method lie in two aspects. The first innovation is the development of a virtual displacement difference
method for determining the location of damage in the shear structure. The second is to improve the existing frequency sensitivity equation to calculate the damage degree without constructing the
finite element model. This method has been verified on a numerical example of a 22-story shear frame structure and an experimental example of a three-story steel shear structure. Based on numerical
analysis and experimental data validation, it is shown that this method only needs to use the low-order modes of structural vibration to diagnose the defect location and damage degree, and does not
require finite element modeling. The proposed method should be a very simple and practical defect diagnosis technique in engineering practice.
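The first stage of the procedure summarized above can be sketched numerically. The paper's exact formulation is not given in this excerpt; the sketch below uses the standard modal-flexibility approximation \(F \approx \sum_i \phi_i \phi_i^T / \omega_i^2\) from a few mass-normalized low-order modes, and compares virtual displacements \(u = F \cdot \text{load}\) before and after damage. The frequencies, mode shapes, and uniform virtual load are hypothetical illustrations, not values from the paper:

```python
# Sketch of damage localization via virtual-displacement differences derived
# from modal flexibility matrices (assumed construction, not the paper's own).

def modal_flexibility(freqs_rad, modes):
    """freqs_rad: natural frequencies (rad/s);
    modes: mass-normalized mode shapes (lists of equal length, one per mode)."""
    n = len(modes[0])
    F = [[0.0] * n for _ in range(n)]
    for w, phi in zip(freqs_rad, modes):
        for a in range(n):
            for b in range(n):
                F[a][b] += phi[a] * phi[b] / w**2
    return F

def virtual_displacement(F, load):
    return [sum(F[a][b] * load[b] for b in range(len(load))) for a in range(len(load))]

# Hypothetical 3-DOF shear building: first two modes before and after damage.
healthy = modal_flexibility([10.0, 28.0], [[0.33, 0.60, 0.75], [0.75, 0.33, -0.60]])
damaged = modal_flexibility([9.0, 26.0], [[0.36, 0.61, 0.73], [0.74, 0.30, -0.62]])

load = [1.0, 1.0, 1.0]  # uniform virtual load
delta = [d - h for d, h in zip(virtual_displacement(damaged, load),
                               virtual_displacement(healthy, load))]
print(delta)  # larger entries suggest damage near the corresponding stories
```

Damage softens the structure, lowering frequencies and increasing flexibility, so the entries of `delta` grow, with the pattern of growth pointing to the affected stories.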
The shear-type structure is a common structural form in industrial and civil buildings, such as concrete and steel frame structures. For example, many high-rise residential buildings can be
classified as the shear-type structures due to the particularly high stiffness of the floor compared to the columns. It is known that structural failures are inevitable due to the environmental
corrosion, material fatigue, disaster loads, and other factors. In order to ensure residential safety, it is necessary to carry out structural health monitoring and defect diagnosis. Due to the large
volume and numerous components of building structures, traditional non-destructive testing techniques such as ultrasound, radiographic testing, and penetration testing cannot complete defect
diagnosis of large building structures. In the past few decades, methods for diagnosing structural damage using response parameters of structures under static or dynamic loads have been continuously
studied in depth. The theoretical basis for this type of method is that faults in structures can cause changes in structural static and vibration response parameters [1–4]. In practice, the response
data of structures can be measured through special testing equipment and then their changes can be used to diagnose structural fault conditions.
In recent years, many methods have been developed for structural fault diagnosis by using static or dynamic response parameters. Yang et al. proposed a fast static displacement analysis method for
structural damage detection using flexible disassembly perturbation [5,6]. Peng et al. [7] developed a method for determining the damage location of beam structures by using redistribution of static
shear energy. Li et al. [8] proposed a flexible method for damage identification of cantilever structures, such as high-rise buildings and chimneys, using a few dynamic modal data. Koo et al. [9]
proposed a damage quantification method for shear buildings based on modal data measured under ambient vibration. The experimental study found that the damage quantity estimated by the proposed method agrees well with the actual damage quantity obtained from the static pushdown test. Zhu et al. [10] proposed an effective damage detection method for shear buildings by using the change
of the first mode shape slope. An eight-layer numerical example and a three-layer experimental model verify the effectiveness of the method. Xing et al. [11] proposed a substructure method that
allows local damage detection of shear structures. Their method only needs three sensors to identify the local damage of any floor of the shear structure building. The feasibility of the proposed
method is tested by simulation and experiment on a five-storey building. Su et al. [12] developed a simple and effective method to locate the floors where the property (stiffness and mass) changes
during the shear building life cycle. The floors that may be damaged are determined by comparing the natural frequencies of the substructure at different stages of the building life cycle. Sung et
al. [13] conducted a comprehensive experimental verification of the damage induced deflection method for shear building damage detection. The results showed that the damaged floor was successfully
located, and the damage rate estimated by the damage induced deflection method was consistent with the damage rate calculated by numerical simulation. Panigrahi et al. [14] developed a method based
on residual force vector and genetic algorithm to identify damages of multi-layer shear structures from sparse modal information. Li et al. [15] proposed a data-driven method for seismic damage
detection and location of multi-degree-of-freedom shear-type building structures under strong ground motions. The proposed method is based on the joint realization of time-frequency analysis and
fractal dimension characteristics. An et al. [16] applied a damage localization method based on impact energy to the real-time damage detection of shear structures under random base excitation. The performance of their method in damage detection was experimentally verified using a laboratory-scale 6-layer shear structure model. Wang et al. [17] proposed a damage
identification method for the shear-type building based on proper orthogonal modes. The experimental results show that this method can effectively identify the location and severity of shear building
structure damage. Mei et al. [18] proposed an improved substructure based damage detection method to locate and quantify damage in shear structures. Luo et al. [19] proposed a new method for
extracting the spectral transfer function and detecting damage of shear frame structures under non-stationary random excitation. Shi et al. [20] studied the damage localization by using the curvature
of the lateral displacement envelope in the shear building structure. The finite difference method and interpolation method are used to evaluate the modal curvature and frequency response function
for damage localization. Mei et al. [21] presented a new substructure damage detection method based on the autoregressive moving average exogenous input (ARMAX) model and the optimal sub-mode
assignment (OSPA) distance to locate and quantify the damage. Paral et al. [22] proposed a damage assessment method based on artificial neural network, which takes the change of the first mode slope
damage index as the input layer of the artificial neural network. The effectiveness of their method is proved by the experimental tests of the three-story steel shear frame model. Liang et al. [23]
carried out the damage detection of shear buildings by frequency-change-ratio and model updating algorithm. Ghannadi et al. [24] used a new bio-inspired optimization algorithm to identify the damage
location and severity of the multi-layer shear frame. Do et al. [25] developed a new damage detection method based on output-only vibration information for shear-type structures. Zhao et al. [26]
proposed a two-step modeling method based on wavelet frequency response function estimation and least squares iterative algorithm to identify structural vibration modal parameters. Liu et al. [27]
studied schemes to repair earthquake damage from the aspects of load transfer path, enclosure structure, beam-column joints, and structural stiffness. It was found that the seismic performance of masonry walls can be greatly improved after being wrapped in reinforced concrete or seismic belts. Niu [28] proposed a damage detection method for shear frame structures based on frequency response
function. The influence of noise on damage detection is greatly suppressed by simultaneously increasing the number of equations and reducing the unknown coefficients. Yang et al. [29,30] studied
dynamic model reduction and used modal sensitivity for fault diagnosis based on the reduced model. Tan et al. [31] proposed a model-calibration-free method for damage identification of shear structures using modal data. The advantage of their method is that its calibration-free character avoids the need to calibrate the mass and stiffness parameters of the
structure. Roy [32] proposed a new formula to establish the expression of damage severity in the form of mode shape slope. The derived closed-form solution directly relates the percentage of damage
strength to the derivative of the vibration mode change in the shear building.
Although great progress has been made in the damage diagnosis of shear structures, many difficulties remain to be overcome. The main disadvantage of the existing methods described above is that an accurate structural finite element model (FEM) is required to perform the damage assessment. However, it is difficult to obtain accurate FEMs for many shear-type structures, so defect diagnosis methods that do not require an accurate FEM are urgently needed in engineering practice. For this purpose, a FEM-free method for
defect diagnosis of the shear structure is developed in this paper, which only needs to measure a few low-order vibration modes of the structure. The proposed defect diagnosis method is divided into
two stages. In the first stage, the location of the defect is determined based on the virtual displacement difference derived from the dynamic flexibility matrix before and after damage. In the
second stage, the damage severity is evaluated based on the improved frequency sensitivity equation. The main innovations of the proposed method lie in two aspects. The first is the development of a
virtual displacement difference method for determining the location of damage in the shear structure. The second is to improve the existing frequency sensitivity equation to calculate the damage
degree without constructing FEM. The proposed method has been validated on a numerical model of a 22-story shear-type frame structure and a three-story steel shear structure model. Based on numerical
analysis and experimental data validation, it is shown that the proposed method can diagnose the defect location and damage degree only by using the lower order modes of structural vibration, and
does not require finite element modeling. The proposed method should be a very simple and practical defect diagnosis technique in engineering practice.
2.1 Damage Localization by the Virtual Displacement Difference
In this section, the virtual displacement difference method is proposed for defect localization of shear-type structures. For an undamaged structure with n degrees of freedom (DOFs), the free
vibration modes can be computed by the following generalized eigenvalue problem as:
where K and M are the stiffness and mass matrices of the FEM of the undamaged structure, λr is the r-th eigenvalue (the square of the r-th angular frequency), and φr is the corresponding mass-normalized eigenvector (mode shape). Note that λr and φr (also called the r-th eigen-pair) can also be obtained by a dynamic test on the undamaged structure, without a FEM. A structure with n DOFs will have n independent eigen-pairs, i.e., r = 1∼n. Thus Eqs. (1) and (2) can be rewritten for the n eigen-pairs as:
where I is the n-dimensional identity matrix, Ψ is the eigenvector matrix, and Λ is the eigenvalue matrix as:
From Eq. (4), one has
Combining Eq. (3) with (7), one has
From Eq. (9), the inverse matrix of K (i.e., the flexibility matrix F) can be obtained as:
Using Eqs. (7) and (8), Eq. (10) can be simplified as:
Using Eqs. (5) and (6), Eq. (11) can be rewritten as:
The eigenvalues of structural vibration are generally sorted from small to large, that is, 0 < λ1 < λ2 < λ3 < ⋯. This means that the ordering of their reciprocals is exactly the opposite, i.e., 1/λ1 > 1/λ2 > 1/λ3 > ⋯ > 0, so the contributions of the higher modes to the flexibility are comparatively small. Thus Eq. (12) can be approximated by truncating the modal series as:
where m is the number of the lower-frequency modes in the dynamic test of the undamaged structure. It is known that the appearance of structural defects usually only results in changes in structural
stiffness or flexibility, while the mass generally does not change. For the damaged structure, similar equations can be derived as follows:
in which Kd is the stiffness matrix of the damaged system, λdr and φdr are the corresponding eigenvalue and mode shape, and Fd is the damaged flexibility matrix. Note that λdr and φdr can also be obtained
by the dynamic test on the damaged structure without FEM. From Eqs. (13) and (17), the change of the dynamic flexibility matrix due to fault can be approximately computed by:
From a static perspective, the displacement before and after structural damage under a certain static load can be obtained as follows:
where ξ and ξd are the displacement vectors before and after structural damage under the static load lv. For shear-type structures, it can be assumed that a unit force is applied at the free
end of the structure to obtain a virtual load vector of lv=(1,0,0,⋯,0)T. Note that the purpose of applying a unit force at the free end of the shear structure is to ensure that each layer of the
structure can undergo shear deformation, as shown in Fig. 1a. If this unit force is applied to the middle layer, the layers above the load position will not undergo shear deformation, as shown in
Fig. 1b. Thus the loading scheme of Fig. 1b will not be able to identify the possible damage in the layers above the load location by the shear deformation. In view of this, the virtual load should
be applied to the free end of the shear structure (i.e., the top layer) as shown in Fig. 1a. Note that in Fig. 1a, the floors are numbered from top to bottom as 1, 2, … , n. Thus the virtual load
vector corresponding to Fig. 1a is lv=(1,0,0,⋯,0)T since only the element corresponding to the unit force at the first floor is 1 and the other elements are zeros. If the floors are numbered from
bottom to top as 1, 2, … , n, as shown in Fig. 1c, the corresponding virtual load vector will be lv=(0,0,⋯,0,1)T since only the element corresponding to the unit force at the highest floor is 1 and
the other elements are zeros.
Therefore, the virtual displacement difference vector Δξ can be obtained as:
According to the results of reference [33], it has been proven that the displacement difference vector of a linear structure before and after damage under the same static load will
undergo a sudden change at the damage location. Thus the location of the fault in a shear structure can be determined by the location of the element value mutation in the vector Δξ. The virtual
displacement difference vector Δξ is also called the defect localization vector. It should be emphasized that Δξ can be obtained from Eq. (21) only by testing the low-frequency modal
data of the free vibration of the shear structure before and after damage, without requiring a finite element model. This means that the defect localization of the structure can be carried out
without the need to establish a FEM of the structure in advance.
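To make the localization procedure concrete, the following Python fragment is a minimal sketch of this stage on a hypothetical four-storey shear building (the storey stiffnesses, masses, and damage level are illustrative assumptions, not values from this paper). It forms the truncated flexibility matrices of Eqs. (13) and (17) from the m lowest modes and evaluates the defect localization vector of Eq. (21):

```python
import numpy as np

def shear_km(ks, ms):
    """Stiffness/mass matrices of a shear building with a fixed base
    (storeys numbered from bottom to top)."""
    n = len(ks)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += ks[i]
        if i + 1 < n:
            K[i, i] += ks[i + 1]
            K[i, i + 1] = K[i + 1, i] = -ks[i + 1]
    return K, np.diag(ms)

def modes(K, M):
    """Mass-normalised eigen-pairs of K phi = lam M phi (M diagonal)."""
    d = np.sqrt(np.diag(M))
    lam, V = np.linalg.eigh(K / np.outer(d, d))
    return lam, V / d[:, None]

def flexibility(lam, phi, m):
    """Truncated modal flexibility, Eqs. (13)/(17)."""
    return sum(np.outer(phi[:, r], phi[:, r]) / lam[r] for r in range(m))

# hypothetical 4-storey example with 20% stiffness loss in storey 2
ks = np.array([800.0, 800.0, 800.0, 800.0])   # assumed storey stiffnesses
ms = np.array([50.0, 50.0, 50.0, 50.0])       # assumed floor masses
ksd = ks.copy(); ksd[1] *= 0.80

lam,  phi  = modes(*shear_km(ks,  ms))        # stands in for undamaged test data
lamd, phid = modes(*shear_km(ksd, ms))        # stands in for damaged test data

m = 2                                         # only the two lowest modes
dF = flexibility(lamd, phid, m) - flexibility(lam, phi, m)   # Eq. (18)
lv = np.zeros(len(ks)); lv[-1] = 1.0          # unit load at the top floor
dxi = dF @ lv                                 # Eq. (21)

# the mutation (inter-storey jump) is largest at the damaged storey
jumps = np.abs(np.diff(np.concatenate(([0.0], dxi))))
print(int(np.argmax(jumps)) + 1)
```

With all n modes the modal sum reproduces K⁻¹ exactly; using only the lowest modes makes it an approximation, which is precisely the premise of Eq. (13).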
2.2 Damage Quantification by the Improved Frequency Sensitivity
An assessment of the defect severity is necessary to estimate the remaining life of the structure or to determine whether maintenance is required. To this end, an improved frequency sensitivity
algorithm is developed for fault quantification of the shear structure. According to the FEM theory, the total stiffness matrix K of the undamaged structure can be obtained by the sum of all the
elementary stiffness matrices as:
where Ki is the i-th elementary stiffness matrix, N is the number of all elements of the structure. In most cases, structural damage only leads to a decrease in structural stiffness without causing a
change in mass. The reduction in local stiffness can be represented by multiplying a reduction coefficient by the elementary stiffness matrix. Therefore, the total stiffness matrix considering
structural damage can be expressed as:
where εi is a reduction coefficient reflecting the severity of the defect in the i-th element. εi is a number located in the interval [−1, 0]. Theoretically, εi=0 denotes that the i-th element has
not been damaged, −1<εi<0 indicates partial damage to the i-th element, and εi=−1 indicates complete damage to the i-th element. By performing partial derivatives on Eq. (1) with respect to the
variable εi, one has:
in which ∂λr/∂εi and ∂φr/∂εi are the frequency and mode shape sensitivities, respectively. Eq. (1) can be rewritten by taking the matrix transpose as:
According to the FEM theory, the stiffness and mass matrices K and M are both symmetric matrices. Thus Eq. (25) can be expanded by considering the symmetry of K and M as:
Multiplying Eq. (24) by φrT and using Eq. (26), one has
Eq. (27) can be expanded as:
Substituting Eq. (2) into (28) yields
The eigenvalue (i.e., angular frequency) variation Δλr due to the faults in the structure can be calculated by:
Substituting Eq. (23) into (1) yields:
Eq. (31) shows that λr is an implicit function of the variables ε1,ε2,⋯,εN, i.e., λr=f(ε1,ε2,⋯,εN). Thus the change of λr due to the changes of ε1,ε2,⋯,εN can be approximated by using a Taylor series expansion and ignoring the higher-order terms as:
As stated before, the defect location has been determined based on the damage location vector Δξ in the first stage. The following quantitative evaluation of defects is divided into two situations:
a single defect and multiple defects. For the single-defect case, assuming that the i-th element is determined to be the damaged element, Eq. (32) simplifies to:
Substituting Eq. (29) into (33) yields
In the above derivation, φr is the mass-normalized mode shape obtained by solving the eigenvalue problem based on structural FEM. To avoid constructing FEM, Eq. (34) can be improved by replacing the
mode shape calculated from FEM with the tested mode shape as:
where φ¯r represents the measured mode shape of the undamaged structure. Generally, the first vibration mode is the easiest to measure and has the highest accuracy. Thus the fault coefficient can be
calculated using the first vibration mode from Eq. (35) as:
Note that Ki in Eq. (36) can be directly obtained from the interlayer stiffness of the i-th element of the shear structure, without the need to establish a structural FEM. It is known that the elementary stiffness matrix of the shear structure in local co-ordinates can be expressed as:
where E denotes the elastic modulus, I denotes the moment of inertia, L is the shear element length, and 12EI/L3 is also called the interlayer stiffness. Eqs. (36) and (37) indicate that the fault coefficient of the structure can be solved directly using the tested vibration mode and the interlayer stiffness of the individual element, without the need to construct a FEM of the entire structure.
For the multiple-defect case, more vibration modes besides the first vibration mode are needed to solve the fault coefficients. The number of vibration modes used should be greater than or equal to
the number of damage locations determined by the above damage localization approach. Without loss of generality, Eq. (32) can be expanded for multiple defects to
where the matrix S is also obtained by using the measured mode shapes instead of the theoretical mode shapes computed by FEM. This improvement can avoid establishing the overall FEM of the structure.
From Eq. (38), all the fault coefficients can be computed by:
In Eq. (40), the superscript “+” denotes the matrix generalized inverse. Finally, the damage severity of shear structures can be evaluated based on the calculation results of Eq. (40). Fig. 2 shows
the flow chart of the proposed algorithm to explain the process more clearly.
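The quantification stage can be sketched in a few lines of Python. The fragment below is a minimal illustration on a hypothetical three-storey model (the storey stiffnesses, masses, and assumed damage coefficients are inventions for the example, not data from this paper); it builds the sensitivity matrix of Eqs. (29) and (39) from the undamaged mode shapes and solves Eq. (40) with the pseudoinverse:

```python
import numpy as np

def storey_matrices(ks):
    """Elementary (storey) stiffness matrices Ki of a shear building,
    storeys numbered from bottom to top, fixed base."""
    n = len(ks)
    mats = []
    for i, k in enumerate(ks):
        Ki = np.zeros((n, n))
        Ki[i, i] = k
        if i > 0:
            Ki[i - 1, i - 1] = k
            Ki[i - 1, i] = Ki[i, i - 1] = -k
        mats.append(Ki)
    return mats

def modes(K, M):
    """Mass-normalised eigen-pairs (M diagonal)."""
    d = np.sqrt(np.diag(M))
    lam, V = np.linalg.eigh(K / np.outer(d, d))
    return lam, V / d[:, None]

# hypothetical 3-storey model; storeys 1 and 3 slightly damaged
ks, ms = np.array([600.0, 600.0, 600.0]), np.array([40.0, 40.0, 40.0])
true_eps = np.array([-0.10, 0.0, -0.05])

Kis = storey_matrices(ks)
K = sum(Kis)                                            # Eq. (22)
Kd = sum((1 + e) * Ki for e, Ki in zip(true_eps, Kis))  # Eq. (23)
M = np.diag(ms)

lam, phi = modes(K, M)        # stands in for the undamaged test data
lamd, _ = modes(Kd, M)        # stands in for the damaged test data

dlam = lamd - lam                                       # Eq. (30)
# sensitivity matrix, Eqs. (29)/(39): S[r, i] = phi_r^T Ki phi_r
S = np.array([[phi[:, r] @ Ki @ phi[:, r] for Ki in Kis]
              for r in range(len(ks))])
eps = np.linalg.pinv(S) @ dlam                          # Eq. (40)
print(np.round(eps, 3))
```

Because Eq. (32) is a first-order Taylor approximation, the recovered coefficients match the assumed values only approximately, and the agreement degrades as the damage becomes more severe.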
3 Validation by the Numerical Model
A 22-story shear frame structure shown in Fig. 3a is used to verify the feasibility of the presented approach. The stiffness and mass of each floor in Fig. 3a are k1=k2=1296, k3=⋯=k22=1024, m1=m2=72,
and m3=⋯=m22=64, respectively. The node numbers from the ground to the roof of this shear structure are 1, 2, ... , and 23, as shown in Fig. 3b. Two defect scenarios are simulated in this numerical
example. The first defect scenario assumes a 15% reduction in the interlayer stiffness of the fifth floor. The second defect scenario assumes that the stiffness of the 5th and 16th floors is reduced
by 20% and 15%, respectively. Note that the reduction of interlayer stiffness is achieved through the reduction of the stiffness parameter ki. For example, k5 = 1024 for the undamaged structure changes to k5 = 0.85 × 1024 = 870.4 for the damaged structure, which corresponds to the 15% reduction in the interlayer stiffness of the fifth floor in the first defect scenario. In this example, only the first and
second vibration modes are used for defect diagnosis. A 3% level of data noise is added to the vibration modes of the damaged structure to simulate the measurement errors. Tables 1 and 2 present the
eigen-frequencies and modal shapes obtained by the FEMs of the undamaged and damaged structures. Note that the node number is different from the floor number as shown in Fig. 3b. The node 1 in Table
2 corresponds to the fixed end of the structure. For this numerical example, the frequencies and modal shapes are obtained from the FEM by solving the eigenvalue problem in Eq. (1). The data with noise in these tables are used to simulate the corresponding values that would be obtained from a dynamic test in practice; for example, the data used for the experimental structure in the next section are actual test data.
Taking the first defect scenario without data noise as an example, the specific process of calculating the damage localization vector Δξ using the proposed method is as follows:
(1) Calculate the undamaged flexibility matrix F using Eq. (13) as:
The result of F obtained by Eq. (41) is shown in Table 3.
(2) Calculate the damaged flexibility matrix Fd using Eq. (17) as:
The result of Fd obtained by Eq. (42) is shown in Table 4.
(3) Calculate the flexibility matrix change ΔF by:
The result of ΔF obtained by Eq. (43) is shown in Table 5.
(4) Calculate the damage localization vector Δξ using Eq. (21) with lv=(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1)T. Fig. 4 gives the calculation result of the defect localization vector Δξ for the
first defect scenario by the exact data and the data with 3% noise. From Fig. 4, one can find that there is a mutation between the nodes numbered 5 and 6, which exactly corresponds to the fifth
floor. This means that the fifth floor is where the defect is located. For the second defect scenario, Fig. 5 gives the calculation results of the defect localization vectors by the exact data and
the data with 3% noise. From Fig. 5, one can find that there are two mutations, between the nodes numbered 5 and 6 and between those numbered 16 and 17, which exactly correspond to the 5th and 16th floors. This means that the 5th
and 16th floors are where the defects are located. For the above two defect scenarios, Table 6 presents the calculated severity of the damages using the data with and without noise. One can find from
Table 6 that these calculated damage severity values are relatively close to the assumed true values in the two defect scenarios. This indicates that the proposed method can successfully determine
the locations and extents of the defects in this numerical shear structure.
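The first defect scenario can be reproduced end to end with a short script. The following Python sketch is an independent re-implementation (not the authors' code): it builds the 22-storey model of Fig. 3, uses the first two computed modes of the undamaged and damaged structures in place of measured data, and carries out both diagnosis stages:

```python
import numpy as np

def shear_km(ks, ms):
    """Stiffness/mass matrices of a fixed-base shear building (bottom-up)."""
    n = len(ks)
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] += ks[i]
        if i + 1 < n:
            K[i, i] += ks[i + 1]
            K[i, i + 1] = K[i + 1, i] = -ks[i + 1]
    return K, np.diag(ms)

def modes(K, M):
    d = np.sqrt(np.diag(M))
    lam, V = np.linalg.eigh(K / np.outer(d, d))
    return lam, V / d[:, None]

# 22-storey model of Fig. 3 (storeys numbered from the ground up)
ks = np.array([1296.0, 1296.0] + [1024.0] * 20)
ms = np.array([72.0, 72.0] + [64.0] * 20)
ksd = ks.copy(); ksd[4] *= 0.85              # scenario 1: storey 5, 15% loss

lam, phi = modes(*shear_km(ks, ms))
lamd, phid = modes(*shear_km(ksd, ms))

# stage 1: localization using only the first two modes
m = 2
F = sum(np.outer(phi[:, r], phi[:, r]) / lam[r] for r in range(m))
Fd = sum(np.outer(phid[:, r], phid[:, r]) / lamd[r] for r in range(m))
lv = np.zeros(22); lv[-1] = 1.0              # unit load at the roof
dxi = (Fd - F) @ lv                          # Eq. (21)
jumps = np.abs(np.diff(np.concatenate(([0.0], dxi))))
storey = int(np.argmax(jumps)) + 1
print(storey)                                # the located damaged storey

# stage 2: severity of storey 5 from the first-frequency change, Eq. (36)
K5 = np.zeros((22, 22))
K5[3:5, 3:5] = ks[4] * np.array([[1.0, -1.0], [-1.0, 1.0]])
eps5 = (lamd[0] - lam[0]) / (phi[:, 0] @ K5 @ phi[:, 0])
print(round(float(eps5), 3))                 # close to the assumed -0.15
```

The exact (noise-free) modes are used here; in a test, the measured eigen-pairs would replace them directly, which is the point of the FEM-free formulation.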
4 Validation by Experimental Data
The proposed approach is validated again using the experimental data measured by Li from a three-story steel frame structure in reference [34]. The experimental structure, material parameters, and experimental process are described in detail in reference [34]. Fig. 6 provides the geometric model of this steel frame structure. As shown in Fig. 6a, the structure is composed of three steel plates and four rectangular columns. These components are welded together, so the structure can be simplified to the rigid shear system of Fig. 6b. The node numbers from bottom to top of this shear structure are 1, 2, 3, and 4. From vibration
testing, the first natural frequency and modal shape of the undamaged structure are f1 = 3.369 Hz and φ1 = (0.02118, 0.03922, 0.048427)T. The second modal data of the undamaged structure are f2 = 9.704 Hz and φ2 = (0.048758, 0.02031, −0.03923)T. The third modal data of the undamaged structure are f3 = 14.282 Hz and φ3 = (0.037936, −0.04866, 0.022852)T. Note that the conversion between the natural frequency fr and the aforementioned eigenvalue λr is λr = (2πfr)2.
Some defect scenarios have been tested on this three-story steel frame structure in reference [34]. For the first defect scenario, the cross-sectional width of the lower ends of the four columns for
the first floor has been reduced from 75 to 51.3 mm by cutting, as shown in Fig. 7a. The geometric dimensions before and after cutting are used to calculate the shear stiffness of the structure. The
formula for calculating the shear stiffness was given in [34]. The ratio between the shear stiffness difference before and after cutting and the shear stiffness of the intact structure is used as the
true value of the damage severity. For the first defect scenario, the true value of damage severity calculated based on the shear stiffness is 11.6%. For the second defect scenario, the
cross-sectional width of the lower ends of the four columns for the first floor has been cut from 75 to 37.46 mm as shown in Fig. 7b. The corresponding true value of damage severity calculated based
on the shear stiffness is 21.1%. The third damage scenario is to reduce the cross-sectional width of all column bottoms in the first floor from 75 to 37.46 mm, and to reduce the cross-sectional width
of all column bottoms in the second floor from 75 to 51.3 mm, resulting in a damage degree of 21% and 11% for the first and second floors, respectively. These three defect scenarios are listed in
Table 7.
For the first defect scenario, the measured first-order modal data are fd1 = 3.259 Hz and φd1 = (0.022735, 0.039331, 0.047594)T. The second-order modal data are fd2 = 9.485 Hz and φd2 = (0.049417, 0.017683, −0.03968)T. The third-order modal data are fd3 = 14.209 Hz and φd3 = (0.035798, −0.04982, 0.02379)T. Using only the first vibration mode, the specific process of calculating the damage
localization vector Δξ using the proposed method is as follows:
(1) Calculate the undamaged flexibility matrix using Eq. (13) as:
(2) Calculate the damaged flexibility matrix using Eq. (17) as:
(3) Calculate the change of flexibility matrix as:
(4) Calculate damage localization vector Δξ using Eq. (21) with lv=(0,0,1)T as:
For convenience, Table 8 gives the calculation results of the defect localization vector Δξ shown in Eq. (48). Note in Table 8 that the node number is different from the floor number as shown in Fig.
6b. The node 1 in Table 8 corresponds to the fixed end of the structure. Obviously, the displacement difference before and after damage under any load is always zero at this fixed end position. Thus
the value corresponding to node 1 is 0 in Table 8. From Table 8, one can find that the largest mutation occurs between the nodes numbered 1 and 2, which exactly corresponds to the first floor. This means that the first floor is the most likely location of the defect. For a more accurate damage diagnosis, the fault coefficients for all floors can be computed by using the three measured
frequencies with Eq. (40). The specific process of calculating the fault coefficients using the proposed method is as follows:
(1) Calculate the changes of the eigenvalues using Eq. (30) as:
(2) Calculate the eigenvalue sensitivity matrix S using Eqs. (29) and (39) as:
(3) Calculate the fault coefficients using Eq. (40) as:
From Eq. (51), the calculated values of the fault coefficients are ε1=−0.1185, ε2=0.0068, and ε3=−0.0050, respectively. Based on these calculated damage severity values, it can be clearly determined
that the first floor is damaged, while the second and third floors are not damaged since their calculated damage levels are very close to zero. The calculated damage severity value of the first floor
(11.85%) is very close to the true value (11.6%). This indicates that the proposed method can successfully determine the location and severity of the defect in the experimental steel shear structure.
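The localization step of this scenario can be checked directly from the quoted measurements. The following Python sketch uses only the first-mode data above (frequencies assumed to be in Hz, so that λ = (2πf)²) to form the one-mode flexibility matrices and the vector Δξ of Eq. (21):

```python
import numpy as np

# measured first-mode data of the undamaged and damaged (scenario 1) frame
f1, p1 = 3.369, np.array([0.02118, 0.03922, 0.048427])
fd1, pd1 = 3.259, np.array([0.022735, 0.039331, 0.047594])

lam1 = (2 * np.pi * f1) ** 2        # eigenvalue from natural frequency
lamd1 = (2 * np.pi * fd1) ** 2

F = np.outer(p1, p1) / lam1         # one-mode flexibility, Eq. (13)
Fd = np.outer(pd1, pd1) / lamd1     # one-mode flexibility, Eq. (17)

lv = np.array([0.0, 0.0, 1.0])      # unit load at the top floor
dxi = (Fd - F) @ lv                 # Eq. (21), cf. Table 8

# prepend the fixed end (node 1, zero displacement) and find the mutation
jumps = np.abs(np.diff(np.concatenate(([0.0], dxi))))
floor = int(np.argmax(jumps)) + 1
print(floor)                        # prints 1: the first floor
```

The largest jump occurs between nodes 1 and 2, reproducing the conclusion drawn from Table 8.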
For the second defect scenario, the measured first-order modal data are fd1 = 3.113 Hz and φd1 = (0.024117, 0.039364, 0.046881)T. The second-order modal data are fd2 = 9.302 Hz and φd2 = (0.049711, 0.016267, −0.03992)T. The third-order modal data are fd3 = 14.136 Hz and φd3 = (0.035067, −0.05014, 0.024199)T. Using only the first vibration mode, the damage localization vector Δξ obtained by the
proposed method is given in Table 9. From Table 9, one can find that the largest mutation occurs between the nodes numbered 1 and 2, which exactly corresponds to the first floor. This means that the first floor is the most likely location of the defect. For a more accurate damage diagnosis, the fault coefficients for all floors can be computed by the proposed method as: ε1=−0.2608, ε2=−0.0061,
and ε3=0.0256, respectively. Based on these calculated damage severity values, it can be clearly determined that the first floor is damaged, while the second and third floors are not damaged since
their calculated damage levels are very close to zero. The calculated damage severity value of the first floor (26.08%) is close to the true value (21.1%). This indicates that the proposed method can
successfully determine the location and severity of the defect in the experimental steel shear structure.
For the third defect scenario, the measured first-order modal data are fd1 = 3.076 Hz and φd1 = (0.023253, 0.039779, 0.046968)T. The second-order modal data are fd2 = 9.192 Hz and φd2 = (0.051655, 0.014387, −0.03813)T. The third-order modal data are fd3 = 13.660 Hz and φd3 = (0.032448, −0.05058, 0.026788)T. Using only the first vibration mode, the damage localization vector Δξ obtained by the
proposed method is given in Table 10. From Table 10, it can be observed that there is a large mutation between nodes 1 and 2 and a smaller mutation between nodes 2 and 3, which correspond to the first and second floors, respectively. This means that the first and second floors are the most likely defect locations. Note that this structure has only three floors, so the mutation feature is not as obvious as in structures with many floors. For a more accurate damage diagnosis, the fault coefficients can be further computed by the proposed method as: ε1=−0.2145, ε2=
−0.1053, and ε3=−0.0230, respectively. Based on these calculated fault coefficients, it can be clearly determined that the first and second floors are damaged, while the third floor is not damaged
since its calculated damage level is close to zero. The calculated damage severity values of the first and second floors (21.45% and 10.53%) are close to the true values (21.1% and 11.6%). This
indicates that the proposed method can successfully identify the location and severity of multiple defects in the structure.
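As a cross-check, the multi-defect mutation pattern of this scenario can also be reproduced from the quoted first-mode data (frequencies assumed to be in Hz):

```python
import numpy as np

# first-mode data: undamaged structure vs. third defect scenario
f1, p1 = 3.369, np.array([0.02118, 0.03922, 0.048427])
fd1, pd1 = 3.076, np.array([0.023253, 0.039779, 0.046968])

F = np.outer(p1, p1) / (2 * np.pi * f1) ** 2      # Eq. (13), one mode
Fd = np.outer(pd1, pd1) / (2 * np.pi * fd1) ** 2  # Eq. (17), one mode
dxi = (Fd - F) @ np.array([0.0, 0.0, 1.0])        # Eq. (21), cf. Table 10

# inter-storey jumps, with the fixed end (zero displacement) prepended
jumps = np.abs(np.diff(np.concatenate(([0.0], dxi))))
print(np.argsort(jumps)[::-1] + 1)   # floors ordered by mutation size: [1 2 3]
```

The jump is largest at the first floor and next largest at the second, matching the large and small mutations read from Table 10.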
In this paper, a new method of defect localization and quantitative evaluation is developed for the diagnosis of shear-type structural defects. The greatest advantage of the proposed method is that
only a few low-frequency vibration modes of the shear structure need to be measured for defect diagnosis, without requiring a FEM of the structure. The proposed method completes the task of damage
diagnosis through the first stage of defect localization and the second stage of defect quantitative evaluation. The proposed method has been successfully verified on a numerical model and an
experimental model. According to the computation results, some conclusions can be summarized as follows. (1) For the numerical example, the location and severity of defects in the shear structure can
be correctly diagnosed using only the first two vibration modal data. The proposed method can successfully identify defects in the structure even under the interference of 3% level noise. (2) For the
experimental example, the most obvious mutation location in the damage location vector also corresponds to the location of the defect in the shear structure. The improved frequency sensitivity method
can further accurately diagnose the damage in the shear structure and obtain its severity. (3) The method proposed in this paper requires neither an overall FEM of the structure nor complex mathematical operations during implementation, so it is particularly suitable for engineering applications. It has been shown that the proposed method may be very effective for defect
diagnosis of shear-type structures.
Acknowledgement: None.
Funding Statement: This work is supported by the Zhejiang Public Welfare Technology Application Research Project (LGF22E080021), Ningbo Natural Science Foundation Project (202003N4169), Natural
Science Foundation of China (11202138, 52008215), the Natural Science Foundation of Zhejiang Province, China (LQ20E080013), and the Major Special Science and Technology Project (2019B10076) of
“Ningbo Science and Technology Innovation 2025”.
Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: C.L., Q.Y.; data collection: Q.Y., X.P.; analysis and interpretation of results: C.L.,
Q.Y.; draft manuscript preparation: C.L., Q.Y., X.P. All authors reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: The data used to support the findings of this study are included within the article and also available from the corresponding author upon request.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
Cite This Article
APA Style
Li, C., Yang, Q., Peng, X. (2024). Fault identification for shear-type structures using low-frequency vibration modes. Computer Modeling in Engineering & Sciences, 138(3), 2769-2791. https://doi.org/10.32604/cmes.2023.030908
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Re: st: GLLAMM help
From: Stas Kolenikov <[email protected]>
To: [email protected]
Subject: Re: st: GLLAMM help
Date: Sat, 2 Oct 2004 21:06:03 -0400
_pi is the number pi. I think this is the variance of logistic
distribution, although it might be _pi^2/3, I am not quite sure.
Again, I would be happy with the fixed effects for the size, as long
as you only have three levels here. By making it fixed effects, you
would also substantially simplify the work for -gllamm-, so that it
will run much faster.
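For what it's worth, the constant in question can be pinned down: the variance of the standard logistic distribution is π²/3 ≈ 3.29, while π²/6 is, for comparison, the variance of the standard Gumbel (extreme-value) distribution that turns up in complementary log-log models. A quick numerical check of the former, sketched in Python rather than Stata:

```python
import math

# Standard logistic density f(x) = e^(-x) / (1 + e^(-x))^2.
def logistic_pdf(x):
    e = math.exp(-abs(x))        # use symmetry to avoid overflow
    return e / (1.0 + e) ** 2

# Trapezoidal estimate of the variance, i.e. the integral of x^2 * f(x)
# (the mean is 0, so no centering term is needed).
def logistic_variance(lo=-50.0, hi=50.0, n=100_000):
    h = (hi - lo) / n
    total = 0.5 * (lo * lo * logistic_pdf(lo) + hi * hi * logistic_pdf(hi))
    total += sum((lo + i * h) ** 2 * logistic_pdf(lo + i * h)
                 for i in range(1, n))
    return total * h

print(logistic_variance())       # about 3.2899
print(math.pi ** 2 / 3)          # about 3.2899
```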
On Sat, 02 Oct 2004 23:52:16 +0000, Dana Shills <[email protected]> wrote:
> Stas
> Size refers to firm sizes categories(small/medium/large)..I have
> observations on firms that are classified into size groups as well as
> industries. So i am trying to see how much of the variance inY is explained
> by variance due to country and size(or industry) (and their interactions as
> well).
> In the total variance you mention, what is _pi^2/6? Im sorry if this is too
> basic a question but could you at least direct me to some elementary
> reference material..I have only used SAS's PROC MIXED for this kind of
> variance component analysis before.
> Dana
Stas Kolenikov
Learn Numbers – The Good Parts of JavaScript and the Web
The "Numbers" Lesson is part of the full The Good Parts of JavaScript and the Web course featured in this preview video. Here's what you'd learn in this lesson:
Numbers are simpler in JavaScript than other programming languages because they are represented by a single type. Sharing a few of their shortcomings, Doug describes how numbers behave in JavaScript.
Doug then talks about the Math object and the various operations that can be performed on JavaScript numbers.
Transcript from the "Numbers" Lesson
>> [MUSIC]
>> Douglas Crockford: Everything in JavaScript is an object, so numbers, booleans, strings, arrays, dates, regular expressions and functions, they're all objects. So, let's look at numbers. Numbers are objects. It's a much simpler number system than you have in Java in that we don't have ints and integers; you don't have either of those, you just have numbers.
We make numbers with number literals. All these number literals refer to the same object.
>> Douglas Crockford: There's only one number type in JavaScript which is actually a very good thing and there are no integer types which is something you have to get used to but it's actually a good
thing too.
The problem is that the one number type we have is the wrong one. It's based on 64-bit binary floating point from the IEEE-754 standard which is strangely called double in Java and other languages.
Anybody care to know why it's called double? Why they picked such a silly name?
It's something that comes from Fortran. Fortran had integer and real and later they added double precision, which was two reals put together in order to give you more precision. And C took Fortran's
double precision and shortened it to double. And everyone else has been using double ever since then.
So we don't have ints and I'm glad we don't have ints cuz I hate ints. Ints have some really strange properties. For example, we can have two ints which are both greater than zero. We can add them
together and we can get results that are smaller than the original numbers, which is insane and inexcusable, and how can you have any confidence in the correctness and the perfection of your system
if it's built on a number type which can do such insanely ridiculous things?
So JavaScript does not have this defect in it which I think is brilliant. So one problem with computer arithmetic in general is that the Associative Law will not hold, and that's because computer
numbers are necessarily finite and real numbers aren't. So in many cases we're only dealing with approximations.
And when the values are approximate, then associativity doesn't hold. Now if you just confine to the integer space, JavaScript integers go up to around 9 quadrillion, which is pretty big. 9
quadrillion is bigger than the national debt, so it's big, right? That's big. So as long as your integers are smaller than 9 quadrillion they work out exactly.
When the integers get above 9 quadrillion they don't do the crazy wrap around thing that ints do, they just get fuzzy. So if I take a number above 9 quadrillion and add 1 to it, it's like I added 0
to it, which is not good but it's much less bad than what ints do.
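The "about 9 quadrillion" threshold is 2^53 = 9,007,199,254,740,992, the largest span over which IEEE doubles represent every integer exactly. The fuzziness is not specific to JavaScript; any language using binary64 shows it, here sketched in Python:

```python
LIMIT = 2.0 ** 53            # 9007199254740992.0, i.e. about 9 quadrillion

# Below the limit, integer-valued doubles behave exactly.
print(LIMIT - 1 + 1 == LIMIT)   # True

# At the limit, adding 1 is like adding 0: the result rounds back.
print(LIMIT + 1 == LIMIT)       # True

# The spacing between representable values here is 2, so adding 2 works.
print(LIMIT + 2 == LIMIT)       # False
```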
And because computer arithmetic can be approximate, then there are identities that we're used to thinking about which don't hold. So you need to be aware of that and cautious. So the most reported
bug for JavaScript is that 0.10 + 0.20 is not equal to 0.30. And that's because we're trying to represent decimal fractions in binary floating point.
And binary floating point cannot accurately represent most of the decimal fractions. It can only approximate them, but it approximates them with infinite repeating bit patterns, but we're not allowed
to have infinitely long numbers. And so they truncate, and so every number is gonna be slightly wrong. Which is only a problem if you're living on a planet that uses the decimal system.
But on such a planet, you're counting people's money using this. When you're adding people's money, they have a reasonable expectation you're gonna get the right sum. And we're not guaranteed to get
the right sum with binary floating point, which is a huge problem.
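Both surprises above, the 0.1 + 0.2 bug report and the failure of associativity, are properties of IEEE-754 binary floating point rather than of JavaScript itself; the same doubles in Python behave identically:

```python
# 0.1 and 0.2 have no exact binary representation, so the sum
# carries a tiny rounding error.
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False

# Rounding also breaks associativity: grouping changes the result.
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))   # False
```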
>> Douglas Crockford: Numbers are objects, and so numbers have methods.
You don't have to box them in order to get object behavior. Every number is already an object. Every number inherits from number.prototype. So if we wanted to add new methods to numbers, we can do
that by adding them to number.prototype. This is not something that applications should ever do, but it is a useful thing for libraries to do, and in fact this is how we evolve the language.
So we can add new methods to new versions of the standard and libraries can back fill old browsers, and old implementations with the new stuff as long as new methods can be implemented in JavaScript.
Numbers are first class objects which means that a number can be stored in a variable, can be passed as a parameter, can be returned from a function and it can be stored in an object.
And because numbers are themselves objects they can have methods. JavaScript has made the same mistake that Java made in having a separate math object or math container for keeping the higher
elementary functions. This was done in Java anticipating that in the future there might be very low memory configurations and they'd wanna be able to remove the math functions.
That didn't happen because Moore's Law kept cranking on memory capacity so that turned out not to have been a good strategy. But it wouldn't have worked anyway because you'd be throwing away
essential things like floor. There's no good way to get the integer part of a number if you get rid of the floor function so it couldn't have worked.
There are also some constants stored in the math object as well. So, one of the worst, or one of the things that we get from the IEEE format is something called NaN, which stands for Not a Number.
It's the result of confusing or erroneous operations. For example, if you're trying to divide zero by zero the result is NaN.
NaN is toxic, which means that any arithmetic operation with NaN as an input will have NaN as an output. And despite the fact that NaN stands for Not a Number, it is a number. If you ask JavaScript,
what is the type of NaN? It says number and it's right.
>> Douglas Crockford: The thing I hate about NaN is that NaN is not equal to anything including NaN, so NaN equal NaN is false. Which bugs the hell out of me and even worse than that is that NaN not
equal NaN is true. Which I hate even more. So if you want to find out if something is NaN, there is a global isNaN function.
And you can pass NaN to it and it will return true, which is good. Unfortunately, isNaN also does type coercion. So if you pass it a string like hello world, it'll try to convert the string into a
number. The number that hello world turns into is NaN, so hello world is NaN, which is not true.
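NaN's strange equality rules are likewise mandated by IEEE-754 and appear in every compliant language. A Python sketch (note that Python's math.isnan, unlike JavaScript's global isNaN, rejects strings instead of coercing them, so it avoids the "hello world is NaN" trap):

```python
import math

nan = float("nan")

print(nan == nan)        # False: NaN is not equal to anything, even itself
print(nan != nan)        # True
print(nan + 1 != nan)    # True, even though the sum is also NaN

# The reliable check is an explicit predicate:
print(math.isnan(nan))   # True
```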
>> Douglas Crockford: So ever since Fortran, we've been writing statements that look like this, x = x + 1, which is mathematical nonsense. So ALGOL got this right. ALGOL came up with an assignment
operator so this didn't look so ridiculous, and BCPL did the same thing as ALGOL which got it right.
Unfortunately Thompson liked this better, and so we reverted back to it and we have not evolved away from this since. So we're stuck with this and it looks crazy, right? Because it looks like an
equation but there's no value of x which equals x + 1 right. Except it turns out if you're using binary floating point, there's a value called infinity.
And, if you add one to infinity, you get infinity so this actually is an equation. There is a value of X for which this is true. And not just that, there's another value called Number.MAX_VALUE which
is one followed by 308 digits, that's a really big number. And if you add one to the biggest number that JavaScript knows you would think that would be infinity but it isn't it, it'll be MAX_VALUE,
so it holds.
In fact that is true for every number above 9 quadrillion. So there's a lot of values for which this holds.
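Those fixed points of x = x + 1 are easy to check in any IEEE-754 language; in Python, sys.float_info.max plays the role of Number.MAX_VALUE:

```python
import sys

inf = float("inf")
print(inf + 1 == inf)              # True: infinity plus 1 is infinity

big = sys.float_info.max           # about 1.8e308, Number.MAX_VALUE's analogue
print(big + 1 == big)              # True: the 1 vanishes in rounding

# The smallest positive float satisfying x + 1 == x is 2**53.
print(2.0 ** 53 + 1 == 2.0 ** 53)  # True
```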
>> Douglas Crockford: But not NaN. Even though NaN plus one is NaN, NaN is not equal to NaN. So I hate that and NaN, I hate that even more.
How does acceleration affect motion? - PhysicsGoEasy
Acceleration significantly changes the nature of an object’s motion by altering its velocity over time. This change can happen in three primary ways:
Ways acceleration affects motion
1. Increasing speed
2. Decreasing speed
3. Changing direction
Let’s explore each of these effects in more detail:
Increasing Speed
When an object experiences positive acceleration in the same direction as its motion, its speed increases. This is commonly observed when a car accelerates from rest or when an object falls under the
influence of gravity.
The equation for velocity as a function of time with constant acceleration is:
$v = v_0 + at$
• $v$ is the final velocity
• $v_0$ is the initial velocity
• $a$ is the acceleration
• $t$ is the time elapsed
Decreasing Speed
Negative acceleration, also known as deceleration, causes an object’s speed to decrease. This occurs when a force acts in the opposite direction of the object’s motion, such as when applying
brakes to a moving vehicle.
The same equation applies with a negative acceleration value; writing $a$ for the magnitude of the deceleration, this gives:

$v = v_0 - at$
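A small numerical sketch of both cases (the numbers are illustrative, not from any specific problem):

```python
def final_velocity(v0, a, t):
    """v = v0 + a*t for constant acceleration a (negative a = deceleration)."""
    return v0 + a * t

# Speeding up: 5 m/s with a = +2 m/s^2 for 3 s
print(final_velocity(5.0, 2.0, 3.0))    # 11.0 m/s

# Slowing down: 20 m/s with a = -4 m/s^2 for 3 s
print(final_velocity(20.0, -4.0, 3.0))  # 8.0 m/s
```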
Changing Direction
Acceleration can also change the direction of motion without necessarily affecting speed. This is known as centripetal acceleration, which is responsible for circular motion. The acceleration vector
points towards the center of the circle.
The equation for centripetal acceleration is:
$a_c = \frac{v^2}{r}$
• $a_c$ is the centripetal acceleration
• $v$ is the velocity
• $r$ is the radius of the circular path
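For instance (illustrative values), an object moving at 20 m/s around a circle of radius 50 m:

```python
def centripetal_acceleration(v, r):
    """a_c = v^2 / r, directed toward the center of the circle."""
    return v ** 2 / r

print(centripetal_acceleration(20.0, 50.0))  # 8.0 m/s^2
```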
Effects on Distance Traveled
Acceleration also affects the distance an object travels. The equation for displacement with constant acceleration is:
$x = x_0 + v_0t + \frac{1}{2}at^2$
• $x$ is the final position
• $x_0$ is the initial position
• $v_0$ is the initial velocity
• $a$ is the acceleration
• $t$ is the time elapsed
This equation shows that the distance traveled increases quadratically with time when there’s constant acceleration, as opposed to linearly when there’s constant velocity.
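Tabulating the displacement for an object starting from rest, with an illustrative a = 2 m/s², makes the quadratic growth visible:

```python
def displacement(x0, v0, a, t):
    """x = x0 + v0*t + (1/2)*a*t^2 for constant acceleration."""
    return x0 + v0 * t + 0.5 * a * t ** 2

for t in range(5):
    print(t, displacement(0.0, 0.0, 2.0, t))
# t = 0..4 gives 0, 1, 4, 9, 16 m: distance grows with t^2, not t.
```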
In summary, acceleration is a fundamental concept in physics that describes how the velocity of an object changes over time. It can increase or decrease speed, change direction, and significantly affect the distance traveled.
An object travels North at 8 m/s for 9 s and then travels South at 5 m/s for 2 s. What are the object's average speed and velocity? | HIX Tutor
An object travels North at 8 m/s for 9 s and then travels South at 5 m/s for 2 s. What are the object's average speed and velocity?
Answer 1
Distance travelled north: $\vec{AB} = 8 \cdot 9 = 72 \text{ } m$

Distance travelled south: $\vec{CD} = 5 \cdot 2 = 10 \text{ } m$

Total time: $9 + 2 = 11 \text{ } s$

Average speed: $\dfrac{72 + 10}{11} \approx 7.45 \text{ } \frac{m}{s}$

Average velocity: $v_a = \dfrac{72 - 10}{11} \approx 5.64 \text{ } \frac{m}{s}$, directed north.
Answer 2
The average speed is the total distance traveled divided by the total time taken. To find the total distance traveled, calculate the distance traveled while moving north and while moving south
separately, and then add them together. The average velocity is the total displacement divided by the total time taken. To find the total displacement, subtract the distance traveled south from the
distance traveled north, taking into account the direction. Then, divide the total displacement by the total time taken to find the average velocity.
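Both results can be checked in a few lines (taking north as the positive direction; the numbers come straight from the problem statement):

```python
d_north = 8 * 9          # 72 m travelled north
d_south = 5 * 2          # 10 m travelled south
t_total = 9 + 2          # 11 s

avg_speed = (d_north + d_south) / t_total      # total distance / total time
avg_velocity = (d_north - d_south) / t_total   # net displacement / total time

print(round(avg_speed, 2))     # 7.45 m/s
print(round(avg_velocity, 2))  # 5.64 m/s, directed north
```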
Between 14 and 20 January 2011 I am visiting Andrea Giacobbe and Francesco Fassò at the Mathematics department in Padova to continue the work we did during my previous visit.
In 2010 I continued working on research problems on dynamical systems and the geometry of integrable Hamiltonian fibrations. Here is a short list of 2010 events. Four papers, finished in the previous
year, were published in 2010. Two more papers, one on a covering spaces approach to fractional Hamiltonian monodromy and the other on circadian rhythms, were finished and are now under review.
Started a collaboration with Andrea Giacobbe which in 2010 brought Andrea once to Groningen and me once to Padova.
This week (22-26 November) I am visiting the Mathematics department in Padova to work with Andrea Giacobbe and Francesco Fassò; I am also giving a seminar on “Generalized Hamiltonian monodromy and
applications”. The Mathematics department is, since a few years ago, housed in ‘Torre Archimede’. Being Greek, I of course like the name. But since Archimedes had never been in Padova, ‘Torre
Levi-Civita’ would have probably been a more appropriate name.
Teaching period is over. During this period I was invited to give two talks but I didn't have until now the opportunity to write about them here. On both occasions I talked about my favorite subject
at the moment: integrable Hamiltonian systems with codimension-1 singularities. Firstly, I was invited to give a talk at the second meeting of the GDR Quantum Dynamics that took place 24-26 March
2010 in Dijon. Besides my presentation, I had the opportunity to discuss with my colleagues in Dijon (Dominique Sugny, Pavao Mardešić, Michèle Pelletier, and Hans Jauslin) and with Dmitrií Sadovskií.
Three new papers have been accepted for publication recently. First, a review paper with Dmitrií Sadovskií has been accepted for publication in Reviews of Modern Physics. The subject of the review is
the recent advances in the study of the hydrogen atom in electric and magnetic fields that were made possible by adopting the point of view of global analysis of (near) integrable Hamiltonian
systems. A paper on bidromy with Dominique Sugny has been already published in Journal of Physics A.
I'm just back from my vacation trip to the very beautiful Iceland, so the new year (in terms of work) starts a little late. I have realized that time plays very strange tricks with my memories and as
I grow older things become worse. In particular, when I look back I tend to underestimate the things I have achieved in any given time period and how much I have developed as a researcher and as a
The main requirement for numerical code is that it must be fast. This means that numerical code is usually written in some “low-level” language such as C or Fortran. Such code can be highly optimized
but low-level languages are not very expressive (where expressiveness is defined as the inverse of the number of lines of code that is necessary in order to express a program). Thus they are not very
well suited to other tasks, such as data I/O, command line argument parsing, graphics, and user interaction that are peripheral to, but nevertheless important for, the computation and where speed
does not play the primary role.
I have been unhappy for some time now with the function that I had written in Mathematica for computing the normal form of a Hamiltonian system. I used the standard Lie series approach but I had
written the code using a traditional approach with several Do loops. Except for the fact that this kind of code is completely contrary to Mathematica's philosophy I found the code hard to read and
inefficient to the degree at least that Mathematica can be efficient.
On June 23 2009 I gave a talk on «Unstable attractors in pulse coupled oscillator networks» in the workshop Brain Waves. The workshop is organized in the Lorentz Center by Stan Gielen, Stephan van
Gils, Michel van Putten, and David Terman.
The workshop on Monodromy and geometric phases in classical and quantum mechanics took place June 15-19 2009 at the Lorentz Center in Leiden. I enjoyed preparing the workshop and bringing together
people from different fields. The talks were great and I would like to thank the speakers for making this workshop a success. My coorganizers Jonathan Robbins, Dmitrií Sadovskií, and Holger Waalkens
shared the weight. And I would like to thank also Martje Kruk and Gerda Filippo from the Lorentz Center for taking care of all the practical aspects of the workshop.
On May 12 2009 I give a talk in the conference Singularities of Planar Vector Fields, Bifurcations and Applications on «Hamiltonian monodromy». The conference is organized by Pavao Mardesic and
Christiane Rousseau.
Between 22-24 April 2009 I visit the group of Carles Simò in Barcelona. I will give a presentation in the UB-UPC DSG Seminar on «The hydrogen atom in electric and magnetic fields».
LMSDiscreteScheduler is a linear multistep scheduler for discrete beta schedules. The scheduler is ported from and created by Katherine Crowson, and the original implementation can be found at
class diffusers.LMSDiscreteScheduler
< source >
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: Union = None use_karras_sigmas: Optional = False prediction_type: str
= 'epsilon' timestep_spacing: str = 'linspace' steps_offset: int = 0 )
A linear multistep scheduler for discrete beta schedules.
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving.
< source >
( order t current_order )
Compute the linear multistep coefficient.
< source >
( sample: Tensor timestep: Union ) → torch.Tensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
< source >
( begin_index: int = 0 )
Sets the begin index for the scheduler. This function should be run from pipeline before the inference.
< source >
( num_inference_steps: int device: Union = None )
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
< source >
( model_output: Tensor timestep: Union sample: Tensor order: int = 4 return_dict: bool = True ) → SchedulerOutput or tuple
Returns: SchedulerOutput or tuple
If return_dict is True, SchedulerOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
class diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteSchedulerOutput
< source >
( prev_sample: Tensor pred_original_sample: Optional = None )
• prev_sample (torch.Tensor of shape (batch_size, num_channels, height, width) for images) — Computed sample (x_{t-1}) of previous timestep. prev_sample should be used as next model input in the
denoising loop.
Output class for the scheduler’s step function output.
The vectorial features of the boundary-bulk correspondence in 3D Chern insulators - Mapping Ignorance
Some materials have special universal properties protected against perturbations. Such properties are theoretically described by topology, a branch of mathematics concerned with the properties of
geometrical objects that are unchanged by continuous deformations. So-called topological insulators are electronic materials that have a bulk band gap like an ordinary insulator but have conducting
states on their boundaries, i.e., edges or surfaces. The conducting surface is not what makes topological insulators unique, but the fact that it is protected due to the combination of spin-orbit
interactions and time-reversal symmetry.
A topological invariant is a geometrical quantity that remains unchanged by continuous deformations. Topological invariants have found widespread applications in physics, chemistry, and materials
science. One of the best known topological invariants in condensed matter physics is the Chern number.
The definition of the Chern number is not exactly simple. But it could be enough to understand the Chern number as an integer that characterizes the topology of filled bands in two-dimensional
lattice systems. A band with a non-zero Chern number is topologically non-trivial. When the highest occupied band is non-trivial and completely filled, the state is called a topological insulator. A
material whose topological phases can be characterized by the Chern number is called a Chern insulator, a class of topological insulators.
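To make concrete what "an integer that characterizes the topology of filled bands" means, the Chern number of a simple two-band lattice model can be computed numerically. The sketch below uses the Qi–Wu–Zhang model and the standard Fukui–Hatsugai–Suzuki lattice method; neither appears in the article, so the model and parameter values are illustrative choices, not the photonic systems discussed here.

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz_hamiltonian(kx, ky, m):
    """Qi-Wu-Zhang two-band Chern-insulator model (illustrative)."""
    return np.sin(kx) * SX + np.sin(ky) * SY + (m + np.cos(kx) + np.cos(ky)) * SZ

def chern_number(m, N=24):
    """Chern number of the lower band via the Fukui-Hatsugai-Suzuki lattice method."""
    ks = np.linspace(0, 2 * np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(qwz_hamiltonian(kx, ky, m))
            u[i, j] = vecs[:, 0]          # eigenvector of the lower band
    total_flux = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # product of link variables around one plaquette of the Brillouin-zone mesh
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            total_flux += np.angle(loop)  # Berry flux through the plaquette
    return round(total_flux / (2 * np.pi))

print(chern_number(1.0))  # topologically non-trivial phase: |C| = 1
print(chern_number(3.0))  # trivial phase: C = 0
```

The lattice method returns an exact integer even on a fairly coarse mesh, which is why rounding the accumulated Berry flux is safe here.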
But when we make Chern insulators interact with light, something interesting happens. Electrons are spin-1/2 particles, whereas photons are spin-1 particles. The distinct spin difference between
these two kinds of particles means that their corresponding symmetry is fundamentally different. An electronic topological insulator is protected by the electron’s spin-1/2 (fermionic) time-reversal
symmetry; however, due to photon’s spin-1 (bosonic) time-reversal symmetry, the same protection does not exist under normal circumstances for a photonic topological insulator. In other words, we
could have a Chern photonic insulator with broken time-reversal symmetry. Time reversal symmetry broken topological phases provide gapless surface states protected by topology, regardless of
additional internal symmetries, spin or valley degrees of freedom. Thus, the topology of the propagation of light in photonic crystals has been the subject of much recent attention.
Formally, a 3D Chern insulator is a time-reversal symmetry broken topological phase characterized by a Chern vector C = (Cx, Cy, Cz) that can support anomalous surface states on surfaces
parallel to the Chern vector. In 2D, the Chern vector is always orthogonal to the plane of the system and, consequently, it can be regarded as a scalar quantity: the Chern number we mentioned before.
In contrast with 2D or layered materials, where the vectorial nature of the boundary-bulk correspondence does not show up, the possibility of orienting Chern vectors in space, demonstrated in photonic
crystals, opens up the possibility of constructing domain walls between different orientations; thus a definition of a proper vectorial boundary-bulk correspondence is required.
Do individual components of the Chern vector contribute in a linear independent way to surface modes? How do multiple photonic surface modes hybridize with each other with respect to the different
orientations of the Chern vector? Is there an easy way of counting surface modes or predicting their direction? Finding answers to these questions is a challenging theoretical exercise, lacking a 2D
analogy. In search of those answers, a team of researchers explored ^1 the possible interfaces between 3D photonic Chern insulators, focusing on possible changes in Chern vector orientation.
The researchers found that, for a 3D Chern insulator crystal, the Chern vectors across the interface no longer need to be parallel or anti-parallel to each other, which may render the scalar analogy
with 2D difficult to apply. They demonstrate that vectorial features of the boundary-bulk correspondence need to be taken into account to correctly predict the number and the propagation direction of
topological photonic surface modes, thus completing the vectorial boundary-bulk correspondence picture for 3D Chern insulator photonic crystals.
Beyond theoretical concerns, constructing interfaces with Chern vectors of different orientations is relevant for practical applications. Photonic 3D Chern insulator interfaces are a potential
platform for unidirectional optical channels protected from backscattering. Furthermore, photonic crystal architectures with multiple Chern vectors of different orientation could enable new ways to
control light propagation.
More on the subject:
3D topological photonic crystals with Chern vectors at will
Author: César Tomé López is a science writer and the editor of Mapping Ignorance
Disclaimer: Parts of this article may have been copied verbatim or almost verbatim from the referenced research papers.
1. Chiara Devescovi, Mikel García-Díez, Barry Bradlyn, Juan L. Mañes, Maia G. Vergniory, Aitzol García-Etxarri (2022) Vectorial Bulk-Boundary Correspondence for 3D Photonic Chern Insulators Adv.
Optical Mater. doi: 10.1002/adom.202200475
Learning The Origin Of "Duality"
Yesterday I learned that the person who first used the term "duality" in connection with linear programming, indeed with anything in economics, was John von Neumann in a private conversation with
George Dantzig, the "father of linear programming," in 1947. That was the year Dantzig published his paper showing the simplex method for solving linear programming problems, both their primals and
their duals. Von Neumann wrote a paper on it the same year but did not publish it; it appeared only in his Collected Papers in 1963, six years after he died.
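The duality theorem von Neumann conjectured is easy to check numerically on a toy instance. The sketch below (illustrative numbers, not tied to anything Dantzig or von Neumann actually wrote) solves a small primal LP and its dual with scipy and confirms strong duality: the two optimal values coincide.

```python
import numpy as np
from scipy.optimize import linprog

# Primal:  min c^T x   s.t.  A x >= b,  x >= 0
# Dual:    max b^T y   s.t.  A^T y <= c,  y >= 0
c = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

# linprog expects <= constraints, so A x >= b becomes -A x <= -b
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)
# linprog minimizes, so max b^T y becomes min -b^T y
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)  # strong duality: both optimal values are 6.6
```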
It is not really surprising that it would be von Neumann as there are deep links between optimization and game theory. This would be seen in 1951 when David Gale, Harold Kuhn, and Albert
Tucker invented nonlinear programming. Kuhn and Tucker would, of course, prove the Kuhn-Tucker theorem that deeply involves duality. 1951 would also be the year that Tucker's student, John Nash,
completed his PhD thesis with his ultimately Nobel-Prize-winning paper on non-cooperative game theory. An irony is that while seeing the first prisoner's dilemma experiments at RAND soon after, Nash
would become disillusioned with game theory and economics, it would be Tucker who would coin the term "prisoner's dilemma" in an address to the American Psychological Association.
While the term was introduced into linear programming by von Neumann, the concept itself had already been floating around in economics, with its first appearance being in 1886 when Antonelli first
derived an indirect utility function. It would be further developed by Harold Hotelling in a famous paper in 1932 with "Hotelling's Lemma," and the 1942 paper by Roy in which he introduced his famous identity.
While I had previously known bits of this, what I learned yesterday came from an ongoing project of mine to read the entire 3rd edition of the New Palgrave Dictionary of Economics, all 20 volumes of
it, nearly 15,000 pages and about 3800 entries. I am doing that because I am one of the coeditors of the fourth edition, along with Matias Vernengo and Esteban Perez. I have learned a great deal from
this reading, which has now gotten me through entries for the letter D. I have a ways to go, but I am genuinely impressed by the job that Steve Durlauf and Larry Blume did editing the third edition.
The original Palgrave Dictionary of Economics was published in 1893. The current one still has entries from that original one by people like Edgeworth and Wicksteed. Some of these oldies are really
quite fascinating. But the cumulative effect of reading all of it is really beginning to kind of blow my mind.
Oh, and returning to the original topic, for anyone who does not know, it has long been viewed by many as being one of the greatest mistakes and wrongs that when the Nobel Prize was given for linear
programming, Dantzig was not one of the recipients, who were Leonid Kantorovich, the only actual Soviet economist to win a Nobel, and Tjalling Koopmans. There was room for Dantzig (especially with
von Neumann dead), but somehow he got left out. But then, hey, they never gave it to Joan Robinson either.
Barkley Rosser
11 comments:
Cue Egmont in 5, 4, 3...
The Palgrave Dictionary ― a comprehensive collection of False-Hero-Memorials
Comment on Barkley Rosser on ‘Learning The Origin Of “Duality”’
Barkley Rosser reports: “Yesterday I learned that the person who first used the term ‘duality’ in connection with linear programming, indeed with anything in economics, was John von Neumann in a
private conversation with George Dantzig in 1947, the ‘father of linear programming.’ That was the year Dantzig published his paper showing the simplex method for solving linear programming
problems, both their primals and their duals. … It is not really surprising that it would be von Neumann as there are deep links between optimization and game theory.”
Yes, indeed, and the deep link is in the Walrasians axioms. The hardcore propositions of microeconomics are given with: “HC1 economic agents have preferences over outcomes; HC2 agents
individually optimize subject to constraints; HC3 agent choice is manifest in interrelated markets; HC4 agents have full relevant knowledge; HC5 observable outcomes are coordinated, and must be
discussed with reference to equilibrium states.” (Weintraub)
This axiom set contains a lot of NONENTITIES, a fact that did not escape the mathematicians: “Walras approached Poincaré for his approval. ... But Poincaré was devoutly committed to applied
mathematics and did not fail to notice that utility is a nonmeasurable magnitude. ... He also wondered about the premises of Walras’s mathematics: It might be reasonable, as a first
approximation, to regard men as completely self-interested, but the assumption of perfect foreknowledge ‘perhaps requires a certain reserve’.” (Porter)
The next mathematician to realize that Walrasian mathematics was not up to standards was von Neumann: “You know, Oskar, if those books are unearthed sometime a few hundred years hence, people
will not believe they were written in our time. Rather, they will think that they are about contemporary with Newton, so primitive is their mathematics. Economics is simply still a million miles
away from the state in which an advanced science is, such as physics.”
The two central issues the mathematicians addressed were HC2, i.e. constrained optimization, and HC5, i.e. equilibrium. The end result was General Equilibrium Theory which is known today to be a
failure.#1 The mathematicians fixed the economists’ mathematical blunders#2, #3 but von Neumann uncritically bought into the silly behavioral assumptions of mainstream economics.#4 So, the
project of the proper formalization of economics had one fatal drawback, von Neumann left the foundational assumptions untouched: “But this [establishing the analytic mother-structure] required
one very crucial maneuver that was nowhere stated explicitly: namely, that the model of Walrasian general equilibrium was the root structure from which all further work in economics would
eventuate.” (Weintraub)
Because of this, the NONENTITIES HC2 and HC5 are still with us from choice theory to game theory. Strictly speaking, von Neumann messed up the Paradigm Shift and this is where things stand to
this day: “There is another alternative: to formulate a completely new research program and conceptual approach. As we have seen, this is often spoken of, but there is still no indication of what
it might mean.” (Ingrao et al.)
See part 2
Part 2
Barkley Rosser does not fail to mention that economists have developed the habit to award themselves faux Nobels for scientific failure and that “it has long been viewed by many as being one of
the greatest mistakes and wrongs that when the Nobel Prize was given for linear programming, Dantzig was not one of the recipients, …” Indeed, False-Hero-Memorials are the real problem of the
cargo cult science economics.
Egmont Kakarot-Handtke
#1 “At long last, it can be said that the history of general theory from Walras to Arrow-Debreu has been a journey down a blind alley, and it is historians of economic thought who seem to have
finally hammered down the nails in this coffin. It has been a dead alley because the most rigorous solution of the existence problem by Arrow and Debreu turns general theory into a mathematical
puzzle applied to a virtual economy that can be imagined but could not possibly exist, while the extremely relevant ‘stability problem’ has never been solved either rigorously or sloppily.
General theory is simply a research program that has run into the sands.” (Blaug)
#2 “The so-called ‘mathematical’ economists in the narrower sense ― Walras, Pareto, Fisher, Cassel, and hosts of other later ones ― especially, have completely failed even to see the task that
was before them. Professor Hicks has to be added to this list, which is regrettable because he wrote several years after decisive work had been done ― in principle ― by J. von Neumann and A.
Wald.” (Morgenstern)
#3 “Consequently, it was the von Neumann perspective that shaped general equilibrium theory and game theory, and thus reconstituted economic theory. Thus, David Hilbert was the spiritual
grandfather of this new economics.” (Weintraub) Interestingly, both von Neumann and Einstein got/took their advanced math from Hilbert and the Göttingen School.
#4 “In any event, it seems that Morgenstern finally convinced von Neumann that they must proceed tactically by means of the conciliatory move of phrasing the payoffs in terms of an entity called
‘utility’, but one that von Neumann would demonstrate was cardinal ― in other words, for all practical purposes indistinguishable from money . . .” (Mirowski)
Gosh, took you awhile, Egmont.
Your analysis of this is seriously flawed. While linear programming can be stuck into a Walrasian framework, it is independent of it. Keep in mind that when Kantorovich first developed it, he was
doing so for a centrally planned command socialist economy, that of the USSR. The decisionmaker is that entity, not a bunch of individual agents, and there are no markets. So of your pet
Weintraubian axioms (not the actual axioms of GE theory), only HC2 and HC4 are relevant, but rewritten for a central planner.
As for your quotes, the one from Blaug is irrelevant. The one from Morgenstern is hilarious. Even if one dumps non-constructivist Hilbertian math, one can still do linear programming, another
irrelevant point. The final comment is correct, but also irrelevant.
Barkley Rosser
When the classical economists for a moment stopped writing their silly propaganda pamphlets and watched what was going on in the triumphant sciences they could not fail to notice that calculus
was a core element of analysis: “Already Maupertuis considered his minimum principle as proof that the world, where among many virtual movements the one leading to maximum effect with minimum
effort is realized, is the ‘best of all worlds’ and work of a purposeful creator. Euler made a similar remark: ‘Since the construction of the whole world is the most eminent and since it
originated from the wisest creator, nothing is found in the world which would not show a maximum or minimum characteristic.’ ...” (von Bertalanffy)
Economists had the presence of mind to adopt optimization as a foundational principle and eventually it became HC2 of the Walrasian axiom set. This is how economics became marginalistic and
remained so to this day: “most of what I and many others do is sorta-kinda neoclassical because it takes the maximization-and-equilibrium world as a starting point ...” (Krugman)
Constrained optimization is simple if the assumption of a well-behaved production function is added. In the course of time, mathematicians found solutions for more complicated problems of
production, transport, warfare and the like. This is where Dantzig and Kantorovich came in.
However, while the mathematicians developed solutions for real-world problems, economists applied constrained optimization to human behavior. This is the foundational idiotism of
microfoundations: “Throughout its history, the idea of some ‘Fundamental Assumption’, some basic ‘Economic Principle’ about human conduct, from which much or most of economics can ultimately be
deduced, has been deeply rooted in the procedure of economic theory. Some such notion is still, in many quarters, dominant at the present time. For example, it has recently been stated that the
task of economics is ‘to display the structure and working of the economic cosmos as an outgrowth of the maximum principle’.” (Hutchison)
The point is: no way leads from a behavioral assumption (optimization or otherwise) to the understanding of how the economy works. Standard economics is based on behavioral axioms and this is not
a solid enough foundation: “… if we wish to place economic science upon a solid basis, we must make it completely independent of psychological assumptions and philosophical hypotheses.” (Slutzky)
So, the mathematicians have done a fine job by solving all kinds of complex optimization problems. This, though, does not alter the fact microfoundations HC1 to HC5 are proto-scientific garbage
and have to be replaced by macrofoundations. This is called a Paradigm Shift which, as everyone knows by now, has been messed up by the waffling economist Keynes. Economists are simply too stupid
for the elementary mathematics that underlies macroeconomics.#1, #2
Economists claim since Adam Smith that the free market economy is self-regulating and self-optimizing if left to itself. In reality, it is just the opposite: the real part of the economy is kept
on life-support by the government, the monetary/financial part is kept on life-support by the Central Bank.#3
Economics is a failed science and the Bank of Sweden fake Nobel is the most fraudulent of all False-Hero-Memorials.#3
Egmont Kakarot-Handtke
#1 Are economics professors really that incompetent? Yes!
#2 Econ 101: Economists flunk the intelligence test at the first hurdle
#3 The Levy/Kalecki Profit Equation is false
Sorry, but a lot of the people you label as "economists" were or are really mathematicians, e.g. von Neumann, Nash, Kantorovich. To the extent that what they did is not good because of a failure to
allow for a macrofoundation, and that amounts to recognizing your silly profit accounting, your comments are just irrelevant and silly, Egmont.
I really did find it amusing to see you sneer at von Neumann and Einstein for using Hilbertian math, two of the smartest people of the 20th century, if not all time. But you know better than they
did. (Einstein was also the father of something used in economics and finance, notably the mathematical concept of Brownian motion, which also happens to be what he got his Nobel Prize for, you
know, those "fake hero" prizes).
Barkley Rosser
You say: “Sorry, but a lot of the people you label as ‘economists’ were or are really mathematicians, e.g. von Neumann, Nash, Kantorovich.”
No. My point is that it was mathematicians and not economists who cleared up the mathematical mess of economists and did all the heavy lifting that resulted in General Equilibrium Theory. See
Mirowski’s More Heat Than Light for the finer points, especially how mathematically trained engineers who dabbled in economics desperately tried to make economists aware that they applied
calculus incorrectly. To no avail, of course.
It is mathematicians who deserve most or all of the credit for solving the tricky optimization problems that were implicit in the Walrasian axiom HC2. Solving optimization problems is mathematics
and, strictly speaking, not economics. Economics is about how the economic system works. The simple fact of the matter is that General Equilibrium Theory, which is based on HC2, i.e. constrained
optimization, and HC5, i.e. equilibrium, does NOT explain how the economy works. This showpiece of the mathematicians’ ingenuity has NO economic content at all.
To award an economics “Nobel” to a mathematical achievement has to be seen as an attempt to give the cargo cult science economics the appearance of genuine science.
Von Neumann’s contribution has to be seen in the context of the ideological/physical war capitalism/USA vs communism/Russia and the involvement of mathematicians/physicists in the development of
the atom bomb. Von Neumann recommended himself to the US military with his famous statement: “If you say why not bomb them tomorrow, I say why not today? If you say today at five o’clock, I say
why not one o’clock?” (Wikiquote)
The US military needed the most advanced mathematics available on the planet, so von Neumann was sent to Göttingen: “In 1926 von Neumann went to Göttingen on a Rockefeller fellowship to work as
Hilbert’s assistant. Göttingen was not only one of the centers of mathematics but it also was a mecca of theoretical physics; thus in Göttingen von Neumann could familiarize himself with the
latest developments concerning quantum mechanics. Hilbert himself gave lectures on the mathematical foundations of quantum mechanics in the academic year 1926-1927. Von Neumann attended these
lectures, and working out the lecture notes taken during those lectures led to a joint publication and eventually to von Neumann’s three ground breaking papers on the mathematical foundations of
quantum mechanics that served as the basis of his book.”#1
Not to forget, Game Theory is not so much about parlor games or the problems of incarcerated mobsters or strategic economic behavior as about war games.
What von Neumann learned from Hilbert was the crucial importance of axiomatization: “When we assemble the facts of a definite, more-or-less comprehensive field of knowledge, we soon notice that
these facts are capable of being ordered. This ordering always comes about with the help of a certain framework of concepts .... The framework of concepts is nothing other than the theory of the
field of knowledge. ... If we consider a particular theory more closely, we always see that a few distinguished propositions of the field of knowledge underlie the construction of the framework
of concepts, and these propositions then suffice by themselves for the construction, in accordance with logical principles, of the entire framework. ... The procedure of the axiomatic method, as
it is expressed here, amounts to a deepening of the foundations of the individual domains of knowledge — a deepening that is necessary for every edifice that one wishes to expand and to build
higher while preserving its stability.” (Hilbert)
Note that the Theory of Games is axiomatized.
See part 2
Part 2
You say: “To the extent what they [the mathematicians] did is not good because of a failure to allow for a macrofoundation and that amounts to recognizing your silly profit accounting, your
comments are just irrelevant and silly, Egmont.”
What I actually say is that ECONOMISTS are too stupid for the elementary mathematics that underlies macroeconomics. This is a provable fact.
Since Keynes, neither Post-Keynesians nor New Keynesians nor Anti-Keynesians have realized that I=S is false and has to be replaced by Q=I−S with Q representing macroeconomic profit. What Hilbert
called “the framework of concepts” is false in economics to this day.
Needless to emphasize that you neither understand the problem of the proper axiomatization of economics nor the macroeconomic Profit Law. So, you are the ideal person to work on a new edition of
the Palgrave collection of False-Hero-Memorials of the cargo cult science economics.
Economics is a failed/fake science and the Palgrave Dictionary and you are part of it. Economics needs a Paradigm Shift from false Walrasian microfoundations and false Keynesian macrofoundations
to true macrofoundations.
Egmont Kakarot-Handtke
#1 PDFs.semanticscholar
My only comment on all this is the idea that the 1926 Rockefeller grant to von Neumann to work with Hilbert was motivated by support for his hardline anti-Communist views is just rank silly
nonsense. His political views were unknown at that time, and all that only became important after WW II, long after he had arrived in the US, which was quite some time after he worked with Hilbert.
Yes, von Neumann's work redid quantum mechanics, but I note that his first paper on game theory, in which he also first used a fixed point theorem, and which also has duality inherent in it, was
published in 1928 in German coming out of all that.
Barkley Rosser
Let’s return to the main point: “According to George Dantzig, the duality theorem for linear optimization was conjectured by John von Neumann immediately after Dantzig presented the linear
programming problem. Von Neumann noted that he was using information from his game theory, and conjectured that two person zero sum matrix game was equivalent to linear programming.”#1
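The equivalence von Neumann conjectured is constructive: the value of a two-person zero-sum matrix game is the optimum of a small linear program. Here is a sketch using the standard textbook formulation; the payoff matrix (matching pennies) is an illustrative example, not from any source quoted here.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value and optimal row strategy of a zero-sum matrix game via LP.

    Maximize v subject to (A^T x)_j >= v for every column j,
    sum(x) = 1, x >= 0, where x is the row player's mixed strategy.
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                # minimize -v, i.e. maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]   # x >= 0, v is free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Matching pennies: a fair game, so the value is 0 and the strategy is (1/2, 1/2)
v, x = game_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(v, x)
```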
Von Neumann identified the Walrasian approach as given with the verbalized axioms HC1 to HC5 as petitio principii (TOG p. 15), that is as methodologically unacceptable.#2 So, he introduced a new
set of axioms: “An exact and exhaustive elaboration of these ideas requires the use of the axiomatic method.” (TOG p. 19) The formulation of a new set (TOG p. 24 f.) amounts to a Paradigm Shift.
Note, in particular, von Neumann’s general remark on axiomatization: “The axioms should not be too numerous, their system is to be as simple and transparent as possible, and each axiom should
have an immediate intuitive meaning by which its appropriateness may be judged directly.” (TOG p. 25)
Generally speaking, von Neumann’s approach is (i) behavioral and (ii) bottom-up. In this, it resembles the Walrasian approach. Both may be put together under the label microfoundations. Now, the
correct approach to economics is (i) structural and (ii) top-down. For good methodological reasons, economics has to be macrofounded.
Game theory is predicated on von Neumann’s specification of utility (TOG p. 15) and this leads to the concept of zero-sum games: “In game theory and economic theory, a zero-sum game is a
mathematical representation of a situation in which each participant’s gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants.”#3 Note that
there is a quite subtle transition from utility (ordinal) to profit (cardinal): “In 1944, John von Neumann and Oskar Morgenstern proved that any non-zero-sum game for n players is equivalent to a
zero-sum game with n+1 players; the (n+1)th player representing the global profit or loss.”#3
And exactly here is the lethal blunder: global profit or loss cannot be determined within the framework of game theory.
Here is the axiomatically correct proof. The elementary production-consumption economy is defined with this set of macroeconomic axioms: (A0) The economy consists of the household and the
business sector which, in turn, consists initially of one giant fully integrated firm; (A1) Yw=WL: wage income Yw is equal to wage rate W times working hours L; (A2) O=RL: output O is equal to
productivity R times working hours L; (A3) C=PX: consumption expenditure C is equal to price P times quantity bought/sold X.
Note that this set satisfies von Neumann’s general criteria of proper axiomatization as quoted above.
Under the conditions of market clearing X=O and budget balancing C=Yw in each period, the price as the dependent variable is given by P=W/R. This is the macroeconomic Law of Supply and Demand.
The focus is here on the nominal/monetary balances. For the time being, real balances are excluded, i.e. it holds X=O. The condition of budget balancing, i.e. C=Yw, is now skipped. The monetary
saving/dissaving of the household sector is defined as S≡Yw−C. The monetary profit/loss of the business sector is defined as Q≡C−Yw. Ergo Q+S=0 or Q=−S.
The balances add up to zero. The mirror image of household sector saving S is business sector loss (-Q). The mirror image of household sector dissaving (-S) is business sector profit Q. Q=−S is
the elementary version of the macroeconomic Profit Law.
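Whatever one makes of the economics, the definitions quoted above are pure accounting and can be verified mechanically. A sketch with hypothetical numbers (W, the hours, R, and the 90% spending ratio are arbitrary choices for illustration):

```python
# (A1)-(A3) with hypothetical parameter values
W, hours, R = 20.0, 100.0, 4.0   # wage rate, working hours L, productivity
Yw = W * hours                   # (A1) wage income
O = R * hours                    # (A2) output
P = W / R                        # market-clearing price when C = Yw
C = 0.9 * Yw                     # households spend 90% of income (arbitrary)

S = Yw - C                       # household saving,  S := Yw - C
Q = C - Yw                       # business profit,   Q := C - Yw
assert Q == -S                   # Q + S = 0: the balances mirror each other
print(P, S, Q)                   # 5.0, 200.0, -200.0
```

Since Q and S are defined from the same two quantities with opposite signs, Q = -S holds identically; the sketch just makes the bookkeeping explicit.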
See part 2
Part 2
From Q+S=0 one could conclude that economics is a zero-sum game between the business sector and the household sector. However, this is only the first half of the truth. From the fact that profit
comes from dissaving, it follows that the household sector accumulates debt (overdrafts at the central bank) while the business sector accumulates money (deposits at the central bank). Now, debt
has to be eventually repaid and this inverts the whole game: profit Q in period t is exactly annihilated by loss (-Q) in period t+1. The mirror image is dissaving (-S) in t and saving S in t+1.
The result of these interlocked zero-sum games is zero profit over all periods.
Economics is an intertemporal zero-sum game and the n+1th player is the future. This is the reason why capitalism will eventually break down and NOT Marx’s phony Revolution of the Proletariat.
The intertemporal zero-sum theorem fully replaces folk psychology/sociology. Note that the intertemporal zero-sum theorem holds for a monetary economy that is for capitalism AND communism AND
anything in-between. Which in turn proves that political economics has NEVER been anything else than proto-scientific garbage of the worst sort.
Egmont Kakarot-Handtke
#1 Duality (optimization)
#2 Petitio principii — economists’ biggest methodological mistake
#3 Zero-sum game
#4 The Levy/Kalecki Profit Equation is false
The Message Complexity of Distributed Graph Optimization
The message complexity of a distributed algorithm is the total number of messages sent by all nodes over the course of the algorithm. This paper studies the message complexity of distributed
algorithms for fundamental graph optimization problems. We focus on four classical graph optimization problems: Maximum Matching (MaxM), Minimum Vertex Cover (MVC), Minimum Dominating Set (MDS), and
Maximum Independent Set (MaxIS). In the sequential setting, these problems are representative of a wide spectrum of hardness of approximation. While there has been some progress in understanding the
round complexity of distributed algorithms (for both exact and approximate versions) for these problems, much less is known about their message complexity and its relation with the quality of
approximation. We almost fully quantify the message complexity of distributed graph optimization by showing the following results: 1. Cubic regime: Our first main contribution is showing essentially
cubic, i.e., Ω̃(n³), lower bounds (where n is the number of nodes in the graph) on the message complexity of distributed exact computation of Minimum Vertex Cover (MVC), Minimum Dominating Set (MDS),
and Maximum Independent Set (MaxIS). Our lower bounds apply to any distributed algorithm that runs in a polynomial number of rounds (a mild and necessary restriction). Our result is significant since,
to the best of our knowledge, these are the first ω(m) (where m is the number of edges in the graph) message lower bounds known for distributed computation of such classical graph optimization
problems. Our bounds are essentially tight, as all these problems can be solved trivially using O(n³) messages in polynomial rounds. All these bounds hold in the standard CONGEST model of distributed
computation in which messages are of O(log n) size. 2. Quadratic regime: In contrast, we show that if we allow approximate computation then Θ(n²) messages are both necessary and sufficient.
Specifically, we show that Ω(n²) messages are required for constant-factor approximation algorithms for all four problems. For MaxM and MVC, these bounds hold for any constant-factor approximation,
whereas for MDS and MaxIS they hold for any approximation factor better than some specific constants. These lower bounds hold even in the LOCAL model (in which messages can be arbitrarily large) and
they even apply to algorithms that take arbitrarily many rounds. We show that our lower bounds are essentially tight, by showing that if we allow approximation to within an arbitrarily small constant
factor, then all these problems can be solved using O(n²) messages even in the CONGEST model. 3. Linear regime: We complement the above lower bounds by showing distributed algorithms with Õ(n)
message complexity that run in polylogarithmic rounds and give constant-factor approximations for all four problems on random graphs. These results imply that almost linear (in n) message complexity
is achievable on almost all (connected) graphs of every edge density.
Original language English
Title of host publication 15th Innovations in Theoretical Computer Science Conference, ITCS 2024
Editors Venkatesan Guruswami
Publisher Schloss Dagstuhl - Leibniz-Zentrum für Informatik
ISBN (Electronic) 9783959773096
Publication status Published - 2024
MoE publication type A4 Conference publication
Innovations in Theoretical Computer Science Conference - Berkeley, United States
Event Duration: 30 Jan 2024 → 2 Feb 2024
Conference number: 15
Publication series
Name Leibniz International Proceedings in Informatics, LIPIcs
Volume 287
ISSN (Print) 1868-8969
Conference Innovations in Theoretical Computer Science Conference
Abbreviated title ITCS
Country/Territory United States
City Berkeley
Period 30/01/2024 → 02/02/2024
• distributed approximation
• Distributed graph algorithm
• message complexity
Method of miscible injection testing of oil wells and system thereof
A method of determining reservoir permeability and geometry of a subterranean formation having a reservoir fluid including oil that has not been previously water-flooded includes isolating the
subterranean formation to be tested; providing an injection fluid at a substantially constant rate from a wellhead to the formation being tested, wherein the injection fluid is miscible with the oil
at the tested formation; sealing, at the top, the tested formation from further fluid injection; measuring pressure data in the tested formation including pressure injection data and pressure falloff
data; and determining the reservoir permeability and geometry of the tested formation based on an analysis of the measured pressure injection data and the measured pressure falloff data using a well
pressure model.
The present invention relates generally to characterization of the productivity and geometry of oil bearing intervals in wells and more particularly to automated interpretation of short term testing
without oil production to the surface.
An example of a conventional oil surface procedure for flow testing is the Drill Stem Test (DST). In this type of flow testing, the productive capacity, pressure, permeability or extent of an oil or
gas reservoir is determined. DST testing is essentially a flow test, which is performed on isolated formations of interest to determine the fluid present and the rate at which they can be produced.
Typical DST consists of several flow and shut in (or pressure buildup) periods, during which reservoir data is recorded.
Alternatives to the oil surface procedure for flow testing exist, but have their own inherent disadvantages or shortcomings. For example, coring and open hole wireline formation testing are known,
but these methods sample a very small reservoir volume which often yields insufficient or incomplete results. Additionally, injection flow testing has been explored for water injection into water
flooded oil reservoirs.
In an aspect of the invention, there is provided a method of determining reservoir permeability and geometry of a subterranean formation having a reservoir fluid including oil that has not been
previously water-flooded, the method comprising isolating the subterranean formation to be tested; providing an injection fluid at a substantially constant rate from a wellhead to the formation being
tested, wherein the injection fluid is miscible with the oil at the tested formation; sealing, at the top, the tested formation from further fluid injection; measuring pressure data in the tested
formation including pressure falloff data and pressure injection data; and determining the reservoir permeability and geometry of the tested formation based on an analysis of the measured pressure
injection and the measured pressure falloff data using a well pressure model.
In another aspect of the invention, there is provided a system for determining a reservoir permeability and geometry of a subterranean formation having a reservoir fluid including oil that has not
previously been water-flooded, the system comprising an injector constructed and arranged to inject an injection fluid at substantially constant rate from a wellhead into the formation being tested,
wherein the injection fluid is miscible with the oil at the tested formation; one or more sensors constructed and arranged to measure data in the tested layer including pressure injection data and
pressure falloff data; and a machine readable medium having machine executable instructions constructed and arranged to determine the reservoir permeability and geometry of the tested formation based
on an analysis of the measured pressure injection data and the measured pressure falloff data using a well pressure model stored in a memory coupled to a processor.
These and other objects, features, and characteristics of the present invention, as well as the methods of operation and functions of the related elements of structure and the combination of parts
and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part
of this specification, wherein like reference numerals designate corresponding parts in the various Figures. It is to be expressly understood, however, that the drawings are for the purpose of
illustration and description only and are not intended as a definition of the limits of the invention. As used in the specification and in the claims, the singular form of “a”, “an”, and “the”
include plural referents unless the context clearly dictates otherwise.
FIG. 1 generally shows a method of determining reservoir permeability and geometry of a subterranean formation in accordance with an embodiment of the invention.
FIG. 2 is a schematic illustration of a sensor in communication with a computer in accordance with an embodiment of the invention.
FIG. 3 illustrates the viscosity-temperature behavior for saturated and dead oil in accordance with some embodiments of the present invention.
FIG. 4 illustrates wellbore temperature loss during oil production in accordance with some embodiments of the present invention.
FIG. 5 illustrates the concentration profile solution for the convection-diffusion equation, t[D]≤32, in accordance with some embodiments of the present invention.
FIG. 6 illustrates the concentration profile solution for the convection-diffusion equation, t[D]≥8, in accordance with some embodiments of the present invention.
FIG. 7 illustrates scale dependence of the dispersion coefficient in accordance with some embodiments of the present invention.
FIG. 8 illustrates the dimensionless derivative behavior for various α in accordance with some embodiments of the present invention.
FIG. 9 illustrates the dimensionless derivative behavior for piston-like displacement in accordance with some embodiments of the present invention.
FIG. 10 illustrates the dimensionless derivative behavior for μ[i]/μ[r]=4 in accordance with some embodiments of the present invention.
FIG. 11 illustrates the wellbore storage and skin effect in accordance with some embodiments of the present invention.
FIG. 12 illustrates the pressure transient behavior for various kh and s=20 in accordance with some embodiments of the present invention.
FIG. 13 illustrates the pressure transient behavior for various s and kh=20 md·ft in accordance with some embodiments of the present invention.
FIG. 14 illustrates the pressure transient behavior for various q/h in accordance with some embodiments of the present invention.
FIG. 15 shows a table of k and s predictions in accordance with some embodiments of the present invention.
Transient oil well pressure is analyzed to determine a reservoir permeability and geometry of a subterranean formation. The transient oil well pressures are provided by measuring and recording by one
or more bottom hole pressure gauges down a borehole. FIG. 1 shows an example of an implementation of the reservoir permeability and geometry test method implementing certain aspects of the well
pressure model. The method generally begins at step 105 for determining a reservoir permeability and geometry of a subterranean formation having a reservoir fluid including oil that has not
previously been water-flooded. In some embodiments, a hollow pipe, called a drill stem, is lowered down the well from a wellhead. The wellhead is the surface termination of a wellbore. The drill stem
has two expandable devices, called packers, around it. The drill stem is lowered into the wellbore or the well until a first packer is positioned just above the subterranean formation to be tested
and a second packer is positioned just below the tested formation. The subterranean formation to be tested is isolated at step 110. In some embodiments, during the isolation step, the formation to be
tested is isolated by expanding the first and the second packer to close the well above and below the tested formation. Isolating the formation excludes pressures from the surrounding environment,
while allowing reservoir fluid to flow into the isolated subterranean formation.
An injection fluid is introduced or provided through the drill stem into the formation being tested at step 115. In some embodiments, the injection fluid is provided by an injector, which may be
located at the wellhead. The injector is configured to inject the injection fluid at a substantially constant rate by being capable of continuously adjusting the discharge pressure based on the
transient reservoir pressure response. The injection fluid is miscible with the oil that permeates the subterranean formation and, in an embodiment, has a higher viscosity than the oil. The higher
viscosity of the injection fluid can reduce viscous fingering, which may have a detrimental effect on the wellbore pressure response during injection. The viscosity of the injection fluid can be
increased by including viscosity modifiers or additives with the injection fluid that do not affect the miscibility of the injection fluid. The additives include, for example, bentonite or hectorite
based organoclays and polar activators such as ethanol or triethylene glycol. In some embodiments, the injection fluid is a base oil, such as base oil SARALINE 185V, manufactured by Shell
Corporation, which has a low volatility and low compressibility. The viscosity of SARALINE 185V at reservoir conditions is approximately 0.5 cp.
In some embodiments, the injection fluid is obtained from the formation being tested prior to the reservoir testing. This injection fluid, called a bottom hole sample, is preceded by a low rate
influx of sufficient reservoir oil volume to assure minimal base oil contamination. Typically, this volume will not exceed a few barrels. Also, this sampling will not involve production of the
reservoir oil at the surface.
After the injection fluid has been provided to the subterranean formation being tested, the formation is sealed or shut-in at step 120. The period of time that the formation is sealed or shut-in may
vary from a few hours to a few days depending on the length of time for the pressure falloff data to show a pressure approaching the reservoir pressure. In some embodiments, the packers, located
below and above the formation, are expanded to seal the formation from undesired influences, such as from pressures and fluids from surrounding formations.
Pressure falloff data is measured from the subterranean formation being tested during the injection period and during the subsequent shut-in period at step 125. The pressure falloff data may be
measured by one or more pressure sensors. In some embodiments, additional measurement may be made during the injection period and subsequent shut-in period. These additional measurements, which may
be made by one or more additional sensors, include measuring an injection pressure, a bottom hole temperature, a surface fluid injection rate, and a surface tubing pressure. In some embodiments, the
sensors are constructed and arranged for measuring electrical characteristics of the wellbore material and surrounding formations; this is for illustrative purposes only, and a wide variety of sensors
may be employed in various embodiments of the present invention. In particular, it is envisioned that measurements of resistivity, ultrasound or other sonic waves, complex electrical impedance, video
imaging and/or spectrometry may be employed. Consistent with this, the sensors may be selected as appropriate for the measurement to be made, and may include, by way of non-limiting example,
electrical sources and detectors, radiation sources and detectors, and acoustic transducers. As will be appreciated, it may be useful to include multiple types of sensors on a single probe and
various combinations may be usefully employed in this manner.
The data collected during the injection period and subsequent shut-in period is analyzed using a well pressure model of the present invention to determine the permeability and geometry of the tested
formation to the reservoir fluid at step 130.
As shown in FIG. 2, the data collected by the sensors 200 are generally stored in a local memory device as in memorized logging-while-drilling tools or relayed via a wire, though the connection may
be made wireless, to a computer 205 that may be, for example, located at a drilling facility where the data may be received via a bus 210 of the computer 205, which may be of any suitable type, and
stored, for example, on a computer readable storage device 215 such as a hard disk, optical disk, flash memory, temporary RAM storage or other media for processing with a processor 220 of the
computer 205.
Consistent with an aspect of the present invention, a radial model that estimates the well pressure response under constant rate miscible injection is developed. The model indicates that the
variation of viscosity with time and radius (arising from the mixing of injection and reservoir oils, which have different viscosities due to composition and temperature differences) governs the well
pressure response in part, and can cause a significant early deviation from the response associated with a single-viscosity system. However, the practical duration of this effect is short, and so the
deviation does not adversely affect the estimation of reservoir parameters from well pressure data.
Let the fluid system be composed of one flowing liquid phase, oil, comprised of two miscible components, injection oil and reservoir oil, and one immiscible, immobile liquid phase, water. The
governing radial mass and energy balance equations are:
$$\frac{\partial}{\partial t}\left[\phi\left(S_w\rho_w\omega_{jw} + S_o\rho_o\omega_j\right) + (1-\phi)\rho_R\omega_{jR}\right] + \frac{1}{r}\frac{\partial}{\partial r}\left[r\left(\rho_o u_o\omega_j - \phi S_o\rho_o D\frac{\partial\omega_j}{\partial r}\right)\right] = 0,\qquad j=i,r.\qquad(1)$$
$$\frac{\partial}{\partial t}\left[\phi\left(S_w\rho_w U_w + S_o\rho_o U_o\right) + (1-\phi)\rho_R U_R\right] + \frac{1}{r}\frac{\partial}{\partial r}\left[r\left(\rho_o u_o H_o - K\frac{\partial T}{\partial r}\right)\right] = 0.\qquad(2)$$
Gravity, radiation energy flux, and fluid kinetic energy are ignored in these equations. The injection oil mass fraction of the oil phase is represented by ω[i], and that for reservoir oil is ω[r].
The additional mass fractions ω[jw ]and ω[jR], for j=i, r, represent those of each oil component absorbed into the water phase, and onto the rock, respectively. All elements of the equations are
defined in the Nomenclature section located in the Appendix.
Assume the density of the oil phase is independent of ω[j], that is, the density difference between injection oil and reservoir oil can be ignored. Then, adding the two mass balance equations (j=i,r)
comprising Eq. 1, gives,
$$\frac{\partial}{\partial t}\left[\phi\left(S_w\rho_w + S_o\rho_o\right) + (1-\phi)\rho_R\right] + \frac{1}{r}\frac{\partial}{\partial r}\left[r\rho_o u_o\right] = 0.\qquad(3)$$
Assume the liquid phases and rock have constant compressibilities, and the oil phase compressibility is independent of ω[j]. Also assuming constant reservoir porosity and permeability, and ignoring
second order derivative terms and capillary pressure, the following equation, similar to the diffusivity equation, results:
$$\frac{\partial p}{\partial t} - \frac{k}{\phi c_t}\,\frac{1}{r}\frac{\partial}{\partial r}\left(\frac{r}{\mu_o}\frac{\partial p}{\partial r}\right) = 0.\qquad(4)$$
The solution of this equation at the well is the pressure model desired. The oil phase viscosity, μ[o], varies with radius and time, however, so this equation is not easily solved.
A solution approach used in various studies assumes the time-dependent viscosity profile may be estimated by an analytical incompressible flow model. The viscosity profile resulting from this model
is then substituted into Eq. 4, which is then solved numerically, yielding the desired well pressure response. This approach is employed herein.
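The solution approach just described (estimate the viscosity profile analytically, substitute it into Eq. 4, and solve numerically) can be illustrated with a small explicit finite-difference sketch. This is not the patent's implementation: the scheme, log-spaced grid, SI units, and parameter values below are all assumptions for demonstration, and the function `mu_of_r` stands in for the transition-zone viscosity profile.

```python
import math

def solve_radial_pressure(mu_of_r, q=1e-4, h=10.0, k=1e-13, phi=0.1,
                          ct=1e-9, rw=0.1, re=100.0, n=60, t_end=0.1):
    """Explicit finite-difference sketch of the radial diffusivity equation
    (Eq. 4) with a radius-dependent viscosity profile mu_of_r(r) [Pa*s].
    Constant-rate injection at the well, fixed pressure at the outer edge.
    SI units throughout; all parameter values are illustrative only."""
    # Log-spaced grid from the wellbore radius rw to the outer boundary re.
    r = [rw * (re / rw) ** (i / (n - 1)) for i in range(n)]
    p = [0.0] * n                        # pressure change from initial, Pa
    eta = k / (phi * ct)                 # k/(phi*ct); divided by mu per face
    mu_min = min(mu_of_r(ri) for ri in r)
    dt = 0.2 * (r[1] - r[0]) ** 2 * mu_min / eta   # conservative stable step
    # Constant-rate inner boundary: r*dp/dr = -q*mu(rw)/(2*pi*k*h) at r=rw.
    slope_w = q * mu_of_r(rw) / (2.0 * math.pi * k * h)
    for _ in range(max(1, int(t_end / dt))):
        new = p[:]
        for i in range(1, n - 1):
            rp, rm = 0.5 * (r[i] + r[i + 1]), 0.5 * (r[i] + r[i - 1])
            fp = rp / mu_of_r(rp) * (p[i + 1] - p[i]) / (r[i + 1] - r[i])
            fm = rm / mu_of_r(rm) * (p[i] - p[i - 1]) / (r[i] - r[i - 1])
            new[i] = p[i] + dt * eta / r[i] * (fp - fm) / (0.5 * (r[i + 1] - r[i - 1]))
        new[0] = new[1] + slope_w * math.log(r[1] / r[0])  # well pressure
        p = new                          # outer node stays at 0 (fixed p)
    return r, p
```

With a constant viscosity this reduces to the standard single-viscosity response; passing a stepped or linear viscosity profile in place of the constant reproduces the transition-zone effect qualitatively.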
The incompressible flow version of Eq. 1 is the convection-diffusion equation, assuming ω[jw ]and ω[jR ]are negligible:
$$\frac{\partial\omega_j}{\partial t} + \frac{qB_i}{2\pi rh\phi S_o}\frac{\partial\omega_j}{\partial r} - \frac{1}{r}\frac{\partial}{\partial r}\left(rD\frac{\partial\omega_j}{\partial r}\right) = 0,\qquad j=i,r.\qquad(5)$$
The incompressible flow version of Eq. 2, in terms of temperature, assuming constant heat capacities of liquid and rock, is,
$$\frac{\partial T}{\partial t} + \beta\left[\frac{qB_i}{2\pi rh\phi}\frac{\partial T}{\partial r} - \frac{1}{r\rho_o c_{po}}\frac{\partial}{\partial r}\left(rK\frac{\partial T}{\partial r}\right)\right] = 0,\qquad(6)$$
where,
$$\beta = \frac{\rho_o c_{po}}{\rho_w c_{pw}S_w + \rho_o c_{po}S_o + \dfrac{1-\phi}{\phi}\rho_R c_{pR}}.\qquad(7)$$
The interstitial velocities of the injection oil front, v and of its temperature front, v[T ]are indicated in Eqs. 5 and 6, to be,
$$v = \frac{qB_i}{2\pi rh\phi S_o}.\qquad(8)\qquad\qquad v_T = \beta\,\frac{qB_i}{2\pi rh\phi}.\qquad(9)$$
The interstitial velocities correspond to those of the centers of two moving transition zones: that between pure injection oil, ω[i]=1, and pure reservoir oil, ω[r]=1, and that between injection
temperature T[i] and reservoir temperature T[r]. The diffusion coefficients in Eqs. 5 and 6, D and K, control the widths of the transition zones. The fronts are piston-like only if the diffusion
terms are insignificant.
Note that only if both terms ρ[w]c[pw]S[w] and ((1−φ)/φ)ρ[R]c[pR] in Eq. 7 are insignificant will the two fronts travel at the same speed. Otherwise, the injection oil temperature front will necessarily lag behind the injection oil compositional front. Using
nominal values of densities and heat capacities for rock, oil, and brine (ρ[o]=53 lbm/ft³, ρ[w]=69, ρ[R]=125, c[o]=0.55 BTU/°F/lbm, c[w]=0.8, c[R]=0.3), and φ=0.10, S[o]=0.85,
$$\frac{v}{v_T} = \frac{1}{\beta S_o} \approx 15.\qquad(10)$$
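As a numerical check on Eq. 10, β from Eq. 7 and the front-velocity ratio can be evaluated with the nominal properties quoted above. This is an illustrative sketch; S[w] = 1 − S[o] = 0.15 is an assumption, as the water saturation is not stated explicitly.

```python
# Evaluate beta (Eq. 7) and v/v_T = 1/(beta*S_o) (Eq. 10) with the
# nominal properties quoted in the text; S_w = 1 - S_o is an assumption.
rho_o, rho_w, rho_R = 53.0, 69.0, 125.0      # densities, lbm/ft^3
c_o, c_w, c_R = 0.55, 0.8, 0.3               # heat capacities, BTU/(lbm*degF)
phi, S_o = 0.10, 0.85
S_w = 1.0 - S_o
beta = (rho_o * c_o) / (rho_w * c_w * S_w + rho_o * c_o * S_o
                        + (1.0 - phi) / phi * rho_R * c_R)
ratio = 1.0 / (beta * S_o)                   # v / v_T
print(round(ratio))                          # approximately 15
```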
The interstitial velocities and transition zone widths are critical in that the oil phase viscosity profile is derived directly from them. Assuming the temperature front lags behind the injection oil
front, the viscosity profile is comprised of two transition zones. The trailing viscosity transition zone, that which is closest to the well, corresponds to the temperature front, and varies from μ
[o](T=T[i]) to μ[o](T=T[r]). The leading transition zone corresponds to the injection oil composition front, and varies from μ[o](ω[i]=1) to μ[o](ω[r]=1). The transition zones are not necessarily
separate, and may overlap.
It can be shown that the relative widths of the two transition zones may be quite different under practical conditions. The two diffusion terms in Eqs. 5 and 6 are $\frac{1}{r}\frac{\partial}{\partial r}\left(rD\frac{\partial\omega_j}{\partial r}\right)$, corresponding to the composition transition zone, and $\frac{\beta}{r\rho_o c_{po}}\frac{\partial}{\partial r}\left(rK\frac{\partial T}{\partial r}\right)$, for the temperature transition zone. The relative importance of these terms may therefore be examined with the ratio $\frac{K\beta}{\rho_o c_{po}D}$, which estimates the relative width of the thermal transition zone to that of the composition transition zone.
The coefficient D is comprised of two components, one corresponding to molecular diffusion, and the other to mechanical dispersion. The rate of molecular diffusion is proportional to the gradient of
oil composition within the transition zone. The rate of mechanical dispersion is proportional to composition gradient, as well as the oil phase velocity. Except in cases of extremely low oil phase
velocity, the diffusion component is relatively small. The diffusion component may be ignored under practical injection test conditions, for injection rates as low as a few barrels per day, as the
transition zone velocity is at a maximum due to its proximity to the well. D will therefore be defined as comprised only of the mechanical dispersion component.
The mechanical dispersion term is commonly expressed as,
$$D = \alpha v.\qquad(11)$$
The mechanical dispersion coefficient, α, is dependent on those elements in the reservoir, such as pore geometry and tortuousity, that control mechanical mixing of the oil components. Importantly, it
is also scale dependent, such that the coefficient grows as the transition zone moves away from the wellbore. The dispersion coefficient will be discussed further below.
The ratio $\frac{K\beta}{\rho_o c_{po}D}$ may then be evaluated as,
$$\frac{K\beta}{\rho_o c_{po}D} = \frac{K\beta}{\rho_o c_{po}\alpha v} = \frac{2\pi rh\phi S_o K\beta}{qB_i\rho_o c_{po}\alpha}.\qquad(12)$$
The effect of the transition zone on test data analysis is predominant until the zone no longer intersects the well. This occurs when the center of the transition zone is at a radius r̄≈6α.
Substituting for r, the ratio in Eq. 12 may then be estimated, using nominal values of oil, water, and rock densities, specific heat, and heat conductivity (K=1.5 BTU/hr/ft/° F.), and φ=0.10, S[o]=
0.85, h=25 ft,
$$\frac{K\beta}{\rho_o c_{po}D} \approx \frac{8}{qB_i},\qquad(13)$$
where q is in surface B/D. It is therefore estimated that only for very low rates of injection will the viscosity transition zone resulting from thermal diffusion be as extensive as that from
mechanical dispersion.
It is assumed that practical injection rates will yield a sharp temperature front, relative to the width of the transition zone of the composition front. This assumption will be discussed further below.
Well pressure data is not analyzable during the period a viscosity transition zone intersects the well, as will be demonstrated in the following section. A sharp temperature front minimizes the
duration that the thermal transition zone intersects the well, and therefore minimizes the effect on the well pressure response.
The viscosity drop at the temperature front depends on reservoir oil properties and injection rate, and can be estimated using the following two figures. FIG. 3 shows the temperature dependence of
viscosity computed from correlation for two reservoir oils, one with a solution gas/oil ratio (GOR) of 1000, and the other, a dead oil. It is assumed that the viscosity of the injection liquid will
be modified so as to exceed the reservoir oil viscosity at reservoir temperature.
FIG. 4 illustrates the rate dependence of oil temperature drop in 3½ in. tubing. Although the curves are for the production case, the temperature differences at the terminal point (in this case the
surface, or in the case of injection, the sand face) due to rate, are equivalent to those for injection.
Note the curve corresponding to 300 B/D represents a nearly static case, and that a 50° F. difference is induced by a rate of 1100 B/D. Injection liquid, therefore, is estimated to be 50° F. cooler
than reservoir temperature at reservoir depth, when the injection rate is 1100 B/D. The temperature difference will be less for lower rates. The temperature of the injection liquid will be equivalent
to that of the reservoir, at 300 B/D injection rate. FIG. 3 indicates that for the 1000 GOR reservoir oil, this cooler temperature does not have a significant effect on viscosity, as the viscosity
curves are relatively flat at higher temperatures. The dead oil is more sensitive in the higher range however, with a 50% increase in viscosity over the 50° F. decrease.
The viscosity drop at the temperature front will therefore be significant only for high viscosity oil. However, the jump will be located within the composition transition zone, and its effect on
analyzable well pressure data will be insignificant.
Analytical and numerical solutions to Eq. 5 are presented, with D described by Eq. 11. These are presented, in part, in FIG. 5 and FIG. 6. Here, t[D ]and r[D ]are defined,
$$t_D = \frac{qB_i t}{2\pi h\phi S_o\alpha^2},\qquad r_D = \frac{r}{\alpha},\qquad(14)$$
and C is concentration, C=φS[o]ρ[o]ω[i].
These solutions are based on r[w]=0. They were incorporated into the present invention with a linear shift, Δr[D]=r[w]/α.
The appropriate boundary condition, used to generate these solutions, is,
$$\rho_o u_o\omega_i - \phi S_o\rho_o D\frac{\partial\omega_i}{\partial r} = \frac{q\rho_o}{2\pi rh},\qquad r = r_w.\qquad(15)$$
This results in solutions in which C, or ω[i], are not constant at r[w], until some finite time, after which ω[i]=1. So, the transition zone is present at the well from the start of injection, and
eventually clears the well after a time corresponding to t[D]≈16 (see FIGS. 5 and 6).
The radius, r, of the center of the transition zone, at t[D ]is,
$$\bar{r} = \sqrt{\frac{qB_i t}{\pi h\phi S_o} - r_w^2} = \alpha\sqrt{2t_D}.\qquad(16)$$
For t[D]=16, r̄≈6α, a result used above.
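Eq. 16 (with r[w]=0) and Eq. 14 can be evaluated directly to recover the r̄≈6α figure at the clearing time t[D]≈16, and to estimate the physical clearing time for a given rate. This is an illustrative sketch; the helper names and the requirement of consistent volume/time units for q are assumptions.

```python
import math

def center_radius(t_D, alpha):
    # Eq. 16 with r_w = 0: center of the composition transition zone.
    return alpha * math.sqrt(2.0 * t_D)

def clearing_time(alpha, q, B_i, h, phi, S_o, t_D=16.0):
    # Invert Eq. 14 for t: elapsed time for the transition zone to clear
    # the well (q and the result must be in consistent units).
    return t_D * 2.0 * math.pi * h * phi * S_o * alpha ** 2 / (q * B_i)

# At the clearing time t_D = 16, the center sits near 6*alpha.
print(center_radius(16.0, 1.0))   # sqrt(32) = 5.66, i.e. roughly 6*alpha
```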
The duration during which the composition transition zone intersects the well is insignificant for large, field scale problems such as waterflooding, and for such the boundary condition ω[i]=1 at r=r
[w ]is appropriate. However, for injection testing, for which early time behavior is important, the solutions presented in FIG. 5 and FIG. 6 are appropriate, and were used to generate the viscosity
profiles incorporated into the well pressure model.
The assumption made above of a sharp thermal front is verified by numerical solutions to Eq. 6, for the application of cold water injection into geothermal reservoirs. Only a thermal transition zone
exists for this case, and the thermal transition thickness, Δr[T], is estimated to be,
$$\Delta r_T \approx 0.055\,r_w\sqrt{t},\qquad(17)$$
where t is in seconds. This estimate is an upper bound for the oil reservoir case, as the product Kβ is generally smaller for an oil saturated system than for a water saturated system. Substituting
for t from Eq. 14, with t[D]=16, and for the width of the composition transition zone, Δr[C]=2r̄ as it clears the well, the ratio of the widths is,
$$\frac{\Delta r_C}{\Delta r_T} \approx \frac{1}{5.5\,r_w}\sqrt{\frac{qB_i}{h\phi S_o}},\qquad(18)$$
where q is in surface B/D. This ratio is large except for low injection rates.
Substituting the reservoir parameters used in Eq. 10, where v/v[T]≈15, and q=500 B/D, B[i]=1, and r[w]=0.25 ft, yields Δr[C]/Δr[T]≈11. Thus, although the temperature front is slower than the
composition front, its transition is much smaller. Although it is possible the temperature transition zone remains intersected with the well after the composition transition zone has cleared the
well, it is assumed in this study that this period is short, and that the effect of the temperature front on well pressure response is not prolonged.
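The Δr[C]/Δr[T]≈11 figure quoted above follows directly from Eq. 18; a quick evaluation with the stated parameters confirms it. This is an illustrative sketch, with unit bookkeeping as in the text (q in surface B/D, lengths in ft).

```python
import math

# Width ratio of composition to thermal transition zones (Eq. 18),
# using the parameters quoted in the text: q = 500 B/D, B_i = 1,
# r_w = 0.25 ft, h = 25 ft, phi = 0.10, S_o = 0.85.
q, B_i, r_w = 500.0, 1.0, 0.25
h, phi, S_o = 25.0, 0.10, 0.85
ratio = (1.0 / (5.5 * r_w)) * math.sqrt(q * B_i / (h * phi * S_o))
print(round(ratio))   # approximately 11, matching the text
```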
A constant rate solution to Eq. 4, at the well, which assumes incompressible flow in the transition zone and in the zone between the transition zone and the well (comprised of 100% injection oil), is,
$$p_{wD} = \frac{1}{2}\left(\ln\frac{t_D'}{r_{D\max}'^{\,2}} + 0.80907\right) + \frac{\mu_t}{\mu_r}\ln\frac{r_{D\max}'}{r_{D\min}'} + \frac{\mu_i}{\mu_r}\ln r_{D\min}' + s.\qquad(19)$$
This is the well pressure model developed in the present invention. Wellbore storage effect is not included in the model. Here, t′[D ]is the conventional dimensionless time, r′[Dmin ]and r′[Dmax ]are
the boundaries of the transition zone expressed as conventional dimensionless radii, μ[i ]is the viscosity of the injection oil at the well injection temperature, and μ[r ]is the viscosity of the
reservoir oil at reservoir temperature. Note that during the time when the transition zone intersects the well, r′[Dmin]=1, and the (μ[i]/μ[r]) ln r′[Dmin] term is zero.
r[Dmin ](t[D]) and r[Dmax ](t[D]) are obtained from a solution of Eq. 5. t′[D ]is obtained from t[D], given α, r[w], q, and reservoir properties.
The viscosity of the transition zone may be represented by a single value μ[t], if the viscosity function is linear with radius in the transition zone. A linear viscosity function, used in this
model, is,
$$\mu(r_D') = \mu_{\min} + \frac{\mu_r - \mu_{\min}}{r_{D\max}' - r_{D\min}'}\left(r_D' - r_{D\min}'\right).\qquad(20)$$
$$\mu_{\min} = C\mu_i + (1 - C)\mu_r.\qquad(21)$$
C(t[D]) is the concentration at dimensionless time as defined in Eq. 14.
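The linear transition-zone viscosity of Eqs. 20–21 translates directly into code. This is an illustrative sketch: the function name is invented, and the inner- and outer-zone values (μ[i] inside the zone, μ[r] outside) are assumptions consistent with the surrounding text.

```python
def mu_profile(rD, rD_min, rD_max, mu_i, mu_r, C):
    """Viscosity vs. dimensionless radius per Eqs. 20-21 (a sketch).
    C is the injection-oil concentration at the current dimensionless time."""
    mu_min = C * mu_i + (1.0 - C) * mu_r                  # Eq. 21
    if rD <= rD_min:
        return mu_i              # inner zone: 100% injection oil (assumed)
    if rD >= rD_max:
        return mu_r              # undisturbed reservoir oil (assumed)
    # Eq. 20: linear variation from mu_min to mu_r across the zone
    return mu_min + (mu_r - mu_min) / (rD_max - rD_min) * (rD - rD_min)
```

For C=1 (transition zone fully cleared of reservoir oil at its inner edge), the profile interpolates linearly between μ[i] and μ[r].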
Interpretation of the injection test may be performed from a rearrangement of Eq. 19, with substitutions involving the radius of the center of the transition zone, r̄(t′[D]),
$$r_{D\min}' = \chi_{\min}(t_D')\,\frac{\bar{r}(t_D')}{r_w},\qquad r_{D\max}' = \chi_{\max}(t_D')\,\frac{\bar{r}(t_D')}{r_w}.\qquad(22)$$
χ[min] and χ[max] are scalar functions of t′[D]. Note that 0 ≤ χ[min](t[D]) < 1 and χ[max](t[D]) > 1.
When r̄² >> r[w]², the substitutions result in the following,
$$p_{wD} = \frac{1}{2}\left(\frac{\mu_i}{\mu_r}\ln t_D' + 0.80907\right) + \frac{1}{2}\left(\frac{\mu_i}{\mu_r} - 1\right)\ln A + B + s,$$
$$A = \frac{qB_i\mu_r c_t}{\pi khS_o},\qquad B = \frac{1}{2}\left(\frac{\mu_i}{\mu_r} - \frac{\mu_t}{\mu_r}\right)\ln\chi_{\min} + \frac{1}{2}\left(\frac{\mu_t}{\mu_r} - 1\right)\ln\chi_{\max}.\qquad(23)$$
Note that this p[wD ]model is similar to the log approximation solution to the diffusivity equation, except here the semi-log slope is multiplied by μ[i]/μ[r], and the semi-log intercept includes two
additional terms. Note also the derivative-time product is,
$$\frac{\partial p_{wD}}{\partial t_D'}\,t_D' = \frac{1}{2}\frac{\mu_i}{\mu_r}.\qquad(24)\qquad\qquad \frac{\partial p_w}{\partial t}\,t = \frac{qB_i\mu_i}{4\pi kh}.\qquad(25)$$
So, the pressure derivative plot is diagnostic, that is, constant at ½(μ[i]/μ[r]), for the time when Eq. 23 is valid. During this time, analysis will yield the reservoir permeability k, assuming μ[i] is known, as indicated in Eq. 25.
Use of pressure transient analysis applications to perform this analysis is straightforward, using the following,
$$k = \frac{\mu_i}{\mu_r}\,k',\qquad(26)$$
where k′ is the estimated reservoir permeability, from the time region in which Eq. 23 is valid.
Further, this estimate of k allows the computation of A, given estimates of the remaining parameters of that term. Typical values of total compressibility, c[t], for a single phase oil system ensure
that A is a small number and that ln A is relatively large in magnitude. The term B, however, is generally much smaller in magnitude, and may be ignored. Note first that the terms in B necessarily
have opposing signs. Secondly, the magnitudes of the coefficients of the log terms of B are both necessarily smaller than the coefficient of ln A. Finally, it can be shown from FIGS. 5 and 6 that
χ[min]>0.13 and χ[max]<1.9 for t[D]>32, when the transition zone is still near the well. So, the magnitudes of the log terms in B do not exceed 2.
When B is ignored, well skin s may be estimated from the semi-log intercept. This can be done using the following,
$$s = s' - \frac{1}{2}\left(\frac{\mu_i}{\mu_r} - 1\right)\ln A,\qquad(27)$$
where s′ is the estimated skin from a pressure transient analysis.
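Taken together, Eqs. 25–27 amount to a simple correction recipe: run a conventional pressure transient analysis over the valid time window, then rescale the estimated k′ and shift the estimated s′ by the viscosity ratio. A minimal sketch in consistent units; the helper names are illustrative, not part of the patent.

```python
import math

def correct_permeability(k_prime, mu_i, mu_r):
    # Eq. 26: conventional analysis yields k'; scale by mu_i/mu_r.
    return (mu_i / mu_r) * k_prime

def correct_skin(s_prime, mu_i, mu_r, A):
    # Eq. 27: remove the ln A contribution from the semi-log intercept.
    return s_prime - 0.5 * (mu_i / mu_r - 1.0) * math.log(A)

def A_term(q, B_i, mu_r, c_t, k, h, S_o):
    # The A term of Eq. 23, in consistent units (illustrative).
    return q * B_i * mu_r * c_t / (math.pi * k * h * S_o)
```

For example, with μ[i]/μ[r]=2 and an apparent k′=10 md, Eq. 26 gives k=20 md; with μ[i]=μ[r], the skin correction of Eq. 27 vanishes.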
The transition zone viscosity function is assumed to be piecewise linear in some aspects of the present invention, with a shallow sloped function at r′[Dmin] and a steeper sloped function at
r′[Dmax], to approximate more closely the behavior of C in FIGS. 5 and 6. This viscosity function does not require any modification to Eqs. 26 and 27, as it only modifies the term B. The function
serves only to smooth the p[wD] response as the transition zone clears the well.
The dispersion coefficient α is scale dependent, such that it is proportional to the distance over which the composition front travels. FIG. 7 shows measured α data at various scales. The echo
dispersivity (dispersion), single well tracer test (SWTT) data is most relevant, as these data are computed from tests in which a tracer is injected, and then produced, from a single well. The
distance of travel in this case is twice the maximum radial extent of the tracer front. As illustrated in FIG. 7, laboratory and field data correlate well.
The range of α applicable to injection testing conditions should generally correspond to the SWTT data and smaller, as the transition zone most affects the well pressure response as it intersects and
is near the well. The data at smaller scales than SWTT in FIG. 7 correspond to laboratory data.
The applicable range of the dispersivity data in FIG. 7, for injection testing, should be 0.003<α<0.3 m, or 0.01<α<1 ft. The maximum value of this range corresponds to a front travel distance of 15
ft, approximately that for the conditions q=100 B/D, φ=0.10, S[o]=0.85, h=10 ft, t=24 hr, which should represent an extreme case, as the interval is relatively thin, the injection rate relatively
high, and the effect of the transition zone is generally null much sooner than 24 hr.
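The 15-ft travel distance quoted above can be checked with a simple volumetric balance, assuming piston-like radial displacement of the injected oil within the oil-filled pore space. This balance is a reconstruction for illustration; the patent does not spell the formula out.

```python
import math

BBL_TO_FT3 = 5.6146  # barrels to cubic feet

def front_radius_ft(q_bpd, t_hr, phi, s_o, h_ft):
    # Injected volume, spread radially through the oil-filled pore space
    # (porosity phi times oil saturation s_o) over interval thickness h.
    v_ft3 = q_bpd * (t_hr / 24.0) * BBL_TO_FT3
    return math.sqrt(v_ft3 / (math.pi * phi * s_o * h_ft))

# q=100 B/D, phi=0.10, So=0.85, h=10 ft, t=24 hr -> roughly 15 ft.
r = front_radius_ft(100.0, 24.0, 0.10, 0.85, 10.0)
```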
The dimensionless pressure derivative estimate from Eq. 19 for various α is presented in FIG. 8, for μ[i]/μ[r]=2. Note the effect of the composition transition zone is to gradually shift the
derivative from an initial plateau of 0.5, to a second plateau at
$0.5\,\frac{\mu_i}{\mu_r},$
in this case, 1.0. The duration of the transition time from the first plateau to the second, increases with increasing α.
The initial plateau is derived from the well response associated with the reservoir oil viscosity. Practically, the initial plateau will not be detectable as it exists early enough to be masked by
wellbore storage and skin effects. The second plateau, derived from the well response associated with injection oil viscosity, will be sustained until reservoir boundary effects become significant.
Dimensionless well pressure response is also permeability-thickness and rate dependent. This is seen in Eq. 19, as r′[Dmin ]and r′[Dmax ]are functions of r[D], which is a function of t[D]. The
definition of t′[D], and Eq. 14, yield
$r'_D = \frac{\alpha\, r_D(t_D)}{r_w}, \qquad t'_D = t_D\,\frac{kh}{q \lambda \alpha^2}, \qquad \lambda = 2\pi S_o B_i \mu_r c_t r_w^2. \quad (28)$
The dimensionless pressure curves will be unique for the ratio
$\frac{kh}{q\lambda},$
for a given α.
Note from Eq. 14 that the effect of the transition zone is dependent only on the ratio q/h, as the width and velocity of the transition zone is dependent on t[D ](r[D]), shown in FIGS. 5 and 6. The
transition zone behavior, and therefore its effect on well response, is not dependent on k.
Piston-like displacement is represented in FIG. 9, in which α is a very small number. The derivative results do not change significantly with α when α<0.001.
The effect of μ[i]/μ[r ]on the curve shape is to change the vertical step of the transition, although the width of the transition is not affected. This is seen in FIG. 10, for which μ[i]/μ[r]=4.
The curves in FIGS. 8-10 were generated numerically from Eq. 19. The spurious sections of the curves are caused by the assumption of piecewise linearity of the viscosity function within the
composition transition zone. The viscosity function is therefore not smooth at the transition boundaries. The spurious sections begin and end when the transition clears the well. A smoother viscosity
transition at the inner boundary of the transition zone would eliminate the spikes. Note that the onset of the second plateau coincides with the spikes, that is, the effect of the composition
transition zone on well pressure response is small after the zone clears the well.
The proximity of the transition period and second plateau to wellbore storage and skin effects may be seen from FIG. 11, compared to FIG. 8. FIG. 8 indicates that, in general, the second plateau is
established after t′[D]=1×10^5. The dimensionless wellbore storage coefficient, C[D], corresponding to an injection TST in 10000 ft of 3½ in. tubing, the practical maximum length of tubing expected
for the test program, is C[D]≈500, for example. FIG. 11 indicates the storage effect ends at t′[D]/C[D]≈1000 for most values of skin, and thus at t′[D]≈5×10^5 for C[D]=500. So, the wellbore storage
effect is estimated to end prior to attainment of the second plateau, in general, for the test program.
Storage and skin effects should therefore be insignificant when the second plateau is established. This comparison also suggests the initial plateau period and transition period may be masked by
wellbore storage effect, although this is of no consequence since the second plateau yields interpretable data.
Injection test rates for anticipated well and reservoir conditions may be estimated under the criteria of minimizing injection period duration, while retaining useful pressure transient data.
Reservoir permeability and oil properties in the sandstone reservoirs are currently uncertain, so analogous basin equivalent values may apply. Permeability is therefore estimated to vary from 1 md to
100 md. Analogous basin reservoir oil tends to be paraffinic, and the viscosity at reservoir conditions may exceed 1 cp.
Reservoir geometry will affect the transient data, and generally consists of two parallel faults. The wells will be drilled within 100 m. of the trapping fault for the system. The other fault is
generally a greater distance, approximately by a factor of 10, or greater, from the well. These two faults are resolved with seismic interpretation. As the faults are generally short, and parallel, a
rectangular reservoir boundary cannot be formed, so the system is otherwise open. However, lack of sand continuity will likely limit the reservoir extent in directions both parallel and orthogonal to
the faults. Thus, a stratigraphic boundary will more likely be detected during the test than will the far fault. Sand continuity cannot be adequately resolved with seismic data to predict
stratigraphic boundary effects.
Test data will likely exhibit the effect of the trapping fault, but not the second fault. Only extremely limited sands, on the order of the distance to the trapping fault, will affect the test data.
Wellbore storage effects are considered at the maximum anticipated test depths, which will correspond to not more than 10000 ft of 3½ in. tubing. The liquid compressibility of SARALINE 185V is
assumed to apply, resulting in a dimensionless storage coefficient C[D]≈500.
Well skin is estimated to be a maximum +20, which has been measured on some analogous basin wells.
FIG. 12 and FIG. 13 show the injection pressure and derivative response for a paraffinic oil at various values of kh and skin effect, s, from the pressure transient analysis application Saphir. FIG.
12 shows the response for 20<kh<2000 md·ft, given s=20. FIG. 13 shows the effect of 0<s<20, for kh=20 md·ft. The test duration is 24 hours.
The responses in FIGS. 12 and 13 do not include the effect of oil composition gradient.
Note that for kh=2000 md·ft, the effect of the trapping fault is realized, in approximately 5 hours. A subsequent constant derivative period, expected to follow this effect, does not form before 24
hrs. Thus, for well tests constrained to durations below 20 hours, the constant derivative period preceding the fault effect must be analyzable. Note that this preceding period is not formed for kh=
20 md·ft. However, FIG. 13 indicates that for the smaller skin value s=0, the constant derivative period is barely reached in 24 hours. The kh=20 md·ft case is therefore essentially not interpretable
from short term test data.
The effect of the oil composition transition zone is included in the transient response presented in FIG. 14, for various q/h and α=1, which represents the case with the greatest anticipated effect
of the transition zone.
The effect of wellbore storage is not included in FIG. 14. The use of FIGS. 12-14 combined allows for the investigation of both wellbore storage and oil composition transition.
Note in FIG. 14 that higher injection rates cause the second plateau to be reached sooner than lower injection rates. This is an advantage to injection tests with higher rates, and represents a major
difference relative to conventional production rate testing, in which rate does not affect the time at which the derivative becomes constant.
The constant derivative period in FIG. 12 occurs before 1 hour, at the earliest. This period is intact until it is disturbed, in the kh=2000 md·ft case, by the fault effect. Therefore, it is desired
that the injection rate be such that the oil composition effect has completely transpired before 1 hour. FIG. 14 indicates the value of q/h should then exceed 10. The rate associated with h=20 ft,
for example, should then exceed 200 B/D.
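The rate sizing above reduces to simple arithmetic. The threshold of 10 B/D per ft below is the value read from FIG. 14 as described in the text, not a general constant.

```python
def min_injection_rate_bpd(h_ft, q_over_h_threshold=10.0):
    # q/h should exceed the threshold so the oil composition effect has
    # completely transpired before the constant derivative period (~1 hr).
    return q_over_h_threshold * h_ft

# For a 20-ft interval the rate should exceed 200 B/D.
q_min = min_injection_rate_bpd(20.0)
```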
As the curves in FIG. 14 are estimated injection well pressure responses using Eq. 19, the estimates of permeability and skin from Eqs. 26 and 27 may be tested using these pressure data, from the
second plateau region. Table 1 in FIG. 15 presents the results of these tests for each curve presented. The times at which the interpretations are made are t≦10 hr. Note that the predictions are
acceptable, indicating that the assumption of B being negligible in Eq. 23, is acceptable.
Note also that the case corresponding to a test time of 5 hours and q=200 B/D, which yields a ratio q/h=10, yields acceptable estimates of k and s.
Although the invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood
that such detail is solely for that purpose and that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements
that are within the spirit and scope of the appended claims. For example, though reference is made herein to a computer, this may include a general purpose computer, a purpose-built computer, an ASIC
including machine executable instructions and programmed to execute the methods, a computer array or network, or other appropriate computing device. As a further example, it is to be understood that
the present invention contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.
APPENDIX Nomenclature
A Eq. 23
B Eq. 23
B[i ]FVF of injection oil
C concentration, C=φS[o]ρ[o]ω[i ]
c[po ]specific heat of the oil phase
c[pw ]specific heat of the water phase
c[pR ]specific heat of the rock
c[t ]total system compressibility,
$c_t = S_w c_w \frac{\rho_w}{\rho_o} + S_o c_o + \frac{1-\phi}{\phi}\, c_R \frac{\rho_R}{\rho_o}$
c[w ]compressibility of water
c[O ]compressibility of reservoir oil
c[R ]compressibility of rock
D coefficient of diffusion
h reservoir thickness
H[o ]specific enthalpy of the oil phase
k reservoir permeability
k′ reservoir permeability estimated from conventional pressure transient analysis
K heat conduction coefficient of the oil, water, rock system
p reservoir pressure
p[wD ]dimensionless well pressure,
$p_{wD} = \frac{2\pi kh}{q B_i \mu_r}\,(p_i - p_w)$
p[i ]initial reservoir pressure
p[w ]well injection pressure
q surface injection rate
r radius
r[w ]wellbore radius
r radius of the center of the composition transition zone
r[D ]Tang-Peaceman dimensionless radius, Eq. 14
r′[Dmin ]minimum dimensionless radius of the composition transition zone,
$r'_{D\min} = \frac{r_{\min}}{r_w}$
r′[Dmax ]maximum dimensionless radius of the composition transition zone,
$r'_{D\max} = \frac{r_{\max}}{r_w}$
r[max ]maximum radius of the composition transition zone
r[min ]minimum radius of the composition transition zone
Δr[T ]thickness of the thermal transition zone, Eq. 17
Δr[C ]thickness of the compositional transition zone
s skin factor
s′ skin factor estimated from conventional pressure transient analysis
S[o ]oil saturation, fraction
S[w ]water saturation, fraction
t time
t[D ]Tang-Peaceman dimensionless time, Eq. 14
t′[D ]dimensionless time,
$t'_D = \frac{kt}{\phi \mu c_t r_w^2}$
T temperature of the system
T[i ]temperature of the injection oil at the point of injection
T[r ]temperature of the reservoir prior to injection
U[o ]specific internal energy of the oil phase
U[w ]specific internal energy of the water phase
U[R ]specific internal energy of the rock
v interstitial velocity of the injection oil component
v[T ]velocity of the temperature front
α coefficient of mechanical radial dispersion
β Eq. 7
χ[min ]Eq. 22
χ[max ]Eq. 22
φ porosity, fraction
μ[o ]oil phase viscosity
μ[i ]viscosity of injection oil component at T[i ]
μ[r ]viscosity of reservoir oil component at T[r ]
μ[min ]viscosity of oil phase at the minimum radius of the composition transition zone
ρ[o ]density of the oil phase
ρ[w ]density of the water phase
ρ[R ]density of the rock
ω[j ]mass fraction of component j in the oil phase
ω[jw ]mass fraction of component j absorbed into the water phase
ω[jR ]mass fraction of component j adsorbed onto the rock
1. A method of determining reservoir permeability and geometry of a subterranean formation having a reservoir fluid including oil that has not been previously water-flooded, the method comprising:
isolating hydraulically the subterranean formation to be tested;
providing an injection oil at a substantially constant rate to the formation being tested, wherein the injection oil is miscible with the oil at the tested formation;
sealing, at the top, the tested formation from further oil injection;
measuring pressure data in the tested formation including pressure injection data and pressure falloff data; and
determining the reservoir permeability and geometry of the tested formation based on an analysis of the measured pressure injection data and the measured pressure falloff data using a well
pressure model.
2. The method of claim 1, wherein the providing occurs at a wellhead located above the formation being tested.
3. The method of claim 1, wherein the injection oil has a viscosity greater than the oil.
4. The method of claim 1, further comprising:
obtaining the injection oil from the tested formation prior to providing the injection oil to the tested formation.
5. The method of claim 1, wherein at least one of additives including bentonite and hectorite based organoclays or polar activators including ethanol and triethylene glycol are combined with the
injection oil to increase a viscosity of the injection oil.
6. The method of claim 1, wherein the permeability is estimated based on a ratio of an inferred viscosity of the injection oil and a viscosity of the oil.
7. The method of claim 1, wherein the well pressure model is $p_{wD} = \frac{1}{2}\left(\ln\frac{t'_D}{r'_{D\max}} + 0.80907\right) + \frac{\mu_t}{\mu_r}\ln\frac{r'_{D\max}}{r'_{D\min}} + \frac{\mu_i}{\mu_r}\ln r'_{D\min} + s,$
wherein t′D is a dimensionless time, r′Dmin and r′Dmax are boundaries of a transition zone expressed as dimensionless radii, μi is a viscosity of the injection oil at a well injection
temperature, and μr is a viscosity of the reservoir fluid at reservoir temperature.
8. The method of claim 1, further comprising measuring at least one of a bottom hole pressure, a bottom hole temperature, a surface oil injection rate, or a surface tubing pressure.
9. The method of claim 8, wherein a viscosity of the injection oil is inferred from the measured bottom hole temperature.
10. A system for determining a reservoir permeability and geometry of a subterranean formation having a reservoir fluid including oil that has not been previously water-flooded, the system comprising:
an injector constructed and arranged to inject an injection oil at a substantially constant rate from a wellhead into the formation being tested, wherein the injection oil is miscible with the
oil at the tested formation;
one or more sensors constructed and arranged to measure data in the tested layer including pressure injection data and pressure falloff data; and
a machine readable medium having machine executable instructions constructed and arranged to determine the reservoir permeability and geometry of the tested formation based on an analysis of the
measured pressure injection data and the measured pressure falloff data using a well pressure model stored in a memory coupled to a processor.
11. The system of claim 10, wherein the injection oil has a viscosity greater than the oil.
12. The system of claim 10, further comprising:
an extractor configured to extract the injection oil from the tested formation prior to the injector injecting the injection oil into the tested formation.
13. The system of claim 10, wherein at least one of additives including bentonite and hectorite based organoclays or polar activators including ethanol and triethylene glycol are combined with the
injection oil to increase a viscosity of the injection oil.
14. The system of claim 10, wherein the permeability is estimated based on a ratio of an inferred viscosity of the injection oil and a viscosity of the oil.
15. The system of claim 10, wherein the well pressure model is $p_{wD} = \frac{1}{2}\left(\ln\frac{t'_D}{r'_{D\max}} + 0.80907\right) + \frac{\mu_t}{\mu_r}\ln\frac{r'_{D\max}}{r'_{D\min}} + \frac{\mu_i}{\mu_r}\ln r'_{D\min} + s,$
wherein t′D is a dimensionless time, r′Dmin and r′Dmax are boundaries of a transition zone expressed as dimensionless radii, μi is a viscosity of the injection oil at a well injection
temperature, and μr is a viscosity of the reservoir fluid at reservoir temperature.
16. The system of claim 10, wherein the one or more sensors are further configured to measure at least one of a bottom hole pressure, a bottom hole temperature, a surface oil injection rate, or a surface tubing pressure.
17. The system of claim 16, wherein a viscosity of the injection oil is inferred from the measured bottom hole temperature.
Referenced Cited
U.S. Patent Documents
3368621 February 1968 Reisberg
5477922 December 26, 1995 Rochon
5501273 March 26, 1996 Puri
7272973 September 25, 2007 Craig
20040186666 September 23, 2004 Manin
20050279161 December 22, 2005 Chen et al.
Foreign Patent Documents
0286152 October 1988 EP
WO2005095757 October 2005 WO
WO2007134747 November 2007 WO
Other references
• Weinstein, H.G., “Cold Waterflooding a Warm Reservoir,” paper SPE 5083, presented at the SPE 49th Annual Fall Meeting, Oct. 6-9, 1974.
• Bratvold et al., “Analysis of Pressure-Falloff Tests Following Cold-Water Injection,” SPEFE, Sep. 1990, pp. 293-302.
• Benson et al., “Nonisothermal Effects During Injection and Falloff Tests”, SPE Formation Evaluation, Feb. 1986, pp. 53-63.
• M. Abbaszadeh et al., “Pressure-Transient Testing of Water-Injection Wells”, SPE Reservoir Engineering, Feb. 1989, pp. 115-134.
• Nanba et al., “Estimation of Water and Oil Relative Permeabilities From Pressure Transient Analysis of Water Injection Well Data”, paper SPE 19829, presented at the SPE 64th Annual Technical Conference in San Antonio, TX, Oct. 8-11, 1989, pp. 631-646.
• Tang et al., “New analytical and Numerical Solutions for the Radial Convection-Dispersion Problem”, SPE Reservoir Engineering, Aug. 1987, pp. 343-359.
• Beggs et al., “Estimating the Viscosity of Crude Oil Systems”, J. of Petroleum Technology, Sep. 1975, pp. 1140-1141.
• Brigham et al., “Mixing Equations in Short Laboratory Cores”, Society of Petroleum Engineers Journal, vol. 257, Part II, Feb. 1974, pp. 91-99.
• Mahadevan et al., “Estimation of True Dispersivity in Field Scale Permeable Media”, paper SPE 75247, presented at the 2002 SPE/DOE Improved Oil Recovery Symposium, Tulsa, OK, Apr. 13-17, 2002.
• Bourdet et al., “A new set of type curves simplifies well test analysis”, World Oil, May 1983, pp. 95-105.
• International Search Report and Written Opinion for International Application No. PCT/US2009/042025 dated Sep. 7, 2010, 9 pages.
Patent History
Patent number: 8087292
Filed: Apr 30, 2008
Date of Patent: Jan 3, 2012
Patent Publication Number: 20090272528
Assignee: Chevron U.S.A. Inc. (San Ramon, CA)
Inventor: Joe Voelker
Primary Examiner: Robert Raevis
Attorney: Pillsbury Winthrop Shaw Pittman LLP
Application Number: 12/112,644
On Yao's XOR-Lemma
A fundamental lemma of Yao states that computational weak-unpredictability of Boolean predicates is amplified when the results of several independent instances are XORed together. We survey two known proofs of Yao's Lemma and present a third alternative proof. The third proof proceeds by first proving that a function constructed by concatenating the values of the original function on several independent instances is much more unpredictable, with respect to specified complexity bounds, than the original function. This statement turns out to be easier to prove than the XOR-Lemma. Using a result of Goldreich and Levin (1989) and some elementary observations, we derive the XOR-Lemma.
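The quantitative content of the lemma can be illustrated with a toy simulation (this is only an information-theoretic analogy, not one of the surveyed proofs): if a predictor guesses each independent instance correctly with probability 1/2 + ε, then the natural XOR-of-guesses predictor is correct on the XOR of t instances with probability 1/2 + (2ε)^t / 2, so its advantage decays exponentially in t.

```python
import random

random.seed(0)

def xor_advantage(eps, t, trials=200000):
    # Each per-instance guess is independently correct with prob 1/2 + eps;
    # the XOR-of-guesses predictor is correct iff an even number of the
    # per-instance guesses are wrong.
    correct = 0
    for _ in range(trials):
        wrong = sum(random.random() >= 0.5 + eps for _ in range(t))
        correct += (wrong % 2 == 0)
    return correct / trials - 0.5

# Advantage ~ (2*eps)**t / 2: about 0.2 for t=1 and 0.032 for t=3 at eps=0.2.
```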
Original language: English
Title of host publication: Studies in Complexity and Cryptography
Subtitle of host publication: Miscellanea on the Interplay between Randomness and Computation
Editors: Oded Goldreich
Pages: 273-301
Number of pages: 29
State: Published - 2011
Publication series
Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 6650 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
• Direct Product Lemma
• Hard-Core Predicates
• Hard-Core Regions
• One-Way Functions
• Yao's XOR Lemma
How to Multiply and Divide Positive and Negative Numbers
Multiplying and dividing positive and negative numbers is a simple operation with two numbers. With three or more, it is also straightforward, but you use the Even-Odd Rule.
With two numbers, the rules for multiplying and dividing positive and negative numbers are not only simple, but they’re also the same for both operations:
• When multiplying or dividing two numbers, if the two signs are the same, the result is positive, and if the two signs are different, the result is negative.
• When multiplying and dividing more than two numbers, count the number of negatives to determine the final sign: An even number of negatives means the result is positive, and an odd number of
negatives means the result is negative.
You multiply and divide positive and negative numbers “as usual” except for the positive and negative signs. So ignore the signs and multiply or divide. Then, if you're dealing with two numbers, the
result is positive if the signs of both numbers are the same, and the result is negative if the signs of both numbers are different.
Check out the following examples:
8 multiplied by 2 is 16; if the signs of the two numbers differ (say, –8 and 2), the answer is negative: –16.
5 multiplied by 11 is 55; if the signs of the two numbers are the same (say, –5 and –11), the result is positive: +55.
24 divided by 3 is 8; if the two numbers have different signs (say, 24 and –3), the result is negative: –8.
30 divided by 2 is 15; if the two numbers have the same sign (say, –30 and –2), the answer is positive: +15.
When multiplying and dividing more than two positive and negative numbers, use the Even-Odd Rule: Count the number of negative signs — if you have an even number of negatives, the result is positive,
but if you have an odd number of negatives, the result is negative.
The following examples show you how to use the Even-Odd Rule:
A product with just one negative sign (say, (–2) × 3 × 4): because one is an odd number, the answer is negative.
Two negative signs (say, (–2) × (–3) × 4) mean a positive answer, because two is an even number.
An even number of negative signs always means a positive answer.
Three negative signs (say, (–1) × (–2) × (–3)) mean a negative answer.
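Both rules can be captured in a few lines of Python. This snippet is just an illustration of the Even-Odd Rule, not part of the original article:

```python
def product_sign(numbers):
    # Even-Odd Rule: an even count of negative factors gives a positive
    # result; an odd count gives a negative result. The same rule covers
    # division, since dividing by n is multiplying by 1/n (same sign).
    negatives = sum(1 for n in numbers if n < 0)
    return 1 if negatives % 2 == 0 else -1

# One negative factor -> negative; two negative factors -> positive.
```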
Azimuthal Shear - Warning Decision Training Division (WDTD)
Products Guide
Azimuthal Shear
Short Description
Maximum azimuthal shear (rotation divided by diameter; s^-1) in the low-level (0–2 km) or mid-level (3–6 km) AGL layer.
Low-Level Azimuthal Shear (0–2 km AGL)
Mid-Level Azimuthal Shear (3–6 km AGL)
Primary Users
NWS: WFO, SPC
Input Sources
WSR-88D radar data
Terrain elevation files
Spatial Resolution: 0.005° latitude (~555 m) x 0.005° longitude (~504 m at 25°N and 365 m at 49°N)
Temporal Resolution: 2 minutes
Product Creation
Azimuthal shear is calculated using a Linear Least Squares Derivative method (LLSD; Smith and Elmore 2004) on radial velocity data from individual radars and then blended into a large multi-radar
mosaic for the contiguous United States (CONUS). The blending process results in a field of maximum shear.
To create the products, raw velocity data from a single radar (Fig. 1) are first passed through a 3x3 median filter to reduce spurious noise in the raw velocity data.
Fig. 1: Schematic example of raw radial velocity data from a single radar plotted against azimuth angle.
This example shows a velocity couplet with a centroid located between the 3rd and 4th azimuth.
Fig. 2: Schematic illustrating (a) the raw velocity data and (b) the velocity data after applying the
median filter in the LLSD algorithm for i, j = 3, 4. The red highlighted region shows the range
gates that are used in the 3x3 filter for i, j = 3, 4.
Once the velocity data has been filtered, a 2-D plane is fit to the velocity data to calculate resolvable velocities. The slope of this plane is the Azimuthal Shear (Fig. 3).
Fig. 3: 2-D Plane fitted to the raw velocity data (Fig. 1) after applying the filter (Fig. 2). The slope of the
plane is equal to the Azimuthal Shear.
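A minimal sketch of the plane fit is below. It is not the operational MRMS code; the 3x3 neighborhood, the coordinate choices, and the uniform weighting are simplifying assumptions made for illustration.

```python
import numpy as np

def llsd_shear_at(vel, i, j, r, daz, dr):
    """Azimuthal shear (s^-1) at gate (i, j) from a 3x3 LLSD plane fit.

    vel: radial velocities indexed [azimuth, range] (m/s); r: range to the
    center gate (m); daz: azimuth spacing (rad); dr: gate spacing (m).
    A plane v = a*s + b*x + c is fit by least squares, where s is the
    along-azimuth arc length (r*daz per azimuth step) and x is the
    along-range offset; the slope a is the azimuthal shear.
    """
    rows, vals = [], []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            rows.append([di * r * daz, dj * dr, 1.0])
            vals.append(vel[i + di, j + dj])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(vals), rcond=None)
    return coef[0]  # slope along azimuth = azimuthal shear
```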
A reflectivity mask is then applied to account for the unreliability of radial velocities with poor signal. Earlier versions of the LLSD algorithm would remove azimuthal shear values that were
co-located with reflectivity less than 20 dBZ. However, such logic proved too draconian, especially for mesocyclones embedded in weak echo regions. Therefore, the process was modified to dilate
the quality-controlled polar reflectivities (i.e., spread locally high values outward) so reflectivities in weak echo regions adjacent to storm cores would still be retained. Once dilation is
complete, azimuthal shear values still co-located with reflectivity less than 20 dBZ are removed (Fig. 4).
Fig. 4: Schematic illustrating the reflectivity masking process. (a) Range gates with green values are
automatically retained because reflectivity is greater than 20 dBZ, while range gates with red values
are less than 20 dBZ pre-dilation. (b) Same as (a), except range gates with yellow values are greater
than 20 dBZ after dilation. (c) After dilation green range gates retained and red gates not retained.
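The masking step can be sketched as a maximum-filter dilation followed by thresholding. The 3x3 footprint and the single dilation pass are assumptions for illustration; the operational algorithm's kernel may differ.

```python
import numpy as np

def mask_low_reflectivity(shear, refl, threshold=20.0):
    # Dilate reflectivity with a 3x3 maximum filter so gates adjacent to
    # a storm core are retained, then drop shear where the dilated
    # reflectivity is still below the threshold.
    p = np.pad(refl, 1, mode="edge")
    n = refl.shape
    dilated = np.max(
        [p[di:di + n[0], dj:dj + n[1]] for di in range(3) for dj in range(3)],
        axis=0,
    )
    return np.where(dilated >= threshold, shear, np.nan)
```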
Layer maxima are then calculated for the 0–2 km and 3–6 km AGL layers, producing a 2D polar field of data for each layer (Fig. 5).
Fig. 5: Schematic illustrating how the layer maximum is determined. Values within green ovals are the
final Azimuthal Shear values for the particular layer and range.
Lastly, the data from the single-radar layer products are blended for all radars in the CONUS to produce the final 2D products.
Note: Using multiple radar viewpoints and data from slightly different times can cause Azimuthal Shear to have more than one peak for a mesocyclone (Fig. 6).
Fig. 6: 0.5° SRM and Low-Level Azimuthal Shear (0-2 km) for a supercell thunderstorm on 14 April 2012.
There is only one circulation in the SRM product, while the Low-Level Azimuthal Shear product shows
two separate circulations.
Technical Details
Latest Update: MRMS Version 11.5
Lakshmanan, V., T. Smith, K. Hondl, G. Stumpf, A. Witt, 2006: A Real-time, three dimensional, rapidly updating, heterogeneous radar merger technique for reflectivity, velocity and derived products.
Wea. Forecasting, 21, 802-823.
Newman, J., V. Lakshmanan, P. Heinselman, M. Richman, T. Smith, 2013: Range-correcting azimuthal shear in Doppler radar data. Wea. Forecasting, 28, 194-211.
Smith, T., and K.L. Elmore, 2004: The use of radial velocity derivatives to diagnose rotation and divergence. Preprints, 11th Conf. on Aviation, Range, and Aerospace, Hyannis, MA, Amer. Meteor. Soc.,
CD-ROM, P5.6.
Python: Machine Learning
Let’s learn some machine learning to evaluate player overall ratings in FIFA video game
Machine learning is the science of studying algorithms and models that enable computers to recognize things, make decisions, and even predict results without explicit instructions. For example, when you talk to a phone assistant such as Siri or Cortana, machine learning helps translate your voice into text and further understand what you requested. Isn't that amazing?
Let’s get on to it!
A little background
Assume that there’s a formula to calculate the “Overall” ratings for soccer players by EA Sports (The developer of FIFA 2019). With this formula, we can easily calculate the overall ratings for any
player even if he/she is not in the game. The problem is, we don’t know what exactly the formula looks like.
We know the input which consists of player attributes and the output which is the Overall ratings. Then we can use an approach called “regression” to “estimate” the formula based on the input/output.
Today, we are going to use a simple model called Linear Regression. Let assume the formula that calculates the overall ratings of soccer player ( y = f(x)) is [ f(x) = ax + b ] The linear regression
aims to figure out (a) and (b). The formula (f(x)) is called “model” in machine learning, and the process of solving/estimating the model is called “training” the model. Once we trained the model, we
can use it to predict target (y) of new data.
Back to our story, if we only have 1 variable (x), estimate (f(x)) should be easy. Everyone should be able to solve it with a pen and a piece of paper. However, when (x) is a long list of attributes
of soccer players like speed, power, passing, tackling, it becomes complicated. The formula should be rewritten into [ f(x_1, x_2, …, x_n) = a_1 * x_1 + a_2 * x_2 + … + a_n * x_n + b ] Then
we have to feed the model with a lot of high-quality data to make the model more closer to the “real” formula. Let’s get started!
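Here is a small end-to-end sketch of the idea with synthetic data. The attribute names and weights are made up for illustration; a real workshop session would load the FIFA dataset instead. Plain least squares via NumPy stands in for a full regression library:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical players with three attributes, e.g. speed, passing, tackling.
X = rng.uniform(40, 99, size=(200, 3))

# Pretend this is EA's hidden formula y = a.x + b (the weights are invented).
true_a = np.array([0.3, 0.5, 0.2])
true_b = 5.0
y = X @ true_a + true_b

# "Training": append a column of ones for the intercept b and solve by
# least squares for the coefficients a_1..a_n and b.
A = np.hstack([X, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a_hat, b_hat = coef[:3], coef[3]
# With noise-free data, a_hat recovers true_a and b_hat recovers true_b.
```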
The domain of definition of f(x) = log₃∣logₑx∣
We have f(x) = log₃∣logₑx∣.
Clearly, f(x) is defined if x > 0 and ∣logₑx∣ > 0, i.e., if x > 0 and x ≠ 1.
Hence, the domain of f is (0, ∞) − {1}.
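The domain can also be checked numerically: math.log raises ValueError exactly where f is undefined (x ≤ 0, or x = 1, where ∣logₑx∣ = 0).

```python
import math

def f(x):
    # f(x) = log base 3 of |ln x|
    return math.log(abs(math.log(x)), 3)

def is_defined(x):
    try:
        f(x)
        return True
    except ValueError:
        return False
```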
Updated On: Nov 17, 2022
Topic: Relations and Functions
Subject: Mathematics
Class: Class 12
Avg. Video Duration 6 min | {"url":"https://askfilo.com/math-question-answers/the-domain-of-definition-of-f-x-log-3-left-log-e-x-right-is","timestamp":"2024-11-09T14:05:38Z","content_type":"text/html","content_length":"368606","record_id":"<urn:uuid:30580746-14a1-4f15-86cc-87a2240da48c>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00726.warc.gz"} |
A task allocation scheme for hypercube distributed computing systems using the affinity graph model
A task allocation algorithm is presented which is based on the affinity graph model for hypercube distributed computing systems. In the affinity graph model, the vertices represent the modules in the
task to be allocated. The weight of the edges represents the affinity the modules represented by the vertices have for each other. The affinity function has been defined in such a way that both the
competing demands of load balancing and minimizing interprocessor communication are addressed. By applying a graph partitioning algorithm on such an affinity graph, optimal task allocation is
possible. The task allocation algorithm presented for hypercube distributed computing systems uses the above idea to repeatedly partition the affinity graph until all modules in the task are
allocated. The algorithm is fully distributed with no central control being exercised. | {"url":"https://irepose.iitm.ac.in/entities/publication/f0827f21-755b-4783-aa15-869f672f0c30","timestamp":"2024-11-12T19:48:30Z","content_type":"text/html","content_length":"767884","record_id":"<urn:uuid:73f55846-f90c-4156-a446-20f939d2a95c>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00808.warc.gz"} |
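As a loose illustration of the affinity idea (not the paper's actual graph-partitioning algorithm), the sketch below greedily groups the highest-affinity module pairs onto the same processor; the function name and sample data are hypothetical:

```python
def pair_by_affinity(vertices, affinity):
    """Greedy matching: repeatedly take the highest-affinity unplaced
    pair of modules and allocate them together. A loose illustration of
    how edge weights drive co-location, not the paper's algorithm."""
    edges = sorted(affinity.items(), key=lambda kv: -kv[1])
    placed, groups = set(), []
    for (u, v), _weight in edges:
        if u not in placed and v not in placed:
            groups.append([u, v])
            placed.update((u, v))
    # Any leftover module is allocated on its own.
    groups.extend([v] for v in vertices if v not in placed)
    return groups

modules = ["A", "B", "C", "D"]
affinity = {("A", "B"): 10, ("C", "D"): 10, ("A", "C"): 1}
print(pair_by_affinity(modules, affinity))   # [['A', 'B'], ['C', 'D']]
```

The paper's algorithm instead applies a proper graph-partitioning step recursively, which also balances load across the hypercube's processors.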
Heat Of Fusion And Vaporization Worksheet
Heat Of Fusion And Vaporization Worksheet - ΔHfus(water) = 6.02 kJ/mol; ΔHvap(water) = 40.7 kJ/mol. Specific heat and latent heat of fusion and vaporization. Heat of fusion
problems and solutions: If 2083 joules are used to melt 5.26 grams of aluminum, what is the heat of fusion of aluminum? How much heat is required to melt 360 g of solid water?
What are the units for heat of vaporization? Calculate the heat when 36.0 grams of water at 113 °C is cooled to 0 °C. Heat of vaporization problems with answers. Heat of fusion = 6.0 kJ/mol. How
much energy is released to the environment by 50.0 grams of condensing water?
Each must be calculated separately. First, list what we know. Solve problems involving thermal energy changes when heating and cooling substances with phase changes.
The kinetic energy of the molecules (rotation, vibration, and limited translation) remains constant during phase changes, because the temperature does not change. When heat is added to a substance,
either it will cause a change in temperature or a change of state.
The heat which a solid absorbs when it melts is called the enthalpy of fusion (or heat of fusion). The heat of fusion of copper is 205 J/g.
c(ice) = 2.06 J/g·°C, c(H2O) = 4.184 J/g·°C, c(steam) = 1.87 J/g·°C. ΔHf = 41000 J / 200 g.
When heat is added slowly to a chunk of ice that is initially at a temperature below the freezing point (0 °C), it is found that the ice does not change to liquid water instantaneously when the
temperature reaches the freezing point. Instead, the ice melts slowly.
When a substance changes its state from a liquid to steam or vice versa, the heat absorbed or released during this process does not lead to a change in temperature.
Further practice questions: How much heat is required to vaporize 24 g of liquid? What is the molar heat of solidification for water? What is the equation for heat of fusion? Is melting endothermic
or exothermic? How much heat is required to turn 12 kg of water at 100 °C into steam at the same temperature? What are the units for heat of fusion? (See Table B in the CRT.)
Worksheet #2: Calculate the amount of heat needed to melt 35.0 g of ice at 0 °C. Apply this to the heat of fusion equation. How much heat is required to melt 4 kg of ice at 0 °C?
q = mol × Hvap. Hfus of water is 334 J/g.
Either It Will Cause A Change In Temperature Or Change Of State
What is the equation for heat of fusion? The amount of heat required is equal to the sum of these three steps. 41000 J = (200 g) · ΔHf. Enthalpy of vaporization refers to changing between liquid and
gas (evaporation or condensation).
Table B In The Crt
ΔHf = 205 J/g. What is the equation for heat of vaporization?
Heat of fusion = 6.0 kJ/mol
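The worksheet's two working formulas, q = m · ΔHf (per gram) and q = n · ΔHfus (per mole), can be spot-checked with a short script. The molar mass of water used below is an assumption; it is not given in the worksheet.

```python
# Spot-checking the worksheet's numbers with q = m * dHf (per gram) and
# q = n * dHfus (per mole). The molar mass of water is an assumption.

def heat_per_gram(mass_g, dh_j_per_g):
    """q = m * dHf, with dHf in J/g."""
    return mass_g * dh_j_per_g

def heat_per_mole(mass_g, molar_mass, dh_kj_per_mol):
    """q = mol * dHfus, with dHfus in kJ/mol."""
    return (mass_g / molar_mass) * dh_kj_per_mol

# Copper: 41000 J melts 200 g, so dHf = 41000 / 200 = 205 J/g.
print(41000 / 200)                                   # 205.0

# Melting 360 g of ice via dHfus = 6.02 kJ/mol, M(H2O) ~ 18.02 g/mol:
print(round(heat_per_mole(360, 18.02, 6.02), 1))     # 120.3 (kJ)

# The same mass via Hfus = 334 J/g:
print(heat_per_gram(360, 334.0))                     # 120240.0 (J)
```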
How much energy is released to the environment by 50.0 grams of condensing water. Q = mol x hfus. Heats of fusion and vaporization. Each must be calculated separately. | {"url":"https://tunxis.commnet.edu/view/heat-of-fusion-and-vaporization-worksheet.html","timestamp":"2024-11-11T03:43:53Z","content_type":"text/html","content_length":"33929","record_id":"<urn:uuid:0935c984-6d6a-4907-bffb-4169aa81a668>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00737.warc.gz"} |
Accounts, Trading Portfolios Docs
Trading Portfolios allows the monitoring of trading accounts, in addition to strategies and portfolios. However, you can't manually create an account, nor will other people be able to view it.
Publishing agents, developed for specific platforms (such as MT4 and MT5), send the account data to the server. Updating the complete history is optional, but the agents will synchronize general
account data, balance, and margin information, along with pending orders and open positions.
Learn more about publishing agents.
Account report
From the top menu or dashboard, you can access a comprehensive report of your trading accounts.
If you decide to update all account information using the publishing agents, here you will find all the statistics related to the trades performed in your account.
You may filter account data to evaluate some specific results. If you click on the "Filters" link, you will see a form with additional fields to filter the data.
The first filter is the start and end dates. Use them to restrict the period of analysis. You can also define a different initial balance for the account. Sometimes this is useful, especially if your
broker controls the account balance following a risk-control system. You can also define a custom balance if you find the statistics inaccurate.
The other three fields are text input filters that behave similarly. In the Symbol filter field, you can enter symbol names separated by whitespaces. Likewise, in the Magic filter, you may enter a
list of magic numbers also separated by whitespaces. Finally, you can filter the trades based on the comment field.
Main statistics
In the first part of the report, you will see the main statistics and a chart with the equity curve. The card "Main statistics" shows the following information:
Main statistics and equity curve
Net profit: This is the total net profit achieved with your account in the selected period.
Total return: Indicates the account's percentage return based on the net profit and initial balance.
Notice: Some brokers change the account balance to adjust risk or deduct fees. Depending on the balance available, some calculations may be inaccurate. If you experience problems with the account
report, you can try to define an initial balance using the filter form manually.
Drawdown: This is the maximum percentage drawdown verified in the balance. This measure indicates how much your balance has decreased after reaching any maximum point.
PROC: Pessimistic Return on Capital is an adjusted percentage return measure, considering the account's win rate and average profit. This measure can give a more conservative indication of the
expected return for the account.
CAGR: Compound Annual Growth Rate is the annual return rate needed to obtain the final balance from the account's initial balance.
Yearly: Yearly return calculated for the account.
Monthly: Monthly return calculated for the account.
AHPR: Average Holding Period Return is the average return of each trade, which is the average balance change for each day.
GHPR: Geometric Holding Period Return is similar to AHPR but applies a weight with the number of trades.
Profit factor: The profit factor is the ratio between the sum of all profits and all losses.
Average trade: Average expected result per day, that is, the average daily profit from trades.
Payout ratio: It's the ratio between the average profit and the average loss. It gives an insight into the account's risk-reward ratio.
Sharpe ratio: It's the ratio of the account's net return (compared to a risk-free rate) and the standard deviation of these returns. To simplify the calculations, in this case, the risk-free rate is
considered zero.
Calmar ratio: This metric can also be called the Drawdown ratio. It's the ratio between the average annual return and the maximum drawdown of the account.
Report currency: This is the account's currency.
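As a hypothetical sketch (not the product's actual implementation), a few of the statistics above can be computed from a list of closed-trade results like this; the numbers are invented:

```python
# Hypothetical sketch of a few of the statistics above, computed from a
# list of closed-trade results. Numbers are invented; formulas follow the
# definitions in this section, not necessarily the exact implementation.
trades = [120.0, -45.0, 80.0, -30.0, 200.0, -60.0]
initial_balance = 10_000.0

net_profit = sum(trades)                               # Net profit
total_return = net_profit / initial_balance            # Total return
gross_profit = sum(t for t in trades if t > 0)
gross_loss = -sum(t for t in trades if t < 0)
profit_factor = gross_profit / gross_loss              # Profit factor
wins = [t for t in trades if t > 0]
losses = [t for t in trades if t < 0]
payout_ratio = (sum(wins) / len(wins)) / (-sum(losses) / len(losses))

years = 2.0                        # assumed length of the report period
cagr = (1 + total_return) ** (1 / years) - 1           # CAGR

print(net_profit)                        # 265.0
print(round(total_return * 100, 2))      # 2.65
print(round(profit_factor, 3))           # 2.963
print(round(payout_ratio, 3))            # 2.963
```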
Positions and Orders
In this section, you can check all the open positions and pending orders detected in your trading account. There are two tables, the first showing the open positions and the second with the pending
orders (not filled yet).
This information is updated periodically in the database. If you want to follow the trades, you should update the tables using the refresh link available on the tables' top.
Account history
This section shows more details regarding the deals stored in the database. You will see a card with a trades summary by symbol and magic numbers.
In both cases, the app will display a pie chart showing the percentage of trades for each symbol or magic and a bar chart with the accumulated profit by symbol or magic number. Below the charts you
will find a table with additional information, including the number of trades, wins, and losses.
Following the summary, you can see a table with all the history available for the selected period and applied filters.
Return analysis
Monthly returns
This card shows a bar chart with the account returns for each month available in history. You can see the results in percentage return, financial values, or points/pips. It's possible to select a
specific year at the bottom of the chart to check the results.
Yearly returns
This card shows the accumulated result in each year in which the trades took place. You can see the results in percentage return, financial values, or points/pips.
Latest returns
From the charts presented in this card, you can see the account's recent results, referring to the day, week, and month, compared with the previous period.
Daily returns
On this card, you find a table with the returns obtained each day, ending on the last day there was a trade on the history.
Return statistics
Today: Percentage return obtained on the current day.
This week: Percentage return achieved in the current week.
This month: Percentage return achieved in the current month.
This year: Percentage return achieved in the current year.
Best year: Highest annual percentage return ever achieved by the account.
Worst year: Lowest annual percentage return ever achieved by the account.
Best month: Highest monthly percentage return ever achieved by the account.
Worst month: Lowest monthly percentage return ever achieved by the account.
Monthly Sharpe: This is the Sharpe ratio calculated monthly.
Drawdown and risk
Risk of ruin
The risk of ruin considers past results to calculate the probability of losing some part, or even all, of the capital invested in the account. Ideally, the account should show a higher probability of
losing smaller portions of the capital, with the probability decreasing as a greater loss is simulated. On the chart, we expect to see the risk of ruin decline to near zero as the simulated loss
approaches the entire capital.
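One common way to estimate such a curve is to bootstrap past trade results. The sketch below is illustrative only and is not Trading Portfolios' documented procedure; the function name and sample data are made up:

```python
import random

def risk_of_ruin(trade_results, loss_level, start_balance,
                 n_paths=1000, horizon=500, seed=7):
    """Bootstrap past trade results to estimate the probability that the
    balance ever falls `loss_level` (a fraction of starting capital)
    below the start within `horizon` future trades. Illustrative only;
    not the product's documented procedure."""
    rng = random.Random(seed)
    floor = start_balance * (1.0 - loss_level)
    ruined = 0
    for _ in range(n_paths):
        balance = start_balance
        for _ in range(horizon):
            balance += rng.choice(trade_results)
            if balance <= floor:
                ruined += 1
                break
    return ruined / n_paths

# Made-up past results with a slightly negative expectancy:
past = [50.0, -60.0, 40.0, -55.0]
p_shallow = risk_of_ruin(past, 0.10, 10_000.0)   # lose 10% of capital
p_deep = risk_of_ruin(past, 0.50, 10_000.0)      # lose 50% of capital
# The curve should decline: deeper losses are less probable.
print(p_shallow > p_deep)   # True
```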
Drawdown statistics
Maximum relative: This is the highest percentage drawdown verified in history. The percentage drawdown tends to be higher in the first trades if the historical data contains fixed lot trades. In
brackets, you see the financial value corresponding to this drawdown.
Maximum absolute: Largest financial drawdown value verified. In brackets, the corresponding percentage change is shown.
From initial balance: The largest financial drawdown is applied to the initial balance to give you an idea of how much this value represents in percentage terms if you had faced the drawdown in the first trades.
Maximum duration: Shows the length of the longest drawdown measured in days.
Average duration: This is the average length of a drawdown measured in days.
Recovery factor: This is the total net profit divided by the maximum drawdown verified. This number gives an idea of the account's recovering power from drawdown periods.
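The maximum relative/absolute drawdown and the recovery factor can be sketched from an equity curve as follows (illustrative code with invented numbers, not the product's implementation):

```python
def max_drawdown(equity):
    """Maximum peak-to-trough decline of an equity curve, both as a
    fraction of the peak (relative) and in currency (absolute)."""
    peak = equity[0]
    worst_frac, worst_abs = 0.0, 0.0
    for value in equity:
        peak = max(peak, value)
        worst_abs = max(worst_abs, peak - value)
        worst_frac = max(worst_frac, (peak - value) / peak)
    return worst_frac, worst_abs

equity = [10_000, 10_400, 9_900, 10_800, 10_100, 11_200]
dd_frac, dd_abs = max_drawdown(equity)
print(round(dd_frac * 100, 2), dd_abs)        # 6.48 700
recovery_factor = (equity[-1] - equity[0]) / dd_abs
print(round(recovery_factor, 3))              # 1.714
```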
Longest drawdowns
Drawdowns are common to all trading systems. Accounts spend most of their time in some drawdown, as any negative trade will start or continue such a move. What you should note is the duration and
size of these drawdowns, which can represent a considerable loss of capital for a long time. On this card, a chart gives you a general idea of the drawdown periods faced by the account. The primary
curve is profit growth, with the five most extensive drawdown periods indicated in different colors. Also, below the profit chart, three curves representing the same drawdown levels are plotted, one
in points (volume independent), another with financial value, and the percentage drawdown. These curves will appear in red whenever the account remains in a drawdown period, and by hovering them, you
can see the corresponding values.
Drawdowns details
This table will show all the strategy's drawdown periods, indicating the start and end dates, duration, and corresponding values in percentage, financial, and points/pips.
Trades and timing
Additional trade statistics
Total trades: This is the total number of trades observed in history. The app counts each closed transaction as an individual trade. If there are partial closes, each one will be considered a
different trade.
Long trades: Number of trades that started with a buy transaction.
Short trades: Number of trades that started with a sell transaction.
Gross profit: This is the sum of all trades that ended with a profit.
Gross loss: This is the sum of all trades that ended with a loss.
Z-score: With the Z-score, it is possible to assess whether the account will likely present sequences of gains or losses or whether the results will alternate. Negative Z-score values from -2.0
indicate a 95% probability that another profitable trade will follow a profitable trade, and likewise, a loss will be followed by another loss. Positive values starting from 2.0 suggest that with a
95% probability, a losing trade will follow a profitable trade and vice versa. Z-score values between -2.0 and +2.0 don't lead to a conclusion regarding the dependency between trades.
Days without trade: Number of days that there was no trade in this account.
Days with trade: Number of days that there was trade in this account.
Max entries per day: Maximum number of trades in a single day.
Average trade time: Average trade duration time.
Shortest trade: Indicates the duration of the fastest trade.
Longest trade: Indicates the duration of the longest trade.
Profit trades count: Number of trades that ended with positive or zero results.
Profit trades percent: Percentage trades that ended with positive or zero results.
Loss trades count: Number of trades that ended with negative result.
Loss trades percent: Percentage trades that ended with negative result.
Largest profit trade: Highest profit achieved in a single trade.
Largest loss trade: Highest loss achieved in a single trade.
Average profit: Average result from trades with profit.
Average loss: Average result from trades with loss.
Maximum consecutive wins: Maximum number of consecutive trades with profit.
Maximum consecutive losses: Maximum number of consecutive trades with loss.
Maximal consecutive profit: Highest profit achieved in consecutive trades.
Maximal consecutive loss: Highest loss achieved in consecutive trades.
Average consecutive wins: Average number of consecutive profitable trades.
Average consecutive losses: Average number of consecutive losing trades.
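The Z-score described above is usually computed as a runs test. A common formulation is Z = (N(R − 0.5) − P) / sqrt(P(P − N)/(N − 1)) with P = 2WL; this exact formula is an assumption here, since the product does not state the one it uses:

```python
import math

def z_score(outcomes):
    """Runs-test Z-score for a win/loss sequence (True = winning trade).
    A common formulation (assumed here; the product's exact formula is
    not documented):
        Z = (N*(R - 0.5) - P) / sqrt(P*(P - N) / (N - 1)),  P = 2*W*L
    where N = trades, R = number of win/loss runs, W wins, L losses."""
    n = len(outcomes)
    w = sum(bool(o) for o in outcomes)
    losses = n - w
    p = 2 * w * losses
    runs = 1 + sum(1 for a, b in zip(outcomes, outcomes[1:]) if a != b)
    return (n * (runs - 0.5) - p) / math.sqrt(p * (p - n) / (n - 1))

# Alternating wins and losses -> positive Z (a win tends to follow a loss):
print(round(z_score([True, False] * 4), 3))          # 2.673
# Streaky results -> negative Z (a win tends to follow a win):
print(round(z_score([True] * 4 + [False] * 4), 3))   # -1.909
```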
Trade duration with profit/loss
This card presents a chart with each trade result, with the profit represented on the y-axis and the x-axis trade duration in minutes.
Entries charts
Entries charts show the number of trades started for each hour of the day, day of the week, and month.
Profit/Loss charts
These charts show the sum of profits and losses obtained at each hour of the day, day of the week and month.
MFE/MAE with profit
MFE is the Maximum Favorable Excursion achieved during the trade, indicating the largest price swing favorable to the trade, and consequently, the highest open profit. MAE (Maximum Adverse Excursion)
is the opposite, meaning the largest price swing against the trade, representing the largest open loss. This chart shows a plot that relates the measures of MFE and MAE with the final trade result.
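For a single trade, MFE and MAE can be computed from the price path observed while the trade was open; this is a generic sketch with invented prices, not the product's code:

```python
def mfe_mae(entry_price, prices, direction=1):
    """MFE and MAE of one trade, in price units. `prices` is the path
    observed while the trade was open; direction is +1 long, -1 short."""
    moves = [direction * (p - entry_price) for p in prices]
    return max(max(moves), 0.0), min(min(moves), 0.0)

# A long trade entered at 100 whose price ranged from 97 to 106:
print(mfe_mae(100.0, [101.0, 97.0, 103.0, 106.0, 104.0]))   # (6.0, -3.0)
```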
Read next: User Profile | {"url":"https://tradingportfolios.com/documentation/accounts/main/","timestamp":"2024-11-04T08:04:49Z","content_type":"text/html","content_length":"60765","record_id":"<urn:uuid:f47a703b-adac-4464-b393-64985c24b797>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00414.warc.gz"} |
Basic math
QUESTION: Why would the English system of units be more useful if a foot contained 10 inches? Use a math example and write out a clear reason.
It has been over 15 years since I have tackled mathematics. The highest math that I have taken is geometry, which was an experience in itself. Nevertheless, I have attempted to answer the question to
the best of my ability. Unfortunately, the only thing that I recall is 12 inches equals a foot. I do not know how to convert it or provide an example or reason.
The metric system is used worldwide as a system of units, not only in science but also in engineering, business, sports, and daily life. In the English system of units the fundamental unit of length
is the foot, composed of 12 inches. The metric system is based on the decimal system of numbers, and the fundamental unit of length is the meter, composed of 100 centimeters. Because the metric
system is a decimal system, it is easy to express quantities in larger or smaller units as is convenient.
Reference: Seeds, Michael A. (2008). Horizons: Exploring the Universe, 10th Ed.
ANSWER: I see you may want another explanation. The metric system is what you should prefer. As explained above, multiples of ten are the main reason it is the most popular system. But one
clarification is needed on whether it would be simpler if the English system had only 10 inches to the foot. It would not be necessary, since we already have a system based on ten (metric). However,
the English system is used in carpentry, architecture and even engineering in the US. To suddenly change the system would be confusing. In carpentry, for example, the system of having one half and
one quarter, etc. as units is useful in that fractional units can be one half the size of the next higher or lower unit. Also, the foot has 12 inches and a lot of our construction techniques and
standards are based on
this. It is actually more of a base 2 system, with doubling being the basic quantity of increase, and tripling being the secondary quantity and squaring and cubing being the main functions. It makes
sense if you think two and three dimensionally respectively. My reference is personal experience.
---------- FOLLOW-UP ----------
QUESTION: Mr. Martinez
Thank you for your quick response.
I would like to clarify the second part of the question. (Perhaps I am making it more complicated than it is).
Use a math example and write out a clear reason.
In carpentry, for example, the system of having one half and one quarter, etc. as units is useful in that fractional units can be one half the size of the next higher or lower unit. A foot has 12 inches
and a lot of construction techniques and standards are based on this concept. It is actually more of a base 2 system, with doubling being the basic quantity of increase, and tripling being the
secondary quantity and squaring and cubing being the main functions. It makes sense if you think two and three dimensionally respectively.
If I double 1/2 (equal one) and triple 1/4 (equal 1 1/4). Is that a clear reason or am I missing the mark?
First check your math above. 3 times 1/4 equals 3/4, but that is not important now.
In general, I refer to a wall 8 feet tall by 12 feet wide as having an area of 96 square feet, and a room with the same wall and a length of 12 feet as having a volume of 1152 cubic feet.
In carpentry, numbers in the series 2,3,4,6,8,12,16,24, etc. play an important part in this system. Like the metric system, these numbers are easy to play with in your head, without the use of a
calculator (for some people). Frequently you will need to cut something in half. That is a very easy operation. Even if the object is a strange size, figuring out half is easy.
What is half of 10 and 5/8 inches? It is half of ten, which is five, and half of 5/8 which is 5/16 which you would easily know if you use these figures daily. So the answer is 5 and 5/16ths inches.
In metric you would have to answer what is half of 10.625? Arguably just as easy to answer without a calculator. I hope these examples are what you wanted. If not, I probably am not understanding the
question yet, sorry. But I'd be happy to try one more time. | {"url":"http://www.eluminary.org/en/QnA/Basic_math_(Astronomy)","timestamp":"2024-11-07T16:46:14Z","content_type":"text/html","content_length":"12955","record_id":"<urn:uuid:a1b2ad63-cd54-4f63-aad3-e1716be151d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00627.warc.gz"} |
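The halving arithmetic in the example can be reproduced exactly with Python's fractions module (an illustration added here, not part of the original answer):

```python
from fractions import Fraction

# Exact halving of 10 5/8 inches, matching the worked example:
length = 10 + Fraction(5, 8)      # 10 5/8
half = length / 2                 # 85/16

whole = int(half)
rest = half - whole
print(f"{whole} {rest} inches")   # 5 5/16 inches

# And the decimal version used in the metric comparison:
print(float(length) / 2)          # 5.3125
```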