The Math Toolkit
Using the Math Toolkit
What is It?
Student materials include student texts which provide the investigative and summarizing questions and homework, and Reference and Practice (RAP) books which provide summaries of concepts and skills,
practice sets, and tips for taking tests such as the ACT and SAT. These are the backbone of the program, but the single most useful item is not one that is provided but student-generated: the Math
Toolkit. Teachers may simply refer to this as notes, but the intention is not just to have students copy from something provided by the teacher. In keeping with current research on learning, the texts provide prompts at appropriate places so that students can reflect on concepts and skills and summarize them in their own words. Usually students make notes in their toolkits following major
checkpoints, but the decision about what to put in the toolkit and when to add to this should become a student responsibility. This summarizing activity is crucial to the learning process.
Students' toolkits should include concepts, properties, procedures, important formulas, summaries, and sample problems. Because the entries are chosen by students and individually annotated with hints, reminders, and warnings, they not only give an overview of the curriculum but also function as study guides. The more individual the Math Toolkit is, and the better it is organized, the more useful it will be. Some ideas for organizing the toolkit are:
• Put titles and dates on all notes.
• Add personal notes to help recall what was tricky about an idea.
• Stay caught up.
• Highlight important ideas.
• Add examples.
• Have readily available a graphing calculator, graph and notebook paper, ruler, compass, and protractor.
How is it Used?
The toolkit becomes a very useful device when students are doing homework or reviewing for a test or quiz. In a traditional text, there would be worked examples as reference. In CPMP, the same
function is performed more meaningfully by using student class notes and the Math Toolkit. At a minimum, students should use this to review for quizzes and tests. At the teacher's discretion, the
Math Toolkit may be used on in-class assessments. The philosophy of the program is to focus on conceptual development. However, there are cases where a teacher may decide that memorization of a
procedure or a skill is essential. In those cases, the teacher will not permit use of the toolkit on a test or quiz. Obviously, whether as assistance on homework, or as a study guide for tests, the
toolkit is only useful if it is complete, up to date, and well organized.
Some students think that if they can use a toolkit they do not have to study. But they then find that locating important information and reviewing how to use it become impossible in the context of
the limited time that is available in class for assessments. The toolkit reflects the student's current thinking and understanding, but cannot replace study.
need to compute this problem having problems with
"pramod kumar" <pramod.kilu@gmail.com> wrote in message <jpbakv$7mo$1@newscl01ah.mathworks.com>...
> clear all, close all
> realizations=10;
> N=1;
> %a.plot 10 realisations of X(t)
> for i=1:realizations
> theta=2*pi*rand(N,1);
> t=0:0.0001:4*pi;
> A=exprnd(1,N,1);
> Xt=A*cos(t+theta);
> plot(t,Xt); hold on;
> end
> Using this code I have computed 10 realizations. I would like each realization to have a different color; please let me know how I can plot them in different colors.
The PLOT command only offers 8 basic color specifiers, but you can alternate colors and line styles using something like the following. Note that clear all must come before plotcolors is defined (otherwise it erases it), and dashed-line variants replace the original '*' markers, which would draw a marker at every one of the ~126,000 sample points:

clear all, close all
plotcolors = {'r','b','g','m','k','r--','b--','g--','m--','k--'};
realizations = 10; N = 1; t = 0:0.0001:4*pi;
%a. plot 10 realisations of X(t), one style per realization
for i = 1:realizations
    theta = 2*pi*rand(N,1); A = exprnd(1,N,1);
    Xt = A*cos(t + theta);
    plot(t, Xt, plotcolors{i}); hold on;
end
Lectures on Algebraic Statistics
How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about
determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical
inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps
to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the
geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features
of statistical models.
1 Markov Bases 1
1.2 Markov Bases of Hierarchical Models 11
1.3 The Many Bases of an Integer Lattice 19
2 Likelihood Inference 29
2.2 Likelihood Equations for Implicit Models 40
2.3 Likelihood Ratio Tests 48
3 Conditional Independence 60
3.2 Graphical Models 69
3.3 Parametrizations of Graphical Models 79
4 Hidden Variables 89
4.2 Factor Analysis 99
5 Bayesian Integrals 105
5.2 Exact Integration for Discrete Models 114
6 Exercises 123
6.2 Quasisymmetry and Cycles 128
6.3 A Colored Gaussian Graphical Model 131
6.4 Instrumental Variables and Tangent Cones 135
6.5 Fisher Information for Multivariate Normals 142
6.6 The Intersection Axiom and Its Failure 144
6.7 Primary Decomposition for CI Inference 147
6.8 An Independence Model and Its Mixture 150
7 Open Problems 157
Bibliography 164
Math Tutors in Brookline
This is a list of Math tutors in Brookline - 19 tutors found.
Check tutor profiles for information about rates, subjects, tutor reviews, availability, etc.
Brian Clare Tutor Rating: not rated Individual Lesson Fee: $30
Experienced full-time tutor in all areas of mathematics -
tutor profile
Leon Zang Tutor Rating: not rated Individual Lesson Fee: $20 - $50
I can teach mathematics in all levels, from elementary school
to College, including all the fields.
Please write to me if you need more detailed information -
tutor profile
Mitasvil Patel Tutor Rating: Individual Lesson Fee: $25 - $50
If you are in search of a preeminent tutor, then you have reached the right person. I have more than 5 years of tutoring experience and a unique methodology that sets me apart. I teach Math (K-12th, SAT, ACT, GRE) to students of all age groups. -
tutor profile
American Tutoring Tutor Rating: not rated Individual Lesson Fee: $32 - $49
AmericanTutoring.com provides one-on-one, in-home academic tutoring for pre K through college. -
tutor profile
Meltem Duran Tutor Rating: not rated Individual Lesson Fee: $20 - $60
B.A. Amherst College, Physics and Astronomy
M.S. Mechanical Engineering, University of Massachusetts Amherst
PhD student at UMass Amherst -
tutor profile
Boston-Tutors.com Tutor Rating: not rated Individual Lesson Fee: $35 - $49
Boston-Tutors.com (http://boston-tutors.com) provides individual help for all grades and subjects, in the convenience and security of your home, in and around the Boston area. Students and parents
now have a better choice for tutoring - one that understands that all students learn at their own speed -
tutor profile
Amanda Egan Tutor Rating: not rated Individual Lesson Fee: $30 - $65
I am a high school math teacher. I have taught Algebra I, Geometry, Algebra II, and Pre-calculus. I worked for a summer enrichment program through UMass Boston. I have been tutoring for about 6 years. I have tutored from the elementary to the college level. -
tutor profile
Marc Amir Tutor Rating: not rated Individual Lesson Fee: $35 - $65
Basic Math and Pre-Algebra, and Algebra
Linear programming and Optimization Methods
Geometry & Trigonometry
Finance and Economics
Pre-Calculus and Calculus
Probability and Statistics for all MAJORS,and for MBA students
Linear Algebra
Standardized Tests:GRE, GMAT, PSAT,SAT,GED -
tutor profile
Brendon Ferullo Tutor Rating: not rated Individual Lesson Fee: $50 - $75
I am a high school math teacher who teaches all levels of high school mathematics. I've been doing it for years and have had wonderful results. -
tutor profile
Positive Edge Tutoring, LLC Tutor Rating: not rated Individual Lesson Fee: $60 - $75
We offer K-12th grade one-on-one in-home academic tutoring services for parents who want to help their children succeed. Call us today and let us pair you with a math, science, reading, writing,
organization skills, study skills, and/or learning skills tutor. We work with everyone from spe -
tutor profile
PageRank is eventual equilibrium
posted by Ranee Brylinski and Jean-Luc Brylinski
PageRank is the eventual equilibrium of the flow of probability on a finite, directed graph $G$. Here the flow evolves according to a specific type of random walk process on the nodes of the graph.
This random walk process on the nodes of the graph is the Google random surfer model, introduced for document ranking and retrieval by Brin and Page in their seminal 1998 paper [S. Brin and L. Page,
Comp. Networks ISDN Systems 30, 107, 1998]. The random surfer “surfs” the graph, moving in discrete time steps from node to node (e.g., from document to document, or from web page to web page). Let’s
label the nodes of $G$ by the numbers $1,…,N$; the total number $N$ of nodes is finite.
The random surfer, call her Sue, moves between nodes according to some fixed set of probabilities, so that with probability $p(i,j)$ she moves from node $i$ to node $j$. The self-node probabilities
$p(i,i)$ are all positive, and so Sue may simply stay in place in any given move. The possibility to stay put is the possibility to be lazy, and so Sue executes a lazy random walk on the nodes of the
graph, as opposed to a conventional random walk. Anyway, the $p(i,j)$ are just the transition probabilities of a finite state Markov chain, with the states being the nodes of the graph. The
transition matrix $$P=[p(i,j)]$$ is the Google matrix. Note $p(i,j)$ only comes into play when Sue is already at node $i$, and so the transition probability $p(i,j)$ is the conditional probability $p
(j|i)$, the probability of $j$ conditional on $i$ (one time step is implicit here, but later we will encounter $n$-step conditional probabilities).
The transition probabilities $p(i,j)$ in the Google random surfer model are constructed in the following way. To begin with, the graph $G$ is specified by its set $V$ of nodes and by its set $E$ of
directed edges. Here $(i,j)$ means an edge starting from node $i$ and going to node $j$; we say $i$ points to $j$ and write $i\leadsto{j}$. The out-degree $d_{out}(i)$ of node $i$ is the number of
edges of the graph which start at $i$. Ordinarily, there are no loops $(i,i)$ in the graph, but this may not be true, or we may end up creating loops in order to set up a Markov chain on the graph
with decent dynamics.
The plain vanilla way to take a random walk on $G$ is to move from node $i$ along an edge to node $j$ by choosing uniformly at random from the available edges. More precisely, if there is at least
one edge starting at $i$, so $d_{out}(i)$ is not zero, then the probability of going to node $j$ is either $$m(i,j) = 1/d_{out}(i)$$ if $(i,j)$ is an edge or $m(i,j)=0$ otherwise. If no edge starts
at $i$ ($i$ is a sink), then we change $G$ by adding edges starting at $i$, say $r$ of them, so that now $d_{out}(i)=r\gt{0}$, and then we define $m(i,j)$ in the same way. A common remedy is to put
in edges $(i,j)$ from $i$ to every node $j$, including $i$, so that $r=N$. (There are other ways to deal with sink nodes by changing $G$. We can introduce an extra node into $V$, or introduce a loop
at each node into $E$, or just assume there are no sinks. Maybe using the teleportation vector is most natural; for standard teleportation this is the $r=N$ case.) Then $M=[m(i,j)]$ is the
transition matrix of this plain vanilla walk (which may well be lazy).
Now here is what the Google random surfer Sue does in each step: with probability $\beta$, she follows a plain vanilla random walk and with probability $(1-\beta)$, she teleports to any node $j$
chosen uniformly at random from all $N$ nodes. Here $0\lt\beta\lt 1$ is the damping factor, which is set to $.85$ in the Brin-Page paper. After teleporting, Sue may end up at the same node (thereby
being lazy). The transition probabilities for Sue are thus $$p(i,j) = {\beta}m(i,j) + (1-\beta)\frac{1}{N}$$ and so the Google matrix $P$ is $$P = {\beta}M + (1-\beta)\mathbf{e}\mathbf{u}^T$$ where $
\mathbf{e}$ is the column vector of length $N$ made up of all $1$’s and $\mathbf{u}^T$ is the uniform probability row vector $$\mathbf{u}^T = \frac{1}{N}\mathbf{e}^T$$ Here the teleportation vector
is $\mathbf{u}^T$, but in various applications it may be a different probability row vector of length $N$. The Google matrix satisfies the matrix inequalities $$P\succeq(1-\beta)\mathbf{e}\mathbf{u}^T\succ{0}$$ where the comparisons are true entry-wise.
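As a minimal sketch of this construction (the 4-node graph and all numbers below are made up for illustration), the plain vanilla matrix $M$ and the Google matrix $P$ can be built exactly as described:

```python
# A small directed graph on N = 4 nodes, given as adjacency lists.
# Node 3 is a sink, so (following the r = N remedy above) we link it
# to every node, including itself.
edges = {0: [1, 2], 1: [2], 2: [0], 3: [0, 1, 2, 3]}
N, beta = 4, 0.85

# Plain vanilla walk: m(i,j) = 1/d_out(i) if (i,j) is an edge, else 0.
M = [[0.0] * N for _ in range(N)]
for i, outs in edges.items():
    for j in outs:
        M[i][j] = 1.0 / len(outs)

# Google matrix: p(i,j) = beta*m(i,j) + (1-beta)/N, i.e. P = beta*M + (1-beta)*e*u^T.
P = [[beta * M[i][j] + (1 - beta) / N for j in range(N)] for i in range(N)]

for row in P:
    assert abs(sum(row) - 1.0) < 1e-12   # each row is a probability vector
    assert min(row) >= (1 - beta) / N    # P >= (1-beta) e u^T entry-wise
```

Every row of $P$ sums to $1$ and every entry is at least $(1-\beta)/N$, matching the matrix inequalities above.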
As Sue surfs the graph $G$, she generates a flow of probability. The probability which is flowing is the probability of finding Sue somewhere on the graph, at some node. This probability flows as
time goes by, in discrete steps, through $t=0,1,2, …$, with no end. Sue never sleeps, she just surfs.
Suppose Sue started initially at a node $i=i_0$. Then the probability, at time $t=0$, of finding Sue at a node is $1$ at node $i$ and $0$ at all other nodes. This is a probability distribution $\pi_0$
on the nodes, namely the Dirac measure concentrated at $i$. Now at time $t=1$, Sue has taken one step and the probability of finding her at a node $j=i_1$ is $p(i,j)$. So the Dirac measure $\pi_0$
has evolved, or flowed, into a new probability distribution $\pi_1$ on the nodes; note every node $j$ is reachable in one step due to teleportation. Sue will certainly move somewhere after one step,
thus $\sum_j p(i,j) = 1$ and $\pi_1$ is indeed a probability distribution. Upon Sue’s second step, at time $t=2$, the probability of finding Sue at a node $k=i_2$ is $$p_2(i,k) = \sum_j p(i,j)p(j,k)
$$ which we get by collecting all the two-step walks from $i$ to $k$ and noting that the walk $i\leadsto j\leadsto k$ has probability $p(i,j)p(j,k)$. Sue will certainly move somewhere after two
steps, thus $\sum_k p_2(i,k) = 1$ and $\pi_0$ has flowed into a probability distribution $\pi_2$. After $t$ steps, the probability of finding Sue at a node $q=i_t$ is $$p_t(i,q) = \sum_{\gamma}\mathbf{p}(\gamma)$$ summed over all $t$-step walks $\gamma$ from $i$ to $q$ where $\mathbf{p}(\gamma)$ is the probability that Sue follows $\gamma$ so that $\mathbf{p}(\gamma)$ is the product of the
$t$ transition probabilities $p(i_s,i_{s+1})$ for the steps of $\gamma$. Again, $\sum_q p_t(i,q) = 1$ and $\pi_0$ has flowed into the probability distribution $\pi_t$ with values $$\pi_t(q) = p_t
(i,q)$$ The $t$-step transition probability $p_t(i,q)$ is the $t$-step conditional probability $p_t(q|i)$, the probability of finding Sue at node $q$ at time $t$ conditional on finding her at node
$i$ at time $0$.
This discussion works equally well if Sue’s initial location at time $0$ on the graph was uncertain so that her initial probability distribution $\pi_0$ was not a Dirac measure (meaning $\pi_0$ had
some entropy). The flow of probability is linear and we easily see that, in $t$ steps, $\pi_0$ flows to the probability distribution $\pi_t$ given by $$\pi_t(q) = \sum_i \pi_0(i)p_t(i,q)$$ Clearly
$p_t(i,q)$ is just the $(i,q)$ coefficient of the matrix $P^t$, the $t$th power of $P$. So the flow of probability equation expressed in matrix form is: $$\pi_t = \pi_0P^t$$ where the probability
distributions are written as probability row vectors. This is consistent at $t=0$ because the $0$th power of $P$ is the identity matrix.
Now we can ask: what happens to the flow of probability as Sue surfs and time $t$ passes, so that $t\to\infty$? The point is that as time passes, the flow of probability mixes the probabilities on the nodes, the components of the probability row vectors. The probability row vectors are probability distributions on the set of $N$ nodes. For instance, a probability distribution concentrated on any one node mixes to a probability distribution supported on (non-zero on) all the nodes. This is mixing in the thermodynamic sense, and so we can ask whether equilibrium is reached. Of course, this
is mixing of probability and we are asking about an equilibrium of probability. Markov chain theory applied to the Google matrix $P$ gives the answer; but we will derive it “from scratch” in the next
post using matrix analysis (functional analysis) and some geometry/topology rather than purely probabilistic arguments.
The answer is that the flow of probability produces a limit probability distribution $$\pi_\infty = \lim_{t\to\infty}\pi_t$$ which is independent of the starting distribution $\pi_0$. Here the limit
is pointwise so that the probabilities $\pi_t(i)$ all converge together, uniformly in $t$, to probabilities $\pi_\infty(i)$. This convergence also takes place in norm, for any $\ell_p$ norm on the
set of nodes, $p\ge{1}$. This limit satisfies $$\pi_{\infty} = \pi_{\infty}P$$ meaning that $\pi_{\infty}$ is an equilibrium, or stationary probability distribution: it is exactly preserved under the
flow of probability. Moreover $\pi_{\infty}$ is the unique equilibrium probability distribution. This limit $$\pi=\pi_\infty$$ is the PageRank vector, so that $\pi(i)$ is the PageRank of node $i$.
The equilibrium equation $\pi=\pi P$ says that at each node $j$, $$\pi(j) = \beta\sum_{i\leadsto{j}}\pi(i)\frac{1}{d_{out}(i)} + (1-\beta)\sum_i\pi(i)\frac{1}{N}$$ We have expanded out $\frac{1}
{N}$ as $\sum_i\pi(i)\frac{1}{N}$ in order to show the amount of its PageRank that node $i$ sends out to node $j$. This amount is $\pi(i)p(i,j)$, and it is equal to the sum of the terms above which
involve $\pi(i)$. Notice that PageRank satisfies $$\pi\succeq(1-\beta)\mathbf{u}^T\succ{0}$$ where as above the comparisons are true entry-wise.
The PageRank vector $\pi$ is the probability distribution for finding the position of the Google random surfer Sue at the eventual equilibrium, that is, at the equilibrium that takes place eventually
when $t$ passes to $\infty$. At equilibrium, much of the information about the Google random surfer Sue, and so about the graph $G$, has been lost: the initial probability vector $\pi_0$ is
completely lost and much of the transition matrix $P$ has been lost. The information that survives is exactly the equilibrium probability vector $\pi$ (cf. the next post).
The $i$th component $\pi(i)$ of the PageRank vector is the eventual, or limiting long-run, probability of finding Sue at node $i$.
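As a sketch of how this limit is computed in practice (the small graph and the variable names below are illustrative, not from the post), power iteration applies the flow $\pi_t = \pi_0 P^t$ until the distribution stops changing:

```python
# Power iteration sketch. The 4-node graph and all numbers here are
# illustrative. Rows of P sum to 1 and every entry is positive.
edges = {0: [1, 2], 1: [2], 2: [0], 3: [0, 1, 2, 3]}
N, beta = 4, 0.85
P = [[beta * (1.0 / len(edges[i]) if j in edges[i] else 0.0) + (1 - beta) / N
      for j in range(N)] for i in range(N)]

def step(pi):
    # one step of the flow: (pi P)(j) = sum_i pi(i) p(i,j)
    return [sum(pi[i] * P[i][j] for i in range(N)) for j in range(N)]

def pagerank(pi0, tol=1e-12):
    pi = pi0
    while True:
        nxt = step(pi)
        if max(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt

# Two very different starting distributions flow to the same equilibrium,
# and the limit is fixed by the flow: pi = pi P.
dirac = [1.0, 0.0, 0.0, 0.0]
uniform = [1.0 / N] * N
pi = pagerank(dirac)
assert max(abs(a - b) for a, b in zip(pi, pagerank(uniform))) < 1e-9
assert max(abs(a - b) for a, b in zip(pi, step(pi))) < 1e-9
```

The loop converges geometrically, roughly by a factor of $\beta$ per step, which is one practical reason the damping factor matters.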
to be continued …
Thermodynamics: Structure
Problem :
Suppose that we have a fuel cell battery, in which electrons flow from a terminal in one half cell to a terminal in another. Explain this phenomenon in terms of the chemical potential.
We can look at the battery as two systems in diffusive contact through the connecting wire. The electrons simply then flow from the cell with the higher chemical potential to that with the lower
until an equilibrium is reached, if ever.
Problem :
Show that the units of pressure as we have defined it agree with those of the conventional understanding of pressure.
The conventional units are those of force per area, e.g. N/m². We have defined pressure so that we have an energy in the numerator and a volume in the denominator. But remember that energy has the same units as work, namely FORCE × LENGTH, and therefore we have ENERGY/VOLUME = (FORCE × LENGTH)/LENGTH³ = FORCE/LENGTH² = FORCE/AREA, as required.
Problem :
Forcing a system into a small volume makes the energy of the system grow, whereas expanding the system, colloquially speaking, gives the particles more room to relax, and the energy of the system
decreases (all for a process at constant entropy). Using the definition of pressure we've investigated, show what happens to the pressure at large volumes and very small volumes of the system. Does
this agree with your intuition?
For a system of small volume for the number of particles, the energy is high. Increasing the volume some small amount, δV, will cause a great decrease in the energy U. Therefore, the pressure is:
p = -(∂U/∂V)_σ ≈ -δU/δV, which is large.
For a system of great volume for the number of particles, the energy is already low. Increasing the volume some small amount, δV, will only cause a small decrease in the energy U. Therefore, the pressure is:
p = -(∂U/∂V)_σ ≈ -δU/δV, which is small.
This makes sense to us. We expect a cramped system to have a high pressure and a sprawling system to have low pressure.
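A tiny numerical sketch makes the same point. The energy function here is an assumption chosen for illustration: U(V) = C·V^(-2/3), the isentropic volume dependence of a monatomic ideal gas, in arbitrary units.

```python
# Numerical illustration of p = -(dU/dV) at constant entropy.
# Assumption: U(V) = C * V**(-2/3), an isentrope of a monatomic ideal gas.

def U(V, C=1.0):
    return C * V ** (-2.0 / 3.0)

def pressure(V, dV=1e-6):
    # central finite difference for p = -dU/dV
    return -(U(V + dV) - U(V - dV)) / (2 * dV)

p_cramped = pressure(0.1)     # small volume: energy falls fast, p is high
p_sprawling = pressure(10.0)  # large volume: energy barely changes, p is low
assert p_cramped > p_sprawling > 0
```

At V = 1 the finite difference agrees with the analytic value p = (2/3)·C·V^(-5/3) = 2/3 to several digits.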
Problem :
Is the energy of the system U an intensive or extensive variable?
Doubling the system should double the energy, so U is an extensive variable.
Problem :
Explain why the entropy is an extensive variable.
Remember that entropy was defined as σ = log g where g was the multiplicity function. We defined the entropy in this manner so that the entropies of two systems in contact would add together, since
their individual g functions multiply together. So, doubling the system means that σ[new] = σ[original] + σ[duplicate] = 2σ[original]. Therefore entropy is an extensive variable.
Introduction to higher algebra and representation theory
• Basic notions of linear algebra
□ Linear space, subspace, factor-space
□ Linear mappings between spaces, complexes, exact sequences
□ Direct sum and tensor product of linear spaces, linear functionals, dual space, tensors
□ Complexification and realization of linear spaces, real form
□ Grading and filtration on linear space
• Basic algebraic systems
□ Associative algebras, Lie algebras
□ Mappings between algebras, homomorphisms, isomorphisms
□ Subalgebra, ideal and factor-algebra, simple and semi-simple algebras; Abelian, nilpotent, solvable and reductive Lie algebras
□ Short exact sequences of algebras
□ Algebras Mat_n(C), Mat_n(R), gl(n, C), gl(n, R)
□ Simplicity of the algebras Mat_n(C) and Mat_n(R)
□ Notions of algebra module and algebra representation
□ Reducible, irreducible and completely reducible representations
□ Representation homomorphisms, intertwining operator, Schur's lemma
• Theory of weight representations of sl (2, C)
□ Notions of highest weight representation, Verma module and contragredient module
□ Realization of the Verma module and its contragredient in the space of polynomials in one variable
□ Notion of singular vectors and its explicit finding
□ Finite-dimensional irreducible modules
□ Weight representations without highest weight
□ Extensions of representations
□ Examples of representations not being weight
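The realization of the Verma module in a polynomial space of one variable mentioned above can be sketched in code. The operator formulas below are one standard normalization (an assumption, since the outline does not fix one): on C[z] with basis v_k = z^k, take f = multiplication by z, h = λ − 2z d/dz, and e = λ d/dz − z d²/dz².

```python
# Sketch: the Verma module M(lam) for sl(2) realized on C[z], basis v_k = z^k.
# Assumed operator normalization:
#   f = z * (.),  h = lam - 2*z*(d/dz),  e = lam*(d/dz) - z*(d/dz)^2,
# so that e(z^k) = k*(lam - k + 1) * z^(k-1).
# A polynomial is a coefficient list: poly[k] is the coefficient of z^k.

def f(poly, lam):
    return [0] + list(poly)                               # z * p(z)

def h(poly, lam):
    return [(lam - 2 * k) * c for k, c in enumerate(poly)]

def e(poly, lam):
    return [k * (lam - k + 1) * poly[k] for k in range(1, len(poly))]

def commutator(A, B, poly, lam):
    pad = lambda p, n: p + [0] * (n - len(p))
    ab, ba = A(B(poly, lam), lam), B(A(poly, lam), lam)
    n = max(len(ab), len(ba))
    return [x - y for x, y in zip(pad(ab, n), pad(ba, n))]

lam, zk = 3, [0, 0, 1]                                    # test on z^2 in M(3)
print(commutator(h, e, zk, lam), [2 * c for c in e(zk, lam)])  # → [0, 8] [0, 8]
print(commutator(e, f, zk, lam), h(zk, lam))              # → [0, 0, -1] twice
# Singular vector: for integral lam, e annihilates z^(lam+1).
print(e([0] * (lam + 1) + [1], lam))                      # → [0, 0, 0, 0]
```

Finding where e has a kernel in positive degree reproduces the explicit singular-vector computation above; quotienting by the submodule that vector generates yields the (λ+1)-dimensional irreducible module.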
• Classical Lie groups and its Lie algebras
□ Definition of groups, group homomorphisms and isomorphisms, subgroups, normal subgroups
□ Permutation group, finite groups
□ Bi-linear forms which are invariant under group action
□ Classical Lie groups GL(n), SL(n), O(n), SO(n), Sp(n); Euclidean, Lorentz and Poincare groups; groups U(n) and SU(n)
□ Connection between Lie group and its Lie algebra for matrix groups
□ Lie algebras of classical Lie groups
□ Bi-linear forms which are invariant under Lie algebra action
□ Group action on a set, group representation, group orbits, adjoint and coadjoint group representations
□ Bi-linear forms on representations of Lie algebras and groups, Killing form
□ Cartan subgroup and subalgebras, Borel subgroup and subalgebras, top and bottom triangular subgroups and subalgebras, Gauss decomposition of classical groups and algebras
□ Enveloping algebras of Lie algebras
□ Free associative algebras and Lie algebras, presentation of an algebra by generators and relations
□ Enveloping algebra of representation, universal enveloping algebra
□ Poincare-Birkhoff-Witt basis
□ Connection between representations of Lie algebra and its universal eneveloping algebra
□ Universal enveloping algebras of classical Lie algebras
□ Center of the universal enveloping algebra, Casimir operators and the center of U(sl(2)), universal enveloping algebras of the Lorentz and Poincare algebras
□ Notion of induced representation, Verma modules of classical Lie algebras
□ Tensor product of associative algebras and representations, notion of coproduct, coassociativity and cocommutativity, commutative diagrams, bi-algebras and Hopf algebra
□ Realization of universal enveloping algebra of Lie algebra as a space of generalized functions on group, structure of Hopf algebra on universal enveloping algebra of Lie algebra
□ Tensor products of algebra representations and contragredient modules
□ Tensor products of finite-dimensional irreducible representations of sl(2, C), Clebsch-Gordan coefficients
• Representations of classical Lie algebras in tensor spaces
□ Schur's duality, representation of symmetric group and Young diagram, types of symmetry
□ Anticommutative variables, Grassmann algebras, Clifford algebras, connection between Clifford algebras and matrix algebras, complex and real cases, notion of super Lie algebra, simplest
examples, trace and supertrace on associative algebra
□ Spinors as Clifford algebra representations, charge-conjugate matrix, Majorana and Weyl spinors
□ Realization of representations of classical Lie algebras in boson and fermion Fock spaces
• Elements of Cartan theory
□ Cartan subalgebras, root vectors, root decomposition, Chevalley basis, lattices of roots and weights
□ Weight representations, representations of highest and lowest weight, lattice of representation weights, character of a representation, representations from category O
□ Simple roots, Dynkin diagrams, classification of simple complex Lie algebras, reconstruction of Lie algebra using Dynkin diagram
□ Weyl group, Weyl chambers, dominant weights, integral weights, elements of the structural theory of Verma modules, Jordan-Hölder series for a Verma module, multiplicities of simple subquotients, singular vectors, Kac-Kazhdan theorem, structure of category O, translation functor
• Elements of homological algebra
□ Chevalley cohomology of Lie algebras, interpretation of low-degree cohomology classes, central extensions, affine algebras, the Virasoro algebra
□ Hochschild cohomology, elements of deformation theory, deformed brackets
□ Some notions of category theory
□ Basic computational tools: diagram chasing, long exact sequences in cohomology, chain homotopy, resolutions and the simplest spectral sequences
□ Computation of cohomology with coefficients in finite-dimensional irreducible representations for simple algebras and their subalgebras, Bernstein-Gelfand-Gelfand resolution
• Algebraical aspects of quantization
□ Poisson algebras, deformations of Poisson algebras, notion of deformation quantization
□ Poisson-Lie groups, Sklyanin bracket, quantum groups
□ Quantum universal enveloping algebras of semisimple Lie algebras, quasitensor categories, universal R-matrix, Yang-Baxter equations
□ Quantum sl(2) and its representations at roots of unity, infinite-dimensional center
Projecting the utilization of labor and capital
So the Cobra equation has made me re-evaluate the dynamics of the effective demand limit. The main reason is that the graph of the Cobra equation shows that capital utilization decreases as
employment increases at the effective demand limit. I was assuming that as employment increased, the utilization of capital would also increase. That was an error.
As businesses in the aggregate reach for profits, the path at the effective demand limit shows that capital will be less utilized as labor is more utilized.
I had been thinking that unemployment would bottom out at 6.7% to 7.0%. That was assuming that capital utilization would increase. But now I see that unemployment can go lower, because capital
utilization will not increase.
Here is the path of the utilization of labor and capital (blue circles) since 2009 up to today's data from unemployment. (3rd quarter unemployment is 7.3%, which appears as a 92.7% employment rate in the graph.)
Equations for the two lines are given above the graph.
The blue oval shows the projected range of the equilibrium point where profit maximization crosses the effective demand limit. The economy gravitates toward that blue oval area. The utilization of
labor and capital (blue circles) is moving along the effective demand limit following increasing profits toward the blue oval.
The Cobra equation is the equation that I have been searching for since last year. Now that I can see it, the dynamics of profit at the effective demand limit are coming clear. I still see the
environment for a recession starting in one year, but the dynamics of that environment are clearer.
For reference… here is the Cobra equation to measure the profitability of utilizing labor and capital dependent upon labor share.
Measure of profitability in the aggregate = (x + y) – ax^2y^2
x = capital utilization rate
y = employment rate
a = coefficient for labor share to establish profit maximization.
Coefficient “a” = (els)^2 – 2.474*(els) + 2.0 … (els = effective labor share, for example 80% as 0.80). | {"url":"http://investingchannel.com/article/270524/Projecting-the-utilization-of-labor-and-capital","timestamp":"2014-04-18T02:59:03Z","content_type":null,"content_length":"14164","record_id":"<urn:uuid:2df14e7c-df6f-4e00-998c-9a31f409d2c2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
June 6, 2007
Thomas J Osler
1. Leibniz rule for fractional derivatives generalized and an application to infinite
series, SIAM J. Appl. Math., 18 (1970), pp. 658-674. MR 41#5562 (download from JSTOR) download PDF file
2. The fractional derivative of a composite function, SIAM J. Math. Anal, 1 (1970),
pp. 288-293. MR 41#5563 download PDF file
3. Taylor’s series generalized for fractional derivatives and applications, SIAM J.
Math. Anal. 2 (1971), pp. 37-48. MR 45#3682 download PDF file
4. Fractional derivatives and Leibniz rule, Amer. Math. Monthly, 78 (1971),
pp. 645-649. (download from JSTOR) download PDF file
5. A further extension of Leibniz rule to fractional derivatives and its relation to Parseval’s formula, SIAM J. Math. Anal., 3 (1972), pp. 1-16. MR 48#2323a download PDF file
6. An integral analogue of Taylor’s series and its use in computing Fourier
transforms, Math. Comp., 26 (1972), pp. 449-460. MR 46#5950. download PDF file
7. The integral analog of the Leibniz rule, Math. Comp., 26 (1972), pp. 903-915.
MR 47#2792. download PDF file
8. A correction to Leibniz rule for fractional derivatives, SIAM J. Math. Anal., 4 (1973), pp. 456-459. MR 48#2323b download PDF file
9. Differences of arbitrary order (With J. B. Diaz), Math. Comp., 28 (1974),
pp. 185-202. MR 49#11077 (download from JSTOR) download PDF file
10. Fundamental properties of fractional derivatives via Pochhammer integrals. (With J. L. Lavoie and R. Tremblay). A paper in "Fractional Calculus and Its Applications" edited by B. Ross,
Springer-Verlag, New York, 1975, pp. 323-356.
MR 57#16490
11. Open questions for research. (Proc. Internat. Conf. Univ. New Haven, West
Haven, Conn., 1974) pp. 376-381. Lecture Notes in Math., Vol. 457, Springer,
Berlin, 1978. (A. M. Brickner) MR 57#16493
12. An identity for simplifying certain generalized hypergeometric functions, Math Comp., 29 (1975), pp. 888-893. MR 51#10710 download PDF file
13. Leibniz rule for fractional derivatives used to generalize formulas of Walker and Cauchy, Bul. Inst. Polit, Jasi, 21-(25), 1-2, (1975), pp. 21-24. MR 52#8822 download PDF file
14. Fractional derivatives and special functions, SIAM Review, 18 (1976),
pp. 240-268. MR 53#13666 download PDF file
15. A quick look at Lyapunov space, Mathematics and Computer Education,
28 (1994), pp. 183-197.
16. A mathematical art exhibit, Mathematics and Computer Education, 29 (1995),
pp. 245-252.
17. Get billions and billions of digits of pi from a wrong formula, Mathematics and Computer Education, Vol. 33, No. 1, 1999, pp. 40-45. download PDF version
18. The trinomial triangle, (with James Chappell, student), The College Mathematics Journal, Vol. 30, No. 2, 1999, pp. 141-142. download PDF version
19. Extending the Babylonian algorithm, Mathematics and Computer Education, Vol. 33, No. 2, 1999, pp. 120-128. download PDF version
20. The tautochrone under arbitrary potentials using fractional derivatives, (with Eduardo Flores), American Journal of Physics, (Am. J. Phys.), 67(1999), pp. 718- 722. download PDF version
21. The united Vieta’s and Wallis’s products for pi, American Mathematical Monthly, 106 (1999), pp. 774-776. download PDF version
22. Fractal Movies, Mathematics and Computer Education, Vol. 33 (1999), pp. 236-243. download PDF version
23. A child’s garden of fractional derivatives, (with Marcia Kleinz), The College Mathematics Journal, Vol. 31, No. 2, (2000), pp. 82-88. download PDF version
24. The binomial to binomial theorem, The College Mathematics Journal, Vol. 31, No. 3, (2000), pp. 211-212. download PDF version
25. Spacetime numbers the easy way, (with Nick Borota and Eduardo Flores), Mathematics and Computer Education, Vol. 34, No. 2, (2000), pp. 159-168. download PDF version
26. Playing with partitions on the computer, (with Abdul Hassen), Mathematics and Computer Education , Vol. 35, No. 1, (2001), pp. 5-17. download PDF version
27. Mutations of the Mandelbrot set, (with Jae Hattrick-Simpers, student), Mathematics and Computer Education, Vol. 35, No. 1, (2001), pp. 18-26. download PDF version
28. The remarkable incircle of a triangle. (with Ira Fine), Mathematics and Computer Education, Vol. 35, No. 1, (2001), pp. 44-50. download PDF version
29. Cardan polynomials and the reduction of radicals, Mathematics Magazine, Vol. 74, No. 1, (2001), pp. 26-32. download PDF version
30. The rotating tautochrone, (with Eduardo Flores), Journal of Applied Mechanics,
68(2001), pp. 353-356. download PDF version
31. Rearranging terms of a harmonic-like series, Mathematics and Computer Education, 35(2001), pp. 136-139. download PDF version
32. Fun with 0.999... , The AMATYC Review, 22(2001), pp. 53-56. download PDF version
33. An unusual approach to Kepler’s first law, American Journal of Physics, 69(2001), pp. 1036-8. download PDF version
34. An unusual view of the ellipse, Mathematics and Computer Education, 35,(2001), pp. 234-236. download PDF version
35. Variations on Vieta’s and Wallis’s products for pi, (with Michael Wilhelm), Mathematics and Computer Education, 35(2001), pp. 225-232. download PDF version
36. A tale of two series, (with Marcus Wright), The College Mathematics Journal, 33(2002), pp. 99-106. download PDF file
37. Divergence of the harmonic series, p series and others by rearrangement, (with Bernard August, Emeritus), The College Mathematics Journal, 33(2002), pp. 233-4. download PDF version
38. Fermat’s little theorem from the multinomial theorem, The College Mathematics Journal, 33(2002), p. 239. download PDF version
39. A table of the partition function, (with Abdul Hassen and T. R. Chandrupatla), The Mathematical Spectrum, 34(2001/2002), pp. 55-57. download PDF version
40. Visual Basic for QuickBasic Programmers, (with Seth Bergmann and T. R. Chandrupatla), Mathematics and Computer Education, 36(2002), pp. 133-138. download PDF file
41. Functions of a spacetime variable, (with Nicolae A. Borota), Mathematics and Computer Education, 36(2002), pp. 231-239. Download PDF file
42. An easy introduction to biplex numbers, (with Dipti Bardhan), Mathematics and Computer Education, 36(2002), pp. 278-286. Download PDF file
43. An easy look at the cubic formula, Mathematics and Computer Education, 36(2002), pp. 287-290. Download PDF file
44. Proof without words: Integral of sine squared, The AMATYC Review, 24(2002), p. 65. Download PDF file
45. A computer hunt for Apery's constant, (with Brian Seaman), Mathematical Spectrum, 35(2002/2003), No.1, pp. 5-8. Download PDF file
46. A magic trick from Fibonacci, (with James Smoak), The College Mathematics Journal, 34(2003), pp. 58-60. Download PDF file
47. Variations on a theme from Pascal's triangle, The College Mathematics
Journal, 34(2003), pp. 216-223. Download PDF file
48. Visual proofs of two integrals, The College Mathematics Journal, 34(2003), pp. 231-232. Download PDF file
49. An unusual product for sin z and variations of Wallis’s product, The Mathematical Gazette, 87(2003), pp. 134-139. Download PDF file
50. Finding Zeta(2p) from a product of sines. The American Mathematical Monthly, 111(2004), pp. 52-54. Download (PDF) original version containing motivational material.
Download (PDF) final accepted version (more formal with less motivation)
51. Another intuitive approach to Stirling’s formula, International Journal of Mathematical Education in Science and Technology, 35(2004), pp.111-118. Download PDF file.
52. Geometric construction of Pythagorean triangles, (with Tirupathi R. Chandrupatla), Mathematics and Computer Education, 38(2004), pp. 90-91.
Download PDF file
53. Variations on the linear first order ODE, (with Brian Seaman), International Journal of Mathematical Education in Science and Technology, 35(2004), pp.309-315. Download (PDF) version.
54. Extending Theon's ladder to any square root, (with Shaun Giberson), The College Mathematics Journal, 35(2004), pp. 222-226. Download PDF file
55. Problems on divisibility of binomial coefficients, (with James Smoak). The AMATYC Review, 25(2004), pp. 17-21. Download PDF file.
56. Generalizing integrals involving x^x and series involving n^n, (with Jeffrey Tsay). Mathematics and Computer Education, 39(2005), pp. 31-36. Download PDF File
57. Theon's ladder for any root (with Marcus Wright and Michael Orchard), International Journal of Mathematical Education in Science and Technology, 36(2005), pp. 389-398. Download PDF file
58. Why sin 80x looks like sin x on my TI-89, (with Jim Zeng). International Journal of Mathematical Education in Science and Technology, 36(2005), pp. 437-441. Download PDF file
59. Unexpected constructable numbers, The AMATYC Review, 26(2005), no. 2, pp. 2-3. Download PDF file
60. Some unusual expressions for the inradius of a triangle, (with Tirupathi R. Chandrupatla), The AMATYC Review, 26(2005), no.2, pp. 12-17. Download PDF file
61. A simple geometric construction of the harmonic mean of n variables (with Jim Zeng), Mathematical Spectrum, 37(2004/2005), No. 3, pp. 109-111. Download PDF file
62. Proof Without Words: Arc length of the cycloid, The Mathematical Gazette, 89(2005), no. 515, page 250. Download PDF file.
63. Ptolemy vs. Copernicus. (With Joseph Diaco.) The Mathematical Spectrum, 38(2005/2006), pp. 3-6. Download PDF File
64. The general Vieta-Wallis product for pi, The Mathematical Gazette, 89(2005), pp. 371-378. Download PDF file
65. Using simple harmonic motion to help in the search for tautochrone curves, (with Tirupathi R. Chandrupatla), International Journal of Mathematical Education in Science and Technology, 37(2006),
pp. 104-109. Download PDF file
66. Motivation for finding Zeta(2p) from a product of sines, International Journal of Mathematical Education in Science and Technology 37(2006), pp. 231-235. Download PDF file
67. A proof of the continued fraction for e^1/M, American Mathematical Monthly, 113(2006), pp. 62-66. Download PDF file
68. A mechanical view of Archimedes quadrature of the parabola. The College Mathematics Journal, 37(2006), pp. 24-28. Download PDF file
69. Interesting finite and infinite products from simple algebraic identities. The Mathematical Gazette, 90(2006), pp. 90-93. Download PDF File.
70. Proof with few words: Quadratic convergence of the AGM, The Mathematical Gazette, 90(2006), pp. 116-117. Download PDF File.
71. A spiral of triangles related to the great pyramid, with Matthew Oster. The Mathematical Spectrum, 38(2005/2006), pp. 108-112. Download PDF File
72. An asymptotic approach to constructing the hyperbola, with David Grochowski. The Mathematical Spectrum 38(2005/2006), pp. 113-115. Download PDF File
73. A collection of numbers whose proof of irrationality is like that of the number e, (with Nicholas Stugard). Mathematics and Computer Education, 40(2006), pp. 103-107. Download PDF file
74. Translation with Notes of Euler's paper "Remarks on a beautiful relation between direct as well as reciprocal series" (E352), (with Lucas Willis), on the web at the Euler Archive, http://www.math.dartmouth.edu/~euler/. Download PDF file
75. Synopsis of Euler's paper "Remarks on a beautiful relation between direct as well as reciprocal series" (E352), (with Lucas Willis), on the web at the Euler Archive http://www.math.dartmouth.edu/~euler/. Download PDF file
76. Translation with notes of Euler’s paper E796 (with Kristen McKeen), Recherches sur le problème de trois nombres carrés tels que la somme de deux quelconques moins le troisième fasse un nombre carré, “Research into the problem of three square numbers such that the sum of any two less the third one provides a square number.”, on the web at the Euler Archive http://www.math.dartmouth.edu/~euler/. Download PDF file.
77. Synopsis by section of Euler's paper E796 (with Kristen McKeen), Recherches sur le problème de trois nombres carrés tels que la somme de deux quelconques moins le troisième fasse un nombre carré, “Research into the problem of three square numbers such that the sum of any two less the third one provides a square number”, on the web at the Euler Archive http://www.math.dartmouth.edu/~euler/. Download PDF file
78. Translation with notes of Euler's paper E46, Methodus Universalis Serierum Convergentium Summas Quam Proxime Inveniendi, (A general method for finding approximations to the sums of convergent
series). (with Walter Jacob.) On the web at the Euler Archive http://www.math.dartmouth.edu/~euler/. Download PDF file
79. Synopsis of Euler's paper E46, Methodus Universalis Serierum Convergentium Summas Quam Proxime Inveniendi, (A general method for finding approximations to the sums of convergent series). (with
Walter Jacob.) on the web at the Euler Archive http://www.math.dartmouth.edu/~euler/. Download PDF file
80. The integers of James Booth, (with John F. Kennedy), Mathematical Spectrum, 39(2006/2007), pp. 71-72. Download PDF file
81. A simple geometric method of estimating the error in using Vieta's product for pi, International Journal of Mathematics Education in Science and Technology, (38)2007, pp. 136-142. Download PDF
82. Some cosine relations and the regular heptagon, ( with Phongthong Heng) Mathematics and Computer Education, (41)2007, pp. 17-21. Download PDF file
83. Developing formulas by skipping rows in Pascal's triangle (with Robert J. Buonpastore). Mathematics and Computer Education, (41)2007, pp. 25-29. Download PDF file
84. Some long telescoping series. The Mathematical Gazette, 91(2007), pp. 104-5. Download PDF file
85. Finding Zeta(2n) from a recursion relation for Bernoulli numbers (with Jim Zeng), The Mathematical Gazette, 91(2007), pp. 123-6. Download PDF file
86. The quadratic equation as solved by Persian mathematicians of the middle ages, (with Mohamed Teymour). Mathematical Spectrum, 39(2006/2007), pp. 115-8. Download PDF file
87. Lucky fractions: Where bad arithmetic gives correct results, Mathematics and Computer Education, 41(2007), pp.162-167.
88. Constructing a line segment whose length is equal to the measure of a given angle, (with Walter Jacob, IV), International Journal of Mathematical Education in Science and Technology, 38(2007),
pp. 529-530. Download PDF file
89. The unlikely connection between partitions and divisors, (with Abdul Hassen and T. R. Chandrupatla). To appear in The College Mathematics Journal.
90. Geometric constructions approximating pi related to Vieta's product of nested radicals. To appear in The Mathematical Spectrum.
91. Vieta-like products of nested radicals with Fibonacci and Lucas numbers. To appear in the Fibonacci Quarterly.
92. An unusual proof that Fm divides Fmn for Fibonacci numbers using hyperbolic functions, (with Adam Hilburn). To appear in The Mathematical Gazette November 2007.
93. Another geometric vision of the hyperbola. To appear in The Mathematical Spectrum.
94. Approximating the path of a celestial body with a circular orbit from two close observations, (with Joseph Palma) To appear in The Mathematical Scientist.
95. Euler's little summation formula and sums of powers, (with Andrew Robertson). To appear in The Mathematical Spectrum.
96. Oblique-angled diameters and the conic sections, with Edward Greve. To appear in The Mathematical Spectrum.
97. Euler's little summation formula and special values of the zeta function. To appear in The Mathematical Gazette in July 2008. | {"url":"http://www.rowan.edu/open/depts/math/osler/my_papersl.htm","timestamp":"2014-04-17T21:48:56Z","content_type":null,"content_length":"28996","record_id":"<urn:uuid:ddd7acd9-994a-455b-bd81-c7d269768fea>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fractal Dimension of Population Distribution
3.7. Fractal Dimension of Population Distribution
Old definition of a fractal: a figure with self-similarity at all spatial scales.
A fractal is what appears after an infinite number of steps.
Examples of fractals were known to mathematicians for a long time, but the notion was formulated by Mandelbrot (1977).
New definition of a fractal: a fractal is a geometric figure with fractional dimension.
It is not trivial to count the number of dimensions of a geometric figure. A geometric figure can be defined as an infinite set of points with distances specified for each pair of points. The question
is how to count the dimensions of such a figure. Hausdorff suggested counting the minimum number of equal spheres (circles in the picture below) that cover the entire figure.
The number of spheres, n, depends on their radius, r, and the dimension was defined as:
D = lim (r→0) ln n(r) / ln(1/r)
For example, the dimension of a line equals 1 (see figure above): covering a segment of length L takes n(r) = L/(2r) spheres of radius r, and ln(L/(2r)) / ln(1/r) → 1 as r → 0.
"Normal" geometric figures have integer dimensions: 1 for a line, 2 for a square, 3 for a cube. However, fractals have FRACTIONAL dimensions, as in the example below. Here we use rather large
circles, and thus the precision is not high. For example, we got D=2.01 for a square instead of D=2.
The dimension of a square and of a fractal is estimated as follows:
Below is the Mandelbrot set, which is also a fractal:
Fractal dimension, D, is related to the slope of the variogram plotted in log-log scale, b:
D = 2 - b/2 for a 1-dimensional space
D = 3 - b/2 for a 2-dimensional space
In the figure above, b=1, and thus, D=1.5 for a 1-dimensional space. | {"url":"http://home.comcast.net/~sharov/PopEcol/lec3/fracdim.html","timestamp":"2014-04-18T03:01:28Z","content_type":null,"content_length":"2688","record_id":"<urn:uuid:27749e92-c96f-4829-a3e3-b71e79a02267>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
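The definitions above can be illustrated with a crude numerical box-counting sketch, where 1-dimensional boxes stand in for Hausdorff's covering spheres; the function names and the sample point set are mine, not from the lecture notes.

```python
import math

def covering_count(points, r):
    """Number of intervals of width r needed to cover a 1-D point set."""
    return len({math.floor(p / r) for p in points})

def box_dimension(points, radii):
    """Least-squares slope of ln n(r) versus ln(1/r): the dimension D."""
    xs = [math.log(1.0 / r) for r in radii]
    ys = [math.log(covering_count(points, r)) for r in radii]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# A densely sampled line segment should come out close to D = 1.
segment = [i / 10000 for i in range(10000)]
print(box_dimension(segment, [0.1, 0.05, 0.02, 0.01]))
```

As with the square example above, where the estimate came out D=2.01 instead of D=2, finite box sizes give only an approximation; the slope converges to D as r shrinks.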
Saxon 76 Tests
Saxon 76 Tests PDF
Middle Grades Math Placement Test For Students New to the Saxon Math Program
Saxon Math 76 TexT Saxon Math 76 (3rd Edition) In t r o d u c t I o n Welcome to Saxon Math! John Saxon believes that “Mathematics is not difficult; it’s just different, and tasks that are different
become familiar through practice.
Saxon Homeschool Placement Guide Saxon books are skill level books, not grade level books. It is essential that each student is placed in the text that meets the skill level of the individual ...
Algebra 2 Algebra A Math 87 Math 76 7 Algebra Aor
Weekly Saxon tests are optional-may give every other Saxon test Post-test review is recommended, but optional In the Pacing Schedule / Instructional Guide, Saxon Math 65 lessons have been correlated
to Saxon Math 76 as optional lessons available for below basic students. Lesson 27: Adding and ...
to the Saxon math program. This test includes selected content from Math 54, Math 65, Math 76, Math 87, and Algebra 1/2. Please note that this placement test is not infallible. It is simply one
indicator ... tests. We can also be contacted at 2450 John Saxon Blvd., Norman, ...
A Harcourt Achieve Standard Correlation of Saxon Math Course 1, 1st Ed. Teacher’s Manual To the Texas Essential Knowledge and Skills (TEKS) 1
To contact your Saxon Representative, www.SaxonMath.com Call: 800-531-5015 ... Cumulative Tests 13, 14, 20 : A2.10.3 . Use the unit circle to ... 71-76, 78-97, 101-119 . Cumulative Tests 6-13, 16-23
: A3.1.2 . Adapt the general symbolic
Saxon Math Intermediate 3 Standards Success Overview Common Core State Standards and the Saxon Math Pedagogy The Saxon Math philosophy stresses that incremental and integrated instruction, with
Saxon program should start in Saxon’s Math 54, Math 65, Math 76, Math 87, Algebra 1/2, or Algebra 1 textbook. Please note that this placement test is not a fool-proof placement ... ment tests. We can
also be contacted at 1320 W. Lindsey, Norman, OK 73069; or by e-mail at [email protected]
Saxon strongly recommends that the student ... caution as the problem type may appear on any tests during the year. In the beginning, do not worry about “getting ... you may decide to move the
student back to Math 87 or Math 76. In either case, ...
Saxon Math 3, Saxon Publishers: Norman, OK ... Students can be evaluated through tests, daily practice sets ... First Semester: Lesson 1 - 75 Second Semester: Lesson 76 - 140 Course Objectives: At
the end of this course students should be able to: 1. Memorize all addition, subtraction ...
Short-Term Instructional Objectives for Saxon Math 7/6 Instructional Objectives: Date Projected: Evaluation Results: Given a series of written computation and word problems involving
If a student is struggling somewhat with these assignments and scoring less than 80% on his tests ... Saxon Algebra 1 Homeschool Kit, Solutions Manual (recommended), DIVE CD ... ____Day 93 Complete
DIVE Lesson 76 or read Lesson 76 (pp313 – 314).
A Harcourt Achieve Correlation Of Saxon Math 7/6 Teacher Manuals To The Iowa Test Of Basic Skills – Complete Battery 1 LEVEL 12 IOWA TEST OF BASIC SKILLS SAXON MATH 7/6 ... 76, 77, 78, 79, 82, 83,
85, 86, 87, 88, 89, 90, 91, 93, 94, 96, 97,
The Saxon Mathematics 7/6 Tests and Worksheet booklet is represented by the abbreviation WORK. Each weekly assignment is summarized in the first rows of the week’s daily course plan along with the
goals and notes for that week.
Day 76 Saxon Math 5/4, Lesson 61, “Remaining Fraction,” “Two-Step Equations” Saxon Math Tests and Worksheets, Facts Practice Test I (page 135) and Recording Forms B and C (following page 283) Saxon
Math Solutions Manual, pages 114-116 and 279
A Harcourt Achieve Correlation Of Saxon Math 6/5 To The Michigan Grade Level Content Expectations 1 GRADE FIVE MIGHIGAN GRADE LEVEL CONTENT EXPECTATIONS SAXON MATH 6/5 ... Lesson Practice: 76, 29,
109 Mixed Practice: 77, 78, 79, 30, 31, 32, 110, 111
Saxon Math Intermediate 4 Cumulative Test 6B 11. (27) If it is evening, what time will it be in 2 hours and 15 minutes according to this clock? 3 12 11 10 9 8 ... 76 _____ 29 17. (13) 286 _____ 388
Find each missing number in problems 18 and 19. 18. (14) 22 _____ r 57 19. (16) 87 _____ p 43 20. (1)
Tests only 54, 65, 76, 87 $10.00each Solutions Manuals Algebra 1/2 $28.00 Advanced Math $28.00 Algebra 1 $28.00 Calculus $28.00 ... Math Manipulatives For Saxon Program School In A Box: A Multi-Level
Teaching Tool $19.95 This
Extension Tests from this publication in classroom quantities for instructional use and not for resale. Requests ... 76 Saxon Math Intermediate 5 Extension Activity 2 • Finding Area of a Rectangle
with Fractional Side Lengths (CC.5.NF.4b)
Day 76 Saxon Math 6/5—Homeschool, Lesson 61, “Using Letters to Identify Geometric Figures” ... Before Class… Make 20 copies of the Recording Forms B and C in Saxon Math Tests and Worksheets,
following page 261. You will need one for every lesson in this unit. Warm-Up Facts Practice
Saxon Calculus, 2nd Edition Answer Key & Tests In Print 1266924 9781600321184 22.20 $17.76 Saxon Calculus, 2nd Edition Tests only In Print 1266612 9781600320156 14.10 $11.28 Saxon Calculus, 2nd
Edition Solutions Manual In Print 1255355 9781565771482 43.75 $35.00
Course Curriculum Saxon II Tests and Answer Key (505460), Saxon II Solutions Manual ... Prerequisites – Successful completion of VPSA S- axon 76 in previous 24 months, or accepted work from 6th grade
math or competency test for Saxon 76
Parent purchased text: Saxon Algebra 1/2 3rd Edition, tests, answer key and solutions manual. 09:00 AM PreAlgebra ... Parent purchased text: Saxon Math 76 - 4th Edition Home school Kit; Parents must
purchase complete Test/Worksheet Book. 09:00 AM
Saxon Herald Page 3 A WORD FROM PRINCIPAL FOSTER Ferris High School Mission Statement We are to affirm a supportive learning environment, character-
Saxon Math 3 rd Edition MONTH CONTENT PARALLEL TASKS ASSESSMENT August – ... Tests Multiplication Facts Tests Problem Sets ... Facts Tests Parallel Tasks March Lessons 76-90, 111, 113 • Division with
3-Digit Answers,
4 A Beka Testing Guide for Iowa Before Testing Begins A Beka Testing Dates for Test Coordinator Please remember these important dates while you are administering your standardized tests .
Success At Home! Saxon Math is the only major math program on the market today that systematically distributes instruction, practice, and assessment throughout the academic
Math 76. Seven or more correct from ... the Saxon mathematics program are best placed well served by these texts when they are placed at levels consistent with their competencies. This test is not
intended for use with current Saxon students. ... tests. We can also be ...
4 A Beka Testing Guide for Stanford Before Testing Begins A Beka Testing Dates for Test Coordinator Please remember these important dates while you are administering your standardized tests .
Supplies: Saxon Algebra 1/2 3rd Edition, tests, answer key and solutions manual. B0974 9:00 AM Algebra 1 (7th-10th) ... Supplies: Saxon Math 76 - 4th Edition Home school Kit; Parents must purchase
complete Test/Worksheet Book. B1056 10:00 AM
Saxon Math 6/5 Third Edition Page 1 ... Lesson 76 – Multiplying Fractions Multiplying fractions Lesson 77 – Converting Units of Weight and Mass ... assignments and tests aligned to your texts. Saxon
Math 5/4 Third Edition Saxon Math 6/5 Third Edition
Weekly Saxon tests are optional-may give every other Saxon test Post-test review is recommended, but optional Grading Period ... Lesson 76: Area, Part 1 Lesson 77: Multiplying by multiples of 10
Lesson 78: Division with 2-digit answers and a remainder
Resellers of products from AXON PUBLISHERS INC Homeschool Placement Guide and Adapted Tests for Australia & New Zealand Originally developed by Saxon Publishers
Pre-Algebra: Saxon Algebra ½ Student Text, Answer Key and Tests, and Solutions Manual (VP Catalog # 003550 $44.50) ... Pre-Algebra : minimum age of 11, completion of 5th grade; Saxon 76
Saxon Publishers Algebra 1: ... 76 Consecutive Integers 77 Consecutive Odd and Consecutive Even Integers – Fraction and Decimal Word Problems 78 Rational Equations ... Tests for Functions 2.2.C,
2.8.Q, 2.8.R, 2.8.S, 2.8.T, 2.9.G, 2.11.A,
MATH – Sarah has been using the Saxon 76 Math Textbook and Test Book. She has completed 30 out of 140 lessons, which is about ¼ of the book and has taken 5 tests. Her average for the quarter is 93%.
That's it. Repeat for each subject.
Dan has a list of the scores of all his math tests and quizzes during the year. What are the mean, median ... 78, 79, 91, 75, 42, 74, 82, 75, 87, 85 A. mean = 78.5, mode = 75, median = 76.8 The mean
is the best description of the scores. B. mean = 75, mode = 78.5, median = 75 The mode is the ...
Saxon Algebra ½ Tests Only 3E 15.00 Saxon Algebra 1 Kit (Gr. 9) 80.40 Saxon Algebra 1 Solutions Manual 42.00 ... Saxon Math 76 Kit New 4 th Edition (Grade 6) 101.40 Student Textbook 47.95 Tests and
Worksheets 29.40 Solutions Manual 35.40
*Chapter and Unit tests *Student grades on text practices *Final Examination Curriculum: May be selected from the following: Textbook; Saxon Math 76 (7th grade level) Saxon 87 (8th grade level)
Textbook; Basic Math, Globe-Ferron
Saxon math 76 3rd Edition, Saxon Publishers: Norman, OK. Supplemental Materials: Course Description: ... Students can be evaluated through tests, quizzes, daily practice sets, homework problem sets,
lab grades quarterly exams, semester exams
MS Saxon Math 76 Teacher: Kelley Buchanan Once a student has completed Math 65, they may go into either Math 76 or Math 87. This class is for students who are weak in their multiplication skills,
decimals, and fractions.
Packet), Second edition, by John H. Saxon, Jr., Saxon Publishers Inc., 1996 Contents of Saxon Advanced Mathematics: Preface ... evaluating Functions; domain and range; types of Functions; tests for
Functions 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14 ... 76. 77. 78. 79. 80. 81. 82. 83. 84. 85 ...
Math 1 math Saxon Math 1, Home Study Kit* Saxon 9781565770188 CBD WW18800 $76.05Required * ... Tests BJU 3rd edition (2010)9781591669678 CBD WW265157 $15.00Required Test Answer Key BJU 3rd edition
(2010)978-1-59166-968-5 CBD WW271809 $9.17no
6TH GRADE: SAXON MATH 76 (3 RD OR 4 TH EDITION) 000HSP076 $63.75 7TH GRADE: SAXON MATH 87 ... Tests, Wkbk & AK $55.29 . Title: Microsoft Word - MIDDLE SCHOOL BOOKS LIST.doc Author: Donna Created
Date: 8/9/2012 2:20:39 AM ...
COMPASS Mathematics Tests. Calculators must, however, meet ACT’s specifications, which are the same for COMPASS and the ACT Assessment. ... C. 76 D. 77 E. 78. Numerical Skills/Prealgebra Placement
Test Sample Items © 2004 by ACT, Inc.
Student textbook, lst edition, good shape $7. Student Tests to lst edition, like new $2.50. Student Maps and Study booklet Missing pages ... $6 ppd. Nice. ... Saxon Math 76 student textbook, Gr. 7 --
$13 ppd. Good. 7Th Grade | Used Home School Curriculum Abeka A Beka 7th Grade A Healthier You ...
510.76--dc22 2008008247 Typeset by Saxon Graphics Ltd, Derby Printed and bound in Great Britain by MPG Books Ltd, ... 76. A currency is devalued by a factor of 0.02 a year. ... Tests of data
sufficiency seek to measure a candidate’s ability to
Saxon Math (Harcourt Achieve, 2008) en VisionMATH (Scott-Addison, 2009) Texas Math ... SE 74-76 Never How many problems practice * addition with regrouping in the ... on Practice Tests and Chapter
Tests for Chap-ters 3, 6-8, & 10; Mas-
developed intelligence tests to evaluate newly arriving immigrants. Poor test scores among immigrants who were not of Anglo-Saxon heritage were attributed by some psychologists of that day to: ... 76
.Intelligence tests are most likely to be considered | {"url":"http://ebookilys.org/pdf/saxon-76-tests","timestamp":"2014-04-19T04:43:25Z","content_type":null,"content_length":"41620","record_id":"<urn:uuid:79ef4fff-c8b7-4e6f-a8b9-c4e1074a3856>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elsie is making a quilt using quilt blocks like the one in the diagram.
a. How many lines of symmetry are there?
b. Does the quilt square have rotational symmetry? If so, what is the angle of rotation?
Please provide the diagram. I need it as reference so that I can properly help you with your task. Thank you.
Asked 10/15/2012 7:29:38 AM
| {"url":"http://www.weegy.com/?ConversationId=2270BFF0","timestamp":"2014-04-18T05:33:21Z","content_type":null,"content_length":"39207","record_id":"<urn:uuid:94e03bef-1807-4c6c-9ebc-6f94eede0c6e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trial designs - Non-inferiority vs. Superiority vs. Equivalence
The primary purpose of a clinical trial is to address a scientific hypothesis. To address a hypothesis, different statistical methods are used depending on the type of question to be answered. Most
often the hypothesis is related to the effect of one treatment as compared to another. For example, one trial could compare the effectiveness of a new antibiotic to that of an older antibiotic. Yet
the specific comparison to be used will depend on the hypothesis to be addressed. Let’s use this two-antibiotic example for this discussion.
We can construct three possible hypotheses to be addressed when comparing these two antibiotics. The three hypotheses are:
1. The New Antibiotic is at least as good as the Old Antibiotic.
2. The New Antibiotic is better than the Old Antibiotic.
3. The New Antibiotic is equivalent to the Old Antibiotic.
While each of these hypotheses may seem similar, they are slightly different scientific questions, so each requires a slightly different statistical test. For the first hypothesis (“at least as
good as”), the New Antibiotic must be as good as the Old Antibiotic, but it can also be better than the Old Antibiotic. In mathematical terms:
1. $\text{New Antibiotic} \ge \text{Old Antibiotic}$
For the second hypothesis (“better than”), the New Antibiotic can no longer be equivalent to the Old Antibiotic. In mathematical terms:
2. $\text{New Antibiotic} > \text{Old Antibiotic}$
The last hypothesis (“equivalent to”) indicates that the New Antibiotic cannot be worse than or better than the Old Antibiotic. In mathematical terms:
3. $\frac{\text{New Antibiotic}}{\text{Old Antibiotic}} = 1 \pm \alpha$
As you can see, each hypothesis contains a slightly different mathematical arrangement. These mathematical expressions have been given “lay names” by statisticians in an effort to describe the math to
non-statisticians; these names are:
1. Non-inferiority
● $\text{New Antibiotic} \ge \text{Old Antibiotic}$
2. Superiority
● $\text{New Antibiotic} > \text{Old Antibiotic}$
3. Bioequivalence
● $\frac{\text{New Antibiotic}}{\text{Old Antibiotic}} = 1 \pm \alpha$
Of these three comparisons, the non-inferiority has the largest range of successful trial outcomes (equivalence or superiority). Thus a calculated sample size for a non-inferiority trial is usually
the smallest of the three. The superiority comparison is a subset of the non-inferiority comparison and will have a sample size that is similar to the non-inferiority sample size, or one that is much
larger. As the expected difference between the two treatments decreases, the sample size will increase, often dramatically. Finally, the bioequivalence comparison is the most restrictive because it
requires that the two treatments be identical within some acceptable range defined by α (normally ±20%). In general a bioequivalence trial will have a sample size that is larger than a
non-inferiority trial.
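To make the sample-size ordering concrete, here is a rough sketch, not from the original post, using the standard normal-approximation formula for comparing two means. The margin, standard deviation, and alpha/power values are illustrative assumptions only.

```python
from statistics import NormalDist  # Python 3.8+ standard library

def sample_size_per_arm(delta, sigma, alpha=0.05, power=0.80, one_sided=False):
    """Per-arm sample size for a two-arm comparison of means, detecting a
    difference (or margin) delta with common standard deviation sigma."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha) if one_sided else z(1 - alpha / 2)
    z_beta = z(power)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Non-inferiority is typically tested one-sided against a margin;
# superiority here is the usual two-sided test at the same effect size.
n_noninf = sample_size_per_arm(delta=0.5, sigma=1.0, one_sided=True)
n_sup = sample_size_per_arm(delta=0.5, sigma=1.0, one_sided=False)
print(round(n_noninf), round(n_sup))  # 49 63
```

With the same margin and variability, the one-sided non-inferiority test needs fewer subjects per arm than the two-sided superiority test, matching the ordering described above.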
So which is best to use? It all depends on which scientific question you are trying to answer. All three study types are useful in the development of drugs. Non-inferiority studies are used to show
that a minimum level of efficacy has been achieved. In comparison studies with a current therapy, non-inferiority is used to demonstrate that the new therapy provides at least the same benefit to the
patient. Superiority trials are always used when comparisons are made to placebo or vehicle treatments. In these studies, it is critical that the effect in the treatment group be clearly superior to
any effects in the placebo groups. Failure to demonstrate superiority over vehicle suggests that the drug is not effective. Superiority trials are also used for marketing purposes (“our drug is
better than your drug” studies). Bioequivalence trials are used to show that a new treatment is identical (within an acceptable range) to a current treatment. This is used in the registration and
approval of generic drugs that are shown to be bioequivalent to their branded reference drugs.
In the end, ask yourself which hypothesis am I trying to address, then use the appropriate study design. Best of luck!
| {"url":"http://learnpkpd.com/2011/01/01/trial-designs-non-inferiority-vs-superiority-vs-equivalence/","timestamp":"2014-04-20T02:14:43Z","content_type":null,"content_length":"38488","record_id":"<urn:uuid:37f6e85f-e6d2-4b35-8a6e-2e3f5ef6b1bc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seasonal Twin Sudokus
Copyright © University of Cambridge. All rights reserved.
'Seasonal Twin Sudokus' printed from http://nrich.maths.org/
by Henry Kwok
[Puzzle grids for Twin A and Twin B not shown]
Rules of Seasonal Twin Sudokus
Each row, each column and each of the nine 3x3 boxes in Twin A and Twin B contains 9 elements: in Twin A these are drawn from a total of 10 letters (one cell holds a pair of letters), and in Twin B they are the nine different numbers 1 to 9.
This version of Sudoku consists of a pair of linked standard Sudoku puzzles, with some letters and digits as starters in twin A and twin B respectively. To get a complete solution for the twin
puzzles, we have to solve each twin puzzle with the usual strategies but we can only get the complete solution for the twin puzzles by substituting the corresponding element of each cell from one
twin Sudoku into the other.
So, for example we can see that in (9,1), that is row 9 column 1, Twin A contains the letter A and Twin B contains the number 9. This means that everywhere a letter A appears in twin A - a 9 appears
in the corresponding position in twin B and vice versa.
This immediately generates some more elements in the cells of each twin!
Note that (4,3) in Twin A contains two letters (CH) instead of one. This is a deviation from the regular Wordoku which contains only one letter per square. This is the only way to squeeze more than 9
required letters into the 9x9 grid of Twin A to form two hidden words.
Can you guess or spot the two hidden words in Twin A? | {"url":"http://nrich.maths.org/5518/index?nomenu=1","timestamp":"2014-04-18T03:19:46Z","content_type":null,"content_length":"4549","record_id":"<urn:uuid:254d024b-882a-4c31-896f-3e72cae4b740>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
1 project tagged "Linear Algebra"
jMathLab is a platform for mathematical and numerical computations. It uses the Matlab/Octave programming language. It runs on any platform where Java is installed, and can also run in a Web
browser. The following packages are included: symbolic calculations (simplification, differentials, integration), numeric calculations, evaluations of mathematical functions, special functions,
linear algebra with vectors and matrices, plotting data and functions, saving data (vectors and matrices) in files, random numbers, statistics, and solving linear and non-linear equations | {"url":"http://freecode.com/tags/linear-algebra?page=1&sort=name&with=30566&without=","timestamp":"2014-04-18T05:00:03Z","content_type":null,"content_length":"18408","record_id":"<urn:uuid:0f66d16f-f6ca-4e57-83b5-2c6b675f4f2d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from October 2009 on Not About Apples
S is for Symmetry
29 October 2009
If your basic education was like mine, you learned about symmetry in elementary school, and it was pretty much limited to telling which shapes were symmetric and which ones weren’t. Of course
symmetry isn’t just a matter of yes-or-no, and some objects are more symmetric than others. A square is more symmetric than a rectangle, say, and a circle is more symmetric than either. (This can
be made precise, of course; in this case, even the crudest way is adequate: a square has 8 symmetries but a non-square rectangle only has 4, and a circle has infinitely many.)
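The crude count can even be checked by brute force. The following sketch is mine, not the author's: represent a shape by its vertices and count the vertex permutations that preserve all pairwise distances, which for these polygons recovers exactly the geometric symmetries.

```python
from itertools import permutations

def symmetry_count(points):
    """Count permutations of the vertices that preserve all pairwise distances."""
    def d2(p, q):  # squared distance, exact for integer coordinates
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    n = len(points)
    count = 0
    for perm in permutations(range(n)):
        if all(d2(points[i], points[j]) == d2(points[perm[i]], points[perm[j]])
               for i in range(n) for j in range(i + 1, n)):
            count += 1
    return count

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rectangle = [(0, 0), (2, 0), (2, 1), (0, 1)]
print(symmetry_count(square), symmetry_count(rectangle))  # 8 4
```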
And as the previous two posts show, symmetries are not just geometric in nature. Structures of all sorts, in wildly varying contexts, have interesting symmetries; moreover, these symmetries can
explain some strange phenomena and put them into their proper perspective.
A good working definition of symmetry is a transformation of an object which preserves its important features (the shape of a geometry object, the algebra of a number system, etc.)
You may not like my count of 4 symmetries for a rectangle. For the record, these are the four.
1. flip it over horizontally
2. flip it over vertically
3. do both (in other words, half-turn)
4. do nothing
Doing nothing certainly is a symmetry, and it turns out to be better just to include it in our lists. For one thing, the numbers 4 and 8 are more suggestive of the notion that a square is “twice as symmetric”
as a non-square rectangle than 3 and 7 would be. There’s a better motivation, which we’ll see next time. For now, let’s just agree that every object has at least the trivial symmetry, and that to
be “symmetric” means to have at least 2 symmetries (at least one that is interesting).
So if you have only one symmetry, it’s the trivial one, and you’re asymmetric. Just like if only one person shows up to your birthday party, it’s you, and you’re lonely.
So we can classify objects based on how many symmetries they have. Loosely speaking, the more symmetric something is, the more symmetries it has. Actually there’s a lot to learn even from this low
level of sophistication.
Lots of things in life have exactly two symmetries. The human face, the grid of a traditional crossword puzzle, the shape of a piece of bread, etc. And this particular sort of symmetry, which seems
to be intrinsically aesthetically pleasing to most people, means that these objects all have something deep in common.
But we can say more. For example: how might you compare and contrast the two squares pictured, one purple/cyan and one blue-green? For the purple/cyan square, the symmetries are:
1. quarter-turn
2. half-turn
3. three-quarter-turn
4. nothing (or a full turn)
And for the blue-green square:
1. flip it over horizontally
2. flip it over vertically
3. do both (in other words, half-turn)
4. do nothing
But these collections of symmetries have a deeper difference. If you just look at the shapes a minute, move them around in your head, you’ll probably notice they “feel” different. It’s hard to
nail down why exactly, but we can.
In the purple/cyan case, there was really only one kind of symmetry—rotation. All the symmetries just come by repeating the basic quarter turn. But there’s no one symmetry of the blue-green square
that gives rise to all the others. Also, every symmetry of the blue-green square has the property that if you do it twice, you’re back where you started, and the purple-cyan square has other kinds of symmetries.
If this seems too confusing or somehow too “high-level” mathematically, then as my six-year-old daughter always says, be brave in your heart. I firmly believe that, like the
mathematical abstraction that is counting, the mathematical abstraction that is symmetry is something that is instinctive for humans. The deficit is not in human faculties, rather it is in human
language. Standard English provides a pretty poor language to talk about these issues. (The analogy with our sense of smell, I think, is apt; experiment suggests that our latent sense of smell is
as good as any dog’s, but we think about what we talk about, and our words to differentiate smells are pretty limited and clumsy, so our sense of smell is highly impaired in practice, but not by lack
of latent ability.) But it turns out that mathematics, like it or not, is an ideal setting to verbalize gut notions of symmetry. Like that smell you can’t quite identify and don’t know how to
describe to your friend, like why you enjoyed that movie so much, like the difference between how normal coffee and decaf taste, like so many things in life, symmetry seems ineffable. But as I always
say, eff the ineffable. Mathematics provides a comprehensive language for formulating and expressing ideas about symmetry, the language of group theory. Stay tuned for the next post, where I’ll talk
about this extensive language mathematicians have developed to discuss and understand symmetries.
So I’ll concede that I was forced into awkward language in that paragraph distinguishing the two shapes, but I think it’s fair to say that we understand the difference between the two shapes better
than we did before. The mind’s eye knows they’re differently symmetric, and now the mind’s mouth can make that precise.
Symmetries are one of those things that, once you know to look for them, you see them everywhere. There are lots of important examples of symmetry issues in physics, but let’s look at a very simple
one: the so called arrow of time. That is, does time have an intrinsic forward and backward? There is the direction that we think of as forward, but is there an intrinsic difference? Of course at
the level of human experience, they are different. I remember the year 2000 but not the year 2020. But at the level of physical laws, it’s much less clear. If I showed you a movie of interactions
of electrons, would you be able to tell whether you were watching it playing forward or backward? Do you see how this is like the $i$ vs. $-i$ question from a recent post? The question is this: is
there a symmetry of space-time which interchanges past and future, but preserves physical laws? If there isn’t, then that means the arrow of time, our perception of the direction it flows, is
intrinsic to the universe. If there is, then it’s perfectly plausible to imagine, say, some other creatures which perceive themselves as moving through time the other way. What’s funny here is that
on a small enough level, say at the level of subatomic particles, most theories have past-future symmetry (an electron can gain a photon, or it can lose a photon, which is like gaining a photon
backwards). But on the larger scale of time and space, we do not see past-future symmetry. Thermodynamics, for example, is not symmetric. Entropy increases over time. If I made a video of a
breaking egg, you’d know which way was future. Eggs break, eggshells don’t assemble. The correct reconciliation of these ideas is, I think, an important open issue in physics.
There are also examples that are far less serious. Ever play rock-paper-scissors? Against a computer that just picks its move at random with equal probabilities? (You can do just that at eyezmaze,
a site with outstanding games, if you don’t count this one.) If you have, then you probably lost interest pretty fast, because you realized that your decisions don’t matter. And why don’t they
matter? Because rock-paper-scissors has symmetries, three symmetries which preserve the rules about which throws beat which and which also preserve the computer’s “strategy”, just enough to
interchange all the possible throws and guarantee that rock, paper, and scissors are always exactly equally good throws. This is different from RPS against a person, which is interesting, because
your opponent’s psychology doesn’t have symmetry.
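A quick simulation, my sketch rather than the blog's, makes the point numerically: against a uniformly random computer, any strategy at all wins about a third of the time, because the symmetry makes rock, paper, and scissors interchangeable.

```python
import random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
THROWS = list(BEATS)

def win_rate(strategy, rounds=200_000, seed=0):
    """Fraction of rounds won against a computer throwing uniformly at random."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(rounds):
        mine, theirs = strategy(rng), rng.choice(THROWS)
        wins += BEATS[mine] == theirs
    return wins / rounds

def always_rock(rng):
    return "rock"

def uniform(rng):
    return rng.choice(THROWS)

print(win_rate(always_rock), win_rate(uniform))  # both close to 1/3
```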
Okay let’s stop there, because if you’re me it doesn’t get any better than explaining in precise mathematical terms why one game is more fun than another.
P.S. (at least a bit heavier than what precedes)
Symmetries are at the heart of the so-called Erlangen Program for geometry developed by Felix Klein. The sound-byte version of which is “If you want to understand a geometry, understand the
symmetries that preserve it.” In the case of ordinary plane geometry (the kind you learned in Mrs. Gunderson’s algebra class in high school), this means understanding the transformations of the
plane which preserve lengths and angles. There are some obvious types of transformations that work, such as the following.
• rotations around a point
• reflections across a line
• translations
It turns out that all the symmetries of the usual geometric plane are given by rotations or reflections, possibly followed by a translation. All the fundamentals of Euclidean geometry can be
recovered by really understanding these families of symmetries. You may have heard of something called hyperbolic geometry, where parallel lines behave differently. How might someone get a concrete
handle on how the hyperbolic plane works? We can characterize its symmetries, and see that this plane has different kinds of symmetries than the ordinary Euclidean plane. And when I say compare
them, I don’t mean that in a fuzzy, hand-wavy way. All these symmetries can be expressed in concrete numerical ways (using matrices). The power of the method leads to an increased importance of
understanding various matrix groups (whatever that means) in geometry, and a closer relationship between algebra and geometry. But this is a subject for another day.
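As a tiny illustration of what “expressed in concrete numerical ways” means, here is a sketch of my own, with an arbitrary rotation angle and translation vector: a plane isometry is an orthogonal matrix followed by a translation, and one can check numerically that it preserves distances.

```python
import math

def rotation(theta):
    """2x2 rotation matrix as nested tuples."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def isometry(A, t):
    """The map x -> A x + t, for a 2x2 matrix A and translation vector t."""
    def f(p):
        x, y = p
        return (A[0][0] * x + A[0][1] * y + t[0],
                A[1][0] * x + A[1][1] * y + t[1])
    return f

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

f = isometry(rotation(0.7), (3.0, -1.0))   # arbitrary choices
p, q = (1.0, 2.0), (-4.0, 0.5)
print(dist(p, q), dist(f(p), f(q)))        # equal up to rounding
```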
Thought Experiment: Talking to the Other Aliens
22 October 2009
This is a direct continuation of the previous post, so read that one first if you haven’t yet. In some sense this post is simpler than the previous, in that it uses simpler concepts and doesn’t
involve understanding of the real number system. But it may be harder for many readers, because I’m asking you to imagine an alien race which does not understand certain things that you probably
can’t remember a time when you didn’t understand. And it’s hard to imagine what it would be like not to know what we know.
It’s interesting, isn’t it, how people are much better at temporarily adding an unfamiliar concept to their working context than they are at temporarily subtracting a familiar one?
Thought Experiment: Talking Math with the Aliens
20 October 2009
Though the connection may not at first be apparent, this is part of my promised (threatened?) attempt to put the fundamentals of Galois theory in terms suitable for readers of this blog. It will be
a slow build, because there are a lot of a pieces to put into play.
Today, a thought experiment. Imagine you have made contact with another form of intelligent life. Communication is still at a primitive stage, but you’ve devised a way of sending each other
signals, and you and the alien are in the process of building up your shared vocabulary in this new language. (I’m imagining some sort of IM window, your imagination may vary.)
Well you’ve heard that the universal language is mathematics, and you want to establish a shared vocabulary for basic math. With some effort, you establish an agreement on the concepts of “addition”
and “multiplication” (think about how you might do this, how you might distinguish these two operations from one another). You figure out what name they have for what you call “zero” and “one”
easily enough. (For example, you could ask what number plus itself equals itself to nail down zero, then ask what number multiplied by itself equals itself, other than zero, to nail down one — think
about it.) Once you have zero and one, addition and multiplication, you can get 2, 3, 4, etc., then the negative integers, and then fractions.
It would take some time, but suppose you eventually get sufficient communication to have shared language for the real number line (maybe you explain Dedekind cuts, whatever, I don’t care). (Actually
this isn’t essential, and it’s just as interesting to suppose you don’t establish shared vocabulary for the real numbers; we’ll explore that elsewhen.)
So now you’re feeling ambitious, and you want to know how the alien talks about imaginary numbers. What does the alien call your $i$? You assume (reasonably) that such a developed race would also
have some corresponding concept, so you ask for a number which multiplies by itself to give negative one, and the alien says “blarg: blarg times blarg plus one is zero”. Victory!
But then doubt sets in. Are you really sure his blarg is your $i$? After all, $(-i)^2=-1$ too. Maybe blarg is negative $i$? How would you know? Think about it as long as you like, but the answer
is, you wouldn’t. There are no questions you could ask that would say for sure whether blarg was $i$ or $-i$.
(You might try to say something about “the one on the upper half of the complex numbers”, but that’s no good. You have no reason to believe that they visualize complex numbers anything like how you
do, and anyway that distinction is happening only in your mind, not in the math. It’s no more constructive than defining “three” as “the number that looks like half an eight”. That’s not math, not
even arithmetic. It’s trivia about our way of writing numbers.)
We could rephrase this whole thing without aliens (but why would you ever prefer not to include aliens?). Suppose that I had misunderstood my teacher the day she defined the complex plane; suppose I
had thought that $i$ was one unit below the origin, the opposite of the convention you’re probably used to. What would happen when I try to talk math with the people like you who learned it the
usual way? Nothing interesting! You and I believe all the same statements about numbers! We both think $(3+2i)+(4-i)=7+i$ and we both think $(3+2i)(4-i)=14+5i$. If we visualize these facts
geometrically, then the picture in my head doesn’t match the picture in yours (it’s upside down). As long as we stick to the numbers and equations, as long as nobody explicitly mentions the pictures
we are thinking about, we’ll be in perfect agreement about complex numbers.
You may have learned in high school that, if you have a polynomial with real coefficients and $a+bi$ is a root, then so is $a-bi$. Now we see the reason that underlies this truth: no algebraic
statement in terms of real numbers can distinguish $a\pm bi$ from one another. The point in your mind I call $a+bi$, I call $a-bi$, and vice versa.
In fancier talk: the complex numbers have a symmetry, usually called complex conjugation, which preserves all the real numbers and which preserves any facts and relationships which can be expressed
in terms of basic algebra. The numbers $a+bi$ and $a-bi$ are interchangeable because they have to be, because they are bound by the symmetry. Symmetries are magical things.
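Both claims can be checked in a few lines; the polynomial below is my own example, not the post's. Conjugation respects addition and multiplication, and a real-coefficient polynomial that kills $a+bi$ also kills $a-bi$.

```python
z, w = 3 + 2j, 4 - 1j

# Conjugation preserves the algebra: conj(z + w) = conj z + conj w, etc.
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()

# p(x) = x^2 - 6x + 13 has real coefficients and root 3 + 2i ...
def p(x):
    return x * x - 6 * x + 13

print(p(z), p(z.conjugate()))  # both 0j: the conjugate is also a root
```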
As we shall see, symmetries are powerful tools for understanding many kinds of situations, and the language of mathematics is the right language for getting at symmetries.
But there is more to the story. We’ll talk to the aliens a little more next time.
Now there’s completeness and then there’s completeness…
15 October 2009
This post achieves a fortuitous segue from the last post into my series of articles on the beauty of Galois theory.
In the previous post I introduced Dedekind cuts as a means of constructing the real number line, and I said that this perspective is responsible for the completeness of the real numbers $\mathbb{R}$.
Now, that was completeness in the topological sense. There is another, very different notion of algebraic completeness.
A number system is called algebraically complete if every polynomial equation in one variable with coefficients from that number system can be solved in that number system.
Dedekind cuts
15 October 2009
I am currently teaching a course in geometry for teachers (Euclidean, nonEuclidean, projective, the whole ball of wax), and we were recently discussing the need for the Dedekind axiom for plane
geometry, which guarantees in effect that the points on a geometric line behave the same way as real numbers on a real number line. What was interesting to me was that, even after all we’d said
about all the ways that geometry might behave in unexpected ways if we don’t make certain assumptions, somehow the idea that geometric lines were real number lines was more deeply ingrained. The
idea that there might be a world where there were no line segments with length $\pi$ was harder to imagine than the idea of a world where there are multiple lines through a point parallel to another line.
It got me thinking, why is that?
Return to Purple Squares
10 October 2009
In the last post, we showed that the following simple diagram provides all the information needed to prove that $\sqrt{2}$ is irrational.
But as it true so often in mathematics, there is much more to see beyond the surface-level observations, and this time I cannot resist going back to this picture to say more.
Our main points last time were:
1. the big square has the same area as the two light squares together if and only if the dark square has the same area as the two white squares together
2. a square of side $m$ has the same area as two squares of side $n$ if and only if $m/n = \sqrt 2$.
We know we can’t get equality in either case though, which motivates the following approximate version.
1. the big square has almost the same area as the two light squares together if and only if the dark square has almost the same area as the two white squares together
2. a square of side $m$ has about the same area as two squares of side $n$ if and only if $m/n \approx \sqrt 2$.
In other words, if the ratio of the sides of the large and light squares is “about” $\sqrt 2$, then the ratio of the sides of dark and white squares is also “about” $\sqrt 2$. Which one is a better
approximation? The one involving the larger squares. The absolute discrepancy in area between the squares is the same, but the relative discrepancy will be smaller if the areas are larger. (The
same reason I was so much more dramatically older than my sister when I was 6 and she was 2 than I will be when I’m 82 and she’s 78.)
Let’s add some letters to simplify the statements. If $m$ is the side of the dark square and $n$ is the side of the white square, then $m+2n$ is the side of the big square, and $m+n$ is the side of
the light square. (Make sure you can see this in the picture.) Then our claim is that if $m/n$ is a reasonable approximation to $\sqrt 2$, then $(m+2n)/(m+n)$ will be a better one.
It’s too much to hope for $m^2=2n^2$, but if we take $m=1,n=1$, then they’re only off by 1. So we can take $1/1$ as a starting point. Then we expect $3/2=1.5$ to be a better approximation. But why
stop here? Taking $m=3,n=2$, $7/5=1.4$ is a better estimate. We can keep this up forever, giving the following sequence of increasingly good rational approximations to $\sqrt 2$:
$1/1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, \cdots$
These approximations are getting very close very fast (the last one is right to six decimal places, enough for any practical application I can think of), and we’re not working very hard to get them!
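The rule is easy to iterate exactly with rational arithmetic. This little sketch of mine reproduces the sequence above and also prints the area discrepancy $m^2 - 2n^2$, which stays at $\pm 1$ the whole way:

```python
from fractions import Fraction

def step(f):
    """One application of m/n -> (m + 2n)/(m + n)."""
    m, n = f.numerator, f.denominator
    return Fraction(m + 2 * n, m + n)

f = Fraction(1, 1)
for _ in range(8):
    f = step(f)
    m, n = f.numerator, f.denominator
    print(f, float(f), m * m - 2 * n * n)
```

After eight steps this reaches 1393/985, already within about $4 \times 10^{-7}$ of $\sqrt 2$.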
Actually, more still is true. If we start with any fraction $m/n$, even one which is nowhere near $\sqrt 2$, repeatedly applying the rule $m/n \mapsto (m+2n)/(m+n)$ will give us a sequence of
numbers that, in the long run, will converge to $\sqrt 2$. Since the absolute area discrepancy doesn’t change, but the squares get larger and larger, the approximation is eventually as close as we
might like. The sequence from the previous paragraph is still the best one, though, because there the area discrepancy is 1, which is the best we can hope for since we proved last time that 0 is impossible.
Actually, it can be proven, without anything fancier than stuff we’ve already said, that all the solutions in positive integers of the equation $m^2-2n^2=\pm 1$ come from the sequence two paragraphs
back. Can you see how?
It can also be proven that the approximations will be alternately overestimates and underestimates. Can you see why?
There is a rich theory of rationally approximating irrational numbers, including a method based on continued fractions for finding optimal approximating fractions to any real number. What is
amazing is that in this case we can get exactly the same answers predicted by the general theory without knowing anything sophisticated. We don’t need continued fractions or even a precise
definition of “good rational approximation”. All we need is the picture.
(In case you either don’t like pictures or really like algebra, then the corresponding algebra fact is $(m+2n)^2 - 2(m+n)^2 = -(m^2-2n^2)$, but that’s so much less colorful…)
I am aware that the triangle diagram in the previous post somehow got removed from my WordPress uploads. I can’t fix this until I get back in my office on Monday, but I will do it at that time.
Back from Maine with Squares and Triangles
6 October 2009
Ah, how I’ve missed my little blog. Sorry about the hiatus, but now I’m back from the number theory conference with a head full of new ideas. Unfortunately, most of the topics of the conference
have far too many prerequisites to fit in this blog. Let’s just say I saw many beautiful things and was reminded (in case I had forgotten) why I am a number theorist.
There was one historical talk, in which David Cox lectured on Galois theory according to Galois. If you aren’t a math major, don’t worry, you’ve probably never heard of Evariste Galois. So inspired
was I by this talk, and by the beauty of the ideas at play in what Galois brought to light, that I want to share the heart of Galois theory with all of you. This will take quite a few posts to
realize, working our way there one vignette, one thought experiment at a time. Fasten your seatbelts, ladies and gentlemen . . . the next few weeks will be interesting.
I did learn one extremely clever thing which is suitable for this audience “right out of the box”. The inimitable Steve Miller showed me the following purely graphical proof that $\sqrt{2}$ is irrational.
What would it mean for $\sqrt{2}$ to be rational? It would mean that $\sqrt{2}=m/n$ for some integers m and n, which we can choose to be in lowest terms. In other words, there is a square of integer
side length (m) whose area is the same as two squares of another integer side length (n), and furthermore we couldn’t find smaller integer squares with this relationship. Place the two smaller
squares in opposite corners of the larger square as shown in the picture.
By our setup, the two light purple squares together have the same area as the large square. This means that the uncovered area (the two white squares) must account for the same area as the
doubly-covered area (the darker purple square). If the original squares have whole-number sides, then so do these. And the new squares are obviously smaller than the original ones, since they’re
physically inside them. But we had supposedly chosen the smallest possible integer squares with this property. Contradiction.
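The descent step in the proof can be watched numerically; the sketch below is mine and starts from a near-solution with area discrepancy $m^2 - 2n^2 = 1$, since an exact solution cannot exist. If the big square has side $m$ and the light squares side $n$, the dark square has side $2n - m$ and the white squares side $m - n$, so the descent keeps producing smaller integer squares until it bottoms out.

```python
def descend(m, n):
    """From a big square of side m and two light squares of side n to the
    dark square (side 2n - m) and the two white squares (side m - n)."""
    return 2 * n - m, m - n

m, n = 577, 408  # near-solution: 577**2 - 2 * 408**2 == 1
while n > 0:
    print(m, n, m * m - 2 * n * n)
    m, n = descend(m, n)
```

An exact solution would have discrepancy 0 at every step and could descend forever through positive integers, which is the contradiction.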
Nor is this trick limited to $\sqrt{2}$. The following picture can be seen as a demonstration of the irrationality of the square root of 3, if you look at it right. I leave that to you.
If you want a further challenge, try to find proofs in the same spirit that $\sqrt{6}$ and $\sqrt{10}$ are irrational. | {"url":"http://notaboutapples.wordpress.com/2009/10/","timestamp":"2014-04-17T09:34:22Z","content_type":null,"content_length":"65533","record_id":"<urn:uuid:5ea3112a-ce4b-4ce1-a7c0-db37f31dc461>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
rational function
The topic rational function is discussed in the following articles:
algebraic expressions
• TITLE: elementary algebra; SECTION: Algebraic expressions
By extending the operations on polynomials to include division, or ratios of polynomials, one obtains the rational functions. Examples of such rational functions are 2/3x and (a + bx^2)/(c + dx^2 + ex^5). Working with rational functions allows one to introduce the...
Linfield, PA Algebra Tutor
Find a Linfield, PA Algebra Tutor
I am currently employed as a secondary mathematics teacher. Over the past eight years I have taught high school courses including Algebra I, Algebra II, Algebra III, Geometry, Trigonometry, and
Pre-calculus. I also have experience teaching undergraduate students at Florida State University and Immaculata University.
9 Subjects: including algebra 1, algebra 2, geometry, GRE
...What helped me have those results with my students were a number of factors: 1. a very good understanding of the subject; 2. trying to understand my students’ needs by trying to figure out what they do not understand, what they do not know; 3. trying to explain everything in a way they can understand...
7 Subjects: including algebra 1, chemistry, geometry, organic chemistry
...Able to help focus students on necessary grammar rules and help them with essay composition. I majored in Operations Research and Financial Engineering at Princeton, which involved a great
deal of higher level math similar to that seen on the Praxis test. To earn the degree I had to take a number of upper-level math courses.
19 Subjects: including algebra 1, algebra 2, calculus, statistics
...I am passionate about Math in the early years, from Pre-Algebra through Pre-Calculus. Middle school and early High School are the ages when most children develop crazy ideas about their
abilities regarding math. It upsets me when I hear students say, 'I'm just not good in math!' Comments like ...
9 Subjects: including algebra 2, algebra 1, geometry, precalculus
Hi, my name is Zekai. I graduated from Drexel University last year, majoring in Mechanical Engineering with a minor in Business Administration. I am currently employed with a company as a design engineer but want to fill my free time with something productive and at the same time earn a second income to pay off my heavy student debt.
8 Subjects: including algebra 1, algebra 2, precalculus, trigonometry
Related Linfield, PA Tutors
Linfield, PA Accounting Tutors
Linfield, PA ACT Tutors
Linfield, PA Algebra Tutors
Linfield, PA Algebra 2 Tutors
Linfield, PA Calculus Tutors
Linfield, PA Geometry Tutors
Linfield, PA Math Tutors
Linfield, PA Prealgebra Tutors
Linfield, PA Precalculus Tutors
Linfield, PA SAT Tutors
Linfield, PA SAT Math Tutors
Linfield, PA Science Tutors
Linfield, PA Statistics Tutors
Linfield, PA Trigonometry Tutors
Nearby Cities With algebra Tutor
Athol, PA algebra Tutors
Charlestown, PA algebra Tutors
Delphi, PA algebra Tutors
Englesville, PA algebra Tutors
Gabelsville, PA algebra Tutors
Graterford, PA algebra Tutors
Limerick, PA algebra Tutors
Morysville, PA algebra Tutors
Parker Ford algebra Tutors
Rahns, PA algebra Tutors
Sanatoga, PA algebra Tutors
Valley Forge algebra Tutors
West Monocacy, PA algebra Tutors
Worman, PA algebra Tutors
Zieglersville, PA algebra Tutors | {"url":"http://www.purplemath.com/Linfield_PA_Algebra_tutors.php","timestamp":"2014-04-20T02:09:10Z","content_type":null,"content_length":"24164","record_id":"<urn:uuid:df74927a-8965-4e07-b139-34962c1211fe>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Great Basin College Profile
Xunming Du
Mathematics Professor
Contact Information
Office Location: EIT 258, Elko Campus
Phone: 775.753.7081
Email: xunming.du(a)gbcnv.edu
NOTE: Substitute @ for (a) when sending a message.
Recommended Web Links
• Du's Website
NOTE: Viewing syllabi in Word (blue symbol) or Excel (green symbol) requires that your computer has those Microsoft products.
Viewing PDF documents (red symbol) requires the Adobe Reader plugin for your browser, available free from Adobe.
Whether or not syllabi are posted here is up to the discretion of the faculty member.
MATH 096 Title: Intermediate Algebra
MATH Title: Fundamentals of College Mathematics
Catalog Description: Includes real numbers, consumer mathematics, variation, functions, relations, graphs, geometry, probability, and statistics. Course is broad in scope, emphasizing applications. Fulfills the lower-division mathematics requirement for a Bachelor of Arts Degree. Satisfies mathematics requirement for baccalaureate degrees. It is recommended that students have completed prerequisites within two years of enrolling in this course.
MATH Title: Calculus I
Catalog Description: The fundamental concepts of analytic geometry and calculus: functions, graphs, limits, derivatives, integrals, and certain applications. It is recommended that students have completed prerequisites within two years of enrolling in this course.
MATH Title: Calculus III
Catalog Description: A continuation of MATH 182. Topics include infinite sequences and series, vectors, differentiation and integration of vector-valued functions, the calculus of functions of several variables, multiple integrals and applications, line and surface integrals, Green's Theorem, Stokes' Theorem, and the Divergence Theorem. It is recommended that students have completed prerequisites within two years of enrolling in this course.
MATH Title: Linear Algebra
Catalog Description: An introduction to linear algebra, including matrices and linear transformations, eigenvalues, and eigenvectors. It is recommended that students have completed prerequisites within three years of enrolling in this course.
STAT Title: Introduction to Statistics
Catalog Description: Includes descriptive statistics, probability models, random variables, statistical estimation and hypothesis testing, linear regression analysis, and other topics. Designed to show the dependence of statistics on probability. It is recommended that students have completed prerequisites within two years of enrolling in this course. | {"url":"http://www.gbcnv.edu/profiles/du_x.html","timestamp":"2014-04-17T00:59:30Z","content_type":null,"content_length":"16495","record_id":"<urn:uuid:0e22ca9a-5cff-46b1-b4ff-93fdb04ecbed>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
A-level Mathematics/MEI
This chapter is designed to aid students studying AS/A2 Level Mathematics for the New MEI GCE (Maintained under OCR). It ought to comply with the applicable specification for the GCE, available here
in PDF format from OCR's website. This book aims at the very least to teach the bare minimum information specified in the official specification, but the reader should note that relying on this
textbook alone is ill-advised. All readers are strongly encouraged to make bold additions to these pages, where doing so will improve the book for all/most users. If at any stage you feel that an
extra/replacement example would be helpful, please do not hesitate to make the necessary change!
Pure Modules
Core Modules
Further Pure Modules
Numerical Modules
Applied Modules
Differential Equations
Mechanics Modules
Statistics Modules
Decision Modules
Last modified on 27 May 2010, at 22:22 | {"url":"https://en.m.wikibooks.org/wiki/A-level_Mathematics/MEI","timestamp":"2014-04-18T18:17:46Z","content_type":null,"content_length":"17207","record_id":"<urn:uuid:5d64b3e1-7e85-4001-98ae-e486eeb7830a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00287-ip-10-147-4-33.ec2.internal.warc.gz"} |
It can be hard to find an error-free article that involves any use of ratios, units, or simple mathematics (if you can call adding and multiplying numbers, mathematics). The use of numbers seems to
cause writers to abandon any attempts at checking, despite the skills involved being high school level or below. I think I’ll start collecting them.
Today's entry: that all-time classic, confusing power with energy. There's an article today about a device that burns waste restaurant oil (though the writer incorrectly calls it 'grease').
“Put 80 gallons of grease into the Vegawatt each week, and its creators promise it will generate about 5 kilowatts of power.”
Only, a gallon of oil is a measure of energy, but a kilowatt is a measure of power. Energy is measured in joules (or any other energy unit: calories, BTUs, whatever). A joule is a watt-second, watts times seconds. Watts (power) are joules per second, the amount of work done per unit time, or the rate of energy conversion. The key being: energy is an amount, power is a rate.
So does this thing produce 5 kilowatts for the full 168 hours of the week, or some lesser period? The writer doesn’t say and probably doesn’t know. Assuming it produces the 5 kW continuously, it
would then do 5 x 168 = 840 kilowatt-hours (kWh) of work per week. 840 kWh is worth about $126, with power costing about 15c per kWh. So 80 gallons is supposed to generate $126 worth of electricity,
say $1.50 per gallon.
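The back-of-envelope arithmetic above is easy to reproduce; here is a quick sketch (the rate and usage figures are the post's assumptions, not measured values):

```python
# Assumptions from the post: 5 kW continuous output, 168 hours in a
# week, electricity at about 15 cents per kWh, 80 gallons of oil in.
POWER_KW = 5
HOURS_PER_WEEK = 168
PRICE_PER_KWH = 0.15
GALLONS_PER_WEEK = 80

energy_kwh = POWER_KW * HOURS_PER_WEEK               # 840 kWh per week
value_dollars = energy_kwh * PRICE_PER_KWH           # about $126
value_per_gallon = value_dollars / GALLONS_PER_WEEK  # about $1.58 per gallon

print(energy_kwh, value_dollars, value_per_gallon)
```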
The writer goes on:
At New England electricity rates, the system offsets about $2.50 worth of electricity with each gallon of waste oil poured into it.
So now he's saying a gallon of waste oil offsets about $2.50 worth of electricity, which at the 15c per kWh quoted above is roughly 17 kWh per gallon. Wait, didn't he just say $1.50 per gallon?
Vegawatt’s founder and inventor, James Peret, estimates that restaurants purchasing the $22,000 machine will save about $1,000 per month in electricity costs, for a payback time of two years.
OK so they're claiming $1,000 a month, which at 15c per kWh is about 6,700 kWh of electricity, or roughly 9 kW continuous (hey, it might be a 24 hour restaurant…). Depending on which of his numbers you go with ($2.50 or $1.50 of electricity per gallon of oil), that's between 400 and 660 gallons of oil a month. The only number he gives though is 80 gallons a week or about 330 a month. These numbers are not astonishingly out, only a factor of 2, not bad for general writing.
I just saw this article on the Watt in Wikipedia, which has a section 'Confusion of watts and watt-hours'. "Power and energy are frequently confused in the media", it says. No kidding.
Perl Hashes Ate My Workstation
Perl is not noted for its leanness but today I finally ran some little tests to see just how much memory it was devouring. I use some OO Perl code to process image files, there is a base class
Image::Med from which are derived Image::Med::DICOM, Image::Med::Analyze, and a few others. I store each DICOM element in an object instantiated as a hash; it’s of class
Image::Med::DICOM::DICOM_element which is derived from a base class Image::Med::Med_element. The inheritance works quite well and I’m able to move most of the functionality into the base classes, so
adding new subclasses for different file formats is reasonably easy.
Perl hashes are seductive, it’s so easy to add elements and things tend to just work. So my derived DICOM element class ends up having 13 elements in its hash, of which 10 are in the base class
(‘name’, ‘parent’, ‘length’, ‘offset’ and so on) and three are added in the derived class (‘code’, ‘group’, ‘element’) as being DICOM-specific.
As mentioned, I never claim Perl is svelte (or fast) but today I was sorting about 2,000 DICOM files. I like to keep them all in memory, for convenience and sorting, before writing or moving to
disk. Heck we’re only talking about a few thousand things here and computers work in the billions…all too easy to forget about memory usage.
I was unpleasantly surprised to find that each time I read in a DICOM file of just over 32 kB (PET scans are small, 128 x 128 x 2 bytes), I was consuming over 300 kB of memory. So my full dataset of
only 70 MB was using up almost a GB of RAM. And that was for only 2,100 files, whereas I have one scanner that generates over 6,500 DICOM files per study. I have the RAM to handle it, but my inner
CS grad has a problem with a tenfold usage of memory.
I used the Perl module Devel::Size to measure the size of hashes and the answers aren’t pretty: on my 64-bit Linux workstation each hash element is consuming 64 bytes in overhead. Crikey! So 64
bytes, times 13 fields per DICOM element, times 200-odd DICOM elements per object, that’s over 200 kB per DICOM object before I even put any data into it.
On my 64-bit Mac with perl 5.8.8 it’s not much better at 39 bytes per minimal element. I compared it with an array, which turned out to use 16 bytes per minimal element.
#! /usr/local/bin/perl -w
use Devel::Size 'total_size';
my %h = ();
print "0 hash elements, size = " . total_size(\%h) . "\n";
$h{'a'} = 1;
print "1 hash elements, size = " . total_size(\%h) . "\n";
$h{'b'} = 2;
print "2 hash elements, size = " . total_size(\%h) . "\n";
my @a = ();
print "0 array elements, size = " . total_size(\@a) . "\n";
$a[0] = 1;
print "1 array elements, size = " . total_size(\@a) . "\n";
$a[1] = 2;
print "2 array elements, size = " . total_size(\@a) . "\n";
167% ~/tmp/hashsize.pl
0 hash elements, size = 92
1 hash elements, size = 131
2 hash elements, size = 170
0 array elements, size = 56
1 array elements, size = 88
2 array elements, size = 104
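For comparison, CPython containers show the same kind of per-structure overhead; this side experiment of mine (not from the original post) uses sys.getsizeof, whose shallow byte counts vary by interpreter version and platform:

```python
import sys

# Shallow size in bytes of a dict versus a list as elements are added.
# Like the Perl numbers above, the hash-style container costs more.
d, lst = {}, []
for n in range(3):
    print(n, "dict:", sys.getsizeof(d), "list:", sys.getsizeof(lst))
    d[str(n)] = n
    lst.append(n)
```

Note that sys.getsizeof is shallow: it does not count the keys and values themselves, the same caveat that applies to the Devel::Size numbers above.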
I know the answer is, don’t use giant hashes in Perl, or perhaps it is, don’t use Perl when you’re manipulating 2,000 x 200 x 13 elements. But I like Perl, it’s so convenient. Perhaps I’ll
reimplement the whole thing as an array (ugh), and/or cut down the number of elements per DICOM field (indexing a 13-element array, not fun). | {"url":"http://idoimaging.com/blog/?m=200901","timestamp":"2014-04-21T12:24:16Z","content_type":null,"content_length":"24149","record_id":"<urn:uuid:59e41089-e650-469c-abe9-82485f101fbf>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Getting to Know Angle Pairs
Adjacent angles and vertical angles always share a common vertex, so they’re literally joined at the hip. Complementary and supplementary angles can share a vertex, but they don’t have to. Here are
the definitions for the different angle pairs:
• Adjacent angles: Adjacent angles are neighboring angles that have the same vertex and that share a side; also, neither angle can be inside the other. This very simple idea is kind of a pain to
define, so just check out the figure below — a picture’s worth a thousand words.
None of the unnamed angles to the right are adjacent because they either don’t share a vertex or don’t share a side.
Warning: If you have adjacent angles, you can’t name any of the angles with a single letter.
Instead, you have to refer to the angle in question with a number or with three letters.
• Complementary angles: Two angles that add up to 90° (or a right angle) are complementary. They can be adjacent angles but don’t have to be.
• Supplementary angles: Two angles that add up to 180° (or a straight angle) are supplementary. They may or may not be adjacent angles.
If two supplementary angles are adjacent, their non-shared sides form a straight line; such angle pairs are called a linear pair.
Angles A and Z are supplementary because they add up to 180°.
• Vertical angles: When intersecting lines form an X, the angles on the opposite sides of the X are called vertical angles.
Two vertical angles are always the same size as each other. By the way, as you can see in the figure, the vertical in vertical angles has nothing to do with the up-and-down meaning of vertical. | {"url":"http://www.dummies.com/how-to/content/getting-to-know-angle-pairs.html","timestamp":"2014-04-21T09:03:26Z","content_type":null,"content_length":"52983","record_id":"<urn:uuid:18f6145d-1316-4574-8721-25be0034741f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
La Puente Prealgebra Tutor
Find a La Puente Prealgebra Tutor
Hi there! I'm here to help all kids (elementary - high school) with all their academic needs. I'm great with math up to algebra, U.S. history & government, literature (I love all the greats that
kids had to read in school), and writing.
23 Subjects: including prealgebra, English, reading, writing
...I would like to become an elementary school teacher in the future. I have tutored students as young as pre-K all the way through the eighth grade and I am very passionate about helping others.I
am qualified to tutor students in reading because I have worked at a reading tutoring center for almos...
15 Subjects: including prealgebra, English, reading, grammar
...During my time at USC, I conducted astronomy research studying the flow of subsurface matter in the sun, and I also worked at Mt. Wilson Observatory. After graduation, I moved to Kiev, Ukraine
to teach English.
15 Subjects: including prealgebra, English, physics, writing
...I have also had successful experiences tutoring online. I try to be flexible with time. While I do have a 24-hour cancellation policy, I offer makeup classes.
40 Subjects: including prealgebra, English, algebra 1, physics
I am currently enrolled at Cal Poly Pomona. This summer I would like to help out anyone who is struggling with math, particularly elementary math, pre-algebra, and Algebra 1; and maybe also some
reading. I like helping kids out with topics that may be tricky for them in a way that is innovative, f...
4 Subjects: including prealgebra, algebra 1, precalculus, elementary math | {"url":"http://www.purplemath.com/la_puente_ca_prealgebra_tutors.php","timestamp":"2014-04-20T06:52:51Z","content_type":null,"content_length":"23804","record_id":"<urn:uuid:b7b0eac9-c6e4-49e9-96d5-1a8e6bf52381>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00398-ip-10-147-4-33.ec2.internal.warc.gz"} |
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/iheartfood/medals/1","timestamp":"2014-04-19T22:38:18Z","content_type":null,"content_length":"108401","record_id":"<urn:uuid:a145b589-f912-4113-9e4e-de04948d4fb9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00197-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bob Milnikel
Prior to his 2002 arrival at Kenyon, Bob Milnikel studied at Carleton College and Cornell University and taught at Wellesley College. His research is focused on the mathematical analysis of logic as
used in computer science. His teaching also bridges math and CS, including algebra and calculus courses as well as logic and introductory programming. Currently serving on the Tenure and Promotion
Committee, Bob is also active in several of Kenyon's musical ensembles. His Chicago area roots engendered an enduring fondness for good pizza and hapless baseball teams.
1999 — Doctor of Philosophy from Cornell University
1996 — Master of Science from Cornell University
1992 — Bachelor of Arts from Carleton College, Phi Beta Kappa
Courses Recently Taught
Academic & Scholarly Achievements
"Derivability in the Logic of Proofs is $\Pi^p_2$-complete", to appear, Annals of Pure and Applied Logic
"Sequent Calculi for Skeptical Reasoning in Predicate Default Logic and Other Nonmonotonic Systems", Annals of Mathematics and Artificial Intelligence 44:1 (2005), 1-34
"Embedding Modal Nonmonotonic Logics into Default Logic", Studia Logica, 75 (2003), 377-382
"The Complexity of Predicate Default Logic over a Countable Domain", Annals of Pure and Applied Logic 120, 1-3 (April 2003), pp. 151-163.
"Skeptical Reasoning in FC-Normal Logic Programs is $\Pi^1_1$-Complete", Fundamenta Informaticae 45, 3 (2001), pp. 237-252. | {"url":"http://www.kenyon.edu/directories/campus-directory/biography/bob-milnikel/","timestamp":"2014-04-17T03:51:23Z","content_type":null,"content_length":"60518","record_id":"<urn:uuid:d396c5ac-6e9e-4f61-90a3-eb9bbd3be6e5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lasker ring
From Encyclopedia of Mathematics
A commutative ring in which any ideal has a primary decomposition, that is, can be represented as the intersection of finitely many primary ideals. E. Lasker [1] proved that there is a primary decomposition in polynomial rings. E. Noether [2] established that any Noetherian ring is a Lasker ring.
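A concrete illustration (my own example, not part of the original entry): the ring of integers is Noetherian, hence a Lasker ring, and

```latex
% a primary decomposition of the ideal (12) in \mathbb{Z}:
(12) = (4) \cap (3)
% where (4) = (2)^2 is (2)-primary and (3) is prime, hence primary.
```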
[1] E. Lasker, "Zur Theorie der Moduln und Ideale" Math. Ann. , 60 (1905) pp. 19–116
[2] E. Noether, "Idealtheorie in Ringbereiche" Math. Ann. , 83 (1921) pp. 24–66
[3] N. Bourbaki, "Elements of mathematics. Commutative algebra" , Addison-Wesley (1972) (Translated from French)
How to Cite This Entry:
Lasker ring. V.I. Danilov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Lasker_ring&oldid=15831
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Lasker_ring","timestamp":"2014-04-16T10:29:29Z","content_type":null,"content_length":"16213","record_id":"<urn:uuid:b55c8c00-e413-4d34-9dbe-8c6c7fa62e12>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relation between momentum and mass of quarks
Hi Hluf!
We speak of the on-shell and off-shell mass of quarks. 1) What is the difference between the on-shell and off-shell mass of quarks?
a quark always has on-shell mass (usually just called "mass")
a quark never has off-shell mass
off-shell mass is a mathematical trick which helps in the calculations for Feynman diagrams
2) In the lab or center-of-mass frame, for leptons p^2 = -m^2. Can we apply this equation to quarks?
(p is the four-momentum)
yes this applies to (and in any frame): leptons, hadrons and photons
you can regard it as the definition
of m (the mass) | {"url":"http://www.physicsforums.com/showthread.php?s=17c79089032eebba3da4ddb9816bf0a0&p=4618201","timestamp":"2014-04-23T12:05:15Z","content_type":null,"content_length":"45695","record_id":"<urn:uuid:f8a3cfb6-af2b-4dee-8bf0-207fa8c96c9a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
San Rafael, CA Algebra 2 Tutor
Find a San Rafael, CA Algebra 2 Tutor
...Civil Engineering, Carnegie-Mellon University M.S., Ph.D. Environmental Engineering Science, California Institute of Technology (Caltech) Dr. G.'s qualifications include a Ph.D. in engineering
from CalTech (including a minor in numerical methods/applied math) and over 25 years experience as a practicing environmental engineer/scientist.
13 Subjects: including algebra 2, calculus, statistics, physics
...I developed The Math Cheat Sheet for Apple devices to help students with common equations and formulas needed in algebra, geometry, trig, and calculus. I’ve also completed contract work as a
solution author for math textbooks. In addition, I was a leading tutor at the University of Arizona both privately and through the Math and Science Tutoring Resource center.
4 Subjects: including algebra 2, calculus, algebra 1, precalculus
...But then I take them back to the beginning, find out what they missed learning, and correct that. Math is like building a brick wall, each layer relies on a solid foundation. If you didn't
learn fractions or the multiplication table, you're never going to get through Algebra.
10 Subjects: including algebra 2, calculus, precalculus, algebra 1
...I am a certified EMT via the San Francisco Paramedics Association thus, I am CPR, First Aid, and AED certified. This course taught me how to remain calm in any emergency situation as well as
provide proper care to an injured person. In addition, this includes proper administration of common prescription medications such as inhalers, vasodialators, vasoconstrictors, etc.
30 Subjects: including algebra 2, English, Spanish, reading
...In high school, I took an assortment of AP classes and got straight As in all of my classes. I have experience teaching the SAT. I received a 2300 on the SAT in one sitting and my superscore
was a 2370.
18 Subjects: including algebra 2, reading, algebra 1, SAT math | {"url":"http://www.purplemath.com/San_Rafael_CA_Algebra_2_tutors.php","timestamp":"2014-04-21T14:42:06Z","content_type":null,"content_length":"24270","record_id":"<urn:uuid:24f12be9-3c4e-448a-81f2-763ee55fbdc3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
This entry was posted and tagged Blownapart Studios. Bookmark the permalink.
2 Responses to Battlestations!
1. Pingback: Twitter Trackbacks for Battlestations! « The Math Less Traveled [mathlesstraveled.com] on Topsy.com
2. Hey, not sure if you like math or not…. :D
Anyways, made a couple new fractal formulas over the past few months, one of which I used to create and animation called “battleship extended” which I originally named battlestation (before
publishing it on youtube..) which you can check out on my youtube page (the website link). If you want to get a hold of me (email is fake), I lurk on fractalforums.com, and have recently posted 2
new formulas in the images gallery and am about to upload a brand new animation of one of the 2 new formulas when it is done calculating (frame 197/211 as of now: short 15 fps animation,
hopefully it looks neat). | {"url":"http://mathlesstraveled.com/2010/02/01/battlestations/","timestamp":"2014-04-20T18:28:26Z","content_type":null,"content_length":"63345","record_id":"<urn:uuid:79f09b15-c49f-45f6-a1f5-d5ae73162b4f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove Leibniz Formula: product rule
\[(fg)^{(n)}=\sum_{k=0}^{n}\left(\begin{matrix}n \\ k\end{matrix}\right)f^{(k)}g^{(n-k)}\] where \[f^{(n)}=\frac{ d^{n} f }{ dx^{n} }\]
This link can also help.
I am stuck on \[\left(\begin{matrix}m \\ k-1\end{matrix}\right)+\left(\begin{matrix}m \\ k\end{matrix}\right)=?\]
\[\left(\begin{matrix}m \\ k-1\end{matrix}\right)+\left(\begin{matrix}m \\ k\end{matrix}\right)=\frac{ m! }{ (k-1)!(m-k+1)! }+\frac{ m! }{ k!(m-k)! }\]\[=\frac{ m! }{ (k-1)!(m-k)! }\left[ \frac{ 1 }{ m-k+1 }+\frac{ 1 }{ k } \right]\]\[=\frac{ m! }{ (k-1)!(m-k)! }\left[ \frac{ k+m-k+1 }{ k(m-k+1) } \right]\]\[=\frac{ m! }{ (k-1)!(m-k)! }\cdot\frac{ m+1 }{ k(m-k+1) }\]\[=\frac{ (m+1)! }{ k!(m-k+1)! }=\left(\begin{matrix}m+1 \\ k\end{matrix}\right)\]
@Jonask hope this helped
thanks guys, appreciated
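Both Pascal's rule and the Leibniz formula itself can be sanity-checked numerically; here is a small sketch of my own (not from the thread) using monomials, whose derivatives have a simple closed form:

```python
from math import comb

# Pascal's rule: C(m, k-1) + C(m, k) = C(m+1, k)
assert all(comb(m, k - 1) + comb(m, k) == comb(m + 1, k)
           for m in range(1, 10) for k in range(1, m + 1))

def dmono(p, n, x):
    """n-th derivative of x**p, evaluated at x (closed form)."""
    if n > p:
        return 0
    c = 1
    for i in range(n):
        c *= p - i
    return c * x ** (p - n)

# Leibniz: (fg)^(n) = sum_k C(n,k) f^(k) g^(n-k), with f = x^3, g = x^4
n, x = 5, 2.0
lhs = dmono(7, n, x)  # since f*g = x^7
rhs = sum(comb(n, k) * dmono(3, k, x) * dmono(4, n - k, x)
          for k in range(n + 1))
print(lhs == rhs)  # True
```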
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50ab58c3e4b064039cbd83aa","timestamp":"2014-04-16T22:35:18Z","content_type":null,"content_length":"47112","record_id":"<urn:uuid:4faafe1f-3ef5-4f73-bbbe-2f92d4ab5cf9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00215-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deus Ex Macchiato
We all know how to discount future cashflows – they teach you that in Econ 101. But what do you do if you aren’t certain of the rate at which to discount? Mark Buchanan, in a fascinating blog post
discussing work by Farmer and Geanakoplos (HT Naked Capitalism) answers the question. While you do what most people would expect (OK, most people with a quant background), namely take the
probability-weighted average over paths of the effective discount factor on each path, what you end up with is more surprising. According to Farmer and Geanakoplos, in this setting discount factors
follow a power law:
D(T) = (1 + aT)^-b
where a and b are constants.
The crucial observation is that this falls off much slower than the usual exp(-rT), and hence typically gives much more value to cashflows in the distant future. If one wanted to be hyperbolic about
this (yes, yes, pun intended), then one would say that the cure for short termism is simply to use the right discounting function.
And here comes Hurst, they think it’s all over! It is now! July 14, 2010 at 6:06 am
There has been some comment recently about a paper by Reginald Smith on the impact of high frequency trading (HFT) on market dynamics. I want to spend a little timing explaining what the paper says,
roughly, and why it matters.
We can clearly demonstrate that HFT is having an increasingly large impact on the microstructure of equity trading dynamics… the Hurst exponent H of traded value in short time scales (15 minutes
or less) is increasing over time from its previous Gaussian white noise values of 0.5. Second, this increase becomes most marked, especially in the NYSE stocks, following the implementation of
Reg NMS by the SEC which led to the boom in HFT. Finally, H > 0.5 traded value activity is clearly linked with small share trades which are the trades dominated by HFT traffic. In addition, this
small share trade activity has grown rapidly as a proportion of all trades.
So first, what is a Hurst exponent?
Roughly speaking, Hurst exponents measure autocorrelation or, even more loosely, predictability. If H is close to 0.5, the series is a random walk, or what we were told equity prices did in Finance
101. In particular, if H = 0.5, the idea of volatility makes sense, and we can quantify risk using volatility.
If H is bigger than 0.5, though, the series shows positive autocorrelation: roughly, it has very busy periods when volatility is high, and quieter low volatility periods. It switches regimes between
these with no warning. Thus we might try to calibrate a simple risk model but if we are unlucky we will calibrate it to a low vol period and then when the high vol hits, our risk estimates are wrong.
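A rough way to see the H = 0.5 baseline for yourself is the scaling of lagged differences, std(X[t+lag] - X[t]) ~ lag^H. The estimator below is a simplified sketch of mine, not the method used in the paper:

```python
import math
import random
import statistics

def hurst(series, max_lag=50):
    """Estimate H as the log-log regression slope of
    std(X[t+lag] - X[t]) against lag."""
    xs, ys = [], []
    for lag in range(2, max_lag):
        diffs = [series[i + lag] - series[i] for i in range(len(series) - lag)]
        xs.append(math.log(lag))
        ys.append(math.log(statistics.pstdev(diffs)))
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    num = sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys))
    den = sum((a - mean_x) ** 2 for a in xs)
    return num / den

random.seed(1)
walk, total = [], 0.0
for _ in range(10_000):
    total += random.gauss(0.0, 1.0)
    walk.append(total)

print(round(hurst(walk), 2))  # close to 0.5: uncorrelated increments
```

A series with H well above 0.5 would show a visibly steeper log-log slope, which is exactly the regime-switching behaviour described above.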
So, what the paper seems to have proved (and I have not checked all the details) is that HFT has changed the nature of stock price returns from being a random walk (H = 0.5) to having significant
positive autocorrelation. Increasingly we see quiet periods when not much happens followed by periods of intense volatility, and the change between these is unpredictable. Now notice the time period
cited, 15 minutes or less. What is happening, then, is that HFT appears to be creating islands of high volatility amid an ocean of more stable prices. Something sets off a price change, which creates
a flurry of HFT activity, exacerbating volatility; this then dies away over a period of minutes or hours.
Why does this matter to the ordinary investor? Simply that their trading might hit one of those flurries of activity, and they might well get a significantly worse price than average if it does.
Moreover of course simple risk models such as VAR will be less and less accurate risk gauges the higher the autocorrelation. I suspect on the typical VAR one day holding period this does not matter
much, but it might.
Finally, there is the issue that HFT might be increasing the risk of flash crashes. If autocorrelation is too high then the probability of very large deviations from the mean over short timescales
increases dramatically. I have no idea if this research supports the idea that we have got to that point yet. But I do think that someone should find out.
Quants, Lightbulbs and the Demise of the Financial System January 28, 2010 at 6:48 am
From Naked Capitalism reader Matthew G:
How many quants does it take to screw in a lightbulb?
Using ten racks of co-located blade servers, one quant can detect a janitorial inefficiency, step in between janitor and light fixture, and screw in 49,500 bulbs in less than a millisecond,
keeping five hundred lightbulbs of profit.
Two quants competing with each other can screw in 99,998 bulbs in a millisecond, with each quant retaining a profit of one lightbulb.
When ten quant firms try to screw in a light bulb, the bulb explodes, the light fixture gets ripped from the ceiling, the building falls down, the entire electrical grid of the city of Greenwich
shuts down, innocent civilians all over the world have their retirement accounts electrocuted, and the Federal Reserve has to give the counterparties of each quant firm five hundred million light
bulbs to maintain the stability of the system.
Update. FT alphaville saves me from having an entirely frivolous post by referrring to this article at Trader’s Magazine. They say:
Bryan Harkins, an executive with the Direct Edge ECN, noted the market is “saturated” with high-frequency shops. He doesn’t expect overall industry volume to increase substantially in the next
few years.
Volume, in the past three years, has doubled due to a large extent to the activities of high-frequency traders. Average daily volume is about 10 billion shares today. That compares to 5 billion
shares in early 2007.
“Someone leaves a high-frequency trading shop to start a new one,” Harkins said. “You do a meeting [with them] and they say ‘We’re going to do 100 million shares a day.’ You get all excited with
the next big account and then six months later they’re struggling to stay in business.” About half of Direct Edge’s volume comes from high-frequency trading firms, Harkins said.
[NYSE Euronext's Paul] Adcock noted the changes in volume at NYSE Arca’s top five high-frequency accounts mirror those of the VIX “almost perfectly.” And because most high-frequency strategies
are similar, he adds, only the “biggest and fastest will make those strategies work.”
(Emphasis mine.) The comments above refer to the good times, too. Imagine what would happen if one of those big guys liquidates, or if we have a very high volatility episode with extreme
decorrelations, as might happen for instance if there is a sovereign crisis. It won’t be pretty but we cannot say that we have not been warned.
I’ll take an alpha please Bob* December 1, 2009 at 2:50 pm
Robert Litterman is head of quantitative resources at Goldman Sachs Asset Management… And as he sees it, … quantitative hedge funds have to do a better job of making money for their clients. And
in Litterman’s considered opinion, they need to find new ways of making money. New and non-quantitative, apparently.
We’re putting together data that’s not machine-readable.
I see. Any other pearls of wisdom?
You have to adapt your process. What we’re going to have to do to be successful is to be more dynamic and more opportunistic.
Totally worth the price of admission to the Quant Invest 2009 conference (flight to Paris not included). Thank you, Bob.
Now that is quite amusing, but perhaps a little unfair. What is clear is that you can make money for extended periods of time by being long liquidity premiums and short volatility. Many hedge fund
‘strategies’ are just versions of this strategy: get exposure to illiquid assets, leverage up, and hope there is not a flight to quality before you have got paid your 2 and 20. If you can guarantee
your leverage through good times and bad (or are not leveraged at all and can lock investors in for long enough), this strategy is often successful even through a crisis. But if you have to sell into
the storm, things will go rather less well.
One thing that might be interesting, then, is to measure alpha relative to the probability of having to deleverage. That is, we ought to level the playing field between funds that generate high
alpha at the expense of running the risk of having to sell into a crisis and those funds which generate less excess return, but which never have to deleverage.
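A toy version of that levelling, purely my own illustration (the inputs — an annual alpha, an estimated probability of forced deleveraging, and the expected loss if it happens — are all invented numbers): penalise reported alpha by the expected cost of selling into the storm.

```python
def deleverage_adjusted_alpha(alpha: float,
                              p_delever: float,
                              loss_if_delever: float) -> float:
    """Expected excess return net of the expected cost of a forced
    deleveraging: alpha minus (probability * loss). All inputs annualised."""
    return alpha - p_delever * loss_if_delever

# A high-alpha fund that courts forced selling vs a duller fund that never has to sell.
racy = deleverage_adjusted_alpha(0.12, 0.15, 0.50)  # 12% alpha, 15% chance of a 50% loss
dull = deleverage_adjusted_alpha(0.05, 0.00, 0.00)  # 5% alpha, no deleverage risk
print(racy, dull)
```

On these (made-up) numbers the racy fund's adjusted alpha drops below the dull fund's, which is the point of the exercise: headline alpha alone ranks them the other way round.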
*OK, some of you might not remember Blockbuster. It was a classic, in the sense of classically, heroically awful.
Physics envy, History envy June 29, 2009 at 8:51 am
Physics is in some ways the geekiest science. It’s fundamental, it has hard maths in it, and it has had enormous success at explaining the phenomena it tries to study. What other subject can
successfully predict something to twelve decimal places?
As a result, some practitioners in other fields have physics envy. This is a notable problem for finance quants, many of whom didn’t make it as academic physicists (or did make it but didn’t like the
salaries). Indeed in retrospect one can make a case that one of the causes of the Credit Crunch was the collapse of the Soviet Union – the argument would go that the collapse freed up lots of highly
trained mathematicians and physicists, some of whom came to work for investment banks – no bulge bracket firm was without its Academy of Sciences prize winner; the geeks used the maths that they
knew, which was mostly stochastic calculus, to model things; these models were dangerous but not easy to falsify (because they were only really wrong in a crisis); so the industry used them and was
subsequently screwed. In one way at least communism brought capitalism down with it.
Anyway, the desire to build highly mathematical models has in practice led finance down a dangerous path. Perhaps the aspiration was good, but the implementation has been deeply flawed.
Let me instead propose a different aspiration. History envy. History is a lovely subject. There are lots of facts, but most historians ignore many of the relevant ones. They are interested in
motivations, in causes, in the evolution of ideas. They want to understand the why as well as the what. A good history text is carefully argued and insightful. It provokes discussion, and casts fresh
light on the present. It’s not clearly wrong, given the evidence, but it can never be said to be right, either.
How much better would finance be if it adopted these desiderata? Abandon the spurious and misleading quest for quantification. Just try to make an interesting argument about why things happen.
Rebuilding May 22, 2009 at 9:59 am
There is a lot of comment around at the moment about how broken finance is: here, for instance, is a piece by Pablo Triana. And certainly there are many, many issues that we have no idea how to deal
with in practice, including fat tails, autocorrelation, correlation smiles, and hidden systematic risks. These phenomena challenge option pricing models, CDO pricing, basket option pricing, ABS
pricing and all sorts of quantitative risk management models.
But, but, but. There are some things that work. The huge push in the 1990s on the behaviour of the yield curve has at least left us with a good idea how to manage single currency swaps books. Vanilla
puts and calls can mostly be hedged effectively. Credit derivatives – despite strident claims otherwise – have not caused the end of the world as we know it.
We need then to return to the things we do actually know, and to be very critical about what has worked well, what has worked acceptably, and what has turned out to be unhelpful. Saying the whole
edifice of mathematical finance is rotten is just as counterproductive as saying that none of it is. For once, finance theorists need to be disinterested and critical observers of reality rather than
cheerleaders (or hooligans). Let’s see what we need to tear down and what is still standing now that the tumult is dying down.
Copula counterfactual April 28, 2009 at 6:30 am
How different would the world be if David Li had written about a variety of different copulas rather than just the Gaussian one? (Do read the excellent Sam Jones piece that the link points to.)
More on models April 22, 2009 at 8:16 am
From Daniel Kahneman, via portfolio.com:
A group of Swiss soldiers set out on a long navigation exercise in the Alps. The weather was severe and they got lost. After several days, with their desperation mounting, one of the men
suddenly realized he had a map of the region.
They followed the map and managed to reach a town. When they returned to base and their commanding officer asked how they had made their way back, they replied, “We suddenly found a map.” The
officer looked at the map and said, “You found a map, all right, but it’s not of the Alps, it’s of the Pyrenees.”
Correlation is not causality April 3, 2009 at 8:51 am
From the social science statistics blog via Naked Capitalism, an amusing illustration of this truth:
Sociologists do models, kinda February 13, 2009 at 6:59 am
From Reflexive Modeling: The Social Calculus of the Arbitrageur by Daniel Beunza and David Stark:
Modeling entails fundamental assumptions about the probability distribution faced by the actor, but this knowledge is absent when the future cannot be safely extrapolated from the past…
By privileging certain scenarios over others, by selecting a few variables to the detriment of others, and in short, by framing the situation in a given way, models and artifacts shape the final
outcome of decision-making. This … is the fundamental way in which the economics discipline shapes the economy, for it is economists who create the models in the first place…
…models can lead to a different form of entanglement. In effect, models can lock their users into a certain perspective on the world, even past the point in which such perspective applies to the
case at hand. In other words, models disentangle their users from their personal relationship with the actor at the other side of the transaction, but only at the cognitive cost of entangling
them in a certain interpretation.
Despite the focus on relatively uninteresting models (merger arb), this is an interesting paper for anyone interested in how traders really use models.
Tarring Taleb January 21, 2009 at 12:44 pm
I have always been a little suspicious of Nassim Taleb. He seems to take too much pleasure in discussion of crises. And his first book — a very conventional account of hedging — isn’t actually very
useful for running portfolios of options. Now a post on Models and Agents (an excellent blog I have only found recently) gives a more focussed critique:
the current crisis is not a black swan. Alas, the world’s economic history has offered a slew of (very consequential) credit and banking crises … So not only aren’t credit crises highly remote;
they can be a no-brainer, particularly if they involve extending huge loans to people with no income, no jobs and no assets.
Taleb also recommends that we buy insurance against good black swans—that is, investments with a tremendous (though still highly remote) upside but limited downside. For example, you could buy
insurance against the (unlikely?) disappearance of Botox due to the discovery of the nectar of eternal youth. And make tons of money if it happens.
And that surely is the point. Yes, the unexpected happens with considerable frequency. But knowing which black swan is more likely than the market is charging for is the hard part. Buying protection
in the wings on everything is far too expensive to be a good trading strategy. If all Taleb’s observations amount to is the claim that being long gamma can sometimes be profitable, then they are
hardly prophetic. What would be much more useful would be his analysis of when, exactly, black swan insurance is worth buying.
No arbitrage requires arbitrageurs November 20, 2008 at 6:14 am
No arbitrage conditions are not natural laws. You can only rely on them if there are enough arbitrageurs around to keep the markets in line. At the moment, that isn’t true in many settings. John
Dizard points out an example from the Tips market:
seven-year Tips bonds are asset swapping at 130 basis points over Libor
As Dizard says, this is partly because the Tips are illiquid and hard to finance (and thus to leverage), and partly because there is not enough risk capital around:
The dealers can’t afford to make efficient markets, given their decapitalisation, downsizing, and outright disappearance. That means anomalies sit there for weeks and months, where they would
have disappeared in minutes or seconds. The arbs, well, they thought they had risk-free books with perfectly offsetting positions. These turned out to be long-term, illiquid investments that
first bled out negative carry, and then were sold off by merciless prime brokers.
What are the dynamics of risky bond prices? November 4, 2008 at 7:11 am
The Basel Committee’s two papers on incremental risk in the trading book (incremental to that captured by VAR, that is) – here and here – led me to muse on what the real dynamics of risky bond
prices are.
Firstly clearly there is an interest rate risk component. Let’s ignore that as it is the best understood.
Second there is jump to default risk. The phrase itself is slightly misleading in that bond prices often fall a long way in the period before default, and indeed recoveries are sometimes higher than
pre-default bond prices would suggest. Skip to default might be a better term. Still, the idea that there is a jump process which can cause non-continuous changes in risky bond prices is reasonable.
Then there are ‘everyday’ movements in credit spreads. Now, here’s the six hundred and forty billion dollar question (OK, OK, not the size of the corporate bond market I know) – if you take out the
jumps, is what you are left with even vaguely normal? My guess is that it isn’t, and that autocorrelation is significant even after jumps have been taken out. The hard part is that you need a lot of
credit spread data to look at this kind of thing, and it isn’t easy to come by. CDS data won’t do in this instance simply because single name CDS have only been liquid for ten years or so, and you’d
at least want data going back well before the ’98 LTCM/Russian crisis. I’ll get around to this sometime soon…
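When the data does turn up, the check itself is straightforward. A sketch of the kind of test I have in mind (on synthetic data, since long spread histories are exactly the scarce ingredient): strip moves beyond a jump threshold, then look at the lag-1 autocorrelation of what is left.

```python
import random

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

def strip_jumps(changes, threshold):
    """Drop any spread change larger than the threshold (a crude jump filter)."""
    return [c for c in changes if abs(c) <= threshold]

# Synthetic daily spread changes: autocorrelated 'everyday' moves plus rare jumps.
random.seed(7)
changes, prev = [], 0.0
for _ in range(5000):
    prev = 0.4 * prev + random.gauss(0.0, 1.0)   # AR(1) everyday component
    jump = random.gauss(0.0, 25.0) if random.random() < 0.01 else 0.0
    changes.append(prev + jump)

print(lag1_autocorr(strip_jumps(changes, 5.0)))  # well above zero: autocorrelation survives de-jumping
```

Here the non-jump component is built with autocorrelation 0.4, and the filtered series recovers roughly that figure — which is the pattern my guess above predicts for real, de-jumped credit spreads.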
Spread dynamics are the flipside to my earlier post on what CDS spreads mean: that was about what causes the spread to move; this is about how you can model those movements.
Swaps spreads and other lunch toppings October 26, 2008 at 10:34 am
Why, sometimes I’ve believed as many as six impossible things before breakfast said Alice. This quotation came to mind in the discussion of the 30y dollar swap spread in the FT recently:
“Negative swap spreads have been considered by many to be a mathematical impossibility, just like negative probabilities or negative interest rates,” said Fidelio Tata, head of interest rate
derivatives strategy at RBS Greenwich Capital Markets.
Oh dear me. A mathematical impossibility is 2 and 2 adding to 5, or the sudden discovery of a third square root of 4. A physical impossibility is something that we think is impossible according to
our current understanding of science: accelerating from rest to go faster than the speed of light, say.
Negative swap spreads are neither of those. They simply represent an arbitrage. An arbitrage is when you can make free money without taking risk. Ignoring for a moment the risk de nos jours –
counterparty risk – swap spreads allow one to lock in a positive P/L if one can fund at Libor flat. Free lunches do not often exist in finance, but they do happen in particular when there are no
arbitrageurs left standing. No arbitrage relies not on the theoretical possibility of a free lunch, but on enough people actually wanting to dine for nothing that prices move to stop the feast. At
the moment there is such a shortage of risk capital that one can indeed find free food. So `impossible’ things are happening not just before breakfast but all through the day. Bon appetit.
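The free lunch can be made concrete with illustrative numbers (mine, not market quotes): buy the long bond, fund it at Libor flat, and pay fixed on a swap whose rate sits below the bond yield. The floating legs cancel, and the negative spread is what you lock in.

```python
def locked_in_carry(bond_yield: float, swap_rate: float) -> float:
    """Annual carry per unit notional from: long bond (receive yield),
    funding at Libor flat (pay Libor), pay-fixed swap (pay swap rate,
    receive Libor). The two Libor legs cancel, leaving yield minus swap rate."""
    receive = bond_yield   # coupon income from the bond
    pay = swap_rate        # fixed leg of the swap; Libor in cancels Libor out
    return receive - pay

# A 30y swap spread of -30bp: the swap rate sits below the Treasury yield.
print(round(locked_in_carry(bond_yield=0.0450, swap_rate=0.0420), 6))  # 0.003, i.e. +30bp a year
```

Counterparty risk, haircuts, and the (big) assumption of funding at Libor flat are all ignored here — which is, of course, why the lunch was sitting uneaten in late 2008.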
What is a derivatives pricing model anyway? September 4, 2008 at 12:29 pm
I had a conversation about this last night and thought it was worth writing some of it down and extending it a little. So…
Let’s begin with the market. For our purposes there are some known current market variables which we assume are correct. This could be a stock price, interest rates, a dividend yield — and perhaps
one or more implied volatilities.
Secondly we have a model. The model is often, but not always, standard, i.e. shared between most market participants. Let’s start with standard models. Here the model is first calibrated to the known
market variables.
At this point we are ready to use the model. There is a safe form of use and a less safe one. In the safe one we use the model as an interpolator. For instance we know the coupons of the current 2,
3, 5, 7 and 10 year par swaps (plus the interest rate futures prices and deposits) and we want to find the fair value coupon for a 4.3 year swap. Or we know the prices of 1000, 1050 and 1100 strike
index options and we want to price a 1040 strike OTC of the same maturity.
The less safe use is when we use the model as an extrapolator. We want a 12 year swap rate, for instance, or the price of a 1200 strike option. That’s not too bad provided we don’t go too far beyond
the available market data, but it is definitely a leap.
(Both of these, by the way, count as FAS 157 level 2.)
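The interpolator use can be made concrete. A deliberately crude sketch (straight linear interpolation on par rates — a real desk would bootstrap a proper curve, and the quotes below are invented): given the 2, 3, 5, 7 and 10 year par swap rates, read off a 4.3 year rate, and refuse to extrapolate.

```python
from bisect import bisect_left

def interp_rate(tenor, tenors, rates):
    """Linearly interpolate a par rate at `tenor` from quoted (tenor, rate)
    pairs. Only valid inside the quoted range -- beyond it you are
    extrapolating, which is the less safe use discussed above."""
    if not tenors[0] <= tenor <= tenors[-1]:
        raise ValueError("outside the quoted tenors: that would be extrapolation")
    i = bisect_left(tenors, tenor)
    if tenors[i] == tenor:
        return rates[i]
    t0, t1 = tenors[i - 1], tenors[i]
    r0, r1 = rates[i - 1], rates[i]
    return r0 + (r1 - r0) * (tenor - t0) / (t1 - t0)

tenors = [2, 3, 5, 7, 10]                     # years
rates = [0.031, 0.033, 0.036, 0.038, 0.040]  # invented par swap quotes
print(interp_rate(4.3, tenors, rates))       # sits between the 3y and 5y rates
```

The 12 year rate the text mentions would raise here by design; in practice you would extend the curve with a model, and that is precisely where the leap of faith begins.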
Note that there are two ways that we realise P/L in derivatives. Either we trade them or we hedge them. If we are in the flow business then trading is important. We need to use the same model as
everyone else simply because we are in the oranges business and we need to know what everyone else thinks an orange is worth. We take a spread just like traders of other assets, buying for a dollar
and selling for a dollar ten, or whatever. The book might well be hedged while we are waiting to trade, but basically we are in the moving business. Swaps books, index options, short term single
stock, FX, interest rate and commodity options, and much plain vanilla options trading falls into this camp.
In the hedging business in contrast we trade things that we do not expect to have flow in. Most exotic option businesses are an example here, as are many long dated OTC options. There is no active
market here so instead we have to hedge the product to maturity. Thus here the model hedge ratios are just as important as the model prices. Valuation should reflect the P/L we can capture by hedging
using the model greeks over the life of the trade. Thus standard models are more questionable in the hedging business than in the moving business since it is not just their prices — which are correct
by construction — but also their greeks that matter.
Things start to get really hairy when we move away from standard models. Now we are almost certainly dealing with products where there is no active market (some kinds of FX exotics are a
counterexample) and we do not even know that the model prices are correct. There is genuine disagreement across the market as to what some of these things are worth. Different models also produce
radically different hedge ratios. How can we judge the correctness of such a model? The answer is evident from the previous paragraph: it is correct if the valuation predicted can genuinely be
captured by hedging using the model hedge ratios. [Note that this does not necessarily give a unique 'correct' model.]
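That test — does hedging with the model's greeks actually capture the model's valuation — can be run in miniature. A sketch under strong assumptions (a Black–Scholes world where the hedger's model matches the true dynamics, zero rates, no costs, daily rebalancing; my illustration, not a production test): the average cost of delta-hedging a call should come out close to the model price.

```python
import math, random
from statistics import NormalDist

N = NormalDist()

def bs_call(s, k, t, vol):
    """Black-Scholes call price and delta (zero rates for brevity)."""
    d1 = (math.log(s / k) + 0.5 * vol * vol * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return s * N.cdf(d1) - k * N.cdf(d2), N.cdf(d1)

def hedge_cost(s0=100.0, k=100.0, t=1.0, vol=0.2, steps=100, rng=None):
    """Cost of replicating a call by trading the model delta at each step."""
    if rng is None:
        rng = random.Random(0)
    dt = t / steps
    s = s0
    _, delta = bs_call(s, k, t, vol)
    cash = -delta * s                       # buy the initial delta
    for i in range(1, steps):
        s *= math.exp(-0.5 * vol * vol * dt + vol * math.sqrt(dt) * rng.gauss(0, 1))
        _, new_delta = bs_call(s, k, t - i * dt, vol)
        cash -= (new_delta - delta) * s     # rebalance to the new delta
        delta = new_delta
    s *= math.exp(-0.5 * vol * vol * dt + vol * math.sqrt(dt) * rng.gauss(0, 1))
    cash += delta * s - max(s - k, 0.0)     # unwind the hedge, pay the option off
    return -cash                            # total replication cost

rng = random.Random(1)
price, _ = bs_call(100.0, 100.0, 1.0, 0.2)
avg = sum(hedge_cost(rng=rng) for _ in range(2000)) / 2000
print(price, avg)                           # the two should nearly agree
```

When model and reality agree, the hedge captures the valuation to within discretisation noise. Run the same harness with the wrong volatility, or the wrong dynamics, and the gap between price and realised hedge cost is exactly the model error the paragraph above is talking about.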
In summary then: for flow businesses we need interpolators between known prices and, to a lesser extent, extrapolators. For storage businesses we need models which produce good hedge ratios.
Where are the ‘risk free’ curves in dollars now? August 25, 2008 at 7:43 am
It is a serious question: the Treasury curve is being moved by concerns about the cost of the GSE bailout, with some commentators saying that the US could lose its AAA.
And the Libor curve, according to a money manager quoted by Bloomberg,
aren’t reflective of the entire banking system but of three or four major banks that continue to have pressure on liquidity
So where is the ‘risk free’ curve in dollars exactly?
Undercover equilibrium: Holiday reading 2 August 11, 2008 at 4:17 pm
The other popular economics book I read on holiday was Tim Harford’s Undercover Economist. It’s an interesting if quick read, and I can entirely see how it stimulated admissions to undergraduate
economics programmes. It strikes me, though, that Harford’s examples work best when the notion of price is unproblematic. He is presenting classical economics, so he assumes that prices are known and
that all agents have a view as to the correct price for a good or service. Things get a lot more interesting once prices are not observable or when agents don’t know what the ‘right’ price is. As,
umm, in the debt markets at the moment. The Big Picture has a related discussion concerning the failure of equilibrium economics.
I have several favorite examples of where markets simply get it wrong. When I spoke with the reporter on this, I used the credit crunch as exhibit A. It began in August 2007 (though some had been
warning about it long before that). Despite all of the obvious problems that were forthcoming, after a minor wobble, stock markets raced ahead. By October 2007, both the Dow Industrials and the
S&P 500 had set all time highs. So much for that discounting mechanism.
We’ve seen that sort of extreme mispricing on a fairly regular basis. In March 2000, the market was essentially pricing stocks as if earnings didn’t matter, growth could continue far above
historical levels indefinitely, and value was irrelevant.
My own view is that finance is not an equilibrium discipline, mostly, so while classical economics might work well in explaining the price of coffee – one of Harford’s examples – it does rather less
well in asset allocation or explaining the return distribution of financial assets. Rather, new news arrives faster than the market can restore equilibrium after the last perturbation, meaning that
most of the time equilibrium is not a useful concept.
Soros and Equilibrium: Holiday Reading 1 at 4:02 pm
While I was away, perhaps slightly masochistically*, I read the new Soros book, The New Paradigm for Financial Markets: The Credit Crisis of 2008 and What it Means. It is not a particularly good
summary of what happened, nor a detailed analysis of why it happened, but it does make an interesting point. Soros claims, I think very plausibly, that finance is reflexive, that is that the very
study of it changes the object being studied. I have written about this before, but it is interesting to see Soros making much of the role of reflexivity in the formation of asset price bubbles.
Of course, this feature of finance renders the received wisdom of classical economics rather suspect. In particular, models in finance are not time-stable in the same way that a good piece of science
is, simply because the way market practitioners behave changes. The S&P return distribution with over half of all trades done by machine (2008) is unlikely to be the same as that when most of the market
went via floor traders (1988).
* ‘We read popular finance books so you don’t have to dot com’ has not, funnily enough, been registered as a domain name…
Renaissance man December 20, 2007 at 7:47 am
Alea reports a talk that James Simons, founder of Renaissance Technologies, gave at NYU recently. Simons is a highly successful quant investor so his remarks are interesting. The part of the Alea
article that really piqued my interest was:
[...] perhaps the most interesting observation came in response to a question posed by the moderator, Nobel Prize-winner Robert Engle: “Why don’t you publish your research, the theory behind your
trading methods? If not while you are active in the markets, perhaps later on.”
Simons’ reply – there is nothing to publish. Quantitative investment is not physics. The markets have no fundamental, set-in-stone truths, no immutable laws. Financial “truth” changes constantly,
so that a new paper would be needed almost every week.
The implication is that there is no eternal theorem of finance that could serve as an infallible guide through all the ages. Indeed, there can be no Einstein or Newton of finance. Even the math
genius raking in $1 billion and consistently generating 30%-plus annual returns wouldn’t qualify. The terrain is just too lawless.
Simons’ view seems to me to be obviously true, although I don’t quite agree with the Alea spin. It isn’t that there is no law, it is that the law changes as the behaviour of market participants
changes. Yesterday’s arb is today’s theorem is tomorrow’s unrealistic simplification. As I said over a year ago, mostly the market trades based on the current orthodoxy. But big news changes that
orthodoxy – as is happening at the moment in the liquidity markets, and so to make a lot of money you need to be willing to keep changing your theory of asset prices.
This neatly brings me to a related topic, the non-equilibrium nature of financial markets. In retrospect, Walras’ idea of an auctioneer groping towards equilibrium (word of the week – tâtonnement) is
really unhelpful because it suggests that there is enough time for this process to be completed and equilibrium reached before the next piece of news hits the market. I don’t think this is true.
Rather I conjecture that the process is much more like a game of tetherball, with each new news item changing people’s opinions and hence moving the market long before equilibrium is reached from the
previous piece. The ball almost never hangs by the pole, so any theory which analyses where it will come to rest isn’t much use in determining who is going to win the game.
The last piece of the puzzle is the primary role of transactions. There are no prices without transactions to establish them – lots of transactions. So it is only opinions about asset prices which
lead to trading that matter. You can be right for a long period about the fundamentals, but if your assumption about how fundamentals lead to trading is wrong then you will lose money. For example I
called the weakness of Japan completely correctly through the 2nd half of the 1990s and first half of 2000s, but I was wrong for extended periods on dollar/yen because I hadn’t accounted for the
actions of the BoJ and other market participants’ beliefs about the BoJ. To make a lot of money you need to predict what most other market participants’ trading-related beliefs will be and get your
position on before they do. Predicting fundamentals is only useful if they will influence future trading: on the flip side, predicting wrong beliefs is just as good as predicting right ones if they
pertain for long enough for you to make money.
Level 3 and default correlation November 12, 2007 at 8:46 am
Suppose you take a collection of assets of known price and put them into an SPV. The SPV issues tranched debt. Obviously the sum of the values of the tranches equals the sum of the values of the
original assets (unless there is some external credit enhancement for the SPV which adds value, or some other external influence).
Clearly too in a conventional tranching structure the senior tranche bears the least default risk and the junior tranche the most. But what is the fair value of each silo individually?
This is the problem many banks are facing at the moment. They have either retained tranches in their own CDOs or purchased tranches in other people’s, and these securities have not traded for some
time. Therefore level 1 (mark to an observable market price) valuation is impossible. The assets have to be valued, though, so they have to do something, albeit in level 3.
The basic modeling problem is to assign value between tranches. This assignment depends on something that is popularly known as default correlation, but should properly be called default comovement:
if the occurrence of one default in the CDO assets makes another much more likely, then the senior tranche is less valuable. If one default is more or less unrelated to another, then the senior is
safer and hence more valuable.
Default comovement is idiosyncratic. There is no reason to believe it will be the same between one pool of mortgages and another, let alone between one diverse pool of ABS and another. Some limited
information on it in particular cases can be inferred from the prices of the few liquid securities – the iTraxx tranches and the ABX, for instance – but there is no compelling reason to believe this
carries over to other asset pools. This means that more or less any CDO tranche at the moment is a level 3 asset and any valuation necessarily has a measure of model risk and/or model parameter risk.
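The dependence of tranche values on comovement is easy to demonstrate with a toy Monte Carlo (a one-factor Gaussian copula, used here only to illustrate the mechanism, not to endorse the model; every parameter is invented): as the latent correlation rises, expected losses migrate from the junior tranche into the senior one.

```python
import random
from statistics import NormalDist

def senior_expected_loss(rho, n_names=100, pd=0.05, attach=0.15,
                         sims=10000, seed=42):
    """Expected loss (as a fraction of pool notional) above the attachment
    point, under a one-factor Gaussian copula with asset correlation rho.
    LGD is 100% for simplicity."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(pd)     # default if latent variable < this
    a, b = rho ** 0.5, (1.0 - rho) ** 0.5
    total = 0.0
    for _ in range(sims):
        m = rng.gauss(0, 1)                  # common (systemic) factor
        defaults = sum(1 for _ in range(n_names)
                       if a * m + b * rng.gauss(0, 1) < threshold)
        loss = defaults / n_names
        total += max(loss - attach, 0.0)     # only losses eating into the senior
    return total / sims

low = senior_expected_loss(0.0)
high = senior_expected_loss(0.6)
print(low, high)  # senior losses are far larger when comovement is high
```

With independent defaults the pool almost never burns through the junior protection, so the senior expected loss is essentially zero; at high correlation the systemic factor drives clustered defaults and the senior tranche bears real risk. The ranking of outcomes is robust; the numbers themselves are exactly the kind of model-parameter-dependent level 3 output the post describes.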
Presentation Summary : Title: Ch 1 Linear Functions, Equations and Inequalities Subject: A Graphical Approach to College Algebra, 3rd Edition Author: Hornsby/Lial/Rockswold
Source : http://web.cerritos.edu/fseres/SitePages/Math114%20Sp2013/Math114_power%20points/hca04_0103.ppt
Representing Linear Functions - Irving Independent School ... PPT
Presentation Summary : Representing Linear Functions Objectives: Represent situations using a linear function Determine Domain and Range values Use a variety of methods to represent linear ...
Source : http://www.irvingisd.net/~steveknight/8Representing%20Linear%20Functions.ppt
Presentation Summary : Title: Linear Functions Foldable Author: user Last modified by: npoole1 Created Date: 7/29/2005 9:44:23 PM Document presentation format: On-screen Show (4:3)
Source : http://classroom.kleinisd.net/users/2454/docs/linear_functions_foldable_-revised-_2012.ppt
If you find powerpoint presentation a copyright It is important to understand and respect the copyright rules of the author. Please do not download if you find presentation copyright. If you find a
presentation that is using one of your presentation without permission, contact us immidiately at | {"url":"http://www.xpowerpoint.com/ppt/linear-functions.html","timestamp":"2014-04-20T20:55:24Z","content_type":null,"content_length":"23276","record_id":"<urn:uuid:1f2da697-9175-49ee-b2dc-363c30106ec1>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] How much of math is logic?
joeshipman at aol.com
Wed Feb 28 20:48:26 EST 2007
>My suggestion is that, in order to avoid arguments about contentious
>topics that are tangential to your (first) main question, you rephrase
>your question as follows:
>For suitable "X", one can say that ZFC = logic + AxInf + X. Just how
>weak can "X" be made to be?
I do want to avoid tangential discussions, and I like this suggestion.
To clarify, it is also the case that one can say PA = logic + Y; and we
also want to know how weak Y can be; what I was really driving at is,
for the weakest such Y, how close logic + AxInf + Y comes to ZFC.
This deals with the deductive side of mathematics. For the semantic
side, where I don't care about proof calculi but just expressive power,
my question is "what mathematical X are not interpretable in
second-order logic?"
>I am still interested in a summary of what Russell did. I can't believe
>that I'm the only one on FOM who doesn't know exactly what degree of
>strength each of the assumptions in PM buys you.
I don't know this exactly either; I believe Russell could recover
elementary number theory without needing his reducibility axiom or
Choice, but I am not familiar with the details. Can anyone else help?
-- JS
Math Tools Browse
Browsing: All Content in Calculus for Additivity and linearity discussions Login to Subscribe / Save Results
Resource Name Topic (Course) Technology Type $? Rating
Definite Integrals Additivity and linearity (Calculus) Flash Tool
Proof using Riemann sums that the integral of a sum is the same as the sum of the integrals.
More: lessons, discussions, ratings, reviews,...
Definite Integrals Additivity and linearity (Calculus) Flash Tool
Proof using Riemann sums that you can bring a constant factor outside an integral.
More: lessons, discussions, ratings, reviews,...
Definite Integrals Additivity and linearity (Calculus) Java Applet Tool
Proof using Riemann sums that the integral of a sum is the same as the sum of the integrals.
More: lessons, discussions, ratings, reviews,...
Definite Integrals Additivity and linearity (Calculus) Java Applet Tool
Proof using Riemann sums that you can bring a constant factor outside an integral.
More: lessons, discussions, ratings, reviews,... | {"url":"http://mathforum.org/mathtools/cell.html?co=c&tp=15.11.6&sortBy=G!asc&offset=0&limit=25&resort=1","timestamp":"2014-04-21T07:56:55Z","content_type":null,"content_length":"17582","record_id":"<urn:uuid:ee835848-39e9-4ccf-982e-b0ed5101da17>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
sheaf of ideals
A (left, right, two-sided) ideal of a ring (or $k$-algebra) $R$ is a (left, right, bi) $R$-submodule of $R$ itself. One can consider similarly the notion of a presheaf of modules over a presheaf of rings; a presheaf of ideals is then a presheaf of submodules of a presheaf of rings.
In algebraic geometry, quasicoherent sheaves of modules are of special importance; thus also quasicoherent ideals as those ideals which are quasicoherent as sheaves of modules. An important example
of a quasicoherent sheaf of ideals is the defining sheaf of ideals of a closed subscheme.
Wikipedia uses the less preferable term ideal sheaf.
Given a finitary monad $T$ in the category of sets, the sheafification functor from the category of presheaves of sets to the category of sheaves of sets can be lifted strictly to a functor from the category of presheaves of $T$-modules (or, if you like, $T$-algebras) to the category of sheaves of $T$-modules. More generally, one can consider sheaves of finitary monads and the corresponding categories of sheaves of modules and of bimodules. The discussion of sheaves of ideals extends easily to this setting.
Revised on March 6, 2013 19:35:13 by
Zoran Škoda
Usually Choose Same
RPM on May 31, 2007
In yesterday’s post, I argued that, when flipping two unfair discs (or coins), there is a greater chance that both discs will land with the same side up than different sides up. As pointed out in the
comments, I was assuming that the probability of heads is equal for both discs:
Aren’t you assuming that p (and q=1-p) are the same for both discs? But isn’t it more reasonable to assume that, while no disc has a perfect p=0.5 probability of landing ‘heads’, the p’s of no
two discs are likely to be the same? (Assume, perhaps, that each disc’s p is drawn independently from some kind of larger distribution, maybe dependent on something like manufacturing or the way
the person throwing it holds it, before throwing).
What happens if we allow for different probabilities of heads on the two discs?
This is a graph of the probability of same for various probabilities of heads for each disc. The probabilities of heads on disc 1 is shown on the X-axis, and the probability of heads on disc 2 are
0.1, 0.3, 0.5, 0.7, and 0.9 (the five different lines). As you can see, the probability of same is greater than 0.5 when the two discs are biased in the same way (i.e., both have a probability of
heads greater than 0.5, or both have a probability of heads less than 0.5). On the other hand, the probability of different is greater than 0.5 when the discs are biased in opposite directions (one
with a probability of heads greater than 0.5, and one with a probability less than 0.5).
So, I say to all ultimate players out there, CHOOSE SAME if you think the discs have the same biases, and CHOOSE DIFFERENT if you think they are both biased, but in different directions.
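The algebra behind these curves is easy to check directly. Here is a minimal sketch in Python (mine, not part of the original post), where `p_same` computes P(same) = p·q + (1 − p)·(1 − q) for independent flips with heads-probabilities p and q:

```python
def p_same(p, q):
    """Probability both discs land the same side up, given independent
    flips with heads-probabilities p and q."""
    return p * q + (1 - p) * (1 - q)

# Both discs biased toward heads: "same" is favoured.
assert p_same(0.7, 0.9) > 0.5

# Biased in opposite directions: "different" is favoured.
assert p_same(0.3, 0.9) < 0.5

# A fair pair gives exactly 0.5, so neither call has an edge.
assert abs(p_same(0.5, 0.5) - 0.5) < 1e-12
```

The boundary case is instructive: whenever either disc is exactly fair, P(same) works out to 0.5 regardless of the other disc's bias.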
Homogeneity (psychometrics)
Homogeneity in statistics and data analysis pertains to properties of logically consistent data matrices. Within this framework, the coefficient of homogeneity indicates the degree to which data approximate the Guttman implicatory scales.
The etymology of the term scale can be traced to the Latin word scala, meaning ladder, steps, stairway. Homogeneous, internally consistent data matrices form step-like structures (cf. Fig. 3). The lack of homogeneity indicates the degree to which data structures depart from this ideal lattice form. The conceptual differences between homogeneity and internal consistency reliability are often poorly understood. These differences are best elucidated by contrasting the limiting cases of both indices.
Tautologous lattices
Fig. 1. Tautological lattice. Both the internal consistency reliability and homogeneity of this abstract data structure are equal to zero.
Normal (tautologous) structure for binary variables p, q, and r, shown in Fig. 1 for five variables, can be defined as Y = (p 1 q) & (q 1 r) where 1 signifies the logical operator of tautology.
Since Y is a unit vector, tautologous structures do not have to be rectified. The intercorrelations of the p, q, and r variables form an identity matrix shown below.
$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix}$
The coefficient of the internal consistency reliability and the coefficient of homogeneity for tautologous lattices are both equal to zero.
Equivalential lattices
Fig. 2. Equivalential lattice. Internal consistency reliability of this abstract data structure equals one, homogeneity is less than one.
Parallel (equivalential) structure for binary variables p, q, and r can be defined as y = (p = q) & (q = r).
When rectified (according to the values of the rectifying variable Y), one obtains a step scale such as the one shown in Fig. 2 for eight variables.
Intercorrelations for this type of abstract data are shown below:
$\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \\ \end{bmatrix}$
For the equivalent lattices, the coefficient of the internal consistency reliability equals one and the coefficient of homogeneity is less than one.
Implicational lattices
Fig. 3. Implicational lattice. Homogeneity of this abstract data structure equals one, Internal consistency reliability is less than one.
Hierarchical (implicational) structure for binary variables p, q, and r can be defined as y = (p -> q) & (q -> r).
When rectified (according to the values of the rectifying variable Y), one obtains an implicational (Guttman) scale such as the one shown in Fig. 3 for eight variables.
Intercorrelations for this type of abstract data are shown below
$\begin{bmatrix} 1.000 & 0.577 & 0.333 \\ 0.577 & 1.000 & 0.577 \\ 0.333 & 0.577 & 1.00 \\ \end{bmatrix}$
For the implicational lattices, the coefficient of the internal consistency reliability is less than one and the coefficient of homogeneity equals one.
Tetrad criterion
The above correlation matrix is compliant with Spearman's tetrad criterion, characteristic of hierarchical unidimensional structures. The tetrad criterion is based on computations of products and
differences of four (from Gr. prefix τετρα-, four) elements of correlation matrices. For the above matrix of correlations, the tetrad criterion can be tested as .577(.577)-1.000(.333). If the result
(as in this instance) equals zero or is close to zero, the tetrad criterion is met.
Spearman tetrads are in fact the 2 x 2 minors of a matrix. In factor analysis, the number of common factors is one less than the order of the lowest-order minor that will vanish. In the case of
implicatory scales, even minors with order equal to two (tetrads) will vanish, thus these data structures are unidimensional.
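The tetrad criterion is easy to verify mechanically. The sketch below (illustrative code, not from the article; the function name is mine) computes a 2 x 2 minor of the implicational correlation matrix given above:

```python
def tetrad(R, i, j, k, l):
    """Spearman tetrad: the 2 x 2 minor R[i][k]*R[j][l] - R[i][l]*R[j][k]."""
    return R[i][k] * R[j][l] - R[i][l] * R[j][k]

# Correlation matrix of the implicational (Guttman) lattice.
R = [[1.000, 0.577, 0.333],
     [0.577, 1.000, 0.577],
     [0.333, 0.577, 1.000]]

# The tetrad from the text: .577(.577) - 1.000(.333), close to zero.
t = tetrad(R, 0, 1, 1, 2)
assert abs(t) < 0.001
```

The small residue (about 7e-5) is only the rounding of .577 ≈ 1/√3 and .333 ≈ 1/3; with exact values the minor vanishes identically.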
Coefficient of homogeneity
The original coefficient of homogeneity, wrapped in complex algebraic considerations, was introduced in 1948 by Loevinger. Interest in homogeneity of data was revived during the closing decades of
the last century by Cliff (1977), and by Krus and Blackman (1988). On the basis of theoretical analysis outlined above, Krus and Blackman defined the coefficient of homogeneity as
$h_{xx} =\frac{MS_I - MS_{RES}}{MS^*_I - MS^*_{RES}}$
where MS stands for mean square, I for individuals and RES for residual terms of the analysis of variance. The * indicates that these indices were obtained from the data matrix where the variance of
the variables was maximized. This coefficient of homogeneity is numerically equivalent to both Loevinger's and Cliff's conceptualizations of the coefficient of homogeneity. As Hoyt's (1941)
formula for the internal consistency reliability is
$r_{xx} =\frac{MS_I - MS_{RES}}{MS_I}$
the Krus and Blackman formulation of the coefficient of homogeneity brings both the coefficient of internal consistency reliability and the coefficient of homogeneity within the framework of the
analysis of variance.
The image below shows the results of a data analysis, described elsewhere, of which a calculation of the coefficient of homogeneity was an integral part. The slices of the pie show the proportions of variance
obtained by the analysis of variance design for the obtained and ordered data sets. The original residual variance component (.088) was further partitioned into the component due to the lack of
ordinality (.060) and the residual component proper (.028).
These results were interpreted that about 6% of the total variance reflected the “illogical” relationships between the data elements.
• Hoyt, C. (1941). Test reliability estimated by analysis of variance. Psychometrika, 6, 153-160
• Cliff, N. (1977). A theory of consistency of ordering generalizable to tailored testing. Psychometrika, 42, 375-399.
• Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-333.
• Kuder, G. & Richardson, M. (1937). The theory of estimation of test reliability. Psychometrika, 2, 151-160.
• Krus, D.J., & Blackman, H.S. (1988). Test reliability and homogeneity from perspective of the ordinal test theory. Applied Measurement in Education, 1, 79-88.
• Loevinger, J. (1948). The technic of homogeneous tests compared with some aspects of scale analysis and factor analysis. Psychological Bulletin, 45, 507-529.
On ${\rm STD}_6[18,3]$'s and ${\rm STD}_7[21,3]$'s Admitting a Semiregular Automorphism Group of Order 9
We characterize symmetric transversal designs ${\rm STD}_{\lambda}[k,u]$'s which have a semiregular automorphism group $G$ on both points and blocks containing an elation group of order $u$ using the
group ring ${\bf Z}[G]$. Let $n_\lambda$ be the number of nonisomorphic ${\rm STD}_{\lambda}[3\lambda,3]$'s. It is known that $n_1=1,\ n_2=1,\ n_3=4, n_4=1$, and $n_5=0$. We classify ${\rm STD}_6
[18,3]$'s and ${\rm STD}_7[21,3]$'s which have a semiregular noncyclic automorphism group of order 9 on both points and blocks containing an elation of order 3 using this characterization. The former
case yields exactly twenty nonisomorphic ${\rm STD}_6[18,3]$'s and the latter case yields exactly three nonisomorphic ${\rm STD}_7[21,3]$'s. These yield $n_6\geq20$ and $n_7\geq 5$, because B. Brock
and A. Murray constructed two other ${\rm STD}_7[21,3]$'s in 1991. We used a computer for our research.
So far very little has been said about the actual process by which the required information is located. In the case of document retrieval the information is the subset of documents which are deemed
to be relevant to the query. In Chapter 4, occasional reference was made to search efficiency, and the appropriateness of a file structure for searching. The kind of search that is of interest, is
not the usual kind where the result of the search is clear cut, either yes, the item is present, or no, the item is absent. Good discussions of these may be found in Knuth[1] and Salton[2]. They are
of considerable importance when dictionaries need to be set-up or consulted during text processing. However, we are more interested in search strategies in which the documents retrieved may be more
or less relevant to the request.
All search strategies are based on comparison between the query and the stored documents. Sometimes this comparison is only achieved indirectly when the query is compared with clusters (or more
precisely with the profiles representing the clusters).
The distinctions made between different kinds of search strategies can sometimes be understood by looking at the query language, that is the language in which the information need is expressed. The
nature of the query language often dictates the nature of the search strategy. For example, a query language which allows search statements to be expressed in terms of logical combinations of
keywords normally dictates a Boolean search. This is a search which achieves its results by logical (rather than numerical) comparisons of the query with the documents. However, I shall not examine
query languages but instead capture the differences by talking about the search mechanisms.
Boolean search
A Boolean search strategy retrieves those documents which are 'true' for the query. This formulation only makes sense if the queries are expressed in terms of index terms (or keywords) and combined
by the usual logical connectives AND, OR, and NOT. For example, if the query Q = (K1 AND K2) OR (K3 AND (NOT K4)) then the Boolean search will retrieve all documents indexed by K1 and K2, as well as
all documents indexed by K3 which are not indexed by K4.
An obvious way to implement the Boolean search is through the inverted file. We store a list for each keyword in the vocabulary, and in each list put the addresses (or numbers) of the documents
containing that particular word. To satisfy a query we now perform the set operations, corresponding to the logical connectives, on the Ki-lists. For example, if
K1 -list : D1, D2, D3, D4
K2 -list : D1, D2
K3 -list : D1, D2, D3
K4 -list : D1
and Q = (K1 AND K2) OR (K3 AND (NOT K4))
then to satisfy the (K1 AND K2) part we intersect the K1 and K2 lists, to satisfy the (K3 AND (NOT K4)) part we subtract the K4 list from the K3 list. The OR is satisfied by now taking the union of
the two sets of documents obtained for the parts. The result is the set {D1, D2, D3} which satisfies the query and each document in it is 'true' for the query.
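With the inverted file held as sets, the whole evaluation reduces to set algebra. A minimal Python sketch of the example above (the dictionary layout is mine):

```python
# Inverted file: one posting list (here, a set) per keyword.
K = {
    "K1": {"D1", "D2", "D3", "D4"},
    "K2": {"D1", "D2"},
    "K3": {"D1", "D2", "D3"},
    "K4": {"D1"},
}

# Q = (K1 AND K2) OR (K3 AND (NOT K4)):
# AND -> intersection, OR -> union, AND NOT -> set difference.
result = (K["K1"] & K["K2"]) | (K["K3"] - K["K4"])
assert result == {"D1", "D2", "D3"}
```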
A slight modification of the full Boolean search is one which only allows AND logic but takes account of the actual number of terms the query has in common with a document. This number has become
known as the co-ordination level. The search strategy is often called simple matching. Because at any level we can have more than one document, the documents are said to be partially ranked by the
co-ordination levels.
For the same example as before with the query Q = K1 AND K2 AND K3 we obtain the following ranking:
Co-ordination level
3 D1, D2
2 D3
1 D4
In fact, simple matching may be viewed as using a primitive matching function. For each document D we calculate |D ∩ Q|, that is the size of the overlap between D and Q, each
represented as a set of keywords. This is the simple matching coefficient mentioned in Chapter 3.
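Under this view the simple matching search is a one-liner: score each document by the size of its overlap with the query, then rank. A sketch (the document descriptions are read off the K-lists given earlier):

```python
docs = {
    "D1": {"K1", "K2", "K3", "K4"},
    "D2": {"K1", "K2", "K3"},
    "D3": {"K1", "K3"},
    "D4": {"K1"},
}
Q = {"K1", "K2", "K3"}

# Co-ordination level of each document: |D ∩ Q|.
levels = {d: len(kw & Q) for d, kw in docs.items()}
assert levels == {"D1": 3, "D2": 3, "D3": 2, "D4": 1}

# Partial ranking by co-ordination level, highest first.
ranking = sorted(docs, key=levels.get, reverse=True)
```

Note that documents tied at the same level (here D1 and D2 at level 3) are only partially ordered, exactly as in the table above.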
Matching functions
Many of the more sophisticated search strategies are implemented by means of a matching function. This is a function similar to an association measure, but differing in that a matching function
measures the association between a query and a document or cluster profile, whereas an association measure is applied to objects of the same kind. Mathematically the two functions have the same
properties; they only differ in their interpretations.
There are many examples of matching functions in the literature. Perhaps the simplest is the one associated with the simple matching search strategy.
If M is the matching function, D the set of keywords representing the document, and Q the set representing the query, then:

M(Q, D) = 2|D ∩ Q| / (|D| + |Q|)

is another example of a matching function. It is of course the same as Dice's coefficient of Chapter 3.
A popular one used by the SMART project, which they call cosine correlation, assumes that the document and query are represented as numerical vectors in t-space, that is Q = (q1, q2, . . ., qt) and D = (d1, d2, . . ., dt) where qi and di are numerical weights associated with the keyword i. The cosine correlation is now simply

M(Q, D) = Σ qi di / [(Σ qi^2)^1/2 (Σ di^2)^1/2]

or, in the notation for a vector space with a Euclidean norm,

M(Q, D) = (Q, D) / (||Q|| ||D||) = cos θ

where θ is the angle between vectors Q and D.
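Both matching functions are simple to state in code. A hedged sketch (function names mine), with Dice's coefficient on keyword sets and the cosine correlation on weight vectors:

```python
import math

def dice(Q, D):
    """Dice's coefficient: 2|D ∩ Q| / (|D| + |Q|) for keyword sets."""
    return 2 * len(Q & D) / (len(Q) + len(D))

def cosine(Q, D):
    """Cosine correlation of two equal-length weight vectors."""
    num = sum(q * d for q, d in zip(Q, D))
    den = math.sqrt(sum(q * q for q in Q)) * math.sqrt(sum(d * d for d in D))
    return num / den

assert dice({"K1", "K2"}, {"K1", "K2", "K3"}) == 0.8
# Parallel vectors: angle 0, cosine 1; orthogonal vectors: cosine 0.
assert abs(cosine([1.0, 2.0], [2.0, 4.0]) - 1.0) < 1e-12
assert cosine([1.0, 0.0], [0.0, 3.0]) == 0.0
```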
Serial search
Although serial searches are acknowledged to be slow, they are frequently still used as parts of larger systems. They also provide a convenient demonstration of the use of matching functions.
Suppose there are N documents Di in the system; then the serial search proceeds by calculating the N values M(Q, Di), on the basis of which the set of documents to be retrieved is determined. There are two ways of doing this:
(1) the matching function is given a suitable threshold, retrieving the documents above the threshold and discarding the ones below. If T is the threshold, then the retrieved set B is the set {Di |M(
Q, Di) > T}.
(2) the documents are ranked in decreasing order of matching function value. A rank position R is chosen as cut-off and all documents ranked ahead of that position are retrieved, so that B = {Di | r(i) < R} where r(i) is the rank position assigned to Di. The hope in each case is that the relevant documents are contained in the retrieved set.
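Both retrieval rules can be sketched in a few lines; the following illustrative Python (names and toy data mine) scores every document and then applies either a threshold T or a rank cut-off R:

```python
def serial_search(query, documents, match, threshold=None, cutoff=None):
    """Score every document with the matching function, then retrieve
    either all documents scoring above `threshold`, or the `cutoff`
    best-ranked documents."""
    scored = sorted(documents, key=lambda name: match(query, documents[name]),
                    reverse=True)
    if threshold is not None:
        return {name for name in scored if match(query, documents[name]) > threshold}
    return set(scored[:cutoff])

docs = {"D1": {"a", "b", "c"}, "D2": {"a", "b"}, "D3": {"x"}}
overlap = lambda q, d: len(q & d)

assert serial_search({"a", "b"}, docs, overlap, threshold=1) == {"D1", "D2"}
assert serial_search({"a", "b"}, docs, overlap, cutoff=2) == {"D1", "D2"}
```

Either way the N matching-function evaluations are unavoidable, which is why the serial search scales poorly with collection size.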
The main difficulty with this kind of search strategy is the specification of the threshold or cut-off. It will always be arbitrary since there is no way of telling in advance what value for each
query will produce the best retrieval.
Cluster representatives
Before we can sensibly talk about search strategies applied to clustered document collections, we need to say a little about the methods used to represent clusters. Whereas in a serial search we need
to be able to match queries with each document in the file, in a search of a clustered file we need to be able to match queries with clusters. For this purpose clusters are represented by some kind
of profile (a much overworked word), which here will be called a cluster representative. It attempts to summarise and characterise the cluster of documents.
A cluster representative should be such that an incoming query will be diagnosed into the cluster containing the documents relevant to the query. In other words we expect the cluster representative
to discriminate the relevant from the non-relevant documents when matched against any query. This is a tall order, and unfortunately there is no theory enabling one to select the right kind of
cluster representative. One can only proceed experimentally. There are a number of 'reasonable' ways of characterising clusters; it then remains a matter for experimental test to decide which of
these is the most effective.
Let me first give an example of a very primitive cluster representative. If we assume that the clusters are derived from a cluster method based on a dissimilarity measure, then we can represent each
cluster at some level of dissimilarity by a graph (see Figure 5.2). Here A and B are two clusters. The nodes represent documents and the line between any two nodes indicates
that their corresponding documents are less dissimilar than some specified level of dissimilarity. Now, one way of representing a cluster is to select a typical member from the cluster. A simple way
of doing this is to find that document which is linked to the maximum number of other documents in the cluster. A suitable name for this kind of cluster representative is the maximally linked
document. In the clusters A and B illustrated, there are pointers to the candidates. As one would expect in some cases the representative is not unique. For example, in cluster B we have two
candidates. To deal with this, one either makes an arbitrary choice or one maintains a list of cluster representatives for that cluster. The motivation leading to this particular choice of cluster
representative is given in some detail in van Rijsbergen[3] but need not concern us here.
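A minimal sketch of the maximally linked document, assuming the cluster is given as the graph of "sufficiently similar" pairs described above (the data here is hypothetical):

```python
# Pick the document linked to the maximum number of others in the cluster.
# edges: pairs (i, j) whose dissimilarity is below the chosen level.

def maximally_linked(cluster, edges):
    degree = {d: 0 for d in cluster}
    for i, j in edges:
        if i in degree and j in degree:
            degree[i] += 1
            degree[j] += 1
    best = max(degree.values())
    # The representative need not be unique, so return all candidates.
    return [d for d in cluster if degree[d] == best]

cluster = {0, 1, 2, 3}
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
print(maximally_linked(cluster, edges))  # [0]: document 0 has three links
```

Returning all candidates corresponds to keeping a list of representatives when the maximum is not unique.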
Let us now look at other ways of representing clusters. We seek a method of representation which in some way 'averages' the descriptions of the members of the clusters. The method that immediately
springs to mind is one in which one calculates the centroid (or centre of gravity) of the cluster. If {D1, D2, . . ., Dn} are the documents in the cluster and each Di is represented by a numerical vector (d1, d2, . . ., dt) then the centroid C of the cluster is given by

C = (1/n) Σi Di/||Di||

where ||Di|| is usually the Euclidean norm, i.e. ||Di|| = (d1^2 + d2^2 + . . . + dt^2)^(1/2).
More often than not the documents are not represented by numerical vectors but by binary vectors (or equivalently, sets of keywords). In that case we can still use a centroid type of cluster
representative but the normalisation is replaced with a process which thresholds the components of the sum Σ Di. To be more precise, let Di now be a binary vector, such that a 1 in the jth
position indicates the presence of the jth keyword in the document and a 0 indicates the contrary. The cluster representative is now derived from the sum vector
(remember n is the number of documents in the cluster) by the following procedure. Let C = (c1, c2, . . . ct) be the cluster representative and [Di]j the jth component of the binary vector Di, then
two methods are:
So, finally we obtain as a cluster representative a binary vector C. In both cases the intuition is that keywords occurring only once in the cluster should be ignored. In the second case we also
normalise out the size n of the cluster.
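A minimal sketch of a binary cluster representative of this kind; only the first (unnormalised) thresholding method is implemented, and the threshold parameter is my own generalisation:

```python
# Sketch of a binary cluster representative: sum the document vectors and
# threshold the components.  With threshold=1 a keyword survives only if it
# occurs more than once in the cluster, as in the text.

def binary_representative(docs, threshold=1):
    t = len(docs[0])
    sums = [sum(d[j] for d in docs) for j in range(t)]
    return [1 if s > threshold else 0 for s in sums]

docs = [[1, 1, 0, 0],
        [1, 0, 1, 0],
        [1, 1, 0, 0]]
print(binary_representative(docs))  # [1, 1, 0, 0]
```

The second method would make the threshold depend on the cluster size n, normalising it out as the text describes.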
There is some evidence to show that both these methods of representation are effective when used in conjunction with appropriate search strategies (see, for example, van Rijsbergen[4] and Murray[5]).
Obviously there are further variations on obtaining cluster representatives but as in the case of association measures it seems unlikely that retrieval effectiveness will change very much by varying
the cluster representatives. It is more likely that the way the data in the cluster representative is used by the search strategy will have a larger effect.
There is another theoretical way of looking at the construction of cluster representatives and that is through the notion of a maximal predictor for a cluster[6]. Given that, as before, the documents
Di in a cluster are binary vectors, then a binary cluster representative for this cluster is a predictor in the sense that each component (ci) predicts the most likely value of that attribute in
the member documents. It is maximal if its correct predictions are as numerous as possible. If one assumes that each member of a cluster of documents D1, . . ., Dn is equally likely then the expected
total number of incorrectly predicted properties (or simply the expected total number of mismatches between cluster representative and member documents, since everything is binary) is,
This can be rewritten as
The expression (*) will be minimised, thus maximising the number of correct predictions, when C = (c1, . . . , ct) is chosen in such a way that
is a minimum. This is achieved by setting cj = 1 if (1/n) Σi [Di]j > 1/2 and cj = 0 otherwise (3).
So in other words a keyword will be assigned to a cluster representative if it occurs in more than half the member documents. This treats errors of prediction caused by absence or presence of
keywords on an equal basis. Croft[7] has shown that it is more reasonable to differentiate the two types of error in IR applications. He showed that falsely predicting a 0 (cj = 0) is more costly than falsely predicting a 1 (cj = 1). Under this assumption the value of 1/2 appearing in (3) is replaced by a constant less than 1/2, its exact value being related to the relative importance attached to the two types of prediction error.
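The majority rule (3) and Croft's variant with a lower threshold can be sketched as follows; the fraction theta is my parameterisation of the threshold:

```python
# Maximal-predictor representative: keyword j is assigned (cj = 1) when it
# occurs in more than a fraction `theta` of the member documents.
# theta = 0.5 gives the majority rule; Croft's argument replaces it with
# a constant less than 0.5.

def maximal_predictor(docs, theta=0.5):
    n, t = len(docs), len(docs[0])
    return [1 if sum(d[j] for d in docs) / n > theta else 0 for j in range(t)]

docs = [[1, 1, 0], [1, 0, 0], [1, 0, 1]]
print(maximal_predictor(docs))       # [1, 0, 0]: only keyword 0 is in > half
print(maximal_predictor(docs, 0.3))  # [1, 1, 1] with a laxer threshold
```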
Although the main reason for constructing these cluster representatives is to lead a search strategy to relevant documents, it should be clear that they can also be used to guide a search to
documents meeting some condition on the matching function. For example, we may want to retrieve all documents Di which match Q better than T, i.e.
{Di |M (Q, Di) > T}
For more details about the evaluation of cluster representative (3) for this purpose the reader should consult the work of Yu et al. [8,9].
One major objection to most work on cluster representatives is that it treats the distribution of keywords in clusters as independent. This is not very realistic. Unfortunately, there does not appear
to be any work to remedy the situation except that of Arnaudov and Govorun[10].
Finally, it should be noted that cluster methods which proceed directly from document descriptions to the classification without first computing the intermediate dissimilarity coefficient, will need
to make a choice of cluster representative ab initio. These cluster representatives are then 'improved' as the algorithm, adjusting the classification according to some objective function, steps
through its iterations.
Cluster-based retrieval
Cluster-based retrieval has as its foundation the cluster hypothesis, which states that closely associated documents tend to be relevant to the same requests. Clustering picks out closely associated
documents and groups them together into one cluster. In Chapter 3, I discussed many ways of doing this; here I shall ignore the actual mechanism of generating the classification and concentrate on
how it may be searched with the aim of retrieving relevant documents.
Suppose we have a hierarchic classification of documents then a simple search strategy goes as follows (refer to Figure 5.3 for details). The search starts at the root of the tree, node 0 in the
example. It proceeds by evaluating a matching function at the nodes immediately descendant from node 0, in the example the nodes 1 and 2. This pattern repeats itself down the tree. The search is
directed by a decision rule, which on the basis of comparing the values of a matching function at each stage decides which node to expand further. Also, it is necessary to have a stopping rule which
terminates the search and forces a retrieval. In Figure 5.3 the decision rule is: expand the node corresponding to the maximum value of the matching function achieved within a filial set. The
stopping rule is: stop if the current maximum is less than the previous maximum. A few remarks about this strategy are in order:
(1) we assume that effective retrieval can be achieved by finding just one cluster;
(2) we assume that each cluster can be adequately represented by a cluster representative for the purpose of locating the cluster containing the relevant documents;
(3) if the maximum of the matching function is not unique some special action, such as a look-ahead, will need to be taken;
(4) the search always terminates and will retrieve at least one document.
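A sketch of this top-down strategy under the stated decision and stopping rules; the tree, cluster representatives, and matching function are all toy assumptions, not the book's data:

```python
# Sketch of the top-down search of Figure 5.3: at each filial set expand the
# child whose cluster representative best matches the query, and stop when
# the current maximum falls below the previous one.

def top_down_search(tree, reps, match, query, root=0):
    node, best = root, match(query, reps[root])
    while tree.get(node):                      # while node has children
        children = tree[node]
        child = max(children, key=lambda c: match(query, reps[c]))
        score = match(query, reps[child])
        if score < best:                       # stopping rule
            break
        node, best = child, score
    return node                                # retrieve this cluster

tree = {0: [1, 2], 1: [3, 4]}                  # hypothetical hierarchy
reps = {0: 0.1, 1: 0.6, 2: 0.3, 3: 0.8, 4: 0.2}
match = lambda q, r: r                         # toy matching function
print(top_down_search(tree, reps, match, None))  # 3
```

The search always terminates at some node, so at least one cluster (and hence one document) is retrieved, as remark (4) states.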
An immediate generalisation of this search is to allow the search to proceed down more than one branch of the tree so as to allow retrieval of more than one cluster. By necessity the decision rule
and stopping rule will be slightly more complicated. The main difference is that provision must be made for back-tracking. This will occur when the search strategy estimates (based on the current
value of the matching function) that further progress down a branch is a waste of time, at which point it may or may not retrieve the current cluster. The search then returns (back-tracks) to a
previous branching point and takes an alternative branch down the tree.
The above strategies may be described as top-down searches. A bottom-up search is one which enters the tree at one of its terminal nodes, and proceeds in an upward direction towards the root of the
tree. In this way it will pass through a sequence of nested clusters of increasing size. A decision rule is not required; we only need a stopping rule which could be simply a cut-off. A typical
search would seek the largest cluster containing the document represented by the starting node and not exceeding the cut-off in size. Once this cluster is found, the set of documents in it is
retrieved. To initiate the search in response to a request it is necessary to know in advance one terminal node appropriate for that request. It is not unusual to find that a user will already know
of a document relevant to his request and is seeking other documents similar to it. This 'source' document can thus be used to initiate a bottom-up search. For a systematic evaluation of bottom-up
searches in terms of efficiency and effectiveness see Croft[7].
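A bottom-up search of this kind might be sketched as follows, assuming each node stores its parent and member set; all names and sizes are hypothetical:

```python
# Sketch of a bottom-up search: starting from the terminal node of a known
# relevant ("source") document, walk up through the nested clusters and
# retrieve the largest one not exceeding the size cut-off.

def bottom_up_search(parent, members, start, cutoff):
    node, chosen = start, start
    while node in parent:
        node = parent[node]
        if len(members[node]) > cutoff:
            break
        chosen = node
    return members[chosen]

parent = {'d1': 'c1', 'c1': 'c2', 'c2': 'root'}      # hypothetical tree
members = {'d1': {'d1'}, 'c1': {'d1', 'd2'},
           'c2': {'d1', 'd2', 'd3'}, 'root': {'d1', 'd2', 'd3', 'd4', 'd5'}}
print(bottom_up_search(parent, members, 'd1', 4))    # {'d1', 'd2', 'd3'}
```

Note that no decision rule is needed: the path upward is fixed, and the size cut-off acts as the stopping rule.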
If we now abandon the idea of having a multi-level clustering and accept a single-level clustering, we end up with the approach to document clustering which Salton and his co-workers have worked on
extensively. The appropriate cluster method is typified by Rocchio's algorithm described in Chapter 3. The search strategy is in part a serial search. It proceeds by first finding the best (or
nearest) cluster(s) and then looking within these. The second stage is achieved by doing a serial search of the documents in the selected cluster(s). The output is frequently a ranking of the
documents so retrieved.
Interactive search formulation
A user confronted with an automatic retrieval system is unlikely to be able to express his information need in one go. He is more likely to want to indulge in a trial-and-error process in which he
formulates his query in the light of what the system can tell him about his query. The kind of information that he is likely to want to use for the reformulation of his query is:
(1) the frequency of occurrence in the data base of his search terms;
(2) the number of documents likely to be retrieved by his query;
(3) alternative and related terms to be the ones used in his search;
(4) a small sample of the citations likely to be retrieved; and
(5) the terms used to index the citations in (4).
All this can be conveniently provided to a user during his search session by an interactive retrieval system. If he discovers that one of his search terms occurs very frequently he may wish to make
it more specific by consulting a hierarchic dictionary which will tell him what his options are. Similarly, if his query is likely to retrieve too many documents he can make it more specific.
The sample of citations and their indexing will give him some idea of what kind of documents are likely to be retrieved and thus some idea of how effective his search terms have been in expressing
his information need. He may modify his query in the light of this sample retrieval. This process in which the user modifies his query based on actual search results could be described as a form of feedback.
Examples, both operational and experimental, of systems providing mechanisms of this kind are MEDLINE[11] and MEDUSA[12], both based on the MEDLARS system. Another interesting and sophisticated
experimental system is that described by Oddy[13].
We now look at a mathematical approach to the use of feedback where the system automatically modifies the query.
The word feedback is normally used to describe the mechanism by which a system can improve its performance on a task by taking account of past performance. In other words a simple input-output system
feeds back the information from the output so that this may be used to improve the performance on the next input. The notion of feedback is well established in biological and automatic control
systems. It has been popularised by Norbert Wiener in his book Cybernetics. In information retrieval it has been used with considerable effect.
Consider now a retrieval strategy that has been implemented by means of a matching function M. Furthermore, let us suppose that both the query Q and document representatives D are t-dimensional
vectors with real components where t is the number of index terms. Because it is my purpose to explain feedback I will consider its applications to a serial search only.
It is the aim of every retrieval strategy to retrieve the relevant documents A and withhold the non-relevant documents `A. Unfortunately relevance is defined with respect to the user's semantic
interpretation of his query. From the point of view of the retrieval system his formulation of it may not be ideal. An ideal formulation would be one which retrieved only the relevant documents. In
the case of a serial search the system will retrieve all D for which M(Q,D) > T and not retrieve any D for which M(Q,D) <= T, where T is a specified threshold. It so happens that in the case where M
is the cosine correlation function, i.e.
the decision procedure
M(Q,D) - T > 0
corresponds to a linear discriminant function used to linearly separate two sets A and `A in R^t. Nilsson[14] has discussed in great detail how functions such as this may be 'trained' by modifying
the weights qi to discriminate correctly between two categories. Let us suppose for the moment that A and `A are known in advance, then the correct query formulation Q0 would be one for which
M(Q0,D) > T whenever D ∈ A
M(Q0,D) <= T whenever D ∈ `A
The interesting thing is that starting with any Q we can adjust it iteratively using feedback information so that it will converge to Q0. There is a theorem (Nilsson[14], page 81) which states that
providing Q0 exists there is an iterative procedure which will ensure that Q will converge to Q0 in a finite number of steps.
The iterative procedure is called the fixed-increment error correction procedure.
It goes as follows:
Qi = Qi-1 + cD if M(Qi-1, D) - T <= 0 and D ∈ A
Qi = Qi-1 - cD if M(Qi-1, D) - T > 0 and D ∈ `A
and no change is made to Qi-1 if it diagnoses correctly. c is the correction increment; its value is arbitrary and is therefore usually set to unity. In practice it may be necessary to cycle through the set of documents several times before the correct set of weights is achieved, namely one which will separate A and `A linearly (this is always providing a solution exists).
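The fixed-increment procedure can be sketched directly; the inner-product matching function and the training data are illustrative:

```python
# Sketch of the fixed-increment error-correction procedure: cycle through the
# labelled documents, promoting Q on missed relevant documents and demoting
# it on wrongly retrieved non-relevant ones, until every document is
# diagnosed correctly (or a cycle limit is reached).

def fixed_increment(docs, relevant, Q, T, c=1.0, max_cycles=100):
    for _ in range(max_cycles):
        changed = False
        for D, rel in zip(docs, relevant):
            m = sum(q * d for q, d in zip(Q, D))  # toy inner-product M(Q, D)
            if rel and m - T <= 0:
                Q = [q + c * d for q, d in zip(Q, D)]
                changed = True
            elif not rel and m - T > 0:
                Q = [q - c * d for q, d in zip(Q, D)]
                changed = True
        if not changed:                           # all diagnosed correctly
            break
    return Q

Q = fixed_increment([[1, 0], [0, 1]], [True, False], Q=[0.0, 0.0], T=0.5)
print(Q)  # [1.0, 0.0]: now M(Q, D) > T exactly for the relevant document
```

If no linearly separating Q0 exists the cycle limit prevents the procedure from looping forever, which is why the text's convergence guarantee is conditional on Q0 existing.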
The situation in actual retrieval is not as simple. We do not know the sets A and `A in advance, in fact A is the set we hope to retrieve. However, given a query formulation Q and the documents
retrieved by it we can ask the user to tell the system which of the documents retrieved were relevant and which were not. The system can then automatically modify Q so that at least it will be able
to diagnose correctly those documents that the user has seen. The assumption is that this will improve retrieval on the next run by virtue of the fact that its performance is better on a sample.
Once again this is not the whole story. It is often difficult to fix the threshold T in advance so that instead documents are ranked in decreasing matching value on output. It is now more difficult
to define what is meant by an ideal query formulation. Rocchio[15] in his thesis defined the optimal query Q0 as one which maximised

Φ = (1/|A|) Σ{D ∈ A} M(Q, D) - (1/|`A|) Σ{D ∈ `A} M(Q, D).

If M is taken to be the cosine function (Q, D)/||Q|| ||D|| then it is easy to show that Φ is maximised by

Q0 = c [ (1/|A|) Σ{D ∈ A} D/||D|| - (1/|`A|) Σ{D ∈ `A} D/||D|| ]

where c is an arbitrary proportionality constant.
If the summations instead of being over A and `A are now made over A ∩ Bi and `A ∩ Bi, where Bi is the set of retrieved documents on the ith iteration, then we have a query formulation which is optimal for Bi, a subset of the document collection. By analogy to the linear classifier used before, we now add this vector to the query formulation on the ith step to get

Qi+1 = Qi + w1 Σ{D ∈ A ∩ Bi} D/||D|| - w2 Σ{D ∈ `A ∩ Bi} D/||D||

where w1 and w2 are weighting coefficients. Salton[2] in fact used a slightly modified version, the most important difference being that there is an option to generate Qi+1 from Qi, or from Q, the original query. The effect of all these adjustments may be summarised by saying that the query is automatically modified so that index terms in relevant retrieved documents are given more weight
(promoted) and index terms in non-relevant documents are given less weight (demoted).
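One feedback step of this promote/demote kind can be sketched as follows; the weights w1 and w2 and the averaging are illustrative choices, not Salton's exact formulation:

```python
# Sketch of one Rocchio-style feedback step: index terms from relevant
# retrieved documents are promoted, those from non-relevant ones demoted.
# The weights w1, w2 are illustrative.

def rocchio_step(Q, relevant, nonrelevant, w1=0.75, w2=0.25):
    t = len(Q)
    new_Q = list(Q)
    for D in relevant:
        for j in range(t):
            new_Q[j] += w1 * D[j] / len(relevant)
    for D in nonrelevant:
        for j in range(t):
            new_Q[j] -= w2 * D[j] / len(nonrelevant)
    return new_Q

Q = [1.0, 0.0, 0.0]
rel = [[0, 1, 0], [0, 1, 1]]
nonrel = [[0, 0, 1]]
print(rocchio_step(Q, rel, nonrel))  # term 1 promoted, term 2 nearly cancelled
```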
Experiments have shown that relevance feedback can be very effective. Unfortunately the extent of the effectiveness is rather difficult to gauge, since it is rather difficult to separate the
contribution to increased retrieval effectiveness produced when individual documents move up in rank from the contribution produced when new documents are retrieved. The latter of course is what the
user most cares about.
Finally, a few comments about the technique of relevance feedback in general. It appears to me that its implementation on an operational basis may be more problematic. It is not clear how users are
to assess the relevance, or non-relevance of a document from such scanty evidence as citations. In an operational system it is easy to arrange for abstracts to be output but it is likely that a user
will need to browse through the retrieved documents themselves to determine their relevance after which he is probably in a much better position to restate his query himself.
Bibliographic remarks
The book by Lancaster and Fayen[16] contains details of many operational on-line systems. Barraclough[17] has written an interesting survey article about on-line searching. Discussions on search
strategies are usually found embedded in more general papers on information retrieval. There are, however, a few specialist references worth mentioning.
A now classic paper on the limitations of a Boolean search is Verhoeff et al.[18]. Miller[19] has tried to get away from a simple Boolean search by introducing a form of weighting although maintaining
essentially a Boolean search. Angione[20] discusses the equivalence of Boolean and weighted searching. Rickman[21] has described a way of introducing automatic feedback into a Boolean search. Goffman
[22] has investigated an interesting search strategy based on the idea that the relevance of a document to a query is conditional on the relevance of other documents to that query. In an early paper
by Hyvarinen[23], one will find an information-theoretic definition of the 'typical member' cluster representative. Negoita[24] gives a theoretical discussion of a bottom-up search strategy in the
context of cluster-based retrieval. Much of the early work on relevance feedback done on the SMART project has now been reprinted in Salton[25]. Two other independent pieces of work on feedback are
Stanfel[26] and Bono[27].
1. KNUTH, D.E., The Art of Computer Programming, Vol. 3, Sorting and Searching, Addison-Wesley, Reading, Massachusetts (1973).
2. SALTON, G., Automatic Information Organisation and Retrieval, McGraw-Hill, New York (1968).
3. van RIJSBERGEN, C.J., 'The best-match problem in document retrieval', Communications of the ACM, 17, 648-649 (1974).
4. van RIJSBERGEN, C.J., 'Further experiments with hierarchic clustering in document retrieval', Information Storage and Retrieval, 10, 1-14 (1974).
5. MURRAY, D.M., 'Document retrieval based on clustered files', Ph.D. Thesis, Cornell University Report ISR-20 to National Science Foundation and to the National Library of Medicine (1972).
6. GOWER, J.C., 'Maximal predictive classification', Biometrics, 30, 643-654 (1974).
7. CROFT, W.B., Organizing and Searching Large Files of Document Descriptions, Ph.D. Thesis, University of Cambridge (1979).
8. YU, C.T., and LUK, W.S., 'Analysis of effectiveness of retrieval in clustered files', Journal of the ACM, 24, 607-622 (1977).
9. YU, C.T., LUK, W.C. and SIU, M.K., 'On the estimation of the number of desired records with respect to a given query' (in preparation).
10. ARNAUDOV, D.D. and GOVORUN, N.N., Some Aspects of the File Organisation and Retrieval Strategy in Large Databases, Joint Institute for Nuclear Research, Dubna (1977).
11. Medline Reference Manual, Medlars Management Section, Bibliographic Services Division, National Library of Medicine.
12. BARRACLOUGH, E.D., MEDLARS on-line search formulation and indexing, Technical Report Series, No. 34, Computing Laboratory, University of Newcastle upon Tyne.
13. ODDY, R.N., 'Information retrieval through man-machine dialogue', Journal of Documentation, 33, 1-14 (1977).
14. NILSSON, N.J., Learning Machines - Foundations of Trainable Pattern Classifying Systems, McGraw-Hill, New York (1965).
15. ROCCHIO, J.J., 'Document retrieval systems - Optimization and evaluation', Ph.D. Thesis, Harvard University, Report ISR-10 to National Science Foundation, Harvard Computation Laboratory (1966).
16. LANCASTER, F.W. and FAYEN, E.G., Information Retrieval On-line, Melville Publishing Co., Los Angeles, California (1973).
17. BARRACLOUGH, E.D., 'On-line searching in information retrieval', Journal of Documentation, 33, 220-238 (1977).
18. VERHOEFF, J., GOFFMAN, W. and BELZER, J., 'Inefficiency of the use of boolean functions for information retrieval systems', Communications of the ACM, 4, 557-558, 594 (1961).
19. MILLER, W.L., 'A probabilistic search strategy for MEDLARS', Journal of Documentation, 17, 254-266 (1971).
20. ANGIONE, P.V., 'On the equivalence of Boolean and weighted searching based on the convertibility of query forms', Journal of the American Society for Information Science, 26, 112-124 (1975).
21. RICKMAN, J.T., 'Design consideration for a Boolean search system with automatic relevance feedback processing', Proceedings of the ACM 1972 Annual Conference, 478-481 (1972).
22. GOFFMAN, W., 'An indirect method of information retrieval', Information Storage and Retrieval, 4, 361-373 (1969).
23. HYVARINEN, L., 'Classification of qualitative data', BIT, Nordisk Tidskrift för Informationsbehandling, 2, 83-89 (1962).
24. NEGOITA, C.V., 'On the decision process in information retrieval', Studii si cercetari de documentare, 15, 269-281 (1973).
25. SALTON, G., The SMART Retrieval System - Experiment in Automatic Document Processing, Prentice-Hall, Englewood Cliffs, New Jersey (1971).
26. STANFEL, L.E., 'Sequential adaptation of retrieval systems based on user inputs', Information Storage and Retrieval, 7, 69-78 (1971).
27. BONO, P.R., 'Adaptive procedures for automatic document retrieval', Ph.D. Thesis, University of Michigan (1972). | {"url":"http://dcs.gla.ac.uk/Keith/Chapter.5/Ch.5.html","timestamp":"2014-04-17T22:19:38Z","content_type":null,"content_length":"37051","record_id":"<urn:uuid:2a8c828d-470f-41f0-bf95-c4de173d3653>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to draw sin(1/x)
Note that the period is 1/360 (correct me if I'm wrong). As it seems to me, it would squeeze towards the origin.
Ah, that's interesting. I'm curious to know why it started to open up. My guess is that when x = 1 it will start to widen, as with e.g. sin(x/2), which has a 720-degree period.
Start with $\sin(n\pi)=0,\;\;\;n=0,\;1,\;2,\;3,\ldots$ $\displaystyle \sin\left[\frac{(4n+1)\pi}{2}\right]=1$ $\displaystyle \sin\left[\frac{(4n+3)\pi}{2}\right]=-1$ Then $\displaystyle \sin(n\pi)=0\Rightarrow n\pi=\frac{1}{x}\Rightarrow x=\frac{1}{n\pi}$ shows that the graph crosses the x-axis at $\displaystyle x=\frac{1}{\pi},\;\;\frac{1}{2\pi},\;\;\frac{1}{3\pi},\;\;\frac{1}{4\pi},\ldots$ an "infinite" number of times approaching x=0. The function is non-periodic and approaches 0 as x approaches $\infty$.
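A quick numerical check of these crossing points (my addition, not part of the thread):

```python
import math

# Numerical check that sin(1/x) vanishes at x = 1/(n*pi), n = 1, 2, 3, ...
for n in range(1, 5):
    x = 1 / (n * math.pi)
    assert abs(math.sin(1 / x)) < 1e-9
print("sin(1/x) = 0 at x = 1/pi, 1/(2 pi), 1/(3 pi), ...")
```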
Last edited by Archie Meade; November 28th 2010 at 06:14 AM. | {"url":"http://mathhelpforum.com/trigonometry/164592-how-draw-sin-1-x.html","timestamp":"2014-04-20T23:35:11Z","content_type":null,"content_length":"54561","record_id":"<urn:uuid:4de97880-7abb-4cd8-a469-0f9b5b788a62>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
Crockett, CA Science Tutor
Find a Crockett, CA Science Tutor
...I was the first ever undergraduate student to teach a graduate course in my department. As a doctoral student at UC Berkeley, I assisted in teaching as well as in curriculum development of
several Bioengineering and Physics courses (undergraduate and graduate levels) for engineering and science ...
24 Subjects: including organic chemistry, calculus, precalculus, trigonometry
...I believe patience and clear communication is key when working with elementary students. I teach all of my students study skills necessary to do well and succeed on tests and in school. I show
my students how to retain information better by using flashcards, visual imagery, acronyms, and a lot more.
59 Subjects: including organic chemistry, physics, physical science, chemistry
...Environmental Engineering Science, California Institute of Technology (Caltech) Dr. G.'s qualifications include a Ph.D. in engineering from CalTech (including a minor in numerical methods/
applied math) and over 25 years experience as a practicing environmental engineer/scientist. In addition, ...
13 Subjects: including physics, calculus, statistics, geometry
...As for eye makeup, I have a tremendous love for highly pigmented colors --M.A.C. eye shadows are one of my favorites but also Clinique, Dior, Sephora and Smashbox. However, I also do well with
neutral colors, too! Although I worked for Clinique, I strive to use various cosmetic brands for variation in my tools.
16 Subjects: including geology, English, reading, elementary (k-6th)
...While I was there, I also took various Calculus courses and courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways
the world works, and the value of a good education. That said, I’ve been through the education system, and have seen its flaws, and places where it could work better.
6 Subjects: including physics, calculus, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/crockett_ca_science_tutors.php","timestamp":"2014-04-20T19:18:59Z","content_type":null,"content_length":"24046","record_id":"<urn:uuid:bda907d6-d2e1-4c34-881f-63b82ee0423b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
Venn diagram
Venn diagrams are illustrations used in the branch of mathematics known as set theory. Invented in 1881 by John Venn, they show all of the possible mathematical or logical relationships between sets
(groups of things). They normally consist of overlapping circles. For instance, in a two set Venn diagram, one circle may represent all things that are liquid at room temperature, while another
circle may represent the set of all chemical elements. The overlapping area (intersection) would then represent things that are both liquid at room temperature and elements, e.g. mercury. Other
shapes can be employed (see below), and this is necessary for more than three sets.
The Hull-born British philosopher and mathematician John Venn (1834-1923) introduced the Venn diagram in 1881.
A stained glass window in Caius College, Cambridge, where Venn studied and spent most of his life, commemorates him and represents a Venn diagram.
The orange circle ( set A) might represent, for example, all living creatures that are two-legged. The blue circle, (set B) might represent living creatures that can fly. The area where the blue and
orange circles overlap contains all living creatures that can fly and that have two legs — for example, parrots. (Imagine each separate type of creature as a point somewhere in the diagram.)
Humans and penguins would be in the orange circle, in the part which does not overlap with the blue circle. Mosquitoes have six legs, and fly, so the point for mosquitoes would be in the part of the
blue circle which does not overlap with the orange one. Things which are not two-legged and cannot fly (for example, whales and spiders) would all be represented by points outside both circles.
Technically, the Venn diagram above can be interpreted as "the relationships of set A and set B that may have some (but not all) elements in common".
The combined area of sets A and B is called the union of A and B, denoted by A∪B. The union in this case contains all things that either have two legs, or which fly, or both.
The area in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by A∩B. The intersection of the two sets is not empty, because the circles overlap, i.e. there are
creatures that are in both the orange and blue circles.
Sometimes a rectangle called the " Universal set" is drawn around the Venn diagram to show the space of all possible things. As mentioned above, a whale would be represented by a point that is not in
the union, but is in the Universe (of living creatures, or of all things, depending on how one chose to define the Universe for a particular diagram).
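For readers who program, the same relationships can be checked with Python's built-in set type; the creature sets below follow the article's examples:

```python
# Union, intersection, and the universal set, using Python sets.
two_legged = {"human", "penguin", "parrot"}
can_fly = {"parrot", "mosquito"}

print(two_legged | can_fly)   # union A U B
print(two_legged & can_fly)   # intersection: {'parrot'}

universe = two_legged | can_fly | {"whale", "spider"}
outside = universe - (two_legged | can_fly)
print(outside)                # {'whale', 'spider'}: in neither circle
```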
Extensions to higher numbers of sets
Venn diagrams typically have three sets. Venn was keen to find symmetrical figures "elegant in themselves" representing higher numbers of sets, and he devised a four-set diagram using ellipses. He also gave a construction for Venn diagrams with any number of curves, where each successive curve is interleaved with previous curves, starting with the 3-circle diagram.
Simple Symmetric Venn Diagrams
D. W. Henderson showed in 1963 that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was prime. He also showed that such symmetric Venn diagrams exist when n is 5 or
7. In 2002 Peter Hamburger found symmetric Venn diagrams for n = 11 and in 2003, Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. Thus symmetric Venn
diagrams exist if and only if n is a prime number.
Edwards' Venn diagrams
A. W. F. Edwards gave a construction to higher numbers of sets that features some symmetries. His construction is achieved by projecting the Venn diagram onto a sphere. Three sets can be easily
represented by taking three hemispheres at right angles (x≥0, y≥0 and z≥0). A fourth set can be represented by taking a curve similar to the seam on a tennis ball which winds up and down around the
equator. The resulting sets can then be projected back to the plane to give cogwheel diagrams with increasing numbers of teeth. These diagrams were devised while designing a stained-glass window in
memoriam to Venn.
Other diagrams
Edwards' Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum which were based around intersecting polygons with increasing numbers of sides. They are also 2-dimensional
representations of hypercubes.
Smith devised similar n-set diagrams using sine curves with equations y=sin(2^ix)/2^i, 0≤i≤n-2.
Charles Lutwidge Dodgson (a.k.a. Lewis Carroll) devised a five set diagram.
Classroom use
Venn diagrams are often used by teachers in the classroom as a mechanism to help students compare and contrast two items. Characteristics are listed in each section of the diagram, with shared
characteristics listed in the overlapping section. | {"url":"http://schools-wikipedia.org/wp/v/Venn_diagram.htm","timestamp":"2014-04-18T08:21:31Z","content_type":null,"content_length":"19601","record_id":"<urn:uuid:c286c0e5-fad4-4995-80fd-2e5b836f2551>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
Divergence of limit superior
$\limsup x_n=-\infty\implies \lim x_n=-\infty$; does the converse hold? How can this be proved?
If you understand the definition then this is basically trivial. $\limsup x_{n} = \lim_{n\to\infty} \sup \{x_{i}\ : i \geq n\}$ Then use the definition of $\lim_{n\to\infty}x_{n} = -\infty$
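One way to fill in those details (a sketch, not part of the original reply):

```latex
\begin{aligned}
&\text{If } \limsup_{n} x_n = -\infty:\ \text{for every } M \text{ there is } N
 \text{ with } \sup\{x_i : i \ge N\} < M,\\
&\text{so } x_n < M \text{ for all } n \ge N, \text{ i.e. } \lim_{n} x_n = -\infty.\\
&\text{Conversely, if } \lim_{n} x_n = -\infty:\ \text{for every } M \text{ there is } N
 \text{ with } x_n < M \text{ for all } n \ge N,\\
&\text{so } \sup\{x_i : i \ge n\} \le M \text{ for all } n \ge N,
 \text{ and hence } \limsup_{n} x_n = -\infty.
\end{aligned}
```

So the implication and its converse both hold.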
Last edited by Beaky; March 29th 2011 at 06:27 PM. | {"url":"http://mathhelpforum.com/differential-geometry/176230-divergence-limit-superior.html","timestamp":"2014-04-17T03:54:25Z","content_type":null,"content_length":"36917","record_id":"<urn:uuid:4f7ec651-936b-45ee-abf3-f3628fd14857>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
Results 1 - 10 of 72
- Neural Computing Surveys , 2001
Cited by 1492 (93 self)
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is finding a suitable representation of multivariate data. For
computational and conceptual simplicity, such a representation is often sought as a linear transformation of the original data. Well-known linear transformation methods include, for example,
principal component analysis, factor analysis, and projection pursuit. A recently developed linear transformation method is independent component analysis (ICA), in which the desired representation
is the one that minimizes the statistical dependence of the components of the representation. Such a representation seems to capture the essential structure of the data in many applications. In this
paper, we survey the existing theory and methods for ICA. 1
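To make the key idea concrete, here is a toy two-source demo in plain Python (an illustration only, not from the survey; it assumes unit-variance sources mixed by a pure rotation, so the mixture is already white and demixing reduces to a one-dimensional search for a maximally non-Gaussian projection):

```python
import math
import random

random.seed(1)
n = 8000
# Two independent unit-variance sources with different distributions:
# a uniform (sub-Gaussian) and a Laplacian (super-Gaussian) signal.
src_u = [random.uniform(-math.sqrt(3), math.sqrt(3)) for _ in range(n)]
src_l = [random.choice((-1, 1)) * random.expovariate(math.sqrt(2)) for _ in range(n)]

t0 = 0.6  # "unknown" mixing angle; an orthogonal mix keeps the data white
x1 = [math.cos(t0) * a - math.sin(t0) * b for a, b in zip(src_u, src_l)]
x2 = [math.sin(t0) * a + math.cos(t0) * b for a, b in zip(src_u, src_l)]

def abs_excess_kurtosis(y):
    m = sum(y) / len(y)
    v = sum((u - m) ** 2 for u in y) / len(y)
    k4 = sum((u - m) ** 4 for u in y) / len(y)
    return abs(k4 / v ** 2 - 3.0)

# Scan candidate demixing angles: a projection is maximally non-Gaussian
# exactly when it lines up with one of the independent sources.
best_t = max((math.pi * i / 90 for i in range(90)),
             key=lambda t: abs_excess_kurtosis(
                 [math.cos(t) * a + math.sin(t) * b for a, b in zip(x1, x2)]))

y1 = [math.cos(best_t) * a + math.sin(best_t) * b for a, b in zip(x1, x2)]
y2 = [-math.sin(best_t) * a + math.cos(best_t) * b for a, b in zip(x1, x2)]
```

Up to the usual sign and permutation ambiguity of ICA, y1 and y2 recover the hidden sources.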
- INTERNATIONAL JOURNAL OF NEURAL SYSTEMS , 1997
Cited by 57 (1 self)
This paper discusses the application of a modern signal processing technique known as inde-pendent component analysis (ICA) or blind source separation to multivariate financial time series such as a
portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). This can be viewed as a factorization
of the portfolio since joint probabilities become simple products in the coordinate system of the ICs. We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the
results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent but large shocks (responsible for the major
changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly
well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar
to the original one. Independent component analysis is a potentially powerful method of analyzing and understanding driving mechanisms in financial markets. There are further
- IEEE Transactions on Speech and Audio Processing , 2005
Cited by 49 (9 self)
We present a Bayesian approach for blind separation of linear instantaneous mixtures of sources having a sparse representation in a given basis. The distributions of the coefficients of the sources
in the basis are modeled by a Student t distribution, which can be expressed as a Scale Mixture of Gaussians, and a Gibbs sampler is derived to estimate the sources, the mixing matrix, the input
noise variance and also the hyperparameters of the Student t distributions. The method allows for separation of underdetermined (more sources than sensors) noisy mixtures. Results are presented with
audio signals using a Modified Discrete Cosine Transfrom basis and compared with a finite mixture of Gaussians prior approach. These results show the improved sound quality obtained with the Student
t prior and the better robustness to mixing matrices close to singularity of the Markov Chains Monte Carlo approach.
- Journal of Machine Learning Research , 2003
Cited by 42 (0 self)
We present a generalization of independent component analysis (ICA), where instead of looking for a linear transform that makes the data components independent, we look for a transform that makes the
data components well fit by a tree-structured graphical model. This tree-dependent component analysis (TCA) provides a tractable and flexible approach to weakening the assumption of independence in
ICA. In particular, TCA allows the underlying graph to have multiple connected components, and thus the method is able to find “clusters ” of components such that components are dependent within a
cluster and independent between clusters. Finally, we make use of a notion of graphical models for time series due to Brillinger (1996) to extend these ideas to the temporal setting. In particular,
we are able to fit models that incorporate tree-structured dependencies among multiple time series.
, 2005
Cited by 35 (1 self)
Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is
accomplished by taking into account the structure of the mixing process and by making assumptions about the sources. When the information about the mixing process and sources is limited, the problem
is called ‘blind’. By assuming that the sources can be represented sparsely in a given basis, recent research has demonstrated that solutions to previously problematic blind source separation
problems can be obtained. In some cases, solutions are possible to problems intractable by previous non-sparse methods. Indeed, sparse methods provide a powerful approach to the separation of linear
mixtures of independent data. This paper surveys the recent arrival of sparse blind source separation methods and the previously existing non-sparse methods, providing insights and appropriate hooks
into the literature along the way.
Cited by 30 (6 self)
A new algorithmic framework called denoising source separation (DSS) is introduced. The main benefit of this framework is that it allows for easy development of new source separation algorithms which
are optimised for specific problems. In this framework, source separation algorithms are constructed around denoising procedures. The resulting algorithms can range from almost blind to highly
specialised source separation algorithms. Both simple linear and more complex nonlinear or adaptive denoising schemes are considered. Some existing independent component analysis algorithms are
reinterpreted within the DSS framework and new, robust blind source separation algorithms are suggested. Although DSS algorithms need not be explicitly based on objective functions, there is often an
implicit objective function that is optimised. The exact relation between the denoising procedure and the objective function is derived and a useful approximation of the objective function is
presented. In the experimental section, various DSS schemes are applied extensively to artificial data, to real magnetoencephalograms and to simulated CDMA mobile network signals. Finally, various
extensions to the proposed DSS algorithms are considered. These include nonlinear observation mappings, hierarchical models and overcomplete, nonorthogonal feature spaces. With these extensions, DSS
appears to have relevance to many existing models of neural information processing.
- In Proc. of the 4th Int. Symp. on Independent Component Analysis and Blind Signal Separation (ICA2003 , 2003
Cited by 30 (2 self)
Abstract — In this paper, we briefly review recent advances in blind source separation (BSS) for nonlinear mixing models. After a general introduction to the nonlinear BSS and ICA (independent
Component Analysis) problems, we discuss in more detail uniqueness issues, presenting some new results. A fundamental difficulty in the nonlinear BSS problem and even more so in the nonlinear ICA
problem is that they are nonunique without extra constraints, which are often implemented by using a suitable regularization. Post-nonlinear mixtures are an important special case, where a
nonlinearity is applied to linear mixtures. For such mixtures, the ambiguities are essentially the same as for the linear ICA or BSS problems. In the later part of this paper, various separation
techniques proposed for post-nonlinear mixtures and general nonlinear mixtures are reviewed. I. THE NONLINEAR ICA AND BSS PROBLEMS Consider N samples of the observed data vector x, modeled by
, 2001
Cited by 29 (7 self)
this article, we propose a simple batch algorithm that guarantees blind extraction of any source signal $s_i$ that satisfies for the specific time delay $\tau_i$ the following relations: $E[s_i(k)s_i(k-\tau_i)] \neq 0$ and $E[s_i(k)s_j(k-\tau_i)] = 0$ for $i \neq j$. (2.1) Because we want to extract only a desired source signal, we can use a simple processing unit described as $y(k) = \mathbf{w}^T \mathbf{x}(k)$, where $y(k)$ is the output signal (which estimates the specific source signal $s_i$), $k$ is the sampling number, and $\mathbf{w}$ is the weight vector. For the purpose of developing the algorithm, let us first define the following error, $\varepsilon(k) = y(k) - b\,y(k-p)$, (2.2) where $b$ is a coefficient of a simple FIR filter with single delay $z^{-pT_s}$, where $T_s$ is the sampling period (assumed to be one)
- JOURNAL OF MACHINE LEARNING RESEARCH , 2004
Cited by 25 (3 self)
A new efficient algorithm is presented for joint diagonalization of several matrices. The algorithm is based on the Frobenius-norm formulation of the joint diagonalization problem, and addresses
diagonalization with a general, non-orthogonal transformation. The iterative scheme of the algorithm is based on a multiplicative update which ensures the invertibility of the diagonalizer. The
algorithm's efficiency stems from the special approximation of the cost function resulting in a sparse, block-diagonal Hessian to be used in the computation of the quasi-Newton update step.
Extensive numerical simulations illustrate the performance of the algorithm and provide a comparison to other leading diagonalization methods. The results of such comparison demonstrate that the
proposed algorithm is a viable alternative to existing state-of-the-art joint diagonalization algorithms.
- IEEE Trans. Inform. Theory
Cited by 23 (14 self)
Abstract—Situations in many fields of research, such as digital communications, nuclear physics and mathematical finance, can be modelled with random matrices. When the matrices get large, free
probability theory is an invaluable tool for describing the asymptotic behaviour of many systems. It will be explained how free probability can be used to estimate covariance matrices. Multiplicative
free deconvolution is shown to be a method which can aid in expressing limit eigenvalue distributions for sample covariance matrices, and to simplify estimators for eigenvalue distributions of
covariance matrices. Index Terms—Free Probability Theory, Random Matrices, deconvolution, limiting eigenvalue distribution, G-analysis. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=98966","timestamp":"2014-04-19T06:33:49Z","content_type":null,"content_length":"40483","record_id":"<urn:uuid:482a83b8-0905-4833-bfbf-72c9aaf642bb>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recurrence Relation Problem...
October 10th 2010, 01:30 PM
Recurrence Relation Problem...
Solve the recurrence relation :
a_n = 2a_{n−1} − 2a_{n−2}
subject to the initial conditions
a0 = 1, a1 = 3.
October 10th 2010, 01:44 PM
First solve the characteristic equation $s^2-2s+2= 0$
What do you get?
October 10th 2010, 01:55 PM
$s = 1 + i$ and $s = 1 - i$
October 10th 2010, 02:02 PM
That sounds right.
The next step is to put these solutions into polar form. Once you have done this, the general solution to the recurrence relation will be of the form
$\displaystyle a_n = r^n(\alpha \cos n\theta +\beta \sin n\theta)$
using $a_0=1,a_1=3$ to solve for $\alpha$ and $\beta$
October 10th 2010, 02:12 PM
That sounds right.
The next step is to put these solutions into polar form. Once you have done this, the general solution to the recurrence relation will be of the form
$\displaystyle a_n = r^n(\alpha \cos n\theta +\beta \sin n\theta)$
using $a_0=1,a_1=3$ to solve for $\alpha$ and $\beta$
Thanks a lot.
But, after converting 1+i and 1-i into polar form, how do I achieve that form of the equation? Also, in the end, am I likely to end up with simultaneous equations while solving for
the two variables?
October 10th 2010, 02:41 PM
That equation is the general form of the solution. Just substitute the values you find for r and $\theta$ into it.
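To make this concrete (a sketch that is not part of the thread): with $r=\sqrt{2}$ and $\theta=\pi/4$, the initial conditions give $\alpha=1$ and $\beta=2$, and the resulting closed form can be checked against the recurrence numerically:

```python
import math

# Closed form a_n = r^n (alpha*cos(n*theta) + beta*sin(n*theta))
# with r = sqrt(2), theta = pi/4, alpha = 1, beta = 2 (from a0 = 1, a1 = 3).
def a_closed(n):
    r, theta = math.sqrt(2), math.pi / 4
    return r ** n * (math.cos(n * theta) + 2 * math.sin(n * theta))

# The recurrence a_n = 2 a_{n-1} - 2 a_{n-2} itself.
seq = [1, 3]
for _ in range(2, 12):
    seq.append(2 * seq[-1] - 2 * seq[-2])

assert all(abs(a_closed(n) - seq[n]) < 1e-9 for n in range(12))
```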
Yes (Wink) | {"url":"http://mathhelpforum.com/discrete-math/159075-recurrence-relation-problem-print.html","timestamp":"2014-04-20T19:44:56Z","content_type":null,"content_length":"10761","record_id":"<urn:uuid:b7d823e6-008e-4a27-b64f-2d58d6c6f787>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use self-similarity to get a limit from an inferior or superior limit.
This article is a stub. This means that it cannot be considered to contain or lead to any mathematically interesting information.
Quick description
In some situations the existence of a limit can be derived by using inferior or superior limit and suitably dividing a domain to use self-similarity.
Example 1: Maximal density of a packing
Fix a compact domain $D$ in Euclidean space (for example, a ball). A packing is then a union of domains congruent to $D$, with disjoint interiors. The density of a packing $P$ is defined as
$$d(P)=\lim_{N\to\infty}\frac{\operatorname{vol}(P\cap C_N)}{\operatorname{vol}(C_N)},$$
where $\operatorname{vol}$ is the volume and $C_N$ is the square $[0,N]^2$, when the limit exists.
The present trick shows that if $P_N$ is a packing contained in $C_N$ of maximal volume, then $d=\lim_{N\to\infty}d_N$ exists, where $d_N=\operatorname{vol}(P_N)/N^2$.
First, since $d_N\le 1$, the superior limit $L=\limsup_N d_N$ exists. We want to prove that the inferior limit equals the superior one. Given any $\varepsilon>0$, there is an $N_0$ such that $d_{N_0}>L-\varepsilon$. Now for any integer $k$ the square $C_{kN_0}$ can be divided into $k^2$
translates of $C_{N_0}$. Each of these contains a packing of density greater than $L-\varepsilon$, so that $d_{kN_0}>L-\varepsilon$ for all $k$. This rewrites as $\operatorname{vol}(P_{kN_0})>(L-\varepsilon)(kN_0)^2$. By maximality, $\operatorname{vol}(P_N)$ is non-decreasing in $N$; for all $N$ one can introduce $k=\lfloor N/N_0\rfloor$ and get $\operatorname{vol}(P_N)\ge\operatorname{vol}(P_{kN_0})>(L-\varepsilon)(kN_0)^2$. It follows that $d_N>(L-\varepsilon)(kN_0/N)^2$, and since $kN_0/N\to 1$,
as soon as $N$ is large enough, $d_N>L-2\varepsilon$. As a consequence, $\liminf_N d_N\ge L$ and we are done.
Example 2: Weyl's inequality and polynomial equidistribution
This example is taken from a mathoverflow question and answer.
Let $P$ be a polynomial with real coefficients, whose leading coefficient $\alpha$ is irrational. Let
$$S_N=\sum_{n=1}^{N}e^{2\pi i P(n)}.$$
Weyl's Equidistribution theorem for polynomials is equivalent to the claim that $S_N/N\to 0$ as $N\to\infty$. Though it is not the easiest way to prove this, let us deduce this theorem from the following Weyl's inequality.
Let $a/q$ be a rational number in lowest terms with $|\alpha-a/q|\le 1/q^2$. Weyl's Inequality is the bound:
$$|S_N|\le C_\varepsilon N^{1+\varepsilon}\left(\frac{1}{q}+\frac{1}{N}+\frac{q}{N^d}\right)^{2^{1-d}},$$
where $d$ is the degree of $P$.
If $N$ and $q$ are both large enough, and of the same order of magnitude, then the right-hand side gets small. The point is that the conditions on $q$ prevent one from applying this to arbitrary $N$. However,
Dirichlet's theorem tells us that arbitrarily high $q$ satisfy the needed condition, so that Weyl's inequality implies $\liminf_{N\to\infty}|S_N|/N=0$.
Now the trick comes into play: the right-hand side in Weyl's inequality does not depend on the lower-degree coefficients of $P$, but only on $q$, $N$ and $d$. It therefore gives a uniform bound simultaneously for the sum $S_N$ and the sums $S_N^t$ computed using
$P(n+t)$ instead of $P(n)$.
For all $\varepsilon>0$ there is thus an $N_0$ with $|S_{N_0}^t|<\varepsilon N_0$ for all $t$. Given $N$, introduce $K=\lfloor N/N_0\rfloor$ and split $S_N$ into $K$ sums of length $N_0$ (each of the form $S_{N_0}^t$) plus at most $N_0$ remaining terms; this yields $|S_N|\le K\varepsilon N_0+N_0\le\varepsilon N+N_0$, and hence $S_N/N\to 0$.
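The claim $S_N/N\to 0$ can be illustrated numerically, say for $P(n)=\sqrt{2}\,n^2$ (an illustration only; the choice of $P$ is ours, not the article's):

```python
import cmath
import math

# Weyl sum ratio |S_N| / N for P(n) = sqrt(2) * n^2; the ratio shrinks
# as N grows, illustrating equidistribution.
def weyl_ratio(N, alpha=math.sqrt(2)):
    s = sum(cmath.exp(2j * math.pi * alpha * n * n) for n in range(1, N + 1))
    return abs(s) / N

print([round(weyl_ratio(N), 3) for N in (50, 500, 5000)])
```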
{"url":"http://www.tricki.org/node/466/revisions/3770/view","timestamp":"2014-04-17T04:34:22Z","content_type":null,"content_length":"30117","record_id":"<urn:uuid:f404a98b-4d3b-4357-8ffd-223f12269117>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - NSolve output
Date: Jan 12, 2013 9:51 PM
Author: Berthold Hamburger
Subject: NSolve output
I want to solve two simple equations with NSolve to calculate the
dimensions of a rectangular box that is open on top. The length of the
box is twice the width and the volume and area of the box are
respectively 40 cubic feet and 60 square feet.
I defined the area and volume:
a = xz + 2xy + 2yz
v = xyz
then substituted x with 2z:
a2 = a /. x->2z
v2 = v /. x->2z
a2 = 6 y z + 2 z^2
v2 = 2 y z^2
solved with NSolve:
step1 = NSolve[{a2 == 60, v2 == 40}, {y, z}, Reals] [[1,2]] //Column
which returns the results
I would like to get an output that returns values for all 3 variables
x,y,z as in
but somehow cannot find a way to do it
What would be the correct way for doing that?
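One way to get values for all three variables is to carry the substitution back: either apply x -> 2z to the solutions, or add x == 2 z as a third equation, as in NSolve[{a == 60, v == 40, x == 2 z}, {x, y, z}, Reals]. The same numbers can be cross-checked outside Mathematica; a plain-Python sketch (an assumption: it mimics the numerics by bisection rather than calling NSolve):

```python
# After x -> 2z the system  6 y z + 2 z^2 == 60,  2 y z^2 == 40  reduces to
# y == 20 / z^2 together with the cubic z^3 - 30 z + 60 == 0.
def f(z):
    return z**3 - 30 * z + 60

lo, hi = 2.2, 3.0            # f changes sign on this bracket
for _ in range(80):          # plain bisection
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

z = (lo + hi) / 2
y = 20 / z**2
x = 2 * z                    # undo the substitution to report all three values
print(x, y, z)
```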
Berthold Hamburger | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8063580","timestamp":"2014-04-16T17:00:21Z","content_type":null,"content_length":"1854","record_id":"<urn:uuid:e409d5db-de8a-490f-96d4-68a6b15b3775>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thomaston, NY Algebra 2 Tutor
Find a Thomaston, NY Algebra 2 Tutor
...I have been playing guitar for almost 14 years. I took 6 years of private lessons to start, and then expanded upon that on my own. I have played in many bands, done many solo performances,
released two albums, and currently work in a recording studio where I am around guitars all day long!
22 Subjects: including algebra 2, calculus, geometry, trigonometry
...As a tutor, I feel it is my job to fill in those gaps so the student can learn effectively. I have experience tutoring in a broad subject range, from Algebra through college level Calculus. I
recently passed, and am proficient in, the material on both Exams P/1 and FM/2. I am able to tutor for the Praxis for Mathematics Content Knowledge.
21 Subjects: including algebra 2, calculus, geometry, statistics
...Right now my schedule is pretty flexible consisting mostly of reviewing my own material, and other school related tasks. I have experience with tutoring biology and chemistry. I tutor by
figuring out the weaknesses in study technique and study material at the first session then decide at the end of each session what we will do the next session.
10 Subjects: including algebra 2, chemistry, biology, algebra 1
...Not only teaching them something about religion, but it's about making them know that of course they belong to something, but also letting these children feel appreciated. It's always nice to
involve children in major events at the temple, because for me, it's a great message to them as well as ...
30 Subjects: including algebra 2, reading, English, algebra 1
...We also had another two semesters of a course called abstract algebra. Vector Spaces can be studied in a more general sense in abstract algebra as well. This area of math is concerned with
abstract mathematical structures such as groups, rings, and fields.
11 Subjects: including algebra 2, calculus, physics, geometry
Related Thomaston, NY Tutors
Thomaston, NY Accounting Tutors
Thomaston, NY ACT Tutors
Thomaston, NY Algebra Tutors
Thomaston, NY Algebra 2 Tutors
Thomaston, NY Calculus Tutors
Thomaston, NY Geometry Tutors
Thomaston, NY Math Tutors
Thomaston, NY Prealgebra Tutors
Thomaston, NY Precalculus Tutors
Thomaston, NY SAT Tutors
Thomaston, NY SAT Math Tutors
Thomaston, NY Science Tutors
Thomaston, NY Statistics Tutors
Thomaston, NY Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Albertson, NY algebra 2 Tutors
Baxter Estates, NY algebra 2 Tutors
Great Nck Plz, NY algebra 2 Tutors
Great Neck algebra 2 Tutors
Great Neck Estates, NY algebra 2 Tutors
Great Neck Plaza, NY algebra 2 Tutors
Kensington, NY algebra 2 Tutors
Kings Point, NY algebra 2 Tutors
Little Neck algebra 2 Tutors
Manhasset algebra 2 Tutors
Plandome, NY algebra 2 Tutors
Roslyn Estates, NY algebra 2 Tutors
Roslyn Harbor, NY algebra 2 Tutors
Russell Gardens, NY algebra 2 Tutors
University Gardens, NY algebra 2 Tutors | {"url":"http://www.purplemath.com/Thomaston_NY_Algebra_2_tutors.php","timestamp":"2014-04-21T14:49:22Z","content_type":null,"content_length":"24445","record_id":"<urn:uuid:dab8fdcf-de40-40e1-bc68-35ef1e1f90cd>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00439-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematics for Informatics and Computer Science
Author: Pierre Audibert
Publisher: Wiley-Blackwell, 2010
Aimed at: Playful programmers
Rating: 4
Pros: A real world approach, makes combinatorics fun
Cons: If you want a pure math approach you may disapprove
Reviewed by: Mike James
With its coverage confined to matters combinatoric and the related probability theory, does this book's novel approach work?
There isn't much scope for new books on the mathematics of computer science - or so you might have thought. In fact there are few coherent accounts to choose from and they all have
their particular emphasis that makes them incomplete. So it is with this relatively new work. It has just short of 1,000 pages, yet its coverage is confined to matters combinatoric
and the related probability theory. As long as this is what you are looking for it is a worthwhile addition to your bookshelf.
It is divided into three parts - Combinatorics, Probability and Graphs - prefaced with a nice historical introduction to the sorts of problems and solutions that led up to the
modern maths of algorithms. Then we get started on combinatorics proper. It really does start off from the very beginnings of the subject with a look at Pascal's triangle and the
combinations that are related to it. From here the emphasis falls on combinatorics that arise naturally from the considerations of algorithms. Chapter 3 deals with enumerations in
alphabetical order, then Chapter 4 deals with enumerations that result from tree structures. Chapter 5 looks at generating functions and recurrences. Chapter 6 is about routes in a
square grid and Chapter 7 deals with arrangements and combinations with repetition including anagrams and lots of detailed algorithmic applications. Then we have sieve formulae and
two chapters on "mountain ranges" - arrangements of line segments corresponding to up/down - which is the bracket nesting problem in thin disguise. Then we have some example
applications, Burnside's formula, matrices and circulations on a graph, parts and partitions of a set, partitions of a number, flags, walls with stacks, tiling and permutations.
This brings us to the end of the first part and, as you can tell from the number of topics, each chapter is quite short.
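Incidentally, the "mountain ranges" of those two chapters are counted by the Catalan numbers, which is why the bracket-nesting connection is so direct; a quick brute-force check (an aside, not from the book):

```python
from math import comb
from itertools import product

def mountain_ranges(n):
    # Count paths of n up-strokes and n down-strokes that never dip below
    # the baseline -- equivalently, n properly nested bracket pairs.
    count = 0
    for steps in product((1, -1), repeat=2 * n):
        height, ok = 0, True
        for s in steps:
            height += s
            if height < 0:
                ok = False
                break
        if ok and height == 0:
            count += 1
    return count

# Matches the Catalan numbers C(2n, n) / (n + 1): 1, 2, 5, 14, 42, ...
assert [mountain_ranges(n) for n in range(1, 6)] == \
       [comb(2 * n, n) // (n + 1) for n in range(1, 6)]
```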
Part 2 is about probability and it starts off with a revision on discrete probabilities and then a chapter on implementing some basic probability processes on a computer. Chapter 22
introduces the idea of a continuous distribution but the main focus of the book is on discrete distributions so this is no more than an aside. Chapter 23 deals with generating
functions, then we have graphs and matrices, repeated games of heads-or-tails, random routes on a graph, repetitive draws until a condition is satisfied, and some exercises. This is
not a book on the theory of probability or advanced methods such as Monte Carlo simulation or Markov chains. It's mostly the application of basic ideas and the presentation of
The final part of the book is about graph theory. Chapter 29 introduces the basic ideas. Then we have a look at different types of exploration of a graph, then trees with numbered
nodes, binary trees, weighted graphs including shortest path and minimum spanning tree - all classic computer science and algorithms. Chapter 34 is about Eulerian paths, then we
have the enumeration of spanning trees, enumeration of Eulerian paths and finally Hamiltonian Paths. The book ends with two appendixes on linear algebra and determinants.
This is not a deeply theoretical book. You won't find any discussion of measure theory or NP hard problems. In the main it takes very simple algebraic methods and applies them
relentlessly to problems expressed in real world terms. For example, instead of repeated Bernoulli trials the topic is called "repeated games of heads-or-tails". In many ways the
book reads more like a title on recreational mathematics than pure math. As a result there are some with a leaning to pure math who are going to see the book as unnecessary and
naive in its approach and certainly if you prefer stochastic differential equations and Markov chain Monte Carlo then the chances that you will approve of this book are low.
On the other hand if you are looking for an approach to combinatorics that is rooted in applications and with lots of exercises then this is the book for you. Yes, dare I say it,
it's fun.
Last Updated ( Friday, 21 January 2011 )
Copyright © 2014 i-programmer.info. All Rights Reserved. | {"url":"http://i-programmer.info/bookreviews/65-mathematics/1854-mathematics-for-informatics-and-computer-science.html","timestamp":"2014-04-19T18:16:01Z","content_type":null,"content_length":"38734","record_id":"<urn:uuid:5a87424c-27d1-4229-824c-ee7f88be049c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathFiction: The Lions in the Desert (David Langford)
a list compiled by Alex Kasman (College of Charleston)
Home All New Browse Search About
The Lions in the Desert (1993)
David Langford
(click on names to see more mathematical fiction by the same author)
Note: This work of mathematical fiction is recommended by Alex for hardcore fans of science fiction.
Two men are hired to guard a mysterious treasure. One of them is a math grad student, and so their discussions to pass the time take on a mathematical flavor. Of particular interest are the
references to the old "how to catch a lion" jokes that have become standard fare in mathematical humor. (See, for example, here and here.) The story concludes with additional mathematical references,
to the idea of reducing a new problem to one that has already been solve and to the mathematical notion of an existence proof. (An "existence proof" is one that demonstrates that something exists
without actually presenting it or even a method for obtaining it. A real example is that the existence of a well-ordering on the real numbers is a consequence of the Axiom of Choice, but nobody has
actually been able to produce one.) However, unlike the humorous references to the lions, these latter story element take a sharp turn into the horrific, which is why this story was reprinted in
Year's Best Horror Stories XXII. (It originally appeared in Weerde II, a collection of stories all involving the idea of shape-shifters.)
Thanks to Vijay Fafat for discovering this wonderful little story and bringing it to our attention.
Jean-Pierre Zurru has written to let me know that an early source of the mathematical lion hunting method jokes (at least in English) is the article The mathematical theory of big game hunting
written pseudonymously by "Hector Petard of Princeton University" and published in the American Mathematical Monthly in the August-September issue of 1938. There seem to be a few copies available for
free on the internet, such as this PDF.
Ratings for The Lions in the Desert:
Mathematical Content: 3/5 (1 vote)
Literary Quality: 3/5 (1 vote)
Genre: Fantasy, Horror
Medium: Short Stories
(Maintained by Alex Kasman, College of Charleston)
Physics Forums - View Single Post - Irrational Number Phenomenon
Anyone ever question why we haven't been able to establish a clean-cut division system that overrides this phenomenon?
We have lots of ways to notate numbers. "2/3", for example, is a perfectly good notation for the number you get when you divide 2 by 3.
Decimal notation for real numbers is taught because:
• It's simple
• It fits well with decimal notation for integers
• It's very easy to trade precision for simplicity. (e.g. just write the first few digits)
Most people don't have any reason to learn notations other than a mix of algebraic expressions with decimals.
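The trade-off between exact fractional notation and truncated decimals can be made concrete with a quick Python illustration (my example, not from the post):

```python
from fractions import Fraction

exact = Fraction(2, 3)            # "2/3" names the number exactly
assert exact * 3 == 2             # rational arithmetic has no rounding error

approx = round(float(exact), 3)   # decimal notation: keep the first few digits
assert approx == 0.667            # simpler to write, but only approximate
```

Dropping to three digits loses exactness but gains the simplicity the post describes.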
What is transversity and transverse spin?
I am reading through an article for a paper and ran across the term transversity. I did some google searching but didn't come up with anything I could understand. I think it might be linked with transverse spin but I am
also having trouble getting clarification on what that is. I know what spin is, and I'm wondering if it's just spin along a latitude instead of a longitude.
My ultimate goal is to be able to explain what these guys are doing on the COMPASS project. I understand their aim is to try to solve the "proton spin crisis" and I have a pretty good understanding
of what that is from some other articles I've read. So if you happen to have knowledge of that experiment as well as transversity and transverse spin, I'd really appreciate it.
Modular curve
From Encyclopedia of Mathematics
A complete algebraic curve modular group
(see Modular group). The least such [4], Vol. 2, [2]). The genus of
[2]). A holomorphic differential form on a modular curve zeta-function of a modular curve is a product of the Mellin transforms (cf. Mellin transform) of modular forms and, consequently, has a
meromorphic continuation and satisfies a functional equation. This fact serves as the point of departure for the Langlands–Weil theory on the relationship between modular forms and Dirichlet series
(see [7], [8]). In particular, there is a hypothesis that each elliptic curve over [1]).
A modular curve parametrizes a family of elliptic curves, being their moduli variety (see [7], Vol. 2). In particular, for
Over each modular curve [3], [5]). The zeta- functions of [3], [7]).
The rational points on a modular curve correspond to elliptic curves having rational points of finite order (or rational subgroups of points); their description (see [6]) made it possible to solve
the problem of determining the possible torsion subgroups of elliptic curves over
The investigation of the geometry and arithmetic of modular curves is based on the use of groups of automorphisms of the projective limit of the curves Modular form, [3]).
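The inline formulas of this entry were lost in extraction and are left as gaps above. For orientation only, the standard definition of a modular curve (taken from the general literature, not recovered from this page) is:

```latex
X_\Gamma \;=\; \Gamma \backslash \mathfrak{H}^{*}, \qquad
\mathfrak{H}^{*} \;=\; \mathfrak{H} \cup \mathbb{Q} \cup \{\infty\}, \qquad
\mathfrak{H} \;=\; \{\, z \in \mathbb{C} : \operatorname{Im} z > 0 \,\},
```

where $\Gamma$ is a subgroup of finite index in the modular group $\mathrm{PSL}_2(\mathbb{Z})$ acting on the upper half-plane by fractional-linear transformations $z \mapsto (az+b)/(cz+d)$; the points of $\mathbb{Q} \cup \{\infty\}$ adjoined before taking the quotient are the cusps (parabolic points).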
[1] Yu.I. Manin, "Parabolic points and zeta-functions of modular curves" Math. USSR Izv. , 6 : 1 (1972) pp. 19–64 Izv. Akad. Nauk SSSR Ser. Mat. , 36 : 1 (1972) pp. 19–66
[2] G. Shimura, "Introduction to the arithmetic theory of automorphic functions" , Math. Soc. Japan (1971)
[3] V.V. [V.V. Shokurov] Šokurov, "Holomorphic differential forms of higher degree on Kuga's modular varieties" Math. USSR Sb. , 30 : 1 (1976) pp. 119–142 Mat. Sb. , 101 : 1 (1976) pp. 131–157
[4] F. Klein, R. Fricke, "Vorlesungen über die Theorie der elliptischen Modulfunktionen" , 1–2 , Teubner (1890–1892)
[5] M. Kuga, G. Shimura, "On the zeta function of a fibre variety whose fibres are abelian varieties" Ann. of Math. , 82 (1965) pp. 478–539
[6] B. Mazur, J.-P. Serre, "Points rationnels des courbes modulaires" Sem. Bourbaki 1974/1975 , Lect. notes in math. , 514 , Springer (1976) pp. 238–255
[7] J.-P. Serre (ed.) P. Deligne (ed.) W. Kuyk (ed.) , Modular functions of one variable. 1–6 , Lect. notes in math. , 320; 349; 350; 476; 601; 627 , Springer (1973–1977)
[8] A. Weil, "Ueber die Bestimmung Dirichletscher Reihen durch Funktionalgleichungen" Math. Ann. , 168 (1967) pp. 149–156
[a1] N.M. Katz, B. Mazur, "Arithmetic moduli of elliptic curves" , Princeton Univ. Press (1985)
How to Cite This Entry:
Modular curve. A.A. Panchishkin, A.N. Parshin (originators), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Modular_curve&oldid=13202
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
The module provides k of n encoding - a way to generate (n - k) secondary blocks of data from k primary blocks such that any k blocks (primary or secondary) are sufficient to regenerate all blocks.
All blocks must be the same length and you need to keep track of which blocks you have in order to tell decode. By convention, the blocks are numbered 0..(n - 1) and blocks numbered < k are the
primary blocks.
:: Int        -- the number of primary blocks
-> Int        -- the total number of blocks; must be < 256
-> FECParams
Return a FEC with the given parameters.
:: FECParams
-> [ByteString]   -- a list of k input blocks
-> [ByteString]   -- the (n - k) output blocks
Generate the secondary blocks from a list of the primary blocks. The primary blocks must be in order and all of the same size. There must be k primary blocks.
:: FECParams
-> [(Int, ByteString)]   -- a list of k blocks and their indices
-> [ByteString]          -- a list of the k primary blocks
Recover the primary blocks from a list of k blocks. Each block must be tagged with its number (see the module comments about block numbering).
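The encode/decode pair above can be illustrated with a toy systematic code over the prime field GF(257). This is my sketch of the general Reed-Solomon-style idea behind k-of-n erasure coding, not the module's actual algorithm (real libraries of this kind typically work over GF(2^8)); block numbering follows the convention above that indices < k are primary:

```python
P = 257  # prime > 255, so every byte value 0..255 is a field element

def interp(points, x):
    """Value at x of the unique degree-(k-1) polynomial over GF(P)
    passing through the k given (xi, yi) points (Lagrange form)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def encode(primary, n):
    """Given k primary blocks (lists of ints < 256), return the n - k
    secondary blocks, numbered k..n-1."""
    k, size = len(primary), len(primary[0])
    return [[interp([(i, primary[i][p]) for i in range(k)], x)
             for p in range(size)]
            for x in range(k, n)]

def decode(tagged, k):
    """Recover the k primary blocks from any k (index, block) pairs."""
    size = len(tagged[0][1])
    return [[interp([(x, blk[p]) for x, blk in tagged], i)
             for p in range(size)]
            for i in range(k)]

primary = [[10, 20], [30, 40], [50, 60]]   # k = 3 blocks of 2 symbols each
secondary = encode(primary, 5)             # blocks numbered 3 and 4
# Any 3 of the 5 blocks, tagged with their indices, recover the originals:
assert decode([(0, primary[0]), (3, secondary[0]), (4, secondary[1])], 3) == primary
```

Per symbol position, block index x is a point on a degree-(k-1) polynomial, and any k points determine such a polynomial uniquely; that is exactly why any k blocks suffice.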
Utility functions
:: Int        -- the number of parts requested
-> ByteString -- the data to be split
-> IO [ByteString]
Break a ByteString into n parts, equal in length to the original, such that all n are required to reconstruct the original, but having fewer than n parts reveals no information about the original.
This code works in the IO monad because it needs a source of random bytes, which it gets from /dev/urandom. If this file doesn't exist, an exception results.
Not terribly fast - probably best to do it with short inputs (e.g. an encryption key)
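The standard construction behind this kind of all-or-nothing split is one-time-pad style XOR sharing: n - 1 blocks of random bytes, plus one block that is the data XORed with all the pads. The sketch below (my Python illustration, not necessarily the module's exact code) shows why any proper subset of the parts is uniformly random:

```python
import os

def secure_divide(n, data):
    """Split data into n equal-length parts: n - 1 random pads plus the
    XOR of data with all the pads.  All n parts XOR back to data; any
    proper subset of parts is uniformly random, revealing nothing."""
    pads = [os.urandom(len(data)) for _ in range(n - 1)]
    last = bytes(data)
    for pad in pads:
        last = bytes(a ^ b for a, b in zip(last, pad))
    return pads + [last]

def secure_combine(parts):
    """XOR all parts together to recover the original data."""
    out = bytes(len(parts[0]))
    for part in parts:
        out = bytes(a ^ b for a, b in zip(out, part))
    return out

parts = secure_divide(3, b"an encryption key")
assert secure_combine(parts) == b"an encryption key"
```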
:: Int          -- the number of blocks required to reconstruct
-> Int          -- the total number of blocks
-> ByteString   -- the data to divide
-> [ByteString] -- the resulting blocks
A utility function which takes an arbitrary input and FEC encodes it into a number of blocks. The order of the resulting blocks doesn't matter so long as you have enough to present to deFEC.
:: Int          -- the number of blocks required (matches call to enFEC)
-> Int          -- the total number of blocks (matches call to enFEC)
-> [ByteString] -- a list of k, or more, blocks from enFEC
-> ByteString
Reverses the operation of enFEC.
Prospect Heights Prealgebra Tutor
...I have also taught writing (as well as other subjects) to middle school students. I tutor students from first grade through high school. I am a certified special education teacher endorsed to
teach high school English.
33 Subjects: including prealgebra, reading, English, writing
...I was a student of standardized tests, achieving the highest possible score on the math section of the ACT and SAT, as well as high scores on the equivalent sections of the GRE (for graduate
school acceptance). As an undergraduate engineering student, I continued to apply advanced math and logic ...
12 Subjects: including prealgebra, statistics, geometry, algebra 1
...I am open to travelling to any location convenient for my student as long as we both agree it is convenient for studying. I have tutored in public libraries in the past. I look forward
to your email to discuss an opportunity to help you achieve the success you desire and deserve. I have experience teaching Algebra 1 to groups of students and on a one-on-one basis for over 10 years.
9 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...I give students additional practice problems to ensure that they understand the concepts, and I have the student explain the concepts back to me. I also help students prepare for the ACT. I
provide my own materials, including actual ACT exams from previous years.
21 Subjects: including prealgebra, reading, study skills, algebra 1
...Thanks Lewis." (Masters, Time-Series Forecasting) If you would like any help with econometrics, I would be delighted to be of service. In our data-driven age, applications of linear algebra are
ubiquitous. Linear algebra applies math to such matrices as any Excel spreadsheet, and using the po...
57 Subjects: including prealgebra, chemistry, English, calculus
Warrenville, IL Calculus Tutor
Find a Warrenville, IL Calculus Tutor
...As a physicist I work everyday with math and science, and I have a long experience in teaching and tutoring at all levels (university, high school, middle and elementary school). My son (a 5th
grader) scores above 99 percentile in all math tests, and you too can have high scores.My PhD in Physics...
23 Subjects: including calculus, physics, statistics, geometry
...While working there I would help the children with their homework and play games with them. I have coached volleyball for 5th-7th graders. I also have a range of younger cousins that I enjoy
spending quality time with.
19 Subjects: including calculus, reading, geometry, algebra 1
...I taught trigonometry and algebra 2 to high school juniors in the far north suburbs of Chicago for the past two years. I am currently attending DePaul University to pursue my master's degree
in applied statistics. I have tutored students of varying levels and ages for more than six years.
19 Subjects: including calculus, geometry, statistics, algebra 1
...Since then I earned a bachelor's degree in Greek from Pillsbury Baptist Bible College in Minnesota. I also took some graduate Greek courses during my master's in Bible at Bob Jones University
in South Carolina. I have taught C++ at the college level.
25 Subjects: including calculus, writing, GRE, geometry
...Number Lines and Signed Numbers 5. Fractions and Decimals 6. Operations with Fractions 7.
17 Subjects: including calculus, reading, geometry, statistics
Related Warrenville, IL Tutors
Warrenville, IL Accounting Tutors
Warrenville, IL ACT Tutors
Warrenville, IL Algebra Tutors
Warrenville, IL Algebra 2 Tutors
Warrenville, IL Calculus Tutors
Warrenville, IL Geometry Tutors
Warrenville, IL Math Tutors
Warrenville, IL Prealgebra Tutors
Warrenville, IL Precalculus Tutors
Warrenville, IL SAT Tutors
Warrenville, IL SAT Math Tutors
Warrenville, IL Science Tutors
Warrenville, IL Statistics Tutors
Warrenville, IL Trigonometry Tutors
East Lake, CO Science Tutor
Find an East Lake, CO Science Tutor
...I have successfully helped many students accomplish their goals in this class. Through this exposure to chemistry, I am a qualified tutor on this subject. As a biology graduate, I have taken
and passed general college physics 1 and 2, both with an A.
7 Subjects: including physics, biology, chemistry, nutrition
...I would be delighted to help you or your child master these important design programs. I have taught many of the general applications, such as the Microsoft products, to adult learners. I also
teach high school students. I am very good at breaking down ideas and making them simple, and I have a great deal of patience.
12 Subjects: including sociology, psychology, reading, Microsoft Word
...I have strong knowledge of Zoology with 10+ years of experience in teaching. I am eager to tutor high school and university students. Please contact me for further details.
6 Subjects: including biology, biochemistry, physiology, zoology
...I took geometry in middle school and pursued my interest in Mathematics through Math Counts, an extracurricular math program. I took a specific mathematics course in Linear Algebra &
Differential Equations. Differential equations were also used extensively in my chemical engineering courses.
18 Subjects: including chemical engineering, organic chemistry, chemistry, calculus
...Let's talk! I will help you find your comfort zone and the learning style that best fits you! I have a master's degree in education and special education.
42 Subjects: including anthropology, reading, sign language, ESL/ESOL
Related East Lake, CO Tutors
East Lake, CO Accounting Tutors
East Lake, CO ACT Tutors
East Lake, CO Algebra Tutors
East Lake, CO Algebra 2 Tutors
East Lake, CO Calculus Tutors
East Lake, CO Geometry Tutors
East Lake, CO Math Tutors
East Lake, CO Prealgebra Tutors
East Lake, CO Precalculus Tutors
East Lake, CO SAT Tutors
East Lake, CO SAT Math Tutors
East Lake, CO Science Tutors
East Lake, CO Statistics Tutors
East Lake, CO Trigonometry Tutors
Nearby Cities With Science Tutor
Bow Mar, CO Science Tutors
Columbine Valley, CO Science Tutors
Commerce City Science Tutors
Dacono Science Tutors
Eastlake, CO Science Tutors
Edgewater, CO Science Tutors
Erie, CO Science Tutors
Federal Heights, CO Science Tutors
Firestone Science Tutors
Henderson, CO Science Tutors
Lafayette, CO Science Tutors
Lakeside, CO Science Tutors
Northglenn, CO Science Tutors
Thornton, CO Science Tutors
Westminster, CO Science Tutors
PA/LCM Home Page
We no longer look upon a deductive system as one that establishes the truth of its theorems by deducing them from 'axioms' whose truth is quite certain (or self-evident, or beyond doubt); rather, we
consider a deductive system as one that allows us to argue its various assumptions rationally and critically, by systematically working out their consequences. Deduction is not used merely for
purposes of proving conclusions; rather, it is used as an instrument of rational criticism. (Karl Popper, Realism and the Aim of Science, Part I, Chapter 4.)
What's PA/LCM
• PA/LCM stands for Proof Animation and Limit Computable Mathematics
The aim is to test formalizations of proofs by experiments (animation) via Gold's limiting recursive functions, making proof development on computers easier, friendlier and less costly. Nonetheless,
it raises surprisingly many interesting theoretical questions and bridges various areas of computer science and mathematics. It is also expected to serve as a tool for computer-aided instruction
(CAI) in mathematics, teaching students that "proofs are graphical and dynamic beings, and fun!".
PA: Proof Animation
Formal proofs have traditionally been subject to formal inference rules; indeed, that is the very definition of a formal proof. Proof Animation changes this tradition. We propose to utilize the
Curry-Howard isomorphism to test formal proofs, which may be unfinished, incomplete proofs. It is used to find bugs in proofs under development, and even in proofs that have been fully checked. In the
latter case, bugs may exist in the formalization itself (the definitions and theorems), even if every application of an inference rule is checked to be correct. (Such bugs have been reported in the
literature on formal methods, especially hardware verification.) Since this resembles the animation of specifications in formal methods, we call it "Proof Animation."
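As a toy illustration of the idea (my example, not from the project's materials): under the Curry-Howard correspondence, a constructive proof of "for all x there exists y with y > x" yields a program producing witnesses, and "animating" the proof means running that program on sample inputs and checking the proved property; a failing check points at a bug in the proof or in its formalization.

```python
def witness(x):
    """Program extracted (Curry-Howard) from an alleged constructive proof
    of: for every integer x there exists y with y > x."""
    return x + 1   # a buggy proof might effectively return x instead

# "Animating" the proof: execute the extracted program on sample inputs
# and check the claimed property on each one.
for x in [0, 7, -3]:
    assert witness(x) > x, f"counterexample at x = {x}"
```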
LCM: Limit Computable Mathematics
When the project was started, the main difficulty of PA was that we did not know how to animate classical proofs. After examining various theories, we finally arrived at the following conclusions:
• It is plausible that there is no such method for full classical proofs.
• An idea from Gold's learning theory serves as a good tool of animation for a fragment of classical mathematics, which we call Limit Computable Mathematics.
• A majority of proofs in concrete and practical mathematics need only LCM.
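A minimal sketch of the underlying Gold-style idea (my illustration, not the project's code): a statement of the form "there exists k with P(k)" is decided *in the limit* by a guessing procedure whose answers may change finitely often but eventually stabilize on the truth value, i.e. a limiting recursive decision rather than a recursive one.

```python
def guess_exists(P, stage):
    """Stage-n guess for the statement 'there exists k with P(k)':
    answer True iff a witness k <= n has appeared so far.  The guesses
    may flip, but only finitely often, and they converge to the truth
    value -- limiting recursive, not recursive."""
    return any(P(k) for k in range(stage + 1))

# Mind changes for P(k) := (k == 3): the guess sequence stabilizes.
guesses = [guess_exists(lambda k: k == 3, n) for n in range(6)]
assert guesses == [False, False, False, True, True, True]
```

No single stage is guaranteed to give the final answer, which is exactly the sense in which limiting computation goes beyond ordinary computation.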
The idea of LCM unexpectedly came from a study of the history of mathematics. An origin of modern abstract algebra, Hilbert's finite basis theorem of the 1890's, looked similar to an example of proof
execution by S. Berardi. A. Yamamoto pointed out that both of them are the same as limiting recursion in learning theory. Then, Hayashi conceived the idea. After the basic theory of LCM was
started, it turned out to be related to various areas of computer science, mathematics and logic. See Hayashi S. and Nakata M.: Towards Limit Computable Mathematics below for more details.
I tried to put all of my thoughts on LCM obtained so far into it. However, some ideas remained too immature to be spelled out in definite words and so were not included. Among them are the "connection
of LCM to methods in applied mathematics in the traditional sense, such as numerical analysis", "computation as an interaction between machine and human being (motto: machines compute and human
beings decide when the machines stop)", and relations to category theory.
The ultimate goal
As UML is emerging as a standard language for OO methods, formal or semi-formal languages and methods are now becoming practical and very important. I guess that, in the near future, formal languages
will become a very important factor in society. Then it becomes very important to understand how, and how far, we can describe real worlds by such languages. We must understand the gaps between
formal entities and informal entities, and how we can relate them. Goedel's incompleteness theorem is a mathematical theorem of this kind. However, what is really important is to understand the gaps
from a practical point of view. We cannot make a spacecraft traveling beyond the speed of light, by Einstein's theory. Incompleteness by Goedel's theorem is a limitation of a similar kind. But the
real difficulty of building spacecraft is not that limitation, since our spacecraft are much slower and human factors are much more important. Spacecraft must carry us! Goedel's theorem gives us an
undecidable proposition. However, the real difficulty of formalizing systems lies elsewhere. We have just started to understand the gaps from a realistic point of view.
The ultimate goal of my research is to understand how far, and how, we can describe the world by man-made languages in a practical sense. This is a deep question of the kind philosophers ask. Kripke's
"Wittgenstein's paradox" is an example of such a question asked by philosophers. Computer scientists and logicians may attack the problem from more practical sides, and I believe that, without such
practice-oriented research, we will never understand the real nature of the problem. PA is an example of a general framework bridging formal entities and informal entities. Specification animation,
algorithm (program) animation, visualization, modelling, simulation, verification, validation, conformance tests, etc. are technologies of similar kinds. LCM not only realizes PA but may
hopefully also bridge the formalization of discrete systems by formal languages in our sense and the more traditional "formalization" of continuous systems by the more traditional "formal language"
of calculus and classical physics, such as differential equations.
What's going on
Here I list our activities, to publicize the project and share information among project members.
• 2001
□ Some papers were written and published.
• 2002
□ Eric Martin visits Akihiro Yamamoto and attends the LLLL Workshop in Sapporo with him.
1. First meeting with the learning theory circle, in collaboration with Ken Satoh's Kaken-hi project.
2. RichProlog is expected to be an implementation language for LCM. A way of implementing LCM which we did not expect might be possible with RichProlog.
□ Mini workshop with John Case in Kyoto.
1. John Case, Akama, Arimura (left to right), Hayashi had discussions on LCM and Learning theory.
□ Hayashi and Yasugi visit Kohlenbach at BRICS, Aarhus University, Denmark.
1. Calibration of classical principles.
2. Various discussions on proof theoretic sides of LCM and related subjects, mainly the ones done by Kohlenbach.
□ Brain Storming Workshop at Kobe university (March 4,5,6) schedule for project members (in Japanese)
□ S. Berardi visited Kyoto and Kobe in July.
□ H. Barendregt visited Kobe in August.
□ F. Stephan and E. Martin invited by Yamamoto's project are scheduled to visit Kobe in October.
□ U. Kohlenbach invited by Yasugi's project is scheduled to visit Kyoto in December.
□ The second LLLL Workshop organized as a part of SIGFAI of Japanese Society of Artificail Intelligence will be held in Hakata in December. http://www.i.kyushu-u.ac.jp/~arim/sigfai02/
• 2003
□ Vasco Brattka visited Kyoto.
□ Eric Martin visited Yamamoto at Kyoto University.
□ The works by Brattka and Martin are both on Borel hierarchy. Borel hierarchy seems to correspond to LCM.
□ I was rather lazy about maintaining this page...
• 2004
□ The LCM arithmetical hierarchy was finally worked out!!
□ Toftdal took the first significant step in the practice of calibration theory in his master's thesis work.
□ Thierry Coquand, Stefano Berardi, and Hayashi jointly found a game semantics equivalent to LCM realizability. (The game semantics is defined only for prenex normal forms.) It turned out that
it is equivalent to the simple backtracking game considered by Coquand before his JSL paper of 1995. This must be a significant step for the theory of LCM and the practice of PA/LCM. A brief
description is found in Hayashi's paper for the ALT02 special issue of TCS.
□ A paper on LCM arithmetical hierarchy by Akama, Berardi, Hayashi, Kohlenbach was accepted by LICS 2004.
□ A paper on calibration theory by Toftdal was accepted by ICALP 2004.
□ A Kakenhi-grant for "Logic of PAC-Learning" was approved.
□ SMART project homepage started. The project is a practical UML-related project, but it is based on the same philosophy as LCM.
□ S. Berardi visited Japan in December. Slides of his talk at Kyoto university.
□ The general concept of the 1-backtracking game has been established by Berardi. His slides at the Types meeting in Paris.
• 2005
□ Berardi gave a talk on 1-backtracking semantics by Berardi, Coquand and Hayashi at Galop 05.
□ Hayashi gave an invited talk on LCM and games at TLCA '05.
• 2006
□ Berardi visited Tokyo and Kyoto in March. He is now working on a very natural game semantics of implication. It is fully first-order and applicable to intuitionistic as well as classical
mathematics.
Literature on LCM, PA and related subjects
1. Akama, Y., Berardi, S., Hayashi, S., Kohlenbach, U.: An arithmetical hierarchy of the law of excluded middle and related principles, LICS 2004: 192-201. pdf (caution: not the final version; there
are some small errors... I do not know if it is legal to put the final version here.)
2. Akama, Y.: Limiting Partial Combinatory Algebras Towards Infinitary Lambda-calculi and Classical Logic, to appear in Proc. of Computer Science Logic, Lecture Notes in Computer Science, Springer,
2001. A theory of limiting PCA's. Akama's limit construction is more elaborate than Hayashi-Nakata's, and a fine mathematical theory has been developed. Akama tries to make limit models a
semantics of concurrent processes. Available at http://www.math.tohoku.ac.jp/~akama/lcm/
3. Berardi S.: Classical Logic as Limit Completion I, a constructive model for non-recursive maps, Manuscript, Jan. 2001. Berardi introduced a new version of his approximation theory based on the
limit idea. It utilizes limits over directed sets. He tries to make his theory a semantics of concurrent processes. Available at http://www.di.unito.it/~stefano/
Berardi-ClassicalLogicAsLimit-I.rtf: updated on May 24, 2001, section 1 and appendix are still drafts.
4. Berardi,S, Coquand, Th., and Hayashi, S.: Games with 1-backtracking. GALOP 2005: 210-225
5. Brattka, V.: Effective Borel Measurability and Reducibility of Functions, a talk at Kyoto Sangyo University, October 2003, supported by the LCM project. This talk gives an interesting development
of discontinuous function theory in computable analysis. This would be a good target for calibration. A full paper version; the final version will appear in MLQ.
6. Hayashi S., Sumitomo R., and Shii K.: Towards Animation of Proofs -testing proofs by examples- ps , dvi :, Theoretical Computer Science, 272, pp.177--195. The position paper of PA; it discusses
the idea of PA. This is a paper written before the idea of LCM, but published after the idea of LCM. It took 3 years or so to be published! Thus, it is somewhat "old-fashioned", but it is still
the best paper for learning the idea of Proof Animation.
7. Hayashi S. and Sumitomo R.; Testing Proofs by Examples, in Advances in Computing Science, Asian '98 : 4th Asian Computing Science Conference, Manila, the Philippines, December 1998, J. Xiang and
A. Ohori, eds., Lecture notes in Computer Sciences No. 1538, pp.1-3, 1998, ps, pdf
8. Hayashi S. and Nakata M.: Towards Limit Computable Mathematics, in Types for Proofs and Programs, International Workshop, TYPES 2000, Durham, UK, December 2000, Selected Papers, P. Callaghan, Z.
Luo, J. McKinna, R. Pollack, eds., Springer Lecture Notes in Computer Science 2277, pp.125-144. The position paper of LCM; it contains plenty of open problems. Contents: the notions of PA and
LCM; illustrations of PA, constructive or non-constructive, by some examples; the shortage of existing theory of classical proof execution for PA; LCM as a good foundation; a formalization of
LCM by realizability and semi-classical principles; relationships to various fields of computer science, mathematics and logic, including recursive degrees, learning theory, computability of
discontinuous functions, the BSS theory of computational complexity over the reals, Hilbert invariant theory, and a kind of reverse mathematics, etc. Some practical issues, e.g., a good calculus
of limiting computable processes, and formal languages for proof checkers and proof executors for LCM.
9. Hayashi S. and Nakatogawa K.: Hilbert and Computation pdf (an early version of the first chapter of a paper in preparation). It was a handout for a talk at the Hilbert workshop, Jan. 2002, at Keio
University (html, pdf versions of PPT slides for the talk). Although the subject is the history of science, namely Hilbert's early conception of computation, the LCM research was initiated by this
investigation of history. The relationships between classical logic, learning theory, proof animation, and algebra were conceived through Hilbert's proof method used in his finite basis theorem,
which was accused of being "theology" by Paul Gordan. The history of science is a great source of ideas for our research, as old mathematical articles are sources of algorithms for the computer
algebra circle. Hilbert invariant theory is our main target of PA/LCM case study. The research mainly focuses on the pre-1920's. Since Mancosu and Zach have studied Hilbert's and his pupils' notions
of computation, e.g., Zach has shown that they were already beyond RCA_0 even in the early 1920's, Mancosu suggested that their research and ours are complementary to each other and would together
give a complete picture of Hilbert's notion of computation.
10. Hayashi S.: Limit Computable Mathematics and Interactive Computation, dvi, ps, pdf: an introduction to LCM and a paradigm of interactive computation (early draft).
11. Hayashi S.: Mathematics based on Learning, Algorithmic Learning Theory, LNAI 2533, Springer, pp.7-21: a largely introductory account of LCM for learning theory audiences. Addendum and Errata to
the paper, 2002.
12. Hayashi, S.:Mathematics based on Incremental Learning -Excluded middle and Inductive inference-, a preliminary version. submitted to TCS, 2003, revised in 2004.
13. Hayashi, S.: Can Proofs be Animated by Games?, in Typed Lambda Calculi and Applications: 7th International Conference, TLCA 2005, Nara, Japan, April 21-23, 2005, Proceedings, editor: P. Urzyczyn,
Springer LNCS No.3461, pp.11-22. pdf
14. Kohlenbach U.: Unpublished handwritten notes, commented and typeset by Mariko Yasugi, Nov. 2000. The existence of a hierarchy of semi-classical principles was proved for some constructive formal
systems of arithmetic, HA, HA^omega, etc... dvi, ps, pdf.
15. Nakata M. and Hayashi S.: A limiting first order realizability interpretation, Scientificae Mathematicae Japonicae http://www.jams.or.jp/scmjol/5.html, paper number 5-49: dvi, ps, pdf (caution:
these are not the final version). This paper introduces a realizability interpretation of a first order LCM-arithmetic by limiting computable functions and shows applications to proof animation.
It also discusses some examples in detail, with speculations.
16. Toftdal, M.: A Calibration of Ineffective Theorems of Analysis in a Hierarchy of Semi-Classical Logical Principles, ICALP 2004: 1188-1200.
17. Toftdal, M.: Calibration of Ineffective Theorems of Analysis in a Constructive Context, Master's Thesis at Department of Computer Science, University of Aarhus, 17 May 2004, ps, pdf
18. Yasugi M., Brattka V. and Washihara M.: Computability aspects of some discontinuous functions. Computability of discontinuous functions over the reals is considered from several points of view.
Among other things, they show that some typical discontinuous functions, such as the Gaussian function, are functions in the sense of LCM. Available at http://www.kyoto-su.ac.jp/~yasugi/recent.html
19. Yasugi M. and Washihara M.: A note on Rademacher functions and computability, Words, Languages and Combinatorics III, World Scientific, pp. 466-475. Computation by limit of the Rademacher function.
20. Yasugi M. and Tsujii Y.: Computability of a function with jumps - Effective uniformity and limiting recursion -, to appear in a special issue of "Topology and its Applications". Equivalence of limit computation and effective uniform space under certain conditions.
21. Yasugi M. and Tsujii Y.: Two notions of sequential computability of a function with jumps, Electronic Notes in Theoretical Computer Science 66, No. 1 (2002). Preliminary version of 20.
Invited talks and an award
Only invited talks at international meetings are listed; apologies for omitting domestic meetings.
Financial support
• 98: PA project started with a Kaken-hi grant from the Ministry of Education, 1998-2000. Research leader: Susumu Hayashi
• 01: PA/LCM project started with a Kaken-hi grant, 2001-2003. Research leader: Susumu Hayashi
• 03: Awarded financial support from the Okawa Foundation, 2003-2004. Research leader: Susumu Hayashi
• 04: PA/LCM (2004-2006) and SMART (2004-2005) projects supported by Kaken-hi grants. Research leader: Susumu Hayashi
• 04: Logic of PAC Learning project supported by a Kaken-hi grant, 2004-2006. Research leader: Susumu Hayashi
• 06: Proof animation by 1-games, supported by a Kaken-hi grant, 2006-2007. Research leader: Mariko Yasugi
Automatic measurement of the evolutionary process dynamics of primary biliary cirrhosis
PBC is a chronic inflammatory autoimmune disease [1] that evolves into cirrhosis via four stages. These are determined by the successive appearance in hepatic tissue of the following events: inflammation, destruction and regeneration of CK7+ biliary tissue, and fibrosis. This paper describes a method to study the dynamics of the histological behaviour of the disease [2].
Materials and methods
We studied 58 liver biopsy samples from Caucasian subjects with PBC in various stages, drawn from the archives of the Department of Pathological Anatomy of Istituto Clinico Humanitas IRCCS (Rozzano, Milan, Italy) and the Department of Gastroenterology of Milan's Ospedale Policlinico IRCCS. Each procedure was performed in accordance with the guidelines of the Ethics Committees of the hospitals involved, and diagnoses were defined by the expert pathologists of these same hospitals.
Three consecutive 2-3 µm thick sections were obtained from each sample: the first was used to identify inflammatory cells; the second was stained with Sirius red to visualize collagen deposits and
the third was used to visualize the biliary tree with CK7 antibody.
The histological metrization method [3] was developed in the Laboratory for the Study of Metric Methods in Medicine of the Istituto Clinico Humanitas (Rozzano, Milan, Italy). The prototype of this computerized device, called the Histologic Metrizer (HM), and its controlling software were entirely designed and developed in our laboratory. The Metrizer is a fully computer-driven machine that automates the focusing of the microscope and the exclusion of Glisson's capsule from the computation of fibrotic islets. It performs reproducible metric measurements on digitalized images of the entire histological section, giving results within a few minutes. The method measures all notable liver structures, stripped of any property not involved in the measurement.
Statistical analysis
The data are given as numbers and percentages, or as mean or median values and ranges, where appropriate. All variables were log-transformed in order to approximate a Gaussian distribution and normalized to a (0,1) range to allow for comparison between variables. Thus the final data are expressed as a percentage of the interval between 0 and 1. All of the analyses were performed with Stata 10 [http://www.stata.com].
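As an illustration of this preprocessing step, here is a minimal sketch (not the authors' actual Stata code; the function name is made up) of log-transforming positive measurements and min-max rescaling them into the unit interval:

```python
import math

def normalize_log(values):
    """Log-transform positive measurements, then min-max scale to [0, 1].

    A sketch of the preprocessing described above; ties (all values equal)
    and zero/negative inputs are deliberately not handled.
    """
    logs = [math.log(v) for v in values]
    lo, hi = min(logs), max(logs)
    return [(x - lo) / (hi - lo) for x in logs]
```

On a geometric sequence such as [1, 10, 100] this yields evenly spaced values 0, 0.5, 1, which is the point of the log transform before normalization.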
Results and discussion
Inflammation (Figure 1A) was identified by defining the borders around clusters of mononuclear cells (lymphocytes) with Delaunay's triangulation method [4].
Figure 1. Catalogue of the multifarious fragments of liver structures measured by HM. A: inflammation cell clusters; B: CK7; C: fibrosis elements. The black image is a Glisson membrane fragment excluded by the Metrizer.
This method identifies a line that connects the centers of the outermost cells that are ≤20 μm apart. This line is arbitrarily taken as the separator between the inflammatory cells within a cluster and the mononuclear cells dispersed in the surrounding hepatic tissue.
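The paper's cluster borders come from Delaunay triangulation; as a hedged simplification, single-linkage grouping with the same 20 μm cutoff produces comparable clusters. The function below is purely illustrative (names and structure are not from the paper):

```python
def cluster_cells(centers, max_dist=20.0):
    """Group cell centers into clusters: any two centers within `max_dist`
    micrometres belong to the same cluster (single linkage, via union-find)."""
    parent = list(range(len(centers)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Link every pair of centers closer than the cutoff (O(n^2) for clarity).
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            dx = centers[i][0] - centers[j][0]
            dy = centers[i][1] - centers[j][1]
            if dx * dx + dy * dy <= max_dist * max_dist:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(centers)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

For example, centers at (0, 0) and (10, 0) merge into one cluster, while a center at (100, 100) remains dispersed.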
The inflammatory basin, which increases constantly due to the autoimmune process, is the set of areas in a section that are covered by inflammatory cell clusters. The area of the basin covered by
cluster-resident cells is called the pure inflammatory space, which varies with the number (density) of cells present in the clusters.
The intra-hepatic biliary duct area (Figure 1B) was measured metrically. During the course of PBC, the status of the biliary duct tree is determined by two components: autoimmune destruction and regeneration of intrahepatic CK7+ ductular segments.
The collagen islets forming fibrosis (Figure 1C) were measured in linear meters; the result was corrected by the fractal dimension [5] to include details of the irregularity of their shapes. The fractal dimension was obtained by means of the box-counting method; because the objects to be measured were "truncated fractals" [6], the fractal dimension was used as a dilation factor rather than an exponent [7]. Three classes of islet magnitude were arbitrarily identified: area from 10 to 10^3 μm^2, from 10^3 to 10^4 μm^2, and over 10^4 μm^2.
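The box-counting estimate mentioned above can be sketched as follows for a binary image of an islet (an illustrative implementation, not the HM software):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting (fractal) dimension of a binary image.

    mask: 2-D boolean array marking the pixels of the measured object.
    Returns the slope of log N(s) versus log(1/s), where N(s) is the
    number of s-by-s boxes containing at least one object pixel.
    """
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then count occupied boxes.
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())
    sizes = np.asarray(sizes, dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```

As a sanity check, a filled square gives a dimension close to 2 and a straight line close to 1; irregular islet boundaries fall in between.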
The quantitative assessment of tissue disorder was performed with a Tectonic Index (TI), which describes the loss of tissue organization, i.e. any deviation from the natural order (a high TI indicates a high degree of tissue disorder). The TI defines the organization of liver tissues and is expressed with a range of values from 0 to 1. The TI was calculated as 1 - H, where H is Hurst's coefficient (range 0 to 1) [8], defined as D[γ] + 1 - D, where D is the fractal dimension and D[γ] the Euclidean dimension of the observed object; so TI = 1 - H = D - D[γ]. Numerical results of all of these parameters are summarized in Table 1.
Table 1. Summary of all of the metric data obtained from the structural measurements and the data used for staging purposes. Minimum, median, and maximum values of tissue parameters are given in % of
total histologic section area of biopsy specimen.
In order to approach the behavioural dynamics of PBC [9], the first key step was the logarithmic transformation and normalization into the interval (0,1) of the measures of CK7+ tissue and fibrosis, taken as the most representative structural elements with which to metrically construct the liver state portrait underlying the grading and staging of the semiquantitative subjective methodologies.
A second key step is the transformation of the portrait's scalars into a single vector, reducing the multiplicity of elements of the liver section to a dot-like geometrical figure. This translation into vectorial arithmetic, as in classical dynamics, is crucial for the construction of the dot-like geometrical operator, called the dynamic particola, that represents the whole system.
To obtain a method to study the system's behavioural dynamics [9], the immune CK7+ biliary ductules resulting from destruction-regeneration, together with intrahepatic fibrosis, were considered a set of phenomena that generates the irreversible tectonic disorder leading to cirrhosis. The values of the vectors representing irreversible parameters, resulting from the HM measurement, were taken to create the particola. The vectorial transformation is obtained by plotting the value of the CK7 area on the y axis and the value of the fibrosis areas on the x axis. The modulus of the resultant sum in this orthogonal space is taken as the Newtonian dot-like dynamic particola, which shows an instant of the PBC behaviour (Figure 2A). The set of points (each a particola) in the x-y orthogonal space resembles a Gibbs cloud of points, each expressing in log scale the magnitude of the vectorial value of a particola with a scalar. These scalars, ordered on a real number line (the simplest state space), graphically show the trajectory that is the reference for the dynamic behaviour of the PBC process (Figure 2B).
Figure 2. A: The set of dynamic particulae, resembling a Gibbs cloud of points, obtained by expressing as vector magnitudes the normalised log value of CK7+ areas and the normalised log value of fibrosis in our patients. B: The points are transferred to the oriented real number line, representing the standardised trajectory of the process. C: An ogival cumulative curve orders the dynamic particulae within the three clouds in panel A. This curve is sub-divided into three tertiles, marked by blue points, and represents the trajectory of the overall dynamic process of PBC from α to ω.
Furthermore, we plotted the value of each particula on the y axis (ranging from 0 to 1) versus the disease timing, identified with all particulae, on the x axis (Figure 2C). Next, we identified the tertiles on this curve that discriminate three phases; this term, used in our new method, is distinct from the 'stages' characterising semiquantitative static descriptions. As a result, the three phases are delimited by increasing amounts of particolae formed by fibrosis and CK7+ tissue. In particular, phases 1, 2, and 3 represent early, intermediate, and final disease, with mean particula values of 0.2075, 0.4722, and 0.7386, respectively.
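Numerically, the particola construction reduces to taking the modulus of the (fibrosis, CK7+) vector and ranking the resulting scalars into tertiles. A hedged sketch (names hypothetical; in the paper the tertile boundaries come from the cumulative curve rather than a plain rank split):

```python
import math

def particola(ck7, fibrosis):
    """Magnitude of the vector (fibrosis, ck7); both inputs already in [0, 1]."""
    return math.hypot(fibrosis, ck7)

def assign_phases(moduli):
    """Rank the particola values and split them into three tertiles (phases 1-3)."""
    order = sorted(range(len(moduli)), key=lambda i: moduli[i])
    n = len(moduli)
    phases = [0] * n
    for rank, i in enumerate(order):
        phases[i] = 1 + min(2, rank * 3 // n)  # bottom/middle/top third
    return phases
```

For example, `particola(0.3, 0.4)` is 0.5, and six ordered moduli split evenly into phases 1, 1, 2, 2, 3, 3.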
Inflammation was excluded from the constitutive components of the trajectory describing the course of the PBC process because of its reversibility. This different behaviour is due to the entropy produced in the interior of the inflammation process not being transferred across its boundaries into the surrounding environment [10].
The method we constructed, with its technology, strongly reduced the computational time and improved the recognition and description of liver tissue structure. As these tools allowed this first study of PBC behaviour in terms of the physics of dynamics, this paper supports the hypothesis that the long periods of stagnation in the history of dynamics were due to theoretical development running far ahead of the computational devices needed to apply it [11,12]. The technology introduced into our methodology facilitated the study of PBC behaviour in the following ways:
1) Exclusion of the inflammatory infiltrates from the dynamic study, as they do not produce stable topical entropy deposits; the interstitial collagen deposition generating fibrosis, by contrast, is the product of hepato-cellular necrosis.
2) Standardization of a strictly objective evaluation producing scalars by metrizing the images of the histological structures of the liver section.
3) Metrical measurement even of the smallest dispersed islets of fibrosis and of tiny CK7+ biliary ducts normally undetected with an optical microscope.
4) Transformation of scalars into vectors, leading to vectorial portraits of the PBC histological section. This operation creates a homogeneity that maintains the concepts of mixture and the identity of the mixed elements in the sum of the CK7+ biliary duct and fibrosis vectors defining the particola, the geometric figure which graphically traces the process trajectory.
5) Description of PBC evolution by the cumulative curve of the particolae, each representing an actual state of the process, taking this planar curve as the ideal finite trajectory (α-ω) of the entire PBC process.
6) Definition of the past and future percentage of the disease course from the position of its particola on the trajectory.
7) Scoring of PBC evolution into three phases on its trajectory, describing the ideal course of the process based on mathematical measurements.
To conclude, let us ask whether any correlation is recognizable between the metrical data of our case list, divided into three phases by the rules of dynamics, and the semi-quantitative data, divided into four stages according to Scheuer's method (Table 2). For example, the 20 specimens classified in our highest phase (III) included 9 patients (45%) classified by the semi-quantitative Scheuer classification at lower levels of staging. Where is the truth?
Table 2. Distribution across the three stadia of the dynamic trajectory of the patients in our list, classified according to Scheuer's classification.
List of abbreviations
PBC: Primary biliary cirrhosis; IRCCS: Institute for Scientific Research Hospitalisation and Care; HM: Histologic Metrizer; TI: Tectonic Index
Authors’ contributions
ND wrote the manuscript and conceived the theory behind the machine and the dynamics, as well as the machine itself; CR conceived the software and constructed the machine; EM did the statistical analysis and contributed to revision of the text; BF, SDB and SM did the histological preparations; GB revised the text.
This study was completed with the financial support of the Fondazione Michele Rodriguez, Istituto Clinico Humanitas IRCCS, and Attilio and Livia Pagani.
The entire group is extremely grateful to Rosalind Roberts for her numerous translations of the manuscripts.
Math Forum Discussions
Topic: Matheology 203
Replies: 3 Last Post: Feb 6, 2013 4:06 PM
Re: Matheology 203
Posted: Feb 6, 2013 9:40 AM
On 6 Feb., 13:32, Alan Smaill <sma...@SPAMinf.ed.ac.uk> wrote:
> WM <mueck...@rz.fh-augsburg.de> writes:
> > On 6 Feb., 04:47, Ralf Bader <ba...@nefkom.net> wrote:
> >> According to Mückenheim, "There is no
> >> sensible way of saying that 0.111... is more than every
> >> FIS". Of the authorities you called upon, whom would you find capable of
> >> regarding this as a sensible assertion
> > Compare Matheology § 030: We can create in mathematics nothing but
> > finite sequences, and further, on the ground of the clearly conceived
> > "and so on", the order type omega, but only consisting of equal
> > elements {{i.e. numbers like 0,999...}}, so that we can never imagine
> > the arbitrary infinite binary fractions as finished {{Brouwers Thesis,
> > p. 143}}. [Dirk van Dalen: "Mystic, Geometer, and Intuitionist: The
> > Life of L.E.J. Brouwer", Oxford University Press (2002)]
> van Dalen, unlike WM, is careful to note Brouwer's own note
> on "equal elements":
> "Where one says 'and so on', one means the arbitrary
> repetition of the same thing or operation, even though that thing or
> operation may be defined in a complex way"
> thus justifying existence of expansions like 0.12121212...
Unlike WM? Did I deny that??? Of course even the existence of
0.[142857] and every other periodic decimal fraction is possible
according to Brouwer. If you can't believe that this is covered by my
§ 030, then simply use the septimal system even if it is not an
optimal system.
> "arbitrary" sequences are a different matter.
Of course. That's why no uncountable sets exist.
> And in van Dalen, p 118, a letter from Brouwer summarising his thesis:
> "I can formulate:
> 1. Actual infinite sets can be created mathematically, even
> though in the practical applications of mathematics in the world
> only finite sets exist."
Brouwer obviously had not the correct understanding of what actual
infinity is, at least when writing that letter. Errare humanum est.
Just a question: Have you ever seen a Cantor-list where more than half
of the interesting sequences (a_j) of digits a_kj with k < j had
infinite length? Have you ever seen a Cantor-list with at least one of
the interesting sequences of digits having infinite length? No? Why
the heck do you believe that they play the crucial role in Cantor's proof?
Try to imagine this "proof" without the obviously counterfactual
belief that irrelevant tails beyond a_jj play any role. What remains?
Regards, WM
[Numpy-discussion] .T Transpose shortcut for arrays again
Bill Baxter wbaxter at gmail.com
Tue Jul 4 23:03:34 CDT 2006
Just wanted to make one last effort to get a .T attribute for arrays, so that
you can flip axes with a simple "a.T" instead of "a.transpose()", as with
numpy matrix objects.
If I recall, the main objection raised before was that there are lots of
ways to transpose n-dimensional data.
Fine, but the fact is that 2D arrays are pretty darn common, and so are a
special case worth optimizing for.
Furthermore transpose() won't go away if you do need to do some specific
kind of axes swapping other than the default, so noone is really going to be
harmed by adding it.
I propose to make .T a synonym for .swapaxes(-2,-1) {*}, i.e. the last two
axes are interchanged. This should also make it useful in many N-d array
cases (whereas the default of .transpose() -- to completely reverse the
order of all the axes -- is seldom what you want). Part of the thinking is
that when you print an N-d array it's the last two dimensions that get
printed like 2-d matrices separated by blank lines. You can think of it as
some number of stacks of 2-d matrices. So this .T would just transpose
those 2-d matrices in the printout. Those are the parts that are generally
most contiguous in memory also, so it makes sense for 2-d matrix bits to be
stored in those last two dimensions.
Then, if there is a .T, it makes sense to also have .H which would
basically be equivalent to .T.conjugate().
Finally, the matrix class has .A to get the underlying array -- it would
also be nice to have a .M on array as a shortcut for asmatrix(). This one
would be very handy for matrix users, I think, but I could go either way on
that, having abandoned matrix myself. Ex: ones([4,4]).M
Other possibilities:
- Make .T a function, so that you can pass it the same info as
.transpose(). Then the shortcut becomes a.T(), which isn't as nice, and
isn't consistent with matrix's .T any more.
- Just make .T raise an error for ndim>2. But I don't really see any
benefit in making it an error as opposed to defining a reasonable default
- Make .T on a 1-dim array return a 2-dim Nx1 array. (My default suggestion
is to just leave it alone if ndim < 2; an exception would be another
possibility). It would make an easy way to create column vectors from arrays,
but I can think of nothing else in Numpy that acts that way.
This is not a 1.0 must have, as it introduces no backward compatibility
issues. But it would be trivial to add if the will is there.
{*} except that negative axes for swapaxes don't seem to work currently, so
instead it would need to be something like:
a.transpose( tuple(range(a.ndim - 2)) + (a.ndim - 1, a.ndim - 2) )
with a check for "if ndim > 1", of course.
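For reference, in current NumPy the proposed semantics can be checked directly, since negative axes now work with `swapaxes` (and newer NumPy releases expose an `.mT` matrix-transpose attribute with essentially these semantics):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# Proposed .T semantics: swap the last two axes, leaving leading
# "stack" dimensions untouched.
t1 = a.swapaxes(-2, -1)

# Equivalent transpose() call built from axis indices.
axes = tuple(range(a.ndim - 2)) + (a.ndim - 1, a.ndim - 2)
t2 = a.transpose(axes)

assert t1.shape == (2, 4, 3)
assert (t1 == t2).all()
# Each 2-D slice along the leading axis is an ordinary matrix transpose.
assert (t1[0] == a[0].T).all()
```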
The Five Strands of Mathematics
(1) Conceptual understanding refers to the “integrated and functional grasp of mathematical ideas”, which “enables them [students] to learn new ideas by connecting those ideas to what they already
know.” A few of the benefits of building conceptual understanding are that it supports retention, and prevents common errors.
(2) Procedural fluency is defined as the skill in carrying out procedures flexibly, accurately, efficiently, and appropriately.
(3) Strategic competence is the ability to formulate, represent, and solve mathematical problems.
(4) Adaptive reasoning is the capacity for logical thought, reflection, explanation, and justification.
(5) Productive disposition is the inclination to see mathematics as sensible, useful, and worthwhile, coupled with a belief in diligence and one’s own efficacy. (NRC, 2001, p. 116)
National Research Council. (2001). Adding it up: Helping children learn mathematics. J. Kilpatrick, J. Swafford, and B. Findell (Eds.). Mathematics Learning Study Committee, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.
The _____ area on the status bar displays common calculations, such as SUM or AVERAGE, for selected numbers in the worksheet. (Points : 2) AutoFormat AutoComplete AutoFunction AutoCalculate
The AutoCalculate area on the status bar displays common calculations, such as SUM or AVERAGE, for selected numbers in the worksheet.
CCONTR : term -> thm -> thm
Implements the classical contradiction rule.
When applied to a term t and a theorem A |- F, the inference rule CCONTR returns the theorem A - {~t} |- t.
A |- F
--------------- CCONTR `t`
A - {~t} |- t
Fails unless the term has type bool and the theorem has F as its conclusion.
The usual use will be when ~t exists in the assumption list; in this case, CCONTR corresponds to the classical contradiction rule: if ~t leads to a contradiction, then t must be true.
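For comparison, the same classical step can be sketched in Lean 4, where core's `Classical.byContradiction` plays the role of CCONTR (a sketch, not HOL Light itself):

```lean
-- Classical contradiction: if assuming ¬t yields falsity, conclude t,
-- analogous to CCONTR discharging ~t from the assumption list.
example (t : Prop) (h : ¬t → False) : t :=
  Classical.byContradiction h
```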
See also: CONTR, CONTR_TAC, NOT_ELIM.
Math Q
A set of numbers is called a "unified" set if for every number x in the set, there is a number y in the set such that xy = 1. If the set {.1, .2, .5, 5, 10, k} is a unified set, what is the value of k?
A- 0
B- 1
C- 2
D- 15
E- 20
Is this how u approach the problem u multiply (.1)(.2)(.5)(5)(10) and u get .5 and then see which number it will multiply with to = 1?
C, u know .1 * 10 = 1 so eliminate those 2, .2 * 5 = 1 eliminate those 2, and that leaves u with .5 and k, .5 * k = 1 so k = 2.
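The reciprocal-pairing argument can be checked mechanically; a quick sketch (the helper name is made up for illustration):

```python
def is_unified(nums, tol=1e-9):
    """A set is 'unified' if every element x has some y in the set with x*y == 1."""
    return all(any(abs(x * y - 1) <= tol for y in nums) for x in nums)

# Pairing off reciprocals: .1*10 = 1 and .2*5 = 1, leaving .5, whose partner k
# must satisfy .5*k = 1, so k = 2.
assert not is_unified([0.1, 0.2, 0.5, 5, 10])    # .5 still unpaired
assert is_unified([0.1, 0.2, 0.5, 5, 10, 2])     # k = 2 completes the set
```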
ANd heres another
S = {1, 2, 3, 4, 5, 6}
T = {1, 2, 3, 4, 5, 6}
A number is to be chosen at random from set S, and then a number is to be chosen at random from set T.
Col A.
Probability that the number chosen from set S will be 5
Col B.
The probability that the sum of the 2 numbers chosen will be 7
Can sum1 explain that and another one here too
The area of a field F is 20 sq yards. (1 yard = 3 feet)
Col A
The number of sq feet in the area of field F
Col B.
The probability that the sum of the 2 numbers chosen will be 7
Interested i still dont see what u did and why u did it?
tell me if im rite and ill explain
1.C 2. C
For 3. - are u sure u wrote Column B correctly?
1 and 2 are correct
lol sorry on 3 COLUM B i ment to put 60
C. 2 is correct. (.1x 10, .2x 5, .5x k)
For the problem with probabilities, column A is 1/6 (1 number out of 6) and Column B is 6/36 or 1/6 (the combinations that yield 7 as a total).
You can find the answers thru a couple of ways. The first is to write all the pairs that total 7 (there are 6) and compare that to the maximum of 36 choices. There are 6 choices, which is the same as
for column A. You could also see that for WHATEVER number is picked in the first set, there is only ONE possibility in set B to be a complement to 7. This means that there are only 6 choices.
For the problem with square feet, be careful of the trap 3 x 20 = 60. Remember that 1 square yard equals 9 square feet - 3x3=9.
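Both probability columns can also be verified by enumerating the 36 equally likely (S, T) pairs:

```python
from fractions import Fraction
from itertools import product

S = [1, 2, 3, 4, 5, 6]
T = [1, 2, 3, 4, 5, 6]

pairs = list(product(S, T))  # 36 equally likely outcomes

p_first_is_5 = Fraction(sum(1 for s, t in pairs if s == 5), len(pairs))
p_sum_is_7 = Fraction(sum(1 for s, t in pairs if s + t == 7), len(pairs))

assert p_first_is_5 == Fraction(1, 6)  # Column A
assert p_sum_is_7 == Fraction(1, 6)    # Column B: 6 of the 36 pairs total 7
```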
what word in the world problem made u think .1 x 10 why did u do that, i dun even understand wut its asking plz help on this
(.1x 10, .2x 5, .5x k)=1, thus k=2
Re: st: Re: svyset for stratified probability proportional to size desig
Re: st: Re: svyset for stratified probability proportional to size design
From Nick Winter <nw53@cornell.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Re: svyset for stratified probability proportional to size design
Date Thu, 11 Aug 2005 13:05:07 -0400
You only -svyset- one weight variable--which corresponds to the ultimate probability of selection of each observation. With multiple stages of sampling, these weights will be a function of the
probabilities of selection at each stage, possibly with additional corrections for non-response, etc. So the sample might be self-weighting at the first stage, but perhaps not at lower stages,
resulting in non-constant weights for the ultimate observations.
So you would do something like:
. svyset psu1 [pw=weight] , str(strat1) || psu2 , fpc(fpc2) || _n , fpc(fpc3)
Note, however, that if the first stage is actually sampled with replacement, then lower-level without-replacement sampling is ignored. This will be conservative--your reported standard errors will be
somewhat larger than they would otherwise be.
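As a rough sketch of how such ultimate weights are assembled (function names and the first-order PPS approximation are illustrative, not Stata's internals):

```python
def pps_first_stage_prob(n_psus, psu_pop, stratum_pop):
    """Approximate selection probability of one PSU drawn PPS with replacement:
    n draws, each hitting this PSU with probability proportional to its size."""
    return n_psus * psu_pop / stratum_pop

def overall_pweight(stage_probs):
    """Probability weight = inverse of the product of per-stage selection
    probabilities for the ultimate observation."""
    p = 1.0
    for prob in stage_probs:
        p *= prob
    return 1.0 / p

# Hypothetical example: a PSU holding 5% of its stratum, 4 PSUs drawn;
# then 3 of 10 blocks; then 2 of 8 households.
p1 = pps_first_stage_prob(4, 500, 10_000)       # 0.2
w = overall_pweight([p1, 3 / 10, 2 / 8])
assert abs(w - 1 / (0.2 * 0.3 * 0.25)) < 1e-9
```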
--Nick Winter
At 02:37 AM 8/11/2005, you wrote:
We run on Stata 9, updated all July 5.
We have a complex survey design data set with the first stage stratified
into two parts. The primary sampling units (census tracts) have been drawn
with replacement from each of the two strata with probability proportional
to population size (PPS) in each of the strata.
We have the total population, the population of each stratum, and the
population of each PSU. There are two further stages in the sampling
design, random sampling of 3 blocks within each PSU without replacement and
random sampling of 2 households within each block without replacement.
Our difficulty lies in specifying specifying svyset for this design,
particularly selecting (or not) the pw weights for the first level. On the
one hand, we read from texts that the glory of probability proportional to
population sample is that it doesn't need weights. On the other hand, we
see from svyset examples at the UCLA site, that considerable effort has
gone into calculating the values for the first stage weights with PPS
design, though without access to the text, we are unclear how to apply their
Could any suggest how to at least set up svyset for the first stage of the
above described design?
Many thanks,
Steve Rothenberg
Instituto Nacional de Salud Pública
Cuernavaca, México
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Nicholas J. G. Winter 607.255.8819 t
Assistant Professor 607.255.4530 f
Department of Government nw53@cornell.edu e
Cornell University falcon.arts.cornell.edu/nw53 w
308 White Hall
Ithaca, NY 14853-4601
A Two Dimensional Derivative of Boundary Representation
The second approach which I took to two dimensional qualitative geometry was developed from boundary representation methods rather than from constructive solid geometry methods. The reason for this
was that the most important aspects of the ASSF representation in reasoning about motion seemed to be boundary related rather than axis related, as I explain later.
Solid modelling using boundary representation requires a larger amount of information to describe basic three dimensional shape than the combination of constructive solid geometry and generalised
cones does, but it can also describe a wider range of objects. This is because all surfaces of the object are described explicitly, whereas in a CSG description they are implicit in the combination
of primitives, and in the shape sweeping methods used for describing primitives.
Explicit boundary description provides important advantages where the representation is used for reasoning about interaction between objects, rather than just properties of a single object. This is
because objects only contact other objects on their boundaries, so a description of the boundary must be available to the reasoning system, whether it is given explicitly, or determined by
computation from a constructed solid description.
The generalised cones method requires that a method of representing two dimensional shape be used to describe the cross section for ``sweeping'' operations, but a boundary representation must
describe the two dimensional shape of every face in the three dimensional object. This has the advantage that individual features are separately described in local two dimensional contexts for the
whole object, whereas for CSG, features in planes other than the cross section must be inferred from the sweeping function.
An object boundary can be qualitatively described in two dimensions by identifying sections that are qualitatively homogeneous, then describing the relationships between those sections. If the
homogeneous sections were all straight lines, the resulting description would be a polygon. For more generalised shape description, the sections can also be curves or wiggles, and the description
becomes an extended polygon.
The qualitative representation which I derived from three dimensional boundary representation describes shape in exactly this way - as a collection of qualitatively different segments arranged to
form an extended polygon. For brevity, I will refer to this extended polygon boundary method as EPB.
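As a sketch of what an EPB description might look like as a data structure (illustrative only: the segment kinds come from the text above, but the field names and the turn relation are my own assumptions, not the thesis's actual representation):

```python
from dataclasses import dataclass
from enum import Enum

class SegKind(Enum):
    """Qualitatively different boundary section types."""
    LINE = "line"
    CURVE = "curve"
    WIGGLE = "wiggle"

@dataclass
class Segment:
    kind: SegKind
    turn_to_next: str  # qualitative relation to the next section, e.g. "left"/"right"

def is_ordinary_polygon(boundary):
    """If every homogeneous section is a straight line, the extended
    polygon degenerates to an ordinary polygon."""
    return all(seg.kind is SegKind.LINE for seg in boundary)

square = [Segment(SegKind.LINE, "left")] * 4
blob = [Segment(SegKind.CURVE, "left"), Segment(SegKind.WIGGLE, "right")]
```

An extended polygon is then just an ordered cycle of such segments; the qualitative relationships between consecutive sections carry the shape information.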
Alan Blackwell
Calculus AP Questions
http://www.fcps.edu/WestfieldHS/acad...ap_calc_ab.pdf I need help on numbers 5, 13, 14, 15, 17, 32H, 41-46, 20. Help is much appreciated.
For 5, I ended up with (x(7x^3 - 5x^2 - 37x - 35))/(x^4 + 5x^2 - 6) = 0, but that doesn't seem right. In 14 and 15, I don't know what the triangles mean. I have never graphed functions with restrictions like in 16 and 17. I think 13a doesn't have a domain. I don't know what the symbols in 41-43 stand for.
Engineering Acoustics/Solution Methods: Electro-Mechanical Analogies
After drawing the electro-mechanical analogy of a mechanical system, it is always safe to check the circuit. There are two methods to accomplish this:
Review of Circuit Solving Methods
Kirchhoff's Voltage Law
"The sum of the potential drops around a loop must equal zero."
$v_1 + v_2 + v_3 + v_4 = 0 \displaystyle$
Kirchhoff's Current Law
"The sum of the currents at a node (junction of more than two elements) must be zero."
$-i_1+i_2+i_3-i_4 = 0 \displaystyle$
Hints for solving circuits:
Remember that certain elements can be combined to simplify the circuit (the combination of like elements in series and parallel)
If solving a circuit that involves steady-state sources, use impedances. Any circuit can eventually be combined into a single impedance using the following identities:
Impedances in series: $Z_\mathrm{eq} = Z_1 + Z_2 + \,\cdots\, + Z_n.$
Impedances in parallel: $\frac{1}{Z_\mathrm{eq}} = \frac{1}{Z_1} + \frac{1}{Z_2} + \,\cdots\, + \frac{1}{Z_n} .$
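These series/parallel identities can be checked numerically with complex impedances (Z_R = R, Z_L = jωL, Z_C = 1/(jωC)); the element values and frequency below are arbitrary, for illustration only:

```python
import math

def z_series(*zs):
    # impedances in series simply add
    return sum(zs)

def z_parallel(*zs):
    # reciprocals of impedances in parallel add
    return 1 / sum(1 / z for z in zs)

def z_resistor(R):
    return complex(R, 0)

def z_inductor(L, w):
    return complex(0, w * L)          # jwL

def z_capacitor(C, w):
    return complex(0, -1 / (w * C))   # 1/(jwC)

w = 2 * math.pi * 1000  # 1 kHz
# a resistor in series with a parallel LC tank
z_total = z_series(z_resistor(100),
                   z_parallel(z_inductor(10e-3, w), z_capacitor(1e-6, w)))
```

Since the parallel combination of two pure reactances is again purely reactive, the real part of `z_total` here is just the series resistance.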
Dot Method: (Valid only for planar network)
This method helps obtain the dual analog (one analog is the dual of the other). The steps for the dot method are as follows:
1) Place one dot within each loop and one outside all the loops.
2) Connect the dots. Make sure that there is only one line through each element and that no lines cross more than one element.
3) For each line that crosses an element, draw in its dual element, including the source.
4) The circuit obtained should have behavior equivalent to the dual analog of the original electro-mechanical circuit.
The parallel RLC circuit above is equivalent to a series RLC circuit driven by an ideal current source.
Low-Frequency Limits:
This method looks at the behavior of the system for very large or very small values of the parameters and compares them with the expected behavior of the mechanical system. For instance, you can
compare the mobility circuit behavior of a near-infinite inductance with the mechanical system behavior of a near-infinite stiffness spring.
Element     Very High Value    Very Low Value
Capacitor   Short Circuit      Open Circuit
Inductor    Open Circuit       Short Circuit
Resistor    Open Circuit       Short Circuit
Last modified on 6 June 2010, at 22:59
Pressure at a point in a fluid
1. The problem statement, all variables and given/known data
DEFINE PRESSURE AT A POINT IN A FLUID AND SHOW THAT IT IS A SCALAR QUANTITY?
2. Relevant equations
I saw this in a recent paper and I am not too sure about how I should go about doing it.
It's a 7-mark question.
Any Ideas?
3. The attempt at a solution
ECON 809: Optimization Techniques II (3)
Economic models involving the maximization of an integral (a vector of integrals) subject to differential equality (inequality), integral equality (inequality), and finite equality (inequality)
constraints. Characterization of optimal paths by way of first and second derivatives. Existence of optimal paths. Prerequisite: Consent of instructor. LEC
Re: [patch] MIPS/gcc: Revert removal of DImode shifts for 32-bit targets
On Tue, 3 Aug 2004, Nigel Stephens wrote:
> Note that there is one slightly controversial aspect of these sequences,
> which is that they don't truncate the shift count, so a shift outside of
> the range 0 to 63 will generate an "unusual" result. This didn't cause
> any regression failures, and I believe that this is strictly speaking
> acceptable for C, since a shift is undefined outside of this range - but
> it could cause some "buggy" code to break. It wouldn't be hard to add an
> extra mask with 0x3f if people were nervous about this - it's just that
> I didn't have enough spare temp registers within the constraints of the
> existing DImode patterns.
Well, masking is trivial with no additional temporary :-) and for ashrdi3
we can "cheat" and use $at to require only a single additional instruction
compared to the others.
Here are my proposals I've referred to previously. Instruction counts
are 9, 9 and 10, respectively, as I've missed an additional instruction
required to handle shifts by 0 (or actually any multiples of 64). The
semantics they implement correspond to those of dsllv, dsrlv and dsrav,
respectively. I've expressed them in terms of functions rather than RTL
patterns, but a conversion is trivial. This form was simply easier to
validate for me and they can be used as libgcc function replacements for
Linux for MIPS IV and higher ISAs.
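As a reference for the intended semantics, here is a model of the 64-bit left shift built from two 32-bit halves, with the count taken modulo 64 as dsllv does. This is only an illustrative sketch (the two branches mirror the movn selections in the asm below), not part of the patch:

```python
MASK32 = 0xFFFFFFFF

def ashldi3_model(v, c):
    """64-bit left shift composed from 32-bit halves (dsllv semantics:
    the shift count is used modulo 64)."""
    lo, hi = v & MASK32, (v >> 32) & MASK32
    c &= 63
    if c == 0:
        return (hi << 32) | lo
    if c < 32:
        # bits shifted out of the low word move into the high word
        hi = ((hi << c) | (lo >> (32 - c))) & MASK32
        lo = (lo << c) & MASK32
    else:
        # counts 32..63: the low word moves entirely into the high word
        hi = (lo << (c - 32)) & MASK32
        lo = 0
    return (hi << 32) | lo
```

The right-shift variants are symmetric, with the sign-extension of the high word added for the arithmetic case.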
long long __ashldi3(long long v, int c)
{
	long long r;
	long r0;

	asm("sllv %L0, %L2, %3\n\t"
	    "sllv %M0, %M2, %3\n\t"
	    "not %1, %3\n\t"
	    "srlv %1, %L2, %1\n\t"
	    "srl %1, %1, 1\n\t"
	    "or %M0, %M0, %1\n\t"
	    "andi %1, %3, 0x20\n\t"
	    "movn %M0, %L0, %1\n\t"
	    "movn %L0, $0, %1"
	    : "=&r" (r), "=&r" (r0)
	    : "r" (v), "r" (c));
	return r;
}
unsigned long long __lshrdi3(unsigned long long v, int c)
{
	unsigned long long r;
	long r0;

	asm("srlv %M0, %M2, %3\n\t"
	    "srlv %L0, %L2, %3\n\t"
	    "not %1, %3\n\t"
	    "sllv %1, %M2, %1\n\t"
	    "sll %1, %1, 1\n\t"
	    "or %L0, %L0, %1\n\t"
	    "andi %1, %3, 0x20\n\t"
	    "movn %L0, %M0, %1\n\t"
	    "movn %M0, $0, %1"
	    : "=&r" (r), "=&r" (r0)
	    : "r" (v), "r" (c));
	return r;
}
long long __ashrdi3(long long v, int c)
{
	long long r;
	long r0;

	asm("not %1, %3\n\t"
	    "srav %M0, %M2, %3\n\t"
	    "srlv %L0, %L2, %3\n\t"
	    "sllv %1, %M2, %1\n\t"
	    "sll %1, %1, 1\n\t"
	    "or %L0, %L0, %1\n\t"
	    "andi %1, %3, 0x20\n\t"
	    ".set push\n\t"
	    ".set noat\n\t"
	    "sra $1, %M2, 31\n\t"
	    "movn %L0, %M0, %1\n\t"
	    "movn %M0, $1, %1\n\t"
	    ".set pop"
	    : "=&r" (r), "=&r" (r0)
	    : "r" (v), "r" (c));
	return r;
}
I don't know if the middle-end is capable of expressing these operations, but they are pure ALU, so I'd expect it to.
Chromatograms Continued
Considering the prior question:
If the two ions are traveling at the same speed, set by the flow of the eluent, what can you say about when they will emerge? Will they emerge at the same time?
From the figure, you should be able to see that the path in red is much shorter than the path in blue. Since the path is shorter, and the ions are traveling at the same speed, the ion following the
red path will emerge first. Thus a normal chromatogram peak will have a Gaussian distribution, symmetric around the mean, as seen in the figure on the right. You can also peruse a more extensive mathematical modeling of the chromatogram peak here (the link takes you to a different website).
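A peak of this Gaussian shape is easy to model numerically. The sketch below is illustrative only (the retention time, width, and loadings are made-up values): it checks that the integrated peak area tracks the amount of analyte, so doubling the loading doubles the area:

```python
import math

def gaussian_peak(t, area, t_r, sigma):
    """Chromatogram peak: symmetric about the mean retention time t_r,
    with total (integrated) area equal to `area`."""
    return (area / (sigma * math.sqrt(2 * math.pi))
            * math.exp(-((t - t_r) ** 2) / (2 * sigma ** 2)))

dt = 0.005
ts = [i * dt for i in range(2001)]                  # 0..10 time units
area_1x = sum(gaussian_peak(t, 1.0, 5.0, 0.25) for t in ts) * dt
area_2x = sum(gaussian_peak(t, 2.0, 5.0, 0.25) for t in ts) * dt
```

The numerical integrations recover the loaded amounts (about 1.0 and 2.0 here), which is why peak area, rather than peak height alone, is the usual measure of concentration.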
Since we are concerned with the concentration of ions present in the solution, how will the chromatogram change as you increase the amount of analyte loaded onto the column?
Thanks for Visual Studio 2008!!!!
I think MS was able to give us the most performant IDE since the VS6 days.
Thanks Thanks Thanks Thanks.
Can we get a showcase of all the new features?
The Visual Studio 2008 Week? to see the new innovations right from the devs?
I am so happy with Visual Studio 2008. Its really cool, Especially with the long awaited LINQ.
Hip hip hurray, Developer Division and MS.
figuerres wrote:
SecretSoftware wrote: Why does C++ even exist? Why does MS keeps maintaining such a language that caused so many buffer overflows, and generally was not as secure as C# or Vb.NET.
The only + point for C++ is that it compiles into machine code directly, and if we can get C# to do that, then there is no need for a language like C++.
I was wondering why C++ still exists when C# is that good of a language.
MS, why not retire C++, and just focus on C# and Vb.NET and F#?
Expand C# capabilities, and get rid of P/invoke and replace it with a new kind of mechanism to call dlls outside the .NET framework.
Finally, make the .NET framework really .NET, in the sense that it is distributed in terms of processing power, by enabling sharing, so that my application could use the processor that is idle in a second room in the house, automatically through the use of Remoting in LAN.
Kill C++, and lets all be on one page, with C#.
It's confusing many people, and things need to be simpler, with fewer languages: C# for experts, VB.NET for beginners and intermediates.
That is all.
PS: some might say there are programmers out there who enjoy dealing with buffer overflows and the pains of C++, and to them I say: stick with the Visual Studio 6 C++ IDE. And that is that.
Please do some research into the topic.
C and C++ are ANSI standards for a start, not owned by Microsoft or any other company.
and MSFT has provided new C Runtime libraries to help with buffer and memory problems.
and so on...
they can stop supporting it in future VS builds.
John Melville, MD wrote:
SecretSoftware wrote: Why does C++ even exist? Why does MS keeps maintaining such a language that caused so many buffer overflows, and generally was not as secure as C# or Vb.NET.
I'm hoping I just missed the sarcasm in that comment.
Even if we assume, as you seem to, that C++ is an obsolete and useless language, there are still billions of lines of C++ code out there that work and do the job they were designed, and purchased, to do. There is absolutely no chance that all of that investment is just going to vanish any time in the next several decades. C++ will be with us through the remainder of any of our careers and well beyond.
Since someone will be maintaining C++ code for the next 3 decades at least, Microsoft has, wisely, decided to make more money by selling modern tools to those developers.
PS: I still find C++ to be pretty useful in some circumstances. I'm just ignoring that for this post.
Let these people use VS 6; it has C++ 6 and it's good for them.
I yearn for the day when Visual Studio will not have C++ in it anymore.
OSes will be using managed code in the future.
Games too.
So lets just slowly get rid of C++ and focus on C#.
C# is the future, and C++ brings back bad memories of sleepless nights.
Customers who still use C++ should upgrade to C#. And that is that.
Any one who likes C++ should just stay with VS 6. or previous builds of vs.
Why does C++ even exist? Why does MS keeps maintaining such a language that caused so many buffer overflows, and generally was not as secure as C# or Vb.NET.
The only + point for C++ is that it compiles into machine code directly, and if we can get C# to do that, then there is no need for a language like C++.
I was wondering why C++ still exists when C# is that good of a language.
MS, why not retire C++, and just focus on C# and Vb.NET and F#?
Expand C# capabilities, and get rid of P/invoke and replace it with a new kind of mechanism to call dlls outside the .NET framework.
Finally, make the .NET framework really .NET, in the sense that it is distributed in terms of processing power, by enabling sharing, so that my application could use the processor that is idle in a second room in the house, automatically through the use of Remoting in LAN.
Kill C++, and lets all be on one page, with C#.
It's confusing many people, and things need to be simpler, with fewer languages: C# for experts, VB.NET for beginners and intermediates.
That is all.
PS: some might say there are programmers out there who enjoy dealing with buffer overflows and the pains of C++, and to them I say: stick with the Visual Studio 6 C++ IDE. And that is that.
I liked the speed of the IntelliSense; it makes development much more fun.
However, I want IntelliSense to do more. I want there to be more information (from the MSDN Libraries, if installed). I want this information to appear in the yellow tooltip that appears over classes when I hover over IntelliSense box items.
So when I do Dim X as String, I want to hover over String in the IntelliSense box, and the tooltip would have an expandable tree (a clickable + sign, "more information") that has more information from the MSDN library about the String class, for example. This way I would have the information at my fingertips: I can see the code comments, and if I want more I can click the + sign and it can show me information about the String class from the MSDN library in a small font, or even link to it. Also, it would be better if, when I click on the links, the browser for browsing the MSDN library were within the IDE. (I hate to open too many development windows.)
I used the String class as an example. There are some classes that I or others might not have worked with, and it's a pain in the you-know-where to have to open the MSDN explorer browser and sift through the information. It's too much hassle. It's better to have information at your fingertips as you are coding.
Would this not be cool?
Well, it's sad to see Adam go.
Good luck in your new team.
Perhaps we will see you again in Ch9 videos to tell us about Client side stuff. WPF/Win Forms/sliverlight..etc.
It's kind of shocking to see you leave C9 before the beta became gold, but..
I hope you remain as a member of C9. [A]
PS: Good to see Rory. Hope all is well:)
When will Part 2 be up?
Thanks for the Interview.
Very Cool Interview.
Certainly one of the cool videos in C9!
It touched on some questions I was wondering about in terms of Multi-Core and Cryptography.
The main problem here is that many of the crypto algorithms depend on the impracticality of factoring products of large primes in a meaningful time frame.
With the advent of Multi-Core architecture, this problem has become more manageable from a cracking point of view. Hence this represents a concern to many business owners, because they cannot sleep
thinking their data/business/customers' private info are secure, because the computing power is certainly here.
The only solution is to develop newer cryptographic and steganographic algorithms that do not depend on computing power at all. Rather, they should depend on mathematical unknowns, like the 3 unknown algorithms, etc.
I believe there is a need for the development of the One Time Pad over PKIs, so it can be used securely in business platforms, because only the one-time pad algorithm (Vernam's algorithm) does not depend on processing power as a factor in the security of the algorithm.
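For reference, the Vernam scheme itself is tiny. This sketch (illustrative only, not production code) XORs the message with a truly random, single-use key of equal length, so security rests on the key rather than on any computational-hardness assumption:

```python
import secrets

def vernam(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR byte-for-byte. The same function encrypts and
    decrypts, since XOR is its own inverse. The key must be truly
    random, as long as the message, and never reused."""
    if len(key) != len(data):
        raise ValueError("key must match message length")
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))
ciphertext = vernam(msg, key)
recovered = vernam(ciphertext, key)
```

The hard part, of course, is not the cipher but distributing (and never reusing) keys as long as the traffic itself, which is exactly the key-exchange problem mentioned below.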
One of the problems that many devs face is that you have to trust a 3rd party in the Public Key exchange scenario, and I managed to develop a secure way to exchange public keys without the need for
a 3rd party to be trusted. And I am working on optimizing this.
As a developer I feel that there should be more algorithms that depend on mathematical solvability problems rather than computing power impracticality as a function of time. Because as time goes on,
we will get more powerful processors, and we are at a stage where we cannot afford to keep the algorithms that were developed in the 60s and 70s time frame.
On the whole, it was a very good channel9 video. Many thanks to Charles and many thanks to channel9.
PS: It would be good if we got more crypto videos and, in general, more security videos, because this topic is fascinating. I call cryptography and steganography the 7th wonder of the computing world.[A]
This is exciting news. VS Orcas beta 1, and now Longhorn server beta 3.
MS just rocks!.
Very good and realistic video. The walk through the campus was really good! Transparency cannot get any better than this!
Very good work Longhorn Server Team!
Live Long, and Prosper \\//_.[A]
Can we get some screen shots of Beta 3?
The Terrace, NY Algebra 1 Tutor
Find a The Terrace, NY Algebra 1 Tutor
...This method of education consistently yields positive learning outcomes. I am a mechanical engineering major who graduated from Columbia University. I am also an iOS developer; the first application I brought to market launched in spring 2013 (title: Side-by-Side: Dual Column Task Manager). All of my work is done on Macintosh computers.
32 Subjects: including algebra 1, reading, physics, calculus
...I use full length tests, online resources and my own material. I have been teaching/tutoring the math section of SSAT for many years. During the first session, I evaluate the student(s) to see
which areas of the test the student needs to improve on.
23 Subjects: including algebra 1, English, reading, geometry
...Understanding Differential Equations is an integral part of my Financial Math degree. In fact, the Black-Scholes equation is really a partial differential equation, which is "one step higher" compared to ordinary differential equations. In addition, I've taken classes and have expertise in t...
25 Subjects: including algebra 1, physics, calculus, geometry
...I offer Independent School Entrance Exam prep, specializing in math and test taking tips and strategies for this unique, timed test. In addition to reviewing the most-tested math concepts, the
quantitative comparisons in the math section of the ISEE are unfamiliar to most students, so a particul...
24 Subjects: including algebra 1, chemistry, geometry, GRE
...I love grammar, and I currently work as a writer and editor for both creative and non-fiction work. I have worked with elementary and middle school students on other school subjects, as well
as assisted in improving study habits and organization. My previous experience includes work with children of all ages and a variety of special needs.
12 Subjects: including algebra 1, reading, writing, English
Related The Terrace, NY Tutors
The Terrace, NY Accounting Tutors
The Terrace, NY ACT Tutors
The Terrace, NY Algebra Tutors
The Terrace, NY Algebra 2 Tutors
The Terrace, NY Calculus Tutors
The Terrace, NY Geometry Tutors
The Terrace, NY Math Tutors
The Terrace, NY Prealgebra Tutors
The Terrace, NY Precalculus Tutors
The Terrace, NY SAT Tutors
The Terrace, NY SAT Math Tutors
The Terrace, NY Science Tutors
The Terrace, NY Statistics Tutors
The Terrace, NY Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Baxter Estates, NY algebra 1 Tutors
East Atlantic Beach, NY algebra 1 Tutors
Fort Totten, NY algebra 1 Tutors
Garden City South, NY algebra 1 Tutors
Glenwood Landing algebra 1 Tutors
Harbor Acres, NY algebra 1 Tutors
Harbor Hills, NY algebra 1 Tutors
Kenilworth, NY algebra 1 Tutors
Manorhaven, NY algebra 1 Tutors
Maplewood, NY algebra 1 Tutors
Meacham, NY algebra 1 Tutors
Port Washington, NY algebra 1 Tutors
Roslyn, NY algebra 1 Tutors
Saddle Rock Estates, NY algebra 1 Tutors
University Gardens, NY algebra 1 Tutors
New Chicago, IN Trigonometry Tutor
Find a New Chicago, IN Trigonometry Tutor
...These guides have been used to improve scores all over the midwest. I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the
ACT. I've helped students push past the 30 mark, or just bring up one part of their score to push up their overall score.
24 Subjects: including trigonometry, calculus, physics, geometry
...I have taught a variety of students at many ability levels. I am confident that I can provide understanding of scientific concepts to students who are strong in science and mathematics as well
as students who may need some remedial help to grasp the basics. I have a degree in Chemistry where the course work included a course in Organic Chemistry.
14 Subjects: including trigonometry, chemistry, physics, algebra 1
...I even tutored on the side during my summers off during college and have substitute taught for high school classes. I am willing to work with all ages--elementary through college and beyond. I
hope to talk soon!I can help make algebra easy to learn!
28 Subjects: including trigonometry, English, reading, writing
...A Bachelors of Science in Mathematics2. Several years of computer certification programs including Website Design, Database Design, Desktop Publishing, Microsoft Office Specialist in the
entire Office Suite - i.e. Word, Excel, Access, Power Point, and Outlook and more.
53 Subjects: including trigonometry, reading, calculus, English
...I am married and a father of a boy and a girl and enjoy teaching my kids math. Both of my children are "A" students in accelerated math and two-time first-place winners of math competitions at local high schools. I am also a proficient user of Microsoft Excel and am able to tutor all of its f...
18 Subjects: including trigonometry, geometry, algebra 2, study skills
Related New Chicago, IN Tutors
New Chicago, IN Accounting Tutors
New Chicago, IN ACT Tutors
New Chicago, IN Algebra Tutors
New Chicago, IN Algebra 2 Tutors
New Chicago, IN Calculus Tutors
New Chicago, IN Geometry Tutors
New Chicago, IN Math Tutors
New Chicago, IN Prealgebra Tutors
New Chicago, IN Precalculus Tutors
New Chicago, IN SAT Tutors
New Chicago, IN SAT Math Tutors
New Chicago, IN Science Tutors
New Chicago, IN Statistics Tutors
New Chicago, IN Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Beverly Shores trigonometry Tutors
Boone Grove trigonometry Tutors
Gary, IN trigonometry Tutors
Hebron, IN trigonometry Tutors
Hobart, IN trigonometry Tutors
Kouts trigonometry Tutors
La Crosse, IN trigonometry Tutors
Lake Station trigonometry Tutors
Leroy, IN trigonometry Tutors
Lowell, IN trigonometry Tutors
Ogden Dunes, IN trigonometry Tutors
Pottawattamie Park, IN trigonometry Tutors
Wanatah trigonometry Tutors
Wheeler, IN trigonometry Tutors
Whiting, IN trigonometry Tutors
Is there a list of all connected T_0-spaces with 5 points?
Is there some place (on the internet or elsewhere) where I can find the number and preferably a list of all (isomorphism classes of) finite connected $T_0$-spaces with, say, 5 points?
I know that a $T_0$-topology on a finite set is equivalent to a partial ordering, and Wikipedia tells me that there are, up to isomorphism, 63 partially ordered sets with precisely 5 elements.
However, I am only interested in connected spaces, and I'd love to have a list (most preferably in terms of Hasse diagrams).
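For small point counts these census numbers can be checked by brute force. The sketch below (not part of the question) enumerates strict partial orders, canonicalises them under relabelling, and tests connectedness of the comparability graph; it is only practical for very few points:

```python
from itertools import permutations, product

def count_posets(n):
    """Return (total, connected): isomorphism classes of posets on n
    points, and how many have a connected comparability graph (i.e.
    give a connected finite T_0 space)."""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    classes, connected_classes = set(), set()
    for bits in product((0, 1), repeat=len(pairs)):
        rel = frozenset(p for p, b in zip(pairs, bits) if b)
        if any((j, i) in rel for (i, j) in rel):            # antisymmetry
            continue
        if any((i, k) not in rel for (i, j) in rel
               for (jj, k) in rel if jj == j):              # transitivity
            continue
        canon = min(tuple(sorted((perm[i], perm[j]) for (i, j) in rel))
                    for perm in permutations(range(n)))
        classes.add(canon)
        if _connected(rel, n):
            connected_classes.add(canon)
    return len(classes), len(connected_classes)

def _connected(rel, n):
    adj = {i: set() for i in range(n)}
    for i, j in rel:
        adj[i].add(j)
        adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:
        for m in adj[stack.pop()] - seen:
            seen.add(m)
            stack.append(m)
    return len(seen) == n

print(count_posets(4))  # (16, 10): 16 posets on 4 points, 10 connected
```

The same approach would in principle answer the 5-point case directly, though the 2^20 candidate relations make it noticeably slower.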
gn.general-topology reference-request
I also asked this at math.stackexchange.com/questions/7295/… but did not receive much response. – Rasmus Bentmann Oct 21 '10 at 17:01
2 Answers
There is a Java applet that displays all 5-element connected posets at http://www1.chapman.edu/~jipsen/gap/posets.html.
I don't get it to work. Do you also see just one yellow thing if you try, and don't know what to do with that? – Rasmus Bentmann Oct 21 '10 at 18:09
It worked o.k. for me. I got the Hasse diagrams of all connected 5-element posets by clicking on "All connected Posets with 5 elements". – Richard Stanley Oct 22 '10 at
Ok, thank you for the information. – Rasmus Bentmann Oct 22 '10 at 13:41
For those who have the same problem: askubuntu.com/questions/9279/problems-with-java-applet – Rasmus Bentmann Oct 25 '10 at 17:32
At the online encyclopedia of integer sequences we find, when we type T_0 topologies, several hits. Sequence A028856 is the sequence of homeomorphism classes of T_0 topologies, and A028858 has all connected ones (308 topologies of which 235 connected, on 5 points). No explicit list of spaces, though, but some literature references that might help.
I think the number 235 refers to 6-point spaces, doesn't it? The second number in the sequence, 3, should refer to 3-point spaces (on a set with two elements there is only one connected T_0 topology). – Rasmus Bentmann Oct 21 '10 at 17:49
So, the number I asked for is 44, I guess. – Rasmus Bentmann Oct 21 '10 at 17:52
M Theory Lesson 90
How does one discuss a category of operads when the intention was to define categories using operads in the first place? Such self referential questions appear to lurk behind the mystery of weak
n-categories. We want an $\omega$-category of $\omega$-operads such that the categories defined as algebras are precisely what we get when we take the algebras of a special Koszul monad. Gee, that's
already way too much mathematics. And duality won't do for all (categorical) dimensions: in logos theory there are n-alities, so we need a concept of Koszul n-ality. Fortunately, the importance of 2
to topos theory (a la locales and schizophrenic objects) is just the place we thought about extending dualities. So we want an $\omega$-category of $\omega$-operads such that an $\omega$-monad of
Koszul n-alities gives weak n-categories as n-algebras. Sigh. Maybe we should return to pictures of trees and discs.
Aside: I don't like inventing new words for things, since there are so many definitions in mathematics as it is, so I will ignore all objections to the term schizophrenic object. After all, do dwarfs
object to having stars named after them? If a term is descriptive, it is a good term.
2 Comments:
kneemo said...
Maybe we should return to pictures of trees and discs.
Or even better, return to the work of Mulase & Waldron and try to figure out what kind of ribbon graphs correspond to the Gaussian Exceptional Ensemble. ;)
Kea said...
Hi kneemo. Ah, yes, we must not forget all that. Where were we? Non-associativity is important, and the only real ideas there are from Bar-Natan (as far as I know). And we were thinking about Fano planes to motivate triple-ribbons, if I recall.
Cantor's Diagonalization Proof of the uncountability of the real numbers
If you give a recursive enumeration of every real between 0 and 1, that should suffice. Such an algorithm, if you indeed possess it and it does what you claim, would be of great interest to many, but it is highly likely that there is an undiscovered flaw in it, which might not easily be detectable (especially if the algorithm is of sufficient complexity).
It is a simple algorithm. Don't underestimate my abilities. :)
I will tailor it to the needs of the discussion. It is probably easier to do an algorithm that counts all possible reals than one limited to [0, 1).
Let me ask a question to make sure we are agreeing on a general idea:
I am going to choose a non-repeating decimal to make sure you don't think I am over-simplifying.
Take the number (2)**0.5. The representation of this real number is given to you by a counting number (2) and a function ()**0.5. I would argue that we know such a number is not finite in
representation because there is no FINITE sequence of digits which repeats.
eg: the first digits in a 4 mantissa and 4 fractional representation would be: 0001.4142
By induction, the next symmetrical word size up would be:
00001.41421 ... etc.
It is not a rational number, so it cannot be represented by any ratio of counting numbers.
The proof of the infinite nature of the decimal representation is *inductive* in that no matter how many digits I show you, you can claim there is a representation with n+1 digits.
It is not necessary to write all the digits down to show that it is infinite, but simply to show that it grows without bound. But, nonetheless, the only way to express this number is a mapping
through a function (sqrt) of a counting number. The sequence of digits, however, 1.41421..., is unique.
I am going to assume that a sequence that preserves previously made digits and adds new ones in a predictable (inductive) pattern is sufficient to show the signature of a unique real number; and that
likewise, a counting number may be converged to by a signature (in the reverse direction). Such a pattern, that one may work out to any desired number of digits (without END and therefore infinite), is
sufficient to show the existence in the decimal system of such a number. Eg: the sequence 1.41421..., no matter what algorithm produces it, so long as it matches the sequence of digits of sqrt(2)
without a single exception, must be considered to have produced the same real number.
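The inductive picture above (each longer prefix preserves the earlier digits) can be checked mechanically. Here is a small sketch, not from the thread, that produces exact decimal prefixes of sqrt(2) using integer square roots, so no floating-point rounding creeps in:

```python
from math import isqrt

def sqrt2_prefix(n):
    """Return sqrt(2) truncated to n fractional digits, exactly.
    isqrt(2 * 10**(2n)) equals floor(sqrt(2) * 10**n), so every
    prefix agrees with the start of the infinite decimal expansion."""
    s = str(isqrt(2 * 10 ** (2 * n)))
    return s[0] + "." + s[1:]

print(sqrt2_prefix(4))   # 1.4142
print(sqrt2_prefix(5))   # 1.41421
```

Each call extends the previous output without changing any earlier digit, which is exactly the "signature" property being discussed.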
I'll let you ponder that for a bit, because I want to give you the best answer I can.
(This is starting to get fun.... I wish people would help me with my simple statistics and propagation of error problems IF I complete this successfully ..... deal? )
And now the question: is the signature of the real number, inductively matched to all possible digits, sufficient to indicate that it has been counted? (I am not saying we actually match all
infinite digits, but that we show they *WILL* match.)
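For reference, the diagonal construction named in the thread title can be sketched in a few lines (the 4/5 digit choice is one conventional trick for avoiding 0/9 representation ambiguity): given any claimed enumeration of reals in [0, 1) as digit strings, it produces a number that differs from the i-th entry in its i-th digit, and hence is absent from the list.

```python
def cantor_diagonal(rows):
    """rows[i] is the fractional-digit string of the i-th enumerated real.
    Return a digit string differing from rows[i] at position i for every i."""
    return "".join("5" if row[i] == "4" else "4" for i, row in enumerate(rows))

print("0." + cantor_diagonal(["000", "444", "123"]))  # 0.454
```

This is why no single enumeration, however clever, can cover all the reals: the diagonal number is constructed to disagree with every entry.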
Stack with affine stabilizers but not quasi-affine diagonal
Give an example of a stack X with affine stabilizer groups and separated but not quasi-affine diagonal.
1) If X has finite stabilizer groups then the diagonal is quasi-finite and separated, hence quasi-affine (Zariski's MT).
2) If we drop the condition that the diagonal of X is separated, it is easy to find examples.
3) The stabilizer groups of X are affine if and only if they are quasi-affine.
1 Answer
Here is an example:
In X13 of "Faisceaux amples sur les schémas en groupes", Raynaud provides an example of a group scheme G -> S in characteristic 2 where
1. S is a local regular scheme of dimension 2.
2. G -> S is smooth, separated and quasi-compact.
3. The fibers of G ->S are affine and the generic fiber is connected.
such that G -> S is not quasi-projective.
Therefore, the classifying stack BG has affine stabilizers but does not have a quasi-affine diagonal.
On the other hand, in VII 2.2, Raynaud proves that if G -> S is a smooth, finitely presented group scheme such that
1. S is normal.
2. G -> S has connected fibers.
3. The maximal fibers are affine.
then G -> S is quasi-affine.
Question: Is the above statement true if (2) is weakened to require that the number of connected components of the fiber over s \in S be prime to the characteristic of the residue field k(s)?
Of course, one would really like to know if the statement is true if G->S is not necessarily flat so that one could apply it to the inertia stack.
On a related note, Raynaud also provides an example in VII 3 of a smooth quasi-affine group scheme G -> A^2 over a field k with connected fibers but which is not affine. The classifying
stack BG gives an example of stack with affine and connected stabilizers but with non-affine inertia stack. In the example of a scheme with non-affine diagonal, the inertia is of course
affine. It's also easy to provide examples of non-affine group schemes with affine but non-connected fibers (eg. the group scheme obtained by removing the non-identity element over the
origin from Z/2Z -> A^2).
Quality Factor (Q)
The quality factor (Q) of a two-pole resonator is defined by [20, p. 184]

    Q = ω0 / (2α),

where α and ω0 are parameters of the resonator transfer function

    H(s) = g / (s^2 + 2αs + ω0^2).

Note that Q is defined in the context of continuous-time resonators, so the transfer function H(s) is the Laplace transform (instead of the z transform) of the continuous (instead of discrete-time)
impulse response h(t). The parameter α is called the damping constant (or ``damping factor'') of the second-order transfer function, and ω0 is called the resonant frequency [20, p. 179]. The resonant
frequency ω0 coincides with the oscillation frequency of the impulse response when the damping constant is small (a sinusoidal oscillation under an exponential decay). For larger damping constants,
it is better to use the imaginary part of the pole location as a definition of resonance frequency (which is exact in the case of a single complex pole). (See §B.6 for a more complete discussion of
resonators, in the discrete-time case.)

By the quadratic formula, the poles of the transfer function are

    p = -α ± sqrt(α^2 - ω0^2).

Therefore, the poles are complex only when α < ω0. The case α = ω0 is called critically damped, while α > ω0 is overdamped. A resonator (α < ω0) is underdamped, and the limiting case α = 0 is
undamped.

Relating to the notation of the previous section, in which we defined one of the complex poles as p = -α + jω_d, we have ω_d = sqrt(ω0^2 - α^2). For resonators, Q = ω0/(2α) > 1/2 [20, p. 624].

Since the imaginary parts of the complex resonator poles are ±ω_d = ±sqrt(ω0^2 - α^2), ω_d is close to the peak frequency of the amplitude response. If we eliminate the negative-frequency pole, ω_d
becomes exactly the peak frequency. In other words, as a measure of resonance peak frequency, ω_d neglects only the interaction of the positive- and negative-frequency resonance peaks in the
frequency response, which is usually negligible except for highly damped, low-frequency resonators. For any amount of damping, ω_d gives the oscillation frequency of the impulse response exactly.
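These relations are easy to check numerically; the following standalone sketch (the variable names are mine, and the gain g is irrelevant to the pole locations) computes Q and the pole's imaginary part for an underdamped resonator:

```python
import cmath

alpha, omega0 = 1.0, 10.0                 # underdamped, since alpha < omega0
Q = omega0 / (2 * alpha)                  # quality factor: here 5.0
# Poles of H(s) = g / (s^2 + 2*alpha*s + omega0^2), by the quadratic formula:
p = -alpha + cmath.sqrt(complex(alpha ** 2 - omega0 ** 2, 0))
omega_d = p.imag                          # pole's imaginary part, sqrt(99) here
print(Q, omega_d)
```

With alpha well below omega0, omega_d (about 9.95 here) sits just below the resonant frequency omega0 = 10, as the text describes.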
First 10-digit prime found in consecutive digits of e
The billboard shown above is part of a recruiting campaign of Google. I am far from smart enough to solve this problem, and at this moment I haven't got any plans to move to the United States. But I
am interested in the solution, so for the moment I will wait until somebody has solved this problem and publishes it on the internet. After that, I will be able to find it through Google.
[Update 2004.07.14]: Google already gives some answers. The best discussion about the answer —7427466391.com— is found at the FogCreek forum.
Interesting... I'll work on this tonight.
then goto www.linux.org;
login= bobsyouruncle
password= 5966290435
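For the curious, the widely reported answer (7427466391, matching the domain mentioned in the update) can be reproduced with a short standard-library script; the function names are mine, and the series summation is a simple rather than fast way to obtain the digits of e:

```python
from decimal import Decimal, getcontext
from math import isqrt

def e_digits(n):
    """First n digits of e ('2718...'), from the series sum over k of 1/k!."""
    getcontext().prec = n + 10                 # extra guard digits
    total, fact = Decimal(0), Decimal(1)
    for k in range(2 * n + 20):                # far more terms than needed
        if k:
            fact *= k
        total += 1 / fact
    return str(total).replace(".", "")[:n]

def is_prime(m):
    """Trial division is plenty fast for 10-digit candidates."""
    return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

digits = e_digits(150)
answer = next(int(digits[i:i + 10]) for i in range(len(digits) - 9)
              if digits[i] != "0" and is_prime(int(digits[i:i + 10])))
print(answer)  # 7427466391
```

The leading-zero check keeps 9-digit values from sneaking in as "10-digit" windows; the first genuine 10-digit prime appears around the hundredth digit of e.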
A Generic Open-address Hash Table Implementation with an STL-like Interface
Hash tables are very common data structures. They provide efficient key-based operations to insert and search for data in containers. Like many other things in computer science, there are trade-offs
associated with the use of hash tables. They are not good choices when there is a need for sort and select operations.
There are two main issues regarding the implementation of hash based containers: the hash function and the collision resolution mechanism. The hash function is responsible for the arithmetic
operation that transforms a particular key into a particular table address. The collision resolution mechanism is responsible for dealing with keys that hash to the same address.
Although the C++ STL (Standard Template Library) does not support hash based containers, most compilers like GCC or Visual C++ have their own implementation - it is worth noting that TR1 (Technical
Report 1) already offers unordered map and set structures. Also, a good alternative is STLport [1], which provides the following class templates:
• hash_set
• hash_map
• hash_multiset
• hash_multimap
Interfaces of hash table implementations from different compilers may vary. Besides, not all of them support TR1. Integrating STLport into your code might not be that easy (unless you are already
using it, of course). Finally, most of these hash based containers use separate chaining as the collision resolution mechanism, which is a straightforward method, but not the only one.
Open addressing is a method for handling collisions through sequential probes in the hash table. It can be very useful when there is enough contiguous memory and knowledge of the approximate number
of elements in the table is available. Two well known open addressing strategies are linear probing and double hashing, which I briefly discuss in the next section.
In this article, I present a generic standalone STL-like implementation of a hash table that uses either linear probing or double hashing as the collision resolution mechanism. It serves as the
underlying implementation of the four class templates mentioned above, and it is constructed with many C++ techniques applied in STLport. The code follows the STL and extended SGI concepts very
closely. In fact, only one operation is not implemented (the reason is explained later). In addition, the code also works as an interesting introduction to generic programming. This is a brief
article, and I provide neither a theoretical discussion of hash tables nor all the implementation details. I am more interested in providing the source code so others can inspect it, give
suggestions, and hopefully benefit from it.
Hash Table Parameterization
The following class template is the underlying implementation of hash_set, hash_map, hash_multiset, and hash_multimap.
template <
class key_t,
class value_t,
class hash_fcn_t,
class increment_t,
class equal_key_t,
class get_key_t,
class alloc_t>
class hash_table__
Most of the template parameters above have meaningful names and a straightforward role. The first one, key_t, is the type of the hash table key. The second, value_t, is the type of the value. (In a
set, the value is the actual key; in a map, the value is a pair containing the key and value.) The hash function is represented by the third parameter. Skipping the fourth parameter, which I will
explain in the next paragraph, the fifth parameter, equal_key_t, is a binary predicate used to determine whether two keys are equal. The parameter get_key_t has an interesting responsibility: it is a
unary function used to obtain the key of a value stored in the hash table. This parameter must be set in accordance with the type of the value. The last template parameter is the allocator type.
There is only one significant difference between linear probing and double hashing. With linear probing, when a collision is detected, the algorithm looks for the position right next to the current
table address to check if it is available. It keeps doing that until it finds an empty spot (once it reaches the end of the table, it starts over). Naturally, it is important that the table is not
full. (This implementation assumes a maximum load factor of 5/10.) With double hashing, a second hash function is used as the increment from the current table address to the other positions the
algorithm looks for. In this particular case, there is an extra concern of writing a second hash function that will not lead to an infinite loop. A good usual choice is an increment that is
relatively prime to the size of the table. I already provide two template specializations (for integral types) to use as the argument for the parameter increment_t.
//For linear probing.
template <class key_t>
struct unit_increment
{
    std::size_t operator()(const key_t&) { return 1; }
};

//For double hashing.
template <class key_t>
struct hash_increment {};

//Specializations for integral types are all like this one.
template <>
struct hash_increment<short>
{
    std::size_t operator()(short x) { return (x % 97) + 1; }
};
The behaviours of linear probing and double hashing are very similar. However, double hashing is less subjected to the formation of clusters, since keys that hash to the same address are likely to be
spread throughout the table. On the other hand, implementation of linear probing might be a little bit simpler because of issues related to the erase operation. See [2] for more details.
As I mentioned before, hash_table__ is the underlying implementation of the map and set class templates. This is done through composition, as shown in the code below for the case of hash_multimap.
template <
    class key_t,
    class value_t,
    class hash_fcn_t = hash<key_t>,
    class increment_t = unit_increment<key_t>,
    class equal_key_t = std::equal_to<key_t>,
    class alloc_t = std::allocator<std::pair<key_t, value_t> > >
class hash_multimap
{
    typedef std::pair<key_t, value_t> Map_pair;
    typedef hash_table__<
        // ...remaining arguments following hash_table__'s parameter list...
        alloc_t> HT;

    HT underlying_;
    //...
};
Under the hood, the hash table is implemented with std::vector. Therefore, implementation of an iterator is relatively simple. Basically, it is necessary to have a pointer to the vector, an integral
type to indicate the current position (or index), and some intelligence for the iteration process. There is, though, one point that is worth talking about.
When an iterator is dereferenced, a reference to an instance is provided. However, when a const_iterator is dereferenced, a constant reference is provided. Since this is just a matter of type
difference, instead of writing two implementations for the iterators, we can use an auxiliary traits class template to do the job.
One of the template parameters of the iterator implementation is the hash table type. The other is the traits type. For the definition of the type iterator, the class template non_const_traits is
used. For the definition of the type const_iterator, the class template const_traits is used. The following code shows the idea:
template <class value_t>
struct const_traits
{
    typedef const value_t* pointer;
    typedef const value_t& reference;
};

template <class value_t>
struct non_const_traits
{
    typedef value_t* pointer;
    typedef value_t& reference;
};

//Iterator implementation.
template <
    class hash_container_t,
    class constness_traits_t>
struct hash_table_iterator__
{
    typedef typename constness_traits_t::pointer pointer;
    typedef typename constness_traits_t::reference reference;

    reference operator*() const { return (*this->container_)[this->current_].value_; }
    pointer operator->() const { return &(operator*()); }
    //... (the container_ and current_ members are omitted here)
};
A final issue is that conversion between an iterator and a const_iterator should be handled properly. Please refer to the source code for examples.
Using the Code
The usage of the code is just like for any STL container. Here is an example:
#include "hash_map.h"
#include <iostream>

int main()
{
    using namespace hashcol;

    typedef hash_map<int, double> Map;
    typedef Map::value_type value_type;
    typedef Map::iterator iterator;

    Map map;
    map.insert(value_type(8, 8.888));
    map.insert(value_type(1, 1.111));
    map.insert(value_type(12, 12.12));
    map.insert(value_type(3, 3.3));
    map.insert(value_type(122, 122.122));

    std::cout << "\nSize is: " << map.size();
    std::cout << "\nElements are:";
    for (iterator it = map.begin(); it != map.end(); ++it)
        std::cout << "\n\tKey = " << it->first
                  << " Value = " << it->second;

    return 0;
}
• [1] STLport - http://www.stlport.org/
• [2] Sedgewick R. Algorithms in C++ - Fundamentals, Data Structures, Sorting and Searching (3rd edn). Addison-Wesley, 1998.
Point Clouds to Mesh in “MeshLab”
Importing Data
Occasionally you will need to sub-sample your point-cloud data to make it easier to work with. This does inevitably reduce the resolution of the data but if proper techniques are used you can
maintain a high level of fidelity in the point cloud data. *** Especially in noisy scans from the Kinect
We will want to recreate a surface; through trial and error (at least with objects that contain a lot of curves or contours), the Poisson disk method obtains the best results.
The “Filter->Sampling->Poisson Disk Sampling”
Make sure you check the “Base Mesh Subsampling” box.
The algorithm was designed to pass a circular window over the point cloud and select those points that are statistically "random" according to a Poisson distribution.
Like previously mentioned the exact parameters used in your process are TOTALLY APPLICATION DEPENDENT. Meaning that what worked well with a point cloud of a million points for the interior of a room,
may not work with a million points of a human face.
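As a rough illustration of what Poisson-disk subsampling does, here is a toy dart-throwing version in Python (this is not MeshLab's implementation, which uses spatial acceleration structures; it only shows the minimum-distance idea):

```python
import random

def poisson_disk_subsample(points, radius):
    """Keep a random subset in which no two kept points are closer than
    `radius`: visit the points in random order, keeping each one only if it
    is far enough from everything kept so far. O(n * kept) - for illustration."""
    kept, r2 = [], radius * radius
    for p in random.sample(points, len(points)):
        if all(sum((a - b) ** 2 for a, b in zip(p, q)) > r2 for q in kept):
            kept.append(p)
    return kept

pts = [(0, 0, 0), (0.1, 0, 0), (5, 5, 5)]
print(len(poisson_disk_subsample(pts, 1.0)))  # 2: the two near-duplicates collapse
```

This is why the method thins dense regions evenly instead of decimating points at random: clusters of near-duplicate samples collapse to a single representative.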
More on Subsampling
The image below shows the point cloud captured from the Microsoft Kinect (of a human chest – side view), and it has points that are not part of the actual object we want to create a 3D model of. So,
to avoid having spikes or deformities in our data, we should apply a few methods to eliminate them when possible.
While there are many different ways to deal with these rogue points, we can once again apply the Poisson distribution, which seems to give the best results among the automated filters offered by MeshLab.
Much like the filtering of noise in LiDAR data, the Poisson takes the entire area of interest (the radius of the window size we specify, in this case) and looks at the corresponding distribution of
points in 3D space. When a point is determined to be statistically random after the number of iterations you specify, the algorithm will remove that point from the recreation of the surface.
Even though the Poisson does an excellent job there are still cases where manually cleaning these points from the data is required. (Meaning select it and delete it)
It is also important to note that since the Poisson is a stochastic process no two subsamples will be exactly the same even if the exact same parameters are used. So save your data often!!
Reconstructing the Normals
Reconstructing the Surface (Creating the Mesh)
At this point you will need to choose one of the surface reconstruction algorithms that MeshLab offers.
The “Filters -> Point Set-> Surface Reconstruction: Poisson”
*** Note: This could get time consuming and at least in my experience crashes when the data is huge(“huge” is a scientific word for bigger than normal)
As mentioned before in the subsampling discussion a few tabs ago you can also use the “Marching Cubes (APSS)” which has pretty good results on data with few contours.
For you inquisitive folks who need to know more about each of these processes for surface reconstruction please check out these two links: Marching Cubes or the Poisson
The Next Steps in MeshLab
So now that you have created a “mesh” you can use the rest of the many wonderful tools MeshLab has to offer.
Unlike other programs that are specifically inclined to working with point set data, MeshLab, as the name alludes, prefers to use meshes. Therefore, if you need to fill any holes where there is
missing data, add texture information, or take measurements, etc., you need to use a mesh, which of course I hope this little tutorial showed you how to do.
Definitions for Geometry (dʒiˈɒm ɪ tri)
This page provides all possible meanings and translations of the word Geometry
Random House Webster's College Dictionary
ge•om•e•try (dʒiˈɒm ɪ tri) (n.)
1. the branch of mathematics that deals with the deduction of the properties, measurement, and relationships of points, lines, angles, and figures in space.
Category: Math
2. any specific system of this that operates in accordance with a specific set of assumptions:
Euclidean geometry.
Category: Math
3. a book on geometry, esp. a textbook.
Category: Math
4. the shape or form of a surface or solid.
5. a design or arrangement of objects in simple rectilinear or curvilinear form.
Origin of geometry:
1300–50; ME < L geōmetria < Gk geōmetría. See geo-, -metry
Princeton's WordNet
1. geometry(noun)
the pure mathematics of points and lines and curves and surfaces
Kernerman English Learner's Dictionary
1. geometry (noun) (dʒiˈɒm ɪ tri)
the part of mathematics connected with shapes, angles, lines, etc.
We'll be doing geometry in math this year.
1. geometry(Noun)
the branch of mathematics dealing with spatial relationships
2. geometry(Noun)
a type of geometry with particular properties
spherical geometry
3. geometry(Noun)
the spatial attributes of an object, etc.
Webster Dictionary
1. Geometry(noun)
that branch of mathematics which investigates the relations, properties, and measurement of solids, surfaces, lines, and angles; the science which treats of the properties and relations of
magnitudes; the science of the relations of space
2. Geometry(noun)
a treatise on this science
1. Geometry
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. A mathematician who works in the field of geometry is
called a geometer. Geometry arose independently in a number of early cultures as a body of practical knowledge concerning lengths, areas, and volumes, with elements of a formal mathematical
science emerging in the West as early as Thales. By the 3rd century BC geometry was put into an axiomatic form by Euclid, whose treatment—Euclidean geometry—set a standard for many centuries to
follow. Archimedes developed ingenious techniques for calculating areas and volumes, in many ways anticipating modern integral calculus. The field of astronomy, especially mapping the positions
of the stars and planets on the celestial sphere and describing the relationship between movements of celestial bodies, served as an important source of geometric problems during the next one and
a half millennia. Both geometry and astronomy were considered in the classical world to be part of the Quadrivium, a subset of the seven liberal arts considered essential for a free citizen to
Translations for Geometry
Kernerman English Multilingual Dictionary
a branch of mathematics dealing with the study of lines, angles etc
He is studying geometry.
Part 1: Create a set and call it A. Next, create a subset of A and call it B. Draw a Venn diagram to illustrate your two sets. Finally, find B′. To receive full credit for this problem, you must:
• Clearly state A.
• Clearly state B.
• Clearly state B′.
• Submit a copy of your Venn diagram.

Part 2: With identity theft on the rise, it is important that online websites, especially banking and credit card sites, require longer pin numbers. Complete each part of the assessment below. Genuine Bank’s website requires that your pi
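For Part 1, here is one minimal concrete instance in code (the particular values are my own, and the complement B′ is taken relative to A, since no larger universal set is specified in the problem):

```python
A = {1, 2, 3, 4, 5, 6}        # a set A (example values)
B = {2, 4, 6}                 # B, a subset of A
B_prime = A - B               # B' relative to A: everything in A not in B
assert B <= A                 # confirm B really is a subset of A
print(B_prime)                # {1, 3, 5}
```

In the Venn diagram this corresponds to the region inside the circle for A but outside the circle for B.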
"Likelihood"
A.W.F.Edwards, Gonville and Caius College, Cambridge, U.K.
CIMAT, Guanajuato, Mexico, March 1999
*This article is a preliminary version of what will be published in the
International Encyclopedia of the Social and Behavioral Sciences,
and was written while Professor Edwards was visiting CIMAT, March 8-19, 1999.
A statistical model for phenomena in the sciences or social sciences is a mathematical construct which associates a probability with each of the possible outcomes. If the data are discrete, such as
the numbers of people falling into various classes, the model will be a discrete probability distribution, but if the data consist of measurements or other numbers which may take any values in a
continuum, the model will be a continuous probability distribution. When two different models, or perhaps two variants of the same model differing only in the value of some adjustable parameter(s),
are to be compared as explanations for the same observed outcome, the probability of obtaining this particular outcome can be calculated for each and is then known as likelihood for the model or
parameter value(s) given the data.
Probabilities and likelihoods are easily (and frequently) confused, and it is for this reason that in 1921 R.A.Fisher introduced the new word: ‘What we can find from a sample is the likelihood of any
particular value of [the parameter], if we define the likelihood as a quantity proportional to the probability that, from a population having that particular value, the [observed sample] should be
obtained. So defined, probability and likelihood are quantities of an entirely different nature’.
The first difference to be noted is that the variable quantity in a likelihood statement is the hypothesis (a word which conveniently covers both the case of a model and of particular parameter
values in a single model), the outcome being that actually observed, in contrast to a probability statement, which refers to a variety of outcomes, the hypothesis being assumed and fixed. Thus a
manufacturer of dice may reasonably assert that the outcomes 1, 2, 3, 4, 5, 6 of a throw each have probability 1/6 on the hypothesis that his dice are well-balanced, whilst an inspector of casinos
charged with testing a particular die will wish to compute the likelihoods for various hypotheses about these probabilities on the basis of data from actual tosses.
The second difference arises directly from the first. If all the outcomes of a statistical model are considered their total probability will be 1 since one of them must occur and they are mutually
exclusive; but since in general hypotheses are not exhaustive – one can usually think of another one – it is not to be expected that the sum of likelihoods has any particular meaning, and indeed
there is no addition law for likelihoods corresponding to the addition law for probabilities. It follows that only relative likelihoods are informative, which is the reason for Fisher’s use of the
word ‘proportional’ in his original definition.
The most important application of likelihood is in parametric statistical models. Consider the simplest binomial example, such as that of the distribution of the number of boys r in families of size
n (an example which has played an important role in the development of statistical theory since the early eighteenth century). The probability of getting exactly r boys will be given by the binomial
distribution indexed by a parameter p, the probability of a male birth. Denote this probability of r boys by P(r|p), n being assumed fixed and of no statistical interest. Then we write
L(p||r) ∝ P(r|p)
for the likelihood of p given the particular value r, the double vertical line || being used to indicate that the likelihood of p is not conditional on r in the technical probability sense. In this
binomial example L(p||r) is a continuous function of the parameter p and is known as the likelihood function. When only two hypotheses are compared, such as two particular values of p in the present
example, the ratio of their likelihoods is known as the likelihood ratio.
The value of p which maximises L(p||r) for an observed r is known as the maximum-likelihood estimate of p and is denoted by p^; expressed in general form as a function of r it is known as the
maximum-likelihood estimator. Since the pioneering work of Fisher in the 1920s it has been known that maximum-likelihood estimators possess certain desirable properties under repeated-sampling
(consistency and asymptotic efficiency, and in an important class of models sufficiency and full efficiency), and for this reason they have come to occupy a central position in repeated-sampling (or
‘frequentist’) theories of statistical inference.
However, partly as a reaction to the unsatisfactory features which repeated-sampling theories display when used as theories of evidence, coupled with a reluctance to embrace the full-blown Bayesian
theory of statistical inference, likelihood is increasingly seen as a fundamental concept enabling hypotheses and parameter values to be compared directly.
The basic notion, championed by Fisher as early as 1912 whilst still an undergraduate at Cambridge but now known to have been occasionally suggested by other writers even earlier, is that the
likelihood ratio for two hypotheses or parameter values is to be interpreted as the degree to which the data support the one hypothesis against the other. Thus a likelihood ratio of 1 corresponds to
indifference between the hypotheses on the basis of the evidence in the data, whilst the maximum-likelihood value of a parameter is regarded as the best-supported value, other values being ranked by
their lesser likelihoods accordingly. This was formalised as the Law of Likelihood by Ian Hacking in 1965. Fisher’s final advocacy of the direct use of likelihood will be found in his last book
Statistical Methods and Scientific Inference (1956).
Such an approach, unsupported by any appeal to repeated-sampling criteria, is ultimately dependent on the primitive notion that the best hypothesis or parameter-value on the evidence of the data is
the one which would explain what has in fact been observed with the highest probability. The strong intuitive appeal of this can be captured by recognizing that it is the value which would lead, on
repeated sampling, to a precise repeat of the data with the least expected delay. In this sense it offers the best statistical explanation of the data.
In addition to specifying that relative likelihoods measure degrees of support, the likelihood approach requires us to accept that the likelihood function or ratio contains all the information we can
extract from the data about the hypotheses in question on the assumption of the specified statistical model – the so-called Likelihood Principle. It is important to include the qualification
requiring the specification of the model, first because the adoption of a different model might prove necessary later and secondly because in some cases the structure of the model enables inferences
to be made in terms of fiducial probability which, though dependent on the likelihood, are stronger, possessing repeated-sampling properties which enable confidence intervals to be constructed.
Though it would be odd to accept the Law of Likelihood and not the Likelihood Principle, Bayesians necessarily accept the Principle but not the Law, for although the likelihood is an intrinsic
component of Bayes’s Theorem, Bayesians deny that a likelihood function or ratio has any meaning in isolation. For those who accept both the Law and the Principle it is convenient to express the two
together as:
The Likelihood Axiom: Within the framework of a statistical model, all the information which the data provide concerning the relative merits of two hypotheses is contained in the likelihood ratio of
those hypotheses on the data, and the likelihood ratio is to be interpreted as the degree to which the data support the one hypothesis against the other (Edwards, 1972).
The likelihood approach has many advantages apart from its intuitive appeal. It is straightforward to apply because the likelihood function is usually simple to obtain analytically or easy to compute
and display. It leads directly to the important theoretical concept of sufficiency according to which the function of the data which is the argument of the likelihood function itself carries the
information. This reduction of the data is often a simple statistic such as the sample mean. Moreover, the approach illuminates many of the controversies surrounding repeated-sampling theories of
inference, especially those concerned with ancillarity and conditioning. Birnbaum (1962) argued that it was possible to derive the Likelihood Principle from the concepts of sufficiency and
conditionality, but to most people the Principle itself seems the more primitive concept and the fact that it leads to notions of sufficiency and conditioning seems an added reason for accepting it.
Likelihoods are multiplicative over independent data sets referring to the same hypotheses or parameters, facilitating the combination of information. For this reason log-likelihood is often
preferred because information is then combined by addition. In the field of genetics, where likelihood theory is widely applied, the log-likelihood with the logarithms taken to the base 10 is known
as a LOD, but for general use natural logarithms to the base e are to be preferred, in which case log-likelihood is sometimes called support. Most importantly, the likelihood approach is compatible
with Bayesian statistical inference in the sense that the posterior Bayes distribution for a parameter is, by Bayes’s Theorem, found by multiplying the prior distribution by the likelihood function.
Thus when, in accordance with Bayesian principles, a parameter can itself be given a probability distribution (and this assumption is the Achilles’ heel of Bayesian inference) all the information the
data contain about the parameter is transmitted via the likelihood function in accordance with the Likelihood Principle. It is indeed difficult to see why the medium through which such information is
conveyed should depend on the purely external question of whether the parameter may be considered to have a probability distribution, and this is another powerful argument in favour of the Principle.
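This compatibility can be sketched with a simple grid approximation (all numbers below are invented for illustration, and a uniform prior is assumed): the posterior is the prior multiplied pointwise by the likelihood and renormalised, so the likelihood function is the only conduit by which the data enter.

```python
# Hypothetical grid of values for a binomial success probability p;
# observed data: 7 successes in 10 trials.
grid = [i / 100 for i in range(1, 100)]
prior = [1.0 / len(grid)] * len(grid)        # assumed uniform prior
like = [p**7 * (1 - p)**3 for p in grid]     # likelihood (constant factor omitted)

post = [pr * li for pr, li in zip(prior, like)]
total = sum(post)
post = [x / total for x in post]             # posterior ∝ prior × likelihood

# With a uniform prior the posterior mode sits at the maximum-likelihood
# value p = 0.7.
mode = grid[post.index(max(post))]
print(mode)  # → 0.7
```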
In the case of a single parameter the likelihood function or the log-likelihood function may easily be drawn, and if it is unimodal limits may be assigned to the parameter, analogous to the
confidence limits of repeated-sampling theory. Calling the log-likelihood the support, m-unit support limits are the two parameter values astride the maximum at which the support is m units less than
at the maximum. For the simplest case of estimating the mean of a normal distribution of known variance, the 2-unit support limits fall at the sample mean ± 2σ/√n and so correspond closely to the 95% confidence limits, which are at the sample mean ± 1.96σ/√n.
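A small numerical sketch of this comparison (the sample summary figures are invented): for a normal mean with known σ the support is quadratic, so the m-unit limits can be found in closed form by solving for a drop of exactly m units.

```python
import math

def support(mu, xbar, sigma, n):
    """Log-likelihood (support) for the mean of a normal with known sigma,
    up to an additive constant."""
    return -n * (xbar - mu) ** 2 / (2 * sigma ** 2)

xbar, sigma, n = 5.0, 2.0, 25              # hypothetical sample summary
m = 2                                      # m-unit support limits
half_width = sigma * math.sqrt(2 * m / n)  # solve "support drop = m" analytically
lo, hi = xbar - half_width, xbar + half_width

# The drop in support at each limit is exactly m units...
assert abs(support(xbar, xbar, sigma, n) - support(lo, xbar, sigma, n) - m) < 1e-9

# ...and the 2-unit limits sit close to the 95% confidence half-width
# 1.96*sigma/sqrt(n).
print(round(half_width, 3), round(1.96 * sigma / math.sqrt(n), 3))  # → 0.8 0.784
```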
The representation of a support function for more than two parameters naturally encounters the usual difficulties associated with the visualisation of high-dimensioned spaces, and a variety of
methods have been suggested to circumvent the problem. It will often be the case that information is sought about some subset of the parameters, the others being considered to be nuisance parameters
of no particular interest. In fortunate cases it may be possible to restructure the model so that the nuisance parameters are eliminated, and in all cases in which the support function is quadratic
(or approximately so) the dimensions corresponding to the nuisance parameters can simply be ignored.
Several other approaches are in use to eliminate nuisance parameters. Marginal likelihoods rely on finding some function of the data which does not depend on them; notable examples involve the normal
distribution, where a marginal likelihood for the variance can be found from the distribution of the sample variance which is independent of the mean, and a marginal likelihood for the mean can
similarly be found using the t-distribution. Profile likelihoods, also called maximum relative likelihoods, are found by replacing the nuisance parameters by their maximum-likelihood estimates at
each value of the parameters of interest. It is easy to visualise from the case of two parameters why this is called a profile likelihood.
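A minimal sketch of a profile likelihood (the sample below is invented): for a normal sample with the mean as the parameter of interest, the variance is a nuisance parameter and is replaced by its maximum-likelihood estimate at each candidate value of the mean.

```python
import math

data = [4.1, 5.0, 5.9, 4.6, 5.4]   # hypothetical sample

def profile_loglik(mu, xs):
    """Profile log-likelihood for the mean of a normal sample: the nuisance
    variance is replaced by its ML estimate at each mu."""
    n = len(xs)
    sigma2_hat = sum((x - mu) ** 2 for x in xs) / n  # ML variance given mu
    return -n / 2 * (math.log(sigma2_hat) + 1)       # additive constants dropped

# The profile is maximised at the sample mean.
grid = [i / 1000 for i in range(3000, 7001)]
best = max(grid, key=lambda mu: profile_loglik(mu, data))
xbar = sum(data) / len(data)
print(round(best, 3), round(xbar, 3))  # → 5.0 5.0
```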
Naturally, a solution can always be found by strengthening the model through adopting particular values for the nuisance parameters, just as a Bayesian solution using integrated likelihoods can
always be found by adopting a prior distribution for them and integrating them out, but such assumptions do not command wide assent. When a marginal likelihood solution has been found it may
correspond to a Bayesian integrated likelihood for some choice of prior, and such priors are called neutral priors to distinguish them from so-called uninformative priors for which no comparable
justification exists. However, in the last analysis there is no logical reason why nuisance parameters should be other than a nuisance, and procedures for mitigating the nuisance must be regarded as ad hoc.
All the common repeated-sampling tests of significance have their analogues in likelihood theory, and in the case of the normal model it may seem that only the terminology has changed. At first sight
an exception seems to be the χ² goodness-of-fit test, where no alternative hypothesis is implied. However, this is deceptive, and a careful analysis shows that there is an implied alternative
hypothesis which allows the variances of the underlying normal model to depart from their multinomial values. In this way the paradox of small values of χ² being interpreted as meaning that the
model is ‘too good’ is exposed, for in reality they mean that the model is not good enough and that one with a more appropriate variance structure will have a higher likelihood. Likelihood ratio
tests are based on the distribution under repeated-sampling of the likelihood ratio and are therefore not part of likelihood theory.
When likelihood arguments are applied to models with continuous sample spaces it may be necessary to take into account the approximation involved in representing data, which are necessarily discrete,
by a continuous model. Neglect of this can lead to the existence of singularities in the likelihood function or other artifacts which a more careful analysis will obviate.
It is often argued that in comparing two models by means of a likelihood ratio, allowance should be made for any difference in the number of parameters by establishing a ‘rate of exchange’ between an
additional parameter and the increase in log-likelihood expected. The attractive phrase ‘Occam’s bonus’ has been suggested for such an allowance (J.H.Edwards, 1969). However, the proposal seems only
to have a place in a repeated-sampling view of statistical inference, where a bonus such as that suggested by Akaike’s information criterion is sometimes canvassed.
The major application of likelihood theory so far has been in human genetics, where log-likelihood functions are regularly drawn for recombination fractions (linkage values) (see Ott, 1991), but even
there a reluctance to abandon significance-testing altogether has led to a mixed approach. Other examples, especially from medical fields, will be found in the books cited below.
Although historically the development of a likelihood approach to statistical inference was almost entirely due to R.A. Fisher, it is interesting to recall that the Neyman–Pearson approach to
hypothesis testing derives ultimately from a remark of ‘Student’s’ (W.S. Gosset) in a letter to E.S. Pearson in 1926 that ‘if there is any alternative hypothesis which will explain the occurrence of
the sample with a more reasonable probability … you will be very much more inclined to consider that the original hypothesis is not true’, a direct likelihood statement (quoted in McMullen and
Pearson, 1939). Indeed, it has been remarked that ‘Just as support [log-likelihood] is Bayesian inference without the priors, so it turns out to be Neyman–Pearson inference without the ‘errors’
(Edwards, 1972).
The literature on likelihood is gradually growing as an increasing number of statisticians become concerned at the inappropriate use of significance levels, confidence intervals and other
repeated-sampling criteria to represent evidence. The movement is most advanced in biostatistics as may be seen from books such as Clayton and Hills (1993) and Royall (1997), but general texts such
as Lindsey (1995) exist as well. Amongst older books Cox and Hinkley (1974) contains much that is relevant to likelihood, whilst Edwards (1972, 1992) was the first book to advocate a purely
likelihood approach, and is rich in relevant quotations from Fisher’s writings. The history of likelihood is treated by Edwards (1974; reprinted in Edwards 1992).
Birnbaum, A. (1962) On the foundations of statistical inference. J. Amer. Statist. Ass. 57, 269–326.
Clayton, D.G. and Hills, M. (1993) Statistical Models in Epidemiology. Oxford University Press.
Cox, D.R. and Hinkley, D.V. (1974) Theoretical Statistics. London: Chapman & Hall.
Edwards, A.W.F. (1972) Likelihood. Cambridge University Press.
Edwards, A.W.F. (1974) The history of likelihood. Int. Statist. Rev. 42, 9–15.
Edwards, A.W.F. (1992) Likelihood. Baltimore: Johns Hopkins University Press.
Edwards, J.H. (1969) In: Computer Applications in Genetics, ed. N.E.Morton. Honolulu: University of Hawaii Press.
Fisher, R.A. (1912) On an absolute criterion for fitting frequency curves. Mess. Math. 41, 155–60.
Fisher, R.A. (1921) On the ‘probable error’ of a coefficient of correlation deduced from a small sample. Metron 1 pt 4, 3–32.
Fisher, R.A. (1956) Statistical Methods and Scientific Inference. Edinburgh: Oliver & Boyd.
Hacking, I. (1965) Logic of Statistical Inference. Cambridge University Press.
Lindsey, J.K. (1995) Introductory Statistics: A Modelling Approach. Oxford: Clarendon Press.
McMullen, L. and Pearson, E.S. (1939) William Sealy Gosset, 1876–1937. Biometrika 30, 205–50.
Royall, R. (1997) Statistical Evidence: A Likelihood Paradigm. London: Chapman & Hall.
natural language processing blog
Searn is my baby and I'd like to say it can solve (or at least be applied to) every problem we care about. This is of course not true. But I'd like to understand the boundary between reasonable and
unreasonable. One big apparent weakness of Searn as it stands is that it appears applicable only to fully supervised problems. That is, we can't do hidden variables, we can't do unsupervised learning.
I think this is a wrong belief. It's something I've been thinking about for a long time and I think I finally understand what's going on. This post is about showing that you can actually recover
forward-backward training of HMMs as an instance of Searn with a particular choice of base classifier, optimal policy, loss function and approximation method. I'll not prove it (I haven't even done
this myself), but I think that even at a hand-waving level, it's sufficiently cool to warrant a post.
I'm going to have to assume you know how Searn works in order to proceed. The important aspect is essentially that we train on the basis of an optimal policy (which may be stochastic) and some loss
function. Typically I've made the "optimal policy" assumption, which means that when computing the loss for a certain prediction along the way, we approximate the true expected loss with the loss
given by the optimal policy. This makes things efficient, but we can't do it in HMMs.
So here's the problem set up. We have a sequence of words, each of which will get a label (for simplicity, say the labels are binary). I'm going to treat the prediction task as predicting both the
labels and the words. (This looks a lot like estimating a joint probability, which is what HMMs do.) The search strategy will be to first predict the first label, then predict the first word, then
predict the second label and so on. The loss corresponding to an entire prediction (of both labels and words) is just going to be the Hamming loss over the words, ignoring the labels. Since the loss
doesn't depend on the labels (which makes sense because they are latent so we don't know them anyway), the optimal policy has to be agnostic about their prediction.
Thus, we set up the optimal policy as follows. For predictions of words, the optimal policy always predicts the correct word. For predictions of labels, the optimal policy is stochastic. If there are
K labels, it predicts each with probability 1/K. Other optimal policies are possible and I'll discuss that later.
Now, we have to use a full-blown version of Searn that actually computes expected losses as true expectations, rather than with an optimal policy assumption. Moreover, instead of sampling a single
path from the current policy to get to a given state, we sample all paths from the current policy. In other words, we marginalize over them. This is essentially akin to not making the "single sample"
assumption on the "left" of the current prediction.
So what happens in the first iteration? Well, when we're predicting the Nth word, we construct features over the current label (our previous prediction) and predict. Let's use a naive Bayes base
classifier. But we're computing expectations to the left and right, so we'll essentially have "half" an example for predicting the Nth word from state 0 and half an example for predicting it from
state 1. For predicting the Nth label, we compute features over the previous label only and again use a naive Bayes classifier. The examples thus generated will look exactly like a training set for
the first maximization in EM (with all the expectations equal to 1/2). We then learn a new base classifier and repeat.
In the second iteration, the same thing happens, except now when we predict a label, there can be an associated loss due to messing up future word predictions. In the end, if you work through it, the
weight associated with each example is given by an expectation over previous decisions and an expectation over future decisions, just like in forward-backward training. You just have to make sure
that you treat your learned policy as stochastic as well.
So with this particular choice of optimal policy, loss function, search strategy and base learner, we recover something that looks essentially like forward-backward training. It's not identical
because in true F-B, we do full maximization each time, while in Searn we instead take baby steps. There are two interesting things here. First, this means that in this particular case, where we
compute true expectations, somehow the baby steps aren't necessary in Searn. This points to a potential area to improve the algorithm. Second, and perhaps more interesting, it means that we don't
actually have to do full F-B. The Searn theorem holds even if you're not computing true expectations (you'll just wind up with higher variance in your predictions). So if you want to do, eg., Viterbi
F-B but are worried about convergence, this shows that you just have to use step sizes. (I'm sure someone in the EM community must have shown this before, but I haven't seen it.)
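For concreteness, here is a minimal sketch (not from the post) of the forward-backward expectations being discussed, for a hypothetical 2-state HMM whose parameters are entirely made up; the resulting per-position posteriors are exactly the weights an EM-style M-step would use.

```python
# Forward-backward posteriors for a tiny 2-state HMM (all numbers invented).
init = [0.5, 0.5]                           # P(state_0)
trans = [[0.8, 0.2], [0.3, 0.7]]            # P(state_t | state_{t-1})
emit = [{'a': 0.9, 'b': 0.1},               # P(word | state)
        {'a': 0.2, 'b': 0.8}]
obs = ['a', 'b', 'a']

T, K = len(obs), len(init)
alpha = [[0.0] * K for _ in range(T)]       # forward probabilities
beta = [[0.0] * K for _ in range(T)]        # backward probabilities

for k in range(K):
    alpha[0][k] = init[k] * emit[k][obs[0]]
for t in range(1, T):
    for k in range(K):
        alpha[t][k] = emit[k][obs[t]] * sum(alpha[t-1][j] * trans[j][k] for j in range(K))

for k in range(K):
    beta[T-1][k] = 1.0
for t in range(T - 2, -1, -1):
    for k in range(K):
        beta[t][k] = sum(trans[k][j] * emit[j][obs[t+1]] * beta[t+1][j] for j in range(K))

# Posterior P(state_t = k | obs): the per-position weights an M-step would use.
Z = sum(alpha[T-1][k] for k in range(K))
gamma = [[alpha[t][k] * beta[t][k] / Z for k in range(K)] for t in range(T)]
for row in gamma:
    assert abs(sum(row) - 1.0) < 1e-12      # each position's weights sum to 1
print([round(g, 3) for g in gamma[0]])      # → [0.693, 0.307]
```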
Anyway, I'm about 90% sure that the above actually works out if you set about to prove it. Assuming its validity, I'm about 90% sure it holds for EM-like structured prediction problems in general. If
so, this would be very cool. Or, at least I would think it's very cool :).
what is the given amount converted to the given unit ? 195 s; minutes
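There are 60 seconds in one minute, so divide by 60: 195 s ÷ 60 = 3.25 minutes. A quick check of the arithmetic in Python:

```python
seconds = 195
minutes = seconds / 60   # 60 seconds per minute
print(minutes)           # → 3.25
```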
Weight-biased leftist trees and modified skip lists
, 2005
Cited by 2 (2 self)
Abstract. The Standard Template Library (STL) is a library of generic algorithms and data structures that has been incorporated in the C++ standard and ships with all modern C++ compilers. In the CPH
STL project the goal is to implement an enhanced edition of the STL. The priority-queue class of the STL is just an adapter that makes any resizable array to a queue in which the elements stored are
arranged according to a given ordering function. In the C++ standard no compulsory support for the operations delete(), increase(), or meld() is demanded even if those are utilized in many algorithms
solving graph-theoretic or geometric problems. In this project, the goal is to implement a CPH STL extension of the priority-queue class which provides, in addition to the normal priority-queue
functionality, the operations delete(), increase(), and meld(). To make the first two of these operations possible, the class must also guarantee that external references to compartments inside the
data structure are kept valid at all times.
, 1999
The speed of computer processors is growing rapidly in comparison to the speed of DRAM chips. The cost of a cache miss, measured in processor clock cycles, is increasing exponentially, and this is
quickly becoming a bottleneck for indexing in main memory. We study several indexing data structures on a simulated architecture and show that the relative performance of cache-conscious indexing
structures is increasing with memory latency. In addition, we show that top-down algorithms for maintaining these structures reduce the total instruction count, leading to a modest improvement in
execution time over the corresponding bottom-up algorithms.
this paper is to demonstrate the generality of two techniques used in [6] to develop an MDEPQ representation from an MPQ representation - height biased leftist trees. These methods - total
correspondence and leaf correspondence - may be used to arrive at efficient DEPQ and MDEPQ data structures from PQ and MPQ data structures such as the pairing heap [8; 18], Binomial and Fibonacci
heaps [9], and Brodal's FMPQ [2] which also provide efficient support for the operation: --Delete(Q,p): delete and return the element located at p We begin, in Section 2, by reviewing a rather
straightforward way, dual priority queues, to obtain a (M)DEPQ structure from a (M)PQ structure. This method [2; 6] simply puts each element into both a minPQ and a maxPQ. In Section 3, we describe
the total correspondence method and in Section 4, we describe leaf correspondence. Both sections provide examples of PQs and MPQs and the resulting DEPQs and MDEPQs. Section 5 gives complexity
results. In Section 6, we provide the result of experiments that compare the performance of the MDEPQs based on height biased leftist tree [7], pairing heaps [8; 18], and FMPQs [2]. For reference
purpose, we also provide run times for the splay tree data structure [16]. Although splay trees were not specifically designed to represent DEPQs, it is easy min Heap max Heap Fig. 1. Dual heap
structure to use them for this purpose. Note that splay trees do not provide efficient support for the Meld operation
Abstract. A range of attacks on network components, such as algorithmic denial-of-service attacks and cryptanalysis via timing attacks, are enabled by data structures for which an adversary can
predict the durations of operations that he will induce on the data structure. In this paper we introduce the problem of designing data structures that confound an adversary attempting to predict the
timing of future operations he induces, even if he has adaptive and exclusive access to the data structure and the timings of past operations. We also design a data structure for implementing a set
(supporting membership query, insertion, and deletion) that exhibits timing unpredictability and that retains its efficiency despite adversarial attacks. To demonstrate these advantages, we develop a
framework by which an adversary tracks a probability distribution on the data structure’s state based on the timings it emitted, and infers invocations to meet his attack goals. 1
Abstract. We present the Skip lifts, a randomized dictionary data structure inspired from the skip list [Pugh ’90, Comm. of the ACM]. Similarly to the skip list, the skip lifts has the finger search property: Given a pointer to an arbitrary element f, searching for an element x takes expected O(log δ) time where δ is the rank distance between the elements x and f. The skip lifts uses nodes of O(1) worst-case size and it is one of the few efficient dictionary data structures that performs an O(1) worst-case number of structural changes during an update operation. Given a pointer to the element to be removed from the skip lifts, the deletion operation takes O(1) worst-case time.
Kettering - MASTERING - PHYS
MasteringPhysics: Assignment Print View — Physics 2, spring 2007. 26. DC Circuits. Assignment is due at 2:00am on Wednesday, February 21, 2007. Credit for problems submitted late w
Kettering - MASTERING - PHYS
27. Magnetic Field and Magnetic Forces. Assignment is due at 2:00am on Wednesday, February 28, 2007. Credit for pr
Kettering - MASTERING - PHYS
28. Sources of Magnetic Field. Assignment is due at 2:00am on Wednesday, March 7, 2007. Credit for problems submitted late will decrease to 0% after th
Kettering - MASTERING - PHYS
29a. Electromagnetic Induction. Assignment is due at 2:00am on Wednesday, March 7, 2007. Credit for problems submitted late will decrease to 0% after t
Kettering - MASTERING - PHYS
30. Inductance. Assignment is due at 2:00am on Wednesday, March 14, 2007. Credit for problems submitted late will decrease to 0% after the deadline
Kettering - MASTERING - PHYS
31. Alternating Current Circuits. Assignment is due at 2:00am on Wednesday, March 21, 2007. Credit for problems submitted late will decrease to 0% afte
Kettering - MASTERING - PHYS
32. Electromagnetic Waves. Assignment is due at 2:00am on Wednesday, March 28, 2007. Credit for problems submitted late will decrease to 0% after t
Kettering - MASTERING - PHYS
33. The Nature and Propagation of Light. Assignment is due at 2:00am on Wednesday, January 17, 2007. Credit for pr
Kettering - MASTERING - PHYS
34. Geometric Optics and Optical Instruments. Assignment is due at 2:00am on Wednesday, January 17, 2007. Credit f
Kettering - MASTERING - PHYS
35. Interference. Assignment is due at 2:00am on Wednesday, January 17, 2007. Credit for problems submitted late w
Kettering - MASTERING - PHYS
36. Diffraction. Assignment is due at 2:00am on Wednesday, January 17, 2007. Credit for problems submitted late wi
Kettering - MASTERING - PHYS
Chapte 24. Ele r ctric Pote ntial24.1. What is Physics? 24.2. Ele ctric Pote ntial Ene rgy 24.3. Ele ctric Pote ntial 24.4. Equipote ntial S urface s 24.5. C alculating thePote ntial fromtheFie ld
24.6. Pote ntial Dueto a Point C harge 24.7. Pote ntial D
Kettering - MASTERING - PHYS
Electric Potential Electric Potential Energy versus Electric Potential Gravitational Force and Potential Energy First we review the force and potential energy of an object of mass gravitational field
that points downward (in the near the earth's surface.
Kettering - MASTERING - PHYS
PHYS-225Homework 225CFall 2009Dr. RussellPart 1: Mean and Uncertainty (6 pts) The data set at right shows a set of data points representing measurements of the electric field at a certain location.1.
Enter this data into an Excel spreadsheet. 2. Calc
Kettering - MASTERING - PHYS
23.4. Model: Light rays travel in straight lines. Also, the red and green light bulbs are point sources.Visualize:Solve:The width of the aperture is w = 1 m. From the geometry of the figure for red
light,w2 x = x = 2w = 2 (1.0 m ) = 2.0 m 1m 3m + 1mT
Kettering - MASTERING - PHYS
24.40. Model: Assume thin lenses and treat each as a simple magnifier with M = 25cm/f .Visualize: Equation 24.10 gives the magnification of a microscope.M = mobjM eye = L 25cm f obj f eyeSolve: (a)
The more powerful lens (4) with the shorter focal len
Kettering - MASTERING - PHYS
22.2. Model: Two closely spaced slits produce a double-slit interference pattern.Visualize: The interference pattern looks like the photograph of Figure 22.3(b). It is symmetrical, with the m = 2
fringes on both sides of and equally distant from the cent
Kettering - MASTERING - PHYS
MasteringPhysics10/19/08 6:50 PMAssignment Display Mode:View Printable Answers[PPhysys 202 Fall08HW4Due at 11:00pm on Wednesday, October 8, 2008View Grading DetailsIntroduction to Electric
CurrentDescription: Mostly conceptual questions about el
Kettering - MASTERING - PHYS
26.14. Model: Model the plastic spheres as point charges.Visualize:Solve:(a) The charge q1 = 50.0 nC exerts a force F1 on 2 on q2 = 50.0 nC to the right, and the charge q2 exerts9 2 2 9 9 K q1 q2 (
9.0 10 N m /C ) ( 50.0 10 C ) ( 50.0 10 C ) = = 0.056
Kettering - MASTERING - PHYS
27.10. Model: The rod is thin, so assume the charge lies along a line. Visualize:Solve: The force on charge q is F = qErod . From Example 27.3, the electric field a distance r from the center of a
charged rod isErod = Thus, the force is1Q4 0 r r 2 +
Kettering - MASTERING - PHYS
28.4. Model: The electric flux flows out of a closed surface around a region of space containing a netpositive charge and into a closed surface surrounding a net negative charge. Visualize: Please
refer to Figure EX28.4. Let A be the area in m2 of each o
Kettering - MASTERING - PHYS
29.28. Model: The electric potential at the dot is the sum of the potentials due to each charge.Visualize: Please refer to Figure EX29.28. Solve: The electric potential at the dot isV=1 q1 1 q2 1 q3
+ + 4 0 r1 4 0 r2 4 0 r3 5.0 109 C 5.0 109 C q = ( 9
Kettering - MASTERING - PHYS
MasteringPhysics5/10/09 3:33 PMAssignment Display Mode:View Printable Answers[ Print ]phy260S09HW9Due at 11:00pm on Thursday, April 16, 2009View Grading DetailsCharged Aluminum SpheresDescription:
Find the number of electrons in an aluminum sphe
Kettering - MASTERING - PHYS
MasteringPhysics5/10/09 3:36 PMAssignment Display Mode:View Printable Answers[ Print ]phy260S09HW10Due at 11:00pm on Tuesday, April 28, 2009View Grading DetailsCharged RingDescription: Find the
electric field from a uniformly charged ring (quali
Kettering - MASTERING - PHYS
MasteringPhysics5/10/09 3:39 PMAssignment Display Mode:View Printable Answers[ Print ]phy260S09HW11Due at 11:00pm on Tuesday, May 5, 2009View Grading DetailsEnergy Stored in a Charge
ConfigurationDescription: Find the work required to assemble f
Kettering - MASTERING - PHYS
MasteringPhysics5/10/09 3:42 PMAssignment Display Mode:View Printable Answers[ Print ]phy260S09HW12Due at 11:00pm on Tuesday, May 12, 2009View Grading DetailsCapacitors in SeriesDescription: Contains
several questions that help practice basic ca
Kettering - MASTERING - PHYS
MasteringPhysics5/10/09 3:55 PMAssignment Display Mode:View Printable Answersphy260S09HW13Due at 12:00am on Monday, June 1, 2009View Grading DetailsHeating a Water BathDescription: Calculate the time
required for a resistor to heat a water bath t
Ohio State - BUSFIN - 600
MID TERM ESSAY QUESTIONS A. Variable life insurance has the premiums invested in separate accounts and the face value may increase if the investment results are favorable. Prospects are those who
desire life insurance at a fixed, level premium but want to
Ohio State - BUSFIN - 600
BF 640 AUTUMN 2008 FINAL MULTIPLE CHOICE EXAM INSTRUCTOR: CHARLES A. BRYAN NAME: _ Place your answer on the answer sheet and turn the answer sheet in. You may keep the exam. Answers will be posted on
Carmen within three days. 1. Risk can be defined as a.
Ohio State - BUSFIN - 600
BF 640 AUTUMN 2008 MID TERM EXAM MULTIPLE CHOICE QUESTIONS ANSWER SHEET NAME: _ Place your answer on the answer sheet and turn the answer sheet in. You may keep the exam. Answers will be posted on
Carmen. 1. Risk can be defined as a. Uncertainty concernin
Indiana - MATH - 311
University of Alaska Southeast - CIS - 29977
Fall 2003 Hulstein & Hulstein Intermediate Accounting Chapter # 9 Quiz B Name _1. Designated market value a. is always the middle value of replacement cost, net realizable value, and net
realizable value less a normal profit margin. b. should always be e
U. Houston - ASSEMBLY - 0356
Laboratory Short CourseIntroduction to CodeWarrior Running Assembly Programs on the Microcontrollerwww.freescale.com/universityprogramsFreescale and the Freescale logo are trademarks of Freescale
Semiconductor, Inc. All other product or service names a
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Clarkson - ES - 220
Academy of Art University - ARTS - arts 101
Note that the following lectures include animations and PowerPoint effects such as fly ins and transitions that require you to be in PowerPoint's Slide Show mode (presentation mode).Chapter 4The
Origin of Modern AstronomyGuidepostThe sun, moon, and pl
DeAnza College - ARTS - arts 101
Review question from chapter 1:3, What is the difference between our solar system, our galaxy a nd the universe? Our solar system is made up of the Sun (the nearest star), and bodies (like the
planets) which are gravitationally bound to it. It is much sm
New York College of Podiatric Medicine - AUD - 5721
1Chapter 1 Introduction to Federal Taxation and Understanding the Federal Tax LawSUMMARY OF CHAPTERThis chapter presents information on the magnitude of federal taxes collected and on taxpayer
obligations. Also, the history of U.S. federal taxation is
New York College of Podiatric Medicine - ACT - 3243
9Chapter 2 Tax Research, Practice, and ProcedureSUMMARY OF CHAPTERTax practice involves the preparation of tax returns and representation of clients before the audit or appellate divisions of the
Internal Revenue Service. To become a competent professi
Cornell - HADM - 3301
Challenge1 Afterrunningchallenge1withandwithoutbatching,itbecameimmediatelyclearthatBenihana mustusebatchingtoproducethemostprofitableresults.Inoursimulationwithbatchingthe
averageprofitwas360.00.Withoutbatchingprofitwasintheredat139.00.Fromthetableyou ca
Cornell - H ADM - 301
The only way to address the bottleneck issue is to get another oven. Kristen has two options, she can either rent an oven from her neighbor or buy another one. If she were to rent another oven, the
process would change. Kristen would have to run back and
Cornell - H ADM - 3387
HADM 387 9/16/08 Sexual HarassmentThe original title VII did not include sex so you could technically exclude women from your restaurant There is very little history on sex in the title 1986 Sex.H
starts under federal law. Bill passed by congress added
Lawson State - BUSINESS - Acct113
Quiz 4 (20 points) 1. Which of the following circumstances creates a future taxable amount? A. Service fees collected in advance from customers: taxable when received, recognized for f inancial
reporting when earned. B. Accrued compensation costs for futu
University of Texas - M - M408K
Frameset OverviewThe Blackboard Learn environment includes a header frame with images and buttons customized by the institution and tabs that navigate to different areas within Blackboard Learn.
Clicking on a tab will open that area in the content frame.
University of Texas - M - M408K
Frameset OverviewThe Blackboard Learn environment includes a header frame with images and buttons customized by the institution and tabs that navigate to different areas within Blackboard Learn.
Clicking on a tab will open that area in the content frame.
University of Texas - M - M408K
amaefule (bca357) HW02 Gilbert (57195) This print-out should have 13 questions. Multiple-choice questions may continue on the next column or page nd all choices before answering. 001 10.0 points1t
(seconds) 0 1 2 3 4 5 s (feet) 0 20 28 36 50 62 Find the
Minnesota - BIOLOGY - A&P
BIOL 237 Case History 1 A 27 year old man, who works outdoors, notices a growth on the skin which includes a darkly pigmented spot surrounded by a halo of inflamed skin. This man has a very fair
complexion, with numerous freckles and blond hair. Because o
Minnesota - BIOLOGY - A&P
Angelo Crespin 09/09/09 237.001Connective Tissue HistologyTissue TypeCharacteristics 1. Well Vascularized 2. Contains fibers, cell types, ground substances 3. Most abundant connective
tissueFunctionsLocationsAreolar1. Contain Blood, vessels, and ne
Minnesota - BIOLOGY - A&P
Angelo Crespin 09/09/09 237.001 Epithelial Tissue HistologyTissue TypeCharacteristicsFunctions 1. Thinnest tissue in the body. 2. Forms semipermeable membrane in lungs and capillaries. 3. Secretes
serous fluid in serous membrane. 1. Goblet cells secret
Minnesota - BIOLOGY - A&P
Epithelial Tissue Histology - Template Tissue Type Simple Squamous Epithelium Characteristics Single Layered Thin and Flat Functions 1. Thinnest tissue in the body. 2. Forms semipermeable membrane in
lungs and capillaries. 3. Secretes serous fluid in sero
Minnesota - BIOLOGY - A&P
Angelo Crespin 10/30.09 Biology 237.001Stimulus-Contracting Coupling1) An impulse arrives at the axon terminus generating an action potential which is propagated along the sarcolemma and down the T
tubules 2) Next, the action potential triggers Ca2+ rel | {"url":"http://www.coursehero.com/file/5655048/25/","timestamp":"2014-04-20T20:59:23Z","content_type":null,"content_length":"65677","record_id":"<urn:uuid:6d2ffe94-168c-4ff8-a554-1b4c5c875905>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
and Jörg Siekmann. Unification theory, 1995
We demonstrate an equivalence between the rank 2 fragments of the polymorphic lambda calculus (System F) and the intersection type discipline: exactly the same terms are typable in each system. An
immediate consequence is that typability in the rank 2 intersection system is DEXPTIME-complete. We introduce a rank 2 system combining intersections and polymorphism, and prove that it types exactly
the same terms as the other rank 2 systems. The combined system suggests a new rule for typing recursive definitions. The result is a rank 2 type system with decidable type inference that can type
some interesting examples of polymorphic recursion. Finally, we discuss some applications of the type system in data representation optimizations such as unboxing and overloading.
June 6, 2012
Thermodynamics of Global Warming
What percentage change in global mean temperature (GMT) has occurred since the Industrial Revolution began? This can be calculated only by using an absolute temperature scale. Answer = +0.3%. Can
this be so alarming to Al Gore? Indeed, the Kelvin absolute scale for temperature is one of only seven basic units of measure recognized in the International System of Units. Temperature measures the
heat content of a substance -- a simple linear relation so long as the zero of the temperature scale is properly placed. Heat itself is a form of energy measured in joules, calories, or BTUs. The
thermodynamic science of heat flow requires the use of Kelvin because Kelvin eliminates the problem of negative temperature readings encountered with the Celsius or Fahrenheit scales. Heat can flow
into and out of any mass, be it solid, liquid, or gas. Reduction of heat makes it colder. There is no such thing as negative heat (anti-heat?). Therefore, negative temperature conveys no meaning.
Absolute zero temperature occurs at -273.15º C, or -459.67º F, and signifies a state of matter displaying complete absence of heat. The PBS NOVA television program broadcast an excellent introduction to the science of cryogenics and its fascinating history. Anders Celsius invented his centigrade scale in the mid-18^th century. The zero of the Celsius scale is the temperature of ice-water (32º F), while
100º C is the boiling point (212º F) of water at sea level -- both chosen by Celsius because they are easily reproduced as experimental temperature calibration standards in laboratories around the
world. In 1848, Lord Kelvin invented his eponymous thermodynamic temperature scale which employs the same "degree" as the Celsius scale but shifts the zero point to absolute zero^1. Therefore, any
temperature value recorded in Celsius can be easily converted to Kelvin just by adding 273.15.
This measurement scale vastly simplifies the mathematics of, for example, equations of state for an ideal compressible permanent gas such as air (Figure 1). These are the same equations that govern
the evolution of the climate and necessarily are at the heart of any computer-numerical algorithms developed for climate modeling.
Figure 1: Equations of State for air and other gases. T = Absolute temperature (K)
Mole is a specific number of molecules = Avogadro's number = 6.022 × 10^23
Thermal properties of air are dominated by 78% nitrogen + 21% oxygen permanent gas content.
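The role of absolute temperature in these equations of state can be illustrated with a short sketch of the ideal gas law PV = nRT; the class and method names and the worked numbers below are standard textbook values chosen for illustration, not taken from the article's figure.

```java
public class IdealGas {
    static final double R = 8.314; // universal gas constant, J/(mol*K)

    // Pressure (Pa) of n moles of an ideal gas in volume V (m^3) at absolute temperature T (K).
    // T must be in kelvin: at T = 0 K the pressure vanishes, which would be
    // meaningless if T were mistakenly given in Celsius.
    static double pressure(double nMoles, double volumeM3, double tempK) {
        return nMoles * R * tempK / volumeM3;
    }

    public static void main(String[] args) {
        // One mole of gas in 22.4 L at the ice point (273.15 K) is about one atmosphere.
        System.out.printf("%.0f Pa%n", pressure(1.0, 0.0224, 273.15));
    }
}
```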
Global Warming: The common use of the Celsius scale in reports of "Climate Change" is therefore improper. This misapplication obscures essential information conveyed by the Thermodynamic Temperature
scale. As illustrated in Figure 2 -- reproduced from MIT Prof. Richard Lindzen's presentation to the British Parliament -- global mean temperature (GMT) is approx. 0ºC ≈ 273 K; I have added the Kelvin scale in red at the left of Figure 2 for proper comparison. When looking at the original plot of GMT anomalies in Celsius in Figure 2, one can't help but get the false impression of GMT oscillating with large percentage swings around a mean value somewhere near zero. It should be no surprise, with oceans consisting substantially of ice-water, that the GMT is circa 0ºC -- arbitrarily defined as the temperature of ice-water. Properly replotted in Figure 3, these small fluctuations of order 0.1 K oscillating around a value of 273 K look like a very straight line -- hardly alarming
at all!
Figure 2: Hadley CRUT3 global temperature anomaly. Reproduced from Prof. Lindzen's presentation to House of Commons, Slide 11 of 58 -- with superimposed absolute temperature in red on left added for comparison.
Figure 3: Hadley CRUT3 global temperature anomaly data from Fig. 2, replotted on correct thermodynamic temperature scale. Temperature changes of concern for "Global Warming" are, in fact, almost
indiscernible on this proper scale. GMT in degrees Kelvin.
Prof. Lindzen reports that over the last 150 years (since the Industrial Revolution), shift in GMT is about +0.8 °C = +0.8 K. Expressed as a percentage shift, this is only +0.8 K/273 K = +0.29%.
Although they admit that it is highly unlikely, the U.N. IPCC predicts at the very most a GMT rise of +5 K, i.e., +5 K / 273 K = +1.8%. Heaven forfend!
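The percentage figures quoted above follow directly from the Celsius-to-Kelvin shift; here is a minimal sketch of that arithmetic (the class and method names are illustrative, not from any library):

```java
public class KelvinShift {
    // Convert a Celsius reading to kelvin by shifting the zero point.
    static double toKelvin(double celsius) {
        return celsius + 273.15;
    }

    // Percentage change of an absolute temperature tempK under a shift of shiftK kelvin.
    static double percentShift(double tempK, double shiftK) {
        return 100.0 * shiftK / tempK;
    }

    public static void main(String[] args) {
        double gmt = toKelvin(0.0);                            // GMT is roughly 0 C = 273.15 K
        System.out.printf("%.2f%%%n", percentShift(gmt, 0.8)); // ~0.29% shift since ~1860
        System.out.printf("%.2f%%%n", percentShift(gmt, 5.0)); // ~1.83% for the worst-case rise
    }
}
```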
^1A mathematical simplification akin to the one that occurs in calculating planetary orbits when Copernicus correctly placed the zero-point (center) of the solar system at the sun.
2500k help
January 4, 2013 9:44:26 PM
H100i push/pull
1866 G.Skill
I was overclocking on my H100i that I got a few hours ago. I was stable with temps between 57-60°C at a 46 multiplier after 5 minutes. I went and tried for a 47 multiplier and my system froze and bluescreened; I didn't get to read what it was, and I now have my CPU back at 4GHz. If my temps were stable and my Prime95 tests went error-free, could that 100MHz really have caused an error, or do I need to change something else? I read somewhere about memory controllers not being able to keep up. Please, anyone, even a comment if not a solution. I was really hoping to do some serious overclocking now that I got the H100i.
January 5, 2013 1:51:25 AM
2500k memory controller is supposed to max out at 1333 IIRC. (I have heard it's no big deal to run 1600, but I can't see the difference in every day use, so I don't bother)
1866, has got to be pushing your luck. What voltage are you running on the RAM? (That's also an area of concern with sandy bridge)
January 5, 2013 4:33:46 AM
1333 is the stock speed and will usually max at ~2133 for SB. But RAM speed really doesn't make much difference for SB. Most people get 1600 simply because it's usually the same price as 1333. RAM speed doesn't matter when OCing since RAM speed and CPU speed are not connected for SB. You never stated your vcore, but you can be unstable at any speed depending on the voltage. Also, 5 minutes is not going to test stability. You have been told this before.
Btw, you can look up the BSOD in your reliability history, but it is obvious the OC did it. 100MHz is plenty to make it unstable.
January 5, 2013 4:34:14 PM
At 46x and 4.59GHz the vcore is 1.27-1.3V on auto and temps are all under 60°C right now; I'm doing a few-hour stability test. It should be noted that my CPU frequency varies between 4490 and 4590.3MHz. The multiplier is at 46 and the bus ratio 99.8, but this also changes to 99.6 here and there while testing, and my multiplier goes to 45 and back to 46 on its own.
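The frequency swings described here are just the product of the base clock (BCLK) and the multiplier; a quick sketch of that arithmetic, with illustrative names and the values from the post:

```java
public class CoreClock {
    // Effective core frequency in MHz = base clock (BCLK) * CPU multiplier.
    static double coreMhz(double bclkMhz, int multiplier) {
        return bclkMhz * multiplier;
    }

    public static void main(String[] args) {
        // BCLK drifting between 99.6 and 99.8 MHz, plus the multiplier dropping
        // from 46 to 45, reproduces the observed 4490-4590 MHz range.
        System.out.printf("46 x 99.8 = %.1f MHz%n", coreMhz(99.8, 46)); // ~4590.8
        System.out.printf("46 x 99.6 = %.1f MHz%n", coreMhz(99.6, 46)); // ~4581.6
        System.out.printf("45 x 99.8 = %.1f MHz%n", coreMhz(99.8, 45)); // ~4491.0
    }
}
```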
January 5, 2013 7:11:59 PM
RAM speed is irrelevant. Don't use auto vcore; that is the issue. It won't know what voltage you need. I'd suggest offset. The CPU frequency fluctuating has already been explained to you; it's normal. The ratio changing can be stopped by changing the long duration, but it's just another normal thing. You can do short tests just trying to find the right speed, but once you get where you want, do it.
4.7 can be done on my 212+. I just went straight to 1.35V and saw what's the highest I can get, and that is what I got. I just fine-tuned it from there. Temps were still fine and it saves time. There is nothing to worry about unless you jack up your voltage.
January 10, 2013 1:53:48 PM
OK, so now that I'm at offset voltage, I was stable in Prime95 with a 46 multiplier. I had the voltage at -0.025 and was stable in Prime95 with good temps and all, but then I was playing BF3 and got a 0x124 BSOD, so I upped the voltage to -0.020 and my PC was fine till it reset itself. I wasn't around to see the code, so I upped it to -0.015, and now I wake up to another reset, so I upped it again to -0.010. The offset voltage was set on stage 5; I just changed it to stage 4. I also still have PLL overvoltage disabled. Any help would be much appreciated.
January 10, 2013 6:02:27 PM
You can tell me an offset, but that is useless as it doesn't tell me the vcore. The base offset value is different for different mobos, so -0.05 can give me 1.25 but it can be 1.15 for you, and that is a big difference. LLC would also change the vcore. Offset doesn't have stages; I assume you meant LLC level. Did you read an OC guide so you know what everything is?
January 10, 2013 7:26:07 PM
As of now I am still stable in games, but I'm going to wait till tonight to run Prime95. In my BIOS, stage 4 was for the vdroop; I think it being at stage 5 dropped the voltage too low, as well as me having my vcore too low. Today, throughout 2-3 hours of gaming and normal use, the voltage minimum was 0.94 and the max 1.31, averaging around 0.95-0.97 at idle and 1.26-1.29 while gaming. Idle temp lows are 23, 25, 22, 26 with a package of 27, and idle averages range from that up to 30; the maxes while gaming are 48, 50, 50, 51 and a 51 package. I'll have more info from a Prime95 test tomorrow morning, but I have a feeling it was my vdroop going too low, because it only reset once after I upped the offset back to -0.015, and after that I upped it to -0.010 and went from stage 5 vdroop to stage 4, which from the motherboard's graph doesn't drop the voltage as low at idle. Also to be noted: until I upped the voltage I was getting slight shudders while watching videos, although I can't be sure if it was the PC or the website I was watching them on, but when I went back to re-watch the part it didn't shudder the second time.
Building elliptic curves into a family
Suppose $E/ \mathbb{Q}$ is an elliptic curve whose Mordell-Weil group $E(\mathbb{Q})$ has rank r. When can we realize E as a fiber of an elliptic surface $S\to C$ fibered over some curve, with
everything defined over $\mathbb{Q}$, such that the group of $\mathbb{Q}$-rational sections of $S$ has rank at least r?
Edit: Let me also demand that the resulting family is not isotrivial, i.e. the j-invariants of the fibers are not all equal.
nt.number-theory elliptic-curves ag.algebraic-geometry
2 Answers
If you require $C = P^1$ then it's probably not possible except for very small values of $r$. If you don't care about $C$, then here is something that might work.
Suppose $E$ is given by $y^2=x^3+ax+b$ and $P_i=(x_i,y_i), i=1,\ldots,r$ is a basis for the Mordell-Weil group. Let $C$ be the curve given by the system of equations $u_i^2 = (t^i+x_i)^3 + a(t^i+x_i) + b + t$, $i=1,\ldots,r$, in $t,u_1,\ldots,u_r$, and let $S$ be the family $y^2 = x^3 + ax + b + t$ pulled back to $C$. So above $t=0$, $C$ has a point with $u_i=y_i$, and the fiber of $S$ above this point is $E$. Also, $C$ is defined so that there are sections of $S$ with $x$-coordinate $x=t^i+x_i$, and I bet they are independent. Finally, the family is non-isotrivial if $a \ne 0$. If $a=0$, adjust the construction in an obvious way.
Evidence that it is not known to always be possible over $\mathbb{P}^1_{\mathbb{Q}}$: If you look at the records of the elliptic curves with high rank, they are broken down into three categories:
1. Elliptic curves over $\mathbb{Q}$.
2. Nonisotrivial curves over $E$, where $E$ is an elliptic curve over $\mathbb{Q}$ with $|E(\mathbb{Q})|$ infinite.
3. Nonisotrivial curves over $\mathbb{P}^1_{\mathbb{Q}}$.
If we could always deform, these records would be the same. In fact, according to the tables here and here, the highest known rank of type 1 is 28, of type 2 is 19 and of type 3
is 18.
I do not know whether there are examples where it is known that such a deformation is impossible.
Please, anybody, help me
How to calculate the bit rate and the bit depth of audio?
Determine your limit, then determine the length of audio in seconds. Multiply the depth by the sampling rate and multiply that by the number of channels. Multiply the audio bit-rate by the length of
the audio. Subtract the number from the maximum number. Then divide the number by the length of the audio. This gives you the bit-rate.
Thanks very much, but I ask if there is a method in Java to calculate it?
None that I know of. You can try to create your own class to calculate it. I found an algorithm to calculate the time remaining, if that's of any help.

Code java:
// Common algorithm to calculate remaining time
seconds_elapsed = current_time - start_time
seconds_per_unit = seconds_elapsed / units_processed
units_left = total_units - units_processed
seconds_remaining = units_left / seconds_per_unit
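There is no built-in Java method for this, but the arithmetic from the earlier reply (bit depth x sample rate x channels, for uncompressed PCM audio) is easy to sketch yourself; the class and method names here are made up for illustration:

```java
public class AudioBitRate {
    // Uncompressed PCM bit rate in bits per second = bit depth * sample rate * channels.
    static long bitRate(int bitDepth, int sampleRate, int channels) {
        return (long) bitDepth * sampleRate * channels;
    }

    // Total size in bytes of `seconds` of audio at that bit rate (8 bits per byte).
    static long sizeBytes(int bitDepth, int sampleRate, int channels, double seconds) {
        return (long) (bitRate(bitDepth, sampleRate, channels) * seconds / 8.0);
    }

    public static void main(String[] args) {
        // CD audio: 16-bit depth, 44.1 kHz sample rate, stereo.
        System.out.println(bitRate(16, 44100, 2));         // 1411200 bits/s
        System.out.println(sizeBytes(16, 44100, 2, 60.0)); // 10584000 bytes per minute
    }
}
```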
Finding the constant coefficients in a simple linear model
November 25th 2012, 06:11 PM #1
Let $X \sim \mathrm{Exp}(1)$, $Y = e^{-X}$, and consider the simple linear model $Y = \alpha + \beta X + \gamma X^2 + W$ with $E(W) = 0 = \rho(X, W) = \rho(X^2, W)$.
I need to evaluate the constants alpha, beta, and gamma using the information given above. I'm wondering if I am doing this correctly. I showed that $Y \sim \mathrm{Unif}(0,1)$, which made the calculations a bit easier; then I found E(Y), Cov(X,Y), and Cov(X^2,Y), got 3 equations, and solved for the 3 unknowns.
Pic of my work: http://i.imgur.com/HHlMl.jpg
Moment calculations: http://i.imgur.com/O0m0h.jpg
I can't figure out where I went wrong; the next part of the question involves calculating $\sigma(W)/\sigma(Y)$, and using my calculated coefficients I am getting a negative number for $\sigma(W)$.
Can someone point out where I'm making a mistake and/or how to do part c?
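For reference, here is one way the three moment equations can be set up. This is only a sketch, using the standard Exp(1) moments $E(X^k) = k!$ (so $E(X) = 1$, $E(X^2) = 2$, $E(X^3) = 6$, $E(X^4) = 24$), and the arithmetic should be checked against the linked work rather than taken as authoritative.

```latex
% Taking expectations and covariances of Y = \alpha + \beta X + \gamma X^2 + W,
% using E(W) = 0 and Cov(X,W) = Cov(X^2,W) = 0:
\begin{align*}
E(Y) &= \alpha + \beta E(X) + \gamma E(X^2)
  &&\Longrightarrow& \tfrac{1}{2} &= \alpha + \beta + 2\gamma,\\
\operatorname{Cov}(X,Y) &= \beta \operatorname{Var}(X) + \gamma \operatorname{Cov}(X,X^2)
  &&\Longrightarrow& -\tfrac{1}{4} &= \beta + 4\gamma,\\
\operatorname{Cov}(X^2,Y) &= \beta \operatorname{Cov}(X,X^2) + \gamma \operatorname{Var}(X^2)
  &&\Longrightarrow& -\tfrac{3}{4} &= 4\beta + 20\gamma,
\end{align*}
% where E(Y) = \int_0^\infty e^{-2x}\,dx = 1/2,
% Cov(X,Y)   = E(X e^{-X})   - E(X)E(Y)   = 1/4 - 1/2 = -1/4,
% Cov(X^2,Y) = E(X^2 e^{-X}) - E(X^2)E(Y) = 1/4 - 1   = -3/4,
% Cov(X,X^2) = E(X^3) - E(X)E(X^2) = 6 - 2 = 4,
% Var(X^2)   = E(X^4) - E(X^2)^2   = 24 - 4 = 20.
```

Solving this system gives $\gamma = 1/16$, $\beta = -1/2$, $\alpha = 7/8$. With these values, $\sigma^2(W) = \mathrm{Var}(Y) - \mathrm{Var}(\beta X + \gamma X^2) = 1/12 - 5/64 = 1/192 > 0$, so if a negative value comes out, the slip is likely in one of the moment calculations rather than in the overall approach.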
| {"url":"http://mathhelpforum.com/advanced-statistics/208413-finding-constant-coefficients-simple-linear-model.html","timestamp":"2014-04-19T03:45:46Z","content_type":null,"content_length":"31284","record_id":"<urn:uuid:f6f98162-61c0-404a-9c79-2fc78a4f3873>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Space-efficient planar convex hull algorithms
"... We present space-efficient algorithms for computing the convex hull of a simple polygonal line in-place, in linear time. It turns out that the problem is as hard as stable partition, i.e., if
there were a truly simple solution then stable partition would also have a truly simple solution, and vice v ..."
Cited by 15 (2 self)
Add to MetaCart
We present space-efficient algorithms for computing the convex hull of a simple polygonal line in-place, in linear time. It turns out that the problem is as hard as stable partition, i.e., if there
were a truly simple solution then stable partition would also have a truly simple solution, and vice versa. Nevertheless, we present a simple self-contained solution that uses O(log n) space, and
indicate how to improve it to O(1) space with the same techniques used for stable partition. If the points inside the convex hull can be discarded, then there is a truly simple solution that uses a
single call to stable partition, and even that call can be spared if only extreme points are desired (and not their order). If the polygonal line is closed, then the problem admits a very simple
solution which does not call for stable partitioning at all.
, 2007
"... We present a space-efficient algorithm for reporting all k intersections induced by a set of n line segments in the plane. Our algorithm is an in-place variant of Balaban’s algorithm and, in the
worst case, runs in O(n log² n + k) time using O(1) extra words of memory in addition to the space used f ..."
Cited by 8 (2 self)
Add to MetaCart
We present a space-efficient algorithm for reporting all k intersections induced by a set of n line segments in the plane. Our algorithm is an in-place variant of Balaban’s algorithm and, in the
worst case, runs in O(n log² n + k) time using O(1) extra words of memory in addition to the space used for the input to the algorithm.
- In: Proceedings of the 10th Scandinavian Workshop on Algorithm Theory (SWAT ’06 , 2006
"... Abstract. We describe space-efficient algorithms for solving problems related to finding maxima among points in two and three dimensions. Our algorithms run in optimal O(n log n) time and occupy
only constant extra space in addition to the space needed for representing the input. 1 ..."
Cited by 6 (1 self)
Add to MetaCart
Abstract. We describe space-efficient algorithms for solving problems related to finding maxima among points in two and three dimensions. Our algorithms run in optimal O(n log n) time and occupy only
constant extra space in addition to the space needed for representing the input.
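To make the 2-d maxima problem in the abstract above concrete: a point is maximal if no other point has both a larger x and a larger y. The cited papers achieve O(n log n) time with only O(1) extra space; the sketch below shows just the textbook (not in-place) sweep, for illustration:

```java
import java.util.*;

public class Maxima2D {
    // Textbook O(n log n) maxima sweep: sort by x descending and keep every
    // point whose y exceeds the best y seen so far. The in-place algorithms
    // cited above reach the same bound using O(1) extra space.
    static List<int[]> maxima(int[][] pts) {
        int[][] sorted = pts.clone();
        Arrays.sort(sorted, (a, b) -> b[0] - a[0]);   // x descending
        List<int[]> result = new ArrayList<>();
        int bestY = Integer.MIN_VALUE;
        for (int[] p : sorted) {
            if (p[1] > bestY) { result.add(p); bestY = p[1]; }
        }
        return result;
    }

    public static void main(String[] args) {
        int[][] pts = {{1, 4}, {2, 2}, {3, 3}, {4, 1}};
        // Maximal points here: (4,1), (3,3), (1,4).
        for (int[] p : maxima(pts)) System.out.println(p[0] + "," + p[1]);
    }
}
```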
- Proc. Int’l Workshop Computational Geometry and Applications , 2004
"... Abstract. Finding the fastest algorithm to solve a problem is one of the main issues in Computational Geometry. Focusing only on worst case analysis or asymptotic computations leads to the
development of complex data structures or hard to implement algorithms. Randomized algorithms appear in this sc ..."
Cited by 5 (4 self)
Add to MetaCart
Abstract. Finding the fastest algorithm to solve a problem is one of the main issues in Computational Geometry. Focusing only on worst case analysis or asymptotic computations leads to the
development of complex data structures or hard to implement algorithms. Randomized algorithms appear in this scenario as a very useful tool in order to obtain easier implementations within a good
expected time bound. However, parallel implementations of these algorithms are hard to develop and require an in-depth understanding of the language, the compiler and the underlying parallel computer
architecture. In this paper we show how we can use speculative parallelization techniques to execute in parallel iterative algorithms such as randomized incremental constructions. In this paper we
focus on the convex hull problem, and show that, using our speculative parallelization engine, the sequential algorithm can be automatically executed in parallel, obtaining speedups with as few as four processors, and reaching a 5.15x speedup with 28 processors.
, 2005
"... We give space-efficient geometric algorithms for three related problems. Given a set of n axis-aligned rectangles in the plane, we calculate the area covered by the union of these rectangles
(Klee’s measure problem) in O(n^{3/2} log n) time with O(√n) extra space. If the input can be destroyed and the ..."
Cited by 5 (0 self)
Add to MetaCart
We give space-efficient geometric algorithms for three related problems. Given a set of n axis-aligned rectangles in the plane, we calculate the area covered by the union of these rectangles (Klee’s
measure problem) in O(n^{3/2} log n) time with O(√n) extra space. If the input can be destroyed and there are no degenerate cases and input coordinates are all integers, we can solve Klee’s measure
problem in O(n log² n) time with O(log² n) extra space. Given a set of n points in the plane, we find the axis-aligned unit square that covers the maximum number of points in O(n log³ n) time with O
(log² n) extra space.
, 2011
"... A constant-workspace algorithm has read-only access to an input array and may use only O(1) additional words of O(log n) bits, where n is the size of the input. We show that we can find a
triangulation of a plane straight-line graph with n vertices in O(n²) time. We also consider preprocessing a sim ..."
Cited by 2 (2 self)
Add to MetaCart
A constant-workspace algorithm has read-only access to an input array and may use only O(1) additional words of O(log n) bits, where n is the size of the input. We show that we can find a
triangulation of a plane straight-line graph with n vertices in O(n²) time. We also consider preprocessing a simple n-gon, which is given by the ordered sequence of its vertices, for shortest path
queries when the space constraint is relaxed to allow s words of working space. After a preprocessing of O(n²) time, we are able to solve shortest path queries between any two points inside the
polygon in O(n²/s) time.
, 2007
"... Abstract We revisit a classic problem in computational geometry: preprocessing a planar n-point set to answer nearest neighbor queries. In SoCG 2004, Brönnimann, Chan, and Chen showed that it
is possible to design an efficient data structure that takes no extra space at all other than the inpu ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract We revisit a classic problem in computational geometry: preprocessing a planar n-point set to answer nearest neighbor queries. In SoCG 2004, Brönnimann, Chan, and Chen showed that it is possible to design an efficient data structure that takes no extra space at all other than the input array holding a permutation of the points. The best query time known for such "in-place data structures" is O(log² n). In this paper, we break the O(log² n) barrier by providing a method that answers nearest neighbor queries in time O((log n)^{3/2} ...)
"... Abstract. Slope selection is a well-known algorithmic tool used in the context of computing robust estimators for fitting a line to a collection P of n points in the plane. We demonstrate that
it is possible to perform slope selection in expected O(n log n) time using only constant extra space in ad ..."
Add to MetaCart
Abstract. Slope selection is a well-known algorithmic tool used in the context of computing robust estimators for fitting a line to a collection P of n points in the plane. We demonstrate that it is
possible to perform slope selection in expected O(n log n) time using only constant extra space in addition to the space needed for representing the input. Our solution is based upon a
space-efficient variant of Matoušek’s randomized interpolation search, and we believe that the techniques developed in this paper will prove helpful in the design of space-efficient randomized algorithms using samples. To underline this, we also sketch how to compute the repeated median line estimator in an in-place setting.
"... In this paper, we consider the problem of designing in-place algorithms for computing the maximum area empty rectangle of arbitrary orientation among a set of points in 2D, and the maximum
volume empty axis-parallel cuboid among a set of points in 3D. If n points are given in an array of size n, the ..."
Add to MetaCart
In this paper, we consider the problem of designing in-place algorithms for computing the maximum area empty rectangle of arbitrary orientation among a set of points in 2D, and the maximum volume empty axis-parallel cuboid among a set of points in 3D. If n points are given in an array of size n, the worst case time complexity of our proposed algorithms for both the problems is O(n³); both the algorithms use O(1) extra space in addition to the array containing the input points.
"... One of the classic data structures for storing point sets in R² is the priority search tree, introduced by McCreight in 1985. We show that this data structure can be made in-place, i.e., it can
be stored in an array such that each entry stores only one point of the point set and no entry is stored ..."
Add to MetaCart
One of the classic data structures for storing point sets in R² is the priority search tree, introduced by McCreight in 1985. We show that this data structure can be made in-place, i.e., it can be stored in an array such that each entry stores only one point of the point set and no entry is stored in more than one location of that array. It combines a binary search tree with a heap. We show that all the standard query operations can be answered within the same time bounds as for the original priority search tree, while using only O(1) extra space. We introduce the min-max priority search tree, which is a combination of a binary search tree and a min-max heap. We show that all the standard queries which can be done in two separate versions of a priority search tree can be done with a single min-max priority search tree. As an application, we present an in-place algorithm to enumerate all maximal empty axis-parallel rectangles amongst points in a rectangular region R in R²
in O(m log n) time with O(1) extra-space, where m is the total number of maximal empty rectangles. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=778000","timestamp":"2014-04-16T10:12:36Z","content_type":null,"content_length":"35789","record_id":"<urn:uuid:04550e20-616b-451d-acf6-3398b822e20c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00407-ip-10-147-4-33.ec2.internal.warc.gz"} |
PhD Candidates working with Dr. Nicolosi
Cybersecurity has transformed into an everyday reality for all users of Internet-connected devices. Joining wireless networks, shopping or paying bills online, logging into password-protected Web
accounts, and toting always-connected mobile devices present a constant security challenge. Cisco estimates that 1 trillion unique devices will be connected to the Internet by the year 2013,
amplifying issues of information security and reliability.
Researchers at Stevens Institute of Technology are planning for an even bigger challenge to cryptography and cybersecurity. While scientists and engineers worldwide are working intently to make the
dream of quantum computing a reality, information security managers are watching the horizon with some apprehension. The overwhelming computational abilities accessible upon the arrival of quantum computers would quickly unravel the most advanced public key encryption available today, thus rendering current cryptographic standards obsolete. If quantum computers appeared as a viable technology
tomorrow, there would be precious little alternative and acceptable means for securing our online and wireless transactions.
In the meantime, cybersecurity is becoming even more critical to our future. The White House issued its Cyberspace Policy Review in 2009, declaring cybersecurity a matter of national public safety
and a priority for the current administration.
“Information technology is an established component of the infrastructure of modern societies, and cryptography is a cornerstone of information security,” says Dr. Antonio Nicolosi, professor of
Computer Science at Stevens.
Experts predict another twenty years before the advent of quantum computing. However, the magnitude of these concerns is enough for researchers across the world to devote millions to the study of new
methods for online encryption and the hiding of messages, known as Post-Quantum Cryptography.
Traditional Vs. Post-Quantum Cryptography
Cryptography is the practice and study of protecting information and computation processes. This fundamental computing discipline is the backbone of the security measures that convert your message into a secure format that can reach its destination without being read by an eavesdropping party.
Image courtesy NIST
Our most widely implemented use of Traditional Cryptography is a public-key encryption algorithm known as RSA. To use RSA, a user creates and publishes a public key that is the product of two large
prime numbers, and keeps the prime factors secret. Anyone can use the public key to encode a message to that user, but only that user has the private key (based on the prime factors) which is
necessary for decoding. The security of RSA depends on the difficulty of recovering the prime factors when only the multiplicative product is known.
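The key-generation and encryption steps described above can be illustrated with deliberately tiny primes, a classic textbook example; real RSA moduli are 2048 bits or more, and no deployed system uses raw "textbook" RSA without padding:

```java
import java.math.BigInteger;

public class ToyRSA {
    // Textbook RSA on toy numbers: c = m^e mod n, m = c^d mod n.
    static long encrypt(long m, long e, long n) {
        return BigInteger.valueOf(m)
                .modPow(BigInteger.valueOf(e), BigInteger.valueOf(n)).longValue();
    }
    static long decrypt(long c, long d, long n) {
        return BigInteger.valueOf(c)
                .modPow(BigInteger.valueOf(d), BigInteger.valueOf(n)).longValue();
    }

    public static void main(String[] args) {
        long p = 61, q = 53;            // the secret prime factors
        long n = p * q;                 // public modulus: 3233
        long phi = (p - 1) * (q - 1);   // 3120
        long e = 17;                    // public exponent, coprime to phi
        long d = BigInteger.valueOf(e)  // private exponent: 2753
                 .modInverse(BigInteger.valueOf(phi)).longValue();

        long m = 65;                    // the message
        long c = encrypt(m, e, n);      // 2790
        System.out.println(c + " -> " + decrypt(c, d, n)); // decrypts back to 65
    }
}
```

The security rests on the fact that recovering p and q from n = 3233 is trivial here but believed intractable at real key sizes, at least for classical computers.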
Current cryptographic encryption methods are analogous to an arms race. Researchers add complexity to the algorithms that encode our messages while hackers attempt to break them. For now, researchers
are able to outpace the majority of these malicious attempts. However, quantum computing would compel the use of keys so long that most public key systems in use today would not be practical. This
necessitates a complete redesign of the algorithms involved in online encryption, and those efforts are known as Post-Quantum Cryptography.
Because the most complex encryption methods in existence today could be broken in "real time" on quantum computers, the challenge for researchers is to develop newly structured mathematical algorithms that maintain security in the presence of quantum computers.
“Each of the major public-key protocols used in e-commerce today depends on a procedure which is easy to do but difficult to undo,” says Dr. Robert Gilman of the Department of Mathematical Sciences
at Stevens. In RSA, for instance, it is easy to multiply two large primes, but very difficult to factor a large integer into a product of primes. Researchers have long thought factoring to be a
"hard" problem, but they have not proved it. “If it turns out that there is an easy way to factor, or in the case of quantum computing, if technology suddenly gains the power to solve hard problems
much more quickly, then RSA will not be secure.”
Public key systems are built from computational problems in algebra and number theory. For the system to be secure its corresponding computational problem should be virtually impossible to solve in
the absence of the secret key. However, none of the computational problems in use today have been proven to be “hard” in this sense. (And the computational problems corresponding to most public key
systems are known to be not hard for quantum computers.)
Setting up the challenge as an interdisciplinary pursuit, researchers at Stevens have effectively organized their skills by focusing on specific aspects of the greater problem. The methodologies
being investigated by individual researchers, such as the use of lattice theory and group theory, function as part of an all-encompassing approach to defining, planning, and executing a strategy for
complete cryptographic enhancement.
Dr. Miasnikov
Mathematics Department Director Alexei Miasnikov founded the Algebraic Cryptography Center (ACC) to investigate new techniques from computational algebra and their applications to practical problems
in cryptography and cryptanalysis. Post-quantum cryptography is one of the research themes being investigated by Alexei Miasnikov, Alex Myasnikov, Alexander Ushakov, Denis Serbin, Werner Backes, Sven Dietrich, Antonio Nicolosi, Susanne Wetzel, and Robert Gilman.
The ACC was established to facilitate interdisciplinary research on foundational problems of cryptography and cybersecurity in collaboration with associate members at other institutions throughout
the world. The center investigates unique applications of mathematical algorithms that may be used to facilitate security in quantum computing systems – an integral blending of both computer science
and mathematics in pursuit of a greater goal.
Dr. Miasnikov looks upon this emerging research with genuine enthusiasm. “Everything in the field of quantum computing and cryptographic mathematics is wide open. There remain fundamental
mathematical questions that affect the very nature of current and future cryptographic methods. These not only present challenging and exciting opportunities for researchers such as myself, but offer
an exceptional chance for students to participate in this groundbreaking field.”
Dr. Miasnikov’s sentiments were echoed by Professors Wetzel and Gilman who feel the programs at Stevens illustrate the importance of collaboration between Computer Science and Mathematical
disciplines to success in this field. According to Dr. Nicolosi, the pursuit of more secure cryptography is a great area for students at all levels who are interested in research. It has provided
undergraduate students with an opportunity to help code encryption systems. Meanwhile, graduate students with training in cryptography can try their hand at refining the techniques that researchers
are developing, or assessing their robustness by devising new approaches to the cryptoanalysis of the underlying computational problems.
The Search for Hard Problems
Dr. Gilman
The question of what makes a computational problem difficult is a focus of research for Mathematics Department colleagues Alex Myasnikov, Alexander Ushakov, and Robert Gilman. This work can be
classified as a pursuit within Complexity Theory.
In the face of a growing need for robust new encryption methods, there is a gap in the system that researchers must overcome. Beyond empirical demonstration, it is difficult to mathematically prove
that a security algorithm is “hard.” As researchers go beyond incremental improvements—such as boosting the integer size in an RSA key—and propose innovative departures from traditional cryptography,
increasingly there will be a demand for proof of a computational problem’s hardness.
The ideal problem for use in cryptography is difficult to solve for every possible key. However, that is impossible in practice, because at the most basic level, one can store answers for some keys
in memory. Therefore, researchers at the Algebraic Cryptography Center have been seeking problems with generic complexity—those that are “hard” to solve for most inputs. They are directing their
energies to the cases that occur "most of the time" rather than the "worst possible or average scenarios", because problems that have proven to be difficult for the latter cases have occasionally
turned out to be not “hard” overall.
An important point about security measures is that they are never foolproof. Current methods of encryption are based upon mathematical equations, the idea being that the more complex the equation, the longer it takes to decode. While security techniques are continuously advancing, so are the methods of intrusion, and breaches still occur.
“As we move forward, we have to be able to ask, ‘To what extent can we prove a crypto system is secure?’” Dr. Gilman states. “And we need to be able to find an answer.”
As a Professor of Mathematics at Stevens and leading member of the Algebraic Cryptography Center, Dr. Gilman is sparking conversation about how to examine cryptographic effectiveness and therefore
plan for the future.
Hard Problems in Group Theory
Dr. Nicolosi
According to Dr. Antonio Nicolosi of the Department of Computer Science at Stevens, there are many approaches to improving cryptography being investigated within Stevens and at institutions around
the world. He says, “Heterogeneity has an inherent strength to it, and diversifying the set of difficult problems used to encrypt our vital information enhances the robustness of the overall
structure of cryptography.” This diversity is highly beneficial for the future health of cryptography because, as Dr. Gilman noted, no security measure is foolproof; using a variety of difficult problems means that even if one is broken, the information secured by other methods would not be compromised.
Dr. Nicolosi, with collaborators at CCNY, aims to make unauthorized decryption of a message so difficult that even the power of a quantum computer would not help. They are pursuing a line of research
in post-quantum cryptography which looks at identifying “hard” group-theoretic learning problems. This approach is inspired by the recent success of the learning parity with noise (LPN) and learning
with errors (LWE) problems as a platform for a wide variety of cryptographic applications.
LPN and LWE can be thought of as a variation of the elementary problem of solving a system of linear equations, Ax=b (where x and b are vectors and A is a matrix), which can be solved with standard
techniques from linear algebra. The new ingredient in the LPN/LWE problems is the introduction of noise, which perturbs the system in a very slight and unpredictable manner. The issue then becomes
akin to approximately solving a noisy linear system Ax + v = b (where v is a mostly-zero vector, with a few "noise" entries). Using known methods, solving this problem is infeasible when the length of the vector x is on the order of a few hundred (much shorter than a typical key in popular cryptosystems like RSA).
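As a rough illustration (not Dr. Nicolosi's actual construction), here is what a single LPN-style sample over GF(2) looks like; the dimensions and noise rate below are toy values chosen only for readability, far below anything secure:

```java
import java.util.Random;

public class LpnSample {
    // One LPN sample: for a public random row a and secret s, the label is
    // b = <a, s> + noise (mod 2). With zero noise this is plain linear algebra;
    // the sparse unpredictable noise is what is conjectured to make s hard to
    // recover from many such samples.
    static int sample(int[] a, int[] s, int noiseBit) {
        int b = 0;
        for (int i = 0; i < s.length; i++) b ^= a[i] & s[i]; // inner product mod 2
        return b ^ noiseBit;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int nDim = 8;                               // toy; real LPN uses hundreds
        int[] s = new int[nDim];                    // the hidden secret
        for (int i = 0; i < nDim; i++) s[i] = rng.nextInt(2);
        int[] a = new int[nDim];                    // a public random row
        for (int i = 0; i < nDim; i++) a[i] = rng.nextInt(2);
        int noise = rng.nextInt(8) == 0 ? 1 : 0;    // noise rate roughly 1/8
        System.out.println(sample(a, s, noise));
    }
}
```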
Dr. Nicolosi’s group is taking an LWE approach into the realm of homomorphisms, or structure-preserving mappings between groups. This potentially yields “hard” problems where the computational task is
the recognition of noisy (preimage, image) samples for a hidden homomorphism, as opposed to random pairs of elements from the relevant domain and co-domain. Their working hypothesis is that the
problems obtained in this manner are even harder than the ones resulting from the LPN/LWE approach, and thus shorter keys are sufficient.
Lattice-based Systems
Dr. Wetzel
Lattice theory has also been proposed as a possible source of post-quantum cryptographic systems. Werner Backes, Antonio Nicolosi, and Susanne Wetzel are investigating various aspects of
lattice-based systems. Lattices are geometric objects that can be visualized as the set of intersection points of a regular but not necessarily orthogonal n-dimensional grid. They were first found to
be useful in breaking cryptosystems, but since certain lattice problems can also be very difficult to solve, researchers have found them to be promising in the design of secure cryptosystems.
Dr. Backes
According to Werner Backes, this work “not only provides effective tools for cryptanalysis, but it is also believed that it can even bring about new cryptographic primitives that exhibit strong security even in the presence of quantum computers.” Since joining Stevens as a Post-Doctoral Researcher in 2005, Dr. Backes has worked with Professor Wetzel in developing state-of-the-art algorithms that
have the potential of being used in a multi-core, many-core or distributed environment.
Developments in Post-Quantum Cryptography are indeed required to keep us secure well into the future. If researchers crack the code behind Post-Quantum Cryptography, these solutions could be immediately put to use in our current infrastructure. Any advancement discovered in the research phases would have as dramatic an impact today as it will twenty years into the future.
Interested in learning more and finding out how you can participate? Learn more about the Department of Mathematical Sciences or the Department of Computer Science, or apply at Undergraduate
Admissions or Graduate Admissions. | {"url":"http://research.stevens.edu/post-quantum-cybersecurity","timestamp":"2014-04-24T22:45:50Z","content_type":null,"content_length":"70369","record_id":"<urn:uuid:5ae4c55e-b408-4115-b250-efca56a2e456>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00355-ip-10-147-4-33.ec2.internal.warc.gz"} |